Location: Seattle, WA or Palo Alto, CA (Hybrid/Remote)
Type: Full-time, Senior Technical Leadership
About the Team
Centific's Physical AI Lab is building the next generation of embodied intelligence at the intersection of multimodal foundation models, simulation, agentic AI, and real-world robotics. Our mission is to move from perception and reasoning to robust real-world action across safety, industrial, healthcare, warehouse, autonomous-systems, and smart-environment use cases.
We are looking for a Research Leader with deep experience building foundation models, a strong publication record, and the ability to translate frontier research into deployable systems and long-term IP.
The Role
As Lead Research Scientist, you will define and drive the research agenda in Vision AI, multimodal foundation models, simulation-first learning, agentic AI, and embodied intelligence. You will lead a small team of researchers, engineers, and interns while contributing directly to model design, large-scale training, benchmarking, and external scientific visibility.
This role is for someone who has gone beyond applying existing models and has materially advanced architectures, training methods, datasets, or evaluation frameworks in AI, robotics, vision, autonomous driving, or multimodal learning.
What You'll Do
Lead high-impact research in multimodal foundation models, world models, embodied AI, vision-language-action systems, and agentic AI.
Develop new approaches for perception, temporal reasoning, spatial intelligence, affordance understanding, autonomous decision-making, and sim2real transfer.
Advance challenging robotics capabilities, including dexterous manipulation, contact-rich interaction, bimanual coordination, long-horizon task execution, navigation in dynamic environments, and robust action under uncertainty.
Contribute to large-scale model building, including multimodal pretraining, distributed training, fine-tuning, distillation, and evaluation of models for vision, robotics, and autonomous systems.
Help shape research relevant to autonomous driving and mobile autonomy, including scene understanding, multimodal sensor reasoning, planning-aware perception, and edge-case robustness.
Guide integration of research with simulation and digital twin platforms such as Isaac Sim, Isaac Lab, MuJoCo, Omniverse, or related environments.
Establish rigorous benchmarks and reproducible evaluation frameworks for robustness, safety, generalization, manipulation success, policy performance, and real-world deployment readiness.
Mentor Ph.D. interns and engineers, and help build a strong research culture grounded in rigor, speed, originality, and scientific excellence.
Minimum Qualifications
Ph.D. in Computer Science, Robotics, Machine Learning, Computer Vision, Autonomous Systems, or a related field.
Strong publication record in top venues such as CVPR, ICCV, ECCV, NeurIPS, ICLR, ICML, CoRL, RSS, or leading autonomous driving/robotics venues.
5 years of research experience in academia, industry, or advanced R&D environments.
Demonstrated experience building or advancing large-scale foundation models, novel architectures, or training methods in multimodal AI, vision, robotics, autonomous driving, embodied AI, world models, or simulation-based learning.
Deep expertise in PyTorch and/or JAX, GPU training, distributed experimentation, and large-scale model development.
Proven ability to lead ambitious technical programs and mentor junior researchers.
Preferred Qualifications
Publications or patents in multimodal foundation models, dexterous robotics, autonomous driving, spatial intelligence, simulation-based learning, manipulation, or embodied AI.
Strong experience in Vision AI, including perception, tracking, grounding, 3D scene understanding, video understanding, sensor fusion, or multimodal reasoning.
Familiarity with agentic AI systems, tool-using agents, planning frameworks, and memory-based architectures; experience with agentic memory, knowledge graphs, or long-horizon reasoning systems is a plus.
Experience with Isaac Sim, MuJoCo, OpenUSD/Omniverse, Open3D, PyTorch3D, NeRF/3DGS, or related simulation and 3D stacks.
Familiarity with imitation learning, reinforcement learning, planning, MPC control, teleoperation, data pipelines, or policy learning for robotics and autonomous systems.
Experience with Ray, Kubernetes, Triton, TensorRT, Docker, W&B, or large-scale training and deployment infrastructure.
Background in trustworthy AI, robotics safety evaluation, or explainability for autonomous systems.
What Success Looks Like
Publishable, reproducible, and deployable research that strengthens Centific's Physical AI portfolio.
New technical IP in multimodal AI, simulation, dexterous robotics, autonomous systems, and embodied intelligence.
Strong mentorship and research leadership across a growing team.
Demonstrable impact on model robustness, large-scale training capability, sim2real performance, manipulation, and real-world deployability.
Our Stack
Modeling: PyTorch, JAX, Hugging Face, xFormers
Simulation: Isaac Sim, Isaac Lab, MuJoCo, OpenUSD, Omniverse, Open3D
Systems: Python, Ray, FastAPI, Docker, Kubernetes, Triton, TensorRT
Multimodal AI: CLIP, SAM, VLMs, world models, vision-language-action architectures, agent frameworks