Research Scientist, Controlled 3D Generation
Location: Remote
About the Role
We're seeking a Research Scientist passionate about 3D generation, flow matching, and diffusion models. You'll help advance the frontier of controllable 3D content creation, building models that generate consistent, editable, and physically grounded 3D assets and scenes.
What Youll Do
- Conduct cutting-edge research on flow-matching, diffusion, and score-based methods for 3D generation and reconstruction.
- Design and implement scalable training pipelines for controllable 3D generation (meshes, Gaussians, NeRFs, voxels, implicit fields).
- Develop techniques for conditioning and control (text, sketch, pose, camera, physics) and multi-view consistency.
- Analyse model behaviour through ablations, visualisations, and quantitative metrics.
- Collaborate with cross-disciplinary research, graphics, and infrastructure teams to translate research into production-ready systems.
- Publish results at top-tier venues and work with interns.
What You Bring
- PhD (or equivalent experience) in Machine Learning, Computer Vision, or Computer Graphics.
- Published work on diffusion, flow-matching, or score-based generative models (2D or 3D).
- Strong engineering and problem-solving abilities: experience with PyTorch, JAX, or CUDA-level optimisation.
- Understanding of 3D representations (meshes, Gaussians, signed-distance fields, volumetric grids, implicit networks).
- Solid grasp of geometry processing, multi-view consistency, and differentiable rendering.
- Ability to scale experiments efficiently and communicate complex results clearly.
Bonus / Preferred
- Experience generating coherent 3D scenes with multiple interacting objects, lighting, and spatial layout.
- Familiarity with scene-level control (object placement, camera path simulation, or text-to-scene composition).
- Knowledge of video-to-3D, image-to-scene, or 4D temporal generation.
- Background in physically-based rendering, simulation, or world-model architectures.
- Track record of impactful publications or open-source releases.
Equal Employment Opportunity:
We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability, or other legally protected statuses.