Principal Research Scientist – Scaling
San Francisco, CA - USA
Job Summary
Principal Research Scientist – Scaling
P-1227
About Databricks AI
At Databricks we are obsessed with enabling data teams to solve the world's toughest problems, from security threat detection to cancer drug development, by building and running the world's best data and AI platform. The Databricks AI Research organization enables companies to develop AI models and systems using their own data, from pre-training LLMs from scratch to state-of-the-art retrieval-augmented generation, by producing novel science and putting it into production.
We believe a company's AI models are a core part of their IP and that high-quality AI models should be available to all.
About the Scaling Research Team
The Databricks AI Scaling team focuses on pushing the boundaries of large language model (LLM) training and inference efficiency beyond what is required to support existing models. The team explores novel avenues for scaling and efficiency improvements across algorithms, systems, and infrastructure, requiring researchers who can both drive independent research agendas and dive deep into low-level implementation details with engineering partners.
Role Summary
As a Principal Research Scientist, Scaling, you will lead a team of world-class researchers and engineers to advance the state of the art in large-scale machine learning, focusing on post-training, RL, and inference efficiency, optimization, and scaling. You will define and execute a research roadmap that advances the Databricks AI platform and delivers tangible improvements to how customers train, serve, and adapt LLMs at scale, working closely with product, data, and engineering leaders to bring cutting-edge methods into production.
The Impact You Will Have
- Lead and grow a multidisciplinary research team focused on foundational and applied AI problems, with a particular emphasis on LLM scaling, efficiency, and systems performance.
- Define the scaling research roadmap in alignment with Databricks' strategic objectives, prioritizing advances in foundation model efficiency and large-scale training and inference.
- Drive algorithmic innovations for large-scale neural network training and inference, including novel optimizers, low-precision techniques, and model adaptation methods, and guide your team in rigorous empirical validation against state-of-the-art approaches.
- Optimize end-to-end ML systems for distributed training and RL, memory efficiency, and compute efficiency through close collaboration with core systems and platform teams, ensuring that research ideas translate into performant, reliable infrastructure.
- Partner with product and engineering to translate research breakthroughs, especially around scaling and efficiency, into customer-impacting capabilities in the Databricks AI platform.
- Foster a culture of scientific excellence and openness, including high-quality research practices, reproducible experimentation, and effective internal knowledge sharing across Databricks AI.
- Represent Databricks AI research externally through top-tier publications, conference talks, and collaborations with academia and the open-source community, with a focus on optimization and efficiency for large-scale models.
- Mentor and develop talent, providing both technical guidance (research agendas, experimentation, implementation) and career development support for research scientists and engineers.
What You Will Do
- Define and lead independent research programs on foundation model efficiency, covering topics such as optimizer design, low-precision training/inference, scalable model architectures, and efficient adaptation methods.
- Oversee the design and execution of large-scale experiments, including benchmarking against state-of-the-art methods and evaluating trade-offs in quality, latency, throughput, and cost.
- Work hands-on with your team on high-quality, efficient code in Python and PyTorch for research implementation, rapid prototyping, and integration with Databricks production systems.
- Collaborate with distributed systems and infrastructure teams to push the limits of distributed training, parallelism strategies, memory management, and hardware utilization for LLMs and other large models.
- Establish metrics, evaluation protocols, and best practices for scaling-focused research (e.g., training efficiency, inference cost, energy usage) and drive their adoption across Databricks AI.
- Champion responsible and robust deployment of scaling innovations, ensuring that model behavior, reliability, and safety remain first-class considerations.
What We Look For
- Proven ability to lead a research team in developing novel techniques for foundation model efficiency and related topics, with a strong track record of industry impact.
- Deep expertise in at least one of: generative AI, LLMs, distributed ML systems, model optimization, or responsible AI, with a strong emphasis on scaling and efficiency for large-scale neural networks.
- Hands-on leadership: strong programming skills and a demonstrated ability to write high-quality, efficient code in Python and PyTorch for research implementation and experimentation.
- Demonstrated ability to translate research innovation into scalable product capabilities in partnership with product and engineering teams.
- Excellent communication, leadership, and stakeholder management skills, with experience influencing cross-functional roadmaps and aligning research with business impact.
Nice to Have
- Prior work at the intersection of systems and ML, such as distributed training frameworks, compiler and kernel optimization for deep learning workloads, or memory/compute-efficient model design.
- Strong industry and academic network in large-scale ML, with ongoing collaborations or service (e.g., PC/area chair) at top conferences in ML and systems.
- A strong record of research impact, such as first-author publications at top ML/systems conferences (e.g., ICLR, ICML, NeurIPS, MLSys), influential open-source contributions, or widely used deployed systems, especially in optimization or efficiency.
Required Experience:
Staff IC
About Company
The Databricks Platform is the world’s first data intelligence platform powered by generative AI. Infuse AI into every facet of your business.