At Databricks, we are obsessed with enabling data teams to solve the world's toughest problems, from security threat detection to cancer drug development. We do this by building and running the world's best data and AI platform so our customers can focus on the high-value challenges that are central to their own missions.
The Mosaic AI organization enables companies to develop AI models and systems using their own data, with technologies ranging from pre-training LLMs from scratch to retrieval-augmented generation using the latest techniques. Mosaic AI does so by producing novel science and putting it into production. Mosaic AI is committed to the belief that a company's AI models are just as valuable as any other core IP, and that high-quality AI models should be available to all.
Job Description
As a research engineer on the Scaling team, you will be responsible for keeping up with the latest developments in deep learning and advancing the scientific frontier by creating new techniques that go beyond the state of the art. You will work on a collaborative team of researchers and engineers with diverse backgrounds and technical training. And most importantly, you will love our customers: our goal is to make them successful in applying state-of-the-art LLMs and AI systems, and we encode our scientific expertise into our products to make that possible.
The Impact you will have
As a research engineer on the Scaling team at Databricks, you will:
- Drive performance improvements through advanced optimization techniques, including kernel fusion, mixed precision, memory layout optimization, tiling strategies, and tensorization for training-specific patterns
- Design, implement, and optimize high-performance GPU kernels for training workloads (e.g., attention mechanisms, custom layers, gradient computation, activation functions) targeting NVIDIA architectures
- Design and implement distributed training frameworks for large language models, including parallelism strategies (data, tensor, pipeline, ZeRO-based) and optimized communication patterns for gradient synchronization and collective operations
- Profile, debug, and optimize end-to-end training workflows to identify and resolve performance bottlenecks, applying memory optimization techniques such as activation checkpointing, gradient sharding, and mixed precision training
What We Look for
- BS/MS/PhD in Computer Science or a related field, with hands-on experience writing and tuning CUDA kernels for ML training applications or hands-on experience with distributed training frameworks (PyTorch DDP, DeepSpeed, Megatron-LM, FSDP)
- Strong understanding of NVIDIA GPU architecture (memory hierarchy, tensor cores, warp scheduling, SM occupancy) and proficiency with CUDA debugging/profiling tools (Nsight, nvprof)
- Deep understanding of parallelism techniques and memory optimization strategies for large-scale model training, with a proven ability to debug and optimize distributed workloads
- Strong software engineering skills in Python and PyTorch, with experience supporting production training workflows and knowledge of LLM training dynamics, including hyperparameter tuning and optimization strategies
Required Experience:
Senior IC