Software Engineer, GenAI Inference

Databricks

Job Location: San Francisco, CA - USA

Monthly Salary: Not Disclosed
Posted on: 30+ days ago
Vacancies: 1

Job Summary

P-1284

About This Role

As a software engineer for GenAI inference, you will help design, develop, and optimize the inference engine that powers the Databricks Foundation Model API. You'll work at the intersection of research and production, ensuring our large language model (LLM) serving systems are fast, scalable, and efficient. Your work will touch the full GenAI inference stack, from kernels and runtimes to orchestration and memory management.

What You Will Do

  • Contribute to the design and implementation of the inference engine, and collaborate on a model-serving stack optimized for large-scale LLM inference
  • Collaborate with researchers to bring new model architectures and features (sparsity, activation compression, mixture-of-experts) into the engine
  • Optimize for latency, throughput, memory efficiency, and hardware utilization across GPUs and accelerators
  • Build and maintain instrumentation, profiling, and tracing tooling to uncover bottlenecks and guide optimizations
  • Develop and enhance scalable routing, batching, scheduling, memory management, and dynamic loading mechanisms for inference workloads (a minimal batching sketch follows this list)
  • Support reliability, reproducibility, and fault tolerance in the inference pipelines, including A/B launches, rollback, and model versioning
  • Integrate with federated, distributed inference infrastructure: orchestrate across nodes, balance load, and handle communication overhead
  • Collaborate cross-functionally with platform engineering, cloud infrastructure, and security/compliance teams
  • Document and share learnings, contributing to internal best practices and open-source efforts when possible
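
To make the batching and scheduling responsibility above concrete, here is a minimal Python sketch of iteration-level ("continuous") batching, a common pattern in LLM serving: requests are admitted into free batch slots between decode steps rather than waiting for a full batch to drain. Every name here (Request, ContinuousBatcher, stub_forward) is a hypothetical illustration, not Databricks code; a production engine would add paged KV-cache memory management, preemption, and priority-aware scheduling on top.

```python
# Minimal sketch of iteration-level ("continuous") batching for LLM decoding.
from collections import deque
from dataclasses import dataclass, field

EOS_TOKEN = 0  # placeholder end-of-sequence token id


@dataclass
class Request:
    prompt_tokens: list[int]
    max_new_tokens: int
    generated: list[int] = field(default_factory=list)

    def is_done(self) -> bool:
        if len(self.generated) >= self.max_new_tokens:
            return True
        return bool(self.generated) and self.generated[-1] == EOS_TOKEN


class ContinuousBatcher:
    """Admits new requests between decode steps instead of waiting for the
    whole batch to drain, which keeps accelerator utilization high."""

    def __init__(self, max_batch_size: int):
        self.max_batch_size = max_batch_size
        self.waiting = deque()  # requests not yet scheduled
        self.running = []       # requests currently in the batch

    def submit(self, req: Request) -> None:
        self.waiting.append(req)

    def step(self, forward_fn) -> None:
        # 1. Fill any free batch slots from the waiting queue.
        while self.waiting and len(self.running) < self.max_batch_size:
            self.running.append(self.waiting.popleft())
        if not self.running:
            return
        # 2. One decode step for the whole batch; forward_fn stands in for
        #    the model and returns one next token per running request.
        for req, tok in zip(self.running, forward_fn(self.running)):
            req.generated.append(tok)
        # 3. Retire finished requests immediately, freeing their slots.
        self.running = [r for r in self.running if not r.is_done()]


# Toy usage: a stub "model" that emits token 1, then EOS.
def stub_forward(batch):
    return [1 if not r.generated else EOS_TOKEN for r in batch]

batcher = ContinuousBatcher(max_batch_size=8)
batcher.submit(Request(prompt_tokens=[5, 6, 7], max_new_tokens=4))
while batcher.running or batcher.waiting:
    batcher.step(stub_forward)
```

The key design point is step 3: retiring finished sequences inside the decode loop is what lets new requests start without waiting for the slowest member of the batch.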

What We Look For

  • BS/MS/PhD in Computer Science or a related field
  • Strong software engineering background (3 years or equivalent) in performance-critical systems
  • Solid understanding of ML inference internals: attention, MLPs, recurrent modules, quantization, sparse operations, etc. (see the quantization sketch after this list)
  • Hands-on experience with CUDA GPU programming and key libraries (cuBLAS, cuDNN, NCCL, etc.)
  • Comfortable designing and operating distributed systems, including RPC frameworks, queuing, batching, sharding, and memory partitioning
  • Demonstrated ability to uncover and solve performance bottlenecks across layers (kernel, memory, networking, scheduler)
  • Experience building instrumentation, tracing, and profiling tools for ML models
  • Ability to work closely with ML researchers and translate novel model ideas into production systems
  • Ownership mindset and eagerness to dive deep into complex system challenges
  • Bonus: published research or open-source contributions in ML systems, inference optimization, or model serving
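
As a small illustration of one of the inference internals listed above, here is a sketch of symmetric per-tensor int8 weight quantization using NumPy. The function names are hypothetical, not from any particular library; real engines usually quantize per-channel or per-group and fuse the dequantization into the GPU matmul kernel.

```python
# Sketch of symmetric per-tensor int8 weight quantization.
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Map float weights onto [-127, 127] with a single scale factor."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
err = float(np.abs(dequantize_int8(q, scale) - w).max())
print(f"max abs error: {err:.4f}, {w.nbytes // q.nbytes}x smaller")
```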

Key Skills

  • Spring
  • .NET
  • C/C++
  • Go
  • React
  • OOP
  • C#
  • Data Structures
  • JavaScript
  • Software Development
  • Java
  • Distributed Systems

About Company

The Databricks Platform is the world’s first data intelligence platform powered by generative AI. Infuse AI into every facet of your business.
