Staff Software Engineer, Foundational Model Serving

Databricks

Job Location:

San Francisco, CA - USA

Monthly Salary: Not Disclosed
Posted on: 30+ days ago
Vacancies: 1

Job Summary

At Databricks we are passionate about enabling data teams to solve the world's toughest problems, from making the next mode of transportation a reality to accelerating the development of medical breakthroughs. We do this by building and running the world's best data and AI infrastructure platform so our customers can use deep data insights to improve their business.

Foundation Model Serving is the API product for hosting and serving frontier AI model inference, covering open-source models like Llama, Qwen, and GPT OSS as well as proprietary models like Claude and OpenAI GPT. No prior ML or AI experience is necessary for this role. We're looking for engineers who have owned high-scale, operationally sensitive systems such as customer-facing APIs, edge gateways, ML inference, or similar services, and who are interested in going deep on building LLM APIs and runtimes at scale.

As a Staff Engineer, you'll play a critical role in shaping both the product experience and the core infrastructure. You will design and build systems that enable high-throughput, low-latency inference on GPU workloads with frontier models, influence architectural direction, and collaborate closely across platform, product, infrastructure, and research teams to deliver a world-class foundation model API product.

The impact you will have:

  • Design and implement core systems and APIs that power Databricks Foundation Model Serving, ensuring scalability, reliability, and operational excellence.
  • Partner with product and engineering leadership to define the technical roadmap and long-term architecture for serving workloads.
  • Drive architectural decisions and trade-offs to optimize performance, throughput, autoscaling, and operational efficiency for GPU serving workloads.
  • Contribute directly to key components across the serving infrastructure, from working in systems like vLLM and SGLang to creating token-based rate limiters and optimizers (a minimal illustrative sketch follows this list), ensuring smooth and efficient operation at scale.
  • Collaborate cross-functionally with product, platform, and research teams to translate customer needs into reliable and performant systems.
  • Establish best practices for code quality, testing, and operational readiness, and mentor other engineers through design reviews and technical guidance.
  • Represent the team in cross-organizational technical discussions and influence Databricks' broader AI platform strategy.
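
To make the "token-based rate limiter" idea concrete: LLM inference APIs are typically metered in tokens per minute rather than requests per second, so admission control budgets token consumption. The sketch below is a minimal, hypothetical token-bucket limiter along those lines; the class name, limits, and admission policy are illustrative assumptions, not details from this posting or from Databricks' actual systems.

import time
import threading


class TokenBudgetRateLimiter:
    """Token-bucket limiter whose budget is LLM tokens per minute.

    Hypothetical sketch for illustration only; not Databricks' implementation.
    """

    def __init__(self, tokens_per_minute: int):
        self.capacity = tokens_per_minute
        self.available = float(tokens_per_minute)
        self.refill_rate = tokens_per_minute / 60.0  # tokens restored per second
        self.last_refill = time.monotonic()
        self._lock = threading.Lock()

    def _refill(self) -> None:
        # Restore budget in proportion to elapsed time, capped at capacity.
        now = time.monotonic()
        elapsed = now - self.last_refill
        self.available = min(self.capacity, self.available + elapsed * self.refill_rate)
        self.last_refill = now

    def try_acquire(self, estimated_tokens: int) -> bool:
        """Admit a request if its estimated token cost fits the current budget."""
        with self._lock:
            self._refill()
            if estimated_tokens <= self.available:
                self.available -= estimated_tokens
                return True
            return False


# Example: a 60k tokens-per-minute endpoint admitting a ~1,500-token request.
limiter = TokenBudgetRateLimiter(tokens_per_minute=60_000)
if limiter.try_acquire(estimated_tokens=1_500):
    print("request admitted")
else:
    print("request rejected: token budget exhausted")

A production limiter would also need to reconcile estimated prompt and completion tokens against actual usage after each request, and to share the budget across replicas (for example via a central store), which this sketch deliberately omits.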

What we look for:

  • 10 years of experience building and operating large-scale distributed systems.
  • Experience leading high-scale operationally sensitive backend systems.
  • A track record of up-leveling teams' engineering excellence.
  • Strong foundation in algorithms, data structures, and system design as applied to large-scale, low-latency serving systems.
  • Proven ability to deliver technically complex, high-impact initiatives that create measurable customer or business value.
  • Strong communication skills and ability to collaborate across teams in fast-moving environments.
  • Strategic and product-oriented mindset with the ability to align technical execution with long-term vision.
  • Passion for mentoring and growing engineers and fostering technical excellence.

Required Experience:

Staff IC


About Company

The Databricks Platform is the world’s first data intelligence platform powered by generative AI. Infuse AI into every facet of your business.
