Software Engineer, Inference - TL

Job Location

San Francisco, CA - USA

Yearly Salary

USD 460,000 - 685,000

Vacancy

1 Vacancy

Job Description

About the Team

Our team brings OpenAI's most capable research and technology to the world through our products. We empower consumers, enterprises, and developers alike to access state-of-the-art AI models, unlocking new capabilities across productivity, creativity, and more. We focus on high-performance model inference and on accelerating research through efficient, reliable infrastructure.

About the Role

We're looking for a hands-on Tech Lead to drive the design, optimization, and scaling of our inference systems. In this role, you'll lead engineering efforts to ensure our largest models run with exceptional efficiency in high-throughput, low-latency environments. You'll be responsible for shaping our CUDA strategy, driving performance at the kernel level, and collaborating across teams to deliver end-to-end production readiness.

In this role you will:

  • Lead the design and implementation of core inference infrastructure for serving frontier AI models in production.

  • Own and optimize CUDA-based systems and kernels to maximize performance across our fleet.

  • Partner with researchers to integrate novel model architectures into performant, scalable inference pipelines.

  • Build tooling and observability to detect bottlenecks, guide system tuning, and ensure stable deployment at scale.

  • Collaborate cross-functionally to align technical direction across research, infra, and product teams.

  • Mentor engineers on GPU performance, CUDA development, and distributed inference best practices.

You may thrive in this role if you:

  • Have deep expertise in CUDA, including writing and optimizing high-performance kernels for inference or training workloads.

  • Have experience leading complex engineering efforts, particularly at the systems and performance layer of large-scale ML infrastructure.

  • Understand the full inference stack, from model loading and memory management to communication libraries and deployment orchestration.

  • Are comfortable working in large distributed GPU environments and debugging performance issues across hardware and software layers.

  • Have strong familiarity with PyTorch and NVIDIA's GPU software stack (NCCL, NVLink, MIG, etc.).

  • Take a systems-level view but aren't afraid to dive into low-level code when performance is on the line.

Bonus:

  • Experience with inference frameworks like TensorRT, vLLM, SGLang, or custom model parallelism infrastructure.

  • Familiarity with TPU, AMD GPUs, ROCm, HIP, TensorRT-LLM, Ray Serve, Megatron, MPI, or Horovod.

  • Familiarity with profiling tools (Nsight, nvprof, or custom observability stacks).

  • Background in HPC or large-scale distributed systems engineering.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability, or any other legally protected status.

OpenAI Affirmative Action and Equal Employment Opportunity Policy Statement

For US Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance we will consider qualified applicants with arrest and conviction records.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Employment Type

Full-Time
