About this role
As a Platform Engineer, MLOps, you will be critical to deploying and managing cutting-edge infrastructure for AI/ML operations, and you will collaborate with AI/ML engineers and researchers to develop a robust CI/CD pipeline that supports safe and reproducible experiments. Your expertise will also extend to setting up and maintaining monitoring, logging, and alerting systems to oversee extensive training runs and client-facing APIs. You will ensure that training environments are consistently available and efficiently managed across multiple clusters, enhancing our containerization and orchestration systems with tools like Docker and Kubernetes.
This role demands a proactive approach to maintaining large Kubernetes clusters, optimizing system performance, and providing operational support for our suite of software solutions. If you are driven by challenges and motivated by the continuous pursuit of innovation, this role offers the opportunity to make a significant impact in a dynamic, fast-paced environment.
Your responsibilities:
Work closely with AI/ML engineers and researchers to design and deploy a CI/CD pipeline that ensures safe and reproducible experiments.
Set up and manage monitoring, logging, and alerting systems for extensive training runs and client-facing APIs.
Ensure training environments are consistently available and prepared across multiple clusters.
Develop and manage containerization and orchestration systems utilizing tools such as Docker and Kubernetes.
Operate and oversee large Kubernetes clusters with GPU workloads.
Improve the reliability, quality, and time-to-market of our suite of software solutions.
Measure and optimize system performance, with an eye toward pushing our capabilities forward, getting ahead of customer needs, and innovating for continual improvement.
Provide primary operational support and engineering for multiple large-scale distributed software applications.
Is this you?
You have professional experience with:
Model training
Hugging Face Transformers
PyTorch
vLLM
TensorRT
Infrastructure as code tools like Terraform
Scripting languages such as Python or Bash
Cloud platforms such as Google Cloud AWS or Azure
Git and GitHub workflows
Tracing and Monitoring
Familiar with high-performance, large-scale ML systems
You have a knack for troubleshooting complex systems and enjoy solving challenging problems
Proactive in identifying problems, performance bottlenecks, and areas for improvement
Take pride in building and operating scalable reliable secure systems
Are comfortable with ambiguity and rapid change
Preferred skills and experience:
Familiar with monitoring tools such as Prometheus, Grafana, or similar
5 years building core infrastructure
Experience running inference clusters at scale
Experience operating orchestration systems such as Kubernetes at scale
#LI-Remote
Full-Time