We're looking for an MLOps Engineer to help us build ML infrastructure that scales reliably and efficiently from dozens of GPUs to thousands, on demand.
You'll be part of the AI R&D team, working closely with researchers and engineers to design systems for training, evaluating, and monitoring machine learning models at scale. This isn't a research position, but your work will directly support researchers running large-scale experiments. You'll help build fault-tolerant pipelines that preserve progress even when things break (out-of-memory errors, for example) and ensure model development flows can iterate with confidence.
Our current focus is on large-scale, non-interactive workloads: batch training, dataset-wide model evaluation, and metric-driven improvement loops. That said, the infrastructure you build may later support interactive tools and APIs.
You'll contribute to system design under the guidance of senior ML researchers and infrastructure engineers. Your role is to bring modern tooling and practical engineering to a demanding, GPU-heavy environment.
Responsibilities:
Build and maintain ML pipelines for data processing, training, evaluation, and model deployment.
Orchestrate batch and training jobs in Kubernetes, handling retries, failures, and resource constraints (see the sketch after this list).
Design systems that scale dynamically from small GPU jobs to thousands of GPUs on demand.
Collaborate with researchers to productionize their experiments into reproducible, robust workflows.
Implement model serving endpoints (REST/gRPC) and integrate them with internal tooling.
Set up monitoring, logging, and KPI tracking for ML pipelines and compute jobs.
Automate CI/CD and infrastructure provisioning for ML workloads.
Manage experiment tracking, model versioning, and metadata with tools like MLflow or W&B.
Support model serving infrastructure that may later be used by internal UIs or tools.
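To make the orchestration item above concrete, here is a minimal sketch of submitting a retryable GPU training job with the official Kubernetes Python client. The image name, namespace, GPU count, and job name are placeholders, not our actual setup; a real pipeline would add node selectors, volumes, and monitoring hooks.

    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() when running inside a pod

    # Hypothetical training job: 4 GPUs, up to 3 automatic retries on failure.
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name="train-run-001"),  # placeholder job name
        spec=client.V1JobSpec(
            backoff_limit=3,  # the Job controller recreates the pod up to 3 times on failure
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",  # let the Job controller own retries
                    containers=[
                        client.V1Container(
                            name="trainer",
                            image="registry.example.com/trainer:latest",  # placeholder image
                            command=["python", "train.py"],
                            resources=client.V1ResourceRequirements(
                                limits={"nvidia.com/gpu": "4"},  # GPU scheduling constraint
                            ),
                        )
                    ],
                )
            ),
        ),
    )

    client.BatchV1Api().create_namespaced_job(namespace="ml-jobs", body=job)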
Required Skills:
Kubernetes: Strong experience orchestrating jobs, not just deploying services. You should be confident managing training workloads, GPU scheduling, job retries, and Helm-based deployments.
Python: Comfortable writing scripts and services that glue systems together. You don't need to be a full-stack developer, but notebooks won't cut it. Automation is the word here.
ML Workflows: Familiarity with data preprocessing, training, evaluation, and deployment pipelines.
Model Serving: Ability to expose models via FastAPI, TorchServe, or equivalent serving stacks (a sketch follows this list).
Linux: Strong CLI skills; you should know your way around debugging compute-heavy jobs.
Experience with ML metadata systems (MLflow, W&B, Neptune).
Know how to work side by side with AI assistants and agents.
Ability to communicate and debate in English and Portuguese.
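For the model serving skill above, here is a minimal sketch of the kind of REST endpoint we mean, using FastAPI and assuming a hypothetical TorchScript checkpoint at model.pt; a production endpoint would add batching, input validation, and health checks.

    from fastapi import FastAPI
    from pydantic import BaseModel
    import torch

    app = FastAPI()
    model = torch.jit.load("model.pt")  # hypothetical TorchScript checkpoint
    model.eval()

    class PredictRequest(BaseModel):
        features: list[float]

    @app.post("/predict")
    def predict(req: PredictRequest):
        # run inference without tracking gradients
        with torch.no_grad():
            output = model(torch.tensor([req.features]))
        return {"prediction": output.tolist()}

    # run with: uvicorn serve:app --host 0.0.0.0 --port 8000  (assuming this file is serve.py)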
Nice-to-have skills:
Experience with orchestration tools (Airflow, Argo Workflows, Prefect).
Fluency in cloud environments (GCP, AWS, Azure).
Ability to write lean, customized Dockerfiles and Helm charts that run smoothly.
Exposure to distributed training frameworks such as Ray, Horovod, or Dask (see the sketch after this list).
Deep understanding of GPU scheduling and tuning in Kubernetes environments.
Experience supporting LLM workloads or inference systems powering internal tools.
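As one illustration of the distributed-frameworks point, a dataset-wide evaluation fan-out in Ray might look like the following sketch; evaluate_shard and the shard count are hypothetical stand-ins for real evaluation logic.

    import ray

    ray.init()  # connects to a configured cluster, else starts a local one

    @ray.remote(num_gpus=1)  # each task reserves one GPU from the cluster
    def evaluate_shard(shard_id: int) -> float:
        # placeholder: load the shard, run the model, return a metric
        return 0.0

    # fan out one task per dataset shard and gather the metrics
    futures = [evaluate_shard.remote(i) for i in range(8)]
    print(ray.get(futures))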
What You'll Need to Succeed:
Curiosity about how things fail, and how to keep them from failing.
Strong debugging chops, especially in distributed, resource-constrained environments.
A practical mindset: you know when to patch and when to fix.
Ability to collaborate across ML research and backend teams.
Ownership: you care about keeping systems reliable, scalable, and clean.
Diversity and inclusion:
We believe in social inclusion, respect, and appreciation of all people. We promote a welcoming work environment where each CloudWalker can be authentic, regardless of gender, ethnicity, race, religion, sexuality, mobility, disability, or education.