MLOps Intern (Agent Engineer)
Own the ML agent lifecycle (training, registry, serving, and evals) in a multi-agent system.
About the role
Own the lifecycle of ML agents within our multi-agent architecture, from training and evaluation to model storage, versioning, and serving. You'll make our agents reproducible, observable, and continuously improving.
What you'll do
- Design training/evaluation pipelines; set up experiment tracking, a model registry, and artifact storage.
- Build reliable online/offline inference services, feature pipelines, and A/B or canary rollouts.
- Integrate ML agents into an orchestration layer (task routing, memory, tools, safety/guardrails).
- Establish data contracts, datasets, and evaluation harnesses (unit tests for models, regression checks).
- Implement monitoring for drift, latency, cost, and quality; drive iterative improvements.
What you'll bring
- Strong software engineering and ML foundation (Python required; bonus for Go/TS).
- Experience with MLOps stacks (e.g., MLflow/Weights & Biases, Ray, Kubeflow, Vertex AI/AWS Bedrock/Azure ML).
- Familiarity with vector stores, embeddings, retrieval, and tool use for agents; prompt/program synthesis.
- Comfortable with cloud infrastructure, containers, and CI/CD for ML.
Nice to have
- Graph data/knowledge representation; reinforcement/online learning; evaluation frameworks for LLM agents.
- Familiarity with metaheuristics and heuristic algorithm design.