Senior ML Systems Engineer, Frameworks & Tooling

Cohere


Job Location:

London - UK

Monthly Salary: Not Disclosed
Posted on: 2 days ago
Vacancies: 1 Vacancy

Job Summary

Who are we?

Our mission is to scale intelligence to serve humanity. We're training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.

We obsess over what we build. Each one of us is responsible for contributing to increasing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what's best for our customers.

Cohere is a team of researchers, engineers, designers, and more who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products.

Join us on our mission and shape the future!

We're looking for a senior engineer to help build, maintain, and evolve the training framework that powers our frontier-scale language models. This role sits at the intersection of large-scale training, distributed systems, and HPC infrastructure. You will design and maintain the core components that enable fast, reliable, and scalable model training, and build the tooling that connects research ideas to thousands of GPUs.

If you enjoy working across the full stack of ML systems, this role gives you the opportunity and autonomy to have massive impact.

What You'll Work On

  • Build and own the training framework responsible for large-scale LLM training.

  • Design distributed training abstractions (data/tensor/pipeline parallelism, FSDP/ZeRO strategies, memory management, checkpointing).

  • Improve training throughput and stability on multi-node clusters (e.g., GB200/300, AMD, H200/H100).

  • Develop and maintain tooling for monitoring, logging, debugging, and developer ergonomics.

  • Collaborate closely with infra teams to ensure Slurm setups, container environments, and hardware configurations support high-performance training.

  • Investigate and resolve performance bottlenecks across the ML systems stack.

  • Build robust systems that ensure reproducible, debuggable large-scale runs.

You Might Be a Good Fit If You Have

  • Strong engineering experience in large-scale distributed training or HPC systems.

  • Deep familiarity with JAX internals, distributed training libraries, or custom kernels/fused ops.

  • Experience with multi-node cluster orchestration (Slurm, Ray, Kubernetes, or similar).

  • Comfort debugging performance issues across CUDA/NCCL, networking, IO, and data pipelines.

  • Experience working with containerized environments (Docker, Singularity/Apptainer).

  • A track record of building tools that increase developer velocity for ML teams.

  • Excellent judgment around trade-offs: performance vs. complexity, research velocity vs. maintainability.

  • Strong collaboration skills: you'll work closely with infra, research, and deployment teams.

Nice to Have

  • Experience with training LLMs or other large transformer architectures.

  • Contributions to ML frameworks (PyTorch, JAX, DeepSpeed, Megatron, xFormers, etc.).

  • Familiarity with evaluation and serving frameworks (vLLM, TensorRT-LLM, custom KV caches).

  • Experience with data pipeline optimization, sharded datasets, or caching strategies.

  • Background in performance engineering, profiling, or low-level systems.

Bonus: papers at top-tier venues (such as NeurIPS, ICML, ICLR, AISTATS, MLSys, JMLR, AAAI, Nature, COLING, ACL, EMNLP).

Why Join Us

  • You'll work on some of the most challenging and consequential ML systems problems today.

  • You'll collaborate with a world-class team working fast and at scale.

  • You'll have end-to-end ownership over critical components of the training stack.

  • You'll shape the next generation of infrastructure for frontier-scale models.

  • You'll build tools and systems that directly accelerate research and model quality.

Sample Projects:

  • Build a high-performance data loading and caching pipeline.

  • Implement performance profiling across the ML systems stack.

  • Develop internal metrics and monitoring for training runs.

  • Build reproducibility and regression testing infrastructure.

  • Develop a performant fault-tolerant distributed checkpointing system.

If some of the above doesn't line up perfectly with your experience, we still encourage you to apply!

We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities. Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form and we will work together to meet your needs.

Full-Time Employees at Cohere enjoy these Perks:

An open and inclusive culture and work environment

Work closely with a team on the cutting edge of AI research

Weekly lunch stipend, in-office lunches & snacks

Full health and dental benefits, including a separate budget to take care of your mental health

100% Parental Leave top-up for up to 6 months

Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement

Remote-flexible offices in Toronto, New York, San Francisco, London, and Paris, as well as a co-working stipend

6 weeks of vacation (30 working days!)


Required Experience:

Senior IC


About Company


Deploy multilingual models, advanced retrieval, and intelligent agents securely and privately — without the risks of ordinary AI.
