Who are we?
Our mission is to scale intelligence to serve humanity. We're training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.
We obsess over what we build. Each of us is responsible for increasing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what's best for our customers.
Cohere is a team of researchers, engineers, designers, and more who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products.
Join us on our mission and shape the future!
Why this role?
Our team is a fast-growing group of researchers and engineers focused on building reliable ML systems and pushing the boundaries of LLM inference efficiency. We develop techniques that improve how models execute in production, driving lower latency, higher throughput, and consistent quality across diverse workloads.
As an engineer on this team, you'll work across the inference stack to improve core performance metrics by diving deep into model execution, identifying bottlenecks, and developing innovative optimizations. You'll collaborate closely with modeling and systems teams to experiment, measure, and ship improvements that meaningfully accelerate inference. As the team evolves, you'll have opportunities to build expertise in advanced performance techniques, including GPU/CUDA optimizations, kernel-level improvements, and model execution strategies for MoE and other large-scale architectures.
Please note: We have offices in Toronto, Montreal, San Francisco, New York, Paris, Seoul, and London. We embrace a remote-friendly environment, and as part of this approach we strategically distribute teams based on interests, expertise, and time zones to promote collaboration and flexibility. You'll find the Model Efficiency team concentrated in the EST and PST time zones, which are our preferred locations for this role.
You may be a good fit for the Model Efficiency team if you have:
5 years of experience writing high-performance, production-quality code
Strong programming skills in C++ or Python (Rust/Go also welcome)
Experience working with large language models and familiarity with the LLM inference ecosystem (e.g., vLLM, SGLang)
Ability to diagnose and resolve performance bottlenecks across the model execution stack
A strong bias for action: you ship fast, measure impact, and iterate
It's a big plus if you have experience with:
GPU programming, CUDA, or low-level systems optimization
Language modeling with transformers (MoE, speculative decoding, KV-cache optimizations)
Scaling performance-critical distributed systems (e.g., computation, search, storage)
If some of the above doesn't line up perfectly with your experience, we still encourage you to apply!
We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities. Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form and we will work together to meet your needs.
Full-time employees at Cohere enjoy these perks:
An open and inclusive culture and work environment
Work closely with a team on the cutting edge of AI research
Weekly lunch stipend, in-office lunches & snacks
Full health and dental benefits, including a separate budget to take care of your mental health
100% Parental Leave top-up for up to 6 months
Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement
Remote-flexible offices in Toronto, New York, San Francisco, London, and Paris, as well as a co-working stipend
6 weeks of vacation (30 working days!)