Who are we?
Our mission is to scale intelligence to serve humanity. We're training and deploying frontier models for developers and enterprises who are building AI systems to power magical experiences like content generation, semantic search, RAG, and agents. We believe that our work is instrumental to the widespread adoption of AI.
We obsess over what we build. Each one of us is responsible for contributing to increasing the capabilities of our models and the value they drive for our customers. We like to work hard and move fast to do what's best for our customers.
Cohere is a team of researchers, engineers, designers, and more who are passionate about their craft. Each person is one of the best in the world at what they do. We believe that a diverse range of perspectives is a requirement for building great products.
Join us on our mission and shape the future!
Location: Whilst Cohere is a remote-friendly organisation, to ensure close collaboration with key partners this hire must be based between the East Coast of the US/Canada and Central Europe. If working from an office is preferred, we welcome candidates to join us in our Toronto, Montreal, NYC, London, or Paris locations.
About The Role
We're seeking a Safety Operations Lead to join our team on the East Coast or in Europe. In this role, you'll be at the intersection of AI safety and operational excellence, managing end-to-end data annotation projects that are critical to developing safer, more reliable AI systems. You'll work closely with our Safety Modeling team to translate complex safety concepts (from identifying harmful bias to preventing misuse) into clear, scalable labeling projects. Your responsibilities will span the full lifecycle of data operations: designing and piloting annotation projects, building robust quality assurance frameworks, managing our in-house team of expert annotators, and overseeing relationships with external vendors to ensure they meet our rigorous standards. Beyond execution, you'll drive strategic planning by forecasting data needs, managing budgets, and establishing KPIs that ensure quality, throughput, and cost-effectiveness. This is an opportunity to shape how AI safety data is collected, managed, and delivered at scale, requiring strong leadership, meticulous project management, and a deep commitment to AI safety principles.
What You'll Do
Collaborate and Identify Data Needs: Work closely with our Safety Modeling team to deeply understand their data requirements, e.g. for model training, evaluation, or red-teaming.
Operationalize Data Needs: Translate complex safety concepts (e.g. identifying harmful bias, preventing misuse) into clear, actionable data annotation instructions and scalable labeling projects.
End-to-End Project Management: Manage the entire lifecycle of data annotation projects, from initial design and pilot phases through to final data delivery, ensuring we meet ambitious quality and timeline goals.
Build Quality Systems: Design, implement, and refine robust quality assurance (QA) frameworks and feedback loops to monitor data accuracy, track annotator performance, and continuously improve our annotation workflows.
Lead an Internal Team: Manage our in-house team of data annotators. This includes hiring, onboarding, continuous training, performance management, and day-to-day operations to foster a high-performing, expert team.
Manage External Partners: Oversee relationships with external annotation vendors, including project scoping and setup, tracking execution against SLAs, and performing regular quality audits to ensure they meet our rigorous standards.
Define and Drive Success: Establish, monitor, and report on key performance indicators (KPIs) for all data annotation efforts. You will hold both internal and external teams accountable for quality, throughput, and cost-effectiveness.
Strategic Planning: Drive resource planning by creating roadmaps that anticipate future data needs, forecasting internal hiring requirements, and managing budgets for external vendor engagements.
What We're Looking For
3 years of experience in data annotation, data operations, or a similar operational role within a technology-focused environment.
A strong background or deep domain expertise in areas crucial to AI Safety, such as LLM safety principles, content moderation, trust & safety policy development, or online risk analysis.
Proven track record of managing complex data labeling or content review projects from inception to completion.
2 years of direct people management experience, with a demonstrated ability to hire, train, and develop high-performing operational teams.
Exceptional organizational and project management skills, with the ability to manage multiple complex projects simultaneously.
A data-driven mindset, with experience defining and tracking metrics to improve operational performance.
Excellent communication and interpersonal skills, with the ability to work effectively with both technical and non-technical stakeholders.
Bonus Points
Experience working directly with third-party data labeling vendors and platforms (e.g. Scale AI, Appen, Surge AI).
Familiarity with the challenges of subjective data and developing annotation guidelines for nuanced tasks.
Basic proficiency in SQL or a scripting language (like Python) for data analysis and workflow automation.
If some of the above doesn't line up perfectly with your experience, we still encourage you to apply!
We value and celebrate diversity and strive to create an inclusive work environment for all. We welcome applicants from all backgrounds and are committed to providing equal opportunities. Should you require any accommodations during the recruitment process, please submit an Accommodations Request Form and we will work together to meet your needs.
Full-Time Employees at Cohere enjoy these Perks:
An open and inclusive culture and work environment
Work closely with a team on the cutting edge of AI research
Weekly lunch stipend, in-office lunches & snacks
Full health and dental benefits, including a separate budget to take care of your mental health
100% Parental Leave top-up for up to 6 months
Personal enrichment benefits towards arts and culture, fitness and well-being, quality time, and workspace improvement
Remote-flexible offices in Toronto, New York, San Francisco, London, and Paris, as well as a co-working stipend
6 weeks of vacation (30 working days!)