Research Engineer, RL Infrastructure and Reliability (Knowledge Work)
San Francisco, CA - USA
About Anthropic
Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role
The Knowledge Work team builds the training environments and evaluations that make Claude effective at real-world professional workflows: searching, analyzing, and creating across the tools and documents knowledge workers use every day. As that work scales, the systems behind it need to be as rigorous as the research itself.
We are looking for a Research Engineer to own the reliability, observability, and infrastructure foundation that the team's research depends on. You will be responsible for ensuring our training and evaluation runs remain stable, well-instrumented, and high-quality as they grow in scale and complexity.
A core part of this role is shifting reliability work from reactive to proactive: hardening systems, stress-testing at realistic scale, and building the observability and tooling that surface problems early, so researchers can stay focused on research rather than incident response. You will be the team's stable, context-rich owner for environment health and evaluation integrity, and the primary point of contact for partner teams when issues arise.
Where this role focuses: While you'll work closely with researchers building new training environments, the priority for this role is the reliability those environments depend on. It's best suited to an engineer who finds real ownership and impact in making critical systems dependable, and in being the person behind trustworthy evaluation results the entire organization relies on.
Key Responsibilities:
Serve as the dedicated reliability owner for the Knowledge Work training environments, providing continuity of context and reducing the operational overhead of rotating ownership
Own a clean, canonical set of evaluation tools and processes for Knowledge Work capabilities, including the process used for model releases
Build and automate observability dashboards and operational tooling for our training environments and evaluation systems, with an emphasis on high signal-to-noise: a small set of trusted metrics and alerts rather than sprawling instrumentation
Proactively harden environments and evaluation systems through load testing, fault injection, and stress testing at realistic scale, so failures surface early rather than during critical training work
Act as the primary point of contact for partner training and infrastructure teams when issues in our environments arise, and drive incidents to resolution
Reduce the operational burden on researchers so they can stay focused on research
Minimum Qualifications:
Highly experienced Python engineer who ships reliable, well-instrumented code that teammates trust in production
Demonstrated experience operating ML or distributed systems at scale, including significant on-call and incident-response experience
Strong SRE or production-engineering mindset, reaching for SLOs, load tests, and failure injection before reaching for more dashboards
Foundational ML knowledge, sufficient to understand what a training environment or evaluation is actually measuring and to recognize when an evaluation has become stale or gameable
Able to read research code and reason about evaluation integrity
Preferred Qualifications:
5 years of experience operating ML or distributed systems at scale
Experience building or operating RL environments, agent harnesses, or LLM evaluation frameworks
Familiarity with reward modeling, evaluation design, or detecting and mitigating reward hacking
Experience with observability stacks (metrics, tracing, structured logging) and operational dashboard tooling
Background in chaos engineering, fault injection, or large-scale load testing
Experience with data quality pipelines, drift detection, or evaluation-set curation and versioning
Familiarity with large-scale training or inference infrastructure (schedulers, multi-agent orchestration, sandboxed execution)
Prior experience as a dedicated reliability or operations owner embedded within a research team
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role's On Target Earnings (OTE) range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Annual Salary:
$350,000 - $850,000 USD
Logistics
Minimum education: Bachelor's degree, or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role, as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters will only contact you from an official Anthropic email address. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links; visit our careers page directly for confirmed position openings.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic, we work as a single cohesive team on just a few large-scale research efforts. And we value impact: advancing our long-term goals of steerable, trustworthy AI rather than working on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.