Research Engineer, Search and Knowledge Post-Training
San Francisco, CA - USA
About Anthropic
Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role
We want future AI systems to have superhuman epistemics: the ability to parse evidence at enormous scale and draw rigorous conclusions for both themselves and their users. Search is the capability that determines whether a model can pick signal out of noise, weigh conflicting evidence, and know what it doesn't know. Every higher-order capability we care about depends on search being trustworthy. If we want Claude to be a trustworthy collaborator on real knowledge work, it has to be a trustworthy searcher.
We're hiring a Research Engineer to advance the science and engineering that go into making Claude that trustworthy searcher. This is a research role for someone who is unusually rigorous: you'll define hypotheses about what makes a model an epistemically sound searcher, design the experiments that test them, and turn search post-training from a craft into a measurable science. You'll be the person who insists on cleanly isolated variables, calibrated metrics, and reproducible signal, while also having the engineering skill to build the infrastructure necessary to get them.
This work sits at the intersection of reinforcement learning, retrieval, and evaluation, and it directly shapes how Claude behaves in any setting where evidence matters: research, analysis, agentic workflows, and beyond.
What you'll do
- Own a research direction for a class of search post-training problems end-to-end: form hypotheses about latent capabilities, design experiments that isolate them, run training, and decide what to try next.
- Build the instrumentation that turns environment design into a controlled experiment, so we can study how each environment factor contributes to the capabilities we care about rather than overfitting to any one regime.
- Design frontier-discriminating evaluations that distinguish genuine reasoning over evidence from plausible pattern matching and that hold up as models improve.
- Drive optimization rigor across the stack: efficient experiment design, ablations, training-run economics, and the discipline to know when a result is real.
- Collaborate deeply with researchers across post-training, RL infrastructure, and product to translate model behavior in the wild into concrete training signals, and back again.
- Set the bar for the team's experimental standards: what we measure, how we measure it, and how we know a result is real.
Minimum (must-have)
- Have an unusually rigorous quantitative mindset
- Are an outstanding software engineer in Python, comfortable across the stack from data pipelines to RL training to evaluation infrastructure
- Have shipped real ML research repeatedly, with taste for which experiments are worth running
- Instinctively reach for ablations, controls, and confidence intervals to understand why a result holds
- Operate well with high autonomy and ambiguity, and can identify the most impactful problem to work on next without being told
- Want to set research direction, advocate for experimental rigor, and raise the bar for the people around you
- Communicate research clearly in writing and in person; you can defend a design choice and update on evidence
Preferred (nice-to-have)
- Hands-on experience with RL on large language models: environments, reward design, training stability, scaling behavior.
- Background in search, retrieval, RAG, or agents that reason over external information sources.
- Experience building evaluations for open-ended or knowledge-intensive LLM behavior.
- Prior work in a research-heavy environment (frontier AI lab, quant research firm, or a similarly demanding empirical setting) where rigor is the default.
- Published research on LLMs, RL, retrieval, calibration, or related topics.
- Experience with distributed training systems and large-scale experimentation infrastructure.
Representative projects
- Designing a controlled-noise search environment where you can independently dial up failure rates, conflicting sources, and adversarial content, and using it to characterize how each factor shapes the policy a model learns.
- Building an evaluation suite that distinguishes calibrated source judgment from confident-sounding guesswork, and that stays discriminating as models improve.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role's On-Target Earnings (OTE) range, meaning that it includes both the sales commission/bonus target and the annual base salary for the role.
Annual Salary:
$500,000 - $850,000 USD
Logistics
Minimum education: Bachelor's degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role, as demonstrated through coursework, training, or professional experience
Minimum years of experience: The years of experience required correspond to the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters will only contact you through official channels; in some cases we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links; visit our careers page for confirmed position openings.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.