Anthropic Fellows Program — AI Safety

Anthropic

Job Location: London, UK or Berkeley, CA, USA (remote possible within the US, UK, or Canada)

Weekly Stipend: 3,850 USD / 2,310 GBP / 4,300 CAD
Posted on: 7 days ago
Vacancies: 1

Job Summary

About Anthropic

Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

Apply using this link. The next cohort of Anthropic fellows starts on July 20, 2026. Apply by April 26, 2026, to be considered for this cohort. We will continue accepting applications for later cohorts on a rolling basis. In exceptional circumstances, we may be able to accommodate fellows starting outside of the usual cohort timelines.

This page is specific to one of the Anthropic Fellows workstreams; see also the main Anthropic Fellows posting.

Anthropic Fellows Program overview

The Anthropic Fellows Program is designed to foster AI research and engineering talent. We provide funding and mentorship to promising technical talent - regardless of previous experience.

Fellows will primarily use external infrastructure (e.g., open-source models, public APIs) to work on an empirical project aligned with our research priorities, with the goal of producing a public output (e.g., a paper submission). In one of our earlier cohorts, over 80% of fellows produced papers.

We run multiple cohorts of Fellows each year and review applications on a rolling basis. This application is for cohorts starting in July 2026 and beyond.

What to expect

  • 4 months of full-time research
  • Direct mentorship from Anthropic researchers
  • Access to a shared workspace (in either Berkeley, California or London, UK)
  • Connection to the broader AI safety and security research community
  • Weekly stipend of 3,850 USD / 2,310 GBP / 4,300 CAD, plus benefits (these vary by country)
  • Funding for compute ($15k/month) and other research expenses

Interview process

The interview process will include an initial application and reference check, technical assessments and interviews, and a research discussion.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Compensation

The expected base stipend for this role is 3,850 USD / 2,310 GBP / 4,300 CAD per week, with an expectation of 40 hours per week for 4 months (with a possible extension).

Fellows workstreams

Due to the success of the Anthropic Fellows for AI Safety Research program, we are now expanding it across teams at Anthropic. We expect there to be significant overlap in the types of skills and responsibilities across the roles, and will by default consider candidates for all the workstreams.

Some of the workstreams may include unique assessment steps; we therefore ask you for workstream preferences in the application. You can see an overview of the current workstreams below:

  1. AI Safety Fellows
  2. AI Security Fellows
  3. ML Systems & Performance Fellows
  4. Reinforcement Learning Fellows
  5. Economics & Societal Impacts Fellows

Across the workstreams, you may be a good fit if you:

  • Are motivated by making sure AI is safe and beneficial for society as a whole
  • Are excited to transition into empirical AI research and would be interested in a full-time role at Anthropic
  • Have a strong technical background in computer science, mathematics, or physics
  • Thrive in fast-paced collaborative environments
  • Can implement ideas quickly and communicate clearly

Strong candidates may also have:

  • Strong background in a discipline relevant to a specific Fellows workstream (e.g., economics, social sciences, or cybersecurity)
  • Experience in areas of research or engineering related to their workstream

Candidates must be:

  • Fluent in Python programming
  • Available to work full-time on the Fellows program

AI Safety Fellows

Mentors, research areas & past projects

Fellows will undergo a project selection & mentor matching process. Potential mentors include:

  • Sam Bowman
  • Sara Price
  • Alex Tamkin
  • Nina Panickssery
  • Trenton Bricken
  • Logan Graham
  • Jascha Sohl-Dickstein
  • Joe Benton
  • Collin Burns
  • Fabien Roger
  • Samuel Marks
  • Kyle Fish
  • Ethan Perez

Our mentors will lead projects in select AI safety research areas such as:

  • Scalable Oversight: Developing techniques to keep highly capable models helpful and honest even as they surpass human-level intelligence in various domains.
  • Adversarial Robustness and AI Control: Creating methods to ensure advanced AI systems remain safe and harmless in unfamiliar or adversarial scenarios.
  • Model Organisms: Creating model organisms of misalignment to improve our empirical understanding of how alignment failures might arise.
  • Model Internals / Mechanistic Interpretability: Advancing our understanding of the internal workings of large language models to enable more targeted interventions and safety measures.
  • AI Welfare: Improving our understanding of potential AI welfare and developing related evaluations and mitigations.

You can read about past projects on our Alignment Science and Frontier Red Team blogs. For a full list of representative projects for each area, please see these blog posts: Introducing the Anthropic Fellows Program for AI Safety Research and Recommendations for Technical AI Safety Research Directions.

Unique candidate criteria

You might be a particularly great fit for this workstream if you:

  • Are motivated by reducing catastrophic risks from advanced AI systems
  • Have experience with empirical ML research projects
  • Have experience working with large language models
  • Have experience in one of the research areas mentioned above
  • Have a track record of open-source contributions

Logistics

Logistics Requirements: To participate in the Fellows program, you must have work authorization in the US, UK, or Canada and be located in that country during the program.

Workspace Locations: We have designated shared workspaces in London and Berkeley where fellows will work and mentors will visit. We are also open to remote fellows in the UK, US, or Canada. We will ask you about your availability to work from Berkeley or London (full- or part-time) during the program.

Visa Sponsorship: We are not currently able to sponsor visas for fellows. To participate in the Fellows program, you need to have or independently obtain full-time work authorization in the UK, the US, or Canada.

Program Duration: The program runs for 4 months, full-time. If you can't commit to the full duration, please still apply and note your constraints in the application. We review these requests on a case-by-case basis.

Please note: We do not guarantee that we will make any full-time offers to fellows. However, strong performance during the program may indicate that a Fellow would be a good fit for a full-time role. In previous cohorts, 25-50% of fellows received a full-time offer, and we've supported many more to go on to do great work on AI safety and security at other organizations.

Applications and interviews are managed by Constellation, our official recruiting partner for this program. Constellation also runs the Berkeley workspace that hosts fellows. Clicking Apply here will redirect you to Constellation's application portal. You can expect to receive emails from Constellation with application updates.

Apply here

The policies below are Anthropic's policies for full-time roles. They do NOT apply to the Fellows Program.

Logistics

Minimum education: Bachelor's degree or an equivalent combination of education, training, and/or experience

Required field of study: A field relevant to the role, as demonstrated through coursework, training, or professional experience

Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position

Location-based hybrid policy: Currently we expect all staff to be in one of our offices at least 25% of the time. However some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters will only contact you from official Anthropic email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links; visit our careers page directly for confirmed position openings.

How we're different

We believe that the highest-impact AI research will be big science. At Anthropic, we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.

About Company

Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
