Research Engineer / Scientist, Alignment Science

Job Location:

San Francisco, CA - USA

Annual Salary: $315,000 - $340,000
Posted on: 2 days ago
Vacancies: 1 Vacancy

Job Summary

About Anthropic

Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role:

You want to build and run elegant and thorough machine learning experiments to help us understand and steer the behavior of powerful AI systems. You care about making AI helpful, honest, and harmless, and are interested in the ways that this could be challenging in the context of human-level capabilities. You could describe yourself as both a scientist and an engineer. As a Research Engineer on Alignment Science, you'll contribute to exploratory experimental research on AI safety, with a focus on risks from powerful future systems (like those we would designate as ASL-3 or ASL-4 under our Responsible Scaling Policy), often in collaboration with other teams, including Interpretability, Fine-Tuning, and the Frontier Red Team.
Our blog provides an overview of topics that the Alignment Science team is either currently exploring or has previously explored. Our current topics of focus include...

Note: For this role, we conduct all interviews in Python and prefer candidates to be based in the Bay Area.

Representative projects:

  • Test the robustness of our safety techniques by training language models to subvert our safety interventions and seeing how effective they are.
  • Run multi-agent reinforcement learning experiments to test out techniques like AI Debate.
  • Build tooling to efficiently evaluate the effectiveness of novel LLM-generated jailbreaks.
  • Write scripts and prompts to efficiently produce evaluation questions to test models' reasoning abilities in safety-relevant contexts.
  • Contribute ideas, figures, and writing to research papers, blog posts, and talks.
  • Run experiments that feed into key AI safety efforts at Anthropic, like the design and implementation of our Responsible Scaling Policy.

You may be a good fit if you:

  • Have significant software, ML, or research engineering experience
  • Have some experience contributing to empirical AI research projects
  • Have some familiarity with technical AI safety research
  • Prefer fast-moving collaborative projects to extensive solo efforts
  • Pick up slack even if it goes outside your job description
  • Care about the impacts of AI

Strong candidates may also:

  • Have experience authoring research papers in machine learning, NLP, or AI safety
  • Have experience with LLMs
  • Have experience with reinforcement learning
  • Have experience with Kubernetes clusters and complex shared codebases

Candidates need not have:

  • 100% of the skills needed to perform the job
  • Formal certifications or education credentials

The expected base compensation for this position is below. Our total compensation package for full-time employees includes equity and benefits, and may include incentive compensation.

Annual Salary:

$315,000 - $340,000 USD

Logistics

Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.

Location-based hybrid policy:
Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters will only contact you from @anthropic.com email addresses. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links; visit our official careers page for confirmed position openings.

How we're different

We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.


Required Experience:

IC


Key Skills

  • Machine Learning
  • Python
  • AI
  • C/C++
  • R
  • Research Experience
  • Natural Language Processing
  • Deep Learning

About Company


Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
