Software Engineer, Safeguards Foundations (Internal Tooling)

Anthropic


Job Location:

London - UK

Annual Salary: £255,000 - £325,000
Posted on: 2 days ago
Vacancies: 1 Vacancy

Job Summary

About Anthropic

Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role

The Safeguards team is responsible for the systems that detect, review, and act on misuse of Anthropic's models, work that sits at the very centre of our mission to develop AI safely. Within Safeguards, the Foundations team builds the platforms, infrastructure, and internal tools that the rest of the organisation depends on to do this well.

We are looking for a software engineer to own and extend the internal tooling that powers human review: the case management, labelling, investigation, and enforcement interfaces our analysts and policy specialists use every day. These are back-office tools, but they are anything but low-stakes: the speed, clarity, and reliability of this tooling directly determine how quickly Anthropic can identify harmful behaviour, make sound enforcement decisions, and feed signal back into model training. You'll work closely with Trust & Safety operations, policy, and detection-engineering teams to turn messy operational workflows into well-designed, durable software.

This is a hands-on, full-stack role for someone who enjoys building products for internal users, sweats the details of usability and correctness, and wants their engineering work to have a clear line to real-world safety outcomes.

Responsibilities

  • Design, build, and maintain the internal review and enforcement tooling used by Safeguards analysts, including case queues, content review surfaces, decision/audit logging, and account-actioning workflows (a brief data-model sketch follows this list)
  • Understand user workflows and establish well-designed tooling for processes that may be distributed across a number of tools and UIs
  • Develop the base layer of reusable APIs, data storage, and backend services that let new review workflows be stood up quickly and safely
  • Partner with operations and policy teams to understand reviewer pain points, then translate them into clear product improvements that reduce handling time and decision error
  • Integrate tooling with upstream detection systems and downstream enforcement infrastructure so that flagged behaviour flows cleanly from signal to human review to action
  • Build in the guardrails that sensitive internal tools require: granular permissions, audit trails, data-access controls, and reviewer wellbeing features (e.g. content blurring, exposure limits)
  • Instrument the tools you ship, surfacing metrics on queue health, reviewer throughput, and decision quality so the team can see what's working
  • Contribute to the Foundations team's shared platform and on-call responsibilities
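
To give a concrete flavour of this work, below is a minimal, hypothetical sketch in Python (one half of the stack mentioned further down) of a case record with an append-only decision/audit log. All names, fields, and severity tiers are illustrative assumptions, not Anthropic's actual schema.

```python
# Hypothetical sketch only: a reviewable case with an append-only audit trail.
# Names, fields, and severity tiers are illustrative, not Anthropic's schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional


class Decision(str, Enum):
    NO_ACTION = "no_action"
    WARN = "warn"
    SUSPEND = "suspend"
    ESCALATE = "escalate"


@dataclass(frozen=True)
class AuditEvent:
    actor: str                 # reviewer or system identity that performed the action
    action: str                # e.g. "viewed", "decided", "escalated"
    at: datetime
    detail: Optional[str] = None


@dataclass
class Case:
    case_id: str
    severity: int              # 1 (low) .. 4 (critical); drives queue routing
    decision: Optional[Decision] = None
    audit_log: list[AuditEvent] = field(default_factory=list)

    def record(self, actor: str, action: str, detail: Optional[str] = None) -> None:
        """Append-only: events are never edited or deleted after the fact."""
        self.audit_log.append(AuditEvent(actor, action, datetime.now(timezone.utc), detail))

    def decide(self, actor: str, decision: Decision) -> None:
        """Store the outcome and log who decided it, so every action is attributable."""
        self.decision = decision
        self.record(actor, "decided", decision.value)
```

The append-only log is the important design choice here: audit trails for enforcement decisions should record who did what and when, and never be rewritten.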

You may be a good fit if you

  • Have 4 years of experience as a software engineer, with meaningful time spent building internal tools, operations platforms, or back-office products
  • Are comfortable using agentic coding tools (e.g. Claude Code) as a core part of your workflow, and can direct them to ship well-tested, production-quality software at a high cadence without lowering the bar (our stack is mostly React/TypeScript and Python)
  • Take a product-minded approach to internal users: you work with the people using your tools, watch where they struggle, and fix it
  • Are results-oriented with a bias towards flexibility and impact
  • Pick up slack even if it goes outside your job description
  • Communicate clearly with non-engineering stakeholders and can explain technical trade-offs to operations and policy partners
  • Care about the societal impacts of your work and want to apply your engineering skills directly to AI safety

Strong candidates may also

  • Have built tooling in a trust & safety, content moderation, fraud, integrity, or risk-operations setting
  • Have experience designing case-management or workflow systems (queues, SLAs, escalation paths, audit logs)
  • Have worked with sensitive data and understand the privacy, access-control, and reviewer-wellbeing considerations that come with it
  • Have experience with GCP/AWS, Postgres/BigQuery, and CI/CD in a production environment
  • Have used LLMs as a building block inside operational tools (e.g. assisted triage, summarisation, or classification in the review loop); a hedged sketch of this pattern follows this list
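
As an illustration of that last point, the sketch below shows an LLM-assisted triage step built with the Anthropic Python SDK. The model id, prompt, and severity labels are placeholder assumptions, and the model's suggestion would only ever be a draft for a human reviewer to confirm or override.

```python
# Illustrative only: draft a severity suggestion for a human reviewer to confirm.
# Assumes the Anthropic Python SDK (`pip install anthropic`) and an
# ANTHROPIC_API_KEY in the environment; the model id and prompt are placeholders.
import anthropic

client = anthropic.Anthropic()


def suggest_severity(case_text: str) -> str:
    """Return a model-drafted severity label; the final decision stays with the reviewer."""
    response = client.messages.create(
        model="claude-sonnet-4-5",   # placeholder model id
        max_tokens=10,
        system=(
            "You assist a trust & safety reviewer. "
            "Reply with exactly one word: low, medium, high, or critical."
        ),
        messages=[{"role": "user", "content": case_text}],
    )
    return response.content[0].text.strip().lower()
```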

Representative projects

  • Rebuilding the analyst review queue so cases are routed by severity and skill, with full decision history and one-click escalation (a rough routing sketch follows this list)
  • Shipping a unified account-investigation view that pulls signals from multiple detection systems into a single permissioned surface
  • Adding content-obfuscation and exposure-tracking features to protect reviewers working with harmful material
  • Building an internal labelling tool that feeds high-quality ground truth back to the detection and research teams
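
As a rough sketch of the first project on this list, the snippet below routes a case to an analyst by severity and skill. Every name here is a hypothetical illustration rather than a description of Anthropic's systems; a production router would also balance load and respect SLAs.

```python
# Hypothetical routing rule: match a case to an analyst by category skill and
# severity tier. Names are illustrative; a real system would also balance load.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Analyst:
    name: str
    skills: set[str]        # e.g. {"fraud", "api_abuse"}
    max_severity: int       # highest severity tier this analyst is cleared to handle


def route(case_severity: int, case_category: str, analysts: list[Analyst]) -> Optional[Analyst]:
    """Pick the first analyst qualified for both the category and the severity tier."""
    eligible = [
        a for a in analysts
        if case_category in a.skills and a.max_severity >= case_severity
    ]
    return eligible[0] if eligible else None
```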

Candidates need not have

  • 100% of the skills listed above
  • Prior experience in AI or machine learning
  • Formal certifications or education credentials

Deadline to apply: None. Applications will be reviewed on a rolling basis.

The annual compensation range for this role is listed below.

For sales roles, the range provided is the role's On Target Earnings (OTE) range, meaning that the range includes both the sales commissions/sales bonuses target and the annual base salary for the role.

Annual Salary:

£255,000 - £325,000 GBP

Logistics

Minimum education: Bachelor's degree, or an equivalent combination of education, training, and/or experience

Required field of study: A field relevant to the role, as demonstrated through coursework, training, or professional experience

Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters will only contact you from an official Anthropic email address; in some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links; visit our careers page directly for confirmed position openings.

How were different

We believe that the highest-impact AI research will be big science. At Anthropic, we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.


Required Experience:

IC
