Cybersecurity Engineer

Not Interested
Bookmark
Report This Job

profile Job Location:

London - UK

profile Monthly Salary: £ 65000 - 145000
Posted on: 2 days ago
Vacancies: 1 Vacancy

Job Summary

About the AI Security Institute

The AI Security Institute is the worlds largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. Were in the heart of the UK government with direct lines to No. 10 (the Prime Ministers office) and we work with frontier developers and governments globally.

Were here because governments are critical for advanced AI going well and UK AISI is uniquely positioned to mobilise them. With our resources unique agility and international influence this is the best place to shape both AI development and government action.

About the Team

The Cyber and Autonomous Systems Team (CAST) is looking to research and map the evolving frontier of AI capabilities and propensities to inform critical security decisions that reduce loss-of-control risks from frontier AI. We focus on preventing harms from high-impact cybersecurity capabilities and highly capable autonomous AI systems.

Our team is a blend of high-velocity generalists and technical staff from organisations such as Meta Amazon Palantir DSTL and Jane Street. Our recent work has included building model evaluations suites such asReplibench- the worlds most comprehensive evaluation suite for understanding the risk of a model autonomously replicating itself over the regularlytestthe cyber and other relevant capabilities of frontier models before they are released to understand their risks.

As AI systems become more advanced the potential for misuse of their cyber capabilities may pose a threat to the security of organisations and individuals. Cyber capabilities also form common bottlenecks in scenarios across other AI risk areas such as harmful outcomes from biological and chemical capabilities and from autonomous systems. One approach to better understanding these risks is by conducting robust empirical tests of AI systems so we can better understand how capable they currently are when it comes to performing cyber security this roleyoulljoin a strongly collaborative team to help create new kinds of capability and safety evaluations to evaluate frontier AI systems as they are released.

About the Role

This is a cybersecurity engineer position focused on building environments and challenges to benchmark the cyber capabilities of AI systems.Youlldesign cyber ranges CTF-style tasks and evaluation infrastructure that allows us to rigorously measure how well frontier AI models perform on real-world cybersecurity tasks.

This work belongs inside UK government because understanding AI cyber capabilities is critical to national security and robust empirical testing requires coordination across government industry and international partners to inform policy decisions on AI safety.

Youllwork closely with research engineers infrastructure engineers and machine learning researchers across AISI. As a small fast-moving team building first-of-its-kind evaluation infrastructureyoullbeable toinfluenceresearch directionsown whole pieces of work and bring your ideas to the table.

Core Responsibilities

  • Evaluation Design & Development (60%)
    • Design cyber ranges and CTF-style challenges for automatically grading AI system performance on cybersecurity tasks
    • Build agentic scaffolding to evaluate frontier models equipping them with tools such as network packet capture utilities penetration testing frameworks and reverse engineering/disassembly tools
    • Design metrics and interpret results of cyber capability evaluations
  • Infrastructure engineering (30%)
    • Work alongside other engineers to ensure evaluation environments are robust and scalable
  • Research & Communication (10%)
    • Writereports researchpapersand blog posts to share findings with stakeholders
    • Keepup-to-datewith related research taking place in other organisations
    • Contribute to AISIs broader understanding of AI cyber risks

Example Projects

  • Onboard and integrate new cyber ranges into our evaluation pipeline
  • Conduct agent research to improve the cyber capabilities of our agents
  • Improve grading and scoring methodologies for automated evaluation tasks
  • Integrate defensive telemetry and simulated users into ranges to increase their realism
  • Collaborate with government partners on joint research publications

Impact

Your work will directly shape the UK governments understanding of AI cyber capabilities inform safety standards for frontier AI systems and contribute to the global effort to develop rigorous evaluation methodologies. The evaluations you build will helpdeterminehow advanced AI systems are assessed before deployment

What we are looking for

Wereflexible on the exact profile and expect successful candidates will meet many (but not necessarily all) of the criteria below.

Essential

  • Strong Python skills with experience writing scripts for automation or security tooling
  • Proven experience in at least one of the following areas of cybersecurity red-teaming:
    • Penetration testing
    • Cyber range design
    • Competing in or designing CTFs
    • Developing automated security testing tools
    • Bug bounties vulnerability research or exploit discovery and patching
  • Strong interest in helping improve the safety of AI systems

Preferred

  • Familiarity with virtualisation technologies such asProxmoxVE and infrastructure-as-code approaches to enable reproducible test environments to be rapidly spun up for testing
  • Ability to communicate the outcomes of cybersecurity research to a range of technical and non-technical audiences
  • Familiarity with cybersecurity tools such as network packet capture utilities penetration testing frameworks and reverse engineering/disassembly tools
  • Active in the cybersecurity community witha track recordof keeping up to date with new research
  • Previousexperience building or measuring the impact of automation tools on cyber red-teaming workflows

Example backgrounds

  • Penetration tester with 1years experience; has designed CTF challenges or cyber ranges; strong Python skills; interested in AI safety
  • Content engineer at a cybersecurity training platform; experienced in building vulnerable machines CTF challenges and automated deployment infrastructure
  • Security researcher with experience in vulnerability research or bug bounties; familiar with penetration testing frameworks and reverse engineering tools; has communicated findings to mixed audiences

Core requirements

  • This is afull timerole.
  • You should be able to join us for at least 24 months.
  • You should be able work from our office in London (Whitehall) forseveral days eachweek but we provide flexibility for remote work.
  • We would like candidates to be able to start in Q2 2026

What We Offer

Impact youcouldnthave anywhere else

  • Incredibly talented mission-drivenand supportive colleagues.
  • Direct influence on how frontier AI is governed and deployed globally.
  • Work with the Prime Ministers AI Advisor and leading AI companies.
  • Opportunity to shape the first & best-resourced public-interest research team focused on AI security.

Resources & access

  • Pre-release access to multiple frontier models and ample compute.
  • Extensive operational support so you can focus on research and ship quickly.
  • Work with experts across national security policy AIresearchand adjacent sciences.

Growth & autonomy

  • Ifyouretalented and drivenyoullown important problems early.
  • 5 days off learning and development annual stipends for learning and development and funding for conferences and external collaborations.
  • Freedom to pursue research bets without product pressure.
  • Opportunities to publish and collaborate externally.

Life & family*

  • Modern central London office (cafes food court gym) oroptionto work in similar government offices in Birmingham Cardiff Darlington EdinburghSalfordor Bristol.
  • Hybrid working flexibility for occasional remote work abroad and stipends for work-from-home equipment.
  • At least 25 days annual leave 8 public holidays extra team-widebreaksand 3 days off for volunteering.
  • Generous paid parental leave (36 weeks of UK statutory leave shared between parents 3 extra paid weeks option foradditionalunpaid time).
  • On top of your salary we contribute 28.97% of your base salary to your pension.
  • Discounts and benefits for cycling to work donations and retail/gyms.

*These benefits apply to direct employees. Benefits may differ for individuals joining through other employment arrangements such as secondments.

Salary

Annual salary is benchmarked to role scope and relevant experience. Most offers land between 65000 and 145000 made up of a base salary plus a technical allowance (take-home salary base technical allowance). Anadditional28.97% employer pension contribution is paid on the base salary.

This role sits outside of theDDaT pay frameworkgiven the scope of this role requires in depth technicalexpertisein frontier AI safetyrobustnessand advanced AI architectures.

The full range of salaries are available below:

  • Level 3:(Base35720 Technical Allowance)
  • Level 4:(Base42495 Technical Allowance)
  • Level 5:(Base55805 Technical Allowance)
  • Level 6:(Base68770 Technical Allowance)
  • Level 7:145000(Base68770 Technical Allowance76230)

Selection Process

In accordance withtheCivil Service Commissionrules the following listcontainsall selection criteria for the interview process.

The interview process may vary candidate tocandidatehowever you should expect a typical process to include some technicalproficiencytests discussions with a cross-section of our team at AISI (including non-technical staff) conversations with your team lead. The process will culminate in a conversation with members of the senior team here at AISI.

Candidates should expect to go throughsome orallofthe following stages once an application has beensubmitted:

  • Initial interview
  • Technical take home test
  • Second interview and review oftake hometest
  • Third interview
  • Final interview with members of the senior team

Additional Information

Use of AI in Applications

Artificial Intelligence can be a useful tool to support your application however all examples and statements provided must be truthful factually accurate and taken directly from your own experience. Where plagiarism has been identified (presenting the ideas and experiences of others or generated by artificial intelligence as your own) applications may be withdrawn and internal candidates may be subject to disciplinary action. Please see ourcandidate guidancefor more information on appropriate and inappropriate use.

Internal Fraud Database

The Internal Fraud function of the Fraud Error Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud or who would have been dismissed had they not resigned. The Cabinet Office receives the details from participating government organisations of civil servants who have been dismissed or who would have been dismissed had they not resigned for internal instances such as this civil servants are then banned for 5 years from further employment in the civil service. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisations. DLUHC then carry out the pre employment checks so as to detect instances where known fraudsters are attempting to reapply for roles in the civil this way the policy is ensured and the repetition of internal fraud is prevented. For more information please see -Internal Fraud Register.

Security

Successful candidates must undergo a criminal record check and getbaseline personnel security standard (BPSS)clearancebefore they can be appointed. Additionally there is a strong preference for eligibility forcounter-terrorist check (CTC)clearance. Some roles may require higher levels of clearance and we will state this by exception in the job advertisement.See our vetting charter here.

Nationality requirements

We may be able to offer roles to applicant from any nationality or background. As such we encourage you to apply even if you do not meet the standard nationality requirements (opens in a new window).

Diversity and Inclusion

The Civil Service is committed to attract retain and invest in talent wherever it is found. To learn more please see theCivil Service People Plan (opens in a new window)and theCivil Service Diversity and Inclusion Strategy (opens in a new window).

Required Experience:

IC

About the AI Security InstituteThe AI Security Institute is the worlds largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. Were in the heart of the UK government with direct lines to No. 10 (the Prime Ministers office) and we work wit...
View more view more

Key Skills

  • ASP.NET
  • Health Education
  • Fashion Designing
  • Fiber
  • Investigation