The AI Security Institute is the world's largest team in government dedicated to understanding AI capabilities and risks.
Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI, and develop and test risk mitigations. We focus on risks with security implications, including the potential of AI to assist with the development of chemical and biological weapons, its use in carrying out cyber-attacks and enabling crimes such as fraud, and the possibility of loss of control.
The risks from AI are not sci-fi; they are urgent. By combining the agility of a tech start-up with the expertise and mission-driven focus of government, we're building a unique and innovative organisation to prevent AI's harms from impeding its potential.
About the Team
As AI systems become more advanced, the potential for misuse of their cyber capabilities may pose a threat to the security of organisations and individuals. Cyber capabilities also form common bottlenecks in scenarios across other AI risk areas, such as harmful outcomes from biological and chemical capabilities and from autonomous systems. One approach to better understanding these risks is to conduct robust empirical tests of AI systems, so we can better understand how capable they currently are at performing cyber security tasks.
The AI Security Institute's Cyber Evaluations Team is developing first-of-its-kind, government-run infrastructure to benchmark the progress of advanced AI capabilities in the domain of cyber security. Our goal is to carry out and publish scientific research supporting a global effort to understand the risks and improve the safety of advanced AI systems. Our current focus is on building difficult cyber tasks against which we can measure the performance of AI agents.
We are building a cross-functional team of cybersecurity researchers, machine learning researchers, research engineers, and infrastructure engineers to help us create new kinds of capability and safety evaluations, and to scale up our capacity to evaluate frontier AI systems as they are released.
We are also open to hiring technical generalists with a background spanning many of these areas, as well as threat intelligence experts with a focus on researching novel cyber security risks from advanced AI systems.
RESPONSIBILITIES
As a Cyber Security Researcher at AISI, your role will range from helping design our overall research strategy and threat model, to working with research and infrastructure engineers to build environments and challenges against which to benchmark the capabilities of AI systems. You may also be involved in coordinating teams of internal and external cyber security experts for open-ended probing exercises that explore the capabilities of AI systems, or in exploring the interactions between narrow cyber automation tools and general-purpose AI systems.
Your day-to-day responsibilities could include:
PERSON SPECIFICATION
You will need experience in at least one of the following areas:
This role might be a great fit if:
Core requirements
Salary & Benefits
We are hiring individuals at all ranges of seniority and experience within this research unit, and this advert allows you to apply for any of the roles within this range. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries is available below; salaries comprise a base salary and a technical allowance, plus additional benefits as detailed on this page.
This role sits outside of the DDaT pay framework, given that the scope of this role requires in-depth technical expertise in frontier AI safety, robustness, and advanced AI architectures.
Government Digital and Data Profession Capability Framework
There are a range of pension options available which can be found through the Civil Service website.
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations. In such instances, civil servants are banned for 5 years from further employment in the civil service. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way the policy is enforced and the repetition of internal fraud is prevented. For more information, please see the Internal Fraud Register.
We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements (opens in a new window).
Full Time