The AI Security Institute is the world's largest government team dedicated to understanding AI capabilities and risks.
Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI, and develop and test risk mitigations. We focus on risks with security implications, including the potential of AI to assist with the development of chemical and biological weapons, its use in carrying out cyberattacks, its ability to enable crimes such as fraud, and the possibility of loss of control.
The risks from AI are not sci-fi; they are urgent. By combining the agility of a tech startup with the expertise and mission-driven focus of government, we're building a unique and innovative organisation to prevent AI's harms from impeding its potential.
Your team will initially include 3–4 research scientists, including researchers with experience in the control agenda and/or experience at frontier labs. Your responsibilities will encompass setting the research direction and agenda, ambitiously advancing the state of control research, and managing and developing an exceptional team. The ultimate goal is to make substantial improvements to the robustness of control protocols across major labs, particularly as we progress towards AGI.
Research partnerships with frontier AI labs will be a core part of your role. This will include collaborating on promising research directions (e.g. more realistic empirical experiments in settings that closely mimic lab infrastructure) as well as supporting the development of control-based safety cases.
The role will involve close collaboration with our research directors, including Geoffrey Irving and Yarin Gal. From a compute perspective, you will have excellent access to resources from both our research platform team and the UK's Isambard supercomputer (5,000 H100s).
You may be a good fit if you have some of the following skills, experience, and attitudes. Please note that you don't need to meet all of these criteria, and if you're unsure, we encourage you to apply.
We are primarily hiring individuals at the more senior end of the following scale (L5–L7). The full range of salaries is available below.
This role sits outside of the DDaT pay framework, given that its scope requires in-depth technical expertise in frontier AI safety, robustness, and advanced AI architectures.
There are a range of pension options available which can be found through the Civil Service website.
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations. In such instances, civil servants are banned for 5 years from further employment in the civil service. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information, please see the Internal Fraud Register.
We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements (opens in a new window).
Full Time