The AI Security Institute is the largest team in a government anywhere in the world dedicated to understanding AI capabilities and risks.
Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI, and develop and test risk mitigations. We focus on risks with security implications, including the potential of AI to assist with the development of chemical and biological weapons, how it can be used to carry out cyberattacks or enable crimes such as fraud, and the possibility of loss of control.
The risks from AI are not sci-fi; they are urgent. By combining the agility of a tech start-up with the expertise and mission-driven focus of government, we're building a unique and innovative organisation to prevent AI's harms from impeding its potential.
The AI Security Institute research unit is looking for exceptionally motivated and talented people to join its Safeguard Analysis Team.
Interventions that secure a system from abuse by bad actors will grow in importance as AI systems become more advanced and integrated into society. The AI Security Institute's Safeguard Analysis Team researches such interventions, which it refers to as "safeguards": evaluating the protections used to secure current frontier AI systems, and considering what measures could and should be used to secure such systems in the future.
The Safeguard Analysis Team takes a broad view of security threats and interventions. It's keen to hire researchers with expertise in developing and analysing attacks and protections for systems based on large language models, but it is also keen to hire security researchers who have historically worked outside of AI, in fields such as (non-exhaustively) computer security, information security, web technology policy, and hardware security. Diverse perspectives and research interests are welcomed.
The Team seeks people with skill sets leaning in the direction of either or both of Research Scientist and Research Engineer, recognising that some technical staff may prefer work that spans or alternates between engineering and research responsibilities. The Team's priorities include research-oriented responsibilities, like assessing the threats to frontier systems and developing novel attacks, and engineering-oriented ones, such as building infrastructure for running evaluations.
In this role, you'll receive mentorship and coaching from your manager and the technical leads on your team. You'll also regularly interact with world-famous researchers and other incredible staff, including alumni from Anthropic, DeepMind and OpenAI, and ML professors from Oxford and Cambridge.
In addition to junior roles, Senior, Staff and Principal Research Engineer positions are available for candidates with the required seniority and experience.
You may be a good fit if you have some of the following skills, experience and attitudes:
We are hiring individuals at all ranges of seniority and experience within this research unit, and this advert allows you to apply for any of the roles within this range. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries is available below; salaries comprise a base salary and a technical allowance, plus additional benefits as detailed on this page.
There is a range of pension options available, which can be found through the Civil Service website.
This role sits outside of the DDaT pay framework, given that its scope requires in-depth technical expertise in frontier AI safety, robustness and advanced AI architectures.
In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.
This job advert encompasses a range of possible research and engineering roles within the Safeguard Analysis Team. The required experiences listed below should be interpreted as examples of the expertise we're looking for, rather than a list of everything we expect to find in one applicant:
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations. In such instances, civil servants are banned for 5 years from further employment in the Civil Service. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the Civil Service. In this way the policy is enforced and the repetition of internal fraud is prevented. For more information, please see the Internal Fraud Register.
We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements (opens in a new window).
Full Time