The AI Security Institute is the world's largest government team dedicated to understanding AI capabilities and risks.
Our mission is to equip governments with an empirical understanding of the safety of advanced AI systems. We conduct research to understand the capabilities and impacts of advanced AI, and we develop and test risk mitigations. We focus on risks with security implications, including the potential of AI to assist with the development of chemical and biological weapons, how it can be used to carry out cyberattacks or enable crimes such as fraud, and the possibility of loss of control.
The risks from AI are not sci-fi; they are urgent. By combining the agility of a tech startup with the expertise and mission-driven focus of government, we're building a unique and innovative organisation to prevent AI's harms from impeding its potential.
About the Team
The Post-Training Team is dedicated to optimising AI systems to achieve state-of-the-art performance across the various risk domains that AISI focuses on. This is accomplished through a combination of scaffolding, prompting, and supervised and RL fine-tuning of the AI models which AISI has access to.
One of the main focuses of our evaluation teams is estimating how new models might affect the capabilities of AI systems in specific domains. To improve confidence in our assessments, we put significant effort into enhancing the models' performance in the domains of interest.
For many of our evaluations, this means taking a model we have been given access to and embedding it as part of a wider AI system: for example, in our cybersecurity evaluations we provide models with access to tools for interacting with the underlying operating system and repeatedly call the models to act in that environment. In evaluations which do not require agentic capabilities, we may use elicitation techniques such as fine-tuning and prompt engineering to ensure we assess the model at its full capacity, so that our assessment does not miss capabilities present in the model.
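As a rough illustration (a minimal sketch, not AISI's actual tooling), an agent loop of this kind repeatedly queries a model and executes the tool calls it emits. Here `call_model` is a hypothetical placeholder for the API of the model under evaluation, and the shell tool is deliberately simplified:

```python
import subprocess

def call_model(messages: list[dict]) -> dict:
    """Hypothetical placeholder for the model under evaluation.

    Assumed to return {"content": str, "tool_call": str | None}, where
    "tool_call" is a shell command the model wants executed, or None
    when the model is giving its final answer.
    """
    raise NotImplementedError

def run_shell(command: str) -> str:
    """The tool exposed to the agent: run a command, return its output."""
    result = subprocess.run(command, shell=True, capture_output=True,
                            text=True, timeout=60)
    return result.stdout + result.stderr

def agent_loop(task: str, max_steps: int = 20) -> str:
    """Repeatedly call the model, acting in the environment until it answers."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(messages)
        messages.append({"role": "assistant", "content": reply["content"]})
        if reply["tool_call"] is None:   # no tool call: final answer
            return reply["content"]
        observation = run_shell(reply["tool_call"])   # act in the environment
        messages.append({"role": "tool", "content": observation})
    return "Step limit reached without a final answer."
```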
About the Role
As a member of this team, you will use cutting-edge machine learning techniques to improve model performance in our domains of interest. The work is split across two subteams: Agents and Fine-tuning. Our Agents subteam focuses on developing the LLM tools and scaffolding needed to create highly capable LLM-based agents, while our Fine-tuning subteam builds out fine-tuning pipelines to improve models in our domains of interest.
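To give a flavour of the fine-tuning side, here is a minimal sketch of a single supervised fine-tuning step, assuming a Hugging Face-style causal LM whose forward pass computes a cross-entropy loss when `labels` are supplied (label positions set to -100 are ignored). This is an illustration under those assumptions, not the team's actual pipeline:

```python
def sft_step(model, batch, optimizer) -> float:
    """One supervised fine-tuning step on a causal language model.

    `batch` holds `input_ids` (prompt + target tokens), an
    `attention_mask`, and `labels` (a copy of `input_ids` with prompt
    positions set to -100 so loss is computed only on the target).
    """
    model.train()
    outputs = model(input_ids=batch["input_ids"],
                    attention_mask=batch["attention_mask"],
                    labels=batch["labels"])
    outputs.loss.backward()   # next-token cross-entropy on target tokens
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()
```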
The Post-Training Team is seeking strong Research Scientists to join the team. The priorities of the team include both research-oriented tasks, such as designing new techniques for scaling inference-time computation or developing methodologies for in-depth analysis of agent behaviour, and engineering-oriented tasks, like implementing new tools for our LLM agents or creating pipelines for supporting and fine-tuning large open-source models. We recognise that some technical staff may prefer to span or alternate between engineering and research responsibilities, and this versatility is something we actively look for in our hires.
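As one concrete example of scaling inference-time computation, best-of-n sampling trades extra model calls for capability without touching the model's weights. In this sketch, `sample` and `score` are hypothetical stand-ins for a model call and a task-specific grader (e.g. a reward model or unit tests):

```python
from typing import Callable

def best_of_n(prompt: str,
              sample: Callable[[str], str],
              score: Callable[[str], float],
              n: int = 16) -> str:
    """Draw n candidate completions and keep the highest-scoring one.

    Increasing n spends more inference-time compute for (often) better
    task performance, with no change to the underlying model.
    """
    candidates = [sample(prompt) for _ in range(n)]
    return max(candidates, key=score)
```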
You'll receive mentorship and coaching from your manager and the technical leads on your team, and regularly interact with world-class researchers and other exceptional staff, including alumni from Anthropic, DeepMind, and OpenAI.
In addition to junior roles, we offer Senior, Staff, and Principal Research Engineer positions for candidates with the requisite seniority and experience.
Person Specification
You may be a good fit if you have some of the following skills, experience, and attitudes:
Particularly strong candidates also have the following experience:
Salary & Benefits
We are hiring individuals at all ranges of seniority and experience within this research unit, and this advert allows you to apply for any of the roles within this range. Your dedicated talent partner will work with you as you move through our assessment process to explain our internal benchmarking process. The full range of salaries is available below; salaries comprise a base salary and a technical allowance, plus additional benefits as detailed on this page.
There are a range of pension options available which can be found through the Civil Service website.
This role sits outside of the DDaT pay framework, given that the scope of this role requires in-depth technical expertise in frontier AI safety, robustness, and advanced AI architectures.
Selection Process
In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.
Required Experience
We select based on skills and experience in the following areas:
Desired Experience
We may additionally factor in experience with any of the areas that our workstreams specialise in:
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations. In such instances, civil servants are banned for 5 years from further employment in the civil service. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks so as to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way the policy is upheld and the repetition of internal fraud is prevented. For more information, please see the Internal Fraud Register.
We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements.
Full Time