The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We're in the heart of the UK government, with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally.
We're here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility, and international influence, this is the best place to shape both AI development and government action.
Within the Cyber & Autonomous Systems Team (CAST) at AISI, the Propensity project studies unprompted or unintended model behaviour, particularly behaviour that is potentially dangerous: the propensity of a model to cause harm. Our current project studies the effect sizes of environmental factors on these propensities, e.g. whether models are consistently more willing to take harmful actions when their existence is threatened. We build on previous work in this field by scaling to a range of different scenarios and variations, looking particularly for effects that are consistent throughout.
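As a purely illustrative sketch of what estimating one such effect size might look like (the function, scenario labels, and numbers below are assumptions for illustration, not AISI's actual pipeline or data), one could compare harmful-action rates between a baseline scenario and a variant of it, with a bootstrap confidence interval:

```python
# Hypothetical sketch: effect of one environmental factor (e.g. a scenario
# variant that threatens the model's continued existence) on the rate at
# which a model takes a harmful action. Outcomes are illustrative, not real.

import random

def effect_size(baseline: list[int], variant: list[int],
                n_boot: int = 10_000, seed: int = 0) -> tuple[float, float, float]:
    """Difference in harmful-action rates (variant - baseline) with a 95% bootstrap CI.

    Each list holds one 0/1 outcome per scenario rollout:
    1 if the model took the harmful action, 0 otherwise.
    """
    rng = random.Random(seed)
    point = sum(variant) / len(variant) - sum(baseline) / len(baseline)
    diffs = []
    for _ in range(n_boot):
        # Resample each condition with replacement and recompute the difference.
        b = [rng.choice(baseline) for _ in baseline]
        v = [rng.choice(variant) for _ in variant]
        diffs.append(sum(v) / len(v) - sum(b) / len(b))
    diffs.sort()
    return point, diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot)]

# Made-up outcomes: 4/40 harmful actions at baseline vs. 11/40 in the variant.
baseline = [1] * 4 + [0] * 36
variant = [1] * 11 + [0] * 29
point, lo, hi = effect_size(baseline, variant)
print(f"effect size: {point:+.2f} (95% CI {lo:+.2f} to {hi:+.2f})")
```

In practice, a comparison like this would be repeated across many scenarios and variations, with the consistency of the effect judged from the distribution of effect sizes rather than any single estimate.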
Understanding model propensities is the key missing pillar in our overall picture of risk from autonomous AI. We already know that models have sufficient knowledge and ability to assist criminal users in conducting cyberattacks and causing significant harm. If they can also spontaneously develop the inclination to cause harm unprompted, the nature and scale of the threat are transformed. To justify a response sufficient to address this unprecedented threat, we need empirical evidence with strong scientific credibility.
In CAST within AISI, through our relationships with the rest of the UK government and national security apparatus (and their relationships with international counterparts), we have a unique ability to understand what they need and to get it in front of them.
Example research science questions that we've needed to answer so far:
What we are looking for
The Propensity project team currently consists of one research scientist and two research engineers. We're looking to add a second research scientist to help with challenges like those above through discussion, written plans and designs, and writing or reviewing code that implements those designs. We expect that the strength of our answers to questions like these is likely to be a key factor in the strength of the conclusions we can draw, the claims we can back, and the accuracy of the predictions on which we rest the credibility of the work. You would add the capacity we need to give our answers the next layer of depth and sophistication.
The ideal candidate will have the following skills:
We expect these skills will be held by people with:
What We Offer
Impact you couldn't have anywhere else
Resources & access
Growth & autonomy
Life & family*
*These benefits apply to direct employees. Benefits may differ for individuals joining through other employment arrangements such as secondments.
Salary
Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000, made up of a base salary plus a technical allowance (take-home salary = base salary + technical allowance). An additional 28.97% employer pension contribution is paid on the base salary.
This role sits outside of the DDaT pay framework, given that the scope of this role requires in-depth technical expertise in frontier AI safety, robustness, and advanced AI architectures.
The full range of salaries is as follows:
Selection process
In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.
The interview process may vary from candidate to candidate; however, you should expect a typical process to include some technical proficiency tests, discussions with a cross-section of our team at AISI (including non-technical staff), and conversations with your team lead. The process will culminate in a conversation with members of the senior leadership team here at AISI.
Candidates should expect to go through some or all of the following stages once an application has been submitted:
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations for civil servants who have been dismissed, or who would have been dismissed had they not resigned, for internal fraud. In instances such as this, civil servants are banned for 5 years from further employment in the civil service. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks so as to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information please see the Internal Fraud Register.
We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements.
Required Experience:
IC