The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We're in the heart of the UK government, with direct lines to No. 10 (the Prime Minister's office), and we work with frontier developers and governments globally.
We're here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility, and international influence, this is the best place to shape both AI development and government action.
Interventions that secure a system from abuse by bad actors or misaligned AI systems will grow in importance as AI systems become more capable, autonomous, and integrated into society.
The Misuse Red Team is a specialised sub-team within AISI's wider Red Team. We red-team frontier AI safeguards for dangerous capabilities, research novel attack vectors, and develop advanced automated attack tooling. We share our findings with frontier AI companies (including Anthropic, OpenAI, and DeepMind), key UK officials, and other governments to inform their respective deployment, research, and policy decision-making.
We have published on several topics, including novel automated attack algorithms (Boundary Point Jailbreaking), poisoning attacks, safeguards safety cases, defending fine-tuning APIs, third-party attacks on agents, agent misuse, and pre-training data filtering. Example impact cases include advancing the benchmarking of agent misuse, identifying novel vulnerabilities and collaborating with frontier labs to mitigate them, and producing insights into the feasibility and effectiveness of attacks and defences in data poisoning and fine-tuning APIs.
We're looking for research scientists and research engineers for our misuse sub-team with expertise in developing and analysing attacks and protections for systems based on large language models, or with broader experience in frontier LLM research and development. An ideal candidate would have a strong track record of performing and publishing novel and impactful research in these or other areas of LLM research. We're looking for:
In practice, we can support staff whose work spans or alternates between research and engineering. If you have a preference, please specify it in your application.
The team is currently led by Eric Winsor and Xander Davies, advised by Geoffrey Irving and Yarin Gal. You'll work with incredible technical staff across AISI, including alumni from Anthropic, OpenAI, DeepMind, and top universities. You may also collaborate with external teams from Anthropic, OpenAI, and Gray Swan.
We are open to hires at junior, senior, staff, and principal research scientist levels.
In accordance with the Civil Service Commission rules, the following list contains all selection criteria for the interview process.
The experiences listed below should be interpreted as examples of the expertise we're looking for, as opposed to a list of everything we expect to find in one applicant:
You may be a good fit if you have:
Strong candidates may also have:
Impact you couldn't have anywhere else
Resources & access
Growth & autonomy
Life & family*
*These benefits apply to direct employees. Benefits may differ for individuals joining through other employment arrangements such as secondments.
Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000, made up of a base salary plus a technical allowance (take-home salary = base salary + technical allowance). An additional 28.97% employer pension contribution is paid on the base salary.
This role sits outside of the DDaT pay framework, given that its scope requires in-depth technical expertise in frontier AI safety, robustness, and advanced AI architectures.
The full range of salaries is available below:
The interview process may vary from candidate to candidate; however, you should expect a typical process to include technical proficiency tests, discussions with a cross-section of our team at AISI (including non-technical staff), and conversations with your team lead. The process will culminate in a conversation with members of the senior leadership team here at AISI.
Candidates should expect to go through some or all of the following stages once an application has been submitted:
Artificial Intelligence can be a useful tool to support your application; however, all examples and statements provided must be truthful, factually accurate, and taken directly from your own experience. Where plagiarism has been identified (presenting the ideas and experiences of others, or those generated by artificial intelligence, as your own), applications may be withdrawn and internal candidates may be subject to disciplinary action. Please see our candidate guidance for more information on appropriate and inappropriate use.
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives the details from participating government organisations of civil servants who have been dismissed, or who would have been dismissed had they not resigned, for internal fraud. In instances such as this, civil servants are banned for 5 years from further employment in the civil service. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is enforced and the repetition of internal fraud is prevented. For more information, please see the Internal Fraud Register.
We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements.
Required Experience:
IC (Individual Contributor)