The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We're in the heart of the UK government, with direct lines to No. 10, and we work with frontier developers and governments globally.
We're here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.
About the Team:
Security Engineering at the AI Security Institute (AISI) exists to help our researchers move fast, safely. We are founding the Security Engineering team in a largely greenfield cloud environment, and we treat security as a measurable, researcher-centric product.
We build secure-by-design platforms, automated governance and intelligence-led detection that protect our people, partners, models and data. We work shoulder to shoulder with research units and core technology teams, and we optimise for enablement over gatekeeping, proportionate controls, low ego and high ownership.
What you might work on:
Help design and ship paved roads and secure defaults across our platform so researchers can build quickly and safely
Build provenance and integrity into the software supply chain (signing, attestation, artefact verification, reproducibility); a minimal sketch of artefact verification follows this list
Support strengthened identity, segmentation, secrets and key management to create a defensible foundation for evaluations at scale
Develop automated, evidence-driven assurance mapped to relevant standards, reducing audit toil and improving signal
Create detections and response playbooks tailored to model evaluations and research workflows, and run exercises to validate them
Threat model new evaluation pipelines with research and core technology teams, fixing classes of issues at the platform layer
Assess third-party services and hardware/software supply chains; introduce lightweight controls that raise the bar
Contribute to open standards and open source, and share lessons with the broader community where appropriate
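As a loose illustration of the supply-chain item above, the sketch below shows one way a build artefact's digest might be verified against a pinned manifest before it is admitted to an evaluation pipeline. The manifest path, its JSON schema and the digest-pinning approach are assumptions made up for this example; real deployments would typically rely on signed attestations rather than a plain allowlist, and this does not describe AISI's actual tooling.

```python
# Illustrative sketch only: verify a build artefact's SHA-256 digest against a
# pinned manifest before it is used in an evaluation pipeline. The manifest
# path and schema are assumptions, not a description of any real AISI system.
import hashlib
import json
import sys
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream the file so large artefacts do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artefact(artefact: Path, manifest: Path) -> bool:
    """Return True only if the artefact's digest matches the pinned entry."""
    pinned = json.loads(manifest.read_text())  # e.g. {"model.tar.gz": "<hex digest>"}
    expected = pinned.get(artefact.name)
    return expected is not None and expected == sha256_of(artefact)


if __name__ == "__main__":
    artefact_path, manifest_path = Path(sys.argv[1]), Path(sys.argv[2])
    if not verify_artefact(artefact_path, manifest_path):
        sys.exit(f"integrity check failed for {artefact_path.name}")
    print(f"{artefact_path.name}: digest verified")
```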
If you want to build security that accelerates frontier-scale AI safety research, and to see your work land in production quickly, this is a good place to do it.
Role Summary
Own and operationalise AISI's governance, risk and compliance (GRC) engineering practice. This role sits at the intersection of security engineering, assurance and policy, turning paper-based requirements into actionable, testable and automatable controls. You will lead the technical response to GovAssure and other regulatory requirements and ensure compliance is continuous and evidence-driven. You will also extend GRC disciplines to frontier AI systems, integrating model lifecycle artefacts, evaluations and release gates into the control and evidence pipeline.
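To make "actionable, testable and automatable controls" concrete, here is a minimal compliance-as-code sketch that evaluates a couple of controls against machine-readable evidence. The evidence file, its schema and the control IDs are invented for the example and do not reflect GovAssure or AISI's actual control set.

```python
# Illustrative sketch only: a tiny "compliance as code" check that evaluates
# controls against machine-readable evidence. The evidence schema, control IDs
# and descriptions are assumptions made up for this example.
import json
from dataclasses import dataclass
from typing import Callable


@dataclass
class Control:
    control_id: str                # internal ID, mapped to an external standard
    description: str
    check: Callable[[dict], bool]  # evaluates collected evidence


CONTROLS = [
    Control(
        "AC-01",
        "All privileged accounts use multi-factor authentication",
        lambda ev: all(acct["mfa_enabled"] for acct in ev["privileged_accounts"]),
    ),
    Control(
        "SC-02",
        "Production artefacts are signed before deployment",
        lambda ev: ev["unsigned_artefact_count"] == 0,
    ),
]


def run_checks(evidence_path: str) -> int:
    """Return the number of failing controls, printing one result line per control."""
    with open(evidence_path) as handle:
        evidence = json.load(handle)
    failures = 0
    for control in CONTROLS:
        passed = control.check(evidence)
        failures += 0 if passed else 1
        print(f"{control.control_id}: {'PASS' if passed else 'FAIL'} - {control.description}")
    return failures


if __name__ == "__main__":
    raise SystemExit(run_checks("evidence.json"))
```

Run in CI against evidence exported from cloud, identity and build systems, checks like this keep compliance status continuous and evidence-driven rather than a point-in-time audit exercise.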
Responsibilities:
Profile requirements:
Key Competencies
What We Offer
Impact you couldn't have anywhere else
Resources & access
Growth & autonomy
Life & family
Salary
Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000 (base plus technical allowance), with a 28.97% employer pension contribution and other benefits on top.
This role sits outside of the DDaT pay framework, given that its scope requires in-depth technical expertise in frontier AI safety, robustness and advanced AI architectures.
The full range of salaries is as follows:
The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives the details from participating government organisations of civil servants who have been dismissed, or who would have been dismissed had they not resigned, for internal fraud. In instances such as this, civil servants are then banned for 5 years from further employment in the civil service. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carry out the pre-employment checks so as to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is ensured and the repetition of internal fraud is prevented. For more information please see: Internal Fraud Register.
We may be able to offer roles to applicants of any nationality or background. As such, we encourage you to apply even if you do not meet the standard nationality requirements.
Required Experience:
Staff IC
Full Time