Principal Platform Product Security Engineer
Job Location

London - UK

Salary

GBP 125000 - 135000

Vacancy

1 Vacancy

Job Description

About the AI Security Institute

The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We're in the heart of the UK government, with direct lines to No. 10, and we work with frontier developers and governments globally.

We're here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility, and international influence, this is the best place to shape both AI development and government action.

About the Team:

Security Engineering at the AI Security Institute (AISI) exists to help our researchers move fast, safely. We are founding the Security Engineering team in a largely greenfield cloud environment, and we treat security as a measurable, researcher-centric product: secure-by-design platforms, automated governance, and intelligence-led detection that protect our people, partners, models, and data. We work shoulder to shoulder with research units and core technology teams, and we optimise for enablement over gatekeeping, proportionate controls, low ego, and high ownership.

What you might work on:

  • Help design and ship paved roads and secure defaults across our platform so researchers can build quickly and safely
  • Build provenance and integrity into the software supply chain (signing, attestation, artefact verification, reproducibility)
  • Support strengthened identity, segmentation, secrets, and key management to create a defensible foundation for evaluations at scale
  • Develop automated, evidence-driven assurance mapped to relevant standards, reducing audit toil and improving signal
  • Create detections and response playbooks tailored to model evaluations and research workflows, and run exercises to validate them
  • Threat model new evaluation pipelines with research and core technology teams, fixing classes of issues at the platform layer
  • Assess third-party services and hardware/software supply chains; introduce lightweight controls that raise the bar
  • Contribute to open standards and open source, and share lessons with the broader community where appropriate
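To make the supply-chain integrity work above concrete, here is a minimal sketch of artefact verification: record a cryptographic digest for each artefact, and refuse promotion when the recorded and actual digests diverge. The manifest format and function names are illustrative assumptions, not AISI tooling — real pipelines of this kind typically layer signing and attestation (e.g. Sigstore) on top of this basic check.

```python
import hashlib
import json
from pathlib import Path


def record_digest(artefact: Path, manifest: Path) -> str:
    """Record the artefact's SHA-256 digest in a JSON manifest."""
    digest = hashlib.sha256(artefact.read_bytes()).hexdigest()
    entries = json.loads(manifest.read_text()) if manifest.exists() else {}
    entries[artefact.name] = digest
    manifest.write_text(json.dumps(entries, indent=2))
    return digest


def verify_digest(artefact: Path, manifest: Path) -> bool:
    """True only if the artefact still matches its recorded digest."""
    entries = json.loads(manifest.read_text())
    recorded = entries.get(artefact.name)
    actual = hashlib.sha256(artefact.read_bytes()).hexdigest()
    return recorded is not None and recorded == actual
```

A promotion gate would call `verify_digest` before moving an artefact between environments, so that any post-build tampering fails closed.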

If you want to build security that accelerates frontier-scale AI safety research, and to see your work land in production quickly, this is a good place to do it.

Role Summary:

Act as AISI's technical security lead for cloud and delivery infrastructure. You will enable secure-by-default platform patterns, provide reusable controls and guardrails, and partner with engineers to embed safe practices across the development lifecycle. You'll build influence through enablement, not enforcement. You will extend these patterns to AI/ML workloads, including secure handling of high-capability model weights, GPU estates, data/feature pipelines, evaluation/release gates, and inference services.

Responsibilities:

  • Define and maintain secure-by-default IaC modules, bootstrap templates, and reference architectures
  • Provide consulting and coaching to platform and product teams to support secure delivery
  • Build tooling for identity, secrets, environment isolation, and pipeline hardening
  • Develop and maintain a baseline cloud control set (e.g. SCPs, logging, tagging)
  • Track and improve cloud posture with automated feedback loops
  • Lead or support post-incident reviews and design for resilience
  • Align technical controls with DSIT central governance and shared responsibility boundaries
  • Provide secure patterns for AI/ML training/fine-tuning and inference on AWS (e.g. EKS/ECS/SageMaker), including network isolation, egress controls, data locality, and private endpoints
  • Implement custody controls for model weights and sensitive datasets (encryption with KMS/HSM, least-privilege access paths, just-in-time/break-glass access, tamper-evident logging)
  • Govern GPU/accelerator compute (quotas, tenancy/isolation, container image hardening, runtime policy, driver/AMI baselines)
  • Secure the AI supply chain: signed model/dataset artefacts, provenance/attestation (e.g. Sigstore/SLSA), model registries, and promotion gates tied to evaluation evidence
  • Establish paved paths for safe use of third-party model APIs (key management, egress allowlists, privacy-preserving logging, rate limiting, abuse and data-exfiltration protection)
  • Embed safety guardrails and patterns for RAG and prompting (context isolation/sanitisation, prompt-injection mitigations, output/content policies, human-in-the-loop hooks)
  • Deliver observability for AI surfaces (misuse/abuse telemetry, secrets/PII leak detection, anomalous output monitoring), integrated with incident response
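One of the paved paths named above — egress allowlisting for third-party model APIs — can be illustrated with a minimal policy check. In practice this control would live in network policy or an egress proxy rather than application code; the hostnames, set name, and function name below are hypothetical placeholders, not AISI-approved endpoints.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: placeholder hostnames, not real approved endpoints.
APPROVED_API_HOSTS = {"api.provider-a.example", "api.provider-b.example"}


def egress_allowed(url: str) -> bool:
    """Permit an outbound request only to an approved API host over HTTPS."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False  # plaintext egress is never allowed
    host = (parsed.hostname or "").lower()
    return host in APPROVED_API_HOSTS
```

Anything not explicitly approved fails closed, including plaintext HTTP to an otherwise approved host — the fail-closed default is what makes an allowlist meaningfully different from a blocklist.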

Profile requirements:

  • Deep AWS experience, especially with security, identity, networking, and org-level services
  • Strong infrastructure-as-code skills (Terraform, CDK, etc.) and CI/CD pipeline knowledge
  • Excellent technical judgment and stakeholder communication
  • Experience building influence in cross-functional environments
  • Practical understanding of AI/ML platform surfaces and risks (e.g. model-weight security, GPU isolation, eval/release gating, prompt-injection/data-exfiltration risks)
  • Desirable: exposure to ML registries (e.g. MLflow/SageMaker), vector stores, and integrating ML artefacts into CI/CD

Key Competencies:

  • Deep cloud security knowledge (AWS)
  • Ability to design reusable IaC components
  • Threat modelling, secure defaults, and paved paths
  • Collaboration across platform teams
  • Securing AI/ML workloads and artefacts
  • AI-specific threat mitigation (model supply chain, prompt injection, misuse/abuse telemetry)

What We Offer

Impact you couldn't have anywhere else

  • Incredibly talented, mission-driven, and supportive colleagues.
  • Direct influence on how frontier AI is governed and deployed globally.
  • Work with the Prime Minister's AI Advisor and leading AI companies.
  • Opportunity to shape the first and best-resourced public-interest research team focused on AI security.

Resources & access

  • Pre-release access to multiple frontier models and ample compute.
  • Extensive operational support so you can focus on research and ship quickly.
  • Work with experts across national security, policy, AI research, and adjacent sciences.

Growth & autonomy

  • If you're talented and driven, you'll own important problems early.
  • 5 days off for learning and development, annual stipends for learning and development, and funding for conferences and external collaborations.
  • Freedom to pursue research bets without product pressure.
  • Opportunities to publish and collaborate externally.

Life & family

  • Modern central London office (cafes, food court, gym), or the option to work in similar government offices in Birmingham, Cardiff, Darlington, Edinburgh, Salford, or Bristol.
  • Hybrid working, flexibility for occasional remote work abroad, and stipends for work-from-home equipment.
  • At least 25 days' annual leave, 8 public holidays, extra team-wide breaks, and 3 days off for volunteering.
  • Generous paid parental leave (36 weeks of UK statutory leave shared between parents, 3 extra paid weeks, and the option for additional unpaid time).
  • On top of your salary, we contribute 28.97% of your base salary to your pension.
  • Discounts and benefits for cycling to work, donations, and retail/gyms.

Salary

Annual salary is benchmarked to role scope and relevant experience. Most offers land between 65000 and 145000 (base plus technical allowance), with a 28.97% employer pension contribution and other benefits on top.

This role sits outside of the DDaT pay framework, as its scope requires in-depth technical expertise in frontier AI safety, robustness, and advanced AI architectures.

The full range of salaries is as follows:

  • Level 3: (Base 35720 Technical Allowance )
  • Level 4: (Base 42495 Technical Allowance )
  • Level 5: (Base 55805 Technical Allowance )
  • Level 6: (Base 68770 Technical Allowance )
  • Level 7: 145000 (Base 68770 Technical Allowance 76230)

Additional Information

Internal Fraud Database

The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations. In instances such as this, civil servants are banned for 5 years from further employment in the civil service. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks so as to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way the policy is enforced and the repetition of internal fraud is prevented. For more information please see the Internal Fraud Register.

Security

Successful candidates must undergo a criminal record check and obtain baseline personnel security standard (BPSS) clearance before they can be appointed. Additionally, there is a strong preference for eligibility for counter-terrorist check (CTC) clearance. Some roles may require higher levels of clearance, and we will state this by exception in the job advertisement. See our vetting charter here.


Required Experience:

Staff IC

Employment Type

Full Time
