Snapshot
Artificial Intelligence could be one of humanity's most useful inventions. At Google DeepMind, we're a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.
About Us
The Agentic Red Team is a specialized, high-velocity unit within Google DeepMind Security. Our mission is to close the "Agentic Launch Gap": the critical window where novel AI capabilities outpace traditional security reviews. Unlike traditional red teams, we operate with extreme agility, embedding directly with product teams as both a consulting partner and an exploitation arm. We rely on Google Core for foundational, system-level protections, allowing us to focus exclusively on model- and agent-layer risks. Through rapid-response security engineering and the development of Auto Red Teaming techniques, we turn immediate findings into robust defensive strategies.
The Role
As the Security Lead for the Agentic Red Team, you will direct a specialized unit of AI Researchers and Offensive Security Engineers focused on adversarial AI and agentic exploitation. Operating as a technical player-coach, you will architect complex, multi-turn attack scenarios while managing cross-functional partnerships with Product Area leads and Google Security to influence launch criteria. You will drive the evolution of AI safety by bridging manual exploration with automated regression pipelines, ensuring non-deterministic risks are identified, measured, and mitigated before deployment.
Key responsibilities:
- Direct Agile Offensive Security: Lead a specialized red team focused on rapid, high-impact engagements targeting production-level AI models and systems.
- Perform Complex AI Exploitation: Develop and carry out advanced attack sequences that target vulnerabilities unique to GenAI, such as escalating privileges through tool usage, poisoning data, and executing multi-turn prompt injections.
- Design Automated Validation Systems: Collaborate with Google teams to engineer Auto Red Teaming solutions that transform manual vulnerability discoveries into robust, automated regression testing frameworks.
- Engineer Technical Countermeasures: Create innovative defense-in-depth frameworks and control systems to mitigate agentic logic errors and non-deterministic model behaviors.
- Manage Threat Intelligence Assets: Develop and oversee an evolving inventory of exploit primitives and agent-specific attack patterns used to establish release criteria and evaluate model security benchmarks.
- Establish Security Scope: Collaborate with Google on conventional infrastructure protection, allowing the team to concentrate solely on agentic logic, model inference, and AI-centric exploits.
About You
To set you up for success as the Security Lead at Google DeepMind, we look for the following skills and experience:
- Bachelor's degree in Computer Science, Information Security, or equivalent practical experience.
- Experience in Red Teaming, Offensive Security, or Adversarial Machine Learning.
- Deep technical understanding of LLM architectures and agentic workflows (e.g., chain-of-thought reasoning, tool usage).
- Proven ability to work in a consulting capacity with product teams, driving security improvements in fast-paced release cycles.
- Experience managing or technically leading small, high-performance engineering teams.
In addition, the following would be an advantage:
- Hands-on experience developing exploits for GenAI models (e.g., prompt injection, adversarial examples, training data extraction).
- Familiarity with AI safety benchmarks and evaluation frameworks.
- Experience writing code (Python, Go, or C) to build automated security tools or fuzzers.
- Ability to communicate complex probabilistic risks effectively to executive stakeholders and engineering teams.
The US base salary range for this full-time position is $248,000 - $349,000 + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.
At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives, and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy or related condition (including breastfeeding), or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.