Research Scientist Tech Lead, Contextual Security

DeepMind


Job Location:

San Francisco, CA - USA

Monthly Salary: Not Disclosed
Posted on: 14 hours ago
Vacancies: 1 Vacancy

Job Summary

Snapshot

Artificial Intelligence could be one of humanity's most useful inventions. At Google DeepMind, we're a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

About Us

Leveraging our best-in-industry auto-red teaming capabilities, our mission is to unblock the strongest and most helpful agentic GenAI capabilities in the real world. Doing so will make Gemini and other GenAI models as capable as highly experienced privacy and security engineers in handling sensitive user data and permissions. We have already delivered a substantial improvement in prompt injection resilience in Gemini 3.0, and we are continuing to partner closely with other GDM teams to bring security and privacy into all aspects of Gemini post-training.

The Role

As a Technical Lead and Manager for Contextual Security in the GDM Security & Privacy Research team, you will be responsible for:

  • Leading the existing team to address key challenges in contextual security and its connections to prompt injection, from driving the fundamental research to delivering solutions for key products.
  • Managing a team of researchers with extensive backgrounds in security and machine learning, and growing the team to keep pace with the rapidly evolving space of contextual security problems.
  • Identifying unsolved, impactful privacy and security problems in generative models through auto-red teaming, with priorities guided by securing critical products and product features.
  • Building post-training data and tools hypothesised to improve model capabilities in the problem areas, testing the hypotheses through evaluations and auto-red teaming, and contributing successful solutions into Gemini and other models.
  • Amplifying the impact by generalizing solutions into reusable libraries and frameworks for protecting agents and models across Google, and by sharing knowledge through publications, open source, and education.

About You

In order to set you up for success as a Research Scientist at Google DeepMind, we look for the following skills and experience:

  • PhD in Computer Science or a related quantitative field, OR 5 years of relevant experience.
  • Demonstrated experience driving complex research projects in AI security, privacy, or safety.
  • Experience managing teams of 5-10 individual contributors.

In addition, the following would be an advantage:

  • Demonstrated experience driving complex projects to landing in production or to adoption in open source.
  • Demonstrated experience adapting research outputs into impactful model improvements in a rapidly shifting landscape, with a strong sense of ownership.
  • Research experience and publications in ML security, privacy, safety, or alignment.
  • Experience working on contextual security or prompt injection.

The US base salary range for this full-time position is between $248,000 USD and $349,000 USD, plus bonus, equity, and benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives, and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy or related condition (including breastfeeding), or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.


Required Experience:

Staff IC


Key Skills

  • Laboratory Experience
  • Machine Learning
  • Python
  • AI
  • Bioinformatics
  • C/C++
  • R
  • Biochemistry
  • Research Experience
  • Natural Language Processing
  • Deep Learning
  • Molecular Biology

About Company


Artificial intelligence could be one of humanity’s most useful inventions. We research and build safe artificial intelligence systems. We're committed to solving intelligence, to advance science and benefit humanity.
