Research Scientist/Engineer, Model Threat Defense

Google DeepMind

Job Location:

Mountain View, CA - USA

Salary: $166,000 - $244,000
Posted on: Yesterday
Vacancies: 1

Job Summary

Snapshot

Artificial intelligence could be one of humanity's most useful inventions. At Google DeepMind, we're a team of scientists, engineers, machine learning experts, and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

About Us

Model distillation is a key innovation enabling the acceleration of AI, turning large, general models into small, specialized models used across the industry. However, distillation techniques can also be used to steal critical model capabilities, representing a significant threat to the intellectual property and integrity of our foundational models.
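To make the threat concrete, the sketch below shows the basic mechanics of knowledge distillation: a small "student" model is trained to imitate the softened output distribution of a larger "teacher". This is a minimal illustration using hypothetical toy models and random placeholder data, not a description of Gemini or of Google DeepMind's defenses; in an unauthorized extraction scenario, the teacher's outputs would come from queries to the target model's API.

```python
# Minimal knowledge-distillation sketch (hypothetical toy models, random placeholder data).
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-ins for a large general model (teacher) and a small specialized model (student).
teacher = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10))
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution so the student can learn from it

for _ in range(100):
    x = torch.randn(64, 32)  # placeholder inputs; a real attacker would use crafted queries
    with torch.no_grad():
        teacher_logits = teacher(x)  # only the teacher's outputs are needed, not its weights
    student_logits = student(x)
    # KL divergence between the softened teacher and student distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key point for this role: capability extraction needs only query access to the teacher's outputs, which is why both detecting suspicious query patterns and hardening the model's responses matter.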

The Role

As part of the Security & Privacy Research Team at Google DeepMind, you will take on a holistic role in securing our AI assets. You will both identify unauthorized distillation attempts and actively harden our models against distillation. This is a unique opportunity to contribute to the full lifecycle of defense for the Gemini family of models. You will be at the forefront, detecting threats in the wild and building resilience into our models.

Key Responsibilities

  • Research Defense Strategies: Research techniques both to detect distillation and to actively defend against it.
  • Deploy Detection & Mitigation Systems: Design and build systems that detect and mitigate unauthorized capability extraction.
  • Evaluate Impact: Rigorously measure the effectiveness of defense mechanisms, balancing the trade-offs between model robustness, defensive utility, and core model performance.
  • Collaborate and Publish: Work closely with world-class researchers across GDM, Google, and the industry to publish groundbreaking work, establish new benchmarks, and set the standard for responsible AI defense.

About You

We are looking for a creative and rigorous research scientist, research engineer, or software engineer who is passionate about trailblazing the critical field of model defense. You thrive on ambiguity and are comfortable working across the spectrum of security, from thinking like an adversary to building proactive protections. You are driven to build robust systems that protect the future of AI development.

Minimum qualifications:

  • Ph.D. in Computer Science or a related quantitative field, or a B.S./M.S. in a similar field with 2 years of relevant industry experience.
  • Demonstrated research or product expertise in a field related to model security, adversarial ML, post-training, or model evaluation.
  • Experience designing and implementing large-scale ML systems or counter-abuse infrastructure.

Preferred qualifications:

  • Deep expertise in one or more of the following areas: model distillation, model stealing, security, memorization, Reinforcement Learning, Supervised Fine-Tuning, or Embeddings.
  • Proven experience in Adversarial Machine Learning with a focus on designing and implementing model defenses.
  • Strong software engineering skills and experience with ML frameworks such as JAX, PyTorch, or TensorFlow.
  • A track record of landing research impact or shipping production systems in a multi-team environment.
  • Current or prior US security clearance.

The US base salary range for this full-time position is $166,000 - $244,000, plus bonus, equity, and benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.

At Google DeepMind, we value diversity of experience, knowledge, backgrounds, and perspectives, and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy or related condition (including breastfeeding), or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.


Key Skills

  • Laboratory Experience
  • Machine Learning
  • Python
  • AI
  • Bioinformatics
  • C/C++
  • R
  • Biochemistry
  • Research Experience
  • Natural Language Processing
  • Deep Learning
  • Molecular Biology

About Company


Artificial intelligence could be one of humanity’s most useful inventions. We research and build safe artificial intelligence systems. We're committed to solving intelligence, to advance science and benefit humanity.
