Project Lead, Safety Testing - 12 Month Fixed Term Contract

Job Location

New York City, NY - USA

Monthly Salary

Not Disclosed

Vacancy

1 Vacancy

Job Description

Snapshot

As the Project Lead, Safety Testing in the Responsible Development and Innovation (ReDI) team, you'll be integral to the delivery and scaling of our external safety testing program on Google DeepMind's (GDM's) most groundbreaking models.

You will work with teams across GDM, including Product Management, Research, Legal, Engineering, Public Policy, and Frontier Safety and Governance, to lead external safety evaluations, which are a key part of our responsibility and safety best practices, helping Google DeepMind to progress towards its mission.

The role is a 12-month fixed-term contract.

About us

Artificial Intelligence could be one of humanity's most useful inventions. At Google DeepMind, we're a team of scientists, engineers, machine learning experts, and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.

As the Project Lead, Safety Testing working in ReDI, you'll be part of a team that partners with external expert groups to conduct safety evaluations across various domains and modalities on our frontier models. In this role, you'll work in collaboration with other members of this critical program, upholding our safety and responsibility commitments whilst responding to the evolving needs of the business.

The role

Key responsibilities

Overarching:

  • Lead the design and oversee the implementation of GDM's external safety testing program, ensuring it meets our safety and responsibility requirements and external commitments
  • Lead GDM's input into external safety testing requirements from regulators and government bodies
  • Input into public policy work to help shape potential future regulatory requirements and government policies related to AI safety
  • Lead implementation of external safety testing requirements from regulators and government bodies, working with multidisciplinary teams across Legal, Business and Corporate Development, and Engineering
  • Oversee efforts to optimise and scale the program to support the growing needs of the business
  • Identify and plan the program's strategic resource requirements to execute the external safety testing program successfully and to deliver against its priorities
  • Carry out cross-industry horizon scanning to identify and maintain visibility of current and future external testing requirements from regulators, government bodies, and wider industry standards
  • Matrix-manage a cross-functional team, aligning resources against business priorities and leading the escalation of risks and issues to wider stakeholder groups, including the Head of Evaluations and Responsibility leadership

Testing scope:

  • Scope GDM's external testing program, including the domains of frontier models to be tested
  • Engage with various stakeholders across Responsibility, modeling, and SME teams to identify high-priority focus areas to build into testing plans and inform partnership approaches

Partnerships:

  • Own and manage relationships with various external testing partners across the partnership lifecycle
  • Oversee the identification of new partners with relevant skillsets to undertake external safety testing, working with relevant SMEs to ensure it is aligned with high-priority focus areas

Findings:

  • Oversee the collation, assessment, and distribution of external safety testing findings, ensuring internal alignment on severity and escalation of high-severity findings

Stakeholder engagement and communication:

  • Build and lead a high-performing and collaborative multidisciplinary team to deliver the program
  • Oversee communication about the program to wider teams across GDM to increase visibility and buy-in
  • Oversee communication to relevant external stakeholders to influence industry standards and policy positions
  • Represent the external safety testing program in relevant internal and external forums

Budget:

  • Own a significant program budget, ensuring work is delivered within budget, and work with the program manager on forecasting spend and reconciliation

About you

To set you up for success as a Project Lead, Safety Testing in the ReDI team, we look for the following skills and experience:

  • Ability to shape, lead, and deliver programs in a highly complex and live environment where decisions are made in a timely fashion
  • Ability to build and lead high-performing teams
  • Previous experience working in a fast-paced environment, either in a start-up, tech company, or consulting organisation
  • Familiarity with safety considerations of generative AI, including (but not limited to) frontier safety (such as chemical and biological risks), content safety, and sociotechnical risks (such as fairness)
  • Strong communication skills and demonstrated ability to work in cross-functional teams, foster collaboration, and influence outcomes
  • Strong project management skills to work with the program manager to optimise existing processes and create new processes
  • Significant experience presenting and communicating complex concepts succinctly and clearly to different audiences

In addition, the following would be an advantage:

  • Experience of working with sensitive data and access controls
  • Prior experience working with product development or in similar agile settings
  • Subject matter expertise in generative AI safety considerations, including (but not limited to) frontier safety (such as chemical and biological risks), content safety, and sociotechnical risks (such as fairness)
  • Experience designing and implementing audits or evaluations of cutting-edge AI systems


Required Experience:

Senior IC

Employment Type

Contract
