Technical Program Manager, Frontier Evals
Job Location

San Francisco, CA - USA

Yearly Salary

USD 180,000 - 230,000

Vacancy

1 Vacancy

Job Description

About the team

Frontier AI models have the potential to benefit all of humanity, but they also pose increasingly severe risks. To ensure that AI promotes positive change, we have dedicated a team to help us best prepare for the development of increasingly capable frontier AI models. This team is tasked with preparing for catastrophic risks related to advanced AI systems.

Specifically, the mission of the Preparedness team is to:

  1. Closely monitor and predict the evolving capabilities of frontier AI systems, with an eye towards misuse risks whose impact could be catastrophic (not necessarily existential) to our society; and

  2. Ensure we have concrete procedures, infrastructure, and partnerships to mitigate these risks and, more broadly, to safely handle the development of powerful AI systems.

Our team will coordinate evaluations and risk assessments for powerful AI models. The team's core goal is to ensure that we have the infrastructure needed for the safety of increasingly capable AI systems, including potential future general-purpose models.

About you

We are looking to hire an exceptional technical program manager who can push the boundaries of our frontier models. Specifically, we are looking for someone who will help us shape our empirical grasp of the whole spectrum of AI safety concerns and will own individual threads within this endeavor end-to-end. We are running complex evaluations and are looking for a strong TPM to help support them.

In this role you will:

  • Work on identifying emerging AI safety risks and new methodologies for exploring the impact of these risks

  • Design evaluations of frontier AI models that assess the extent of identified risks using the latest research on capability elicitation

  • Perform statistical analyses on our frontier evaluations

  • Collaborate with cross-functional teams within and outside of OpenAI to translate our technical findings into policy recommendations

  • Contribute to the refinement of risk management and the overall development of best practice guidelines for AI safety evaluations

Ideal (but not necessarily hard-and-fast) background:

  • Basic coding ability in Python

  • Working knowledge of simple statistical analyses and experimental design

  • At least 1-2 years of full-time work experience, either as a product manager, management consultant (focused on the tech space), technical program manager, or machine learning engineer / software engineer at a fast-paced startup

  • At minimum, an undergraduate degree in computer science, statistics, data science, or applied mathematics

  • Experience with machine learning / artificial intelligence and interest in how cutting-edge AI systems will impact society

  • Experience owning fast-paced projects end-to-end, including project managing, prioritizing, scoping work, and writing documents

  • Able to operate effectively in a dynamic and extremely fast-paced research environment, as well as scope and deliver projects end-to-end

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic.

For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.

Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss, or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

We are committed to providing reasonable accommodations to applicants with disabilities and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.


Required Experience:

Manager

Employment Type

Full-Time
