Researcher, Misalignment Research
San Francisco, CA - USA
Job Summary
About the Team
Safety Systems sits at the forefront of OpenAI's mission to build and deploy safe AGI, ensuring our most capable models can be released responsibly and for the benefit of society. Within Safety Systems, we are building a misalignment research team to focus on the most pressing problems for the future of AGI. Our mandate is to identify, quantify, and understand future AGI misalignment risks far in advance of when they could pose harm.
The work of this research taskforce spans four pillars:
Worst-Case Demonstrations: Craft compelling, reality-anchored demos that reveal how AI systems can go wrong. We focus especially on high-importance cases where misaligned AGI could pursue goals at odds with human well-being.
Adversarial & Frontier Safety Evaluations: Transform those demos into rigorous, repeatable evaluations that measure dangerous capabilities and residual risks. Topics of interest include deceptive behavior, scheming, reward hacking, deception in reasoning, and power-seeking, along with other related areas.
System-Level Stress Testing: Build automated infrastructure to probe entire product stacks, assessing end-to-end robustness under extreme conditions. We treat misalignment as an evolving adversary, escalating tests until we find breaking points even as systems continue to improve.
Alignment Stress-Testing Research: Investigate why mitigations break, publishing insights that shape strategy and next-generation safeguards. We collaborate with other labs when useful and actively share misalignment findings to accelerate collective progress.
About the Role
We are seeking a Senior Researcher who is passionate about red-teaming and AI safety. In this role, you will design and execute cutting-edge attacks, build adversarial evaluations, and advance our understanding of how safety measures can fail, and how to fix them. Your insights will directly influence OpenAI's product launches and long-term safety roadmap.
In this role, you will:
Design and implement worst-case demonstrations that make AGI alignment risks concrete for stakeholders, focused on the high-stakes use cases described above.
Develop adversarial and system-level evaluations grounded in those demonstrations, driving adoption across OpenAI.
Create tools and infrastructure to scale automated red-teaming and stress testing.
Conduct research on failure modes of alignment techniques and propose improvements.
Publish influential internal or external papers that shift safety strategy or industry practice. We aim to concretely reduce existential AI risk.
Partner with engineering, research, policy, and legal teams to integrate findings into product safeguards and governance processes.
Mentor engineers and researchers, fostering a culture of rigorous, impact-oriented safety work.
You might thrive in this role if you:
Are already thinking about these problems night and day, share our mission to build safe, universally beneficial AGI, and align with the OpenAI Charter.
Have 4 years of experience in AI red-teaming, security research, adversarial ML, or related safety fields.
Possess a strong research track record (publications, open-source projects, or high-impact internal work) demonstrating creativity in uncovering and exploiting system weaknesses.
Are fluent in modern ML/AI techniques and comfortable hacking on large-scale codebases and evaluation infrastructure.
Communicate clearly with both technical and non-technical audiences, translating complex findings into actionable recommendations.
Enjoy collaboration and can drive cross-functional projects that span research, engineering, and policy.
Hold a Ph.D., master's degree, or equivalent experience in computer science, machine learning, security, or a related discipline (nice to have, but not required).
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristics.
For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss, or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
Required Experience:
IC
About Company
We believe our research will eventually lead to artificial general intelligence, a system that can solve human-level problems. Building safe and beneficial AGI is our mission.