USD 220,000 - 320,000
1 Vacancy
About the Team
The Intelligence and Investigations team is dedicated to ensuring the safe, responsible deployment of AI by rapidly detecting and mitigating abuse. Our team leverages cutting-edge testing methodologies to uncover vulnerabilities and emerging threats, helping safeguard OpenAI's products and users. We work closely with cross-functional partners across product, policy, and engineering to drive a comprehensive defense strategy against evolving adversarial challenges.
About the Role
As a Novel Abuse Testing Specialist, you'll focus on advancing our post-launch product testing efforts at OpenAI, ensuring our systems remain resilient against evolving real-world adversarial threats.
We are seeking a self-starter to design, execute, and refine innovative adversarial testing and simulation protocols that leverage state-of-the-art tools and a hands-on red team approach. In this role, you will simulate threat actor methodologies and conduct rigorous application testing to uncover novel abuse vectors.
You will serve as a critical technical bridge between our security, product, and policy teams, employing an attacker mindset and advanced red team tactics to drive actionable insights that strengthen our detection mechanisms and overall product defenses. The ideal candidate will have a strong background in application security or penetration testing, with hands-on experience in web application security and proficiency in tools such as Burp Suite and Metasploit.
We value professionals with excellent communication skills, a commitment to continuous learning, and a passion for securing AI technologies through innovative testing methodologies.
This role is based in San Francisco, CA. We use a hybrid work model of three days in the office per week and offer relocation assistance to new employees.
In this role, you will:
Design and execute high-fidelity, realistic attack simulations that emulate sophisticated threat actor methodologies, uncovering hidden exploitation vectors through advanced red teaming and application testing techniques.
Analyze testing outcomes and generate actionable insights to refine our detection mechanisms, ultimately bolstering our overall security posture.
Serve as a crucial technical bridge between our security, product, and policy teams, ensuring that insights from adversarial testing inform broader defense strategies and remediation efforts.
Develop and integrate automation tools to streamline testing workflows and enable rapid iteration in response to emerging threats.
Collaborate closely with internal stakeholders to align testing strategies with evolving product needs and security objectives.
Help build out a bug bounty program for AI safety and novel abuse, one of the first of its kind.
You might thrive in this role if you have / are:
Experience in application security, penetration testing, or red team operations, with a proven track record of executing adversarial testing and simulation protocols.
A strong technical background in web application security, coupled with hands-on expertise performing security testing of web applications, AI systems, and unconventional attack surfaces.
An attacker mindset and a passion for identifying novel abuse vectors in live environments, along with the ability to translate complex technical findings into actionable insights.
Demonstrated success working cross-functionally in fast-paced, dynamic environments, managing complex product ecosystems, and driving rapid security enhancements.
Committed to continuous learning, innovation, and the pursuit of cutting-edge security methodologies to protect and advance AI technologies.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability, or any other legally protected status.
OpenAI Affirmative Action and Equal Employment Opportunity Policy Statement
For US-Based Candidates: Pursuant to the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
Required Experience:
Unclear Seniority
Full-Time