Product Policy, Cyber Policy Manager
San Francisco, CA - USA
Job Summary
About the Team
The Product Policy team develops, implements, enforces, and communicates the policies that govern use of OpenAI's services, including ChatGPT, Codex, GPTs, and the OpenAI API. This cyber-focused role will help define how OpenAI enables legitimate cybersecurity work while reducing the risk that our products are misused for cyber abuse.
This role sits at the intersection of AI capability, cybersecurity practice, and abuse prevention: helping defenders use OpenAI's tools effectively while setting clear boundaries against malicious cyber activity.
About the Role
As a Product Policy Manager specializing in Cyber, you will combine cyber and policy expertise to guide how OpenAI evaluates, launches, and governs capabilities relevant to cybersecurity. You will work closely with product, engineering, research, safety, security, legal, operations, and go-to-market teams to translate complex cyber risk into practical product policy, implementation standards, enforcement guidance, and launch decisions.
The role requires understanding both sides of the cyber equation: how defenders investigate, detect, triage, and respond to threats, and how malicious actors may attempt to misuse AI systems for vulnerability exploitation, social engineering, malware enablement, credential abuse, or other harmful activity. Strong candidates may bring depth in one or more cyber domains, such as attacker tradecraft, vulnerability discovery, malware analysis, phishing and credential abuse, identity and access risks, incident response, detection engineering, secure development, threat intelligence, abuse investigations, or security tooling, along with the ability to reason across adjacent areas. You do not need to have held a formal policy title, but you should have experience turning technical risk into durable rules, standards, processes, or decisions, as well as very strong communication skills.
As OpenAI continues to grow, this role will help align diverse teams and stakeholders while operating in a fast-moving, ambiguous environment.
This role is based in San Francisco, CA. We use a hybrid work model of three days in the office per week and offer relocation assistance to new employees.
In this role you will:
Provide cyber policy advice to technical and product teams based on a deep understanding of model capabilities, product architecture, abuse pathways, defensive security use cases, and the practical needs of cybersecurity teams.
Evaluate cyber-relevant product launches and model capabilities, including how they may support legitimate security work and how they could be misused by malicious or irresponsible actors.
Translate cyber threat risk into clear product requirements, launch guidance, enforcement standards, user-facing policy, and internal implementation guidance.
Develop operationalizable standards, enforcement protocols, and escalation paths for cyber abuse scenarios, including vulnerability exploitation, credential abuse, social engineering, malware enablement, phishing, data exfiltration, and misuse of security automation.
Partner with safety, security, product, engineering, research, legal, operations, communications, and global affairs teams to make principled, timely decisions about cyber risk in high-ambiguity situations.
Help build scalable policy frameworks for dual-use cyber capabilities, including where to draw boundaries between beneficial security research, defensive operations, and harmful cyber activity.
You might thrive in this role if you:
Have 5 years of experience, or equivalent depth, in one or more of the following areas: cybersecurity, security engineering, threat intelligence, incident response, abuse investigations, detection engineering, product policy, cyber policy, trust and safety, or a closely related field.
Bring strong technical fluency in one or more cyber domains, such as vulnerability management, malware analysis, threat intelligence, incident response, phishing and credential abuse, detection engineering, secure software development, cloud security, identity and access management, or security automation.
Understand the modern cyber threat environment, including how sophisticated and opportunistic actors operate, how defenders detect and respond, and where AI can create both meaningful defensive value and misuse risk.
Can evaluate dual-use cyber capabilities with nuance, distinguishing between legitimate security research, authorized defensive activity, risky automation, and malicious or abusive behavior.
Communicate clearly with product managers, engineers, researchers, executives, security practitioners, and policy stakeholders, and enjoy turning ambiguous technical risk into practical decisions, requirements, and guidance.
Are comfortable building new policy frameworks, processes, and decision criteria in ambiguous or fast-moving areas.
Use data, threat intelligence, user feedback, and operational signals to improve policy quality, measure effectiveness, and identify emerging risks.
Care deeply about enabling beneficial cybersecurity work while preventing abuse.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic.
For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss, or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and carry related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
Required Experience:
Manager
About Company
We believe our research will eventually lead to artificial general intelligence: a system that can solve human-level problems. Building safe and beneficial AGI is our mission.