Senior Strategic Risk Advisor, Security Strategy & Influence
Washington, DC - USA
Job Summary
OpenAI is looking for a senior strategic risk advisor to help the Security organization anticipate, interpret, and act on high-consequence risks at the intersection of cybersecurity, national security, geopolitics, ecosystem dynamics, regulation, and institutional trust.
This role is for someone who can move fluently between technical security teams, executives, policymakers, external experts, and commercial stakeholders. The right person will bring judgment formed through national security, cyber diplomacy, intelligence, and senior advisory work, and will turn complex external developments into clear choices for OpenAI leadership.
About the Team
The Security Strategy & Influence team exists to make cybersecurity a strategic advantage for OpenAI. As AI systems become more capable and globally consequential, the most important security questions increasingly sit above the purely operational layer: model deployment, international expansion, state-linked threats, partner access, infrastructure dependencies, geopolitical risk, external trust, and OpenAI's freedom to operate.
About the Role
As a Senior Strategic Risk Advisor, you will help OpenAI connect technical security realities to geopolitical, policy, and ecosystem developments, turning that context into clear guidance for consequential security and operating choices.
We're looking for a senior, trusted advisor who brings strong cybersecurity and geopolitical fluency and can translate complex risk into clear strategic direction.
This role is based in Washington, DC. We use a hybrid work model of three days in the office per week and offer relocation assistance to new employees. Some domestic and international travel may be required for high-leverage meetings, conferences, and government engagements.
In this role you will:
Advise security and company leadership on strategic risks, connecting technical security priorities to broader business, policy, geopolitical, regulatory, and operating realities.
Produce strategic analysis and contribute to scenario planning and executive briefings on cyber, geopolitical, policy, regulatory, and ecosystem developments, translating them into decision-ready recommendations for OpenAI's security posture, resource priorities, and operating decisions.
Monitor and interpret external events, policy shifts, geopolitical developments, regulatory changes, and ecosystem dynamics relevant to OpenAI's security posture and operating environment.
Provide strategic risk and security input into scenario-planning exercises for high-consequence questions, including model deployment, regional expansion, sensitive partnerships, and strategic threat evolution.
Build and maintain trusted relationships with policymakers, researchers, think tanks, academics, national security practitioners, technology leaders, and other external experts.
Help develop credible external positions and executive materials for government engagements, conferences, and other high-stakes forums.
You might thrive in this role if you:
Have 10 years of experience advising executives, boards, governments, or national security leaders on cyber, geopolitical, or digital risk.
Have deep familiarity with cybersecurity, state-linked threats, insider risk, intelligence analysis, and strategic risk management.
Have experience translating complex technical ideas for non-technical senior decision makers.
Have a strong external network across national security, cyber policy, technology, academia, think tanks, or security communities.
Have exceptional written and oral communication skills, with a demonstrated ability to produce clear, decision-oriented analysis under pressure.
Have technical security qualifications or a hands-on cyber background.
Have a U.S. or U.K. Government Security Clearance, or the willingness and eligibility to obtain one.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic.
For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss, or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities; requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
Required Experience:
Senior IC
About Company
We believe our research will eventually lead to artificial general intelligence, a system that can solve human-level problems. Building safe and beneficial AGI is our mission.