About the Team
The Account & Platform Integrity team protects OpenAI's ecosystem from fraud, impersonation, abuse, and account-level threats. We ensure that the people and organizations using OpenAI are who they claim to be, that access is used appropriately, and that bad actors are prevented from exploiting the platform.
We operate at the intersection of identity, access, compliance, and abuse prevention, working closely with Product, Engineering, Legal, Go-To-Market, and Support teams to stop harmful activity before it impacts users, customers, or the business. Our work directly protects revenue, user trust, and platform safety across ChatGPT, the API, and enterprise products.
About the Role
We're hiring a Fraud & Risk Analyst to help safeguard OpenAI by investigating, validating, and monitoring customer accounts and organizations. You will focus on identity, legitimacy, and risk, ensuring accounts are properly verified, access is appropriate, and emerging threats are detected early.
You'll handle sensitive, high-stakes investigations involving fraud, impersonation, sanctions, misuse of access, and coordinated abuse. Your work will directly influence who can use OpenAI's products and how safely we can scale.
Note: This role may involve reviewing sensitive, confidential, or disturbing content.
We use a hybrid work model of three days per week in our San Francisco office.
In this role you will:
Review and verify customer identities, organizations, and ownership structures
Investigate suspicious or high-risk accounts (e.g., fraud, impersonation, shell companies, abuse of API or ChatGPT access)
Evaluate documents, internal data, and third-party sources to determine legitimacy and risk
Enforce account-level actions such as approvals, restrictions, suspensions, or escalations
Serve as the case owner for complex, high-visibility verification and integrity cases
Partner with Legal, Compliance, Sales, and Support to resolve issues quickly and accurately
Handle escalations, appeals, and sensitive customer communications
Help design and improve verification workflows, fraud detection, and risk-scoring systems
Contribute to automation, tooling, and human-in-the-loop review pipelines
Identify patterns of abuse and recommend new controls or safeguards
Analyze data to uncover fraud and abuse trends
Provide feedback to Product and Engineering to improve onboarding, verification, and access controls
Create clear playbooks and guidance for frontline teams handling high-risk accounts
You Might Thrive In This Role If You...
Have 5 years of experience in verifications, fraud, trust & safety, or risk investigations
Are comfortable making high-impact decisions about who should or should not have platform access
Have experience working cross-functionally with Legal, Product, Sales, and Operations
Enjoy building systems, not just running them, especially in fast-moving environments
Are calm under pressure, detail-oriented, and trusted with sensitive and ambiguous cases
Thrive in environments that require judgment, speed, and accountability
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core. To achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss, or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
Required Experience:
IC