About the Team
Integrity Data Science sits at the center of OpenAI's mission to deploy powerful AI responsibly. We help ensure people can trust our products by building measurement systems, experimentation practices, and detection/mitigation strategies that protect OpenAI and our users from misuse, fraud, and evolving adversarial behaviors.
As the scope and urgency of Integrity work expands across product surfaces and go-to-market motions, we're hiring a dedicated Data Science Manager to scale the team, strengthen execution across multiple Integrity domains, and deepen partnership with Product, Engineering, Operations, and adjacent orgs (e.g., Growth, Ads).
This role is based in our San Francisco HQ (in-office).
About the Role
As Data Science Manager, Integrity, you will lead a team of data scientists working across trust & safety, fraud prevention, risk analysis, measurement, and modeling. You'll be accountable for building a high-performing DS function that can keep pace with fast-moving threats, and for shaping the analytical strategy that informs how OpenAI detects, measures, and mitigates integrity risks at scale.
This is a highly cross-functional leadership role. You'll help set the roadmap with Integrity Product/Engineering leaders, evolve team structure and operating rhythms, raise the bar on technical rigor (experimentation, causal inference, modeling, metrics), and develop a culture of proactive, high-leverage impact. Many of the challenges in this space are emergent: new misuse patterns appear as the technology and ecosystem evolve, so this role requires strong judgment, comfort with ambiguity, and an ability to build systems that scale.
In this role you will:
Lead and scale a high-impact Integrity Data Science team: hiring, coaching, and developing DS ICs (and potentially future managers) while setting a strong technical and cultural bar.
Drive strategy across multiple Integrity domains (policy enforcement, bot detection, fraud prevention, IP theft, risk measurement, abuse prevention), balancing near-term response with durable systems.
Build and institutionalize analytical rigor: clear metric frameworks, experimentation standards, monitoring/alerting, and repeatable evaluation approaches for Integrity interventions.
Partner deeply with Product & Engineering to shape roadmaps, prioritize the right bets, and translate ambiguous risk signals into practical product and platform decisions.
Evolve team structure and operating model as the org scales: defining ownership boundaries, improving processes, and creating leverage through better tooling and AI-assisted workflows.
Enable cross-org outcomes, supporting partners outside Integrity (e.g., Growth, Ads, GTM) where integrity risks intersect with product and business goals.
Communicate clearly with senior leadership, synthesizing complex tradeoffs, surfacing risk, and driving alignment on priorities and success metrics.
Push the team toward an AI-leveraged operating mode, using modern tooling and model capabilities to accelerate detection, triage, analysis, and iteration.
You might thrive in this role if you:
Have deep experience leading and scaling Data Science teams, ideally in trust & safety, fraud/abuse, security, risk, or other adversarial problem spaces in fast-moving environments.
Bring strong technical grounding across modern DS techniques (experimentation, causal inference, anomaly detection, risk modeling, measurement design), and can coach others to execute with rigor.
Have a track record of building durable partnerships across DS, Engineering, Product, and Operations, able to influence without authority and create shared accountability.
Are excellent at hiring, mentoring, and developing technical talent, and can build a culture that is both high-bar and supportive.
Can translate messy, evolving threats into clear frameworks, metrics, and decisions, and keep the team focused on the highest-leverage work.
Are comfortable operating in ambiguity and can bring structure, clarity, and momentum where the right answer isn't obvious.
Bonus if you:
Have experience deploying scaled detection solutions using LLMs, embeddings, fine-tuning, or related ML systems for abuse/fraud/risk.
Have worked closely with policy, content moderation, investigations, or security operations teams and understand how to design analytics that actually work end-to-end.
Have built or led measurement systems that balance safety, user experience, and operational/business constraints.
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic.
For additional information, please see OpenAI's Affirmative Action and Equal Employment Opportunity Policy Statement.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse, and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss, or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
Required Experience:
Manager