Staff Product Manager, AI Safety
San Francisco, CA - USA
Job Summary
About Pinterest:
Millions of people around the world come to our platform to find creative ideas, dream about new possibilities, and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.
Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences, and embrace the flexibility to do your best work. Creating a career you love? It's Possible.
At Pinterest, AI isn't just a feature; it's a powerful partner that augments our creativity and amplifies our impact, and we're looking for candidates who are excited to be a part of that. To get a complete picture of your experience and abilities, we'll explore your foundational skills and how you collaborate with AI.
Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know but how you think. You can read more about our AI interview philosophy and how we use AI in our recruiting process here.
As a Staff Product Manager for the GenAI Safety team within Trust & Safety, you'll define and drive the product strategy for ensuring Pinterest's GenAI-powered systems are safe, fair, and trustworthy. You'll be responsible for building proactive safety frameworks that scale with our growing AI capabilities, partnering deeply with engineering, policy, data science, and design to protect our users while enabling Pinterest to innovate responsibly.
This is a high-impact role for someone who is passionate about the intersection of AI and user safety and who thrives in ambiguous, fast-evolving problem spaces. You'll work at the frontier of responsible AI: anticipating novel harms before they emerge, red-teaming new AI features, and translating complex policy goals into measurable product requirements.
What you'll do:
- GenAI Safety Strategy: Own and drive the product roadmap for GenAI safety across Pinterest's AI-powered surfaces, including assisted search, content recommendations, automated moderation, and generative content creation tools
- Threat Modeling & Red-Teaming: Lead proactive identification of risks, failure modes, and adversarial attack vectors across AI systems, designing structured red-teaming exercises and evaluation frameworks before and after product launches
- Policy-to-Product Translation: Partner closely with Trust & Safety policy, legal, and ethics teams to translate nuanced content guidelines (e.g., self-harm, misinformation, body image) into precise, buildable product requirements and model guardrails
- Cross-Functional Collaboration: Work with engineering, ML, design, data science, policy, legal, comms, and operations teams to define, align, and ship AI safety solutions across global markets and diverse user populations
- Evaluation & Measurement: Define and track quantitative safety metrics, including fairness audits, false positive/negative rates, disparate impact analysis, and content harm reduction, to ensure AI systems meet safety standards at scale
- Incident Response: Develop and maintain AI safety incident runbooks and escalation frameworks, and lead rapid triage and remediation when AI systems produce harmful or unexpected outputs
- Emerging Risk Anticipation: Stay ahead of the rapidly evolving AI landscape to identify safety implications of new capabilities (e.g., multi-modal generation, synthetic media, agentic AI) and proactively build extensible safety infrastructure to address unknown future applications
- Global & Cultural Sensitivity: Ensure AI safety approaches account for the needs, norms, and contexts of Pinterest's diverse global user base, avoiding one-size-fits-all solutions and centering equity in safety design
- User & Employee Wellbeing: Champion the safety and psychological wellbeing of both users who encounter harmful content and the internal teams (content reviewers, T&S specialists) who work on the front lines of content safety
What we're looking for:
- 7 years of product management experience, with meaningful depth in GenAI/ML, trust & safety, content moderation, or responsible AI
- Strong fluency in AI/ML concepts, including generative models, recommendation systems, multi-modal AI, and reinforcement learning from human feedback (RLHF)
- Experience with AI ethics frameworks, responsible AI principles, or relevant regulatory landscapes (e.g., NIST AI RMF, EU AI Act)
- Demonstrated ability to lead cross-functional teams through ambiguous, high-stakes problem spaces with a bias for action
- Proficiency in engaging with research, mapping threat models, validating risks, and translating insights into clear product strategies and roadmaps
- Excellent communication skills, including the ability to articulate complex technical and ethical trade-offs to non-technical audiences and senior leadership, facilitating clear decision-making
- Deep empathy for users and a genuine commitment to making the internet safer
- Bachelor's degree in a relevant field such as Computer Science, or equivalent experience
Relocation Statement:
- This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.
- We let the type of work you do guide the collaboration style. That means we're not always working in an office, but we continue to gather for key moments of collaboration and connection.
- This role will need to be in the office for in-person collaboration 1-2 times/quarter and can therefore be situated anywhere in the country.
#LI-REMOTE
#LI-REX
Required Experience:
Staff IC
About Company
Join the people behind the product to build a more positive internet for Pinterest users worldwide.