Sr. Data Scientist, Responsible AI

Pinterest


Job Location:

San Francisco, CA - USA

Monthly Salary: Not Disclosed
Posted on: Yesterday
Vacancies: 1 Vacancy

Job Summary

About Pinterest:

Millions of people around the world come to our platform to find creative ideas, dream about new possibilities, and plan for memories that will last a lifetime. At Pinterest, we're on a mission to bring everyone the inspiration to create a life they love, and that starts with the people behind the product.

Discover a career where you ignite innovation for millions, transform passion into growth opportunities, celebrate each other's unique experiences, and embrace the flexibility to do your best work. Creating a career you love? It's Possible.

At Pinterest, AI isn't just a feature; it's a powerful partner that augments our creativity and amplifies our impact, and we're looking for candidates who are excited to be a part of that. To get a complete picture of your experience and abilities, we'll explore your foundational skills and how you collaborate with AI.

Through our interview process, what matters most is that you can always explain your approach, showing us not just what you know but how you think. You can read more about our AI interview philosophy and how we use AI in our recruiting process here.


Pinterest is the world's leading visual search and discovery platform, serving over 500 million monthly active users globally on their journey from inspiration to action. As we scale Generative AI experiences, including Pinterest Assistant and Canvas, ensuring these products are safe, fair, and trustworthy is paramount.

We are looking for a Senior Data Scientist to help lead Pinterest's responsible AI mandate within the Advanced Technology Group (ATG). In this role, you will design and build the data science foundations for automated adversarial testing of our GenAI products, developing attack strategies, evaluation frameworks, and harm-detection methodologies that operate at scale. You will work in a highly collaborative and cross-functional environment, partnering with ML Engineers, Trust & Safety specialists, Policy teams, and Product Managers. You are expected to develop a deep understanding of generative AI vulnerabilities and to generate insights and robust methodologies that proactively surface and mitigate risks. The results of your work will directly influence product safety, policy compliance, and user trust across Pinterest.

What you'll do

  • Design and develop automated adversarial testing methodologies, including single-turn, multi-turn, and multimodal attack strategies, to proactively identify vulnerabilities in Pinterest's Generative AI products.
  • Build and calibrate hybrid evaluation pipelines combining LLM-based judges, classifiers, and rule-based systems to accurately detect safety violations, policy breaches, bias, and representational harms.
  • Develop and operationalize harm taxonomies grounded in industry standards and Pinterest's Responsible AI and Trust & Safety threat models.
  • Design adaptive refinement loops that learn from attack outcomes (near-misses, partial failures) to iteratively surface deeper and previously unknown vulnerabilities.
  • Bring scientific rigor and statistical methods to the evaluation of AI safety, including benchmark dataset construction, evaluation calibration, and success-metric definition (vulnerability severity, coverage breadth, pre-launch risk reduction).
  • Work cross-functionally to build relationships, proactively communicate key findings, and collaborate closely with ML engineers, Trust & Safety specialists, policy teams, product managers, and legal partners to ensure safe product launches.
  • Relentlessly focus on impact, whether through influencing product safety strategy, advancing responsible AI metrics, or improving critical evaluation processes.
  • Mentor and up-level junior data scientists and cross-functional partners on adversarial evaluation, responsible AI methodologies, and safety-aware data science practices.

What we're looking for

  • 5 years of experience analyzing data in a fast-paced, data-driven environment, with a proven ability to apply scientific methods to solve real-world problems on web-scale data.
  • Strong interest and hands-on experience in one or more of: AI safety, adversarial machine learning, red teaming, responsible AI, or trust & safety.
  • Deep familiarity with large language models (LLMs), generative AI systems, and their failure modes, including prompt injection, jailbreaks, bias, and safety violations.
  • Experience designing and calibrating evaluation frameworks for AI systems, including LLM-as-judge, classifier-based evaluation, and benchmark dataset construction.
  • Strong quantitative programming (Python) and data manipulation skills (SQL/Spark); experience with ML pipelines and large-scale experimentation.
  • Familiarity with AI safety taxonomies and frameworks (e.g., OWASP LLM Top 10, MITRE ATLAS) is strongly preferred.
  • Ability to work independently, drive ambiguous projects end-to-end, and operate with high ownership.
  • Excellent written and verbal communication skills, with the ability to explain complex technical findings to both technical and non-technical partners.
  • A team player eager to partner across Responsible AI, Trust & Safety, Product, Engineering, Policy, and Legal to turn safety insights into action.



This position is not eligible for relocation assistance.

#LI-NM4


Required Experience:

Senior IC


About Company


Join the people behind the product to build a more positive internet for Pinterest users worldwide.
