Adversarial AI Tester


Job Location:

San Francisco, CA - USA

Monthly Salary: Not Disclosed
Posted on: 3 hours ago
Vacancies: 1 Vacancy

Job Summary

The Adversarial AI Tester is responsible for evaluating, stress-testing, and validating artificial intelligence and machine learning systems against adversarial threats, misuse, bias, and failure modes. This role plays a critical part in ensuring the robustness, safety, reliability, and ethical performance of AI models across production and pre-production environments.

This position is strictly limited to candidates who currently reside in the United States and are legally authorized to work in the U.S. Applications from individuals residing outside the United States will be rejected.


Key Responsibilities:

Design and execute adversarial testing strategies for AI and machine learning models

Identify vulnerabilities related to model robustness, security, bias, hallucinations, and misuse

Perform red-teaming, prompt injection testing, model evasion testing, and data poisoning simulations

Develop test cases to evaluate AI behavior under malicious, edge-case, or unexpected inputs

Document findings, risks, and mitigation recommendations in clear technical reports

Collaborate with ML engineers, data scientists, and security teams to remediate identified weaknesses

Validate fixes and improvements through regression and re-testing

Support responsible AI governance and compliance initiatives

Stay current on emerging adversarial AI techniques, threats, and industry best practices


Required Qualifications:

Bachelor's degree in Computer Science, Artificial Intelligence, Cybersecurity, Data Science, or a related field

3–6 years of experience in AI/ML testing, model evaluation, security testing, or red team activities

Strong understanding of machine learning concepts including supervised and unsupervised models

Experience testing large language models (LLMs), computer vision, or predictive systems

Familiarity with adversarial attack techniques (e.g., prompt injection, model evasion, data poisoning)

Proficiency in Python and common ML frameworks (e.g., PyTorch, TensorFlow, scikit-learn)

Strong analytical, documentation, and communication skills

Ability to work independently in a fully remote environment


Preferred Qualifications:

Master's degree or Ph.D. in AI, Machine Learning, or Cybersecurity

Experience with AI governance, model risk management, or responsible AI frameworks

Familiarity with cloud-based AI platforms (AWS, Azure, GCP)

Knowledge of NIST AI Risk Management Framework or similar standards

Experience with security testing tools, red teaming methodologies, or AI audits


Compensation:

Annual Salary Range: $120,000–$165,000 USD, based on experience, technical expertise, and geographic location


Benefits:

Comprehensive medical, dental, and vision insurance

401(k) retirement plan with employer matching

Paid time off, paid holidays, and sick leave

Life, short-term, and long-term disability insurance

Flexible remote work schedule

Professional development, research, and certification support

Employee wellness and assistance programs


Work Authorization & Residency Requirement:

Must be legally authorized to work in the United States

Must currently reside within the United States

Applications from candidates outside the U.S. will not be considered

