Location: Remote-friendly (US time zones); geography restricted to US, UK, and Canada. Type: Full-time or Part-time
Why This Role Exists
At Mercor we believe the safest AI is the one that's already been attacked by us. That's why we're building a pod of AI Red-Teamers: human data experts who probe AI models with adversarial inputs, surface vulnerabilities, and generate the red-team data that makes AI safer for our customers.
This role may include reviewing AI outputs that touch on sensitive topics such as bias, misinformation, or harmful behaviors. All work is text-based, and participation in higher-sensitivity projects is optional and supported by clear guidelines and wellness resources.
What You'll Do
- Red-team AI models and agents: jailbreaks, prompt injections, misuse cases, exploits
- Generate high-quality human data: annotate failures, classify vulnerabilities, and flag systemic risks
- Apply structure: follow taxonomies, benchmarks, and playbooks to keep testing consistent
- Document reproducibly: produce reports, datasets, and attack cases customers can act on
- Flex across projects: support different customers, from LLM jailbreaks to socio-technical abuse testing
Who You Are
- You bring prior red-teaming experience (AI adversarial work, cybersecurity, socio-technical probing)
- You're curious and adversarial: you instinctively push systems to breaking points
- You're structured: you use frameworks or benchmarks, not just random hacks
- You're communicative: you explain risks clearly to technical and non-technical stakeholders
- You're adaptable: you thrive on moving across projects and customers
Nice-to-Have Specialties
- Adversarial ML: jailbreak datasets, prompt injection, RLHF/DPO attacks, model extraction
- Cybersecurity: penetration testing, exploit development, reverse engineering
- Socio-technical risk: harassment/disinformation probing, abuse analysis
- Creative probing: psychology, acting, or writing for unconventional adversarial thinking
What Success Looks Like
- You uncover vulnerabilities that automated tests miss
- You deliver reproducible artifacts that strengthen customer AI systems
- Evaluation coverage expands: more scenarios tested, fewer surprises in production
- Mercor customers trust the safety of their AI because you've already probed it like an adversary
Why Join Mercor
The pay rate for this role may vary by project, customer, and content category. Compensation will be aligned with the level of expertise required, the sensitivity of the material, and the scope of work for each engagement.