Position Overview
The Offensive Security Research Team seeks a Cyber Security Researcher to perform AI-focused red team testing and vulnerability analysis. This role involves testing new AI offerings and integrations, building proof-of-concept exploits, and strengthening the overall AI security posture through advanced offensive research.
Core Responsibilities
Conduct AI red team and penetration testing on LLMs, agentic systems, and AI-integrated applications.
Perform reverse engineering, create exploits, and simulate real-world attack vectors.
Identify, document, and mitigate vulnerabilities in AI models and supporting infrastructure.
Write technical reports detailing findings, attack chains, and security implications.
Collaborate with cross-functional teams to promote responsible AI deployment and secure engineering practices.
Required Skills & Experience
AI Cybersecurity Research: 2 years
Red Team / Penetration Testing: 2 years
Exploit Development / Reverse Engineering: 2 years
AI Technologies (LLMs, Copilot, Gemini, MCP, agentic solutions): hands-on experience
Information Security Engineering: 4 years
Technical Writing & Presentations: 3 years
Communication Skills: strong verbal and written
Additional Notes
Must have direct experience testing AI systems (LLMs, Copilot, Gemini, etc.).
Military or equivalent cybersecurity experience is acceptable.
Collaborates directly with internal technology leadership on AI security best practices.