| Position Summary / Purpose: Overview of the basic function and purpose of the job and how it contributes to the successful achievement of department and organization objectives. |
- Execute AI-focused penetration testing engagements that include manual penetration testing of systems incorporating AI/ML, objective-based testing of AI-driven features, and coverage of both traditional and AI-centric attack surfaces.
- Perform threat modeling for AI-powered software systems, evaluate AI-related business logic, and conduct architecture reviews, focusing on adversarial ML vectors, prompt-based vulnerabilities, and other AI-specific security risks.
- Develop and improve AI-driven tools and methodologies for offensive security tasks such as discovery, exploitation, fuzzing, and adversarial ML testing, with an emphasis on web apps, APIs, and mobile clients.
- Present AI penetration testing findings to technical and non-technical audiences, including through live demonstrations.
- Collaborate with engineering, development, and security teams to communicate findings, lead remediation discussions, and advise on best practices for secure AI model development and deployment.
- Research emerging AI attack techniques, evaluate their potential impact, identify vulnerabilities, and provide actionable recommendations to strengthen AI defenses.
- Collaborate with internal Red Teams, SOC analysts, and AI security researchers, sharing insights and data to enhance AI-driven offensive security methodologies. Refine existing AI red teaming approaches by integrating new adversarial ML techniques and proven exploitation tactics.
- Work independently on AI penetration testing engagements with minimal oversight, guiding them from planning through execution and reporting.
|
| Qualifications: The skills, abilities, specific knowledge, education, and minimum experience necessary to perform this job. |
- Minimum three (3) years of recent penetration testing experience focused on APIs, web applications, and mobile applications. Experience with AI model testing or AI security is highly desirable.
- Proven background in AI red teaming and adversarial attack development, including prompt engineering attacks, LLM-based vulnerability analysis, and model evasion techniques.
- Proficiency with penetration testing tools (e.g., Burp Suite Pro, Netsparker, Checkmarx) and AI/ML frameworks and tooling (e.g., TensorFlow, PyTorch, LLM APIs, LangChain).
- Strong communication and presentation skills to explain AI-related vulnerabilities to technical and non-technical stakeholders and drive remediation.
- One or more major ethical hacking certifications (e.g., GWAPT, CREST, OSWE, OSWA) and certifications or training in AI security techniques.
- Bachelor's degree from an accredited college/university or equivalent industry experience.
- Applicants must be currently authorized to work in the United States without the need for visa sponsorship, now or in the future.
|