Job Summary
The AIML Security Risk Assessment Specialist will play a critical role in validating reports and making final risk assessments for AIML models used in various business applications and use cases. This role will work closely with the Digital Risk Management Portfolio team to ensure the security and integrity of AIML models, use cases, and applications.
Key Responsibilities
- Risk Assessment: Understand the business requirements, finalise the scope, and perform end-to-end risk assessments.
- Validate reports from various sources and make final risk assessments for AIML models, considering factors such as data quality, model performance, and potential security threats.
- Conduct security risk assessments for GenAI models, tools, and platforms.
- Perform in-depth risk assessments of GenAI systems and associated data pipelines, both internally developed and third-party.
- Evaluate the risk profile of different model architectures (e.g. transformer-based LLMs, multimodal models) and deployment types (cloud, edge, open-source, API-based).
- AIML Model Review: Review AIML models for potential security vulnerabilities, including data poisoning, model evasion, and adversarial attacks.
- Report Analysis: Analyse reports from AIML model testing and validation teams to identify potential security risks and provide recommendations for mitigation.
- Risk Classification: Classify risks associated with AIML models and provide recommendations for risk mitigation and remediation.
- Collaboration: Work closely with cross-functional teams, including data science, engineering, and security, to ensure secure AIML system development and deployment.
- Review AIML use cases and provide assurance, feedback, and confirmation on findings.
- Reasonable understanding of LLM security, agentic AI security, and RAG security.
Required Skills
- AIML Fundamentals: Strong understanding of AIML concepts, including machine learning pipelines, model architecture, deep learning, and natural language processing.
- Secure software development and MLOps (DevSecOps principles).
- Hands-on experience with GenAI toolkits and APIs (e.g. OpenAI, Claude, Bard, LLaMA, Hugging Face Transformers).
- Security Expertise: Experience with security risk assessment, threat modelling, and vulnerability management.
- Analytical Skills: Excellent analytical and problem-solving skills, with the ability to interpret complex data and reports.
- Communication: Strong communication and collaboration skills, with the ability to provide clear and concise recommendations.
Experience
- Experience with AIML Security Frameworks: Familiarity with AIML security frameworks and guidelines such as Gartner, NIST 100, and ISO 42001.
- Knowledge of Regulatory Requirements: Understanding of regulatory requirements such as GDPR, HIPAA, or CCPA.
- Experience with Risk Management: Familiarity with risk management frameworks and methodologies such as NIST, ISO 27001, or ISO 31000.
- Overall experience in the information and cyber security domain.
- Understanding of the BFSI domain, so that terms such as DPSC, payments ecosystem, API banking, cloud IAM, and application security can be applied in the context of risk assessment and management.
Education
- Bachelor's or Master's degree in Computer Science, Information Security, or a related field.
- Minimum 7-15 years of overall experience, including 2-3 years of experience in AIML / GenAI security risk management or a related field.
- CISA, CISM, or at least one AIML security certification.
This job description highlights the key responsibilities and required skills for an AIML / GenAI Security Risk Assessment Specialist role. The focus is on validating reports, making final risk assessments, and providing recommendations for risk mitigation and remediation.
AI, ML, Artificial Intelligence, Machine Learning, Risk Assessment, security risk, cyber security risk