Role Overview
Bank is hiring a Technical Lead, AI Security to join our CISO team in Mumbai. This is a critical hands-on role focused on ensuring the trustworthiness, resilience, and compliance of AI/ML systems, including large language models (LLMs). You will work at the intersection of cybersecurity and AI, shaping secure testing, understanding secure MLOps/LLMOps workflows, and leading the technical implementation of defenses against emerging AI threats. This role requires both strategic vision and strong engineering depth.
Key Responsibilities
- Lead and operationalize the AI/ML and LLM security roadmap across training, validation, deployment, and runtime to enable an AI Security Platform approach.
- Design and implement defenses against threats such as adversarial attacks, data poisoning, model inversion, prompt injection, and fine-tuning exploits, using industry-leading open-source and commercial tools.
- Build hardened workflows for model security, integrity verification, and auditability in production AI environments.
- Leverage AI security tools to scan, fuzz, and penetration-test models.
- Apply best practices from the OWASP Top 10 for ML/LLMs, MITRE ATLAS, the NIST AI RMF, and ISO/IEC 42001 to test AI/ML assets.
- Ensure the AI model security testing framework aligns with internal policy, national regulatory requirements, and global best practices.
- Plan and execute security tests for AI/LLM systems, including jailbreaking, RAG hardening, and bias/toxicity validation.
Required Skills & Experience
- 8 years in cybersecurity, with at least 3 years of hands-on experience in AI/ML security or secure MLOps/LLMOps
- Proficient in Python, TensorFlow/PyTorch, Hugging Face, LangChain, and common data science libraries
- Deep understanding of adversarial ML/LLM, model evaluation under threat conditions, and inference- and training-time attack vectors
- Experience securing cloud-based AI workloads (AWS, Azure, or GCP)
- Familiarity with secure DevOps and CI/CD practices
- Strong understanding of AI-specific threat models (MITRE ATLAS) and security benchmarks (OWASP Top 10 for ML/LLMs)
- Ability to communicate technical risk clearly to non-technical stakeholders
- Ability to guide developers and data scientists in remediating AI security risks
- Certifications: CISSP, OSCP, GCP ML Security, or relevant AI/ML certifications
- Experience with AI security tools or platforms (e.g., model registries, lineage tracking, policy enforcement)
- Experience with RAG, LLM-based agents, or agentic workflows
- Experience in regulated sectors (finance, public sector)
Keywords: Cyber Security, AI, ML, Large Language Models (LLM), OSCP, CISSP, AI Security, ML Security