Senior AI Security & Governance Engineer
Experience: 5-8 Years
Duration: 6 Months (Contract)
Working Hours: 02:00 PM - 11:00 PM IST
Location: Remote (first week onsite at the Trivandrum/Kochi office)
About the Role
We are hiring a Senior AI Security & Governance Engineer to build and enforce runtime-level AI safety and governance controls across enterprise AI systems.
This is a high-impact engineering role focused on embedding security guardrails and risk controls directly into AI workflows, not policy documentation. You will help ensure safe, compliant, and production-ready AI deployments while enabling innovation at scale.
Role Summary
You will design and implement AI guardrails, risk mitigation strategies, and runtime enforcement mechanisms for agent-based and LLM-powered systems. The ideal candidate combines strong engineering skills with a deep understanding of AI risks and enterprise governance.
Key Responsibilities
- Design and implement AI safety controls including:
- Prompt filtering and sanitization
- Output validation and moderation
- Misuse and abuse prevention mechanisms
- Embed AI governance through code, automation, and system design
- Build auditability, traceability, and monitoring frameworks for AI systems
- Define and standardize security and safety patterns across AI agents and workflows
- Identify and mitigate AI risk scenarios and failure modes (e.g., prompt injection, hallucinations, data leakage)
- Collaborate with engineering and security teams to ensure secure AI deployments in production
Must-Have Skills
- Strong proficiency in Python / scripting
- Hands-on experience with:
- Security, governance, or risk management systems
- Policy-as-code frameworks / rule engines
- Solid understanding of:
- AI/LLM risks: hallucinations, prompt injection, data leakage
- Enterprise security and compliance principles
- Experience implementing runtime controls and guardrails in production systems
Good to Have
- Experience with IAM / governance tools (Saviynt, ServiceNow approval workflows)
- Familiarity with Responsible AI frameworks / Model Risk Management
- Exposure to AI/ML systems or LLM-based applications
Education
- Bachelor's degree in Computer Science, Information Technology, or a related field
What We're Looking For
- Strong risk mindset with engineering depth (not just policy-focused)
- Ability to balance innovation with security and compliance
- Hands-on, ownership-driven professional who can build and scale AI safeguards
Required Skills:
Python, Scripting, Security, Risk Management Tools