Responsibilities:
- Conduct research on responsible AI frameworks and global regulatory best practices (e.g., NIST AI RMF, EU AI Act, OECD AI Principles).
- Design and develop evaluation protocols, adversarial test scenarios, and use-case-specific test sets to assess GenAI risks (e.g., hallucination, toxicity, bias).
- Analyze trends in foundation models, RAG pipelines, agentic AI, and explainability techniques to inform our internal standards.
- Build reusable research assets and toolkits to guide AI risk reviews and model assurance processes.
- Support the rollout of internal frameworks, templates, and assessment checklists for responsible AI.
- Collaborate with the GenAI validation team and policy owners to embed governance throughout the GenAI lifecycle.
Requirements:
- 5 years of experience in AI/ML research, responsible AI development, or AI assurance roles.
- Strong foundation in AI ethics, GenAI architectures, and applied data science.
- Working knowledge of evaluation frameworks, risk benchmarking, and safety/guardrail testing.
- Excellent research, writing, documentation, and communication skills.
Preferred Skills & Experience:
- Demonstrated experience in developing AI/GenAI governance frameworks, responsible AI checklists, or evaluation methodologies.
- Strong understanding of the GenAI lifecycle, including foundation model selection, prompt engineering, fine-tuning, RAG, and output risk assessment.
- Familiarity with global AI governance and risk frameworks (e.g., NIST AI RMF, EU AI Act, OECD AI Principles) and the ability to translate them into practical implementation tools.
- Prior experience designing or executing adversarial testing, safety benchmarks, and use-case-specific test sets for GenAI applications.
- Hands-on experience creating research-backed guidance on hallucination, bias, explainability, and toxicity mitigation.
- Exposure to cross-functional AI risk projects or committees in a banking or regulated environment is a plus.