Responsibilities:
- Develop and evolve Gen AI validation methodologies and frameworks in line with regulatory requirements and industry best practices
- Validate foundation models, Gen AI solutions, fine-tuning approaches, and prompt engineering
- Implement monitoring for deployed AI solutions
- Assess risks of AI models and solutions, addressing hallucinations, bias, toxicity, adversarial threats, model drift, and regulatory/ethical challenges
- Investigate performance issues and develop remediation plans
- Create executive-ready reports and remediation plans
- Partner with stakeholders to embed risk controls into MLOps/LLMOps pipelines
Recommended Experience / Exposure
You should have:
- Hands-on experience with foundation model architectures, RAG, fine-tuning, and prompt engineering approaches
- Experience in developing or evaluating Gen AI solutions, with knowledge of LLM architectures
- Familiarity with the end-to-end Generative AI lifecycle is crucial
- Knowledge of guardrails and risk mitigation techniques for hallucination, toxicity, bias, and adversarial/security concerns
- 3-12 Years of Overall Experience
Model Development, Model Architecture, Fine-tuning, Gen AI Solutions, AI Models
Full Time