Summary:
The AI Quality Analyst plays a pivotal role in ensuring the reliability, safety, and performance of AI-driven products. This position is critical in shaping the quality assurance framework for advanced AI systems, with a focus on accuracy, fairness, and compliance. The ideal candidate will lead the design and execution of comprehensive testing strategies for AI models, including large language models (LLMs), ensuring robust evaluation across response relevance, context awareness, and ethical alignment. With hands-on expertise in automation, CI/CD integration, and evaluation reporting, the analyst will drive continuous validation, bias mitigation, and response fallback mechanisms. This role demands a strong foundation in API testing, Python-based automation, and traceability of model behavior, all while adhering to legal and regulatory standards prior to production deployment. The position requires an on-site presence in Bangalore to collaborate closely with cross-functional teams and ensure high-quality AI product delivery.
Responsibilities:
- Design and implement end-to-end product testing strategies for AI and LLM-based systems
- Execute and maintain automated test frameworks using Python, Behave (BDD), and Git version control
- Conduct rigorous evaluation of AI outputs for accuracy, relevance, context retention, and safety
- Develop and manage feedback mechanisms for continuous model validation and improvement
- Perform bias detection and safety checks, and ensure consistent handling of harmful or edge-case prompts
- Validate response fallback mechanisms and non-subjective output handling
- Perform basic performance testing (p50/p95 response times), load testing, and SLA adherence monitoring
- Ensure data traceability, auditability, and secure storage across testing and evaluation cycles
- Support integration with CI/CD pipelines for automated quality gates
- Apply relevant legal or compliance frameworks during testing phases before production release
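The performance-testing responsibility above (p50/p95 response times with SLA monitoring) can be sketched in a few lines of Python. This is only an illustrative sketch: the thresholds and the idea of a boolean "SLA gate" are hypothetical, not part of this role's actual tooling.

```python
import statistics

def latency_percentiles(samples_ms):
    """Return (p50, p95) from a list of response times in milliseconds."""
    if len(samples_ms) < 2:
        raise ValueError("need at least two samples")
    # quantiles(n=100) returns 99 cut points; index 49 is the 50th
    # percentile, index 94 the 95th
    cuts = statistics.quantiles(sorted(samples_ms), n=100)
    return cuts[49], cuts[94]

def meets_sla(samples_ms, p50_max_ms=500, p95_max_ms=2000):
    """Hypothetical SLA gate: both percentiles must stay under their caps."""
    p50, p95 = latency_percentiles(samples_ms)
    return p50 <= p50_max_ms and p95 <= p95_max_ms
```

A CI quality gate could call something like `meets_sla` on latencies collected during a load-test run and fail the pipeline when it returns `False`.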
Requirements:
- 7-10 years of experience in software or AI product testing
- Proven experience as an Automation Tester with hands-on testing of AI/ML products
- Strong proficiency in API testing, automation frameworks, and BDD practices
- Expertise in Python and Behave for test scripting and execution
- Experience with Git version control and CI/CD integration
- In-depth understanding of LLM evaluation: accuracy, relevance, context, and safety
- Experience with feedback loops, continuous validation, and model improvement workflows
- Familiarity with performance testing (p50/p95), response-time SLAs, and load testing
- Knowledge of data storage, traceability, and auditability in AI systems
- Experience handling edge cases, harmful prompts, and safety mitigation strategies
- Understanding of legal or regulatory frameworks applicable to AI product testing
- Must be available for on-site work in Bangalore
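For the LLM-evaluation requirements above, one very rough proxy for "response relevance" is word overlap between prompt and answer. Real pipelines use richer signals (embedding similarity, judge models); the function below is purely a hypothetical sketch of the kind of automated check this role builds.

```python
def relevance_score(prompt: str, response: str) -> float:
    """Toy relevance proxy: fraction of distinct prompt words echoed in
    the response. Illustrative only -- production LLM evaluation uses
    far more robust metrics than bag-of-words overlap."""
    prompt_words = {w.lower().strip(".,!?") for w in prompt.split()}
    response_words = {w.lower().strip(".,!?") for w in response.split()}
    if not prompt_words:
        return 0.0
    return len(prompt_words & response_words) / len(prompt_words)
```

A score threshold on checks like this could feed the continuous-validation feedback loop mentioned above, flagging low-relevance outputs for human review.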
Additional Details:
Role: AI Quality Analyst
Experience: 7-10 years
Must-Have skills: Automation testing, Gen AI, AWS
Notice Period: 30-40 days
Location: Bangalore
Employment type: Full-time
Benefits:
Salary: 20-35 LPA
PF/ESIC
Required Skills:
Automation testing, Gen AI, AWS
Required Education:
Graduate