Illumina's AI Center of Excellence is seeking a passionate, detail-oriented QA AI Engineer to support our world-class, innovative solutions. As a QA Engineer (AI Systems & Automation), you will lead quality strategy and test automation for critical data platforms and AI-powered experiences. You will ensure both traditional software and AI/agentic systems are reliable, explainable, and safe in an FDA-regulated environment.
You will:
Own end-to-end quality for complex web, API, data, and AI/ML-powered features
Design AI-system test strategies and automation that leverage GenAI and agentic frameworks
Roles and Responsibilities
Core QA & Automation
Develop and maintain test plans, test cases, traceability, and test data for product and AI features
Execute manual and automated tests for web applications, APIs, data workflows, and AI/ML features
Own automated regression suites and release-readiness criteria, and provide clear go/no-go quality signals
Participate in agile ceremonies, validate end-to-end functionality, and ensure user stories (including AI features) meet acceptance criteria
Manage the full defect lifecycle, including triage, prioritization, root-cause analysis, and verification of fixes
Maintain QA documentation, runbooks, and quality dashboards
AI-Systems QA Responsibilities
Design and execute test strategies for AI/LLM-powered capabilities, including virtual agents, chatbots, copilots, and RAG-based systems
Use approved LLM-powered tools to accelerate test design, data generation, exploratory testing, and script authoring
Auto-generate and maintain test scripts
Define and monitor AI-specific quality metrics (accuracy vs. ground truth, hallucination and error rates, safety/policy adherence); see the sketch after this list
Ensure AI and virtual agent experiences are accurate, consistent, and high quality
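For illustration only, here is a minimal Python sketch of how the AI-specific metrics named above (accuracy vs. ground truth, hallucination rate) might be computed over a small evaluation set; the data model, field names, and toy data are hypothetical and not part of Illumina's actual tooling.

```python
# Illustrative sketch only: one way to track accuracy vs. ground truth and
# hallucination rate for an LLM-powered feature. All names are hypothetical.
from dataclasses import dataclass


@dataclass
class EvalCase:
    question: str
    expected: str       # ground-truth answer
    model_answer: str   # answer produced by the AI feature under test
    grounded: bool      # did a reviewer/judge find the answer supported by sources?


def score(cases: list[EvalCase]) -> dict:
    """Aggregate AI-specific quality metrics over an evaluation set."""
    total = len(cases)
    correct = sum(c.model_answer.strip().lower() == c.expected.strip().lower() for c in cases)
    hallucinated = sum(not c.grounded for c in cases)
    return {
        "accuracy_vs_ground_truth": correct / total,
        "hallucination_rate": hallucinated / total,
    }


if __name__ == "__main__":
    cases = [
        EvalCase("Which gene does the panel target?", "BRCA1", "BRCA1", True),
        EvalCase("Which reference build is used?", "GRCh38", "GRCh37", False),
    ]
    print(score(cases))  # {'accuracy_vs_ground_truth': 0.5, 'hallucination_rate': 0.5}
```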
Non-Functional, Data Quality & Collaboration
Plan and execute performance, load, and scalability testing
Validate data integrity and transformation quality across complex biomedical data pipelines and AI-enhanced workflows (see the sketch after this list)
Partner with engineers and data scientists to ensure AI/ML models and integrations are testable, observable, and measurable post-deployment
Collaborate with development, DevOps, product, UX, and data teams to improve testability, shift quality left, and increase automated coverage
Integrate automation into CI/CD (e.g., GitHub Actions, Jenkins, Azure DevOps, GitLab CI), monitor test health and flakiness, and address coverage gaps
Communicate quality risks, trends, and mitigation plans to technical and non-technical stakeholders, including government partners
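As one illustration of the data-integrity validation mentioned above, a minimal pytest-style check of the kind that could run in any of the listed CI/CD systems might look like the sketch below; the file names and ID column are hypothetical placeholders, not an actual Illumina pipeline.

```python
# Illustrative sketch only: verify that no records are dropped or duplicated
# between two stages of a data pipeline. Paths and column names are hypothetical.
import csv

import pytest


def load_ids(path: str, id_column: str = "sample_id") -> set[str]:
    """Read the set of record IDs from a CSV extract."""
    with open(path, newline="") as f:
        return {row[id_column] for row in csv.DictReader(f)}


@pytest.mark.parametrize("source,target", [("pipeline_input.csv", "pipeline_output.csv")])
def test_no_records_dropped_or_duplicated(source, target):
    source_ids = load_ids(source)
    target_ids = load_ids(target)
    assert target_ids == source_ids, (
        f"missing: {source_ids - target_ids}, unexpected: {target_ids - source_ids}"
    )
```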
Preferred Experience/Education/Skills:
Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field
1+ years of software QA experience (manual and automation) in production environments
1+ years of experience testing APIs and microservices architectures
Hands-on experience integrating automated tests into CI/CD pipelines (GitHub Actions, Jenkins, Azure DevOps, or GitLab CI)
Professional-level proficiency in Python or JavaScript for test automation
Hands-on use of GenAI tools (e.g., ChatGPT, Claude, Copilot) for QA tasks such as test-case generation, data creation, and exploratory testing
Understanding of AI/agentic concepts
AI-driven data comparison and validation
Proficiency with Jira or similar issue tracking tools
Strong written and verbal communication skills including the ability to explain AI-related quality risks to stakeholders
Ability to prioritize, multitask, and operate effectively in complex, mission-driven environments
Experience testing AI/ML-powered features (LLM applications, RAG systems, agents, recommendation engines, or chatbots)
Experience with one or more:
LangChain or LangGraph
AWS Bedrock Agents or OpenAI Assistants API
MCP (Model Context Protocol) or similar orchestration frameworks
Experience designing or testing internal QA copilots or automation bots for test authoring or execution
Prior QA experience in healthcare, life sciences, biomedical informatics, or other regulated data environments
Required Experience:
IC