AI QE Architect
Duration: Full-time
Location: Onsite - West Palm Beach, Florida
Job description:
We are seeking an experienced AI Quality Engineering (QE) Architect & Governance Leader to drive end-to-end quality automation and governance for AI, ML, and GenAI solutions across mission-critical and highly regulated environments. This role combines deep expertise in automation, AI/ML technologies, and agentic workflows with strong leadership in governance, risk, and compliance, especially in the Energy/Utilities/Nuclear domains.
You will architect future-state AI QE frameworks, establish testing standards for LLMs and agents, automate validation pipelines, enforce model governance, and lead enterprise-wide assurance initiatives.
Core Experience & Qualifications
- 10-14 years as a Technology Architect, SDET, or QE Leader with strong coding skills in Python, TypeScript, or Java.
- Proven experience designing automation frameworks or developer tools for large engineering organizations.
- Hands-on expertise with Large Language Models (LLMs), prompt engineering, and safety evaluation techniques.
- Exposure to agentic AI systems and orchestration tools such as LangGraph, AutoGen, CrewAI, or similar agent frameworks.
- Experience implementing the Model Context Protocol (MCP) for real-time automation, autonomous workflows, or CI/CD integrations.
- Experience working in highly regulated industries: Energy, Utilities, Nuclear, Healthcare, BFSI, or similar.
AI / ML Technologies
- Practical experience with frameworks and ecosystems:
LangChain, Hugging Face, GPT models, vector databases
- Working knowledge of ML/DL libraries:
Scikit-learn, TensorFlow, Keras, PyTorch, Hugging Face Transformers, OpenCV, NLTK, and BART
- Understanding of RAG architectures, embeddings, and semantic search (bonus).
GenAI & AI Agent Development
- Expertise in designing, developing, validating, and deploying Generative AI solutions.
- Experience building AI agents, multi-agent workflows, or autonomous decision systems.
- Ability to define governance for:
  - LLM drift detection
  - Prompt quality standards
  - Agent monitoring & observability
  - Data lineage & model versioning
Automation & Quality Engineering
- Strong experience building test automation frameworks using Python, PyTest, Selenium, Playwright, and Requests.
- Ability to create automated tests covering:
  - Functional
  - API
  - Integration
  - Performance
  - Security
  - AI/ML validation (LLM testing, model accuracy, hallucination detection)
- Proficiency in API testing and validation of RESTful services.
- Plus: experience with performance/load testing tools such as k6 or JMeter.
AI Governance & Compliance
- Establish AI/ML quality standards, testing guidelines, and risk controls.
- Define governance around:
  - Data security & privacy
  - Model evaluation KPIs (accuracy, bias, toxicity, hallucination rates)
  - Regulatory alignment for Energy/Utility operations
  - Continuous monitoring & drift alerts
- Experience working with IRB, compliance, or audit teams.
Cloud, DevOps & CI/CD
- Knowledge of deploying AI and automation solutions on AWS.
- Experience implementing CI/CD pipelines for ML and LLM models:
  - Model versioning & lifecycle
  - Retraining workflows
  - Automated evaluation gates
  - Infrastructure-as-code (IaC) familiarity
- Experience implementing observability frameworks for AI/ML systems.
SDLC & Collaboration
- Strong understanding of end-to-end SDLC and QE methodologies.
- Work closely with developers, product managers, data scientists, and business stakeholders.
- Ability to communicate clearly about test strategy, risks, coverage, and governance readiness.
- Skilled in defect triage, risk-based testing, and quality strategy leadership.