Job Title: Applied AI SDET
Location: Dallas TX
Duration / Term: Contract
Experience Desired: 5 Years
Job Description:
AI-generated code introduces quality risks that traditional QA processes were not designed to catch. Subtle logic errors, incomplete edge-case handling, and systematically misapplied patterns can pass human review when reviewers are moving fast. Your role goes beyond building a safety net - you will design and implement AI-driven quality engineering processes, tools, and agents that make quality intrinsic to the delivery pipeline rather than a gate at the end of it. You will feed intelligence back into the practice so the pipeline continuously improves, and you will build the automation infrastructure that allows the pod to ship with confidence at a pace that manual QA cannot sustain.
What You Will Do
- Own the Test Strategy: Define and maintain the end-to-end test strategy for AI-generated code across unit, integration, contract, and E2E layers, calibrated to the risks introduced by agentic delivery.
- Build AI-Driven Quality Agents: Design and implement AI-powered quality agents and tooling - using Claude or equivalent - that automate test generation, coverage gap analysis, regression triage, and defect pattern detection within the CI/CD pipeline.
- Validate AI-Generated Tests: Critically assess Claude-generated unit and integration tests for completeness, correctness, and meaningful coverage. Identify gaps, redundancies, and tests that pass without actually validating behavior.
- Build and Maintain Automation Suites: Design, implement, and own automated test suites that run as quality gates in the CI/CD pipeline, including regression safety nets that protect the codebase from agentic regressions.
- TestContainers & Isolated Test Environments: Design and manage containerized, isolated test environments using TestContainers and Docker to ensure backend service tests run against production-equivalent dependencies - databases, message queues, and third-party service stubs - without shared state or environment bleed.
- Synthetic Data Engineering: Design and maintain synthetic data strategies that produce realistic, consistent, and constraint-safe test data for backend service testing, ensuring coverage of edge cases, boundary conditions, and stateful workflows without reliance on production data.
- Establish AI Quality Engineering Processes: Define and document repeatable quality engineering processes tailored to the AI-DLC model - covering how tests are generated, reviewed, validated, and evolved alongside AI-generated features.
- Identify AI Failure Patterns: Detect and document systematic patterns in Claude's output quality - recurring anti-patterns, common security misses, or edge cases Claude consistently overlooks - and feed these back to the AI Solution Owner to improve specs and prompt context.
- Partner on Spec Quality: Review feature specifications before agentic generation begins, flagging ambiguities or missing acceptance criteria that will produce untestable or unverifiable output.
- Own Your Delivery: Take full responsibility for test coverage and quality sign-off on every feature delivered by the pod, from spec handoff through production deployment and post-deploy verification.
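The synthetic data responsibility above can be illustrated with a minimal sketch: a seeded, deterministic generator that bakes in uniqueness constraints and boundary values up front. The `make_users` function, its fields, and the chosen boundary ages are all hypothetical examples, not part of this posting.

```python
# Sketch of a constraint-safe synthetic data generator for backend tests:
# seeded (so test data is reproducible), unique keys by construction, and
# boundary values emitted first so edge cases are always covered.
import random
import string

def make_users(n, seed=0):
    """Generate n user rows with unique emails and boundary-age coverage."""
    rng = random.Random(seed)   # fixed seed -> same data on every run
    ages = [0, 17, 18, 120]     # boundary conditions placed first, by design
    rows = []
    for i in range(n):
        local = "".join(rng.choices(string.ascii_lowercase, k=8))
        rows.append({
            "id": i + 1,                          # unique primary key
            "email": f"{local}{i}@example.test",  # index suffix guarantees uniqueness
            "age": ages[i] if i < len(ages) else rng.randint(18, 90),
        })
    return rows

rows = make_users(10)
assert len({r["email"] for r in rows}) == 10          # constraint: unique emails
assert {0, 17, 18, 120} <= {r["age"] for r in rows}   # edge cases present
```

Because the generator is deterministic, a failing test can be replayed with identical data, which matters when triaging regressions introduced by agentic code changes.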
Must Have Experience:
- SDET / QA Engineering Proficiency: 5 years in a software development engineer in test (SDET) or senior QA role, with a track record of building and owning test automation in an enterprise software delivery context.
- Python & FastAPI Testing: Proven experience testing Python/FastAPI backend services, including async endpoint testing, dependency injection overrides, and integration test patterns with pytest and httpx.
- TypeScript & React Testing: Experience writing and maintaining tests for TypeScript/React frontends using React Testing Library, Jest, and component-level testing patterns.
- Node.js Backend Testing: Proven experience testing backend services using Jest, Supertest, or Mocha - including middleware testing, async handler validation, and integration testing of REST and GraphQL APIs built with Express, Fastify, or NestJS.
- TestContainers & Isolated Environments: Hands-on experience with TestContainers (Python or Java) to spin up isolated containerized dependencies - PostgreSQL, Redis, Kafka, or equivalent - for reliable, repeatable integration tests with zero shared state.
- Synthetic Data Engineering: Demonstrated ability to design synthetic data pipelines that generate realistic, constraint-safe test data for stateful backend services, covering edge cases and boundary conditions without using production data.
- API & Contract Testing: Proven experience with REST and GraphQL API testing, contract testing (Pact or equivalent), and validating service boundaries in microservices or distributed systems.
- AI-Augmented Development Experience: Demonstrable experience using AI coding agents (Claude Code, GitHub Copilot, Cursor, or equivalent) as a delivery tool, including the ability to critically evaluate and extend AI-generated test output.
- CI/CD Pipeline Integration: Experience embedding automated test suites as quality gates in CI/CD pipelines with GitHub Actions, ArgoCD, or equivalent, including test parallelization, flakiness management, and coverage reporting.
- Kubernetes & AWS Fundamentals: Working knowledge of Kubernetes (EKS) and AWS services relevant to test environment management, including ECR, S3, RDS, and CloudWatch for test observability.
- SAST & Security Testing Awareness: Familiarity with SAST tooling (SonarQube, Checkmarx, or equivalent) and the ability to interpret and act on security scan findings in AI-generated code.
- BDD & Spec-Driven Testing: Experience with Gherkin/Cucumber or equivalent BDD frameworks and their integration with agentic spec pipelines.
- E2E Testing: Experience with Cypress or Playwright for end-to-end testing of React applications.
Nice to Have Experience:
- Performance & Load Testing: Experience with k6, Gatling, or equivalent tools for validating service performance under load.
- Rust Familiarity: Exposure to testing Rust services, or an understanding of Rust's testing model in a polyglot microservices context.
- Prompt Engineering: Familiarity with prompt engineering concepts and how spec quality influences AI-generated test completeness.
Key Skills:
SDET / QA, AI, AI-Augmented Development, Python & FastAPI Testing, TypeScript & React Testing