Sr SDET (AI-Augmented)


Job Location:

Blue Bell, PA - USA

Monthly Salary: Not Disclosed
Posted on: 5 hours ago
Vacancies: 1 Vacancy

Job Summary

We are seeking two hands-on Software Development Engineers in Test (SDETs) who treat AI as a core part of their workflow - not a convenience. You will be embedded on delivery teams executing key platform initiatives within a high-stakes, high-velocity delivery program, writing automated tests from day one. The defining expectation for these roles: you will use GitHub Copilot and Claude Code to build CLI-driven test tools, scaffold entire Karate feature file suites, and generate Playwright test scripts at a speed that a purely manual approach cannot match. We want engineers who can sit down with a Copilot chat session and a service's OpenAPI spec and emerge 30 minutes later with a working, parameterized Karate scenario suite.

What You'll Do

- Author and maintain API and integration tests using the Karate framework (Karate DSL, JUnit 5, Maven Surefire, Cucumber, JDBC for Oracle DB assertions) against Java/Spring Boot microservices
- Build Playwright Python UI automation scripts for web-facing components; use Claude Code to generate page object models and test fixtures from design specs or running UIs
- Build bespoke CLI test tools in Python - driven by AI pair programming - that can hit service endpoints, validate responses, seed test data, or parse logs from the command line without needing a full test harness spun up
- Integrate all test suites into Jenkins CI/CD pipelines: write and own the pipeline stages, fix flaky tests, and keep quality gates green across dev, QA, perf, and e2e environments
- Use AI tooling daily to: generate Karate .feature files from JIRA acceptance criteria, write Python test utilities from a prompt and a curl example, and debug pipeline failures by feeding stack traces to Claude Code for root-cause acceleration
- Generate and manage synthetic test data (XML claim payloads, JSON request bodies, Oracle seed scripts), using AI to extrapolate realistic variant coverage from known examples
- Write contract tests that validate service API boundaries across independently deployed microservices on Kubernetes/OpenShift
- Partner with engineers during feature development to define testability requirements, shift coverage left, and prevent regressions before merge

What We Expect from Your AI Usage

This is not a "familiarity with Copilot" checkbox. We expect:

- You can prompt GitHub Copilot or Claude Code to generate a full Karate feature file suite from an OpenAPI spec in under an hour
- You can build a working Python CLI tool from scratch using Claude Code in a single pairing session - something that calls a REST endpoint, parses the response, and outputs a pass/fail summary
- You use AI to accelerate debugging: pasting Jenkins log output or a failed test stack trace into Claude Code and iterating to a fix, not waiting for someone else to triage it
- You review and refine AI-generated test code - you are the quality bar, not the generator

Required Skills & Experience

- 3 years of hands-on SDET or automation engineering experience on production systems
- Proficiency in the Karate framework: DSL feature files, JUnit 5 runner configuration, Maven Surefire integration, Cucumber reporting, and JDBC-backed DB assertions
- Playwright experience with Python for UI automation - locator strategy, async test patterns, CI integration
- Strong Python scripting skills: building CLI tools, calling REST APIs with requests or httpx, parsing JSON/XML, writing reusable test utilities
- Jenkins CI/CD: writing and debugging pipeline stages, not just running them
- Familiarity with Docker and Kubernetes/OpenShift for understanding service topology and test targeting
- Active, practiced daily user of GitHub Copilot and Claude Code - able to demonstrate concrete examples of test code you built with AI that you would not have written as fast by hand
- Comfortable in Bitbucket-based Git workflows with PR-based collaboration

Nice to Have

- Experience with Oracle/SQL - writing JDBC assertion queries for claim data validation
- Familiarity with Spring Boot test slices (@SpringBootTest) for white-box unit/integration test contribution
- Exposure to Dynatrace or Kibana for correlating test failures to observability signals
- Familiarity with Helm or GitOps-based deployment models for understanding what changed between test runs
- Experience with RabbitMQ or message-driven integration testing
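The kind of Python CLI test tool described above - call a REST endpoint, validate the response, print a pass/fail summary - can be sketched as follows. This is a minimal illustration, not a tool from this team; the endpoint URL and required field names are hypothetical placeholders, and only the standard library is used.

```python
#!/usr/bin/env python3
"""Minimal endpoint smoke-check CLI: GET a URL, validate the JSON
response, and print a one-line pass/fail summary for CI logs."""
import argparse
import json
import sys
import urllib.request


def fetch_json(url: str, timeout: float = 10.0) -> tuple[int, dict]:
    """GET the URL and return (HTTP status, parsed JSON body)."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.status, json.load(resp)


def validate(body: dict, required_fields: list) -> list:
    """Return a list of failure messages; an empty list means pass."""
    return [f"missing field: {f}" for f in required_fields if f not in body]


def summarize(failures: list) -> str:
    """Render a one-line summary suitable for a Jenkins console log."""
    return "PASS" if not failures else "FAIL: " + "; ".join(failures)


def main() -> int:
    parser = argparse.ArgumentParser(description="API smoke check")
    parser.add_argument("url")
    parser.add_argument("--require", action="append", default=[],
                        help="field that must appear in the JSON response")
    args = parser.parse_args()
    status, body = fetch_json(args.url)
    failures = [] if status == 200 else [f"unexpected status {status}"]
    failures += validate(body, args.require)
    print(summarize(failures))
    return 0 if not failures else 1


# Guarded on argv so importing the module (e.g. from a test) is a no-op.
if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(main())
```

A hypothetical invocation might look like `python smoke_check.py https://example.invalid/health --require status`, with the exit code driving a pipeline quality gate.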
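Similarly, "extrapolating realistic variant coverage from known examples" can be sketched as taking one known-good JSON payload and crossing boundary values into it. The field names below are illustrative, not from any real claim schema:

```python
"""Sketch of synthetic test-data generation: derive boundary-case
variants of a request body from one known-good example."""
import copy
import itertools


def make_variants(base: dict, overrides: dict) -> list:
    """Cross-product the override values into copies of the base payload.

    `overrides` maps a field name to the list of values to try for it;
    every combination of values yields one variant payload.
    """
    fields = list(overrides)
    variants = []
    for combo in itertools.product(*(overrides[f] for f in fields)):
        payload = copy.deepcopy(base)
        payload.update(dict(zip(fields, combo)))
        variants.append(payload)
    return variants


# Hypothetical base payload; in practice this would come from a real example.
base_claim = {"claim_id": "C-1001", "amount": 125.50, "currency": "USD"}
variants = make_variants(
    base_claim,
    {"amount": [0, 0.01, 999999.99], "currency": ["USD", "EUR"]},
)
# 3 amounts x 2 currencies -> 6 variant payloads
```

Each variant could then be serialized and fed to a Karate scenario outline or a CLI tool as a request body.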

Background Check: No

Drug Screen: No


Stipend: true
