Job Description: Quality Engineer, Conversational L&D Platform
About the Role
We are building a conversational Learning & Development platform powered by Generative AI, reimagining how courses are created and delivered. As a Quality Engineer, you'll ensure that both the application and the underlying AI services are accurate, reliable, ethical, and delightful to use. You'll collaborate with developers, designers, and product managers to maintain the highest bar for quality in this next-generation EdTech product.
What You'll Do
Ensure quality across all phases of the platform lifecycle, from ideation to release, partnering with cross-functional teams.
Design and execute test strategies for the web application and AI-driven features, ensuring accuracy, reliability, and ethical AI behavior.
Build and optimize automation frameworks using Playwright (with MCP server extensions where applicable) to expand coverage and improve reliability.
Run and evolve regression test suites for new releases, combining manual and automated testing to uncover and isolate issues.
Leverage AI tools (e.g., Cursor, AI copilots) to accelerate test creation, generate rules/checks dynamically, and improve coverage.
Evaluate AI-generated content for clarity, factual accuracy, inclusivity, and bias minimization; build reusable eval test cases for generative features.
Certify microservices-based systems, ensuring robust integrations, scalability, and resilience under load.
Experiment with prompt engineering techniques and validation frameworks to refine AI-driven course creation and learner experiences.
Advocate for privacy, security, and transparency in AI involvement, aligning with ethical AI best practices.
Skills
5 years of experience in software quality engineering/testing.
2 years of hands-on Playwright experience (JavaScript/TypeScript) OR 1-2 years of experience in GenAI evaluation testing.
Proficiency in testing and automating web applications and microservices (API testing, integrations, performance).
Comfort with AI productivity tools (e.g., Cursor, Playwright MCP servers, AI copilots) to accelerate testing workflows.
Familiarity with prompt engineering concepts and frameworks for evaluating AI-generated outputs.
Strong analytical, documentation, and problem-solving skills, with the ability to design creative adversarial tests.
Commitment to responsible AI, user privacy, and building reliable, transparent experiences.
Bonus Skills
Experience with AI evaluation metrics and pipelines (semantic similarity, bias/toxicity detection, hallucination checks).
Background in EdTech or L&D product testing.
Familiarity with cloud-native testing and CI/CD pipelines.
Exposure to human-in-the-loop QA frameworks.