Test Automation Engineer – AI Platform

METROMAKRO


Job Location:

Pune - India

Monthly Salary: Not Disclosed
Posted on: 6 hours ago
Vacancies: 1 Vacancy

Job Summary

Role Overview

We are seeking a highly skilled Test Automation Engineer to join the Agentic AI Squad within our AI Platform organization. The ideal candidate will have strong experience in designing, implementing, and maintaining automated test frameworks for complex distributed systems, with a particular focus on AI/LLM-powered capabilities, agentic workflows, and high-scale backend services.

This role involves building robust automation pipelines, validating model-driven behaviors, ensuring system reliability through automated quality gates, and collaborating closely with AI Engineers and Architects to ensure end-to-end quality across the platform.

Key Responsibilities

Automation & Quality Engineering

  • Design, implement, and maintain scalable automated test suites for API, integration, end-to-end, and workflow-level testing of agentic AI systems.
  • Build automation frameworks that validate correctness, robustness, safety, and determinism in LLM-driven behaviors and multi-agent orchestration.
  • Develop synthetic datasets, fixtures, and mocks to replicate complex agentic scenarios and edge cases (a sketch follows this list).
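
As a minimal sketch of the kind of deterministic agent test this involves, the pytest-style example below replaces the live LLM with a scripted fake; the agent loop, tool names, and helpers are illustrative assumptions, not part of an existing codebase.

    # Hypothetical names throughout: a scripted FakeLLM stands in for the real
    # model so the agent workflow can be exercised deterministically.
    from dataclasses import dataclass, field

    @dataclass
    class FakeLLM:
        """Returns canned tool-call decisions instead of calling a real model."""
        script: list
        calls: list = field(default_factory=list)

        def complete(self, prompt: str) -> str:
            self.calls.append(prompt)
            return self.script[len(self.calls) - 1]

    def plan_and_act(llm, tools, task: str) -> str:
        """Toy agent loop: ask the LLM which tool to run, then run it."""
        decision = llm.complete(f"Task: {task}. Which tool?")
        return tools[decision](task)

    def test_agent_routes_refund_requests_to_refund_tool():
        llm = FakeLLM(script=["refund_tool"])
        tools = {"refund_tool": lambda task: f"refund issued for: {task}"}
        result = plan_and_act(llm, tools, "order 42 arrived damaged")
        assert result == "refund issued for: order 42 arrived damaged"
        assert len(llm.calls) == 1  # exactly one model round-trip expected

Because the fake returns canned decisions, identical inputs always yield identical tool calls, which is what makes assertions on agent behavior repeatable in CI.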

AI/LLM-Specific Testing

  • Implement automated evaluation pipelines for LLM outputs, including functional tests, regression checks, safety tests, and model-behavior verification (a minimal example follows this list).
  • Collaborate with AI Engineers to integrate model evaluation metrics (e.g. hallucination detection, grounding accuracy, response consistency) into CI/CD systems.
  • Validate prompt templates, agent tools, retrieval workflows, and chain-of-thought instrumentation.
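
A minimal evaluation-harness sketch follows; the golden set, the token-overlap score (a stand-in for real grounding or consistency metrics), and the threshold are illustrative assumptions.

    # Regression-style evaluation: score model outputs against a golden set and
    # fail the build when the aggregate score drops below a threshold.
    from difflib import SequenceMatcher

    GOLDEN_SET = [
        {"prompt": "Summarize: METRO serves HoReCa customers.",
         "reference": "METRO serves hotels, restaurants and caterers."},
    ]

    def similarity(a: str, b: str) -> float:
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def evaluate(generate, threshold: float = 0.6) -> float:
        """`generate` is the system under test: prompt -> model output."""
        scores = [similarity(generate(case["prompt"]), case["reference"])
                  for case in GOLDEN_SET]
        mean_score = sum(scores) / len(scores)
        assert mean_score >= threshold, f"quality regression: {mean_score:.2f}"
        return mean_score

Wired into CI, evaluate would be called with the production generation function so that a drop below the threshold fails the pipeline like any other regression.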

Platform Integration

  • Work with backend and platform teams to ensure test coverage across microservices, feature flags, workflow orchestrators, and event-driven systems (see the sketch below).
  • Contribute to testable system design and ensure new features include clear acceptance criteria and automation strategies.
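
One way such coverage can be exercised without real infrastructure is an in-memory event bus that records the emitted event sequence; the topics, services, and bus API below are illustrative assumptions.

    # A test double for an event-driven workflow: services are plain handlers
    # subscribed to topics, and the test asserts on the observable event sequence.
    class InMemoryBus:
        def __init__(self):
            self.events = []
            self.handlers = {}

        def subscribe(self, topic, handler):
            self.handlers.setdefault(topic, []).append(handler)

        def publish(self, topic, payload):
            self.events.append((topic, payload))
            for handler in self.handlers.get(topic, []):
                handler(payload)

    def test_order_workflow_emits_expected_events():
        bus = InMemoryBus()
        # "inventory service": reserves stock whenever an order is placed
        bus.subscribe("order.placed", lambda p: bus.publish("stock.reserved", p))
        bus.publish("order.placed", {"order_id": 42})
        assert [topic for topic, _ in bus.events] == ["order.placed", "stock.reserved"]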

CI/CD & Release Quality

  • Integrate automated testing pipelines into GitHub Actions (or equivalent), ensuring fast feedback cycles and continuous verification.
  • Implement quality gates, flakiness detection, and test result analytics to improve reliability and reduce manual QA overhead (illustrated below).
  • Participate in release readiness reviews and ensure that automated checks provide full coverage for functional and non-functional requirements.
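
As a sketch of what a flakiness gate might look like, the script below scans JUnit-style XML reports collected from repeated runs and fails the build when a test both passed and failed; the report directory layout and file naming are assumptions, not an existing tool.

    # Post-test quality gate: exit non-zero when any test shows mixed outcomes
    # across runs, so the CI pipeline (GitHub Actions or equivalent) blocks the merge.
    import sys
    import xml.etree.ElementTree as ET
    from collections import defaultdict
    from pathlib import Path

    def collect_outcomes(report_dir: str) -> dict:
        """Map each test id to the set of outcomes seen across all report files."""
        outcomes = defaultdict(set)
        for report in Path(report_dir).glob("**/junit-*.xml"):
            for case in ET.parse(report).getroot().iter("testcase"):
                name = f'{case.get("classname")}::{case.get("name")}'
                failed = case.find("failure") is not None or case.find("error") is not None
                outcomes[name].add("fail" if failed else "pass")
        return outcomes

    def main(report_dir: str = "test-reports") -> int:
        flaky = [name for name, results in collect_outcomes(report_dir).items()
                 if results == {"pass", "fail"}]
        for name in flaky:
            print(f"FLAKY: {name}")
        return 1 if flaky else 0

    if __name__ == "__main__":
        sys.exit(main(*sys.argv[1:]))

A final step in the GitHub Actions job can run this script and use its exit code as the merge gate.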

Bug Reproduction and Root Cause Analysis

  • Understand user feedback and reproduce bugs submitted to the team.
  • Perform root cause analysis to enable AI Engineers to work on bug fixes, and provide guidance on acceptance criteria.

 


    Qualifications:

    Must-Have Qualifications

    Education

    • Bachelor's or Master's degree in Computer Science, Software Engineering, or a related technical field, or equivalent practical experience.

    Work Experience & Skills

    • 5 years of experience in Test Automation Engineering, Software Development Engineering in Test (SDET), or similar roles.
    • In-depth experience working with a cloud platform (Google Cloud Platform preferred).
    • Proven hands-on experience building automation frameworks at scale (Python or JavaScript/TypeScript preferred).
    • Strong experience testing distributed systems, microservices, asynchronous workflows, and APIs.
    • Familiarity with AI/LLM-based product testing, including prompt evaluation, workflow validation, or model-behavior testing.
    • Strong experience with CI/CD (GitHub Actions preferred).
    • Strong skills in debugging, root-cause analysis, and building reliable automation pipelines.
    • Excellent communication skills in English (written and spoken).

     


      Additional Information:

      Other Requirements

      • Strong analytical mindset and ability to reason rigorously about system behavior.
      • Ability to operate in a fast-paced environment with evolving AI technologies and requirements.
      • Passion for automation, reproducibility, and engineering excellence.
      • Collaborative approach and willingness to work closely with AI research and product teams.

      Nice-to-Have

      • Experience evaluating LLMs, agentic frameworks, or retrieval-augmented generation pipelines.
      • Familiarity with synthetic data generation, dataset curation, or annotation workflows.
      • Experience with Agile/Scrum methodologies.
      • Understanding of safety frameworks, model confidence scoring, or risk-based testing in AI systems.

      Remote Work:

      No


      Employment Type:

      Full-time



      About Company


      METRO is a leading international wholesale company with food and non-food assortments that specialises in serving the needs of hotels, restaurants and caterers (HoReCa) as well as independent traders. Around the world, METRO has 15 million customers who can choose whether to shop in o ...
