Quality Engineer
Job Summary
(We are supporting our client Granter to find an amazing Quality Engineer in Lisbon.)
Granter is building an AI Grant Consultant: an AI agent that handles grants for your organization end-to-end. It helps companies discover funding opportunities, draft stronger proposals faster, and manage funded projects after approval.
Our mission is to enable businesses of all sizes to leverage incentives to grow and innovate. We are deeply involved in the Portuguese startup ecosystem and were the winners of the Web Summit Pitch Competition 2025.
We are now looking for a Quality Engineer.
You will be responsible for quality across the product. That means understanding how the system works end-to-end, validating new features, identifying risks early, and building the systems that ensure quality over time.
You will start hands-on. The goal is to progressively automate the parts that matter most.
Part of the product includes AI-powered features and requires structured evaluations (e.g., LLM-as-a-Judge) and new approaches to testing. You don't need prior AI experience, but you should be curious and willing to learn.
What This Role Involves
You sit with the product team and are responsible for quality across the platform
You work closely with engineering and operations, understanding both the technical perspective and how the product is used by customers
You identify bugs, inconsistencies, and edge cases early, before they reach users
Define what should be tested, how, and at what level (manual, automated, or both)
Build and evolve automated tests for high-value and repetitive workflows
Build automated evaluations for AI features where relevant
Proactively review product behavior to detect recurring risks and improve overall reliability
What We Are Looking For
We care more about evidence than years, but this role requires hands-on experience working on real products.
Experience
2 years of experience in QA Quality Engineering or a similar role
Additional Pluses
Experience working in a startup environment
Experience with tools like Cypress, pytest, Postman, or similar
Familiarity with LLM-based features or AI systems
Exposure to evaluation tools or approaches (e.g., structured evals, LLM-as-a-Judge)
Core Skills
Experience writing and maintaining automated tests
Ability to test workflows in a structured and rigorous way
Comfortable scripting in Python
Comfortable debugging issues across frontend, backend, and APIs
Basic understanding of CI/CD and how testing fits into it
Ability to identify where something breaks, why it breaks, and how to communicate it clearly
Communication