Purpose of the role: To ensure AI and Copilot solutions are safe, reliable and compliant, covering both traditional QA and AI-specific risks (bias, hallucination, explainability). The role defines assurance methods, quality gates and post-deployment monitoring to meet internal policy and regulator expectations.
Key Accountabilities
Role Specific Accountabilities
- Design and manage the enterprise testing strategy for AI/Copilot, blending traditional QA with AI-specific methods.
- Define test approaches covering functionality, performance, accuracy, reliability, ethical compliance and bias detection.
- Establish model-evaluation techniques (prompt variability, edge-case simulation, output consistency, scenario reasoning).
- Validate explainability, traceability and safety controls against policy and regulatory requirements.
- Evaluate and test human-in-the-loop workflows and decision checkpoints for appropriate oversight.
- Embed quality gates in iterative delivery, preventing progression without assurance evidence.
- Develop and maintain specialised test datasets, including adversarial, low-quality, domain-specific and edge-case inputs, to rigorously challenge model robustness and identify systemic weaknesses.
- Provide AI test engineering support to delivery squads, advising on model-readiness criteria, testability risks and the quality implications of design decisions, ensuring solutions are verifiable throughout the lifecycle.
- Define and run post-deployment validation, drift detection, incident triage and continuous model monitoring.
- Partner with Risk, Legal, Security and Compliance teams to meet control frameworks and audit standards.
- Provide inputs to risk/impact assessments, policy adherence checks and governance submissions.
- Lead incident investigations into unexpected AI behaviours, conducting deep-dive root-cause analysis across data quality, model logic, prompt flows, integration layers and human-in-the-loop steps; identify systemic failure points, recommend corrective actions and drive end-to-end remediation to prevent recurrence.
- Maintain test documentation, evaluation logs, datasets and reproducible evidence for audit.
- Uplift AI testing capability across teams through standards, templates, training and hands-on support.
- Champion continuous improvement of AI assurance, evaluating new testing tooling (LLM monitoring, bias scanners, prompt-diff tools, synthetic data generators) and maturing standards as organisational AI adoption scales.
- Ensure responsible AI principles (e.g. transparency, explainability, ISO 42001) are incorporated into all development.
- Provide insight to support business cases, investment decisions, risk assessments and prioritisation discussions at AI governance forums.
- Manage escalations, supporting the wider Data & AI Leadership team.
Shared Accountabilities
- Translate Divisional priorities into plans and deliverables that support overall Group strategic priorities
- Build the capability & capacity of functional resources to drive sustained commercial success
- Interpret & communicate the priorities for the Function, motivating and developing a high-performing team
- Own functional priorities, applying specialist expertise to put the customer at the heart of everything and drive a profitable business
- Initiate and develop critical external and internal relationships which create value, collaborating to deliver commercial and customer priorities
- Uphold corporate legal & regulatory responsibilities
- Implement and manage transformation activity & harness innovation to create a high-performing & sustainable business
Qualifications :
Functional/Technical (Role Specific)
Essential
- Higher education qualification (or equivalent experience) in Ethics, Law, Risk Management, Social Sciences, Data/Computer Science or a relevant field
- Experience designing and leading testing for complex digital or data-driven systems, including multi-component architectures, API-integrated platforms, event-driven workflows and systems operating under regulatory or high-assurance constraints.
- Clear understanding of AI-specific risks such as hallucinations, bias, drift, explainability gaps, safety breaches and misuse pathways, paired with the ability to design targeted tests that uncover model blind spots and systemic weaknesses.
- Knowledge of model-evaluation techniques, prompt-testing strategies and scenario-based testing approaches, including stress-testing prompts, adversarial input creation, failure-mode exploration and behaviour-driven evaluation.
- Familiarity with governance, audit and regulatory standards for AI, data and digital services, ensuring testing evidence aligns with internal risk frameworks, ISO 42001 controls, Responsible AI policies and external regulatory expectations.
- Experience developing structured QA strategies that integrate traditional and AI-specific assurance, mapping out test plans, risk-based prioritisation, acceptance criteria, model-readiness thresholds and quality gates aligned to lifecycle stages.
- Ability to define and execute test plans across functional, non-functional, ethical and performance dimensions, validating accuracy, latency, robustness, security, fairness, reliability and user-journey consistency.
- Strong analytical mindset with the ability to identify root causes of defects or unexpected AI behaviour, performing deep-dive diagnostics across data pipelines, vector stores, prompt flows, orchestration logic and human-in-the-loop checkpoints.
- Experience with post-deployment monitoring, drift detection and continuous validation, designing alerts, retraining triggers, performance thresholds and evaluation cadences to maintain long-term model integrity.
- Comfortable learning and adapting to emerging AI technologies and engineering patterns.
- Excellent stakeholder management and communication skills including seniorlevel engagement.
- Commercial awareness and a valuedriven mindset.
- Use of professional networks and external influencers, with clear evidence of learning and development to build and maintain skills and expertise
Additional Information :
Sector (desirable)
- Understanding of financial services industry markets and competitors
- Understanding of how financial services organisations operate and the associated regulatory environment or other regulated industries
- Awareness of the Mutual Sector and the needs and interests of Members
Remote Work :
No
Employment Type :
Full-time