Risk, Operational Risk (Artificial Intelligence Coverage), Vice President, Dallas or Salt Lake City
Dallas, TX - USA
Job Summary
Organization: Risk Division, Operational Risk
Team / Role: Lead for AI Architecture, Artificial Intelligence Coverage / Operational Risk
Level / Location: Vice President, Dallas / Salt Lake City
The Operational Risk Department at Goldman Sachs is an independent risk management function responsible for developing and implementing a standardized framework to identify, measure, and monitor operational risk across the firm. The Lead for AI Architecture is a specialized role within this framework, dedicated to strengthening the firm's oversight of AI-related risks arising from model development, deployment, infrastructure, technical standards, and the internal AI technology stack. This professional will be responsible for continuously identifying, monitoring, measuring, and assessing operational risks associated with the firm's AI architecture decisions, including secure-by-design principles, model governance within the tech stack, infrastructure resilience, explainability, data quality and drift, prompt injection defenses, and the alignment of technical architecture with the firm's AI risk appetite. The role ensures that the firm's AI systems are architected, deployed, and operated in a manner that is secure, resilient, explainable, and compliant with regulatory obligations.
Responsibilities:
- Identify, monitor, and analyze operational risks arising from the design, development, and deployment of AI systems, with a focus on risks such as inadequate system alignment, lack of explainability, data quality and drift, prompt injection, hallucination and inaccurate outputs, non-deterministic behavior, bias and discrimination, model overreach/expanded use, reputational risk from AI failures, agent action authorization bypass, tool chain manipulation and injection, agent state persistence poisoning, and multi-agent trust boundary violations. Develop evidence-based challenges focused on improving architectural risk posture.
- Monitor the firm's AI architecture control inventory for sufficiency and completeness, challenging the absence of controls and the implementation of controls within engineering standards. This includes oversight of mitigations such as AI Firewall Implementation and Management, User/App/Model Firewalling/Filtering, AI System Observability, System Acceptance Testing, Data Quality and Classification/Sensitivity, Human Feedback Loop for AI Systems, LLM-as-a-Judge Automated Evaluation, Providing Citations and Source Traceability, AI Model Version Pinning, Agent Authority Least Privilege Framework, Tool Chain Validation and Sanitization, Agent Decision Audit and Explainability, Multi-Agent Isolation and Segmentation, Data Filtering From External Knowledge Bases, Preserving Source Data Access Controls in AI Systems, Role-Based Access Control for AI Data, Encryption of AI Data at Rest, and Quality of Service and DDoS Prevention for AI Systems.
- Champion secure-by-design principles across the AI technology stack, ensuring that security, privacy, and risk controls are embedded into AI system architecture from inception rather than retrofitted.
- Conduct data analysis to identify trends and patterns in AI system performance, model behavior, observability telemetry, and security events, augmenting such analysis with qualitative observations to monitor risk-taking trends through bespoke metrics at firmwide and divisional/sub-divisional levels. Escalate concerns to senior management when warranted.
- Contribute to divisional and functional risk profile assessments by highlighting AI architecture risk issues and trends to senior divisional managers and the senior Operational Risk management team.
- Conduct evidence-based scenario analysis by working with stakeholders to develop plausible tail risk scenarios around AI architecture failures, including prompt injection attacks leading to data exfiltration, hallucination-driven erroneous financial advice, cascading failures in multi-agent systems, agent authorization bypass leading to unauthorized transactions, data drift causing model degradation, and infrastructure resilience failures. These scenarios are used to quantify specific business exposure to potential risk.
- Oversee model governance within the tech stack, ensuring that AI models are subject to version pinning, system acceptance testing, observability, human feedback loops, and automated evaluation before and during production deployment.
- Ensure alignment of technical architecture with the firm's AI risk appetite, reviewing architectural decisions for consistency with risk tolerance levels, regulatory requirements, and internal policies.
- Oversee infrastructure resilience for AI systems, including monitoring for availability risks, Denial of Wallet attacks, VRAM exhaustion, and GPU infrastructure dependencies. Ensure Quality of Service and DDoS prevention controls are implemented and effective.
- Facilitate operational risk event and data collection related to AI architecture incidents; perform detailed reviews of trends to identify significant risks and ensure monitoring and remediation.
- Review New Activities and ensure that operational risks arising from new AI model deployments, new architectural patterns, agentic system rollouts, and infrastructure migrations are properly considered.
- Contribute to review and challenge of AI architecture control assessments to ensure the risk and control self-assessment outcomes are consistent, credible, and underpinned by appropriate evidence.
- Remain current on business drivers and regulatory and industry changes impacting the firm's AI architecture activities and obligations, including the EU AI Act, NIST AI 600-1, the NIST Cybersecurity Framework, FFIEC IT Booklets, and ISO 27001.
- Identify and drive initiatives that improve AI architecture risk management activities at the firm.
Qualifications
- Strong understanding of AI/ML architecture concepts, including foundation models, LLMs, RAG systems, agentic AI frameworks, MCP servers, vector databases, embedding pipelines, and model deployment infrastructure.
- Experience with secure-by-design principles, AI firewalling, prompt injection defenses, model observability, and explainability frameworks.
- Knowledge of internal control frameworks such as NIST 800-53, NIST AI 600-1, ISO 27001, COBIT, the Cloud Security Alliance Cloud Controls Matrix, and the EU AI Act.
- Strong business acumen with general awareness of technology-related processes, risks, and business flows in financial services.
- 7 years of relevant experience, which could include working in operational risk, a financial institution's technology division, a technology company that builds or maintains enterprise AI/ML systems, cloud services, offensive or defensive cybersecurity, or IT/information security audit.
- Strong verbal and written communication skills with the ability to present with impact and influence.
- Ability to work in a fast-paced environment with a strong delivery focus.
- Strong organizational skills; project management experience a plus.
- Proficiency in Word, Excel, PowerPoint, and SharePoint/OneDrive; SQL, graph databases, and Tableau would be a plus.
- Relevant certifications such as CISA, CISM, or related AI/ML and cybersecurity certifications.
- Familiarity with enterprise risk management best practices and controls.
- Bachelor's Degree in Computer Science, Cybersecurity, Business and Technology Management, Finance, Data Science, or a related discipline.
Required Experience:
Exec

ABOUT GOLDMAN SACHS
The Goldman Sachs Group, Inc. is a leading global investment banking, securities, and asset and wealth management firm that provides a wide range of financial services.