At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
L2 Support Engineer: Gen AI Projects (GCP)
Experience: 2-4 years
Role Summary
The L2 Support Engineer will monitor, troubleshoot, and support multiple Gen AI applications built on Google Cloud Platform (GCP). The role focuses on following runbooks, identifying where issues are occurring, providing first-level fixes, and handing off deep technical problems to the L3 engineering team.
Key Responsibilities
- Monitor Gen AI pipelines, dashboards, alerts, and system health across various applications.
- Follow runbooks to investigate failures and identify exactly where the process or workflow is breaking.
- Perform basic troubleshooting and resolve issues related to access, permissions, environment setup, or configuration.
- Validate errors using logs, job history, and the GCP console (Cloud Functions, Cloud Run, Pub/Sub, Firestore, GCS).
- Raise detailed tickets to L3 with clear findings, logs, and reproduction steps.
- Support daily operations such as checking stuck workflows, failed requests, or incomplete data ingestions.
- Perform minor fixes such as restarting jobs, updating config values, clearing queues, or re-triggering pipelines.
- Handle user access requests, IAM role updates, and environment-related incidents within approved guidelines.
- Ensure SLAs are met for incident acknowledgment, analysis, and escalation.
- Maintain documentation, update runbooks, and contribute to process improvements.
- Coordinate closely with L3 engineers, cloud teams, and application owners as needed.
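To give a flavor of the triage work above, here is a minimal Python sketch of runbook-style log triage: it reads a structured log entry (shaped loosely like a Cloud Logging record) and decides whether the issue is an L2-level fix or an escalation to L3. The error codes and triage rules are hypothetical illustrations, not an actual EY runbook.

```python
import json

# Hypothetical error codes an L2 engineer can resolve directly;
# a real runbook would map specific error signatures to actions.
L2_FIXABLE = {"PERMISSION_DENIED", "CONFIG_MISSING", "QUEUE_BACKLOG"}

def triage(log_line: str) -> str:
    """Decide whether an L2 fix applies or the ticket goes to L3.

    Expects a JSON log entry roughly shaped like a Cloud Logging
    structured record: {"severity": ..., "jsonPayload": {"error_code": ...}}.
    """
    entry = json.loads(log_line)
    code = entry.get("jsonPayload", {}).get("error_code", "UNKNOWN")
    if entry.get("severity") in ("DEBUG", "INFO"):
        return "no-action"
    if code in L2_FIXABLE:
        return f"l2-fix:{code}"
    return f"escalate-l3:{code}"

# A failed permission check is an L2 fix; an unrecognized model error
# is escalated with its code attached, ready for the L3 ticket.
print(triage('{"severity": "ERROR", "jsonPayload": {"error_code": "PERMISSION_DENIED"}}'))
print(triage('{"severity": "ERROR", "jsonPayload": {"error_code": "MODEL_TIMEOUT"}}'))
```

The escalation string doubles as the ticket summary, which keeps handoffs to L3 consistent with the "clear findings and reproduction steps" expectation above.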
Required Skills
- Basic understanding of cloud environments (preferably GCP).
- Ability to read logs and identify errors across functions, pipelines, and APIs.
- Familiarity with JSON, REST APIs, and debugging simple configuration issues.
- Good understanding of Python basics, or the ability to read/debug simple scripts (not mandatory).
- Strong problem-solving skills with an ability to follow structured troubleshooting steps.
- Good communication skills for reporting issues, writing summaries, and escalating to L3.
Nice-to-Have
- Exposure to Gen AI/LLM applications, Document AI, or RAG systems.
- Experience with ticketing systems (Jira, ServiceNow).
- Basic knowledge of Pub/Sub, Cloud Run, Cloud Functions, or Firestore.
L3 Gen AI Engineer: Support & Engineering (GCP)
Experience: 5-10 years (with 2 years in AI/LLM/RAG-specific systems)
Role Summary
The L3 Gen AI Engineer will provide deep technical support, debugging, and enhancements for enterprise Gen AI applications running on Google Cloud Platform (GCP). This role handles escalations from L2, resolves complex failures across AI pipelines, improves system performance, and implements fixes across models, orchestration, backend services, and integrations.
Key Responsibilities
- Diagnose and resolve complex issues in Gen AI workflows (LLM pipelines, RAG retrieval, document processing, audio-to-text summarization, etc.).
- Debug model-related problems such as prompt failures, accuracy drops, hallucinations, versioning issues, and latency.
- Troubleshoot backend services including Cloud Run, Cloud Functions, Pub/Sub, Firestore, Composer DAGs, and API integrations.
- Fix failures in document ingestion, Document AI extraction, vector indexing, and data transformation pipelines.
- Review logs, traces, and metrics to identify root causes and apply permanent fixes.
- Implement hotfixes, patch releases, new prompt versions, and workflow improvements.
- Collaborate with UI, middleware, and ML teams to resolve cross-component issues.
- Optimize performance, throughput, token usage, and cloud costs across AI workloads.
- Improve observability by enhancing logging, monitoring, and alerting strategies.
- Manage model lifecycle tasks: deploying new LLM endpoints, tuning prompts, and updating embeddings where needed.
- Ensure best practices for security, PII handling, role-based access, and environment integrity.
- Provide guidance to L2 support by updating runbooks, troubleshooting guides, and automation scripts.
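One recurring workflow improvement at this level is wrapping flaky model and pipeline calls in retry logic. The sketch below shows a generic exponential-backoff helper in Python; the flaky dependency is simulated, and in practice the callable would wrap an actual LLM endpoint or Pub/Sub request.

```python
import time

def with_backoff(fn, max_attempts=4, base_delay=0.5, sleep=time.sleep):
    """Retry a transient failure with exponential backoff.

    fn is any zero-argument callable; TimeoutError is retried with
    doubling delays, and the last error is re-raised if all attempts fail.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except TimeoutError:
            if attempt == max_attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...

# Simulated flaky dependency: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient")
    return "ok"

print(with_backoff(flaky, sleep=lambda s: None))  # succeeds on the third attempt
```

Injecting the `sleep` function keeps the helper testable; production code would leave the default `time.sleep` in place and tune the delays against the service's SLAs.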
Required Skills
- Strong hands-on experience in building or supporting Gen AI/LLM-based systems (OpenAI, Vertex AI, Claude, etc.).
- Deep understanding of RAG workflows, vector databases, embeddings, and retrieval pipelines.
- Strong programming skills in Python (must-have).
- Experience with LLM orchestration frameworks (LangChain, LiteLLM, custom orchestrators).
- Good knowledge of GCP cloud services:
  - Cloud Run, Cloud Functions
  - Pub/Sub, Firestore, GCS
  - Vertex AI / Document AI
  - Logging, Monitoring, IAM
- Ability to debug distributed workflows, async jobs, and event-driven architectures.
- Experience with CI/CD pipelines and Git-based workflows.
- Strong troubleshooting and root-cause analysis capabilities.
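The RAG retrieval workflows listed above reduce, at their core, to ranking stored chunks by embedding similarity. Here is a minimal, self-contained Python sketch; the three-dimensional "embeddings" and chunk texts are toy stand-ins, and a real system would call an embedding model and a vector database instead.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, index, top_k=2):
    """Rank stored chunks by cosine similarity to the query embedding."""
    scored = sorted(index, key=lambda item: cosine(query_vec, item["vec"]),
                    reverse=True)
    return [item["text"] for item in scored[:top_k]]

# Toy corpus: each chunk carries a hand-written 3-d "embedding".
index = [
    {"text": "reset IAM role",   "vec": [0.9, 0.1, 0.0]},
    {"text": "restart pipeline", "vec": [0.1, 0.9, 0.1]},
    {"text": "clear pub/sub",    "vec": [0.0, 0.2, 0.9]},
]
print(retrieve([0.8, 0.2, 0.1], index, top_k=1))  # → ['reset IAM role']
```

When retrieval quality drops in production, this is the layer to inspect first: stale embeddings, a mis-indexed corpus, or a changed embedding model all surface here before any prompt-level debugging is useful.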
Nice-to-Have
- Experience with multimodal pipelines (Speech-to-Text, audio processing, call summarization).
- Experience deploying or fine-tuning custom models.
- Knowledge of TypeScript or middleware frameworks for end-to-end debugging.
- Familiarity with enterprise security, networking, and role-based access controls.
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society, and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.