Description
Embark on a dynamic career in tech support where your skills contribute to maintaining world-class technology solutions to ensure a seamless user experience.
As a Technology Support I team member in the Commercial & Investment Bank's Markets Tech team, you will ensure the operational stability, availability, and performance of our production application flows. Be part of the team responsible for troubleshooting, maintaining, identifying, escalating, and resolving production service interruptions for all internally and externally developed systems, ensuring a seamless user experience.
Job responsibilities
- Execute creative, LLM-assisted software solutions: design, develop, and troubleshoot LLM-powered applications and services (e.g., retrieval-augmented generation, agent workflows, structured extraction, classification) with a willingness to think beyond routine approaches, break down technical problems, deliver measurable outcomes, and apply novel Agentic AI thinking.
- Develop data quality rules and controls using LLMs: define and enforce guardrails for prompts, retrieved context, model inputs/outputs, and post-processing, including PII redaction, toxicity/safety filters, hallucination mitigation, output schema validation, and policy compliance.
- Provide Level 3 (L3) support for LLM-assisted production systems: own complex incidents, model and prompt rollouts/rollbacks, and dependency issues (vector stores, embeddings, feature stores), and ensure high availability, reliability, and adherence to SLAs, including latency and cost budgets.
- Support BAU operations for Markets businesses: maintain and evolve LLM use cases supporting Markets workflows, with disciplined change management, canary releases, A/B tests, and close partnership with product, controls, and operations.
- Create secure, high-quality production code: implement LLM-assisted microservices, synchronous and asynchronous inference pipelines (streaming where appropriate), deterministic fallbacks, circuit breakers, and observability for reliability in production.
- Produce architecture and design artifacts: deliver model cards, system/data lineage, RAG/agent reference architectures, prompt libraries and versioning strategies, evaluation plans, and control evidence, ensuring design constraints and regulatory expectations are met during development.
- Identify hidden problems and patterns: use telemetry, error analysis, prompt and context analytics, and drift detection to improve model selection, prompt strategies, retrieval quality, chunking/embedding strategies, and system architecture.
- Drive LLMOps best practices: integrate models, prompts, and evaluations into CI/CD; enforce approvals, segregation of duties, and reproducibility; automate regression and guardrail tests; and manage the lifecycle across environments.
- Ensure that model strengths, limitations, and risk profiles are understood, documented, and appropriately applied across different classes of software work; maintain a deep understanding of the strengths, limitations, and risk characteristics of approved LLMs (e.g., Claude, ChatGPT, and successor models), including safety profiles, context limits, determinism strategies, and fine-tuning vs. prompt-only trade-offs; and design multi-agent workflows that incorporate LLM-driven analysis, code generation, testing, and review with explicit human approval gates and segregation of duties.
- Ensure LLM-driven systems meet enterprise reliability and resilience expectations, including disaster recovery, fallback behaviors, regional resiliency, and performance SLOs.
Required qualifications, capabilities, and skills
- 1+ years of experience or equivalent expertise in troubleshooting, resolving, and maintaining information technology services.
- Strong coding skills in Java/Python and SQL, applied to building LLM-enabled microservices, retrieval pipelines, evaluators, and data tooling; solid understanding of data structures, algorithms, and object-oriented programming as applied to LLM latency, caching, and throughput.
- Hands-on experience with AWS and cloud data management (e.g., Redshift, DynamoDB, Aurora, Databricks), plus experience integrating managed model endpoints and embedding/vector services; familiarity with secure secret management, networking, and least-privilege access.
- Proficiency in automation, CI/CD, and agile methodologies, with LLMOps extensions: prompt and config versioning, automated evaluations, canary releases, and rollback strategies.
- Experience in system design, application development, and operational stability for LLM architectures, including retrieval layers, vector stores, caching, observability, rate limiting, and backpressure strategies.
- Strong analytical, problem-solving, and communication skills, including the ability to explain model behaviors, trade-offs, and control decisions to both technical and non-technical stakeholders.
- Provide L3 and BAU support for Markets by leveraging LLMs for incident triage, runbook retrieval, and pre-approved auto-remediation, with on-call coverage for LLM services and dependencies.
- Expert-level knowledge of how large language models work and hands-on experience training and fine-tuning approved models (e.g., Claude, ChatGPT, and successors), with a proven track record of integrating LLMs as controlled, reliable components of the software engineering lifecycle in regulated environments, ensuring determinism, reproducibility, safety, and traceability.
- Strong understanding of data modeling challenges in big data and LLM contexts: embeddings, chunking strategies, vector similarity nuances, retrieval quality measures, and document lineage.
Preferred qualifications, capabilities, and skills
- Define model usage guidelines outlining which models are appropriate for requirements analysis, code generation and refactoring, test generation, and documentation and explanation; lead the use of LLMs for structured requirements analysis, translating business and regulatory requirements into clear technical specifications and control implementations.
- Establish best practices for prompt-driven design and development, treating prompts and system instructions as versioned, reviewable engineering artifacts and ensuring change control and traceability; ensure prompt strategies support determinism, reproducibility, and traceability in regulated environments (e.g., seeded examples, constrained decoding, output schemas, and canonical evaluation sets); and oversee prompt libraries and reusable patterns aligned with enterprise coding and architectural standards, including shared retrieval components and guardrail policies.
- Ability to continuously learn about new developments in Agentic AI and LLM-driven coding.