Role: Principal AI Engineer, Agentic Systems & LLM Platforms (W2 Only)
Duration: 6 Months
Location: Irvine, CA (Hybrid)
Must Have Skills:
- Strong programming skills in Python (Java/Go/TypeScript is a plus)
- Hands-on experience with LLM applications in production (tool calling, structured outputs, RAG, evaluation)
- Experience with agentic systems / multi-agent workflows
- Strong background in distributed systems & APIs (REST, RPC, Kafka, SQS, Pub/Sub)
- Knowledge of vector databases, hybrid search, and knowledge graphs
- Cloud experience (AWS/Azure/GCP), Docker, Kubernetes
- Solid data engineering fundamentals (SQL, data modeling)
- Experience with LLM evaluation (A/B testing, adversarial testing, eval frameworks)
- Familiarity with Codex / AI coding assistants
Core Responsibilities:
- Design and build agentic AI systems (planner-executor, multi-agent orchestration)
- Develop RAG pipelines for enterprise data (policies, rules, documents)
- Integrate LLMs with ML models (fraud and risk decisioning systems)
- Implement AI safety & guardrails (PII, prompt injection, auditability, HITL)
- Drive LLMOps/MLOps (CI/CD, monitoring, telemetry, evaluation pipelines)
- Collaborate with cross-functional teams and mentor engineers
Additional Information:
All your information will be kept confidential according to EEO guidelines.
Remote Work:
No
Employment Type:
Contract