About the Role
We are seeking a Lead AI Engineer with strong hands-on experience in Google Cloud Platform (GCP) and production-grade GenAI systems to design, build, and operate scalable LLM and RAG solutions.
This is a delivery-first, hands-on engineering role with ownership across solution design, development, and operationalization of AI systems. You will work with Gemini and Vertex AI while applying strong engineering fundamentals to build reliable, secure, and scalable AI services.
This role is ideal for a senior engineer with a strong GCP background who has already delivered GenAI/RAG solutions in production, even if not exclusively on Gemini.
Location & Engagement
- Location: Anywhere in India (Fully Remote)
- Working Model: Offshore with 3–4 hours of overlap with US time zones
- Contract Duration: 6 months (strong potential for extension)
Role Level & Expectations
- Profile: Senior / Lead AI Engineer (Senior Individual Contributor)
- Ownership: End-to-end technical ownership (architecture through hands-on delivery)
- Leadership: Technical leadership by example (no people management)
- Focus: Production delivery with flexibility for research and experimentation
Key Responsibilities
Design, build, and operate LLM-powered systems using Gemini and Vertex AI
Implement RAG architectures at scale, including ingestion, retrieval, and generation
Build and orchestrate LLM agents using LangChain or similar frameworks
Integrate AI capabilities via API-driven architectures
Debug and optimize end-to-end LLM pipelines:
- Chunking strategies
- Embeddings
- Retrieval logic
- LLM response behavior
Deliver production-ready AI services including:
- Monitoring and observability
- Rate limiting and cost controls
- Reliability and fallback strategies
Contribute to solution design and technical decision-making
Continuously evaluate and experiment with new LLM models and platform features
Implement AI safety, security, and compliance controls
Collaborate with cross-functional teams across time zones
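As an illustration of the chunking strategies named under pipeline debugging above, a common baseline is fixed-size chunking with overlap. The sketch below is a minimal, generic version; the function name and parameter values are illustrative assumptions, not part of this role's stack:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap.

    Overlap keeps sentence fragments shared across chunk boundaries,
    which typically improves retrieval recall in RAG ingestion.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # advance by the non-overlapping stride
    return chunks
```

In practice, chunk size and overlap are tuned per corpus; debugging retrieval quality often starts with inspecting how chunk boundaries split key passages.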
MUST-HAVE Skills & Experience
Cloud & Platform (Required)
- Strong hands-on experience with Google Cloud Platform (GCP) in production
- Experience with services such as Cloud Run, GKE, Cloud Storage, Pub/Sub, BigQuery, and IAM
- Proven ability to design and operate production workloads on GCP
- Experience integrating Vertex AI services is a strong plus
GenAI & LLM Engineering
Hands-on experience delivering GenAI solutions in production
Experience integrating LLM platforms (Gemini, OpenAI, Anthropic, Bedrock, etc.)
Strong experience with LangChain or similar LLM orchestration frameworks
Solid understanding of:
- Prompt engineering
- Agent orchestration
- LLM pipeline debugging
RAG & Vector Search
Hands-on experience building RAG systems
Experience with vector databases such as Pinecone, FAISS, Chroma, or similar
Strong understanding of vector similarity search fundamentals
Practical knowledge of RAG evaluation metrics such as:
- MRR and nDCG
- Faithfulness and Answer Relevance
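The ranking metrics listed above have simple closed forms. A minimal sketch of MRR (reciprocal rank of the first relevant result) and nDCG@k (discounted cumulative gain normalized by the ideal ranking); the function names and data shapes are illustrative assumptions:

```python
import math

def mrr(ranked_ids: list[str], relevant_ids: set[str]) -> float:
    """Reciprocal rank for one query: 1/rank of the first relevant hit, else 0."""
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0

def ndcg_at_k(ranked_ids: list[str], relevance: dict[str, float], k: int) -> float:
    """nDCG@k with graded relevance (doc_id -> gain)."""
    dcg = sum(
        relevance.get(doc_id, 0.0) / math.log2(rank + 1)
        for rank, doc_id in enumerate(ranked_ids[:k], start=1)
    )
    ideal_gains = sorted(relevance.values(), reverse=True)[:k]
    idcg = sum(g / math.log2(r + 1) for r, g in enumerate(ideal_gains, start=1))
    return dcg / idcg if idcg > 0 else 0.0
```

Faithfulness and Answer Relevance, by contrast, are typically judged by an LLM or human rather than computed from rankings.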
Programming & APIs
AI Safety, Security & Compliance
NICE-TO-HAVE (Strong Plus)
- Experience with Terraform or Infrastructure-as-Code tools
- Background in data engineering analytics or data science
- Knowledge of data warehousing concepts (ETL/ELT analytics platforms)
- Experience operating large-scale production RAG systems
- Experience evaluating and comparing new LLM models
- Exposure to bias mitigation techniques in enterprise AI systems
- Multi-cloud experience (AWS alongside GCP)
What Success Looks Like
- You independently design, build, deploy, and debug Gemini-powered AI systems
- You deliver production-grade AI solutions, not prototypes
- You contribute meaningfully to architecture and technical strategy
- You adapt quickly as GenAI platforms and models evolve
- You take full ownership of AI systems from design to production
About Opplane
Opplane specializes in delivering advanced data-driven and AI-powered solutions for financial services, telecommunications, and reg-tech companies, accelerating digital transformation.
Our leadership team includes Silicon Valley entrepreneurs and executives from organizations such as PayPal, Xerox PARC, Amazon, Wells Fargo, and SoFi, with deep expertise in product management, data governance, privacy, machine learning, and risk management.
Team & Culture
Global & Multicultural: Diverse perspectives, global collaboration
Startup Energy: Fast-moving, impact-driven environment
Ownership Mindset: Engineers own what they build
Collaborative & Friendly: Open, curious, and supportive culture