REQUIREMENTS:
Total Experience: 6 years in Data Science / Applied Machine Learning, including 1-2 years in Generative AI / LLM-based solutions.
Strong proficiency in Python, with hands-on experience using LangChain, Pandas, NumPy, scikit-learn, Hugging Face Transformers, Pydantic, etc.
Experience deploying GenAI workloads on AWS, including Amazon Bedrock, SageMaker, Lambda, ECS, API Gateway, Step Functions, and CloudWatch.
Deep understanding of LLM architectures, tokenization, RAG, embeddings, and vector databases such as FAISS, Qdrant, Pinecone, and OpenSearch.
Hands-on experience implementing agent-based architectures and multi-step reasoning pipelines.
Strong expertise in data wrangling, feature engineering, and building ML models for classification, regression, clustering, and NLP.
Experience with CI/CD, API development, and integrating GenAI models into production-grade applications.
Familiarity with LangGraph, AutoGPT, CrewAI, or other custom agent frameworks.
Knowledge of vector databases such as Qdrant, Snowflake Cortex, embedding models, and custom RAG pipelines.
Prior contributions to open-source GenAI projects or research work in LLM/NLP domains.
Hands-on experience with MLOps, including model versioning and monitoring tools (MLflow, Weights & Biases, SageMaker Model Monitor).
Exposure to fine-tuning and parameter-efficient tuning (LoRA, QLoRA) of LLMs.
Understanding of data privacy, PII redaction, security, and compliance considerations in GenAI applications.
Strong analytical thinking, problem-solving, and communication skills.
RESPONSIBILITIES:
Build, deploy, and optimize Generative AI and LLM-based applications for scalable enterprise use cases.
Design and implement RAG systems, vector search pipelines, and agent-based reasoning workflows.
Architect and develop end-to-end ML/GenAI solutions, including data ingestion, preprocessing, experimentation, evaluation, and deployment.
Work with cross-functional teams to define problem statements and translate business requirements into technical solutions.
Develop and maintain GenAI-powered APIs, microservices, and automation workflows using AWS services.
Optimize model performance, cost efficiency, token usage, prompt structures, and overall system reliability.
Build reusable GenAI components, including prompt templates, toolchains, memory modules, evaluators, and monitoring dashboards.
Implement robust MLOps practices for version control, experiment tracking, CI/CD pipelines, and automated deployment.
Ensure high-quality production readiness through testing, model validation, drift detection, and continuous monitoring.
Conduct research, POCs, and benchmarking of emerging LLMs, embeddings, vector databases, and agent frameworks.
Prepare documentation, architecture diagrams, and best practices for internal and client-facing teams.
Collaborate with engineering, product, and cloud teams to ensure seamless integration of GenAI features into applications.
Mentor junior engineers and contribute to knowledge sharing, code reviews, and innovation initiatives.
Ensure compliance with security, privacy, and responsible AI guidelines across all GenAI implementations.
Additional Information:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
Remote Work:
No
Employment Type:
Full-time
Nagarro helps future-proof your business through a forward-thinking, fluidic, and CARING mindset. We excel at digital engineering and help our clients become human-centric, digital-first organizations, augmenting their ability to be responsive, efficient, intimate, creative, and sustainable.