Job Title: Gen AI Engineer - Model Fine-Tuning
Location: Dallas, TX (Hybrid, 3 days a week)
Engagement Type: Contract
Overview
This is a hands-on role requiring deep expertise in LLM fine-tuning, data curation, and reinforcement learning optimization, with the goal of reducing model hallucinations and enhancing contextual accuracy for production-grade cognitive systems.
Key Responsibilities
- Fine-tune large-scale LLMs (e.g., GPT, Claude, LLaMA, Mistral) using curated domain datasets for banking risk and compliance workflows.
- Collaborate with data engineering teams to build high-quality labeled datasets for supervised and reinforcement learning.
- Apply advanced context engineering and prompt optimization techniques to improve model interpretability and reasoning.
- Evaluate and mitigate model drift, bias, and hallucination using quantitative performance metrics.
- Develop and automate evaluation pipelines for continuous fine-tuning and model retraining.
- Partner with the Cognitive Agent Development team to integrate tuned models into agentic workflows and decision chains.
- Contribute to model governance, versioning, and audit frameworks to ensure explainability and compliance.
Required Skills & Experience
- 5-10 years of hands-on experience in AI/ML, with a focus on LLM fine-tuning, prompt engineering, or context adaptation.
- Strong proficiency with Python, PyTorch, and TensorFlow, and frameworks such as Hugging Face Transformers, LangChain, and PEFT (Parameter-Efficient Fine-Tuning).
- Proven experience building and labeling domain-specific datasets and applying data augmentation strategies.
- Familiarity with RLHF (Reinforcement Learning from Human Feedback) and evaluation metrics for generative models.
- Understanding of multi-agent architectures, orchestration frameworks (LangGraph, CrewAI, AutoGen, etc.), and memory management for AI agents.
- Exposure to banking risk analytics or compliance data preferred.
- Strong grounding in data security, privacy, and model governance standards in regulated industries.
Preferred Qualifications
- Master's or PhD in Computer Science, AI, or a related discipline.
- Experience deploying LLM-based agents in production environments.
- Knowledge of vector databases (FAISS, Pinecone, Chroma) and retrieval-augmented generation (RAG) pipelines.
- Contributions to open-source AI projects or publications on fine-tuning, evaluation, or multi-agent systems.