Senior Generative AI Model Training Engineer (OSS / LLM Fine-Tuning)
GEN AI Engineer/Data Scientist
Location: NY or VA preferred; open to remote
Length: 6 months to start, with possibility of extensions
Interviews: Week of 4/20
Location preference: candidates in the New York area will be given preference (New York, NY and McLean, VA)
Hours: EST hours at a minimum, but remote is OK
We have an immediate opening for five seasoned GenAI consultants who have worked specifically on training models. They need someone to support fine-tuning AI models and prompt engineering in particular. Backend technologies aren't a focus here; the changes you have to make depend on the type of model.
Whenever we adopt an OSS AI model, we have to train it specifically for our business. These are generic models, so approach them with that mindset and keep Capital One's constraints in mind: we keep all our information in one sphere and create one more guard around it. The information can be used, but when fetching anything from outside, it goes through guardrails before we hand it over.
We still need a base, and that base comes from a model that is general and published; we can't do it all from scratch. That is the whole idea of those OSS models (LLaMA and similar).
Use cases for the models: high-level training work, oriented more toward agent and customer scenarios.
Overview
We are hiring five experienced Generative AI practitioners with a strong background in training, fine-tuning, and hardening open-source large language models (LLMs) for enterprise environments. This role is not about building models from scratch, but about taking general-purpose OSS models (e.g., LLaMA and similar) and adapting them safely, responsibly, and effectively to enterprise-specific use cases and constraints.
You will work hands-on with model training pipelines, guardrail design, and domain adaptation, ensuring models align with business context, governance, security, and compliance requirements while still leveraging the power of foundational OSS models.
Key Responsibilities
Model Training & Adaptation
Fine-tune and customize open-source foundation models (e.g., LLaMA and similar OSS LLMs) for enterprise-specific use cases.
Design and implement domain-specific training strategies that adapt general models to business context without retraining from scratch.
Optimize models using techniques such as instruction tuning, supervised fine-tuning (SFT), parameter-efficient tuning (LoRA/PEFT), and reinforcement approaches where appropriate.
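To make the parameter-efficient tuning item concrete: the core idea behind LoRA is to freeze the base weight matrix and train only a low-rank additive delta. The sketch below is purely illustrative (toy dimensions, NumPy instead of a training framework; all names and values are assumptions, not project specifics):

```python
import numpy as np

# Minimal illustration of LoRA (Low-Rank Adaptation): instead of updating a
# frozen base weight W (d_out x d_in), we learn a low-rank delta B @ A with
# rank r << min(d_out, d_in), so only r*(d_in + d_out) parameters are trained.

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 8, 6, 2, 4.0

W = rng.normal(size=(d_out, d_in))          # frozen base weight
A = rng.normal(scale=0.01, size=(r, d_in))  # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (zero init)

def lora_forward(x):
    # Base path plus scaled low-rank path; with B zero-initialized, the
    # adapted layer reproduces the frozen base model exactly at the start.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
assert np.allclose(lora_forward(x), W @ x)  # zero init => no behavior change

# After training, merging the adapter back is a single weight update,
# so inference needs no extra matmuls:
B = rng.normal(size=(d_out, r))             # stand-in for a trained adapter
W_merged = W + (alpha / r) * (B @ A)
assert np.allclose(W_merged @ x, lora_forward(x))

base_params, lora_params = W.size, A.size + B.size
print(f"base={base_params} trainable={lora_params}")
```

Even in this toy, the trainable-parameter count (28) is well below the frozen base (48); at LLM scale the ratio is far more dramatic, which is why LoRA/PEFT is the default way to adapt published OSS models without retraining from scratch.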
Guardrails & Controlled Knowledge Access
Architect guardrail frameworks that constrain model behavior within approved enterprise knowledge boundaries.
Enable secure usage of internal data while enforcing controlled access when fetching or referencing external information.
Ensure models respect governance requirements, data sensitivity, and usage policies at all times.
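The "controlled sphere" pattern described above can be sketched as a retrieval gate: internal sources pass through directly, while anything outside the approved boundary must clear a guardrail check before its content ever reaches the model. All source names and the check itself are hypothetical placeholders for a real policy layer:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical retrieval gate: documents from the approved internal sphere
# are handed to the model directly; external documents must first clear a
# guardrail check, and blocked documents never reach the model at all.

APPROVED_INTERNAL = {"kb://policies", "kb://products"}  # assumed source IDs

@dataclass
class Document:
    source: str
    text: str

def guardrail_check(doc: Document) -> bool:
    # Stand-in for a real policy layer (sensitivity classification,
    # PII scrubbing, allow-lists); here we just block one marker string.
    return "CONFIDENTIAL" not in doc.text

def gate(doc: Document) -> Optional[Document]:
    if doc.source in APPROVED_INTERNAL:
        return doc          # inside the approved sphere: pass through
    if guardrail_check(doc):
        return doc          # external, but cleared the guardrails
    return None             # blocked: never handed to the model

print(gate(Document("kb://policies", "internal note")) is not None)  # True
print(gate(Document("web://news", "CONFIDENTIAL leak")) is None)     # True
```

The design point is that the boundary is enforced outside the model: the model only ever sees documents the gate has released, so governance does not depend on the model's own behavior.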
Enterprise AI Constraints & Governance
Build models with enterprise-grade constraints in mind, including security and compliance requirements.
Partner with platform, risk, and governance teams to ensure models are production-ready and policy-aligned.
Translate enterprise constraints into practical training- and inference-time safeguards.
GenAI Use Case Enablement
Enable agent-based and customer-facing AI use cases, focusing on high-level model training strategies rather than application-only development.
Support AI agents that operate within controlled information spheres, balancing utility with safety and trust.
Collaborate with product teams to continuously refine models based on real-world usage signals.
Required Qualifications
Experience
7 years in Machine Learning / AI, with 3 years of hands-on experience in Generative AI / LLM model training.
Proven experience fine-tuning and deploying open-source LLMs in production environments.
Strong understanding of the limitations of general-purpose models and how to adapt them for enterprise needs.
Technical Expertise
Deep knowledge of:
- LLM fine-tuning techniques (SFT, LoRA, PEFT, prompt tuning, etc.)
- Model evaluation, alignment, and hallucination reduction
- OSS LLM ecosystems (LLaMA-related tooling and training stacks)
Experience building and enforcing guardrails at training and inference time.
Familiarity with secure data pipelines and enterprise ML infrastructure.
Mindset & Approach
Strong enterprise-first mindset: understands that models must operate within constraints, not just optimize performance.
Practical understanding that base models come from published OSS foundations, and that business value is created through adaptation, not reinvention.
Ability to think holistically about models, data, guardrails, and downstream agent behavior.
Nice to Have
Experience supporting AI agents or conversational systems in regulated environments.
Familiarity with internal/external knowledge-boundary enforcement patterns (retrieval gating, policy layers, sandboxing).
Contributions to OSS AI projects or prior work with large scale AI platforms.
Why This Role
Work on real-world production GenAI systems at enterprise scale.
Tackle some of the most complex challenges in model alignment, control, and safe deployment.
Shape how foundational OSS models are responsibly leveraged for high-impact customer and agent use cases.