We are looking for a Principal Data Engineer to own and build the production-grade data layer that powers a Claims AI / Intelligent Suite running on Azure Databricks. This is a hands-on role embedded in the delivery team responsible for ingestion, transformation, storage, quality, and serving of claims-related data used by AI models and agent workflows.
You will work closely with AI Engineers, the Lead Databricks Architect, and the client's Cloud and Platform teams to ensure data pipelines and data foundations are reliable, scalable, and ready for AI-driven workloads. This is not a BI or reporting role: the primary consumers of your work are AI systems, agents, and vector search pipelines.
Design, build, and maintain production-grade data pipelines in Azure Databricks using Delta Live Tables and Structured Streaming.
Implement and operate a medallion architecture (bronze, silver, gold) with clear data contracts, quality controls, and freshness SLAs.
Build and maintain scalable data models and feature tables for claims, policies, litigation, and adjuster data.
Engineer data preparation pipelines for AI workloads, including structured data serving and unstructured document processing for vector search and RAG use cases.
Enforce data quality, observability, and reliability through automated checks, lineage, schema enforcement, and freshness monitoring.
Own pipeline orchestration, CI/CD, monitoring, and failure recovery for production data systems.
Collaborate closely with AI Engineers and the Lead Databricks Architect to align data architecture with agentic AI and platform decisions.
Work with client data owners and platform teams to manage data access, upstream changes, and source system dependencies.
Qualifications :
7+ years of experience in Data Engineering, with strong hands-on experience building production pipelines on Databricks (PySpark, Delta Lake, DLT); Databricks is a must.
Deep knowledge of Delta Lake optimization, streaming and batch patterns, schema evolution, and performance tuning.
Solid experience with Unity Catalog, data governance, and secure data access patterns.
Strong SQL and PySpark skills, with a production engineering mindset.
Experience building CI/CD for data pipelines and operating data systems in production.
Strong Azure fundamentals (ADLS Gen2, identities, Key Vault, security and networking concepts).
Data quality and reliability mindset, with experience implementing observability and quality frameworks.
Ability to work independently and take ownership after high-level direction is provided (run with the ball).
Experience with Insurance (P&C / Commercial Liability) and AI/ML data platforms is a strong plus.
Databricks Data Engineer Professional or Azure Data Engineer Associate certification is a plus.
What about languages?
- You will need excellent written and verbal English for clear and effective communication with the team.
Additional Information :
Our Perks and Benefits:
Learning Opportunities:
- Certifications in AWS (we are AWS Partners), Databricks, and Snowflake.
- Access to AI learning paths to stay up to date with the latest technologies.
- Study plans, courses, and additional certifications tailored to your role.
- Access to Udemy Business offering thousands of courses to boost your technical and soft skills.
- English lessons to support your professional communication.
- Travel opportunities to attend industry conferences and meet clients.
Mentoring and Development:
- Career development plans and mentorship programs to help shape your path.
Celebrations & Support:
- Special day rewards to celebrate birthdays, work anniversaries, and other personal milestones.
- Company-provided equipment.
- Flexible working options to help you strike the right balance.
Other benefits may vary according to your location in LATAM. For detailed information regarding the benefits applicable to your specific location, please consult with one of our recruiters.
So, what are the next steps?
Our team is eager to learn about you! Send us your resume or LinkedIn profile below and we'll explore working together!
Remote Work :
Yes
Employment Type :
Full-time