Databricks Engineer - Commercial Insurance (Full-Time)

Acunor Inc

Job Location:

Columbus, OH - USA

Monthly Salary: Not Disclosed
Posted on: 30+ days ago
Vacancies: 1 Vacancy

Job Summary

Job Title: Data Engineer (Databricks / Azure)

Client: One of our Consulting Clients (Global Analytics & Digital Transformation Firm)

Location: Columbus, OH (Remote/Hybrid)

Duration: Full-Time

About the Role

We are seeking a highly skilled Data Engineer with deep expertise in Databricks and Azure Cloud to join a decision analytics and data engineering team within one of our global consulting clients. The role involves building, optimizing, and maintaining large-scale data pipelines that fuel enterprise analytics, reporting, and AI-driven insights, primarily supporting clients in the insurance and financial services domains.

Key Responsibilities

Data Pipeline Development & Optimization

  • Design, build, and enhance ETL/ELT data pipelines using Azure Data Factory, Databricks (PySpark, SQL, Python), and related services (a minimal sketch follows this list).
  • Develop and manage Delta Live Tables, Auto Loader, and Unity Catalog within the Databricks ecosystem for structured, incremental data processing.
  • Implement data ingestion, transformation, and validation frameworks that ensure high performance, scalability, and reliability.
  • Monitor data pipelines, troubleshoot issues, and ensure optimal system performance and SLA adherence.
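
To ground the bullets above, here is a minimal PySpark sketch of the kind of incremental ingestion step this role describes, using Databricks Auto Loader to land raw files in a bronze Delta table. The storage paths, file format, and table name are illustrative assumptions, not client specifics; spark is the session the Databricks notebook runtime provides.

    # Minimal Auto Loader sketch: incrementally ingest raw JSON files from
    # ADLS into a bronze Delta table. All paths and names are hypothetical.
    from pyspark.sql import functions as F

    raw_path = "abfss://raw@examplelake.dfs.core.windows.net/policies/"
    checkpoint = "abfss://meta@examplelake.dfs.core.windows.net/_chk/policies/"

    stream = (
        spark.readStream.format("cloudFiles")              # Auto Loader source
        .option("cloudFiles.format", "json")
        .option("cloudFiles.schemaLocation", checkpoint)   # schema inference and evolution
        .load(raw_path)
        .withColumn("ingested_at", F.current_timestamp())  # simple audit column
    )

    (stream.writeStream
        .option("checkpointLocation", checkpoint)          # exactly-once bookkeeping
        .trigger(availableNow=True)                        # process pending files, then stop
        .toTable("bronze.policies"))

The availableNow trigger gives batch-style incremental runs that slot into scheduled ADF orchestration while the checkpoint tracks what has already been ingested.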

Data Modeling & Architecture

  • Collaborate with business analysts and reporting teams to define logical and physical data models supporting analytical and operational needs.
  • Implement data warehousing and lakehouse solutions using Azure Data Lake and Delta Lake.
  • Optimize data structures for query performance, cost efficiency, and reusability (see the maintenance sketch after this list).
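
As a concrete instance of the optimization work above, Delta tables are routinely compacted and co-located on frequently filtered columns; a short sketch, with a hypothetical claims table and columns:

    # Routine Delta maintenance for query performance and storage cost.
    # Table and column names are assumptions for illustration.
    spark.sql("OPTIMIZE silver.claims ZORDER BY (policy_id, claim_date)")  # compact small files, co-locate hot columns
    spark.sql("VACUUM silver.claims RETAIN 168 HOURS")                     # drop unreferenced files older than 7 days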

Data Quality, Governance & Automation

  • Design and implement robust data quality checks and validation mechanisms to maintain integrity across sources and transformations (illustrated in the sketch after this list).
  • Automate repetitive processes using scripts, parameterized pipelines, and reusable frameworks.
  • Conduct periodic audits and compliance checks aligned with governance policies.
  • Contribute to metadata management, documentation, and lineage tracking.
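
On this stack, validation mechanisms like those above are often expressed as Delta Live Tables expectations. A minimal sketch follows; the source dataset, rule names, and thresholds are assumptions, and the code runs inside a DLT pipeline rather than a plain notebook:

    # DLT sketch: declarative quality rules on a streaming table.
    # Upstream dataset and rule names are hypothetical.
    import dlt
    from pyspark.sql import functions as F

    @dlt.table(comment="Policy records that passed basic quality gates.")
    @dlt.expect_or_drop("valid_policy_id", "policy_id IS NOT NULL")            # drop failing rows
    @dlt.expect("plausible_effective_date", "effective_date >= '2000-01-01'")  # warn and record metric, keep rows
    def policies_clean():
        return (
            dlt.read_stream("policies_raw")                    # hypothetical dataset in the same pipeline
            .withColumn("validated_at", F.current_timestamp())
        )

Failed expectations surface in the pipeline event log, which also feeds the audit and lineage work listed above.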

Required Skills & Experience

  • 7-12 years of experience in Data Engineering with proven expertise in Databricks and Azure Cloud ecosystems.
  • Strong hands-on experience in PySpark, Python, and SQL for data transformation, validation, and performance tuning.
  • Solid understanding of Delta Lake architecture, ETL/ELT frameworks, and data warehousing principles.
  • Proficiency with Azure services, including Data Factory (ADF), Data Lake (ADLS), and Databricks Notebooks.
  • Experience with Delta Live Tables, Unity Catalog, and Auto Loader for batch and streaming data processing.
  • Strong background in data modeling, performance optimization, and automation scripting.
  • Familiarity with Agile methodologies and DevOps-based deployment practices (Git and CI/CD preferred).
  • Strong analytical, communication, and problem-solving skills to collaborate effectively across diverse teams.
  • Preferred: exposure to insurance, healthcare, or financial services data ecosystems.

Nice to Have

  • Experience in data migration projects (on-prem to cloud or multi-cloud).
  • Familiarity with Delta Sharing, Databricks SQL Warehouses, or MLflow for advanced use cases.
  • Experience with data cataloging, lineage, or quality frameworks such as Purview, Collibra, or Great Expectations.
  • Exposure to BI/reporting tools such as Power BI or Tableau for an end-to-end understanding of integration.

Key Skills

  • Databricks
  • Azure Data Factory (ADF)
  • Azure Data Lake (ADLS)
  • PySpark
  • Python
  • SQL
  • Delta Lake
  • Delta Live Tables
  • Unity Catalog
  • Auto Loader
  • Data Modeling
  • CI/CD