Location: 100% Remote
Type: Contract
Industry: Healthcare
Our client is transitioning from SAS to Databricks to power next-generation actuarial analysis, underwriting, and risk modeling.
We're seeking a Data Engineer to lead this transformation: refactoring legacy SAS pipelines, modernizing data architecture, and building scalable solutions on Databricks and Azure.
What You'll Do
Migrate and refactor SAS-based actuarial and underwriting models to Databricks (see the brief sketch after this list)
Design and implement scalable data pipelines using Azure Data Factory, Synapse, and Spark
Collaborate with actuarial teams, business stakeholders, and engineering peers
Optimize the performance, reliability, and agility of data workflows
Contribute to real-time analytics and AI-driven innovation across health plan operations
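To give a concrete sense of the first responsibility above, here is a minimal, hypothetical sketch of what refactoring a simple SAS DATA step into PySpark on Databricks can look like. All table and column names (raw.claims_raw, paid_amount, earned_premium, curated.claims_clean) are illustrative placeholders, not the client's actual schema.

```python
# Hypothetical example: a SAS DATA step that filters claims and derives a
# loss ratio, rewritten as a PySpark pipeline on Databricks.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sas-to-databricks-sketch").getOrCreate()

# Rough SAS equivalent:
#   data curated.claims_clean;
#     set raw.claims_raw;
#     if paid_amount > 0;
#     loss_ratio = paid_amount / earned_premium;
#   run;
claims = spark.table("raw.claims_raw")  # assumed source table registered in the metastore

claims_clean = (
    claims
    .filter(F.col("paid_amount") > 0)  # subsetting IF
    .withColumn("loss_ratio", F.col("paid_amount") / F.col("earned_premium"))  # derived variable
)

# Persist as a Delta table for downstream actuarial and underwriting models
claims_clean.write.format("delta").mode("overwrite").saveAsTable("curated.claims_clean")
```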
Required Qualifications
Bachelor's or Master's degree in Computer Science, Statistics, Applied Math, or a related field
5 years of programming experience (SQL and Python required; Java, R, or Spark a plus)
4 years working with Databricks or similar cloud-native platforms
Hands-on experience with SAS-to-Databricks migration projects
Strong SQL skills and experience with Oracle, PostgreSQL, MySQL, or SQL Server
Experience with Git and collaborative development practices
Proven ability to work cross-functionally and communicate with technical and business teams
Expertise in data management, software engineering, and infrastructure operations
Preferred Skills
Healthcare domain knowledge
Agile/Scrum experience
Azure ecosystem: Data Factory, Synapse, Purview, Cosmos DB, App Insights, Power BI
Python libraries: NumPy, pandas, matplotlib; Jupyter notebooks
CI/CD, MLOps, and DataOps practices
Event streaming tools: Kafka, NiFi, Flink
Big Data tools: Spark, Hive, Sqoop
NoSQL experience (MongoDB)
Advanced Power BI modeling (Power Query, DAX)