Role: Data Engineer
Location: Edison, NJ (Onsite)
Primary Skills: Property and Casualty insurance domain, Azure Databricks, and Azure Data Factory
About the job
We are seeking a Databricks Data Engineer with experience in designing and developing data pipelines using Azure Databricks, Azure Data Factory, and Azure Data Lake. The role involves managing large volumes of data, building complex ETL solutions, and working closely with business teams to deliver robust data transformation and analytics solutions.
Responsibilities:
- Design and develop ETL pipelines using ADF for data ingestion and transformation.
- Work with Azure stack components such as Data Lake and SQL Data Warehouse to build robust data solutions.
- Write SQL, Python, and PySpark code for efficient data processing and transformation.
- Understand and translate business requirements into technical designs.
- Develop mapping documents and transformation rules as per project scope.
- Communicate project status to stakeholders, ensuring smooth project execution.
Requirements (Must have):
- 6 years of experience in data ingestion, data processing, and analytical pipelines for big data and relational databases.
- Hands-on experience with Azure services: ADLS, Azure Databricks, Data Factory, Synapse, and Azure SQL DB.
- Experience with SQL, Python, and PySpark for data transformation and processing.
- Familiarity with DevOps and CI/CD deployments.
- Strong communication skills and attention to detail in high-pressure situations.
- Experience in the insurance or financial industry is preferred.
Regards,
Manoj
Derex Technologies INC
Contact: Ext 206
Additional Information:
All your information will be kept confidential according to EEO guidelines.
Remote Work: No
Employment Type: Full-time