Title: Databricks Data Engineer
Location: Deerfield IL (Onsite)
Duration: Full Time
Video Interviews
Experience Required: 6-8 years
Job Description
Must have: Azure Databricks with Scala is a MUST
- Mandatory skills: Databricks, Spark, Azure, Python, Cosmos DB, Azure DevOps (GitHub, CI/CD pipelines, Boards, etc.), Docker & Azure Kubernetes Service, Grafana, JUnit, Postman, SonarQube
- Additional Skills: 3-6 years of experience in data engineering on cloud data platforms. Hands-on experience building Spark jobs with Scala/Spark and/or PySpark on Databricks. Experience ingesting data from batch and streaming sources into ADLS Gen2 using Delta or Apache Iceberg tables. Good SQL skills for joins, aggregations, and data quality checks. Understanding of core Azure data services (Event Hubs/Kafka, Data Factory/Databricks Workflows, Key Vault). Experience working with Git-based workflows and CI/CD in Azure DevOps or GitHub.
- Good-to-have skills: Exposure to Spark Structured Streaming for near-real-time use cases. Experience with data quality tools or frameworks and with writing unit/integration tests for data pipelines. Familiarity with data modeling and performance considerations in Lakehouse environments.
Roles & Responsibilities:
- Responsible for building data products in Databricks using Scala/Spark
- Responsible for Ops work: managing production operations for the data products developed and deployed
- Responsible for testing data products against product specifications, including end-to-end validation
- Set up monitoring, logging, and alerting for Spark jobs and data pipelines using Azure Monitor/Log Analytics or similar tools
- Coordinate with the offshore team in India