We are looking for an experienced Azure Databricks Engineer with strong hands-on expertise in Python, SQL, and Apache Spark to design, build, and optimize scalable data pipelines and analytics solutions on the Azure cloud platform. The ideal candidate should have experience working with large datasets, distributed data processing, analytics use cases, and modern data engineering practices.
Responsibilities
- Design, develop, and maintain scalable data pipelines using Azure Databricks
- Implement ETL/ELT workflows using PySpark, Spark SQL, and Python
- Implement data ingestion pipelines using Azure Data Factory
- Optimize Spark jobs for performance, cost, and scalability
- Work with structured and semi-structured data (Parquet, Delta, JSON, CSV)
- Build and manage Delta Lake tables (ACID transactions, time travel, schema evolution)
- Integrate Databricks with Azure Data Lake Storage (ADLS Gen2)
- Develop complex queries and transformations using SQL
- Collaborate with Data Science teams to prepare data for modelling use cases, ensuring appropriate transformations, feature generation, and storage
- Follow best practices for security, access control, and governance in Azure
- Ensure data quality, validation, and monitoring using testing tools
- Deploy solutions to production environments
Qualifications:
- 4 years of experience in Data Engineering, ideally supporting POS and SKU datasets
- Experience handling high-volume transactional datasets
- Strong hands-on experience with Azure Databricks
- Understanding of the Medallion Architecture and experience implementing it within Databricks
- Good understanding of data modelling techniques
- Proficiency in Python for data processing
- Strong knowledge of SQL (joins, window functions, performance tuning)
- Hands-on experience with Apache Spark / PySpark
- Experience working with Delta Lake
- Knowledge of Azure Data Lake Storage (ADLS Gen2)
- Understanding of distributed computing concepts
- Experience with Git version control
- Understanding of ML use cases and data considerations for model development
Remote Work:
No
Employment Type:
Full-time