Mandatory Skills: Data Warehousing, Azure Data Engineer, Azure Data Lake, Databricks, PySpark, Python
Job Description: - Proficiency in Azure Databricks and hands-on experience with Apache Spark for large-scale data processing
- Strong programming skills in Python / PySpark (Scala is a plus)
- Good knowledge of SQL, relational databases, and data warehousing concepts
- Experience with Azure cloud services related to data storage and processing
- Understanding of data modeling, ETL pipelines, and data integration methodologies
- Strong problem-solving skills with attention to detail
- Ability to work effectively in a collaborative, team-oriented environment
Roles & Responsibilities: - Design and develop scalable ETL pipelines using Azure Databricks to process large data volumes
- Collaborate with business and technical stakeholders to understand data requirements and deliver solutions
- Optimize data storage and retrieval for performance and efficiency
- Implement data quality checks, governance standards, and validation processes
- Stay current with emerging data engineering tools and best practices
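As a rough illustration of the data quality and validation work described above, here is a minimal sketch in plain Python. All function and field names are hypothetical examples, not part of any actual codebase; in a Databricks pipeline the same idea would typically be expressed with PySpark DataFrame filters rather than per-row Python.

```python
# Hypothetical sketch: separate valid records from rejects before loading.
# Field names ("id", "amount") are illustrative assumptions only.

def validate_row(row: dict, required_fields: tuple = ("id", "amount")) -> list:
    """Return a list of validation errors for one record (empty list = valid)."""
    errors = []
    for field in required_fields:
        if row.get(field) is None:
            errors.append(f"missing field: {field}")
    amount = row.get("amount")
    if isinstance(amount, (int, float)) and amount < 0:
        errors.append("amount must be non-negative")
    return errors

def split_valid_invalid(rows: list) -> tuple:
    """Partition records into (valid, rejected) lists for downstream loading."""
    valid, rejected = [], []
    for row in rows:
        (valid if not validate_row(row) else rejected).append(row)
    return valid, rejected
```

The design choice sketched here, routing rejects to a separate collection instead of failing the whole batch, mirrors the common quarantine pattern used in production ETL pipelines.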
Tools & Technologies: - Data Engineering: Databricks, Apache Spark, Delta Lake
- Programming: Python, SQL, PySpark
- Cloud Platform: Azure