- Develop, maintain, and optimize scalable ETL/ELT pipelines using Azure Data Factory (ADF) and Azure Databricks.
- Design and implement batch and streaming data processing solutions using Azure Databricks (ADB) and PySpark.
- Work with structured and unstructured data sources, ensuring that data ingestion, transformation, and storage are efficient and secure.
- Design and implement relational and dimensional data models for data warehousing and analytics solutions.
- Implement big data processing workflows using Apache Spark (PySpark) within Azure Databricks.
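As a minimal illustration of the extract-transform-load pattern these responsibilities describe: the sketch below uses only the Python standard library so it is self-contained; in the role itself, ADF would orchestrate the pipeline and the transform step would run as PySpark on Azure Databricks. All names and the sample data are hypothetical.

```python
import csv
import io
import json

# Hypothetical sample input; in practice this would be a file in
# Azure Blob Storage or Azure Data Lake.
RAW_CSV = """order_id,amount,currency
1001,19.99,USD
1002,,USD
1003,5.00,EUR
"""

def extract(raw: str) -> list[dict]:
    """Parse raw CSV into row dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[dict]:
    """Drop rows with missing amounts and cast types.
    On Databricks this would be a PySpark DataFrame transformation."""
    return [
        {
            "order_id": int(r["order_id"]),
            "amount": float(r["amount"]),
            "currency": r["currency"],
        }
        for r in rows
        if r["amount"]  # filter out incomplete records
    ]

def load(rows: list[dict]) -> str:
    """Serialize to JSON lines, as might be written back to a data lake."""
    return "\n".join(json.dumps(r) for r in rows)

print(load(transform(extract(RAW_CSV))))
```

The same three-stage shape applies whether the pipeline is batch or streaming; only the execution engine and the storage endpoints change.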
Required Skills & Qualifications:
- 3-7 years of experience in data engineering roles.
- Hands-on experience with Azure Databricks and PySpark for large-scale data processing.
- Azure Cloud Services: Azure Databricks, Azure Data Lake, Azure Data Factory, Azure SQL Database, Azure Blob Storage.
- Big Data & ETL: PySpark, Spark SQL.
- Programming: Python (PySpark), SQL, T-SQL, Scala (optional).
- Database Management: SQL Server, Azure Synapse, Cosmos DB.
- Data Modeling: relational, dimensional, and NoSQL.