JOB DESCRIPTION
Overview
We are seeking a Senior Data Engineer with 5 years of hands-on experience in building and scaling data solutions within the Azure ecosystem. The ideal candidate will have deep expertise in Databricks, PySpark, and modern data lake architectures, and be adept at designing robust ETL pipelines and data workflows.
Key Responsibilities
Design, develop, and maintain scalable data pipelines using Azure and Databricks.
Build efficient data ingestion and transformation workflows with PySpark and Delta Lake.
Develop, optimize, and maintain ETL/ELT pipelines using Python and SQL.
Implement data modeling, data quality, and governance best practices.
Work with relational databases (e.g., PostgreSQL) and lakehouse architectures (e.g., Delta Lake).
Collaborate with cross-functional teams to troubleshoot, optimize, and support production workloads.
Required Skills & Experience
5 years of data engineering experience.
Strong proficiency in the Azure ecosystem, including:
Azure Databricks
Azure Data Lake Storage (ADLS)
Azure Functions
Azure Data Factory (preferred)
Advanced PySpark and SQL expertise, including performance tuning.
Deep understanding of ETL/ELT design patterns and data warehousing concepts.
Experience with Delta Lake (ACID transactions, schema evolution, time travel).
Hands-on experience with PostgreSQL or a similar RDBMS.
Strong analytical, problem-solving, and communication skills.
Self-motivated and comfortable in fast-paced, agile environments.
Preferred Skills
Familiarity with CI/CD pipelines (Azure DevOps, Git).
Knowledge of Infrastructure-as-Code tools (e.g., Terraform).
Exposure to real-time streaming technologies (Kafka, Azure Event Hubs).
Awareness of data governance tools (e.g., Microsoft Purview).
Required Skills:
Azure, ETL