Job Title: Azure Data Tech Lead
Location: Alpharetta, Georgia
Core Skills: Azure Databricks, ADLS, Spark (Python), SQL, ETL, Delta Lake, PostgreSQL, Data Architecture, Batch & Real-time Processing, Data Modelling
Overview
We are looking for an experienced Senior/Lead Data Engineer with 8 years of expertise in designing and delivering scalable, high-performing data solutions on the Azure ecosystem.
The ideal candidate will have deep hands-on experience with Databricks, Spark, modern data lakehouse architectures, data modelling, and both batch and real-time data processing. You will be responsible for driving end-to-end data engineering initiatives, influencing architectural decisions, and ensuring robust, high-quality data pipelines.
Key Responsibilities
Architect, design, and implement scalable data platforms and pipelines on Azure and Databricks.
Build and optimize data ingestion, transformation, and processing workflows across batch and real-time data streams.
Work extensively with ADLS, Delta Lake, and Spark (Python) for large-scale data engineering (an illustrative sketch follows this list).
Lead the development of complex ETL/ELT pipelines, ensuring high quality, reliability, and performance.
Design and implement data models, including conceptual, logical, and physical models for analytics and operational workloads.
Work with relational and lakehouse systems, including PostgreSQL and Delta Lake.
Define and enforce best practices in data governance, data quality, security, and architecture.
Collaborate with architects, data scientists, analysts, and business teams to translate requirements into technical solutions.
Troubleshoot production issues, optimize performance, and support continuous improvement of the data platform.
Mentor junior engineers and contribute to building engineering standards and reusable components.
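For illustration only (not part of the formal requirements): a minimal PySpark sketch of the kind of batch work described above, landing raw files from ADLS and upserting them into a Delta Lake table. The storage paths, column names, and join key are hypothetical placeholders, and the snippet assumes a Databricks or delta-spark environment.

from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("orders-batch-ingest").getOrCreate()

# Hypothetical ADLS locations for the raw landing zone and the curated Delta table.
raw_path = "abfss://raw@examplelake.dfs.core.windows.net/orders/"
curated_path = "abfss://curated@examplelake.dfs.core.windows.net/orders"

# Read the latest raw files and standardize types.
orders = (
    spark.read.json(raw_path)
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)

# MERGE keeps reruns idempotent: Delta Lake's ACID transactions guarantee
# the upsert either fully applies or not at all.
if DeltaTable.isDeltaTable(spark, curated_path):
    (
        DeltaTable.forPath(spark, curated_path).alias("t")
        .merge(orders.alias("s"), "t.order_id = s.order_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute()
    )
else:
    orders.write.format("delta").mode("overwrite").save(curated_path)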
Required Skills & Experience
8 years of hands-on data engineering experience in enterprise environments.
Strong expertise in Azure services, especially Azure Databricks, Azure Functions, and Azure Data Factory (preferred).
Advanced proficiency in Apache Spark with Python (PySpark).
Strong command of SQL, query optimization, and performance tuning.
Deep understanding of ETL/ELT methodologies, data pipelines, and scheduling/orchestration.
Hands-on experience with Delta Lake (ACID transactions, optimization, schema evolution).
Strong experience in data modelling (normalized, dimensional, and lakehouse modelling).
Experience in both batch processing and real-time/streaming data (Kafka, Event Hubs, or similar); an illustrative streaming sketch follows this list.
Solid understanding of data architecture principles, distributed systems, and cloud-native design patterns.
Ability to design end-to-end solutions, evaluate trade-offs, and recommend best-fit architectures.
Strong analytical, problem-solving, and communication skills.
Ability to collaborate with cross-functional teams and lead technical discussions.
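Again purely illustrative: a minimal Structured Streaming sketch of the real-time side, reading from an Event Hubs Kafka-compatible endpoint and appending to a Delta table. The namespace, topic, and paths are hypothetical placeholders, and the SASL/JAAS credential options are deliberately omitted.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("telemetry-stream-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
    # Hypothetical Event Hubs namespace exposed over the Kafka protocol.
    .option("kafka.bootstrap.servers", "example-namespace.servicebus.windows.net:9093")
    .option("subscribe", "telemetry")
    .option("kafka.security.protocol", "SASL_SSL")
    # SASL mechanism / JAAS connection-string options omitted here for brevity.
    .option("startingOffsets", "latest")
    .load()
    .select(
        F.col("value").cast("string").alias("body"),
        F.col("timestamp").alias("event_ts"),
    )
)

# Append to a curated Delta table; the checkpoint gives the sink exactly-once behavior.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "abfss://curated@examplelake.dfs.core.windows.net/_chk/telemetry")
    .outputMode("append")
    .start("abfss://curated@examplelake.dfs.core.windows.net/telemetry")
)
query.awaitTermination()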
Preferred Skills
Experience with CI/CD tools such as Azure DevOps and Git.
Familiarity with IaC tools (Terraform, ARM).
Exposure to data governance and cataloging tools (Azure Purview).
Experience supporting machine learning or BI workloads on Databricks.