Job Title: Senior Data Engineer
Location: Alpharetta GA
Duration: 12 months
Term: FTE
Job Description:
Experience Desired: 8 Years.
Required Skills & Experience:
- Architect, design, and implement scalable data platforms and pipelines on Azure and Databricks.
- Build and optimize data ingestion, transformation, and processing workflows across batch and real-time data streams.
- Work extensively with ADLS, Delta Lake, and Spark (Python) for large-scale data engineering.
- Lead the development of complex ETL/ELT pipelines, ensuring high quality, reliability, and performance.
- Design and implement data models, including conceptual, logical, and physical models, for analytics and operational workloads.
- Work with relational and lakehouse systems, including PostgreSQL and Delta Lake.
- Define and enforce best practices in data governance, data quality, security, and architecture.
- Collaborate with architects, data scientists, analysts, and business teams to translate requirements into technical solutions.
- Troubleshoot production issues, optimize performance, and support continuous improvement of the data platform.
- Mentor junior engineers and contribute to building engineering standards and reusable components.
- This position description identifies the responsibilities and tasks typically associated with the performance of the position. Other relevant essential functions may be required.
What You Need:
- 8 years of hands-on data engineering experience in enterprise environments.
- Strong expertise in Azure services, especially Azure Databricks, Azure Functions, and Azure Data Factory (preferred).
- Advanced proficiency in Apache Spark with Python (PySpark).
- Strong command of SQL, query optimization, and performance tuning.
- Deep understanding of ETL/ELT methodologies, data pipelines, and scheduling/orchestration.
- Hands-on experience with Delta Lake (ACID transactions, optimization, schema evolution).
- Strong experience in data modelling (normalized, dimensional, and lakehouse modelling).
- Experience with both batch processing and real-time/streaming data (Kafka, Event Hub, or similar).
- Solid understanding of data architecture principles, distributed systems, and cloud-native design patterns.
- Ability to design end-to-end solutions, evaluate trade-offs, and recommend best-fit architectures.
- Strong analytical, problem-solving, and communication skills.
- Ability to collaborate with cross-functional teams and lead technical discussions.
Preferred Skills:
- Experience with CI/CD tools such as Azure DevOps and Git.
- Familiarity with IaC tools (Terraform, ARM).
- Exposure to data governance and cataloging tools (Azure Purview).
- Experience supporting machine learning or BI workloads on Databricks.
Key Skills:
Azure Databricks, ADF, Python