Junior Azure Data Engineer
Wilmington, DE - USA
Job Summary
Junior Azure Data Engineer (Wilmington, DE / Boston, MA)
- Support migration of LDW (Logical Data Warehouse) to Azure Fabric Lakehouse/Warehouse.
- Write and optimize SQL queries for data extraction, transformation, and validation.
- Develop Python scripts for automation, data ingestion, and data quality checks.
- Build dataflows, pipelines, and lakehouse tables in Azure Fabric to meet standards.
- Collaborate with the engineering team on ETL design, performance tuning, and optimization.
- Test and review code, and maintain technical documentation using Confluence and Git.
- Monitor data loads, troubleshoot failures, and update project tracking tools (Zephyr/JIRA).
- Stay updated on Azure tools and contribute to ongoing process improvements.
- Utilize Azure Data Factory, Databricks, and Data Lakes for data engineering tasks.
- Apply basic data warehousing concepts, including ETL, staging, and data modeling (fact/dimension).
- Execute version control and CI/CD processes with Git/Azure DevOps.
- Apply knowledge of metadata management and data quality best practices.
- Work in Agile or Scrum development environments.
- Leverage Power BI and basic DAX concepts as needed.
Requirements:
- 1-2 years of hands-on experience with SQL (preferably T-SQL) and Python scripting.
- Direct experience with Azure Data Factory, Databricks, Data Lakes, and related services.
- Familiarity with data modeling, metadata management, and data-quality concepts.
- Experience with Git/Azure DevOps, Confluence, and JIRA.
- Understanding of Agile/Scrum methodologies.
- Local to Boston, MA (hybrid work requirement).
- No sponsorship available; LinkedIn profile required for submissions.