To design and implement scalable and efficient data pipelines using Microsoft Fabric components such as OneLake, Dataflows Gen2, and Lakehouse. Develop ETL/ELT processes using Azure Data Factory, PySpark, Spark SQL, and Python. Ensure data quality, integrity, and security across all platforms. Collaborate with stakeholders to gather requirements and deliver technical solutions. Optimize data workflows and troubleshoot performance issues. Support hybrid cloud deployments and integrate on-premises and cloud environments while maintaining documentation and following best practices in data engineering, including version control and modular code design.
Qualifications:
- BSc in Computer Science or Information Technology, as well as a Microsoft certification in Azure Data Engineering or Microsoft Fabric
- Minimum of 3 years' experience in a data engineering role, with strong hands-on experience in Microsoft Fabric, Azure Synapse, Azure SQL, and Databricks
- Proficiency in SQL, Python, and Power BI
- Solid understanding of data modelling, data governance, and data warehousing
- Experience with CI/CD pipelines, DevOps, or machine learning workflows is a plus
Additional Information:
Behavioural Competencies:
- Adopting Practical Approaches
- Checking Things
- Developing Expertise
- Embracing Change
- Examining Information
Technical Competencies:
- Big Data Frameworks and Tools
- Data Engineering
- Data Integrity
- IT Knowledge
- Stakeholder Management (IT)
Remote Work:
No
Employment Type:
Full-time