MS Fabric Data Engineer

Job Location: Hyderabad - India

Monthly Salary: Not Disclosed
Posted on: 3 hours ago
Vacancies: 1 Vacancy

Job Summary

  • Strong hands-on experience with Microsoft Fabric (Data Factory Pipelines, Dataflows Gen2, Lakehouse, OneLake).
  • Proven ability to migrate legacy ETL workloads (ADF/SSIS/on-prem SQL) into Fabric.
  • Advanced skills in PySpark, Python, and Fabric Notebooks for complex transformations.
  • Expertise in Power BI Semantic Models, Datasets, and Direct Lake mode.
  • Strong SQL skills, including data modeling and star-schema design.
  • Solid understanding of Azure ADLS Gen2, security governance, and Entra ID/ACLs.
  • Ability to build optimized pipelines, star schemas, and reporting-ready data models.
  • Strong collaboration skills to work with BI/reporting teams on data quality and consistency.

  • Microsoft Fabric Core: Deep proficiency in Fabric architecture, specifically Data Factory pipelines, Dataflows Gen2, and Lakehouse/OneLake storage strategies.
  • Migration Expertise: Proven experience migrating legacy ETL processes (e.g. from Azure Data Factory, SSIS, or on-prem SQL) into the Microsoft Fabric ecosystem.
  • Scripting & Transformation: Expert-level coding in PySpark and Python for Fabric Notebooks to handle complex data transformations and enrichment.
  • Power BI Backend: Ability to build robust Semantic Models and Power BI Datasets directly on top of OneLake (using Direct Lake mode where applicable) for high-performance reporting.
  • SQL Proficiency: Advanced SQL skills for data modeling, star-schema design, and querying within the SQL analytics endpoint of the Lakehouse.
  • Azure Ecosystem: Strong grasp of Azure Data Lake Storage (ADLS Gen2) and security governance (Entra ID/ACLs).
Key Responsibilities (The Ask)

  • Architect & Migrate: Lead the backend migration of data from diverse sources into a unified Fabric Lakehouse architecture.
  • Pipeline Optimization: Re-engineer and optimize data pipelines to ensure seamless data ingestion and transformation for high availability.
  • Model for Reporting: Design purpose-driven data views and efficient Star Schemas specifically tailored to support rapid Power BI report rendering.
  • Cross-Functional Support: Bridge the gap between backend data engineering and frontend reporting by ensuring data quality and consistency for the BI team.

Key Skills

  • Disaster Recovery
  • Customer Service
  • Database
  • Benefits
  • AV