Azure Data Engineer P&SC Data and Intelligence

INFT Solutions Inc


Job Location:

Frisco, TX - USA

Monthly Salary: Not Disclosed
Posted on: 8 hours ago
Vacancies: 1 Vacancy

Job Summary

Job Description:

Mandatory Areas:
Microsoft Fabric
Databricks
Data pipelines
Data products
Kafka / Azure Event Hub


Mandatory Skills:
Microsoft Fabric, Databricks, data pipelines, data products, Kafka / Azure Event Hub
Preferred Skills:
Apache Airflow, Azure Data Factory, medallion architecture, CI/CD, DevOps, TMS

The Data Engineer (Level IV) designs, builds, and operates the data pipelines and lakehouse data products that power P&SC intelligence across device supply chain, reverse logistics, procurement, network supply chain, and real-time control tower capabilities. Operating within a squad focused on a specific SC domain or a cross-cutting platform function, the Data Engineer is a hands-on technical contributor who builds to production quality, owns pipeline reliability and KTLO, and actively raises the engineering standard of the team around them.


CORE RESPONSIBILITIES:
Design and build production-grade data pipelines spanning source ingestion, bronze landing, silver transformation, and gold-layer data product delivery within the squad's domain scope.
Own pipeline KTLO (Keep the Lights On) for assigned data products, including monitoring, alerting, incident response, and ongoing reliability improvements.
Implement data ingestion patterns for assigned source systems, including batch file ingestion, API-based ingestion, and event-driven streaming (Kafka, Azure Event Hub), depending on squad scope.
Apply medallion architecture (bronze, silver, gold) and Fabric IQ certification standards consistently across all data product builds.
Collaborate with System Analysts to implement field-level transformations, business rule logic, and data quality checks as specified in product requirements documentation.
Participate in and contribute to pipeline design reviews, ensuring solutions align with the organization's Databricks and Fabric engineering standards.
Support the migration and deprecation of legacy platforms, including SCOpsBI, SQL Server, and SAP boundary systems, following the organization's extract, validate, rebuild, cutover, and decommission pattern.
Write and maintain comprehensive pipeline documentation, including data lineage, transformation logic, SLA definitions, and dependency maps.
Contribute to the organization's DevOps and engineering reliability practices, including CI/CD pipeline setup, testing frameworks, and incident runbooks.
Mentor and technically guide junior and mid-level data engineers within the squad.
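The medallion pattern referenced in the responsibilities above (bronze → silver → gold) can be sketched in plain Python. This is only an illustration of the layering idea with made-up record shapes, not the team's actual Databricks or Fabric code:

```python
from collections import defaultdict

# Bronze: raw records as ingested from a hypothetical source (illustrative shape).
bronze = [
    {"order_id": "A1", "qty": "3", "site": "frisco "},
    {"order_id": "A2", "qty": "bad", "site": "Frisco"},   # malformed quantity
    {"order_id": "A3", "qty": "5", "site": "FRISCO"},
]

def to_silver(rows):
    """Silver: validate and standardize; drop rows that fail type checks."""
    out = []
    for r in rows:
        try:
            out.append({"order_id": r["order_id"],
                        "qty": int(r["qty"]),
                        "site": r["site"].strip().title()})
        except ValueError:
            pass  # a real pipeline would route these to a quarantine table
    return out

def to_gold(rows):
    """Gold: an aggregated data product, e.g. units ordered per site."""
    totals = defaultdict(int)
    for r in rows:
        totals[r["site"]] += r["qty"]
    return dict(totals)

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'Frisco': 8}
```

In a Databricks lakehouse each layer would typically be a Delta table rather than an in-memory list, but the contract is the same: raw in bronze, cleaned and conformed in silver, consumption-ready products in gold.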


REQUIRED QUALIFICATIONS:
7-10 years of data engineering experience, with a strong track record of delivering production data pipelines in large enterprise environments.
Expert proficiency in PySpark or Spark SQL for large-scale data transformation on distributed compute platforms.
Hands-on experience with Databricks, including Delta Lake, Unity Catalog, and workflow orchestration.
Experience with Microsoft Fabric or Azure Synapse Analytics; familiarity with Fabric IQ and OneLake is a plus.
Proficiency with pipeline orchestration tools such as Azure Data Factory, Databricks Workflows, or Apache Airflow.
Solid SQL skills for data validation, transformation logic, and ad-hoc source system analysis.
Experience building and maintaining ingestion pipelines from enterprise operational systems such as ERP, WMS, TMS, or comparable platforms.
Strong understanding of data quality frameworks, including implementing checks, alerting on anomalies, and maintaining SLA-compliant pipeline health.
Experience with DevOps practices for data pipelines, including version control (Git), CI/CD, and automated testing.
Ability to operate independently on complex technical problems with minimal oversight while maintaining clear technical documentation.
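The data quality expectation in the qualifications above — implementing checks and alerting on anomalies — might look roughly like the following sketch. Rule names, the failure budget, and the record shapes are illustrative assumptions, not taken from the posting:

```python
def run_checks(rows, rules):
    """Apply named predicate rules row by row; return failure counts per rule."""
    failures = {name: 0 for name in rules}
    for row in rows:
        for name, predicate in rules.items():
            if not predicate(row):
                failures[name] += 1
    return failures

# Illustrative rules for a hypothetical orders feed.
rules = {
    "qty_positive": lambda r: r.get("qty", 0) > 0,
    "site_present": lambda r: bool(r.get("site")),
}

batch = [
    {"qty": 5, "site": "Frisco"},
    {"qty": -1, "site": "Frisco"},   # fails qty_positive
    {"qty": 2, "site": ""},          # fails site_present
]

failures = run_checks(batch, rules)

# Alerting hook: flag the batch when any rule exceeds its failure budget.
ALERT_THRESHOLD = 0
alerts = [name for name, n in failures.items() if n > ALERT_THRESHOLD]
print(failures, alerts)
```

In production these counts would typically be published to a monitoring system and compared against SLA thresholds, with breaches paging the on-call engineer per the pipeline's runbook.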

PREFERRED QUALIFICATIONS:
Experience with real-time streaming technologies, including Kafka, Azure Event Hub, Delta Live Tables, or Spark Structured Streaming.
Familiarity with Oracle ERP ingestion patterns or large-scale ERP migration programs.
Background in supply chain logistics, reverse logistics, or procurement data domains.
Experience with legacy platform migration and decommission programs.
Python development skills beyond PySpark, including utility scripting and framework contributions.
