Supply Chain Data Lead
Philadelphia, PA - USA
Job Summary
We are seeking a Supply Chain Data Lead to design and own the end-to-end supply chain data architecture, guide the development of scalable data pipelines, and enable analytics across inventory, logistics, fulfillment, and supplier performance.
Key Responsibilities
Data Architecture & Design
Design the Supply Chain Anomaly Detection and Revenue Assurance platform for order-processing data.
Define and own the end-to-end supply chain data architecture, including source ingestion, transformation, storage, and consumption layers.
Design data models for supply chain domains such as inventory, logistics, fulfillment, and supplier performance.
Establish architecture standards, patterns, and design guidelines aligned with the enterprise data strategy.
Data Engineering & Platforms
Architect and guide development of scalable data pipelines using:
o PySpark and Spark-based processing
o Python for transformation, orchestration, and data services
o Enterprise ETL/ELT frameworks
o Advanced SQL for data modeling and analytics
Support both batch and near-real-time data processing use cases.
Optimize pipelines for data quality, performance, scalability, and cost.
Supply Chain Analytics Enablement
Enable downstream usage for:
o Supply chain planning and forecasting
o Inventory optimization and demand analytics
o Vendor and procurement performance reporting
o Operational KPIs and executive dashboards
o SKU management
Partner with analytics and data science teams to ensure data is fit for advanced analytics platforms.
Cloud & Data Storage
Design and oversee implementation of data solutions leveraging cloud-native data platforms.
Ensure secure, compliant, and resilient data storage and access patterns.
Data Governance & Quality
Partner with governance and security teams to ensure:
o Data quality, consistency, and reliability
o Data lineage, metadata management, and documentation
o Compliance with data privacy, security, and internal policies
Leadership & Collaboration
Collaborate with product owners, supply chain leaders, engineering teams, and vendors.
Translate business and operational needs into technical architecture solutions.
Mentor data engineers and architects on best practices and design principles.
Qualifications
Data Engineering: 10 years building data pipelines with Kafka/CDC and ETL tooling.
Streaming Expertise: Hands-on experience with stream-processing frameworks such as Spark Streaming and Kafka Streams.
SQL & BI: Strong SQL/analytics skills and experience building dashboards.
Data Governance: Familiarity with lineage/audit tools (e.g., OpenLineage), data privacy, and regulatory controls.
Communication: Strong cross-functional collaboration and experience presenting to executives.
Education: Bachelor's degree in CS/Engineering or equivalent practical experience.