About Us
We turn customer challenges into growth opportunities.
Material is a global strategy partner to the world's most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences.
Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using deep technology expertise and strategic partnerships with top-tier technology partners. Be a part of an Awesome Tribe.
Role: Lead Data Engineer (Azure Data Engineering, Snowflake, Data Warehousing; Kafka optional)
Job Responsibilities
We are seeking a Lead Data Engineer to design and deliver scalable, high-performance data platforms for real-time and batch analytics. The ideal candidate has deep expertise in data engineering, data modelling and warehousing, Snowflake, and Azure services, with a proven ability to build, orchestrate, and optimise data pipelines end to end.
Azure Data Engineering Pipelines & Processing (Data Warehouse)
- Architect, design, and build scalable batch and real-time data pipelines using Azure Data Engineering services (ADF, Synapse, Data Lake, Event Hub, Functions) and PySpark.
- Apply orchestration and load-optimisation strategies for reliable, high-throughput pipelines.
- Implement both streaming (low-latency) and batch (high-volume) processing solutions.
- Drive best practices in data modelling, data warehousing, and SQL development.
Snowflake Cloud Data Warehouse
- Design and optimise data ingestion pipelines from multiple sources into Snowflake, ensuring availability, scalability, and cost efficiency.
- Implement ELT/ETL patterns, partitioning, clustering, and performance tuning for large datasets.
- Develop and maintain data models and data warehouses, leveraging Snowflake-specific features (streams, tasks, warehouses).
Real-Time Streaming (Kafka, optional)
- Design and implement event-driven architectures using Kafka (topic design, partitioning, consumer groups, schema management, monitoring).
- Ensure high-throughput, low-latency stream processing and data reliability.
Collaboration & Leadership
- Partner with data scientists, ML engineers, and business stakeholders to deliver high-quality, trusted datasets.
- Translate business requirements into scalable, reusable data engineering solutions.
- Provide technical leadership, mentoring, and knowledge-sharing within the team.
Required Skills & Qualifications
- 5 years of data engineering experience in large-scale enterprise projects.
- Strong expertise in Snowflake: ELT/ETL pipelines, performance tuning, query optimisation, and advanced features (streams, tasks, warehouses).
- Hands-on experience with the Azure Data Engineering stack: ADF, Event Hub, Synapse, Databricks, Data Lake, Functions, and scaling/load-balancing strategies.
- Advanced SQL skills with proven ability to optimise transformations at scale.
- Proficiency in Python and PySpark for distributed, high-performance data processing.
- Demonstrated success delivering real-time and batch pipelines in cloud environments.
Preferred Skills
- CI/CD, Docker, DevOps, and server management.
- Monitoring with Azure Monitor and Log Analytics.
- Kafka (preferred but optional).
Required Experience:
Senior IC