Data Engineer to join our team in New York, NY (need onsite day 1; hybrid, 3 days from office).
Our challenge
We are seeking a skilled Data Engineer to join our dynamic team. The ideal candidate will possess expertise in data pipeline development and data warehousing, and have a solid understanding of ITIL processes to support operational efficiency.
Responsibilities:
- Design, develop, and maintain scalable data pipelines using SQL, Python, and PySpark.
- Build and optimize data warehouses, leveraging Snowflake for efficient data storage and retrieval.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Monitor and troubleshoot data workflows, ensuring data quality and performance.
- Document data processes and procedures following ITIL best practices.
Requirements:
- 10 years of experience in data engineering or related roles.
- Strong proficiency in writing complex queries for data extraction, transformation, and loading (ETL).
- Experience in Python coding and PySpark frameworks.
- Strong experience in SQL.
- Hands-on experience with designing, implementing, and managing data warehouses.
- Deep understanding of Snowflake platform features, architecture, and best practices.
- Experience with the ITIL framework, especially Incident Management and Problem Management.
- Experience with ServiceNow or Jenkins.
- Proven experience handling incident resolution, root cause analysis, and problem tracking within ITSM tools.
Preferred Certifications: