PLEASE NOTE:
- This is a 100% on-site position in New York, NY.
Job Description
We are seeking a Senior Data Engineer to design, build, and optimize scalable data solutions that power critical business operations and client initiatives. This role requires deep expertise in SQL, Python, and enterprise-scale ETL development, with strong experience supporting cloud-based data platforms and migration efforts.
You'll collaborate with cross-functional teams to develop high-performing data pipelines, ensure data reliability, and contribute to engineering best practices across the data ecosystem.
Skill Set:
- 7 years in Python, PySpark, and Snowflake. Proven expertise in data modeling, performance tuning and optimization, AWS, and Cortex AI.
- Strong experience with ETL tools and processes.
- Working knowledge of .
Responsibilities
- Design, build, and maintain scalable data pipelines using Python, PySpark, and Snowflake.
- Develop and optimize Snowflake data models, including schema design, clustering, micro-partitioning strategies, and query performance tuning.
- Implement AI-enabled data workflows using Snowflake Cortex AI or similar AI/ML-integrated data platforms.
- Build robust backend data services and APIs supporting enterprise analytics and application data needs.
- Lead end-to-end implementation of data engineering solutions across cloud environments, especially AWS.
- Develop reusable ELT frameworks and support data ingestion from diverse structured and unstructured sources.
- Apply strong SQL expertise to create efficient, scalable data transformations and data marts.
- Support or collaborate with frontend teams using in data-driven product development.
- Work with distributed data processing tools and modern compute frameworks.
- Ensure data quality, data governance, and compliance with enterprise security policies.
- Partner with data science, analytics, architecture, and product teams to deliver scalable, high-impact data products.
- Troubleshoot complex data issues, optimize compute costs, and drive continuous improvement.
Requirements
Required Qualifications:
- 7 years of experience in data engineering or backend software development.
- Strong hands-on experience with Python, PySpark, and Snowflake.
- Proven expertise in Snowflake performance tuning, query optimization, and warehousing best practices.
- Experience with Snowflake Cortex AI or other AI-enhanced data platform capabilities.
- Deep understanding of modern data warehousing concepts, including dimensional modeling, ELT/ETL patterns, and data mart design.
- Advanced SQL skills and experience designing scalable, production-grade data models.
- Experience delivering end-to-end data engineering solutions in cloud environments (AWS preferred).
- Working knowledge of for supporting or developing data-driven UI components.
- Strong understanding of distributed data processing frameworks (e.g., Spark, Snowpark, Kafka).
- Solid understanding of data governance, security, and privacy best practices.
Preferred Qualifications:
- Experience with Snowpark, Streamlit-in-Snowflake, or Snowflake Native Apps.
- Familiarity with CI/CD for data engineering workloads.
- Knowledge of containerization technologies such as Docker or Kubernetes.
- Experience supporting AI/ML workloads in production environments.