We are looking for a hands-on Data Engineer to design, build, and maintain scalable data platforms and pipelines in a modern cloud environment. You will play a key role in shaping our data architecture, optimizing data flow, and ensuring data quality and availability across the organization.
This role offers the opportunity to contribute directly to meaningful work that supports the development and delivery of life-changing products. You will collaborate with global teams and be part of a culture that values impact, growth, balance, and well-being.
- Design, build, and optimize data pipelines and ETL/ELT workflows to support analytics and reporting.
- Partner with architects and engineering teams to define and evolve our cloud-based data architecture, including data lakes, data warehouses, and streaming data platforms.
- Work closely with data scientists, analysts, and business partners to understand requirements and deliver reliable, reusable data solutions.
- Develop and maintain scalable data storage solutions (e.g., AWS S3, Redshift, Snowflake) with a focus on performance, reliability, and security.
- Implement data quality checks, validation processes, and metadata documentation.
- Monitor, troubleshoot, and improve pipeline performance and workflow efficiency.
- Stay current on industry trends and recommend new technologies and approaches.
Data Engineer (Mid-Level)
- Strong understanding of data integration, data modeling, and the SDLC.
- Experience working on project teams and delivering within Agile environments.
- Hands-on experience with AWS data services (e.g., Glue, Lambda, Athena, Step Functions, Lake Formation).
- Education and experience: Associate's degree with 8 years of experience, Bachelor's with 4 years, or Master's with 2 years; alternatively, Associate's with 4 years, Bachelor's with 2 years, or Master's with 1 year.
- Expert-level proficiency in at least one major cloud platform (AWS preferred).
- Advanced SQL skills and a strong understanding of data warehousing and data modeling (Kimball/star schema).
- Experience with big data processing (e.g., Spark, Hadoop, Flink) is a plus.
- Experience with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, Cassandra).
- Familiarity with CI/CD pipelines and DevOps principles.
- Proficiency in Python and SQL (required).
- Experience with ETL/ELT tools (e.g., Airflow, dbt, AWS Glue, ADF).
- Understanding of data governance and metadata management.
- Experience with Snowflake.
- AWS certification is a plus.
- Strong problem-solving skills and the ability to troubleshoot pipeline performance issues.