About the Role
We are looking for a Senior Data Engineer to join our growing data team. In this role, you will be a key contributor to our data infrastructure, designing and building the systems that power analytics, reporting, and business intelligence capabilities. You'll work across modern cloud data platforms, architect robust pipelines, and partner closely with analytics and engineering teams to ensure data is clean, reliable, and ready for consumption.
What You'll Do
Design, build, and maintain scalable data pipelines that ingest, transform, and deliver data across our cloud data ecosystem
Architect and implement data warehouse schemas optimized for performance, scalability, and analytical consumption
Prepare and model data to support BI tools and downstream reporting needs, ensuring data is accurate, well-documented, and easily accessible
Collaborate with data analysts, scientists, and business stakeholders to understand data requirements and translate them into robust engineering solutions
Work across multiple cloud data warehouse platforms as needed, applying best practices for each environment
Contribute to data governance practices, including lineage, cataloging, and documentation
Identify and resolve data quality issues proactively
Mentor junior team members and contribute to engineering best practices
Requirements
What We're Looking For
5 years of experience in data engineering
Hands-on, production-level experience with Snowflake (mandatory), including schema design, performance optimization, and administration
Strong experience with at least one additional data warehouse platform; Google BigQuery experience is a strong plus
Solid understanding of data warehouse design principles: star schemas, dimensional modeling, slowly changing dimensions, and similar patterns
Strong SQL skills: comfortable writing complex queries, optimizing for performance, and debugging data issues
Experience building and maintaining data pipelines using modern orchestration and transformation tools (e.g. dbt, Apache Airflow, Spark, or similar)
Proficiency in at least one programming language commonly used in data engineering (Python strongly preferred)
Expert knowledge of one or more relational or analytical databases (e.g. PostgreSQL, Redshift, SQL Server, or similar)
Experience working in cloud-native environments (AWS, GCP, or Azure)
Strong communication skills: the ability to clearly explain technical concepts to both technical and non-technical stakeholders and to collaborate effectively across teams
Nice to Have
Experience with Google BigQuery, including working with large-scale datasets and optimizing query costs
Experience designing semantic data models or working within a semantic layer (e.g. LookML, dbt metrics, AtScale)
Hands-on experience with a BI tool such as Looker, Tableau, Power BI, or similar, particularly in structuring data to serve those tools effectively
Familiarity with data lakehouse patterns or platforms (e.g. Delta Lake, Apache Iceberg)
Experience with CI/CD practices applied to data infrastructure
Required Skills & Experience

Core Technical Skills
Strong proficiency in Python, SQL, and PySpark
Hands-on expertise with Kafka, Kafka Connect, Debezium, Airflow, and Databricks
Deep experience with BigQuery, Snowflake, MySQL, Postgres, and MongoDB
Solid understanding of vector data stores and search indexing
Knowledge of GCP services such as BigQuery, Cloud Functions, Cloud Run, Dataflow, Dataproc, Datastream, etc.
Good-to-have certifications: GCP Professional Data Engineer; Elastic Certified Engineer; AI: Gemini Enterprise, Vertex AI Agent Builder, ADK

Non-Technical & Leadership Skills
Communication: exceptional verbal and written communication skills, with the ability to articulate complex technical concepts to both technical and non-technical audiences
Mentorship & Coaching: proven experience in mentoring junior and mid-level engineers, fostering a culture of continuous learning and growth
Problem-Solving: strong analytical and debugging skills, with a proactive approach to identifying and resolving technical roadblocks
Ownership & Accountability: demonstrates a high level of responsibility for project outcomes, system reliability, and code quality
Agile Proficiency: deep understanding and practical experience with Agile methodologies (Scrum/Kanban)
Stakeholder Management: ability to effectively manage expectations and build consensus across different teams

Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent practical experience)
Typically 7 years of progressive experience in data engineering, with 2 years in a technical leadership or lead engineer role