About the Role
We are looking for a Mid-Level Data Engineer to design, build, and maintain scalable data pipelines and data platforms that support analytics, reporting, and machine learning use cases. You will work closely with data analysts, data scientists, and software engineers to ensure reliable, high-quality data across the organization.
This role is ideal for someone with solid hands-on experience who can own data pipelines end-to-end and contribute to improving our data architecture.
Key Responsibilities
- Design, build, and maintain ETL/ELT pipelines that ingest data from multiple sources
- Develop and optimize data models for analytics and reporting use cases
- Ensure data quality, reliability, and performance across pipelines
- Work with stakeholders to understand data requirements and translate them into technical solutions
- Optimize data storage and query performance in data warehouses and/or data lakes
- Monitor pipelines and troubleshoot data issues in production
- Write clean, maintainable, and well-documented code
- Collaborate with DevOps and engineering teams on deployment, scaling, and security best practices
Required Qualifications
- 3-5 years of experience in Data Engineering or a related role
- Strong proficiency in SQL and at least one programming language (Python preferred)
- Hands-on experience with ETL/ELT tools or frameworks (e.g., Airflow, dbt, Spark)
- Experience working with data warehouses (e.g., Snowflake, BigQuery, Redshift)
- Familiarity with cloud platforms (AWS, GCP, or Azure)
- Understanding of data modeling concepts (star schema, normalization, denormalization)
- Experience with version control systems (Git)
Nice to Have
- Experience with streaming data (Kafka, Kinesis, Pub/Sub)
- Knowledge of CI/CD pipelines for data workflows
- Exposure to machine learning data pipelines
- Familiarity with data governance, security, and compliance practices
Soft Skills
- Strong problem-solving and analytical skills
- Ability to work independently and manage priorities
- Clear communication with both technical and non-technical stakeholders
- Team-oriented mindset with attention to detail
What We Offer
- Competitive salary and benefits
- Flexible working arrangements
- Opportunities for learning and career growth
- Collaborative and data-driven culture