As an Analytics Engineer, you will play a central role in designing, building, and operating our Insight Environment. You will be responsible for developing reliable, scalable data pipelines, modelling data for analytical and machine-learning use cases, and ensuring high standards of data quality and observability across the platform.
You will work across the analytics, data engineering, and ML lifecycle, owning production-grade data transformations, orchestrating workflows, and supporting the deployment and monitoring of machine-learning models. While you will engage with event-level and marketing data where relevant, your primary impact will be in strengthening the engineering foundations that enable trusted analytics and ML at scale. Your work will directly support data-driven decision-making by ensuring our data and models are robust, performant, and production-ready.
Key Responsibilities:
- Data Platform Ownership: Own and evolve core datasets and data domains within the Insight Environment, applying strong data governance, quality controls, and stewardship across the platform.
- Analytics Engineering & Data Modelling: Design and maintain production-grade data models and transformations using dbt and BigQuery, providing reliable, well-structured data for analytics, reporting, and downstream ML use cases.
- Machine Learning & MLOps Enablement: Operationalise machine learning models and data science workflows in Databricks, supporting scalable deployment, monitoring, and lifecycle management of models in production.
- Workflow Orchestration & Reliability: Own the orchestration layer of the Insight Environment (Prefect), ensuring resilient, observable, and well-documented data workflows across ingestion, transformation, and activation.
- Data Integration & Activation: Build and manage data pipelines, including reverse ETL and activation workflows (e.g. via RudderStack), to ensure timely and consistent data flow between analytical, operational, and ML systems.
Qualifications:
You will have / be:
- Strong experience in data or analytics engineering roles, with advanced proficiency in Python and SQL for building and maintaining production-grade data pipelines and models.
- Solid working knowledge of PySpark or similar distributed computing frameworks in real-world data processing environments.
- A degree in computer science, data science, engineering, or a related field, or equivalent professional experience demonstrating the same depth of technical capability.
- A practical understanding of how machine learning models are productionised, including deployment, monitoring, and lifecycle considerations.
- Proven experience in data preparation and modelling, with a strong focus on accuracy, reliability, and reusability across analytical and ML use cases.
- Experience designing and operating orchestrated data workflows, with an appreciation for reliability, observability, and maintainability.
- Familiarity with reverse ETL concepts and data activation patterns, and the ability to apply them to real business problems.
- Strong problem-solving skills and the ability to communicate clearly and effectively with analytics, data science, and engineering stakeholders.
Additional Information:
Why you'll love this role:
In this role you'll be at the forefront of data technology, working with an advanced, modern data stack that includes industry-leading tools such as dbt, Databricks, BigQuery, and Prefect. You'll not only apply these powerful tools to propel our data infrastructure forward but also continuously learn and master them. Our team thrives on innovation and efficiency, so you'll have the chance to contribute to and shape our evolving data ecosystem. The role is designed to be a career-defining opportunity for a data enthusiast who is eager to explore the depths of analytics engineering and take ownership of projects that push the boundaries of what our data can achieve.
Benefits
- 40 days of holiday, including bank holidays, which you can take flexibly when you want.
- World-class private health insurance with dental coverage.
- Significant Flexible Benefits budget to spend on the things that matter the most to you.
- Employee Assistance Program
- Life Insurance
- Critical Illness Insurance
Remote Work:
No
Employment Type:
Full-time