Multiverse is the upskilling platform for AI and Tech adoption.
We have partnered with 1,500 companies in the US and UK to deliver a new kind of learning that's transforming today's workforce.
Multiverse apprenticeships are designed for people of any age and career stage, and focus on building critical AI, data, and tech skills. Multiverse learners have driven $2bn in ROI for their employers, using the skills they've learned to improve productivity and measurable performance.
In June 2022, Multiverse announced a $220 million Series D funding round co-led by StepStone Group, Lightspeed Venture Partners, and General Catalyst. With a post-money valuation of $1.7bn, the round made the company the UK's first EdTech unicorn.
But we aren't stopping there. With a strong operational footprint and 800 employees globally, we have ambitious plans to continue scaling. We're building a world where tech skills unlock people's potential and output. Join Multiverse and power our mission to provide equitable access to economic opportunity for everyone.
We're looking for an Analytics Engineer to help build and maintain the data models that power analytics and data science across the business. You'll focus on developing robust, scalable dbt pipelines and contributing to the evolution of our data platform, ensuring that data is accessible, trusted, and well-structured.
This role is hands-on and ideal for someone with a strong technical foundation who enjoys solving data problems, writing clean and efficient SQL, and collaborating with analysts, engineers, and product teams.
This role sits within the Data & Insight team, reporting to the Head of Analytics Engineering. We're looking for someone who's detail-oriented, solution-driven, and pragmatic: someone who takes ownership of their work and is excited to build product-focused data models.
Data Modelling & Transformation
Build and maintain dbt models that transform raw data into clean, documented, and accessible datasets
Translate business and analytics requirements into scalable data models
Design and implement data warehouse schemas using dimensional modelling techniques (fact and dimension tables, slowly changing dimensions, etc.)
Participate in design and code reviews to improve model design and query performance
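To illustrate the kind of dbt work described above, here is a minimal sketch of a dimension model; the model and column names (`stg_customers`, `dim_customers`) are hypothetical, not from the posting:

```sql
-- models/marts/dim_customers.sql (hypothetical names)
-- Builds a customer dimension from a staging model, keeping only
-- the most recent record per customer.

with staged as (

    select
        customer_id,
        customer_name,
        country,
        updated_at,
        row_number() over (
            partition by customer_id
            order by updated_at desc
        ) as row_num
    from {{ ref('stg_customers') }}

)

select
    customer_id,
    customer_name,
    country,
    updated_at
from staged
where row_num = 1
```

In a dbt project this would be compiled and run with `dbt run --select dim_customers`, with `ref()` resolving the upstream staging model and wiring the dependency graph.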
Testing, Documentation, and CI/CD
Implement and maintain dbt tests to ensure data quality and model accuracy
Document data models clearly to support cross-functional use
Use GitHub and CI/CD pipelines to manage code and deploy changes safely and efficiently
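dbt supports generic tests declared in YAML as well as singular tests written directly as SQL: a query that returns the rows violating an expectation, so the test passes when it returns nothing. A sketch, with hypothetical model and column names:

```sql
-- tests/assert_no_negative_order_totals.sql (hypothetical names)
-- dbt singular test: passes when this query returns zero rows.

select
    order_id,
    order_total
from {{ ref('fct_orders') }}
where order_total < 0
```

Running `dbt test` executes this alongside any schema-level tests (e.g. `not_null`, `unique`) defined in the project's YAML files.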
Performance & Architecture
Optimise dbt models and SQL queries for performance and maintainability
Work with Snowflake, developing on top of a data lake architecture
Ensure dbt models are well-integrated with data catalogues and accessible for downstream use
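One common way to optimise dbt models on a warehouse like Snowflake is incremental materialisation, which processes only new rows instead of rebuilding the whole table on every run. A sketch, with hypothetical model and column names:

```sql
-- models/marts/fct_events.sql (hypothetical names)
-- Incremental model: on incremental runs, only rows newer than the
-- latest timestamp already in the target table are processed.

{{ config(materialized='incremental', unique_key='event_id') }}

select
    event_id,
    user_id,
    event_type,
    event_timestamp
from {{ ref('stg_events') }}

{% if is_incremental() %}
where event_timestamp > (select max(event_timestamp) from {{ this }})
{% endif %}
```

The first run builds the full table; subsequent runs append or merge only the delta, which keeps large event tables cheap to refresh.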
Required Skills & Experience
2 years of experience building and optimising complex SQL (including complex joins, window functions, and optimisation methods)
Strong understanding of data modelling and warehouse design (e.g. Kimball-style dimensional modelling)
Experience using dbt in production environments including testing and documentation
Familiar with version control (GitHub)
Experience tuning dbt models and SQL queries for performance
Able to independently transform business logic into technical implementation
Comfortable participating in and contributing to code reviews
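As a flavour of the window-function fluency listed above, consider computing each order's rank and share of its customer's total spend; the table and column names here are illustrative only:

```sql
-- Hypothetical analytical query using window functions:
-- rank each order within its customer and compute its share
-- of that customer's total spend.

select
    customer_id,
    order_id,
    order_total,
    rank() over (
        partition by customer_id
        order by order_total desc
    ) as order_rank,
    order_total / sum(order_total) over (
        partition by customer_id
    ) as share_of_customer_spend
from fct_orders
```

The `partition by` clauses scope each calculation to a single customer without collapsing the rows the way a `group by` would.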
Desirable but not required
Experience with Snowflake
Experience with CI/CD for data workflows
Familiarity with Python/Airflow for data transformation or orchestration tasks
Experience with data visualisation tools (e.g. Tableau, Looker)
Working knowledge of infrastructure-as-code tools such as Terraform
Full-Time