Data Engineer (BI) H/F/X
Job Summary
At Veepee we believe in the power of data. Our BI team's mission is to build the foundation that connects, empowers, and supports every Veepee user in making data-driven decisions. We are looking for a passionate Data Engineer to help us build and scale the infrastructure that makes this possible.
Our data warehouse is built on GCP (Google BigQuery) and we use dbt, but we are constantly evolving: we recently added Trino to our stack.
As a Data Engineer on our BI team, your primary mission will be to design, build, and maintain the scalable data pipelines and infrastructure that power Veepee's Business Intelligence ecosystem.
You will play a crucial role in evolving our technical stack, ensuring its reliability and performance while paving the way for AI-driven capabilities.
Key Responsibilities
Develop and maintain high-performance Python-based services and APIs to support data integration, automation, and the operationalization of analytical models.
Build REST APIs with Flask and Flask-RESTful.
Document and version APIs with Swagger or similar.
Apply best practices: error handling, validation, authentication/authorization, automated testing, and observability.
Own the orchestration, scheduling, and monitoring of data workflows and pipelines (e.g. using n8n or Airflow), ensuring data freshness and adherence to SLAs in collaboration with the Data Engineering platform team.
Champion and implement software engineering best practices, including version control with Git (branching, PRs, code reviews) and CI/CD for automated testing and deployment of data pipelines.
Collaborate with data governance, data engineers, data scientists, and data analysts in a young and international team (mainly based in Barcelona, Brussels, and Paris).
You will be a key technical partner for BI developers and end users, ensuring they have access to clean, reliable, and timely data to generate valuable insights based on an AI-first strategy.
Core technologies in the role: Strategy (formerly MicroStrategy), Python, Google Cloud Platform (BigQuery), dbt, Git, workflow orchestration in n8n, and Swagger.
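To give a flavor of the API work described above, here is a minimal sketch of a Flask endpoint with input validation and JSON error handling. The route, payload fields, and pipeline names are illustrative assumptions, not part of Veepee's actual services.

```python
# Minimal Flask REST endpoint sketch with validation and JSON error
# handling. Route and field names ("/pipelines/...", "target_date")
# are hypothetical examples, not a real Veepee API.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

@app.errorhandler(400)
def bad_request(err):
    # Return errors as JSON so API clients can parse them consistently.
    return jsonify(error=err.description), 400

@app.route("/pipelines/<name>/runs", methods=["POST"])
def trigger_run(name):
    payload = request.get_json(silent=True) or {}
    # Basic input validation before doing any work.
    if "target_date" not in payload:
        abort(400, description="missing required field: target_date")
    # A real service would enqueue a pipeline run here; this sketch
    # simply acknowledges the request.
    return (
        jsonify(pipeline=name, target_date=payload["target_date"], status="queued"),
        202,
    )
```

Flask's built-in test client (`app.test_client()`) makes it straightforward to exercise both the happy path and the validation error in automated tests, in line with the testing practices listed above.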
Your Profile
You have an engineering mindset and are passionate about building scalable, reliable data systems.
Highly organized, with strong attention to detail and a commitment to quality.
Curious, proactive, and a natural problem-solver.
A strong team player who enjoys collaborating with technical and non-technical peers.
An excellent communicator capable of explaining complex technical concepts clearly.
Must-have Skills
Proven experience as a Data Engineer with a focus on building and maintaining data pipelines.
Experience with BI tools (e.g. MicroStrategy, Power BI, Tableau).
Solid command of SQL and data modeling fundamentals (design and optimization).
Experience developing in Python:
Building REST APIs with Flask and/or Flask-RESTful.
API documentation with Swagger or similar.
Hands-on knowledge of dbt (modeling, tests, documentation, and data lineage).
Professional use of Git (branching, pull requests, code reviews) and exposure to CI/CD practices.
Familiarity with workflow orchestration (scheduling and monitoring jobs/pipelines).
Skilled at facilitating workshops and communicating effectively with stakeholders.
Fluent English (mandatory).
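The SQL and data-modeling fundamentals listed above can be illustrated with a tiny dimensional model, sketched here using Python's standard-library sqlite3 module. The fact/dimension tables and column names are invented for the example.

```python
# A small SQL sketch: a fact table joined to a dimension table, the
# basic shape of BI data modeling. Table and column names are
# illustrative, not from any real schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales (product_id INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'shoes'), (2, 'bags');
    INSERT INTO fact_sales VALUES (1, 10.0), (1, 5.0), (2, 7.5);
    """
)

# Aggregate fact rows by a dimension attribute.
rows = conn.execute(
    """
    SELECT p.category, SUM(f.amount) AS total
    FROM fact_sales AS f
    JOIN dim_product AS p USING (product_id)
    GROUP BY p.category
    ORDER BY total DESC
    """
).fetchall()
print(rows)  # [('shoes', 15.0), ('bags', 7.5)]
```

The same join-and-aggregate pattern scales up directly to BigQuery and dbt models; the design question (which columns belong in facts vs. dimensions) stays the same regardless of the engine.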
Nice-to-have
Experience with MicroStrategy is a strong plus.
Use of mstrio-py for automation and MicroStrategy operations.
GCP/BigQuery environment experience (or equivalent cloud data warehouses).
Python data ecosystem: numpy (and/or pandas) for lightweight transformations and utilities.
Familiarity with containerization technologies like Docker and orchestration with Kubernetes.
Hands-on experience with Jupyter Notebook.
A keen interest in generative AI and how it can be applied to improve data engineering and BI processes.
Commitment to continuous learning (conferences, training, etc.).
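As an example of the "lightweight transformations and utilities" mentioned above, here is a short pandas sketch that aggregates order rows by country. The column names and figures are made up for illustration.

```python
# Illustrative lightweight transformation with pandas: roll up
# per-order revenue into per-country totals. Data is hypothetical.
import pandas as pd

orders = pd.DataFrame(
    {
        "country": ["FR", "FR", "BE", "ES"],
        "revenue": [120.0, 80.0, 50.0, 30.0],
    }
)

# Group-and-sum is the kind of small utility transformation that often
# lives alongside pipeline code rather than in the warehouse itself.
revenue_by_country = (
    orders.groupby("country", as_index=False)["revenue"]
    .sum()
    .sort_values("revenue", ascending=False)
    .reset_index(drop=True)
)
print(revenue_by_country)
```

For anything heavier than this, the work belongs in BigQuery/dbt; pandas and numpy shine for glue code, validation checks, and small API-side reshaping.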
Required Experience:
IC (Individual Contributor)