We are looking for a Senior Data Engineer to join our growing Data team and play a key role in shaping and scaling our data infrastructure. This position is ideal for a highly skilled and motivated engineer who thrives in a fast-paced iGaming environment and is passionate about building robust data systems that power business-critical decisions.
Reporting directly to the Head of Data, you will take ownership of a wide range of engineering responsibilities, from building and maintaining new data integrations (batch and streaming) to automating pipelines and designing scalable data models. This role is pivotal in offloading day-to-day engineering tasks and driving long-term data excellence across the company.
Please note that for this role we are hiring only within Europe.
Visit our website to find out more about us and to apply.
Tasks
- Data Architecture: Design and implement robust, scalable data architecture to support the collection, storage, and processing of large volumes of iGaming and customer data.
- ETL Development: Build and maintain efficient ETL and ELT processes to ensure seamless data ingestion from multiple sources into our data lake, DWH, and final consumers.
- Data Modeling: Create and maintain well-structured data models that ensure accuracy, integrity, and optimal performance for analytical use cases, following Kimball best practices.
- Data Integration: Collaborate with cross-functional teams to integrate data across systems using several types of integration, such as APIs, Kafka, database extractions, web scraping, and email.
- Streaming & Real-Time Processing: Develop and manage real-time data pipelines using streaming technologies (e.g. Kafka) to support time-sensitive use cases.
- Data Quality & Governance: Enforce data governance policies and best practices, ensuring high standards for data quality, security, and regulatory compliance.
- Performance Optimization: Continuously monitor, troubleshoot, and optimize data pipelines, queries, and infrastructure for speed, efficiency, and reliability.
- Pipeline Orchestration: Support and expand production-grade orchestration workflows using Apache Airflow.
- Observability & Monitoring: Set up and maintain monitoring, alerting, and observability for data pipelines and systems using tools like Elementary, Datadog, Prometheus, or Grafana.
- CI/CD and Automation: Develop and maintain CI/CD pipelines for data projects using Bitbucket Pipelines or GitHub Actions, ensuring streamlined deployment and testing processes.
- Version Control & Collaboration: Follow best practices in Git-based version control and work collaboratively in a modern DevOps environment.
Requirements
- 5 years of experience with Python, preferably including 2 years working with Airflow.
- 5 years of experience with SQL in big data environments, preferably including 2 years with dbt.
- 4 years in cloud environments, preferably with 2 years each in Snowflake and AWS.
- 2 years of experience with data streaming platforms, preferably Kafka.
- 2 years of hands-on experience with containerization (Docker, Kubernetes).
- 3 years managing CI/CD pipelines, ideally using Bitbucket Pipelines or GitHub Actions.
- Deep understanding of data governance, data quality, and data lineage principles.
- Proficient in using monitoring and alerting tools such as Datadog, Prometheus, or Grafana.
- Solid experience with version control tools (Git) and infrastructure as code (e.g. Terraform).
- Strong communication and problem-solving skills, with a proactive and self-driven attitude.
Benefits
- Competitive salary commensurate with skills and experience
- Performance bonus structure dependent on achievement of set targets and personal performance
- Consultancy contract within the EU on a full-time basis with 25 days PTO, or an employment contract if you are based in Sofia, Budapest, or Malta