Mid-Level Data Engineer

OnHires


Job Location:

Kutaisi - Georgia

Monthly Salary: Not Disclosed
Posted on: 3 days ago
Vacancies: 1 Vacancy

Job Summary

Highlights

  • Remote-first role open to candidates from Brazil / South America, Turkey, and Northern Africa
  • Work on a data-driven SaaS platform focused on large-scale web data collection and automation
  • Location: Remote, preferably in a European time zone (Eastern Europe, Turkey, Tunisia); Brazil as an alternative.

About Our Client (for Engineers)

Our client is a remote-first SaaS product company based in Berlin, building data-intensive solutions that power advanced analytics and operational decision-making.
They work with complex, high-volume data flows and rely on a modern, well-structured engineering stack to keep things reliable, transparent, and scalable.

What makes this environment attractive for engineers:

  • You work end-to-end with real data products, not abstract pipelines.

  • You collaborate with a small, experienced team that values clean code, simplicity, and engineering discipline.

  • You use modern tooling (Python, Prefect, SQL, Metabase, cloud services) without legacy overload or bureaucratic overhead.

  • You get clear requirements, manageable scope, and the ability to make meaningful improvements.

  • You grow by solving practical, non-trivial data challenges that directly impact the product.

Role Overview

This is a mid-level role designed for engineers who already have solid Python and ETL experience and want to deepen their expertise in workflow orchestration, data quality, and modern BI tooling.

You will build reliable pipelines, improve data accuracy, and support the team in turning raw data into clean, usable datasets and dashboards.
You won't be expected to lead projects or mentor others, but you must be able to independently handle well-defined engineering tasks.

Responsibilities

You will work alongside senior engineers and product stakeholders to maintain and evolve the data platform. Your work will include:

ETL & Data Pipelines

  • Develop and maintain ETL pipelines using Python and Prefect

  • Write clean, tested code for transformations, ingestion jobs, and integrations

  • Optimize pipeline performance and reliability

Data Quality

  • Implement validation rules, sanity checks, and lightweight monitoring

  • Investigate and resolve data issues in collaboration with the team

  • Contribute to improving data consistency and transparency

Data Modeling & Analytics

  • Help develop new data models and contribute to schema design

  • Build and maintain dashboards in Metabase to support internal stakeholders

  • Support analytical reporting by preparing clean datasets

Collaboration

  • Translate business requirements into clear engineering tasks

  • Document your work and follow internal best practices

  • Communicate effectively with engineering, product, and business teams

Your Profile

Must Have

  • 3–5 years in Data Engineering or Python Engineering

  • Strong proficiency in Python (pandas, SQLAlchemy, typing, testing)

  • Hands-on experience with ETL orchestration (Prefect preferred; Airflow acceptable)

  • Solid SQL skills and familiarity with relational databases

  • Experience with data quality checks, validation, or monitoring

  • Experience with Metabase or similar BI tools

  • Understanding of cloud-based data workflows (AWS, GCP, or Azure)

  • Ability to work independently on assigned tasks

  • English proficiency and clear communication

Nice to Have

  • dbt or similar transformation tools

  • Experience with NoSQL databases

  • Basic analytics or statistics knowledge

  • Exposure to APIs or light backend work

  • Experience in distributed remote teams

What Our Client Offers

  • Work in a small, senior, engineering-driven environment

  • Fully remote culture with flexible hours

  • Modern stack (Python, Prefect, SQL, Metabase, AWS/GCP/Azure)

  • Real opportunity to deepen ETL & data quality expertise

  • Competitive compensation, depending on location

  • Personal development support

  • Optional office access on the tech campus near Berlin

  • Regular virtual and onsite team meetups

