Data Engineer (m/f/d)


Job Location:

Lüneburg - Germany

Monthly Salary: Not Disclosed
Posted on: Yesterday
Vacancies: 1 Vacancy

Job Summary

We're looking for a mid-level Data Engineer to strengthen our internal data platform and help product teams ship faster, more reliable data-powered features.

You'll join our Engineering Foundation chapter, collaborating closely with DevOps, product engineers, and other data engineers to build, operate, and evolve the systems that power our products.

You'll also work closely with our cross-functional engineering teams to create impactful, reliable insights from our data.

Your Responsibilities:

  • Design and build new data-powered features for internal and external products, working closely with product and frontend/backend engineers.

  • Develop and maintain scalable, well-documented data pipelines using Airflow and dbt, running on Snowflake and other modern cloud tooling.

  • Create internal tools, APIs, or utilities to make data more accessible and usable across engineering and product teams.

  • Contribute to the architecture and implementation of new data products, from ingestion to modeling to serving.

  • Set up and monitor data quality, freshness, and health, integrating observability into everything you ship.

  • Build and maintain CI/CD workflows for DAGs, dbt models, and platform configuration using GitOps principles.

  • Troubleshoot pipeline issues and performance bottlenecks, and proactively improve resilience and execution speed.

  • Collaborate with product teams to identify opportunities to simplify and scale data workflows.

  • Collaborate on platform improvements such as cost optimization, model run tracking, and efficient use of compute/storage.


Qualifications:

What We're Looking For

Required:

  • 2-4 years of experience in data engineering or platform-oriented backend roles

  • Solid experience with:

    • Airflow DAGs or other task orchestration

    • dbt (Core or Cloud) for data modeling

    • Snowflake or similar cloud data warehouse

    • SQL and Python for scripting and operational logic

    • CI/CD pipelines (e.g. GitHub Actions)

  • Familiar with observability and monitoring (e.g. Datadog, data freshness checks)

  • Comfortable working at the intersection of data pipelines, infrastructure, and developer experience

Bonus:

  • Experience in cost-aware data operations or platform governance

  • Exposure to ML workflows and model/data validation patterns


Remote Work:

No


Employment Type:

Full-time



About Company

SCAYLE is one of the fastest-growing enterprise commerce platforms in the world and empowers B2C brands and retailers to create outstanding customer experiences with one unified backend. SCAYLE’s headless and composable architecture is based on an API-first approach and is continuously ...
