Data engineering is your passion, and you love to make sure that data can be turned into valuable assets. You are keen on creating data products that enable teams and organisations to increase their productivity. You have solid experience in designing and maintaining data architectures. You have an engineering mindset and love to analyze complex datasets. You love to work in an agile environment and deliver in short iterations. If this sounds interesting to you, then it is YOU we need on our team!
Tasks
- Design, develop, optimize, and maintain data architectures.
- Design and maintain ingestion pipelines for multiple data sources.
- Analyze, manipulate, transform, and process large and complex datasets.
- Enable training and running machine learning models.
- Build real-time data pipelines.
- Help our customers become cloud-native and data-driven companies.
- Support your team with active knowledge transfer.
- Be part of, and influence, the introduction of new tools, methodologies, and techniques.
- Work in an agile environment and a cross-functional team.
Requirements
At heart, you are a passionate team player who respects the opinions of their colleagues:
- You know how to be the best team player.
- You have an eye for detail and excel at documenting your work.
- You base your decisions on metrics.
- You are highly structured and set the benchmark for quality.
- You are open to new technologies.
- You have at least 5 years of experience as a Data Engineer.
- You have at least 3 years of experience with either Python or Scala, as well as SQL.
- You have a bachelor's degree in computer science, data science, or data engineering, or in a related subject such as mathematics or physics.
- You have experience in semantic modelling of complex data landscapes and are familiar with concepts such as Data Lake, Data Warehouse, Data Vault, and Data Mart.
- You have a deep understanding of various data stores, both structured and unstructured, and their capabilities (e.g. distributed filesystems, SQL and NoSQL data stores).
- You know exactly how to structure data pipelines for reliability, scalability, and optimal performance.
- You are comfortable working with analytics processing engines (e.g. Spark, Flink).
- You have worked with many different storage formats and know when to use which (e.g. JSON, Parquet, ORC).
- You speak fluent English (maybe even a bit German).
Bonus experience (nice to have):
- ML Engineering & MLOps experience, including deploying, tracking, and monitoring models in production using MLflow, Kubeflow, TensorFlow Serving, or similar tools.
- Experience with cloud technologies such as (Azure) Databricks, Fabric, Snowflake, AWS Athena, or Google BigQuery.
- Experience building real-time data pipelines using tools like Azure Stream Analytics, Amazon Kinesis, Google Cloud Dataflow, Kafka, or RabbitMQ.
- Familiarity with CI/CD for data pipelines and ML models, including tools such as GitHub Actions, Jenkins, or Airflow.
About us
At MobiLab, we are committed to empowering our employees to bring their creative mindset into action, guiding our customers toward reaching their full data potential and becoming cloud-native organisations.
We are a diverse and dynamic team united by a commitment to engineering excellence. We are committed to inclusivity, where individuals of all backgrounds, including those with disabilities, feel welcomed and valued. You will directly impact how future business works and contribute to industry-leading companies.
We are dedicated to growing our employees. Our company culture encourages knowledge sharing and learning through dedicated MobiLab Career Development. Our headquarters, located in the heart of Cologne, offers a creative work environment. We provide a range of benefits, including a public transport ticket, access to industry conferences, a company pension scheme, and more.
If you're passionate about Cloud Integration and strive for engineering perfection, we invite you to join our MobiLab team. Let's grow together!