Salary Not Disclosed
1 Vacancy
For an exciting project with our client in Zurich (remote work only possible within Switzerland), we are looking for a Big Data System Engineer / Platform Engineer (m/f/d).
Key Facts
Start: 02.05.2025
Duration: 6 months / extension option
Capacity: 100%
Employment type: Staff leasing
Job site: Zurich
Job country: Switzerland
Your tasks
Operate Global Data Platform components (VM servers, Kubernetes, Kafka) and applications (Apache stack, Collibra, Dataiku, and similar)
Implement automation of infrastructure, security components, and Continuous Integration & Continuous Delivery for optimal operation of data pipelines (ELT/ETL)
Develop solutions to build resiliency into data pipelines, with platform health checks, monitoring, and alerting mechanisms that improve the quality, timeliness, recency, and accuracy of data delivery
Apply DevSecOps & Agile approaches to deliver a holistic, integrated solution in iterative increments
Liaise and collaborate with enterprise security, digital engineering, and cloud operations to gain consensus on architecture and solution frameworks
Review system issues, incidents, and alerts to identify root causes, and continuously implement features to improve platform performance
Stay up to date with the latest industry developments and technology trends to successfully lead and design new features and capabilities
Must-have competences
5 years of experience in building or designing large-scale, fault-tolerant distributed systems
Experience with physical Disaster Recovery testing in large on-premises data platforms with upstream and downstream pipelines
Experience with agile project management and methods (e.g., Scrum, SAFe)
In-depth knowledge of all analytical value streams, from enterprise reporting (e.g., Tableau) to data science (incl. MLOps)
Fluent in English; good knowledge of German is beneficial
Hands-on working knowledge of large data solutions (for example: data lakes, delta lakes, data meshes, data lakehouses, data platforms, data streaming solutions)
In-depth knowledge of and experience with one or more large-scale distributed technologies, including but not limited to: the Hadoop ecosystem, Kafka, Kubernetes, and Spark
Expert in Python and Java or another static language such as Scala/R; Linux/Unix scripting; Jinja templates and Puppet scripts; firewall config rule setup
Expertise in VM setup and scaling, K8s scaling (pods), managing Docker with Harbor, and pushing images through CI/CD
Experience in integrating streaming and file-based data ingestion/consumption (Kafka, Control-M, AWA)
Well-versed in DevOps data pipeline development and automation using Jenkins and Octopus (optional: Ansible, Chef, XL Release, and XL Deploy)
Experience predominantly with on-prem Big Data architecture; cloud migration experience is a plus
Hands-on experience in integrating Data Science Workbench platforms (e.g., Dataiku)
Nice-to-have competences
Higher education (e.g., a Fachhochschule degree in Wirtschaftsinformatik / business informatics)
Exposure to data formats such as Apache Parquet, ORC, or Avro
Experience in machine learning algorithms is a plus
Let's power the future together
From Business Case to Implementation: As a leading consulting firm for strategic transformations, we are a trusted partner for our clients and for our employees. Responsible, high-performing, and always with a focus on people. #WeAreWavestone
With our 360° portfolio of consulting services, we combine top-tier industry expertise with a wide range of cross-sector skills, work in interdisciplinary teams, and think outside the box. This allows us to offer our partner companies and freelancers comprehensive perspectives within our own projects, while also supporting them as a long-standing framework agreement partner in filling project vacancies promptly and directly.
We look forward to hearing from you!
Full Time