Job Overview:
As a Scala Developer in our team, you will work with large-scale manufacturing data coming from our globally distributed plants. You will focus on building efficient, scalable, data-driven applications that, among other use cases, connect IoT devices; pre-process, standardize, or enrich data; feed ML models; or generate alerts for shop-floor operators.
The data sets produced by these applications, whether data streams or data at rest, need to be highly available, reliable, consistent, and quality-assured so that they can serve as input to a wide range of other use cases and downstream applications.
We run these applications on a Kubernetes-based edge data platform in our plants. The platform is currently in its ramp-up phase, so apart from building applications you will also contribute to scaling the platform, including topics such as automation and observability.
Finally, you are expected to interact with customers and other technical teams, e.g. for requirements clarification and the definition of data models.
Qualifications:
- Bachelor's degree in Computer Science, Computer Engineering, or a relevant technical field, or equivalent; Master's degree preferred.
- 5 years of experience in software engineering and/or backend development
Additional Information:
Key Competencies:
Required Skills:
- Develop, deploy, and operate data processing applications written in Scala and running on Kubernetes (we leverage Kafka for messaging, KStreams and ZIO for data processing, and PostgreSQL and S3 for storage)
- Contribute to the ramp-up of our edge data processing platform, incl. topics such as deployment automation, building CI/CD pipelines (we use GitHub Actions and ArgoCD), and evaluation of platform extensions
- Experience developing software in a JVM-based language; Scala preferred, but Java, Kotlin, or Clojure also accepted.
- Experience with data-driven backend software development
- Experience with object-oriented & functional programming principles
- Deep understanding of distributed systems for data storage and processing (e.g. Kafka ecosystem, Flink, HDFS, S3)
- Experience with RDBMS (e.g. PostgreSQL)
- (Optional) prior experience with functional stream-processing libraries such as fs2, zio-streams, or Akka/Pekko Streams
- Excellent software engineering skills (i.e. data structures & algorithms, software design)
- Excellent problem-solving, investigative, and troubleshooting skills
- Experience with CI/CD tools such as Jenkins or GitHub Actions
- Comfortable with Linux and scripting languages for workflow automation
- Discuss requirements with stakeholders such as customers or up- and downstream development teams
- Derive design proposals, including meaningful data models
- Engage in design discussions with team members, architects & technical leadership
- Review code contributed by other team members
- Depending on experience, mentor junior team members
Soft Skills:
- Good communication skills
- Ability to coach and guide junior data engineers
- Business-level proficiency in English
Remote Work:
No
Employment Type:
Full-time