A cybersecurity software solution provider is looking for a passionate data engineer (m/f/x) to support its machine learning team by driving the implementation of data collection pipelines. Do you have strong web scraping skills, an interest in cybersecurity, and a passion for machine learning?
Then apply now!
Your Tasks in the Team
- You lead the design of a technical solution for automated web content scraping
- You write and maintain software pipelines for scalable data collection from hundreds of websites
- You implement acquisition pipelines (e.g. with the Scrapy library in Python), process and clean the data (e.g. using Spark), and store it accurately in our databases and data lake
- You deploy automated web acquisition pipelines using container technologies on cloud infrastructure
- You write robust scripts for analysis of production datasets
Our Expectations of You
- At least 2 years of experience as a Python or full-stack developer
- Practical experience with Big Data Platforms like Databricks/Spark and container technologies (e.g. Docker Swarm, Kubernetes)
- Experience with web scraping tools like Scrapy, Puppeteer, or Playwright
- Nice to have: experience with other tools such as Beautiful Soup, XPath, or Selenium
- Knowledge of and experience with DevOps practices (e.g. test automation, deployment automation) and CI/CD tools (e.g. GitLab)
- Passion and curiosity for familiarizing yourself with new subject areas, and the ability to work in a team
- Nice to have: knowledge of network security, AWS cloud technologies, monitoring tools, and JavaScript and/or a JS framework
- Fluent in English (spoken & written)
Your Benefits
- Flexible working hours and the option of hybrid work
- Friendly working atmosphere in an international and motivated team
- Modern office space centrally located in Vienna