Description
SEON is the leading fraud prevention system of record, catching fraud before it happens at any point across the customer journey. Trusted by over 5,000 global companies, we combine your company's data with our proprietary real-time signals to deliver actionable fraud insights tailored to your business outcomes. We deliver the fastest time to value in the market through a single API call, enabling quick and seamless onboarding and integration. By analyzing billions of transactions, we've prevented $200 billion in fraudulent activity, showcasing why the world's most innovative companies choose SEON.
As a Web Scraping Specialist at SEON, you will play a key role in building and evolving the systems that power SEON's real-time data collection engine. You'll develop and maintain scalable, resilient scraper infrastructure. Your work will directly support SEON's fraud prevention capabilities, enabling robust data gathering across hundreds of platforms. You'll collaborate closely with fellow engineers, product teams, and infrastructure specialists to deliver resilient, high-performance scraping solutions. This role reports into the Team Lead of the Data Enrichment Crawler Team.
This role offers flexibility and is based in our Budapest office, with a hybrid schedule of approximately three days per week in the office.
What You'll Do:
- Design, develop, and maintain scalable, multithreaded scraping and crawling services, primarily in Java, with contributions to Python-based systems as part of an ongoing architecture shift.
- Improve the throughput, resiliency, and performance of scraping infrastructure, including proxy management, connection handling, and fault tolerance.
- Contribute to the design of scraping components and APIs that power SEON's data ingestion pipeline.
- Refactor and modernize existing scrapers, identify bottlenecks, and reduce technical debt.
- Enhance the test coverage, observability, and maintainability of core components.
- Collaborate with Product Management and Engineering peers to scope, refine, and deliver features aligned with business goals.
- Support and review pull requests from peers, share knowledge across the team, and contribute to engineering best practices.
- Analyze team metrics (latency, reliability, system cost) and suggest actionable improvements.
- Contribute to technical planning and support others throughout the full SDLC, from design to deployment.
What You Bring:
- 3 years of web scraping experience with Java and/or Python.
- Solid understanding of networking, connection reuse, proxies, and scraping at scale.
- Familiarity with RESTful APIs and web backend frameworks.
- Hands-on experience with Docker and Kubernetes in production environments.
- Strong collaboration and communication skills; fluent in English.
- Willingness to work with and contribute to Java/Spring-based systems.
Amazing if You Also Have:
- Previous experience contributing to the design of scraper orchestration, error handling, and proxy rotation.
- Experience with scraping frameworks or tools such as Selenium, Puppeteer, Scrapy, or Playwright.
- Experience working with AWS services.
- Familiarity with monitoring and observability stacks (e.g., Kibana, Prometheus, Grafana).
- CI/CD exposure, especially with GitHub Actions.
- Knowledge of relevant technologies including:
  - ElasticSearch
  - InfluxDB
  - Redis
  - DynamoDB