Legal Data Acquisition Engineer (Scraping & Extraction)

Omnilex

Job Location:

Zürich - Switzerland

Monthly Salary: CHF 8000 - 12000
Posted on: 15 hours ago
Vacancies: 1 Vacancy

Job Summary

The Role (Behind the Scenes, Mission-Critical)

We're looking for an engineer who loves the messy reality of web data: dynamic pages, broken markup, inconsistent PDFs, changing source structures, missing metadata, rate limits, anti-bot protections, and jurisdiction-specific publishing habits.

This is a behind-the-scenes role with huge product impact. You'll build and maintain the systems that continuously collect and extract legal content from websites, APIs, bulk files, and document repositories, and turn it into reliable inputs for our AI products.

If you enjoy scraping, parsing, reverse-engineering content structures, and designing robust ingestion pipelines that survive real-world change, this role is for you.

About Omnilex

Omnilex is a young, dynamic AI legal tech startup with roots at ETH Zurich. Our interdisciplinary team is building AI-native tools for legal research and for answering complex legal questions across jurisdictions.

A core reason we stand out is our data foundation: combining external legal sources, customer-internal sources, and our own AI-first legal content. This role strengthens that foundation.

Tasks

What You'll Work On

Your focus will be source acquisition, scraping, parsing, and extraction reliability for legal data.

Core responsibilities

Build and maintain resilient pipelines to ingest legal content from:

  • public websites
  • APIs
  • document portals
  • bulk datasets
  • PDFs / HTML / XML / DOCX-like formats
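
To give a feel for what "resilient pipelines" means here, a minimal sketch of one common pattern: each source, whether website, API, portal, or bulk dataset, implements a small adapter interface so the pipeline core stays source-agnostic. All names and types below are illustrative assumptions, not our actual codebase.

```typescript
// Illustrative sketch only -- names and types are invented for this posting.

/** A raw document as fetched, before any parsing. */
interface RawDocument {
  sourceId: string;                             // which adapter produced it
  ref: string;                                  // URL or source-internal ID
  fetchedAt: Date;
  contentType: "pdf" | "html" | "xml" | "docx";
  body: Uint8Array;                             // raw bytes; parsing happens downstream
}

/** Every source -- website, API, portal, bulk dataset -- implements this. */
interface SourceAdapter {
  readonly sourceId: string;
  /** Enumerate documents that are new or changed since the last run. */
  listUpdates(since: Date): AsyncIterable<string>;
  /** Fetch a single document by URL or source-internal ID. */
  fetch(ref: string): Promise<RawDocument>;
}

/** The ingestion loop never needs to know what kind of source it is. */
async function ingest(adapter: SourceAdapter, since: Date): Promise<RawDocument[]> {
  const docs: RawDocument[] = [];
  for await (const ref of adapter.listUpdates(since)) {
    docs.push(await adapter.fetch(ref));
  }
  return docs;
}
```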

Design scraping systems that are robust to:

  • layout changes
  • pagination quirks
  • JavaScript-rendered sites
  • inconsistent metadata
  • rate limits and retry behavior
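
For the "rate limits and retry behavior" item, a hedged sketch of a backoff helper; the function name, attempt count, and delays are invented, and it assumes the global fetch available in Node 18+.

```typescript
// Hypothetical helper, not from the actual stack: exponential backoff so
// 429s and transient 5xx responses slow the crawl down instead of killing it.

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function fetchWithRetry(url: string, maxAttempts = 5): Promise<Response> {
  let delayMs = 1_000;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const res = await fetch(url);
    if (res.status !== 429 && res.status < 500) {
      return res; // success, or a non-retryable client error
    }
    // Respect Retry-After when the server sends one; otherwise back off.
    const retryAfter = Number(res.headers.get("retry-after"));
    await sleep(retryAfter > 0 ? retryAfter * 1_000 : delayMs);
    delayMs *= 2; // exponential backoff between attempts
  }
  throw new Error(`giving up on ${url} after ${maxAttempts} attempts`);
}
```

For JavaScript-rendered sites, the same retry policy can wrap a headless-browser navigation (e.g. Playwright's page.goto) instead of a plain fetch.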

  • Implement parsers and extractors for legal documents (statutes, decisions, guidance, commentaries, etc.)

Extract and structure:

  • document text
  • headings/sections
  • citations and references
  • dates, courts, authorities, identifiers
  • language / jurisdiction metadata

  • Build source-specific adapters and reusable extraction components (rather than one-off scripts)
  • Monitor source health and detect breakage quickly (e.g. selector failures, coverage drops, schema drift)
  • Improve data quality with validation checks, deduplication, canonicalization, and content versioning (see the record and content-hash sketch after this list)
  • Work closely with AI/data/search teammates so extracted data is optimized for downstream indexing, RAG, and analytics
  • Document source behavior and operational playbooks so ingestion remains maintainable as we scale
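
To make "extract and structure" and the deduplication item concrete, here is a minimal sketch of a typed extraction record plus a canonicalized content hash. The field names and the canonicalization rule are assumptions for illustration, not the real schema.

```typescript
// Sketch only: invented field names, not the production schema.
import { createHash } from "node:crypto";

interface ExtractedLegalDoc {
  id: string;                                 // stable source identifier
  jurisdiction: string;                       // e.g. "CH", "DE", "EU"
  language: string;                           // e.g. "de", "fr", "it", "en"
  court?: string;                             // deciding court or authority, if any
  decidedAt?: Date;
  title: string;
  sections: { heading: string; text: string }[];
  citations: string[];                        // references to other documents
}

/**
 * Canonicalize before hashing so cosmetic changes (whitespace, re-renders)
 * do not register as new document versions -- cheap dedup and versioning.
 */
function contentHash(doc: ExtractedLegalDoc): string {
  const canonical = doc.sections
    .map((s) => `${s.heading}\n${s.text}`)
    .join("\n")
    .replace(/\s+/g, " ")
    .trim();
  return createHash("sha256").update(canonical).digest("hex");
}
```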

What Success Looks Like

In this role, success is not the number of scrapers written. Success looks like:

  • high source coverage across target jurisdictions
  • fast detection and repair when sources change (see the health-check sketch after this list)
  • clean, structured extractions with fewer downstream fixes
  • stable ingestion SLAs and predictable runtimes/costs
  • reusable tooling that makes adding new sources progressively faster
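
Fast detection usually rests on simple per-run health checks compared against a baseline. A hedged sketch, with invented names and thresholds:

```typescript
// Invented thresholds and names -- a sketch of the shape such checks take.

interface RunStats {
  sourceId: string;
  documentsSeen: number;   // items the source listed this run
  extractionsOk: number;   // items that parsed into a valid record
  missingMetadata: number; // records lacking required fields
}

function healthAlerts(current: RunStats, baseline: RunStats): string[] {
  const alerts: string[] = [];
  // Coverage drop: the source suddenly lists far fewer documents.
  if (current.documentsSeen < baseline.documentsSeen * 0.5) {
    alerts.push(`${current.sourceId}: document count dropped >50% vs. baseline`);
  }
  // Selector failure: fetches succeed but parsing yields little.
  if (current.extractionsOk < current.documentsSeen * 0.9) {
    alerts.push(`${current.sourceId}: extraction success rate below 90%`);
  }
  // Schema drift: required metadata starts coming back empty.
  if (current.missingMetadata > Math.max(10, baseline.missingMetadata * 2)) {
    alerts.push(`${current.sourceId}: spike in records with missing metadata`);
  }
  return alerts;
}
```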

Requirements

Minimum Qualifications

  • Degree in Computer Science, Data Science, Software Engineering, or a related field, or equivalent practical experience
  • Strong hands-on engineering experience with TypeScript (backend/data pipeline context)
  • Real experience building web scraping / crawling / extraction pipelines in production
  • Strong understanding of HTML/DOM parsing, HTTP, pagination, sessions/cookies, and common web data edge cases
  • Experience working with messy document formats (especially PDFs) and text extraction challenges
  • Good SQL skills (PostgreSQL) and experience storing structured/unstructured content
  • Strong debugging skills and a pragmatic mindset: you can make unreliable sources reliable
  • Ability to work with ownership in a fast-moving startup
  • Available full-time; on-site in Zurich at least two days per week (hybrid)

Preferred Qualifications

  • Familiarity with modern scraping and browser automation tools (e.g. Playwright, Puppeteer)
  • Experience with PDF/document tooling, OCR pipelines, and parsing libraries
  • Experience designing queue-based or worker-based ingestion systems
  • Experience with Azure (including storage/search services), Docker, and CI/CD
  • Working proficiency in German and proficiency in English
  • Swiss work permit or EU/EFTA citizenship
  • Experience with legal or regulatory document structures (Switzerland / Germany / EU / US)
  • Familiarity with downstream AI/search use cases (chunking, embeddings, indexing, citation traceability)

Nice-to-Have Strengths (But Not Required)

  • You enjoy source forensics: inspecting network calls, hidden endpoints, export formats, and content variants
  • You think in terms of reusable extraction architecture, not just one-off fixes
  • You care about observability and operational quality, not just "it ran once on my machine"
  • You like collaborating with product/AI teams to understand what metadata actually matters downstream

Benefits

  • High-leverage impact: your work directly improves coverage, freshness, and trust in legal AI answers
  • Ownership: own the ingestion/scraping layer end-to-end for key legal sources
  • Real engineering challenges: dynamic websites, parsing complexity, document extraction, and reliability at scale
  • Interdisciplinary team: work closely with engineers, legal experts, and AI specialists
  • Compensation: CHF 8000 - 12000 per month plus ESOP (employee stock options), depending on experience and skills

If you're excited about building the invisible infrastructure that powers great legal AI products, we'd love to hear from you. Apply via the Apply button.

