About You
You like problems with a clear objective, messy real-world constraints, and lots of room for cleverness.
If you've done competitive programming or optimization competitions, you'll feel at home here: legal search is essentially an optimization game where you trade off quality (F2/NDCG), latency (p95), and cost under strict correctness constraints (citations, traceability, jurisdiction). You'll build scoring functions, retrieval pipelines, rerankers, and evaluation harnesses, and you'll ship improvements that users notice immediately.
You enjoy:
- Turning vague user intent into formal signals and algorithms
- Designing fast, low-latency systems under tight budgets
- Running ablations, debugging failure cases, and iterating quickly
- Owning the full loop: idea → benchmark → ship → measure
About Omnilex
Omnilex is a young, dynamic AI legal tech startup with roots at ETH Zurich. Our interdisciplinary team (14 people) empowers legal professionals by building AI systems for legal research and for answering complex legal questions across external sources, customer-internal documents, and our own AI-first legal commentaries.
What You'll Work On
As an Applied Algorithms Engineer - Information Retrieval, you'll build the retrieval, ranking, and reasoning backbone of our legal research experience.
Tasks
Responsibilities
Retrieval & ranking beyond the defaults
- Hybrid retrieval (sparse + dense), custom reranking, multi-stage pipelines (a minimal fusion sketch follows below)
- Domain-specific workflows (e.g. knowledge graphs, citation-aware expansions, jurisdiction filters)
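To give a flavor of the work (illustrative only, not our production code): here is a minimal sketch of reciprocal rank fusion, one common way to merge a sparse and a dense result list into a single ranking. The document IDs and the constant k are assumptions made up for this example.

```typescript
// Minimal reciprocal rank fusion (RRF): merge several ranked lists of doc IDs.
// Each list contributes 1 / (k + rank); k dampens the influence of top ranks.
type Ranked = { docId: string; score: number };

function reciprocalRankFusion(rankings: string[][], k = 60): Ranked[] {
  const fused = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((docId, rank) => {
      fused.set(docId, (fused.get(docId) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...fused.entries()]
    .map(([docId, score]) => ({ docId, score }))
    .sort((a, b) => b.score - a.score);
}

// Hypothetical hits from a sparse (BM25-style) and a dense (embedding) retriever.
const sparseHits = ["BGE-145-III-72", "OR-Art-97", "BGE-132-III-24"];
const denseHits = ["OR-Art-97", "BGE-145-III-72", "ZGB-Art-2"];
console.log(reciprocalRankFusion([sparseHits, denseHits]));
```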
Scoring & features (where algorithms meet relevance)
- Build ranking signals from citations, authority, recency, jurisdiction, document structure, and paragraph/section anchors
- Combine signals into robust scoring functions and reranking strategies (see the sketch below)
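As a simplified illustration of signal combination, here is a toy linear scorer; the signal names, normalizations, and weights are placeholders, not our actual ranking model. In practice the weights come from tuning against an offline eval set rather than being hand-picked.

```typescript
// Toy linear combination of relevance signals into a single ranking score.
// All signals are assumed to be pre-normalized to [0, 1].
type Signals = {
  textRelevance: number;      // e.g. normalized BM25 or cross-encoder score
  citationAuthority: number;  // how often / how highly the document is cited
  recency: number;            // decayed document age
  jurisdictionMatch: number;  // 1 if the document matches the query's jurisdiction
};

const WEIGHTS: Record<keyof Signals, number> = {
  textRelevance: 0.55,
  citationAuthority: 0.2,
  recency: 0.15,
  jurisdictionMatch: 0.1,
};

function score(s: Signals): number {
  return (Object.keys(WEIGHTS) as (keyof Signals)[]).reduce(
    (sum, key) => sum + WEIGHTS[key] * s[key],
    0,
  );
}

console.log(
  score({ textRelevance: 0.8, citationAuthority: 0.6, recency: 0.3, jurisdictionMatch: 1 }),
);
```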
Query understanding & intent routing
- Classify query intent, detect constraints (Swiss law, latest doctrine vs. case law), and rewrite/expand queries
- Route to the right retrieval strategy with minimal overhead (illustrative router below)
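A deliberately crude illustration of constraint detection and routing; the patterns and strategy names are invented for this sketch, and a real router would combine rules with a learned classifier.

```typescript
// Toy intent router: detect simple constraints in the query text and pick a
// retrieval strategy. Patterns and strategy names are illustrative only.
type Strategy = "caseLaw" | "doctrine" | "default";

function routeQuery(query: string): { strategy: Strategy; jurisdiction?: string } {
  const q = query.toLowerCase();
  const jurisdiction = /\b(swiss|schweiz|switzerland)\b/.test(q) ? "CH" : undefined;
  if (/\b(bge|bger|case law|urteil|judgment)\b/.test(q)) {
    return { strategy: "caseLaw", jurisdiction };
  }
  if (/\b(doctrine|commentary|kommentar|lehre)\b/.test(q)) {
    return { strategy: "doctrine", jurisdiction };
  }
  return { strategy: "default", jurisdiction };
}

console.log(routeQuery("latest Swiss case law on contractual liability"));
// -> { strategy: "caseLaw", jurisdiction: "CH" }
```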
Evaluation that actually guides shipping
- Build offline eval sets, define metrics, and run quick ablations (a minimal metric example follows below)
- Use production feedback and dashboards to close the loop (what improved, what broke)
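One concrete example of the kind of offline metric this involves: a minimal NDCG@k over a hypothetical labeled set (the labels and document IDs are made up for illustration).

```typescript
// Minimal NDCG@k: compare a system ranking against graded relevance labels.
function dcg(gains: number[]): number {
  return gains.reduce((sum, g, i) => sum + (2 ** g - 1) / Math.log2(i + 2), 0);
}

function ndcgAtK(ranking: string[], labels: Record<string, number>, k: number): number {
  const gains = ranking.slice(0, k).map((id) => labels[id] ?? 0);
  const ideal = Object.values(labels).sort((a, b) => b - a).slice(0, k);
  const idealDcg = dcg(ideal);
  return idealDcg === 0 ? 0 : dcg(gains) / idealDcg;
}

// Hypothetical eval entry: graded labels (2 = highly relevant, 1 = partial, 0 = not).
const labels = { "OR-Art-97": 2, "BGE-145-III-72": 2, "ZGB-Art-2": 1 };
console.log(ndcgAtK(["OR-Art-97", "ZGB-Art-2", "BGE-132-III-24"], labels, 3));
```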
Search infrastructure & performance engineering
- Tune indices/analyzers/embeddings, manage recall vs. precision, and deduplicate near-duplicates
- Engineer for p95 latency: caching, batching, early-exit strategies, fallbacks (sketched below)
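A toy sketch of two of these levers: a small TTL cache plus an early-exit guard that skips an expensive reranking stage when the first-stage ranking is already decisive. The threshold and types are illustrative assumptions, not our production values.

```typescript
// Tiny TTL cache plus an early-exit guard in front of an expensive reranker.
class TtlCache<V> {
  private store = new Map<string, { value: V; expires: number }>();
  constructor(private ttlMs: number) {}
  get(key: string): V | undefined {
    const hit = this.store.get(key);
    if (!hit || hit.expires < Date.now()) return undefined;
    return hit.value;
  }
  set(key: string, value: V): void {
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}

type Hit = { docId: string; score: number };

function shouldRerank(firstStage: Hit[], margin = 0.25): boolean {
  // Early exit: if the top hit clearly dominates, reranking rarely changes it.
  if (firstStage.length < 2) return false;
  return firstStage[0].score - firstStage[1].score < margin;
}

const cache = new TtlCache<Hit[]>(60_000);
const results: Hit[] = [
  { docId: "OR-Art-97", score: 0.91 },
  { docId: "BGE-145-III-72", score: 0.52 },
];
cache.set("swiss contractual liability", results);
console.log(shouldRerank(results)); // false: the gap is large, skip the reranker
```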
LLM-powered product systems
- Design and ship production-grade LLM workflows (RAG, tool use, citation-grounded answers)
- Keep outputs traceable, verifiable, and safe for legal professionals (see the traceability sketch below)
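One simplified way to think about traceability: every claim in an answer should cite a passage that was actually retrieved. The [¶n] citation markers below are an assumed format for illustration, not our actual markup.

```typescript
// Flag answer sentences that cite nothing, or cite passages not in the context.
type Passage = { id: string; text: string };

function uncitedSentences(answer: string, context: Passage[]): string[] {
  const known = new Set(context.map((p) => p.id));
  return answer
    .split(/(?<=[.!?])\s+/)
    .filter((sentence) => {
      const refs = [...sentence.matchAll(/\[(¶\d+)\]/g)].map((m) => m[1]);
      return refs.length === 0 || refs.some((r) => !known.has(r));
    });
}

const context: Passage[] = [{ id: "¶12", text: "Art. 97 OR: liability for non-performance." }];
const answer = "The debtor is liable for damages [¶12]. This always requires fault [¶99].";
console.log(uncitedSentences(answer, context)); // flags the second sentence
```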
Collaboration with domain experts
- Work closely with legal experts to translate pain points into ranking logic
- Document decisions and build playbooks others can extend
Requirements
Minimum qualifications
- Strong hands-on experience improving search / retrieval systems in production (hybrid retrieval, reranking, query understanding).
- Proven experience building and deploying LLM-based products from prototype to production.
- Strong algorithms background (data structures, complexity, graphs, probability/statistics) and practical SQL.
- Proficiency in TypeScript/ (our core stack).
- Experience with one or more of: Azure AI Search, pgvector/PostgreSQL, OpenSearch/Elasticsearch, or similar.
- Familiarity with embedding models and cross-encoders, and the ability to reason about latency/throughput/quality trade-offs.
- Ownership mindset, clear communication, and a bias for action.
- Proficiency in English.
- Full-time availability. Zurich-based, with on-site presence at least 2 days/week (hybrid).
Preferred qualifications (nice-to-have)
- Swiss work permit or EU/EFTA citizenship.
- Working proficiency in German.
- Experience with evaluation pipelines (human labeling, inter-annotator agreement, error analysis, AI-as-judge used pragmatically).
- Knowledge of sparse/dense IR methods (BM25 variants, SPLADE, e5/BGE, ColBERT-style) and semantic reranking.
- Experience operating services (Docker; basic Kubernetes/serverless is a plus).
- Familiarity with Azure / NestJS / .
- Exposure to legal systems (especially Switzerland, Germany, the USA).
Competitive programming folks: what maps directly
You'll constantly do contest-style thinking:
define objective → pick strategy → optimize bottlenecks → prove it with measurements
The difference is that the test cases are real users, and the constraints include cost, latency, trust, and citations.
Benefits
- Direct impact: your ranking and retrieval changes immediately improve user trust and result quality.
- Autonomy & ownership: shape the core search pipeline end-to-end (intent → retrieval → reranking → grounded answers).
- Team: sharp, interdisciplinary people at the intersection of AI, search, and law.
- Compensation: CHF /month + ESOP, depending on experience and skills.
If you want to apply your algorithmic instincts to something that matters and ship improvements that lawyers feel the same day, press Apply.