Salary Not Disclosed
1 Vacancy
Our Mission
Aleph Alpha Research's mission is to deliver AI innovation that enables open, accessible, and trustworthy deployment of GenAI in enterprise applications. Our organization develops foundational models and next-generation methods that make it easy and affordable for Aleph Alpha's customers to increase productivity in finance, administration, R&D, logistics, and manufacturing processes. We do this with a flat hierarchy and an IC-driven culture: ideas come from the bottom up, and it's our shared responsibility to deliver impactful research.
We're looking for skilled Software Engineers to join our research team headquartered in Heidelberg, with a focus on evaluating the capabilities, safety, and trustworthiness of our models. While we highly value in-person work, we offer flexibility to work from Berlin or elsewhere in Germany, with regular travel to onsite events.
As an AI Software Engineer in Model Evaluation, you will help design, implement, and scale the systems that measure our models' performance at the cutting edge. You will work closely with researchers to create evaluation benchmarks, datasets, and environments that test model capabilities, safety, and reliability across tasks ranging from multilingual understanding to mathematical reasoning and creativity.
You will own significant portions of our evaluation infrastructure, including dataset generation pipelines, automated benchmarking tools, analysis dashboards, and large-scale evaluation orchestration on our compute clusters. You'll be building tools and experiments that drive product decisions, shape research priorities, and guide responsible deployment of our models.
This is high-scale, high-impact engineering: you'll work with petabyte-scale data, run evaluations across large distributed GPU clusters, and deliver insights that inform the direction of Aleph Alpha's research.
Our current open-source eval-framework can be found here.
You can expect to contribute to the following areas:
Design and develop scalable evaluation tooling to accelerate research and measure model progress.
Collaborate with researchers to design evaluation tasks and benchmarks targeting advanced model capabilities.
Deep-dive on evaluation performance to ensure our tools run efficiently at scale.
Build pipelines for generating, curating, and maintaining high-quality evaluation datasets.
Implement automated analysis systems to interpret results and highlight strengths, weaknesses, and regressions.
Collaborate with Product teams to design evaluations aligned with real-world application needs.
Contribute to papers and reports documenting our evaluation methodologies and results for internal and external audiences.
Mentor engineers and researchers on evaluation best practices, software engineering, and tooling.
Co-own efforts to make evaluation datasets, tools, and results available to the broader research community, including through Apache 2.0 open-source releases.
We hire slowly and deliberately. We recognise that we need top talent to deliver top research, and we value ability over experience: if you think you would be a good fit for this role, we'd encourage you to apply even if you do not meet all of the following qualifications.
Basic Qualifications
Bachelor's degree in computer science, engineering, or a related field.
Willingness to work in Germany. Our primary working locations are Heidelberg (preferred) and Berlin, although there is some flexibility to work from other locations in Germany, with regular travel to Heidelberg expected, potentially weekly.
Proficiency in programming and a passion for crafting high-quality, maintainable software while following engineering best practices (e.g. TDD, DDD).
Curiosity to dig deep into how models work and how to measure their capabilities.
Desire to take ownership of problems and collaborate with other teams to solve them.
Motivation to learn AI-related topics and get up-to-speed with the cutting edge.
Strong communication skills with the ability to convey technical solutions to diverse audiences.
Preferred Qualifications
Master's (or PhD) degree in computer science or a related field.
Familiarity with evaluation and benchmarking frameworks for AI models.
Experience working with distributed systems for large-scale data processing or evaluation orchestration.
Experience in dataset creation, annotation, and curation for complex AI tasks.
Familiarity with LLM architectures, popular NLP tools (e.g. PyTorch, HF Transformers), and automated evaluation techniques (e.g. LLM-as-a-judge, multi-turn evaluation).
Experience designing evaluations for safety, trustworthiness, and bias in AI systems.
Strong skills in data visualization, dashboarding, and reporting for evaluation results.
Familiarity with cluster management systems model/data lineage and MLOps workflows.
We do not require prior experience in AI for this role, but we value eagerness to learn. If you have prior experience in AI, we will be particularly excited about your ability to translate evaluation insights into actionable improvements for models and systems.
We believe embodying these values would make you a great fit in our team:
We own work end-to-end, from idea to production: You take responsibility for every stage of the process, ensuring that our work is complete, scalable, and of the highest quality.
We ship what matters: Your focus is on solving real problems for our customers and the research community. You prioritize delivering impactful solutions that bring value and make a difference.
We work transparently: You collaborate and share your results openly with the team, partners, customers, and the broader community, publishing and sharing results and insights including blog posts, papers, checkpoints, and more.
We innovate by leveraging our intrinsic motivations and talents: You strive for technical depth, balance the ideas and interests of the team with our mission-backwards approach, and leverage the interdisciplinary, diverse perspectives in our teamwork.
Become part of an AI revolution!
30 days of paid vacation
Access to a variety of fitness & wellness offerings via Wellhub
Substantially subsidized company pension plan for your future security
Subsidized Germany-wide transportation ticket
Budget for additional technical equipment
Flexible working hours for better work-life balance and hybrid working model
Virtual Stock Option Plan
Full-Time