Salary Not Disclosed
1 Vacancy
You want to operate at the cutting edge of AI, pushing the limits of what's possible in AI security. At Lakera, we are not just another research lab: we are building the foundations of a research team that has immediate impact at scale. As a foundational member, you will shape our approach, influence key decisions, and work on research that directly secures companies and agentic systems at scale. We're looking for people driven to make a real impact and move the needle.
We are building the world's best foundation models for offensive and defensive security to secure AI applications at scale. As a Research Scientist, you will lead work on everything from designing novel LLM post-training approaches for security to scaling training pipelines. You will define the future of AI security, and your research will not sit on a shelf; it will protect organizations globally.
About Lakera
Lakera is on a mission to ensure AI does what we want it to do. We are heading towards a future where AI agents run our businesses and personal lives. Here at Lakera, we're not just dreaming about the future; we're building the security foundation for it. We empower security teams and builders so that their businesses can adopt AI technologies and unleash the next phase of intelligent computing.
We work with Fortune 500 companies, startups, and foundation model providers to protect them and their users from adversarial misalignment. We are also the company behind Gandalf, the world's most popular AI security game.
Lakera has offices in San Francisco and Zurich.
We move fast and work with intensity. We act as one team but expect everyone to take substantial ownership and accountability. We prioritize transparency at every level and are committed to always raising the bar in everything we do. We promote diversity of thought as we believe that creates the best outcomes.
Example projects
Post-train offensive security foundation models capable of generating impactful attacks that exploit vulnerabilities in LLM agents with minimal context.
Develop leading benchmarks that give us confidence in scaling our efforts.
Build reinforcement-learning-based strategies and other post-training methods to maximize the effectiveness and adaptability of AI-driven attacks.
Optimize inference and training pipelines to scale offensive and defensive models effectively.
About you
You are creative, bold, and ready to challenge assumptions. You are excited to tackle real-world AI security problems and see your research come to life. You enjoy working in a tight-knit team that rapidly moves between ideation and implementation. You want to work in a fast-moving team where you have ownership, impact, and direct influence on the secure deployment of agentic systems at scale.
We are looking for at least one of the following:
A PhD in ML or a related field with a track record of tackling complex real-world problems. Work on LLMs is a plus but not a must.
Experience post-training LLMs to improve performance on specific tasks (e.g., coding).
Evidence of driving an impactful ML project, whether in academia, industry, or independent research.
Experience working on complex, scalable ML engineering projects that involve making training and/or inference fast and scalable.
A strong publication record at top ML venues, or other indicators of research excellence.
If you're ready to work at the frontier of AI security and truly make an impact, let's talk.
Full-Time