Employer Active
Job Alert
You will be updated with latest job alerts via emailJob Alert
You will be updated with latest job alerts via emailNot Disclosed
Salary Not Disclosed
1 Vacancy
As AI systems evolve from tools into autonomous agents they expose entirely new risk surfacesemergent behaviors agentic autonomy and previously unimaginable vulnerabilities. Take the recent exploit in Cursor an AInative IDE where a thirdparty attacker was able to manipulate the LLM via the rules fileresulting in fully compromised code. Or the LLMbased attack that exfiltrated private messages from Slack channels. These are just the beginning. The landscape is full of vulnerabilities waiting to be discovered and this role puts you at the forefront of finding themshaping both Lakeras product direction and broader industry thinking.
At Lakera were building security foundation models to craft highly effective adversarial attacks against LLMs. But its not just about generating attacks in theorythey must be delivered in realworld systems to create realworld impact. This role bridges that gap between cuttingedge AI capabilities and practical exploitation.
Were looking for a Security Researcher with deep offensive expertisesomeone who understands how systems break and is excited to apply that knowledge to the rapidly emerging domain of AInative threats. Your work will directly shape how some of the largest and most advanced AI deployments in the world are tested hardened and trusted at scale.
About Lakera
Lakera is on a mission to ensure AI does what we want it to do. We are heading towards a future where AI agents run our businesses and personal lives. Here at Lakera were not just dreaming about the future; were building the security foundation for it. We empower security teams and builders so that their businesses can adopt AI technologies and unleash the next phase of intelligent computing.
We work with Fortune 500 companies startups and foundation model providers to protect them and their users from adversarial misalignment. We are also the company behind Gandalf the worlds most popular AI security game.
Lakera has offices in San Francisco and Zurich.
We move fast and work with intensity. We act as one team but expect everyone to take substantial ownership and accountability. We prioritize transparency at every level and are committed to always raising the bar in everything we do. We promote diversity of thought as we believe that creates the best outcomes.
Example Projects
Leverage Lakeras internal security foundation models to discover and weaponize vulnerabilities in agentic systems.
Collaborate with AI researchers to uncover novel classes of LLM vulnerabilitiesadvanced jailbreaks multistep prompt injections finetuning exploitsand use them to compromise realworld applications.
Design and lead redteaming operations against internal and customer AI stacks simulating adversaries to probe how LLMs fail in productionfrom misalignment to full system compromise.
Develop AInative security benchmarks that reflect real exploitability not just academic risk and contribute to the first generation of practical evaluation standards for LLM defenses.
Work closely with ML engineers to scale offensive techniques into automated testing pipelines and embed detection capabilities into Lakeras core products.
Shape the narrative of AI securityby publishing research contributing to tooling and helping define how traditional security paradigms must evolve in the face of intelligent systems.
About You
You are creative bold and ready to challenge assumptions. You are excited to tackle realworld AI security problems and see your research come to life. You enjoy working in a tight knit team that rapidly moves between ideation and implementation. You want to work in a fast moving team where you have ownership impact and direct influence on the secure deployment of agentic systems at scale.
Were looking for someone with handson experience in offensive security particularly in red teaming penetration testing or vulnerability research. Youve found real issues in real systems and you know how to think like an adversary.
In addition any of the following would be valuable:
Strong engineering skills and the ability to build your own tools and infrastructure.
Familiarity with how modern machine learning systems workor the ability to learn fast.
Experience with or interest in the security implications of LLMs and autonomous agents.
A track record of impactful security research tooling or public contributions.
If youre ready to work at the frontier of AI security and truly make an impact lets talk.
Full-Time