Since starting in July 2022, FAR.AI has grown to 19 FTE, produced 28 academic papers, and established the leading AI safety events for research and international cooperation. Our work is recognized globally, with publications at leading venues such as NeurIPS, ICML, and ICLR that have been featured in the Financial Times, Nature News, and MIT Tech Review. We leverage our research insights to drive practical change through red-teaming with frontier model developers. Additionally, we help steer and grow the AI safety field by developing research roadmaps with renowned researchers such as Yoshua Bengio; running an AI safety-focused coworking space, FAR.Labs, with 40 members; and making targeted grants to technical researchers.
Our research team likes to move fast. We explore promising research directions in AI safety and scale up only those showing high potential for impact. Unlike other AI safety labs that take a bet on a single research direction, FAR.AI aims to pursue a diverse portfolio of projects.
Our current focus areas include:
Building a science of robustness (e.g., finding vulnerabilities in superhuman Go AIs);
Finding more effective approaches to value alignment (e.g., training from language feedback);
Advancing model evaluation techniques (e.g., inverse scaling, codebook features, and learned planning).
We also put our research into practice through red-teaming engagements with frontier AI developers and collaborations with government institutes.
To build a flourishing field of AI safety research, we host targeted workshops and events and operate a coworking space in Berkeley called FAR.Labs. Our previous events include the International Dialogue for AI Safety, which brought together prominent scientists (including two Turing Award winners) from around the globe, culminating in a public statement calling for global action on AI safety research and governance. We also host the semiannual Alignment Workshop, where 150 researchers from academia, industry, and government learn about the latest developments in AI safety and find collaborators. For more information on FAR.AI's activities, please visit our recent post.
You will collaborate closely with research advisers and research scientists inside and outside of FAR.AI. As a research engineer, you will develop scalable implementations of machine learning algorithms and use them to run scientific experiments. You will be involved in the write-up of results and credited as an author in submissions to peer-reviewed venues (e.g., NeurIPS, ICLR, JMLR).
While each of our projects is unique, your role will generally offer:
Flexibility. You will focus on research engineering but contribute to all aspects of the research project. We expect everyone on the project to help shape the research direction, analyze experimental results, and participate in the write-up of results.
Variety. You will work on a project that uses a range of technical approaches to solve a problem. You will also have the opportunity to contribute to different research agendas and projects over time.
Collaboration. You will regularly work with our collaborators from different academic labs and research institutions.
Mentorship. You will develop your research taste through regular project meetings and develop your programming style through code reviews.
Autonomy. You will be highly self-directed. To succeed in the role, you will likely need to spend part of your time studying machine learning and developing your high-level views on AI safety research.
This role would be a good fit for someone looking to gain hands-on experience with machine learning engineering while testing their personal fit for AI safety research. We imagine interested applicants might be looking to grow an existing portfolio of machine learning research, or to transition to AI safety research from a software engineering background.
It is essential that you:
Have significant software engineering experience or experience applying machine learning methods. Evidence of this may include prior work experience, open-source contributions, or academic publications.
Have experience with at least one object-oriented programming language (preferably Python).
Are results-oriented and motivated by impactful research.
It is preferable that you have experience with some of the following:
Common ML frameworks like PyTorch or TensorFlow.
Natural language processing or reinforcement learning.
Operating system internals and distributed systems.
Publications or open-source software contributions.
Basic linear algebra, vector calculus, probability, and statistics.
As a Research Engineer, you would lead collaborations and contribute to many projects; examples include:
Scaling laws for prompt injections. Will advances in capabilities from increasing model and data scale help resolve prompt injections or jailbreaks in language models, or is progress in average-case performance orthogonal to worst-case robustness?
Robustness of advanced AI systems. Explore adversarial training, architectural improvements, and other changes to deep learning systems to improve their robustness. We are exploring this in both zero-sum board games and language models.
Mechanistic interpretability for mesa-optimization. Develop techniques to identify internal planning in models, so that we can audit the goals of models in addition to their external behavior.
Red-teaming of frontier models. Apply our research insights to test for vulnerabilities and limitations of frontier AI models prior to deployment.
You will be an employee of FAR.AI, a 501(c)(3) research nonprofit.
Location: Both remote and in-person (Berkeley, CA) are possible. We sponsor visas for in-person employees and can also hire remotely in most countries.
Hours: Full-time (40 hours/week).
Application process: A 72-minute programming assessment, a short screening call, two 1-hour interviews, and a 1-2 week paid work trial. If you are not available for a work trial, we may be able to find alternative ways of testing your fit.
Please apply! If you have any questions about the role, please do get in touch at .