Employer Active
Job Alert
You will be updated with latest job alerts via emailJob Alert
You will be updated with latest job alerts via emailThe Machine Intelligence Research Institute (MIRI) is a nonprofit based in Berkeley California focused on reducing existential risks from the transition to smarterthanhuman AI. Weve historically been very focused on technical alignment research. Since summer 2023 we have shifted our focus towards increasing the chances of good AI regulation happening. See our strategy update post for more details.
Deadline extended to March 25th.
We are looking to build a dynamic and versatile team that can quickly produce a large range of research outputs for the technical governance space. Please feel free to fill out this form or contact us at
We focus on researching and designing technical aspects of regulations and policy that could lead to safe AI. The team works on:
Inputs into regulations requests for comments by policy bodies (e.g. NIST/US AISI EU UN)
Technical research to improve international coordination
Limitations of current AI safety proposals and policies
Communicating with and consulting for policymakers and governance organizations
Our previous publications are available on our website if you would like to read them. We have a draft of a research agenda that will inform future projects it is available upon request.
We are primarily hiring for researchers but also interested in hiring a manager for the team.
In this role you would have the chance to work on all of the above areas. The work will be a mixture of researching writing (for internal and external use) and engaging with collaborators and policymakers. Most of the daytoday work is a combination of reading writing and meetings. Some example activities could include:
Threat modeling working out how AI systems could cause largescale harm and hopefully what actions could be taken to prevent this
Responding to a US government agencys Request for Comment
Learning about risk management practices in other industries and applying these to AI
Designing and implementing evaluations of AI models for example to demonstrate failure modes with current policy
Preparing and presenting informative briefings to policymakers such as explaining the basics and current state of AI evaluations
Reading a government or AI developers AI policy document and writing a report on its limitations
Designing new AI policies and standards which address the limitations of current approaches
In the above work maintain particular focus on what is needed for solutions to scale to smarterthanhuman intelligence and conduct research on which new challenges may emerge at that stage
For a manager responsibilities could include the following but we are open to candidates who want to focus on a subset of these responsibilities.
External stakeholder management e.g. build and maintain relationships with policy makers and AI company employees (the target audience for much of our work)
Internal stakeholder management e.g. interface with the rest of MIRI and ensure our work is consistent with broader MIRI goals prepublication review of the teams outputs
Project management e.g. track existing projects motivate good work toward deadlines
People management e.g. run future hiring rounds fellowships
Bonus: Research contributions e.g. contributing to object level work
There are no formal degree requirements to work on the team however we are especially excited about applicants who have a strong background in AI Safety and have particular previous experience or familiarity working in (or as) one or more of:
Compute governance. Technical knowledge of AI hardware / chips manufacturing and related governance proposals.
Policy (including AI policy). Experience here could involve writing legislation or white papers engaging with policy makers or other research in AI policy and governance
Strong AI Safety generalist. For example you have produced good AI safety research and have a good overviewlevel understanding of empirical theory and conceptual approaches or otherwise have a demonstrated ability to think clearly and carefully about AI safety.
Bonus: Research or engineering focused on frontier AI models or the AI tech stack. The role may involve creating or running model evaluations benchmarking AI hardware conducting scaling law experiments and other empirical work.
We are also excited about candidates who are particularly strong in the following areas:
Agency You get things done without someone constantly looking over your shoulder. You notice problems and are motivated to fix them. You focus on solving the problem not waiting to be told what to do. You know when to defer to anothers decision and when to ask for guidance. You are an active member of the team not a mindless cog in the machine.
Conscientiousness You are diligent and hardworking and complete your work reliably and dependably. You desire to do tasks well and effectively. You pay attention to details and are organized and able to manage lots of small tasks and projects.
Comfort learning on the job You enjoy and are able to quickly acquire new skills and knowledge as needed. You feel comfortable working on underspecified tasks where part of your job is to further develop the research questions appropriately.
Generative thinking You enjoy coming up with and iterating on new ideas. You can generate original work as well as extend others thoughts. You arent afraid to suggest things or point out flaws in your or others thoughts.
Communication (Internal) You are a team player who is excited to work together with others and willing to attend several weekly meetings. You proactively keep teammates/manager in the loop about the status of projects you manage when things are falling behind when you need more information. You voice your confusions.
Communication (External) You are able to communicate effectively to external stakeholders who have a range of technical expertise including policymakers. You can produce concise clear and compelling writing and deliver presentations on the teams research and ideas.
In addition we are looking for candidates who:
Are broadly aligned with MIRIs values and willing to work toward MIRIs goals (i.e. the world needs to build an Off Switch for AI.
Are passionate about MIRIs mission and excited to support our work in reducing existential risks from AI.
Application deadline Please apply by end of day March 25th 2025 Pacific time. Earlier applications are encouraged. If you are unable to make this deadline please let us know and we can attempt to be flexible.
Location Prefer inoffice (in Berkeley CA).
Compensation $120200k.
The range is due to the wide possibility space of experience and skills that candidates may bring.
We strive to ensure that all staff are paid an appropriate and comfortable living wage such that they feel fairly compensated and are able to focus on doing great work.
Benefits MIRI offers a variety of benefits including:
Health insurance (the best available plans from Kaiser and Blue Shield) as well as dental and vision coverage. (We cannot always offer comparable benefits to international staff.
No vacation policy staff are encouraged to take vacation when they want/need to in coordination with their manager.
Visas We can potentially sponsor visas for particularly promising candidates.
Full-Time