Required Experience: Intern
Introduction
The Center for AI Safety (CAIS) is a leading research and field-building organization on a mission to reduce societal-scale risks from AI. Alongside our sister organization, the CAIS Action Fund, we tackle the toughest AI issues with a mix of technical, societal, and policy solutions.
As a research engineer intern, you will work closely with our researchers on projects in fields such as Trojans, Adversarial Robustness, Power Aversion, Machine Ethics, and Out-of-Distribution Detection. You will be assigned a dedicated mentor throughout your internship, but we will ultimately treat you as a colleague: you will have the opportunity to advocate for your own experiments and projects and defend their impact. You will plan and run experiments, conduct code reviews, and work in a small team to produce a publication with outsized impact. You will leverage our internal compute cluster to run experiments at scale on large language models.
Timing
This application is for the full-time summer internship position. Applications are due by December 5, 2025.
You might be a good fit if you:
Are able to read an ML paper, understand its key result, and see how it fits into the broader literature.
Are comfortable setting up, launching, and debugging ML experiments.
Are familiar with relevant frameworks and libraries (e.g., PyTorch).
Communicate clearly and promptly with teammates.
Take ownership of your individual part in a project.
Have co-authored an ML paper at a top conference.
$9,000 - $19,200 (one-time)
This internship is unpaid; however, CAIS provides the above stipend to assist with academic pursuits and living expenses. The stipend is subject to tax.
The Center for AI Safety is an Equal Opportunity Employer. We consider all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity or expression, national origin, ancestry, age, disability, medical condition, marital status, military or veteran status, or any other protected status, in accordance with applicable federal, state, and local laws. In alignment with the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records for employment.
If you require a reasonable accommodation during the application or interview process, please contact [email protected].
We value diversity and encourage individuals from all backgrounds to apply.
We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment; final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.