Research Scientist

Job Location:

San Francisco, CA - USA

Monthly Salary: Not Disclosed
Posted on: 30+ days ago
Vacancies: 1 Vacancy
This job posting is outdated and the position may be filled.

Job Summary

The Center for AI Safety (CAIS) is a leading research and advocacy organization focused on mitigating societal-scale risks from AI. We address AI's toughest challenges through technical research, field-building initiatives, and policy engagement, along with our sister organization, the Center for AI Safety Action Fund.
As a Research Scientist here, you will lead and execute high-impact research that advances the safety and reliability of frontier AI systems. You'll design and run experiments on large language models, build the tooling needed to train and evaluate models at scale, and turn results into publishable research. You'll collaborate closely with CAIS researchers and external academic and commercial partners, using our compute cluster to run large-scale training and evaluation. The work spans areas like AI honesty, robustness, transparency, and trojan/backdoor behaviors, with the aim of reducing real-world risks from advanced AI systems.

Key Responsibilities Include:

  • Own end-to-end research experiments.

  • Train and fine-tune large transformer models across domains.

  • Build and maintain datasets and benchmarks.

  • Run distributed training and evaluation at scale.

  • Write and ship research, collaborating with co-authors and supporting submissions of papers to top conferences.

  • Collaborate with researchers and external partners, contributing to the shared research direction and iterating quickly across research cycles.

  • Mentor and guide others on the team.

You might be a good fit if you:

  • Are a current PhD student or researcher in machine learning or a related field. Exceptional candidates with a strong publication record may be considered regardless of degree level.

  • Have co-authored at least one paper published at a top ML conference venue (e.g., NeurIPS, ICML, ICLR, ACL, CVPR). Workshop papers are considered, though peer-reviewed conference publications are strongly preferred. Publications in journals such as IEEE or Springer Nature are typically given less weight.

  • Have a track record of empirical research in AI or ML, particularly in AI safety-relevant areas (e.g., adversarial robustness, calibration, benchmarking). We weight empirical research heavily; candidates with primarily theoretical backgrounds are generally not a strong fit.

  • Alternatively, have made meaningful research contributions at a leading AI lab.

  • Are able to read an ML paper, understand the key result, and see how it fits into the broader literature.

  • Are comfortable setting up, launching, and debugging ML experiments.

  • Are familiar with relevant frameworks and libraries (e.g., PyTorch).

  • Communicate clearly and promptly with teammates.

  • Take ownership of your individual part in a project.

$170,000 – $220,000 a year

Benefits:

Health insurance for you and your dependents

401(k) plan with 4% matching

Unlimited PTO

Lunch and dinner at the office

Annual Professional Development Stipend

Access to some of the top talent working on technical and conceptual research in AI safety

Know someone who could be a great fit for this role? Submit their details through our Referral Form. If we end up hiring your referral, you'll receive a $1,500 bonus once they've been with CAIS for 90 days.

The Center for AI Safety is an Equal Opportunity Employer. We consider all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity or expression, national origin, ancestry, age, disability, medical condition, marital status, military or veteran status, or any other protected status, in accordance with applicable federal, state, and local laws. In alignment with the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records for employment.

If you require a reasonable accommodation during the application or interview process, please contact [email protected].

We value diversity and encourage individuals from all backgrounds to apply.


Required Experience:

IC (Individual Contributor)

About Company

Center for AI Safety. Reducing societal-scale risks from AI by advancing safety research, building the field of AI safety researchers, and promoting safety standards.
