Introduction
The Center for AI Safety is a research and field-building nonprofit located in San Francisco. Our mission is to reduce catastrophic and existential risks from artificial intelligence through field-building and technical research.
As a research engineer here, you will pursue a variety of research projects in fields such as AI Honesty, Utility Engineering, Trojans, Transparency, and Robustness. You will help write and submit papers for publication at top conferences. You will collaborate with both internal research staff and academics at top universities, including Stanford, UC Berkeley, CMU, and MIT. You will leverage our compute cluster to run experiments at scale on large language models.
Representative Projects
- Fine-tuning large-scale transformers and evaluating them across different data domains.
- Designing and creating new datasets to evaluate the robustness of different models.
- Scaling machine learning systems to thousands of GPUs.
- Evaluating models in sequential decision-making games.
- Developing and launching ML competitions (e.g., the Trojan Detection Challenge).
- Collaborating with academics on research spanning transparency, proxy gaming, honest AI, interpretable uncertainty, and related topics.
You might be a good fit if you:
- Are able to read an ML paper, understand its key result, and see how it fits into the broader literature.
- Are familiar with relevant frameworks and libraries (e.g., PyTorch and Hugging Face).
- Have experience launching and training distributed ML jobs.
- Communicate clearly and promptly with teammates.
- Have co-authored an NLP or RL paper at a top conference.
About Us
The Center for AI Safety is a nonprofit dedicated to ensuring the safety of future artificial intelligence systems. We believe that artificial intelligence will be a powerful technology that will dramatically change society, and that AI safety must therefore be pursued proactively. To this end, we conduct research into machine learning safety and facilitate field-building projects that accelerate the growth of the safety community. Join us in steering the future of AI.
For this role, we are considering junior- and mid-level research engineers, with salary ranges of $100–140K and $140–180K, respectively.
Benefits:
1. Health insurance for you and your dependents
2. Competitive PTO
3. Free lunch and dinner at the office
4. Reimbursement for certain transportation fees
5. Annual learning & development stipend
If you have any questions about the role, feel free to reach out to [email protected].
The Center for AI Safety is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Some studies have found that a higher percentage of women and underrepresented minority candidates won't apply if they don't meet every listed qualification. The Center for AI Safety values candidates of all backgrounds. If you find yourself excited by the position but don't check every box in the description, we encourage you to apply anyway!
Required Experience:
IC