The Center for AI Safety (CAIS) is a leading research and advocacy organization focused on mitigating societal-scale risks from AI. We work to ensure that AI is developed and deployed safely, aligning its impact with the long-term interests of humanity. This means engaging with policymakers, researchers, industry leaders, and the broader public to build awareness and support for measures that can meaningfully reduce AI risk.
We're seeking a Newsletter Editor who keeps up to date with AI safety news, can identify the most compelling stories for an AI safety audience, and writes clearly. Your role will include story selection, drafting, editing, and publishing newsletter issues, while working closely with our team.
We are open to full-time or part-time arrangements. We prefer candidates based in San Francisco, but we are open to remote candidates as well.
Key Responsibilities
Own the newsletter publishing end-to-end, releasing articles at a weekly cadence.
Keep up to date with AI safety and identify the most relevant news stories each week.
Collaborate with team members, including the executive director, getting high-level approval on stories early in the week and a round of revisions before release.
Ensure clarity, precision, and accessibility (no technical background required) while respecting the newsletter's style.
Work with our team on newsletter distribution.
You might be a good fit if you
Have a solid understanding of AI risks and are motivated by CAIS's mission to mitigate societal-scale risks from advanced AI.
Are organized and reliable: you can own a weekly cadence and hit deadlines without sacrificing quality.
Produce writing that is clear, accurate, and enjoyable for our readers.
Track AI safety developments daily and can quickly identify the few stories that matter most each week.
The Center for AI Safety is an Equal Opportunity Employer. We consider all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity or expression, national origin, ancestry, age, disability, medical condition, marital status, military or veteran status, or any other protected status, in accordance with applicable federal, state, and local laws. In alignment with the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records for employment.
If you require a reasonable accommodation during the application or interview process, please contact us.
We value diversity and encourage individuals from all backgrounds to apply.
We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.