Principal, Special Projects


Job Location: San Francisco, CA - USA
Monthly Salary: Not Disclosed
Posted on: 2 days ago
Vacancies: 1 Vacancy

Job Summary

The Center for AI Safety (CAIS) is a leading research and advocacy organization focused on mitigating societal-scale risks from AI. Some of our past achievements include: releasing the most widely used measure of AI capabilities, which is used by all major AI companies and has been cited over 16,000 times; running a large compute cluster to facilitate AI safety research; and publishing a global statement on AI Risk signed by Geoffrey Hinton, Yoshua Bengio, and top AI CEOs.

The Role
We're hiring senior operators to own high-stakes projects and initiatives. You will identify high-impact opportunities, define strategy, and drive execution end-to-end. Above all, we need people who can operate autonomously: someone with the judgment to navigate complex decisions and the track record to be trusted with significant responsibility.

The scope is broad by design. Example projects include: partnering with the team behind #TeamTrees to run a public campaign on AGI risk; supporting researchers in building benchmarks for deception and weaponization risk; standing up an AI safety hub in Washington, DC; and finding ways to engage YouTubers and long-form creators on AI safety. What unites these is the need for someone with the judgment and ability to take an ambiguous mandate and turn it into a concrete outcome without needing to be managed closely.

Who We're Looking For
You've operated at a high level in fast-moving, high-stakes environments, and have the track record to prove it. Example profiles we're looking for include former startup founders and COOs: people with both exceptional ability and judgment. Your specific background may look very different.

What You'll Do

    • Own projects and initiatives end-to-end: identify opportunities, set strategy, build plans, and execute, with the authority to make real decisions along the way.
    • Scope new projects end-to-end, defining objectives, deliverables, timelines, and budgets.
    • Coordinate across researchers, vendors, policy partners, and external collaborators to move complex work forward.
    • Stay agile when priorities shift: re-scope, re-prioritize, and adjust without losing momentum.
    • Monitor risks and surface critical issues early, always with a recommended path forward.

What We're Looking For

    • A track record of owning complex, ambiguous initiatives and delivering outsized results.
    • The ability to scope new problem spaces quickly, defining goals, success metrics, and constraints through research, interviews, and good judgment.
    • Consistently good judgment under uncertainty: you make sound calls with incomplete information, know when to move fast and when to slow down, and leadership can trust your decisions without reviewing every detail.
    • Comfort operating with high autonomy: you find the path forward even when one isn't obvious, and you escalate the right things at the right time.
    • Strong analytical skills for evaluating feasibility, impact, and risk across very different domains.
    • Excellent written and verbal communication: you can present complex ideas clearly to both technical and non-technical audiences.
    • Genuine interest in AI safety and the willingness to develop deep domain knowledge.
$150,000 - $250,000 a year

Benefits:
Health insurance for you and your dependents
401(k) plan with 4% matching
Unlimited PTO
Lunch and dinner at the office
Annual Professional Development Stipend
Access to some of the top talent working on technical and conceptual research in AI safety
The Center for AI Safety is an Equal Opportunity Employer. We consider all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity or expression, national origin, ancestry, age, disability, medical condition, marital status, military or veteran status, or any other protected status, in accordance with applicable federal, state, and local laws. In alignment with the San Francisco Fair Chance Ordinance, we will consider qualified applicants with arrest and conviction records for employment.

If you require a reasonable accommodation during the application or interview process, please contact emailprotected.

We value diversity and encourage individuals from all backgrounds to apply.


About Company


Center for AI Safety. Reducing societal-scale risks from AI by advancing safety research, building the field of AI safety researchers, and promoting safety standards.
