
AI SAFETY TRAINING

A database of training programs, courses, conferences, and other events for AI
existential safety. Book a free call with AI Safety Quest if you want to get
into AI safety!

 * Subscribe to updates
 * Self-Study
 * Add entry
 * Corrections & updates

Open Applications




Programs Timeline

Exact dates may be inaccurate if a program was added before its dates were
announced; refer to the program websites for reliable information.



Upcoming Table




SELF-STUDY

Facilitated courses are usually heavily oversubscribed. However, the materials
are openly available and plenty of other people want to learn, so you can form
your own study group! Pick your preferred course, then introduce yourself in
#study-buddies on the AI Alignment Slack to find a group, or go to AI Safety
Quest and form a Quest Party.


AI SAFETY FUNDAMENTALS

8-week courses by BlueDot Impact covering much of the foundations of the field
and ongoing research directions.

 * Alignment
 * Governance


ALIGNMENT FORUM CURATED SEQUENCES

Sequences of blog posts by researchers on the Alignment Forum covering diverse
topics.

 * Sequences


ARKOSE'S RESOURCES LIST

A curated and tested list of resources that Arkose sends to AI researchers;
excellent for getting a grounding in the problem.

 * Resources


READING WHAT WE CAN

A collection of books and articles for a 20-day reading challenge.

 * Books


CHAI BIBLIOGRAPHY

Extensive annotated reading recommendations from the Center for Human-Compatible
AI.

 * Materials


KEY PHENOMENA IN AI RISK

An 8-week reading curriculum from PIBBSS.ai that "provides an extended
introduction to some key ideas in AI risk, in particular risks from misdirected
optimization or 'consequentialist cognition'".

 * Reading Curriculum

Machine Learning-focused

Machine-learning-focused courses for people who want to work on alignment are
also available, though take care not to drift into a purely
capabilities-enhancing role on this track!


INTRO TO ML SAFETY

40 hours of recorded lectures, written assignments, coding assignments, and
readings by the Center for AI Safety, used in the ML Safety Scholars program.

 * Materials


ALIGNMENT RESEARCH ENGINEER ACCELERATOR

An advanced course for skilling up in ML engineering to work in technical AI
alignment roles.

 * Materials


DEEP LEARNING CURRICULUM BY JACOB HILTON

An advanced curriculum for getting up to speed with some of the latest
developments in deep learning, as of July 2022. It is targeted at people with a
strong quantitative background who are familiar with the basics of deep
learning, but may otherwise be new to the field.

 * Materials


THE INTERPRETABILITY TOOLKIT

A collection of tools from Alignment Jam for getting started and skilling up in
interpretability. The toolkit includes Quickstart to Mechanistic
Interpretability by Neel Nanda.

 * Materials


RESOURCES

AI Safety Communities
Living document of online and offline communities.

AI Safety Info
Interactive crowdsourced FAQ on AI Safety.

Alignment Ecosystem Development
Volunteering opportunities for devs and organizers.

Map of AI Existential Safety



--------------------------------------------------------------------------------

© AI Safety Support, released under CC-BY.

 * Discord
 * Email
 * Airtable