CLR Foundations Course
The CLR Foundations Course on risks of astronomical suffering (s-risks) is intended to help participants learn which s-risks we consider most important and how to reduce them.
Applications for our Winter 2024 round are open here until Wednesday, November 27th, 23:59 Pacific Time. For details on the Winter 2024 round, please see this page.
To be notified about future iterations of our Foundations Course, please register interest here.
You may also be interested in the CLR S-Risk Seminars, which dive deeper into different priority cause areas within s-risk reduction and are intended for people who have completed the CLR Foundations Course or have a similar level of context.
Content
The Foundations Course is designed to introduce people to CLR’s research on how transformative AI (TAI) might be involved in the creation of large amounts of suffering (s-risks). Our priority areas for addressing these risks include work on multiagent AI safety, AI governance, epistemology, risks from malevolent or fanatical actors, and macrostrategy.
We recommend people consider the Center for Reducing Suffering’s S-risk Introductory Fellowship if they are interested in learning about work on s-risks unrelated to these topics, and consider BlueDot’s AI Safety courses if they are interested in learning about existential risks from TAI.
Target audience
We think the Foundations Course will be most useful for you if you are interested in reducing s-risk through contributions to CLR’s priority areas, and are seriously considering making this a priority for your career. It could also be useful if you are already working in an area that overlaps with CLR’s priorities (e.g. AI governance, AI alignment), and are interested in ways you can help reduce s-risks in the course of your current work.
We recommend those interested in our Summer Research Fellowship first take part in the Foundations Course, though participation is not a prerequisite.
The CLR Foundations Course will be most useful for you if you have not yet interacted extensively with CLR, e.g., you have talked with us about s-risks for fewer than 10 hours. If you have interacted more, you may be interested in the CLR S-Risk Seminars.
There might also be more idiosyncratic reasons to apply; the points above are intended as a guide rather than strict criteria.
Contact
If you have any questions about the program or are uncertain whether to apply, please reach out to james.faville@longtermrisk.org.