Annual Review & Fundraiser 2022
Summary

Our goal: CLR aims to reduce the worst risks of astronomical suffering (s-risks). Our concrete research programs focus on AI conflict, Evidential Cooperation in Large Worlds (ECL), and s-risk macrostrategy. We ultimately want to identify and advocate for interventions that reliably shape the development and deployment of advanced AI systems in a positive way.

Fundraising: We have had a short-term funding shortfall and face considerable medium-term funding uncertainty. Our minimal fundraising goal is $750,000. We think this is a particularly good time to donate to CLR for people interested in supporting work on s-risks, Cooperative AI, acausal interactions, or generally important longtermist topics.

Causes of Conflict Research Group: In 2022, we […]
Read more

Plans for 2022 & Review of 2021
Summary

Mission: The Center on Long-Term Risk (CLR) works on addressing the worst-case risks from the development and deployment of advanced AI systems in order to reduce the worst risks of astronomical suffering (s-risks).

Research: We built better and more explicit models of future conflict situations, making our reasoning and conclusions in this area more legible and rigorous. We also developed more considered views on AI timelines and on potential backfire risks from our work. In total, we published twelve research reports, including a paper in the field of cooperative AI that was accepted at two NeurIPS workshops.

Grantmaking: We contributed to the scale-up of the field of Cooperative AI through our advising of the Center for Emerging Risk Research (CERR). Some of our staff […]
Read more

Plans for 2021 & Review of 2020
Summary

Plans for 2021: Our first focus area will be cooperation & conflict in the context of transformative AI (TAI). In addition to improving our prioritization within this area, we plan to build a field around bargaining in artificial learners, using tools from game theory and multi-agent reinforcement learning (MARL), and to publish initial work on related governance aspects. Our second focus area will be malevolence. We plan to assess how important this area is relative to our other work and to investigate how preferences to create suffering could arise in TAI systems. Research will remain CLR’s main activity in 2021. We will continue trying to grow our research team, and we will increase our grantmaking efforts across our focus areas. Some […]
Read more

EAF/FRI are now the Center on Long-Term Risk (CLR)
We have renamed the Foundational Research Institute (FRI) to the Center on Long-Term Risk (CLR) and will stop using the Effective Altruism Foundation (EAF) brand (except as the name of our legal entities).
Read more

Our plans for 2020
We describe CLR's plans for 2020 and give an overview of our successes and mistakes in 2019.
Read more