Updates

Summer Update 2025

We're writing to share some important organizational developments and research progress as we move through 2025. Research Leadership Transition: After six months as interim research director, Mia Taylor has decided to leave CLR at the end of August. Mia will be joining Forethought as a researcher. This was a difficult decision for Mia, but after much reflection, she feels that she'll have more positive impact working on other priorities within longtermism. While s-risk reduction won't be her main research focus going forward, she intends to stay engaged with the s-risk community and look for opportunities to contribute to s-risk reduction in her future […]


Annual Review & Fundraiser 2022

Summary Our goal: CLR's goal is to reduce the worst risks of astronomical suffering (s-risks). Our concrete research programs are on AI conflict, Evidential Cooperation in Large Worlds (ECL), and s-risk macrostrategy. We ultimately want to identify and advocate for interventions that reliably shape the development and deployment of advanced AI systems in a positive way. Fundraising: We face a short-term funding shortfall and significant medium-term funding uncertainty. Our minimal fundraising goal is $750,000. We think this is a particularly good time to donate to CLR for people interested in supporting work on s-risks, work on Cooperative AI, work on acausal interactions, or work on generally important longtermist topics. Causes of Conflict Research Group: In 2022, we […]


Plans for 2022 & Review of 2021

Summary Mission: The Center on Long-Term Risk (CLR) works on addressing the worst-case risks from the development and deployment of advanced AI systems in order to reduce the worst risks of astronomical suffering (s-risks). Research: We built better and more explicit models of future conflict situations, making our reasoning and conclusions in this area more legible and rigorous. We have also developed more considered views on AI timelines and on potential backfire risks from our work. In total, we published twelve research reports, including a paper in the field of cooperative AI that was accepted at two NeurIPS workshops. Grantmaking: We contributed to the scale-up of the field of Cooperative AI through our advising of the Center for Emerging Risk Research (CERR). Some of our staff […]


Plans for 2021 & Review of 2020

Summary Plans for 2021 Our first focus area will be cooperation & conflict in the context of transformative AI (TAI). In addition to improving our prioritization within this area, we plan to build a field around bargaining in artificial learners using tools from game theory and multi-agent reinforcement learning (MARL) and to make initial publications on related governance aspects. Our second focus area will be malevolence. We plan to assess how important this area is relative to our other work and investigate how preferences to create suffering could arise in TAI systems. Research will remain CLR’s main activity in 2021. We will continue trying to grow our research team. We will increase our grantmaking efforts across our focus areas. Some […]
