- About us
- Background on our strategy
- Plans for 2020
- Review of 2019
- How to contribute
- Our mission. We are building a global community of researchers and professionals working on reducing risks of astronomical suffering (s-risks).
- Our plans for 2020
- Research. We aim to investigate the questions listed in our research agenda titled “Cooperation, Conflict, and Transformative Artificial Intelligence” and other areas.
- Research community. We plan to host research workshops, make grants to support work relevant to our priorities, present our work to other research groups, and advise people interested in reducing s-risks on their careers and research priorities.
- 2019 review
- Research. In 2019, we mainly worked on s-risks as a result of conflicts involving advanced AI systems.
- Research workshops. We ran research workshops on s-risks from AI in Berlin, the San Francisco Bay Area, and near London. The participants gave positive feedback.
- Location. We moved to London (Primrose Hill) to better attract and retain staff and to collaborate with other researchers in London and Oxford.
- Fundraising target. We aim to raise $185,000 (stretch goal: $700,000) by December 2019. If you prioritize reducing s-risks, there is a strong case for supporting us. Make a donation.
We are a London-based nonprofit. Previously, we were located in Switzerland (Basel) and Germany (Berlin).
Background on our strategy
For an overview of our strategic thinking, see the following pieces:
- Gloor: Cause prioritization for downside-focused value systems
- Althaus & Gloor: Reducing Risks of Astronomical Suffering: A Neglected Priority
- Gloor: Altruists Should Prioritize Artificial Intelligence (somewhat dated)
The best work on reducing s-risks cuts across a broad range of academic disciplines and interventions. Our recent research agenda, for instance, draws from computer science, economics, political science, and philosophy. That means we must (a) work across many different disciplines and (b) find people who can bridge disciplinary boundaries. The longtermism community brings together people with diverse backgrounds who understand our prioritization and share it to some extent. For this reason, we focus on making reducing s-risks a well-established priority in that community.
Inspired by GiveWell’s self-evaluations, we are tracking our progress with a set of deliberately vague performance questions:
- Building long-term capacity. Have we made progress towards becoming a research group that will have an outsized impact on the research landscape and relevant actors shaping the future?
- Research progress. Has our work resulted in research progress that helps reduce s-risks (both in-house and elsewhere)?
- Research dissemination. Have we communicated our research to our target audience, and has the target audience engaged with our ideas?
- Organizational health. Are we a healthy organization with an effective board, staff in appropriate roles, appropriate evaluation of our work, reliable policies and procedures, adequate financial reserves and reporting, and so forth?
Our team will answer these questions at the end of 2020.
Plans for 2020
We aim to investigate research questions listed in our research agenda titled “Cooperation, Conflict, and Transformative Artificial Intelligence.” We explain our focus on cooperation and conflict in the preface:
“S-risks might arise by malevolence, by accident, or in the course of conflict. (…) We believe that s-risks arising from conflict are among the most important, tractable, and neglected of these. In particular, strategic threats by powerful AI agents or AI-assisted humans against altruistic values may be among the largest sources of expected suffering. Strategic threats have historically been a source of significant danger to civilization (the Cold War being a prime example). And the potential downsides from such threats, including those involving large amounts of suffering, may increase significantly with the emergence of transformative AI systems.”
Topics covered by our research agenda include:
- AI strategy and governance. What does the strategic landscape at the time of transformative AI (TAI) development look like? For example, will it be unipolar or multipolar, and how will offensive and defensive capabilities scale? What does this imply for cooperation failures? How can we shape the governance of AI to reduce the chances of catastrophic cooperation failures?
- Credibility. What might the nature of credible commitment among TAI systems look like, and what are the implications for improving cooperation? Can we develop new theories (e.g., program equilibrium) to account for relevant features of AI?
- Peaceful bargaining mechanisms. Can we further develop bargaining mechanisms that do not lead to destructive conflict (e.g., by implementing surrogate goals)?
- Contemporary AI architectures. How can we make progress on reducing cooperation failures using contemporary AI tools (e.g., learning to solve social dilemmas among deep reinforcement learners)?
- Humans in the loop. How do we expect human overseers or operators of AI systems to behave in interactions between humans and AI systems?
- Foundations of rational agency, including bounded decision-making and acausal reasoning.
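To make the "social dilemmas" mentioned under contemporary AI architectures concrete, here is a minimal sketch (our illustration, using standard textbook payoff values that do not come from the agenda) of the one-shot prisoner's dilemma, the canonical cooperation failure in which defection dominates even though mutual cooperation leaves both agents better off:

```python
# Illustrative only: a two-player prisoner's dilemma with standard
# textbook payoffs. Defection is each agent's best response to any
# opponent move, yet mutual defection is worse for both than
# mutual cooperation -- the structure of a "social dilemma."

PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect (sucker's payoff)
    ("D", "C"): 5,  # I defect, they cooperate (temptation)
    ("D", "D"): 1,  # mutual defection
}

def payoff(me: str, them: str) -> int:
    return PAYOFFS[(me, them)]

def best_response(their_move: str) -> str:
    """Return the move maximizing my payoff against a fixed opponent move."""
    return max("CD", key=lambda m: payoff(m, their_move))

# Defection dominates: it is the best response to either opponent move...
assert best_response("C") == "D" and best_response("D") == "D"
# ...yet mutual defection (1 each) is worse than mutual cooperation (3 each).
assert payoff("D", "D") < payoff("C", "C")
```

Research on "learning to solve social dilemmas among deep reinforcement learners" studies how learning agents can escape this kind of individually rational but collectively destructive equilibrium.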
Some topics did not fit the scope of the research agenda and were therefore not listed there, even though we consider them very important:
- macrostrategy research on questions related to s-risk,
- nontechnical work on strategic threats,
- reducing the likelihood of s-risks from hatred, sadism, and other kinds of malevolence,
- research on whether and how we should advocate rights for (sentient) digital minds,
- reducing potential risks from genetic enhancement (especially in the context of TAI development),
- AI strategy topics not captured by the research agenda (e.g., near misses),
- AI governance topics not captured by the research agenda (e.g., the governance of digital minds),
- foundational questions relevant to s-risk (e.g., metaethics, population ethics, and the feasibility and moral relevance of artificial consciousness), and
- other potentially relevant areas (e.g., great power conflict, space governance, or promoting cooperation).
In practice, our publications and grants will be determined to a large extent by the ideas and motivation of the researchers. We understand the above list of topics as a menu for researchers to choose from, and we expect that our actual work will only cover a small portion of the relevant issues. We hope to collaborate with other AI safety research groups on some of these topics.
We are looking to grow our research team, so we would be excited to hear from you if you think you might be a good fit! We are also considering running a hiring round based on our research agenda as well as a summer research fellowship.
We aim to develop a global research community, promoting regular exchange and coordination between researchers whose work contributes to reducing s-risks.
- Research workshops. Our previous workshops were attended by researchers from major AI labs and academic research groups. They resulted in several researchers becoming more involved with research relevant to s-risks. We plan to continue to host research workshops near London and in the San Francisco Bay Area. In addition, we might host seminars at other research groups and explore the idea of hosting a retreat on moral reflection.
- Research agenda dissemination. We plan to reach out proactively to researchers who may be interested in working on our agenda. We plan to present the agenda at several research organizations, on podcasts, and at EA Global San Francisco. We may also publish a complementary overview of research questions focused on macrostrategy and s-risks from causes other than conflict involving AI systems.
- Grantmaking. We will continue to support work relevant to reducing s-risks through the CLR Fund. We plan to run at least one open grant application round. If we have sufficient capacity, we plan to explore more active forms of grantmaking, such as reaching out to academic researchers, laying the groundwork for setting up an academic research institute, or working closely with individuals who could launch valuable projects.
- Community coordination. We see substantial benefits from bringing the existential-risk-oriented (x-risk-oriented) and s-risk-oriented parts of the longtermism community closer together. We believe that concern for s-risks should be a core component of longtermist EA, so we will continue to encourage x-risk-oriented groups and authors to consider s-risks in their key content and thinking. We will also continue to suggest to suffering-focused EAs that they consider potential risks to people with other value systems in their publications (see below). We plan to reassess to what extent CLR should continue to have a coordinating role in the longtermist EA community at the end of 2020.
- Advising and in-person exchange. In the past, in-person exchange has been an important step for helping community members better understand our priorities and become more involved with our work. We will continue to advise people who are interested in reducing s-risks in their careers and research priorities. Next year, we might experiment with regular meetups and co-working at our offices.
Organizational opportunities and challenges
- Research office. We expect some of our remote researchers to join us at our offices in London sometime next year. We also hope to hire more researchers.
- Lead researcher. Our research team currently lacks a lead researcher with academic experience and management skills. We hope that Jesse Clifton will take on this role in mid-2020.
Review of 2019
S-risks from conflict. In 2019, we mainly worked on s-risks as a result of conflicts involving advanced AI systems:
- Research agenda. Clifton: Cooperation, Conflict, and Transformative Artificial Intelligence (for a summary, see above).
- Kokotajlo: The 'Commitment Races' problem: In this post on the Alignment Forum, CLR Fund grantee Daniel Kokotajlo explores the dilemma that agents have strong reasons to lock in commitments as early as possible, even though such premature commitments might lead to disaster.
We also circulated nine internal articles and working papers with the participants of our research workshops.
Foundational work on decision theory. This work might be relevant in the context of acausal interactions (see the last section of the research agenda):
- MacAskill, Vallinder, Shulman, Oesterheld, Treutlein: The Evidentialist’s Wager: In this working paper, the authors present a wager for altruists in favor of following acausal decision theories, even if they assign significantly lower credence to them being correct. The basic idea is that under acausal decision theories, correlated decision-makers amplify the impact of one’s actions many times over. Johannes Treutlein first explored the main idea in a blog post in 2018.
- Oesterheld: Approval-directed agency and the decision theory of Newcomb-like problems: This paper on the implicit decision heuristics of trained AI agents has now been published in a special issue of Synthese.
- Sotala: Multiagent Models of Mind (sequence)
- Baumann: Risk factors for s-risks (independent researcher)
- Kokotajlo: Soft takeoff can still lead to decisive strategic advantage (CLR Fund grantee)
- Torges: Ingredients for creating disruptive research teams
- Torges: Assessing the state of AI R&D in the US, China, and Europe – Part 1: Output indicators
- Research workshops. We ran three research workshops on s-risks from AI. They improved our prioritization, helped us develop our research agenda, and informed the future work of some participants:
- “S-risk research workshop,” Berlin, 2 days, March 2019, with junior researchers.
- “Preventing disvalue from AI,” San Francisco Bay Area, 2.5 days, May 2019, with 21 AI safety and AI strategy researchers from leading institutes and AI labs (including DeepMind, OpenAI, MIRI, FHI). Participants rated the content at 4.3 out of 5 and the logistics at 4.5 out of 5 (weighted average). They said attending the event was about 4x as valuable as what they would have been doing otherwise (weighted geometric mean).
- “S-risk research workshop,” near London, 3 days, November 2019, with a mixture of junior and more experienced researchers.
- We have developed the capacity to host research workshops with consistently good quality.
- Grantmaking through the CLR Fund. We ran our first application round and made six grants worth $221,306 in total. Another $600,000 remains in the fund that we have not yet been able to disburse (in part because we had planned to hire a Research Analyst for our grantmaking but were unable to fill the position).
- Community coordination. We worked to bring the x-risk-oriented and s-risk-oriented parts of the longtermism community closer together. We believe this will result in synergies in AI safety and AI governance research and policy and perhaps also in macrostrategy research and broad longtermist interventions.
- Background. Until 2018, there had been little collaboration between the x-risk-oriented and s-risk-oriented parts of the longtermism community, despite the overlap in philosophical views and cause areas (especially AI risk). For this reason, our work on s-risks received less engagement than it could have. Over the past four years, we worked hard to bridge this divide. For instance, we repeatedly sought feedback from other community members. In response to that feedback, we decided to focus less on public moral advocacy and more on research on reducing s-risks (which we consider more pressing anyway) and encouraged other s-risk-oriented community members to do so as well. We also visited other research groups to increase their engagement with our work.
- Communication guidelines. This year, we further expanded these efforts. We worked with Nick Beckstead, then Program Officer for effective altruism at the Open Philanthropy Project, to develop a set of communication guidelines for discussing astronomical stakes:
- Nick’s guidelines recommend highlighting beliefs and priorities that are important to the s-risk-oriented community. We are excited about these guidelines because we expect them to result in more contributions by outside experts to our research (at our workshops and on an ongoing basis) and a better representation of s-risks in the most popular EA content (see, e.g., the 80,000 Hours job board and previous edits to “The Long-Term Future”).
- CLR’s guidelines recommend communicating in a more nuanced manner about pessimistic views of the long-term future by considering highlighting moral cooperation and uncertainty, focusing more on practical questions if possible, and anticipating potential misunderstandings and misrepresentations. We see it as our responsibility to ensure that those who come to prioritize s-risks based on our writings will also share our cooperative approach and commitment against violence. We expect the guidelines to reduce the risk of failing at this and to result in increased interest in s-risks by major funders (including the Open Philanthropy Project’s grant, see below). We expect both guidelines to contribute to a more balanced discussion about the long-term future.
- Nick put in a substantial effort to ensure his guidelines are read and endorsed by large parts of the community. Similarly, we reached out to the most active authors and sent our guidelines to them. Some community members suggested that these guidelines should be transparent to the community; we agree with them and are, therefore, planning to share them publicly. (We are waiting to hear from the people and organizations that support Nick’s guidelines whether they want to publish the guidelines and will add a link here if they decide to do so. We plan to publish CLR’s guidelines at that point, too.)
- Longer-term plans. We believe that these activities are only the beginning of longer and deeper collaborations. We plan to reassess the costs and benefits at the end of 2020.
- Research community.
- We advised 13 potential researchers and professionals interested in s-risks on their careers.
- We sent out our first research newsletter to about 70 researchers.
- We started providing scholarships and more systematic operations support for researchers.
- We improved our online communication platform for researchers (Slack workspace with several channels) and have received positive feedback on the discussion quality.
- Research management. We published a report on disruptive research groups. The main lessons for us were: (1) we should seriously consider how to address our lack of research leadership, and (2) we should improve the physical proximity of our research staff.
- We moved to London. We relocated our headquarters from Berlin to London because this allows us to better attract and retain staff and to collaborate with other researchers and EA organizations in London and Oxford. Our team of six will work from our offices in Primrose Hill, London.
- Hiring. We have hired Jesse Clifton to join our research team part-time. Jesse is pursuing a PhD in statistics at NCSU and is the primary author of our technical research agenda.
- Open Philanthropy Project grant. The Open Philanthropy Project awarded EAF, our parent organization, a $1 million grant over two years to support our research, general operations, and grantmaking.
- Strategic clarity. At the end of 2018, we were still substantially uncertain about the strategic goals of our organization. We have since refined our mission and strategy and have overhauled our website accordingly.
Mistakes and lessons learned
- Research output. While we were satisfied with our internal drafts, we fell short of our goals to produce written research output (for publication, or at least for sharing with peers).
- Feedback and transparency for our communication guidelines. We did not seek feedback on the guidelines as systematically as we now think we should have. As a result, some people in our network were dissatisfied with the outcome. Moreover, while we were planning to give a general update on our efforts in our end-of-year update, we now believe it would have been worth the time to publish the full guidelines sooner.
- Hiring. We planned to hire a Research Analyst for grantmaking and an Operations Analyst and made two job offers. One of them was not accepted; the other one did not work out during the first few months of employment. In hindsight, it might have been better to hire even more slowly and to ensure we better understood the roles we were hiring for. Doing so would have allowed us to make a more convincing case for the positions and hire from a larger pool of candidates.
- Anticipating implications of strategic changes. When we decided to shift our strategic focus towards research on s-risks, we were insufficiently aware of how this would change everyone’s daily work and responsibilities. We now think we could have anticipated these changes more proactively and taken measures to make the transition easier for our staff.
- Strategic planning procedure. Due to repeated organizational changes over the past years, we had not developed a reliable annual strategic planning routine. This year, we did not realize that building such a process is important. We plan to prioritize this in 2020.
- Communicating our move to London. We did not communicate our decision to relocate from Berlin to London very carefully in some instances. As a result, we received some negative feedback from people who did not support our decision and were under the impression we had not thought carefully about it. We invested some time to provide more background on our reasoning.
- Budget 2020: $994,000 (7.4 expected full-time equivalent employees). Our per-staff expenses have increased compared with 2019 because we do not have access to free office space anymore, and the cost of living in London is significantly higher than in Berlin.
- CLR reserves as of early November: $1,305,000 (corresponds to 15 months of expenses; excluding CLR Fund balance).
- CLR Fund balance as of mid-December: $600,000.
- Room for more funding: $185,000 (to attain 18 months of reserves); stretch goal: $700,000 (to attain 24 months of reserves).
- We invest funds that we are unlikely to deploy soon in the global stock market as per our investment policy.
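The reserve and fundraising figures above follow from simple runway arithmetic. The sketch below (our illustration; the month counts in the text are rounded) checks that the targets line up with the stated 18- and 24-month runway goals:

```python
# Runway arithmetic behind the fundraising targets (our illustration;
# the month figures quoted in the text are rounded).
budget_2020 = 994_000            # annual budget in USD
monthly_burn = budget_2020 / 12  # ~ $82,833 per month

reserves = 1_305_000  # CLR reserves, early November (excluding CLR Fund)
target = 185_000      # room for more funding
stretch = 700_000     # stretch goal

months_now = reserves / monthly_burn
months_with_target = (reserves + target) / monthly_burn
months_with_stretch = (reserves + stretch) / monthly_burn

# Current reserves cover roughly 15-16 months of expenses;
# hitting the targets brings the runway to about 18 and 24 months.
assert 15 < months_now < 16
assert round(months_with_target) == 18
assert round(months_with_stretch) == 24
```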
How to contribute
- Stay up to date. Subscribe to our supporter updates and follow our Facebook page.
- Work with us. We are always hiring researchers and might also hire for new positions in research operations and management. If you are interested, we would be very excited to hear from you!
- Get career advice. If you are interested in our priorities, we are happy to discuss your career plans with you. Schedule a call now.
- Engage with our research. If you are interested in discussing our research with our team and giving feedback on internal drafts, please reach out to Stefan Torges.
- Make a donation. We aim to raise $185,000 (stretch goal: $700,000) for CLR. (We can set up a donor-advised fund (DAF) for value-aligned donors who give at least $100,000 over two years.)
Recommendation for donors
We think it makes sense for donors to support us if:
- you believe we should prioritize interventions that affect the long-term future positively,
- (a) you assign significant credence to some form of suffering-focused ethics, (b) you think s-risks are not unlikely compared to very positive future scenarios, and/or (c) you think work on s-risks is particularly neglected and reasonably tractable, and
- you assign significant credence to our prioritization and strategy being sound, i.e., you consider our work on AI and/or non-AI priorities sufficiently pressing (e.g., you assign a nontrivial probability (at least 5–10%) to the development of transformative AI within the next 20 years).
For donors who do not agree with these points, we recommend giving to the donor lottery (or the EA Funds). We recommend that donors who are interested in the CLR Fund support CLR instead because the CLR Fund has a limited capacity to absorb further funding.
Would you like to support us? Make a donation.
We are interested in your feedback
If you have any questions or comments, we look forward to hearing from you; you can also send us feedback anonymously. We greatly appreciate any thoughts that could help us improve our work. Thank you!
I would like to thank Tobias Baumann, Max Daniel, Ruairi Donnelly, Lukas Gloor, Chi Nguyen, and Stefan Torges for giving feedback on this article.