Below is a list of areas we currently consider among the most important. We are interested in collaborating with or supporting individuals who are keen to contribute to any of them. If you are a good fit, we may want to hire you, support you with a grant, or advise you on your career plans.

However, regardless of your background and the specific areas listed below: if we believe you can do high-quality work relevant to s-risks, we are interested in supporting you.

Multi-agent systems (MAS)

Several sections of our research agenda, especially Sections 5 (Contemporary AI architectures) and 6 (Humans in the loop), concern topics that can be studied using machine learning tools. CLR researchers and affiliates are currently studying how to promote cooperation among reinforcement learners and other AI agents, particularly in ways that may transfer to future AI designs.
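To make this concrete, here is a minimal, self-contained sketch of the simplest setting in which learned cooperation failure shows up: two independent tabular Q-learners playing an iterated prisoner's dilemma. This is illustrative only, not CLR research code, and all parameter values are arbitrary.

```python
import random

# One-shot prisoner's dilemma payoffs: (my_move, their_move) -> my_reward.
# C = cooperate, D = defect; mutual cooperation beats mutual defection.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
MOVES = ("C", "D")

class QLearner:
    """Tabular Q-learner whose state is the opponent's previous move."""
    def __init__(self, epsilon=0.1, alpha=0.1, gamma=0.95):
        self.q = {(s, a): 0.0 for s in ("start",) + MOVES for a in MOVES}
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

    def act(self, state):
        if random.random() < self.epsilon:                    # explore
            return random.choice(MOVES)
        return max(MOVES, key=lambda a: self.q[(state, a)])   # exploit

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in MOVES)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

agent_a, agent_b = QLearner(), QLearner()
state_a = state_b = "start"
for _ in range(50_000):
    move_a, move_b = agent_a.act(state_a), agent_b.act(state_b)
    # Each agent's next state is the move its opponent just played.
    agent_a.update(state_a, move_a, PAYOFF[(move_a, move_b)], move_b)
    agent_b.update(state_b, move_b, PAYOFF[(move_b, move_a)], move_a)
    state_a, state_b = move_b, move_a

print({key: round(val, 2) for key, val in agent_a.q.items()})
```

In runs of this kind, independent learners often settle into mutual defection even though mutual cooperation yields a higher joint payoff, which is one simple illustration of why mechanisms for promoting cooperation among learning agents are worth studying.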

Examples of CLR research related to MAS:

Strategy and psychology of conflict

In international relations and game theory, there is already a considerable literature on the causes of destructive conflict. We would like to draw on these insights to understand better how influential actors can best cooperate in the context of transformative artificial intelligence (TAI). For instance, why might catastrophic failures of cooperation between TAI systems occur, and how can we make them less likely? Sections 1.1 (Cooperation failure: models and examples), 2 (AI Strategy and Governance), and 4 (Peaceful Bargaining Mechanisms) of our research agenda are particularly relevant.
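One classic line of reasoning from this literature can be captured in a toy calculation: because conflict destroys value, there is normally a range of peaceful bargains both sides prefer to fighting, and mutual optimism driven by private information is one standard explanation for why bargaining nevertheless fails. The sketch below is a toy version of Fearon-style rationalist models of war; all numbers are made up for illustration.

```python
# Toy bargaining-range arithmetic; all numbers are illustrative.
prize = 1.0     # total value at stake
cost_a = 0.1    # value side A destroys by fighting
cost_b = 0.1    # value side B destroys by fighting

def fight_value_a(p_a_wins):
    """A's expected payoff from fighting, given its belief it wins with prob p."""
    return p_a_wins * prize - cost_a

def fight_value_b(p_a_wins):
    """B's expected payoff from fighting, given its belief that A wins with prob p."""
    return (1 - p_a_wins) * prize - cost_b

# Shared, accurate beliefs: any split giving A a share in [lo, hi] beats
# fighting for both sides, so a peaceful bargain exists.
p_true = 0.5
lo = fight_value_a(p_true)            # the least A will accept
hi = prize - fight_value_b(p_true)    # the most B will concede
print(f"bargaining range with shared beliefs: [{lo:.2f}, {hi:.2f}]")   # [0.40, 0.60]

# Mutual optimism from private information can empty the range: if A believes
# it wins with probability 0.8 while B believes A wins with probability 0.2,
# A demands at least 0.70 but B concedes at most 0.30, so no deal is possible.
lo_opt = fight_value_a(0.8)
hi_opt = prize - fight_value_b(0.2)
print(f"range under mutual optimism: [{lo_opt:.2f}, {hi_opt:.2f}]  (empty)")
```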

At CLR, we are currently working on a review of the international relations literature on coercive threats, focusing on the insights most likely to transfer to TAI systems. We are also researching how coalitional dynamics bear on conflict between TAI systems. This work draws not only on international relations but also on fields such as game theory and psychology, which likewise study the determinants of cooperation and conflict between human actors.

Decision theory and formal epistemology

As explained in Section 7 of our research agenda, we are also interested in a better foundational understanding of decision-making, in the hope that this will help us steer towards better outcomes in high-stakes interactions between TAI systems.
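To give a flavor of the foundational questions involved, here is a toy expected-value calculation (all numbers are arbitrary illustrations) showing how evidential and causal decision theory come apart on Newcomb's problem, the kind of divergence that work such as the paper below examines.

```python
# Toy Newcomb's problem: a near-perfect predictor fills an opaque box with
# $1,000,000 iff it predicted you take only that box; a transparent box
# always holds $1,000. Numbers are arbitrary illustrations.
accuracy = 0.99             # predictor's reliability
big, small = 1_000_000, 1_000

# Evidential decision theory: treat your own choice as evidence about the
# prediction, i.e. condition on the action.
edt_one_box = accuracy * big                 # opaque box is probably full
edt_two_box = (1 - accuracy) * big + small   # opaque box is probably empty

# Causal decision theory: the contents are already fixed and causally
# independent of your choice; let q be the probability the box is full.
def cdt_values(q):
    one_box = q * big
    two_box = q * big + small   # two-boxing dominates for every q
    return one_box, two_box

print(f"EDT: one-box ${edt_one_box:,.0f} vs two-box ${edt_two_box:,.0f}")
print(f"CDT (q=0.5): one-box ${cdt_values(0.5)[0]:,.0f} "
      f"vs two-box ${cdt_values(0.5)[1]:,.0f}")
```

EDT recommends one-boxing while CDT recommends two-boxing regardless of its credence q; how to act under uncertainty between such theories is the kind of question wager-style arguments like the paper below address.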

An example of CLR research in this area:

  • William MacAskill, Aron Vallinder, Caspar Oesterheld, Carl Shulman, and Johannes Treutlein. The evidentialist’s wager. Manuscript, 2019.

Psychology and biology of malevolent traits

Malevolent individuals in positions of power could negatively affect humanity’s long-term trajectory by, for example, exacerbating international conflict or other broad risk factors. With access to advanced technology, they may even pose existential risks. We are interested in a better understanding of malevolent traits and would like to investigate interventions to reduce the influence of individuals exhibiting such traits.

An example of CLR research in this area:

Cause prioritization and macrostrategy related to s-risks

We have only been doing research on s-risks since 2013, so we expect to change our minds about many important questions as we learn more. We are interested in people who bring an independent perspective to the question of what we should prioritize.

Examples of CLR research in this area:

Additional areas

Additional research areas we consider relevant include (but are not limited to):


Get involved
