Priority areas

Below is a list of areas we currently consider among the most important. We are interested in collaborating with or supporting individuals who are keen to contribute to any of them. If you are a good fit, we might want to hire you, support you with a grant, or advise you on your career plans.

That said, regardless of your background and the specific areas listed below: if we believe that you can do high-quality work relevant to s-risks, we are interested in supporting you.

Multi-agent systems

Our research agenda, Cooperation, Conflict, and Transformative Artificial Intelligence (TAI), is ultimately aimed at reducing risks of conflict among TAI-enabled actors. This means that we need to understand how future AI systems might interact with one another, especially in high-stakes situations. CLR researchers and affiliates are currently researching how the design of future AI systems might determine the prospects for avoiding cooperation failure, using the tools of game theory, machine learning, and other disciplines related to multi-agent systems (MAS). You can find an overview of our work in this area here.
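To make "cooperation failure" concrete, the sketch below is a minimal game-theoretic illustration (the payoff numbers are conventional textbook values, not drawn from our agenda): in a one-shot Prisoner's Dilemma, playing a best response means defecting regardless of what the counterpart does, so two such agents reach mutual defection even though both would prefer mutual cooperation.

```python
# Minimal illustration of cooperation failure in a one-shot
# Prisoner's Dilemma. Payoffs are (row player, column player);
# the numbers are standard textbook values, chosen for illustration.

ACTIONS = ["cooperate", "defect"]

PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_action: str) -> str:
    """The row player's payoff-maximizing reply to a fixed opponent action."""
    return max(ACTIONS, key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Defection is a dominant strategy: it is the best response to either
# opponent action, so two best-responding agents end at (defect, defect)
# with payoffs (1, 1), even though (cooperate, cooperate) yields (3, 3).
for opponent_action in ACTIONS:
    print(f"best response to {opponent_action}: {best_response(opponent_action)}")
```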

Examples of CLR research related to MAS:

AI governance

Ensuring the safe design of AI systems also poses problems of governance. Because the prospects for avoiding conflict involving TAI systems depend on the design of all of the systems involved, avoiding conflict and promoting cooperation among TAI systems may pose new governance challenges beyond those commonly discussed in the AI risk research community (e.g., here). CLR researchers are currently working to understand potential pathways to cooperation between AI developers on the aspects of their systems that are most relevant to avoiding catastrophic conflict.

Examples of CLR research related to AI governance:

Decision theory and formal epistemology

As explained in Section 7 of our research agenda, we are also interested in a better foundational understanding of decision-making, in the hope that this will help us steer towards better outcomes in high-stakes interactions between TAI systems.
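As an illustration of the kind of foundational disagreement at stake, the sketch below computes expected values in Newcomb's problem, a standard case where evidential decision theory (EDT) and causal decision theory (CDT) come apart (the payoffs and predictor accuracy are stipulated textbook values, not results from our research):

```python
# Expected values in Newcomb's problem under evidential vs. causal
# decision theory. All numbers are stipulated for illustration.

MILLION = 1_000_000   # opaque box: filled iff one-boxing was predicted
THOUSAND = 1_000      # transparent box: always contains this amount
ACCURACY = 0.99       # probability that the predictor is correct

# EDT conditions on the action: choosing one box is strong evidence
# that the opaque box was filled.
edt_one_box = ACCURACY * MILLION
edt_two_box = ACCURACY * THOUSAND + (1 - ACCURACY) * (MILLION + THOUSAND)

# CDT holds the (already fixed) box contents constant: for any
# probability q that the opaque box is full, two-boxing gains an
# extra THOUSAND, so it dominates.
def cdt_values(q: float) -> tuple[float, float]:
    one_box = q * MILLION
    two_box = q * (MILLION + THOUSAND) + (1 - q) * THOUSAND
    return one_box, two_box

print(f"EDT: one-box {edt_one_box:,.0f} vs two-box {edt_two_box:,.0f}")
for q in (0.0, 0.5, 1.0):
    one_box, two_box = cdt_values(q)
    print(f"CDT (q={q}): one-box {one_box:,.0f} vs two-box {two_box:,.0f}")
```

With these numbers EDT recommends one-boxing while CDT recommends two-boxing for every fixed q; which recommendation is right is the kind of foundational question that matters for predicting and shaping high-stakes interactions between TAI systems.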

An example of CLR research in this area:

Risks from malevolent actors

Malevolent individuals in positions of power could negatively affect humanity’s long-term trajectory by, for example, exacerbating international conflict or other broad risk factors. With access to advanced technology, they may even pose existential risks. We are interested in a better understanding of malevolent traits and would like to investigate interventions to reduce the influence of individuals exhibiting such traits.

An example of CLR research in this area:

Cause prioritization and macrostrategy related to s-risks

We have been doing research on s-risks only since 2013, so we expect to change our minds about many important questions as we learn more. We are interested in people bringing an independent perspective to the question of what we should prioritize. This can also include seemingly esoteric topics such as infinite ethics or extraterrestrials.

Examples of CLR research in this area: