We are currently not receiving any applications.


At the Center on Long-Term Risk (CLR), you will advance neglected research to reduce the most severe risks to our civilization in the long-term future, in particular in the context of transformative artificial intelligence. Your research will help inform:

  • our discretionary grantmaking, with $19 million in total assets available to fund interventions our research identifies,
  • the activities and policies of key organizations and researchers working on longtermism and AI risk (e.g., risk mitigation measures taken by AI labs), and
  • new activities and projects carried out by our implementation team, policymakers, and professionals in our network.


About CLR

CLR aims to combine the best aspects of academic research (depth, scholarship, mentorship) with a strategic focus on preventing negative future scenarios. This means leaving out the less productive aspects of academia, such as a preference for publication volume and novelty over impact.

At CLR, you will enjoy:

  • a role tailored to your qualifications and strengths with ample intellectual freedom;
  • working towards a shared goal with highly dedicated and caring people;
  • an interdisciplinary research environment, with friendly and intellectually curious colleagues who will hold you to high standards and support you in your intellectual development;
  • comprehensive mentorship in longtermist macrostrategy, especially from the perspective of preventing s-risk;
  • the support of a well-funded and well-networked longtermist EA organization with extensive on-demand operational assistance instead of administrative burdens.

CLR was founded by a group of effective altruists who developed the idea of improving the long-term future into a multi-million-dollar foundation at the core of the longtermist research community. Working at CLR will advance your research career in longtermism, effective altruism, AI strategy, AI governance, and technical AI safety. You will have the opportunity to exchange ideas and present your work at regular workshops with researchers at leading labs and institutes such as DeepMind, OpenAI, the Machine Intelligence Research Institute, and the Future of Humanity Institute at the University of Oxford. Previous staff have gone on to work at organizations such as the Future of Humanity Institute and the Open Philanthropy Project.

CLR is an equal opportunity employer and we value diversity at our organization. We welcome applications from all sections of society and do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, disability status, or any other status protected by federal, state, or local laws. If you have a disability or additional need that requires accommodation, please let us know.

Subject areas

Regardless of your background and the areas listed here: if we believe you can advance high-quality research relevant to s-risks, we are interested in creating a position for you. If you see a way to contribute to our research agenda or open research questions, or have other ideas for reducing s-risks, please apply (or reach out to us). We commonly tailor positions to the strengths and interests of applicants.

Some specific areas we are interested in include:

Multi-agent systems (MAS)

Several sections of our research agenda, especially Sections 5 (Contemporary AI architectures) and 6 (Humans in the loop), concern topics that can be studied using machine learning tools. CLR researchers and affiliates are currently studying how to promote cooperation among reinforcement learners and other AI agents, in particular in ways that may transfer to future AI designs.

Carrying out a research proposal in this area will likely involve some combination of:

  • Running reinforcement learning experiments, especially in general-sum multi-agent systems;
  • Designing new algorithms, with a view to the ultimate goal of promoting cooperation;
  • Developing theory associated with MAS algorithms.
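To make the first two bullets concrete, here is a minimal sketch, not CLR code, of the kind of baseline experiment involved: two independent Q-learners repeatedly playing a general-sum matrix game (the Prisoner's Dilemma), where naive learning drifts toward the mutually harmful equilibrium that cooperation-promoting algorithms aim to avoid. All function names and parameter values are illustrative.

```python
import numpy as np

# Prisoner's Dilemma, a simple general-sum game.
# Action 0 = cooperate, 1 = defect; PAYOFFS[i, a0, a1] is player i's reward.
PAYOFFS = np.array([
    [[3, 0], [5, 1]],   # row player
    [[3, 5], [0, 1]],   # column player
])

def train_independent_q(episodes=5000, lr=0.1, eps=0.1, seed=0):
    """Two independent epsilon-greedy Q-learners repeatedly playing the game."""
    rng = np.random.default_rng(seed)
    q = np.zeros((2, 2))            # q[i, a]: player i's value estimate for action a
    for _ in range(episodes):
        greedy = q.argmax(axis=1)
        acts = [g if rng.random() > eps else rng.integers(2) for g in greedy]
        for i in (0, 1):
            r = PAYOFFS[i, acts[0], acts[1]]
            q[i, acts[i]] += lr * (r - q[i, acts[i]])
    return q

q = train_independent_q()
# Defection strictly dominates, so naive independent learners typically
# converge to mutual defection (payoff 1 each) despite (3, 3) being available.
```

Cooperation-promoting algorithm designs modify this basic learning loop; the sketch only reproduces the baseline failure mode that motivates them.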

Note that CLR will provide funding for compute credits if needed.


Relevant skills and backgrounds include:

  • Experience implementing RL algorithms in Python;
  • Familiarity with the basics of game theory;
  • Background in a formal discipline such as computer science, machine learning, math, statistics, evolutionary game theory, or economics.

Strategy and psychology of conflict

In international relations and game theory, there is already a considerable literature on the causes of destructive conflict. We would like to draw on these insights to understand better how influential actors can best cooperate in the context of transformative artificial intelligence (TAI). For instance, why might catastrophic failures of cooperation between TAI systems occur, and how can we make them less likely? Sections 1.1 (Cooperation failure: models and examples), 2 (AI Strategy and Governance), and 4 (Peaceful Bargaining Mechanisms) of our research agenda are particularly relevant.

At CLR, we are currently working on a review of the international relations literature on coercive threats, focusing on the insights most likely to transfer to TAI systems. We are also researching the relevance of coalitional dynamics to conflict between TAI systems. This work draws not only on international relations but also on fields such as game theory and psychology, which likewise study the determinants of cooperation and conflict between human actors.


Research in this area will likely involve some combination of:

  • Conducting original empirical or theoretical research on the causes of human conflict and the conditions for peace;
  • Adapting existing frameworks from the conflict studies literature to modeling conflict involving TAI systems.
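As one illustration of the second bullet, here is a minimal sketch (not CLR's model; all parameter values are made up) of the rationalist bargaining model of war from the conflict studies literature: because fighting is costly, some peaceful division of the stakes is preferred by both sides, so explaining conflict means explaining why actors fail to reach that range.

```python
def bargaining_range(p, cost_a, cost_b):
    """Rationalist bargaining model over a contested pie of size 1.

    Side A wins a war with probability p; fighting costs the sides
    cost_a and cost_b in expectation, so A's war payoff is p - cost_a
    and B's is (1 - p) - cost_b.  Any peaceful split x (A's share) with
    p - cost_a <= x <= p + cost_b leaves both sides at least as well off
    as fighting; positive costs make this range nonempty."""
    return p - cost_a, p + cost_b

def both_prefer_peace(x, p, cost_a, cost_b):
    """Does the proposed split x beat war for both sides?"""
    lo, hi = bargaining_range(p, cost_a, cost_b)
    return lo <= x <= hi

lo, hi = bargaining_range(p=0.6, cost_a=0.1, cost_b=0.15)
# roughly (0.5, 0.75): any split in this range beats fighting for both sides
```

In this framing, cooperation failures between TAI systems would have to come from the standard culprits identified in the literature, such as private information with incentives to misrepresent, or commitment problems.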


Relevant backgrounds include:

  • A background in international relations (especially international conflict), economics, psychology (especially behavioral game theory and social psychology), evolutionary game theory, or other relevant disciplines;
  • Basic familiarity with (or the ability to rapidly learn the basics of) game theory and machine learning.

Decision theory and formal epistemology

As explained in Section 7 of our research agenda, we are also interested in a better foundational understanding of decision-making, in the hope that this will help us steer towards better outcomes in high-stakes interactions between TAI systems.

An example of CLR research in this area:

  • William MacAskill, Aron Vallinder, Caspar Oesterheld, Carl Shulman, and Johannes Treutlein. The evidentialist’s wager. Manuscript, 2019.


Research in this area will likely involve:

  • Conducting foundational research on decision theory, especially acausal decision-making and bounded rationality.
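To give a flavor of what acausal decision-making involves, here is a toy expected-utility calculation for Newcomb's problem (all numbers illustrative), the standard case where evidential and causal decision theory disagree.

```python
ACCURACY = 0.99            # predictor reliability (illustrative)
M, K = 1_000_000, 1_000    # opaque-box and transparent-box prizes

def edt_value(one_box):
    """Evidential decision theory: treat the choice as evidence about
    the prediction, i.e. condition the box contents on the act."""
    if one_box:
        return ACCURACY * M                        # likely predicted correctly
    return ACCURACY * K + (1 - ACCURACY) * (M + K)

def cdt_value(one_box, p_full):
    """Causal decision theory: the contents are already fixed with
    probability p_full, so the act cannot change them."""
    expected_opaque = p_full * M
    return expected_opaque if one_box else expected_opaque + K

# EDT recommends one-boxing; CDT two-boxes for every p_full (dominance).
```

The two theories' recommendations come apart because EDT lets the act carry evidence about a causally fixed fact; foundational work in this area asks which (if either) verdict is correct and how it generalizes to interactions between TAI systems.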


Relevant backgrounds include:

  • Preferred: Training in relevant areas of analytic philosophy, such as philosophical decision theory or formal epistemology.

Additional areas


We also welcome applications based on relevant research topics outside these areas.

How to apply

Please contact us at info@longtermrisk.org if you have any questions about working with us.
