Our mission

We are building a global community of researchers and professionals working to do the most good in terms of reducing suffering.

Our priorities

We currently focus on reducing the worst risks of astronomical suffering (s-risks) from emerging technologies, particularly transformative artificial intelligence. Together with others in the effective altruism community, we want careful ethical reflection to guide the future of our civilization to the greatest extent possible. We want to prevent a situation similar to the advent of nuclear weapons, in which thinking about the serious implications of this technology took a back seat during the wartime arms race. As our technological power grows, future inventions may cause harm on an even larger scale—unless we act early and deliberately.

This priority is premised on the beliefs that

  • as an organization, we should do whatever has the highest expected value;
  • future lives matter as much as current lives, and we expect most individuals to exist in the long-term future;
  • there is a significant chance that artificial intelligence will shape the future in profound ways and may cause harm on an unprecedented scale;
  • there are actions we can take right now to mitigate these risks.

We have been refining our thinking on how to do the most good since 2013. Our priorities may change as we continue to learn more about how the future will unfold and which strategies and interventions are most impactful.

Our activities

  • Research: If we take seriously the idea that most of our impact will be in the long-term future, we have to understand how to have a predictable, significant, and lasting influence. To this end, we pursue research spanning a number of fields, including philosophy, computer science, and psychology.
  • Research community: We want to enable independent researchers and organizations in the fast-growing fields of effective altruism, AI governance, and AI safety to contribute to our mission. We run research workshops, provide operational support, and advise individuals on their career plans.
  • Fundraising & Grantmaking: We make grants to organizations and individuals in our priority areas to enable them to do the most good they can.

Our values

CLR’s primary ethical focus is the reduction of involuntary suffering (Suffering-Focused Ethics, SFE). This includes human suffering, but also the suffering of non-human animals and of potential future artificial minds. In accordance with a diverse range of moral views, we believe that suffering, especially extreme suffering, cannot be easily outweighed by large amounts of happiness. While this leads us to prioritize reducing suffering, we also value happiness, flourishing, and the fulfillment of personal life goals. Within a framework of commonsensical value pluralism and a strong focus on cooperation, our goal is to ensure that the future contains as little involuntary suffering as possible.

