We are building a global community of researchers and professionals working to do the most good by reducing suffering.
We currently focus on reducing the worst risks of astronomical suffering (s-risks) from emerging technologies, particularly transformative artificial intelligence. We want to prevent a situation similar to the advent of nuclear weapons, in which careful reflection on the serious implications of the technology took a back seat during the wartime arms race. As our technological power grows, future inventions may cause harm on an even larger scale—unless we act early and deliberately. Together with others in the effective altruism community, we want careful ethical reflection to guide the future of our civilization to the greatest extent possible.
This priority is premised on the beliefs that
- as an organization, we should do whatever has the highest expected value;
- future lives matter as much as current lives, and we expect most individuals to exist in the long-term future;
- there is a significant chance that artificial intelligence will shape the future in profound ways and could cause harm on an unprecedented scale;
- there are actions we can take right now to mitigate these risks.
We've been refining our thinking on how to do the most good for the past seven years. Our priorities may change as we continue to learn more about how the future will unfold and which strategies and interventions are most impactful.
- Research: If we take seriously the idea that most of our impact will be in the long-term future, we have to understand how to have a predictable, significant, and lasting influence. To this end, we pursue interdisciplinary research with a focus on philosophy, economics, and computer science.
- Research community: We want to enable independent researchers and organizations in the fast-growing fields of effective altruism, AI governance, and AI safety to contribute to our mission. We run research workshops, provide operational support, and advise individuals on their career plans.
- Fundraising & grantmaking: We make grants to organizations and individuals in our priority areas to enable them to do the most good they can. By sharing the lessons we learn from this work, we hope to inspire others to approach philanthropy in a similar manner.
CLR’s primary ethical focus is the reduction of involuntary suffering (suffering-focused ethics, SFE). This includes human suffering, but also the suffering of non-human animals and of potential artificial minds of the future. In accordance with a diverse range of moral views, we believe that suffering, especially extreme suffering, cannot easily be outweighed by large amounts of happiness. While this leads us to prioritize reducing suffering, we also value happiness, flourishing, and fulfilling people’s life goals. Within a framework of commonsense value pluralism and a strong focus on cooperation, our goal is to ensure that the future contains as little involuntary suffering as possible.