About us
Our mission
Our goal is to address worst-case risks from the development and deployment of advanced AI systems. We are currently focused on conflict scenarios as well as technical and philosophical aspects of cooperation.
To this end, we conduct interdisciplinary research, make and recommend grants, and build a community of professionals and other researchers around our priorities, e.g., through events, fellowships, and individual support.
How we came to our mission
As a team and organization, we are driven by the idea of doing the most good we can from an impartial perspective. While we are deeply committed to our values, we are radically open-minded about how to live up to them.
This is a complex challenge. Because our resources are limited, we cannot solve every problem in the world or mitigate every risk we may face in the future. Instead, we need to prioritize and ask ourselves which actions we should take now to have as much positive impact as possible.
This has been the guiding question of our organization since our founding in 2013. Starting from a commitment to our values, there are many different considerations that have shaped our current focus. As we learn more, our priorities, or even our mission, may change.
Below we provide a list of some of the crucial considerations that inform our current priorities:
- sufficiently advanced artificially intelligent systems are likely to shape the future of our civilization in uniquely profound ways;[1]
- this transformation may cause harm on an unprecedented scale, with agential risks such as conflict and malevolence being particularly worrisome;[2]
- the chance that such systems will be developed in the next thirty years is sufficiently high to warrant action now.[3]
What our values are
Our primary ethical focus is the reduction of involuntary suffering. This includes human suffering, but also the suffering of non-human animals and of potential artificial minds of the future. In accordance with a diverse range of moral views, we believe that suffering, especially extreme suffering, cannot easily be outweighed by large amounts of happiness.
While this leads us to prioritize reducing suffering, we do so within a framework of commonsensical value pluralism and with a strong focus on cooperation. Together with others in the effective altruism community, we want careful ethical reflection to guide the future of our civilization to the greatest extent possible.
1. See for example: Beginner's Guide to Reducing S-Risks, Altruists Should Prioritize Artificial Intelligence, Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity.
2. See for example: Reducing Risks of Astronomical Suffering: A Neglected Priority, S-risks: Why they are the worst existential risks, and how to prevent them.
3. See for example: Forecasting TAI with Biological Anchors, AI Timelines sequence.