How can humanity best reduce suffering?
Emerging technologies such as artificial intelligence could radically change the trajectory of our civilization. We are building a global community of researchers and professionals working to ensure that this technological transformation does not risk causing suffering on an unprecedented scale.
We do research, award grants and scholarships, and host workshops. Our work focuses on advancing the safety and governance of artificial intelligence as well as understanding other long-term risks.
3 March 2020
We have renamed the Foundational Research Institute (FRI) to the Center on Long-Term Risk (CLR) and will stop using the Effective Altruism Foundation (EAF) brand (except as the name of our legal entities).
22 February 2019
We describe CLR's plans for 2020 and give an overview of our successes and mistakes in 2019.
Traditional disaster risk prevention has a concept of risk factors. These factors are not risks in and of themselves, but they increase either the probability or the magnitude of a risk. For instance, inadequate governance structures do not cause a specific disaster, but if a disaster strikes, they may impede an effective response, thus increasing the damage.
Rather than considering individual scenarios of how s-risks could occur, an exercise that tends to be highly speculative, this post instead looks at risk factors – i.e., factors that would make s-risks more likely or more severe.