How can humanity best reduce suffering?
Emerging technologies such as artificial intelligence could radically change the trajectory of our civilization. We are building a global community of researchers and professionals working to ensure that this technological transformation does not create risks of suffering on an unprecedented scale.
We do research, award grants and scholarships, and host workshops. Our work focuses on advancing the safety and governance of artificial intelligence as well as understanding other long-term risks.
7 July 2020
Summary: Dictators who exhibited highly narcissistic, psychopathic, or sadistic traits were involved in some of the greatest catastrophes in human history. Malevolent individuals in positions of power could negatively affect humanity's long-term trajectory by, for example, exacerbating international conflict or other broad risk factors. Malevolent humans with access to advanced technology, such as whole brain emulation […]
22 February 2019
Traditional disaster risk prevention has a concept of risk factors. These factors are not risks in and of themselves, but they increase either the probability or the magnitude of a risk. For instance, inadequate governance structures do not cause a specific disaster, but if a disaster strikes, they may impede an effective response and thus increase the damage.

Rather than considering individual scenarios of how s-risks could occur, which tends to be highly speculative, this post looks at risk factors, i.e. factors that would make s-risks more likely or more severe.
3 July 2018
Surrogate goals might be one of the most promising approaches to reducing (the disvalue resulting from) threats. The idea is to add to one's current goals a surrogate goal that one did not initially care about, in the hope that any potential threats will target this surrogate goal rather than what one originally cared about.
In this post, I will outline two key obstacles to a successful implementation of surrogate goals.