Summary: Dictators who exhibited highly narcissistic, psychopathic, or sadistic traits were involved in some of the greatest catastrophes in human history. Malevolent individuals in positions of power could negatively affect humanity’s long-term trajectory by, for example, exacerbating international conflict or other broad risk factors. Malevolent humans with access to advanced technology—such as whole brain emulation […]
Traditional disaster risk prevention has a concept of risk factors. These factors are not risks in and of themselves, but they increase either the probability or the magnitude of a risk. For instance, inadequate governance structures do not cause a specific disaster, but if a disaster does strike, they may impede an effective response, thus increasing the damage.
Rather than considering individual scenarios of how s-risks could occur, which tends to be highly speculative, this post instead looks at risk factors – i.e. factors that would make s-risks more likely or more severe.
Surrogate goals might be one of the most promising approaches to reduce (the disvalue resulting from) threats. The idea is to add to one’s current goals a surrogate goal that one did not initially care about, hoping that any potential threats will target this surrogate goal rather than what one initially cared about.
In this post, I will outline two key obstacles to a successful implementation of surrogate goals.
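The mechanism can be illustrated with a toy sketch. All numbers and names here are hypothetical; this is only a minimal illustration of how an announced surrogate goal could absorb threats without putting what the agent actually values at stake.

```python
# Toy sketch of the surrogate-goal idea, with made-up numbers.
# An agent truly cares only about `real_harm`, but announces a utility
# function that also penalizes `surrogate_harm`, a goal it did not
# initially care about. A threatener seeking leverage then has no
# reason to prefer targeting the real goal over the surrogate.

def true_utility(real_harm):
    # What the agent actually cares about.
    return -real_harm

def stated_utility(real_harm, surrogate_harm):
    # The announced utility function, with the surrogate goal added at
    # equal weight so threats against it look equally credible.
    return -real_harm - surrogate_harm

baseline = stated_utility(0, 0)

# From the threatener's perspective, both threats inflict the same
# stated disutility...
leverage_real = baseline - stated_utility(10, 0)       # 10
leverage_surrogate = baseline - stated_utility(0, 10)  # 10

# ...but if a threat is actually executed, only the real-goal threat
# destroys anything the agent truly values.
loss_if_real_executed = true_utility(0) - true_utility(10)  # 10
loss_if_surrogate_executed = 0  # surrogate harm leaves true utility untouched
```

The sketch also hints at the obstacles discussed in the post: the surrogate only works if threateners actually find it as credible a target as the original goal.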
Published on the CLR blog, where researchers are free to explore their own ideas on how humanity can best reduce suffering. Summary: This post was originally written for internal discussions only; it is half-baked and unpolished. The post assumes familiarity with the ideas discussed in Caspar Oesterheld’s paper Multiverse-wide cooperation via coordinated decision-making. I […]
Agents that threaten to harm other agents, either in an attempt at extortion or as part of an escalating conflict, are an important form of agential s-risks. To avoid worst-case outcomes resulting from the execution of such threats, I suggest that agents add a “meaningless” surrogate goal to their utility function.
This is a post I wrote about Caspar Oesterheld’s long paper Multiverse-wide cooperation via coordinated decision-making. Because I have found the idea tricky to explain – which unfortunately makes it difficult to get […]
In the essay Reducing Risks of Astronomical Suffering: A Neglected Priority, s-risks (also called suffering risks or risks of astronomical suffering) are defined as “events that would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far”.
Efforts to shape advanced artificial intelligence (AI) may be among the most promising altruistic endeavours. If the transition to advanced AI goes wrong, the worst outcomes may involve not only the end of human civilization, but also astronomical amounts of suffering – a so-called s-risk.
This is a reply to Metzinger’s essay on Benevolent Artificial Anti-natalism (BAAN), which appeared on EDGE.org (7.8.2017). Metzinger invites us to consider a hypothetical scenario where smarter-than-human artificial intelligence (AI) is built with […]
Suppose you investigated two interventions A and B and came up with estimates for how much impact A and B will have. Your best guess is that A will spare a billion sentient beings from suffering, while B “only” spares a thousand beings. Now, should you actually believe that A is many orders of magnitude more effective than B?
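One standard way to see why the naive gap should shrink is a Bayesian adjustment: treat each estimate as a noisy observation of the log of the true impact and shrink it toward a common prior. The prior and noise parameters below are made up purely for illustration.

```python
# Toy Bayesian adjustment of impact estimates (hypothetical numbers).
# Each naive estimate is treated as a noisy observation of
# log10(true impact); a normal-normal update shrinks it toward a
# shared prior, narrowing the apparent gap between interventions.

def posterior_log_impact(log_estimate, prior_mean=3.0, prior_var=4.0, noise_var=9.0):
    # Posterior mean of a normal-normal model on the log10 scale:
    # weight the estimate by how informative it is relative to the prior.
    w = prior_var / (prior_var + noise_var)
    return w * log_estimate + (1 - w) * prior_mean

log_a = posterior_log_impact(9.0)  # naive estimate for A: 10^9 beings spared
log_b = posterior_log_impact(3.0)  # naive estimate for B: 10^3 beings spared

naive_ratio = 10 ** (9.0 - 3.0)          # a million-fold difference at face value
posterior_ratio = 10 ** (log_a - log_b)  # far smaller after shrinkage
```

With these illustrative parameters the million-fold naive ratio collapses to roughly two orders of magnitude, which is the qualitative point: noisy outlier estimates should be regressed toward the mean.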
This post analyses key strategic questions on moral advocacy, such as:
What does moral advocacy look like in practice? Which values should we spread, and how?
How effective is moral advocacy compared to other interventions such as directly influencing new technologies?
What are the most important arguments for and against focusing on moral advocacy?
Efforts to mitigate the risks of advanced artificial intelligence may be a top priority for effective altruists. If this is true, what are the best means to shape AI? Should we write math-heavy papers on open technical questions, or opt for broader, non-technical interventions like values spreading?
Imagine a data set of images labeled “suffering” or “no suffering”. For instance, suppose the “suffering” category contains documentation of war atrocities or factory farms, and the “no suffering” category contains innocuous images – say, a library. We could then train a neural network or another machine learning algorithm to detect suffering based on that data.
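The setup described above is ordinary supervised binary classification. As a minimal sketch, the snippet below substitutes synthetic pixel arrays for real labeled photos and a nearest-centroid rule for a neural network, so it runs with only NumPy; everything about the data is fabricated for illustration.

```python
# Toy sketch of the suffering-detector idea: a binary image classifier
# trained on data labeled "suffering" / "no suffering". Real work would
# use actual labeled images and a convolutional network; this stand-in
# uses synthetic 8x8 "images" and a nearest-centroid classifier.
import numpy as np

rng = np.random.default_rng(0)

n, d = 200, 64                               # 200 images, 8x8 pixels flattened
labels = rng.integers(0, 2, size=n)          # 0 = no suffering, 1 = suffering
# Make class-1 images brighter on average so the toy classes are separable.
images = rng.normal(loc=labels[:, None] * 1.5, size=(n, d))

train_X, train_y = images[:150], labels[:150]
test_X, test_y = images[150:], labels[150:]

# "Training": compute one mean image (centroid) per class.
centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in (0, 1)])

# Prediction: assign each test image to the nearer centroid.
dists = np.linalg.norm(test_X[:, None, :] - centroids[None, :, :], axis=2)
preds = dists.argmin(axis=1)

accuracy = (preds == test_y).mean()          # near-perfect on this easy toy data
```

The hard part the post gestures at is, of course, not the classifier but obtaining a dataset whose labels actually track suffering.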
This post is based on notes for a talk I gave at EAG Boston 2017. I talk about risks of severe suffering in the far future, or s-risks. Reducing these risks is the main focus of the Foundational Research Institute, the EA research group that I represent.
Setting up the goal systems of advanced AIs in a way that results in benevolent behavior is expected to be difficult. We should account for the possibility that the goal systems of AIs fail to implement our values as originally intended. In this paper, we propose the idea of backup utility functions: Secondary utility functions that are used in case the primary ones “fail”.
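The control flow behind that proposal can be sketched in a few lines. The failure check below is entirely hypothetical (detecting that a utility function has "failed" is the hard open problem); the sketch only shows the fallback structure.

```python
# Toy sketch of a backup utility function with a hypothetical failure
# check: if the primary utility function raises an error or returns a
# value outside a plausible range, fall back to a conservative backup.

def evaluate(outcome, primary, backup, lower=-1e6, upper=1e6):
    try:
        value = primary(outcome)
    except Exception:
        return backup(outcome)          # primary crashed: use the backup
    if not (lower <= value <= upper):
        return backup(outcome)          # implausible value: treat as "failed"
    return value

# Example: a buggy primary utility that blows up on some inputs.
primary = lambda x: 1.0 / x             # raises ZeroDivisionError at x == 0
backup = lambda x: 0.0                  # conservative fallback

assert evaluate(2.0, primary, backup) == 0.5   # primary works normally
assert evaluate(0.0, primary, backup) == 0.0   # backup takes over
```

In the paper's framing, the substantive difficulty is designing the "fail" test and a backup that is robustly safe, not this wrapper logic.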
FRI’s research seeks to identify the best intervention(s) for suffering reducers to work on. Rather than continuing our research indefinitely, we will eventually have to focus our efforts on an intervention directly targeted at improving the world. This report outlines plausible candidates for FRI’s “path to impact” and distills some advice on how current movement building efforts can best prepare for them.
This is a snapshot of the Center on Long-Term Risk’s (formerly Foundational Research Institute) previous "Our Mission" page. The Foundational Research Institute (FRI) conducts research on how to best reduce the suffering of sentient beings in the long-term future. We publish essays and academic articles, make grants to support research on our priorities, and advise […]