Future Suffering & Macrostrategy
Risks of Astronomical Future Suffering
Space colonization would likely increase rather than decrease total suffering. Because many people nonetheless care about humanity’s spread into the cosmos, we should reduce risks of astronomical future suffering without opposing others’ spacefaring dreams. In general, we recommend focusing on making sure that an intergalactic future will be good if it happens, rather than on making sure that there will be such a future.
Suffering-Focused AI Safety: In Favor of “Fail-Safe” Measures
AI outcomes where something goes wrong may differ enormously in the amounts of suffering they contain. An approach that tries to avert the worst of those outcomes seems especially promising because it is currently more neglected than classical AI safety efforts which shoot for a highly specific, “best-case” outcome.
How the Simulation Argument Dampens Future Fanaticism
The simulation argument suggests a non-trivial chance that most of the copies of ourselves are instantiated in relatively short-lived ancestor simulations run by superintelligent civilizations. If so, when we act to help others in the short run, our good deeds are duplicated many times over. This reasoning dramatically upshifts the relative importance of short-term helping over focusing on the far future.
Superintelligence as a Cause or Cure for Risks of Astronomical Suffering
Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, usually understood as the risk of human extinction. We argue that suffering risks (s-risks) are of comparable severity and probability. Just as with existential risks, s-risks can be caused as well as reduced by superintelligent AI.
Artificial Intelligence and Its Implications for Future Suffering
Artificial intelligence (AI) will likely transform the world later this century. Whether uncontrolled or controlled AIs would create more suffering in expectation is a question to explore further. Regardless, the field of AI safety and policy seems to be a very important space where altruists can make a positive-sum impact along many dimensions.
Cause prioritization for downside-focused value systems
This post discusses cause prioritization from the perspective of downside-focused value systems, i.e. views whose primary concern is the reduction of bads such as suffering. According to such value systems, interventions which reduce risks of astronomical suffering are likely more promising than interventions which primarily reduce extinction risks.
How Feasible Is the Rapid Development of Artificial Superintelligence?
Two crucial questions in discussions about the risks of artificial superintelligence are: 1) How much more powerful could an AI become relative to humans, and 2) how easily could superhuman capability be acquired? To answer these questions, this article reviews the literature on human expertise and intelligence and discusses its relevance for AI.
Reducing Risks of Astronomical Suffering: A Neglected Priority
Will we go extinct, or will we succeed in building a flourishing utopia? Discussions about the future trajectory of humanity often center on these two possibilities, a framing that ignores both that survival does not always imply utopian outcomes and that outcomes where humans go extinct could differ tremendously in how much suffering they contain.
Charity Cost-Effectiveness in an Uncertain World
Evaluating the effectiveness of our actions, or even just whether they're beneficial or harmful, is very difficult. One way to deal with uncertainty is to focus on actions that likely have positive effects across many scenarios. This approach often amounts to meta-level activities like promoting positive-sum institutions, reflectiveness, and effective altruism in general.
Identifying Plausible Paths to Impact and their Strategic Implications
FRI’s research seeks to identify the best intervention(s) for suffering reducers to work on. Rather than continuing our research indefinitely, we will eventually have to focus our efforts on an intervention directly targeted at improving the world. This report outlines plausible candidates for FRI’s “path to impact” and distills some advice on how current movement building efforts can best prepare for them.
Cooperation, Foresight, and Decision Theory
Multiverse-wide Cooperation via Correlated Decision Making
Some decision theorists argue that when playing a prisoner's dilemma-type game against a sufficiently similar opponent, we should cooperate to make it more likely that our opponent also cooperates. This idea, which Hofstadter calls superrationality, has strong implications when combined with the insight from modern physics that we live in a large universe or multiverse of some sort.
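The superrational reasoning can be made concrete with a toy calculation. The payoff numbers below are the standard illustrative prisoner's-dilemma values, not figures from the paper:

```python
# Row player's payoffs in a standard prisoner's dilemma,
# keyed by (my_move, opponent_move); the numbers are illustrative.
payoffs = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

# Against an arbitrary, uncorrelated opponent, defection dominates:
assert payoffs[("D", "C")] > payoffs[("C", "C")]
assert payoffs[("D", "D")] > payoffs[("C", "D")]

# Under superrationality, a sufficiently similar opponent mirrors my choice,
# so only the diagonal outcomes are attainable -- and there cooperation wins.
value_if_mirrored = {move: payoffs[(move, move)] for move in ("C", "D")}
best = max(value_if_mirrored, key=value_if_mirrored.get)
print(best)  # "C": cooperating maximizes payoff against a correlated twin
```

The point is that once the opponent's choice is correlated with one's own, the off-diagonal outcomes that make defection dominant are no longer on the table.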
Gains from Trade through Compromise
When agents of differing values compete, they may often find it mutually advantageous to compromise rather than continuing to engage in zero-sum conflicts. Potential ways of encouraging cooperation include promoting democracy, tolerance and (moral) trade. Because a future without compromise could be many times worse than a future with it, advancing compromise seems an important undertaking.
Approval-directed agency and the decision theory of Newcomb-like problems
The quest for artificial intelligence poses questions relating to decision theory: How can we implement any given decision theory in an AI? Which decision theory (if any) describes the behavior of any existing AI design? This paper examines which decision theory (in particular, evidential or causal) is implemented by an approval-directed agent, i.e., an agent whose goal is to maximize the score it receives from an overseer.
Robust program equilibrium
One approach to achieving cooperation in the one-shot prisoner’s dilemma is Tennenholtz’s program equilibrium, in which the players of a game submit programs instead of strategies. These programs are then allowed to read each other’s source code to decide which action to take. Unfortunately, existing cooperative equilibria are either fragile or computationally challenging and therefore unlikely to be realized in practice. This paper proposes a new, simple, more efficient program to achieve more robust cooperative program equilibria.
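The fragility the abstract mentions can be seen in a toy Python sketch of the classic Tennenholtz-style exact-match construction (this is the older, fragile equilibrium, not the paper's new proposal; `mirror_bot` is a hypothetical name):

```python
def mirror_bot(my_source: str, opponent_source: str) -> str:
    # Cooperate exactly when the opponent's submitted program is
    # character-for-character identical to our own; defect otherwise.
    # A pair of identical copies cooperates, and neither side gains by
    # deviating -- but the equilibrium is fragile: any syntactic variant,
    # even one computing the same function, triggers mutual defection.
    return "C" if opponent_source == my_source else "D"

src = "mirror_bot source text"            # stand-in for the program's own code
print(mirror_bot(src, src))               # "C" against an exact copy
print(mirror_bot(src, src + " # tweak"))  # "D": a trivially modified copy defects
```

Checking for exact textual equality is what makes the construction both computationally trivial and practically brittle, which motivates the paper's more robust alternative.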
International Cooperation vs. AI Arms Race
There's a decent chance that governments will be the first to build artificial general intelligence (AGI). International hostility, especially an AI arms race, could exacerbate risk-taking, hostile motivations, and errors of judgment when creating AI. If so, then international cooperation could be an important factor to consider when evaluating the flow-through effects of charities.
Differential Intellectual Progress as a Positive-Sum Project
Fast technological development carries a risk of creating extremely powerful tools, especially AI, before society has a chance to figure out how best to use those tools in positive ways for many value systems. Suffering reducers may want to help mitigate the arms race for AI so that AI developers take fewer risks and have more time to plan for how to avert suffering that may result from the AI's computations. The AI-focused work of the Machine Intelligence Research Institute (MIRI) seems to be one important way to tackle this issue. I suggest some other, broader approaches, like advancing philosophical sophistication, cosmopolitan perspective, and social institutions for cooperation. As a general heuristic, it seems like advancing technology may be net […]
Reasons to Be Nice to Other Value Systems
Several arguments support the heuristic that we should help groups holding different value systems from our own when doing so is cheap, unless those groups prove uncooperative to our values. This is true even if we don't directly care at all about other groups' value systems. Exactly how nice to be depends on the particulars of the situation.
How Would Catastrophic Risks Affect Prospects for Compromise?
Global catastrophic risks – such as biotech disasters or nuclear war – would cause major damage in the short run, but their effects on the long-run trajectory that humanity takes are also significant. In particular, to the extent these disasters increase risks of war, they seem likely to precipitate AI arms races between nations and worsen prospects for compromise.
A Lower Bound on the Importance of Promoting Cooperation
This article suggests a lower-bound Fermi calculation for the cost-effectiveness of promoting cooperation. The purpose of this exercise is to make our thinking more concrete about how cooperation might reduce suffering and to make its potential more tangible.
Education Matters for Altruism
Learning is an extremely important activity for altruists. Learning can seem ineffective in the short run, but used properly, it can pay off more than most financial or single-domain-focused investments. It's important for young activists not to neglect learning in order to just "do more to help now."
Ethics
Formalizing Preference Utilitarianism in Physical World Models
Most ethical work is done at a low level of formality, which can lead to misunderstandings in ethical discussions. In this paper, we use Bayesian inference to introduce a formalization of preference utilitarianism in physical world models. Even though our formalization is not immediately applicable, it is a first step in providing ethical inquiry with a formal basis.
Measuring Happiness and Suffering
Is the balance of happiness versus suffering in the future net positive or net negative (in expectation)? Is the aggregate happiness and suffering in a group of individuals positive or negative? For such questions to have factual answers that are free from value judgements, happiness and suffering would need to be objectively measurable to a very high degree. However, such a degree of measurability is widely (although not universally) rejected.
What Is the Difference Between Weak Negative and Non-Negative Ethical Views?
Weak negative views in ethics, such as negative-leaning utilitarianism, are said to give more weight to reducing suffering than to promoting happiness. In contrast, non-negative views such as traditional utilitarianism are said to give equal weight to happiness and suffering. However, this way of distinguishing between the views rests on controversial assumptions about the measurability of happiness and suffering.
The Importance of Wild-Animal Suffering
The number of wild animals vastly exceeds that of animals on factory farms. Therefore, animal advocates should consider focusing their efforts on raising concern about the suffering that occurs in nature. In theory, engineering more humane ecological systems might be valuable. In practice, however, it seems more effective to promote the meme of caring about wild animals to other activists, academics and other sympathetic groups.
The Case for Suffering-Focused Ethics
“Suffering-focused ethics” is an umbrella term for moral views that place primary or particular importance on the prevention of suffering. Most views that fall into this category are pluralistic in that they hold that other things besides suffering reduction also matter morally. To illustrate the diversity within suffering-focused ethics as well as to present a convincing case for it, this article will introduce four separate motivating intuitions.
Tranquilism
What makes an experience valuable or disvaluable? In contrast to hedonism, which holds that pleasure is what is good and pain is what is bad, tranquilism is an “absence of desire” theory that counts pleasure as instrumentally valuable only. According to tranquilism, what matters is whether an experience is free from bothersome components. States of contentment such as flow or meditative tranquility also qualify.
Hedonistic vs. Preference Utilitarianism
It's a classic debate among utilitarians: Should we care about an organism's happiness and suffering (hedonic wellbeing), or should we ultimately value fulfilling what it wants, whatever that may be (preferences)? This article discusses various intuitions on both sides and explores a hybrid view that gives greater weight to the hedonic subsystems of brains than to other overriding subsystems.
Value Lexicality
An example of value lexicality is that an outcome with both torture and happiness is bad, regardless of the amount of happiness. Value lexicality is important partly because it can lead to suffering-focused ethics. Key topics that this essay explains include strong versus weak lexicality, value aggregation, views on large numbers and sequence arguments.
Descriptive Population Ethics and Its Relevance for Cause Prioritization
Two variables seem particularly important when trying to make informed choices about how to best shape the long-term future: One’s normative goods-to-bads ratio and one’s expected bads-to-goods ratio. This essay discusses how one could measure these variables and investigates associated challenges.
The 'Asymmetry' and Extinction Thought Experiments
Someone who wants to do good faces the question of how to prioritize preventing badness versus bringing about more individuals with good lives. A relevant idea is the ‘Asymmetry,’ which roughly says that it is bad to bring into existence individuals with bad lives but not good to add individuals with good lives. One objection to the Asymmetry comes from extinction thought experiments, in which the reader is asked to compare an outcome where humanity survives to one where it goes extinct. The objection holds that it is better if humanity survives, which is taken to be a counterargument to the Asymmetry. But, as Professor Meacham has pointed out, the objection lacks force against the Asymmetry because it drags […]
Consciousness
CLR’s practical priorities are largely independent from our views on consciousness, and the writings in this section do not necessarily reflect a CLR consensus.
Do Artificial Reinforcement-Learning Agents Matter Morally?
Artificial reinforcement learning (RL), a widely used training method in computer science, has striking parallels to reward and punishment learning in biological brains. Plausible theories of consciousness imply a non-zero probability that RL agents qualify as sentient and deserve our moral consideration, especially as AI research advances and RL agents become more sophisticated.
The Eliminativist Approach to Consciousness
This essay explains my version of an eliminativist approach to understanding consciousness. It suggests that we stop thinking in terms of "conscious" and "unconscious" and instead look at physical systems for what they are and what they can do. This perspective dissolves some biases in our usual perspective and shows us that the world is not composed of conscious minds moving through unconscious matter, but rather, the world is a unified whole, with some sub-processes being more fancy and self-reflective than others. I think eliminativism should be combined with more intuitive understandings of consciousness to ensure that its moral applications stay on the right track. Introduction "[Qualia] have seemed to be very significant properties to some theorists because they have […]
A Dialogue on Suffering Subroutines
This piece presents a hypothetical dialogue that explains why instrumental computational processes of a future superintelligence might evoke moral concern. Generally, agent-like components might emerge in many places, including the computing processes of a future civilization. Whether and how much these subroutines matter are questions for future generations to figure out, but it's good to keep an open mind to the possibility that our intuitions about what suffering is may change dramatically.
Flavors of Computation Are Flavors of Consciousness
If we don't understand why we're conscious, how come we're so sure that extremely simple minds are not? I propose to think of consciousness as intrinsic to computation, although different types of computation may have very different types of consciousness – some so alien that we can't imagine them. Since all physical processes are computations, this view amounts to a kind of panpsychism. How we conceptualize consciousness is always a sort of spiritual poetry, but I think this perspective better accounts for why we ourselves are conscious despite not being different in a discontinuous way from the rest of the universe. Introduction "don't hold strong opinions about things you don't understand" --Derek Hess Susan Blackmore believes the way we typically […]
© 2025 Center on Long-Term Risk