First published: August 2016. Last updated: August 2019.
Suffering-focused ethics is an umbrella term for moral views that place primary or particular importance on the prevention of suffering. Most views that fall into this category are pluralistic in that they hold that other things besides reducing suffering also matter morally. To illustrate the diversity within suffering-focused ethics as well as to present a convincing case for it, this article will introduce four separate (though sometimes overlapping) motivating principles or intuitions.1 Not all of these intuitions may ring true or appeal to everyone, but each of them can ground concern for suffering as a moral priority. Rather than presenting a fully developed suffering-focused theory of ethics, our goal in this article is to argue that "suffering focus" should be a central desideratum for such a theory. In fact, we suggest remaining morally uncertain between plausible theories and taking moral cooperation into account in all our decisions.
I. Making people happy, not happy people
A common intuition is that creating happy beings is less morally pressing than making sure existing beings are well off. This intuition is captured by Jan Narveson’s (1973) statement, “We are in favor of making people happy, but neutral about making happy people.” Narveson’s principle points to a putative2 asymmetry between suffering and happiness in the context of (not) adding new beings to the world. Consider the following thought experiment:
Imagine two planets, one empty and one inhabited by 1,000 beings suffering a miserable existence. Flying to the empty planet, you could bring 1,000,000 beings into existence who will live happy lives. Flying to the inhabited planet instead, you could help the 1,000 miserable beings and give them the means to live happily. If there is time to do both, where would you go first? If there is only time to fly to one planet, which one should it be?
Even though one could bring about 1,000 times as many happy beings as there are existing unhappy ones, many people’s moral intuition would have us help the unhappy beings instead. To those holding this intuition, taking care of suffering appears to be of greater moral importance than creating new, happy beings.
By contrast, preventing miserable beings from being added to the world seems just as important as preserving existing happiness. And for most people it would be a no-brainer that it is better for 1,000 existing beings to go from happy to miserable than for those 1,000 beings to stay happy at the cost of 1,000,000 new, miserable beings being created. This suggests that we care equally about reducing suffering for existing and potential beings, whereas we prioritize the promotion of happiness in actual beings over happiness in their merely potential peers.
Narveson’s principle is found at the heart of preference-based axiologies where what matters is preference (dis)satisfaction. In Christoph Fehige’s words:
"We have obligations to make preferrers satisfied, but no obligations to make satisfied preferrers" (Fehige, "A Pareto principle for possible people", 1998, p. 518).
Creating large numbers of beings while ensuring that all their preferences or goals are satisfied – which could be achieved by making provisions for the newly created beings to have easily satisfiable preferences, or preferences that correspond very closely with the most likely state of the world – may strike us as a pointless endeavor. The claim that an extra preference in itself is of little value, so that “Maximizers of preference satisfaction should instead call themselves minimizers of preference frustration” (Fehige, ibid.), is the gist of Fehige's anti-frustrationism.
Intuitions such as “Making people happy rather than making happy people” are linked to the Epicurean view that non-existence does not constitute a deplorable state. Proponents of this view reason that the suffering and/or frustrated preferences of a being constitute a real, tangible problem for that being. By contrast, non-existence is free of moral predicaments for any evaluating agent, given that by definition no such agent exists. Why, then, might we be tempted to consider non-existence a problem? Non-existence may seem unsettling to us, because from the perspective of existing beings, no-longer-existing is a daunting prospect. Importantly however, death or no-longer-existing differs fundamentally from never having been born, in that it is typically preceded and accompanied by suffering and/or frustrated preferences.
Accepting this moral asymmetry between suffering and happiness, as it pertains to existence and non-existence, grounds a strong moral reason to concentrate on suffering and/or preference frustration as opposed to the promotion of happiness. It comes with an understanding of ethics as being about solving the world’s problems: We confront spacetime, look for wherever there is or will be a problem, i.e. a struggling being, and we solve it.
Views that incorporate this intuition: The intuition “Making people happy rather than making happy people” is directly incorporated in so-called person-affecting views, where affairs can only be morally bad if they are bad for someone, i.e. if there is a being to point to for whom something poses a problem. One such view is (a version of) anti-natalism (e.g. David Benatar, “Better Never to Have Been”, 2008). The same underlying intuition is also present in preference-based approaches such as Fehige’s anti-frustrationism, which we mentioned before. In addition, it probably inspired the “moral ledger” view laid out by Peter Singer in his Practical Ethics,3 as well as the “prior-existence” view he tentatively endorsed in the same book.4 Finally, egalitarianism and prioritarianism can incorporate the intuition by giving priority to suffering reduction as long as suffering is still around (or will foreseeably be around).5
II. Torture-level suffering cannot be counterbalanced
There is something particularly terrible about torture-level suffering. For many people, extreme suffering is so bad that no other experience can counterbalance it. We tend to shy away from truly imagining how horrible suffering can be, but morality, as the most serious business there is, needs to pay attention to everything. After episodes of extreme suffering, we may gradually forget just how bad it was in the moment. At times, circumstances can deteriorate to a point where we are willing to give up everything we care about. As Orwell pointed out in his book 1984: “[...] for everyone there is something unendurable – something that cannot be contemplated. Courage and cowardice are not involved. If you are falling from a height it is not cowardly to clutch at a rope. If you have come up from deep water it is not cowardly to fill your lungs with air. It is merely an instinct which cannot be destroyed. [...] They are a form of pressure that you cannot withstand, even if you wished to. You will do what is required of you."
Confronting torture-level suffering
Imagine you are taken out of your everyday life and presented with the following choice: 1) You die painlessly now. 2) You have to experience the worst possible torture for one week to be subsequently rewarded with forty years of bliss in a perfect experience machine, culminating in a painless death. Which option would you choose?
While people might be motivated to attempt to endure torture for loved ones or for the fulfillment of their dearest life goals, these confounders are eliminated in the thought experiment above, where the gains – paradise in the experience machine – also come with abandoning loved ones or life goals directed at the (non-virtual) world. In essence, we are asking whether torture-level suffering can – all else equal – be counterbalanced by other experiences in one's personal case. Of course, we imagine the virtual reality on offer to feel completely authentic to the user with no disturbing memories involved.
So when it comes to just comparing the best experiences to torture-level suffering in the personal case, would people make the deal? A refusal to accept this tradeoff suggests that as far as an individual's judgement of their experiences is concerned, torture-level suffering cannot – at least for some individuals – be counterbalanced by (much) larger amounts of happiness.
There are, however, people who would accept this bargain. This presents a challenge: How are we to extrapolate from intrapersonal tradeoffs, where a person is their own arbiter, to interpersonal tradeoffs, where we consider decisions that affect the welfare of other individuals? Accepting torture-level suffering in one’s personal case is not tantamount to granting that happiness can outweigh torture-level suffering in interpersonal tradeoffs generally. The question for someone who accepts torture-level suffering in exchange for subsequent happiness is: Why do they do it? Do the reasons also apply to setting tradeoffs for other people, or do they apply more narrowly, only to their specific circumstances?
When it comes to assessing the quality of other people's lives, there is always going to be an element of paternalism, regardless of the policy we choose: Even in one’s personal case – if we treat a person's life as a temporal series of autonomous "person moments" – many of the person moments during the week of torture would not consent. That is, the suffering person moments would (almost certainly) regret the decision made by the person moment that preceded them, and wish for it to be changed. Conversely, after a few years in the most sublime bliss of virtual reality, the newer, happy person moments might conclude that, in retrospect, the suffering was worth it after all. And if they were to be tortured again, the judgment might revert once more. With perspectives biased by temporal viewpoints, an objective answer as to whether or when to accept torture-level suffering cannot be obtained, and no matter which answer we choose, some people (or some person moments) will have grounds for objecting.
Nevertheless, what we can do is to analyze the different reasons for choosing one way or the other, and integrate them into a framework that aspires to a kind of impartiality or fairness, at least to as much a degree as is possible given the constraints. The personal case where someone voluntarily accepts extreme suffering for sufficiently much happiness in their own future is confounded by many factors: For instance, perhaps someone’s life goal includes a desire for novelty or excitement-seeking that ultimately motivates the tradeoff in question. These confounders call into question whether what makes suffering worthwhile under some circumstances is found purely, and to an equal degree for every subject of experience, in the "goodness" of happiness.
People generally hold the belief that more recent experiential moments have epistemic authority over their elapsed peers. For instance, we are willing to grant that an elderly person looking back at various life choices is in a good epistemic position to determine the merit of these choices. However, such epistemic authority cannot straightforwardly be transferred to the interpersonal level. While it does seem plausible that people choose rationally when they decide to undergo tradeoffs involving extreme suffering in their own lives (though there might also be a systematic error in gauging these sorts of tradeoffs), it strikes us as more controversial whether a measurement of the (dis)value of suffering and happiness respectively can be established from an outside, “impartial” perspective. The following thought experiment lends support to this intuition by highlighting an important difference between intrapersonal and interpersonal welfare tradeoffs:
Part 1: Imagine you are sitting on a towel at the beach. The weather is very hot, but you are sitting comfortably in the shade. After some time, you develop a strong craving for taking a refreshing swim in the ocean. Unfortunately, you forgot your sandals and will have to walk barefoot over hot sand. You decide that the pain from walking over the hot sand will be worth the pleasure of swimming in the ocean and go for it.
Part 2: Imagine you are controlling a supercomputer that can implement states of consciousness. You look at the control board in front of you and have the option to instantiate some painful “hot-sand moments” with 10% of the computer’s resources, and a lot of happy “ocean-swimming moments” with the other 90% of computing power. The two experiences will not be connected to each other, i.e. no memory is present in the ocean-moments of the sand-moments, and the sand-moments experience no anticipation of the refreshing swimming. Would you choose to run these experiences jointly?
Interestingly, it is much less obvious that the tradeoff in Part 2 is “worth it.” In fact, it is unclear what “worth it” would even mean in this context. Our own behavior when trading off pleasure against suffering seems to be driven by factors that do not simply correspond to the intensity of the suffering or pleasure in question: factors such as cravings for pleasure (which would counterfactually result in unpleasantness, e.g. if one were to remain in the shade and consequently became more and more hot, sweaty and bored), or preferences for adventure, for accomplishments and meaning in life, or simply for novel or exciting activities. When such external factors are removed and we are only looking at “raw experiences” rather than experiences embedded in the context of people’s lives as a whole, our intuitions regarding the appropriate exchange rates may change drastically, and many of the reasons (cravings, desire for novelty, meaning, adventure, etc.) why one might be tempted to accept (torture-level) suffering in the intrapersonal case simply disappear. In the impartial or altruistic context, when evaluating whether to instantiate certain (packages of) experiences or not, there seems to be an especially strong case for never bringing about torture-level suffering, where the person moment in question would be left uncompensated, wanting its suffering to be terminated at all costs.6
This suggests that even for people who might be inclined to accept the bargain described above for themselves, it remains an open question whether they should apply the same exchange rate for the impartial or altruistic context. The two modes of evaluation, intra- versus interpersonal, differ in interesting and potentially relevant respects.
Views that incorporate this intuition: The intuition that torture-level suffering cannot be counterbalanced is strong in many people. It is present in the widespread belief that minor pains cannot be aggregated to become worse than an instance of torture.7 Among consequentialist ethical systems, it is incorporated in threshold and consent-based negative utilitarianism and in (maximin) prioritarianism. It likely also contributes to absolute prohibitions against torture in deontological moralities. Finally, the intuition is part of philosophical works of fiction, such as Ursula K. Le Guin’s short story The Ones Who Walk Away from Omelas,8 Dostoevsky’s The Brothers Karamazov9 or Camus’s The Plague.10
III. Happiness as the absence of suffering
A widespread view, especially in non-Western traditions, is that “happiness” consists of the absence of suffering. According to this view, not only pleasure but also tranquillity or contentment are amongst the best experiences (Gloor, 2017). We pursue pleasures because without them, we (usually) develop cravings for these pleasures. And these cravings constitute a form of suffering or dissatisfaction, i.e. of consciously wanting the current experience to be different. If a state is entirely free of cravings, there is a sense in which it can be considered perfect. Subjectively at least, the immediate, internal evaluation in such a state concludes that nothing needs to be changed. Accordingly, pleasure then carries “mere” instrumental importance, because flooding the mind with pleasure is one of several ways to (temporarily) get rid of cravings. Quoting from Gloor’s (2017) paper on tranquilist axiology:
In the context of everyday life, there are almost always things that ever so slightly bother us. Uncomfortable pressure in the shoes, thirst, hunger, headaches, boredom, itches, non-effortless work, worries, longing for better times... When our brain is flooded with pleasure, we temporarily become unaware of all the negative ingredients of our stream of consciousness, and they thus cease to exist. Pleasure is the typical way in which our minds experience temporary freedom from suffering, which may contribute to the view that happiness is the symmetrical counterpart to suffering, and that pleasure, at the expense of all other possible states, is intrinsically important and worth bringing about. However, there are also (contingently rare) mental states devoid of anything bothersome that are not intensely pleasurable, examples being flow states or states of meditative tranquillity. Felt from the inside, meditative tranquillity is perfect in that it is untroubled by any aversive components, untroubled by any cravings for more pleasure. Likewise, a state of flow – as it may be experienced during stimulating work, when listening to music or when playing video games – where tasks are being done on “autopilot,” with time flying and a low sense of self awareness, also has this same crucial quality of being experienced as completely nonproblematic. Such states – let us call them states of contentment – may not commonly be described as “(intensely) pleasurable,” but following venerable traditions in Buddhism and Epicureanism, these states, too, deserve to be regarded as perfect.
Experiences that we in our everyday life think of as “neutral” may often contain a backdrop of dissatisfaction, hard to notice introspectively because we have become accustomed to it, but present nonetheless in that our evaluation of the experience is affected and becomes ever-so-slightly negative. Experiences that are truly free from dissatisfaction on the other hand are experiences we often think of as positive.
Critics may object that a world in which all pleasures were reduced to “mere” contentment – such as states of constant meditation, half-sleep or the playing of flow-inducing video games – would be unexciting and rather monotonous. All the heights of sensory and emotional pleasures, such as eating one’s favorite foods, successfully accomplishing a long-term project or being in love would be lost. But it is worth pointing out that the loss of these experiences appears tragic only when viewed from the outside, when we compare it to the world we would wish to inhabit in terms of our life's goals and the desired narrative for how we want our lives to go.
So the reflective part of our nature cares about things other than our moment-to-moment experiences. And to that part of us, a world of mere contentment would indeed be found lacking. However, what tranquilist axiology is modeled after is the impulsive part of our nature: We tend to live according to the short-sighted avoidance of dissatisfaction. We seek pleasure not because pleasure is in itself valuable, but in order to drown out boredom or pleasure cravings. For the impulsive part in us, a world filled with nothing but states of contentment would be a true paradise. If we zoomed in on all that is being experienced in such a world, there would by definition never be a moment of boredom or unfulfilled longing; never would there be a moment where someone consciously wants to have something changed about their experience. To its inhabitants, such a world would manifest itself as perfect. (Of course, the impulsive part of us may not be the only motivational system that matters morally, and morality should also be about evaluating the world according to the fulfillment of long-term, reflected preferences or life goals. It remains an open question how these two parts are to be integrated.)
To distill the intuition that it is morally unimportant to turn states of contentment into states of maximal pleasure, consider the following thought experiment:
Imagine a large temple filled with 1,000 Buddhist monks who are all absorbed in meditation; their minds are at rest in flawless contentment. Unfortunately, the whole temple will collapse in ten minutes and all the monks will be killed. You cannot do anything to prevent the temple from collapsing, but you have the option to press a button that will release a gaseous designer drug into the temple. The drug will reliably produce extreme heights of pleasure and euphoria with no side effects. Would you press the button?11
One reason to press the button is that it could cause the monks to believe that they have reached the long-sought state of enlightenment they have pursued their whole lives. But let us suppose that the monks in the temple are already maximally satisfied with their lives' achievements: The drug-induced euphoria will change the quality of their experience, but it would not change their beliefs about enlightenment or their meditative accomplishments. Should we press the button?
It is tempting to feel roughly indifferent here: Pressing the button seems like a nice thing to do, and assuming it produces no harm or panic, it may be hard to imagine how it would be something bad.12 At the same time, it does not seem particularly important or morally pressing to push the button. As far as the monks in the temple are concerned, it seems that they will be totally fine (for the next ten minutes anyway) without the drug. If there are any relevant opportunity costs whatsoever, such as e.g. an opportunity to reduce mild suffering somewhere else, would it ever be the morally preferable action to induce euphoria in the monks? If yes, why?
This thought experiment suggests that differences in “happiness levels” – i.e. changing an experience from “neutral” (or “not-maximally-positive,” depending on how one looks at meditation) to intensely pleasurable or “extremely positive” – are not of (strong) moral importance. Interestingly, no symmetrical point can be made for differences in pain levels: On the contrary, no one in their right mind would think that turning the mild suffering of 1,000 individuals into extreme agony makes at most little moral difference. To summarize, the former view – that making slightly happy experiences much happier is at most of little value – is at the very least plausible (and for many people, even highly intuitive). The latter view, on the other hand – that turning slightly painful states into very painful states is at most of little disvalue – is impossible to even take seriously.13 This contrast strongly indicates that there is an asymmetry between how we value increasing happiness versus reducing suffering. Accordingly, perhaps happiness, rather than being intrinsically valuable, should be seen as instrumentally valuable, or as contingently valuable depending on a person valuing (specific flavors of) happiness for themselves (or others) in their life goals.
Views that incorporate this intuition: The intuition that happiness consists of the absence of suffering is common in non-Western traditions, especially in Buddhism.14 It is also a central part of Epicureanism.15 Finally, it may constitute part of the explanation why a lot of people reject versions of consequentialism where the happiness of the many can outweigh the suffering of a few.
IV. Other values have diminishing returns
In addition to valuing suffering reduction, we might care about our personal well-being, about ourselves and others fulfilling our dearest dreams and life goals, about happiness and there being love and joy in the world, and about many other, similar objectives. Interestingly, these other values beside concern for suffering often seem to be bounded, i.e. they seem to quickly reach diminishing returns once we optimize for them successfully.
Imagine there is a civilization with ten billion maximally happy inhabitants who never experience any suffering. We have the option to introduce more lives and overall more happiness to the world by multiplying the number of people in that civilization by a factor of one billion. However, the way to bring about this population explosion will also lower the quality of life for all the beings, old and new. In the new civilization with ten billion billion people, each person will experience a lot of mild suffering, a decent amount of moderate suffering, and even some moments of strong suffering. People will also be happy a lot, such that most outsiders, as well as all the people themselves, would on the whole regard their existences as worth living. Should we choose to bring about the much larger, less happy civilization, or do we prefer the small(ish) but maximally happy one?
While some people argue for accepting the Repugnant Conclusion (Tännsjö, 2004), most people would probably prefer the smaller but happier civilization – at least under some circumstances. One explanation for this preference might lie in the first intuition discussed above, “Making people happy rather than making happy people.” However, this is unlikely to be what is going on for everyone who prefers the smaller civilization: If there were a way to double the size of the smaller population while keeping the quality of life perfect, many people would likely consider this option both positive and important. This suggests that some people do care (intrinsically) about adding more lives and/or happiness to the world. But considering that they would not choose the larger civilization in the Repugnant Conclusion thought experiment above, it also seems that they implicitly place diminishing returns on additional happiness, i.e. the larger an overall happy population already is, the less important it becomes to make it even larger.
By contrast, people are much less likely to place diminishing returns on reducing suffering – at least17 insofar as the disvalue of extreme suffering, or the suffering in lives that on the whole do not seem worth living, is concerned. Most people would say that no matter the size of a (finite) population of suffering beings, adding more suffering beings would always remain equally bad.
It should be noted that incorporating diminishing returns to things of positive value into a normative theory is difficult to do in ways that do not seem unsatisfyingly arbitrary. However, perhaps the need to fit all one’s moral intuitions into an overarching theory based solely on intuitively appealing axioms simply cannot be fulfilled. Some have pointed out that human moral intuitions are complex, which makes it non-obvious that one's normative views must follow directly from just a couple of simple and elegant principles.
Views that incorporate this intuition: Next to being a plausible (partial) explanation why people reject the Repugnant Conclusion, diminishing returns to happiness and other values might explain the appeal of average versions of consequentialism and the fact that most people do not consider it morally important to fill the entire universe with happy beings.18
V. Concluding discussion
We introduced four separate motivating intuitions for suffering-focused ethics. Endorsing one or several of these intuitions as a guiding principle can ground concern for suffering as the main focus of someone’s morality, while leaving room for other things to value. In addition to the intuitions discussed above, some people’s focus on reducing suffering may also derive from a suffering-focused disposition or temperament when they contemplate the value of lives and outcomes in practice: Quantifying suffering and happiness can be done in several different ways, and people’s judgments may differ even if they use the same methods for their assessment. When doing rough, impressionistic aggregations, some people – who we can call “suffering-focused” – tend to conclude that various lives and outcomes are overall bad, while others tend to conclude that they are overall good.19
It should be noted that pointing out strongly held intuitions or principles does not yet yield a comprehensively specified goal or moral system. For instance, some of the views discussed above may be tricky to formalize in satisfying ways.20
Most people have intuitions about many things, including selfish interests, altruism, our personal (moral?) self-image, game-theoretic considerations or cultural community norms. Deciding which of these we want to reflectively endorse and to what extent, and then bringing all of this together into a coherent goal that ranks not just situations we are culturally or evolutionarily familiar with, but all possible world states,21 is indeed challenging.
Given the difficulty of this task, it is important that we do not make it even more complicated by placing unreasonable formal demands on our values. Likewise, it is important that we do not hastily subscribe to some particular view without remaining open to reflection. Ultimately, choosing values comes down to finding the intuitions and guiding principles we care about the most – and if that includes a number of different intuitions, or even some form of extrapolation procedure to defer to better-informed future versions of ourselves – then the solution may not necessarily look simple. This is completely fine, and it allows those who agree with (some of) the intuitions behind suffering-focused ethics to care about other things in addition.
CLR’s endorsement of suffering-focused ethics is an attempt to incorporate suffering alleviation within a generally commonsensical framework about what is and is not wise to do in practice. Our activism should be strategically smart, non-violent and cooperative. This ensures that people with many different moral perspectives and different practical approaches can coordinate their activism, avoid zero-sum conflicts and instead focus on mutually supportive objectives.
Fehige, C. (1998). A Pareto principle for possible people. In Fehige, C. and Wessels, U. (Eds.), Preferences (pp. 508-543). Berlin: Walter de Gruyter.
Gloor, L. (2017). Tranquilism. Center on Long-Term Risk.
Greaves, H. (2017). Population axiology. Philosophy Compass, 12:e12442. https://doi.org/10.1111/phc3.12442
Holtug, N. (1999). Utility, priority and possible people. Utilitas, 11(01), 16-36.
Knutsson, S. & Brülde, B. (2016). Promoting goods or reducing bads. Manuscript in preparation.
Narveson, J. (1973). Moral problems of population. The Monist, 62-86.
Norcross, A. (2009). Two dogmas of deontology: Aggregation, rights, and the separateness of persons. Social Philosophy and Policy, 26(01), 76-95.
Parfit, D. (1984). Reasons and persons. Oxford University Press.
Singer, P. (1993). Practical ethics (2nd ed.). Cambridge: Cambridge University Press.
Strodach, G. K. (Ed.). (1963). The Philosophy of Epicurus: Letters, doctrines, and parallel passages from Lucretius. Northwestern University Press.
Tännsjö T. (2004). Why We Ought To Accept The Repugnant Conclusion. In: Tännsjö T., Ryberg J. (eds) The Repugnant Conclusion. Library Of Ethics And Applied Philosophy, 15. Dordrecht: Springer.
- We mean intuitions in the sense of anchoring points for grounding our moral judgments and constructing or evaluating moral theories. Intuitions in this sense are not just hunches or gut feelings, they are the central, indispensable building blocks of our moral views. (back)
- There are philosophers that have defended a symmetrical view, notably Jeff McMahan in “Asymmetries in the morality of causing people to exist” (2009). Others, such as Greaves (2017) have noted that justifying a procreative asymmetry via the claim that creating new happy lives is neutral but not morally good can lead to paradoxes that are difficult to circumvent. (back)
- Singer (1993, p. 128) presents the analogy that one could view preferences as debits on a moral ledger. It is good to clear existing debits, but there is little point to accepting a debit just to clear it afterwards. (back)
- Singer recently changed his view from (prior-existence) preference utilitarianism towards total hedonistic utilitarianism. (back)
- Egalitarianism does so if it is concerned with individuals’ suffering rather than, for example, the distribution of resources. In addition, an egalitarian or prioritarian view (applied to future generations) needs to be sufficiently strong to have this implication. That is, it needs to judge inequality or the well-being of the disadvantaged as sufficiently pressing to rank it above the endeavour to bring more individuals into existence. See e.g. Holtug (1999). (back)
- When literary characters undergo struggles and great suffering for causes they believe in, we may think that this is worth it for them. But this belief is usually motivated by a belief in fulfilling life goals or sacrificing oneself for “causes greater than oneself,” i.e. it is motivated more by a preference-driven perspective than by a perspective that (just) looks at the (dis)value of different experiences. (back)
- See Norcross (2009) for a discussion of this and for some interesting arguments why one might nevertheless choose to permit thoroughgoing aggregation. See also this list of additional related works. (back)
- See here for a summary. (back)
- After reading out a graphic description of a child being tortured by her parents, Ivan Karamazov makes the following point to his brother Alyosha:
"Why recognize that devilish good-and-evil, when it costs so much? I mean, the entire universe of knowledge is not worth the tears of that little child addressed to 'dear Father God'." (back)
- After witnessing the painful death of a child who died of the plague, the preacher, Father Paneloux, admits that he has no idea how to reconcile that suffering with God’s benevolence. The passage reads: “Thus he might easily have assured them [the congregation] that the child’s sufferings would be compensated for by an eternity of bliss awaiting him. But how could he give that assurance when, to tell the truth, he knew nothing about it? For who would dare to assert that eternal happiness can compensate for a single moment's human suffering?" (back)
- Alternatively, to avoid potentially distorting ideas about, for example, violating their autonomy: Would the accidental release (by environmental forces) of such a gaseous compound make the world better? (back)
- Except that one might find it morally doubtful to drug others without their explicit consent. (back)
- Critics may object that the thought experiment is confounded by status quo bias or the act-omission distinction: What if the monks were already experiencing heights of euphoria, would it not be morally bad to deprive them of this by “sending them back to meditation?” While this framing might make it more likely that people consider the lack of euphoria a bad thing, it does not fully take away the force of the intuition that meditation is also really good, and that the monks are still perfectly fine even if they are now no longer euphoric. (back)
- See the Four Noble Truths, especially the second one. (back)
- In his Letter to Menoeceus (as quoted by Gloor (2017)), Epicurus wrote:
“A steady view of these matters shows us how to refer all moral choice and aversion to bodily health and imperturbability of mind, these being the twin goals of happy living. It is on this account that we do everything we do – to achieve freedom from pain and freedom from fear. When once we come by this, the tumult in the soul is calmed and the human being does not have to go about looking for something that is lacking or to search for something additional with which to supplement the welfare of soul and body. Accordingly we have need of pleasure only when we feel pain because of the absence of pleasure, but whenever we do not feel pain we no longer stand in need of pleasure. And so we speak of pleasure as the starting point and the goal of the happy life because we realize that it is our primary native good, because every act of choice and aversion originates with it, and because we come back to it when we judge every good by using the pleasure feeling as our criterion.” (Strodach, 1963, p. 182)
The last sentence may seem contradictory to modern ears, but Epicurus goes on to explain his idiosyncratic use of the term “pleasure:”
“Thus when I say that pleasure is the goal of living I do not mean the pleasures of libertines or the pleasures inherent in positive enjoyment, as is supposed by certain persons who are ignorant of our doctrine or who are not in agreement with it or who interpret it perversely. I mean, on the contrary, the pleasure that consists in freedom from bodily pain and mental agitation.” (back)
- Inspired by Parfit (1984). (back)
- Formalizing non-diminishing returns for suffering is tricky. For instance, most people would probably want to avoid a value function that assigns negative value to adding almost-perfectly-happy beings (with tiny amounts of suffering) to a world in which there is already a lot of happiness. This suggests that minor suffering, at least when embedded in otherwise happy lives, does not aggregate to ever become negative on the whole (i.e. for the whole package of lives being added). However, what does seem plausible is that when we start out with a world where there is a huge population of maximally happy beings, adding a package of “additional happy beings plus one being that only exists for a few moments of suffering” makes that world worse overall. The same would probably go for adding the package “additional happy beings plus a being that is usually happy but experiences some moments of torture-level suffering.” (back)
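The tension in the preceding footnote can be made concrete with a toy value function (our own illustration, not a proposal from this article or the literature). Suppose a world's value depends on its total happiness $H$ and total suffering $S$ via

```latex
V(H, S) = f(H) - S, \qquad \text{with } f \text{ increasing, concave, and bounded above,}
```

so that happiness has diminishing returns while suffering aggregates linearly. Adding a package of lives contributing happiness $h > 0$ and minor suffering $s > 0$ changes the value by $f(H + h) - f(H) - s$: this is positive in a near-empty world, but negative once $H$ is large enough that $f$ approaches its bound, matching the footnote's verdict on adding lives to a world of maximally happy beings. The same feature, however, violates the footnote's first desideratum, since for sufficiently large $H$ even almost-perfectly-happy lives come out negative overall, which is one way of seeing why the formalization is tricky.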
- This perspective is perhaps best illustrated with the analogy of “morality as a painting,” introduced by Holden Karnofsky in this conversation:
“So one crazy analogy to how my morality might turn out to work, and the big point here is I don't know how my morality works, is we have a painting and the painting is very beautiful. There is some crap on the painting. Would I like the crap cleaned up? Yes, very much. That's like the suffering that's in the world today. Then there is making more of the painting: that's just a strange function. My utility with the size of the painting, it’s just like a strange and complicated function. It may go up in any kind of reasonable term that I can actually foresee, but flatten out, at some point. So to see the world as like a painting and my utility of it is that, I think that is somewhat of an analogy to how my morality may work, that it’s not like there is this linear multiplier and the multiplier is one thing or another thing. It’s: starting to talk about billions of future generations is just like going so far outside of where my morality has ever been stress-tested. I don’t know how it would respond. I actually suspect that it would flatten out the same way as with the painting.” (back)
- See Knutsson & Brülde (2016) as well as this post by Brian Tomasik. (back)
- For instance, different versions of person-affecting views may have trouble with the non-identity problem or with appeals to the symmetry of good and bad (“Why can’t things only be ‘good’ if they are good for someone?”); “prior-existence” views lead to strange conclusions when one compares the value of adding a very happy child to the world to adding a child that is only fairly happy (both appear equally “neutral”); and views that state a threshold below which suffering becomes impossible to compensate are faced with some general philosophical challenges for lexicality. (back)
- Some philosophers would question whether this degree of specificity or completeness is necessary or desirable. In reply to this, we can point out that specification seems impossible to avoid: If one leaves values vague or underdetermined, the moral agent in question still has to act somehow in any given situation. And inevitably, the way the agent ends up acting would then, implicitly, set the parameters of their goal (even if this resulted in the agent being indifferent about most of the ways the world could go). (back)