Charity Cost-Effectiveness in an Uncertain World
First written: 28 Oct. 2013; last update: 4 Dec. 2015
Summary
Evaluating the effectiveness of our actions, or even just whether they're positive or negative by our values, is very difficult. One approach is to focus on clear, quantifiable metrics and assume that the larger, indirect considerations just kind of work out. Another way to deal with uncertainty is to focus on actions that seem likely to have generally positive effects across many scenarios, and often this approach amounts to meta-level activities like encouraging positive-sum institutions, philosophical inquiry, and effective altruism in general. When we consider flow-through effects of our actions, the seemingly vast gaps in cost-effectiveness among charities are humbled to more modest differences, and we begin to find more worth in the diversity of activities that different people are pursuing. Those who have abnormal values may be more wary of a general "promote wisdom" approach to shaping the future, but it seems plausible that all value systems will ultimately benefit in expectation from a more cooperative and reflective future populace.
Contents
- Epigraphs
- Introductory dialogue
- Should we only use rigorous data?
- Alternate approach: Cause robustness
- Punting to the future
- Against charity fanaticism
- What if you have weird values?
- Isn't robustness just risk aversion?
- Why I think charities don't differ by many orders of magnitude
- Movement fashion
- Two ways that charity differs from business
- Nick Beckstead's talk
- Carl Shulman on flow-through effects
- Modeling unknown unknowns
- Acknowledgments
- Feedback
Epigraphs
To teach how to live without certainty, and yet without being paralyzed by hesitation, is perhaps the chief thing that philosophy, in our age, can still do for those who study it.--Bertrand Russell, History of Western Philosophy
If something sounds too good to be true, it probably is.--proverb
Introductory dialogue
In fall 2005, I was talking with a friend about utilitarianism:
Friend: "I see where you're coming from, but utilitarianism doesn't work."
Me_2005: "Why not?"
Friend: "You have no way of assessing all the effects of your actions. Everything you do has a million consequences."
Me_2005: "We do the best we can to estimate them."
Friend: "The error bars are infinite in both directions."
Me_2005: "Hmm, yes, the error bars are infinite, but we can assume the unknowns cancel out. For example, consider building safer seat belts. In the short term, we know it prevents injuries, which is good. There may be spill-over longer-term effects, but those could equally be good or bad, so they cancel out in the calculations. Hence, the expected value is still positive."
If I were to continue the conversation now, talking with my past self, here's how it might go:
Me_2013: "Ok, let's consider the example of seat-belt safety. One effect is to prevent injuries and tragic deaths, which is good. But another effect is to increase the human population, which means more meat consumption. Since people eat over a thousand chickens and an order of magnitude more fish in their lifetimes, this seems net bad on balance."
Me_2005: "Hmm, ok. Then maybe seat belts are net negative.... Those two factors you enumerated are pretty concrete, and everything else is fuzzy, so we can assume everything else cancels out. Hence the overall balance is bad."
Me_2013: "Well, there might be other good side effects of seat belts. Better car safety means more people driving, which means more inclination to build parking lots. And parking lots are one very clear way to prevent vast amounts of suffering by wildlife that would have otherwise lived short lives and endured painful deaths on the land had it not been paved. Maybe there are more insects whose painful deaths are spared by parking lots than there are additional chickens and fish killed by the drivers who remain alive."
Me_2005: "Ah, good point. Parking lots are quite beneficial, so on balance, it may be that seat belts are net positive."
Me_2013: "On the other hand, more driving of cars means more greenhouse-gas emissions, which means more climate change, which means more global political instability, which may worsen prospects for compromise in the far future. The amount at stake there could swamp everything else."
Me_2005: "Oh, yes, you're right. So then seat belts may be net bad on balance."
Me_2013: "On the other hand, fewer premature deaths may allow people to feel less threatened and more far-sighted in their actions...."
[...and so on]
Nick Bostrom offers more examples of this type in his 2014 talk, "Crucial considerations and wise philanthropy".
Should we only use rigorous data?
This idea of focusing on the effects you can see and ignoring those you can't, hoping they all kind of cancel out, is tempting. It's one way to feel like "I'm actually making a difference" by doing a given action, rather than being paralyzed by uncertainty, which can sometimes be depressing.
Some in the effective-altruism movement adopt this kind of approach. Basically, "any cause that's uncertain, far off, and speculative isn't 'rigorous,' so we should ignore it. Instead, we should focus on clearly proven interventions with hard scientific evidence behind them, like malaria prevention or veg outreach, because at least here we know we're doing something good rather than floating off into space."
I think there's some value to this perspective, especially insofar as digging deep into scientific details can teach you important lessons about the world that are transferable elsewhere. However, where it falls down as a philosophical stance is with this question: How do you know malaria prevention or veg outreach is good on balance? How can you know the sign of the activity (much less the effectiveness) without considering the longer-range and speculative implications? You could try to assume that everything you can't see "just cancels out," but as we saw in the introductory dialogue, this approach can yield arbitrarily flip-flopping conclusions depending on where the boundary of "known facts" meets the boundary of "everything else we assume away." Ultimately we have no choice but to look at the whole picture and grapple with it as best we can.
Alternate approach: Cause robustness
Another way to pick a cause to work on is to look for actions that have broadly positive effects across a wide range of scenarios. For example, in general:
- More cooperation, democracy, and effective institutions for positive-sum compromise are better.
- More philosophical reflectiveness and discourse are better.
- Meta-level activities to support the above (like the effective-altruism movement, advising people on career choice for making a difference, and so on) are probably good.
Note that I didn't include on this list technology, economic growth, and other trends generally seen as positive, because both negative-leaning and positive-leaning utilitarians have concerns about whether faster technology is net good or bad. Of course, particular kinds of technologies, like those that advance wisdom faster than raw power, are safer and often clearly beneficial.
Punting to the future
Focusing on the very robust projects often amounts to punting the hard questions to future generations who will be better equipped to solve them. In general, we have basically no other choice. There are known puzzles and unknown future discoveries in physics, neuroscience, anthropics, infinity, general epistemology, human (and nonhuman) values, and many other areas of life that are likely to radically transform our views on ethics and how we should act in the world. Taking object-level actions based only on what we know now is sort of like Socrates trying to determine whether the Higgs boson exists. Socrates would have been at a complete loss to solve this question directly, but one thing he could have done would have been to encourage more thinking on intellectual topics in general, with eventual payoff in the future.
Gaverick on Felicifia proposed this basic idea to "get smarter" before we try to address object-level problems. The caution I would add is that we don't just need knowledge per se but specifically ethically reflective knowledge that can slowly, carefully, and circumspectly decide how to move ahead. We don't want a quick knowledge explosion in which one party runs roughshod over most things that most organisms care about in a winner-takes-all fashion. Thus, even though artificial general intelligence (AGI) would ultimately be needed to figure out these questions that are beyond human grasp, we need to develop AGI wisely, and fostering better social conditions within which that AGI development can proceed more prudently can make sense in the short term.
One aspect of preparing the ground for future generations to do good things is making sure our values are transferred, at least in broad outline. The biggest danger of the "punt to the future" strategy is not that the future won't be intelligent (selection pressure makes it likely that it will be) but that it may not care about what we care about. While we don't want to lock the future into a moral view that would seem silly with greater understanding of the universe, we also don't want the future's values to float around arbitrarily. The space of possible values is huge, and the space of values that we might come to care about even upon further reflection is a tiny subset of the total. This is why our slogan shouldn't be just "more intelligence" but instead something like "more wisdom."
(I should mention in passing that it would probably be better from a suffering-focused perspective if humans don't develop AGI because of the extreme risks of future suffering that doing so entails. However, given that humanity will develop AGI whether I like it or not, my best prospects for making a difference seem to come from pushing it in more humane and positive-sum directions. I also recognize that other people care very much about the fruits that AGI could offer, and I give this some moral weight as well.)
Thinking meta doesn't always mean acting meta
It seems that the main way in which our actions make a difference is by how they prepare the ground for our successors to think more carefully about big questions in altruism and take good actions based on that thinking. However, this doesn't mean that the only way to affect the far future is to work directly on meta-level activities, like movement-building, career advising, or friendly AGI. Rather, all causes that we might work on have flow-through effects in other areas. For example, studying how to reduce wild-animal suffering in the near term can encourage people to recognize the importance of the issue more broadly, which might also make them think more cautiously before spreading wildlife to other planets or in simulations. Studying object-level crucial considerations in philosophy can motivate more people to follow suit, perhaps more effectively than by just preaching about the general importance of philosophical reflection without setting the example. In general, taking small, concrete steps alongside a bigger vision can potentially be more inspiring than just talking about the bigger vision, although exactly how to strike the balance between these two is up for debate.
An effective altruist in 1800
As one final example to drive home the point, imagine an effective altruist in the year 1800 trying to optimize his positive impact. He would not know most of modern economics, political science, game theory, physics, cosmology, biology, cognitive science, psychology, business, philosophy, probability theory, computation theory, or manifold other subjects that would have been crucial for him to consider. If he tried to place his bets on the most significant object-level issue that would be relevant centuries later, he'd almost certainly get it wrong. I doubt we would fare substantially better today at trying to guess a specific, concrete area of focus more than a few decades out. (Of course, if we can guess such an area with decent precision, we could achieve high leverage, as Nick Beckstead points out in the presentation discussed at the end of this essay.)
What this 1800s effective altruist might have guessed correctly would have been the importance of world peace, philosophical reflection, positive-sum social institutions, and wisdom. Promoting those in 1800 may have been close to the best thing this person could have done, and this suggests that these may remain among the best options for us today. We should of course be on the lookout for more specific, high-leverage options, but we ought also to remain humble about the extent of our abilities against the vast space of unknown unknowns: both errors in our current beliefs and entire domains of knowledge that we never even knew existed.
Nick Beckstead discusses a similar analogy in section 6.4.3 of On the Overwhelming Importance of Shaping the Far Future, giving the example of a person in the year 1500 who wanted to help humanity build telephones centuries later.
Against charity fanaticism
QALYs
Sometimes advocates for a cause become fixated on a particular metric or way of looking at the situation -- for example, a $/QALY figure applied to developing-world or animal-welfare interventions. This seductive number seems finally to give us an answer to the question of which causes are better than others. And the result is that, even after accounting for different quality of evidence between charities and the optimizer's curse, different charities can vary by many times, even orders of magnitude, along this metric.
Quantification in this fashion is certainly helpful and provides one important perspective from which to view the situation, but we shouldn't mistake it for being the end of the story. The world is complex, and there are many different angles from which to view a charity's activities. From different perspectives, different charities may come out ahead.
Moreover, long-term side effects can make a big difference even when optimizing a single metric. A simple example: Suppose the Pope were deciding whether to encourage an easing of the Catholic Church's prohibition on birth control. This would help prevent the spread of HIV in Africa and generally seems like a win for public health, i.e., more QALYs. But wait, birth control would mean fewer pregnancies, which means fewer people born, which presumably means fewer QALYs even considering the HIV-prevention effects. So, according to the QALY metric, allowing birth control would be bad. But then consider the impacts on farm animals: A smaller human population, especially among developed-world Catholics, means less meat consumption and fewer negative QALYs in factory farms, so the overall impact might once again be positive for QALYs. But wait, what about wild animals? Humans appropriate enormous amounts of land and biomass that would otherwise support vast numbers of small, suffering wild animals, so a bigger human population may mean less wild-animal suffering and hence more QALYs, and arguably this effect dominates. So we're back to opposing birth control. On the other hand, consider that better access to birth control might empower women, improve social stability, and generally lead to a more peaceful and cooperative future, which could improve the quality of life of 10^38 future people every century for billions of years. So once again birth control appears positive. And on and on....
As we saw in the introductory dialogue, we can't just stop at one particular cutoff point for the analysis. The flow-through effects of actions are not wholly unpredictable and may make a huge difference.
Direct AI work
When we stop fixating on particular side-effects of an intervention and try instead to see the whole picture, we realize that the charity landscape is less drastically imbalanced than it might seem. Everything affects everything else, and these side-effects have a way of dampening the claims to unique urgency of the seemingly astronomically important causes while potentially raising those of causes that naively seem less important. For example, insofar as a charity encourages cooperation, philosophical reflection, and meta-thinking about how to best reduce suffering in the future -- even if only by accident -- it has valuable flow-through effects, and it's unlikely these can be beaten by many orders of magnitude by something else.
In 2008, I talked with some people at the Singularity Institute for Artificial Intelligence (SIAI, now called MIRI) and asked what kinds of insanely important projects they were working on compared with other groups. What I heard sounded unimpressive to me: It was basically just more exploration in math, philosophy, cognitive science, and similar domains. I didn't find anything that seemed 10^30 times better (or even 10 times better) than, say, what other smart science-oriented philosophers were exploring in academia. At the time my conclusion was: "Well, maybe this work isn't so important." Now I have a different take; rather than SIAI's work being unimportant, I see the other work by smart academics and non-academic thinkers as being more important than I had realized. These intellectual insights that I had taken for granted don't happen for free. SIAI and other philosophers both contribute to humanity's general progress toward greater wisdom and provide more giant shoulders on which later generations can stand.
For the record, I do think MIRI's work is among the best being done right now and tentatively encourage donations to MIRI, but I also have the perspective to see that MIRI is not astronomically more valuable in a counterfactual sense than other causes that also contribute to the overall mission.
I would also point out that not all work in AI or cognitive science is necessarily net positive. I would encourage differential intellectual progress in which we focus on ethical and social questions at a faster pace than on the purely technical questions, because otherwise we risk having great power without appropriate structures to constrain its use.
Infinities
This meta-level perspective, in which we recognize the side effects of actions on the future, also defuses the most extreme fanaticism: Infinite payoffs. Some speculative scenarios in physics or other domains offer the prospect of making an infinite impact by your efforts. If so, doesn't working to affect those outcomes dominate everything else? The answer is "no," because in fact, everything we do has implications for those infinite scenarios. Building wisdom, promoting cooperation, and so on will all make a difference to infinite payoffs if they exist, and indeed, it's only by acquiring vastly greater wisdom that we even stand a chance of making the right choices with respect to those potentially infinite decisions. As with the more mundane charity-evaluation examples, fixating on a particular scenario seems potentially dominated by encouraging more broadly helpful social conditions that can allow for addressing a wide spectrum of possible scenarios (some of which we can't even imagine yet). Of course, working on a specific case can be one way to encourage exploration of broader cases, but the seeming direct value of focusing on one specific idea may be outweighed by the indirect value of encouraging deeper insight into the whole class of such ideas.
This is why even pure expected-utility maximizers with no bound on their utility function will still tend to act pretty normally: Even in this case, the best general strategy is probably to set the stage for others to more wisely address the problem, rather than unilaterally doing something crazy yourself. (See also "Empirical stabilizing assumptions" in "Infinite Ethics".)
Qualifications
While this discussion was meant to gesture somewhat in the direction of recognizing what Holden Karnofsky calls "broad market efficiency," it doesn't mean that all charities are equal. Indeed, many actions that we might take would have negative consequences, both in the short and long terms, and even among charities, there are likely some that cause more harm than good. Flow-through effects are notoriously complicated, so even a well-meaning activity could prove harmful at the end of the day. That said, just as I don't expect some charities to be astronomically better than others, so I don't expect many charities to be extremely negative either.
Flow-through effects also don't mean that we can't make some judgments about relative effectiveness. Probably studying ways to reduce suffering in the future is many times more important than studying dung beetles, to use an example from philosopher Nick Bostrom. But it's not clear that directly studying future suffering is many orders of magnitude more important. The tools and insights developed in one science tend to transfer to others. And in general, it's important to pursue a diversity of projects, to discover things you never knew you never knew. Given the choice between 1000 papers on future suffering and 1000 on dung beetles, versus 1001 papers on future suffering and 0 on dung beetles, I would choose the former.
What if you have weird values?
Improving future wisdom is good if you expect people in the future to carry on roughly in ways that you would approve of, i.e., doing things that generally accord with what you care about, with possible modifications based on insights they have that you don't. The situation is trickier for those, like negative utilitarians (NUs), whose values are strongly divergent from those of the majority and are likely to be so indefinitely. If most people with greater wisdom would spread life (and hence suffering) far and wide into the cosmos, then isn't making the future wiser bad by NU values?
It could be, but I think a wiser future is probably good even for NUs in expectation. For one thing, if people are more sophisticated, they may be more likely to feel empathy for your NU moral stance, realizing that you're another brain very similar to them who happens to feel that suffering is very bad, and as a result, they may care somewhat more about suffering as well, if not to the same degree as NUs do. Moreover, even if future people don't change their moral outlooks, they should at least recognize that strategically, win-win compromises are better for their values, so it seems that a wiser populace should be more likely to cooperate and grant concessions to the NU minority. By analogy, democracy is generally better than a random dictatorship in expectation for all members of the democracy, not just for the majority.
If you have weird values, then it's less likely that low-hanging fruits have been picked and that the charity market is as broadly efficient as for other people. That said, insofar as your impacts will ultimately be mediated through your effect on the future, even your weird values might, depending on the circumstances, be best advanced by similar sorts of wisdom- and peace-promoting projects as are important for the majority, though this conclusion will be more tenuous in your case.
Isn't robustness just risk aversion?
Working on robustly positive interventions sounds like a form of risk aversion. A skeptic might allege: "You want to make sure you don't cause harm, without considering the possible upside of riskier approaches." However, as I think is clear from the examples in this piece, cause robustness is not really about reducing risk but is our best hope of doing something with an expected value that's not approximately zero. On many concrete issues, a given action is about as likely to cause harm as benefit, and there are so many variables to consider that taking a step back to explore further is the best way to improve our prospects. In the long run, investment in altruistic knowledge and social institutions for addressing problems will often pay off more than trying to wager our resources on some concrete gamble that we see right now.
Of course, this isn't to say we should always do the safest possible action from the perspective of not causing harm relative to a status quo of inaction. (To minimize expected harm relative to the status quo, the best option would be to keep the status quo.) Sometimes you can go too meta. Sometimes there are instances where I feel we need to push forward with unconventional ideas and not wait for society as a whole to catch up -- e.g., by spreading concern for wild-animal suffering or considering possibilities of suffering subroutines. But we also should avoid doing something crazy because a high-variance expected-value calculation suggests it might have higher payoff in the short term. Advancing future cooperation and wisdom is not just a more secure way to do good but probably also has greater expected value.
Ultimately our choice of where to focus does come down to an expected-value calculation. If we were just trying to find a way to certainly make a positive impact, we could, for instance, visit a retirement home and play chess with the residents to keep them company. This is admirable and has very low risk of negative side effects, but it doesn't have maximal expected value (except perhaps in moderation as a way to improve your spiritual wellbeing). On the other hand, expending all your resources on a long-shot gamble to improve the far future based on a highly specific speculation of how events will play out does not have the highest expected value either. Off of the efficient frontier, more risk does not necessarily mean more expected return. An approach that broadly improves future outcomes will tend to have a reasonably high probability of making a reasonably significant impact, so its expected value will tend to be competitively large.
Why I think charities don't differ by many orders of magnitude
The discussion in this section became lengthy, so I moved it to a new essay.
Movement fashion
Groups often think of themselves as different and special. People like to feel as though they're discovering new things and pioneering a frontier that hasn't yet been explored. I've seen many cases where old ideas get recycled under a new, sexy label, even though people end up doing mostly the same things they did before. This trend resembles fashion. Sometimes this happens in the academic realm, like when old statistical methods are rebranded as "artificial intelligence," or when standard techniques from a field are reintroduced as "the hot new thing."
I think the effective-altruism (EA) movement has some of the properties of fashion. It consists of idealistic young people who think they've discovered new principles of how to improve the world. For example, from "A critique of effective altruism":
Effective altruists often express surprise that the idea of effective altruism only came about so recently. For instance, my student group recently hosted Elie Hassenfeld for a talk in which he made remarks to that effect, and I've heard other people working for EA organizations express the same sentiment.
Certainly there are some new ideas and methodologies in EA, but most of the movement's principles are very old:
- Altruism has been around since before humans existed (it's seen in other mammals, birds, etc.). More modern forms are at least centuries old. The idea of "making the best impact on the world" is what many idealists throughout history have wanted to achieve.
- Quantification and rational thinking are also ancient, but even in their modern forms, they have been widespread in economics, finance, engineering, etc. for many decades. The principles of how to be "effective" are standard for anyone in the business world and, frankly, for many in the nonprofit world, especially at the more elite charities.
- The philosophical roots of EA have been discussed for decades as well. The main aspects that are new tend to concern emerging technologies and far-future scenarios that were not as available to generations past.
- When we consider flow-through effects, we realize that our actions really aren't immensely more important than what other altruists are already doing. We don't have secret sauce that makes us astronomically more efficient.
I think it's helpful to learn more about lots of fields. The academic and nonprofit literature already contains many important writings about social movements, what works and doesn't for making an impact, how to do fundraising, how to manage an organization, etc., as well as basically any object-level cause you want to work on -- from animal welfare to international cooperation. Major foundations have smart people who have already thought hard about these issues. Even the man on the street has a lifetime of accumulated wisdom that you can learn from. When we think about how much knowledge there is in the world, and how little we can ever learn in our lifetimes, the conclusion is humbling. It's good to recognize our place in this much larger picture rather than assuming we have the answers (especially at such a young age for many of us).
One reason EAs may see themselves as special is that they learned through the EA movement a lot of powerful ideas that are in fact much older and more general, including concepts from economics, sociology, business, and philosophy. Carl Shulman discussed this phenomenon in "Don't Revere The Bearer Of Good Info" with reference to the writings of Eliezer Yudkowsky. Carl underscores the importance of stepping outside our own circle to see a much bigger picture of the world than what our community tends to talk about. As I've gotten older, I've been increasingly humbled by how much other people have already figured out, as well as by how hard it is to decide where you can make the biggest difference.
Two ways that charity differs from business
In many ways the effective-altruism movement can be seen as an extension of principles from the business world to the nonprofit sector: Quantification and metrics, focus on performance rather than overhead, emphasis on cost-effectiveness and return on investment, etc. For the most part this business mindset is positive, but there are at least two ways that it has the potential to lead charities astray -- both in the EA movement and elsewhere.
Overemphasis on visible metrics
In business, (pretty much) all that matters to shareholders is a company's financial performance, as measured in dollars. A company's stock price can capture a lot, including long-term projections for the industry in addition to short-term profits, but it also misses a lot -- including most of the company's externalities, unless they affect its bottom line, such as through taxes or government regulations.
Likewise, in altruism, if we introduce a "bottom line" mentality, we may over-optimize for this metric and ignore other important features of charity work. This might look like excessively focusing on QALYs/$ or animals-saved/$, ignoring the often more important flow-through effects (externalities) and implications for the far future that the work might entail. GiveWell has helped to discourage excessive focus on visible metrics, and most other EAs recognize this issue as well. However, I think naive metric optimization is an easy mindset to fall into when one first encounters EA. Optimization in engineering or finance is a much more precise process than optimization in charity or policy making, and sometimes the tools that perform extraordinarily well at the former fail at the latter compared with so-called "soft science" skills.
Treating other groups as competitors
In business, another company that performs a similar service to customers as yours is a competitor, and your goal is to steal as much market share as you can from that competitor. The only cost of marketing is the money that you spend on it, and if it draws away enough customers from the competitor, it's worth it.
Charities, both EA and otherwise, can adopt a similar mindset: They want more donors for their cause, without paying too much attention to which charities they're pulling donors away from. EAs are concerned with replaceability issues and recognize that pulling donors away from other issues might matter, but usually they feel that the charity they're promoting is vastly more effective than the one they're pulling people away from, so there's not much lost. It's possible this is true in some cases -- e.g., encouraging donors to fund HIV prevention instead of AIDS treatments, or recruiting donors who would have funded art galleries -- but in other cases it becomes much less clear. Especially in the realm of policy analysis and political advocacy, which arguably have some of the highest returns, it's more difficult to say that one charity is vastly more important than another, because the issues are complex and multifaceted.
So for altruists, the cost of marketing and fundraising is more than the time and money required to carry them out. Charity is not a competition.
Nick Beckstead's talk
After writing this piece, I came across a presentation that Nick Beckstead gave in July 2013. Nick explains several views similar to those expressed in this essay. For instance, his concluding slide (p. 39):
- There is an interesting question about where you want to be on the targeted vs. broad spectrum, and I think it is pretty unclear
- Lots of ordinary stuff done by people who aren't thinking about the far future at all may be valuable by far future standards
- Broad approaches (including general technological progress) look more robustly good, but some targeted approaches may lead to outsized returns if done properly
- There are many complicated questions, and putting it all together requires challenging big picture thinking. Studying targeted approaches stands out somewhat because it has the potential for outsized returns.
Nick proposes these as some robustly positive goals (p. 5):
- Coordination: How well-coordinated people are
- Capability: How capable individuals are at achieving their goals
- Motives: How well-motivated people are
- Information: To what extent people have access to information
I agree with Coordination and Motives. I'm less certain about Information, because it speeds up the development of risks along with the development of safety measures. The same is even more true for Capability. I would therefore favor differentially pushing on wisdom and compromise relatively more than on economic and technological growth. Nick acknowledges this to some degree on p. 30:
- Broad approaches are more likely to enhance bad stuff as well as good stuff
- Increasing people's general capabilities/information makes people more able to do things that would be dangerous, offsetting some of the benefits of increased capabilities/information
- Improving coordination or motives may do this to a lesser extent
Nick himself argues that faster economic growth is very likely positive because it improves cooperation and tolerance. He quotes Benjamin Friedman's The Moral Consequences of Economic Growth: "Economic growth -- meaning a rising standard of living for the clear majority of citizens -- more often than not fosters greater opportunity, tolerance of diversity, social mobility, commitment to fairness, and dedication to democracy." I agree with this, but the question is not whether economic growth has good effects of this type but whether these effects can outpace the risks that it also accelerates. I feel that this question remains unresolved.
On pp. 18-19, Nick deflects the argument that faster technology is net bad by pointing out that it also means faster countermeasures, along with some other considerations that I think are minor. This point is relevant, but I maintain that it's not clear what the net balance is. In my view it's too early to say that faster technology is net good, much less sufficiently good that we should push on it compared with other things.
On p. 24, Nick echoes my point about deferring to the future on some questions:
- In some ways, trying to help future people navigate specific challenges better is like trying to help people from a different country solve their specific challenges, and to do so without intimate knowledge of the situation, and without the ability to travel to their country or talk to anyone who has been there at all recently.
- Sometimes, only we can work on the problem (this is true for climate change and people who will be alive in 100 years)
- It is less clearly true with risks from future technology
Nick concludes with some important research questions about historical examples of what interventions were most important and what current opportunities and funding/talent gaps look like.
Carl Shulman on flow-through effects
Carl Shulman has a piece, "What proxies to use for flow-through effects?," that suggests many possible metrics that are relevant for impacts on the far future, though he explains that not all of them are always obviously positive in sign. From Carl's list, these are some that I believe are pretty robustly positive:
- Education to increase "wisdom"
- International peace and cooperation, including all of Carl's sub-bullets there
- Institutional quality metrics
The sign of most other metrics is less clear to me, including economic growth, population, education in general, and especially technology. Carl cites the World Values Survey as an important demonstration of the impact of per-capita wealth on rationality and cosmopolitan perspective.
Within the "wisdom" category, I would include scientometrics that Carl mentions for natural sciences applied to social sciences and philosophy. For example, number of publications, number of web pages discussing those topics, number and length of Wikipedia articles on those topics, etc. Of course, the value of some of these domains is in the pudding -- insofar as they improve democracy, transparency, global cooperation, and so on.
In the comments, Nick Beckstead suggested inequality as another candidate metric. I haven't studied the literature extensively, but I have heard arguments about how it erodes many other relevant metrics, including trust, cooperation, mental health, and interpersonal kindness. For example, according to Richard Wilkinson: "Where there is more equality we use more cooperative social strategies, but where there is more inequality, people feel they have to fend for themselves and competition for status becomes more important."
Modeling unknown unknowns
It might seem as though we're helpless to respond to not-yet-discovered crucial considerations for how we should act. Is our only option to keep researching to find more crucial considerations and to move society toward a more cooperative and wise state in the meanwhile? Not necessarily. Maybe another possibility is to model the unknown crucial considerations.
Consider the following narrative. Andrew is a young boy who sees people going to a blood-donation drive. He doesn't know what they're doing, but he sees them being stuck with needles. He concludes that he wouldn't like to participate in a blood drive. Let's call this his "initial evaluation" (IE) and represent it by a number to indicate whether it favors or opposes the action. In this case, Andrew assumes he would not like to participate in the blood drive, so let's say IE = -1, where the negative number means "oppose".
A few years later, Andrew learns that blood drives are intended to save lives, which is a good thing. This crucial consideration is not something he anticipated earlier, which makes it an "unknown unknown" discovery. Since it's Andrew's first unknown-unknown insight, let's call it UU1. This consideration favors giving blood, and it does so more strongly than Andrew's initial evaluation opposed it, so let's say UU1 = 3. Because IE + UU1 = -1 + 3 > 0, Andrew now gives blood at drives.
However, one year later, Andrew becomes a deep ecologist. He feels that humans are ruining the Earth, and that nature preservation deserves more weight than human lives. Giving blood allows a person in a rich country to live perhaps an additional few years, during which time the person will ride in cars, eat farmed food, use electricity, and so on. Andrew judges that these environmental impacts are sufficiently bad that they're not worth the benefit of saving the person's life. Let's say UU2 = -5, so that now IE + UU1 + UU2 = -1 + 3 + (-5) = -3 < 0, and Andrew stops giving blood.
After another few months, Andrew reads Peter Singer and realizes that individual animals also matter. Since human activities like driving and farming food injure and kill lots of wild animals, Andrew concludes that this additional insight further argues against blood donation. Say UU3 = -2.
However, not long after, Andrew learns about wild-animal suffering and realizes that animals suffer immensely even when they aren't being harmed by humans. Because human activity seems to have on the whole reduced wild-animal populations, Andrew concludes that it's better if more humans exist, and this outweighs the harm they cause to wild animals and to the planet. Say UU4 = 10. Now IE + UU1 + UU2 + UU3 + UU4 = -1 + 3 + (-5) + (-2) + 10 = 5, and Andrew donates blood once more.
Finally, Andrew realizes that donating blood takes time that he could otherwise spend on useful activities. This consideration is relevant but not dominating. UU5 = -1.
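To make the bookkeeping concrete, here is a minimal sketch of Andrew's running evaluation in Python (the variable names are my own illustrative choices, not anything canonical):

```python
# A toy model of Andrew's running evaluation, not anything rigorous.
import itertools

IE = -1                           # initial evaluation: oppose donating blood
UU_HISTORY = [3, -5, -2, 10, -1]  # UU1..UU5 in the order they were discovered

# Running total after each crucial consideration; Andrew donates iff total > 0.
for i, total in enumerate(itertools.accumulate(UU_HISTORY, initial=IE)):
    label = "IE" if i == 0 else f"after UU{i}"
    print(f"{label}: total = {total}, donate = {total > 0}")
```

Running this reproduces the flip-flopping in the story: Andrew donates after UU1, stops after UU2 and UU3, and resumes after UU4, with UU5 lowering the total to 4 but not changing the decision.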
What about future crucial considerations that Andrew hasn't yet discovered? Can he make any statements about them? One way to do so would be to model unknown unknowns (UUs) as being sampled from some probability distribution P: UUi ~ P for all i. The UUs observed so far are {3, -5, -2, 10, -1}. The sample mean is 1, and the standard error is 2.6. The standard error is big enough that Andrew can't have much confidence about future UUs, though the sample mean very weakly suggests that future UUs are more likely on average to be positive than negative.
If Andrew instead had 100 UU data points, the standard error would be much smaller, which would give more confidence. This illustrates one lesson for handling UUs: the more considerations you've already encountered, the more confident you can be that the distribution of remaining UUs has a mean of the same sign as your sample so far.
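As a rough sketch under the i.i.d.-sampling assumption stated above (again Python), here is the calculation behind those numbers:

```python
# Treat past UUs as samples from an unknown distribution P and summarize them.
import math
import statistics

uus = [3, -5, -2, 10, -1]
mean = statistics.mean(uus)                        # sample mean = 1
se = statistics.stdev(uus) / math.sqrt(len(uus))   # standard error ≈ 2.6
print(f"sample mean = {mean}, standard error = {se:.1f}")

# With n = 100 UUs of similar spread, the standard error would shrink by
# about sqrt(100 / 5) ≈ 4.5x, giving much more confidence in the sign.
```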
That we can anticipate something about UUs despite not knowing what they will be can be seen more clearly in a case where the current UUs are more lopsided. For example, suppose the action under consideration is "start fights with random people on the street". While this probably has a few considerations in its favor, almost all of the crucial considerations that one could think of argue against the idea, suggesting that most new UUs will point against it as well.
Modeling UUs in practice may be messier than what I've discussed here, because it's not clear how many UUs remain (although one could apply a prior probability distribution over the number remaining), nor is it clear that they all come from a fixed probability distribution. Perhaps future UUs tend to dominate past ones in size, leading to ever more unstable estimates; for example, if new UUs tend to change the sign of lots of previous considerations at once, then the latest UU would have a bigger and bigger magnitude as time went on, since it would need to "undo" more and more past UUs. It's also not clear how to partition insights into UU buckets. For example, the insight that donating blood helps wild animals could be stated simply as a single UU with magnitude 10, or it could be broken down as "donating blood helps wild vertebrates" (magnitude 2) and "donating blood helps wild invertebrates" (magnitude 8). Different ways of partitioning UUs would lead to a different estimated probability distribution, although the sign of the sample mean would always remain the same.
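A toy demonstration of that last point (my own illustration, using the numbers from the story): splitting the wild-animal UU of 10 into pieces of 2 and 8 changes the sample mean but not its sign, because the sum is unchanged.

```python
# Partitioning one UU into finer pieces changes the mean's size, not its sign.
coarse = [3, -5, -2, 10, -1]    # wild-animal insight kept as one UU of 10
fine = [3, -5, -2, 2, 8, -1]    # the same insight split into 2 + 8

for sample in (coarse, fine):
    mean = sum(sample) / len(sample)
    print(f"n = {len(sample)}, sum = {sum(sample)}, mean = {mean:.2f}")
# Both sums equal 5, so both sample means are positive, though they differ.
```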
There are problems with the approach I described. Magnus Vinding noted to me that
When it comes to UUs, a major problem is that pretty much our entire value system and worldview seem to be up for grabs, and, moreover, that the different UUs likely will be dependent in deep, complex ways that will make modelling of them very hard. Modelling the interrelations of the UUs in our sample and how they make each other change would also seem a necessary element to include in such an analysis.
Acknowledgments
My thoughts on these topics were influenced by many effective-altruist thinkers, including Holden Karnofsky, Jonah Sinick, and Nick Beckstead. See also Paul Christiano's "Beware brittle arguments."
Feedback
The discussion of this essay on Facebook's "Effective Altruists" forum includes a debate about whether flow-through effects are actually significant relative to first-order effects and how inevitable the future is likely to be.