6 March 2021

Collaborative game specification: arriving at common models in bargaining

Conflict is often an inefficient outcome to a bargaining problem. This is true in the sense that, for a given game-theoretic model of a strategic interaction, there is often some equilibrium in which all agents are better off than in the conflict outcome. But real-world agents may not make decisions according to game-theoretic models, and when they do, they may use different models. This makes it more difficult to guarantee that real-world agents will avoid bargaining failure than the observation that conflict is often inefficient suggests. In another post, I described the "prior selection problem", whereby agents' differing models of their situation can lead to bargaining failure. Moreover, techniques for addressing bargaining problems like coordination on […]

Read more
13 February 2021

Weak identifiability and its consequences in strategic settings

One way that agents might become involved in catastrophic conflict is if they have mistaken beliefs about one another. Maybe I think you are bluffing when you threaten to launch the nukes, but you are dead serious. So we should understand why agents might sometimes have such mistaken beliefs. In this post I'll discuss one obstacle to the formation of accurate beliefs about other agents, which has to do with identifiability. As with my post on equilibrium and prior selection problems, this is a theme that keeps cropping up in my thinking about AI cooperation and conflict, so I thought it might be helpful to have it written up. We say that a model is unidentifiable if there are several […]

Read more
18 January 2021

Birds, Brains, Planes, and AI: Against Appeals to the Complexity / Mysteriousness / Efficiency of the Brain

[Epistemic status: Strong opinions lightly held, this time with a cool graph.] I argue that an entire class of common arguments against short timelines is bogus, and provide weak evidence that anchoring to the human-brain-human-lifetime milestone is reasonable. In a sentence, my argument is that the complexity and mysteriousness and efficiency of the human brain (compared to artificial neural nets) is almost zero evidence that building TAI will be difficult, because evolution typically makes things complex and mysterious and efficient, even when there are simple, easily understood, inefficient designs that work almost as well (or even better!) for human purposes. In slogan form: If all we had to do to get TAI was make a simple neural net 10x the […]

Read more
30 December 2020

Against GDP as a metric for AI timelines and takeoff speeds

Or: Why AI Takeover Might Happen Before GDP Accelerates, and Other Thoughts On What Matters for Timelines and Takeoff Speeds I think world GDP (and economic growth more generally) is overrated as a metric for AI timelines and takeoff speeds. Here are some uses of GDP that I disagree with, or at least think should be accompanied by cautionary notes: Timelines: Ajeya Cotra thinks of transformative AI as “software which causes a tenfold acceleration in the rate of growth of the world economy (assuming that it is used everywhere that it would be economically profitable to use it).” I don’t mean to single her out in particular; this seems like the standard definition now. Takeoff Speeds: Paul Christiano argues for […]

Read more
16 December 2020

Incentivizing forecasting via social media

Summary Most people will probably never participate in existing forecasting platforms, which limits those platforms' effects on mainstream institutions and public discourse. Changes to the user interface and recommendation algorithms of social media platforms might incentivize forecasting and lead to its more widespread adoption. Broadly, we envision i) automatically suggesting questions of likely interest to the user—e.g., questions related to the user’s current post or trending topics—and ii) rewarding users with higher-than-average forecasting accuracy with increased visibility. In a best-case scenario, such forecasting-incentivizing features might have various positive consequences, such as increasing society’s shared sense of reality and the quality of public discourse, while reducing polarization and the spread of misinformation. Facebook’s Forecast could be seen as one […]
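A reward rule of this kind could, for example, score each user's resolved forecasts and boost the feed visibility of users who beat the platform average. Below is a minimal sketch, assuming a Brier-score accuracy metric and a linear, capped visibility boost — both illustrative choices on my part, not details from the post:

```python
def brier_score(forecasts):
    """Mean squared error between probabilistic forecasts and outcomes.

    `forecasts` is a list of (probability, outcome) pairs, where outcome
    is 1 if the event happened and 0 otherwise. Lower is better.
    """
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

def visibility_boosts(users, max_boost=2.0):
    """Map each user to a feed-visibility multiplier.

    Users whose Brier score beats the platform average get a boost
    proportional to how far they beat it; everyone else keeps the
    baseline of 1.0. The linear rule and the cap are illustrative.
    """
    scores = {name: brier_score(fs) for name, fs in users.items()}
    avg = sum(scores.values()) / len(scores)
    boosts = {}
    for name, score in scores.items():
        if score < avg:  # better than average (lower Brier score)
            boosts[name] = min(max_boost, 1.0 + (avg - score) / avg)
        else:
            boosts[name] = 1.0
    return boosts

users = {
    "alice": [(0.9, 1), (0.2, 0), (0.8, 1)],  # well-calibrated forecasts
    "bob":   [(0.5, 1), (0.5, 0), (0.5, 1)],  # uninformative forecasts
}
print(visibility_boosts(users))  # alice gets a boost; bob stays at 1.0
```

Any real deployment would also need to handle unresolved questions, small sample sizes, and incentives to cherry-pick easy questions, which a raw average like this ignores.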

Read more
5 December 2020

Commitment ability in multipolar AI scenarios

Abstract The ability to make credible commitments is a key factor in many bargaining situations ranging from trade to international conflict. This post builds a taxonomy of the commitment mechanisms that transformative AI (TAI) systems could use in future multipolar scenarios, describes various issues they have in practice, and draws some tentative conclusions about the landscape of commitments we might expect in the future. Introduction A better understanding of the commitments that future AI systems can make is helpful for predicting and influencing the dynamics of multipolar scenarios. The option to credibly bind oneself to certain actions or strategies fundamentally changes the game theory behind bargaining, cooperation, and conflict. Credible commitments can work to stabilize positive-sum agreements, and to increase […]

Read more
21 November 2020

Persuasion Tools: AI takeover without takeoff or agency?

[epistemic status: speculation] I'm envisioning that in the future there will also be systems where you can input any conclusion that you want to argue (including moral conclusions) and the target audience, and the system will give you the most convincing arguments for it. At that point people won't be able to participate in any online (or offline for that matter) discussions without risking their object-level values being hijacked. —Wei Dai What if most people already live in that world? A world in which taking arguments at face value is not a capacity-enhancing tool, but a security vulnerability? Without trusted filters, would they not dismiss highfalutin arguments out of hand, and focus on whether the person making the argument seems […]

Read more
16 November 2020

How Roodman's GWP model translates to TAI timelines

How does David Roodman’s world GDP model translate to TAI timelines? Now, before I go any further, let me be the first to say that I don’t think we should use this model to predict TAI. This model takes a very broad outside view and is thus inferior to models like Ajeya Cotra’s, which make use of more relevant information. (However, it is still useful for rebutting claims that TAI is unprecedented, inconsistent with historical trends, low-prior, etc.) Nevertheless, out of curiosity I thought I’d calculate what the model implies for TAI timelines. Here is the projection made by Roodman’s model. The red line is real historic GWP data; the splay of grey shades that continues it is the splay […]

Read more
22 October 2020

The date of AI Takeover is not the day the AI takes over

Instead, it’s the point of no return—the day we AI risk reducers lose the ability to significantly reduce AI risk. This might happen years before classic milestones like “World GWP doubles in four years” and “Superhuman AGI is deployed.” The rest of this post explains, justifies, and expands on this obvious but underappreciated idea. (Toby Ord appreciates it; see quote below). I found myself explaining it repeatedly, so I wrote this post as a reference. AI timelines often come up in career planning conversations. Insofar as AI timelines are short, career plans which take a long time to pay off are a bad idea, because by the time you reap the benefits of the plans it may already be too […]

Read more
7 July 2020

Reducing long-term risks from malevolent actors

Summary Dictators who exhibited highly narcissistic, psychopathic, or sadistic traits were involved in some of the greatest catastrophes in human history. Malevolent individuals in positions of power could negatively affect humanity’s long-term trajectory by, for example, exacerbating international conflict or other broad risk factors. Malevolent humans with access to advanced technology—such as whole brain emulation or other forms of transformative AI—could cause serious existential risks and suffering risks. We therefore consider interventions to reduce the expected influence of malevolent humans on the long-term future. The development of manipulation-proof measures of malevolence seems valuable, since they could be used to screen for malevolent humans in high-impact settings, such as heads of government or CEOs. We also explore possible future technologies that […]

Read more
22 February 2019

Risk factors for s-risks

Traditional disaster risk prevention has a concept of risk factors. These factors are not risks in and of themselves, but they increase either the probability or the magnitude of a risk. For instance, inadequate governance structures do not cause a specific disaster, but if a disaster strikes it may impede an effective response, thus increasing the damage. Rather than considering individual scenarios of how s-risks could occur, which tends to be highly speculative, this post instead looks at risk factors – i.e. factors that would make s-risks more likely or more severe.

Read more
3 July 2018

Challenges to implementing surrogate goals

Surrogate goals might be one of the most promising approaches to reduce (the disvalue resulting from) threats. The idea is to add to one’s current goals a surrogate goal that one did not initially care about, hoping that any potential threats will target this surrogate goal rather than what one initially cared about. In this post, I will outline two key obstacles to a successful implementation of surrogate goals.

Read more
29 March 2018

A framework for thinking about AI timescales

To steer the development of powerful AI in beneficial directions, we need an accurate understanding of how the transition to a world with powerful AI systems will unfold. A key question is how long such a transition (or “takeoff”) will take.

Read more
1 March 2018

Commenting on MSR, Part 2: Cooperation heuristics

Published on the CLR blog, where researchers are free to explore their own ideas on how humanity can best reduce suffering. (more) Summary This post was originally written for internal discussions only; it is half-baked and unpolished. The post assumes familiarity with the ideas discussed in Caspar Oesterheld’s paper Multiverse-wide cooperation via coordinated decision-making. I wrote a short introduction to multiverse-wide cooperation in an earlier post (but I still recommend reading parts of Caspar’s original paper, or this more advanced introduction, because several of the points that follow below build on topics not covered in my introduction). With that out of the way: In this post, I will comment on what I think might be interesting aspects of multiverse-wide cooperation […]

Read more
20 February 2018

Using surrogate goals to deflect threats

Agents that threaten to harm other agents, either in an attempt at extortion or as part of an escalating conflict, are an important source of agential s-risks. To avoid worst-case outcomes resulting from the execution of such threats, I suggest that agents add a “meaningless” surrogate goal to their utility function.

Read more
14 November 2017

Self-improvement races

Just as human factions may race toward AI and thus risk misalignment, AIs may race toward superior abilities by improving themselves in risky ways.

Read more
2 November 2017

Commenting on MSR, Part 1: Multiverse-wide cooperation in a nutshell

Published on the CLR blog, where researchers are free to explore their own ideas on how humanity can best reduce suffering. (more) This is a post I wrote about Caspar Oesterheld’s long paper Multiverse-wide cooperation via coordinated decision-making. Because I have found the idea tricky to explain – which unfortunately makes it difficult to get feedback from others on whether the thinking behind it makes sense – I decided to write a shorter summary. While I am hoping that my text can serve as a standalone piece, for additional introductory content I also recommend reading the beginning of Caspar’s paper, or watching the short video introduction here (requires basic knowledge of the “CDT, EDT or something else” debate in decision […]

Read more
21 September 2017

S-risk FAQ

In the essay Reducing Risks of Astronomical Suffering: A Neglected Priority, s-risks (also called suffering risks or risks of astronomical suffering) are defined as “events that would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far”.

Read more
18 September 2017

Focus areas of worst-case AI safety

Efforts to shape advanced artificial intelligence (AI) may be among the most promising altruistic endeavours. If the transition to advanced AI goes wrong, the worst outcomes may involve not only the end of human civilization, but also astronomical amounts of suffering – a so-called s-risk.

Read more
10 August 2017

A reply to Thomas Metzinger’s BAAN thought experiment

Published on the CLR blog, where researchers are free to explore their own ideas on how humanity can best reduce suffering. (more) This is a reply to Metzinger’s essay on Benevolent Artificial Anti-natalism (BAAN), which appeared on EDGE.org (7 August 2017). Metzinger invites us to consider a hypothetical scenario where smarter-than-human artificial intelligence (AI) is built with the goal of assisting us with ethical deliberation. Being superior to us in its understanding of how our own minds function, the envisioned AI could come to a deeper understanding of our values than we may be able to arrive at ourselves. Metzinger has us envision that this artificial super-ethicist comes to conclude that biological existence – at least in its current form – is […]

Read more
21 July 2017

Uncertainty smooths out differences in impact

Suppose you investigated two interventions A and B and came up with estimates for how much impact A and B will have. Your best guess is that A will spare a billion sentient beings from suffering, while B “only” spares a thousand beings. Now, should you actually believe that A is many orders of magnitude more effective than B?

Read more
17 July 2017

Arguments for and against moral advocacy

This post analyses key strategic questions on moral advocacy, such as: What does moral advocacy look like in practice? Which values should we spread, and how? How effective is moral advocacy compared to other interventions such as directly influencing new technologies? What are the most important arguments for and against focusing on moral advocacy?

Read more
30 June 2017

Strategic implications of AI scenarios

Efforts to mitigate the risks of advanced artificial intelligence may be a top priority for effective altruists. If this is true, what are the best means to shape AI? Should we write math-heavy papers on open technical questions, or opt for broader, non-technical interventions like values spreading?

Read more
26 June 2017

Tool use and intelligence: A conversation

This post is a discussion between Lukas Gloor and Tobias Baumann on the meaning of tool use and intelligence, which is relevant to our thinking about the future of (artificial) intelligence and the likelihood of AI scenarios.

Read more
20 June 2017

Training neural networks to detect suffering

Imagine a data set of images labeled “suffering” or “no suffering”. For instance, suppose the “suffering” category contains documentation of war atrocities or factory farms, and the “no suffering” category contains innocuous images – say, a library. We could then use a neural network or another machine learning algorithm to learn to detect suffering based on that data.
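The setup described here is standard supervised binary classification. As a minimal sketch of the training loop, here is a logistic-regression classifier trained by gradient descent on hand-made two-dimensional feature vectors standing in for images — a real system would run a deep network on pixel data, and everything below (features, data, names) is purely illustrative:

```python
import math

def train_classifier(data, lr=0.5, epochs=200):
    """Logistic regression by gradient descent on (features, label) pairs.

    `data` is a list of (feature_vector, label) pairs, with label 1 for
    "suffering" and 0 for "no suffering". The 2-d feature vectors are
    toy stand-ins for image features.
    """
    dim = len(data[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted P(suffering)
            err = p - y                     # gradient of the log loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(model, x):
    """Return the model's predicted probability of "suffering"."""
    w, b = model
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: first feature high for "suffering" examples, low otherwise.
data = [([0.9, 0.1], 1), ([0.8, 0.3], 1), ([0.1, 0.9], 0), ([0.2, 0.7], 0)]
model = train_classifier(data)
print(predict(model, [0.85, 0.2]))  # close to 1
print(predict(model, [0.15, 0.8]))  # close to 0
```

The hard part, which the toy example hides, is of course the labeling itself: deciding what counts as an image of suffering is far less clear-cut than the two-class setup suggests.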

Read more
19 June 2017

Launching the FRI blog

We were moved by the many good reasons to make conversations public. At the same time, we felt the content we wanted to publish differed from the articles on our main site. Hence, we're happy to announce the launch of FRI’s new blog.

Read more
21 November 2016

Backup Utility Functions: A Fail-Safe AI Technique

Setting up the goal systems of advanced AIs in a way that results in benevolent behavior is expected to be difficult. We should account for the possibility that the goal systems of AIs fail to implement our values as originally intended. In this paper, we propose the idea of backup utility functions: Secondary utility functions that are used in case the primary ones “fail”.
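As a toy sketch of the mechanism (the failure detector and all names below are illustrative assumptions, not the paper's formalism), an agent could evaluate outcomes with its primary utility function but fall back to the backup whenever the primary crashes or its output fails a sanity check:

```python
import math

def make_backup_wrapped_utility(primary, backup, sane):
    """Combine a primary utility function with a backup.

    `sane(outcome, value)` is a failure detector: it should return False
    when the primary's output looks corrupted (e.g. non-finite, or far
    outside its design range). The detector used below is illustrative.
    """
    def utility(outcome):
        try:
            value = primary(outcome)
        except Exception:
            return backup(outcome)  # primary crashed: fall back
        if not sane(outcome, value):
            return backup(outcome)  # primary looks corrupted: fall back
        return value
    return utility

# Toy example: the primary divides by a weight and so can crash or
# blow up; the backup is a crude, conservative stand-in.
primary = lambda o: o["score"] / o["weight"]
backup = lambda o: min(o["score"], 10.0)
sane = lambda o, v: math.isfinite(v) and abs(v) <= 100.0

u = make_backup_wrapped_utility(primary, backup, sane)
print(u({"score": 8.0, "weight": 2.0}))   # primary works: 4.0
print(u({"score": 8.0, "weight": 0.0}))   # primary crashes: backup gives 8.0
print(u({"score": 8.0, "weight": 1e-9}))  # primary returns 8e9: fails sanity check, backup gives 8.0
```

The hard part, naturally, is specifying a failure detector that triggers exactly when the primary goal system has genuinely misfired, rather than merely when it returns a surprising value.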

Read more
14 August 2016

Identifying Plausible Paths to Impact and their Strategic Implications

FRI’s research seeks to identify the best intervention(s) for suffering reducers to work on. Rather than continuing our research indefinitely, we will eventually have to focus our efforts on an intervention directly targeted at improving the world. This report outlines plausible candidates for FRI’s “path to impact” and distills some advice on how current movement building efforts can best prepare for them.

Read more
7 June 2016

Our Mission

This is a snapshot of the Center on Long-Term Risk’s (formerly Foundational Research Institute) previous "Our Mission" page. The Foundational Research Institute (FRI) conducts research on how to best reduce the suffering of sentient beings in the long-term future. We publish essays and academic articles, make grants to support research on our priorities, and advise individuals and policymakers. Our focus is on exploring effective, robust and cooperative strategies to avoid risks of dystopian futures and working toward a future guided by careful ethical reflection. Our scope ranges from foundational questions about ethics, consciousness and game theory to policy implications for global cooperation or AI safety. Reflectiveness, values and technology The term “dystopian futures” elicits associations of cruel leadership and totalitarian […]

Read more