The optimal timing of spending on AGI safety work; why we should probably be spending more now

Tristan Cook & Guillaume Corlouer · October 24th, 2022

Summary: When should funders who want to increase the probability of AGI going well spend their money? We have created a tool to calculate the optimal spending schedule, and we tentatively conclude that funders should collectively be spending at least 5% of their capital each year on AI risk interventions, and in some cases up to 35%. This is likely higher than the AI risk community's current spending rate, which is at most 3%. In most cases, we find that the optimal spending schedule is between 5% and 15% better than the 'default' strategy of spending only the interest one accrues, and from 15% to 50% better than a naive projection of the community's spending […]
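As a rough illustration of the comparison the summary describes (not the authors' actual model), a toy simulation can contrast a constant spend-rate schedule with the 'spend only the interest' default. Every parameter here — the growth rate, the discount factor standing in for risk urgency, and the square-root returns to annual spending — is an assumption made up for this sketch:

```python
def total_impact(spend_rate, capital=100.0, growth=0.05,
                 discount=0.97, years=30):
    """Crude impact proxy: discounted sum of sqrt(annual spending).

    sqrt gives diminishing returns to spending within a year; the
    discount factor stands in for the risk arriving before the money
    is spent. All parameter values are illustrative assumptions.
    """
    impact, weight = 0.0, 1.0
    for _ in range(years):
        spend = capital * spend_rate
        impact += weight * spend ** 0.5
        capital = (capital - spend) * (1 + growth)  # remainder compounds
        weight *= discount
    return impact

# 'Default' schedule: spend only the interest, leaving capital flat.
default = total_impact(0.05 / 1.05)
# A constant 10%/year rate, inside the 5-35% range the summary discusses.
higher = total_impact(0.10)
```

Under these assumed parameters the higher constant rate beats the interest-only default, matching the summary's direction; with a low enough discount (i.e., little urgency), the ordering flips, which is why the optimal rate depends on one's timelines.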

Read more

When is intent alignment sufficient or necessary to reduce AGI conflict?

In this post, we look at conditions under which Intent Alignment isn't Sufficient or Intent Alignment isn't Necessary holds, i.e., when intent alignment is insufficient, or unnecessary, for interventions on AGI systems to effectively reduce the risks of (unendorsed) conflict. We then conclude this sequence by listing what we currently think are relatively promising directions for technical research and intervention to reduce AGI conflict. Intent alignment is not sufficient to prevent unendorsed conflict: In the previous post, we outlined […]

Read more

When would AGIs engage in conflict?

Here we will look at two of the claims introduced in the previous post: AGIs might not avoid conflict that is costly by their lights (Capabilities aren't Sufficient), and conflict that is costly by our lights might not be costly by the AGIs' (Conflict isn't Costly). Explaining costly conflict: First, we'll focus on conflict that is costly by the AGIs' lights. We'll define "costly conflict" as (ex post) inefficiency: there is an outcome that all of the agents involved in the interaction prefer to the one that […]
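The excerpt's definition of costly conflict as ex post inefficiency can be made concrete with a standard two-player payoff matrix (a hypothetical example, not from the post): the realized outcome is inefficient exactly when some other feasible outcome is strictly preferred by every agent.

```python
# Hypothetical payoffs illustrating ex post inefficiency; the
# outcome labels and numbers are made up for this sketch.
payoffs = {                      # outcome -> (agent 1 utility, agent 2 utility)
    ("war",   "war"):   (1, 1),  # realized conflict outcome
    ("peace", "peace"): (3, 3),  # a negotiated settlement
    ("war",   "peace"): (4, 0),
    ("peace", "war"):   (0, 4),
}

def is_inefficient(outcome):
    """True if every agent strictly prefers some other feasible outcome."""
    u = payoffs[outcome]
    return any(all(v > w for v, w in zip(alt, u))
               for o, alt in payoffs.items() if o != outcome)

print(is_inefficient(("war", "war")))    # True: ("peace", "peace") dominates
print(is_inefficient(("war", "peace")))  # False: agent 1 can't do better
```

On this toy matrix, (war, war) is costly conflict in the excerpt's sense, since both agents would prefer (peace, peace); an outcome where one agent already gets its maximum is not.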

Read more

When does technical work to reduce AGI conflict make a difference?: Introduction

This is a pared-down version of a longer draft report. We went with a more concise version to get it out faster, so it ended up being more of an overview of definitions and concepts, and is thin on concrete examples and details. Hopefully subsequent work will help fill those gaps. Sequence Summary: Some researchers are focused on reducing the risks of conflict between AGIs. In this sequence, we'll present several necessary conditions for technical work on AGI conflict reduction to be effective, and survey circumstances under which these conditions hold. We'll also present […]

Read more