When is intent alignment sufficient or necessary to reduce AGI conflict?

In this post, we look at the conditions under which intent alignment is not sufficient, or not necessary, for interventions on AGI systems to reduce the risks of (unendorsed) conflict to be effective (the claims Intent Alignment isn't Sufficient and Intent Alignment isn't Necessary). We then conclude this sequence by listing what we currently think are relatively promising directions for technical research and intervention to reduce AGI conflict.

Intent alignment is not sufficient to prevent unendorsed conflict: In the previous post, we outlined […]

Read more

When would AGIs engage in conflict?

Here we will look at two of the claims introduced in the previous post: that AGIs might not avoid conflict that is costly by their lights (Capabilities aren't Sufficient), and that conflict that is costly by our lights might not be costly by the AGIs' (Conflict isn't Costly).

Explaining costly conflict: First we'll focus on conflict that is costly by the AGIs' lights. We'll define "costly conflict" as (ex post) inefficiency: there is an outcome that all of the agents involved in the interaction prefer to the one that […]
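To make the "(ex post) inefficiency" definition concrete, here is a minimal sketch (our illustration, not taken from the post) of checking whether an outcome of a two-agent interaction is Pareto-dominated, i.e., whether some other outcome is strictly preferred by every agent. The game and payoff numbers are hypothetical and chosen only to illustrate the definition.

```python
# Minimal sketch (illustrative only): "costly conflict" as ex post inefficiency.
# An outcome is ex post inefficient if some other outcome is strictly preferred
# by every agent involved, i.e., the outcome is Pareto-dominated.

# Hypothetical payoffs for a two-agent standoff: (row agent, column agent).
# "fight/fight" stands in for costly conflict; the numbers are made up.
payoffs = {
    ("fight", "fight"): (-2, -2),
    ("fight", "yield"): (3, 0),
    ("yield", "fight"): (0, 3),
    ("yield", "yield"): (1, 1),
}

def ex_post_inefficient(outcome, payoffs):
    """Return True if some other outcome gives every agent a strictly higher payoff."""
    u = payoffs[outcome]
    return any(
        all(alt_u[i] > u[i] for i in range(len(u)))
        for alt, alt_u in payoffs.items()
        if alt != outcome
    )

# ("fight", "fight") is dominated by ("yield", "yield"): both agents do better.
print(ex_post_inefficient(("fight", "fight"), payoffs))  # True
print(ex_post_inefficient(("yield", "yield"), payoffs))  # False
```

On this definition, the question the post goes on to explore is why capable agents would nonetheless end up at an outcome like ("fight", "fight") rather than at one of the outcomes all of them prefer.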

Read more

When does technical work to reduce AGI conflict make a difference?: Introduction

This is a pared-down version of a longer draft report. We went with a more concise version to get it out faster, so it ended up being more of an overview of definitions and concepts, and is thin on concrete examples and details. Hopefully subsequent work will help fill those gaps.

Sequence Summary: Some researchers are focused on reducing the risks of conflict between AGIs. In this sequence, we'll present several necessary conditions for technical work on AGI conflict reduction to be effective, and survey the circumstances under which these conditions hold. We'll also present […]

Read more