Commitment games with conditional information revelation


    Anthony DiGiovanni, Center on Long-Term Risk
    Jesse Clifton, Center on Long-Term Risk



The conditional commitment abilities of mutually transparent computer agents have been studied in previous work on commitment games and program equilibrium. This literature has shown how these abilities can help resolve Prisoner’s Dilemmas and other failures of cooperation in complete information settings. But inefficiencies due to private information have been neglected thus far in this literature, despite the fact that these problems are pervasive and might also be addressed by greater mutual transparency. In this work, we introduce a framework for commitment games with a new kind of conditional commitment device, which agents can use to conditionally reveal private information. We prove a folk theorem for this setting that provides sufficient conditions for ex post efficiency, and thus represents a model of ideal cooperation between agents without a third-party mediator. Connecting our framework with the literature on strategic information revelation, we explore cases where conditional revelation can be used to achieve full cooperation while unconditional revelation cannot. Finally, extending previous work on program equilibrium, we develop an implementation of conditional information revelation. We show that this implementation forms program \epsilon-Bayesian Nash equilibria corresponding to the Bayesian Nash equilibria of these commitment games.

Cooperative AI, program equilibrium, smart contracts

ACM Reference Format:
Anthony DiGiovanni and Jesse Clifton. 2022. Commitment games with conditional information revelation. In Proceedings of the 4th Games, Agents, and Incentives Workshop (GAIW 2022), held as part of the Workshops at the 20th International Conference on Autonomous Agents and Multiagent Systems, Auckland, New Zealand, May 2022, IFAAMAS, 14 pages.

1 Introduction

What are the upper limits on the ability of rational, self-interested agents to cooperate? As autonomous systems become increasingly responsible for important decisions, including in interactions with other agents, the study of “Cooperative AI” [2] will potentially help ensure these decisions result in cooperation. It is well-known that game-theoretically rational behavior (which may describe the decision-making of advanced computer agents better than that of humans) can result in imperfect cooperation, in the sense of inefficient outcomes. Famous examples include the Prisoner’s Dilemma and the Myerson-Satterthwaite impossibility of efficient bargaining under incomplete information [20]. Fearon [4] explores “rationalist” explanations for war (i.e., situations in which war occurs in equilibrium); these include Prisoner’s Dilemma-style inability to credibly commit to peaceful alternatives to war, as well as incentives to misrepresent private information (e.g., military strength). Because private information is so ubiquitous in real strategic interactions, resolving these sources of inefficiency is a fundamental open problem. Such inefficiencies will also be increasingly observed in machine learning, as agents are trained in complex multi-agent environments featuring private information, such as negotiation. For example, Lewis et al. [16] found that an agent trained with reinforcement learning on negotiations under incomplete information failed to reach agreement with humans more frequently than a human-imitative model did.

But greater ability to make commitments and share private information can open up more efficient equilibria. Computer systems could be much better at making their internal workings legible to other agents, and at making sophisticated conditional commitments. More mutually beneficial outcomes could also be facilitated by new technologies like smart contracts [30]. This makes the game theory of interactions between agents with these abilities important for the understanding of Cooperative AI, in particular for developing an ideal standard of multi-agent decision making with future technologies. An extreme example of the power of greater transparency and commitment ability is Tennenholtz’s [29] “program equilibrium” solution to the one-shot Prisoner’s Dilemma. The players in Tennenholtz’s “program game” version of the Prisoner’s Dilemma submit computer programs to play on their behalf, which can condition their outputs on each other’s source code. A pair of programs with the source code \texttt{``If counterpart's source code == my source code: Cooperate; Else: Defect''} then forms an equilibrium of mutual cooperation.
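Tennenholtz's construction can be sketched in a few lines of code. The sketch below is our own, with illustrative payoff numbers and helper names (e.g. `clique_bot`, `play`) that are not from the literature; a "program" is modeled as a behavior paired with its source string.

```python
# A minimal sketch of a Tennenholtz-style program equilibrium in the one-shot
# Prisoner's Dilemma. A "program" here is a Python function that may read the
# counterpart's source string. Payoff numbers are illustrative assumptions.

CLIQUE_SOURCE = "cooperate iff counterpart source equals this source"

def clique_bot(own_source: str, counterpart_source: str) -> str:
    # The equilibrium program: cooperate only on an exact source match.
    return "C" if counterpart_source == own_source else "D"

def defect_bot(own_source: str, counterpart_source: str) -> str:
    # A deviating program that ignores the counterpart and always defects.
    return "D"

# Row player's payoff in a standard Prisoner's Dilemma (T=4 > R=3 > P=1 > S=0).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 4, ("D", "D"): 1}

def play(prog1, src1, prog2, src2):
    a1, a2 = prog1(src1, src2), prog2(src2, src1)
    return PAYOFF[(a1, a2)], PAYOFF[(a2, a1)]

# Mutual submission of the clique program yields cooperation...
assert play(clique_bot, CLIQUE_SOURCE, clique_bot, CLIQUE_SOURCE) == (3, 3)
# ...while any program with different source is met with defection,
# so unilateral deviation cannot pay.
assert play(defect_bot, "defect", clique_bot, CLIQUE_SOURCE) == (1, 1)
```

As in the text, the equilibrium is supported by exact syntactic comparison: any deviating source triggers defection, which is also the weakness (lack of robustness) discussed in Section 2.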

In this spirit, we are interested in exploring the kinds of cooperation that can be achieved by agents who are capable of extreme mutual transparency and credible commitment. We can think of this as giving an upper bound on the ability of advanced artificially intelligent agents, or humans equipped with advanced technology for commitment and transparency, to achieve efficient outcomes. While such abilities are inaccessible to current systems, identifying sufficient conditions for cooperation under private information provides directions for future research and development, in order to avoid failures of cooperation. These are our main contributions:

  1. We develop a new class of games in which players can condition both their commitments and revelation of private information on their counterparts’ commitments and decisions to reveal private information. We present a folk theorem for these games: The set of equilibrium payoffs equals the set of feasible and interim individually rational payoffs, notably including all ex post efficient payoffs. The equilibria are conceptually straightforward: For a given ex post payoff profile, players reveal their private information and play according to an action profile attaining that payoff profile; if anyone deviates, the others revert to a punishment strategy (without revealing private information to the deviator). The technical challenge is to avoid circularity in these conditional decisions. Our result builds on Forges’ [5] folk theorem for Bayesian games without conditional information revelation, in which equilibrium payoffs must also be incentive compatible. This expansion of the set of equilibrium payoffs is important because, in several settings, such as that of the classic Myerson-Satterthwaite theorem [20], ex post efficiency (or optimality according to some social welfare function) and incentive compatibility are mutually exclusive.
  2. Given findings that the mere ability to unconditionally reveal private information often leads players to reveal that information [7, 18, 19], one might suspect that conditional revelation is unnecessary. It is not. We present examples where our assumptions allow for ex post efficiency but the ability to unconditionally reveal private information does not.
  3. In these commitment games, the conditional commitment and revelation devices are abstract objects. The devices in Forges’ [5] and our folk theorems avoid circularity by conditioning on the particular identities of the other players’ devices, but this precludes robust cooperation with other devices that would output the same decisions. Using computer programs as conditional commitment and revelation devices, we give a specific implementation of \epsilon-Bayesian Nash equilibria corresponding to the equilibria of our commitment game. This approach extends Oesterheld’s [21] “robust program equilibria.” We solve the additional problems of ensuring that the programs terminate (1) with more than two players, and (2) in circumstances where cooperating with other players requires knowing their private information. Ours is the first study of program equilibrium [29] under private information.

2 Related Work

Commitment games and program equilibrium. We build on commitment games, introduced by Kalai et al. [11] and generalized to Bayesian games (without verifiable revelation) by Forges [5]. In a commitment game, players submit commitment devices that can choose actions conditional on other players’ devices. This leads to folk theorems: Players can choose commitment devices that conditionally commit to playing a target action (e.g., cooperating in a Prisoner’s Dilemma) and to punishing counterparts who do not play accordingly (e.g., defecting in a Prisoner’s Dilemma if counterparts’ devices do not cooperate). A specific kind of commitment game is one played between computer agents who can condition their behavior on each other’s source code. This is the focus of the literature on program equilibrium [1, 15, 21, 22, 25, 29]. Peters and Szentes [24] critique the program equilibrium framework as insufficiently robust to new contracts, because the programs in, e.g., Kalai et al.’s [11] folk theorem only cooperate with the exact programs used in the equilibrium profile. Like ours, the commitment devices of Peters and Szentes [24] can reveal their types and punish those that do not also reveal. However, their devices reveal unconditionally and thus leave the punishing player exploitable, restricting the equilibrium payoffs to a smaller set than that of Forges [5] or ours.

Our folk theorem builds directly on Forges [5]. In Forges’ setting, players lack the ability to reveal private information. Thus the equilibrium payoffs must be incentive compatible. We instead allow (conditional) verification of private information, which lets us drop Forges’ incentive compatibility constraint on equilibrium payoffs. Our program equilibrium implementation extends Oesterheld [21] ’s robust program equilibrium to allow for conditional information revelation.

Strategic information revelation. In games of strategic information revelation, players have the ability to verifiably reveal some or all of their private information. The question then becomes: How much private information should players reveal (if any), and how should other players update their beliefs based on players’ refusal to reveal some information? A foundational result in this literature is that of full unraveling: Under a range of conditions, when players can verifiably reveal information, they will act as if all information has been revealed [7, 18, 19]. This means the mere possibility of verifiable revelation is often enough to avoid informational inefficiencies. However, there are cases where unraveling fails to hold, and informational inefficiencies persist even when verifiable revelation is possible. This can be due to uncertainty about a player’s ability to verifiably reveal [3, 26] or to revelation being costly [8, 10]. But revelation of private information can fail even without such uncertainty or costs [14, 17]. We will review several such examples in Section 5, show how the persistence of uncertainty in these settings can lead to welfare losses, and show how this can be remedied with the commitment technologies of our framework (but not weaker ones, like those of Forges [5]).

3 Preliminaries: Games of Incomplete Information and Inefficiency

3.1 Definitions

Let G be a Bayesian game with n players. Each player i has a space of types T_i, giving the joint type space T = \displaystyle\times_{i=1}^n T_i. At the start of the game, players' types are sampled by Nature according to the common prior q. Each player knows only their own type. Player i's strategy is a choice of action a_i \in \mathcal{A}_i for each type in T_i. Let u_i(\boldsymbol{t}, \boldsymbol{a}) denote player i's expected payoff in this game when the players have types \boldsymbol{t} = (t_1, \dots, t_n) and follow an action profile \boldsymbol{a} = (a_1, \dots, a_n). A Bayesian Nash equilibrium is an action profile \boldsymbol{a} in which every player and type plays a best response with respect to the prior over other players' types: For all players i and all types t_i, a_i(t_i) \in \displaystyle\textrm{arg\,max}_{a_i' \in \mathcal{A}_i}\mathbb{E}_{\boldsymbol{t}_{-i} \sim q(\cdot | t_i)}u_i(\boldsymbol{t}, (a_i', \boldsymbol{a}_{-i}(\boldsymbol{t}_{-i}))). An \epsilon-Bayesian Nash equilibrium is similar: Each player and type expects to gain at most \epsilon (instead of 0) by deviating from \boldsymbol{a}.
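In small finite games, this equilibrium condition can be checked mechanically by enumerating unilateral deviations for every player and type. The toy game, payoffs, and function names below are our own illustrative assumptions, not from the text:

```python
# Brute-force check of a (epsilon-)Bayesian Nash equilibrium in a toy
# two-player game: player 1 has a single public type; player 2 is "weak" or
# "strong" with equal prior probability. Payoffs are illustrative assumptions.

TYPES_2 = ["weak", "strong"]
PRIOR = {"weak": 0.5, "strong": 0.5}
ACTIONS_1, ACTIONS_2 = ["A", "B"], ["X", "Y"]

def u1(t2, a1, a2):
    return 1.0 if a1 == "A" else 0.0          # A is dominant for player 1

def u2(t2, a1, a2):
    good = {"weak": "X", "strong": "Y"}[t2]   # type-dependent dominant action
    return 1.0 if a2 == good else 0.0

def is_bayes_nash(a1, a2, eps=0.0):
    """a1: player 1's action; a2: dict mapping player 2's type to an action.
    Mirrors the definition: no player/type gains more than eps by deviating."""
    # Player 1 best-responds in expectation over player 2's types.
    ev = lambda act: sum(PRIOR[t] * u1(t, act, a2[t]) for t in TYPES_2)
    if max(ev(b) for b in ACTIONS_1) > ev(a1) + eps:
        return False
    # Each type of player 2 best-responds given player 1's action.
    for t in TYPES_2:
        if max(u2(t, a1, b) for b in ACTIONS_2) > u2(t, a1, a2[t]) + eps:
            return False
    return True

assert is_bayes_nash("A", {"weak": "X", "strong": "Y"})       # exact BNE
assert not is_bayes_nash("B", {"weak": "X", "strong": "Y"})   # player 1 deviates
assert is_bayes_nash("B", {"weak": "X", "strong": "Y"}, eps=1.0)  # eps-BNE
```

The `eps` parameter implements the \epsilon-Bayesian Nash relaxation: a profile passes if every deviation gains at most \epsilon.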

We assume players can correlate their actions by conditioning on a trustworthy randomization device \mathcal{C}. For any correlated strategy \boldsymbol{\mu} (a distribution over action profiles), let u_i(\boldsymbol{t},\boldsymbol{\mu}) = \mathbb{E}_{\boldsymbol{a} \sim \boldsymbol{\mu}} u_i(\boldsymbol{t},\boldsymbol{a}). When it is helpful, we will write \boldsymbol{\mu}(\cdot | \boldsymbol{t}) to clarify the subset of the type profile on which the correlated strategy is conditioned. Let (a_j, \boldsymbol{\mu}_{-j}) denote a correlated strategy whose jth entry is degenerate at a_j, and the actions of players other than j are sampled from \boldsymbol{\mu}_{-j} independently of a_j. Then, the following definitions will be key to our discussion:

DEFINITION 1. A payoff vector \mathbf{x}, as a function of type profiles, is feasible if there exists a correlated strategy \boldsymbol{\mu}(\cdot|\boldsymbol{t}) such that, for all players j and types t_j \in T_j, x_j(t_j) = \mathbb{E}_{\boldsymbol{t}_{-j} \sim q(\cdot | t_j)}u_j(\boldsymbol{t}, \boldsymbol{\mu}).

DEFINITION 2. A payoff \mathbf{x} is interim individually rational (INTIR) if, for each player j, there exists a correlated strategy \boldsymbol{\tau}_{-j}(\cdot | \boldsymbol{t}_{-j}) used by the other players such that, for all t_j \in T_j, x_j(t_j) \geq \max_{a_j \in \mathcal{A}_j}\mathbb{E}_{\boldsymbol{t}_{-j} \sim q(\cdot|t_j)}u_j(\boldsymbol{t}, (a_j,\boldsymbol{\tau}_{-j}(\cdot | \boldsymbol{t}_{-j}))).

The minimax strategy \boldsymbol{\tau}_{-j} is used by the other players to punish player j. The threat of such punishments will support the equilibria of our folk theorem. Players only have sufficient information to use this correlated strategy if they reveal their types to each other. Moreover, the punishment can only work in general if they do not reveal their types to player j, because the definition of INTIR requires the deviator to be uncertain about \boldsymbol{t}_{-j}. Since the inequalities hold for all t_j \in T_j, the players do not need to know player j's type to punish them.

DEFINITION 3. A feasible payoff \mathbf{x} induced by \boldsymbol{\mu} is incentive compatible (IC) if, for each player j and each pair of types t_j, s_j \in T_j, x_j(t_j) \geq \mathbb{E}_{\boldsymbol{t}_{-j} \sim q(\cdot | t_j)}u_j((t_j,\boldsymbol{t}_{-j}), \boldsymbol{\mu}(\cdot | s_j, \boldsymbol{t}_{-j})).

Incentive compatibility means that each player prefers a given correlated strategy to be played according to their type, as opposed to that of another type.

DEFINITION 4. Given a type profile \boldsymbol{t}, a payoff \mathbf{x} is ex post efficient (hereafter, "efficient") if there does not exist \tilde{\boldsymbol{\mu}} such that u_i(\boldsymbol{t}, \tilde{\boldsymbol{\mu}}) \geq x_i(t_i) for all i and u_{i'}(\boldsymbol{t}, \tilde{\boldsymbol{\mu}}) > x_{i'}(t_{i'}) for some i'.

We will also consider games with strategic information revelation, i.e., Bayesian games where, immediately after learning their types, players are able to reveal their private information as follows. Each player i simultaneously chooses \Theta_i from some revelation action set \mathcal{R}(t_i), which is a subset of \mathcal{T}(t_i) = \{\Theta_i \subseteq T_i \ | \ t_i \in \Theta_i\}. Then, all players observe each \Theta_i, thus learning that player i's type is in \Theta_i. Revelation is verifiable in the sense that a player's choice of \Theta_i must contain their true type, i.e., they cannot falsely "reveal" a different type. We will place our results on conditional type revelation in the context of the literature on unraveling:

DEFINITION 5. Let \{\Theta_i\}_{i=1}^n be the profile of revelation actions (as functions of types) in a Bayesian Nash equilibrium \sigma of a game with strategic information revelation. Then \sigma has full unraveling if \Theta_i(t_i) = \{t_i\} for all i, or partial unraveling if \Theta_i(t_i) is a strict subset of T_i for at least one i.

3.2 Inefficiency: Running example

Uncertainty about others' private information, and a lack of ability or incentive to reveal that information, can lead to inefficient outcomes in Bayesian Nash equilibrium (or an appropriate refinement thereof). Here is a running example we will use to illustrate how informational problems can be overcome under our assumptions, but not under the weaker assumption of unconditional revelation ability.

Example 3.1 (War under incomplete information, adapted from Slantchev and Tarar [27] ). Two countries i=1,2 are on the verge of war over some territory. Country 1 offers a split of the territory giving fractions s and 1-s to countries 1 and 2, respectively. If country 2 rejects this offer, they go to war. Each player wins with some probability (detailed below), and each pays a cost of fighting c_i > 0. The winner receives a payoff of 1, and the loser gets 0.

The countries' military strength determines the probability that country 2 wins the war, denoted p(\theta). Country 1 doesn't know whether the other's army is weak (with type \theta^W) or strong (\theta^S), while country 1's strength is common knowledge. Further, country 2 has a weak point, which country 1 believes is equally likely to be in one of two locations v \in \{1, 2\}. Thus country 2's type is given by t_2 = \{\theta, v\}. Country 1 can make a sneak attack on \hat{v}\in \{1,2\}, independent of whether they go to war. Country 1 would gain z from attacking \hat{v} = v, costing c_{A,2} for country 2. But incorrectly attacking \hat{v} \neq v would cost c_{A,1} > z for country 1, so country 1 would not risk an attack given a prior of \frac{1}{2} on each of the locations. If country 2 reveals its full type by allowing inspectors from country 1 to assess its military strength \theta, country 1 will also learn v.

If country 1 has a sufficiently low prior that country 2 is strong, then war occurs in the unique perfect Bayesian equilibrium when country 2 is strong. Moreover, this can happen even if the countries can fully reveal their private information to one another. In other words, the unraveling of private information does not occur, because player 2 will be made worse off if they allow player 1 to learn about their weak point.

Before looking at what is achievable with different technologies for information revelation, we need to formally introduce our framework for commitment games with conditional information revelation. In the next section, we describe these games and present our folk theorem.

4 Commitment Games with Conditional Information Revelation

4.1 Setup

Players are faced with a "base game" G, a Bayesian game with strategic information revelation as defined in Section 3.1. In our framework, a commitment game is a higher-level Bayesian game in which the type distribution is the same as that of G, and actions are devices that define mappings from other players' devices to actions in G (conditional on one's type). We assume \{\{t_i\}, T_i\} \subseteq \mathcal{R}(t_i) for all players i and types t_i, i.e., players are at least able to reveal their exact types or not reveal any new information. They additionally have access to devices that can condition (i) their actions in G and (ii) the revelation of their private information on other players' devices. Upon learning their type t_i, player i chooses a commitment device d_i from an abstract space of devices D_i. These devices are mappings from the player's type to a response function and a type revelation function. As in Kalai et al. [11] and Forges [5], we will define these functions so as to allow players to condition their decisions on each other's decisions without circularity.

Let C be the domain of the randomization device \mathcal{C}. The response function is r^i_{d_i(t_i)}: \times_{j \neq i} D_j \times C \to \mathcal{A}_i. (This notation, adopted from Forges [5], distinguishes the player's action-determining function from the action itself.) Given the other players' devices d_{-i} and a signal c given by the realized value of the random variable \mathcal{C}, player i's action in G after the revelation phase is r^i_{d_i(t_i)}(d_{-i},c). Conditioning the response on \mathcal{C} permits players to commit to (correlated) distributions over actions. Second, we introduce type revelation functions y^i_{d_i(t_i)}: \times_{j \neq i} D_j \to \{0, 1\}^{n-1}, which are not in the framework of Forges [5]. The jth entry of y^i_{d_i(t_i)} indicates whether player i reveals their type to player j, i.e., player j learns \Theta_i = \{t_i\} if this value is 1 or \Theta_i = T_i if it is 0. (We can restrict attention to cases where either all or no information is revealed, as our folk theorem shows that such a revelation action set is sufficient to enforce each equilibrium payoff profile.) Thus, each player i can condition their action r^i_{d_i(t_i)}(d_{-i},c) on the others' private information revealed to them via (y^j_{d_j(t_j)}(d_{-j}))_{j\neq i}. Further, they can choose whether to reveal their type to another player, via y^i_{d_i(t_i)}(d_{-i}), based on that player's device. Thus players can decide not to reveal private information to players whose devices are not in a desired device profile, and instead punish them.

Then, the commitment game G(\mathcal{D}) is the one-shot Bayesian game in which each player i's strategy is a device d_i \in D_i, as a function of their type. After devices are simultaneously and independently submitted (potentially as a draw from a mixed strategy over devices), the signal c is drawn from the randomization device \mathcal{C}, and players play the induced action profile (r^i_{d_i(t_i)}(d_{-i},c))_{i=1}^n in G. Thus the ex post payoff of player i in G(\mathcal{D}) from a device profile (d_i)_{i=1}^n is u_i(\boldsymbol{t}, (r^i_{d_i(t_i)}(d_{-i},c))_{i=1}^n).

4.2 Folk theorem

Our folk theorem consists of two results: First, any feasible and INTIR payoff can be achieved in equilibrium (Theorem 1). As a special case of interest, then, any efficient payoff can be attained in equilibrium. Second, all equilibrium payoffs in G(\mathcal{D}) are feasible and INTIR (Proposition 1).

THEOREM 1. Let G(\mathcal{D}) be any commitment game. For type profile \boldsymbol{t}, let \boldsymbol{\tau} be a correlated strategy inducing a feasible and INTIR payoff profile (u_i(\boldsymbol{t},\boldsymbol{\tau}))_{i=1}^n. Let \boldsymbol{\underline{\tau}} be a punishment strategy that is arbitrary except that, if j is the only player with d'_j \neq d_j, \boldsymbol{\underline{\tau}} is the minimax strategy \boldsymbol{\tau}_{-j} against player j from Definition 2. Conditional on the signal c, let \boldsymbol{\tau}^{c}(\boldsymbol{t}) be the deterministic action profile, called the target action profile, given by \boldsymbol{\tau}(\cdot|\boldsymbol{t}), and let \boldsymbol{\underline{\tau}}^c be the deterministic action profile given by \boldsymbol{\underline{\tau}}. For all players i and types t_i, let d_i be such that:

    \begin{align*}r^i_{d_i(t_i)}(d'_{-i},c) &= \begin{cases}\tau_i^{c}(\boldsymbol{t}),& \text{if } d'_{-i} = d_{-i}\\\underline{\tau}_i^{c},& \text{otherwise,} \\ \end{cases} \\ \left(y^i_{d_i(t_i)}(d'_{-i})\right)_j &= \begin{cases} 1,& \text{if } d'_{j} = d_{j}\\ 0,& \text{otherwise.} \\ \end{cases} \end{align*}

Then, the device profile (d_i)_{i=1}^n is a Bayesian Nash equilibrium of G(\mathcal{D}).

PROOF. We first need to check that the response and type revelation functions only condition on information available to the players. If all players use (d_i)_{i=1}^n, then by construction of y^i_{d_i(t_i)} they all reveal their types to each other, and so are able to play \boldsymbol{\tau}(\cdot|\boldsymbol{t}) conditioned on their type profile (regardless of whether the induced payoff is IC). If at least one player uses some other device, the players who do use (d_i)_{i=1}^n still share their types with each other, thus they can play \underline{\boldsymbol{\tau}}.

Suppose player j deviates from (d_i)_{i=1}^n. That is, player j's strategy in G(\mathcal{D}) is d_j' \neq d_j. Note that the outputs of player j's response and type revelation functions induced by d_j' may in general be the same as those returned by d_j. We will show that \boldsymbol{\underline{\tau}}^c punishes deviations from the target action profile regardless of these outputs, as long as there is a deviation in functions r'^j or y'^j. Let a'_j = r^j_{d_j'(t_j)}(d_{-j},c). Then:

    \begin{align*} &\mathbb{E}_{\boldsymbol{t}_{-j} \sim q(\cdot | t_j)}(u_j(\boldsymbol{t}, (a'_j,\boldsymbol{\underline{\tau}}))\,|\,d'_j,d_{-j}) \\ &\qquad = \mathbb{E}_{\boldsymbol{t}_{-j} \sim q(\cdot | t_j)}u_j(\boldsymbol{t},(a_j',\boldsymbol{\tau}_{-j}(\cdot | \boldsymbol{t}_{-j}))) \\ &\qquad \leq \mathbb{E}_{\boldsymbol{t}_{-j} \sim q(\cdot | t_j)}u_j(\boldsymbol{t},\boldsymbol{\tau}) \hspace{2.3cm}\textrm{(by INTIR)} \\ &\qquad = \mathbb{E}_{\boldsymbol{t}_{-j} \sim q(\cdot | t_j)}(u_j(\boldsymbol{t}, \boldsymbol{\tau})\,|\,d_j,d_{-j}). \end{align*}

This last expression is the ex interim payoff of the proposed commitment d_j given that the other players use d_{-j}; therefore, (d_i)_{i=1}^n is a Bayesian Nash equilibrium.
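The devices of Theorem 1 can be mimicked in code, with the abstract condition d'_{-i} = d_{-i} modeled as object identity. This is a sketch under our own simplifications (one fixed target and one fixed punishment action per player, order-aligned device lists), not the paper's program-equilibrium implementation:

```python
# Sketch of the Theorem 1 devices: play the target action and reveal one's
# type iff every other player submitted exactly the equilibrium device;
# otherwise punish and refuse revelation. Device equality is Python object
# identity (eq=False), mirroring conditioning on device *identity*.
from dataclasses import dataclass, field

@dataclass(eq=False)
class Device:
    target: str        # this player's part of the target action profile
    punish: str        # this player's part of the punishment profile
    profile: list = field(default_factory=list)   # the equilibrium profile d

    def response(self, others):
        # `others` must list the other players' devices in fixed player order.
        expected = [d for d in self.profile if d is not self]
        on_path = all(a is b for a, b in zip(others, expected))
        return self.target if on_path else self.punish

    def reveal_to(self, other):
        # Reveal one's type only to players using their equilibrium device.
        return other in self.profile

# Three players; illustrative actions "C" (target) and "D" (punish).
devices = [Device("C", "D"), Device("C", "D"), Device("C", "D")]
for d in devices:
    d.profile = devices

# On path: everyone plays the target profile (and reveals to everyone).
assert [d.response([e for e in devices if e is not d]) for d in devices] == ["C"] * 3

# Player 3 deviates to a fresh device: the others punish and do not reveal
# to the deviator, while still revealing to each other.
dev = Device("C", "D")
assert devices[0].response([devices[1], dev]) == "D"
assert not devices[0].reveal_to(dev)
assert devices[0].reveal_to(devices[1])
```

As the surrounding text notes, conditioning on device identity avoids circularity but is not robust: a distinct device with identical behavior would still be punished.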

PROPOSITION 1. Let G(\mathcal{D}) be any commitment game. If a device profile (d_i)_{i=1}^n is a Bayesian Nash equilibrium of G(\mathcal{D}), then the induced payoff \mathbf{x} is feasible and INTIR.

PROOF. Let \boldsymbol{\mu} be the correlated strategy induced by (r^i_{d_i(t_i)}(d_{-i},c))_{i=1}^n. Then by hypothesis x_i(t_i) = \mathbb{E}_{\boldsymbol{t}_{-i} \sim q(\cdot | t_i)}u_i(\boldsymbol{t}, \boldsymbol{\mu}), so \mathbf{x} is feasible. Suppose that for some player j, for all correlated strategies \boldsymbol{\tau}_{-j} there exists a type t_j such that:

    \begin{align*} x_j(t_j) &< \max_{a_j \in \mathcal{A}_j}  \mathbb{E}_{\boldsymbol{t}_{-j} \sim q(\cdot | t_j)}u_j(\boldsymbol{t}, (a_j,\boldsymbol{\tau}_{-j}(\cdot | \boldsymbol{t}_{-j}))). \end{align*}

Let a^*_j = \textrm{arg\,max}_{a_j} \mathbb{E}_{\boldsymbol{t}_{-j} \sim q(\cdot | t_j)}u_j(\boldsymbol{t},(a_j,\boldsymbol{\tau}_{-j}(\cdot | \boldsymbol{t}_{-j}))). Then if player j with type t_j deviates to d'_j such that r^j_{d'_j(t_j)}(d_{-j},c) = a^*_j:

    \begin{align*} &\mathbb{E}_{\boldsymbol{t}_{-j} \sim q(\cdot | t_j)}(u_j(\boldsymbol{t}, (a^*_j,\boldsymbol{\mu}_{-j}))\,|\,d'_j,d_{-j}) \\ &\qquad = \mathbb{E}_{\boldsymbol{t}_{-j} \sim q(\cdot | t_j)}u_j(\boldsymbol{t},(a_j^*,\boldsymbol{\mu}_{-j}(\cdot | \boldsymbol{t}_{-j}))) \\ &\qquad > x_j(t_j). \end{align*}

This contradicts the assumption that \mathbf{x} is the payoff of a Bayesian Nash equilibrium, therefore \mathbf{x} is INTIR.

Our assumptions do not imply that equilibrium payoffs are IC (unlike Forges [5]). Suppose a player i's payoff would increase if the players conditioned the correlated strategy on a different type (i.e., the payoff is not IC). This does not imply that player i can profit by deviating from the equilibrium, because in our setting the other players' actions are conditioned on the type revealed by i. In particular, as in our proposed device profile, they may choose to play their part of the target action profile only if all other players' devices reveal their (true) types.

5 Other Conditions for Efficiency and Counterexamples

The assumptions that give rise to this class of commitment games with conditional information revelation are stronger than the ability to unconditionally reveal private information. Recalling the unraveling results from Section 2, unconditional revelation ability is sometimes sufficient for full revelation of private information, or at least for revelation of the information that prevents incentive compatibility, and thus for the possibility of efficiency in equilibrium. But this is not always true, whereas efficiency is always attainable in equilibrium under our assumptions. We first show that full unraveling fails in our running example when country 2 has a weak point. Then, we discuss conditions under which the ability to partially reveal private information is sufficient for efficiency, and examples where these conditions do not hold.

5.1 Analysis of running example

Since country 2 can only either reveal both its strength \theta and weak point v, or neither, in our formalism of strategic information revelation \mathcal{R}(t_2) = \{\{t_2\}, T_2\}. If country 2 rejects the offer 1-s, players go to a war that country 2 wins with probability p_W if its army is weak, or p_S > p_W if strong.

Assume country 2 is strong and the prior probability of a strong type is q < \frac{p_S - p_W}{p_S - p_W + c_1 + c_2}. In the perfect Bayesian equilibrium of G (without type revelation) country 1 offers 1 - s = p_W - c_2, which country 2 rejects [27]. That is, if country 1 believes country 2 is unlikely to be strong, country 1 makes the smallest offer that only a weak type would accept. Thus with private information the countries go to war and receive inefficient payoffs in equilibrium. A strong country 2 also prefers not to reveal its type unconditionally. Although this would guarantee that country 1 best-responds with 1 - s = p_S - c_2, which country 2 would accept, given knowledge of the weak point v country 1 prefers to attack it and receive an extra z payoff with certainty, costing c_{A,2} for country 2. Country 2 would therefore be worse off by c_{A,2} than in equilibrium without revelation, where its expected payoff is p_S - c_2.
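This reasoning can be verified numerically. The parameter values below are our own illustrative choices satisfying the stated assumptions:

```python
# Numerical check of the running example: with prior q below the threshold,
# country 1 prefers the low offer that only a weak country 2 accepts (so a
# strong country 2 fights), and a strong country 2 that unconditionally
# reveals its type loses c_A2 to the sneak attack. Parameters are assumed.

p_W, p_S = 0.3, 0.7        # country 2's win probability: weak vs. strong
c_1, c_2 = 0.1, 0.1        # each country's cost of fighting
c_A2, z = 0.05, 0.02       # sneak-attack cost to country 2 / gain to country 1

threshold = (p_S - p_W) / (p_S - p_W + c_1 + c_2)
q = 0.5                    # prior probability that country 2 is strong
assert q < threshold

# Country 1's expected payoff from the low offer 1-s = p_W - c_2
# (weak type accepts; strong type rejects and the countries go to war) ...
low = (1 - q) * (1 - (p_W - c_2)) + q * (1 - p_S - c_1)
# ... versus the high offer 1-s = p_S - c_2, which both types accept.
high = 1 - (p_S - c_2)
assert low > high          # country 1 makes the low offer: war with prob. q

# Strong country 2: war payoff without revealing, vs. revealing its full type
# (it gets the high offer, but exposes its weak point to the sneak attack).
no_reveal = p_S - c_2
reveal = (p_S - c_2) - c_A2
assert reveal < no_reveal  # unraveling fails: revelation is worse by c_A2
```

The assertions reproduce the two claims of this paragraph: war occurs in equilibrium despite being inefficient, and a strong country 2 strictly prefers not to reveal unconditionally.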

However, if country 2 can commit to reveal its full type if and only if country 1 commits both to an offer 1-s = p_S, which country 2 accepts, and to not attacking v, the countries can avoid war in equilibrium. The profile (1 - p(\theta), p(\theta)) is not IC, and hence cannot be achieved under the assumptions of Forges [5] alone, because a weak country 2 would prefer the strong type's payoff p_S (absent type-conditional commitment by country 1). In this example, conditional type revelation is required for efficiency due to a practical inability to reveal military strength without also revealing a vulnerability (Table 1). In other words, country 2's revelation action set \mathcal{R}(t_2) is too restricted for full unraveling to occur. Interactions between advanced artificially intelligent agents may feature similar problems necessitating our framework. For example, if revelation requires sharing source code or the full set of parameters of a neural network that lacks cleanly separated modules, unconditional revelation risks exposing exploitable information. See also Example 7 of Okuno-Fujiwara et al. [23], in which full unraveling fails because a firm does not want to reveal a secret technology that provides a competitive advantage, leading to inefficiency because other private information is not revealed.

5.2 Efficiency with unconditional revelation

Full unraveling. If full unraveling occurs in the base game G, then the ability to conditionally reveal information becomes irrelevant. For example, consider a modification of Example 3.1 in which there is no weak point, i.e., country 2's type is t_2 = \theta rather than t_2 = \{\theta, v\}. A strong country 2 that can verify its strength to country 1 prefers to do so, since this does not also help country 1 exploit it. But because of this, if country 2 refuses to reveal its strength and it is common knowledge that country 2 could verifiably reveal, country 1 knows country 2 is weak. Thus, all types are revealed in equilibrium without conditioning on country 1's commitment.

Some sufficient and some necessary conditions for full unraveling have been derived. Hagenbach et al. [9] show that, given verifiable revelation, full unraveling is guaranteed if there are no cycles in the directed graph defined by types that prefer to pretend to be each other. For full unraveling in some classes of games with multidimensional types, it is necessary that one player's payoff be sufficiently nonconvex in the other's beliefs [17]. In Appendix A, we give an example where this condition fails, so unconditional revelation is insufficient even without exploitable information. However, even for games with full unraveling, the framework of Forges [5] is still insufficient for equilibria with non-IC payoffs, since that framework does not allow verifiable revelation (conditional or otherwise).

Table 1: Sufficient conditions to achieve a payoff in a Bayesian Nash equilibrium of a commitment game, given different forms of verifiable type revelation ability. “Full” means a player can only reveal their full type, while “Partial” means other subsets containing their type can be revealed.
              | Full                                     | Partial
Conditional   | feasible, INTIR                          | feasible, INTIR
Unconditional | feasible, INTIR, {full unraveling or IC} | feasible, INTIR, {full unraveling or IC after unraveling}



Partial revelation and post-unraveling incentive compatibility. If in our original example country 2 could partially reveal its type, i.e., only the probability of winning a war but not its weak point, conditional revelation would not be necessary (Table 1). This is because the strategy inducing the efficient payoff profile (1 - p(\theta) + c_2, p(\theta) - c_2) depends only on the part of country 2's type that is revealed by the unraveling argument. Country 2 does not profit from lying about its exploitable, non-unraveled information — that is, the payoff is IC with respect to that information, even if not IC in the pre-unraveling game. Thus country 1 does not need to know this information for the efficient payoff to be achieved in equilibrium. Formally, in this case \mathcal{R}(t_2) = \mathcal{T}(t_2), i.e., country 2 can choose to reveal any \Theta_2 \ni t_2, producing an equilibrium of partial unraveling. We can generalize this observation with the following proposition.

PROPOSITION 2. Suppose that the devices in G(\mathcal{D}) do not have revelation functions, and G is a game of strategic information revelation with \mathcal{R}(t_i) = \mathcal{T}(t_i) for all i, t_i. Let q be updated to have support on the subset \Theta \subseteq T of types remaining after unraveling. As in Forges [5], assume \mathcal{C} is conditioned on \boldsymbol{t}. Then a payoff profile \mathbf{x} is achievable in a Bayesian Nash equilibrium of G(\mathcal{D}) if and only if it is feasible, INTIR, and IC (with respect to the post-unraveling game and updated q).

PROOF. This is an immediate corollary of Propositions 1 and 2 of Forges [5], applied to the base game induced by unraveling (that is, with a prior updated on types being in the space \Theta).

To our knowledge, it is an open question which conditions are sufficient and necessary for partial unraveling such that the efficient payoffs of the post-unraveling game are IC. An informal summary of Proposition 2 and characterizations of equilibrium payoffs under our framework and that of Forges [5] is: Given conditional commitment ability, efficiency can be achieved in equilibrium if and only if there is sufficiently strong incentive compatibility, conditional and verifiable revelation ability, or an intermediate combination of these (see Table 1).

Proposition 2 is not vacuous; there exist games in which, given the ability to partially, verifiably, and unconditionally reveal their private information, players end up in an inefficient equilibrium that is Pareto-dominated by a non-IC payoff. Consequently, the alternatives to conditional information revelation that we have considered are not sufficient to achieve all feasible and INTIR payoffs even when partial revelation is possible. The game in Appendix A is one example where such a payoff is efficient. In the following example, the only efficient payoff is IC. However, the set of equilibrium payoffs is smaller than under our assumptions, and excludes some potentially desirable outcomes. For example, there is a non-IC \epsilon-efficient payoff that improves upon the strictly efficient payoff in utilitarian welfare (the sum of all players' payoffs).

Example 5.1 (All-pay auction under incomplete information from Kovenock et al. [14]). Two firms i=1,2 participate in an all-pay auction. Each firm has a private valuation s_i \stackrel{\text{iid}}{\sim} \text{Unif}[0,1] of a good. After observing their respective valuations, players simultaneously choose whether to reveal them. Then they simultaneously submit bids x_i \in [0, \infty), and the higher bid wins the good, with a tie broken by a fair coin. Thus player i's payoff is s_i(\mathbb{I}[x_i \geq x_{-i}] - \frac{1}{2}\mathbb{I}[x_i = x_{-i}]) - x_i. There is a Bayesian Nash equilibrium of this base game in which x_i = s_i^2/2, and neither player reveals their valuation [14]. In this equilibrium, each player's ex interim payoff is:

    \begin{align*} \mathbb{E}_q u_i(\boldsymbol{t}, (x_i, x_{-i})) &= s_iP(x_i \geq x_{-i}) - x_i \\ &= s_iP(s_i^2/2 \geq s_{-i}^2/2) - s_i^2/2 \\ &= s_i^2/2. \end{align*}

The ex post payoffs are (s_1 - s_1^2/2, -s_2^2/2) if s_1 > s_2, (-s_1^2/2, s_2 - s_2^2/2) if s_1 < s_2, and (s_1/2 - s_1^2/2, s_2/2 - s_2^2/2) otherwise.

Now, let \varepsilon > 0, and consider the following strategy \boldsymbol{\tau}. For type profiles such that s_1 > s_2, let x_1 = \min\{\varepsilon, s_1^3/2\} and x_2 = 0. For s_2 > s_1, let x_1 = 0 and x_2 = \min\{\varepsilon, s_2^3/2\}. Otherwise, let x_1 = x_2 = 0. Then:

    \begin{align*} \mathbb{E}_q u_i(\boldsymbol{t}, \boldsymbol{\tau}) &= (s_i - \min\{\varepsilon, s_i^3/2\})P(s_i \geq s_{-i}) \\ &= s_i^2 - s_i\min\{\varepsilon, s_i^3/2\} \\ &\geq s_i^2/2. \end{align*}

Thus the payoff induced by \boldsymbol{\tau} is feasible and INTIR, because it exceeds the ex interim equilibrium payoff. This is also an ex post Pareto improvement on the equilibrium, because the ex post payoffs are (s_1 - \min\{\varepsilon, s_1^3/2\}, 0) if s_1 > s_2, (0, s_2 - \min\{\varepsilon, s_2^3/2\}) if s_2 > s_1, and (s_1/2, s_2/2) otherwise. Finally, this payoff is not IC, because if s_1 < s_2, player 1 would profit from \boldsymbol{\tau} conditioned on a type s_1' \geq s_2.

Note that the payoffs (s_1, 0) and (0, s_2), i.e., the case of \varepsilon = 0, are not feasible. This non-IC payoff thus requires \varepsilon > 0 and is inefficient by a margin of \varepsilon. However, in practice the players may favor this payoff over (s_1/2, s_2/2). This is because the non-IC payoff is \varepsilon-welfare optimal, since whenever s_i > s_{-i} for either player, the supremum of the sum of payoffs is s_i.
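These two payoff profiles can be compared numerically. The following Monte Carlo sketch (our own illustration; the fixed valuation s_1 = 0.6 and \varepsilon = 0.01 are arbitrary choices, and the function names are ours) estimates firm 1's ex interim payoff under the equilibrium bids x_i = s_i^2/2 and under \boldsymbol{\tau}:

```python
import random

# Monte Carlo comparison of the two payoff profiles in Example 5.1: the
# no-revelation equilibrium bid x_i = s_i^2 / 2 versus the cooperative
# profile tau, where the higher-valuation firm bids min{eps, s_i^3 / 2}
# and the other bids 0. (Illustrative sketch, not from the paper's text.)

def payoff(s_i, s_j, x_i, x_j):
    """All-pay auction payoff: pay your own bid; the higher bid wins."""
    if x_i > x_j:
        return s_i - x_i
    if x_i < x_j:
        return -x_i
    return s_i / 2.0 - x_i  # tie broken by a fair coin, in expectation

random.seed(0)
eps, n = 0.01, 200_000
s1 = 0.6  # fix firm 1's valuation and average over firm 2's
eq = coop = 0.0
for _ in range(n):
    s2 = random.random()
    eq += payoff(s1, s2, s1**2 / 2, s2**2 / 2)
    b1 = min(eps, s1**3 / 2) if s1 > s2 else 0.0
    b2 = min(eps, s2**3 / 2) if s2 > s1 else 0.0
    coop += payoff(s1, s2, b1, b2)
eq, coop = eq / n, coop / n

print(eq)    # approx s1^2 / 2 = 0.18
print(coop)  # approx s1^2 - s1 * min(eps, s1^3 / 2) = 0.354
```

The estimates match the closed forms above: \boldsymbol{\tau} roughly doubles firm 1's ex interim payoff at the cost of only \varepsilon of welfare.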

6 Implementation of Conditional Type Revelation via Robust Program Equilibrium

Having shown that conditional commitment and revelation devices solve problems that are intractable under other assumptions, we next consider how players can practically (and more robustly) implement these abstract devices. In particular, can players achieve efficient equilibria without using the exact device profile in Theorem 1, which can only cooperate with itself? We now develop an implementation showing that this is possible, after providing some background.

Oesterheld [21] considers two computer programs playing a game. Each program can simulate the other. He constructs a program equilibrium — a pair of programs that form an equilibrium of this game — using “instantaneous tit-for-tat” strategies.

In the Prisoner’s Dilemma, the pseudocode for these programs (called "\epsilon\texttt{GroundedFairBot}") is: ``\texttt{With small probability} \, \epsilon: \texttt{Cooperate; Else: do what my counterpart does when playing against me}." These programs cooperate with each other and punish defection. Note that these programs are recursive, but guaranteed to terminate because of the \epsilon probability that a program will output Cooperate unconditionally. 
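A minimal Python rendering of this construction for the Prisoner's Dilemma (our own sketch; the names and the interface, in which a program receives its counterpart as a callable, are not from Oesterheld [21]):

```python
import random

# epsilon-grounded tit-for-tat, sketched for the Prisoner's Dilemma. With
# probability eps a program cooperates unconditionally, which grounds the
# otherwise infinite mutual simulation; the recursion terminates with
# probability 1 because some level eventually grounds out.

COOPERATE, DEFECT = "C", "D"

def grounded_fair_bot(opponent, eps=0.1):
    """With prob. eps cooperate outright; else output whatever the opponent
    outputs when run against this very program."""
    if random.random() < eps:
        return COOPERATE
    return opponent(grounded_fair_bot)  # simulate the opponent playing vs us

def defect_bot(_opponent):
    return DEFECT

random.seed(1)
# Self-play: every recursion path ends in an eps-grounding that returns C,
# so the output is Cooperate with certainty.
print(grounded_fair_bot(lambda opp: grounded_fair_bot(opp)))  # "C"
# Against an unconditional defector: Defect with probability 1 - eps.
print(grounded_fair_bot(defect_bot))
```

In self-play the mutual simulation always bottoms out in cooperation, which then propagates up every level of the stack; a defector is punished except in the small-probability grounding event, mirroring the slight exploitability discussed below.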

We use this idea to implement conditional commitment and revelation devices. For us, "revealing private information and playing according to the target action profile" is analogous to cooperation in the construction of \epsilon\texttt{GroundedFairBot}. We will first describe the appropriate class of programs for program games under private information. Then we develop our program, \epsilon\texttt{GroundedFairSIRBot} (where "SIR" stands for "strategic information revelation"), and show that it forms a \delta-Bayesian Nash equilibrium of a program game. Pseudocode for \epsilon\texttt{GroundedFairSIRBot} is given in Algorithm 1.

Player i's strategy in the program game is a choice p_i from the program space P_i, a set of computable functions from \times_{i=1}^n P_i \times C \times \{0, 1\} to \mathcal{A}_i \cup \{0, 1\}^{n-1}. A program returns either an action or a type revelation vector. Each program takes as input the players' program profile, the signal c, and a boolean that equals 1 if the program's output is an action, and 0 otherwise.

For brevity, we write p_i(r) for a call to a program with the boolean set to 1, and p_i(y) otherwise. Player i's action in the program game is a call to their program p_i((p_m)_{m=1}^n, c, 1). (We refer to these initial program calls as the base calls, to distinguish them from calls made by other programs.) Then, the ex post payoff of player i in the program game is u_i(\boldsymbol{t}, (p_i((p_m)_{m=1}^n, c, 1))_{i=1}^n).

In addition to \mathcal{C} in the base game, we assume there is a randomization device \mathcal{C}^P on which programs can condition their outputs. Like Oesterheld [21], we use programs that unconditionally terminate with some small probability. By using \mathcal{C}^P to correlate decisions to unconditionally terminate, our program profile is able to terminate with probability 1, despite the exponentially increasing number of recursive program calls. In particular, \mathcal{C}^P reads the call stack of the players' program profile. At each depth level L of recursion reached in the call stack, a variable U_L is independently sampled from \text{Unif}[0,1]. Each program call at level L can read off the values of U_L and U_{L+1} from \mathcal{C}^P. The index L itself is not revealed, however, because programs that "know" they are being simulated would be able to defect in the base calls while cooperating in simulations, to deceive the other programs. To ensure that our programs terminate in play with a deviating program, \epsilon\texttt{GroundedFairSIRBot}(r) calls truncated versions of its counterparts' revelation programs: for p_i \in P_i, let [p_i] denote p_i with immediate termination upon calling another program.
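The truncation operator [p_i] can be sketched as follows (our own model, not the paper's pseudocode: a "program call" is an invocation of a callback, and truncation cuts it off with an exception):

```python
# Sketch of the truncation operator [p_i]: run a revelation program, but
# force immediate termination the moment it tries to call another program.
# We model "calling another program" as invoking a callback supplied to the
# program, and cut it off by raising an exception.

class NestedCall(Exception):
    """Raised when the truncated program attempts to call another program."""

def truncate(program):
    def truncated(*args):
        def blocked_call(*_a, **_k):
            raise NestedCall  # any nested program call terminates [p_i]
        try:
            # the program receives blocked_call in place of real counterparts
            return program(blocked_call, *args)
        except NestedCall:
            return None  # terminated without producing an output
    return truncated

# An unconditional revealer never calls its counterpart, so truncation is
# harmless; a conditional revealer that simulates its counterpart is cut off.
unconditional = lambda call, t: ("reveal", t)
conditional = lambda call, t: ("reveal", t) if call() == "reveal" else "hide"

print(truncate(unconditional)("type_2"))  # ('reveal', 'type_2')
print(truncate(conditional)("type_2"))    # None: the nested call was truncated
```

This captures why truncation guarantees termination at the cost of some robustness: a counterpart that would have cooperated only after simulating others is treated the same as one that never cooperates.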

\epsilon\texttt{GroundedFairSIRBot}(r) checks if all other players' programs reveal their types (line 8 of Algorithm 1). If so, either with a small probability it unconditionally cooperates (line 11) — i.e., plays its part of the target action profile — or it cooperates only when all other programs cooperate (line 15). Otherwise, it punishes (line 17). In turn, \epsilon\texttt{GroundedFairSIRBot}(y) reveals its type unconditionally with probability \epsilon (line 20). Otherwise, it reveals to a given player j under two conditions (lines 25 and 28). First, player j must reveal to the user. Second, they must play an action consistent with the desired equilibrium, i.e., cooperate when all players reveal their types, or punish otherwise. (See Figure 1.)

Unconditionally revealing one's type and playing the target action avoids an infinite regress. Crucially, these unconditional cooperation outputs are correlated through \mathcal{C}^P. Therefore, in a profile of copies of this program, either all copies unconditionally cooperate together, or none of them do. Using this property, we can show (see proof of Theorem 2 in Appendix B) that a profile in which all players use this program outputs the target action profile with certainty. If one player deviates, first, \epsilon\texttt{GroundedFairSIRBot}(r) immediately punishes if that player does not reveal. If they do reveal, with some small probability the other players unconditionally cooperate, making this strategy slightly exploitable, but otherwise the deviator is punished. Even if deviation is punished, \epsilon\texttt{GroundedFairSIRBot}(y) may unconditionally reveal. In our approach, this margin of exploitability is the price of implementing conditional commitment and revelation with programs that cooperate based on counterparts' outputs, rather than a strict matching of devices, without an infinite loop. Further, since a player is only able to unconditionally cooperate under incomplete information if they know all players' types, \epsilon\texttt{GroundedFairSIRBot}(r) needs to prematurely terminate calls to programs that don't immediately unconditionally cooperate, but which may otherwise cause infinite recursion (line 4). This comes at the expense of robustness: \epsilon\texttt{GroundedFairSIRBot} punishes, with low probability, some players who might otherwise have cooperated.

Figure 1. Flowchart for a 2-player program game between player i using \epsilon\texttt{GroundedFairSIRBot}, and player j using an arbitrary program. An edge to a white node indicates a call to the program in that node; to a gray node indicates a check of the condition in that node; and to a node without a border indicates the output of the most recent parent white node. Wavy edges depict a call to the program in the parent node, with its child nodes omitted for space. Superscripts indicate the level of recursion.

THEOREM 2. Consider the program game induced by a base game G and the program spaces \{P_i\}_{i=1}^n. Assume all strategies returned by these programs are computable. For type profile \boldsymbol{t}, let \boldsymbol{\tau}(\cdot|\boldsymbol{t}) induce a feasible and INTIR payoff profile (u_i(\boldsymbol{t},\boldsymbol{\tau}))_{i=1}^n. Let \underline{\boldsymbol{\tau}} be the minimax strategy if one player j deviates, and arbitrary otherwise. Let \overline{u} be the maximum payoff achievable by any player in G, and \delta = \overline{u}((1-\epsilon)^{-2} - 1). Then the program profile given by Algorithm 1 (with \texttt{output_action} = 1) for players i=1,\dots,n, denoted (p_i)_{i=1}^n, is a \delta-Bayesian Nash equilibrium. That is, if players i \neq j play this profile, and player j plays a program p_j' \in P_j that terminates with probability 1 given that any programs it calls terminate with probability 1, then:

    \begin{align*} & \mathbb{E}_{\boldsymbol{t}_{-j} \sim q(\cdot | t_j)}(u_j(\boldsymbol{t}, \boldsymbol{\tau})|p_j',p_{-j}) \\ &\qquad \leq \delta + \mathbb{E}_{\boldsymbol{t}_{-j} \sim q(\cdot | t_j)}(u_j(\boldsymbol{t}, \boldsymbol{\tau})|p_j,p_{-j}). \end{align*}

PROOF SKETCH. We need to check (1) that the program profile (p_i)_{i=1}^n terminates (a) with or (b) without a deviation, (2) that everyone plays the target action profile when no one deviates, and (3) that with high probability a deviation is punished. First suppose no one deviates. If U_L < \epsilon for two levels of recursion in a row, the calls to p_i(y) and p_i(r) all unconditionally reveal (line 21 of \epsilon\texttt{GroundedFairSIRBot}(y)) and output the target action (line 6 of \epsilon\texttt{GroundedFairSIRBot}(r)), respectively. Because these unconditional cooperative outputs are correlated through \mathcal{C}^P, the probability that U_L < \epsilon at each pair of subsequent levels in the call stack is a nonzero constant. Thus this event is guaranteed to occur eventually and cause termination in finite time, satisfying (1b). Moreover, each call to p_i(y) or p_i(r) in previous levels of the stack sees that the next level cooperates, and thus cooperates as well, ensuring that the base calls all output the target action profile. This shows (2).

If, however, one player deviates, we use the same guarantee of a run of subsequent U_L < \epsilon events to guarantee termination. First, all calls to non-deviating programs terminate, because any call to \epsilon\texttt{GroundedFairSIRBot}(r) conditional on U_{L+1} < \epsilon forces termination (line 4) of calls to other players' revelation programs. Thus the deviating programs also terminate, since they call terminating non-deviating programs. This establishes (1a). Finally, in the high-probability event that the first two levels of calls to (p_i)_{i=1}^n do not unconditionally cooperate, \epsilon\texttt{GroundedFairSIRBot}(r) punishes the deviator as long as they do not reveal their type and play their target action. The punishing players will know each other's types, since a call to \epsilon\texttt{GroundedFairSIRBot}(y) is guaranteed by line 28 to reveal to anyone who also punishes or unconditionally cooperates in the next level. Condition (3) follows.
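The termination argument can be checked numerically. The sketch below (our own illustration; `termination_depth` is not from the paper) samples the i.i.d. draws U_L exposed by \mathcal{C}^P and records the first level at which two consecutive draws fall below \epsilon:

```python
import random

# Monte Carlo sketch of the termination argument: at recursion level L the
# correlation device C^P exposes an i.i.d. draw U_L ~ Unif[0,1], and the
# mutual simulation grounds out at the first level where two consecutive
# draws fall below eps. That waiting time is almost surely finite, so the
# program profile terminates with probability 1.

def termination_depth(eps, rng):
    """Depth of the first level L with U_L < eps and U_{L+1} < eps."""
    depth, prev = 1, rng.random()
    while True:
        cur = rng.random()
        if prev < eps and cur < eps:
            return depth
        prev, depth = cur, depth + 1

rng = random.Random(0)
eps = 0.1
depths = [termination_depth(eps, rng) for _ in range(20_000)]
mean_depth = sum(depths) / len(depths)
print(mean_depth)  # on the order of 1 / eps^2 = 100 levels of recursion
```

The mean grounding depth grows like 1/\epsilon^2, which is the source of the trade-off in Theorem 2: a smaller \epsilon means less exploitability (smaller \delta) but deeper mutual simulation before termination.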

A practical obstacle to program equilibrium is demonstrating to one’s counterpart that one’s behavior is actually governed by the source code that has been shared. In our program game with private information, there is the additional problem that, as soon as one’s source code is shared, one’s counterpart may be able to read off one’s private information (without revealing their own). Addressing this in practice might involve modular architectures, where players could expose the code governing their strategy without exposing the code for their private information. Alternatively, consider AI agents that can place copies of themselves in a secure box, where the copies can inspect each other’s full code but cannot take any actions outside the box. These copies read each other’s commitment devices off of their source code, and report the action and type outputs of these devices to the original agents. If any copy within the box attempts to transmit information that another agent’s device refused to reveal, the box deletes its contents. This protocol does not require a mediator or arbitrator; the agents and their copies make all the relevant strategic decisions, with the box only serving as a security mechanism. Applications of secure multi-party computation to machine learning [12], or privacy-preserving smart contracts [13] — with the original agents treated as the “public” from whom code shared among the copies is kept private — might facilitate the implementation of our proposed commitment devices.

7 Discussion

We have defined a new class of commitment games that allow revelation of private information conditioned on other players’ commitments. Our folk theorem shows that in these games, efficient payoffs are always attainable in equilibrium. Our examples, which draw on models of war and all-pay auctions, show how players with these capabilities can avoid welfare losses, while others (even with the ability to verifiably reveal private information) cannot. Finally, we have provided an implementation of this framework via robust program equilibrium, which can be used by computer programs that read each other’s source code.

While conceptually simple, satisfying these assumptions in practice requires a strong degree of mutual transparency and conditional commitment ability, which is not possessed by contemporary human institutions or AI systems. Thus, our framework represents an idealized standard for bargaining in the absence of a trusted third party, suggesting research priorities for the field of Cooperative AI [2]. The motivation for work on this standard is that AI agents with increasing economic capabilities, which would exemplify game-theoretic rationality to a stronger degree than humans, may be deployed in contexts where they make strategic decisions on behalf of human principals [6]. Given the potential for game-theoretically rational behavior to cause cooperation failures [4, 20], it is important that such agents are developed in ways that ensure they are able to cooperate effectively.

Commitment devices of this form would be particularly useful in cases where centralized institutions (Dafoe et al. [2], Section 4.4) for enforcing or incentivizing cooperation fail, or have not been constructed due to collective action problems. This is because our devices do not require a trusted third party, aside from correlation devices. A potential obstacle to the use of these commitment devices is lack of coordination in development of AI systems. This may lead to incompatibilities in commitment device implementation, such that one agent cannot confidently verify that another’s device meets its conditions for trustworthiness and hence type revelation. Given that commitments may be implicit in complex parametrizations of neural networks, it is not clear that independently trained agents will be able to understand each other’s commitments without deliberate coordination between developers. Our program equilibrium approach allows for the relaxation of the coordination requirements needed to implement conditional information revelation and commitment. Coordination on target action profiles for commitment devices or flexibility in selection of such profiles, in interactions with multiple efficient and arguably “fair” profiles [28], will also be important for avoiding cooperation failures due to equilibrium selection problems.


We thank Lewis Hammond for helpful comments on this paper and thank Caspar Oesterheld both for useful comments and for identifying an important error in an earlier version of one of our proofs.


[1] Andrew Critch. 2019. A parametric, resource-bounded generalization of Löb’s theorem, and a robust cooperation criterion for open-source game theory. The Journal of Symbolic Logic 84, 4 (2019), 1368–1381.

[2] Allan Dafoe, Edward Hughes, Yoram Bachrach, Tantum Collins, Kevin R. McKee, Joel Z. Leibo, Kate Larson, and Thore Graepel. 2020. Open Problems in Cooperative AI. arXiv:2012.08630 [cs.AI]

[3] Ronald A Dye. 1985. Disclosure of nonproprietary information. Journal of accounting research (1985), 123–145.

[4] James D Fearon. 1995. Rationalist explanations for war. International organization 49, 3 (1995), 379–414.

[5] Françoise Forges. 2013. A folk theorem for Bayesian games with commitment. Games and Economic Behavior 78 (2013), 64–71.

[6] Edward Geist and Andrew J. Lohn. 2018. How might artificial intelligence affect the risk of nuclear war? Rand Corporation.

[7] Sanford J Grossman. 1981. The informational role of warranties and private disclosure about product quality. The Journal of Law and Economics 24, 3 (1981), 461–483.

[8] Sanford J Grossman and Oliver D Hart. 1980. Disclosure laws and takeover bids. The Journal of Finance 35, 2 (1980), 323–334.

[9] Jeanne Hagenbach, Frédéric Koessler, and Eduardo Perez-Richet. 2014. Certifiable Pre-play Communication: Full Disclosure. Econometrica 82, 3 (2014), 1093–1131.

[10] Boyan Jovanovic. 1982. Truthful disclosure of information. The Bell Journal of Economics (1982), 36–44.

[11] Adam Tauman Kalai, Ehud Kalai, Ehud Lehrer, and Dov Samet. 2010. A commitment folk theorem. Games and Economic Behavior 69, 1 (2010), 127–137.

[12] Brian Knott, Shobha Venkataraman, Awni Hannun, Shubhabrata Sengupta, Mark Ibrahim, and Laurens van der Maaten. 2021. CrypTen: Secure Multi-Party Computation Meets Machine Learning. In Advances in Neural Information Processing Systems, A. Beygelzimer, Y. Dauphin,
P. Liang, and J. Wortman Vaughan (Eds.).

[13] Ahmed Kosba, Andrew Miller, Elaine Shi, Zikai Wen, and Charalampos Papamanthou. 2016. Hawk: The Blockchain Model of Cryptography and Privacy-Preserving Smart Contracts. In 2016 IEEE Symposium on Security and Privacy (SP). 839–858.

[14] Dan Kovenock, Florian Morath, and Johannes Münster. 2015. Information sharing in contests. Journal of Economics & Management Strategy 24 (2015), 570–596. Issue 3.

[15] Patrick LaVictoire, Benja Fallenstein, Eliezer Yudkowsky, Mihaly Barasz, Paul Christiano, and Marcello Herreshoff. 2014. Program Equilibrium in the Prisoner’s Dilemma via Löb’s Theorem. In Workshops at the Twenty-Eighth AAAI Conference on Artificial Intelligence.

[16] Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh, and Dhruv Batra. 2017. Deal or No Deal? End-to-End Learning for Negotiation Dialogues.

[17] Giorgio Martini. 2018. Multidimensional Disclosure.

[18] Paul Milgrom and John Roberts. 1986. Relying on the information of interested parties. The RAND Journal of Economics (1986), 18–32.

[19] Paul R Milgrom. 1981. Good news and bad news: Representation theorems and applications. The Bell Journal of Economics (1981), 380–391.

[20] Roger B Myerson and Mark A Satterthwaite. 1983. Efficient mechanisms for bilateral trading. Journal of economic theory 29, 2 (1983), 265–281.

[21] Caspar Oesterheld. 2019. Robust program equilibrium. Theory and Decision 86, 1 (2019), 143–159.

[22] Caspar Oesterheld and Vincent Conitzer. 2021. Safe Pareto Improvements for Delegated Game Playing. In Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems. 983–991.

[23] Masahiro Okuno-Fujiwara, Andrew Postlewaite, and Kotaro Suzumura. 1990. Strategic Information Revelation. The Review of Economic Studies 57, 1 (1990), 25–47.

[24] Michael Peters and Balázs Szentes. 2012. Definable and Contractible Contracts. Econometrica 80 (2012), 363–411.

[25] Ariel Rubinstein. 1998. Modeling Bounded Rationality. The MIT Press.

[26] Hyun Song Shin. 1994. The burden of proof in a game of persuasion. Journal of Economic Theory 64, 1 (1994), 253–264.

[27] Branislav L Slantchev and Ahmer Tarar. 2011. Mutual optimism as a rationalist explanation of war. American Journal of Political Science 55, 1 (2011), 135–148.

[28] Julian Stastny, Maxime Riché, Alexander Lyzhov, Johannes Treutlein, Allan Dafoe, and Jesse Clifton. 2021. Normative Disagreement as a Challenge for Cooperative AI. arXiv:2111.13872 [cs.MA]

[29] Moshe Tennenholtz. 2004. Program equilibrium. Games and Economic Behavior 49, 2 (2004), 363–373.

[30] Hal R Varian. 2010. Computer mediated transactions. American Economic Review 100, 2 (2010), 1–10.

A Partial Unraveling Example

Consider the following game of strategic information revelation. We will show that in this game, there is a perfect Bayesian equilibrium that is inefficient, and there is an efficient payoff profile that is not IC. (That is, in this game, unconditional and partial type revelation and the framework of Forges [5] are not sufficient to achieve efficiency.) This example is inspired by the model in Martini [17].

Player 2 is a village that lives around the base of a treacherous mountain (i.e., along the left and bottom sides of [0,1]^2). Their warriors are camped somewhere on the mountain, with coordinates (\theta_x, \theta_y). Player 1 has no information on the warriors' location, hence the prior is (\theta_x, \theta_y) \sim \text{Unif}[0,1]^2. But they know that warriors at higher altitudes are tougher; strength is proportional to \min\{\theta_x, \theta_y\}. As in Example 3.1, player 1 can offer a split (s,1-s) of disputed territory. If the players fight, then player 1 will send in paratroopers at a location (t_x, t_y) to fight player 2's warriors at a cost proportional to their strength \min\{\theta_x, \theta_y\}. They want to get as close as possible to minimize exposure to the elements, consumption of rations, etc. (i.e., minimize the squared distance (t_x - \theta_x)^2 + (t_y - \theta_y)^2). Meanwhile, player 2 wants the paratroopers to land as far from their village as possible, i.e., they want to maximize \min\{t_x, t_y\}. Player 2 wins the ensuing battle with probability equal to their army's strength, i.e., \min\{\theta_x, \theta_y\}.

Formally, the game is as follows. Only player 2 has private information, (\theta_x, \theta_y) \sim \text{Unif}[0,1]^2. Player 2 has the unrestricted revelation action set \mathcal{R}(t_2) = \mathcal{T}(t_2). First, player 2 chooses \Theta_2 \in \mathcal{R}(t_2). Then player 1 chooses s \in [0,1]. Player 2 can either accept or reject s. If player 2 accepts, the pair of payoffs is (s, 1-s). Otherwise, player 1 plays (t_x, t_y) \in [0,1]^2, and for c_1, c_2 > 0:

    \begin{align*} u_1 &= 1 - 2\min\{\theta_x, \theta_y\} - (t_x - \theta_x)^2 - (t_y - \theta_y)^2 - c_1 \\ u_2 &= \min\{t_x, t_y\} + \min\{\theta_x, \theta_y\} - c_2 \end{align*}

Let f^+(x) = \max\{0, f(x)\} for any function f. Define:

    \begin{align*} t^*(s) &= \begin{cases} \frac{3-\sqrt{5}}{2},& \text{if } s > c_2 - \frac{3-\sqrt{5}}{2} + 1\\ \frac{(3-\sqrt{5})(c_2 - s - 1)}{2} + 1,& \text{otherwise}. \end{cases}\\ r(s) &= 1 - s - t^*(s) + c_2 \\ q_{U_1(s)}(\theta_x, \theta_y) &= \frac{\mathbb{I}[\min\{\theta_x, \theta_y\} \leq t^*(s)]}{t^*(s)(2 - t^*(s))} \\ q_{U_2(s)}(\theta_x, \theta_y) &= \frac{\mathbb{I}[\min\{\theta_x, \theta_y\} \in (r^+(s), t^*(s)]]}{(t^*(s) - r^+(s))(2 - t^*(s) - r^+(s))} \\ p(s) &= \mathbb{I}[r(s) \geq t^*(s)] + \mathbb{I}[r(s) < t^*(s)] \cdot \frac{r^+(s)(2 - r^+(s))}{t^*(s)(2 - t^*(s))} \\ m_1(s) &= \frac{(r^+(s)^4 - r^+(s)^3 - r^+(s)) - (t^*(s)^4 - t^*(s)^3 - t^*(s))}{3(t^*(s) - r^+(s))(2 - t^*(s) - r^+(s))} \\ m_2(s) &= \frac{t^*(s)^2 - r^+(s)^2 - \frac{2}{3}(t^*(s)^3 - r^+(s)^3)}{(t^*(s) - r^+(s))(2 - t^*(s) - r^+(s))} \\ s^* &= \textrm{arg\,max}_s sp(s) + (1 - 2m_2(s) - 2(m_1(s) - t^*(s)^2) - c_1)(1-p(s)) \\ s(\theta_x,\theta_y) &= 1 - 2\min\{\theta_x, \theta_y\} + c_2 \end{align*}
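As a sanity check on these definitions, the snippet below (an illustrative transcription of ours, with c_2 = 0.1 as in Proposition 3; the function names are not from the text) evaluates t^*(s), r(s), and p(s) at s = 1, the equilibrium offer of Proposition 3:

```python
import math

# Direct transcription, as a sanity check, of the threshold t*(s), the
# acceptance cutoff r(s), and the acceptance probability p(s). Areas of the
# L-shaped regions {min(theta_x, theta_y) <= a} in the unit square equal
# a * (2 - a), which is where the p(s) ratio comes from.

C2 = 0.1
T_FIX = (3 - math.sqrt(5)) / 2  # the constant (3 - sqrt(5)) / 2 ≈ 0.382

def t_star(s):
    if s > C2 - T_FIX + 1:
        return T_FIX
    return (3 - math.sqrt(5)) * (C2 - s - 1) / 2 + 1

def r(s):
    return 1 - s - t_star(s) + C2

def p(s):
    rp = max(0.0, r(s))
    if r(s) >= t_star(s):
        return 1.0
    return rp * (2 - rp) / (t_star(s) * (2 - t_star(s)))

print(t_star(1.0))  # (3 - sqrt(5)) / 2 ≈ 0.382
print(r(1.0))       # ≈ -0.282 < 0, so r^+(1) = 0
print(p(1.0))       # 0.0: no type accepts the offer s = 1
```

This is consistent with the proof below: at s^* = 1 the acceptance cutoff is negative, so every type rejects and play proceeds to the fighting stage.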

Then, we claim:

PROPOSITION 3. Let c_1 = c_2 = 0.1. Let player 1's strategy be s^* then (t^*(s), t^*(s)) conditional on a given s if player 2 does not reveal their type, otherwise s(\theta_x, \theta_y) then (\theta_x, \theta_y). Let player 2's strategy be to reveal any and only types for which \min\{\theta_x, \theta_y\} > t^*(s^*), and to accept any and only s \leq 1 - t^*(s) - \min\{\theta_x, \theta_y\} + c_2. Let player 1's belief update to q_{U_1(s^*)} conditional on player 2 not revealing their type, and to q_{U_2(s)} conditional on player 2 not revealing their type and rejecting s.

Then these strategies and beliefs are a perfect Bayesian equilibrium. Further, there exist \theta_x, \theta_y such that this equilibrium is inefficient, and the payoff profile (1 - \min\{\theta_x, \theta_y\} - t^*(s^*) + c_2, \min\{\theta_x, \theta_y\} + t^*(s^*) - c_2) is (1) a Pareto improvement on the equilibrium payoff and (2) not IC.

Figure 2: Player 2's type space, with the non-unraveled types conditional on a rejection of s given in the shaded region. The black dot denotes (\mathbb{E}_{q_{U_2(s)}} \theta_x, \mathbb{E}_{q_{U_2(s)}} \theta_y), player 1's optimal (t_x, t_y) given a posterior q_{U_2(s)} that is uniform on the shaded region. Unraveling stabilizes into this region because any types for which \min\{\theta_x, \theta_y\} > \min\{\mathbb{E}_{q_{U_2(s)}} \theta_x, \mathbb{E}_{q_{U_2(s)}} \theta_y\} prefer to reveal, otherwise they prefer not to reveal, and types for which \min\{\theta_x, \theta_y\} < r(s) prefer to accept s.

PROOF. We proceed by backward induction. If player 2 has not revealed their type and has rejected s, then given beliefs q_{U_2(s)}, we solve for the optimal (t_x, t_y). Player 1's expected payoff is 1 - 2\mathbb{E}_{q_{U_2(s)}}(\min\{\theta_x, \theta_y\}) - \mathbb{E}_{q_{U_2(s)}}((t_x - \theta_x)^2 + (t_y - \theta_y)^2) - c_1. The squared loss is minimized at (\mathbb{E}_{q_{U_2(s)}} \theta_x, \mathbb{E}_{q_{U_2(s)}} \theta_y). This is equivalent to the average of the centers of rectangles whose union composes the region \min\{\theta_x, \theta_y\} \in [r^+(s), t^*(s)] (see Figure 2), weighted by the areas of these rectangles, which can be shown to be:

    \begin{align*} t_x &= t_y \\ &= \frac{(1-r^+(s))(t^*(s) + r^+(s)) + 1 - t^*(s)^2}{2(2 - t^*(s) - r^+(s))} \\ &= t^*(s). \end{align*}

Thus (t^*(s), t^*(s)) is a best response. If player 2 has revealed their type and rejected s, then player 1's payoff is maximized at (t_x, t_y) = (\theta_x, \theta_y).
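The fixed-point property underlying this centroid computation can be verified by Monte Carlo (a sketch of ours for the case r^+(s) = 0): the centroid of the region \{\min\{\theta_x, \theta_y\} \leq t\} under the uniform prior has both coordinates equal to t precisely when t = (3-\sqrt{5})/2.

```python
import random

# Monte Carlo check of the fixed point behind t*: with r^+(s) = 0, the
# centroid of the region {min(theta_x, theta_y) <= t}, uniform over the
# unit square, has coordinates equal to t exactly at t = (3 - sqrt(5)) / 2.

rng = random.Random(0)
t = (3 - 5 ** 0.5) / 2  # ≈ 0.382
xs = []
while len(xs) < 200_000:
    x, y = rng.random(), rng.random()
    if min(x, y) <= t:     # keep only the non-unraveled types
        xs.append(x)
centroid_x = sum(xs) / len(xs)
print(centroid_x)  # ≈ 0.382 = t: player 1's optimal t_x is t* itself
```

By symmetry the same holds for the y-coordinate, so (t^*, t^*) is indeed a best response to the posterior supported on this region.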

Next, player 2's best response to any s is to accept if and only if the acceptance payoff exceeds the rejection payoff given player 1's strategy, that is, 1 - s \geq t^*(s) + \min\{\theta_x, \theta_y\} - c_2.

Then, given beliefs q_{U_1(s)} for each s, player 1's optimal s if player 2 does not reveal is:

    \begin{align*} s^* &= \textrm{arg\,max}_s \mathbb{E}_{q_{U_1(s)}}(u_1 | \text{accept } s)P(\text{accept } s) + \mathbb{E}_{q_{U_2(s)}}(u_1 | \text{reject } s)(1-P(\text{accept } s)). \end{align*}

Given player 2's strategy,

    \[P(\text{accept } s) = P(1 - s \geq t^*(s) + \min\{\theta_x, \theta_y\} - c_2) = P(\min\{\theta_x, \theta_y\} \leq r(s)).\]

Since q_{U_1(s)} is uniform on \min\{\theta_x, \theta_y\} \leq t^*(s), this probability is given by the ratio of the areas of the regions \min\{\theta_x, \theta_y\} \leq r^+(s) and \min\{\theta_x, \theta_y\} \leq t^*(s). Thus P(\text{accept } s) = p(s). We have:

    \begin{align*} \mathbb{E}_{q_{U_2(s)}}(u_1 | \text{reject } s) &= 1 - 2\mathbb{E}_{q_{U_2(s)}}(\min\{\theta_x, \theta_y\}) - \mathbb{E}_{q_{U_2(s)}}((t^*(s) - \theta_x)^2 + (t^*(s) - \theta_y)^2) - c_1 \\ &= 1 - 2\mathbb{E}_{q_{U_2(s)}}(\min\{\theta_x, \theta_y\}) - \mathbb{V}_{q_{U_2(s)}}(\theta_x) - \mathbb{V}_{q_{U_2(s)}}(\theta_y) - c_1. \end{align*}

It can be shown (Lemma 3) that \mathbb{E}_{q_{U_2(s)}}(\min\{\theta_x, \theta_y\}) = m_2(s) and \mathbb{V}_{q_{U_2(s)}}(\theta_x) = \mathbb{V}_{q_{U_2(s)}}(\theta_y) = m_1(s) - t^*(s)^2. Therefore s^* is of the form given above.

If player 2 reveals, in the analysis above we now have:

    \begin{align*} \mathbb{E}_{q_{U_2(s)}}(u_1 | \text{reject } s) &= 1 - 2\min\{\theta_x, \theta_y\} - c_1 \\ P(\text{accept } s) &= \mathbb{I}[1 - s \geq 2 \min\{\theta_x, \theta_y\} - c_2] \\ \mathbb{E}(u_1) &= \begin{cases} s,& \text{if } s \leq 1 - 2\min\{\theta_x, \theta_y\} + c_2\\ 1 - 2\min\{\theta_x, \theta_y\} - c_1,& \text{otherwise}. \end{cases} \end{align*}

Thus s(\theta_x, \theta_y) = 1 - 2\min\{\theta_x, \theta_y\} + c_2 is optimal, since \mathbb{E}(u_1) increases with s up to 1 - 2\min\{\theta_x, \theta_y\} + c_2, after which it drops to 1 - 2\min\{\theta_x, \theta_y\} - c_1 < 1 - 2\min\{\theta_x, \theta_y\} + c_2.
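The shape of this objective (linear in s up to the acceptance threshold, then a discrete drop) can be verified on a grid of offers; the values of \min\{\theta_x,\theta_y\}, c_1, and c_2 below are hypothetical.

```python
def u1_revealed(s, m, c1, c2):
    # Player 1's payoff against a revealed type with min{theta_x, theta_y} = m:
    # the offer is accepted iff 1 - s >= 2m - c2, i.e. s <= 1 - 2m + c2.
    return s if s <= 1 - 2 * m + c2 else 1 - 2 * m - c1

m, c1, c2 = 0.3, 0.05, 0.05     # illustrative values
s_star = 1 - 2 * m + c2         # claimed optimal offer: 0.45
grid = [i / 1000 for i in range(1001)]
best = max(grid, key=lambda s: u1_revealed(s, m, c1, c2))
print(best)  # 0.45: the payoff rises with s up to the threshold, then drops
```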

It can be shown that s^* = 1, and so r(s^*) < 0 and 1-s^* < t^*(s^*) + \min\{\theta_x, \theta_y\} - c_2 for any type. Given these responses, if player 2 does not reveal their type, their payoff is t^*(s^*) + \min\{\theta_x, \theta_y\} - c_2. If player 2 reveals their type, since we have shown that s(\theta_x, \theta_y) = 1 - 2\min\{\theta_x, \theta_y\} + c_2, player 2's payoff is 2\min\{\theta_x, \theta_y\} - c_2, and so player 2 prefers to reveal if and only if \min\{\theta_x, \theta_y\} > t^*(s^*).

Finally, by the above strategy for player 2's type revelation, if player 2 does not reveal, to be consistent player 1 must update to the uniform distribution on the region defined by \min\{\theta_x, \theta_y\} \leq t^*(s^*). Thus q_{U_1(s^*)} is consistent. If player 2 also rejects s^*, player 1 knows that 1-s^* < t^*(s^*) + \min\{\theta_x, \theta_y\} - c_2, that is, \min\{\theta_x, \theta_y\} > r(s^*). Thus the updated belief is uniform on \min\{\theta_x, \theta_y\} \in (r(s^*), t^*(s^*)], so q_{U_2(s^*)} is consistent. This proves that the proposed strategy profile and beliefs are a perfect Bayesian equilibrium.

Given s^* = 1, all player 2 types reject s^* (offered if player 2 does not reveal) in equilibrium, since t^*(1) - c_2 > 0. The equilibrium payoffs for any types that do not reveal are:

    \begin{align*} u_1 &= 1 - 2\min\{\theta_x, \theta_y\} - (t^*(1) - \theta_x)^2 - (t^*(1) - \theta_y)^2 - c_1 \\ u_2 &= t^*(1) + \min\{\theta_x, \theta_y\} - c_2 \end{align*}

Consider the payoff profile

    \[(s_P(\theta_x,\theta_y), 1-s_P(\theta_x,\theta_y)) = (1 - t^*(1) - \min\{\theta_x, \theta_y\} + c_2, t^*(1) + \min\{\theta_x, \theta_y\} - c_2),\]

induced by the strategy profile in which player 1 offers s = s_P(\theta_x,\theta_y) and player 2 accepts any s \leq s_P(\theta_x,\theta_y). This is feasible because player 2 declines to reveal only if \min\{\theta_x, \theta_y\} \leq t^*(1), and c_2 < t^*(1) < \frac{1}{2}, so s_P(\theta_x,\theta_y) \in (0, 1). For this to be a Pareto improvement on the perfect Bayesian equilibrium, it is sufficient that \min\{\theta_x, \theta_y\} > t^*(1) - c_1 - c_2. This payoff profile is not IC, however, because player 2's payoff increases with \min\{\theta_x, \theta_y\}: any player 2 for which \min\{\theta_x, \theta_y\} < t^*(1) can profit by playing the strategy profile above as if they were a type (\theta_x', \theta_y') with \min\{\theta'_x, \theta'_y\} > \min\{\theta_x, \theta_y\}.
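The incentive-compatibility failure can be made concrete with a small numeric example; the values of t^*(1) and c_2 below are illustrative dyadic stand-ins (exact in binary floating point), not values derived from the model.

```python
def s_P(m, t_star, c2):
    # Offer made to a (reported) type with min{theta_x, theta_y} = m
    # under the candidate profile: s_P = 1 - t^*(1) - m + c_2.
    return 1 - t_star - m + c2

# Hypothetical values satisfying c_2 < t^*(1) < 1/2.
t_star, c2 = 0.375, 0.125
m_true, m_report = 0.125, 0.25    # a low type mimicking a higher type

honest = 1 - s_P(m_true, t_star, c2)   # accept the truthful offer
mimic = 1 - s_P(m_report, t_star, c2)  # accept the offer meant for m_report
print(honest, mimic)  # 0.375 0.5: mimicking strictly pays
```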

LEMMA 3. Given q_{U_2(s)} as defined above, \mathbb{E}_{q_{U_2(s)}}(\min\{\theta_x, \theta_y\}) = m_2(s) and \mathbb{V}_{q_{U_2(s)}}(\theta_x) = \mathbb{V}_{q_{U_2(s)}}(\theta_y) = m_1(s) - t^*(s)^2.

PROOF. We have:

    \begin{align*} \mathbb{E}_{q_{U_2(s)}}(\min\{\theta_x, \theta_y\}) &= \frac{\int_0^1 \int_0^1 \min\{\theta_x, \theta_y\} \mathbb{I}[\min\{\theta_x, \theta_y\} \in [r^+(s),t^*(s)]] d\theta_x d\theta_y}{(t^*(s) - r^+(s))(2 - t^*(s) - r^+(s))} \\ &= \frac{\int_0^1 \int_0^{\theta_y} \theta_x \mathbb{I}[\theta_x \in [r^+(s),t^*(s)]] d\theta_x d\theta_y + \int_0^1 \int_{\theta_y}^1 \theta_y \mathbb{I}[\theta_y \in [r^+(s),t^*(s)]] d\theta_x d\theta_y}{(t^*(s) - r^+(s))(2 - t^*(s) - r^+(s))} \\&= \frac{\int_0^1 \int_{\min\{\theta_y,t^*(s), r^+(s)\}}^{\min\{\theta_y, t^*(s)\}} \theta_x d\theta_x d\theta_y + \int_{r^+(s)}^{t^*(s)} \int_{\theta_y}^{1} \theta_y d\theta_x d\theta_y}{(t^*(s) - r^+(s))(2 - t^*(s) - r^+(s))} \\ &= \frac{\frac{1}{2}\int_0^1 (\min\{\theta_y, t^*(s)\}^2 - \min\{\theta_y,t^*(s),r^+(s)\}^2) d\theta_y + \int_{r^+(s)}^{t^*(s)} \theta_y (1 - \theta_y) d\theta_y}{(t^*(s) - r^+(s))(2 - t^*(s) - r^+(s))} \\ &= \frac{\frac{1}{2}\int_0^{t^*(s)} \theta_y^2 d\theta_y + \frac{1}{2}\int_{t^*(s)}^1 t^*(s)^2 d\theta_y - \frac{1}{2}\int_0^{r^+(s)} \theta_y^2 d\theta_y - \frac{1}{2}\int_{r^+(s)}^1 r^+(s)^2 d\theta_y}{(t^*(s) - r^+(s))(2 - t^*(s) - r^+(s))} \\ &+ \frac{\frac{1}{2}(t^*(s)^2 - r^+(s)^2) - \frac{1}{3}(t^*(s)^3 - r^+(s)^3)}{(t^*(s) - r^+(s))(2 - t^*(s) - r^+(s))} \\ &= \tfrac{\frac{1}{6}t^*(s)^3 + \frac{1}{2}(1-t^*(s)) t^*(s)^2 - \frac{1}{6}r^+(s)^3 - \frac{1}{2}(1-r^+(s)) r^+(s)^2 + \frac{1}{2}(t^*(s)^2 - r^+(s)^2) - \frac{1}{3}(t^*(s)^3 - r^+(s)^3)}{(t^*(s) - r^+(s))(2 - t^*(s) - r^+(s))} \\&= m_2(s). \end{align*}

We showed above that \mathbb{E}_{q_{U_2(s)}} \theta_x = \mathbb{E}_{q_{U_2(s)}} \theta_y = t^*(s). Further,

    \[\mathbb{V}_{q_{U_2(s)}} \theta_x = \mathbb{E}_{q_{U_2(s)}} \theta_x^2 - (\mathbb{E}_{q_{U_2(s)}} \theta_x)^2,\]

and


    \begin{align*} \mathbb{E}_{q_{U_2(s)}} \theta_x^2 &= \frac{\int_0^1 \int_0^1 \theta_x^2 \mathbb{I}[\min\{\theta_x, \theta_y\} \in [r^+(s),t^*(s)]] d\theta_x d\theta_y}{(t^*(s) - r^+(s))(2 - t^*(s) - r^+(s))} \\ &= \frac{\int_0^1 \int_0^1 \theta_x^2 \mathbb{I}[\min\{\theta_x, \theta_y\} \geq r^+(s)] d\theta_x d\theta_y - \int_0^1 \int_0^1 \theta_x^2 \mathbb{I}[\min\{\theta_x, \theta_y\} \geq t^*(s)] d\theta_x d\theta_y}{(t^*(s) - r^+(s))(2 - t^*(s) - r^+(s))} \\ &= \frac{\int_{r^+(s)}^1 \int_{r^+(s)}^1 \theta_x^2 d\theta_x d\theta_y - \int_{t^*(s)}^1 \int_{t^*(s)}^1 \theta_x^2 d\theta_x d\theta_y}{(t^*(s) - r^+(s))(2 - t^*(s) - r^+(s))} \\&= \frac{\frac{1}{3}(1 - r^+(s))(1 - r^+(s)^3) - \frac{1}{3}(1 - t^*(s))(1 - t^*(s)^3)}{(t^*(s) - r^+(s))(2 - t^*(s) - r^+(s))} \\ &= m_1(s). \end{align*}
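Both closed forms (the final expressions for m_2(s) and m_1(s) above) can be spot-checked by Monte Carlo on the truncated uniform region; as before, a and b are hypothetical stand-ins for r^+(s) and t^*(s).

```python
import random

def m2_closed_form(a, b):
    # E[min{x, y}] on {min in [a, b]}, matching the final expression above.
    num = (b**3 / 6 + (1 - b) * b**2 / 2 - a**3 / 6 - (1 - a) * a**2 / 2
           + (b**2 - a**2) / 2 - (b**3 - a**3) / 3)
    return num / ((b - a) * (2 - b - a))

def m1_closed_form(a, b):
    # E[x^2] on the same region, matching the final expression above.
    num = ((1 - a) * (1 - a**3) - (1 - b) * (1 - b**3)) / 3
    return num / ((b - a) * (2 - b - a))

def monte_carlo(a, b, n=200_000, seed=2):
    # Sample E[min{x, y}] and E[x^2] on the region by rejection sampling.
    rng = random.Random(seed)
    s_min, s_x2, count = 0.0, 0.0, 0
    while count < n:
        x, y = rng.random(), rng.random()
        if a <= min(x, y) <= b:
            s_min += min(x, y)
            s_x2 += x * x
            count += 1
    return s_min / n, s_x2 / n

a, b = 0.2, 0.6
mc_min, mc_x2 = monte_carlo(a, b)
print(m2_closed_form(a, b), mc_min)  # should agree to a few decimals
print(m1_closed_form(a, b), mc_x2)
```

With a = 0 and b = 1 both formulas reduce to 1/3, the known values of \mathbb{E}(\min\{x,y\}) and \mathbb{E}(x^2) on the unit square.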

B Proof of Theorem 2

Fix the programs of players i \neq j as (p_i)_{i \neq j}. Suppose player j uses p_j. Given this assumption, we omit the subscripts of p(y) and p(r). Let p^L(y) and p^L(r) respectively denote calls to p(y) and p(r) made at level L. If U_L < \epsilon and U_{L+1} < \epsilon for some L reached in the call stack, then every call to p^L(y) and p^{L+1}(y) immediately returns \mathbf{1}. Consequently, every call to p^L(r), which must be a parent call to p^{L+1}(y), returns \tau_i^{c}(\boldsymbol{t}) because line 5 in \epsilon\texttt{GroundedFairSIRBot}(r) evaluates to \texttt{True}. (Notice that the shared random variables \{U_L\} are essential: if the programs unconditionally cooperated using independently sampled variables, an exponentially increasing number of variables would each need to be less than \epsilon for all calls at a given level to return the cooperative output.) Let \mathcal{U}_L be the event that U_L < \epsilon and U_{L+1} < \epsilon, and \mathcal{U}'_K be the event that U_{2K-1} < \epsilon and U_{2K} < \epsilon. Thus, for the program profile to terminate in finite time, it suffices to show that with probability 1 there exists a finite L such that \mathcal{U}_L holds. The events \mathcal{U}'_K for K=1,2,\dots are independent because they involve disjoint pairs of variables, so we have:

    \begin{align*} P(\cap_{L=1}^\infty \mathcal{U}_L^c) &\leq P(\cap_{K=1}^\infty {\mathcal{U}'_K}^c) \\ &= \prod_{K=1}^\infty (1-P(\mathcal{U}'_K)) \\ &= \lim_{N \to \infty} (1-\epsilon^2)^{N} \\ &= 0. \end{align*}

Since \cap_{L=1}^\infty \mathcal{U}_L^c is the complement of the event we wanted to guarantee, this proves termination with probability 1. Further, the event \mathcal{U}_L is sufficient for every call of p^{L-1}(y) and p^{L-1}(r) to return \mathbf{1} and \tau_i^{c}(\boldsymbol{t}), respectively, and this holds for all levels less than L. Therefore all base calls of the programs in the proposed profile return the corresponding \tau_i^{c}(\boldsymbol{t}) with probability 1.
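The almost-sure-termination argument can be illustrated by simulation, abstracting the program recursion to the sequence of shared uniforms U_1, U_2, \dots and recording the first level at which two consecutive draws fall below \epsilon; the function name and the value of \epsilon below are hypothetical.

```python
import random

def first_cooperative_level(eps, rng):
    # Smallest L with U_L < eps and U_{L+1} < eps, drawing the shared
    # uniforms U_1, U_2, ... one at a time.
    prev = rng.random()          # U_1
    level = 1
    while True:
        cur = rng.random()       # U_{level + 1}
        if prev < eps and cur < eps:
            return level
        prev = cur
        level += 1

eps = 0.3                        # illustrative (the theorem takes eps small)
rng = random.Random(0)
depths = [first_cooperative_level(eps, rng) for _ in range(10_000)]
print(max(depths))               # every trial stops at a finite level
print(sum(depths) / len(depths)) # roughly 1/eps + 1/eps^2 - 1 (about 13.4 here)
```

That the loop returns in every one of the 10,000 trials mirrors the probability-1 existence of a finite L with \mathcal{U}_L.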

Now suppose player j uses p_j' \neq p_j. Let L^* be the smallest finite level such that U_{L^*} < \epsilon, U_{L^*+1} < \epsilon, and U_{L^*+2} < \epsilon (which exists with probability 1 by a similar argument to that above). Then all p_i^{L^*}(y) and p_i^{L^*+1}(y) for i \neq j return \mathbf{1}. Further, every p_i^{L^*}(r) for i \neq j calls the truncated programs [p_k](y) for k \neq i, guaranteed to terminate by definition, thus p_i^{L^*}(r) terminates with either \tau_i^{c}(\boldsymbol{t}) or \underline{\tau}_i^{c}. But because U_{L^*+2} < \epsilon also guarantees that p_i^{L^*+1}(r) terminates, all calls to the programs of i \neq j made by player j's programs terminate. Thus all base calls of programs in this profile with one deviation terminate with probability 1.

We now consider the possible cases. Suppose U_1 \geq \epsilon and U_2 \geq \epsilon. First, note that any players i \neq j using \underline{\tau}_i^{c} know each other's types. To see this, note that all calls to p_i^{L^*}(y) for i \neq j return \mathbf{1}. So any call to p_i^{L^*-1}(y) for i \neq j will reveal to player k \neq i if either a_k = \underline{\tau}_k^{c} or U_{L^*} < \epsilon. The second condition is satisfied by the definition of L^*. Inductively applying this argument for L \leq L^* - 1, note that if U_{L+1} \geq \epsilon, we will only have a_k \neq \underline{\tau}_k^{c} if U_L < \epsilon (satisfying line 11 of \epsilon\texttt{GroundedFairSIRBot}(r)), but then this is sufficient to have p_i^{L}(y) return \mathbf{1} (line 21). If player j does not reveal their type, all players i \neq j return \underline{\tau}_i^{c}. Otherwise, \epsilon\texttt{GroundedFairSIRBot}(r) proceeds to line 13 for all players i \neq j. If player j plays \tau_{j}^{c}(\boldsymbol{t}), then all other players also play \tau_i^{c}(\boldsymbol{t}), giving the target payoff profile. Otherwise, all players i \neq j return \underline{\tau}_i^{c}. We therefore have that with probability at least (1-\epsilon)^2, all players i \neq j use \underline{\tau}_i^{c} whenever the outputs of p_j' do not match those of p_j. Hence:

    \begin{align*} & \mathbb{E}_{\boldsymbol{t}_{-j} \sim q(\cdot | t_j)}(u_j(\boldsymbol{t}, \boldsymbol{\tau})|p_j',p_{-j}) \\ &\qquad \leq (1 - (1-\epsilon)^{2}) \overline{u} \\ &\qquad + (1-\epsilon)^{2}\mathbb{E}_{\boldsymbol{t}_{-j} \sim q(\cdot | t_j)}(u_j(\boldsymbol{t}, \boldsymbol{\tau})|p_j,p_{-j}) \\ &\qquad \leq \delta + \mathbb{E}_{\boldsymbol{t}_{-j} \sim q(\cdot | t_j)}(u_j(\boldsymbol{t}, \boldsymbol{\tau})|p_j,p_{-j}). \end{align*}

  1. A player who chooses to "not commit" submits a device that is not a function of the other players' devices. In this case, the other devices can only condition on this non-commitment choice, not on the particular action this player chooses.