Structure theorems for game trees

Contributed by Robert Wilson
Abstract
Kohlberg and Mertens [Kohlberg, E. & Mertens, J. (1986) Econometrica 54, 1003–1039] proved that the graph of the Nash equilibrium correspondence is homeomorphic to its domain when the domain is the space of payoffs in normal-form games. A counterexample disproves the analog for the equilibrium outcome correspondence over the space of payoffs in extensive-form games, but we prove an analog when the space of behavior strategies is perturbed so that every path in the game tree has nonzero probability. Without such perturbations, the graph is the closure of the union of a finite collection of its subsets, each diffeomorphic to a corresponding path-connected open subset of the space of payoffs. As an application, we construct an algorithm for computing equilibria of an extensive-form game with a perturbed strategy space, and thus approximate equilibria of the unperturbed game.
1. Introduction
The most useful general result in game theory is the structure theorem of Kohlberg and Mertens (1). It characterizes the topological structure of the graph of the Nash equilibrium correspondence over the space of payoffs from finite normal-form games. Its proof is very simple, yet its corollaries include the existence of equilibria and strategically stable sets of equilibria, and the resulting index theory shows that a component with nonzero index contains a strategically stable set and that an equilibrium with negative index is typically dynamically unstable. Its practical applications include efficient algorithms for computing equilibria of games in normal form. Here we develop analogs for the graph of equilibrium outcomes over a domain that is the space of payoffs at final nodes of a game tree with perfect recall. As for normal-form games, these results yield an algorithm for computing equilibria of games in extensive form.
Kohlberg and Mertens (1) prove that the equilibrium graph is homeomorphic to the space of players' payoffs from pure-strategy profiles of the normal form. For a fixed game tree, however, the equilibrium graph cannot be homeomorphic to the space of payoffs at final nodes, because such games typically have continua of equilibria. Nevertheless, Kreps and Wilson (2) prove that a generic extensive-form game has finitely many equilibrium components, and all equilibria in each component have the same outcome, i.e., the same probability distribution on final nodes. Conceivably the analog of the Kohlberg–Mertens characterization might be true for the graph of equilibrium outcomes, but we show in Section 3 that this analog is false. We establish two weaker characterizations. In Section 4, for a partition of the graph corresponding to the subtrees of equilibrium paths, we construct a homeomorphism from each part into a subset of the space of games; the graph is then the closure of the union of these parts. In Section 5 we prove an exact analog when the tree has no zero-probability events. This result applies to any construction based on perturbing players' simplices of mixed strategies, e.g., the graphs of ɛ-perfect and ɛ-proper equilibria. Section 6 develops an efficient algorithm for computing equilibria of extensive-form games.
2. Formulation
We consider the space 𝒢 of extensive-form games obtained by assigning payoffs to the final nodes of a finite game tree Γ ≡ (T, ≺, U, N, P_{*}) with perfect recall. T is the set of nodes and ≺ is the irreflexive binary relation of precedence in the tree (T, ≺); that is, ≺ is acyclic and totally orders the predecessors {t′ | t′ ≺ t} of t. The subset of final nodes (those with no successors) is Z ⊂ T, U is the partition of T∖Z into information sets of the players and nature, N is the set of players, and P_{*}(z) > 0 is the probability that nature's actions do not exclude the final node z. U_{n} ⊂ U is the collection of information sets for player n ∈ N and A_{n}(u) is n's set of actions (branches of the tree) available at his information set u ∈ U_{n}. Let A_{n} ≡ ∪_{u}A_{n}(u) be the entire set of n's actions, each labeled differently. Write u ≺ z, or equivalently z ≻ u, if t ≺ z ∈ Z for some node t ∈ u, and write (u, i) ≺ z if t ≺ t′ ⪯ z for some node t′ that follows node t ∈ u and action i ∈ A_{n}(u). Similarly i ≺ i′ if i, i′ are actions at u, u′ with (u, i) ≺ u′. Perfect recall implies that each (U_{n}, ≺) is a tree. Player n's set of pure strategies is S_{n} ≡ {s : U_{n} → A_{n} | s(u) ∈ A_{n}(u)}, and his simplex of mixed strategies is Σ_{n} ≡ Δ(S_{n}). Kuhn (3) shows that in a game tree with perfect recall each player n can implement a mixture of pure strategies by a payoff-equivalent behavior strategy b_{n} = (b_{n}(u))_{u∈U_n} in which each b_{n}(u) ∈ Δ(A_{n}(u)) is a mixture of actions in A_{n}(u); i.e., b_{n}(i|u) is the conditional probability at u that n chooses i.
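Kuhn's construction is easy to see in a small numerical sketch. The Python fragment below assumes a hypothetical one-player tree (information set u1 with actions a and b, and, after a, information set u2 with actions c and d); the tree, labels, and probabilities are our illustrative assumptions, not part of the formulation above.

```python
# Hypothetical one-player tree (our example): at u1 choose a or b;
# after a, at u2 choose c or d.  Pure strategies: ac, ad, bc, bd.
def to_behavior(sigma):
    """Kuhn's construction: b_n(i|u) is the sigma-probability of choosing i
    at u, conditional on the pure strategy not excluding u."""
    p_a = sigma["ac"] + sigma["ad"]          # probability that u2 is reached
    return {("u1", "a"): p_a,
            ("u1", "b"): 1 - p_a,
            ("u2", "c"): sigma["ac"] / p_a if p_a else 0.5,
            ("u2", "d"): sigma["ad"] / p_a if p_a else 0.5}
```

For instance, the mixture putting weight 0.3 on each of ac and ad reaches the node after (a, c) with probability 0.3, just as the derived behavior strategy does.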
For the fixed tree Γ, the space of games is 𝒢 = ℜ^{N×Z}, where a game G ∈ 𝒢 assigns payoff G_{n}(z) to player n at final node z. The space of outcomes is Ω = Δ(Z), where an outcome P ∈ Ω assigns probability P(z) to z. Let ℰ ⊂ 𝒢 × Ω be the graph of pairs (G, P) for which P is the outcome induced by P_{*} and an equilibrium of the game G.
3. A Counterexample
A simple example shows that the analog of the Kohlberg–Mertens structure theorem cannot be true for every game tree. Consider the tree in which player 1 chooses an action in A_{1} = {T, B} and the game ends if 1 chooses T; otherwise, knowing that 1 chose B, 2 chooses an action in A_{2} = {L, R}. For this tree, the projection p : ℰ → 𝒢, p(G, P) = G, is a proper map; i.e., the inverse image of a compact set is compact. Therefore, if there exists a homeomorphism ℋ : ℰ → 𝒢, then the composite map p ○ ℋ^{−1} : 𝒢 → 𝒢 is also proper, so its local degree is the same at every game in 𝒢; see Dold (4), Section VIII.4.45. To establish a contradiction, it suffices to present examples at which p ○ ℋ^{−1} has different local degrees. Consider the games (in normal form). The game G_{33} has a unique equilibrium path BL that persists in a neighborhood of G_{33}, because B remains a strictly dominant strategy for player 1 and L remains the unique best reply for 2. Therefore, the local degree of p ○ ℋ^{−1} at G_{33} must be +1 or −1. The game G_{31} has the two equilibrium paths BL and T, and again all games in a neighborhood of G_{31} have these same two outcomes. Therefore the local degree of p ○ ℋ^{−1} at G_{31} must be −2, 0, or +2. This contradiction does not occur over the space of normal-form games. For instance, in that larger space, the local degree of p ○ ℋ^{−1} at G_{31} is +1: the index of the outcome T is 0 and the index of BL is +1, and the local degree at G_{31} is the sum of these indices. Similar counterexamples can be constructed for the graphs of subgame-perfect and sequential equilibrium outcomes. Further analysis reveals two ways that the manifold property of the space of games on the tree is not inherited by the graph of equilibrium outcomes. First, (G_{32}, T) has a neighborhood in ℰ that is a manifold with boundary.
Second, (G_{21}, T) is a point where the graph bifurcates over the segment 2 − ɛ < x < 2 + ɛ: if x < 2, then T is the unique equilibrium outcome, at x = 2 there is a continuum of equilibrium outcomes, and if x > 2, then both T and BL are equilibrium outcomes.
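The bifurcation at x = 2 can be checked numerically for one hypothetical assignment of payoffs consistent with this tree (our assumption, not the paper's matrices: player 1 receives 2 from T, x from BL, and 0 from BR; player 2 receives 1 from BL and 0 from BR). A brute-force grid search over approximate Nash equilibria:

```python
import itertools

def equilibrium_outcomes(x, grid=21, eps=1e-9):
    """Grid search for (approximate) Nash equilibria of the hypothetical game;
    returns the set of equilibrium probabilities of the outcome T.
    b1 = Prob(1 plays T), b2 = Prob(2 plays L after B)."""
    pts = [i / (grid - 1) for i in range(grid)]
    outcomes = set()
    for b1, b2 in itertools.product(pts, pts):
        u1 = lambda a1: 2 * a1 + (1 - a1) * x * b2   # player 1's expected payoff
        u2 = lambda a2: (1 - b1) * a2                # player 2's expected payoff
        if max(u1(a) for a in pts) - u1(b1) <= eps and \
           max(u2(a) for a in pts) - u2(b2) <= eps:
            outcomes.add(round(b1, 2))
    return outcomes
```

With these payoffs the search reproduces the pattern described above: a unique outcome T for x < 2, a continuum of outcomes at x = 2, and the two outcomes T and BL for x > 2 (the Nash outcome T for x > 2 is sustained by 2's off-path threat to play R).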
4. A Partial Structure Theorem for Games in Extensive Form
In this section, we prove a structure theorem for graphs of equilibrium outcomes on subtrees. For this section only, we assume that the space of games is 𝒢_{+} ≡ ℜ_{++}^{N×Z}, the set of games with strictly positive payoffs. This assumption is without loss of generality, because there exists a homeomorphism between 𝒢 and 𝒢_{+} that preserves the equilibrium correspondence. Indeed, there exists a monotonic function f : ℜ → ℜ_{++} such that f(x) goes to zero as x goes to −∞, and f(x) goes to +∞ as x goes to +∞. Then the map that sends G ∈ 𝒢 to G′ ∈ 𝒢_{+} given by G′_{n}(z) = G_{n}(z) + f(λ_{n}) − λ_{n}, where λ_{n} = min_{z}G_{n}(z), is a homeomorphism that preserves the equilibrium correspondence. It suffices therefore to use the graph Ē of equilibrium outcomes over 𝒢_{+}. Consider first the subset ℰ_{+} ≡ {(G, P) ∈ Ē | P(z) > 0 ∀z} of the graph over 𝒢_{+}, where all final nodes have positive probability. Define the map Φ : ℰ_{+} → 𝒢_{+}, Φ(G, P) = H, by H_{n}(z) = G_{n}(z)P(z).
Lemma 4.1.
Φ : ℰ_{+} → 𝒢_{+} is a homeomorphism.
Proof:
We construct a continuous map Ψ : 𝒢_{+} → ℰ_{+} and show that Ψ ○ Φ and Φ ○ Ψ are identities on ℰ_{+} and 𝒢_{+}, respectively, which immediately implies the result.
Define Ψ : 𝒢_{+} → ℰ_{+} as follows. Given any H ∈ 𝒢_{+}, first define H_{n}(u, i) ≡ ∑_{z≻(u,i)}H_{n}(z) and H_{n}(u) ≡ ∑_{i∈A_n(u)}H_{n}(u, i); next, define b_{n}(i|u) ≡ H_{n}(u, i)/H_{n}(u); then the outcome P is obtained from P_{*} and the behavior profile b; and finally, define G_{n}(z) ≡ H_{n}(z)/P(z). Each of these is positive and G ∈ 𝒢_{+} as required. The outcome P induces probabilities P(u) and P(u, i) of the events that the information set u is reached and that action i is taken there; and because P(u, i) > 0, also the conditional probability Q(z|u, i) = P(z)/P(u, i) if z ≻ (u, i). Thus H_{n}(u, i) = ∑_{z≻(u,i)}G_{n}(z)P(z) = P(u, i)∑_{z≻(u,i)}G_{n}(z)Q(z|u, i) ≡ P(u, i)G_{n}(u, i). Therefore G_{n}(u, i) = H_{n}(u, i)/P(u, i) = [b_{n}(i|u)H_{n}(u)]/[b_{n}(i|u)P(u)] = H_{n}(u)/P(u), which implies that G_{n}(u, i) = G_{n}(u) for each i ∈ A_{n}(u). From its definition above, G_{n}(u, i) is n's continuation value from action i at u in the game G with behavior profile b, so the fact that it is the same for each action i ∈ A_{n}(u) verifies that b is an equilibrium. Therefore, (G, P) ∈ ℰ_{+} as required for Ψ to be well defined. Obviously, Ψ is a continuous map. Also, by construction, Φ ○ Ψ is the identity on 𝒢_{+}. To complete the proof, it remains to be shown that Ψ ○ Φ is the identity on ℰ_{+}. To prove this, suppose (G′, P′) ∈ ℰ_{+} and let v′_{n}(i|u) be n's equilibrium continuation value from i at u. Because P′ ≫ 0, all actions at u are optimal, so v′_{n}(i|u) = v′_{n}(u) for all i ∈ A_{n}(u), and ∑_{z≻(u,i)}G′_{n}(z)P′(z) = v′_{n}(u)P′(u)b′_{n}(i|u), where b′ is the behavior profile for P′. Therefore, if Φ(G′, P′) = H and Ψ(H) = (G, P) as above, then b′_{n}(i|u) = H_{n}(u, i)/H_{n}(u) ≡ b_{n}(i|u). Because P′ is uniquely determined by b′, P′ = P; and because G′_{n}(z)P′(z) = H_{n}(z) = G_{n}(z)P(z), also G′ = G. Thus Ψ ○ Φ is the identity, as asserted. □
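For the two-stage tree of Section 3, the map Ψ of Lemma 4.1 can be written out explicitly. The sketch below is our numerical illustration (the node labels T, BL, BR and the sample payoffs are assumptions): it builds (G, P) from a positive H, and lets one check both that Φ ○ Ψ is the identity and that the derived behavior profile is an equilibrium.

```python
# Hypothetical two-stage tree from Section 3: player 1 picks T or B;
# after B, player 2 picks L or R.  Final nodes: T, BL, BR.
def psi(H):
    """Psi of Lemma 4.1, specialized to this tree: from H with all
    H[n][z] > 0, construct (G, P) with G[n][z] * P[z] == H[n][z]."""
    h1, h2 = H[1], H[2]
    b1T = h1["T"] / (h1["T"] + h1["BL"] + h1["BR"])   # 1's behavior at the root
    b2L = h2["BL"] / (h2["BL"] + h2["BR"])            # 2's behavior after B
    P = {"T": b1T, "BL": (1 - b1T) * b2L, "BR": (1 - b1T) * (1 - b2L)}
    G = {n: {z: H[n][z] / P[z] for z in P} for n in (1, 2)}
    return G, P
```

In the constructed game, player 1's payoff from T equals his expected continuation payoff from B, and player 2 is indifferent between L and R, so the behavior profile is indeed an equilibrium with full support.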
Each equilibrium induces an equilibrium with full support on the pruned tree obtained by eliminating all nodes following a branch with zero probability. Therefore:
Theorem 4.2.
There exist finitely many subsets ℰ_{k} ⊂ Ē, such that (a) each ℰ_{k} is homeomorphic to an open path-connected subset of 𝒢_{+} and (b) Ē is the closure of ∪_{k}ℰ_{k}.
Proof:
Each ℰ_{k} is the subset of the graph consisting of all pairs (G, P) ∈ Ē in which the support of the outcome P is Z_{k} ⊂ Z and the outcome is induced by an equilibrium in which any pure strategy is strictly inferior if it does not exclude some information set on the equilibrium path and uses there an action that has zero probability in the equilibrium. It is then clear that these ℰ_{k}'s satisfy part (b), so it remains to prove part (a). Let 𝒢_{k} be the projection of 𝒢_{+} onto the coordinates corresponding to Z_{k}; 𝒢_{k} is the space of games with the tree Γ_{k} obtained from Γ by retaining only branches leading to nodes in Z_{k}. Let Ê_{k} be the graph of the (completely mixed) equilibrium outcome correspondence over 𝒢_{k}. By Lemma 4.1, Ê_{k} is homeomorphic to 𝒢_{k}. Express 𝒢_{+} as the product of 𝒢_{k} with ℱ_{k} ≡ ℜ_{++}^{N×(Z∖Z_k)}. Then Ê_{k} × ℱ_{k} is homeomorphic to 𝒢_{k} × ℱ_{k}, using the identity map on the second factor ℱ_{k}. ℰ_{k} is an open subset of Ê_{k} × ℱ_{k} and is therefore mapped onto an open subset of 𝒢_{+} by the homeomorphism. It remains to show that ℰ_{k} is path-connected. Let ((G_{k}, F_{k}), P) and ((G′_{k}, F′_{k}), P′) be two points in ℰ_{k}. Because Ê_{k} is homeomorphic to 𝒢_{k}, connect (G_{k}, P) and (G′_{k}, P′) by a path in Ê_{k}. Let F*_{k} ∈ ℱ_{k} be a vector whose coordinates are all strictly less than the payoff received by any player at any equilibrium along this path, and less in each coordinate than F_{k} and F′_{k}. Then the linear paths connecting ((G_{k}, F_{k}), P) to ((G_{k}, F*_{k}), P) and ((G′_{k}, F′_{k}), P′) to ((G′_{k}, F*_{k}), P′) are in ℰ_{k}, because decreasing payoffs to players at nodes in Z∖Z_{k} has no effect on equilibria. Now the points ((G_{k}, F*_{k}), P) and ((G′_{k}, F*_{k}), P′) can be path-connected using the path in Ê_{k}. The choice of F*_{k} ensures that the path belongs to ℰ_{k}. □
The counterexample in Section 3 shows that, for some trees, a violation of the manifold property occurs at boundaries among the open subsets ℰ_{k}. These boundaries lie within the lower-dimensional set of nongeneric games excluded by theorems showing that a generic extensive-form game has a finite number of equilibrium outcomes; see Kreps and Wilson (2) or Govindan and Wilson (5).
5. A Structure Theorem for Games with Perturbed Strategies
In this section, we assume that each player n's simplex Σ_{n} of mixed strategies is perturbed to a compact convex subset Σ_{n}^{ɛ} ⊂ Σ_{n} disjoint from the boundary of Σ_{n}. Because P_{*} ≫ 0 by assumption, these perturbations assure that every mixed-strategy profile in Σ^{ɛ} ≡ ∏_{n}Σ_{n}^{ɛ} yields a positive probability for every final node.
From a mixed-strategy profile σ, one can derive the corresponding “nonexclusion” or enabling profile p ∈ ∏_{n}[0, 1]^{L_n} as follows. L_{n} is the set of player n's last actions; that is, i ∈ L_{n} ⊂ A_{n} iff there exists z ∈ Z such that i is the ≺-maximal element in A_{n}(z) ≡ {i′ ∈ A_{n} | i′ ≺ z}; that is, i = ℓ_{n}(z) ≡ Arg max A_{n}(z). If L_{n} = ∅, then n is a dummy player, so p_{n} can be omitted from the profile. For each i ∈ L_{n}, p_{n}(i) is the probability under σ_{n} that n's selected pure strategy does not exclude i or any of n's actions preceding i. One computes p_{n}(i) as follows. The subset of n's pure strategies that do not exclude z is S_{n}(z) ≡ {s ∈ S_{n} | (u, i) ≺ z ⇒ s(u) = i}. If n uses the mixed strategy σ_{n}, then the probability that n does not exclude z is P_{n}(z) = ∑_{s∈S_n(z)}σ_{n}(s), or P_{n}(z) = 1 if A_{n}(z) = ∅. Let Z_{n}(i) ≡ ℓ_{n}^{−1}(i) = {z | ℓ_{n}(z) = i} and let s_{n}(i) ≡ S_{n}(z) for each z ∈ Z_{n}(i) be the event that n's pure strategy enables i ∈ L_{n}. Then p_{n}(i) ≡ Prob_{σ}(s_{n}(i)) = P_{n}(z) for each z ∈ Z_{n}(i).
The feasible set of enabling profiles is 𝒞 ≡ ∏_{n}C_{n}, where for each nondummy player n, his feasible set of enabling strategies is C_{n} ≡ {p_{n} ∈ [0, 1]^{L_n} | p_{n}(i) = Prob_{σ}(s_{n}(i)) ∀i ∈ L_{n}, for some σ_{n} ∈ Σ_{n}^{ɛ}}, the image of Σ_{n}^{ɛ} under the linear map σ_{n} ↦ p_{n}. Observe that C_{n} is compact and convex. Because Σ_{n}^{ɛ} is disjoint from the boundary of Σ_{n}, p_{n} ∈ C_{n} only if each p_{n}(i) > 0.
For a mixed strategy σ_{n} ∈ Σ_{n}^{ɛ}, the induced enabling strategy p_{n} is equivalent to a behavior strategy b_{n} obtained as follows. Each b_{n}(i|u) is proportional to β_{n}(u, i) ≡ ∑σ_{n}(s), where the sum is over those pure strategies s ∈ S_{n} such that if (u′, i′) ⪯ (u, i), then s(u′) = i′. Each pure strategy in this sum selects an action at each of n's next information sets after (u, i), if any. Therefore, if (u′, i′) is the immediate predecessor of u among n's information sets, then β_{n}(u′, i′) = ∑_{i∈A_n(u)}β_{n}(u, i). This recursion enables calculation of β_{n} by working backward from n's final information sets, where β_{n}(u, i) = p_{n}(i). Conversely, from a behavior strategy one can derive the enabling strategy via p_{n}(i) = ∏_{(u′,i′)⪯(u,i)}b_{n}(i′|u′) for each i ∈ L_{n}, because the normalizing factors cancel along a path. Similarly, an enabling profile p yields an outcome P via P(z) = P_{*}(z) × ∏_{n}P_{n}(z), and an outcome implies the behavior strategy via b_{n}(i|u) ∝ ∑_{z≻(u,i)}P(z).
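The conversion from a behavior strategy to an enabling strategy amounts to multiplying conditional probabilities along the path of (u, i) pairs. A minimal Python sketch, assuming the same hypothetical one-player chain as before (u1 with actions a, b; after a, u2 with actions c, d; our example, not the paper's):

```python
# Hypothetical one-player chain: last actions are b, c, and d.
behavior = {("u1", "a"): 0.6, ("u1", "b"): 0.4,
            ("u2", "c"): 0.5, ("u2", "d"): 0.5}
parent = {("u2", "c"): ("u1", "a"), ("u2", "d"): ("u1", "a")}

def enabling(i):
    """p_n(i): product of b_n(i'|u') over all (u', i') preceding-or-equal
    to the pair (u, i) containing the last action i."""
    p, node = 1.0, i
    while node is not None:
        p *= behavior[node]
        node = parent.get(node)
    return p
```

For instance, the action c is enabled with probability 0.6 × 0.5 = 0.3, and the β-recursion of the text holds: β(u1, a) = β(u2, c) + β(u2, d).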
An enabling strategy is the minimal representation of a behavior strategy that preserves the linearity and convexity of the space of mixed strategies. This representation avoids complications from auxiliary constraints imposed by nonminimal representations. S. Elmes (Columbia D.P. 490, 1990, personal communication) notes these complications in her repair of defects in appendix C of ref. 1. Enabling strategies are closely related and essentially equivalent to strategies in sequence form [von Stengel (6)].
Given an outcome P, the expected payoff of player n can be written as v_{n}(∅) + ∑_{i∈L_n}p_{n}(i)v_{n}(i), where P^{n}(z) ≡ P_{*}(z) × ∏_{n′≠n}P_{n′}(z), v_{n}(∅) ≡ ∑_{z:A_n(z)=∅}G_{n}(z)P^{n}(z), and v_{n}(i) ≡ ∑_{z∈Z_n(i)}G_{n}(z)P^{n}(z). As in Gül, Pearce, and Stacchetti (7), let r_{n} : ℜ^{L_n} → C_{n} be the retraction that maps x to the point in C_{n} closest to x in Euclidean distance; namely, p_{n} = r_{n}(x) iff ∑_{i∈L_n}[p′_{n}(i) − p_{n}(i)][x(i) − p_{n}(i)] ≤ 0 for all p′_{n} ∈ C_{n}.
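The feasible set C_n is a general polytope, so the retraction r_n has no closed form in general. As a concrete stand-in, the sketch below projects onto a probability simplex (the polytope that arises for mixed strategies in normal form) by the standard sort-and-threshold method, and then checks the variational inequality numerically; the sample vector is our assumption.

```python
def project_simplex(x):
    """Euclidean projection of x onto {p : p >= 0, sum(p) = 1},
    via the standard sort-and-threshold method."""
    u = sorted(x, reverse=True)
    css, theta = 0.0, 0.0
    for k, uk in enumerate(u, start=1):
        css += uk
        t = (css - 1.0) / k
        if uk - t > 0:        # keep the last threshold with positive residual
            theta = t
    return [max(xi - theta, 0.0) for xi in x]
```

By convexity it suffices to verify the variational inequality at the vertices of the simplex, since the inner product is linear in p′.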
Lemma 5.1.
An enabling strategy p_{n} ∈ C_{n} for a nondummy player n is an optimal reply to σ ∈ Σ^{ɛ} (or an equivalent profile of behavior or enabling strategies) iff p_{n} = r_{n}(p_{n} + v_{n}).
Proof:
A mixed strategy σ_{n} ∈ Σ_{n}^{ɛ} is optimal for player n iff ∑_{i∈L_n}[p′_{n}(i) − p_{n}(i)]v_{n}(i) ≤ 0 for all σ′_{n} ∈ Σ_{n}^{ɛ}, where p_{n}(i) ≡ ∑_{s∈s_n(i)}σ_{n}(s) and p′_{n}(i) ≡ ∑_{s∈s_n(i)}σ′_{n}(s) are the corresponding enabling strategies in C_{n}. Because the possible values of p′_{n} include all of C_{n}, this is precisely the variational inequality that characterizes the equality p_{n} = r_{n}(p_{n} + v_{n}). □
Thus, in terms of enabling strategies, an equilibrium is a fixed point p = r(p + v), in the sense that p_{n} = r_{n}(p_{n} + v_{n}) for each player n, where v is derived from p as above. Hereafter, we consider only equilibria in enabling strategies. As above, the set of enabling profiles is 𝒞. Represent a game as a point G ∈ 𝒢 ≡ ℜ^{N×Z}, the space of players' payoffs at final nodes. Let ℰ ⊂ 𝒢 × 𝒞 be the graph of the equilibrium correspondence over the space of games for the game tree Γ. Let E[⋅|⋅] be the conditional expectation operator for a fixed strictly positive probability distribution in Δ(Z). Define the map ℋ : ℰ → 𝒢, ℋ(G, p) = H, by H_{n}(z) = G_{n}(z) + p_{n}(ℓ_{n}(z)) + v_{n}(ℓ_{n}(z)) − g_{n}(ℓ_{n}(z)), or H_{n}(z) = G_{n}(z) if A_{n}(z) = ∅, for each player n and final node z, where g_{n}(i) = E[G_{n}(z) | Z_{n}(i)] for each i ∈ L_{n}.
Theorem 5.2.
ℋ is a homeomorphism.
Proof:
Define 𝒦 : 𝒢 → ℰ as follows. Given H ∈ 𝒢, first let h_{n}(i) = E[H_{n}(z) | Z_{n}(i)] for each n and i ∈ L_{n}. Also, let p_{n} = r_{n}(h_{n}) and v_{n} = h_{n} − p_{n} for each nondummy player n. Next, let g_{n}(i) = h_{n}(i) + [v_{n}(i) − ∑_{z∈Z_n(i)}H_{n}(z)P^{n}(z)]/P^{n}(Z_{n}(i)); and finally, let G_{n}(z) = H_{n}(z) + g_{n}(ℓ_{n}(z)) − h_{n}(ℓ_{n}(z)), or G_{n}(z) = H_{n}(z) if A_{n}(z) = ∅. The G thus constructed satisfies ∑_{z∈Z_n(i)}G_{n}(z)P^{n}(z) = v_{n}(i) for i ∈ L_{n}. Therefore, v_{n}(i) is indeed n's marginal expected payoff from increasing p_{n}(i), which by Lemma 5.1 is sufficient for p_{n} to be an optimal reply by a nondummy player n in the game G. Thus, 𝒦 is a well defined continuous map. It is immediate from our construction that ℋ ○ 𝒦 is the identity on 𝒢. We will now show that 𝒦 ○ ℋ is the identity on ℰ, which then implies that 𝒦 = ℋ^{−1}, i.e., ℋ is a homeomorphism. Suppose H = ℋ(G′, p′) and (G, p) = 𝒦(H). For each n and i ∈ L_{n}, h_{n}(i) ≡ E[H_{n}(z) | Z_{n}(i)] = p′_{n}(i) + v′_{n}(i). Because p_{n} ≡ r_{n}(h_{n}), we therefore have that p_{n}(i) = p′_{n}(i) and also that v_{n}(i) = v′_{n}(i) for all n and i ∈ L_{n}. By the definition of v_{n} and ℋ, ∑_{z∈Z_n(i)}H_{n}(z)P^{n}(z) = v_{n}(i) + [h_{n}(i) − g′_{n}(i)]P^{n}(Z_{n}(i)). Hence, g′_{n}(i) = h_{n}(i) + [v_{n}(i) − ∑_{z∈Z_n(i)}H_{n}(z)P^{n}(z)]/P^{n}(Z_{n}(i)), which by the definition of 𝒦 is g_{n}(i). Consequently, G = G′, and 𝒦 ○ ℋ is the identity on ℰ. □
A repetition of the proof in ref. 1 shows that ℋ extends to a homeomorphism between the one-point compactifications of ℰ and 𝒢 and that proj_{𝒢} ○ ℋ^{−1} is linearly homotopic to the identity map on the one-point compactification of 𝒢; thus, proj_{𝒢} is a map of degree one. An obvious corollary is that their theorem applies to Nash equilibria of normal-form games with perturbed sets of mixed strategies. Theorem 5.2 applies to stronger definitions of equilibrium in extensive-form games based on perturbed strategy sets. For example, if Σ_{n}^{ɛ} = {ɛσ̄ + (1 − ɛ)σ | σ ∈ Σ_{n}}, where ɛ > 0 and σ̄ is the barycenter of Σ_{n}, then ℰ is the graph of ɛ-perfect equilibria over the space of extensive-form games on the tree Γ. Similarly, if Σ_{n}^{ɛ} is the convex hull of the points generated by all permutations of the coordinates of the vector (1, ɛ, ɛ^{2}, …), rescaled to lie in Σ_{n}, then ℰ is the graph of ɛ-proper equilibria; see ref. 1, proposition 5. In other applications where each r_{n} is smooth because Σ_{n}^{ɛ} has a smooth boundary, ℋ is a diffeomorphism.
6. An Algorithm for Computing Equilibria of Perturbed ExtensiveForm Games
In this section, we apply Theorem 5.2 to construct an algorithm for computing equilibria of an extensive-form game defined on a tree Γ with perturbed sets of mixed strategies. This algorithm is a variant of the algorithms in Govindan and Wilson (8, 9) for normal-form games; see those articles for technical background, detailed specifications, computer programs, and numerical results for N-player games. Proofs in those articles apply here almost verbatim: the only difference is that the retraction r_{n} to player n's simplex Σ_{n} of mixed strategies is replaced here by the retraction to the polytope C_{n} of n's enabling strategies.
First, we describe a general parametric method that exploits the key simplifying feature that the retraction r is independent of the payoffs. Represent the game G as the pair (G̃, g) where G̃_{n}(z) = G_{n}(z) − g_{n}(ℓ_{n}(z)) using g_{n} ∈ ℜ^{L_n} as defined in Section 5. Using the homeomorphism ℋ of Theorem 5.2, the algorithm finds an equilibrium of G by tracing the solutions of the equation ℋ(G̃, g + λγ; p(h)) = H ≡ (G̃, h), where λγ parameterizes a ray whose origin represents the game (G̃, g) whose equilibria are to be computed. As in Theorem 5.2, at each solution h, the equilibrium p(h) of the game (G̃, g + λγ) is the enabling profile p(h) = r(h) obtained by retracting h to 𝒞. The algorithm starts by choosing a ray γ and an initial scalar parameter λ° sufficiently large that the game (G̃, g + λ°γ) has a unique equilibrium p°. One then follows the trajectory in the graph above the line segment through the two points (G̃, g + λ°γ) and (G̃, g) as the parameter λ decreases to zero. The implementation must contend with the usual two complications of homotopy methods: (i) the ray γ must be generic to ensure that the equilibrium p° is unique when λ° is sufficiently large and to exclude bifurcations along the trajectory; and (ii) the trajectory in the graph includes reversals of orientation, so the parameter λ cannot decrease monotonically. Each time the trajectory crosses λ = 0, it yields an additional equilibrium of the game (G̃, g), and for generic games these equilibria have alternating indices +1 and −1. Different choices of the ray γ can yield different equilibria.
The parametric method can be implemented via the Global Newton Method (GNM) of Smale (10). Using a monotonic time parameter t, the trajectory is (h(t), λ(t))_{t≥t°}. The algorithm starts at λ(t°) = λ° and finds the kth equilibrium on the trajectory at a time t^{k} > t^{k−1} for which λ(t^{k}) = 0. Actual computations use discrete steps, but here time is assumed to be continuous. In its simplest form, GNM finds a root of a differentiable function F by tracing the trajectory of the differential equation ḣ = −θ(h)[DF(h)]^{−1}⋅F(h) starting from an initial point h°. In this form, θ(h) is a continuous scalar velocity, and DF(h) is the Jacobian matrix of F at h. However, better numerical properties are obtained by starting from a unique solution h° to F(h°) = λ°γ and using the equations ḣ = −(Adj[DF(h)])⋅γ and λ̇ = −Det[DF(h)] corresponding to the velocity θ(h) = −λ̇/λ. In particular, replacing the inverse by the adjoint matrix of the Jacobian enables the trajectory to pass through singularities of codimension 1. Note that the trajectory reverses orientation each time the determinant changes sign.
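The adjugate form of GNM can be sketched generically. The Python fragment below is our toy illustration, not the paper's game-theoretic F: it Euler-integrates ḣ = −(Adj[DF(h)])⋅γ and λ̇ = −Det[DF(h)], exploiting the invariant that DF⋅ḣ = −Det[DF(h)]γ = λ̇γ, so that F(h(t)) − λ(t)γ is conserved and reaching λ = 0 yields an approximate root of F without evaluating F during the integration.

```python
import numpy as np

def adjugate(M):
    """Classical adjoint via cofactors; unlike the inverse, it stays
    finite where M is singular."""
    n = M.shape[0]
    A = np.zeros_like(M)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, 0), j, 1)
            A[j, i] = (-1) ** (i + j) * np.linalg.det(minor)
    return A

def gnm_trace(DF, h0, gamma, lam0, dt=1e-3, steps=20000):
    """Euler integration of h' = -Adj(DF(h)) @ gamma, lam' = -det(DF(h)),
    started from h0 with F(h0) = lam0 * gamma; stops when lam reaches 0."""
    h, lam = np.array(h0, dtype=float), lam0
    for _ in range(steps):
        J = DF(h)
        h = h - dt * adjugate(J) @ gamma
        lam = lam - dt * np.linalg.det(J)
        if lam <= 0.0:
            break
    return h
```

As a check, one can apply the tracer to a toy contraction-type map such as F(h) = (h₀ + 0.5 sin h₁, h₁ + 0.5 sin h₀), whose Jacobian determinant is bounded away from zero and whose unique root is the origin.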
GNM is invoked by translating the equations in the proof of Theorem 5.2. Assuming there are no dummy players, let F : ∏_{n}ℜ^{L_n} → ∏_{n}ℜ^{L_n} be the displacement map of ℋ with G̃ fixed; that is, for each i ∈ L_{n}, F_{n}(h)(i) = h_{n}(i) − g_{n}(i) − p_{n}(i) − ∑_{z∈Z_n(i)}G̃_{n}(z)P^{n}(z), where from p = r(h) one constructs the outcome P. Then an equilibrium of the game (G̃, g + λγ) is obtained from a solution of the equation F(h) = λγ. In vector form, F(h) = h − g − p − G̃⋅Q(p), where in the matrix Q(p) a nonzero element Q_{zi} = P^{n}(z) if z ∈ Z_{n}(i) and i ∈ L_{n}. The Jacobian of F at h is DF(h) = I − [I − G̃⋅DQ(p)]⋅Dr(h), where I is the identity matrix, DQ is the Jacobian of Q at p = r(h), and Dr is the Jacobian of r at h. The resulting trajectory of GNM is the path of the differential equation ḣ = −(Adj[DF(h)])⋅γ, λ̇ = −Det[DF(h)]. Everywhere on this trajectory, ℋ(G̃, g + λγ; p(h)) = (G̃, h) if one starts with F(h°) = λ°γ, where for a generic ray γ, λ(t°) = λ° is sufficiently large that p° = r(h°) is the unique equilibrium of the game (G̃, g + λ°γ). The Jacobian DF is continuous except that, when 𝒞 is polyhedral (as in important applications), Dr changes discontinuously where some p_{n} moves from one face of C_{n} to another. We show in ref. 9 that the trajectory is continuous across such a boundary even though its direction changes discontinuously. For standard applications such as ɛ-perfect equilibria (used to approximate sequential equilibria), the block-diagonal matrix Dr is constant on each face of the polyhedron 𝒞.
Acknowledgments
This work was funded in part by grants from the Social Sciences and Humanities Research Council of Canada and the National Science Foundation of the United States.
Abbreviation: GNM, Global Newton Method.

Accepted April 27, 2002.

Copyright © 2002, The National Academy of Sciences.
References

1. Kohlberg, E. & Mertens, J.-F. (1986) Econometrica 54, 1003–1039.
2. Kreps, D. & Wilson, R. (1982) Econometrica 50, 863–894.
3. Kuhn, H. (1953) in Contributions to the Theory of Games, eds. Kuhn, H. & Tucker, A. (Princeton Univ. Press, Princeton), Vol. 2, pp. 193–216.
4. Dold, A. (1972) Lectures on Algebraic Topology (Springer, Berlin).
5. Govindan, S. & Wilson, R. (2001) Econometrica 69, 765–769.
6. von Stengel, B. (1996) Games Econ. Behav. 14, 220–246.
7. Gül, F., Pearce, D. & Stacchetti, E. (1993) Math. Oper. Res. 18, 548–552.
8. Govindan, S. & Wilson, R. (2002) J. Econ. Dynam. Control 26, in press.
9. Govindan, S. & Wilson, R. (2002) J. Econ. Theory 104, in press.
10. Smale, S. (1976) J. Math. Econ. 3, 107–120.