
The History and Taxonomy of Game Theory

Contents
Introduction and Brief History
I - Static Non-Cooperative Games
  1. Strategic Normal Games
  2. Bayesian (Incomplete Information) Games
II - Dynamic Non-Cooperative Games
  1. Strategic Extensive Games
  2. Imperfect Information Games
  3. Finite Repeated Games
  4. Infinitely Repeated Games
  5. Dynamic Bayesian Games
III - Coalitional Games
  1. Simple Objection (TU, NTU, Core)
  2. Stable Objection (Stable Sets)
  3. Balanced Objection (Bargaining Set, Kernel, Nucleolus, Shapley Value)
  4. Bargaining (NBS, Rubinstein)
IV - Summary/Glossary of Important Terms and Concepts
V - Some Useful Game Theory Books

Introduction and Brief History

Game Theory has emerged recently as a powerful challenger to the conventional method of examining economics. Although many illustrious predecessors worked on problems in what can be called "game theory", the fundamental, formal conception of game theory as part and parcel of economic theory was first organized in John von Neumann and Oskar Morgenstern's classic, Theory of Games and Economic Behavior (1944), which used the "maximin" solution concept derived earlier by John von Neumann (1928) to solve simple strategic normal games.
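The "maximin" idea can be sketched for a two-player zero-sum game given as a payoff matrix; a minimal sketch, with an invented matrix (not from the text):

```python
# Hedged sketch: von Neumann's maximin reasoning for a two-player
# zero-sum game, stated as a payoff matrix for the row player.
# The matrix below is an invented example.

def maximin(matrix):
    """Row player's maximin value in pure strategies: the best payoff
    she can guarantee, assuming the column player then picks the
    column worst for her."""
    return max(min(row) for row in matrix)

def minimax(matrix):
    """Column player's minimax value: the smallest column maximum
    he can hold the row player down to."""
    n_cols = len(matrix[0])
    return min(max(row[j] for row in matrix) for j in range(n_cols))

payoffs = [[3, 1],
           [0, 2]]

print(maximin(payoffs))  # → 1
print(minimax(payoffs))  # → 2
```

When the two values differ, as here, there is no pure-strategy saddle point; von Neumann's 1928 theorem shows the gap closes once mixed strategies are allowed.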

Game Theory can be roughly divided into two broad areas: non-cooperative (or strategic) games and cooperative (or coalitional) games. The meanings of these terms are self-evident, although John Nash claimed that one should be able to reduce all co-operative games to some non-cooperative form - a position that became known as the "Nash Program". Within the literature, a distinction can also be made between "normal" form games (static) and "extensive" form games (dynamic), as well as by the type of informational structure assumed.

John von Neumann and Oskar Morgenstern originally visualized game theory as a more "General Theory of Rational Behavior" (the original working title of their 1944 book). The main point about game theory is that instead of agents making decisions as reactions to exogenous "prices" or some other "dead variable", their decisions are strategic reactions to other agents' actions ("live variables"). An agent is faced with a set of moves he can play and will form a strategy, a best response to his environment, which he will play by. Strategies can be either "pure" (play a particular move) or "mixed" (randomize over moves). A "Nash Equilibrium" is reached when each agent's action begets reactions by all the other agents which, in turn, beget the same initial action - in other words, when the best responses of all players are in accordance with each other. Von Neumann and Morgenstern went on to introduce the strategic normal game, the strategic extensive game, the concepts of pure and mixed strategies, and coalitional games. The axiomatization of expected utility theory that they provide in their initial chapter was to be an extraordinary impetus for the later axiomatization of Neo-Walrasian theory.

In 1950, John F. Nash introduced the concept of a "Nash Equilibrium" (NE), which became the organizing concept underlying most of Game Theory - even though the concept was actually known from as far back as Cournot (1838). Nash followed this up in 1951 with the concept of a "Nash Bargaining Solution" (NBS) for coalitional games. Then the floodgates opened around Nash Equilibrium. In the field of non-cooperative games, R. Duncan Luce and Howard Raiffa (1957) provided the first popular textbook on Game Theory and, in it, formalized the Iterated Elimination of Dominated Strategies (IEDS) method for strategic normal games, i.e. the method of finding Nash equilibria by ruling out strategies which are obviously "worse" in a precise sense. They also introduced the concept of a "Repeated Game" (a static game played several times over by the same people). H.W. Kuhn (1953) introduced extensive games with "imperfect information" (i.e. where one does not know what moves have already been played by other players). William Vickrey (1961) provided the first formalization of "auctions". Reinhard Selten (1965) developed the concept of a "Sub-game Perfect Equilibrium" (SPE) (i.e. elimination by backward induction) as a refined solution for extensive form games. John C. Harsanyi (1967-8) developed the concept of a "Bayesian Nash Equilibrium" (BNE) for Bayesian games (i.e. games with incomplete information, where some uncertainty surrounds the moves, or "nature" plays).

In coalitional (co-operative) games further refinements also occurred. Lloyd Shapley (1953) introduced the concept of the "Shapley Value" and the "Core" (which had been originally conceived by Edgeworth (1881)) as solutions to coalitional games. Throughout the early 1960s, Robert J. Aumann and Martin Shubik began to apply Game Theory extensively throughout economics (e.g. industrial organization, general equilibrium, monetary theory, etc.), invented several solution concepts for coalitional games (e.g. Bargaining Set, Strong Equilibrium), developed theorems for "large games" with infinitely many players, and provided some early statements of the "Folk Theorems" (solution concepts for Repeated Games). David Schmeidler (1969) developed the "Nucleolus" solution for coalitional games.

Further developments emerged in the 1970s. John C. Harsanyi (1973) provided a remarkably insightful new interpretation of the concept of a "mixed strategy". Robert J. Aumann (1974) defined "Correlated Equilibrium" for Bayesian games, while Reinhard Selten (1975) introduced "Trembling Hand Equilibrium" for Bayesian games. Further definitions came: Robert J. Aumann (1976) formally defined the concept of "Common Knowledge", opening a floodgate of literature on the concept of "knowledge", whereas B.D. Bernheim (1984) and D.G. Pearce (1984) formalized the concept of "rationalizability". Advancements continued apace: David M. Kreps and Robert Wilson (1982) introduced the concept of "Sequential Equilibrium" (SEQE) for extensive games with imperfect information, while Ariel Rubinstein (1982), following the "Nash Program", transformed the co-operative Nash Bargaining Solution (NBS) into a non-cooperative strategic extensive game. Elon Kohlberg and Jean-Francois Mertens (1986) developed the concept of "Forward Induction" for extensive games. Drew Fudenberg and E.S. Maskin (1986) developed one of the more complicated "Perfect Folk Theorems" for infinitely repeated games. Finally, J.C. Harsanyi and R. Selten (1988) developed the idea of "equilibrium selection" for any type of game, while Drew Fudenberg and Jean Tirole (1991) developed the "Bayesian Perfect Equilibrium" (BPE) for extensive Bayesian games.

Evolutionary Game Theory started its development slightly later - although it can be thought of as part of the Nash Program. Its objective is to apply the concepts of non-cooperative game theory to explain phenomena which are often thought to be the result of cooperation or human design - i.e. "institutions" and "conventions" such as market formation, price mechanisms, social rules of conduct, money and credit, etc. One of the earliest exponents of the theory of evolutionary games was Thomas C. Schelling (1960, 1981), who argued that apparently "cooperative" social institutions (in this case, settlements to conflicts) are maintained essentially by "threats" of punishment and retaliation.

Several Nobel Prizes have been awarded to some of the major figures of Game Theory: the Nobel was shared by John Nash, J.C. Harsanyi and R. Selten in 1994, and awarded to William Vickrey in 1996. Herbert Simon won the Nobel in 1978 for concepts (e.g. bounded rationality) which have since been incorporated into the corpus of (Evolutionary) Game Theory.

Type I - Static Non-Cooperative Games

Also called "normal-form" games, or games where everyone plays "at once".

(1) Strategic Normal Games

This is the most familiar type of game. Its basic structure is as follows:
· N agents.
· Each agent has a set of actions, Ai.
· Each agent forms preferences over the set of "joint actions" A (= the product of the Ai); an "outcome" of a game depends on what everybody plays. Thus, the payoff to a player i depends on all N players' actions.

Strategy: a strategy is a choice of action by the individual; each agent then constructs a best response function to all other possible configurations of actions by his opponents. We have two types: (1) Pure Strategy = deterministic play, e.g. "I always play left". (2) Mixed Strategy = randomized play, e.g. "I play left half the time and right the other half, depending on a flip of a coin in my head."

Equilibrium: Nash Equilibrium (NE) - which means no one has an incentive to change strategy, whether mixed or pure.

Examples: Prisoner's Dilemma, Cournot Duopoly.

Notes: In the old (and sometimes new) days, problems were solved not by "Nash Equilibrium" but by "Iterated Elimination of Dominated Strategies" (IEDS), i.e. ruling out choices that are simply unreasonable until we reach some solution. To do this effectively, it often requires "assuming" that the other person is rational too. Thus, from this literature, we have the concept of a "Rationalizable" strategy (one that makes "sense" given our beliefs about our opponents' knowledge and rationality) as well as formal definitions of "Mutual Knowledge" and "Common Knowledge". The issue of "stability" of the Nash equilibrium becomes more important in this context.

(2) Bayesian (Incomplete Information) Games

Also called Bayesian normal games or "incomplete information" games. The structure is the same as the static strategic normal game presented before, except that now there is external randomness: "nature" plays too ("chooses" a state of nature randomly), and that affects the outcome and thus the payoffs. So preferences are defined not on the set of "joint actions" but rather on "joint actions" combined with the "state space" (the space of possible states of the world, e.g. [rain, shine]). Thus the essential component is:
· Preferences are formed over "lotteries" (probability distributions over the set of joint actions) as opposed to sure outcomes.

Strategy: For the individual, a strategy is now a function from the state space to the action space. So a pure strategy in this context is not simply "play left" but rather, "If it rains, play left; if it shines, play right." Similarly, a mixed strategy is: "If it rains, play left half the time and right half the time; if it shines, play left one-third of the time and right two-thirds of the time."
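The idea that a Bayesian strategy is a function from states to actions can be sketched as follows - a minimal example in which the prior over [rain, shine] and all payoffs are invented for illustration:

```python
# Hedged sketch: in a Bayesian game, a pure strategy maps states of
# nature to actions, and players evaluate strategy pairs by expected
# payoff under the prior. All numbers below are invented.

prior = {"rain": 0.5, "shine": 0.5}

# Payoff to player 1 as a function of (state, own action, opponent action).
payoff = {
    ("rain",  "left",  "left"):  2, ("rain",  "left",  "right"): 0,
    ("rain",  "right", "left"):  1, ("rain",  "right", "right"): 3,
    ("shine", "left",  "left"):  3, ("shine", "left",  "right"): 1,
    ("shine", "right", "left"):  0, ("shine", "right", "right"): 2,
}

# A pure strategy is a dict from states of nature to actions.
my_strategy  = {"rain": "left", "shine": "right"}
opp_strategy = {"rain": "left", "shine": "left"}

def expected_payoff(mine, theirs):
    """Expected payoff of a strategy pair, averaging over nature's play."""
    return sum(p * payoff[(s, mine[s], theirs[s])] for s, p in prior.items())

print(expected_payoff(my_strategy, opp_strategy))  # → 0.5*2 + 0.5*0 = 1.0
```

A Bayesian Nash Equilibrium would then be a pair of such state-contingent strategies from which neither player gains by switching to any other function from states to actions.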

Equilibrium: "Bayesian Nash Equilibrium" (BNE), which means nobody changes their strategies (but, recall, strategies are now "functions" from a state space).

Examples: Auctions, double auctions (i.e. simultaneous bargaining), and other situations of asymmetric information - notably "principal-agent" models.

Notes: It is called "incomplete information" because the state of nature that will be realized is unknown. Sometimes "states of nature" are substituted for "types of players". So, you may be a good type or a bad type, but you don't know which. At the beginning of the game, nature plays and determines the type of person you are (and the type everybody else is). Then you play your action. But as this is a "static" game, you and nature play at the same time. So what you play is really something like: "If nature tells me I'm a good type, I'll play right; but if nature tells me I'm a bad type, I'll play left." So "hidden types" are a way of thinking about this. Also: it is assumed that your "type" is uncorrelated with the "types" that nature ascribes to others. If this is not true, then we have a "Correlated Equilibrium", which must be dealt with differently. One more note: sometimes we make mistakes in our choices. If there are mistakes, then a "Trembling Hand Equilibrium" is one which tends to survive such errors. Issues of "mechanism design" (i.e. "principal-agent" problems), which overlap with much of older economics, can also be subsumed under this topic.

Type II - Dynamic Non-Cooperative Games

Also called "extensive form" games, or games where plays are made in succession over time. Such a game is normally drawn as a "decision tree" where people take their turns to make decisions at different "nodes".

(1) Strategic Extensive Games

These are the dynamic equivalent of strategic normal games. In the "decision tree", at every node, someone plays. Thus, an "action" by an agent is the action he takes at a particular node (i.e. the "edge" he chooses to move along in the tree). The components are as follows:
· N agents.
· Each agent has a set of "actions" Ai (= actions taken at the particular nodes where it is player i's turn).
· Each agent has preferences over the "final payoff" in the final period (where the tree ends up).

Strategy: a strategy for a player is a function from every node where he has a move to the actions he can take at that node. Now we have three types of strategy: (1) Pure Strategy: e.g. at node 1, play Right; at node 3, play Left; at node 5, play Right, etc. A single pure strategy must account for all nodes where that player has a move. (2) Behavioral Strategy: at a particular node, play Right with 50% probability and Left with 50% probability. So a "behavioral strategy" means random behavior at a SINGLE node; it does not need to account for all nodes. (3) Mixed Strategy: a probability distribution over pure strategies (i.e. strategies which account for all nodes). [Note the important difference between a Behavioral Strategy (randomizes at one node) and a Mixed Strategy (randomizes over pure strategies, which cover all nodes).]

Equilibrium: two types of equilibrium: (1) Nash Equilibrium (NE): nobody deviates from strategy. (2) Sub-game Perfect Equilibrium (SPE): equilibrium by backward induction (start from the end of the tree, work your way back up, and rule out moves that are not credible). Note that an SPE is an NE, but not every NE is an SPE. Note also that in a finite game of perfect information with no payoff ties, the SPE is unique and is in pure strategies rather than mixed strategies.

Examples: Stackelberg Duopoly, Sequential Bargaining (i.e. a Bargaining Game with Alternating Offers).
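The backward-induction procedure behind SPE can be sketched on a toy two-stage tree; players, moves and payoffs are all invented:

```python
# A minimal sketch of backward induction on an invented two-stage tree.
# An internal node is (player, {action: subtree}); a leaf is a payoff
# tuple (payoff to P1, payoff to P2). All names and numbers are made up.

tree = ("P1", {
    "L": ("P2", {"l": (3, 1), "r": (0, 0)}),
    "R": ("P2", {"l": (1, 2), "r": (2, 1)}),
})

def backward_induct(node):
    """Return the payoff vector reached by sub-game-perfect play."""
    if isinstance(node[1], dict):              # internal node: (player, moves)
        player, moves = node
        idx = 0 if player == "P1" else 1
        # the mover picks the action whose sub-game outcome is best for her
        return max((backward_induct(child) for child in moves.values()),
                   key=lambda payoff: payoff[idx])
    return node                                # leaf: payoff tuple

print(backward_induct(tree))  # → (3, 1): P1 plays L, then P2 plays l
```

Solving each final sub-game first and then letting the earlier mover choose among the solved sub-games is exactly the "rule out non-credible moves" step described above.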

Note: Using IEDS in an extensive form game introduces the concept of "Forward Induction", which is also related to SPE.

(2) Imperfect Information Games

Note that this is not "incomplete" information but "imperfect" information. Incomplete information means "nature" plays. Imperfect information means essentially that an agent is at a node but does not know how she got there, so she is not really sure what has been played already. Thus "imperfect information" means there are nodes which are "equivalent" to the agent whose turn it is to play (she doesn't know which of them she is at). A set of "equivalent" nodes is called an "information set". The set-up is as before in our extensive game (N agents, each plays at particular nodes, preferences formed over payoffs at the ends of the tree, etc.). The new components are:

Strategy: a strategy is a function from every node (or class of nodes, i.e. information set - if there are indeed some which are "equivalent") to a set of actions.

Equilibrium: NE and SPE both exist, but now it is no longer certain that the SPE is in pure strategies, i.e. the SPE can be in mixed strategies. A new equilibrium concept exists here too: Sequential Equilibrium (SEQE). This is an equilibrium where an agent at an information set forms "beliefs" about the exact node he is at and acts accordingly: he plays a Behavioral Strategy at the class of "equivalent" nodes which is consistent with his beliefs. ("Beliefs" are Bayesian-updated and "sequentially rational", i.e. logically consistent with past possible actions.) This helps eliminate cases where we get several sub-game perfect equilibria (SPE) but suspect one makes more sense than the others.

(3) Finite Repeated Games

Essentially an extensive form game with "simultaneous" moves - i.e. several people play at the same time at a particular node. Thus, it can be viewed as a static strategic normal game (e.g. Prisoner's Dilemma) that is now played twice (or more times) instead of once. The number of times played is finite, however. A "payoff" to an agent is the "stream" of earnings the player receives from every round played.
· Set up the game like a strategic normal game, then convert it to an extensive game where players play simultaneously at each node.

Strategy: a strategy is a function from nodes to actions - but recall that each "node" is a strategic normal game. A particular strategy can thus read as follows: "at time 1 (i.e. node 1), play Cooperate; at time 2 (node 2), play Defect if my opponent Defected in the previous period, but play Cooperate if my opponent played Cooperate in the previous period."

Equilibrium: there is an NE and also an SPE, but there might not be a unique SPE anymore.

Examples: Repeated Prisoner's Dilemma, Bank Run Models.

(4) Infinitely Repeated Games

Like Finite Repeated Games, except that now we repeat the static normal form game over and over again an infinite number of times. Payoffs are thus an infinite stream of earnings over which we form preferences. Of course, as we play an infinite number of times, the total stream of payoffs can easily be infinite - thus we need some way of comparing streams. There are many ways of "discounting" them into a specific number for comparison: e.g. limit-of-means, conventional delta-discounting, strict dominance, overtaking criteria. As there is no "end", we cannot do "backward induction" and obtain an SPE. Thus, we must resort to so-called "folk theorems".
· Nash Folk Theorem: essentially, all agents agree to play a certain sequence of actions. If anyone deviates, then all the other players stop what they are doing and concentrate on punishing the deviant. This set of actions is a "Nash Equilibrium" if no one has an incentive to deviate.
· Perfect Folk Theorem: here, the threats of punishment must be credible, i.e. the deviant must believe the others are willing to stop what they are doing and punish him (so no deviation from punishment either). If a punisher does not punish, he is punished. A set of actions which is enforced in this way (i.e. no one deviates) and is credible like this is similar to a Sub-game Perfect Equilibrium.

Examples: as Infinitely Repeated Games involve "cooperation", we may think of collusion models of industry as examples. Other examples include Efficiency Wages, the Dynamic Consistency of Monetary Policy, etc.

(5) Dynamic Bayesian Games

These are extensive form games with incomplete information. Thus, we return to the case of "nature" playing, but in a dynamic context: like a strategic extensive-form game, only that now "nature" plays at certain nodes. A particular realized "state of nature" is an edge that takes us to a new node.

Strategy: a player's strategy is a function from nodes to actions (each node associated with an action), as before.

Equilibrium: NE and SPE (but now called "Bayesian Nash Equilibrium" (BNE) and "Bayesian Perfect Equilibrium" (BPE)).

Examples: any variety of signaling games (e.g. job market signaling, bank borrower signaling, insurance signaling, central bank "reputation" signaling); cheap-talk games.
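The folk-theorem logic described above for infinitely repeated games can be sketched numerically with conventional delta-discounting; the Prisoner's Dilemma payoffs here are invented (R = 3 for mutual cooperation, T = 5 for the temptation to defect, P = 1 for mutual punishment):

```python
# Hedged sketch: when does grim-trigger cooperation survive in an
# infinitely repeated Prisoner's Dilemma under delta-discounting?
# Payoffs R, T, P below are invented illustration values.

R, T, P = 3.0, 5.0, 1.0

def cooperation_sustainable(delta):
    """Nash Folk Theorem logic: cooperating forever must beat deviating
    once and then being punished (mutual defection) forever after."""
    cooperate_forever = R / (1 - delta)
    deviate_once = T + delta * P / (1 - delta)
    return cooperate_forever >= deviate_once

print(cooperation_sustainable(0.9))  # → True: patient players sustain cooperation
print(cooperation_sustainable(0.2))  # → False: impatient players defect
```

With these payoffs the threshold works out to delta ≥ 1/2: the more players value the future, the more credible the threat of permanent punishment becomes.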

Type III - Coalitional Games

Cooperative or "coalitional" games are essentially games where "action sets" are available to coalitions of players. The "Nash Program" sets forth the claim that all coalitional games can be reduced to a non-cooperative formulation. Contrary to this notion is what may be termed the "Aumann Program", which argues that non-cooperative games essentially claim that agents' interests are opposed - yet when there are more than two people, one cannot really say that the interests of all agents are completely opposed to one another: two agents may have a common interest in ganging up on the third. The formal structure of a coalitional game is as follows:
· N agents, which can be arranged into 2^N coalitions.
· A coalition S has a set of actions, AS, available to that coalition.
· Each agent forms preferences over the set of "outcomes". An "outcome" of a coalitional game is two things: the coalitions that form among players and the actions that each coalition takes (a point in the set A = the product of the AS). Thus, the payoff to a player i depends on which coalition (if any) he belongs to, which coalitions have been formed, and the joint action of all the coalitions.

Strategy: a strategy is a choice of action by the coalition (and not the individual - although the internal stability of a coalition is an issue); each coalition constructs a best response function to all other possible configurations of actions by other coalitions or agents.
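A minimal sketch of a transferable-utility coalitional game and the "blocking" test behind the core concept discussed below; the three-player majority game here is an invented example:

```python
# Hedged sketch: core membership test for a TU coalitional game.
# The three-player characteristic function below is invented: any
# two- or three-player coalition is worth 1, singletons worth 0.
from itertools import chain, combinations

players = (1, 2, 3)

def v(S):
    """Characteristic function of a simple majority game."""
    return 1.0 if len(S) >= 2 else 0.0

def coalitions(players):
    return chain.from_iterable(combinations(players, k)
                               for k in range(1, len(players) + 1))

def in_core(x):
    """x maps players to payoffs. x is in the core if it is feasible
    and no coalition S gets less in total than the v(S) it could
    secure by breaking away (i.e. no coalition can "block" x)."""
    if sum(x.values()) > v(players):
        return False
    return all(sum(x[i] for i in S) >= v(S) for S in coalitions(players))

print(in_core({1: 1/3, 2: 1/3, 3: 1/3}))  # → False: any pair could secure 1 > 2/3
```

In fact every allocation fails this test in the majority game, which is the classic example of a game with an empty core.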

There are many equilibrium concepts in coalitional games, depending on the structure of the game. We divide them into four general types: Simple Objection, Stable Objection, Balanced Objection and Bargaining. The differentiation will become clearer as we move on.

I - Simple Objection Coalitional Games

In a simple objection game, an equilibrium is broken if any one person deviates ("objects") from the resulting allocation. This is divided into two types: games with Transferable Utility (TU) and games with Non-Transferable Utility (NTU).

(1) Coalitional Games with Transferable Utility (TU)
· A set of agents (N) and possible coalitions (2^N).
· A value function v(S) which gives the total payoff to coalition S, which is to be divided within the coalition in a pre-defined way.
· A "cohesive" coalition ensures that the value of the payoff to any player in the coalition is greater than the value of the payoff that player would get under any other coalition (including sub-coalitions) or by herself.

Equilibrium: the equilibrium concept here is the famous "core". The core is a set of payoffs to individuals which cannot be "blocked", i.e. there is no other coalition S, with another allocation feasible for that coalition, such that each member of that coalition gets a better payoff than what he would get from the core allocation.

(2) Coalitional Games with Non-Transferable Utility (NTU)
· Same as before, except that the value function v(S) just gives a total payoff and says nothing about how it is to be divided within the coalition. As a result, when joining a coalition, one is not sure of exactly what one will be getting. The value of a coalition v(S) is thus some set of possible configurations. A game with NTU is thus more general than a game with TU.
· There is a set of economy-wide "consequences" (call it X) which an agent can face and thus forms preferences over. v(S) is thus some subset of X.

Equilibrium: a Core can still be defined as before.

II - Stable Objection Games

In our previous game, a deviation by an agent or another coalition would be enough to eliminate an allocation from the Core. However, we did not consider whether that deviation was "stable", in the sense that the allocation deviated to might itself be deviated from by another renegade coalition, which may make the members of the original breakaway coalition worse off. Thus, one may argue that a deviation is not "credible" if it is not ultimately stable.
· The structure is the same as in the Transferable Utility (TU) case: a set of agents N and possible coalitions 2^N. Payoffs to coalitions are v(S) and they are divided within a coalition in some pre-defined way.
· Imputations: I is the set of "imputations" of the game. An "imputation" is a feasible allocation where the payoff to each agent in the game is greater than the payoff he would obtain alone in the original game. Thus, we shall refer to allocations as "imputations".
· An "objection" by a coalition S is an imputation (i.e. an allocation) such that the value of the original imputation for this coalition is less than the value of the new imputation: the total payoff to S under the new imputation is greater than the total payoff to S under the old imputation. Naturally, a new imputation proposed by S must be feasible for S.

Equilibrium: we have two equilibria now: (1) Core: as before - the set of all imputations that are not "objected" to. (2) Stable Set: a stable set Y is a set of imputations such that (i) there is no coalition which objects by moving to an imputation within the set Y (internal stability) and (ii) any imputation outside Y will be objected to by some coalition S with an imputation in Y (external stability).

Notes: essentially, the stable set says that all allocations outside Y will be objected to by some coalition and no coalition objects to a point in Y. This is like the core. The major difference is that a stable set is also, as its name indicates, "stable": effectively, all the objections to points outside Y are for points inside Y. Thus, while a point is omitted from the core if a deviation occurs to another point, it can be allowed in the stable set if that second allocation is itself deviated from. Thus, the core is a subset of any stable set. There can also be multiple stable sets in any coalitional game - but no stable set can be a proper subset of another. However, if the core itself is a stable set, then there is only one stable set (i.e. the core itself). Finally, extensions to NTU games are straightforward.

III - Balanced Objection Games

In our previous cases, a deviation occurs by a single coalition's objection; that is enough to exclude points from the core. The stable set further requires that the point this coalition deviates to will itself not be deviated from. But we have still not considered the possibility that an objection may have a counter-objection. A "counter-objection" is not like that of the stable set - it is not an objection to the point you deviate to. Rather, it is the suggestion of a different point which is better than the point you deviate to. There are four types of equilibria in such cases: Bargaining Sets, Kernels, Nucleoli and Shapley Values. We are borrowing the objection/counter-objection characterization of these concepts from Osborne & Rubinstein (1994). All games here are thus TU (although extensions to NTU have been done).

(1) The Bargaining Set:
· The game is structured as before (TU).
· An objection to a point is defined as a pair (x, S) which objects to another allocation y, where each member of S is better off in the new allocation.
· A counter-objection is a pair (z, T) which objects to the proposed objection (x, S), where every member of T is better off under z than under x. Note that some members of T could have been in S.
· Both an objection and a counter-objection in this case are player-to-player. Thus, we should really have said that an objection by player 1 to player 2 is the proposition of a coalition S in which 1 is a member and 2 is not. Similarly, the counter-objection of player 2 to player 1 is a coalition T in which 2 is a member but 1 is not. (Think of it as a bratty brawl. Objection of 1 to 2: "I don't like allocation y, thus I am going to make a coalition S that excludes you, where we go to point x." Counter-objection of 2 to 1: "Well, I don't like that plan. In fact, I am going to make my own coalition T which excludes you - and even has some of your friends - and go to another point z.")

Equilibrium: a set B is a "bargaining set" if every objection (x, S) to a point in that set has a counter-objection (z, T). Notes: since each objection and counter-objection must exclude the person you are objecting to, a "core" is just the set of allocations where no one objects or counter-objects against anybody else. Thus, the core is a subset of the bargaining set.
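The cardinal comparison of coalitional gains and losses used by the kernel and nucleolus below is conventionally measured by the "excess" of a coalition at an allocation; a minimal sketch with an invented characteristic function:

```python
# Hedged sketch: the excess e(S, x) = v(S) - sum of x over S measures
# how strong coalition S's complaint against allocation x is.
# The characteristic function below is invented for illustration.

def v(S):
    """Invented TU characteristic function on players {1, 2, 3}."""
    worth = {(1,): 0, (2,): 0, (3,): 0,
             (1, 2): 4, (1, 3): 3, (2, 3): 2, (1, 2, 3): 6}
    return worth[tuple(sorted(S))]

def excess(S, x):
    """How much coalition S could gain (if positive) by deviating
    from allocation x; negative means S has no complaint."""
    return v(S) - sum(x[i] for i in S)

x = {1: 3, 2: 2, 3: 1}
print(excess((1, 2), x))  # → 4 - 5 = -1: coalition {1, 2} has no complaint
print(excess((2, 3), x))  # → 2 - 3 = -1
```

Comparing such excesses across coalitions is exactly the kind of cardinal, cross-person measurement the notes on the kernel point to.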

(2) The Kernel:
· The game is structured as before (TU).
· An objection (x, S) is defined as before (person-to-person), only now we claim that the objection of player 1 against player 2 is the formation of a coalition which excludes 2 (who preferred the original point). A counter-objection (y, T) by player 2 against player 1 notes that, by forming his own coalition (which excludes 1), the gain of moving from the original point to player 1's proposed x is less than the gain of moving from the original point to player 2's proposed y. By "gain" and "loss" we mean the gains and losses of all members of the respective coalition (i.e. the gain of the coalition / the loss of the coalition).

Equilibrium: the "Kernel", defined as the set of all allocations such that, for any objection by any player against another, there is a counter-objection. Notes: the difference between the kernel and the bargaining set is that the objections and counter-objections in the kernel case require us to consider the payoffs of all the members of the coalition and "measure" the total gain/total loss of a coalition's move. Thus, there is a required element of cardinality and cross-person comparison of utility in kernels.

Note that, as a result, the kernel is a subset of the bargaining set.

(3) The Nucleolus:
· The game is structured as before (TU).
· An objection (x, S) is now defined as a coalition-to-coalition objection - i.e. an objection by coalition S to a point is the formation of coalition S and a move to point x which yields higher payoffs to the coalition. A counter-objection (y, T) by coalition T against coalition S is the formation of coalition T such that, by moving from the original point to S's proposed x, the coalition T suffers a loss, and the gain that T would get by staying at the original point and not going to x is greater than the gain S would get by moving to the new point x.

Equilibrium: the "Nucleolus", the set of allocations such that every such objection by one coalition has a counter-objection by another.
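The Shapley Value, listed in the section heading above among the balanced-objection concepts, can be sketched as each player's average marginal contribution over all orders of arrival; the characteristic function here is invented:

```python
# Hedged sketch: the Shapley value as the average marginal contribution
# of each player over all arrival orders. Characteristic function invented.
from itertools import permutations

players = (1, 2, 3)

def v(S):
    """Invented TU characteristic function on players {1, 2, 3}."""
    worth = {(): 0, (1,): 0, (2,): 0, (3,): 0,
             (1, 2): 4, (1, 3): 3, (2, 3): 2, (1, 2, 3): 6}
    return worth[tuple(sorted(S))]

def shapley(players, v):
    value = {i: 0.0 for i in players}
    orders = list(permutations(players))
    for order in orders:
        seen = []
        for i in order:
            value[i] += v(seen + [i]) - v(seen)  # i's marginal contribution
            seen.append(i)
    return {i: value[i] / len(orders) for i in players}

print(shapley(players, v))  # → {1: 2.5, 2: 2.0, 3: 1.5}
```

The shares always sum to the worth of the grand coalition (here 6), which is the efficiency property that makes the Shapley Value a one-point "balanced" solution.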


**Source:** Essay UK - http://www.essay.uk.com/coursework/the-history-and-taxonomy-of-game-theory.php
