\section{Background}
\label{background}
\subsection{Game Theory}
\par Game theory is the study of interactions among two or more decision makers, i.e., agents. Situations in game theory can be divided into two main groups: strategic form and extensive form games. In a strategic form game, all players perform their actions simultaneously; this form of game maps more naturally onto a distributed optimization solution. In an extensive form game, by contrast, a player has some knowledge of the moves of its opponent or opponents before it chooses an action \cite{smyrnakis_dynamic_2010}. Our models are extensive form.

\par In a scenario such as coordinating help during a disaster, a centralized optimization solution for the assisting autonomous agents is not possible \cite{smyrnakis_dynamic_2010}. Only when the actual crisis closely resembles the crises used to develop the response model will the response be close to the globally optimal solution; otherwise, the solution may be highly inefficient \cite{smyrnakis_dynamic_2010}.

\par On a small scale, a distributed complete solution works well. As that solution is scaled up, however, the cost of the required resources and the increased complexity make it infeasible \cite{smyrnakis_dynamic_2010}. A few distributed approaches require fewer resources than a fully implemented distributed complete-communication solution, in which every agent must have a full view of what is occurring: fictitious play, maximum gain, and regret matching. Because less information is passed between agents, the solution can become stuck in a local optimum, but at least the problem remains tractable \cite{smyrnakis_dynamic_2010}.

\par Fictitious play is based on the assumption that the strategy of the opponent is fixed \cite{smyrnakis_dynamic_2010}. In fictitious play, a player holds beliefs about the strategies of the players it is playing against (its opponents). Each iteration, the player uses these believed strategies to pick the action that maximizes its expected reward. After every turn, the player updates its beliefs about the opponents' strategies using the actions the opponents took in that turn. If the opponents use a fixed (pure) strategy, fictitious play is guaranteed to converge to a Nash equilibrium (NE); this convergence, however, is typically very slow \cite{smyrnakis_dynamic_2010}.

\par Other forms of fictitious play also exist. In one variation, recent observations are given more weight than older observations, which treats the opponent's strategy as dynamic while we attempt to estimate it. The model of fictitious play in \cite{smyrnakis_dynamic_2010} uses Hidden Markov Models to represent the player's beliefs about its opponents' strategies.
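The basic fictitious play loop described above can be sketched in a few lines. This is a minimal illustration for a hypothetical $2\times2$ game with made-up payoffs, not the model from \cite{smyrnakis_dynamic_2010}: the player tracks empirical counts of the opponent's actions and best-responds to the resulting frequency distribution.

```python
import numpy as np

# Payoff matrix for our player in a hypothetical 2x2 game:
# rows = our actions, columns = opponent actions.
payoff = np.array([[3.0, 0.0],
                   [1.0, 2.0]])

# Counts of the opponent's past actions -- the player's "belief" state
# (initialized to a uniform prior of one observation per action).
opponent_counts = np.array([1.0, 1.0])

def best_response(payoff, opponent_counts):
    """Pick the action maximizing expected payoff under the empirical
    distribution of the opponent's observed actions."""
    belief = opponent_counts / opponent_counts.sum()
    expected = payoff @ belief  # expected payoff of each of our actions
    return int(np.argmax(expected))

# One round: observe the opponent play action 1, update the belief,
# then choose our best response against the updated belief.
opponent_counts[1] += 1
action = best_response(payoff, opponent_counts)
```

If the opponent's strategy really is fixed, these empirical frequencies converge to it, which is exactly the assumption fictitious play relies on.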

\subsection{Q Learning}
A single-stage game is a game in which only one action is chosen and the game is not repeated. A multi-stage game, in general, is one in which a player chooses multiple actions over the course of play. A common feature of both types of games is that all players have full knowledge of the world, e.g., the rewards available to players, how the game is played, and, in the case of a multi-stage game, the history of prior actions.

\par In reinforcement learning, the goal is to learn the optimal policy of actions an agent should take in order to obtain the highest reward. The difficulty is that in reinforcement learning the player knows neither the utilities nor how the game is played \cite{russell_artificial_2010}. In a multi-agent reinforcement learning domain, a different type of model is needed: a Markov game, an environment in which the combined actions of all agents at a given time \emph{t} lead to the next state in a stochastic manner. This is the form of game we use as a model.

\par Q learning is represented by the following forms of the Bellman equations. 

\begin{equation}\label{Q Function}
Q(s,a) = R(s,a) + \gamma\sum_{s'} P_{(s,a)}(s')V(s')
\end{equation}
\begin{equation}\label{Value Function}
V(s) = \max_a (Q(s,a))
\end{equation}

\par Here $Q(s,a)$ defines a Q function and $V(s)$ is the value function. A Q function gives the expected sum of discounted future reward obtained by performing a given action in a given state, under the assumption that from that state--action pair onward the player makes optimal moves until the completion of the game. Q learning is the online technique that allows a player, over many iterations, to learn the Q function. One obstacle when using Q learning, however, is that the state space increases exponentially as more agents are added \cite{uther_adversarial_1997}.
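When the reward and transition model are known, equations \eqref{Q Function} and \eqref{Value Function} can be iterated directly. The following sketch does this for a toy two-state, two-action MDP with hypothetical numbers (the matrices $R$ and $P$ below are invented for illustration only):

```python
import numpy as np

# Toy MDP used to iterate equations (1) and (2):
# Q(s,a) = R(s,a) + gamma * sum_{s'} P(s'|s,a) * V(s').
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])                 # R[s, a]: immediate rewards
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])   # P[s, a, s']: transitions
gamma = 0.9

def value_iteration(R, P, gamma, iters=500):
    """Alternate the Q-function and value-function equations until
    V stops changing (a contraction for gamma < 1)."""
    n_states, _ = R.shape
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = R + gamma * (P @ V)  # eq. (1), vectorized over (s, a)
        V = Q.max(axis=1)        # eq. (2)
    return Q, V

Q, V = value_iteration(R, P, gamma)
```

The fixed point reached by this loop satisfies both equations simultaneously; Q learning approximates the same quantity online, without knowing $R$ and $P$ in advance.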

\par At first glance it may seem that there is a disparity between one-stage games and Markov games. It turns out that the Q function is the link between these two differing types of game. When seen as a payoff function, the Q function of each state in a one stage game will only reach an equilibrium if the policies for the overall multi-stage game are in equilibrium. This means that by finding an appropriate Q function, an equilibrium for a Markov game can also be found \cite{littman_friend-or-foe_2001}.

\begin{equation}\label{Q Learning}
Q(s,a) := (1-\alpha) Q(s,a) + \alpha(R(s,a) + \gamma V(s'))
\end{equation}

\par The learning rate of the algorithm is denoted by $\alpha$. If $\alpha$ decreases towards zero over subsequent iterations of the algorithm, the Q values are guaranteed to converge; if this is not the case, there is no guarantee of convergence. While this may initially seem undesirable, in practice it can be helpful for the Q values not to converge. For example, when a learning player faces an opponent who is itself learning through each iteration of the game, it would be disastrous for the player to converge to a fixed strategy while its opponent continued to learn \cite{uther_adversarial_1997}. Q learning is only guaranteed to be optimal when the opponents stick to a pure strategy. It is, however, a robust approach when opponents use a mixed strategy \cite{amato_high-level_2010}.
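A single Q-learning step can be written out concretely. This sketch uses a trivial hypothetical two-state chain (state 0 leads to state 1 with reward 0; state 1 is terminal with reward 1) and the common convention in which $\alpha$ weights the newly sampled estimate, with $\alpha$ decayed as $1/k$ so the values settle:

```python
# Tabular Q-learning on a hypothetical 2-state chain with one action.
gamma = 0.9
Q = {(0, 'go'): 0.0, (1, 'go'): 0.0}

def update(Q, s, a, r, s_next, alpha, terminal=False):
    """One Q-learning step: blend the old estimate with the sampled
    target, where alpha weights the new information."""
    v_next = 0.0 if terminal else max(Q[(s_next, b)] for b in ['go'])
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * (r + gamma * v_next)

# Decay alpha toward zero across episodes, as required for convergence.
for k in range(1, 2001):
    alpha = 1.0 / k
    update(Q, 1, 'go', 1.0, None, alpha, terminal=True)  # terminal reward 1
    update(Q, 0, 'go', 0.0, 1, alpha)                    # step toward state 1
```

With the decaying learning rate, the values converge to $Q(1,\text{go}) = 1$ and $Q(0,\text{go}) = \gamma \cdot 1 = 0.9$; holding $\alpha$ fixed instead would keep the estimates responsive to a changing opponent, at the cost of the convergence guarantee.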

\subsubsection{Opponent Modeling Q Learning}
\par Opponent Modeling Q Learning (OM) \cite{uther_adversarial_1997} allows a player to take advantage of the suboptimal moves an opponent may make during a game. OM uses all of the same information as Minimax Q Learning, but also keeps track of how many times the opponent chooses each action in each state. This extra information lets the player overcome a problem with Minimax Q Learning: because the player cannot control the actions of the opponent, the opponent may not try each move from a state infinitely often, in which case the Minimax values will most likely be incorrect. Since we now have the count of the opponent's actions for each state, we can generate a probability distribution over our opponent's actions, $P(a_o|s)$. Substituting this distribution for the distribution of the opponent's previous actions in state $s$ in the Minimax Q Learning expectation, we get:
\begin{equation}
E(Q(s,a_p)) = \sum_{a_o}P(a_o|s)Q(s,a_p,a_o)
\end{equation}
After calculating all of the possible values, the player picks the action with the highest expected reward for this state, $Q(s,a_p)$. The value of this action is the value $V(s)$ for that state. Choosing an action from the probability distribution of previous actions in this way is the same as arriving at a best response in fictitious play.
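The expectation above reduces to a weighted average of joint Q-values. This sketch computes it for a single state with hypothetical numbers (the joint Q-table and the opponent's action counts are invented for illustration):

```python
import numpy as np

# Joint Q-values Q[a_p, a_o] for one state s: rows are our actions,
# columns are the opponent's actions (hypothetical values).
Q_joint = np.array([[2.0, -1.0],
                    [0.0,  1.0]])
opponent_counts = np.array([3.0, 7.0])  # opponent's observed choices in s

# E[Q(s, a_p)] = sum_{a_o} P(a_o|s) * Q(s, a_p, a_o)
P_ao = opponent_counts / opponent_counts.sum()  # P(a_o | s)
expected_Q = Q_joint @ P_ao                      # expectation per a_p
a_p = int(np.argmax(expected_Q))                 # best action for us
V_s = expected_Q[a_p]                            # V(s) for this state
```

Here the opponent plays action 1 about 70\% of the time, so the player prefers the row that pays off against that tendency, exactly the exploitation of suboptimal opponents that OM is designed for.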


\subsection{Experience Weighted Attraction}

Experience Weighted Attraction (EWA)\footnote[1]{The information for this model was found in \cite{camerer_experience-weighted_1999}, but numerous other versions of this paper contain similar information. These papers are by far the most informative available on this topic, as there is little other literature.} is a learning model that combines two seemingly disparate learning models: belief learning and choice reinforcement. In a belief learning model, a player keeps track of the history of the moves of other players and develops a belief about how the other players act. Then, given these beliefs, it chooses a best response that maximizes its expected payoff. In a choice reinforcement model, the assumption is that the previous payoffs of chosen strategies affect how a strategy is currently chosen. In this view, players generally do not hold beliefs about how other players will play; they care only about the payoffs received in the past, not how the play evolved to yield those payoffs.

\par These two learning models are usually treated as fundamentally different approaches, but EWA shows that they are related: its three modeling features include both models as special cases. The first feature concerns how strategies are reinforced. In choice reinforcement, strategies that are not chosen are not reinforced at all. In EWA, however, unchosen strategies are reinforced according to what they would have earned had they been chosen, multiplied by a discount factor $\delta$. Intuitively, this makes sense, as humans and animals alike can learn from actions and experiences that are not directly reinforced by their choice of action.

\par The next feature is the pair of decay rates in EWA that control the growth rates of attractions. In belief models, the attraction is the expected payoff of a given action, while in reinforcement learning the attraction is a number that continues to grow with the number of actions taken. EWA accommodates both views of attraction with a discount factor $\phi$ for the attractions of past actions and $\rho$ for experience.

\par The last feature is the initial weights for attraction and experience. $N(0)$ denotes the initial weight of the experience. When $\delta = 0$, $\rho = 0$, and $N(0) = 1$, the EWA attractions match those of choice reinforcement. When $\delta = 1$, $\phi = \rho$, and $N(0)$ reflects the prior beliefs of the algorithm, the EWA attractions are equal to the expected payoffs of a belief-class model.

Every round, two variables of the EWA algorithm are updated. The first is $N(t)$, which stores a value related to how many times a state has been seen. The second is $A_i^j(t)$, the attraction of strategy $j$ for player $i$ after time $t$. The attraction can be thought of as the pull a certain strategy exerts to lure a player into selecting it.
\begin{equation}
N(t) = \rho \cdot N(t - 1) + 1, t \geq 1
\end{equation}
\begin{eqnarray}
A_i^j(t) & = & \dfrac{\phi\cdot N(t - 1)\cdot A_i^j(t - 1)}{N(t)} + \nonumber\\
& & \dfrac{[ \delta + (1 - \delta)\cdot I(s_i^j,s_i(t))]\cdot \pi_i(s_i^j,s_{-i}(t))}{N(t)}
\end{eqnarray}
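One round of these updates can be sketched directly; the parameter values below are hypothetical, chosen only to illustrate the mechanics, with $\phi$ discounting past attractions and $\rho$ discounting experience as described above:

```python
# One EWA round for a player with two strategies (hypothetical values).
phi, rho, delta = 0.9, 0.85, 0.5  # attraction decay, experience decay,
                                  # and the weight on foregone payoffs

def ewa_step(A, N_prev, chosen, payoffs, phi, rho, delta):
    """Update attractions A[j] and the experience weight N after one
    round. payoffs[j] is what strategy j earned (or would have earned
    had it been chosen); `chosen` is the index actually played."""
    N = rho * N_prev + 1.0  # experience update
    A_new = []
    for j, a in enumerate(A):
        indicator = 1.0 if j == chosen else 0.0  # I(s_i^j, s_i(t))
        reinforcement = (delta + (1 - delta) * indicator) * payoffs[j]
        A_new.append((phi * N_prev * a + reinforcement) / N)
    return A_new, N

# Start from zero attractions and N(0) = 1; play strategy 0, which
# earns 2.0, while strategy 1 would have earned 1.0.
A, N = ewa_step([0.0, 0.0], 1.0, chosen=0, payoffs=[2.0, 1.0],
                phi=phi, rho=rho, delta=delta)
```

Note that the unchosen strategy still gains attraction, scaled by $\delta$, which is the "learning from foregone payoffs" feature that distinguishes EWA from pure choice reinforcement.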

In reinforcement models, rewards must be given for choice reinforcement. Reinforcements in EWA are updated according to the following two cases, which reduce to the single updating equation listed second.
\begin{equation}
R_i^j(t) = \left\{ \begin{array}{ll}
\phi\cdot R_i^j(t - 1) +\pi_i(s_i^j,s_{-i}(t)) & \mbox{if $s_i^j = s_i(t)$,}\\
\phi\cdot R_i^j(t - 1) & \mbox{ if $s_i^j \neq s_i(t)$.} \end{array} \right. 
\end{equation}
\begin{equation}
R_i^j(t) = \phi \cdot R_i^j(t - 1) + I(s_i^j,s_i(t)) \cdot \pi_i(s_i^j,s_{-i}(t))
\end{equation}
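The claimed reduction to choice reinforcement can be checked numerically: with $\delta = 0$, $\rho = 0$, and $N(0) = 1$, the EWA attraction update collapses to the cumulative-reinforcement rule above. This sketch runs both updates side by side on a hypothetical three-round history (all payoffs invented for illustration):

```python
phi = 0.9  # hypothetical attraction decay rate

def ewa_attraction(A_prev, N_prev, chosen, payoffs, phi, rho, delta):
    """EWA attraction and experience update for one round."""
    N = rho * N_prev + 1.0
    A = [(phi * N_prev * a
          + (delta + (1 - delta) * (1.0 if j == chosen else 0.0))
          * payoffs[j]) / N
         for j, a in enumerate(A_prev)]
    return A, N

def reinforcement(R_prev, chosen, payoffs, phi):
    """Choice reinforcement: decay old value, add payoff only if chosen."""
    return [phi * r + (payoffs[j] if j == chosen else 0.0)
            for j, r in enumerate(R_prev)]

# Run both rules on the same history with delta = 0, rho = 0, N(0) = 1.
A, N = [0.0, 0.0], 1.0
R = [0.0, 0.0]
history = [(0, [1.0, 0.5]), (1, [0.2, 2.0]), (0, [1.5, 0.0])]
for chosen, payoffs in history:
    A, N = ewa_attraction(A, N, chosen, payoffs, phi, rho=0.0, delta=0.0)
    R = reinforcement(R, chosen, payoffs, phi)
```

With $\rho = 0$, $N(t)$ stays at $1$, so the EWA attractions track the reinforcement values exactly, matching the special case stated earlier.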

The parameters used in the EWA equations vary depending on the data being analyzed. The lack of theory about how parameter values change for a given game is a major hindrance to the use of this algorithm.