\documentclass[a4paper,10pt,notitlepage]{article}
\usepackage{graphicx}
\usepackage[margin=1in, top=0.5in, bottom=0.5in]{geometry}
\usepackage{subfig}
\usepackage{appendix}

%opening
\title{Prisoner's Dilemma: Experimenting with Genetic Algorithms}
\author{Carlos Garc\'{\i}a Cordero}

\begin{document}

\maketitle

\begin{abstract}
%Definitions
The Prisoner's Dilemma (PD) is a typical game theory problem that has no true ``solution''. However, standard game theory suggests that for every non-zero-sum game\footnotemark, it is possible to find an equilibrium between the players. If Player A picks his best possible move taking into account the move of Player B, and Player B does the same, then the game is said to have reached a (Nash) equilibrium. For this particular problem, the equilibrium is found when both players defect \cite{fogel93}. And so, in a standard game setup\footnotemark, defecting dominates cooperation.

\footnotetext{\textbf{Non-zero-sum games} are games in which the players' payoffs do not sum to zero; one player's gain is not necessarily another player's loss.}
\footnotetext{We define a ``standard game'' as a game that has the payoff matrix defined by Axelrod \cite{axelrod80}.}


%Description of GA
Genetic algorithms will be used to try to find solutions to the iterated Prisoner's Dilemma (IPD)\footnotemark. In this context, \textit{Individuals} will represent strategies of playing the IPD; \textit{chromosomes} will represent the collection of moves that can be taken depending on the opponent's history; and \textit{alleles} will represent the possible moves that a player can select.

\footnotetext{The \textbf{Iterated Prisoner's Dilemma} consists of playing PD games repeatedly, where each player has the history of moves made by his opponent in the last games. Players are able to formulate strategies based on games played in the past.}

%Objective of the GA
The purpose of using genetic algorithms is to find solutions to, and raise interesting questions about, the problem at hand. We explore how sensitive individuals are to the different variables of the Prisoner's Dilemma, and we try to find ways to reach strategies different from the expected ones.

\end{abstract}


\section[Strategy Encoding]{Strategy Encoding\footnote{Answers to questions 2 and 4.}}
Individuals have been granted the ability to choose how much information they wish to use to formulate their strategies. The chromosomes have been divided into two sections: one that encodes the number of games to take into consideration from the opponent's history, and another that specifies the move to select given a particular combination of the opponent's past moves. We name these sections the \textit{history section} and the \textit{strategy section}, respectively.


\subsection{Chromosome's History Section}
The \textit{history section} of the chromosome consists of a fixed number of bits that determine how many games the individual will take into account when choosing which move to make. We define the \textit{history size} as the number of games to take into account. The number of \textbf{ones} determines the \textit{history size} for the strategy. A history section of only zeroes takes into account 1 game (Figure \ref{fig:history_no_bits}), and one bit set means that two games will be taken into account (Figure \ref{fig:history_one_bit}).

It's important to note that the position of the set bits does not affect the number of games taken into account (Figures \ref{fig:history_three_bits} and \ref{fig:history_three_bits_b}).
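As a minimal Python sketch (not part of the original program), this decoding reduces to counting the set bits, assuming the history section is given as a list of 0/1 values:

```python
def history_size(history_bits):
    # Only the count of ones matters, not their positions:
    # all zeros -> 1 game, one bit set -> 2 games, and so on.
    return sum(history_bits) + 1
```

Note that two different bit patterns with the same number of ones decode to the same history size, which is exactly the position-independence described above.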

\begin{figure} [ht]
 \centering
  \subfloat[One game]{\fbox{\label{fig:history_no_bits}\includegraphics[scale=0.40]{figures/history_no_bits.png}}}
  \subfloat[Two games]{\fbox{\label{fig:history_one_bit}\includegraphics[scale=0.40]{figures/history_one_bit.png}}}
 \caption{The number of bits set to one determines the number of games.}
\end{figure}

\begin{figure} [ht]
 \centering
  \subfloat[Four games]{\fbox{\label{fig:history_three_bits}\includegraphics[scale=0.40]{figures/history_three_bits.png}}}
  \subfloat[Also four games]{\fbox{\label{fig:history_three_bits_b}\includegraphics[scale=0.40]{figures/history_three_bits_b.png}}}
 \caption{Two different history sections can represent the same amount of games.}
\end{figure}

An alternative way of encoding the \textit{history size} was to use the fixed number of bits in the \textit{history section} as a binary-encoded number. This way, Figure (\ref{fig:history_one_bit}) would represent 2 games, Figure (\ref{fig:history_three_bits}) 11 games, and Figure (\ref{fig:history_three_bits_b}) 25 games. This approach, however, leads to a couple of problems with our desired strategy representation (covered in Section \ref{subsec:strategy}). With a \textit{history section} of \textit{n} bits, it is possible to represent up to $2^n$ \textit{history sizes}. The number of genes required for a strategy, therefore, is $2^{2^n}$. This means that we either keep \textit{n} small or clamp it to a specific range. Neither option seemed a good solution, which is why the former representation was chosen.
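The double-exponential blowup can be made concrete with a short sketch (illustrative only; the report does not implement this rejected encoding):

```python
def strategy_bits_needed(n):
    # Binary-encoded history section: n bits encode history sizes
    # up to 2**n, and a history size of h needs 2**h strategy bits,
    # so the strategy section must hold 2**(2**n) move entries.
    return 2 ** (2 ** n)
```

Already at $n=4$ the strategy section would need 65,536 genes, which illustrates why this encoding was discarded.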


\subsection{Chromosome's Strategy Section}
\label{subsec:strategy}
The \textit{strategy section} of an individual's chromosome contains a fixed number of bits that specify the move to make given the opponent's last moves. The size (in bits) of this section is defined by the maximum history size an individual can hold. For a history size of \textit{n}, $2^n$ bits are required to represent strategies. Figure (\ref{fig:strategy_section}) shows a graphical representation of a strategy. The strategy bits (B0 to B7, in this example) take the value 0 for \textit{Defect (D)} and 1 for \textit{Cooperate (C)}. \textit{B0} (bit 0) is defined as the move to play given that the last three moves of the opponent were D, D and D; \textit{B1} for the moves C, D, D; \textit{B2} for the moves D, C and D; and so on for each \textit{Bx}.

Often, to describe an individual, instead of showing the entire chromosome we will show a simplified version where only the active genes are shown. For example, an individual with a history section of 2 bits and a strategy section of 16 bits could look like this:
\begin{center}
 101010111010101000.
\end{center}
The history section (10) states that this individual will take into account 2 games from the opponent's history. Therefore only bits 0, 1, 2 and 3 from the strategy section (1010111010101000) are significant to this individual's strategy. The simplified version of its chromosome is:
\begin{center}
 1000.
\end{center}
This strategy could be stated in words as: ``The individual will defect unless the last two games of the opponent were cooperate and cooperate''.
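The move lookup can be sketched in Python (this is an illustration, not the report's implementation; the ordering assumption is that the first listed opponent move supplies the lowest index bit, matching B1 = (C, D, D) $\rightarrow$ index 001 above):

```python
def choose_move(strategy_bits, opponent_moves):
    # strategy_bits: list [B0, B1, B2, ...]; 1 = cooperate, 0 = defect.
    # opponent_moves: the opponent's last moves as 0/1 values.
    # The i-th listed move contributes 2**i to the lookup index.
    index = sum(move << i for i, move in enumerate(opponent_moves))
    return strategy_bits[index]
```

For the ``1000'' individual above (written most-significant-bit first, so B0-first it is [0, 0, 0, 1]), the lookup cooperates only when both of the opponent's last two moves were cooperate.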

\begin{figure} [ht]
\centering
\fbox{\includegraphics[scale=0.30]{figures/strategy_section.png}}
\caption{A history size of 3 requires the strategy section to have 8 bits. Each bit represents which move to make.}
\label{fig:strategy_section}
\end{figure}

The nature of this representation confers important properties on individuals, properties that we expect to see impacting the algorithm. Because the \textit{strategy section} has to supply enough bits for all possible history sizes, a chromosome with a history section of size 1 and another with a history section of size 5 will have the same number of bits. In fact, with this representation, all individuals in the population have the same chromosome length. A chromosome with a small history size will have just as many bits as one with a big history size. We define the bits, or genes, that are used for a strategy as \textit{active genes} and the unused ones as \textit{hidden genes}.

The crossover operation acquires the ability to place active genes into the inactive genes of a newly bred individual, preserving those (now inactive) genes for when the individual reproduces or mutates. This way, specific strategies (good or bad) are able to survive between generations, ``inactive'' as hidden genes.

Mutation also acquires another ability besides the intended one. A mutation in the history section allows a chromosome to activate or hide specific genes in the strategy section. And mutations in the strategy section, particularly in the hidden genes, produce mutated strategies that will be put to the test in future generations (instead of the generation where the mutation occurred).
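A minimal sketch of these two operators over fixed-length chromosomes (again illustrative Python, not the actual program; note that both operators act on the whole chromosome, hidden genes included):

```python
import random

def crossover(parent_a, parent_b):
    # One-point crossover over the full fixed-length chromosome.
    # Hidden (inactive) genes are copied along with active ones,
    # so dormant strategies survive into the offspring.
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(chromosome, rate=0.02):
    # Flipping a history bit can activate or hide strategy genes;
    # flipping a hidden strategy gene changes a dormant strategy
    # that may only be expressed generations later.
    return [bit ^ 1 if random.random() < rate else bit
            for bit in chromosome]
```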

\section[Testing the Algorithm: I just want to cooperate]{Testing the Algorithm: I just want to cooperate\footnote{Answer to question 3.}}
To test the algorithm, we begin by setting simple parameters to see if we can reach a desired result. We let the algorithm run for 1000 generations, with a population of 100 individuals, each playing a total of 50 games against all other players. We set the ``payoff matrix'' as ($S=0, D=0, C=1, T=0$).
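Under these parameters, a single round can be scored as in the following sketch (illustrative Python; the defaults mirror this section's payoff matrix, where only mutual cooperation is rewarded):

```python
def payoff(my_move, opp_move, S=0, D=0, C=1, T=0):
    # my_move / opp_move: 1 = cooperate, 0 = defect.
    # S: sucker's payoff, D: mutual defection,
    # C: mutual cooperation, T: temptation to defect.
    if my_move and opp_move:
        return C
    if my_move and not opp_move:
        return S
    if not my_move and opp_move:
        return T
    return D
```

With these values an all-cooperating individual scores $99 \times 50 \times 1 = 4{,}950$ over the tournament, matching the maximum fitness computed below; passing ($S=0, D=1, C=3, T=5$) instead recovers the standard Axelrod payoffs used later.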

% We set the ``reward'' parameters as shown in Table (\ref{table:table_cooperate}).
% \begin{table}[ht]
% \centering
% \begin{tabular}{| c | c | c | c |}
%   \hline
% 	    & Cooperate & Defect \\
%   \hline
%   Cooperate & 1, 1      & 0, 0 \\
%   \hline
%   Defect    & 0, 0      & 0, 0 \\
%   \hline
% \end{tabular}
% \caption{}
% \label{table:table_cooperate}
% \end{table}

The maximum fitness an individual could have is $99 \times 50 = 4,950$. As Figure (\ref{graph:graph1}) shows, the fittest individuals find the ``just cooperate'' solution in just a few generations, while the whole population catches up in about 150 generations.

\begin{figure}[ht]
  \begin{center}
    \input{figures/fitness_cooperation.tex}
    \caption{All individuals rapidly find that cooperation is the way to go.}
    \label{graph:graph1}
  \end{center}
\end{figure}

The downward spikes in the population occur because individuals mutate cooperate bits into defect bits, or because previously hidden defect genes get copied over to active genes. On average we have two types of fittest individuals, one type with the simplified chromosome ``1111'' and the other with ``11111111'', which represent the same strategy.


\section[Behaviour of Long Running Algorithms]{Behaviour of Long Running Algorithms\footnote{Answer to question 5.}}
\label{sec:long_running_behaviour}
This time we run the algorithm for 4000 generations to see if we are able to find patterns or the emergence of interesting individuals. The population size was set to 100, each individual playing against all other individuals, with 50 games played by each pair. Mutation rates were low (0.02) while crossover rates were high (0.8). The reward parameters were set as normal ($S=0, D=1, C=3, T=5$). Figure (\ref{graph:graph2}) shows the combination of three different variables from the obtained results.

The average fitness reflects how individuals are adapting as a population to the ever-changing environment. Whenever an individual increases its fitness, it does so at the expense of other individuals' fitness. As a consequence, every individual is trying to catch up to the fittest individual. In this sense we can see that for every steep increase in the fitness of the fittest individual, an equivalent steep decrease is expected (e.g. generations 480, 550 and 1100). The only way individuals can increase their fitness for extended periods of time is to do it all together. We can see this effect from generation 3500 to 4000: the average fitness grows along with that of the fittest individual.

An interesting discovery, with our representation, is that populations with shorter history sizes tend to evolve more chaotically, while populations with long history sizes tend to behave symmetrically (they all increase or decrease their fitness together). From generation 0 to generation 2,440, all the fittest individuals have history sizes of 2 and 3, with only very few exceptions. But from generation 2,441 onwards, individuals with long history sizes start to dominate the scene. In fact, from generation 2,441 to generation 4,000 the fittest individuals only have history sizes above 5. This is a random sample of fittest individuals from generation 3,500 to 4,000:
\begin{center}
10111000101000111011110101011100,\\
10111000101000111011110101011100,\\
10111000101000111010110001011100,\\
10111000101000111010110001010100,\\
10111000101000111010110001010100.
\end{center}
As we can see, a pattern is emerging in these long fellows! Apparently we have found building blocks, or schemata, that represent high-fitness individuals in the later stages of the algorithm:
\begin{center}
1011100010100011101*************,\\
********************110*0101*100.\\
\end{center}
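Such schemata can be extracted mechanically from a sample of chromosomes: keep every position where the sample agrees and wildcard the rest. A small illustrative Python sketch (not part of the original program):

```python
def schema(chromosomes):
    # '*' where the sampled chromosomes disagree,
    # the common bit everywhere else.
    return ''.join(column[0] if len(set(column)) == 1 else '*'
                   for column in zip(*chromosomes))
```

Applied to the distinct chromosomes sampled above, this yields a single combined schema whose fixed prefix and suffix are exactly the two building blocks listed.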

\begin{figure}[ht]
  \begin{center}
    \input{figures/fitness_long_term.tex}
    \caption{Individuals have a tough time trying to stay at the top.}
    \label{graph:graph2}
  \end{center}
\end{figure}

Another interesting observation is that throughout 1,263 generations (from 1,071 to 2,334) only one strategy dominates:
\begin{center}
 0100.
\end{center}
This happens because there is a low mutation rate, and our algorithm is probably working with the hidden genes. But as soon as a mutation hit one of the history bits, bigger individuals started to dominate. In fact, we can see that this particular strategy is still lingering around the building blocks of the individuals with long history sizes.

\section[Variations in the Algorithm]{Variations in the Algorithm\footnote{Answer to question 6.}}
The program that implements this particular genetic algorithm was built taking into consideration the fact that we would need to constantly change parameters. The main issue with the analysis of variable parameters is: how do we keep consistency between runs? We tackle this issue by using fixed random seeds. This means that between two program executions, if the random seed is the same and the parameters don't change, we end up with exactly the same results.
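The reproducibility idea is standard pseudo-random-number practice; in Python it reduces to the following sketch (illustrative, not the report's code):

```python
import random

# Fixing the seed makes two runs with identical parameters draw
# the same stream of random numbers, hence evolve identically.
random.seed(42)
run_a = [random.random() for _ in range(5)]

random.seed(42)  # reset: the second "run" sees the same stream
run_b = [random.random() for _ in range(5)]

assert run_a == run_b
```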

Let's try to analyse the results we got in Section (\ref{sec:long_running_behaviour}) by changing some parameters. Only one parameter will change for each variation, leaving all other parameters unmodified.


\subsection{Mutation Rate Variable}
One interesting parameter to change is the mutation rate. While having individuals that spontaneously develop strategies can be useful, if individuals mutate too fast we end up with stability problems. Mutations allow us to avoid getting stuck with specific ``solutions'', or local maxima. Mutating too fast, on the other hand, doesn't allow the population to adequately adapt to the changing environment. Strategies are not able to find an equilibrium because the environment is unpredictable. Figure (\ref{graph:graph_mutation}) shows the effects of a high mutation rate of 0.1. The difference between the average fitness, the maximum fitness and the minimum fitness is more noticeable.

An interesting observation is that the individual
\begin{center}
 0100
\end{center}
appears again, but this time only for 155 generations (from 1,877 to 2,032). We can see that in this small period of time the average fitness of the population is between 8,000 and 10,000, just as in the original run. The difference here is that, because of the mutation rate, the algorithm gets easily ``detached'' from this solution. A side effect is that schemata are not easy to find; they are constantly created and destroyed by mutations.


\subsection{Population Variable}
Smaller population sizes mean that the competition between individuals diminishes. This translates to populations that settle more easily into average equilibria. Figure (\ref{graph:graph_population}) shows that after finding a suitable equilibrium around generation 2,500, individuals don't feel like changing much. This is a random sample of individuals from generation 3,000 to 4,000:
\begin{center}
00000110100000010101110000101010\\
00000110100000010101110000101010\\
00000110100000010001110000100010\\
00000110100000010101110000101010\\
00000110000000010101110000101010.
\end{center}
We can definitely spot schemata in these individuals, but they are not as good as the ones we found in the original run. Considering the fact that we halved the population, individuals should only be able to reach half the fitness of the original individuals from Section (\ref{sec:long_running_behaviour}). These big individuals barely outperformed the original individuals with small history sizes, and were not able to reach the equilibrium found by the original individuals with big history sizes.

It's also worth noting that most of these individuals were able to incorporate the Tit-for-Tat strategy. But ``to defect'' still dominates ``to cooperate''.
Coming back to our fellow ``0100'' individual: this variation also found a small period of time where this strategy dominated the population. From generation 523 to 962, it is the fittest individual (including some of its variations here and there). If we compare the average fitness (around 4,500) of the population at this time with the fitness (around 9,000) of the original population when this same individual was found, we confirm empirically our statement about the performance difference between the two variations.


\subsection{Crossover Rate Variable}
The crossover operation is considered the most important operation in genetic algorithms; it allows the algorithm to explore existing and new solution spaces \cite{grefenstette96}. Lowering the crossover rate too much can lead to the undesirable effect of solution stagnation. On the other hand, a high crossover rate can discard high-performance solution spaces too early. This time we have adjusted the crossover rate to a value of 0.4; Figure (\ref{graph:graph_crossover}) shows the results.

The low crossover rate allowed a good solution space to fully develop: the population's average fitness almost always stays between 10,000 and 12,000. In Section (\ref{sec:long_running_behaviour}) we saw that the average fitness managed to stay steadily above 12,000 after generation 3,500, where the fittest individuals with big history sizes managed to develop. In comparison, here the average fitness was never able to go above 12,000, and fittest individuals with big history sizes started to appear regularly after generation 1,500 (2,000 generations earlier). It's surprising to see that individuals with the maximum allowed history size (6) became the fittest individuals regularly after generation 2,500.


\section[The Neighbourhood Approach]{The Neighbourhood Approach\footnote{Answer to question 7.}}
The usage of neighbourhoods allows the genetic algorithm to develop different solution spaces at the same time. This means that different neighbourhoods are able to find fit individuals with completely different strategies. Our experiments show that neighbourhoods tend to develop more slowly in comparison to the all-to-all interaction but, on the other hand, stabilize more once a good strategy is found.

An interesting observation is that good solution spaces, or schemata, are much harder to define globally, but easier locally\footnotemark. If we compare these good local schemata to the schemata in Section (\ref{sec:long_running_behaviour}), however, few similarities are found.

Our chromosome representation also has an interesting effect on how neighbours interact. Because of the existence of hidden genes, individuals that were deemed unfit by their neighbourhood may appear in some other neighbourhood in later generations. This is explained by the fact that hidden genes may become active through a random mutation or a favourable crossover.

Because of this ``gliding effect'', finding a stationary state where every neighbourhood had a different strategy was not possible. We were, however, able to find states where some particular strategy was present in multiple non-adjacent neighbourhoods.

Throughout each run, a clear pattern did not emerge. It was possible, nevertheless, to see that as soon as a neighbourhood found a fittest individual with fitness above all other individuals in the population, it started to disseminate descendants considerably fast into other neighbourhoods. This dissemination of genetic code was able to reach almost all neighbourhoods sooner or later.

\footnotetext{We define the global context as the overall equilibrium found in the entire population, while the local context as the equilibrium found in single neighbourhoods.}


\section[Discussion]{Discussion\footnote{Answer to question 8.}}
The algorithm was able to demonstrate that, in a specified time interval, it was possible to find equilibria in the population. These equilibria, nonetheless, were always fragile; they are easily disrupted by mutating individuals and crossovers. We were also able to find that the ``disruptability\footnotemark'' of good individuals is inversely proportional to the history size of their chromosomes.

An important, and still unanswered, question is: ``What is the winning strategy to choose if we are to select only one?'' We were expecting the genetic algorithm to find this answer, but we only got solutions that work well for a particular population. The way fitness is calculated impacts the results obtained. Our fitness function was always changing, and so it is expected that the solutions constantly change too.

\footnotetext{We'll define disruptability as ``how easy it is to be disrupted''.}


\section[Conclusion]{Conclusion\footnote{Answer to question 9.}}
A genetic algorithm was developed to try to find good strategies for the Iterated Prisoner's Dilemma. In one instance, different parameters were tested to see if the algorithm behaved correctly and, in a second, we observed how it reacted to change. A chromosome representation different from the usual one was presented. It consists of chromosomes of fixed length with different sections that describe the history length and the strategy to play. This representation granted individuals some interesting characteristics that had an impact on the tests.

We were also able to confirm the initial statement that defection would dominate cooperation. An individual with this characteristic was found in almost all variations of the algorithm. This individual was
\begin{center}
 0100.
\end{center}
The strategy this individual plays always defects unless the last two plays of the opponent were defect and cooperate, in that order. With this strategy the population was able to find an equilibrium, at the expense of a relatively low average fitness.

The representation that we used also shed some light on the history size. Genetic code that chooses a small history size develops faster but is more easily disrupted. Genetic code with big history sizes appears at much later stages of the algorithm, but once developed it is hard to disrupt; it remains in the population for longer periods of time.


%Appendix
\pagebreak
\appendix
\appendixpage
\section{Graphical Representation of Different Parameters}

\begin{figure}[ht]
  \begin{center}
    \input{figures/modified_mutation.tex}
    \caption{Results with high mutation rates (0.1).}
    \label{graph:graph_mutation}
  \end{center}
\end{figure}

\begin{figure}[ht]
  \begin{center}
    \input{figures/modified_population.tex}
    \caption{Results with half the population (50 individuals).}
    \label{graph:graph_population}
  \end{center}
\end{figure}

\begin{figure}[ht]
  \begin{center}
    \input{figures/modified_crossover.tex}
    \caption{Results with low crossover rate (0.4).}
    \label{graph:graph_crossover}
  \end{center}
\end{figure}

%Bibliography
\pagebreak
\begin{thebibliography}{9}

\bibitem{axelrod80}
  Axelrod, R. (1980). \emph{Effective Choice in the Prisoner's Dilemma.} Journal of Conflict Resolution, 24(1), 3-25.

\bibitem{fogel93}
  Fogel, D. B. (1993). \emph{Evolving Behaviors in the Iterated Prisoner's Dilemma.} Evolutionary Computation, 1(1), 77.

\bibitem{grefenstette96}
  Grefenstette, J. J. (1986). \emph{Optimization of Control Parameters for Genetic Algorithms.} IEEE Transactions on Systems, Man, and Cybernetics, 16(1), 122-128.

\bibitem{mitchell96}
  Mitchell, M. (1996). \emph{An introduction to genetic algorithms.} Cambridge, Mass. : MIT Press, c1996.

\end{thebibliography}


\end{document}
