\section{Experiments}
\label{sec:experiments}

\subsection{Experimental setup}
To determine whether an evolved neural network could improve on our simple rule-based policy, we ran two experiments with the NEAT algorithm.
In the first experiment, a population of 20 was used, divided into 10 species. Each member of the population was pitted against a default agent that focuses mainly on capturing control points, fetching ammunition whenever an agent's ammo is zero, and some basic combat behavior. To create opponents of several difficulty levels, the default agent was altered so that only 1, 2 or 3 of its agents would move. Each member of the population was evaluated by playing three games against each of the three opponents and averaging the scores. We planned to add stronger opponents as the policy improved, but unfortunately the policy never reached that point.
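The evaluation scheme above (three games against each of three opponents, averaged) can be sketched as follows. This is a minimal illustration, not our actual code: the `play_game` function is a hypothetical stand-in for running one full domination match and returning the final score.

```python
import random

def play_game(chromosome, opponent):
    """Hypothetical placeholder: runs one domination game between the
    chromosome's network and the given opponent, returning the score.
    Here it just returns a reproducible pseudo-random score."""
    random.seed(hash((chromosome, opponent)) % 2**32)
    return random.uniform(0, 1000)

def fitness(chromosome, opponents, games_per_opponent=3):
    """Average score over three games against each opponent agent."""
    scores = [play_game(chromosome, opp)
              for opp in opponents
              for _ in range(games_per_opponent)]
    return sum(scores) / len(scores)

# Three default agents of increasing difficulty (1, 2 or 3 moving agents).
opponents = ["agent1", "agent2", "agent3"]
score = fitness("chromosome-0", opponents)
```

Averaging over nine games per chromosome reduces, but does not remove, the variance of a single game's score.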

A second experiment was run with a population of 60. The population of 20 was trained for 350 generations and the population of 60 for 130 generations. For a network with 44 inputs and 17 outputs one would expect a larger population, since many more network topologies are possible. However, a larger population requires more computation per generation, resulting in fewer generations in the same amount of time. 

% Bla bla bla connections and nodes xor/polegames
For the experiments, the probability of adding a connection was $0.05$ and the probability of adding a node was $0.03$. These low values protect innovations from being disrupted by further structural changes. However, they also make new structural innovations rare, so a larger number of generations is needed for innovations to be found. The probability of mutating the weights was set to $0.2$ with a mutation strength of $0.9$. The combination of these factors ensures that the current network is improved before its structure is altered to allow new innovations: structural changes unfold over a large number of generations, whereas weight mutations happen far more often. For a full overview of the parameters used, see appendix \ref{app:parameters}.
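The interplay of these probabilities can be sketched as below. This is an assumption-laden illustration of NEAT-style mutation, not the actual implementation; in particular, the add-node mutation (which splits an existing connection) is omitted, and the genome is simplified to a flat list of connection weights.

```python
import random

# Structural-mutation probabilities as used in the experiments; the
# weight-mutation interpretation (perturbation scale 0.9) is an assumption.
P_ADD_CONNECTION = 0.05
P_ADD_NODE = 0.03
P_MUTATE_WEIGHTS = 0.2       # chance that a genome's weights are perturbed
WEIGHT_MUTATION_POWER = 0.9  # scale of the Gaussian perturbation

def mutate(genome, rng):
    """Apply simplified NEAT-style mutations to a list of connection weights."""
    genome = list(genome)
    if rng.random() < P_MUTATE_WEIGHTS:
        # Frequent: refine the current network's weights.
        genome = [w + rng.gauss(0, WEIGHT_MUTATION_POWER) for w in genome]
    if rng.random() < P_ADD_CONNECTION:
        # Rare: add a new connection gene (structural innovation).
        genome.append(rng.uniform(-1, 1))
    # Adding a node (probability P_ADD_NODE) would split an existing
    # connection into two; omitted in this sketch.
    return genome

rng = random.Random(42)
child = mutate([0.5, -0.3, 0.1], rng)
```

Because weight mutation fires roughly four times as often as adding a connection, most generations tune the existing topology rather than extend it.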



% Bla bla bla evalueren van beste uit de generatie tegen agents en averagen
To evaluate each generation, the best chromosome of that generation was pitted multiple times against each opponent agent, and the scores of these games were averaged. Another possibility would have been to use the average fitness of the population in each generation. However, that measure would appear more skewed, because exploration can produce unwanted innovations that drag the average down. This is especially the case for larger networks such as the one needed for the domination game.
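The difference between the two evaluation options can be illustrated with toy numbers (the fitness values below are made up for illustration): a single poorly exploring individual lowers the population average while leaving the best-of-generation measure untouched.

```python
# Per-individual fitness for two hypothetical generations.
population_fitness = [
    [0.30, 0.35, 0.20, 0.45],  # generation 1
    [0.32, 0.50, 0.10, 0.48],  # generation 2: one poor explorer (0.10)
]

# Option used in the experiments: best chromosome per generation.
best_per_gen = [max(gen) for gen in population_fitness]

# Alternative: population-average fitness, skewed by exploration.
avg_per_gen = [sum(gen) / len(gen) for gen in population_fitness]
```

Here generation 2 genuinely improves by the best-of-generation measure, while its average is pulled down by the single explorer.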


\subsection{Results}
As can be seen in figures \ref{fig:fitness20} and \ref{fig:fitness60}, learning was not very effective. The fitness rises quickly over the first few generations, from $0.3365$ in the first generation to $0.49775$ in the 14th. After this, the fitness fluctuates around the average fitness of $0.4449$. 

\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{img/pop20exp}
\caption{Fitness over 350 generations using a population of 20}
\label{fig:fitness20}
\end{figure}

In this graph not much learning can be discerned, which is partially due to the high variance in the signal.
When every five consecutive generations are averaged, a much clearer picture emerges, see figure \ref{fig:fitnessbins20}. Here we clearly see that the first 150 generations lie mostly below the average fitness and the last generations above it. This indicates that the agents have learned to play the game somewhat better.
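The binning used for figure \ref{fig:fitnessbins20} amounts to a block average over non-overlapping windows of five generations, which can be sketched as follows (the fitness values here are illustrative, not the experimental data):

```python
def bin_average(fitness, bin_size=5):
    """Average each run of `bin_size` consecutive generations."""
    return [sum(fitness[i:i + bin_size]) / len(fitness[i:i + bin_size])
            for i in range(0, len(fitness), bin_size)]

# Ten generations of illustrative scores, binned into two blocks of five.
fitness = [300, 400, 500, 400, 400, 500, 500, 400, 500, 600]
bins = bin_average(fitness)  # → [400.0, 500.0]
```

Block averaging trades temporal resolution for a smoother curve, which makes the slow upward trend visible through the per-generation noise.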

\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{img/pop20bin5}
\caption{Fitness of bins containing 5 generations each using a population of 20}
\label{fig:fitnessbins20}
\end{figure}

The winner of the final generation achieved the average scores shown in table \ref{tab:scores}. Against a single default agent, the trained agent is capable of winning, but only barely. Observing the strategy of the winning chromosome makes clear that it applies a ``suicide strategy'' in which all agents move directly to a control point. This kind of strategy is successful against opponents that are weak in combat, such as the single default agent, which can only shoot once every turn.

\begin{table}[ht]
	\centering
\caption{Results of the winning chromosome of generation 350 against each opponent}
\begin{tabular}{ll}
\hline
Opponent & Average score \\
\hline
Agent 1 & 520 \\
Agent 2 & 468.5 \\
Agent 3 & 388.5 \\
\hline
\end{tabular}
\label{tab:scores}
\end{table}

In figures \ref{fig:fitness60} and \ref{fig:fitnessbins60} we see the same pattern as with the population of 20: a steep learning curve in the first few generations, after which little new is learned over the remaining generations.

\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{img/pop60exp}
\caption{Fitness over 130 generations using a population of 60} 
\label{fig:fitness60}
\end{figure}

\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{img/pop60bin5}
\caption{Fitness of bins containing 5 generations each using a population of 60}
\label{fig:fitnessbins60}
\end{figure}

% Problems
% * Too many inputs/outputs resulting in too many possible networks
% * Too low probability for adding connections and nodes (probabilities used were for networks with 2/3 inputs)
% * Evaluation done over too small amount of games (variance in game-score too high)
% * Population too small for proper speciation
% * 