\section{Analysis}
\label{sec:analysis}

Based on our tournament results, and on the successive improvement of the rule-based agent over its predecessors across multiple stages of development, we conclude that the sets of actions and inputs we defined are effective. Using only our simple rule-based system, our agents won nearly every match against other teams' agents with a score of a thousand points to zero.

In the hope of creating a better strategy, we applied the NEAT algorithm to the problem, as described above. Figure \ref{fig:fitnessbins20} shows a slight improvement in the population's performance over time. However, as listed in table \ref{tab:scores}, the learned agents still struggle to defeat even the default agent provided with the DG.
When we stopped the algorithm due to the project's time constraints, the agents had learned to move to control points successfully, but did little else.

Despite the success of our rule-based system, the learning system built with the same state-action representation does not come remotely close to the rule-based system's performance.
There are several issues that we expect have caused this, but they all stem from the same problem: evaluating the fitness function takes a long time, so only a very limited number of games could be played. As a consequence, the population size of the genetic algorithm had to be kept rather small. A smaller population allows less variety, and the search therefore gets stuck in local optima more often.

Furthermore, only a small number of generations could be run. Even for simple problems such as learning the XOR function, NEAT can take many generations to converge to the optimal solution \cite{NEAT}. It is quite natural that for our problem, which is far more complex, the strategy does not yet show any particularly intelligent behaviour.
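As a point of reference, the XOR benchmark scores a candidate network on all four input/output pairs. A minimal sketch of such a fitness function, assuming a hypothetical \texttt{net} object with an \texttt{activate} method returning a single output value (as in typical NEAT implementations):

```python
# Hypothetical XOR fitness: negated sum of squared errors over
# the four input/output pairs, so that higher fitness is better
# and a perfect network scores 4.0.
XOR_CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def xor_fitness(net):
    # `net.activate(inputs)` is assumed to return the network's
    # scalar output for one input pair.
    error = sum((net.activate(x) - y) ** 2 for x, y in XOR_CASES)
    return 4.0 - error
```

Even with a fitness signal this clean and cheap, NEAT typically needs many generations; our fitness signal is far noisier and orders of magnitude more expensive per evaluation.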
%Seeing that after a few hundred generations the algorithm has built some complex networks, despite the small increase in performance,
We think that one of the main issues with our approach is the variance of the fitness function. Since each chromosome can only play against a limited number of parasites, on different randomly generated maps and against agents with random elements, the fitness estimate is unstable. In one generation the fitness function may judge a certain strategy to be good, leading it to produce more offspring in the next generation. If, however, the high fitness was due only to noise and its true average fitness is actually lower, this chromosome will crowd out parts of an actually better strategy. The noise in the fitness function can thus cause strong instability in the convergence of the genetic algorithm, which is what we believe we see in the graphs above.
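This selection instability can be illustrated with a small simulation. The code below is a sketch under assumed numbers (true win rates of 0.6 and 0.5, Gaussian evaluation noise with standard deviation 0.2), not a model of our actual agents; it estimates how often a single noisy evaluation ranks the weaker strategy above the stronger one:

```python
import random

def noisy_fitness(true_fitness, noise_sd, rng):
    # One game against a random parasite on a random map:
    # the observed score is the true strength plus noise.
    return true_fitness + rng.gauss(0, noise_sd)

rng = random.Random(0)
better, worse = 0.6, 0.5  # assumed true average win rates

trials = 10_000
wrong = sum(
    noisy_fitness(worse, 0.2, rng) > noisy_fitness(better, 0.2, rng)
    for _ in range(trials)
)
# With these settings, roughly a third of single-evaluation
# comparisons rank the weaker strategy first.
print(wrong / trials)
```

Averaging fitness over several games per chromosome would reduce this error rate, but each extra game multiplies the already long evaluation time.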


Lastly, different parameter settings that influence the crossover and mutation rates could also heavily affect the results. Here too, however, the time constraints made it impossible to do any testing.
It is worth noting that the limited nature of our representation does not seem to be causing any issues. Basic behaviour such as grabbing ammunition, which should improve performance significantly, is theoretically learnable within our framework; however, the agents in the last generation had yet to discover it.

