%!TEX root = /Users/dejean/Documents/ITU/2aar/MAIG-E2011/exam_project/report/simmar_report.tex
\section{NEAT}
\subsection{Method}
NeuroEvolution of Augmenting Topologies (NEAT), originally developed by Kenneth Stanley \cite{neat_original}, is a method for evolving neural networks whose topologies, as well as connection weights, change across generations. NEAT addresses three challenges that arise when doing this.

The first is performing sensible crossover between chromosomes with differing topologies. NEAT solves this problem by encoding a chromosome's neurons and connections as genes.

Each gene that is created is given a universally incrementing, unique integer number as a marker. This marker can then be used at any time to identify shared genes between two chromosomes by looking at which genes have identical markers.

Crossover reproduction can then take place by randomly picking one gene from each shared pair to carry on to the offspring, along with all the non-shared genes of the fitter parent.
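This crossover scheme can be sketched as follows; the \texttt{Gene} record and class names below are illustrative stand-ins, not ANJI's actual data structures:

```java
import java.util.*;

// Sketch of NEAT-style crossover using innovation markers.
public class NeatCrossover {
    record Gene(int innovation, double weight) {}

    // Offspring: shared genes (same marker) are picked randomly from
    // either parent; non-shared genes come from the fitter parent.
    static List<Gene> crossover(List<Gene> fitter, List<Gene> other, Random rng) {
        Map<Integer, Gene> otherByMarker = new HashMap<>();
        for (Gene g : other) otherByMarker.put(g.innovation(), g);

        List<Gene> child = new ArrayList<>();
        for (Gene g : fitter) {
            Gene match = otherByMarker.get(g.innovation());
            if (match != null && rng.nextBoolean()) {
                child.add(match);   // shared pair: the other parent's copy won
            } else {
                child.add(g);       // shared pair won by fitter parent, or non-shared
            }
        }
        return child;
    }

    public static void main(String[] args) {
        List<Gene> a = List.of(new Gene(1, 0.5), new Gene(2, -0.3), new Gene(4, 0.9));
        List<Gene> b = List.of(new Gene(1, 0.1), new Gene(3, 0.7));
        System.out.println(crossover(a, b, new Random(0)));
    }
}
```

Note that the offspring always has the fitter parent's topology; only the weights of shared genes vary.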

The second problem that NEAT solves is protecting evolving topologies that have yet to mature, which it does through speciation.

The markers on individual genes are again used to determine how similar two chromosomes are, by measuring how much of their topology has a shared origin and the average difference in their connection weights. If this distance falls within a preset threshold, the two are considered to be of the same species.

Species size is controlled by dividing the fitness of each chromosome by the size of its species, then comparing the average fitness of a species with the average fitness of all species to determine whether the species grows or shrinks.

The entire population is then replaced by letting a pre-determined percentage of the best chromosomes in each species mate until they have filled the new size of their respective species.
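The compatibility test and fitness sharing described above can be sketched as follows; the distance coefficients and the weight-map representation are illustrative placeholders rather than ANJI's actual property values:

```java
import java.util.*;

// Sketch of NEAT speciation: a compatibility distance built from
// shared-marker genes, plus fitness sharing by species size.
public class Speciation {
    // Distance = (fraction of non-shared genes, weighted by c1)
    //          + (average weight difference over shared genes, weighted by c2).
    static double distance(Map<Integer, Double> a, Map<Integer, Double> b,
                           double c1, double c2) {
        int matching = 0;
        double weightDiff = 0.0;
        for (Map.Entry<Integer, Double> e : a.entrySet()) {
            Double w = b.get(e.getKey());
            if (w != null) { matching++; weightDiff += Math.abs(e.getValue() - w); }
        }
        int disjoint = a.size() + b.size() - 2 * matching;
        int n = Math.max(a.size(), b.size());
        double avgDiff = matching > 0 ? weightDiff / matching : 0.0;
        return c1 * disjoint / n + c2 * avgDiff;
    }

    // Fitness sharing: each member's fitness is divided by species size.
    static double sharedFitness(double rawFitness, int speciesSize) {
        return rawFitness / speciesSize;
    }

    public static void main(String[] args) {
        Map<Integer, Double> a = Map.of(1, 0.5, 2, -0.3, 4, 0.9);
        Map<Integer, Double> b = Map.of(1, 0.1, 3, 0.7);
        double threshold = 1.0;   // preset speciation threshold
        System.out.println("same species: " + (distance(a, b, 1.0, 0.4) < threshold));
    }
}
```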

The final challenge is discouraging topological complexity, so that of two equally performing chromosomes, the less complex one will prevail.

NEAT accomplishes this simply by having all chromosomes in the population start out without any hidden neurons. This way, any added complexity must immediately justify its inclusion through increased fitness.

\subsection{Implementation}

Instead of writing the source code for the NEAT method from scratch, we used an existing framework called Another NEAT Java Implementation (ANJI) \cite{anji}. 

This allowed us to proceed directly to integrating the method into our Mario agent, and also offered us some powerful extra features to help with analysis, such as extensive and persistent logging of evolutionary progress.

The framework was integrated into Mario at two points. The first one was the agent itself, which is given a neural network at creation to query for actions. The other was a static wrapper function for the Mario game as a whole, intended to keep the game context open and usable by ANJI for the duration of an evolutionary run. 

This was necessary as the design of ANJI required it to run on top of Mario to control chromosome testing. Since ANJI also handles storage and loading of all neural networks that are created, the NEAT agent itself does not by default contain any neural network at all. 

As a result, the desired neural network must be loaded through ANJI and then passed on to the NEAT agent before play begins.
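A minimal sketch of this arrangement, assuming a hypothetical \texttt{Network} interface standing in for the activation object produced by ANJI (the names here are illustrative, not ANJI's actual API):

```java
// Hypothetical sketch: a Mario agent that is handed a network at
// creation and queries it for actions. Not ANJI's actual classes.
public class NeatAgentSketch {
    interface Network { double[] activate(double[] inputs); }

    private final Network net;
    NeatAgentSketch(Network net) { this.net = net; }  // network injected before play

    // Threshold the network's continuous outputs into boolean actions.
    boolean[] getAction(double[] inputs) {
        double[] out = net.activate(inputs);
        boolean[] action = new boolean[out.length];
        for (int i = 0; i < out.length; i++) action[i] = out[i] > 0.5;
        return action;
    }

    public static void main(String[] args) {
        // Dummy network that always presses the second and fourth buttons.
        Network dummy = in -> new double[] {0.0, 1.0, 0.0, 1.0, 0.0};
        NeatAgentSketch agent = new NeatAgentSketch(dummy);
        System.out.println(java.util.Arrays.toString(agent.getAction(new double[4])));
    }
}
```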

Evolution works by running ANJI as normal, with a custom fitness function specified in the properties for the evolutionary run. 

This function determines the fitness of a particular chromosome by taking a pre-determined list of levels, attempting each one a number of times at pre-specified difficulty levels, and recording the fitness of each attempt as calculated by Mario's internal fitness function.

The function then returns the average fitness of these attempts as the fitness of the chromosome.
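The fitness test described above amounts to averaging per-attempt scores over a fixed set of seeds and difficulties; a sketch, with the actual play-through stubbed out by a callback:

```java
import java.util.function.ToDoubleBiFunction;

// Sketch of the chromosome fitness test: attempt each level seed at
// each difficulty and average the per-attempt fitness. The playLevel
// callback stands in for a real Mario simulation.
public class FitnessTest {
    static double evaluate(int[] seeds, int[] difficulties,
                           ToDoubleBiFunction<Integer, Integer> playLevel) {
        double total = 0.0;
        int attempts = 0;
        for (int seed : seeds) {
            for (int difficulty : difficulties) {
                total += playLevel.applyAsDouble(seed, difficulty);
                attempts++;
            }
        }
        return total / attempts;  // average fitness over all attempts
    }

    public static void main(String[] args) {
        // Stubbed play-through: pretend harder levels score lower.
        double avg = evaluate(new int[] {13, 42, 115}, new int[] {0, 1},
                              (seed, diff) -> 4000.0 - 1000.0 * diff);
        System.out.println(avg);  // six attempts averaged
    }
}
```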

\subsection{Parameters}

Once implemented, the settings used by ANJI remained unchanged for the rest of development. We settled on using the default properties for speciation, mutation and topology that were included for ANJI's XOR example, as they represented the most typical form of NEAT that we were familiar with. 

The only exceptions were the settings for input size, output size, population, generation limit and fitness function to use. These were naturally changed to reflect our NEAT agent.

The fitness is ultimately calculated by Mario's internal fitness function. This function is given a SystemOfValues object when called that contains weights for each event in the game that contributes to fitness.

For evolution, we used the default values defined in the subclass MarioCustomSystemOfValues. While these values were not optimized for the goals that we had set for the NEAT agent, we felt that they would evaluate fitness sufficiently well to guide evolution in the right direction.
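As an illustration of this style of event-weighted fitness (the events and weights below are made-up examples, not the MarioCustomSystemOfValues defaults):

```java
// Illustrative weighted fitness in the spirit of Mario's
// SystemOfValues: each in-game quantity contributes to fitness with
// a preset weight. Weights here are arbitrary examples.
public class WeightedFitness {
    static double fitness(int distance, int coins, int kills, int timeLeft,
                          double wDistance, double wCoin, double wKill, double wTime) {
        return wDistance * distance + wCoin * coins + wKill * kills + wTime * timeLeft;
    }

    public static void main(String[] args) {
        // Example weights: distance dominates, smaller bonuses for the rest.
        double f = fitness(2048, 12, 3, 150, 1.0, 16.0, 42.0, 8.0);
        System.out.println(f);
    }
}
```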

We arbitrarily set the maximum fitness value of our fitness function to 15000 as our best estimate of the upper limit of what could reasonably be achieved in a good play-through.

The primary use of this value is setting the threshold at which a chromosome is considered to represent a solution, and setting the stopping criterion for an evolutionary run when a chromosome reaches optimal fitness.

Since our aim was instead to have ANJI continue evolution until reaching the generation limit and then report the best-performing chromosome, we did not consider it a priority to find a more accurate value for the maximum fitness at this time.

\subsection{Learning}

\subsubsection{First Evolutionary Run - MarioNEATfirst100}

Our initial approach was to have the neural network take a set of four binary inputs, each indicating whether one of the four cells adjacent to the agent contained an enemy or obstacle, and directly output a set of five binary values as the actions the agent should return.

The primary goal of this run was to evaluate whether NEAT worked with Mario at all.

For this run, the fitness test consisted of a single attempt at level seed 0, with a difficulty level of 0. The run used a population of 100 and had a generation limit of 100 as well, from which the name of the run was derived.

\begin{figure}[htp]
	\centerline{\includegraphics[width=0.8\columnwidth]{images/neat_learning_fitness_first100.png}}
	\caption{NEAT agent fitness during the first evolutionary run (MarioNEATfirst100)}
	\label{neat_learning_fitness_first100}
\end{figure}

As a result of the evolution, our NEAT agent was able to finish the level that it had evolved on, though it quickly became stuck on any other level or difficulty. 

Analysis of its behavior revealed that the agent had simply learned to constantly return the move-right and jump commands.

Since the jump command must be briefly released after a jump finishes before another jump can begin, the agent had also learned to skip the jump command in response to terrain and obstacles. On the particular level it evolved on, this happened to make it briefly release the jump reasonably soon after every jump.

On any other level, the specific combination of inputs was not presented at the right times, so the agent quickly became stuck, unable to jump over obstacles blocking its way.

\subsubsection{Second Evolutionary Run - MarioNEATfinal50}

Iterating further on our approach to the input, we came up with a new set of six binary inputs that would give the agent better awareness of the level, as well as of what it was capable of doing.

The inputs consisted of binary evaluations of the following questions:

\begin{enumerate}
	\item Can the agent jump, or if it is jumping already, can it ascend higher?
	\item Is there an obstacle at the same height as the agent in any of the five cells ahead of it?
	\item Is there a bottomless pit in any of the five columns ahead of the agent?
	\item Is there a bottomless pit or an enemy that cannot be stomped underneath the agent's current position?
	\item Can the agent shoot a fireball, and is there an enemy lined up for a hit in front of it?
	\item Is there an enemy at the same height as the agent in any of the three cells ahead of it?
\end{enumerate}
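Two of these inputs can be sketched against a simplified level-scene grid; the grid encoding, cell constants, and agent position used here are assumptions for illustration, not the benchmark's actual observation API:

```java
// Sketch of deriving two of the six binary inputs from a simplified
// level-scene grid (rows are heights, columns are positions ahead).
public class InputSketch {
    static final int EMPTY = 0, OBSTACLE = 1, PIT = 2;

    // Input 2: obstacle at the agent's height in any of the five cells ahead.
    static boolean obstacleAhead(int[][] grid, int row, int col) {
        for (int c = col + 1; c <= col + 5 && c < grid[row].length; c++)
            if (grid[row][c] == OBSTACLE) return true;
        return false;
    }

    // Input 3: bottomless pit in any of the five columns ahead,
    // checked on the grid's bottom row.
    static boolean pitAhead(int[][] grid, int row, int col) {
        int bottom = grid.length - 1;
        for (int c = col + 1; c <= col + 5 && c < grid[bottom].length; c++)
            if (grid[bottom][c] == PIT) return true;
        return false;
    }

    public static void main(String[] args) {
        int[][] grid = {
            {0, 0, 0, 1, 0, 0},   // agent's row: obstacle three cells ahead
            {0, 0, 0, 0, 2, 0},   // bottom row: pit four columns ahead
        };
        System.out.println(obstacleAhead(grid, 0, 0) + " " + pitAhead(grid, 0, 0));
    }
}
```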

The test for this run was broader than the first one. A set of three level seeds, 13, 42 and 115, would be used, with each one being attempted on both difficulty level 0 and difficulty level 1, for a total of six attempts. 

\begin{figure}[htp]
	\centerline{\includegraphics[width=0.8\columnwidth]{images/neat_learning_fitness_final50.png}}
	\caption{NEAT agent fitness during the second evolutionary run (MarioNEATfinal50)}
	\label{neat_learning_fitness_final50}
\end{figure}

Since the maximum and average fitness had reached a plateau early in the first evolutionary run, the generation limit was reduced to fifty for this run.

The NEAT agent showed obvious improvement after this run, not only completing the levels it evolved on, but also levels that it had not encountered before.

The agent did not manage to complete any levels on the harder difficulties, but did make decent progress on several of them.

There were still a few stumbling points. The places where the agent gets caught are dead ends in the level layout: the agent does not plan at all, and as a result it never realizes that it must backtrack to get out of a dead end and proceed.

The agent also has trouble dealing with the vastly increased number of enemies on difficulty level 1. The strategy it has learned consists of frequent jumping, just like in the first run, with the addition of rapidly firing fireballs by virtue of starting with the ability to do so.

This is extremely effective at making up for the agent's unwillingness to otherwise attempt to evade enemies, but it cannot keep up with the enemy numbers on the higher difficulty, and the ability to shoot is quickly lost entirely due to contact with enemies.

\subsection{Conclusion}\label{neat_conclusion}

There is still room for optimization in our NEAT agent, primarily in the set of inputs. The most beneficial change would be to increase the agent's awareness of enemies around it so that it can dodge out of the way. 

Taking inspiration from the REALM agent \cite{realm}, one way of doing this is to create new inputs representing the presence of enemies in each quadrant of the game screen, with the assumption that the agent is always in the middle.
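A minimal sketch of such quadrant inputs, assuming screen coordinates with the agent at the centre and the y axis increasing downward (all names here are hypothetical):

```java
// Hypothetical sketch of quadrant-based enemy inputs: four booleans
// marking enemy presence per screen quadrant relative to the agent.
public class QuadrantInputs {
    // Returns {upper-left, upper-right, lower-left, lower-right};
    // each enemy is an (x, y) screen position.
    static boolean[] enemyQuadrants(int[][] enemies, int centerX, int centerY) {
        boolean[] q = new boolean[4];
        for (int[] e : enemies) {
            int idx = (e[1] >= centerY ? 2 : 0) + (e[0] >= centerX ? 1 : 0);
            q[idx] = true;
        }
        return q;
    }

    public static void main(String[] args) {
        int[][] enemies = { {30, 10}, {5, 40} };  // one upper-right, one lower-left
        System.out.println(java.util.Arrays.toString(enemyQuadrants(enemies, 16, 16)));
    }
}
```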

Another possible optimization is providing input about enemies directly below Mario to allow for deliberate stomps, rather than just coincidental ones.

Yet another is to implement inputs that can direct Mario toward collectibles such as coins and power-ups, as well as destructible blocks.

Moving away from the input representation, further optimization could perhaps be achieved by tweaking the weights in the fitness calculation to encourage the behavior of the agent along more specific paths.

Lastly, the parameters for ANJI could be tweaked to decrease the delta by which connection weights are mutated, making for more fine-grained mutations.