\chapter{Theory and Methodology} \label{chap:theorymethodology}
This chapter explains the theoretical fundamentals and concepts the solution is based on. The focus is on Potential Fields, Evolutionary Algorithms, Multi-Agent Systems and Multi-Objective Optimization.

\section{Potential Fields} \label{sec:pf}
Potential Fields, also called Artificial Potential Fields (APF), are a method originally used for maneuvering robots between obstacles \citep{khatib1986real}. 
In a topological space the method creates attracting and repelling fields, typically centred around a point. The fields can be thought of as magnetic charges acting on a charged particle, each one either attracting or repelling it. The sum of all the fields, given the particle's position, determines which direction the particle moves. Figure \ref{fig:repelling_pf} shows a repelling Potential Field, where the repelling force is stronger closer to the centre of the field.

\begin{figure}[h]
	\centering
	\includegraphics{img/image006.jpg}
	\caption[A repelling Potential Field]{A repelling Potential Field \citep{safadi2007}.}
	\label{fig:repelling_pf}
\end{figure}

Potential Fields have mostly been used to control robots because of their topological nature. However, they have also been applied in other domains where the problem can be solved by simulating topology, such as the deployment of mobile networks \citep{zavlanos2007potential} and RTS games \citep{hagelbackmulti}.

The force (attractive or repulsive) from a Potential Field decreases the further away from the centre you get. It can decrease in different ways: linearly, exponentially or discretely. The different variations represent different desired behaviours. For example, a discretely decreasing field could be used to keep the particle outside a strict boundary, which is useful for avoiding obstacles because you do not want to be affected by an obstacle unless you are close enough to crash into it. A linearly decreasing field, on the other hand, is better suited when several units are to be considered at once. When deciding where to move, you will want to consider all enemy and friendly units and give more weight to the units that are closer to you. This is useful because enemy units close to you are more likely to be able to attack you, and friendly units close to you are more likely to be able to protect you. Enemy units that are further away might still be able to attack you if they move closer, however, and thus have to be taken into consideration as well. 
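These decay profiles are easy to express as functions of the distance to a field's centre. The Python sketch below is purely illustrative; the strength, range and decay parameters are hypothetical values, not ones used in this work:

```python
import math

# Illustrative decay profiles for a Potential Field; `strength`,
# `max_range`, `decay` and `boundary` are hypothetical parameters.

def linear_field(distance, strength=100.0, max_range=50.0):
    """Strength falls off linearly with distance, reaching zero at max_range."""
    return max(0.0, strength * (1.0 - distance / max_range))

def exponential_field(distance, strength=100.0, decay=0.1):
    """Strength falls off exponentially with distance."""
    return strength * math.exp(-decay * distance)

def discrete_field(distance, strength=100.0, boundary=10.0):
    """Full strength inside a strict boundary, zero outside.
    Useful for obstacles that only matter when you are close."""
    return strength if distance <= boundary else 0.0
```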

When representing Potential Fields, the strength value from each field can be assigned to each pixel. An example of how this could look is shown in Figure \ref{fig:two_pf}, where both a negative and a positive linear Potential Field are shown. A unit on a pixel position will move towards the surrounding pixel with the highest value. One can also coarsen the grid and let one cell represent several pixels; this is a good way to lessen the computational power needed \citep{hagelbackmulti}. One does not have to represent such a grid explicitly: the distances between the agent and the fields can be used to create a resultant vector directly. If multiple units use the same fields, e.g. if the fields are set by a centralized intelligence, representing them in a grid makes sense. The grid would be calculated beforehand and each unit would only need to check the values of the cells closest to it. If each unit has its own Potential Fields, calculating them this way would be impractical, as each unit would have to represent the whole grid before deciding what to do. In that case using the relative distances is the better choice. 
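Computing the resultant vector directly from the relative distances, without an explicit grid, could be sketched as follows. The ((x, y), weight) field representation and the inverse-distance falloff are illustrative assumptions:

```python
import math

def resultant_vector(agent_pos, fields):
    """Sum the force contributions of every field acting on an agent.

    `fields` is a list of ((x, y), weight) pairs: a positive weight
    attracts the agent towards the field's centre, a negative weight
    repels it. Strength is inversely proportional to distance, which
    is one illustrative choice of falloff among several.
    """
    ax, ay = agent_pos
    fx, fy = 0.0, 0.0
    for (cx, cy), weight in fields:
        dx, dy = cx - ax, cy - ay
        dist = math.hypot(dx, dy)
        if dist == 0.0:
            continue  # agent is on the centre; direction is undefined
        strength = weight / dist
        fx += strength * dx / dist  # scale the unit direction vector
        fy += strength * dy / dist
    return fx, fy
```

The agent would then move in the direction of the returned vector, re-evaluating it every time step as the fields move.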

\begin{figure}[h]
	\centering
	\includegraphics{img/pf-influencemaps.png}
	\caption{Two potential fields with strength values shown.}
	\label{fig:two_pf}
\end{figure}

Potential Fields have been applied to RTS games as shown by \citet{hagelbackmulti,sandberg2011evolutionary}. There are several advantages to using Potential Fields: they handle dynamic domains well and easily produce the behaviour wanted in these games \citep{hagelbackmulti}. Path-finding in a dynamic environment can be very hard, and the most common path-finding solution, the A* algorithm, struggles with it. Because of the nature of Potential Fields, dynamic environments do not give a Potential Fields algorithm any extra work \citep{sandberg2011evolutionary}. Micromanagement (see Section~\ref{sec:micromanagement}) is a good example of desired RTS behaviour that is possible with Potential Fields. Moving units away from enemy units, which is sometimes needed, can be done by placing negative fields on those units. Similarly, to make your units attack a weak target, the Potential Fields can be designed to be strong on weak units. 

\section{Evolutionary Algorithms}
Evolutionary Computation (EC) is a general term for a group of stochastic search techniques that are loosely based on the Darwinian principle \textit{Survival of the Fittest}, which they utilize by emulating evolution to optimize and improve their solutions. EC groups several techniques, the main types being Genetic Algorithms (GA), Evolutionary Strategies and Genetic Programming. All of these are also classified by the term Evolutionary Algorithms (EA), and have mechanisms in common such as reproduction, mutation, recombination and selection. An EA has a population of encoded solutions which can be manipulated by the mechanisms mentioned above and evaluated by a fitness function \citep{floreano2008bio}. An EA requires both a fitness function and an \textit{objective}, which represents a more high-level goal than the fitness function; the distinction is presented in Section~\ref{sec:MOO}.

An encoded individual in an EA population is called a \textit{genome}, and is often represented as a string of bits; the encoding plans, or blueprints, for these individuals are called \textit{genotypes}. The genotype is composed of several chromosomes, each of which represents a parameter of the solution. The genotype is turned into a phenotype by decoding the chromosomes into real values that together form a testable solution candidate. 

The cycle of life in an EA is depicted in Figure \ref{fig:ea_cycle}. The cycle in the figure starts in the bottom left corner as a set of genotypes; before the first step of evolution, the first generation is often randomly generated. Once translated into phenotypes, the candidates have their fitness evaluated by testing them within the specific problem domain. The fitness of each individual determines its chance of staying in the population, and its chance of being selected as a parent to breed new individuals into the next generation. Once the breeding is completed, the new genotypes are brought into the population and the cycle continues with a new generation of individuals. 

\begin{figure}[h]
	\centering
	\includegraphics{img/ea_cycle.png}
	\caption[The basic EA cycle]{The basic EA cycle \citep{keith2010evoalg}.}
	\label{fig:ea_cycle}	
\end{figure}


Evolutionary Operators (EVOPs) are techniques aimed at generating populations with high fitness; the three major EVOPs for EAs are selection, recombination and mutation \citep{coello2007evolutionary}. 

Selection decides how individuals of a population are drafted for parenthood depending on their fitness. There are several techniques available, one of which is tournament selection. In tournament selection the individuals are drafted into smaller groups, or tournaments, to compete against each other for parenthood using their fitness values. The size of these tournaments is chosen manually and determines the selection pressure of the EA, i.e. how difficult it is to be drafted for parenthood. The higher the selection pressure, the more the individuals with high fitness are favoured; as a result, the convergence rate of an EA is largely determined by the selection pressure. Tournaments are held until the wanted number of parents has been reached \citep{floreano2008bio}.
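Tournament selection as described above can be sketched in a few lines. The function signature, including the fitness callable, is a hypothetical choice:

```python
import random

def tournament_selection(population, fitness, tournament_size, n_parents,
                         rng=random):
    """Draft `n_parents` parents via tournaments of `tournament_size`.

    A larger tournament size means higher selection pressure: highly
    fit individuals win more often and the EA converges faster.
    """
    parents = []
    for _ in range(n_parents):
        contestants = rng.sample(population, tournament_size)
        parents.append(max(contestants, key=fitness))
    return parents
```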

Elitism is the act of preserving an individual into the next generation without changing it. This is done to preserve the best individuals in a population without risking that they are drastically changed during mutation or recombination. Elitism is an important part of Multi-Objective Optimization (see Section~\ref{sec:MOO}).

The recombination EVOP focuses on how to best combine the different chromosomes of each parent to produce a new genotype that ideally takes the best properties from each partner genotype. Recombination in evolutionary algorithms is done by crossover, which cuts the parent genotypes at one or more given points and recombines the parts into new children. Figure \ref{fig:crossover} shows single-point crossover: the two segments of each parent and child bar represent the chromosomes, and the crossover operation swaps a segment of each parent to create two new children that each carry a part of both parents \citep{floreano2008bio}.

\begin{figure}[h]
	\centering
	\includegraphics[width=\linewidth]{img/ea_crossover.png}
	\caption[Single-point crossover]{An example of a single point crossover done by cutting genotype segments and recombining pieces of each parent.}	
	\label{fig:crossover}
\end{figure}
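A minimal sketch of single-point crossover on bitstring genotypes, with hypothetical names, might look like this:

```python
import random

def single_point_crossover(parent_a, parent_b, rng=random):
    """Cut both parent genotypes at one random point and swap the
    tails, yielding two children that each carry a piece of both
    parents."""
    assert len(parent_a) == len(parent_b)
    point = rng.randrange(1, len(parent_a))  # cut strictly inside the string
    child_1 = parent_a[:point] + parent_b[point:]
    child_2 = parent_b[:point] + parent_a[point:]
    return child_1, child_2
```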

The mutation EVOP is the process of randomly changing the genome. Mutation can change one or several parts of the genome in a random fashion; how it is changed depends on the representation of the genotype. Figure \ref{fig:ea_mutation} shows an example of mutation where the genotype is a series of bits and is mutated by inverting a single bit at a random place in the bitstring. Mutating too often, or changing several parts of the genome at once, would make the genome change drastically. Mutating too rarely, on the other hand, could make the EA converge towards a local optimum.

\begin{figure}[h]
	\centering
	\includegraphics[width=\linewidth]{img/ea_mutation.png}
	\caption{Single bit mutation}
	\label{fig:ea_mutation}	
\end{figure}
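A sketch of bitstring mutation is shown below; note that it flips each bit independently with a per-bit rate, a common generalization of the single-bit variant shown in the figure:

```python
import random

def mutate(genome, rate, rng=random):
    """Flip each bit of a bitstring genome independently with
    probability `rate`. A low rate keeps changes gradual; a rate of
    zero would leave the EA to converge on whatever optima crossover
    alone can reach."""
    return ''.join(
        ('1' if bit == '0' else '0') if rng.random() < rate else bit
        for bit in genome
    )
```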

Choosing the right combination of EVOPs and their parameters is crucial for the performance of an EA. There is no domain-independent answer that is more correct than others: it depends on the problem at hand and how the EA represents that problem, and there is room for creativity and intuition when tuning the parameters. The parameters of an EA are the mutation chance, crossover chance, selection mechanism, population size and number of generations. 

\section{Multi-Objective Optimization} \label{sec:MOO}
Earlier we mentioned that an EA requires both an objective and a fitness function. The difference between the two can be vague at times. Objectives are high-level goals in the problem domain which describe what you want to accomplish, while fitness functions work in the algorithm domain to measure how well a particular solution accomplishes these goals, by assigning the solution a value that reflects its measured quality. \\

\begin{mydef}
\label{def:dominance}
\textbf{(Pareto Dominance \citep{coello2007evolutionary})} A vector $\textbf{u} = (u_1,...,u_k)$ is said to \textbf{dominate} another vector $\textbf{v}=(v_1,...,v_k)$ (denoted by $\textbf{u} \preceq \textbf{v}$) if and only if \textbf{u} is partially less than \textbf{v}, i.e., $\forall i \in \{ 1,...,k\}, u_i \leq v_i \wedge \exists i \in \{ 1,...,k\}:u_i < v_i$.
\end{mydef}

When a problem has multiple conflicting objectives, it becomes increasingly complex to represent the overall quality of a solution by a single fitness function. We can define the problem of multi-objective optimization as the search for a vector of decision variables which is optimized to yield balanced and acceptable results for all the objectives. The key is to find good compromises (or \textit{trade-offs}) that satisfy all the objectives evenly, the goal being to find the \textit{Pareto optimum}, defined as: \\

\begin{mydef}
\label{def:pareto}
\textbf{(Pareto Optimality \citep{coello2007evolutionary})} A solution $x \in \Omega$ is said to be Pareto Optimal with respect to (w.r.t) $\Omega$ if and only if there is no $x^{\prime} \in \Omega$ for which $\textbf{v} = F(x^{\prime}) =  (f_{1}(x^{\prime}),...,f_{k}(x^{\prime}))$ dominates $\textbf{u} = F(x) = (f_{1}(x),...,f_{k}(x))$.
\end{mydef} 

Saying that a vector $v_{1}$ dominates a vector $v_{2}$ means that it performs at least as well as $v_{2}$ over all the objectives and strictly better in at least one, as Definition~\ref{def:dominance} states. This can be illustrated by looking at the solutions in \textit{objective space}, plotting them in a coordinate space where each objective is a dimension, as shown in Figure~\ref{fig:paretofront}. The outer solutions in this space, which are not dominated by any other solution, collectively form what is called the Pareto front.
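Definition~\ref{def:dominance} and the construction of the Pareto front translate directly into code. The sketch below assumes minimization over tuples of objective values:

```python
def dominates(u, v):
    """Pareto dominance for minimization: u is no worse than v in
    every objective and strictly better in at least one."""
    return (all(ui <= vi for ui, vi in zip(u, v))
            and any(ui < vi for ui, vi in zip(u, v)))

def pareto_front(solutions):
    """Keep the solutions whose objective vectors are not dominated
    by any other solution; together they form the Pareto front."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]
```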

\begin{figure}[H]
	\centering
	\includegraphics[width=\linewidth]{img/moo_paretofront.png}
	\caption[Non-dominated vectors in objective space]{Non-dominated vectors in objective space, collectively called the Pareto front \citep{deb2000fast}. This objective space has two objectives and thereby two dimensions.}
	\label{fig:paretofront}	
\end{figure}

Since the definition of a multi-objective problem suggests it is not possible to have a single, globally optimal solution, the ability of EAs to produce a large variety of candidate solutions is very beneficial. This has motivated the proposal of many Multi-Objective Evolutionary Algorithms (MOEAs). The primary goal of these MOEAs is to utilize the EA's ability to generate multiple Pareto-optimal candidates in a single run. %There are numerous MOEA solutions, each either improving the solution, the effectiveness of the algorithm or both \citep{coello2007evolutionary}.

The problem of evaluating the performance of a Micromanagement AI can be viewed as a multi-objective problem: there are several factors to consider when evaluating the performance of the AI, and it will sometimes be difficult to compare two solutions as they might excel at different Micromanagement techniques (discussed later in Section~\ref{sec:micromanagement}). A multi-objective approach makes it possible to optimize the Micromanagement solution for several techniques at once, so that the complex behaviour that is micromanagement can be approximated.

\subsection{Nondominated Sorting Genetic Algorithm (NSGA-II)} \label{sec:nsga2}
One of the most widely adopted Multi-Objective Genetic Algorithms (MOGAs) is the improved non-dominated sorting genetic algorithm (NSGA-II), an elitist approach that does not require any additional user-defined parameters. NSGA-II combines the parent population $P_t$ and the offspring population $Q_t$ into a population $R_t$, and sorts it according to non-domination rank, as shown by the pseudo code in Figure~\ref{fig:nsgasort}. The rank is based on which Pareto front an individual belongs to, determined by which individuals it dominates and which individuals dominate it.
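The ranking can be sketched by repeatedly peeling off non-dominated fronts. The naive version below, which assumes minimization, conveys the idea but is not the bookkeeping-optimized sort of the pseudo code:

```python
def non_dominated_rank(objectives):
    """Assign each solution its front index: 0 for the Pareto front,
    1 for the front of what remains once front 0 is removed, etc."""
    def dominates(u, v):
        return (all(a <= b for a, b in zip(u, v))
                and any(a < b for a, b in zip(u, v)))

    remaining = set(range(len(objectives)))
    rank = {}
    front_index = 0
    while remaining:
        # the current front: solutions not dominated by anything left
        front = {i for i in remaining
                 if not any(dominates(objectives[j], objectives[i])
                            for j in remaining if j != i)}
        for i in front:
            rank[i] = front_index
        remaining -= front
        front_index += 1
    return [rank[i] for i in range(len(objectives))]
```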

\begin{figure}[h]
	\centering
	\includegraphics[width=\linewidth]{img/nsga_sort.png}
	\caption[Non-dominated sort algorithm used in NSGA-II]{Pseudo code of the fast non-dominated sort algorithm used in NSGA-II \citep{deb2000fast}}
	\label{fig:nsgasort}
\end{figure}
\begin{figure}[h]
	\centering
	\includegraphics[width=\linewidth]{img/nsga_operator.png}
	\caption[NSGA-II selection operator]{The selection operator as presented by \citet{deb2000fast}.}
	\label{fig:nsgaoperator}
\end{figure}
\begin{figure}[h]
	\centering
	\includegraphics[width=\linewidth]{img/nsga_crowding.png}
	\caption[NSGA-II crowding distance assignment]{Pseudo code for the crowding distance assignment \citep{deb2000fast}.}
	\label{fig:nsgacrowding}
\end{figure}
\begin{figure}[h]
	\centering
	\includegraphics[width=\linewidth]{img/nsga_loop.png}
	\caption[NSGA-II main loop]{Pseudo code for the NSGA-II main loop \citep{deb2000fast}.}
	\label{fig:nsgaloop}	
\end{figure}

NSGA-II promotes diversity and a good spread in the objective space by looping through the population and assigning a \textit{Crowding Distance} metric to each solution, as shown in Figure~\ref{fig:nsgacrowding}. The metric reflects how close a solution is to its neighbouring solutions, and keeps the population diverse by making the algorithm more likely to explore solutions from less clustered parts of the objective space.
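The crowding distance assignment could be sketched as follows, assuming each solution in a front is a tuple of objective values:

```python
def crowding_distance(front):
    """Crowding distance for a list of objective vectors in one front.

    Per objective, the boundary solutions get infinite distance, and
    each interior solution accumulates the normalized gap between its
    two neighbours, so solutions in sparse regions score higher and
    survive truncation.
    """
    n = len(front)
    distance = [0.0] * n
    if n == 0:
        return distance
    for obj in range(len(front[0])):
        order = sorted(range(n), key=lambda i: front[i][obj])
        low, high = front[order[0]][obj], front[order[-1]][obj]
        distance[order[0]] = distance[order[-1]] = float('inf')
        if high == low:
            continue  # all solutions are equal in this objective
        for k in range(1, n - 1):
            distance[order[k]] += (front[order[k + 1]][obj]
                                   - front[order[k - 1]][obj]) / (high - low)
    return distance
```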

When creating the mating pool, NSGA-II uses a selection operator $\ge _n$ which rewards objective fitness and spread by looking at the non-domination rank and the crowding distance. The logic behind the operator is shown in Figure~\ref{fig:nsgaoperator}. The algorithm then selects the $N$ best solutions for $P_{t+1}$ according to the operator ($N$ being the population size). Binary tournament selection and recombination EVOPs are then applied to create a new offspring population $Q_{t+1}$. 
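The operator $\ge _n$ itself reduces to a simple comparison over (rank, crowding distance) pairs; the sketch below assumes a lower rank is better:

```python
def crowded_compare(a, b):
    """The crowded-comparison operator: each solution is a
    (rank, crowding_distance) pair, and `a` wins if it has a lower
    (better) non-domination rank, or the same rank but a larger
    crowding distance."""
    rank_a, dist_a = a
    rank_b, dist_b = b
    return rank_a < rank_b or (rank_a == rank_b and dist_a > dist_b)
```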

The main loop of NSGA-II, as shown in Figure~\ref{fig:nsgaloop}, is repeated until a user-defined condition is met, or it is manually terminated.

