%!TEX root = /Users/dejean/Documents/ITU/2aar/MAIG-E2011/exam_project/report/simmar_report.tex
\section{XCS}
\subsection{Method}
The basic idea of the Accuracy-based Classifier System (XCS) is to evolve a set of condition-action rules (classifiers) that are both accurate and maximally general, using reinforcement learning to evaluate the rules and a niche genetic algorithm to discover new ones.

The version of XCS chosen for this project is the later revision by Wilson \cite{xcs_generalization}, a development of his original description \cite{xcs_original}. The software implemented for this project follows the guidelines given in \cite{xcs_description}.

Figure \ref{xcs_overview} shows an overview of the XCS system. Detectors feed the system with input from the environment in the form of a condition. Effectors feed the environment with output in the form of an action.

Conditions are binary strings with wildcard placeholders. Each position can take the value $0$, $1$ or $\#$, where the wildcard $\#$ matches either $0$ or $1$.
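Matching a ternary condition against a binary input can be sketched as follows (an illustrative Python sketch, not the project's actual implementation):

```python
def matches(condition, state):
    """Return True if the ternary condition matches the binary input.

    condition: string over {'0', '1', '#'}, where '#' is a wildcard
    state:     string over {'0', '1'} of the same length
    """
    return all(c == '#' or c == s for c, s in zip(condition, state))
```

For example, the condition \texttt{1\#0} matches the input \texttt{110} but not \texttt{111}.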

In XCS, a classifier has a number of associated properties:

\begin{itemize}
	\item $p$, the prediction
	\item $\epsilon$, the prediction error
	\item $F$, the fitness
	\item $exp$, the experience; the number of times this classifier has belonged to an action set since its creation.
	\item $ts$, the time stamp for the last time the GA component was run on an action set this classifier belonged to.
	\item $as$, an estimation of the average size of action sets this classifier has belonged to.
	\item $n$, the numerosity; the number of micro-classifiers this classifier represents.
\end{itemize}
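These properties can be collected in a simple record type; a minimal Python sketch is given below (field names are chosen here for illustration, with $as$ renamed to \texttt{as\_} since \texttt{as} is a Python keyword; the initial values follow the defaults listed in section \ref{xcs_parameters}):

```python
from dataclasses import dataclass

@dataclass
class Classifier:
    condition: str          # ternary string over {'0', '1', '#'}
    action: int
    p: float = 1e-6         # prediction
    epsilon: float = 1e-6   # prediction error
    F: float = 1e-6         # fitness
    exp: int = 0            # experience
    ts: int = 0             # time stamp of the last GA invocation
    as_: float = 1.0        # estimated average action-set size
    n: int = 1              # numerosity
```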

The XCS system itself consists of three basic components:

\begin{itemize}
	\item Performance
	\item Reinforcement
	\item Discovery
\end{itemize}

The system maintains five principal data structures:

\begin{itemize}
	\item The population $[P]$, the population of currently active classifiers 
	\item The match set $[M]$, the set of classifiers whose conditions match the last given input
	\item The prediction array $[PA]$, the predictions of the possible actions, based on the match set
	\item The action set $[A]$, the set of classifiers whose action has been selected
	\item The previous action set $[A]_{-1}$, the action set from the previous iteration
\end{itemize}

There are also a number of parameters used by different parts of the system to configure the behaviour of the algorithm. These are covered in section \ref{xcs_parameters}.

In the performance component, the input condition is matched against the conditions of the classifiers in $[P]$, forming $[M]$. If the number of different actions in $[M]$ falls below a certain threshold, covering is performed, meaning that classifiers that ``cover'' the missing actions are inserted into $[P]$.

Then, $[PA]$ is formed from the fitness-weighted averages of the predictions of the classifiers in $[M]$. The action with the highest prediction in $[PA]$ is chosen as the next action to send to the environment. The classifiers in $[M]$ that advocate this action form $[A]$.
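The formation of the prediction array can be sketched as follows (classifiers are represented as plain dicts for brevity; the key names are illustrative):

```python
def prediction_array(match_set, actions):
    """For each possible action, compute the fitness-weighted average
    of the predictions of the classifiers in [M] advocating it."""
    pa = {}
    for a in actions:
        advocates = [cl for cl in match_set if cl['action'] == a]
        fitness_sum = sum(cl['F'] for cl in advocates)
        if fitness_sum > 0:
            pa[a] = sum(cl['p'] * cl['F'] for cl in advocates) / fitness_sum
        else:
            pa[a] = None  # no classifier in [M] advocates this action
    return pa
```

Note that classifiers with higher fitness pull the system prediction of their action towards their own prediction.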

In the reinforcement component, the properties of the classifiers that have been selected in $[A]$ are updated. If the problem is single-step, the update is performed on $[A]$ using the current reward; if it is multi-step, the update is performed on $[A]_{-1}$ using the discounted previous reward.
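The parameter updates follow the Widrow-Hoff rule; a minimal sketch with classifiers as dicts is shown below (the fitness update via the accuracy measure $\kappa$ is omitted for brevity, and \texttt{payoff} denotes the update target, i.e.\ the current reward in the single-step case):

```python
BETA = 0.2  # learning rate beta

def update_action_set(action_set, payoff):
    """Widrow-Hoff updates of experience, prediction error, prediction
    and action-set size estimate for every classifier in the set.
    The error is updated before the prediction, so it uses the old p."""
    set_size = sum(cl['n'] for cl in action_set)
    for cl in action_set:
        cl['exp'] += 1
        cl['epsilon'] += BETA * (abs(payoff - cl['p']) - cl['epsilon'])
        cl['p'] += BETA * (payoff - cl['p'])
        cl['as'] += BETA * (set_size - cl['as'])
```

Since the MAM technique is not used here, the same learning rate $\beta$ applies from the first update onwards.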

The discovery component (the GA) is run when the average time since it was last run exceeds a given threshold. Two classifiers from the action set are chosen as parents, two children are produced, and crossover and mutation are possibly performed. If the problem is single-step, the GA operates on $[A]$; if it is multi-step, the GA operates on $[A]_{-1}$.
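One discovery step can be sketched as follows (again with classifiers as plain dicts and illustrative names; insertion into $[P]$ and deletion are omitted). Mutation here toggles a position between $\#$ and the matching input bit, so offspring still match the current input:

```python
import random

CHI = 0.8   # crossover probability chi
MU = 0.02   # per-position mutation probability mu

def run_ga(action_set, state):
    """Select two parents from the action set with probability
    proportional to fitness, clone them, and possibly apply
    one-point crossover and mutation to the offspring conditions."""
    def select():
        point = random.uniform(0, sum(cl['F'] for cl in action_set))
        acc = 0.0
        for cl in action_set:
            acc += cl['F']
            if acc >= point:
                return cl
        return action_set[-1]

    children = [dict(select()), dict(select())]
    if random.random() < CHI:
        cut = random.randrange(1, len(state))
        a, b = children[0]['condition'], children[1]['condition']
        children[0]['condition'] = a[:cut] + b[cut:]
        children[1]['condition'] = b[:cut] + a[cut:]
    for child in children:
        cond = list(child['condition'])
        for i, bit in enumerate(state):
            if random.random() < MU:
                cond[i] = bit if cond[i] == '#' else '#'
        child['condition'] = ''.join(cond)
    return children
```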

The main differences between the original description of XCS and the version used here are:

\begin{itemize}
	\item The covering criterion is changed to simply ensuring that a certain number of possible actions are represented in $[M]$, usually all possible actions.
	\item The GA step is performed on $[A]/[A]_{-1}$ rather than on $[M]$. This gives the algorithm better performance.
	\item The calculation of the accuracy measure $\kappa$ uses a power law function as described in \cite{xcs_getreal} rather than an exponential function.
	\item The MAM (``moyenne adaptative modifi\'ee'') technique is not used in the classifier fitness update.
	\item The idea of \emph{subsumption} is introduced as an optional procedure in the reinforcement and discovery components. In short, if the condition of classifier A is logically subsumed by the condition of a sufficiently accurate classifier B, A does not improve the system's performance and can be deleted. While subsumption was implemented in this project, experiments with it were not performed due to time constraints.
\end{itemize}
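For reference, the power-law accuracy function of \cite{xcs_getreal} can be written as
\[
\kappa =
\left\{
\begin{array}{ll}
1 & \mbox{if } \epsilon < \epsilon_0 \\
\alpha \left( \epsilon / \epsilon_0 \right)^{-\nu} & \mbox{otherwise,}
\end{array}
\right.
\]
so that classifiers with error below the threshold $\epsilon_0$ are considered fully accurate, while accuracy falls off polynomially with increasing error.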

\subsection{Representation}
As mentioned in section \ref{introduction}, the state space of the Mario environment is potentially huge. Condensing it into a form that gives the agent enough information to perform well, while keeping the computational cost and time consumption of the learning phase feasible, poses a major challenge.

For this project, an overambitious representation would have complicated the task of implementing and using the XCS algorithm correctly, so a very simple input representation was chosen.

The idea was basically to improve on the behaviour of \emph{ForwardJumpingAgent} by adding more observations, but still with an emphasis on advancement and obstacle avoidance rather than on maximising the score by collecting coins and powerups and killing enemies.

Also, the focus was on creating an agent that would be able to perform well on the default difficulty 0 and possibly difficulty 1.

The final inputs for the XCS-based Mario controller agent, \emph{XCSAgent} are:

\begin{enumerate}
	\item $CAN\_ASCEND$, indicates whether Mario is able to jump, either initially getting off the ground or adding momentum to an ongoing jump for longer/higher jumps. Similar to the corresponding input for \emph{NEATAgent}.
	\item $OBSTACLE\_AHEAD$, indicates whether there is a level obstacle ahead of Mario. Similar to the corresponding input for \emph{NEATAgent}.
	\item $PIT\_AHEAD$, indicates whether there is a pit ahead of Mario. Positive if all cells in a certain range are $0$. Similar to the corresponding input for \emph{NEATAgent}.
	\item $SHOT\_AIMED$, indicates whether Mario is in fire mode and there is one or more enemies within a certain range ahead of Mario. Similar to the corresponding input for \emph{NEATAgent}.
	\item $ENEMY\_AHEAD$, indicates whether there is one or more enemies within a certain range of cells ahead of Mario. Similar to the corresponding input for \emph{NEATAgent}.
\end{enumerate}

As the chosen inputs are only observations of what is in front of Mario or Mario's own state, it seemed unnecessary to include the possible variations of moving and jumping left. Furthermore, moving up is only relevant on levels with ladders, meaning levels with higher difficulty than 1, so this action was not included either. Lastly, moving down means descending a ladder or ducking, neither of which was deemed necessary on difficulty 0.

The final outputs are:
\begin{enumerate}
	\item $RIGHT$, move right.
	\item $RIGHT\_JUMP$, move right and jump.
	\item $RIGHT\_SPEED$, move quickly right. Also fires a fireball if Mario is in fire mode.
	\item $RIGHT\_JUMP\_SPEED$, move quickly right and jump. Also fires a fireball if Mario is in fire mode.
\end{enumerate}
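Together, these observations and actions give a very compact interface to the classifier system. The mapping from observations to the binary input string can be sketched as follows (the names and their ordering are illustrative, not necessarily those of the project code):

```python
# Fixed ordering of the five boolean observations and four actions.
INPUTS = ["CAN_ASCEND", "OBSTACLE_AHEAD", "PIT_AHEAD",
          "SHOT_AIMED", "ENEMY_AHEAD"]
ACTIONS = ["RIGHT", "RIGHT_JUMP", "RIGHT_SPEED", "RIGHT_JUMP_SPEED"]

def encode_state(observations):
    """Map a dict of boolean observations to the 5-bit binary input
    string presented to the classifier system."""
    return ''.join('1' if observations[name] else '0' for name in INPUTS)
```

For instance, with only $CAN\_ASCEND$ and $SHOT\_AIMED$ active, the input string is \texttt{10010}; the action chosen by the system is then an index into the four actions above.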

\begin{figure}[htp]
	\centerline{\includegraphics[width=0.8\columnwidth]{images/xcs_overview.png}}
	\caption{XCS overview}
	\label{xcs_overview}
\end{figure}

\subsection{Parameters} \label{xcs_parameters}

The parameters of the XCS system were set as follows:

\begin{itemize}
	\item $N=400$, maximum population size
	\item $\beta=0.2$, learning rate
	\item $\alpha=0.1$, accuracy distinction rate
	\item $\epsilon_{0}=0.1$, error threshold
	\item $\nu=5$, fitness update power
	\item $\theta_{GA}=25$, GA firing threshold
	\item $\chi=0.8$, crossover probability
	\item $\mu=0.02$, mutation probability
	\item $\theta_{del}=20$, deletion threshold
	\item $\delta=25$, fitness fraction for deletion
	\item $P_{\#}=25$, wildcard probability when covering
	\item $p_{explr}=0.5$, exploration probability
	\item $\theta_{mna}=4$, covering threshold
	\item $p_I = \epsilon_I = F_I = 10^{-6}$, default values for new classifiers
\end{itemize}

Most of these values are the commonly used suggestions from \cite{xcs_description}. Experiments were performed with lower and higher values for each of them, but the changes had minimal impact on the convergence pattern seen in figure \ref{learning_winrate_1000}.

The only parameter that differs significantly is $N$. The suggested values are 800 or 1600, but a larger population did not have a positive effect on performance.

The rewards were as follows:

\begin{itemize}
	\item Moving forward, 10
	\item Jumping when facing an obstacle and jumping is possible, 100
	\item Shooting when possible, 1000
\end{itemize}

From experience with the NEAT agent, it seemed that the strategy learned with a similar (small) set of inputs was to move forward as fast as possible, shoot whenever possible and dodge the obstacles and enemies present in the given observation.

\subsection{Learning}

Figure \ref{learning_winrate_1000} shows the win rate as a function of the number of learning evaluations, over 1000 evaluations. As can be seen from the graph, the rate peaks at around 250 evaluations and then slowly tapers off.

\begin{figure}[htp]
	\centerline{\includegraphics[width=0.8\columnwidth]{images/learning_winrate_1000.png}}
	\caption{XCS Agent learning, win rate}
	\label{learning_winrate_1000}
\end{figure}

Experiments with more than 1000 evaluations did not show a later increase in win rate. Thus, the stop criterion for the XCS learning was set to 250 evaluations.

Figure \ref{learning_populationsize_250} shows the population size (number of classifiers/rules) over 250 learning evaluations. After the first 50 evaluations, the size generally fluctuates between 45 and 65.

\begin{figure}[htp]
	\centerline{\includegraphics[width=0.8\columnwidth]{images/learning_populationsize_250.png}}
	\caption{XCS Agent learning, population size}
	\label{learning_populationsize_250}
\end{figure}

Figure \ref{learning_fitness_250} shows the fitness over 250 learning evaluations. The fitness function used is the same as for the NEAT agent, except that here the value is not used in the learning process, only as a performance measure. The results are very noisy, indicating a high degree of randomness in the performance of the XCS agent.

\begin{figure}[htp]
	\centerline{\includegraphics[width=0.8\columnwidth]{images/learning_fitness_250.png}}
	\caption{XCS Agent learning, fitness}
	\label{learning_fitness_250}
\end{figure}

\subsection{Conclusion}

Many of the points noted in section \ref{neat_conclusion} are also valid here, as the representation is largely the same. The XCS agent tries to follow a jump-and-shoot strategy, gets stuck in dead ends and doesn't notice enemies that are not right in front of him. The noisy fitness results are most likely due to dying often in these situations.

A difference from the NEAT agent's behaviour is that the XCS agent does not seem to learn the proper timing of jumps when facing high obstacles, which require holding the jump action for the right number of ticks to gain height and move forward properly. This means that it often gets temporarily stuck, jumping up and down in front of an obstacle.

The fairly large population size suggests that there is much room for improving the generalising capabilities of the \emph{XCSAgent}. The representation is too simplistic, leaving too many cases to be covered by more specific classifiers.

Activating the subsumption checks during the action set update and GA step might help diminish the number of classifiers.