\label{chapter:gpu}
This chapter provides information about the second architecture on which the algorithm was implemented, namely a GPU. It will start with an overview of the advantages and disadvantages of a GPU. Then the particular implementation of A-CMA on the GPU will be described. Finally, the chapter will conclude with profiling results of the algorithm and the bottlenecks of this particular implementation.

\section{Architecture}
A GPU (Graphics Processing Unit) is a processor optimised for parallel instructions, more specifically for \emph{single instruction, multiple data}. This optimisation stems from the nature of graphics processing: there are many pixels to which the same kind of operation has to be applied. GPUs contain more transistors than CPUs, but their individual processing units are less complex.

It is possible to use the GPU for general-purpose computations, but to achieve optimal performance the program has to be altered. A GPU is optimised for a constant stream of data, so if such a stream of input data can be ensured, performance is maximised. The key is to restructure the program in such a way that this can be achieved.

The actual GPU used was an NVIDIA GeForce GTX 470 with 1280\,MB of RAM. This GPU supports CUDA compute capability 2.0 and has 448 CUDA cores. A schematic overview of both a CPU and a GPU can be seen in figure \ref{fig:cpu_vs_gpu}.

\begin{figure}
	\centering
		\includegraphics[width=1\textwidth]{img/cpu_vs_gpu}
	\caption{A schematic overview of a CPU and a GPU~\cite{nvidia}.}
	\label{fig:cpu_vs_gpu}
\end{figure}

This figure clearly shows the completely different nature of the two processing units: \emph{serial processing} versus \emph{parallel processing}.

\section{Implementation}
The Compute Unified Device Architecture (CUDA) enables the execution of tasks on a compatible GPU. CUDA provides C bindings as well as MATLAB bindings. The latter were used for this implementation; apart from the reasons stated in chapter \ref{chapter:cpu}, this was done because of the ease of use and to make a quick alteration of the CPU implementation possible. With MATLAB's CUDA support, arrays can be cast to the GPU, after which supported operations on these arrays are calculated on the GPU. It is also possible to specify that custom functions should be executed explicitly on the GPU.
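As a minimal sketch of this workflow in MATLAB's Parallel Computing Toolbox (the matrix names and sizes here are arbitrary illustrations, not those of the actual implementation):

```matlab
% Illustrative sketch: casting arrays to the GPU and back.
% Sizes and variable names are arbitrary, for illustration only.
X = rand(8, 1000);       % example input block on the host
W = rand(8, 8);          % example weight matrix on the host

Xg = gpuArray(X);        % copy the data to GPU memory
Wg = gpuArray(W);
Yg = Wg * Xg;            % this multiplication now runs on the GPU

Y = gather(Yg);          % copy the result back to host memory
```

Any supported operation applied to a `gpuArray` is executed on the GPU automatically; `gather` is only needed when the result has to be used on the host again.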

Drawing from the classification of the algorithm into the necessary operations, as described in section \ref{sec:cpu_implementation}, it can be concluded that the majority of the algorithm consists of multiplications. The key is to organise the data in such a way that as few multiplications as possible are needed. This has, for instance, been done with a constant that had to be multiplied with two different values: the constant was vertically concatenated with itself into a column vector, so that a single element-wise multiplication yielded both results as a vector. In this way, several such optimisations have been made.
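The concatenation trick described above can be sketched as follows (the values are hypothetical; only the structure of the computation matters):

```matlab
% Hypothetical sketch of the concatenation optimisation.
c = 0.5;                 % a constant needed in two multiplications
a = 2;  b = 3;           % the two values it must be multiplied with

% Naive approach: two separate multiplications.
r1 = c * a;
r2 = c * b;

% Restructured: concatenate the constant vertically into a column
% vector, so one element-wise multiplication yields both results.
cvec = [c; c];           % the constant, vertically concatenated
r = cvec .* [a; b];      % r equals [r1; r2] in a single operation
```

Fusing the two multiplications into one vector operation reduces the number of separate GPU operations, and thus the per-operation launch and data-movement overhead.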

\section{Bottlenecks}
The GPU implementation was profiled by executing it with a varying number of antennas. A plot of the execution time versus the number of antennas can be seen in figure \ref{fig:gpu_time}. 
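This timing setup can be sketched roughly as follows; `run_acma` is a placeholder name for the profiled GPU implementation, and the antenna range is assumed for illustration:

```matlab
% Rough sketch of the timing experiment (assumed setup; run_acma is
% a placeholder for the actual GPU implementation of the algorithm).
antennas = 2:2:16;                  % assumed range of antenna counts
t = zeros(size(antennas));
for k = 1:numel(antennas)
    X = gpuArray(rand(antennas(k), 1000));  % 1000 input symbols
    tic;
    run_acma(X);                    % the function being profiled
    wait(gpuDevice);                % wait for all GPU work to finish
    t(k) = toc;
end
plot(antennas, t);
```

The call to `wait(gpuDevice)` matters here: GPU operations in MATLAB are launched asynchronously, so without it `toc` could measure only the launch time rather than the actual execution time.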

\begin{figure}[!ht]
	\centering
		\includegraphics[width=1\textwidth]{img/gpu_time}
	\caption{The time it takes the GPU to execute the algorithm on a set of 1000 input symbols plotted versus the number of antennas.}
	\label{fig:gpu_time}
\end{figure}

Note that the execution time does not vary much. The conclusion drawn is that moving data to the appropriate GPU cores is much more costly than the actual multiplications themselves. To illustrate this, the `real work' done (the multiplications) was plotted as a percentage of the total time spent on the function, versus the number of antennas. This is plotted in figure \ref{fig:gpu_time_percentage}. As can be seen, more and more time is spent on `overhead', such as moving data and processing the loop. Note that the latter operation is performed on the CPU.
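The split between multiplication time and overhead can be obtained with MATLAB's built-in profiler, roughly as sketched below (`run_acma` is again a placeholder name for the profiled function):

```matlab
% Sketch of measuring the share of time spent in multiplications,
% using MATLAB's built-in profiler (run_acma is a placeholder).
profile on;
run_acma(X);
p = profile('info');
profile off;

% p.FunctionTable lists the total time per function; comparing the
% time spent in the multiplication routines against the overall
% execution time yields the percentages shown in the figure.
```

This only attributes time at function granularity, which is sufficient here because the multiplications are the dominant GPU operations.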

\begin{figure}[!ht]
	\centering
		\includegraphics[width=1\textwidth]{img/gpu_time_percentage}
	\caption{The time the GPU spent on multiplications as a percentage of the total time of the algorithm.}
	\label{fig:gpu_time_percentage}
\end{figure}

Finally, it is useful to plot the results of the CPU and GPU tests in a single figure. This can be seen in figure \ref{fig:combined_time}. It is clear that for a larger number of antennas it is beneficial to use the optimised GPU implementation.

\begin{figure}[!ht]
	\centering
		\includegraphics[width=1\textwidth]{img/combined_time}
	\caption{The time it takes the CPU and GPU to execute the algorithm on a set of 1000 input symbols plotted versus the number of antennas.}
	\label{fig:combined_time}
\end{figure}
