\label{chapter:cpu}
In the previous chapter, the implemented algorithm was briefly explained. This chapter provides information about one of the architectures on which this algorithm was implemented, namely the CPU. It starts with an overview of the CPU with its advantages and disadvantages. Then the particular implementation of A-CMA on the CPU architecture is described. Finally, the chapter concludes with profiling results of the algorithm and the bottlenecks of that implementation.

\section{Architecture}
The Central Processing Unit (CPU) of a computer executes the instructions of a computer program and acts as a controller of the other components in a computer. A CPU is optimised for sequential operations, such as the iterations of a for-loop. The actual CPU used was an Intel Core 2 Duo E6850 @ 3.00 GHz.

\section{Implementation}
\label{sec:cpu_implementation}
An implementation of A-CMA was written in Matlab. Matlab was used because it provides efficient and easy-to-use support for matrix operations, which this algorithm uses heavily. Matlab also has extensive support for profiling, providing an easy way to test the performance of the algorithm. A drawback of using Matlab is that much happens under the hood: there is no direct control over all operations that are executed on the CPU, because Matlab applies many optimisations of its own. However, there was still enough control over what was executed to obtain meaningful results.

Implementing the algorithm for the CPU in Matlab is fairly straightforward: generate input data, apply the formula to this input data, and continue with the next input. Input data was generated using the built-in Matlab function for QPSK-modulated signals. The formula was defined as an anonymous function taking the input as parameters. This proved to be the most efficient CPU implementation in Matlab. Another approach that was tried was to use regular functions to generate the matrix $B'$; however, this proved less efficient than generating it inline, as some work had to be done twice in that approach.
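As an illustration, the structure of this implementation could look like the sketch below. The names used here are illustrative rather than taken from the actual script: \texttt{receive\_vector} is a hypothetical stand-in for the channel model, \texttt{theta\_k\_next} refers to the anonymous function listed in the bottlenecks section, and the built-in \texttt{pskmod} function (Communications Toolbox) is assumed for the QPSK modulation.

\begin{lstlisting}[language=Matlab, breaklines=true]
% Sketch of the main loop (names are illustrative, not the actual script).
N = 64;                          % number of antennas
L = 1000;                        % message length in symbols
sym = pskmod(randi([0 3], L, 1), 4, pi/4);  % built-in QPSK modulation

theta_k = 0;                     % initial estimate
for k = 1:L
    xv = receive_vector(sym(k), N);        % hypothetical channel/receive model
    theta_k = theta_k_next(theta_k, xv);   % anonymous-function update step
end
\end{lstlisting}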

Mathematically, it can be seen that the algorithm has the following operations:

\begin{itemize}
	\item Calculation of $\phi$ and $\phi'$; this consists of one sine, one cosine, and some simple multiplications and divisions
	\item Scalar by matrix multiplication (2x); $N\cdot N$ multiplications each
	\item Element-wise matrix by matrix multiplication; $N\cdot N$ multiplications
	\item Vector by matrix multiplication; $N \cdot N$ multiplications and $N \cdot (N - 1)$ additions
	\item Vector by vector multiplication; $N$ multiplications and $N - 1$ additions
	\item Scalar by vector multiplication; $N$ multiplications
	\item Exponential of a vector; $N$ evaluations of the exponential function
	\item Vector by vector multiplication; $N$ multiplications and $N - 1$ additions
	\item Some simple multiplications and subtractions
\end{itemize}

When looking at these operations, it can be noted that the number of operations depends on the number of antennas $N$. Also, the highest-order term is $N^{2}$, for the matrix operations. Therefore, the number of operations needed by this algorithm grows quadratically with the number of antennas.
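Tallying the items in the list above (counting only the multiplications and additions listed, and counting the scalar-by-matrix multiplication twice) gives, per input symbol, roughly
\begin{align*}
\text{multiplications:} &\quad 2N^{2} + N^{2} + N^{2} + 3N = 4N^{2} + 3N,\\
\text{additions:} &\quad N(N-1) + 2(N-1) = N^{2} + N - 2,
\end{align*}
plus $N$ exponentials, one sine, and one cosine. The $N^{2}$ terms clearly dominate for large $N$.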

\section{Bottlenecks}
The implementation in Matlab was profiled to obtain results about its speed and bottlenecks. Because the algorithm depends on many variables, the variables that affect the speed of the program were varied, and different tests were run. The variable with the largest impact on the performance is the number of antennas $N$, as explained in the section above: all vectors have length $N$ and all matrices are $N\cdot N$, so the number of required operations grows quadratically with $N$. The algorithm was run with $N$ taking powers of 2, starting at 2 and ending at 1024.
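Such a sweep can be set up in Matlab with \texttt{tic}/\texttt{toc} around each run; a minimal sketch is shown below, where \texttt{run\_acma} is a hypothetical wrapper around the implementation of Section \ref{sec:cpu_implementation}, not the actual test script.

\begin{lstlisting}[language=Matlab, breaklines=true]
Ns = 2.^(1:10);                 % N = 2, 4, ..., 1024
times = zeros(size(Ns));
for i = 1:numel(Ns)
    tic;
    run_acma(Ns(i));            % hypothetical wrapper around the algorithm
    times(i) = toc;             % wall-clock time for this N
end
\end{lstlisting}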

The results of the total time each run took can be found in Figure \ref{fig:cpu_time}. In this set-up, a message length of 1000 bits was used. For a small number of antennas, the total time needed does not increase much. It is stable up to 32 antennas, probably because of some internal optimisations that Matlab applies. With more than 32 antennas, however, the total time quickly increases, ending with a run of over 92 seconds for 1024 antennas.

\begin{figure}
	\centering
		\includegraphics[width=1\textwidth]{img/cpu_time}
	\caption{The time it takes the CPU to execute the algorithm on a set of 1000 input symbols plotted versus the number of antennas.}
	\label{fig:cpu_time}
\end{figure}

Instead of looking at the total time each test takes, it is also possible to see which lines and functions in the script take the most time. As expected, the CPU spent most of its time in the function calls implementing the algorithm:

\begin{lstlisting}[language=Matlab, breaklines=true]
% Derivative term a'(theta_k): dominated by the N-by-N matrix operations
a_prime         = @(theta_k,xv) real( (xv'*( phi_p(theta_k).*M.*exp(M.*phi(theta_k)) )*xv));
% Gradient update of theta_k with step size mu and target modulus R2
theta_k_next    = @(theta_k,xv) theta_k - mu*(abs( exp(phi(theta_k)*nv)'*xv )^2 - R2) * a_prime(theta_k,xv);
\end{lstlisting}

The second function calls the first function in the above listing. When comparing the time spent in the different functions, it can be seen that almost all time is spent in the first function, a\_prime. This is the function that performs the matrix operations (with $M$), and it is therefore slow to execute. Figure \ref{fig:cpu_time_percentage} shows the time spent in this function as a percentage of the total time for each $N$. For a small number of antennas, the percentage of the time spent in the matrix operations is quite small. It also varies somewhat for a small number of antennas, most likely because the total computation time is then very short. As the number of antennas grows, however, the percentage quickly increases to almost 100\%.
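The per-function and per-line timings discussed above can be obtained with Matlab's built-in profiler; a minimal sketch is shown below, again using the hypothetical \texttt{run\_acma} wrapper rather than the actual test script.

\begin{lstlisting}[language=Matlab, breaklines=true]
profile on                      % start collecting timing data
run_acma(N);                    % execute the algorithm under the profiler
profile off
profile viewer                  % interactive per-function/per-line report
\end{lstlisting}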

\begin{figure}
	\centering
		\includegraphics[width=1\textwidth]{img/cpu_time_percentage}
	\caption{The time spent in the a\_prime function as a percentage of the total time of the algorithm versus the number of antennas.}
	\label{fig:cpu_time_percentage}
\end{figure}
