\chapter{Analysis of Microsoft Accelerator}
\label{non-spreadsheet-tests}
In this chapter, we will look into calculations in Accelerator and benchmark them against C\# implementations. We will identify operations from spreadsheets that may be possible to optimize using Accelerator, and look into limitations of the framework. We will try to answer the following five questions:

\begin{itemize}
\item \textbf{Maximum data} What is the maximum amount of data that we can send into the GPU?
\item \textbf{Transfer time} What is the minimum transfer time to the GPU, and how does the amount of data affect the transfer time?
\item \textbf{Single operations} What is the performance impact on single operations?
\item \textbf{Complex operations} What is the performance impact on complex operations?
\item \textbf{Value creation} Is there any performance impact of the different ways to create values?
\end{itemize}

\section{Hardware setup}
All tests have been run on one specific machine. Different hardware setups will, of course, yield different results for the coming performance tests.

The machine is what would normally be classified as a gaming machine. It is from Hewlett-Packard, and the model number is Z400. The GPU is an NVIDIA GT240, and the CPU is an Intel Xeon W3505. It runs Windows XP 32-bit with DirectX 9 installed. The following tables summarize the hardware specifications.

\begin{table}[H]
\begin{center}
\textbf{NVIDIA GT240}\\\bigskip
    \begin{tabular}{| l | l |}
    \hline
    CUDA Cores & 96 \\ \hline
    Graphics Clock & 550 MHz \\ \hline
    Processor Clock & 1340 MHz \\ \hline
    Memory Clock & 1700 MHz GDDR5 \\ \hline
    Memory & 1 GB \\ \hline
    Memory Interface Width & 128-bit \\ \hline
    Memory Bandwidth & 54.4 GB/sec \\ \hline
    Bus Support & PCI-E 2.0 \\ \hline
    \end{tabular}
    \bigskip

\textbf{Intel Xeon W3505}\bigskip

    \begin{tabular}{| l | l |}\hline
    Cores & 2 \\ \hline
	Threads & 2 \\ \hline
	Clock speed & 2.53 GHz \\ \hline
	Intel Smart Cache & 4 MB \\ \hline
	Instruction set & 64-bit \\ \hline
    \end{tabular}
  \caption{Hardware specification}
\label{table:hardware}
\end{center}
\end{table}
The machine further has 4096 MB of DDR3 RAM installed.

\section{Constructing the tests}
In order to ensure stable results in the performance tests, each test case has been constructed to be executed and timed 100 times with randomly generated input data. The results of the GPU and CPU versions of the test cases have been compared and verified to be approximately equal (taking floating-point precision problems on the GPU into consideration). The tests have all been built with Visual Studio 2010 release settings and have been executed outside the Visual Studio environment to ensure that no unnecessary monitoring was done.
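As a sketch, the harness used to time a test case looks roughly as follows. Note that \keyword{RandomArray} and \keyword{RunTestCase} are hypothetical placeholders standing in for the actual input generation and test code:

\begin{lstlisting}[caption=Sketch of the timing harness,language=CSharp]
var timer = new System.Diagnostics.Stopwatch();
var times = new List<double>();
for (int i = 0; i < 100; i++)
{
    float[] input = RandomArray(size); // fresh random input per run
    timer.Restart();
    RunTestCase(input);                // GPU or CPU version of the test
    timer.Stop();
    times.Add(timer.Elapsed.TotalMilliseconds);
}
Console.WriteLine("Average: " + times.Average() + " ms");
\end{lstlisting}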

\section{Test results}
\subsubsection{Maximum data}
There are two important factors to consider when looking at the amount of data we can transfer to and process on the GPU: the maximum texture size, and the amount of memory available. The maximum texture size defines the maximum width and height of an array of floats that we are able to send to the GPU. The memory further limits how many textures can be stored and how complex shaders can be.

To get the maximum texture size, we simply sent textures of increasing size to the GPU until it returned an error, which happened at around $8000 \times 4000$. We were not able to find a method for determining the maximum complexity of operations; however, we did find that matrix multiplication was only possible on two arrays each smaller than $240 \times 240$.
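The probing step can be sketched as follows. This is a hypothetical sketch rather than our exact test code; the height is fixed while the width is doubled until the target rejects the texture, and the \keyword{evalTarget.ToArray} call is assumed to take the same form as in the later tests:

\begin{lstlisting}[caption=Sketch of probing the maximum texture size,language=CSharp]
int width = 1024;
const int height = 4000;
try
{
    while (true)
    {
        var fpa = new FloatParallelArray(1f, new int[] { height, width });
        float[,] result;
        evalTarget.ToArray(fpa, out result); // fails beyond the limit
        width *= 2;
    }
}
catch (Exception)
{
    Console.WriteLine("Last working width: " + width / 2);
}
\end{lstlisting}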

\subsubsection{Transfer time}
Transfer time can be divided into two components: latency and transfer speed. We define latency as the initial time it takes to transfer data to the GPU. Transfer speed is defined as the time it takes to transfer a single float value. This model is simplified in relation to the hardware architecture, but it suits our purposes: $\mathit{transfertime}(x) = \mathit{latency} + x \cdot \mathit{speed}$

To measure the transfer time, we ran the following code on different sizes of \keyword{x}. Note that even though we use the term "transfer time", it might be more accurately described as the overhead for any Accelerator evaluation on a GPU target.

\begin{lstlisting}[caption=Round-trip evaluation on the GPU target,language=CSharp]
float[] result;
evalTarget.ToArray(new FloatParallelArray(x), out result);
\end{lstlisting}

\begin{figure}[H]
\includegraphics[width=1.0\textwidth]{pics/transferlatency.pdf}
\caption{Time from sending a data input to getting it back again}
\label{transfer latency chart}
\end{figure}

Using regression on the data depicted in figure \ref{transfer latency chart}, we arrived at a latency of 2.1 ms and a speed of $1.68 \times 10^{-5}$ $\frac{ms}{float}$. Note that this is the sum of the time for transferring the data to the GPU and transferring the data back.
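To put these numbers into perspective, transferring, for example, one million floats back and forth would by this model take
\[
\mathit{transfertime}(10^6) = 2.1\,\mathrm{ms} + 10^6 \cdot 1.68 \times 10^{-5}\,\tfrac{\mathrm{ms}}{\mathrm{float}} \approx 18.9\,\mathrm{ms},
\]
so for large inputs the latency quickly becomes negligible compared to the per-float cost.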

\subsubsection{Single operations}
In this section we will look into performance impacts for a few single operations in Accelerator, and compare these with a C\# implementation that does the same job. We will further derive the actual cost of an operation by doing linear regression on the data and look at the difference between the overall time spent and the transfer time that we looked at in the previous section. 

\begin{figure}[H]
\includegraphics[width=1.0\textwidth]{pics/gt240-simpleOperatiosn.pdf}
\caption{Performance tests of single simple operations on the GPU compared to the CPU}
\label{simple single operations}
\end{figure}

Using linear regression, we looked at the slopes of both the C\# versions and the Accelerator counterparts of the above functions. $\Delta_{slope}$ in the table below denotes the difference between the slope for a whole operation and the slope for transferring the data.

Note that it has not been possible to measure the slope for transferring two constants directly; instead we simply multiplied the slope for transferring one constant by two. Also note that the sum operation only has half the transfer slope: while a whole array is transferred to the GPU, only a single constant is transferred back.

These are, of course, simplifications and are to some extent inaccurate.

\begin{table}[H]
\begin{center} 
\begin{tabular}{| l | r | r | r | r |} \hline
 Operation & $Slope_{C\#}$ & $Slope_{Transfer}$ & $Slope_{Operation}$ & $\Delta_{slope}$\\ \hline
 Add & 6.00E-06 & 3.36E-05 & 5.64E-05 & 2.28E-05 \\ 
 Sub & 6.84E-06 & 3.36E-05 & 4.57E-05 & 1.21E-05 \\
 Mul & 6.82E-06 & 3.36E-05 & 4.54E-05 & 1.18E-05 \\
 Div & 7.81E-06 & 3.36E-05 & 4.57E-05 & 1.21E-05 \\
 Sqr & 8.57E-05 & 1.68E-05 & 2.00E-05 & 3.21E-06 \\
 Sum & 1.69E-05 & 8.40E-06 & 2.04E-05 & 1.20E-05 \\
 If & 1.67E-05 & 3.36E-05 & 2.02E-05 & -1.34E-05 \\ \hline
\end{tabular}
\caption{Test results from single operations}
\label{table:result single operations}
\end{center}
\end{table}

The above gives us an estimate of how well the GPU performs a given operation compared to the CPU, both with and without the overhead of transferring. Looking only at the slopes, most functions will, given enough data, perform faster on the CPU than on the GPU. However, because of limits on the GPU, it might not always be possible to reach the amount of data needed.

The slope for the Sum operation on the GPU is very close to the slope for the CPU version. This is a general tendency for reduction operations on GPU targets\cite{acceleratorv2-intro}. If we had a slower graphics card or a faster processor, this operation would actually be slower overall, leaving us with no reason at all to transfer such an operation to the GPU.

\begin{figure}[H]
\includegraphics[width=1.0\textwidth]{pics/gt240-mmult.pdf}
\caption{Performance tests of matrix multiplication on the GPU compared to the CPU}
\label{single mmult}
\end{figure}

We found good performance gains for a few single operations. Matrix multiplication was one. It is a more complex function that requires a series of arithmetic operations, and the test results above show that a performance gain is possible on realistic data sizes.

\subsubsection{Complex operations}
In this test case we construct complex operations by building graphs of simple operations. This is done to test how the complexity of an operation affects the time spent on the computation, and to mimic possible spreadsheet formulas, for example where A1 has the formula $=B1+C1$ and B1 and C1 have formulas pointing to other cells. These graphs have been constructed by nesting simple arithmetic operations as shown in fig. \ref{nested operations graph}. Each time the graph grows by one operation, the previously generated graph is used as the left leaf of a new operation, and the right leaf is the same constant FPA as used earlier.
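The construction can be sketched as follows. This is a sketch under assumptions: \keyword{size} and \keyword{depth} are hypothetical parameters, and \keyword{ParallelArrays.Add} stands in for whichever operation the graph nests:

\begin{lstlisting}[caption=Sketch of building a nested addition graph,language=CSharp]
var constant = new FloatParallelArray(2f, new int[] { size, size });
FloatParallelArray graph = constant;
for (int i = 0; i < depth; i++)
{
    // previous graph as left leaf, the same constant FPA as right leaf
    graph = ParallelArrays.Add(graph, constant);
}
float[,] result;
evalTarget.ToArray(graph, out result); // evaluate the whole graph at once
\end{lstlisting}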

\begin{figure}[H]
\begin{center}
\includegraphics[width=0.4\textwidth]{pics/nestedGraph.png}
\caption{Graph of nested addition operations}
\label{nested operations graph}
\end{center}
\end{figure}
Similar graphs for subtraction, multiplication, division, and a graph of mixed operations have been used in the test. The graph of mixed operations alternates between multiplication, addition, and subtraction, starting with multiplication.

\begin{figure}[H]
\includegraphics[width=1.0\textwidth]{pics/gt240-nestedchart.pdf}
\caption{Performance tests on nested mixed operations on the GPU compared to the CPU}
\label{nested operations chart}
\end{figure}

As shown in the chart above (fig. \ref{nested operations chart}), the CPU time for computing the multiplication and division graphs is low compared to that of the GPU. With only 6--10 nested operations it is possible to outperform Accelerator on large data sets. The \keyword{Power} operation performs very well, which is probably because of the use of the constant 2. Multiplication also shows great performance. Based on this we conclude: the more complex an operation is, the more potential performance gain there is in running it on the GPU.

\subsubsection{Value creation}
Values can be created in several different ways in Accelerator. In this section we will compare the performance of the different ways of creating arrays.

\subsubsubsection{Creating arrays}
FloatParallelArrays can be created in two ways in Accelerator:

\begin{lstlisting}[language=CSharp]
public FloatParallelArray(float f, params int[] shape);
public FloatParallelArray(float[,] values);
\end{lstlisting}

This means that if we want to create an array of constant values, we can either fill a two-dimensional array with the same value, or simply use the first constructor and tell Accelerator the dimensions we want. Tests showed that we could get up to a 4 times performance gain by creating arrays with the first method, compared to filling an array in C\# and creating the FPA with the second method.
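The two approaches can be contrasted as follows (a sketch with an assumed $1000 \times 1000$ array of the constant 5):

\begin{lstlisting}[caption=Two ways of creating a constant FPA,language=CSharp]
// Method 1: let Accelerator create the constant array
var fast = new FloatParallelArray(5f, new int[] { 1000, 1000 });

// Method 2: fill a C# array first, then wrap it
var values = new float[1000, 1000];
for (int i = 0; i < 1000; i++)
    for (int j = 0; j < 1000; j++)
        values[i, j] = 5f;
var slow = new FloatParallelArray(values);
\end{lstlisting}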

\subsubsubsection{Binary operations}
Many binary operations are overloaded to allow easier mass operations with the same constant:

\begin{lstlisting}[caption=Add operation,language=CSharp]
public static FloatParallelArray Add(FloatParallelArray a, float f);
public static FloatParallelArray Add(FloatParallelArray a1, FloatParallelArray a2);
\end{lstlisting}

The two methods will give the same result if the array \keyword{a2} is filled with the value of \keyword{f}. We tested the performance of these and found no difference in performance, whether \keyword{a1} or \keyword{a2} was a constant array created with the first method of array creation.
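For example, the two overloads below compute the same element-wise result (a sketch; \keyword{data}, \keyword{rows}, and \keyword{cols} are assumed to be defined elsewhere):

\begin{lstlisting}[caption=Equivalent uses of the Add overloads,language=CSharp]
var a = new FloatParallelArray(data); // data is a float[,]

// Overload 1: add the constant f to every element
FloatParallelArray r1 = ParallelArrays.Add(a, 2f);

// Overload 2: add an array filled with the same constant
var constants = new FloatParallelArray(2f, new int[] { rows, cols });
FloatParallelArray r2 = ParallelArrays.Add(a, constants);
\end{lstlisting}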