\chapter{Performance test of prototype}
The implemented prototype shows performance gains in some areas and performance losses in others. We document the test results and look into possible conclusions.

\section{How the tests were executed}
Each of our benchmarks runs 100 recalculations of a workbook and reports the average time.

The workbooks use the \keyword{tabulate(Function, Number, Number)} function, and each benchmark is executed on a range of linearly or quadratically growing data sizes.
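The timing scheme can be sketched as follows. This is an illustrative Python stand-in (our prototype harness is not written in Python), where \keyword{recalculate} is a hypothetical placeholder for one full workbook recalculation:

```python
import time

def benchmark(recalculate, runs=100):
    """Run the given recalculation `runs` times and return the
    average wall-clock time per run, in seconds."""
    start = time.perf_counter()
    for _ in range(runs):
        recalculate()
    return (time.perf_counter() - start) / runs

# Stand-in workload: summing a range of "cells".
avg = benchmark(lambda: sum(x * x for x in range(10_000)))
```

Averaging over many runs smooths out JIT warm-up and scheduling noise, which would otherwise dominate single measurements at small data sizes.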

\subsection{Floating point precision and performance}
As noted previously, Accelerator uses single precision floating point numbers, and CoreCalc uses double precision.

On modern CPUs there is no performance difference between operating on floats and doubles, except for division operations. Most current GPUs only support single precision floating point numbers. NVIDIA earlier implemented double precision on GPUs with the NVIDIA G80, which worked at $1/10$th the speed of single precision operations, and the new Fermi will support double precision at half the speed of single precision~\cite{patterson}.

As this will change drastically in the near future, we have decided not to look at how float-to-double casting affects our test results.

\subsection{Hardware}
All tests have been run on the same hardware setup as our earlier tests, using the NVIDIA GT240 graphics card.

\section{Results}
We have seen performance gains in our tests and simulations, which shows that spreadsheet calculations can be optimised using the GPU if relatively large data sizes are provided. In this section we go through the factors we believe to be important when doing such an implementation, and look at how much data is needed.

The results of this benchmark can be found in appendix \ref{appendix}.

\subsection{Built-in functions}
For the built-in functions we found that, given a sufficiently large input array and a sufficiently complex function, performing the calculations on the GPU impacts performance positively. However, as shown in our analysis of Accelerator, very few operations are sufficiently complex or take enough arguments to actually have a positive impact.

Our tests showed that arrays of $96^2$ elements were needed for a matrix multiplication to show a performance gain. If this is a common scenario for calculations in spreadsheets, it would make sense to spend more time on this kind of operation. However, it seems very tedious to work with this many cells in a spreadsheet.

It should be noted that the upper limit for the calculations is also relatively close to the lower bound below which the CPU is faster. In our example it will be possible to optimise in the range $[96^2; 240^2]$, after which the values have to be split into two arrays, and so forth.
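To make the bounds concrete, the following sketch shows how a scheduler might use them. \keyword{plan\_gpu\_batches} is a hypothetical helper, not part of our prototype; the $96^2$ and $240^2$ bounds are simply the ones measured for the matrix-multiplication test above:

```python
def plan_gpu_batches(n, lower=96**2, upper=240**2):
    """Plan an n-element computation: below `lower` the CPU is faster,
    so return no GPU batches; above `upper` the data must be split
    into several arrays that each fit the viable GPU range."""
    if n < lower:
        return []                      # stay on the CPU
    batches, remaining = [], n
    while remaining > 0:
        batches.append(min(remaining, upper))
        remaining -= batches[-1]
    return batches

plan_gpu_batches(5_000)     # too small: handled on the CPU
plan_gpu_batches(100_000)   # above 240^2 = 57600: split into two arrays
```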

\subsection{Sheet defined functions}
We tested sheet defined functions in several scenarios, building both more and less complicated SDFs. We converted real-life examples of Monte Carlo simulations from Excel into sheet defined functions and ran them on CoreCalc. Performance gains were found by using the GPU in this way. However, we found several factors that influence the performance of this implementation:

\subsubsection{Aggregating is slow on the GPU}
In Monte Carlo simulations, aggregating functions are often used. Aggregating values is slower on the GPU than on the CPU~\cite{acceleratorv2-intro}, and aggregation should therefore be used with great caution. For many simulations it might not make sense to create and calculate the sampling data on the GPU, transfer it back to the CPU, and do the aggregation there. This is also described in chapter~\ref{non-spreadsheet-tests}.
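For intuition, aggregation on a GPU is typically expressed as a tree (log-depth) reduction. The sketch below shows that access pattern in plain serial Python; it only illustrates the structure of the reduction, not its GPU performance:

```python
def tree_reduce(values, op):
    """Combine elements pairwise, halving the array each step, the way
    a parallel GPU reduction would: n elements take about log2(n)
    dependent steps, each synchronising through shared memory on a GPU."""
    values = list(values)
    while len(values) > 1:
        paired = [op(values[i], values[i + 1])
                  for i in range(0, len(values) - 1, 2)]
        if len(values) % 2:            # odd length: carry the last element
            paired.append(values[-1])
        values = paired
    return values[0]

tree_reduce(range(1, 9), lambda a, b: a + b)   # same result as sum(range(1, 9))
```

The repeated synchronisation between steps is what makes such reductions comparatively expensive on current GPUs.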

With the NVIDIA Fermi, one would imagine that the performance of aggregating functions will improve. NVIDIA Fermi promises more shared memory and much faster atomic operations on shared memory, which gives a better foundation for reduction operations and thereby aggregating functions~\cite{fermiwhite}.

\subsubsection{Random data needed to be transferred}
When doing Monte Carlo simulations, we transfer random data from the CPU to the GPU. This increases the total time spent because of the larger transfer time. It would probably improve the performance of Monte Carlo simulations if the random numbers were simply generated on the GPU instead of being generated on the CPU and then transferred.

It is possible to create a pseudo-random number generator on the GPU, and future releases of Accelerator 2.0 are expected to support this~\cite{acceleratorv2-intro}.
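Accelerator's eventual API for this is not public, but the idea can be illustrated: a GPU-friendly generator computes element $i$ purely from a seed and $i$, so every thread can produce its own value with no shared state and no CPU-to-GPU transfer. Below is a minimal xorshift-based sketch in Python; the mixing constants are illustrative, not taken from any NVIDIA or Accelerator source:

```python
def xorshift32(x):
    """One 32-bit xorshift round -- a tiny stateless mixing step."""
    x ^= (x << 13) & 0xFFFFFFFF
    x ^= x >> 17
    x ^= (x << 5) & 0xFFFFFFFF
    return x & 0xFFFFFFFF

def gpu_style_randoms(n, seed=42):
    """Return n floats in [0, 1) where element i depends only on
    (seed, i) -- the per-thread pattern a GPU generator would use."""
    return [xorshift32((seed * 2654435761 + i * 40503) & 0xFFFFFFFF) / 2**32
            for i in range(n)]
```

Because each element is independent of the others, the whole array can be produced in one data-parallel pass directly in GPU memory.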

\subsubsection{Reducing the number of constants}
In our implementation we worked on reducing the amount of data that needs to be transferred to the GPU. Looking at the derived slopes of the benchmarking results for the different Heron implementations, we see that the intersection between the computation-time functions of the GPU and the CPU lies at smaller data sizes the less data needs to be transferred.
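The underlying model: if the CPU time grows as $T_{cpu}(n) = a_c + b_c n$ and the GPU time as $T_{gpu}(n) = a_g + b_g n$, where $a_g$ is dominated by transfer and setup cost, the break-even size is $n^* = (a_g - a_c)/(b_c - b_g)$, so shrinking the transferred data shrinks $n^*$. A small sketch with illustrative numbers (not our measured slopes):

```python
def crossover(cpu, gpu):
    """Given linear (intercept, slope) cost models, return the data
    size where the GPU model becomes faster than the CPU model."""
    (a_c, b_c), (a_g, b_g) = cpu, gpu
    assert b_c > b_g, "GPU must scale better for a crossover to exist"
    return (a_g - a_c) / (b_c - b_g)

# Halving the fixed transfer overhead roughly halves the break-even size.
n1 = crossover(cpu=(0.1, 0.020), gpu=(8.0, 0.004))
n2 = crossover(cpu=(0.1, 0.020), gpu=(4.0, 0.004))
```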