\chapter{GPGPU approaches for spreadsheets}
\label{possibilities}
In this chapter we describe different approaches to implementing GPU-based parallelism in spreadsheets, taking our Accelerator test results into consideration.

As described in \ref{se:gpgpu-spreadsheet} we have not been able to find any previous work on using the GPU for optimizing spreadsheets, but various articles describe approaches to parallelism in spreadsheets using multicore CPUs or High-Performance Computing (HPC). In this chapter we present our own analysis based on CoreCalc, but also look into how to adapt previous parallelization strategies to the GPU.

\section{Single normal built-in functions}
CoreCalc has a range of built-in functions like the ones known from Microsoft Excel. Some of these functions, like matrix multiplication, take one or more matrices as input and should be straightforward to implement on the GPU; if the input is large enough or the arithmetic operations complex enough, the test results indicate that a performance gain should be possible. Matrix multiplication in particular is more efficient on the GPU than on the CPU even at relatively low data sizes.
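One way to see why matrix multiplication pays off early is its compute-to-transfer ratio. The following is our own illustrative sketch (not CoreCalc code): multiplying two $n \times n$ matrices requires on the order of $n^3$ arithmetic operations but only on the order of $n^2$ floats of transfer, so the ratio grows linearly with $n$.

```python
# Sketch (our own illustration): arithmetic intensity of n x n
# matrix multiplication, i.e. floating-point operations per float
# transferred to and from the GPU.

def arithmetic_intensity(n):
    """Flops per float moved for an n x n matrix multiplication."""
    flops = 2 * n ** 3          # roughly n^3 multiplications and n^3 additions
    floats_moved = 3 * n ** 2   # two input matrices plus the result
    return flops / floats_moved

for n in (16, 256, 1024):
    print(n, arithmetic_intensity(n))
```

Since the ratio increases with $n$, the fixed transfer cost is amortized even at fairly modest matrix sizes, which matches the test results.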

Simple functions that do not work on matrices, such as \keyword{SQRT}, \keyword{SIN}, \keyword{Addition}, \keyword{Subtraction}, \keyword{Division}, and \keyword{Multiplication}, are also simple to implement for the GPU, but a performance boost is not expected given the very small input size of 1-2 arguments and the low arithmetic complexity. They are, however, needed in order to use sheet defined functions.

\section{Sheet defined functions}
As described in the introduction, sheet defined functions are functions defined within a spreadsheet using cells.

Due to the high transfer latency for the GPU, the arithmetic complexity of an operation is important in order to benefit from the GPU. This is shown in chapter \ref{non-spreadsheet-tests}, where we test different single operations and nested operations similar to those produced by sheet defined functions. Even though sheet defined functions are more complex, the input data is not necessarily large and could typically be 1-3 arguments, which makes using the GPU questionable. On top of that, one has to transfer all constants to the GPU as well, meaning every time you write $=C1^2$ or $=C1*10$, the constant $2$ or $10$ will have to be transferred as a texture to the GPU.

The potential performance gain will, however, increase when sheet defined functions are used in higher order functions such as tabulate, where the same function is applied to a range of input data.

\section{Higher order Map function}
As we concluded in chapter \ref{non-spreadsheet-tests}, a rather large amount of data and a complex operation are needed for the GPU to be able to optimize the operation. Therefore a quite complex sheet defined function is needed for the GPU to be able to optimize the evaluation of a single function call.
However, the same function is often used more than once, and in simulations it is not uncommon for the same function to be used 1000+ times. If all of these calls could be combined into one single call, sending all the input data to the GPU and processing it using the same operation, we expect increased performance.

CoreCalc includes higher order functions such as \keyword{Map}, \keyword{RowMap}, \keyword{ColMap}, and \keyword{Tabulate}, which can all be classified as \keyword{embarrassingly parallel problems} since there are no dependencies between the individual operations; this also makes them suitable for the GPU. Depending on the complexity of the function and the number of times it is used, it should be possible to obtain a reasonable performance gain.
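The batching idea can be sketched as follows. This is a hypothetical illustration, not CoreCalc's actual API: instead of invoking the sheet defined function once per cell, all inputs are collected and processed in a single conceptual call, which on a real GPU backend would correspond to one kernel launch.

```python
# Sketch (hypothetical, not CoreCalc code): batching a higher-order
# Map. Because the applications are independent (embarrassingly
# parallel), they can all be shipped to the device in one call.

def map_batched(sdf, inputs):
    """Apply the same sheet defined function to every input in one batch.
    A GPU backend would run this as a single data-parallel launch;
    here it is modelled with a plain comprehension."""
    return [sdf(x) for x in inputs]

# One "call" covering 1000 independent applications of the same SDF:
results = map_batched(lambda x: x * x + 1, range(1000))
```

The key point is that the per-call transfer latency is paid once for the whole batch rather than once per cell.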

\section{When to use the GPU?} 
As already mentioned several times, it is not always a good idea to send the computation to the GPU. In order to estimate which platform is best suited for a specific computation, we need to estimate the execution time of an operation on both platforms at evaluation time.

Both Hamann\cite{hamann} and Wack\cite{wack} work on partitioning the dependency graph of a spreadsheet to limit the parallel execution to where there is a potential performance gain. They both use weighted cells (nodes) in the graph and decide based on the total weight of a partition. As Wack's theory is about distributing the workload to workstations on a network, his model takes network latency, speed, distance, and other factors into account. Hamann uses multiple cores on one CPU and simplifies the weighting to simple numbers. Many of the same principles apply when deciding whether to evaluate an SDF on the CPU or the GPU. Loosely based on their approaches, we will first create a simplified model that only applies to SDFs. We use knowledge about the hardware, measured time, input data, and an estimated execution time per operation.
 
To estimate the execution time, three major approaches are used: experimental (testing and measuring), probabilistic measurement (based on measurements of small parts), and static analysis, which uses constructed models of processor instructions and timings to predict the result.

Execution time estimation of normal programs is non-trivial due to loops and recursive calls that might depend on values that are not known before runtime, but as we do not allow loops and recursive calls in spreadsheets, this simplifies the estimation drastically. Another approach would be to simply run the operation on both platforms the first time it is invoked and remember which performed best. However, as input parameters might change between calls, and because formulas are easily and often changed in spreadsheets, this approach would not only take more time but would also often be wrong.

For estimation on the CPU we use a simplified model that does not take the architecture, cache or any details of the CPU into account. 
\begin{figure}[H]
\begin{center}
\begin{displaymath}
\frac{m*c}{w}
\end{displaymath}
\end{center}
\textbf{Given:}\\
$m$: Number of operations\\
$c$: Computation time of operation\\
$w$: Number of cores in the CPU\\
\caption{Formula for estimated computation time using the CPU}
\label{WECT formula}
\end{figure}

When using this simple model to estimate the execution time of an SDF at evaluation time, $w$ and $m$ are known, but $c$ is unknown, as the SDF can contain many operations and conditions. $c$ can, however, be estimated using the static analysis described later.
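As a sanity check, the CPU model is a direct one-liner. The sketch below is our own transcription of the formula in figure \ref{WECT formula}; the numbers in the usage line are arbitrary examples.

```python
# Sketch (our own transcription) of the CPU cost model m * c / w.

def cpu_estimate(m, c, w):
    """Estimated CPU time: m operations, each of cost c, spread over w cores."""
    return m * c / w

# Example: 1000 operations of cost 2.0 on a 4-core CPU.
print(cpu_estimate(1000, 2.0, 4))
```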

When using the GPU we have to expand our model to include latency and transfer time:

\begin{figure}[H]
\begin{center}
\begin{displaymath}
k_0 
+ \frac{m*c}{w}
+c*k_1
+m*k_2
+r*k_2
\end{displaymath}
\end{center}
\textbf{Given:}\\
$k_0$: Initial latency of transferring to the GPU\\
$k_1$: Time to transfer one operation\\
$k_2$: Time to transfer one float\\
$m$: Number of operations\\
$c$: Computation time of one operation\\
$w$: Number of cores in the GPU\\
$r$: The result size of the operation\\
\caption{Formula for estimated computation time using the GPU}
\label{WECT formula, GPU}
\end{figure}

This formula can be partitioned into $\frac{m*c}{w}$ being the time to compute the operations on the GPU, $c*k_1+m*k_2$ being the time to transfer the needed data to the GPU, and $r*k_2$ being the time to transfer the computed result back.

$m$ and $r$ are known at evaluation time of a spreadsheet function. $w$ can be found in the graphics card's specifications, and $k_0$, $k_1$, and $k_2$ can easily be measured. However, $c$ has to be estimated, as on the CPU.
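Putting the two models side by side gives a simple platform chooser. The sketch below is our own transcription of figure \ref{WECT formula, GPU}; the constant values used in the example are hypothetical placeholders, not measured results.

```python
# Sketch (our own transcription) of the GPU cost model and a
# CPU-vs-GPU platform chooser. k0, k1, k2 are measured constants;
# the values used below are hypothetical placeholders.

def gpu_estimate(m, c, w, r, k0, k1, k2):
    """k0 latency + compute + operation transfer + data and result transfer."""
    return k0 + (m * c) / w + c * k1 + m * k2 + r * k2

def choose_platform(m, c, r, cpu_cores, gpu_cores, k0, k1, k2):
    """Pick the platform with the lower estimated execution time."""
    cpu_time = m * c / cpu_cores
    gpu_time = gpu_estimate(m, c, gpu_cores, r, k0, k1, k2)
    return "GPU" if gpu_time < cpu_time else "CPU"

# Many operations amortize the latency k0; a single cheap operation does not.
print(choose_platform(1_000_000, 1.0, 1_000_000, 1, 128, 5.0, 0.0, 1e-4))
print(choose_platform(1, 1.0, 1, 1, 128, 5.0, 0.0, 1e-4))
```

Note how $k_0$ dominates for small $m$, which is exactly why single built-in functions are not expected to benefit from the GPU.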

\subsection{Estimating execution time}
Estimating the execution time ($c$) can be done by assigning a weight to each type of operation, running through all operations to be processed, and adding these weights together. For a conditional statement, one estimates both the true branch and the false branch: the worst estimate results in the worst case execution time (WCET) and the best estimate in the best case execution time (BCET). We will focus on finding the WCET of a sheet defined function, both for the GPU and the CPU.
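The weighting scheme can be sketched as a walk over the operation tree of an SDF. This is our own illustration: the expression encoding and the per-operation weights below are hypothetical, not CoreCalc's internal representation or our measured values.

```python
# Sketch (our own illustration) of the weighting-based WCET estimate:
# sum per-operation weights over the operation tree of an SDF, taking
# the heavier branch of every conditional.

WEIGHTS = {"add": 1.0, "mul": 1.0, "sqrt": 4.0}  # hypothetical per-op costs

def wcet(expr):
    """expr is ('op', child, ...), ('if', cond, then, else), or a constant."""
    if not isinstance(expr, tuple):
        return 0.0                      # constants and cell references cost nothing
    op, *args = expr
    if op == "if":
        cond, then_branch, else_branch = args
        # Worst case: the condition plus the more expensive branch.
        return wcet(cond) + max(wcet(then_branch), wcet(else_branch))
    return WEIGHTS[op] + sum(wcet(a) for a in args)

# =IF(A1, SQRT(A1), A1+1): the WCET counts the SQRT branch.
example = ("if", 1, ("sqrt", 1), ("add", 1, 1))
print(wcet(example))
```

Replacing `max` with `min` in the conditional case would give the BCET instead.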

First we need to assign a weight to each type of operation, for both the GPU and the CPU. As we have benchmarked the different operations, we can derive these weights from the test results. For the CPU this is simply done by using the time of the add operation in the tests, but on the GPU we have to subtract the latency and the transfer time to and from the GPU.

For the GPU we also have to find $k_0$, $k_1$, and $k_2$. However, we have not distinguished between $k_1$ and $k_2$ in our analysis. Taking this into account, we assume that $k_2$ includes the time of transferring the operations. Therefore we simplify $c*k_1+m*k_2$ to $c*0+m*k_2$ and end up with only $m*k_2$, leaving out the transfer time of operations. This leaves only the variables known at evaluation time and allows us to estimate the execution time of operations on the GPU. Now we can simply use these two models and a static analysis of the SDF to determine which platform to target. On our test setup we only have a single core in the CPU, but the model also takes multicore systems into account to some extent. One factor that is not taken into account is the maximum texture size and memory of the GPU; exceeding these limits will force us to split the Accelerator call into two or more calls.

Due to the scope of this project, we have not looked further into this.
