\section{Previous work on parallelism and GPGPU in spreadsheets}
\label{se:gpgpu-spreadsheet}
We have not been able to find any research or products that explore using the GPU to accelerate heavy spreadsheets; the topic has only been mentioned a number of times in online forums and interviews. When it comes to parallelism and spreadsheets in general, a limited amount of research and products exists.

Andrew P. Wack's PhD dissertation from 1996, \textit{"Partitioning dependency graphs for concurrent execution: A parallel spreadsheet on a realistically modeled message passing environment"}\cite{wack}, describes how to partition the spreadsheet dependency graph in order to split the computation across multiple computers on a network.
Jens Hamann's Master's thesis from 2010, \textit{"Parallelization of spreadsheet computations"}\cite{hamann}, investigates the possibilities for evaluating spreadsheet calculations in parallel and adapts some of Wack's theories to modern multicore CPUs using CoreCalc.

%Parallelism in spreadsheets
Microsoft Excel 2007 has an option for enabling a multithreaded calculation engine, which is described as being able to partition groups of cells that can be computed in parallel. There is no technical documentation of the design of Excel's multithreaded calculation engine or of how it partitions the dependency graph.

Hamann mentions the possibility of using the parallel power of the GPU in spreadsheets. He further mentions Microsoft Accelerator as a library that would enable an elegant implementation of this.



%What are people talking about: GPGPU + spreadsheets

As a consequence of there being very little information on GPGPU and spreadsheets, we have broadened our search to include internet forums. In an interview about GPGPU with Ian Buck from NVIDIA on www.tomshardware.com\cite{toms}, he says: \textit{"We think your spreadsheet might already be fast enough. While video processing was an obvious application to accelerate (...)"}. This quote is questioned several times in the comments. Dr. Drey writes:

\textit{"NVIDIA, saying that "spreadsheet is already fast enough" may be misleading. Business users have the money. Spreadsheets are already installed (huge existing user base). Many financial spreadsheets are very complicated 24 layers, 4,000 lines, with built in Monte Carlo simulations. Making all these users instantly benefit from faster computing may be the road for success for NVIDIA."} Other comments support this idea.

On the NVIDIA CUDA forum, in the topic "CUDA, NVIDIA GPUs and Microsoft Excel"\cite{nvidia-forum}, different approaches to and reasons for using CUDA in spreadsheets are discussed. A specific computationally heavy sheet is examined, and it is concluded that it is not the kind of job for CUDA or any other GPGPU approach. This conclusion is based on the fact that GPUs follow an SPMD (single program, multiple data) model, which means they work best when a single kernel (operation) is applied to a big data set. The discussion continues with a hypothetical CUDA-accelerated spreadsheet.

Such a spreadsheet would have to be sliceable into series of cells that run the exact same formula on a big data set, such as a whole column. That formula could then be compiled into a kernel and the data uploaded as a stream. It is stated in the forum thread that the sheet would have to be outrageously big (hundreds of thousands of rows) for a simple function to be worth optimizing on the GPU. This statement may have been correct at the time of writing (2008), owing to the high transfer latency between the CPU and the GPU, and to GPUs at the time being neither as fast nor as well suited for GPGPU as modern GPUs. Moreover, with complex Monte Carlo simulations, thousands of rows is not unrealistic.
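The kernel idea from the thread can be sketched as follows. This is our own minimal CUDA illustration, assuming a column formula of the form $c_i = a_i \cdot b_i + 1$; all function and variable names are hypothetical and not taken from the forum thread.

```cuda
// Hypothetical sketch: a column formula such as C2 = A2 * B2 + 1,
// compiled into a CUDA kernel and applied to whole columns at once.
// Each thread computes one row, which is the SPMD model discussed above.
__global__ void columnFormula(const float *a, const float *b,
                              float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                     // guard against surplus threads
        c[i] = a[i] * b[i] + 1.0f;
}

// Host side: upload the two input columns, run one thread per row,
// and download the result column.
void evaluateColumn(const float *a, const float *b, float *c, int n)
{
    float *dA, *dB, *dC;
    size_t bytes = n * sizeof(float);
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, b, bytes, cudaMemcpyHostToDevice);
    columnFormula<<<(n + 255) / 256, 256>>>(dA, dB, dC, n);
    cudaMemcpy(c, dC, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dA);
    cudaFree(dB);
    cudaFree(dC);
}
```

The two host-to-device copies and the final device-to-host copy are precisely the transfer overhead the forum thread worries about: for small columns this latency easily dominates the computation itself, which is why the thread concludes that only very large sheets would benefit.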

Challenges such as floating-point accuracy and rounding problems are also mentioned. In the same forum thread, Hyun-Gon Ryu from Yonsei University discusses different approaches for integrating CUDA with Microsoft Excel using VBA; that, however, is not within the scope of this project.


