\section{Parallel computing and GPGPU}
%Parallel programming in general

Parallel computing has been around since the 1950s and has mainly been used in high-performance computing (computer clusters) for research; the first dual-core processors reached the public in 2005.
The growth in public interest is due to the physical constraints that prevent further increases in the number of operations a CPU can perform per second (frequency scaling). Instead of scaling the frequency, manufacturers now add more cores to CPUs, and according to Intel\cite{intel-roadmap}, an 8-core Xeon CPU is announced for 2010.

%GPGPU in general
However, multicore CPUs are not the only modern method of parallelism in computers. General Purpose computation on Graphics Processing Units (GPGPU) means using the GPU, which is designed for computing graphics, for purposes that are normally handled by the CPU. Graphics cards were originally designed to be parallel in order to process each vertex in a 3D model independently. The term GPGPU was first coined in the 2002 paper \textit{"Physically-Based Visual Simulation on Graphics Hardware"}\cite{harris}; however, research in using GPUs for general-purpose computation predates the term. In 1999 the PixelFlow SIMD graphics computer was used to crack UNIX password ciphers\cite{kedem} using a brute-force attack.

Graphics card manufacturers have lately added more precise arithmetic to the GPU, making it more suitable for non-graphics workloads such as scientific computing\cite{fermi}. In 2006 both NVIDIA's CUDA SDK and ATI's CTM SDK were made public, thereby making GPGPU possible without detailed expert knowledge of the graphics API.
 
Because GPUs are designed for graphics processing, they are very limited in terms of programming possibilities and are only efficient when the problem can be computed using stream processing. Each core can only process a single vertex at a time, but many vertices are processed in parallel. Modern GPUs typically have more than 100 cores (processors), and the new Fermi model from NVIDIA introduces 512 cores\cite{fermi}. This makes GPUs ideal for operations that should be applied to every value of a large dataset.

When writing parallel programs for a 4-core CPU, the theoretical maximum performance gain is 400\% (minus some overhead). As GPUs work very differently from normal CPUs and have far more cores, the theoretical maximum performance gain is far greater.

\section{Parallel Programming Methods}
There are two main parallel programming methods: task-parallelism, where each task can be separated and executed on a different processor while still communicating with other tasks, and data-parallelism, where the data for one task can be partitioned and the parts processed individually.

GPUs do not fully support synchronization between processors, so classical parallel programming methods cannot be used: a thread cannot spawn a new thread or send results to other threads. This leaves data-parallelism.

\begin{lstlisting}[caption=An embarrassingly parallel problem, language=CSharp]
for(i = 0; i < N; i++)
{
	r[i] = a[i] + b[i];
}
\end{lstlisting}

Given the above C\# code, the array \keyword{r} will sequentially be filled with the result of the add-operation.

This can be optimised using data-parallelism. For this specific problem the arrays \keyword{a} and \keyword{b} can be divided into smaller chunks, and given several processors, the chunks can be calculated independently and the results later collected. This example can be generalised: the same algorithm works when $a[i] + b[i]$ is substituted with any side-effect-free function. Problems such as this, which can easily be executed in parallel, are classified as embarrassingly parallel problems.
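On the CPU side, this chunking can be expressed directly with the Task Parallel Library. The following is a minimal sketch, assuming .NET 4's \keyword{System.Threading.Tasks} namespace is available; the runtime partitions the index range into chunks for the available cores.

\begin{lstlisting}[caption=A data-parallel version of the addition loop (sketch), language=CSharp]
using System.Threading.Tasks;

// Each chunk of the index range is processed on a separate core;
// the iterations are independent, so no synchronisation is needed.
Parallel.For(0, N, i =>
{
    r[i] = a[i] + b[i];
});
\end{lstlisting}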

Graphics cards solve many data-parallel problems when doing graphical computations, as in games. Modern GPUs typically have over 100 cores\cite{fermi}, and a typical graphics card for games can well have 200--300 cores.

\subsection{Stream processing}
Stream processing is a programming paradigm that uses data-parallelism. In stream processing an algorithm is constructed by defining a kernel, a function that processes an element of data and returns an output, which is then applied to a set of data called a stream. It works well for processing images and videos, but it is not designed for general-purpose processing involving random data access, control flow or database lookups. GPUs, however, are designed to be efficient at stream processing.

Stream processing is designed for applications that have a high number of arithmetic operations per I/O and where every element of the stream should have the same function applied to it. This means that an optimal application for the GPU should have a large data set, a high possibility of parallelism, a kernel of high arithmetic complexity to be applied to every element, and minimal dependency between operations. 
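As an illustration (the kernel below is a hypothetical example, not taken from any particular API), a kernel is simply a pure function applied element-wise to a stream:

\begin{lstlisting}[caption=A kernel applied to every element of a stream (sketch), language=CSharp]
// Hypothetical kernel: purely arithmetic, no side effects,
// no dependency on other elements of the stream.
float Kernel(float x)
{
    return (float)Math.Sqrt(x * x + 1.0f);
}

// Applying the kernel to a stream; on a GPU each element
// would be handled by a separate core.
for (i = 0; i < N; i++)
{
    output[i] = Kernel(input[i]);
}
\end{lstlisting}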

%Flow Control
In sequential programming for the CPU it is common to control the flow of the program using loops or conditions such as if/then/else. Such flow control has until recently not been possible on the GPU and is still quite limited. Some recent GPUs allow branching (if/then/else), but not without a performance loss\cite{gpu-gems}.
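A common workaround is to replace a branch with arithmetic selection: both sides of the condition are evaluated and the results are blended. The sketch below uses hypothetical functions \keyword{f} and \keyword{g} to illustrate the idea.

\begin{lstlisting}[caption=Replacing a branch with arithmetic selection (sketch), language=CSharp]
// With branching (can cause a performance loss on the GPU):
r[i] = (a[i] > 0.0f) ? f(a[i]) : g(a[i]);

// Branch-free equivalent: both sides are computed and blended.
// On the GPU the mask corresponds to a step function.
float mask = (a[i] > 0.0f) ? 1.0f : 0.0f;
r[i] = mask * f(a[i]) + (1.0f - mask) * g(a[i]);
\end{lstlisting}

This trades extra arithmetic for uniform control flow, which fits the stream processing model described above.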


\subsection{Programming for the GPU}
Some low level APIs allow for general purpose GPU programming. This section will give a short overview of the different possibilities.

\textbf{CUDA}\\
Compute Unified Device Architecture (CUDA) is NVIDIA's architecture for communicating with the GPU using standard programming languages, in this case C, though wrappers for other languages exist. It shares a large part of its interface with both OpenCL and DirectCompute, but is only available for NVIDIA hardware.

\textbf{OpenCL }\\
The Open Computing Language (OpenCL) is an open standard in the spirit of OpenGL and OpenAL (3D graphics and audio) for writing data-based and task-based parallel applications. It shares a range of interfaces with CUDA and DirectCompute, but is managed by the non-profit technology consortium Khronos Group. OpenCL is not bound to specific hardware, and AMD has decided to support OpenCL instead of its now deprecated Close to Metal API (AMD's alternative to CUDA).

\textbf{DirectCompute}\\
As part of the DirectX framework, DirectCompute is a low-level API for programming the GPU. Naturally it too shares a range of interfaces with OpenCL and CUDA.

\textbf{Microsoft Accelerator}\\
For all of the above frameworks, higher-level wrappers exist, so one can program in Python, .NET, Java, or any other popular mainstream language. Other, more specific frameworks that avoid GPU-specific programming are being developed.

Microsoft Accelerator \cite{acceleratorv2-intro} is a high level framework that allows data-based parallelism on the GPU through DirectCompute. 

It was first introduced in the 2005 technical report \textit{"Accelerator: Using Data Parallelism to Program GPUs for General-Purpose Uses"}\cite{accelerator} by David Tarditi, Sidd Puri, and Jose Oglesby. Their problem statement is that GPUs are difficult to program for general-purpose uses: programmers must either convert their programs to use graphics pipeline operations or use APIs for stream processing. The result of their project is Microsoft Accelerator, a library that uses data-parallelism to program GPUs. The idea is that programmers can use a normal imperative language together with a high-level API for data-parallel operations without worrying about the GPU. The Microsoft Accelerator library compiles the high-level data-parallel operations to optimised pixel shaders on the fly. Their benchmarks show that the speed of some compiled operations is comparable to hand-written pixel shaders, with performance typically within 50\% of the hand-written shader.
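To give an impression of the programming model, the following is a sketch of how the element-wise addition from earlier might look with Accelerator's C\# API. The names (\keyword{FloatParallelArray}, \keyword{ParallelArrays.Add}, \keyword{DX9Target}, \keyword{ToArray1D}) are from the Accelerator v2 library as we understand it; treat the details as illustrative rather than definitive.

\begin{lstlisting}[caption=Element-wise addition with Microsoft Accelerator (sketch), language=CSharp]
using Microsoft.ParallelArrays;

float[] aData = { 1.0f, 2.0f, 3.0f };
float[] bData = { 4.0f, 5.0f, 6.0f };

// Wrapping the arrays builds data-parallel values; the addition
// only constructs an expression graph, nothing is executed yet.
var a = new FloatParallelArray(aData);
var b = new FloatParallelArray(bData);
FloatParallelArray sum = ParallelArrays.Add(a, b);

// The target compiles the graph to a shader, runs it on the GPU
// and transfers the result back to an ordinary array.
var target = new DX9Target();
float[] result = target.ToArray1D(sum);
\end{lstlisting}

Note that the data transfer to and from the GPU happens inside \keyword{ToArray1D}, which ties in with the bottlenecks discussed below.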

\subsection{Bottlenecks}
When programming for the GPU, the data has to be transferred to the GPU before the result can be computed, and the result has to be transferred back as well. This creates a latency between the CPU and the GPU. The bandwidth and memory limitations of the GPU also differ from those of the CPU. Later in the project we will identify these bottlenecks.

\subsection{Problems}
Today integer and double-precision operations are only supported on the newest NVIDIA Fermi cards using CUDA, but according to NVIDIA\cite{patterson} this limitation will be removed in the near future. The same goes for floating-point precision, which does not yet fully match the IEEE 754 floating-point standard\cite{harrisIEEE}.