In the field of high performance computing, powerful computers are combined with engineering ingenuity to solve computationally hard problems such as weather forecasting, fluid simulation and seismic processing for oil exploration. Computational performance, i.e. how many floating-point operations can be performed per second (FLOP/s), can be used to estimate how quickly such problems can be solved. An intuitive way to achieve higher throughput is to perform calculations in parallel where possible \cite{wilkinson2005}, as exemplified by supercomputers, in which thousands of computers are merged into a single larger, and often expensive, machine. But parallelization is also desirable for desktop computers, due to limits on the clock frequency at which a single core can operate and on the amount of power it can consume.

A recent alternative to supercomputers and compute clusters is general purpose computation on graphics processing units (GPGPU) \cite{owens2008, stantchev2009}. This architecture is cheap, offers a low watt-per-FLOP/s ratio and currently provides parallelism of up to hundreds of compute cores on a single commodity graphics card. With the OpenCL standard \cite{openclspec}, it is also possible to target not only GPUs but any parallel accelerator, such as a multicore CPU, the Cell BE processor developed by IBM, Sony and Toshiba, or a digital signal processor (DSP).

In this chapter, a brief history and the current state of GPGPU are presented, followed by an introduction to OpenCL, a platform-independent standard for parallel computation. The chapter motivates performing computations on GPUs, and explains how OpenCL provides a standard interface and programming model for GPGPU.
%OpenCL is used in this thesis, so a good understanding of how it is built up and used is crucial.