Open Computing Language (OpenCL) is an open, royalty-free standard for general-purpose parallel programming \cite{openclspec}, designed to be independent of platform and vendor, whether the underlying hardware is a CPU, a GPU or another type of processor. The standard consists of an architecture, an API and a programming language. The architecture is a model of the environment in which the computations are performed, including computational devices such as GPUs and a host system, such as an x86 platform, that holds the devices. The API is an interface for the host system to build, launch and coordinate parallel computations on the devices. The programming language is a version of the C programming language intended for writing the programs that are executed in parallel on the devices.

A readily available implementation of OpenCL is provided by NVIDIA on their CUDA architecture \cite{cudaprogguide}; over one hundred million GPUs capable of executing CUDA programs have been sold. AMD also supports OpenCL as part of its ATI Stream technology \cite{streamreleasenotes}.

\subsection{OpenCL Architecture}

	The OpenCL architecture defines models for the host system and its computation devices, how the parallel computations are enqueued and executed, and the memory layout on the devices.

	\subsubsection{Platform Model}
	
		Figure \ref{fig:opencl_platform} shows the OpenCL platform model. The \emph{host} system is connected to one or more \emph{devices}. Each device consists of one or more \emph{compute units}, which in turn contain a number of \emph{processing elements} that perform the actual computations. An example of a host system is an x86 desktop computer. Typical devices include GPUs, digital signal processors (DSPs), Cell processors and even multicore CPUs.
	
		\begin{figure}[h]
		\centering
		\includegraphics[height=0.25\textheight]{graphics/opencl_platform.png}
		\caption{OpenCL platform model from \cite{openclspec}}
		\label{fig:opencl_platform}
		\end{figure}
	
	\subsubsection{Execution Model}
		
		Figure \ref{fig:opencl_execution} shows the OpenCL execution model. The model is based on the single-instruction, multiple-data (SIMD) paradigm, where the same operations are performed on different pieces of data. In OpenCL, a \emph{work-item} is a thread that executes a \emph{kernel}. The kernel is written in the OpenCL C programming language, which is described later. Work-items are organized in one-, two- or three-dimensional \emph{work-groups}, and the work-groups make up an \emph{NDRange}, an index space with the same number of dimensions as the work-groups. Each work-item has a unique index in this space.
		
		\begin{figure}[h]
		\centering
		\includegraphics[height=0.3\textheight]{graphics/opencl_execution.png}
		\caption{OpenCL execution model from \cite{openclspec}}
		\label{fig:opencl_execution}
		\end{figure}
		
		The execution of parallel kernels, memory transfers and synchronization in OpenCL is organized through \emph{command queues}. Such tasks are \emph{commands}, and they are inserted into command queues to be performed on or with a device. The order of execution can be either synchronous (\emph{in-order}) or asynchronous (\emph{out-of-order}). When in-order, the commands are launched and completed in the order they appear in the queue. When out-of-order, the commands are launched in order but may not complete in order. The execution environment is defined by a \emph{context}, which holds all the objects used during execution, such as devices, memory and command queues. Figure \ref{fig:command_queues} shows a context containing command queues that are mapped to devices.
		
		\begin{figure}[h]
		\centering
		\includegraphics[height=0.25\textheight]{graphics/command_queues.png}
		\caption{Command queues and devices}
		\label{fig:command_queues}
		\end{figure}
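		As an illustration, a minimal host-side setup of a context and an in-order command queue could look like the following sketch (C with the OpenCL headers; error handling is omitted, the variable names are placeholders, and running it requires an OpenCL runtime and device):

```c
#include <CL/cl.h>

int main(void)
{
    cl_int err;
    cl_platform_id platform;
    cl_device_id device;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    /* The context holds the devices, memory objects and command queues. */
    cl_context context = clCreateContext(NULL, 1, &device, NULL, NULL, &err);

    /* Passing 0 as the properties argument gives an in-order queue;
       CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE would allow out-of-order
       execution. */
    cl_command_queue queue = clCreateCommandQueue(context, device, 0, &err);

    return 0;
}
```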
	
	\subsubsection{Memory Model}
	
		Figure \ref{fig:opencl_memory} shows the OpenCL memory model. The model is closely tied to the platform model. The main memory on the device is the \emph{global memory}, which, together with the read-only \emph{constant memory}, is readable and writeable from the host system. Global and constant memory may be cached on the device. Each processing element has a \emph{private memory} where data local to individual work-items are stored, and each compute unit has a \emph{local memory} with the scope of individual work-groups. Private and local memory are not directly accessible from the host.
	
		\begin{figure}[h]
		\centering
		\includegraphics[height=0.3\textheight]{graphics/opencl_memory.png}
		\caption{OpenCL memory model from \cite{openclspec}}
		\label{fig:opencl_memory}
		\end{figure}
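		In OpenCL C, the memory regions are selected through address space qualifiers on kernel arguments and variables. The following kernel fragment is a sketch (the names are placeholders) showing where each region appears:

```c
__kernel void example(__global float *data,      /* global memory   */
                      __constant float *coeffs,  /* constant memory */
                      __local float *scratch)    /* local memory, shared
                                                    within a work-group */
{
    float tmp = data[get_global_id(0)];          /* private memory,
                                                    per work-item */
    /* ... */
}
```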
		
\subsection{OpenCL vs. CUDA}

	Historically, NVIDIA's CUDA precedes OpenCL. CUDA was initially launched as a host API and programming model for NVIDIA's GPUs, and its popularity made it a de facto standard for GPGPU. NVIDIA has since taken this a step further and introduced a GPU architecture dubbed the CUDA architecture, and NVIDIA's OpenCL implementation runs on this architecture. Even though CUDA now names an architecture, the CUDA API and programming model still exist and are heavily used. OpenCL was made with CUDA in mind, and its platform model clearly resembles the CUDA architecture. In conclusion, at the time of writing, OpenCL is an alternative to the CUDA API and programming model: NVIDIA has implemented the OpenCL standard on its CUDA architecture, just as AMD has implemented OpenCL on its ATI Stream technology.
	
	Some major concepts in OpenCL are analogous to concepts in CUDA. For readers experienced with CUDA, Table \ref{table:opencl_vs_cuda} shows the CUDA names for corresponding elements in OpenCL.
	
	\begin{table}[h]
	\centering
	\caption{OpenCL vs. CUDA naming}
	\begin{tabular}{| l l |}
		\hline
		\textbf{OpenCL name} & \textbf{CUDA name} \\
		\hline
		\hline
		kernel & kernel \\
		host & host \\
		NDRange & grid \\
		work-item & thread \\
		work-group & block \\
		global memory & global memory \\
		constant memory & constant memory \\
		local memory & shared memory \\
		private memory & local memory \\
		compute unit & streaming multiprocessor \\
		processing element & core \\
		image & texture \\
		\hline
	\end{tabular}
	\label{table:opencl_vs_cuda}
	\end{table}

\subsection{Host Programming in OpenCL}

	OpenCL provides a host API for building, launching and coordinating parallel computations on devices. It is also possible to query platform-dependent parameters such as memory sizes, maximum work-item counts and other platform capabilities. This section explains how to use memory, program and kernel objects from the host, and how to synchronize the parallel computations. For a complete reference, see \cite{openclspec}.

	\subsubsection{Using Memory Objects}
	
		A \emph{memory object} is a region of device memory together with its attributes. By creating memory objects, the host allocates memory on the device. There are two types of memory objects. \emph{Buffers} are sequential arrays of scalar data types such as integers or floating point numbers, and are accessed as a series of bytes. \emph{Images} are two- or three-dimensional arrays intended to hold image data such as textures. An important difference between images and buffers is that images are accessed through \emph{samplers}, which define mechanisms for handling out-of-range coordinates, interpolation between values and filtering. This section focuses on buffer objects.
		
		Creating a buffer object is performed by the \texttt{clCreateBuffer} function:
		
		\texttt{cl\_mem clCreateBuffer(cl\_context context, cl\_mem\_flags flags, size\_t size, void * host\_ptr, cl\_int * errorcode\_ret)}
		
		where \texttt{context} is the OpenCL context that will contain the buffer, \texttt{flags} is a combination of one or more of the attribute flags given in Table \ref{table:buffer_flags}, \texttt{size} is the buffer size in bytes and \texttt{host\_ptr} is an optional host memory pointer whose use depends on the flags given. The function returns a device memory handle to the allocated buffer. Like many OpenCL API calls, it also reports an error code through \texttt{errorcode\_ret}, which is set to a value other than 0 if something went wrong.
		
		\begin{table}[h]
		\centering
		\caption{OpenCL buffer flags}
		\begin{tabular}{| p{0.3\textwidth} p{0.65\textwidth} |}
			\hline
			\textbf{flag} & \textbf{description} \\
			\hline
			\hline
			\texttt{CL\_MEM\_READ\_WRITE} & Default flag. Buffer is read and written by kernels \\
			\texttt{CL\_MEM\_WRITE\_ONLY} & Write-only access from kernels \\
			\texttt{CL\_MEM\_READ\_ONLY} & Read-only access from kernels \\
			\texttt{CL\_MEM\_USE\_HOST\_PTR} & Use previously allocated \texttt{host\_ptr} as the storage area for the buffer instead of device memory \\
			\texttt{CL\_MEM\_ALLOC\_HOST\_PTR} & Allocate \emph{new} memory on host instead of device \\
			\texttt{CL\_MEM\_COPY\_HOST\_PTR} & Copy data from given \texttt{host\_ptr} to new buffer \\
			\hline
		\end{tabular}
		\label{table:buffer_flags}
		\end{table}
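		As an example, a read-only buffer of \texttt{n} floats initialized with the contents of a host array \texttt{h\_a} could be created as in the following sketch (\texttt{context}, \texttt{h\_a} and \texttt{n} are assumed placeholders; error handling is omitted):

```c
cl_int err;
cl_mem d_a = clCreateBuffer(context,
                            CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                            n * sizeof(float),
                            h_a,   /* copied into the new buffer */
                            &err);
```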
		
		Reading and writing buffer objects is performed by the \texttt{clEnqueue[Read|Write]Buffer} functions:
		
		\texttt{cl\_int clEnqueue[Read|Write]Buffer(cl\_command\_queue cmd\_queue, cl\_mem buffer, cl\_bool blocking, size\_t offset, size\_t size, void * ptr, cl\_uint num\_events, cl\_event * event\_list, cl\_event * event)}
		
		where \texttt{cmd\_queue} is the command queue that the read/write operation is enqueued on, \texttt{buffer} is the buffer object, \texttt{size} is the number of bytes to read or write, and \texttt{blocking} indicates whether the call should block. If blocking, the function will not return until the reading/writing is done. \texttt{offset} is an offset into the buffer object to write to or read from, and \texttt{ptr} is the host pointer that data is read into or written from.
		
		All functions that enqueue commands can optionally return an associated \emph{event object} in \texttt{event}, which can be used to query the status of the command or to wait for its completion. Further, \texttt{event\_list} can contain a list of \texttt{num\_events} events that need to complete before this command is executed.
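		For instance, a blocking write followed by a non-blocking read whose completion is awaited through an event could look like this sketch (\texttt{queue}, \texttt{d\_a}, \texttt{d\_c}, \texttt{h\_a}, \texttt{h\_c} and \texttt{n} are assumed placeholders):

```c
/* Blocking write: returns only after h_a has been transferred. */
clEnqueueWriteBuffer(queue, d_a, CL_TRUE, 0, n * sizeof(float),
                     h_a, 0, NULL, NULL);

/* Non-blocking read: returns immediately; completion is signalled
   through the event. */
cl_event read_done;
clEnqueueReadBuffer(queue, d_c, CL_FALSE, 0, n * sizeof(float),
                    h_c, 0, NULL, &read_done);

/* ... other work ... */
clWaitForEvents(1, &read_done);
```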
		
	\subsubsection{Using Program and Kernel Objects}
	
		A \emph{program} is a set of kernels that can be built (compiled and linked) to be executed on specified devices. A kernel is usually defined by a string of code in the OpenCL C programming language, but it can also be a previously compiled binary. The approach described here takes kernels as code strings. The program is created with \texttt{clCreateProgramWithSource} and built with \texttt{clBuildProgram}. For clarity, their argument lists are not stated explicitly here, but they can be found in \cite{openclspec}.
		
		When the program is built, it is possible to create \emph{kernel objects}, set their arguments and execute them with the functions \texttt{clCreateKernel}, \texttt{clSetKernelArg} and \texttt{clEnqueueNDRangeKernel}. To create a kernel, a successfully built program and a kernel name are given. The arguments of a kernel are set one by one and can be of any type, but their total number cannot exceed a platform-dependent maximum. Kernels are enqueued on a command queue like the buffer read/write operations. When enqueueing a kernel, the size and dimensionality of the NDRange that it will operate on must be given. A \emph{global work size} is given in $n$ work dimensions and defines the total number of work-items in the NDRange. An $n$-dimensional \emph{local work size} is also given, such that the size in each dimension evenly divides the corresponding size in the global work size; the local work size defines the work-group size. Events can be used for kernel commands just like for the buffer operations mentioned previously.
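		Put together, building a program from a source string and launching a kernel over a two-dimensional NDRange could look like the following sketch (\texttt{src} is assumed to hold the kernel source; the kernel name \texttt{matmul}, the buffers \texttt{d\_a}, \texttt{d\_b}, \texttt{d\_c} and the size \texttt{n} are placeholders; error handling is omitted):

```c
cl_int err;
cl_program program = clCreateProgramWithSource(context, 1, &src, NULL, &err);
clBuildProgram(program, 1, &device, NULL, NULL, NULL);

cl_kernel kernel = clCreateKernel(program, "matmul", &err);
clSetKernelArg(kernel, 0, sizeof(cl_mem), &d_a);
clSetKernelArg(kernel, 1, sizeof(cl_mem), &d_b);
clSetKernelArg(kernel, 2, sizeof(cl_mem), &d_c);
clSetKernelArg(kernel, 3, sizeof(int), &n);

size_t global[2] = {n, n};   /* total work-items: n * n                */
size_t local[2]  = {16, 16}; /* work-group size; must divide global     */
clEnqueueNDRangeKernel(queue, kernel, 2, NULL, global, local,
                       0, NULL, NULL);
```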
	
	\subsubsection{Host Synchronization}
	
		When doing parallel processing in OpenCL, there may be a need for synchronization between commands in a command queue. The previously mentioned event system handles fine-grained synchronization between specific commands: each command has an optional associated event, and other commands can depend on zero or more events that must complete before they execute.
		
		For global synchronization in a command queue, the host can use the \texttt{clFlush} and \texttt{clFinish} functions. The former ensures that all commands in a queue are issued to their associated device, but does not guarantee that they complete before the function returns. The latter additionally blocks until they have completed, providing a proper synchronization of commands.
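		A typical pattern, sketched below with assumed placeholder names, is to flush the queue so the device starts working, overlap independent host work, and then finish before using the results:

```c
clEnqueueNDRangeKernel(queue, kernel, 2, NULL, global, local,
                       0, NULL, NULL);
clFlush(queue);   /* commands are issued, but may still be executing   */
/* ... independent host-side work can proceed here ...                 */
clFinish(queue);  /* blocks until every command in the queue is done   */
```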

\subsection{Device Programming in OpenCL}

	The kernels that execute in parallel on a device are written in the OpenCL C programming language. This language is based on the C99 standard, but with specific extensions and restrictions. The same kernel will be executed in parallel by potentially thousands of work-items in an NDRange. An example of an $N \times N$ matrix multiplication kernel is given in Figure \ref{fig:opencl_kernel}. The kernel is written so that each work-item computes one value of the output matrix, implying a two-dimensional NDRange of size $(N, N)$.
	
		\begin{figure}[h]
		\centering
		\includegraphics[height=0.2\textheight]{graphics/opencl_kernel.png}
		\caption{OpenCL matrix multiplication kernel}
		\label{fig:opencl_kernel}
		\end{figure}
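	A kernel along these lines (a sketch based on the description above, not necessarily the exact code in the figure) can be held as a host-side source string, as in the previous section, and its per-work-item computation mirrored by a serial C reference:

```c
/* Sketch of an OpenCL C matrix multiplication kernel as a host-side
   code string. Each work-item (row, col) computes one element of the
   N x N product. */
const char *matmul_kernel_src =
    "__kernel void matmul(__global const float *A,\n"
    "                     __global const float *B,\n"
    "                     __global float *C,\n"
    "                     const int N)\n"
    "{\n"
    "    int row = get_global_id(0);\n"
    "    int col = get_global_id(1);\n"
    "    float sum = 0.0f;\n"
    "    for (int k = 0; k < N; k++)\n"
    "        sum += A[row * N + k] * B[k * N + col];\n"
    "    C[row * N + col] = sum;\n"
    "}\n";

/* Serial reference: the two outer loops enumerate what the NDRange
   parallelizes, one iteration per work-item. */
void matmul_reference(const float *A, const float *B, float *C, int N)
{
    for (int row = 0; row < N; row++)
        for (int col = 0; col < N; col++) {
            float sum = 0.0f;
            for (int k = 0; k < N; k++)
                sum += A[row * N + k] * B[k * N + col];
            C[row * N + col] = sum;
        }
}
```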
	
	The OpenCL C programming language supports vector arithmetic with integer or floating point vectors of length 2, 4, 8 and 16. A number of built-in functions are provided for scalar and vector math, for querying the NDRange dimensions and work-item indices, and for local or global synchronization. Some restrictions relative to C99 apply, including no recursion, no \texttt{stdio} and no external variables. See \cite{openclspec} for details.
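	As a small illustration (a sketch in OpenCL C, with placeholder names), the vector types and index built-ins can be combined like this:

```c
/* Each work-item scales and accumulates four floats at once. */
__kernel void saxpy4(__global const float4 *x,
                     __global float4 *y,
                     const float a)
{
    int i = get_global_id(0);   /* built-in work-item index query   */
    y[i] = a * x[i] + y[i];     /* component-wise vector arithmetic */
}
```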