\section{Benchmarks}
\label{sec:benchmark}

We selected four common benchmarks for this comparison; we may add larger
benchmarks in the next phase.

\subsection{VectorAdd}

This is a very simple benchmark: it adds two large vectors together, element
by element. Each vector is stored in memory as one contiguous block, so the
computation is highly parallel. Section~\ref{sec:programmingmodel} shows the
computation kernel in CUDA and OpenCL; we do not yet have an FPGA
implementation.

Although it is a very simple benchmark, it exercises the whole processing flow
in both CUDA and OpenCL: 1) preparing data on the host machine; 2) moving
data into device memory; 3) invoking the computation kernel to perform the
vector addition; and 4) moving results back out of device memory. Because the
computation is simple and there are no data dependencies among tasks,
performance depends mainly on memory-transfer efficiency and parallel
computation capability. We have implemented it in CUDA and OpenCL.

\subsection{Matrix Multiplication C = A * B}
This benchmark implements matrix multiplication: it takes two matrices
A and B and computes their product C. It is a common kernel in many
imaging, simulation and scientific applications. Matrix multiplication is
embarrassingly parallel and an ideal candidate for GPUs. We have implemented
it in CUDA, OpenCL and Bluespec.
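A naive serial reference in C makes the parallelism evident: each output
element $C_{ij}$ is an independent dot product, which is why the GPU versions
can map one thread (or work-item) per element. This sketch assumes row-major
$M \times K$ and $K \times N$ operands; it is illustrative, not our
benchmark implementation.

```c
/* Naive triple-loop reference for C = A * B.
 * A is M x K, B is K x N, C is M x N, all row-major.
 * Every C[i][j] is computed independently of the others. */
static void matmul(const float *A, const float *B, float *C,
                   int M, int K, int N)
{
    for (int i = 0; i < M; ++i)
        for (int j = 0; j < N; ++j) {
            float acc = 0.0f;
            for (int k = 0; k < K; ++k)
                acc += A[i * K + k] * B[k * N + j];
            C[i * N + j] = acc;
        }
}
```
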

\subsection{RadixSort}

Radix sort is a well-known sorting algorithm with computational complexity
$O(k \cdot n)$ for $n$ keys of at most $k$ digits. Because of the inherent
characteristics of sorting, common sorting algorithms contain many data
dependencies. However, by carefully designing the algorithm, a great deal of
fine-grained parallelism can be exposed, which makes it a good candidate for
massively multi-core platforms, including GPUs. We used the optimized version
in the CUDA SDK, which is based on the algorithm introduced in \cite{gpu_radix_sort}.
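To make the $O(k \cdot n)$ structure concrete, a serial least-significant-digit
radix sort on 32-bit keys in C looks as follows (an illustrative sketch, not
the optimized CUDA SDK version); with one byte per ``digit'' there are $k = 4$
passes, and each pass is a counting sort whose histogram, prefix-sum, and
scatter phases are exactly what the GPU version parallelizes:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* LSD radix sort: 4 passes of a stable counting sort,
 * one byte of the key per pass, so O(4 * n) overall. */
static void radix_sort(uint32_t *keys, size_t n)
{
    uint32_t *tmp = malloc(n * sizeof *tmp);
    if (!tmp)
        return;
    for (int shift = 0; shift < 32; shift += 8) {
        size_t count[256] = {0};
        for (size_t i = 0; i < n; ++i)           /* histogram */
            count[(keys[i] >> shift) & 0xFF]++;
        size_t offset = 0;                       /* exclusive prefix sum */
        for (int d = 0; d < 256; ++d) {
            size_t c = count[d];
            count[d] = offset;
            offset += c;
        }
        for (size_t i = 0; i < n; ++i)           /* stable scatter */
            tmp[count[(keys[i] >> shift) & 0xFF]++] = keys[i];
        memcpy(keys, tmp, n * sizeof *keys);
    }
    free(tmp);
}
```
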

\subsection{FFT}
\[
X_k = \sum_{n=0}^{N-1}x_n e^{-\frac{2\pi i}{N}k n} \quad \quad k=0,1,\ldots,N-1
\]
The Discrete Fourier Transform (DFT) is a very common digital signal processing
technique in which a series of points is transformed between the time and frequency domains.
The FFT algorithm divides an $N$-point DFT into two $N/2$-point DFTs;
this process is repeated until the transform is decimated into a series of 2-point DFTs.
ERCBench~\cite{ERCBench} implements a pipelined FFT processor for FPGAs capable of performing back-to-back forward
or reverse FFTs. The design has adjustable parameters that allow for area, delay, and
accuracy trade-offs and optimizations. We developed the corresponding CUDA and OpenCL
implementations of the FFT.


