\section{Evaluation}
\label{sec:eval}
\subsection{Evaluation Methodology}

In our comparison, we run the same benchmark on each platform and measure its
performance. Because the CUDA and OpenCL programming models share a similar
computation model, we use the same algorithm for both; we run the OpenCL
version on two platforms, an NVIDIA GPU and an Intel CPU. The FPGA programming
model differs substantially from CUDA and OpenCL, so the FPGA implementation of
the same benchmark is quite different.

In summary, we have four configurations for each benchmark:
\begin{enumerate}
\item CUDA
\item OpenCL on NVIDIA GPU
\item OpenCL on Intel CPU
\item FPGA
\end{enumerate}

To measure execution time for the benchmarks, we define two metrics: 1)
\textbf{Total Time}, which includes the time to move data from the host to
the computing device, the computation kernel execution time, and the time to
move the results back to the host; and 2) \textbf{Kernel Time}, which includes
only the computation kernel execution time. Many similar comparison studies
report only the \textbf{Kernel Time}. We believe the \textbf{Total Time} is the
more reasonable metric for comparing heterogeneous platforms with a generic
platform, because data movement is a significant cost on a heterogeneous
platform but not on a generic one.
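The relationship between the two metrics can be sketched with hypothetical
component timings (the numbers below are illustrative only, not measured
values from our experiments):

```python
# Illustrative decomposition of the two metrics; the component timings
# below are hypothetical, not measured values.
h2d_ms = 5.0     # host-to-device data movement
kernel_ms = 0.8  # computation kernel execution
d2h_ms = 3.0     # device-to-host data movement

kernel_time = kernel_ms                   # Kernel Time
total_time = h2d_ms + kernel_ms + d2h_ms  # Total Time
print(kernel_time, total_time)
```

On a generic platform only the kernel term exists, which is why the two
metrics can rank platforms differently.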

Due to time limits, we currently have performance data for only some of the
configurations. In particular, because the FPGA hardware was not yet available,
we do not have performance data for the FPGA platform at this time. We will
report the complete performance data in the final report.

\subsection{Preliminary Result}
\subsubsection{VectorAdd}

We collected performance data for this benchmark in the first three
configurations. Three vector sizes were used: 1) 2M (2*1024*1024) elements,
where each vector occupies 8 MB of memory; 2) 8M (8*1024*1024) elements, where
each vector occupies 32 MB; and 3) 32M (32*1024*1024) elements, where each
vector occupies 128 MB.
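The per-vector memory footprints follow from the element counts, assuming
4-byte single-precision elements (which matches the stated sizes):

```python
# Per-vector memory footprint, assuming 4-byte (single-precision) elements.
MiB = 1024 * 1024

sizes_mib = {}
for n_millions in (2, 8, 32):
    n_elements = n_millions * 1024 * 1024
    sizes_mib[n_millions] = n_elements * 4 / MiB
    print(f"{n_millions}M elements -> {sizes_mib[n_millions]:.0f} MiB per vector")
```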

We collected both the \textbf{Kernel Time} and the \textbf{Total Time}.
However, because the computation kernel is very simple and every platform has
massive parallel capability, the \textbf{Kernel Time} is below 1 ms in all
configurations. We therefore report only the \textbf{Total Time}, in table
\ref{tab:vectoradd}. In our experiment, CUDA could not run the 32M vector case
correctly; it reported a kernel error.

\begin{table}
\centering
\caption{Total Time of VectorAdd (Unit:ms)}
\label{tab:vectoradd}
\begin{tabular}{|l|r|r|r|} \hline
Configuration & 2M Vector & 8M Vector & 32M Vector\\ \hline
CUDA & 7 & 27 & NA \\
OpenCL on GPU & 9 & 34 & 131 \\
OpenCL on CPU & 6 & 22 & 81 \\ \hline 
\end{tabular}
\end{table}

One interesting finding is that the generic CPU is faster than the GPU here.
One reason is that moving data from main memory to the GPU is less efficient
than moving data within main memory. Given that the actual kernel execution
time is very small, real application designs must account carefully for the
cost of data movement. For example, if we performed the vector addition with a
traditional computation model, without OpenCL, we could avoid the data
movement entirely.
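Because the Kernel Time is under 1 ms everywhere, the Total Time in table
\ref{tab:vectoradd} is dominated by data movement, so a rough effective
bandwidth can be derived from the 32M-element column. The sketch below assumes
three 128 MiB vectors (two inputs, one result) traverse the measured path;
for the CPU configuration this is movement within main memory rather than a
PCIe transfer:

```python
# Rough effective bandwidth for the 32M-element VectorAdd case, where the
# Kernel Time (< 1 ms) is negligible and Total Time is mostly data movement.
MiB = 1024 * 1024
bytes_moved = 3 * 128 * MiB  # two input vectors in, one result vector out

bandwidth_gb_s = {}
for name, total_ms in [("OpenCL on GPU", 131), ("OpenCL on CPU", 81)]:
    bandwidth_gb_s[name] = bytes_moved / (total_ms / 1000) / 1e9
    print(f"{name}: ~{bandwidth_gb_s[name]:.1f} GB/s effective")
```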

\subsubsection{MatrixMultiply}

We measured MatrixMultiply with different input matrix sizes: 1) 64 x 128,
which requires 1,048,576 floating-point operations (fOps); 2) 320 x 640, which
requires 131,072,000 fOps; and 3) 640 x 1280, which requires 1,048,576,000
fOps. Because the inputs are not large, the data movement time should be
small, so we report only the \textbf{Kernel Time} for this test, in table
\ref{tab:matrixmul}.

\begin{table}
\centering
\caption{Kernel Time of MatrixMul (Unit:ms)}
\label{tab:matrixmul}
\begin{tabular}{|l|r|r|r|} \hline
Configuration & 64x128 & 320x640 & 640x1280 \\ \hline
CUDA & 0.04 & 5.004 & 40.98 \\
OpenCL on GPU & 0.04 & 4.57 & 36.06 \\
OpenCL on CPU & 0.08 & 7.61 & 61.67 \\ \hline 
\end{tabular}
\end{table}

The GPU shows consistent performance on this benchmark: regardless of matrix
size, the CUDA implementation sustains about 25 GfOps/s and the OpenCL
implementation about 29 GfOps/s. At the same time, the Intel CPU also performs
well, at around 17 GfOps/s.
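These throughput figures can be reproduced from the table. The fOps formula
$2 m^2 k$ for an $m \times k$ by $k \times m$ multiply is our inferred
interpretation of the stated sizes; it matches all three fOps counts given in
the text:

```python
# Reproduce the throughput numbers from the Kernel Time table.
# fOps count assumes an (m x k) by (k x m) multiply: 2 * m^2 * k.
kernel_ms = {  # (m, k) -> {configuration: Kernel Time in ms}
    (64, 128):   {"CUDA": 0.04,  "OpenCL on GPU": 0.04,  "OpenCL on CPU": 0.08},
    (320, 640):  {"CUDA": 5.004, "OpenCL on GPU": 4.57,  "OpenCL on CPU": 7.61},
    (640, 1280): {"CUDA": 40.98, "OpenCL on GPU": 36.06, "OpenCL on CPU": 61.67},
}

gfops = {}
for (m, k), times in kernel_ms.items():
    fops = 2 * m * m * k
    for config, ms in times.items():
        gfops[(m, k, config)] = fops / (ms / 1000) / 1e9
        print(f"{m}x{k} {config}: {gfops[(m, k, config)]:.1f} GfOps/s")
```

For the largest size this gives roughly 25.6, 29.1, and 17.0 GfOps/s for
CUDA, OpenCL on GPU, and OpenCL on CPU respectively.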

\subsubsection{RadixSort}

We measured RadixSort with different input sizes: 1) 1M (1024*1024) elements;
2) 2M (2*1024*1024) elements; and 3) 4M (4*1024*1024) elements. All elements
are 32-bit integers. We report only the \textbf{Kernel Time} for this test, in
table \ref{tab:radixsort}.

\begin{table}
\centering
\caption{Kernel Time of Radixsort (Unit:ms)}
\label{tab:radixsort}
\begin{tabular}{|l|r|r|r|} \hline
Configuration & 1M Elements & 2M Elements & 4M Elements \\ \hline
CUDA & 13.98 & 26.04 & 50.04 \\
OpenCL on GPU & 25.05 & 50.58 & 102.34 \\
OpenCL on CPU & 60 & 160 & 423 \\\hline 
\end{tabular}
\end{table}

On CUDA, the GPU sorts at around 80 Melements/s, while with OpenCL the same
GPU reaches only about 40 Melements/s. Since both implementations use the same
algorithm, such a gap is unexpected, and we have not yet found the reason. The
CPU performs poorly in this case; we used a different kernel there, which may
be the source of the problem, and we are still investigating.
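The sorting rates quoted above follow directly from the 4M-element column of
table \ref{tab:radixsort}:

```python
# Sorting throughput derived from the 4M-element column of the table.
n = 4 * 1024 * 1024  # elements

melems_s = {}
for config, ms in [("CUDA", 50.04), ("OpenCL on GPU", 102.34),
                   ("OpenCL on CPU", 423)]:
    melems_s[config] = n / (ms / 1000) / 1e6
    print(f"{config}: ~{melems_s[config]:.0f} Melements/s")
```

This gives roughly 84, 41, and 10 Melements/s for CUDA, OpenCL on GPU, and
OpenCL on CPU respectively.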


\subsubsection{FFT}

Due to time limits, we are still setting up a common baseline for FFT and have
not collected data yet.

