\section{Evaluation}
\label{sec:eval}
\subsection{Evaluation Methodology}

In our comparison, we run all the benchmarks on the different platforms and
measure their performance. As mentioned in Section~\ref{sec:benchmark}, the
kernels of the same algorithm on different platforms may not map one-to-one, so
a precise comparison is difficult. We nonetheless report all the data and
discuss the findings we draw from them.

During the evaluation, we run the CUDA kernels on the GPU, and we run the OpenCL
kernels on two platforms: an NVIDIA GPU and an Intel CPU. The FPGA kernels run
on an Altera Cyclone II FPGA~\cite{Altera:DE2}.

To summarize, we have four configurations for each benchmark:
\begin{enumerate}
\item CUDA
\item OpenCL on Nvidia GPU
\item OpenCL on Intel CPU
\item FPGA
\end{enumerate}

For the execution time of the benchmarks, we define two metrics: 1)
\textbf{Total Time}, which includes the time to move data from the host to the
computing device, the computation kernel execution time, and the time to move
data back to the host; and 2) \textbf{Kernel Time}, which includes only the
computation kernel execution time. Many similar comparison studies use only the
\textbf{Kernel Time}. We believe the \textbf{Total Time} is more appropriate
for comparing heterogeneous platforms with a generic platform, because data
movement is a heavy task on heterogeneous platforms but not on a generic
platform.
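As a minimal sketch of how the two metrics relate (the timing values below are illustrative placeholders, not measurements from our benchmarks):

```python
# Illustrative sketch of the two timing metrics; all values are hypothetical.

def total_time(h2d_ms, kernel_ms, d2h_ms):
    """Total Time: host-to-device copy + kernel execution + device-to-host copy."""
    return h2d_ms + kernel_ms + d2h_ms

def kernel_time(kernel_ms):
    """Kernel Time: computation kernel execution only."""
    return kernel_ms

# A 35.9 ms kernel with 3.3 ms of data movement in each direction:
print(total_time(3.3, 35.9, 3.3))  # Total Time in ms
print(kernel_time(35.9))           # Kernel Time in ms
```

On a generic (CPU-only) platform the two metrics coincide, since no host-device transfers are needed; the gap between them is exactly the data-movement cost that motivates reporting Total Time.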

It must be mentioned that for very large inputs, the FPGA does not have enough
internal memory to accommodate all the data, while for small inputs we do not
observe much speed-up on GPUs, since less computation means that a significant
fraction of the total time is spent on data movement. Thus, the performance
statistics for large inputs on the FPGA were generated by scaling the number of
cycles required for the computation, which can be measured accurately given a
careful understanding of the FPGA kernel designs.
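The cycle-scaling estimate can be sketched as follows; the 50\,MHz clock and the per-block cycle count are illustrative assumptions, not the actual parameters of our kernels:

```python
# Sketch of the cycle-scaling estimate used for large FPGA inputs.
# CLOCK_HZ and cycles_per_block are assumed values for illustration only.

CLOCK_HZ = 50_000_000  # assumed FPGA clock frequency (50 MHz)

def estimated_kernel_ms(cycles_per_block, n_blocks):
    """Scale a measured per-block cycle count up to a large input."""
    total_cycles = cycles_per_block * n_blocks
    return total_cycles / CLOCK_HZ * 1000.0  # seconds -> ms

# Example: 80 cycles per block, one million blocks.
print(estimated_kernel_ms(80, 1_000_000))  # -> 1600.0 ms
```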



\subsection{AES}

We collected the performance data for this benchmark in the configurations
described above. Different key lengths are used, including 1) 128-bit and 2)
256-bit keys. Both the encoding and the decoding case are measured for each key
length.


We collected the \textbf{Kernel Time}, \textbf{Data Movement Time}, and
\textbf{Total Time}, and report them in Table~\ref{tab:AES}.


\begin{table}
\centering
\caption{Performance of AES (Unit:ms)}
\label{tab:AES}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
Programming Model & Platform & Key Length(bit) & Encode/Decode & Kernel Time &
Data Movement Time & Total Time \\ \hline
CUDA & GPU & 128 & Encode & 35.915 & 6.627 & 42.542\\
     & & 128 & Decode & 38.957 & 6.510 & 45.467\\
     & & 256 & Encode & 45.699 & 6.300 & 51.999\\
     & & 256 & Decode & 48.889 & 6.530 & 55.419\\\hline
OpenCL & GPU & 128 & Encode & 1204.3 & 6.900 & 1211.2 \\
       & & 128 & Decode & 355 & 6.899 & 361.9\\ \cline{2-7}
       & CPU & 128 & Encode & 1761.7 & 3.299 & 1765\\
       & & 128 & Decode & 1831.8 & 3.700 & 1835.5\\\hline
Verilog & FPGA & 128 & Encode  & 155.809 & 14.165 & 169.974\\	
     & & 128 & Decode  & 155.809 & 14.165 & 169.974\\
     & & 192 & Encode  & 184.138 & 14.165 & 198.303\\
     & & 192 & Decode  & 184.138 & 14.165 & 198.303\\
     & & 256 & Encode  & 212.467 & 14.165 & 226.631\\
     & & 256 & Decode  & 212.467 & 14.165 & 226.631\\\hline 
\end{tabular}
\end{table}

One interesting finding for AES is that, in terms of total time, CUDA is the
fastest, the FPGA is in the middle, and OpenCL is the slowest. Another finding
is that when we run OpenCL on the GPU with the 128-bit key, the encoding and
decoding times differ sharply, which is contrary to our expectation and to all
the other cases. We tried to understand the reason behind the data, but could
not find a satisfactory explanation. For all three programming models on their
respective platforms, the total time increases as the key length increases.


\subsection{SHA1}
For SHA-1, we use different input message size to measure it performance on
different platform, including from very small 256 Byte, to large image 4M
Byte (4,194,304 Byte). We report the Kernel Time, Data Movement
Time and Total time in this test, table~\ref{tab:SHA-1-cuda} and table~\ref{tab:SHA-1-opencl}


\begin{table}
\centering
\caption{Performance of SHA-1 on CUDA(Unit:ms)}
\label{tab:SHA-1-cuda}
\begin{tabular}{|c|c|c|c|c|c|} \hline
Programming Model & Running Configuration & Random Message Size & Kernel Time & Data Movement Time & Total Time \\ \hline
CUDA & Overlap mode & 256 & 0.04296 & 0.032226 & 0.163818\\
	 &   			& 1024 & 0.042969& 0.033936 & 0.160156\\
	&			& 16384	& 0.123047	& 0.038086	& 0.248047\\
	&			& 65536	& 0.4021	& 0.050781	& 0.541992\\
	&			& 4194304	& 23.768066	& 1.312011	& 26.333008\\\cline{2-6}
	& Direct mode     	 &256	& 0.310059	& 0.034912	& 0.430176\\
	&			& 1024	& 0.323975	& 0.032959	& 0.537109\\
	&			& 16384	& 0.35498	& 0.033935	& 0.571045\\
	&			& 65536	& 0.361084	& 0.042969	& 0.584961\\
	&			& 4194304	& 2.530029	& 0.670166	& 3.905029\\
\hline
\end{tabular}
\end{table}

\begin{table}
\centering
\caption{Performance of SHA-1 on OpenCL(Unit:ms)}
\label{tab:OPENCLSHA-1}
\label{tab:SHA-1-opencl}
\begin{tabular}{|c|c|c|c|c|} \hline
Programming Model & Platform & Random Message Size & Kernel Time & Total Time \\
\hline OpenCL	&GPU	&	256	&0.257		&0.382\\
		&	&1024	&0.281		&0.405\\
		&	&16384	&0.326		&0.456\\
		&	&65536	&0.399		&0.522\\
		&	&4194304	&8.871	&12.63\\\cline{2-5}
	        & CPU 		&256	&0.114		&0.149\\
		&		&1024	&0.254		&0.292\\
		&		&16384	&0.323		&0.389\\
		&		&65536	&0.426		&0.583\\
		&		&4194304&7.927		&10.274\\
\hline
\end{tabular}
\end{table}
From the tables above, when running the CUDA version on the GPU, overlap mode
costs much more time than direct mode for the largest input. Comparing CUDA,
OpenCL, and the FPGA version together, for smaller data sizes (below 16384
bytes) the FPGA version performs best, while for larger data sizes (above 16384
bytes) the CUDA version on the GPU performs best. We believe this is caused by
the data-movement overhead: each data-movement call carries a constant
overhead, so when the data chunk is small, that overhead accounts for a larger
percentage of the total time.
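This effect can be captured in a toy cost model; the fixed-overhead and bandwidth constants below are hypothetical, chosen only to illustrate the trend:

```python
# Toy model of per-call data-movement overhead; all constants are assumed,
# not measured. total = fixed per-call cost + size-proportional transfer
# + compute time.

FIXED_OVERHEAD_MS = 0.03    # assumed constant cost of each data-movement call
BANDWIDTH_B_PER_MS = 4e6    # assumed effective transfer bandwidth (bytes/ms)

def overhead_fraction(size_bytes, compute_ms):
    """Fraction of the total time spent in the fixed per-call overhead."""
    transfer_ms = size_bytes / BANDWIDTH_B_PER_MS
    total_ms = FIXED_OVERHEAD_MS + transfer_ms + compute_ms
    return FIXED_OVERHEAD_MS / total_ms

# The fixed overhead dominates for small messages and vanishes for large ones.
print(overhead_fraction(256, 0.01))       # small message: large fraction
print(overhead_fraction(4_194_304, 2.5))  # 4 MB message: small fraction
```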


\begin{table}
\centering
\caption{Performance of SHA-1 on FPGA(Unit:ms)}
\label{tab:FPGASHA-1}
\begin{tabular}{|c|c|c|c|c|} \hline
Programming Model & Platform & Random Message Size & Kernel Time & Total Time \\ \hline
Verilog	&FPGA	&	256	&0.003		&0.001\\
		&	&1024	&0.013		&0.012\\
		&	&16384	&0.202		&0.163\\
		&	&65536	&0.804		&0.651\\
		&	&4194304	&51.401	&41.61\\\hline
\end{tabular}
\end{table}

\subsection{FFT}
We use different input element size to measure FFT, including 1) 256 number
point FFT, 2) 512 number point FFT, 3) 1024 number point FFT, 4) 2048 number
point FFT. We report Kernel Time, Data Movement Time and Total Time in this
test, table~\ref{tab:FFT}.


\begin{table}
\centering
\caption{Performance of FFT(Unit:ns)}
\label{tab:FFT}
\begin{tabular}{|c|c|c|c|c|c|} \hline
Programming Model& Platform &	Running Configuration (N-point FFT)	&Kernel Time&	Data
Movement Time& Total Time \\\hline 
CUDA & GPU	&256	&7164	&13947	&21111\\
	 &	    &512	&7256	&15628	&22884\\
	 &  	&1024	&7218	&18550	&25768\\
	 &  	&2048	&7307	&23852	&31159\\\hline
OpenCL & GPU & 256	&26400	&118000	&144400\\
	   &	&512	&25100	&117800	&142900\\
	   &	&1024	&24400	&131600	&156000\\
	   &	&2048	&20700	&123600	&144300\\\cline{2-6}
	   & CPU & 256	&seg fault	&NA	&seg fault\\
	   &	&512	&seg fault	&NA	&seg fault\\
	   &	&1024	&61800	&26200	&88000\\
	   &	&2048	&62300	&25000	&87300\\\hline
Verilog	& FPGA	& 256	&1830	&1686	&3516\\
	& 	& 512	&3639	&3463	&7103\\\hline
\end{tabular}
\end{table}
The FPGA has the best performance among the three platforms, achieving about
three times lower total running time than CUDA on the GPU and OpenCL on the GPU
and CPU for the same input size. However, for larger inputs ($N\ge 1024$), we
run out of resources on the FPGA. The data movement time is large for both CUDA
and OpenCL, especially for OpenCL on the GPU: OpenCL needs to allocate device
memory, finish the computation in device memory, and then copy the result back
to host memory, which incurs a large overhead. Another possible reason is that
all the input sizes are too small for both CUDA and OpenCL, so the real FFT
computation time should be very small and nearly everything else is overhead.


\subsection{Power Estimation}

We tried to measure the precise power consumption of all platforms. Due to some
limitations, we can only provide a rough power estimate. The CPU and GPU power
is estimated from the computation time and the device power consumption given
in the specifications. The FPGA power consumption is estimated using toolchains
such as Quartus II~\cite{QuartusII} and Xilinx ISE~\cite{Xilinx}.
Table~\ref{tab:power} shows the results.

\begin{table}
\centering
\caption{Power Consumption Statistics}
\label{tab:power}
\begin{tabular}{|c|c|c|c|c|} \hline
Kernel & Configuration & CPU & GPU & FPGA \\\hline 
AES  & OpenCL kernel 128 bit Key 12 MB data & 46.48mW & 13.46 mW & - \\\hline
SHA1 & ERC Benchmark suite: 128 bit key 12 MB data & - & 0.473mW & 141.26 mW\\
\hline
\end{tabular}
\end{table}

For the same computation kernel, the GPU is about three times more
power-efficient than the CPU. However, our estimation shows that the FPGA uses
much more power. We believe this comes from the imprecise estimation produced
by the toolchain.
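To make the spec-based estimation method concrete, the following sketch derives an energy figure from a device power rating and a kernel time; the TDP values and times are placeholders, not the numbers behind Table~\ref{tab:power}:

```python
# Sketch of the spec-based estimate for CPU and GPU; the TDP figures and
# kernel times below are hypothetical, used only to illustrate the method.

def estimated_energy_j(tdp_watts, kernel_time_ms):
    """Energy = device power from the spec sheet x computation time."""
    return tdp_watts * (kernel_time_ms / 1000.0)

cpu_energy = estimated_energy_j(95.0, 1761.7)   # assumed 95 W CPU TDP
gpu_energy = estimated_energy_j(250.0, 35.9)    # assumed 250 W GPU TDP
print(cpu_energy, gpu_energy)  # joules
```

Under such assumptions a GPU can come out ahead in energy despite a higher power rating, because its kernel time is far shorter; the same caveat about spec-sheet imprecision applies here as in our measurements.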

