\section{Introduction}

Rising power dissipation in highly integrated chips has become a growing
problem for computer architects in recent years, threatening the continued
performance scaling promised by Moore's Law.
Heterogeneous computing holds the promise of circumventing the power problems
faced by today's traditional, homogeneous multicore systems. However,
heterogeneous systems present several new challenges which must be overcome to
fully reap the benefits they offer. One of the major problems is
programmability. Today different languages are used to program different
hardware, such as GPUs (CUDA, OpenCL), FPGAs (Bluespec, Verilog, SystemC) and
DSPs (C, assembly). Each of these languages embodies a different parallelism
model, which makes it difficult for a programmer to become an expert in more
than one class of devices. At the same time, different hardware and its
corresponding parallelism model may suit some applications better than others.

To make better use of these hardware platforms, a good understanding and
characterization of heterogeneous programming models is needed. In this
project, we compare these parallelism models on different platforms.
Specifically, we measure the performance of selected benchmarks on each
platform and attempt to answer the following questions:

\subsection{Goals}
\label{sec:Goals}
\begin{itemize}
\item What are the advantages/disadvantages of different parallel programming models?
\item What makes an application suitable for a particular programming model and hardware?
\end{itemize}

Based on our results, we try to provide a model that helps the programmer select the
right heterogeneous programming model for implementing an application. In the long run,
it would also help us develop metrics from which to build a partitioning algorithm for
future heterogeneous systems, able to judge which application should be executed on which platform.

\subsection{Approach}
We started with background reading on heterogeneous systems. We learned that heterogeneous systems
aim to integrate a variety of processing elements, such as general-purpose cores, GPUs,
DSPs, FPGAs, and custom or semi-custom hardware, into a single system. Among these, the combination of
general-purpose cores, GPUs, and FPGAs appears to be a popular choice among researchers in this area.
We therefore limited our study to GPUs and FPGAs, as they are likely to be
present in future heterogeneous systems. DSPs and custom or semi-custom
hardware, by contrast, are too specialized to be useful for applications
outside their domains.

We then decided on the programming models to study for this project.
Today's programming models are tightly coupled with the hardware, so we
selected CUDA and OpenCL for GPUs, and Verilog for FPGAs. Next, we selected three applications
for our experiments:
\begin{itemize}
\item AES: Advanced Encryption Standard
\item SHA1: Secure Hash Algorithm
\item FFT: Fast Fourier Transform
\end{itemize}

We ported all three applications to the three selected platform configurations:
\begin{itemize}
\item OpenCL on GPU (Quadro 600) and CPU (Xeon E3 1245)
\item CUDA on GPU (Quadro 600)
\item Verilog on FPGA (Altera Cyclone II)
\end{itemize}
We collected performance and power statistics for each, and based on our
results we tried to answer the questions listed in Section~\ref{sec:Goals}.

\subsection{Challenges}
We came across several roadblocks during the course of this project. First, it
was difficult to obtain well-optimized implementations of the kernels for each
platform, especially the FPGA; this was a major roadblock. To the best of our
knowledge, no benchmark suite exists today that provides optimized
implementations of a common kernel on different platforms, which is an
essential requirement for evaluating heterogeneous hardware. In our opinion,
it would be a fruitful exercise to develop a benchmark suite with
implementations of carefully selected kernels for three or more different
platforms (such as CPUs, GPUs, vector hardware, and FPGAs).

Having tried to implement these kernels from scratch ourselves, using
slightly more user-friendly languages such as Bluespec (in place of Verilog), we learned that
developing finely tuned implementations on these platforms is time-consuming. Since our time
for this project was limited, we collected implementations of the three kernels (AES, FFT, and SHA1) from different benchmark
suites. The ERCBench~\cite{ERCBench} benchmark suite was most helpful, as it provides CUDA and Verilog implementations
of some of these kernels. Because the implementations come from different sources, there is no one-to-one
mapping between the algorithms used, which may have slightly affected our results.

Another problem arose while collecting power statistics. For FPGAs, power consumption
is estimated using toolchains such as Quartus II~\cite{QuartusII} and Xilinx ISE~\cite{Xilinx}.
These tools use various parameters of the design, such as area and clock frequency, to estimate
the power consumed. This introduces imprecision into our power analysis, which is reflected in our results.

\subsection{Findings Highlight}
Based on the performance evaluation and our implementation analysis, we report
findings on several aspects of current heterogeneous programming, including
programmability, data-movement limitations, the fine tuning needed for good
performance, each model's capability for describing parallel tasks, the
portability of OpenCL, and the pros and cons of FPGAs. We detail them in
Section~\ref{sec:finding}.

\subsection{Overview of Report}
The rest of this report is organized as follows.
Section~\ref{sec:programmingmodel} gives a short introduction to the
heterogeneous programming models we studied. Sections~\ref{sec:platform}
and~\ref{sec:benchmark} introduce the hardware platforms and software
benchmarks we used to evaluate these programming models. Section~\ref{sec:eval}
reports the evaluation results, Section~\ref{sec:finding} details our findings,
and Section~\ref{sec:conclusion} gives a short conclusion.

