\section{Findings}
\label{sec:finding}

Based on our analysis of the performance data and on background reading, we arrived at several findings during this project. This section details these findings.

\subsection{Poor Programmability}

We were all beginners in CUDA, OpenCL, and FPGA programming. After several weeks
of practice, we conclude that the current programming models for heterogeneous hardware do not offer
easy programmability. One reason is that the three programming models differ greatly from one another (especially CUDA and Verilog), so an expert in one domain, say GPUs, cannot program FPGAs efficiently. A more unified programming model is required.

Comparing CUDA, OpenCL, and Verilog, CUDA is the most approachable. A CUDA program
consists of kernel code plus kernel calls in the host code. The kernel code is essentially
C with a few additional keywords, and a kernel call looks like template
code, e.g. \texttt{foo<<< nBlk, nTid >>>(args)}. We believe this complexity is
acceptable for a domain expert. However, achieving good performance still requires
an understanding of the underlying architecture and its concepts, for example how
to correctly use kernel keywords such as \texttt{\_\_shared\_\_}.
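
A minimal sketch illustrates both points (the kernel \texttt{foo} and its sizes are hypothetical, not taken from our benchmarks): the body reads as plain C, yet the \texttt{\_\_shared\_\_} scratchpad and the launch configuration already demand architectural knowledge.

\lstset{language=C}
\begin{lstlisting}
// Minimal sketch: C-like kernel body plus template-style launch syntax.
__global__ void foo(float *a, int n) {
    __shared__ float tile[256];            // per-block scratchpad memory
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (i < n) ? a[i] : 0.0f;
    __syncthreads();                       // wait for the whole block
    if (i < n) a[i] = 2.0f * tile[threadIdx.x];
}
// Host side: launch nBlk blocks of nTid threads each.
// foo<<< nBlk, nTid >>>(dA, n);
\end{lstlisting}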

OpenCL shares many attributes with CUDA, but it suffers from
tedious and verbose host-side syntax. As the simple example below shows, a single
kernel invocation requires several boilerplate API calls.

\lstset{language=C}
\begin{lstlisting}[float=tb]
//Enqueue the input data transfer and set the kernel arguments
status = clEnqueueWriteBuffer(queue, inputBuffer, CL_FALSE, 0, sizeof(cl_uchar) * width * height, input, 0, NULL, &writeEvt);
...
status = clFlush(queue);
status = sampleCommon->waitForEventAndRelease(&writeEvt);
...
status = clSetKernelArg(kernel, 0, sizeof(cl_mem), (void *)&outputBuffer);
status = clSetKernelArg(kernel, 1, sizeof(cl_mem), (void *)&inputBuffer);

\end{lstlisting}
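
For contrast, the equivalent host-side steps in CUDA are a single copy and a single launch (a hedged sketch with our own buffer and kernel names, not code from the benchmark):

\lstset{language=C}
\begin{lstlisting}
// Same steps in CUDA: one copy call; arguments are passed in the launch itself.
cudaMemcpy(dInput, input, sizeof(unsigned char) * width * height,
           cudaMemcpyHostToDevice);
kernel<<< nBlk, nTid >>>(dOutput, dInput);
\end{lstlisting}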

Programming FPGAs in Verilog is not programmer-friendly at all. Verilog is a very
low-level language; although that is acceptable in the hardware-design domain, it is
totally unfamiliar to a software programmer. Below is a fragment of the high-level
component design of our AES implementation. It is very hard for a software programmer
to understand such code, let alone to program complex algorithms in it.

\lstset{language=Verilog}
\begin{lstlisting}
module AES_ERC(out,out_valid,out_ready,in,in_valid,in_ready,dec,clk,rst,size); 
	// number of rounds to encrypt/decrypt, set by parameter SIZE

	//localparam NROUNDS = (SIZE == 128) ? 11 : (SIZE == 192) ? 13 : 15;
	//different modes of operation
	localparam MODE_LOAD = 2'b01;
        
	//data ports
	input[127:0] in;  //input data (key or data to encrypt/decrypt)
\end{lstlisting}

\subsection{Data Movement Limits Performance}

All three heterogeneous programming models are host-device models: to perform a
computation, data must first be moved to the device and then moved back out after
processing. Data movement therefore falls into two categories, host-to-device and
device-internal.

For CUDA and OpenCL on the GPU, host-to-device movement has a large theoretical
bandwidth, but each transfer and each kernel-function call also carries a fixed
overhead, so large data chunks are needed to achieve high bandwidth. In our
measurements, the 12-MByte AES input of Section~\ref{sec:eval} achieved
3.6\,GByte/s, while the 2-KByte FFT input achieved only 0.16\,GByte/s. Device-internal
data movement also has good bandwidth, but programmers need to understand the
attributes of the different memories (such as scratchpads, which are not coherent and are for local use only).
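
These two numbers are consistent with a simple latency-plus-bandwidth model (a back-of-the-envelope fit on our part, not an additional measurement). If a transfer of $n$ bytes takes
\[
T(n) \approx \alpha + \frac{n}{B},
\]
then fitting our two data points gives a peak bandwidth $B \approx 3.6$\,GByte/s and a fixed per-transfer overhead $\alpha \approx 12\,\mu$s. The effective bandwidth $n/T(n)$ of the 2-KByte FFT transfer is then $2048/(12\,\mu\mathrm{s} + 0.57\,\mu\mathrm{s}) \approx 0.16$\,GByte/s; that is, transfers much smaller than $\alpha B \approx 43$\,KByte are dominated by the fixed overhead.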

Host-to-device movement with OpenCL on the CPU also incurs overhead, even though
both memory regions reside in main memory; the achievable rate is limited by
memory bandwidth. In our measurements, the AES data achieved 7.1\,GByte/s.

An FPGA's host-to-device bandwidth is determined by the number of available pins
and the I/O frequency; typically it is not a bottleneck. The problem is that an
FPGA has no large internal memory, so it must act as a streaming processor: all
incoming data should be processed immediately and moved out. Device-internal data
movement is not a problem for FPGAs either; it is limited only by hardware
resources such as internal memory and logic gates.

\subsection{Fine Tuning Needed for Good Performance}

Although the CUDA and OpenCL documentation claims that programming a correct
application is not hard, a huge tuning effort is required to obtain good
performance, and many factors affect it. According to the CUDA tuning guide, an
application can show a 100x performance difference before and after fine tuning.
Here are some rules of thumb for programming GPUs with CUDA.
\begin{itemize}
  \item Memory copies: larger is better.
  \item Kernel calls: fewer is better, and asynchronous launches are better.
  \item Kernel code size: larger is better, although a large kernel may
  introduce in-kernel control flow.
  \item Kernel working size: too small and the per-call overhead dominates;
  too large and the data cannot fit into local memory.
  \item Device memory allocation: reuse allocations to reduce overhead.
  \item Synchronization and memory fences: fewer is better.
\end{itemize}
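
The asynchronous-launch and allocation-reuse rules can be sketched as follows (a hedged illustration with our own buffer names and sizes, not code from the measured applications): one allocation is reused across iterations, copies and kernels are enqueued without blocking, and a single synchronization closes the pipeline.

\lstset{language=C}
\begin{lstlisting}
// Illustrative sketch: overlap copies and kernels with one CUDA stream.
cudaStream_t s;
cudaStreamCreate(&s);
cudaMalloc(&dBuf, chunkBytes);       // allocate once, reuse every iteration
for (int i = 0; i < nChunks; ++i) {
    cudaMemcpyAsync(dBuf, hBuf + i * chunkBytes, chunkBytes,
                    cudaMemcpyHostToDevice, s);
    process<<< nBlk, nTid, 0, s >>>(dBuf);   // enqueued, returns immediately
}
cudaStreamSynchronize(s);            // one synchronization at the end
\end{lstlisting}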

Regarding OpenCL on CPU, the performance is highly dependent on whether the
kernel computation can be vectorized or not.
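
For instance, a kernel written with explicit vector types (a hypothetical sketch, not one of our benchmark kernels) gives the compiler a direct mapping onto the CPU's SIMD units:

\lstset{language=C}
\begin{lstlisting}
// Sketch: explicit float4 arithmetic maps directly onto SSE/AVX lanes.
__kernel void scale4(__global float4 *data, float k) {
    size_t i = get_global_id(0);
    data[i] = data[i] * k;   // one vector multiply per work-item
}
\end{lstlisting}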

FPGA programming must additionally take resource management into account, and
layout (place and route) is very important. All of this requires strong domain
knowledge plus a long-term effort; for example, it is quite common to spend 6 to
18 months designing a finely tuned FPGA application.

One possible solution is auto-tuning. However, because so many attributes affect
performance, the size of the search space remains a problem.

\subsection{Hard to Express Graph Task Dependence}

During porting and programming, we found that not all algorithms are suitable for
CUDA, OpenCL, or FPGAs. These heterogeneous programming models suit computations
with little data dependence, sequential task dependence, and regular data access.
They do not suit complex data dependence, graph-structured task dependence, or
irregular data access.

\begin{figure}[t]
    \centering \includegraphics[scale=0.6]{figure/tasks.jpg}
    \caption{Different Types of Task Structure}
    \label{Fig:tasks}
\end{figure}

Figure~\ref{Fig:tasks} shows three types of task structure, where each node is a
computation kernel instance. The first (leftmost) is well suited to CUDA/OpenCL
programming. The second requires many kernel-function calls to enforce the task
dependences. The last has independent kernel task instances, but because each
kernel contains internal control flow, it may cause low processing-element
utilization.

In practice, we found that among sorting algorithms, radix sort and bitonic sort
are suitable because of their regular behavior, whereas quicksort is not, since it
has strong fork-join task dependence. Another example is BFS (Breadth-First
Search). Although the search can be performed in parallel, the data access
patterns are irregular. As a result, BFS in CUDA/OpenCL cannot achieve a very
high performance gain.
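
The irregularity is visible even in a minimal level-synchronous BFS kernel (a generic CUDA-style sketch over a CSR graph, not our ported code): every neighbor access is a data-dependent gather or scatter, so memory accesses cannot be coalesced.

\lstset{language=C}
\begin{lstlisting}
// Sketch: expand one BFS level; indices make every load data-dependent.
__global__ void bfs_level(const int *rowPtr, const int *colIdx, int *dist,
                          const int *frontier, int nFrontier, int level) {
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= nFrontier) return;
    int v = frontier[t];                     // irregular gather
    for (int e = rowPtr[v]; e < rowPtr[v + 1]; ++e) {
        int u = colIdx[e];                   // neighbor index: irregular gather
        if (dist[u] < 0) dist[u] = level;    // scattered write
    }
}
\end{lstlisting}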

\subsection{OpenCL: Code Portable, Performance Not Portable}

One advantage of OpenCL is its portability: it can run on different GPUs and on
generic CPUs. However, because of this portability, it cannot exploit each
device's special features; its semantics are roughly the common subset of all the
devices. Furthermore, because different devices have very different internal
structures, performance tuning for each device remains tedious work.

Another point is that we do not think OpenCL is a good parallel programming model
for generic CPUs. First, we still need to move data from host memory to device
memory: all of it is in fact main memory, yet the host-to-device movement is
still required. Second, performance depends heavily on whether the kernel can be
vectorized. Because a CPU cannot provide massive numbers of processing units like
a GPU, an unvectorized kernel performs very poorly; in our experiment, the FFT
kernel could not be vectorized by Intel's OpenCL compiler, and as a result it ran
3 times slower than the same kernel on the GPU. Third, OpenCL offers only simple
synchronization semantics, so complex synchronization behavior is still hard to
express.

We believe a better choice for parallel programming on CPUs is a traditional
parallel programming model plus vectorization, or a CPU-specific SPMD language
such as Intel's ispc~\cite{Intel:ispc}.

\subsection{Pros and Cons of FPGA}

The advantages of FPGAs are their power efficiency and the large scope they offer
for customization. At the same time, they have many disadvantages: difficult
programmability, error-prone development, and the long time needed to develop a
finely tuned application. Another problem is cost: for large production volumes,
CPUs and ASICs are currently more cost-efficient.

From the application perspective, FPGAs suit dataflow parallelism, because
Verilog is inherently a dataflow language and the mapping is therefore
one-to-one. They also suit integer and fixed-point applications rather than
floating-point ones, since floating-point processing requires far more hardware
resources.
