\section{Parallel Computation Models}
\label{sec:programmingmodel}

For this project we identified three programming models that are widely
used today to program individual heterogeneous elements. Below is a brief
description of each of them.

\subsection{CUDA}

CUDA is a general-purpose parallel computing architecture developed by
NVIDIA. Currently it is limited to NVIDIA graphics processing units (GPUs).
Because of their high computational power, GPUs have traditionally been used
in the graphics and 3D gaming domains for compute-intensive workloads. As the
computing power of GPUs continues to grow, the demand to use them for
general-purpose computing has also increased, and the old graphics programming
model is not suitable for this purpose. CUDA targets general-purpose parallel
computing, and it has several advantages over traditional general-purpose
computation on GPUs using graphics APIs:
(1) scattered reads: code can read from arbitrary addresses in memory;
(2) shared memory: CUDA exposes a fast shared memory region (up to 48\,KB per
multiprocessor) that can be shared among threads. This region can be used as a
user-managed cache, enabling higher bandwidth than is possible using texture
lookups; (3) faster downloads and readbacks to and from the GPU; (4) full
support for integer and bitwise operations, including integer texture lookups.
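
To illustrate advantage (2), the sketch below stages data in shared memory
before operating on it. This is a hedged illustration rather than code from
the project: it assumes a single thread block whose size equals the array
length (at most 64 here), and the in-block array reversal is chosen only as a
minimal example of threads sharing staged data.

\lstset{language=C}
\begin{lstlisting}
// Sketch: shared memory as a user-managed cache (illustrative only)
__global__ void ReverseInBlock(float* d, int n) {
	__shared__ float s[64];   // fast on-chip shared memory
	int t = threadIdx.x;
	s[t] = d[t];              // each thread stages one element
	__syncthreads();          // wait until the whole tile is loaded
	d[t] = s[n - t - 1];      // read another thread's staged element
}
\end{lstlisting}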

The architecture of a CUDA program is as follows: there are two kinds of
processors in the system, the host and the device. Specifically, the host
refers to the CPU and the device refers to the GPU. The CUDA programming model
also assumes that the host and the device maintain their own separate memory
spaces in DRAM, referred to as host memory and device memory. In each
execution of the application, all tasks are first allocated on the CPU, which
then allocates memory on the device (GPU). Having allocated the device memory,
the host copies the data from host memory to device memory and then invokes
the CUDA kernel, which executes on the device. Memory can be allocated as
linear memory or as a CUDA array. When the device has completed the execution,
it transfers the results back to host memory, where the host reads the final
results. Kernels on the GPU can operate only on device memory.

Below is a very simple computation kernel in CUDA for vector addition. Each
invocation of the kernel works on only one element of the vector, and the CUDA
runtime executes these per-element tasks in parallel, grouped into thread
blocks.

\lstset{language=C}
\begin{lstlisting}
// Vector Add computation Kernel
__global__ void VecAdd(const float* A, const float* B, float* C, int N) {
	int i = blockDim.x * blockIdx.x + threadIdx.x;
	if (i < N)
		C[i] = A[i] + B[i];
}
\end{lstlisting}
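
The host-side steps described in the text (allocate device memory, copy data
in, launch the kernel, copy results out) can be sketched as follows. This is a
minimal illustration rather than the project's actual host code: error
checking is omitted, and \texttt{N} and the host arrays are assumed to be
defined and initialized elsewhere.

\lstset{language=C}
\begin{lstlisting}
// Host-side sketch for the VecAdd kernel (error checking omitted)
float *hA, *hB, *hC;              // host arrays, assumed initialized
float *dA, *dB, *dC;              // device pointers
size_t bytes = N * sizeof(float);
cudaMalloc((void**)&dA, bytes);   // allocate device memory
cudaMalloc((void**)&dB, bytes);
cudaMalloc((void**)&dC, bytes);
cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);  // host -> device
cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);
int threads = 256;
int blocks  = (N + threads - 1) / threads;
VecAdd<<<blocks, threads>>>(dA, dB, dC, N);         // run on the GPU
cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);  // device -> host
cudaFree(dA); cudaFree(dB); cudaFree(dC);
\end{lstlisting}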

\subsection{OpenCL}
Open Computing Language (OpenCL) is another framework for programming
heterogeneous platforms. Unlike CUDA, which is specific to NVIDIA GPUs, OpenCL
can program different kinds of platforms, including GPUs from different
vendors and general-purpose CPUs. OpenCL is not bound to any particular
hardware, so it is possible to use OpenCL to program FPGAs, DSPs, etc. In
fact, OpenCL defines only a set of runtime APIs; each vendor must implement
the runtime support for its own computation devices. For example, Intel's
OpenCL runtime implements the API, and so does NVIDIA's. At run time, when a
user selects a general-purpose CPU as the computation device and calls the API
\textit{clBuildProgram}, Intel's OpenCL compiler is invoked dynamically to
compile the computation kernel.

The basic programming idea is similar to CUDA. An application is divided into
two parts, the host part and the computation device part. Users write the
computation kernel in a special language (based on C99), and the kernel is
compiled for and executed on the computation device. The host part is written
in an ordinary programming language such as C/C++. The host code looks for the
computation device, prepares the data set, moves data from the host to the
computation device, invokes the computation kernel on the device, and moves
the results back. Below is the same vector-add computation kernel in OpenCL.
As the example shows, the computation kernel in OpenCL is quite similar to the
CUDA version.

\lstset{language=C}
\begin{lstlisting}
// Vector Add computation Kernel
__kernel void VectorAdd(__global const float* a, __global const float* b,
                        __global float* c, int iNumElements) {
	// get index into global data array
	int iGID = get_global_id(0);

	// bound check (equivalent to the loop bound in standard/serial C code)
	if (iGID < iNumElements)
		c[iGID] = a[iGID] + b[iGID];
}
\end{lstlisting}

Unlike CUDA, OpenCL exposes a much lower-level API. It is the user's
responsibility to query the available platforms and choose the right one. The
user also needs to load the kernel source code and compile the kernel at run
time through the compilation API. Because of this, programming in OpenCL is
not as convenient as programming in CUDA. However, the user has more flexible
control over the execution of an application. For example, it is possible to
replace the computation kernel dynamically.
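
The low-level host-side sequence described above (query the platform, build
the program at run time, launch the kernel) might look like the following
simplified sketch. Names such as \texttt{source}, \texttt{cBuf},
\texttt{bytes}, and \texttt{c} are placeholders assumed to be set up
elsewhere, and all error handling is omitted.

\lstset{language=C}
\begin{lstlisting}
// Host-side OpenCL sketch (simplified; error handling omitted)
cl_platform_id platform;
cl_device_id   device;
clGetPlatformIDs(1, &platform, NULL);               // query a platform
clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);
cl_program prog =
	clCreateProgramWithSource(ctx, 1, &source, NULL, NULL);
clBuildProgram(prog, 1, &device, NULL, NULL, NULL); // run-time compile
cl_kernel k = clCreateKernel(prog, "VectorAdd", NULL);
// ... create buffers and call clSetKernelArg(...) for each argument ...
size_t global = iNumElements;
clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
clEnqueueReadBuffer(q, cBuf, CL_TRUE, 0, bytes, c, 0, NULL, NULL);
\end{lstlisting}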

\subsection{Verilog: Dataflow programming model}
Dataflow programming is a programming paradigm that models a program as a directed graph of the data flowing between operations, thus implementing dataflow principles and architecture.
Dataflow programming focuses on ``\textit{how things connect}", unlike imperative programming, which focuses on ``\textit{how things happen}".

In imperative programming, a program is modeled as a series of operations 
and the flow of data between these operations is of secondary concern to the 
behavior of the operations themselves, whereas dataflow programming models the program 
as a series of connections with the operations between these connections of 
secondary importance. Thus, a dataflow program is more like a series of operations
ready to be executed as soon as the input data arrives. This is why dataflow languages 
are inherently parallel.
     
Programming languages for FPGAs (such as Verilog and Bluespec~\cite{Bluespec})
qualify as dataflow languages, and since recent studies have shown that
applications on FPGAs can achieve high energy efficiency relative to
conventional processors~\cite{Kaufmann:2008}, dataflow programming is an
important parallel programming model for this project. Here is sample Verilog
code for vector addition. As is evident from the example, Verilog is a very
low-level language with operations defined at the bit level. However, it
provides a lot of flexibility to the programmer.
\lstset{language=verilog}
\begin{lstlisting}
module vectorAdd(A, B, Sum);
	parameter N = 100;
	parameter width = 8;
	
	input [N*width-1:0] A;
	input [N*width-1:0] B;
	output [N*width-1:0] Sum;

	genvar i;
	generate
		for(i = 0 ; i < N ; i = i+1) begin: gen_add
			adder #(width)
			add(.a(A[(i+1)*width-1:i*width]), 
			    .b(B[(i+1)*width-1:i*width]), 
			    .sum(Sum[(i+1)*width-1:i*width]));          	 
		end
	endgenerate
endmodule

module adder(a, b, sum);
	parameter width = 8;
	input  [width-1:0] a;
	input  [width-1:0] b;
	output [width-1:0] sum;
        
	assign sum = a + b;
endmodule

\end{lstlisting} 
