\section{Implementation}\label{impl}

As explained earlier, the structure of the algorithm permits a purely data-parallel solution (a so-called \textit{map}) followed by some operations on matrices and vectors. The map skeleton exploits a potentially very fine grain and follows the \textit{Single Instruction Multiple Data} (SIMD) paradigm. Furthermore, matrix multiplication can be implemented very efficiently in CUDA, as explained in~\cite{CUDA}. These are the main reasons why we chose a GPU with CUDA.

\subsection{The Application: a high-level view}

At a high level, the interpolation module looks like a producer that uses a virtual channel to send tasks to a consumer, in a sort of \textit{producer-consumer} synchronization, as in Algorithm~\ref{alg1}. This is not entirely accurate: there are important differences with respect to the usual communication pattern. The producer is not a true producer but only an intermediate node between the real producer (the camera with the laser) and the consumer. Moreover, the consumer is not a true consumer because it is not an autonomous module: it is driven by the producer itself. Obviously, at this abstraction level the internal behaviour is not visible, but the run-time support of the channel implements it, as we will see in the next subsections. For this reason, it is important to note that the primitive at line~\ref{submit} is a blocking operation.

\begin{algorithm}%[h!]
\caption{ Interpolation Module}
\label{alg1}
\end{algorithm}
\begin{algorithmic}[1]
\algsetup{indent=1em}
\renewcommand{\algorithmiccomment}[1]{/* #1 */}
\REQUIRE A stream of images
\REQUIRE A CUDA device
\ENSURE Receives images from the camera and submits them to a consumer that computes the Gaussian parameters
\medskip
\STATE $ch \leftarrow createChannel\ (device)$
\WHILE {true}
\STATE $receiveFromCamera\ (image)$
\STATE $submit\ (ch, image)$\label{submit}
\ENDWHILE
\end{algorithmic}

\subsection{Structure of the Project}

The project can be downloaded from~\cite{PROJECT}, in the sub-folder \textit{trunk/Project/CUDAVersion}. It is composed of the following files:
\begin{itemize}
\item \textbf{gaussian.h}, \textbf{image.h} and \textbf{image.c} are pure \textit{C} files that contain the data structures and the functions for managing Gaussian images. They are mostly taken from the previous version of the project, and some functions declared and implemented in these files are used for debugging purposes (to do so, they use the \textit{tiff} library).
\item \textbf{channel.h} and \textbf{channel.c} implement the virtual channel introduced in the previous subsection and explained in depth below.
\item \textbf{interpolator.c} implements Algorithm~\ref{alg1}. It is important to note that under real conditions the interpolation module, as stated before, waits for an image from an external module. Usually this communication happens via sockets, but it may happen, especially in the future, that the interpolation module is executed on the same machine as the camera module, changing the way in which they communicate. It is easy (and necessary for real-world use) to enable communication with a camera module, and the previous versions~\cite{SPM} already contain code for this behaviour.
Actually, this file contains slightly different code, in which the interpolation module creates the stream itself. In particular, only one image is created initially and it is reused at each iteration, achieving a very low inter-arrival time (the \textit{minimal} one). We did this because we wanted to test the application \textit{under stress}.
\item \textbf{kernels.cu} contains the extended C functions for the CUDA architecture, i.e. \textit{global} and \textit{device} functions, plus some ``wrappers'' that permit invocation from pure C files.
\item \textbf{Makefile} containing the compilation options. The file is listed below.
\end{itemize}
\ \\
\lstinputlisting{./makefile.txt}
\ \\
\subsection{Channel Run-Time Support: the pseudo-code}

As shown in Algorithm~\ref{alg2}, the virtual channel between the producer (host module) and the consumer (device module) is implemented by a channel \textit{descriptor} \textit{ch} defined in \textbf{channel.h} and \textbf{channel.c}. It is composed of a buffer and a set of fields (such as the indexes \textit{ins} and \textit{est}) used to achieve the expected behaviour. A set of operations permits creating, operating on, and destroying it.
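For clarity, the descriptor can be sketched as a plain C structure. This is only a sketch: the field names follow the pseudo-code (\textit{ins}, \textit{est}, \textit{frees}, \textit{throwed}), while the concrete types and the device-side allocation in \textbf{channel.h} may differ.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Sketch of the channel descriptor, assuming the field names used in the
 * pseudo-code; the concrete declarations in channel.h may differ. In the
 * real code the buffer resides in GPU memory, while the index fields stay
 * in CPU memory; here the buffer is modelled as an array of opaque image
 * pointers. */
typedef struct {
    void **buffer;  /* circular buffer of images (device memory in practice) */
    int    size;    /* buffer capacity */
    int    ins;     /* index of the next free slot (insertion) */
    int    est;     /* index of the image being extracted/processed */
    int    frees;   /* number of free slots */
    bool   throwed; /* true if a processed slot still has to be released */
} channel_t;

/* createChannel analogue: allocates and initialises the descriptor. */
static channel_t *channel_create(int size) {
    channel_t *ch = malloc(sizeof *ch);
    ch->buffer  = calloc((size_t)size, sizeof *ch->buffer);
    ch->size    = size;
    ch->ins     = 0;
    ch->est     = 0;
    ch->frees   = size;
    ch->throwed = false;
    return ch;
}

static void channel_destroy(channel_t *ch) {
    free(ch->buffer);
    free(ch);
}
```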

Briefly, the images are stored in the buffer and, if the GPU subsystem is not already busy, \textit{sequential} kernels over the appropriate image are launched on it. This is required because the majority of CUDA-enabled devices are not yet able to execute multiple kernels concurrently. It is also necessary because of the dependency on the Gaussian parameters of the previous image, as stated in the Introduction. The kernel functions are implemented according to the parallel scheme described above; we will see them in detail in the next subsection. The other operations needed to complete the Gauss-Newton algorithm are implemented using the CULA parallel library~\cite{CULA, CULAAPI}. Testing whether all the operations involving an image have terminated is performed using the CUDA \textit{Events} mechanism~\cite{CUDA}. Notice that in the first iteration the condition at line~\ref{GPUnotworking} will be true, because the CUDA primitive used returns \textbf{cudaSuccess} when \textit{recordEvent} has not been called yet~\cite{CUDAAPI}. However, this causes a problem with the count of free slots in the buffer; the correct behaviour is achieved using the \textit{throwed} field of the descriptor.

\begin{algorithm}%[h!]
\caption{Submit Function}
\label{alg2}
\end{algorithm}
\begin{algorithmic}[1]
\algsetup{indent=1em}
\renewcommand{\algorithmiccomment}[1]{/* #1 */}
\REQUIRE An image
\ENSURE Executes the so-called \textit{submit} operation
\medskip
\IF [buffer is full] {$ch.frees = 0$}
\STATE $waitEvent\ (finish)$\label{wait}
\STATE $++(ch.frees)$
\STATE $++(ch.est)\%ch.size$
\STATE $<M, D> \leftarrow kernels\ (ch.buffer [ch.est], parameters)$\label{optimization}
\STATE $MG \leftarrow M'\times M$
\STATE $DG \leftarrow M'\times D$
\STATE $delta \leftarrow solve\ (MG, DG)$
\STATE $parameters \leftarrow update\ (delta)$
\STATE $recordEvent\ (finish)$
\ENDIF
\STATE $ch.buffer [ch.ins] \leftarrow image$\label{transfer}
\STATE $++(ch.ins)\%ch.size$
\STATE $--(ch.frees)$
\IF [GPU not working] {$queryEvent\ (finish)$}\label{GPUnotworking}
\IF {$ch.throwed$}
\STATE $++(ch.frees)$
\STATE $++(ch.est)\%ch.size$
\STATE $ch.throwed \leftarrow false$
\ENDIF
\STATE $<M, D> \leftarrow kernels\ (ch.buffer [ch.est], parameters)$\label{kernel}
\STATE $MG \leftarrow M'\times M$
\STATE $DG \leftarrow M'\times D$
\STATE $delta \leftarrow solve\ (MG, DG)$
\STATE $parameters \leftarrow update\ (delta)$
\STATE $recordEvent\ (finish)$\label{kernel5}
\STATE $ch.throwed \leftarrow true$
\ENDIF
\end{algorithmic}

\ \\It is worth noting that part of the channel descriptor (the buffer) resides in the GPU subsystem, while the rest is stored in CPU memory for faster access when updating the fields, since only the process executing on the CPU modifies them. Furthermore, the operations at lines~\ref{transfer},~\ref{GPUnotworking} and from~\ref{kernel} to~\ref{kernel5} are \textit{not} blocking, so some of them can be overlapped; e.g., while the GPU is already working, the transfer of a new image into the buffer can start. We can summarize the behaviour in two possible scenarios according to a simple cost model.

Consider a general module $M$ with \textit{ideal} service time $t_s$ that works on a stream with inter-arrival time $t_a$, as shown in Figure~\ref{modulo}. All times are mean values and, for simplicity, we assume small variances. The real service time $t_p$ of the module will be:

\begin{enumerate}
\item equal to $t_s$ when, on average, the module is a bottleneck. This means that, sooner or later, the buffer will fill up and the situation will become the same as receiving each image directly. In this situation, if it is not possible to parallelize the module further, the buffer is not useful.
\item equal to $t_a$ when the module is fast enough to process the requests. In this case the buffer is also not useful because, if $t_a$ is greater than $t_s$, by the time an image arrives the previous one has already been processed.
\end{enumerate}
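The two scenarios above can be condensed into a single steady-state relation:
\[
t_p\ =\ \max\ (t_s,\ t_a)
\]
that is, the buffer can never push the real service time below the larger of the ideal service time and the inter-arrival time; it can only absorb transient fluctuations around these mean values.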

\begin{figure}[t]
        \centerline{
               \mbox{\includegraphics[]{modulo}}
        }
        \caption{General module with ideal service time $t_s$ and real service time $t_p$}
		\label{modulo}
\end{figure}

Nevertheless, using (only) one more slot in the buffer can help the module avoid the bottleneck situation. In fact, if we overlap the copy of one image with the processing of another, we can achieve a smaller theoretical inter-arrival time with respect to the case in which we copy and process the images in sequence. Increasing the asynchrony degree further is possible in principle but, in practice, not worthwhile, because it is much more complicated to handle. This is the reason for the chosen buffer size.
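As a purely illustrative instance of this cost model (the timings below are hypothetical, not measurements of the application), overlapping the copy of image $i+1$ with the kernels on image $i$ lowers the steady-state time per image from the sum of the two phases to their maximum:

```c
/* Illustrative cost model only: t_copy and t_exec are hypothetical mean
 * times (e.g. in ms) for copying an image into the buffer and executing
 * the kernels on it, respectively. */

/* Without the extra buffer slot, copy and execution happen in sequence. */
static double per_image_sequential(double t_copy, double t_exec) {
    return t_copy + t_exec;
}

/* With one extra slot, the copy of image i+1 overlaps the kernels on
 * image i, so the longer of the two phases dominates. */
static double per_image_overlapped(double t_copy, double t_exec) {
    return t_copy > t_exec ? t_copy : t_exec;
}
```

For instance, with a hypothetical 2\,ms copy and 5\,ms execution, the sequential scheme yields 7\,ms per image while the overlapped one yields 5\,ms.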

If the buffer becomes full, the algorithm waits for the termination of at least one task: this means that the host process executing the code is blocked until the event occurs.

Some optimizations are present, e.g., launching the kernels over the next image immediately after the termination of the previous one when the buffer was full (line~\ref{optimization}).

\subsection{Kernel Functions}

In this subsection we describe how we implemented in CUDA the functions computing the matrix and the vector needed for computing the gradients (lines~\ref{v} and~\ref{m} in Algorithm~\ref{alggn}).
\ \\
\lstinputlisting{./kernels.cu}
\ \\
As can be seen in the code listed above, the implementation of the CUDA kernel for finding those vectors uses \textit{shared} memory in order to reduce conflicts on global memory, achieving better performance as explained in~\cite{CUDA}. Furthermore, it uses another function (\textit{evaluateGaussian}) that cannot be called from the host because it is implemented entirely for the device (the keyword \textit{\_\_device\_\_} is used in its definition).

According to the SIMD paradigm, the code is the same for all threads, so each thread computes its own index using the CUDA built-in variables \textit{threadIdx.x} and \textit{blockIdx.x}.

Since the number of blocks needed is computed as
\[
num\_blocks = (\ dim\_image\ +\ DIM\_BLOCK\ -\ 1\  )\  /\ DIM\_BLOCK
\]
it is possible that the last block contains threads that have no element to work on; hence, a check on the index is performed in this function. If the index is valid, the thread gathers its data partition and works on it.
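The host-side grid-size computation and the per-thread guard can be sketched as follows (here $DIM\_BLOCK = 256$ is an assumption for illustration; the project may use a different block size):

```c
/* Grid-size computation used when launching the kernels: the ceiling
 * division guarantees num_blocks * DIM_BLOCK >= dim_image, so the last
 * block may contain threads with no element assigned; those threads are
 * filtered out by the index check inside the kernel. DIM_BLOCK = 256 is
 * an illustrative value, not necessarily the one used in the project. */
#define DIM_BLOCK 256

static int num_blocks(int dim_image) {
    return (dim_image + DIM_BLOCK - 1) / DIM_BLOCK;
}

/* Host-side analogue of the per-thread guard: thread `thread` of block
 * `block` works only if its global index falls inside the image. On the
 * device the index would be blockIdx.x * blockDim.x + threadIdx.x. */
static int thread_has_work(int block, int thread, int dim_image) {
    int i = block * DIM_BLOCK + thread;
    return i < dim_image;
}
```

For a 1000-element image, four blocks of 256 threads are launched and the last 24 threads of the last block are idle.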

As mentioned in the Introduction, each element can be computed \textit{independently} from the others in a map approach, so synchronization mechanisms between threads are not needed.

The kernel for finding the matrix is analogous, so it is not reported here.

We tried to improve the performance of the kernels using \textit{intrinsic} functions as explained in~\cite{CUDA}, but this approach is not viable because of the reduced accuracy.

\subsection{CULA Primitives}

According to Algorithm~\ref{alggn}, the gradient matrix and the gradient vector must now be found. After that, we have to solve a linear system and update the Gaussian parameters. To do all this, it is necessary to:
\begin{enumerate}
\item multiply the transpose of the matrix computed in the previous step by the matrix itself (line~\ref{mxm} in Algorithm~\ref{alggn});
\item multiply the transpose of the matrix computed in the previous step by the vector (line~\ref{mxv} in Algorithm~\ref{alggn});
\item solve the linear system.
\end{enumerate} 

We implemented these operations using functions of the CULA library~\cite{CULA, CULAAPI}. It is important to point out how the library works in order to explain some design choices.

First of all, CULA is based on \textit{CUBLAS}~\cite{CUBLAS}, a CUDA implementation of the BLAS algebraic interface, a set of high-quality ``building block'' routines for performing basic vector and matrix operations. However, CUBLAS alone is not sufficient, because the Gauss-Newton algorithm also requires solving a linear system. CULA, instead, offers not only the three BLAS levels but also other primitives for performing factorizations and solving linear systems efficiently. Having said that, we can conclude that CULA is a good way to avoid depending on more than one library.

Furthermore, it is possible to use those functions through the so-called \textit{Device} interface, which permits users to operate directly on GPU memory, as we must do in our case for performance reasons. In fact, it would be very costly if we had to copy data from GPU to CPU memory space before calling the library routines.

As explained in~\cite{CULA}, data provided to CULA routines must be stored in \textit{column-major order}. Looking at the code of the kernels, we can see that, as is familiar to many C programmers, data is stored in row-major order. At first sight, this requirement might appear costly in terms of performance, due to the transposition needed to reconcile the two layouts. Luckily, in our specific case the requirement turns out to be very useful, because a matrix in row-major order is \textit{already} the transpose of the same matrix stored in column-major order. Therefore, the operations $(gradient_m)^{T} * gradient_m$ and $(gradient_m)^{T} * diff_v$ of Algorithm~\ref{alggn} become, in the actual implementation, $gradient_m * (gradient_m)^{T}$ and $gradient_m * diff_v$, as in the Listing reported below.
\ \\
\lstinputlisting{./cula.txt}
\ \\
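The layout identity exploited here can be checked with a small host-side C program; this is a toy example, independent of the actual project code, whose matrices live in GPU memory.

```c
#include <stddef.h>

/* Demonstrates the layout identity exploited with CULA: an m-by-n matrix
 * stored in row-major order occupies memory exactly like its n-by-m
 * transpose stored in column-major order, so no data movement is needed. */

/* element (i,j) of a row-major matrix with `ncols` columns */
static double rm_get(const double *a, size_t i, size_t j, size_t ncols) {
    return a[i * ncols + j];
}

/* element (r,c) of a column-major matrix with `nrows` rows */
static double cm_get(const double *a, size_t r, size_t c, size_t nrows) {
    return a[r + c * nrows];
}

/* Returns 1 iff, for every (i,j), reading the row-major buffer of an
 * m-by-n matrix A as a column-major n-by-m matrix B gives B(j,i) == A(i,j),
 * i.e. B is exactly A transposed. The two offsets i*n + j and j + i*n
 * coincide, so the check always holds. */
static int rowmajor_is_transposed_colmajor(const double *a, size_t m, size_t n) {
    for (size_t i = 0; i < m; ++i)
        for (size_t j = 0; j < n; ++j)
            if (cm_get(a, j, i, n) != rm_get(a, i, j, n))
                return 0;
    return 1;
}
```

For example, the 2x3 row-major matrix $\{1,2,3;\,4,5,6\}$, reinterpreted as a 3x2 column-major matrix, is precisely its transpose.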
Finally, another (purely sequential) CUDA kernel is launched in order to update the Gaussian parameters in time for the next iteration.