\section{Introduction}  

The algorithm we implement in a parallel fashion using NVIDIA CUDA for GPUs~\cite{CUDA, ARCH} is the Gauss-Newton algorithm for non-linear least-squares interpolation.
In this particular case we study an implementation of the algorithm for the three-dimensional interpolation of images representing Gaussian beams. Through interpolation it is possible to monitor the output of a high-definition camera with a precision down to micrometres; this operation, however, requires a large amount of computation, which is the main motivation for a parallel version of the program.

The environment where the algorithm is used is explained in~\cite{SPM}.

The sequential version of the algorithm was developed in summer 2009 for the interpolation of images of laser beams captured by digital cameras. A parallel version using the MPI standard~\cite{MPI} was made in 2010~\cite{SPM}. The idea was to implement both a farm and a map version of the algorithm, targeting multiprocessor and cluster architectures. As expected, the outcome was clear: beyond a certain point communication became the bottleneck on the cluster, so the multiprocessor achieved the best performance~\cite{SPM}.

The aim of this project is twofold: to realize the map version of the algorithm using CUDA, so that the results can be compared with the previous ones, and, at the same time, to become familiar with this framework.

\subsection{Algorithm}

The algorithm is the well-known Gauss-Newton algorithm for non-linear least-squares problems.
As the name suggests, it does not perform an exact interpolation but tries to minimize the squared error between the interpolated function and the real data.
Linear methods, such as polynomial interpolation, can be solved in a single step that touches every point (pixel) only once; non-linear methods are iterative and converge after a certain number of steps. The algorithm is shown in Algorithm~\ref{alggn}: it has to be run for each image in the stream generated by the camera.
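It is useful to recall the update each Gauss-Newton step performs. Writing $r$ for the vector of residuals of line~\ref{v} and $J$ for the $N \times p$ matrix of partial derivatives of line~\ref{m} (with $p$ the number of Gaussian parameters), lines~\ref{mxm}--\ref{mxv} build the normal equations
\[
(J^{T} J)\,\delta = J^{T} r,
\]
and the final line updates the parameters with the solution: $parameters \leftarrow parameters + \delta$.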

It is important to point out some interesting facts. In principle the algorithm could iterate indefinitely, but since we are dealing with finite-precision arithmetic it converges after a finite number of steps. Moreover, since we are monitoring a slowly changing environment, a single iteration per image is sufficient to converge to a result with the desired precision.

\begin{algorithm}%[h!]
\caption{Gauss-Newton Algorithm}
\label{alggn}
\begin{algorithmic}[1]
\algsetup{indent=1em}
\renewcommand{\algorithmiccomment}[1]{/* #1 */}
\REQUIRE An image with $N$ pixels
\REQUIRE The Gaussian parameters
\ENSURE The interpolation of the image
\medskip
\FORALL{$i\ \mid\ 0\leq i< N$}
\STATE $diff_v[i] \leftarrow image[i] - evaluateGaussian\ (parameters,\ image[i])$\label{v}
\STATE $gradient_m[i][] \leftarrow computeGradients\ (parameters,\ image[i])$\label{m}
\ENDFOR
\STATE $gradient_1 \leftarrow (gradient_m)^{T} * gradient_m$\label{mxm}
\STATE $gradient_2 \leftarrow (gradient_m)^{T} * diff_v$\label{mxv}
\STATE $parameters \leftarrow parameters + LUSolve\ (gradient_1,\ gradient_2)$
\end{algorithmic}
\end{algorithm}
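To make the per-pixel phase of Algorithm~\ref{alggn} concrete, here is a minimal sequential C++ sketch of lines~\ref{v} and~\ref{m}. The one-dimensional Gaussian model, the \texttt{Params} layout and the pixel-coordinate array are illustrative assumptions; the real program fits a three-dimensional beam profile.

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical parameter set for a 1-D Gaussian A * exp(-(x - mu)^2 / (2 sigma^2));
// the actual program works on a 3-D beam profile.
struct Params { double A, mu, sigma; };

double evaluateGaussian(const Params& p, double x) {
    double d = x - p.mu;
    return p.A * std::exp(-d * d / (2.0 * p.sigma * p.sigma));
}

// Partial derivatives of the model with respect to (A, mu, sigma) at coordinate x:
// these form one row of the gradient matrix of line 3 of the pseudo-code.
void computeGradients(const Params& p, double x, double grad[3]) {
    double f = evaluateGaussian(p, x);
    double d = x - p.mu;
    grad[0] = f / p.A;                                    // df/dA
    grad[1] = f * d / (p.sigma * p.sigma);                // df/dmu
    grad[2] = f * d * d / (p.sigma * p.sigma * p.sigma);  // df/dsigma
}

// The per-pixel "map" phase: every iteration is independent of the others,
// which is exactly what makes it a natural candidate for a CUDA kernel
// (one thread per pixel instead of this loop).
void mapPhase(const Params& p, const std::vector<double>& xs,
              const std::vector<double>& image,
              std::vector<double>& diff_v,
              std::vector<std::array<double, 3>>& gradient_m) {
    for (std::size_t i = 0; i < image.size(); ++i) {
        diff_v[i] = image[i] - evaluateGaussian(p, xs[i]);
        computeGradients(p, xs[i], gradient_m[i].data());
    }
}
```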

As we can observe in the pseudo-code, and as pointed out in~\cite{SPM}, the algorithm has different phases, each of them amenable to a parallel version. In particular, the first two phases need only a single pixel of the image to work on, so a parallel version of both can be achieved with a \textit{map} approach. The operations in lines~\ref{mxm} and~\ref{mxv} are matrix-matrix and matrix-vector multiplications, for which well-known parallelization strategies exist. The last one, i.e.\ the resolution of a linear system, is a little more difficult, but high-performance libraries exist for this purpose; we will use the CULA library~\cite{CULA, CULAAPI}.
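The reduction and solve phases (lines~\ref{mxm} onward) can be sketched in sequential C++ as follows, assuming a three-parameter model with the $N \times 3$ Jacobian stored row-major. The small Gaussian-elimination solver below merely stands in for the LU solver the project takes from CULA; all names and sizes are illustrative.

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Form the normal-equations system of lines 5-6 of the pseudo-code:
// g1 = J^T * J (3x3) and g2 = J^T * r, with one Jacobian row per pixel.
void normalEquations(const std::vector<double>& J, const std::vector<double>& r,
                     std::size_t N, double g1[3][3], double g2[3]) {
    for (int a = 0; a < 3; ++a) {
        g2[a] = 0.0;
        for (int b = 0; b < 3; ++b) g1[a][b] = 0.0;
    }
    for (std::size_t i = 0; i < N; ++i) {
        for (int a = 0; a < 3; ++a) {
            g2[a] += J[3 * i + a] * r[i];
            for (int b = 0; b < 3; ++b)
                g1[a][b] += J[3 * i + a] * J[3 * i + b];
        }
    }
}

// Plain Gaussian elimination with partial pivoting, standing in for the
// library LU solver. Solves A * delta = b in place (A and b are modified).
void luSolve3(double A[3][3], double b[3], double delta[3]) {
    for (int k = 0; k < 3; ++k) {
        int best = k;                       // pivot: largest entry in column k
        for (int i = k + 1; i < 3; ++i)
            if (std::fabs(A[i][k]) > std::fabs(A[best][k])) best = i;
        for (int j = 0; j < 3; ++j) std::swap(A[k][j], A[best][j]);
        std::swap(b[k], b[best]);
        for (int i = k + 1; i < 3; ++i) {   // eliminate below the pivot
            double m = A[i][k] / A[k][k];
            for (int j = k; j < 3; ++j) A[i][j] -= m * A[k][j];
            b[i] -= m * b[k];
        }
    }
    for (int k = 2; k >= 0; --k) {          // back substitution
        delta[k] = b[k];
        for (int j = k + 1; j < 3; ++j) delta[k] -= A[k][j] * delta[j];
        delta[k] /= A[k][k];
    }
}
```

The `delta` returned by the solver is what line 7 of the pseudo-code adds to the current parameters.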

It is worth noting that the algorithm requires the Gaussian parameters found at the previous step, so a \textit{dependency} between elements of the stream is present.

\subsection{Why CUDA}

GPUs have evolved from hardware executors of the graphics pipeline into programmable parallel processors. Today's GPUs are highly parallel and powerful subsystems containing many homogeneous processors with a large number of floating-point units, and they have become attractive for solving more than just graphics computations.

Initially, the \textit{General Purpose computation on GPU} (GPGPU) approach was used: it involves programming GPUs through a graphics API and the graphics pipeline to perform non-graphics tasks. Nowadays, every laptop or desktop computer usually contains both a CPU (in turn either a multiprocessor or multi-core) and a GPU, forming a heterogeneous multiprocessor that can combine the power of both subsystems to achieve excellent performance.

\textit{Compute Unified Device Architecture} (CUDA)~\cite{CUDA, ARCH} is a scalable, SIMD-based parallel programming model and software platform for GPUs and other parallel processors that allows the programmer to use a language based on C/C++.

The CUDA architecture offers very high performance when the application has a regular pattern and exposes a large amount of data parallelism. CUDA may therefore not be applicable in many cases, e.g.\ with irregular structures or where communication has a strong impact (\textit{stencil} computations). Looking at the algorithm above, we can recognise that the CUDA paradigm fits.

In addition to all these aspects, it is important to say that over the last few years the core architecture has been incrementally improved, often with new features (the different \textit{compute capabilities} of devices~\cite{CUDA}). In practical terms, this means that ``hardware'' limitations, e.g.\ the number of available registers per block, are no longer a constraint for some computations.

Besides this, improvements are present from the software point of view as well, with various algebraic libraries now available.

As we will see at the end, this project serves as an example of what can be done nowadays with CUDA and GPUs.
