\section{Results and Conclusions}
\subsection{Results}

The computation operates on a stream even when dependencies are present, so the service time is the principal metric used to measure performance. We recall that all tests were performed \textit{under stress}, as mentioned in the Implementation section.

\begin{figure}[t]
        \centerline{
               \mbox{\includegraphics[scale=0.7]{grafico1}}
        }
        \caption{Service time of the application for different data set sizes and GPUs}
		\label{gr1}
\end{figure}

The results are shown in Figure~\ref{gr1}. First of all, it is important to describe the characteristics (reported in Appendix A of~\cite{CUDA}) of the three CUDA GPUs used to test the application:

\begin{enumerate}
\item \textbf{GeForce 9400}: compute capability 1.1, 16 CUDA cores 
\item \textbf{GeForce GTX 285}: compute capability 1.3, 240 CUDA cores 
\item \textbf{GeForce GF100}: compute capability 2.0, 512 CUDA cores 
\end{enumerate}

As expected, the worst performance is achieved with the first GPU. At first sight this is unsurprising, since this GPU has the lowest compute capability and the lowest degree of parallelism; both factors limit performance.

In fact, the small number of registers available at this compute capability limits the number of simultaneously \textit{active warps} inside the multiprocessors during the execution of both kernels. According to the \textit{CUDA GPU Occupancy Calculator}, this caps the occupancy of the GPU at \textit{at most} about $67\%$, so performance is limited from the start.

Furthermore, when the data set is large the program creates many \textit{blocks}, so the low degree of parallelism becomes a problem: the CUDA cores are no longer able to execute all the \textit{warps} that the CUDA runtime creates in response to so many \textit{blocks}.

The second GPU improves performance considerably, achieving a service time that is almost half of the previous one. This large gain is due to the (much) higher number of CUDA cores, which allows many more \textit{warps} to execute in parallel than before. In addition, the more powerful hardware succeeds where the previous one failed: now only one kernel is not executed at full efficiency ($75\%$ GPU occupancy).

Finally, launching the application on the last GPU yields more or less the same service time. Despite reaching maximum occupancy for both kernels thanks to the 2.0 compute capability, the performance is only slightly better than before. This means that something is limiting the scalability of the program. The likely causes are \textit{centralized points} that become bottlenecks when high degrees of parallelism are involved. In this particular case, we note that the kernels use \textit{shared memory} to reduce conflicts on \textit{global memory} only while loading the Gaussian parameters; as explained above, this is the only place in this program where that type of memory can be used. All other cases require \textit{global memory} accesses and, when these are very frequent, the memory becomes a bottleneck that limits performance.
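The shared-memory usage described above follows the standard staging pattern: each block copies the read-only Gaussian parameters from global memory into shared memory once, synchronizes, and from then on every thread reads them from fast on-chip memory. A sketch of the pattern (the kernel name, parameter count, and body are hypothetical, not our actual kernel):

```cuda
#define NUM_PARAMS 8  // hypothetical number of Gaussian parameters

__global__ void gaussianKernel(const float *params, const float *in,
                               float *out, int n) {
    // Stage the Gaussian parameters in shared memory once per block,
    // so later reads do not contend on global memory.
    __shared__ float sParams[NUM_PARAMS];
    if (threadIdx.x < NUM_PARAMS)
        sParams[threadIdx.x] = params[threadIdx.x];
    __syncthreads();  // wait until the copy is complete

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // The accesses to `in` and `out` must still go through global
        // memory, which becomes the bottleneck when they are frequent.
        out[i] = sParams[0] * in[i];  // placeholder computation
    }
}
```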

It is worth noting an important point: all the performance considerations above concern our own code. This should be clear, given that the implementation details of the CULA primitives are not known.
\subsection{Conclusions}

In the Introduction, we discussed the abstractions supported by CUDA. These mechanisms are certainly useful but, as we noticed during this project, they remain at a very low level. This forces the programmer to have a solid background, since many things must be handled explicitly, e.g. allocation in the different memory spaces.

Nevertheless, one advantage is the portability of the code and its natural scalability in response to changes in the underlying infrastructure. This means that the code does not need to be modified when the GPU changes, and performance adapts easily when the hardware characteristics improve. Obviously, this holds only up to a point, as we know from theory and as we have observed in the results.

Looking at the behaviour of the application in~\cite{SPM}, we notice that the maximum performance was achieved with MPI on a multiprocessor architecture, with a service time of about 100 milliseconds. On the CUDA architecture the service time is more than four times larger, so when high performance is needed and cost is not a constraint, the first solution is more convenient.

An important question is why this gap exists. From~\cite{CUDA} we know that the GPUs above have core frequencies ranging from 550 MHz to less than 1 GHz, whereas the Intel E5420 runs at 2.5 GHz; the latter is therefore much faster per core, which can translate into a significant advantage in service time.