%% What do we experiment: single GPU, single CPU using OpenMP and
%% single CPU using MPI. Effect of loop splitting over the three
%% implementations

%% What do we use for doing the experiments: HYDRA, NASA

In this section, we validate the performance improvement delivered by
the simple loop splitting technique introduced earlier. Our aim is to
identify an optimal loop splitting strategy that can be applied in a
fully-automated optimizing compiler. For this purpose, we study the
effects of loop splitting on two different architectures.

We use HYDRA, a CFD simulation software package developed at
Rolls-Royce for the simulation of turbomachinery engines. Performance
studies of HYDRA have been reported in \cite{op2-lcpc}. We apply the
loop splitting technique to four loops resulting from the simulation
of a standard CFD test case, NASA Rotor 37.

For the simulation, we use a triangular mesh with approximately 2.5
million edges, and the simulation is based on double-precision
floating point operations. The studied loops all iterate over an edge
set (edges or boundary edges) and follow the CFD loop pattern
described earlier: they compute a contribution which is then applied
identically to the two adjacent vertices. For each of the four studied
loops, we report the per-iteration size in bytes of the indirectly and
directly accessed datasets, as well as of the incremented datasets,
because of their relevance to the loop splitting technique:
\begin{itemize}
\item Accumulation over edges (Accu), which accesses 680 bytes of data
  indirectly for each iteration. The incremented datasets are 416
  bytes in size per iteration.
\item Contribution on edges (Edgecon), which accesses indirectly 528
  bytes and directly 24 bytes for each iteration. The incremented
  datasets are 384 bytes in size per iteration.
\item Viscous or smoothing fluxes calculation (Vflux), which accesses
  indirectly 808 bytes and directly 24 bytes for each iteration. The
  incremented datasets are 96 bytes in size per iteration.
\item Inviscid flux calculation (Iflux), which accesses indirectly 328
  bytes and directly 24 bytes. The incremented datasets are 96 bytes
  in size per iteration.
\end{itemize}
The user kernels of the described loops can be complex, involving up
to 600 double-precision floating point operations.
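The loop splitting transformation applied to this CFD loop pattern can
be sketched as follows. This is a minimal illustration with a
hypothetical mesh and a stand-in kernel, not HYDRA's actual code: the
fused loop computes a per-edge contribution and increments both
adjacent vertices in one iteration, while the split version stages the
contributions in a temporary array and applies the indirect increments
in a second loop.

```python
# Illustrative mesh: each edge is incident to two vertices.
EDGE2VERT = [(0, 1), (1, 2), (2, 3), (3, 4)]
NVERTS = 5

def edge_loop_fused(x):
    """Fused version: compute the contribution and increment both
    adjacent vertices within the same iteration."""
    acc = [0.0] * NVERTS
    for v0, v1 in EDGE2VERT:
        contrib = 0.5 * (x[v0] + x[v1])  # stand-in for the real kernel
        acc[v0] += contrib               # indirect increments
        acc[v1] += contrib
    return acc

def edge_loop_split(x):
    """Split version: a compute loop stages the per-edge contribution;
    a separate increment loop applies it. The staging array trades
    extra memory traffic for a smaller working set in each loop."""
    staged = [0.5 * (x[v0] + x[v1]) for v0, v1 in EDGE2VERT]  # compute loop
    acc = [0.0] * NVERTS
    for e, (v0, v1) in enumerate(EDGE2VERT):                  # increment loop
        acc[v0] += staged[e]
        acc[v1] += staged[e]
    return acc

x = [1.0, 2.0, 3.0, 4.0, 5.0]
assert edge_loop_fused(x) == edge_loop_split(x)  # semantics preserved
```

The increment loop touches only the staged contributions and the
incremented datasets, which is why the per-iteration byte counts
reported above matter for the profitability of the split.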

We performed experiments on an NVIDIA Tesla C2070 GPU, running the
original and the split versions of each loop. Similar results are
shown in~\cite{op2-lcpc}, and we do not expect relevant
differences. Unlike previous contributions, in this paper we also run
the same experiments on a dual 6-core Intel Westmere X5650, to
understand the applicability of the studied technique on different
architectures. Table~\ref{tab:system-specs} gives details of the two
architectures. The aim of the comparison is to understand whether loop
splitting is a desirable feature also on architectures with large
caches.

\begin{table}[h]\small
\centering
\caption{Single node CPU system specifications}
\begin{tabular}{cccc} \hline
Node  	   	    & Cores/node   & Mem.   & Compiler [flags]\\
System		    & (Clock/core) & /node  & \\\hline
2$\times$Intel      & 12 [24 SMT]  & 24 GB  & IFORT 11.1 \\
Xeon X5650     	    & (2.67GHz)	   & 	    &  -openmp -O2 -parallel \\
(Westmere)	    &		   &	    & \\\hline

Tesla C2070 	    & 448 	   & 6.0 GB & pgfortran 12.2 -O4\\
		    & (1.15GHz)	   &	    & nvcc 4.1 -O3\\\hline
\end{tabular}\label{tab:system-specs}
\end{table}\normalsize 


\begin{figure}[t]\centering
\includegraphics[width=\columnwidth]{graphs/CUDA-1-noblock}
\caption{Loop performance of NASA Rotor 37 with 2.5 million edges on a
  GPU. This version of the loops is not tuned w.r.t. the block size.}
\label{fig:cudanoblock}
\end{figure}
\begin{figure}[t]\centering
\includegraphics[width=\columnwidth]{graphs/CUDA-1}
\caption{Loop performance of NASA Rotor 37 with 2.5 million edges on a
  GPU. This version of the loops is
  tuned w.r.t. the block size.}
\label{fig:cuda1}
\end{figure}


For the GPU experiments, we maximized the partition size in order to
maximize parallelism on each SM. On the CPU, we tested two partition
sizes for every loop: 128 and 512. We expect loop splitting to
influence data locality, and the choice of two widely different
partition sizes permits us to study this effect. To maximize
performance and the stability of execution times, we used the Intel
thread affinity support in {\it scatter} mode, using the two Westmere
sockets as a single 12-core processor. As we execute in parallel only
partitions that do not share data on different threads, we expect this
to maximize main memory bandwidth. The results are for experiments
with parallelism degree (number of OpenMP threads) equal to 1, 2, 6
and 12. We omit the remaining degrees for space reasons.
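The constraint that only partitions not sharing data execute
concurrently is typically enforced by coloring the partitions. The
sketch below illustrates the idea on a hypothetical mesh and
partitioning (not OP2's actual implementation): partitions touching a
common vertex receive different colors, and execution proceeds color
by color, with same-color partitions free to run on different threads.

```python
# Hypothetical mesh and edge partitioning, for illustration only.
EDGE2VERT = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]
PARTITIONS = [[0, 1], [2, 3], [4, 5]]  # edge indices per partition

def partition_vertices(partition):
    """Set of vertices touched by the edges of a partition."""
    return {v for e in partition for v in EDGE2VERT[e]}

def color_partitions(partitions):
    """Greedy coloring: two partitions conflict when they touch a
    common vertex, so they must not run concurrently."""
    verts = [partition_vertices(p) for p in partitions]
    colors = []
    for i, vi in enumerate(verts):
        used = {colors[j] for j in range(i) if verts[j] & vi}
        c = 0
        while c in used:
            c += 1
        colors.append(c)
    return colors

colors = color_partitions(PARTITIONS)
# Execute color by color; within a color, partitions may run in
# parallel because their indirect increments never race.
for c in sorted(set(colors)):
    batch = [p for p, col in zip(PARTITIONS, colors) if col == c]
    touched = [partition_vertices(p) for p in batch]
    for i in range(len(touched)):
        for j in range(i + 1, len(touched)):
            assert not (touched[i] & touched[j])  # vertex-disjoint
```

Under this scheme, a larger partition size means fewer, larger
partitions per color and hence coarser-grained parallel work, which is
why the choice of partition size interacts with data locality.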

Figures~\ref{fig:cudanoblock} and~\ref{fig:cuda1} show the results for
the baseline and split versions of the studied loops. The two graphs
differ in that the first does not tune the CUDA block size, while the
second does. As can be noticed, the impact of loop splitting is higher
when the CUDA block size is not tuned. This means that loop splitting
alleviates the slowdown caused by a wrong choice of the CUDA run-time
configuration parameters for the studied loops. The maximum reported
improvement of loop splitting is 34.5\% when the CUDA thread block
size is not tuned, and 22.5\% when it is.


For the CPU experiments, we show results in Figure~\ref{fig:openmp1}
(1 thread), Figure~\ref{fig:openmp2} (2 threads),
Figure~\ref{fig:openmp6} (6 threads), and Figure~\ref{fig:openmp12}
(12 threads). As expected, the impact of loop splitting is smaller on
CPUs than on GPUs. We obtain some performance improvements for the
Vflux and Iflux loops at small parallelism degrees. This is due to the
large number of bytes required by each iteration of these two loops
compared to the size of their incremented datasets; splitting these
loops thus greatly improves the two increment loops, which can now run
with a larger partition size than in cases where large datasets need
to be incremented. By increasing the parallelism degree and by
scattering the threads over the two sockets, we diminish the impact of
loop splitting for the two loops, as more threads access less, and
different, data. From these results we can conclude that, except for
some specific cases with large per-iteration datasets at small
parallelism degrees, a fused version of the loops is always to be
preferred. An optimizing compiler should do its best to maximize loop
fusion, e.g.\ by using mesh-independent techniques
(see~\cite{op2-cf}) or by applying sparse tiling~\cite{sparsetiling}.
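The per-iteration byte counts listed earlier make this reasoning
concrete: splitting pays off most where the accessed data is large
relative to the incremented data. A quick check of the ratios, using
the figures reported for the four loops (the direct-access size for
Accu is not reported and is assumed to be 0 here):

```python
# Per-iteration data sizes in bytes, from the loop descriptions:
# (indirectly accessed, directly accessed, incremented).
loops = {
    "Accu":    (680,  0, 416),  # direct size not reported; assumed 0
    "Edgecon": (528, 24, 384),
    "Vflux":   (808, 24,  96),
    "Iflux":   (328, 24,  96),
}

# Ratio of total accessed bytes to incremented bytes: a rough proxy
# for how much the increment loop shrinks after splitting.
ratios = {name: (ind + direct) / inc
          for name, (ind, direct, inc) in loops.items()}

# Vflux (~8.7) and Iflux (~3.7) dominate Accu (~1.6) and
# Edgecon (~1.4), matching the loops for which splitting showed
# gains at small parallelism degrees.
assert max(ratios, key=ratios.get) == "Vflux"
```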

\begin{figure}[t]\centering
\includegraphics[width=\columnwidth]{graphs/openmp-1proc}
\caption{Loop performance of NASA Rotor 37 with 2.5 million edges on a
  CPU with 1 OpenMP thread. The graph shows the execution time of each
  loop in its original and split versions.}
\label{fig:openmp1}
\end{figure}

\begin{figure}[t]\centering
\includegraphics[width=\columnwidth]{graphs/openmp-2proc}
\caption{Loop performance of NASA Rotor 37 with 2.5 million edges on a
  CPU with 2 OpenMP threads. The graph shows the execution time of each
  loop in its original and split versions.}
\label{fig:openmp2}
\end{figure}


\begin{figure}[t]\centering
\includegraphics[width=\columnwidth]{graphs/openmp-6proc}
\caption{Loop performance of NASA Rotor 37 with 2.5 million edges on a
  CPU with 6 OpenMP threads. The graph shows the execution time of each
  loop in its original and split versions.}
\label{fig:openmp6}
\end{figure}


\begin{figure}[t]\centering
\includegraphics[width=\columnwidth]{graphs/openmp-12proc}
\caption{Loop performance of NASA Rotor 37 with 2.5 million edges on a
  CPU with 12 OpenMP threads. The graph shows the execution time of
  each loop in its original and split versions.}
\label{fig:openmp12}
\end{figure}

