Computational Fluid Dynamics (CFD) applications using unstructured
meshes dominate the workload of many industrial and academic HPC
systems. Examples include the modeling of blood flow in the human
body, air flow over aircraft, and ocean circulation. Unstructured
meshes are often essential to correctly simulate the complex
geometries arising in CFD, and they are widely used in those cases in
which structured meshes are unable to provide a suitable modeling
abstraction.


%Performance on a gpu for unstructured meshes is hard to obtain, due
%to pointers
Despite their attractiveness from a simulation viewpoint, unstructured
mesh applications represent a ``hard'' case in terms of realizing computing
performance. This is a result of the extensive use of pointers between
mesh elements (e.g., from edges to vertices) to express the mesh
structure, which renders data layout and data movement a complex
problem. This performance issue has been the subject of a number of
research works~\cite{liszt,inspect-execute,CJ2011,op2-lcpc}.

%Why on a GPU is so hard
In this paper we focus on optimizing the performance of unstructured
mesh applications on GPUs. Unlike previous papers addressing this
issue (see~\cite{liszt,CJ2011}), we focus on CFD codes that access
large amounts of data, in which GPU performance is hard to achieve and
CPUs typically deliver better results. This difficulty derives from two
well-known limitations of current GPU technology, namely, the limited
size of shared memory and the small number of registers available to
each GPU thread.
 
%Implementations are typically based on achieving data locality using
%shared memory, and coalescing global memory accesses (this is done,
%for instance, in Liszt and OP2)

Existing implementations of unstructured mesh applications introduce
several optimizations for GPUs (such as those from
NVidia)~\cite{liszt,CJ2011}. One effective optimization improves data
locality by mapping all or part of the mesh data onto shared
memory. This exploits temporal locality; for instance, when iterating
over edges and accessing vertex data stored in shared memory, two
linked edges will access the same vertex. It also improves spatial
locality, as data scattered in global memory due to the presence of
pointers (e.g., between edges and vertices) is stored in contiguous
chunks of shared memory. Other optimizations aim to improve global
memory access coalescing by renumbering the mesh using standard
software (e.g., METIS~\cite{metis}). Both optimizations are included
in two main software projects for unstructured meshes, namely
Liszt~\cite{liszt} and OP2~\cite{CJ2011}.

%We have evidence that this is not sufficient to get high performance
%for any unstructured mesh applications. In particular, we can
%characterize two main well-known limitations of GPUs and show how
%they can seriously limit performance for an unstructured mesh
%application

Despite the effectiveness of these ``standard'' GPU optimizations on
many small applications and benchmarks (e.g., those in
\cite{CJ2011,liszt}), there are cases in which they are not sufficient
to achieve good performance. An example is industry-strength CFD
simulations that have large loops in which each iteration needs to
access a large amount of data. The limited size of the shared memory
of current GPUs forces one to partition the iteration set into small
chunks whose required data fit into shared memory. However, this
significantly reduces the performance of a GPU, since the amount of
available parallelism is small and it is extremely difficult to
overlap global memory accesses with computation using typical GPU
optimization techniques~\cite{op2-lcpc}.

%To this aim, we have developed a loop splitting technique. This paper
%shows the general algorithm and the requirements to apply this
%technique to any unstructured mesh loop.

To optimize the performance of large loops on GPUs, we introduce a
general loop splitting technique. We consider the case of parallel
loops over the mesh applying some user-defined kernel (typically
called a {\it user kernel}). Using this technique we can synthesize an
implementation of a large parallel loop in which the user kernel is
split into multiple functions (sub-kernels) with equivalent overall
semantics. Each sub-kernel is carefully outlined from the original
user kernel in such a way that its shared memory requirement is
smaller than that of the original user kernel. Using this knowledge,
the implementation stages into shared memory only the data strictly
necessary for executing each sub-kernel, which permits maximizing the
partition size of the whole loop.

In this paper we do not discuss the optimality of loop splitting,
i.e., finding the best cut of a user kernel targeting maximum
parallelism, data re-use between sub-kernels, and minimal global
memory traffic. We instead show an implementation of a parallel loop
synthesizing loop splitting, which can be used to approach the optimal
loop splitting problem using standard data-flow analysis techniques.

%We provide experimental results to show how loop splitting is
%actually effective on a real-world industrial application.
We validate the effectiveness of our approach by studying multiple
loops derived from an industrial CFD application simulating the air
flow in turbomachinery components of jet engines. We report on the
performance of loop splitting when executing on a GPU and on a CPU,
with the aim of understanding the possible benefits of this approach
also on architectures with larger caches. In addition, we discuss the
possible impact of the presented technique when using clusters of GPUs
and CPUs. In our experiments we use the software developed by the OP2
project, as its source-to-source translator is based on
ROSE~\cite{ROSE}, which provides automatic code outlining that can be
used to split the user kernel into multiple functions.

%We use the OP2 library as a research basis over which developing loop
%splitting. OP2 is an ideal tool for such an effort, thanks to its
%ROSE-based compiler which permits full analysis of user-kernels and
%parallel code.

%Contributions
The main contributions of this paper are the following:
\begin{itemize}
%
\item We present an approach to splitting a loop into multiple loops,
  each with smaller data requirements. From a single complex loop we
  can derive alternative implementations (through code synthesis) by
  splitting it in different ways to achieve optimal performance. The
  technique does not require modification of the user code.
%
\item We show experimental results of loop splitting for a complex
  industrial application on two architectures: an NVidia C2070 GPU
  using CUDA and a dual-socket Intel Westmere X5660 machine with 12
  cores in total using OpenMP. Our aim is to show the impact of loop
  splitting on architectures with larger caches.
%
\end{itemize}

The rest of this paper is organized as follows. We discuss related
work in Section~\ref{sec:rw} and the OP2 implementation on GPUs in
Section~\ref{sec:op2}. Loop splitting is discussed in
Section~\ref{sec:split}. Experimental results are presented in
Section~\ref{sec:exp} and we conclude with a summary in
Section~\ref{sec:conc}.
