In this section, we give a brief description of the relevant OP2
interface and implementation, including all the necessary details
required for understanding the contributions of this paper. The
interested reader can refer to the user manual~\cite{OP2-User} and
other references giving a full description of the
implementation~\cite{CJ2011}.

The OP2 approach to the solution of unstructured mesh problems
involves breaking down the problem into four distinct parts: (1) sets,
(2) data on sets, (3) connectivity (or {\it mapping}) between the sets
and (4) operations over sets. This leads to an API through which any
mesh or graph can be completely and abstractly defined. Depending on
the application, a set can consist of nodes, edges, polygonal faces or
other elements. Associated with these sets are data (e.g., node
coordinates, edge weights, etc.) and mappings between sets that
define how elements of one set connect with the elements of another
set.  In this paper, we omit a full description of the OP2 interface
for space reasons. Instead, we focus on the parallel loop
implementation.

The use of an \emph{active library} provides application programmers
with the ability to express complex abstractions through an API,
analogous to classical software libraries, but with the benefit of
compiler support to optimize those abstractions accordingly. An
application written once using the API, which is hosted in C/C++
or FORTRAN, can be translated using the source-to-source compiler
tools provided to deliver performance portability across a diverse
range of multi-core and many-core architectures; the supported
back-ends currently include CUDA, MPI, and OpenMP, with OpenCL in
progress.
%
In this section we briefly discuss the library API and its compile-
and run-time infrastructure, before presenting the main contributions
of this paper.

\subsection{Mesh Declarations}

\begin{figure}[t]
\begin{center}
\includegraphics[width=0.5\columnwidth]{fig/examplemesh}
\caption{Example mesh} \label{fig:mesh}
\end{center}
\end{figure}


\noindent We discuss the OP2 programming model by coding up the simple
unstructured mesh depicted in Fig.~\ref{fig:mesh} that consists of
vertices $v_1$ through $v_6$, edges $e_1$ through $e_{10}$, and five
triangles (unnumbered in the figure). The programmer first defines the
topology of the mesh by declaring sets and the relations between these
sets. Using the C API, these sets are declared as
follows\footnotemark: \footnotetext{For clarity of exposition, we omit
certain non-essential parameters in all the API calls. Details of the API
are documented elsewhere~\cite{OP2-User}.  Also, {\it pml} in the
library calls stands for ``parallel mesh library'' which is used in
this version for double-blind review.}
\begin{lstlisting}
op_set triangles = op_decl_set (5);
op_set vertices  = op_decl_set (6);
op_set edges     = op_decl_set (10);
\end{lstlisting}
\noindent To relate the vertices to which each edge is incident, 
the programmer defines an array encoding the relation and informs 
the library of the sets to which it is applicable:
\begin{lstlisting}
int map [][2]	= {{1, 2}, {1, 4}, {1, 3}, 
  {1, 6}, {2, 6}, {4, 3}, {3, 6}, {4, 5},
  {3, 5}, {6, 5}};
op_map edgesToVertices	= 
  op_decl_map (edges, vertices, 2, map); 
\end{lstlisting}
Row $i$ of \code{map} lists the two vertices of edge
$e_{i+1}$.  The first and second parameters in \code{op_decl_map}
state the source and destination sets, respectively, while the third
parameter defines the dimension (or cardinality) of the relation. The
next task of the programmer is to associate data to the sets of the
mesh over which the parallel computation operates. Assume that each
vertex in this example contains two double-precision floats
(e.g., representing their coordinates) and that each edge contains one
single-precision float (e.g., representing their weight). A dataset is
declared on a set through an array of the required size, including an
appropriate initializer as follows:
\begin{lstlisting}
double vertexCoords [6][2] = {...};
op_dat coordinates = 
  op_decl_dat (vertices, 2, vertexCoords);

float  edgeWeights [10]  = {...};
op_dat weights = 
  op_decl_dat (edges, 1, edgeWeights); 
\end{lstlisting}
\noindent The second parameter in \code{op_decl_dat} informs the
library of the cardinality of the dataset per element of the
set. Generally, the implementation does not operate directly on the
passed array, since certain targeted back-end architectures utilize
different memory spaces. For example, in the CUDA back-end
implementation of \code{op_decl_dat}, the input data array is
transferred from host to device memory, where it resides for the
rest of the computation unless the user explicitly requests a copy
back to the host through a call to the \code{op_get_dat}
function. Consequently, there is no data transfer between host and
device, in either direction, during the computation.
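
This ownership model can be pictured with a minimal host-only sketch
(illustrative only: \code{decl_dat} and \code{get_dat} are
hypothetical stand-ins for \code{op_decl_dat} and \code{op_get_dat},
with \code{malloc}/\code{memcpy} standing in for device allocation
and transfer):
\begin{lstlisting}
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for an op_dat: the library keeps its own
   copy of the data, standing in for the device-resident copy. */
typedef struct { double *lib_copy; size_t n; } dat_sketch;

/* Mimics op_decl_dat: the user array is copied into library space. */
dat_sketch decl_dat(const double *user, size_t n) {
  dat_sketch d = { malloc(n * sizeof(double)), n };
  memcpy(d.lib_copy, user, n * sizeof(double));  /* host -> "device" */
  return d;
}

/* Mimics op_get_dat: an explicit copy back to a user array. */
void get_dat(const dat_sketch *d, double *user) {
  memcpy(user, d->lib_copy, d->n * sizeof(double));  /* "device" -> host */
}

int main(void) {
  double user[2] = { 1.0, 2.0 };
  dat_sketch d = decl_dat(user, 2);
  user[0] = 99.0;               /* later host-side writes ...       */
  assert(d.lib_copy[0] == 1.0); /* ... are invisible to the library */
  d.lib_copy[0] = 5.0;          /* a "computation" on the device copy */
  get_dat(&d, user);            /* only an explicit get syncs back  */
  assert(user[0] == 5.0);
  free(d.lib_copy);
  return 0;
}
\end{lstlisting}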

In effect, the data of a program is \emph{sliced} into two subsets:
that which belongs to the library (the mesh datasets) and that which
belongs to the sequential program segments.
% Paul's suggestion
%The consequence is subtle: an update to a mesh dataset in a
%sequential code block is silently ignored by the library.
%
To initialize an \code{op_dat}, the user first initializes a
user-level array, i.e., an array outside the library's logical
space. The \code{op_decl_dat} call then transfers the array data into
the library's logical space. From this point on, the user can no
longer modify the data encapsulated in an \code{op_dat} except
through a library call.

\subsection{Parallel Loops in OP2}
\noindent All the numerically intensive computations in the
unstructured mesh application can be described as operations over
sets. Within an application code, this corresponds to loops over a
given set, accessing data through the mappings (i.e., one level of
indirection), performing some calculations, then writing back
(possibly through the mappings) to the data arrays. The OP2 API
provides a parallel loop declaration syntax (\oploop) which allows the
user to declare the computation over sets in these loops. The
per-element computation is defined by the user through \emph{kernel}
functions---written in regular C/C++ or Fortran syntax---which
operate on a single element (i.e., they are \emph{element-wise}) of a
given set, the so-called loop \emph{iteration set}.

Let us suppose that, in our running example, we wish to increment the
coordinates of each vertex by the weights of the edges incident to
it, and also to compute the maximum weight across all edges. The
following kernels implement this computation per edge:
\begin{lstlisting}
void update (double vertex1 [],
       double vertex2 [], 
       float weights []){
  vertex1[0] += weights[0];
  vertex1[1] += weights[0];
  vertex2[0] += weights[0];
  vertex2[1] += weights[0];
} 
void maxWeight (float *weights, double *max){
  if ( *weights > *max )  *max = *weights;
} 
\end{lstlisting}
The \oploop declaration is then used to specify the data access
pattern of the computation to be parallelized: 
\begin{lstlisting}
op_parallel_loop (update, edges, 
    op_arg_dat(coordinates, 0, edgesToVertices, OP_INC),
    op_arg_dat(coordinates, 1, edgesToVertices, OP_INC),
    op_arg_dat(weights, -1, OP_ID, OP_READ));

op_parallel_loop (maxWeight, edges, 
    op_arg_dat(weights, -1, OP_ID, OP_READ),
    op_arg_gbl(&max, OP_MAX));
\end{lstlisting}
Each parallel loop call takes $n+2$ arguments: the first is a pointer
to the user kernel function; the second is the iteration set; the
remaining $n$ arguments correspond one-to-one to the $n$ parameters
of the user kernel, i.e., argument $i$ of the call corresponds to
parameter $i - 2$ of the kernel. Each such argument relates either to
a mesh dataset (\code{op_arg_dat}) or to a reduction
(\code{op_arg_gbl}).

\noindent For a dataset argument, the parameters convey the following
information to the library:
\begin{itemize}
\item The access modality of the corresponding parameter. There are
  four cases: read only (\code{OP_READ}), write only
  (\code{OP_WRITE}), read and write (\code{OP_RW}), or increment
  (\code{OP_INC}). We indicate that \code{coordinates} is incremented
  inside \code{update} through \code{OP_INC}, whereas \code{weights}
  is read only and its access is described through \code{OP_READ}.

\item Whether the data is accessed via an indirection. Either the
  data is \emph{directly} linked to the iteration set, i.e., it is
  associated with the set over which the loop iterates, or it is
  \emph{indirectly} accessed, in which case a mapping must be
  supplied. For instance, in the first parallel loop above,
  \code{coordinates} is an indirect dataset: it is attached to
  \code{vertices}, and the mapping \code{edgesToVertices} determines
  how vertices are retrieved while sweeping through edges. On the
  other hand, \code{weights} is directly accessed and the mapping is
  the identity function, indicated by \code{OP_ID}. When all datasets
  are directly accessed, the parallel loop is called {\it direct};
  otherwise it is {\it indirect}. Thus, the first parallel loop is
  indirect whereas the second is direct.

\item If the argument is indirectly accessed, which element of the
  relation should be used. For instance, since there are two vertices
  per edge, the value $0$ in the first \opargd selects the first
  vertex. When there is no mapping, the sentinel value $-1$ is used,
  as in the \opargd of the second parallel loop.
\end{itemize} 
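
To make the meaning of these descriptors concrete, the following
self-contained plain-C sketch gives a possible sequential reading of
the first parallel loop (illustrative only: the \code{update} kernel
here is a hypothetical variant receiving both endpoint vertices, and
the 1-based vertex numbers of the mapping are shifted to 0-based
array indices):
\begin{lstlisting}
#include <assert.h>

/* edge -> endpoint vertices, 1-based as in the mesh declaration */
static const int map[10][2] = {{1,2},{1,4},{1,3},{1,6},{2,6},
                               {4,3},{3,6},{4,5},{3,5},{6,5}};
static double coordinates[6][2];   /* per-vertex data, dim 2 */
static float  weights[10];         /* per-edge data,  dim 1 */

/* Hypothetical element-wise kernel: increments both endpoints. */
static void update(double v0[], double v1[], const float w[]) {
  v0[0] += w[0]; v0[1] += w[0];
  v1[0] += w[0]; v1[1] += w[0];
}

/* Reference (sequential) semantics of the indirect parallel loop:
   iterate over edges, reach vertex data through the mapping. */
static void run_loop(void) {
  for (int e = 0; e < 10; e++)
    update(coordinates[map[e][0] - 1],  /* index 0 via map, OP_INC */
           coordinates[map[e][1] - 1],  /* index 1 via map, OP_INC */
           &weights[e]);                /* direct, OP_READ */
}

int main(void) {
  for (int e = 0; e < 10; e++) weights[e] = 1.0f;
  run_loop();
  /* v1 is an endpoint of e1..e4: each coordinate gains 4 increments */
  assert(coordinates[0][0] == 4.0 && coordinates[0][1] == 4.0);
  /* v5 is an endpoint of e8, e9, e10 */
  assert(coordinates[4][0] == 3.0);
  return 0;
}
\end{lstlisting}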

The cases of read plus write (\code{OP_RW}) and increment
(\code{OP_INC}) are special when used in tandem with indirect
access. An indirect \code{OP_RW} may only be used in loops whose read
and write targets are disjoint; the user must guarantee this property
by building appropriate maps, and loops with conflicting reads and
writes have unspecified behavior. For an indirect \code{OP_INC}, the
library instead guarantees consistency by preventing data races, as
discussed in Subsection~\ref{subsec:indirect}. This is the only
access modality that can incur a concurrency control problem when
using OP2.


\noindent For a reduction argument, we indicate where to store the
result of the reduction and the binary associative operator to be
applied. For instance, the second parallel loop performs a maximum
(\code{OP_MAX}) and records the result in the variable \code{max}.

\subsection{Implementation of Parallel Loops}
                                              
\noindent An application written using the library API is parsed by
the OP2 compiler, which produces back-end-specific code specializing
the implementation of each parallel loop present in the input
program; it also modifies the input program to call the specialized
implementations. Here we are interested in the GPU implementation
produced by the OP2 compiler through CUDA, which we modify in
Section~\ref{sec:split}. The result of this first compilation phase
is then compiled again with a CUDA compiler (e.g., nvcc or the PGI
FORTRAN CUDA compiler) and linked against platform-specific back-end
libraries to generate the final executable.

For GPUs, the mesh (i.e., its datasets and maps) is constrained to
fit entirely within the GPU's global memory. Consequently, for the
non-distributed memory implementations (i.e., single-node back-ends)
considered in this paper, the only data exchanges between the GPU and
the host CPU are the initial transfer of data to the GPU and the
final transfer of results back to the host. No implicit data
transfers are issued by the implementation.

The compiler parallelizes an \oploop by partitioning its iteration
set and assigning a thread block to each partition. Since in
unstructured mesh applications the mesh is not available at compile
time, the OP2 compiler produces code that: (i) inspects the actual
mesh at run-time to produce execution information; (ii) executes the
parallel loop using the produced information. The next subsections
detail these two phases.

How the partitioned data are managed and the execution proceeds depend
on whether the parallel loop is direct or indirect.
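
In a much simplified form, the two phases can be sketched as follows
(illustrative only: \code{inspect}, \code{execute}, and \code{plan_t}
are hypothetical names, and a real OP2 plan also carries coloring and
staging information):
\begin{lstlisting}
#include <assert.h>

#define MAX_BLOCKS 64

/* A toy execution "plan": the run-time inspection phase partitions
   the iteration set 0..n-1 into blocks of (at most) bs elements. */
typedef struct { int nblocks; int offset[MAX_BLOCKS + 1]; } plan_t;

static plan_t inspect(int n, int bs) {        /* phase (i) */
  plan_t p = { 0, {0} };
  for (int start = 0; start < n; start += bs)
    p.offset[++p.nblocks] = (start + bs < n) ? start + bs : n;
  return p;
}

static long execute(const plan_t *p) {        /* phase (ii) */
  long sum = 0;
  for (int b = 0; b < p->nblocks; b++)        /* one block per partition */
    for (int i = p->offset[b]; i < p->offset[b + 1]; i++)
      sum += i;                               /* stand-in user kernel */
  return sum;
}

int main(void) {
  plan_t p = inspect(10, 4);                  /* partitions: 0-3, 4-7, 8-9 */
  assert(p.nblocks == 3 && p.offset[3] == 10);
  assert(execute(&p) == 45);                  /* each element visited once */
  return 0;
}
\end{lstlisting}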
% \begin{table}[t]\small
% \begin{center}
% \renewcommand{\arraystretch}{1.3}
% \caption{Direct and indirect loops layout}
% \label{tab:cudaimpl}
% %\centering
% \begin{tabular}{cp{6cm}}\hline
% {\bf Loop type} & Data layout \\\hline
% \hline
% Direct Loops & All dataset with dimension $>$ 1 are staged into shared memory.
% Dataset with dimension 1 accessed directly into device memory (implicit
% coalescing).  \\\hline\hline

% Indirect Loops & All {\it indirectly} accessed datasets staged into shared
% memory. Directly accessed datasets accessed into device memory to save shared
% memory space. For incremented datasets, increments are computed into private
% local thread variables and staged out into shared and device memory. Data
% races are avoided by coloring of mini-partitions and elements inside
% mini-partitions when dataset indirectly accessed and incremented. \\\hline
% \end{tabular}
% \end{center}\vspace{-20pt}
% \end{table}\normalsize

% \begin{figure}[t]
%   \begin{center}
%     \includegraphics[width=4cm]{fig/coloring1}
%  \vspace{-0pt}\caption{Unstructured Mesh after coloring.}
%  \label{fig:mesh:coloured1}
%   \end{center}
% \end{figure}

% \begin{figure}[t]
%   \begin{center}
%     \includegraphics[width=4cm]{fig/coloring2}
%  \vspace{-0pt}\caption{Unstructured Mesh after Coloring.}
%  \label{fig:mesh:coloured2}
%   \end{center}
% \end{figure}


% \begin{figure}[t]
% \begin{center}
% \subfloat[Mini-partition 1]{\includegraphics[width=4cm]{fig/coloring1}}
% \subfloat[Mini-partition 2]{\includegraphics[width=4cm]{fig/coloring2}}
% \vspace{-0pt}\caption{Unstructured Mesh after Coloring.}
% \label{fig:mesh:coloured}
% \end{center}\vspace{-25pt}
% \end{figure}

%\subsection{Partitioning and Scheduling for Direct Loops}
For direct parallel loops the iteration set is divided into
partitions of equal size. Each thread in a thread block works on at
most $\lceil \frac{n}{m} \rceil$ elements of its block's partition,
where $m$ is the thread block size and $n$ the partition size. That
is, when the partition size exceeds the number of threads in the
block executing it, the threads operate in successive steps to cover
the entire partition.
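
This stepping scheme can be sketched in plain C standing in for the
CUDA kernel (\code{M} threads cover a partition of \code{N} elements
in strides of \code{M}; the names are illustrative):
\begin{lstlisting}
#include <assert.h>

#define N 10   /* partition size n */
#define M 4    /* thread block size m */

static int visits[N];

/* Work done by thread t of the block: elements t, t+M, t+2M, ...
   so each thread touches at most ceil(N/M) elements. */
static int thread_work(int t) {
  int handled = 0;
  for (int i = t; i < N; i += M) { visits[i]++; handled++; }
  return handled;
}

int main(void) {
  for (int t = 0; t < M; t++)
    assert(thread_work(t) <= (N + M - 1) / M);  /* <= ceil(n/m) */
  for (int i = 0; i < N; i++)
    assert(visits[i] == 1);  /* whole partition covered, no overlap */
  return 0;
}
\end{lstlisting}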

This execution model is sufficient to avoid data races because, by
definition, none of the data is accessed indirectly and therefore each
thread can only update data belonging to its iteration set elements.
Thus, it is possible to instantiate a number of thread blocks equal to
the number of partitions. This is obviously limited by the maximum
number of blocks that can be launched in parallel on a GPU.

\subsection{Partitioning and Scheduling for Indirect
  Loops}\label{subsec:indirect}
For indirect loops, too, the iteration set is divided into partitions
of equal size.
%
Achieving good performance is in this case constrained by the need to
avoid data races between threads: assigning distinct elements of the
iteration set to distinct threads does not guarantee the absence of
data dependencies, because of indirect accesses. OP2 guarantees the
absence of data races only when an indirectly accessed data item is
incremented. For instance, when iterating over edges, two threads
assigned two different edges that share a vertex may incur a data
race when incrementing the data associated with that vertex.

% Our implementation is based on \emph{coloring} the iteration set. We
% favored coloring over the use of atomic operations (atomics, in
% short), provided by the CUDA model, for two reasons: (i) CUDA compute
% capability 2.0 does not provide atomics for double precision, although
% the compare-and-swap instruction can be used to emulate it; (ii)
% coloring is calculated only once for each loop, and can be subject to
% further optimizations based on the analysis of color numbers and
% partition features.

OP2 performs coloring to prevent data races. There are two levels of
coloring in its implementation: inter- and intra-mini-partition.
Inter-partition coloring avoids conflicts on data shared at partition
boundaries, which can arise when two iterations assigned to different
partitions increment the same data item accessed through a
map. Since the library ensures that partitions with the same color
share no elements retrieved through a mapping, such partitions can
proceed in parallel.

Intra-partition coloring is needed to prevent threads in the same
thread block from incurring data race conflicts. Again, two threads
assigned to the same partition could otherwise increment the same data
accessed through indirections.
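
A greedy variant of such a coloring can be sketched on the running
example (illustrative only; OP2's actual coloring algorithm may
differ): each edge takes the smallest color not already used by a
previously colored edge sharing one of its vertices.
\begin{lstlisting}
#include <assert.h>

#define NEDGES 10

static const int ev[NEDGES][2] = {{1,2},{1,4},{1,3},{1,6},{2,6},
                                  {4,3},{3,6},{4,5},{3,5},{6,5}};
static int color[NEDGES];

static int share_vertex(int a, int b) {
  return ev[a][0] == ev[b][0] || ev[a][0] == ev[b][1] ||
         ev[a][1] == ev[b][0] || ev[a][1] == ev[b][1];
}

/* Greedy coloring: two edges incident to the same vertex (and hence
   incrementing the same per-vertex data) never share a color. */
static int color_edges(void) {
  int ncolors = 0;
  for (int e = 0; e < NEDGES; e++) {
    int c = 0, clash;
    do {
      clash = 0;
      for (int f = 0; f < e; f++)
        if (color[f] == c && share_vertex(e, f)) { clash = 1; c++; break; }
    } while (clash);
    color[e] = c;
    if (c + 1 > ncolors) ncolors = c + 1;
  }
  return ncolors;
}

int main(void) {
  int ncolors = color_edges();
  for (int e = 0; e < NEDGES; e++)
    for (int f = e + 1; f < NEDGES; f++)
      if (share_vertex(e, f)) assert(color[e] != color[f]);
  assert(ncolors >= 4);  /* v1 and v3 both have degree 4 */
  return 0;
}
\end{lstlisting}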

% Consider the implementation of coloring on the indirect loops given
% above.  Assume that the iteration set (\code{edges}) is partitioned
% into two segments: $\{e_1, e_2, e_3, e_4, e_5\}$ and $\{e_6, e_7, e_8,
% e_9, e_{10}\}$. Note that the shared set of vertices of these segments
% are $\{v_3, v_4, v_6\}$. This coloring leads to the mini-partition
% coloring as illustrated in
% Fig.~\ref{fig:mesh:coloured1} and \ref{fig:mesh:coloured2}. Mini-partitions~1 and 2 are launched as
% CUDA kernels serially as they do not have the same color. Within
% mini-partition 1, four colors are needed to avoid thread races since
% all but one of the edges share vertex $v_1$.


\begin{figure}[t]
\begin{center}
\includegraphics[width=6cm]{fig/MemAccess}
\caption{Data staging phase in CUDA kernels. The threads in a block
  coalesce dataset accesses between device and shared
  memory.} \label{fig:coalesced-stagein}
\end{center}
\end{figure}

\subsection{Staging Data into Shared Memory}
%On a GPU one main concern is to avoid non-coalesced accesses into
%global GPU memory when the user kernel executes. Also, the use of
%shared memory permits to maximise data locality for threads in a same
%block when executing a mini-partition.
OP2's compiler optimizes for both temporal and spatial locality by
staging data between global (device) and shared memory, before and
after user kernel execution. 

Temporal locality is achieved when the user kernel accesses the same
data in shared memory multiple times during the execution of a
partition. Spatial locality is obtained by loading contiguous dataset
regions for each partition, a consequence of mesh locality itself
(i.e., elements in the same partition being interconnected). Spatial
locality also enables coalescing of device memory accesses through a
proper thread coordination scheme, as detailed below. Data staging
includes two phases:

\noindent (1) Before the user kernel executes, any dataset read whose
cardinality per set element exceeds one is brought into shared
memory. This access pattern maximizes parallelism by mapping
successive thread identifiers to successive shared memory addresses:
the thread with identifier $i$ accesses the address $\text{base} +
i$, where {\it base} is the base address of the dataset. This is
shown in Figure~\ref{fig:coalesced-stagein}.
%
Accesses to datasets of cardinality one in direct loops are already
coalesced in global device memory, because the threads in a block
access successive global memory addresses; such datasets are
therefore not staged into shared memory.
%
After this first staging phase, all ordered accesses to datasets
performed by threads when executing the user kernel are coalesced in
shared memory.

(2) After the user kernel invocation is complete, any dataset written
is moved from shared memory back to global memory, so that the
executions of the next color of mini-partitions, or the next parallel
loop, see the results. This uses the same coalescing mechanism
discussed in point (1).
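
The two staging phases can be pictured with a plain-C sketch
(illustrative only: the loops over \code{t} stand in for the
\code{M} threads of a block, and the strided index pattern is what
makes consecutive threads touch consecutive addresses on the GPU):
\begin{lstlisting}
#include <assert.h>

#define DIM    2              /* dataset cardinality per element */
#define NELEM  5              /* elements mapped to this partition */
#define M      4              /* threads per block */

static double device_mem[NELEM * DIM] = {0,1,2,3,4,5,6,7,8,9};
static double shared_mem[NELEM * DIM];

/* Phase (1): stage in.  Thread t copies words t, t+M, ... so that
   at each step the M threads touch M consecutive addresses. */
static void stage_in(int t) {
  for (int i = t; i < NELEM * DIM; i += M) shared_mem[i] = device_mem[i];
}
/* Phase (2): stage out, with the same coalesced pattern. */
static void stage_out(int t) {
  for (int i = t; i < NELEM * DIM; i += M) device_mem[i] = shared_mem[i];
}

int main(void) {
  for (int t = 0; t < M; t++) stage_in(t);
  for (int i = 0; i < NELEM * DIM; i++) shared_mem[i] += 1.0;  /* "kernel" */
  for (int t = 0; t < M; t++) stage_out(t);
  assert(device_mem[0] == 1.0 && device_mem[9] == 10.0);
  return 0;
}
\end{lstlisting}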

An exception to this rule concerns datasets accessed directly in an
indirect loop: OP2 avoids staging data with this kind of access, to
minimize shared memory requirements. Indirectly accessed datasets,
instead, are always staged into shared memory. Aggressive loop
splitting, as described in Section~\ref{sec:exp}, could require
re-thinking this strategy when a loop is split into many much simpler
loops; this should be the subject of future investigation.

Data that is incremented through an indirection undergoes a further
staging phase, which uses local thread variables and is introduced to
maximize parallelism when executing the user kernel. For each
incremented data value, OP2 allocates a local thread array whose size
corresponds to the dimension of the dataset. Each thread uses its own
variables to store the increments to be applied later to the shared
memory variables. As these local variables are private to the thread,
there is no concurrency control problem, and user kernels can be
executed in parallel by threads. Finally, the threads apply the
increments stored in the local variables to the appropriate shared
memory variables. This step requires concurrency control, since
shared memory data is effectively shared between threads, and is
therefore executed color by color to avoid data conflicts.
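
This two-stage increment scheme can be sketched in plain C on the
running example (illustrative only: \code{local} stands for the
per-thread register arrays, and the coloring is assumed precomputed
so that same-colored edges share no vertex):
\begin{lstlisting}
#include <assert.h>

#define NEDGES  10
#define NVERTS  6
#define NCOLORS 5

static const int ev[NEDGES][2] = {{1,2},{1,4},{1,3},{1,6},{2,6},
                                  {4,3},{3,6},{4,5},{3,5},{6,5}};
/* Assumed precomputed coloring: same-colored edges share no vertex. */
static const int color[NEDGES] = {0,1,2,3,1,0,4,2,1,0};
static float  weights[NEDGES];
static double vertex_data[NVERTS];

int main(void) {
  for (int e = 0; e < NEDGES; e++) weights[e] = 1.0f;

  /* Stage 1: every "thread" (edge) computes its increments into
     private local variables -- no concurrency control needed. */
  double local[NEDGES][2];
  for (int e = 0; e < NEDGES; e++) {
    local[e][0] = weights[e];   /* increment for first endpoint  */
    local[e][1] = weights[e];   /* increment for second endpoint */
  }

  /* Stage 2: apply the increments color by color; within one color
     no two edges touch the same vertex, so each round is race-free. */
  for (int c = 0; c < NCOLORS; c++)
    for (int e = 0; e < NEDGES; e++)
      if (color[e] == c) {
        vertex_data[ev[e][0] - 1] += local[e][0];
        vertex_data[ev[e][1] - 1] += local[e][1];
      }

  assert(vertex_data[2] == 4.0);  /* v3 has degree 4 */
  assert(vertex_data[1] == 2.0);  /* v2 has degree 2 */
  return 0;
}
\end{lstlisting}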

This scheme suffers from the need for additional local thread
variables, which can increase register pressure. An alternative is to
execute the user kernels themselves color by color, i.e., with a
lower degree of parallelism. We show the effects of this optimization
in Section~\ref{sec:exp}.

% \subsection{Implementation of Reductions}
% Finally, the implementation of reductions is achieved by each thread
% block computing a reduced value specific to its mini-partition,
% resulting in $n$ values, typically stored in an array in global memory
% which size depends on the number of mini-partitions that can be
% executed in parallel. For indirect loops, this is equal to the maximum
% number of mini-partitions having the same color. For direct loops,
% it is the number of mini-partitions, as they can all be executed in
% parallel due to the absence of data races.
% %
% % Paul's suggestion (see above)
% % After the CUDA kernel has terminated, the array is copied
% % into CPU main memory;
% The final result is obtained with a loop with bound $n$ steps through
% the array, applying the associative binary operator
% (e.g. \code{OP_MAX}) accordingly and yielding the final result.
