\lstset{
  frame=single,
  morekeywords={loop,over,stage,in,out,end}
}
\begin{figure}
 \begin{center}
\begin{lstlisting}
loop over colors C //seq
  loop over partitions P in C //par
    stage in all data for partition p in P
    loop over edges e in P //par
      tmp = contributions (e)
      update ( e(v1), tmp)
      update ( e(v2), tmp)
    stage out all data for partition p in P
  end loop
end loop
\end{lstlisting}
\end{center}
\caption{Original \oploop implementation.}\label{fig:split-original}
\end{figure}

%General loop splitting problem
The goal of splitting is to reduce the shared memory requirements of
the original code on a GPU. This is achieved by splitting the user
kernel into multiple successive functions (or stages), each chosen so
that it accesses only a small subset of the input
parameters. Consequently, less data needs to be allocated in shared
memory for each stage. This results in a smaller overall average
shared memory requirement, which permits fitting more parallel loop
iterations into the same partition. That is, larger partitions can be
allocated to the same streaming multiprocessor (SM) on a GPU,
effectively increasing the parallelism achievable by threads within
the same thread block. This improves the overlapping of global memory
accesses, and specifically targets large CFD loops, as discussed in
Section~\ref{sec:intro}.

\begin{figure}
 \begin{center}
\begin{lstlisting}
loop over colors C //seq
  loop over partitions P in C //par
    stage in all data for contrib
    loop over edges e in P //par
      tmp = contributions (e)
    stage out all tmp data
  end loop
end loop

loop over colors C //seq
  loop over partitions P in C //par
    stage in all data for update1 (tmp)
    loop over edges e in P //par
      update ( e(v1), tmp)
    stage out all data for partition p in P
  end loop
end loop

loop over colors C //seq
  loop over partitions P in C //par
    stage in all data for update2 (tmp)
    loop over edges e in P //par
      update ( e(v2), tmp)
    stage out all data for partition p in P
  end loop
end loop
\end{lstlisting}
\end{center}
\caption{Split \oploop by generating three successive loops. This
  requires modification of the user code.}\label{fig:split-lcpc}
\end{figure}

In this section we show how loop splitting is implemented in OP2,
using pseudo-code of the loop implementation. We start from the
original OP2 implementation of parallel loops (as described in
\cite{CJ2011}), and then incrementally add splitting. First, we
consider a simple loop splitting technique that takes advantage of a
property of the user kernel. A similar technique was discussed in
\cite{op2-lcpc}, but it required an OP2-source-to-OP2-source
transformation step, which our solution avoids. Next, we give the
general loop splitting implementation and discuss its assumptions and
optimality issues.

\begin{figure}
 \begin{center}
\begin{lstlisting}
loop over colors C //seq
  loop over partitions P in C //par
    stage in all data for contrib
    loop over edges e in P //par
    {
      tmp(e) = contrib (e)
    }
    stage in all data for update1
    loop over edges e in P //par
    {
      update ( e(v1), tmp)
    }
    stage out all data for update1
    stage in all data for update2
    loop over edges e in P //par
    {
      update ( e(v2), tmp)
    }
    stage out all data for update2
  end loop
end loop
\end{lstlisting}
\end{center}
\caption{Split of contribution calculation and vertex updates.}\label{fig:split-contrib}
\end{figure}

Figure~\ref{fig:split-original} shows the original implementation of
OP2, without splitting, as discussed in Section~\ref{sec:op2}. Note
that we have excluded some irrelevant details from the description
here. The iteration set is partitioned and the partitions are colored
to prevent data races. Partitions of the same color are executed in
parallel on the GPU (line 2), while partitions of different colors are
serialized (line 1). Every partition is scheduled to an SM and all the
needed data is staged into shared memory (line 3). The threads execute
the iterations of the partition in parallel (line 4). We assume here
the case in which we iterate over edges and increment adjacent
vertices, a common and general pattern in OP2 and CFD codes. Any other
OP2 loop can be reduced to this or to a simpler pattern through simple
code movement.

For each iteration, a thread computes a contribution (line 5) and then
applies it to the two vertices (lines 6 and 7). Finally, all modified
datasets are staged back from shared memory to global memory. Notice
that, for a given edge, the same contribution is applied to both
vertices, as is often the case in CFD programs. The alternative, in
which two different contributions are computed, requires no additional
support from the point of view of splitting.
%
\begin{figure}
 \begin{center}
\begin{lstlisting}
loop over colors C //seq
  loop over partitions P in C //par
    stage in all data for contrib1
    loop over edges e in P //par
    {
      tmp1(e) = contrib1 (e)
    }
    stage in all data for contrib2
    loop over edges e in P //par
    {
      tmp2(e) = contrib2 (e)
    }
    ...
    stage in all data for contribN
    loop over edges e in P //par
    {
      tmpN(e) = contribN (e)
    }
    ...
    stage in all data for update1
    loop over edges e in P //par
    {
      update ( e(v1), tmp)
    }
    stage out all data for update1
    stage in all data for update2
    loop over edges e in P //par
    {
      update ( e(v2), tmp)
    }
    stage out all data for update2
  end loop
end loop
\end{lstlisting}
\end{center}
\caption{Split of contribution calculation in multiple functions.}\label{fig:split-general}
\end{figure}

The work in \cite{op2-lcpc} describes a splitting technique in which
the single loop is split into three loops, as shown in
Figure~\ref{fig:split-lcpc}. Consider that the user kernel can be
split into three phases: (i) computation of the contribution; (ii)
update of the first vertex with the contribution; and (iii) update of
the second vertex with the contribution. We can thus derive three
loops, with semantics equivalent to the original loop, which share a
common dataset associated with edges holding the contribution.
Unfortunately, this scheme requires us to stage the contributions
three times between global and shared memory, resulting in high
overhead.

In contrast to the scheme in \cite{op2-lcpc}, we describe here the
case in which the initial loop is not transformed into multiple loops;
the splitting is instead realized as an alternative code synthesis,
shown in Figure~\ref{fig:split-contrib}. In the improved
implementation of the loop, the partition loop body (lines 1 and 2)
now alternates the three phases (contribution calculation, first and
second updates). Unlike the previous splitting technique, the
temporary contribution dataset is kept in global memory and accessed
directly during the computation of a partition; that is, it is not
staged between global and shared memory.

This scheme reduces the shared memory requirements of the loop because
only the data strictly needed in each step is staged into shared
memory. For instance, as the incremented dataset is not used during
the contribution calculation (OP\_INC semantics), it can be omitted
during the initial stage-in phase (line 3). Also, after computing the
contribution, all data not required in the update steps can simply be
overwritten, and we can use the whole shared memory for the vertex
data alone (lines 8 and 13).

If the contributions for the two vertices are different, their
computation still falls into the first step. Note that the temporary
data has now been promoted from a local user-kernel variable to an
array associated with edges. As our target is the reduction of shared
memory requirements, this array is allocated in global memory.

Under this new scheme we do not need to allocate the vertex data to be
updated in shared memory while computing the contribution. Likewise,
the data needed for the contribution need not be allocated during the
updates. This means that the shared memory requirements of each of the
three steps are smaller than those of the original loop.

The last version that we describe splits the contribution calculation
into successive loops. We assume that the function {\it contrib} can
be split into multiple functions, say $\text{contrib}_1, \ldots,
\text{contrib}_N$. This can be done, for instance, using automatic
support such as that provided by ROSE~\cite{ROSE} for outlining code
sections into functions. The splitting is similar to the previous
case: at each step we stage into shared memory only the data required
by the next contribution function. For instance, we initially stage in
all data for the evaluation of the first contribution (line 3). The
temporary data computed by each contribution function is associated
with edges and stored, as above, in global memory. This temporary data
can be produced by one contribution calculation and then re-used by a
successive calculation, or in the vertex update step.

The goal of this is to further reduce the shared memory requirements
of each contribution calculation, which results in larger total
partition sizes. Of course, this comes at the cost of managing
temporary datasets (associated with edges) which are stored in global
memory. The resulting trade-off between reducing shared memory
requirements (thus increasing the partition size) and the added global
memory traffic is what determines the best loop splitting strategy.
Note that this strategy, in general, depends on the application
characteristics along with the parameters of the target GPU
architecture. In this paper, we present a methodology that addresses
this issue for the case of large unstructured mesh applications.
\suggest{However, even if the splitting methodology depends on the
semantics of unstructured mesh applications, the optimality issue can
be reduced to standard compiler theory. That is, optimal loop
splitting is a classical problem that can be approached with known
techniques, typically based on data-flow analysis.}
