In this paper, we have presented our study of general loop splitting
techniques for unstructured-mesh applications. The aim is to reduce
the shared-memory requirements of large loops, i.e., loops that
access large amounts of data in each iteration.

Based on the OP2 implementation of parallel loops over an unstructured
mesh, we derived multiple versions. The first technique is a simple
splitting of the original loop into three loops. It exploits a
common user-kernel property of CFD applications, in which the same
contribution is applied to multiple indirectly accessed datasets. In
this case, the contribution calculation and the updates of the indirect
datasets can be re-mapped to successive loops. However, this requires
user-code modification and multiple stagings of the contributions
between global and shared memory on a GPU.
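The simple splitting can be illustrated on a hypothetical edge kernel (the kernel body, mesh sizes, and dataset names below are illustrative, not the paper's actual loops): one edge loop that computes a contribution and scatters it to two indirectly accessed node datasets is re-mapped into three successive loops, with the contributions staged in a temporary array.

```c
#define N_EDGES 4
#define N_NODES 5

/* Fused form: each edge computes one contribution and applies it,
 * indirectly, to two node datasets. */
static void fused(const double *w, const int map[][2],
                  double *x, double *y) {
    for (int e = 0; e < N_EDGES; ++e) {
        double c = 0.5 * w[e];   /* contribution calculation */
        x[map[e][0]] += c;       /* indirect update of dataset x */
        y[map[e][1]] += c;       /* indirect update of dataset y */
    }
}

/* Split form: the same work re-mapped to three successive loops,
 * staging the contributions in a temporary array. */
static void split3(const double *w, const int map[][2],
                   double *x, double *y) {
    double c[N_EDGES];
    for (int e = 0; e < N_EDGES; ++e)   /* loop 1: contributions */
        c[e] = 0.5 * w[e];
    for (int e = 0; e < N_EDGES; ++e)   /* loop 2: update dataset x */
        x[map[e][0]] += c[e];
    for (int e = 0; e < N_EDGES; ++e)   /* loop 3: update dataset y */
        y[map[e][1]] += c[e];
}
```

Both forms produce identical results; the split form is what the staged contributions buy at the cost of extra global-memory traffic.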

We have shown that this can be avoided by synthesizing a single-loop
implementation of the original parallel loop that nevertheless follows
a split execution schedule. The first loop splitting technique
alternates contribution calculation and updates while executing the
same partition. As a consequence, the input user code does not require
modification, and the contributions can be kept in global memory,
relying on the L1 cache on a GPU. The general loop splitting technique
assumes that the contribution calculation can be split into multiple
successive functions. The related code synthesis further splits the
contribution calculation for each partition, staging into shared
memory only the datasets needed by each sub-function into which the
contribution is divided. The key point is that each sub-function
requires fewer and smaller indirectly accessed datasets to be stored
in shared memory. This maximizes the overall partition size, as less
data needs to be allocated in shared memory at any one time.
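A minimal sketch of the general splitting, for a hypothetical contribution of the form $f_1(A) + f_2(B)$ (the sub-functions, dataset names, and partition size are illustrative assumptions): since each sub-function reads only one indirectly accessed dataset, a single staging buffer, modeling the shared memory, is reused by the sub-functions in turn instead of holding $A$ and $B$ simultaneously.

```c
#define PART 4   /* elements per partition (illustrative) */

/* Models the shared-memory staging buffer: one buffer is reused by
 * each sub-function in turn, instead of holding A and B together. */
static double stage[PART];

static double f1(double a) { return 2.0 * a; }  /* reads only dataset A */
static double f2(double b) { return b * b; }    /* reads only dataset B */

static void contrib_split(const double *A, const double *B,
                          const int *mapA, const int *mapB,
                          double *contrib) {
    /* Sub-function 1: stage only the A values this partition needs. */
    for (int i = 0; i < PART; ++i) stage[i] = A[mapA[i]];
    for (int i = 0; i < PART; ++i) contrib[i] = f1(stage[i]);
    /* Sub-function 2: reuse the same buffer for the B values. */
    for (int i = 0; i < PART; ++i) stage[i] = B[mapB[i]];
    for (int i = 0; i < PART; ++i) contrib[i] += f2(stage[i]);
}
```

The peak staging footprint drops from $|A|+|B|$ to $\max(|A|,|B|)$ values per partition, which is what allows larger partitions for a fixed shared-memory budget.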

We have presented experimental results for four complex loops,
applying the first simple splitting on a GPU to validate the efficacy
of our approach. On GPUs, we obtained improvements of up to 34.5\%
over the baseline implementation. We have also studied the effect of
loop splitting for the same loops on a CPU, which features larger
caches, to understand the strategy that an optimizing compiler should
follow on these architectures. These results demonstrate that, except
in some corner cases with small degrees of parallelism, the fused
version of the loops always performs better than the split ones on
CPUs.
%In particular, the study of the Vflux loop shows at which point loop
%fusion should stop, as it no longer provides a performance
%improvement.
