\noindent While a number of works on GPU acceleration have focused on
structured-mesh problems~\cite{stencil-gpu-1,sblock}, there has been
relatively little work on unstructured mesh codes.

Two main projects target unstructured mesh applications across
diverse parallel architectures.
%
Liszt~\cite{liszt} is a domain-specific language from Stanford
(embedded in Scala~\cite{scala}) for solving partial differential
equations (PDEs) on unstructured meshes. A Liszt program is
translated to an intermediate representation, which a dedicated
compiler then compiles to native code for multiple platforms. The aim
is to exploit information about the structure of the data and the
nature of the algorithms in order to apply aggressive,
platform-specific optimizations. Performance results from a range of
systems (GPU, multi-core CPU, and MPI-based cluster) running several
applications written in Liszt are presented in~\cite{liszt}.

The OP2 project~\cite{CJ2011} provides a library interface for C/C++
and FORTRAN that supports: the declaration of an unstructured mesh in
terms of sets (e.g., vertices, edges) and maps (or pointers) between
sets (e.g., how edges are linked to vertices); and computation over
the mesh, expressed as a visit to every element of a set that applies
a user-defined kernel. OP2 is an active library that is translated
into several parallel programming models, including CUDA, OpenMP,
OpenCL, MPI, and combinations such as MPI+CUDA. A restricted form of
loop splitting for OP2 loops was presented in a previous
paper~\cite{op2-lcpc}; in contrast, this paper presents a generalized
loop splitting technique.
We use OP2 in this paper because: (i) we can take advantage of its
ROSE compiler infrastructure, which already supports source code
outlining~\cite{ROSE}; and (ii) we can easily compare our results to
previous attempts at optimizing OP2.
\suggest{For instance, a further contribution~\cite{op2-cf}
explores the benefits of loop fusion on multicores and GPUs.}

