% easychair.tex,v 3.2 2012/05/15
%
% Select appropriate paper format in your document class as
% instructed by your conference organizers. Only withtimes
% and notimes can be used in proceedings created by EasyChair
%
% The available formats are 'letterpaper' and 'a4paper' with
% the former being the default if omitted as in the example
% below.
%
\documentclass{easychair}
%\documentclass[debug]{easychair}
%\documentclass[verbose]{easychair}
%\documentclass[notimes]{easychair}
%\documentclass[withtimes]{easychair}
%\documentclass[a4paper]{easychair}
%\documentclass[letterpaper]{easychair}

% This provides the \BibTeX macro
\usepackage{doc}
\usepackage{makeidx}
\usepackage{xspace}

% In order to save space or manage large tables or figures in a
% landscape-like text, you can use the rotating and pdflscape
% packages. Uncomment the desired packages below.
%
% \usepackage{rotating}
% \usepackage{pdflscape}

% If you plan on including some algorithm specification, we recommend
% the below package. Read more details on the custom options of the
% package documentation.
%
% \usepackage{algorithm2e}

\graphicspath{{figures/}}

% Some of our commands for this guide.
%
\newcommand{\easychair}{\textsf{easychair}}
\newcommand{\miktex}{MiK{\TeX}}
\newcommand{\texniccenter}{{\TeX}nicCenter}
\newcommand{\makefile}{\texttt{Makefile}}
\newcommand{\latexeditor}{LEd}

\newcommand{\ie}{i.\,e.,\xspace}
\newcommand{\eg}{e.\,g.,\xspace}
\newcommand{\etal}{et al.\xspace}
\newcommand{\e}[1]{\ensuremath{\times 10^{#1}}}

%\makeindex

%% Document
%%
\begin{document}

%% Front Matter
%%
% Regular title as in the article class.
%
\title{Efficient Particle-Mesh Spreading on GPUs}

% \titlerunning{} has to be set to either the main title or its shorter
% version for the running heads. When processed by
% EasyChair, this command is mandatory: a document without \titlerunning
% will be rejected by EasyChair

\titlerunning{Efficient Particle-Mesh Spreading on GPUs}

% Authors are joined by \and. Their affiliations are given by \inst, which indexes
% into the list defined using \institute
%
\author{
Xiangyu Guo\inst{1} \and
Xing Liu\inst{2} \and
Peng Xu\inst{1} \and
Zhihui Du\inst{1} \and
Edmond Chow\inst{2}
}

% Institutes for affiliations are also joined by \and,
\institute{
  Tsinghua National Laboratory for Information Science and Technology\\
  Department of Computer Science and Technology, Tsinghua University, 100084, Beijing, China\\
  \email{\{csgxy123,bly930725\}@gmail.com, duzh@tsinghua.edu.cn}
\and
   School of Computational Science and Engineering, Georgia Institute of Technology,\\
   Atlanta, Georgia, 30332, USA\\
   \email{xing.liu@gatech.edu, echow@cc.gatech.edu}
}
%  \authorrunning{} has to be set for the shorter version of the authors' names;
% otherwise a warning will be rendered in the running heads. When processed by
% EasyChair, this command is mandatory: a document without \authorrunning
% will be rejected by EasyChair

\authorrunning{Guo, Liu, Xu, Du and Chow}


\clearpage

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\maketitle
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{abstract}
The particle-mesh spreading operation maps a value at an arbitrary
particle position to contributions at regular positions on a mesh.
This operation is often used when a calculation involving irregular positions
is to be performed in Fourier space.  We study several approaches for
particle-mesh spreading on GPUs.  A central concern is the use of
atomic operations.  We are also concerned with the case where
spreading is performed multiple times using the same particle 
configuration, which opens the possibility of preprocessing
to accelerate the overall computation time.  Experimental tests 
show which algorithms are best under which circumstances.
\end{abstract}



\pagestyle{empty}


%------------------------------------------------------------------------------
\section{Introduction}
\label{sec:intro}

Many scientific applications involve both a set of particles
that can reside at arbitrary locations in space, and a Cartesian mesh with
regularly-spaced mesh points.  Given a set of values, such as velocities,
on the mesh points, it may be desired to find
the interpolated values at the arbitrary particle locations.  This is
called the particle-mesh {\em interpolation} operation.  Mesh points
nearby the particle are used to interpolate the value of the quantity at
that particle.  The inverse operation takes values at particle positions
and contributes them
to values at nearby mesh points.  This is called the particle-mesh
{\em spreading} operation.  The topic of this paper is particle-mesh
spreading.  The operation is a key step in the non-equispaced fast
Fourier transform~\cite{dutt1993fast, ware1998fast}, with applications
including tomography~\cite{schomberg1995gridding}, magnetic resonance
imaging~\cite{twieg1983k} and ultrasound~\cite{soumekh1987computer}.
Particle-mesh spreading is also used in the particle-mesh Ewald summation
(PME) method \cite{darden}, widely used in molecular
dynamics~\cite{gotz2012routine} and
other types of simulations \cite{darve-2005,liu1large}
to evaluate long-range interactions between particles.

In various particle-mesh applications, given quantities located
at particle positions, such
as velocities, forces or charges, are mapped onto a 3D regular mesh.
The spreading contributes to a $p \times p \times p$ region of the mesh
roughly centered at the particle.  The value of $p$ is related to
the order of the (inverse) interpolation function.  Figure
\ref{fig:spreading} illustrates the particle-mesh spreading of two
particles onto a 2D mesh using $p=4$.

\begin{figure}[htbp]
\centering
\includegraphics[width=7.5cm]{mesh_spreading}
\caption{Particle-mesh spreading onto a 2D mesh with $p = 4$.
The solid green circle and red triangle represent two particles.
The mesh points receiving contributions from these particles are
shown with green circles and red triangles, respectively.}
\label{fig:spreading}
\end{figure}

While both particle-mesh interpolation and spreading are important, we
focus on the latter because it is much more challenging to obtain high
performance for the spreading operation.  These operations have very
different performance characteristics because data structures are
usually particle based rather than mesh based: it is easy to determine
the neighboring mesh points of a particle, but hard to efficiently
determine the neighboring particles of a mesh point, especially when
particles can move.
For the spreading operation, a natural parallelization across
particles means that the mesh variables are shared, and locking/waiting is
needed to control access to these variables.
For the interpolation operation, the quantity at each particle is simply
computed by reading the values at nearby mesh points.  This paper focuses
on the particle-mesh spreading operation on GPUs, where large numbers
of threads may be contending for writes on mesh variables.

The simple %na\"{\i}ve
method of parallelizing
particle-mesh spreading on GPUs is to use one thread
to perform the spreading operation for each particle.
As mentioned, this requires using expensive atomic operations
as multiple threads might attempt to update
the same mesh location simultaneously.
Additional challenges arise from the sparse and irregular nature
of spreading, making it hard to achieve load balance and
coalesced memory access, leading to poor performance on GPU hardware.

Previous research on particle-mesh spreading on GPUs attempts to
enhance coalesced memory access and to partially avoid the use of atomic
operations~\cite{harvey2009implementation, brown2012implementing}.
In these studies, a preprocessing step is used to create a
mesh based data structure.  Each mesh point can store
a single particle \cite{harvey2009implementation}
or a list of particles \cite{brown2012implementing}.
No atomic operations are needed to perform the actual spreading
operation because a single thread sums the contributions
for a mesh point using the mesh based data structure.

A number of issues can be raised with the above mesh based approach.
Performance is highly dependent on the number of particles
per grid point.  (The relationship between the number of particles
and the number of grid points is chosen by balancing accuracy and cost.)
For fewer than 1 particle per grid point on average, the mesh based
approach may be inefficient because of the large number
of mesh points not associated with particles.  Also, while
avoiding atomic operations is a good optimization guideline,
on recent GPU microarchitectures, \eg the Kepler GK110,
the atomic operation throughput has been substantially improved,
making particle based approaches more competitive.

In this paper, we are particularly interested in
a new use case for particle-mesh spreading, making it worthwhile
to reinvestigate these algorithms.  In traditional uses of
particle-mesh spreading, the operation is performed once for
a given configuration of particles, where a configuration is
a set of particle locations.  The new use case is to perform
the spreading operation multiple times for the same particle configuration.
This is necessary when the
spreading operation is performed inside an iterative method,
for example, inside the Lanczos algorithm, to compute
Brownian displacements in Brownian
dynamics simulations for the given particle configuration~\cite{liu1large}.
This use case means that it may be profitable to perform some
preprocessing, such as construction of mesh based data structures,
to speed up the overall computation.

The contribution of this paper is two-fold: 1) we propose a new
algorithm for computing a mesh based data structure on GPUs that
is useful when the spreading operation is performed multiple times,
and 2) we propose a technique that uses GPU warp shuffle operations to
optimize the spreading operation with the mesh based structure.  It is
unlikely
that one single spreading method achieves the best performance for all
applications, with different densities of particles relative to mesh
points.  To fully understand when to use what algorithms, we compare
several spreading algorithms using well-selected test cases.  For example,
we will show that particle based approaches are now very fast
on GPUs, given improvements in the speed of atomic operations.

%------------------------------------------------------------------------------
\section{Critique of Existing Approaches}

%Particle-mesh spreading algorithms on GPUs can be divided into two categories,
%distinguished by whether the calculations are particle based or mesh based.
%In this section we discuss how algorithms in these two categories
%are different in terms of implementation and performance.

\subsection{Particle Based Approach}

The simple particle based approach assigns one thread per particle
to perform the spreading operations. Because multiple threads
working on nearby particles may need to update the same mesh points
concurrently, the use of atomic operations is generally necessary.
While this approach may work well on CPUs, it is traditionally
thought to be inefficient on GPUs
where atomic operations are relatively more expensive.

A major advantage of the particle based approach is that it only
needs a simple data structure,
consisting of the list of particles and their coordinates.
The (inverse) interpolation coefficients are computed ``on-the-fly''
using the particle coordinates.
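To make the approach concrete, the following is a minimal serial sketch
in Python (our illustration, not the authors' implementation): one loop
iteration plays the role of one GPU thread, \texttt{weight\_fn} is a
hypothetical stand-in for the inverse interpolation function, and the
\texttt{+=} on the mesh is where a GPU kernel would need an atomic add.

```python
import numpy as np

def spread_particle_based(positions, values, K, p, weight_fn):
    """Serial analogue of one-thread-per-particle spreading onto a
    periodic K x K x K mesh.  weight_fn(pos, mesh_pt) is computed
    on the fly from the particle coordinates."""
    mesh = np.zeros((K, K, K))
    for pos, val in zip(positions, values):
        # corner of the p^3 region roughly centered at the particle
        base = np.floor(pos).astype(int) - p // 2
        for i in range(p):
            for j in range(p):
                for k in range(p):
                    pt = (base + (i, j, k)) % K   # periodic wrap
                    w = weight_fn(pos, pt)
                    mesh[tuple(pt)] += w * val    # atomicAdd on a GPU
    return mesh
```

With a weight function that sums to one over the $p^3$ region, the
total spread mass equals the particle value.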

% Theoretically, scaling may not be linear due to contention when there
% are more particles

\subsection{Mesh Based Approach}

The mesh based approach, in contrast to the particle based approach, assigns
threads to mesh points.  This is the approach in the clever
work of Harvey and De Fabritiis~\cite{harvey2009implementation} on NVIDIA's
Tesla microarchitecture.  The basic idea is to use a ``gather'' for each
mesh point rather than a ``spread'' for each particle.  The algorithm
consists of three steps.  In the first step, each particle is placed
at the nearest mesh point.  Atomic operations are still needed in this
step, but they are much fewer than in the particle based approach (by
a factor of $p^3$ because particles rather than spreading contributions
are collected at the mesh points).  Each mesh point can hold at most one
particle, so any additional particles are placed on an overflow list.
In the second step, the actual spreading operation is performed at
each mesh point by gathering contributions from particles placed in
the surrounding $p^3$ mesh points.  Since each thread only updates one
mesh point, the use of atomic operations is not needed in this step.
As designed, memory access is coalesced in this step as adjacent
threads update adjacent mesh points.  In the third step, particles
on the overflow list are processed using the particle based approach.
This algorithm follows the paradigm of dividing the computation into a
regular part and an irregular part.  The regular part can be computed
quickly on GPU hardware and hopefully dominates the irregular part.
In this paper, we refer to this specific mesh based algorithm as
the ``Gather algorithm.''
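The three steps can be sketched serially in Python as follows (a
simplified illustration under our own conventions, not the authors'
code; \texttt{weight\_fn} is a hypothetical interpolation weight that
must return zero outside its support):

```python
import numpy as np

def gather_spread(positions, values, K, p, weight_fn):
    """Serial sketch of the three-step Gather algorithm."""
    slot = -np.ones((K, K, K), dtype=int)    # at most one particle per point
    overflow = []
    # Step 1: place each particle at its nearest mesh point.
    # On a GPU this needs an atomic operation; extras overflow.
    for n, pos in enumerate(positions):
        idx = tuple(np.round(pos).astype(int) % K)
        if slot[idx] < 0:
            slot[idx] = n
        else:
            overflow.append(n)
    # Step 2: each mesh point gathers from particles placed in its
    # surrounding p^3 neighborhood -- no atomics, since exactly one
    # value is written per mesh point.
    mesh = np.zeros((K, K, K))
    half = p // 2
    for x in range(K):
        for y in range(K):
            for z in range(K):
                for dx in range(-half, p - half):
                    for dy in range(-half, p - half):
                        for dz in range(-half, p - half):
                            n = slot[(x + dx) % K, (y + dy) % K, (z + dz) % K]
                            if n >= 0:
                                mesh[x, y, z] += weight_fn(positions[n], (x, y, z)) * values[n]
    # Step 3: overflow particles would be spread with the particle
    # based approach (omitted in this sketch).
    return mesh, overflow
```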

When the number of particles is smaller than the number of mesh points,
the Gather algorithm incurs many more memory transactions than the particle
based approach.  This may be an acceptable cost if it is lower than the
penalty of using atomic operations.  This was the case for the Tesla
microarchitecture used by Harvey and De Fabritiis~\cite{harvey2009implementation},
but on NVIDIA's Kepler microarchitecture where atomic operations can be as
fast as global memory load operations, the extra memory transactions
may outweigh the gain of avoiding atomic operations.

Another potential disadvantage of the Gather algorithm is that
the interpolation weights must be computed multiple times,
once for every particle contributing to a mesh point, rather
than simply once for every particle in the particle based approach.
This is because the interpolation weights for a particle depend
on the particle's position.  In essence, the interpolation
weights are computed $p^3$ times rather than once.  In this paper,
we use cardinal B-spline interpolation (used in the smooth PME
method~\cite{essmann1995smooth}).  For high order B-spline interpolation,
e.g., $p>7$,
the interpolation weights are often computed via a recursive process,
making this cost significant.
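As a concrete example of the recomputation cost, the cardinal B-spline
$M_n$ can be evaluated with the standard recursion from the smooth PME
literature; the sketch below (ours, for illustration) is what would be
re-run $p^3$ times per particle in the Gather algorithm rather than
once.

```python
def bspline(u, n):
    """Cardinal B-spline M_n(u) with support [0, n], evaluated by the
    recursion M_n(u) = (u M_{n-1}(u) + (n-u) M_{n-1}(u-1)) / (n-1),
    with base case M_2(u) = 1 - |u - 1| on [0, 2]."""
    if n == 2:
        return 1.0 - abs(u - 1.0) if 0.0 <= u <= 2.0 else 0.0
    return (u * bspline(u, n - 1) + (n - u) * bspline(u - 1.0, n - 1)) / (n - 1)
```

Along one dimension, the $p$ weights for a particle with fractional
offset $u$ are $M_p(u+k)$ for $k = 0, \dots, p-1$; they sum to one
(partition of unity).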

We note that when the Gather algorithm for spreading must be
performed many times for the same particle configuration, the result of the
first placement step of the Gather algorithm can be saved and
reused.  In this paper, we have implemented and optimized the
Gather algorithm in order to study its performance for
various test cases.

\subsection{Multicoloring Approach}

In our previous work on Intel Xeon Phi, we parallelized the spreading
operation by using a particle based approach that does not need atomic
operations \cite{liu1large}.  Multicoloring is used to partition the
particles into sets called ``colors.''  Spreading is performed in
stages, each corresponding to a color.  In each stage, a thread is
assigned a subset of the particles of the current color such that each
thread can update mesh locations without conflict from other threads.
This algorithm, however, is not appropriate for GPUs because of its
limited parallelism: each thread must be assigned the
spreading operation for many particles.  In essence, the particles
assigned to a thread must be processed sequentially, otherwise
conflicts would occur.  We will not discuss the multicoloring
approach further in this paper.

%%%% below needs to be moved to SEC III
% Although not considered in previous work, the performance of the particle based
% approach can be improved by choosing the right granularity of parallelism.
% The choice of one thread per particle has potentially two problems.
% First, this choice limits the maximum number of threads to the number
% of particles.  Second, this choice has poor memory access patterns
% since it does not guarantee any memory access locality between
% adjacent threads.
% In this paper, we use one warp per particle in the spreading
% operation.  Each warp has a workload of spreading to $p^3$ mesh points.
% This choice of granularity can use more parallel resources and
% also has coalesced memory accesses as all
% threads within a warp will access neighboring memory addresses.

\section{Proposed Mesh Based Approaches}
\label{sec:matrix}

\subsection{Relation to Sparse Matrix Techniques}

When the spreading operation is performed multiple times for the same
particle configuration, it may be worthwhile to separately consider a
preprocessing step and a spreading step such that the spreading step
is as fast as possible, and the cost of the preprocessing step can be
amortized over the multiple spreading operations.  To avoid needing atomic
operations in the spreading step, the preprocessing step generally needs
to compute a mesh based data structure.  The mesh based data structure
computed by the Gather algorithm, however, has two main issues: 1)
it requires performing gather operations on every mesh point even for
mesh points that do not have particles spreading onto them, and 2)
it requires recomputing the interpolation coefficients many times.

In order to make the spreading step as fast as possible, it is tempting
to use a different mesh based data structure where the interpolation
coefficients are stored and not recomputed.  This addresses the second
problem above, but introduces the drawback that DRAM reads would be
needed for the interpolation coefficients.  Although these reads can be
coalesced, the tradeoff between storage and recomputation of interpolation
coefficients must be studied.  To address the first problem above,
we can explicitly store a list of contributions at each mesh point.
This also avoids the need for an overflow list in the Gather algorithm.

The above ideas can be implemented using a sparse matrix.
Each row of the sparse matrix is stored contiguously, and the elements
in a row represent
the interpolation weights for a given mesh point.  Applying
the spreading operation consists of performing a sparse matrix-vector
product (SpMV), where the vector is the quantities at the particle locations
to be spread.
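In sketch form (ours, not the authors' kernel), the spreading step then
reduces to a CSR matrix-vector product:

```python
import numpy as np

def spread_spmv(row_ptr, col_ind, weights, particle_values):
    """Spreading as y = A x in CSR format: row i stores the saved
    interpolation weights of mesh point i, and columns index the
    particles.  Each row is summed independently (by one thread or
    compute unit on a GPU), so no atomic operations are needed."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):                      # one row per mesh point
        for j in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += weights[j] * particle_values[col_ind[j]]
    return y
```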

In this section, we describe three mesh based approaches, which we
call {\em single mesh}, {\em group mesh}, and {\em hybrid mesh}.
The single mesh approach uses a data structure identical to
the compressed sparse row (CSR) data structure used in sparse
matrix computations.  We describe a new fast algorithm for GPUs for computing
the spreading operator as a sparse matrix in CSR format.  To further improve
performance, we describe a new group mesh approach.
Finally, for completeness,
we describe a hybrid mesh approach, which is analogous to using
the hybrid sparse matrix format in cuSPARSE for representing the
spreading operator.

\subsection{Construction of Single Mesh and Group Mesh Data Structures}

The spreading approach that we describe here involves precomputing
mesh based data structures.  The efficiency of this preprocessing step
must not be ignored, because it is repeated for every particle
configuration, and the number of spreading steps over which it is
amortized may not be very large.  In this section, we focus
on the efficient construction of the single and group mesh based
data structures.

We first consider a simple data structure to set the stage for a more
complex approach.  As mentioned, the single mesh data structure
corresponds to a sparse matrix in CSR format.  The data structure is
mesh based because rows, which are stored contiguously, correspond to
mesh points, and columns correspond to particles.
Constructing such a sparse matrix on GPUs is straightforward,
and is illustrated in Figure \ref{fig:single-mesh-transpose}.

\begin{figure}[htbp]
\centering
\includegraphics[width=8.5cm]{single-mesh}
\caption{Constructing the spreading operator in the single mesh
data structure with $p = 3$.
The solid blue dot and solid red triangle represent two particles.
For each particle, 9 GPU threads are used to
compute its spreading contributions as well as to
update the rows of the data structure.}
\label{fig:single-mesh-transpose}
\end{figure}

Specifically, constructing the spreading operator in CSR
format consists of three steps:
\emph{Count}, \emph{Scan} and \emph{Collect}.
First, the \emph{Count} step traverses all the particles to count
the number of spreading contributions to each mesh point.
Next, the \emph{Scan} step performs a prefix sum on the counts
to obtain the starting positions of each mesh point in the CSR matrix.
Finally, the \emph{Collect} step computes the spreading contributions from each particle
and inserts them into the rows of the data structure.
Since multiple threads may attempt to update the
same row of the matrix simultaneously, atomic operations are used.
(We rely, in some sense, on the fact that the atomic operations are
not too expensive, as will be shown later.)
In all three steps, we assign $p^3$ threads rather than one thread to each particle
to maximize use of parallel resources.
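The three steps can be sketched serially in Python as follows (an
illustration under our own data layout, not the authors' kernels); on
the GPU, the counter increments in Count and the slot reservations in
Collect are the atomic operations referred to above.

```python
import numpy as np

def build_csr(contributions, n_mesh):
    """Count/Scan/Collect construction of the spreading operator in
    CSR format.  `contributions` is a flat list of
    (mesh_point, particle, weight) triples, as generated by the p^3
    threads assigned to each particle."""
    counts = np.zeros(n_mesh, dtype=int)
    for m, _, _ in contributions:           # Count
        counts[m] += 1                      # atomicAdd on a GPU
    row_ptr = np.zeros(n_mesh + 1, dtype=int)
    np.cumsum(counts, out=row_ptr[1:])      # Scan (prefix sum)
    fill = row_ptr[:-1].copy()              # next free slot in each row
    col_ind = np.zeros(row_ptr[-1], dtype=int)
    weights = np.zeros(row_ptr[-1])
    for m, n, w in contributions:           # Collect
        k = fill[m]                         # atomicAdd returns old value
        fill[m] += 1
        col_ind[k], weights[k] = n, w
    return row_ptr, col_ind, weights
```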

An inefficiency with the above procedure, however, is that
in the Collect step, the threads assigned to
each particle are in one warp, but will update
different rows of the CSR matrix.  The Collect step will
have low performance on GPUs because of non-coalesced memory access.
To promote coalesced memory access,
we group sets of grid points in the single mesh data structure.
Within each group, a stored interpolation weight must identify
which mesh point it is associated with.  This gives the
group mesh data structure.  It is similar in spirit to various
multirow sparse matrix storage formats for GPUs
\cite{oberhuber2010new,koza2014compressed,kreutzer2014unified}.
Figure \ref{fig:group-mesh-transpose} illustrates the group
mesh data structure.

%When the average number of spreading contributions per mesh point
%is small or the distribution of the particle positions is
%non-uniform, the construction of the sparse matrix in the CSR format
%also has the problems of thread divergence and

\begin{figure}[htbp]
\centering
\includegraphics[width=8.5cm]{group-mesh}
\caption{Constructing the spreading operator in the group mesh
data structure with $p = 3$.
The solid blue dot and red triangle represent two particles.
For each particle, 9 GPU threads are used to
compute its spreading contributions
and update the rows of the data structure.
}
\label{fig:group-mesh-transpose}
\end{figure}

The main advantage of the group mesh data structure,
compared to the single mesh data structure, is that it has
better memory access locality and tends to have coalesced memory accesses.
We therefore expect both constructing and using the group mesh data
structure to be more efficient than for the single mesh data structure.
An issue with the group mesh data structure is that the spreading
step once again needs to use atomic operations.
Experimentally, we find that the improved memory access patterns
can make the group mesh approach better than the single mesh
approach, despite its use of atomics.



\subsection{Spreading Optimization}
%
With the intermediate results of spreading stored in a sparse
matrix format, the spreading step can be computed efficiently
as a sparse matrix-vector multiplication (SpMV).
While optimization techniques for SpMV have been
intensively studied on GPUs~\cite{Bell:2009:ISM:1654059.1654078,
Choi:2010:MAS:1693453.1693471},
we apply additional techniques
to accelerate the spreading step
that utilize hardware features introduced in the Kepler microarchitecture.

We define a {\em compute unit} (CU) as a group of threads used
to collect the spreading contributions at a mesh point.
By using more than one thread for a mesh point, thread
divergence is reduced and coalesced memory access is promoted.
This is analogous to why more than one thread is used to
multiply a row in GPU implementations of
SpMV~\cite{Bell:2009:ISM:1654059.1654078}.

Using multiple threads for a mesh point or row, however, requires
the use of atomic operations because
multiple threads within a CU will
update the same mesh point simultaneously.
To avoid the use of atomic operations,
we let a specific thread in the CU collect the sum
using an intra-CU reduction operation.
On the Kepler microarchitecture,
the reduction operation can be efficiently implemented by using
a hardware feature called \emph{warp shuffle}.
Warp shuffle is a new set of instructions that allows threads of a warp
to read each other's registers, providing a new way to communicate values
between parallel threads besides shared memory. Compared to shared
memory communication, warp shuffle is much more efficient.
The throughput of warp shuffle instructions
is 32 operations per clock cycle per multiprocessor
for Kepler GPUs~\cite{nvidia2014programming}.

Figure~\ref{fig:warp-shuffle} illustrates the intra-CU reduction
implemented using warp shuffle instructions.
The figure shows a warp of 32 threads, organized such that
8 threads are assigned to a row (or mesh point), i.e., CU=8.
To perform a reduction operation within a row using 8 threads,
3 iterations of warp shuffle operations are needed, following
the binomial tree algorithm.

\begin{figure}[htbp]
\centering
\includegraphics[width=8.5cm]{warp-shuffle.pdf}
\caption{Illustration of intra-CU reduction using warp shuffle operations.
The reduction across 4 sets of 8 threads is performed in 3 iterations
following the binomial tree algorithm (see text).}
\label{fig:warp-shuffle}
\end{figure}
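The reduction pattern can be simulated in plain Python (a sketch of
the idea; an actual Kepler kernel would use the warp shuffle
intrinsics): each of the $\log_2(CU)$ iterations halves the stride,
and after the last iteration the first lane of each CU holds the row
sum.

```python
def cu_reduce(values, cu_size):
    """Simulate an intra-CU binomial-tree reduction over a warp whose
    lanes hold `values`.  At stride d, each lane adds the register of
    the lane d positions ahead within its CU; for cu_size = 8 this
    takes 3 iterations.  Returns the per-CU sums (lane 0 of each CU)."""
    lanes = list(values)                 # one "register" per lane
    d = cu_size // 2
    while d >= 1:
        for lane in range(len(lanes)):   # all lanes step in lockstep
            if lane % cu_size + d < cu_size:
                lanes[lane] += lanes[lane + d]
        d //= 2
    return lanes[::cu_size]
```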

The performance of the spreading step using the single mesh method
depends on the choice of the CU size.
Here, we describe a heuristic for choosing the CU size,
which can be expressed as
\[
CU_{optimal}=
\begin{cases}
1 & \text{if } N p^3/K^3 < 1, \\
2^t & \text{if } 2^t \leq N p^3/K^3 < 2^{t+1} \text{ and } 0 \leq t < 4, \\
16 & \text{if } N p^3/K^3 \geq 16,
\end{cases}
\]
where $N p^3/K^3$ is the average number of spreading
contributions per mesh point ($ASM$).
In sparse matrix terms, $ASM$ is
the average number of nonzeros per row.

To explain the heuristic,
we use CU sizes that are powers of 2
for efficiency of the warp shuffle reduction.
The CU size should also not be larger
than $ASM$, since otherwise some threads would be idle.
When $ASM$ is larger than 16, the heuristic selects
the optimal CU size as 16.  Increasing the CU size from
16 to 32 does not significantly improve the load balance as 16
appears to be fine enough parallelism.  Also, increasing
the CU size from 16 to 32 increases the number
of warp shuffle iterations from 4 to 5.
We have run some experiments to verify the heuristic.
Figure \ref{fig:cu} shows the results.
%
\begin{figure}[htbp]
\centering
\includegraphics[width=8.5cm]{cu.pdf}
\caption{The performance of the spreading step using
the single mesh method with various sizes of CU.
$p =6$ is used in the test.}
\label{fig:cu}
\end{figure}
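The heuristic amounts to taking the largest power of two not exceeding
$ASM$, clamped to the range $[1, 16]$; a minimal sketch (our own
helper, for illustration):

```python
def optimal_cu(n_particles, p, K):
    """CU-size heuristic: the power of two 2^t with
    2^t <= ASM < 2^(t+1), where ASM = N p^3 / K^3, clamped to 1
    below ASM = 1 and to 16 above ASM = 16."""
    asm = n_particles * p**3 / K**3
    cu = 1
    while cu * 2 <= asm and cu < 16:
        cu *= 2
    return cu
```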
%

% Let $N_{CU}$ be the size of CU. The total number of
% threads used to compute the spreading for a $K \times K \times K$
% mesh is $K^3 \times N_{CU}$.
% The number of operations per CU to compute the spreading
% is $\lceil{\frac{ASM}{N_{CU}}}\rceil$.
% Thus, the performance of the spreading step using
% the single mesh method can be modelled as
% \[
% T_{single} = C_{1} \times K^3 \times N_{CU} + C_{2} \times \lceil{\frac{ASM}{N_{CU}}}\rceil
% \]
% where $C_1$ and $C_2$ are constants.

%In general, the total number of warps can be used for the spreading step
%is given by
%\[
%N_{warp} = \frac{K^3}{gsize}
%\]

The performance of the group mesh algorithm depends
on the selection of the group size ($gsize$), \ie the 
number of mesh points that are grouped together.
On the one hand, there is more write contention 
when $gsize$ is small. On the other hand,
the total number of warps that can be used for spreading
is smaller when $gsize$ is larger.  We experimentally determined
that an optimal value of $gsize$ is 64
for any average number of spreading contributions
per mesh point. The number may vary on different GPUs.
On the GPU hardware used in our test,
using $gsize = 64$ appears to be an appropriate compromise
between parallelism and memory access conflicts.
Figure~\ref{fig:gsize} shows this result.

\begin{figure}[htbp]
\centering
\includegraphics[width=8.5cm]{gsize.pdf}
\caption{Performance of the spreading step using the group mesh method
with various $gsize$. The tests used $K=128$ and $p=6$.}
\label{fig:gsize}
\end{figure}

%% For the group mesh method,
%% the total number of warps can be using in the spreading step
%% for a $K \times K \times K$ mesh is $\frac{K^3}{gsize}$.
%% The number of operations per warp to compute the spreading
%% is $ASM \times gsize$.
%% Thus, the performance of the group mesh method can be modelled as
%% %
%% \begin{align}
%% %M_{mesh\text{-}group} =&  3 \times (4 + 4 + 8) \times Np^3 + 3 \times 8 \times Np^3 \nonumber\\
%% % &+ 3 \times 2 \times K^3 \nonumber\\
%% % T_{mesh\text{-}group} =&   \frac{M_{mesh\text{-}group}}{B} \times P_{atomic}\nonumber
%% T_{group} = C_{1} \times \frac{K^3}{gsize} + C_{2} \times ASM \times gsize \nonumber
%% \end{align}
%% %
%% %\begin{eqnarray}
%% %M_{mesh\text{-}group} =&  3 \times (4 + 4 + 8) \times Np^3 + 3 \times 8 \times Np^3 \nonumber \\
%% %&+ 3 \times 2 \times K^3 \nonumber\\
%% %T_{mesh\text{-}group} =&   \frac{M_{mesh\text{-}group}}{B} \times P_{atomic} \nonumber
%% %\end{eqnarray}
%% %
%% where $C_1$ and $C_2$ are constants.

\subsection{Hybrid Mesh Approach}
%
In general, the spreading operator can be stored in any sparse matrix format.
For completeness, we consider the cuSPARSE ELL-COO hybrid format,
which is often considered the most efficient matrix
format for SpMV on GPUs~\cite{Bell:2009:ISM:1654059.1654078}.

A two-step preprocessing phase is needed to construct the 
spreading operator in this format.  First, the spreading 
operator is constructed in compressed sparse column (CSC) format.
This is natural because the interpolation coefficients can be
efficiently computed on a per particle basis, and the coefficients
for each particle correspond to a column in the spreading operator matrix.
The CSC format is then transformed to ELL-COO format using 
the cuSPARSE {\tt csc2hyb} function.  In this function,
the maximum number of spreading contributions
that can be stored in the ELL part, denoted by $N_{hybrid}$,
is chosen automatically.

The spreading operation for this format is simply the hybrid SpMV 
operation provided by cuSPARSE.  This operation does not need atomics,
and due to extensive work by NVIDIA
in optimizing SpMV in cuSPARSE, we expect this spreading operation
to be very efficient.

%The spreading step using the hybrid mesh method
%is expected to be much more efficient than that
%using the single and group mesh method, while the preprocessing step
%using the hybrid mesh method is expected to be less efficient
%its complex data structure.

\section{Experimental Results}

\subsection{Test Platforms}

Table \ref{table:gpus} lists the GPUs
used in our tests.  Most experiments were conducted on
an NVIDIA K40c with the Kepler GK110 microarchitecture.
To evaluate the effect of using atomic operations,
we also used a GTX 480, based on the earlier Fermi microarchitecture.
The CUDA 6.5 toolkit was used in all experiments.

\begin{table}[htbp]
\small
\centering
\caption{NVIDIA test platforms}
\label{table:gpus}
\begin{tabular}{l|l|l}
\hline
GPU & K40c & GTX 480 \\
\hline
Architecture & Kepler & Fermi\\
Compute capability & 3.5 & 2.0\\
CUDA cores & 2,880 & 448\\
GPU clock rate & 876 MHz & 1,401 MHz\\
Memory clock rate & 3,004 MHz & 1,848 MHz\\
L1 cache size & 16KB & 16KB\\
L2 cache size & 1,536KB & 768KB\\
Global memory size & 12GB & 1.5GB\\
No. of registers per block & 64K & 32K\\
Shared memory per block & 48KB & 48KB\\
\hline
\end{tabular}
\end{table}

\subsection{Test Problems}

The performance of particle-mesh spreading is problem dependent,
so no single test problem is sufficient; we expect
different algorithms to be best for different particle
configurations.  We therefore propose
a class of test problems for particle-mesh computations.  The key
parameter is the average number of spreading contributions per
mesh point, abbreviated ASM.  To construct problems with
different values of ASM, we vary the number of particles
from 1,000 to 10,000,000 and the mesh dimensions
$K \times K \times K$, with $K$ chosen as 32, 64, 128, and 256.
We also use values 4 and 6 of the interpolation parameter $p$.
In this paper, we generate random particle positions
using a uniform distribution over the mesh.  Nonuniform distributions
would create load balance issues that we do not address in this
initial study.
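The ASM value follows directly from these parameters: each of the $N$ particles contributes $p^3$ values, and, under the uniform distribution, these are counted over the $K^3$ mesh points. A small Python helper (illustrative only) makes the arithmetic explicit.

```python
# ASM (average spreading contributions per mesh point) for a given
# test configuration: each of the N particles contributes p^3 values,
# counted over the K^3 mesh points, so ASM = N * p^3 / K^3.
def asm(n_particles, K, p):
    return n_particles * p**3 / K**3

# One million particles on a 64^3 mesh with p = 6 gives an ASM of
# roughly 824 contributions per mesh point.
print(asm(1_000_000, 64, 6))
```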

\subsection{Atomic Operation Overhead for Different Platforms}

In previous work~\cite{harvey2009implementation, brown2012implementing},
particle based approaches were considered less efficient than mesh
based approaches due to the use of atomic operations.  While
this may be true on earlier GPU microarchitectures, the Kepler GK110
microarchitecture has significantly improved the performance of atomic
operations~\cite{nvidia2012kepler}.  We are thus interested in
how particle based approaches compare to mesh based approaches
on contemporary GPU hardware.

In this section, we test the particle based
algorithm and show the overhead of atomic operations by comparing
the execution time of the algorithm itself with that of a modified version
that replaces atomic operations with ordinary global memory stores.
We use both the Kepler platform and the older Fermi platform.
Although the modified version does not generate correct results,
it is useful for determining the performance impact of
atomic operations.
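The effect of dropping atomicity can be mimicked on the CPU. In the NumPy sketch below (an analogy, not GPU code), fancy-indexed assignment loses colliding contributions the way non-atomic global stores would, while {\tt np.add.at} accumulates them all, as {\tt atomicAdd} does.

```python
import numpy as np

# CPU analogy for the atomic-vs-plain-store experiment: with duplicate
# target indices, fancy-indexed "+=" keeps only the last colliding
# write (a lost update, like a non-atomic global store), while
# np.add.at accumulates every contribution (like atomicAdd).
idx = np.array([1, 1, 2])             # two particles hit mesh point 1
contrib = np.array([1.0, 2.0, 3.0])

lost = np.zeros(4)
lost[idx] += contrib                  # lost[1] == 2.0: one update dropped

mesh = np.zeros(4)
np.add.at(mesh, idx, contrib)         # mesh[1] == 3.0: both updates kept
```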

\begin{figure}[htbp]
\centering
\includegraphics[width=8.7cm]{atomic-yesno-kepler-fermi}
\caption{Performance impact of atomic operations in the particle
based algorithm.  Blue circles show the performance of the particle
algorithm.  Green crosses show the performance when
atomic operations are replaced by global memory writes.
The top panel shows results on the Fermi microarchitecture;
the bottom panel shows results on the Kepler microarchitecture.
The test problems used $K=64$ and $p=6$.}
\label{fig:kfatomic}
\end{figure}

As shown in Figure \ref{fig:kfatomic}, on the Fermi microarchitecture,
atomic operations add a very large overhead to the particle based algorithm.
On the Kepler microarchitecture, the overhead is much smaller, and is only
a small fraction of the overall execution time.

Figure \ref{fig:kfgn} compares the performance of the particle based
algorithm and the Gather algorithm on the Fermi and Kepler microarchitectures.
On Fermi, the particle based algorithm requires more time than the
Gather algorithm, but on Kepler, the Gather algorithm requires more time.
This change is directly related to the improvement in performance
of the atomic operations on Kepler.

\begin{figure}[htbp]
\centering
\includegraphics[width=8.7cm]{gather-naive-kepler-fermi}
\caption{Performance comparison between the particle based algorithm
and the Gather algorithm on Fermi and Kepler microarchitectures.
The test problems used $K=64$ and $p=6$.}
\label{fig:kfgn}
\end{figure}

\subsection{Comparison of Spreading Costs}

In this section we compare the cost of the spreading operations.
For the mesh based algorithms, we do not include the time for
constructing the mesh based data structures.  These will be 
considered separately later in this paper.

\begin{figure*}[htbp]
\centering
\includegraphics[width=18cm]{mesh-spreading-all}
\caption{Performance comparison between the particle based algorithm
and the mesh based algorithms.
The test problems used $p=6$.}
\label{fig:nmesh}
\end{figure*}

Figure \ref{fig:nmesh} shows the timing comparison between particle
based algorithm and mesh based algorithms.  For the mesh based
algorithms, the preprocessing time is not included here, but will
be analyzed in a later section.
We make the following observations from the figure.

1. For small numbers of particles, the particle based algorithm is best.
Threads are less likely to experience contention on atomic writes
when there are fewer particles, which gives this algorithm an advantage
in this regime.  It can be observed in the figures that the slope
of the timing curve for this algorithm (red triangles) increases
very slightly as the number of particles is increased.  This effect
may be due to increased contention as more particles target the
same mesh points.

2. Except for small numbers of particles, the hybrid mesh algorithm,
using the cuSPARSE SpMV operation for the hybrid format, is generally
best.

3. The cost of the Gather algorithm is composed of gathering contributions
at each mesh point and processing the overflow particles (these are
steps 2 and 3 of the Gather algorithm, as explained in Section II.B.).
When the number of particles is much less than $K^3$, there are few
if any overflow particles, and thus the cost of the algorithm
is independent of the number of particles.  For $K^3$
particles or more, the overflow phase adds to the execution time.
The cost of this phase increases linearly with the number of overflow
particles.  Thus there is an expected knee in the timing for the
Gather algorithm, as observed.

% 4. In general, performance Hybrid Mesh $\ge$ Group Mesh $\ge$ Single
% Mesh. Compared to the Single Mesh algorithm, the Group Mesh algorithm
% has hardly branch problems, so when particle number is sufficient, Group
% Mesh algorithm behaves better. The Hybrid algorithm is better than the
% Group Mesh algorithm because it avoids atomic operations.

\subsection{Comparison of Preprocessing Costs}

In this section, we compare the costs of constructing the mesh based
data structures.  From the sparse matrix point of view, converting
a particle based data structure to a mesh based data structure
is a matrix transpose operation.  Note, however, that in particle-mesh
applications, no sparse matrix is explicitly formed for the particle
based data structure.
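The transpose view can be sketched with SciPy (toy sizes, with random weights standing in for interpolation coefficients): converting the per-particle CSC form to a per-mesh-point CSR form is the kind of reorganization the preprocessing step performs.

```python
import numpy as np
from scipy.sparse import csc_matrix

# Preprocessing viewed as a storage transpose: the per-particle form
# (CSC, one column per particle) is converted to a per-mesh-point form
# (CSR, one row per mesh point).  Sizes and weights are arbitrary
# stand-ins for a real spreading operator.
K3, N = 27, 4
rng = np.random.default_rng(1)
dense = rng.random((K3, N)) * (rng.random((K3, N)) < 0.2)
S = csc_matrix(dense)

S_mesh = S.tocsr()   # row i lists every particle touching mesh point i
```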

\begin{figure}[htbp]
\centering
\includegraphics[width=7.5cm]{pre-computation}
\caption{Performance comparison of constructing the mesh based data structure 
for different algorithms
($K = 128$ and $p = 6$ was used for the test problems).}
\label{fig:matrix}
\end{figure}

Figure \ref{fig:matrix} shows the overhead of constructing the
mesh based data structures.  The group mesh data structure
can be constructed the fastest, due to better memory access patterns.
The hybrid mesh data structure (the ELL-COO format) is generally the
slowest to construct, even though, as shown above, hybrid mesh
spreading was the fastest among the mesh based approaches.

% One thing that is noteworthy is that when the number of particles is small,
% hybrid precomputation uses less time than
% the single mesh algorithm. This may due to the overhead of extra steps 
% for counting
% non-zero elements of each row in the sparse matrix single mesh transpose
% operations, when particle number is low, counting and scanning the $K^3$
% rows of the sparse matrix influences the overhead more.

Figures \ref{fig:single} and \ref{fig:group}
show the data structure construction cost for the single mesh method and
group mesh method, respectively, for different
grid size parameters, $K$.  In both cases, when the number of particles
is small, the $O(K^3)$ term of the cost dominates; when the number of
particles is large, the $Np^3$ term of the cost dominates, where 
$N$ is the number of particles.
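To make the two regimes concrete, the sketch below evaluates the two cost terms for hypothetical unit constants $c_1$ and $c_2$ (the true constants are machine dependent).

```python
# The two construction-cost terms from the text: an O(K^3) part from
# touching every mesh cell and an O(N p^3) part from touching every
# spreading contribution.  The constants c1 and c2 are hypothetical
# unit weights; the real values are machine dependent.
def construction_terms(N, K, p, c1=1.0, c2=1.0):
    return c1 * K**3, c2 * N * p**3

small = construction_terms(1_000, 128, 6)       # K^3 term dominates
large = construction_terms(10_000_000, 128, 6)  # N p^3 term dominates
```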

\begin{figure}[htbp]
\centering
\includegraphics[width=7.5cm]{transpose-single}
\caption{Performance of matrix construction of the single mesh method
under various mesh dimensions. The spreading order $p = 6$.}
\label{fig:single}
\end{figure}

\begin{figure}[htbp]
\centering
\includegraphics[width=7.5cm]{transpose-group}
\caption{Performance of matrix construction of the group mesh method
under various mesh dimensions. The spreading order $p = 6$.}
\label{fig:group}
\end{figure}

\subsection{Spreading Multiple Times}

In applications where spreading is performed multiple times for
the same particle configuration, the cost of constructing the 
mesh based data structure can be amortized.  Here we report
the total cost (preprocessing plus spreading) for 1 spreading
step and 20 spreading steps with the same particle configuration.

\begin{figure*}[htbp]
\centering
\includegraphics[width=18.3cm]{total-without-storage-all-1}
\caption{Comparison of the overall performance for 1 spreading
operation.}
\label{fig:l1}
\end{figure*}

\begin{figure*}[htbp]
\centering
\includegraphics[width=18.3cm]{total-without-storage-all-20}
\caption{Comparison of the overall performance for 20 spreading
operations with the same particle configuration.  The mesh based data
structures are only computed once and are reused.}
\label{fig:l20}
\end{figure*}

Figure \ref{fig:l1} shows the overall performance of the different
algorithms when spreading is performed only once.
As seen in the figure, the particle based algorithm
has the best performance when spreading is performed
only once or a very small number of times.

Figure \ref{fig:l20} shows the overall performance
when spreading is performed 20 times.
Several conclusions can be drawn from these figures.
When the number of particles is relatively small,
the particle based algorithm still has the best
performance for these particle-mesh configurations.
When the number of particles is relatively large,
the group mesh method is best.  Although the group mesh method's
spreading step is slower than the hybrid mesh algorithm's, the group
mesh method has a lower data structure construction cost.
We expect the hybrid mesh algorithm to be best
when its data structure construction time can be amortized
over a very large number of spreading steps.

When the dimension of the mesh is very small, \eg 32, and
the number of particles is between 10,000 and 100,000,
the single mesh method
has the best performance.

\subsection{Comparison of Reduction Performance Using Warp Shuffle and Shared Memory}

In GPUs based on the Tesla and Fermi microarchitectures, sharing data
between parallel threads can
only be done through shared memory.  In the newer Kepler microarchitecture,
NVIDIA introduced a way to share data directly between threads
belonging to the same warp, using so-called \emph{warp shuffle}
instructions.  By allowing the threads of a warp to read each other's
registers, warp shuffle instructions can achieve throughput that is
usually much higher than communication through shared memory.

In the single mesh method, we perform an intra-warp reduction
during spreading.  The reduction is implemented with warp shuffle
instructions.  In this section, we show the performance gain of this
optimization by comparing single mesh spreading using
warp shuffle reductions with a version using shared memory reductions.
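The reduction pattern can be modeled on the CPU. In the NumPy sketch below (a model of the pattern, not our GPU kernel), lane $i$ repeatedly adds the value held by lane $i + \mathit{offset}$, as a shuffle-down exchange would, so that lane 0 ends with the warp-wide sum.

```python
import numpy as np

# CPU model of the warp shuffle reduction (the real code runs per warp
# on the GPU): at each step, lane i adds the value held by lane
# i + offset; after log2(width) halving steps, lane 0 holds the sum
# of all lanes.  Assumes a power-of-two width (e.g. warp size 32).
def shuffle_reduce(lane_vals):
    vals = np.asarray(lane_vals, dtype=float).copy()
    offset = len(vals) // 2
    while offset:
        vals[:offset] += vals[offset:2 * offset]  # lane i += lane i+offset
        offset //= 2
    return vals[0]
```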

\begin{figure}[htbp]
\centering
\includegraphics[width=7.5cm]{shuffle-vs-shared}
\caption{Performance comparison of the single mesh method using
warp shuffle and shared memory reduction ($K = 128$ and $p = 6$).}
\label{fig:warpshared}
\end{figure}

Figure \ref{fig:warpshared} compares the execution time of
the single mesh method using the two reduction methods.  As
can be seen, the single mesh method with warp shuffle reductions is
never worse than its shared memory counterpart.  When the average
number of spreading contributions per mesh point (ASM) is larger than
20, the two versions achieve approximately the same performance.  One
explanation for this behavior is that, when ASM is sufficiently large,
the cost of the shuffle
or shared memory accesses is hidden by other costs such as warp divergence
or poor cache usage (Figure \ref{fig:cu} shows that a warp
uses only half the cache line when ASM is larger than 20).

\section{Conclusion}

In this paper, we discussed the advantages and disadvantages
of various algorithms for particle-mesh spreading.
We categorized algorithms as being particle based or mesh based.
Those that are particle based generally require atomic 
operations.  Those that are mesh based require the construction
of mesh based data structures.  We introduced single mesh and
group mesh data structures that are related to sparse matrix
data structures.  

Timing tests were used to determine which
algorithms are best for a test set parameterized by the
average number of spreading contributions per mesh point.  When only a
single spreading operation is performed for a given particle configuration,
the simple particle based method is fastest.  This is due
to the very fast atomic operations on current GPU architectures.  When multiple
spreading operations are performed and the preprocessing costs
can be amortized, the single mesh and group mesh algorithms
are marginally better, for moderate numbers of spreading
operations (around 20).  For very large numbers of spreading
operations, the hybrid mesh approach using the hybrid sparse
matrix data structure in cuSPARSE is fastest.  This is due
to very fast spreading but relatively high data structure construction times.

This paper also introduced the use of warp shuffle operations 
for performing reductions for summing contributions to a mesh
point with multiple threads.  This idea can be extended
to optimize the SpMV operation on GPUs for row-based data structures.


% use section* for acknowledgement
\section*{Acknowledgements}
%
This work was supported by the U.S. National Science Foundation
under grant ACI-1306573, 
the National Natural Science Foundation of China
(No.\ 61272087, 61363019 and 61073008), the Beijing
Natural Science Foundation (No.\ 4082016 and 4122039), and the Sci-Tech
Interdisciplinary Innovation and Cooperation Team Program of the
Chinese Academy of Sciences.

%------------------------------------------------------------------------------
% Refs:
%
\label{sect:bib}
\bibliographystyle{plain}
%\bibliographystyle{alpha}
%\bibliographystyle{unsrt}
%\bibliographystyle{abbrv}
\bibliography{pme}

%------------------------------------------------------------------------------
% Index
%\printindex

%------------------------------------------------------------------------------
\end{document}

% EOF
