\section{Scalability}
\label{scala}

While the scaling results on Keeneland are encouraging,
the measured process counts remain modest compared to
projections for exascale systems.
%
Indeed, at such large counts (conservatively estimated
at one hundred thousand to a million processes~\cite{iesproadmap}),
two aspects of the sorting approaches become concerning.
%
These concerns were also identified by Solomonik and 
Kale (S\&K) who implemented a homogeneous radix and sample
sort and measured results up to 4096 processes, as well
as a histogram-based approach which scaled to 32,768 processes~\cite{charmpp}.
%

%%
\subsection{All-to-All Communication}
The first concern is the all-to-all communication involved in the
key exchange, which is likely to overwhelm the bandwidth of the 
system interconnect.
%
While this was not observed at Keeneland's relatively small scale, it
was observed by S\&K at 4096 processes~\cite{charmpp}.
%
However, as their work shows, optimizations of the
all-to-all operation, such as staging the communication
(to reduce message buffer sizes and network contention)
and overlapping it with computation,
largely mitigate this concern.
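As an illustration of communication staging, the following minimal sketch (an assumption about the general technique, not S\&K's code) replaces a single monolithic all-to-all with $p-1$ rounds in which every rank exchanges with exactly one partner, bounding the number of in-flight buffers:

```python
def staged_all_to_all(send_bufs):
    # send_bufs[src][dst] is the message rank src sends to rank dst.
    # Returns recv_bufs with recv_bufs[dst][src] holding that message.
    p = len(send_bufs)
    recv_bufs = [[None] * p for _ in range(p)]
    for rank in range(p):                 # round 0: keep local data
        recv_bufs[rank][rank] = send_bufs[rank][rank]
    for r in range(1, p):                 # p-1 staged rounds
        for src in range(p):
            dst = (src + r) % p           # one send partner per rank per round;
            recv_bufs[dst][src] = send_bufs[src][dst]
            # each rank also receives from exactly one partner, (dst - r) mod p
    return recv_bufs

bufs = [[(src, dst) for dst in range(4)] for src in range(4)]
exchanged = staged_all_to_all(bufs)
```

In an actual MPI implementation, each round would post one send/receive pair rather than issuing all $p-1$ messages at once.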

%%
\subsection{Memory Capacity}
The second concern is specific to sample sort and involves
the required memory capacity of the root node, which must
sort all the samples to determine the partitions.
This capacity scales with $s \cdot n$ (the sampling rate $s$ times
the number of elements $n$) under random sampling and with
$p(p-1)$ (where $p$ is the number of
processors) under regular sampling~\cite{shi_regularsampling_1992}.
%
Again, this was not a limitation at Keeneland's scale, but 
becomes a limiting factor at larger process counts (memory capacity is
exceeded at 8192 processes in S\&K~\cite{charmpp}).
%
Indeed, for this reason, S\&K favor a histogram-based
approach, which iteratively converges on the partitions,
over sample-based sorting.
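To make the histogram idea concrete, the following is a minimal sketch (not S\&K's implementation): each splitter is refined by repeated global rank probes, where each probe requires only per-processor counts rather than gathering the samples themselves. The function names and the value-domain bisection strategy are illustrative assumptions.

```python
import bisect
import random

def global_rank(sorted_chunks, v):
    # One "histogram probe": each processor counts its local keys <= v;
    # the counts are summed (an allreduce in a real implementation).
    return sum(bisect.bisect_right(c, v) for c in sorted_chunks)

def histogram_splitters(sorted_chunks, p, iters=40):
    # Iteratively converge on p-1 splitter values whose global ranks
    # approximate i*n/p, without gathering any samples on a root node.
    n = sum(len(c) for c in sorted_chunks)
    lo = min(c[0] for c in sorted_chunks)
    hi = max(c[-1] for c in sorted_chunks)
    splitters = []
    for i in range(1, p):
        target, a, b = i * n // p, lo, hi
        for _ in range(iters):        # bisect on the key value domain
            mid = (a + b) / 2
            if global_rank(sorted_chunks, mid) < target:
                a = mid
            else:
                b = mid
        splitters.append(b)
    return splitters

random.seed(1)
keys = [random.random() for _ in range(400)]
chunks = [sorted(keys[i::4]) for i in range(4)]
splits = histogram_splitters(chunks, p=4)
# each splitter's global rank lands within a key or two of i*n/p
```

Only the $p-1$ candidate values and their counts cross the network per iteration, which is why this approach avoids the root-node capacity problem.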


\subsection{Improving Sample Sort}
While this work is primarily concerned with heterogeneity,
it is important to show that sample-based approaches
are not inherently less scalable than histogram sort; if they
were, the relevance of our results would be greatly diminished.
%
We note that the scalability of sample-based sorting can be
increased using recursion, as described in Listing 3.

%%% 
\smallskip
\noindent \textbf{Listing 3: Recursive Regular Sample Sort} \\
\noindent Inputs: $n$ unsorted keys and $C$, the capacity of each of
the $p$ processors.\\
\noindent Output: $n$ sorted keys.
\begin{enumerate}
\item Perform a local sort of all keys on each processor.
\item On each processor, select $p-1$ regularly spaced samples from the sorted keys.
\item If the combined size of all samples is less than $C$, collect them
on the root node.  Otherwise, collect them on a subset of $K$ nodes, and recurse,
with the samples as the new keys and $K$ as the new number of processors.
\item Perform a local sort of all the samples on the first processor.
\item Based on the sorted samples, derive the keyspace partitioning (the set
of samples which will act as ``splitters'') by selecting $p-1$ equally spaced
values, representing the highest allowable value on each processor.
(The final processor accepts keys up to the maximum representable value.)
\item Broadcast the partitioning to all processors, and use the partitioning
to decide the destination for each key.  The destination is the first
processor such that the key is less than or equal
to that processor's partition point.
\item Exchange keys with other processors.
\item Sort keys locally. This is required since keys are not guaranteed to
arrive in sorted order.
\end{enumerate}
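The steps above can be sketched with each ``processor'' simulated as a Python list; the capacity $C$ and subset size $K$ map onto the \texttt{capacity} parameter and the computed \texttt{K} below. This is a sketch of the listing's logic under those assumptions, not a distributed implementation.

```python
import math

def regular_sample_sort(chunks, capacity):
    # chunks[i] holds the keys of simulated processor i; capacity is C.
    p = len(chunks)
    if p == 1:
        return [sorted(chunks[0])]
    chunks = [sorted(c) for c in chunks]                 # step 1: local sort
    samples = []
    for c in chunks:                                     # step 2: regular samples
        step = max(1, len(c) // p)
        samples.extend(c[step - 1::step][:p - 1])
    if len(samples) <= capacity:                         # step 3: fits on root?
        samples = sorted(samples)                        # step 4: sort samples
    else:                                                # ...else recurse on K nodes
        K = math.ceil(len(samples) / capacity)
        parts = [samples[i::K] for i in range(K)]
        samples = [x for b in regular_sample_sort(parts, capacity) for x in b]
    stride = max(1, len(samples) // p)
    splitters = samples[stride - 1::stride][:p - 1]      # step 5: p-1 splitters
    buckets = [[] for _ in range(p)]
    for c in chunks:                                     # step 6: destinations
        for k in c:
            dest = next((i for i, s in enumerate(splitters) if k <= s), p - 1)
            buckets[dest].append(k)                      # step 7: "exchange"
    return [sorted(b) for b in buckets]                  # step 8: final local sort
```

Concatenating the returned buckets yields the globally sorted keys, since every key in bucket $i$ is no greater than splitter $i$ and every key in bucket $i+1$ exceeds it.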

Note that Listings 2 and 3 are essentially the same, with the exception of the
sampling strategy (random vs. regular, respectively) and the use of recursion
in step three. Neither of these changes significantly affects performance on
Keeneland.
%
Regular sampling is used to provide a guarantee on load balance of within
a factor of two of optimal~\cite{shi_regularsampling_1992}, which becomes
more of a concern at scale.
%
With the addition of recursion, sample-based approaches can handle much larger
problem sizes.

%%
Consider the case of $p=100,000$ processors, with the memory capacity of each
processor matching Keeneland (24GB).
%
In a regular sampling scheme, this requires $p(p-1) \cdot 8$
bytes (roughly 80GB, assuming a word size of eight bytes)
of capacity to store the samples.
%
Choosing $K=40$ results in two gigabytes of samples per node
of the subset and a single level of recursion, as the resulting $K(K-1) \cdot 8$ bytes
(only about 12KB) are easily handled on one node.
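The arithmetic behind this example can be checked directly; the eight-byte word size and decimal-gigabyte convention follow the text.

```python
WORD = 8                    # bytes per key, as assumed in the text

p = 100_000                 # processors
K = 40                      # subset size for one level of recursion

root_bytes = p * (p - 1) * WORD       # all regular samples on one root
per_node_bytes = root_bytes // K      # samples per node of the subset
recursed_bytes = K * (K - 1) * WORD   # samples at the next level

print(root_bytes / 1e9)     # ~80 GB: exceeds a 24 GB Keeneland node
print(per_node_bytes / 1e9) # ~2 GB per subset node
print(recursed_bytes / 1e3) # ~12.5 KB: trivially handled by one node
```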

%%
While this example illustrates the potential scalability of sample 
sorting, future work is required to explore the performance tradeoffs of
selecting $K$.
%
Also, further exploration at scale is required to evaluate the
costs of recursion in sample sort versus the required number of
iterations in histogram-based approaches. 
%
Such exploration will be possible pending the availability of 
large-scale heterogeneous machines such as Titan~\cite{titan}.

\section{Conclusions}
Given the advantages of sample sort observed in our
experimental evaluation, we return to our initial 
motivation---which algorithm is the more productive for scientific
computing?
%
In almost all cases, from the socket level to 192 GPUs on Keeneland,
sample sorting exhibits superior performance.
%
However, a pure sampling-based approach has two limitations.

%%%
First, its performance is randomized and varies with irregular data
distributions in a manner which can be difficult to predict.
%
The second limitation is the memory capacity required for
the local sort of the samples on the
root node, but this can be alleviated using recursion as described in
Section \ref{scala}.
%
Future work is required to explore the tradeoffs and performance of
recursive sample sort at large scales.

%%% Architectural
Still, the benefit of GPUs for scientific 
applications~\cite{spmv,osaka,scatter,s3d,dca,f@h}
is also observed in the sample and radix sorts. 
%
The increased throughput due to memory bandwidth surpasses any overhead costs from
PCIe transfer.
%
This advantage is likely to increase in cases where the scientific
application uses the GPU to calculate the data which is being sorted, as overheads
will be further amortized and overlapped by the scientific computation.
%
The performance improvement may also occur in accelerated processing unit (APU)
architectures (such as AMD's Fusion), provided the throughput-oriented cores retain
high memory bandwidth.
