\section{Approach}
Our approaches are motivated by the example of a distributed scientific
application.
%
In more precise terms, we constrain the relevant problem sizes to those that
are much larger than the memory of a single node but do not exceed
the aggregate memory of all nodes.
%
Furthermore, the use of any bandwidth to external storage is avoided,
as we surmise this limited resource will be used for other purposes
by the application.
%
This problem size for sorting highlights a ``missing middle'' between 
the smaller sizes thoroughly covered by the architecture community and 
larger problems dubbed ``external sorting'' and well studied in the database
literature. 

%%%
Both of our approaches use radix sort as the local sorting primitive.
To distinguish between our two approaches, we therefore refer to them
by their distribution strategy, namely \emph{radix} or \emph{sample}.

\subsection{Approach 1: Distributed LSB Radix Sort}
As radix sort is used extensively in this approach, we first cover it
more formally.

%%%
Radix sort is an iterative process which repeatedly partitions keys,
in a stable fashion, $r$ bits at a time, starting with the least significant.
%
These bits are known as a digit or radix, and $r$ is known as the radix width.
%
$r$ is usually fixed, and, given an input of $n$ $k$-bit keys (and any
associated values), radix sort will require $\lceil \frac{k}{r} \rceil$ passes.
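As a minimal illustration of this definition (a sketch, not the implementation evaluated in this work; the function name and digit width are chosen for the example), a sequential LSB radix sort performs one stable counting pass per $r$-bit digit:

```python
def lsb_radix_sort(keys, k=16, r=4):
    """Stable LSB radix sort: ceil(k/r) passes over r-bit digits,
    starting with the least significant digit."""
    mask = (1 << r) - 1
    for shift in range(0, k, r):
        # Stable partition: bucket keys by the current digit,
        # preserving their relative order within each bucket.
        buckets = [[] for _ in range(1 << r)]
        for key in keys:
            buckets[(key >> shift) & mask].append(key)
        keys = [key for bucket in buckets for key in bucket]
    return keys
```

Because each pass is stable, ordering established by earlier (less significant) digits is preserved while later passes refine it.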
%
We consider the parallel case with a number of processors, $p$.
%
$p$ is, more specifically, the number of shared memory processing domains
(on our experimental platform, both CPUs and GPUs have more than one processing
element).
%
The first approach is illustrated in Figure~\ref{fig:alg_example}
and detailed in Listing 1.

%%% 
\smallskip
\noindent \textbf{Listing 1: LSB Distributed Radix Sort} \\
\noindent Inputs: $n$ unsorted keys and the radix width $r$.\\
\noindent Output: $n$ sorted keys.\\
For each $r$-bit digit, starting with the least significant:
\begin{enumerate}
\item On each processor, compute a local histogram, $H_{\textrm{Local}}$, which
counts the occurrences of each of the $2^r$ possible values of the current
digit.
\item Perform a global, exclusive scan across all ranks' histograms to
compute $H_{\textrm{PriorRank}}$.  For each $i$ (where $0 \le i < 2^r$), 
$H_{\textrm{PriorRank}}[i]$ now contains the number of occurrences of keys with digit
value $i$ on all ``lower'' ranks.
\item On the highest rank processor, perform a local exclusive scan on
$H_{\textrm{Local}} + H_{\textrm{PriorRank}}$ to compute $H_{\textrm{PriorDigit}}$.  $H_{\textrm{PriorDigit}}[i]$
now contains the number of occurrences of all keys with digit value less than
$i$ across all ranks.
\item Broadcast $H_{\textrm{PriorDigit}}$ to all ranks.  Within each rank, 
$H_{\textrm{PriorRank}}[i] + H_{\textrm{PriorDigit}}[i]$ is now the global position where that
rank will start placing any keys with digit value $i$.
\item As each rank may have more than one key containing digit value $i$, we
locally calculate $C_{\textrm{Digit}}$, where $C_{\textrm{Digit}}[j]$ is the number of keys
preceding the $j^{th}$ key on this rank that share its digit value $i$.
\item Explicitly calculate each key's final global position. For the $j^{th}$
key, whose digit value is $i$, its destination position is $H_{\textrm{PriorRank}}[i]
+ H_{\textrm{PriorDigit}}[i] + C_{\textrm{Digit}}[j]$.
\item Perform a global key exchange and a local sort to place each key into 
its calculated final position.
\end{enumerate}
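The steps above can be sketched in plain Python, simulating the $p$ ranks as lists of keys (all names such as \texttt{distributed\_radix\_pass} and \texttt{H\_prior\_rank} are illustrative and simply mirror Listing 1; the global key exchange is modeled by re-slicing a single global array):

```python
def distributed_radix_pass(ranks, shift, r):
    """One pass of Listing 1; `ranks` is a list of key lists
    simulating p shared-memory processing domains."""
    p, R = len(ranks), 1 << r
    digit = lambda key: (key >> shift) & (R - 1)

    # Step 1: local histogram of the 2^r digit values on each rank.
    H_local = [[0] * R for _ in range(p)]
    for rk, keys in enumerate(ranks):
        for key in keys:
            H_local[rk][digit(key)] += 1

    # Step 2: exclusive scan across ranks. H_prior_rank[rk][i] counts
    # occurrences of digit i on all "lower" ranks.
    H_prior_rank = [[0] * R for _ in range(p)]
    for i in range(R):
        running = 0
        for rk in range(p):
            H_prior_rank[rk][i] = running
            running += H_local[rk][i]

    # Steps 3-4: on the highest rank, exclusive-scan the global digit
    # totals to get H_prior_digit, then "broadcast" it to all ranks.
    totals = [H_prior_rank[p - 1][i] + H_local[p - 1][i] for i in range(R)]
    H_prior_digit, running = [0] * R, 0
    for i in range(R):
        H_prior_digit[i] = running
        running += totals[i]

    # Steps 5-6: a running per-digit count C_digit gives each key its
    # final global position.
    n = sum(len(keys) for keys in ranks)
    out = [None] * n
    for rk, keys in enumerate(ranks):
        C_digit = [0] * R
        for key in keys:
            d = digit(key)
            out[H_prior_digit[d] + H_prior_rank[rk][d] + C_digit[d]] = key
            C_digit[d] += 1

    # Step 7: the key exchange, modeled by re-slicing the global array
    # evenly back onto the ranks (assumes p divides n).
    per = n // p
    return [out[rk * per:(rk + 1) * per] for rk in range(p)]

def distributed_radix_sort(ranks, k=4, r=2):
    """Apply ceil(k/r) passes, least significant digit first."""
    for shift in range(0, k, r):
        ranks = distributed_radix_pass(ranks, shift, r)
    return ranks
```

Running this on the example of Figure~\ref{fig:alg_example} (twelve four-bit keys on three processors, $r = 2$) reproduces the two passes shown there.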

\begin{figure}[!h]
\vspace{1em}
\begin{center}
\includegraphics[scale=0.308]{01-narrative/figs/alg_summary3_cropped.pdf}
\caption{The full radix-based sorting algorithm.
%
This example shows the steps in sorting twelve four-bit keys distributed
among three processors using a radix width, $r$, of two bits.
%
Hence, local storage for each histogram array will be four ($2^r$) entries,
and two passes ($\frac{4}{r}$) are required for four-bit keys.
%
Note that in this figure, the second pass condenses steps 1-4 to 
conserve space.
%
A detailed description of each step enumerated in this figure is provided in
Listing 1.
}
\label{fig:alg_example}
\end{center}
\end{figure}

Radix sort, like other distributed sorting algorithms, has many trade-offs.
One desirable property of the variant discussed here is its linear 
$O(n)$ asymptotic bound for work, given a fixed radix width.  It also
exhibits a high degree of parallelism: keys can be 
processed in a data parallel fashion, and synchronization among processors
is only required for scans during the indexing phase and the key 
exchange.
%
The depth, or length of the critical path, of each pass is also bounded, with a $\log_2 \frac{n}{p}$ term for 
the local histogramming, $\log_2 n$ for the global scan, and $\log_2 p$ to scan
the per-process totals.  
%%%
Furthermore, radix width can be chosen to minimize the number of passes,
decreasing the total depth, increasing arithmetic intensity, 
and decreasing communication among processors, at the cost of larger histograms.
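For concreteness (the key and radix widths below are illustrative, not measurements from this work), the trade-off follows directly from
\[
\text{passes} = \left\lceil \frac{k}{r} \right\rceil, \qquad \text{histogram entries} = 2^r,
\]
so for $k = 64$-bit keys, $r = 8$ yields 8 passes with 256-entry histograms, while $r = 16$ halves the passes to 4 at the cost of $65{,}536$-entry histograms.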

The major drawback to a distributed radix sort is its high requirements 
for memory and interconnect bandwidth.
%
In each pass, keys must be read from memory once and, in the worst case,
may move globally (although this is data dependent).
%
Depending on the width of the key and radix, this could result in 
substantially more data movement than a comparison-based method such
as merge sort or sample sort.


%%%
And, as memory bandwidth decreases relative to CPU throughput,
these data movement constants play an increasing role in the actual runtime
of sorting algorithms.
%
Indeed, Satish et al. project that, for some key/radix configurations,
single-node radix sort should be abandoned in favor of merge
sort~\cite{intel} for the foreseeable future.
%
It remains to be seen, however, if the guaranteed (but high) bandwidth
requirements of radix sort will be preferable over the decreased parallelism of merge sort at
the exascale.
%
In other words, for large enough $n$, the merges used by comparison-based methods
may have similar bandwidth requirements to radix sort, but exhibit less parallelism.

Our second approach, a distributed sample sort, sacrifices many of radix sort's
desirable qualities in order to minimize data movement.
		
			
\subsection{Approach 2: Distributed Sample Sort}
\label{sec:sample}		
In contrast to radix sort, sample sort partitions keys and sorts
them in a single pass, as illustrated in
Figure~\ref{fig:sample_example} and below:

%%% 
\smallskip
\noindent \textbf{Listing 2: Distributed Sample Sort} \\
\noindent Inputs: $n$ unsorted keys and $s$, the number of samples to take  
on each of the $p$ processors.\\
\noindent Output: $n$ sorted keys.
\begin{enumerate}
\item On each processor, collect $s$ random samples of the keys.
\item Gather all the samples on the first processor.
\item Perform a local sort of all the samples on the first processor.
\item Based on the sorted samples, derive the keyspace partitioning (the set
of samples which will act as ``splitters'') by selecting $p-1$ equally spaced
values, representing the highest allowable value on each processor.
(The final processor accepts keys up to the maximum representable value.)
\item Broadcast the partitioning to all processors, and use the partitioning
to decide the destination for each key.  The destination is the first
processor such that the key is less than or equal
to that processor's partition point.
\item Exchange keys with other processors.
\item Sort keys locally. This is required since keys are not guaranteed to
arrive in sorted order.

\end{enumerate}
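Listing 2 can likewise be sketched in Python, again simulating the $p$ ranks as lists of keys (the function name \texttt{distributed\_sample\_sort}, the fixed seed, and the even-spacing arithmetic are assumptions of this sketch, not details from the evaluated implementation):

```python
import random

def distributed_sample_sort(ranks, s=4, seed=0):
    """Sketch of Listing 2; `ranks` is a list of key lists
    simulating p processors."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    p = len(ranks)

    # Steps 1-3: draw s samples per rank, gather them on the "root"
    # processor, and sort them there.
    samples = sorted(key for keys in ranks
                     for key in rng.sample(keys, min(s, len(keys))))

    # Step 4: choose p-1 equally spaced splitters; splitter i is the
    # highest value rank i will accept (assumes len(samples) >= p).
    step = len(samples) // p
    splitters = [samples[(i + 1) * step - 1] for i in range(p - 1)]

    # Steps 5-6: route each key to the first rank whose splitter is
    # >= the key; keys above every splitter go to the last rank.
    out = [[] for _ in range(p)]
    for keys in ranks:
        for key in keys:
            dest = next((i for i, sp in enumerate(splitters) if key <= sp),
                        p - 1)
            out[dest].append(key)

    # Step 7: local sort on every rank, since keys arrive unordered.
    return [sorted(keys) for keys in out]
```

Concatenating the per-rank outputs in rank order yields a fully sorted sequence, though (unlike the radix-based approach) the per-rank counts depend on how well the samples represent the key distribution.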

\begin{figure}[!h]
\begin{center}
\includegraphics[scale=0.306]{01-narrative/figs/samplesort_algorithm_crop.pdf}
\caption{The distributed sample sorting algorithm.
%
This example shows the sample sort algorithm operating on the same problem as
Figure~\ref{fig:alg_example} including the random sampling (steps 1-3),
partitioning (steps 5-6), and the final local sort (step 7).
%
In steps 1-3, keys are colored based on what processor they originated on.
%
In steps 5-7, they are re-colored based on their destination.
%
A detailed description of each step shown in this figure is given in
Listing 2.
}
\label{fig:sample_example}
\end{center}
\end{figure}

The drawback to distributing keys in one pass is load imbalance.
%
If the keyspace partitioning is not representative of the data, some
nodes will receive far more keys than others and performance will become
bottlenecked as all processors wait on those with more work.
%
In extreme cases, some nodes may be assigned more keys than can fit within
their memory capacity.
%
When this happens, the algorithm must resample, or fail and fall back to an
alternative approach.

%
The algorithm can also fail if the total number of samples exceeds the memory
capacity of the root node.
%
In general, lowering the number of samples will reduce the bandwidth needed
to gather them on the root node and the time required to sort them.  However, if
the number of samples becomes too low, they are no longer representative
of the actual key distribution, and a load imbalance can occur.
%
On the other hand, when the keyspace partitioning is good, the bandwidth
requirements of sample sort are far lower than those of radix sort (by a
constant factor proportional to radix sort's number of passes).

%%%
Another salient feature of sample sort is its reliance on randomness,
which can make its behavior difficult to predict
or reproduce---a serious drawback for some use cases.

%%%
Sample sort also benefits from its relatively low cost: $O(s)$ work per
processor to collect the samples, $O(sp)$ to sort the gathered samples on the
root node (using a local radix sort), and two additional $O(n)$
operations---calculating the destination rank for each key and the final local
radix sort on each processor.

		
\subsection{Local Sorting Primitive}
While a full review of sorting algorithms for single processors
is beyond the scope of this paper, we briefly note that Leischner 
et al. recently analyzed the performance of sample sort on a single 
graphics processor~\cite{5470444}, documenting the fastest known 
in-memory, comparison-based sort.  
%
Merrill et al. improve upon this result for some data types by using a radix
sort~\cite{duane} built on top of an extremely efficient prefix sum. 
%
Both of our approaches exploit radix sort as the local sorting primitive,
with the GPU versions directly using Merrill's implementation through the
open-source Thrust library.
