\section{Introduction}
Sorting is one of the most well-studied problems in computer science,
but recent architectural trends are placing new constraints on sorting
algorithms.
%
Projections from the International Exascale Software Project 
(IESP)~\cite{iesproadmap} indicate that an exascale machine will
have limited bandwidth to long-term storage and will require orders of
magnitude increases in node-level concurrency.
%
In the context of sorting, this has two important consequences.  
%
First, the traditional strategy for sorting very large data---sorting
small chunks separately, writing those to disk, then merging them---will
quickly become bottlenecked by bandwidth to storage.
%
And second, each node will likely have two (or more) types of processing
elements to support the increased concurrency, such as elements optimized
for latency, throughput, or fixed functions.

%%%
One motivating example for sorting in this context is \emph{in situ} data
analysis in scientific applications.
%
For instance, a scientist may be interested in a basic statistical 
characterization of a dataset such as the Bowley five-number summary
(i.e., the minimum, the three quartiles, and the maximum).
%
Sorting is also the primary component of ranking, a parallel primitive used
in many other algorithms.
%
These characterizations (and many other more complex analyses) depend on a
fast sorting algorithm.
%
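As a concrete illustration of this dependence, the five-number summary falls
out directly once the data is sorted. The Python sketch below uses a
median-of-halves quartile convention, which is our assumption for
illustration, not a convention taken from the text.

```python
def five_number_summary(keys):
    """Bowley five-number summary via sorting (illustrative sketch).

    Uses the median-of-halves quartile convention; other conventions
    give slightly different quartiles for small datasets.
    """
    s = sorted(keys)
    n = len(s)

    def median(a):
        m = len(a) // 2
        return a[m] if len(a) % 2 else (a[m - 1] + a[m]) / 2

    lower = s[:n // 2]          # keys strictly below the median position
    upper = s[(n + 1) // 2:]    # keys strictly above the median position
    return s[0], median(lower), median(s), median(upper), s[-1]

print(five_number_summary([7, 1, 4, 9, 2, 6, 3]))
```

Once the keys are sorted, each of the five statistics is an $O(1)$ index
lookup, so the sort dominates the cost of the analysis.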

%%%
Furthermore, an efficient sorting implementation also serves as a useful
benchmark.  
%
Theoretical results are known for a wide variety of algorithms and
models, and sorting's high bandwidth requirements can shed light on
bottlenecks in an architecture's datapaths.
%
In addition, the best current implementations are extensively documented
through the annual Sorting Benchmark competition~\cite{sortbenchmark}.
%
These results form an excellent baseline for the comparison of new
algorithms and architectures.
%

%%%
\subsection{Types of Parallel Sorting Algorithms}
While many sorting algorithms have been proposed, our approach was
informed by the three major categories of parallel sorting algorithms 
identified by Blelloch et al.~\cite{blelloch}, briefly summarized here:
%

\paragraph{\textbf{Fixed-Topology}} Fixed-topology refers to sorts which use
a predetermined communication pattern between processors.
%
Popular fixed-topology sorts include bitonic sort and $k$-way mergesort,
currently the most popular technique for external sorting (sorting data
that does not fit into memory).
%
In $k$-way mergesort, keys are divided into chunks which fit into a processor's
memory and individually sorted in parallel, then written back to disk.
%
Subsequently, sets of $k$ chunks are merged recursively.
%
This approach has traditionally been very successful on large problems
(it underlies the leading entries in the Sorting Benchmark competition),
but it relies on high bandwidth to storage from all processors, which is
unlikely to be available in an exascale machine.
%
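The merge phase described above can be sketched in a few lines. In the
in-memory Python below, each list stands in for a locally sorted chunk on
disk, and the function name \texttt{kway\_merge} is our illustrative choice.

```python
import heapq

def kway_merge(runs, k=4):
    """Recursively merge sorted runs k at a time, as in external
    k-way mergesort (in-memory illustrative sketch)."""
    while len(runs) > 1:
        merged = []
        for i in range(0, len(runs), k):
            # heapq.merge lazily merges up to k sorted sequences,
            # mirroring one streaming pass over k chunks on disk
            merged.append(list(heapq.merge(*runs[i:i + k])))
        runs = merged
    return runs[0] if runs else []

# Each run plays the role of a sorted chunk previously written to disk.
runs = [[1, 5], [2, 8], [3, 7], [4, 6], [0, 9]]
print(kway_merge(runs, k=4))
```

Each round of the loop corresponds to one read-merge-write pass over
storage, which is where the bandwidth bottleneck noted above arises.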

\paragraph{\textbf{Distribution}} Distribution (or partitioning) sorts select
a subset of the keys (i.e., the elements to be sorted) to determine how data will
be distributed among processors.
%
Well-known distribution sorts include quicksort and sample sort (the foundation
of one of our approaches).
%
Typically, the advantage of distribution sorts derives from their ability
to quickly divide and conquer large problems.
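A serial sketch of sample sort's partitioning step follows; the parameter
names \texttt{p} and \texttt{oversample} are our illustrative choices, not
terminology from the text.

```python
import bisect
import random

def sample_sort(keys, p=4, oversample=8):
    """Sample sort sketch: splitters chosen from a random sample
    partition the key space so each of p 'processors' receives a
    contiguous key range (serial illustration)."""
    sample = sorted(random.sample(keys, min(len(keys), p * oversample)))
    # p - 1 splitters divide the key space into p buckets
    splitters = [sample[(i + 1) * len(sample) // p] for i in range(p - 1)]
    buckets = [[] for _ in range(p)]
    for key in keys:
        buckets[bisect.bisect_right(splitters, key)].append(key)
    # Each bucket would be sorted locally on its own processor
    return [x for b in buckets for x in sorted(b)]

data = random.sample(range(1000), 100)
assert sample_sort(data) == sorted(data)
```

Oversampling makes the splitters better approximate the true quantiles of
the keys, which in turn keeps the bucket sizes balanced.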

\paragraph{\textbf{Explicit Counting}} Counting sorts identify the occurrences
of each possible value of a key, then rank each element using an indexing
calculation that typically consists of prefix sums.
% 
This distinguishes them from comparison-based sorting and allows them to
achieve $O(n)$ complexity (for fixed-width keys) rather than the
$O(n\log{n})$ lower bound for comparison sorts.
%
A prevalent counting sort is radix sort, which exploits the binary
representation of data to achieve high performance, especially in 
GPUs~\cite{duane}. 
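A minimal least-significant-digit radix sort makes the count-then-prefix-sum
indexing concrete; the sketch below assumes non-negative fixed-width integer
keys and is illustrative rather than a description of any GPU implementation.

```python
def radix_sort(keys, bits=32, radix_bits=8):
    """LSD radix sort sketch: each pass histograms one digit,
    converts counts to offsets with an exclusive prefix sum,
    then scatters keys stably into their new positions."""
    radix = 1 << radix_bits
    for shift in range(0, bits, radix_bits):
        counts = [0] * radix
        for key in keys:                      # histogram this digit
            counts[(key >> shift) & (radix - 1)] += 1
        offsets, total = [], 0
        for c in counts:                      # exclusive prefix sum
            offsets.append(total)
            total += c
        out = [0] * len(keys)
        for key in keys:                      # stable scatter
            d = (key >> shift) & (radix - 1)
            out[offsets[d]] = key
            offsets[d] += 1
        keys = out
    return keys

print(radix_sort([170, 45, 75, 90, 2, 802, 24, 66]))
```

Because every pass is a histogram, a prefix sum, and a scatter, each maps
naturally onto massively parallel hardware, which is one reason radix sort
performs well on GPUs.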

%%%
In practice, many of the best parallel sorting implementations are hybrids.
%
At a minimum, most approaches tend to have at least two components: a 
local sorting primitive, which is a routine for sorting data small enough
to fit in the memory of a single processor, and a distributed strategy,
which determines how processors will communicate and divide work.
%
Our approaches follow this trend; both use a radix sort as the local
sorting primitive.  The first approach also uses radix sort as its 
distributed strategy, whereas the second uses a sampling approach
for key distribution.

%%%
\subsection{Sorting Criteria}
Before discussing the approaches in detail, it is important to note some
criteria for evaluating sorting implementations, which extend beyond the
traditional measures of speed and scalability:
\begin{itemize}

\item \textbf{Randomized Behavior:} Many approaches to sorting employ
randomized algorithms, which makes their performance more difficult to
predict and can result, in the worst case, in extremely inconsistent
runtimes.

\item \textbf{Stability:} A ``stable'' sort preserves the relative
ordering of keys with identical values, which is important for some
analyses.

\item \textbf{Load Balance:} In some schemes, processors are assigned
differing numbers of keys or uneven amounts of work.
This can affect performance or result in some
processors completing the algorithm with many more keys than others.

\item \textbf{Sensitivity to Key Distribution:} The performance
of some sorting algorithms varies widely based on the statistical
distribution of the keys.  Although this can manifest through a
load imbalance, it can also impact performance in other ways.


\end{itemize}

