\section{Related Work}

Distributed sorting algorithms have been studied for decades.
%
Bitonic sort~\cite{bitonic_orig,bitonic_kim,bitonic_parallel},
distribution sort~\cite{Nodine:1993:DDS:165231.165247},
radix sort~\cite{Lee2002656},
and sample sort~\cite{huang_chow_sample_sort_1983}
have all been implemented and studied for distributed
memory parallel computers.
%
GPUs, as parallel processors, can be viable
targets for these types of algorithms as well;
see bitonic~\cite{bitonic_gpugems},
radix~\cite{duane},
sample~\cite{5470444},
and hybrid combinations~\cite{gputerasort,Sintorn20081381}.
%
However, these algorithms have generally
been restricted to a single GPU or, at best,
multiple GPUs within a shared-memory host,
and have not leveraged both the data-level
parallelism of graphics processors and the
distributed-memory parallelism of large-scale
heterogeneous systems.
%
Characterizing the benefits of exploiting both of these levels of
parallelism is one of the main contributions of our work.
%

%%
Other work by Shi and Reif provides theoretical bounds on the number
of samples required for load 
balance~\cite{reif_logarithmic_1987,shi_regularsampling_1992}.
%
These bounds provide a more formal examination of sampling and supplement
our empirical measurements on varying the oversampling ratio.
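The oversampling ratio referenced above controls how many sample keys each processor contributes before the global splitters are chosen. As a rough serial sketch of this selection step (the function name, parameters, and structure here are our own illustration, not taken from the cited works):

```python
import random

def choose_splitters(local_data_per_proc, oversample_s):
    """Pick p-1 splitters by oversampling: each of the p
    'processors' contributes oversample_s random samples; the
    pooled samples are sorted and every oversample_s-th element
    becomes a splitter."""
    p = len(local_data_per_proc)
    samples = []
    for data in local_data_per_proc:
        # each processor draws oversample_s keys from its local data
        samples.extend(random.sample(data, oversample_s))
    samples.sort()
    # take every oversample_s-th pooled sample, yielding p-1 splitters
    return [samples[i * oversample_s] for i in range(1, p)]
```

A larger oversampling ratio tightens the expected load balance across the resulting buckets at the cost of a larger sample-sorting step, which is the trade-off the cited bounds formalize.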

%%
Our work is closest, however, to the distributed sorting evaluation by
Solomonik and Kale~\cite{charmpp}, who evaluate several 
distributed sorting algorithms (including radix and 
sample-based approaches) on a distributed, homogeneous machine using 
the {Charm++} runtime system.  They propose a new
histogram-based sorting method and favor it over the
other approaches.
Interestingly, by exploring two novel extensions to the
algorithm, we arrive at different conclusions about the
scalability of sample-based sorting, discussed further in
Section~\ref{scala}.

Preliminary results from this sorting work appeared at the PPAC workshop at IEEE Cluster~\cite{ppac}. This paper adds substantial new content, including a second algorithm and a more rigorous CPU/GPU comparison.