\section{Results}

\subsection{Socket Level}
Before incorporating the complexities of heterogeneity, we measured
the performance of two implementations of radix and sample sort on a 
single CPU socket for sorting 128 million key-value pairs, as 
shown in Figure~\ref{fig:thrust-omp}.  The first implementation uses 
Thrust along with MPI; the second is a hand-tuned 
OpenMP version.

The goal of this measurement was to assess the tradeoff of using Thrust 
compared to hand-tuned code. We observed particularly poor performance for 
the CPU Thrust implementation of radix sort---some of the cost is due 
to the overhead of using MPI, but the bulk stems from the lack 
of primitives such as binning and merging that an efficient 
radix implementation requires. To compensate for these missing operations 
while still exploiting the productivity of Thrust, the Thrust 
implementation performs a full local sort, which is more expensive.
%
This additional work is compounded by the number of passes required for a radix
sort, resulting in a substantial performance loss: an 11.6$\times$ slowdown 
compared to OpenMP.  Performance was much closer for sample sort, with the 
OpenMP version only 54\% faster. 
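As a rough illustration of why this fallback is costly, consider the following serial sketch in which each radix pass is implemented as a full comparison sort on the current digit; the function name and structure are ours for illustration, not the paper's measured Thrust code:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative sketch (not the measured implementation): an LSD radix
// sort in which each pass, lacking a dedicated binning primitive,
// falls back to a full stable sort keyed on the current digit.
// A hand-tuned version would instead histogram digits and scatter,
// paying O(n) per pass rather than O(n log n).
std::vector<uint64_t> radix_sort_via_full_sorts(std::vector<uint64_t> keys,
                                                int radix_bits) {
    const int passes = (64 + radix_bits - 1) / radix_bits;
    const uint64_t mask = (1ull << radix_bits) - 1;
    for (int p = 0; p < passes; ++p) {
        const int shift = p * radix_bits;
        // Full sort on the digit alone; stability preserves the
        // ordering established by earlier passes.
        std::stable_sort(keys.begin(), keys.end(),
                         [=](uint64_t a, uint64_t b) {
                             return ((a >> shift) & mask) <
                                    ((b >> shift) & mask);
                         });
    }
    return keys;
}
```

The per-pass $O(n \log n)$ work, repeated over every pass, is consistent with the large slowdown observed relative to a binning-based implementation.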

\begin{figure}[htbp]
\begin{center}
\begin{minipage}[t]{0.45\textwidth}\centering\includegraphics[width=\textwidth]{01-narrative/figs/thrust-crop.pdf}
\end{minipage}
\caption{A performance comparison of Thrust and hand-tuned OpenMP running
on a single CPU socket (6 cores, 1024MB of 64-bit key-value pairs)
for uniform random data.}
\label{fig:thrust-omp}
\end{center}
\end{figure}

We next compared the performance of the OpenMP versions on a single Westmere 
socket to Thrust running on a single GPU.
%
These results, as seen in Figure~\ref{fig:cpu-gpu}, are more varied.

Based on peak memory bandwidth alone, one would expect the GPU to be roughly 3.9$\times$ faster.
%
However, overheads from PCIe transfers and the limits on parallel speedup
described by Amdahl's law substantially reduce this advantage.
%
In fact, for radix sort, the overhead of transferring data (at every pass) 
over PCIe is enough to negate any advantage from the GPU's faster memory. The GPU 
retains the advantage for sample sort, where it is about twice as fast as the CPU.
%=========================================================================

\begin{figure}[htbp]
\centering
\begin{minipage}[t]{0.45\textwidth}\centering\includegraphics[width=\textwidth]{01-narrative/figs/cpu-gpu-comparison-crop.pdf}
\end{minipage}
\caption{A socket-to-socket performance comparison of the hand-tuned OpenMP CPU
implementations and the Thrust code running on a GPU (1024MB of 
64-bit key-value pairs) for uniform random data.
%
PCIe data transfer costs are included in the measurement.}
\label{fig:cpu-gpu}
\end{figure}

\subsection{Radix vs Sample Sort}

%=========================================================================
\begin{figure*}[htbp]
\centering
%-------------------------------------------------------------------------------------------
\begin{minipage}[t]{0.45\textwidth}\centering\includegraphics[width=\textwidth]{01-narrative/figs/cpu_radix_sample_uniform.pdf}\par(a) Uniform \end{minipage}
\begin{minipage}[t]{0.45\textwidth}\centering\includegraphics[width=\textwidth]{01-narrative/figs/cpu_radix_sample_sorted.pdf}\par(b) Sorted\end{minipage}
%-------------------------------------------------------------------------------------------
\caption{\label{fig:rs_scaling}
Scaling of OpenMP radix sort and OpenMP+MPI sample sort across two 6-core CPUs in a single node for uniform and sorted distributions (768MB of 64-bit key-value pairs).  Results are reported as number of threads times the wall-clock runtime; perfect parallel efficiency would be represented by a horizontal line.
}
\end{figure*}
%=========================================================================

We then compared the performance and scaling of the CPU radix and sample sorts 
on a single node.
%
Since sample sort bins the data based on the selected splitters, it 
performs an extra read and write pass over the data beyond the
actual radix sort performed on each core.
%
These memory operations, together with the time required for the MPI 
all-to-all communication that distributes data to the appropriate cores, 
account for the overhead over the OpenMP radix sort illustrated in 
Figure~\ref{fig:rs_scaling}.  
%
Four threads appear to be sufficient to saturate the memory bandwidth
of a socket in both algorithms.
%
Further, as we scale to two sockets, the need for radix sort to relocate data
between CPUs after each radix pass introduces communication time that quickly
outpaces the linear overhead exhibited by sample sort; thus scaling across 
sockets is better for sample sort.
%
As such, though radix sort shows better single socket performance, we observe
that sample sort is more appropriate for scaling to large systems.
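The extra binning pass described above can be sketched as follows; this serial C++ fragment is our own illustration (the function name is hypothetical), showing that each key incurs one read to locate its bucket via the splitters and one write into a per-bucket buffer that would feed the subsequent MPI all-to-all:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative sketch (ours, not the measured code) of sample sort's
// binning pass: each key is read once to find its destination bucket
// via binary search over the sorted splitters, then written once into
// a per-bucket send buffer.
std::vector<std::vector<uint64_t>>
bin_by_splitters(const std::vector<uint64_t>& keys,
                 const std::vector<uint64_t>& splitters) {
    // p splitters define p+1 destination buckets.
    std::vector<std::vector<uint64_t>> buckets(splitters.size() + 1);
    for (uint64_t k : keys) {
        // Bucket index = number of splitters <= k.
        size_t b = std::upper_bound(splitters.begin(), splitters.end(), k)
                   - splitters.begin();
        buckets[b].push_back(k);  // the extra write pass
    }
    return buckets;
}
```

In the distributed implementation, the bucket sizes would become the send counts for the all-to-all exchange; here the point is simply the additional full traversal of the data.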

\subsection{Baseline Parallel Experiment}
Given the advantages of the GPU implementations at the socket level, we next
measure their performance for a small parallel run (24 GB on eight nodes) 
to establish a baseline for varying other parameters (oversampling ratio,
data distribution, etc.).  These results are shown in Figure~\ref{fig:baseline} using a 
radix width of sixteen bits and a sampling rate of 0.01\%.
%
\begin{figure}[htbp]
\centering
\begin{minipage}[t]{0.45\textwidth}\centering\includegraphics[width=\textwidth]{01-narrative/figs/baseline-crop.pdf}
\end{minipage}
\caption{Baseline performance of the distributed GPU sorting algorithms. The
performance of each version is broken down into the major algorithmic 
components.
%
Results are shown using eight fully populated Keeneland nodes (24 GPUs) sorting
24GB of 64-bit key-value pairs, with a uniform random distribution.
}
\label{fig:baseline}
\end{figure}
%
First, we observe that sample sort preserves the advantages measured at the 
socket level and is 3.34$\times$ faster than radix sort overall.
%
In general, the performance gap for each component of the algorithms
is roughly proportional to the difference in the number of passes (four for radix
sort, one for sample sort).
%
Also, MPI communication now accounts for a significant amount of total runtime,
about 40\% for radix sort and 33\% for sample sort.

	
\subsection{Problem Size Scaling}
In the next set of measurements, we varied the number of keys per rank in the
baseline configuration to assess the change in performance as the problem size
increased.
%
Results are shown in Figure~\ref{fig:size-scaling}.
%
Execution time scales linearly with problem size, consistent with expectations.
%
Furthermore, the two operations that constitute the majority of sample sort
runtime, the key exchange and GPU kernel execution, represent a constant
fraction of runtime at all problem sizes---about 33\% and 44\%, respectively.


\begin{figure*}[htbp]
\centering
\begin{minipage}[t]{0.48\textwidth}\centering\includegraphics[width=\textwidth]{01-narrative/figs/radix-size-crop.pdf}\end{minipage}
\begin{minipage}[t]{0.48\textwidth}\centering\includegraphics[width=\textwidth]{01-narrative/figs/sample-size-crop.pdf}\end{minipage}
\caption{This figure depicts the linear increase in execution time as problem
size is increased from 8MB to 512MB per GPU using the same configuration as
the baseline experiment (eight nodes, three GPUs per node, radix width of 16 
bits, 0.01\% sampling).
}
\label{fig:size-scaling}
\end{figure*}


\subsection{Effect of Data Distribution}
The performance of some sorting algorithms can vary based on the distribution
of the keys.
%
Using the same configuration as the parallel baseline, we tested four 
additional distributions suggested by Blelloch~\cite{blelloch}.  Results are
shown in Table~\ref{tab:dist} for the following distributions:

\begin{itemize}
\item \textbf{Already Sorted:} Keys are initialized in ascending order, with 
rank 0 containing ${ 0 \dots \frac{n}{p}-1}$, rank 1 containing 
${\frac{n}{p} \dots \frac{2n}{p}-1}$, and so on.
\item \textbf{Reverse Order:} The opposite of already sorted data, keys are 
initialized starting at $n-1$ on the first processor and continue in descending order.
\item \textbf{Cyclic Sorted:} Each rank has an identical set of 
keys, $0, 1, \dots, \frac{n}{p}-1$.
\item \textbf{Cyclic Reverse:} Again, each rank has an identical set of keys,
now in descending order: $\frac{n}{p}-1, \frac{n}{p}-2, \dots, 0$.
\end{itemize}
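The four distributions above can be generated with straightforward per-rank initializers; the following serial C++ sketch uses our own (hypothetical) function names, with \texttt{n\_local} denoting the $n/p$ keys held by each rank:

```cpp
#include <cstdint>
#include <numeric>
#include <vector>

// Hypothetical generators (names are ours) for the four non-random
// test distributions; n_local = n/p keys per rank.
std::vector<uint64_t> already_sorted(uint64_t rank, uint64_t n_local) {
    std::vector<uint64_t> k(n_local);
    std::iota(k.begin(), k.end(), rank * n_local);  // rank r holds [r*n/p, (r+1)*n/p)
    return k;
}
std::vector<uint64_t> reverse_order(uint64_t rank, uint64_t n_local,
                                    uint64_t n_total) {
    std::vector<uint64_t> k(n_local);
    for (uint64_t i = 0; i < n_local; ++i)
        k[i] = n_total - 1 - (rank * n_local + i);  // descending from n-1
    return k;
}
std::vector<uint64_t> cyclic_sorted(uint64_t n_local) {
    std::vector<uint64_t> k(n_local);
    std::iota(k.begin(), k.end(), 0);               // identical on every rank
    return k;
}
std::vector<uint64_t> cyclic_reverse(uint64_t n_local) {
    std::vector<uint64_t> k(n_local);
    for (uint64_t i = 0; i < n_local; ++i)
        k[i] = n_local - 1 - i;                     // identical, descending
    return k;
}
```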

\begin{table}[!t]
\centering
\begin{tabular}{| c | c | c |}
\hline
Algorithm              & Distribution   & Improvement Over Random \\ \hline
\multirow{4}{*}{Radix} & Already Sorted & 1.46\% \\
                       & Reverse Order  & 0.52\% \\
                       & Cyclic Sorted  & 1.94\% \\
                       & Cyclic Reverse & 1.03\% \\ \hline
\multirow{4}{*}{Sample}& Already Sorted & 24.4\% \\
                       & Reverse Order  & 20.04\% \\
                       & Cyclic Sorted  & 5.82\% \\
                       & Cyclic Reverse & 6.84\% \\
\hline
\end{tabular}
\caption{GPU Algorithm Sensitivity to Data Distribution}
\label{tab:dist}
\end{table}

Radix sort performance is roughly constant across all distributions, with 
runtime varying less than 2\%.
%
This is expected: since radix sort considers only sixteen bits in each pass,
all of the non-random distributions are effectively cyclic.
%
That is, there is a cycle in the relevant bits every $2^{16}$ elements, requiring
communication with most other nodes.
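The periodicity argument can be made concrete with a small helper (our own illustration, not the paper's code) that extracts the digit examined in a given pass:

```cpp
#include <cstdint>

// Illustrative helper (ours): the digit examined in pass `pass` of an
// LSD radix sort with `width`-bit digits.
inline uint64_t radix_digit(uint64_t key, int pass, int width) {
    return (key >> (pass * width)) & ((1ull << width) - 1);
}
// For already-sorted keys 0, 1, 2, ..., the pass-0 digit repeats with
// period 2^16 (for width = 16), so every rank still holds keys
// destined for every bucket -- just as in the cyclic distributions.
```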


%
Sample sort, however, exhibits a substantial performance improvement for already
sorted and reverse order data.
%
Profiling indicates that this difference comes solely from a reduction
in the time required for the key exchange.
%
This improvement is not surprising, since these orderings require less
communication.
%
For already sorted data (assuming a reasonably accurate keyspace 
partitioning), most keys will remain on their current node.
%
Similarly, for reverse order data, each node will communicate primarily
with one other node (the first will communicate with the last, etc.),
minimizing the number of messages and complexity in the communication patterns.
%
In contrast, for random and both cyclic distributions, nodes are required to send
messages to most other nodes.
%


\subsection{Effect of Algorithm Parameters}
\subsubsection{Radix Width}
Recall that the theoretical analysis indicated that the radix width 
is an important parameter for increasing arithmetic intensity and decreasing
data transfer by controlling the number of passes.
%
As the radix width increases, the number of passes decreases, reducing
PCIe, MPI, and kernel time through decreased data movement
(in the worst case, keys traverse GPU global memory, a PCIe link, and an MPI
link in each pass).  This decrease is depicted in Figure~\ref{fig:radix-width}
and is proportional to the reduction in the number of passes. The sole limitation
on radix width is the size of the histograms, which grows exponentially as $r$ increases.
Currently, these histograms
exceed GPU shared memory at radix widths greater than sixteen bits.
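The trade-off can be sketched with two back-of-the-envelope helpers (names and counter-size assumption are ours): wider digits shrink the pass count linearly in $1/r$, while the histogram grows as $2^r$ bins.

```cpp
#include <cstdint>

// Back-of-the-envelope sketch (ours) of the radix-width trade-off.
// Number of passes needed to cover all key bits (ceiling division).
inline int num_passes(int key_bits, int radix_width) {
    return (key_bits + radix_width - 1) / radix_width;
}
// Histogram footprint: 2^r bins; whether a given width fits in the
// tens of KB of per-block GPU shared memory also depends on counter
// width and any per-warp replication (details assumed here).
inline uint64_t histogram_bytes(int radix_width, int counter_bytes = 4) {
    return (1ull << radix_width) * counter_bytes;
}
```

For 64-bit keys, moving from $r = 8$ to $r = 16$ halves the pass count from eight to four, while the bin count grows from 256 to 65{,}536, which is why widths beyond sixteen bits exhaust shared memory in this implementation.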


\begin{figure}[htbp]
\centering
\begin{minipage}[t]{0.45\textwidth}\centering\includegraphics[width=\textwidth]{01-narrative/figs/varying-radix-width-crop.pdf}
\end{minipage}
\caption{This figure captures the effect of changing the radix width.  
%
Runtime decreases substantially as radix width is increased.  
%
Widths beyond sixteen bits become impractical on the GPU due to histogram size, which
exceeds shared memory.
%
Results measured on eight Keeneland nodes with 2GB problem size (256 MB per node)
with a uniform random key distribution. 
%
}
\label{fig:radix-width}
\end{figure}

\subsubsection{Oversampling Ratio}
As mentioned in Section~\ref{sec:sample}, the oversampling ratio has a complex
relationship with performance.
%
Too few samples results in performance loss due to poor load balance, while
too many results in a bottleneck when the samples are sorted on the root node.
%
Using the baseline problem, we varied the total number of samples collected
from 0.01\% to 16\% of the per rank problem size.
%
Results are shown in Figure~\ref{fig:sampling-ratio}.
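The root-node bottleneck follows from how splitters are chosen; the following serial C++ sketch (our own, with a hypothetical function name) selects $p-1$ evenly spaced splitters from the gathered samples, making the $O(ps \log ps)$ sort on the root explicit:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative sketch (ours) of splitter selection on the root rank:
// sort the gathered samples, then take p-1 evenly spaced elements.
// The sort is serial on one node, so an excessive oversampling ratio
// eventually dominates runtime.
std::vector<uint64_t> pick_splitters(std::vector<uint64_t> samples,
                                     int num_ranks) {
    std::sort(samples.begin(), samples.end());
    std::vector<uint64_t> splitters;
    for (int i = 1; i < num_ranks; ++i)
        splitters.push_back(samples[i * samples.size() / num_ranks]);
    return splitters;
}
```

With too few samples, the evenly spaced quantile estimates are noisy and load balance suffers; with too many, the sort above dominates, matching the behavior in Figure~\ref{fig:sampling-ratio}.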

%%%
Surprisingly, performance is fairly constant at a low number of samples.
%
This indicates that any load imbalance in the number of keys for the final
local sort does not substantially influence overall runtime.
%
Also, as expected, when too many samples are collected, the cost to
gather and sort them on the root node begins to increase dramatically.
%
It is worth noting that the sampling process can be overlapped with 
the PCIe transfer of keys from the GPU to the host (in preparation for the key
exchange).
%
So, when sampling time becomes large enough, some PCIe transfer time is hidden,
as seen in Figure~\ref{fig:sampling-ratio} starting around a sampling rate of 1\%.

\begin{figure}[htbp]
\centering
\begin{minipage}[t]{0.48\textwidth}\centering\includegraphics[width=\textwidth]{01-narrative/figs/varying-samples-crop.pdf}
\end{minipage}
\caption{This figure shows how performance changes when varying the number of
samples.
%
Performance remains relatively unchanged until too many samples are taken, at
which point the sort on the root node becomes the bottleneck.
%
Also, note that PCIe transfer is overlapped with sampling, and as sample time
increases, more PCIe time is hidden.
%
Measurements taken on an eight node parallel run with 1024 MB of 64-bit 
key-value pairs.  
%
}
\label{fig:sampling-ratio}
\end{figure}


\subsection{Weak Scaling}
Next, both algorithms were measured in a weak scaling scenario to 
evaluate scalability.
%
Results of scaling to 192 GPUs (1024MB of keys and values per GPU)
are shown in Figure~\ref{fig:weak-scaling}.
%
The primary concern for scalability in both algorithms is the key exchange,
which consists of three MPI all-to-all collective operations.
%
These collective communications are known to be the bottleneck in many algorithms
and constitute a substantial portion of sorting runtime: 40\% for radix
sort and 43\% for sample sort at the highest node count.
%
Indeed, scaling for most other operations remains fairly flat for both algorithms.
%
The one exception to this is the indexing computation in radix sort, which shows
a marked increase at twenty-four processes. 
%
This increase is likely due to a change in the underlying algorithm used for the MPI
exclusive scan in MVAPICH2.


\begin{figure*}[!t]
\centering
\begin{minipage}[t]{0.48\textwidth}\centering\includegraphics[width=\textwidth]{01-narrative/figs/radix-weak-crop.pdf}\end{minipage}
\begin{minipage}[t]{0.48\textwidth}\centering\includegraphics[width=\textwidth]{01-narrative/figs/sample-weak-crop.pdf}\end{minipage}
\caption{This figure shows how performance changes in a weak scaling scenario.
%
Measurements were taken using all three GPUs per node, with 1024 MB of 64-bit 
key-value pairs.
%
Key distribution is uniform random. 
%
}
\label{fig:weak-scaling}
\end{figure*}

\subsection{Strong Scaling}
Finally, the strong scaling behavior of the sample-based sorting algorithm
was measured on a six-gigabyte problem, shown in Figure~\ref{fig:ss}, 
with a sampling rate and data distribution that match the baseline
parallel experiment.
%
With the three-process case as a baseline (used because the problem does not
fit into the memory of a single GPU), parallel efficiency ranged between
83\% and 85\%, reflecting the cost of transferring
data off node.
%


\begin{figure}[htbp]
\centering
\begin{minipage}[t]{0.48\textwidth}\centering\includegraphics[width=\textwidth]{01-narrative/figs/ss-crop.pdf}
\end{minipage}
\caption{This figure shows how GPU sample sort performance changes in a strong 
scaling scenario on a 6GB problem (key-value pairs).
%
Data distribution, sampling rate, and other parameters are consistent with the
baseline parallel experiment.
%
An initial drop in parallel efficiency (to between 83\% and 85\%) is 
observed when going off node (more than three processes).
%
However, using two nodes (six processes) as a baseline, the 
sample-based approach achieves 99\% efficiency to forty-eight
processes.
}
\label{fig:ss}
\end{figure}