\section{Evaluation}

At a high level, our evaluation has two primary aims: to compare the 
performance of the two sorting algorithms and to explore how they map to a
heterogeneous architecture.
%
As part of these overarching goals, we examine several topics that
extend beyond sorting, including the choice of threading model, the
trade-offs of using Thrust, a high-level library for productive 
parallel programming, and the relative impact of the PCIe bus on
scalability.

\subsection{Experimental Platform}

We evaluate the performance of the sorting approaches on the NSF
Track2D Keeneland System~\cite{keeneland2011}, a heterogeneous cluster 
comprising Intel Westmere CPUs and NVIDIA Fermi GPUs.  
%
More details of Keeneland's configuration are available in Table~\ref{tab:hw}.
%
The prototype implementations are written in a combination of several
programming models and threading frameworks including OpenMP, MPI, CUDA,
and Thrust.
%
The implementations operate on 64-bit key-value pairs and take the radix
width, $r$, and sampling rate, $s$, as parameters.
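To make the role of $r$ concrete, the following sketch (a hypothetical helper, not the paper's code) shows how a radix width of $r$ bits divides each 64-bit key into $\lceil 64/r \rceil$ digits, one of which is examined per counting pass:

```cpp
#include <cstdint>

// Hypothetical helper (not the paper's code): extract the d-th r-bit digit
// of a 64-bit key. With r = 8 there are 64/8 = 8 digits, giving 2^8 = 256
// buckets per counting pass; a smaller r means more passes but fewer buckets.
uint64_t digit(uint64_t key, unsigned r, unsigned d) {
    uint64_t mask = (r >= 64) ? ~0ull : ((1ull << r) - 1);
    return (key >> (d * r)) & mask;
}
```

Raising $r$ reduces the number of passes but grows the per-pass histogram exponentially, which is the central tuning trade-off for a distributed radix sort.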
%
Results are presented using a recent release of the Intel toolchain
(v2011.4.191), CUDA 4.0, and MVAPICH2 1.7rc1 running in CentOS 5.5.
%

%%%
It is well known that sorting is a memory-bound problem.
%
The peak memory bandwidth of each GPU with ECC enabled is about 126 gigabytes
per second, roughly 3.9 times the 32 gigabytes per second available to each
CPU socket.


\begin{table}[!t]
\centering
\begin{tabular}{| l | l |}
\hline
Node Architecture     & HP SL390                     \\
CPU                   & Intel Xeon X5660 (Westmere)  \\
CPU Frequency         & 2.80 GHz                     \\
CPU cores per node    & 12                           \\
Host memory per node  & 24 GB                        \\
GPU Architecture      & NVIDIA Tesla M2070 (Fermi)   \\
GPUs per node         & 3                            \\
GPU memory per node   & 18 GB (6 GB per GPU)         \\
CPU/GPU ratio         & 2:3                          \\
Interconnect          & InfiniBand QDR (single rail) \\
Total Number of Nodes & 120                          \\
Total CPU Cores       & 1440                         \\
Total GPU Cores       & 161,280                      \\
\hline
\end{tabular}
\caption{Keeneland Hardware Details}
\label{tab:hw}
\end{table}

\subsubsection{Keeneland Node Architecture}
Figure~\ref{fig:sl390} shows the Hewlett Packard SL390, the architecture of
a Keeneland node.
%
This dual-socket system features a dual 
I/O hub design, which provides full sixteen-lane PCIe 2.0 connections to 
the three NVIDIA Tesla M2070 GPUs.
%
Keeneland nodes are connected by single-rail QDR InfiniBand in a fat-tree
topology.
%
Theoretical peak unidirectional bandwidths for each of the SL390's datapaths
are shown in Figure~\ref{fig:sl390}.

\begin{figure}[htbp]
\begin{center}
\begin{minipage}[t]{0.45\textwidth}\centering\includegraphics[width=\textwidth]{01-narrative/figs/sl390.png}\end{minipage}
\caption{Block Diagram of HP SL390.
%
Numbers indicate the peak theoretical unidirectional bandwidth of the
system's datapaths in gigabytes per second.}
\label{fig:sl390}
\end{center}
\end{figure}


\subsection{Implementation Details}
To obtain a thorough understanding of performance, we studied several
implementations of each sorting algorithm.
%
All versions are distributed across nodes with MPI, and use the same global
collectives (a series of three all-to-all communications) for the key-value
exchange.
%
The versions vary by:
\begin{itemize}
\item \textbf{Use of the GPU:} In the GPU versions of sample and radix sort,
all operations occur on the GPU with the exception of MPI calls and the
relatively inexpensive addressing calculations they require.

\item \textbf{Use of the high-level Thrust Library:}
%
In some cases, Thrust did not have the specific operations needed 
by the algorithm.
%
In these cases, the Thrust implementation falls back on an available
primitive, which may perform extra work.
%
For instance, when binning was required in sample sort, the Thrust
version uses a full local sort since no binning operation is available.

\item \textbf{Use of OpenMP:} Intra-node concurrency can be achieved by running
an MPI process on each core.
%
While this approach is simpler for the programmer, lightweight OpenMP threads
tend to scale better.

\end{itemize}
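The binning-via-sort workaround mentioned above can be sketched serially (this is hypothetical illustration code; \texttt{std::sort} and \texttt{std::lower\_bound} stand in for the corresponding Thrust primitives, and \texttt{bin\_via\_sort} is an invented name):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical serial sketch of binning via a full sort. With p-1 splitters,
// each key belongs to the bin of the first splitter that exceeds it. Direct
// binning would need only one search per key; lacking such a primitive, the
// keys are fully sorted first, after which every bin is a contiguous range
// delimited by std::lower_bound -- extra work, but identical bin contents.
std::vector<std::vector<uint64_t>>
bin_via_sort(std::vector<uint64_t> keys,
             const std::vector<uint64_t>& splitters) {
    std::sort(keys.begin(), keys.end());          // full local sort (the extra work)
    std::vector<std::vector<uint64_t>> bins;
    auto lo = keys.begin();
    for (uint64_t s : splitters) {                // each bin is now contiguous
        auto hi = std::lower_bound(lo, keys.end(), s);
        bins.emplace_back(lo, hi);
        lo = hi;
    }
    bins.emplace_back(lo, keys.end());            // keys >= the last splitter
    return bins;
}
```

The sort costs $O(n \log n)$ where a dedicated binning pass would cost $O(n \log p)$, which is one source of the Thrust overhead examined later.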

The sorting algorithms were instrumented with timing code to break total
runtime down into the different components of the algorithms.
%
For radix sort, these components include GPU kernel runtime; the indexing
calculations (of $H_{\textrm{PriorRank}}$ and $H_{\textrm{PriorDigit}}$), which consist primarily of MPI
scans and broadcasts; the key exchange (MPI all-to-all communications); PCIe
transfer; and time spent on MPI addressing calculations on the CPU.
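The role of the two indexing terms can be illustrated with a serial stand-in (the names \texttt{hist} and \texttt{dest\_base} are hypothetical; the actual implementation derives the same quantities with MPI scans and broadcasts):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Serial sketch (an assumption, not the paper's MPI code) of the indexing
// step. Given each process's per-digit histogram hist[p][d], the global
// destination base for process p's keys with digit value d is
//   H_PriorDigit(d)   = count of all keys, on every rank, whose digit is < d
//   H_PriorRank(p, d) = count of keys with digit d on ranks lower than p
// so that keys land grouped first by digit, then by originating rank.
uint64_t dest_base(const std::vector<std::vector<uint64_t>>& hist,
                   std::size_t p, std::size_t d) {
    uint64_t prior_digit = 0, prior_rank = 0;
    for (const auto& h : hist)                    // all ranks, digits < d
        for (std::size_t dd = 0; dd < d; ++dd) prior_digit += h[dd];
    for (std::size_t q = 0; q < p; ++q)           // digit d, lower ranks
        prior_rank += hist[q][d];
    return prior_digit + prior_rank;
}
```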
%
Sample sort has the same components, except for indexing, which is replaced
by the time taken to collect and sort the samples from each processor.
%
This sampling time is composed of an MPI gather operation, a local sort
on the root processor, and an MPI broadcast of the relevant results.
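The sampling sequence can be sketched serially as follows (\texttt{choose\_splitters} is a hypothetical stand-in; the gather and broadcast steps are elided, with the samples assumed already concatenated on the root):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical serial stand-in for the sampling phase: after gathering s
// samples from each of p processes (here already concatenated), the root
// sorts them locally and selects p-1 evenly spaced splitters, which it
// then broadcasts so every process can bin its keys consistently.
std::vector<uint64_t> choose_splitters(std::vector<uint64_t> samples,
                                       std::size_t p) {
    std::sort(samples.begin(), samples.end());    // local sort on the root
    std::vector<uint64_t> splitters;
    for (std::size_t i = 1; i < p; ++i)           // p-1 evenly spaced picks
        splitters.push_back(samples[i * samples.size() / p]);
    return splitters;
}
```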

%%%
			