\section{Shortcomings of MapReduce Benchmarks}
\label{sec:benchmarks}

In this section, we discuss why existing benchmarks are insufficient for
evaluating MapReduce performance. Our thesis is that existing
benchmarks are not representative: each captures only a narrow sliver of
a rich space of workload characteristics. What is needed instead is
a framework for constructing workloads that allows evaluators to select and
combine these characteristics.

\begin{table}[t]
\centering
\caption{Summary of shortcomings of recent MapReduce bench\-marks, compared against workload suites (right-most column).}  
\vspace{-5pt}
\footnotesize
\begin{tabular}{p{0.8in}|c|c|c|c|c|c|} 
                   &\textbf{Grid-}&\textbf{Hive} &\textbf{Hi}   &\textbf{Pig}&\textbf{Grid-}&\textbf{WL} \\
                   &\textbf{mix2} &\textbf{BM}&\textbf{bench}&\textbf{Mix}&\textbf{mix3} &\textbf{suites} \\ \hline 
Diverse job types                  &          &       &   $\surd$    &  $\surd$     &  $\surd$    & $\surd$   \\ \hline
Right \# of jobs for each job type &          &       &        &        &  $\surd$    & $\surd$   \\ \hline
Variations in job submit intensity &          &       &        &        &  $\surd$    & $\surd$   \\ \hline
Representative data-sizes          &          &       &        &        &  $\surd$    & $\surd$   \\ \hline
Easy to generate scaled/anticipated workloads &          &       &        & & & $\surd$ \\ \hline
Easy to generate consolidated workloads  & & & & & & $\surd$ \\ \hline
Cluster \& config. independent & $\surd$ & $\surd$ & $\surd$ & $\surd$ & & $\surd$ \\ \hline
\end{tabular}
\label{tbl:mrbenchmarks}
\vspace{-10pt}
\end{table}

Table \ref{tbl:mrbenchmarks} summarizes the strengths and weaknesses 
of five contemporary MapReduce benchmarks -- Gridmix2, Hive Benchmark, 
HiBench, PigMix, and Gridmix3. Below, we discuss each in detail. 
None of the existing benchmarks provides as much flexibility and
functionality as workload suites. 

\emph{Gridmix2} \cite{GridMix} includes stripped-down versions 
of ``common'' jobs -- sorting text data and SequenceFiles, 
sampling from large compressed datasets, and chains of MapReduce 
jobs exercising the combiner. Gridmix2 is primarily a 
saturation tool \cite{GridMixRecap}, which emphasizes stressing the 
framework at scale. As a result, jobs produced by Gridmix2 tend toward
hundreds of GB of input, shuffle, and output data. While stress
evaluations are an important aspect of evaluating MapReduce performance, 
the production workloads in Section~\ref{sec:traces}
contain many jobs with KB to MB data sizes. 
Also, as we show later in Section~\ref{subsec:systemSize}, running a
representative workload places realistic stress on the system
beyond that generated by Gridmix2. 

\emph{Hive Benchmark} \cite{HiveBenchmark} tests the performance of 
Hive, a data warehousing infrastructure built on top of Hadoop MapReduce.  
It uses datasets and queries derived from those used in 
\cite{PavloSigmod09}. These queries aim to describe 
``more complex'' analytical workloads
and focus on direct comparison 
against parallel databases. 
It is not clear that the queries in the Hive Benchmark reflect actual queries
performed in production Hive deployments. 
Even if the five queries are representative, running the Hive Benchmark 
does not capture the different query mixes, interleavings, arrival intensities,
data sizes, and other complexities one would expect in a production deployment of Hive.

\emph{HiBench} \cite{Hibench} consists of a suite of eight Hadoop programs 
that include synthetic microbenchmarks and real-world applications -- 
Sort, WordCount, TeraSort, NutchIndexing, PageRank, Bayesian Classification, 
K-means Clustering, and EnhancedDFSIO. These programs are presented as 
representing a wider diversity of applications than those 
used in prior MapReduce benchmarking efforts. While HiBench includes
a wider variety of jobs, it still fails to capture different 
job mixes and job arrival rates that one would expect in production 
MapReduce clusters.

\emph{PigMix} \cite{PigMix} is a set of twelve queries intended to test the latency
and the scalability limits of Pig -- a platform for analyzing large datasets
that includes a high-level language for constructing analysis programs and the
infrastructure for evaluating them. While this collection of queries
may be representative of the types of queries run in Pig deployments,
it provides no information on representative data sizes, query mixes,
or query arrival rates, and thus cannot capture the workload behavior seen in
production environments.

\emph{Gridmix3} \cite{Gridmix3, GridMixRecap} 
was motivated by situations in which
improvements that showed dramatic gains on Gridmix2 
had ambiguous or even negative effects in production~\cite{Gridmix3}. 
Gridmix3 replays job traces collected via Rumen~\cite{Rumen},
reproducing the byte and record movement patterns, as well as the job submission
sequences, thus producing comparable load on the I/O 
subsystems. 

Although this direct replay approach
reproduces inter-arrival rates and the correct mix of job types and data 
sizes, it introduces other problems. 
For example, it is challenging to modify the workload by adding or removing
job types, or to scale the workload along dimensions 
such as data size or arrival intensity. 
Further, changing the input Rumen traces is difficult, limiting the benchmark's
usefulness on clusters whose configurations differ from that of
the cluster that originally generated the trace. For example, the number of 
tasks per job is preserved from the traces; thus, 
evaluating the appropriate configuration of task size and task count is 
difficult, and misconfigurations of the original cluster 
are replicated. Similarly, it is challenging to use Gridmix3 to explore 
the performance impact of combining or separating workloads, 
e.g., consolidating the workloads of many clusters, or separating 
a combined workload into specialized clusters. 

