\section{Evaluation}
\label{sec:evaluate}

We believe an evaluation of a MapReduce workload suite should demonstrate three things -- that the synthesized workload is actually \emph{representative} (Section~\ref{subsec:representative}), that the workload execution framework has \emph{low overhead} (Section~\ref{subsec:lowOverhead}), and that executing the synthesized workloads gives cluster operators \emph{new capabilities} otherwise unavailable (Sections~\ref{subsec:systemSize},~\ref{subsec:fairScheduler}). We demonstrate all three by synthesizing a day-long workload using the Facebook traces and executing it to identify workload-specific system size bottlenecks and to inform workload-specific choice of MapReduce task schedulers. 


\subsection{Representativeness of the Synthetic Workload}
\label{subsec:representative}

By representative, we mean that the synthetic workload should reproduce from the original trace the distribution of input, shuffle, and output data sizes (representative data characteristics), the mix of job submission rates and sequences, and the mix of common job types. We demonstrate all three by synthesizing day-long ``Facebook-like'' workloads using the Facebook trace and our workload synthesis tools. 

\vspace{4pt}
\noindent \emph{Data characteristics}
\vspace{2pt}

Figure \ref{fig:syntheticDataCharacteristics} shows the distributions of input, shuffle, and output data sizes of the synthetic workload, compared with those in the original Facebook trace. To observe the statistical properties of the trace sampling method, we synthesized 10 day-long workloads using 1-hour continuous samples. We see that sampling does introduce a degree of statistical variation, but that variation remains bounded around the aggregate statistical distributions of the entire trace. In other words, our workload synthesis method gives representative data characteristics. 
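The core of the continuous-window sampling procedure can be sketched as follows. This is a minimal Python illustration, not our actual tool; the trace representation (a time-sorted list of (submit-time, job-descriptor) pairs) and the job counts are assumptions for the sketch.

```python
import random

random.seed(1)

def synthesize(trace, trace_len_s, workload_len_s, window_s):
    """Build a synthetic workload by concatenating randomly chosen
    continuous windows of the original trace."""
    synthetic, offset = [], 0
    while offset < workload_len_s:
        start = random.uniform(0, trace_len_s - window_s)
        for t, job in trace:
            if start <= t < start + window_s:
                # Shift the job into the synthetic timeline, keeping its
                # relative position within the sampled window.
                synthetic.append((offset + (t - start), job))
        offset += window_s
    return synthetic

# Hypothetical trace: 1000 jobs submitted at random times over one day.
trace = sorted(((random.uniform(0, 86400), {"id": i}) for i in range(1000)),
               key=lambda p: p[0])
# Day-long synthetic workload from 1-hour continuous samples.
day = synthesize(trace, trace_len_s=86400, workload_len_s=86400, window_s=3600)
print(len(day))
```

Each sampled window keeps its internal submission sequence intact, which is what preserves bursts and job orderings within a window.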

We also repeated our analysis for different sample window lengths. The results (Figure \ref{fig:syntheticDataCharacteristicsConverge}) are intuitive -- when the synthetic workload length is fixed, shorter sample lengths result in more samples and more representative distributions. In fact, statistical theory predicts that the CDFs for the synthetic workloads converge towards the ``true'' CDF, with the bounds narrowing at $O(n^{-0.5})$, where $n$ is the number of samples~\cite{allOfStatistics}. Thus, shorter sample lengths correspond to synthetic workloads with more representative data characteristics. 
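This convergence can be checked empirically. The sketch below uses a hypothetical heavy-tailed stand-in for the per-job data sizes (the trace itself is not reproduced here) and measures the maximum gap between an empirical CDF and the population CDF as the number of samples grows.

```python
import bisect
import random

random.seed(0)

def max_ecdf_deviation(pop_sorted, n_samples):
    """Max gap between the empirical CDF of n_samples draws and the
    population CDF, measured at the sample points (a KS-style statistic)."""
    sample = sorted(random.choice(pop_sorted) for _ in range(n_samples))
    pop_cdf = lambda x: bisect.bisect_right(pop_sorted, x) / len(pop_sorted)
    return max(abs((i + 1) / n_samples - pop_cdf(x))
               for i, x in enumerate(sample))

# Hypothetical heavy-tailed "data sizes" standing in for the trace.
population = sorted(int(2 ** random.uniform(0, 30)) for _ in range(100000))

# Quadrupling the number of samples roughly halves the deviation.
for n in (100, 400, 1600):
    print(n, round(max_ecdf_deviation(population, n), 3))
```

The printed deviations shrink roughly as $n^{-0.5}$: each 4$\times$ increase in samples roughly halves the gap, mirroring the 2$\times$ narrowing per 4$\times$ shorter window reported in Figure \ref{fig:syntheticDataCharacteristicsConverge}.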

Also, the sampling method could be modified to accommodate different metrics of ``representativeness''. For example, to capture daily diurnal patterns, the sampling method could use day-long continuous sample windows. Alternately, we could perform conditional sampling of hour-long windows, e.g., the first hour in synthetic trace samples from the midnight-1am time window of all days. Other conditional sampling methods can capture behavior changes over different time periods, job streams from different organizations, and the like.

\begin{figure}
\begin{center}
\centering
\includegraphics[trim = 0cm 13.9cm 10cm 0cm, clip, width=9cm]{syntheticDataCharacteristics}
\vspace{-17pt}
\caption{\small Distributions of data sizes in synthesized workload using 1-hr samples. Showing that the data characteristics are representative -- min. and max. distributions for the synthetic workload (dashed lines) bound the distribution computed over the entire trace (solid line).}
\label{fig:syntheticDataCharacteristics}
\end{center}
\vspace{-8pt}
\end{figure}

\begin{figure}
\begin{center}
\centering
\includegraphics[trim = 0cm 13.9cm 10cm 0cm, clip, width=9cm]{syntheticDataCharacteristicsConverge}
\vspace{-17pt}
\caption{\small Distributions of output sizes in the synthesized workload using different sample lengths. For a fixed-length synthetic workload, the horizontal gap between the min. and max. distributions for the synthetic workload (dashed lines) and the distribution for the entire trace (solid line) decreases by 2$\times$ when the sampling window shortens by 4$\times$.}
\label{fig:syntheticDataCharacteristicsConverge}
\end{center}
\vspace{-18pt}
\end{figure}

\vspace{4pt}
\noindent \emph{Job submission patterns}
\vspace{2pt}

Our intuition is that the job submission rate per time unit is faithfully reproduced only if each sample is longer than the time unit involved. Otherwise, we would be performing memoryless sampling, with the job submission rate fluctuating in a narrow range around the long-term average, thus failing to reproduce workload spikes in the original trace. If the sample window is longer than the time unit, then more samples lead to a more representative mix of behavior, as discussed previously. 

Figure \ref{fig:syntheticJobSubmitRate} confirms this intuition. The figure shows the jobs submitted per hour for workloads synthesized using various sample window lengths. We see that the workload synthesized using 4-hour samples has loose bounds around the overall distribution, while the workload synthesized using 1-hour samples has tighter bounds. However, the workload synthesized using 15-minute samples does not bound the overall distribution. In fact, the 15-minute-sample synthetic workload has a narrow distribution around 300 jobs per hour, which is the long-term average job submission rate. Thus, while shorter sample windows result in more representative data characteristics, they distort variations in job submission rates. 
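The flattening effect of short windows can be illustrated with a toy simulation. The two-level day/night submission rates below are hypothetical (not the actual trace) but are chosen to average the same $\sim$300 jobs per hour; shorter windows mix more independent samples into each synthetic hour, pulling hourly rates toward that mean.

```python
import random
from statistics import pstdev

random.seed(2)

# Hypothetical bursty trace: ~50 jobs/hour at night, ~550 during the day,
# averaging ~300 jobs/hour over the day.
hourly_rate = [550 if 8 <= h < 20 else 50 for h in range(24)]
trace = sorted(h * 3600 + random.uniform(0, 3600)
               for h, r in enumerate(hourly_rate) for _ in range(r))

def synthesize(window_s):
    """Concatenate random continuous windows into a day-long workload."""
    out, offset = [], 0
    while offset < 86400:
        start = random.uniform(0, 86400 - window_s)
        out += [offset + t - start for t in trace if start <= t < start + window_s]
        offset += window_s
    return out

def jobs_per_hour(times):
    counts = [0] * 24
    for t in times:
        counts[int(t // 3600)] += 1
    return counts

for window_s in (900, 3600, 14400):   # 15 min, 1 hr, 4 hr samples
    spread = pstdev(jobs_per_hour(synthesize(window_s)))
    print(window_s, round(spread, 1))
```

The hourly-rate spread grows with the window length: 15-minute windows average over four independent samples per hour and compress the day/night variation, while 4-hour windows reproduce it.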

\begin{figure}
\begin{center}
\centering
\includegraphics[trim = 0cm 13.9cm 10cm 0cm, clip, width=9cm]{syntheticJobSubmitRate}
\vspace{-17pt}
\caption{\small Distributions of jobs per hour in synthetic workload. Short samples distort variations in job submit rates -- min. and max. distributions for synthetic workload (dashed lines) bound the distribution for the entire trace (solid line) for 1 \& 4-hour samples only.}
\label{fig:syntheticJobSubmitRate}
\end{center}
\vspace{-15pt}
\end{figure}

\vspace{4pt}
\noindent \emph{Common jobs}
\vspace{2pt}

Figure \ref{fig:syntheticCommonJobs} shows the frequency of common jobs in the synthetic workload, expressed as fractions of the frequencies in the original trace. A representative workload would have the same frequencies of common jobs as the original trace, i.e., fractions of 1. To limit statistical variation, we compute average frequencies from 10 instances of a day-long workload.

We see that regardless of the sample window length, the frequencies are mostly around 1. A few job types have fractions deviating considerably from 1. Table~\ref{tab:clusters} indicates that those jobs have very low frequencies. Thus, the deviations are statistical artifacts -- the presence or absence of even one of those jobs can significantly affect the frequency. 

Interestingly, the sample window length has no impact on how much the frequencies deviate. This differs from the data characteristics and submission patterns, where the sample window length has a clear impact on the representativeness of the synthetic workload. Instead, we can increase the representativeness of common job frequencies by synthesizing longer workloads. 
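For reference, the frequency fractions plotted in Figure~\ref{fig:syntheticCommonJobs} amount to the following computation. The job-type labels and counts here are hypothetical stand-ins (the real workload uses the job clusters of Table~\ref{tab:clusters}).

```python
from collections import Counter

def frequency_fractions(original_types, synthetic_types, num_instances):
    """Per-job-type frequency in the synthetic workload as a fraction of
    the frequency in the original trace, averaged over workload instances
    to limit statistical variation."""
    orig = Counter(original_types)
    synth = Counter(synthetic_types)
    return {t: (synth[t] / num_instances) / orig[t] for t in orig}

# Hypothetical job-type labels and counts; 10 synthetic workload instances.
original = ["small"] * 90 + ["map-heavy"] * 8 + ["huge-sort"] * 2
synthetic = ["small"] * 920 + ["map-heavy"] * 75 + ["huge-sort"] * 30
print(frequency_fractions(original, synthetic, 10))
```

A common type lands near 1 (92/10 per instance vs. 90 originally), while the rare type lands at 1.5 even though only one extra occurrence per instance is involved, illustrating why rare job types show large statistical deviations.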

\begin{figure}
\begin{center}
\centering
\includegraphics[trim = 0cm 12.8cm 13cm 0cm, clip, width=9cm]{syntheticCommonJobs}
\vspace{-15pt}
\caption{\small Frequency of common jobs in the synthetic workload as fractions of the frequencies in the original trace. Showing that workloads synthesized using continuous samples of 15min, 1hr, and 4hrs all have common job frequencies similar to the original trace.}
\label{fig:syntheticCommonJobs}
\end{center}
\vspace{-20pt}
\end{figure}




\subsection{Low Workload Execution Overhead}
\label{subsec:lowOverhead}

There are two  sources of potential overhead in our workload execution framework. First, concurrent reads by many jobs on the same input files could potentially affect HDFS read performance. Second, the background task to remove workload output could affect both HDFS read and write performance. 

Ideally, we would quantify the overhead by running the Facebook-like workload with non-overlapping input data or without removal of workload output, and comparing the performance against a setup with overlapping input and background removal of output. Doing so requires a system with up to 200TB of disk space (the sum of per-day input, shuffle, and output sizes, multiplied by 3-fold HDFS replication). Thus, we evaluate the overhead using simplified experiments. 

\vspace{4pt}
\noindent \emph{Concurrent reads}
\vspace{2pt}

To verify that concurrent reads of the same input files have low impact on HDFS reads, we repeat 10 times the following experiment on a 10-machine cluster running Hadoop 0.18.2. 

{\tt \scriptsize
\begin{verbatim}
  Job 1: 10 GB sort, input HDFS/directoryA
  Job 2: 10 GB sort, input HDFS/directoryB
  Wait for both to finish
  Job 3: 10 GB sort, input HDFS/directoryA
  Job 4: 10 GB sort, input HDFS/directoryA
\end{verbatim}
}

Jobs 1 and 2 give the baseline performance, while Jobs 3 and 4 identify any potential overhead. The running times are in Table \ref{tab:HDFSInput}. The completion times all fall within each other's confidence intervals. Thus, our data input mechanism imposes no measurable overhead. 

We repeated the experiment with more concurrent read jobs. There, the MapReduce task schedulers and placement algorithms introduce large variance in job completion time, with the performance differences again falling within each other's confidence intervals. Thus, our data input mechanism has no measurable overhead even at higher read concurrency levels. 

\begin{table}
  \begin{minipage}{4cm}
\centering
\caption{Simultaneous HDFS read, showing low overhead.}
\label{tab:HDFSInput}
\begin{tabular}{r l}
  \hline
  Job 1 & 597 s $\pm$ 56 s \\
  Job 2 & 588 s $\pm$ 46 s \\
  Job 3 & 603 s $\pm$ 56 s \\
  Job 4 & 614 s $\pm$ 50 s \\
  \hline
\end{tabular}
  \end{minipage}
  \begin{minipage}{4cm}
\centering
\caption{Background HDFS remove, also showing low overhead.}
\label{tab:HDFSrmr}
\begin{tabular}{r l}
  \hline
  Job 1 & 206 s $\pm$ 14 s  \\
  Job 2 & 106 s $\pm$ 10 s  \\
  Job 3 & 236 s $\pm$ 8 s  \\
  Job 4 & 447 s $\pm$ 18 s  \\
  \hline
  Job 5 & 206 s $\pm$ 11 s \\
  Job 6 & 102 s $\pm$ 8 s \\
  Job 7 & 218 s $\pm$ 16 s \\
  Job 8 & 417 s $\pm$ 9 s \\
  \hline
\end{tabular}
  \end{minipage}
  \vspace{-20pt}
\end{table}  
  

\vspace{4pt}
\noindent \emph{Background deletes}
\vspace{2pt}

To verify that the background task to remove workload output has low impact on HDFS read and write performance, we repeat 10 times the following experiment on a 10-machine cluster running Hadoop 0.18.2. 

{\tt \scriptsize 
\begin{verbatim}
  Job 1: Write 10 GB to HDFS;  Wait for job to finish
  Job 2: Read 10 GB from HDFS; Wait for job to finish
  Job 3: Shuffle 10 GB;        Wait for job to finish
  Job 4: Sort 10 GB;           Wait for job to finish

  Job 5: Write 10 GB to HDFS, with HDFS -rmr in background
    Wait for job to finish
  Job 6: Read 10 GB from HDFS, with HDFS -rmr in background
    Wait for job to finish
  Job 7: Shuffle 10 GB, with HDFS -rmr in background
    Wait for job to finish
  Job 8: Sort 10 GB, with HDFS -rmr in background
    Wait for job to finish
\end{verbatim}
}

Jobs 1-4 provide the baseline for write, read, shuffle, and sort. Jobs 5-8 quantify the performance impact of background deletes. The running times are in Table \ref{tab:HDFSrmr}. Corresponding completion times fall within each other's confidence intervals. Again, our data removal mechanism imposes no measurable overhead. This is because recent HDFS versions implement delete by renaming the deleted file into the {\tt \small /trash} directory, with the space truly reclaimed only after 6 hours~\cite{HDFSdelete}. Thus, even an in-thread, non-background HDFS remove would impose low overhead. 
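The delete-as-rename behavior can be modeled on a local file system as follows. This is a toy Python illustration of the idea, not HDFS's actual implementation; the class name and expiry pass are our own constructions.

```python
import os
import shutil
import tempfile
import time

class TrashingFS:
    """Toy model of delete-by-rename: 'remove' is a cheap rename into a
    trash directory; space is reclaimed later by a separate expiry pass."""
    def __init__(self, root, expiry_s=6 * 3600):
        self.trash = os.path.join(root, ".Trash")
        os.makedirs(self.trash, exist_ok=True)
        self.expiry_s = expiry_s

    def remove(self, path):
        # An O(1) metadata operation, so foreground deletes stay cheap.
        shutil.move(path, os.path.join(self.trash, os.path.basename(path)))

    def expire(self, now=None):
        # Reclaim space only for files older than the expiry period.
        now = time.time() if now is None else now
        for name in os.listdir(self.trash):
            p = os.path.join(self.trash, name)
            if now - os.path.getmtime(p) > self.expiry_s:
                os.remove(p)

root = tempfile.mkdtemp()
with open(os.path.join(root, "part-00000"), "w") as f:
    f.write("x" * 1024)
fs = TrashingFS(root)
fs.remove(os.path.join(root, "part-00000"))
print(os.listdir(fs.trash))  # -> ['part-00000']
```

Because the foreground cost is a rename, issuing deletes in a background thread versus in-thread makes little difference, consistent with the measurements in Table \ref{tab:HDFSrmr}.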




\subsection{New Capability 1 - Identify Workload-Specific Bottlenecks}
\label{subsec:systemSize}

We run the day-long Facebook-like workload at scale on a 200-machine cluster on Amazon Elastic Compute Cloud (EC2)~\cite{EC2}, running Hadoop 0.18.2 with default configurations. Each machine is an m1.large instance with 7.5GB memory, 4$\times$1.0-1.2GHz equivalent CPU capacity, 400GB storage capacity, and ``high'' IO performance. 

When we ran the workload, many of the jobs failed to complete. We suspected a system sizing issue, because we were running, on a 200-machine cluster, a workload that originally came from a 600-machine cluster. Thus, we decreased the input/shuffle/output sizes of all jobs by a factor of 3. Even then, 8.4\% of the jobs still failed, with the failed jobs appearing in groups with similar submission times, but with the groups dispersed throughout the workload. It turns out that a subtle system sizing issue is the bottleneck. 

What happens is that when the workload mixes large and small jobs, and the large jobs have long-running reduce tasks, small jobs complete their map tasks but their reduce tasks remain queued. Meanwhile, map tasks keep completing, allowing newly submitted jobs to begin. The result is an increasingly long queue of jobs that have completed the map phase but are waiting for the reduce phase. The cluster must store the shuffle data of all these active jobs, since the reduce step requires it. It is this growing set of active shuffle data that makes the system run out of storage space. Once that happens, jobs that attempt to write shuffle or output data fail. 

MapReduce recovers gracefully from this failure. Once a job fails, MapReduce reclaims the space for intermediate data. Thus, when enough jobs have failed, there would be enough reclaimed disk space for MapReduce to resume executing the workload as usual, until the failure repeats. Hence the failures appear throughout the workload. 
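This failure mode can be reproduced in a toy model: a single FIFO reduce slot, maps that always complete promptly, and shuffle data that occupies disk from map completion until the corresponding reduce finishes. The job sizes and disk capacity below are hypothetical.

```python
def simulate_fifo(jobs, disk_capacity):
    """jobs: (shuffle_size, reduce_time) pairs, one submission per tick.
    Maps complete instantly (never the bottleneck); one FIFO reduce slot
    drains jobs in order; shuffle data sits on disk until its reduce ends.
    Returns the indices of jobs that failed for lack of shuffle space."""
    disk, failed, pending, slot_free = 0, [], [], 0
    for t, (size, rtime) in enumerate(jobs):
        # Reclaim shuffle data of reduces that have finished by time t.
        while pending and pending[0][0] <= t:
            _, s = pending.pop(0)
            disk -= s
        if disk + size > disk_capacity:
            failed.append(t)      # job fails trying to write its shuffle data
            continue
        disk += size
        slot_free = max(slot_free, t) + rtime
        pending.append((slot_free, size))   # (reduce finish time, size)
    return failed

# A few huge-shuffle, slow-reduce jobs followed by many small jobs, on a
# disk that comfortably holds the steady state but not the backlog.
jobs = [(40, 100)] * 3 + [(5, 1)] * 50
print(simulate_fifo(jobs, disk_capacity=150))  # jobs 9 through 52 fail
```

With ample disk the same sequence completes without failures; the backlog of map-finished jobs behind the slow reduces, not the total data volume, is what exhausts storage.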

\emph{The ability to identify this bottleneck represents a new capability because the failure occurs only when the workload contains a specific sequence of large and small jobs, and specific ratios of map versus reduce times.} A benchmark cannot identify this bottleneck because it does not capture the right job submission sequences. A direct trace replay does, but potentially takes longer. For example, if the pathological job submission sequence happens frequently, but only in the second half of the trace, then we need to replay the entire first half of the trace before identifying the bottleneck. 

Increasing the disk size would address this failure mode, for example, provisioning a cluster with 200TB of storage (the sum of input, shuffle, and output sizes, multiplied by 3-fold HDFS replication). However, this would be wasteful over-provisioning. The real culprit is the FIFO task scheduler, which creates a long queue of jobs starved of reduce slots. The Hadoop fair scheduler was designed specifically to address this issue~\cite{fairScheduler}. Thus, we are not surprised that the fair scheduler came out of a direct collaboration with Facebook. 

As a natural follow-up, we investigate how much the fair scheduler actually benefits this workload.





\subsection{New Capability 2 - Select Workload-Specific Schedulers}
\label{subsec:fairScheduler}

Briefly, MapReduce task schedulers work as follows. Each job breaks down into many map and reduce tasks, with each task operating on a partition of the data. These tasks execute in parallel on different machines. Each machine has a fixed number of task slots, by default 2 map and 2 reduce slots. The task scheduler receives job submission requests and assigns tasks to worker machines. The FIFO scheduler assigns task slots to jobs in FIFO order, while the fair scheduler gives each job a concurrent fair share of the task slots. A big performance difference occurs when the job stream contains many small jobs following a large job. Under the FIFO scheduler, the large job takes up all the task slots, with the small jobs enqueued until the large job completes. Under the fair scheduler, the jobs share the task slots equally, with the large jobs taking longer, but small jobs being able to run immediately. 
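The behavioral difference can be illustrated with a tick-based toy scheduler. The job sizes are hypothetical, and the model abstracts tasks into divisible "task-units": each tick, `slots` units are handed out either in FIFO order or split evenly among active jobs.

```python
def schedule(jobs, slots, policy):
    """Tick-based toy scheduler. jobs: dict name -> remaining task-units,
    all submitted at t=0. Returns each job's completion tick."""
    remaining = dict(jobs)
    order = list(jobs)            # FIFO submission order
    done, t = {}, 0
    while remaining:
        t += 1
        if policy == "fifo":
            budget = slots        # head-of-queue job gets slots first
            for name in order:
                if name in remaining:
                    used = min(budget, remaining[name])
                    remaining[name] -= used
                    budget -= used
                    if budget == 0:
                        break
        else:                     # fair: equal share to each active job
            share = slots / len(remaining)
            for name in list(remaining):
                remaining[name] -= share
        for name in [n for n, r in remaining.items() if r <= 0]:
            done[name] = t
            del remaining[name]
    return done

jobs = {"large": 100, "small1": 2, "small2": 2}
print(schedule(jobs, slots=10, policy="fifo"))
print(schedule(jobs, slots=10, policy="fair"))
```

Under FIFO the small jobs wait for the large job (finishing at tick 11 instead of tick 1), while under fair sharing they finish immediately and the large job slips by only one tick, which is the trade-off discussed below.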

We run the day-long Facebook-like workload on the cluster of 200 m1.large EC2 instances. We compare the behavior when the cluster runs Hadoop 0.18.2, which has the FIFO scheduler, against Hadoop 0.20.2, which has the fair scheduler. We observe three illustrative kinds of behavior. We analyze each, then combine the observations to discuss why the choice of scheduler should depend on the workload.



\vspace{4pt}
\noindent \emph{Disk ``bottleneck''}
\vspace{2pt}

Figure~\ref{fig:fairSchedulerFailure} captures a snapshot of 100 consecutive jobs in our day-long workload of roughly 6000 jobs. The horizontal axis indicates the job indices in submission order, i.e., the first job in the workload has index 0. There are several bursts of large jobs that cause many jobs to fail under the FIFO scheduler. These failed jobs have no completion time, leaving a missing marker in the graph. We know these are bursts of large jobs because the jobs take longer to complete under the fair scheduler. We see two such bursts -- Jobs 4570-4580 and 4610-4650. This is the failure mode we discussed in Section~\ref{subsec:systemSize}. The fair scheduler is clearly superior here, owing to its higher job completion rate. 

\begin{figure}
\begin{center}
\centering\includegraphics[trim = 0cm 14.8cm 14cm 0cm, clip, width=8cm]{fairSchedulerFailure}
\vspace{-10pt}
\caption{\small A snapshot of 100 jobs in a day-long Facebook-like workload, showing job failures in FIFO scheduler (missing markers, i.e., jobs without a completion time).}
\label{fig:fairSchedulerFailure}
\end{center}
\vspace{-5pt}
\begin{center}
\centering\includegraphics[trim = 0cm 14.8cm 14cm 0cm, clip, width=8cm]{fairSchedulerSuccess}
\vspace{-10pt}
\caption{\small Job submit pattern of small jobs after large jobs from a snapshot of 100 jobs in a day-long Facebook-like workload. The fair scheduler gives lower completion times and is also superior.}
\label{fig:fairSchedulerSuccess}
\end{center}
\vspace{-5pt}
\begin{center}
\centering\includegraphics[trim = 0cm 14.8cm 14cm 0cm, clip, width=8cm]{fairSchedulerSmallJobs}
\vspace{-10pt}
\caption{\small Long sequence of small jobs from a snapshot of 100 jobs in a day-long Facebook-like workload. The FIFO scheduler gives lower completion times and is superior.}
\label{fig:fairSchedulerSmallJobs}
\end{center}
\vspace{-20pt}
\end{figure}


\vspace{4pt}
\noindent \emph{Small jobs after large jobs, no failures}
\vspace{2pt}

This is the precise job arrival sequence for which the fair scheduler was designed. Figure~\ref{fig:fairSchedulerSuccess} captures another 100 consecutive jobs in the day-long workload. Here, even though both the FIFO and fair schedulers exhibit no job failures, the fair scheduler is still far superior. Several very large jobs arrive in succession (high completion times around Job 4820 and just beyond Job 4845). Each arrival brings a large jump in the FIFO scheduler completion time of subsequent jobs. This is again due to FIFO head-of-queue blocking. Once the large job completes, all subsequent small jobs complete in rapid succession, leading to the horizontal row of markers. The fair scheduler, in contrast, shows small jobs with unaffected running times, sometimes orders of magnitude faster than their FIFO counterparts. Such improvements agree with the best-case improvement reported in the original fair scheduler study~\cite{fairScheduler}, but are far higher than the average improvement reported there. 

\vspace{4pt}
\noindent \emph{Long sequence of small jobs}
\vspace{2pt}

Figure~\ref{fig:fairSchedulerSmallJobs} captures 100 consecutive jobs that are all small jobs with fast running times. For this job submission pattern, Hadoop 0.20.2 is slower than Hadoop 0.18.2, which is unsurprising given the many features added since 0.18.2. The fair scheduler brings little benefit. Small jobs dominate this workload (Table~\ref{tab:clusters}). The vast improvements for small jobs after large jobs would thus be amortized across performance penalties for long sequences of small jobs. 

\vspace{4pt}
\noindent \emph{Workload-specific choice of schedulers}
\vspace{2pt}

Our experiments show that the choice of schedulers depends on both the performance metric and the workload. The fair scheduler would be a clear winner if the metric is the worst-case job running time or the variance in job running time. However, if average job running time is the metric, then the FIFO scheduler would be preferred if long sequences of small jobs dominate the workload. Thus, even though cluster users benefit from the fairness guarantees of the fair scheduler, cluster operators may find that fairness guarantees are rarely needed, and adopt the FIFO scheduler instead. 

\emph{The ability to make a workload-specific choice of schedulers represents a new capability because scheduler performance depends on the frequencies of various job submission patterns.} The right choice of scheduler for one workload does not imply the right choice for another. The original fair scheduler study~\cite{fairScheduler} used a synthetic workload with frequent interleaving of large and small jobs, leading to the conclusion that the fair scheduler should be unequivocally preferred. Here, we execute a workload with a more representative interleaving of large and small jobs. This leads us to a more nuanced, workload-specific choice of MapReduce task schedulers. 







%
%
%\subsection{Acceptability of Ignoring Compute}
%
%A necessary step in anonymizing our trace data is to ignore specific job names and the computation they perform. 
%While losing computation reproducibility does not impact our ability to evaluate design decisions, our replay accuracy may be impacted due to the absence of compute-bottlenecked jobs in the workload. Although we have sufficient evidence from job names in our traces to suggest that most MapReduce jobs are not compute-bottlenecked, we perform due diligence to justify that it is acceptable to ignore compute. We withhold publication of job names precisely due to anonymity concerns. 
%
%\begin{figure}
%\begin{center}
%\includegraphics[height=5cm,width=8cm]{WordCount}
%\caption{\small Quantifying the cost of ignoring compute in wordcount.}
%\label{fig:WordCount}
%\end{center}
%\end{figure}
%
%We ran a MapReduce job that performed a word count on varying sizes of Wikipedia data loaded into our HDFS. Although word count is not the most compute-intensive example, it represents the higher end of computation performed in our production traces.  Figure \ref{fig:WordCount} compares the difference between performing the actual word count and running RatioMapReduce with the same data size. There is about 20\% difference in duration and energy. The power consumed is virtually identical. 
%
%To provide a more complete analysis, we also evaluate the cost of ignoring the computation on jobs known to be compute bound, such as calculating $\pi$ using 18 map tasks, each performing 5,000,000,000 Monte-Carlo samples. Since our summary statistics make no effort to normalize for computation, our synthetically reproduced $\pi$ has trivial data size, and the job completed in 23.6 seconds, compared to the actual compute $\pi$ counterpart, which took almost 9 minutes to complete. The synthetic job consumed 1.33 joules per node compared to 99290 joules per node. Power consumption, on the other hand, differed by a mere 10 Watts. Thus, in the extreme worst case, discrepancies are significant, but they are easily prevented within our framework if the workload generator inputs also include job computation semantics. 
%
%
%
%
%\subsection{Measurement results}
%\label{subsec:useCaseResults}
%
%We can visually verify that the observed performance differences on test clusters $T$ and $T'$ translate to production clusters $P$ and $P'$. Figure \ref{fig:EC2Hadoop18v20Comparison} shows the map and reduce time comparisons between test clusters $T$ vs. $T'$, and production clusters $P$ vs. $P'$. Each graph compares the map/reduce time in Hadoop 0.18.2 (horizontal axis) vs. Hadoop 0.20.2 (vertical axis). The graphs show log10 values in hexgonal bins, with darker colors meaning more data points in the bin. We also include the reference 1-to-1 diagonal. Dark bins below the diagonal indicates a performance improvement for many jobs going from Hadoop 0.18.2 to 0.20.2. If performance differences translate from test to production clusters, then graphs in each column should show similar shape of distribution for all bins, with roughly matching locations for the densest bins. This is indeed the case. In the left column comparing map times, both the top and bottom graphs show all bins group around the diagonal, with the densest bins also located around the diagonal. In the right column comparing reduce times, both top and bottom graphs also show all bins group around the diagonal, with the densest bins located below the diagonal. 
%
%Figure \ref{fig:EC2Hadoop18v20Comparison} gives qualitative indication that observed behavior translates. We outline below a more rigorous, quantitative comparison of the job failure, efficiency, and latency metrics.
%
%\begin{figure}[t]
%\begin{center}
%\centering
%\includegraphics[trim = 0.1cm 8.5cm 15cm 0.1cm, clip, width=8cm]{EC2Hadoop18v20Comparison}
%\vspace{-10pt}
%\caption{\small Map time and reduce time comparisons between test clusters $T$ vs. $T'$ (bottom), and between production clusters $P$ vs. $P'$ (top). In each column, the similar shapes and locations of the darkest bins show that the performance differences between test clusters translates to production clusters. See the beginning of Section \ref{subsec:useCaseResults} for detailed discussion. }
%\label{fig:EC2Hadoop18v20Comparison}
%\end{center}
%\vspace{-10pt}
%\end{figure}
%
%\subsection{Use case description}
%\label{subsec:useCaseDescribe}
%
%We are operators of a large scale EC2 cluster running a ``Facebook-like'' MapReduce workload. We are running Hadoop 0.18.2 with default configurations on a cluster of 200 m1.large instances. We want to find out what would be the performance improvement if we upgrade to Hadoop 0.20.2 with tuned, non-default configurations. We do not want to do a fork-lift upgrade of the large cluster until we are sure that there are significant performance improvements. We have financial resources to do short-term performance testing on a small scale cluster, but not enough resources to do testing on a mirror large scale cluster, nor conduct long-term measurements and comparisons. 
%
%This use case captures common performance testing challenges for MapReduce cluster operators. Successfully addressing the use case requires exercising all aspects of our MapReduce workload performance methodology. Below are some more details of the use case. 
%
%\subsubsection{``Facebook-like'' workload}
%
%Our workload model is the FB trace from Section \ref{sec:workloadComparison}. This trace describes only the data sizes at the input, shuffle, and output stages. We do not have access to the original map/reduce functions or the production data set at Facebook. Hence our ``Facebook-like'' workload operates on input data of random bytes, using data-ratio-preserving proxy map/reduce functions. We emphasize that this synthetic workload is not a Facebook workload - essential information about the map/reduce functions and the input production data set is missing. It is a ``Facebook-like'' workload that has the same job submission sequences, arrival intensities, data sizes, and data patterns as the original Facebook workload. In our use case, our large scale EC2 cluster runs the ``Facebook-like'' workload. 
%\begin{comment}
%This is a common situation that confronts MapReduce operators. If the monitoring tracing tools are only the built-in tools for Hadoop, MapReduce operators will have the same kind of traces and workload model. 
%\end{comment}
%
%To make sure the workload fits on a small scale test cluster, we scale the data size by the same factor by which we scale the cluster size. As explained in Section \ref{subsec:pipeline}, this scaling preserves the workload compute intensity - a fraction of the workers (cluster size) does the corresponding fraction of the work (data size). Similarly, the continuous time-window sampling method allows us to capture representative workload properties with short synthetic workloads and statistically bounded deviations. We want rapid experimentation. Thus, we produce a day-long synthetic workload using hour-long continuous samples of the workload trace. 
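%To make the scaling concrete, here is the arithmetic for this use case (a sketch using the cluster sizes of this use case, not figures taken from the trace): shrinking the 200-instance production cluster to a 10-instance test cluster gives a scaling factor of
%\[ s = \frac{10}{200} = \frac{1}{20}, \]
%so every job's input, shuffle, and output sizes are multiplied by $1/20$. For example, a job that reads 1\,TB of input in the production workload reads 50\,GB in the scaled workload, leaving the per-worker data intensity unchanged.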
%
%\subsubsection{Production and test systems}
%
%We follow the performance comparison illustrated in Figure \ref{fig:WLMethod}. The production system $P$ runs Hadoop 0.18.2 with default configurations on 200 m1.large EC2 instances. The test system $T$ has identical characteristics, except the cluster size is decreased to 10 instances. Each EC2 instance has 7.5GB memory, equivalent CPU capacity of 4$\times$1.0-1.2GHz Opteron processors, and ``High'' IO performance. 
%%In our network bandwidth tests, we are able to send as high as 800Mbps using these instances.
%
%We want to know the performance of production system $P'$ that runs Hadoop 0.20.2, also on 200 m1.large EC2 instances. However, the Hadoop configurations would be tuned. Appendix \ref{appendix:tunedConfig} lists the tuned configuration values, and the reasons for changing them from the default. Our small scale test cluster $T'$ has identical characteristics, except the cluster size is decreased to 10 instances. 
%
%We know from Hadoop change logs that there have been many improvements and new features going from Hadoop 0.18.2 to Hadoop 0.20.2. Our measurement results show that the most important change for our ``Facebook-like'' workload is the switch from the FIFO scheduler that assigns all available cluster resources to jobs in FIFO order, to the fair scheduler that seeks to give each active job a concurrent fair share of the cluster resources. 
%
%
%
%\subsubsection{Job failure and efficiency}
%
%The job failure comparison was striking. On test cluster $T$ running Hadoop 0.18.2, 5.5\% of jobs did not complete, while only 0.1\% of jobs failed on test cluster $T'$ running Hadoop 0.20.2. This ordering is preserved on the production clusters, with 8.4\% of jobs failing on production cluster $P$ and 0.7\% of jobs failing on production cluster $P'$. While we were not surprised that the ordering of failure rates is preserved, we find it striking that using Hadoop 0.20.2 cuts the failure rate by an order of magnitude. More detailed examination allowed us to identify a failure mode explained by the difference between the FIFO and fair schedulers. 
%
%The vast difference in job failure rates complicates a more rigorous comparison of efficiency and latency. First, we can meaningfully compare only those jobs that successfully completed in both $T$ and $T'$, or $P$ and $P'$. Second, more subtly, job failures in fact lighten the load on the cluster. When a job fails, the remainder of the work for that job is removed from the workload and no longer loads the cluster. Thus, successful jobs running immediately after job failures would see a lighter than expected cluster load. However, if all jobs had run to completion, then all jobs would see the same cluster load. The precise performance ordering would depend on the balance between good schedulers that improve efficiency and latency, bad schedulers that ``improve'' efficiency and latency by ``removing'' jobs from the workload through job failures, and other differences between the two system settings under comparison. 
%
%We include here the efficiency comparisons for jobs that are successful in both Hadoop 0.18.2 and 0.20.2, with an emphasis that the comparison is less reliable for the reasons listed above. Upgrading from Hadoop 0.18.2 on $T$ to 0.20.2 on $T'$ sees an average of 24\% improvement in map times and 22\% improvement in reduce times. The corresponding upgrade from $P$ to $P'$ sees an average of 3\% improvement in map time and 31\% improvement in reduce time. 
%
%We take a more detailed look at the effect of the fair scheduler below. 
%
%\subsubsection{Impact of schedulers}
%
%Briefly, MapReduce task schedulers work as follows. Each job breaks down into many map and reduce tasks, each operating on a partition of the input, shuffle, and output data. These tasks execute in parallel on different worker machines in the cluster. Each machine has a fixed number of task slots, by default 2 map slots and 2 reduce slots per machine. The task scheduler sits on the Hadoop master, which receives job submission requests and coordinates the worker machines. The FIFO scheduler assigns all available task slots to jobs in FIFO order, while the fair scheduler seeks to give each active job a concurrent fair share of the task slots. The biggest performance difference occurs when the job stream contains many small jobs following a large job. Under FIFO, the large job takes up all the task slots, with the small jobs enqueued until the large job completes. Under the fair scheduler, the large and small jobs share the task slots equally, with the large job taking longer, but the small jobs being able to run immediately. 
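%As a back-of-the-envelope illustration of this difference (the numbers here are hypothetical, not measurements from our experiments): suppose a large job occupies all $S$ task slots for 10 minutes, and ten small jobs, each needing one slot-minute of work, arrive just after it. Under FIFO, each small job waits the full 10 minutes behind the large job, so its latency is dominated by queueing delay. Under the fair scheduler, each small job immediately receives a share of the slots and finishes in roughly a minute, while the large job is delayed only by the ten slot-minutes of small-job work - a negligible fraction of its own $10S$ slot-minutes.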
%
%Figure \ref{fig:fairScheduler} shows three illustrative job sequences and their run times under FIFO and fair schedulers. 
%
%\begin{figure}
%\begin{center}
%\centering
%%\includegraphics[width=0.98\columnwidth]{fairScheduler}
%\includegraphics[trim = 0.1cm 10cm 14cm 0.1cm, clip, width=8cm]{fairScheduler}
%\vspace{-10pt}
%\caption{\small Three illustrative job sequences for comparing the FIFO scheduler (solid markers) and the fair scheduler (hollow markers), showing job failures in the FIFO scheduler (top), unnecessarily long latency in the FIFO scheduler (middle), and slight latency increase for small jobs in Hadoop 0.20.2 (bottom).}
%\label{fig:fairScheduler}
%\end{center}
%\vspace{-10pt}
%\end{figure}
%
%In the top graph, several bursts of large jobs cause many jobs to fail under the FIFO scheduler, while the fair scheduler operates unaffected. Under FIFO, subsequent arrivals of small jobs steadily lengthen the job queue. There are several failure modes, and we have not yet pin-pointed the cause of every one. In one common failure mode, jobs fail because the entire cluster runs out of disk space. The disks hold the working set of shuffle data of all active jobs. Having a long job queue can increase this working set considerably, with earlier jobs in the reduce phase operating on old shuffle data, but subsequent jobs writing additional shuffle data using free map slots. Full disks across the cluster cause jobs to fail despite re-submission and recovery mechanisms, until enough jobs have failed to cause intermediate data to be cleared and disk space to be freed. 
%
%Using very large disks can avoid this failure (the disks in the experiment are already 400GB). However, switching to the fair scheduler considerably lowers the disk space requirements, since all jobs have an equal chance to finish, allowing the working sets of completed jobs to be reclaimed. Even under the fair scheduler, though, we still observe this failure mode, just much more rarely. This illustrates that running the synthetic workload can test for correct system sizing under realistic job sequences and data intensities - here, it identified a disk space limitation. 
%
%The top graph also shows that successful jobs see lighter than expected cluster load when submitted immediately after strings of job failures. The running times for Jobs 4650 onwards all show that jobs using the FIFO scheduler completed faster. The reason is that preceding job failures removed otherwise still active cluster loads! When the failure rates differ so greatly, we believe the failure rate metric should take precedence over efficiency and latency metrics. 
%
%The middle graph shows the precise job arrival pattern that the fair scheduler was designed to optimize. Several very large jobs arrive in succession (the high markers around Job 4820 and another just beyond Job 4845). Each arrival brings a large jump in the FIFO scheduler finishing times of subsequent jobs. This is again due to FIFO head-of-queue blocking. New jobs continue to lengthen the queue before old jobs can drain. Once the head-of-queue large job completes, all subsequent small jobs complete in rapid succession, leading to the horizontal row of markers. The fair scheduler, in contrast, shows small jobs with unaffected running times, sometimes orders of magnitude faster than their FIFO counterparts. Such improvements are in agreement with the best-case improvement reported in the original fair scheduler paper~\cite{fairScheduler}, but far higher than the average improvement reported there. 
%
%The lower graph shows the finishing times of small jobs during times of low load (note the different vertical axis). In this setting, Hadoop 0.20.2 is slower than Hadoop 0.18.2, unsurprising given the many features added since 0.18.2. The fair scheduler brings little benefit in these settings. However, in this workload, low load periods occur more frequently than high load periods, meaning that the vast improvements during high load may be averaged out into a net performance penalty. 
%
%Based on these observations, we decide for this use case that we should upgrade to Hadoop 0.20.2. We further recommend that the fair scheduler be the default scheduler for workloads with similar patterns of small jobs mixed with large jobs. The order-of-magnitude latency benefit for all jobs during load peaks far outweighs the latency increases of a few tens of seconds for small jobs during the common periods of light load. 
%
%The original fair scheduler paper~\cite{fairScheduler} could not perform this analysis because the micro-benchmarks used there do not sufficiently capture job arrival sequences. In contrast, our continuous time-window trace sampling method does.
%
