%\section{A Look at Real Traces}
\section{Lessons from Two Production Traces}
\label{sec:traces}

There is a chronic shortage of production traces available to MapReduce researchers. Without such traces, it is impossible to assess how representative proposed MapReduce benchmarks are. We have access to two production Hadoop MapReduce traces, which we analyze and compare below. 

One trace comes from a 600-machine cluster at Facebook (FB trace), spans 6 months from May 2009 to October 2009, and contains roughly 1 million jobs. The other trace comes from a cluster of approximately 2000 machines at Yahoo! (YH trace), covers three weeks in late February 2009 and early March 2009, and contains around 30,000 jobs. Both traces contain a list of job submission and completion times, data sizes for the input, shuffle, and output stages, and the running time in task-seconds of map and reduce functions (e.g., 2 tasks running 10 seconds each count as 20 task-seconds). Thus, these traces offer a rare opportunity to compare two large-scale MapReduce deployments using the same trace format. 

We compare MapReduce data characteristics (Section \ref{subsec:dataChar}), job submission and data movement rates (Section \ref{subsec:timeChar}), and common jobs within each trace (Section \ref{subsec:commonJobs}). The comparison helps us develop a vocabulary to capture key properties of MapReduce workloads (Section \ref{subsec:workloadVocab}). 


\subsection{Data Characteristics}
\label{subsec:dataChar}

MapReduce operates on key-value pairs at input, shuffle (intermediate), and output stages. The size of data at each of these stages provides a first-order indication of \emph{what data the jobs run on}. Figure \ref{fig:workloadStats} shows the aggregate data sizes and data ratios at the input/shuffle/output stages, plotted as a cumulative distribution function (CDF) of the jobs in each trace. 
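As a concrete illustration, per-job CDFs of this kind take only a few lines to compute. The sketch below is ours, not part of any trace-processing pipeline, and the byte counts are hypothetical stand-ins for per-job input sizes:

```python
import numpy as np

def empirical_cdf(values):
    """Sort per-job values and pair each with its cumulative job fraction."""
    x = np.sort(np.asarray(values, dtype=float))
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y

# Hypothetical per-job input sizes in bytes (real traces span KBs to TBs).
input_bytes = [21e3, 381e3, 446e3, 230e9, 1.9e12, 7.6e12]
x, y = empirical_cdf(input_bytes)
```

Plotting $y$ against $x$ on a log-scaled horizontal axis yields curves of the shape shown in the figure.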

\begin{figure}[!t]
\begin{center}
%\begin{minipage}[t]{0.49\columnwidth}
\centering
\includegraphics[trim=0cm 8.5cm 12.7cm 0cm, clip, width=8cm]{dataCharacteristics}
\vspace{-10pt}
\caption{\small Data size at input/shuffle/output (top), and data ratio between each stage (bottom).}
\label{fig:workloadStats}
\end{center}
\vspace{-20pt}
\end{figure}


The input, shuffle, and output data sizes range from KBs to TBs in both traces. Within the same trace, the data sizes at different MapReduce stages follow different distributions. Across the two traces, the same MapReduce stage also has different distributions. Thus, the two MapReduce systems were performing different computations on different data sets. Additionally, many jobs in the FB trace have no shuffle stage---the map outputs are written directly to the Hadoop Distributed File System (HDFS). Consequently, the CDF of the shuffle data sizes has a high density at 0. 

The data ratios between the output/input, shuffle/input, and output/shuffle stages also span several orders of magnitude. 
%, offering further evidence that the two MapReduce systems were performing different types of work. 
Interestingly, there is little density around 1 for the FB trace, indicating that most jobs are data expansions (ratio $\gg 1$ for all stages) or data compressions (ratio $\ll 1$ for all stages). The YH trace shows more data transformations (ratio $\approx 1$). 
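One simple way to operationalize this three-way distinction is to threshold the per-job data ratio. The sketch below is a minimal illustration; the cutoff of one order of magnitude is our assumption, not a value taken from the traces:

```python
def classify_job(input_bytes, output_bytes, cutoff=10.0):
    """Label a job by its output/input ratio; the cutoff is illustrative."""
    if input_bytes == 0:
        return "load"              # no input read, as in data-loading jobs
    ratio = output_bytes / input_bytes
    if ratio >= cutoff:
        return "expansion"         # ratio >> 1
    if ratio <= 1.0 / cutoff:
        return "compression"       # ratio << 1
    return "transformation"        # ratio close to 1
```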


\subsection{Job Submission and Data Intensity Over Time}
\label{subsec:timeChar}

Job submission patterns and data intensities indicate \emph{how much work there is}. The YH trace is too short to capture the long-term evolution in job submission and data intensity. Thus, we focus on the FB trace for this analysis. 

\begin{figure}[!t]
\begin{center}
\centering
%\includegraphics[width=0.98\columnwidth]{weeklyTrend-FB}
\includegraphics[trim = 0cm 13cm 14cm 0cm, clip, width=7.5cm]{weeklyTrend-FB}
\vspace{-10pt}
\caption{\small Weekly aggregate of job counts (top) and sum of map and reduce data size (bottom). FB trace.}
\label{fig:weeklyTrend}
\end{center}
\vspace{-5pt}
\end{figure}

\begin{figure}[!t]
\begin{center}
\centering
%\includegraphics[width=0.98\columnwidth]{hourlyTrend-FB}
\includegraphics[trim = 0cm 11.5cm 14cm 0cm, clip, width=7.5cm]{hourlyTrend-FB}
\vspace{-10pt}
\caption{\small Hourly job submission rate (top) and sum of input, shuffle, output data sizes (bottom) over a randomly selected week. FB trace.}
\label{fig:hourlyTrend}
\end{center}
\vspace{-20pt}
\end{figure}

Figure \ref{fig:weeklyTrend} shows weekly aggregates of job counts and the sum of input, shuffle, and output data sizes over the entire trace. There is no long-term growth trend in either the number of jobs or the total data size. However, the total data size varies much more than the number of jobs, indicating significant changes in the data being processed. We also see a sharp drop in the number of jobs in Week 11, which our Facebook collaborators clarified was due to a change in cluster operations. 

Figure \ref{fig:hourlyTrend} shows hourly aggregates of job counts and total data sizes over a randomly selected week. We do not aggregate at scales below an hour because many jobs take tens of minutes to complete. There is high variation in both the number of jobs and the total data size. 
The number of jobs also shows weak diurnal patterns, peaking at midday and dropping at midnight. To detect the existence of any cycles, we performed Fourier analysis on the hourly job counts over the entire trace. There are visible but weak cycles at the 12-hour, daily, and weekly frequencies, mixed with a high amount of noise. 
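This style of Fourier analysis can be reproduced with a standard FFT. The sketch below is our illustration, run here on a synthetic series with a planted daily cycle rather than on the trace itself; it returns the periods of the strongest cycles:

```python
import numpy as np

def dominant_periods(hourly_counts, top=3):
    """Return the periods (in hours) of the strongest cycles in an hourly series."""
    c = np.asarray(hourly_counts, dtype=float)
    c = c - c.mean()                              # drop the DC component
    spectrum = np.abs(np.fft.rfft(c))
    freqs = np.fft.rfftfreq(len(c), d=1.0)        # in cycles per hour
    strongest = np.argsort(spectrum)[::-1]
    peaks = [f for f in freqs[strongest] if f > 0][:top]
    return [1.0 / f for f in peaks]

# Synthetic four-week series: a daily cycle plus noise.
hours = np.arange(24 * 7 * 4)
rng = np.random.default_rng(0)
series = 100 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, len(hours))
```

On a real trace, weak cycles manifest as spectral peaks only modestly above the noise floor.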
%
%We also investigated whether a simple generative process could fit the job arrival patterns. We randomly sample the distribution of job inter-arrival times, computed over the entire trace, and generate a week-long job sequence. Figure \ref{fig:empiricalVsPoisson} compares the hourly aggregate of this sequence with an empirical sequence. Clearly, this simple process cannot capture the full range of peak to average behavior. 
%
%\begin{figure}
%\begin{center}
%\centering
%%\includegraphics[width=0.98\columnwidth]{hourlyTrend-FB}
%\includegraphics[trim = 0.1cm 15.5cm 14.5cm 0.1cm, clip, width=8cm]{empiricalVsPoisson}
%\vspace{-10pt}
%\caption{\small Comparison of empirical, hourly aggregate job arrival rates over a week against arrival rates generated by a random sampling arrival process. }
%\label{fig:empiricalVsPoisson}
%\end{center}
%\vspace{-10pt}
%\end{figure}


\subsection{Common Jobs Within Each Trace}
\label{subsec:commonJobs}

In this section, we investigate whether we can capture \emph{what the computation is} using a small set of ``common jobs''. We use k-means, a well-known data clustering algorithm~\cite{introToMachineLearning}, to extract such information. We describe each job with many features (dimensions), and input the array of all jobs into k-means. K-means finds the natural clusters of data points, i.e., jobs. We consider jobs in the same cluster as belonging to a single equivalence class, i.e., a single common job. %More details about the mechanics of the k-means algorithm can be found in various statistical machine learning texts, e.g., . 

We describe each job using all 6 features available from the cluster traces: the job's input, shuffle, and output data sizes in bytes, its running time in seconds, and its map and reduce time in task-seconds. We linearly normalize all data dimensions to the range between 0 and 1 to account for the different measurement units of each feature. We increment $k$, the number of clusters, until there is diminishing improvement in cluster quality. We measure cluster quality by \emph{variance explained}, a standard metric computed as the difference between the total variance in all data points and the residual variance between the data points and their assigned cluster centers. 
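For readers who wish to reproduce this analysis, the sketch below implements min-max normalization, Lloyd's k-means, and the variance-explained metric in plain NumPy. It is a simplified stand-in for our actual analysis code; in particular, it uses naive evenly spaced initial centers rather than a k-means++ style initialization:

```python
import numpy as np

def kmeans_variance_explained(jobs, k, iters=50):
    """Normalize features to [0, 1], run k-means, return % variance explained."""
    X = np.asarray(jobs, dtype=float)
    span = X.max(axis=0) - X.min(axis=0)
    X = (X - X.min(axis=0)) / np.where(span == 0, 1, span)
    # Naive initialization: evenly spaced rows as starting centers.
    centers = X[np.linspace(0, len(X) - 1, k, dtype=int)].copy()
    for _ in range(iters):
        # Assign each job to its nearest center, then recompute centers.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    total = ((X - X.mean(axis=0)) ** 2).sum()
    residual = ((X - centers[labels]) ** 2).sum()
    return 100.0 * (1.0 - residual / total)
```

Sweeping $k$ and plotting the returned percentage yields an elbow-style curve from which a suitable $k$ can be read off.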

\begin{figure}[t]
\begin{center}
\centering
%\begin{minipage}[t]{0.49\columnwidth}
%\centering
%\includegraphics[width=0.98\columnwidth]{ClusterESS-FB}
%\end{minipage}
%\begin{minipage}[t]{0.49\columnwidth}
%\centering
%\includegraphics[width=0.98\columnwidth]{ClusterESS-YH}
%\end{minipage}
\includegraphics[trim = 0.1cm 16cm 18cm 0.1cm, clip, width=6cm]{kmeansQuality}
\vspace{-10pt}
\caption{\small Cluster quality: \% variance explained vs. the number of clusters. The marker indicates the number of clusters used for more detailed analysis.}
\label{fig:clusterESS}
\end{center}
\vspace{-20pt}
\end{figure}

Figure \ref{fig:clusterESS} shows the \% variance explained as $k$ increases. Even at small $k$, we start seeing diminishing improvements. Having too many clusters will lead to an unwieldy number of common jobs, even though the variance explained will be higher. We believe a good place to stop is $k=10$ for the FB trace, and $k=8$ for the YH trace, indicated by the markers in Figure \ref{fig:clusterESS}. At these points, the clustering structure explains 70\% (FB) and 80\% (YH) of the total variance, suggesting that a small set of common jobs can indeed cover a large range of per-job behavior in the trace data. 

We can identify the characteristics of these common jobs by looking at the numerical values of the cluster centers. Table \ref{tab:clusters} shows the cluster size and our manually applied labels. We derive the labels from the data ratios and durations at the various stages; e.g., ``aggregate, fast'' jobs have shuffle/input and output/shuffle ratios both $\ll 1$, and relatively short durations compared with the other jobs. 

Both traces have a large cluster of small jobs, and several small clusters of various large jobs. Small jobs dominate the total number of jobs, while large jobs dominate all other dimensions. Thus, for performance metrics that place equal weight on all jobs, the small jobs should be an optimization priority. However, large jobs should be the priority for performance metrics that weigh each job by its size, whether in data, running time, or map and reduce task time.   

Also, the FB and YH traces contain different job types. The FB trace contains many data loading jobs, characterized by output data size $\gg$ input data size, with minimal shuffle data. The YH trace does not contain this job type. Both traces contain some mixture of jobs performing data aggregation (input $>$ output), expansion (input $<$ output), transformation (input $\approx$ output), and summary (input $\gg$ output), with each job type in varying proportions. 

%\begin{table}
%\centering
%\caption{\small Cluster sizes and labels for FB (top) and YH (below). See Appendix \ref{appendix:clusterLabels} for the numeric values of the cluster centers, and an explanation of how we assigned the cluster labels. }
%\vspace{2 mm}
%\small
%\begin{tabular}{r|l} 
%\small
%{\bf \# Jobs} & {\bf Label} \\ \hline
%& \\ \hline
%1081918 & Small jobs \\ 
%37038 & Load data, fast \\
%2070 & Load data, slow \\
%602 & Load data, large \\
%180 & Load data, huge \\ 
%6035 & Aggregate, fast \\
%379 & Aggregate and expand \\
%159 & Expand and aggregate \\
%793 & Data transform \\
%19 & Data summary \\ \hline
%& \\ \hline
%21981 & Small jobs \\
%838 & Aggregate, fast \\
%91 & Expand and aggregate \\
%7 & Transform and expand \\
%35 & Data summary \\
%5 & Data summary, large \\
%1303 & Data transform \\ 
%2 & Data transform, large \\ \hline
%\end{tabular}
%\normalsize
%\label{tab:clusters}
%\vspace{-10pt}
%\end{table}


\begin{table*}[!t]
\centering
\caption{Cluster sizes, medians, and labels for FB (top) and YH (below). Map time and reduce time are in task-seconds; e.g., 2 tasks of 10 seconds each count as 20 task-seconds.}
\vspace{-5pt}
\footnotesize
\begin{tabular}{r|r r r r r r|l} 
{\bf \# Jobs} & {\bf Input} & {\bf Shuffle} & {\bf Output} & {\bf Duration} & {\bf Map time} & {\bf Reduce time} & {\bf Label} \\ \hline \hline
\multicolumn{8}{l}{Facebook trace} \\ \hline
1081918 & 21 KB & 0 & 871 KB & 32 s & 20 & 0 & Small jobs \\ 
37038 & 381 KB & 0 & 1.9 GB & 21 min & 6,079 & 0 & Load data, fast \\
2070 & 10 KB	& 0 & 4.2 GB & 1 hr 50 min & 26,321 & 0 & Load data, slow \\
602 & 405 KB & 0 & 447 GB & 1 hr 10 min & 66,657 & 0 & Load data, large \\
180 & 446 KB & 0 & 1.1 TB & 5 hrs 5 min & 125,662 & 0 & Load data, huge \\ 
6035 & 230 GB & 8.8 GB & 491 MB & 15 min & 104,338 & 66,760 & Aggregate, fast \\
379 & 1.9 TB & 502 MB & 2.6 GB & 30 min & 348,942 & 76,736 & Aggregate and expand \\
159 & 418 GB & 2.5 TB & 45 GB & 1 hr 25 min & 1,076,089 & 974,395 & Expand and aggregate \\
793 & 255 GB & 788 GB & 1.6 GB & 35 min & 384,562 & 338,050 & Data transform \\
19 & 7.6 TB & 51 GB & 104 KB & 55 min & 4,843,452 & 853,911 & Data summary \\ \hline \hline
\multicolumn{8}{l}{Yahoo trace} \\ \hline
21981 & 174 MB & 73 MB & 6 MB & 1 min & 412 & 740 & Small jobs \\
838 & 568 GB & 76 GB & 3.9 GB & 35 min & 270,376 & 589,385 & Aggregate, fast \\
91 & 206 GB & 1.5 TB & 133 MB & 40 min & 983,998 & 1,425,941 & Expand and aggregate \\
7 & 806 GB & 235 GB & 10 TB & 2 hrs 35 min & 257,567 & 979,181 & Transform and expand \\
35 & 4.9 TB & 78 GB & 775 MB & 3 hrs 45 min & 4,481,926 & 1,663,358 & Data summary \\ 
5 & 31 TB & 937 GB & 475 MB & 8 hrs 35 min & 33,606,055 & 31,884,004 & Data summary, large \\
1303 & 36 GB & 15 GB & 4.0 GB & 1 hr & 15,021 & 13,614 & Data transform \\ 
2 & 5.5 TB & 10 TB & 2.5 TB & 4 hrs 40 min & 7,729,409 & 8,305,880 & Data transform, large \\ \hline
\end{tabular}
\normalsize
\label{tab:clusters}
\vspace{-10pt}
\end{table*}

%\textbf{Implications for workload model:} A small group of ``common jobs'' can cover a majority of variance in job characteristics within a workload. A good workload model must capture the characteristics and proportions of these jobs. We can run a data clustering algorithm to identify the empirical proportions and multi-dimensional characteristics of each job type in a particular workload,

\subsection{To Describe a Workload}
\label{subsec:workloadVocab}

Our comparison shows that the two traces capture different MapReduce use cases. Thus, we need a good way to describe each workload and compare it against other workloads. 

We believe a good description should focus on semantics at the MapReduce abstraction level: the data on which the jobs run, the number of jobs in the workload and their arrival patterns, and the computation performed by the jobs. A workload description at this level will persist despite changes in the underlying physical hardware (e.g., CPU/memory/network) or any overlaid functionality extensions (e.g., Hive or Pig). 

\vspace{3pt}
\noindent\emph{Data on which the jobs run:} 

The data size and data ratios at the input/shuffle/output stages provide a first-order description of the data. We need to describe both the statistical distribution of data sizes and the per-job data ratios at each stage. Statistical techniques like k-means can extract the dependencies between the various data dimensions. The data format should also be captured: even though our traces contain no such information, data formats help separate workloads that have the same data sizes and ratios but different data content. 

\vspace{3pt}
\noindent\emph{Number of jobs and their arrival patterns:} 

The full list of job submissions and submission times provides the most specific description. Since we cannot enumerate submission times for every job, we should summarize submission patterns using averages, peak-to-average ratios, diurnal patterns, and similar statistics. 
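As a minimal example of such summary statistics, the sketch below (our illustration, applied to a hypothetical hourly series) computes the mean submission rate, the peak-to-average ratio, and an average diurnal profile:

```python
import numpy as np

def arrival_summary(hourly_counts):
    """Mean rate, peak-to-average ratio, and mean count per hour of day."""
    c = np.asarray(hourly_counts, dtype=float)
    mean_rate = c.mean()
    peak_to_avg = c.max() / mean_rate
    whole_days = len(c) - len(c) % 24        # assumes the trace starts at hour 0
    diurnal = c[:whole_days].reshape(-1, 24).mean(axis=0)
    return mean_rate, peak_to_avg, diurnal

# Hypothetical week: a steady 10 jobs/hour with one midday burst.
counts = [10] * (24 * 7)
counts[12] = 50
```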

\vspace{3pt}
\noindent\emph{Computation performed by the jobs:} 

The actual code for map and reduce tasks represents the most accurate description of the computation done. However, the code is often unavailable for logistical and confidentiality reasons. We believe that a good alternative is to identify classes of common jobs in the workload, using data characteristics, job durations, and task run times, as in our k-means analysis. When the code is available, it would be helpful to add some semantic description, such as text parsing, reversing indices, image processing, anomaly detection, and others. We expect this information would be available within organizations directly managing the MapReduce clusters. 

The above facilitates a qualitative description. We introduce a quantitative description in Section \ref{sec:synthesis} when we describe how to synthesize a representative workload. 

%Combining the insights from previous subsections, we believe the following is a good workload model. The model consists of cluster traces that contain a list of jobs, their submission times, a description of their input data set, and a description of the map and reduce functions of each job. This model has several noteworthy properties. 
%
%\textbf{Completely empirical model:} The cluster trace \emph{is} the model. This is a very simplistic approach. It avoids the difficulties of fitting different analytical distributions for data characteristic of different workloads (Section \ref{subsec:dataChar}), or parameterizing analytical job arrival models with empirically observed arrival rates (Section \ref{subsec:timeChar}). If the traces cover diurnal cycles (Section \ref{subsec:timeChar}), then the model automatically captures the diurnal patterns. The model also caputures the correct mix of job types, and for each job type, the input data properties and properties of the map and reduce functions (Section \ref{subsec:commonJobs}). The model relies on good monitoring capabilities to obtain traces of representative behavior, and workload synthesis tools that can operate on purely empirical models. We assume the former, and describe the latter in the next section of the paper. 
%
%\textbf{Can utilize partial information:} Even if we have access to production data and code, it would be difficult to get accurate ``descriptions of input data set'' and ``descriptions of map and reduce functions'', because both could continuously change. Thus, we must make do with lists of jobs with their corresponding job type, proxy data set, and proxy map/reduce functions for each job type. Sometimes, we have even less information, and our monitoring system records only the data size at the input, shuffle, and output stages. We are then compelled to use generic test data, and proxy map/reduce functions that preserve the data ratios but perform no other computation. Since our model is completely empirical, having partial information only means that less information is passed into the workload synthesis and replay stages. We show later in the paper that using proxy data sets and map/reduce functions can alter performance behavior considerably, but a careful interpretation of the workload replay results would still yield useful insights. 
%
%\textbf{Independent of system characteristics and behavior:} Our model specifies the workload without invoking descriptions of system characteristics or behavior. This approach gives us system-independent workloads that allow MapReduce developers to optimize hardware, configurations, schedulers, and other features using different workloads. Our model also leaves as potential optimization targets all behavior characteristics, such as running time, CPU consumption, or data locality. If we include system characteristics in the workload description, we would restrict the workload to only systems with identical characteristics. We also believe that system behavior should not be a part of the workload description at all - running time, CPU consumption, data locality etc. would change from system to system. A workload model that describes system behavior would need to be recalibrated upon any change in the input data, map/reduce function code, or the underlying hardware/software system. 
%
%This empirical model of traces of jobs, submission times, proxy data set, and proxy map/reduce functions forms the input to our synthetic workload generator, described next. 

%
%
%It is not surprising that existing MapReduce micro-benchmarks cover only some subsets of some workloads. A comparison between Table \ref{table:gridmix1and2} and Figure \ref{fig:workloadStats} illustrates the point. We can improve micro-benchmarks by using some method to identify common jobs, such as the k-means analysis we presented. Such analysis will allow us to include more representative jobs and more realistic weights and repeats for each job. 
%
%Our workload comparison also shows that MapReduce at different organizations exhibit completely different behaviors. Until we analyze more workloads, it remains unknown whether different MapReduce workloads share enough common features to justify a single micro-benchmark. Before such a general micro-benchmark suite can be developed, our analysis methods in this section can help organizations build per-workload micro-benchmarks. 
%
%Our analysis of job behavior across time illustrate another shortcoming of existing micro-benchmark structures. Micro-benchmarks today either measure one job at once, or launch a suite of jobs in rapid succession. Such mechanisms are oblivious to the large range of time varying behavior. Thus, to evaluate system performance for the ``average'' and ``common'' case, and to stress the system with realistic bursts and cycles, we need an evaluation methodology at the multi-job workload level. We endeavor to develop such a methodlogy in the next section. 







