
%\vspace{-4pt}
\section{Towards MapReduce Workload Suites}
\label{sec:conclude}
%\vspace{-1pt}

We must go beyond one-size-fits-all benchmarks for MapReduce performance evaluations. Our comparison of two production traces reveals great differences between MapReduce use cases; no single benchmark can capture such diverse behavior. We advocate performance evaluations using representative workloads, and we have presented a framework to generate and execute such workloads. We demonstrated that running realistic workloads gives cluster operators new ways to identify system bottlenecks and evaluate system configurations. 

We believe that representative workloads can assist many recent MapReduce studies. For example, studies on how to predict MapReduce job running times~\cite{progressPrediction,smdb2010} can evaluate their mechanisms on realistic job mixes. Studies on MapReduce energy efficiency~\cite{EEHDFS,EEMRvldb} can quantify energy savings under realistic workload fluctuations. Various efforts to develop effective MapReduce workload management schemes~\cite{mantri,mesos} can generalize their findings across different realistic workloads. In short, realistic workloads allow MapReduce researchers to better understand the strengths and limitations of their proposed optimizations. 

Our work complements efforts to develop MapReduce simulators~\cite{mumak, mrperf}. Realistic workloads allow cluster operators to run simulations with realistic inputs, amplifying the benefit of such simulators. Our approach differs from that of benchmarks that focus on IO byte stream properties~\cite{YCSB}. We focus on MapReduce-level semantics, such as input/shuffle/output data sizes, because doing so yields more immediate insights into MapReduce design. 

Many open problems remain for future work. One simplification we made is to assume that replicating only the data characteristics suffices when executing a workload. This simplification is acceptable here because we know through direct conversations that the Facebook production cluster runs many Extract-Transform-Load (ETL) jobs, whose behavior is dominated by data movement. Future work should go beyond this simplification. Ideally, we would construct common jobs with more complete semantics than just the data ratios.  
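To make the data-ratio simplification concrete, the sketch below shows one way a synthetic job could be described purely by its data characteristics: the shuffle and output sizes are derived from the input size via per-job ratios, with no user code or actual data involved. This is a hypothetical illustration; the function and field names are our own and are not taken from our tools or any trace format.

```python
# Hypothetical sketch: describe a synthetic job by data characteristics only.
# shuffle_ratio = shuffle bytes / input bytes; output_ratio = output bytes /
# shuffle bytes. Field names are illustrative, not an actual schema.

def synthesize_job(input_bytes, shuffle_ratio, output_ratio):
    """Replicate only data movement: derive shuffle and output sizes
    from the input size and the observed per-job ratios."""
    shuffle_bytes = int(input_bytes * shuffle_ratio)
    output_bytes = int(shuffle_bytes * output_ratio)
    return {
        "input_bytes": input_bytes,
        "shuffle_bytes": shuffle_bytes,
        "output_bytes": output_bytes,
    }

# An ETL-like job: large input, moderate shuffle, small aggregated output.
job = synthesize_job(input_bytes=10 * 2**30,  # 10 GB read
                     shuffle_ratio=0.5,       # half the input is shuffled
                     output_ratio=0.1)        # output is 10% of the shuffle
```

A workload executor could then replay a whole trace of such records, submitting one data-movement job per record at the recorded inter-job arrival times.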

Looking forward, we expect to eventually understand the full taxonomy of MapReduce use cases. At that point, we can move beyond the highly targeted, one-per-use-case workload suites we proposed here. Performance evaluations would then move towards standardized workload suites, which serve as a richer kind of ``benchmark''. 

The first step towards that goal is to understand more MapReduce workloads. To that end, we invite all MapReduce cluster operators to publish their production workloads. We hope our workload description vocabulary can provide an initial format for releasing traces. We also hope that our workload synthesis tools can assure MapReduce operators that they can release representative workloads without compromising confidential information about their production clusters. 
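To illustrate why such a release need not compromise confidentiality, the sketch below shows a hypothetical per-job trace record in a data-characteristics vocabulary: it carries only timing and data sizes, with no user code, file paths, or data contents. The field names and JSON-lines layout are our own assumptions, not a published trace schema.

```python
# Illustrative per-job trace record for a released workload. Only data
# characteristics appear -- no job names, paths, code, or data contents --
# so nothing confidential about the production cluster is exposed.
# Field names are hypothetical.
import json

record = {
    "submit_time_s": 3600,         # seconds since the start of the trace
    "input_bytes": 1_073_741_824,  # 1 GB read from the cluster
    "shuffle_bytes": 536_870_912,  # 512 MB moved map -> reduce
    "output_bytes": 1_048_576,     # 1 MB written back
}
line = json.dumps(record)  # one JSON record per job, one job per line
```

An entire trace would then be a plain text file of such lines, which our synthesis tools could sample to produce a representative, shareable workload.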

