
\section{Benchmark Lessons from Hadoop MapReduce} 

The field of big data encompasses many types of systems; two important examples are relational databases and MapReduce. This section serves as the counterpart to Section~\ref{sec:TPC-C}. It discusses the limited state of empirical knowledge about how organizations use big data, which hinders the identification of representative functions of abstraction (Section~\ref{subsec:MapReduceEmpirical}). Nevertheless, the empirical data hints at some application domains, each of which is subject to its own functional workload model (Section~\ref{subsec:MapReduceAppDomains}). Further, we argue that the effort to build a big data benchmark should go beyond MapReduce-specific benchmarks, some of which are deficient even as MapReduce-only benchmarks (Section~\ref{subsec:MapReduceBenchmarkCritique}). 

\subsection{Towards functions of abstraction for MapReduce}
\label{subsec:MapReduceEmpirical}

The widespread deployment of MapReduce is a relatively recent phenomenon compared with the established nature of systems that fall under TPC-C. As such, only within the last year did researchers undertake a systematic comparison of MapReduce deployments across multiple industrial sectors~\cite{HadoopWorkloadsVLDB2012}. The latest empirical insights indicate that the effort to extract Hadoop MapReduce functions of abstraction remains a work in progress. Because functional workload models capture mixes of functions of abstraction, the lack of the latter also implies the lack of the former. Two bottlenecks remain: more system deployments need to be surveyed, and information regarding the functional computation goals has yet to be collected. 

The limited nature of existing empirical data reflects the fact that MapReduce has become accepted for business-critical computations only recently. The data in~\cite{HadoopWorkloadsVLDB2012}, while unprecedented for MapReduce, is limited to seven workloads, far from the breadth of the OLTP survey that preceded TPC-C. 
%Had the seven workloads in~\cite{HadoopWorkloadsVLDB2012} been similar to one another, perhaps one could develop some confidence that all MapReduce system deployments would be similar. 
A key result from~\cite{HadoopWorkloadsVLDB2012} is the diversity of observed behavior. This result indicates that we should survey more system deployments to systematically understand both common and outlier behavior. Even if functions of abstraction are extracted from the current, limited survey, there is no guarantee that these functions of abstraction would be representative of a majority of MapReduce deployments. 

The lack of direct information regarding functional computation goals stems from the fact that current logging tools in the Apache Hadoop implementation of MapReduce collect only system-level information. Specifically, the analysis in~\cite{HadoopWorkloadsVLDB2012} identified common MapReduce jobs using abstractions that are inherently tied to the map and reduce computational paradigm (\ie input, shuffle, and output data sizes, job durations, and map and reduce task times). While such a systems view has already led to some MapReduce-specific performance tools~\cite{SWIM}, it is insufficient for constructing functions of abstraction that encompass both MapReduce and other systems. 
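A minimal sketch can illustrate why the systems-level view says nothing about functional goals. The trace format below (job identifier plus input, shuffle, and output byte counts) is a hypothetical stand-in for Hadoop job-history data, assumed purely for illustration:

```python
# Sketch: grouping MapReduce jobs by systems-level metrics only.
# The trace fields are hypothetical placeholders for job-history data;
# note that nothing here reveals WHAT each job computed.

def summarize_jobs(trace):
    """Bin jobs by the order of magnitude of total bytes moved --
    a purely systems-level grouping with no functional information."""
    bins = {}
    for job in trace:
        moved = job["input_bytes"] + job["shuffle_bytes"] + job["output_bytes"]
        key = len(str(moved))  # number of digits ~ order of magnitude
        bins.setdefault(key, []).append(job["job_id"])
    return bins

trace = [
    {"job_id": "j1", "input_bytes": 10**9, "shuffle_bytes": 10**8, "output_bytes": 10**7},
    {"job_id": "j2", "input_bytes": 10**4, "shuffle_bytes": 0, "output_bytes": 10**3},
]
print(summarize_jobs(trace))
```

Two jobs land in different size bins, yet both could be anything from a log filter to a machine-learning step; the functional computation goal is simply not recorded.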

A good starting point to define functions of abstraction within MapReduce deployments would be to capture the data query or workflow text at Hadoop extensions such as Hive~\cite{Hive}, Pig~\cite{Pig}, HBase~\cite{HBase}, Oozie~\cite{Oozie}, or Sqoop~\cite{Sqoop}. The hope is that the analysis of a large collection of such query or workflow texts would mirror the empirical survey that led to the TPC-C functions of abstraction. A complementary effort is to systematically collect the experiences of human data scientists and big data systems administrators. A collection of such first-hand experiences should offer insights into the common big data business goals and the ensuing computational needs. The emergence of enterprise MapReduce vendors with a broad customer base helps expedite such efforts.
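The kind of analysis such captured query texts would enable can be sketched as follows. The sample queries and the crude keyword matching are hypothetical; an actual survey would parse the Hive or Pig abstract syntax trees rather than scan raw text:

```python
# Sketch: tallying candidate functional operations from captured query
# texts. Queries and matching rules are illustrative assumptions only.

def tally_operations(queries):
    """Count coarse functional operations (join, aggregation, filter)
    across a collection of captured HiveQL-like query strings."""
    ops = {"join": 0, "group": 0, "filter": 0}
    for q in queries:
        text = q.lower()
        if "join" in text:
            ops["join"] += 1
        if "group by" in text:
            ops["group"] += 1
        if "where" in text:
            ops["filter"] += 1
    return ops

queries = [
    "SELECT a.id FROM a JOIN b ON a.id = b.id WHERE a.ts > 100",
    "SELECT country, COUNT(*) FROM visits GROUP BY country",
]
print(tally_operations(queries))
```

Run over a large corpus, such tallies would begin to resemble the operation-frequency evidence behind the TPC-C functions of abstraction.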


\subsection{Speculation on application domains within MapReduce}
\label{subsec:MapReduceAppDomains}

The data in~\cite{HadoopWorkloadsVLDB2012} allows us to speculate on the big data application domains that are addressed by the MapReduce deployments surveyed, notwithstanding the limits outlined in Section~\ref{subsec:MapReduceEmpirical}. In the following, we describe the characteristics of these application domains.

A leading application domain is \emph{flexible latency analytics}, for which MapReduce was originally designed~\cite{MapReduce}. Flexible latency analytics is indicated by the presence of some jobs with input and output data sets that are orders of magnitude larger than those of other jobs, up to the ``full'' data set. This application domain has previously been called ``batch analytics''; however, as with other application domains such as decision support, the batch nature is due to the limited capabilities of early systems. Low latency is desirable but not yet essential, hence ``flexible latency''. The data in~\cite{HadoopWorkloadsVLDB2012} indicates that different deployments perform vastly different kinds of analytics, suggesting that this application domain likely involves functions of abstraction with a wide range of characteristics. 

Another application domain is \emph{interactive analytics}. Evidence suggesting interactive analytics includes (1) diurnal workload patterns, identified by visual inspection, and (2) the presence across all workloads of frameworks such as Hive and Pig, one of whose design goals is ease of use by human analysts familiar with SQL. The presence of this application domain is confirmed by human data scientists and systems administrators~\cite{conversationFacebook}. Low computational latency would be a major requirement. This application domain is likely broader than online analytical processing (OLAP), since the analytics typically involve unstructured data, and some analyses are performed specifically to explore and identify possible data schema. The functional workload model is likely to contain a dynamic mix of functions of abstraction, with a large amount of noise and burstiness overlaid on a diurnal pattern. 
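The diurnal-pattern evidence mentioned above can be checked more systematically than by visual inspection, for instance by binning job submissions into hours of the day. The timestamps below are synthetic, assumed only to illustrate the computation:

```python
# Sketch: detecting a diurnal workload pattern from job-submission
# timestamps (seconds since midnight of day 0). Data is synthetic.

def hourly_histogram(submit_times):
    """Fold submission times into a 24-bin hour-of-day histogram;
    pronounced working-hours peaks suggest interactive, human-driven use."""
    hist = [0] * 24
    for t in submit_times:
        hist[(t // 3600) % 24] += 1
    return hist

# Synthetic workload: bursts at 10:00 and 15:00, repeated on a second day.
times = [10 * 3600 + k for k in range(5)] + [15 * 3600 + k for k in range(3)]
times += [24 * 3600 + t for t in times]
hist = hourly_histogram(times)
print(hist[10], hist[15])  # the two working-hours bins dominate
```

A flat histogram would instead point toward automated, machine-driven load.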

Yet another application domain is \emph{semi-streaming analytics}. Streaming analytics describes continuous computation processes, which often update time-aggregation metrics. For MapReduce, a common substitute for truly streaming analytics is to set up automated jobs that regularly operate on recent data, \eg computing click-rate statistics for a social network with a job every five minutes. Since ``recent'' data is intentionally smaller than ``historical'' data, we expect functions of abstraction for this application domain to run on relatively small and uniformly sized subsets of data. The functional workload model is likely to involve a steady mix of these functions of abstraction. 
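The recurring-job pattern described above can be sketched as a window selection over time-partitioned data. The partition names, layout, and five-minute window are illustrative assumptions rather than any real deployment's configuration:

```python
# Sketch of semi-streaming analytics: a recurring job reads only the
# most recent window of time-partitioned data. Names and the window
# size are hypothetical.

WINDOW = 300  # seconds; "recent" data covers the last five minutes

def recent_partitions(partitions, now):
    """Select the partitions a recurring job would read: those whose
    start time falls inside the most recent five-minute window."""
    return [p for p in partitions if 0 <= now - p["start"] < WINDOW]

partitions = [
    {"name": "clicks_0000", "start": 0},
    {"name": "clicks_0300", "start": 300},
    {"name": "clicks_0600", "start": 600},
]
print([p["name"] for p in recent_partitions(partitions, now=650)])
```

Each invocation touches one small, uniformly sized slice of data, which matches the small, uniform job sizes expected of this application domain.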

Across the seven deployments surveyed in~\cite{HadoopWorkloadsVLDB2012}, all three application domains appear in every deployment. While interactive analytics carries the most weight in terms of the number of jobs, all three are good candidates for inclusion in a big data benchmark, provided that they are confirmed by either empirical or human surveys of additional big data deployments.


\subsection{MapReduce-specific benchmarks and their shortcomings}
\label{subsec:MapReduceBenchmarkCritique}

The success of MapReduce greatly helped raise the profile of big data. While the application domains currently dominated by MapReduce should be part of big data benchmarks, existing MapReduce benchmarks are not representative of all big data systems. The following examines some existing MapReduce benchmarks, some of which are deficient even as MapReduce-only benchmarks. 

Some tools measure stand-alone MapReduce jobs~\cite{MapReduceDBMSComparison,Hibench,Gridmix,Terasort}. These tools lack the concept of a multi-job \emph{workload}. They are inherently limited to measuring only a narrow sliver of the full range of cluster behavior, as a real-life cluster hardly ever runs one job at a time or just a handful of specific jobs. Such tools are insufficient for quantifying even MapReduce performance alone. 

Some tools do have a workload perspective, but assume a physical view~\cite{Gridmix3}. They seek to reproduce the exact breakdown of jobs into tasks, the exact placement of tasks on machines, and the exact scheduling of task execution. This view is physical because it precludes comparing two MapReduce systems that, for example, use different algorithms for translating jobs into tasks, placing tasks on machines, or scheduling task execution; it even precludes comparing the same MapReduce software running on two clusters of different sizes. Further, the attempt to reproduce a large amount of execution detail introduces scalability issues for both the tracing tool and the trace replay tool~\cite{conversationFacebook,intelWBDB2012}. Such a physical view is also insufficient for quantifying MapReduce performance, let alone the rest of big data. 

Some tools have a systems view of MapReduce workloads~\cite{SWIM}. They are sufficient for measuring MapReduce performance, and are already used by Cloudera and its partners. However, because the systems view is specific to MapReduce, these tools remain insufficient for a more general big data benchmark. 

Hence, identifying the functions of abstraction for MapReduce application domains is an important step for building a part of a general big data benchmark.





%The following discussion uses the same set of large-scale Hadoop cluster traces, since they are the only one of its kind available. Table~\ref{table:summaryTraces} summarize the traces, which include Facebook, a social-media company and leading Hadoop user, and e-commerce, media, telecommunications, and retail customers of Cloudera, a leading vendor of enterprise Apache Hadoop. 
%
%\begin{table}[t]
%\centering
%\scriptsize
%\begin{tabular}{r r r r r r} 
%\hline
%{\bf Trace} & \textbf{Machines} & {\bf Length} & {\bf Date} & {\bf Jobs} & {\bf Bytes }\\ 
%            &                   &              &            & 		      & {\bf moved} \\ \hline 
%\texttt{CC-a}    & $<$100  & 1 month    & 2011                 & 5759     &	80 TB		\\
%\texttt{CC-b}    & 300     & 9 days     & 2011                 & 22974    &	600 TB		\\
%\texttt{CC-c}    & 700     & 1 month    & 2011                 & 21030    &	18 PB		\\
%\texttt{CC-d}    & 400-500 & 2+ months  & 2011                 & 13283    &	8 PB		\\
%\texttt{CC-e}    & 100     & 9 days     & 2011                 & 10790    &	590 TB		\\
%\texttt{FB-2009} &  600    & 6 months   & 2009                 & 1129193  &	9.4 PB		\\ 
%\texttt{FB-2010} & 3000    & 1.5 months & 2010                 & 1169184  &	1.5 EB \\ \hline
%Total            & $>$5000 & $\approx$ 1 year & -              & 2372213  & 1.6 EB \\ 
%\hline 
%\end{tabular}
%\normalsize
%\vspace{5pt}
%\caption{\small Summary of MapReduce traces. CC is short for ``Cloudera Customer''. FB is short for ``Facebook''. Bytes moved is input + shuffle + output data sizes for all jobs.}
%\label{table:summaryTraces}
%\end{table}
%
%\vspace{8pt}
%\noindent To summarize the empirical analysis in~\cite{HadoopWorkloadsVLDB2012}, key insights are: 
%
%\begin{itemize}
%\item There is a new class of MapReduce application domains for interactive, semi-streaming analysis that differs considerably from the original MapReduce domain of purely batch computations. 
%\item There is a wide range of behavior within this application domain, such that we must exercise caution in regarding any aspect of the overall dynamics as ``typical''. 
%\item Some prior assumptions about MapReduce such as uniform data access, regular diurnal patterns, and prevalence of large jobs no longer hold.
%\item The application domains constantly evolve, such that even for the same cluster, design insights need to be periodically refreshed. 
%\end{itemize}
%
%\vspace{8pt}
%\noindent Some more detailed quantitative observations include:
%
%\begin{itemize}
%\item Data access patterns: Skew in data accesses range between an 80-1 and 80-8 rule. Temporal locality exists, and 80\% of data re-accesses occur on the range of minutes to hours. 
%\item Load variation over time: The cluster load is bursty and unpredictable. Peak-to-median ratio in cluster load range from 9:1 to 260:1. 
%\item Common job types: All workloads contain a range of job types. Over 90\% of all jobs are characterized by 10s of KB to GB of data, a range of data patterns between the map and reduce stages, and durations of 10s of seconds to a few minutes. Other job types appear with a wide range of frequencies. 
%\item SQL-like programming frameworks: The cluster load that comes from Hive, Pig, and other such frameworks is up to 80\% and at least 20\%. Additional tracing at the Hive, Pig, and HBase levels is required. 
%\end{itemize}
%
%\noindent Such empirical insights form the basis for deriving the Hadoop MapReduce functions of abstraction and functional workload model. 

