\section{Recommendations for Big Data Benchmark}

The concepts of functions of abstraction, functional workload model, and application domains help us develop a vision for a possible big data benchmark. This vision is captured in Figure~\ref{fig:benchmarkVision}. 

\begin{figure}[t]
\centering
\includegraphics[trim = 0cm 0.7cm 1cm 0cm, clip, width=12cm]{figs/BenchmarkVision}
\caption{A vision for a big data benchmark.}
\label{fig:benchmarkVision}
\end{figure}

Big data encompasses many application domains; hence a big data benchmark should also encompass multiple application domains. OLTP is one such domain. Other possible domains, pending confirmation by further surveys, include flexible latency analytics, interactive analytics, and semi-streaming analytics; there may be additional application domains yet to be identified. The criteria for including an application domain are that (1) a trace-based or human-based survey indicates that the domain is important to the big data needs of a wide range of enterprises, and (2) sufficient empirical traces are available to allow functions of abstraction and functional workload models to be extracted. 

Within each application domain, there are multiple functions of abstraction, extracted from empirical traces and defined in the fashion outlined in Section~\ref{subsec:functionsOfAbstraction}. While each specific system deployment may include a different set of functions of abstraction, the benchmark should include those functions of abstraction that are common across all system deployments within the application domain. What counts as ``common'' needs to be supported by empirical traces. 

There is also a representative functional workload model, extracted from empirical traces and defined in the fashion outlined in Section~\ref{subsec:functionalWorkloadModel}. Each specific system deployment or application may have a different functional workload model, \ie a different mix of functions of abstraction, organization of data sets, and workload arrival patterns. The benchmark should include a single representative functional workload model for each application domain, \ie a functional workload model that is not specific to any one application, is greatly simplified, and yet remains typical of the entire application domain. The details of this representative functional workload model need to be supported by empirical traces.
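To illustrate what such a representative functional workload model might contain, the sketch below encodes one as a small data structure. The domain name, function names, mix fractions, data set size, and arrival rate are all hypothetical placeholders for illustration, not values drawn from empirical traces:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class FunctionOfAbstraction:
    """One abstracted operation within a domain (names here are hypothetical)."""
    name: str
    mix_fraction: float  # fraction of the workload made up of this function


@dataclass
class FunctionalWorkloadModel:
    """A single representative workload model for one application domain."""
    domain: str
    functions: List[FunctionOfAbstraction]  # mix fractions should sum to 1
    dataset_size_gb: float       # representative organization/size of data sets
    arrival_rate_per_sec: float  # representative workload arrival pattern

    def mix_is_valid(self) -> bool:
        """Check that the function mix fractions form a complete mix."""
        return abs(sum(f.mix_fraction for f in self.functions) - 1.0) < 1e-9


# A hypothetical model for an interactive-analytics domain.
model = FunctionalWorkloadModel(
    domain="interactive analytics",
    functions=[
        FunctionOfAbstraction("scan", 0.5),
        FunctionOfAbstraction("aggregate", 0.3),
        FunctionOfAbstraction("join", 0.2),
    ],
    dataset_size_gb=1000.0,
    arrival_rate_per_sec=5.0,
)
assert model.mix_is_valid()
```

In an actual benchmark, each field of such a structure would be populated from the empirical traces discussed above rather than chosen by hand.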

Enterprises with different business needs would prioritize different application domains. Hence, each application domain would be given a different numerical weight reflecting the relative amount of computation it generates. As typical enterprises can be classified into a small number of categories, we speculate that their big data needs can likewise be classified. Hence, there would be several different sets of numerical weights, each corresponding to the big data needs of a particular type of enterprise. The actual numerical weights in each such set need to be supported by empirical traces across multiple application domains at each type of enterprise.
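The weighting scheme above can be sketched as a simple composite score per enterprise type. All domain names, per-domain scores, and weight values below are hypothetical, and the weighted arithmetic mean is only one possible aggregation; an actual benchmark might instead aggregate with a geometric mean, much as SPEC CPU does for per-benchmark ratios:

```python
# Hypothetical per-domain benchmark scores (higher is better).
domain_scores = {
    "OLTP": 120.0,
    "interactive analytics": 80.0,
    "semi-streaming analytics": 60.0,
}

# One weight set per enterprise type; weights in each set sum to 1.
weight_sets = {
    "retail": {"OLTP": 0.6, "interactive analytics": 0.3,
               "semi-streaming analytics": 0.1},
    "web":    {"OLTP": 0.2, "interactive analytics": 0.4,
               "semi-streaming analytics": 0.4},
}


def composite_score(scores, weights):
    """Weighted arithmetic mean of per-domain scores for one enterprise type."""
    return sum(weights[d] * scores[d] for d in weights)


for enterprise, weights in weight_sets.items():
    print(enterprise, composite_score(domain_scores, weights))
```

Fixing the weight sets to a small published collection, as proposed above, is what keeps results comparable: two vendors reporting under the same enterprise type use identical weights.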

The traces and surveys used to support the selection of functional workload models and numerical weights should be made public. Doing so allows the benchmark to establish scientific credibility, defend against charges that it is not representative of real-life conditions, and align with the business needs of enterprises seeking to derive value from big data. 

Further, the hierarchical structure allows coverage of diverse application domains and of diverse business-driven priorities among those domains. At the same time, comparability between results is enhanced by limiting the weights of application domains to a fixed small set and, within each application domain, limiting the functional workload model to a single representative one. 

It should also be noted that TPC-C, or its successor TPC-E, could serve as the functional workload model for the OLTP application domain of a big data benchmark.

One issue left unaddressed by this framework is how to balance the need for a comparable benchmark against the need for the benchmark to keep up with evolution in how enterprises use big data. The choice within the TPC community has been to keep the benchmarks static. This choice has successfully allowed TPC-style benchmarks to remain comparable across several decades, but has also drawn criticism that the benchmarks are stale. A different choice, made by the SPEC community, is to evolve its benchmarks regularly. This ensures that the benchmarks reflect the latest ways in which systems are operated, but limits comparability to results obtained with the same version of each benchmark. The framework we propose allows for either choice. Big data itself is likely to evolve over time; the pace of this evolution remains to be seen. 

To summarize, in this paper we introduce several concepts essential for building a big data benchmark --- functions of abstraction, functional workload model, and application domains. A big data benchmark built around these concepts can be representative, portable, scalable, and relatively simple. The next step in the process of building a big data benchmark would be to survey additional system deployments to identify the actual application domains worthy of inclusion in the benchmark, and within each application domain, identify the representative functional workload model and its functions of abstraction. 

