\section{What's New for Benchmarking Big Data}

Prior work on computer performance methodology identified the following properties of a good benchmark~\cite{JimGrayBenchmarkCriteria,ferrariBook}:

\vspace{2pt}
\noindent \emph{Representative} 

The benchmark should measure performance using metrics that are relevant to real-life application domains, and under conditions that reflect real-life computational needs. 

\vspace{2pt}
\noindent \emph{Portable} 

The benchmark should be portable to alternative kinds of systems that can service the computational needs of the same application domain. 

\vspace{2pt}
\noindent \emph{Scalable} 

The benchmark should be able to measure performance for both large and small systems, and for both large and small problem sizes. What constitutes a ``large'' or ``small'' system, and a ``large'' or ``small'' problem, is defined by the specific application domain to be benchmarked. 

\vspace{2pt}
\noindent \emph{Simple} 

The benchmark should be conceptually understandable. What counts as ``simple'' depends on the application domains to be benchmarked. The benchmark should abstract away details that do not affect performance and that merely represent case-by-case system configuration or administration choices. 

\vspace{4pt}
\noindent Given these criteria of a good benchmark, there are at least four properties of big data systems that make it challenging to develop big data benchmarks. 

\vspace{2pt}
\noindent \emph{System diversity} 

Big data systems store and manipulate many kinds of data. Such diversity translates to significant, and sometimes mutually exclusive, variations in system design. A design optimized for one application domain may be suboptimal, or even harmful, for another. Consequently, it is challenging for a big data benchmark to be representative and portable. The benchmark needs to replicate realistic conditions across a range of application domains, and use metrics that translate across potentially divergent computational needs. 

\vspace{2pt}
\noindent \emph{Rapid data evolution} 

Big data computational needs constantly and rapidly evolve. Such rapid change reflects the innovations in business, science, and consumer behavior facilitated by knowledge extracted from big data. Consequently, changes in the underlying data likely outpace the ability to develop schemas for the data, or to gain design intuition about the systems that manipulate the data. It becomes challenging to ensure that the benchmark tracks such changes in both the data sources and the resulting systems. 

\vspace{2pt}
\noindent \emph{System and data scale} 

Big data systems often involve multi-layered and distributed components, while big data itself often involves multiple sources of different formats. This translates to multiple ways for the system to scale at multiple layers, and for the data to scale across multiple sources and formats. Consequently, it becomes challenging for a big data benchmark to correctly scale the benchmarked problem, and to correctly measure how the system performs in solving that problem. 

\vspace{2pt}
\noindent \emph{System complexity} 

The multi-layered and distributed nature of big data systems also makes it challenging for a big data benchmark to be simple. Any simplification of a big data system is likely to remain complex in an absolute sense. Further, any simplification needs objective, empirical measurements to verify that system performance is not affected. 

Of the four benchmark properties, we argue that the most important are ``representative'' and ``portable''. The rest of the paper introduces benchmark principles that help achieve these two properties. ``Scalable'' remains an important challenge to be addressed by future work. ``Simple'' is likely hard to achieve in an absolute sense --- for complex systems, ``simple'' is inherently in conflict with ``representative''. 

