\section{Introduction}

Big data is one of the fastest-growing fields in data processing today. It cuts across numerous industrial and public service areas, from web commerce to traditional retail, from genomics to geospatial analysis, from social networks to interactive media, and from power grid management to traffic control.
%It blurs the lines between production systems and academic research. 
New solutions appear at a rapid pace, and the need emerges for an objective method to compare their applicability, efficiency, and cost. In other words, there is a growing need for a set of big data performance benchmarks.

There have been a number of attempts at constructing big data benchmarks~\cite{YCSB,MapReduceDBMSComparison,Ferdman,Fadika}. None of them has gained wide recognition and broad usage, and evaluating a wide range of proposed big data solutions remains an elusive goal. The field of big data performance is in a state where every study and claim employs a different methodology. Results from one publication to the next are not comparable and often not even closely related, as was the case for OLTP some twenty years ago and for decision support shortly thereafter.

We recognize the need for a yardstick to measure the performance of big data systems. The exploding coverage of the subject in market research and trade publications indicates that big data is fast spreading within corporate IT infrastructures, even in non-technology, traditional industry sectors. This new stage in the development of big data presents an opportunity to recognize significant application domains as they emerge, in order to define relevant and objective benchmarks.

We argue that the field as a whole has not gained sufficient experience to definitively say ``the big data benchmark should be X.'' Big data remains a new and rapidly changing field. The inherent complexity, diversity, and scale of such systems introduce additional challenges for defining a representative, portable, and scalable benchmark. However, by combining past experience from TPC-style benchmarks and emerging insights from MapReduce deployments, we can at least clarify some key concerns and concepts associated with building standard big data benchmarks. 

In this position paper, we draw on the lessons learned from more than two decades of database systems performance measurement, and propose a model that can be extended to big data systems. We discuss properties of good benchmarks and identify the characteristics of big data systems that make it challenging to develop such benchmarks (Section 2). We then present an insider's retrospective on the events that led to the development of TPC-C, the standard yardstick for OLTP performance, and discuss the conceptual models that led to a fully synthetic, yet representative benchmark (Section 3). We formalize two key concepts, \emph{functions of abstraction} and \emph{functional workload models}, and discuss how they enable the construction of representative benchmarks that translate across different types of systems (Section 4). We then examine some recent cluster traces from MapReduce deployments, and highlight that the process of identifying MapReduce functions of abstraction and functional workload models remains a work in progress (Section 5). Combining the TPC-C and MapReduce insights, we present a vision for a big data benchmark that contains a mix of application domains (\ie application set), each containing its own functional workload models and functions of abstraction (Section 6).
