\section{Functions of Abstraction and Functional Workload Model}

The process of constructing TPC-C highlights two key concepts --- \emph{functions of abstraction} and the \emph{functional workload model}. In this section, we explain what they are, and why they are vital to constructing big data benchmarks that target specific application domains while accommodating diverse system implementations. 

\subsection{Functions of abstraction}
\label{subsec:functionsOfAbstraction}

\emph{Functions of abstraction} are units of computation that appear frequently in the application domain being benchmarked, expressed in a generic form that is independent of the underlying system implementation. They operate against a data set, and their definition includes the distribution of the data they target and the rules governing their interactions with each other. Such a specification is \emph{abstract}, in that it only defines the \emph{functional} goal of the unit of computation, independently of the system-level components that might be used to service it. As a result, functions of abstraction can operate against variable and scalable data volumes and can be combined into workloads of any desired complexity. Further, one can extract various performance metrics from their combined execution to meet specific benchmarking goals.

TPC-C (\ie Order-Entry) is articulated around the following five functions of abstraction: a midweight read-write transaction (\ie New-Order), a lightweight read-write transaction (\ie Payment), a midweight read-only transaction (\ie Order-Status), a batch of midweight read-write transactions (\ie Delivery), and a heavyweight read-only transaction (\ie Stock-Level)~\cite{TPC-C-details}. They are specified in the semantic context, or story-line, of an order processing environment. That context, however, is entirely artificial. Its sole purpose is to allow easy description of the components.

For each of the above functions of abstraction, the functional goal is defined in terms of a set of data manipulation operations. The targeted data is defined through the distribution of values used as input variables. Their interactions are governed by the transactional properties of atomicity, consistency, isolation, and durability (\ie the ACID properties). Defined as such, the TPC-C transactions achieve the goals of functions of abstraction as follows:
\begin{itemize}
\item The underlying system could be a relational database, a traditional file system, a CODASYL database, or an extension of the Apache Hadoop implementation of MapReduce that provides transactional capabilities. 
\item The volume and distribution of the data set on which the transactions operate is specified separately. 
\item Benchmarks of varying complexity can be created from different combinations and mixes of the defined functions of abstraction. While TPC-C combines all five transactions, the Payment transaction can be run by itself to implement the TPC-A (\ie Debit-Credit) benchmark.
\item Multiple performance metrics can be extracted by measuring various aspects of the resulting execution. The TPC-C standard tpm-C metric (number of New-Order transactions executed per minute) is one of them. Many other metrics are possible, \eg the 99th percentile latency (response time) of the Payment transactions.
\end{itemize}
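To make the system-independence point concrete, a function of abstraction can be pictured as an interface whose functional goal is fixed while the backing system varies. This is an illustrative sketch only; the names \texttt{PaymentFunction} and \texttt{InMemoryBackend} are hypothetical and not part of the TPC-C specification:

```python
from abc import ABC, abstractmethod

class PaymentFunction(ABC):
    """Functional goal: apply a payment against a customer's balance.

    The goal is stated without reference to any system-level component;
    an RDBMS, a file system, or a transactional MapReduce extension
    could each provide a conforming implementation.
    """
    @abstractmethod
    def execute(self, warehouse_id: int, customer_id: int,
                amount: float) -> None:
        ...

class InMemoryBackend(PaymentFunction):
    """Toy stand-in for any concrete system under test."""
    def __init__(self) -> None:
        self.balances: dict[tuple[int, int], float] = {}

    def execute(self, warehouse_id, customer_id, amount):
        key = (warehouse_id, customer_id)
        # Debit the customer's balance; real systems would do this
        # inside a transaction with full ACID guarantees.
        self.balances[key] = self.balances.get(key, 0.0) - amount

backend = InMemoryBackend()
backend.execute(warehouse_id=1, customer_id=42, amount=10.0)
```

A benchmark driver written against the abstract interface can be pointed at any backend without change, which is exactly what allows diverse system implementations to be compared.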

\noindent Once defined, the functions of abstraction can be combined with a specified scheduling and with the definition of table structures and populations to form a functional workload model, which we explain next. 


\subsection{Functional Workload Model}
\label{subsec:functionalWorkloadModel}

The \emph{functional workload model} seeks to capture the representative load that a system needs to service in the context of a selected application domain. It describes in a system-independent manner the compute patterns (\ie functions of abstraction) as well as their scheduling and the data set they act upon. Describing the scheduling involves specifying one or more load patterns to be applied to the system, defined in terms of the execution frequency, distribution, and arrival rate of each individual function of abstraction. Describing the data set involves specifying its structure, inter-dependence, initial size and contents, and how it evolves over the course of the workload's execution. These descriptions are synthetic in nature, in that they are limited to the essential characteristics of the functional goals of the selected application domain.
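The three components can be written down as a declarative specification. All field names below are hypothetical; the values mirror TPC-C, with frequencies in the 10:10:1:1:1 ratio implied by its inter-dependence rules and a data set of nine tables whose volume scales with the number of warehouses:

```python
# A functional workload model as a declarative specification.
# Field names are illustrative, not drawn from any standard schema.
workload_model = {
    # Compute patterns: the functions of abstraction themselves.
    "functions_of_abstraction": [
        "NewOrder", "Payment", "OrderStatus", "Delivery", "StockLevel",
    ],
    # Scheduling: load pattern, expressed as relative frequencies
    # and an arrival discipline.
    "scheduling": {
        "arrival": "randomized",
        "frequency": {"NewOrder": 10, "Payment": 10,
                      "OrderStatus": 1, "Delivery": 1, "StockLevel": 1},
    },
    # Data set: structure and how its size scales.
    "data_set": {
        "tables": ["Warehouse", "District", "Customer", "History",
                   "Order", "New-Order", "Order-Line", "Stock", "Item"],
        "scale_unit": "warehouses",
    },
}
```

Because the specification names only functional goals, frequencies, and data characteristics, any system capable of servicing those goals can consume it.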

The simplicity and lack of duplication at the core of the definition of functions of abstraction are just as critical for the design of the other components (\ie scheduling and data set) of the functional workload model. This is exemplified in TPC-C as follows: 
\begin{itemize}
\item There are functions of abstraction in the form of five transactions.
\item There is a randomized arrival pattern for the transactions.
\item There is an inter-dependence between the transactions. In particular, every New-Order will be accompanied by a Payment, and every group of ten New-Order transactions will produce one Delivery, one Order-Status, and one Stock-Level transaction~\cite{TPC-C-details}. 
\item There are specified structures, inter-dependencies, contents, sizes, and growth rates for the data set, materialized in nine tables (\ie Warehouse, District, Customer, History, Order, New-Order, Order-Line, Stock, and Item~\cite{TPC-C-overview}). 
\end{itemize}
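The randomized arrival pattern and the inter-dependence above can be emulated with a shuffled-deck generator: each deck holds ten New-Order and ten Payment cards plus one card for each remaining transaction, so any randomized deal preserves the required ratios exactly over every 23 arrivals. This is an illustrative sketch, not the official TPC-C driver logic:

```python
import random

# One deck encodes the inter-dependence: ten New-Order and ten Payment
# transactions per one Delivery, Order-Status, and Stock-Level.
DECK = (["NewOrder"] * 10 + ["Payment"] * 10
        + ["Delivery", "OrderStatus", "StockLevel"])

def transaction_stream(n_decks: int, rng: random.Random):
    """Yield transactions in randomized order, deck by deck, so the
    mix ratios hold exactly over every 23 consecutive arrivals."""
    for _ in range(n_decks):
        deck = DECK[:]
        rng.shuffle(deck)       # randomize arrival order within a deck
        yield from deck

stream = list(transaction_stream(10, random.Random(0)))
```

Dealing ten decks produces 230 transactions with exactly 100 New-Orders and 10 Deliveries, independent of the random seed.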

\noindent A major shortcoming of several recent benchmarking attempts is the lack of any clear workload model, let alone a functional workload model as defined here. The resulting \emph{micro benchmarks} measure system performance using one stand-alone compute unit at a time~\cite{MapReduceDBMSComparison,Hibench,Gridmix,Terasort}. They lack the functional view that is essential to benchmarking the diverse and rapidly changing big data systems aimed at servicing emerging application domains, as we explain next.
%Both the functions of abstraction and the functional workload model seek to capture the functional goals of any given application domain. 


\subsection{Functional benchmarks essential for big data}
\label{subsec:FunctionalBenchmarksEssentialForBigData}

The ultimate goal of benchmarking is to facilitate claims such as ``System $X$ has been engineered to have good performance for the application domain $Y$.'' Evaluating big data systems is no exception and, as such, requires an approach based on the same level of abstraction that brought success to existing benchmarking efforts, such as those for OLTP and decision support.

\begin{figure}[t]
\begin{center}
\centering
\includegraphics[trim = 2cm 4cm 3cm 1cm, clip, width=10cm]{figs/FunctionalWorkloadModel}
\caption{The conceptual relations between application domains, functional workload models, functions of abstraction, and the system and physical views.}
\label{fig:functionalWorkloadModel}
\end{center}
\end{figure}

The functional view explained previously mirrors the view detailed in early computer performance literature~\cite{ferrariWorkloadAFIPS}. The functional view enables a large range of similarly targeted systems to be compared, because such an abstraction level has been intentionally constructed to be independent of system implementation choices. In particular, the functional description of TPC-C does not preclude an OLTP system from being built on top of, say, the Hadoop distributed file system, with its performance compared against that of a traditional RDBMS. Such comparisons are essential for decisions that commit significant resources to alternate kinds of big data systems, \eg for assigning design priorities, purchasing equipment, or selecting technologies for deployment.

The functional view also allows the benchmark to scale and evolve. This ability comes from the fact that functions of abstraction are specifically constructed to be independent of each other and of the characteristics of the data sets they act upon. Thus, functions of abstraction can remain relatively fixed as the size of the data set is scaled. Further, as each application domain evolves, functions of abstraction can be deprecated or added, appear with changed frequencies or sequences, or be performed on data sets with different characteristics. Thus, functions of abstraction form an essential part of a scalable and evolving benchmark.

A common but incorrect approach is to benchmark systems using \emph{physical} abstractions that describe hardware behavior, as illustrated by the \emph{Physical View} layer in Figure~\ref{fig:functionalWorkloadModel}. This view describes the CPU, memory, disk, and network activities during workload execution. Benchmarking systems at this level allows the identification of potential hardware bottlenecks. However, the physical-level behavior changes with any hardware, software, or even configuration change in the system. Thus, benchmarks that prescribe physical behavior preclude any kind of performance comparison between different systems --- the differences between systems alter the physical behavior, making the results of physical benchmark executions on two systems non-comparable. Prior work on computer performance in general has already critiqued this approach~\cite{ferrariWorkloadAFIPS}. Studies on Internet workloads have further phrased this concern as the ``shaping problem''~\cite{simulateInternet}.

Another alternative is to benchmark systems using the \emph{systems} view~\cite{SWIM,YanpeiThesis}, as illustrated by the \emph{Systems View} layer in Figure~\ref{fig:functionalWorkloadModel}. This approach captures system behavior at the natural, highest-level semantic boundaries in the underlying system. For TPC-C, the systems view breaks down each function of abstraction into relational operators (the systems view for an RDBMS) or read/write operations (the systems view for a file system). For MapReduce, this translates to breaking down each function of abstraction into component jobs and describing the per-job characteristics. The systems view does enable performance comparison across hardware, software, and configuration changes. However, it requires a new systems view to be developed for each style of system. More importantly, the systems view precludes comparisons between two different kinds of systems servicing the same goals, \eg between a MapReduce system and an RDBMS that service the functionally equivalent enterprise data warehouse workload. This would be a major shortcoming for big data benchmarks that seek to accommodate different styles of systems.

Hence, we advocate the functional view for big data benchmarks, as illustrated by the \emph{Functional Workload Model} layer in Figure~\ref{fig:functionalWorkloadModel}. It enables comparison between diverse styles of systems that service the same functional goals, while allowing the benchmark to scale and evolve as each application domain emerges.




