\section{The Process of Building TPC-C}
\label{sec:TPC-C}

TPC-C is a good example of a benchmark that has had a substantial impact on technologies and systems. Understanding the origin of this long-standing industry yardstick provides important clues toward the definition of a big data benchmark. In this section, we retrace the events that led to the creation of TPC-C and present the conceptual motivation behind its design.

\subsection{The origin of TPC-C}

The emergence and rapid growth of On-Line Transaction Processing (OLTP) in the early eighties highlighted the importance of benchmarking a specific application domain. The field of transaction processing was heating up, and the need to satisfy on-line requirements for fast user response times was growing rapidly. CODASYL databases supporting transactional properties were the dominant technology, a status increasingly challenged by relational databases. For instance, version 3 of the Oracle relational database, released in 1983, implemented support for the COMMIT and ROLLBACK functionalities. As competition intensified, the need emerged for an objective measure of performance. In 1985, Jim Gray led an industry-academia group of over twenty members in creating a new OLTP benchmark under the name DebitCredit~\cite{TransacProcessingPower}.

By the late eighties, relational databases had matured and were fast replacing the CODASYL model. The DebitCredit benchmark and its derivatives, ET1 and TP1, had become de facto standards. Database system vendors used them to make performance claims, often raising controversies~\cite{DebitCreditIssues}. A single standard was still absent, which led to confusion about the comparability of results. In June of 1988, T. Sawyer and O. Serlin proposed to standardize DebitCredit. Later that year, O. Serlin spearheaded the creation of the Transaction Processing Performance Council (TPC), tasked with creating an industry standard version of DebitCredit~\cite{DebitCreditHistory}.


Around this time, Digital Equipment Corporation (DEC) was in the process of developing a new relational database product, code-named RdbStar. The development team soon recognized that a performance benchmark would be needed to assess the capabilities of early versions of the new product. DEC's European subsidiary had been conducting a broad survey of database applications across France, England, Italy, Germany, Holland, Denmark and Finland. Production systems at key customer sites had been examined and local support staff interviewed. The survey sought to better understand how databases were used in the field and which features were most commonly found in production systems. Armed with this data, the RdbStar benchmark development project started with an examination of the many database benchmarks known at the time, including the Wisconsin benchmark~\cite{WisconsinBenchmark}, AS3AP~\cite{AS3AP} and the Set Query Benchmark~\cite{SetQueryBenchmark}.

The approach found to be most representative of the European survey's findings came from an unpublished benchmark developed by the Microelectronics and Computer Consortium (MCC), one of the largest computer industry research and development consortia, based in Austin, TX. Researchers at MCC were working on distributed database technology~\cite{Belady} and had developed a simulator to test various designs. Part of the simulator involved executing OLTP functions inspired by an order processing application. The MCC benchmark was selected by DEC as the starting point for the RdbStar benchmark. Parts of the MCC benchmark were adjusted, as discussed later in this paper, and the resulting benchmark became known internally as Order-Entry.

In November of 1989, the TPC published its standardized end-to-end version of DebitCredit under the name TPC Benchmark A (TPC-A)~\cite{TPC-A}. TPC Benchmark B (TPC-B)~\cite{TPC-B}, a back-end variant of TPC-A, followed in August 1990. By then, the simple transaction in DebitCredit was coming under fire as too simplistic and as insufficiently exercising the features of mature database products. The TPC issued a request for proposals for a more complex OLTP benchmark. IBM submitted its RAMP-C benchmark and DEC submitted Order-Entry. The TPC selected the DEC benchmark and assigned its author, F. Raab, to lead the creation of the new standard. July 1992 saw the approval and release of the new TPC Benchmark C (TPC-C)~\cite{TPC-C-overview}.



\subsection{The abstract makeup of TPC-C}

The original Order-Entry benchmark from DEC included two components: a set of database transactions targeting the OLTP application domain, and a set of simple and complex queries targeting the decision support application domain. The TPC adopted the OLTP portion of Order-Entry for the creation of TPC-C. This portion included a controlled mix of five transactions executing against a database of nine tables. 

The design of the transactional portion of Order-Entry did not follow the traditional model used for building business applications. Under that model, the design of an application can be decomposed into four basic elements, as follows:
\begin{itemize}
\item Tables: The database tables, the layout of the rows and the correlation between tables.
\item Population: The data that populates the tables, the distribution of values and the correlation between the values in different columns of the tables.
\item Transactions: The units of computation against the data in the tables, the distribution of input variables and the interactions between transactions.
\item Scheduling: The pacing and mix of transactions.
\end{itemize}
In the traditional design model, each of these elements implements part of the business functions targeted by the application. The tables would represent the business context. The population would start with a base set capturing the initial state of the business and evolve as a result of conducting daily business. The transactions would implement the business functions. The scheduling would reflect business activity.  

This model results in benchmarks shaped by the business details of the targeted application rather than by more general benchmarking objectives. Most critically, the benchmark elements are specific to a single application and not representative of the whole domain. In contrast, standard benchmarking is a synthetic activity that seeks to be representative of a collection of applications within a domain, and its sole purpose is to gather relevant performance information. Being free of any real business context, the elements of such a benchmark should be abstracted from a representative cross section of the applications within the targeted domain.

To illustrate the concept of using abstractions to design the elements of a benchmark, we take a closer look at how this applies to transactions. The objective is to examine the units of computation of multiple applications and to find repetitions or similarities. For instance, in the OLTP application domain, it is common to find user-initiated operations that involve multiple successive database transactions. While these transactions are related through the application's business semantics, they are otherwise independent from the point of view of exercising and measuring the system. Consequently, they should be examined independently during the process of creating a set of abstract database transactions. Consider the following:
{\footnotesize
\begin{verbatim}
    User-initiated operation
        Database Transaction T1
            Read row from table A
            Update row in table B
            Commit transaction
        Database Transaction T2
            Update row in table A
            Insert row in table C
            Commit transaction
        Database Transaction T3
            Read row from table C
            Update row in table B
            Commit transaction
\end{verbatim}
}
In the above, T1 and T3 perform similar operations, but on different tables. However, if tables A and C have the same characteristics, T1 and T3 can be viewed as duplicates of a single abstract transaction, one that contains a ``read row'' followed by an ``update row''.
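This deduplication step can be sketched programmatically. The following is a hypothetical illustration, not part of Order-Entry or TPC-C: the transaction names, the table-characteristic classes and the \texttt{signature} helper are our own. Each transaction is reduced to its sequence of operations with table names replaced by table characteristics; transactions that share a signature collapse into one abstract transaction.

```python
# Hypothetical sketch: collapsing concrete transactions into abstract
# ones by comparing their operation profiles. Transaction steps mirror
# the example above; the implicit "commit" is omitted.
transactions = {
    "T1": [("read", "A"), ("update", "B")],
    "T2": [("update", "A"), ("insert", "C")],
    "T3": [("read", "C"), ("update", "B")],
}

# Assumed mapping of tables to characteristic classes: here tables A
# and C share the same characteristics (size, layout, access pattern).
table_class = {"A": "small", "B": "large", "C": "small"}

def signature(steps):
    """Abstract a transaction: keep the operations, but replace each
    table name with its characteristic class."""
    return tuple((op, table_class[t]) for op, t in steps)

# Group concrete transactions by abstract signature.
abstract = {}
for name, steps in transactions.items():
    abstract.setdefault(signature(steps), []).append(name)

for sig, names in abstract.items():
    print(names, "->", sig)
```

Running this groups T1 and T3 under the same signature, a ``read row'' on a small table followed by an ``update row'' on a large one, while T2 remains its own abstract transaction.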

Order-Entry abstracted the multitude of real-life transactions in the OLTP application domain down to only five abstract transactions. Such compression resulted in a substantial loss of specificity. However, we argue that this loss is more than outweighed by the gained ability to gather relevant performance information across a large portion of the OLTP application domain. The success of the benchmark over the last two decades appears to support this view.
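On the scheduling side, TPC-C pairs its five abstract transactions with a controlled mix: the specification sets minimum shares of 43\% for Payment and 4\% each for Order-Status, Delivery and Stock-Level, leaving roughly 45\% for New-Order. The sketch below shows one way a benchmark driver might draw transaction types from such a mix; the function and variable names are our own, and a conforming implementation would be considerably more elaborate (keying and think times, per-terminal pacing).

```python
import random

# Illustrative sketch of a driver loop drawing transactions according
# to TPC-C's minimum mix requirements (not taken from the standard's
# text). New-Order receives whatever share the other four leave over.
MIX = {
    "Payment": 0.43,
    "Order-Status": 0.04,
    "Delivery": 0.04,
    "Stock-Level": 0.04,
}
MIX["New-Order"] = 1.0 - sum(MIX.values())  # about 45%

def next_transaction(rng=random):
    """Pick the next transaction type according to the target mix."""
    return rng.choices(list(MIX), weights=list(MIX.values()))[0]

# Draw a large sample and tally the observed mix.
counts = {name: 0 for name in MIX}
for _ in range(100_000):
    counts[next_transaction()] += 1
```

Over a long run, the observed shares converge on the target mix, which is how a driver keeps the workload representative regardless of how fast individual transactions complete.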



