\section{Evaluating a Proposal}
\label{sec:evaluate}

Once the system has been defined and a model chosen, there are two possible next steps. If the research contribution is the system model itself, then the quality of the model needs to be evaluated. Alternately, if the contribution is applying the model to improve system performance, then the evaluation needs to demonstrate both the quality of the model and the improvement in system performance. 

An example of evaluating statistical system models appears in~\cite{ICDE2009}. The work develops a multi-dimensional performance model for database queries. The key statistical tool is kernel canonical correlation analysis (KCCA), which, at a high level, finds dimensions of maximal correlation between an input dataset of query descriptions and an output dataset of query behaviors~\cite{KCCA}; both datasets are multi-dimensional. Evaluating the system model requires demonstrating that the behavior KCCA predicts approximates actual system behavior. System behavior traces originate from running an extension of the TPC-DS decision support benchmark~\cite{TPCDS}. The trace is then divided into training and testing datasets, a standard model evaluation technique in statistics. Graphs of predicted versus actual behavior demonstrate the accuracy of the models across multiple dimensions, including query time, message count, and records used (Figure~\ref{fig:icde}). 
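The train-then-test evaluation described above can be sketched in a few lines. The following is a hypothetical illustration, not the actual KCCA pipeline of~\cite{ICDE2009}: it substitutes a multi-output least-squares predictor for KCCA and synthetic data for the TPC-DS trace, but it preserves the structure of splitting a trace into training and testing halves and comparing predicted versus actual behavior per output dimension.

```python
import numpy as np

# Hypothetical stand-in for the KCCA evaluation: split a trace into
# training and testing halves, fit a multi-output linear predictor,
# and compare predicted vs. actual behavior per output dimension.

rng = np.random.default_rng(0)

# Synthetic "trace": 200 queries, 5 descriptive features, 3 behavior
# dimensions (e.g. elapsed time, message count, records used).
n, d_in, d_out = 200, 5, 3
X = rng.normal(size=(n, d_in))
W = rng.normal(size=(d_in, d_out))
Y = X @ W + 0.1 * rng.normal(size=(n, d_out))  # behavior = f(features) + noise

# Standard split: train on the first half, test on the held-out half.
X_tr, X_te = X[:n // 2], X[n // 2:]
Y_tr, Y_te = Y[:n // 2], Y[n // 2:]

# Fit by least squares; predict held-out behavior.
W_hat, *_ = np.linalg.lstsq(X_tr, Y_tr, rcond=None)
Y_pred = X_te @ W_hat

# Per-dimension R^2: how closely predicted tracks actual behavior.
ss_res = ((Y_te - Y_pred) ** 2).sum(axis=0)
ss_tot = ((Y_te - Y_te.mean(axis=0)) ** 2).sum(axis=0)
r2 = 1 - ss_res / ss_tot
print(r2)
```

The per-dimension scores play the role of the predicted-versus-actual graphs: one accuracy summary for each behavior dimension of interest.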

\begin{figure}[t]
\begin{center}
\subfigure[Query elapsed time]{
\includegraphics[trim=2cm 20cm 11cm 1.5cm, clip, width=0.31\textwidth]{caseStudies/GanapathiICDE2009Page9.pdf}
%\label{fig:icde1}
}
\subfigure[Message count]{
\includegraphics[trim=11cm 20cm 2cm 1.5cm, clip, width=0.31\textwidth]{caseStudies/GanapathiICDE2009Page9.pdf}
%\label{fig:icde2}
}
\subfigure[Records used]{
\includegraphics[trim=2cm 10.3cm 11cm 10.5cm, clip, width=0.29\textwidth]{caseStudies/GanapathiICDE2009Page9.pdf}
%\label{fig:icde3}
}
\end{center}
\caption[]{Evaluating a system model - predicted vs. actual behavior. Graphs reproduced from~\cite{ICDE2009}.}
\label{fig:icde}
\end{figure}


An example of demonstrating improved system performance is in~\cite{MASCOTS2011}. The work introduces realistic workload suites for MapReduce~\cite{MapReduce}, and uses them to compare the performance of the default MapReduce FIFO task scheduler against the MapReduce fair scheduler~\cite{fairScheduler}. A fixed workload is replayed under each scheduler, and the resulting system performance is observed and compared. In this case, the statistical model describes the input workload rather than the system, and the research contribution lies in accurately capturing system behavior subject to realistic workload variations. Such variations complicate performance comparison: some conditions favor one system setting, while others favor another. This is true even for simple performance metrics such as job completion time. Figure~\ref{fig:mascots} shows that the fair scheduler gives lower job completion time than the FIFO scheduler under some workload arrival patterns, and vice versa under others. Thus, the choice of scheduler depends on a rigorous understanding of the workload. 
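The qualitative finding can be illustrated with a toy simulation. The sketch below is not the actual MapReduce replay of~\cite{MASCOTS2011}: it models a single node running hypothetical jobs under FIFO scheduling versus fair (processor) sharing, but it reproduces the point that different arrival patterns favor different schedulers, even for a metric as simple as mean job completion time.

```python
# Toy single-node illustration (not the actual MapReduce replay):
# compare FIFO scheduling against fair (processor) sharing on
# hypothetical jobs given as (arrival_time, work) pairs.

def simulate(jobs, policy, dt=0.001):
    """Return per-job completion times. 'fifo' runs the earliest-arrived
    unfinished job alone; 'fair' splits capacity among all active jobs."""
    remaining = [work for _, work in jobs]
    done = [None] * len(jobs)
    t = 0.0
    while any(d is None for d in done):
        active = [i for i, (arrival, _) in enumerate(jobs)
                  if arrival <= t and done[i] is None]
        if active:
            if policy == "fifo":
                running = [min(active, key=lambda i: jobs[i][0])]
            else:
                running = active
            for i in running:
                remaining[i] -= dt / len(running)  # share capacity equally
                if remaining[i] <= 0:
                    done[i] = t + dt
        t += dt
    return done

def mean_completion(jobs, policy):
    """Mean time from job submission to job completion."""
    done = simulate(jobs, policy)
    return sum(c - a for c, (a, _) in zip(done, jobs)) / len(jobs)

long_then_short = [(0.0, 10.0), (1.0, 1.0)]  # a long job, then a short one
burst_of_equals = [(0.0, 2.0)] * 3           # three equal jobs at once

for name, jobs in [("long+short", long_then_short),
                   ("burst", burst_of_equals)]:
    for pol in ("fifo", "fair"):
        print(name, pol, round(mean_completion(jobs, pol), 2))
```

With the short job stuck behind the long one, fair sharing wins; with a burst of equal jobs, FIFO wins. This mirrors the observation that the better scheduler depends on the arrival pattern.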

\begin{figure}[t]
\centering
\subfigure[Fair scheduler has lower job completion time.]{
\includegraphics[trim=11cm 18.5cm 2.5cm 6.6cm, clip, width=0.45\textwidth]{caseStudies/MapReduceWorkloadMASCOTSPublishedPage9.pdf}
}
\subfigure[FIFO scheduler has lower job completion time.]{
\includegraphics[trim=11cm 14.5cm 2.5cm 10.5cm, clip, width=0.45\textwidth]{caseStudies/MapReduceWorkloadMASCOTSPublishedPage9.pdf}
}
\caption[]{Evaluating system performance under two settings - MapReduce FIFO scheduler vs. fair scheduler for two different job sequences. Graphs reproduced from~\cite{MASCOTS2011}.}
\label{fig:mascots}
\end{figure}



Both of the above studies confront similar challenges. First is the challenge of \emph{workload representativeness}. If the evaluation covers workloads that do not represent real-life use cases, there is little guidance on how the evaluation results translate to real-life systems. This challenge is especially relevant for the study in~\cite{MASCOTS2011}, where the choice of the optimal MapReduce scheduler depends on the particular mix of job arrival patterns. For the study in~\cite{ICDE2009}, the standard TPC-DS benchmark is augmented with additional queries informed by real-life decision support use cases. Such knowledge about real-life use cases arises either from empirical analysis of system traces or from system operator expertise. As data management systems become more complex and evolve more rapidly, operator expertise alone is likely to become insufficient, and good \emph{system monitoring} becomes a prerequisite for good system design. 
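One way to make the representativeness check empirical is to compare a candidate workload's statistics against a production trace. The sketch below uses synthetic stand-in data (the trace and both benchmarks are hypothetical) and compares inter-arrival time distributions with a two-sample Kolmogorov-Smirnov statistic: a small statistic suggests the benchmark matches the trace, a large one flags a mismatch.

```python
import numpy as np

# Hypothetical representativeness check: compare the inter-arrival
# distribution of candidate benchmark workloads against an empirical
# "production" trace. All three datasets here are synthetic stand-ins.

rng = np.random.default_rng(2)
trace = rng.exponential(scale=2.0, size=1000)       # production inter-arrivals
benchmark_a = rng.exponential(scale=2.0, size=500)  # matched arrival process
benchmark_b = np.full(500, 2.0)                     # fixed-interval arrivals

def ks_statistic(a, b):
    """Two-sample KS statistic: max gap between the empirical CDFs."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

print(ks_statistic(trace, benchmark_a))  # small: representative
print(ks_statistic(trace, benchmark_b))  # large: misses the burstiness
```

The fixed-interval benchmark has the same mean arrival rate as the trace, yet the large KS statistic exposes that it misrepresents the burstiness that, as in~\cite{MASCOTS2011}, can decide which scheduler wins.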

Another challenge is the need for \emph{continuous model re-training}. In~\cite{ICDE2009}, both the training and testing datasets come from the same trace. This setup does not translate to a real-life deployment, in which only the training dataset is available up front and the test set is generated in real time as queries are submitted to the system. For example, the query prediction model is trained once per configuration, and the system takes some action based on the predicted resource requirements for each new query. However, if the model is static, the system's response to the new query is never fed back into the model. In other words, the ``test dataset'' is not a fixed artifact; it is constantly extended by subsequent queries. This is an inherent shortcoming of static system behavior models, well studied in the Internet measurement literature~\cite{simulateInternet}. Consequently, it is desirable to have statistical models that can \emph{constantly re-train parameter values} or even \emph{discover new parameters}. 
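A minimal sketch of continuous re-training, assuming a sliding-window linear model over synthetic data (not the actual model of~\cite{ICDE2009}): the model predicts each new query's behavior before observing it, then folds the observation into the window and refits, so a shift in workload behavior degrades accuracy only until the window turns over.

```python
import numpy as np
from collections import deque

# Hypothetical sketch of continuous re-training: predict each incoming
# query's behavior *before* observing it, then add the observation to a
# sliding window and refit. A workload shift midway tests recovery.

rng = np.random.default_rng(1)
window = deque(maxlen=100)          # most recent (features, behavior) pairs
W_true = rng.normal(size=4)         # synthetic "true" query-behavior mapping

errors = []
for step in range(500):
    if step == 250:                 # workload shift: behavior changes
        W_true = rng.normal(size=4)
    x = rng.normal(size=4)
    y = x @ W_true + 0.05 * rng.normal()
    if len(window) >= 8:            # enough data to fit a model
        X = np.array([p[0] for p in window])
        Y = np.array([p[1] for p in window])
        w_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
        errors.append(abs(x @ w_hat - y))   # predict before observing
    window.append((x, y))           # re-train on the updated window

# Error spikes right after the shift, then recovers as the window refills.
print(f"steady-state error: {np.mean(errors[400:]):.3f}, "
      f"worst error: {np.max(errors):.3f}")
```

A statically trained model would never recover from the shift; the sliding window is the simplest form of constantly re-training parameter values.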

A third challenge is imposed by the \emph{limitations of system replay}. Replay here means executing a workload on a real system, with the system making control actions based on statistical models. The study in~\cite{MASCOTS2011} demonstrates this approach. Replay allows designers to explore the interaction between model re-training and system actions that affect the model. However, as data management systems grow in size, replaying long workloads at full scale and full duration becomes logistically prohibitive. For example, the experiments in~\cite{MASCOTS2011} required a 200-machine cluster for several days, and preparing the experiment required additional days of debugging at scale. Even then, the replay was not at the full scale of the original system that was traced. There is a spectrum of replay fidelity, from comparison experiments on the actual front-line, customer-facing systems, using production-scale data and covering long durations, to scaled-down experiments using artificial data and covering short durations. Ultimately, the system designer needs to judiciously select the \emph{appropriate replay fidelity}, balancing the need for quality insights, logistical feasibility, potential for improvement, and risk of negative system impact. 

