\section{Automated Exploratory Data Analysis}
\label{sec:eda}



The online platform for our data repository not only offers the scenario data files themselves, but also
provides many tables and figures that summarize them.
These pages are automatically generated and currently include, among other things, the following parts:

\begin{itemize}
\item an overview table that describes all available scenarios by listing, e.g., the number of instances, algorithms and features, similar to Table~\ref{tab:overview};
\item a summary of the algorithms' performance and run status data;
\item a summary of the feature values, as well as the run status and costs of the feature steps;
\item benchmark results for standard machine learning models for each scenario; see Section~\ref{sec:experiments}.
\end{itemize}

Presenting this additional data offers the following advantages:

\begin{itemize}
\item Researchers can quickly understand which scenarios are available and select those best suited to their needs.
\item Data can be sanity-checked by visual inspection; data collection errors commonly occur when scenario data is gathered and submitted for the first time.
\item Interesting or challenging properties of the data sets become visible, providing the researcher with a quick and informative first impression.
\end{itemize}

The \textbf{summary page for the algorithms} starts with a table listing simple statistics
regarding their performance (e.g., mean values and standard deviations) and run status
(e.g., how many runs completed successfully).
We also indicate whether one algorithm is dominated by another,\footnote{%
\changed{An algorithm $a_1$ dominates another algorithm $a_2$ if and only if $a_1$ performs at least as well as $a_2$ on all instances and outperforms $a_2$ on at least one instance.}}
which is useful because there is no reason to include a dominated algorithm in a portfolio.
Various visualizations, such as box plots, scatter plot matrices, correlation plots and density plots,
enable further inspection of the distributions of, and correlations between, algorithm performances\changed{, allowing the reader to better understand the strengths and weaknesses of each algorithm.} For display, we impute high values for the missing performance values corresponding to failed runs, so that these runs are clearly visible rather than silently discarded. All of our plots can be configured to use log scales, which often improves readability when all data are positive (logarithms cannot be computed for non-positive values).
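The dominance check can be sketched in a few lines. This is a minimal illustration, not the platform's actual implementation; it assumes performance is a cost to be minimized and is stored per instance:

```python
def dominates(perf_a, perf_b):
    """Return True iff algorithm a dominates algorithm b: a is at least as
    good (i.e., has cost no larger) on every instance and strictly better
    on at least one.  perf_a, perf_b map instance names to costs."""
    assert perf_a.keys() == perf_b.keys(), "both algorithms need the same instances"
    at_least_as_good = all(perf_a[i] <= perf_b[i] for i in perf_a)
    strictly_better = any(perf_a[i] < perf_b[i] for i in perf_a)
    return at_least_as_good and strictly_better
```

Note that dominance is a partial order: two algorithms that each win on some instances are incomparable, and neither should be discarded on this basis alone.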

\begin{figure}[!t]
	\begin{minipage}[!t]{0.29\textwidth}
	\includegraphics[width=\textwidth]{pics/qbf_boxplot.pdf}
	\end{minipage}
	\hfill
	\begin{minipage}[!t]{0.7\textwidth}
	\includegraphics[width=\textwidth]{pics/qbf_cdf.pdf}
	\end{minipage}
	 \caption{\changed{Algorithm performance distributions of the QBF-$2011$
     scenario: boxplots (left) and cumulative distribution functions (right),
     both on a log scale. The gap between the end of each curve (right) and
     $1.00$ denotes the fraction of instances that were not solved within the
     timeout.}}
	\label{fig:qbf}
\end{figure}


Figure~\ref{fig:qbf} shows \changed{boxplots} and cumulative distribution functions for the algorithms in the QBF-$2011$ scenario. Such plots allow one to assess the location, spread and potential multimodality of the performance distributions, and whether they are roughly normally distributed. In addition, they reveal how long it took an algorithm to solve the given instances. \changed{For example, for the QBF-$2011$ scenario in Figure~\ref{fig:qbf}, one can see that the algorithm \system{quantor} finds a solution very quickly on a few instances, i.e., it solves approximately $5\%$ of the instances \hh{nearly instantaneously}. However, if it does not succeed quickly, it often does not succeed at all: it solved fewer than $30\%$ of all instances. In contrast, \system{sSolve} usually needs longer to find a solution, but when it does, it is one of the best algorithms. Such behavior can indicate that the algorithm \hh{requires a `warm-up'} stage, which should be considered when deploying it.}
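The quantities read off such CDF plots can be computed directly from the runtime data. The following is a simplified sketch, assuming unsolved runs are encoded with the timeout value itself:

```python
def solved_fraction(runtimes, timeout):
    """Height the CDF curve reaches at the cutoff: the fraction of runs
    that finished strictly before the timeout (unsolved runs are assumed
    to be encoded with the timeout value itself)."""
    return sum(t < timeout for t in runtimes) / len(runtimes)

def ecdf(runtimes, t):
    """Empirical cumulative distribution function evaluated at time t:
    the fraction of runs that finished within t seconds."""
    return sum(x <= t for x in runtimes) / len(runtimes)
```

For example, with `runtimes = [1.0, 2.0, 3.0, 3600.0, 3600.0]` and a timeout of $3600$ seconds, `solved_fraction` yields $0.6$, matching the gap of $0.4$ between the curve's endpoint and $1.00$ in the plot.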

The left panel of \changed{Figure~\ref{fig:scatter} shows pairwise scatterplots for the QBF-$2011$ scenario, allowing an easy comparison of algorithm pairs on all instances of a given scenario.
\hh{Each point represents a problem instance, and from the location of the point cloud one can see whether one algorithm dominates the other on the majority of instances, or whether their relative performance varies strongly across instances. The former case manifests as a cloud that lies almost entirely on one side of the diagonal of a single scatterplot; in such a case, the dominated algorithm could be discarded from the portfolio. If no such domination is present, there is potential to realize performance improvements by means of per-instance algorithm selection.}}



\begin{figure}[!t]
	\centering
	\begin{minipage}[!t]{0.55\textwidth}
	\includegraphics[width=1.05\textwidth]{pics/qbf_scatter.pdf}
	\end{minipage}
%	\newline
	\hfill
	\begin{minipage}[!t]{0.43\textwidth}
	\hspace{-0.65cm}
	\includegraphics[width=1.15\textwidth]{pics/qbf_cormatrix.pdf}
	\end{minipage}
	 \caption{\changed{Pairwise correlations among algorithms of the QBF-$2011$ scenario: A scatter plot matrix on a log scale (left) and the plot of a clustered correlation matrix (right).}}
	\label{fig:scatter}
\end{figure}

\changed{\hh{Because detecting correlation in algorithm performance is also of interest
	when analyzing the strengths and weaknesses of a given portfolio-based solver}~\cite{xu2012evaluating},} we also present a clustered correlation matrix; cf.\ the right panel of Figure~\ref{fig:scatter}. \changed{Algorithms with a (high) positive correlation are more likely to be redundant in a portfolio, whereas pairs with a (high) negative correlation are more likely to complement each other.} Here, we calculate Spearman's rank correlation coefficient. Blue boxes represent positive correlation, red boxes represent negative correlation, and the shading indicates the strength of the correlation. The algorithms are also clustered according to these values (using Ward's method~\cite{ward63}) and then sorted, such that similar algorithms appear together in blocks. \changed{This type of clustering allows the identification of algorithms with highly correlated performance.}
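The rank correlation underlying this matrix can be computed as follows. This is a pure-Python sketch of Spearman's coefficient via average ranks; in practice one would use a statistics library, which also handles degenerate cases such as constant inputs:

```python
def ranks(values):
    """Average ranks (1-based); tied values share their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    result = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend the tie group as long as the sorted values are equal
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            result[order[k]] = mean_rank
        i = j + 1
    return result

def spearman(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because it operates on ranks, the coefficient captures any monotone relationship between two algorithms' runtimes, not just a linear one, which is why it is well suited to heavy-tailed runtime data.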
	
\changed{The example in Figure~\ref{fig:cormat} (the plot of a clustered correlation matrix for the SAT$12$-ALL scenario) shows three groups of algorithms (\system{minisatpsm} to \system{restartsat}, \system{sattimep} to \system{tnm}, and the three \system{mphaseSAT} algorithms) with high correlations within each group. In addition, the performance of \system{marchrw} is distinct from that of all other algorithms. Hence, one might select a single representative per group, reducing the portfolio from 31 to four algorithms.}

\begin{figure}[!t]
	\includegraphics[width=\textwidth]{pics/cormatrix_sat12_all.pdf}
	 \caption{Clustered correlations for the SAT$12$-ALL scenario.}
	\label{fig:cormat}
\end{figure}


As we do with algorithm runs, we summarize the features, e.g., by giving basic statistics of the
feature values, as well as the run status and cost of the feature processing steps. Table~\ref{table:features} displays the summary of the feature steps for the SAT$12$-RAND scenario. \changed{In this scenario, all 115 features use the feature step `Pre' as a requirement \fh{(since the first step is to preprocess the instance)}. While this preprocessing step
succeeded in all cases, \fh{one other step did not: the feature step `CG' (which computes clause graph features) failed in 37.37\% of cases due to exceeding time or memory limits, and even on instances where it succeeded, it was quite expensive ($8.79$ seconds on average).}
Such information is useful for understanding feature behavior:
e.g., how risky is it to compute a feature step, and how much time must one invest to obtain the corresponding features?}
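Per-step statistics of this kind can be derived from raw run records along the following lines. This is a simplified sketch; the field names are hypothetical, and the real platform distinguishes more run status values than just `ok' and `crash':

```python
def summarize_steps(records):
    """Aggregate feature-step runs: records is a list of
    (step, status, cost) tuples.  Returns, per step, the percentage of
    'ok' runs and min/mean/max cost over the successful runs only."""
    by_step = {}
    for step, status, cost in records:
        by_step.setdefault(step, []).append((status, cost))
    summary = {}
    for step, runs in by_step.items():
        ok_costs = [c for s, c in runs if s == "ok"]
        summary[step] = {
            "ok_pct": 100.0 * len(ok_costs) / len(runs),
            "min": min(ok_costs) if ok_costs else None,
            "mean": sum(ok_costs) / len(ok_costs) if ok_costs else None,
            "max": max(ok_costs) if ok_costs else None,
        }
    return summary
```

Restricting the cost statistics to successful runs mirrors the table: the cost of a crashed step is not meaningfully comparable to that of a completed one.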

\begin{table}[t]
\centering
{\small
\begin{tabular}{rrrrrrrrr}
\toprule
& & \multicolumn{3}{c}{runstatus [\%]} & \multicolumn{4}{c}{cost [s]} \\ 
\cmidrule(lr){3-5}\cmidrule(lr){6-9}
feature step & $\#$ & ok & $\ldots$ & crash & min & mean & max & missing [\%] \\ 
  \midrule
  Pre 		& 115 & 100.00	& $\ldots$ & 0.00 & 0.00 & 0.06 & 1.31 & 0.00 \\ 
  Basic 		&  14 & 100.00 	& $\ldots$ & 0.00 & 0.00 & 0.00 & 0.07 & 0.00 \\ 
  KLB 		&  20 & 100.00 	& $\ldots$ & 0.00 & 0.00 & 0.18 & 6.09 & 0.00 \\ 
  CG 		&  10 & 62.63 	& $\ldots$ & 37.37 & 0.02 & 8.79 & 20.28 & 0.00 \\ 
  DIAMETER	&   5 & 100.00 	& $\ldots$ & 0.00 & 0.00 & 0.60 & 2.11 & 0.00 \\ 
  cl 		&  18 & 100.00 	& $\ldots$ & 0.00 & 0.01 & 1.99 & 2.02 & 0.00 \\ 
  sp 		&  18 & 100.00 	& $\ldots$ & 0.00 & 0.01 & 0.33 & 3.05 & 0.00 \\ 
  ls\_saps 	&  11 & 100.00 	& $\ldots$ & 0.00 & 1.36 & 2.12 & 2.51 & 0.00 \\ 
  ls\_gsat 	&  11 & 100.00 	& $\ldots$ & 0.00 & 2.03 & 2.29 & 3.03 & 0.00 \\ 
  lobjois 	&   2 & 100.00 	& $\ldots$ & 0.00 & 2.00 & 2.00 & 2.27 & 0.00 \\ 
   \bottomrule
\end{tabular}
}
\caption{Summary of the feature steps for the SAT$12$-RAND scenario. The second column shows how many features
  depend on each feature step as a requirement; next, the proportions of run status outcomes are listed, followed by basic
  statistics of the step costs.}
\label{table:features}
\end{table}

We also check whether several instances have exactly the same feature values, which indicates that the experimenter
might have erroneously included the same instance twice.
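This duplicate check amounts to grouping instances by their feature vectors; a minimal sketch, assuming features are available as a mapping from instance name to a tuple of values:

```python
def duplicated_instances(features):
    """Group instances whose feature vectors are exactly identical;
    features maps instance names to tuples of feature values.  Returns
    the groups with more than one member, which deserve a manual check."""
    groups = {}
    for inst, vec in features.items():
        groups.setdefault(tuple(vec), []).append(inst)
    return [sorted(g) for g in groups.values() if len(g) > 1]
```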

All of the tables and figures presented here, as well as additional ones omitted for space reasons, were
automatically generated by our online platform and are also accessible through the R package
\system{aslib}.
The functions are highly configurable, so users can apply them to
their own data exploration or publications and flexibly combine individual elements.
\changed{In the future, we plan to extend our data analysis with additional techniques, such as
further measures of algorithm performance~\cite{SmithMiles201412}.}

