\section{\changed{Basic Algorithm Selection Experiments}}
\label{sec:experiments}

\fh{In this section, we present exploratory benchmark experiments that give an indication of the diversity of our benchmarks. First, we evaluate the performance
of basic algorithm selectors on our scenarios. We then perform a subset selection study to identify the important algorithms and instance features in each of the scenarios.
We make no claim that the presented experimental settings are exhaustive or that we achieve state-of-the-art 
algorithm selection performance; rather, we provide 
baseline results that can be achieved by standard machine learning approaches for the core technology of an
algorithm selection system---the selector itself. These results, and our
framework in general, allow us to study which algorithm selection approaches work
well for which of our scenarios.}


\pk{In} order to reach the performance of current state-of-the-art algorithm selection systems~\cite{xu2012satzilla2012,malitsky_algorithm_2013}, we would have to include various extensions, such as cost-sensitive classification, as well as complementary techniques like pre-solving.\footnote{A pre-solver is a default solver that is run for a small amount of time without any algorithm selection
taking place; cf.\ \cite{xu_satzilla_2008}. The problem instances that are
solvable in this time are solved without incurring any of the overhead that
algorithm selection brings, such as the computation of features. This is
relevant in practice, as the cost of computing features can be much higher than
the cost of solving a very easy instance.}

We use the \system{LLAMA} toolkit~\cite{kotthoff_llama_2013}, version 0.8.1, in combination with the \system{aslib}
package\footnote{\url{https://github.com/coseal/aslib-r}} to
run the algorithm selection experiments. \system{LLAMA} is an R~\cite{R}
package
that facilitates many common algorithm selection scenarios. In particular, it
enables access to classification, regression, and clustering models for algorithm
selection---the three main approaches we use in our experiments---from other R
packages. As \system{LLAMA} does not include any machine learning
algorithms, we use the \system{mlr} R
package~\cite{bischl_mlr} as an interface to the machine learning models
provided by other R packages. We parallelize all of our benchmark experiments
through the \system{BatchExperiments} \cite{Bischl2015_1} R package.

In this paper, we present only aggregated benchmark results; the
interested reader can
access the full benchmark results at \url{http://aslib.net}. Our
experiments are fully reproducible, as the complete code for generating these
results is available in the GitHub repository mentioned earlier.


Note that for some of the algorithm selection scenarios, we opted to use
only a subset of the feature processing groups (and their associated features) \changed{as recommended by
the authors of the scenarios};
we did this because some feature steps are excessively expensive to compute and have
not conclusively proved their worth in selection models.
Detailed information (e.g., the names of the feature
processing groups we selected and their average costs) is provided on the ASlib webpage. 

\subsection{Data preprocessing}

Before running the experiments, we preprocessed the \changed{training} data as follows.
We removed constant-valued (and therefore irrelevant) features and imputed missing feature values as the mean over all non-missing values of the feature. 
We normalized the range of each feature to the interval $[-1,1]$. 
While this is unnecessary for some machine learning approaches (e.g.,
decision trees), it is often helpful or mandatory for others (e.g., SVMs or clustering). 
\pk{Missing performance values were imputed using the timeout value for the data set.}
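The preprocessing steps above can be sketched as follows. This is an illustrative Python sketch under our own naming, not the actual \system{LLAMA}/R implementation; it fits the transformation on the training rows only, as required for unbiased cross-validation.

```python
import numpy as np

def preprocess_features(X, train_idx):
    """Sketch of the feature preprocessing described above.

    X: 2-D array (instances x features), with NaN marking missing values.
    train_idx: indices of the training instances; statistics are fitted
    on these rows only and then applied to all rows.
    """
    X = X.astype(float)
    train = X[train_idx]

    # Drop constant-valued (and therefore irrelevant) features.
    keep = np.nanstd(train, axis=0) > 0
    X, train = X[:, keep], train[:, keep]

    # Impute missing values with the training mean of each feature.
    col_mean = np.nanmean(train, axis=0)
    nan_r, nan_c = np.where(np.isnan(X))
    X[nan_r, nan_c] = col_mean[nan_c]

    # Linearly rescale each feature to [-1, 1] based on the training range.
    lo = X[train_idx].min(axis=0)
    hi = X[train_idx].max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    return 2 * (X - lo) / span - 1
```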

For each problem instance, we calculated the feature computation cost based on the costs for the feature groups specified in the data. If the problem
instance was solved during feature computation, we only considered the cost of the features up
to the one that solved it. Furthermore, we set the runtime
for all algorithms to zero for instances solved during feature
computation. We added the feature costs computed in this way to the runtimes
of the individual algorithms on the respective instances. Given these new
runtimes, we checked whether the specified timeout was now exceeded by any
algorithm and set the run status of the corresponding algorithm accordingly. Preprocessing runtimes to include feature computation time in this way allows us to focus on an algorithm selection system's overall performance, and avoid overstating the fraction of instances that would be solved within a time budget in cases where features are expensive to compute. 

Each scenario specifies a partition into $10$ folds for cross-validation to ensure consistent evaluation
across different methods. We used this partition in our experiments as well.

\subsection{Experimental setup}

We consider three fundamentally different approaches to algorithm selection that have been studied extensively in the
literature (cf.\ Section~\ref{sec:background:how}): 

\begin{itemize}
  \item \emph{classification}: \fh{using a multi-class classifier to directly predict the best of the $k$ possible algorithms;}
  \item \emph{regression}: predicting each algorithm's performance via a regression model and then choosing the one with the best predicted performance;
  \item \emph{clustering}: clustering problem instances in feature space, then determining the cost-optimal solver for each cluster and finally assigning each new instance to the solver associated with the instance's predicted cluster.
\end{itemize} 
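As an illustration of the second approach, the following Python sketch fits one regression model per algorithm and selects the algorithm with the best predicted performance; the simple nearest-neighbour stand-in model is our own choice for self-containedness, and any regressor with a fit/predict interface would serve.

```python
import numpy as np

class OneNNRegressor:
    """Stand-in model: 1-nearest-neighbour regression (any regressor works)."""
    def fit(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y, float)
        return self
    def predict(self, X):
        d = np.linalg.norm(np.asarray(X, float)[:, None] - self.X[None], axis=2)
        return self.y[d.argmin(axis=1)]

def regression_selector(X_train, perf_train, X_test, make_model=OneNNRegressor):
    """Regression-based selection: fit one model per algorithm on its
    performance data, then pick, for each test instance, the algorithm
    with the lowest predicted runtime."""
    models = [make_model().fit(X_train, perf_train[:, j])
              for j in range(perf_train.shape[1])]
    preds = np.column_stack([m.predict(X_test) for m in models])
    return preds.argmin(axis=1)  # index of the selected algorithm per instance
```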

\begin{table}[thb]
\centering
\small
\begin{tabular}{p{0.2em}llr}
\toprule[1pt]
& Technical Name & Algorithm and parameter ranges & reference\\
\midrule
\multicolumn{2}{l}{\emph{classification}}\\
& ksvm & support vector machine & \cite{ksvm}\\
& & $C\in [2^{-12}, \; 2^{12}], \; \gamma \in [2^{-12}, \; 2^{12}]$\\
& randomForest & random forest  & \cite{rf}\\
& & $\texttt{ntree}\in [10, \; 200], \; \texttt{mtry}\in [1, \; 30]$\\
& rpart & recursive partitioning tree, CART & \cite{rpart}\\
\midrule
\multicolumn{2}{l}{\emph{regression}}\\
& lm & linear regression & \cite{R}\\
& randomForest & random forest & \cite{rf}\\
& & $\texttt{ntree} \in [10, \; 200], \; \texttt{mtry} \in [1, \; 30]$\\
& rpart & recursive partitioning tree, CART & \cite{rpart}\\
\midrule
\multicolumn{2}{l}{\emph{clustering}}\\
& XMeans & extended $k$-means clustering & \cite{hall_weka_2009}\\
\bottomrule[1pt]
\end{tabular}
\caption{Machine learning algorithms \pk{and their parameter ranges} used for our experiments.}
\label{tab:mlalgs}
\end{table}


The specific machine learning algorithms we employed for our experiments are
shown in Table~\ref{tab:mlalgs}. To provide a good baseline, they include
representatives from each of the three major approaches above. 
We tuned the hyperparameters of ksvm and randomForest (for both classification and regression) over the listed parameter ranges, using random search with $250$ iterations and nested cross-validation (with $3$ internal folds) to
ensure unbiased performance estimates.
All other parameters were left at their default values.
For the clustering algorithm, we set the (maximum) number of clusters to 30
after some preliminary experiments; the exact number of clusters was determined
dynamically by XMeans.
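Random search of this kind is straightforward to sketch. In the snippet below, `evaluate` stands for the inner cross-validated score of a parameter setting; sampling the SVM parameters log-uniformly over $[2^{-12}, 2^{12}]$ is our assumption, as the exact sampling distribution is not specified above.

```python
import random

def random_search(evaluate, param_ranges, iterations=250, seed=0):
    """Sketch of random search for hyperparameter tuning.

    evaluate: function mapping a parameter dict to an (inner)
              cross-validated score, lower is better
    param_ranges: dict name -> (low, high) with low > 0; values are
              sampled log-uniformly (our assumption)
    """
    rng = random.Random(seed)
    best, best_score = None, float("inf")
    for _ in range(iterations):
        # Log-uniform sample: low * (high/low)**u with u uniform in [0, 1).
        params = {name: lo * (hi / lo) ** rng.random()
                  for name, (lo, hi) in param_ranges.items()}
        score = evaluate(params)
        if score < best_score:
            best, best_score = params, score
    return best, best_score
```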

\subsection{Evaluation}

\pk{Each} of the algorithm selection models \pk{was evaluated based on} three different
measures: the fraction of all instances solved within the timeout;
the penalized average runtime with a penalty factor of 10 (PAR10: this means averaging runtimes with timeouts counting as 10 times the time budget); and the
average misclassification penalty (which, for a given instance, is the difference between the performance of the selected algorithm and the performance of the best algorithm).
\pk{The} performance of each algorithm selection model \pk{was compared} to the virtual best
solver (VBS) and the single best solver. 
Note that the misclassification penalty for VBS is
zero by definition. The single best solver is the (actual) solver that has the
overall best performance on the data set. Specifically, we consider the solver
with the best PAR10 score over all problem instances in a scenario.%
\footnote{The single best solvers determined according to the other
evaluation measures described here are presented in the detailed experimental
results on the ASlib web page; the results were qualitatively similar.}
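The three evaluation measures can be made concrete with a short sketch (again illustrative rather than the evaluation code actually used, which is part of \system{LLAMA}):

```python
import numpy as np

def evaluate_selection(perf, chosen, cutoff, penalty_factor=10):
    """Sketch of the three evaluation measures on one test split.

    perf: (instances x algorithms) runtimes; timeouts stored as >= cutoff
    chosen: index of the selected algorithm for each instance
    Returns (fraction solved, PAR10, mean misclassification penalty).
    """
    n = perf.shape[0]
    sel = perf[np.arange(n), chosen]
    solved = sel < cutoff
    # PAR10: average runtime with timeouts counted as 10x the cutoff.
    par10 = np.where(solved, sel, penalty_factor * cutoff).mean()
    # Misclassification penalty: per-instance gap to the best algorithm,
    # i.e. to the virtual best solver (zero for the VBS by definition).
    mcp = (sel - perf.min(axis=1)).mean()
    return solved.mean(), par10, mcp
```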

\subsection{Experimental results}


Figure~\ref{fig:res-heat} presents a summary of our experimental results. In
most cases, the algorithm selection approaches performed better than the single
best solver. We expected this, as all of our data sets came from 
publications that advocated algorithm selection systems.

Nevertheless, there were significant differences between the scenarios. While for
most of them, almost all algorithm selection approaches outperformed the single
best algorithm, there are some scenarios that seem to be much harder for algorithm
selection. In particular, on the \pk{SAT11-INDU} scenario, \pk{three}
approaches were not able to achieve a performance improvement
and all other approaches \pk{(with the exception of random regression forests)} improved only slightly.  

\begin{figure}[t]
\includegraphics[width=\textwidth]{pics/res-heat-diff}
\caption{Summary of the experimental results. 
\pk{We show how much of the gap between the single best and the virtual best
solver in terms of PAR10 score was closed by each model. That is, a value of 0
corresponds to the single best solver and a value of 1 to the virtual best.
Negative values (highlighted in \textit{italics}) indicate performance worse than the
single best solver. Within each data set, the best model is highlighted in
\textit{\textbf{bold italics}}. The shading emphasizes that comparison: dark
cells correspond to values close to 1 (i.e.\ close to the virtual best solver),
whereas lighter fillings correspond to the worse models. Above the heatmap,
the arithmetic mean is given for each model type
across all scenarios, allowing for a quick comparison of the different models.
The numbers on the right-hand side of the heatmap show the best performance
for each scenario.}
}
\label{fig:res-heat}
\end{figure}
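The normalized gap-closed score used in the heatmap amounts to a one-line computation; the sketch below uses PAR10 scores, with names of our own choosing:

```python
def gap_closed(model_par10, sbs_par10, vbs_par10):
    """Fraction of the gap between the single best solver (SBS) and the
    virtual best solver (VBS) closed by a selection model:
    0 = single best, 1 = virtual best, negative = worse than single best."""
    return (sbs_par10 - model_par10) / (sbs_par10 - vbs_par10)
```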

Random regression forests stood out as clearly the best overall approach, yielding the best performance on $11$ of the $13$ datasets. This is in line with recent results showing the strong performance of this model for algorithm runtime prediction~\cite{hutter2014algorithm}. The results are also consistent with those of the original papers introducing the datasets. For example, \citet{xu2012evaluating} reported somewhat better results for the three SAT$11$ datasets than those achieved here with our off-the-shelf methods (which is to be expected, since their latest SATzilla version used a cost-sensitive approach and pre-solving schedules).

XMeans performed worst on average.
We suspect that the performance of clustering approaches is highly sensitive to the selection 
and normalization of instance features. % to have a better distance metric in the feature space.
On some scenarios, XMeans performed well; it was the best-performing approach on SAT12-RAND.
However, on SAT11-INDU, SAT12-INDU, and SAT12-ALL (which also partly consists of industrial SAT instances),
XMeans performed worse than the single best solver. 
This leads us to suspect that the default subset of instance features is not favorable for XMeans on industrial SAT instances. 





\subsection{Algorithm and Feature Subset Selection}

To provide further insights into our algorithm selection scenarios,
we applied forward selection~\cite{Kohavi97} to the algorithms and features to
determine whether smaller subsets still achieve comparable performance.
We performed forward search independently for algorithms and features for each
scenario.

The process starts with the empty set, then greedily and iteratively adds the
algorithm or feature that most improves the cross-validated score (PAR$10$) of the predictor.
The selection terminates when the PAR$10$ score fails to improve by at least $1$.
In all other respects, the experimental setup was the same as described above.
As the predictive model, we used the random regression forest,\footnote{We
used the random forest with default parameters, as the tuning had been performed for the
full set of features and solvers.} as it was the best overall approach in the preceding experiments.
We note that the selection results use normal resampling and not the
nested version, which may result in \fh{overconfident performance estimates for
the selected subsets~\cite{Bischl2012_3}}.
We accept this caveat since our goal here is to study the ranking of the features and the size of the selected sets,
and a more complex, nested approach would have resulted in multiple selected sets.
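The greedy procedure just described can be sketched as follows; `score` stands for the cross-validated PAR10 evaluation of a candidate subset, and the names are our own:

```python
def forward_selection(candidates, score, min_improvement=1.0):
    """Greedy forward selection over algorithms or features.

    candidates: list of algorithms (or features) to choose from
    score: function mapping a list of candidates to a cross-validated
           PAR10 score (lower is better)
    Returns the selected subset, in order of addition.
    """
    selected, best = [], float("inf")
    while True:
        remaining = [c for c in candidates if c not in selected]
        if not remaining:
            return selected
        # Evaluate every one-element extension and keep the best addition.
        scored = [(score(selected + [c]), c) for c in remaining]
        new_best, choice = min(scored)
        # Terminate when the score improves by less than min_improvement.
        if best - new_best < min_improvement:
            return selected
        selected.append(choice)
        best = new_best
```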

\begin{table}[thb]
\small
\begin{center}
\begin{tabular}{p{.3\textwidth}ccccc}
\toprule
 & Original & \multicolumn{2}{c}{Algorithms} & \multicolumn{2}{c}{Features}\\
 & (tuned)\\
\input{results_selection}
\end{tabular}
\caption{Number of selected algorithms and features and the resulting PAR$10$ values for the 
  corresponding reduced sets. For reference, the second column lists the PAR$10$ score of 
  the regression forest on the full algorithm and feature sets from the previous
  set of experiments.}
\label{tab:fasel}
\end{center}
\end{table}    

Table~\ref{tab:fasel} presents the results of forward selection for
algorithms and features on all scenarios.
Usually, the number of selected features is very small compared to the complete
feature set.
This is consistent with the observations of \emcite{hutter_identifying_2013}
who showed that only a few instance features are necessary to reliably predict
the runtime of algorithm configurations. For example, on SAT12-RAND, the only
three features selected were \fh{a feature based on survey propagation, concerning the probability of
variables being unconstrained, and two balance features, concerning the ratio of positive to negative (1) occurrences of each variable and
(2) literals in each clause.}

The number of algorithms after forward selection is also substantially reduced
on most scenarios. On the SAT scenarios, we expected to see this because the
scenarios consider a huge set of SAT solvers that were not pre-selected in any
way. \emcite{xu2012evaluating} showed that many SAT solvers are strongly
correlated (see Figure \ref{fig:cormat} in
Section~\ref{sec:eda}) and make only very small contributions to the VBS. For
example, on the SAT12-RAND scenario, only three solvers were selected: sparrow,
eagleup, and lingeling.
We did not expect the set of algorithms to be reduced on the
ASP-POTASSCO scenario, as the portfolio was automatically constructed using
algorithm configuration to obtain a set of complementary parameter settings that
are particularly amenable to portfolios; \fh{indeed, forward selection kept as many as 
8 of the 11 configurations.}


Our results indicate that in real-world settings, selecting the most predictive
features and the solvers that make the highest contributions can be important.
More detailed results can be found on the ASlib website.

