
\section{Algorithm Selection Scenarios Provided in ASlib Release 1.0.1}
\label{sec:scenarios}


The set of algorithm selection scenarios in release version 1.0.1 of our library, shown in Table~\ref{tab:overview},
has been assembled \fh{to represent a diverse set of selection problem settings covering a wide range of problem domains, types of algorithms, features and problem instances. Our scenarios include both problems that have been studied extensively in the context of algorithm selection (such as SAT and CSP)
and more recent ones (such as the container pre-marshalling problem).}
All of our scenarios were taken from publications that
report performance improvements through algorithm selection
and consist
of algorithms where the virtual best solver (VBS)\footnote{The VBS is defined as a solver that perfectly selects the best solver from a given set on a per-instance basis.} is 
significantly better than the single best algorithm.\footnote{The single best algorithm has the best performance averaged across all instances.} 
Therefore, these are problems on which it makes sense to seek performance improvements via algorithm selection.
All scenarios are available on our online platform
(\url{http://www.aslib.net/}).
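The gap between the VBS and the single best algorithm can be computed directly from a scenario's performance matrix. The following sketch illustrates both baselines on a hypothetical runtime matrix (the numbers are invented for illustration and do not come from any ASlib scenario):

```python
# Hypothetical runtimes (seconds): rows are instances, columns are algorithms.
runtimes = [
    [10.0, 50.0, 3.0],
    [40.0, 5.0, 60.0],
    [7.0, 90.0, 8.0],
]

def single_best(runtimes):
    """Mean runtime of the one algorithm that is best on average
    across all instances (the single-best-solver baseline)."""
    n_algorithms = len(runtimes[0])
    means = [sum(row[j] for row in runtimes) / len(runtimes)
             for j in range(n_algorithms)]
    return min(means)

def virtual_best(runtimes):
    """Mean runtime of the virtual best solver (VBS), which picks
    the fastest algorithm on every single instance."""
    return sum(min(row) for row in runtimes) / len(runtimes)
```

On this matrix the VBS achieves a mean runtime of 5.0 versus 19.0 for the single best algorithm; a scenario is only worth including when this gap is substantial.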

The scenarios we provide here are focused
on constraint satisfaction problems, 
but we encourage readers to submit new scenarios.
In the following, \fh{we briefly describe the scenarios we included and what makes them interesting.}

\begin{table}
\begin{center}
\small
\begin{tabular}{l cccccc}
\toprule
scenario 				& $\#\mathcal{I}$ 		& $\#\mathcal{A}$ 		& $\#\mathcal{F}$  		& $\#\mathcal{F}_g$  & Costs 	 & Literature \\
\midrule
 SAT$11$-HAND 	& $296$ 	 	& $15$ 	 		& $115$  		& $10$				  & $\checkmark$  & \cite{xu_satzilla_2008}\\
 SAT$11$-INDU 	& $300$			& $18$ 	 		& $115$  		& $10$				  & $\checkmark$ & \cite{xu_satzilla_2008}\\
 SAT$11$-RAND 	& $600$ 		& $9$ 	 		& $115$  		& $10$				  & $\checkmark$ & \cite{xu_satzilla_2008}\\
 SAT$12$-ALL	 	& $1614$		& $31$ 	 		& $115$  		& $10$				  & $\checkmark$ & \cite{xu2012satzilla2012}\\
 SAT$12$-HAND 	& $767$			& $31$ 	 		& $115$  		& $10$				  & $\checkmark$ & \cite{xu2012satzilla2012}\\
 SAT$12$-INDU  	& $1167$		& $31$ 	 		& $115$  		& $10$				  & $\checkmark$ & \cite{xu2012satzilla2012}\\
 SAT$12$-RAND  	& $1362$		& $31$ 	 		& $115$ 		& $10$				  & $\checkmark$ & \cite{xu2012satzilla2012}\\
 QBF-$2011$  	 	&  $1368$		& $5$ 	 		& $46$  		& $1$				  & $\times$ & \cite{pulina_self-adaptive_2009}\\
 MAXSAT$12$-PMS 	& $876$			& $6$ 	 		& $37$  		& $1$				  & $\checkmark$  & \cite{malitsky_evolving_2013} \\
 CSP-$2010$  	 	& $2024$		& $2$ 	 		& $17$  		& $1$				  & $\times$ & \cite{gent_learning_2010}  \\
 PROTEUS-$2014$		& $4021$		& $22$			& $198$			& $4$				  & $\checkmark$ & \cite{hurley_proteus_2014}\\
 ASP-POTASSCO 	 	& $1294$		& $11$ 	 		& $138$ 		& $5$				  & $\checkmark$ & \cite{holisc14a}\\
 PREMARSHALLING-ASTAR-$2013$ & $527$ & $4$ 	 		& $16$  		& $1$				  & $\times$ & \cite{Ti14tr} \\
\bottomrule
\end{tabular}
\caption{Overview of the algorithm selection scenarios in ASlib with the number of instances $\#\mathcal{I}$, 
number of algorithms $\#\mathcal{A}$, 
number of features $\#\mathcal{F}$,
number of feature processing groups $\#\mathcal{F}_g$
and availability of feature costs.}
\label{tab:overview}
\end{center}
\end{table}

\subsection{SAT: propositional satisfiability}


The propositional satisfiability problem (SAT) is a classic NP-complete problem that consists of determining the existence of an assignment of values to variables of a Boolean formula such that the formula is true. It is widely studied, with many applications including formal verification \cite{prasad2005survey}, scheduling \cite{crawford1994experiment}, planning \cite{kautz1999unifying} and graph coloring \cite{van2008another}. Our SAT data mainly stems from different iterations of the SAT competition,\footnote{\url{http://www.satcompetition.org/}} \changed{which is split into three tracks: industrial (INDU), crafted (HAND), and random (RAND).}


\changed{The SAT scenarios are characterized by a high level of maturity and diversity in terms of their solvers, features and instances. Each SAT scenario involves a highly diverse set of solvers, many of which have been developed for several years.} \fh{In addition, the set of SAT features is probably the best-studied feature set among our scenarios; it includes both static and probing features that are organized into as many as ten different feature groups. The instance sets used in our various SAT scenarios range from randomly-generated ones to real-world instances submitted by the industry.}
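As a minimal illustration of the decision problem underlying these scenarios (not of the competition solvers themselves, which are far more sophisticated), satisfiability of a small CNF formula can be checked by enumerating all assignments:

```python
from itertools import product

def is_satisfiable(clauses, n_vars):
    """Brute-force SAT check in DIMACS-style encoding: each clause is a
    list of non-zero integers, where literal v means variable v is true
    and -v means variable v is false."""
    for assignment in product([False, True], repeat=n_vars):
        def holds(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(holds(lit) for lit in clause) for clause in clauses):
            return True
    return False
```

For example, $(x_1 \lor x_2) \land (\lnot x_1 \lor x_2)$ is satisfiable, while $x_1 \land \lnot x_1$ is not. Real solvers replace this exponential enumeration with clause learning and search heuristics.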

\subsection{QBF-2011: Quantified Boolean Formula solver evaluation 2010}

A quantified Boolean formula (QBF) is a formula in propositional logic with
\changed{universal or existential} quantifiers \changed{on each variable in the} formula. 
A QBF solver finds a set of variable assignments
that makes the formula true or proves that no such set can exist. This is a
\hh{PSPACE-complete} problem\changed{ for which 
solvers exhibit a wide range of} performance characteristics.
Our QBF-2011 data set comes from the QBF Solver Evaluation
2010\footnote{\url{http://www.qbflib.org/index_eval.php}} and consists of instances from the main, small hard, 2QBF and random tracks.
The instance features and solvers are taken from the AQME
system and described in more detail by Pulina et al.~\cite{pulina_self-adaptive_2009}.

\changed{Although the QBF scenario \hh{includes only five algorithms, this
set is highly diverse. Our QBF solvers}
and instances are taken from a competition setting that was used to evaluate the
performance of the solvers, similar to the SAT domain just described.}
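The quantifier alternation that makes QBF PSPACE-complete can be made concrete with a small recursive evaluator (an illustrative sketch only; the AQME solvers are far more elaborate):

```python
def eval_qbf(prefix, clauses, assignment=None):
    """Evaluate a prenex QBF by recursing over the quantifier prefix.
    prefix: list of ('A' or 'E', variable) pairs; clauses: CNF over
    those variables, with literal v true and -v false as in DIMACS."""
    assignment = assignment or {}
    if len(assignment) == len(prefix):
        def holds(lit):
            value = assignment[abs(lit)]
            return value if lit > 0 else not value
        return all(any(holds(lit) for lit in clause) for clause in clauses)
    quantifier, var = prefix[len(assignment)]
    branches = (eval_qbf(prefix, clauses, {**assignment, var: value})
                for value in (False, True))
    return all(branches) if quantifier == 'A' else any(branches)
```

For instance, $\forall x\, \exists y\, (x \leftrightarrow y)$ is true, while $\exists y\, \forall x\, (x \leftrightarrow y)$ is false: quantifier order matters.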

\subsection{MAXSAT12-PMS: Partial Maximum Satisfiability}

MaxSAT is the optimization version of the previously introduced SAT problem, and aims to find a variable assignment that maximizes the number of satisfied clauses. The MaxSAT problem representation can be used to effectively encode a number of real-world problems, such as 
FPGA routing~\cite{XuRuSa03}, and software package installation~\cite{ArBeLyMaRa10}, among others, as it permits reasoning about both optimality and feasibility.
%
This particular scenario focuses on the partial MaxSAT (PMS) problem~\cite{SATHandbook}, in which the clauses of a Boolean formula are classified as either hard or soft: a PMS solver must satisfy all hard clauses while minimizing the number of unsatisfied soft clauses.


\changed{
This scenario is composed of a collection of random, crafted and industrial instances from the 2012 MaxSAT Evaluation~\cite{MAXSATevaluations}, which makes it especially diverse in comparison to the other scenarios. The techniques used to solve the various instances in this scenario tend to be significantly different from each other, leading to a substantial performance gap between the best single solver and the virtual best solver. Furthermore, \hh{because there are only six solvers with very different performance characteristics, algorithm selection} approaches must be very accurate in their choices, since any mistake is heavily penalized.
}
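The hard/soft structure of PMS can be illustrated with a brute-force sketch (toy code, not one of the evaluated solvers): every hard clause must hold, and among the assignments that satisfy them we maximize the number of satisfied soft clauses.

```python
from itertools import product

def best_pms(hard, soft, n_vars):
    """Brute-force partial MaxSAT: return the maximum number of soft
    clauses that can be satisfied while every hard clause holds, or
    None if the hard clauses alone are unsatisfiable. Clauses use the
    DIMACS convention (literal v true, -v false)."""
    best = None
    for assignment in product([False, True], repeat=n_vars):
        def holds(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        if not all(any(holds(lit) for lit in clause) for clause in hard):
            continue
        score = sum(any(holds(lit) for lit in clause) for clause in soft)
        if best is None or score > best:
            best = score
    return best
```

With the hard clause $x_1$ and soft clauses $\lnot x_1$, $(x_1 \lor x_2)$ and $x_2$, at most two soft clauses can be satisfied.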

\subsection{CSP-2010: Lazy learning in constraint solving}

Constraint programming~\cite{StuckeyFSTF14} is concerned with finding solutions to constraint
satisfaction problems---a task that is NP-complete. Learning in the
context of constraint solving is a technique by which previously unknown
constraints that are implied by the problem specification are uncovered during
search and subsequently used to speed up the solving process.
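A minimal backtracking search conveys the basic solving process that the scenario's two solvers share (without the lazy learning that distinguishes them); the variable names and constraint encoding here are illustrative choices:

```python
def solve_csp(domains, constraints, assignment=None):
    """Backtracking search for a binary CSP. domains maps each variable
    to its candidate values; constraints maps ordered variable pairs
    (x, y) to a predicate that must hold once both are assigned."""
    assignment = assignment or {}
    if len(assignment) == len(domains):
        return assignment
    var = next(v for v in domains if v not in assignment)
    for value in domains[var]:
        trial = {**assignment, var: value}
        consistent = all(pred(trial[x], trial[y])
                         for (x, y), pred in constraints.items()
                         if x in trial and y in trial)
        if consistent:
            result = solve_csp(domains, constraints, trial)
            if result is not None:
                return result
    return None
```

Colouring a triangle graph needs three colours: with two-value domains the search returns None, while with three values it finds a proper colouring.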


\changed{
The scenario contains only two solvers: one that employs lazy
learning~\cite{gent_lazy_2010, gent_learning_2010} and one that does not~\cite{gent_minion_2006}.
The data set is heavily biased towards the non-learning solver, such that the baseline (the
single best algorithm) is very good already. Improving on this is a challenging
task and harder than in many of the other scenarios. Furthermore, both solvers
share a common core, which results in a scenario that directly evaluates the
efficacy of a specific technique in different contexts.
}

\subsection{\changed{PROTEUS-2014}}

The PROTEUS scenario, stemming from~\cite{hurley_proteus_2014}, includes an extremely diverse mix
of well-known CSP solvers alongside competition-winning SAT solvers 
that have to solve (converted) XCSP instances\footnote{The XCSP instances are taken from
\url{http://www.cril.univ-artois.fr/~lecoutre/benchmarks.html} as described
in~\cite{hurley_proteus_2014}.}. 
The SAT solvers can accept different conversions of the CSP
problem into SAT (see, e.g.,~\cite{daniel_simple_csp_to_sat,tamura_sugar,tanjo_azucar}), 
which in our format are provided as separate algorithms. 
Indeed, this scenario is the only one in which
solvers are tested with varying ``views'' of the same problem. Furthermore, the
features of this scenario are also unique in that they include both the SAT and
CSP features for a given instance. This potentially provides additional
information to the selection approach that would normally not be available for
solving CSPs. An algorithm selection system has a very high degree of
flexibility here and may choose to perform only part of the possible
conversions, thereby reducing the set of solvers and features, but also reducing
the overhead of performing the conversions and feature computations. There are
also synergies between feature computation and algorithm runs that can be
exploited, e.g., if the same conversion is used for feature computation and to
run the chosen algorithm then the cost of performing the conversion is only incurred
once. In other cases, where features are computed on one representation but
another is solved, conversion costs are incurred during both feature
computation and the algorithm run.
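The bookkeeping described above can be stated compactly. In this sketch (the names and costs are hypothetical, not part of the ASlib format), a conversion is paid once when features and the chosen solver share a representation, and once per representation otherwise:

```python
def total_cost(conversion_cost, feature_time, runtime,
               feature_repr, solver_repr):
    """Total cost of solving one instance in a PROTEUS-like setting.
    conversion_cost maps each representation (e.g. a CSP-to-SAT
    encoding) to the cost of producing it; a representation used for
    both feature computation and solving is converted only once."""
    representations = {feature_repr, solver_repr}  # a shared repr is counted once
    conversions = sum(conversion_cost[r] for r in representations)
    return conversions + feature_time + runtime
```

With conversion costs of 2.0 (SAT) and 0.5 (CSP), feature time 1.0 and runtime 10.0, sharing the SAT representation costs 13.0, whereas computing features on the CSP view and solving the SAT view costs 13.5.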


\subsection{ASP-POTASSCO: Answer Set Programming}

Answer Set Programming (ASP, \cite{baral02a,gekakasc12a}) is a form of declarative programming 
with roots in knowledge representation, non-monotonic reasoning and constraint solving.
In contrast to many other constraint solving domains (e.g., the satisfiability problem),
ASP provides a rich yet simple declarative modeling language 
in which problems \changed{up to $\Delta_3^{\rm p}$ (disjunctive optimization problems)} can be expressed.
\changed{ASP has been applied successfully to many real-world applications},
e.g., product configuration~\cite{soinie99a},
decision support for NASA shuttle controllers~\cite{nobagewaba01a},
synthesis of multiprocessor systems~\cite{ismabogesc09a} 
and industrial team building~\cite{griilelirisc10a}.


\changed{
In contrast to the other scenarios,
the algorithms in the ASP scenario were automatically constructed by an adapted version of \system{Hydra}~\cite{xu_hydra_2010},
i.e., the set of algorithms consists of complementary configurations of the solver \system{clasp}~\cite{gekasc12b}.
The instance features were also generated by a lightweight version of \system{clasp},
including static and probing features organized into feature groups; they
were previously used in the algorithm selector \system{claspfolio}~\cite{gebser_portfolio_2011,holisc14a}.}
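The \system{Hydra}-style construction can be sketched as a greedy loop that repeatedly adds whichever configuration most improves the portfolio's virtual-best runtime (a simplification: \system{Hydra} additionally runs algorithm configuration to generate candidates; the runtime data below is invented):

```python
def greedy_portfolio(runtimes, k):
    """Greedy portfolio construction: repeatedly add the configuration
    that yields the largest improvement in total virtual-best runtime.
    runtimes maps configuration names to lists of per-instance runtimes."""
    n_instances = len(next(iter(runtimes.values())))
    portfolio = []

    def vbs_with(candidate):
        # Total runtime of a perfect selector over the extended portfolio.
        chosen = portfolio + [candidate]
        return sum(min(runtimes[c][i] for c in chosen)
                   for i in range(n_instances))

    for _ in range(min(k, len(runtimes))):
        best = min((c for c in runtimes if c not in portfolio), key=vbs_with)
        portfolio.append(best)
    return portfolio
```

On two instances where configurations 'a' and 'b' each dominate one instance, the greedy loop picks the complementary pair rather than the mediocre all-rounder 'c'.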







\subsection{PREMARSHALLING-ASTAR-2013: Container pre-marshalling}

\changed{The container pre-marshalling problem (CPMP) is an NP-hard container
stacking problem from the container terminals literature~\cite{StVo08}.
\hh{We constructed an algorithm selection scenario from two recent A* and IDA* approaches for
solving the CPMP presented in~\cite{TiPaVo14tr}, using instances from the
literature}. The scenario is described in detail in~\cite{Ti14tr}.

The pre-marshalling scenario \hh{differs from} the other scenarios most notably in its highly homogeneous set of algorithms. All of the algorithms
are parameterizations of a single symmetry breaking heuristic, either using the
A* or IDA* search techniques, which stands in sharp contrast to the diversity
of solvers present in most other datasets. Furthermore, the features provided
are new and not as well tested as in the other scenarios, perhaps more accurately
resembling the features that would be created by domain experts on their first
attempt at modeling a problem. Finally, the scenario represents a real-world,
time-sensitive problem from the operations research literature, where algorithm
selection techniques can have a large impact.}

