\section{Background}
\label{sec:related_work}

\citet{rice_algorithm_1976} was the first to formalize the idea of selecting among different algorithms on a per-instance basis. While he referred to the problem simply as \emph{algorithm selection}, we prefer the more precise term \emph{per-instance algorithm selection}, to avoid confusion with the (simpler) task of selecting one of several given algorithms to optimize performance on a given set or distribution of instances.

\begin{define}[Per-instance algorithm selection problem]
	\label{def:algo_sel}
	Given a set $\mathcal{I}$ of problem instances and a 
	distribution $\mathcal{D}$ over $\mathcal{I}$, a space of algorithms $\mathcal{A}$, and 
	a performance measure $m: \mathcal{I} \times \mathcal{A} \rightarrow \mathds{R}$, the 
	\emph{per-instance algorithm selection problem} 
	is to 
	find a mapping\footnote{In practice, the mapping $s$ is often implemented by using 
		so-called instance features, i.e., 
		numerical characterizations of the instances $i \in \mathcal{I}$.
		These instance features are then mapped to an algorithm using 
		machine learning techniques. However, the computation of instance features incurs additional costs,
		which have to be considered in the performance measure $m$.
        }
        $s: \mathcal{I} \rightarrow \mathcal{A}$ that optimizes $\mathds{E}_{i \sim \mathcal{D}}\,m(i, s(i))$,
	i.e., the expected performance achieved by running the selected algorithm $s(i)$ on instances $i \in \mathcal{I}$ drawn from \mbox{distribution $\mathcal{D}$}.
\end{define}
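To make the objective concrete, the following sketch (with hypothetical runtime data, a uniform distribution over a finite instance set, and a runtime-style measure to be minimized) contrasts the expected performance of the single best fixed algorithm with that of the oracle (``virtual best'') selector, which lower-bounds what any mapping $s$ can achieve:

```python
# Hypothetical performance data: perf[i][a] is the runtime (in seconds, lower
# is better) of algorithm a on instance i.
perf = {
    "i1": {"A": 10.0, "B": 2.0},
    "i2": {"A": 3.0,  "B": 30.0},
    "i3": {"A": 5.0,  "B": 4.0},
}

def mean_perf(selector):
    """Expected performance of a selector under a uniform distribution D."""
    return sum(perf[i][selector(i)] for i in perf) / len(perf)

# Single best: the one fixed algorithm with the best average performance.
single_best = min(["A", "B"], key=lambda a: mean_perf(lambda i: a))

# Oracle ("virtual best"): the per-instance optimum; no selector can do better.
def oracle(i):
    return min(perf[i], key=perf[i].get)

print(single_best)                       # "A" (avg 6.0 vs 12.0 for "B")
print(mean_perf(lambda i: single_best))  # 6.0
print(mean_perf(oracle))                 # 3.0 -- the gap motivates selection
```

The gap between the single best (6.0) and the oracle (3.0) is exactly what a good selection mapping tries to close.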


There are many ways of tackling per-instance algorithm selection and related problems in practice. However, almost all contemporary approaches use machine learning 
to build predictors of the behaviour of the given algorithms as a function of instance features. 
This general strategy may involve a single learned model or a combination of several models that, given a new problem instance 
to solve, decides which algorithm or which combination of
algorithms to run.

Given a portfolio of algorithms and a set of problem instances, building an algorithm selection model entails answering several questions, which we discuss in what follows.

\subsection{What to select and when}

It is perhaps most natural to select a single
algorithm for solving a given problem instance. This is used in the
SATzilla~\cite{Satzilla03,xu_satzilla_2008},
\textsc{ArgoSmArT}~\cite{nikoli_instance-based_2009},
SALSA~\cite{demmel_self-adapting_2005} and
\textsc{Eureka}~\cite{cook_maximizing_1997} systems, to name but a few examples.
The main disadvantage of this approach is that a poor selection cannot be mitigated: there is no way of recovering if the system chooses an algorithm that performs badly on the given
problem instance.

Alternatively, we can seek a schedule that determines an ordering and time budget according to which we run all or a subset of the algorithms in the portfolio; usually, this schedule 
is chosen in a way that reflects the expected performance of the given algorithms (see, e.g., \cite{pulina_self-adaptive_2009,cphydra,kadioglu_algorithm_2011,hoos_aspeed_2014}).
Under some of these approaches, the computation of the schedule is
treated as an optimization problem that aims to maximize, e.g., the number of
problem instances solved within a timeout. For stochastic algorithms, the further
question of whether and when to restart an algorithm arises, opening the possibility of schedules that contain only a single algorithm, restarted several times (see, e.g., \cite{gomes_algorithm_2001,cicirello_max_2005,streeter_restart_2007,gagliolo_learning_restart_strategies_2007}).
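As a minimal illustration of schedule computation (all runtimes hypothetical; real systems treat this as a full optimization problem), the following sketch brute-forces algorithm orderings and per-algorithm time slices over a coarse budget grid, maximizing the number of instances solved within a global timeout:

```python
from itertools import permutations, product

# Hypothetical runtimes: runtime[a][i] is the time algorithm a needs on
# instance i; float("inf") means a never solves i.
runtime = {
    "A": [1.0, float("inf"), 9.0],
    "B": [float("inf"), 2.0, 3.0],
}
TIMEOUT = 10.0
BUDGETS = [2.0, 4.0, 6.0, 8.0]  # coarse grid of per-algorithm time slices

def solved(schedule):
    """Count instances solved by running (algorithm, budget) pairs in order.
    (In this simplified model, ordering affects only how soon instances are
    solved, not how many.)"""
    count = 0
    for i in range(3):
        if any(runtime[alg][i] <= budget for alg, budget in schedule):
            count += 1
    return count

# Brute force over orderings and budget assignments fitting the timeout;
# pick the schedule solving the most instances.
best_schedule = max(
    (list(zip(order, budgets))
     for order in permutations(runtime)
     for budgets in product(BUDGETS, repeat=len(runtime))
     if sum(budgets) <= TIMEOUT),
    key=solved,
)
```

Here the best schedule solves all three instances, although neither algorithm alone can do so within the timeout, illustrating the hedging value of schedules.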
Instead of performing algorithm selection only once before starting to solve a problem, selection can
also be carried out repeatedly while the instance is being solved, taking into account information revealed during the algorithm run. 
Such methods monitor the execution of
the chosen algorithm(s) and take remedial action if performance deviates from
what is expected~\cite{gagliolo_adaptive_2004,MUSportfolio,LeiteBV12}, or perform selection 
repeatedly for subproblems of the given instance \cite{LAG1,LAG2,arbelaez_continuous_2010,samulowitz_learning_2007}.

\subsection{How to select}\label{sec:background:how}

The kinds of decisions the selection process is asked to produce drive the choice of machine learning models that perform the selection. If only a
single algorithm should be run, we can train a classification model that makes
exactly that prediction. This renders algorithm selection conceptually quite
simple---only a single machine learning model needs to be trained and run to
determine which algorithm to choose (see, e.g., \cite{guerri_learning_2004,gent_learning_2010,malitsky_non-model-based_2011}).
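A minimal sketch of classification-based selection, using a simple 1-nearest-neighbour rule on hypothetical two-dimensional instance features in place of the learned models employed by the cited systems:

```python
# Hypothetical training data: instance feature vectors and the label of the
# algorithm that performed best on each instance.
train_features = [(0.1, 5.0), (0.2, 4.8), (3.0, 0.5), (2.8, 0.7)]
train_labels   = ["A", "A", "B", "B"]

def select(features):
    """1-nearest-neighbour classification: return the label of the closest
    training instance in (squared Euclidean) feature space."""
    def dist(x):
        return sum((a - b) ** 2 for a, b in zip(x, features))
    nearest = min(range(len(train_features)),
                  key=lambda k: dist(train_features[k]))
    return train_labels[nearest]

print(select((0.15, 5.1)))  # "A"
print(select((2.9, 0.6)))   # "B"
```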

An alternative to selecting a single algorithm via classification is to use regression models that predict the performance of each algorithm in the
portfolio, and then to select the algorithm with the best predicted performance. This regression approach
was adopted by early versions of SATzilla~\cite{Satzilla03,xu_satzilla_2008}, as
well as by several other
systems~\cite{roberts_learned_2007,silverthorn_latent_2010,Mersmann2013}.
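The regression approach can be sketched as follows, with a simple least-squares line fit standing in for the more sophisticated performance models used in practice (all data hypothetical):

```python
# Hypothetical training data: for each algorithm, observed pairs of
# (instance feature value, runtime).
history = {
    "A": [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)],  # runtime grows with feature
    "B": [(1.0, 5.0), (2.0, 4.0), (3.0, 3.0)],  # runtime shrinks with feature
}

def predict(alg, x):
    """Least-squares line fit through the observed (feature, runtime) pairs."""
    pts = history[alg]
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    slope = (sum((p[0] - mx) * (p[1] - my) for p in pts)
             / sum((p[0] - mx) ** 2 for p in pts))
    return my + slope * (x - mx)

def select(x):
    """Run the algorithm with the lowest predicted runtime on a new instance."""
    return min(history, key=lambda a: predict(a, x))

print(select(1.0))  # "A"
print(select(3.0))  # "B"
```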


Other approaches include 
the use of clustering techniques to partition problem instances in feature space and make decisions for each partition
separately~\cite{stergiou_heuristics_2009,kadioglu_isac_2010}, hierarchical models that make a series of
decisions~\cite{xu_hierarchical_2007,hurley_proteus_2014}, and cost-sensitive support vector
machines~\cite{Bischl2012_2}. The current version of
SATzilla~\cite{xu_hydra-mip_2011} uses cost-sensitive decision forests to
predict, for each pair of algorithms, which of the two will perform better on a given instance, and selects the algorithm that receives the most of these ``votes''.
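The pairwise-voting scheme can be sketched as follows, with the learned cost-sensitive models replaced by a hypothetical threshold rule on a single feature value:

```python
from itertools import combinations

algorithms = ["A", "B", "C"]

def pairwise_winner(a, b, feature):
    """Stand-in for a learned cost-sensitive model predicting which of two
    algorithms performs better; here, a hypothetical rule on one feature."""
    threshold = {("A", "B"): 1.0, ("A", "C"): 2.0, ("B", "C"): 3.0}
    return a if feature < threshold[(a, b)] else b

def select(feature):
    """Cast one 'vote' per algorithm pair; run the algorithm with most votes."""
    votes = {a: 0 for a in algorithms}
    for a, b in combinations(algorithms, 2):
        votes[pairwise_winner(a, b, feature)] += 1
    return max(votes, key=votes.get)

print(select(0.5))  # "A"
print(select(2.5))  # "B"
print(select(5.0))  # "C"
```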


\subsection{Selection enablers}


In order to make their decisions, algorithm selection systems need information
about the problem instance to be solved and about the performance of the algorithms in the
given portfolio. The extraction of this information---the
features used by the machine learning techniques that perform the selection---incurs overhead not present
when a single algorithm is used for all instances, regardless of their characteristics. 
It is therefore desirable
to extract this information as cheaply as possible, ensuring that the performance benefits
of algorithm selection are not outweighed by the overhead.

Some approaches use only past performance of the algorithms in the portfolio
as a basis for selecting the one(s) to be run on a given problem
instance~\cite{gagliolo_adaptive_2004,streeter_combining_2007,silverthorn_latent_2010}. This approach has the benefit that the required data can be collected with minimal overhead as algorithms are executed. 
It can work well if the performance of the algorithms is similar
on broad ranges of problem instances. However, when this assumption is not 
satisfied (as is often the case), more informative features are needed.


Turning to richer instance-specific features, commonly used examples include the number of variables of a problem instance,
properties of the variable domains (e.g., the set of possible assignments in
constraint problems), the number of clauses in SAT, and the number of goals in
planning). 
Deeper analysis can involve properties of graph representations derived from the
input instance (such as the constraint graph~\cite{leyton2003portfolio,gent_learning_2010}) or properties of encodings into
different problems (such as SAT features for SAT-encoded planning
problems~\cite{fawcett2014improved}).
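As an illustration of such cheap syntactic features, the following sketch computes basic statistics from a toy SAT instance in DIMACS CNF format (the feature names are chosen for this example only):

```python
# A toy SAT instance in DIMACS CNF format (header: 3 variables, 4 clauses).
cnf = """\
p cnf 3 4
1 -2 0
2 3 0
-1 -3 0
1 2 3 0
"""

def dimacs_features(text):
    """Cheap syntactic features: variable/clause counts and simple ratios."""
    clauses = [line.split()[:-1]  # drop the trailing "0" terminator
               for line in text.splitlines()
               if line and not line.startswith(("p", "c"))]
    variables = {abs(int(lit)) for clause in clauses for lit in clause}
    return {
        "n_vars": len(variables),
        "n_clauses": len(clauses),
        "clause_var_ratio": len(clauses) / len(variables),
        "mean_clause_len": sum(len(c) for c in clauses) / len(clauses),
    }

features = dimacs_features(cnf)
```

Ratios such as clauses-to-variables are popular because they are cheap to compute yet correlate with instance hardness in domains like SAT.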

In addition, features can be extracted from
short runs of one or more solvers on the given problem instance. 
Examples of such probing features include the number of search
nodes explored within a certain time, the fraction of partial solutions that are
disallowed by a certain constraint or clause, the average depth reached
before backtracking is required, or characteristics of local minima found quickly using local search.
Probing features are usually more expensive to
compute than features obtained from a shallow analysis of the instance
specification, but they can also be more powerful and have thus been used by many authors (see, e.g., 
\citep{nudelman_understanding_2004,cphydra,pulina_multi-engine_2007,xu_satzilla_2008,hutter2014algorithm}).
For continuous blackbox optimization, algorithm selection can be performed based on Exploratory Landscape Analysis
\cite{Mersmann2013,Bischl2012_2,Kerschke2014}. 
This approach defines a set of numerical features (of varying complexity and computational cost) 
that describe the landscapes of such optimization problems. Examples range from simple features that 
describe the distribution of sampled objective values to more expensive probing features based on local search. 
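A minimal sketch of the underlying idea (not of any specific ELA feature set): summary statistics of objective values sampled uniformly from the search domain serve as cheap landscape features.

```python
import random
import statistics

def landscape_features(f, dim, n_samples=200, seed=0):
    """Summary statistics of objective values sampled uniformly at random
    from [-5, 5]^dim -- a simplified stand-in for the cheapest group of
    exploratory landscape features."""
    rng = random.Random(seed)
    ys = [f([rng.uniform(-5, 5) for _ in range(dim)])
          for _ in range(n_samples)]
    return {
        "y_mean": statistics.mean(ys),
        "y_std": statistics.stdev(ys),
        # crude asymmetry proxy: positive when the distribution has a right tail
        "y_skew_proxy": (statistics.mean(ys) - statistics.median(ys))
                        / statistics.stdev(ys),
    }

def sphere(x):
    """Toy objective: the sphere function, minimized at the origin."""
    return sum(v * v for v in x)

feats = landscape_features(sphere, dim=2)
```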

Finally, in the area of meta-learning (learning about the performance of machine learning algorithms; for an overview, see, e.g., \citep{Brazdil_metalearning_2008}), these features are known as \emph{meta-features}. They include statistical and information-theoretic measures (e.g., variable entropy), landmarkers (measurements of the performance of fast algorithms~\cite{Pfahringer:2000p553}), sampling landmarkers (similar to probing features) and model-based meta-features~\cite{Vanschoren2010}. These meta-features, along with past performance measurements of many machine learning algorithms, are available from the online machine learning platform OpenML~\cite{openml2013}. In contrast to ASlib, however, OpenML is not designed to allow cross-domain evaluation of algorithm selection techniques.


\subsection{Algorithm selection vs.\ algorithm configuration}

A problem closely related to that of algorithm selection is the following algorithm configuration problem:
given a parameterized algorithm $A$, a set of problem instances $I$ and a performance measure $m$, find a parameter setting of $A$ that optimizes $m$ on $I$.
While algorithm selection operates on finite (usually small) sets of algorithms, algorithm configuration operates on the combinatorial space of an algorithm's parameter settings. General algorithm configuration methods, such as ParamILS~\cite{hutter_paramils_2009}, GGA~\cite{ansotegui_gender-based_2009}, I/F-Race~\cite{BirEtAl10}, and SMAC~\cite{HutHooLey11-SMAC}, 
have yielded substantial performance improvements (sometimes orders of magnitude speedups) of state-of-the-art algorithms for several benchmarks, including SAT-based formal verification~\cite{HutBabHooHu07}, mixed integer programming~\cite{HutHooLey10-mipconfig}, AI planning~\cite{Vallati13-SOCS}, and the combined selection and hyperparameter optimization of machine learning algorithms~\cite{ThoHutHooLey13-AutoWEKA}.
Algorithm configuration and selection are complementary since configuration can identify algorithms with peak performance for homogeneous benchmarks and selection can then choose between these specialized algorithms. Consequently, several possibilities exist for combining algorithm configuration and selection~\cite{hutter_performance_2006,xu_hydra_2010,kadioglu_isac_2010,xu_hydra-mip_2011,malitsky_evolving_2013,ISAC++}.
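A minimal sketch of algorithm configuration viewed as black-box optimization, with random search standing in for the far more sophisticated strategies of the cited configurators, and a hypothetical target algorithm whose runtime depends on its parameters:

```python
import random

instances = [1.0, 2.0, 3.0]  # hypothetical instance set (here: scale factors)

def run(config, instance):
    """Hypothetical parameterized target algorithm: its runtime is best
    around alpha = 0.3 and grows with the (made-up) restarts parameter."""
    return (config["alpha"] - 0.3) ** 2 * instance + config["restarts"] * 0.01

def configure(n_trials=100, seed=0):
    """Random-search configurator: sample parameter settings and keep the
    one with the best mean performance across the instance set."""
    rng = random.Random(seed)
    def sample():
        return {"alpha": rng.uniform(0.0, 1.0), "restarts": rng.randint(1, 100)}
    def mean_cost(config):
        return sum(run(config, i) for i in instances) / len(instances)
    return min((sample() for _ in range(n_trials)), key=mean_cost)

best_config = configure()
```

In a combined pipeline, a configurator like this would be run separately on homogeneous instance subsets, and a selector would then choose among the resulting specialized configurations.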
The algorithm configuration counterpart of ASlib is AClib~\cite{HutEtAl14:AClib} (\url{http://aclib.net}).
In contrast to ASlib, AClib cannot store performance data for all possible parameter configurations, since these often number more than $10^{50}$.
An experiment on AClib therefore involves new (and expensive) runs of the target algorithms with different configurations,
making such experiments far more costly than experiments on ASlib, which require no new algorithm runs.\footnote{In algorithm configuration, this need for expensive runs poses a problem for research. One way of mitigating it is offered by fast-to-evaluate surrogate algorithm configuration benchmarks~\cite{Eggensperger2015}.}


This concludes our discussion of the background. Full coverage of the extensive literature on algorithm selection is beyond the scope of this article; we refer the interested reader to recent survey articles on the topic~\cite{smith-miles_cross-disciplinary_2009,kotthoff_algorithm_2014,Serban:2013}.

