
\section{Introduction}
\label{sec:intro}


Although NP-complete problems are widely believed to be intractable in the worst case, it is often possible to solve even very large instances of such problems that arise in practice. This is fortunate, because such problems are ubiquitous in artificial intelligence applications. A large subfield of AI has thus emerged that is devoted to the advancement and analysis of heuristic algorithms for attacking hard computational problems. Quite surprisingly, this subfield has made consistent and substantial progress over the past few decades, with the newest algorithms quickly solving benchmark problems that were until recently beyond reach. The results of the international SAT competitions provide a paradigmatic example of this phenomenon. The importance of this competition series has gone far beyond documenting the progress achieved by the SAT community in solving difficult and application-relevant SAT instances: it has been instrumental in driving research itself, helping the community to coalesce around a shared set of benchmark instances and providing an impartial basis for determining which new ideas yield the biggest performance gains.

The central premise of events like the SAT competitions is that the research community ought to build, identify and reward single solvers that achieve strong across-the-board performance. However, this quest appears quixotic: most hard computational problems admit multiple solution approaches, none of which dominates all others across all problem instances. This fact has been observed to hold across a wide variety of AI applications, including
propositional satisfiability (SAT)~\cite{xu2012evaluating}, constraint satisfaction (CSP)~\cite{cphydra}, AI planning~\cite{helmert_fast_2011}, and supervised machine learning~\cite{ThoHutHooLey13-AutoWEKA,Vanschoren2012}. 
An alternative is to accept that no single algorithm will offer the best performance on all instances, and instead aim to identify a portfolio of complementary algorithms and a strategy for choosing between them \citep{rice_algorithm_1976}. To see the appeal of this idea, consider the results of the sequential application (SAT+UNSAT) track of the $2014$ SAT Competition.\footnote{\url{http://www.satcompetition.org/2014/results.shtml}} The best of the $35$ submitted solvers, \texttt{Lingeling ayv}~\cite{lingeling}, solved $77\%$ of the $300$ instances.
However, if we could somehow choose the best among these $35$ solvers on a per-instance basis, we would be able to solve $92\%$ of the instances. 
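The gap between the best single solver and this per-instance ``oracle'' (often called the virtual best solver) can be computed directly from a solver-by-instance runtime matrix. The following sketch illustrates the computation; the solver names, runtimes, and cutoff-handling convention are made up for illustration and are not competition data.

```python
# Sketch: single-best-solver vs. oracle ("virtual best solver") coverage
# computed from a solvers-by-instances runtime matrix. Data are illustrative.
CUTOFF = 5000.0  # time limit in seconds; larger values count as unsolved

# runtimes[s][i] = time solver s needs on instance i
runtimes = {
    "solverA": [12.0, 6000.0, 300.0, 6000.0],
    "solverB": [6000.0, 45.0, 6000.0, 6000.0],
    "solverC": [90.0, 6000.0, 6000.0, 140.0],
}
n_instances = 4

def solved(times):
    """Number of instances solved within the cutoff."""
    return sum(t <= CUTOFF for t in times)

# Single best solver: the one solver with the highest overall coverage.
best_single = max(runtimes.values(), key=solved)

# Oracle: per instance, take whichever solver is fastest.
oracle = [min(runtimes[s][i] for s in runtimes) for i in range(n_instances)]

print(solved(best_single) / n_instances)  # single-best coverage: 0.5
print(solved(oracle) / n_instances)       # oracle coverage: 1.0
```

In this toy example no solver covers more than half the instances, yet the oracle solves all of them; the SAT Competition numbers above ($77\%$ vs.\ $92\%$) exhibit the same structure on real data.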

Research on this \emph{algorithm selection} problem~\citep{rice_algorithm_1976} has demonstrated the practical feasibility of using machine learning for this task.
In fact, although practical algorithm selectors occasionally choose suboptimal algorithms, their performance can get close to that of an oracle that always makes the best choice.
The area attracted considerable attention when methods based on algorithm selection began to outperform standalone solvers in SAT competitions~\cite{xu_satzilla_2008}. Algorithm selectors have since come to dominate the state of the art on many other problems, including
CSP~\cite{cphydra}, AI planning~\cite{helmert_fast_2011}, Max-SAT~\cite{malitsky_evolving_2013}, QBF~\cite{pulina_self-adaptive_2009}, and ASP~\cite{gebser_portfolio_2011}.
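One common way such selectors work is to fit, for each algorithm in the portfolio, a regression model mapping cheap instance features to expected runtime, and then to run the algorithm with the lowest prediction. The sketch below illustrates this scheme on synthetic data; the features, algorithm names, and runtime model are invented for illustration and do not correspond to any real selector or benchmark.

```python
# Sketch: per-instance algorithm selection via one runtime-regression model
# per algorithm. Features and runtimes below are synthetic and illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(size=(200, 3))  # 3 cheap instance features
# Synthetic ground truth: algo1 is fast when feature 0 is small, algo2 when
# it is large -- so neither algorithm dominates across all instances.
train_runtimes = {
    "algo1": 10.0 * X_train[:, 0] + rng.normal(0, 0.1, 200),
    "algo2": 10.0 * (1.0 - X_train[:, 0]) + rng.normal(0, 0.1, 200),
}

# One regression model per algorithm: instance features -> predicted runtime.
models = {
    name: RandomForestRegressor(n_estimators=50, random_state=0).fit(X_train, y)
    for name, y in train_runtimes.items()
}

def select(features):
    """Pick the algorithm with the lowest predicted runtime."""
    x = np.asarray(features).reshape(1, -1)
    return min(models, key=lambda name: models[name].predict(x)[0])

print(select([0.1, 0.5, 0.5]))  # "algo1" (small feature 0)
print(select([0.9, 0.5, 0.5]))  # "algo2" (large feature 0)
```

Many variants exist, e.g.\ classification models that pick a solver directly, clustering of instances in feature space, or pairwise voting schemes; the regression formulation above is only one representative instance of the general idea.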



To date, much of the progress in research on algorithm selection has been
demonstrated in algorithm competitions originally intended for
non-portfolio-based (``standalone'') solvers. This has given rise to a variety
of problems for the field. First, benchmarks selected for such competitions tend
to emphasize problem instances that are currently hard for existing standalone
algorithms (to drive new research on solving strategies) rather than the wide
range of both easy and hard instances that would be encountered in practice
(which would be appropriately targeted by researchers developing algorithm
selectors). Relatedly, benchmark sets change from year to year, making it
difficult to assess the progress of algorithm selectors over time. Second,
although competitions often require entrants to publish their source code, none
require entries based on algorithm selectors to publish the code used to
\emph{construct} the algorithm selector (e.g., via training a machine learning
model) or to adhere to a consistent input format. Third, overwhelming competition
victories by algorithm selectors can make it more difficult for new standalone
solver designs to get the attention they deserve and can thus create resentment
among solver authors. Such concerns have led to a backlash against the
participation of portfolio-based solvers in competitions; for example,
starting in 2013, solvers that explicitly combine more than two component
algorithms have been excluded from the SAT competitions.

The natural solution to these problems is to evaluate algorithm selectors on
their own terms rather than trying to shoehorn them into competitions intended
for standalone solvers. This article, written by a large group of researchers active
in algorithm selection, aims to advance this goal by introducing a set of
specifications and tools designed to standardize and facilitate such evaluations. Specifically, we propose a benchmark library, called ASlib, tailored to the cross-domain evaluation of algorithm selection techniques. 
ASlib is based on a standardized data format specification 
(Section~\ref{sec:spec})
that covers a
wide variety of foreseeable evaluations. To date, we have instantiated this specification with
benchmarks from six different problem domains, which we describe in Section~\ref{sec:scenarios}.
However, we intend for ASlib to grow and evolve over time.
Thus, our article is accompanied by an online repository
(\url{http://aslib.net}), which accepts submissions from any researcher. Indeed,
there are already scenarios available online that were added after the
ASlib release we describe in this paper.

Our system automatically checks newly submitted datasets to verify that they adhere to the
specifications and then provides an overview of the data, including the results
of some straightforward algorithm selection approaches based on regression, clustering and classification.
We provide examples of these automatically generated overviews and benchmark results in Sections \ref{sec:eda} and \ref{sec:experiments}.
All code used to parse the format files, explore the algorithm selection scenarios and run benchmark machine
learning models on them is publicly available in a new R package dubbed
\system{aslib}.\footnote{This package is currently hosted at
	\url{https://github.com/coseal/aslib-r}. We will submit it to the official R package server CRAN alongside the final version of this article.}

Overall, our main objective in creating ASlib is the same as that of an algorithm competition: to allow
researchers to compare their algorithms systematically and fairly, without having to replicate someone else's system or to personally collect
raw data. We hope that it will help the community to obtain an unbiased understanding of
the strengths and weaknesses of different methodologies and thus to improve the current state of the art in per-instance algorithm selection.




