\section{Summary of Format Specification}
\label{sec:spec}

We propose a data format specification for algorithm selection scenarios,
i.e., instances of the per-instance algorithm selection problem.
This format and the resulting data repository allow a fair and convenient 
scientific evaluation of and comparison between algorithm selectors.

In addition to the definition of the algorithm selection problem in Section~\ref{sec:intro},
the format specification is tailored to the following generic approach to algorithm selection (Figure~\ref{dia:as}),
where an algorithm has to be applied online to a new problem instance $i \in \mathcal{I}$.


\begin{figure}[tbp]
\resizebox{\textwidth}{!}{\input{pics/as_tikz.tex}}
\caption{Algorithm selection workflow.}
\label{dia:as}
\end{figure}

\begin{enumerate}
  \item A vector of instance features $f(i) \in \mathcal{F}$ is computed for $i$. 
  		Feature computation may occur in several stages,
  		each of which produces a group of (one or more) features;
  		later stages may depend on the results of earlier ones.
  	    Each feature group incurs a cost,
  	    e.g., runtime. If no features are required, the cost is $0$ (this occurs, e.g., for variants of algorithm selection that compute static schedules).
  \item A machine learning technique $s$ selects an algorithm $a^* \in \mathcal{A}$ based on the feature vector from Step 1.
  \item The selected algorithm $a^*$ is applied to $i$.
  \item Performance measure $m$ is evaluated, taking into account feature computation costs and the performance of the selected algorithm.
  \item Some algorithm selectors do not select a single algorithm, but rather a schedule of algorithms:
  		they apply $a^*$ to $i$ for a resource budget $r \in \mathcal{R}$
        (e.g., CPU time); evaluate the performance measure, which also indicates whether $i$ was solved; and then apply another algorithm to $i$, based on observations made during the run of $a^*$.\footnote{In principle, the workflow can be arbitrarily more complex, e.g., alternating between computing further features and running selected algorithms.}
\end{enumerate}
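The per-instance workflow above can be sketched in a few lines of Python. This is a minimal illustration only; the function names, the representation of feature groups as callables, and the dictionary of algorithms are all hypothetical, not part of the format.

```python
# Minimal sketch of the per-instance selection workflow (Steps 1-4).
# All names are illustrative; a real scenario supplies the feature
# extractors, the learned selector s, and the candidate algorithms.

def select_and_run(instance, feature_groups, selector, algorithms):
    # Step 1: compute the feature vector, accumulating per-group costs.
    features, feature_cost = [], 0.0
    for group in feature_groups:
        values, cost = group(instance)  # later groups may build on earlier ones
        features.extend(values)
        feature_cost += cost

    # Step 2: the learned selector maps the feature vector to an algorithm.
    chosen = selector(features)

    # Step 3: apply the selected algorithm to the instance.
    runtime, solved = algorithms[chosen](instance)

    # Step 4: the performance measure accounts for feature costs as well.
    return feature_cost + runtime, solved
```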

The purpose of our library is to provide all information necessary for performing algorithm selection experiments
using the given scenario data.
Hence, algorithms never need to be actually run on instances, since all performance data is precomputed.
This drastically reduces the time required for executing studies, i.e.,
the runtime of experiments is now dominated by the time required for learning $s$
and not by applying algorithms to instances (e.g., solving SAT problems).
It also means that experimental results are perfectly reproducible;
for example, the runtimes of algorithms
do not depend on the hardware used;
rather, they can simply be looked up in the performance data of a scenario.
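Concretely, evaluating a selector against a scenario then reduces to table lookups in the precomputed data. The following sketch assumes a nested-dictionary view of the performance file; that layout, and all names in it, are illustrative rather than prescribed by the format.

```python
# Hypothetical precomputed performance data (runtimes in seconds),
# as it might be held in memory after reading a scenario's
# algorithm performance file.
performance = {
    "inst1": {"solverA": 12.3, "solverB": 45.0},
    "inst2": {"solverA": 300.0, "solverB": 7.7},
}

def evaluate(selector, instances):
    """Score a selector by looking up precomputed runtimes;
    no algorithm is ever executed, so results are reproducible."""
    return sum(performance[i][selector(i)] for i in instances)

# A trivial selector that always picks solverA:
total = evaluate(lambda i: "solverA", ["inst1", "inst2"])
```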

Table~\ref{nutshell:spec} introduces the basic structure of our format definition; 
the complete specification with all details can be found in an accompanying technical report \cite{spec} and on our online platform.



\noindent\fbox{
\begin{minipage}[tbh]{0.96\textwidth} 
\captionof{table}{Overview of Format Specification.}
\label{nutshell:spec}
\paragraph{Mandatory Data}
\begin{itemize}
  	\item The \textit{meta information file} is a global description file containing general information about 
		the scenario, including the name of the scenario, performance measures, algorithms, features and 
		limitations of computational resources.
	\item The \textit{algorithm performance} file contains performance measurements and completion status of the algorithm runs.
	\item The \textit{instance feature} file contains the feature vectors for all instances. An accompanying \textit{feature run status} file records errors encountered
	and instances solved during feature computation.
	\item The \textit{cross-validation} file describes how to split the instance set into training and test sets,
		following standard machine learning practice, to obtain an unbiased estimate of the performance of an algorithm selector.
    \item A \textit{README} file in plain text explains the origin and meaning of the scenario, as well as the process of data generation.
\end{itemize}
\paragraph{Optional Data}
\begin{itemize}
  	\item The \textit{feature costs} file contains the costs of the feature groups, i.e., sets of features computed together.
  	\item The \textit{ground truth} file specifies information on the instances and their 
		respective solutions (e.g., SAT or UNSAT).
	\item The \textit{literature references} file in BibTeX format includes information on the context in which the data set was generated 
		and previous studies in which it was used.
\end{itemize}
\end{minipage}
}
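As an illustration of how the cross-validation file is consumed, the sketch below splits instances into training and test sets for one fold. The in-memory representation as (instance, repetition, fold) triples is an assumption for this example, not a prescription of the file format.

```python
# Hypothetical in-memory view of a cross-validation file:
# (instance, repetition, fold) triples as they might be parsed.
cv_rows = [
    ("inst1", 1, 1), ("inst2", 1, 1),
    ("inst3", 1, 2), ("inst4", 1, 2),
]

def split(rows, repetition, test_fold):
    """Return (train, test) instance lists for one fold of one repetition.
    Using the same splits for all selectors keeps comparisons fair."""
    train = [i for i, rep, fold in rows
             if rep == repetition and fold != test_fold]
    test = [i for i, rep, fold in rows
            if rep == repetition and fold == test_fold]
    return train, test

train, test = split(cv_rows, repetition=1, test_fold=2)
```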
 