\documentclass{article} % For LaTeX2e
\usepackage{nips13submit_e,times}
\usepackage[hidelinks]{hyperref}
\usepackage{url}
\usepackage{natbib}
\usepackage{graphicx}
\usepackage{caption}
\usepackage{subcaption}
%\usepackage[lined,boxed,commentsnumbered]{algorithm2e}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{multirow}
\usepackage{listings}
\usepackage{tabularx}
\usepackage{bm}
\usepackage{tablefootnote}
\usepackage{float}
\usepackage{relsize}
\usepackage{microtype}

\usepackage{amssymb}
\usepackage{amsmath}
%\usepackage{amsthm}
%\usepackage{textcomp}

\usepackage{dblfloatfix}
\usepackage{xcolor}

\usepackage{dsfont}
%\usepackage{microtype}
\usepackage{booktabs}
%\usepackage{lmodern}
%\usepackage{moresize}

%\documentstyle[nips13submit_09,times,art10]{article} % For LaTeX 2.09

\newcommand{\note}[1]{}
% comment the next line to turn off notes
\renewcommand{\note}[1]{~\\\frame{\begin{minipage}[c]{\columnwidth}\vspace{2pt}\center{#1}\vspace{2pt}\end{minipage}}\vspace{3pt}\\}

\newcommand{\hpnnet}[0]{\textsc{hp-nnet}}
\newcommand{\hpdbnet}[0]{\textsc{hp-dbnet}}
\newcommand{\hpconvnet}[0]{\textsc{hp-convnet}}

\newcommand{\hide}[1]{}

\newcommand{\Spearmint}[0]{\textsc{Spearmint}}
\newcommand{\TPE}[0]{\textsc{TPE}}
\newcommand{\SMAC}[0]{\textsc{SMAC}}

\title{Towards an Empirical Foundation for \\Assessing Bayesian Optimization of Hyperparameters}


\author{
Katharina Eggensperger, Matthias Feurer, Frank Hutter\\
%Albert-Ludwigs Universitaet Freiburg\\
Freiburg University\\
\texttt{\smaller\{eggenspk,feurerm,fh\}@informatik.uni-freiburg.de} \\
\And
James Bergstra \\
University of Waterloo\\
\texttt{\smaller james.bergstra@uwaterloo.ca } \\
\And
Jasper Snoek \\
Harvard University \\
\texttt{\smaller jsnoek@seas.harvard.edu } \\
\And
Holger H. Hoos and Kevin Leyton-Brown\\
University of British Columbia\\
\texttt{\smaller\{hoos,kevinlb\}@cs.ubc.ca} \\
}

% The \author macro works with any number of authors. There are two commands
% used to separate the names and addresses of multiple authors: \And and \AND.
%
% Using \And between authors leaves it to \LaTeX{} to determine where to break
% the lines. Using \AND forces a linebreak at that point. So, if \LaTeX{}
% puts 3 of 4 authors names on the first line, and the last on the second
% line, try using \AND instead of \And before the third author name.

\newcommand{\argmin}{\operatornamewithlimits{argmin}}

\newcommand{\fix}{\marginpar{FIX}}
\newcommand{\new}{\marginpar{NEW}}

\newcommand{\vTheta}{{\bm{\Theta}}} % vector notation
\newcommand{\vtheta}{{\bm{\theta}}}
\newcommand{\vlambda}{{\bm{\lambda}}}
\newcommand{\vLambda}{{\bm{\Lambda}}}
\newcommand{\vs}[0]{\emph{vs}}
\newcommand{\eg}[0]{\emph{e.{}g.{}}}
\newcommand{\ie}[0]{\emph{i.{}e.{}}}
\newcommand{\adhoc}[0]{\emph{ad hoc}}
\newcommand{\gauss}{\mbox{${\cal N}$}}
\newcommand{\denselist}{\itemsep -1.5pt\partopsep -20pt}



%% make the whole document shorter--this one is very powerful.  Experiment with the number; 0.97 will do a lot and looks OK
\renewcommand{\baselinestretch}{0.97}

%fh: more LaTeX trickery.
%\addtolength{\floatsep}{-0.050in} %space left between floats.
\addtolength{\textfloatsep}{-0.05in} %space between last top float or first bottom float and the text.
\addtolength{\abovecaptionskip}{-.12in} % space above caption, only for figures not tables (at least for LNCS style)
\addtolength{\belowcaptionskip}{-.05in} % space below caption
%%\addtolength{\subfigcapskip}{-0.1cm}
%%% \intextsep : space left on top and bottom of an in-text float.


% replaces tabular; takes same arguments. use \midrule for a rule, no vertical rules, and eg \cmidrule(l){2-3} as needed with \multicolumn
\newenvironment{ktabular}[1]{\sffamily\small\begin{center}\begin{tabular}[c]{#1}\toprule}{\bottomrule \end{tabular}\end{center}\normalsize\rmfamily\vspace{-5pt}}
\newcommand{\tbold}[1]{\textbf{#1}}
\newcommand{\interrowspace}{.6em}

\makeatletter
\renewcommand\section{\@startsection
  {section}{1}{\z@}%name, level, indent
  {-.6ex \@plus -1ex \@minus -.2ex}% beforeskip
  {.001ex \@plus.2ex \@minus -.2ex}%            afterskip
  {\normalfont\large\bfseries}}% style
\renewcommand\subsection{\@startsection
  {subsection}{2}{0mm}%name, level, indent
  {-2.25ex\@plus -1ex \@minus -.2ex}%beforeskip
  {.2ex \@plus .2ex}%afterskip
  {\normalfont\normalsize\bfseries}}% style
\makeatother

\nipsfinalcopy % Uncomment for camera-ready version

\begin{document}

% The title and the abstract are supposed to be half a page
\maketitle

\vspace*{-0.2cm}
\begin{abstract}
\vspace*{-0.2cm}
Progress in practical Bayesian optimization is hampered by the fact that the only available standard benchmarks are artificial test functions that are not representative of practical applications.
To alleviate this problem, we introduce a library of benchmarks from the prominent application of hyperparameter optimization and use it to compare Spearmint, TPE, and SMAC, three recent Bayesian optimization methods for hyperparameter optimization.
%The performance of a machine learning algorithm heavily depend on the setting of its  hyperparameters. Bayesian Optimization methods provide an automated alternative to manual search. Expecting that this techniques perform differently applied to different problems a baseline for future development and applications is needed. This paper shows a first step towards an empirical comparison of hyperparameter optimization techniques.
\end{abstract}




%\note{FH: I left notes throughout using this mechanism. These notes stand out ugly on purpose -- so that they can't be overlooked easily. Once a note is dealt with, please remove it or comment it out in the source. For checking e.g. length, all notes can also be disabled at once by commenting out a single line towards the top of the source.}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}\label{sec:intro}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%One of the most prominent applications of Bayesian optimization is the tuning of machine learning (ML) hyperparameters. 
The performance of many machine learning (ML) methods depends crucially on hyperparameter settings and thus on the method used to set hyperparameters.
%\note{FH: mention some examples with citations if we have space\\      MF: Maybe the maxout paper; but they only say they do hyperparameter optimization, not how; Maybe tackle this by citing many neural network papers which show that there are a many competing approaches with ad-hoc choices of architecture and not told how hyperparameters are set up\\ FH: decision postponed as we don't have space anyways.}
%Only recently, it has been shown that even a simple random search can sometimes be an efficient hyperparameter optimizer~\cite{BerBen12}, 
Recently, Bayesian optimization methods have been shown to outperform established methods for this problem (such as grid search and random search~\cite{BerBen12})
and to rival---and in some cases surpass---human domain experts in finding good hyperparameter settings~\cite{SnoLarAda12,ThoEtAl13,BerYamCox13}. 
%In order to continue this success story of Bayesian optimization,  
%
As a result, hyperparameter optimization has become an active research area within Bayesian optimization, with characteristics such as low effective dimensionality~\cite{BerBen12,CheCasKra12,WangEtAl13} and problem variants, such as optimization across different data sets~\cite{BarEtAl13}, being explored.

One obstacle to further progress in this nascent field is a dearth of hyperparameter optimization benchmarks and comparative empirical studies.
%To date, a typical paper introducing a new hyperparameter optimizer also introduces a new set of hyperparameter optimization benchmarks, on which the optimizer is demonstrated to achieve state-of-the-art performance as compared to, e.g., human domain experts. 
%However, human domain experts are not an objective baseline. 
It can be difficult to evaluate a new optimizer on benchmarks used in previous papers because
(1)~optimizers are written in different programming languages and use different search space representations and file formats; (2)~hyperparameter optimization benchmarks that have been developed jointly with an optimizer are not typically packaged as black boxes 
(including the respective machine learning algorithm and its input data)
that can be used with other optimizers. %(and potentially a CUDA environment for running it) 
%
%; and~(3)~hyperparameter optimization experiments are time-consuming.
%These problems represent a considerable barrier to anyone aiming to develop a new hyperparameter optimization algorithm and objectively measure its performance. As a result, so far it has been unknown how recent hyperparameter optimizers compare across benchmarks.

To alleviate these problems, we have collected and made available a library of hyperparameter optimization benchmarks from the recent literature and used it to empirically evaluate the respective strengths and weaknesses of three prominent Bayesian optimization methods for hyperparameter optimization: \Spearmint{}~\cite{SnoLarAda12}, \TPE{}~\cite{BerEtAl11}, and \SMAC{}~\cite{HutHooLey11}.
We thereby hope to provide an empirical foundation to facilitate the development and evaluation of future methods for this problem.
%Our library is quite versatile, comprising hyperparameter optimization problems with few all-continuous parameters, problems with many discrete parameters, and problems with conditional parameters that are only active given a certain instantiation of their `parent' parameters. 

\hide{
%FH: this text is fine and I reused substantial parts of it, but it was not tuned to BayesOpt, where people want to hear about Bayesian optimization ;-)
%When using machine learning the users are often faced with the problem to choose hyperparameters. This choice is crucial to achieve good performance on a specific dataset. Machine learning should be used not only for the sake of machine learning but also to help other scientific and non-scientific areas. Therefore more and more users are non-experts, who require easy-to-use solutions. Techniques which automatically try to find the best hyperparameters using bayesian optimization were developed~\cite{BerEtAl11, HutHooLey11, SnoLarAda12} and their superiority over a manual search, which still is widely used, has been shown. 

Bayesian optimization for automatic hyperparameter tuning provides frameworks that can be used to search defined hyperparameter spaces for good solutions. Based on this there are approaches for non-expert users, like Auto-WEKA~\cite{ThoEtAl13} which tunes both, finding a good algorithm and good hyperparameters. 

With this frameworks the task to choose a good hyperparameter setting changed to the question which optimizer to use. So far the effiency of each approach is measured against manual or random search but the optimizers were not compared against each other. Though there are partial comparisons, like in~\cite{SnoLarAda12, ThoEtAl13}, there is no intention to do a general evaluation. Hence there is a need for a baseline for both future approaches and applications. This is not a trivial task, as the optimizers don't use the same file formats and same sort of search spaces. To provide benchmarks, data and algorithms need to be collected to set up and perform time and resource demanding experiments. The experiments we conducted are an expensive first step towards such a baseline. 

Section~\ref{sec:smbo} gives a brief overwiew of the optimization methods we compare. Then we present a collection of conducted experiments and explain the difficulty for each problem. In section~\ref{sec:results} the results are shown and explained. Eventually there is a discussion and interpretation of the benchmarks.

\begin{itemize}
\item Citations to use: Random Search/Nnet~\cite{BerBen12}, dbnet~\cite{LarErhCouBerBen07}, GPGO~\cite{OsbGarRob09}, GGA~\cite{LarErhCouBerBen07}
\note{FH: This first bullet point is an outlier, it merely collects all the references for the intro but won't become a paragraph by itself. You probably don't need GPGO and GGA but want another few references for the fact that ML algos are highly parameterized, such as \cite{BerYamCox13} and that the hyperparameter setting matters a lot.

MF: More will follow, \cite{BerYamCox13} is included in the end of section \ref{sec:smbo}}

%\note{FH: I merged some bullet points that could hang together in one paragraph.}
%\item Many hyperparameters in machine learning; they are very important for performance; thus, recently several automated approaches for optimizing them of many hyperparameters
\note{A similar formalization of the problem as used in the Auto-WEKA paper would be nice.

MF: Would, for the sake of shortness, a reference to a formal specification be enough?}

\item Maybe include a formula with a $\lambda$

\item Not clear, which optimizer performs best on which kind of problem. No methodical comparisons available, only partial. 
One reason: not easy to run on problems other groups developed. Benchmarks are crucial for further development
\note{FH: here's the point to say one sentence about why these empirical studies take time to set up right, and why we need something better than what was there before. By this, we also want to advertise our work and convince everyone in hyperparameter optimization that they should rather use our benchmarks (and maybe add to them) rather than doing their own thing.}
\item Our contribution: Collected and made available benchmarks, and used them to compare three hyperparameter optimizers in ?? experiments
\end{itemize}
}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Bayesian Optimization Methods for Hyperparameter Optimization}\label{sec:smbo}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%We first define the hyperparameter optimization problem formally.
Given a machine learning algorithm $A$ having hyperparameters $\lambda_1, \ldots, \lambda_n$ with respective domains $\Lambda_1, \ldots, \Lambda_n$, we define its hyperparameter space 
%$\vLambda$ as the crossproduct of these domains: 
$\vLambda = \Lambda_1 \times \dots \times \Lambda_n$.
%
For each hyperparameter setting $\vlambda \in \vLambda$, we use $A_{\vlambda}$ to denote the learning algorithm $A$ using this setting. We further use 
$\mathcal{L}(A_{\vlambda}, \mathcal{D}_{\text{train}},\mathcal{D}_{\text{valid}})$ to denote the validation loss (e.g., misclassification rate) that $A_{\vlambda}$ achieves on data $\mathcal{D}_{\text{valid}}$ when trained on $\mathcal{D}_{\text{train}}$.
%
The hyperparameter optimization problem under $k$-fold cross-validation is then to minimize the blackbox function
%(with training and validation folds $\mathcal{D}_{\text{train}}^{(i)}$ and $\mathcal{D}_{\text{valid}}^{(i)}$, $i=1,\dots,k$)
\vspace*{-0.0cm}
\begin{equation}
f(\vlambda) = \frac{1}{k} \sum_{i=1}^{k} \mathcal{L}(A_{\vlambda}, \mathcal{D}_{\text{train}}^{(i)}, \mathcal{D}_{\text{valid}}^{(i)}).
\end{equation}

We note that an alternative to optimizing hyperparameters is to marginalize over them in a Bayesian model averaging framework; however, in most cases the costs of doing so are prohibitive, and we therefore do not consider that alternative here.
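As a concrete illustration, the $k$-fold objective above can be sketched as a plain blackbox function. In the sketch below, \texttt{train\_and\_eval} is a hypothetical stand-in for training $A_{\vlambda}$ on one training fold and returning its loss on the corresponding validation fold; any real learner could be plugged in.

```python
def make_folds(n, k):
    """Partition example indices 0..n-1 into k (nearly) equal folds."""
    folds = [[] for _ in range(k)]
    for i in range(n):
        folds[i % k].append(i)
    return folds

def f(lam, data, k, train_and_eval):
    """Blackbox objective: average validation loss of A_lambda over k folds."""
    folds = make_folds(len(data), k)
    losses = []
    for i in range(k):
        valid = [data[j] for j in folds[i]]
        train = [data[j] for i2, fold in enumerate(folds) if i2 != i
                 for j in fold]
        losses.append(train_and_eval(lam, train, valid))
    return sum(losses) / k
```

A Bayesian optimizer only ever sees `f` through point evaluations, which is exactly the blackbox setting assumed throughout this paper.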

Hyperparameters can be continuous, integer-valued, or categorical. Following~\cite{HutEtAl09} and \cite{BerEtAl11}, we say that a hyperparameter $\lambda_i$ is \emph{conditional} on another hyperparameter $\lambda_j$ if $\lambda_i$ is only active when $\lambda_j$ takes values from a given set $V_i(j) \subsetneq \Lambda_j$. Such conditional parameters are common, for example, in deep architectures and in frameworks comprising many alternative algorithms, and some hyperparameter optimizers exploit knowledge about these conditionalities in their models to improve performance~\cite{HutHooLey11,BerEtAl11,arxiv_hierarchical_kernel,SweEtAl13:raiders}.
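To make the notion of conditionality concrete, the following sketch encodes a small conditional space as nested dictionaries (a hypothetical encoding for illustration, not the format of any of the optimizers discussed here): the decay rate is only active when its parent, the learning-rate schedule, is set to \texttt{exponential}.

```python
# Hypothetical nested-dict encoding of a conditional search space: the
# "decay" child is only active when its parent "schedule" takes the
# value "exponential", mirroring the definition of conditionality above.
space = {
    "lr": ("uniform", 1e-4, 1e-1),
    "schedule": ("choice", {
        "constant": {},                                     # no children
        "exponential": {"decay": ("uniform", 0.85, 0.999)}, # conditional
    }),
}

def active_params(space, assignment):
    """Return the set of hyperparameters active under `assignment`,
    a dict mapping each categorical parent to its chosen value."""
    active = set()
    for name, spec in space.items():
        active.add(name)
        if spec[0] == "choice":
            children = spec[1][assignment[name]]
            active |= active_params(children, assignment)
    return active
```

For instance, `active_params(space, {"schedule": "constant"})` excludes `decay`, whereas choosing `exponential` activates it.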



Bayesian optimization (see~\cite{BroCorFre10} for a detailed tutorial) constructs a probabilistic model $\mathcal{M}$ of $f$ based on point evaluations of $f$ and any available prior information, and uses that model to select subsequent configurations $\vlambda$ to evaluate.
%
%\emph{Sequential model-based optimization (SMBO)} is a variant of Bayesian optimization, in which the model does not necessarily model $f$ directly, but could also model the location of the optimum of $f$.
%Algorithm \ref{alg:smbo} gives pseudocode for SMBO applied to hyperparameter optimization. We note that in order to minimize $f$, SMBO can choose to evaluate the entire function $f$ at some configuration $\vlambda$, or to do only evaluate the loss $\frac{1}{k} \sum_{i=1}^{k} \mathcal{L}(A_{\vlambda}, \mathcal{D}_{\text{train}}^{(i)}, \mathcal{D}_{\text{valid}}^{(i)})$ for a single cross-validation fold of $\vlambda$ at a time (gaining a noisier observation, but only spending a $k$-th of the computational budget). 
%
In order to select its next hyperparameter configuration $\vlambda$ using model $\mathcal{M}$, Bayesian optimization uses an \emph{acquisition function} $a_{\mathcal{M}}:\vLambda \rightarrow \mathds{R}$, which uses the predictive distribution of model $\mathcal{M}$ to quantify how useful knowledge about hyperparameter configurations $\vlambda \in \vLambda$ would be. This function is then maximized over $\vLambda$ to select the most useful configuration $\vlambda$ to evaluate next. Several well-studied acquisition functions exist~\citep{JonSchWel98,SchWelJon98,SriEtAl10:GP-UCB}; all aim to trade off exploitation (locally optimizing hyperparameters in regions known to yield good performance) versus exploration (trying hyperparameters in relatively unexplored regions of the space). 

%
%\note{Tried to clarify explanation. Should it be "which has been observed so far" instead of "which was observed"? Do we need to explain that this is all 1d?\\FH: added 'scalar'}
The most popular acquisition function is the \emph{expected improvement}~\citep{SchWelJon98} at a hyperparameter configuration $\vlambda$ over the best function value $f_{min}$ observed so far, where the expectation is taken over predictions with the current model $\mathcal{M}$:
\begin{equation}
\vspace*{-0.1cm}
\label{eqn:ei}\mathds{E}_{\mathcal{M}}[I_{f_{min}}(\vlambda{})] = \int_{-\infty}^{f_{min}} \max\{f_{min}-f,0\}\cdot p_{\mathcal{M}}(f \mid \vlambda) \; df.
\vspace*{+0.2cm}
\end{equation}
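When the predictive distribution $p_{\mathcal{M}}(f \mid \vlambda)$ is Gaussian with mean $\mu$ and variance $\sigma^2$ (as for the GP and random-forest models discussed below), this integral has a well-known closed form, sketched here for the minimization convention used above:

```python
import math

def expected_improvement(mu, sigma, f_min):
    """Closed-form EI for a Gaussian predictive distribution N(mu, sigma^2),
    minimizing f: EI = (f_min - mu) * Phi(z) + sigma * phi(z), z = (f_min - mu)/sigma."""
    if sigma == 0.0:
        # Deterministic prediction: improvement only if it beats f_min.
        return max(f_min - mu, 0.0)
    z = (f_min - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # std. normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # std. normal pdf
    return (f_min - mu) * Phi + sigma * phi
```

The two terms make the exploitation/exploration trade-off explicit: the first rewards a low predicted mean, the second rewards high predictive uncertainty.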
%
One main difference between existing Bayesian optimization algorithms lies in the model classes they employ.
In this paper, we empirically compare three popular Bayesian optimization algorithms for hyperparameter optimization that are based on different model types.

%\note{FH: don't overestimate what you can fit in... check the similar section in the Auto-WEKA paper to get an idea how much space fleshed-out text will take. You can use a similar approach as there. You probably should pull Spearmint up ahead of \TPE{} and \SMAC{}, as it is in a sense the most prototypical Bayesian optimization approach of the three. Then \SMAC{}, then \TPE{}. Also mention that there are more, such as~\cite{BarEtAl13}, but we're focusing on the three most widely used ones.}

\hide{
\begin{itemize}
\item Bayesian Optimization is a general framework for global optimization.

\item It is applicable for problems where the target function $f(\lambda)$ does not have gradient information and the solution for the target function is not analytically tractable nor convex and has a long runtime. The target function will therefore be regarded as a black box, this means that no information about the target function is available but that we are able to evaluate the function algorithm for given hyperparameters.

\item Bayesian Optimization makes use of the fact that there is a correlation between hyperparameters $\lambda$ and target function performance $y$. This is used to build a  regression model $M = p(y|\lambda)$ which predicts the performance of the target function for given hyperparameters.

\item The model is also called {\em surrogate} or {\em response surface}.

\item enables to use a so-called acquisition function to choose the most promising new hyperparameters. The most prominent one is the {\em Expected Improvement} which calculates, for new hyperparemeters, the expected improvement over the best response $f(\lambda)$ so far.

\item important that the acquisition function includes a trade-of between exploration and exploitation which allows to search a huge hyperparameter space.

\item Because evaluating the target function is expensive compared to evaluating the surrogate, we can search for the optimum of the surrogate and then evaluate our target function with these hyperparameters. %By spending time on searching the optimal hyperparameters for the next evaluation we can spend less time evaluating the target function and therefore .

\item The selection of new hyperparameters and the evaluation of them is done iteratively as described in listing \ref{pseudo-smbo}.

\item An in-depth introduction to Bayesian Optimization can be found in \cite{BroCorFre10}.

\item Difference between different hyperparameter optimization softwares are mostly the kind of regression model they use, how they choose new hyperparameters and how verbose the search space can be defined.
\end{itemize}
}


%\begin{itemize}
%\item Some formulas about (black box) functions and response surface, while evaluation of the approximated response surface is cheap
%\item General: tries to approximate response surface
%\item response surface is non convex, unknown gradient, non trivial
%\item (bb) function is expensive to evaluate
%\item Most optimizers follow the same principle (pseudocode)
%\item We choose \SMAC{}, \TPE{}, spearmint (why), but there are many more (GPGO, GGA)
%\item ?? EI
%\item ?? Exploration vs. Exploitation
%\end{itemize}

\hide{
%FH: actually, I think we can drop the pseudocode for space.
\begin{algorithm}[t]
%\vspace*{-0.05in}
{\footnotesize
\caption{SMBO}
\label{alg:smbo}
\begin{algorithmic}[1]
    \label{line:init}\STATE initialise model $\mathcal{M}$;  $\mathcal{H} \gets \emptyset$
    \WHILE{time budget for optimization has not been exhausted}
      \label{line:get_lambda}\STATE $\vlambda \gets$ candidate configuration from $\mathcal{M}$
      \label{line:get_c}\STATE Compute $c = \mathcal{L}(A_{\vlambda}, \mathcal{D}_{\text{train}}^{(i)}, \mathcal{D}_{\text{valid}}^{(i)})$
      \label{line:update_H}\STATE $\mathcal{H} \gets \mathcal{H} \cup \left\{(\vlambda,c)\right\}$
      \label{line:update_M}\STATE Update $\mathcal{M}$ given $\mathcal{H}$
    \ENDWHILE
    \STATE \textbf{return} $\vlambda$ from $\mathcal{H}$ with minimal $c$
\end{algorithmic}
}
%\vspace*{-0.2in}
%\vskip -1in
\end{algorithm}
}

\hide{
%FH: this pseudo code isn't bad, but for now, I copied the one of the Auto-WEKA paper since in this one here M_{t-1} is undefined in the first iteration. If you want to go back to this, comments out the includes for algorithm and algorithmic and bring back algorithm2e.
\begin{algorithm}
 \SetAlgoLined
 \KwIn{target function $f$, model $M$, time limit $T$}
 \KwResult{Best hyperparameter configuration $\lambda^*$}
 H $\leftarrow \emptyset$\;
 \For{$t\leftarrow 0$ \KwTo $T$}{
  $\lambda \leftarrow $ $SelectNew(M_{t-1})$\;
  $Evaluate$ $f(\lambda)$\;
  $H \leftarrow H \cup (\lambda, f(\lambda))$\;
  $FitModel(M_t, H)$\;
 }
 \caption{Pseudo-code for generic Sequential Model-based Optimization}
 \label{alg:smbo}
\end{algorithm}
}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

{\bf Sequential Model-based Algorithm Configuration (\SMAC{})}~\cite{HutHooLey11,RamHut13:code}.
%\item \textbf{}\footnote{We used implementation v2.06.01 from \url{www.cs.ubc.ca/labs/beta/Projects/SMAC}} 
\SMAC{} uses random forests to model $p_{\mathcal{M}}(f \mid \vlambda)$ as a Gaussian distribution whose mean and variance are the empirical mean and variance over the predictions of the forest's trees. For hyperparameter optimization problems with cross-validation, \SMAC{} saves time by evaluating the loss of configurations on one fold at a time; different configurations are compared based only on the folds evaluated for both. \SMAC{} supports continuous, categorical, and conditional parameters. It was the best-performing optimizer for Auto-WEKA~\cite{ThoEtAl13} and has also been used to configure many combinatorial optimization algorithms.
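The random-forest predictive distribution described above can be sketched in a few lines; here `trees` is a hypothetical list of per-tree prediction functions standing in for a trained forest.

```python
def rf_predictive_gaussian(trees, lam):
    """Gaussian p(f | lambda) as used by SMAC (sketch): mean and variance
    are the empirical moments of the individual tree predictions."""
    preds = [tree(lam) for tree in trees]
    n = len(preds)
    mean = sum(preds) / n
    var = sum((p - mean) ** 2 for p in preds) / n
    return mean, var
```

The resulting mean and variance can be fed directly into the closed-form expected-improvement computation.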

{\bf \Spearmint{}}~\cite{SnoLarAda12,Snoek13:code}.
%\footnote{We used the implementation from \url{www.cs.toronto.edu/~jasper/software.html}}} 
\Spearmint{} uses a Gaussian process~(GP) to model $p_{\mathcal{M}}(f \mid \vlambda)$ and performs slice sampling over the GP's hyperparameters~\cite{MurAda10}. It supports continuous and discrete parameters (by rounding), but does not provide a mechanism to exploit knowledge about conditional parameters. 
%It draws new points by calculating expected improvement over a Sobol grid, followed by gradient ascent. 
%\note{Commented out the Sobol library limitations since they no longer apply. Shall we uncomment the gradient ascent?\\FH:commented out the whole part since we don't say how the others optimize EI; this was only in there since the limitation was in the Sobol library.}
	%Due to limitations in the Sobol library, it does not support more than 40 hyperparameters. \note{Jasper has not yet committed a new version to his github repo}

{\bf Tree Parzen Estimator (\TPE{})}~\cite{BerEtAl11,Ber13:code}.
%\textbf{\footnote{We used the Hyperopt implementation, Github version from 08/30/2013.} 
\TPE{} is a non-standard Bayesian optimization algorithm. While \Spearmint{} and \SMAC{} model $p(f \mid \vlambda)$ directly, \TPE{} models $p(f<f^*)$, $p(\vlambda \mid f<f^*)$, and $p(\vlambda \mid f\ge f^*)$, where $f^*$ is defined as a fixed quantile of the losses observed so far, and the latter two probabilities are defined by tree-structured Parzen density estimators. With these distributions, a term proportional to the expected improvement from Equation~\ref{eqn:ei} can be computed in closed form~\cite{BerEtAl11}. \TPE{} supports continuous, categorical, and conditional parameters, as well as priors for each hyperparameter over which values are expected to perform best. It has been used successfully in several papers beyond the one in which it was introduced \cite{BerYamCox13,BerCoxC13,ThoEtAl13}.
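One \TPE{}-style suggestion step for a single continuous hyperparameter can be sketched as follows. This is an illustrative simplification, not the hyperopt implementation: observations are split at the quantile $f^*$, Gaussian-kernel Parzen estimators stand in for the "good" density $\ell(\vlambda)$ and "bad" density $g(\vlambda)$, and the candidate maximizing the ratio $\ell/g$ (which is proportional to expected improvement under the \TPE{} model) is returned.

```python
import math
import random

def tpe_suggest(history, gamma=0.25, n_candidates=24, bw=0.5, rng=random):
    """One simplified TPE step; `history` is a list of (lambda, loss) pairs."""
    hist = sorted(history, key=lambda p: p[1])
    n_good = max(1, int(math.ceil(gamma * len(hist))))
    good = [lam for lam, _ in hist[:n_good]]      # losses below quantile f*
    bad = [lam for lam, _ in hist[n_good:]] or good

    def kde(points, x):
        # Parzen estimator: mixture of Gaussian kernels at observed points.
        return sum(math.exp(-0.5 * ((x - p) / bw) ** 2) for p in points) / (
            len(points) * bw * math.sqrt(2.0 * math.pi))

    # Sample candidates from the "good" density and keep the one with the
    # highest ratio l(x)/g(x).
    cands = [rng.gauss(rng.choice(good), bw) for _ in range(n_candidates)]
    return max(cands, key=lambda x: kde(good, x) / (kde(bad, x) + 1e-12))
```

Because candidates are drawn from the good density itself, the suggestion stays near previously successful settings while the ratio penalizes regions that also produced poor losses.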


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Hyperparameter Optimization Benchmarks}\label{sec:benchmarks}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Table~\ref{tab:benchmarks} summarizes all of the benchmarks we collected for the first version of our \emph{hyperparameter optimization library, HPOlib}, which is available at {\smaller\url{www.automl.org/hpolib}}. HPOlib includes simple test functions (used for convenience in many papers) as well as the following benchmarks:
% collected from recent work on hyperparameter optimization using the optimizers we aim to evaluate~\cite{BerEtAl11,SnoLarAda12,ThoEtAl13}:

\hide{
\begin{itemize}
	\item \textbf{Simple Test Functions} that are cheap to evaluate, easy to implement, and that offer the advantage of known ground truth data. 
	\item \textbf{Low-dimensional} problems that are based on machine learning algorithms with only few hyperparameters, none of which are conditional. 
  \item \textbf{High-dimensional} problems that are based on more complex algorithms having all kinds of parameters.
\end{itemize}
%The algorithm themselves are not explained in detail as this is already done by the authors of the origin paper. 
}

\input{benchmarks_table}

{\bf Low-dimensional benchmarks.} We collected three benchmarks with few hyperparameters from~\cite{SnoLarAda12}: simple \textsc{Logistic Regression} to classify the popular MNIST dataset; \textsc{online Latent Dirichlet Allocation (LDA)} for Wikipedia articles; and \textsc{Structured Support Vector Machines (SVM)}. The latter two benchmarks are defined on a grid of hyperparameter values: for each of the grid points ($288$ for LDA; $1400$ for SVM), algorithm performance data 
%(on a validation set; no cross-validation)
%\note{The spearmint paper says: "We ran a grid search over the 1400 possible combinations of these parameters, evaluating each over 5 random 50-50 training and test splits". We think that the information in brackets is misleading. Writing more about that takes a lot of space, should we just leave it out? FH: ok.} 
has been precomputed by~\cite{SnoLarAda12} to allow for very rapid experiments. Another advantage of such precomputed data is that anyone can use these benchmarks without having to compile and run the respective ML algorithms.

%\note{MF: I reduced the margin between the paragraphs. Also altered the text about preprocessing.\\ FH:ok}
{\bf Medium-dimensional benchmarks.} We collected two types of benchmarks of intermediate dimensionality from~\cite{BerEtAl11}.  \hpnnet{} and \hpdbnet{} implement a simple and a deep neural network, respectively. 
Both run faster on GPUs than CPUs, and both include continuous and categorical parameters, some of which are conditional. 
%Both use preprocessing methods that (at least for \hpnnet{}) need to be set carefully to achieve high performance. 
%
For each hyperparameter in these benchmarks, an expert-defined prior over good values is defined. Since these priors cannot be expressed in \SMAC{}'s and \Spearmint{}'s formats, they get lost in translation, as does conditionality in the case of \Spearmint{}.
%We tested both methods on two datasets described in~\cite{LarErhCouBerBen07}: MNIST with rotated background images and convex. 
%

{\bf High-dimensional benchmarks.} 
To test the limits of current optimizers, we also investigated the \textsc{Auto-WEKA} framework~\cite{ThoEtAl13}, which encodes combined model selection and hyperparameter optimization as an enormous hierarchical (i.e., highly conditional) space with 786 hyperparameters. 


All benchmarks in HPOlib can be accessed through the command line. This supports optimizers written in arbitrary programming languages and allows us to control their use of resources. 
%
%Many machine learning algorithms take a prohibitive amount of time when called with certain input parameters (e.g., training a neural network with 1\,000 layers) and rather than stalling the entire optimization process, such calls should be terminated prematurely. 
%
HPOlib includes Python scripts that offer a common interface to the three Bayesian optimization algorithms used throughout this paper (and can convert between their input formats). These scripts call the optimizers and wrap their calls to the machine learning algorithm being optimized with a tool called runsolver~\cite{Rou11} to ensure that given time and memory limits are respected. Calls that exceed these limits (consider, e.g., a call to a neural network setting the number of layers to 1\,000) are terminated using SIGTERM and SIGKILL; for \hpnnet{} and \hpdbnet{}, we instead use their internal termination mechanism and grant an additional 200-second grace period to return the quality of the current model. Terminated or otherwise crashed calls to the machine learning algorithm that produce no valid output yield the worst possible result for the hyperparameter setting being evaluated.
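The termination policy just described can be sketched with the standard library alone; this is a simplified illustration of the SIGTERM-then-SIGKILL behaviour, not the runsolver tool itself, and memory limits are not handled here.

```python
import subprocess

def run_with_limit(cmd, time_limit, grace=200):
    """Run a benchmark command under a wall-clock limit. Returns the exit
    code on normal completion, or None if the call had to be terminated
    (which the caller should score as the worst possible result)."""
    proc = subprocess.Popen(cmd)
    try:
        return proc.wait(timeout=time_limit)
    except subprocess.TimeoutExpired:
        proc.terminate()                      # SIGTERM: ask nicely first
        try:
            proc.wait(timeout=grace)          # grace period to report results
        except subprocess.TimeoutExpired:
            proc.kill()                       # SIGKILL as a last resort
            proc.wait()
        return None
```

Routing every evaluation through a wrapper like this keeps a single pathological configuration from stalling an entire optimization run.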
%
All calls to the machine learner and their results are stored in a uniform optimizer-independent format, to facilitate the analysis of results.
%
Finally, the HPOlib scripts also allow restarting interrupted optimizer runs from their last state.
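
As a rough illustration, the wrapping of a single benchmark call can be sketched as follows. This is a simplified Python sketch only: HPOlib delegates the actual resource enforcement to runsolver, and the function name and the convention that the benchmark prints its loss on the last output line are hypothetical.

```python
import signal
import subprocess

WORST_RESULT = float("inf")  # worst possible loss, assigned to failed runs

def run_with_limits(cmd, time_limit_s, grace_period_s=200):
    """Run one benchmark call under a wall-clock limit.

    On timeout, the process first receives SIGTERM (so benchmarks with an
    internal termination mechanism can still report the quality of their
    current model during the grace period) and then SIGKILL.
    """
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    try:
        out, _ = proc.communicate(timeout=time_limit_s)
    except subprocess.TimeoutExpired:
        proc.send_signal(signal.SIGTERM)
        try:
            out, _ = proc.communicate(timeout=grace_period_s)
        except subprocess.TimeoutExpired:
            proc.kill()
            proc.communicate()
            return WORST_RESULT
    try:
        # Convention assumed here: the benchmark prints its loss last.
        return float(out.strip().splitlines()[-1])
    except (ValueError, IndexError):
        return WORST_RESULT  # crashed call without valid output
```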

Since overfitting is a critical issue in hyperparameter optimization (especially as hyperparameter optimization methods improve), HPOlib supports $k$-fold cross-validation, either by evaluating all $k$ folds at once or by evaluating one fold at a time.
Of the benchmarks above, only \textsc{Auto-WEKA} used cross-validation; to study the importance of cross-validation, we added versions of the \textsc{Logistic Regression} and \hpnnet{} benchmarks with 5-fold cross-validation.
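
To see why fold-at-a-time evaluation can stretch a fixed budget of fold evaluations, consider the following deliberately simplified Python sketch. The early-rejection rule and function names here are hypothetical illustrations, not SMAC's actual intensification mechanism.

```python
def crossval_fold_at_a_time(objective, configs, k, incumbent):
    """Caricature of fold-at-a-time evaluation: a configuration whose first
    fold already looks worse than the incumbent's mean loss is rejected,
    saving the remaining k - 1 fold evaluations.  (SMAC's real
    intensification mechanism is considerably more involved.)"""
    fold_evals = k  # the incumbent is evaluated on all k folds
    best = incumbent
    best_loss = sum(objective(incumbent, f) for f in range(k)) / k
    for config in configs:
        first = objective(config, 0)
        fold_evals += 1
        if first > best_loss:  # reject after a single fold evaluation
            continue
        loss = (first + sum(objective(config, f) for f in range(1, k))) / k
        fold_evals += k - 1
        if loss < best_loss:
            best, best_loss = config, loss
    return best, best_loss, fold_evals
```

With many poor configurations, most candidates cost only one fold evaluation each instead of $k$, which is the effect described for \SMAC{} in our experiments.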

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Experiments} \label{sec:results}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\input{results_table}

We ran each optimizer with its default settings 10 times on each benchmark for the number of function evaluations used in the paper introducing the benchmark.
We used a runsolver timeout of one hour for individual runs (in case of cross-validation, for individual folds); this only took effect for a few runs on the \hpnnet{} and \hpdbnet{} experiments.
The \hpnnet{} and \hpdbnet{} experiments were run on a cluster of NVIDIA Tesla M2070 GPUs, which imposed a wall time limit of 24 hours per optimizer run. Since this did not suffice for the target number of function evaluations, we used HPOlib's scripts to restart the optimizers from their last saved state until that number was reached.
Table \ref{tab:results} summarizes our results. 
%Overall, these are consistent with our expectations, showcasing the various strengths and weaknesses of the optimizers.
Overall, \Spearmint{} performed best for the low-dimensional continuous problems.\footnote{However, it showed some robustness problems, e.g., crashing on 1 of 10 runs for the \textsc{hartmann-6} function because of a singular covariance matrix. It also had problems with discrete parameter values: by maximizing expected improvement over a dense Sobol grid instead of the discrete input grid, it sometimes repeatedly chose values that were rounded to the same discrete values for evaluation (leading to repeated samples).} 
For the higher-dimensional problems, which also include conditional parameters, \SMAC{} and \TPE{} performed better. 
However, we note that with only 10 runs per optimizer, many performance differences were not statistically significant.

For benchmarks including $k$-fold cross-validation, \SMAC{} evaluated one fold at a time, while the other methods (which do not yet support single fold evaluations) evaluated all $k$ folds. \SMAC{} thus managed to consider roughly 4 times more configurations than either \TPE{} or \Spearmint{} in the same budget of fold evaluations.
Since the budget for each optimizer was expressed as a number of function evaluations, optimizers were not penalized for choosing costly-to-evaluate hyperparameter settings; indeed, \Spearmint{} runs were sometimes up to three times slower than \TPE{} runs. In the future, we plan to use time budgets and \Spearmint{}'s time-sensitive EI criterion~\cite{SnoLarAda12}.
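This criterion trades off predicted improvement against predicted evaluation time; schematically (a sketch in our notation, with $\mu_c(\bm{x})$ denoting the model's prediction of the wall-clock cost of evaluating setting $\bm{x}$),
\begin{equation*}
\mathrm{EI}_{\text{per sec}}(\bm{x}) = \frac{\mathrm{EI}(\bm{x})}{\mu_c(\bm{x})},
\end{equation*}
so that cheap settings with moderate expected improvement can be preferred over expensive ones.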
%
%
We also studied the CPU time required by the optimizers. \SMAC{}'s and \TPE{}'s overhead was negligible ($<1$ second), but due to the cubic scaling behaviour of its GPs, \Spearmint{} required $>42$ seconds to select the next data point after 200 evaluations. This would prohibit its use for the optimization of cheap functions, but here this overhead was dominated by the expensive function evaluations.
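
The cubic scaling can be made concrete with a back-of-the-envelope count (a hypothetical illustration, not \Spearmint{}'s implementation): fitting an exact GP requires a Cholesky factorization of the $n \times n$ covariance matrix, at roughly $n^3/3$ multiply-add operations.

```python
def gp_fit_flops(n):
    """Rough multiply-add count for the Cholesky factorization of an
    n x n GP covariance matrix: about n**3 / 3 operations."""
    return n ** 3 / 3

# Doubling the number of evaluated configurations multiplies the
# per-iteration modelling cost by roughly eight.
ratio = gp_fit_flops(400) / gp_fit_flops(200)
```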



%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Conclusion and Future Work}\label{sec:conc_future}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

This work introduces a benchmark library for hyperparameter optimization and provides the first extensive comparison of three optimizers.
To support further research, our software package and benchmarks are publicly available at {\smaller\url{www.automl.org/hpolib}}. It offers a common interface for the three optimization packages utilized in this paper and allows the easy integration of new ones. 
Our benchmark library is only a first step, but we are committed to making it easy for other researchers to use it and to contribute their own benchmarks and optimization packages.


%\subsubsection*{Acknowledgments}
%Let's leave this for the CRC (where it can go with the references).
%This research has been enabled by the use of computing resources provided by WestGrid and Compute/Calcul Canada.

\newpage
\footnotesize
\bibliography{short,bayesopt}
\bibliographystyle{unsrt}

\end{document}
