\documentclass{article} % For LaTeX2e
\usepackage{nips12submit_e,times}
\usepackage{amsmath,amssymb,amsthm}
\usepackage{graphicx}
%\usepackage{algorithmic}
\usepackage{algorithm}
\usepackage{algpseudocode}
\usepackage{algorithmicx}

\renewcommand{\algorithmiccomment}[1]{\hspace{1em}// #1}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}

%%%%%%%%%%%%%%%%%%
\newtheorem{corollary}{Corollary}


\title{Hyper-Parameter Optimization for Large Scale Data Sets\thanks{CS761
Course Project Final Report, Spring 2012. Submitted to Jerry Zhu.}}


\author{
Halit Erdogan \\
\texttt{halit@cs.wisc.edu} \\
\And
Johny Lam \\
\texttt{johnylam@cs.wisc.edu} \\
\And
Jonathan Schwartz \\
\texttt{jgschwartz@wisc.edu} \\
}

\newcommand{\fix}{\marginpar{FIX}}
\newcommand{\new}{\marginpar{NEW}}

\nipsfinalcopy % Uncomment for camera-ready version

\begin{document}


\maketitle

\begin{abstract}
We develop a fast and scalable algorithm to find good values for the hyper-parameters of a given machine learning algorithm. We compare our algorithm with grid search on small data sets. The results show that our approach provides significant performance gains while achieving comparable accuracy.
\end{abstract}


\section{Introduction}\label{sec_intr}

The goal of machine learning algorithms is to map a given finite data set ${\cal X}$ (sampled from a distribution ${\cal D}$) to a function $g$. The algorithms use a loss function ${\cal L}(x;g)$ where $x \stackrel{\text{iid}}{\sim} {\cal D}$ and try to minimize the \textit{expected loss} $\mathbb{E}_x({\cal L}(x;g))$. Most of the time, a machine learning algorithm ${\cal A}$ comes with a set $H$ of \textit{hyper-parameters}. Each hyper-parameter $h \in H$ has a (possibly infinite) domain. A particular assignment of values to hyper-parameters is called a \textit{hyper-parameter configuration} $\lambda$, and all possible configurations constitute the \textit{hyper-parameter configuration space} $\Lambda$. The actual algorithm is obtained after choosing a $\lambda$ and denoted by ${\cal A}^{\lambda}$.

Finding a good hyper-parameter configuration is of paramount importance in empirical machine learning research. This problem is called \textit{hyper-parameter optimization} and is defined as follows:
%
$$
\underset{\lambda \in \Lambda}{\operatorname{argmin}} \ \  \mathbb{E}_x({\cal L} (x;{\cal A}^{\lambda}({\cal X})))
$$
%
In practice, the expected loss is generally estimated by some performance metric on the training data, e.g., cross-validation. Let us denote this performance metric by $Err({\cal A}, {\cal X}, \lambda)$. Practitioners typically perform grid search over the configuration space to find the $\lambda$ that minimizes $Err$~\cite{larerh07}. In grid search, the user manually sets the bounds and discretizes the domains, and the algorithm then exhaustively searches the resulting finite configuration space. Grid search is widely used since it involves no technical overhead and is easy to parallelize. For example, \cite{gandeb11} introduces a map-reduce implementation of grid search to obtain high performance. On the other hand, grid search suffers when $|H|$ is large, since the size of the search space is exponential in the number of hyper-parameters. Heuristic search algorithms have been proposed to remedy this problem~\cite{berber11,berben12}. These algorithms do not exhaustively search the configuration space and therefore remain effective in high-dimensional spaces.
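To make the baseline concrete, the following is a minimal sketch of exhaustive grid search as described above. The names (`grid_search`, `err`, the toy metric) are illustrative, not taken from the paper; in practice `err` would run cross-validation with the given configuration.

```python
from itertools import product

def grid_search(err, grid):
    """Exhaustively evaluate every configuration in a discretized grid.

    `grid` maps each hyper-parameter name to a finite list of candidate
    values; `err` estimates Err(A, X, lambda), e.g., via cross-validation.
    Returns the configuration with the lowest estimated error.
    """
    best_lam, best_err = None, float("inf")
    names = sorted(grid)
    # The loop runs over the Cartesian product of all domains, so its
    # cost is exponential in the number of hyper-parameters |H|.
    for values in product(*(grid[n] for n in names)):
        lam = dict(zip(names, values))
        e = err(lam)
        if e < best_err:
            best_lam, best_err = lam, e
    return best_lam, best_err

# Toy error surface, minimized at k=5, C=1.0 (hypothetical parameters).
toy_err = lambda lam: (lam["k"] - 5) ** 2 + (lam["C"] - 1.0) ** 2
best, best_err = grid_search(toy_err, {"k": [1, 3, 5, 7],
                                       "C": [0.1, 1.0, 10.0]})
```

Note that each call to `err` here touches the full training set, which is exactly the cost that becomes prohibitive at scale.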

None of these algorithms is applicable when the computation of $Err$ is too expensive, which is typical when the data set is large. For example, recent data sets in the UCI repository\footnote{http://archive.ics.uci.edu/ml/} and in machine learning competitions\footnote{http://www.heritagehealthprize.com/} contain thousands of examples. It is therefore desirable to have hyper-parameter optimization algorithms that can work on large training sets. In this work, we develop algorithms that fulfill this requirement. The contributions of this work are as follows: (i) we analyze theoretical bounds for hyper-parameter optimization, (ii) we develop a heuristic algorithm that easily scales to large data sets, and (iii) we compare the effectiveness and applicability of our algorithm with grid search on small data sets. The results indicate that our approach gives results comparable to grid search on small data sets. One can therefore expect it to give reasonable results on large data sets, where grid search and other exhaustive search algorithms are not applicable.

\section{Mathematical Foundation}\label{sec_math}

The classifier ${\cal A}^{\lambda}$ depends on the configuration $\lambda$, and we are interested in finding a configuration that results in a \textit{best} classifier, i.e., one that has the lowest expected loss, or true test set error. Let us denote the true test set error of a classifier by $R({\cal A}^{\lambda})$. What we can actually measure, however, is the error on a particular training set of size $n$; let us denote this by $R_{n}({\cal A}^{\lambda})$.

Once we compute $R_{n}({\cal A}^{\lambda})$, we can use VC-dimension~\cite{vapche71} and Rademacher complexity~\cite{kolpan00} to derive lower and upper bounds on the true test set error. Although these bounds are loose or practically incomputable, they provide very useful insight into how and why our heuristic algorithm works. These bounds are generally stated with a confidence level: with probability at least $1-\delta$, the true test set error lies between the lower bound ${\cal LB}(R_n({\cal A}^{\lambda}), \delta)$ and the upper bound ${\cal UB}(R_n({\cal A}^{\lambda}), \delta)$. This leads to the following corollary.
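As a concrete (if still simplistic) illustration of such an interval, consider a \emph{fixed} classifier evaluated on $n$ i.i.d.\ examples with 0--1 loss. A Hoeffding-style argument then gives, with probability at least $1-\delta$,
$$
R({\cal A}^{\lambda}) \in \left[ R_n({\cal A}^{\lambda}) - \sqrt{\frac{\ln(2/\delta)}{2n}},\ \ R_n({\cal A}^{\lambda}) + \sqrt{\frac{\ln(2/\delta)}{2n}} \right].
$$
This simple bound ignores the fact that ${\cal A}^{\lambda}$ is itself fit to the data; the VC and Rademacher bounds cited above account for that at the cost of additional complexity terms. The important qualitative property is the same in all cases: the interval shrinks as $n$ grows.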

\begin{corollary}
Given two hyper-parameter configurations $\lambda$ and $\lambda'$, if ${\cal LB}(R_n({\cal A}^{\lambda}), \delta) > {\cal UB}(R_n({\cal A}^{\lambda'}), \delta)$, then with probability at least $1-\delta$, $R({\cal A}^{\lambda}) > R({\cal A}^{\lambda'})$. In other words, $\lambda'$ is a better hyper-parameter configuration than $\lambda$ for the algorithm ${\cal A}$.
\end{corollary}

For a given training set size $n$, we can use this corollary to determine the better hyper-parameter configurations. This idea leads to the following hierarchical optimization procedure.
%
\begin{algorithmic}
\Repeat
\State $BAD = \{\lambda \in \Lambda \ | \ \exists \lambda' \in \Lambda \text{ s.t. } {\cal LB}(R_n({\cal A}^{\lambda}), \delta) > {\cal UB}(R_n({\cal A}^{\lambda'}), \delta) \}$
\State $\Lambda := \Lambda - BAD$
\State increase $n$
\Until{$\Lambda$ contains only the best configurations}
\end{algorithmic}
%
In each step, we identify the set $BAD$ of configurations and prune it from the hyper-parameter configuration space. This approach is very useful when evaluating the algorithm on the entire data set is expensive: we start with a small portion of the data set and increase $n$ iteratively.
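The loop above can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's implementation: `err_on(lam, n)` stands for estimating $Err$ on a subsample of size $n$, and `bounds(e, n)` stands for any procedure returning $({\cal LB}, {\cal UB})$ around that estimate.

```python
def hierarchical_prune(configs, err_on, sizes, bounds):
    """Hierarchical pruning sketch (hypothetical names).

    A configuration is pruned when its lower bound exceeds some other
    configuration's upper bound, i.e., we keep exactly those whose lower
    bound is at most the smallest upper bound; n increases each round.
    """
    survivors = set(configs)
    for n in sizes:
        lb_ub = {lam: bounds(err_on(lam, n), n) for lam in survivors}
        best_ub = min(ub for (_, ub) in lb_ub.values())
        survivors = {lam for lam, (lb, _) in lb_ub.items() if lb <= best_ub}
        if len(survivors) == 1:
            break  # only the best configuration remains
    return survivors

# Toy example: configurations are k-NN's k; true error is lowest at k=5,
# and the (artificial) interval width shrinks as n grows.
err = lambda lam, n: abs(lam - 5) / 10
bnd = lambda e, n: (e - 1.0 / n, e + 1.0 / n)
result = hierarchical_prune([1, 3, 5, 7], err, [10, 100, 1000], bnd)
```

With the widths above, the first round (n=10) eliminates only the worst configuration, and the tighter second round isolates k=5, mirroring how larger samples permit more aggressive pruning.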

\section{Practical Algorithm}\label{sec_prac}
There are two practically challenging problems in the procedure given in the previous section. The first is the size of $\Lambda$: since it is exponential in the number of hyper-parameters, it is generally too large to search exhaustively, and in the case of continuous or unbounded domains, $|\Lambda|$ becomes infinite. The second is that the pruning strategy depends on bounds that are usually too loose to allow any pruning at all. We solve the first problem by considering only a finite subspace of $\Lambda$, and the second by relaxing the pruning strategy to a greedy one.

\paragraph{Replace $\Lambda$ with $\Lambda^*$} We perform a heuristic simulated annealing search~\cite{rusnor09} in the $\Lambda$ space. In each iteration, we evaluate the performance of a hyper-parameter configuration with $Err$ and move to a neighboring configuration by randomly changing the value of one hyper-parameter (subject to an acceptance probability). We record the set of the best hyper-parameter configurations encountered during the search. The resulting set constitutes $\Lambda^*$ and is used as the finite hyper-parameter configuration space in the subsequent iterations of the hierarchical optimization algorithm. In this step, we generally use a very small portion of the training set, so computing $Err$ is cheap and allows us to explore a significant portion of the search space.
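A textbook simulated-annealing loop of the kind described above might look as follows. This is a generic sketch under assumed names (`anneal`, `neighbor`, the $t_0/\text{step}$ cooling schedule), not the paper's exact procedure; any standard cooling schedule and neighborhood would do.

```python
import math, random

def anneal(err, neighbor, lam0, steps=200, t0=1.0, top_k=5, seed=0):
    """Simulated-annealing search over configurations (sketch).

    Records every configuration evaluated and returns the `top_k` best,
    which form the finite subspace Lambda* for the hierarchical phase.
    `err` would be computed on a small subsample, so each call is cheap.
    """
    rng = random.Random(seed)
    lam, e = lam0, err(lam0)
    seen = {lam0: e}
    for step in range(1, steps + 1):
        t = t0 / step                      # simple cooling schedule
        cand = neighbor(lam, rng)
        e_cand = err(cand)
        seen[cand] = e_cand
        # Always accept improvements; accept worse moves with
        # probability exp(-(delta error)/temperature).
        if e_cand <= e or rng.random() < math.exp(-(e_cand - e) / t):
            lam, e = cand, e_cand
    return sorted(seen, key=seen.get)[:top_k]

# Toy k-NN-like search over a single integer hyper-parameter k,
# with a quadratic error surface minimized at k=5.
toy_err = lambda k: (k - 5) ** 2
step_k = lambda k, rng: max(1, k + rng.choice([-1, 1]))
lambda_star = anneal(toy_err, step_k, 20)
```

Keeping the `top_k` set rather than a single winner is what lets the later pruning phase compare several promising configurations on progressively larger samples.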

\paragraph{Greedy Pruning} Since the lower and upper bounds on the true test set error are too loose, we need an alternative way to determine the $BAD$ set. We choose a simple strategy: we compute $Err$ for each configuration and keep only the top $k$ of them. In other words, we prune the configurations with the highest training set error. This approach is reasonable because, even with tighter bounds, pruning would start with the configurations that perform worst.

\section{Experimental Results}\label{sec_expr}
Our algorithm can scale to very large data sets since it computes $Err$ on small portions of the training set. In order to anticipate its performance on large data sets, we compare it with grid search on small data sets.

Figure~\ref{fig_knn} illustrates the performance of our hierarchical optimization algorithm (HHO) on a k-NN classification task. While grid search tries every $k$ value on the full data set, HHO runs simulated annealing on a small portion of the data set. The red points indicate the $k$ values reported as the best configurations by HHO. In this example, HHO runs 12 times faster than grid search and finds a set of best configurations very similar to the one found by grid search.

\begin{figure}[h]
\begin{center}
\includegraphics[scale=.7]{charts/knn_chart.pdf}
\end{center}
\vspace{-0.5cm}
\caption{Grid search vs. HHO.}
\label{fig_knn}
\end{figure}

We ran more extensive experiments on several standard UCI data sets with different well-known algorithms. Table~\ref{tab_res} summarizes the results. For each data set we report the machine learning algorithm used, the minimum and maximum cross-validation accuracies found by grid search, the cross-validation accuracy (on the entire data set) of the best configuration reported by HHO, and the speed-up obtained. The results indicate that HHO runs much faster than grid search and achieves comparable accuracy; in some cases it even finds a configuration with a better cross-validation accuracy than grid search.

\begin{table}
{\small
\begin{tabular}{|l|l|l|l|l|}
\hline
\textbf{Dataset} & \textbf{Algorithm}& \textbf{Grid Search}& \textbf{HHO}& \textbf{Speed-up}\\
&&\textbf{Accuracy Range}&\textbf{Accuracy}&\\
\hline
kdd\_synthetic\_control& K-NN & [85.2 - 97.8]\% & 96.50\% & 12.4\\
\hline
cylinder\_bands& SVM with Gaussian Kernel & [57.8 - 86.1]\% & 81.80\% & 16.6\\
\hline
breast\_cancer& Random Forest& [63.3 - 73.0]\% & 74.10\% & 2.6\\
\hline
\end{tabular}
\caption{Grid search vs. HHO.}
\label{tab_res}
}
\end{table}

\section{Discussion}\label{sec_disc}
We developed a fast method for hyper-parameter optimization that can scale to large data sets, and showed its effectiveness by comparing it with grid search on small data sets. These experimental results are only preliminary, and testing on a wider range of data sets and learning algorithms is still needed, but the method shows promise, and its speed-up over traditional search methods (at least on these small data sets) is notable. Our method is also generic and applicable to many machine learning algorithms with little effort (e.g., approximately 50 lines of code to plug in Weka's~\cite{halfra09} Random Forest algorithm). In that sense, it satisfies the software engineering needs of hyper-parameter optimization mentioned in~\cite{berben12}. The approach can also be made further scalable by parallelizing it with the MapReduce~\cite{deaghe08} framework, since the hierarchical procedure maps naturally onto map and reduce steps.
 
\bibliographystyle{plain}
\bibliography{cs761}
\end{document}
