\section{Tuning Parameters}
\label{sec:parameters}

\begin{figure*}[!t]
\centering
\centerline{
    \includegraphics[height=3.5in]{img/lambda_epoch.eps}
}
\caption{\figtitle{Epochs until Convergence.}
Lower values of $t_0$ tend to converge more quickly to the actual value,
regardless of the rank of the approximation. For this experiment, we fixed
$\mu$ at $1.0$.
}
\label{fig:lambda_epoch}
\end{figure*}

In our algorithm, we have several tunable parameters: the learning rate 
$\lambda$, the strength of regularization $\mu$, and the rank of matrix
factorization \paramk{}. In the following subsections, we present the
methodology we use to tune these parameters.

\subsection{Tuning the Learning Rate}
Since we had three parameters to tune and a large search space, we began by
fixing one parameter at a time and observing the effects of varying the other
two on MSE and the convergence rate.
The learning rate $\lambda$ determines how quickly the stochastic gradient descent
algorithm converges. As mentioned in Section~\ref{sec:algorithm}, the learning
rate is given by $\lambda = 1 / (t_0 + t)$, where $t_0$ is the parameter we
tune and $t$ is the current epoch number. Figure~\ref{fig:lambda_epoch} shows
the number of epochs each successive multiplicative model in a rank \paramk{}
approximation requires to converge as we vary $t_0$ and the rank $k$. Higher
values of $t_0$ generally take longer to converge than smaller ones.
Consequently, we pick $t_0 = 10$ for quick convergence. We did not, however,
use $t_0 = 1$, since it produced MSE values that were not representable as
32-bit floating-point numbers.

Since $t_0$ appears in the denominator, low values of $t_0$ make $\lambda$
large during the earliest epochs, so the initial updates take larger steps and
the predictions converge faster. Initializing $t_0$ with a large value damps
every epoch's update from the start, so the predicted values take longer to
converge.
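
The schedule's effect can be sketched directly; this is a minimal
illustration, and the function name, epoch range, and $t_0$ values below are
ours, not fixed by the algorithm:

```python
# Decaying learning-rate schedule lambda = 1 / (t0 + t).
# The epoch range and t0 values here are illustrative.

def learning_rate(t0: float, t: int) -> float:
    """Learning rate at epoch t for a given offset t0."""
    return 1.0 / (t0 + t)

# A small t0 yields large early updates, so convergence is quicker;
# a large t0 damps every epoch from the start.
rates_small = [learning_rate(10, t) for t in range(5)]
rates_large = [learning_rate(100, t) for t in range(5)]
```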

\subsection{Tuning the Strength of Regularization}
After fixing $t_0 = 10$ for faster convergence, we turn to optimizing $\mu$,
which we tune to minimize MSE. Figure~\ref{fig:mu_mse} shows the effect of
varying $\mu$ while fixing $k = 20$. We picked $k = 20$ for this experiment
because it lies in the middle of the search space for $k$, giving a
representative view of how MSE responds to $\mu$; we later show that the
observed pattern holds for other values of $k$ as well. Setting $\mu$ close to
zero, which meant almost no regularization, lowered the MSE to \lowestMSE{}.
Because these error rates are so low, however, we believe the model is likely
overfitting due to the lack of regularization, though we have no real test set
to validate against. MSE plateaus around $\mu = \muplateau{}$, so raising
$\mu$ beyond this value is not advantageous. We select a point on the slope,
$\mu = \optmu{}$, as a reasonable compromise that avoids overfitting while
lowering MSE without giving up regularization entirely.
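
To make the role of $\mu$ concrete, the following is a hypothetical single
SGD update for one observed rating in the standard regularized
matrix-factorization form; our actual update rule appears in
Section~\ref{sec:algorithm}, and the names and constants below are
illustrative only:

```python
# Hypothetical single SGD step for one rating r, showing how mu trades
# fitting the data against shrinking the factors toward zero.
# Names and values are illustrative, not the paper's exact algorithm.

def sgd_step(u, v, r, lam, mu):
    """Update user factors u and movie factors v in place for rating r."""
    err = r - sum(a * b for a, b in zip(u, v))   # prediction residual
    for i in range(len(u)):
        u_i = u[i]
        u[i] += lam * (err * v[i] - mu * u_i)    # mu shrinks large factors
        v[i] += lam * (err * u_i - mu * v[i])
    return err

u, v = [0.1, 0.2], [0.3, 0.4]
before = sum(a * b for a, b in zip(u, v))        # prediction before update
sgd_step(u, v, r=4.0, lam=0.05, mu=1.0)
after = sum(a * b for a, b in zip(u, v))         # prediction moves toward r
```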

\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{img/mu_mse.eps}
\caption{\figtitle{Identifying the Effects of $\mu$ on MSE.}
We observe that while fixing $k$ at $20$, small values of $\mu$ drop MSE as
low as $\lowestMSE{}$; however, this is likely overfitting, since there is
close to no regularization. MSE plateaus around $\mu = \muplateau{}$, so
larger values do not affect MSE. We pick $\mu = \optmu{}$ to obtain a better
MSE while still maintaining regularization and avoiding overfitting.
}
\label{fig:mu_mse}
\end{figure}

Nevertheless, we must still validate that the observed pattern holds for other
values of $k$. Figure~\ref{fig:k_mu_mse} shows the result of varying both $k$
and $\mu$. The behavior of MSE is very similar for all of our values of $k$:
MSE climbs steeply while $\mu$ is less than \muplateau{} and plateaus for
larger values. Consequently, we are confident in our choice of $\mu$
regardless of how we vary $k$. The contour projection of the surface also
shows that the pivot point beyond which MSE plateaus is nearly the same for
all values of $k$.

\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{img/k_mu_mse.eps}
\caption{\figtitle{Validating that Patterns of $\mu$ on MSE Hold Across $k$.}
We see that for all of the plotted values of $k$, MSE grows steeply as $\mu$
increases from $0$ to $\muplateau{}$ and then flattens out. The observed
pattern holds for all of our values of $k$.
}
\label{fig:k_mu_mse}
\end{figure}

\subsection{Tuning the Rank of Matrix Factorization}
Now that we have established good values for $t_0$ and $\mu$, we need to tune
the rank of matrix factorization, $k$. The search space for $k$ ranges from
$1$ to $38$. We limit $k$ to $38$ because our model has $k$ parameters for
each user and for each movie, or more formally,

\begin{equation}
n_{\mathrm{params}} = k \cdot (n_{\mathrm{users}} + n_{\mathrm{movies}})
\end{equation}

where $n_{\mathrm{params}}$ is the number of parameters, $n_{\mathrm{users}}$
is the number of users, and $n_{\mathrm{movies}}$ is the number of movies.
Increasing $k$ past $38$ is highly likely to lead to overfitting, since $k =
39$ creates $102{,}375$ parameters when there are only $100{,}000$ examples to
train on. With more parameters than training examples, the system is
underdetermined, and multiple parameters must be fit from the same training
example.
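
The parameter count can be checked directly; the user and movie counts below
are assumptions chosen to be consistent with the stated figure of $102{,}375$
parameters at $k = 39$:

```python
# n_params = k * (n_users + n_movies).
# 943 users and 1682 movies are assumed counts consistent with the
# stated 102,375 parameters at k = 39 (943 + 1682 = 2625).

def n_params(k: int, n_users: int = 943, n_movies: int = 1682) -> int:
    """Total number of model parameters for a rank-k factorization."""
    return k * (n_users + n_movies)
```

A rank of $38$ keeps the parameter count just under the $100{,}000$ training
examples, while $k = 39$ exceeds it.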

Figure~\ref{fig:k_mse} shows the effect of varying $k$ from $1$ to $38$ with
our other parameters fixed; we observe that MSE decays roughly exponentially.
This shows that increased residual fitting improves MSE as expected, since
each model added to predict the residual improves the overall fit, with
diminishing returns.
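
The diminishing-returns behavior can be illustrated with a small
self-contained sketch; it uses alternating least squares on a random matrix
as a stand-in for our SGD fit, and all names and sizes below are illustrative:

```python
# Residual fitting: each successive rank-1 model is fit to what the
# previous models left unexplained, so squared error shrinks with
# diminishing returns. ALS here stands in for the SGD fit in the paper.
import random

def rank1_fit(R, iters=50):
    """Fit a rank-1 model u v^T to matrix R by alternating least squares."""
    m, n = len(R), len(R[0])
    u, v = [1.0] * m, [1.0] * n
    for _ in range(iters):
        vv = sum(x * x for x in v)
        u = [sum(R[i][j] * v[j] for j in range(n)) / vv for i in range(m)]
        uu = sum(x * x for x in u)
        v = [sum(R[i][j] * u[i] for i in range(m)) / uu for j in range(n)]
    return u, v

def residual_fit_errors(R, k_max):
    """Squared error remaining after fitting k rank-1 components, k=1..k_max."""
    residual = [row[:] for row in R]
    errors = []
    for _ in range(k_max):
        u, v = rank1_fit(residual)
        for i in range(len(residual)):
            for j in range(len(residual[0])):
                residual[i][j] -= u[i] * v[j]   # next model sees the residual
        errors.append(sum(x * x for row in residual for x in row))
    return errors

random.seed(0)
R = [[random.random() for _ in range(5)] for _ in range(6)]
errs = residual_fit_errors(R, 4)   # errors shrink with diminishing returns
```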

\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{img/k_mse.eps}
\caption{\figtitle{The Effect of Rank Approximation on MSE.}
We observe a decay in MSE as we increase $k$, which corroborates our
expectation that residual fitting successively diminishes the MSE.
}
\label{fig:k_mse}
\end{figure}


