\section{The Algorithm}
\label{sec:algorithm}
\subsection{Overview}
The algorithm we implement for predicting ratings is based on matrix
factorization. In matrix factorization, each user and each movie is characterized
by a vector of \paramk{} parameters. We learn these parameters from the actual
ratings that users assigned to movies, and the model predicts a rating from the
interaction of the user parameters and movie parameters in question. In our
algorithm, we use Stochastic Gradient Descent to learn each parameter
vector. For a given rank $k$ factorization, the final output of the algorithm
is a prediction for every (user, movie) pair, with no missing entries.

\subsection{Training the Model}
In our algorithm, we choose a multiplicative model to make predictions
since it has fewer degrees of freedom than an additive model and it reduces
making a prediction to a single multiplication. In the multiplicative model,
we model the interaction of a user and a movie by the product of their
parameters, as mentioned above.
We represent the user parameter as $r_i$ and the movie parameter as $c_j$.
The product of these approximates the rating which the corresponding user
would assign to the corresponding movie.

To learn these parameters from the training examples, we use the update
rule for each parameter as presented in \cite{class_notes}. We begin by 
using $f$, the approximation function for the multiplicative model, 
as defined in equation \ref{eq:f} to define our regularized loss
function $e$ as 

\begin{equation}
\label{eq:e_abs}
e(f(r_i, c_j), x_{ij}) = \mu(r_i^2 + c_j^2) + |r_ic_j - x_{ij}|
\end{equation}
and
\begin{equation}
\label{eq:e_sq}
e(f(r_i, c_j), x_{ij}) = \mu(r_i^2 + c_j^2) + (r_ic_j - x_{ij})^2
\end{equation}

for MAE and MSE respectively, where $\mu$ is the strength of
regularization, $r_i$ is a parameter of the $i^{th}$ user and $c_j$ is a
parameter of the $j^{th}$ movie, and $x_{ij}$ is the actual rating given by
user $i$ to movie $j$.
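As a concrete check of these definitions, the MSE loss in equation \ref{eq:e_sq} can be written as a small function. This is an illustrative sketch for the rank-one (scalar parameter) case; the function name and signature are ours, not taken from the implementation:

```python
def loss_mse(r_i, c_j, x_ij, mu):
    """Regularized squared-error loss for one observed rating x_ij.

    r_i and c_j are the scalar user and movie parameters in the
    rank-one case; mu is the regularization strength.
    """
    return mu * (r_i ** 2 + c_j ** 2) + (r_i * c_j - x_ij) ** 2
```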

We optimize our model for MSE and hence use the loss function in equation \ref{eq:e_sq}.

Since minimizing the error directly is computationally difficult, we learn the
parameters of each user and each movie using the Stochastic Gradient Descent
algorithm. We begin with a single parameter for each user and each movie.
We first create an $n \times 1$ matrix, $R$, containing the parameters
characterizing the users, where $n$ is the number of users, and a
$1 \times m$ matrix, $C$, containing the parameters
characterizing the movies, where $m$ is the number of movies. We initialize
$R$ and $C$ with values drawn from a uniform distribution over the range
$[0, 1]$, using a fixed seed so that the initialization is deterministic.
We update each pair of parameters, $r_i$ and $c_j$, using the following rules:

\begin{equation}
\label{eq:r_update}
r_i := r_i - 2r_i\mu\lambda - 2\lambda(r_ic_j - x_{ij})c_j
\end{equation}
and
\begin{equation}
\label{eq:c_update}
c_j := c_j - 2c_j\mu\lambda - 2\lambda(r_ic_j - x_{ij})r_i
\end{equation}
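A single application of the two update rules above might look like the following sketch. Note that both new values are computed from the same pre-update parameters; names here are illustrative:

```python
def sgd_step(r_i, c_j, x_ij, mu, lam):
    """One SGD update for a single observed rating x_ij.

    lam is the learning rate and mu the regularization strength.
    Both new values use the old r_i and c_j, matching the update rules.
    """
    err = r_i * c_j - x_ij
    r_new = r_i - 2 * lam * mu * r_i - 2 * lam * err * c_j
    c_new = c_j - 2 * lam * mu * c_j - 2 * lam * err * r_i
    return r_new, c_new
```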

where $\lambda$ is the learning rate. This update rule is applied to the (user, movie) pair of each rating in
the training set. After the first pass, we have an initial predictor
for $x_{ij}$; however, we can still improve our model's accuracy. We do so
by repeating the update rules over all ratings in order to improve the
approximation. Each repetition of this update loop is called an epoch,
indexed by $t$. In each subsequent epoch, we update the values of $r_i$ and
$c_j$ with smaller steps since we want our predictions to converge to the
actual values. Consequently, we make the learning rate $\lambda$ a function
of the current epoch number, as given by $\lambda = 1 / (t_0 + t)$, where
$t_0$ is a constant parameter which can be tuned and $t$ is the current
epoch number. We stop iterating through the epochs once our predictive model
converges, that is, once subsequent epochs no longer change the predictions
significantly.
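The rank-one training procedure described above (seeded random initialization, per-rating updates, and the decaying learning rate $\lambda = 1/(t_0 + t)$) can be sketched as follows. The hyperparameter defaults (`mu`, `t0`, `n_epochs`) are illustrative, not the tuned values used in our experiments:

```python
import random

def train_rank_one(ratings, n_users, n_movies, mu=0.02, t0=50, n_epochs=100, seed=0):
    """Learn one parameter per user and per movie by SGD.

    ratings is a list of (i, j, x_ij) triples; defaults are illustrative.
    """
    rng = random.Random(seed)                     # fixed seed: deterministic init
    R = [rng.random() for _ in range(n_users)]    # user parameters, uniform random
    C = [rng.random() for _ in range(n_movies)]   # movie parameters
    for t in range(n_epochs):
        lam = 1.0 / (t0 + t)                      # decaying learning rate
        for i, j, x in ratings:
            err = R[i] * C[j] - x
            R[i], C[j] = (R[i] - 2 * lam * mu * R[i] - 2 * lam * err * C[j],
                          C[j] - 2 * lam * mu * C[j] - 2 * lam * err * R[i])
    return R, C
```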

\subsection{Residual Fitting}
In order to further improve the approximation achieved from training the
model with rank one approximation, we combine the multiplicative model with residual fitting. The
residual of an approximation $a_{ij}$ corresponding to an actual rating
$x_{ij}$ is given by $x_{ij} - a_{ij}$. The residual represents the direction
and magnitude of the difference between our prediction and real data.
Thus we can fit the residual by applying the same update rules with $x_{ij}$
replaced by the residual.

We combine the models by adding the approximations obtained from each
successive multiplicative model. In the first
multiplicative model, we learn only a single parameter for each user
$i$ and movie $j$, and we obtain the approximation $a_{ij}$ from it.
Next, we repeat the learning process in the second
multiplicative model by learning another single parameter for each user and
each movie. This time, we modify the update rules to fit the residual from
the first model. If the approximation we obtain from the second model is
$b_{ij}$, then our total approximation of the rating is given by adding the
two approximations $a_{ij} + b_{ij}$. Thus, the residual for the next
multiplicative model is $x_{ij} - (a_{ij} + b_{ij})$. 
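The stacking scheme above can be sketched as follows: each rank-one model is trained on whatever residual the previous models leave unexplained, and its approximation is added to the running total. The sketch repeats a rank-one SGD trainer so that it is self-contained; all names and hyperparameters are illustrative:

```python
import random

def train_rank_one(ratings, n_users, n_movies, mu=0.02, t0=50, n_epochs=100, seed=0):
    """Rank-one SGD trainer: one parameter per user and per movie."""
    rng = random.Random(seed)
    R = [rng.random() for _ in range(n_users)]
    C = [rng.random() for _ in range(n_movies)]
    for t in range(n_epochs):
        lam = 1.0 / (t0 + t)
        for i, j, x in ratings:
            err = R[i] * C[j] - x
            R[i], C[j] = (R[i] - 2 * lam * mu * R[i] - 2 * lam * err * C[j],
                          C[j] - 2 * lam * mu * C[j] - 2 * lam * err * R[i])
    return R, C

def fit_rank_k(ratings, n_users, n_movies, k):
    """Fit k stacked rank-one models; model m fits the residual of models 1..m-1."""
    pred = {(i, j): 0.0 for i, j, _ in ratings}   # running sum a_ij + b_ij + ...
    models = []
    for _ in range(k):
        # Replace x_ij with the residual left by the models fitted so far.
        residual = [(i, j, x - pred[(i, j)]) for i, j, x in ratings]
        R, C = train_rank_one(residual, n_users, n_movies)
        models.append((R, C))
        for i, j, _ in ratings:
            pred[(i, j)] += R[i] * C[j]           # add this model's approximation
    return models, pred
```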

In each multiplicative model, we learn an additional parameter for each
user and each movie. Thus, in $k$ multiplicative models, we learn \paramk{}
parameters for each user and each movie. This \paramk{} is the rank of 
approximation, or the rank of matrix factorization, since we are 
characterizing each user and movie by \paramk{} parameters.

\subsection{Making Predictions}
To make a prediction for some user $i$ and movie $j$, we use the function
$f$, defined as follows:

\begin{equation}
\label{eq:f}
f(r_i, c_j) = r_ic_j
\end{equation}

where $r_i$ is the vector of \paramk{} parameters for the $i^{th}$ user,
i.e., the $i^{th}$ row of $R$, and $c_j$ is the vector of \paramk{}
parameters for the $j^{th}$ movie, i.e., the $j^{th}$ column of $C$. For
$k > 1$, the product is the dot product of the two vectors, and the scalar
result is our prediction of the rating for the user-movie pair.
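For a rank-$k$ model, the product in equation \ref{eq:f} is the dot product of the two parameter vectors; a minimal sketch:

```python
def predict(r_i, c_j):
    """f(r_i, c_j): dot product of the k user parameters and k movie parameters."""
    return sum(r * c for r, c in zip(r_i, c_j))
```

In the rank-one case this reduces to the scalar product $r_i c_j$.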

\subsection{Complexity and Efficiency}
\label{sec:complexity}
In our algorithm, we maintain the training set in a sparse matrix of the
size of $O(x)$, where $x$ is the total number of ratings. Additionally, for
a rank $k$ factorization, we also generate matrices $R$ and $C$ of size $n
\times k$ and $k \times m$ respectively, where $n$ is the
number of users and $m$ is the number of movies. Thus, the total space
complexity of our algorithm is $O(x + k(n + m))$.

During the training process, our algorithm loops through all the ratings in
the training set until a convergence criterion is met. Thus, the total number
of update steps is $O(xt)$, where $t$ is the number of epochs until
convergence. For a rank $k$ approximation, this process is repeated $k$
times. Thus, our algorithm has an overall computational complexity of
$O(kxt)$.
