\section{Condensing Information}
The Netflix data set is vast and contains a great deal of information, so making predictions on the raw data would take a very long time. In order to perform the predictions effectively, we have to condense, or in other words ``boil down'', the data set, removing the noise that could contaminate the predictions.

When implementing \textit{Funk SVD}, condensing the data is an essential part of the whole method, because it is used to concentrate the data and make it purer. This may also be why the decomposition is included in the name of the method itself.

To perform this condensing of the data set we utilize singular value decomposition, or SVD for short. This algorithm takes the very large \textit{user-movie-rating} matrix, let us call it \textbf{R}, and splits it into $U \Sigma V^T$, from which we form two smaller matrices $A = U \Sigma^{1/2}$ and $B = \Sigma^{1/2} V^T$. The matrix \textbf{A} describes each movie by a number of movie-genre factors, let us call this number \textbf{k}, and the matrix \textbf{B} describes each user's movie taste by the same \textbf{k} values. To minimize the error produced by this squeeze, we minimize the squared Frobenius norm of the difference between \textbf{R} and \textbf{AB}, learning \textbf{A} and \textbf{B} with the smallest possible error. In the expression below, $m$ indexes the movies (the rows of \textbf{A}), $u$ indexes the users (the columns of \textbf{B}) and \textbf{k} is the number of factors.
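The split described above can be sketched in a few lines of NumPy. This is a minimal illustration on a made-up toy rating matrix, not the actual Netflix data: it computes the SVD, keeps the $k$ largest singular values, and distributes $\Sigma^{1/2}$ onto both factor matrices.

```python
import numpy as np

# Hypothetical toy movie-by-user rating matrix R (3 movies x 4 users);
# the values are invented for illustration only.
R = np.array([
    [5.0, 3.0, 1.0, 4.0],
    [4.0, 2.0, 1.0, 3.0],
    [1.0, 5.0, 5.0, 2.0],
])

k = 2  # number of latent movie-genre factors to keep

# Full SVD: R = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(R, full_matrices=False)

# Truncate to the k largest singular values and split Sigma evenly:
# A = U_k Sigma_k^{1/2}  (movies x k),  B = Sigma_k^{1/2} Vt_k  (k x users)
sqrt_sigma = np.diag(np.sqrt(s[:k]))
A = U[:, :k] @ sqrt_sigma
B = sqrt_sigma @ Vt[:k, :]

# Rank-k reconstruction of the rating matrix
R_approx = A @ B
print(A.shape, B.shape)  # (3, 2) (2, 4)
```

Because the $k$ kept singular values are the largest ones, $AB$ is the best rank-$k$ approximation of $R$ in the Frobenius norm, which is exactly the ``smallest error'' property the method relies on.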

\[
\text{Error} = \sum_{m} \sum_{u} \left( R_{mu} - \sum_{i=1}^{k} A_{mi} B_{iu} \right)^2
\]
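As a sanity check, the double sum above is just the squared Frobenius norm of $R - AB$ written entry by entry. The sketch below, using small made-up matrices, computes the error both ways and shows they agree.

```python
import numpy as np

# Hypothetical small example: 2 movies, 3 users, k = 2 factors
# (all values invented for illustration).
R = np.array([[4.0, 2.0, 5.0],
              [1.0, 5.0, 3.0]])
A = np.array([[1.0, 0.5],
              [0.2, 1.0]])        # movies x k factor matrix
B = np.array([[3.0, 1.0, 4.0],
              [0.5, 4.0, 1.0]])   # k x users factor matrix

# Error = sum_m sum_u ( R_mu - sum_i A_mi * B_iu )^2, written as explicit loops
error_loops = sum(
    (R[m, u] - sum(A[m, i] * B[i, u] for i in range(A.shape[1]))) ** 2
    for m in range(R.shape[0])
    for u in range(R.shape[1])
)

# Equivalent vectorized form: squared Frobenius norm of R - A @ B
error_vec = np.sum((R - A @ B) ** 2)

print(error_loops, error_vec)
```

In practice the vectorized form is the one to use; the loop form only serves to mirror the summation in the formula.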