\documentclass{acm_proc_article-sp}
\usepackage{amssymb}
\usepackage{color}
\usepackage{graphicx}
\usepackage{listings}
\usepackage{xcolor}
\usepackage{mathrsfs}
\usepackage{subfigure}
\usepackage{threeparttable}
\usepackage{multirow}
\usepackage[pdftex,pagebackref,colorlinks]{hyperref}
\usepackage{clrscode}
\usepackage{wasysym}
\begin{document}
\title{A Proposal for Article Recommendation Systems}
\subtitle{An Introduction to Different Recommender Systems \\and a New Article Recommendation System}
\numberofauthors{3}
\author{
% 1st. author
\alignauthor
Huang Xiao\\
       \affaddr{School of Software, Tsinghua University}\\
       \affaddr{Beijing, China}\\
       \email{huangxiao09@gmail.com}
% 2nd. author
\alignauthor
Xu Hao\\
       \affaddr{School of Software, Tsinghua University}\\
       \affaddr{Beijing, China}\\
       \email{xuhao199224@gmail.com}
% 3rd. author
\alignauthor
Zhang Xiaojun\\
       \affaddr{School of Software, Tsinghua University}\\
       \affaddr{Beijing, China}\\
       \email{zhangxiaojun92@gmail.com}
}
\date{6 May 2012}

\maketitle
\begin{abstract}
This paper is organized in three parts. The first part gives an overview of recommender systems. The second part surveys related work on recommender systems, including papers that introduce different ways to implement them. The third part proposes a brand-new recommender system that introduces new articles to users based on their interests.
\end{abstract}
\keywords{Recommender System, Collaborative Filtering, Content-based Recommendation}

\section{Related Paper Reading}
This section introduces several papers related to recommender systems. We summarize the main ideas of each paper and draw conclusions about the strengths and weaknesses of the different recommender systems.
\subsection{Overviews on Recommender Systems}
From these articles we can conclude that a recommender system cannot be judged good or bad until the users themselves decide whether the recommended items are useful to them. Below we present an article that introduces two different types of recommender systems, along with other background on recommender systems\cite{recommder:systems}.
\subsubsection{Summary about the article}
In this article, the authors aim to provide useful, general ideas about modern recommender systems.
There are two main types of recommender systems, designed from different perspectives: \textbf{Content-based Filtering} and \textbf{Collaborative Filtering}. Both are state-of-the-art methods and are widely used in social applications and academic research.

In \cite{recommder:systems}, the author argues that there is no single robust recommender system, but that we can construct a sound learning system for users so that the results become more accurate. In collaborative filtering systems we can use a weight table to record each user's habits; this is a neighborhood-based method. As the habit data grow, we can use a similarity formula to compare a product with the other products the user has rated.
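The paper does not fix a particular similarity formula; a common choice is cosine similarity between rating vectors. A minimal sketch, assuming ratings are stored as dictionaries (an illustrative representation of our own, not the paper's):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two rating dicts {item_id: rating}.
    Only co-rated items contribute to the dot product."""
    common = set(a) & set(b)
    dot = sum(a[i] * b[i] for i in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    if na == 0 or nb == 0:
        return 0.0
    return dot / (na * nb)
```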

The author presents a generalized algorithm to produce predictions for the active user. The algorithm proceeds in the following steps:
\begin{enumerate}
\item Assign a weight to all users with respect to similarity with the active user.
\item Select \textit{k} users that have the highest similarity with the active user - commonly called the \emph{neighborhood}.
\item Compute a prediction from a weighted combination of the selected neighbors' ratings.
\end{enumerate}
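The three steps above can be sketched directly in code. A minimal illustration, with data-structure names of our own choosing; in practice the similarity weights would come from a measure such as Pearson correlation or cosine similarity:

```python
def predict_rating(active_sims, neighbor_ratings, item, k=3):
    """Predict the active user's rating for `item` as a weighted
    combination of the k most similar users' ratings.
    active_sims: {user_id: similarity to the active user}
    neighbor_ratings: {user_id: {item_id: rating}}"""
    # Steps 1-2: keep the k users most similar to the active user
    # who have actually rated the item (the "neighborhood").
    rated = [(u, s) for u, s in active_sims.items()
             if item in neighbor_ratings.get(u, {})]
    neighborhood = sorted(rated, key=lambda us: us[1], reverse=True)[:k]
    # Step 3: weighted combination of the neighbors' ratings.
    num = sum(s * neighbor_ratings[u][item] for u, s in neighborhood)
    den = sum(abs(s) for _, s in neighborhood)
    return num / den if den else 0.0
```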

Nowadays there are also many model-based methods for computing a user's preference; these are also collaborative filtering methods. Model-based methods assume that ratings are not simply random but can be predicted from a few latent factors. We can construct suitable models that include such factors to predict a user's preference.
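A classic instance of this latent-factor idea is matrix factorization trained by stochastic gradient descent. The following toy sketch (all hyperparameters are illustrative, not from the cited paper) learns user and item vectors whose dot product approximates the observed ratings:

```python
import random

def factorize(ratings, n_users, n_items, k=2, lr=0.02, reg=0.02, epochs=500):
    """Toy latent-factor model: learn user vectors P and item vectors Q
    so that dot(P[u], Q[i]) approximates the observed rating r.
    ratings: list of (user, item, rating) triples."""
    random.seed(0)
    P = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(P[u][f] * Q[i][f] for f in range(k))
            err = r - pred
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                # Gradient step on squared error with L2 regularization.
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q
```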

The author also summarizes content-based filtering. In this approach we consider the characteristics of individual users or items, for example recommending items to a user based on their genre.

\subsubsection{Useful information retrieved from the article}
This article is a summary of recent recommender systems, so our article recommendation system could adopt either of the two methods. However, after considering both, we know that there is no single best system with the most accurate predictions. Our task is therefore to use a hybrid system to predict the most suitable articles for a given user.
\subsection{Content-based filtering}
\subsubsection{Content-based filtering Introduction}
Content-based filtering is the most intuitive method for building our article recommender. Based on the training data that users have tagged, we can
easily score the other articles, take the five with the highest scores, and recommend them to the user. In our experiment, we use KNN to implement
content-based filtering.

\subsubsection{Concrete realization}
We now describe our concrete realization.

\textbf{Preparation.} We start from an item-item similarity matrix ($d_{ij}$ indicates the similarity of documents $i$ and $j$). Because the matrix is too big to fit into memory, we break it into 16980 independent files, where file $i$ lists the similarity of document $i$ with every other document. To retrieve the $K$ most similar documents quickly, each sequence is sorted in descending order. Next, we build the user-item matrix ($U_{ij}=1$ indicates that user $i$ likes document $j$) by reading ``user-info-train.csv''.
\begin{figure}[ht]
\centering
\includegraphics[width=2.5in]{images/similarity.png}
\caption{\label{fig:similarity}Example of a similarity file}
\end{figure}

\textbf{Score for candidate documents.}
Given a candidate document $i$ that the user may or may not like, we look up its $K$ most similar documents in the ``similarity'' files obtained above. Each of these documents $j$ has a similarity with document $i$, denoted $S_{ij}$. From the user-item matrix we know whether the user likes each of the $K$ documents; let $like_{uj}$ indicate whether user $u$ likes document $j$. The score that user $u$ gives document $i$ is then
\[Score(u,i) = \sum_{j=1}^{K}like_{uj}\cdot S_{ij}\]
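This score takes only a few lines to compute. A minimal sketch, with data-structure names of our own choosing:

```python
def score(user_likes, neighbors):
    """Score a candidate document i for one user:
    Score(u, i) = sum over the K nearest neighbors j of like_uj * S_ij.
    user_likes: set of document ids the user likes (like_uj = 1)
    neighbors:  list of (doc_id, similarity) pairs, the K documents
                most similar to the candidate."""
    return sum(sim for doc, sim in neighbors if doc in user_likes)
```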

\textbf{Output scores.} Once every candidate file in the test data has been scored, we find the highest score of user $u$ and
normalize the scores so that the maximum is 1. The score list is written to ``filescore.txt'' and can later be combined with the collaborative filtering results.
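The normalization step is a simple rescaling; a sketch under the assumption that scores are non-negative:

```python
def normalize_scores(scores):
    """Scale one user's candidate scores so the maximum becomes 1.
    scores: {doc_id: raw score}; returns a new dict."""
    top = max(scores.values(), default=0.0)
    if top == 0:
        return dict(scores)
    return {doc: s / top for doc, s in scores.items()}
```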

\begin{figure}[ht]
\centering
\includegraphics[width=2.5in]{images/filescore.png}
\caption{\label{fig:filescore}The filescore.txt output file}
\end{figure}

\subsubsection{Tests for better results}
\textbf{The choice of SVD dimension and $K$.}
The SVD dimension offers many choices; to find a better result, we test dimensions 300 and 400.
The number of nearest neighbors $K$ also affects the result, so we test $K = 10$, $K = 20$, $K = 50$ and $K = 100$.
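The dimensionality reduction itself is a truncated SVD of the term-document matrix (the classic LSA step). A sketch of the idea using NumPy, not our exact preprocessing pipeline:

```python
import numpy as np

def reduce_dimensions(term_doc, dim):
    """Project documents into a `dim`-dimensional latent space via
    truncated SVD; rows of the result are the reduced document
    vectors."""
    # Full thin SVD: term_doc = U * diag(s) * Vt
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    # Keep the top `dim` singular directions; each column of Vt
    # corresponds to a document, so documents become dim-vectors.
    return (np.diag(s[:dim]) @ Vt[:dim, :]).T
```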

Before testing, we build our own test data from the training data (although it is rather small), and we evaluate with our AP@5 metric.
The test results are below.
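Our AP@5 metric averages the precision at each rank (up to five) where a liked document appears; the course's exact definition may differ slightly, so this is a common formulation rather than the official one:

```python
def average_precision_at_5(recommended, relevant):
    """AP@5: average of the precision values at each rank (up to 5)
    where a relevant document appears in the recommendation list.
    recommended: ranked list of doc ids; relevant: set of liked ids."""
    hits, precision_sum = 0, 0.0
    for rank, doc in enumerate(recommended[:5], start=1):
        if doc in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / min(len(relevant), 5) if relevant else 0.0
```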

\begin{figure}[ht]
\centering
\includegraphics[width=2.5in]{images/300d.png}
\caption{\label{fig:300d}Test results for 300 dimensions}
\end{figure}

\begin{figure}[ht]
\centering
\includegraphics[width=2.5in]{images/400d.png}
\caption{\label{fig:400d}Test results for 400 dimensions}
\end{figure}
Based on the test results, we conclude that dimension 300 is the better choice.
With $K = 50$ and $K = 100$ we obtain higher AP@5 values. Considering the running time, and the fact that
$K = 50$ beats $K = 100$ in the 400-dimension test, we finally choose $K = 50$.
\subsubsection{Conclusion}
Because we implemented the whole content-based algorithm ourselves, its efficiency is lower than that of
some ready-made tools. The long running time left us no time to test more parameter settings.

In class, our content-based method alone achieved an AP@5 value of $0.22$, which is not a bad result.$\smiley$


\bibliographystyle{abbrv}
\bibliography{sigproc}
\end{document}
