\documentclass{acm_proc_article-sp}

\usepackage{graphicx}

%\usepackage{subfig}

\begin{document}

\title{Community-Based Recommendations}
\subtitle{Project Report for Social Media Analysis Course}

\numberofauthors{2} 
\author{
%Author 1
\alignauthor
Shaghayegh (Sherry) Sahebi\\
       \affaddr{University of Pittsburgh}\\
       \affaddr{Pittsburgh, PA}\\
       \email{sahebi@cs.pitt.edu}
%Author 2
\alignauthor
Daniel Mills\\
       \affaddr{Carnegie Mellon University}\\
       \affaddr{Pittsburgh, PA}\\
       \email{dmills17@gmail.com}
}

\maketitle
\begin{abstract}
Due to the rapid growth of the internet, the ``information overload'' problem has become an increasingly serious issue for users. Recommendation systems have been developed as one possible solution to this problem. One problem that recommendation systems suffer from is the ``cold start'' problem: when there is relatively little information about a user, the system cannot draw the inferences needed to recommend items to that user. At the same time, online social networks have recently emerged and grown rapidly. Homophily suggests that we can take social network information into account in order to find similarities between users, and we aim to use this information to overcome the ``cold start'' problem. The connections among people in social networks can have different dimensions: some may be friends with each other, some might have similar tastes, and some may have rated content similarly. These different dimensions can be used to detect communities among people. In this study, we propose that communities can capture the similarities along different dimensions of a social network and, accordingly, that they can help recommendation systems exploit these latent similarities and improve their results. To evaluate this claim, we run a latent community detection algorithm and use its output to recommend items to users, comparing the results with the traditional collaborative filtering approach.
\end{abstract}

\section{Introduction}

\section{Community Detection}
With the growth of social networking web sites, the number of subjects within these networks has been growing rapidly. Community detection in social media analysis helps us understand more of users' collective behavior. Community detection techniques try to find subgroups of subjects in which the amount of interaction within the group is greater than the interaction outside it. Multiple statistical and graph-based methods have recently been used for community detection, such as LDA, graph clustering approaches, and modularity-based methods.

While existing social networks consist of multiple types of subjects and multiple types of interactions among those subjects, most of these techniques focus on only one dimension of these interactions. As an example of a multi-dimensional social network, consider blog networks, in which people can connect to each other, comment on each other's posts, link to other posts from their own blog posts, or blog about similar subjects. If we consider just one of these dimensions, for example the connection network, we will lose important information about the other dimensions, and the resulting communities will represent only a part of the existing community structure.



\subsection{Principal Modularity Maximization}
Modularity-based methods, developed recently, measure the strength of a community partition of a real-world network by taking into account the degree distribution of its nodes. The modularity measure quantifies how far the within-group interaction of the found communities deviates from that of a uniform random graph with the same degree distribution. It is defined as in Equation \ref{eq:modularity}, in which $S$ is a matrix indicating community membership ($S_{ij} = 1$ if node $i$ belongs to community $j$ and 0 otherwise) and $B$ is the modularity matrix defined in Equation \ref{eq:B}, where $A$ is the adjacency matrix, $d$ is the vector of node degrees, and $m$ is the number of edges. The goal in modularity-based methods is to maximize $Q$. If we allow $S$ to be continuous, the optimal $S$ can be computed as the top $k$ eigenvectors of the modularity matrix $B$.

\begin{equation}
\label{eq:modularity}
Q = \frac{1}{2m}Tr\left(S^{T}BS\right)
\end{equation}

\begin{equation}
\label{eq:B}
B = A - \frac{dd^{T}}{2m}
\end{equation}
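For concreteness, the two formulas above can be computed directly. The following sketch is our own illustration (the report's actual code was written in Matlab), and the function names are ours:

```python
import numpy as np

def modularity(A, S):
    """Q = Tr(S^T B S) / (2m) for adjacency matrix A and indicator matrix S."""
    d = A.sum(axis=1)                     # degree vector d
    m = d.sum() / 2.0                     # number of edges m
    B = A - np.outer(d, d) / (2.0 * m)    # modularity matrix B = A - dd^T / 2m
    return np.trace(S.T @ B @ S) / (2.0 * m)

def top_k_eigenvectors(A, k):
    """Continuous relaxation: the optimal S spans the top-k eigenvectors of B."""
    d = A.sum(axis=1)
    m = d.sum() / 2.0
    B = A - np.outer(d, d) / (2.0 * m)
    vals, vecs = np.linalg.eigh(B)        # B is symmetric: ascending eigenvalues
    return vecs[:, np.argsort(vals)[::-1][:k]]
```

For example, on a graph of two triangles joined by a single edge, the natural two-community split gives $Q = 5/14 \approx 0.357$.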

As said before, networks can consist of multiple dimensions. Principal Modularity Maximization (PMM) \cite{Tang_uncoveringgroups} is a modularity-based method for finding hidden communities in multi-dimensional networks. The idea is to integrate the network information of multiple dimensions in order to discover cross-dimension group structures. This method extracts the most important structural features of each dimension from that dimension's modularity matrix, concatenates them across dimensions, and optimizes over the result.


The method is a two-phase strategy for identifying the hidden structures shared across dimensions, consisting of two steps: structural feature extraction and cross-dimension integration. First, structural features are extracted from each dimension of the network via modularity analysis; then the features are integrated to find a community structure among the actors. The assumption behind the cross-dimension integration is that the community structure of all of the dimensions in the graph should be similar. In the first step, structural features are defined as network-derived dimensions that are indicative of community structure; they can be computed as a low-dimensional embedding using the top eigenvectors of the modularity matrix. In the cross-dimension integration step, the method builds on the expectation that the extracted structural features should be similar across dimensions, from which it follows that minimizing the difference among the features of the various dimensions is equivalent to performing PCA on them.

In summary, this method first extracts structural features from each dimension of the network via modularity maximization; then PCA is applied to the concatenated features to select the top eigenvectors. Afterwards, a simple clustering algorithm such as k-means is used to group all the actors based on these features.
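The pipeline just summarized can be sketched as follows. This is our own simplified illustration in plain NumPy (with a naive k-means using deterministic farthest-point initialization), not the authors' implementation:

```python
import numpy as np

def structural_features(A, k):
    """Top-k eigenvectors of the modularity matrix of one network dimension."""
    d = A.sum(axis=1)
    m = d.sum() / 2.0
    B = A - np.outer(d, d) / (2.0 * m)
    vals, vecs = np.linalg.eigh(B)
    return vecs[:, np.argsort(vals)[::-1][:k]]

def pmm(adjacencies, k_features, k_communities, n_iter=50):
    """PMM sketch: per-dimension features -> concatenate -> PCA -> k-means."""
    # Phase 1: structural feature extraction from each dimension.
    X = np.hstack([structural_features(A, k_features) for A in adjacencies])
    # Phase 2: cross-dimension integration via PCA on the concatenated features.
    X = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Z = X @ Vt[:k_communities].T          # project onto top principal components
    # Cluster the actors in the shared low-dimensional space (naive k-means).
    centers = [Z[0]]
    for _ in range(1, k_communities):     # farthest-point initialization
        dists = np.min([((Z - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(Z[np.argmax(dists)])
    centers = np.array(centers)
    for _ in range(n_iter):               # Lloyd iterations
        labels = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        for c in range(k_communities):
            if (labels == c).any():
                centers[c] = Z[labels == c].mean(axis=0)
    return labels
```

On a toy network whose dimensions share two clear blocks, this recovers one community per block.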

\subsection{Use of Principal Modularity Maximization in Collaborative Filtering}
To apply PMM to the problem at hand, we need to define the various network dimensions.  The first is obvious: we can simply use the friendship network itself.  Then, we need a method for turning the book ratings and movie ratings into graphs.  To do this, we define a function that takes two users and decides whether or not to add an edge between them, and if so, what weight to put on that edge.  Let $r_i$ be the rating vector of user $i$,  let $\sigma_x$ be the standard deviation of the non-zero elements of a vector $x$, and let $\mbox{covar}(x,y)$ be the covariance over the positions where both $x$ and $y$ are non-zero.  Then, the similarity function is
\begin{equation}
s(r_i,r_j) = \frac{\mbox{covar}(r_i,r_j)}{\sigma_{r_i}\sigma_{r_j}}
\end{equation}
provided that $r_i$ and $r_j$ overlap in at least 3 positions, and 0 otherwise.  A similarity score of 0 indicates that no edge should be added.
This function is a modified version of Pearson's Correlation Coefficient that takes into account the standard deviation of all of a user's ratings instead of just the standard deviation on the overlap with another user.  As such it is no longer constrained to the interval $[-1,1]$ and does not have a direct interpretation, but it better represents the similarity between users.  We can then use this function to create graphs from the book and movie ratings.
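The similarity function above can be implemented as a short routine; the sketch below is our own illustration, with ratings stored as vectors in which 0 means ``unrated'':

```python
import numpy as np

def similarity(r_i, r_j, min_overlap=3):
    """Modified Pearson correlation: covariance on the co-rated positions,
    normalized by each user's standard deviation over ALL of their ratings."""
    overlap = (r_i != 0) & (r_j != 0)
    if overlap.sum() < min_overlap:
        return 0.0                       # too little evidence: no edge is added
    x, y = r_i[overlap], r_j[overlap]
    covar = np.mean((x - x.mean()) * (y - y.mean()))
    sigma_i = r_i[r_i != 0].std()        # std over all of user i's ratings
    sigma_j = r_j[r_j != 0].std()
    if sigma_i == 0 or sigma_j == 0:
        return 0.0                       # a constant rater carries no signal
    return covar / (sigma_i * sigma_j)
```

Because the normalization uses each user's full rating history rather than only the overlap, the value can fall outside $[-1, 1]$, exactly as noted above.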

Once we have found latent communities in the data using PMM, we need to use this information to help with the recommendation of content to users.  Based on our finding that friendship leads to a greater degree of similarity in ratings, as well as in the choice of items to rate, we hypothesize that users within the same latent community will be better able than the user population at large to inform predictions about a user.
Thus, our method is as follows: 
\begin{itemize}
\item We first run PMM on the friendship, books, and movies graphs to obtain the latent communities.
\item Then, on each community, we run collaborative filtering just using the ratings from that community.
\item Finally, we aggregate the results.
\end{itemize}
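The per-community collaborative filtering step can be sketched as follows. This is our own minimal illustration of mean-centered, user-based CF restricted to one community, not the report's Matlab implementation:

```python
import numpy as np

def predict_in_community(R, labels, user, item):
    """Predict R[user, item] using only users in the same latent community.

    R : ratings matrix (0 = unrated); labels : community id per user.
    Mean-centered user-based CF with Pearson weights (a minimal sketch).
    """
    peers = np.where((labels == labels[user]) & (np.arange(len(R)) != user))[0]
    user_mean = R[user][R[user] != 0].mean()
    num = den = 0.0
    for p in peers:
        if R[p, item] == 0:
            continue                      # peer has not rated this item
        both = (R[user] != 0) & (R[p] != 0)
        if both.sum() < 2:
            continue                      # not enough co-rated items
        x, y = R[user][both], R[p][both]
        sx, sy = x.std(), y.std()
        if sx == 0 or sy == 0:
            continue
        w = np.mean((x - x.mean()) * (y - y.mean())) / (sx * sy)  # Pearson
        p_mean = R[p][R[p] != 0].mean()
        num += w * (R[p, item] - p_mean)  # weighted, mean-centered deviation
        den += abs(w)
    return user_mean if den == 0 else user_mean + num / den
```

Running this per community and collecting the predictions corresponds to the aggregation step above.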

\section{Cold Start Problem}
The ``cold start'' problem \cite{Schein02methodsand} occurs when there is a lack of information about some users or some items.  Usage-based recommendation systems work based on the similarity of a user's tastes to those of other users, while content-based recommendation systems take into account the similarity of the items a user has consumed to other items the system already has knowledge about. When a user is a newcomer to the system, or he/she has not yet rated enough items, there is not enough evidence for the recommendation system to build a user profile based on his/her taste, so the user's profile cannot be compared to those of other users or items. As a result, the recommendation system cannot recommend any items to such a user. 

Regarding the cold start problem for items: when an item is new in a usage-based recommendation system, no users have rated it yet, so it does not appear in any user profile. Since in collaborative filtering \cite{Breese98empiricalanalysis} the items consumed in similar users' profiles are recommended to the user, this new item cannot be considered for recommendation to anyone. 

\section{Dataset}
The dataset used in this study is based on an online Russian social network called \textit{Imhonet}\footnote{www.imhonet.ru}. This web site contains a multifaceted social network, including friendships, comments, and ratings of pieces of content such as movies and books. In this study, we focus on the connections between users of this web site and the ratings they gave to books and movies. The friendship network contains approximately 240,000 connections between around 65,000 users, and the average number of friends per user is about 3.5. Additionally, the dataset contains about 16 million ratings on about 50,000 movies, and more than 11.5 million user ratings on about 195,000 available books. 

Figure \ref{fig:movieRatepUser} shows the number of movie ratings per user, and figure \ref{fig:ratepMovie} shows the number of ratings for each movie. As we can see in these figures, a small number of movies have many ratings, while many movies have only a few ratings. These figures are cut off, showing only users who rated fewer than 50 movies and movies rated by fewer than 50 users, respectively.  Figure \ref{fig:movieRatepUser} looks like a combination of two power-law distributions: there is a peak in the number of users with around 20 movie ratings. That is because the \textit{Imhonet} web site asked its users to rate at least 20 movies in order to build more complete user profiles; evidently, some users followed this instruction and rated at least 20 movies. The right-hand plots show the same data on a log-log scale: figure \ref{fig:ratepMovie} exhibits a power-law distribution, but figure \ref{fig:movieRatepUser} does not.

\begin{figure}[ht]
\centering
\begin{center}$
\begin{array}{cc}
\includegraphics[width= 1.7in]{movies_out.png}&
\includegraphics[width= 1.7in]{movies_out_ll.png}
\end{array}$
\end{center}
\caption{Left: Histogram of number of ratings per movie, Right: Log-log plot of number of ratings per movie.}
\label{fig:ratepMovie}
\end{figure}

\begin{figure}[ht]
\begin{center}$
\begin{array}{cc}
\includegraphics[width= 1.7in]{movies_in.png}&
\includegraphics[width= 1.7in]{movies_in_ll.png}
\end{array}$
\end{center}
\caption{Left: Histogram of number of movie ratings per user, Right: Log-log plot of number of movie ratings per user.}
\label{fig:movieRatepUser}
\end{figure}


\begin{figure}[ht]
\begin{center}$
\begin{array}{cc}
\includegraphics[width= 1.7in]{books_out.png}&
\includegraphics[width=1.7in]{books_out_ll.png}
\end{array}$
\end{center}
\caption{Left: Histogram of number of ratings per book, Right: Log-log plot of number of ratings per book.}
\label{fig:ratepBook}
\end{figure}

\begin{figure}[ht]
\begin{center}$
\begin{array}{cc}
\includegraphics[width=1.7in]{books_in.png}&
\includegraphics[width=1.7in]{books_in_ll.png}
\end{array}$
\end{center}
\caption{Left: Histogram of number of book ratings per user, Right: Log-log plot of number of book ratings per user.}
\label{fig:bookRatepUser}
\end{figure}


\begin{figure}[ht]
\begin{center}$
\begin{array}{cc}
\includegraphics[width=1.7in]{friends_in.png}&
\includegraphics[width=1.7in]{friends_out.png}
\end{array}$
\end{center}
\caption{Left: Number of users who are friends with each user, Right: Number of friends per user.}
\label{fig:friends}
\end{figure}

%\begin{figure}[ht]
%\centering
%\subfloat[Part 1][Subfigure 1]{\includegraphics{friends_in.png}\label{fig:friends_a}}
%\subfloat[Part 2][Subfigure 2]{\includegraphics{friends_out.png}\label{fig:friends_b}}
%\caption{a) Number of Users who Are Friends with each User, b) Number of Friends each User Is Friend with}
%\label{fig:friends}
%\end{figure}

We can see the same pattern for book ratings, as shown in figures \ref{fig:ratepBook} and \ref{fig:bookRatepUser}. Many users rate around 20 books, due to the fact that the web site requested members to provide at least twenty ratings.

Friendship connections between users are shown in figure \ref{fig:friends}. Connections between people are directional in this dataset, so the number of users who are friends with a given user might differ from the number of users that user is a friend of. 
The log-log plots of the histograms of ratings confirm that they do indeed follow a power-law distribution. 

To reduce the volume of the data, we used only the ratings of users who had at least one connection in the dataset. The resulting dataset contains about 9 million movie ratings by 48,000 users on 50,000 movies and 1.2 million book ratings by 13,000 users on 140,000 books.  

\section{Experiments}
\subsection{First Experiment Setup: Collaborative Filtering on the Whole Data}
For recommending books and movies to users, we use memory-based collaborative filtering. We used 10\% of the data for testing and the other 90\% for training. The similarity measure used in this study is the Pearson correlation \cite{Resnick:1994:GOA:192844.192905}. Since the movies dataset is much bigger than the books dataset and thus took much longer to run, we performed 5-fold cross-validation on movies and 10-fold cross-validation on books. The results are shown in figure \ref{fig:cfResults}. As we can see in this figure, even though the volume of movie ratings is much greater than that of book ratings, the nDCG results for books are significantly better than those for movies, while the variance of the results for movies is smaller than the variance of the nDCG results for books. This suggests that most of the users who rate books have very predictable taste, but there are some users with unusual tastes relative to the others, leading to the higher variance. We can also conclude that users' movie-rating behavior is less predictable overall, but roughly equally hard to predict across users.
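For reference, the nDCG-at-$K$ metric used in this evaluation can be computed as below. This is our own sketch of one common variant (using raw ratings as gains); the report does not specify the exact gain function used:

```python
import numpy as np

def ndcg_at_k(predicted_scores, true_ratings, k):
    """nDCG@k: rank items by predicted score; gain = the item's true rating."""
    order = np.argsort(predicted_scores)[::-1][:k]        # predicted ranking
    gains = np.asarray(true_ratings, dtype=float)[order]
    discounts = 1.0 / np.log2(np.arange(2, len(order) + 2))
    dcg = (gains * discounts).sum()
    ideal = np.sort(np.asarray(true_ratings, dtype=float))[::-1][:k]
    idcg = (ideal * discounts[:len(ideal)]).sum()          # best possible DCG
    return dcg / idcg if idcg > 0 else 0.0
```

A perfect ranking yields an nDCG of 1.0; any misordering of differently rated items yields a value below 1.0.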

\begin{figure}[ht]
\centering
\includegraphics[width=3.0in]{NDCG_CF_Book_Movie.png}
\caption{NDCG at K for Movie and Book Recommendations using Collaborative Filtering}
\label{fig:cfResults}
\end{figure}

\subsection{Second Experiment Setup: Community Detection on Books and Connection Network using PMM}
Before using communities to test possible improvements in recommendation quality, we ran the PMM algorithm on the data to see which communities it could detect. Since the number of movie ratings is very large and resource restrictions prevented us from loading all of the data into memory, we ran the PMM algorithm to find communities based on only two dimensions of the network: friendship and co-ratings on books. We tried to find 50 communities among the users, using the 2000 most important structural features of the data. The sizes of the resulting communities are shown in figure \ref{fig:pmm-all-sizes}. On the left side, we can see the proportion of users in each community. One community includes around 90\% of the users, the second contains around 90\% of the remaining users, and so on. The distribution is thus very skewed, with the largest community containing 90\% of the users and the smallest community containing only one user. On the right side of this figure, we can see a bar chart of the size of each community, starting from all of the communities (1 to 50) and gradually removing the larger communities (the last bar is for communities 17 to 50). We visualized the resulting community structure, excluding the biggest community, in figure \ref{fig:bigNet}\footnote{This graph is generated using Gephi (http://gephi.org/)}. Here we can see that there are still many friendship connections outside the communities; in fact, there are more inter-community friendships than intra-community friendships. This shows that the co-rating and friendship activities do not have the same structure in our graph. To understand why, we compared the average correlation of ratings between users and their friends who have at least three books in common to the average correlation of ratings between random users who have at least three books in common. The difference is quite large: 0.07\% compared to 0.001\%. 
The fraction of friend pairs who have rated at least three books in common is 0.026, whereas for two arbitrary users this fraction is only 0.013. Thus, adding the information in the friendship network to a recommendation system should help in finding interesting books for users, as friends are both more likely to rate the same items and more likely to give similar ratings.
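The friend-versus-random comparison just described can be sketched as follows; this is our own illustration, with ratings again stored as vectors in which 0 means ``unrated'':

```python
import numpy as np

def overlap_correlation(R, i, j, min_common=3):
    """Pearson correlation of two users' ratings on their co-rated items,
    or None if they have fewer than min_common items in common."""
    both = (R[i] != 0) & (R[j] != 0)
    if both.sum() < min_common:
        return None
    x, y = R[i][both], R[j][both]
    if x.std() == 0 or y.std() == 0:
        return None
    return float(np.corrcoef(x, y)[0, 1])

def mean_pair_correlation(R, pairs, min_common=3):
    """Average correlation over the given user pairs, skipping sparse pairs."""
    vals = []
    for i, j in pairs:
        c = overlap_correlation(R, i, j, min_common)
        if c is not None:
            vals.append(c)
    return sum(vals) / len(vals) if vals else 0.0
```

Evaluating `mean_pair_correlation` once over friend pairs and once over an equal number of random pairs reproduces the comparison above.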
\begin{figure}[ht]
\begin{center}$
\begin{array}{cc}
\includegraphics[width=1.5in]{pmm_on_book_conn_net_pie.PNG}&
\includegraphics[width=1.5in]{pmm_on_book_conn_reduced_net_bar_all.PNG}
\end{array}$
\end{center}
\caption{Left: Size of Detected Communities by PMM Algorithm, Right: The same sizes as bar charts, gradually omitting the largest communities}
\label{fig:pmm-all-sizes}
\end{figure}

\begin{figure}[ht]
\centering
\includegraphics[width=3.0in]{pmm_on_book_conn_reduced_net.PNG}
\caption{Network for 49 Detected Communities of Book Co-Rating and Friendships showing Friendship Edges}
\label{fig:bigNet}
\end{figure}

\subsection{Third Experiment Setup: Community Detection on Books, Movies and Connection Network using PMM}
Due to the high volume of movie ratings and our resource restrictions, we could not load the entire movie co-rating network into memory; as a result, we performed PMM on only books and connections in the previous section. In this experimental setup, we chose 10,000 users from the connection data and pruned the books and movies data accordingly. We ran PMM on this set of users to find 10 communities. The results are shown in figures \ref{fig:pmm_all_10000} and \ref{fig:pmm_net_10000}. Around 50\% of the users are placed into one community, and the other community sizes are more or less the same. 
Unfortunately, collaborative filtering did not work properly on this data, so we do not have results for this experiment.

\begin{figure}[ht]
\centering
\includegraphics[width=3.0in]{pmm_on_book_conn_movie_10000_pie.PNG}
\caption{Size of Detected Communities by PMM Algorithm on Books, Movies, and Connections}
\label{fig:pmm_all_10000}
\end{figure}

\begin{figure}[ht]
\centering
\includegraphics[width=3.0in]{PMM_BOOK_MOVIE_CON_2000_NET.png}
\caption{Network for 10 Detected Communities of Book and Movie Co-Rating and Friendships showing Friendship Edges}
\label{fig:pmm_net_10000}
\end{figure}

\section{Conclusion and Future Work}
Overall, we were hampered by a lack of computing resources.  The vanilla collaborative filtering algorithm scales fairly well to large datasets, but PMM quickly becomes intractable, primarily due to the size of the input graphs and the time required to perform k-means on a large number of large vectors.  Further, we implemented the majority of our code in Matlab, which is not ideal for scaling to large datasets.  Future work will include finding ways of running the same experiments on larger data.  From the results we have, this dataset seems well suited for use in recommendation systems, and the friendship network seems like a very promising source of information.  We do not have conclusive evidence that it helps, but it seems likely.
\bibliographystyle{abbrv}
\bibliography{report}
\end{document}
