% This is "sig-alternate.tex" V1.9 April 2009
% This file should be compiled with V2.4 of "sig-alternate.cls" April 2009
%
% This example file demonstrates the use of the 'sig-alternate.cls'
% V2.4 LaTeX2e document class file. It is for those submitting
% articles to ACM Conference Proceedings WHO DO NOT WISH TO
% STRICTLY ADHERE TO THE SIGS (PUBS-BOARD-ENDORSED) STYLE.
% The 'sig-alternate.cls' file will produce a similar-looking,
% albeit, 'tighter' paper resulting in, invariably, fewer pages.
%
% ----------------------------------------------------------------------------------------------------------------
% This .tex file (and associated .cls V2.4) produces:
%       1) The Permission Statement
%       2) The Conference (location) Info information
%       3) The Copyright Line with ACM data
%       4) NO page numbers
%
% as against the acm_proc_article-sp.cls file which
% DOES NOT produce 1) thru' 3) above.
%
% Using 'sig-alternate.cls' you have control, however, from within
% the source .tex file, over both the CopyrightYear
% (defaulted to 200X) and the ACM Copyright Data
% (defaulted to X-XXXXX-XX-X/XX/XX).
% e.g.
% \CopyrightYear{2007} will cause 2007 to appear in the copyright line.
% \crdata{0-12345-67-8/90/12} will cause 0-12345-67-8/90/12 to appear in the copyright line.
%
% ---------------------------------------------------------------------------------------------------------------
% This .tex source is an example which *does* use
% the .bib file (from which the .bbl file % is produced).
% REMEMBER HOWEVER: After having produced the .bbl file,
% and prior to final submission, you *NEED* to 'insert'
% your .bbl file into your source .tex file so as to provide
% ONE 'self-contained' source file.
%
% ================= IF YOU HAVE QUESTIONS =======================
% Questions regarding the SIGS styles, SIGS policies and
% procedures, Conferences etc. should be sent to
% Adrienne Griscti (griscti@acm.org)
%
% Technical questions _only_ to
% Gerald Murray (murray@hq.acm.org)
% ===============================================================
%
% For tracking purposes - this is V1.9 - April 2009

\documentclass{sig-alternate}

\begin{document}
%
% --- Author Metadata here ---
\conferenceinfo{WOODSTOCK}{'97 El Paso, Texas USA}
%\CopyrightYear{2007} % Allows default copyright year (20XX) to be over-ridden - IF NEED BE.
%\crdata{0-12345-67-8/90/01}  % Allows default copyright data (0-89791-88-6/97/05) to be over-ridden - IF NEED BE.
% --- End of Author Metadata ---

\title{Community-Based Recommendations: A Solution to the Cold Start Problem}
%
% You need the command \numberofauthors to handle the 'placement
% and alignment' of the authors beneath the title.
%
% For aesthetic reasons, we recommend 'three authors at a time'
% i.e. three 'name/affiliation blocks' be placed beneath the title.
%
% NOTE: You are NOT restricted in how many 'rows' of
% "name/affiliations" may appear. We just ask that you restrict
% the number of 'columns' to three.
%
% Because of the available 'opening page real-estate'
% we ask you to refrain from putting more than six authors
% (two rows with three columns) beneath the article title.
% More than six makes the first-page appear very cluttered indeed.
%
% Use the \alignauthor commands to handle the names
% and affiliations for an 'aesthetic maximum' of six authors.
% Add names, affiliations, addresses for
% the seventh etc. author(s) as the argument for the
% \additionalauthors command.
% These 'additional authors' will be output/set for you
% without further effort on your part as the last section in
% the body of your article BEFORE References or any Appendices.

\numberofauthors{2} % two authors in this paper, both appearing on the first page.
%
%
\author{
% You can go ahead and credit any number of authors here,
% e.g. one 'row of three' or two rows (consisting of one row of three
% and a second row of one, two or three).
%
% The command \alignauthor (no curly braces needed) should
% precede each author name, affiliation/snail-mail address and
% e-mail address. Additionally, tag each line of
% affiliation/address with \affaddr, and tag the
% e-mail address with \email.
%
% 1st. author
\alignauthor
Shaghayegh Sahebi\\
       \affaddr{University of Pittsburgh}\\
       \affaddr{Pittsburgh, PA}\\
       \email{sahebi@cs.pitt.edu}
% 2nd. author
\alignauthor
Daniel Mills\\
       \affaddr{Carnegie Mellon University}\\
       \affaddr{Pittsburgh, PA}\\
       \email{dmills17@gmail.com}
}
% There's nothing stopping you putting the seventh, eighth, etc.
% author on the opening page (as the 'third row') but we ask,
% for aesthetic reasons that you place these 'additional authors'
% in the \additional authors block, viz.
\date{16 May 2011}
% Just remember to make sure that the TOTAL number of authors
% is the number that will appear on the first page PLUS the
% number that will appear in the \additionalauthors section.

\maketitle
\begin{abstract}
Due to the rapid growth of the internet, the ``information overload'' problem has become more and more of an issue for users. Recommendation systems have been developed as one possible solution to this problem. However, recommendation systems suffer from the ``cold start'' problem: when relatively little information is available about a user, the system cannot draw the inferences needed to recommend items to that user. At the same time, online social networks have recently been emerging and growing rapidly. Homophily suggests that we can take social network information into account in order to find similarities between users. The connections among people in social networks can have different dimensions: some may be friends with each other, some might have similar tastes, and some may have rated content similarly. These different dimensions can be used to detect communities among people. In this study, we use communities to capture the similarities along different dimensions of a social network and, accordingly, help recommendation systems work based on these latent similarities. We also use this multi-dimensional information to overcome the ``cold start'' problem.
\end{abstract}

% A category with the (minimum) three required fields
\category{H.3.3} {Information Storage and Retrieval}{Information
Search and Retrieval}[information filtering]%A category including the fourth, optional field follows...
%\category{D.2.8}{Software Engineering}{Metrics}[complexity measures, performance measures]

%\terms{}

\keywords{Recommendation, Cold-Start, Community Detection, Social Media}

\section{Introduction}

\section{Community Detection}
With the growth of social network web sites, the number of subjects within these networks has been growing rapidly. Community detection in social media analysis \cite{Fortunato201075} helps us understand more about users' collective behavior. Community detection techniques try to find subgroups of subjects in which there is more interaction within the group than between the group and the rest of the network. Multiple statistical and graph-based methods have recently been used for community detection, such as Bayesian generative models \cite{Delong_Erickson_2008}, graph clustering approaches, hierarchical clustering, and modularity-based methods \cite{Fortunato201075}. 

While existing social networks consist of multiple types of subjects and multiple types of interaction among those subjects, most of these techniques focus on only one dimension of interaction. As an example of a multi-dimensional social network, consider blog networks, in which people can connect to each other, comment on each other's posts, link to other posts from their own, or blog about similar subjects. If we consider just one of these dimensions, for example the connection network, we lose important information about the other dimensions, and the resulting communities will represent only a part of the existing ones.

\subsection{Principal Modularity Maximization}
In this paper, we use a modularity-based community detection method for multi-dimensional networks presented by Tang et al. \cite{Tang_uncoveringgroups}. Modularity-based methods assess the strength of a community partition of a real-world network by taking into account the degree distribution of its nodes. The modularity measure quantifies how far the within-group interaction of the found communities deviates from that of a uniform random graph with the same degree distribution. It is defined by formula \ref{eq:modularity}, in which $S$ is a matrix indicating community membership ($S_{ij} = 1$ if node $i$ belongs to community $j$ and 0 otherwise) and $B$ is the modularity matrix defined in formula \ref{eq:B}. In formula \ref{eq:B}, which measures the deviation of the network's interactions from a random graph, $A$ represents the sparse interaction matrix between actors of the network, $d$ is the vector of node degrees, and $m$ is the total number of edges. The goal in modularity-based methods is to maximize $Q$. If we allow $S$ to be continuous, the optimal $S$ can be computed as the top $k$ eigenvectors of the modularity matrix $B$ \cite{Tang_uncoveringgroups}.

\begin{equation}
\label{eq:modularity}
Q = \frac{1}{2m}Tr\left(S^{T}BS\right)
\end{equation}

\begin{equation}
\label{eq:B}
B = A - \frac{dd^{T}}{2m}
\end{equation}
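To make the two formulas concrete, the following sketch (a hypothetical toy graph, not the paper's data) computes $B$, $Q$, and the relaxed solution with NumPy:

```python
import numpy as np

# Toy undirected graph: two triangles joined by one edge (a hypothetical
# example; any symmetric 0/1 adjacency matrix A works).
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

d = A.sum(axis=1)                    # node degrees
m = A.sum() / 2.0                    # total number of edges
B = A - np.outer(d, d) / (2 * m)    # modularity matrix, formula (2)

# Discrete membership matrix S: nodes 0-2 in one community, nodes 3-5 in the other.
S = np.zeros((6, 2))
S[:3, 0] = 1.0
S[3:, 1] = 1.0

Q = np.trace(S.T @ B @ S) / (2 * m)  # modularity, formula (1)

# Continuous relaxation: the optimal S is the top-k eigenvectors of B.
eigvals, eigvecs = np.linalg.eigh(B)
S_cont = eigvecs[:, np.argsort(eigvals)[::-1][:2]]
```

For this toy graph the two-triangle partition gives a clearly positive modularity, while the top eigenvectors of $B$ recover the same split in continuous form.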

As noted before, networks can consist of multiple dimensions. Principal Modularity Maximization (PMM) \cite{Tang_uncoveringgroups} is a modularity-based method for finding hidden communities in multi-dimensional networks. The idea is to integrate the network information of multiple dimensions in order to discover cross-dimension group structures. This method defines the modularity of a multi-dimensional social network over the concatenation of the most important structural features of each dimension and tries to optimize it.

The method is a two-phase strategy for identifying the hidden structures shared across dimensions. First, structural features are extracted from each dimension of the network via modularity analysis (structural feature extraction); then the features are integrated to find a community structure among actors (cross-dimension integration). The assumption behind cross-dimension integration is that the structures of all of the dimensions of the graph should be similar to each other. In the first step, structural features are defined as network-derived dimensions that are indicative of community structure; they can be computed as a low-dimensional embedding using the top eigenvectors of the modularity matrix. Cross-dimension integration is based on the expectation that the extracted structural features should be similar across dimensions, and minimizing the difference among the features of the various dimensions is equivalent to performing Principal Component Analysis (PCA) on them.

In summary, this method first extracts structural features from each dimension of the network via modularity maximization; then PCA is applied to the concatenated features to select the top eigenvectors. This yields a continuous community membership matrix $S$. To assign all the actors to discrete communities based on these features, a simple clustering algorithm such as k-means is run on the rows of $S$.
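The two-phase procedure can be sketched as follows; this is an illustrative NumPy implementation under our reading of the method, not the authors' code:

```python
import numpy as np

def structural_features(A, k):
    """Phase 1: top-k eigenvectors of the modularity matrix of one dimension."""
    d = A.sum(axis=1)
    m = A.sum() / 2.0
    B = A - np.outer(d, d) / (2 * m)
    eigvals, eigvecs = np.linalg.eigh(B)
    return eigvecs[:, np.argsort(eigvals)[::-1][:k]]

def pmm(dimensions, k):
    """Phase 2: concatenate per-dimension features and integrate them with PCA,
    returning a continuous community membership matrix S (one row per actor)."""
    X = np.hstack([structural_features(A, k) for A in dimensions])
    X = X - X.mean(axis=0)                 # center before PCA
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T                    # project onto top-k principal axes
```

Running a standard k-means on the rows of the returned $S$ then yields the final discrete communities.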


\subsection{Cold Start Problem and Using Community Detection in Recommendation Systems}
The ``cold start'' problem \cite{Schein02methodsand} arises when there is a lack of information about some users or some items. Usage-based recommendation systems work based on the similarity of a user's tastes to those of other users, while content-based recommendation systems take into account the similarity of the items a user has consumed to other items the system already has knowledge about. When a user is a newcomer to the system, or has not yet rated enough items, there is not enough evidence for the recommendation system to build a profile of the user's taste, so that profile cannot be compared to those of other users or items. As a result, the recommendation system cannot recommend any items to such a user. 

Regarding the cold start problem for items: when an item is new in a usage-based recommendation system, no users have rated it yet, so it does not appear in any user profile. Since collaborative filtering \cite{Breese98empiricalanalysis} recommends the items found in similar users' profiles, this new item cannot be recommended to anyone. 

In this paper, we concentrate on the cold start problem for new users. We propose that if a user is new in one system but has a history in another system with an overlapping userbase, we can use his/her external profile to recommend relevant items in the new system. As an example, consider a new user on YouTube whose Facebook profile we know. A comprehensive profile of the user can be built from the movies he/she posted, liked, or commented on, and this profile can be used to recommend relevant movies on YouTube to the same user. In this example, the recommended items are of the same type: movies. Another hypothesis is that a user's interest in specific items might reveal his/her interest in other items. This is the same hypothesis that underlies multi-dimensional network community detection: we expect multiple dimensions of a network to have a similar structure. For example, if a user has no data in the books section of a system but has a profile in the movies section, we can expect users similar to him/her in terms of movie ratings to have a similar taste in books as well; likewise, if two users are friends, we expect them to behave more similarly in the system. Utilizing user profiles in other dimensions to predict their interests in a new dimension can thus serve as a solution to the cold start problem, and community detection can provide us with a group of users similar to the target user across multiple dimensions. 

We can use this information in multiple ways, as suggested in the following. In traditional collaborative filtering, the predicted rating of an active user $a$ on an item $j$ is calculated as a weighted sum of similar users' ratings on the same item (formula \ref{eq:CFrate}). In this formula, $n$ is the number of similar users we would like to take into account, $\alpha$ is a normalizer, $v_{i,j}$ is the vote of user $i$ on item $j$, $\bar{v}_i$ is the average rating of user $i$, and $w(a,i)$ is the weight given to each of the $n$ similar users. 

\begin{equation}
\label{eq:CFrate}
p_{a,j} = \bar{v}_a + \alpha \sum_{i = 1}^{n}{w(a,i)(v_{i,j} - \bar{v}_i)}
\end{equation}
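As a sketch, the prediction formula above can be implemented as follows; the ratings matrix layout (rows are users, 0 means ``unrated'') and the choice of normalizer are illustrative assumptions:

```python
import numpy as np

def predict_rating(ratings, a, j, w, n):
    """Weighted-sum collaborative filtering prediction of the formula above.
    ratings: users x items array with 0 meaning 'unrated';
    w[i] is the weight w(a, i); n is the neighborhood size."""
    v_bar_a = ratings[a][ratings[a] > 0].mean()
    # The n most similar users (by weight) who actually rated item j.
    neighbors = [i for i in np.argsort(w)[::-1]
                 if i != a and ratings[i, j] > 0][:n]
    if not neighbors:
        return v_bar_a
    alpha = 1.0 / sum(abs(w[i]) for i in neighbors)   # normalizer
    dev = sum(w[i] * (ratings[i, j] - ratings[i][ratings[i] > 0].mean())
              for i in neighbors)
    return v_bar_a + alpha * dev
```

The normalizer $\alpha$ is taken here as the reciprocal of the sum of absolute weights, a common convention when none is specified.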

The value of $w(a,i)$ can be calculated in many ways; common choices are cosine similarity, Euclidean similarity, or Pearson correlation on user profiles. We propose and try multiple approaches to community-based collaborative filtering for predicting user ratings. Once we have found latent communities in the data, we need to use this information to help recommend content to users. Based on the premise that friendship leads to a greater degree of similarity in ratings as well as in the choice of items to rate, we hypothesize that users within the same latent community will be better able than the community at large to inform predictions about a user. The possible approaches consist of combinations of the following:

\begin{enumerate}
\item Using a community based similarity measure to calculate $w(a,i)$.
\item Using co-community users (users within the active user's community) instead of the k-nearest neighbors. 
\item Recommending to the active user the items that are, on average, most interesting to his/her community.
\end{enumerate}

For the first case: the PMM community detection algorithm produces a matrix $S$ indicating multi-dimensional community membership. We define a community-based similarity measure among the users of the system as the $N \times N$ matrix $W$ in formula \ref{eq:commBasedSimil} and use it as the weight function in formula \ref{eq:CFrate}. Here, $N$ is the total number of users and each element of the matrix gives the similarity between two users.

\begin{equation}
\label{eq:commBasedSimil}
W = SS^{T}
\end{equation}

In the second case, the predicted rating is defined as in formula \ref{eq:commrate}, in which $community(a)$ indicates the community assigned to the active user by the community detection algorithm.

\begin{equation}
\label{eq:commrate}
p_{a,j} = \bar{v}_a + \alpha \sum_{i \in community(a)}{w(a,i)(v_{i,j} - \bar{v}_i)}
\end{equation}

In the third case, we use the average rating of the active user's assigned community:

\begin{equation}
\label{eq:avgcommrate}
p_{a,j} = \bar{v}_a + \alpha \sum_{i \in community(a)}{(v_{i,j} - \bar{v}_i)}
\end{equation}
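The three cases can be sketched together as follows; the toy data shapes are hypothetical, $S$ is the membership matrix produced by PMM, and the normalizer is again assumed to be the reciprocal sum of absolute weights:

```python
import numpy as np

def community_similarity(S):
    """Community-based similarity W = S S^T (the first case)."""
    return S @ S.T

def predict_in_community(ratings, a, j, labels, w=None):
    """Prediction restricted to the active user's community (second and third
    cases). labels[i] is the community of user i; w=None weights all
    co-community members equally, giving the plain community average."""
    v_bar_a = ratings[a][ratings[a] > 0].mean()
    peers = [i for i in range(len(ratings))
             if i != a and labels[i] == labels[a] and ratings[i, j] > 0]
    if not peers:
        return v_bar_a
    dev, norm = 0.0, 0.0
    for i in peers:
        w_ai = 1.0 if w is None else w[a, i]
        dev += w_ai * (ratings[i, j] - ratings[i][ratings[i] > 0].mean())
        norm += abs(w_ai)
    return v_bar_a + dev / norm   # alpha taken as 1 / sum of |weights|
```

Passing `w=community_similarity(S)` combines the first and second cases, mirroring the fourth experimental setup below.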

In addition to addressing the cold start problem, we believe the second and third cases are very useful when there is a large number of users and, as a result, the traditional collaborative filtering approach requires a great deal of space and time. Instead, we can detect the community a user belongs to and use that community's members to find relevant items for him/her. 

\section{Dataset}
The dataset used in this study is based on an online Russian social network called \textit{Imhonet}\footnote{www.imhonet.ru}. This web site contains a multifaceted social network, including friendships, comments, and ratings of pieces of content such as movies and books. In this study, we focus on the connections between the users of this web site and the ratings they gave to books and movies. The friendship network contains approximately 240,000 connections between around 65,000 users, and the average number of friends per user is about 3.5. Additionally, the dataset contains about 16 million ratings of about 50,000 movies and more than 11.5 million ratings of about 195,000 books. 

%Figure \ref{fig:movieRatepUser} shows the number of movie ratings per user and figure \ref{fig:ratepMovie} shows the number of ratings for each movie. As we can see in these pictures, there is a small number of movies with many ratings and there are many movies with just a few ratings. These figures are cut off, only showing users who rated less than 50 movies and movies rated by less than 50 users, respectively.  As we can see, figure \ref{fig:movieRatepUser} looks like a combination of two power law distributions: there is a peak in number of users with around 20 movie ratings. That is because the \textit{Imhonet} web site asked its users to rate at least 20 movies for building more complete user profiles. We can see that some users actually followed this instructions and rated at least 20 movies. The right hand side figure shows the log-log scale figure which shows a power law distribution in figure \ref{fig:ratepMovie}, but for figure \ref{fig:movieRatepUser}, it doesn't show a power law distribution.

Figure \ref{fig:bookRatepUser} shows the number of book ratings per user and figure \ref{fig:ratepBook} shows the number of ratings per book. The right-hand plots are on a log-log scale; they show a power law distribution in figure \ref{fig:ratepBook}, but not in figure \ref{fig:bookRatepUser}. These figures are cut off, showing only users who rated fewer than 50 books and books rated by fewer than 50 users, respectively. As we can see, figure \ref{fig:bookRatepUser} looks like a combination of two power law distributions: there is a peak in the number of users with around 20 book ratings. That is because the \textit{Imhonet} web site asked its users to rate at least 20 books in order to build more complete user profiles, and a large number of users actually followed these instructions. 
%\begin{figure}[ht]
%\centering
%\begin{center}$
%\begin{array}{cc}
%\includegraphics[width= 1.7in]{movies_out.png}&
%\includegraphics[width= 1.7in]{movies_out_ll.png}
%\end{array}$
%\end{center}
%\caption{Left: Histogram of number of ratings per movie, Right: Log-log plot of number of ratings per movie.}
%\label{fig:ratepMovie}
%\end{figure}
%
%\begin{figure}[ht]
%\begin{center}$
%\begin{array}{cc}
%\includegraphics[width= 1.7in]{movies_in.png}&
%\includegraphics[width= 1.7in]{movies_in_ll.png}
%\end{array}$
%\end{center}
%\caption{Left: Histogram of number of movie ratings per user, Right: Log-log plot of number of movie ratings per user.}
%\label{fig:movieRatepUser}
%\end{figure}
\begin{figure}[ht]
\begin{center}$
\begin{array}{cc}
\includegraphics[width= 1.7in]{books_out.png}&
\includegraphics[width=1.7in]{books_out_ll.png}
\end{array}$
\end{center}
\caption{Left: Histogram of number of ratings per book, Right: Log-log plot of number of ratings per book.}
\label{fig:ratepBook}
\end{figure}
\begin{figure}[ht]
\begin{center}$
\begin{array}{cc}
\includegraphics[width=1.7in]{books_in.png}&
\includegraphics[width=1.7in]{books_in_ll.png}
\end{array}$
\end{center}
\caption{Left: Histogram of number of book ratings per user, Right: Log-log plot of number of book ratings per user.}
\label{fig:bookRatepUser}
\end{figure}
\begin{figure}[ht]
\begin{center}$
\begin{array}{cc}
\includegraphics[width=1.7in]{friends_in.png}&
\includegraphics[width=1.7in]{friends_out.png}
\end{array}$
\end{center}
\caption{Left: Number of users who are friends with each user, Right: Number of friends per user}
\label{fig:friends}
\end{figure}
%\begin{figure}[ht]
%\centering
%\subfloat[Part 1][Subfigure 1]{\includegraphics{friends_in.png}\label{fig:friends_a}}
%\subfloat[Part 2][Subfigure 2]{\includegraphics{friends_out.png}\label{fig:friends_b}}
%\caption{a) Number of Users who Are Friends with each User, b) Number of Friends each User Is Friend with}
%\label{fig:friends}
%\end{figure}
%We can see the same thing for movie ratings as shown in figures \ref{fig:ratepBook} and \ref{fig:bookRatepUser}. We can see that many users rates around 20 books which is because the web site asked them to do so.
The movie rating distribution (which we omit due to space restrictions) shows the same behavior: many users rated around 20 movies, again because the web site requested it.

Friendship connections between users are shown in figure \ref{fig:friends}. Connections are directional in this dataset, so the number of users who list a given user as a friend may differ from the number of users that user lists as friends. 

To reduce the volume of the data, we kept only the ratings of users who had at least one connection in the dataset. The resulting dataset contains about 9 million ratings by 48,000 users on 50,000 movies and 1.2 million ratings by 13,000 users on 140,000 books. We then picked 10,000 of these users at random.

\section{Experiments}
We separated out 10\% of the users as test users and used the remainder as training users. To simulate the cold start problem, we removed all the book ratings of the test users from the dataset and tried to predict them. We performed 10-fold cross-validation on this data. 
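The evaluation protocol can be sketched as follows (user IDs are hypothetical; in each round one fold plays the role of the cold-start test users whose book ratings are hidden):

```python
import numpy as np

def cold_start_folds(user_ids, n_folds=10, seed=0):
    """Shuffle the users and split them into n_folds disjoint folds; each fold
    in turn serves as the 10% of test users whose book ratings are removed."""
    ids = np.array(list(user_ids))
    np.random.default_rng(seed).shuffle(ids)
    return np.array_split(ids, n_folds)
```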

To apply PMM to the problem at hand, we need to define the various network dimensions. The first is obvious: we can simply use the friendship network itself. We then need a method for turning the book ratings and the movie ratings into graphs. To do this, we define a function that takes two users and decides whether or not to add an edge between them and, if so, what weight to put on that edge. Let $r_i$ be the rating vector of user $i$, let $\sigma_x$ be the standard deviation of the non-zero elements of a vector $x$, and let $\mbox{covar}(x,y)$ be the covariance over the positions where both $x$ and $y$ are non-zero. Then the similarity function is
\begin{equation}
s(r_i,r_j) = \frac{\mbox{covar}(r_i,r_j)}{\sigma_{r_i}\sigma_{r_j}}
\end{equation}
provided that $r_i$ and $r_j$ overlap in at least 3 positions, and 0 otherwise. A similarity score of 0 indicates that no edge should be added.
This function is a modified version of Pearson's correlation coefficient that takes into account the standard deviation of all of a user's ratings instead of just the standard deviation on the overlap with another user. As a result, it is no longer constrained to the interval $[-1,1]$ and has no direct interpretation, but it better represents the similarity between users. We can then use this function to create graphs from the book and movie ratings. Once we have the different dimensions of the network, we run PMM on the friendship, book, and movie graphs to obtain the latent communities. Our experiments are combinations of the different cases introduced in section 2.2. Here is a list of our experimental setups:

\begin{enumerate}
\item{Considering a vector space model for book and movie ratings, building user profiles by concatenating these two vectors in a combined space, and then performing traditional collaborative filtering with Pearson correlation (the baseline);}
\item{Performing collaborative filtering for all users using their community memberships as the similarity measure (case 1 of section 2.2);}
\item{Performing traditional collaborative filtering within the active user's community (case 2 of section 2.2);}
\item{Performing collaborative filtering with the community-based similarity measure within the community (a combination of cases 1 and 2);}
\item{Recommending based on average community ratings (case 3 of section 2.2).}
\end{enumerate}
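The modified correlation used above to build the rating graphs can be sketched as follows; the rating vectors are hypothetical, with 0 denoting ``unrated'':

```python
import numpy as np

def modified_pearson(r_i, r_j, min_overlap=3):
    """Covariance over co-rated items, normalized by the standard deviations
    of each user's *entire* non-zero rating profile (not just the overlap)."""
    both = (r_i > 0) & (r_j > 0)
    if both.sum() < min_overlap:
        return 0.0                        # too little overlap: no edge
    x, y = r_i[both], r_j[both]
    covar = ((x - x.mean()) * (y - y.mean())).mean()
    sigma_i = r_i[r_i > 0].std()          # std over all of user i's ratings
    sigma_j = r_j[r_j > 0].std()
    if sigma_i == 0.0 or sigma_j == 0.0:
        return 0.0
    return covar / (sigma_i * sigma_j)
```

Because the denominators use each user's full-profile standard deviation, the score can leave $[-1,1]$, as noted above.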

\section{Conclusions and Future Work}
%ACKNOWLEDGMENTS are optional
\section{Acknowledgments}
We thank the Imhonet company, which kindly provided the data for our experiments. We would also like to thank Dr. Peter Brusilovsky and Dr. William Cohen for their advice during this study.
%
% The following two commands are all you need in the
% initial runs of your .tex file to
% produce the bibliography for the citations in your paper.
\bibliographystyle{abbrv}
\bibliography{recsys}  % recsys.bib is the name of the Bibliography in this case
% You must have a proper ".bib" file
%  and remember to run:
% latex bibtex latex latex
% to resolve all references
%
% ACM needs 'a single self-contained file'!
%
%APPENDICES are optional
%\balancecolumns
\end{document}
