% Chapter 4 - Contextual Recommendations

\chapter{Folksonomic Recommendation}
\label{chapter4}
\lhead{Chapter 4. \emph{Folksonomic Recommendation}}

One of the defining characteristics of any social bookmarking system that supports social tagging is the emergence of a folksonomy. This collaboratively generated categorization of the items in a system binds users and items together through an extra annotation layer. This extra layer of information can have many different applications, as discussed in Chapter \ref{chapter2}. For instance, both searching and browsing through the content in a system can be improved by using the tags assigned to items by users. This leads us to our first research question.

\begin{center}\begin{tabularx}{0.9\linewidth}{lX}
  {\bf RQ 1}   & How can we use the information represented by the folksonomy 
                 to support and improve the recommendation performance? \\
\end{tabularx}\end{center}

More specifically, we are interested in how we can improve the performance of collaborative filtering (CF) algorithms, which base their recommendations on the opinions or actions of other like-minded users as opposed to item content.
We have two options for incorporating the tagging information contained in the entire folksonomy into CF algorithms. The first option is to treat the tag layer as an extra, refined source of usage information on top of the normal bipartite graph of users and items. Here, we take a standard CF algorithm as our starting point and examine a number of ways in which tags can be used to find like-minded users, or to re-rank the recommendations. We will refer to this as Tag-Based Collaborative Filtering (TBCF). The second option is to use a CF algorithm designed to operate on the entire tripartite graph. We will refer to such approaches as Graph-Based Collaborative Filtering (GBCF). We discuss the most important related work on GBCF approaches and compare two state-of-the-art GBCF algorithms to our own tag-based algorithms.

This chapter is organized as follows. We start in Section \ref{4:sec:preliminaries} by establishing the notation and definitions that we will use throughout this chapter. In Section \ref{4:section:popularity} we establish a popularity-based recommendation baseline that is currently employed by several social bookmarking websites. Section \ref{4:sec:cf} establishes a stronger baseline by applying two CF algorithms to our data sets. We examine a number of ways in which we can incorporate tags into our CF algorithms and present these TBCF algorithms in Section \ref{4:sec:using-tags}. Section \ref{4:sec:related-work} discusses the related work on recommendation for social bookmarking websites. In Section \ref{4:sec:comparison-related-work} we examine two state-of-the-art GBCF approaches from the related work and compare them to our own TBCF algorithms. In Section \ref{4:sec:summary} we answer RQ 1 and present our conclusions.





\section{Preliminaries}
\label{4:sec:preliminaries}

We start by establishing the notation and definitions that will be used throughout this chapter. To be consistent with other work on recommendation for social bookmarking, we base our notation in part on the work by \cite{Wang:2006}, \cite{Clements:2008}, and \cite{Tso-Sutter:2008}. In the social bookmarking setting that we focus on in this thesis, users post items to their personal profiles and can choose to label them with one or more tags. We recall that, in Chapter \ref{chapter2}, we defined a folksonomy to be a tripartite graph that emerges from this collaborative annotation of items. The resulting ternary relations that make up the tripartite graph can be represented as a 3D matrix, or third-order tensor, of users, items, and tags. The top half of Figure \ref{figure:graph-to-3d-matrix} illustrates this matrix view. We refer to the 3D matrix as $\mathbf{D}(u_{k}, i_{l}, t_{m})$. Here, each element $d(k,l,m)$ of this matrix indicates whether user $u_{k}$ (with $k \in \{1, \dots, K\}$) tagged item $i_{l}$ (with $l \in \{1, \dots, L\}$) with tag $t_{m}$ (with $m \in \{1, \dots, M\}$), where a value of $1$ indicates the presence of the ternary relation in the folksonomy. %The absence of a relation is not represented and the value for the absent elements is $\emptyset$.

  \begin{figure}[h]
    \centering
      \includegraphics[scale=0.7]{./figures/diagram-tripartite-to-matrix.pdf}
    \caption[Representing the folksonomy graph as a 3D matrix]{Representing the tripartite folksonomy graph as a 3D matrix. The ratings matrix {\bf R} is derived from the tripartite graph itself and directly represents what items were added by which users. Aggregation over the tag dimension of {\bf D} gives us matrix {\bf UI}, containing the tag counts for each user-item pair. By binarizing the values in {\bf UI} we can obtain {\bf R} from {\bf UI}. The figure is adapted from \cite{Clements:2008} and \cite{Tso-Sutter:2008}.}
    \label{figure:graph-to-3d-matrix}
  \end{figure}

In conventional recommender systems, the user-item matrix contains rating information. The ratings can be {\em explicit}, when they are entered directly by the user, or {\em implicit}, when they are inferred from user behavior. In our case we have implicit, unary ratings: every item that a user added receives a rating of 1, and all non-added items do not have a rating, indicated by $\emptyset$. We extract this ratings matrix $\mathbf{R}(u_{k}, i_{l})$ for all user-item pairs directly from the tripartite graph. We denote its individual elements by $x_{k,l} \in \{1, \emptyset\}$. As is evident from Figure \ref{figure:graph-to-3d-matrix}, we can also extract a user-item matrix from {\bf D} by aggregating over the tag dimension. We then obtain the $K \times L$ user-item matrix $\mathbf{UI}(u_{k}, i_{l}) = \sum_{m=1}^{M} \mathbf{D}(u_{k}, i_{l}, t_{m})$, specifying how many tags each user assigned to each item. Individual elements of {\bf UI} are also denoted by $x_{k,l}$. We can define a binary version of this matrix, $\mathbf{UI}_{\mathit{binary}}(u_{k}, i_{l}) = \mathit{sgn} \sum_{m=1}^{M} \mathbf{D}(u_{k}, i_{l}, t_{m})$, where the $\mathit{sgn}$ function sets all values $>0$ to 1. Because we filtered our data sets to include only tagged content, as described in Chapter \ref{chapter3}, our ratings matrix {\bf R} is the same as $\mathbf{UI}_{\mathit{binary}}$\footnote{Note that this would not be the case if {\bf R} contained explicit, non-binary ratings.}.

Similar to the way we defined {\bf UI} we can also aggregate the content of {\bf D} over the user and the item dimensions. We define the $K \times M$ user-tag matrix $\mathbf{UT}(u_{k}, t_{m}) = \sum_{l=1}^{L} \mathbf{D}(u_{k}, i_{l}, t_{m})$, specifying how often each user used a certain tag to annotate his items. Individual elements of {\bf UT} are denoted by $y_{k,m}$. We define the $L \times M$ item-tag matrix $\mathbf{IT}(i_{l}, t_{m}) = \sum_{k=1}^{K} \mathbf{D}(u_{k}, i_{l}, t_{m})$, indicating how many users assigned a certain tag to an item. Individual elements of {\bf IT} are denoted by $z_{l,m}$. We can define binary versions of {\bf UT} and {\bf IT} in a manner similar to $\mathbf{UI}_{\mathit{binary}}$. Figure \ref{figure:aggregated-matrices} visualizes the 2D projections in the users' and items' tag spaces.
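To make these projections concrete, the following sketch computes {\bf UI}, {\bf UT}, and {\bf IT} from a toy folksonomy stored as a set of (user, item, tag) triples. The data, tag names, and variable names are purely illustrative; a real system would use the sparse representation of its own storage backend.

```python
from collections import defaultdict

# Toy folksonomy: the set of ternary relations (user, item, tag).
# User 0 tagged item 0 twice; user 1 tagged item 2 once.
D = {(0, 0, 'web'), (0, 0, 'python'), (1, 2, 'web')}

UI = defaultdict(int)   # (user, item) -> number of tags assigned (x_{k,l})
UT = defaultdict(int)   # (user, tag)  -> how often the user used the tag (y_{k,m})
IT = defaultdict(int)   # (item, tag)  -> how many users gave the item the tag (z_{l,m})
for user, item, tag in D:
    UI[(user, item)] += 1   # aggregate over the tag dimension
    UT[(user, tag)] += 1    # aggregate over the item dimension
    IT[(item, tag)] += 1    # aggregate over the user dimension

# Binarizing UI yields the unary ratings matrix R, since all content is tagged.
R = {pair for pair in UI}   # set of (user, item) pairs with rating 1
```

Each aggregation is a single pass over the ternary relations, mirroring the sums over one dimension of {\bf D} in the definitions above.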

  \begin{figure}[h]
    \centering
      \includegraphics[angle=90,scale=0.5]{./figures/diagram-c4-aggregated-matrices.pdf}
    \caption[Deriving tagging information at the user level and the item level]{Deriving tagging information at the user level as {\bf UT}, and the item level as {\bf IT}, by aggregating over the item and user dimensions respectively.}
    \label{figure:aggregated-matrices}
  \end{figure}

The ratings matrix {\bf R} can be represented by its row vectors:

\begin{displaymath}
  \mathbf{R} = [\overrightarrow{u_{1}}, \ldots, \overrightarrow{u_{K}}]^{T}, \; \overrightarrow{u_{k}} = [x_{k,1}, \ldots, x_{k,L}]^{T}, \; k = 1, \ldots, K, 
\end{displaymath}

where each row vector $\overrightarrow{u_{k}}^{T}$ corresponds to a user profile, representing the items that the user added to his profile. {\bf R} can also be decomposed into column vectors:

\begin{displaymath}
  \mathbf{R} = [\overrightarrow{i_{1}}, \ldots, \overrightarrow{i_{L}}], \; \overrightarrow{i_{l}} = [x_{1,l}, \ldots, x_{K,l}]^{T}, \; l = 1, \ldots, L, 
\end{displaymath}

where each column vector $\overrightarrow{i_{l}}$ represents an item profile, containing all users that have added that item. We can decompose the {\bf UI}, {\bf UT}, and {\bf IT} matrices in a similar fashion. We will also refer to the user and item profiles taken from the {\bf UI} matrix as $\overrightarrow{u_{k}}$ and $\overrightarrow{i_{l}}$. We decompose the {\bf UT} and {\bf IT} matrices, which have the same number of columns $M$, into row vectors in the following way:

\begin{displaymath}
  \mathbf{UT} = [\overrightarrow{d_{1}}, \ldots, \overrightarrow{d_{K}}]^{T}, \; \overrightarrow{d_{k}} = [y_{k,1}, \ldots, y_{k,M}]^{T}, \; k = 1, \ldots, K
\end{displaymath}

\begin{displaymath}
  \mathbf{IT} = [\overrightarrow{f_{1}}, \ldots, \overrightarrow{f_{L}}]^{T}, \; \overrightarrow{f_{l}} = [z_{l,1}, \ldots, z_{l,M}]^{T}, \; l = 1, \ldots, L
\end{displaymath}

The vectors $\overrightarrow{d_{k}}$ and $\overrightarrow{f_{l}}$ are the tag count vectors for the users and items, respectively.

Formally, the goal of each of the recommendation algorithms discussed in this chapter is to rank-order all items that are not yet in the profile of the active user $u_{k}$ (so $x_{k,l} = \emptyset$) so that the top-ranked item is most likely to be a good recommendation for the active user. To this end, we predict a rating or score $\widehat{x}_{k,l}$ for each item that user $u_{k}$ would give to item $i_{l}$. In our social bookmarking scenario we do not have explicit ratings information (e.g., 4 out of 5 stars), so we try to predict whether a user will like an item or not. % on a $[0, 1]$ scale. 
The final recommendations for a user $\mathit{RECS}(u_{k})$ are generated by rank-ordering all items $i_{l}$ by their predicted rating $\widehat{x}_{k,l}$ as follows:

\begin{equation}
\label{eq:rec-ranking}
\mathit{RECS}(u_{k}) = \{ \; i_{l} \; | \; \mathit{rank} \; \widehat{x}_{k,l}, \; x_{k,l} = \emptyset\ \}.
\end{equation}

Only items not yet in the user's profile $\overrightarrow{u_{k}}$ are considered as recommendations ($x_{k,l} = \emptyset$). Regardless of how the ratings $\widehat{x}_{k,l}$ are predicted, we always generate the recommendations according to Equation \ref{eq:rec-ranking}.
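A minimal Python sketch of this ranking step, with illustrative item identifiers, scores, and a function name of our own choosing:

```python
def recommend(predicted, profile):
    """Rank-order all items with a predicted score, excluding items already
    in the user's profile: those are never recommended."""
    return sorted((item for item in predicted if item not in profile),
                  key=lambda item: predicted[item], reverse=True)

# Predicted scores for four items; the user already owns item 'b'.
recs = recommend({'a': 0.2, 'b': 0.9, 'c': 0.5, 'd': 0.1}, profile={'b'})
# recs == ['c', 'a', 'd']
```

Regardless of how the scores were predicted, the recommendation list is always produced by this same filter-and-sort step.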





\section{Popularity-based Recommendation}
\label{4:section:popularity}

One of the most straightforward recommendation strategies is to recommend the most popular content in a system to every user. Such a popularity-based recommendation algorithm ranks the items in the system by popularity, and presents every user with this list of recommendations, minus the items the user already owns. Recommending items based solely on popularity in the system without any regard for personal preferences can be expected to produce poor recommendations. The list of most popular items reflects the combined tastes of all users in the system, and thus represents the `average' user of the system. Rather few actual users will be similar to this average user. 

In general, most users have only a few of the most popular items in their profile. The majority of their items, however, are in the long tail of the item distribution, and not in the list of most popular content. As a popularity-based algorithm cannot reach into the long tail, it cannot be expected to provide novel recommendations, which is why we consider it a weak baseline. We nevertheless include the algorithm in this chapter, because popularity-based recommendations have become a standard feature of many social news and social bookmarking websites, and therefore a staple algorithm to compare new algorithms against \citep{Nakamoto:2008,Zanardi:2008,Wetzker:2009}. \del\ is one example of such a social bookmarking website that uses popularity-based recommendation\footnote{According to {\tt \href{http://www.seomoz.org/blog/reddit-stumbleupon-delicious-and-hacker-news-algorithms-exposed}{http://www.seomoz.org/blog/reddit-stumbleupon-delicious-and-hacker-}\\\href{http://www.seomoz.org/blog/reddit-stumbleupon-delicious-and-hacker-news-algorithms-exposed}{news-algorithms-exposed}} (last visited: July 29, 2009).}. The popularity-based baseline thus also shows how much more sophisticated algorithms can improve upon it.

Formally, we define popularity-based recommendation as calculating a normalized popularity score $\widehat{x}_{k,l}$ for each item according to Equation \ref{eq:popular-based}:

\begin{equation}
\label{eq:popular-based}
\widehat{x}_{k,l} = \frac{ \big \vert \{ \; u_{a} \; | \; x_{a,l} \neq \emptyset \; \} \big \vert }{K}, 
\end{equation}

where the total number of users that have added item $i_{l}$ is counted and normalized by dividing it by the total number of users $K$ in the data set. 
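A sketch of this popularity score in Python, assuming the unary ratings matrix {\bf R} is stored as a set of (user, item) pairs; the function name and the toy data are illustrative only.

```python
def popularity_scores(R, K):
    """Score each item by the fraction of the K users who added it.
    R is the set of (user, item) pairs with an implicit rating of 1."""
    counts = {}
    for user, item in R:
        counts[item] = counts.get(item, 0) + 1
    return {item: n / K for item, n in counts.items()}

# Item 'a' was added by 3 of 4 users, items 'b' and 'c' by 1 user each.
R = {(0, 'a'), (1, 'a'), (2, 'a'), (0, 'b'), (3, 'c')}
scores = popularity_scores(R, K=4)   # {'a': 0.75, 'b': 0.25, 'c': 0.25}
```

Every user receives the same ranking from these scores; personalization only enters through the removal of items the user already owns.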
Table \ref{table:results-popularity-baseline} contains the results of popularity-based recommendation runs on our four data sets. We reiterate for the sake of the reader that we refer to the \bib\ data sets containing scientific articles and Web bookmarks as \dba\ and \dbb\ respectively; for convenience, these abbreviations are also included in Table \ref{table:results-popularity-baseline}.

\begin{table}[htp]
  \caption[Results of the popularity-based baseline]{Results of the popularity-based baseline. Reported are the MAP scores. }
  \begin{center}
  \begin{footnotesize}
  \begin{tabular}{l||c|c||c|c}
  \hline
  \multirow{3}{*}{{\bf Run}} &  \multicolumn{2}{c||}{{\bf bookmarks}} & \multicolumn{2}{c}{{\bf articles}} \\
  \cline{2-5}
  ~  & {\bf BibSonomy} & {\bf Delicious} & {\bf BibSonomy} & {\bf CiteULike} \\
  ~            & (\dbb)       & (\dd)        & (\dba)       & (\dc) \\
  \hline
  \hline
  Popularity baseline       & 0.0044   & 0.0022   & 0.0122   & 0.0040 \\
  \hline
  \end{tabular}
  \end{footnotesize}
  \end{center}
  \label{table:results-popularity-baseline}
\end{table}

As expected, the results show that popularity-based recommendation does not achieve very high scores. It achieves its highest MAP scores on the two smallest data sets, \dbb\ and \dba, and its lowest scores on the \dd\ data set. However, none of the scores are likely to be of practical value for most users. Popularity-based recommendation achieves its highest P@10 score on \dba\ at 0.0143, which means that on average only 0.14 correct items are found among the top 10 recommended items. The highest MRR score, 0.0565 on \dba, means that on average the first relevant withheld item is found at rank 18. The lowest MRR score, on the \dd\ data set, means that the first relevant item there is found at rank 51 on average. Most users are unlikely to go through 50 recommended items to find only one that would be of value to them. 

The MAP scores on the \dc\ data set are high in comparison to \dd, the other large data set. It is easier for a popularity-based algorithm to deal with data sets containing fewer items in total: with fewer items, the probability that a user's item is part of the fixed-size set of most popular items is larger. Another pattern we see is that the scores on the data sets covering scientific articles are higher than those on the bookmark data sets of corresponding size. We believe this reflects the greater topical diversity of social bookmarking websites compared to social reference managers, as mentioned earlier in Subsection \ref{3:subsec:filtering}. The greater diversity translates into a more difficult prediction task with lower scores. 





\section{Collaborative Filtering}
\label{4:sec:cf}

A common and well-understood source of information for recommendations is usage patterns: who added or rated what content in the system? As mentioned earlier in \mbox{Chapter \ref{chapter2}}, the class of algorithms that operate on such transaction data is called Collaborative Filtering (CF). We distinguished between two classes of CF algorithms---memory-based and model-based---and described the strengths and weaknesses of both types. In the experiments described in this chapter we focus on the $k$-Nearest Neighbor ($k$-NN) algorithm, one of the memory-based CF algorithms, and extend it in various ways to include tagging information in our TBCF algorithms. In this section we describe and establish the $k$-NN algorithm without tagging information as our strong baseline, and evaluate its performance on our four data sets. We implement and evaluate both the user-based and the item-based variants of the $k$-NN algorithm, as introduced earlier in Subsection \ref{2:subsec:collaborative-filtering}.

We pick the $k$-NN algorithm because it is a well-understood algorithm that can be extended intuitively to incorporate additional information \citep{Herlocker:1999, Burke:2002}. While model-based algorithms such as PLSA and matrix factorization have been shown to outperform memory-based algorithms in several cases \citep{Hofmann:2004, Koren:2008}, it is not always clear how to elegantly include extra information such as tags into these algorithms. Furthermore, memory-based algorithms are better suited to generating intuitive explanations of why a certain item was recommended to the user. We see such functionality as an important component of social bookmarking systems, which rely heavily on user interaction and ease of use.

In the next Subsection \ref{4:subsec:cf-algorithm} we formally define both variants of the $k$-NN algorithm. We present the results of the experiments with these algorithms on our data sets in Subsection \ref{4:subsec:cf-results}, and discuss these results in Subsection \ref{4:subsec:cf-discussion}.





\subsection{Algorithm}
\label{4:subsec:cf-algorithm}

The $k$-NN algorithm uses the behavior of similar users or items (the nearest neighbors) to predict what items a user might like, and comes in two `flavors'. In {\em user-based filtering} we locate the users most similar to the active user, and then look among their items to generate new recommendations for the active user. In {\em item-based filtering} we locate the items most similar to items in the active user's profile, and order and present those similar items to the active user as new recommendations. In both cases the recommendation process consists of two steps: (1) calculating the similarity between the active object and other objects, and (2) using the $N$ most similar neighboring objects to predict item ratings for the active user\footnote{Note that since we reserved the letter $k$ to index the users, we use $N$ instead to denote the number of nearest neighbors.}. In the first step we calculate the similarities between pairs of users or pairs of items. Many different similarity metrics have been proposed and evaluated over time, such as Pearson's correlation coefficient and cosine similarity \citep{Herlocker:1999, Breese:1998}. We use cosine similarity in our experiments because it has often been used successfully on data sets with implicit ratings \citep{Breese:1998,Sarwar:2001}.



\shrink


\paragraph{User-based CF}

We first describe the user-based CF algorithm. User similarities are calculated on the user profile vectors $\overrightarrow{u_{k}}$ taken from the {\bf R} matrix. We define the cosine similarity $\mathit{sim}_{cosine}(u_{k}, u_{a})$ between two users $u_{k}$ and $u_{a}$ as

\begin{equation}
  \mathit{sim}_{cosine}(u_{k}, u_{a}) = \frac{\overrightarrow{u_{k}} \cdot \overrightarrow{u_{a}}}{||\overrightarrow{u_{k}}||\;||\overrightarrow{u_{a}}||}.
  \label{eq:cf-ub-cosine}
\end{equation}

The next step in user-based filtering is determining the top $N$ most similar users for user $u_{k}$. We denote this set as the Set of Similar Users $\mathit{SSU}(u_{k})$ and define it as

\begin{equation}
\mathit{SSU}(u_{k}) = \{ \; u_{a} \; | \; \mathit{rank} \; \mathit{sim}_{cosine}(u_{k}, u_{a}) \le N, \; x_{a,l} \neq \emptyset\ \},
\end{equation}

where we rank all users $u_{a}$ on their cosine similarity $\mathit{sim}_{cosine}(u_{k}, u_{a})$ to user $u_{k}$ (or on another similarity metric), and take the top $N$. Consequently, $|\mathit{SSU}(u_{k})| = N$. For each user $u_{a}$, we only consider those items that $u_{a}$ added to his profile ($x_{a,l} \neq \emptyset$). The next step is to generate the actual predictions for each item and the list of recommendations. The predicted score $\widehat{x}_{k,l}$ of item $i_{l}$ for user $u_{k}$ is defined as

\begin{equation}
\widehat{x}_{k,l} = \displaystyle \sum_{u_{a} \; \in \; \mathit{SSU(u_{k})}} \mathit{sim}_{cosine}(u_{k}, u_{a}),
\end{equation}

where the predicted score is the sum of the similarity values (between 0 and 1) of all $N$ nearest neighbors that actually added item $i_{l}$ (i.e., $x_{a,l} \; \neq \; \emptyset$). When applying user-based filtering to data sets with explicit ratings, it is common to scale $\widehat{x}_{k,l}$ to a rating in the original ratings scale \citep{Sarwar:2001}. We do not scale our prediction $\widehat{x}_{k,l}$ as we are working with unary ratings and are only interested in rank-ordering the items by their predicted score\footnote{It is also common practice in CF to normalize the user's ratings by subtracting the user's average rating \citep{Herlocker:1999}. However, we do not need to normalize in this manner, since we are working with implicit, unary ratings.}.

A recurring observation in the literature on CF algorithms is that universally liked items are not as useful in capturing similarity between users as less common items; see, e.g., \cite{Breese:1998}. Items that are added, rated, or purchased frequently can dominate the search for similar items, making it difficult to provide the user with novel recommendations. Inspired by the popular \tfidf\ term weighting scheme from the field of IR \citep{Salton:1988}, we try to mitigate the influence of frequently occurring items by weighting the elements of $\overrightarrow{u_{k}}$ with the {\em inverse user frequency} of the user's items\footnote{We refer to both the inverse user frequency and the inverse item frequency with $idf$ for clarity and consistency with previous work.}. We define the inverse user frequency of item $i_{l}$ as

\begin{equation}
  \mathit{idf}(i_{l}) = \log \frac{K}{| \{ \; u_{a} \; | \; x_{a,l} \neq \emptyset \; \} |}.
  \label{eq:idf-weighting}
\end{equation}

We then define user profile vectors $\overrightarrow{u'_{k}}$ weighted by the inverse user frequency as $\overrightarrow{u'_{k}} = [x_{k,1} \cdot \mathit{idf}(i_{1}), \ldots, x_{k,L} \cdot \mathit{idf}(i_{L})]^{T}$. Similarity between two {\em idf}-weighted user vectors is also calculated using the cosine similarity. We will refer to these {\em idf}-weighted runs as \run{u-bin-idf-sim} and to the runs without {\em idf}-weighting, which effectively use binary vectors, as \run{u-bin-sim}.
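The user-based scoring steps above, including the optional {\em idf}-weighting, can be sketched in Python over dictionary-based sparse vectors. This is an illustrative sketch, not the code used in our experiments; all function names are our own.

```python
from math import log, sqrt

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts key -> weight)."""
    dot = sum(w * v[i] for i, w in u.items() if i in v)
    norm = sqrt(sum(w * w for w in u.values())) * sqrt(sum(w * w for w in v.values()))
    return dot / norm if norm else 0.0

def user_based_scores(profiles, active, N):
    """Score items for `active` by summing the similarities of the N nearest
    neighbors that added each item (roughly the u-bin-sim run)."""
    sims = sorted(((cosine(profiles[active], p), u)
                   for u, p in profiles.items() if u != active), reverse=True)
    scores = {}
    for sim, u in sims[:N]:          # the set of similar users SSU
        for item in profiles[u]:
            if item not in profiles[active]:   # only unseen items
                scores[item] = scores.get(item, 0.0) + sim
    return scores

def idf_weighted(profiles, K):
    """Reweight each item by its inverse user frequency log(K / n_users),
    yielding the idf-weighted profile vectors."""
    df = {}
    for p in profiles.values():
        for item in p:
            df[item] = df.get(item, 0) + 1
    return {u: {i: log(K / df[i]) for i in p} for u, p in profiles.items()}
```

Passing the output of \texttt{idf\_weighted} into \texttt{user\_based\_scores} gives the {\em idf}-weighted variant; passing binary profiles gives the unweighted one.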



\shrink


\paragraph{Item-based CF}

The item-based $k$-NN algorithm follows the same general principle as the user-based filtering algorithm. Instead of comparing users directly, we try to identify the best recommendations for each of the items in a user's profile. In other words, for item-based filtering we calculate the similarities between the items in the profile of the active user $u_{k}$ and the other items that $u_{k}$ has not yet added (so $x_{k,b} = \emptyset$). Item similarities are calculated on the item profile vectors $\overrightarrow{i_{l}}$ taken from the {\bf R} matrix. Analogous to user similarity, we define the cosine similarity $\mathit{sim}_{cosine}(i_{l}, i_{b})$ between two items $i_{l}$ and $i_{b}$ as

\begin{equation}
  \mathit{sim}_{cosine}(i_{l}, i_{b}) = \frac{\overrightarrow{i_{l}} \cdot \overrightarrow{i_{b}}}{||\overrightarrow{i_{l}}||\;||\overrightarrow{i_{b}}||}.
  \label{eq:cf-ib-cosine}
\end{equation}

Analogous to user-based filtering, we can also suppress the influence of the most prolific users, i.e., users that have added a disproportionately large number of items to their profile, such as bots or spam users. To this end, the {\em inverse item frequency} of a user $u_{k}$ is defined as

\begin{equation}
  \mathit{idf}(u_{k}) = \log \frac{L}{| \{ \; i_{b} \; | \; x_{k,b} \neq \emptyset \; \} |}.
\end{equation}

We then define item profile vectors $\overrightarrow{i'_{l}}$ weighted by the inverse item frequency as $\overrightarrow{i'_{l}} = [x_{1,l} \cdot \mathit{idf}(u_{1}), \ldots, x_{K,l} \cdot \mathit{idf}(u_{K})]^{T}$. Again, we calculate the similarity between two {\em idf}-weighted item vectors using cosine similarity. The next step is to identify the neighborhood of most similar items. We define the top $N$ similar items as the Set of Similar Items $\mathit{SSI}(i_{l})$

\begin{equation}
\mathit{SSI}(i_{l}) = \{ \; i_{b} \; | \; \mathit{rank} \; \mathit{sim}_{cosine}(i_{l}, i_{b}) \le N, \; x_{k,b} \neq \emptyset\ \}, 
\end{equation}

where we rank all items $i_{b}$ on their cosine similarity $\mathit{sim}_{cosine}(i_{l}, i_{b})$ to item $i_{l}$ (or on another similarity metric), and take the top $N$. We only consider those items $i_{b}$ that user $u_{k}$ added to his profile ($x_{k,b} \neq \emptyset$). The next step is to generate the actual predictions for each item and the list of recommendations. The predicted score $\widehat{x}_{k,l}$ of item $i_{l}$ for user $u_{k}$ is defined as

\begin{equation}
\widehat{x}_{k,l} = \displaystyle \sum_{i_{b} \; \in \; \mathit{SSI(i_{l})}} \mathit{sim}_{cosine}(i_{l}, i_{b}),
\end{equation}

where the predicted score is the sum of the similarity values (between 0 and 1) of all the most similar items that were added by user $u_{k}$ (i.e., $x_{k,b} \; \neq \; \emptyset$). The final recommendations $\mathit{RECS}(u_{k})$ for user $u_{k}$ are then generated for both algorithms as described in Section \ref{4:sec:preliminaries}. We refer to these item-based CF runs with and without {\em idf}-weighting as \run{i-bin-idf-sim} and \run{i-bin-sim} respectively. For the convenience of the reader, we have included a glossary in Appendix \ref{app:glossary} that lists all of the run names used in Chapter \ref{chapter4} with a brief description.
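Analogously to the user-based sketch, item-based scoring can be illustrated as follows: each candidate item is scored by summing its similarity to the at most $N$ most similar items in the active user's profile. Again, this is an illustrative sketch with names of our own choosing, not the thesis's actual implementation.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts key -> weight)."""
    dot = sum(w * v[k] for k, w in u.items() if k in v)
    norm = sqrt(sum(w * w for w in u.values())) * sqrt(sum(w * w for w in v.values()))
    return dot / norm if norm else 0.0

def item_based_scores(profiles, active, N):
    """Score each candidate item by summing its similarity to the (at most N)
    most similar items already in the active user's profile (the SSI set)."""
    # Invert the user profiles into item profiles: item -> {user: weight}.
    items = {}
    for u, p in profiles.items():
        for i, w in p.items():
            items.setdefault(i, {})[u] = w
    owned = profiles[active]
    scores = {}
    for cand in items:
        if cand in owned:
            continue   # never recommend items the user already has
        sims = sorted((cosine(items[cand], items[i]) for i in owned), reverse=True)
        scores[cand] = sum(sims[:N])
    return scores
```

The inversion step makes explicit that item profiles are simply the columns of the same ratings matrix whose rows serve as user profiles.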


\shrink


\paragraph{Determining the Optimal Number of Nearest Neighbors}

After the user and item similarities have been calculated, the top $N$ neighbors are used to generate the recommendations. For $k$-NN classifiers, the neighborhood size is an algorithm parameter that can significantly influence prediction quality. Using too many neighbors might bias the predictions too strongly towards the items with the highest overall popularity, whereas considering too few neighbors might base too many decisions on accidental similarities.

We use our 10-fold cross-validation setup to optimize the number of neighbors $N$, as described in Subsection \ref{3:subsec:evaluation}. The question remains which values of $N$ we should examine. Examining all possible values, from a single neighbor up to all users or items as neighbors (where $N = K$ or $L$ respectively), would be a form of overfitting in itself, as well as computationally impractical on the larger data sets. We therefore follow an iterative deepening approach, as described by \cite{Van-den-Bosch:2004b}, for selecting the values of $N$. We construct a clipped, pseudo-exponential series of 100 iterations. For each iteration $q$, the $N$ value is determined by $N = 1.1^{q}$, rounded down to the nearest integer. As an example we list the first 40 unique values for $N$ (corresponding to 55 iterations):

\begin{center}
   \noindent
   \mbox{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 14, 
         15, 17, 19, 21, 23, 25, 28, 30, 34, 37, }
   \noindent
   \mbox{41, 45, 49, 54, 60, 66, 72, 80, 88, 97, 106, 
         117, 129, 142, 156, 171, 189, $\dots$}
\end{center}
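For illustration, the clipped series above can be reproduced in a few lines of Python; note that the listed values correspond to rounding $1.1^{q}$ down to the nearest integer and discarding duplicate values.

```python
from math import floor

# Pseudo-exponential series of candidate neighborhood sizes: N = floor(1.1^q),
# for q = 1..100, keeping only the first occurrence of each value.
candidates = []
for q in range(1, 101):
    n = floor(1.1 ** q)
    if n not in candidates:   # clip the duplicates caused by rounding
        candidates.append(n)
```

The series is dense for small $N$, where prediction quality changes quickly, and sparse for large $N$, where it changes slowly.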

We employ the following heuristic optimization procedure to find the optimal number of neighbors. We keep performing additional iterations and evaluating values of $N$ as long as the MAP score keeps increasing. We stop if the MAP score remains the same or decreases for a new value of $N$. To reduce the risk of ending up in a local maximum, we always take 5 additional steps. If the MAP score has not increased statistically significantly anywhere in those 5 steps, we take as our $N$ the value corresponding to the maximum MAP score encountered so far. If there is a significant increase in MAP within those 5 extra steps, we assume we had found a local maximum and keep evaluating additional iterations. When we reach the maximum value for $N$ (the number of users or items in the data set, depending on the algorithm) we stop and take as $N$ the value corresponding to the highest MAP score. For example, if the MAP score stops increasing at $N = 13$, we examine the next 5 values of $N$, from 14 to 21. If none of those additional runs achieve MAP scores significantly higher than the MAP for $N = 13$, we take the $N$ with the maximum MAP score in the interval $[13, 21]$.





\subsection{Results}
\label{4:subsec:cf-results}

The relatively low scores achieved by the popularity-based algorithm of the previous section mean that there is much room for improvement. Table \ref{table:results-collaborative-filtering} shows the results of the user-based and item-based variants of the $k$-NN algorithm. As expected, the $k$-NN algorithm outperforms popularity-based recommendation on all four data sets, with improvements ranging from a 109\% increase on \dd\ to a more than twenty-fold increase on \dc. These improvements are statistically significant on all data sets except \dd. 

\begin{table}[htp]
  \caption[Results of the $k$-Nearest Neighbor algorithm]{Results of the $k$-Nearest Neighbor algorithm. Reported are the MAP scores as well as the optimal number of neighbors $N$. Best-performing runs for each data set are printed in bold. The percentage difference between the best popularity-based run and the best CF run is indicated in the bottom row of the table.}
  \begin{center}
  \begin{footnotesize}
  \begin{tabular}{l||c|c|c|c||c|c|c|c}
  \hline
  \multirow{3}{*}{{\bf Runs}} &  \multicolumn{4}{c||}{{\bf bookmarks}} & \multicolumn{4}{c}{{\bf articles}} \\
  \cline{2-9}
  ~  & \multicolumn{2}{c|}{{\bf BibSonomy}} & \multicolumn{2}{c||}{{\bf Delicious}} & \multicolumn{2}{c|}{{\bf BibSonomy}} & \multicolumn{2}{c}{{\bf CiteULike}}  \\
  \cline{2-9}
  ~                         & MAP & $N$ & MAP & $N$ & MAP & $N$ & MAP & $N$ \\
  \hline
  \hline
  Popularity baseline       & 0.0044\nosign & -          & 0.0022\nosign & -       
                            & 0.0122\nosign & -          & 0.0040\nosign & - \\
  \hline
  \run{u-bin-sim}           & 0.0258\upblack & 6   & {\bf 0.0046}\nosign & 15  
                            & {\bf 0.0865}\upblack & 4   & 0.0746\upblack & 15 \\ 
  \run{u-bin-idf-sim}       & {\bf 0.0277}\upblack & 13   & 0.0040\nosign & 15  
                            & 0.0806\upblack & 4    & 0.0757\upblack & 15 \\ 
  \run{i-bin-sim}           & 0.0224\upblack & 13   & 0.0027\nosign & 25  
                            & 0.0669\upblack & 37   & 0.0826\upblack & 117 \\ 
  \run{i-bin-idf-sim}       & 0.0244\upblack & 34   & 0.0025\nosign & 14  
                            & 0.0737\upblack & 49   & {\bf 0.0887}\upblack & 30 \\ 
  \hline
  \% Change                 & \multicolumn{2}{c|}{+529.5\%}
                            & \multicolumn{2}{c||}{+109.1\%}
                            & \multicolumn{2}{c|}{+609.0\%}
                            & \multicolumn{2}{c}{+2117.5\%}  \\ 
  \hline
  \end{tabular}
  \end{footnotesize}
  \end{center}
  \label{table:results-collaborative-filtering}
\end{table}

User-based filtering outperforms item-based filtering on three of four data sets; only on \dc\ does item-based filtering work better, and there the difference is also statistically significant ($p < 0.05$). The other differences between user-based and item-based filtering are not significant. There appears to be no clear advantage to applying {\em idf}-weighting to the profile vectors: there are small differences in both directions, but none of the differences between the runs with and without {\em idf}-weighting are significant. In general, it seems that bookmark recommendation is more difficult than article recommendation: even within the same collection, recommending \bib\ bookmarks on \dbb\ achieves MAP scores that are roughly a third of those for recommending \bib\ articles using \dba. While the \dba\ and \dc\ performance numbers are about equal despite the size difference between the two data sets, the difference between \dbb\ and \dd\ is much larger. Finally, we see that the optimal number of neighbors tends to be larger for item-based filtering than for user-based filtering, but the actual number seems to be data set dependent.





\subsection{Discussion}
\label{4:subsec:cf-discussion}

Our first observation is that memory-based CF algorithms easily outperform a recommendation approach based solely on popularity. This was to be expected: recommending items based solely on their popularity in the system, without any regard for personal preferences, is bound to produce poor recommendations. In contrast, CF learns the personal preferences of users, leading to better, more personalized recommendations. The user-based filtering algorithm achieved higher MAP scores on three of our four data sets, although these differences were not statistically significant. What could be the explanation for these differences between user-based and item-based filtering? %Typically, item-based filtering methods work best on data sets that have more users than items \citep{Sarwar:2001, Linden:2003}. 
A possible explanation for this is that the average number of items per user is much higher than the average number of users per item. Since there are more items to potentially match, calculating a meaningful overlap between user profile vectors could be easier than between item profile vectors. This could also explain why item-based filtering works worst on \dd, as it has the lowest average at 1.6 users per item.

The difference in the optimal number of neighbors between user-based and item-based filtering can also be explained this way. The average number of users per item is much lower than the average number of items per user, which makes it more difficult to calculate meaningful overlap between item profile vectors. It is then understandable that the item-based filtering algorithm needs more nearest neighbors to generate correct predictions. However, the user profile vectors in our data sets are slightly more sparse than the item profile vectors, which would put user-based filtering at a disadvantage, as there might be less overlap between the different users. In practice, these two forces appear to balance out, which would explain the lack of significant differences between user-based and item-based filtering.

A second observation was that recommending bookmarks appears to be more difficult than recommending scientific articles: MAP scores on the article data sets are nearly three times as high as the MAP scores on the bookmark data sets. We believe this reflects that bookmark recommendation is a more difficult problem because of its open domain. In our article recommendation task there may be many different research areas and topics, but in general the task and domain are well-defined: academic research papers and articles. In contrast, the \dd\ and \dbb\ data sets cover bookmarked Web pages, which encompass many more topics than scientific articles tend to do. Users can be expected to have more different topics in their profiles, making it more difficult to recommend new, interesting bookmarks based on those profiles. 

To determine whether this explanation is correct, we need to show that user profiles containing Web bookmarks are topically more diverse than profiles containing only scientific articles. In all of our data sets we have topic representations of the content in the form of tags. Tags often represent the intrinsic properties of the items they describe, and we can use these tags to estimate how topically diverse the user profiles are in our four data sets. We can expect users with topically diverse profiles to have a smaller average tag overlap between their items. One metric that can be used to represent topical diversity is the average number of unique tags per user, as reported in Table \ref{table:statistics-datasets-filtered}. These averages are 203.3 and 192.1 for \dbb\ and \dd\ respectively, which is significantly higher than the 79.2 and 57.3 for \dba\ and \dc. 

However, it is possible that \dd\ and \dbb\ users simply use more tags on average to describe their bookmarks than users who describe scientific articles, while still having a low topical diversity. To examine this, we calculate the average Jaccard overlap %\citep{Jaccard:1901} 
between the tags assigned to the item pairs in each user profile separately, and then compute a macro-averaged tag overlap score for each data set. This number approximates the topical diversity of a data set. We find that the average tag overlap for \dbb\ and \dd\ is 0.0058 and 0.0001 respectively, whereas for \dba\ and \dc\ it is 0.0164 and 0.0072 respectively. This is in line with our assumption that bookmark recommendation is a more difficult task because the user profiles are more diverse, and therefore harder to predict for. The difference in performance between \dbb\ and \dd\ might be explained by the fact that \bib, as a scientific research project, attracts a larger proportion of users from academia. In contrast, the scientific community is only a subset of the user base of \del, which could again lead to a greater diversity in topics.
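The diversity estimate described above can be sketched as follows. The data structures and names are illustrative, not taken from our experimental code: each user profile is assumed to map items to their sets of tags.

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard overlap between two tag sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def macro_avg_tag_overlap(profiles):
    """Mean Jaccard overlap over all item pairs within each user profile,
    macro-averaged over users. Lower scores suggest higher topical diversity."""
    per_user = []
    for items in profiles.values():
        pairs = list(combinations(items.values(), 2))
        if pairs:
            per_user.append(sum(jaccard(a, b) for a, b in pairs) / len(pairs))
    return sum(per_user) / len(per_user) if per_user else 0.0
```

Because the score is first averaged per user, users with large profiles do not dominate the data-set-level estimate.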

% What do we see?
% - There is no clear advantage to applying {\em idf}-weighting
% - The improvement on Delicious is relatively low... why is that?
% - The improvement on CiteULike is very high... why? Especially IB does well here
%   * not something we can glean from looking at statistics and predicting
%     things. Maybe another explanation?





\section{Tag-based Collaborative Filtering}
\label{4:sec:using-tags}

If we only applied the standard memory-based CF algorithms to our data sets, we would be neglecting the extra layer of information formed by the tags, which could help us produce more accurate recommendations. As mentioned before, we consider two options for incorporating the tags: (1) extending existing CF algorithms to create TBCF algorithms, or (2) recommending based on the entire tripartite graph (GBCF algorithms). In this section we propose three different TBCF algorithms, corresponding to three different ways of extending the standard $k$-NN algorithm to incorporate tag information and find like-minded users.

The first TBCF algorithm employs user and item similarities based on the overlap in tags assigned to items. We discuss this algorithm in Subsection \ref{4:subsec:tagging-overlap}. Subsection \ref{4:subsec:tagging-intensity} describes the second type of TBCF algorithm, which calculates user and item similarities based on the overlap in tagging intensity, without looking at the actual overlap between tags. The third TBCF algorithm, described in Subsection \ref{4:subsec:similarity-fusion}, combines tagging information with usage information by fusing together (1) the similarities based on tagging overlap, from the first type of TBCF algorithm, and (2) the similarities based on usage information, i.e., from regular CF. We present the results of our experiments with these three TBCF algorithms in Subsection \ref{4:subsec:tbcf-results} and discuss the results in Subsection \ref{4:subsec:tbcf-discussion}.





\subsection{Tag Overlap Similarity}
\label{4:subsec:tagging-overlap}

The folksonomies present in our four data sets carry with them an extra layer of connections between users and items in the form of tags. The tag layer can be used to examine other ways of generating similarities between users or items. For instance, users who assign many of the same tags, and thus have more tag overlap between them, can be seen as rather similar. Items that are often assigned the same tags are also more likely to be similar than items that share no tag overlap at all. We propose calculating the user and item similarities based on overlap in tagging behavior, as opposed to usage information that only describes which items a user has added. We will refer to a TBCF algorithm using tag overlap similarities for CF as TOBCF.

What do we expect from such an approach? In the previous section we signaled that sparsity of user profile and item profile vectors can be a problem for the standard memory-based CF algorithms. As users tend to assign multiple tags to an item when they post it to their personal profile---with averages ranging from 3.1 to 8.4 in our data sets---the tag-based profile vectors are less sparse, which could lead to better predictions. We expect this effect to be strongest for item-based filtering. On average, the number of tags assigned to an item is 2.5 times higher than the number of users who have added the item. This means that, on average, item profile vectors from the {\bf IT} matrix are less sparse than item profile vectors from the {\bf UI} matrix. This difference is not as pronounced for the items-per-user and tags-per-user counts: in some data sets users have more items than tags on average, and more tags than items in others. This leads us to conjecture that using tags for user-based filtering will not be as successful.

Many different similarity metrics exist, but we restrict ourselves to comparing three metrics: Jaccard overlap, Dice's coefficient, and the cosine similarity. The only difference between this approach and the standard CF algorithm is in the first step, where the similarities are calculated. 


\shrink


\paragraph{User-based Tag Overlap CF}

For user-based TOBCF, we calculate tag overlap on the {\bf UT} matrix or on the binarized version $\mathbf{UT}_{\mathit{binary}}$, depending on the metric. These matrices are derived as shown in Figure \ref{figure:aggregated-matrices}. Both the Jaccard overlap and Dice's coefficient are set-based metrics, which means we calculate them on the binary vectors from the $\mathbf{UT}_{\mathit{binary}}$ matrix. The Jaccard overlap $\mathit{sim}_{\mathit{UT-Jaccard}}(d_{k}, d_{a})$ between two users $d_{k}$ and $d_{a}$ is defined as

\begin{equation}
  \mathit{sim}_{\mathit{UT-Jaccard}}(d_{k}, d_{a}) = \frac{|\overrightarrow{d_{k}} \cap \overrightarrow{d_{a}}|}{|\overrightarrow{d_{k}} \cup \overrightarrow{d_{a}}|}.
\end{equation}

Likewise, Dice's coefficient $\mathit{sim}_{\mathit{UT-Dice}}(d_{k}, d_{a})$ is defined as

\begin{equation}
  \mathit{sim}_{\mathit{UT-Dice}}(d_{k}, d_{a}) = \frac{2|\overrightarrow{d_{k}} \cap \overrightarrow{d_{a}}|}{|\overrightarrow{d_{k}}| + |\overrightarrow{d_{a}}|}.
\end{equation}

We refer to the user-based runs with Jaccard overlap and Dice's coefficient as \run{ut-jaccard-sim} and \run{ut-dice-sim} respectively. The cosine similarity is calculated in three different ways. First, we calculate it on the regular tag count vectors from {\bf UT} as $\mathit{sim}_{\mathit{UT-cosine-tf}}(d_{k}, d_{a})$, and on the binary vectors from the $\mathbf{UT}_{\mathit{binary}}$ matrix as $\mathit{sim}_{\mathit{UT-cosine-binary}}(d_{k}, d_{a})$. In addition to this, we also experiment with {\em idf}-weighting of the tags in the user tag count vectors according to Equation \ref{eq:idf-weighting}. We then calculate the cosine similarity $\mathit{sim}_{\mathit{UT-cosine-tfidf}}(d_{k}, d_{a})$ between these weighted user profile vectors. In each case, the cosine similarity is calculated according to

\begin{equation}
  \mathit{sim}_{\mathit{UT-cosine}}(d_{k}, d_{a}) = \frac{\overrightarrow{d_{k}} \cdot \overrightarrow{d_{a}}}{||\overrightarrow{d_{k}}||\;||\overrightarrow{d_{a}}||}.
\end{equation}

We refer to these three runs as \run{ut-tf-sim}, \run{ut-bin-sim}, and \run{ut-tfidf-sim} respectively. 
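The five user-based similarity variants above can be illustrated with the following sketch, in which each user profile is represented as a mapping from tags to counts. The function names and the {\em idf} lookup are ours, introduced for illustration only.

```python
import math

def sim_jaccard(u, v):
    """Jaccard overlap on the tag sets (binary vectors)."""
    a, b = set(u), set(v)
    return len(a & b) / len(a | b) if (a or b) else 0.0

def sim_dice(u, v):
    """Dice's coefficient on the tag sets (binary vectors)."""
    a, b = set(u), set(v)
    return 2 * len(a & b) / (len(a) + len(b)) if (a or b) else 0.0

def sim_cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[t] * v[t] for t in u.keys() & v.keys())
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def binarize(u):
    """Reduce a tag-count vector to a binary presence vector."""
    return {t: 1 for t in u}

def idf_weight(u, idf):
    """Weight tag counts by idf; `idf` maps tags to their idf values."""
    return {t: c * idf.get(t, 0.0) for t, c in u.items()}
```

In these terms, \run{ut-tf-sim} corresponds to \texttt{sim\_cosine(u, v)} on raw counts, \run{ut-bin-sim} to the cosine of the binarized vectors, and \run{ut-tfidf-sim} to the cosine of the {\em idf}-weighted vectors.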


\shrink


\paragraph{Item-based Tag Overlap CF}

For item-based TOBCF, we calculate the item-based versions of the similarity metrics $\mathit{sim}_{\mathit{IT-Jaccard}}(f_{l}, f_{b})$, $\mathit{sim}_{\mathit{IT-Dice}}(f_{l}, f_{b})$, and $\mathit{sim}_{\mathit{IT-cosine-binary}}(f_{l}, f_{b})$ on $\mathbf{IT}_{\mathit{binary}}$. We calculate the tag frequency and weighted tag frequency vector similarities $\mathit{sim}_{\mathit{IT-cosine-tf}}(f_{l}, f_{b})$ and $\mathit{sim}_{\mathit{IT-cosine-tfidf}}(f_{l}, f_{b})$ on {\bf IT}. We refer to these five item-based runs as \run{it-jaccard-sim}, \run{it-dice-sim}, \run{it-bin-sim}, \run{it-tf-sim}, and \run{it-tfidf-sim} respectively.





\subsection{Tagging Intensity Similarity}
\label{4:subsec:tagging-intensity}

Instead of looking at tag overlap as a measure of user or item similarity, we can also look at another aspect of tagging behavior to locate kindred users or items: {\em tagging intensity}. Adding a tag costs the user a small amount of effort, which could signal that users are more invested in those items to which they assign many tags. Tagging intensity could therefore be seen as an approximation of a user actively rating his items. Here, more effort invested in tagging corresponds to a higher rating for that item. If two users both assign many tags to the same items and only a few tags to other items, they can be thought of as more similar in tagging behavior. 

Naturally, users might assign many different tags to an item for various reasons; some items might simply be too complex to describe with just one or two tags. Furthermore, the assumption that more richly tagged items would also be rated more highly by their users is not without its caveats. Indeed, \cite{Clements:2008} investigated this on a data set from LibraryThing\footnote{\url{http://www.librarything.com}}. They compared actual book ratings to the number of tags assigned to the books and found only a weak positive correlation between the two. We cannot repeat this analysis, since we only have unary ratings in our data sets. The idea has its merits nonetheless, since \citeauthor{Clements:2008}\ did not find any significant differences between using actual ratings for recommendation and using the tag counts.

We propose our own user (and item) similarity metrics that compare users (and items) on the intensity of their tagging behavior. We will refer to the algorithms that use these similarity metrics as TIBCF algorithms. The {\bf UI} matrix contains the tag counts associated with each post in a data set, representing how many tags a user assigned to a certain item. For user-based TIBCF we calculate these tag intensity-based similarities on the user profile vectors from {\bf UI} according to Equation \ref{eq:cf-ub-cosine}. For item-based TIBCF we calculate the item similarities on the item profile vectors from {\bf UI} according to Equation \ref{eq:cf-ib-cosine}. The rest of the approach follows the standard $k$-NN algorithm described in Subsection \ref{4:subsec:cf-algorithm}.
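The tagging-intensity representation can be sketched as follows: from a list of posts we build the {\bf UI} tag-count vectors and compare two users with the cosine measure. All names and the example posts are illustrative, not taken from our data sets.

```python
import math
from collections import defaultdict

def build_ui_counts(posts):
    """UI matrix as nested dicts: ui[user][item] = number of tags assigned."""
    ui = defaultdict(dict)
    for user, item, tags in posts:
        ui[user][item] = len(tags)
    return ui

def cosine(u, v):
    """Cosine similarity between two sparse tag-count vectors."""
    dot = sum(u[i] * v[i] for i in u.keys() & v.keys())
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Hypothetical posts: (user, item, assigned tags)
posts = [
    ("alice", "item1", ["python", "web", "tutorial"]),
    ("alice", "item2", ["ml"]),
    ("bob",   "item1", ["python", "code"]),
    ("bob",   "item2", ["ml", "stats"]),
]
ui = build_ui_counts(posts)
intensity_sim = cosine(ui["alice"], ui["bob"])
```

Note that only the {\em number} of tags per post enters the vectors; which tags were assigned plays no role, which is exactly what distinguishes TIBCF from TOBCF.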





\subsection{Similarity Fusion}
\label{4:subsec:similarity-fusion}

The third TBCF algorithm we propose is one that combines two different algorithms: (1) the standard $k$-NN CF algorithm from Section \ref{4:sec:cf}, which uses data about all users' preferences for items to generate recommendations, and (2) the TOBCF algorithm from \mbox{Subsection \ref{4:subsec:tagging-overlap}}, which looks at the overlap in tagging behavior to identify the like-minded users and generate recommendations. As both approaches use different information to generate their recommendations, it is possible that an approach that combines the best of both worlds will outperform the individual approaches. 

There are many different ways of combining different approaches. We propose one possibility here, {\em similarity fusion}, where we linearly combine different sets of user similarities into a single set of similarities, as illustrated in Figure \ref{figure:similarity-fusion}. The same principle holds for item-based filtering: there, we linearly combine the item similarities. We will refer to this approach as SimFuseCF.

  \begin{figure}[h]
    \centering
      \includegraphics[angle=90,scale=0.55]{./figures/diagram-similarity-fusion.pdf}
    \caption[Fusing the usage-based and tag-based similarity matrices]{Fusing the usage-based and tag-based similarity matrices for user-based TBCF. The highlighted cells show how individual user-user similarities are combined. Similarity between a user and himself is always equal to 1.}
    \label{figure:similarity-fusion}
  \end{figure}

In our SimFuseCF algorithm, we take as input the similarity matrices from two different approaches and linearly combine them element by element using a weighting parameter $\lambda$ whose value lies in the range $[0, 1]$. The similarities are first normalized into the $[0, 1]$ range for each set of similarities separately. Fusing the two similarity sources is done according to

\begin{equation}
  \mathit{sim}_{\mathit{fused}} = \lambda \cdot \mathit{sim}_{\mathit{usage}} + (1 - \lambda) \cdot \mathit{sim}_{\mathit{tag-overlap}},
\end{equation}

where $\mathit{sim}_{\mathit{usage}}$ and $\mathit{sim}_{\mathit{tag-overlap}}$ are the usage-based and tag-based similarities for either user pairs or item pairs, depending on the choice for user-based or item-based filtering. By varying $\lambda$, we can assign more weight to one type of similarity or the other. For each data set, we determined the optimal value of $\lambda$ with a parameter sweep over all values between 0 and 1 in increments of 0.1, using our 10-fold cross-validation setup; the optimal number of neighbors $N$ was optimized in the same way. Setting $\lambda$ to 0 or 1 corresponds to using only the similarities from the TOBCF or the CF algorithm respectively. Linearly combining the different similarity types is not the only possible combination approach; more sophisticated feature combination and fusion approaches have been proposed for collaborative filtering in the past \citep{Wang:2006}, but we leave a more principled combination of usage and tag information for future work. 
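The fusion step can be sketched as follows. For illustration, the similarity matrices are plain dictionaries keyed by user (or item) pairs, and min-max normalization stands in for the per-set normalization described above; the names are ours.

```python
def minmax_normalize(sims):
    """Rescale one set of similarities into the [0, 1] range."""
    lo, hi = min(sims.values()), max(sims.values())
    span = hi - lo
    return {k: (v - lo) / span if span else 0.0 for k, v in sims.items()}

def fuse(sim_usage, sim_tag_overlap, lam):
    """sim_fused = lam * sim_usage + (1 - lam) * sim_tag_overlap,
    after normalizing each similarity set separately."""
    su = minmax_normalize(sim_usage)
    st = minmax_normalize(sim_tag_overlap)
    return {k: lam * su.get(k, 0.0) + (1 - lam) * st.get(k, 0.0)
            for k in su.keys() | st.keys()}
```

A $\lambda$ sweep then amounts to calling \texttt{fuse} for each value in $\{0.0, 0.1, \ldots, 1.0\}$ and keeping the value with the best MAP score; $\lambda = 1$ recovers the usage-based similarities and $\lambda = 0$ the tag-overlap similarities.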

We also considered two alternative fusion schemes. In the first, we combined the usage-based similarities from standard CF with the similarities generated for the TIBCF algorithm in Subsection \ref{4:subsec:tagging-intensity}. The second fusion scheme combines the similarities from the TOBCF algorithm with those of the TIBCF algorithm, in effect combining the similarities based on tag overlap and tagging intensity. Preliminary experiments suggested, however, that these two fusion schemes do not produce competitive results. We therefore restricted ourselves to fusing the usage-based similarities from CF with the tag overlap similarities from TOBCF.





\subsection{Results}
\label{4:subsec:tbcf-results}

Below we present the results for our three algorithms based on tag overlap (TOBCF), tagging intensity (TIBCF), and similarity fusion (SimFuseCF).


\shrink


\paragraph{Tag Overlap Similarity}

Table \ref{table:results-tag-overlap-filtering} shows the results of the user-based and item-based variants of the TOBCF algorithm. We take the best-performing CF runs from Section \ref{4:sec:cf} as the baseline runs and compare them to the TOBCF runs. What we see in Table \ref{table:results-tag-overlap-filtering} is that item similarities based on tag overlap work well for item-based filtering: three of our four data sets show considerable improvements over the best CF baseline runs, with performance increases ranging from 27\% on \dba\ to almost 120\% on \dd, although these are only statistically significant on the \dd\ data set. A first observation is that user-based filtering shows the opposite trend: there, tag overlap results in significantly worse scores for almost all variants on all data sets, with performance decreases ranging from 40\% to 63\%. As a consequence, with tag overlap similarities item-based filtering outperforms user-based filtering on all four data sets. A second observation is that on the bookmark collections tag overlap performs even worse for user-based filtering than on the data sets containing scientific articles, while the reverse seems to be true for item-based filtering. The domain of the social bookmarking website thus appears to influence the effectiveness of the TOBCF algorithm.

\begin{table}[htp]
  \caption[Results of the $k$-NN algorithm with tag overlap similarities]{Results of the TOBCF algorithm. Reported are the MAP scores as well as the optimal number of neighbors $N$. Best-performing tag overlap runs for both user-based and item-based are printed in bold. The percentage difference between the best baseline CF runs and the best tag overlap runs are indicated after each filtering type.}
  \begin{center}
  \begin{footnotesize}
  \begin{tabular}{l||c|c|c|c||c|c|c|c}
  \hline
  \multirow{3}{*}{{\bf Runs}} &  \multicolumn{4}{c||}{{\bf bookmarks}} & \multicolumn{4}{c}{{\bf articles}} \\
  \cline{2-9}
  ~  & \multicolumn{2}{c|}{{\bf BibSonomy}} & \multicolumn{2}{c||}{{\bf Delicious}} & \multicolumn{2}{c|}{{\bf BibSonomy}} & \multicolumn{2}{c}{{\bf CiteULike}}  \\
  \cline{2-9}
  ~                         & MAP & $N$ & MAP & $N$ & MAP & $N$ & MAP & $N$ \\
  \hline
  \hline
  \multirow{2}{*}{Best UB CF run} & 0.0277\nosign & 13 & 0.0046\nosign & 15
                                  & 0.0865\nosign & 4  & 0.0757\nosign & 15 \\
  \cdashline{2-9}[1pt/1pt]
  ~                         & \multicolumn{2}{c|}{(\run{u-bin-idf-sim})}
                            & \multicolumn{2}{c||}{(\run{u-bin-sim})}
                            & \multicolumn{2}{c|}{(\run{u-bin-sim})}
                            & \multicolumn{2}{c}{(\run{u-bin-idf-sim})} \\
  \hline
  \run{ut-jaccard-sim}      & 0.0070\nosign & 8 & 0.0015\nosign & 11  
                            & {\bf 0.0459}\nosign & 6 & {\bf 0.0449}\downblack & 5 \\ 
  \run{ut-dice-sim}         & 0.0069\downtriangle & 6   & 0.0007\downblack & 6  
                            & 0.0333\downtriangle & 4   & 0.0439\downblack & 2 \\ 
  \run{ut-bin-sim}         & {\bf 0.0102}\nosign & 5 & {\bf 0.0017}\nosign & 11  
                           & 0.0332\downtriangle & 4 & 0.0452\downblack & 3 \\ 
  \run{ut-tf-sim}           & 0.0069\downtriangle & 2   & 0.0015\downtriangle & 25  
                            & 0.0368\nosign & 4   & 0.0428\downblack & 8 \\ 
  \run{ut-tfidf-sim}        & 0.0018\downtriangle & 6   & 0.0013\downtriangle & 17  
                            & 0.0169\downblack & 2   & 0.0400\downblack & 2 \\ 
  \hline
  \% Change over best UB CF run  & \multicolumn{2}{c|}{-63.2\%}
                            & \multicolumn{2}{c||}{-63.0\%}
                            & \multicolumn{2}{c|}{-46.9\%}
                            & \multicolumn{2}{c}{-40.7\%}  \\ 
  \hline
  \hline
  \multirow{2}{*}{Best IB CF run} & 0.0244\nosign  & 34   & 0.0027\nosign  & 25  
                            & 0.0737\nosign  & 49   & 0.0887\nosign  & 30 \\
  \cdashline{2-9}[1pt/1pt]
  ~                         & \multicolumn{2}{c|}{(\run{i-bin-idf-sim})}
                            & \multicolumn{2}{c||}{(\run{i-bin-sim})}
                            & \multicolumn{2}{c|}{(\run{i-bin-idf-sim})}
                            & \multicolumn{2}{c}{(\run{i-bin-idf-sim})} \\
  \hline
  \run{it-jaccard-sim}      & {\bf 0.0370}\nosign & 3 & 0.0083\uptriangle & 21  
                            & 0.0909\nosign & 6 & 0.0810\nosign & 14 \\ 
  \run{it-dice-sim}         & 0.0317\nosign & 2 & 0.0089\uptriangle & 25  
                            & 0.0963\nosign & 8 & {\bf 0.0814}\nosign & 8 \\ 
  \run{it-bin-sim}          & 0.0334\nosign & 2  & {\bf 0.0101}\uptriangle & 23  
                            & 0.0868\nosign & 5  & 0.0779\nosign & 10 \\ 
  \run{it-tf-sim}           & 0.0324\nosign & 4  & 0.0100\uptriangle & 11  
                            & 0.0823\nosign & 4  & 0.0607\downblack & 17 \\ 
  \run{it-tfidf-sim}        & 0.0287\nosign & 8  & 0.0058\nosign & 7  
                            & {\bf 0.1100}\nosign & 7   & 0.0789\nosign & 21 \\ 
  \hline
  \% Change over best IB CF run & \multicolumn{2}{c|}{+51.6\%}
                            & \multicolumn{2}{c||}{+274.1\%}
                            & \multicolumn{2}{c|}{+49.3\%}
                            & \multicolumn{2}{c}{-8.2\%}  \\ 
  \hline
  \hline
  \% Change over best CF run & \multicolumn{2}{c|}{+33.6\%}
                            & \multicolumn{2}{c||}{+119.6\%}
                            & \multicolumn{2}{c|}{+27.2\%}
                            & \multicolumn{2}{c}{-8.2\%}  \\ 
  \hline
  \end{tabular}
  \end{footnotesize}
  \end{center}
  \label{table:results-tag-overlap-filtering}
\end{table}

The results of the different tag overlap metrics tend to be close together, and the differences between them are not statistically significant. Although the best-performing metric depends on the data set, the metrics operating on the binary vectors from the $\mathbf{UT}_{\mathit{binary}}$ and $\mathbf{IT}_{\mathit{binary}}$ matrices are consistently among the top performers. It is interesting to note that although the runs with {\em idf}-weighting tend to perform worst of all five metrics, \run{it-tfidf-sim} produces the best results on the \dba\ collection. 

A final observation concerns the number of nearest neighbors $N$ used in generating the recommendations. In general, the optimal number of neighbors is slightly lower than when using similarities based on usage data. For the item-based runs on \dbb\ and \dba\ the number of neighbors is even lower than for the user-based runs, which is unexpected given the neighborhood sizes of regular CF in Table \ref{table:results-collaborative-filtering}.
   

\shrink


\paragraph{Tagging Intensity Similarity}

Table \ref{table:results-tagging-intensity} shows the results of the user-based and item-based variants of the $k$-NN algorithm that use similarities based on tagging intensity. Tagging intensity does not appear to be representative of user or item similarity: 14 out of 16 runs perform worse than the best CF runs on those data sets, although most of these decreases are not statistically significant. It is interesting to note that tagging intensity similarities do produce small improvements in MAP scores for the user-based runs on \dd. If we disregard these statistically insignificant improvements, using tagging intensity as a source of user or item similarity decreases performance by around 20\% on average. In fact, the performance of item-based filtering on \dd\ is even worse than that of the popularity-based algorithm.

\begin{table}[htp]
  \caption[Results of the $k$-NN algorithm with tagging intensity similarities]{Results of the TIBCF algorithm. Reported are the MAP scores as well as the optimal number of neighbors $N$. Best-performing tagging intensity runs for each data set are printed in bold. The percentage difference between the best baseline CF run and the best CF run with tagging intensity similarities is indicated in the bottom row of the table.}
  \begin{center}
  \begin{footnotesize}
  \begin{tabular}{l||c|c|c|c||c|c|c|c}
  \hline
  \multirow{3}{*}{{\bf Runs}} &  \multicolumn{4}{c||}{{\bf bookmarks}} & \multicolumn{4}{c}{{\bf articles}} \\
  \cline{2-9}
  ~ & \multicolumn{2}{c|}{{\bf BibSonomy}} & \multicolumn{2}{c||}{{\bf Delicious}} & \multicolumn{2}{c|}{{\bf BibSonomy}} & \multicolumn{2}{c}{{\bf CiteULike}}  \\
  \cline{2-9}
  ~                         & MAP & $N$ & MAP & $N$ & MAP & $N$ & MAP & $N$ \\
  \hline
  \hline
  \multirow{2}{*}{Best CF run} & 0.0277\nosign  & 13   & 0.0046\nosign & 15  
                            & 0.0865\nosign  & 4    & 0.0887\nosign  & 15 \\
  \cdashline{2-9}[1pt/1pt]
  ~                         & \multicolumn{2}{c|}{(\run{u-bin-idf-sim})}
                            & \multicolumn{2}{c||}{(\run{u-bin-sim})}
                            & \multicolumn{2}{c|}{(\run{u-bin-sim})}
                            & \multicolumn{2}{c}{(\run{i-bin-idf-sim})} \\
  \hline
  \run{u-tf-sim}            & 0.0229\nosign & 13   & {\bf 0.0061}\nosign & 60  
                            & {\bf 0.0711}\nosign & 11   & 0.0700\downblack & 14 \\ 
  \run{u-tfidf-sim}         & {\bf 0.0244}\nosign & 8   & 0.0052\nosign & 45  
                            & 0.0705\nosign & 9   & 0.0709\downblack & 37 \\ 
  \run{i-tf-sim}            & 0.0179\nosign & 21   & 0.0004\downblack & 10  
                            & 0.0624\nosign & 17   & 0.0774\downblack & 34 \\ 
  \run{i-tfidf-sim}         & 0.0140\nosign & 21   & 0.0013\downtriangle & 10  
                            & 0.0654\nosign & 17   & {\bf 0.0800}\downblack & 34 \\ 
  \hline
  \% Change                 & \multicolumn{2}{c|}{-11.9\%}
                            & \multicolumn{2}{c||}{+32.6\%}
                            & \multicolumn{2}{c|}{-17.8\%}
                            & \multicolumn{2}{c}{-9.8\%}  \\ 
  \hline
  \end{tabular}
  \end{footnotesize}
  \end{center}
  \label{table:results-tagging-intensity}
\end{table}


\shrink


\paragraph{Similarity Fusion}

Table \ref{table:results-similarity-fusion-cf} shows the results of the experiments with the user-based and item-based filtering variants of the SimFuseCF algorithm. Fusing the different similarities together does not unequivocally produce better recommendations. For user-based filtering, we see modest improvements of up to 26\% on the two bookmark data sets \dbb\ and \dd; on the \dba\ and \dc\ data sets, however, similarity fusion does not help. For item-based filtering, we see small improvements in MAP scores on \dbb\ and \dd\ after fusing the item-based similarities from the two best component runs, while the other two item-based SimFuseCF runs perform worse than their best component runs. None of the performance increases, and almost none of the decreases, are statistically significant. It is interesting to note that in two cases the SimFuseCF algorithm actually performs worse than both of its component runs; apparently, the combined similarities partly cancel each other out there.

Finally, when we look at the optimal $\lambda$ values we see that for user-based filtering the $\lambda$ values are closer to 1, while for item-based filtering the $\lambda$ values are closer to 0. This is to be expected as it corresponds to assigning more weight to the similarities of the best performing component run of each fusion pair.

\begin{table}[htp]
  \caption[Results of the $k$-NN algorithm with similarity fusion]{Results of the SimFuseCF algorithm. Reported are the MAP scores as well as the optimal number of neighbors $N$ and the optimal value of $\lambda$. We report each of the best performing component runs and print the best-performing SimFuseCF  runs in bold for both user-based and item-based filtering. The percentage difference between the best component run and the best fused run is indicated in the bottom rows of the two tables.}
  \begin{center}
  \begin{scriptsize}
  \begin{tabular}{l||c|c|c|c|c|c||c|c|c|c|c|c}
  \hline
  \multirow{3}{*}{{\bf Runs}} &  \multicolumn{6}{c||}{{\bf bookmarks}} & \multicolumn{6}{c}{{\bf articles}} \\
  \cline{2-13}
  ~ & \multicolumn{3}{c|}{{\bf BibSonomy}} & \multicolumn{3}{c||}{{\bf Delicious}} & \multicolumn{3}{c|}{{\bf BibSonomy}} & \multicolumn{3}{c}{{\bf CiteULike}}  \\
  \cline{2-13}
  ~                         & MAP & $N$ & $\lambda$ 
                            & MAP & $N$ & $\lambda$ 
                            & MAP & $N$ & $\lambda$ 
                            & MAP & $N$ & $\lambda$ \\
  \hline
  \hline
  \multirow{2}{*}{Best UB CF run} & 0.0277\nosign & 13 & - & 0.0046\nosign & 15 & - 
                            & {\bf 0.0865}\nosign & 4  & - & {\bf 0.0757}\nosign & 15 & -\\
  \cdashline{2-13}[1pt/1pt]
  ~                         & \multicolumn{3}{c|}{(\run{u-bin-idf-sim})} 
                            & \multicolumn{3}{c||}{(\run{u-bin-sim})}
                            & \multicolumn{3}{c|}{(\run{u-bin-sim})}
                            & \multicolumn{3}{c}{(\run{u-bin-idf-sim})} \\
  \hline
  \multirow{2}{*}{Best UB tag run}           & 0.0102\nosign & 5  & - & 0.0017\nosign & 11 & - 
                            & 0.0459\nosign & 6  & - & 0.0452\nosign & 3 & - \\
  \cdashline{2-13}[1pt/1pt]
  ~                         & \multicolumn{3}{c|}{(\run{ut-bin-sim})} 
                            & \multicolumn{3}{c||}{(\run{ut-bin-sim})}
                            & \multicolumn{3}{c|}{(\run{ut-jaccard-sim})}
                            & \multicolumn{3}{c}{(\run{ut-bin-sim})} \\
  \hline
  User-based fusion         & {\bf 0.0350}\nosign & 8 & 0.8   & {\bf 0.0056}\nosign & 45 & 0.7  
                            & 0.0319\downtriangle & 8 & 0.8   & 0.0554\downblack & 25 & 0.7 \\ 
  \hline
  \% Change                 & \multicolumn{3}{c|}{+26.4\%}
                            & \multicolumn{3}{c||}{+21.7\%}
                            & \multicolumn{3}{c|}{-63.1\%}
                            & \multicolumn{3}{c}{-26.8\%}  \\ 
  \hline

  \multicolumn{13}{c}{~}  \\
  \multicolumn{13}{c}{~}  \\
  
  \hline
  \multirow{3}{*}{{\bf Runs}} &  \multicolumn{6}{c||}{{\bf bookmarks}} & \multicolumn{6}{c}{{\bf articles}} \\
  \cline{2-13}
  ~ & \multicolumn{3}{c|}{{\bf BibSonomy}} & \multicolumn{3}{c||}{{\bf Delicious}} & \multicolumn{3}{c|}{{\bf BibSonomy}} & \multicolumn{3}{c}{{\bf CiteULike}}  \\
  \cline{2-13}
  ~                         & MAP & $N$ & $\lambda$ 
                            & MAP & $N$ & $\lambda$ 
                            & MAP & $N$ & $\lambda$ 
                            & MAP & $N$ & $\lambda$ \\
  \hline
  \hline
  \multirow{2}{*}{Best IB CF run}   & 0.0244\nosign & 34 & - & 0.0027\nosign & 25 & - 
                            & 0.0737\nosign & 49 & - & {\bf 0.0887}\nosign & 30 & -\\ 
  \cdashline{2-13}[1pt/1pt]
  ~                         & \multicolumn{3}{c|}{(\run{i-bin-idf-sim})} 
                            & \multicolumn{3}{c||}{(\run{i-bin-sim})}
                            & \multicolumn{3}{c|}{(\run{i-bin-idf-sim})}
                            & \multicolumn{3}{c}{(\run{i-bin-idf-sim})} \\
  \hline
  \multirow{2}{*}{Best IB tag run} & {\bf 0.0370}\nosign & 21 & - & 0.0101\nosign & 23 & - 
                            & 0.1100\nosign & 5  & - & 0.0814\nosign & 34 & -\\ 
  \cdashline{2-13}[1pt/1pt]
  ~                         & \multicolumn{3}{c|}{(\run{it-jaccard-sim})} 
                            & \multicolumn{3}{c||}{(\run{it-bin-sim})}
                            & \multicolumn{3}{c|}{(\run{it-tfidf-sim})}
                            & \multicolumn{3}{c}{(\run{it-dice-sim})} \\
  \hline
  Item-based fusion         & 0.0348\nosign & 15 & 0.3   & {\bf 0.0102}\nosign & 25 & 0.3  
                            & {\bf 0.1210}\nosign & 14 & 0.1   & 0.0791\nosign & 28 & 0.4 \\ 
  \hline
  \% Change                 & \multicolumn{3}{c|}{-5.9\%}
                            & \multicolumn{3}{c||}{+1.0\%}
                            & \multicolumn{3}{c|}{+10.0\%}
                            & \multicolumn{3}{c}{-10.8\%}  \\ 
  \hline
  \end{tabular}
  \end{scriptsize}
  \end{center}
  \label{table:results-similarity-fusion-cf}
\end{table}





\subsection{Discussion}
\label{4:subsec:tbcf-discussion}

Below we discuss the experimental results of our three algorithms based on tag overlap (TOBCF), tagging intensity (TIBCF), and similarity fusion (SimFuseCF).


\shrink


\paragraph{Tag Overlap Similarity}

Earlier we saw that the benefits of using tag overlap are dependent on the type of TOBCF algorithm. While user-based TOBCF does not seem to get a boost from user-user similarities based on tag overlap, item-based TOBCF improves markedly by using tag overlap between items as its similarity metric. 

Why do we see this performance dichotomy? Earlier, in Subsection \ref{4:subsec:tagging-overlap}, we put forward that the reduction in sparsity from using tag overlap could produce better recommendations for item-based filtering. On average, the number of tags assigned to an item is 2.5 times higher than the number of users who have added the item. This means that, on average, item profile vectors from the {\bf IT} matrix are less sparse than item profile vectors from the {\bf UI} matrix. Using more values in the similarity calculation leads to a better estimate of the real similarity between two items. We believe this is why using tag overlap works better for item-based filtering. For user-based TOBCF this difference is not as pronounced: in some data sets users have more items than tags on average, and in other data sets more tags than items. This explains why we do not see the same performance increase for the user-based filtering runs based on tag overlap.
% However, this does not explain why performance for user-based drops then, just why the advantage switches to item-based. 

We also noted earlier that the performance on the bookmarks data sets shows the strongest reaction to using tag overlap: item-based TOBCF performs best on the bookmark data sets, while user-based TOBCF performs worst on them. The explanation for the lower performance of user-based TOBCF on bookmarks is that, as we showed earlier, the profiles of bookmark users are topically more diverse and therefore harder to match against other profiles. As the tags assigned by the users to their bookmarks are about as numerous as their items, using tag overlap does not reduce the sparsity and therefore provides no help in matching the users up correctly. For item-based TOBCF, sparsity in the bookmarks case is reduced by a factor of 3 on average, while in the articles case it is only reduced by a factor of 1.5. This means that sparsity is reduced more for bookmarks, and subsequently that items can be matched better on tag overlap in the bookmarks case, thereby improving performance more than in the articles case. In addition, performance on the bookmarks data sets was already lower, making it easier to achieve a larger percentage-wise improvement. We do not have a specific explanation for the particularly large increase in performance of item-based TOBCF on the \dd\ data set. It should be noted, however, that even the best scores on \dd\ are still considerably lower than those on the other data sets; the size of the \del\ data set, with its order of magnitude more items, makes recommendation difficult no matter which metrics or algorithms are used.
% i.e., er is een grotere winst te halen hier...hoe verwoord ik dit beter?

% SHOULD I MENTION SOME OF THESE?
%* meer tags gemiddeld per item toegekend --> performance beter met tag overlap? ==> nee, geen correlatie, heel zwak negatief
% * meer tags in profiel --> performance slechter? ==> weak negative correlation
% * meer items in profiel ---> performance slechter? ==> only negligible to weak correlations between profile size and MAP score for UB or IB, usually a little bit negative (so profile size gets higher, then MAP score gets lower).

% - Best metrics are close together and differences are not statistically significant, but dependent on the data set which is best.
% - Interessant dat aantal neighbors vaak lager ligt, vooral voor de bibsonomy data sets. Daar is N soms zelfs lager dan voor UB. 
% - Wat gebeurt er precies met idf-weighting en waarom?


\shrink


\paragraph{Tagging Intensity Similarity}

We found that tagging intensity, i.e., the number of tags users assign to items, is not a good source of user and item similarity. Most of our TIBCF runs that used tagging intensity similarities performed considerably worse than the baseline $k$-NN algorithm that uses transaction data to locate the nearest neighbors. How can we explain this? The simplest and most likely explanation is that the number of tags simply is not a good predictor of interest in an item, and that the topical overlap between items cannot be captured by the number of tags assigned to them.

One way of investigating this is by looking at the topical overlap between users and items as we did in Subsection \ref{4:subsec:cf-discussion} using the tags. We assume that, more often than not, a user will be interested in new items that share some overlap in topics with the items already owned by that user. If tagging intensity is indeed a good indicator of topical overlap, then the similarity between two users in tagging intensity should be a good predictor of the topical similarity between those users, and vice versa. We therefore correlate (1) the cosine similarity between two users based on the tags they assigned with (2) the cosine similarity between two users based on how intensively they tagged their items\footnote{Effectively, this means we are correlating the user similarities of the \run{ut-tf-sim} run with the user similarities of the \run{u-tf-sim} run.}. What we find is that there is a negligible correlation between these similarities: the correlations range from -0.017 to 0.076. 
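The correlation test described above can be sketched as follows. The similarity values are hypothetical stand-ins for the paired user-user similarities of the \run{ut-tf-sim} and \run{u-tf-sim} runs; function and variable names are ours.

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation between two paired lists of similarity values."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

# Hypothetical similarities for the same five user pairs, computed once
# from tag overlap and once from tagging intensity.
tag_overlap_sims       = [0.8, 0.1, 0.5, 0.3, 0.9]
tagging_intensity_sims = [0.2, 0.7, 0.4, 0.6, 0.1]
print(pearson(tag_overlap_sims, tagging_intensity_sims))
```

A correlation near zero, as we observed, indicates that tagging intensity carries little information about topical similarity.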

We posited earlier that the latent complexity of the items might govern how many tags are needed to describe them---more complex items might simply require more tags. We therefore also correlated the item similarities based on tag overlap with the item similarities based on how intensely they were tagged\footnote{Effectively correlating the item similarities of \run{it-tf-sim} runs with item similarities of \run{i-tf-sim} runs.}. Here again, we found similar, weak correlations between the two similarity sets. 
%Here again, we found weak, negative correlations, ranging from -0.184 to -0.433. This means our reasoning is actually slightly reversed: more topical overlap between items means they are not tagged with the same intensity. 
These findings lead us to conclude that looking at tagging intensity is not a good way of locating nearest neighbors in social bookmarking systems. Finally, it is interesting to note that tagging intensity similarities did produce a small improvement in MAP scores on the user-based runs on \dd. We do not have a clear explanation for this.


\shrink


\paragraph{Similarity Fusion}

The results of our SimFuseCF algorithm were not clear-cut: some runs on some data sets saw an increase in performance, while others saw performance drops, sometimes even below the scores of the original component runs. User-based SimFuseCF worked best on the two bookmark data sets \dbb\ and \dd. Here, the whole is greater than the sum of its parts, as usage-based information and tag overlap are best combined to match users. There appears to be no clear pattern in when similarity fusion yields improvements and when it does not. From the lack of statistically significant results we may conclude that similarity fusion is neither an effective nor an efficient method of recommendation: scores are not improved significantly, which does not warrant the double effort of calculating two sets of similarities and merging them---computationally the most expensive part of the $k$-NN algorithm. We believe this is because the distributions of the two sets of similarities are too different, even after normalization. If the optimal number of neighbors is very different for the two algorithms using the two sets of similarities, fusing the similarities themselves does not result in the best of both worlds, but rather in a sub-optimal compromise between the optimal neighborhood sizes.
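The fusion step under discussion can be sketched as follows: two similarity matrices are normalized and linearly combined with the interpolation parameter $\lambda$. The min-max normalization is an assumption made for illustration, and the function and variable names are ours.

```python
import numpy as np

def fuse_similarities(sim_usage, sim_tags, lam):
    """Linear fusion of two user-user (or item-item) similarity matrices.
    lam close to 1 weights the usage-based similarities more heavily."""
    def minmax(s):
        lo, hi = s.min(), s.max()
        return (s - lo) / (hi - lo) if hi > lo else s
    return lam * minmax(sim_usage) + (1 - lam) * minmax(sim_tags)

sim_usage = np.array([[1.0, 0.2], [0.2, 1.0]])
sim_tags  = np.array([[1.0, 0.6], [0.6, 1.0]])
fused = fuse_similarities(sim_usage, sim_tags, 0.8)
```

Even after such normalization the two distributions can remain very different in shape, which is our suspected cause of the mixed SimFuseCF results.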

% beste CF scores correleren met beste tag overlap scores per user --> sterke correlatie?

% incredible improvement for some users in bib-boo:  
%   21 0.0757 0.0192 0.2029
%   870 0.0019 0.0495 0.1038
% Maybe examine this user as an example? How exactly?

%\story{Why are there less tags for the articles than for the bookmarks? Reiterate the reason from chapter 3: people need to organize their bookmarks to specify different context which are already evident in the social reference managers.} ==> STAAT DIT AL ERGENS?





\section{Related work}
\label{4:sec:related-work}

Social bookmarking and social tagging in general are relatively new phenomena and as a result there is not a large body of related work. We saw earlier in Chapter \ref{chapter2} that the majority of the literature so far has focused on the information seeking and organization aspects of tags and on the benefits of tags for information retrieval and Web search. In terms of recommendation, most of the research efforts have focused on tag recommendation. With the exception of the 2008 Discovery Challenge, there is also a striking absence of large-scale experimentation and evaluation initiatives such as TREC and CLEF in the field of IR. This lack of standardized data sets, combined with the novelty of social bookmarking, makes for a relatively small space of related work on item recommendation for social bookmarking. We discuss three different types of related work: graph-based, memory-based, and model-based recommendation for social bookmarking.


%\shrink


\paragraph{Graph-based Recommendation}

% GRAPH-BASED APPROACHES
One of the first approaches to recommendation for social bookmarking websites was that of \cite{Hotho:2006}, who proposed a graph-based algorithm called {\em FolkRank}. They start by constructing an undirected tripartite graph of all users, items, and tags, and perform the same aggregations over these three dimensions as we described in Section \ref{4:sec:preliminaries} to arrive at the 2D projections {\bf UI}, {\bf UT}, and {\bf IT}. They combine these into a single square matrix with as many rows as nodes in the original tripartite graph. Like PageRank \citep{Page:1998}, the FolkRank algorithm is based on a random walk model that calculates the fully converged state transition probabilities by taking a walk of infinite length. The probabilities then represent the rank-ordering of the users, items, and tags on popularity. FolkRank also allows for the incorporation of a preference vector, similar to the teleporting component of PageRank. In this preference vector, specific users, items, or tags can be assigned a higher weight to generate user- or topic-specific rankings. They empirically evaluate their algorithm on a self-crawled \del\ data set, making it difficult to compare with other approaches. \cite{Clements:2008} %and \cite{Clements:2009} 
also proposed a random walk model for item recommendation. They test their approach on a data set based on LibraryThing, which includes both tagging and rating information. They construct a matrix similar to that of \citeauthor{Hotho:2006}, but include the ratings matrix {\bf R} from their data set instead of the {\bf UI} matrix. They also incorporate self-transition probabilities in the matrix and use the walk length as an algorithm parameter. We describe this approach in more detail in the next section, where we compare it directly with our own approaches.


\shrink


\paragraph{Memory-based Recommendation}

% MEMORY-BASED APPROACHES
Adaptations of memory-based algorithms that include information about the tags assigned by users to items have also been proposed. \cite{Szomszor:2007} proposed ranking items by the tag overlap with the active user's tag cloud and compare it to popularity-based recommendation. They take on the task of movie recommendation based on the Netflix data set\footnote{\url{http://www.netflixprize.com/}} and harvest the tags belonging to each movie from the IMDB\footnote{\url{http://www.imdb.com/}}. Their approach corresponds to calculating the tag overlap on the regular {\bf UT} matrix using $\mathit{sim}_{\mathit{UT-cosine}}$ and \tfidf\ weighting. They found that tag overlap outperformed popularity-based recommendation. \cite{Yin:2007} also calculate direct user-item similarities in their approach to recommending scientific literature. % using the same, independently proposed approach. 
\cite{Nakamoto:2007} augmented a user-based $k$-NN algorithm with tag overlap. They calculate the similarities between users using cosine similarity between the user tag profiles (i.e., $\mathit{sim}_{\mathit{UT-cosine}}$ on the regular {\bf UT} matrix with tag frequency weighting). They evaluated their approach in \cite{Nakamoto:2008}, where their tag-augmented approach outperformed popularity-based recommendation. \cite{Tso-Sutter:2008} also propose a tag-aware $k$-NN algorithm for item recommendation. In calculating the user and item similarities they include the tags as additional items and users respectively. They then calculate cosine similarity on these extended profile vectors and fuse together the predictions of the user-based and item-based filtering runs. We describe this approach in more detail in the next section, where we compare it directly with our own approaches. \cite{Zanardi:2008} propose an approach called Social Ranking for tag-based search in social bookmarking websites. Inspired by CF techniques, they first identify users with interests similar to those of the active user in order to find content relevant to the query tags. Content tagged by those users is scored higher, commensurate with the similarity between users based on cosine similarity of the tag profiles (i.e., $\mathit{sim}_{\mathit{UT-cosine}}$ on the regular {\bf UT} matrix with tag frequency weighting). In addition, they expand the original query tags with related tags to improve recall. Tag similarity is calculated on the {\bf TI} matrix using cosine similarity and item frequency weighting. Their algorithm showed promising performance on a \cul\ data set compared to popularity-based rankings of content. %While Social Ranking was designed for tag search, the method could also be expanded to include all of the tags in a user's profile, thereby performing item recommendation. 
Finally, \cite{Amer-Yahia:2008} explore the use of item overlap and tag overlap to serve live recommendations to users on the \del\ website. They focus especially on using the nearest neighbors for explaining the recommendations: why was a certain item or user recommended?


\shrink


\paragraph{Model-based Recommendation}

% MODEL-BASED APPROACHES (I.E. DIMENSIONALITY REDUCTION)
\cite{Symeonidis:2008} were among the first to propose a model-based approach that incorporates tagging information. They proposed using tensor decomposition on the third-order folksonomy tensor. By performing higher-order SVD, they approximate weights for each user-item-tag triple in the data set, which can then be used to support any of the recommendation tasks. They evaluated both item and tag recommendation on a Last.FM data set \citep{Symeonidis:2008, Symeonidis:2008b}. Comparing it to the FolkRank algorithm \citep{Hotho:2006}, they found that dimensionality reduction based on tensor decomposition outperforms the former approach. \cite{Wetzker:2009} take a Probabilistic Latent Semantic Analysis (PLSA) approach, which assumes a latent lower dimensional topic model. They extend PLSA by estimating the topic model from both user-item occurrences as well as item-tag occurrences, and then linearly combine the output of the two models. They test their approach on a large crawl of \del, and find it significantly outperforms a popularity-based algorithm. They also show that model fusion yields superior recommendation independent of the number of latent factors.





\section{Comparison to Related Work}
\label{4:sec:comparison-related-work}

While several different classes of recommendation algorithms are reported to have been modified successfully to include tags, it remains difficult to obtain an overview of best practices. Nearly every approach uses a different data set, crawled from a different social bookmarking website in a different timeframe. Looking closer, we also find large variation in the way these data sets are filtered for noise in terms of user, item, and tag thresholds, and the majority of approaches are filtered more strongly than we proposed in Subsection \ref{3:subsec:filtering}. There is also a lack of a common evaluation methodology, as many researchers construct and motivate their own evaluation metric. Finally, with the exception of \cite{Symeonidis:2008}, who compared their approach with FolkRank, there have been no other comparisons of different recommendation algorithms on the same data sets using the same metric, making it difficult to draw any definite conclusions about the algorithms proposed.

In this thesis, we attempt to alleviate some of these possible criticisms. With regard to the data sets, we have enabled the verification of our results by selecting publicly available data sets. In addition, we follow the recommendations of \cite{Herlocker:2004} in selecting the proper evaluation metric. In this section, we will compare two of the state-of-the-art graph-based CF (GBCF) approaches with our own TBCF approaches on our data sets with the same metrics to properly compare the different algorithms.

In Subsections \ref{4:subsec:tag-aware-fusion} and \ref{4:subsec:random-walk} we describe two GBCF algorithms in more detail. We present the experimental results of the GBCF algorithms in Subsection \ref{4:subsec:comparison-results}, and contrast them with the results of our best-performing TBCF algorithms. We discuss this comparison in Subsection \ref{4:subsec:gbcf-discussion}.





\subsection{Tag-aware Fusion of Collaborative Filtering Algorithms}
\label{4:subsec:tag-aware-fusion}

The first GBCF approach to which we want to compare our work is that of \cite{Tso-Sutter:2008}. In their paper, they propose a tag-aware version of the standard $k$-NN algorithm for item recommendation on social bookmarking websites. We elect to compare our work to this algorithm because it bears many similarities to ours, yet calculates the user and item similarities in a different manner. Their approach consists of two steps: (1) {\em similarity calculation} and (2) {\em similarity fusion}. In the first step, they calculate the similarities between users and between items based on the {\bf R} matrix, but extend this user-item matrix by including user tags as items and item tags as users. Effectively, this means they concatenate a user's profile vector $\overrightarrow{u_{k}}$ with that user's tag vector $\overrightarrow{d_{k}}$, which is taken from $\mathbf{UT}_{\mathit{binary}}$. For item-based filtering the item profile vector $\overrightarrow{i_{l}}$ is extended with the tag vector for that item, $\overrightarrow{f_{l}}$, taken from $\mathbf{IT}_{\mathit{binary}}$. Figure \ref{figure:matrix-tag-extension} illustrates this process.

  \begin{figure}[h]
    \centering
    \includegraphics[angle=90,scale=0.56]{./figures/diagram-tag-aware-fusion.pdf}
    \caption[Extending the user-item matrix for tag-aware fusion]{Extending the user-item matrix for tag-aware fusion. For user-based filtering, the {\bf UT} matrix is appended to the normal {\bf UI} matrix so that the tags serve as extra items to use in calculating user-user similarity. It does so by including user tags as items and item tags as users. For item-based filtering, the transposed {\bf IT} matrix is appended to the normal {\bf UI} matrix so that the tags serve as extra users to use in calculating item-item similarity. Adapted from \cite{Tso-Sutter:2008}.}
    \label{figure:matrix-tag-extension}
  \end{figure}

By extending the user and item profile vectors with tags, sparsity is reduced when calculating the user or item similarities, compared to using only transaction data from {\bf R} to calculate the similarities. Adding the tags also reinforces the transaction information that is already present in $\overrightarrow{u_{k}}$ and $\overrightarrow{i_{l}}$. At the end of this phase they use the $k$-NN algorithm with cosine similarity to generate recommendations using both user-based and item-based filtering. When generating recommendations, the tags are removed from the recommendation lists; only items are ever recommended. While this is related to our idea of similarity fusion, it is not identical. In the case of user-based filtering, for instance, we fuse together the similarities calculated on the separate $\overrightarrow{u_{k}}$ and $\overrightarrow{d_{k}}$ vectors\footnote{I.e., the similarities from the \run{u-bin-sim} and \run{ut-bin-sim} runs.}. \citeauthor{Tso-Sutter:2008} first fuse together the $\overrightarrow{u_{k}}$ and $\overrightarrow{d_{k}}$ vectors and then calculate the similarity between profile vector pairs. The same distinction holds for item-based filtering.
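The distinction between the two strategies can be illustrated with a small sketch using toy binary matrices (values and names ours): one path concatenates the profile vectors before computing a single cosine similarity, the other computes two similarities and fuses the scores.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two dense profile vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy binary matrices: UI (users x items) and UT (users x tags).
UI = np.array([[1, 0, 1, 0],
               [0, 1, 1, 0]], dtype=float)
UT = np.array([[1, 1, 0],
               [1, 1, 1]], dtype=float)

# Tag-aware extension: tags appended as extra "items", one similarity
# computed on the concatenated vectors.
extended = np.hstack([UI, UT])
sim_extended = cosine(extended[0], extended[1])

# Versus fusing two separately computed similarities, as in SimFuseCF.
lam = 0.5
sim_fused = lam * cosine(UI[0], UI[1]) + (1 - lam) * cosine(UT[0], UT[1])
```

The two strategies generally produce different similarity values, since concatenation mixes the two sources before normalization while fusion normalizes each source separately.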

In the second phase of their approach, similarity fusion, \cite{Tso-Sutter:2008} fuse the predictions of the user-based and item-based filtering algorithms together, to try to effectively capture the 3D correlations between users, items, and tags in social bookmarking data sets. Their fusion approach was inspired by \cite{Wang:2006}, who proposed two types of combinations: (1) fusing user- and item-based predictions, and (2) using the similar item ratings generated by similar users. \cite{Tso-Sutter:2008} only considered the first type of combinations as the second type did not provide better recommendations for them. We also employed this restriction. They fused the user- and item-based predictions by calculating a weighted sum of the separate predictions. In our comparison experiments, we evaluated both the fused predictions as well as the separate user-based and item-based filtering runs using the extended similarities. We refer to the latter two runs as \run{u-tso-sutter-sim} and \run{i-tso-sutter-sim}. The optimal combination weights were determined using our 10-fold cross-validation setup. \cite{Tso-Sutter:2008} tested their approach on a self-crawled data set from Last.FM against a baseline $k$-NN algorithm based on usage similarity. They reported that in their experiments they found no improvements in performance using these extended similarities in their separate user-based and item-based runs. They did report significant improvements of their fused approach over their baseline runs, showing that their fusion method is able to capture the 3D relationship between users, items, and tags effectively. We refer the reader to \cite{Tso-Sutter:2008} for more details about their work.
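The weighted-sum fusion of the two prediction lists can be sketched as follows; the dictionary representation and all names are ours, assuming each filtering run produces a score per candidate item.

```python
def fuse_predictions(pred_user, pred_item, w):
    """Weighted sum of user-based and item-based prediction scores.
    pred_user and pred_item map candidate item ids to scores; w in [0, 1]
    is the combination weight, tuned here by cross-validation."""
    candidates = set(pred_user) | set(pred_item)
    return {i: w * pred_user.get(i, 0.0) + (1 - w) * pred_item.get(i, 0.0)
            for i in candidates}

fused = fuse_predictions({'a': 1.0, 'b': 0.5}, {'b': 1.0, 'c': 0.2}, 0.5)
# Ranking the candidates by fused score yields the final recommendations.
ranking = sorted(fused, key=fused.get, reverse=True)
```

Items scored by both component runs are boosted relative to items scored by only one, which is how the fusion captures agreement between the user-based and item-based views.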





\subsection{A Random Walk on the Social Graph}
\label{4:subsec:random-walk}

The second GBCF approach against which we wish to compare our work is the random walk method of \cite{Clements:2008}. They propose using a personalized Markov random walk on the tripartite graph present in social bookmarking websites. While others have used random walks for recommendation in the past \citep{Aggarwal:1999, Yildirim:2008, Baluja:2008}, applying them to the tripartite social graph is new. Furthermore, the model allows for the execution of many different recommendation tasks, such as recommending related users, interesting tags, or similar items, using the same elegant model. \cite{Clements:2008} represent the tripartite graph of users, items, and tags, created by all transactions and tagging actions, as a transition matrix {\bf A}. A random walk is a stochastic process in which the initial condition is known and the next state is given by a certain probability distribution; {\bf A} contains the transition probabilities from each state to every other state. A random walk over this social graph is then used to generate a relevance ranking of the items in the system. The initial state of the walk is represented in the {\em initial state vector} $\overrightarrow{v}_{0}$. Multiplying the state vector by the transition matrix gives us the transition probabilities after one step; multi-step probabilities are calculated by repeating $\overrightarrow{v}_{n + 1} = \overrightarrow{v}_{n} \cdot \mathbf{A}$ for the desired walk length $n$. The number of steps taken determines the influence of the initial state vector versus the background distribution: a longer walk increases the influence of {\bf A}. A walk of infinite length ($\overrightarrow{v}_\infty$) results in the steady-state distribution of the social graph, which reflects the background probability of all nodes in the graph. This is similar to the PageRank model \citep{Page:1998} for Web search, and related, but not identical, to the FolkRank algorithm \citep{Hotho:2006}. 
The transition matrix {\bf A} is created by combining the usage and tagging information present in the {\bf R}, {\bf UT}, and {\bf IT} matrices into a single matrix. In addition, \cite{Clements:2008} include the possibility of self-transitions, which allows the walk to stay in place with probability $\alpha \in [0,1]$. Figure \ref{figure:transition-matrix-construction} illustrates how the transition matrix {\bf A} is constructed.

  \begin{figure}[h]
    \centering
    \includegraphics[scale=0.7]{./figures/diagram-random-walk-matrix.pdf}
    \caption[Construction of the random walk transition matrix]{The transition matrix {\bf A} is constructed by combining the {\bf R}, {\bf UT}, and {\bf IT} matrices and their transposed versions. Self-transitions are incorporated by super-imposing a diagonal matrix of ones {\bf S} on the transition matrix, multiplied by the self-transition parameter $\alpha$. In the initial state vector the $\theta$ parameter controls the amount of personalization for the active user. The highlighted part of the final probability vector $\overrightarrow{v}_{n}$ after $n$ steps are the item state probabilities; these are the final probabilities that we rank the items on for the active users.}
    \label{figure:transition-matrix-construction}
  \end{figure}

{\bf A} is a row-stochastic matrix, i.e., all rows of {\bf A} are normalized to 1. \cite{Clements:2008} introduce a third model parameter $\theta$ that controls the amount of personalization of the random walk. In their experiments with personalized search, two starting points are assigned in the initial state vector: one selecting the user $u_{k}$ and one selecting the tag $t_{m}$ they wish to retrieve items for. The $\theta$ parameter determines the influence of the personal profile versus this query tag. In our case, we are only interested in item recommendation based on the entire user profile, so we do not include any tags in the initial state vector. This corresponds to setting $\theta$ to 0 in \cite{Clements:2008}. In addition, we set $\alpha$, the self-transition probability, to 0.8 as recommended by \citeauthor{Clements:2008}. We optimize the walk length $n$ using our 10-fold cross-validation setup. After $n$ steps, the item ranking is produced by taking the item probabilities from $\overrightarrow{v}_n$ for the active user ($\overrightarrow{v}_n(K + 1, \dots, K + L)$) and rank-ordering them by probability after removal of the items already owned by the active user. An advantage of the random walk model of \cite{Clements:2008} is that it can support many different recommendation tasks without changing the algorithm. Supporting tag recommendation instead of item recommendation, for instance, can be done by simply selecting a user and an item in the initial state vector, and then ranking the tags by their probabilities from the final result vector. Item recommendation for entire user groups could also be supported by simply selecting the group's users in the initial state vector and then ranking by item probabilities. 
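The mechanics of the walk can be sketched as follows. The block layout follows Figure \ref{figure:transition-matrix-construction}; the exact placement of the self-transition weight $\alpha$ is our reading of that figure, and the matrices below are toy examples, not data from our experiments.

```python
import numpy as np

def build_transition_matrix(R, UT, IT, alpha):
    """Combine R (users x items), UT (users x tags), and IT (items x tags)
    into one block transition matrix, row-normalize it, and superimpose
    self-transitions with probability alpha."""
    K, L = R.shape
    M = UT.shape[1]
    A = np.block([[np.zeros((K, K)), R,                UT],
                  [R.T,              np.zeros((L, L)), IT],
                  [UT.T,             IT.T,             np.zeros((M, M))]])
    rowsum = A.sum(axis=1, keepdims=True)
    rowsum[rowsum == 0] = 1.0                  # guard against isolated nodes
    return alpha * np.eye(K + L + M) + (1 - alpha) * (A / rowsum)

def random_walk(A, v0, n):
    """Repeat v_{n+1} = v_n . A for n steps."""
    v = v0
    for _ in range(n):
        v = v @ A
    return v

# Toy data: 2 users, 2 items, 2 tags.
R  = np.array([[1.0, 0.0], [0.0, 1.0]])
UT = np.array([[1.0, 0.0], [0.0, 1.0]])
IT = np.array([[1.0, 0.0], [0.0, 1.0]])
A  = build_transition_matrix(R, UT, IT, alpha=0.8)

v0 = np.zeros(6); v0[0] = 1.0                  # start at user 0
v = random_walk(A, v0, n=5)
item_probs = v[2:4]                            # item states occupy positions K..K+L-1
```

Ranking the candidate items by `item_probs`, after removing the items the active user already owns, yields the recommendation list.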

\cite{Clements:2008} tested their approach on a data set based on LibraryThing\footnote{\url{http://www.librarything.com/}}, which includes both tagging and rating information. In addition, they artificially constructed a narrow version of this LibraryThing folksonomy to compare the effectiveness of their method on collaborative and individual tagging systems. They compared different parameter settings of their random walk algorithm and found that, because of the lower density of the narrow folksonomy, it is difficult to retrieve and recommend items in an individual tagging system. We refer the reader to \cite{Clements:2008} for more details about their work.





\subsection{Results}
\label{4:subsec:comparison-results}

Table \ref{table:results-comparisons} shows the results of the tag-aware fusion and random walk algorithms on our four data sets, together with the best-performing CF runs. Of the two runs using the extended user-item matrices for similarity comparison, \run{i-tso-sutter-sim} performs better than \run{u-tso-sutter-sim}. While \run{u-tso-sutter-sim} improves upon the simple usage-based similarities from Section \ref{4:sec:cf} on three of the data sets, it does not improve upon our best CF runs. The \run{i-tso-sutter-sim} run, however, does: the extended item similarities outperform the standard tag overlap similarities on every data set, with performance increases ranging from 16\% to 37\%. 

\begin{table}[htp]
  \caption[Results of the tag-aware and random walk approaches]{Results of the tag-aware and random walk approaches. Reported are the MAP scores as well as the optimal number of neighbors $N$. For the random walk model, $N$ corresponds to the walk length $n$. In addition, we report the best-performing CF runs using usage-based similarity and tag overlap similarity. The best-performing runs for each data set are printed in bold. The percentage difference between the best approaches from related work and our best CF runs is indicated in the bottom rows of the two tables.}
  \begin{center}
%  \begin{footnotesize}
  \begin{scriptsize}
  \begin{tabular}{l||c|c|c|c||c|c|c|c}
  \hline
  \multirow{3}{*}{{\bf Runs}} &  \multicolumn{4}{c||}{{\bf bookmarks}} & \multicolumn{4}{c}{{\bf articles}} \\
  \cline{2-9}
  ~ & \multicolumn{2}{c|}{{\bf BibSonomy}} & \multicolumn{2}{c||}{{\bf Delicious}} & \multicolumn{2}{c|}{{\bf BibSonomy}} & \multicolumn{2}{c}{{\bf CiteULike}}  \\
  \cline{2-9}
  ~                         & MAP & $N$ & MAP & $N$ & MAP & $N$ & MAP & $N$ \\
  \hline
  \hline
  \multirow{2}{*}{Best CF run} & 0.0370\nosign & 3 & 0.0101\nosign & 23
                               & 0.1100\nosign & 7  & 0.0887\nosign & 30 \\
  \cdashline{2-9}[1pt/1pt]
  ~                         & \multicolumn{2}{c|}{(\run{it-jaccard-sim})}
                            & \multicolumn{2}{c||}{(\run{it-bin-sim})}
                            & \multicolumn{2}{c|}{(\run{it-tfidf-sim})}
                            & \multicolumn{2}{c}{(\run{i-bin-idf-sim})} \\
  \hline
  \run{u-tso-sutter-sim}    & 0.0303\nosign & 13  & 0.0057\nosign & 54  
                            & 0.0829\nosign & 13  & 0.0739\downtriangle & 14 \\ 
  \run{i-tso-sutter-sim}    & 0.0468\nosign &  7  & 0.0125\nosign & 13  
                            & 0.1280\nosign & 11  & 0.1212\upblack & 10 \\ 
  Tag-aware fusion          & {\bf 0.0474}\nosign & $\lambda$ = 0.5 & {\bf 0.0166}\nosign & $\lambda$ = 0.3
                            & {\bf 0.1297}\nosign & $\lambda$ = 0.2 & {\bf 0.1268}\upblack & $\lambda$ = 0.2 \\ 
%  \hline
  Random walk model         & 0.0182\nosign & 5   & 0.0003\downblack & 3  
                            & 0.0608\nosign & 8   & 0.0536\downblack & 14 \\ 
  \hline
  \% Change                 & \multicolumn{2}{c|}{+28.1\%}
                            & \multicolumn{2}{c||}{+64.4\%}
                            & \multicolumn{2}{c|}{+17.9\%}
                            & \multicolumn{2}{c}{+43.0\%} \\ 
  \hline
  \end{tabular}
%  \end{footnotesize}
  \end{scriptsize}
  \end{center}
  \label{table:results-comparisons}
\end{table}

Fusing the user- and item-based predictions produces the best results so far: on all four data sets, the fused predictions improve performance, by 18\% to 64\% depending on the data set. Performance on \dc\ is improved significantly. For most of the data sets, the optimal $\lambda$ assigns more weight to the item-based predictions, which also yield the best results on their own. The differences between the fused run and the \run{i-tso-sutter-sim} run are not statistically significant. 
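The fusion step itself reduces to a weighted linear combination of the two sets of prediction scores. The sketch below is our own illustration; the convention that $\lambda$ weights the user-based component (and $1 - \lambda$ the item-based one) is an assumption, consistent with the small optimal $\lambda$ values favoring the item-based predictions.

```python
def tag_aware_fusion(user_scores, item_scores, lam=0.2):
    """Fuse user-based and item-based prediction scores for one active user.

    user_scores and item_scores map item ids to prediction scores; lam
    weights the user-based component and 1 - lam the item-based one.
    Returns the candidate items ranked by fused score.
    """
    candidates = set(user_scores) | set(item_scores)
    fused = {item: lam * user_scores.get(item, 0.0)
                   + (1.0 - lam) * item_scores.get(item, 0.0)
             for item in candidates}
    return sorted(fused, key=fused.get, reverse=True)
```

Items predicted by only one of the two component runs still receive a (down-weighted) score, so the fused ranking covers the union of both candidate lists.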

The random walk model does not fare as well on any of our four data sets: its performance is lower than that of our best CF run, and it is especially poor on the \dd\ data set. The random walk method does, however, outperform the popularity-based baseline on the \dbb, \dba, and \dc\ data sets. 





\subsection{Discussion}
\label{4:subsec:gbcf-discussion}

According to our results on the four data sets, extending the user-item matrix by including user tags as items and item tags as users is the superior way of calculating user and item similarities. For both user-based and item-based filtering, combining usage and tag information before computing the similarities outperforms using just one of those similarity sources. We believe sparsity reduction to be a main reason for this. As user profile vectors increase in length, they also gain more non-zero elements: the density of the user profile vectors increases by 54\% for the \dd\ data set and by approximately 6\% for the other data sets. This means that, on average, larger parts of the vectors can be matched against each other in the user similarity calculation, leading to more accurate user similarities. For item-based filtering, we observe the same effect, leading to improved performance over the separate CF or TOBCF runs. As with the item-based runs using tag overlap, richer item descriptions lead to better-matched items and therefore higher-quality recommendations. These results differ from those reported by \cite{Tso-Sutter:2008}, who found that CF with their extended similarities did not outperform the standard CF algorithm. This effect might have been specific to their data set. 
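The matrix extension behind the \run{u-tso-sutter-sim} and \run{i-tso-sutter-sim} runs can be sketched as follows. This is our own minimal illustration, assuming binary occurrence matrices and cosine similarity over the extended profiles; the function and variable names are ours.

```python
import numpy as np

def extended_similarities(R, UT, IT):
    """User profiles gain the user's tags as pseudo-items; item profiles
    gain the item's tags as pseudo-users. Similarities are then computed
    over the extended, denser profile vectors.

    R: (K x L) user-item, UT: (K x M) user-tag, IT: (L x M) item-tag.
    """
    user_profiles = np.hstack([R, UT])    # shape (K, L + M)
    item_profiles = np.hstack([R.T, IT])  # shape (L, K + M)

    def cosine_matrix(X):
        # Pairwise cosine similarity between the rows of X.
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        norms[norms == 0.0] = 1.0
        Xn = X / norms
        return Xn @ Xn.T

    return cosine_matrix(user_profiles), cosine_matrix(item_profiles)
```

The sparsity-reduction argument is visible directly in the code: concatenating {\bf UT} (or {\bf IT}) adds non-zero dimensions to every profile vector before the dot products are taken.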

Fusing the user-based and item-based predictions leads to the best results, superior to any of our own CF-based runs. This is to be expected, as it combines the best of both worlds. Our experiments confirm that tag-aware fusion of CF algorithms is able to capture the three-dimensional correlations between users, items, and tags. Although the tag-aware fusion method does not significantly improve over its component runs, it does provide consistent improvements. An interesting question is why our own proposed SimFuseCF algorithm did not show such consistent improvements. We believe this is because the distributions of the two sets of similarities were too different even after normalization. If the optimal number of neighbors is very different for the two algorithms using the two sets of similarities, fusing the similarities themselves does not result in the best of both worlds, but rather in a sub-optimal compromise between the optimal neighborhood sizes. In Chapters \ref{chapter5} and \ref{chapter6} we will return to other possibilities for combining different recommendation techniques.
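For contrast with the prediction-level fusion above, the SimFuseCF idea fuses at the similarity level, before neighborhood selection. The sketch below is our own; the min-max normalization is an assumption for illustration, and the chapter's observation about mismatched score distributions applies exactly at this blending step.

```python
import numpy as np

def fuse_similarities(sim_usage, sim_tags, lam=0.5):
    """Blend a usage-based and a tag-based similarity matrix into a single
    matrix used for neighborhood selection. Each matrix is first rescaled
    to [0, 1]; if the two score distributions still differ strongly, the
    blend can become a sub-optimal compromise between the two algorithms'
    optimal neighborhood sizes.
    """
    def minmax(S):
        lo, hi = S.min(), S.max()
        if hi == lo:
            return np.zeros_like(S)
        return (S - lo) / (hi - lo)

    return lam * minmax(sim_usage) + (1.0 - lam) * minmax(sim_tags)
```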

The random walk method is not competitive on our data sets with our best CF runs or with the tag-aware fusion method. We can think of several reasons for this. First, we performed less strict filtering of our data sets than \cite{Clements:2008} did, and the higher density of their LibraryThing data set could have led to better results. For instance, we removed only untagged posts from our crawls, whereas \citeauthor{Clements:2008}\ required all tags to occur at least five times. Increasing the density of our data sets through stricter filtering is likely to improve the results of all the approaches discussed so far. A second possible reason for the poor performance is the lack of explicit ratings. While \citeauthor{Clements:2008}\ have explicit, five-star ratings in their LibraryThing data set, we only have implicit transaction patterns to represent item quality. However, in their work they did not find any significant differences when using the ratings from the {\bf UI} matrix. Finally, we did not perform extensive parameter optimization: we only optimized the walk length $n$, and not the self-transition probability $\alpha$; tuning the latter could have positively influenced performance on our data sets. 

The random walk method performs much better on the article data sets than on the bookmark data sets. We believe this is because the \dba\ and \dc\ data sets cover a specific domain of scientific articles. We reported earlier that the user profiles of \dba\ and \dc\ users are more homogeneous than those of the bookmark users, whose profiles reflect a larger variety of topics. The larger number of tags per user in the bookmark data sets means that the random walk has more nodes it can visit. The transition probabilities are therefore spread out over more nodes, making it more difficult to distinguish between good and bad items for recommendation. This effect is especially pronounced on the \dd\ data set, as is evident from the significantly lower MAP score there. Narrower domains such as scientific articles, books, or movies lead to a more compact network and make it easier for the random walk model to find the best related content. The data set used by \cite{Clements:2008} has a `narrow' domain similar to that of our \dba\ and \dc\ data sets, making it easier to find related content and generate recommendations.





\section{Chapter Conclusions and Answer to RQ 1}
\label{4:sec:summary}

Collaborative filtering algorithms typically base their recommendations on  transaction patterns such as ratings or purchases. In a social bookmarking scenario, the broad folksonomy provides us with an extra layer of information in the form of tags. In this chapter we focused on answering our first research question.

\begin{center}\begin{tabularx}{0.9\linewidth}{lX}
  {\bf RQ 1}   & How can we use the information represented by the folksonomy 
                 to support and improve recommendation performance? \\
\end{tabularx}\end{center}

We started our exploration by comparing a popularity-based algorithm with the user-based and item-based variants of the standard nearest-neighbor CF algorithm. The only information used here consisted of the user-item associations in the folksonomy, and these experiments showed that personalized recommendations are preferable over ranking items by their overall popularity. 

We then extended both nearest-neighbor CF variants with different tag similarity metrics, based on either the overlap in tags (TOBCF) or the overlap in how intensely items were tagged (TIBCF). We found that the performance of item-based filtering can be improved by using item similarities based on the overlap in the tags assigned to those items. The reason for this is reduced sparsity in the item profile vectors; something we did not find in user-based filtering, which as a result did not benefit from using tag similarity. We found that bookmark recommendation was affected more strongly than reference recommendation. Using tagging intensity as a source of user or item similarity did not produce good recommendations, because the number of tags a user assigns to an item is not correlated with its perceived value. We also examined merging usage-based and tag-based similarities to get the best of both worlds. However, the results of this SimFuseCF algorithm were inconclusive, probably due to the different distributions of the two sets of similarities. 

To promote repeatability and verifiability of our experiments, we used publicly available data sets, and we compared our algorithms with two state-of-the-art GBCF algorithms. The first algorithm, based on random walks over the social graph, performed worse than our best tag-based approaches. The second approach was a tag-aware $k$-NN algorithm that merged usage and tag data before the similarity calculation instead of after. By merging two different representations and combining the results of two different algorithms, this approach outperformed our TOBCF, TIBCF, and SimFuseCF algorithms. From these results we may conclude that tags can be used successfully to improve performance, but that usage data and tagging data have to be combined to achieve the best performance.
