% Chapter 9 - Discussion and Conclusions

\chapter{Discussion and Conclusions}
\label{chapter10}
\lhead{Chapter 9. \emph{Discussion and Conclusions}}



In this thesis we have examined how recommender systems can be applied to the domain of social bookmarking. More specifically, we have investigated the task of item recommendation, where interesting and relevant items---bookmarks or scientific articles---are retrieved and recommended to the user, based on a variety of information sources about and characteristics of the user and the items. The recommendation algorithms we proposed were based on two different characteristics: the usage data contained in the folksonomy, and the metadata describing the bookmarks or articles on a social bookmarking website. We formulated the following problem statement for the thesis in Chapter \ref{chapter1}.

\begin{center}\begin{tabularx}{0.9\linewidth}{lX}
  {\bf PS}~~~~   & {\em How can the characteristics of social bookmarking websites be exploited to produce the best possible item recommendations for users?} \\
\end{tabularx}\end{center}

In this chapter we conclude the thesis. We start by answering our five main research questions in Section \ref{10:sec:answers}. In Section \ref{10:sec:rec-for-rec} we offer a set of practical recommendations for social bookmarking services seeking to implement a recommender system. We summarize our five main contributions in Section \ref{10:sec:contributions}. We conclude this chapter in Section \ref{10:sec:future}, where we list future research directions, drawing on the work described in this thesis.





\section{Answers to Research Questions}
\label{10:sec:answers}

Our problem statement led us to formulate five main research questions; along the way, we formulated seven additional subquestions. In this section we summarize the answers to those twelve research questions. The first three research questions focused on how the two key characteristics of social bookmarking websites---the folksonomy and the metadata---can be utilized to produce the best possible recommendations for users.

\begin{center}\begin{tabularx}{0.9\linewidth}{lX}
  {\bf RQ 1}   & How can we use the information represented by the folksonomy 
                 to support and improve recommendation performance? \\
\end{tabularx}\end{center}

In answering our first research question we focused on using the tags present in the broad folksonomy of social bookmarking systems, which describe the content of an item and can therefore be used to determine the similarity between two objects. We extended a standard nearest-neighbor CF algorithm with different tag similarity metrics to produce our TOBCF and TIBCF algorithms. We found that the performance of item-based filtering can be improved by using item similarities based on the overlap in the tags assigned to those items (TOBCF). The reason for this is the reduced sparsity of the item profile vectors, an effect we did not observe for user-based TOBCF, which as a result did not benefit from using tag similarity. Bookmark recommendation is affected more strongly than reference recommendation. Our TIBCF algorithms did not produce competitive results. We may therefore conclude that tagging intensity is not a good measure of user and item similarity.
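To make the idea concrete, the tag-overlap similarity underlying TOBCF can be sketched as a cosine measure over item tag profiles. This is an illustrative sketch rather than the exact formulation used in the experiments; the function name and the dictionary representation of tag counts are our own.

```python
from math import sqrt

def tag_cosine(item_tags_a, item_tags_b):
    """Cosine similarity between two items based on overlapping tags.

    Each argument maps a tag to the number of times it was assigned
    to the item (the item's tag profile)."""
    shared = set(item_tags_a) & set(item_tags_b)
    dot = sum(item_tags_a[t] * item_tags_b[t] for t in shared)
    norm_a = sqrt(sum(v * v for v in item_tags_a.values()))
    norm_b = sqrt(sum(v * v for v in item_tags_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)
```

Because many items share at least a few tags even when they were posted by disjoint sets of users, such tag-based similarities are less sparse than the purely usage-based ones, which is the effect observed for item-based TOBCF.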

An examination of merging different types of similarity in our SimFuseCF algorithm yielded inconclusive results. We compared our algorithms with two state-of-the-art GBCF approaches: a graph-based algorithm using random walks and a tag-aware fusion algorithm. Here, we observed that our algorithm outperformed the random walk algorithm. The tag-aware fusion approach outperformed our own TBCF algorithms by fusing different representations and algorithms. On the basis of these results we may conclude (1) that tags can be used successfully to improve performance, and (2) that usage data and tagging data have to be combined to achieve the best performance.

\begin{center}\begin{tabularx}{0.9\linewidth}{lX}
  {\bf RQ 2}   & How can we use the item metadata available in social 
                 bookmarking systems to provide accurate recommendations to 
                 users? \\
\end{tabularx}\end{center}

To answer RQ 2, we proposed four different algorithms, divided into two classes: two content-based filtering approaches and two hybrid approaches. In content-based filtering, a profile-centric approach, where all of the metadata assigned by a user is matched against metadata representations of the items, performed better than matching posts with each other because of sparseness issues. We also compared two hybrid CF approaches that used the metadata representations to calculate the user and item similarities. Here, we found that item-based filtering with the metadata-derived similarities performed best. Which metadata-based algorithm is best overall depends on the data set. In Chapter \ref{chapter5}, we formulated the following two additional subquestions.

\begin{center}\begin{tabularx}{0.9\linewidth}{lX}
  {\bf RQ 2a}   & What type of metadata works best for item recommendation? \\
  ~             & ~ \\
  {\bf RQ 2b}   & How does content-based filtering using metadata compare with 
                  folksonomic recommendations? \\
\end{tabularx}\end{center}

We found that while the sparsity of a metadata field does have an influence on recommendation performance, the quality of the information is just as important. Based on the experimental results we may conclude that combining all intrinsic metadata fields together tends to give the best performance, whereas extrinsic information, i.e., information not directly related to the content, does not (RQ 2a). Finally, compared to the folksonomic recommendation algorithms proposed, recommending using metadata works better on three of our four data sets. Hence, we may conclude that it is a viable choice for recommendation despite being underrepresented in the related work so far (RQ 2b).

\begin{center}\begin{tabularx}{0.9\linewidth}{lX}
  {\bf RQ 3}   & Can we improve performance by combining the recommendations 
                 generated by different algorithms? \\
\end{tabularx}\end{center}

The answer to this third research question is positive: combining different recommendation runs yielded better performance compared to the individual runs on all data sets (RQ 3). We formulated an additional, more specific research question in Chapter \ref{chapter6}.

\begin{center}\begin{tabularx}{0.9\linewidth}{lX}
  {\bf RQ 3a}   & What is the best recipe for combining the different 
                  recommendation algorithms? \\
\end{tabularx}\end{center}

In our experiments, weighted fusion methods consistently outperformed the unweighted ones, because not every run contributes equally to the final result. A second ingredient for successful fusion is using a combination method that rewards items that show up in more of the individual runs, harnessing the {\em Chorus/Authority} effect by improving the ranking of such items. Thirdly, the most successful combinations came from fusing the results of recommendation algorithms and representations that touched upon different aspects of the item recommendation process. Hence we may conclude that the theory underlying data fusion in IR is confirmed for recommender systems. 
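The two fusion ingredients above can be sketched as a weighted CombMNZ-style combination: a weighted sum of normalized scores, multiplied by the number of runs that retrieved the item. The max-score normalization and function names below are our simplifications, not necessarily the exact setup used in the experiments.

```python
def weighted_combmnz(runs, weights):
    """Fuse ranked result lists into one ranking.

    `runs` is a list of {item: score} dicts (scores assumed positive),
    `weights` a parallel list of run weights. Scores are normalized by
    each run's top score, weighted, summed, and then multiplied by the
    number of runs that retrieved the item (the Chorus effect)."""
    fused = {}
    counts = {}
    for run, w in zip(runs, weights):
        if not run:
            continue
        top = max(run.values())
        for item, score in run.items():
            fused[item] = fused.get(item, 0.0) + w * score / top
            counts[item] = counts.get(item, 0) + 1
    for item in fused:
        fused[item] *= counts[item]  # reward agreement between runs
    return sorted(fused, key=fused.get, reverse=True)
```

An item retrieved by several complementary runs is boosted twice: once through the summed scores and once through the multiplication by its run count, which is why fusing runs that cover different aspects of the task pays off.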

Our first three research questions were answered by taking a quantitative, system-based approach to evaluation, i.e., we simulated the user's interaction with our proposed recommendation algorithms in a laboratory setting. Such an idealized perspective does not take into account the growing pains that accompany the increasing popularity of social bookmarking websites: spam and duplicate content. We focused on these problems by formulating RQ 4 and RQ 5. The fourth research question addresses the problem of spam.

\begin{center}\begin{tabularx}{0.9\linewidth}{lX}
  {\bf RQ 4}   & How big a problem is spam for social bookmarking 
                 services?  \\
\end{tabularx}\end{center}

We examined two of our collections, \cul\ and \bib, for spam and found that these data sets contain large amounts of spam, ranging from 30\% to 93\% of all users marked as spammers (RQ 4). We formulated two additional research questions in \mbox{Chapter \ref{chapter8}}.

\begin{center}\begin{tabularx}{0.9\linewidth}{lX}  
  {\bf RQ 4a}   & Can we automatically detect spam content? \\
  ~             & ~ \\
  {\bf RQ 4b}   & What influence does spam have on the recommendation
                  performance? \\
\end{tabularx}\end{center}

We showed that it is possible to automatically train a classifier to detect spam users in a social bookmarking system by comparing all of the metadata a user has added against the metadata added by genuine users and by known spammers. This is best done at the user level of granularity instead of at a more fine-grained level (RQ 4a). Finally, we examined the influence of spam on recommendation performance by extending our \bib\ bookmarks data set with the spam entries we filtered out earlier. We tested a collaborative filtering and a content-based approach and found that spam has a negative effect on recommendation. Based on our experimental results we may conclude that the content-based approach was affected most by the presence of spam. However, all result lists were unacceptably polluted with spam items, proving the necessity of adequate spam detection techniques (RQ 4b).
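As an illustration of user-level spam detection, the sketch below labels a user by comparing the bag of words of all metadata they added against already-labeled user profiles, using a simple k-nearest-neighbor vote over cosine similarities. The representation, function names, and parameters are our assumptions for illustration; the actual classifier used in the thesis may differ.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two bag-of-words dicts."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify_user(user_profile, labeled_profiles, k=1):
    """Label a user 'spam' or 'genuine' by majority vote among the k
    most similar labeled user profiles (a list of (profile, label))."""
    neighbors = sorted(labeled_profiles,
                       key=lambda p: cosine(user_profile, p[0]),
                       reverse=True)[:k]
    votes = sum(1 if label == 'spam' else -1 for _, label in neighbors)
    return 'spam' if votes > 0 else 'genuine'
```

Aggregating all of a user's metadata into one profile, as above, corresponds to the user level of granularity that worked best in our experiments: it gives the classifier more text to work with than a single post would.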

To address the problem of duplicate content on social bookmarking websites, we formulated the fifth research question and two additional, more specific research questions in \mbox{Chapter \ref{chapter9}}.

\begin{center}\begin{tabularx}{0.9\linewidth}{lX}
  {\bf RQ 5}   & How big a problem is the entry of duplicate content for 
                 social bookmarking services? \\
  ~             & ~ \\
  {\bf RQ 5a}   & Can we construct an algorithm for automatic duplicate 
                  detection? \\
  ~             & ~ \\
  {\bf RQ 5b}   & What influence do duplicates have on recommendation 
                  performance? \\
\end{tabularx}\end{center}

We examined one of our data sets, \cul, to quantify the problem of duplicate content (RQ 5). We constructed a training set and trained a duplicate identification classifier which found a small percentage of duplicates (RQ 5a). We found that these duplicate items follow a Zipfian distribution with a long tail, just as regular items do, which means that certain duplicates can be quite widespread (RQ 5). Finally, we examined the influence of duplicates on recommendation performance by creating a deduplicated version of our \cul\ data set. We tested a collaborative filtering and a content-based approach, but did not find any clear effect of deduplication on recommendation, because our duplicate identification classifier was not sufficiently accurate (RQ 5b). 





\section{Recommendations for Recommendation}
\label{10:sec:rec-for-rec}

Based on the answers to our research questions, we can offer a set of recommendations for social bookmarking services seeking to implement a recommender system. Note that our findings are specific to the task of recommending relevant items to users based on their profile; we cannot guarantee that our recommendations hold for other tasks such as personalized search or filling out reference lists.

Social bookmarking websites have two options at their disposal that both work equally well: recommending items using only the broad folksonomy or using the metadata assigned to the items to produce recommendations. The latter option of recommendation based on metadata is a good option when the website already has a good search infrastructure in place. In that case it is relatively easy to implement a metadata-based recommender system. An added advantage of having a good search engine is that it is also useful for detecting spam users and content with high accuracy. Recommendation using the information contained in the folksonomy is a good approach as well: here, we recommend implementing an item-based filtering algorithm that uses tag overlap between items to calculate the similarity. However, since not all users tag their items, it would be even better to merge the tag information with the usage information before calculating the item similarities as suggested by \cite{Tso-Sutter:2008}. Provided the combination can be implemented efficiently, performance can be greatly increased by combining the recommendations of different algorithms before they are presented to the user. It is important here to combine approaches that focus on different aspects of the task, such as different representations or different algorithms, and preferably both.
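The merging of tag and usage information suggested by \cite{Tso-Sutter:2008} can be sketched as follows: each user's item profile is extended with the tags they used (and, symmetrically, item profiles could be extended with the tags assigned to them) before computing the CF similarities. The data layout and the Jaccard measure below are our illustrative choices, not the exact scheme from the cited work.

```python
def extended_user_profiles(user_items, user_tags):
    """Merge each user's posted items with the tags they used
    (namespaced to avoid clashes between tag names and item ids),
    yielding denser binary profiles for user-user similarity."""
    profiles = {}
    for user in set(user_items) | set(user_tags):
        items = {('item', i) for i in user_items.get(user, set())}
        tags = {('tag', t) for t in user_tags.get(user, set())}
        profiles[user] = items | tags
    return profiles

def jaccard(a, b):
    """Binary profile similarity computed on the extended profiles."""
    return len(a & b) / len(a | b) if a | b else 0.0
```

With plain item profiles, two users who posted disjoint items have similarity zero; after the extension, a shared tag already yields a non-zero similarity, which is exactly the sparsity-reducing effect the merging aims for.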

To provide the user with a satisfactory experience it is important to perform spam and duplicate detection. While they may not influence the recommendations to a strong degree, their presence in the results list can be enough for a user to lose trust in the recommender system. This illustrates that the success of any recommender system depends on the users, and whether or not {\em they} are satisfied with the system as a whole. Proper user testing of the system is therefore essential, and we will come back to this in Section \ref{10:sec:future}.





\section{Summary of Contributions}
\label{10:sec:contributions}

In this thesis we performed a principled investigation of the usefulness of different algorithms and information sources for recommending relevant items to users of social bookmarking services. Below we list the five contributions we have made.

\begin{enumerate}

  \item We examined different ways of using the information present in a folksonomy for recommendation, by extending a standard class of Collaborative Filtering algorithms with information from the folksonomy. These extensions were then compared to other state-of-the-art approaches, and shown to be competitive. 

  \item We determined the best way of using item metadata for recommendation, and proposed several new and hybrid algorithms. These algorithms were shown to be competitive with the more popular usage-based approaches that use the folksonomy. We were the first to perform such a comparison of content-based recommendation with collaborative filtering for social bookmarking services. 

  \item Compared to related work, we took a critical look at different methods for combining recommendations from different algorithms on the same data set. We showed that combining different algorithms and different representations that all cover different aspects of the recommendation task yields the best performance, confirming earlier work in the field of IR.

  \item We have performed our experiments on publicly available data sets based on three different social bookmarking services, covering two different domains, Web pages and scientific articles, to allow for a thorough evaluation of our work. This enhanced the generalizability of our findings.
 
  \item We examined two problems associated with the growth of social bookmarking websites: spam and duplicate content. We showed how prevalent these phenomena are. Moreover, we proposed methods for automatically detecting these phenomena, and examined the influence they might have on the item recommendation task.
    
\end{enumerate}





\section{Future Directions}
\label{10:sec:future}

In any research endeavor there is always room for improvement and the work described in this thesis is no different. While we have covered many different aspects of recommending items on social bookmarking websites in this thesis, we believe that the work we have done is but the tip of the proverbial iceberg. In particular, we identify three fruitful directions for future research.


\shrink


\paragraph{User-based Evaluation}

We remarked already in the first chapter of this thesis that our choice for system-based evaluation---while necessary to whittle down the overwhelming number of possible algorithms and representations---leaves out perhaps the most  important component of a recommender system: the user. \cite{Herlocker:2004} was among the first to argue that user satisfaction is influenced by more than just recommendation accuracy, and \cite{McNee:2006} followed up on this work with extensive user studies of recommendation algorithms. Similarly, we believe it is essential to follow up our work with an evaluation with real users in realistic situations. Ideally, such experiments would have to be run in cooperation with one of the more popular social bookmarking websites to attract a large enough group of test subjects to be able to draw statistically significant conclusions about any differences in performance. Typically, such live user studies are done by performing so-called {\em A/B testing}, also known as randomized experiments or Control/Treatment testing \citep{Kohavi:2009}. In A/B testing, two or more variants of a website are randomly assigned to the Web page visitors. With enough test subjects, meaningful conclusions can be drawn about, for instance, differences in clickthrough rates or purchases. To follow up on the work described in this thesis, it would be fruitful to compare different recommendation algorithms, such as the best variants of user-based filtering and item-based filtering, and the best content-based and hybrid filtering methods. Such user-based evaluation might see different algorithms rise to the top that were not the best-performing ones in the system-based experiments. For example, if users turn out to prefer serendipitous recommendations, we might see user-based CF as the algorithm with the highest clickthrough rate of the presented recommendations as suggested by \cite{McNee:2006b}. 
Other factors, just as testable through A/B testing, could include the presence of explanations of why a recommendation was made; these have also been shown to influence user satisfaction \citep{Herlocker:2000}.
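To give a flavor of the statistics involved, a minimal two-proportion z-test for comparing clickthrough rates between the two variants of an A/B test could look as follows. This is a simplified sketch under our own naming; real A/B analyses typically involve more careful experimental design and multiple-comparison corrections.

```python
from math import sqrt

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """z-statistic for comparing two clickthrough rates, using the
    pooled proportion for the standard error."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p = (clicks_a + clicks_b) / (n_a + n_b)  # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se
```

With, say, 120 clicks out of 1,000 impressions versus 100 out of 1,000, the statistic is about 1.43, below the conventional 1.96 threshold for significance at the 5\% level, which illustrates why a large enough group of test subjects is needed to draw statistically significant conclusions.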



\shrink


\paragraph{Task Differentiation}

In our experiments we focused on one specific task: item recommendation based on all of the bookmarks or articles added by a user in the past. This is far from the only task that could be supported on a social bookmarking website. Figure \ref{figure:research-tasks} shows a matrix of possible tasks, each in the form of selecting an object type and then finding related objects, possibly of a different type, to go with them.

  \begin{figure}[h]
    \centering
      \includegraphics[scale=0.6,angle=90]{./figures/diagram-research-tasks.pdf}
    \caption[An overview of possible research tasks on social bookmarking websites]{An overview of possible research tasks on social bookmarking websites, loosely adapted from \cite{Clements:2007}. Most of the related work so far has focused on `Tag recommendation', `Guided search', and `Item recommendation' (shaded cells). In this thesis we focused on the latter task.}
    \label{figure:research-tasks}
  \end{figure}
  
So far, related work on social tagging and social bookmarking has focused on `Tag recommendation' and `Guided search', and in this thesis we covered `Item recommendation'. Other interesting and useful tasks still remain largely unexplored, such as finding like-minded users or selecting specific items to get recommendations for---commonly known as `More like this' functionality---which could be very useful on a social bookmarking website. Some of our algorithms already perform parts of these functions---the memory-based CF algorithms, for instance, find similar users or items as an intermediate step. However, more specialized algorithms might be better at this. 

In deciding which tasks to tackle it is essential to profile the users: what do they want, how and what are they currently using the system for? We believe that focusing on real-world tasks in research can drive successful innovation, but only if the tasks under investigation are also desired by the users. 


\shrink


\paragraph{Algorithm Comparison}

In the experimental evaluation of our work, we have focused on using publicly available data sets and comparing our work against state-of-the-art approaches, something which is lacking from much of the related work. We are aware, however, that our comparisons are by no means complete, as we picked only two promising approaches to compare our work with. Model-based collaborative filtering algorithms, for instance, were absent from our comparison. In `regular' recommendation experiments in different domains, model-based algorithms have been shown to hold a slight edge over memory-based algorithms, but without a proper comparison on multiple social bookmarking data sets we cannot draw any conclusions about this. One natural extension of the work would therefore be to extend the comparison we made to all of the relevant related work discussed in Section \ref{4:sec:related-work}. Such a comparison would have to include at least the approaches by \cite{Hotho:2006}, \cite{Symeonidis:2008}, \cite{Wetzker:2009}, \cite{Zanardi:2008}, and \cite{Wang:2006c}. 
