% Chapter 6 - Combining Recommendations

\chapter{Combining Recommendations}
\label{chapter6}
\lhead{Chapter 6. \emph{Combining Recommendations}}



In Chapters \ref{chapter4} and \ref{chapter5} we learned that combinations of different algorithms and representations tend to outperform individual approaches. Guided by our third research question, we examine this phenomenon in more detail in this chapter.

\begin{center}\begin{tabularx}{0.9\linewidth}{lX}  
  {\bf RQ 3}   & Can we improve performance by combining the recommendations 
                 generated by different algorithms? \\
\end{tabularx}\end{center}

The problem of effective item recommendation is too complex for any individual solution to capture in its entirety, and we expect that by combining different aspects of this problem we can produce better recommendations. Earlier, we saw evidence for the potential success of combination in the approach by \cite{Tso-Sutter:2008} who successfully fused together the predictions of different algorithms using different representations of the data. We already proposed two combination approaches ourselves in the previous two chapters: our similarity fusion algorithm in Subsection \ref{4:subsec:similarity-fusion} and our hybrid filtering algorithm in Subsection \ref{5:subsec:hybrid-filtering}. On some data sets these produced superior recommendations, but the results were not conclusive. This naturally leads us to the following subquestion.

\begin{center}\begin{tabularx}{0.9\linewidth}{lX}
  {\bf RQ 3a}   & What is the best recipe for combining the different 
                  recommendation algorithms? \\
\end{tabularx}\end{center}

In this chapter we will examine the possibilities of data fusion\footnote{The term `data fusion' can be ambiguous to a certain extent. In this chapter, we take it to mean output fusion, i.e., fusing the recommendation lists from different runs, analogous to the use of the term in the field of IR.} in more detail. Instead of augmenting or combining features for recommendation, we will examine the effectiveness of combining the output of different recommendation runs (RQ 3), and compare the results against the other fusion variants we have proposed so far (RQ 3a).

Chapter \ref{chapter6} is organized as follows. We start in Section \ref{6:sec:related-work} by discussing related work on data fusion in the fields of recommender systems, IR, and machine learning, and highlight some of the reasons why fusion is often successful. Then, in Sections \ref{6:sec:fusing-recommendations} and \ref{6:sec:run-selection}, we describe our approach to fusing recommendations and which individual runs we select for this. Section \ref{6:sec:results} then describes our results, addressing RQ 3 and RQ 3a, and compares them against the similarity fusion and hybrid filtering techniques proposed in the previous two chapters. We end with a discussion of the results in Section \ref{6:sec:discussion}.





\section{Related Work}
\label{6:sec:related-work}

We start in Subsection \ref{6:subsec:fusing-recommendations} by discussing related work on fusing different recommendation algorithms to improve upon the performance of the individual algorithms. Then, in Subsection \ref{6:subsec:other-fields}, we discuss successful approaches to data fusion in two other fields: machine learning and IR. We conclude in Subsection \ref{6:subsec:why-does-it-work} by looking at the underlying  explanations for the success of data fusion to aid in the analysis of our own experiments with combining recommendations.





\subsection{Fusing Recommendations}
\label{6:subsec:fusing-recommendations}

In the past decade, the field of recommender systems has seen several approaches to combining different recommendation algorithms. \cite{Burke:2002} presented a taxonomy of seven methods for creating hybrid recommendation algorithms, which we reproduce here in Table \ref{table:hybridization-methods}. We briefly describe them below.

\begin{table}[htp]
  \caption[A taxonomy of recommender system combination methods]{A taxonomy of recommender system combination methods, as given by \cite{Burke:2002}.}
  \begin{center}
  \begin{footnotesize}
  \begin{tabularx}{0.84\linewidth}{l|X}
  \hline
  {\bf Hybridization method} & {\bf Description}  \\
  \hline
  \hline
  Mixed                & Recommendations from several different recommenders 
                         are presented at the same time. \\
  Switching            & The system switches between recommendation techniques 
                         depending on the current situation. \\
  Feature combination  & Features from different recommendation data sources 
                         are thrown together into a single recommendation 
                         algorithm. \\
  Cascade              & One recommender refines the recommendations given by 
                         another. \\
  Feature augmentation & The output from one technique is used as an input
                         feature to another technique. \\
  Meta-level           & The model learned by one recommender is used as input 
                         to another. \\
  Weighted             & The scores of several recommendation techniques are  
                         combined together to produce a single 
                         recommendation. \\
  \hline
  \end{tabularx}
  \end{footnotesize}
  \end{center}
  \label{table:hybridization-methods}
\end{table}

The {\em mixed} hybridization method, arguably one of the most straightforward methods, presents all outputs of the different individual algorithms at the same time. The practical applicability of this technique is dependent on the scenario in which recommendations have to be produced; if a single results list is called for, then the recommendations will have to be merged. A {\em switching} hybrid algorithm switches between the different component algorithms based on certain criteria. For instance, the cold-start problem we mentioned in \mbox{Subsection \ref{2:subsec:collaborative-filtering}} could be a reason to base initial recommendations on the output of a content-based filtering algorithm. As soon as sufficient ratings are collected from the users, the system could then switch to a CF algorithm. In a {\em feature combination} approach, features from different types of algorithms (i.e., collaborative, content-based, or knowledge-based information) are combined and used as the input feature set for a single recommendation algorithm. Our similarity fusion approach from \mbox{Subsection \ref{4:subsec:similarity-fusion}} is an example of a feature combination approach, as it uses both collaborative and topic-based information in the form of tags.

\citeauthor{Burke:2002} also describes two hybridization approaches that sequence two different recommendation algorithms. In the {\em cascaded} hybrid approach, one recommendation algorithm is first used to produce a coarse ranking of the candidate items, and the second algorithm then refines or re-ranks this candidate set into the final list of recommendations. In contrast, in a {\em feature augmented} hybrid algorithm one technique is employed to produce a rating of an item, after which that rating is used as an input feature for the next recommendation technique. As mentioned before, we view our hybrid filtering approach from \mbox{Subsection \ref{5:subsec:hybrid-filtering}} as a feature augmentation approach. 
The {\em meta-level} hybrid approach takes this a step further by using the entire model generated by the first algorithm as input for the second algorithm. Finally, a popular and straightforward way of combining algorithms is by producing a {\em weighted combination} of the output lists of the individual algorithms, where the different algorithms are all assigned separate weights. 

While each of these seven combination methods has its merits and drawbacks, there has not been a systematic comparison of them on the same data sets using the same experimental setup. The lack of such a comparison makes it difficult to draw conclusions about which method is most effective in which situation.

Most of the related work on recommender systems fusion has focused on combining content-based filtering with collaborative filtering. We discussed the most important of these approaches in Section \ref{5:sec:related-work}, and these range from feature combination \citep{Basu:1998} and feature augmentation approaches \citep{Mooney:2000} to weighted combination algorithms. The latter combination method is the one we will examine in more detail in this chapter, and it is the focus of the remainder of this subsection. For instance, \cite{Claypool:1999} presented a weighted hybrid recommender system that calculated a weighted average of the output of two separate CF and content-based filtering components. The CF component receives a stronger weight as the data set grows denser, gradually phasing out the influence of the content-based component. They did not find any significant differences between the performance of the separate components or the combined version. \cite{Pazzani:1999} combined three different recommendation algorithms: a CF algorithm, content-based filtering, and recommendation based on demographic information. They then used a majority-voting scheme to generate the final recommendations, which increased precision over the individual approaches. Finally, the tag-aware fusion approach by \cite{Tso-Sutter:2008} that we examined earlier is also an example of a weighted hybrid recommender system, as it calculates a linearly weighted combination of separate user-based and item-based filtering predictions. 
This fusion approach was originally inspired by the work of \cite{Wang:2006}, who formulated a generative probabilistic framework for memory-based CF. \citeauthor{Wang:2006}\ generated the final ratings by fusing predictions from three sources: (1) ratings of the same item by other users (i.e., user-based filtering); (2) different item ratings made by the same user (i.e., item-based filtering); and (3) ratings from other but similar users for other but similar items. Their original model showed a significant performance increase compared to standard user-based and item-based filtering, and an improved resistance to rating sparseness.





\subsection{Data Fusion in Machine Learning and IR}
\label{6:subsec:other-fields}

Data fusion has been shown to improve performance not only in the field of recommender systems, but also in related fields. We discuss related work in two such fields: machine learning and IR. 


\shrink


\paragraph{Machine Learning}

In machine learning we see combination and hybridization methods similar to those discussed by \cite{Burke:2002}, but under different names. We discuss three of the most popular combination methods in machine learning. The first is {\em stacking}, where the output of one classifier serves as an input feature for the next classifier \citep{Wolpert:1992}. This method is known to be capable of recognizing and correcting recurring errors of the first-stage classifier, and corresponds to \citeauthor{Burke:2002}'s feature augmentation method. A second combination method is {\em bagging}, which generates multiple versions of a predictor by making bootstrap replicates of the original data set \citep{Breiman:1996}. The set of predictors is then used to obtain an aggregated predictor by majority voting when predicting an output class. Bagging is similar to \citeauthor{Burke:2002}'s weighted combination method. Third, {\em boosting} involves learning an optimally weighted combination of a set of weak classifiers to produce a strong classifier \citep{Freund:1996}. The process is iterative: repeatedly misclassified instances receive a higher weight in the next round of learning a weak classifier, so that the resulting strong classifier covers the entire data set correctly. 
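As a concrete illustration of the bagging idea, the sketch below trains predictors on bootstrap replicates of a training set and aggregates their predictions by majority vote. The function names and the assumed data format (a list of labeled examples, with `fit` returning a callable model) are our own illustrative choices, not taken from the cited work.

```python
import random
from collections import Counter

def bootstrap(data):
    """Sample |data| examples from data, with replacement."""
    return [random.choice(data) for _ in data]

def bagged_predict(train, fit, x, n_models=25):
    """Bagging sketch: fit(data) -> model, model(x) -> class label.

    Trains n_models predictors on bootstrap replicates and returns
    the majority-vote prediction for input x.
    """
    models = [fit(bootstrap(train)) for _ in range(n_models)]
    votes = Counter(model(x) for model in models)
    return votes.most_common(1)[0][0]
```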


\shrink


\paragraph{Information Retrieval}

Throughout this thesis we have taken an IR perspective on recommendation in our evaluation and algorithm design; here, we turn our attention to related work on data fusion in IR. An important distinction to make is the one between {\em results fusion}, where the results of {\em different} retrieval algorithms on the {\em same} collection are combined, and {\em collection fusion}, where the results of one or more algorithms on {\em different} document collections are integrated into a single results list. We are not interested in the latter approach, and refer the interested reader to, for instance, \cite{Voorhees:1995} for more information. 

In IR, there are two prevalent approaches to results fusion: (1) combining retrieval runs that were generated using {\em different query representations} but with the same algorithm, or (2) combining retrieval runs that were generated using the same query, but with {\em different algorithms}. In our social bookmarking scenario, the first type of data fusion corresponds to using different representations of the user profile for recommendations---such as transaction patterns, tagging behavior, or assigned metadata---and then combining those different recommendation runs. The second approach corresponds to combining different recommendation algorithms---such as CF and content-based filtering---and fusing those predicted items into a single list of recommended items. Most approaches in IR also fall in one of these two categories, with some approaches spanning both. Our contribution in this chapter is that we will investigate the usefulness of both approaches for social bookmarking recommendation in answering RQ 3 and RQ 3a.
  
The earliest approaches to data fusion in IR stem from the 1990s when \cite{Belkin:1993} investigated the effect of combining the result lists retrieved using different query representations of the same information need. They showed that progressive combination of query formulations leads to progressively improving retrieval performance, and extended their own work in \cite{Belkin:1995} with an additional set of experiments confirming their earlier findings. Later work on combining different query and document representations includes the work by \cite{Ingwersen:2005}, who view the fusion problem from a cognitive IR perspective. They formulated the principle of {\em polyrepresentation} in which each query or document representation, searcher, and retrieval model can be seen as a different representation of the same retrieval process \citep{Ingwersen:1994, Ingwersen:1996}. The validity of this principle has been confirmed for queries, documents, searchers, and retrieval algorithms in \cite{Skov:2008} and \cite{Larsen:2009}.

Some of the earliest work on fusing the results of different retrieval algorithms includes \cite{Croft:1987}, who fused a probabilistic retrieval model with a vector space model. \cite{Bartell:1994} also examined results fusion using different retrieval algorithms. They proposed a linear combination of retrieval runs using different variants of the same IR algorithm, and showed significant improvements over the individual runs. \cite{Vogt:1998} later revisited this work and used linear regression to determine the optimal combination of run weights. \cite{Fox:1994} investigated a set of unweighted combination methods that have become standard methods for data fusion in IR. They tested three basic combination methods, \combmax, \combmin, and \combmed, that respectively take the maximum, minimum, and median similarity values of a document from among the different runs. In addition, they proposed three methods, \combsum, \combmnz, and \combanz, that have consistently been shown to provide good data fusion results. The \combsum\ method fuses runs by taking the sum of the similarity values for each document separately; the \combmnz\ and \combanz\ methods do the same, but respectively boost and discount this sum by the number of runs that actually retrieved the document. \cite{Fox:1994} showed that the latter three methods were among the best-performing fusion methods. This work was later extended by \cite{Lee:1997} with a more thorough analysis; they found that \combsum\ and \combmnz\ were again the best-performing methods. Periodically, these combination methods have been re-examined in different settings, such as monolingual retrieval \citep{Kamps:2004}, or against different probabilistic fusion methods \citep{Croft:2000, Aslam:2001, Renda:2003}. The \combsum\ and \combmnz\ methods have been shown to consistently improve upon the performance of the individual retrieval runs.





\subsection{Why Does Fusion Work?}
\label{6:subsec:why-does-it-work}

In this section, we introduce various reasons that have been proposed in the related work to explain the success of data fusion. Because of our IR perspective on recommendation, we focus exclusively on explanations from this field. We will use these explanations later on in the analysis of our experiments to explain what is happening in recommender systems fusion.

\cite{Belkin:1993} argue that the success of query result fusion is due to the fact that the problem of effective representation and retrieval is so complex that individual solutions can never capture its complexity entirely. By combining different representations and retrieval models, more aspects of this complex situation are addressed and thus more relevant documents will be retrieved. This is similar to the explanation from the polyrepresentation point of view \citep{Ingwersen:2005}, which states that using different representations and retrieval models will retrieve different sets of information objects from the same collection of objects with a certain amount of overlap. Merging cognitively different representations and retrieval models corresponds to modeling different aspects of the task as suggested by \cite{Belkin:1993}, and the overlapping documents are therefore seen as more likely to be relevant.

The latter effect of overlapping documents having a higher likelihood of being relevant was dubbed the {\em Chorus} effect by \cite{Vogt:1998}. The {\em Chorus} effect is also related to what \citeauthor{Vogt:1998}\ define as the {\em Skimming} effect:

\begin{fquote}
  The Skimming Effect happens when retrieval approaches that represent their
  collection items differently may retrieve different relevant items, so that a 
  combination method that takes the top-ranked items from each of the retrieval 
  approaches will push non-relevant items down in the ranking.
\end{fquote}

A third explanation by \citeauthor{Vogt:1998}\ for the success of fusion, which can conflict with the previous two, is the {\em Dark Horse} effect: certain retrieval models might be especially suited to retrieving specific types of relevant items compared to other approaches. 

\cite{Spoerri:2007} describes two of these effects, the {\em Chorus} and {\em Skimming} effects, under different names: the {\em Authority} effect and the {\em Ranking} effect. The {\em Authority} effect describes the phenomenon that the potential relevance of a document increases with the number of systems retrieving it, while the {\em Ranking} effect describes the observation that documents that appear higher up in ranked lists and are found by more systems are more likely to be relevant. \citeauthor{Spoerri:2007} provides empirical evidence for these effects as the cause of the success of fusion, and shows that the two effects can be observed regardless of the type of query representation. 

It is clear from the above definitions that these effects can conflict with each other. For instance, although one algorithm might be particularly well-suited for retrieving specific types of relevant items (i.e., the {\em Dark Horse} effect), these items might be pushed down too far in the ranking by other items, relevant or not, that occur in multiple retrieval runs (i.e., the {\em Chorus} effect).





\section{Fusing Recommendations}
\label{6:sec:fusing-recommendations}

When combining the output of different recommendation algorithms, a decision needs to be made about what to combine: the scores or ratings assigned to the recommended items, or the ranks of the items in the list of recommendations. These two options are commonly referred to as {\em score-based fusion} and {\em rank-based fusion} in the related work. Earlier studies reported the superiority of retrieval scores over document ranks for data fusion \citep{Lee:1997}, but later studies have re-examined this and found few significant differences between the two \citep{Renda:2003}. We opt for score-based fusion in our experiments, since we could find no conclusive evidence that rank-based fusion is better.

The decision between score-based and rank-based fusion can also be seen as a decision about what should be normalized: the item ranks or the item scores. In the field of IR, different retrieval runs can generate wildly different ranges of similarity values, so a normalization method is typically applied to each retrieval result to map its scores into the range $[0, 1]$. We find the same variety in score ranges when fusing different recommendation runs, so we also normalize our recommendation scores. Typically, the original recommendation scores $\mathit{score}_{\mathit{original}}$ are normalized using the maximum and minimum recommendation scores $\mathit{score}_{\mathit{max}}$ and $\mathit{score}_{\mathit{min}}$ according to the formula proposed by \cite{Lee:1997}:

\begin{equation}
  \label{eq:score-normalization}
  \mathit{score}_{\mathit{norm}} = \frac{\mathit{score}_{\mathit{original}} - \mathit{score}_{\mathit{min}}}{\mathit{score}_{\mathit{max}} - \mathit{score}_{\mathit{min}}}.
\end{equation}

Several other normalization methods have also been proposed, such as Z-score normalization and Borda rank normalization \citep{Aslam:2001, Renda:2003}. However, none of these methods have been proven to result in significantly better performance, so we use simple score-based normalization according to Equation \ref{eq:score-normalization}.
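As an illustration, the min-max normalization of Equation \ref{eq:score-normalization} could be implemented as follows. This is a minimal sketch: the run is assumed to be a dictionary mapping items to raw recommendation scores, and the function name is our own illustrative choice.

```python
def normalize_run(scores):
    """Map a run's scores {item: raw_score} into the range [0, 1]
    using the run's own minimum and maximum scores (Lee, 1997)."""
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:  # degenerate run: all items share the same score
        return {item: 1.0 for item in scores}
    return {item: (s - lo) / (hi - lo) for item, s in scores.items()}
```

Each run is normalized separately, so that scores from runs with very different ranges become comparable before fusion.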

We introduced six standard fusion methods in our discussion of the related work in Subsection \ref{6:subsec:other-fields}. For our experiments, we select the three methods that have shown the best performance in related work: \combsum, \combmnz, and \combanz. These standard combination methods are defined as follows. Consider a set of $N$ different recommendation runs $R$ for a specific user that we want to fuse together. Each run $r_{n}$ in the set $R$ consists of a ranking of items, and each item $i$ has a normalized recommendation score $\mathit{score}(i, r_{n})$ in that run $r_{n}$. Let us also define the number of hits of an item in $R$ as $h(i,R) = |\{r \in R : i \in r \}|$, i.e., the number of runs that $i$ occurs in. We can then represent all three combination methods \combsum, \combmnz, and \combanz\ using the following equation:

\begin{equation}
  \mathit{score}_{\mathit{fused}}(i) = h(i,R)^{\gamma} \; \cdot \sum_{n = 1}^{N} \mathit{score}(i, r_{n}).
\end{equation}

The $\gamma$ parameter governs which combination method we use and can take one of three values. Setting $\gamma$ to $0$ yields the \combsum\ method, where we simply take the sum of the scores of the individual runs for an item $i$. Setting $\gamma$ to $1$ yields the \combmnz\ method, where this sum is multiplied by the number of hits $h(i,R)$. Finally, setting $\gamma$ to $-1$ yields the \combanz\ method, where the sum is divided by the number of hits $h(i,R)$; in other words, we calculate the average recommendation score for each item. 
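The three unweighted methods can be sketched as a single function parameterized by $\gamma$, following the equation above. The representation of a run as a dictionary of normalized scores and the function name `fuse` are illustrative assumptions.

```python
def fuse(runs, gamma):
    """Fuse runs (each a dict {item: normalized_score}) into one scored list.

    gamma = 0 -> CombSUM, gamma = 1 -> CombMNZ, gamma = -1 -> CombANZ.
    """
    fused = {}
    for item in {i for run in runs for i in run}:
        hits = sum(1 for run in runs if item in run)     # h(i, R)
        total = sum(run.get(item, 0.0) for run in runs)  # sum of scores
        fused[item] = (hits ** gamma) * total
    return fused
```

Note that items missing from a run simply contribute nothing to the sum; only $h(i,R)$ distinguishes items found by many runs from items found by few.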

These three combination methods are all unweighted, i.e., each run has an equal weight in the fusion process. A common alternative in both recommender systems and IR research, however, is a {\em linear combination} of the individual runs, as proposed by \cite{Bartell:1994} and \cite{Vogt:1998}. The benefit of weighting runs separately is obvious: not every run exhibits the same level of performance, so not every run should receive the same weight in the optimal combination. When we linearly combine runs, each run is assigned a preference weight $w_{n}$ in the range $[0, 1]$. We therefore also test weighted versions of the \combsum, \combmnz, and \combanz\ combination methods, defined as:

\begin{equation}
  \mathit{score}_{\mathit{fused}}(i) = h(i,R)^{\gamma} \; \cdot \sum_{n = 1}^{N} \; w_{n} \cdot \mathit{score}(i, r_{n}).
\end{equation}

When we combine the results of only two recommendation runs, the optimal combination weights can be determined by a simple exhaustive parameter sweep, as we did for our similarity fusion approach in Subsection \ref{4:subsec:similarity-fusion}. When combining more than two runs, however, an exhaustive search for the optimal weights quickly becomes intractable, as its cost is exponential in the number of weights. We therefore used a random-restart hill climbing algorithm to approximate the optimal weights for all our fusion runs \citep{Russel:2003}. We randomly initialized the weights for each run, then varied each weight between 0 and 1 in increments of 0.1, selecting the value for which the MAP score was maximized before continuing with the next weight. The order in which run weights were optimized was randomized, and we repeated the optimization process until the settings converged. Because simple hill climbing is susceptible to local maxima, we repeated this process with 100 random restarts. We used our 10-fold cross-validation setup to determine these optimal weights, selected the weights that resulted in the best performance, and generated the recommendations on our final test set using these optimal weights.
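The weight optimization described above can be sketched as follows. Here `evaluate` is a hypothetical stand-in for computing the MAP score of a weighted fusion on the cross-validation folds; the function names and defaults are illustrative, not our actual implementation.

```python
import random

def hill_climb(n_runs, evaluate, restarts=100, step=0.1):
    """Random-restart hill climbing over per-run weights in [0, 1].

    evaluate(weights) -> quality score to maximize (e.g., MAP).
    """
    grid = [round(i * step, 1) for i in range(int(1 / step) + 1)]  # 0.0 .. 1.0
    best_w, best_score = None, float("-inf")
    for _ in range(restarts):
        w = [random.choice(grid) for _ in range(n_runs)]   # random init
        improved = True
        while improved:                                    # until convergence
            improved = False
            for j in random.sample(range(n_runs), n_runs): # random order
                for cand in grid:          # sweep one weight, fix the rest
                    trial = w[:j] + [cand] + w[j + 1:]
                    if evaluate(trial) > evaluate(w):
                        w, improved = trial, True
        if evaluate(w) > best_score:
            best_w, best_score = w, evaluate(w)
    return best_w, best_score
```

Each restart performs coordinate-wise sweeps until no single-weight change improves the score; the best weights over all restarts are kept.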





\section{Selecting Runs for Fusion}
\label{6:sec:run-selection}

After deciding {\em how} to fuse recommendation runs together in the previous section, we need to determine {\em which} runs to fuse. Here, we are guided by the intuition brought forward in the work of \cite{Belkin:1993} and \cite{Ingwersen:2005}, who argue that recommendations generated using cognitively dissimilar representations and algorithms, i.e., ones that touch upon different aspects of the item recommendation process, yield the best fused results. We consider two aspects of recommendation in our selection of recommendation runs: representations and algorithms. For instance, we consider item-based filtering runs that use transaction patterns as a source of item similarity to be the same algorithmically as item-based filtering runs that use tag overlap similarities, but different in the way they represent the users and items in the system. Our two hybrid filtering approaches, on the other hand, use the same metadata representation of the system content, but are different algorithms\footnote{Even though they are both memory-based algorithms, we consider user-based and item-based filtering to be different algorithms.}. We do not consider fusing runs that do not differ on at least one of these dimensions. For instance, combining two user-based filtering runs that both use transaction patterns for finding the nearest neighbors and only differ in the type of weighting is not likely to result in improved recommendations after fusion. Table \ref{table:fusion-overview} shows which fusion experiments we perform.

\begin{table}[htp]
  \caption[An overview of our fusion experiments]{An overview of our ten fusion experiments. The second and third columns denote if the fusion experiment fuses together runs using different representations or different algorithms respectively.}
  \begin{center}
  \begin{footnotesize}
  \begin{tabularx}{0.95\linewidth}{l|c|c|X}
  \hline
  {\bf Run ID} & {\bf Diff.\ repr.} & {\bf Diff.\ alg.} & {\bf Description}  \\
  \hline
  \hline
  {\bf Fusion A}  & No & Yes & Best user-based and item-based filtering runs based  on usage similarity (from Section \ref{4:sec:cf}). \\
  {\bf Fusion B}  & No & Yes & Best user-based and item-based filtering runs based  on tag overlap similarity (from Subsection \ref{4:subsec:tagging-overlap}). \\
  {\bf Fusion C}  & Yes & Yes & Best usage-based and tag overlap runs together.\\
  \hline
  {\bf Fusion D}  & No & Yes & Best content-based filtering runs (from Subsection \ref{5:subsec:content-based-filtering}). \\
  {\bf Fusion E}  & No & Yes & Best user-based and item-based filtering runs based on metadata-based similarity (from Subsection \ref{5:subsec:hybrid-filtering}). \\
  {\bf Fusion F}  & Yes & Yes & Best content-based and hybrid filtering runs together.\\
  \hline
  {\bf Fusion G}  & Yes & Yes & Best folksonomic and metadata-based runs together. \\
  \hline
  {\bf Fusion H}  & Yes & Yes & All four best runs from fusion experiments A and B together. \\
  {\bf Fusion I}  & Yes & Yes & All four best runs from fusion experiments D and E together. \\
  {\bf Fusion J}  & Yes & Yes & All eight best runs from fusion experiments A, B, D, and E.\\
  \hline
  \end{tabularx}
  \end{footnotesize}
  \end{center}
  \label{table:fusion-overview}
\end{table}

In the first seven of our fusion experiments we fuse only pairs of best-performing runs within a specific set of runs. From Chapter \ref{chapter4}, for instance, we combine the best user-based and item-based runs for each of the two similarity representations: usage-based and tag overlap. We also combine the best runs of the two representation types. We perform three similar fusion experiments for Chapter \ref{chapter5}. Fusion experiment G then combines the best runs from Chapters \ref{chapter4} and \ref{chapter5}. In addition to these seven pairwise fusion runs, we also experiment with fusing the four best runs from each chapter together, in fusion experiments H and I respectively. Finally, we fuse all eight best runs from Chapters \ref{chapter4} and \ref{chapter5} together. This results in a total of ten experiments.





\section{Results}
\label{6:sec:results}

\setlength{\fboxsep}{2pt}
\begin{table}[htp]
  \caption[Results of our fusion experiments]{Results of our fusion experiments. Reported are the MAP scores; the best-performing fusion methods for each set of fusion experiments are printed in bold. Boxed runs are the best overall. Significant differences are calculated with respect to the best individual run of each fusion experiment. The percentage difference between the best fusion experiment and the best individual run from the previous chapters is indicated in the bottom row.}
  \shrink  % SHRINK WHITE SPACE TO FIT BETTER ON PAGE
  \begin{center}
  \begin{scriptsize}
  \begin{tabular}{l|l||c|c||c|c}
  \hline
   \multirow{2}{*}{{\bf Run}} & \multirow{2}{*}{{\bf Method}} & \multicolumn{2}{c||}{{\bf bookmarks}} & \multicolumn{2}{c}{{\bf articles}} \\
  \cline{3-6}
  ~  & ~ & {\bf BibSonomy} & {\bf Delicious} & {\bf BibSonomy} & {\bf CiteULike}  \\
  \hline
  \hline
  \multirow{6}{*}{{\bf Fusion A}} &  \combsum       & 0.0282\nosign  & 0.0050\nosign  & 0.0910\nosign  & 0.0871\nosign \\ 
  ~       &  \combmnz       & 0.0249\nosign  & {\bf 0.0065}\nosign  & 0.0924\nosign  & 0.0871\nosign \\ 
  ~       &  \combanz       & 0.0175\nosign  & 0.0043\nosign  & 0.0687\nosign  & 0.0691\nosign \\ 
  ~       & Weighted \combsum & {\bf 0.0362}\nosign  & 0.0056\nosign  & 0.0995\nosign  & {\bf 0.0949}\uptriangle  \\ 
  ~       & Weighted \combmnz   & 0.0336\nosign  & {\bf 0.0065}\nosign  & {\bf 0.1017}\nosign  & 0.0947\uptriangle  \\ 
  ~       & Weighted \combanz   & 0.0303\nosign  & 0.0043\nosign  & 0.0924\nosign  & 0.0934\uptriangle  \\ 
  \hline
  \multirow{6}{*}{{\bf Fusion B}} &  \combsum       & 0.0360\nosign          & 0.0024\nosign        & 0.1062\nosign        & 0.0788\nosign \\ 
  ~       &  \combmnz                               & 0.0350\nosign          & 0.0032\nosign        & 0.1104\nosign        & 0.0801\nosign \\ 
  ~       &  \combanz                               & 0.0245\nosign          & 0.0023\nosign        & 0.0904\nosign        & 0.0560\downtriangle \\ 
  ~       & Weighted \combsum                       & 0.0374\nosign          & 0.0102\nosign        & 0.1171\nosign        & 0.0945\uptriangle \\ 
  ~       & Weighted \combmnz                       & {\bf 0.0434}\nosign    & 0.0093\nosign        & {\bf 0.1196}\nosign  & {\bf 0.0952}\uptriangle \\ 
  ~       & Weighted \combanz                       & 0.0314\nosign          & {\bf 0.0105}\nosign  & 0.1028\nosign        & 0.0798\nosign \\ 
  \hline
  \multirow{6}{*}{{\bf Fusion C}} &  \combsum       & 0.0424\nosign          & 0.0102\nosign               & 0.1543\nosign          & 0.1235\upblack \\ 
  ~       &  \combmnz                               & 0.0389\nosign          & 0.0061\nosign               & 0.1453\nosign          & 0.1239\upblack \\ 
  ~       &  \combanz                               & 0.0229\nosign          & 0.0057\nosign               & 0.0787\nosign          & 0.0896\nosign \\ 
  ~       & Weighted \combsum                       & {\bf 0.0482}\nosign    & 0.0109\nosign               & {\bf 0.1593}\nosign    & 0.1275\upblack \\ 
  ~       & Weighted \combmnz                       & 0.0477\nosign          & \fbox{{\bf 0.0115}}\nosign  & 0.1529\nosign          & {\bf 0.1278}\upblack \\ 
  ~       & Weighted \combanz                       & 0.0305\nosign          & 0.0089\nosign               & 0.1262\nosign          & 0.0973\nosign \\ 
  \hline
  \multirow{6}{*}{{\bf Fusion D}} &  \combsum       & 0.0322\nosign          & 0.0020\nosign          & 0.1273\nosign          & 0.0883\downtriangle \\ 
  ~       &  \combmnz                               & 0.0320\nosign          & 0.0021\nosign          & 0.1273\nosign          & 0.0884\downtriangle \\ 
  ~       &  \combanz                               & 0.0257\nosign          & 0.0013\nosign          & 0.0142\nosign          & 0.0112\downblack \\ 
  ~       & Weighted \combsum                       & {\bf 0.0388}\nosign    & 0.0035\nosign          & {\bf 0.1303}\nosign    & 0.1005\nosign \\ 
  ~       & Weighted \combmnz                       & 0.0387\nosign          & 0.0037\nosign          & 0.1302\nosign          & {\bf 0.1008}\nosign \\ 
  ~       & Weighted \combanz                       & 0.0311\nosign          & {\bf 0.0038}\nosign    & 0.1127\nosign          & 0.0371\nosign \\ 
  \hline
  \multirow{6}{*}{{\bf Fusion E}} &  \combsum       & 0.0410\nosign        & {\bf 0.0051}\nosign  & 0.1314\nosign        & 0.0889\nosign \\ 
  ~       &  \combmnz                               & 0.0371\nosign        & 0.0037\nosign        & 0.1349\nosign        & 0.0926\uptriangle \\ 
  ~       &  \combanz                               & 0.0247\nosign        & 0.0036\nosign        & 0.0636\nosign        & 0.0464\downtriangle \\ 
  ~       & Weighted \combsum                       & {\bf 0.0514}\nosign  & {\bf 0.0051}\nosign  & 0.1579\nosign        & 0.0908\upblack \\ 
  ~       & Weighted \combmnz                       & 0.0473\nosign        & 0.0043\nosign        & {\bf 0.1596}\nosign  & {\bf 0.0945}\upblack \\ 
  ~       & Weighted \combanz                       & 0.0323\nosign        & 0.0042\nosign        & 0.1028\nosign        & 0.0636\downblack \\ 
  \hline
  \multirow{6}{*}{{\bf Fusion F}} &  \combsum       & 0.0418\nosign        & 0.0049\nosign        & 0.1590\nosign        & 0.1117\upblack \\ 
  ~       &  \combmnz                               & 0.0415\nosign        & 0.0050\nosign        & 0.1593\nosign        & 0.1099\uptriangle \\ 
  ~       &  \combanz                               & 0.0142\nosign        & 0.0023\nosign        & 0.0313\downtriangle  & 0.0284\downblack \\ 
  ~       & Weighted \combsum                       & 0.0492\nosign        & 0.0051\nosign        & {\bf 0.1600}\nosign  & 0.1127\upblack \\ 
  ~       & Weighted \combmnz                       & {\bf 0.0494}\nosign  & {\bf 0.0056}\nosign  & 0.1599\nosign        & {\bf 0.1136}\uptriangle \\ 
  ~       & Weighted \combanz                       & 0.0379\nosign        & 0.0038\nosign        & 0.1475\nosign        & 0.0699\nosign \\ 
  \hline
  \multirow{6}{*}{{\bf Fusion G}} &  \combsum       & 0.0470\nosign        & 0.0051\nosign        & 0.1468\nosign        & 0.1511\upblack \\ 
  ~       &  \combmnz                               & 0.0472\nosign        & 0.0046\nosign        & 0.1404\nosign        & 0.1448\upblack \\ 
  ~       &  \combanz                               & 0.0235\nosign        & 0.0051\nosign        & 0.1368\nosign        & 0.0084\downblack \\ 
  ~       & Weighted \combsum                       & 0.0524\nosign        & 0.0102\nosign        & {\bf 0.1539}\nosign  & \fbox{{\bf 0.1556}\upblack} \\ 
  ~       & Weighted \combmnz                       & {\bf 0.0539}\nosign  & {\bf 0.0109}\nosign  & 0.1506\nosign        & 0.1478\upblack \\ 
  ~       & Weighted \combanz                       & 0.0421\nosign        & 0.0098\nosign        & 0.1430\nosign        & 0.0866\nosign \\ 
  \hline
  \hline
  \multirow{6}{*}{{\bf Fusion H}} &  \combsum       & 0.0441\nosign        & 0.0049\nosign        & 0.1137\nosign        & 0.1064\nosign \\ 
  ~       &  \combmnz                               & 0.0463\nosign        & 0.0047\nosign        & 0.1129\nosign        & 0.1117\upblack \\ 
  ~       &  \combanz                               & 0.0134\nosign        & 0.0041\nosign        & 0.0627\nosign        & 0.0540\downtriangle \\ 
  ~       & Weighted \combsum                       & {\bf 0.0619}\nosign  & 0.0077\nosign        & {\bf 0.1671}\nosign  & 0.1276\upblack \\ 
  ~       & Weighted \combmnz                       & 0.0616\nosign        & {\bf 0.0092}\nosign  & 0.1409\nosign        & {\bf 0.1286}\upblack \\ 
  ~       & Weighted \combanz                       & 0.0247\nosign        & 0.0069\nosign        & 0.1063\nosign        & 0.0901\nosign \\ 
  \hline
  \multirow{6}{*}{{\bf Fusion I}} &  \combsum       & 0.0507\nosign        & 0.0042\nosign        & 0.1468\nosign               & 0.1057\nosign \\ 
  ~       &  \combmnz                               & 0.0502\nosign        & 0.0035\nosign        & 0.1479\nosign        & 0.1077\nosign \\ 
  ~       &  \combanz                               & 0.0171\nosign        & 0.0027\nosign        & 0.0049\nosign        & 0.0084\downblack \\ 
  ~       & Weighted \combsum                       & {\bf 0.0565}\nosign  & {\bf 0.0065}\nosign  & {\bf 0.1749}\nosign  & {\bf 0.1188}\upblack \\ 
  ~       & Weighted \combmnz                       & 0.0559\nosign        & 0.0052\nosign        & 0.1716\nosign        & 0.1157\uptriangle \\ 
  ~       & Weighted \combanz                       & 0.0307\nosign        & 0.0033\nosign        & 0.1206\nosign        & 0.0454\downblack \\ 
  \hline
 \multirow{6}{*}{{\bf Fusion J}} &  \combsum        & 0.0507\nosign        & 0.0056\nosign        & 0.1617\nosign & 0.1211\nosign\\ 
  ~       &  \combmnz                               & 0.0502\nosign        & 0.0062\nosign        & 0.1613\nosign  & 0.1260\uptriangle \\ 
  ~       &  \combanz                               & 0.0085\nosign        & 0.0027\nosign        & 0.0163\downtriangle  & 0.0163\downtriangle \\
  ~       & Weighted \combsum                       & 0.0681\nosign        & {\bf 0.0090}\nosign  & \fbox{{\bf 0.1983}\uptriangle}  & {\bf 0.1531}\upblack \\ 
  ~       & Weighted \combmnz                       & \fbox{{\bf 0.0695}}\nosign  & 0.0086\nosign & 0.1904\uptriangle  & 0.1381\upblack \\ 
  ~       & Weighted \combanz                       & 0.0309\nosign        & 0.0039\nosign        & 0.0954\nosign  & 0.0589\nosign \\ 
  \hline
  \hline
  \multicolumn{2}{l||}{\% Change over best individual run} & +72.9\%     & +13.9\%     & +31.3\%     & +57.6\%  \\ 
  \hline
  \end{tabular}
  \end{scriptsize}
  \end{center}
  \label{table:results-fusion}
\end{table}

Table \ref{table:results-fusion} lists the results of our fusion experiments. What we see is that, overall, fusing recommendation results is successful: in 36 out of 40 fusion runs we find a performance increase over the best individual runs. The best fusion runs on our four data sets show improvements ranging from 14\% to 73\% in MAP scores. When we take a look at the three different fusion methods, we see that \combsum\ and \combmnz\ both consistently provide good performance, each outperforming the other in about half of the cases; the difference between the two is never significant. In contrast, the \combanz\ method performs poorly across the board: only on the \dd\ data set does it achieve reasonable performance in some cases. The \combanz\ method performs especially poorly on the \dc\ data set, where in many cases it is significantly worse than the individual runs. When we compare the unweighted combination methods against their weighted counterparts, we find that the latter consistently outperform the unweighted fusion approaches. In some cases these differences are statistically significant, as in fusion run J for \dba\ and \dc. Here, the weighted \combsum\ runs achieve the overall best MAP scores of 0.1983 and 0.1556 respectively, which are significantly higher than the unweighted \combsum\ runs at 0.1617 ($p = 0.04$) and 0.1211 ($p = 1.9 \cdot 10^{-6}$). This confirms the findings of others such as \cite{Vogt:1998} and \cite{Kamps:2004}, who found similar advantages of weighted combination methods. Typically, the best-performing component runs are assigned higher weights than the other runs. The optimal weights for the 120 different weighted combination runs are included in Appendix \ref{app:optimal-weights}.
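To make the combination methods concrete, the following is a minimal sketch of the three unweighted methods and their weighted variants. It is an illustration rather than the implementation used in our experiments, and it assumes each run's scores have already been normalized to a common range.

```python
from collections import defaultdict

def fuse(runs, weights=None, method="combsum"):
    """Fuse ranked recommendation runs into a single ranking.

    runs    -- list of dicts mapping item -> normalized score
    weights -- optional per-run weights; uniform weights give the
               unweighted variants
    method  -- "combsum", "combmnz", or "combanz"
    """
    weights = weights or [1.0] * len(runs)
    totals = defaultdict(float)   # (weighted) sum of scores per item
    hits = defaultdict(int)       # number of runs retrieving the item
    for weight, run in zip(weights, runs):
        for item, score in run.items():
            totals[item] += weight * score
            hits[item] += 1
    if method == "combsum":       # sum of scores
        fused = dict(totals)
    elif method == "combmnz":     # sum times the number of runs with a hit
        fused = {i: s * hits[i] for i, s in totals.items()}
    elif method == "combanz":     # sum divided by the number of runs with a hit
        fused = {i: s / hits[i] for i, s in totals.items()}
    else:
        raise ValueError("unknown method: " + method)
    # Items ranked by fused score, best first.
    return sorted(fused, key=fused.get, reverse=True)

run1 = {"a": 0.9, "b": 0.4}
run2 = {"b": 0.5, "c": 0.3}
print(fuse([run1, run2], method="combmnz"))  # ['b', 'a', 'c']: b is rewarded for overlap
print(fuse([run1, run2], method="combanz"))  # ['a', 'b', 'c']: b's overlap is averaged away
```

Note how \combmnz\ promotes the item retrieved by both runs, whereas \combanz\ averages that evidence away; this mirrors the behavior of the two methods in Table \ref{table:results-fusion}.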

We considered two different recommendation aspects when deciding which runs to combine: representations and algorithms. Pairwise fusion runs A, B, D, and E all combine different algorithms that use the same representations, whereas runs C, F, and G combine pairs of runs that vary in both representation and algorithm. The results show that the latter type of fusion, where different recommendation aspects are combined, consistently performs better than fusion where only one of the aspects is varied among the paired runs. We observe the same pattern for runs H, I, and J, which combine more than two runs at a time. Run J combines eight different runs, representing six different algorithms and four different types of representations, and achieves the best overall MAP scores on three of our four data sets. On the \dd\ data set, though, we find that increasing the number of combined runs is not always beneficial: the pairwise fusion runs B, C, and G perform better than the fusion runs that combine more individual runs.

In the next subsection we extend our analysis from determining which weighted combination methods work best to why these methods provide such superior performance (Subsection \ref{6:subsec:fusion-analysis}). Then, in Subsection \ref{6:subsec:comparing-all-methods}, we compare the best weighted combination methods to the similarity fusion and hybrid filtering techniques introduced in Subsection \ref{4:subsec:similarity-fusion} and Subsection \ref{5:subsec:hybrid-filtering} respectively.





\subsection{Fusion Analysis}
\label{6:subsec:fusion-analysis}

While it is useful to determine exactly which combination methods provide the best performance, it would be equally interesting and useful to find out what exactly makes fusion outperform the individual runs. \cite{Belkin:1993} provides two different rationales for the success of fusion approaches, which we already mentioned in \mbox{Section \ref{6:sec:related-work}}. The first is a {\em precision-enhancing effect}: when multiple runs that model different aspects of the task are combined, the overlapping set of retrieved items is more likely to be relevant. In other words, more evidence for the relevance of an item to a user translates to ranking that item with higher precision. The second rationale is a {\em recall-enhancing effect}, which describes the phenomenon that multiple runs modeling different aspects will retrieve different sets of relevant items. Fusing these individual runs merges those sets and thereby increases the recall of relevant items. 

Let us zoom in on two of the more successful combination runs and analyze why we see the improvements that we do. We select two runs that significantly improve over their individual component runs: (1) fusion run G for \dc, where we combine the best folksonomic recommendation run with the best metadata-based run, and (2) fusion run J for \dba, where we combine the eight best individual runs, i.e., the component runs of fusion experiments A, B, D, and E. Both runs combine different algorithms and different representation types. In our analysis we take an approach similar to \cite{Kamps:2004}, who analyzed the effectiveness of different combination strategies for different European languages. We manipulate our results in two different ways before the MAP scores are calculated, to highlight the improvements in precision and recall due to fusion. Each time, we compare the fused run with each of the individual runs separately to determine the effects of those runs. 

For identifying the enhancements due to increased precision, we ignore the ranking of items that did not occur in the individual run in our calculation of the MAP scores. This neutralizes the effect of additionally retrieved items and isolates the contribution of items receiving a better ranking \citep{Kamps:2004}. Table \ref{table:fusion-analysis} shows the results of our analysis for the two fusion runs; the fourth column shows the MAP scores attributed to better ranking of the items that were originally present. For example, if we disregard the additionally retrieved items from the fused run in the top table, and only look at the relevant items already retrieved by run 2, we obtain a MAP score of 0.1546. This is an increase of 56.6\% over the original MAP score of 0.0987, which is due to a better ranking of the items retrieved by run 2. 

\begin{table}[htp]
  \caption[Fusion analysis results]{Results of our fusion analysis. For each individual run we report the original MAP score, the total number of relevant retrieved items by the run, the MAP score attributed to enhanced precision, and the MAP score attributed to enhanced recall.}
  \begin{center}
  \begin{footnotesize}
  \begin{tabular}{l||c|c|c|c|c|c}
  \multicolumn{7}{c}{CiteULike -- Fusion run G -- Weighted \combsum}  \\
  \hline
  {\bf Run} & {\bf Original MAP} & {\bf Relevant docs} & \multicolumn{2}{c|}{{\bf Precision-enh.\ MAP}} & \multicolumn{2}{c}{{\bf Recall-enh.\ MAP}} \\
  \hline
  \hline
  Run 1     & 0.0887  & 579  & 0.1343   & +51.3\%    & 0.1084   & +22.1\% \\
  Run 2     & 0.0987  & 791  & 0.1546   & +56.6\%    & 0.0992   & +0.5\%  \\
  \hline
  Fused run & 0.1556  & 791 & \multicolumn{2}{c|}{-} & \multicolumn{2}{c}{-} \\
  \hline
  \multicolumn{7}{c}{~}  \\
  \multicolumn{7}{c}{~}  \\
  \multicolumn{7}{c}{BibSonomy bookmarks -- Fusion run J -- Weighted \combsum} \\
  \hline
  {\bf Run} & {\bf Original MAP} & {\bf Relevant docs} & \multicolumn{2}{c|}{{\bf Precision-enh.\ MAP}} & \multicolumn{2}{c}{{\bf Recall-enh.\ MAP}}  \\
  \hline
  \hline
  Run 1     & 0.0853  &  86  & 0.1889  & +121.5\%   & 0.0937  &   +9.8\%  \\ 
  Run 2     & 0.0726  &  92  & 0.1501  & +106.7\%   & 0.1071  &  +47.5\%  \\ 
  Run 3     & 0.0452  &  55  & 0.1315  & +190.9\%   & 0.1123  & +148.5\%  \\ 
  Run 4     & 0.1084  &  55  & 0.1355  &  +25.0\%   & 0.1685  &  +55.4\%  \\ 
  Run 5     & 0.1261  & 115  & 0.1981  &  +57.1\%   & 0.1263  &   +0.2\%  \\ 
  Run 6     & 0.1173  & 111  & 0.1970  &  +67.7\%   & 0.1186  &   +1.1\%  \\ 
  Run 7     & 0.0404  &  66  & 0.1709  & +323.0\%   & 0.0638  &  +57.9\%  \\ 
  Run 8     & 0.1488  &  90  & 0.1859  &  +24.9\%   & 0.1591  &   +6.9\%  \\
  \hline
  Fused run & 0.1983  & 115 & \multicolumn{2}{c|}{-} & \multicolumn{2}{c}{-} \\
  \hline
  \end{tabular}
  \end{footnotesize}
  \end{center}
  \label{table:fusion-analysis}
\end{table}

For identifying the enhancements due to increased recall, we look at the contributions the items newly retrieved by the fusion run make to the MAP score. This means we treat the items retrieved by an individual run as occurring at those positions in the fused run as well. Any increases in MAP are then due to newly retrieved items that were not present in the individual run, i.e., increased recall, and not due to a better ranking because of combined evidence \citep{Kamps:2004}. These MAP scores are listed in the sixth column in Table \ref{table:fusion-analysis}. For example, if we consider only the additionally retrieved items from the fused run in the top table compared to run 1, we see that for run 1 we get a MAP score of 0.1084. This is an increase of 22.1\% in MAP score over the original MAP score of 0.0887, which is due to the improved recall of the fusion run. 
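The two manipulations can be sketched per user as follows. This is a schematic reconstruction, not the code used in our experiments: `average_precision` is the standard per-user AP helper, and the recall-side construction, which appends the fused run's newly retrieved items after the individual run's ranking, is one straightforward reading of the procedure of \cite{Kamps:2004}.

```python
def average_precision(ranking, relevant):
    """Standard average precision of one ranked list of items."""
    hits, total = 0, 0.0
    for rank, item in enumerate(ranking, start=1):
        if item in relevant:
            hits += 1
            total += hits / rank
    return total / len(relevant) if relevant else 0.0

def precision_enhanced_ap(fused, individual, relevant):
    """Drop items the individual run did not retrieve; any gain over the
    individual run's AP is then due to better ranking alone."""
    retrieved = set(individual)
    return average_precision([i for i in fused if i in retrieved], relevant)

def recall_enhanced_ap(fused, individual, relevant):
    """Keep the individual run's ranking intact and append the fused run's
    newly retrieved items; any gain is then due to increased recall alone."""
    retrieved = set(individual)
    new_items = [i for i in fused if i not in retrieved]
    return average_precision(list(individual) + new_items, relevant)
```

Averaging these per-user AP scores over all test users would then yield the precision- and recall-enhanced MAP columns of Table \ref{table:fusion-analysis}.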

The results in Table \ref{table:fusion-analysis} show that combining different runs increases the number of relevant items retrieved by the combination run (third column). However, this increased recall does not necessarily mean that the improvement in the MAP scores is also due to these additionally retrieved items. We see from the adjusted MAP scores that both precision- and recall-enhancing effects are present. However, fusion clearly has a stronger effect on increasing the precision of the recommendations, and the increases in MAP score are almost always due to a better ranking of the documents. The results for these two fusion runs are representative of the other fusion runs that (significantly) improved over their component runs. In addition, our findings confirm those of \cite{Kamps:2004}: most of the fusion effects they observed were also due to improved ranking of the documents.





\subsection{Comparing All Fusion Methods}
\label{6:subsec:comparing-all-methods}

Earlier in this thesis, in Subsections \ref{4:subsec:similarity-fusion} and \ref{5:subsec:hybrid-filtering}, we already proposed two other fusion approaches. The first, similarity fusion, was a feature combination approach and involved fusing two similarity matrices together in their entirety. The second, hybrid filtering, was a feature augmentation approach and used content-based user and item similarities in a CF algorithm. How do these two approaches stack up against our weighted fusion runs from this chapter? Table \ref{table:results-fusion-comparison} compares the best fusion runs of this chapter for each data set against the best runs of the other two fusion approaches.

\begin{table}[htp]
  \caption[Comparison of different fusion approaches]{Comparison of our three different approaches to recommender systems fusion. Reported are the MAP scores; the best-performing fusion method for each data set is printed in bold. Significant differences are calculated between the best and second-best runs for each data set.}
  \begin{center}
  \begin{footnotesize}
  \begin{tabular}{l||c|c||c|c}
  \hline
   \multirow{2}{*}{{\bf Run}} & \multicolumn{2}{c||}{{\bf bookmarks}} & \multicolumn{2}{c}{{\bf articles}} \\
  \cline{2-5}
  ~  & {\bf BibSonomy} & {\bf Delicious} & {\bf BibSonomy} & {\bf CiteULike}  \\
  \hline
  \hline
  Similarity fusion (from Subsection \ref{4:subsec:similarity-fusion})    & 0.0350\nosign    & 0.0102\nosign    & 0.1210\nosign    & 0.0791\nosign  \\
  Hybrid filtering (from Subsection \ref{5:subsec:hybrid-filtering})     & 0.0399\nosign    & 0.0039\nosign    & 0.1510\nosign    & 0.0987\nosign  \\
  Weighted run fusion  & {\bf 0.0695}\nosign  & {\bf 0.0115}\nosign  & {\bf 0.1983}\uptriangle  & {\bf 0.1531}\upblack \\
  \hline
  \% Change over second best run & +74.2\%     & +12.7\%     & +31.3\%     & +55.1\%  \\ 
  \hline
  
  \end{tabular}
  \end{footnotesize}
  \end{center}
  \label{table:results-fusion-comparison}
\end{table}

We can clearly see that taking a weighted combination of recommendation runs is superior to the other two approaches. This difference is significant on both the \dba\ and \dc\ data sets, and the weighted run fusion approach is also significantly better than the hybrid filtering approach on the \dbb\ data set.

Finally, we would like to remark that, except for the \del\ data set, weighted run fusion outperforms the tag-aware fusion approach of \cite{Tso-Sutter:2008}. These improvements are statistically significant ($p < 0.05$) on the \dba\ and \cul\ data sets.

 



\section{Discussion \& Conclusions}
\label{6:sec:discussion}

We found that combining different recommendation runs yields better performance compared to the individual runs, which is consistent with the theory behind data fusion and with the related work. Weighted fusion methods consistently outperform their unweighted counterparts. This is not surprising as it is unlikely that every run contributes equally to the final result, and this was also evident from the optimal weight distribution among the runs. 

In addition, we observed that combination methods that reward documents that show up in more of the base runs---\combsum\ and \combmnz---are consistently among the best performers. In contrast, the \combanz\ method performed worse than expected on our data sets. One reason for this is that \combanz\ calculates an average recommendation score across runs for each item: unlike \combsum\ and \combmnz, it assigns no bonus to items that occur in multiple runs, even though run overlap is an important indicator of item relevance. In addition, the averaging of \combanz\ can drown out the contribution of exceptionally well-performing base runs; this is especially apparent in the fusion experiments where four and eight runs were combined. The more runs are combined, the worse \combanz\ performs relative to the pairwise fusion runs with \combanz.

A third finding from our fusion experiments was a confirmation of the principle put forward by \cite{Ingwersen:2005} and \cite{Belkin:1993}: it is best to combine recommendations generated using cognitively dissimilar representations and algorithms, touching upon different aspects of the item recommendation process. 

We explored two different aspects to recommendation, representation and the choice of algorithm, and indeed found that runs that combine multiple, different recommendation aspects perform better than runs that consider variation in only one recommendation aspect. We also observed that for two data sets, \dbb\ and \dba, combining more runs tends to produce better performance. Here, the best performing fusion runs were those that combined eight base runs that varied strongly in the type of algorithm and representation. 

A separate analysis confirmed that most of the gains achieved by fusion are due to the improved ranking of items. When multiple runs are combined, there is more evidence for the relevance of an item to a user, which translates to ranking that item with higher precision. Improved recall plays a much smaller role in improving performance. Overall, we find strong evidence for the {\em Chorus effect} in our experiments and, to a lesser extent, support for the {\em Ranking effect}. The lack of recall-related improvements suggests that we do not see the {\em Dark Horse effect} occurring in our fusion experiments.





\section{Chapter Conclusions and Answer to RQ 3}

We observed earlier in Chapters \ref{chapter4} and \ref{chapter5} that combining different algorithms and representations tends to outperform the individual approaches. Guided by our third research question and its subquestion, we examined this phenomenon in more detail in this chapter.

\begin{center}\begin{tabularx}{0.9\linewidth}{lX}  
  {\bf RQ 3}   & Can we improve performance by combining the recommendations 
                 generated by different algorithms? \\
  ~            & ~ \\
  {\bf RQ 3a}  & What is the best recipe for combining the different 
                 recommendation algorithms? \\
\end{tabularx}\end{center}

We found a positive answer to RQ 3: combining different recommendation runs yields better performance compared to the individual runs on all data sets. In answer to RQ 3a, we identified several ingredients for successfully combining recommendation algorithms, such as combining approaches that cover different aspects of the item recommendation task. By combining different algorithms and different representations of the data, we achieved the best results. We compared weighted fusion methods with unweighted fusion methods and found that weighted methods performed best. This is understandable, since not every run contributes equally to the final result. Another ingredient for successful fusion is using a combination method that rewards documents that show up in more of the individual runs, harnessing the {\em Chorus} and {\em Authority} effects. After a detailed analysis, we learned that these performance improvements were largely due to a precision-enhancing effect: the ranking of overlapping items improves when they are retrieved by more runs. While fusion also increases recall, this has only a weak effect on performance. 
