\section{Experiments}
\label{sec:exp}

We perform experiments on both real-world and synthetically generated data sets, and compare our diversification algorithm with several natural and intuitive alternative algorithms for the problem. Our experimental evaluation compares the performance of these algorithms, measured in terms of feature coverage, while varying multiple parameters. 
%The algorithms and experiments are designed to test a wide range of considerations. 
We first describe the data sets used in our experiments; next we describe the alternative algorithms we compare our algorithm against; and finally we describe the experiments that
we perform and interpret the results we obtain in these experiments.

\subsection{Description of Data Sets}
%\begin{itemize}
%\item 
\medskip
\noindent {\bf Real-world data from Google+ Sparks.}\footnote{Google+ Sparks is a news feed shown to users of the social networking website Google+, which is a selection from a large corpus of items (e.g. blogs, videos, news articles, etc.) coming from a variety of sources.} This data set is obtained from the stream of items received by Google+ Sparks, from which it selects a news feed for individual users. Each item has several features associated with it, such as the source of the item, the broad category it belongs to, the types of content it contains, etc. We extracted 18 features such that each item has a binary value for each feature (that is, it either has or does not have the feature). The feature set includes a variety of dense (i.e. present in a large fraction of items) as well as sparse features, such as whether the item has an embedded video, is in English, is about Politics or Sports, and so on. The features were chosen carefully so as to capture several different kinds of dependencies, e.g., hierarchical dependency (a feature always occurs together with another feature) and exclusivity (an item can have at most one feature from a given set), and so as to test the performance of the diversification algorithm on such dependencies, on sparse features (i.e. even when the optimum is small), on very commonly occurring features, etc. We ran our experiments on one hundred such data sets, each containing about a million items with feature vectors along these 18 dimensions. 
%The experimental results show the average performance over all the runs.

\medskip
\noindent {\bf Synthetically generated data.} In addition to the real data set, we also test our algorithm on various carefully chosen synthetically generated data sets. In each of these data sets, we again generated items with 18 features and tested the algorithms on data sets of one million items each. The synthetic data sets we tested on are the following.
\begin{itemize}
\item {\bf Independent.} In this data set, each entry (that is each feature for every item) is independently set to $1$ or $0$ (that is item either has or does not have the feature) with probability half each. Notice that this generates a data set where, in expectation, each item has nine features. Further, in expectation, each feature is contained in half of all the items.
\item {\bf Parity.} In this data set, we initially fix a bit vector of length 18 (where each bit is set to $1$ or $0$ independently and with equal probability); then for each item, we pick a single bit, $1$ or $0$ independently and with equal probability, and XOR it with every bit of the fixed vector. Observe that this results in a very strong dependence between the features: the whole data set contains only two kinds of items (the fixed vector and its complement). 
\item {\bf Dependent Mixed.} This data set is generated in a manner very similar to parity, but in addition, after each item has been generated, each of the feature bits is flipped independently and with a small probability (set to $0.1$). This results in a milder dependence between the item features.
\item {\bf Dependent.} The dependent data set is similar to the dependent mixed data set. The only difference is that the initial bit vector is set to all $1$'s. Therefore, there is uniform correlation between all item features. 
\end{itemize}
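The four generation processes above can be sketched in a few lines. The following is a standalone illustration (function and parameter names are ours; the actual data sets used 18 features and $1M$ items):

```python
import random

def gen_items(n_items, n_features=18, mode="independent", flip_prob=0.1):
    """Generate one synthetic data set as a list of 0/1 feature vectors."""
    if mode == "independent":
        # Every feature bit is set independently with probability 1/2.
        return [[random.randint(0, 1) for _ in range(n_features)]
                for _ in range(n_items)]
    # The remaining modes start from a base vector: random for
    # "parity" / "dependent_mixed", all ones for "dependent".
    if mode == "dependent":
        base = [1] * n_features
    else:
        base = [random.randint(0, 1) for _ in range(n_features)]
    noise = 0.0 if mode == "parity" else flip_prob
    items = []
    for _ in range(n_items):
        b = random.randint(0, 1)        # XOR the base with a single bit,
        item = [x ^ b for x in base]    # giving the vector or its complement
        # dependent / dependent_mixed: flip each bit with small probability
        item = [int(x ^ (random.random() < noise)) for x in item]
        items.append(item)
    return items
```

For example, `gen_items(1000, mode="parity")` produces at most two distinct item types, while the independent mode yields about nine features per item in expectation.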
%\end{itemize}

\subsection{Algorithms}
We now describe all the algorithms that we compare in our experiments. 
%our diversification algorithm against. 
%In these experiments, for the diversifier algorithm also, we make a slight alteration in which the thresholds are chosen: Instead of computing the threshold at any stage of the online process by dividing by the total budget $B$, we divide by the remaining budget at that stage. This is just a minor change that does not significantly affect the performance.
%\begin{itemize}

\medskip
\noindent {\bf Diversifier.} This is the diversification algorithm in Section~\ref{sec:algorithm}.

\medskip
\noindent {\bf Diversifier (Uniform Coverage).} This algorithm is the same as the diversifier presented previously, with one difference: the reward function $\phi$ is not decreasing, but is instead fixed at $\phi(k) = 1$. Comparing with this algorithm highlights the importance of the decreasing reward function in the diversification algorithm. 
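To see why the decreasing reward matters, consider the marginal reward an algorithm assigns to an item given the current per-feature coverage counts. The sketch below uses a hypothetical decreasing reward $\phi(k)=1/(k+1)$ purely for illustration (the actual $\phi$ is as in Section~\ref{sec:algorithm}): a decreasing reward favors items that cover still-rare features, while the uniform reward simply counts features.

```python
def marginal_reward(item, coverage, phi):
    """Reward for picking `item`, given current per-feature coverage counts."""
    return sum(phi(coverage[f]) for f, has in enumerate(item) if has)

phi_div = lambda k: 1.0 / (k + 1)   # hypothetical decreasing reward
phi_unif = lambda k: 1.0            # uniform-coverage variant

coverage = [100, 0, 100]   # feature 1 is badly under-covered
a = [1, 0, 1]              # covers two already-common features
b = [0, 1, 0]              # covers only the rare feature

# The decreasing reward prefers b; the uniform reward prefers a.
assert marginal_reward(b, coverage, phi_div) > marginal_reward(a, coverage, phi_div)
assert marginal_reward(a, coverage, phi_unif) > marginal_reward(b, coverage, phi_unif)
```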
%This is however a reasonable and intuitive algorithm, and for a slight variant of this, one can in fact prove a $O(\log n)$ approximation (we omit details here).

\medskip
\noindent {\bf Fixed Threshold.} The fixed threshold algorithm is a na\"ive baseline algorithm where a specific threshold is fixed at the beginning (between $1$ and $18$, the number of features). Subsequently, when items arrive online, every item that has at least as many features as the threshold is picked, until the budget is exhausted. We compare our diversification algorithm against this fixed threshold algorithm for different thresholds. We performed experiments with all possible thresholds between $1$ and $18$ but present only a representative set of results, for thresholds $3, 6, 9, 12, 15$. The performance of the fixed threshold algorithm with other thresholds is similar to the ones we show.
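This baseline can be sketched in a few lines (a standalone illustration with items encoded as 0/1 feature vectors; the function name is ours):

```python
def fixed_threshold_select(stream, threshold, budget):
    """Naive online baseline: pick every arriving item that has at least
    `threshold` features, until the budget is exhausted."""
    picked = []
    for item in stream:
        if len(picked) >= budget:
            break
        if sum(item) >= threshold:
            picked.append(item)
    return picked

stream = [[1, 1, 0], [0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]]
# With threshold 2 and budget 2, the first two items with >= 2 features win.
assert fixed_threshold_select(stream, threshold=2, budget=2) == [[1, 1, 0], [1, 1, 1]]
```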

\medskip
\noindent {\bf Simple Random.} In this algorithm, items are selected randomly based on the number of features they contain, in such a way that the expected number of selected items equals the budget $B$. The probability that an item containing $k$ features is picked is determined by optimally solving a linear program that aims to maximize the coverage on every feature. This algorithm performs two passes over the input stream of items: in the first pass, it computes frequency counts of the features and uses them to obtain the probability of selecting an item with $k$ features; in the second pass, it uses these probabilities to actually select the items. 
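The two-pass structure can be sketched as follows. This is only an illustration: the uniform per-item probability below is a placeholder standing in for the LP-derived probabilities (and the function name is ours); the actual algorithm solves a linear program to choose $p_k$ per feature count $k$.

```python
import random
from collections import Counter

def simple_random_select(items, budget):
    """Two-pass random baseline.  Pass 1 tallies items by feature count k;
    pass 2 picks an item with k features with probability p[k].  Here every
    p[k] is set to budget / n (placeholder for the LP solution), so the
    expected number of selected items equals the budget."""
    # Pass 1: count items by number of features.
    counts = Counter(sum(item) for item in items)
    n = len(items)
    p = {k: budget / n for k in counts}   # placeholder for LP-derived p[k]
    # Pass 2: select each item independently with probability p[k].
    return [item for item in items if random.random() < p[sum(item)]]
```

With these placeholder probabilities the number of selections concentrates around $B$; the LP solution instead skews $p_k$ toward items whose feature counts best help cover every feature.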
%We allow this algorithm to perform one pass over the entire input in the beginning so that the distribution of items to number of features may be calculated/estimated, so that the corresponding probabilities can be accurately computed. 
%\end{itemize}

\subsection{Description of the Experiments}
%We now describe the experiments we performed.
%\begin{itemize}
\noindent {\bf Sorted Coverage.} In this experiment, we evaluate the performance of each of the algorithms on the coverage achieved on the features. Recall that our objective is to maximize the minimum coverage over all features. 
%We therefore of course consider the minimum coverage for each of the algorithms. 
In addition to the minimum coverage, we also look at the performance of the algorithms on the 2nd through 6th least covered features. While the specific goal of the diversifier is to maximize the minimum coverage, it is desirable that the coverage be substantial on the other features as well. This experiment shows that the diversification algorithm indeed achieves substantial coverage on the other features too (particularly on features where a larger coverage is attainable without compromising the minimum coverage achieved). 

\medskip
\noindent {\bf Varying Budgets.} We perform a series of experiments by varying the available budget to the algorithms, to see if the performance of the diversification algorithm scales with larger budgets. We also perform experiments for low budgets to test whether the algorithm is able to achieve reasonable coverage on each of the features (even on the sparse ones). 
%In a more general setting, it is conceivable that different features have different targets or demands, and therefore a good coverage should be attainable with varying budgets. 
These experiments show that the diversification algorithm performs admirably for a variety of budgets.
%is robust to such constraints. 
%\end{itemize}

In all our experiments, the total number of features is $18$ and the number of items in each data set is around $1M$. Also, when we plot sorted coverage, the default value of the budget is set to $20K$ (which is roughly two percent of the data set), and the targets for all the features are set to the budget itself. 
%however, we show several plots even for sorted coverage with different values of the budget. 
For the plots where we vary budgets on the $x$-axis, the coverage plotted on the $y$-axis is the minimum over all the $18$ features (which is the objective function of the diversification algorithm). %The experimental results show the average performance over 100 runs on each data set.

As stated in Theorem~\ref{thm:main}, our diversification algorithm achieves a $(\frac{1}{2}-\delta)$-approximation as long as the expected optimum coverage is at least $\frac{24\ln n}{\delta^2}$. In all our experiments, the minimum coverage on any feature (for both the real and synthetic data sets) is at least $900$, out of around $1M$ items. Further, since we are dealing with $n=18$ features, $\ln n\approx 2.89$, and $900\geq  \frac{24\ln n}{\delta^2}$ holds for all values of $\delta\geq 0.28$. As we note in the experiments shortly, the performance of the diversification algorithm is significantly better than this theoretical guarantee. 
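Solving the condition for $\delta$ makes this bound explicit:
\[
  900 \;\ge\; \frac{24\ln n}{\delta^2}
  \quad\Longleftrightarrow\quad
  \delta \;\ge\; \sqrt{\frac{24\ln 18}{900}} \;\approx\; 0.28.
\]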

\subsection{Description of plots}

\begin{figure}[!t]
\centering
\subfigure[Higher range of budgets between 0.5\% and 8\%]{\includegraphics[width = 0.7\linewidth]{figs/realdata_100_199_avg_high.ps}}
\subfigure[Lower range of budgets between 0.1\% and 0.5\%]{\includegraphics[width = 0.7\linewidth]{figs/realdata_100_199_avg_low.ps}}
\caption{The average minimum coverage achieved by various algorithms over 100 real world data sets of $1M$ items each.}
\label{fig:realvbhighlow}
\end{figure}

Our first set of plots (Figure~\ref{fig:realvbhighlow}) compares the minimum coverage achieved by the various algorithms, averaged over the 100 real data sets of $1M$ items each, with budgets varying from 0.1\% to 8\% of the input. 
%We plot the coverage obtained by various algorithms for varying budgets for the real data sets in. 
%These plots show the performance for both high budgets (when $B$ is varied between $5K$ and $80K$) and low budgets (when $B$ is varied between $1K$ and $5K$). 
Observe that the Diversifier significantly outperforms all the other algorithms for any budget between 0.1\% and 4\%. The minimum coverage achieved by the Diversifier levels off at a value between 900 and 1000 beyond a budget of 1\% since there are features in our dataset that occur in fewer than 1000 items (and therefore the Diversifier has already achieved an almost optimal solution). 
%In particular, we see that in this experiment (and throughout all experiments) the performance is significantly better than a $2$-approximation guaranteed by our theorem.
Once the budget reaches 4\%, the Diversifier (Uniform Coverage) also achieves optimal performance. All the other algorithms fare significantly worse, especially for smaller budgets; they require a budget of at least $40K$ (4\%) to achieve any reasonable performance. In fact, the minimum coverage achieved by the Diversifier increases linearly with the budget until it reaches the optimum, thereby confirming the scalability of the algorithm.

\begin{figure*}[!t]
\centering
\subfigure[Budget = 0.1\%]{\includegraphics[scale = 0.45]{figs/realdata_100_199_1000_sc_avg.ps}}
\subfigure[Budget = 0.2\%]{\includegraphics[scale = 0.45]{figs/realdata_100_199_2000_sc_avg.ps}}
\subfigure[Budget = 0.4\%]{\includegraphics[scale = 0.45]{figs/realdata_100_199_4000_sc_avg.ps}}
%\caption{The coverage achieved on the least covered features on the real world data set by various algorithms.}
%\label{fig:realscalllowbudgets}
%\end{figure}
%
%\begin{figure}[!t]
\subfigure[Budget = 1\%]{\includegraphics[scale = 0.45]{figs/realdata_100_199_10000_sc_avg.ps}}
\subfigure[Budget = 2\%]{\includegraphics[scale = 0.45]{figs/realdata_100_199_20000_sc_avg.ps}}
\subfigure[Budget = 4\%]{\includegraphics[scale = 0.45]{figs/realdata_100_199_40000_sc_avg.ps}}
%\subfigure[]{\includegraphics[width = 0.7\linewidth]{figs/realdata_100_199_80000_sc_avg.ps}}
%\caption{Real Data Sorted Coverage (for Budgets = 10K, 20K, 40K)}
\caption{The coverage achieved on the least covered features on the real world data set by various algorithms.}
\label{fig:realscallhighbudgets}
\end{figure*}

In our next set of plots (Figure~\ref{fig:realscallhighbudgets}) we show the coverage achieved by the various algorithms on the least covered features. These experiments are performed on the real data set for six fixed budgets ranging from 0.1\% to 4\%. Throughout these plots, we see a consistent trend: the Diversifier performs extremely well on the less covered features, while the other algorithms perform well only on the features that receive high coverage anyway. In other words, these algorithms fail at the specific task of diversification, which requires spreading out the coverage uniformly. Observe that in the plot for the comparatively high budget of 4\%, the other algorithms also perform well on the features that receive less coverage; this is because even the optimal solution can only attain a coverage of about as much as that obtained by these algorithms. For small budgets, however, the Diversifier significantly outperforms all the other algorithms. 

\begin{figure}[t]
\centering
\subfigure[Minimum coverage with varying budgets]{\includegraphics[width = 0.7\linewidth]{figs/independent_1M_vb.ps}}
%\caption{Independent Data Varying Budgets}
%\label{fig:independentvb}
%\end{figure}
%
%\begin{figure}[h]
%\centering
\subfigure[Coverage on the least covered features]{\includegraphics[width = 0.7\linewidth]{figs/independent_1M_sc.ps}}
\caption{Experimental results for the Independent data set}
\label{fig:independent}
\end{figure}

Now, we describe the experimental results obtained for synthetically generated data sets. The plots for the experiments described above performed on the independent data set are given in Figure~\ref{fig:independent}. In Figure~\ref{fig:independent} part (a), we see that the performance of the Diversifier rapidly improves with increasing budget while the other algorithms do not scale as well. This highlights that our algorithm is able to quickly adapt to varying coverages across features and assign importance to features that suffer from low coverage. On the other hand, the other algorithms continue to select items oblivious to previously selected items and therefore suffer from lower values of minimum coverage. In Figure~\ref{fig:independent} part (b), we again observe that the Diversifier performs well on the features with low coverage, i.e. it is able to balance out the coverage along different features and obtain a large minimum coverage, while other algorithms fail to do so. 

\begin{figure}[t]
\centering
\subfigure[Minimum coverage with varying budgets]{\includegraphics[width = 0.7\linewidth]{figs/parity_1M_vb.ps}}
%\caption{Parity Data Varying Budgets}
%\label{fig:parityvb}
%\end{figure}
%
%\begin{figure}[h]
%\centering
\subfigure[Coverage on the least covered features]{\includegraphics[width = 0.7\linewidth]{figs/parity_1M_sc.ps}}
\caption{Experimental results for the Parity data set}
\label{fig:parity}
\end{figure}

Very similar trends are seen for the same experiments performed on the parity data set (Figure~\ref{fig:parity} part (a)). The Diversifier does significantly better throughout, and the contrast is particularly noticeable as the budget increases. In the sorted-coverage plot for the parity data set (Figure~\ref{fig:parity} part (b)), the Diversifier again does significantly better than all the other algorithms; in fact, it is very close to the optimum in this case as well, and therefore performs much better than guaranteed by Theorem~\ref{thm:main}. All these plots are horizontal because the data set contains only two kinds of items. 

\begin{figure}[!t]
\centering
\subfigure[Dependent data set]{\includegraphics[width = 0.7\linewidth]{figs/dependent_1M_vb.ps}}
\subfigure[Dependent Mixed data set]{\includegraphics[width = 0.7\linewidth]{figs/dependent_mixed_1M_vb.ps}}
\caption{The coverage achieved on the least covered features for various budgets by all algorithms on the Dependent and Dependent Mixed data sets.}
\label{fig:depanddepmixedvb}
\end{figure}

Finally, we show plots of the minimum coverage achieved by the different algorithms on the dependent and dependent-mixed data sets for varying budgets (Figure~\ref{fig:depanddepmixedvb}). In these plots, we notice that some of the other algorithms also perform well; in fact, the Simple Random algorithm even outperforms the Diversifier on the dependent data set. Further, we observe that some of the fixed threshold algorithms perform well for large budgets. This is not surprising given that the features are very rigidly dependent on each other; an algorithm that chooses the right threshold therefore performs well on all features. Of course, note that we are comparing against an algorithm that somehow knows this threshold value in advance, which is not feasible in practice. The takeaway, therefore, is that the Diversifier loses a little in learning the right threshold in an online fashion, but is then able to adapt and obtain good coverage. 

\subsection{Summary}

To summarize the experiments: the Diversifier performs extremely well over a wide range of parameters and, in particular, does significantly better than all the other algorithms we implemented on the real data set. The real data set comprises several kinds of features, with varying densities among items, hierarchical dependence, exclusive dependence, etc., and yet the Diversifier achieves good coverage for all budget ranges. The algorithm is fairly general, extremely simple to implement, efficient, provably approximate, and performs near-optimally even at large scales. We therefore believe these ideas and techniques may be useful in a wide range of other settings and applications. 
