\section{Diversification Functions}
\label{sec:functions}

As described in the introduction, our high-level objective is to select,
from a large corpus of multi-dimensional items, a small collection of items
that is diverse with respect to its constituent features or dimensions.
In this section, we discuss several variants of the problem, all of which
capture this high-level goal of diversification, and ultimately converge on
the particular problem formulation that we focus on for the rest of the paper.

First, we establish some notation that we will use throughout
the paper. Let $F$ be the set of $n$ features. As described in the 
introduction, the input consists of a set $U$ of $m$ items, where each item $j\in U$ 
consists of a subset of features $F_j \subseteq F$. 
The diversification algorithm needs to select a representative subset $S$
of at most $B$ items, where $B$ is a given budget, from the input set $U$.
The {\em coverage} of feature $i$ in the selected subset $S$, denoted by $C_i$, is defined
as the number of selected items that have feature $i\in F$. Each 
feature also has a {\em target} $T_i$ which is the desired coverage 
for the feature. The {\em fractional coverage} of feature $i$ is 
the fraction of its target
coverage that has been achieved by the selected set of items, i.e.
$c_i = C_i/T_i$. Let ${\bf c}$ be the vector of fractional coverages,
i.e. ${\bf c} = (c_i: i\in F)$; then
the objective is to select a subset $S$ that 
maximizes the value of $D({\bf c})$, where $D$ is the
diversification function of interest.
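To make the notation concrete, the following sketch computes the fractional-coverage vector ${\bf c}$ for a selected set $S$; the function name and the toy feature names are illustrative, not part of the formulation.

```python
def fractional_coverages(selected, targets):
    """Compute c_i = C_i / T_i for every feature i, where C_i is the
    number of selected items whose feature set contains i."""
    counts = {i: 0 for i in targets}
    for features in selected:
        for i in features:
            if i in counts:
                counts[i] += 1
    return {i: counts[i] / targets[i] for i in targets}

# Toy instance: two features with targets T = (2, 1), two selected items.
targets = {"sports": 2, "politics": 1}
selected = [{"sports"}, {"sports", "politics"}]
c = fractional_coverages(selected, targets)
# c == {"sports": 1.0, "politics": 1.0}
```

A diversification function $D$ is then evaluated directly on the values of this dictionary.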

We consider two versions of the diversification problem depending on 
whether the entire set $U$ is available to the algorithm 
before it starts selecting the items in $S$. 
For example, in selecting a program committee from a set of 
researchers, the entire set of researchers is known to the program 
committee chairs before any selection decision is taken. 
We call this the {\em offline} version of the 
problem. On the other hand, consider the 
diversification problem in generating news feeds. In 
this case, the diversification algorithm needs to select news items 
as they arrive, i.e. without having access to the entire input 
set of items. We call this the {\em online} version of the problem.
Thus, in the online model, on the arrival of an item $j$, the 
algorithm must immediately either select or discard it,
subject to the constraint that the total number of selected items
cannot exceed $B$. We assume that the items in the online input stream
are drawn {\em independently and identically} (i.i.d.) from
some probability distribution over subsets of features that is {\em unknown}
to the algorithm.

Perhaps the simplest objective function $D$ that one can aim for while
selecting a representative subset from a large set of items is to 
maximize the sum of fractional coverages of all the features, i.e.
\begin{equation*}
	D({\bf c}) = \sum_{i\in F} c_i.
\end{equation*}
However, observe that this function fails to distinguish between
a subset of items that achieves large coverage on a few features but 
very small coverage on the remaining features, and a different subset of 
items that achieves uniform moderate coverage on all features.  
Intuitively, the second subset is clearly more diverse, and
hence should be preferred. A diversification function that reflects
this intuition is 
\begin{equation*}
	D({\bf c}) = \sum_{i\in F} p_i,
\end{equation*}
where $p_i = 1$ if $c_i > 0$, and $p_i = 0$ otherwise. This function 
clearly distinguishes between the two subsets of items described 
above, but has the shortcoming that it treats all non-zero coverage 
values identically. In fact, these two functions are the extreme ends
of a continuum of candidate functions
\begin{equation*}
	D_{\alpha}({\bf c}) = \sum_{i\in F} c_i^{\alpha},\quad 0\leq \alpha \leq 1
\end{equation*}
(with the convention $0^0 = 0$, so that $D_0$ coincides with the
second function) that represents the
classical trade-off between maximizing {\em magnitude} (cf. 
the first function, i.e. $\alpha = 1$) and ensuring {\em fairness} (cf. 
the second function, i.e. $\alpha = 0$). 
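The trade-off is easy to see numerically. In the sketch below (the coverage vectors are chosen purely for illustration), a skewed and a uniform coverage vector tie at $\alpha = 1$, while the uniform one wins for every $\alpha < 1$:

```python
def d_alpha(c, alpha):
    # The guard encodes the convention 0**0 = 0, so alpha = 0 simply
    # counts the features with non-zero coverage.
    return sum(ci ** alpha if ci > 0 else 0.0 for ci in c)

skewed = [1.0, 1.0, 0.0, 0.0]    # large coverage on a few features
uniform = [0.5, 0.5, 0.5, 0.5]   # moderate coverage on all features

d_alpha(skewed, 1.0), d_alpha(uniform, 1.0)   # 2.0 vs 2.0: a tie
d_alpha(skewed, 0.0), d_alpha(uniform, 0.0)   # 2.0 vs 4.0
d_alpha(skewed, 0.5), d_alpha(uniform, 0.5)   # 2.0 vs ~2.83
```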

By a slight abuse of notation, let us also denote the value of the function $D_{\alpha}$
on the coverage achieved by a set of selected items $S$ as $D_{\alpha}(S)$. 
It can be shown that all such functions $D_{\alpha}(S)$ (for $0\leq \alpha\leq 1$) are monotone and {\em submodular}.\footnote{A 
function $f$ defined on all subsets of a ground set $X$ is said to be 
{\em submodular} if for any $A\subseteq B\subseteq X$,
\begin{equation*} 
	f(A \cup \{x\}) - f(A) \geq f(B \cup \{x\}) - f(B)
\end{equation*}
for any $x\in X\setminus B$.} Consider a {\em greedy} algorithm that repeatedly 
selects the item that yields the maximum increase in the value of the objective
until the entire budget has been used up. It is well known that this algorithm
has an approximation factor of $(1-1/e)$ for maximizing any monotone submodular
function subject to a cardinality constraint. 

\begin{theorem}
For any $\alpha$ between $0$ and $1$, the greedy algorithm has an
approximation factor of $(1-1/e)$ for maximizing $D_{\alpha}$ in the offline setting.
\end{theorem}
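A minimal sketch of the greedy algorithm, assuming the item/target representation introduced earlier (the helper names are ours):

```python
def d_alpha_value(counts, targets, alpha):
    """D_alpha evaluated on raw coverage counts C_i (0**0 taken as 0)."""
    return sum((counts[i] / targets[i]) ** alpha if counts[i] > 0 else 0.0
               for i in targets)

def greedy_select(items, targets, budget, alpha):
    """Repeatedly add the item with the largest marginal increase in
    D_alpha until the budget is exhausted or no item helps."""
    counts = {i: 0 for i in targets}
    remaining = list(items)
    chosen = []
    while remaining and len(chosen) < budget:
        base = d_alpha_value(counts, targets, alpha)
        best_j, best_gain = None, 0.0
        for j, feats in enumerate(remaining):
            trial = dict(counts)
            for i in feats:
                if i in trial:
                    trial[i] += 1
            gain = d_alpha_value(trial, targets, alpha) - base
            if gain > best_gain:
                best_j, best_gain = j, gain
        if best_j is None:          # no item improves the objective
            break
        for i in remaining[best_j]:
            if i in counts:
                counts[i] += 1
        chosen.append(remaining.pop(best_j))
    return chosen

# The item covering two uncovered features is picked first.
items = [{"a"}, {"a", "b"}, {"c"}]
targets = {"a": 1, "b": 1, "c": 1}
picked = greedy_select(items, targets, budget=2, alpha=0.5)
# picked == [{"a", "b"}, {"c"}]
```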
 
In the online setting, if the optimal value of the objective function 
is known, then a standard thresholding technique yields an algorithm with a 
constant competitive ratio\footnote{For a maximization problem, an online algorithm has
a competitive ratio of $\beta \geq 1$ if the objective value of the solution produced by the algorithm
is at least a $1/\beta$ fraction of the offline optimum.}. On the other hand, if the optimum is unknown, then
we can guess its value using a standard doubling technique.
The key property that we exploit in this guessing scheme 
is that the input stream 
is drawn i.i.d.,
and therefore only a small
fraction of the budget is used in obtaining a good estimate of the optimum.

\begin{theorem}
For any $\alpha$ between $0$ and $1$, there exists an algorithm that has a
constant competitive ratio for maximizing $D_{\alpha}$ in the online setting, 
if the input is drawn i.i.d.
\end{theorem}
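A highly simplified sketch of the thresholding idea, assuming the optimum value is known (the doubling-based guessing phase is omitted, and the specific threshold constant below is illustrative rather than the one used in the analysis):

```python
def online_threshold_select(stream, targets, budget, alpha, opt_value):
    """Accept an arriving item iff its marginal gain in D_alpha is at
    least opt_value / (2 * budget); the constant 2 is illustrative."""
    def value(counts):
        return sum((counts[i] / targets[i]) ** alpha if counts[i] > 0 else 0.0
                   for i in targets)
    threshold = opt_value / (2 * budget)
    counts = {i: 0 for i in targets}
    selected = []
    for feats in stream:                 # items arrive one at a time
        if len(selected) >= budget:
            break
        trial = dict(counts)
        for i in feats:
            if i in trial:
                trial[i] += 1
        if value(trial) - value(counts) >= threshold:
            counts = trial               # accept irrevocably
            selected.append(feats)
    return selected

# With opt_value = 3 and budget = 1, the threshold is 1.5: the item
# covering a single feature is discarded, the two-feature item is kept.
stream = [{"a"}, {"a", "b"}]
targets = {"a": 1, "b": 1}
kept = online_threshold_select(stream, targets, budget=1, alpha=1.0,
                               opt_value=3.0)
# kept == [{"a", "b"}]
```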

We now focus on another natural candidate function for 
diversification, where the objective is to maximize the minimum fractional coverage
over all features. That is, 
\begin{equation*}
	D({\bf c}) = \min_{i\in F} c_i.
\end{equation*} 
Observe that this function achieves the twin objectives of {\em magnitude} and
{\em fairness} of feature coverage. 
This function is {\bf not} submodular, and therefore, the
techniques described above cannot be used to solve this problem. 
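A two-feature counterexample (our own, for illustration) confirms the claim: adding the same item to a larger set can yield a strictly larger marginal gain, which is exactly what submodularity forbids.

```python
def min_fractional_coverage(selected, targets):
    """D(c) = min_i C_i / T_i for the selected list of items."""
    counts = {i: 0 for i in targets}
    for feats in selected:
        for i in feats:
            if i in counts:
                counts[i] += 1
    return min(counts[i] / targets[i] for i in targets)

targets = {"a": 1, "b": 1}
x = frozenset({"b"})
A = []                        # the empty set
B = [frozenset({"a"})]        # a superset of A

gain_A = (min_fractional_coverage(A + [x], targets)
          - min_fractional_coverage(A, targets))
gain_B = (min_fractional_coverage(B + [x], targets)
          - min_fractional_coverage(B, targets))
# gain_A == 0.0 but gain_B == 1.0: the marginal gain of x grew on the
# larger set B, so the min-coverage objective is not submodular.
```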

Let $c_{\opt}$ be the optimal value of $D$ and 
\begin{equation*}
	\rho_{\opt} = c_{\opt} \min_{i\in F} T_i.
\end{equation*}
The next theorem (proof in appendix) shows that the problem does not admit an
algorithm with a finite approximation ratio even in the offline setting, if 
$\rho_{\opt} = o(\log n)$.
\begin{theorem}
\label{thm:optlb}
Under standard complexity-theoretic assumptions, there exists no algorithm
that obtains a finite approximation ratio for offline instances of the 
diversification problem where $\rho_{\opt} = o(\log n)$.
\end{theorem}
The above theorem implies that we need to assume that the optimal solution
satisfies $\rho_{\opt} = \Omega(\log n)$, 
in order to obtain a finite approximation ratio. (In fact, this assumption holds
for most real data sets, as verified later in the experimental section.)
If this property is satisfied by an offline instance of the problem, 
then a simple algorithm that applies randomized rounding to the natural linear 
programming relaxation of the problem gives the following theorem. 
\begin{theorem}
For the problem of maximizing $D$ in the offline setting, there is a 
PTAS\footnote{A {\em Polynomial-time Approximation Scheme} (or PTAS) for a maximization problem 
is an algorithm that has an approximation factor of $(1-\epsilon)$ for any 
arbitrarily small constant $\epsilon > 0$. (The running time of the algorithm
depends on the choice of $\epsilon$.)} 
if $\rho_{\opt} = \Omega(\log n)$.
\end{theorem}
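The rounding step itself is simple. The sketch below assumes a fractional LP solution $x$ is already in hand (solving the LP, and the concentration argument that makes the rounding work when $\rho_{\opt} = \Omega(\log n)$, are omitted; the values of $x$ are made up for illustration):

```python
import random

def round_fractional(items, x, budget, seed=0):
    """Include item j independently with probability x_j, then truncate
    to the budget; the truncation is a crude stand-in for the repair
    step that a full analysis would use."""
    rng = random.Random(seed)
    picked = [feats for feats, xj in zip(items, x) if rng.random() < xj]
    return picked[:budget]

# A hand-made fractional solution for four items (illustrative values).
items = [{"a"}, {"b"}, {"c"}, {"a", "c"}]
x = [1.0, 0.0, 1.0, 0.5]
rounded = round_fractional(items, x, budget=3)
```

Items with $x_j = 1$ are always kept and items with $x_j = 0$ are never kept; the interesting regime is the fractional coordinates, where the i.i.d. inclusions concentrate around the LP coverage values.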

Now, we focus on the online version of the problem. The next theorem (proof in
appendix) shows that we need to assume that the input stream is not adversarial
in order to obtain a sub-polynomial competitive 
ratio for this problem.
\begin{theorem}
\label{thm:adversarial}
For an adversarial input stream, the competitive ratio of any algorithm for 
maximizing $D$ in the online setting is $\Omega(n)$.
\end{theorem}
To overcome the barrier imposed by the above theorem, we assume that the
input is drawn i.i.d. from a probability distribution that is unknown to the 
algorithm.
