\section{Problem Formulation}
\label{sec:problem}


As described in the previous section, we focus on the following problem 
formulation, which we call the \diverse problem. 
\begin{framed}
\parbox{0.95\textwidth}{
Let $F$ be a set of features and $T_i$ be the target coverage for feature $i\in F$. 
An input set $U$ of $m$ items arrives online, where the set of features 
$F_j\subseteq F$ in each item $j\in U$ is drawn i.i.d. from a probability 
distribution on subsets of $F$ that is unknown to the algorithm. The algorithm
must decide, on the arrival of item $j$, whether to select or discard it. The
overall goal is to select a subset $S$ of at most $B$ items that maximizes 
$\min_{i\in F} c_i = \min_{i\in F} \frac{C_i}{T_i}$, where $C_i$
is the number of items in $S$ that contain feature $i$.
}
\end{framed}
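To make the setting concrete, the objective and the online selection loop can be sketched as follows. This is an illustrative sketch only: the function names and the \texttt{select} callback are placeholders, not part of the formulation.

```python
def min_fractional_coverage(selected, targets):
    """Objective: min over features i of C_i / T_i, where C_i counts
    the selected items that contain feature i."""
    coverage = {i: 0 for i in targets}
    for features in selected:          # each item is a set of features
        for i in features:
            if i in coverage:
                coverage[i] += 1
    return min(coverage[i] / targets[i] for i in targets)

def run_stream(stream, targets, budget, select):
    """Online loop: on each arriving item, irrevocably select or
    discard it, never exceeding the budget B."""
    selected = []
    for features in stream:
        if len(selected) < budget and select(features, selected, targets):
            selected.append(features)
    return selected
```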
 
\medskip
\noindent
{\bf Main Result.} We give an online algorithm for the \diverse problem that 
establishes the following theorem.
\begin{theorem}
\label{thm:main}
There is a deterministic online algorithm for the \diverse problem that, 
for any $\delta > 0$, achieves a competitive ratio of $\frac{1}{2} - \delta$ 
with probability at least $1 - 1/n$ (over the input distribution), 
provided the input is drawn i.i.d. from an (unknown) probability 
distribution on feature sets under which the expected value of 
$\rho_{\opt}$ is at least $\frac{24\ln n}{\delta^2}$.
\end{theorem} 

\medskip
\noindent
{\bf Our Techniques.}
Consider the special case where all targets $T_i$ are equal to the budget
$B$. Further, assume that we have the
guarantee that the expected value of $\rho_{\opt} = c_{\opt} B$ is $\Omega(\log^2 n)$
(which is stronger than the guarantee required by Theorem~\ref{thm:main}). 
Then, we can partition the input into $\log n$ {\em epochs}, where in each epoch,
the algorithm selects at most $B/\log n$ items from an input stream containing
$m/\log n$ items, and aims to achieve an expected minimum coverage of
$\lambda = \Omega(\frac{\rho_{\opt}}{\log n})$. 

Now, 
instead of achieving a coverage of $\lambda$ for each feature, let us change our 
goal in each epoch to achieving a cumulative coverage of $\Omega(n \lambda)$ over 
all features, 
where the contribution of any single feature to this sum is capped at $\lambda$,
i.e. $\sum_{i\in F} \min(C_i, \lambda) = \Omega(n\lambda)$.
This can be achieved by using a thresholding algorithm that selects an item 
{\em if and only if} 
it contains at least $\Omega\left(\frac{n\lambda\log n}{B}\right)$ features $i\in F$
with current coverage $C_i < \lambda$.
This immediately implies, via an averaging argument, that some constant fraction
of features have achieved a coverage of $\Omega(\lambda)$. We discard these features
in the next epoch and recurse. Since the number of retained features decreases
by a constant factor in every epoch, the coverage on every feature is $\Omega(\lambda)$
at the end of $\log n$ epochs. Therefore, this algorithm yields a competitive
ratio of $O(\log n)$.
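The epoch scheme above can be sketched as follows. This is a minimal illustration of the structure only: the constant factors in the threshold and the retirement rule come from the analysis and are omitted here (the threshold constant is set to 1), and all names are illustrative.

```python
import math

def epoch_threshold_select(stream, n_features, budget, lam):
    """Epoch-based thresholding sketch: in each of ~log n epochs, select
    an item iff it contains enough still-active features with coverage
    below lam; features reaching coverage lam are retired for later
    epochs."""
    n_epochs = max(1, int(math.log2(n_features)))
    per_epoch_budget = budget // n_epochs
    per_epoch_items = len(stream) // n_epochs
    active = set(range(n_features))        # features not yet covered enough
    coverage = [0] * n_features
    selected = []
    for e in range(n_epochs):
        epoch_items = stream[e * per_epoch_items:(e + 1) * per_epoch_items]
        # threshold ~ n * lam * log n / B  (constant factors omitted)
        threshold = n_features * lam * n_epochs / max(1, budget)
        picked = 0
        for item in epoch_items:
            useful = [i for i in item if i in active and coverage[i] < lam]
            if len(useful) >= threshold and picked < per_epoch_budget:
                selected.append(item)
                picked += 1
                for i in useful:
                    coverage[i] += 1
        # retire features that reached the per-epoch coverage goal
        active = {i for i in active if coverage[i] < lam}
    return selected, coverage
```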

To transform the algorithm described above to an algorithm that proves 
Theorem~\ref{thm:main}, we need to make the following improvements:
\begin{itemize}
	\item Improve the competitive ratio of the algorithm to a constant.
	\item Generalize the algorithm to handle arbitrary targets $T_i$.
	\item Relax the constraint on the expected value of 
	$\rho_{\opt}$ from $\Omega(\log^2 n)$ to $\Omega(\log n)$.
\end{itemize} 
The previous 
algorithm can be interpreted in terms of a reward function that gives a reward
of 1 every time a feature is covered, until the feature has been covered 
$\lambda$ times, at which point the reward for covering the feature drops to 0.
The algorithm then essentially sets a threshold proportional to the ratio of the
remaining reward to the remaining budget, and selects an item if and only if it
meets this reward threshold. However, observe that 
this algorithm fails to differentiate between covering a feature that 
already has a large coverage (but less than $\lambda$) and 
a feature that has much smaller coverage. To make this distinction, we introduce a
smoother reward function in the next section, and show that the same simple 
thresholding algorithm, applied to the new reward function, achieves all three
goals outlined above. 
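The step-reward view above can be sketched as follows; the smoother reward function of the next section would replace \texttt{step\_reward}. All names are illustrative and constant factors are omitted.

```python
def step_reward(c, lam):
    """Reward for covering a feature whose current coverage is c:
    1 until the feature has been covered lam times, then 0."""
    return 1 if c < lam else 0

def reward_threshold_rule(item, coverage, lam, remaining_reward,
                          remaining_budget):
    """Select an item iff its marginal reward meets a threshold
    proportional to remaining reward over remaining budget."""
    gain = sum(step_reward(coverage.get(i, 0), lam) for i in item)
    threshold = remaining_reward / max(1, remaining_budget)
    return gain >= threshold
```

Note that under \texttt{step\_reward}, an item covering a feature with coverage $\lambda - 1$ earns the same reward as one covering a feature with coverage 0, which is exactly the indifference the smoother reward function is designed to remove.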
