%!TEX root = paper.tex

\section{Algorithms}
\label{sec:algo}

In this section, we present two algorithms.  The baseline algorithm
(Algorithm~\ref{algo:base}) performs exact evaluation of an
exploration query $f$, ignoring the computation budget $\eta$, and
computes the exact ($\RePre$, $\ReSke$) for the full result set
$\Result$.  The sampling-based algorithm (Algorithm~\ref{algo:sample})
produces approximate $\RePre$ and $\ReSke$ more efficiently than the
baseline, within the computation budget $\eta$.

\subsection{Baseline Algorithm}
\label{sec:algo:base}
%
\begin{algorithm2e}[t]
 $\Result \gets \emptyset$\;
 \For{$i=1$ \KwTo $N$}{
  $\R_i \gets \textsf{LoadObjectData}(i)$\;\label{line:base:load}
  $\Result \gets \Result \uplus f(\R_i)$\;\label{line:base:execute}
 }
 $\RePre \gets \emptyset$;
 $\ReSke \gets \emptyset$\;
 \ForEach{$p\in\Result$}{
   \eIf{$|\Neighbor_\Result(p;r_x,r_y)|\le\tau$\label{line:base:count}}{
     $\RePre \gets \RePre \uplus \set{p}$\;
   }{
     $\ReSke \gets \ReSke \uplus \{(p, 1)\}$\;
   }
 }
 \Return $\RePre$, $\ReSke$\;
 \caption{\label{algo:base}$\textsf{ExploreBase}(f,r_x,r_y,\tau)$}
\end{algorithm2e}

We first present a straightforward algorithm (Algorithm
\ref{algo:base}) that performs exact evaluation on the full dataset
given an exploration query $f$.  The algorithm takes as input (i)~a
callable function $f$ specifying the exploration query, (ii)~radii
$r_x$ and $r_y$ that define the neighborhood of a point, and
(iii)~the sparsity threshold $\tau$.

Algorithm~\ref{algo:base} evaluates the exploration query in a
brute-force fashion.  For each object $i$, we load its data $\R_i$
into memory (line~\ref{line:base:load}).  (If a tuple is in the
prefetched sample, we can avoid reloading it; this detail is not shown
in Algorithm~\ref{algo:base}.)  Then, we feed object $i$'s data to
the function $f$ for evaluation (line~\ref{line:base:execute}).
$\Result$ is simply the union of the execution output over all
objects.  For each point $p \in \Result$, we include it in $\RePre$
if it has no more than $\tau$ neighbors in $\Result$; otherwise, we
create a sketch point for it with weight $1$ in $\ReSke$.
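
The structure of Algorithm~\ref{algo:base} can be sketched in Python as follows. This is a minimal illustrative sketch, not our implementation: the names (`explore_base`, `load_object_data`, `count_neighbors`), the representation of points as `(x, y)` tuples, and the naive per-point neighbor scan are assumptions made for illustration.

```python
def count_neighbors(result, p, r_x, r_y):
    # Naive O(|Result|) scan per point; a hypothetical stand-in for
    # the grid-based counting heuristic.
    px, py = p
    return sum(1 for (qx, qy) in result
               if abs(qx - px) <= r_x and abs(qy - py) <= r_y)

def explore_base(f, load_object_data, num_objects, r_x, r_y, tau):
    # Step 1: evaluate f on each object's full data; Result is the
    # union of the per-object outputs.
    result = []
    for i in range(num_objects):
        result.extend(f(load_object_data(i)))
    # Step 2: a point with at most tau neighbors goes to the precise
    # set; every other point becomes a weight-1 sketch point.
    precise, sketch = [], []
    for p in result:
        if count_neighbors(result, p, r_x, r_y) <= tau:
            precise.append(p)
        else:
            sketch.append((p, 1))
    return precise, sketch
```

With the naive scan, the second step is quadratic in $|\Result|$; the grid-based heuristic described next avoids this.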

\paragraph{Counting Neighbors.}
Brute-force neighbor counting can take as much as $O(|\Result|^2)$
time.  We use the following heuristic: partition the result set
$\Result$ using grid cells, each of size $r_x \times r_y$.

By Lemma~\ref{lemma:count}, the neighborhood of a point $p\in\RR$
contains the grid cell $\Grid_{ij}$ containing $p$, and is contained
by the $3 \times 3$ grid cells centered at $\Grid_{ij}$.  Moving from
$\RR$ to the finite result set $\Result$, for a $p\in\Result$ that
belongs to some partition $\Partition_{ij} = \Result \cap \Grid_{ij}$,
$p$'s neighbors in $\Result$, i.e., $\Neighbor_{\Result}(p;r_x,r_y)$,
form a superset of $\Partition_{ij}$ and fall within the $3 \times 3$
partitions centered at $\Partition_{ij}$
(Corollary~\ref{corollary:count}).

\begin{lemma}\label{lemma:count}
 For any $p\in\RR$, if $p\in\Grid_{ij}$,
 where $\Grid_{ij}=[i\cdot r_x, (i+1)\cdot r_x)\times[j\cdot r_y,
   (j+1)\cdot r_y)$,
 then we have
 \begin{itemize}
  \item[i.] $\Grid_{ij}\subseteq\Neighbor_{\RR}(p;r_x, r_y)$;
  \item[ii.] $\Neighbor_{\RR}(p;r_x, r_y)\subseteq\bigcup_{i'=i-1}^
    {i+1}\bigcup_{j'=j-1}^{j+1}\Grid_{i'j'}$.
 \end{itemize}
\end{lemma}

\begin{corollary}\label{corollary:count}
 Given $\Result\subseteq\RR$, partition it as $\Result =
   \bigcup_{i,j\in\Z} \Partition_{ij}$, where $\Partition_{ij} =
   \Result \cap \Grid_{ij}$.  For any $p \in \Partition_{ij}$,
 we have
 \begin{itemize}
  \item[i.] $\Partition_{ij}\subseteq\Neighbor_{\Result}(p;r_x, r_y)$;
  \item[ii.] $\Neighbor_{\Result}(p;r_x, r_y)\subseteq\bigcup_{i'=i-1}^
    {i+1}\bigcup_{j'=j-1}^{j+1}\Partition_{i'j'}$.
 \end{itemize}
\end{corollary}

Because $\Result$ is determined by $f$, whose behavior is assumed to
be unknown to the algorithm, it is not possible to index $\Result$
beforehand in order to speed up neighbor counting.  However, with
Corollary \ref{corollary:count}, we can avoid performing
$O(|\Result|^2)$ comparisons, and narrow down the possible neighbors
of a point to its 9 adjacent partitions.  Moreover, if a partition
$\Partition_{ij}$ contains more than $\tau$ points, we can immediately
determine that all its points should be added to the sketch.
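
This grid-based counting can be sketched as follows. It is an illustrative sketch under our own naming; points are assumed to be `(x, y)` tuples, and the early exit for partitions larger than $\tau$ reports only a lower bound on the neighbor count, which is all the algorithm needs to classify those points as dense.

```python
import math
from collections import defaultdict

def grid_neighbor_counts(result, r_x, r_y, tau):
    # Partition the result set into grid cells of size r_x-by-r_y.
    cells = defaultdict(list)
    for (x, y) in result:
        cells[(math.floor(x / r_x), math.floor(y / r_y))].append((x, y))

    counts = {}  # keyed by point; duplicate points share one entry
    for (i, j), part in cells.items():
        if len(part) > tau:
            # The cell is contained in every member's neighborhood
            # (Corollary i), so all its points are already dense.
            for p in part:
                counts[p] = len(part)  # lower bound; enough to exceed tau
            continue
        for (px, py) in part:
            # Exact count: only the 3-by-3 adjacent cells can contain
            # neighbors of a point in cell (i, j) (Corollary ii).
            c = 0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    for (qx, qy) in cells.get((i + di, j + dj), ()):
                        if abs(qx - px) <= r_x and abs(qy - py) <= r_y:
                            c += 1
            counts[(px, py)] = c
    return counts
```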

\paragraph{Time Complexity.} The execution of
Algorithm~\ref{algo:base} consists of three steps: (i)~loading each
object's data into memory, (ii)~executing $f$ on each object's data,
and (iii)~computing $\RePre$ and $\ReSke$ from $\Result$.

Step~(i) accesses the full dataset, fetching $(1-\zeta)\|\DataSet\|$
tuples (prefetched tuples are excluded).  This use of the prefetched
sample is not very effective; we will see a much better use of this
sample in Section~\ref{sec:algo:sample}.

Steps~(ii) and~(iii) are carried out in memory.  Step~(ii) depends on
how $f$ behaves, while step~(iii) only depends on the size of the
result set $\Result$ and the sparsity threshold $\tau$.

For step (ii), the time complexity for the brute-force execution of
$f$ is linear in the number of tuples for all three types of
exploration queries described in Section~\ref{sec:problem:query}.
Hence, the overall complexity of executing such queries on $\DataSet$
is $O(\|\DataSet\|)$.

For step (iii), thanks to the counting technique we have described,
the worst-case complexity is improved from $O(|\Result|^2)$ to
$O(\tau\cdot|\Result|)$.  The worst case arises when each (non-empty)
partition of $\Result$ contains exactly $\tau$ points (so no point
can be pruned from counting), and each non-empty partition is
adjacent to one or more (at most 8) other non-empty partitions.  In
this case, the total number of pairs of points compared is
$O\left(\tau^2\cdot\tfrac{|\Result|}{\tau}\right)=O(\tau\cdot|\Result|)$.
In particular, for all three types of exploration queries we consider,
$|\Result|=O(\|\DataSet\|)$, i.e., linear in the size of the full
dataset $\DataSet$.

\paragraph{Result Quality.}
In terms of the quality of the output, without imposing the
computation budget $\eta$, Algorithm~\ref{algo:base} trivially
returns the exact $\RePre$ and a sketch that gives
$\delta(\Result\setminus\RePre, \ReSke) = 0$.

\begin{figure}[t]
 \centering
 \subfloat[$f_1:$ Projection on \texttt{points}-\texttt{rebounds} plane]
   {\label{fig:eg-provision-pts-reb}
   \includegraphics[width=0.45\linewidth]{figs/prj_pts_reb.tikz}}
 \hfill
 \subfloat[$f_2:$ Projection on \texttt{assists}-\texttt{steals} plane]
   {\label{fig:eg-provision-ast-stl}
   \includegraphics[width=0.45\linewidth]{figs/prj_ast_stl.tikz}}
 \caption{Comparing results of projection queries with different
   attributes.}\label{fig:eg-provision}
\end{figure}


\subsection{Sampling-based Algorithms}
\label{sec:algo:sample}
The baseline algorithm essentially fetches the full dataset
$\DataSet$, object by object.  Given the large volume of the data,
accessing the entire dataset piece by piece via a service API can be
very costly.  The complexity of full evaluation and neighbor counting
makes such brute-force evaluation too inefficient for an interactive
environment.

The baseline algorithm also disregards the computation budget $\eta$.
Working under a budget constraint is challenging.  Because we do not
assume anything about the behavior of $f$, there is no guarantee that
evaluating $f$ on partial data of an object will produce a partial
output of evaluating $f$ on the object's full data.  Therefore, in
order to comply with the budget constraint, we must choose to evaluate
$f$ on the full data for a subset of objects, and completely ignore
the remaining objects.  But how can we know which subset of objects to
choose, without knowing how $f$ behaves?  The challenge is compounded
by the problem that the data $\R_i$ of each object $i$ follows some
unknown distribution.  For example, in the context of the basketball
data, an object could be a player or a team.  For a player object,
depending on the player's position on the court (e.g., point guard
vs.\ power forward), and depending on the ability of the player
(e.g., a superstar like Michael Jordan vs.\ a mediocre player), the
distribution of this object's data will obviously differ greatly from
that of other objects.


Fortunately, the prefetched sample comes to the rescue.  Suppose that
for each object, we have a sample of its data.  Then, the result of
applying $f$ to the prefetched sample may resemble the full result
$\Result$ in terms of outlier identities, even though the outliers
returned by some queries, e.g., a \emph{streak query}, represent
low-probability events that come from one or a few tuples of the full
data.  In Section~\ref{sec:select:pos}, we provide an analysis of this
connection between the result on the sample and that on the full data
for \emph{projection queries}.

In the remainder of this section, we describe a general sampling-based
algorithm that uses the prefetched sample to select a small number of
objects, for which we access their full data for evaluation
(Algorithm~\ref{algo:sample}).

\begin{algorithm2e}[t]
 $\Result\Sample \gets \emptyset$\;
 \For{$i=1$ \KwTo $N$}{
  $\Result\Sample \gets \Result\Sample \uplus f(\R\Sample_i)$\;
  \label{line:sample:exe}
 }
 $(\Obj^+,\Obj^-) \gets \textsf{SelectObjects}(\Result\Sample,\eta)$\;
 \label{line:sample:select}
 $\Result^+ \gets \emptyset$\;
 \ForEach{$i\in\Obj^+$}{
  $\R_i \gets \textsf{LoadObjectData}(i)$\;
  $\Result^+ \gets \Result^+ \uplus f(\R_i)$\;
 }
 $\Result^- \gets \emptyset$\;
 \ForEach{$j\in\Obj^-$}{
  $\R_j \gets \textsf{LoadObjectData}(j)$\;
  $\Result^- \gets \Result^- \uplus f(\R_j)$\;
 }
 $\RePreProx \gets \emptyset$;
 $\ReSke \gets \emptyset$\;
  \ForEach{$p \in \Result^+$}{
   \eIf{$|\Neighbor_{\Result^+}(p;r_x,r_y)| + \lambda \cdot
     |\Neighbor_{\Result^-}(p;r_x,r_y)| \le \tau$
     \label{line:sample:count1}}{
    $\RePreProx \gets \RePreProx \uplus \set{p}$\;
   }{
     $\ReSke \gets \ReSke \uplus \{(p, 1)\}$\;
   }
  }
  \ForEach{$p \in \Result^-$}{
   $\ReSke \gets \ReSke \uplus \{(p, \lambda)\}$\;  \label{line:sample:count2}
  }
 \Return $\RePreProx$, $\ReSke$\;
 \caption{\label{algo:sample}$\textsf{ExploreSample}
   (f,\set{\R\Sample_i}_{i=1}^N,r_x,r_y,\tau,\eta)$}
\end{algorithm2e}

\paragraph{Prefetching.}
Algorithm~\ref{algo:sample} assumes a prefetching step (Step~0
described in Section~\ref{sec:system:flow}) that works as follows.
Upon initialization of the query evaluator, given a fixed sample rate
$\zeta$, for each object $i$, we sample $\zeta n_i$ times uniformly
and independently at random from $t_1,\dots,t_{n_i}$ with replacement.
All the $\zeta n_i$ sample tuples form the sample data $\R\Sample_i$
of object $i$.  The full set of sample data $\{\R\Sample_i\}_{i=1}^N$
resides in memory throughout the lifetime of the evaluator, and is
fed into the evaluation of each forthcoming exploration query $f$ in
Algorithm~\ref{algo:sample}.
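
The prefetching step can be sketched as follows. This is an illustrative sketch: the in-memory representation of objects as a dictionary of tuple lists, and the function and parameter names, are our assumptions.

```python
import random

def prefetch_samples(objects, zeta, seed=0):
    # Step 0: for each object i with n_i tuples, draw zeta * n_i
    # tuples uniformly and independently at random, with replacement.
    rng = random.Random(seed)
    samples = {}
    for i, tuples in objects.items():
        k = int(zeta * len(tuples))
        samples[i] = [rng.choice(tuples) for _ in range(k)]
    return samples
```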

\paragraph{Object Selection.}
Algorithm~\ref{algo:sample} first executes $f$ on the prefetched
sample for all objects.  The result, denoted by $\Result\Sample$, is
the union of $f(\R\Sample_i)$ over all objects, along with the
ownership of each point in the result (line~\ref{line:sample:exe}).
Based on $\Result\Sample$, we select two disjoint subsets of objects
(line~\ref{line:sample:select}): (i)~$\Obj^+$, a set of objects
chosen so that, for each $p\in\RePre$, we expect the object $i$ that
yields $p$ under $f$ (i.e., $p\in f(\R_i)$) to belong to $\Obj^+$;
and (ii)~$\Obj^-$, a random sample of the remaining objects
$\Obj\setminus\Obj^+$.  We ensure that the total
number of tuples contained in the full data for these objects is
within the budget $\eta$.  We defer the discussion of how the objects
are selected (line~\ref{line:sample:select}) to
Section~\ref{sec:select}.

\paragraph{Full Execution.}
The two sets of objects $\Obj^+$ and $\Obj^-$ are presumably much
smaller than the set of all objects.  Algorithm~\ref{algo:sample} then
executes $f$ on the full data for $\Obj^+$ and $\Obj^-$; we denote the
result sets as $\Result^+$ and $\Result^-$ respectively.

\paragraph{Counting.}
There are two key differences in neighbor counting and result
generation compared to Algorithm~\ref{algo:base}: (1)~only points
of $\Result^+$ may be included in $\RePreProx$, and (2)~while each
point of $\Result^+$ is counted once as before, the presence of each
point in $\Result^-$ is multiplied by $\lambda$, a multiplier
depending on how the budget $\eta$ is divided between $\Obj^+$ and
$\Obj^-$.  The choice of the value of $\lambda$ will also be described
in Section~\ref{sec:select}.


\paragraph{Time Complexity.}
Compared to Algorithm~\ref{algo:base}, Algorithm~\ref{algo:sample}
reduces the cost of fetching data and counting from $O(\|\DataSet\|)$
to $O((\zeta + \eta) \cdot \|\DataSet\|)$.  As for the cost of
executing $f$: for a query with linear or super-linear complexity
$T(n)$, the overall time complexity is reduced to $T((\zeta + \eta)
\cdot \|\DataSet\|)$.  In particular, for the three types of queries
we consider, all with linear time complexity, the overall time
complexity is reduced to $O((\zeta + \eta) \cdot \|\DataSet\|)$.


\paragraph{Difficulty of Provisioning $\Obj^+$.}
It is obvious that the choice of $\Obj^+$ is critical to the quality
of $\RePreProx$---if $i\not\in\Obj^+$, there is no chance for points
of $f(\R_i)$ to appear in $\RePreProx$.  To understand why it is
necessary to perform online selection of $\Obj^+$ and $\Obj^-$ based
on $\Result\Sample$, one must consider the diversity of queries that
can be applied to a single dataset.

Figure~\ref{fig:eg-provision} shows scatter plots with heatmaps of
the exact results for two queries of the same type (projection)
but on different attributes.  Query $f_1$ projects the NBA players'
game-by-game performance to the \texttt{points}-\texttt{rebounds}
plane, while $f_2$ projects the same data on two different attributes
\texttt{assists} and \texttt{steals}.  For each of $f_1$ and $f_2$,
we use a rectangular neighborhood of size approximately $\tfrac{1}{10}
  \times \tfrac{1}{10}$ of the result space, and a proper value of
$\tau$ that limits the size of $\RePre$ to be roughly 100.  Comparing
$\RePre^1$ for $f_1$ and $\RePre^2$ for $f_2$, $\RePre^1$ has 99
points (possibly overlapping) corresponding to 40 distinct objects, and
$\RePre^2$ has 101 points corresponding to 46 distinct objects.
Together, $\RePre^1$ and $\RePre^2$ cover a total of 83 distinct
objects, with only 3 in common.

This example illustrates that it is impossible to use a static choice
of $\Obj^+$ to provide a good coverage of objects that result in
points of $\RePre$ for all possible queries.  Therefore, we perform
query-specific online object selection, as we explain in the next
section.








