%!TEX root = paper.tex

\begin{figure*}[t]
 \centering
 \input{figs/flowchart.tikz}
 \caption{System workflow.}\label{fig:flow}
\end{figure*}

\section{System Overview}
\label{sec:system}
In this section, we first describe how the data is stored and what
memory the evaluator uses, and then walk through the workflow of the
algorithm.

\subsection{Data Storage and Working Memory}
\label{sec:system:memory}
From a system point of view, we would like to host the exploration
query evaluation service for multiple datasets.  Hence, we will not
dedicate memory to storing the entire dataset, even if it fits into
memory by itself.  Instead, the dataset is stored in
SSTable~\cite{tocs08-ChangDeanEtAl-bigtable}, a distributed key-value
storage system supporting efficient lookup.  The data of an object is
accessed via a service API using its index as the key; additionally,
\emph{any single tuple} can be accessed using the object index along
with the tuple index as the key.  In other words, the API allows both
object- and tuple-level access.
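The two access granularities can be sketched as follows.  This is an
illustrative mock, not the actual service API: the composite key scheme
and the \texttt{SSTableClient} interface are assumptions made for
exposition only.

```python
# Illustrative sketch of object- and tuple-level access over a sorted
# key-value store.  The in-memory dict stands in for the real SSTable
# service; all names here are assumptions, not the system's actual API.

class SSTableClient:
    """Mock of a key-value store keyed by (object index, tuple index)."""

    def __init__(self, dataset):
        # dataset: {object_index: [tuple_0, tuple_1, ...]}
        # Because keys are sorted (obj, tup) pairs, an object's tuples
        # are contiguous, so object-level access is a range scan.
        self.store = {
            (obj, tup): value
            for obj, tuples in dataset.items()
            for tup, value in enumerate(tuples)
        }

    def get_tuple(self, obj, tup):
        # Tuple-level access: a single lookup by composite key.
        return self.store[(obj, tup)]

    def get_object(self, obj):
        # Object-level access: all tuples sharing the object index.
        return [v for (o, _), v in sorted(self.store.items()) if o == obj]

client = SSTableClient({0: ["a", "b"], 1: ["c"]})
assert client.get_tuple(0, 1) == "b"
assert client.get_object(0) == ["a", "b"]
```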

On the other hand, we do assume that a small amount of memory is
reserved for a sample of the dataset, and that each query has enough
working memory both to hold the data it accesses during evaluation
(whose size is capped by the budget $\eta$) and to carry out the
evaluation itself.
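One simple way to enforce the per-query cap is to route every data
access through a wrapper that charges it against the budget.  The
wrapper below is a sketch under that assumption; it is not part of the
system as described.

```python
# Sketch of a per-query access budget (eta in the text): every fetch is
# counted, and fetches beyond the budget are refused.  The wrapper and
# its names are illustrative assumptions, not the system's actual code.

class BudgetedAccessor:
    """Counts data accesses for one query against the budget eta."""

    def __init__(self, fetch, eta):
        self.fetch = fetch  # underlying lookup, e.g. the storage API
        self.eta = eta      # maximum number of accesses for this query
        self.used = 0

    def get(self, key):
        if self.used >= self.eta:
            raise RuntimeError("per-query access budget exhausted")
        self.used += 1
        return self.fetch(key)

data = {("obj0", 0): 3.5, ("obj0", 1): 4.2}
acc = BudgetedAccessor(data.__getitem__, eta=1)
assert acc.get(("obj0", 0)) == 3.5
# A second call to acc.get would exceed eta = 1 and raise.
```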

\subsection{Workflow}
\label{sec:system:flow}

Here we describe the high-level workflow of evaluating an exploration
query (Figure~\ref{fig:flow}).  Detailed algorithms and
analysis will be provided in Sections~\ref{sec:algo}
and~\ref{sec:select}, respectively.

\begin{itemize}
\item[0.] Upon initialization of the query evaluator, we establish the
  connection to the dataset $\DataSet$, and ``prefetch'' a random
  sample of $\zeta\|\DataSet\|$ tuples into memory, where $\zeta$ is
  the sample rate and $\|\DataSet\|$ is the total number of tuples in
  $\DataSet$.  We leave the the details of this prefetching step to
  Section~\ref{sec:algo}.

  Since the prefetching step is performed only once and its cost is
  amortized over all ensuing exploration queries regardless of their
  types, we do not count its data accesses towards the budget $\eta$
  in our problem definition.  On the other hand, the sample rate
  $\zeta$ is bounded by the amount of memory reserved for storing the
  sample, which is far less than the total data size.

\item[1.] When an exploration query comes in, $f$ is passed in as a
  blackbox function, with attribute(s) specified as parameter(s) of
  $f$.

\item[2.] At its discretion, the algorithm executes function $f$ on
  the prefetched sample and/or on the full data of a subset of the
  objects retrieved via the object-level API.  Depending on the
  algorithm, this step may be performed more than once.

\item[3.] Based on the results of executing $f$ in the previous step,
  the algorithm computes the set $\RePre$ of sparse points and a
  sketch of the remaining points $\Result\setminus\RePre$, or
  approximations thereof.

\item[4.] The results are then combined to produce a visualization of
  the exploration query's result: a scatter plot of the sparse points
  overlaid on a heatmap derived from the sketch.
\end{itemize}
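The steps above can be sketched end to end as follows.  The helper
names and the rule for splitting off sparse points are placeholders,
since the concrete algorithms appear only in later sections; only the
overall control flow mirrors the workflow described here.

```python
# End-to-end sketch of the workflow (steps 0-4).  All function names,
# the sampling method, and the sparse/sketch split are illustrative
# assumptions standing in for the algorithms of the later sections.
import random

def prefetch_sample(tuples, zeta, seed=0):
    # Step 0: draw a zeta-fraction random sample once, up front; these
    # accesses are not charged to the per-query budget eta.
    rng = random.Random(seed)
    return rng.sample(tuples, int(zeta * len(tuples)))

def evaluate_query(f, sample, fetch_object, candidate_objects, eta):
    # Step 1: f arrives as a blackbox; we know only its I/O format.
    # Step 2: execute f on the prefetched sample ...
    results = [f(t) for t in sample]
    # ... and, at the algorithm's discretion, on full data for a
    # budget-capped subset of objects via the object-level API.
    for obj in candidate_objects[:eta]:
        results.extend(f(t) for t in fetch_object(obj))
    # Step 3: split results into sparse points and the rest; a fixed
    # half-and-half split stands in for the real sparseness criterion.
    results.sort()
    mid = len(results) // 2
    sparse, rest = results[:mid], results[mid:]
    # Step 4: the caller renders sparse as a scatter plot over a
    # heatmap built from a sketch of rest.
    return sparse, rest

sample = prefetch_sample(list(range(100)), zeta=0.1)
sparse, rest = evaluate_query(lambda x: x, sample,
                              lambda obj: [obj], [200, 300], eta=1)
assert len(sparse) + len(rest) == len(sample) + 1
```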

In this workflow, the evaluator knows only the input and output
formats of $f$, and nothing about how $f$ actually processes the input
to produce the output.  In other words, an exploration
query can be represented by any blackbox function $f$ carrying the
signature specified in Section~\ref{sec:problem:prelim}, with a few
examples given in Section~\ref{sec:problem:query}.
