%!TEX root = paper.tex

\section{Introduction}
\label{sec:intro}

Our work is partly motivated by claims about interesting facts drawn
from data in the context of \emph{computational journalism}~\cite{cidr11-CohenLiEtAl-cjdb,cacm11-CohenHamiltonTurner-comp_jour}.
For example, \emph{ESPN Elias
  Says...}\footnote{\url{http://espn.go.com/espn/elias}} produces many
factual claims based on players' performance statistics for a variety
of professional sports in North America.  While these claims come in
many different forms, the key ingredient in each is a comparison
against other claims of the same form.  As an example, consider the
following two claims about the performance of two NBA players.
\begin{itemize}
\item \emph{Kevin Love's 31-point, 31-rebound game on Friday night ...
    Love became the first NBA player with a 30/30 game since Moses
    Malone had 38 points and 32 rebounds in a game back in
    1982.}\footnote{\url{http://espn.go.com/espn/elias?date=20101113}}
\item \emph{He (LeBron James) scored 35 or more points in nine
    consecutive games and joined Michael Jordan and Kobe Bryant as the
    only players since 1970 to accomplish the
    feat.}\footnote{\url{http://www.nba.com/cavaliers/news/lbj_mvp_candidate_060419.html}}
\end{itemize}
Both claims aim to highlight a player's performance, but they describe
different aspects of the game.  Common to both is the attempt to
impress the reader by stating that few others have done the same or
better before.

However, instead of merely singling out outliers in text, it is more
powerful to visualize the set of points representing all claims of the
same form.  Figure~\ref{fig:eg-proj-full} shows one such visualization
of all NBA players' \texttt{points} and \texttt{rebounds} stats in a
single game: each performance is treated as a 2d point, the ``sparse''
points are drawn in a scatter plot, and a heatmap shows the density of
the remaining points (we define ``sparse points'' formally in
Section~\ref{sec:problem:defn}).  The visualization gives clear
context on how impressive Kevin Love's 31/31 performance is by showing
not only whether the performance is on the skyline, but also how far
it is from the edge of the cloud of mediocre performances.
Furthermore, this single visualization helps users explore many
outstanding performances about which claims of the same form can be
made, and shows the distribution of players' single-game
\texttt{points}-\texttt{rebounds} performance.  A similar
visualization can be generated to evaluate LeBron James's 9-game
35-plus-point streak.  Indeed, this visualization applies to any form
of claim that can be represented by a 2d scatter plot.


We identify two visual features essential to data
exploration---outliers and clusters.  This leads us naturally to the
choice of overlaying a scatter plot (for outliers) on a heatmap (for
clusters).

Given a large set $\Result$ of points to visualize, we do not need to
show the exact distribution of $\Result$, because in practice ordinary
users cannot perceive the difference between two dense regions
containing similar numbers of points (say $200$ versus
$210$).  Approximation can be easy in certain cases.  For example, in
the Kevin Love case, if the underlying data are stored as points and
rebounds per player per game, each exploration query is a simple
lookup.  One can apply many existing outlier detection and
density-estimation techniques to produce an approximation of the final
plot.  In many other cases, however, computing this visualization,
even approximately, is a non-trivial task since it involves running
many potentially expensive aggregate queries over the entire database.
For example, in the LeBron James case, it takes an algorithm linear
time to scan through a player's game-by-game scoring stats to produce
$\Result$ representing his \emph{prominent scoring
  streaks}~\cite{sigkdd11-JiangLiEtAl-prominent_streak}.  In such
cases, without knowing how the input is transformed into $\Result$, it
is impossible to sample directly from $\Result$.


We observe that many datasets for exploration consist of data about
objects (e.g., players or teams), and an exploration is often
represented as a 2d point set $\Result$ obtained by evaluating a
blackbox exploration query on all the objects.  Clearly, the objects
do not contribute equally to the visual properties of the
visualization of $\Result$: many objects produce only points that are
buried inside dense regions, and these points are uninteresting from
the visualization perspective beyond their contribution to the density
color.

Given a user-provided exploration query, modeled as a function $f$
that maps each object's data to a set of 2d points, our goal is,
without knowing how $f$ behaves, to find a small set of objects,
evaluate $f$ on their data, and produce a good approximation of the
scatter-plot-with-heatmap visualization of $\Result$ that preserves
the two aforementioned key visual features: outliers and clusters.
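To make the setting concrete, let $O$ denote the set of objects and
$S \subseteq O$ a sample of them (the symbols $O$, $S$, and
$\widetilde{\Result}$ are illustrative notation only).  Then
\begin{equation*}
\Result \;=\; \bigcup_{o \in O} f(o),
\qquad
\widetilde{\Result} \;=\; \bigcup_{o \in S} f(o),
\end{equation*}
and the task is to choose $S$ with $|S| \ll |O|$, treating $f$ as a
blackbox, so that the visualization of $\widetilde{\Result}$ preserves
the outliers and clusters of the visualization of $\Result$.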

The main contributions of this paper include:
\begin{enumerate}
\item We formally define the two key visual features of
  scatter-plot-with-heatmap visualizations, and quantify the quality
  of an approximation.
\item We propose a two-phase sampling-based algorithm that efficiently
  generates an approximation of $\Result$ for visualization without
  computing the exact $\Result$ by evaluating $f$ on the full data,
  and we quantitatively justify its design.
\item We perform extensive experiments to evaluate both the quality of
  approximation and the efficiency of the sampling-based algorithm for
  interactive data exploration.
\end{enumerate}

