\section{Experimental Evaluation}
\label{sec:evaluation}

\newcommand{\figsize}{0.32}

We experimentally evaluated our algorithms on both real and synthetic datasets to test their accuracy. We measured the absolute error in the KS-statistic for both the one-sample and the two-sample KS tests using different quantile $\epsilon$-sketches, to evaluate which ones are most effective in practice. All our code was written in Java, and all experiments were run on a 3.0 GHz Intel Core i3 Mac with 4GB of memory running Mac OS X 10.6.8.
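As a concrete reference point, the exact KS-statistic that serves as ground truth for these error measurements can be computed in one pass over the sorted data. The following sketch (in Java, our implementation language; the class and method names are illustrative, not our exact harness) computes the one-sample statistic $D_n = \sup_x |F_n(x) - F(x)|$ against a hypothesized continuous CDF:

```java
import java.util.Arrays;
import java.util.function.DoubleUnaryOperator;

public class OneSampleKS {
    // Exact one-sample KS-statistic D_n = sup_x |F_n(x) - F(x)|,
    // where F_n is the empirical CDF of the sample and F is a
    // hypothesized continuous CDF.
    public static double statistic(double[] sample, DoubleUnaryOperator cdf) {
        double[] x = sample.clone();
        Arrays.sort(x);
        int n = x.length;
        double d = 0.0;
        for (int i = 0; i < n; i++) {
            double f = cdf.applyAsDouble(x[i]);
            // The empirical CDF jumps from i/n to (i+1)/n at x[i], so the
            // supremum is attained at one of these two one-sided gaps.
            d = Math.max(d, Math.max((i + 1.0) / n - f, f - (double) i / n));
        }
        return d;
    }
}
```

For example, comparing a sample against U$(0,1)$ amounts to passing the identity CDF `x -> x`.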

We used synthetic data drawn from uniform, Gaussian, and power-law (Pareto) distributions, as these commonly appear in real data. In each case we averaged the results over 10 independently generated datasets. Unless otherwise stated, our experiments used $n = 10000$ points and allotted each algorithm 1\% of the space needed to store the entire stream.
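For instance, the power-law streams can be drawn by inverse-transform sampling from the Pareto CDF (a minimal sketch; the class name and interface are illustrative, not our exact harness):

```java
import java.util.Random;

public class SyntheticData {
    // Draw n points from Pareto(xm, alpha) by inverse-transform
    // sampling: if U ~ U(0,1), then xm / (1-U)^(1/alpha) ~ Pareto(xm, alpha).
    public static double[] pareto(int n, double xm, double alpha, Random rng) {
        double[] out = new double[n];
        for (int k = 0; k < n; k++) {
            double u = rng.nextDouble();                    // U ~ U(0,1)
            out[k] = xm / Math.pow(1.0 - u, 1.0 / alpha);   // inverse CDF
        }
        return out;
    }
}
```

Gaussian and uniform streams follow analogously from `Random.nextGaussian()` and `Random.nextDouble()`.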

For our experiments on real data we used the following three traces:

\squishlist
\item Astronomy data: We collected magnitude data for stars and galaxies from the Sloan Digital Sky Survey\footnote{\url{http://cas.sdss.org/astro/en/tools/search/radial.asp}}. We queried for data on all objects within 60 arcminutes of the location (180, 0) 
\begin{comment}
using the following query:
\begin{verbatim}
SELECT  '''' + cast(p.objId as varchar(20)) + '''' as objID,    
p.type, p.g    
FROM fGetNearbyObjEq(180,0,60) n, PhotoPrimary p    
WHERE n.objID=p.objID
\end{verbatim}
\end{comment}
and obtained 35697 stars and 62091 galaxies. The KS-statistic was computed for the magnitude distribution of the stars versus that of the galaxies in the green part of the spectrum.

% 30 arc mins, no specification of ra/dec, 0-30 for all magnitudes
% http://cas.sdss.org/astro/en/tools/search/radial.asp
\begin{comment}
http://cas.sdss.org/astro/en/tools/search/sql.asp
All stars and galaxies centered at (180, 0) within 60 arcminutes
retrieved on 7/17/2012
SELECT  '''' + cast(p.objId as varchar(20)) + '''' as objID,
   p.type, p.g
   FROM fGetNearbyObjEq(180,0,60) n, PhotoPrimary p
   WHERE n.objID=p.objID
\end{comment}


% http://www.math.ucla.edu/$\sim$tom/distributions/Kolmogorov.html
% 5\% confidence: 1.3581 (but 0.01356 for 10k points)
% 1\% confidence: 1.6276


% CRAWDAD
% packet loss: http://crawdad.cs.dartmouth.edu/meta.php?name=isti/rural\#N1000B
% signal strength: http://crawdad.cs.dartmouth.edu/meta.php?name=cu/rssi
% signal strength: http://crawdad.cs.dartmouth.edu/meta.php?name=cu/antenna
% light measurements: http://crawdad.cs.dartmouth.edu/meta.php?name=columbia/enhants\#N1000B

\item Light data: We also used irradiance measurements (in units W/cm$^2$) taken from photometric sensors as part of Columbia University's EnHANTs (Energy Harvesting Active Networked Tags) project\footnote{\url{http://crawdad.cs.dartmouth.edu/meta.php?name=columbia/enhants\#N1000B}}. We compared the irradiance levels between Traces A and B of this dataset.



\item Inter-arrival time data: For our third dataset we used inter-arrival times collected from wireless networks in the Portland area\footnote{\url{http://crawdad.cs.dartmouth.edu/meta.php?name=pdx/vwave}}. We compared the inter-arrival times (measured in nanoseconds) of five minutes of data collected at the Portland State University CS department (260325 values) against those collected at Pioneer Square (517631 values).

\squishend

We tested our algorithms using the following quantile $\epsilon$-sketches and compared the results. In every experiment, all the algorithms were allocated identical amounts of memory. 
\squishlist
\item The Greenwald-Khanna~\cite{GK01} (GK) algorithm is considered the state of the art for quantile estimation in streams, and uses space $O(\frac{1}{\epsilon}\log{(\epsilon n)})$. 
\item The q-digest~\cite{SBAS04} (QD) sketch uses space $O(\frac{1}{\epsilon}\log{U})$, where $U$ is the size of the universe. For real-valued data, we quantized the data into bins of size $10^{-5}$ and executed the algorithm on this quantized stream. Since the KS-statistic depends only on the relative order of the data, rather than the absolute values, we did not expect this to affect the result.
\item We also compared the above algorithms against the naive methodology of sampling the data, included as a baseline representing the obvious solution to this problem.
% \item The random sampling (RS) algorithm uses reservoir sampling~\cite{V85} to obtain a random sample of a given size (even if the length of the stream is not known beforehand). There is a folklore result that random sampling allows the computation of any median to within rank $\epsilon n$ with probability at least $1-\delta$ if $O(\log{(1/\delta)}/\epsilon^2)$ samples are taken. See Appendix~\ref{app:folklore-sampling} for more details.
\squishend
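The sampling baseline can maintain a fixed-size uniform sample of the stream via reservoir sampling (Algorithm R), even when the stream length is not known in advance. A minimal sketch, with illustrative names:

```java
import java.util.Random;

public class Reservoir {
    // Reservoir sampling (Algorithm R): returns a uniform random sample
    // of up to k elements from the stream, in one pass.
    public static double[] sample(double[] stream, int k, Random rng) {
        double[] res = new double[Math.min(k, stream.length)];
        for (int i = 0; i < stream.length; i++) {
            if (i < k) {
                res[i] = stream[i];          // fill the reservoir first
            } else {
                int j = rng.nextInt(i + 1);  // uniform in [0, i]
                if (j < k) res[j] = stream[i];
            }
        }
        return res;
    }
}
```

The KS-statistic is then estimated by running the exact test on the retained sample in place of the full stream.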

\subsection{One Sample}

\begin{figure*}[tbp]
\centering
\subfigure[N$(0, 1)$ vs.\ N$(0.1, 1)$]{\includegraphics[width=\figsize\textwidth]{figs/onesample-varymem-g-0-01-1.eps}}
\subfigure[U$(0, 1)$ vs.\ U$(0.1, 1)$]{\includegraphics[width=\figsize\textwidth]{figs/onesample-varymem-u-0-01-1.eps}}
\subfigure[P$(1, 1)$ vs.\ P$(1.1, 1)$]{\includegraphics[width=\figsize\textwidth]{figs/onesample-varymem-p-1-11-1.eps}}
\caption{Varying memory ($n = 10000$) for one-sample data drawn from various distributions}
%N$(0.1, 1)$ compared with N$(0, 1)$, using $n = 10000$}
\label{fig:onesample-varymemory}
\end{figure*}

In our experiments, we focused on computing the KS-statistic between distributions that are so close that accurate estimates are required to distinguish them with high confidence. The case in which the distributions are far apart is relatively easy to handle, because coarser estimates suffice to distinguish them. The summarization of the data causes an absolute error that increases with the degree to which the data are compressed. This is illustrated using normal, uniform, and Pareto-distributed data in Figure~\ref{fig:onesample-varymemory}. We found that comparing data from N$(0.1, 1)$ with the distribution N$(0, 1)$ gave a KS-statistic close to the threshold for distinguishing distributions using $n = 10000$ points, where N($\mu, \sigma^2$) denotes the Gaussian distribution with mean $\mu$ and variance $\sigma^2$. The uniform distributions U$(0, 1)$ and U$(0.1, 1)$, where U$(a, b)$ is the uniform distribution on the range $[a, b]$, and the Pareto distributions P$(1, 1)$ and P$(1.1, 1)$, where P($x_m, \alpha$) is the Pareto distribution with scale $x_m$ and shape $\alpha$, were picked for similar reasons. In all these cases, the Greenwald-Khanna sketch outperformed the sampling algorithm, which in turn outperformed the q-digest sketch. The Greenwald-Khanna sketch also gave low enough error at 1\% memory to distinguish the distributions. 
%The uniform case is particularly notable as the error is very low for such a simple distribution. 
For the rest of our experiments we focus on the normal distribution, as it exhibited the highest error.
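Concretely, the one-sample KS test at significance level $\alpha$ rejects the hypothesized distribution when
\[
D_n > \frac{c(\alpha)}{\sqrt{n}},
\]
where $c(0.05) \approx 1.358$; for $n = 10000$ this threshold is approximately $0.0136$, so the absolute error introduced by a sketch must stay well below this value for the outcome of the test to be preserved.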

\begin{figure}[tbp]
\centering
\includegraphics[width=\figsize\textwidth]{figs/onesample-varysize-g-0-01-1.eps}
\caption{Varying one-sample data size ($n$), for data drawn from N$(0, 1)$ compared with N$(0.1, 1)$, using $1\%$ memory}
\label{fig:onesample-varysize}
\end{figure}


In Figure~\ref{fig:onesample-varysize} we fixed each algorithm to use 1\% of the memory it would take to store all the data and compared how the algorithms performed as the data size increased. Note that both axes have logarithmic scales. We see that as the data size grows, the absolute error drops rapidly in all cases. For smaller data sizes the sampling algorithm outperforms the q-digest sketch, but the Greenwald-Khanna sketch is clearly the best at all sizes. This drop in error is to be expected: since the memory is fixed at 1\%, an increase in data size corresponds to an increase in the number of samples stored.

\begin{figure}[tbp]
\centering
\includegraphics[width=\figsize\textwidth]{figs/onesample-varydistance-many-0-1.eps}
\caption{Varying mean of one-sample distribution (N$(x, 1)$) compared with N$(0, 1)$, using $n = 10000$ and $1\%$ memory}
\label{fig:onesample-varydistance}
\end{figure}

Next, we studied how the accuracy of the estimate changed as the actual KS-statistic between the data and the comparative distribution varied. In Figure~\ref{fig:onesample-varydistance} we varied the mean of the normally-distributed data, compared against the distribution N$(0, 1)$, and measured the estimate of each sketch against the exact value. Once again, the Greenwald-Khanna sketch is the clear winner, almost indistinguishable from the real value in the figure. In contrast, for this distribution and these parameter values, the q-digest sketch and the sampling solution were equally bad, tending to overestimate the actual distance.

We omit experiments for the real data in the one-sample case as there are no known analytical distributions for these datasets.

\begin{comment}
\begin{itemize} 
\item Comparing error (absolute) of each sketch as memory percent changes 

\item Varying size of data

\item Varying the actual difference between streams (scatter plot)

\item Plotting actual values as some parameter (location/shape) is varied around actual value

\end{itemize}
\end{comment}

\subsection{Two Sample}

\begin{figure*}[tbp]
\centering
\subfigure[N$(0, 1)$ vs.\ N$(0.1, 1)$]{\includegraphics[width=\figsize\textwidth]{figs/twosample-varymem-g10k-0-01-1.eps}\label{fig:varymem-normal}}
\subfigure[U$(0, 1)$ vs.\ U$(0.1, 1)$]{\includegraphics[width=\figsize\textwidth]{figs/twosample-varymem-u10k-0-01-1.eps}\label{fig:varymem-uniform}}
\subfigure[P$(1, 1)$ vs.\ P$(1.1, 1)$]{\includegraphics[width=\figsize\textwidth]{figs/twosample-varymem-p10k-1-11-1.eps}\label{fig:varymem-pareto}}
\caption{Varying memory ($n = m = 10000$) for two-sample data drawn from various distributions}
\label{fig:twosample-varymem}
\end{figure*}

\begin{comment}
Experiments
\begin{itemize}
\item Comparing error (absolute) of each sketch as memory percent changes
\item Varying size of (both) data sets
\item Varying size of one data set (keeping other fixed) to measure skew
\item Varying the actual difference between streams (scatter plot)
\end{itemize}
\end{comment}

Similarly to the one-sample case, we compared our two-sample algorithm using both sketches against the sampling technique on normal, uniform, and Pareto-distributed data; the results are shown in Figure~\ref{fig:twosample-varymem}. In all these cases, the q-digest sketch does slightly better than sampling, but the Greenwald-Khanna sketch gives the best performance at almost all levels of summarization. Once again, a sketch using as little as 1\% of the original data reduces the error in the KS-statistic to a level small enough to reliably apply this test. 
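The exact two-sample KS-statistic against which these estimates are measured, $D_{n,m} = \sup_x |F_n(x) - G_m(x)|$, can be computed by merging the two sorted samples (a minimal sketch; names illustrative):

```java
import java.util.Arrays;

public class TwoSampleKS {
    // Exact two-sample KS-statistic: the maximum absolute difference
    // between the two empirical CDFs over all points.
    public static double statistic(double[] a, double[] b) {
        double[] x = a.clone(), y = b.clone();
        Arrays.sort(x);
        Arrays.sort(y);
        int i = 0, j = 0;
        double d = 0.0;
        while (i < x.length && j < y.length) {
            double v = Math.min(x[i], y[j]);
            // Advance both empirical CDFs past v (ties handled together).
            while (i < x.length && x[i] <= v) i++;
            while (j < y.length && y[j] <= v) j++;
            d = Math.max(d, Math.abs((double) i / x.length
                                   - (double) j / y.length));
        }
        return d;
    }
}
```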


Figure~\ref{fig:twosample-varyboth} shows a sharp drop in the error as the data size increases, just as in the one-sample case, and for the same reason. Since the KS test can be applied to two samples of differing sizes, we also examined the effect of varying the size of one sample while keeping the other fixed. Figure~\ref{fig:twosample-varyone} shows the result when one dataset was fixed at $n = 10000$ points while the other's size ($m$) varied. There is a drop in error, but it appears to level out once $m$ exceeds $n$. This indicates that the accuracy of the test depends on the size of the smaller of the two samples, as predicted in the previous section.
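This behavior is consistent with the standard two-sample critical value: the test at level $\alpha$ rejects when
\[
D_{n,m} > c(\alpha)\sqrt{\frac{n+m}{nm}},
\]
and as $m \to \infty$ with $n$ fixed, this threshold tends to $c(\alpha)/\sqrt{n}$, so the resolution of the test is governed by the smaller sample.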

Next, we studied the accuracy of the algorithms as the actual distance between the datasets was varied. Figure~\ref{fig:twosample-varydistance} shows how each of the algorithms performs by plotting the estimated value against the real value. For reference, the $y = x$ line that indicates the ideal answer is given as well. It is again clear from this figure that the Greenwald-Khanna sketch gives the best performance. 


\begin{figure*}[tbp]
\centering
\begin{minipage}[c]{.31\linewidth}
\includegraphics[width=\textwidth]{figs/twosample-varyboth-g-0-01-1.eps}
\caption{Varying two-sample data size ($n = m$), for data drawn from N$(0, 1)$ and N$(0.1, 1)$, using $1\%$ memory}
\label{fig:twosample-varyboth}
\end{minipage}
\quad
\begin{minipage}[c]{.31\linewidth}
\includegraphics[width=\textwidth]{figs/twosample-varyone-g-0-01-1.eps}
\caption{Varying data size of one sample ($m$) keeping other sample fixed ($n = 10000$) for data drawn from N$(0, 1)$ and N$(0.1, 1)$, using $1\%$ memory}
\label{fig:twosample-varyone}
\end{minipage}
\quad
\begin{minipage}[c]{.31\linewidth}
\includegraphics[width=\textwidth]{figs/twosample-varydistance-g10k-0-many-1.eps}
\caption{Scatter plot of estimated vs.\ real values of KS-statistic ($n = m = 10000$) between two-sample data drawn from N$(0, 1)$ and various distributions of the form N$(x, 1)$, using $1\%$ memory. The $y = x$ line is also shown for reference.}
\label{fig:twosample-varydistance}
\end{minipage}
\end{figure*}

Finally, we tested the two-sample algorithm on our real datasets. The results are shown in Figure~\ref{fig:twosample-varymem-real}. On the astronomy dataset, the Greenwald-Khanna sketch performed excellently, needing less than 0.1\% memory to give a very accurate estimate of the KS-statistic. In contrast, on the light dataset, the Greenwald-Khanna sketch performed poorest at very small fractions of memory, but by the 1\% mark it had started to best the other algorithms. On the inter-arrival time data, the q-digest sketch performed very poorly at high compression but soon overtook sampling; as always, the Greenwald-Khanna sketch performed best.

\begin{figure*}[tbp]
\centering
\subfigure[Astronomy data]{\includegraphics[width=\figsize\textwidth]{figs/twosample-varymem-astro.eps}}
\subfigure[Light data]{\includegraphics[width=\figsize\textwidth]{figs/twosample-varymem-light.eps}}
\subfigure[Inter-arrival data]{\includegraphics[width=\figsize\textwidth]{figs/twosample-varymem-psu.eps}}
\caption{Varying memory for real two-sample data}
\label{fig:twosample-varymem-real}
\end{figure*}

