\section{Introduction}\label{sec:intro}

% introductory para to problem

It is useful to be able to identify the source distribution of a given stream of data, or to compare the distributions of different streams. 
% test whether a stream is from a given distribution, or if the source of two independent collections of data are from the same source distribution. 
For instance, we may study packet inter-arrival times in a high-speed network and want to know whether their distribution is identical on different days or on different routers, or astronomers may compare the magnitude distributions of different classes of celestial objects to see if they are similar. Unfortunately, in many of these applications, maintaining a complete record of the data would quickly overwhelm local storage capacity---in the examples above, the data rate is orders of magnitude greater than the available memory. To overcome this problem, it is necessary to perform comparative tests on the distribution of the data as it is presented in a stream. In this paper, we show how one such statistic for comparing distributions, the Kolmogorov-Smirnov statistic, can be estimated succinctly yet accurately.

% Motivate with an example?

% introduce the KS test

The Kolmogorov-Smirnov test (henceforth referred to as the {\em KS test}) is a method for testing whether given data are drawn from a specific distribution. 
For instance, this test is commonly used by astronomers to check if different classes of stars have the same magnitude distribution~\cite{WJ03}. 
The power of this test comes from the fact that it is non-parametric: it does not assume a fixed family of distributions (such as the normal distribution in the case of the Student $t$-test) and can be applied to any distribution with a continuous distribution function. This lack of restriction makes it invaluable for inferring whether data fit a given distribution when it is preferable not to assume a fixed parametric form, or when the distribution does not have well-established tests of its own. It is also superior to $\chi^2$ tests in that there is no need to decide how to bin the data or worry about whether each bin has sufficient density.
The test computes a statistic from the empirical distribution function of the data, which is used to reject the null hypothesis that the distributions are identical at a given significance level (e.g., $\alpha = 0.05$).
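As a concrete illustration (the code below is our own sketch, not one of the algorithms proposed in this paper), the exact one-sample KS statistic $D = \sup_x |F_n(x) - F(x)|$ can be computed offline from a stored sample as follows:

```python
def ks_statistic_one_sample(sample, cdf):
    """Exact (offline) one-sample KS statistic D = sup_x |F_n(x) - F(x)|
    against a continuous reference CDF. The empirical CDF F_n jumps from
    (i-1)/n to i/n at the i-th smallest sample value, so the supremum is
    attained at one of these jumps."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        fx = cdf(x)
        d = max(d, i / n - fx, fx - (i - 1) / n)
    return d

# Example: five points tested against the Uniform(0, 1) CDF.
uniform_cdf = lambda x: min(max(x, 0.0), 1.0)
d = ks_statistic_one_sample([0.1, 0.35, 0.5, 0.62, 0.9], uniform_cdf)
```

Storing and sorting the full sample, as this offline computation does, is exactly what becomes infeasible for the streams considered here, which motivates the sketch-based estimates in this paper.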

The KS test has both a one-sample and a two-sample variant. In the one-sample variant, empirical data (e.g., packet inter-arrival times or luminosity) can be tested against a fixed, known distribution to see whether the data are drawn from this distribution. This test is useful for verifying a hypothesis about the data. The two-sample version of the test allows for the comparison of two (not necessarily equal-size) datasets without any foreknowledge of the underlying distributions. This test can be used to check whether the two sources are different.
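For concreteness, the exact two-sample statistic $D = \sup_x |F_n(x) - G_m(x)|$ can be computed offline by merging the two sorted samples. This illustrative sketch (our own, and assuming no ties between the samples, which holds almost surely for continuous distributions) evaluates the gap between the two empirical CDFs after every step of the merge:

```python
def ks_statistic_two_sample(a, b):
    """Exact (offline) two-sample KS statistic for samples of possibly
    unequal sizes n and m, assuming no ties between the samples
    (almost surely true for continuous distributions)."""
    a, b = sorted(a), sorted(b)
    n, m = len(a), len(b)
    i = j = 0
    d = 0.0
    # Merge the two sorted samples; after each step, i/n and j/m are the
    # values of the two empirical CDFs just past the point consumed.
    while i < n and j < m:
        if a[i] <= b[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / n - j / m))
    return d

# Example: the two empirical CDFs differ by at most 1/2.
d = ks_statistic_two_sample([1, 2, 3, 4], [2.5, 3.5])
```

Note that the statistic is symmetric in the two samples and requires no knowledge of the underlying distributions, only the samples themselves.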

% Para on one-sided estimate of distance
% There are situations in which we would like the error to be one-sided so that we always either over- or under-estimate the KS-statistic. Specifically, if we would like to be certain that there is a large deviation between distributions (e.g., we want to only raise an alarm in a networking application when we are quite certain that something anomalous is going on since these anomalies are expensive to drill down), then we would prefer to err on the side of under-estimating. Conversely, if we never want to miss a potential deviation (have a false negative, or miss out on some anomalous behavior, in the abovementioned application), then it would be preferable to over-estimate the statistic.

\subsection{Applications}

We outline below some of the applications for which streaming algorithms for the KS test would be useful.

{\bf Astronomy:} The KS test is commonly used in the field of astronomy to measure the distance between distributions of astronomical measurements~\cite{WJ03}. The recent increase in the amount of data available to astronomers will soon make storing these measurements very challenging. For instance, the Chandra space telescope~\cite{CHANDRA} is capable of recording data at the rate of 1.8~Gbps, but it has a downlink capacity of only 1~Mbps to Earth. Another telescope under development, the Square Kilometre Array~\cite{SKA}, will generate data at the rate of several times the bandwidth of the entire Internet! In these and other cases, there will be a critical need for summarizing this data as efficiently as possible.

{\bf Wireless sensor networks:} One of the most common uses of wireless sensor networks is to perform scientific measurements at remote or widespread locations. These networks consist of sensor motes with limited resources such as battery life, memory, and processing power~\cite{MFHH05}. Performing statistical tests to detect changes in, or to measure properties of, the distributions of the measurements would ordinarily require retaining the sensed data in a mote's limited storage. The techniques in this paper could be used to perform lightweight tests on the data to detect significant changes in the measurements.

{\bf Networking:} The inter-arrival time between packets is a common metric in network measurement~\cite{CPB93, KMFB04}. An algorithm for estimating the KS-statistic would give network operators the ability to detect when the packet arrival process changes significantly, or to match an arrival pattern against known distributions of previously identified behavior. Since this data is generated at the rate of many gigabytes per second across a large ISP, it is infeasible to keep a long-term record of it. The algorithms proposed in this paper allow for succinct storage of these measurements. Other quantities that could be compared in this way include packet size, delay, and loss.

% {\bf Financial applications:} 


\subsection{Contributions}

% Para on streaming algorithms                                                                                         
In this paper we build on the considerable research done in the area of streaming algorithms (e.g.,~\cite{MP78, FM85, AMS96, GMV06, MG82, GK01, SBAS04}). These algorithms are designed for situations in which only a single pass is allowed over the data and a small summary (called a {\em sketch}) of the data must be used to compute statistics on it. While many streaming algorithms compute functions on the frequency distribution of the stream (e.g., the frequency moments~\cite{AMS96}), a few study applications in which the values of the items in the stream are of interest. A notable example of this is the work done on computing quantiles (i.e., median and other selection queries) in a stream, which we make use of in this paper.

% Para on quantile sketches and their use to this problem
% Para on meta-algorithms that can use any of these sketches                                                         

The problem of computing quantiles was one of the earliest to be investigated in the streaming context~\cite{MP78}. There has been much subsequent research on this problem~\cite{MRL99, GK01, SBAS04, CM04, CKMS06} that we take advantage of in this paper. More precisely, we show how any reasonable quantile sketch data structure (e.g., those defined in~\cite{GK01, SBAS04}) can be used to approximate the distance measurement for the KS test. The algorithms we propose are hence meta-algorithms (with their own approximation guarantees) that can make use of any of these quantile sketches, and we analyze them in terms of the approximation guarantee of the underlying sketch. The advantage of this approach is extensibility: if a better quantile sketch is discovered, it can easily be plugged into our algorithms. 
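To illustrate the meta-algorithm interface (this is our own illustrative code with hypothetical names, not the algorithm analyzed in this paper), all that is required of a quantile sketch is that it support approximate rank queries. Below, a placeholder ``sketch'' that stores every item stands in for a compact $\epsilon$-approximate summary such as Greenwald-Khanna~\cite{GK01}; the KS distance is then estimated by comparing normalized ranks at the values the sketches retain:

```python
import bisect

class ExactQuantileSketch:
    """Placeholder 'sketch' that stores every item and so answers rank
    queries exactly. In the intended use, a compact epsilon-approximate
    summary (e.g., Greenwald-Khanna) would be substituted; only insert(),
    rank(), and values() matter to the meta-algorithm."""

    def __init__(self):
        self._items = []

    def insert(self, x):
        bisect.insort(self._items, x)

    def rank(self, x):
        # Number of inserted items <= x.
        return bisect.bisect_right(self._items, x)

    def values(self):
        return list(self._items)

    def __len__(self):
        return len(self._items)

def estimate_ks_distance(sketch_a, sketch_b):
    """Estimate the two-sample KS distance by comparing the normalized
    (approximate) ranks of the two sketches at every value either sketch
    retains. With epsilon-approximate rank queries, the estimate inherits
    an additive error on the order of the sketches' rank-error bounds."""
    n, m = len(sketch_a), len(sketch_b)
    return max(abs(sketch_a.rank(v) / n - sketch_b.rank(v) / m)
               for v in sketch_a.values() + sketch_b.values())

# Streaming usage: feed each stream into its own sketch, then compare.
sa, sb = ExactQuantileSketch(), ExactQuantileSketch()
for x in [1, 2, 3, 4]:
    sa.insert(x)
for x in [2.5, 3.5]:
    sb.insert(x)
d = estimate_ks_distance(sa, sb)
```

Because the comparison logic depends only on this narrow interface, swapping in a different quantile sketch changes the space/accuracy trade-off without changing the meta-algorithm itself.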

The contributions of this paper can be summarized as follows:

\begin{itemize}
% motivate the use of sketches for statistical tests -- shown in this paper
\item This paper motivates the use of streaming algorithms for statistical tests, such as the Kolmogorov-Smirnov test, when datasets are so large that it is infeasible to store them in their entirety. We show, via theorems and empirical experiments, that it is possible to perform the KS test with high accuracy while using orders of magnitude less memory than storing all the data. 

\item We propose an algorithm for the one-sample KS test to test whether a source of data is drawn from a fixed (known) distribution. The algorithm does not need to know the distribution being tested against {\em a priori}. This sketch can be made arbitrarily precise at the cost of additional memory.

\item We design an algorithm for the two-sample KS test to test whether two sources of data are from the same (unknown) distribution. In this case, nothing needs to be known about either source, other than the fact that they have continuous distributions.

\item Finally, we performed extensive experiments to demonstrate that the proposed algorithms do perform well in practice, and give considerable benefit over simple strategies such as sampling the data. We also demonstrate that the Greenwald-Khanna~\cite{GK01} quantile sketch is the best one to use for this application.
\end{itemize}

\noindent
{\bf Organization:} In Section~\ref{sec:related} the work most directly related to this problem is discussed. We define the problem and introduce quantile sketches in Section~\ref{sec:def}. The algorithms for the one-sample and two-sample KS test are given in Sections~\ref{sec:onesample} and~\ref{sec:twosample}, respectively. In Section~\ref{sec:pickingeps} we show how to pick the error parameters in our algorithms so as to guarantee a reliable answer to the KS test. The algorithms are evaluated on both real and synthetic data in Section~\ref{sec:evaluation}. Lastly, we discuss our conclusions and future work in Section~\ref{sec:conclusions}.
