\section{Two-Sample Test}
\label{sec:twosample}

% It turns out that the two-sample KS test can be efficiently approximated in a stream. We defer discussion of the one-sample test to the next section.

The two-sample KS test is used when two datasets must be compared to determine whether they come from the same distribution. A significant advantage it has over the one-sample test is that no assumption needs to be made about the underlying distribution from which the two samples are drawn. As a result, it is the more commonly used test in practice.

Just as for the one-sample test algorithm, we use quantile $\epsilon$-sketches to solve this problem. The major difference here is that we assume the sketches from the two streams (samples) are sent to a common location, where the computation is performed. Note that this algorithm allows pairwise comparison of any number of streams, as long as all the sketches are in the same location. Moreover, in distributed settings, transmitting these sketches is far more bandwidth-efficient than sending the entire streams.

\subsection{Two-sample algorithm}

To compute the KS-statistic, we need to find the maximum of $|F_n(v) - G_m(v)|$ over all values $v$. Fortunately, rather than checking all (possibly infinitely many) such values, we can take advantage of the fact that the empirical distributions are discrete and check only the values $v$ for which $F_n(v)$ or $G_m(v)$ lies in the set
$$\{i/n~|~ 0 \leq i \leq n\} \cup \{i/m~|~0 \leq i \leq m\},$$
where $n$ and $m$ are the lengths of the two streams. 
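For reference, this check over the pooled values can be carried out exactly (without sketches) in a few lines. The Python sketch below is purely illustrative; \texttt{bisect\_right} on the sorted sample gives the right-continuous empirical CDF at each candidate value.

```python
import bisect

def ks_two_sample(xs, ys):
    """Exact two-sample KS-statistic: max of |F_n(v) - G_m(v)|
    over the pooled sample values."""
    xs, ys = sorted(xs), sorted(ys)
    n, m = len(xs), len(ys)
    d = 0.0
    for v in xs + ys:
        # F_n(v): fraction of xs that are <= v (right-continuous ECDF)
        f = bisect.bisect_right(xs, v) / n
        g = bisect.bisect_right(ys, v) / m
        d = max(d, abs(f - g))
    return d
```

This exact version requires storing both samples in full; the algorithm below replaces the sorted samples with quantile $\epsilon$-sketches.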

\begin{algorithm}[tb]
\caption{TwoSample($Q_1$, $n$, $Q_2$, $m$)} 
\label{alg:two-sample}

{\bf Input:} Quantile $\epsilon$-sketches $Q_1$ and $Q_2$ of streams with sizes $n$ and $m$, respectively

{\bf Output:} $\hat{D}$, an estimate of the KS-statistic $D$

\begin{algorithmic}[1]
\STATE Let $X_{i_1} \leq \ldots \leq X_{i_k}$ be the values in $Q_1$, as described in Observation~\ref{obs:sketchvalues}.
\STATE Let $Y_{j_1} \leq \ldots \leq Y_{j_l}$ be the values in $Q_2$, as described in Observation~\ref{obs:sketchvalues}.
\STATE $\hat{D} = 0$
\FOR{each $x \in \{X_{i_1}, \ldots, X_{i_k}\} \cup \{Y_{j_1}, \ldots, Y_{j_l}\}$}
\STATE Let $a = \max{\{j~|~X_{i_j} \leq x\}}$.
\STATE Let $\hat{i}_a$ be the approximate index of $X_{i_a}$, computed as described in Observation~\ref{obs:sketcherror}.
\STATE Let $b = \max{\{i~|~Y_{j_i} \leq x\}}$.
\STATE Let $\hat{j}_b$ be the approximate index of $Y_{j_b}$, computed as described in Observation~\ref{obs:sketcherror}.
\STATE $\hat{E}_x = |\hat{i}_a/n - \hat{j}_b/m|$ 
\STATE $\hat{D} = \max{(\hat{D}, \hat{E}_x)}$
\ENDFOR
\STATE return $\hat{D}$
\end{algorithmic}
\end{algorithm}
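The following Python code mirrors Algorithm~\ref{alg:two-sample} under a simplifying assumption: instead of a true streaming quantile $\epsilon$-sketch, \texttt{make\_sketch} retains roughly every $(\epsilon n)$-th order statistic of the (offline, sorted) stream together with its exact rank, so rank error comes only from the gaps between stored values. All function names are illustrative, not from any library.

```python
import bisect

def make_sketch(stream, eps):
    """Toy quantile sketch: keep roughly every (eps*n)-th order statistic
    of the sorted stream, together with its exact 1-based rank."""
    data = sorted(stream)
    n = len(data)
    step = max(1, int(eps * n))
    idx = sorted(set(list(range(0, n, step)) + [n - 1]))
    return [(data[i], i + 1) for i in idx]  # list of (value, rank)

def two_sample(q1, n, q2, m):
    """Estimate the KS-statistic from two sketches, as in TwoSample."""
    vals1 = [v for v, _ in q1]
    vals2 = [v for v, _ in q2]
    d_hat = 0.0
    for x in vals1 + vals2:
        # rank of the largest stored value <= x in each sketch (0 if none)
        a = bisect.bisect_right(vals1, x) - 1
        b = bisect.bisect_right(vals2, x) - 1
        i_hat = q1[a][1] if a >= 0 else 0
        j_hat = q2[b][1] if b >= 0 else 0
        d_hat = max(d_hat, abs(i_hat / n - j_hat / m))
    return d_hat
```

For example, for the streams $0, \ldots, 99$ and $50, \ldots, 149$ with $\epsilon = 0.1$, the true statistic is $0.5$ and the estimate returned above stays well within the $6\epsilon$ additive bound of Theorem~\ref{thm:twosampleguarantee}.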


\begin{theorem}
\label{thm:twosampleguarantee}
Algorithm~\ref{alg:two-sample} returns an estimate of the KS-statistic with at most $6\epsilon$ additive error.
\end{theorem}

\begin{proof}
Our goal is to compute $D_{n,m} = \sup_x |F_n(x) - G_m(x)|$.  For any $x$, let $E_x = |F_n(x) - G_m(x)|$. 
Let $i = \max{\{i'~|~X_{i'} \leq x\}}$ and, similarly, $j = \max{\{j'~|~Y_{j'} \leq x\}}$. By definition, $F_n(x) = i/n$ and $G_m(x) = j/m$, so $E_x = |i/n - j/m|$. We compare this value with the estimate $\hat{E}_x$ computed in line 9 of Algorithm~\ref{alg:two-sample}.

Let $X_{i_1} \leq \ldots \leq X_{i_k}$ and $Y_{j_1} \leq \ldots \leq Y_{j_l}$ be the values stored in the sketches, defined as in lines 1-2 of Algorithm~\ref{alg:two-sample}. Let $a = \max{\{j~|~X_{i_j} \leq x\}}$ and $b = \max{\{i~|~Y_{j_i} \leq x\}}$. Since $i$ is defined such that $X_i \leq x < X_{i+1}$, we have that
\begin{align}
\label{eq:squeeze}
X_{i_a} \leq X_i \leq x < X_{i+1} \leq X_{i_{a + 1}},
\end{align}
where the first inequality follows from the fact that $X_i$ was chosen as the largest value among $X_1, \ldots, X_n$ that is at most $x$ and because $\{X_{i_1}, \ldots, X_{i_k}\} \subseteq \{X_1, \ldots, X_n\}$. Similarly, the last inequality follows from the fact that $X_{i+1}$ is the smallest value among $X_1, \ldots, X_n$ that is greater than $x$.

We know from Observation~\ref{obs:sketchvalues} that the indexes $i_a$ and $i_{a+1}$ are such that $i_{a+1} - i_a \leq 2\epsilon n$. This implies, from Eq.~\ref{eq:squeeze}, that 
\begin{align}
\label{eq:realindexerror}
0 \leq i - i_a \leq 2\epsilon n.
\end{align}

Now, keep in mind that even though the sketch returns the value $X_{i_a}$, it does not have the exact value of $i_a$ available to it. However, we can approximate this value as $\hat{i}_a$ (line 6 of Algorithm~\ref{alg:two-sample}) by performing a binary search of the quantile sketch, as described in Observation~\ref{obs:sketcherror}. We have that this approximation $\hat{i}_a$ of $i_a$ is such that
\begin{align}
\label{eq:approxindexerror}
|\hat{i}_a - i_a| \leq \epsilon n.
\end{align}

Combining Eqs.~\ref{eq:realindexerror} and~\ref{eq:approxindexerror} via the triangle inequality, we get
\begin{align}
|i - \hat{i}_a| &\leq |i - i_a| + |\hat{i}_a - i_a| \nonumber\\
                &\leq 2\epsilon n + \epsilon n = 3\epsilon n.
\end{align}

In exactly the same way we can show that the estimate $\hat{j}_b$ computed in line 8 of Algorithm~\ref{alg:two-sample} is such that
\begin{align}
|j - \hat{j}_b| \leq 3\epsilon m.
\end{align}

Putting all this together: since the error of estimating $i/n$ by $\hat{i}_a/n$ is at most $3\epsilon$, and the error of estimating $j/m$ by $\hat{j}_b/m$ is at most $3\epsilon$, the error of estimating $E_x = |i/n - j/m|$ by $\hat{E}_x = |\hat{i}_a/n - \hat{j}_b/m|$ is at most $6\epsilon$.
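Spelled out, this last step is one application of the reverse triangle inequality followed by the ordinary one:
\begin{align*}
|E_x - \hat{E}_x| &= \Big|\, \big|i/n - j/m\big| - \big|\hat{i}_a/n - \hat{j}_b/m\big| \,\Big|\\
&\leq \big|(i - \hat{i}_a)/n - (j - \hat{j}_b)/m\big|\\
&\leq |i - \hat{i}_a|/n + |j - \hat{j}_b|/m \leq 3\epsilon + 3\epsilon = 6\epsilon.
\end{align*}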

Finally, note that we do not have to repeat the process above for {\em every} value of $x$, just the ones that give a different answer. Since the approximation only changes for values of $x$ among $\{X_{i_1}, \ldots, X_{i_k}\} \cup \{Y_{j_1}, \ldots, Y_{j_l}\}$, it suffices to approximate $E_x$ for these values. 
\end{proof}




\subsection{Computational Analysis}

The analysis of the online computation for the two-sample case is identical to that of the one-sample case; hence, we focus here on the offline computation cost. Once again, this cost is dominated by the $n + m$ queries performed in lines 1-2 of Algorithm~\ref{alg:two-sample}. The subsequent iterations run in $o(n + m)$ time, depending on the number of values that the quantile $\epsilon$-sketches end up storing.
