\section{Sequential Clustering}
\label{sec:svem}

In this section, we introduce a Sequential Clustering algorithm that
partitions a data stream into a number of contiguous data segments,
each corresponding to a single occurrence of a model. The algorithm
runs in near-linear time.


\subsection{Overview}

\begin{figure}[!h]
    \centering
    \includegraphics[width=\columnwidth]{SVEM/Stream2.eps}
    \caption{Concept-changing data stream}
    \label{fig:svem:stream}
\end{figure}

Fig.~\ref{fig:svem:stream} shows an example of a data stream with
changing concepts. Each circle corresponds to a single data item
$d_i$, and the black/white pattern inside the circle denotes the
unknown hidden model that governs data generation. The data in this
example are generated by three different models, and the models may
recur over time. Partition $P_1$, shown in the figure, is improper,
and its quality is poor: $D_1$ and $D_2$, the first and second
sectors of $P_1$, contain conflicting models, and the last sector
$D_3$ has too little data to learn a good model from. A better
partition $P_2$ is also shown; it is closer to the true occurrences
of models in the stream.

\begin{figure}[!h]
    \centering
    {\small
    \begin{tabular}{|ll|}
        \hline
        $D$ & a data stream $D=\{d_1,\cdots,d_n\}$\\
        $D_{i,j}$ & a subsequence of data $D$ consisting of $\{d_i,\cdots,d_j\}$\\
        $P_{i,j}$ & a partition of subsequence $D_{i,j}$\\
        $M_i^k$ & the $i$-th candidate model on level $k$\\
        $L_i^k$ & the last candidate model on level $k$ that is\\ & learned from data completely contained in $D_{1,i}$\\
        $P_i^k$ & a partition of $D_{1,i}$ whose last designated model\\& (designated model of the last sector) is on level $k$\\
        $H(D_{i,j})$ & the designated model of $D_{i,j}$\\
        $Q(P_{i,j})$ & the error of a partition over $D_{i,j}$\\
        $\hat Q(P_{i,j})$ & estimated error of a partition over $D_{i,j}$\\
        \hline
    \end{tabular}
    }
    \caption{Notations\label{tab:notation}}
\end{figure}

Fig.~\ref{tab:notation} summarizes the notation used in this
section. Below, we first gain some insight by studying a naive
dynamic programming approach that takes at least $\Omega(n^3)$ time,
and then give the details of the near-linear-time Sequential
Clustering algorithm.

\subsection{A brute force dynamic programming approach}

%% Dynamic programming solves problems having overlapping subproblems and
%% optimal substructure.
We show that the sequential clustering problem -- the problem of
partitioning a stream into data sectors so as to minimize the
partition error -- has optimal substructure, and is thus amenable to
a dynamic programming solution.

Let $D=\{d_1,\cdots,d_n\}$ denote a data stream, and $D_{i,j}$ a
sector of $D$ from $d_i$ to $d_j$. The error of a partition
$P=\{D_1,\cdots,D_m\}$ is defined as
$Q(P)=\frac{1}{|D|}\sum_i|D_i|\,\VE(M_{D_i},D_i)$, where $M_{D_i}$
is the exact model learned from sector $D_i$ and $\VE(M,D')$ gives
the validation error of model $M$ on data $D'$. Let $P_{i,j}$ denote
the optimal partition of $D_{i,j}$, that is, the partition whose
learned models lead to the minimum partition error $Q(P_{i,j})$. Our
task is to find $P_{1,n}$, the optimal partition of the entire
stream.

Assume we know $P_{1,k}$ and $P_{k+1,n}$, the optimal partitions of
the sub-streams $D_{1,k}$ and $D_{k+1,n}$, for every $k$, $1\leq
k<n$. Consider $P_{1,n}$. There are two cases: either it contains no
sub-partition, i.e., the entire sequence $D_{1,n}$ forms a single
cluster; or it is the union of two partitions $P_{1,k}$ and
$P_{k+1,n}$ for some $k$. In the former case, we learn a model from
$D_{1,n}$ and evaluate its error. In the latter case, the partition
error is given by
$$Q(P_{1,n})=\min_k \left\{ \frac{k}{n}Q(P_{1,k})+\frac{n-k}{n}Q(P_{k+1,n}) \right\}$$


Clearly, finding the best partition for $D_{1,k}$ or $D_{k+1,n}$ is
a sub-problem with the same structure. To solve these sub-problems,
we recursively solve the sub-sub-problems of finding partitions for
$D_{i,j}$, $1\leq i\leq j\leq n$, until we reach the simple case
$i=j$.  Note that we only need to solve each sub-problem once, as we
memorize and reuse the solutions to problems we have already
solved. With a bottom-up method, we find the best partition
$P_{i,j}$ for each sequence $D_{i,j}$, from shorter sequences to
longer ones. The intermediate results $Q(P_{i,j})$ are stored and
reused by subsequent computations. By storing backtracking pointers,
the optimal partition $P_{1,n}$ can be recovered easily.
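To make the recurrence concrete, the following is a minimal Python
sketch of the brute-force dynamic programming approach (an
illustration, not the authors' implementation). The function
\texttt{sector\_error(i, j)} is a hypothetical stand-in that learns a
model from $D_{i,j}$ and returns its validation error; \texttt{Q[i][j]}
plays the role of $Q(P_{i,j})$.

```python
def best_partition(D, sector_error):
    """Brute-force DP over all sectors D[i..j] (0-based, inclusive).

    sector_error(i, j) is a hypothetical stand-in: it should learn a
    model from D[i..j] and return that single cluster's validation
    error.
    """
    n = len(D)
    Q = [[0.0] * n for _ in range(n)]     # Q[i][j]: best error of D[i..j]
    cut = [[None] * n for _ in range(n)]  # split point realizing Q[i][j]
    for length in range(1, n + 1):        # shorter sequences first
        for i in range(n - length + 1):
            j = i + length - 1
            best, where = sector_error(i, j), None   # one single cluster
            for k in range(i, j):                    # or split after k
                cand = ((k - i + 1) * Q[i][k] + (j - k) * Q[k + 1][j]) / length
                if cand < best:
                    best, where = cand, k
            Q[i][j], cut[i][j] = best, where

    def sectors(i, j):                    # backtrack to recover partition
        k = cut[i][j]
        return [(i, j)] if k is None else sectors(i, k) + sectors(k + 1, j)

    return Q[0][n - 1], sectors(0, n - 1)
```

The triple loop makes the cubic cost visible: $\Theta(n^2)$ cells,
each scanned over $O(n)$ split points, on top of the model-learning
cost hidden inside \texttt{sector\_error}.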

In terms of complexity, computing $Q(P_{i,j})$ takes at least
$\Omega(j-i)$ time, and there are $\Theta(n^2)$ sub-problems in
total, so the cost of computing the global partition $P_{1,n}$ is
$\Omega(n^3)$. Clearly, this complexity is too high for data streams
of large volume. The rest of this section develops a
near-linear-time algorithm for the same problem.

\subsection{Candidate models and designated models}

The algorithm described above finds the optimal partition with the
minimum error, but it does not scale to data of large volume, as it
learns $O(n^2)$ models from the $O(n^2)$ sequences $D_{i,j}$. We
want to reduce the number of models that are learned. Among the
$O(n^2)$ sequences, many overlap heavily (e.g., $D_{i,j}$ and
$D_{i,j+1}$), so many of the learned models resemble one another.
Instead of learning $O(n^2)$ models, we therefore learn a smaller
number of models, called {\it candidate models}, from pre-determined
data sectors. For any data sector $D_{i,j}$, we choose a candidate
model, called the {\it designated model} of $D_{i,j}$, to
approximate the exact model learned from $D_{i,j}$. The optimal
partition is thus approximated by minimizing the error of the
designated models instead of the exact models.

In the following, we will show how candidate models are generated, and
how designated models are selected.

\subsubsection{Candidate models}

We learn candidate models from pre-determined data sectors in the
stream. Which data sectors should we choose? The candidate models
are meant to approximate the exact models learned from arbitrary
data sectors $D_{i,j}$. Given $D_{i,j}$, we want a candidate model
learned from a data sector as similar to $D_{i,j}$ as possible.
Thus, the pre-determined data sectors should have the largest
possible variety, so that we can always find a good approximation
for any $D_{i,j}$.

Next, we propose a method to generate pre-determined data sectors,
then we study how well the models learned from these generated data
sectors approximate the exact models.

To ensure the pre-determined sectors have good coverage and variety,
we generate them using two parameters: i) $\gamma$, which controls
the variety of sector sizes, and ii) $p$, which controls the degree
of overlap between two sectors of the same size. Specifically,
sectors are organized into a hierarchy. Sectors on the same level
$k$ have the same size $b_k$. Sectors on level 0 have size $b_0=1$,
and sectors on level $k+1$ are $\gamma$ times as large as sectors on
level $k$, that is, $b_{k+1} = \gamma \cdot b_k$. Furthermore,
sectors on the same level may overlap with each other: the gap
between the left ends of two neighboring sectors is $p \cdot b_k$.
Clearly, if $p < 1$ then two neighboring sectors
overlap.\footnote{More rigorously, since $\gamma$ can take any real
value $>1$ and $p$ any real value $>0$, we let
$b_{k+1}=\max(\lfloor\gamma \cdot b_k\rfloor,b_k+1)$, and the gap is
$\max(\lfloor p\cdot b_k\rfloor,1)$.}
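As an illustrative sketch (not the authors' code), the following
Python function enumerates the pre-determined sectors for a stream
of length $n$ using the rounding rules from the footnote; positions
are 0-based and inclusive.

```python
def predetermined_sectors(n, gamma=2.0, p=1.0):
    """Enumerate pre-determined sectors as (start, end, level) tuples.

    Follows the footnote's integer rounding:
      b_{k+1} = max(floor(gamma * b_k), b_k + 1)
      gap     = max(floor(p * b_k), 1)
    """
    sectors = []
    b, level = 1, 0
    while b <= n:
        gap = max(int(p * b), 1)
        for start in range(0, n - b + 1, gap):
            sectors.append((start, start + b - 1, level))
        b = max(int(gamma * b), b + 1)
        level += 1
    return sectors
```

For example, with $n=8$, $\gamma=2$, and $p=1$ this yields 8 sectors
of size 1, 4 of size 2, 2 of size 4, and 1 of size 8, matching the
hierarchy of Fig.~\ref{fig:svem:candidatemodels}.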

\begin{figure}[!t]
    \centering
    \includegraphics[width=\columnwidth]{SVEM/CandidateModels.eps}
    \caption{Candidate models with $\gamma=2$ and $p=1$}
    \label{fig:svem:candidatemodels}
    \includegraphics[width=\columnwidth]{SVEM/DesignatedModels.eps}
    \caption{Designated models ($\gamma = 2$ and $p=1$)}
    \label{fig:svem:designatedmodels}
\end{figure}

Let $M_i^k$ denote the $i$-th candidate model on level $k$. As an
example, with $\gamma=2$ and $p=1$, the pre-determined data sectors
are generated as shown in Fig.~\ref{fig:svem:candidatemodels}.


\subsubsection{Designated models}

For any data sequence $D_{i,j}$, we want to designate a model
learned from a pre-determined data sector to approximate the exact
model learned from $D_{i,j}$.

\begin{definition}[Designated model of $D_{i,j}$]
Given any data sector $D_{i,j}$, the designated model of $D_{i,j}$,
denoted as $H(D_{i,j})$, is the candidate model learned from the
biggest pre-determined sector that is completely contained in
$D_{i,j}$. (If there is a tie, we select the pre-determined sector
appearing later in the stream.)
\end{definition}

Thus, the designated model is learned from a sample of the data that
produces the exact model. The larger the sample, the better the
approximation, which is why we choose the biggest pre-determined
sector. As an example, in Fig.~\ref{fig:svem:candidatemodels}, the
designated model of $D_{4,12}$ is $M_3^2$, i.e.,
$H(D_{4,12})=M_3^2$: the sector of $M_3^2$ is completely covered by
$D_{4,12}$, is bigger than those of $M_3^1,M_4^1,\ldots,M_6^1$, and
appears later in the stream than that of $M_2^2$.
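The selection rule can be sketched in a few lines of Python. The
function below is an illustration; it assumes the pre-determined
sectors are given as (start, end, level) tuples with 0-based,
inclusive positions, and returns the sector whose candidate model is
designated.

```python
def designated_sector(i, j, sectors):
    """Pick the designated model's sector for D[i..j]: the biggest
    pre-determined sector completely contained in [i, j], ties broken
    in favor of the sector appearing later in the stream.

    sectors: list of (start, end, level) tuples.
    Returns None if no pre-determined sector fits inside [i, j].
    """
    inside = [(s, e, lv) for (s, e, lv) in sectors if i <= s and e <= j]
    if not inside:
        return None
    # primary key: sector size; secondary key: later start position
    return max(inside, key=lambda t: (t[1] - t[0], t[0]))
```

Because only containment and size are examined, the rule never needs
to touch the data itself once the candidate sectors are known.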

\subsection{Approximation}

After substituting designated models for exact models, we estimate
the quality of a partition $P$ by:
\begin{equation}
\hat{Q}(P)=\frac{1}{|D|}\sum_{D_{i,j} \in P} |D_{i,j}| \cdot ( \VE(H(D_{i,j}),D_{i,j}) + \delta)
\label{eq:svem}
\end{equation}
where $\VE(M,D)$ gives the error of model $M$ when validated on data
$D$. We require $\VE(\cdot,\cdot)$ to have the following properties:
\begin{gather*}
    \lim_{|T|/|S|\rightarrow 0}\VE(M_S,S\cup T)=Err(M_{S\cup T})\\
    \VE(M,D)=\frac{1}{|D|}\sum_{x\in D}\VE(M,\{x\})
\end{gather*}
Intuitively, $\VE(M_S,D)$ estimates the error of $D$'s exact model
when we have learned a model from only a sample $S$ of $D$, and
additivity over individual data items lets us compute validation
errors incrementally later on. The design of $\VE(\cdot,\cdot)$
depends on the application-specific $Err(\cdot)$ and is often not
difficult. For example, if $M_S$ is a classifier and $Err(M_S)$
denotes the cross-validation error of $M_S$, then $\VE(M_S,D)$ can
be defined as the classification error of $M_S$ on $D$.
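As a toy illustration of this choice of $\VE$ (an assumption for the
sketch, not the paper's experimental setup), the "model" below
simply predicts the majority label of its training sample, and
$\VE$ is its misclassification rate on labeled data.

```python
def learn_majority(sample):
    """Toy stand-in for a classifier: predict the sample's majority
    label. `sample` is a list of (features, label) pairs."""
    labels = [y for _, y in sample]
    return max(set(labels), key=labels.count)

def VE(predicted_label, data):
    """Validation error of a fixed prediction on labeled data: the
    fraction of items whose label differs from the prediction.
    Trivially additive over individual data items."""
    return sum(1 for _, y in data if y != predicted_label) / len(data)
```

Since the error is an average over items, the additivity property
$\VE(M,D)=\frac{1}{|D|}\sum_{x\in D}\VE(M,\{x\})$ holds by
construction.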

Fig.~\ref{fig:svem:designatedmodels} gives an example of the
designated models used in evaluating the quality of two different
partitions. The designated model of $D_{i,j}$ is learned from a
subset, i.e., a sample, of $D_{i,j}$. Clearly, the larger the
sample, the more likely the designated model is a good approximation
of the exact model learned from $D_{i,j}$. We formalize the
approximation factor in the following definition.



\begin{definition}[$\delta$-approximate] A designated model
$H(D_{i,j})$ of the data sector $D_{i,j}$ is $\delta$-approximate
iff $|H(D_{i,j})|\geq\delta\cdot|D_{i,j}|$, where $|H(D_{i,j})|$ is
the size of the sector from which $H(D_{i,j})$ is learned.
\end{definition}

The parameters $\gamma$ and $p$ jointly control the lower bound on
the approximation factor of the designated models for arbitrary data
sectors, as well as the total number of candidate models to learn.
The following theorem gives an exact lower bound on the
approximation factor for fixed values of $\gamma$ and $p$.

\begin{theorem}[Lower bound]
Any data sector $D_{i,j}$ has at least a
$\frac{1}{\gamma(1+p)}$-approximate designated model.
\end{theorem}

\begin{proof}
For any data sector $D_{i,j}$, let $k$ be the lowest level in the
candidate sector hierarchy satisfying $b_k \ge
\frac{|D_{i,j}|}{\gamma(1+p)}$. By the minimality of $k$, $b_{k-1} <
\frac{|D_{i,j}|}{\gamma(1+p)}$. Since $b_k = \gamma b_{k-1}$, we
have $b_k\leq\frac{|D_{i,j}|}{1+p}$. Consider the first candidate
sector on level $k$ that does not cover any data preceding
$D_{i,j}$. Its beginning position is at most $i+p\cdot b_k$, so its
ending position is at most $i+(1+p)\cdot
b_k-1\leq i+|D_{i,j}|-1=j$. Therefore, this candidate sector is
completely inside $D_{i,j}$. Since $b_k \ge
\frac{|D_{i,j}|}{\gamma(1+p)}$, the designated model $H(D_{i,j})$ is
$\frac{1}{\gamma(1+p)}$-approximate.
\end{proof}
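The bound can also be checked numerically. The sketch below (an
illustration, not part of the algorithm) enumerates the
pre-determined sectors for a small stream using the footnote's
rounding rules, and verifies that every sector $D_{i,j}$ completely
contains a pre-determined sector of size at least
$\frac{|D_{i,j}|}{\gamma(1+p)}$.

```python
def check_lower_bound(n=40, gamma=2.0, p=1.0):
    """Exhaustively verify the approximation lower bound on a stream
    of length n: every window [i, j] must contain a pre-determined
    sector covering at least a 1/(gamma*(1+p)) fraction of it."""
    # enumerate pre-determined sectors per the footnote's rounding
    sectors, b = [], 1
    while b <= n:
        gap = max(int(p * b), 1)
        sectors += [(s, s + b - 1) for s in range(0, n - b + 1, gap)]
        b = max(int(gamma * b), b + 1)
    bound = 1.0 / (gamma * (1 + p))
    for i in range(n):
        for j in range(i, n):
            biggest = max((e - s + 1 for s, e in sectors
                           if i <= s and e <= j), default=0)
            assert biggest >= bound * (j - i + 1), (i, j)
    return True
```

With $\gamma=2$ and $p=1$, for instance, every window contains a
candidate sector covering at least a quarter of it, as the theorem
promises.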

Given the lower bound on the approximation factor, is finding a
partition that minimizes $\hat{Q}$ in Eq.~\ref{eq:svem} equivalent
to finding a partition that minimizes $Q$?

The answer is yes, to a certain extent. First, if the optimal
partition $P=\arg\min_P Q(P)$ is consistent with the true
occurrences of models, then each sector $D_{i,j}$ of $P$ is
homogeneous, i.e., generated by a single model. Since the designated
model $H(D_{i,j})$ is learned from a sample of $D_{i,j}$, it should
be similar to the exact model $M$ learned from the entire
$D_{i,j}$. In other words,
$\VE(H(D_{i,j}),D_{i,j})\approx\VE(M,D_{i,j})=Err(M)$ and
$\hat{Q}(P)\approx Q(P)$.

Second, for a partition $P$ not consistent with the true history,
some sectors may contain conflicting models. For such a sector
$D_{i,j}$, its designated model either agrees with the overall
distribution of $D_{i,j}$, or is learned from a biased sample of
$D_{i,j}$, which leads to a larger validation error. In both cases,
the validation error will not be smaller, and thus $\hat{Q}(P)\geq
Q(P)$.

In conclusion, we have $\hat{Q}(P)\geq Q(P)$ for all partitions, and
$\hat{Q}(P)\approx Q(P)$ for the optimal partition $P$ that minimizes
$Q(P)$. Therefore, $\hat{Q}$ is a reasonable substitute for $Q$.

\subsection{Finding the best partition}

The Sequential Clustering algorithm uses a dynamic programming
approach to find an approximately optimal partition. Specifically,
our goal is to find a partition $P_{1,n}$ that minimizes $\hat Q
(P_{1,n})$ in Eq.~\ref{eq:svem}. Unlike the brute-force dynamic
programming solution, we use candidate models instead of exact
models when computing the partition error. Furthermore, since the
candidate models are organized into levels, we divide our problem
into sub-problems by size and level.

\begin{figure}[!h]
    \centering
    \includegraphics[width=\columnwidth]{SVEM/FindPartition.eps}
    \caption{Derive the best partition of $D_{1,10}$ from that of $D_{1,9}$}
    \label{fig:svem:findpartition}
\end{figure}

Let us first simplify notation by writing $\hat Q_i$ for $\hat Q
(P_{1,i})$. Our goal is thus to find $\hat Q_n$. Consider the last
sector of partition $P_{1,i}$: if the designated model of that
sector is on level $k$, we denote the partition by
$P_i^k$. Furthermore, we use $L_i^k$ to denote the last candidate
model on level $k$ up to position $i$, that is, the last level-$k$
model learned from data in $D_{1,i}$. Finally, we slightly abuse
notation and write $\hat Q_i^k$ for the error of partition
$P_i^k$. Now we have:

$$\hat Q_i=
\begin{cases}
    0 & i=0 \\
    \min_k \hat Q_i^k & i>0
\end{cases}
$$

Thus, we break the problem of computing $\hat Q_n$ into sub-problems
$\hat Q_i$ and $\hat Q_i^k$, for all $i,k$. With dynamic
programming, we compute and memorize the solutions to all
sub-problems. Before going into the details, we first show two
properties of the problem structure:


\begin{enumerate}
\item The designated model of the last sector of partition $P_i^k$ is
  $L_i^k$.  In Fig.~\ref{fig:svem:findpartition}, we show $L_9^0$,
  $L_9^1$, $L_9^2$, and $L_9^3$, which are the last candidate models
  (on levels 0, 1, 2, and 3 respectively) learned from data in
  $D_{1,9}$.

\item

  Let $x$ be the starting position of the last sector in partition
  $P_i^k$. When $x$ is given, the best partition of sequence
  $D_{1,x-1}$ does not depend on sequence $D_{x,i}$, i.e., we can
  partition the two sequences independently.
\end{enumerate}

We next show how to derive $P_i^k$. Note that the designated model
of the last sector in $P_i^k$ must be on level $k$. We consider two
cases:

\begin{enumerate}
\item
When $L_i^k=L_{i-1}^k$.  In Figure~\ref{fig:svem:findpartition}, for
example, we have $L_{10}^3 = L_9^3 = M_1^3$ and
$L_{10}^2=L_9^2=M_2^2$.  Since the designated model of the last
sector does not change, no new sector will be created. Partition
$P_i^k$ simply extends $P_{i-1}^k$ by adding $d_i$ to its last
sector.  Accordingly, since $\VE(\cdot,\cdot)$ is additive, $\hat
Q_i^k$ can be derived from $\hat Q_{i-1}^k$ as follows:
$$i \cdot \hat Q_i^k=(i-1) \cdot \hat Q_{i-1}^k+\VE(L_i^k,\{d_i\})$$

\item
When $L_i^k\neq L_{i-1}^k$. In Figure~\ref{fig:svem:findpartition},
for example, we have $L_{10}^0 \neq L_9^0$, and $L_{10}^1 \neq
L_9^1$.  In this case, we must find the starting position of the
last sector that minimizes $\hat Q_i^k$.  The starting position $x$
ranges from $i-b_k+1$ backwards to the leftmost position $i'$ where
$H(D_{i',i})=L_i^k$ still holds.  For each $x$, $P_i^k$ extends
$P_{1,x-1}$ by creating a new sector $D_{x,i}$ in the partition.
Accordingly, $\hat Q_i^k$ can be derived as follows:
\[
i \cdot \hat Q_i^k=\min_{x:H(D_{x,i})=L_i^k} (x-1) \cdot \hat
Q_{x-1}+ (i-x+1) \cdot \VE(L_i^k,D_{x,i})
\]
Computing $\VE(L_i^k,D_{x,i})$ from scratch takes linear time, so
recomputing it for every position $x$ would be prohibitively
expensive. Fortunately, because
\begin{align*}
    &(i-x+1) \cdot \VE(L_i^k,D_{x,i})=\sum_{j=x}^i\VE(L_i^k,\{d_j\})\\
    =&\VE(L_i^k,\{d_x\}) + (i-x) \cdot \VE(L_i^k,D_{x+1,i})
\end{align*}
the value of $\VE(L_i^k,D_{x,i})$ can be incrementally updated
while we move the position $x$ backwards.
\end{enumerate}
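The backward scan in case 2 can be sketched as follows. The helpers
are hypothetical: \texttt{ve\_item(j)} returns
$\VE(L_i^k,\{d_j\})$ for the fixed level-$k$ candidate model, and
\texttt{Qhat[x]} holds the already-computed best error of the
length-$x$ prefix (0-based positions, with $\hat Q_0=0$).

```python
def best_split_for_level(i, x_min, ve_item, Qhat):
    """Scan candidate starts x = i, i-1, ..., x_min of the last sector
    D[x..i] (case 2 in the text), maintaining the running sum
    sum_{j=x..i} VE(L, {d_j}) so each step costs O(1).

    ve_item(j) -- per-item error VE(L_i^k, {d_j}) of the fixed
                  level-k candidate model (hypothetical helper)
    Qhat[x]    -- best normalized error of the prefix D[0..x-1];
                  Qhat[0] = 0
    Returns (best normalized error over D[0..i], best start x)."""
    suffix = 0.0                          # (i - x + 1) * VE(L, D[x..i])
    best, best_x = float('inf'), None
    for x in range(i, x_min - 1, -1):
        suffix += ve_item(x)              # incremental update
        total = x * Qhat[x] + suffix      # weighted error of D[0..i]
        if total < best:
            best, best_x = total, x
    return best / (i + 1), best_x
```

Each candidate start is examined with constant extra work, which is
exactly what the additivity of $\VE(\cdot,\cdot)$ buys us.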



\subsection{Complexity analysis}

The Sequential Clustering algorithm consists of two parts: learning
all candidate models, and finding the best partition. We show that,
combined, they take near-linear time in total.

First, we present a lemma to assist our analysis.

\begin{lemma}
Suppose learning a model from $n$ training data items takes $T(n)$
time. If the learning time is at least linear and superadditive,
e.g., $T(n)=\Theta(n^c)$ with $c\geq 1$, then $k\cdot
T(\frac{n}{k})=O(T(n))$; that is, learning $k$ models, each from a
dataset of size $\frac{n}{k}$, is asymptotically no slower than
learning a single model from a dataset of size $n$.
\end{lemma}

With the help of this lemma, we can derive the total time for
learning all candidate models.

\begin{theorem}
  Learning all candidate models from a data stream of size $n$ takes
  at most $O(\frac{\ln n}{p\ln\gamma}T(n))$ time.
\label{thm:svem:time}
\end{theorem}
\begin{proof}
On each level $k$ we learn $\frac{n}{pb_k}$ models, each from a
sector of size $b_k$. According to the lemma above,
$\frac{n}{pb_k}T(b_k)=O(\frac{1}{p}T(n))$, and because there are
$\log_\gamma n=\frac{\ln n}{\ln\gamma}$ levels, the total learning
time is at most $O(\frac{\ln n}{p\ln\gamma}T(n))$.
\end{proof}

\begin{theorem}
If training a single model has time complexity $O(n^c)$ with $c>1$
(e.g., some variants of SVM), then learning all candidate models
takes at most $O(\frac{n^c}{p\ln\gamma})$ time.
\end{theorem}
\begin{proof}
Let $A_k$ denote the total time for learning all candidate models on
level $k$, and let $K=\lfloor\log_\gamma n\rfloor$. We have
$A_k=\frac{n}{pb_k}T(b_k)=O(\frac{n\cdot b_k^{c-1}}{p})$. Since
$b_k=\gamma^k$, we get $A_k=\gamma^{c-1}A_{k-1}$. Hence,
$A_K,A_{K-1},A_{K-2},\dots,A_0$ form a geometric series with ratio
$\gamma^{-(c-1)}<1$, and the total learning time is
\begin{align*}
\sum_{k=0}^KA_k=&\frac{1-\gamma^{-(c-1)(K+1)}}{1-\gamma^{-(c-1)}}O(A_K)\leq
\frac{\gamma^{c-1}}{\gamma^{c-1}-1}O\left(\frac{n\cdot b_K^{c-1}}{p}\right)\\
=&O\left(\frac{1}{\ln\gamma}\cdot\frac{n^c}{p}\right)=O\left(\frac{n^c}{p\ln\gamma}\right)
\end{align*}
where the last line uses $b_K\leq n$ and
$\frac{\gamma^{c-1}}{\gamma^{c-1}-1}=O(\frac{1}{\ln\gamma})$ for any
fixed $c>1$.
\end{proof}

Theorem~\ref{thm:svem:time} reveals that, for fixed $\gamma$ and
$p$, learning all candidate models adds only a logarithmic factor
compared with learning a single model from the entire data stream.
After learning the candidate models, we find the best partition via
the dynamic programming approach. In this second part, since all
models have already been learned, the most time-consuming
computation is evaluating validation errors by testing each model on
a portion of the data. Each candidate model on level $k$ is only
tested on data around its learning sector, in quantity proportional
to $b_k$. Therefore, the total time of the second part is
$O(\frac{n\ln n}{p\ln\gamma}t)$, where $t$ denotes the time for
testing a model on a single data item. Usually, the time for
learning candidate models dominates the total running time of our
algorithm.


The extra space cost of our algorithm is small. If we process the
data in time order, learn candidate models only when they are about
to be used, and discard them once they have expired, then at any
time only $\log_\gamma n$ candidate models need to be kept in
memory. This strategy also turns Sequential Clustering into an
incremental algorithm that can process streaming data dynamically
and maintain the current best partition.
