\section{Two Step Clustering}
\label{sec:svem}

We adopt a two-step clustering approach. In the first step (Sequential
Clustering), we partition the data stream into a number of segments.
Each segment corresponds to a single occurrence of a concept.  In the
second step (Concept Clustering), we merge the segments into clusters
so that each cluster corresponds to a distinct concept.

We adopt a two-step approach, instead of performing Step 2
(Concept Clustering) directly on the original data, for efficiency
reasons.  For every pair of segments, Step 2 must compute the
likelihood that they belong to the same cluster.  If Step 1 is
skipped, the number of segments equals the number of data records
(each segment consists of a single record), so the number of pairs to
consider becomes prohibitively large.  To reduce the complexity,
consider the underlying data generating mechanism.  As long as it
stays in one state, it generates data belonging to the same
cluster/concept, until it switches to another state.  Hence,
neighboring records are more likely to belong to the same concept than
non-neighboring ones.  Instead of deciding whether any two records
should be in the same cluster, Step~1 only decides whether two
neighboring records belong to the same state, which is far less
costly.  The number of data segments produced by Step 1 is usually
orders of magnitude smaller than the number of original data records,
which in turn reduces the cost of Step 2.



\subsection{Sequential Clustering}
%% We introduce the Sequential Clustering algorithm that partitions a
%% data stream into a number of continuous data segments, each
%% corresponding to a single occurrence of a certain
%% concept. %The process takes near linear time only.

%% In this section, we introduce an efficient algorithm that

%% In concept-changing data streams, the class distribution, or the
%% hidden concept generating streaming data, changes over time. Such
%% data stream can be broken into many slices, and each slice refers to
%% an episode that corresponds to a hidden concept. A fundamental
%% problem is that, given a historical data stream, finds the partition
%% of consecutive data by their hidden concepts.

%% However, because the concepts depending on the hidden contexts are
%% not given explicitly, and stochastic factors may take part in the
%% production of data drawn from concepts, there is no determinate
%% method that could verify whether a partition is correct or not.
%% Instead, we measure the quality of a partition by its mean
%% validation error, which indicates how well the partition models the
%% data. Specifically, given a stream $D=\{d_1,d_2\dots d_n\}$, and a
%% partition $P=\{D_1,D_2\dots D_m|D_i\subset D\}$, the validation
%% error of $P$ is $Q(P)=\frac{1}{|D|}\sum_i|D_i|\VE(D_i,D_i)$. We will
%% try to find a partition $P$ that minimizes $Q(P)$.

%% The criterion using the validation error comes from the intuition
%% that a partition consistent with the true history will have smaller
%% validation error. Improper partition either produces data block that
%% contains conflicting concepts, or breaks a continuous episode of a
%% concept into several slices. If a data block contains conflicting
%% concepts, the model built upon the data will have lower
%% classification accuracy; and if an continuous episode is partitioned
%% into several blocks, each block will hold fewer training data and
%% thus produce models of larger overfitting error.

%% \subsection{Overview}

\begin{figure}
    \centering
    \includegraphics[width=\columnwidth]{SVEM/Stream.eps}
    \caption{Concept-changing data stream}
    \label{fig:svem:stream}
\end{figure}

Fig.~\ref{fig:svem:stream} gives an example of a data stream with
changing concepts. Each circle denotes a labeled record
$d_i=(x_i,c_i)$, where $x_i$ is represented by the direction of the
arrow and $c_i$ by its color. The black/white pattern inside the
circle denotes the unknown hidden concept that governs data
generation. The data in this example is generated by three different
concepts, and the concepts change and recur over time.  Partition
$P_1$ shown in the figure is improper, and it is easy to see that it
has a large validation error: $D_1$ and $D_2$, the first and second
segments of $P_1$, contain conflicting concepts, and the last segment
$D_3$ has too little data to learn a good model. A better partition
$P_2$ is also shown; it is closer to the true occurrences of concepts
in the stream.

We show that the sequential clustering problem has optimal
substructure, and is thus amenable to a dynamic programming solution.
Let $D=\{d_1,\cdots,d_n\}$ denote a data stream, $D_{i,j}$ the segment
of $D$ from $d_i$ to $d_j$, and $P_{i,j}$ the optimal partition of
$D_{i,j}$, that is, the partition whose segment models have the
minimum validation error $Q(P_{i,j})$.  Our task is to find $P_{1,n}$,
the optimal partition of the entire stream.

Assume we know $P_{1,k}$ and $P_{k+1,n}$ for every $k$, $1\leq k<n$,
and consider $P_{1,n}$. There are two cases.  Either $P_{1,n}$
contains no sub-partition, i.e., the entire sequence $D_{1,n}$ forms a
single cluster, or it is the union of two partitions $P_{1,k}$ and
$P_{k+1,n}$ for some $k$. In the first case, we learn a model on
$D_{1,n}$, and estimate the validation error of the model directly. In
the second case, the validation error is given by
$$Q(P_{1,n})=\min_{1\leq k<n} \left\{ \frac{k}{n}Q(P_{1,k})+\frac{n-k}{n}Q(P_{k+1,n}) \right\}$$
The case with the smaller error yields the optimal partition.

%%  of data stream
%% $D_1^n=\{d_1\dots d_n\}$ and minimizes the validation error
%% $Q(P_1^n)$,
%% we first find optimal partitions $P_1^k$ and $P_{k+1}^n$ for data
%% sequences $D_1^k$ and $D_{k+1}^n$, $1\leq k<n$.  The global partition
%% is the partition that contains a pair of partitions $P_1^k$ and
%% $P_{k+1}^n$ that minimizes
%% $$Q(P_1^n)=\frac{k}{n}Q(P_1^k)+\frac{n-k}{n}Q(P_{k+1}^n)$$

Clearly, finding the best partition for $D_{1,k}$ or $D_{k+1,n}$ is a
sub-problem with the same structure. To solve these sub-problems, we
recursively solve the sub-sub-problems that find partitions for
$D_{i,j}$, $1\leq i\leq j\leq n$, until we reach the base case $i=j$.
Note that each sub-problem needs to be solved only once, as solutions
to sub-problems already solved are memoized and reused. With a
bottom-up method, we find the best partition $P_{i,j}$ for each
sequence $D_{i,j}$, from shorter sequences to longer ones. The
intermediate results $Q(P_{i,j})$ are reused by subsequent
computations. By storing backtracking pointers, the corresponding
optimal partition $P_{1,n}$ can be recovered easily.
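To make the recursion concrete, the following is a minimal Python
sketch of the naive bottom-up dynamic program described above. The
\texttt{validation\_error} callback is a hypothetical stand-in for
learning a model on a segment and validating it on that segment; the
table \texttt{Q} stores the memoized values $Q(P_{i,j})$, and the
backtracking pointers recover $P_{1,n}$.

```python
def sequential_clustering(data, validation_error):
    """Naive O(n^3) dynamic program for the optimal partition.

    validation_error(i, j) is assumed to return the validation error
    of a model learned from segment data[i..j] (0-based, inclusive)
    and validated on that same segment.
    """
    n = len(data)
    # Q[i][j]: minimum weighted validation error over partitions of data[i..j]
    Q = [[0.0] * n for _ in range(n)]
    # back[i][j]: split point k of the best partition; None if one cluster
    back = [[None] * n for _ in range(n)]

    for length in range(1, n + 1):            # bottom-up: shorter segments first
        for i in range(n - length + 1):
            j = i + length - 1
            # Case 1: the entire segment D_{i,j} forms a single cluster.
            best, split = validation_error(i, j), None
            # Case 2: union of two optimal sub-partitions at split point k.
            for k in range(i, j):
                cand = ((k - i + 1) * Q[i][k] + (j - k) * Q[k + 1][j]) / length
                if cand < best:
                    best, split = cand, k
            Q[i][j], back[i][j] = best, split

    def recover(i, j):
        """Recover segment boundaries from the backtracking pointers."""
        if back[i][j] is None:
            return [(i, j)]
        k = back[i][j]
        return recover(i, k) + recover(k + 1, j)

    return Q[0][n - 1], recover(0, n - 1)
```

As a toy illustration, take a stream of labels \texttt{[0,0,0,1,1,1]}
with a made-up validation error (majority-class error plus a small
penalty for short segments, mimicking overfitting): the algorithm then
splits the stream at the concept boundary into segments
$D_{1,3}$ and $D_{4,6}$.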

Since every $Q(P_{i,j})$ requires at least $\Omega(j-i)$ time to
compute, and there are $\Theta(n^2)$ sub-problems, the total time for
computing the global partition $P_{1,n}$ is $\Omega(n^3)$.  Clearly,
the naive dynamic programming approach is too expensive for data
streams of large volume. We have developed optimization techniques
that achieve nearly linear complexity~\cite{techreport}; due to space
restrictions, we omit them here.

%% The rest of the section introduces a
%% near linear time algorithm for the same problem.

%% \subsection{Candidate models and designated models}

%% The dynamic programming approach finds the optimal partition with
%% the minimum validation error, but not scalable to large size data.
%% To find the optimal partition, it learns $n^2$ models from the $n^2$
%% sequences $D_{i,j}$.

%% We want to reduce the number of models we need to learn. It is easy
%% to see that among the $n^2$ sequences, many are highly overlapping
%% (e.g., $D_{i,j}$ and $D_{i,j+1}$). Consequently, many models are
%% similar to each other because they are learned from similar data.
%% Instead of learning $n^2$ models, the Sequential Clustering
%% algorithm learns a small number of models from pre-determined data
%% segments. We call these models {\it candidate models}. For any data
%% segment $D_{i,j}$, we choose a candidate model to approximate the
%% exact model learned from $D_{i,j}$, which is called the {\it
%% designated
%%   model} of $D_{i,j}$.  The  optimal partition is approximated
%% %% found by the dynamic programming method
%% by minimizing the validation error of the designated models instead of
%% the exact models.

%% Before giving details of finding the best partition, we show how
%% candidate models are generated, and how designated models are
%% selected.

%% \begin{figure}
%%     \centering
%%     \includegraphics[width=\columnwidth]{SVEM/CandidateModels.eps}
%%     \caption{Candidate models with $\gamma=2$ and $p=1$}
%%     \label{fig:svem:candidatemodels}
%%     \includegraphics[width=\columnwidth]{SVEM/DesignatedModels.eps}
%%     \caption{Designated models ($\gamma = 2$ and $p=1$)}
%%     \label{fig:svem:designatedmodels}
%% \end{figure}

%% \ \\
%% \noindent{\bf Candidate models}\\
%% We learn candidate models from pre-determined data segments in the
%% stream.  What data segments should we choose? The candidate models are
%% for approximating exact models learned from any arbitrary data segment
%% $D_{i,j}$.  Given $D_{i,j}$, %is not a pre-determined data block, we would
%% we want to find a candidate model which is learned from a data
%% segment as close to $D_{i,j}$ as possible. Thus, in order to
%% approximate any $D_{i,j}$, we would like the pre-determined data
%% segments to have the largest variety, so that we can always find a
%% good approximate for an arbitrary $D_{i,j}$.

%% Following, we propose a method to generate pre-determined data
%% segments, then we study how well models learned from the generated
%% data segments approximate the exact models.

%% We want to ensure the pre-determined segments have good coverage and
%% variety. We generate segments using two parameters, i) $\gamma$,
%% which controls the variety of segment size, and ii) $p$, which
%% controls the degree of overlap between two segments of the same size.
%% Specifically, segments are organized into a hierarchy. Sectors on the
%% same level $k$ have the same size $b_k$.  Sectors on level 0 have
%% size $b_k=1$, and segments on level $k+1$ are $\gamma$ times larger
%% than segments on level $k$, that is, $b_{k+1} = \gamma \cdot b_k$.
%% Furthermore, on the same level, segments may overlap with each other.
%% %% Think of segments
%% %% on the same level are ordered by their left ends (starting position of
%% %% the segment).
%% The gap between the left ends of two neighboring segments is given by
%% $p \cdot b_k$. Clearly, if $p < 1$ then the two neighboring segments
%% overlap with each other.\footnote{More rigorously, since $\gamma$
%% can take any real value $>1$, and $p$ can take any real value $>0$,
%% $b_{k+1}=\max(\lfloor\gamma \cdot b_k\rfloor,b_k+1)$, and the gap is
%% $\max(\lfloor p\cdot b_k\rfloor,1)$.}


%% Let $M_i^k$ denote the $i$-th model on level $k$. As an example,
%% given $\gamma=2$ and $p=1$, the pre-determined data segments are
%% generated as shown in Fig.~\ref{fig:svem:candidatemodels}.

%% %%    of  In our design, we can think of the pre-determined data
%% %% segments as in multiple levels.  Data segments on the same level have the
%% %% same size, and data segments on level $k+1$ are $\gamma$ times larger
%% %% than data blocks on level $k$.  Within each level, data segments may
%% %% overlap each other.

%% %%  that have
%% %% different spreads over the stream. With the intuition that similar
%% %% training data produce similar models, our goal is to select such data
%% %% segments that the candidate models built upon them can be used to
%% %% approximate the class distribution of any data sequence that includes
%% %% a predetermined segment as a sample set, without spending additional
%% %% time on learning a new exact model for each sequence.
%% %% Fig.~\ref{fig:svem:candidatemodels} illustrates an example of how
%% %% candidate models spread over the stream.




%% %% The data segments supporting candidate models can have arbitrary
%% %% sizes, and the segments of the same size can be overlapping. We deploy
%% %% the segments by two parameters controlling the density: $\gamma>1$ the
%% %% amplification of the segment sizes, and $p>0$ the offset of the segment
%% %% positions. The segments can be hierarchically divided into levels by
%% %% size such that a $k$-level segment has size $b_k$, then the ratio of
%% %% $b_{k+1}$ to $b_k$ is $\gamma$, and the offset of positions between
%% %% two neighboring $k$-level segments is $p$ times $b_k$. More formally,
%% %% $b_0=1$ and $b_{k+1}=\max(\lfloor\gamma b_k\rfloor,b_k+1)$, and the
%% %% gap between the left ends of two neighboring $k$-level segments is
%% %% $\max(\lfloor pb_k\rfloor,1)$. For instance, if $\gamma=2$ and
%% %% $p=1$, we deploy the segments as in
%% %% Fig.~\ref{fig:svem:candidatemodels}.

%% \ \\
%% \noindent {\bf Designated models}\\
%% For any data sequence $D_{i,j}$, we want to use a model learned from
%% a pre-determined data segment to approximate the exact model learned
%% from $D_{i,j}$.

%% \vspace{.2cm}\begin{definition} (Designated model of $D_{i,j}$)
%% Given
%%   any data segment $D_{i,j}$, the designated model of $D_{i,j}$,
%%   denoted as $H(D_{i,j})$, is a candidate model learned from the
%%   biggest segment that is completely contained in $D_{i,j}$. (If there
%%   is a tie between two candidate models, we select the one learned
%%   from a segment appears later in the stream.)
%% \end{definition} \vspace{.2cm}

%% In other words, the designated model is learned by a sample of the
%% data from which the exact model is learned. The larger the sample,
%% the better the approximation, which is why we want to find the
%% biggest
%% segment. %% We call the approximate model the designated model of the data
%% %% sequence $D_{i,j}$, and we denote it as $H(D_{i,j})$.
%% As an example, as shown in Fig.~\ref{fig:svem:candidatemodels}, the
%% designated model of $D_{4,12}$ is $M_3^2$, i.e., $H(D_{4,12})=M_3^2$,
%% because $M_3^2$ is completely covered by $D_{4,12}$ and is bigger than
%% $M_3^1,M_4^1, \ldots, M_6^1$, and it also appears later than $M_2^2$.

%% \subsection{Approximation}

%% After substituting exact models with designated models, we estimate
%% the quality of a partition $P$ given in Eq~\ref{eq:vem} by:
%% \begin{equation}
%% \hat{Q}(P)=\frac{1}{|D|}\sum_{D_{i,j} \in P} |D_{i,j}| \cdot \VE(H(D_{i,j}),D_{i,j})
%% \label{eq:svem}
%% \end{equation}
%% Here, instead of validating $D_{i,j}$'s exact model (the model
%% learned from $D_{i,j}$) on $D_{i,j}$, we validate $D_{i,j}$'s
%% designated model on $D_{i,j}$.  Fig.~\ref{fig:svem:designatedmodels}
%% gives an example of the candidate models used in evaluating the
%% quality of two different partitions.

%% The designated model of $D_{i,j}$ is learned from a subset or a
%% sample of $D_{i,j}$.  Clearly, the larger the sample, the more
%% likely that the designated model is a good approximate of the exact
%% model, which is learned from $D_{i,j}$.

%% %% The fraction of data used to learn the designated model decides how
%% %% approximately the designated model represents the class distribution of a
%% %% data sequence. A designated model $H(D_i^j)$ is said $\delta$-approximate if
%% %% its corresponding data segment covers at least a fraction $\delta$ of
%% %% the data sequence $D_i^j$ (i.e., $|H(D_i^j)|\geq\delta\cdot|D_i^j|$).

%% \vspace{.2cm}\begin{definition} ($\delta$-approximate) A designated
%%   model $H(D_{i,j})$ of the data sequence $D_{i,j}$ is
%%   $\delta$-approximate iff $|H(D_{i,j})|\geq\delta\cdot|D_{i,j}|$,
%%   where $|H(D_{i,j})|$ is the size of the segment that $H(D_{i,j})$ is
%%   learned from.
%% \end{definition}
%% \vspace{.2cm}

%% The parameters $\gamma$ and $p$ jointly control the lower bound of
%% the approximation of the designated models for all data sequences,
%% and the total number of candidate models we have to learn.  The
%% following theorem tells how well the designated models approximate
%% the class distribution of the data sequences, given fixed value of
%% $\gamma$ and $p$.

%% \vspace{.2cm}
%% \begin{theorem} (Lower bound)   Any data sequence
%%   has at least a $\frac{1}{\gamma(1+p)}$-approximate designated model.
%% \end{theorem}

%% \begin{proof}
%%   For any data sequence $D_{i,j}$, let $k$ be the lowest level in the
%%   segment hierarchy that satisifies $b_k \ge
%%   \frac{|D_{i,j}|}{\gamma(1+p)}$.  In other words, $b_{k-1} \le
%%   \frac{|D_{i,j}|}{\gamma(1+p)}$. Since $b_k = \gamma b_{k-1}$, we
%%   have
%% %  $l=b_k$ denote the smallest segment size that is at least
%%   $b_k\leq\frac{|D_{i,j}|}{1+p}$. We find the first segment on level
%%   $k$ that does not cover any data that precedes $D_{i,j}$.  Because
%%   the beginning position of the segment is at most $i+p\cdot b_k$, the
%%   ending position is at most $i+(1+p)\cdot b_k-1\leq i+|D_{i,j}|-1=j$.
%%   Therefore, this segment is completely inside by $D_{i,j}$. Since $b_k
%%   \ge \frac{|D_{i,j}|}{\gamma(1+p)}$, the designated model
%% $H(D_{i,j})$ is $\frac{1}{\gamma(1+p)}$-approximate. %%  (although it may
%% %% learn from other later appearing segment of the same size).
%% \end{proof}
%% \vspace{.2cm}

%% Given the lower bound, % is then to find a partition $P$ that minimizes $\hat{Q}$.
%% is finding a partition that minimizes $\hat{Q}$ in Eq~\ref{eq:svem}
%% equivalent to finding a partition that minimizes $Q$?

%% The answer is yes to a certain extent.  First, if the optimal
%% partition $P=\arg\min_pQ(P)$ is consistent with the true occurrences
%% of concepts, then each segment $D_{i,j}$ of $P$ corresponds to an
%% episode of a hidden concept. Since the designated model $H(D_{i,j})$
%% %% used in $\hat{Q}$
%% is learned from a sample of $D_{i,j}$, %the data produced by a consistent concept,
%% it should be similar to the exact model $M$ learned from the entire
%% $D_{i,j}$.  In other words,
%% $\VE(H(D_{i,j}),D_{i,j})\approx\VE(M,D_{i,j})$ and $\hat{Q}(P)\approx
%% Q(P)$.

%% Second, for partitions $P$ not consistent with the true history,
%% some segments may contain conflicting concepts. For such a segment
%% $D_{i,j}$, its designated models either agrees with the overall
%% class distribution of $D_{i,j}$, or is learned from a bias sample of
%% $D_{i,j}$, which leads to larger validation error. In both cases,
%% the validation error might not be smaller, and thus $\hat{Q}(P)\geq
%% Q(P)$.

%% In conclusion, we have $\hat{Q}(P)\geq Q(P)$ for all partitions, and
%% $\hat{Q}(P)\approx Q(P)$ for the optimal partition $P$ that minimizes
%% $Q(P)$. Therefore, $\hat{Q}$ is a reasonable substitute for $Q$.

%% \subsection{Finding the best partition}

%% \begin{figure}
%%     \centering
%%     \includegraphics[width=\columnwidth]{SVEM/FindPartition.eps}
%%     \caption{Derive the best partition of $D_{1,10}$ from that of $D_{1,9}$}
%%     \label{fig:svem:findpartition}
%% \end{figure}

%% The Sequential Clustering algorithm is a dynamic programming approach
%% that finds the approximate optimal partition in near linear time.

%% Specifically, our goal is to find a partition $P_{1,n}$ that minimizes
%% $\hat Q (P_{1,n})$ in Eq~\ref{eq:svem}. Unlike the naive dynamic
%% programming solution, we are using candidate models instead of exact
%% models in computing validation error. Furthermore, since the candidate
%% models are on $k$-levels, we divide our problem into sub problems by
%% size and level.

%% Let us first simplify notation by using $\hat Q_i$ to denote $\hat Q
%% (P_{1,i})$.  Our goal is thus to find $\hat Q_n$. Consider the last
%% segment of partition $P_{1,i}$. If the designated model of the segment
%% is on level $k$, we denote that partition as $P_i^k$. Furthermore, we
%% use $L_i^k$ to denote the last candidate model on level $k$ (up to
%% position $i$, that is, the model is learned from data in $D_{1,i}$).
%% Finally, we abuse notation by using $\hat Q_i^k$ to denote the
%% validation error of partition $P_i^k$. Now, we have:

%% $$\hat Q_i=
%% \begin{cases}
%%     0 & i=0 \\
%%     \min_k \hat Q_i^k & i>0
%% \end{cases}\\
%% $$

%% Thus, we break the problem of $\hat Q_n$ to sub-problems $\hat Q_i$
%% and $\hat Q_i^k$, $\forall i,k$. With dynamic programming, we compute
%% and memorize solutions to all sub-problems. Before we go into the
%% details, we first show two properties of the problem structure:

%% %% Consider the prefix sequence $D_{1,i}$, for $1 \le i\le n$.  Let
%% %% $L_i^k$ be the last candidate model on level $k$ that is learned from
%% %% a data segment completely inside $D_{1,i}$.
%% %$$L_i^k=H(D_{x,i})\text { where } x=\min\{x|\text{$H(D_{x,i})$ is on level $k$}\} $$
%% %% For example, Fig.~\ref{fig:svem:findpartition} shows $L_9^0$, $L_9^1$,
%% %% $L_9^2$, and $L_9^3$, which are the last candidate models (on levels
%% %% 0, 1, 2, and 3 respectively) learned from data covered by $D_{1,9}$.
%% %% We show some properties of $L_i^k$ that are important to speed up the
%% %% partition algorithm:


%% \vspace*{.1cm}
%% \begin{enumerate}
%% \item The designated model of the last segment of partition $P_i^k$ is
%%   $L_i^k$.  In Fig.~\ref{fig:svem:findpartition}, we show $L_9^0$,
%%   $L_9^1$, $L_9^2$, and $L_9^3$, which are the last candidate models
%%   (on levels 0, 1, 2, and 3 respectively) learned from data in
%%   $D_{1,9}$.

%% \vspace*{.1cm}
%% \item

%%   Let $x$ be the starting position of the last segment in partition
%%   $P_i^k$. When $x$ is given, the best partition of sequence
%%   $D_{1,x-1}$ does not depend on sequence $D_{x,i}$, i.e., we can
%%   partition the two sequences independently.
%% \end{enumerate}
%% \vspace*{.1cm}

%% We next show how to derive $P_i^k$. Note that the designated model of
%% the last segment in the partition is on level $k$. %%  focus on from
%% %% partition $P_{i-1}^k$ or .  Accordingly, we need to compute the
%% %% validation error $Q_i^k$ of every best partition $P_i^k$ of $D_{1,i}$,
%% %% which subjects to the constraint that the designated model of the last
%% %% block is fixed to the $k$-level model $L_i^k$. We also compute
%% %% $Q_i=\min_kQ_i^k$ corresponding to the best partition $P_i$ of
%% %% $D_{1,i}$ without constraint. When computing $Q_i^k$,
%% We consider two cases:

%% \begin{enumerate}
%% \item When $L_i^k=L_{i-1}^k$.  In Figure~\ref{fig:svem:findpartition},
%%   for example, we have $L_{10}^3 = L_9^3 = M_1^3$ and
%%   $L_{10}^2=L_9^2=M_2^2$.  Since the designated model of the last
%%   segment does not change, no new segment will be created. Partition
%%   $P_i^k$ simply extends $P_{i-1}^k$ by adding $d_i$ to its last
%%   segment.  Accordingly, $\hat Q_i^k$ can be derived from $\hat
%%   Q_{i-1}^k$ as follows:
%%   $$i \cdot \hat Q_i^k=(i-1) \cdot \hat Q_{i-1}^k+\VE(L_i^k,\{d_i\})$$

%% \item When $L_i^k\neq L_{i-1}^k$. In
%%   Figure~\ref{fig:svem:findpartition}, for example, we have $L_{10}^0
%%   \neq L_9^0$, and $L_{10}^1 \neq L_9^1$.  In this case, we must find
%%   the starting position of the last segment that minimizes $\hat
%%   Q_i^k$.  The starting position $x$ ranges from $i-b_k+1$ backwards to the
%%   leftmost position $i'$ where $H(D_{i',i})=L_i^k$ still holds.  For
%%   each $x$, $P_i^k$ extends $P_{1,x-1}$ by creating a new segment
%%   $D_{x,i}$ in the partition.  Accordingly, $Q_i^k$ can be derived as
%%   follows:
%%   $$i \cdot \hat Q_i^k=\min_{x:H(D_{x,i})=L_i^k} (x-1) \cdot \hat Q_{x-1}+
%%   (i-x+1) \cdot \VE(L_i^k,D_{x,i}) $$
%%   As computing $\VE(L_i^k,D_{x,i})$ needs linear time, if we
%%   recompute $\VE(L_i^k,D_{x,i})$ for every position $x$, we
%%   will spend at least quadratic time to find the best position of $x$.
%%   Fortunately, because
%%   \begin{align*}
%%     &(i-x+1) \cdot \VE(L_i^k,D_{x,i})=\sum_{j=x}^i\VE(L_i^k,\{d_j\})\\
%%     =&\VE(L_i^k,\{d_x\}) + (i-x) \cdot \VE(L_i^k,D_{x+1,i})
%%   \end{align*}
%%   the value of $\VE(L_i^k,D_{x,i})$ can be incrementally updated
%%   while we move the position $x$ backwards.

%% \end{enumerate}

%% %% Above processes, demonstrated an example in
%% %% Fig.~\ref{fig:svem:findpartition}, are formalized as follows:
%% %% \begin{align}
%% %% L_i^k=&H(D_{x,i}):x=\min\{x|\text{$H(D_{x,i})$ is $k$-level}\}\\
%% %% Q_i=&
%% %% \begin{cases}
%% %%     0 & i=0 \\
%% %%     \min_kQ_i^k & i>0
%% %% \end{cases}\\
%% %% Q_0^k=&+\infty
%% %%     \intertext{when $i>0$ and $L_i^k=L_{i-1}^k$}
%% %% i \cdot Q_i^k=&(i-1) \cdot Q_{i-1}^k+\VE(L_i^k,\{d_i\})\\
%% %%     \intertext{when $i>0$ and $L_i^k\neq L_{i-1}^k$}
%% %% i \cdot Q_i^k=&\min_{x:H(D_{x,i})=L_i^k}
%% %%     (x-1) \cdot Q_{x-1}+\VE(L_i^k,D_{x,i})\\
%% %% =&\min_{x:H(D_{x,i})=L_i^k}
%% %%     (x-1) \cdot Q_{x-1}+\sum_{j=x}^i\VE(L_i^k,\{d_j\})\notag
%% %% \end{align}


%% \subsection{Complexity analysis}

%% The Sequential Clustering algorithm consists of two parts: learning
%% all candidate models, and finding the best partition. %% We analyze the
%% %% time complexity of our algorithm in this section, and reveal that it
%% We show that combined they
%% spend near linear time in total.

%% First of all, we present a lemma to assist our analysis.

%% \vspace*{.1cm}
%% \begin{lemma}
%% Suppose learning a model from $n$ training data needs $T(n)$ time.
%% If the learning time is super-linear, that is, $T(n)=\Omega(n)$,
%% then $k\cdot T(\frac{n}{k})=O(T(n))$, i.e., learning $k$ models each
%% from one dataset of size $\frac{n}{k}$ is asymptotically not slower
%% than learning a model from a dataset of size $n$.
%% \end{lemma}
%% \vspace*{.1cm}

%% With the help of this lemma, we can derive the total time for
%% learning all candidate models.


%% \vspace*{.1cm}
%% \begin{theorem}
%%   Learning all candidate models from a data stream of size $n$ takes
%%   at most $O(\frac{\ln n}{p\ln\gamma}T(n))$ time.
%% \label{thm:svem:time}
%% \end{theorem}
%% \begin{proof}
%% At each level $k$ we learn $\frac{n}{pb_k}$ models each from a
%% segment of size $b_k$. According to the aforementioned lemma,
%% $\frac{n}{pb_k}T(b_k)=O(\frac{1}{p}T(n))$, and because we have
%% $\ln_\gamma n=\frac{\ln n}{\ln\gamma}$ levels, the total learning
%% time is at most $O(\frac{\ln n}{p\ln\gamma}T(n))$.
%% \end{proof}
%% \vspace*{.1cm}

%% \vspace*{.1cm}
%% \begin{theorem}
%% If training a single model has time complexity of $O(n^c)$ (e.g.,
%% using some kinds of SVM), then learning all candidate models takes
%% time at most $O(\frac{n^c}{p\ln\gamma})$.
%% \end{theorem}
%% \begin{proof}
%% Let $A_k$ denote the total time for learning all candidate models at
%% level $k$, and $K$ denote $\lfloor\ln_\gamma n\rfloor$. We thus get
%% $A_k=\frac{n}{pb_k}T(b_k)=O(\frac{n^{c+1}}{pb_k})$. Since
%% $b_k=\gamma^k$, we get $A_k=\gamma A_{k-1}$. Hence,
%% $A_K,A_{K-1},A_{K-2},\dots,A_0$ forms geometric series, and the
%% total learning time is
%% \begin{align*}
%% \sum_{k=0}^KA_k=&\frac{1-(\frac{1}{\gamma})^{K+1}}{1-\frac{1}{\gamma}}O(A_K)\leq
%% \frac{\gamma}{\gamma-1}O(\frac{n^{c+1}}{p\gamma^K})\\
%% =&O(\frac{1}{\ln\gamma}\cdot\frac{n^c}{p})=O(\frac{n^c}{p\ln\gamma})
%% \end{align*}
%% %  According to the Master Theorem. Detail omitted for lack of space.
%% \end{proof}
%% \vspace*{.1cm}

%% Theorem~\ref{thm:svem:time} reveals that, given fixed $\gamma$ and
%% $p$, learning all candidate models adds only a logarithmic factor
%% compared with learning a single model from the entire data stream.
%% After learning the candidate models, we find the best partition via
%% dynamic programming approach. In the latter part, since all models
%% have been built, the most time consuming computation is evaluating
%% the validation errors by testing each model on a portion of data. It
%% is clear that each candidate model at level $k$ is only tested on
%% those data around its training segment, of quantity in proportion to
%% $b_k$. Therefore, the total time of the latter part is $O(\frac{n\ln
%% n}{p\ln\gamma}t)$, where $t$ denotes the time for testing a model on
%% a single data. Usually, the time learning all candidate models
%% dominates the total running time of our algorithm.

%% There is also an optimization consideration in practice. Candidates
%% models learned from very small data segments (e.g., candidate models
%% on level 0) may not be useful, as concept occurrence usually lasts
%% for a number of records. Similarly, candidate models learned from
%% very big data segments (e.g., candidate models on the top level),
%% which costs considerable learning time and spans over several
%% episodes, may not be useful either. Thus, we can set thresholds,
%% either straightforward or heuristic, to avoid learning candidate
%% models from data segments either too big or too small.

%% Finally, the extra space cost of our algorithm is also small,
%% because at any time only $\ln_\gamma n$ candidate models have to be
%% retained for future use. Sequential Clustering is also an
%% incremental algorithm that is capable of processing streaming data
%% dynamically.

%% The naive dynamic programming approach of at least $O(n^3)$ time, then
%% we give the details of the near linear time Sequential Clustering
%% algorithm.
%% The process takes near linear time only.

\subsection{Concept Clustering}

\input{hierarchical.tex}

%%% Local Variables: 
%%% mode: latex
%%% TeX-master: "vem-icde09"
%%% End: 
