\section{Iterative Clustering}
\label{sec:ivem}

\begin{figure*}[!t]
    \centering
    \[
    \small
    \begin{array}{c|c|}
        & M_{D_i} \\ \hline
        D_1 & 0.3 \\ \hline
        D_2 & 0.2 \\ \hline
        D_3 & 0.3 \\ \hline
        D_4 & 0.3 \\ \hline
        D_5 & 0.4 \\ \hline
        D_6 & 0.2 \\ \hline
        \multicolumn{2}{c}{\text{before assignment}}
    \end{array}\quad
    \begin{array}{c|c|c|c|c|}
        & M_a & M_b & M_c & M_d \\ \hline
        D_1 & \underline{0.3} & & & \\ \hline
        D_2 & 0.3 & \underline{0.2} & & \\ \hline
        D_3 & 0.3 & \underline{0.2} & & \\ \hline
        D_4 & 0.8 & 0.7 & \underline{0.3} & \\ \hline
        D_5 & 0.6 & 0.7 & \underline{0.3} & \\ \hline
        D_6 & 0.7 & 0.8 & 0.3 & \underline{0.2} \\ \hline
        \multicolumn{5}{c}{\text{assignment 1}}
    \end{array}\quad
    \begin{array}{c|c|c|c|c|}
        & M_a & M_b & M_c & M_d \\ \hline
        D_1 & \underline{0.3} & 0.15 & 0.6 & 0.7 \\ \hline
        D_2 & 0.3 & \underline{0.16} & 0.8 & 0.7 \\ \hline
        D_3 & 0.3 & \underline{0.13} & 0.7 & 0.6 \\ \hline
        D_4 & 0.8 & 0.8 & \underline{0.2} & 0.25 \\ \hline
        D_5 & 0.6 & 0.7 & \underline{0.15} & 0.2 \\ \hline
        D_6 & 0.7 & 0.8 & 0.18 & \underline{0.2} \\ \hline
        \multicolumn{5}{c}{\text{update 1}}
    \end{array}\quad
    \begin{array}{c|c|c|}
        & M_b & M_c \\ \hline
        D_1 & \underline{0.15} & 0.6 \\ \hline
        D_2 & \underline{0.16} & 0.8 \\ \hline
        D_3 & \underline{0.13} & 0.7 \\ \hline
        D_4 & 0.8 & \underline{0.2} \\ \hline
        D_5 & 0.7 & \underline{0.15} \\ \hline
        D_6 & 0.8 & \underline{0.18} \\ \hline
        \multicolumn{3}{c}{\text{assignment 2}}
    \end{array}\quad
    \begin{array}{c|c|c|}
        & M_b & M_c \\ \hline
        D_1 & \underline{0.1} & 0.7 \\ \hline
        D_2 & \underline{0.1} & 0.7 \\ \hline
        D_3 & \underline{0.1} & 0.8 \\ \hline
        D_4 & 0.8 & \underline{0.15} \\ \hline
        D_5 & 0.9 & \underline{0.15} \\ \hline
        D_6 & 0.7 & \underline{0.15} \\ \hline
        \multicolumn{3}{c}{\text{update 2}}
    \end{array}
    \]
    \caption{Validation errors in interleaved assignment and update steps (underlined entries mark the model that each sector is assigned to)}
    \label{fig:ivem:steps}
\end{figure*}

Each sector produced by the Sequential Clustering Algorithm
represents a snapshot of a model. Because a single sector is small,
the snapshot learned from it alone is of limited quality. However,
when models repeat themselves over time, grouping the sectors
generated by the same model yields a model learned from far more
data, and hence of much higher quality. Our goal is to group the
snapshots that belong to the same model, so as to reveal the big
picture of that model. In this section, we propose an iterative
clustering algorithm for this purpose.

\subsection{Overview}

A major challenge in grouping snapshots into distinct models is that
we have no clear-cut criterion for judging whether two snapshots
represent the same model.

We once again use the partition error as our criterion: the best
grouping is the one that minimizes the mean error of the resulting
models. Specifically, given a stream of sectors
$$P=\{D_1,D_2,\cdots, D_m\},$$ we find a partition
$$T=\{T_1,T_2,\cdots, T_t\mid T_i=D_{j_1}\cup D_{j_2}\cup\cdots\}$$ that
minimizes
$$Q(T)=\frac{1}{|D|}\sum_i|T_i|\cdot (\VE(M_{T_i},T_i)+\delta).$$
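To make the objective concrete, the following sketch computes $Q(T)$ in a toy setting. Here \texttt{flat\_ve} is a hypothetical stand-in for the validation-error routine $\VE$ (it pools a cluster's points and returns their variance), and $|T_i|$ is taken to be the number of points in cluster $T_i$; neither choice is the paper's actual definition.

```python
from statistics import pvariance

def partition_cost(partition, ve, delta=0.01):
    """Q(T) = (1/|D|) * sum_i |T_i| * (VE(M_{T_i}, T_i) + delta),
    with |T_i| taken as the number of points in cluster T_i."""
    sizes = [sum(len(s) for s in t) for t in partition]
    return sum(n * (ve(t) + delta)
               for n, t in zip(sizes, partition)) / sum(sizes)

# Stand-in VE: pool a cluster's points and report their variance.
flat_ve = lambda cluster: pvariance([x for s in cluster for x in s])

sectors = [[1.0, 1.1], [0.9, 1.0], [5.0, 5.2]]
split  = [[sectors[0], sectors[1]], [sectors[2]]]  # group by hidden model
lumped = [sectors]                                 # one big cluster
assert partition_cost(split, flat_ve) < partition_cost(lumped, flat_ve)
```

Grouping sectors drawn from the same hidden model lowers $Q(T)$, which is exactly what the iterative algorithm exploits.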

The Iterative Clustering algorithm we propose finds an approximate
solution to the above optimization problem in near-linear time. It
works like the EM or the K-means algorithm. We maintain a set of
models ${\cal M}=\{M_1,M_2,\cdots,M_t\}$ that corresponds to the
models discovered so far, which is initially empty. The algorithm
then alternates between an assignment step, which assigns each
sector to the most likely model, and an update step, which relearns
each model from the data assigned to it. Neither step increases the
mean error of the models, and the algorithm repeats them until
convergence. Next, we discuss these two steps in more detail.

\subsection{The assignment step}

In this step, we assign every sector $D_i$ to the model in ${\cal M}$
that has the smallest validation error when tested on $D_i$. At the
initial phase of the whole algorithm, $\cal M$ is empty, and we
pre-learn an atomic model $M_{D_i}$ from every sector $D_i$. In each
assignment step, we visit the sectors in random order. For each
sector $D_i$, we find in ${\cal M}\cup\{M_{D_i}\}$ the model $O$
that has the smallest validation error $\VE(O,D_i)$, and assign
$D_i$ to $O$. We also put $O$ into $\cal M$ if it is not already
there (i.e., if $O=M_{D_i}$ and $O\notin {\cal M}$).
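As an illustration only, the step can be sketched with toy stand-ins: a ``model'' is the sample mean of its training points and $\VE$ is the mean squared error on a sector (neither is the paper's actual estimator).

```python
import random

def learn(sectors):
    """Toy 'model': the mean of all points in the given sectors."""
    pts = [x for s in sectors for x in s]
    return sum(pts) / len(pts)

def ve(model, sector):
    """Toy validation error: mean squared error of the model on a sector."""
    return sum((x - model) ** 2 for x in sector) / len(sector)

def assignment_step(sectors, models):
    """Visit sectors in random order; assign each to the candidate in
    models + [its own atomic model] with the smallest validation error,
    adding the atomic model to `models` when it wins."""
    assignment = {}
    order = list(range(len(sectors)))
    random.shuffle(order)
    for i in order:
        atomic = learn([sectors[i]])                 # M_{D_i}
        best = min(models + [atomic], key=lambda m: ve(m, sectors[i]))
        if best == atomic and atomic not in models:
            models.append(atomic)                    # put O into M
        assignment[i] = best
    return assignment

random.seed(0)
sectors = [[1.0, 1.2], [0.9, 1.1], [5.0, 5.1], [4.9, 5.2]]
models = []
assignment = assignment_step(sectors, models)
# Low-valued sectors land on low models, high-valued ones on high models.
assert all(assignment[i] < 3 for i in (0, 1))
assert all(assignment[i] > 3 for i in (2, 3))
```

Note that ties are broken in favor of models already in $\cal M$, so an atomic model is added only when it strictly outperforms the existing ones.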

\subsection{The update step}

The update step revises and refines every model in $\cal M$. For
every existing model $M_i$, it relearns a new model $M_i^*$ from all
data that have been assigned to $M_i$ (skipping models whose set of
assigned sectors was not changed by the last assignment step). $M_i$
could then simply be replaced with $M_i^*$. However, to ensure
convergence and prevent any possible increase of the validation
error, we replace $M_i$ with $M_i^*$ only if $M_i^*$ has a strictly
smaller validation error than $M_i$ when tested on the assigned
sectors; if the old model $M_i$ is retained, we simply discard
$M_i^*$. Finally, any model in $\cal M$ that has lost all of its
assigned sectors in the preceding assignment step is eliminated
from $\cal M$.
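Under the same kind of toy stand-ins (a ``model'' is a sample mean, $\VE$ is mean squared error; both hypothetical, not the paper's estimators), the update step can be sketched as:

```python
def learn(sectors):
    """Toy 'model': the mean of all points in the given sectors."""
    pts = [x for s in sectors for x in s]
    return sum(pts) / len(pts)

def ve(model, sector):
    """Toy validation error: mean squared error of the model on a sector."""
    return sum((x - model) ** 2 for x in sector) / len(sector)

def update_step(sectors, models, assignment):
    """Relearn each model from its assigned sectors, keep the relearned
    model only if it is strictly better, and eliminate models that have
    lost all of their sectors."""
    survivors = []
    for m in models:
        assigned = [i for i in assignment if assignment[i] == m]
        if not assigned:
            continue                          # lost all sectors: eliminate
        cand = learn([sectors[i] for i in assigned])            # M_i^*
        if (sum(ve(cand, sectors[i]) for i in assigned)
                < sum(ve(m, sectors[i]) for i in assigned)):
            for i in assigned:
                assignment[i] = cand          # replace M_i with M_i^*
            m = cand
        survivors.append(m)
    return survivors

sectors = [[1.0, 1.2], [0.9, 1.1]]
models = [1.1]                     # model learned from sector 0 alone
assignment = {0: 1.1, 1: 1.1}      # both sectors are now assigned to it
models = update_step(sectors, models, assignment)
# Relearning from both sectors moves the model to the pooled mean 1.05.
assert len(models) == 1 and abs(models[0] - 1.05) < 1e-9
```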

\subsection{The whole algorithm}

The algorithm repeats the assignment and the update step until
convergence. Convergence is guaranteed by the monotonic decrease of
the mean validation error -- neither of the two steps increases the
(weighted) mean validation error of the models in $\cal M$. As
validated in the experiments, the number of iterations before
convergence is quite small, and the mean validation error usually
gets close to its extremum within the first few iterations.

The assignment step assigns each sector to the model that has the
smallest validation error on it. By relearning models from the newly
assigned data, the update step improves the quality of each model,
which helps future assignment steps assign each sector to a more
suitable model. If duplicate models (learned from data drawn from
the same underlying model) appear in this process, the better one
has a greater chance of attracting sectors, which in turn makes it
even more competitive. Models that eventually lose all of their
assigned sectors are eliminated. Therefore, the models remaining
after convergence are expected to represent the true models.
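Putting the two steps together, a complete toy sketch follows. The stand-ins are again hypothetical: each sector carries a fit half and a held-out half, a ``model'' is the mean of the fit halves assigned to it, and $\VE$ is the mean squared error on a sector's held-out half (a crude surrogate for validation error).

```python
import random

def learn(sectors):
    """Toy 'model': mean over the fit-halves of the given sectors."""
    pts = [x for fit, _ in sectors for x in fit]
    return sum(pts) / len(pts)

def ve(model, sector):
    """Toy validation error: MSE on the sector's held-out half."""
    _, val = sector
    return sum((x - model) ** 2 for x in val) / len(val)

def iterative_clustering(sectors, max_iters=20):
    models, assignment, prev = [], {}, None
    for _ in range(max_iters):
        # Assignment step: visit sectors in random order; each goes to
        # the candidate (existing model or its own atomic model) with
        # the smallest validation error.
        order = list(range(len(sectors)))
        random.shuffle(order)
        for i in order:
            atomic = learn([sectors[i]])
            best = min(models + [atomic], key=lambda m: ve(m, sectors[i]))
            if best == atomic and atomic not in models:
                models.append(atomic)
            assignment[i] = best
        # Update step: relearn each model from its assigned sectors,
        # keep the relearned model only if better, drop empty models.
        survivors = []
        for m in models:
            assigned = [i for i in assignment if assignment[i] == m]
            if not assigned:
                continue
            cand = learn([sectors[i] for i in assigned])
            if (sum(ve(cand, sectors[i]) for i in assigned)
                    < sum(ve(m, sectors[i]) for i in assigned)):
                for i in assigned:
                    assignment[i] = cand
                m = cand
            survivors.append(m)
        models = survivors
        # Converged once the grouping of sectors stops changing.
        cur = sorted(sorted(i for i in assignment if assignment[i] == m)
                     for m in models)
        if cur == prev:
            break
        prev = cur
    return cur

random.seed(1)
def make_sector(mu):
    pts = [random.gauss(mu, 0.3) for _ in range(8)]
    return (pts[:4], pts[4:])       # (fit half, held-out half)

sectors = [make_sector(1.0) for _ in range(4)] + \
          [make_sector(5.0) for _ in range(4)]
clusters = iterative_clustering(sectors)
assert sum(len(c) for c in clusters) == len(sectors)
# No cluster mixes sectors from the two hidden models.
assert all(all(i < 4 for i in c) or all(i >= 4 for i in c)
           for c in clusters)
```

With a held-out-based $\VE$, models learned from more sectors tend to win, so duplicate models are gradually starved of sectors and eliminated, as described above.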

Fig.~\ref{fig:ivem:steps} demonstrates the entire process on an
example with six sectors generated by two hidden models. The first
table presents the validation errors of the atomic models learned
from each sector. In the first assignment step, $D_1$, $D_2$, $D_4$
and $D_6$ are assigned to the atomic models $M_a$, $M_b$, $M_c$ and
$M_d$ learned from themselves, which are then put into the model
set $\cal M$. The sectors $D_3$ and $D_5$ are assigned to $M_b$ and
$M_c$ respectively, as these models have smaller validation errors
on $D_3$ and $D_5$ than the atomic models learned from those
sectors themselves. In the following update step, $M_b$ and $M_c$
are revised and refined by relearning from the newly assigned
sectors, which lowers their validation errors on data drawn from
the same hidden models. The next assignment step assigns $D_1$ and
$D_6$ to the improved models $M_b$ and $M_c$, so the poor models
$M_a$ and $M_d$ lose all of their assigned sectors and are removed
from $\cal M$. The final update step further refines the two
remaining models.

\subsection{Performance analysis}

The size of the model set $\cal M$ plays a critical role in our
performance analysis, because it determines both the time complexity
and the number of models remaining after convergence. $\cal M$
varies during the algorithm, and its size depends on the number of
sectors, $m$, and the number of distinct models producing the data,
$k$. Our first theorem considers the simple case of $k=1$, that is,
all data are drawn from the same model, and evaluates the expected
size of $\cal M$ after the first assignment step.

\begin{theorem}
\label{thm:ivem:msize} After the first assignment step, if all $m$
data sectors are drawn from the same model (or data distribution),
the expected size of $\cal M$ is at most $O(m^{0.5})$.
\end{theorem}

\begin{proof}
Let $t_i$ denote the size of $\cal M$ at the time we finish
processing the $i$-th sector in the first assignment step.
Initially, $t_0=0$. Suppose we are processing $D_i$. At that time,
$\cal M$ contains $t_{i-1}$ atomic models. From the models in $\cal
M$ and the model $M_{D_i}$, we choose the one that has the smallest
validation error when tested on $D_i$. Because all the $t_{i-1}$
models in $\cal M$ as well as $M_{D_i}$ are learned from a single
data sector drawn from the same model, and because we process the
sectors in random order, each of them is equally likely to have the
smallest validation error on $D_i$. Therefore, with probability at
most $\frac{1}{t_{i-1}+1}$,\footnote{The probability is an upper
bound rather than an exact value, because the $t_{i-1}$ models in
$\cal M$ are superior models selected from $\{M_{D_1}\dots
M_{D_{i-1}}\}$ by defeating others in tests on some sectors.}
$M_{D_i}$ is chosen and the size of $\cal M$ becomes
$t_{i-1}+1$. Otherwise, the size of $\cal M$ stays the same.

To bound $t_m$, we introduce another variable $y$, whose initial
value is $y_0=0$. After visiting the $i$-th sector, if $t$ increases
(i.e., $t_i=t_{i-1}+1$), we let $y$ increase by $t_i$, i.e.,
$y_i=y_{i-1}+t_i$; otherwise, $y_i=y_{i-1}$. It is clear that the
equality $t_i(t_i+1)=2y_i$ holds for every $i$. Because the
probability that $t$ increases at step $i$ is at most
$\frac{1}{t_{i-1}+1}$, we get
\begin{eqnarray}
  E(y_i-y_{i-1})&\leq&0\cdot(1-\frac{1}{t_{i-1}+1})+(t_{i-1}+1) \cdot \frac{1}{t_{i-1}+1} \nonumber\\
  &=& \frac{t_{i-1}+1}{t_{i-1}+1}=1 \nonumber
\end{eqnarray}
This leads to $E(y_m)\leq m$. Finally, because $t_m(t_m+1)=2y_m$, we
have $t_m\leq\sqrt{2y_m}$, and by Jensen's inequality
$E(t_m)\leq E(\sqrt{2y_m})\leq\sqrt{2E(y_m)}\leq\sqrt{2m}=O(m^{0.5})$.
\end{proof}
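The growth process in the proof is easy to check numerically. The sketch below simulates $m$ steps in which $t$ is incremented with probability $1/(t+1)$ (taking the proof's upper-bound probability as exact, which slightly overestimates the true growth) and confirms that the average $t_m$ stays near $\sqrt{2m}$.

```python
import random

def simulate_model_count(m, trials=200, seed=7):
    """Average final t after m steps, where t starts at 0 and is
    incremented with probability 1/(t+1) at each step (as in the proof)."""
    random.seed(seed)
    total = 0
    for _ in range(trials):
        t = 0
        for _ in range(m):
            if random.random() < 1 / (t + 1):
                t += 1
        total += t
    return total / trials

avg = simulate_model_count(10_000)
# sqrt(2 * 10_000) is about 141; the average should land near it.
assert 100 < avg < 180
```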

When the number of distinct models generating data, $k$, is more
than 1, the sectors can be divided into $k$ groups by the unique
model they belong to. Applying the above theorem to each of the $k$
groups and summing the per-group bounds (by the Cauchy--Schwarz
inequality, $\sum_j\sqrt{m_j}\leq\sqrt{km}$ when the group sizes
$m_j$ sum to $m$), we get the following corollary.

\begin{corollary}
\label{cor:ivem:msize} After the first assignment step, if the $m$
data sectors are drawn from $k$ distinct models, the expected size
of $\cal M$ is at most $O((km)^{0.5})$.
\end{corollary}

Thus, we have bounded the size of $\cal M$ after the first
assignment step. In usual cases, the update steps improve the
quality of every model, and each sector in future assignment steps
is assigned to an existing model in $\cal M$. Therefore, the size of
$\cal M$ usually decreases, rather than increases, in succeeding
iterations, and the running time of a single assignment step is at
most $O((km)^{0.5}n)$. In addition, the running time of a single
update step is no worse than that of learning a model from the
entire stream $D$.

How many models will remain in $\cal M$ at the end of the algorithm?
Corollary~\ref{cor:ivem:msize} gives an upper bound of
$O((km)^{0.5})$. However, this bound is still too large, since it
increases with the number of sectors $m$. The following theorem
shows that the final size of $\cal M$ is usually independent of $m$.

\begin{lemma}
Let function $T=IC(P)$ denote the iterative clustering algorithm,
which accepts a set of $m$ data sectors, $P$, and returns a set of
clusters, $T$. Then $IC(IC(P))=IC(P)$.
\end{lemma}

\begin{theorem}
As long as $IC$ performs correct clustering (that is, at the end of
the algorithm, each model in $\cal M$ contains only sectors drawn
from the same model), $E(|IC(P)|)$ is $O(1)$ with respect to $m$.
\end{theorem}

\begin{proof}
Let $f_k(m)=E(|IC(P)|)$ for $|P|=m$, where the number of models
producing $P$ is $k$; we assume $k$ is a constant. Because $IC$
performs correct clustering, the numbers of models producing $P$
and $IC(P)$ are the same, and thus $E(f_k(|IC(P)|))=E(|IC(IC(P))|)$.
And because $IC(IC(P))=IC(P)$, we get
$f_k(m)=E(|IC(P)|)=E(f_k(|IC(P)|))=O(f_k(f_k(m)))$ for $|P|=m$.
According to Corollary~\ref{cor:ivem:msize}, we also know that
$f_k(m)=o(m)$. If $f_k(m)$ asymptotically increased with $m$, then
$f_k(m)=O(f_k(f_k(m)))=o(f_k(m))$, a contradiction. Therefore,
$f_k(m)=O(1)$ with respect to $m$.
\end{proof}

The above theorem presupposes that, at the end of the algorithm,
each model/cluster in $\cal M$ contains only homogeneous data
sectors generated by the same model. In other words, it is assumed
that the algorithm will not put sectors from different models into
the same cluster. Fortunately, this assumption usually holds as long
as the models are distinguishable from each other, so our theorem
holds in common cases.

If we divide the sectors into $k$ groups by the unique model they
belong to, and apply the above theorem to each group, we similarly
obtain the following corollary.

\begin{corollary}
As long as $IC$ performs correct clustering,\linebreak
$E(|IC(P)|)=O(k)$.
\end{corollary}
