\section{Iterative Clustering}
\label{sec:ivem}

\begin{figure*}[!htb]
    \centering
    \[
    \small
    \begin{array}{c|c|}
        & M_{D_i} \\ \hline
        D_1 & 0.3 \\ \hline
        D_2 & 0.2 \\ \hline
        D_3 & 0.3 \\ \hline
        D_4 & 0.3 \\ \hline
        D_5 & 0.4 \\ \hline
        D_6 & 0.2 \\ \hline
        \multicolumn{2}{c}{\text{before assignment}}
    \end{array}\quad
    \begin{array}{c|c|c|c|c|}
        & M_a & M_b & M_c & M_d \\ \hline
        D_1 & \underline{0.3} & & & \\ \hline
        D_2 & 0.3 & \underline{0.2} & & \\ \hline
        D_3 & 0.3 & \underline{0.2} & & \\ \hline
        D_4 & 0.8 & 0.7 & \underline{0.3} & \\ \hline
        D_5 & 0.6 & 0.7 & \underline{0.3} & \\ \hline
        D_6 & 0.7 & 0.8 & 0.3 & \underline{0.2} \\ \hline
        \multicolumn{5}{c}{\text{assignment 1}}
    \end{array}\quad
    \begin{array}{c|c|c|c|c|}
        & M_a & M_b & M_c & M_d \\ \hline
        D_1 & \underline{0.3} & 0.15 & 0.6 & 0.7 \\ \hline
        D_2 & 0.3 & \underline{0.16} & 0.8 & 0.7 \\ \hline
        D_3 & 0.3 & \underline{0.13} & 0.7 & 0.6 \\ \hline
        D_4 & 0.8 & 0.8 & \underline{0.2} & 0.25 \\ \hline
        D_5 & 0.6 & 0.7 & \underline{0.15} & 0.2 \\ \hline
        D_6 & 0.7 & 0.8 & 0.18 & \underline{0.2} \\ \hline
        \multicolumn{5}{c}{\text{update 1}}
    \end{array}\quad
    \begin{array}{c|c|c|}
        & M_b & M_c \\ \hline
        D_1 & \underline{0.15} & 0.6 \\ \hline
        D_2 & \underline{0.16} & 0.8 \\ \hline
        D_3 & \underline{0.13} & 0.7 \\ \hline
        D_4 & 0.8 & \underline{0.2} \\ \hline
        D_5 & 0.7 & \underline{0.15} \\ \hline
        D_6 & 0.8 & \underline{0.18} \\ \hline
        \multicolumn{3}{c}{\text{assignment 2}}
    \end{array}\quad
    \begin{array}{c|c|c|}
        & M_b & M_c \\ \hline
        D_1 & \underline{0.1} & 0.7 \\ \hline
        D_2 & \underline{0.1} & 0.7 \\ \hline
        D_3 & \underline{0.1} & 0.8 \\ \hline
        D_4 & 0.8 & \underline{0.15} \\ \hline
        D_5 & 0.9 & \underline{0.15} \\ \hline
        D_6 & 0.7 & \underline{0.15} \\ \hline
        \multicolumn{3}{c}{\text{update 2}}
    \end{array}
    \]
    \caption{Validation errors in interleaved assignment and update steps (underlined numbers denote which model each sector is assigned to)}
    \label{fig:ivem:steps}
\end{figure*}

Section~\ref{sec:svem} focused on partitioning a stream into
contiguous sectors so that each sector represents an episode of a
concept. Such episodes represent their concepts with low accuracy,
because each is learned from the data of a single, small sector.
However, since concepts repeat over time, if we group the sector-based
episodes into a set of distinct concepts, each concept is learned from
more data and therefore attains higher quality. In this section, we
propose an iterative clustering algorithm for this purpose.

\subsection{Overview}
A major challenge in grouping episodes into distinct concepts is that
we have no clear-cut criterion for judging whether two episodes
represent the same concept. A fixed similarity threshold is elusive,
since a meaningful threshold depends on the characteristics of the
particular data stream.

We once again use validation error as our criterion. The best grouping
will minimize the mean validation error of the resulting concepts.
Specifically, given a stream of sectors $$P=\{D_1,D_2,\cdots, D_m\},$$
we find a partition $$T=\{T_1,T_2,\cdots, T_t\mid T_i=D_{j_1}\cup
D_{j_2}\cup\cdots\}$$
that minimizes
$$Q(T)=\frac{1}{|D|}\sum_i|T_i|\cdot\VE(T_i,T_i).$$
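
As a toy illustration of this objective (with hypothetical numbers,
not drawn from the paper's experiments), $Q(T)$ is simply the
size-weighted mean of the per-group validation errors:

```python
# Toy check of the objective Q(T): two concept groups T_1 and T_2 with
# assumed (hypothetical) sizes and validation errors VE(T_i, T_i).
sizes = [300, 300]        # |T_1|, |T_2| in records; |D| = 600
errors = [0.10, 0.15]     # assumed VE(T_1, T_1) and VE(T_2, T_2)
Q = sum(s * e for s, e in zip(sizes, errors)) / sum(sizes)
print(round(Q, 3))        # 0.125
```

A grouping that merged episodes of different concepts would raise the
per-group errors and hence $Q(T)$.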

The Iterative Clustering algorithm we propose finds an approximate
solution to the above optimization problem in near-linear time. It
operates like the EM or k-means algorithm. We maintain a set of models
${\cal M}=\{M_1,M_2,\cdots,M_t\}$ that correspond to the concepts
discovered so far. The algorithm alternates between an assignment
step, which assigns each sector to the most likely model, and an
update step, which relearns each model from the data assigned to it.
Neither step increases the overall validation error, and the algorithm
repeats them until convergence. Next, we discuss the assignment step
and the update step in more detail.

\subsection{The assignment step}

In this step, we assign every sector $D_i$ to a model in ${\cal M}$
that has the smallest validation error on $D_i$. Initially, ${\cal M}$
is empty, and we learn a model $M_{D_i}$ from every sector $D_i$.
Then, we visit the sectors in random order in the assignment step.
For each $D_i$, we find a model $O$ from ${\cal M}\cup\{M_{D_i}\}$
such that $O$ has the smallest validation error $\VE(O,D_i)$.  Sector
$D_i$ is assigned to model $O$. We also add $O$ to ${\cal M}$ if it is
not already there (i.e., $O=M_{D_i}$ and $O\notin {\cal M}$).
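
The following sketch illustrates the assignment step under strong
simplifying assumptions: a ``model'' is a one-dimensional sample mean,
each sector is a (train, holdout) pair, and validation error is
approximated by the error on the holdout part. The helpers
\texttt{learn} and \texttt{ve} are hypothetical stand-ins for the
paper's learner and \VE, not its actual implementation.

```python
import random

def learn(train):                 # toy stand-in: a model is a sample mean
    return sum(train) / len(train)

def ve(model, sector):            # toy stand-in for VE: error on a holdout
    train, holdout = sector
    return abs(model - sum(holdout) / len(holdout))

def assignment_step(sectors, models, rng):
    """Visit sectors in random order; assign each to the model (existing
    or its own atomic one) with the smallest error, growing `models`."""
    assign = {}
    for i in rng.sample(range(len(sectors)), len(sectors)):
        atomic = learn(sectors[i][0])
        best = min(models + [atomic], key=lambda m: ve(m, sectors[i]))
        if best not in models:
            models.append(best)   # the chosen atomic model enters M
        assign[i] = best
    return assign

sectors = [([0.3], [-0.2]), ([-0.3], [0.2]),   # hidden concept A (mean ~0)
           ([5.3], [4.8]), ([4.7], [5.2])]     # hidden concept B (mean ~5)
models = []
assign = assignment_step(sectors, models, random.Random(1))
print(len(models))                # 2: one surviving model per concept
```

On this toy input, with two sectors per hidden concept, exactly one
model per concept enters ${\cal M}$, regardless of the visiting order.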

An important question is how large the model set ${\cal M}$ will
become, since the size of ${\cal M}$ determines the time complexity of
the assignment step. We prove the following theorem, which shows that
the size of ${\cal M}$ depends on the total number of distinct
concepts in the data.

\vspace{.1cm}
\begin{theorem}
  After the first assignment step, if all $m$ sectors are produced by
  the same concept (or class distribution), the expected
  size of ${\cal M}$ is $\Theta(m^{0.5})$.\footnote{A tighter bound of
    $E(|{\cal M}|)$ is $(2m)^{0.5}+O(1)$.}
\end{theorem}
\begin{proof}
  Let $t_i$ be the size of ${\cal M}$ after we process the $i$-th
  sector in the first assignment step. Initially, $t_0=0$.  Suppose we
  are processing $D_i$. At that time, ${\cal M}$ contains $t_{i-1}$
  atomic models. From models in ${\cal M}$ and model $M_{D_i}$, we
  choose one that has the smallest validation error on $D_i$.
  Because all $t_{i-1}$ models in ${\cal M}$, as well as $M_{D_i}$,
  are learned from data produced by the same concept, each of them is
  equally likely to achieve the smallest validation error on $D_i$,
  and hence equally likely to be chosen. Therefore, with probability
  $\frac{1}{t_{i-1}+1}$, $M_{D_i}$ is chosen and the size of ${\cal
  M}$ becomes $t_{i-1}+1$; otherwise, the size of ${\cal M}$ stays the
  same.

  To determine $t_m$, we introduce an auxiliary variable $y$ with
  initial value $y_0=0$. After visiting the $i$-th sector, if $t$
  increases (i.e., $t_i=t_{i-1}+1$), we increase $y$ by $t_i$, i.e.,
  $y_i=y_{i-1}+t_i$; otherwise, $y_i=y_{i-1}$. Since $y_i$ is then the
  sum $1+2+\cdots+t_i$, the equality $t_i(t_i+1)=2y_i$ holds for every
  $i$. Because the probability that $t$ increases at step $i$ is
  $\frac{1}{t_{i-1}+1}$, we get
\begin{eqnarray}
  E(y_i-y_{i-1})&=&0\cdot(1-\frac{1}{t_{i-1}+1})+(t_{i-1}+1) \cdot \frac{1}{t_{i-1}+1} \nonumber\\
  &=& \frac{t_{i-1}+1}{t_{i-1}+1}=1 \nonumber
\end{eqnarray}
This leads to $E(y_m)=m$, and finally, because $t_m(t_m+1)=2y_m$, we
conclude that $E(t_m)=\Theta(E(y_m)^{0.5})=\Theta(m^{0.5})$.
\end{proof}
\vspace{.1cm}
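
The growth process in the proof is easy to check empirically: under
the theorem's assumption, the $i$-th sector's atomic model is chosen
with probability $\frac{1}{t_{i-1}+1}$. The simulation below (a sanity
check only, with an assumed $m=5000$) reproduces the $\Theta(m^{0.5})$
behavior:

```python
import math
import random

def expected_model_count(m, trials=100, seed=1):
    """Simulate the first assignment step under a single concept: each
    sector's atomic model enters M with probability 1/(t+1)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        t = 0                      # current size of the model set M
        for _ in range(m):
            if rng.random() < 1.0 / (t + 1):
                t += 1             # the atomic model is chosen
        total += t
    return total / trials

m = 5000
avg = expected_model_count(m)
print(avg, math.sqrt(2 * m))       # avg comes out close to sqrt(2m) = 100
```

The simulated average matches the tighter bound $(2m)^{0.5}+O(1)$
stated in the footnote of the theorem.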

\begin{corollary}
  After the first assignment step, if the $m$ data sectors are
  produced by $k$ distinct concepts, the expected size of ${\cal M}$
  is at most $O((km)^{0.5})$.
\end{corollary}
\vspace{.1cm}

Thus, we have bounded the cardinality of ${\cal M}$ after the first
assignment step. Moreover, in succeeding iterations the cardinality
usually decreases rather than increases. To obtain the validation
errors of the different models on every sector, each of the $n$
records in the stream is tested $|{\cal M}|$ times. Hence, the time
complexity of a single assignment step is $O((km)^{0.5}n\tau)$, where
$\tau$ denotes the time needed to test a single record.

\subsection{The update step}

The update step revises every model in ${\cal M}$ to better fit its
assigned sectors. For every existing model $M_i$, it relearns a new
model $M_i^*$ from all data assigned to $M_i$ (skipping models whose
set of assigned sectors was not changed by the last assignment step).
$M_i$ could then simply be replaced with $M_i^*$. However, to ensure
convergence and prevent any increase of the validation error, we
replace $M_i$ with $M_i^*$ only if doing so actually reduces the
overall validation error on the assigned sectors; otherwise we discard
$M_i^*$ and retain the old model $M_i$. Another task of the update
step is to remove all models that have no sectors assigned to them.

By relearning the models from the newly assigned data, the update step
improves the quality of each model, which in turn helps the assignment
step assign each sector to the most suitable model. The total time of
a single update step is no worse than that of learning a single model
from the entire stream $D$.
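
A minimal sketch of the update step, under the same toy assumptions as
before (a model is a sample mean, a sector is a (train, holdout) pair,
and \texttt{learn}/\texttt{ve} are hypothetical stand-ins for the
paper's learner and \VE):

```python
def learn(train):                 # toy stand-in: a model is a sample mean
    return sum(train) / len(train)

def ve(model, sector):            # toy stand-in for VE: error on a holdout
    train, holdout = sector
    return abs(model - sum(holdout) / len(holdout))

def update_step(sectors, models, assign):
    """Relearn each model from its assigned sectors; keep the relearned
    model only if it reduces the error, and drop models with no sectors."""
    survivors = []
    for M in models:
        idx = [i for i, m in assign.items() if m == M]
        if not idx:
            continue                            # no assigned sectors: remove
        pooled = [x for i in idx for x in sectors[i][0]]
        M_star = learn(pooled)
        if sum(ve(M_star, sectors[i]) for i in idx) < \
           sum(ve(M, sectors[i]) for i in idx):
            for i in idx:
                assign[i] = M_star              # M* really helps: replace
            M = M_star
        survivors.append(M)
    return survivors

sectors = [([0.3], [-0.2]), ([-0.3], [0.2])]   # two sectors, one concept
models = [0.3]                                  # both assigned to model 0.3
assign = {0: 0.3, 1: 0.3}
models = update_step(sectors, models, assign)
print(models)                  # [0.0]: relearned from the pooled data
```

Here the relearned model (the mean of the pooled training data) has a
lower holdout error than the old one, so it replaces the old model.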

\subsection{The whole algorithm}

The Iterative Clustering algorithm alternates between the assignment
step and the update step until convergence. Convergence is guaranteed
by the monotone decrease of the overall validation error: neither step
can increase it. In practice, the number of iterations before
convergence is quite small, and the validation error usually comes
close to its minimum within the first few iterations.
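
Putting the two steps together, the loop below is a self-contained toy
sketch (sample-mean models and holdout-based error, both hypothetical
stand-ins for the paper's learner and \VE); it stops as soon as the
assignment no longer changes:

```python
import random

def learn(train):                 # toy stand-in: a model is a sample mean
    return sum(train) / len(train)

def ve(model, sector):            # toy stand-in for VE: error on a holdout
    train, holdout = sector
    return abs(model - sum(holdout) / len(holdout))

def iterative_clustering(sectors, seed=0, max_rounds=20):
    rng = random.Random(seed)
    models, assign = [], {}
    # first assignment step: atomic models may enter the model set
    for i in rng.sample(range(len(sectors)), len(sectors)):
        atomic = learn(sectors[i][0])
        best = min(models + [atomic], key=lambda m: ve(m, sectors[i]))
        if best not in models:
            models.append(best)
        assign[i] = best
    for _ in range(max_rounds):
        # update step: relearn; keep M* only if it lowers the error
        survivors = []
        for M in models:
            idx = [i for i, m in assign.items() if m == M]
            if not idx:
                continue          # model lost all sectors: eliminate it
            pooled = [x for i in idx for x in sectors[i][0]]
            M_star = learn(pooled)
            if sum(ve(M_star, sectors[i]) for i in idx) < \
               sum(ve(M, sectors[i]) for i in idx):
                M = M_star
            survivors.append(M)
        models = survivors
        # assignment step over the current model set
        new_assign = {i: min(models, key=lambda m: ve(m, s))
                      for i, s in enumerate(sectors)}
        if new_assign == assign:  # converged: assignment is stable
            break
        assign = new_assign
    return models, assign

sectors = [([0.3], [-0.2]), ([-0.3], [0.2]),   # hidden concept A (mean ~0)
           ([5.3], [4.8]), ([4.7], [5.2])]     # hidden concept B (mean ~5)
models, assign = iterative_clustering(sectors)
print(sorted(models))      # two models, near the concept means 0 and 5
```

On this toy stream the loop converges in two rounds, leaving one
refined model per hidden concept.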

The assignment step always assigns each sector to the model with the
smallest validation error. If duplicate models learned from data of
the same concept coexist at some point, the better model tends to
attract more sectors, which in turn makes it even more competitive.
Models that eventually lose all their assigned sectors are eliminated
in the update step. The cardinality of the model set therefore
decreases more often than it increases, although it is not guaranteed
that a single model ultimately remains for each distinct concept.

Fig.~\ref{fig:ivem:steps} demonstrates the entire process on an
example with six sectors generated by two hidden concepts. The first
table presents the validation errors of the atomic models learned from
the individual sectors. In the first assignment step, $D_1$, $D_2$,
$D_4$ and $D_6$ are assigned to the atomic models $M_a$, $M_b$, $M_c$
and $M_d$ learned from themselves, which are then put into the model
set $\cal M$. Sectors $D_3$ and $D_5$ are assigned to $M_b$ and $M_c$,
respectively, because these models make fewer validation errors than
the atomic models learned from $D_3$ and $D_5$ themselves. In the
following update step, $M_b$ and $M_c$ are refined by relearning from
the newly assigned sectors, which lowers their validation errors on
data drawn from the same hidden concepts. The next assignment step
assigns $D_1$ and $D_6$ to the improved models $M_b$ and $M_c$, so the
poorer models $M_a$ and $M_d$ lose all their assigned sectors and are
removed from $\cal M$. The final update step further refines the two
remaining models.
