\section{Overview}
\label{sec:pre}

In this section, we give an overview of our approach to discovering
models from streaming data and explain the rationale behind it.
Following an outline of our approach (Section~\ref{sec:outline}), we
use two examples to demonstrate the general objective function for
clustering (Section~\ref{sec:q}), and we discuss its rationale
(Section~\ref{sec:why2}).



\subsection{Two-step clustering\label{sec:outline}}
At a high level, our algorithm for discovering models in time-changing
stream data consists of two steps:

\begin{enumerate}
\item {\it Sequential Clustering.} Partition the data stream into a
  number of contiguous data segments, each corresponding to one
  occurrence of a model;
\item {\it Iterative Clustering.} Cluster the segments based on the
  models they represent.  Each of the resulting clusters corresponds to
  a distinct, stable model.
\end{enumerate}

Two questions arise. First, what criterion should guide the
partitioning and the clustering? Second, is it possible to skip Step 1
(Sequential Clustering) and perform Step 2 (Iterative Clustering)
directly on the original data?  We answer these questions below.

\subsection{Quality function\label{sec:q}}
Since the goal of our clustering is to discover stable models from the
data, the clustering criterion should reflect this goal.  Recall
that the problem of traditional clustering is defined as follows:
Given a set of data points $D=\{{x_1},\cdots,{x_n}\}$, partition $D$
into groups $P= \{D_1,\cdots,D_m | D_i \subset D\}$ in such a way that
data points in the same group are {\it similar} while data points in
different groups are {\it dissimilar}. The similarity is given by a
predefined distance function.

Because distance functions are not appropriate for general-purpose
model discovery, we define a more general clustering criterion: we
seek to minimize the error of the models obtained from the clusters.
Specifically, let $P=\{D_1,D_2,\cdots, D_m|D_i\subset D\}$ be a set of
disjoint clusters and let $M_{D_i}$ be the model for $D_i$.  We define
the error of the partition as
\begin{equation}
  Q(P)=   \frac{1}{|D|}\sum_{D_i\in P}|D_i| \cdot( Err(M_{D_i}) + \delta)
\label{eq:error}
\end{equation}
where $Err(M)$ is the error of the model $M$, which is
application-specific, and $\delta$ is a constant used to penalize
partitions that create a large number of clusters.

Our goal is to find the partition $P$ that minimizes
Eq~\ref{eq:error}.  Below, we give two examples where an
application-specific $Err(M)$ is used to find models of interest.
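As a minimal sketch of how Eq~\ref{eq:error} is evaluated, the following Python fragment computes $Q(P)$ given a partition and a hypothetical, application-supplied callback \texttt{err} that returns $Err(M_{D_i})$ for the model trained on a cluster:

```python
def partition_quality(clusters, err, delta=0.0):
    """Q(P) from Eq. (1): the size-weighted average of the per-cluster
    model errors, plus a penalty delta per cluster that discourages
    partitions with many small clusters."""
    n = sum(len(d) for d in clusters)  # |D|, total number of data points
    return sum(len(d) * (err(d) + delta) for d in clusters) / n
```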

\subsubsection*{Example 1: Distance-based clustering}

Distance-based clustering, or time-series segmentation, is a special
case of Eq~\ref{eq:error}. We simply define the error of each
cluster as the average distance between every pair of data points in
the cluster.
\begin{equation}
Err(M_{D_i}) = \frac{1}{|D_i|\cdot(|D_i|-1)} \sum_{x,y\in D_i, x\neq
y} dist(x,y) \label{eq:clustering}
\end{equation}
where $dist(\cdot,\cdot)$ is the Euclidean distance or the Manhattan
distance. Note that $Err(M_{D_i})$ is undefined if $|D_i|=1$. We
will address this problem in more detail in Example 2.
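Eq~\ref{eq:clustering} can be implemented directly by averaging over all ordered pairs; the following sketch (in Python, with \texttt{dist} a caller-supplied distance function) also makes the singleton restriction explicit:

```python
from itertools import permutations

def distance_error(cluster, dist):
    """Err(M_{D_i}) of Eq. (2): the average distance over all ordered
    pairs (x, y) with x != y. Undefined for singleton clusters, as
    noted in the text."""
    k = len(cluster)
    if k < 2:
        raise ValueError("Err(M_{D_i}) is undefined when |D_i| = 1")
    return sum(dist(x, y) for x, y in permutations(cluster, 2)) / (k * (k - 1))
```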

A purely distance-based criterion may be too simplistic for more
complex applications of time-series segmentation. However, we can
adjust the error function $Err(M_{D_i})$ to fit any given
requirement, as long as the cluster boundaries remain crisp.

\subsubsection*{Example 2: Stream classifiers}
Consider supervised learning for time-evolving data streams.  Each
$x_i\in D$ in the training dataset is a tuple $x_i = (\vec {d_i},
l_i)$, where $l_i$ is the class label of $\vec {d_i}$.  We want to
partition the stream into homogeneous segments such that each segment
is modeled by a classifier.  Intuitively, we think of labels as being
assigned by a non-stationary, hidden model, which is exactly the
classifier we want to learn. More specifically, the model or the
classifier can be represented by a class distribution $p(l|\vec d)$.
When we say it is not stationary, we mean, for example, that at time
$t_1$ the class distribution follows $p'(l|\vec d)$, while later, at
time $t_2$, it becomes $p''(l|\vec d)$.  Our goal is to find all the hidden models
that appear in stream $D$.


We need to define $Err(M_{D_i})$, the error of the model $M_{D_i}$
trained from segment $D_i$. Because stochastic factors may be involved
in generating the data, there is no deterministic way to verify
whether a partition is correct.  Instead, we estimate the error by
{\it cross-validation}.  For instance, $k$-fold cross-validation splits
the original data into $k$ sample sets. A single set is retained as
the validation data for testing the model, and the remaining $k-1$
sets are used as training data.  The process is repeated $k$ times,
each time using a different validation data set.  In each test, we
find the percentage of misclassified samples.  The $k$-fold
cross-validation error is the average classification error over the
$k$ tests; it is a good estimate of the generalization error of a
model, even for small $k$.
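The procedure above can be sketched as follows (a simplified Python illustration; \texttt{train} and \texttt{test\_error} are hypothetical, application-supplied callbacks that fit a classifier and measure its misclassification rate on held-out data):

```python
def kfold_cv_error(segment, train, test_error, k=5):
    """k-fold cross-validation error of the model learned from one
    segment: hold out each fold in turn, train on the remaining k-1
    folds, and average the k held-out error rates."""
    k = min(k, len(segment))                   # at most one fold per example
    folds = [segment[i::k] for i in range(k)]  # round-robin split into k folds
    errors = []
    for i in range(k):
        held_out = folds[i]
        training = [x for j, fold in enumerate(folds) if j != i for x in fold]
        errors.append(test_error(train(training), held_out))
    return sum(errors) / k
```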



Intuitively, why is $Q(P)$ a good indicator of how well $P$ models
the data?  By definition, the best partition is one in which all data
generated by the same model falls in the same cluster, and data
generated by different models falls in different clusters. An
improper partition either produces clusters that contain multiple
models, or puts data from the same model into multiple clusters. We
show that $Q(P)$ is consistent with this goal: if a cluster contains
conflicting models, the model learned from its data will have lower
classification accuracy; and if data from the same model is split
across several clusters, each cluster has fewer data samples and
produces a model with a larger overfitting error.  Both cases
increase the value of $Q(P)$. Thus, $Q(P)$ in Eq~\ref{eq:error} is a
good indicator of the quality of the clustering.



\subsection{Discussion\label{sec:why2}}

There are several interesting observations about our approach.


\subsubsection*{Rationale for two-step clustering}
The datasets we want to cluster are usually very large. In traditional
clustering, the major cost comes from computing the similarity between
pairs of objects.  Similarly, for every two data segments, Step~2 must
consider the possibility that they belong to the same cluster.  If
Step~1 is skipped, the total number of segments equals the total
number of data records (each segment consists of a single record),
which makes Step~2 extremely expensive, as there are too many
possibilities to consider.  Fortunately, this is unnecessary. In our
setting, the probability that a pair of records ends up in the same
cluster is not the same for all pairs.  Consider the underlying
data-generating mechanism: as long as it stays in one state, it
generates data that belongs to the same cluster/model, until it
switches to another state. Assuming more than one record is generated
in each state, neighboring records are more likely to come from the
same model than non-neighboring ones. Instead of deciding whether any
two records should be in the same cluster, Step~1 only decides whether
two neighboring records belong to the same state, which is much less
costly.  The number of segments produced by Step~1 is usually orders
of magnitude smaller than the number of original records, which in
turn reduces the cost of Step~2.
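As a simplified illustration of why Step~1 is cheap (a hypothetical sketch; the actual algorithm may differ), a single greedy pass over the stream can decide, for each record, whether it continues the current segment or opens a new one, using the per-segment error function \texttt{err}:

```python
def sequential_segment(stream, err, threshold):
    """One-pass greedy segmentation sketch: extend the current segment
    with the next record unless doing so raises the segment's model
    error above `threshold`, in which case a new segment is opened."""
    segments, current = [], [stream[0]]
    for x in stream[1:]:
        if err(current + [x]) > threshold:
            segments.append(current)   # close the current segment
            current = [x]              # start a new one at the state change
        else:
            current.append(x)
    segments.append(current)
    return segments
```

This makes only one boundary decision per record, instead of comparing every pair of records.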

\subsubsection*{Using discovered models}
A potential use of our method is for classifying data stream of
evolving concepts. The quality of the models learned from the
clustered data is much higher than the quality of the model learned
from the last window.
If %We assume, for the purpose of simplifying our discussion,
all existing models/concepts have appeared in the training dataset,
which means data that arrives in the future can always be described by
a known model,
then %% .  Under this assumption, after clustering discovers all
%% concepts,
the remaining classification problem is to simply find the
{\it current} model % , and use the model corresponding to the current
% concept
to classify the data. %% In other words, there is no need of
%% learning new models or concepts online, and all we need to know is
%% what is the current concept in play.
Furthermore, finding the current model can be achieved by using a
Markov model learned from the data,
which %% we do not discuss in detail
%% as it
is outside the scope of this paper but not difficult to do.
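As a sketch of the idea (our own illustration, since the details are left out of scope), a first-order Markov chain estimated from the historical sequence of model labels suffices to predict which model is likely in play next:

```python
from collections import Counter, defaultdict

def predict_next_model(segment_labels):
    """Predict the most likely next model label from a first-order
    Markov chain over the historical sequence of segment labels."""
    transitions = defaultdict(Counter)
    for a, b in zip(segment_labels, segment_labels[1:]):
        transitions[a][b] += 1
    current = segment_labels[-1]
    if not transitions[current]:       # no outgoing transition observed yet
        return current
    return transitions[current].most_common(1)[0][0]
```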

In most cases, assuming all existing models have appeared in the
training data is realistic (especially when the given historical
dataset is large) because in many applications, the system that
generates the data operates in a limited number of states only, and
within each state the streaming data can be represented by a
stationary model. In cases where the assumption does not hold (e.g.,
when the historical dataset is small), we can easily switch to an
incremental learning approach, for example, learning additional models
online by focusing on incorrectly classified instances.

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "../vem"
%%% End:
