\section{Problem Definition}
\label{sec:pre}


In this section, we give a formal problem definition and the objective
function for our clustering approach.
%% changing concepts, then we give an overview of our approach.

\subsection{Clustering for Supervised Learning}
At first glance, our problem resembles supervised learning: we are
given a labeled dataset $D=\{(x_i,c_i)\}$, and our goal is to learn a
model from $D$ that predicts the label of a future example.

However, a good model requires a large training dataset with a
stationary class distribution, and such a dataset is difficult to
obtain.  As the examples above show, each episode that corresponds to
one hidden context lasts only a short period of time.  With limited
training data for that hidden context, the learned model will have a
large overfitting error.  On the other hand, arbitrarily enlarging the
training data will inevitably mix in data of different concepts, which
also leads to models of low quality.  A solution, then, is to group
the historical data by their hidden concepts so that each group
contains a substantial amount of data with a stationary class
distribution, which induces a model of good quality.  Naturally, this
is a clustering problem.

At the same time, ours is not a traditional clustering problem.
Recall that the problem of clustering is generally defined as
follows: Given a set of data objects
\begin{equation}
D=\{\vec{x_1},\cdots,\vec{x_n}\}
\label{equ:data}
\end{equation}
partition $D$ into
groups
$$P= \{D_1,\cdots,D_m | D_i \subset D\}$$
in such a way that objects
in the same group are {\it similar} while objects in different groups
are dissimilar, according to a predefined similarity measure.

Our problem is more than simply replacing each $x_i$ in
Eq~\ref{equ:data} by $(x_i,c_i)$: we are interested in the quality of
the models trained on the partitioned dataset.  Thus, model quality is
a better and more direct measure of the quality of the clustering.
Our task can be stated as follows:

\vspace{.2cm}
({\sc Problem Statement}) Given a set of data objects
\begin{equation}
D=\{(\vec{x_1},c_1),\cdots,(\vec{x_n},c_n)\}
\end{equation}
partition $D$ into
groups
$$P= \{D_1,\cdots,D_m | D_i \subset D\}$$
in such a way that the {\it
  overall cross-validation error} of the classifiers $C_i$ learned from
the data $D_i$ ($1 \le i \le m$) is minimized.  We give the objective
function for this minimization below.
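The statement can be made concrete on a toy scale.  The sketch below,
in Python, brute-forces the search space of partitions, restricting
itself to contiguous groups for brevity (a simplifying assumption; the
statement allows arbitrary groups) and using a hypothetical
label-impurity scorer as a stand-in for the overall cross-validation
error:

```python
from itertools import product

def contiguous_partitions(seq):
    """Yield every partition of seq into contiguous groups; each of
    the n-1 gaps between neighbors either carries a cut or does not."""
    n = len(seq)
    for cuts in product([False, True], repeat=n - 1):
        parts, start = [], 0
        for i, cut in enumerate(cuts, 1):
            if cut:
                parts.append(seq[start:i])
                start = i
        parts.append(seq[start:])
        yield parts

def best_partition(data, overall_error):
    """Pick the partition whose overall error is smallest, by
    exhaustive search (feasible only for tiny data)."""
    return min(contiguous_partitions(data), key=overall_error)

def overall_error(parts):
    """Toy stand-in for the overall cross-validation error: the
    size-weighted label impurity of the groups."""
    def impurity(group):
        labels = [c for _, c in group]
        return 1 - max(labels.count(l) for l in set(labels)) / len(labels)
    n = sum(len(g) for g in parts)
    return sum(len(g) * impurity(g) for g in parts) / n

# Two records from concept 'a' followed by two from concept 'b':
data = [('x1', 'a'), ('x2', 'a'), ('x3', 'b'), ('x4', 'b')]
best = best_partition(data, overall_error)
```

Exhaustive search is exponential in $|D|$ and serves only to pin down
the objective; it is not the algorithm we propose.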
%% The overall cross-validation error is a measurement of the quality of
%% the learned models. %% We will define the overall cross-validation error
%% and the minimization problem in a more rigorous manner later in the
%% paper.



%% \subsection{Our contributions}

%% Knowledge is acquired from experience. For example, a human perfects
%% his tennis skills by practicing tennis everyday. Machine learning is
%% more or less similar: a learner tries to model a phenomenon by
%% studying a large data log.  The challenge is that, while humans
%% naturally group pieces of relevant (tennis-playing) experiences
%% together to build up his (tennis) skills, learning algorithms do not
%% know which scattering pieces of the data are relevant and may be
%% used to reinforce the knowledge about the phenomenon of interest.
%% This paper makes the following contributions in solving this
%% problem.

%% \begin{itemize}
%% \item We extend data clustering techniques on labeled data $(\vec
%%   x,c)$, where $c$ is the class label of data $\vec x$.  Our goal is
%%   to generate a set of clusters such that each cluster corresponds to
%%   a single concept in the data, i.e., data in each cluster has a
%%   stable class distribution $p(c|\vec x)$.
%% \item We devise a novel measure for quality of clustering.  Instead of
%%   maximizing similarity of data in the same cluster (and dissimilarity
%%   of data in different clusters), we partition the data in such a way
%%   that models learned from partitioned dataset have the lowest
%%   validation error.
%% \item We provide several solutions to the above optimization problem.
%%   Our ultimate solution creates models of as good quality as those
%%   created by brute force dynamic programming approaches.
%% \item Experimental results show that our approach is the best known
%%   approach of classifying data of evolving concepts. It is both most
%%   efficient (on-line training is reduced to the minimum) and most
%%   accurate (concepts reinforced by every historical instances in the
%%   data).
%% \end{itemize}


%% \subsection{Our Approach}

%% The first step toward building a high-order model is to capture all
%% stable concepts in the evolving data. However, as in the examples we
%% mentioned above, concept changes may occur at any time, instead of
%% exhibiting simple patterns such as periodicity~\cite{highperiod}.  The
%% second component of the high-order model is the concept change
%% patterns, which are also learned from the historical data, that is, we
%% analyze how individual concepts interact with each other by collecting
%% the statistics of concept changes. At runtime, with cues from an
%% online training stream, the high-order model identifies the current
%% concept in the stream and uses offline trained classifiers
%% corresponding to the concept for prediction.

%% The primary advantage of our approach is its very high accuracy.
%% Experiments show that in benchmark datasets, classification error of
%% the high-order model is only about one tenth of the current best
%% approaches. Furthermore, unlike state-of-the-art approaches, the
%% high-order model has no user parameters. It does not require users
%% to tune any parameters on the basis of the characteristics of
%% different data streams in order to attain satisfying classification
%% accuracy.

%% The primary task of data mining is to develop models based on
%% existing data. In classification, usually the training data is
%% fixed, for example, it is stored in a data warehouse, and the
%% models, once trained from the stored data, can be applied to future
%% data without much change. Thus, the knowledge discovery process can
%% be regarded as consisting of two sequential phases: a
%% \emph{training} phase, where models are learned from past data, and
%% a \emph{testing} phase, where models are applied on the future data.


%% This introduces negative impacts on the accuracy of a stream
%% classifier. Model training is often a time consuming, offline
%% process. To keep up with the high data throughput in testing, we
%% create impromptu models of low quality. In particular, it is hard to
%% find out what data an up-to-date model should rely on. A large set
%% of data may include changing concepts, and a small set will cause
%% model over-fitting.

%% \subsection{Our Motivation}

%% As data streams through the learning system, we train individual
%% models from small windows on the stream as if taking fast snapshots
%% of the evolving data. After a certain amount of time, we have
%% accumulated many snapshots. The question is, can we mine these
%% historical snapshots to derive a big picture about the underlying
%% data generating mechanism, and stop wasting time taking endless
%% snapshots?

%% This is desirable because big pictures are more revealing, and
%% likely to have more predictive power, than individual snapshots.
%% When data evolves, base models trained directly from small data
%% chunks will become unstable. Instead of chasing ephemeral patterns
%% in the data stream, we should learn a high-level, stable model from
%% historical base models.

%% In this paper, we show that this approach is not only desirable, but
%% also feasible. In fact, many systems work in a limited set of
%% states, and within each state, data's class distributions are
%% stable. For example, in network and system monitoring, most of the
%% time the system is in a stable state. When certain events occur
%% (e.g., heap exceeds physical memory), the system goes into another
%% state (e.g., one characterized by paging operations). The state may
%% switch back again (e.g., when memory usage recedes). As another
%% example, we predict traffic patterns in a metropolitan road network.
%% Under normal conditions, traffic behaves in one way, and under other
%% conditions, e.g., after an accident, traffic behaves in another way.
%% Note in both cases above, transitions among stable concepts may
%% occur at any time, instead of exhibiting simple patterns such as
%% periodicity.



%% \subsection{Paper organization}
%% The rest of the paper is organized as follows. Section~\ref{sec:pre}
%% gives some background information of the topic as well as an overview
%% of our approach. Section~\ref{sec:svem} introduces a clustering method
%% that finds continuous occurrences of hidden concepts in the data.
%% Section~\ref{sec:ivem} discusses how to group data of non-continuous
%% occurrences of same concepts together.  Section~\ref{sec:exp}
%% discusses empirical results of our approach.  Related work is
%% discussed in Section~\ref{sec:related}, and we conclude in
%% Section~\ref{sec:con}.


%% \subsection{Basics}
%% Consider a data stream $D=\{d_1,d_2, \cdots, d_n\}$ where $d_i =(\vec
%% x_i,c_i)$. Here, $\vec x_i$ is a data vector, and $c_i$ is the class
%% label of $\vec x_i$. We are interested in the mapping from data to its
%% class label. In concept-changing data streams, we think of labels as
%% being assigned by a hidden mechanism, or a hidden concept, which
%% changes over time.  In other words, the hidden concept at any point of
%% time is just the distribution $p(c|\vec x)$ at that time. The hidden
%% concept is not stationary, which means, for example, at time $t_1$ the
%% class distribution follows $p'(c|\vec x)$; later at time $t_2$, it
%% becomes $p''(c|\vec x)$.

%% Our goal is to find all the hidden concepts that appeared in data
%% stream $D$.  More specifically, we want to partition $D$ into a set of
%% clusters $P=\{D_1,D_2,\cdots, D_m|D_i\subset D\}$, such that in each
%% cluster $D_i$, class labels are assigned by the same hidden concept.

%% %% \begin{figure}[!htb]
%% %%     \centering
%% %%     \includegraphics[width=\columnwidth]{SVEM/Stream.eps}
%% %%     \caption{Concept-changing data stream}
%% %%     \label{fig:svem:stream}
%% %% \end{figure}

%% %% As an example, Fig.~\ref{fig:svem:stream} shows a data stream with
%% %% changing concepts.  Each circle denotes a labeled tuple $d_i=(\vec
%% %% x_i,c_i)$, where $\vec x_i$ is represented by the direction of the
%% %% arrow, and $c_i$ the color of the arrow. The pattern inside the
%% %% circles denotes the hidden concept or the class distribution that
%% %% generates the data, which is unknown to us. The data in this example
%% %% are generated by three different concepts, and the concept changes
%% %% over time.

%% \subsection{A novel clustering criterion}
%% The fundamental question is how to cluster the data so that each
%% cluster corresponds to a unique concept.  Traditional clustering
%% strategy is to maximize similarity between objects in the same
%% cluster, and minimize similarity between objects in different
%% clusters.  A widely used similarity measure is the Euclidean distance.
%% However, this is not necessarily a good clustering strategy for all
%% purpose.
%% %% , and sometimes it does not work -- e.g., in
%% %% high dimensional space, any two objects are considered far away to
%% %% each other in terms of the Euclidean distance.  We argue that a good
%% %% clustering criterion is the one that best serves the purpose of
%% %% clustering.
%% In our case, the distance between two records is not a good indicator
%% of whether the two records are from the same concept.

%% We need a new clustering strategy that best serves our purpose of
%% clustering, which is to learn distinct, precise concepts from the
%% clusters in our case.  Because concepts are not given explicitly, and
%% stochastic factors may be involved in the production of data drawn
%% from the concept distribution, there is no determinate method that
%% could verify whether a partition is correct or not.

\subsection{Clustering Strategy}

We define a clustering strategy that seeks to minimize the
classification error of the models learned from the clusters.  Let
$P=\{D_1,D_2,\cdots, D_m|D_i\subset D\}$ be a set of disjoint clusters,
and let $M_i$ be the model learned from $D_i$.  Our goal is to
minimize
\begin{equation}
Q(P)=\frac{1}{|D|}\sum_{D_i\in P}|D_i| \cdot Err(M_i)
\label{eq:error}
\end{equation}
where $Err(M_i)$ is the error of $M_i$, which we define below.

We argue that $Q(P)$ is a good indicator of the quality of the
partition $P$.  By definition, the best clustering is one in which all
data generated by the same concept are in the same cluster, while data
generated by different concepts are in different clusters.  Improper
partitions either produce clusters that contain multiple concepts, or
spread data of the same concept across several clusters.  If a cluster
contains conflicting concepts, the model learned from its data will
have low classification accuracy; and if data of the same concept are
split across several clusters, each cluster will have less data and
produce a model with a larger overfitting error.  Both cases increase
$Q(P)$.
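A minimal Python sketch of the objective in Eq~\ref{eq:error}: the
function below computes $Q(P)$ for a given partition, where
\texttt{err} is a placeholder for any estimator of $Err(M_i)$, such as
the cross-validation estimate.

```python
def partition_quality(partition, err):
    """Q(P): the size-weighted average error of the models learned
    from the clusters.  err(D_i) is assumed to return the estimated
    error Err(M_i) of the model learned from cluster D_i."""
    total = sum(len(d) for d in partition)
    return sum(len(d) * err(d) for d in partition) / total

# A fragmented concept illustrates the overfitting penalty: suppose,
# hypothetically, that clusters with fewer than two records yield
# error 1.0 and larger clusters yield error 0.0.
toy_err = lambda d: 0.0 if len(d) > 1 else 1.0
q = partition_quality([[1, 2, 3], [4]], toy_err)  # (3*0 + 1*1)/4 = 0.25
```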

%% An improper partition $P_1$ is then shown in the figure, which
%% will have large validation error: $D_1$ and $D_2$, the first and
%% second chunk of $P_1$, contain conflicting concepts; and the last
%% chunk $D_3$ has too few data to learn the true model. A better
%% partition $P_2$ is also shown, and it is closer to the true process
%% of concept change.


%% The criterion we use in partitioning and clustering is to minimize the
%% average classification error of the models learned from all the data
%% chunks/concepts.

%% , which indicates the predictive power, is to
%% be minimized.

We estimate $Err(M_i)$, the error of model $M_i$, by
cross-validation.  In $k$-fold cross-validation, the data is split
into $k$ subsets of roughly equal size.  A single subset is retained
as the validation data for testing the model, and the remaining $k-1$
subsets are used as training data.  The process is repeated $k$ times,
each time with a different validation subset, and the classification
errors of the $k$ tests are averaged.  This average is a good estimate
of the generalization error of a model, even for small $k$.
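A minimal sketch of this estimator in Python, with a toy
majority-class learner standing in for the (unspecified) base
classifier:

```python
import random

def majority_learner(train):
    """Toy learner: always predicts the most frequent class label in
    the training data (a stand-in for any real classifier)."""
    labels = [c for _, c in train]
    return max(set(labels), key=labels.count)

def cv_error(data, k=5, learn=majority_learner, seed=0):
    """k-fold cross-validation estimate of Err(M): hold each fold out
    once, train on the remaining k-1 folds, and average the k test
    errors.  Assumes len(data) >= k so that no fold is empty."""
    data = list(data)
    random.Random(seed).shuffle(data)
    folds = [data[i::k] for i in range(k)]
    errors = []
    for i in range(k):
        train = [d for j, fold in enumerate(folds) if j != i for d in fold]
        model = learn(train)
        wrong = sum(1 for _, c in folds[i] if model != c)
        errors.append(wrong / len(folds[i]))
    return sum(errors) / k
```

With the majority-class learner and a dataset that is 80\% one label,
every fold's model predicts the majority label, so the estimate comes
out at the 20\% minority rate.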
%% Thus, we estimate $Err(M_i)$ 
%% %% Let $\VE(M,S)$ denote the cross
%% %% validation error of model ${M}$ on data set $S$.  The quality of a
%% %% partition $P$ given by
%% in Eq~\ref{eq:error} by ${\cal E}(M_i,D_i)$.

%% \begin{equation}
%% Q(P)=\frac{1}{|D|}\sum_{D_i\in P}|D_i| \cdot \VE(M_i, D_i)
%% \label{eq:vem}
%% \end{equation}


%%It is presumed that $T$ and $S$ follow a
%% consistent splitting structure, so that each $x\in T\cap S$ is
%% included in the coincident sets in $T$ and $S$.  The remaining of this
%% paper discusses validation error minimization algorithms for mining
%% changing concepts in data streams.

%% \subsection{Discussion}
%% A potential use of our method is for classifying data stream of
%% evolving concepts. The quality of the models learned from the
%% clustered data is much higher than the quality of the model learned
%% from the
%% last window. If %We assume, for the purpose of simplifying our discussion,
%% all existing concepts have appeared in the training dataset, which means data
%% that arrives in the future always belongs to one of the known
%% concepts, then %% .  Under this assumption, after clustering discovers all
%% %% concepts,
%% the remaining classification problem is to simply find the
%% {\it current} concept, and use the model corresponding to the current
%% concept to classify the data. %% In other words, there is no need of
%% %% learning new models or concepts online, and all we need to know is
%% %% what is the current concept in play.
%% Furthermore, finding the current concept can be achieved by using a
%% Markov model built on the concepts, which %% we do not discuss in detail
%% %% as it
%% is outside the scope of this paper but not difficult to do.

%% In most cases, assuming all existing concepts have appeared in the
%% training data is realistic (especially when the given historical
%% dataset is large) because in many applications, the system that
%% generates the data operates in a limited number of states only, and
%% within each state the streaming data can be represented by a
%% stationary model. In cases where the assumption is not valid (e.g.,
%% when the historical dataset is small), we can easily switch to an
%% incremental learning approach, for example, we learn additional models
%% on-line by focusing on incorrectly classified instances.
%% Consider an East Asian student Juliet as an example, who's
%% confronting a paper deadline and uses an IME program for East Asian
%% text input. Feeling tired in writing her paper ``VEM Approach to
%% Mining Changing Concepts in Data Streams'', she visits a web forum
%% about movie and posts a comment for ``Titanic''. After that, having
%% noticed her friend Rose's logging on on the instant messenger, she
%% talks with Rose about their boyfriends and pets. When Rose leaves,
%% Juliet returns to her paper writing. In this example, Juliet works
%% in three states, each with different probability distribution of
%% words in text input. If a historical text input dataset is
%% available, it is beneficial for the input program to learn specific
%% models for distinct states, for improving its predictive power. To
%% accomplish this, however, mining the underlying states hidden in the
%% text input stream is an inevitable task.

%% Since the natural language processing required in the last example
%% is beyond the scope of this paper, we consider the simpler problem
%% of classification, where each streaming data is to be labeled with a
%% class tag. In every state, the class distribution over the data
%% domain is represented by a stationary model that is referred as a
%% concept. Given a historical data stream with correct labels, this
%% paper proposes algorithms for mining changing concepts in the data
%% stream, which falls into the category of supervised learning.

%%% Local Variables: 
%%% mode: latex
%%% TeX-master: "vem-icde09"
%%% End: 
