\section{Introduction}
%% In all sorts of life, from practicing tennis to mathematically
%% modeling the nature, lessons are learned from past experience. The
%% more experience we accumulate, the better understanding we gain of the
%% activity.


% For example, in speech recognition, as variability of
% the acoustic signal changes over time, we need to partition a
% continuous audio stream into homogeneous segments that can be handled
% by speech recognition algorithms.  In system monitoring, because
% system workload changes over time, the criteria of anomaly predicting
% must change accordingly. % In Web search, to better interpret user
% % intent in search queries, we turn our attention to search logs which
% % are a sequence of queries that encode different user intent at
% % different time.
% \subsection{Goal and Challenge}

Many applications deal with data of changing characteristics.
However, data characteristics do not change arbitrarily. Often, the
data we observe is controlled by hidden models of an underlying system.
When models change, the data changes, but while a fixed model is in
control, the data shows stable characteristics.  For example, in
speech recognition, two high-level models of acoustic segments may be
``containing speech'' and ``not containing speech.'' In system
monitoring, a system stays in a stable state until a certain event
(e.g., physical memory is exhausted) leads it into another state
(e.g., one characterized by paging operations).  Later, the system may
switch back to the previous state (e.g., when memory usage recedes).
In other words, the time-evolving data we observe is a history of
different models alternating with each other in the underlying
system. The changing data exhibits two key characteristics:

\begin{enumerate}
\item Models change unpredictably, and models repeat in history
  (previous models may come back to re-claim control of the data).
\item A segment of data generated by a single occurrence of a model is
  often too imperfect or incomplete to learn the model from.
\end{enumerate}

Discovering {\em stable models} in time-changing data enables us to
understand the dynamics of the underlying system, and to recognize
future instances of a model as soon as they occur.

% and
% improve the efficiency and/or effectiveness of the application.
%Different applications bestow models different semantics.  

\subsection{State of the art}
Existing research has addressed a wide range of issues in mining
evolving data in both unsupervised and supervised learning settings.
However, most of the methods focus on the most recent data only.  In
other words, historical data is discarded as soon as it is considered
no longer consistent with the current data
distribution~\cite{domingos00mining,hulten01time}. This is like taking
snapshots of the evolving data continuously, but focusing only on the
latest snapshots.

%% This approach makes sense to many applications. In unsupervised
%% learning, for example, clusters found on the entire dataset may not be
%% meaningful as data ``moves'' over time, and more importantly, users
%% are often only interested in the current patterns. Likewise, in
%% supervised learning, classifiers learned from data with mixed class
%% distributions will have low predictive power on data whose
%% distribution is governed by the most current concept.

Models produced by these approaches are often of low quality. Since a
model may last only for a short time before it changes abruptly into
another model, we can only trust the very recent snapshots as being
``current.''  However, from these limited snapshots, it is unlikely
that we can obtain a big picture of the data and the system. Usually,
the models discovered are incomplete and misleading, and generalize
poorly to new data.


The idea presented in this paper is the following. Instead of looking
at only the {\it current data}, can we study {\it data of the current
  model}?  The difference is that the current data may consist of only
a few recent snapshots, while data of the current model may consist of
many historical snapshots. Thus, the latter can provide a much better
understanding of the complex underlying data generating mechanisms.

However, it is not easy to find all the {\it data of the current
  model}.  Some approaches have tried to utilize historical data that
is consistent with the current model. For example, the ensemble
approach~\cite{streamensemble} divides historical data into chunks of
fixed size, learns an individual model from each chunk, and forms a
classifier ensemble by selecting the models that have low predictive
error on the current training data. But the approach has two major
weaknesses. One is that each individual model is trained on data of
questionable quality, as the data is partitioned in a way that is
irrelevant to how multiple models alternate in the data stream. The
other is that the current training data, which consists of no more
than several snapshots, is very likely to be skewed or noisy, and
hence is often a bad criterion for telling which classifier is
relevant to the current model.
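The chunk-based selection step described above can be sketched as
follows. The fixed-size chunking, the toy mean model, and all function
names are illustrative assumptions of ours, not the actual method
of~\cite{streamensemble}; the sketch only makes the selection criterion
concrete.

```python
# Sketch of a chunk-based ensemble: partition the history into
# fixed-size chunks, fit one model per historical chunk, and keep the
# models with the lowest error on the most recent chunk.  The "model"
# here is a toy (the chunk mean); real ensembles use classifiers.

def fit(chunk):
    """Toy model: remember the chunk mean."""
    return sum(chunk) / len(chunk)

def error(model, chunk):
    """Mean absolute deviation of the chunk from the model's mean."""
    return sum(abs(x - model) for x in chunk) / len(chunk)

def select_ensemble(history, chunk_size, top_k):
    chunks = [history[i:i + chunk_size]
              for i in range(0, len(history), chunk_size)]
    current = chunks[-1]                     # most recent training data
    models = [fit(c) for c in chunks[:-1]]   # one model per historical chunk
    # Keep the k historical models that best fit the current chunk --
    # the very step whose weaknesses are discussed in the text: both
    # the arbitrary chunking and the small, possibly noisy "current"
    # chunk can mislead the selection.
    models.sort(key=lambda m: error(m, current))
    return models[:top_k]
```

Note that the chunk boundaries above are fixed in advance, regardless
of where the models actually alternate in the stream.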

\subsection{Challenges}

There are two major challenges in mining stable models from evolving
data.

\subsubsection*{Data segmentation}

Figure~\ref{fig:alternate} shows an example where three models
alternate in the stream. Our foremost task is to segment the stream
such that each segment is ``internally homogeneous,'' that is, the
data of each segment belongs to a single model or is generated by a
single mechanism.

\begin{figure}[!h]
  \centering \includegraphics[width=\columnwidth]{Introduction/Stream1.eps}
    \caption{Data characteristics change over time}
    \label{fig:alternate}
\end{figure}

It is easy to see that the task of segmentation is similar to that of
clustering, with the constraint that all points in a cluster must be
contiguous on the time line. The question is, what criterion should we
use for clustering? A distance-based clustering strategy that uses
similarity measures such as the Euclidean distance does not make much
sense here: the fact that two data points are close in the feature
space does not mean they are more likely to have been generated by the
same model.

%% The process of clustering groups
%% data points based on their similarity.  In clustering, similarity
%% between two data points is given by some distance function, and the
%% most widely used distance function is the Manhattan distance (1-norm)
%% and the Euclidean distance (2-norm).  The process of clustering, as in
%% hierarchical clustering, K-means, and many others, is the process of
%% grouping and partitioning data points using the distance function.

%% However, the distance between two data points is by no means a good
%% criterion for model discovery. Our goal of segmenting the data is to
%% ensure that each segment corresponds to a good model.

Since our fundamental goal is to discover models in the data, a more
meaningful criterion is one that directly seeks to maximize the
overall quality of the discovered models.  Naturally, different
applications have different ways of judging the quality of a model.
In speech recognition, segmentation aims at minimizing the variance of
the data, which means variance is used as the quality function. In
system monitoring, where a classifier is trained from each segment,
the criterion is the accuracy of the classifier in predicting imminent
system anomalies.

Our foremost challenge is to devise an algorithmic framework that is
independent of the specifics of the models we want to
discover. Existing segmentation algorithms are designed and optimized
for specific quality functions (e.g., distance and variance), and they
often assume that the quality functions are simple and efficient to
evaluate.  In our endeavor to achieve generality, we want to ensure
that the algorithm is able to discover models whose quality is costly
to evaluate.  For example, in system monitoring, we need to find
classifier models, and the cost of learning a classifier and
evaluating its quality is super-linear in the dataset size.
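For concreteness, the segmentation criterion can be sketched as a
dynamic program that maximizes the total quality of the segments, with
the quality function left pluggable. The function names and the use of
negative variance as the quality function are our own illustrative
assumptions; this brute-force sketch evaluates the quality function on
a cubic number of candidate segments, which is exactly the cost the
framework developed in this paper seeks to avoid.

```python
# Optimal segmentation of a 1-D stream into k contiguous segments by
# dynamic programming, maximizing the sum of a pluggable per-segment
# quality function.  Brute-force illustration only.

def negative_variance(points):
    """Quality of a segment = negative variance (higher is better)."""
    n = len(points)
    mean = sum(points) / n
    return -sum((x - mean) ** 2 for x in points) / n

def segment(stream, k, quality=negative_variance):
    """Split `stream` into `k` contiguous segments of maximal total quality."""
    n = len(stream)
    # best[j][i]: best total quality of splitting stream[:i] into j segments
    best = [[float('-inf')] * (n + 1) for _ in range(k + 1)]
    back = [[0] * (n + 1) for _ in range(k + 1)]
    best[0][0] = 0.0
    for j in range(1, k + 1):
        for i in range(j, n + 1):
            for s in range(j - 1, i):          # last segment is stream[s:i]
                q = best[j - 1][s] + quality(stream[s:i])
                if q > best[j][i]:
                    best[j][i], back[j][i] = q, s
    # Recover the segment boundaries by backtracking.
    cuts, i = [], n
    for j in range(k, 0, -1):
        cuts.append(back[j][i])
        i = back[j][i]
    cuts.reverse()
    return [stream[a:b] for a, b in zip(cuts, cuts[1:] + [n])]
```

Swapping in a different quality function (e.g., classifier accuracy on
the segment) changes the semantics of the segmentation without
changing the search, which is precisely why a costly quality function
makes this brute-force search impractical.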


\subsubsection*{Model reconstruction from imperfect segments}

Our goal is to discover high quality models in the data, and data
segmentation only provides preliminary results.  The data in each
segment represents nothing more than a single occurrence of the
underlying model, and is usually an incomplete, or even biased,
representation of the model.

\begin{figure}[!h]
    \centering
    \includegraphics[height=3cm]{Introduction/scattering.eps}
    \caption{Occurrences of the same model at different times bear little similarity to one another.}
    \label{fig:scattered}
\end{figure}

\begin{figure}[!h]
    \centering
    \includegraphics[height=1.8cm]{Introduction/union.eps}
    \caption{A better understanding of the underlying model is only achievable if we study many of its occurrences together. }
    \label{fig:union}
\end{figure}

Figure~\ref{fig:scattered} shows an example, where the goal is to
discover stable classifier models in the stream, and the data has been
partitioned into homogeneous segments.  In the figure, a {\it model}
is represented by a class boundary.  For example, in the first segment
(arriving around time 100), the {\it model} dictates that everything
above the boundary is negative (represented by white balls), while
everything below the boundary is positive (represented by black
balls).

However, none of these representations describes the model
accurately. Each of the three segments in Figure~\ref{fig:scattered}
is in fact a separate occurrence of the same model, and the actual
model is only revealed when the three segments are merged into one
dataset, as shown in Figure~\ref{fig:union}.

There are two possible causes for this phenomenon. One is that each
segment contains limited data: because models change frequently,
segments span very short periods of time.  The other is that in many
real applications, data arriving in a burst may be highly correlated
(e.g., packets sent from a single IP) and amount to nothing more than
a biased sample of all possible data.  Clearly, this leads to a
distorted representation of the model.

Our second challenge is thus to reconstruct accurate models from the
many imperfect instances identified through data segmentation.  As
shown in Figure~\ref{fig:scattered}, the three segments are quite
different from each other, and yet they are also ``consistent'' in
some way, as revealed by Figure~\ref{fig:union}.  We must recognize
the consistency hidden behind the differences in order to recover the
whole picture of the model.
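As a toy illustration of this challenge, the following sketch groups
segments whose models are mutually consistent (each segment's model
fits the other's data) and reconstructs one model per group from the
pooled data. The 1-D threshold classifier, the mutual-accuracy
heuristic, and the 0.9 cutoff are all illustrative assumptions of
ours, not the method developed in this paper.

```python
# Group non-contiguous segments that appear to share one underlying
# model, then retrain on their union.  Each segment is a list of
# (x, label) pairs; the toy classifier is a single threshold on x.

def train_threshold(data):
    """Fit the threshold t minimizing errors of 'label = 1 iff x >= t'."""
    xs = sorted({x for x, _ in data})
    best_t, best_err = None, float('inf')
    for t in xs + [xs[-1] + 1.0]:            # last candidate = predict all 0
        err = sum((x >= t) != y for x, y in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def accuracy(t, data):
    return sum((x >= t) == y for x, y in data) / len(data)

def group_segments(segments, min_acc=0.9):
    """Greedily merge segments whose models agree on each other's data."""
    groups = []
    for seg in segments:
        t = train_threshold(seg)
        for g in groups:
            tg = train_threshold(g['data'])
            # Mutual consistency: each model must fit the other's data.
            if accuracy(tg, seg) >= min_acc and accuracy(t, g['data']) >= min_acc:
                g['data'].extend(seg)
                break
        else:
            groups.append({'data': list(seg)})
    # Reconstruct one model per group from the pooled data, so each
    # model is supported by every occurrence found in the history.
    return [train_threshold(g['data']) for g in groups]
```

In this toy setting, two occurrences of the same model merge even if
their data points differ, because consistency is judged by the models,
not by distances between the points themselves.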


% \subsection{Our contributions}

% Knowledge is acquired from experience. For example, a human perfects
% his tennis skills by practicing tennis everyday. Machine learning is
% more or less similar: a learner tries to model a phenomenon by
% studying a large data log.  The challenge is that, while humans
% naturally group pieces of relevant (tennis-playing) experiences
% together to build up his (tennis) skills, learning algorithms do not
% know which scattering pieces of the data are relevant and may be used
% to reinforce the knowledge about the phenomenon of interest.  This
% paper makes the following contributions in solving problems in this
% category.

% \begin{itemize}
% \item We identify the problem of recovering models from time-changing
%   data streams. Incomplete instances of hidden models are scattered
%   throughout the data, and these models can only be recovered
%   accurately if we find and piece all these instances together.
% \item We formulate the model discovery task as a clustering problem,
%   and we introduce a general-purpose quality function for clustering.
%   This greatly generatlizes previous clustering approaches which are
%   mostly relies on distance functions.
% \item We provide several solutions to the above optimization problem.
%   Our ultimate solution recovers models in near linear time, and the
%   quality of the recovered models are as good as those found by brute
%   force dynamic programming approaches.
% \item We show that classifying time-changing data stream is a special
%   case in our framework.  Experimental results show that our approach
%   is the best known approach of classifying data of evolving concepts.
%   It is both most efficient (on-line training is reduced to the
%   minimum) and most accurate (concepts reinforced by every historical
%   instances in the data).
% \end{itemize}

\subsection{Paper organization}
The rest of the paper is organized as follows. Section~\ref{sec:pre}
gives background on the topic as well as an overview of our
approach. Section~\ref{sec:svem} introduces a method that finds
contiguous occurrences of hidden models in the data.
Section~\ref{sec:ivem} discusses how to group together data from
non-contiguous occurrences of the same model. Section~\ref{sec:exp}
presents empirical results of our approach. Related work is discussed
in Section~\ref{sec:related}, and we conclude in
Section~\ref{sec:con}.


%% It is mostly in each segment may not be
%% comprehensive enough in describing the underlying concept. This may be
%% due to the fact that each segment only have limited data, especially
%% when concepts change frequently, not enough data has accumulated
%% before a concept switch.

%% because the quality of the model depends very much on the quality of
%% the data it is created from: if the data is noisy, skewed, or having
%% conflicting concepts (i.e., the data is a mixture of instances
%% generated by very different mechanisms), the model will not generalize
%% well on new data.


%% We want to ensure that when data evolves, a model evolves along with
%% the data so that it always accurately captures the current data
%% characteristics.



%%   But, what data can we rely on to create the model?
%% Most would focus on the latest data to avoid conflicting concepts in
%% the data.  However, because the latest data is not substantial in
%% size, it is almost always noisy and skewed.

%% To tackle this problem, we assume data characteristics is governed by
%% a hidden data generation mechanism, which changes from one state .


%% Accurately capturing the current data characteristics is important in
%% decision making.


%% Unfortunately, in many applications have to deal with evolving data.
%% Because the data does not have stable characteristics, it creates a
%% lot of problem for data analysis.

%% data clustering is known as an unsupervised learning technique in
%% exploratory data analysis.  We apply data clustering in a supervised
%% learning setting.
%% Our goal is to partition historical training data into clusters such
%% that each cluster corresponds to a unique concept (i.e., data in each
%% cluster is generated by a single mechanism). By doing this, we will be
%% able to train precise models from data that is free of conflicting
%% concepts, noise, and skewness.  However, unlike traditional
%% clustering, where partition is performed based on data similarity, we
%% cluster by concept similarity.  In order to do this, we need a novel
%% similarity metric that induces clusters of unique and precise
%% ``concepts.''

%% and group is In the latter task, d It is
%% used to reveal the underlying data distribution that is difficult to
%% characterize otherwise.  In this paper, we study an important
%% application scenario where each data record is tagged with a class
%% label, and the distribution of the data and their class labels changes
%% over time.


%% \begin{figure*}[!htb]
%%     \centering
%%     \includegraphics[height=3.4cm]{Introduction/allinone.eps}
%%     \caption{Concept-changing data stream}
%%     \label{fig:svem:stream}
%% \end{figure*}


%% \subsection{Motivating applications}
%% Before we formally describe the problem and its solution, consider the
%% following applications.


%% \begin{itemize}
%% \item{\it User intent in Web search.}  In Web search, a user refines
%%   his search several times before he finds the information he needs.
%%   For example, when the user searches for {\it matrix}, it is not
%%   clear whether he is interested in the mathematical concept of
%%   matrix, or {\it The Matrix} movie, or a venture capital company
%%   called {\it Matrix Partners}. The user may refine his search by
%%   adding new keywords such as {\it math}, {\it Morepheus}, or {\it
%%     venture capital} to make his intention clearer. In many searches,
%%   however, refinement is not easy, as the user may not be able to find
%%   keywords to characterize his search intention.

%%   But it is very likely that, in the search log, similar sequences of
%%   search refinements appear many times, as many users had searched the
%%   Web with the same intention.  However, precisely interpreting the
%%   intention of the user is difficult, even with the help of the
%%   historical search log: there might be numerous instances of such
%%   intention scattering in small pieces in the log, but none of them is
%%   comprehensive enough to describe the intention precisely.


%% \vspace{.1cm}
%% \item {\it Anomaly prevention in complex systems.}
%%   Assume a system is monitored by a vector of metrics $\vec x$ (e.g.,
%%   free cpu cycles, available memory size, virtual page in/out rate,
%%   virtual memory usage on the heap/stack, etc).  We want to tell
%%   whether the system, characterized by its current measures $\vec x$,
%%   is moving toward a state $c$ of normal or abnormal in the near
%%   future.  Because of changing workload and other hidden factors, the
%%   distribution $P(c|\vec x)$ changes over time.  The most recent
%%   training dataset, which hopefully has the same distribution
%%   $P(c|\vec x)$, might be noisy, skewed, and limited in size, and is
%%   hence insufficient to build an accurate classifier. On the other
%%   hand, in many systems, the states are limited, and within each
%%   state, $P(c|\vec x)$ is stable.  This motivates us to stitch
%%   together pieces scattering in the history to form a good
%%   understanding of each state.


%% %% When certain events occur (e.g., heap exceeds physical
%% %%   memory), the system goes from the current state to another state
%% %%   (e.g., one characterized by paging operations). The state may switch
%% %%   back again (e.g., when memory
%% %%   usage recedes).

%%  %% As another example, we predict traffic patterns in a
%% %%   metropolitan road network.  Under normal conditions, traffic behaves
%% %%   in one way, and under other conditions, e.g., after an accident,
%% %%   traffic behaves in another way.

%% \end{itemize}

%% \vspace{.2cm}

%% The two applications above demonstrate some common characteristics
%% that we are interest in:


%% \vspace{.2cm}
%% \noindent{\em First, each ${\vec x}$ is labeled with a class, or
%%   concept $c$.} In the first application, keywords are associated with
%% unknown concepts, which are the intent of the user who submits the
%% query. The searches in the log are annotated with concepts (by
%% analyzing the click-through information). In the second application,
%% each measurement vector is associated with a normal/abnormal flag.

%% \vspace{.2cm}
%% \noindent{\em Second, concepts change and recur over time.}
%% Knowing the concept, i.e., the distribution $P(c|\vec x)$, we can make
%% prediction for data $\vec x$.  The concepts are not stationary, which
%% makes prediction difficult. However, since concepts are recurring, we
%% can learn from the history. This is clear in both of the two applications above. %% , the concept of
%% %% interest depends on some hidden context not given explicitly in the
%% %%   known data. %%  In other words, the concepts which we try to learn from
%% %%   those data drift with time [16][24][25]. For example, the buying
%% %%   preferences of customers may change with time, depending on the
%% %%   current day of the week, availability of alternatives, discounting
%% %%   rate, etc.

%% \vspace{.2cm} \noindent{\em Third, change may occur at any time.} In
%% both cases above, context transitions occur at any time, instead of
%% exhibiting simple patterns such as periodicity. It also means
%% episodes of stable contexts have variable lengths.


%\section{Background\label{sec:backgroun}}



%% As data streams through the learning system, After a certain amount of
%% time, we have accumulated many snapshots. The question is, can we mine
%% these historical snapshots to derive a big picture about the
%% underlying data generating mechanism, and stop wasting time taking
%% endless snapshots?

%% This is desirable because big pictures are more revealing, and
%% likely to have more predictive power, than individual snapshots.
%% When data evolves, base models trained directly from small data
%% chunks will become unstable. Instead of chasing ephemeral patterns
%% in the data stream, we should learn a high-level, stable model from
%% historical base models.

%% In this paper, we show that this approach is not only desirable, but
%% also feasible.



%% \subsection{Our Approach}

%% The first step toward building a high-order model is to capture all
%% stable concepts in the evolving data. However, as in the examples we
%% mentioned above, concept changes may occur at any time, instead of
%% exhibiting simple patterns such as periodicity~\cite{highperiod}.  The
%% second component of the high-order model is the concept change
%% patterns, which are also learned from the historical data, that is, we
%% analyze how individual concepts interact with each other by collecting
%% the statistics of concept changes. At runtime, with cues from an
%% online training stream, the high-order model identifies the current
%% concept in the stream and uses offline trained classifiers
%% corresponding to the concept for prediction.

%% The primary advantage of our approach is its very high accuracy.
%% Experiments show that in benchmark datasets, classification error of
%% the high-order model is only about one tenth of the current best
%% approaches. Furthermore, unlike state-of-the-art approaches, the
%% high-order model has no user parameters. It does not require users
%% to tune any parameters on the basis of the characteristics of
%% different data streams in order to attain satisfying classification
%% accuracy.

%% The primary task of data mining is to develop models based on
%% existing data. In classification, usually the training data is
%% fixed, for example, it is stored in a data warehouse, and the
%% models, once trained from the stored data, can be applied to future
%% data without much change. Thus, the knowledge discovery process can
%% be regarded as consisting of two sequential phases: a
%% \emph{training} phase, where models are learned from past data, and
%% a \emph{testing} phase, where models are applied on the future data.


%% This introduces negative impacts on the accuracy of a stream
%% classifier. Model training is often a time consuming, offline
%% process. To keep up with the high data throughput in testing, we
%% create impromptu models of low quality. In particular, it is hard to
%% find out what data an up-to-date model should rely on. A large set
%% of data may include changing concepts, and a small set will cause
%% model over-fitting.

%% \subsection{Our Motivation}


%% In this paper, we show that this approach is not only desirable, but
%% also feasible. In fact, many systems work in a limited set of
%% states, and within each state, data's class distributions are
%% stable. For example, in network and system monitoring, most of the
%% time the system is in a stable state. When certain events occur
%% (e.g., heap exceeds physical memory), the system goes into another
%% state (e.g., one characterized by paging operations). The state may
%% switch back again (e.g., when memory usage recedes). As another
%% example, we predict traffic patterns in a metropolitan road network.
%% Under normal conditions, traffic behaves in one way, and under other
%% conditions, e.g., after an accident, traffic behaves in another way.
%% Note in both cases above, transitions among stable concepts may
%% occur at any time, instead of exhibiting simple patterns such as
%% periodicity.


%%% Local Variables:
%%% mode: latex
%%% TeX-master: "../vem"
%%% End:
