\section{Introduction}

Time series data is being generated at an unprecedented speed and
volume in a wide range of applications in almost every domain. For
example, daily fluctuations of the stock market, traces produced by
a computer cluster, medical and biological experimental
observations, readings obtained from sensor networks, position
updates of moving objects in location-based services, and so on, are
all represented as time series. Consequently, there is an enormous
interest in analyzing (including query processing and mining) time
series data, which has resulted in a large number of works on new
methodologies for indexing, classifying, clustering, and summarizing
time series data~\cite{dft94,datamining05,keogh06}.

In this paper, we propose to study time series from a new angle. Our
goal is to understand the complex system that produces the time
series. Thus, instead of finding isolated historic patterns, or
predicting the next time series value based on the pattern in the
most recent time window, we focus on explaining the relationships
between the patterns, in particular, how they fit into a big,
holistic picture that describes the underlying system.



\subsection{State of the art}
Much work has been done on time series analysis, including time
series
prediction~\cite{timeseries94,markovseries99,keogh06,xhhx09,tan2010adaptive},
time series segmentation and symbolization~\cite{keogh01,sax07},
time series representation~\cite{keogh08,wang2010algorithmic}, and
similar time series matching~\cite{dft94,perng00}. However, little
attempt has been made to use time series data to explain how the
underlying system works.

Well-known time series models, such as ARIMA and linear regression,
have been used for time series forecasting, which is concerned with
the problem of predicting the time series value $X_{t}$ given
observations $X_1, X_2, \cdots, X_{t-1}$.
% . But since a single time series
% value or a window of time series contains very little semantics, it is
% difficult to use these models to describe a global view of the
% system. For example, time series forecasting
One frequently used assumption in time series forecasting is that
the time series has a short memory, which means current values are
only related to values in a recent time window. In other words,
these approaches focus on local characteristics in the time series,
and do not attempt to explain observations using the internal
dynamics of the system.
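To make the short-memory assumption concrete, the following sketch fits an autoregressive model by least squares, predicting each value from the previous $p$ values (the toy data, the order $p$, and the function names are our own illustrative choices, not taken from the cited works):

```python
import numpy as np

def fit_ar(x, p):
    """Fit an AR(p) model x[t] = a1*x[t-1] + ... + ap*x[t-p] by least squares."""
    # Each row of the design matrix holds the p values preceding x[t].
    X = np.array([x[t - p:t][::-1] for t in range(p, len(x))])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_next(x, coef):
    """One-step-ahead forecast from the most recent p values."""
    return coef @ x[-len(coef):][::-1]

# Toy series with a known short memory: x[t] = 0.9 * x[t-1].
x = 0.9 ** np.arange(50)
coef = fit_ar(x, p=1)   # recovers the coefficient 0.9
```

Such a model captures local behavior well, but, as argued above, it says nothing about which latent state the system is in.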

Approaches such as the Discrete Fourier Transform~\cite{dft94},
Discrete Wavelet Transform~\cite{dwt99}, Piecewise Linear
Representation~\cite{plr98}, and Symbolic Aggregate
Approximation~\cite{sax07} try to model the whole time series.
Their goal is to represent the original time series more concisely,
so that we can summarize it or index it for fast pattern matching
and pattern discovery. These methods, however, cannot reveal the
internal dynamics of the system.
In DFT~\cite{dft94}, for example, a time series is described by a
set of coefficients in the frequency domain. However, the
coefficients are not interpretable, that is, knowing the
coefficients in the frequency domain does not necessarily enable us
to understand how the system works. Similarly, in PLR~\cite{plr98},
a time series is segmented into disjoint intervals, each of which is
represented by a line segment. However, the line segments are
isolated. For instance, we do not know whether the similarity
between the line segments at time $t_1$ and $t_2$ means the system
is in the same internal state at $t_1$ and
$t_2$. % , or whether line segment $L_1$ is more likely to be followed by
% line segment $L_2$.
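As a generic illustration of this representational view (a sketch of the general idea, not the exact procedure of the cited works): a handful of low-frequency DFT coefficients can reconstruct a smooth series almost perfectly, yet the coefficients themselves reveal nothing about latent states.

```python
import numpy as np

# Smooth toy series built from two low frequencies.
t = np.linspace(0, 1, 128, endpoint=False)
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)

coeffs = np.fft.rfft(x)
kept = coeffs.copy()
kept[8:] = 0                          # keep only the 8 lowest-frequency coefficients
approx = np.fft.irfft(kept, n=len(x))

err = np.max(np.abs(x - approx))      # tiny: 8 numbers summarize 128 values
```

The compression is excellent, but inspecting the retained coefficients tells us nothing about when the system changes state.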

Much work has been done on discovering frequent patterns (also
called motifs) in time
series~\cite{keoghicdm09,keoghkdd10}. However, frequent patterns are
not necessarily important patterns, in the sense that they do not
necessarily inform us how the system works. Many mining algorithms
discover a large number of patterns that are hard to interpret,
which adds to the complexity of understanding the system instead of
reducing it.


\subsection{Revealing system dynamics}

Our goal is to obtain a better understanding of the system that
generates the time series.  We assume the system that generates the
time series operates under a number of latent
states~\cite{wang03drift,ensembleoverfitting,xhhx09,tan2010adaptive,chen2008stop,chen2009concept}.
Many systems fall into this category.
Our approach is based on two important observations we made about
such systems. The first observation is that once a system is in a
latent state, it will stay in the state for a period of time until a
certain event occurs which leads the system to another latent state.
For example, when memory usage is below the physical memory
capacity, the system behaves in a certain way. The system exhibits
stable patterns until memory usage exceeds the physical memory
capacity, when the system starts paging.  The second observation is
that the system goes through the same states over and over again.
For example, once memory usage recedes, the system will return to
its old state (without paging).

To better understand the system means to reveal its latent states
and how the system transitions among them. In this paper, we regard
the time series as the output generated by a state transition
machine. The time series produced in each state exhibits a certain
pattern of fluctuation, and transitions between states reflect the
system dynamics, that is, once the system leaves a certain state,
which state it is most likely to enter next.


The next question is: what constitutes a state, or a unit of
observation? This is the most challenging question when using a
state transition machine (e.g., an HMM) on time series data, since
it is not trivial to align an observation sequence with a state
sequence.
A na\"ive and straightforward choice is to consider a single
observed value as an output token of a state.  However, in time
series data, a single value contains very little semantics. We
illustrate this by an example in Figure~\ref{fig:single}. Although
$A$ and $B$ have the same value, the underlying system is likely to
be in two different states when it outputs $A$ and $B$, because $A$
is in an upward trend and $B$ is in a downward trend. If our goal is
to predict the next time series value, then knowing that the system
is in the state corresponding to that value has little predictive
power. As a consequence, with single values as states, the obtained
machine will be full of uncertainty, which is not what we want.

%
\begin{figure}[!htp]
  \centering
\includegraphics[width=4cm]{figure/value.eps}
  \caption{Single value as state}
\label{fig:single}
\end{figure}



A better choice is to group neighboring values into a pattern, and
regard a pattern as a unit of observation (an observation token). A
good choice for a pattern is a line segment or a polynomial curve.
If line segments are used, the time series in
Figure~\ref{fig:single} will be represented by two line segments
with different shapes, indicating that the underlying system is in
two different states. There are several benefits of using line
segments: 1) lines have simple shapes and the trends they represent
are easy to understand; 2) in many applications, a time series can
be represented well by a sequence of line segments.



Thus, our task is to: i) define a set of observation tokens, each
being a representative line segment in the time series; ii) convert
the time
series to an observation sequence %consisting of observation tokens
such that each observation token aligns with a state in the unknown
state sequence; iii) learn an HMM from the observation
sequence. % Each latent state will have a probability
% distribution over the set of line segments, and the distribution
% decides what are the most likely output when the system is in that
% state.
However, the task of obtaining the observation sequence, or more
specifically, the task of segmenting a time series and then
clustering the segments to obtain representative line segments, is
not trivial. The objective of many time series segmentation and
clustering approaches is to minimize the difference between the time
series and the resulting line segment sequence.  However, this
objective is not necessarily aligned with that of finding the best
state transition machine.



To see this, consider a toy example in Figure~\ref{fig:segment}.
Suppose we have 4 line segments $A_1, \cdots, A_4$, and the question
is whether we should consider $A_1$ and $A_2$ as the same
observation (i.e., representing $A_1$ and $A_2$ using the same
observation token), or consider $A_3$ and $A_4$ as the same
observation. It is clear that $A_1$ and $A_2$ are more similar to
each other in shape. Thus, traditional clustering approaches, which
aim at minimizing approximation error, will group $A_1$ and $A_2$
together.

\begin{figure}[!htp]
  \centering
\includegraphics[width=3.5cm,height=4cm]{figure/cluster1.eps}
  \caption{Segmenting}
\label{fig:segment}
\end{figure}



% However, instead of minimizing approximation error, our goal is to
% ensure that the observed time series is generated by a state
% transition machine with high probability.
However, clustering line segments as if they were a set of
disconnected elements is problematic.
For instance, the line segments following $A_3$ and $A_4$ have exactly
the same shape, which suggests that $A_3$ and $A_4$ may be generated
by the system in the same state. On the other hand, the line segments
following $A_1$ and $A_2$ are totally different, which suggests that
the slight difference between $A_1$ and $A_2$ may indicate that they
actually belong to two different states.  To understand the internal
dynamics of the system requires us to pay attention to these temporal
constraints instead of just minimizing the approximation error.
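This temporal criterion can be illustrated with a hypothetical distance function (entirely our own construction, with assumed \texttt{(slope, length)} segment encodings and weight \texttt{alpha}; the paper's actual criterion is developed later): two segments are close only if both their shapes and the segments that follow them are close.

```python
import numpy as np

def segment_distance(s1, s2, next1, next2, alpha=0.5):
    """Hypothetical clustering distance: mixes the shape difference of
    two segments with the shape difference of their successors.
    Segments are (slope, length) pairs; alpha weights the two terms."""
    shape = np.hypot(s1[0] - s2[0], s1[1] - s2[1])
    succ = np.hypot(next1[0] - next2[0], next1[1] - next2[1])
    return alpha * shape + (1 - alpha) * succ

# A1, A2 have near-identical shapes but very different successors;
# A3, A4 differ more in shape but share an identical successor.
d12 = segment_distance((1.0, 5), (1.1, 5), next1=(-2.0, 3), next2=(3.0, 4))
d34 = segment_distance((0.5, 6), (0.9, 6), next1=(-1.0, 2), next2=(-1.0, 2))
```

Under this (assumed) distance, $A_3$ and $A_4$ end up closer than $A_1$ and $A_2$, matching the intuition above.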


\subsection{A Two-Phase, Iterative Approach}
To achieve the goal of revealing internal system dynamics, we
propose a pattern-based Hidden Markov model (pHMM) for time series
data. We discover patterns in the time series, and we ensure that
the discovered patterns are not disconnected or isolated, but
rather, they are organic components of a state transition machine,
which produces the original time series. The challenges, then, are
the following: i) to segment the time series and discover patterns,
we need to know the state transition machine first, because it tells
us the likelihood of one pattern being followed by another;
otherwise we run into the problems demonstrated by the example
in Figure~\ref{fig:segment}; and ii) to build the state transition
machine we must know the patterns first, as patterns are the sole
components of the state transition machine.


\begin{figure}[!htp]
  \centering
\includegraphics[width=7cm,height=6.5cm]{figure/arch1.eps}
  \caption{Overview of our approach}
\label{fig:overview}
\end{figure}



To solve this dilemma, we propose a two-phase approach. In phase
one, we first segment the time series using traditional optimization
techniques.  Because we do not have any knowledge about the
underlying state transition machine, the best thing we can do is to
use a standard approach~\cite{plr98} to convert the time series into
a piecewise linear representation. In other words, we approximate
the time series using line segments that minimize the approximation
error. Then, we cluster the resulting segments using a greedy
clustering method (considering both the similarity and the temporal
constraints). Finally, from the segmented time series, we learn a
hidden Markov model.
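The initial segmentation step can be sketched with a common bottom-up merging scheme, in the spirit of~\cite{plr98} (the error threshold, initial segment length, and function names are our own illustrative assumptions):

```python
import numpy as np

def fit_error(y):
    """Squared error of the best least-squares line through points y."""
    t = np.arange(len(y))
    if len(y) < 3:
        return 0.0  # two points are always fit exactly
    a, b = np.polyfit(t, y, 1)
    return float(np.sum((a * t + b - y) ** 2))

def bottom_up_segments(y, max_err):
    """Start from tiny segments; repeatedly merge the adjacent pair
    whose merged fit error is smallest, until every remaining merge
    would exceed max_err. Returns (start, end) index pairs."""
    segs = [(i, min(i + 2, len(y))) for i in range(0, len(y), 2)]
    while len(segs) > 1:
        costs = [fit_error(y[s[0]:e[1]]) for s, e in zip(segs, segs[1:])]
        k = int(np.argmin(costs))
        if costs[k] > max_err:
            break
        segs[k] = (segs[k][0], segs[k + 1][1])
        del segs[k + 1]
    return segs

# An up ramp followed by a down ramp: two natural line segments.
y = np.concatenate([np.arange(10.0), np.arange(10.0)[::-1]])
segs = bottom_up_segments(y, max_err=0.1)
```

Note that this step minimizes only approximation error; the temporal constraints discussed above enter in the subsequent clustering and refinement.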

% As analyzed above, absence of information of
% hidden states causes it difficult to obtain best semantical patterns.
% To solve this problem,
In phase two, we use an iterative process to refine the model.
Specifically, in each round, we first segment and cluster the time
series based on the learned pHMM.  The pHMM provides important
guidance for segmenting and clustering, resulting in higher quality
patterns. Then we update the pHMM based on the learned patterns. We
prove that the iterative process always improves the quality of the
model. The whole framework is illustrated in
Figure~\ref{fig:overview}.
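The flavor of this alternating refinement can be sketched as a hard-EM / iterative-conditional-modes loop on a toy one-value-per-segment series (a drastically simplified stand-in of our own; the actual pHMM refinement operates on line segments and is described later). The key property, mirrored here, is that the model score never decreases across iterations.

```python
import numpy as np

def score(x, z, mu, logT):
    """Joint score: emission fit plus transition log-probabilities."""
    return -np.sum((x - mu[z]) ** 2) + np.sum(logT[z[:-1], z[1:]])

def refine(x, z, K, iters=5):
    """Alternate (1) re-estimating state means and transitions from the
    current labeling and (2) greedy per-position relabeling (ICM)."""
    z = z.copy()
    history = []
    for _ in range(iters):
        # M-step: maximum-likelihood means and transition matrix.
        mu = np.array([x[z == k].mean() if np.any(z == k) else 0.0
                       for k in range(K)])
        counts = np.zeros((K, K))
        for a, b in zip(z[:-1], z[1:]):
            counts[a, b] += 1
        with np.errstate(divide='ignore', invalid='ignore'):
            logT = np.log(counts / counts.sum(axis=1, keepdims=True))
        logT[~np.isfinite(logT)] = -1e9  # unseen transitions: effectively forbidden
        history.append(score(x, z, mu, logT))
        # E-step: relabel each position greedily, keeping the rest fixed.
        for t in range(len(x)):
            cand_scores = []
            for k in range(K):
                cand = z.copy()
                cand[t] = k
                cand_scores.append(score(x, cand, mu, logT))
            z[t] = int(np.argmax(cand_scores))
    return z, history

# Two latent levels (0 and 5); the initial labeling has one mistake.
x = np.array([0., 0, 0, 5, 5, 5, 0, 0, 0, 5, 5, 5])
z0 = np.array([0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0])
z, history = refine(x, z0, K=2)
```

Each round either improves the labeling or leaves it fixed, so the recorded scores form a non-decreasing sequence, which is the toy analogue of the convergence guarantee stated above.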



















\subsection{Applications and Contributions}
Our goal is to reveal the underlying system through the time series
data it produces. With knowledge of the underlying system, we can
perform a large variety of challenging tasks. A representative list
includes the following:
\begin{itemize}
\item Trend prediction. With knowledge of the state transition
  machine, we can derive the temporal relations between patterns. This
  enables us to answer queries such as: what will the trend of the
  time series be in 10 minutes, and when will the time series end the
  current downward trend and enter an upward trend?

\item Accurate multi-step value prediction. Predicting time series
  values long into the future is a challenging and important task.
  Specifically, given time series before time point $t$, we want to
  predict the values at time $t+\delta$, where $\delta$ is much bigger
  than $1$.

\item Pattern based correlation detection. In traditional approaches,
  in order to compute correlation between two time series, we map the
  time series into a vector space (e.g., using DFT or DWT), and use a
  distance measure (e.g., Euclidean distance or Dynamic Time
  Warping~\cite{keogh08}) to calculate their similarity.  Now we
  can compute correlation based on patterns. Furthermore, we can
  correlate the time series by rules such as: whenever pattern $P_1$
  occurs in time series $S_1$, $P_2$ will occur in time series
  $S_2$. % Note that $P_1$ and $P_2$ can be any pattern.  In other
%   words, we detect the correlations based on patterns.
\end{itemize}
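For trend queries in particular, once the states and transitions are learned, multi-step questions reduce to powers of the transition matrix. A sketch with a hypothetical three-state (up/flat/down) matrix of our own invention:

```python
import numpy as np

# Hypothetical transition matrix over trend states: 0=up, 1=flat, 2=down.
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.2, 0.7]])

def state_distribution(P, start, steps):
    """Distribution over latent states `steps` transitions after `start`."""
    d = np.zeros(len(P))
    d[start] = 1.0
    return d @ np.linalg.matrix_power(P, steps)

dist = state_distribution(P, start=2, steps=10)  # currently in the 'down' state
likely = int(np.argmax(dist))                    # most likely trend 10 steps later
```

With these assumed numbers, the distribution ten steps out is close to the chain's stationary distribution, with a slight residual bias toward the current state.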
 In summary, the contributions we make in this paper are the following:
 \begin{itemize}
 \item We introduce a pattern-based hidden Markov model (pHMM) for time
   series data. It focuses on revealing the internal dynamics of the
   system that produces the time series. % , whose hidden states correspond to semantic
 %   patterns (line segments), which can describe the meaningful patterns
 %   in time series, and the temporal relations between them.
 \item We propose an iterative approach to refine the
   model. Furthermore, we propose several pruning strategies to speed
   up the refinement process.
 \item We propose algorithms that use pHMM to perform multi-step value
   prediction, trend prediction and pattern based correlation detection.
 \item We conduct extensive experiments to verify the effectiveness and
   efficiency of the proposed approach.
\end{itemize}

\subsection{Paper Organization}


The rest of the paper is organized as follows.
Section~\ref{sec:overview} discusses the problem and the challenges.
Section~\ref{sec:model} introduces the algorithm in the initial
phase. Section~\ref{sec:refine} describes the method to refine the
pHMM. Section~\ref{sec:appl} shows how to utilize the learned model.
Section~\ref{sec:expr} shows experimental results. In
Section~\ref{sec:related}, we discuss related work, and we conclude
in Section~\ref{sec:conclusion}.







%%% Local Variables:
%%% mode: latex
%%% TeX-master: "kdd09"
%%% End:
