\section{Introduction}
% Mining time series data has important applications in a wide range of
% fields spanning physics, engineering, biology, social science, and
% business~\cite{chatfield04series}. In this paper, we focus on the
% problem of time series forecasting. More specifically, given
% observation of a time series denoted by $y_1, y_2, \cdots, y_t$, we
% want to forecast $y_{t+n}$ for $n=1,2,\cdots$ with a given level of
% confidence. The technique we develop can be used to answer a variety
% of forecasting related questions such as

% \begin{itemize}
% \item How long does it take for the temperature to reach $35^oC$ with
%   probability higher than 85\%?
% \item What is the probability that the system is going to crash within
%   the next 10 minute?
% \end{itemize}

% \paragraph*{State of the Art}
In this paper, we introduce a novel Hidden Markov approach to time
series forecasting. The problem of time series forecasting can be
defined as follows: given observations of a time series denoted by
$X_1, X_2, \cdots, X_{t-1}$, we want to forecast $X_{t+n}$ for
$n=0,1,\cdots$ with a given level of confidence.

Time series forecasting has been a topic of extensive research.
State-of-the-art approaches use regression-based methods, which
correlate the current value of a time series with the values in a
preceding
window. % Many extensions of autoregression and moving average
% models have been proposed, including linear models such as ARMA
% (autoregressive moving average), ARIMA (autoregressive integrated
% moving average), and non-linear models such as ARCH (autoregressive
% conditional heteroskedasticity).
An important assumption made by these models is that the
correlation between the values at time $t$ and time $t-j$, or
$Corr(X_t,X_{t-j})$, depends only on the time gap $j$, and not on
the time location $t$. Furthermore, it is also assumed that the time
series has a short memory, which means that values separated by a time
gap larger than a certain window size $w$ are uncorrelated:
$Corr(X_t, X_{t-j})=0$ for $j \geq w$.

% Thus, current approaches assumed that that the time series is
% stationary.
In many applications, these two assumptions may not hold. This
oversimplification in modeling greatly limits forecasting accuracy.
In this paper, we introduce a novel approach to
time series forecasting based on the following observation: in many
applications, the underlying system operates in hidden states, and
the time series observed from the system reflects the interplay of
these hidden states. This brings fundamental changes to modeling. For
instance, $Corr(X_t,X_{t-j})$ no longer depends on the time gap $j$
alone, but also on the hidden state of the system at time $t$. In
other words, the way a time series value correlates with its
preceding values depends on the hidden state the system operates
in. In particular, we may have $Corr(X_t, X_{t-w}) \neq 0$, especially
when the system happens to be in the same state at time $t$ and
$t-w$. Thus, we can improve the quality of forecasting if we are
able to model the hidden states directly and accurately.

The Segmental Hidden Markov Model (SHMM) is an extension of the
traditional HMM. It decomposes the time series into several disjoint
segments, each of which is modeled by a parametric function with
additive noise, and the segments are ``linked'' in a Markov manner.
However, it does not consider the similarity between different
segments. In other words, in SHMM, each hidden state corresponds to
only one segment. In many applications, however, similar segments of
a time series appear more than once, and SHMM cannot describe these
similar segments with one hidden state.

Our approach is based on the Hidden Markov Model (HMM). The biggest
challenge in using a Markov Model or a Hidden Markov Model for time
series data is to decide what constitutes a {\it state} or a {\it
hidden state} in time series data. Before we delve into the details
of our approach, we give a brief overview of the Markov Model and
the challenges in using it for time series forecasting.

\paragraph*{Markov Model}
% Predict when specific values will happen in a user specified threshold
% of probability.  This model can be used to analyse the system and
% predict its behaviour under a changing environment. The information
% yielded by an analysis can further be employed to alter possible
% factors and variables in the system to achieve an optimal
% performance. For example, we may want to ask the following questions
% given related sequence data.

% Traditionally, sequence prediction is defined as predicting the value
% of $s_j$ given the sequence $s_1, s_2,\cdots, s_{j-1}$.



% Some term this as early prediction\cite{XingP08}.

Markov chains are used to forecast sequential
events~\cite{markovseries99,oates01,li00,smyth97} in a wide range of
applications from mathematical biology to gambling. Recently, Markov
models have also been used to forecast real valued time series data,
in applications such as system
monitoring~\cite{icde08-haixun,xhhx09} and intelligent load
shedding~\cite{loadstar,loadstardemo}.  To understand how a Markov
model is applied to time series data, let us study a simple example
taken from options pricing~\cite{hull1997ofa}.
\begin{figure}[htbp]
  \centering
\begin{tabular}{ccc}
    \includegraphics[width=3cm,height=3.5cm]{figure/mm.eps} &
 &
    \includegraphics[width=3.5cm,height=3.5cm]{figure/zigzag.eps}\\
(a) Markov forecasting && (b) Trending\\
\end{tabular}
    \caption{Markov modeling of time series data}\label{fig:mm}
\end{figure}

Assume we model a variable (e.g., price) using a Markov
chain. % models the value over time using a
% sequence of random variables $X_1, X_2, X_3, \cdots$, and
According to the Markov assumption, given its present value $x_n$,
its future values depend on $x_n$ only: {\small
\begin{equation}
P(X_{n+1}=x|X_n=x_n,\cdots,X_1=x_1) = P(X_{n+1}=x|X_n=x_n) \nonumber
\end{equation}
} Figure~\ref{fig:mm}(a) demonstrates a simple case where the value
either goes up by $10$ (with probability $p$) or goes down by $10$
(with probability $1-p$) in the next time slot, which means {\small
\begin{equation}
P(X_{n+1}=x+10|X_n=x) =p, \;\;\;P(X_{n+1}=x-10|X_n=x) =1-p \nonumber
\end{equation}}
This enables us to derive the distribution of the value at any time
in the future, and to answer forecasting-related questions such as:
what is the probability that the value exceeds a given threshold
within the next $n$ steps?
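In this two-outcome example, the forecast distribution after $n$ steps is simply a binomial. A minimal sketch (our own illustration, not code from the paper; the function name is ours) makes the computation concrete:

```python
import math

def step_distribution(x0, p, n):
    """Exact distribution of the value after n steps of the +/-10 walk.

    Each step goes up by 10 with probability p, or down by 10 with
    probability 1 - p, so after n steps with k up-moves the value is
    x0 + 10*k - 10*(n - k). Returns a dict mapping value -> probability
    (a shifted binomial distribution).
    """
    dist = {}
    for k in range(n + 1):
        prob = math.comb(n, k) * p**k * (1 - p) ** (n - k)
        dist[x0 + 10 * k - 10 * (n - k)] = prob
    return dist

# Probability that the value is at least 110 after 3 steps from 100, p = 0.6:
dist = step_distribution(100, 0.6, 3)
p_ge_110 = sum(pr for v, pr in dist.items() if v >= 110)  # 0.216 + 0.432
```

This is exactly the kind of threshold query mentioned above, answered by summing the tail of the derived distribution.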

\paragraph*{Challenges}
The above approach of using a Markov model for time series data has
some fundamental weaknesses, which greatly limit its use in many
real-life applications. The core issue is: what constitutes
a state in a Markov model for time series data? In
Figure~\ref{fig:mm}(a), a state is nothing more than an observed
value (e.g., 100). However, in time series data, a single value
carries very little semantics, and has very limited predictive
power.

One extension that is widely used to improve the applicability of a
simple Markov chain is the Hidden Markov Model, which uses a
probability distribution to associate states and observations.  For
each state $s$, the chance that we observe a value $v$ is given by a
probability distribution $p(v|s)$. For time series data, however, it
is still not very meaningful because, for example, a share price
alone does not tell much about a company. In fact, in the worst
case, $p(v|s)$ can be a uniform distribution.

To resolve this problem, we must find the basic semantic components
of time series. Many time series exhibit trends. In
Figure~\ref{fig:mm}(b), the values exhibit a down trend followed by
an up trend. Trends are more informative than single values: if
the time series reaches value 100 in a down trend, the next value is
more likely to be 90 than 110, while the Markov chain in
Figure~\ref{fig:mm}(a) cannot differentiate the two cases.

Another extension of the Markov chain is the variable length Markov
chain, which goes beyond the Markov assumption and bases its
forecast on a variable number of previous states instead of just one
state. In the setting of time series forecasting, the problem becomes
deciding what a ``trend'' consists of, or how to identify
``trends'' that have strong predictive power. Previous work on time
series segmentation and clustering can be used to discover trends,
but most of it is suited only for data representation or
summarization, not for forecasting. In this paper, we introduce a
forecasting-oriented segmentation and clustering approach for time
series Markov modeling.

\paragraph*{Our Approach and Contribution}
Our approach to time series forecasting is an iterative refinement
loop, as shown in Figure~\ref{fig:overview}(a). The first three steps,
namely time series segmentation, clustering, and aggregation, focus
on finding the semantic components of the time series. With these
semantic components, we construct a Hidden Markov Model in step 4,
which enables us to perform forecasting. Meanwhile, the Markov model
is used to iteratively refine the segmentation, clustering, and
aggregation process until the system reaches a stable state.

Our contribution lies in our forecasting-oriented approach to time
series segmentation and clustering. Specifically, the first time we
segment the time series (the 1st step), we do not have any knowledge
about the hidden states or the underlying generation mechanism, so
the best we can do is to use a standard approach to convert the
time series into a Piecewise Linear Representation (PLR), which
approximates the time series using the line segments that minimize
the approximation error. However, this segmentation may not be
optimal for forecasting. To see this, suppose a time series
$(y_1,\cdots, y_{100})$ has minimal approximation error if it is
represented by two line segments, $s_a$ and $s_b$, and that the
second best choice is to segment it into three pieces, $s_u$, $s_v$,
and $s_w$, which have a slightly higher approximation error. Later
on, we map each segment to a hidden state, and after constructing
the Markov model, we find that the state transition probability from
$s_a$ to $s_b$ is almost 0, while the probabilities from $s_u$ to
$s_v$ and from $s_v$ to $s_w$ are very high. This clearly indicates
that the original segmentation is not optimal.
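For concreteness, the standard bottom-up variant of PLR can be sketched as follows (our own illustrative implementation under simple assumptions, with function names of our choosing; the paper's refinement loop later revises these boundaries):

```python
import numpy as np

def fit_error(y, lo, hi):
    """Squared error of the least-squares line through y[lo:hi]."""
    x = np.arange(lo, hi)
    seg = y[lo:hi]
    if len(seg) < 2:
        return 0.0
    slope, intercept = np.polyfit(x, seg, 1)
    return float(np.sum((seg - (slope * x + intercept)) ** 2))

def bottom_up_plr(y, k):
    """Greedy bottom-up Piecewise Linear Representation with k segments.

    Start from many fine-grained segments and repeatedly merge the
    adjacent pair whose merged fit increases the approximation error
    the least. Returns a list of contiguous (start, end) index pairs.
    """
    bounds = [(i, min(i + 2, len(y))) for i in range(0, len(y), 2)]
    while len(bounds) > k:
        costs = [fit_error(y, bounds[i][0], bounds[i + 1][1])
                 for i in range(len(bounds) - 1)]
        i = int(np.argmin(costs))
        bounds[i] = (bounds[i][0], bounds[i + 1][1])  # merge pair i, i+1
        del bounds[i + 1]
    return bounds

# A zigzag series (up ramp then down ramp) splits cleanly into 2 segments:
y = np.concatenate([np.arange(10.0), np.arange(10.0, 0.0, -1.0)])
bounds = bottom_up_plr(y, 2)
```

Note that this criterion minimizes approximation error only; it is agnostic to the downstream Markov model, which is precisely the weakness the example above illustrates.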

The same situation may occur in clustering. Many clustering methods
are available for sequence data. The
Micro-Cluster~\cite{Aggarwal03aframework} approach, for instance, is
known for its ability to find clusters in a changing environment.
However, these methods may not be appropriate for forecasting. An
example is shown in Figure~\ref{fig:overview}(b). Segments $A_1$ and
$A_2$, as well as $A_3$ and $A_4$, have similar shapes (with $A_1$
and $A_2$ being more similar to each other than $A_3$ and $A_4$).
However, we should not cluster $A_1$ and $A_2$ together, because the
states that follow them ($B$ and $C$) have very different shapes,
which indicates that although the difference between $A_1$ and $A_2$
is slight, it is semantically important. On the other hand, although
$A_3$ and $A_4$ differ more, their difference is not semantically
important, and they can be safely clustered together.

\begin{figure}[!htp]
  \centering
\includegraphics[width=7cm,height=4cm]{a.eps}
  \caption{Overview of  our approach}
\label{fig:overview}
\end{figure}

\begin{figure}[!htp]
  \centering
\includegraphics[width=3.4cm,height=3.8cm]{figure/cluster1.eps}
  \caption{Challenges}
\label{fig:challenge}
\end{figure}


%\begin{figure}[!htp]
%  \centering
%\begin{tabular}{cc}
%  \includegraphics[width=3.4cm,height=3cm]{figure/cluster1.eps}&
%  \includegraphics[width=3.4cm,height=3cm]{figure/cluster1.eps}\\
%(a) Segmenting & (b) Clustering\\
%\end{tabular}
%  \caption{Challenges}
%\label{fig:overview}
%\end{figure}



\paragraph*{Paper Organization}

The paper is organized as follows. Sections~\ref{sec:model} and
\ref{sec:refine} describe the method we use to create Markov
models for time series data. Section~\ref{sec:predict} shows how the
HMM is used for prediction. Section~\ref{sec:expr} presents
experimental results. In Section~\ref{sec:related}, we discuss
related work, and we conclude in Section~\ref{sec:conclusion}.

% \begin{enumerate}
% \item First, we use Piecewise Linear Representation to summarize the data.

% from E. Keogh: Several high level representations of time series have
% been proposed, including Fourier Transforms [1,13], Wavelets [4],
% Symbolic Mappings [2, 5, 24] and Piecewise Linear Representation
% (PLR). In this work, we confine our attention to PLR, perhaps the most
% frequently used representation [8, 10, 12, 14, 15, 16, 17, 18, 20, 21,
% 22, 25, 27, 28, 30, 31].
% ==================================================
Time series processing has been a topic of extensive research,
spanning many subproblems. These can be split into two groups. The
first group consists of single-value-based approaches, such as time
series prediction and anomalous value detection; the second consists
of trend-based approaches, such as time series compression and
similar time series detection.

In single-value-based approaches, models of different types, such as
regression models and Hidden Markov Models, are built on the
individual values. Although these models are powerful and have
successfully solved specific problems, the fact that a single value
carries very little semantics limits their power. In trend-based
approaches, higher-level patterns, such as DFT, DWT, and PLR
representations, are learned to describe the time series. However,
little work exploits the relations between these patterns, that is,
how the patterns work together to describe the time series.

In this paper, we combine these two types of approaches to propose a
new Hidden Markov Model in which states correspond to high-level
patterns. Our model combines the advantages of both types of
approaches, and can be used to solve more complex problems,
including:
\begin{itemize}
\item Value prediction. Instead of making predictions based on the
value at the previous time point (HMM), or on the values at a fixed
number of previous time points (regression), we make predictions
based on the current pattern. Since high-level patterns carry more
semantics, our model achieves higher prediction accuracy. Moreover,
it can deal with multi-step prediction effectively: given the time
series seen so far, we can predict the values at more than one
future time point.
\item Trend prediction. Our model can answer questions such as: what
will the trend of the time series be after 50 seconds? When will the
time series end the current downward trend and enter an upward
trend?
\item General correlation detection. There exist approaches to mine
the correlation between two time series based on the correlation
coefficient or on various distance measures, but they fail to mine
more general correlations, e.g., whenever a sine wave occurs in time
series $S_1$, a line with a fixed slope occurs in time series $S_2$.
Since our model is built on higher-level patterns, it can deal
with this problem effectively.
\end{itemize}

In this work, we restrict the patterns in a time series to line
segments, since they are easy to interpret and can model time series
well in many applications. Specifically, we build a Hidden Markov
Model in which each state corresponds to a line with a fixed length
and slope.
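One way to make such a state concrete is to attach a Gaussian observation model to each line, in the spirit of the segmental formulation discussed earlier. The sketch below is our own illustration under that assumption, not the paper's exact parameterization:

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class LineState:
    """A hidden state: a line segment with a fixed slope and length."""
    slope: float
    length: int
    sigma: float  # std. dev. of the assumed additive Gaussian noise

    def emission_logprob(self, segment, intercept):
        """Log-likelihood of observing `segment` (len == self.length)
        as this line plus i.i.d. Gaussian noise."""
        assert len(segment) == self.length
        ll = 0.0
        for t, y in enumerate(segment):
            mu = intercept + self.slope * t
            ll += (-0.5 * math.log(2 * math.pi * self.sigma ** 2)
                   - (y - mu) ** 2 / (2 * self.sigma ** 2))
        return ll

# An up-trend state scores a rising segment far higher than a falling one:
up = LineState(slope=1.0, length=5, sigma=0.5)
good = up.emission_logprob([0.0, 1.0, 2.0, 3.0, 4.0], intercept=0.0)
bad = up.emission_logprob([4.0, 3.0, 2.0, 1.0, 0.0], intercept=4.0)
```

Because the state describes a whole segment rather than a single value, its emission probability carries the trend semantics that a value-level state lacks.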



\paragraph*{Challenges and Our Approach}
The fundamental issue is which kind of pattern constitutes a state
in a Markov model for time series data, and how to mine a set of
representative patterns that describe the time series accurately. As
noted above, we restrict the patterns to line segments, since they
are easy to interpret and can describe time series well in many
applications.

Mining the patterns efficiently and accurately, and then building an
HMM on top of them, is a non-trivial challenge. There exist
approaches that segment a time series into disjoint intervals, each
of which is represented by a line. To learn the representative
lines, a straightforward way is to cluster the learned line segments
and then pick one line from each cluster to form the set of
representative lines. However, since the learned line segments are
isolated from each other, this cluster-based approach often fails to
obtain the optimal lines. Moreover, mining the optimal patterns
efficiently from the enormous number of possible segmentations is
even more challenging.

Our approach is an iterative refinement loop, as shown in
Figure~\ref{fig:overview}. The first two steps, namely time series
segmentation and clustering, focus on finding the semantic
components of the time series. With these semantic components, we
construct a Hidden Markov Model in step 3. Meanwhile, the Markov
model is used to iteratively refine the segmentation and
clustering process until the system reaches a stable state.
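Steps 2 and 3 of the loop can be sketched as follows. This is a deliberately tiny two-state version of our own (clustering is reduced to the sign of the segment slope, and all function names are ours), meant only to show how clustered segments yield a transition matrix:

```python
import numpy as np

def segment_slopes(y, bounds):
    """Least-squares slope of each segment (the segment's feature)."""
    return [float(np.polyfit(np.arange(lo, hi), y[lo:hi], 1)[0])
            for lo, hi in bounds]

def cluster_by_sign(slopes):
    """Toy clustering: state 0 = up segment, state 1 = down segment."""
    return [0 if s >= 0 else 1 for s in slopes]

def transition_matrix(states, n_states=2):
    """Maximum-likelihood transition probabilities (add-one smoothed)
    estimated from the observed state sequence."""
    counts = np.ones((n_states, n_states))
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Zigzag series of four ramps: up, down, up, down.
y = np.concatenate([np.arange(10.0), np.arange(10.0, 0.0, -1.0),
                    np.arange(0.0, 10.0), np.arange(10.0, 0.0, -1.0)])
bounds = [(0, 10), (10, 20), (20, 30), (30, 40)]
states = cluster_by_sign(segment_slopes(y, bounds))
T = transition_matrix(states)  # up -> down and down -> up dominate
```

In the full loop, the learned transition structure then feeds back to revise the segmentation and clustering, rather than being computed once.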

Our contributions can be summarized as follows:
\begin{itemize}
\item We propose a new Hidden Markov Model in which states
correspond to line segments, instead of the values at single time
points.
\item We propose an iterative approach to refine the model, along
with several optimization strategies to speed up processing.
\item We illustrate how to utilize the model to solve time series
prediction and correlation detection problems.
\item We conduct extensive experiments to demonstrate the
effectiveness of the proposed model. Our model can accurately
predict the values at the next multiple time points as well as
future trends, and can learn the correlations between different time
series. Moreover, the efficiency of the learning approach is also
verified.
\end{itemize}


































%%% Local Variables:
%%% mode: latex
%%% TeX-master: "kdd09"
%%% End:
