\section{Introduction}
% Mining time series data has important applications in a wide range of
% fields spanning physics, engineering, biology, social science, and
% business~\cite{chatfield04series}. In this paper, we focus on the
% problem of time series forecasting. More specifically, given
% observation of a time series denoted by $y_1, y_2, \cdots, y_t$, we
% want to forecast $y_{t+n}$ for $n=1,2,\cdots$ with a given level of
% confidence. The technique we develop can be used to answer a variety
% of forecasting related questions such as

% \begin{itemize}
% \item How long does it take for the temperature to reach $35^oC$ with
%   probability higher than 85\%?
% \item What is the probability that the system is going to crash within
%   the next 10 minute?
% \end{itemize}

% \paragraph*{State of the Art}
Modeling and analysis of time series data is a rich and
rapidly growing research field. Time series data often arise
when following industrial processes, monitoring patient treatments,
or tracking corporate business metrics. Time series analysis is widely used in applications such as economic
forecasting, stock market analysis, process and quality control, budgetary
analysis, and workload projection. In database research,
there has been an explosion of interest in time series databases, covering, for example,
time series representation~\cite{dft94,dwt99,sax07,plr98}, time series
prediction~\cite{markovseries99}, anomaly detection~\cite{xhhx09}, and
classification and clustering~\cite{dft94,datamining05,keogh06}.

However, far less attention has been paid to extracting the semantics
of time series. In many applications, time series data are the external
manifestation of latent contexts or states. Once the system that generates the data enters a certain
context, the time series exhibits regular patterns. For
example, the temperature of a region tends to peak in the summer and
then decline in the fall; it reaches its lowest point in the winter and then
climbs back up in the spring. Bio-medical respiration signals show regular patterns
as the patient inhales and exhales. Moreover, the system alternates among a
set of contexts over and over again, and fixed temporal relations exist between
different contexts: in a temperature time series, the winter pattern is always followed
by the spring pattern. The set of contexts and the temporal relations between them
form the semantic information of the time series.
This semantic information gives meaning to a time series: it not only
provides a better understanding of the series, but also enables more accurate forecasts
and high-level correlation detection.

In this paper, we introduce a novel Hidden Markov model to exploit the semantics
of time series. Markov models and hidden Markov models~\cite{hmm89} are classic models
for pattern recognition and have been applied in many fields, such as
speech recognition and handwriting recognition. But to describe the semantics,
what should constitute a state?

In a Markov chain model, a state is the observed value at a time point.
However, in time series data, a single value carries very little semantics and has
very limited predictive power.
One extension widely used to improve the applicability of a
simple Markov chain is the Hidden Markov Model, which uses a
probability distribution to associate states with observations.  For
each state $s$, the chance that we observe a value $v$ is given by a
probability distribution $p(v|s)$. For time series data, however, this
is still not very meaningful because, for example, a share price alone
does not tell much about a company. In fact, in the worst case,
$p(v|s)$ can be a uniform distribution. Consider the example in Figure~\ref{fig:value}:
although $A,B,C,D,E,F$ all have the same value $1,400$, we cannot
predict the next values based only on them, since they can lie on either
an upward trend or a downward trend.
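This ambiguity is easy to reproduce on a toy sequence (the numbers below are illustrative and not taken from Figure~\ref{fig:value}): every occurrence of the value $1,400$ is followed by a different continuation, so the value alone determines nothing about the next step.

```python
# A zig-zag sequence in which the value 1400 appears three times.
series = [1380, 1400, 1420, 1440, 1420, 1400, 1380, 1400, 1420]

# Collect the value that follows each occurrence of 1400.
successors = [series[i + 1] for i in range(len(series) - 1) if series[i] == 1400]
print(successors)  # both upward (1420) and downward (1380) continuations occur
```

Since the successor set contains both rises and falls, any predictor conditioned on the raw value $1,400$ alone cannot do better than guessing the direction.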

\begin{figure}[!htp]
  \centering
\includegraphics[width=8cm,height=4cm]{figure/state.eps}
  \caption{Time series}
\label{fig:value}
\end{figure}

To resolve this problem, we must find the basic semantic components
of time series to serve as the states of our model. Many time series exhibit
trends, as shown in Figure~\ref{fig:value}, which can be well represented by lines.
In this paper, we therefore
use lines as the semantic patterns, for two reasons:
1) lines have simple shapes, and the trends they represent are easy to
understand; 2) in many applications, a time series can
be represented well by a sequence of disjoint lines. Our
model can be extended easily to other semantic observations, such as
polynomial curves.


In existing work, researchers also use lines to represent time series
approximately. For example, in Piecewise Linear Representation
(PLR)~\cite{plr98},
a time series is segmented into a sequence of disjoint intervals, each
of which is represented by a line. But the lines are isolated:
PLR cannot answer questions such as
whether a line occurs more than once, or whether line $L_1$ is always followed by line $L_2$.
The goal of~\cite{plr98} is to represent a time series with a sequence of lines
that has minimal approximation error,
whereas in our work we aim to find semantic
patterns, which not only occur frequently but also exhibit stable temporal relations.
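The segmentation step itself can be sketched as follows. This is a generic greedy sliding-window variant with least-squares line fits, offered as an illustration of PLR-style segmentation under a maximal error threshold rather than the specific algorithm of~\cite{plr98}.

```python
import numpy as np

def fit_line(y, t0):
    """Least-squares line over points (t0 .. t0+len(y)-1, y).
    Returns (slope, intercept, max absolute residual)."""
    t = np.arange(t0, t0 + len(y))
    slope, intercept = np.polyfit(t, y, 1)
    err = np.max(np.abs(slope * t + intercept - y))
    return slope, intercept, err

def plr(y, max_err):
    """Greedy sliding-window PLR: grow each segment until the
    fit error would exceed max_err, then start a new segment."""
    segments, start, n = [], 0, len(y)
    while start < n - 1:                     # a segment needs at least two points
        end = start + 2
        s, b, _ = fit_line(y[start:end], start)
        best = (start, end, s, b)
        while end < n:
            end += 1
            s, b, e = fit_line(y[start:end], start)
            if e > max_err:
                break
            best = (start, end, s, b)
        segments.append(best)                # (start, end, slope, intercept), end exclusive
        start = best[1]
    return segments

# A rise followed by a fall should come back as two disjoint line segments.
y = np.concatenate([np.linspace(0, 10, 20), np.linspace(10, 2, 20)])
print(plr(y, 0.5))
```

Each returned tuple is one interval with its fitted line; as the text notes, the output is just a flat list, with no information about recurrence or the order in which similar lines appear.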

Recently, several works have proposed models built on patterns.
In~\cite{tang07}, the authors propose the Pattern Growth
Graph (PGG) to detect and manage variations over pseudo-periodical
streams. Their approach first splits the time series into segments, each an occurrence
of the pseudo period, and then describes the time series by several connected lines.
However, this work only handles pseudo-periodical time series and does not apply to
general time series.
In~\cite{reeves09}, a multi-scale scheme is proposed to compress
time series streams; different techniques, such as FFT and random
projection, are combined for compression. Still, these
patterns are isolated, and hence cannot be used to make predictions.

\paragraph*{Challenges}
Although there is an extensive body of research on the Hidden Markov model (HMM), and it has been utilized in
many applications, building an HMM based on patterns is a non-trivial challenge.
First, unlike frequent itemset mining, in which the possible
itemset candidates are finite, the number of possible patterns in a time series is infinite, since slopes and durations are
all continuous values. Hence, mining patterns in time series cannot be solved by an Apriori-like approach.

A related challenge lies in how to learn the HMM from training time series when the observation
sequence is unknown.
In traditional HMM theory, the parameters of an HMM can be estimated with the Baum-Welch algorithm, but the premise is
that the observation sequence is available beforehand. In our work, the unit of observation is a line instead of a
single value, which means the observation sequence is not known during
the learning process: we have to learn the observation sequence while training the HMM. How to
do this efficiently is therefore another key issue that must be addressed.
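The premise is easy to see in the standard machinery: the forward algorithm, on which Baum-Welch re-estimation builds, takes the observation sequence as a given input. The sketch below (with illustrative parameter values) makes this explicit; `obs` must exist before any likelihood can be computed, which is exactly what fails when the observations are lines that are themselves being learned.

```python
import numpy as np

# Illustrative two-state, two-symbol discrete HMM.
A = np.array([[0.7, 0.3],      # state transition matrix
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],      # emission probabilities p(v|s)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])      # initial state distribution

def forward_likelihood(obs):
    """Forward algorithm: P(obs | model), for a KNOWN observation sequence."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

print(forward_likelihood([0, 0, 1, 1]))
```

In our setting there is no such `obs` list up front: the symbols themselves (the lines) must be recovered jointly with the model, which is why the standard estimation pipeline cannot be applied as-is.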

A straightforward way to build a pattern-based HMM is a cluster-based approach. First, we segment the time series
into contiguous disjoint intervals under a maximal approximation error threshold. From
each interval a line is learned, and finally the set of lines is clustered into several groups.
From each group, a representative pattern is extracted as one of the
final patterns, and an HMM is built on the representative patterns.
But this approach has several drawbacks: 1) It is
difficult to set a uniform maximal error threshold for all
groups. It is often the case that the subsequences in one group
are more similar to each other than those in another group, so too large a threshold
produces meaningless groups, while too small a threshold splits meaningful
groups. 2) It considers only the similarity between subsequences,
not their temporal relations. A pattern that occurs frequently but is followed by many different
patterns is still not the kind of pattern we want. 3) Owing to inappropriate thresholds in the segmentation
and clustering algorithms,
some meaningful patterns may be missed.
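The baseline can be sketched as follows. The feature choice (slope, duration) and the plain k-means routine are illustrative assumptions on our part, not a prescribed design; note how the sketch uses only per-segment similarity and never looks at which segment follows which, which is drawback 2) above.

```python
import numpy as np

rng = np.random.default_rng(0)

def line_features(segments):
    """Fit a least-squares line to each (t, y) interval;
    return one (slope, duration) feature row per interval."""
    feats = []
    for t, y in segments:
        slope, _ = np.polyfit(t, y, 1)
        feats.append((slope, float(len(t))))
    return np.array(feats)

def kmeans(X, k, iters=20):
    """Plain k-means on the feature rows; a library routine would do as well."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centers, labels

# Four pre-segmented intervals: two rising, two falling (synthetic data).
t = np.arange(10.0)
segments = [(t, 0.50 * t), (t, 0.52 * t + 3), (t, -0.50 * t), (t, -0.48 * t + 9)]
centers, labels = kmeans(line_features(segments), 2)
print(centers)  # one representative rising pattern, one falling pattern
```

The two cluster centers are the "representative patterns" of the baseline; everything about their order of occurrence is discarded.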


\paragraph*{Our Approach and Contribution}
In this paper, our goal is to build a Hidden Markov model based on
patterns (pHMM). To learn the patterns and build the Hidden Markov model
efficiently, we propose a two-phase approach. In the initial phase, a
cluster-based algorithm initializes the HMM
parameters by segmenting the time series and clustering the resulting
lines. In this phase, since we have no knowledge of the
hidden states, the best we can do is use a standard
approach to convert the time series into a Piecewise Linear
Representation (PLR)~\cite{plr98}, which approximates the time series
using line segments with minimal approximation error; a
clustering step is then executed to obtain representative patterns.
As analyzed above, the absence of information about the hidden states makes it
difficult to obtain the best semantic patterns. To solve this problem,
in the second phase we execute an iterative process to refine the
model. Specifically, in each round we first segment and
cluster based on the learned HMM; the guidance of the HMM
improves the quality of pattern learning. We then
update the HMM based on the segmentation and the learned patterns. We will prove
that the iterative process always improves the quality of the model.
The whole framework is illustrated in Figure~\ref{fig:overview}.


\begin{figure}[!htp]
  \centering
\includegraphics[width=8cm,height=4cm]{figure/arch1.eps}
  \caption{Overview of our approach}
\label{fig:overview}
\end{figure}
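As a toy analogue of the second phase, the loop below alternates assignment (re-segmentation) and re-estimation on line slopes; like the refinement process described above, each round can only decrease its objective. The names and numbers are illustrative and do not reproduce the actual pHMM update.

```python
import numpy as np

slopes = np.array([0.9, 1.1, 1.0, -0.8, -1.2, -1.0])   # slopes of observed lines
patterns = np.array([0.0, 0.5])                         # deliberately poor initial patterns

def objective(slopes, patterns):
    """Total squared error of assigning every slope to its nearest pattern."""
    return np.min((slopes[:, None] - patterns) ** 2, axis=1).sum()

errs = [objective(slopes, patterns)]
for _ in range(5):
    # Step 1: re-assign each observed slope to its nearest current pattern.
    labels = np.argmin((slopes[:, None] - patterns) ** 2, axis=1)
    # Step 2: re-estimate each pattern from its assigned slopes.
    patterns = np.array([slopes[labels == j].mean() for j in range(len(patterns))])
    errs.append(objective(slopes, patterns))
print(errs)   # monotonically non-increasing
```

Both steps minimize the same objective (step 1 over assignments, step 2 over patterns), which is why the error sequence never increases; the monotonicity proof for the full model follows the same alternating-minimization logic.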


To address the absence of an observation sequence, we learn the observation sequence
and the state sequence simultaneously. To speed up processing,
we propose several pruning strategies, which take advantage of
the fact that many segments are unlikely to be
observation lines.

Our contributions can be summarized as follows:
\begin{itemize}
\item We propose a pattern-based Hidden Markov model (pHMM), in which states correspond to
semantic patterns (line segments); the model captures the meaningful
patterns in a time series and the temporal relations between them.
\item We propose an iterative approach to refine the model, together with several pruning strategies that speed up
the refinement process.
\item We show how to utilize the proposed model to perform
multi-step value prediction, trend prediction, and general correlation
detection.
\item We conduct extensive experiments to verify the effectiveness
and efficiency of the proposed approach.
\end{itemize}

\paragraph*{Paper Organization}

The paper is organized as follows. The problem is defined in
Section~\ref{sec:prob}. Section~\ref{sec:model} introduces the
algorithm in the initial phase. Section~\ref{sec:refine} describes
the method to refine pHMM. Section~\ref{sec:appl} shows how to
utilize the learned model. Section~\ref{sec:expr} shows experimental
results. In Section~\ref{sec:related}, we discuss related work, and
we conclude in Section~\ref{sec:conclusion}.
