\section{Preliminaries and Approach}
\label{sec:overview} In this paper, we propose the pattern-based
Hidden Markov model (pHMM) to reveal system dynamics from time
series data.

\subsection{Background of HMM}
A Hidden Markov Model (HMM) is a statistical model in which the
system being modeled is assumed to be a Markov process. It consists
of a finite set of states, each of which is associated with a
probability distribution over all possible output tokens.
Transitions among the states are governed by a set of probabilities.
The states themselves are not visible, but the outputs they produce
are.
Given a sequence of observations, we learn an HMM and derive a
sequence of hidden states that correspond to the sequence of
observations.

Formally, an HMM, denoted by $\lambda=\{S,A,B,\pi\}$, is described
with the following parameters:
\begin{itemize}
\item A set of states $S=\{1,2,\cdots,K\}$.
\item State transition probabilities $A=\{a_{ij}\}$, $1\leq i,j\leq
  K$, where $a_{ij}$ is the probability of transitioning from state
  $i$ to state $j$.
\item Output probabilities $B=\{b_i(o)\}$, $1\leq i\leq K$, where $o$
  is an observation whose value (or value vector) may be continuous
  or discrete, and $b_i(o)$ is the probability of state $i$
  generating observation $o$.
\item Initial probabilities $\pi=\{\pi_i\}$, $1\leq i\leq K$. $\pi_i$ is
  the probability of the time series beginning with state $i$.
\end{itemize}
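As a purely illustrative sketch, the parameters $\lambda=\{S,A,B,\pi\}$ above can be held in a small numerical container. Here the per-state output distribution $b_i$ is assumed to be Gaussian, and all numbers are made up; the actual output model depends on the application:

```python
import numpy as np

# Illustrative HMM lambda = {S, A, B, pi} with K = 3 states.
K = 3
A = np.array([[0.8, 0.1, 0.1],   # a_ij: probability of state i -> state j
              [0.2, 0.7, 0.1],
              [0.1, 0.2, 0.7]])
pi = np.array([0.6, 0.3, 0.1])   # initial state probabilities

# Assumption for illustration: each state emits from a Gaussian (mean, std).
means = np.array([0.0, 1.0, -1.0])
stds = np.array([0.5, 0.5, 0.5])

def b(i, o):
    """Density of state i emitting observation o under the assumed Gaussian."""
    return np.exp(-0.5 * ((o - means[i]) / stds[i]) ** 2) / (stds[i] * np.sqrt(2 * np.pi))

# Each row of A and the vector pi must sum to 1.
assert np.allclose(A.sum(axis=1), 1.0) and np.isclose(pi.sum(), 1.0)
```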

When we learn an HMM, the basic assumption is that the observation
sequence and the hidden state sequence are aligned. In our case,
however, a major challenge is to derive this alignment.



One fundamental problem associated with HMMs is the \emph{decoding
  problem}: given a model $\lambda$ and a sequence of observations
$O=(o_1,\cdots,o_n)$, find the optimal state sequence that produces
the observations.  Another problem is the \emph{learning problem}:
how to estimate the parameters $\lambda$ of the HMM so that the
probability of the observation sequence being generated along the
optimal state sequence is maximized.  In both cases, an important
measure is the \emph{production probability}. Given an HMM $\lambda$
and a sequence of observations $O$, the production probability is the
probability of HMM $\lambda$ generating $O$ along a state sequence
$\mathbf{s}=(s_1,\cdots,s_n)$, and it is computed as:
\begin{equation}
P(O,\mathbf{s}|\lambda)=\pi_{s_1}b_{s_1}(o_1)\prod_{j=2}^{n}a_{s_{j-1},s_{j}}
b_{s_j}(o_j)
\end{equation}
The larger the production probability, the better $\lambda$ and the
state sequence $\mathbf{s}$ describe $O$.
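The production probability can be transcribed directly into code. The following sketch assumes a 2-state model with discrete output tokens $\{0,1\}$; all of the numbers are made up for illustration:

```python
import numpy as np

# Illustrative 2-state HMM with discrete outputs {0, 1}.
pi = np.array([0.7, 0.3])
A = np.array([[0.9, 0.1],
              [0.4, 0.6]])
B = np.array([[0.8, 0.2],    # b_i(o): row = state i, column = output token o
              [0.3, 0.7]])

def production_probability(obs, states):
    """P(O, s | lambda): pi_{s1} b_{s1}(o1) * prod_j a_{s_{j-1}, s_j} b_{s_j}(o_j)."""
    p = pi[states[0]] * B[states[0], obs[0]]
    for j in range(1, len(obs)):
        p *= A[states[j - 1], states[j]] * B[states[j], obs[j]]
    return p

p = production_probability([0, 0, 1], [0, 0, 1])
# p = 0.7 * 0.8 * 0.9 * 0.8 * 0.1 * 0.7 = 0.028224
```

In practice this product underflows for long sequences, so implementations typically accumulate log-probabilities instead.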

\subsection{Problem Statement}


We aim to solve the following problem: given a time series
$X=(x_1,x_2,\cdots,x_n)$, perform two operations:

\begin{itemize}
\item Convert the time series $X$ into a sequence of line segments,
  $\mathbf{L}=(L_1,L_2,\cdots,L_m)$;
\item Learn a hidden Markov model (HMM) from the observation sequence
  $\mathbf{L}$.
\end{itemize}
so that the production probability
\begin{equation}
P(\mathbf{L},\mathbf{s}^*|\lambda)=\pi_{s_1}b_{s_1}(L_1)\prod_{j=2}^{m}a_{s_{j-1},s_{j}}b_{s_j}(L_j)
\label{eq:prod}
\end{equation}
is maximized, where $\mathbf{s}^*=(s_1,s_2,\cdots,s_m)$ is the
optimal state sequence, in which $s_i$ generates line $L_i$ with
probability $b_{s_i}(L_i)$.


\subsection{Challenges}

A traditional HMM is learned from a given observation sequence; its
parameters can be estimated using the Baum--Welch algorithm. The
premise, however, is that the observation sequence is available. In
our case, the observation sequence is essentially unknown, since it
must be learned from the time series itself.  Moreover, the number of
possible patterns in a time series is infinite: we represent a
pattern by a line segment, whose slope and duration are continuous
values.

Furthermore, the process of learning an observation sequence from
the time series cannot be decoupled from the process of learning a
hidden Markov model from that observation sequence.  Intuitively, an
observation sequence is produced from a time series by time series
segmentation. Existing approaches segment a time series by solving an
optimization problem whose objective is to minimize the difference
between the time series and the sequence of line segments. These
approaches, however, treat segments as independent and isolated,
ignoring the temporal relations between them; such temporal relations
are critical for learning an HMM. In our work, we learn the
observation sequence and the HMM simultaneously.
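To make the error-minimization view of segmentation concrete, the following sketch implements one standard piecewise-linear strategy (a top-down split on residual error); it stands in for the family of existing approaches, and the error threshold \texttt{max\_err} is an assumed parameter:

```python
import numpy as np

def fit_error(x, lo, hi):
    """Sum of squared residuals of the best-fit line over x[lo..hi]."""
    t = np.arange(lo, hi + 1)
    y = x[lo:hi + 1]
    slope, intercept = np.polyfit(t, y, 1)
    return float(np.sum((y - (slope * t + intercept)) ** 2))

def segment(x, lo, hi, max_err):
    """Top-down piecewise-linear segmentation: recursively split at the
    point minimizing total error until each piece fits within max_err."""
    if hi - lo < 2 or fit_error(x, lo, hi) <= max_err:
        return [(lo, hi)]
    best = min(range(lo + 1, hi),
               key=lambda k: fit_error(x, lo, k) + fit_error(x, k, hi))
    return segment(x, lo, best, max_err) + segment(x, best, hi, max_err)

# Example: a rising line followed by a falling line.
x = np.concatenate([np.linspace(0, 1, 10), np.linspace(1, 0, 10)])
segs = segment(x, 0, len(x) - 1, max_err=0.01)
```

Note that each split decision here looks only at approximation error within a piece; no temporal relation between pieces is considered, which is exactly the limitation discussed above.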













\subsection{A Two Phase, Iterative Approach}
We not only discover patterns in time series, but also ensure that
they are not disconnected or isolated: they are organic components of
a state transition machine that produces the original time series.
The challenge is that we do not know the observation tokens, i.e.,
the patterns. Specifically, i) to segment the time series into
patterns, we need to know the state transition machine, which tells
us how likely one pattern is to be followed by another; otherwise we
run into the problems demonstrated by the example in
Figure~\ref{fig:problems}(b); and ii) to build the state transition
machine, we must know the patterns first, as patterns are the sole
components of the state transition machine.

\begin{figure}[!htp]
  \centering
\includegraphics[width=7cm,height=6.5cm]{figure/arch1.eps}
  \caption{Overview of  our approach}
\label{fig:overview}
\end{figure}

To solve this dilemma, we propose a two-phase approach. In phase one,
we segment the time series using traditional optimization techniques.
Because we have no knowledge of the underlying state transition
machine yet, the best we can do is to use a standard
approach~\cite{plr98} to convert the time series into a piecewise
linear representation. In other words, we approximate the time series
using line segments that minimize the approximation error. Then, we
cluster the resulting segments using a greedy clustering method that
considers both similarity and the temporal constraint. Finally, from
the segmented time series, we learn a hidden Markov model.
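The similarity part of the greedy clustering step can be sketched as follows; this is an assumed nearest-centroid scheme over (slope, duration) features with a made-up threshold \texttt{tol}, and it omits the temporal constraint used in our actual method:

```python
import numpy as np

def greedy_cluster(segments, tol):
    """Greedily assign each (slope, duration) segment to the first cluster
    whose centroid lies within tol; otherwise start a new cluster."""
    centroids, labels, counts = [], [], []
    for seg in segments:
        seg = np.asarray(seg, dtype=float)
        for c, cen in enumerate(centroids):
            if np.linalg.norm(seg - cen) <= tol:
                labels.append(c)
                counts[c] += 1
                centroids[c] = cen + (seg - cen) / counts[c]  # running mean update
                break
        else:
            labels.append(len(centroids))
            centroids.append(seg)
            counts.append(1)
    return labels, centroids

labels, centroids = greedy_cluster([(1.0, 5.0), (1.05, 5.0), (-1.0, 3.0)], tol=0.5)
# labels -> [0, 0, 1]: the two similar rising segments share a cluster
```

Each cluster then plays the role of an observation token, from which the initial HMM is estimated.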

In phase two, we execute an iterative process to refine the model.
Specifically, in each round, we first segment and cluster the time
series based on the learned pHMM.  The pHMM provides important
guidance for segmentation and clustering, resulting in higher-quality
patterns. Then we update the pHMM based on the learned patterns. We
prove that this iterative process always improves the quality of the
model. The whole framework is illustrated in
Figure~\ref{fig:overview}.
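The two phases can be summarized in a small control-flow skeleton. The five callables below are placeholders for the components described above (initial segmentation, model estimation, model-guided re-segmentation, and the production-probability score); their interface is an assumption for illustration:

```python
def refine(time_series, init_segment, learn_hmm, resegment, score, max_rounds=20):
    """Skeleton of the two-phase approach: phase one builds an initial
    segmentation and model; phase two alternates model-guided segmentation
    and model re-estimation until the score stops improving."""
    segments = init_segment(time_series)            # phase one: PLR + clustering
    model = learn_hmm(segments)
    best = score(model, segments)
    for _ in range(max_rounds):
        new_segments = resegment(time_series, model)  # phase two: guided by pHMM
        new_model = learn_hmm(new_segments)
        new = score(new_model, new_segments)
        if new <= best:                               # stop when no longer improving
            break
        model, segments, best = new_model, new_segments, new
    return model, segments
```

Because each round keeps the new model only if the score strictly improves, the loop terminates with a model at least as good as the phase-one initialization.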



















\subsection{Applications}
Our goal is to reveal the system beneath the time series data it
produces. With knowledge of the underlying system, we can perform a
large variety of challenging tasks, including the following:
\begin{itemize}
\item Trend prediction. With knowledge of the state transition
  machine, we can derive the temporal relations between patterns. This
  enables us to answer queries such as: what will the trend of the
  time series be in 10 minutes; or when will the time series end the
  current downward trend and enter an upward one.

\item Accurate multi-step value prediction. Predicting time series
  values far into the future is a challenging and important task.
  Specifically, given the time series before time point $t$, we want
  to predict the values at time $t+\delta$, where $\delta$ is much
  larger than $1$.

\item Pattern-based correlation detection. In traditional approaches,
  in order to compute the correlation between two time series, we map
  the time series into a vector space (using DFT or DWT, e.g.) and
  use a distance measure (Euclidean distance, dynamic time
  warping~\cite{keogh08}, etc.) to calculate their similarity. With
  our model, we can instead compute correlations based on patterns.
  Furthermore, we can correlate time series by rules such as: whenever
  pattern $P_1$ occurs in time series $S_1$, $P_2$ will occur in time
  series $S_2$.
\end{itemize}
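For the trend-prediction task, one simple consequence of having the transition matrix is that the state distribution $\delta$ steps ahead is $p_0 A^{\delta}$. The sketch below illustrates this; the $3\times 3$ matrix and the state labels (up, flat, down) are illustrative assumptions, not learned values:

```python
import numpy as np

# Assumed transition matrix over three trend states: 0 = up, 1 = flat, 2 = down.
A = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.2, 0.7]])

def state_distribution(p0, delta):
    """Distribution over states delta transitions after starting from p0."""
    return p0 @ np.linalg.matrix_power(A, delta)

p = state_distribution(np.array([0.0, 0.0, 1.0]), 5)  # currently in 'down'
trend = int(np.argmax(p))  # most likely trend state 5 steps ahead
```

In the actual model, transitions happen per pattern rather than per fixed time step, so converting "in 10 minutes" to a number of transitions additionally uses the pattern durations.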
