\section{Preliminary and Approach}
\label{sec:overview} In this paper, we propose using a
pattern-based hidden Markov model (pHMM) to reveal system dynamics
from time series data.

\subsection{Background of HMM}
A hidden Markov model (HMM) is a statistical model in which the
system being modeled is assumed to be a Markov process. It includes
a finite set of states, each of which is associated with a
probability distribution over all possible output tokens.
Transitions among the states are governed by a set of probabilities.
The states are not visible, but outputs produced by the states are.
Given a sequence of observations, we learn an HMM and derive a
sequence of hidden states that correspond to the sequence of
observations.

Formally, an HMM, denoted by $\lambda=\{S,A,B,\pi\}$, is described
with the following parameters:
\begin{itemize}
\item A set of states $S=\{1,2,\cdots,K\}$.
\item State transition probabilities $A=\{a_{ij}\}$, $1\leq i,j\leq
  K$, where $a_{ij}$ is the probability of transitioning from state
  $i$ to state $j$.
\item Output probabilities $B=\{b_i(o)\}$, $1\leq i\leq K$, where
  $o$ is an observation with a continuous or discrete value (or
  value vector), and $b_i(o)$ is the probability of state $i$
  generating observation $o$.
\item Initial probabilities $\pi=\{\pi_i\}$, $1\leq i\leq K$. $\pi_i$ is
  the probability of the time series beginning with state $i$.
\end{itemize}

When we learn an HMM, the basic assumption is that the observation
sequence and the hidden state sequence are aligned. In our case,
however, a major challenge is to derive this alignment.



One fundamental problem associated with HMMs is the \emph{decoding
  problem}: given a model, $\lambda$, and a sequence of observations,
$O$, find the optimal state sequence that produces the observations.
Another problem is the \emph{learning problem}: how to estimate the
parameters $\lambda$ of the HMM so that the probability of the
observation sequence generated by the optimal state sequence is
maximized.  In both cases, an important quantity is the
\emph{production probability}. Given an HMM $\lambda$ and a
sequence of observations $O$, the production probability is the
probability of $\lambda$ generating $O$ along a state sequence
$\mathbf{s}=(s_1,\cdots,s_m)$, and it is computed as:
\begin{equation}
P(O,\mathbf{s}|\lambda)=\pi_{s_1}b_{s_1}(o_1)\prod_{j=2}^{m}a_{s_{j-1},s_{j}}
b_{s_j}(o_j)
\end{equation}
The larger the production probability, the better $\lambda$ and the
state sequence $\mathbf{s}$ describe $O$.
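To make this concrete, the production probability can be computed directly from the model parameters. The following sketch is our own illustration rather than part of the proposed method: it assumes a hypothetical two-state model with Gaussian output densities, and all parameter values below are made up for demonstration.

```python
import math

# Hypothetical 2-state HMM with Gaussian output densities (illustrative only).
pi = [0.6, 0.4]                     # initial probabilities pi_i
A = [[0.7, 0.3],                    # transition probabilities a_ij
     [0.4, 0.6]]
mu, sigma = [0.0, 5.0], [1.0, 1.0]  # per-state output distribution parameters

def b(i, o):
    """Output probability b_i(o): Gaussian density of observation o under state i."""
    z = (o - mu[i]) / sigma[i]
    return math.exp(-0.5 * z * z) / (sigma[i] * math.sqrt(2 * math.pi))

def production_probability(obs, states):
    """P(O, s | lambda) = pi_{s_1} b_{s_1}(o_1) * prod_j a_{s_{j-1}, s_j} b_{s_j}(o_j)."""
    p = pi[states[0]] * b(states[0], obs[0])
    for j in range(1, len(obs)):
        p *= A[states[j - 1]][states[j]] * b(states[j], obs[j])
    return p

# A state sequence whose output distributions match the observations
# yields a larger production probability than a mismatched one.
obs = [0.1, -0.2, 5.1, 4.8]
good = production_probability(obs, [0, 0, 1, 1])
bad = production_probability(obs, [1, 1, 0, 0])
assert good > bad
```

In practice the optimal state sequence for the decoding problem is found with dynamic programming (e.g., the Viterbi algorithm) rather than by enumerating candidate sequences as above.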

\subsection{Problem Statement}
Given a time series $X=x_1,x_2,\cdots,x_n$, we aim to learn a
pattern-based hidden Markov model (pHMM) that reveals the dynamics
of the system generating the time series.


Formally, we want to solve the following problem:
\begin{itemize}
\item Convert the time series $X$ into a sequence of line segments,
  $\mathbf{L}=(L_1,L_2,\cdots,L_m)$;
\item Learn a hidden Markov model from observation sequence
  $\mathbf{L}$.
\end{itemize}
so that the production probability
\begin{equation}
P(\mathbf{L},\mathbf{s}^*|\lambda)=\pi_{s_1}b_{s_1}(L_1)\prod_{j=2}^{m}a_{s_{j-1},s_{j}}b_{s_j}(L_j)
\label{eq:prod}
\end{equation}
is maximized, where $\mathbf{s}^*=(s_1,s_2,\cdots,s_m)$ is the
optimal state sequence and $s_j$ generates line segment $L_j$ with
probability $b_{s_j}(L_j)$.



\subsection{Challenges and Overview of Our Approach} Although much
research has been done on HMMs, and HMMs have been successfully
applied in many domains, learning a pattern-based HMM from time
series data remains a non-trivial challenge.

A traditional HMM is learned from an observation sequence, and its
parameters can be estimated with the classic Baum-Welch algorithm.
However, this presumes that the observation sequence is available.
In our case, the observation sequence is essentially unknown, since
it must be learned from the time series itself. Moreover, the number
of possible patterns in a time series is infinite: we represent a
pattern by a line segment, whose slope and duration are both
continuous values.

Furthermore, the process of learning an observation sequence from
the time series cannot be decoupled from the process of learning a
hidden Markov model from the observation sequence.  Intuitively,
producing an observation sequence from a time series is done by time
series segmentation. Existing approaches segment time series by
solving an optimization problem where the objective is to minimize
the difference between the time series and the line segment
sequence. However, these approaches treat segments as independent
and isolated, ignoring the temporal relations between them, even
though such relations are critical for learning an HMM. In our work,
we learn the observation sequence and the HMM simultaneously.

\comment{ In traditional clustering, the elements to be clustered
are a set of independent objects, and the clustering algorithm puts
similar elements into a cluster. In our case, the elements are
connected as a chain, and the similarity between two elements not
only depends on the two elements themselves, but also the elements
that come before or after the two elements. Thus, in order to
cluster the two elements, we need to know the elements that have
temporal relationships with them. This information, however, is
unknown unless we know the HMM already. In other words, we must
learn the observation sequence and the HMM simultaneously. }


We solve the above problem using a two-phase approach. In the first
phase, we initialize the pHMM using a cluster-based approach. The
observation sequence used to learn the initial pHMM is learned from
the time series without any knowledge of the pHMM.  In the second
phase, we refine the model through an iterative process. In each
round, we first segment the time series and cluster the line
segments under the guidance of the previously learned pHMM, and then
update the pHMM based on the new segmentation.
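To illustrate the flavor of the first phase, the toy sketch below is our own simplification, not the paper's implementation: it segments a series into fixed-width windows, represents each window by its fitted slope (a crude "line segment"), clusters segments by slope sign into two states, and re-estimates transition probabilities from the resulting state sequence. The refinement phase would repeat the segmentation and clustering steps under the guidance of the current model rather than with fixed windows.

```python
def fit_line(seg):
    """Least-squares slope of a subsequence (its duration is its length)."""
    n = len(seg)
    xbar, ybar = (n - 1) / 2, sum(seg) / n
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(seg))
    den = sum((x - xbar) ** 2 for x in range(n)) or 1.0
    return num / den

def segment(series, width):
    """Toy segmentation: cut the series into fixed-width windows and
    represent each window by its fitted slope."""
    return [fit_line(series[i:i + width])
            for i in range(0, len(series) - width + 1, width)]

def cluster(slopes):
    """Toy clustering: state 0 = rising segment, state 1 = falling segment."""
    return [0 if s >= 0 else 1 for s in slopes]

def estimate_transitions(states, k=2):
    """Re-estimate transition probabilities A from a state sequence
    (with add-one smoothing)."""
    counts = [[1.0] * k for _ in range(k)]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    return [[c / sum(row) for c in row] for row in counts]

# A sawtooth series alternates rising and falling runs, so the estimated
# transition matrix favors switching between the two states.
series = [t % 10 if (t // 10) % 2 == 0 else 9 - t % 10 for t in range(100)]
states = cluster(segment(series, 10))
A = estimate_transitions(states)
```

On this synthetic input the state sequence alternates strictly, so the off-diagonal transition probabilities dominate the diagonal ones; a real pHMM would of course use many more states and model-guided segmentation.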


%%% Local Variables:
%%% mode: latex
%%% TeX-master: "kdd09"
%%% End:
