\section{Preliminaries}
\label{sec:prob}

In this section, we introduce the background of HMM, give the
problem statement of our work, and discuss its challenges.

\subsection{Background of HMM}
The Hidden Markov Model (HMM) is a classic model for pattern recognition
and has been applied in many fields. It consists of a finite set of
states, each of which is associated with a probability distribution
over the possible output tokens. Transitions among the states are
governed by a set of probabilities. The states are not directly
visible, but the tokens, whose distributions depend on the states,
are visible. Given a sequence of observable tokens, an HMM describes
it by a sequence of hidden states.

Formally, an HMM is described by the following parameters:
\begin{itemize}
\item The set of states $S=\{1,2,\cdots,K\}$.
\item The state transition probabilities $A=\{a_{ij}\}$, $1\leq i,j\leq
  K$, where $a_{ij}$ is the probability of transitioning from state $i$ to
  state $j$.
\item The output probabilities $B=\{b_i(o)\}$, $1\leq i\leq
  K$, where $o$ is an observation taking a continuous or discrete value (or vector),
  and $b_i(o)$ is the probability of state $i$ generating observation $o$.
\item The initial probabilities $\{\pi_i\}$, $1\leq i\leq K$, where $\pi_i$ is
  the probability of the time series beginning with state $i$.
\end{itemize}
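To make these parameters concrete, a discrete-output HMM can be stored as a few arrays. The following Python sketch uses a two-state toy model; all numbers are made up for illustration and are not taken from this paper:

```python
import numpy as np

# Two-state toy HMM with discrete output tokens {0, 1}.
# All values below are purely illustrative.
pi = np.array([0.6, 0.4])        # initial probabilities pi_i
A = np.array([[0.7, 0.3],        # A[i, j] = a_{ij}: P(state i -> state j)
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],        # B[i, o] = b_i(o): P(state i emits token o)
              [0.3, 0.7]])

# Each distribution must sum to one.
assert np.isclose(pi.sum(), 1.0)
assert np.allclose(A.sum(axis=1), 1.0)
assert np.allclose(B.sum(axis=1), 1.0)
```

For continuous observations, each row of $B$ would instead be replaced by a parameterized density (e.g., a Gaussian per state).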

An HMM makes several assumptions. First, both the observed tokens
and the hidden events form sequences, which often correspond to
time. Second, the two sequences are aligned: each observed token
corresponds to exactly one state. Third, the most likely state
sequence up to a time point $t$ depends only on the observed event
at point $t$ and the most likely sequence at point $t-1$.

\paragraph*{Production Probability and Viterbi Algorithm}
Given an HMM $\lambda$ and an observation sequence
$O=(o_1,o_2,\cdots,o_n)$, the production probability
\begin{equation}
P(O,S|\lambda)=\pi_{s_1}b_{s_1}(o_1)\prod_{j=2}^{n}a_{s_{j-1},s_j}
b_{s_j}(o_j)
\end{equation}
is the probability of HMM $\lambda$ generating $O$ along the state
sequence $S=(s_1,\cdots,s_n)$. A larger production probability means
that $\lambda$ and the state sequence $S$ describe $O$ better.
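For a fixed state sequence, the production probability can be evaluated directly from these definitions. The sketch below assumes discrete output tokens and reuses a made-up two-state toy model; `production_probability` is an illustrative helper, not part of the paper:

```python
import numpy as np

# Toy model (illustrative values, not from the paper).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],
              [0.3, 0.7]])

def production_probability(pi, A, B, obs, states):
    """P(O, S | lambda) along a fixed state sequence, discrete outputs."""
    p = pi[states[0]] * B[states[0], obs[0]]   # pi_{s_1} * b_{s_1}(o_1)
    for j in range(1, len(obs)):
        # one transition-emission factor per step: a_{s_{j-1}, s_j} * b_{s_j}(o_j)
        p *= A[states[j - 1], states[j]] * B[states[j], obs[j]]
    return p

# O = (0, 1) along states (0, 1): 0.6*0.9 * 0.3*0.7 = 0.1134
prob = production_probability(pi, A, B, obs=[0, 1], states=[0, 1])
```

The product mirrors the formula term by term: the initial term $\pi_{s_1}b_{s_1}(o_1)$, then one transition-emission factor per subsequent observation.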

One of the fundamental problems in HMM is the \emph{decoding
problem}: given a model $\lambda$ and a sequence of observations
$O$, what is the most likely state sequence that produces the
observations? In other words, it seeks the state sequence
$S^*$ that maximizes the production probability $P(O,S^*|\lambda)$.
The Viterbi algorithm is an efficient method for finding the optimal
state sequence $S^*$. It proceeds recursively, and works in
parallel for all states in a strictly time-synchronous manner.

The key component of the Viterbi algorithm is the \emph{optimal
probability}, denoted $\delta_t(i)$, which is the maximal
probability of the HMM generating the observation segment
$o_1,\cdots,o_t$ along a state sequence $s_1,\cdots,s_t$ with
$s_t=i$. That is,


\begin{eqnarray}
\delta_t(i)&=&\max\limits_{s_1,\cdots,s_{t-1}} P(o_1,\cdots,o_t,s_1,\cdots,s_{t-1},s_t=i|\lambda)\nonumber\\
           &=&\max\limits_{s_1,\cdots,s_{t-1}}\pi_{s_1}b_{s_1}(o_1)\prod_{j=2}^{t}(a_{s_{j-1},s_{j}}b_{s_j}(o_j))\nonumber
\end{eqnarray}

The algorithm scans the entire time span starting from $t=1$, at
which the optimal probability for state $i$ is initialized as
\[\delta_1(i)=\pi_ib_i(o_1)\]
To compute $\delta_t(i)$, assume all $\delta_{t-1}(j)$, $1\leq j\leq K$,
have already been obtained. By the third assumption of HMM
mentioned above, the algorithm computes $\delta_t(i)$ as:
\[\delta_t(i)=\max\limits_{j}(\delta_{t-1}(j)a_{ji})b_i(o_t)\]


When the algorithm reaches the last time point $n$, all optimal
probabilities $\delta_n(i)$, $1\leq i\leq K$, have been obtained. By
comparing these probabilities and backtracking from the largest one,
the algorithm recovers the optimal state sequence.
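The initialization, recursion, and backtracking steps above can be sketched as follows, again on a made-up discrete-output toy model (the numbers are purely illustrative):

```python
import numpy as np

# Toy two-state model (illustrative values, not from the paper).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],      # A[i, j] = a_{ij}
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],      # B[i, o] = b_i(o), discrete tokens
              [0.3, 0.7]])

def viterbi(pi, A, B, obs):
    """Return the most likely state sequence and its probability."""
    K, n = len(pi), len(obs)
    delta = np.zeros((n, K))           # delta[t, i]: optimal probability
    psi = np.zeros((n, K), dtype=int)  # backpointers (argmax states)
    delta[0] = pi * B[:, obs[0]]       # initialization: pi_i * b_i(o_1)
    for t in range(1, n):
        for i in range(K):
            scores = delta[t - 1] * A[:, i]        # delta_{t-1}(j) * a_{ji}
            psi[t, i] = int(np.argmax(scores))
            delta[t, i] = scores[psi[t, i]] * B[i, obs[t]]
    # Backtrack from the largest final probability.
    path = [int(np.argmax(delta[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1], float(delta[-1].max())

path, prob = viterbi(pi, A, B, obs=[0, 0, 1, 1])  # -> path [0, 0, 1, 1]
```

In practice the products are computed in log space to avoid underflow on long sequences; that detail is omitted here for clarity.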


\subsection{Problem Statement and Challenges}
Given a time series $X=x_1,x_2,\cdots,x_n$, we aim to learn a
pattern-based Hidden Markov Model (pHMM), which reveals the system
dynamics well. To make observations meaningful, and to embody more
information about the states, we adopt patterns as the observation
tokens.


Formally, we want to solve the following problem:
\begin{itemize}
\item Convert the time series $X$ into a sequence of line segments, $\mathbf{L}=(L_1,L_2,\cdots,L_m)$, as the observation sequence.
\item Learn a hidden Markov model based on $\mathbf{L}$,
\end{itemize}
so that the production probability
\begin{equation}
P(\mathbf{L},\mathbf{s}^*|\lambda)=\pi_{s_1}b_{s_1}(L_1)\prod_{j=2}^{m}a_{s_{j-1},s_j}b_{s_j}(L_j)
\label{eq:prod}
\end{equation}
is maximized, where $\mathbf{s}^*=(s_1,s_2,\cdots,s_m)$ is the
optimal state sequence and $s_i$ generates line segment $L_i$ with
probability $b_{s_i}(L_i)$.




\paragraph*{Challenges}
Although much research has been done on HMMs, and they have been
successfully applied in many applications, building a pattern-based
HMM for time series data is a non-trivial challenge.

As we know, a traditional HMM is learned from an observation
sequence; its parameters can be estimated with the Baum-Welch
algorithm. The premise, however, is that the observation sequence is
available beforehand, and that it stays unchanged during the
learning process. In our case, the observation sequence is
essentially unknown, since it needs to be learned from the time
series itself. Furthermore, the number of possible patterns in a
time series is infinite: in our case, a pattern is represented by a
line segment, whose slope and duration are continuous values.

Moreover, the process of learning an observation sequence from the
time series cannot be decoupled from the process of learning a
hidden Markov model from that observation sequence. Intuitively,
producing an observation sequence from a time series amounts to time
series segmentation. Existing approaches segment the time series by
solving an optimization problem whose objective is to minimize the
difference between the time series and the sequence of line
segments. But these approaches treat segments as isolated, and
ignore the temporal relations between them, which are critical for
learning an HMM. In other words, we must learn the observation
sequence and the HMM simultaneously.
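To make the segmentation step concrete, below is a minimal sliding-window sketch of the classic style of segmentation discussed above (it illustrates the existing error-minimizing approaches, not the method proposed in this paper): each segment is grown greedily until its least-squares fit error exceeds a threshold.

```python
import numpy as np

def fit_error(y):
    """Residual sum of squares of a least-squares line fit to y."""
    t = np.arange(len(y))
    coeffs = np.polyfit(t, y, 1)          # fit y ~ a*t + b
    resid = y - np.polyval(coeffs, t)
    return float(np.sum(resid ** 2))

def sliding_window_segment(x, max_error):
    """Greedily grow each segment until its fit error exceeds max_error.

    Returns a list of (start, end) index pairs, inclusive; consecutive
    segments share their boundary point, forming a connected chain.
    """
    segments, start = [], 0
    while start < len(x) - 1:
        end = start + 1                   # a segment needs at least two points
        while end + 1 < len(x) and fit_error(x[start:end + 2]) <= max_error:
            end += 1
        segments.append((start, end))
        start = end
    return segments

# A series rising then falling splits into two line segments.
segs = sliding_window_segment(np.array([0.0, 1.0, 2.0, 1.0, 0.0]),
                              max_error=0.01)  # -> [(0, 2), (2, 4)]
```

Note that each segment here is scored in isolation; no term couples a segment to its neighbors, which is exactly the limitation discussed above.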



We solve the above problem with a two-phase approach. In the first
phase, we initialize a pHMM using a cluster-based approach; that is,
the observation sequence used to learn the initial pHMM is derived
from the time series without any knowledge of the pHMM. In the
second phase, we refine the model with an iterative process. In each
round, we first segment the time series and cluster the line
segments under the guidance of the previously learned pHMM, and then
update the pHMM based on the new segments.



















%%% Local Variables:
%%% mode: latex
%%% TeX-master: "kdd09"
%%% End:
