
\section{The Initial pHMM}
\label{sec:model}

We introduce our approach to building the initial pHMM in three
steps. First, we segment the time series. Second, we cluster the
line segments. Third, we learn the first pHMM from the clusters.


\subsection{From Time Series to Line Segments}
\label{sec:segment}


%Our first step of hidden state discovery is to represent time series
%using line segments, which will be the basic component of a time
%series pattern or trend. We partition $X$ into disjoint segments,
%each represented by a line. This converts $X$ to a sequence of line
%segments $\mathcal{L}=\{L_1, \cdots, L_m\}$.

In phase one, information about latent states is not available, so
we perform a traditional segmentation. We use a bottom-up approach
to convert the time series $X$ into a piecewise linear
representation~\cite{keogh01}. Initially, we approximate $X$ with
$\lfloor \frac{n}{2}\rfloor$ line segments. The $i$-th line, $L_i$,
connects $x_{2i-1}$ and $x_{2i}$. Next, we iteratively merge
neighboring lines: in each iteration, we merge the pair of
neighboring segments whose merged line has the minimal approximation
error. The merging process repeats until every possible merge
produces a line whose error exceeds a user-specified threshold,
denoted by $\varepsilon_r$. % Line segments
% $\{L_1,L_2,\cdots,L_m\}$ constitute the segmentation of phase
% one. % Later, in the refinement phase, threshold $\varepsilon_r$ is also
% % used to limit the maximal square error of the lines.
Clearly, without knowledge of the latent states, it is very likely
that the initial segmentation is not optimal for our goal (see
Section~\ref{sec:why}).
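As an illustration, the bottom-up segmentation can be sketched as
follows. This is a minimal sketch, not the paper's exact
implementation: the function name \texttt{bottom\_up\_segment}, the
index-pair representation of segments, and the use of least-squares
residuals as the merge cost are our illustrative choices.

```python
import numpy as np

def bottom_up_segment(x, eps_r):
    """Bottom-up piecewise linear segmentation of series x.

    Starts from floor(n/2) two-point lines and repeatedly merges the
    neighboring pair whose least-squares fit has the smallest error,
    stopping once every possible merge would exceed eps_r.
    """
    n = len(x)
    # initial segments: index pairs (2i-2, 2i-1) in 0-based indexing
    segs = [(i, min(i + 1, n - 1)) for i in range(0, n - 1, 2)]

    def fit_error(a, b):
        # sum of squared residuals of a least-squares line over x[a..b]
        t = np.arange(a, b + 1)
        if len(t) < 3:
            return 0.0
        coef = np.polyfit(t, x[a:b + 1], 1)
        return float(np.sum((x[a:b + 1] - np.polyval(coef, t)) ** 2))

    while len(segs) > 1:
        costs = [fit_error(segs[i][0], segs[i + 1][1])
                 for i in range(len(segs) - 1)]
        i = int(np.argmin(costs))
        if costs[i] > eps_r:   # cheapest merge exceeds the threshold
            break
        segs[i:i + 2] = [(segs[i][0], segs[i + 1][1])]
    return segs
```

On a perfectly linear series all merges are free, so the procedure
collapses to a single segment; a series with a change in trend is
split at the change point.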


\subsection{From Line Segments to Clusters}

After obtaining the line segments, we group them into clusters
$\{C_1,C_2,\cdots, C_K\}$. A key issue is how to define the
similarity between line segments. Had our goal been to summarize or
compress
the time series, we could have used the approximation error or
minimal description length as the objective function. However, as
our goal is to learn a pattern-based HMM, such an approach is not
always optimal.


%\paragraph*{Clustering Criteria}
%The objective of traditional clustering methods is to maximize
%intra-cluster similarity and minimize inter-cluster similarity. It
%is however not optimized for learning a pHMM.
We consider two clustering criteria:

\begin{itemize}
\item The similarity criterion. This is the criterion used in
  traditional clustering. In our case, line segments in the same
  cluster should have similar shapes (slopes and lengths), while
  line segments in different clusters should have different shapes.
\item The temporal criterion. If $L_i$ and $L_j$ belong to the same
  cluster, then $L_{i+1}$ and $L_{j+1}$, which follow $L_i$ and
  $L_j$ respectively, should follow the same distribution over
  clusters; more often than not, they should belong to the same
  cluster.
\end{itemize}

We now formalize these two criteria. For the similarity criterion,
we
measure the variance of the line segments in each cluster. % Since
% slopes and lengths of different lines vary, we use relative
% error. %instead of the difference of two lines'
% %lengths or slopes.
For cluster $C_i$, the relative error is computed as:
\[R(i)=\frac{1}{|C_i|}\sum_{(l_j,\theta_j)\in
  C_i}\left\{\left(\frac{l_j-\bar l_{i}}{\bar
    l_{i}}\right)^2+\left(\frac{\theta_j-\bar \theta_{i}}{\bar
    \theta_{i}}\right)^2\right\}\]
where $|C_i|$ is the number of lines in cluster $C_i$, and $\bar
l_{i}$ and $\bar \theta_{i}$ are the average length and the average
slope of the lines in $C_i$. Clearly, the smaller $R(i)$ is, the
more similar the line segments in $C_i$ are.
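As a concrete illustration, $R(i)$ can be computed directly from the
$(l_j,\theta_j)$ pairs of a cluster. The helper name
\texttt{relative\_error} is hypothetical, and nonzero average length
and slope are assumed (otherwise the relative terms are undefined):

```python
def relative_error(cluster):
    """Relative error R(i) of a cluster of (length, slope) pairs.

    Assumes the average length and average slope are nonzero.
    """
    ls = [l for l, _ in cluster]
    ths = [t for _, t in cluster]
    lbar = sum(ls) / len(ls)
    tbar = sum(ths) / len(ths)
    return sum(((l - lbar) / lbar) ** 2 + ((t - tbar) / tbar) ** 2
               for l, t in cluster) / len(cluster)
```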

For the second criterion, we use entropy to measure the uncertainty
of the clusters following the lines of cluster $C_i$:
\begin{equation}
I(i)=\sum_{j=1}^{K} -p(j|i)\log p(j|i) \label{eq:entro}
\end{equation}
where $p(j|i)$ denotes the probability that a line in $C_i$ is
followed by a line in $C_j$. Intuitively, the smaller the $I(i)$,
the more certain we are about the clusters that follow lines in
$C_i$.
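The entropy in Eq.~\ref{eq:entro} is the standard Shannon entropy of
the successor-cluster distribution; a one-line sketch (with a
hypothetical function name, using the natural logarithm):

```python
import math

def transition_entropy(p_row):
    """Entropy I(i) of the successor distribution p(.|i) over clusters.

    p_row holds p(j|i) for j = 1..K; zero-probability terms are skipped.
    """
    return -sum(p * math.log(p) for p in p_row if p > 0)
```

A deterministic successor gives entropy $0$; a uniform distribution
over two clusters gives $\log 2$.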

A straightforward way of clustering is to construct an objective
function based on these two criteria as:
\begin{equation}
F = \alpha\cdot R+ (1-\alpha)\cdot I \label{eq:objective}
\end{equation}
where $R = \sum_{i=1}^K |C_i| R(i)$ is the overall relative error of
all clusters, $I=\sum_{i=1}^{K} p(i)I(i)$ is the overall entropy of
all clusters, and $\alpha \in[0,1]$ is a user-provided parameter.
Then we cluster the line segments to minimize the objective function
$F$. However, it is hard to set a reasonable $\alpha$, as the
following example illustrates. Assume we want to cluster three
lines, $L_1$, $L_2$ and $L_3$. They have the same length, and their
slopes satisfy: $\theta_1<\theta_2<\theta_3$. Moreover, $L_1$ and
$L_3$ are followed by lines in the same cluster while $L_2$ is
followed by a line in another cluster, and the lines in these two
clusters have very different shapes. If we set a very large
$\alpha$, the similarity criterion is dominant. It is likely that
all three lines are clustered into one cluster. On the other hand,
if we set a small $\alpha$, the temporal criterion will dominate. A
possible result is that $L_1$ and $L_3$ are put into the same
cluster $C$ while $L_2$ is not.  However, this is unreasonable, as
$L_2$ is ``enveloped'' by $L_1$ and $L_3$.

%group $L_1$ and $L_3$ to a cluster $C$, $L_2$ and $L_4$ to another
%cluster $C'$. With this choice, it happens that cluster $C$ and $C'$
%interact with each other. Clearly, using Eq.\ref{eq:objective}
%cannot avoid this phenomena.
%\item It is hard to set parameter $\alpha$, since $R$ and $I$ have a very
%different value range.




%\paragraph*{Algorithm}
In this paper, instead of optimizing Eq.~\ref{eq:objective} directly,
we adopt a greedy approach. Initially, each line is treated as a
singleton cluster. Then, at each iteration, we merge two clusters by
considering the two criteria in turn. The approach consists of three
steps:

\textbf{Step 1.}  (Similarity Criterion) For each cluster $C_i$,
find its most similar cluster, called $C_i$'s candidate
cluster. % In this step, we consider the
% similarity criterion. The cluster and its candidate cluster form a
% cluster pair. More specifically, we have:
\begin{definition}[Candidate cluster]
  For each cluster $C_i$, its candidate cluster, denoted as
  $T_i$, is the cluster that satisfies:
\begin{enumerate}
\item $R(T_i\cup C_i)\leq R(C\cup C_i)$ holds for any
  $C\neq T_i$, where $T_i\cup C_i$ is the new cluster generated by merging $T_i$ and
  $C_i$, and $R(T_i\cup C_i)$ is its relative error.
\item $R(T_i\cup C_i)\leq \varepsilon_c$, where $\varepsilon_c$ is a
user-specified threshold, called the relative error threshold.
\end{enumerate}
\end{definition}

\textbf{Step 2.} (Temporal Criterion) For any cluster pair
$(C_i,T_i)$, compute the entropy of new cluster $C_i\cup T_i$.

\textbf{Step 3.} Merge the pair with minimal entropy into a new cluster.

This process continues until every possible merge results in a
relative error that exceeds the threshold $\varepsilon_c$.
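The three steps can be sketched as follows. This is a simplified
sketch under our own choices: the function names are hypothetical,
clusters are represented as lists of line indices, lines are assumed
to be in temporal order (line $j$ is followed by line $j+1$), and
average lengths and slopes are assumed nonzero.

```python
import math

def relative_error(cluster):
    # relative error R of a set of (length, slope) pairs
    ls = [l for l, _ in cluster]
    ths = [t for _, t in cluster]
    lb, tb = sum(ls) / len(ls), sum(ths) / len(ths)
    return sum(((l - lb) / lb) ** 2 + ((t - tb) / tb) ** 2
               for l, t in cluster) / len(cluster)

def greedy_cluster(lines, eps_c):
    """Greedy clustering of temporally ordered (length, slope) lines.

    Step 1 finds each cluster's candidate (most similar merge whose
    relative error stays within eps_c); Steps 2-3 merge the candidate
    pair whose merged cluster has minimal successor entropy.
    """
    clusters = [[j] for j in range(len(lines))]

    def successor_entropy(members):
        # entropy of the distribution of clusters following `members`
        counts = {}
        for j in members:
            if j + 1 < len(lines):
                k = next(c for c, cl in enumerate(clusters) if j + 1 in cl)
                counts[k] = counts.get(k, 0) + 1
        total = sum(counts.values()) or 1
        return -sum(v / total * math.log(v / total)
                    for v in counts.values())

    while len(clusters) > 1:
        # Step 1: candidate cluster for each cluster, within eps_c
        pairs = []
        for a in range(len(clusters)):
            best, best_r = None, eps_c
            for b in range(len(clusters)):
                if a != b:
                    r = relative_error([lines[j]
                                        for j in clusters[a] + clusters[b]])
                    if r <= best_r:
                        best, best_r = b, r
            if best is not None:
                pairs.append((a, best))
        if not pairs:          # every merge exceeds eps_c: stop
            break
        # Steps 2-3: merge the pair with minimal merged entropy
        a, b = min(pairs, key=lambda p:
                   successor_entropy(clusters[p[0]] + clusters[p[1]]))
        clusters[a] += clusters[b]
        del clusters[b]
    return clusters
```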

\paragraph*{Connection between the two measurements}
In clustering, we use the similarity and temporal criteria to
measure the quality of clusters. In the problem statement, we use the
production probability to measure the quality of the learned model.
In fact, we can establish a connection between these two measurements.

We decompose the production probability into $P'$ and $P''$:
\begin{eqnarray}
P(\mathbf{L},\mathbf{s}^*|\lambda)&=&\pi_{s_1}b_{s_1}(L_1)\prod_{j=2}^{m}a_{s_{j-1},s_{j}} b_{s_j}(L_j)\nonumber\\
           &=&\prod_{j=1}^{m}b_{s_j}(L_j)\cdot\pi_{s_1}\prod_{j=2}^{m}a_{s_{j-1},s_{j}}\nonumber\\
           &=&P'\cdot P''
\label{eq:p1p2}
\end{eqnarray}
$P'$ measures how well the states match the occurrences of the line
segments, and it corresponds to the similarity criterion.  $P''$
measures the certainty of transitions between states, and it
corresponds to the temporal criterion. So they are consistent with
each other.



\subsection{From Clusters to HMM}
Based on the obtained clusters, we initialize the hidden Markov
model $\lambda$ as follows. Given the clusters
$\{C_1,C_2,\cdots,C_K\}$, we initialize an HMM with $K$ states,
$\{1,2,\cdots,K\}$, in which state $i$ corresponds to cluster $C_i$.
In other words, we assume lines in each cluster represent the
typical fluctuation of time series when the system stays in the same
state.

Before discussing how to initialize the output probabilities, we
first define the output probability with line segments as
observations. We assume slopes and lengths are independent of each
other. For a line $L=(l,\theta)$, the output probability is defined
as a product of two probabilities:
\[b_i(L)=p_l(l|i)p_s(\theta|i)\]
where $p_l(l|i)$ is the probability of state $i$ generating a line
of length $l$, and $p_s(\theta|i)$ is the probability of state $i$
generating a line of slope $\theta$.

In many real life applications, the system operates in different
states; and in each state, the system exhibits stable behavior. Each
observation of one state can be regarded as the stable behavior plus
some slight fluctuations, or errors. Since observational error in an
experiment is often described by a Gaussian distribution, we use it
here to describe the distribution of line segments. Formally, we
assume $p_l(\cdot|i)$ and $p_{s}(\cdot|i)$ follow one-dimensional
Gaussian distributions, $\mathcal{N}(\mu_{il},\sigma^2_{il})$ and
$\mathcal{N}(\mu_{is},\sigma^2_{is})$, respectively.
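Under the Gaussian assumption, $b_i(L)$ is a product of two Gaussian
densities. A minimal sketch, with hypothetical function names and a
plain dictionary for the state's parameters:

```python
import math

def gaussian_pdf(x, mu, var):
    """Density of N(mu, var) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def output_prob(line, state):
    """b_i(L) = p_l(l|i) * p_s(theta|i) under the Gaussian assumption.

    `state` holds the four parameters of state i under the
    hypothetical keys mu_l, var_l, mu_s, var_s.
    """
    l, theta = line
    return (gaussian_pdf(l, state["mu_l"], state["var_l"]) *
            gaussian_pdf(theta, state["mu_s"], state["var_s"]))
```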

We conducted experiments to verify this assumption; the results are
shown in Figure~\ref{fig:gau}. For both the Spot and Power datasets,
we randomly select one large cluster, since a large cluster contains
more lines and thus demonstrates the distribution more clearly. It
can be seen that both length and slope are well approximated by
Gaussian distributions.

\begin{figure}[htbp]
\centering
\begin{tabular}[h]{cc}
\includegraphics[width=3.8cm,height=3.8cm]{figure/gaussian1.eps} &
\includegraphics[width=3.8cm,height=3.8cm]{figure/gaussian2.eps} \\
(a)  Spot Dataset & (b)  Power Dataset
\end{tabular}
\caption{Slope and length distribution \label{fig:gau}}
\end{figure}



To initialize the output probabilities of state $i$, we need to
estimate four parameters: $\mu_{il}$, $\sigma^2_{il}$, $\mu_{is}$,
and $\sigma^2_{is}$. We estimate $\mu_{il}$ and $\sigma^2_{il}$
as the mean and variance of lines' lengths in $C_i$, and estimate
$\mu_{is}$ and $\sigma^2_{is}$ as the mean and variance of lines'
slopes in $C_i$.

We build a state sequence to initialize the transition and initial
probabilities. In the line sequence $(L_1,L_2,\cdots,L_m)$, we
replace each line with the cluster it belongs to. Since cluster
$C_i$ corresponds to state $i$ ($1\leq i\leq K$), this yields a
state sequence. Based on this sequence, we estimate the transition
and initial probabilities as in a traditional HMM.
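Putting the estimates together, the initialization can be sketched as
follows. The function name and array layout are our illustrative
choices; since there is a single observed sequence, the initial
distribution is set from the first label, and a cluster with a single
line keeps a placeholder variance of $1$ (in practice one would apply
a variance floor).

```python
import numpy as np

def init_hmm(lines, labels, K):
    """Initialize pHMM parameters from clustered line segments.

    lines: temporally ordered (length, slope) pairs; labels[j] is the
    cluster (state) of line j, in {0..K-1}. Returns per-state means and
    variances of (length, slope), the transition matrix A, and the
    initial distribution pi.
    """
    lines = np.asarray(lines, dtype=float)
    labels = np.asarray(labels)

    # output parameters: mean/variance of lengths and slopes per state
    mu = np.zeros((K, 2))
    var = np.ones((K, 2))   # placeholder variance for singleton clusters
    for i in range(K):
        members = lines[labels == i]
        mu[i] = members.mean(axis=0)
        if len(members) > 1:
            var[i] = members.var(axis=0)

    # transition probabilities from the induced state sequence
    A = np.zeros((K, K))
    for s, t in zip(labels[:-1], labels[1:]):
        A[s, t] += 1
    A = A / np.maximum(A.sum(axis=1, keepdims=True), 1)

    pi = np.zeros(K)
    pi[labels[0]] = 1.0
    return mu, var, A, pi
```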

%%% Local Variables:
%%% mode: latex
%%% TeX-master: "kdd09"
%%% End:
