\section{Problem Statement}
\label{sec:prob}
Given a time series $X=\{x_1,x_2,\cdots,x_n\}$, our goal is to build a pattern-based
Hidden Markov Model (PHMM). Each pattern is a line represented by its length and slope, denoted
as $L = (l, \theta)$, where $l$ is the length
and $\theta$ is the slope. The intercept of the line segment is
ignored, so that patterns are invariant to vertical shifts of the time series.

A PHMM, denoted as $\lambda$, includes the following components:
\begin{itemize}
\item The set of states $S=\{1,2,\cdots,K\}$, where state $i$ corresponds to the $i$-th pattern.
\item The state transition probabilities $A=\{a_{ij}\}$, $1\leq i,j\leq K$, where $a_{ij}$ is the probability of
  transitioning from state $i$ to state $j$.
\item The output probabilities $B=\{b_i(L)\}$, $1\leq i\leq K$, where the observation $L=(l,\theta)$ is a line and $b_i(L)$ is the probability of state $i$ generating
  line segment $L$. Note that, unlike in a traditional HMM, an observation in our work is a line instead of a single value.
\item The initial probabilities $\{\pi_i\}$, $1\leq i\leq K$, where $\pi_i$ is the probability that the time series begins in state $i$.
\end{itemize}

We use two criteria to measure the quality of a learned PHMM. The first
concerns the quality of the patterns themselves, and the second concerns the temporal
relations between them. We present the details of these criteria in the
corresponding phases of learning.






\section{Initializing the HMM}
\label{sec:model}

In this section, we discuss our approach to initializing the HMM. Our
focus is on discovering patterns that have specific semantics. The
refinement phase is discussed in the next section.


\subsection{From Time Series to Line Segments}
\label{sec:segment}

Our first step of hidden state discovery is to represent the time series
using line segments, which will be the basic components of a time
series pattern or trend. We partition $X$ into disjoint segments, each represented by
a line. This converts $X$ into a sequence of line segments $\{L_1,
\cdots, L_m\}$.


Since no information about latent states is available in the initial phase, we perform a
traditional segmentation. A
bottom-up approach is used to convert the time series $X$ into a piecewise
linear representation~\cite{keogh01}. Initially, we approximate $X$
with $\lfloor \frac{n}{2}\rfloor$ linear segments; in other
words, $L_i$ is the line segment that connects $x_{2i-1}$ and
$x_{2i}$. Next, we use a least-squares line $y=ax+b$ to
approximate two neighboring line segments. In each iteration, we
merge the two neighboring segments whose merged line segment
has the minimal approximation error. The merging process repeats
until every possible merge leads to a line whose error exceeds a
user-specified threshold, denoted by $\varepsilon_r$. Finally, we obtain a sequence of line
segments $\{L_1,L_2,\cdots,L_m\}$.
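The bottom-up procedure can be sketched in plain Python (a simplified sketch: it refits merged lines in closed form and recomputes all merge costs each pass, so it is quadratic; the function names and the even-length assumption are ours, not the paper's):

```python
def fit_line(xs, ys):
    """Closed-form least-squares line y = a*x + b; returns (a, b, sse)."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    a = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    b = ybar - a * xbar
    return a, b, sum((a * x + b - y) ** 2 for x, y in zip(xs, ys))

def bottom_up_segment(series, eps_r):
    """Bottom-up piecewise linear segmentation: start from ~n/2 two-point
    segments and repeatedly merge the neighboring pair whose merged
    least-squares fit has the smallest error, stopping once every
    possible merge would exceed eps_r. Returns inclusive (start, end)
    index pairs. Assumes an even-length series for simplicity."""
    n = len(series)
    segs = [(i, i + 1) for i in range(0, n - 1, 2)]

    def merge_cost(s, e):
        return fit_line(list(range(s, e + 1)), series[s:e + 1])[2]

    while len(segs) > 1:
        # approximation error of each possible neighboring merge
        costs = [merge_cost(a[0], b[1]) for a, b in zip(segs, segs[1:])]
        k = costs.index(min(costs))
        if costs[k] > eps_r:
            break                     # every merge now exceeds the bound
        segs[k:k + 2] = [(segs[k][0], segs[k + 1][1])]
    return segs
```

On a series that rises then falls, the two linear pieces are recovered as two segments.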

In the refinement phase, we re-segment $X$ based on the learned PHMM, with
$\varepsilon_r$ remaining the threshold on the maximal approximation error
of the lines.

\subsection{From Line Segments to Clusters}

After obtaining the line segments, we group
them into $K$ clusters\footnote{$K$ might differ across
  rounds.} $C=\{C_1,C_2,\cdots, C_K\}$.

A key issue is of course how to define the similarity between lines.
If our goal is to summarize or compress the data, then we can use
approximation error or minimum description length as our objective
function. However, when our goal is to find semantic patterns, such an approach is
not always optimal.
%We use an example to illustrate the problem.
%Let the system contains 4 simple states $\{a,b,c,d\}$, with state
%transition probabilities
%$$p(b|a)=0, \;\; p(c|a)=0.9, \;\; p(d|c)=0.9,\; \cdots$$
%Suppose an approximation error based approach finds that if we split
%the time series into two segments, we get the minimal error. Assume
%the first segment maps to state $a$ and the second segment to state
%$b$. However, since $p(b|a)=0$, the state sequence detected by the
%naive approach is extremely unlikely.  On the other hand, if we
%partition it into 3 segments, which maps to state $a$, $c$, and $d$
%respectively, we may find that it has slightly higher line
%approximation error, but the likelihood of the state sequence
%becomes much higher since $p(c|a)=0.9$ and $p(d|c)=0.9$. Clearly, we
%need to take the hidden states and their transition probabilities
%into consideration in segmenting the time series.

\paragraph*{Clustering Criteria}
% Each $C_i$ can be represented by the
% 'central' line segment of those line segments in the cluster. More
% specifically, we use $L_i$ and $\Theta_i$, the median length and the
% median slope, to represent $C_i$:
% \[L_i=\frac{\sum_{j=1}^{k}l_{i,j}}{k}\;\;\;\;\;
% \Theta_i=\frac{\sum_{j=1}^{k}\theta_{i,j}}{k}\] where ($l_{i,j},
% \theta_{i,j}$) is the length and slope of the j-th line segment in
% cluster $C_i$.


The objective of traditional clustering methods is to maximize
intra-cluster similarity and minimize inter-cluster similarity. This
objective, however, is not optimized for forecasting. Our approach considers two
clustering criteria:

\begin{itemize}
\item The similarity criterion.  This is the same criterion for
  traditional clustering. In our case, the line segments in the same
  cluster should have similar shapes (slopes and lengths), and the
  line segments in different clusters have different shapes.
\item The temporal relation criterion. We want stable temporal relations
to exist between clusters. Specifically, if $L_i$ and $L_j$ belong to cluster $C_1$, we hope $L_{i+1}$ and $L_{j+1}$ also belong to one common cluster.
\end{itemize}

%\begin{figure}[htbp]
%  \centering
%  \begin{tabular}[h]{cc}
%    \includegraphics[width=0.44\linewidth]{figure/cluster22.eps} &
%    \includegraphics[width=0.44\linewidth]{figure/cluster33.eps}\\
%    (a)  Similarity only &
%    (b)  Similarity and temporal
%  \end{tabular}
%  \caption{Two clustering strategies}\label{fig:clus}
%\end{figure}

\begin{figure}[!htp]
  \centering
\includegraphics[width=3.4cm,height=3.8cm]{figure/cluster1.eps}
  \caption{Clustering Strategy}
\label{fig:cluster}
\end{figure}

Figure~\ref{fig:cluster} illustrates the effect of the temporal relation
criterion on the clustering process. Segments $A_1$ and $A_2$ have
similar shapes, as do $A_3$ and $A_4$ (with $A_1$ and $A_2$
being more similar to each other than $A_3$ and $A_4$ are). However, we
should not cluster $A_1$ and $A_2$ together, because the states that follow
them ($B$ and $C$) have very different shapes, which indicates that
although $A_1$ and $A_2$ differ only slightly, the difference is
semantically important. On the other hand, although $A_3$ and $A_4$
differ more, their difference is not semantically important, and they
can safely be clustered together.

\comment{
Assume we use an agglomerative
clustering algorithm, and we want to merge 3 clusters $\{C_1,C_2,
C_3\}$ into 2 clusters. In the figure, each triangle and square represents a line.
The distance between triangles and squares means the similarity
between lines. Assume all triangle line
segments are followed by line segments in cluster $C_4$, and all
square line segments are followed by line segments in cluster $C_5$.
Which two of $\{C_1, C_2, C_3\}$ should we merge?
Traditional clustering algorithms, which observe the similarity
criterion only, will merge $C_1$ and $C_2$, since they are closer
(based on the Euclidean distance) than $C_2$ and $C_3$. Let $C= C_1
\cup C_2$ be the cluster resulted by merging $C_1$ and $C_2$. Now,
half of the line segments in $C$ will be followed by line segments in
cluster $C_4$, and half followed by those in cluster $C_5$. Thus, if
the current state is $C$, we cannot tell whether the next state is
$C_4$ or $C_5$. On the other hand, if we merge $C_2$ and $C_3$,
although we lose a little on the similarity aspect, the merged cluster
will have higher predictive power, as far as predicting the next state
is concerned.}

\paragraph*{Objective Function} We now formalize the two criteria.
For the similarity criterion, we measure the variance of the line
segments in a cluster.

We use $R(i)$ to denote the relative error of cluster $C_i$:
\[R(i)=\sum_{(l_j,\theta_j)\in C_i}\{(\frac{l_j-{\bar l_{i}}}{ \bar
  l_{i}})^2+(\frac{\theta_j-{\bar \theta_{i}}}{\bar
  \theta_{i}})^2\}\]
where $\bar l_{i}$ and $\bar \theta_{i}$ are the average length and
the average slope of the lines in $C_i$. Clearly, the smaller $R(i)$ is,
the more similar the line segments in $C_i$ are.

For the temporal relation criterion, we use entropy to measure the
uncertainty about the clusters that follow the lines in a cluster. For cluster $C_i$, we have
\begin{equation}
I(i)=\sum_{j=1}^{K} -p(j|i)\log p(j|i) \label{eq:entro}
\end{equation}
where $p(j|i)$ denotes the probability that a line in $C_i$ is followed by a line
in $C_j$. Intuitively, the smaller $I(i)$ is, the more certain we
are about which clusters follow the lines in $C_i$.
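Both quantities are straightforward to compute; a minimal sketch (the function names are ours, and we assume nonzero mean length and slope, since the relative error divides by them):

```python
import math
from collections import Counter

def relative_error(cluster):
    """R(i): summed squared relative deviations of length and slope from
    the cluster means, following the formula for R(i) above.
    cluster: list of (length, slope) pairs."""
    lbar = sum(l for l, _ in cluster) / len(cluster)
    tbar = sum(t for _, t in cluster) / len(cluster)
    # assumes lbar and tbar are nonzero; a zero mean slope would need care
    return sum(((l - lbar) / lbar) ** 2 + ((t - tbar) / tbar) ** 2
               for l, t in cluster)

def successor_entropy(successor_labels):
    """I(i) of Eq. (1): entropy of the cluster labels that follow the
    lines of C_i; successor_labels lists, for each line in C_i, the
    cluster of the line that follows it."""
    counts = Counter(successor_labels)
    n = len(successor_labels)
    return -sum((c / n) * math.log(c / n) for c in counts.values())
```

A cluster of identical lines has zero relative error, and a cluster always followed by one single cluster has zero entropy.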


We can construct an objective function based on these two criteria as:
\begin{equation}
F = \alpha\cdot R+ (1-\alpha)\cdot I
\label{eq:objective}
\end{equation}
to guide the clustering process,
where $R = \sum_{i=1}^K |C_i| R(i)$ is the overall relative error of all
clusters, $I=\sum_{i=1}^{K} p(i)I(i)$ is the overall entropy of all
clusters, and $\alpha \in[0,1]$ is a user-provided parameter that
decides how much the user favors the similarity criterion over the
temporal relation criterion. However, two factors make this approach
a poor choice.
\begin{itemize}
\item we need to avoid interaction of clusters caused by temporal relation criteria.
We illustrate it with an example. Assume we want to cluster three lines, $L_1$, $L_2$
and $L_3$. They have same lengthes, and their slopes holds: $l_1<l_2<l_3$.
Moreover, $L_1$ and $L_3$ is followed by lines in same cluster while $L_2$ not.
It is obvious that as for temporal relation criteria, it is better to group $L_1$ and $L_3$ to get
cluster $C$.
%Let the representative line of a cluster is the average length and slope.
With this choice, it happens
that $L_2$ is in the region of $C$, but it doesn't belong to $C$, which we call
interaction of clusters. Clearly, using Eq.\ref{eq:objective} cannot avoid this phenomena.
\item $R$ and $I$ measure different properties of the clusters, and
their influence varies during clustering, so it is unreasonable to
combine them with a fixed weight $\alpha$.
\end{itemize}


\paragraph*{Algorithm}
%We can devise a moving-based algorithm that uses Eq.~\ref{eq:objective}
%as a direct optimization goal for clustering. In the beginning, we
%randomly partition all segments into $k$ clusters. Then we move a
%segment from one cluster to another if such a move results in the
%biggest decrease of Eq.~\ref{eq:objective}. We repeat the process until
%no such move is possible. Albeit simple, this approach has several
%drawbacks. First, it takes a long time to converge, and second, we
%need to know $k$, the total number of clusters, before hand.

In this paper, instead of optimizing Eq.~\ref{eq:objective} directly, we adopt a
greedy approach that finds, among all possible clusters that have
small relative error, a cluster that minimizes the overall
entropy. Initially, each line segment is regarded as a cluster of its
own, and at each iteration we merge two clusters. We treat the two criteria separately: first, we generate
``candidate'' cluster pairs that minimize the relative error, and then we choose the pair that leads to
the minimal entropy.

\begin{definition}[Candidate cluster]
  For each cluster $C_i$, its candidate cluster, denoted as
  $\candidate(C_i)$, is the cluster that satisfies:
\begin{enumerate}
\item The relative error of $\candidate(C_i) \cup C_i$ is smaller than that of
  $C_j \cup C_i$, for all $C_j$. % where $C_j \cup C_i$
\item The relative error of $\candidate(C_i) \cup C_i$ is less than a
  user-provided error bound $\varepsilon _c$.
\end{enumerate}
\end{definition}

Note that $C_i$ does not necessarily have a candidate cluster. Also,
the candidate relationship is not symmetric: consider the three clusters
in Figure~\ref{fig:clus}; since $C_2$ is closer to $C_1$ than to $C_3$, we have
$\candidate(C_3) = C_2$, but $\candidate(C_2) = C_1$. It can easily be
shown that restricting merges to candidate cluster pairs avoids the
interaction of clusters.

At each iteration, for each cluster, we compute the relative error of
the new cluster generated by merging it with every other cluster. If
the merged cluster satisfies the above two conditions, the other cluster is picked as
its candidate cluster and a candidate pair is generated. Assume we
obtain $k$ candidate pairs $\{(C_i, \candidate(C_i)), i=1,2,\cdots, k\}$.
Note that $k$ may not equal $K$, the number of clusters, since
certain clusters may have no candidate cluster. After
obtaining all pairs, we compute the overall entropy resulting from
merging each pair, and merge the pair with the minimal
overall entropy. If more than one pair attains the minimal overall
entropy, we choose the pair with the smaller relative error. This process
continues until every possible merge results in a relative error that
exceeds the threshold $\varepsilon_c$.
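The greedy procedure can be sketched as follows (a simplified, unoptimized sketch: we take the temporal successor of line $j$ to be line $j+1$, recompute candidates from scratch each round, and all names are ours):

```python
import math
from collections import Counter

def _rel_err(pts):
    """Relative error R of a set of (length, slope) pairs."""
    lb = sum(l for l, _ in pts) / len(pts)
    tb = sum(t for _, t in pts) / len(pts)
    return sum(((l - lb) / lb) ** 2 + ((t - tb) / tb) ** 2 for l, t in pts)

def _entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def greedy_cluster(lines, eps_c):
    """Start from singletons; each round, build candidate pairs (nearest
    partner by merged relative error, within eps_c), then apply the
    merge minimizing overall successor entropy, ties broken by smaller
    relative error. Returns clusters as lists of line indices."""
    clusters = [[i] for i in range(len(lines))]

    def overall_entropy(clusts):
        label = {i: c for c, ms in enumerate(clusts) for i in ms}
        total = n = 0
        for ms in clusts:
            succ = [label[i + 1] for i in ms if i + 1 in label]
            if succ:
                total += len(succ) * _entropy(succ)
                n += len(succ)
        return total / n if n else 0.0

    while True:
        pairs = []
        for a, ca in enumerate(clusters):          # candidate of each cluster
            best, best_e = None, float('inf')
            for b, cb in enumerate(clusters):
                if a != b:
                    e = _rel_err([lines[i] for i in ca + cb])
                    if e < best_e:
                        best, best_e = b, e
            if best is not None and best_e <= eps_c:
                pairs.append((a, best, best_e))
        if not pairs:
            break                                  # no merge within eps_c

        def after_merge(a, b):
            return [c for k, c in enumerate(clusters) if k not in (a, b)] \
                   + [clusters[a] + clusters[b]]

        a, b, _ = min(pairs, key=lambda p:
                      (overall_entropy(after_merge(p[0], p[1])), p[2]))
        clusters = after_merge(a, b)
    return clusters
```

In the example below, lines 0 and 2 are similar and share a successor cluster, as do lines 1 and 3, so the two semantically coherent clusters emerge.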





\subsection{From Clusters to HMM}

Based on the clusters, we initialize the Hidden Markov Model $\lambda$ as follows.
Assume the obtained clusters are $\{C_1,C_2,\cdots,C_K\}$. We initialize the
HMM with $K$ states, $\{1,2,\cdots,K\}$, in which state $i$ corresponds to cluster $C_i$.

In our model, the output probability $b_i(L)$ denotes the probability
of the line $L=(l,\theta)$ being generated by state $i$. We assume that, for
each cluster, the length and the slope are independent of each
other, and that each follows a Gaussian distribution. We use
\[p_l(L|i)=p(l|\bar{l}_i,varl_i)\]
to measure the probability of the length of $L$, where $varl_i$ is the
variance of the lengths of all line segments belonging to $C_i$, and we
use
\[p_{s}(L|i)=p(\theta|\bar{\theta}_i,var\theta_i)\]
to measure the probability of the slope of $L$, where $var\theta_i$ is
the variance of the slopes of all line segments belonging to $C_i$.

With $p_l(L|i)$ and $p_s(L|i)$, we define the output probability as:
\[b_i(L)=p_l(L|i)p_s(L|i)\]

The means and variances of the slopes and lengths in each cluster can be
estimated from the current line sequence. The
transition probabilities and initial probabilities can be estimated
similarly. Thus, based on the line clusters, we initialize the HMM $\lambda$.
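The independent-Gaussian output probability can be sketched directly (a minimal sketch; the dictionary keys `l_mean`, `l_var`, `th_mean`, `th_var` are our hypothetical naming for the per-state statistics):

```python
import math

def gaussian_pdf(x, mean, var):
    """Density of a univariate Gaussian N(mean, var) at x."""
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def output_probability(line, state):
    """b_i(L) = p_l(L|i) * p_s(L|i): independent Gaussians over length
    and slope, with means/variances estimated from cluster C_i.
    `state` is a dict with keys l_mean, l_var, th_mean, th_var (ours)."""
    l, theta = line
    return (gaussian_pdf(l, state['l_mean'], state['l_var'])
            * gaussian_pdf(theta, state['th_mean'], state['th_var']))
```

For a standard-normal state, a line at the means has density $(1/\sqrt{2\pi})^2 = 1/(2\pi)$.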

Given the HMM $\lambda$, for any segmentation $\mathbf{L}=(L_{1},L_{2},\cdots,L_{m})$ of the time series $X$, we can compute
the production probability of $\lambda$ generating $\mathbf{L}$ along a state sequence $\mathbf{s}=(s_{1},s_{2},\cdots,s_{m})$:
\begin{equation}
P(\mathbf{L},\mathbf{s}|\lambda)=\pi_{s_1}b_{s_1}(L_1)\prod_{j=2}^{m}a_{s_{j-1},s_j} b_{s_j}(L_j)
\end{equation}
Note that in traditional applications of HMMs the observation sequence is fixed, while in our work the observation
sequence is variable, since the segmentation of $X$ may change.
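The production probability is a direct product over the sequence; a minimal sketch (names and argument conventions are ours, with `b` passed as a function to mirror $b_i(\cdot)$):

```python
def production_probability(lines, states, pi, A, b):
    """P(L, s | lambda) = pi_{s1} b_{s1}(L1) * prod_j a_{s_{j-1}, s_j} b_{s_j}(L_j).
    pi: list of initial probabilities; A[i][j]: transition prob i -> j;
    b(i, L): output probability of state i emitting line L."""
    p = pi[states[0]] * b(states[0], lines[0])
    for j in range(1, len(lines)):
        p *= A[states[j - 1]][states[j]] * b(states[j], lines[j])
    return p
```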

In the refinement phase, we use the production probability to measure the quality
of the learned PHMM. It naturally combines both the similarity criterion and the temporal
relation criterion.
\begin{eqnarray}
P(\mathbf{L},\mathbf{s}|\lambda)&=&\pi_{s_1}b_{s_1}(L_1)\prod_{j=2}^{m}a_{s_{j-1},s_j} b_{s_j}(L_j)\nonumber\\
           &=&\prod_{j=1}^{m}b_{s_j}(L_j)\cdot\pi_{s_1}\prod_{j=2}^{m}a_{s_{j-1},s_j}\nonumber\\
           &=&P'\cdot P''
\label{eq:p1p2}
\end{eqnarray}
By the above transformation, we split the production probability into two parts: $P'$ and $P''$.
$P'$ measures how well the states match the observed lines, and $P''$ measures
how likely the state transitions are. Clearly, a larger
production probability means a PHMM of higher quality. In the second phase,
we use an iterative process to refine the learned PHMM, so that the
production probability increases.




\section{Iterative HMM Refinement}\label{sec:refine}

When we partition the time series in the initial phase using the
bottom-up approach (Section~\ref{sec:segment}), we have no knowledge
of the underlying data generation mechanism.
Once a PHMM is learned, in subsequent iterations we use the learned PHMM to guide time
series segmentation and line clustering.

Each round of the refinement process includes two steps.
Assume the current round is $k$ and the PHMM learned in the previous round is $\lambda^{k-1}$. In the first step, based on the current PHMM $\lambda^{k-1}$, we use a
Viterbi-like approach to learn a new segmentation
$\mathbf{L}^k$ and a corresponding state
sequence $\mathbf{s}^k$, such that the probability of the current PHMM
generating $\mathbf{L}^k$ along $\mathbf{s}^k$ is maximized.

In the second step, based on the new line sequence $\mathbf{L}^k$ and state sequence
$\mathbf{s}^k$, we update the current PHMM to $\lambda^{k}$, so that the probability
$P(\mathbf{L}^k,\mathbf{s}^k|\lambda^k)$ is maximized.
That is,
\[P(\mathbf{L}^k,\mathbf{s}^k|\lambda^k)\geq P(\mathbf{L}^k,\mathbf{s}^k|\lambda')\]
where $\lambda'$ is any PHMM. We introduce the first
step next; the second step is discussed afterwards.

\subsection{HMM-based Segmentation and Clustering}


Given the observation sequence, the traditional Viterbi algorithm can be used to
find the most likely sequence of hidden states. However, it assumes
that the observation sequence is known and fixed in
each round. In our case, neither premise holds, since we must segment the
time series in order to ``define'' the observations; moreover, the
observation sequence may vary across iterative rounds. This
difference makes the process of detecting the state sequence more
complex.

First, we give a brief overview of the Viterbi algorithm,
and then we extend it to our case, where the observation sequence is not known while learning the model.


\subsubsection{Traditional Viterbi Algorithm}
In a traditional HMM, the state detection task is: given an observation
sequence $O=\{o_1,o_2,\cdots,o_n\}$, find the optimal state
sequence $\mathbf{s}=\{s_1,s_2,\cdots,s_n\}$ that maximizes the
production probability
\[P(O,\mathbf{s}|\lambda)=\pi_{s_1}b_{s_1}(o_1)\prod_{i=2}^n a_{s_{i-1},s_i}b_{s_i}(o_i)\]


The Viterbi algorithm is a recursive process. Its key component is the \emph{forward probability}
$\delta_t(i)$, the probability
of the optimal state sequence up to time $t$ in which $s_t=i$. The
following holds:
\begin{eqnarray}
\delta_t(i)&=&\max\limits_{s_1,\cdots,s_{t-1}} P(s_1,s_2,\cdots,s_{t-1},s_t=i)\nonumber\\
           &=&\max\limits_{s_1,\cdots,s_{t-1}}\pi_{s_1}b_{s_1}(o_1)\prod_{j=2}^{t}(a_{s_{j-1},s_{j}}b_{s_j}(o_j))\nonumber
\end{eqnarray}

The algorithm scans the
entire time span starting from $t=1$. When it reaches $t$, it
computes the forward probabilities based on
those of the previous time point, as follows:
\[\delta_t(i)=\max\limits_{j}(\delta_{t-1}(j)a_{ji})b_i(o_t)\]
After obtaining
$\max\limits_{i}\delta_n(i)$, we recover the optimal state
sequence by backtracking. The advantage of the Viterbi algorithm is that computing $\delta_t(i)$
depends only on the $\delta_{t-1}(j)$; everything before $t-1$ can be safely ignored.
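The recursion and backtracking can be sketched as follows (a minimal sketch on a linear probability scale; in practice log probabilities avoid underflow, and all names are ours):

```python
def viterbi(obs, pi, A, b):
    """Standard Viterbi: delta_t(i) = max_j(delta_{t-1}(j) a_{ji}) b_i(o_t),
    followed by backtracking. pi: initial probs; A[j][i]: transition
    j -> i; b(i, o): emission probability, passed as a function."""
    K = len(pi)
    delta = [pi[i] * b(i, obs[0]) for i in range(K)]
    back = []                                  # back pointers per time step
    for o in obs[1:]:
        prev, delta, ptr = delta, [], []
        for i in range(K):
            j = max(range(K), key=lambda j: prev[j] * A[j][i])
            delta.append(prev[j] * A[j][i] * b(i, o))
            ptr.append(j)
        back.append(ptr)
    i = max(range(K), key=lambda s: delta[s])  # best final state
    best_p = delta[i]
    path = [i]
    for ptr in reversed(back):                 # follow back pointers
        i = ptr[i]
        path.append(i)
    return path[::-1], best_p
```

With a sticky two-state chain and two repeated observations favoring state 0, the decoded path stays in state 0.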

\subsubsection{Extension of Viterbi Algorithm}
In our approach, however, we need to determine the new observation sequence
while learning the optimal state sequence. To solve this problem, we extend
$\delta_t(i)$ with a new meaning:
\[\delta_t(i)=\max\limits_{L_1,\cdots,L_{k}}\max\limits_{s_1,\cdots,s_{k-1}}P(L_1,\cdots,L_{k},s_1,\cdots,s_{k}=i|\lambda)\]
where $\{L_1,\cdots,L_k\}$ is a line sequence whose last line, $L_k$, ends at time
point $t$. Note that $k$ can be any value not exceeding $\lfloor\frac{t}{2}\rfloor$.
The extended $\delta_t(i)$ is the maximal probability of the current HMM generating any
line sequence up to $t$ along some state sequence.

\begin{figure}[!htp]
  \centering
\includegraphics[width=6cm,height=4cm]{figure/forward1.eps}
  \caption{Computing forward probability}
\label{fig:forward}
\end{figure}

The traditional forward probability $\delta_t(i)$ is computed
based only on the probabilities $\delta_{t-1}(j)$ at the previous time point.
Computing the extended forward probability is more complex:
the challenge is that we know neither the beginning point of the last line, $L_k$,
nor those of the previous lines.

Now we introduce our solution. Assume the process has arrived at $t$ and all forward probabilities before $t$ are already known.
If the last line $L_k$ begins at $t-d+1$ and line $L_{k-1}$ ends at $t-d$, we can compute $\delta_t(i)$ as follows:
\begin{equation}
\delta_t(i)=\max\limits_{j}(\delta_{t-d}(j)a_{ji})b_i(L_k)
\end{equation}

But since the last line can begin at any time point, which is unknown before the
optimal state sequence is obtained, we compute $\delta_t(i)$ based on all forward probabilities
before it:
\[\delta_t(i)=\max\limits_{d,j}(\delta_{t-d}(j)a_{ji})b_i(L_k)\]
where $L_k$ is the least-squares line fitted to $(x_{t-d+1},x_{t-d+2},\cdots,x_t)$.
Fig.~\ref{fig:forward} illustrates this process.


When the algorithm reaches time $n$, we obtain the optimal
observation sequence and state sequence, denoted by $\mathbf{L}^k$ and
$\mathbf{s}^k$ respectively, by backtracking from the maximal forward probability.





The detailed algorithm is shown below.

\begin{algorithm}
\caption{Detect\_state\_sequence}\label{euclid}
\begin{algorithmic}[1]
\State \textbf{Input} $\varepsilon_r$: maximal error of a line
approximation

\For{$t\gets 1, n$}
    \For{$i\gets 1,K$}
        \State $\delta_t(i)=0$
        \For{$d\gets 2,t$}
            \State $L=BestLine(t-d+1,t)$
            \If{$err(L)>\varepsilon_r$}
                \State Break
            \Else
                \State $j^*=\arg\max\limits_{j}(\delta_{t-d}(j)\cdot a_{ji})$
                \State $temp=\delta_{t-d}(j^*)\cdot a_{j^*i}\cdot b_i(L)$
                \If{$temp>\delta_t(i)$}
                    \State $\delta_t(i)=temp$
                    \State $prev_d(t,i)=t-d$
                    \State $prev_s(t,i)=j^*$
                \EndIf
            \EndIf
        \EndFor
    \EndFor
\EndFor

\State Find $i^*=\arg\max\limits_{i}\delta_{n}(i)$, the maximal forward probability
\State Compute the state sequence by backtracking through $prev_s$
\State Compute the line sequence by backtracking through $prev_d$
\end{algorithmic}
\end{algorithm}
In the above algorithm, the function $BestLine(x,y)$ (line 6) returns the line
beginning at $x$ and ending at $y$ that has the minimal approximation error,
i.e., the least-squares fit over those points.
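A runnable counterpart of the pseudocode (a simplified sketch without the pruning strategies introduced below: probabilities are kept on a linear scale, lines span at least two points, the early exit assumes the fit error grows with line length, and all names are ours):

```python
def best_line(y, s, t):
    """BestLine(s, t): least-squares line over points s..t (inclusive);
    returns ((length, slope), sse)."""
    xs, ys = list(range(s, t + 1)), y[s:t + 1]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    a = sum((x - xbar) * (v - ybar) for x, v in zip(xs, ys)) / sxx
    b = ybar - a * xbar
    sse = sum((a * x + b - v) ** 2 for x, v in zip(xs, ys))
    return (float(n), a), sse

def detect_state_sequence(y, pi, A, b_out, eps_r):
    """Extended Viterbi: jointly choose a segmentation into lines
    (two or more points each) and a state sequence.
    Returns (segments, states) with inclusive (start, end) segments."""
    n, K = len(y), len(pi)
    delta = [[0.0] * K for _ in range(n)]   # delta[t][i]
    prev = [[None] * K for _ in range(n)]   # (end of previous line, prev state)
    for t in range(1, n):
        for d in range(2, t + 2):           # last line covers t-d+1 .. t
            s = t - d + 1
            L, err = best_line(y, s, t)
            if err > eps_r:
                break                       # mirrors the pseudocode's early exit
            for i in range(K):
                if s == 0:                  # first line: initial probabilities
                    p, j = pi[i] * b_out(i, L), None
                else:
                    j = max(range(K), key=lambda j: delta[s - 1][j] * A[j][i])
                    p = delta[s - 1][j] * A[j][i] * b_out(i, L)
                if p > delta[t][i]:
                    delta[t][i], prev[t][i] = p, (s - 1, j)
    t = n - 1
    i = max(range(K), key=lambda i: delta[t][i])
    segs, states = [], []
    while t >= 0:                           # backtrack through prev
        pe, j = prev[t][i]
        segs.append((pe + 1, t))
        states.append(i)
        t, i = pe, j
    return segs[::-1], states[::-1]
```

On a rise-then-fall series with one state preferring positive slopes and one preferring negative slopes, the two linear pieces and their states are recovered jointly.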



\comment{
\begin{figure}[htbp]
\centering
\begin{tabular}[h]{cc}
\includegraphics[height=2.7cm]{figure/forward1.eps} &
\includegraphics[height=2.7cm]{figure/forward1.eps} \\
(a) Compression ratio & (b)  Runtime
\end{tabular}
\caption{Vary number of event types $m$. Other parameters are:
$n=100$, $d=m/2$, $k=5$, $k_{ini}=10$, $l=40$\label{fig:exp2}}
\end{figure}
}

\paragraph*{Performance analysis} In the traditional Viterbi
algorithm, for each time point we need to compute $K$
probabilities $\delta_t(i)$, $i=1,\cdots,K$, and for each
$\delta_t(i)$ we need to check $K$ probabilities
$\delta_{t-1}(j)$, $j=1,2,\cdots,K$. So the time complexity
is $O(nK^2)$ in each round.

In our task, for each time point we also need to compute $K$
probabilities $\delta_t(i)$, $i=1,2,\cdots,K$. But to compute each
$\delta_t(i)$, we need to check up to $(t-1)\cdot K$ probabilities
$\delta_{t-d}(j)$, $j=1,2,\cdots,K$, $1<d<t$.
In summary, the time complexity is $O(n^2K^2)$ in each round,
which is much more time-consuming and can be infeasible in practice.

\subsubsection{The Three Pruning Strategies}
To speed up this
process, we propose three pruning strategies. Two of them are
lossless and the third is lossy.

\textbf{Strategy 1: Prune with the approximation error bound.} The first strategy is based on
the requirement that the error of each line cannot exceed the threshold $\varepsilon_r$.
Hence we need not check any previous forward probability that would make the last line, $L_k$, have an
approximation error larger than $\varepsilon_r$.
To filter these forward probabilities, we maintain a
pointer to the farthest time point before which forward probabilities
need not be checked. Specifically, when computing
$\delta_t(i)$, if we find a $t'$ that satisfies
\[err(BestLine(t',t))<\varepsilon_r, \;\; err(BestLine(t'-1,t))>\varepsilon_r\]
then $t'$ is set as the farthest point, denoted $FP$. Later, when we compute forward probabilities after $t$,
we need not check the forward probabilities before $FP$. Note that as the process continues,
$FP$ moves gradually toward $n$.
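Locating $FP$ amounts to walking the start point backward while the fit still satisfies the bound; a minimal sketch (the function name is ours, and the fit-error routine is supplied by the caller):

```python
def farthest_point(t, eps_r, sse):
    """Strategy 1 sketch: find FP, the earliest start t' for which the
    line over (t', t) still fits within eps_r while (t'-1, t) does not.
    Forward probabilities before FP need not be checked at time t.
    sse(s, e): least-squares fit error over points s..e (caller-supplied)."""
    fp = t - 1                       # a two-point line always fits exactly
    while fp > 0 and sse(fp - 1, t) <= eps_r:
        fp -= 1                      # extending backward still fits; keep going
    return fp
```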


\textbf{Strategy 2: Prune with forward probabilities.}
The second pruning strategy is based on the observation that, to compute
$\delta_t(i)$, only a few splitting points and previous
states are meaningful; the others are unlikely to
appear in the final optimal observation and state sequences.
Hence, once we obtain a certain ``meaningful'' line
segmentation up to $t$ and its corresponding forward probability, we can use it to prune the
other, meaningless segmentations. To be specific, when the process
arrives at $t$, each time we
obtain a forward probability candidate for $\delta_t(i)$ based on a certain splitting point and previous state,
we estimate whether a larger forward probability can still exist.
If not, we ignore all remaining previous forward probabilities and proceed to
compute $\delta_t(i+1)$ or $\delta_{t+1}(1)$.

We maintain all obtained forward probabilities in a list, denoted $LT$,
sorted in descending order. Note that the probabilities
before $FP$ are not contained in $LT$. We use $LT_i$ to denote
the $i$-th forward probability in $LT$.
To compute $\delta_t(i)$, we check the entries in $LT$ from top to bottom.
Assume that after checking the first $j$ entries in $LT$, the maximal candidate obtained so far is $\delta'_t(i)$.
Before checking the $(j+1)$-th entry, $LT_{j+1}$, we test whether the following inequality holds:
\[LT_{j+1}\cdot a_{max}(i)\cdot b_{max}(i)<\delta'_t(i)\]
where $b_{max}(i)$ is the maximal output probability generated by
state $i$ and $a_{max}(i)$ is the highest transition probability
from any state to state $i$.

If it holds, then neither $LT_{j+1}$ nor any entry after it can yield a
candidate larger than $\delta'_{t}(i)$, so $\delta'_t(i)$ is the final
$\delta_t(i)$. We add it into $LT$ for computing later
forward probabilities and move on.
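The early-termination scan can be sketched as follows (a minimal sketch; the names are ours, and `exact` stands for the true contribution $\delta\cdot a_{ji}\cdot b_i(L)$ of an entry):

```python
def prune_scan(lt, a_max, b_max, exact):
    """Strategy 2 sketch: scan candidate forward probabilities, sorted
    descending in lt, and stop as soon as the optimistic bound
    lt_entry * a_max * b_max cannot beat the best exact value so far.
    Returns (best value, number of entries actually examined)."""
    best, checked = 0.0, 0
    for v in lt:
        if v * a_max * b_max < best:
            break                    # no later (smaller) entry can do better
        best = max(best, exact(v))
        checked += 1
    return best, checked
```

Since $LT$ is sorted, a single failed bound check prunes every remaining entry.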

\textbf{Strategy 3: Prune with fewer time points.}

Although the first two strategies reduce the unit cost of computing a
forward probability, the process is still time-consuming,
since we need to compute forward probabilities for every time point.
In fact, most time points are unlikely to be the beginning or ending
point of a line (called a boundary hereafter). For time points that are unlikely to
be boundaries, we need not compute forward probabilities; in other words, we
only choose the points that are more likely to be
boundaries.

The key issue is how to judge whether a point is likely to be a boundary.
We base this judgment on the following observation:
\newtheorem{Observation}{Observation}
\begin{Observation}
If the two neighboring lines joined at time point $t$ have clearly different slopes,
$t$ is more likely to be a boundary.
\end{Observation}
The reason is that if the two neighboring lines have similar slopes, they are
more likely to belong to the same line, and consequently the joining point is likely to lie in the middle of that line and is less
likely to be a boundary. We illustrate the observation in
Figure~\ref{fig:prune}. Clearly, $A$ and $C$ should be boundaries.
As for $D$ and
$E$, the lines before and after them have similar slopes, so they are less likely to be
boundaries than $A$ and $C$.

\begin{figure}[!htp]
  \centering
\includegraphics[width=8.1cm,height=4cm]{figure/prune3.eps}
  \caption{Boundary points}
\label{fig:prune}
\end{figure}

We use this criterion to choose the points at which we compute
forward probabilities. In other words, instead of computing forward
probabilities at all time points, we only compute them at points that are
more likely to be boundaries.

We choose the possible boundary points according to the results of segmentation
in the initial phase. Recall that in the initial phase, we segment the
time series in a bottom-up fashion. At each step, two neighboring lines
are merged and the joining point changes from a boundary to a
middle point. The earlier a joining point is removed, the less likely
it is to be a boundary in the line sequence we seek. Continuing the example
in Figure \ref{fig:prune}, the points removed last are ordered as:
\[\cdots, D, E, B, F, A, C\]
The points removed last are the most likely to be boundaries.
So we choose the last $N$ time points to form a boundary candidate list,
where $N$ is a user-specified parameter. Then we execute the
detect\_state\_sequence algorithm only on these points.
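Ranking points by merge order falls out of the bottom-up pass directly; a minimal sketch (names and the even-length assumption are ours):

```python
def boundary_candidates(series, n_keep):
    """Strategy 3 sketch: run bottom-up merging to completion, recording
    the order in which boundary points disappear; the points removed last
    are the most likely boundaries, so return the final n_keep of them.
    Assumes an even-length series for simplicity."""
    def sse(s, e):
        # closed-form least-squares fit error over points s..e
        xs, ys = list(range(s, e + 1)), series[s:e + 1]
        n = len(xs)
        xbar, ybar = sum(xs) / n, sum(ys) / n
        a = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
            / sum((x - xbar) ** 2 for x in xs)
        b = ybar - a * xbar
        return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys))

    segs = [(i, i + 1) for i in range(0, len(series) - 1, 2)]
    removed = []                       # boundary points, in removal order
    while len(segs) > 1:
        costs = [sse(a[0], b[1]) for a, b in zip(segs, segs[1:])]
        k = costs.index(min(costs))
        removed.append(segs[k][1])     # this boundary disappears
        segs[k:k + 2] = [(segs[k][0], segs[k + 1][1])]
    return removed[-n_keep:]
```

On a rise-then-fall series, the peak survives longest and is ranked as the top boundary candidate.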


Strategies 1 and 2 are lossless and do not affect accuracy, while the third is a lossy
strategy, because points in the final observation sequence may not be
contained in the last $N$ points. However, our method of choosing points guarantees that
we choose the points most likely to be boundaries. The experimental results
show that this strategy reduces the time consumption dramatically, while keeping the accuracy
close to that of no pruning.


\subsection{Step 2: Update PHMM}
After obtaining optimal observation sequence $\mathbf{L}^{k}$
and $\mathbf{s}^{k}$, we update the current PHMM, so that it can
reflect $\mathbf{L}^{k}$ and $\mathbf{s}^{k}$ best.
Let
$\mathbf{L}^{k}=(L_{1},L_{2},\cdots,L_{m'})$ and
$\mathbf{s}^{k}=(s_{1},s_{2},\cdots,s_{m'})$. Note that the number of lines
in $\mathbf{L}^{k}$, $m'$, is possible to be different with that in last round.
where $m$ is the number of lines.

First, we cluster the lines in $\mathbf{L}^{k}$ according to the corresponding states. Specifically,
\[C_i=\{L_{j}|s_{j}=i\},i=1,2,\cdots,K\]
Assume cluster $C_i$ contains $|C_i|$ lines,
$\{L_{i1},L_{i2},\cdots,L_{i|C_i|}\}$.
The means and variances of length and slope are updated as

\begin{equation}
\begin{array}{ccc}
\bar{l}_i^{k} & = & \frac{1}{|C_i|}\sum_{j=1}^{|C_i|}l_{ij}\\
\bar{\theta}_i^{k} & = &
\frac{1}{|C_i|}\sum_{j=1}^{|C_i|}\theta_{ij}\\
varl_i^{k}&=&\frac{1}{|C_i|-1}\sum_{j=1}^{|C_i|}(l_{ij}-\bar{l}_i^{k})^2\\
var\theta_i^{k}&=&\frac{1}{|C_i|-1}\sum_{j=1}^{|C_i|}(\theta_{ij}-\bar{\theta}_i^{k})^2
\end{array}
\end{equation}
The new PHMM $\lambda^{k}$ then contains the states $\{1,2,\cdots,K\}$, in which state $i$ corresponds
to cluster $C_i$. With the means and variances of the lengths and slopes of the lines
in each cluster, we obtain the output probabilities of all states. The transition
probabilities and initial probabilities are updated according to the
state sequence $\mathbf{s}^k$. It is possible that $\lambda^{k}$
contains some \emph{empty} states, because
certain states of the previous PHMM may not occur in the new state sequence $\mathbf{s}^k$.
If this happens, we delete these states from $\lambda^{k}$.
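The update equations and the removal of empty states can be sketched together (a minimal sketch; the dictionary keys are our hypothetical naming, matching nothing fixed in the paper):

```python
def update_state_params(lines, states, K):
    """Re-estimate per-state means and (unbiased, 1/(m-1)) variances of
    length and slope from the new line/state sequences, per the update
    equations above; states receiving no lines are dropped as empty."""
    params = {}
    for i in range(K):
        members = [lines[j] for j, s in enumerate(states) if s == i]
        if not members:
            continue                 # empty state: deleted from the model
        m = len(members)
        lm = sum(l for l, _ in members) / m
        tm = sum(t for _, t in members) / m
        lv = sum((l - lm) ** 2 for l, _ in members) / (m - 1) if m > 1 else 0.0
        tv = sum((t - tm) ** 2 for _, t in members) / (m - 1) if m > 1 else 0.0
        params[i] = {'l_mean': lm, 'l_var': lv, 'th_mean': tm, 'th_var': tv}
    return params
```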


We can prove that the production probability in the current round is not
less than that in the last round.

Assume the model in round $k$ is $\lambda^k$; the overall
probability is:
\[P(\mathbf{L}^{k},\mathbf{s}^k|\lambda^k)=\pi_{s_1^k}b_{s_1^k}(L_1^k)\prod_{i=2}^{m'}a_{s_{i-1}^k,s_i^k}b_{s_i^k}(L_i^k)\]

\begin{theorem}
The production probability of round $k$ is not less than that of round
$k-1$, that is:
\[P(\mathbf{L}^{k},\mathbf{s}^k|\lambda^{k})\geq P(\mathbf{L}^{k-1},\mathbf{s}^{k-1}|\lambda^{k-1})\]
\end{theorem}
\begin{proof}
Since $\mathbf{L}^{k}$ and $\mathbf{s}^k$ are the optimal observation and state
sequences, which have the maximal probability under the previous PHMM, we have:
\begin{equation}
P(\mathbf{L}^{k},\mathbf{s}^k|\lambda^{k-1})\geq P(\mathbf{L}^{k-1},\mathbf{s}^{k-1}|\lambda^{k-1})
\label{eq_proof1}
\end{equation}

Next, since the newly estimated parameters of $\lambda^k$ are
the maximum likelihood estimates, it holds that
\begin{equation}
P(\mathbf{L}^{k},\mathbf{s}^k|\lambda^{k})\geq P(\mathbf{L}^{k},\mathbf{s}^{k}|\lambda^{k-1})
\label{eq_proof2}
\end{equation}


Combining Eq.~\ref{eq_proof1} and Eq.~\ref{eq_proof2}, we get
\begin{equation}
P(\mathbf{L}^{k},\mathbf{s}^k|\lambda^{k})\geq P(\mathbf{L}^{k-1},\mathbf{s}^{k-1}|\lambda^{k-1})
\end{equation}
\end{proof}






%%% Local Variables:
%%% mode: latex
%%% TeX-master: "kdd09"
%%% End:
