\section{Iterative pHMM Refinement}\label{sec:refine}

We iteratively refine the pHMM learned in the previous round. Each
iteration has two steps. In step one, based on the current pHMM, we
use an extended Viterbi algorithm to segment the time series and learn
the optimal state sequence. In step two, based on the new line segment
sequence and the state sequence, we update the pHMM. The iteration
stops when the pHMM no longer changes.

\subsection{Motivation of Iterative Refinement}\label{sec:why}

We illustrate the benefit of refining with an example in
Figure~\ref{fig:clus}. Assume in the initial phase, the time series
is segmented into $S_1$, as shown in Figure~\ref{fig:clus}(a).
Consider intervals $[a,b]$, $[c,d]$ and $[e,f]$. It can be seen that
subsequences in all of them have similar shapes. Moreover, the lines
before and after them are all similar. However, in $S_1$, $[a,b]$ is
represented by line $L_2$, while $[c,d]$ is represented by two lines
$L_6$ and $L_7$. Since the shape of $L_2$ clearly differs from that
of either $L_6$ or $L_7$, they cannot be clustered into one group.
Thus, $S_1$ misses the information that $L_2$ is similar to the
concatenation of $L_6$ and $L_7$. Moreover, with the clustering
approach, three clusters, and hence three states, will be
generated: $C_1=(L_6,L_{10})$, $C_2=(L_7,L_{11})$ and $C_3=(L_2)$.
While the first two are meaningful states, $C_3$ is very likely to
be a noise state. Note that such issues cannot be solved by frequent
pattern mining, as such patterns may not necessarily be frequent in
the entire time series.

\begin{figure}[!htp]
  \centering
\includegraphics[width=6cm,height=4cm]{figure/clus5.eps}
  \caption{Segmentation with/without Refinement}
\label{fig:clus}
\end{figure}

In the refinement process, the time series will be re-segmented with
the guidance of the current pHMM.  Intuitively, the new segmentation
will identify the lines that can be generated by certain states with
a high probability. So $[a,b]$ will be split into $L_2$ and $L_3$,
as shown in Figure~\ref{fig:clus}(b), which yields a better pHMM.




\subsection{pHMM-based Segmentation}

Given an observation sequence, the Viterbi algorithm can find the
optimal state sequence. However, it only works if the observation
sequence is known. In our case, we only have the raw time series,
instead of the observation sequence (a sequence of line segments).
In this work, we extend the Viterbi algorithm to learn the line
segment and the state sequence simultaneously.

\subsubsection{The Traditional Viterbi Algorithm}
The Viterbi algorithm is an efficient method of learning the optimal
state sequence $\mathbf{s}^*$, given the HMM $\lambda$ and the
observation sequence $O$. It proceeds recursively, processing all
states in parallel in a strictly time-synchronous manner.

The key component in the Viterbi algorithm is the \emph{optimal
  probability}, denoted as $\delta_t(i)$, which is the maximal
probability of HMM generating the observation segment
$o_1,\cdots,o_t$, along the optimal state sequence $s_1,\cdots,s_t$,
in which $s_t=i$. That is,


\begin{eqnarray}
\delta_t(i)&=&\max\limits_{s_1,\cdots,s_{t-1}} P(o_1,\cdots,o_t,s_1,\cdots,s_{t-1},s_t=i|\lambda)\nonumber\\
           &=&\max\limits_{s_1,\cdots,s_{t-1}}\pi_{s_1}b_{s_1}(o_1)\prod_{j=2}^{t}(a_{s_{j-1},s_{j}}b_{s_j}(o_j))\nonumber
\end{eqnarray}

The algorithm scans the observation sequence from $t=1$, at which
point the optimal probability for state $i$ is initiated as
\[\delta_1(i)=\pi_ib_i(o_1)\]
Assume all $\delta_{t-1}(j),1\leq j\leq K$, are already obtained.
The algorithm computes $\delta_t(i)$ as:
\[\delta_t(i)=\max\limits_{j}(\delta_{t-1}(j)a_{ji})b_i(o_t)\]


When the algorithm reaches the last time point $n$, we obtain all
optimal probabilities: $\delta_n(i),1\leq i\leq K$. By comparing all
of them, and backtracking the largest one, this algorithm obtains
the optimal state sequence.
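For concreteness, the recursion above can be sketched in a few lines of Python (a minimal NumPy version, assuming discrete observations indexed into an emission matrix $B$; the function and variable names are ours, not part of the text):

```python
import numpy as np

def viterbi(pi, A, B, obs):
    # pi[i]: initial prob; A[j, i]: transition j -> i; B[i, o]: emission prob.
    K, n = len(pi), len(obs)
    delta = np.zeros((n, K))
    back = np.zeros((n, K), dtype=int)
    delta[0] = pi * B[:, obs[0]]                 # delta_1(i) = pi_i b_i(o_1)
    for t in range(1, n):
        for i in range(K):
            cand = delta[t - 1] * A[:, i]        # delta_{t-1}(j) a_{ji}
            back[t, i] = int(np.argmax(cand))
            delta[t, i] = cand[back[t, i]] * B[i, obs[t]]
    states = [int(np.argmax(delta[-1]))]         # backtrack from max delta_n(i)
    for t in range(n - 1, 0, -1):
        states.append(int(back[t, states[-1]]))
    return states[::-1], float(delta[-1].max())
```

The backtracking pointers `back[t, i]` record the maximizing predecessor state, exactly mirroring the recursion $\delta_t(i)=\max_j(\delta_{t-1}(j)a_{ji})b_i(o_t)$.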

\subsubsection{Extending the Viterbi Algorithm}

As analyzed before, we need to learn the line sequence and the state
sequence simultaneously. We implement this with a modified optimal
probability $\delta_t(i)$: whereas the traditional Viterbi algorithm
only learns the optimal state sequence when computing $\delta_t(i)$,
our algorithm learns both the optimal state sequence and the optimal
line sequence.

In our case, an observation token, or the unit of observation, is a
line segment. Therefore, we define $\delta_t(i)$ as the maximal
probability of the current HMM generating any line sequence up to
$t$ along the optimal state sequence ending with state $i$.
Formally,
\[\delta_t(i)=\max\limits_{L_1,\cdots,L_{k}}\max\limits_{s_1,\cdots,s_{k-1}}P(L_1,\cdots,L_{k},s_1,\cdots,s_{k}=i|\lambda)\]
where $\{L_1,\cdots,L_k\}$ is a line sequence whose last line
segment $L_k$ ends at time point $t$, and state $s_k$, corresponding
to the $k$-th line, equals $i$. Note that $k$ can be any value not
exceeding $\lfloor\frac{t}{2}\rfloor$, since each line segment
covers at least two time points.

Intuitively, in the Viterbi algorithm, when $\delta_t(i)$ is
computed from $\delta_{t-1}(j)$, it implies that the observation at
time $t$ is added to the observation sequence. Similarly, in our
algorithm, when $\delta_t(i)$ is computed, it implies a ``new
observation'' is added to the observation sequence. However, here
the observation is a line segment ending at $t$, instead of a single
value.

\begin{figure}[htbp]
\centering
\begin{tabular}[h]{cc}
\includegraphics[height=2.7cm]{figure/forward2.eps} &
\includegraphics[height=2.7cm]{figure/forward1.eps} \\
(a) Traditional  & (b)  Extended
\end{tabular}
\caption{Computing optimal probability \label{fig:forward}}
\end{figure}

To be specific, we compute a possible $\delta_{t}'(i)$ based on any
previous optimal probability $\delta_{t-d}(j)$ as
\[\delta_{t}'(i)=\delta_{t-d}(j)a_{ji}b_i(L)\]
where $L$ is the new observed line, which begins at $t-d+1$ and ends
at $t$, and its corresponding state is $i$. The new line $L$ is
determined by $t$ and $d$. The only constraint on $d$ is that the
approximation error of $L$ on the interval $[t-d+1,t]$ cannot exceed
$\varepsilon_r$.
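The text leaves the fitting procedure and the error measure unspecified until later; one plausible reading, used here only for illustration, fits $L$ by least squares and measures the approximation error as the root-mean-square residual (both choices are our assumptions):

```python
import numpy as np

def best_line(y, x0, x1):
    # Least-squares line over points y[x0..x1] (inclusive); a hypothetical
    # stand-in for BestLine(x0, x1). Returns (slope, intercept).
    xs = np.arange(x0, x1 + 1, dtype=float)
    slope, intercept = np.polyfit(xs, y[x0:x1 + 1], 1)
    return slope, intercept

def err(y, x0, x1, slope, intercept):
    # Root-mean-square residual of the fitted line: one common choice
    # for the approximation error Err(L).
    xs = np.arange(x0, x1 + 1, dtype=float)
    resid = y[x0:x1 + 1] - (slope * xs + intercept)
    return float(np.sqrt(np.mean(resid ** 2)))
```

Any other fitting scheme with a per-interval error (e.g. maximum residual) would slot into the same role.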

Since we do not know the optimal line sequence, we cannot determine
the value of $d$ beforehand. We compute $\delta_t(i)$ by checking all
possible previous optimal probabilities, and choose the largest result
as the final $\delta_t(i)$:
\[\delta_t(i)=\max\limits_{d,j}\left(\delta_{t-d}(j)\,a_{ji}\,b_i(L)\right)\]
where the line $L$, determined by $d$, varies across the candidate
previous optimal probabilities (so $b_i(L)$ must stay inside the
maximization).
Figure~\ref{fig:forward} illustrates the difference between our
approach and the traditional Viterbi algorithm.










When the algorithm reaches time $n$, we obtain the maximal optimal
probability $\max_{i}\delta_n(i)$. Through backtracking it, we
obtain the optimal observation sequence and the corresponding state
sequence. The detailed algorithm is shown in
Algorithm~\ref{al:viterbi}.

\label{app:algorithm}

\begin{algorithm}
\caption{Detect\_state\_sequence}\label{al:viterbi}
\begin{algorithmic}[1]
\State \textbf{Input} $\varepsilon_r$: maximal error threshold of
line approximation

 \State Initialize $\delta_1(i)=0$ ($1\leq i\leq K$)
 \For{$t\gets 2, n$}
    \For{$i\gets 1,K$}
        \State $\delta_t(i)=0$
        \For{$d\gets 2,t$}
            \State $L=BestLine(t-d+1,t)$
            \If{$Err(L)>\varepsilon_r$}
                \State Break
            \Else
                \If{$t==d$}
                    \State $temp=\pi_ib_i(L)$, $j^*=0$
                \Else
                    \State $j^*=\arg\max\limits_{j}(\delta_{t-d}(j)\cdot a_{ji})$
                    \State $temp=\delta_{t-d}(j^*)\cdot a_{j^*i}\cdot b_i(L)$
                \EndIf
                \If{$temp>\delta_t(i)$}
                    \State $\delta_t(i)=temp$
                    \State $prev_d(t,i)=t-d$
                    \State $prev_s(t,i)=j^*$
                \EndIf
            \EndIf
        \EndFor
    \EndFor
\EndFor

\State Obtain the maximal optimal probability $\delta_n(i)$, i.e., the $i$ satisfying
\[\delta_n(i)\geq \delta_n(j),\ \forall j\neq i\]
\State Obtain state sequence by backtracking sequence of $prev_s$
\State Obtain line sequence by backtracking sequence of $prev_d$
\end{algorithmic}
\end{algorithm}
In Algorithm~\ref{al:viterbi}, function $BestLine(x,y)$ (line 7)
learns the best-fit line beginning from $x$ and ending at $y$, which
has the minimal approximation error. Function $Err(L)$ computes the
approximation error of $L$.
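Putting the recursion and Algorithm~\ref{al:viterbi} together, an unpruned sketch might look as follows. Here the emission model $b(i,L)$ is passed in as a callable, lines are summarized as (slope, length) pairs, and a least-squares fit with RMS error stands in for $BestLine$/$Err$; all of these concrete choices are assumptions for illustration:

```python
import numpy as np

def extended_viterbi(y, pi, A, b, eps_r, min_len=2):
    # Sketch of Algorithm 1 (no pruning). b(i, L) is the output probability
    # of state i emitting line L = (slope, length).
    n, K = len(y), len(pi)
    delta = np.zeros((n, K))              # delta[t, i]
    prev_d = -np.ones((n, K), dtype=int)  # end point of the previous line
    prev_s = -np.ones((n, K), dtype=int)  # state of the previous line

    def fit(x0, x1):                      # best-fit line over y[x0..x1]
        xs = np.arange(x0, x1 + 1, dtype=float)
        slope, c = np.polyfit(xs, y[x0:x1 + 1], 1)
        rmse = float(np.sqrt(np.mean((y[x0:x1 + 1] - (slope * xs + c)) ** 2)))
        return slope, float(x1 - x0), rmse

    for t in range(min_len - 1, n):
        for d in range(min_len, t + 2):   # candidate line covers [t-d+1, t]
            start = t - d + 1
            slope, length, e = fit(start, t)
            if e > eps_r:
                break                     # mirrors the Break in Algorithm 1
            L = (slope, length)
            for i in range(K):
                if start == 0:            # first line: initial probability
                    cand, j_star = pi[i] * b(i, L), -1
                else:
                    scores = delta[start - 1] * A[:, i]
                    j_star = int(np.argmax(scores))
                    cand = scores[j_star] * b(i, L)
                if cand > delta[t, i]:
                    delta[t, i], prev_d[t, i], prev_s[t, i] = cand, start - 1, j_star
    # backtrack the largest delta_n(i) into (state, start, end) segments
    segs, t, i = [], n - 1, int(np.argmax(delta[n - 1]))
    while t >= 0:
        t2, j = int(prev_d[t, i]), int(prev_s[t, i])
        segs.append((i, t2 + 1, t))
        t, i = t2, j
    return segs[::-1], float(delta[n - 1].max())
```

On a rise-then-fall series with one "upward" and one "downward" state, this recovers a two-segment, two-state decomposition.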


\paragraph*{Performance analysis} In the traditional Viterbi
algorithm, at each time point, it computes $K$ probabilities:
$\delta_t(i)$, $i=1,\cdots,K$. To compute each $\delta_t(i)$, it
checks $K$ probabilities $\delta_{t-1}(j)$, $j=1,2,\cdots,K$. So the
time complexity is $O(nK^2)$ in each round.

In our task, at each time point, we also compute $K$ probabilities
$\delta_t(i)$, $i=1,2,\cdots,K$. However, to compute each
$\delta_t(i)$, we need to check at most $(t-1)K$ previous
optimal probabilities $\delta_{t-d}(j)$ ($j=1,2,\cdots,K$, $1\leq
d<t$). Thus, the time complexity is $O(n^2K^2)$ per round, which is
clearly infeasible for long sequences. Next, we introduce three
pruning strategies to improve the efficiency.





\subsubsection{Three Pruning Strategies}
\label{app:prune} We propose three pruning strategies, two of which
are lossless and the third is lossy (with respect to whether the
final result is the same as that of the exact approach discussed
before).

\paragraph*{Strategy 1: Prune with threshold $\varepsilon_r$} The
first one is based on the requirement that the approximation error
of each line cannot exceed the threshold $\varepsilon_r$. So we use
$\varepsilon_r$ to filter out previous optimal probabilities that
need not be checked. Assume the current time point is $t$, and we compute
$\delta_t(i)$. We check optimal probabilities of previous time
points from $t-1$ to $1$. If we find a time point $t'$, which
satisfies
\[Err(BestLine(t'-1,t))>\varepsilon_r\mbox{ and } Err(BestLine(t',t))<\varepsilon_r\]
then any line starting before $t'$ covers the interval $[t'-1,t]$,
so its approximation error must also exceed $\varepsilon_r$. Hence
we need not check the optimal probabilities before $t'$. Note that
as the process continues, $t'$ moves forward gradually.
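As a sketch, the boundary $t'$ can be found by scanning backward from $t-1$; here `err_fn(a, b)` is a stand-in for $Err(BestLine(a,b))$ and, as in the text, the error is assumed to grow as the interval widens:

```python
def farthest_start(err_fn, t, eps_r, lo=0):
    # Smallest t' in [lo, t-1] with err_fn(t', t) <= eps_r; optimal
    # probabilities before t' need not be checked when computing delta_t(i).
    tp = t - 1
    while tp > lo and err_fn(tp - 1, t) <= eps_r:
        tp -= 1
    return tp
```

Passing the previous round's $t'$ as `lo` realizes the remark that $t'$ only moves forward as $t$ advances.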



\paragraph*{Strategy 2: Prune with optimal probability} The second
pruning strategy uses already-obtained candidates of the optimal
probability to filter out useless previous optimal probabilities. In
Algorithm~\ref{al:viterbi}, to compute $\delta_t(i)$, we need to
check many previous optimal probabilities. Each time a previous
optimal probability is checked, we get a candidate of $\delta_t(i)$.
Since computing the optimal probability amounts to finding the
maximal candidate, we can prune based on the candidates obtained so
far.




We maintain all previous optimal probabilities in a list, denoted as
$LT$, sorted in descending order. An optimal probability is deleted
from $LT$ once the approximation error of the line $L$ it would
generate exceeds $\varepsilon_r$. We use $LT_j$ to denote the
$j$-th optimal probability in $LT$. To
compute $\delta_t(i)$, we check the entries in $LT$ from top to
bottom. Assume after we check the top-$j$ entries in $LT$, the
obtained maximal candidate is $\delta'$. Then we check the
$(j+1)$-th entry, $LT_{j+1}$, in $LT$. We compute whether the
following inequality holds:
\[LT_{j+1}\cdot a_{max}(i)\cdot b_{max}(i)<\delta'\]
where $b_{max}(i)$ is the maximal output probability generated by
state $i$ and $a_{max}(i)$ is the highest transition probability
from any state to state $i$.

If it holds, then neither $LT_{j+1}$ nor any entry after it can
yield a candidate of $\delta_t(i)$ larger than $\delta'$. So
$\delta'$ is the final $\delta_t(i)$, and we insert it into $LT$ for
later computations.
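A sketch of this early-stopping scan over $LT$ (the names and the `score_fn` callback, which computes the exact candidate from an entry, are ours):

```python
def max_candidate(LT, a_max_i, b_max_i, score_fn):
    # LT: previous optimal probabilities as (probability, key) pairs, sorted
    # descending. Stops as soon as the upper bound prob * a_max(i) * b_max(i)
    # cannot beat the best candidate so far -- safe because LT is sorted.
    best = 0.0
    for prob, key in LT:
        if prob * a_max_i * b_max_i < best:
            break                     # no later entry can win
        cand = score_fn(key, prob)    # exact candidate delta'_t(i)
        if cand > best:
            best = cand
    return best
```

In the typical case only the first few entries of $LT$ are ever scored, which is where the speedup comes from.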

\paragraph*{Strategy 3: Prune with boundary points}
Although the first two strategies reduce the unit cost of computing
an optimal probability, the process may still be time consuming
since it needs to compute optimal probabilities for each time point.
In this strategy, we reduce the number of points where we need to
compute optimal probabilities, which can greatly speed up the
process.

In fact, the goal of the extended Viterbi algorithm is to find the
optimal line sequence, which is determined by boundary points of
lines. If we compute optimal probabilities only at these points,
instead of at all time points, we can greatly speed up the process
while sacrificing little accuracy. An important question
is: \emph{which points are more likely to be boundaries in the
optimal segmentation}? We answer this question with the following
observation:
\setcounter{theorem}{0}
\begin{Observation}
If two neighboring lines before and after $t$ have apparently
different slopes, $t$ is more likely to be a boundary.
\end{Observation}
The reason is that if the two neighboring lines have similar slopes,
it is more likely that they are merged into a line, and consequently
$t$ is a point in the middle of this line, instead of being a
boundary. We illustrate the observation in Figure~\ref{fig:prune}.
Obviously, points $A$ and $C$ should be boundaries. For $D$ and $E$,
the lines before and after them have similar slopes, so they are
less likely to be boundaries than $A$ and $C$.

\begin{figure}[!htp]
  \centering
\includegraphics[width=5cm,height=2.5cm]{figure/prune3.eps}
  \caption{Boundary points}
\label{fig:prune}
\end{figure}

We choose points based on the above observation. Remember that in
the initial phase, we segment the time series in a bottom-up way. At
each step, two neighboring lines are merged and the point connecting
them changes from a boundary to a middle point. The earlier a point
becomes a middle point, the less likely it is a true boundary. So we
sort all the time points by the order in which they become middle
points, and select the last $N$ time points to form a boundary
candidate list, where $N$ is a user-specified parameter.
Then, we execute Algorithm~\ref{al:viterbi} only on these points.
Continuing the example in Figure~\ref{fig:prune}, assume we only
consider these six points. They should be sorted as:
\[\cdots, D, E, B, F, A, C\]
If we just select 4 points from them to run
Algorithm~\ref{al:viterbi}, $C$, $A$, $F$ and $B$ will be selected.
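A toy version of this candidate selection, using a simplified merge cost (the RMS error of the line spanning the merged interval; the actual cost used in the initial phase may differ):

```python
import numpy as np

def boundary_candidates(y, N):
    # Bottom-up merging: repeatedly absorb the interior point whose removal
    # adds the least error. The later a point survives, the more likely it
    # is a true boundary; keep the last N survivors plus the two endpoints.
    def rmse(a, c):
        xs = np.arange(a, c + 1, dtype=float)
        s, b0 = np.polyfit(xs, y[a:c + 1], 1)
        return float(np.sqrt(np.mean((y[a:c + 1] - (s * xs + b0)) ** 2)))

    pts = list(range(len(y)))    # current boundary points
    order = []                   # interior points, earliest-merged first
    while len(pts) > 2:
        costs = [rmse(pts[k - 1], pts[k + 1]) for k in range(1, len(pts) - 1)]
        k = 1 + int(np.argmin(costs))
        order.append(pts.pop(k))
    keep = set(order[-N:]) | {0, len(y) - 1}
    return sorted(keep)
```

On a rise-then-fall series the peak is merged last and therefore always survives into the candidate list.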



Strategies 1 and 2 are lossless and do not affect the accuracy,
while the third one is lossy, since it may cause some points that
should be optimal segmentation boundaries to be missed. However, our
method of choosing points ensures that we keep the points most
likely to be boundaries. Experimental results verify that this
strategy reduces the time consumption dramatically, while keeping
the accuracy close to that of the exact model.

\subsection{Updating pHMM}
Assume in round $k$, the current pHMM is $\lambda^{k-1}$. The
obtained optimal line sequence and state sequence are denoted as
$\mathbf{L}^{k}$ and $\mathbf{s}^{k}$ respectively. We update the
current pHMM, so that it can generate $\mathbf{L}^{k}$ and
$\mathbf{s}^{k}$ with the largest probability. Let
$\mathbf{L}^{k}=(L_{1},L_{2},\cdots,L_{m})$ and
$\mathbf{s}^{k}=(s_{1},s_{2},\cdots,s_{m})$. Note that the number of
lines in $\mathbf{L}^{k}$, $m$, may vary across rounds.

Transition probabilities and initial probabilities are updated
according to state sequence $\mathbf{s}^k$. To update output
probabilities, we cluster the lines in $\mathbf{L}^{k}$ according to
the corresponding states. Specifically,
\[C_i=\{L_{j}|s_{j}=i\},i=1,2,\cdots,K\]
Then we update the mean and the variance of slopes and lengths with
the method in the initial phase. After that, we obtain the pHMM of
this round, denoted as $\lambda^{k}$. Certain states in
$\lambda^{k-1}$ may disappear from $\lambda^{k}$ if they do not
occur in the state sequence $\mathbf{s}^k$.
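For a single training sequence, this update step reduces to counting and averaging; a minimal sketch (the (slope, length) summary of a line, and the single-sequence handling of the initial probabilities, are our assumptions):

```python
import numpy as np

def update_phmm(lines, states, K):
    # MLE update from the optimal line sequence and state sequence.
    # lines: list of (slope, length); states: parallel list of state ids.
    pi = np.zeros(K)
    pi[states[0]] = 1.0                     # single training sequence
    A = np.zeros((K, K))
    for a, b in zip(states, states[1:]):    # count transitions
        A[a, b] += 1.0
    row = A.sum(axis=1, keepdims=True)
    A = np.divide(A, row, out=np.zeros_like(A), where=row > 0)
    stats = {}                              # per-state slope/length moments
    for i in range(K):
        C = [lines[j] for j, s in enumerate(states) if s == i]
        if C:                               # absent states get no entry
            sl = np.array([l[0] for l in C])
            ln = np.array([l[1] for l in C])
            stats[i] = (sl.mean(), sl.var(), ln.mean(), ln.var())
    return pi, A, stats
```

States missing from `stats` correspond exactly to the states dropped from $\lambda^k$ in the text.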



Finally, we prove the correctness of the refinement approach, that
is, the new production probability in the current round is not less
than that in the last round.

\setcounter{theorem}{0}
\begin{theorem}
The production probability of round $k$ is not less than that of
round $k-1$, that is:
\[P(\mathbf{L}^{k},\mathbf{s}^k|\lambda^{k})\geq P(\mathbf{L}^{k-1},\mathbf{s}^{k-1}|\lambda^{k-1})\]
\end{theorem}

\begin{proof}
Since $\mathbf{L}^{k}$ and $\mathbf{s}^k$ are the optimal
observation sequence and state sequence, i.e., those with the
maximal probability under the pHMM of the last round, it holds that
\begin{equation}
P(\mathbf{L}^{k},\mathbf{s}^k|\lambda^{k-1})\geq
P(\mathbf{L}^{k-1},\mathbf{s}^{k-1}|\lambda^{k-1}) \label{eq_proof1}
\end{equation}

Next, since the parameters in $\lambda^k$ are the results of maximum
likelihood estimation, it holds that
\begin{equation}
P(\mathbf{L}^{k},\mathbf{s}^k|\lambda^{k})\geq
P(\mathbf{L}^{k},\mathbf{s}^{k}|\lambda^{k-1}) \label{eq_proof2}
\end{equation}


Combining Eq.~\ref{eq_proof1} and Eq.~\ref{eq_proof2}, we obtain
\begin{equation}
P(\mathbf{L}^{k},\mathbf{s}^k|\lambda^{k})\geq
P(\mathbf{L}^{k-1},\mathbf{s}^{k-1}|\lambda^{k-1})
\end{equation}
\end{proof}

\section{Applications of the Model}\label{sec:appl}

The pHMM reveals the system dynamics from the time series it
produces. With knowledge of the underlying system, we can deal with
some advanced tasks. In this section, we introduce how to use pHMM
to perform these tasks.

\subsection{Multi-step Value Prediction}
Different from traditional prediction models, which make predictions
based on previous values, the pHMM makes predictions based on
patterns. To be specific, assume we have already learned a pHMM from
the training time series. Now, we go through a testing time series
and predict values based on the learned pHMM. Let $t$ be the current
time point; we first detect the current state, and then, based on
it, predict the values at $t+1$, $t+2,\cdots$. Multi-step value
prediction is very useful in system monitoring, where early
detection of anomalous values is critical.

Assume we have already obtained a pHMM, and let $Y=\{y_1, y_2,
\cdots\}$ be the time series we monitor to make predictions. We
first detect the
current state, with the extended Viterbi algorithm in
Section~\ref{sec:refine}. Pruning strategies 1 and 2 can both be
used here, but the third one is not, since it requires that we
already have the segmentation of the whole time series, which is not
available in online monitoring.

To speed up the process, we utilize another pruning strategy,
introduced in~\cite{perng00}, which can be executed on the fly. It
aims at reducing the number of points at which we check previous
optimal probabilities to compute the current optimal probability.
Specifically, if we find that a past point is unlikely to be a
boundary, we delete all the optimal probabilities at this point from
$LT$.


We do it as follows. Given a minimal distance $D$ and a minimal
percentage $P$, we dynamically remove points $y_{t}$ and $y_{t'}$ if
they satisfy
\[|t-t'|<D\mbox{ and }\frac{|y_t-y_{t'}|}{|y_t+y_{t'}|/2}<P\]
If these two inequalities hold, the two points are close in time and
there is no large fluctuation between their values. With this
strategy, we only need to maintain a small number of optimal
probabilities, from which the current optimal probability can be
computed efficiently.
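A simplified one-pass reading of this rule, which keeps the earlier of two near-duplicate points (an interpretation; the text phrases the removal pairwise):

```python
def filter_points(stream, D, P):
    # Drop an incoming point (t, y) when it is within D time steps of the
    # last kept point and the relative difference |y - y'| / (|y + y'| / 2)
    # is below P; otherwise keep it.
    kept = []
    for t, y in stream:
        if kept:
            t2, y2 = kept[-1]
            denom = abs(y + y2) / 2.0 or 1.0   # guard a zero denominator
            if t - t2 < D and abs(y - y2) / denom < P:
                continue
        kept.append((t, y))
    return kept
```

Only the surviving points carry optimal probabilities forward, which keeps $LT$ small during online monitoring.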

When a new value $y_t$ arrives, we compute $\delta_t(i)$, $1\leq
i\leq K$, and obtain the optimal line sequence and state sequence up
to $t$. Then we predict future values based on the current state.


\subsection{Trend Prediction}
In many applications, users are not interested in forecasting
specific values. Instead, they are interested in the evolution of
trends. With the pHMM, we can predict future trends easily. For
example, we can answer queries like: \emph{what will the trend of
the time series be in 10 minutes}, or \emph{when will the time
series end the current downward trend and enter an upward one}.

The approach is similar to that of multi-step value prediction.
When monitoring a time series, we first detect the current state
online, and then make predictions based on it. For example, to
estimate how long the system will stay in the current state, we
compute the difference between the mean duration of the current
state and its elapsed duration. A more useful case is to estimate
the trend in a future period, such as predicting the temperature
trend tomorrow between 9:00 and 10:00\,am. To answer such a query,
we first predict the time span of the next state based on the
transition probabilities. If it covers the period in question, we
use the mean slope of the next state as the estimated trend; if not,
we move on to the state after the next. We continue this process
until the period in question is covered.
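This state-walking procedure can be sketched as follows, using the mean state durations and mean slopes from the pHMM (a greedy walk along the most likely transitions; names are ours, and `mean_len` is assumed strictly positive so the walk terminates):

```python
import numpy as np

def trend_at(A, mean_len, mean_slope, cur_state, cur_elapsed, horizon):
    # Finish the current state's expected remaining duration, then jump to
    # the most likely next state repeatedly, until the accumulated span
    # covers `horizon` steps ahead; return that state's mean slope.
    state = cur_state
    covered = max(mean_len[state] - cur_elapsed, 0.0)
    while covered < horizon:
        state = int(np.argmax(A[state]))   # most likely next state
        covered += mean_len[state]
    return mean_slope[state]
```

A fuller treatment would sum over all transition paths rather than follow only the most likely one; the greedy walk is the cheapest variant.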


\subsection{Pattern-based Correlation Detection}
Correlation detection is an important operation in time series
mining. Measures such as the correlation coefficient can tell
whether similar subsequences exist between two time series. However,
it is advantageous to detect a more general correlation between two
time series based on patterns. Consider the example shown in
Figure~\ref{fig:corr}. When a burst, $P_1$, occurs in time series
$X$, a more stable upward trend, $P_2$, occurs in time series $Y$
with probability $80\%$. Note that $P_1$ and $P_2$ can be two
totally different patterns. Moreover, they can have different
lengths and need not be aligned; for example, occurrences of $P_2$
may always be 5 seconds later than those of $P_1$. In general, we
learn correlations based on patterns instead of values. We call this
type of correlation the pattern-based correlation.

\begin{figure}[!htp]
  \centering
\includegraphics[width=9cm,height=3cm]{figure/corr.eps}
  \caption{General correlation}
\label{fig:corr}
\end{figure}

The pHMM can be used to find pattern-based correlations effectively.
Given two time series $X$ and $Y$, we learn a pHMM for each.
Then we compute the correlations between patterns in
these two pHMMs. We use two criteria to measure the general
correlation. The first one is frequency, which measures whether
these two patterns have a similar number of occurrences. Assume we
measure the correlation between pattern $P_1$ in $X$ and $P_2$ in
$Y$. Let $P_1$ occur $m_1$ times and $P_2$ occur $m_2$ times
($m_1\leq m_2$). The first criterion is computed as
\[f(P_1,P_2)=\frac{m_1}{m_2}\]
The second criterion measures how well their occurrences align: it
is better if most of their occurrences have similar gaps, or delays.
To this end, we compute the minimal average squared gap over all
possible matchings. In the example shown in Figure~\ref{fig:corr},
the best matching of $P_1$ and $P_2$ is illustrated by dotted lines.


Since $m_1\leq m_2$, we pick out the $m_1$ occurrences of $P_2$ that
best match the occurrences of $P_1$.
Let $\{c_j\}$, $1\leq j\leq m_1$, be the central time points of
occurrences of $P_1$, and $\{c_{i_j}\}$, $1\leq i_j\leq m_2$, be
central points of $m_1$ occurrences of $P_2$. We measure the second
criterion as follows:
\[g(P_1,P_2)=\min \left\{\frac{1}{m_1}\sum_{j=1}^{m_1}(c_{j}-c_{i_j})^2\right\}\]
which can be efficiently computed with a dynamic programming
approach. We combine these two criteria to measure the general
correlation between patterns $P_1$ and $P_2$ as:
\[GC(P_1,P_2)=\frac{g(P_1,P_2)}{f(P_1,P_2)}\]
The smaller $GC(P_1,P_2)$ is, the more correlated $P_1$ and $P_2$
are.
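The minimal gap-square-average $g(P_1,P_2)$ is an order-preserving assignment problem, solvable by the following dynamic program (a sketch, with $f=m_1/m_2$ folded in as in the text):

```python
import numpy as np

def gap_square_avg(c1, c2):
    # Minimal average squared gap over order-preserving matchings of the
    # m1 centers c1 into the m2 >= m1 centers c2.
    m1, m2 = len(c1), len(c2)
    INF = float('inf')
    dp = np.full((m1 + 1, m2 + 1), INF)   # dp[j, i]: first j of c1 into first i of c2
    dp[0, :] = 0.0
    for j in range(1, m1 + 1):
        for i in range(j, m2 + 1):
            match = dp[j - 1, i - 1] + (c1[j - 1] - c2[i - 1]) ** 2
            dp[j, i] = min(dp[j, i - 1], match)   # skip c2[i-1], or match it
    return dp[m1, m2] / m1

def general_correlation(c1, c2):
    # GC(P1, P2) = g / f with f = m1/m2 (m1 <= m2); smaller is more correlated.
    m1, m2 = sorted((len(c1), len(c2)))
    if len(c1) > len(c2):
        c1, c2 = c2, c1
    return gap_square_avg(c1, c2) / (m1 / m2)
```

The DP runs in $O(m_1 m_2)$ time, which is cheap since occurrence counts are small compared to the series length.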


