%\section{Applications of the Model}\label{sec:appl}
%
%The pHMM reveals the dynamics of the system underlying the time
%series it produces. With this knowledge of the underlying system, we
%can handle several advanced tasks. In this section, we describe how
%to use the pHMM to perform them.
%
%\subsection{Multi-step Value Prediction}
%Unlike traditional prediction models, which predict future values
%from previous values, the pHMM makes predictions based on patterns.
%Specifically, assume we have already learned a pHMM from the training
%time series. We then scan a test time series and predict values based
%on the learned pHMM. Let $t$ be the current time. We first detect the
%current state, and then, based on it, predict the values at $t+1$,
%$t+2,\cdots$. Multi-step value prediction is very useful in system
%monitoring, where early detection of anomalous values is critical.
%
%Assume we have already obtained a pHMM, and let $Y=\{y_1, y_2,
%\cdots\}$ be the time series we monitor to make predictions. We first
%detect the current state using the extended Viterbi algorithm in
%Section~\ref{sec:refine}. Pruning strategies 1 and 2 in
%Appendix~\ref{app:prune} can both be used here, but the third cannot,
%since it requires a segmentation of the whole time series, which is
%not available in online monitoring.
%
%To speed up the process, we use another pruning strategy, introduced
%in~\cite{perng00}, which can be executed online. It reduces the
%number of points at which we check the previous optimal probabilities
%when computing the current optimal probability. Specifically, if a
%past point is unlikely to be a boundary, we delete all of its optimal
%probabilities from $LT$.
%
%
%We proceed as follows. Given a minimal distance $D$ and a minimal
%percentage $P$, we dynamically prune points $y_{t}$ and $y_{t'}$ if
%they satisfy
%\[|t-t'|<D\mbox{ and }\frac{|y_t-y_{t'}|}{|y_t+y_{t'}|/2}<P.\]
%If both inequalities hold, the two points are close in time and there
%is no large fluctuation between their values. With this strategy, we
%only need to maintain a small number of optimal probabilities, from
%which the current optimal probability can be computed efficiently.
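%As an illustration (a hypothetical Python sketch, not the paper's
%implementation), one reading of this rule compares each retained
%candidate boundary point against the newly arrived point; here the
%contents of $LT$ are simplified to a list of $(t, y_t)$ pairs:

```python
def prune(candidates, t_new, y_new, D, P):
    """Keep only candidate boundary points that are far from the new
    point in time, or whose value differs from it noticeably.
    candidates: list of (t, y_t) pairs (simplified stand-in for LT).
    D, P: the minimal distance and minimal percentage thresholds."""
    kept = []
    for t, y in candidates:
        close_in_time = abs(t_new - t) < D
        denom = abs(y_new + y) / 2.0
        small_change = denom > 0 and abs(y_new - y) / denom < P
        if close_in_time and small_change:
            continue  # unlikely to be a boundary: drop its entries
        kept.append((t, y))
    kept.append((t_new, y_new))
    return kept
```

%Only the surviving points keep their optimal probabilities for later
%computation.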
%
%When a new value $y_t$ arrives, we compute $\delta_t(i)$, $1\leq
%i\leq K$, and obtain the optimal line sequence and state sequence up
%to $t$. We then predict future values based on the current state.
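%As a minimal sketch of the prediction step (hypothetical Python, with
%the simplifying assumptions that each state stores the mean slope of
%its line pattern and that no state transition occurs within the
%horizon), multi-step prediction extrapolates from the latest value:

```python
def predict_ahead(y_t, state_slope, horizon):
    """Predict values at t+1 .. t+horizon by extrapolating the
    current state's mean slope from the latest observed value y_t."""
    return [y_t + state_slope * h for h in range(1, horizon + 1)]
```

%In monitoring, the predicted values can be compared against the
%incoming ones for early anomaly detection.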
%
%
%\subsection{Trend Prediction}
%In many applications, users are not interested in forecasting
%specific values; instead, they are interested in how trends evolve.
%With the pHMM, we can predict future trends easily. For example, we
%can answer queries such as: \emph{what will the trend of the time
%series be after 10 minutes}; or \emph{when will the time series end
%its current downward trend and enter an upward trend}.
%
%The approach is similar to that of multi-step value prediction. When
%monitoring a time series, we first detect the current state in an
%online fashion, and then make predictions based on it. For example,
%to estimate how long the system will stay in the current state, we
%compute the difference between the mean duration of the current state
%and the time already spent in it. A more useful case is estimating
%the trend in a future period, such as predicting the temperature
%trend tomorrow between 9:00 and 10:00\,am. To answer this query, we
%first predict the time span of the next state based on the transition
%probabilities. If it covers the queried period, we use the mean slope
%of the next state as the estimated trend; if not, we predict the
%state after the next one, and continue this process until the queried
%period is covered.
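%The state-stepping procedure above can be sketched as follows
%(hypothetical Python; \texttt{slopes}, \texttt{durations}, and
%\texttt{next\_state} stand for the per-state mean slope, mean
%duration, and most likely successor, which are simplifications of the
%pHMM's parameters):

```python
def predict_trend(t_now, cur_state, remaining, query_t,
                  slopes, durations, next_state):
    """Estimate the trend (mean slope) covering time query_t by
    stepping through states: start from the expected remaining time
    in the current state, then add the mean durations of the most
    likely successors until the query time is covered."""
    end = t_now + remaining  # expected end of the current state
    s = cur_state
    while end < query_t:
        s = next_state[s]    # most likely successor state
        end += durations[s]  # extend by its mean duration
    return slopes[s]
```

%Stepping by mean durations is a point estimate; the paper's model
%would also allow weighting successors by transition probability.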
%
%
%\subsection{Pattern-based Correlation Detection}
%Correlation detection is an important operation in time series
%mining. Measures such as the correlation coefficient can tell whether
%similar subsequences exist in two time series. However, it is
%desirable to detect a more general correlation between two time
%series based on patterns. Consider the example shown in
%Figure~\ref{fig:corr}. Whenever a burst, $P_1$, occurs in time series
%$X$, a more stable upward trend, $P_2$, occurs in time series $Y$.
%Note that $P_1$ and $P_2$ can be two totally different patterns.
%Moreover, they can have different lengths and need not be aligned;
%for example, occurrences of $P_2$ may always appear 5 seconds later
%than those of $P_1$. In general, we learn correlations based on
%patterns instead of values. We call this type of correlation the
%general correlation.
%
%\begin{figure}[!htp]
%  \centering
%\includegraphics[width=9cm,height=3cm]{figure/corr.eps}
%  \caption{General correlation}
%\label{fig:corr}
%\end{figure}
%
%The pHMM can be used to find general correlations effectively. Given
%two time series $X$ and $Y$, we learn a pHMM for each of them. We
%then compute the correlations between patterns in the two pHMMs,
%using two criteria to measure the general correlation. The first is
%frequency, which measures whether the two patterns have a similar
%number of occurrences. Assume we measure the general correlation
%between pattern $P_1$ in $X$ and pattern $P_2$ in $Y$, where $P_1$
%occurs $m_1$ times and $P_2$ occurs $m_2$ times ($m_1\leq m_2$). The
%first criterion is computed as
%\[f(P_1,P_2)=\frac{m_1}{m_2}\]
%The second criterion measures how well their occurrences align: it is
%better if most of their occurrences have similar gaps, or delays. To
%measure it, we compute the minimal average of the squared gaps
%between matched occurrences; since many matchings are possible, we
%choose the one with the minimal average squared gap. In the example
%shown in Figure~\ref{fig:corr}, the best matching of $P_1$ and $P_2$
%is illustrated by the dotted lines.
%
%
%Since $m_1\leq m_2$, we pick out the $m_1$ occurrences of $P_2$ that
%best match the occurrences of $P_1$.
%Let $\{c_j\}$, $1\leq j\leq m_1$, be the central time points of the
%occurrences of $P_1$, and $\{c_{i_j}\}$, $1\leq i_j\leq m_2$, be the
%central points of the $m_1$ matched occurrences of $P_2$. We measure
%the second criterion as
%\[g(P_1,P_2)=\min \left\{\frac{1}{m_1}\sum_{j=1}^{m_1}(c_{j}-c_{i_j})^2\right\}\]
%where the minimum is taken over all order-preserving matchings; it
%can be computed efficiently with dynamic programming. We combine the
%two criteria to measure the general correlation between patterns
%$P_1$ and $P_2$ as
%\[GC(P_1,P_2)=\frac{g(P_1,P_2)}{f(P_1,P_2)}\]
%The smaller $GC(P_1,P_2)$ is, the more correlated $P_1$ and $P_2$
%are.
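%The two criteria can be combined in a hypothetical Python sketch (the
%list-of-centers representation is an assumption, not the paper's
%code); the order-preserving matching with minimal average squared gap
%is found by dynamic programming over the occurrence centers:

```python
def general_correlation(c1, c2):
    """Compute GC = g/f from the occurrence centers c1 of P_1 and
    c2 of P_2. g is the minimal average squared gap over
    order-preserving matchings of the smaller set into the larger,
    found by DP; f is the frequency ratio m1/m2."""
    if len(c1) > len(c2):
        c1, c2 = c2, c1          # ensure m1 <= m2
    m1, m2 = len(c1), len(c2)
    INF = float("inf")
    # dp[j][i]: min total squared gap matching c1[:j] into c2[:i]
    dp = [[INF] * (m2 + 1) for _ in range(m1 + 1)]
    for i in range(m2 + 1):
        dp[0][i] = 0.0
    for j in range(1, m1 + 1):
        for i in range(j, m2 + 1):
            skip = dp[j][i - 1]  # leave c2[i-1] unmatched
            match = dp[j - 1][i - 1] + (c1[j - 1] - c2[i - 1]) ** 2
            dp[j][i] = min(skip, match)
    g = dp[m1][m2] / m1
    f = m1 / m2
    return g / f  # smaller means more strongly correlated
```

%The dynamic program runs in $O(m_1 m_2)$ time.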


%%% Local Variables:
%%% mode: latex
%%% TeX-master: "kdd09"
%%% End:
