\section{Experiment}\label{sec:expr}

We conducted extensive performance tests of our approach. All
algorithms are implemented in Matlab 7.0 and tested on a PC with an
Intel Pentium IV 2.4GHz CPU and 2GB of RAM.





\subsection{Experiment Setup}
We choose two types of real-life time series: the first is relatively
regular, while the second is less so.
The values of both datasets are normalized into the interval $[0,1]$.
\begin{enumerate}
\item Power demand time series~\cite{infovis99}. This series consists
  of 15-minute averaged power demand values for a research facility
  (ECN) over the full year 1997, and it contains 35,050 data points.
  The training subsequence and the testing subsequence each contain
  2,000 time points. Since the fluctuation of power demand is similar
  from day to day, this series is relatively regular.
\item Spot prices time
  series. %\footnote{http://www.stat.duke.edu/data-sets/mw/ts\_data/all\_exrates.html}.
  This dataset contains the daily spot prices of exchange rates for 12
  currencies relative to the US dollar. For each currency, there are
  2,567 workday spot prices over the period 10/9/86 to 8/9/96. In each
  experiment, we randomly select one currency.
\end{enumerate}
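The normalization into $[0,1]$ is a standard min-max rescaling. A
minimal Python sketch (our implementation is in Matlab; the function
name here is illustrative):

```python
def minmax_normalize(series):
    """Scale a sequence of values into the interval [0, 1]."""
    lo, hi = min(series), max(series)
    if hi == lo:                       # constant series: map everything to 0
        return [0.0 for _ in series]
    return [(v - lo) / (hi - lo) for v in series]

# Example with raw power-demand-like values.
print(minmax_normalize([10.0, 20.0, 15.0]))  # [0.0, 1.0, 0.5]
```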

We conducted three groups of experiments. In the first group, we test
the efficiency of the proposed approach, especially of the three
pruning strategies. In the second group, we analyze the impact of the
parameters $\varepsilon_r$ and $\varepsilon_c$. Finally, we test the
effectiveness of pHMM for answering the three types of queries
introduced in Section~\ref{sec:appl}.


\subsection{Experiment Results}
\paragraph*{Efficiency}
% In this experiment, we test the efficiency of our approach.
We conduct the experiments on the Power dataset. We train the model
in three scenarios: no pruning, pruning with strategy 1, and pruning
with both strategies 1 and 2. In each scenario, we compare the
runtime under different values of $N$, the selection ratio of
boundary points. For example, $N=0.05$ means we choose
$0.05 \times 2000 = 100$ time points to train the model. The results
are shown in Table~\ref{tab:runtime}. It can be seen that pruning
reduces the runtime significantly, especially when both strategies 1
and 2 are applied.

\begin{table}[!htb]
\centering
\begin{tabular}{|c|c|c|c|c|}
  \hline  $N$          & 0.05 & 0.1 & 0.5  & 1    \\ \hline
  \hline  No-Prune     & 96   & 381 & 9265 & --   \\
  \hline  Prune 1      & 41   & 93  & 689  & 1921 \\
  \hline  Prune 1\&2   & 29   & 70  & 425  & 934  \\
  \hline
\end{tabular}
\caption{Runtime Comparison (seconds)} \label{tab:runtime}
\end{table}

Since pruning loses information, we test its influence on the quality
of the learned pHMM. As a larger $N$ leads to a longer training time,
it is desirable that a relatively small $N$ can still build a
high-quality model. We directly measure the quality of the model by
the production probability of the whole sequence, reported as its
negative logarithm, $-\log(P(\mathbf{L},\mathbf{s}|\lambda))$. We
test it on both the Power dataset and the Spot dataset. The results
are shown in Figure~\ref{fig:varn}.
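Reporting the negative logarithm also avoids numerical underflow:
multiplying thousands of small per-step probabilities vanishes in
floating point, while summing their logs stays well conditioned. A
hypothetical Python sketch, assuming the production probability
factors into per-step probabilities:

```python
import math

def neg_log_production_prob(step_probs):
    """Accumulate -log P as a sum of per-step negative log
    probabilities instead of multiplying raw probabilities,
    which would underflow for long sequences."""
    return -sum(math.log(p) for p in step_probs)

# 2,000 steps of probability 0.1 each: the raw product is 1e-2000
# (underflows to 0.0 in double precision), but the log-sum is
# perfectly representable.
print(neg_log_production_prob([0.1] * 2000))
```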

\begin{figure}[htbp]
\centering
\includegraphics[height=3cm]{figure/varn.eps}
\caption{Influence of $N$} \label{fig:varn}
\end{figure}

We can see that on both datasets, when the ratio is equal to or
larger than $0.1$, the production probability is not affected much,
which verifies the effectiveness of this pruning strategy. % In other
% words, the selected points by this strategy are more likely to be a
% boundary.
In other words, we can still learn high-quality models by looking at a
small number of time points. In our experiments, we set $N$ to $0.1$.



\paragraph*{The influence of $\varepsilon_r$ and $\varepsilon_c$}
In this group of experiments, we test the influence of two
parameters: $\varepsilon_r$, the approximation error threshold in
time series segmentation, and $\varepsilon_c$, the relative error
threshold in line clustering. The experiments are conducted on the
Spot dataset and the Power dataset. The following two measurements
are used to evaluate the quality of the learned pHMM:
\begin{itemize}
\item Residual error per time point. This measures how accurately the
  pHMM represents the original time series by segment lines. It is
  computed as follows. After obtaining the model and the optimal
  segmentation $\mathbf{L}$, for each interval $L_i$, we use the
  ``central'' line of state $s_i$ to approximate the subsequence. The
  errors are summed and then divided by the length of the whole time
  series. A smaller residual error means the learned model represents
  the time series more accurately.
\item Entropy. This is the second criterion used in clustering segment
  lines in the initial phase, $I$ in Eq.~\ref{eq:objective}. It
  measures how certain the states are about the next state: the
  smaller the entropy, the more certain the states are about the
  following state.
\end{itemize}
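As a rough illustration of the two measures, a Python sketch follows.
The data layout is hypothetical: each state's central line is given by
a slope and an intercept over an interval, and each row of the
transition matrix is a next-state distribution.

```python
import math

def residual_error_per_point(series, segments):
    """segments: list of (start, end, slope, intercept) tuples, where
    (slope, intercept) is the central line of the state assigned to
    interval [start, end). Sum absolute deviations of the series from
    the central lines and divide by the series length."""
    total = 0.0
    for start, end, slope, intercept in segments:
        for t in range(start, end):
            total += abs(series[t] - (slope * t + intercept))
    return total / len(series)

def transition_entropy(row):
    """Entropy of one state's next-state distribution; smaller means
    the state is more certain about its successor."""
    return -sum(p * math.log(p) for p in row if p > 0)

# A uniform distribution over two successors is maximally uncertain.
print(transition_entropy([0.5, 0.5]))  # log(2) ~ 0.693
```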

\begin{figure}[htbp]
\centering
\begin{tabular}[h]{cc}
\includegraphics[height=3cm]{figure/vars-spot.eps} &
\includegraphics[height=3cm]{figure/vars-power.eps} \\
(a)  Spot Dataset & (b)  Power Dataset
\end{tabular}
\caption{Residual Error and Entropy vs.\ $\varepsilon_r$
\label{fig:varr}}
\end{figure}

\begin{figure}[htbp]
\centering
\begin{tabular}[h]{cc}
\includegraphics[height=3cm]{figure/varc-spot.eps} &
\includegraphics[height=3cm]{figure/varc-power.eps} \\
(a) Spot dataset & (b)  Power dataset
\end{tabular}
\caption{Residual Error and Entropy vs.\ $\varepsilon_c$
\label{fig:varc}}
\end{figure}

Figure~\ref{fig:varr} shows the results with varying
$\varepsilon_r$. To make the comparison of the two measurements
clearer, we scale the residual error by a factor of 0.002. It can be
seen that on both datasets, the residual error increases with
$\varepsilon_r$, which means a larger $\varepsilon_r$ reduces the
accuracy with which the learned model approximates the time series.
In contrast, when $\varepsilon_r$ increases, the entropy decreases.
Users can therefore set $\varepsilon_r$ according to their
requirements: to represent the time series more accurately, choose a
smaller $\varepsilon_r$; to be more certain about states, choose a
larger $\varepsilon_r$.

\begin{figure}[htbp]
\centering
\begin{tabular}[h]{cc}
\includegraphics[height=3.1cm]{figure/predt-spot.eps} &
\includegraphics[height=3.1cm]{figure/predt-power.eps} \\
(a) Spot dataset & (b)  Power dataset
\end{tabular}
\caption{Accuracy of Trend Prediction \label{fig:predt1}}
\end{figure}

The results of varying $\varepsilon_c$ are shown in
Figure~\ref{fig:varc}. The same scaling factor is applied to the
residual error as in the previous experiment. It can be seen that
$\varepsilon_c$ exhibits the same characteristics as $\varepsilon_r$:
when it increases, the relative error of the lines within a cluster
grows, so more residual error is generated; on the contrary, the
entropy decreases.



\begin{figure}[htbp]
\centering
\begin{tabular}[h]{cc}
\includegraphics[height=3.1cm]{figure/predt_power_binary.eps} &
\includegraphics[height=3.1cm]{figure/predt_spot_binary.eps} \\
(a) Power dataset & (b)  Spot dataset
\end{tabular}
\caption{Accuracy of Binary Trend Prediction\label{fig:predt2}}
\end{figure}




\paragraph*{Trend Prediction}
The major advantage of pHMM is that it can be used to perform the tasks discussed
in Section~\ref{sec:appl}. In this experiment, we first test the effectiveness of
the proposed approach for trend prediction.

The experiments are conducted on both the Spot dataset and the Power
dataset. We compute the prediction accuracy as follows. In the
testing time series, we randomly select 50 time points. At each time
point, we make trend predictions after 5 different gaps: 10, 20, 30,
40, and 50. Assume the current time point is $t$. For each gap $d$,
we predict the trend of the length-20 subsequence $[t+d+1,t+d+20]$.
The trend is represented by a segment line, and we compare the true
trend with the trend estimated by our approach. For example, at time
point $100$, we predict the trends of $[111,130]$, $[121,140]$,
$\cdots$. We compute the relative error as
$\frac{e(l)-b(l)}{b(l)}$, where $e(l)$ is the slope of the estimated
line and $b(l)$ is that of the best-fit line. We compare the accuracy
of two models: the first is the pHMM obtained by clustering only, and
the second by both clustering and refinement. Through this
experiment, we test the effectiveness of both the clustering
algorithm and the refinement process. The results are shown in
Figure~\ref{fig:predt1}. It can be seen that even in the
cluster-based model, the relative error does not increase
dramatically as the gap increases, which verifies the effectiveness
of our clustering approach. On both datasets, and especially on the
Spot dataset, which is less regular than the Power dataset, the
refined pHMM is more accurate. % On Power
% dataset, the data is more regular, which causes the cluster-based
% model is similar with that after refinement.
% But in Spot dataset, the
% data is less regular, so two models are more different.
This result clearly demonstrates the effectiveness of the refinement
process.
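The slope comparison above can be sketched in Python as follows;
computing $b(l)$ by ordinary least squares over the subsequence is an
assumption about how the best-fit line is obtained:

```python
def best_fit_slope(ys):
    """Least-squares slope of ys against t = 0..len(ys)-1,
    i.e. the slope b(l) of the best-fit line."""
    n = len(ys)
    t_mean = (n - 1) / 2
    y_mean = sum(ys) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(ys))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

def relative_slope_error(estimated_slope, ys):
    """Relative error (e(l) - b(l)) / b(l) of an estimated slope
    against the best-fit slope of the true subsequence."""
    b = best_fit_slope(ys)
    return (estimated_slope - b) / b

print(best_fit_slope([0.0, 2.0, 4.0]))  # 2.0
```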



We also conduct experiments on binary trend prediction. For each
testing time series, we make a binary trend prediction: up or down.
Three approaches are compared: Rand, Regression, and pHMM. The Rand
approach makes random guesses. The Regression approach first computes
the linear regression of the time series in the current time window,
and then makes predictions with it. For each dataset, we again pick
50 time points and predict the trend after the next 10, 20, 30, 40,
and 50 steps. Figure~\ref{fig:predt2} shows the average accuracy over
all 50 time points. It can be seen that pHMM is more accurate than
the other approaches.
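The Regression baseline can be sketched in Python; this is an
illustrative version, assuming the up/down call is taken from the sign
of the least-squares slope of the current window:

```python
def regression_trend(window):
    """Predict 'up' or 'down' from the least-squares slope of the
    current window (the Regression baseline). Since the slope's
    denominator is always positive, the sign of the numerator
    decides the call."""
    n = len(window)
    t_mean = (n - 1) / 2
    y_mean = sum(window) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(window))
    return "up" if num >= 0 else "down"

print(regression_trend([0.1, 0.2, 0.3, 0.5]))  # up
```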

\paragraph*{Multi-step Value Prediction}
\begin{figure}[htbp]
\centering
\begin{tabular}[h]{cc}
\includegraphics[height=3.2cm]{figure/predv-spot-new.eps} &
\includegraphics[height=3.2cm]{figure/predv-power-new.eps} \\
(a) Spot dataset & (b)  Power dataset
\end{tabular}
\caption{Accuracy of Multi-step Value Prediction\label{fig:predv}}
\end{figure}

In this experiment, we test the accuracy of pHMM for multi-step
value prediction. The experiments are conducted on both the Spot and
Power datasets. We select 50 points randomly. For each selected
point, we predict the values after 1, 5, 10, 20, 30, and 40 steps
respectively. The pHMM is compared with two classic approaches: a
linear regression model (denoted by LR) and a regression tree
(denoted by RT)~\cite{timeseries94}. Both approaches are trained on
the values of the previous 20 time points. For the linear regression
model, we learn the model in an online fashion; that is, at each
selected time point, the model is learned from the previous 20
values, and all predictions are then made with this model. For the
regression tree, we train the model before making predictions.
First, we build a training dataset from the original time series,
where each row contains 20 input values and 1 output value. Then a
regression tree is learned from this dataset. We make predictions as
follows. Assume the current time point is $t$. First we predict the
value at $t+1$ based on the values at $t-19, t-18, \cdots, t$. Then
we treat the estimated value at $t+1$ as an input value and estimate
the value at $t+2$. This process continues until the values at all
required steps are estimated.
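The feedback loop used with RT can be sketched generically in Python.
`predict_next` below is a hypothetical stand-in (a one-step
persistence model), not the trained regression tree; the point is
only how each estimate is pushed back into the 20-value input window:

```python
def rolling_forecast(history, predict_next, horizon):
    """Iteratively forecast `horizon` steps ahead: each new estimate
    is appended to the input window and used for the next step."""
    window = list(history[-20:])       # model input: previous 20 values
    out = []
    for _ in range(horizon):
        y_hat = predict_next(window)
        out.append(y_hat)
        window = window[1:] + [y_hat]  # feed the estimate back in
    return out

# Hypothetical stand-in predictor: repeat the last observed value.
persistence = lambda w: w[-1]
print(rolling_forecast([0.2] * 20, persistence, 3))  # [0.2, 0.2, 0.2]
```

Note that an error in an early estimate propagates through the window
into every later prediction, which is exactly the weakness of RT
discussed below.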

We use the relative error as the measurement. The results are shown
in Figure~\ref{fig:predv}. It can be seen that on both datasets, pHMM
is more accurate than the other two approaches. LR predicts values
based on the current line, which obviously cannot accurately predict
values after a large gap. Unlike LR, RT contains multiple prediction
rules, so when the step increases, it can still find the fittest rule
to make predictions. However, since it has no knowledge of whether an
estimated value is appropriate, it cannot adjust subsequent
predictions, even if a previous estimate has a large error. Hence its
accuracy drops as the step increases. The results show that these two
models are suitable for predicting the next value, but not for
multi-step prediction. An alternative way to increase their accuracy
is to train a separate model for each prediction step, but this
requires learning many models, which is infeasible in practice. In
contrast, pHMM makes accurate predictions even as the step increases,
which verifies the advantage of the pattern-based model.


\paragraph*{Pattern-based Correlation Detection}
Finally, we test the effectiveness of pHMM for pattern-based
correlation detection. We conduct this experiment on the Spot
dataset. Since all time series cover the same time period, we hope to
find correlations between the price trends of different currencies.
We use the time series of the ``French Franc'' as the reference time
series and compute correlations between it and 5 other currencies
(``Australian Dollar'', ``Belgian Franc'', ``Canadian Dollar'', and
so on). For each time series, we train a pHMM. Then, for each state
of the ``French Franc'' model, we compute its pattern-based
correlation ($GC$) with every state in the 5 other time series.
Table~\ref{tab:gc} shows both the minimal $GC$ and the average $GC$.
For example, in the first row, the minimal $GC$ is the smallest $GC$
computed over all state pairs in which one state is from ``French
Franc'' and the other is from ``Australian Dollar''; the average $GC$
is the average of the $GC$s over all such state pairs.
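The two summary statistics can be sketched in Python; the $GC$ values
themselves come from the models, so the matrix below is a hypothetical
input (rows: states of the reference currency, columns: states of one
target currency):

```python
def summarize_gc(gc_matrix):
    """Given GC values for every state pair (one row per reference
    state, one column per target state), return the minimal GC and
    the average GC over all pairs."""
    flat = [v for row in gc_matrix for v in row]
    return min(flat), sum(flat) / len(flat)

print(summarize_gc([[2.0, 4.0], [6.0, 8.0]]))  # (2.0, 5.0)
```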
\begin{table}[!htb]
\centering
\begin{tabular}{|c|c|c|}
  \hline                     & Minimal $GC$   & Average $GC$ \\
  \hline  Australian Dollar  & 88.33          & 130.45 \\
  \hline  Belgian Franc      & \textbf{41.25} & \textbf{50.62} \\
  \hline  Canadian Dollar    & 120.41         & 160.37 \\
  \hline  German Mark        & 60.57          & 120.46 \\
  \hline  Japanese Yen       & 130.59         & 170.96 \\
  \hline
\end{tabular}
\caption{Pattern-based Correlation} \label{tab:gc}
\end{table}

It can be seen that the 5 target currencies exhibit different
correlations. The fluctuation of the ``Belgian Franc'' is most
similar to that of the ``French Franc'', so both its minimal and
average $GC$ are the smallest among all the currencies. The ``German
Mark'' also has a state similar to one of the ``French Franc'',
although its average $GC$ is still high. The states of the 3
remaining currencies differ more from those of the ``French Franc''.
This experiment shows that with pattern-based correlation, we can
compare time series at a higher level.





%%% Local Variables:
%%% mode: latex
%%% TeX-master: "kdd09"
%%% End:
