\section{Experiment}\label{sec:expr}

We conducted extensive performance tests of our approach. All
algorithms are implemented in Matlab 7.0 and tested on a PC with an
Intel Pentium IV 2.4GHz CPU and 2GB RAM.





\subsection{Experiment Setup}
We choose two types of real-world time series: the first is relatively regular,
while the second is less regular.
\begin{enumerate}
\item Power demand time series~\cite{infovis99}. It contains the 15-minute
  averaged values of power demand for a research facility (ECN) over the
  full year 1997, with 35,050 data points in total. The training
  subsequence and the testing subsequence both contain 2,000 time points.
  Since the fluctuation of power demand is similar from day to day, this time series is more regular.
\item Spot prices time series\footnote{http://www.stat.duke.edu/data-sets/mw/ts\_data/all\_exrates.html}.
It contains the spot prices (foreign currency in dollars) for daily
exchange rates of 12 currencies relative to the US dollar.
For each currency, there are 2,567 (work-)daily spot prices over the period 10/9/86 to 8/9/96.
Compared to the Power dataset, this group of datasets is less regular.
In each experiment, we randomly select one currency.
\end{enumerate}
We normalize both datasets into the interval $[0,1]$.
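As a minimal sketch, the min-max normalization applied to both datasets can be written as follows (NumPy-based; the function name is illustrative, not from the paper):

```python
import numpy as np

def minmax_normalize(x):
    """Rescale a 1-D series into [0, 1], the preprocessing described above."""
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    if hi == lo:                      # constant series: map everything to 0
        return np.zeros_like(x)
    return (x - lo) / (hi - lo)

series = np.array([3.0, 7.0, 5.0, 11.0])
print(minmax_normalize(series))       # all values now lie in [0, 1]
```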

We conducted three groups of experiments. In the first group, we test the
efficiency of the proposed approach, especially the three pruning strategies.
In the second group, we analyze
the influence of the parameters $\varepsilon_r$ and $\varepsilon_c$.
Finally, we test the effectiveness of pHMM in
answering the three types of queries introduced in Section~\ref{sec:appl}.


\subsection{Experiment Results}
\paragraph*{Efficiency}
In this experiment, we test the efficiency of our approach. The experiments are
conducted on the Power dataset. We train the model in three scenarios: no pruning,
only pruning strategy 1, and both strategies 1 and 2. In each scenario, we compare
the runtime under different values of $N$, the selection ratio of boundary points in strategy 3.
For example, $N=0.05$ means we choose $0.05 \times 2000 = 100$ time points to train the model.
The results are shown in Table~\ref{tab:runtime}. It can be seen that the three proposed pruning strategies reduce the
runtime greatly, especially the third strategy.
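The selection of a fraction $N$ of candidate boundary points can be sketched as below. The ranking criterion used here (largest absolute second difference, i.e. the sharpest local bends) is only an illustrative assumption; the paper's strategy 3 may rank points differently.

```python
import numpy as np

def select_boundary_candidates(x, ratio):
    """Pick ratio * len(x) candidate boundary points from series x.

    Ranking by curvature (second difference) is an assumption made for
    illustration, not necessarily the paper's actual criterion."""
    x = np.asarray(x, dtype=float)
    k = max(1, int(ratio * len(x)))
    curvature = np.abs(np.diff(x, n=2))          # len(x) - 2 interior points
    interior = np.argsort(curvature)[::-1][:k] + 1
    return np.sort(interior)

x = np.sin(np.linspace(0, 20, 2000))
idx = select_boundary_candidates(x, 0.05)
print(len(idx))   # 0.05 * 2000 = 100 training points
```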

\begin{table}[!htb]
\centering
\begin{tabular}{|c|c|c|c|c|}
  \hline  $N$        & 0.05 & 0.1 & 0.5 & 1\\
  \hline  No-Prune    & 96    &  381   &   9265  &   - \\
  \hline  Prune 1         & 41    &   93  & 689  &   1921 \\
  \hline  Prune 1\&2       & 29    &   70  & 425  &   934 \\
  \hline
\end{tabular}
\caption{Runtime Comparison (s)} \label{tab:runtime}
\end{table}

Since only the third strategy is lossy, we test its influence on the quality of
the learned pHMM. Since a larger $N$ incurs more training time, it is
desirable that a relatively small $N$ achieves a high-quality
model. We measure the quality of the model directly with the
production probability of the whole sequence, reported as its
negative logarithm, $-\log(P(\mathbf{L},\mathbf{s}|\lambda))$. We test it
on both the Power dataset and the Spot dataset. The
results are shown in Figure~\ref{fig:varn}.

It can be seen that on both datasets, when
the ratio is equal to or larger than $0.1$, the production
probability does not change much, which verifies the effectiveness
of this pruning strategy. In other words, the points selected by
this strategy are more likely to be boundaries. Hence, we can safely learn
the model by scanning only a small number of time points.
In the remaining experiments, we set $N$ to $0.1$.

\begin{figure}[htbp]
\centering
\includegraphics[height=3cm]{figure/varn.eps}
\caption{Influence of $N$}
\label{fig:varn}
\end{figure}

\paragraph*{The influence of $\varepsilon_r$ and $\varepsilon_c$}
In this group of experiments, we test the influence of two parameters,
$\varepsilon_r$, approximation error threshold in time series
segmentation, and $\varepsilon_c$, relative error threshold in line
clustering. The experiments are conducted on the Spot dataset and the
Power dataset. The following two measurements are used to measure the
quality of the learned pHMM:
\begin{itemize}
\item Residual error per time point. It measures how accurately pHMM
  represents the original time series by segment lines. It is computed
  as follows. After obtaining the model and the optimal segmentation,
  $\mathbf{L}$, for each interval $L_i$, we use the ``central'' line of
  state $s_i$ to approximate the subsequence. The errors are
  summed and then divided by the length of the whole time series.
  A smaller residual error means the learned model represents the time
  series more accurately.
\item Entropy. It is the second criterion used in clustering segment
  lines in the initial phase, $I$ in Eq.~\ref{eq:objective}.  It
  measures the certainty of the states about the next state. The
  smaller the entropy, the more certain the states are about the
  following state.
\end{itemize}
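The two measurements above can be sketched as follows. The (slope, intercept) representation of a state's central line and the use of absolute errors are illustrative assumptions; the paper's exact error definition may differ.

```python
import numpy as np

def residual_error_per_point(series, segments, central_lines):
    """Residual error per time point: approximate each interval by the
    'central' line of its state, sum the errors, divide by the length.
    segments: list of (start, end) index pairs; central_lines: matching
    (slope, intercept) pairs. Names are illustrative, not the paper's."""
    total = 0.0
    for (s, e), (a, b) in zip(segments, central_lines):
        t = np.arange(e - s)
        total += np.abs(np.asarray(series[s:e]) - (a * t + b)).sum()
    return total / len(series)

def transition_entropy(A):
    """Average entropy (in bits) of each row of the transition matrix A,
    i.e. how certain each state is about the next state."""
    A = np.asarray(A, dtype=float)
    logs = np.log2(np.where(A > 0, A, 1.0))   # log2(1) = 0 kills zero entries
    return float(-(A * logs).sum(axis=1).mean())
```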

\begin{figure}[htbp]
\centering
\begin{tabular}[h]{cc}
\includegraphics[height=3cm]{figure/vars-spot.eps} &
\includegraphics[height=3cm]{figure/vars-power.eps} \\
(a)  Spot Dataset & (b)  Power Dataset
\end{tabular}
\caption{Residual Error and Entropy Vs. $\varepsilon_r$
\label{fig:varr}}
\end{figure}

\begin{figure}[htbp]
\centering
\begin{tabular}[h]{cc}
\includegraphics[height=3cm]{figure/varc-spot.eps} &
\includegraphics[height=3cm]{figure/varc-power.eps} \\
(a) Spot dataset & (b)  Power dataset
\end{tabular}
\caption{Residual Error and Entropy Vs. $\varepsilon_c$
\label{fig:varc}}
\end{figure}

The results of varying $\varepsilon_r$ are shown in
Figure~\ref{fig:varr}. To make the comparison of the two measurements
clearer, we scale the residual error by a factor of 0.002. It can be seen
that on both datasets, when $\varepsilon_r$ increases, the residual
error also increases. This means a larger $\varepsilon_r$ reduces the
accuracy with which the learned model approximates the time series. In
contrast, when $\varepsilon_r$ increases, the entropy decreases. So in
different applications, users can set $\varepsilon_r$ according to their
requirements. If users hope to represent the time series more accurately, a
smaller $\varepsilon_r$ is better, while if they expect more
certainty of the states, a larger $\varepsilon_r$ is better.

The results of varying $\varepsilon_c$ are shown in
Figure~\ref{fig:varc}, with the residual error scaled by the same
factor. It can be seen that $\varepsilon_c$ exhibits the same
characteristics as $\varepsilon_r$: when it increases, the relative
error of the lines within a cluster grows, so more residual error is
generated, while the entropy decreases.

\begin{figure}[htbp]
\centering
\begin{tabular}[h]{cc}
\includegraphics[height=3cm]{figure/predt-spot.eps} &
\includegraphics[height=3cm]{figure/predt-power.eps} \\
(a) Spot dataset & (b)  Power dataset
\end{tabular}
\caption{Accuracy of Trend Prediction
\label{fig:predt}}
\end{figure}

\begin{figure}[htbp]
\centering
\begin{tabular}[h]{cc}
\includegraphics[height=3cm]{figure/predt_spot_binary.eps} &
\includegraphics[height=3cm]{figure/predt_power_binary.eps} \\
(a) Spot dataset & (b)  Power dataset
\end{tabular}
\caption{Accuracy of binary trend prediction\label{fig:predtb}}
\end{figure}

\begin{figure}[htbp]
\centering
\begin{tabular}[h]{cc}
\includegraphics[height=3cm]{figure/predv-spot-new.eps} &
\includegraphics[height=3cm]{figure/predv-power-new.eps} \\
(a) Spot dataset & (b)  Power dataset
\end{tabular}
\caption{Accuracy of Multi-step Value Prediction\label{fig:predv}}
\end{figure}


\paragraph*{Trend Prediction}
The major advantage of pHMM is that it can be used to perform the tasks discussed
in Section~\ref{sec:appl}. In this experiment, we first test the effectiveness of
the proposed approach for trend prediction.

The experiments are conducted on both the Spot dataset and the Power
dataset. We compute the prediction accuracy as follows. In the
testing dataset, we randomly select 50 time points. At each time
point, we make trend predictions after 5 different gaps: 10, 20, 30,
40, and 50. Assume the current time point is $t$. For each gap $d$, we
predict the trend of the 20-length subsequence $[t+d,t+d+20]$. The trend
is represented by a segment line. We compare the true trend with the
trend estimated by our approach. For example, at time point $100$,
we predict the trends of $[110,130]$, $[120,140],\cdots$. We compute
the relative error as
\[\frac{e(l)-b(l)}{e(l)}\]
where $e(l)$ is the slope of the estimated line, and $b(l)$ is that
of the best-fit line. The accuracies of two models are compared: the first
is the pHMM obtained by clustering only, the second by
both clustering and refinement. Through this experiment, we test the
effectiveness of both the clustering algorithm and the refinement
process. The results are shown in Figure~\ref{fig:predt}. It can be
seen that even in the cluster-based model, the relative error does not
increase dramatically as the gap increases, which verifies the
effectiveness of our clustering approach. On both datasets, the refined
pHMM is clearly more accurate, especially on the Spot dataset. On the
Power dataset, the data is more regular, so the cluster-based model is
similar to the refined model; on the Spot dataset, the data is less
regular, so the two models differ more.
This result clearly demonstrates the effectiveness of the refinement
process.
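The relative error between the estimated slope and the best-fit slope can be computed as below; the use of `numpy.polyfit` for the best-fit line is an illustrative choice, and function names are not from the paper.

```python
import numpy as np

def best_fit_slope(y):
    """Least-squares slope of a subsequence, i.e. b(l) for its best-fit line."""
    t = np.arange(len(y))
    return np.polyfit(t, y, 1)[0]

def relative_trend_error(estimated_slope, true_subsequence):
    """The relative error (e(l) - b(l)) / e(l) from the formula above."""
    return (estimated_slope - best_fit_slope(true_subsequence)) / estimated_slope
```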

We also conduct experiments on binary trend prediction. For each
testing time series, we make a binary trend prediction: up or down.
Three approaches are compared: Rand, Regression, and pHMM. The Rand
approach predicts by randomly guessing up or down with
probability 50\%. The Regression approach predicts by first
computing the linear regression of the time series in the current
time window, and then extrapolating it. For each dataset,
we pick 500 time points and predict the trend of the next 10, 20, 30,
40, and 50 steps. Figure~\ref{fig:predtb} shows the average accuracy
over all 500 time points. It can be seen that pHMM is clearly more
accurate than the other approaches.
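The Regression baseline described above can be sketched as follows, assuming (as an illustration) that the sign of the fitted slope determines the up/down call:

```python
import numpy as np

def regression_trend(window):
    """Regression baseline: fit a line to the current time window and
    predict 'up' if its slope is positive, 'down' otherwise."""
    t = np.arange(len(window))
    slope = np.polyfit(t, window, 1)[0]
    return "up" if slope > 0 else "down"
```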

\paragraph*{Multi-step Value Prediction}
In this experiment, we test the accuracy of pHMM for multi-step
value prediction. The experiments are conducted on the Spot dataset and
the Power dataset. We again select 50 points from the training dataset
randomly. For each selected point, we predict the values after 1,
10, 20, 30, and 40 steps respectively. The pHMM is compared with two
classic approaches: the Linear Regression model (denoted by LR) and the Regression
Tree (denoted by RT)~\cite{timeseries94}. Both approaches are trained on the
values of the previous 20 time points. The linear regression model is
learned in an online way: at each selected time
point, the model is fitted to the previous 20 values, and
all 5 predictions are made based on this model. The regression tree
is trained before making predictions. First, we build a
training dataset from the original time series, in which each row
contains 20 input values and 1 output value. Then a regression tree
is learned from this dataset. We make predictions as follows. Assume the current time
point is $t$. First we predict the value at $t+1$ based on the values at
$t-19, t-18,\cdots,t$. Then we treat the estimated value at $t+1$ as
an input value, and estimate the value at $t+2$. This process continues
until all 5 values at $t+1, t+10, t+20, t+30, t+40$ are
estimated.
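The iterative prediction loop described above can be sketched as follows; any one-step regressor (e.g. a trained regression tree) can be plugged in, and the function names are illustrative:

```python
import numpy as np

def multi_step_predict(history, one_step_model, n_steps, window_len=20):
    """Iterative multi-step prediction as described for RT: predict the
    next value from the last window_len values, feed the estimate back
    in as an input, and repeat n_steps times."""
    window = list(history[-window_len:])
    preds = []
    for _ in range(n_steps):
        nxt = float(one_step_model(np.array(window)))
        preds.append(nxt)
        window = window[1:] + [nxt]   # the estimate becomes an input
    return preds
```

For instance, with a toy one-step model `lambda w: w[-1] + 1.0`, the loop keeps extending the series one unit per step, each prediction built on the previous estimate.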

We use the relative error as the measurement. The results are shown in
Figure~\ref{fig:predv}. It can be seen that on both datasets, pHMM
is clearly more accurate than the other two approaches. LR
predicts values based on the current line,
which obviously cannot predict values after a large gap
accurately. Unlike LR, RT contains
multiple prediction rules, so when the step increases, it can still find
the fittest rule to make a prediction. But since it has no knowledge of
whether an estimated value is appropriate, it cannot adjust the
subsequent predictions, even if a previous estimated value has a large
error. Hence when the step increases, its accuracy suffers. The
results show that these two models are suitable for predicting the
next value, but not for multi-step prediction. To increase the
accuracy of these approaches, an alternative is to train a
specific model for each prediction step, but that requires
learning many models, which is infeasible. In contrast, pHMM
makes accurate predictions even when the step increases, which
verifies the advantage of the pattern-based model.


\paragraph*{Pattern-based Correlation Detection}
Finally, we test the effectiveness of pHMM for pattern-based
correlation detection. We conduct this experiment on the Spot
dataset. Since all the time series are over the same time period, we
hope to find correlations between the price trends of different
currencies. We use the time series of ``French Franc'' as the reference
time series, and compute correlations between it and 5 other
currencies (including ``Australian Dollar'', ``Belgian Franc'',
``Canadian Dollar'', and so on). For each time series, we train a
pHMM. Then, for each state of ``French Franc'', we compute its
pattern-based correlation ($GC$) with every state in the other 5 time
series. Table~\ref{tab:gc} shows both the minimal $GC$ and the average $GC$.
For example, in the first row, the minimal $GC$ is the minimal $GC$
computed over all state pairs in which one state is from ``French Franc''
and the other is from ``Australian Dollar''. The average $GC$ is the
average of all $GC$s between all state pairs from the two currencies.
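The aggregation reported in the table can be sketched as follows. Note that the $GC$ measure itself is defined earlier in the paper (Section on applications); this sketch only aggregates an already-computed pairwise matrix.

```python
import numpy as np

def summarize_gc(gc_matrix):
    """Given the matrix of pairwise GC values, with rows for the states
    of the reference currency and columns for the states of a target
    currency, return (minimal GC, average GC) as reported in the table."""
    gc = np.asarray(gc_matrix, dtype=float)
    return float(gc.min()), float(gc.mean())
```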
\begin{table}[!htb]
\centering
\begin{tabular}{|c|c|c|}
  \hline                     & Minimal $GC$ & Average $GC$\\
  \hline  Australian Dollar  &    88.33      &   130.45 \\
  \hline  Belgian Franc      & \textbf{41.25}       &   \textbf{50.62} \\
  \hline  Canadian Dollar    &    120.41     &   160.37 \\
  \hline  German Mark        & 60.57         &   120.46 \\
  \hline  Japanese Yen       & 130.59         &   170.96 \\
  \hline
\end{tabular}
\caption{Pattern-based Correlation} \label{tab:gc}
\end{table}

It can be seen that the 5 target currencies demonstrate very different correlations.
The fluctuation of ``Belgian Franc'' is most similar to that of ``French Franc'', so both
its minimal $GC$ and its average $GC$ are the smallest among all the currencies. ``German Mark''
also has a state similar to one of ``French Franc'', although its average $GC$ is still high.
For the other 3 currencies, their states are more different from those of ``French Franc''.
This experiment shows that with pattern-based correlation, we can compare time series at a higher
level, which is the advantage of our pattern-based model.





%%% Local Variables:
%%% mode: latex
%%% TeX-master: "kdd09"
%%% End:
