compare the overall information entropy of
states in the final model learned by the dis\&com and fore\&com
approaches, respectively. The experiments are conducted on both the
ECG and Power datasets. The overall entropy (Eq.~\ref{eq:entro}) of
the two clustering algorithms is compared under different relative
error thresholds $\varepsilon_r$, and the results are shown in
Figs.~\ref{fig:ent-ecg} and~\ref{fig:ent-power}. As the relative
error threshold increases, the overall entropy of both approaches
decreases. In most cases the forecasting-oriented approach has lower
entropy, which means that the clusters it learns are more certain
about the next states, verifying the effectiveness of our proposed
clustering algorithm. On the ECG dataset, when the relative error
threshold equals 0.6, the two clustering approaches obtain the same
set of clusters and therefore the same entropy. The reason is that
the segmented lines in the ECG dataset are well regulated, and the
slopes and lengths of the lines in each cluster differ clearly from
those in other clusters; at this point, both clustering approaches
obtain the optimal clusters. When $\varepsilon_r$ increases to 0.8,
the entropy increases, since the newly obtained cluster is less
certain about the next state than the clusters being merged. On the
Power dataset, the segmented lines are less regulated, and
forecasting-oriented clustering always has lower entropy. This
indicates that on less regulated datasets, the forecasting-oriented
clustering approach performs better.
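For concreteness, the overall entropy of Eq.~\ref{eq:entro} can be sketched as the occurrence-weighted Shannon entropy of each state's next-state distribution. This is an assumption (the equation itself is defined elsewhere), intended only to illustrate why a model whose states determine their successors scores lower:

```python
from collections import Counter
import math

def overall_entropy(state_sequence):
    """Hedged sketch of the overall entropy (Eq. entro): the Shannon
    entropy of each state's next-state distribution, weighted by how
    often that state occurs in the sequence."""
    # Count transitions s -> s' observed in the sequence.
    transitions = Counter(zip(state_sequence, state_sequence[1:]))
    occurrences = Counter(state_sequence[:-1])
    total = sum(occurrences.values())

    entropy = 0.0
    for s, n_s in occurrences.items():
        # Conditional distribution P(next state | current state = s).
        probs = [n / n_s for (a, _), n in transitions.items() if a == s]
        h_s = -sum(p * math.log2(p) for p in probs)
        entropy += (n_s / total) * h_s
    return entropy

# A fully regular sequence, where each state determines its successor,
# has zero entropy: its clusters are perfectly certain about the next state.
print(overall_entropy(list("ABCABCABC")))  # → 0.0
```

Under this reading, lower entropy directly corresponds to clusters that are more certain about the next states, matching the comparison above.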

\paragraph*{Accuracy of Detecting State Sequence}
Detecting the current state sequence correctly is crucial for
prediction accuracy: an ideal approach ought to detect the state
sequence both accurately and promptly. In this set of experiments we
therefore compare the state-sequence detection accuracy of the
fore\&com and fore\&sim approaches, which use composite states and
simple states, respectively, in the transition probabilities. To
test whether the approaches detect the state sequence correctly and
promptly, we first run the training procedure on the testing dataset
and take the resulting line segmentation $S(Y)$ as the optimal
segmentation. We then train the model on the training dataset and
use the learned model to detect the state sequence on the testing
dataset.
We compare the distribution of states detected correctly by these
two approaches. Specifically, the X-axis shows
\emph{correct-detecting ratio intervals}. The correct-detecting
ratio of a state is the fraction of time points within that state's
interval at which the detecting algorithm identifies the current
state correctly. For example, if the correct-detecting ratio of a
state lies in $[0.8,1]$, the approach detects this state correctly
at 80\%--100\% of its time points; in other words, the state is
detected correctly most of the time. Conversely, if the
correct-detecting ratio of a state lies in $[0,0.3]$, the state is
usually detected wrongly. Clearly, if most states of an approach
have correct-detecting ratios in $[0.8,1]$, the approach almost
always detects the current state correctly, and the subsequent
prediction will be accurate.
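The binning of states by correct-detecting ratio can be sketched as follows. The run-based definition of a state's interval and the bin edges $(0, 0.3, 0.8, 1)$ are assumptions inferred from the examples above:

```python
def ratio_histogram(true_seq, detected_seq, edges=(0.0, 0.3, 0.8, 1.0)):
    """Hedged sketch: percentage of state intervals falling into each
    correct-detecting-ratio bin; bin edges are assumptions."""
    # Split the true sequence into maximal runs, one per state interval.
    runs, start = [], 0
    for i in range(1, len(true_seq) + 1):
        if i == len(true_seq) or true_seq[i] != true_seq[start]:
            runs.append((start, i))
            start = i
    # Correct-detecting ratio of each interval: fraction of its time
    # points at which the detected state matches the true state.
    ratios = [sum(d == t for d, t in zip(detected_seq[a:b], true_seq[a:b]))
              / (b - a) for a, b in runs]
    # Count intervals per bin; only the last bin includes its upper edge.
    counts = [0] * (len(edges) - 1)
    for r in ratios:
        for k in range(len(edges) - 1):
            if edges[k] <= r < edges[k + 1] or \
               (k == len(edges) - 2 and r == edges[-1]):
                counts[k] += 1
                break
    return [100.0 * c / len(ratios) for c in counts]

# Two intervals: "AAA" detected perfectly (ratio 1.0), "BBB" detected
# correctly at 2 of 3 time points (ratio ~0.67).
print(ratio_histogram(list("AAABBB"), list("AAABBA")))  # → [0.0, 50.0, 50.0]
```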

For each interval on the X-axis, the Y-axis shows the percentage of
states whose correct-detecting ratio falls within that interval. In
both approaches, the relative error threshold $\varepsilon_r$ is set
to 0.6 and the benefit threshold $\varepsilon_b$ is set to 0.3. The
experiments are conducted on both datasets, and the results are
shown in Fig.~\ref{fig:ex-state-det}. On both datasets, fore\&com
outperforms fore\&sim, which demonstrates that composite states
improve the accuracy of detecting the current state sequence.



\paragraph*{Effect of $\varepsilon _b$ on Prediction Accuracy}
When mining composite states, the benefit threshold $\varepsilon_b$
affects the number of composite states mined, and hence the
prediction accuracy. In this experiment, we compare the prediction
accuracy of the fore\&com approach under different values of
$\varepsilon_b$. The experiment is conducted on the Power dataset,
and step 0, step 1, and step 2 predictions are compared
respectively. For each time point $t$, the relative error
$\frac{\hat{y}_t-y_t}{y_t}$ is computed to measure prediction
accuracy, where $y_t$ is the data value in the time series and
$\hat{y}_t$ is its predicted value. The relative error threshold
$\varepsilon_r$ is set to 0.6.
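As a minimal sketch, the per-point relative errors can be aggregated into a single accuracy score. Taking the absolute value before averaging is an assumption here, made so that over- and under-predictions do not cancel out:

```python
def mean_relative_error(actual, predicted):
    """Hedged sketch: mean of |y_hat_t - y_t| / y_t over all time points.
    Using the absolute value is an assumption; the per-point relative
    error itself is (y_hat_t - y_t) / y_t."""
    return sum(abs(p - a) / a for a, p in zip(actual, predicted)) / len(actual)

# One over-prediction and one under-prediction, each off by 10%.
print(mean_relative_error([10.0, 20.0], [11.0, 18.0]))  # → 0.1
```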

The result is shown in Fig.~\ref{fig:benemin}. The accuracy of step
0 prediction is higher than that of step 1 and step 2 prediction,
i.e., accuracy decreases as the predicted time points move farther
from the current time point. When $\varepsilon_b=0.3$, all
predictions have the lowest relative error. The reason is that when
$\varepsilon_b$ is smaller than 0.3, too many composite states are
mined, which causes overfitting; when $\varepsilon_b$ is larger than
0.3, too few composite states are mined, so some useful composite
states are missed.


\paragraph*{Accuracy of Prediction}
In this set of experiments, we compare the prediction accuracy of
the three approaches under different relative error thresholds
$\varepsilon_r$. The experiments are conducted on the Power dataset,
and step 0, step 1, and step 2 predictions are compared
respectively. In all experiments, $\varepsilon_b$ is set to 0.3, and
relative error is again used to measure prediction accuracy. The
results are shown in Fig.~\ref{fig:error}. In all cases, the
fore\&com approach outperforms the other two approaches, which
verifies the effectiveness of our forecasting-oriented clustering
and composite-state approach. The prediction is most accurate when
$\varepsilon_r=0.6$, which is not consistent with the effect of
$\varepsilon_r$ on the overall entropy in Fig.~\ref{fig:ent-power}:
in the entropy comparison, the overall entropy keeps decreasing as
the relative error threshold grows beyond 0.6, meaning the clusters
become more certain about the next states. However, in that case the
slopes and lengths of the lines within each cluster vary more, so
even when the next state is predicted correctly, their average slope
and length cannot approximate the data values accurately. Another
observation is that in the fore\&com approach, step 1 and step 2
predictions have accuracy similar to step 0 prediction, while in the
other two approaches the accuracy decreases dramatically for step 2
prediction. This demonstrates that the fore\&com approach maintains
high accuracy when predicting data values far from the current time
point.


\comment{
\begin{figure*}[htbp]
\centering
\begin{tabular}[h]{ccc}
\includegraphics[height=4cm]{figure/error-0step.eps} &
\includegraphics[height=4cm]{figure/error-1step.eps} &
\includegraphics[height=4cm]{figure/error-2step.eps} \\
(a) step 0  & (b)  step 1 & (c) step 2
\end{tabular}
\caption{Prediction accuracy on Power dataset\label{fig:error}}
\end{figure*}
}