\section{Sequential Data Forecasting}\label{sec:predict}


Assume $y_1, y_2, \cdots, y_t$ is the sequence data we have up to time
$t$. Our goal is to predict the future values $y_{t+1}, y_{t+2},
\cdots$ with specified confidence.
Our approach consists of two steps:
\begin{enumerate}
  \item detect the most likely state sequence based on the data seen
  so far;
  \item after estimating the current state, predict the future
  values according to the transition and output probabilities.
\end{enumerate}

\subsection{Detecting state sequence}
In this section,we discuss how to detect the simple state sequence
corresponding the seen data sequence. Later, we will use state
sequence to indicate simple state sequence, when without confusion.

Specifically, the goal is: for the data sequence
$Y_t=\{y_1,y_2,\cdots,y_t\}$, where $t$ is the current time point, we
segment it into $k+1$ disjoint, connected lines:
\[(L_{1,i_1},L_{i_1+1,i_2},\cdots,L_{i_k+1,t})\]
where $i_j$, $1\leq j\leq k$, are the $k$ splitting points, and
$L_{i_k+1,i_{k+1}}$ is the best-fit line for the subsequence
$(y_{i_k+1},\cdots,y_{i_{k+1}})$, in which $i_k+1$ and
$i_{k+1}$ are the start and end points respectively, so that the
state sequence
\[(s_{1,i_1},s_{i_1+1,i_2},\cdots,s_{i_k+1,t})\]
emitting this line sequence has the highest probability. For the
sake of simplicity, we hereafter write $L_{i_{k+1}}$ for
$L_{i_k+1,i_{k+1}}$ and $s_{i_{k+1}}$ for $s_{i_k+1,i_{k+1}}$.


A naive way is to use the sliding-window algorithm~\cite{keogh01} to
segment $Y_t$ and then map each line to the best simple state. The
sliding-window algorithm is an online algorithm. It works as
follows: when each new data point $y_t$ arrives, the algorithm tries
to approximate the data sequence from the start point (initially
$y_1$) to $y_t$ with one line. If the squared error exceeds the
error threshold, the sequence is split before $y_t$:
$(y_1,\cdots, y_{t-1})$ is approximated by a line, and the start
point moves to $y_t$. This algorithm has two problems: 1) its
accuracy is worse than that of other segmentation algorithms, such
as the bottom-up algorithm~\cite{keogh01}; 2) when segmenting $Y_t$
into lines, it considers neither the output probability nor the
transition probability, so the obtained state sequence may not be
optimal. Another way is to segment $Y_t$ with the bottom-up
algorithm whenever a new data point $y_t$ arrives, and then map each
line to the best simple state. This approach also has two problems:
1) it incurs much redundant computation and is time-consuming; 2)
when segmenting, it likewise ignores the output and transition
probabilities.
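As a point of reference, the sliding-window baseline discussed above can be sketched as follows. This is an illustrative sketch, not our proposed algorithm: the least-squares fit and the squared-error threshold are the usual choices, and all function and variable names are ours.

```python
import numpy as np

def fit_line(y, t0):
    """Least-squares line over points y at time indices t0..t0+len(y)-1.
    Returns (slope, intercept, sum of squared errors)."""
    t = np.arange(t0, t0 + len(y), dtype=float)
    if len(y) < 2:
        return 0.0, float(y[0]), 0.0
    slope, intercept = np.polyfit(t, np.asarray(y, dtype=float), 1)
    resid = slope * t + intercept - np.asarray(y, dtype=float)
    return float(slope), float(intercept), float(np.sum(resid ** 2))

def sliding_window_segment(y, max_error):
    """Online sliding-window segmentation: grow the current segment until
    the one-line approximation error exceeds max_error, then split before
    the newest point and restart from it."""
    segments, start = [], 0
    for t in range(1, len(y) + 1):
        _, _, sse = fit_line(y[start:t], start)
        if sse > max_error and t - 1 > start:
            # split before y[t-1]: one line covers y[start..t-2]
            slope, icpt, _ = fit_line(y[start:t - 1], start)
            segments.append((start, t - 2, slope, icpt))
            start = t - 1
    slope, icpt, _ = fit_line(y[start:], start)
    segments.append((start, len(y) - 1, slope, icpt))
    return segments
```

On a rising-then-falling sequence this yields two segments with slopes close to $+1$ and $-1$, which illustrates the first weakness mentioned above: the split point depends only on the error threshold, not on any state probabilities.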

We use an example to illustrate the second problem. Assume there are
four simple states $1,2,3,4$, and the transition probabilities
include $a(1,3)=0.9$ and $a(3,4)=0.9$. When $y_{150}$ arrives, since
approximating with one line $L_{1,150}$ would exceed the maximal
error threshold, we segment $Y_{150}$ into the two lines $L_{1,100}$
and $L_{101,150}$, which minimizes the approximation error.
$L_{1,100}$ belongs to state $1$ with high probability, and
$L_{101,150}$ belongs to state $2$ with high probability. But the
probability of a transition from state 1 to state 2 is 0, so we have
to map the lines to other states, which yields a state sequence with
very low probability. However, if we split $Y_{150}$ into the three
lines $L_{1,80}$, $L_{81,120}$, and $L_{121,150}$, and map the first
line to state 1, the second to state 3, and the third to state 4,
the state sequence $(1,3,4)$ has high probability according to the
transition probabilities. Clearly, the latter segmentation strategy
is the better choice.


Now we introduce our approach. To facilitate this process, we add a
start state $0$ whose prior probability is $1$; the transition
probability from it to each simple state $i$, $a(0,i)$, is set to
the prior probability of state $i$. We first define the probability
of a state sequence, and then propose an algorithm to obtain the
most likely state sequence.

\begin{figure}
  \centering
  \includegraphics[width=6cm,height=4cm]{figure/state_detect.eps}\\
  \caption{Current state detection}\label{fig:statedetect}
\end{figure}


When a new data point $y_t$ arrives, we find the state sequence
\[\mathbf{s}=\{s_{i_1},s_{i_2},\cdots,s_{i_k},s_{t}\}\]
that maximizes the probability $P_{\mathbf{s}}$:
\begin{equation}
P_{\mathbf{s}}=a(s_{i_k}^c,s_{t}^c)p_s(L_t|s_t)\prod_{j=1}^{k}b(L_{i_j},s_{i_j})a(s_{i_{j-1}}^c,s_{i_{j}}^c)
 \label{equ:pit}
\end{equation}

\paragraph*{Explanation of $P_{\mathbf{s}}$}
It is similar to the sequence probability in an HMM, with some
differences:
\begin{itemize}
  \item In the transition probabilities, we use composite states
  instead of simple states, since the former are more informative
  about the next state. In Equ.~\ref{equ:pit}, $s_{i_{j-1}}^c$ is the longest suffix state of $\{s_{i_1}\cdots
s_{i_{j-1}}\}$ and $s_{i_{j}}^c$ is that of $\{s_{i_1}\cdots
s_{i_{j}}\}$. According to the definition of subsequent state,
$s_{i_{j}}^c$ is the longest subsequent state of $s_{i_{j-1}}^c$.
  \item For the output probabilities of the first $k$ simple states,
  we use the defined $b(L_{i_j},s_{i_j})$, since those lines have
  already ended and the corresponding states are \emph{complete};
  for the last simple state $s_t$, we only consider the slope and
  use $p_s(L_t|s_t)$ instead of $b(L_t,s_t)$, since the line has not
  ended and the state is an on-going state. This reduces the delay
  in detecting the incoming state.
\end{itemize}
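For concreteness, $P_{\mathbf{s}}$ in Equ.~\ref{equ:pit} can be evaluated in log space as sketched below. This is a minimal sketch under simplifying assumptions: composite states are abbreviated to the previous simple state, and \texttt{trans}, \texttt{out\_prob}, and \texttt{slope\_prob} stand in for $a(\cdot,\cdot)$, $b(\cdot,\cdot)$, and $p_s(\cdot|\cdot)$, which the trained model must supply.

```python
import math

def state_sequence_logprob(segments, trans, out_prob, slope_prob):
    """log P_s for a segmentation: every complete segment contributes a
    transition term and the full output probability b(L, s); the last,
    on-going segment contributes only the slope likelihood p_s(L | s)."""
    logp = 0.0
    prev = 0                                  # artificial start state 0
    for line, s in segments[:-1]:             # complete segments
        logp += math.log(trans[(prev, s)]) + math.log(out_prob(line, s))
        prev = s
    line, s = segments[-1]                    # on-going segment
    logp += math.log(trans[(prev, s)]) + math.log(slope_prob(line, s))
    return logp
```

Working in log space avoids underflow when the sequence is long; exponentiating the result recovers $P_{\mathbf{s}}$.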


Fig.~\ref{fig:statedetect} shows an example. It can be seen that
$\{y_1\cdots y_{i_1}\}$ corresponds to state $s_{i_1}$ and
$\{y_{i_1+1}\cdots y_{i_2}\}$ corresponds to state $s_{i_2}$;
$\{y_{i_2+1}\cdots y_t\}$ corresponds to state $s_t$, whose line
$L_t$ has not ended.

We use a Viterbi-like algorithm to obtain the most likely state
sequence $\mathbf{s}$. Since computing $P_{\mathbf{s}}$ requires
considering the previous states, and the location of the splitting
points affects the approximating line, we maintain a state list $SL$
to record the previously occurred simple states. The states in $SL$
must satisfy three conditions:
\begin{itemize}
  \item the squared error of the corresponding best-fit line is
  lower than the maximal error threshold;
  \item the output probability of the corresponding best-fit line is
  greater than a user-specified threshold;
  \item the simple state before it is already in $SL$.
\end{itemize}

Each record in $SL$ is a 5-attribute vector:
\[r_i=(ts_i,te_i,s_i,prevs_{i},p_{i})\]
in which $ts_i$ is the start time point, $te_i$ is the end time
point, $s_i$ is the corresponding state, $prevs_{i}$ is the previous
state sequence that, concatenated with $s_i$, gives the most likely
state sequence up to $te_i$, and $p_i$ is the corresponding
probability. The formula for $p_i$ is similar to
Equ.~(\ref{equ:pit}), except that the output probability of $s_i$ is
$b(L_i,s_i)$, since $SL$ maintains states that have already
occurred, and we assume the states in $SL$ are \emph{complete}. With
$p_i$, we can easily compute $P_{\mathbf{s}}$ simply by combining a
record $r_i$ in $SL$ with the current on-going state whose start
point is $te_i+1$.
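The record structure of $SL$ can be written down directly; a sketch in Python, with illustrative field names of our choosing:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class SLRecord:
    """One record r_i = (ts_i, te_i, s_i, prevs_i, p_i) of the state
    list SL; field names are illustrative."""
    ts: int                 # start time point of the segment
    te: int                 # end time point of the segment
    s: int                  # simple state emitted by the segment
    prevs: Tuple[int, ...]  # best previous state sequence up to ts - 1
    p: float                # probability of prevs followed by s, up to te

# SL initially holds only the record of the artificial start state 0
SL = [SLRecord(ts=0, te=0, s=0, prevs=(), p=1.0)]
```

Here the empty tuple plays the role of the $null$ predecessor of the start state.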




Now we introduce the algorithm. At the beginning, $SL$ has only one
record, corresponding to the initial state $0$: $(0,0,0,null,1)$.
When a new data point $y_t$ arrives, we perform three steps.

In the first step, we find the current most likely state sequence.
For each record $r_i$ in $SL$, we compute the probability of the
state sequence $\{prevs_{i},s_i,s_t\}$, where $s_t$ is the on-going
state starting after $s_i$. Specifically, we first compute the
best-fit line $L_t$ for $(y_{te_i+1},y_{te_i+2},\cdots, y_t)$; if
the approximation error of $L_t$ is lower than the maximal error
threshold, we find the simple state $s_t$ for $L_t$ with the highest
output probability. Then we compute $P_{\mathbf{s}}$ where
$\mathbf{s}=\{prevs_{i},s_i,s_t\}$. After obtaining $P_{\mathbf{s}}$
for all records in $SL$, we choose the state sequence with the
highest $P_{\mathbf{s}}$ as the current most likely state sequence.

In the second step, we check whether the current state is a complete
state. Since the first and third conditions are already satisfied,
we only check whether the output probability of $s_t$ satisfies the
second condition on states in $SL$. If so, we add a new record
corresponding to $s_t$ to $SL$.

In the third step, we delete the useless records from $SL$. For each
record $r_i$ in $SL$, if approximating the data sequence from its
end point to the current point with one line exceeds the maximal
error threshold, we delete $r_i$ from $SL$. In practice, this step
is executed together with the first step.
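Putting the three steps together, one update of the detection loop might look like the sketch below. All model inputs (the transition table, the output- and slope-probability functions, and the state chooser) are assumptions standing in for the trained model; records are plain dicts, the start record ends at time $-1$ so that data can be 0-indexed, and composite states are simplified to the last simple state.

```python
import numpy as np

def fit_line(y, a, b):
    """Least-squares line over y[a..b]; returns (slope, intercept, SSE)."""
    t = np.arange(a, b + 1, dtype=float)
    seg = np.asarray(y[a:b + 1], dtype=float)
    if len(seg) < 2:
        return 0.0, float(seg[0]), 0.0
    slope, icpt = np.polyfit(t, seg, 1)
    return float(slope), float(icpt), float(np.sum((slope * t + icpt - seg) ** 2))

def update_on_arrival(SL, y, t, model, max_error, min_out_prob):
    """Process the three steps when y[t] arrives; returns the current most
    likely state sequence.  `model` supplies best_state, slope_prob,
    out_prob and the transition dict `trans` (all assumed)."""
    best, survivors = None, []
    for r in SL:
        slope, icpt, err = fit_line(y, r["te"] + 1, t)
        if err > max_error:                   # step 3: prune dead records
            continue
        survivors.append(r)
        s_t = model["best_state"](slope)      # step 1: candidate sequence
        p = (r["p"] * model["trans"].get((r["s"], s_t), 1e-12)
             * model["slope_prob"](slope, s_t))
        if best is None or p > best["p"]:
            best = {"p": p, "seq": r["prevs"] + (r["s"], s_t),
                    "rec": r, "slope": slope, "s_t": s_t}
    if best is not None:                      # step 2: is s_t complete?
        r, s_t = best["rec"], best["s_t"]
        bout = model["out_prob"](best["slope"], t - r["te"], s_t)
        if bout > min_out_prob:               # complete: use b(L, s), not p_s
            survivors.append({"ts": r["te"] + 1, "te": t, "s": s_t,
                              "prevs": r["prevs"] + (r["s"],),
                              "p": r["p"]
                                   * model["trans"].get((r["s"], s_t), 1e-12)
                                   * bout})
    SL[:] = survivors
    return best["seq"] if best else None
```

Merging the pruning of step 3 into the loop of step 1 mirrors the remark above that the two steps execute together.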







\subsection{Prediction}


In this section, we propose our prediction approach, called
multi-step prediction. That is, after obtaining the most likely
state sequence $\mathbf{s}=\{s_{i_1}s_{i_2}\cdots s_{i_k}s_t\}$, we
can make predictions over different time intervals. In the $i$-step,
based on the current state sequence, we estimate the $i$-th next
state with the highest probability, and then use its average slope
and length to predict the values at the corresponding time points.
The larger the step $i$, the further into the future the predicted
values reach.



The prediction has the form:
\[P_k=\{ts_k, len_k, \theta_k, d_k,prob_k\}, k=0,1,2,\cdots\]
where $k$ is the step, $ts_k$ is the start point of the estimated
line, $len_k$ is its length, $\theta_k$ is its slope, $d_k$ is its
intercept, and $prob_k$ is the probability that the estimated line
occurs.

In 0-step prediction, we estimate the values of the remainder of the
current state. Recall that we assume the latest simple state is an
on-going state. Assume the current state is $s_t=i$, that is, the
line $L_t$ approximating $y_{i_k+1},y_{i_k+2},\cdots, y_t$ belongs
to state $i$. If the length of $L_t$, denoted by $l_t$, is smaller
than $\bar{l}_i$, the average length of the lines in $C_i$, then
with high probability state $i$ will persist for $\bar{l}_i-l_t$
more time points. The 0-step prediction is then:
\[P_0=\{t+1,\bar{l}_i-l_t,\theta_t,d_t,1\}, \mbox{if } l_t<\bar{l}_i \]
where $\theta_t$ and $d_t$ are the slope and intercept of line
$L_t$. Based on this, we can predict the values at times
$(t+1,t+2,\cdots,t+\bar{l}_i-l_t)$ with the following formula:
\[\hat{y}_k=\theta_t(l_t+k)+d_t,\quad k=1,2,\cdots ,\bar{l}_i-l_t \]
where $\hat{y}_k$ is the estimated value at time point $t+k$. If
$l_t$ is greater than $\bar{l}_i$, we assume the state will change
at the next time point, so in this case we set $len_0=0$.
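The 0-step prediction above is a direct computation; a sketch, with variable names of our choosing:

```python
def zero_step_prediction(t, l_t, lbar_i, theta_t, d_t):
    """P_0: if the on-going line is shorter than the state's average
    length lbar_i, the state is expected to persist lbar_i - l_t more
    points; otherwise len_0 = 0 and the state changes at t + 1."""
    length = lbar_i - l_t if l_t < lbar_i else 0
    return {"ts": t + 1, "len": length, "theta": theta_t,
            "d": d_t, "prob": 1.0}

def zero_step_values(pred, l_t):
    """y_hat_k = theta_t * (l_t + k) + d_t for k = 1 .. len_0."""
    return [pred["theta"] * (l_t + k) + pred["d"]
            for k in range(1, pred["len"] + 1)]
```

The predicted values simply extend the current line $L_t$ for the expected remaining duration of the state.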



In 1-step prediction, we estimate the most likely next simple state
and predict the corresponding values based on it. According to
$\mathbf{s}$ and the transition probability matrix $A$, we can
obtain the most likely next simple state, $s_{i_{k+1}}$. Assume
$s_{i_{k+1}}=j$, with corresponding central line
$(\bar{l}_{j},\bar{\theta}_{j})$. The 1-step prediction is:
\[P_1=\{t+len_0+1,\bar{l}_j,\bar{\theta}_{j},d_{j},a(s_t^c,s_{i_{k+1}}^c)\}\]
where $\bar{\theta}_{j}$ and $\bar{l}_j$ are the slope and length,
and $d_j$ is the estimated intercept for the next simple state. The
probability $prob_1$ is the transition probability from $s_t^c$ to
$s_{i_{k+1}}^c$, where $s_t^c$ is the longest suffix of
$s_{i_1}s_{i_2}\cdots s_{i_k}s_t$ and $s_{i_{k+1}}^c$ is the longest
suffix of $s_{i_1}s_{i_2}\cdots s_ts_{i_{k+1}}$.

Consequently, we can estimate the corresponding values as:
\[\hat{y}_k=\bar{\theta}_{j}k+d_j,\quad k=1,\cdots,\bar{l}_j \]

Since for each state we only record the slope and length of the
central line and ignore the intercept, we have to estimate the
intercept $d_j$. Our approach assumes that, for any simple state
pair $(i,j)$, if two neighboring lines $L_k$ and $L_{k+1}$
correspond to states $i$ and $j$ respectively, then the difference
between the intercepts of $L_k$ and $L_{k+1}$ is similar across
occurrences of the neighboring states $i$ and $j$. We denote this
difference by $D_{ij}$. In the training phase, we can estimate
$D_{ij}$ from $S(Y)$. Based on $D_{ij}$ and $d_t$, the intercept of
the current simple state, we can compute $d_j$.
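The intercept estimate and the resulting 1-step values can be sketched as follows; here \texttt{D} holds the trained average intercept differences $D_{ij}$ (an assumed input), and $d_j = d_t + D_{ij}$ is one plausible reading of the computation described above.

```python
def estimate_intercept(d_t, D, i, j):
    """d_j = d_t + D_ij: shift the current intercept by the average
    intercept difference observed between neighboring states i and j."""
    return d_t + D[(i, j)]

def one_step_values(theta_j, d_j, lbar_j):
    """y_hat_k = theta_j * k + d_j for k = 1 .. lbar_j, the values
    predicted under the next state j."""
    return [theta_j * k + d_j for k in range(1, int(lbar_j) + 1)]
```

Any other estimator of $d_j$ that combines $d_t$ with the trained $D_{ij}$ would slot in the same way.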


In $(l+1)$-step prediction ($l\geq 1$), the approach is similar to
that of 1-step prediction. Based on $s_{i_{k+l}}^c$, we can predict
the state after $s_{i_{k+l}}$, denoted $s_{i_{k+l+1}}$. Assume
$s_{i_{k+l+1}}=j$; the $(l+1)$-step prediction is:
\[P_{l+1}=\{ts_l+len_l+1,\bar{l}_j,\bar{\theta}_j,d_j,prob_l\cdot a(s_{i_{k+l}}^c,s_{i_{k+l+1}}^c)\} \]
Based on $P_{l+1}$, we can predict the corresponding values in the
same way as in 1-step prediction.
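Chaining the steps gives multi-step prediction; below is a sketch under the same simplifications as before (composite states reduced to the last simple state, \texttt{state\_params[j] = (lbar\_j, thetabar\_j)} assumed from training, intercepts shifted by the learned differences \texttt{D}).

```python
def multi_step_predictions(s0, trans, state_params, D, d0, t, len0, steps):
    """Greedy (l+1)-step predictions: repeatedly pick the most likely next
    state, multiply its transition probability into prob_l, and advance
    the start point by ts_{l+1} = ts_l + len_l + 1."""
    preds, prob, ts, d, cur = [], 1.0, t + len0 + 1, d0, s0
    for _ in range(steps):
        # most likely successor of the current state under `trans`
        nxt = max((j for (i, j) in trans if i == cur),
                  key=lambda j: trans[(cur, j)])
        prob *= trans[(cur, nxt)]
        lbar, theta = state_params[nxt]
        d += D.get((cur, nxt), 0.0)
        preds.append({"ts": ts, "len": lbar, "theta": theta,
                      "d": d, "prob": prob})
        ts += lbar + 1
        cur = nxt
    return preds
```

Each step multiplies another transition probability into $prob_l$, so the confidence of the prediction decays with the step, matching the formula for $P_{l+1}$ above.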


%%% Local Variables:
%%% mode: latex
%%% TeX-master: "kdd09"
%%% End:
