\section{Related work}
\label{sec:related}
\paragraph*{Time series forecasting and representation}
Time series forecasting has been studied extensively for years
because of its applications in many
domains~\cite{timeseries94,chatfield04series}. In particular,
statistics and signal processing provide valuable tools for
forecasting and processing time series. The traditional method is
the Auto-Regressive Integrated Moving Average (ARIMA)
model~\cite{timeseries94}. Other well-known machine learning
approaches include Bayesian Networks, Classification and Regression
Trees (CART), Support Vector Machines (SVM), and Random Forests
(RF)~\cite{datamining05}. All these methods try to capture the
relationship between the predicted value $y_t$ and the observed
values $y_{t-1},y_{t-2},\cdots,y_{t-n}$. Our approach differs:
instead of modeling the relationship between specific values, we
represent the time series as segment lines and then model the
relationship between those lines. We seek a model at a coarser
granularity, one that predicts the future trend rather than specific
values. The problem we address is how to learn the latent states
(represented by segment lines) so that they predict the next states
more accurately.
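For concreteness, the autoregressive relationship between $y_t$ and its lagged values can be sketched with ordinary least squares. This is a much-simplified stand-in for ARIMA (no differencing or moving-average terms), and the function names are ours:

```python
import numpy as np

def fit_ar(y, n):
    """Fit y_t ~ a_1*y_{t-1} + ... + a_n*y_{t-n} + c by least squares."""
    # Row for time t: [y_{t-1}, ..., y_{t-n}, 1] (the 1 absorbs the intercept c).
    X = np.array([np.append(y[t - n:t][::-1], 1.0) for t in range(n, len(y))])
    coef, *_ = np.linalg.lstsq(X, y[n:], rcond=None)
    return coef  # a_1, ..., a_n, c

def predict_next(y, coef):
    """One-step-ahead point forecast from the last n observations."""
    n = len(coef) - 1
    return float(np.dot(np.append(y[-n:][::-1], 1.0), coef))

# Recover the coefficients of a noisy synthetic AR(2) process.
rng = np.random.default_rng(0)
y = np.zeros(300)
for t in range(2, 300):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + 0.1 + rng.normal(0.0, 0.1)
coef = fit_ar(y, 2)
```

Note that such a model predicts one concrete value per step, whereas the approach in this paper predicts the next trend segment.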

A number of techniques have been proposed in the literature for
representing time series with reduced dimensionality, such as the
Discrete Fourier Transform (DFT)~\cite{dft94}, the Discrete Wavelet
Transform (DWT)~\cite{dwt99}, Symbolic Aggregate approximation
(SAX)~\cite{sax07}, and Piecewise Linear Representation
(PLR)~\cite{plr98}. These techniques are commonly used for indexing,
classification, clustering, and approximation of time series.
In~\cite{keogh08}, the authors give an extensive performance
comparison of the popular time series representation approaches. In
this work, we choose PLR as our approximation of time series, since
it is well suited to modeling trends and makes it easy to recover
the original values, both of which are important characteristics for
time series forecasting.
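As a minimal illustration of PLR, the following sketch uses a greedy sliding-window strategy with segments that join their endpoints; the function names and the error criterion are illustrative, and the literature also describes top-down and bottom-up variants:

```python
def segment_error(y, i, j):
    """Max vertical deviation of y[i..j] from the line joining its endpoints."""
    slope = (y[j] - y[i]) / (j - i)
    return max(abs(y[k] - (y[i] + slope * (k - i))) for k in range(i, j + 1))

def plr_sliding_window(y, max_err):
    """Greedily grow each segment until the fitting error exceeds max_err."""
    segments, start, n = [], 0, len(y)
    while start < n - 1:
        end = start + 1
        while end + 1 < n and segment_error(y, start, end + 1) <= max_err:
            end += 1
        segments.append((start, end))  # segment covers points y[start..end]
        start = end
    return segments

# A triangle-shaped series splits into a rising and a falling segment.
series = [0, 1, 2, 3, 4, 5, 4, 3, 2, 1, 0]
segments = plr_sliding_window(series, 0.01)
```

Each segment is characterized by its endpoints, so the original values can be approximately recovered by linear interpolation.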

\paragraph*{Markov models and their extensions}
The Markov model and the hidden Markov model~\cite{hmm89} are
classic models in pattern recognition and have been applied in many
fields, such as speech recognition and handwriting recognition. The
simplest form is the Markov chain (MC). A Markov chain is a sequence
of random variables $X_1,X_2,\cdots$ with the Markov property: given
the present state, future states are independent of the past states.
In other words, the present state fully captures all the information
that could influence the future evolution of the process. The hidden
Markov model (HMM) assumes that the states are unobservable and that
observation symbols are emitted by the states according to an output
probability distribution.
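The first-order Markov property means the model is fully specified by pairwise transition probabilities, which can be estimated directly from an observed state sequence; a minimal sketch:

```python
from collections import Counter, defaultdict

def transition_probs(states):
    """Estimate first-order transition probabilities P(next | current)
    from an observed state sequence."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(states, states[1:]):
        counts[cur][nxt] += 1
    return {s: {t: c / sum(nxts.values()) for t, c in nxts.items()}
            for s, nxts in counts.items()}

# In "ABABBA", A is always followed by B, while B is followed by A twice
# and by B once.
P = transition_probs(list("ABABBA"))
```

In an HMM the state sequence itself would be hidden, and each state would additionally emit an observation according to its output probability.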

A well-known problem of the HMM is that the first-order assumption
prevents it from accurately modeling time series with highly varied
dynamics, since the future state often depends not only on the
present state but also on past states. To increase modeling
accuracy, the fixed-length $n$-th-order HMM (also called the n-gram
model) was proposed~\cite{mari96}, in which the transition
probability is $p(s_t|s_{t-1},\cdots,s_{t-n})$. The sequence
$\{s_{t-1},\cdots,s_{t-n}\}$ is called a context, and the n-gram
model uses contexts of fixed length. However, the complexity and
learning cost of the model increase exponentially with $n$.
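The fixed-length context estimation can be sketched as follows; the exponential cost arises because a full transition table has $|S|^n$ rows, one per possible context (function names are ours):

```python
from collections import Counter

def ngram_transitions(states, n):
    """Estimate p(s_t | s_{t-1}, ..., s_{t-n}) by counting length-n contexts."""
    ctx_counts, pair_counts = Counter(), Counter()
    for t in range(n, len(states)):
        ctx = tuple(states[t - n:t])  # the n states preceding time t
        ctx_counts[ctx] += 1
        pair_counts[(ctx, states[t])] += 1
    return {(ctx, s): c / ctx_counts[ctx] for (ctx, s), c in pair_counts.items()}

# In the period-3 sequence AAB AAB AAB, the context (A, A) is always
# followed by B.
probs = ngram_transitions(list("AABAABAAB"), 2)

# Worst-case table size |S|**n grows exponentially in n: with |S| = 10
# states, n = 5 already gives 10**5 possible contexts.
table_rows = 10 ** 5
```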

In contrast with the $n$-th-order HMM, the variable length Markov
model (VLMM) learns a minimal set of variable-length contexts that
models the high-order Markovian process as accurately as the large
set of fixed-length contexts~\cite{ron96}. VLMM extends states into
variable-length contexts, each composed of several connected states.
Allowing contexts of variable length reduces their number and
complexity, but VLMM is an observable Markov model, not a hidden
one. In other words, all states are observable and no output
probability is needed. This characteristic makes VLMM ill-suited to
our goal.



Another Markov model related to our work is the variable length
hidden Markov model (VLHMM), which combines the advantages of the
HMM and the VLMM~\cite{vlhmm06}. It includes the following
components:
\begin{itemize}
  \item Context set: each context is composed of a variable number
  of states.
  \item State transition probability: the probability of
  transitioning from a context to the next state.
  \item Output probability: for each data value, the probability
  that a context generates it.
\end{itemize}
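A toy instantiation of these three components might look as follows; all names and numbers are illustrative, not taken from the cited paper (which also specifies how contexts are organized and learned):

```python
import math

contexts = [(0,), (1, 0)]                      # variable-length contexts
trans = {((0,), 0): 0.3, ((0,), 1): 0.7,       # p(next state | context)
         ((1, 0), 0): 0.1, ((1, 0), 1): 0.9}
emit = {(0,): (0.0, 1.0), (1, 0): (2.0, 0.5)}  # Gaussian (mean, std) per context

def output_prob(ctx, x):
    """Density of observing value x given the current context."""
    mu, sigma = emit[ctx]
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
```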

The authors use the EM algorithm to learn the parameters of the
model, aiming to maximize the likelihood that the model generates
the time series. Compared to VLHMM, our model has the following
advantages:
\begin{itemize}
  \item VLHMM takes \emph{the data of each time point} as an
observation emitted by a context, while our model takes a
\emph{segment line} as the observation. This better fits our
purpose, since a line generally provides more information about the
upcoming values.
  \item VLHMM requires the number of states to be set beforehand,
which is difficult for unfamiliar time series. In our model, we only
need to set a relative error threshold, which reflects the accuracy
the user wants and is much easier to estimate.
\end{itemize}









%%% Local Variables:
%%% mode: latex
%%% TeX-master: "kdd09"
%%% End:
