\section{Related work}
\label{sec:related}
\paragraph*{Time series representation and forecasting}


A number of techniques have been proposed in the literature to
represent time series with patterns. In~\cite{keogh08}, the authors
give an extensive performance comparison of popular time series
representation approaches, which fall into two groups. Those in the
first group split the whole time series into disjoint segments and
represent each segment with its mean value or a regression line, as
in PLA~\cite{plr98} and PAA~\cite{paa00}. These techniques can
provide an approximate shape of the time series, but they do not
exploit the relationships between segments. Those in the second
group represent a time series with a few dominant coefficients of
certain transformations, such as DFT~\cite{dft94} and
DWT~\cite{dwt99}. However, the coefficients are not interpretable:
knowing the coefficients in the frequency domain does not
necessarily enable us to understand how the system works.
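As an illustration of the segment-based representations above
(using the standard PAA formulation; the notation here is not taken
from this paper), a series $x_1,\cdots,x_n$ is reduced to $w$
segments, and the $i$-th segment is summarized by its mean:
\[
\bar{x}_i = \frac{w}{n} \sum_{j=\frac{n}{w}(i-1)+1}^{\frac{n}{w}i} x_j .
\]
Each segment is thus a single number, which yields the approximate
shape but, as noted, discards the relationships between segments.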


Time series motifs are approximately repeated subsequences of a
longer time series stream. Motifs are defined and categorized using
their support, distance, cardinality, length, dimension, underlying
similarity measure, etc. Many researchers have introduced techniques
to find them efficiently in the case of a large database or
streams~\cite{keoghkdd10}. Even so, a set of motifs alone cannot
provide a complete picture of the time series.

Recently, some works dealing with relations between patterns have
been proposed. The Pattern Growth Graph (PGG)~\cite{tang07} detects
and manages variations over pseudo-periodical streams. It first
splits the time series into segments, each of which is an occurrence
of the pseudo period, and then describes the time series by several
connected lines. However, this approach can only deal with
pseudo-periodical time series and does not apply to general time
series. In~\cite{reeves09}, a multi-scale scheme is proposed to
compress time series. It uses techniques such as FFT and random
projection to represent the original time series. However, the
patterns remain isolated and hence cannot be used to make
predictions.

Time series forecasting has been a topic of extensive
research~\cite{timeseries94,chatfield04series}. In particular, many
tools for forecasting and processing time series come from the
statistics and signal processing fields. Traditional methods include
ARIMA~\cite{timeseries94}; other well-known machine learning
approaches include Bayesian networks, regression trees, CART, and
random forests~\cite{datamining05}. These methods try to capture the
relationship between the predicted value $y_t$ and the observed
values $y_{t-1},\cdots,y_{t-n}$. Our approach differs from them in
that our goal is to build a model based on patterns instead of
single values.
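For instance, such value-based predictors typically fit a
relationship of the autoregressive form (a standard AR model, shown
here only for contrast with the pattern-based view; the symbols
$c$, $\phi_i$, and $\varepsilon_t$ are not from this paper):
\[
y_t = c + \sum_{i=1}^{n} \phi_i\, y_{t-i} + \varepsilon_t ,
\]
where $\varepsilon_t$ is a white-noise term. The prediction target
and the regressors are all single values, whereas our model operates
on patterns.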



\paragraph*{Markov models and their extensions}
Markov models and hidden Markov models~\cite{hmm89} are classic
models for pattern recognition and have been applied in many fields,
such as speech recognition and handwriting recognition. The simplest
form is the Markov chain (MC): a sequence of random variables
$X_1,X_2,\cdots$ with the Markov property that, given the present
state, future states are independent of the past states.


The hidden Markov model (HMM) assumes that the states are
unobservable and that observation symbols are emitted by the states
according to an output probability distribution. A well-known
problem of the HMM is that its first-order assumption prevents it
from accurately modeling time series with highly varied dynamics,
since the future state often depends not only on the present state
but also on past states. To increase modeling accuracy, the
$n$-gram model was proposed~\cite{mari96}, but its complexity and
learning cost grow exponentially with $n$.
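Concretely (standard $n$-gram notation over a state alphabet $S$;
the symbols are not taken from this paper), the $n$-gram model
conditions each state on the $n-1$ preceding ones,
\[
P(s_t \mid s_{t-1},\cdots,s_1) \approx
P(s_t \mid s_{t-1},\cdots,s_{t-n+1}),
\]
so the number of conditional distributions to estimate grows on the
order of $|S|^{\,n-1}$, which is the source of the exponential
learning cost.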

In contrast with the $n$-gram model, the variable length Markov
model (VLMM) learns a minimal set of contexts of variable lengths to
model a high-order Markovian process~\cite{ron96}. VLMM extends
states to variable-length contexts, each composed of several
connected states, and reduces the number and complexity of contexts
by allowing them to have variable lengths. However, VLMM is an
observable Markov model rather than a hidden one: all states are
observable, and no output probability is needed.
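In standard VLMM notation (again, the symbols are not taken from
this paper), each prediction is conditioned on a context $c(\cdot)$
whose length depends on the recent history:
\[
P(s_t \mid s_{t-1},\cdots,s_1) \approx
P\bigl(s_t \mid c(s_{t-1},\cdots,s_{t-k})\bigr),
\]
where the context length $k$ varies from context to context and is
chosen as the shortest suffix of the history that suffices to
predict the next state.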

Another Markov model related to our work is the variable length
hidden Markov model (VLHMM), which combines the advantages of both
HMM and VLMM~\cite{vlhmm06}. Instead of generating observations from
single states, VLHMM extends states to contexts composed of a
variable number of states. An EM algorithm is used to learn the
parameters of the model by maximizing the likelihood of the model
generating the time series. The difference between these models and
the proposed pHMM is that they take the value of each time point as
an observation, whereas in pHMM the observations are patterns.










%%% Local Variables:
%%% mode: latex
%%% TeX-master: "kdd09"
%%% End:
