\documentclass{llncs}

\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{epstopdf}

\newcounter{lemmas}
\newcounter{definitions}

\newtheorem{lowerbounding_lemma}[lemmas]{Lemma}
\newtheorem{FRM1_lemma}[lemmas]{Lemma}
\newtheorem{FRM2_lemma}[lemmas]{Lemma}
\newtheorem{jsliding_def}[definitions]{Definition}
\newtheorem{jdisjoint_def}[definitions]{Definition}



\begin{document}
%\title{Generic Subsequence Matching Framework:\\Fast Development, Flexibility, Efficiency}
\title{Generic Subsequence Matching Framework:\\Modularity, Flexibility, Efficiency}
%\title{General Subsequence Matching Framework}

\author{
Petr Volny \and David Novak \and Pavel Zezula}
\institute{ Masaryk University, Brno, Czech Republic\\
\email{[xvolny1\,|\,david.novak\,|\,zezula]@fi.muni.cz}}


\maketitle
\begin{abstract}
Subsequence matching has proved to be an ideal approach for solving many
problems related to the fields of data mining and similarity retrieval. It has
been shown that almost any data class (audio, image, biometrics, signals)
either is or can be represented by some kind of time series or string of
symbols, which can serve as input for various subsequence matching approaches.
However, previous research in this field has suffered from problems like data
and implementation bias, so it has not been easy to properly compare competing
approaches. Therefore, we present a new subsequence matching framework that
speeds up the development of subsequence-matching-related systems and helps to
overcome the mentioned biases by providing a unified testing environment.
Furthermore, we show on prototypes that this framework can be used in many
application domains and that its components can be reused effectively. Finally,
the strictly modular architecture of the framework enables fast development of
new systems that can combine highly efficient technologies or partial solutions
that were, so far, developed separately.
\end{abstract}

% A category with the (minimum) three required fields
%\category{H.4}{Information Systems Applications}{Miscellaneous}
%A category including the fourth, optional field follows...
%\category{D.2.8}{Software Engineering}{Metrics}[complexity measures, performance measures]

%\terms{Theory}

%\keywords{ACM proceedings, \LaTeX, text tagging} % NOT required for Proceedings


\section{Introduction}
The majority of the data produced in the current digital era is in the form of
\emph{time series} or can be transformed into sequences of numbers. This concept
is very natural and ubiquitous: audio signals, various biometric data, image
features, economic data, etc.\ are often viewed as time series and need to be
organized and searched in this way as well.

One of the key research issues drawing a lot of attention during the last two
decades is the \emph{subsequence matching problem}, which can be basically
formulated
as follows: Given a query sequence, find the best-matching subsequence from the
sequences in the database. Depending on the specific data and application, this
general problem has many variants -- query sequences of fixed or variable size,
data-specific definition of sequence matching, requirement of dynamic time
warping, etc. Therefore, the effort in this research area resulted in many
approaches and techniques -- both, very general and those focusing on a
specific fragment of this complex problem. 

The leading authors in this field identified two main problems that limit the
comparability and cooperation potential of various approaches: the \emph{data
bias} (algorithms are often evaluated on heterogeneous datasets) and the
\emph{implementation bias} (the implementation of the specific technique can strongly
influence experiment results)~\cite{Keogh2002}. The effort to overcome the
data bias resulted in the establishment of a common set of data collections
\cite{KeoghUCR}, which is publicly available and should be used by any
subsequent research in this area. However, the implementation bias lingers,
which also obstructs a straightforward combination of compatible approaches whose
interconnection could be very efficient.

%With these data can be used in applications of many types and each would need a bit
%different approach of subsequence matching to be implemented.

Analysis of this situation brought us to the conclusion that there is a need
for a unified environment for developing, prototyping, testing, and combining
subsequence matching approaches. In this paper we propose such a framework,
namely:
\begin{itemize}
	\item we overview and decompose the state-of-the-art approaches and
		techniques for subsequence matching and we identify several common
		sub-problems that these approaches deal with in various ways
		(Section~\ref{sec:overview});
	\item we propose a general architecture of a framework that fulfills these
		goals and we describe our implementation of such a framework
		(Section~\ref{sec:framework}); the power of the framework is
		demonstrated by an easy implementation of two well-known subsequence
		matching algorithms;
	\item we describe two real applications with different requirements for
		subsequence matching, both of which were implemented very simply with
		the aid of our framework (Section~\ref{sec:demos}).
\end{itemize}
The paper is concluded in Section~\ref{sec:conclusions} by future directions
that cover possible performance boost enabled by a straightforward cooperation
of our framework with advanced distance-based indexing and
searching technologies~\cite{Batko2009d}.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Time Series Processing Overview}
\label{sec:overview}
The field-opening paper by Faloutsos et al.~\cite{Faloutsos1994} introduced a
subsequence matching application model that has been used ever since
with only minor modifications. The model can be summarized in the following
four steps that a subsequence matching application should adopt:
%that should be adopted bythe application needs to perform: 
\begin{description}
\item[slicing] of the time series sequences (both data and query) into shorter
	subsequences,
\item[transforming] each subsequence into a lower-dimensional representation,
\item[indexing] the subsequences in a multi-dimensional index structure,
\item[searching] in the index with a distance measure that obeys the 
	lower bounding lemma on the transformed data.
\end{description}
In the original work~\cite{Faloutsos1994}, this approach was demonstrated on
a subsequence matching algorithm that used the \emph{sliding window} approach to
slice the indexed data and the \emph{disjoint window} for the query.
The Discrete Fourier Transform (DFT) was used for dimensionality reduction
and the data was indexed using minimum bounding rectangles in an
R-tree~\cite{Guttman1984}. The Euclidean distance was used for searching since it
satisfies the lower bounding lemma on data transformed by DFT.
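These four steps can be illustrated by a minimal sketch (Python; the helper names are ours, and we use an orthonormal DFT so that, by Parseval's theorem, the Euclidean distance on the truncated coefficients never exceeds the true distance, i.e., filtering with it causes no false dismissals):

```python
import cmath
import math

def dft(s):
    # Orthonormal DFT: distances are preserved exactly (Parseval's theorem).
    n = len(s)
    return [sum(s[t] * cmath.exp(-2j * math.pi * f * t / n) for t in range(n))
            / math.sqrt(n) for f in range(n)]

def euclidean(a, b):
    # Works for both real and complex components.
    return math.sqrt(sum(abs(x - y) ** 2 for x, y in zip(a, b)))

def reduced(s, k=2):
    # Dimensionality reduction: keep only the first k DFT coefficients.
    return dft(s)[:k]

# The distance on the reduced representation lower-bounds the true distance.
s, q = [1.0, 2.0, 3.0, 2.0, 1.0, 0.0], [1.5, 2.5, 2.5, 1.5, 0.5, 0.5]
assert euclidean(reduced(s), reduced(q)) <= euclidean(s, q) + 1e-9
```

In a real system the reduced vectors would then be inserted into a multi-dimensional index (such as an R-tree) and the lower-bounding distance used for filtering.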

\begin{figure*}[tbp]
\centering
\includegraphics[width=\textwidth]{images/representations.pdf}
\caption{Basic hierarchy of representations. Representations marked with an asterisk allow lower bounding.}
\label{fig:representations}
\end{figure*}

Many works followed this model, contributing to it with other time series
representations, more sophisticated distance functions, and corresponding lower
bounding mechanisms. In the rest of this section, we provide a brief overview of
the key approaches in this field.
%that has been published since the field opening
%paper by Faloutsos et al. was published.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Data Representation}
\label{sec:representation}
The way in which the data is represented may be crucial for both efficiency and
effectiveness of the whole system. 
%The better approximation of the original data we have, the better the results we
%can get.
Naturally, one can work directly with the original data, but this \emph{raw
data} is typically very large and would require a great computing
infrastructure to be processed fast enough. Therefore, a lot of effort
has been devoted to finding the best possible approximation of the time series
-- one that reduces the dimensionality of the data on the one hand and
preserves its main features on the other.

Figure~\ref{fig:representations} depicts the
most significant approaches used for reducing time series data representation: 
\textit{Discrete Fourier Transformation} (DFT)~\cite{Faloutsos1994},
\textit{Discrete Cosine Transform} (DCT)~\cite{Korn1997},  
\textit{Chebyshev Polynomials} (CHEB)~\cite{Cai2004},
\textit{Discrete Wavelet Transform} (DWT)~\cite{Chan1999},
\textit{Piecewise Aggregate Approximation} (PAA)~\cite{Keogh2000},
\textit{Singular Value Decomposition} (SVD)~\cite{Korn1997},
\textit{Adaptive Piecewise Constant Approximation} (APCA)~\cite{Chakrabarti2002}.
At a higher level, we distinguish two groups of approaches: \emph{data
adaptive} and \emph{non-data adaptive}. An important feature of any
representation is the ability to be indexed, because proper indexing can
dramatically improve the performance of the whole retrieval process; members of
the latter group sometimes struggle with indexing~\cite{Yi2000}. Another
desirable property of a data representation is the possibility to
perform \textit{lower bounding}, which can again improve the performance of the
subsequence matching process.
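As an illustration, PAA, one of the simplest representations listed above, can be sketched in a few lines (our own minimal version, assuming for simplicity that the series length is divisible by the number of segments):

```python
def paa(s, m):
    # Piecewise Aggregate Approximation: split s into m equal-length
    # segments and represent each segment by its mean value.
    seg = len(s) // m
    return [sum(s[i * seg:(i + 1) * seg]) / seg for i in range(m)]

# A series of length 4 reduced to 2 segment means:
assert paa([1.0, 3.0, 2.0, 4.0], 2) == [2.0, 3.0]
```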

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Distance Computation}
\label{sec:distances}
The notion of similarity between two sequences is typically expressed by
a \emph{distance measure} (distance function). The most frequently used
functions are depicted in Figure~\ref{fig:measures}. Lock-step measures, like
the Euclidean distance, are usually relatively cheap to compute but lack
robustness against even basic data transformations. On the other hand, more
sophisticated dynamic programming methods like Dynamic Time Warping (DTW) allow
shifts on the time axis and serve better in applications like speech
recognition or query-by-humming. It is also important to pair the
data representation and the distance function wisely in order to satisfy the
lower bounding lemma~\cite{Faloutsos1994}. Distance functions also differ in
their input domain: some are defined on continuous domains (real numbers) and
some work on strings (sequences of symbols from a finite alphabet). The most
common distance measures are:
\textit{Euclidean Distance} (ED)~\cite{Faloutsos1994},
\textit{Dynamic Time Warping} (DTW)~\cite{Sakoe1978,Kim2001,Keogh2004},
\textit{Edit Distance with Real Penalty} (ERP)~\cite{Chen2004},
\textit{Edit Distance on Real Sequence} (EDR)~\cite{Chen2005},
\textit{Longest Common Subsequence} (LCSS)~\cite{Vlachos2002}.
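Of these, DTW is the most prominent dynamic programming measure; the following minimal sketch (our own, unconstrained quadratic-time implementation) shows how it absorbs shifts on the time axis that a lock-step measure would penalize:

```python
import math

def dtw(a, b):
    # Classic O(len(a) * len(b)) dynamic programming DTW with squared
    # point-wise cost; returns the warped distance between a and b.
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return math.sqrt(d[n][m])

# A time shift that the Euclidean distance would penalize is warped away:
assert dtw([0, 0, 1, 2, 1, 0], [0, 1, 2, 1, 0, 0]) == 0.0
```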

\begin{figure}[tbp]
\centering
\includegraphics[width=\textwidth]{images/measures.pdf}
\caption{Basic hierarchy of distance functions.}
\label{fig:measures}
\end{figure}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Subsequence Matching Approaches}
\label{sec:approaches}
In order to build a subsequence matching application, a number of questions
have to be answered: What kind of data will be used and what is its
dimensionality? What is the volume of the data? Do we need the warping ability?
Are approximate answers acceptable? Etc. On the basis of the answers, one can
make proper design decisions, such as which representation and distance measure
to use, which storage index to employ, etc.

The work of Faloutsos et al. encouraged many new approaches. Moon et
al.~\cite{Moon2001} suggested an approach to slicing and indexing sequences
dual to the one used in Faloutsos' work. This approach, called
\emph{DualMatch}, uses sliding windows for queries and disjoint windows
for data sequences to reduce the number of windows that need to be indexed.
DualMatch was followed by a generalization of the window creation method
called GeneralMatch \cite{Moon2002}.
Another significant leap forward was made by Keogh et al. in their work on
exact indexing of Dynamic Time Warping \cite{Keogh2004}: they introduced a
similarity measure that is relatively easy to compute and lower-bounds the
expensive DTW function. This approach was further enhanced by improving the I/O
part of the subsequence matching process using Deferred Group Subsequence
Retrieval, introduced in \cite{Han2007}.
Moreover, many new distance functions and data representations were
introduced, as outlined in the previous sections.
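The lower-bounding measure for DTW mentioned above can be sketched as follows (a minimal version of the LB\_Keogh-style envelope bound of~\cite{Keogh2004}; the parameter $r$ is the assumed warping-window width):

```python
import math

def lb_keogh(q, s, r):
    # Build an upper/lower envelope of width r around the query q and sum
    # the squared parts of s that lie outside the envelope. The result
    # lower-bounds the band-constrained DTW distance between q and s.
    total = 0.0
    for i, x in enumerate(s):
        window = q[max(0, i - r):i + r + 1]
        lo, hi = min(window), max(window)
        if x > hi:
            total += (x - hi) ** 2
        elif x < lo:
            total += (lo - x) ** 2
    return math.sqrt(total)

# With r = 0 the envelope degenerates to q itself, so the bound equals
# the Euclidean distance:
assert abs(lb_keogh([1.0, 1.0, 1.0], [2.0, 2.0, 2.0], 0) - math.sqrt(3)) < 1e-9
```

Because the bound is cheap to evaluate, candidates can be filtered with it first and the full DTW computed only for sequences that survive the filter.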


In practice, we often run into performance issues and then need to employ
enhancements like indexing, lower bounding, window size optimization, reduction
of I/O operations, or approximate queries. Approaches to building subsequence
matching applications often use the very same techniques for solving common
sub-tasks of the whole retrieval process and change only some parts with a
novel approach. This leads to repeated implementation of the same parts of the
process, like DFT or DWT, and thus to the phenomenon of the
\textit{implementation bias}~\cite{Keogh2002}. Modern subsequence matching
approaches \cite{Han2007,Keogh2004} employ many smaller tasks in the retrieval
process that solve sub-problems like optimizing I/O operations. Implementations
of routines that solve such sub-problems should be reusable and employable in
similar approaches. This led us to think about the whole subsequence matching
process as a chain of subtasks, each solving a small part of the problem. We
have observed that many of the published approaches fit into this model and
that their novelty often lies only in reordering, changing, or adding a new
subtask implementation to the chain.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%  The Framework
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Subsequence Matching Framework}
\label{sec:framework}
The Subsequence Matching Framework (SMF) can be perceived on the following
levels that should, naturally, coincide: 
\begin{itemize}
	\item on the \emph{conceptual level}, the framework is composed of mutually
		cooperating modules, each of which solves a specific sub-task;
		these modules cooperate within specific subsequence
		matching algorithms;
	\item on the \emph{implementation level}, the framework defines the
		functionality of the individual module types, their communication
		interfaces, and the ways to combine them into a specific algorithm;
		also, a number of specific modules have been implemented, as well as
		several specific algorithms.
\end{itemize}
In Section~\ref{sec:modules}, we describe the common sub-problems (sub-tasks)
that we identified in the field and we define corresponding types of modules.
Further, in Section~\ref{sec:algorithms}, we justify our approach by describing
two well-known subsequence matching algorithms in terms of our modules and we
present a straightforward implementation of these algorithms within SMF.
Section~\ref{sec:implementation} is devoted to details about implementation of
the framework.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\subsection{Fundamental Terms and Data Structures}

The key term in the whole framework is, naturally, a \emph{sequence}. As we want
to keep the framework as general as possible, we lay practically no
restrictions on the components of a sequence -- they can be integers, real
numbers, vectors of numbers, or more sophisticated structures, as long as
a distance function is defined on such a type of sequence. We will use the
following notation:

\begin{center}
\begin{tabular}{|l|l|}
\hline
Symbols & Definitions \\
\hline
\hline
$Len[S]$ & The length of sequence $S$ \\ \hline
$S[k]$ & The $k$-th value of the sequence $S$ \\ \hline
$S[i:j]$ & Subsequence of $S$ from $S[i]$ to $S[j]$, inclusive. \\ \hline
$d(Q,S)$ & Distance between two sequences $Q$ and $S$ \\ \hline
%$s_i$ & The $i$-th disjoint window of sequence S \\
%\hline
%$\omega$ & Length of the (sliding/disjoint) window \\
%\hline
%$\epsilon$ & User defined tolerance \\
%\hline
\end{tabular}
\end{center}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Common Sub-problems: Modules in SMF}
\label{sec:modules}
%Modules for common problems (motivation, isolation of modules, examples)\newline
While studying the field of subsequence matching, we identified several common
sub-problems that are addressed by a number of approaches in various ways.
Specifically, we can see the following sub-tasks, which correspond to isolated
modules in our framework.

%Module functionality is independent of data or other modules which allows easy
%combination\newline

\begin{figure*}[tbp]
\centering
\includegraphics[width=0.8\textwidth]{images/modules.pdf}
\caption{Modules -- test image.}
\label{fig:modules}
\end{figure*}

\subsubsection{Data Representation (Transformer Module)}
The raw data that enter an application are often transformed into another
representation, which can be motivated either by simple dimensionality
reduction (DFT, DWT, SVD, PAA)~\cite{Faloutsos1994,Chan1999,Korn1997,Keogh2000}
or by extracting important characteristics that should improve the
effectiveness of the retrieval~\cite{Perng2000} (see
Section~\ref{sec:representation} for details). In either case, the general task
can be defined simply as follows: \emph{Transform a given sequence $S$ into
another sequence $S'$.}
This naturally specifies the \emph{transformer} module of SMF with the
following specification:

\begin{table}
\centering
\caption{Summarization of the SMF module types.}
\label{tab:modules}
\begin{tabular}{c||c|c|p{.23\textwidth}|c}
	\bfseries module name & \bfseries input & \bfseries
	output & \bfseries description & \bfseries notation \\ \hline
	\emph{transformer} & sequence $S$ & sequence $S'$ & transform $S$ into $S'$
	& Figure~\ref{fig:modules} (a) \\
\end{tabular}
\end{table}
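In code, a transformer module reduces to a single-method interface; the following is a hypothetical Python sketch (SMF itself is implemented in Java, and the class and method names here are ours, not SMF's actual API):

```python
from abc import ABC, abstractmethod

class Transformer(ABC):
    # Hypothetical transformer-module interface: sequence in, sequence out.
    @abstractmethod
    def transform(self, s):
        ...

class MovingAverage(Transformer):
    # Toy transformer smoothing the series with a window of size w.
    def __init__(self, w):
        self.w = w

    def transform(self, s):
        return [sum(s[i:i + self.w]) / self.w
                for i in range(len(s) - self.w + 1)]

assert MovingAverage(2).transform([1.0, 3.0, 5.0]) == [2.0, 4.0]
```

Because every transformer exposes the same interface, implementations (DFT, PAA, feature extraction, ...) can be swapped without touching the rest of the matching chain.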

Introduce some notation for specification of a module (name, functionality,
interfaces (input, output), notation = symbol in the diagrams?)

%Known significant solutions (Classics in signal processing DFT, PAA; MFCC for
%audio, SIFT for images)


\subsubsection{Windows and Subwindows (Slicer Module)}
Known significant solutions (sliding, disjoint, dual approach)

\subsubsection{Efficient Indexing (Index Module)}
Known significant solutions (metric indexes, iSAX)

\subsubsection{Efficient Aligning (Storage Module)}
Known significant solutions (metric indexes, iSAX)

\subsubsection{Distance Functions (Distance Module)}
Known significant solutions (ED, warping, DTW, Edit Distance based)


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Subsequence Matching Strategies in Our Framework}
\label{sec:algorithms}
Describe and visualize two specific algorithms for subsequence matching:
\begin{itemize}
	\item basic Faloutsos,
	\item advanced Han~\cite{Han2007}
\end{itemize}


(Algorithms for subseq matching, using modules, data independent, each algorithm solves class of problems \newline
Combining modules to tune performance and finding best solution for the particular data
Possible queries (including approx))

\begin{figure*}[tbp]
\centering
\includegraphics[width=\textwidth]{images/faloutsos.pdf}
\caption{Faloutsos -- test image.}
\label{fig:faloutsos}
\end{figure*}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Implementation of the Framework}
\label{sec:implementation}
Java, MESSIF, MUFIN, Config files scripting, Web and Mobile Apps\newline




%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%  Demo Applications
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Use Cases}
\label{sec:demos}
General time series\newline
Gait recognition\newline
Query-by-example spoken term detection\newline


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%  Conclusions
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Conclusions and Future Work}
\label{sec:conclusions}

\bibliographystyle{splncs}
\bibliography{library}

\end{document}










%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Framework}
\label{sec:framework}
The subsequence matching problem was first addressed in the field-opening paper by Faloutsos et al.~\cite{Faloutsos1994}. They introduced the GEMINI framework, an application model for dealing with subsequence matching, which can briefly be described as four steps that the application has to perform:
\begin{itemize}
\item slicing the time series sequences into smaller subsequences (originally using a sliding window for the indexed data and a disjoint window for the query),
\item mapping each time series subsequence into a lower dimension; we can call this step segmentation (originally using the Fast Fourier Transform),
\item indexing them in a multi-dimensional indexing structure (originally indexing minimum bounding rectangles in an R-tree),
\item performing the search with a distance function that obeys the lower bounding lemma on the reduced data (originally the Euclidean distance).
\end{itemize}
Works that followed Faloutsos's paper usually tried to enhance only one part of the problem, mainly by introducing new methods for dimensionality reduction, new distance functions, or a slight variation of the indexing and slicing strategy for the subsequence pieces. We can thus observe that the approaches follow the same pattern and usually enhance only one part of the overall algorithm. In this pattern, there are sub-tasks that must be implemented by every approach, and this leads to different implementations of the same methods, which leads to the \textit{implementation bias}. With our framework we would like to help overcome that problem by providing a ``write once, use many'' option.

\subsection{Subsequence Matching Overview}
Short overview of subsequence matching and the last fifteen years of research (data representation, data reduction, indexing, lower bounding, matching strategies, distances)\newline
\subsection{Common Sub-problems}
Modules for common problems (motivation, isolation of modules, examples)\newline
Module functionality is independent of data or other modules which allows easy combination\newline
\subsubsection{Data Representation and Dimensionality Reduction}
Known significant solutions (Classics in signal processing DFT, PAA; MFCC for audio, SIFT for images)
\subsubsection{Windows}
Known significant solutions (sliding, disjoint, dual approach)
\subsubsection{Indexing}
Known significant solutions (metric indexes, iSAX)
\subsubsection{Distance Functions}
Known significant solutions (ED, warping, DTW, Edit Distance based)

\section{Subsequence Matching strategies}
So far we have discussed representations and similarity models that are used for general time series data mining tasks, but we have not yet mentioned anything about subsequence matching. In general, subsequence matching is a specific problem in the area of sequence matching. Therefore, all the outlined methods are relevant to both sequence matching and subsequence matching. The latter differs in its indexing and retrieval strategies, but somewhere in the process it comes down to one of the mentioned similarity measuring methods on the given representations.

A specific application is heavily determined by the meaning and interpretation of its data. So, prior to inventing a subsequence matching strategy, one must ask a few substantial questions: What is the meaning of the data? Do we seek trends or patterns? Is it possible to compare data of different lengths? Is time warping a desirable feature during search? What will be the size of the window? What will be the volume of the indexed data? Etc. As we can see, the process has many variables that must be taken into account before we start tailoring a subsequence matching approach to the specific problem.

This section covers the basic problems and techniques used in subsequence matching applications. We will discuss the usage of sliding and disjoint windows during indexing and querying, the effect of the chosen window size on the performance of the application, the question of where to employ dynamic time warping and where to use the Euclidean distance, and finally the specific problem of finding motifs in time series.

In the following text we will use this notation:


\hspace{1cm}
\begin{center}
\begin{tabular}{|l|l|}
\hline
Symbols & Definitions \\
\hline
\hline
$Len[S]$ & The length of sequence $S$ \\
\hline
$S[k]$ & The k-th value of the sequence $S$ \\
\hline
$S[i:j]$ & Subsequence of $S$ from $S[i]$ to $S[j]$, inclusive. \\
\hline
$D(Q,S)$ & Distance between two sequences $Q$ and $S$ \\
\hline
$s_i$ & The $i$-th disjoint window of sequence S \\
\hline
$\omega$ & Length of the (sliding/disjoint) window \\
\hline
$\epsilon$ & User defined tolerance \\
\hline
\end{tabular}
\end{center}

\hspace{1cm}

\subsection{Sliding And Disjoint Windows}
For subsequence matching, we need to extract subsequences both when indexing and when querying data. There are two major related techniques for creating subsequences:
\begin{itemize}
\item disjoint windows -- the given sequence is divided into disjoint windows, where one window starts where the previous one ends,
\item sliding windows -- all possible windows of a given length $\omega$ are extracted from the given series; in other words, if one window starts at position $i$ within the sequence, the next starts at position $i+1$.
\end{itemize}
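The two slicing strategies can be sketched in a few lines (our own helper names; for simplicity, an incomplete tail window is dropped):

```python
def sliding_windows(s, w):
    # All contiguous subsequences of length w (step 1).
    return [s[i:i + w] for i in range(len(s) - w + 1)]

def disjoint_windows(s, w):
    # Non-overlapping subsequences of length w (step w); a possible
    # incomplete tail window is dropped.
    return [s[i:i + w] for i in range(0, len(s) - w + 1, w)]

assert sliding_windows([1, 2, 3, 4], 2) == [[1, 2], [2, 3], [3, 4]]
assert disjoint_windows([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4]]
```

Note that a sequence of length $n$ yields $n-\omega+1$ sliding windows but only $\lfloor n/\omega \rfloor$ disjoint ones, which is exactly the storage trade-off the approaches below exploit.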
Both sliding and disjoint windows are used in subsequence matching techniques to achieve a state where each window created from the query can be compared to each window in the index. This concept was first used in the field-opening paper by Faloutsos et al.~\cite{Faloutsos1994} (FRM for short). There, sliding windows are used for the indexed data and disjoint windows for the queries. To reduce the number of subsequence-to-subsequence distance computations, not all subsequences are indexed in the database; instead, minimum bounding rectangles (MBRs), each covering a group of windows, are used. When a range query is issued, the disjoint windows of the query are first compared to the indexed MBRs, and only those MBRs of which at least a part lies within the range are selected as candidates for the answer. Then all windows from the candidate MBRs are taken, and all false alarms (windows not in range) are found and dismissed from the precise answer. Although this lower bounding technique ensures no false dismissals, it is not very efficient and produces a lot of false alarms.\par
The opposite way of using sliding and disjoint windows was introduced by Moon et al.~\cite{Moon2001}. They divide the data sequences into disjoint windows and the query sequences into sliding windows. Since this approach exploits duality in constructing windows, it was called DualMatch. It has been shown that the duality-based approach is correct (i.e., it incurs no false dismissals) and that it outperforms FRM. DualMatch drastically reduced the number of points that need to be stored, to $1/\omega$ of that of FRM. The disadvantage of DualMatch is the upper bound on the window size, which in turn causes additional false alarms due to the \textit{window size effect} (discussed below).\par
Another method, GeneralMatch, was introduced by Moon et al. in 2002~\cite{Moon2002}. It is based on a generalization of window construction. The authors introduced the concept of $J$-sliding and $J$-disjoint windows with a sliding factor $J$, defined as follows:

\begin{jsliding_def}
A $J$-sliding window $(1 \leq J \leq \omega)$ $s_{i}^J$ of size $\omega$ of the sequence $S$ is defined as the subsequence of length $\omega$ starting from $S[(i-1)\cdot J+1]$ $(1 \leq i \leq \frac{Len(S)-\omega}{J}+1)$.
\end{jsliding_def}

Intuitively speaking, if we have $\omega=16$ and $J=4$, we construct windows by shifting a subsequence of length 16 by 4 entries at a time; thus, the starting points of the 4-sliding windows are $S[1],S[5],S[9],\ldots,$ respectively.

\begin{jdisjoint_def}
A $J$-disjoint window $(1 \leq J \leq \omega)$ $q_{(i,j)}^J$ of size $\omega$ of the sequence $Q$ is defined as the subsequence of length $\omega$ starting from $Q[i+(j-1)\cdot\omega]$ $(1\leq i\leq J$, $1\leq j\leq \frac{Len(Q)-i+1}{\omega})$ in $Q$.
\end{jdisjoint_def}

Again, if $\omega=16$ and $J=4$, we construct the windows $Q[i:i+\omega -1], Q[i+\omega : i+2\omega -1],\ldots$ by dividing $Q[i:Len(Q)]$ into disjoint windows for every $i$ with $1\leq i\leq 4$.\par
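The start positions of $J$-sliding windows can be sketched as follows (our hypothetical helper; it uses 0-based indexing, so the 1-based positions of the definition are the returned values plus one):

```python
def j_sliding_starts(n, w, j):
    # 0-based start positions of J-sliding windows of size w in a sequence
    # of length n. j = 1 gives the plain sliding window, j = w the
    # disjoint windows.
    return list(range(0, n - w + 1, j))

# The omega = 16, J = 4 example: 1-based starts S[1], S[5], S[9], ...
starts = j_sliding_starts(40, 16, 4)
assert [p + 1 for p in starts] == [1, 5, 9, 13, 17, 21, 25]
```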

\paragraph{I/O Issues}
Logically, when we divide the sequences into many subsequences, the number of comparisons during query processing rises. This also implies that subsequence matching algorithms perform a large number of I/O operations, fetching particular subsequences one by one, again and again. One of the advanced subsequence matching methods addressed this problem by introducing \textit{deferred group subsequence retrieval} \cite{Han2007}. This technique copes with the fact that, because of excessive disk I/O operations and bad buffer utilization, the same data pages may have to be read from the disk repeatedly. The basic idea of the proposed solution is to defer the retrieval of a fixed-size set of subsequences and to retrieve them in a batch.


%=====================================

\subsection{Window Size Effect}
The effect of the window size on performance was first discussed for the previously mentioned DualMatch approach \cite{Moon2001}. The window size effect is caused by the application of two lemmas that were introduced in FRM:

\begin{FRM1_lemma}
When two sequences $S$ and $Q$ of the same length are divided into $p$ windows $s_i$ and $q_i$ $(1 \leq i \leq p)$, respectively, and $S$ and $Q$ are in $\epsilon$-match, then at least one of the pairs $(s_i, q_i)$ is in $\epsilon/\sqrt{p}$-match. That is, the following implication holds:
$$ D(S,Q)\leq\epsilon \Rightarrow \bigvee_{i=1}^p D(s_i,q_i) \leq \epsilon/\sqrt{p}$$
\end{FRM1_lemma}

\begin{FRM2_lemma}
If two sequences $S$ and $Q$ of the same length are in $\epsilon$-match, then any pair of subsequences $(S[i:j], Q[i:j])$ is also in $\epsilon$-match. That is, the following implication holds:
$$ D(S,Q)\leq\epsilon \Rightarrow D(S[i:j],Q[i:j]) \leq \epsilon $$
\end{FRM2_lemma}

The application of Lemmas 2 and 3 to long query sequences causes false alarms. That is, when two sequences $S$ and $Q$ are divided into $p$ windows $s_i$ and $q_i$ $(1 \leq i \leq p)$, respectively, then even though a pair $(s_i, q_i)$ is in $\epsilon/\sqrt{p}$-match, the distance between $S$ and $Q$ may be greater than $\epsilon$. To reduce this kind of false alarms, it is reasonable to use as large windows as possible. For example, let the window size of method A be twice as large as that of method B. Then, by Lemma 2 or 3, a candidate subsequence of method A must also be a candidate of method B; however, the converse does not hold. This effect was defined as the \textit{window size effect} \cite{Moon2001}. The size of the window, however, must be less than or equal to the length of the query sequence; thus, the maximum window size depends on the query length. Typically, the algorithms have the problem that performance decreases as the difference between the query sequence length and the window size increases. This problem was addressed in \cite{Koh2005, Lim2007}. The proposed techniques create multiple indexes and choose among them based on the query parameters. The dependency of the performance degradation on the query length versus the window size was revealed, and it was claimed that multiple indexes tailored to variable query lengths are crucial. Both articles introduced heuristic methods for determining the window sizes of the particular indexes based on the distribution of the query sequence lengths.
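The first of these lemmas follows from the fact that the minimum of the $p$ squared window distances cannot exceed their mean; a short numerical check with the Euclidean distance (our helper names, assuming the length is divisible by $p$):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def windows(s, p):
    # Split s into p disjoint windows of equal length.
    w = len(s) // p
    return [s[i * w:(i + 1) * w] for i in range(p)]

S = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
Q = [1.2, 1.8, 3.3, 3.9, 5.1, 6.2]
p = 3
eps = euclidean(S, Q)  # S and Q are trivially in eps-match for this eps
best = min(euclidean(s, q) for s, q in zip(windows(S, p), windows(Q, p)))
# At least one window pair must be in eps/sqrt(p)-match:
assert best <= eps / math.sqrt(p) + 1e-9
```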


\subsection{When Do We Need to Warp?}
Both Euclidean Distance (ED) and Dynamic Time Warping (DTW) based techniques have many advocates. Euclidean distance is mostly favored for its simplicity, its lower bounding options, and the fact that it can be computed cheaply. On the other side stands the more sophisticated dynamic programming approach of DTW. Over the past few decades the latter has pushed the former out of many applications, and it has been claimed that the dominance of DTW over ED is inevitable. In some areas this has indeed happened, for example in speech recognition and query-by-humming, where the warping ability is crucial to match spoken terms with words from the database, or where one cannot expect the humming to have the same tempo as the indexed data to be matched with the query.\par
Despite the fact that DTW outperforms ED in these areas, arguments in favor of ED were raised in the literature \cite{Ratanamahatana2005, Ann2004}. It was argued that the bigger the data collection, the lesser the need for DTW, because DTW often degrades to the same result as ED. The reason is that the probability that the collection contains some data very similar to a given query grows with the size of the indexed collection. In this case DTW and ED return very similar results, but ED is cheaper to compute.\par
On the other hand, this reasoning cannot be applied generally. We believe that for some domains the warping ability of DTW is crucial even if the collection is large. Again, taking spoken term detection as an example, we do not want to find only the single best match but possibly all variations of the spoken term, and this is a case where DTW, or other functions with the warping ability, is irreplaceable. If warping were needed only because of a different offset of the data, then, in the context of subsequence matching, it is better to use ED: the offset difference can be overcome by the sliding/disjoint windows.
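To make the contrast concrete, the following self-contained sketch computes both measures on one-dimensional sequences; the class and method names are ours for illustration, not the framework's API. The example pair shares a shape but differs in tempo, so ED is not even applicable (the lengths differ), while DTW aligns the sequences perfectly.

```java
// Illustrative comparison of Euclidean distance and DTW (not the SMF API).
public class DistanceSketch {
    // Euclidean distance: requires equal lengths, runs in O(n).
    static double euclidean(double[] s, double[] q) {
        if (s.length != q.length)
            throw new IllegalArgumentException("equal lengths required");
        double sum = 0;
        for (int i = 0; i < s.length; i++) {
            double d = s[i] - q[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Classic dynamic-programming DTW, O(n*m); it aligns sequences that
    // differ in tempo, which plain Euclidean distance cannot do.
    static double dtw(double[] s, double[] q) {
        int n = s.length, m = q.length;
        double[][] c = new double[n + 1][m + 1];
        for (double[] row : c) java.util.Arrays.fill(row, Double.POSITIVE_INFINITY);
        c[0][0] = 0;
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= m; j++) {
                double d = Math.abs(s[i - 1] - q[j - 1]);
                c[i][j] = d + Math.min(c[i - 1][j - 1], Math.min(c[i - 1][j], c[i][j - 1]));
            }
        return c[n][m];
    }

    public static void main(String[] args) {
        // The same peak played at two tempos: DTW warps them onto each
        // other, while euclidean() cannot compare them at all.
        double[] s = {0, 1, 2, 1, 0};
        double[] q = {0, 0, 1, 2, 1, 0, 0};
        System.out.println("DTW = " + dtw(s, q)); // prints DTW = 0.0
    }
}
```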


\subsection{Finding Motifs In Time Series}
Finding motifs \cite{Lin2002} is a specific sub-problem of time series data mining. It aims at finding regular patterns in a time series, usually omitting the exact values and considering only the development or shape of the signal. This problem is sometimes also referred to as rule discovery.\par
One common approach is to take a predefined set of primitive shapes and then classify the windows of the original sequence into these shapes. The set of primitives can be found by clustering windows of a predefined length: each sequence is sliced into disjoint windows and all the windows are clustered into a set of shape primitives. Each primitive is represented by a symbol, so each sequence can be represented as a string of these symbols \cite{Das1998}.\par
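The symbolization step can be sketched as follows. This is a hedged illustration with our own names: the shape primitives are hard-coded here, whereas in the approach of \cite{Das1998} they would be obtained by clustering the windows of the collection.

```java
// Hedged sketch of symbolizing a sequence for motif discovery:
// slice into disjoint windows, map each window to the nearest of a few
// predefined shape primitives, and emit one symbol per window.
public class Symbolizer {
    // Hypothetical primitives; a real system would learn them by clustering.
    static final double[][] PRIMITIVES = {
        {0, 0, 0},   // 'a' flat
        {0, 1, 2},   // 'b' rising
        {2, 1, 0},   // 'c' falling
    };

    // Squared Euclidean distance between two equal-length windows.
    static double dist(double[] x, double[] y) {
        double s = 0;
        for (int i = 0; i < x.length; i++) { double d = x[i] - y[i]; s += d * d; }
        return s;
    }

    static String symbolize(double[] series, int w) {
        StringBuilder sb = new StringBuilder();
        for (int off = 0; off + w <= series.length; off += w) { // disjoint windows
            double[] win = java.util.Arrays.copyOfRange(series, off, off + w);
            int best = 0;
            for (int p = 1; p < PRIMITIVES.length; p++)
                if (dist(win, PRIMITIVES[p]) < dist(win, PRIMITIVES[best])) best = p;
            sb.append((char) ('a' + best)); // one symbol per window
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        double[] series = {0, 1, 2, 2, 1, 0, 0, 0, 0};
        System.out.println(symbolize(series, 3)); // rising, falling, flat -> "bca"
    }
}
```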
%====================================================================================================

\section{Subsequence Matching Framework}
As we have outlined in the previous sections, the ubiquity of time series and their usability in a wide range of applications have led the research community to invent many different approaches. It was shown that whole matching of time series is not sufficient in many areas. Therefore the subsequence matching problem arose and has been addressed by many researchers since the first paper on this topic was published \cite{Faloutsos1994}. Faloutsos et al. introduced the GEMINI framework, an application model for subsequence matching, which we can summarize in four steps that the application has to perform:
\begin{itemize}
\item slicing the time series into smaller subsequences (originally using a sliding window for the indexed data and a disjoint window for the query),
\item mapping each subsequence into a lower-dimensional space; we can call this step segmentation (originally using the Fast Fourier Transform),
\item indexing the reduced subsequences in a multi-dimensional indexing structure (originally an R-tree),
\item performing the search with a distance function that obeys the lower bounding lemma (originally the Euclidean Distance).
\end{itemize}
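The slicing step can be sketched as follows; the names are illustrative, not the framework's actual API. Sliding windows take every offset of the indexed data, while disjoint windows partition the query without overlap.

```java
// Sketch of the GEMINI slicing step (illustrative, not the SMF API):
// sliding windows for the indexed data, disjoint windows for the query.
import java.util.ArrayList;
import java.util.List;

public class Slicing {
    // Every offset: a sequence of length n yields n - w + 1 windows.
    static List<double[]> slide(double[] s, int w) {
        List<double[]> out = new ArrayList<>();
        for (int i = 0; i + w <= s.length; i++)
            out.add(java.util.Arrays.copyOfRange(s, i, i + w));
        return out;
    }

    // Step by w: non-overlapping windows, floor(n / w) of them.
    static List<double[]> disjoint(double[] s, int w) {
        List<double[]> out = new ArrayList<>();
        for (int i = 0; i + w <= s.length; i += w)
            out.add(java.util.Arrays.copyOfRange(s, i, i + w));
        return out;
    }

    public static void main(String[] args) {
        double[] s = {1, 2, 3, 4, 5, 6};
        System.out.println(slide(s, 3).size());    // 4 overlapping windows
        System.out.println(disjoint(s, 3).size()); // 2 disjoint windows
    }
}
```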
Despite the fact that the application model is almost two decades old, it is still vital and suits many cases where the subsequence matching procedure has to be employed.
Works that followed the paper by Faloutsos et al. usually enhanced only a part of the problem, mainly by introducing new methods for dimensionality reduction, new distance functions, or variations of the indexing and slicing strategies for the subsequence pieces. We have mentioned the most significant ones in the previous sections. These new approaches improved the performance of the GEMINI framework, but many of them were tested only on constrained, and often artificially created, datasets. It has been shown in \cite{Dinga} that the comparison of these methods is not as clear as some of the authors claimed, so the results cannot be taken as ground truth. It was argued \cite{Keogh2002} that the experimental results were obtained in various environments, with various data sets and various implementations. This problem has been called \textit{implementation and data bias}.\par
The need for a unified testing dataset was recognized and fulfilled by the authors of \cite{Dinga}; a collection of testing datasets has been maintained since then, and authors of new papers are asked to run their tests on this collection to bypass at least the \textit{data bias} mentioned above. The \textit{implementation bias}, on the other hand, often remains.\par
We would like to address this problem and present a versatile subsequence matching framework usable for rapid prototyping and testing of current and, above all, new approaches, and serving as a trusted platform for performing these tests. By framework, we mean a comprehensive implementation framework for quick but efficient realization of a wide range of subsequence matching applications and approaches. It contains a library of subcomponents, a way to combine them, and an option to create new ones.\par
We are aware that this goal is not an easy one to achieve, because a balance between the boundaries of the framework and its real usability and extensibility must be found so that developers and scientists will adopt it. Nevertheless, we believe that we can deliver a framework that is helpful to the community. We can build on our experience with the MESSIF framework \cite{Batko2007}, part of the MUFIN project \cite{Batko2009e}, which allows fast prototyping and testing of general similarity search applications on literally any data domain. Until now, from this paper's point of view, these technologies supported only whole matching of vectors. We took these technologies, proven by time and many projects, and enriched them with the ability of subsequence matching.\par
To prove that our solution is usable in many application areas, we have decided to demonstrate it on applications related to speech recognition and analysis, a field that covers many problems and many approaches to solving them, and where it may not be clear at first sight that the related applications can be solved or enhanced by state-of-the-art subsequence matching solutions. Later in the text, we discuss a demo application that combines state-of-the-art technologies from both the world of speech analysis and the world of subsequence matching.



\subsection{Idea}
Our Subsequence Matching Framework relies on the MESSIF library, which is designed to help implement similarity search applications in general. We provide several basic classes and an easily understandable architecture for rapid development and testing of subsequence similarity search applications. We have tried to identify the common sub-problems of the subsequence matching process; some were mentioned in the rough description of the GEMINI framework, we added new ones, and we kept the architecture open for implementing brand new approaches. To achieve this, we built the framework in a modular way. You can imagine \textit{modules} dealing with:
\begin{itemize}
\item normalization of the time-series/sequence data,
\item transformation of data sequences,
\item storage indexes for subsequences,
\item slicing of the sequences into windows,
\item distance function implementations,
\item lower bounding techniques.
\end{itemize}
The concept of \textit{modules} (Fig.~\ref{fig:SMF_modules}) supports good re-usability of the code and helps to overcome the \textit{implementation bias}: once a module solving some part of the problem is implemented, it can be reused by others.
The architecture allows implementing subroutines with functionality independent of the specific data type. As Fig.~\ref{fig:SMF_modules} shows, another substantial part of the framework is the definition of algorithms. The algorithms are responsible for the logic of the subsequence matching, the indexing, and the execution of the queries. Each algorithm declares its parameters; you can imagine them as slots into which the modules are plugged. In the example, the Simple algorithm works with two slicing modules, one transformation module, one distance function that obeys the lower bounding lemma, and two indexes -- one for the original sequences and one for the subsequences. Under the definition of the algorithm we can see two instances that differ in the modules they use. This is possible because the implementation of the modules is independent of the data types (to some level of abstraction) and the algorithm itself uses only the interface common to the modules of the same class. These features allow us to quickly prototype and alter the algorithm by changing the modules that are used. Once the mechanism of the algorithm is implemented against such interfaces, declared as the parameters of the algorithm, and a number of modules implementing these interfaces are available, we can easily evaluate many combinations and gather the results.
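The slot mechanism can be sketched as follows. The interfaces and class names here are our own simplified illustration, not the actual MESSIF/SMF API; the point is that the algorithm's logic touches only the slot interfaces, so the plugged-in modules can be exchanged without recompiling the algorithm.

```java
// Illustrative sketch (our own names, not the actual MESSIF/SMF API) of an
// algorithm declaring module "slots" filled with pluggable components.
import java.util.ArrayList;
import java.util.List;

interface SlicingModule { List<double[]> slice(double[] seq, int window); }
interface TransformModule { double[] transform(double[] window); }
interface DistanceModule { double distance(double[] a, double[] b); }

class SimpleAlgorithm {
    private final SlicingModule dataSlicer;   // e.g. sliding window
    private final SlicingModule querySlicer;  // e.g. disjoint window
    private final TransformModule transform;  // e.g. dimensionality reduction
    private final DistanceModule distance;    // must obey the lower bounding lemma

    SimpleAlgorithm(SlicingModule ds, SlicingModule qs, TransformModule t, DistanceModule d) {
        this.dataSlicer = ds; this.querySlicer = qs; this.transform = t; this.distance = d;
    }

    // Naive sequential matching; the logic uses only the slot interfaces.
    double bestMatch(double[] data, double[] query, int w) {
        double best = Double.POSITIVE_INFINITY;
        for (double[] qw : querySlicer.slice(query, w))
            for (double[] dw : dataSlicer.slice(data, w))
                best = Math.min(best,
                        distance.distance(transform.transform(dw), transform.transform(qw)));
        return best;
    }
}

public class SlotsDemo {
    static double demo() {
        SlicingModule sliding = (s, w) -> {
            List<double[]> out = new ArrayList<>();
            for (int i = 0; i + w <= s.length; i++)
                out.add(java.util.Arrays.copyOfRange(s, i, i + w));
            return out;
        };
        SlicingModule disjoint = (s, w) -> {
            List<double[]> out = new ArrayList<>();
            for (int i = 0; i + w <= s.length; i += w)
                out.add(java.util.Arrays.copyOfRange(s, i, i + w));
            return out;
        };
        TransformModule identity = w -> w; // a real module would reduce dimensionality
        DistanceModule euclid = (a, b) -> {
            double s = 0;
            for (int i = 0; i < a.length; i++) { double d = a[i] - b[i]; s += d * d; }
            return Math.sqrt(s);
        };
        SimpleAlgorithm alg = new SimpleAlgorithm(sliding, disjoint, identity, euclid);
        double[] data = {5, 1, 2, 3, 9};
        double[] query = {1, 2, 3};
        return alg.bestMatch(data, query, 3);
    }

    public static void main(String[] args) {
        System.out.println(demo()); // exact occurrence of the query -> 0.0
    }
}
```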
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.45]{SMF_modules.pdf}
\caption{SMF principles.}
\label{fig:SMF_modules}
\end{figure}

Another useful feature of the framework is that altering these combinations does not require recompiling the code. Using the MESSIF \textit{batch files}, one can instantiate a new algorithm with new parameters/modules (also instantiated through the \textit{batch file}) and script a whole series of tests in a comfortable and fast way.\par The binding with the MESSIF framework also brings support for various similarity queries, including classical range queries, kNN queries, etc.\par
We can also leverage many state-of-the-art metric indexes previously developed for MESSIF applications, such as the M-index \cite{Novak2009}.

	
\subsection{Implementation}
As mentioned above, the subsequence matching framework extends the MESSIF platform and is part of the MUFIN project. The whole MUFIN ecosystem, including the SMF, is developed in Java and works with related technologies such as Remote Method Invocation (RMI) and Java Server Pages (JSP), and it uses recent Java features like reflection. The Subsequence Matching Framework layer extends the MESSIF core layer, and applications written with the aid of the framework can use the MESSIF-UI module for creating web application demos. MESSIF offers a lot of functionality for developing any similarity search application, such as indexing and querying in a distributed environment, constructing a multi-layered indexing network, or composing combined queries.\par
So far we have developed a basic skeleton of the subsequence matching framework, and we are now proving the concept by implementing various subsequence matching applications with different data domains and different demands. This should help us to find the shortcomings of the framework and make the architecture better and more usable for different purposes. The most recent application domain that has drawn our attention is speech analysis and, more generally, audio retrieval.

\section{Use Cases}
\label{sec:usecases}
\begin{itemize}
\item general time series,
\item gait recognition,
\item query-by-example spoken term detection.
\end{itemize}

\section{Conclusions and Future Work}
\label{sec:conclusions}

\bibliographystyle{abbrv}
\bibliography{library}

\end{document}
