\documentclass[a4paper,11pt]{amsart}
\setlength{\parskip}{10pt plus 1pt minus 1pt}
\usepackage[utf8]{inputenc}
\usepackage[UKenglish]{babel}
\usepackage{tikz}
\usepackage{caption}
\usepackage{array}
\usepackage{multirow}
\usepackage{booktabs}
\usepackage{url}
\usepackage{algorithmic, algorithm}
\usepackage{float}
\usepackage{amsmath}

\makeindex

\newcommand{\term}[1] {\textbf{#1}\index{#1}}

\usetikzlibrary{arrows}
\usetikzlibrary{shapes.arrows}
\usetikzlibrary{shadows}
\usetikzlibrary{positioning}
\title{Automated Multilingual Alignment of Discontinuous Sequences}
\author{Alexander Kislev \\ SfS \\ University of Tübingen}
\date{\today}

\begin{document}

\theoremstyle{definition}
\newtheorem{definition}{Definition}[section]

\floatname{algorithm}{Procedure}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}

\maketitle

\begin{center}
	\begin{tabular}{ll}
		\emph{First Supervisor:} & Prof. Dr. Erhard Hinrichs \\
		\emph{Second Supervisor:}  & Dr. Dale Gerdemann \\
	\end{tabular}
\end{center}

\begin{abstract}

Phrase models form the basis of the best statistical machine translation systems currently available (\cite{SMT:Koehn2010}). These systems construct a translation using sequences of words as atomic units. They therefore require sequence alignments, i.e. translations of short word sequences, which are collected from parallel corpora using a variety of approaches. Most approaches rely on word alignments to generate multiword alignments; some learn sequence alignments without the use of word alignments. Among the latter is a suffix array based system proposed by \cite{Mcnamee:2006}. Suffix array based algorithms are widely used in computational biology to analyse patterns in very long genome sequences. It is only natural that these techniques gradually find their way into computational linguistics, which likewise attempts to reveal structure in large sequences of natural language text. Following the research presented by \cite{Mcnamee:2006} and \cite{Gerdemann:2010}, we introduce an alignment approach using a new suffix array based method for the discovery of repeated discontinuous subsequences. It allows finding alignments not only for continuous subsequences, e.g. recurring phrases, but also for discontinuous subsequences, e.g. multi-part phrases, separable verbs etc.

\end{abstract}

\clearpage

I hereby declare that I have produced the present work independently and only with the sources and aids indicated, including the WWW and other electronic sources. All passages of this work that I have taken from other works, either verbatim or in substance, are marked as such.


\begin{tabular}{ll}
    Place, date: & Tübingen, \the\day.\the\month.\the\year \\
    \\
    Signature: & \hrulefill \\
\end{tabular}

\clearpage

\tableofcontents

\clearpage

\section*{Introduction}

In the rapidly growing field of machine translation, purely word alignment based models have proved inadequate for the task. Indeed, many word sequences, such as idioms and other collocations, cannot be translated merely by composing translations of individual words. Various extensions to word alignment systems have been proposed, resulting in improved translations. However, dealing with longer sequences also brings new problems, such as growth in both memory requirements and computational complexity. The suffix array based system proposed by \cite{Mcnamee:2006} provides an interesting solution to these problems. Since suffix arrays are best suited for the discovery of repeated sequences, this system does not need word alignments and is quite memory-efficient. The method proposed in the current paper develops this idea further by performing discontinuous sequence alignments, following the research by \cite{Gerdemann:2010}. Discontinuous sequences arise from syntactic phenomena, such as separable verbs in German, and from multi-part collocations.

The structure of this paper is as follows. In the first part, the notion of a repeat, as used in computational biology, is defined (Chapter 1), followed by an overview of current suffix array based techniques for the discovery of repeats (Chapter 2). We then extend the notion of a repeat to its general case by introducing the notion of a discontinuous repeat (Chapter 3), along with a method for discontinuous repeat discovery based on a new data structure called the embedded suffix tree (Chapter 4). This method allows for recursive embedding of discontinuous repeats and thus for the discovery of discontinuous repeats consisting of an unbounded number of sequences. In the second part, a vector space model based approach to modelling meaning, commonly used in the field of information retrieval, is described (Chapter 6), and a method for the alignment of discontinuous sequences based on this approach is introduced (Chapter 7).

In addition, the system proposed in the current paper was implemented in Java. The implementation is available online as an open-source project. It could serve not only as a proof of concept, but also as a foundation for future research. A sample of actual results obtained by using this implementation is presented towards the end of the paper along with suggestions for possible improvements.

The utilization of techniques from computational biology in natural language processing opens up a number of exciting possibilities. Among others, the efficient pattern matching and discovery techniques are well suited for various applications in computational linguistics. In this paper some of these techniques are described in a detailed and intuitive way. The description is accompanied by numerous illustrations and examples for every topic discussed. It is our hope that the current paper will introduce more people to the possibilities opened up by these techniques.

\clearpage
\part{Repeats}

\section{Continuous Repeats}

\subsection{Sequences}

In this chapter, the basic notation for sequences, as used in computational biology, is introduced; it will be used throughout the subsequent chapters. We start by defining a sequence. A \term{sequence} is a succession of symbols in which the position of each symbol is fixed. Each \term{position} has an \term{index} which uniquely identifies the position of a single symbol in the sequence. Sequences are very common: a written word in English is a sequence of letters, a sentence is a sequence of words etc. In the examples throughout this paper we shall be looking at sequences of characters, unless stated otherwise. Capital letters will be used to refer to sequences, whereas their denotation will be written in the typewriter text style. Consider, for example, the sequence

\begin{equation*}
S=\texttt{mining␣engineering\#}
\end{equation*}

\noindent
shown below. It will be used in examples throughout the first part of this paper.

\medskip
\input{sequence.tex}
\medskip

Each position in this sequence is identified by its index, written as a subscript. Note that we added a special end marker symbol \texttt{\#} to the sequence. For convenience, this marker symbol is lexicographically greater than any other symbol of an alphabet. Also note that we start counting indexes from $0$ to bring the examples closer to the array data structure commonly used in actual implementations.

\begin{definition}[Sequence]
A sequence $S$ of length $n$ is defined as a function $I \mapsto \Sigma$, where $I=\{i \in\mathbb{N}\cup \{0\}: 0 \leq i < n\}$ and $\Sigma$ is a well-ordered set of symbols known as an alphabet. The well-order relation used to order the alphabet is called the lexicographical order.
\end{definition}

A \term{symbol} is an element of an alphabet which forms an atomic unit of a sequence. We denote the symbol at index $i$ in a sequence $S$ by $S[i]$. For example, the $4$th symbol of the sequence $S$, denoted by $S[4]$, is highlighted below.

\medskip
\input{symbol.tex}
\medskip

\begin{definition}[$i$th Symbol in a Sequence]
An $i$th symbol in a sequence $S$, denoted by $S[i]$, is the value of the function $S$ at the index $i$.
\end{definition}

A \term{subsequence} is a section of a sequence bounded by its starting and ending index. We use $S[i..j]$ to denote a subsequence of $S$ starting at the index $i$ and ending at the index $j$, including $j$. For example, the subsequence

\begin{equation*}
S[3..5]=\texttt{ing}
\end{equation*}

\noindent
of the sequence $S$ is highlighted below.

\medskip
\input{subsequence.tex}
\medskip

\begin{definition}[Subsequence]
A subsequence $S[j..k]$ of a sequence $S$ of length $n$ is defined as a sequence $Q$ of length $k-j+1$, such that $Q[i]=S[j+i]$, where $i \in\mathbb{N}\cup \{0\}, 0 \leq i \leq k-j, k < n$.
\end{definition}

Note that sometimes it might come in handy to think of a sequence $S$ of length $n$ as a subsequence $S[0..n-1]$, so one might think of the former simply as a shorthand for the latter. In order to be able to compare sequences we define an \term{equivalence relation} for sequences. This equivalence relation partitions the set of all subsequences of a given sequence into equivalence classes comprised of subsequences having the same succession of symbols and the same length, but starting at different indexes. For example, two equivalent subsequences $S[3..5] \equiv_s S[15..17]$ of the sequence $S$ are highlighted below.

\medskip
\input{equivalence.tex}
\medskip

\begin{definition}[Sequence Equivalence Relation]
Subsequences $Q$ and $R$ of a sequence $S$, both of length $n$, are said to be in an equivalence relation, denoted by $Q \equiv_s R$, if $Q[i] = R[i]$ for all $0\leq i< n$. 
\end{definition}

We shall use the square brackets to refer to the sequence equivalence class. In the example above, where

\begin{equation*}
S[3..5] \equiv_s S[15..17]
\end{equation*}

\noindent
we can say

\begin{equation*}
[S[3..5]] = [S[15..17]]
\end{equation*}

\noindent
i.e. the equivalence class of $[S[3..5]]$ is exactly the same set as the equivalence class of $[S[15..17]]$. To refer to the subsequence represented by this equivalence class in the examples we shall use:

\begin{equation*}
[S[3..5]] = [\texttt{ing}]
\end{equation*}

Two kinds of subsequences will be of particular interest to us, namely those whose beginning or end coincides with the beginning or end of the sequence. These are called \term{prefix} and \term{suffix} respectively. For example, the third prefix

\begin{equation*}
S[0..3] =\texttt{mini}
\end{equation*}

\noindent
of the sequence $S$ is highlighted below.

\medskip
\input{prefix.tex}
\medskip

\begin{definition}[Prefix]
The $k$'th prefix of a sequence $S$ of length $n$ is a subsequence $S[0..k]$ such that $0 \leq k < n$. 
\end{definition}

The 15th suffix $S[15..18]=\texttt{ing\#}$ of the sequence $S$ is highlighted below.
\input{suffix.tex}

\begin{definition}[Suffix]
The $k$'th suffix of a sequence $S$ of length $n$ is a subsequence $S[k..n-1]$ such that $0 \leq k < n$. 
\end{definition}
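Since indexing starts at $0$, the interval notation above maps directly onto array slicing in most programming languages. The following sketch (given in Python for brevity, although the implementation accompanying this paper is written in Java; the function names are our own) expresses subsequences, prefixes and suffixes of the example sequence as slices:

```python
# Sketch: the interval notation S[i..j] (both ends inclusive)
# expressed as Python slicing, which excludes the right endpoint.
S = "mining engineering#"

def subsequence(S, i, j):
    """The subsequence S[i..j], endpoints inclusive."""
    return S[i:j + 1]

def prefix(S, k):
    """The k'th prefix S[0..k]."""
    return subsequence(S, 0, k)

def suffix(S, k):
    """The k'th suffix S[k..n-1]."""
    return subsequence(S, k, len(S) - 1)
```

For instance, subsequence(S, 3, 5) yields \texttt{ing} and suffix(S, 15) yields \texttt{ing\#}, matching the examples above.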

\subsection{Continuous Repeats}

In this section, the notion of a \term{repeat}, as it is used in computational biology, will be presented. We begin by introducing the fundamental definition of a repeat. Intuitively, a repeat is a subsequence which occurs more than once in a given sequence. For example, the subsequences $S[8..9]$, $S[16..17]$ and $S[4..5]$ of the sequence $S$ form a repeat, since they are all in the same equivalence class. These are highlighted below.
\input{repeat.tex}

\begin{definition}[Repeat]
A sequence equivalence class $[R]$ is called a repeat if and only if its cardinality is greater than one.
\end{definition}

We shall call the number of subsequences in an equivalence class the \term{number of occurrences of a repeat}. For instance, in the previous example there are three occurrences of the repeat $[\texttt{ng}]$.

\begin{definition}[Number of Occurrences of a Repeat]
The cardinality of an equivalence class $[R]$, where $[R]$ is a repeat, is called the number of occurrences of the repeat $[R]$.
\end{definition}

We shall refer to the length of any sequence in a repeat as the \term{length of a repeat}. In the previous example the length of the repeat equals two.

\begin{definition}[Length of a Repeat]
For a repeat $[R]$, the length of the sequence $R$ is called the length of the repeat.
\end{definition}

Note that the definition of a repeat above is very generic; it does not in any way limit the length of a repeat or its other qualities. In practice it is useful to know whether a given repeat is subsumed by (i.e. is a subsequence of) some other repeat. Consider the following repeat

\begin{equation*}
[R]=\{S[1..1], S[3..3], S[10..10], S[15..15]\}
\end{equation*}

\noindent
shown in the sequence $S$ below.

\medskip
\input{nonrmaxrepeat.tex}
\medskip

\noindent
Although this is a valid repeat, we could extend each subsequence by one to the right and thus obtain a longer repeat

\begin{equation*}
[Q]=\{S[1..2], S[3..4], S[10..11], S[15..16]\}
\end{equation*}

\noindent
with the same number of occurrences (shown below). Thus we say that the first repeat is not \term{right maximal}. 

\medskip 
\input{rmaxrepeat.tex}
\medskip

Looking at the new repeat we observe that if it were extended by one to the right again, we would no longer have a repeat. Thus the repeat we have obtained is right maximal.

\begin{definition}[Right Maximality]
A repeat $[S[i..j]]$ is called right maximal if and only if the cardinality of the sequence equivalence class $[S[i..j+1]]$ is lower than that of $[S[i..j]]$.
\end{definition}

\noindent
Using the same logic in the reverse direction we get a definition of \term{left maximality}.

\begin{definition}[Left Maximality]
A repeat $[S[i..j]]$ is called left maximal if and only if the cardinality of the sequence equivalence class $[S[i-1..j]]$ is lower than that of $[S[i..j]]$.
\label{def:left-max}
\end{definition}

\noindent
The repeat in the previous example is both right and left maximal, as it cannot be extended either way without losing occurrences. Such a repeat is called \term{maximal}.

\begin{definition}[Maximality]
A repeat which is both left maximal and right maximal is called maximal.
\end{definition}

The idea is that if we can extend a repeat by one symbol in either direction and the result is a repeat with the same number of occurrences, then the original repeat is not maximal. In general, a maximal repeat cannot be extended in length without losing occurrences.
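The maximality conditions can also be checked operationally: collect the occurrences of a repeat, extend it by one symbol, and compare the counts. The following naive quadratic-time sketch (our own, for illustration only; the efficient suffix array based discovery of repeats is the subject of the next chapter) does exactly that:

```python
# Naive sketch of the maximality checks: a repeat is maximal iff
# extending it by one symbol in either direction loses occurrences.
def occurrences(S, i, j):
    """Start indices of all occurrences of the subsequence S[i..j]."""
    pat = S[i:j + 1]
    return [k for k in range(len(S) - len(pat) + 1)
            if S[k:k + len(pat)] == pat]

def is_right_maximal(S, i, j):
    # the unique end marker '#' guarantees that a genuine repeat
    # never reaches the last position, so j + 1 stays in bounds
    return len(occurrences(S, i, j + 1)) < len(occurrences(S, i, j))

def is_left_maximal(S, i, j):
    if i == 0:
        return True  # the repeat cannot be extended past the start
    return len(occurrences(S, i - 1, j)) < len(occurrences(S, i, j))

def is_maximal(S, i, j):
    return is_left_maximal(S, i, j) and is_right_maximal(S, i, j)
```

In the example sequence, the repeat $[\texttt{i}]$ from above fails the right-maximality test, while $[\texttt{in}]$ passes both tests and is therefore maximal.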

A supermaximal repeat is a repeat that is not a subsequence of any other repeat.

\begin{definition}[Supermaximality]
A repeat $[S[i..j]]$ is called \term{supermaximal} if and only if both sequence equivalence classes $[S[i-1..j]]$ and $[S[i..j+1]]$ are no longer repeats.
\end{definition}

\noindent
The supermaximal repeat

\begin{equation*}
[Q]=\{S[3..5], S[15..17]\}
\end{equation*}

\noindent
is shown below.

\medskip 
\input{smaxrepeat.tex}
\medskip

\clearpage

\section{Discovering Continuous Repeats}
In this chapter we shall consider a method for finding all the repeats in a given sequence in time proportional to its length. We shall present the basic data structures for this method which were first introduced in \cite{DBLP:journals/siamcomp/ManberM93}, along with their recent enhancements proposed by \cite{DBLP:journals/jda/AbouelhodaKO04} and \cite{DBLP:journals/algorithmica/KimKP08}.

Using our example sequence we start by enumerating all its suffixes. These are shown in Figure \ref{fig:suffixes}. Note that each symbol has its index written in subscript and that the index of the first symbol is shown at the beginning of each row in red. The index of the first symbol matches the index of the corresponding suffix, that is the $n$'th suffix starts with the $n$'th symbol as follows from the definition in the previous chapter. The first column in fact identifies all the suffixes, and in practice there is no real need to list the suffixes explicitly as we have done here.

\input{suffixes.tex}

\subsection{Suffix Table}

Now that we have all the suffixes of the sequence, we sort them in lexicographical order, as shown in Figure \ref{fig:suftab}. It has the same structure as Figure \ref{fig:suffixes}, apart from the fact that the suffixes are sorted. As we have already established, the first column, i.e. the column of suffix indexes, contains all the information needed to identify the suffixes. We shall call this column a \term{suffix table} or, in short, \term{suftab}. The $i$'th number in a suffix table will be referred to as $suftab[i]$; as usual, we start counting from $0$. For example, $suftab[7] = 3$; it denotes the suffix $S[3..18]$.

\input{suftab.tex}

\begin{definition}[Suffix Table]
A suffix table, called in short $suftab$, is an array of length $n$ containing the suffix indices of the lexicographically sorted suffixes of $S$.
\end{definition}


For typical comparison-based sorting algorithms, a good behaviour is $\mathcal{O}\left( n \log n\right)$ \cite{DBLP:books/daglib/0020103}. It has been shown, however, that if the special properties of suffixes are exploited, a complexity of $\mathcal{O}\left( n \right)$ can be achieved for sorting suffixes. One such algorithm is the \term{skew algorithm}, introduced by \cite{Karkkainen:2006}.
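As an illustration, a suffix table can be built by plain sorting, as in the following Python sketch (our own; only the marker \texttt{\#} is ranked specially here, so the ordering of the space symbol may differ from the figures). This naive construction mirrors the definition directly but is far from the linear-time behaviour of the skew algorithm:

```python
# Sketch: naive suffix table construction by sorting suffix indices.
# The end marker '#' is ranked above every other symbol, following
# the convention introduced in Chapter 1.
def symbol_rank(c):
    return float("inf") if c == "#" else ord(c)

def build_suftab(S):
    return sorted(range(len(S)),
                  key=lambda i: [symbol_rank(c) for c in S[i:]])
```

For the short sequence \texttt{banana\#}, for example, this yields the suffix table $[1, 3, 5, 0, 2, 4, 6]$; the suffix \texttt{\#} always sorts last.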

\subsection{Longest Common Prefix Table}

In the $suftab$, suffixes starting with the same subsequences are clustered neatly together. Thus repeats could already be found by locating the first and the last suffix starting with a given subsequence using binary search. In order to make repeats explicit, we introduce an auxiliary table in which each position contains the length of the longest common prefix of the suffix at the current position with its predecessor. Such a table is called a \term{longest common prefix table} or, in short, \term{lcptab}. Figure \ref{fig:lcptab} shows the $lcptab$ based on the $suftab$ in Figure \ref{fig:suftab}. The longest common prefix is shown in colour for each position, and the lengths of these prefixes are shown in green in the first column. This column actually constitutes the $lcptab$. For example, since the suffixes at $suftab[7]$ and $suftab[8]$ both start with the subsequence $[\texttt{ing}]$ of length 3, $lcptab[8] = 3$.

\input{lcptab.tex}

\begin{definition}[Longest Common Prefix Table]
The longest common prefix table of a sequence $S$, called in short $lcptab$, is an array of length $n$, where each element $lcptab[i]$ indicates the length of the longest common prefix of the suffixes denoted by $suftab[i-1]$ and $suftab[i]$. Since the $0$th position in the $suftab$ has no predecessor, its $lcptab$ value is set to $-1$, i.e. $lcptab[0]=-1$. The last value in the $lcptab$ is always $0$, as \texttt{\#} is a unique marker symbol which is lexicographically greater than any other symbol.
\end{definition}

A linear-time algorithm to compute the longest common prefix table for a suffix table is proposed in \cite{DBLP:conf/cpm/KasaiLAAP01}.
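The definition translates directly into a pairwise comparison of neighbouring suffixes, as in the following Python sketch (our own; it is quadratic in the worst case, unlike the linear-time algorithm cited above):

```python
# Sketch: naive lcp table construction by comparing each suffix with
# its predecessor in the suffix table; lcptab[0] = -1 by convention.
def build_lcptab(S, suftab):
    n = len(S)
    lcptab = [-1] * n
    for i in range(1, n):
        a, b, l = suftab[i - 1], suftab[i], 0
        while a + l < n and b + l < n and S[a + l] == S[b + l]:
            l += 1
        lcptab[i] = l
    return lcptab
```

For \texttt{banana\#} with the suffix table $[1,3,5,0,2,4,6]$, this gives the table $[-1,3,1,0,0,2,0]$.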

\subsection{\emph{lcp-interval}}

As already mentioned, the longest common prefix table allows us to easily see the length of the common prefix of two neighbouring suffixes, i.e. the length of a repeat with two occurrences. But if several consecutive suffixes share a common prefix of a particular length, then we can talk about a repeat with a greater number of occurrences. For example, consider $lcptab[6]=0$, $lcptab[7]=2$, $lcptab[8]=3$ and $lcptab[9]=2$. Observe that the corresponding suffixes share a common prefix of size 2, but the same could also be concluded without actually looking at the suffixes. By the definition of the $lcptab$, $suftab[7]$ and $suftab[6]$ share a common prefix of size 2, $suftab[8]$ and $suftab[7]$ share a common prefix of size 3, and $suftab[9]$ and $suftab[8]$ share a common prefix of size 2. Since sharing a common prefix is a transitive relation, we can say that all of these suffixes share a common prefix of size 2. We call such a section of an $lcptab$ an \term{lcp-interval}, and the length of its commonly shared prefix is the \term{lcp value} of the lcp-interval. The precise definition of an lcp-interval is:

\begin{definition}[\emph{lcp-interval}]\label{def:lcpi}

An interval $lcptab[i..j]$ is called an lcp-interval $[i..j]$ with an lcp value $lcp$ if:
\begin{itemize}
    \item $lcptab[i] < lcp$,
    \item $lcptab[k] \ge lcp$ for all $k$ such that $i + 1 \le k \le j$,
    \item $lcptab[k] = lcp$ for at least one $k$ such that $i + 1 \le k \le j$, and
    \item $lcptab[j + 1] < lcp$.
\end{itemize}
\end{definition}

We shall denote the lcp value of an lcp-interval $[i..j]$ as $lcp([i..j])$. Each interval corresponds to a repeat. In order to know how many occurrences of a repeat there are, we define \term{lcp-interval length}.

\begin{definition}[\emph{lcp-interval} length]
The length of an lcp-interval $[i..j]$ is defined as the number of indices in the lcp-interval, i.e., $j-i+1$.
\end{definition}

In our example the lcp-interval $[6..9]$ has an lcp value of 2 and a length of 4. Inside it there is another lcp-interval, $[7..8]$, with an lcp value of 3 and a length of 2. Since intervals correspond to repeats, we are interested in what subsequences they actually represent. We shall call these subsequences the \term{labels of lcp-intervals}. Note that lcp-intervals already produce right maximal repeats.

\begin{definition}[\emph{lcp-interval} label]
For an lcp-interval $[i..j]$ the subsequence

\begin{equation*}
S[suftab[i]..suftab[i]+lcp([i..j])-1]
\end{equation*}

is called the label of the lcp-interval.
\end{definition}
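Definition \ref{def:lcpi} can be checked mechanically in a single left-to-right pass over the $lcptab$ with a stack: an interval is opened whenever the lcp value rises and reported whenever it falls. The following Python sketch (our own) enumerates all lcp-intervals as triples $(lcp, i, j)$:

```python
# Sketch: enumerating all lcp-intervals of an lcp table in one
# left-to-right pass. The stack holds (lcp value, left boundary)
# pairs of intervals that are still open.
def lcp_intervals(lcptab):
    n = len(lcptab)
    intervals = []
    stack = [(0, 0)]
    for i in range(1, n):
        lb = i - 1
        while lcptab[i] < stack[-1][0]:
            lcp, lb = stack.pop()          # interval ends at i - 1
            intervals.append((lcp, lb, i - 1))
        if lcptab[i] > stack[-1][0]:
            stack.append((lcptab[i], lb))
    while stack:                           # close intervals reaching the end
        lcp, lb = stack.pop()
        intervals.append((lcp, lb, n - 1))
    return intervals
```

For \texttt{banana\#}, whose $lcptab$ is $[-1,3,1,0,0,2,0]$, this reports the intervals $[0..1]$ (lcp value 3), $[0..2]$ (lcp value 1), $[4..5]$ (lcp value 2) and the root $[0..6]$ (lcp value 0).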

\subsection{Suffix Array}

A structure containing a suffix table and a longest common prefix table is called a \term{suffix array}. It was first introduced in \cite{DBLP:journals/siamcomp/ManberM93}. The properties of a suffix array can be found in Table \ref{tab:sa}. The suffix array for the sequence $S$ is shown in Figure \ref{fig:sa}.

\input{sa.tex}

\begin{definition}[Suffix Array]
A suffix array for a sequence $S$ consists of two arrays of size $n$: the suffix table ($suftab$) and the longest common prefix table ($lcptab$) for $S$.
\end{definition}

\begin{table}[H]
\begin{center}
	\begin{tabular}{lllc}
        \toprule
		\multirow{2}{*}{Applications:} & \multicolumn{3}{l}{Indexing} \\
                      & \multicolumn{3}{l}{Burrows--Wheeler transform} \\
        \midrule
        \midrule
        \multirow{3}{*}{Data Structures:} & Type & Name & Size \\
        \cmidrule(r){2-4}
        & \textit{Array} & \textit{Suffix Table} ($suftab$) & $n$ \\
	    & \textit{Array} & \textit{Longest Common Prefix Table} ($lcptab$) & $n$ \\
    \bottomrule
	\end{tabular}
	\captionof{table}{Suffix array properties.}
	\label{tab:sa}
\end{center}
\end{table}

\subsection{\emph{lcp-interval} Tree}

Intuitively, an lcp-interval represents all suffixes sharing a common prefix of a given length. Since all the suffixes sharing a common prefix are always adjacent in the suffix array, we get a partition of the entire suffix array into intervals, and those in turn are partitioned into even smaller intervals. Eventually, we end up with intervals of length one, which no longer have partitions. This recursively partitioned structure is in fact a tree, which we refer to as an \term{lcp-interval tree}. Each node in an lcp-interval tree is labelled with the beginning and ending indices of the respective interval. For example, applying Definition \ref{def:lcpi} to the $lcptab$ from Figure \ref{fig:lcptab}, we see that the entire $lcptab$ is an lcp-interval, $[0..18]$, with an lcp value of $0$. This interval is in turn partitioned into:

\begin{equation*}
[0..2], [3..5], [6..9],[10],[11..15],[16],[17],[18].
\end{equation*}

We continue partitioning until we arrive at the intervals of length one, those would be the terminal nodes in an lcp-interval tree. The complete lcp-interval tree as well as the $suftab$ and the $lcptab$ of the underlying suffix array are shown in Figure \ref{fig:itree}.

\input{itree.tex}

\subsection{Using an \emph{lcp-interval} Tree to Discover Repeats}

An lcp-interval tree reduces the discovery of all the repeats in a sequence to an elementary tree traversal. The procedure is as follows: traverse the tree in any order; for each non-terminal interval $[i..j]$, the parameters of the corresponding repeat are:

\begin{itemize}
	\item lcp value $lcp([i..j])$ is the length of a corresponding repeat $[R]$,
	\item $j-i+1$ is the number of occurrences of the repeat $[R]$,
	\item $label([i..j])$ is a representative sequence from equivalence class $[R]$,
	\item complete sequence equivalence class $[R]$ is defined as
    \begin{equation*}
    	[R]=\{S[suftab[k]..suftab[k]+lcp([i..j])-1] : i \leq k \leq j\}.
    \end{equation*}
\end{itemize}

For example, the repeat $[Q]$ corresponding to the interval $[6..9]$ in Figure \ref{fig:itree} has the following parameters:

\begin{itemize}
	\item $length([Q]) = lcp([6..9]) = 2$,
	\item $\textit{number of occurrences}([Q]) = 9-6+1 = 4$,
	\item $[Q] = label([6..9]) = [\texttt{in}]$,
	\item $[Q]=\{S[10..11],S[3..4],S[15..16],S[1..2]\}$.
\end{itemize}

Thus we obtain all the information we need about the repeats in $S$. Note that so far there is no guarantee as to maximality of the repeats. The methods to address this issue will be proposed later. In what follows we deal with some further implementation issues of lcp-interval trees. 
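Putting the pieces together, the whole discovery procedure fits in a few lines. The following Python sketch (our own; the naive table constructions stand in for the linear-time algorithms) reports the label, the number of occurrences and the occurrence start indices for every lcp-interval with a positive lcp value:

```python
# Sketch of the complete repeat discovery procedure: build suftab and
# lcptab naively, enumerate lcp-intervals with a stack, and report the
# repeat parameters of each interval with a positive lcp value.
def repeats(S):
    n = len(S)
    rank = lambda c: float("inf") if c == "#" else ord(c)
    suftab = sorted(range(n), key=lambda i: [rank(c) for c in S[i:]])
    lcptab = [-1] * n
    for i in range(1, n):
        a, b, l = suftab[i - 1], suftab[i], 0
        while a + l < n and b + l < n and S[a + l] == S[b + l]:
            l += 1
        lcptab[i] = l
    result, stack = [], [(0, 0)]
    def report(lcp, i, j):
        if lcp > 0:  # intervals with lcp value 0 carry no repeat
            label = S[suftab[i]:suftab[i] + lcp]
            result.append((label, j - i + 1, sorted(suftab[i:j + 1])))
    for i in range(1, n):
        lb = i - 1
        while lcptab[i] < stack[-1][0]:
            lcp, lb = stack.pop()
            report(lcp, lb, i - 1)
        if lcptab[i] > stack[-1][0]:
            stack.append((lcptab[i], lb))
    while stack:
        lcp, lb = stack.pop()
        report(lcp, lb, n - 1)
    return result
```

For \texttt{banana\#} this reports the repeats $[\texttt{ana}]$ (2 occurrences, at indices 1 and 3), $[\texttt{a}]$ (3 occurrences) and $[\texttt{na}]$ (2 occurrences).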

\subsection{Suffix Tree}

The concept of the \term{suffix tree} first appeared, under a different name, in \cite{DBLP:conf/focs/Weiner73}; its construction algorithm was later greatly improved by \cite{McCreight:1976:SST:321941.321946}, and subsequently a linear time construction algorithm was proposed by \cite{ukkonen1995:suffix}. A suffix tree is a compact trie with exactly $n$ terminal nodes representing all the suffixes of a given sequence $S$. The $i$th terminal node corresponds to the suffix $S[i..n-1]$, while internal (non-terminal) nodes represent common prefixes. Each edge in a suffix tree is labelled with a non-empty subsequence of $S$ in such a way that the concatenation of all the labels from the root down to a terminal node yields the suffix represented by that terminal. The suffix tree structure can easily be extracted from the lcp-interval tree introduced earlier. Let us consider an internal or root node $\nu_{ij}$, representing an lcp-interval $[i..j]$, and its child node $\nu_{kl}$, representing a subordinate lcp-interval $[k..l]$. We know that the lcp-interval $[k..l]$ corresponds to the prefix

\begin{equation*}
S[suftab[k]..suftab[k]+lcp([k..l])-1].
\end{equation*}

The suffix tree edge between these two nodes will be labelled with the part of the prefix contributed by this interval, and not inherited from the parent interval, i.e. we adjust the beginning of a prefix by adding its parent lcp value to it. Thus we obtain the edge label as follows.

\begin{equation*}
S[suftab[k]+lcp([i..j])..suftab[k]+lcp([k..l])-1]
\end{equation*}

\noindent
To obtain the label for a terminal node, we use the same starting index, but we use the end of the sequence as the label's ending index, i.e.

\begin{equation*}
S[suftab[k]+lcp([i..j])..n-1]
\end{equation*}

\noindent
As we have just seen, it is possible to substitute an lcp-interval tree for a suffix tree. The suffix tree for the sequence $S$ is shown in Figure \ref{fig:stree}. Note the similarity to Figure \ref{fig:itree}, especially how the edge labels in Figure \ref{fig:stree} correspond to the intervals shown in the table of Figure \ref{fig:itree}. The properties of a suffix tree are summarized in Table \ref{tab:st}.
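The label formulas can be expressed compactly in code. In the following Python sketch (our own; an lcp-interval is represented as a triple $(lcp, i, j)$, and the parent--child relation is assumed to be known from the lcp-interval tree), the exclusive right endpoint of a Python slice absorbs the inclusive-notation bookkeeping:

```python
# Sketch: computing suffix tree edge and terminal labels from
# lcp-intervals. An interval is a triple (lcp value, i, j); suftab is
# the suffix table of S. Python slices exclude the right endpoint,
# so no explicit "-1" appears.
def edge_label(S, suftab, parent, child):
    """Label of the edge from the node of `parent` to that of `child`."""
    parent_lcp = parent[0]
    child_lcp, i, _ = child
    return S[suftab[i] + parent_lcp : suftab[i] + child_lcp]

def terminal_label(S, suftab, parent, k):
    """Label of the edge leading to the terminal node of the suffix at
    position k in the suffix table."""
    return S[suftab[k] + parent[0]:]
```

For \texttt{banana\#}, the edge from the node of the interval $(1,0,2)$, whose label is \texttt{a}, down to the node of $(3,0,1)$, whose label is \texttt{ana}, carries the label \texttt{na}.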

\input{stree.tex}

\begin{table}[H]
\begin{center}
	\begin{tabular}{lllc}
        \toprule
		\multirow{5}{*}{Applications:} & \multicolumn{3}{l}{Exact sequence matching (sequence search)} \\
        & \multicolumn{3}{l}{Finding a longest common subsequence}\\
        & \multicolumn{3}{l}{Finding maximal repeats} \\
        & \multicolumn{3}{l}{Finding supermaximal repeats} \\
        & \multicolumn{3}{l}{Burrows--Wheeler transform} \\
        \midrule
        \midrule
        \multirow{3}{*}{Data Structures:} & Type & Name & Number \\
        \cmidrule(r){2-4}
        & \textit{Tree Node} & \textit{Internal Nodes} (incl. root) & at most $n$ \\
	    & \textit{Tree Node} & \textit{Terminal Nodes} (leaf nodes) & exactly $n$ \\
    \bottomrule
	\end{tabular}
	\captionof{table}{Suffix tree properties.}
	\label{tab:st}
\end{center}
\end{table}

We have presented suffix trees to underline their conceptual proximity to lcp-interval trees, which we shall use extensively throughout this paper. Being an earlier concept, suffix trees have inspired many developments in suffix arrays and thus no overview of the field would be complete without them. For some time, the two data structures used to complement each other, but with recent developments, more and more applications of suffix trees are taken over by suffix arrays. For the purposes of repeat discovery an explicit construction of suffix trees has been shown by \cite{DBLP:journals/jda/AbouelhodaKO04} to be less efficient than the suffix array based approach presented in the next section.

\subsection{Enhanced Suffix Array}

In order to facilitate the replacement of suffix trees with suffix arrays, \cite{DBLP:journals/jda/AbouelhodaKO04} proposed the \term{enhanced suffix array}. They also suggested suffix array algorithms to address the common uses of suffix trees. One of the most common operations performed on a suffix tree is traversal. To facilitate lcp-interval tree traversal, an additional array of size $n$, the \term{child table}, or in short \term{cldtab}, was introduced. It stores the entire tree structure in a very efficient way, allowing for online reconstruction of a suffix tree during traversal. Since an enhanced suffix array does not store the tree structure explicitly, its use as a replacement for suffix trees offers significant improvements in memory efficiency. The properties of enhanced suffix arrays are summarized in Table \ref{tab:esa}. The enhanced suffix array for the sequence $S$ is shown in Figure \ref{fig:esa}.

\begin{definition}[Enhanced Suffix Array]
The enhanced suffix array for a sequence $S$ consists of three arrays of size $n$: the suffix table ($suftab$), the longest common prefix table ($lcptab$), and the child table ($cldtab$) for $S$.
\end{definition}

\input{esa.tex}

\begin{table}
\begin{center}
	\begin{tabular}{lllc}
        \toprule
		\multirow{5}{*}{Applications:} & \multicolumn{3}{l}{Exact sequence matching (sequence search)} \\
        & \multicolumn{3}{l}{Finding a longest common subsequence}\\
        & \multicolumn{3}{l}{Finding maximal repeats} \\
        & \multicolumn{3}{l}{Finding supermaximal repeats} \\
        & \multicolumn{3}{l}{Indexing} \\
        & \multicolumn{3}{l}{Burrows--Wheeler transform} \\
        \midrule
        \midrule
        \multirow{4}{*}{Data Structures:} & Type & Name & Size \\
        \cmidrule(r){2-4}
        & \textit{Array} & \textit{Suffix Table} ($suftab$) & $n$ \\
	    & \textit{Array} & \textit{Longest Common Prefix Table} ($lcptab$) & $n$ \\
        & \textit{Array} & \textit{Child Table} ($cldtab$) & $n$ \\
    \bottomrule
	\end{tabular}
	\captionof{table}{Enhanced suffix array properties.}
	\label{tab:esa}
\end{center}
\end{table}

\subsection{Child Table}
A child table stores interval relations of the lcp-interval tree structure. There are two kinds of relations: \term{the second child of} and \term{the next sibling of} an interval. For an interval $[i..j]$, we shall call them $child([i..j])$ and $next([i..j])$ respectively. They are retrieved from the $cldtab$ as follows:

\begin{itemize}
	\item if $[i..j]$ is neither root nor the last child
    \begin{align*}
    	next([i..j])&=cldtab[i] \text{ (undefined if $[i..j]$ is the first child)} \\
        child([i..j])&=cldtab[j] \text{ (undefined if $[i..j]$ is a terminal)},	
    \end{align*}
    \item if $[i..j]$ is either root or the last child
    \begin{align*}
    	next([i..j])& \text{ is undefined} \\
        child([i..j])&=cldtab[i] \text{ (undefined if $[i..j]$ is a terminal)}.
    \end{align*}
\end{itemize}

Consider the example of traversing an lcp-interval tree for the sequence $S$ using its $cldtab$. We start with the root node $[0..18]$ in Figure \ref{fig:cldtab0}. Using the second rule, we know that the starting index of the second child of $[0..18]$ is

\begin{equation*}
child([0..18])=cldtab[0]=3.
\end{equation*}

\noindent
Thus, the first child is $[0..2]$ and the second child starts at index $3$. Note that the $child$ relation is shown in green. That is all we need to know to go one level deeper down the tree. Now consider the first child in Figure \ref{fig:cldtab1}. Since it is a first child, $next$ is undefined for it, and since it is not a terminal node, $child$ is defined:

\begin{equation*}
child([0..2])=cldtab[2]=1.
\end{equation*}

\noindent
Now, as for the second child, we only know its starting position $3$. We calculate $next$ using the definition above:

\begin{equation*}
next([3..?])=cldtab[3]=6.
\end{equation*}

\noindent
Now that we know that the second interval is $[3..5]$, we can find its $child$ value, i.e.:

\begin{equation*}
child([3..5])=cldtab[5]=4.
\end{equation*}

\noindent
The $next$ relation is shown in red. We continue this procedure until we reach the last interval, do the same at the next level, shown in Figure \ref{fig:cldtab2}, and at the last one, shown in Figure \ref{fig:cldtab3}, thus obtaining the complete lcp-interval tree shown in Figure \ref{fig:cldtab4}.

\input{cldtab.tex}
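The retrieval rules and the traversal steps above can be condensed into a short Python sketch. The four $cldtab$ entries below are exactly the values quoted in the worked example; the rest of the table is omitted, so this is an illustration of the lookup rules rather than a full implementation:

```python
# partial child table entries read off the worked example above
cldtab = {0: 3, 2: 1, 3: 6, 5: 4}

def child(i, j, is_root_or_last):
    # the root or a last child stores its second child's start at index i,
    # any other child stores it at index j
    return cldtab[i] if is_root_or_last else cldtab[j]

def next_sibling(i):
    # start index of the next sibling of a non-last child starting at i
    return cldtab[i]

assert child(0, 18, True) == 3    # root [0..18]: first child is [0..2]
assert child(0, 2, False) == 1    # [0..2] splits at index 1
assert next_sibling(3) == 6       # the sibling after [0..2] is [3..5]
assert child(3, 5, False) == 4    # [3..5] splits at index 4
```

Each lookup is a single array access, which is what makes online reconstruction of the tree during traversal cheap.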

\subsection{Burrows--Wheeler Transform Table}

So far, we have shown how right maximal repeats can be extracted in time linear in the length of a sequence. In order to achieve maximality in both directions, we shall introduce another table, called a \term{Burrows--Wheeler transform table}, or in short $bwttab$. It is based on a compression technique introduced in \cite{Burrows94ablock-sorting} and is rather intuitive. For each suffix, the table contains the symbol preceding this suffix in the given sequence. For instance, for $suftab[0] = 12$, the corresponding suffix $S[12..18]=\texttt{eering\#}$ is preceded by the symbol $S[11]=\texttt{n}$ in the sequence $S$, as shown in Figure \ref{fig:bwttab}; thus $bwttab[0] = \texttt{n}$.

\begin{definition}[Burrows--Wheeler Transform Table]
A Burrows--Wheeler Transform table, called in short \term{bwttab}, is an array of length $n$ such that $bwttab[i]=S[suftab[i]-1]$ for each $0 \le i < n$ (undefined when $suftab[i]=0$).
\end{definition}

\input{bwttab.tex}

In order to determine whether a given interval is left maximal, we look at the section of the $bwttab$ corresponding to the interval. If this section constitutes a singleton set of symbols, we can conclude that the interval is not left maximal. As one can immediately see, this idea is based directly on the definition of left maximality (Definition \ref{def:left-max}). Coming back to our example, the interval $[3..5]$ in Figure \ref{fig:bwttab} is not left maximal, as the corresponding values of the $bwttab$ all contain the same symbol, namely \texttt{n}. This means that there exists another repeat subsuming this repeat; indeed, the repeat represented by the interval $[12..14]$ is the maximal repeat subsuming the repeat represented by $[3..5]$. In practice, there is no need to store an actual $bwttab$, as its elements can be computed in constant time from the $suftab$ and the sequence itself.
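A minimal Python sketch of this left-maximality test, using the toy string \texttt{banana\#} (our own example, not the sequence $S$ from the figures). The naive suffix sort stands in for a proper linear-time construction algorithm:

```python
def suffix_array(s):
    # naive construction: sort suffix start positions lexicographically
    return sorted(range(len(s)), key=lambda i: s[i:])

def bwt_table(s, suftab):
    # bwttab[i] is the symbol preceding suffix suftab[i]; None for suffix 0
    return [s[i - 1] if i > 0 else None for i in suftab]

def left_maximal(bwttab, i, j):
    # an interval is left maximal iff its bwttab section is not a singleton
    return len(set(bwttab[i:j + 1])) > 1

s = "banana#"
suftab = suffix_array(s)
assert suftab == [6, 5, 3, 1, 0, 4, 2]
bwttab = bwt_table(s, suftab)
# rows 5..6 hold the suffixes starting with "na"; both are preceded by "a",
# so the repeat "na" is not left maximal (it extends to "ana")
assert not left_maximal(bwttab, 5, 6)
# rows 2..3 hold the suffixes starting with "ana", preceded by "n" and "b"
assert left_maximal(bwttab, 2, 3)
```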

\subsection{Linearised Suffix Tree}

The \term{linearised suffix tree} proposed in \cite{DBLP:journals/algorithmica/KimKP08} adds efficiency to the enhanced suffix array by introducing the so-called \term{binary lcp-interval tree} (also \term{modified lcp-interval tree}) which is a balanced binary tree unlike the previously introduced lcp-interval tree. Having a balanced tree becomes much more important when dealing with large alphabets. The binary lcp-interval tree is encoded in the \term{new child table} which replaces the $cldtab$ in a linearised suffix tree.

\begin{table}[H]
\begin{center}
	\begin{tabular}{lllc}
        \toprule
		\multirow{6}{*}{Applications:} & \multicolumn{3}{l}{Exact sequence matching (sequence search)} \\
        & \multicolumn{3}{l}{Finding a longest common subsequence}\\
        & \multicolumn{3}{l}{Finding maximal repeats} \\
        & \multicolumn{3}{l}{Finding supermaximal repeats} \\
        & \multicolumn{3}{l}{Indexing} \\
        & \multicolumn{3}{l}{Burrows--Wheeler transform} \\
        \midrule
        \midrule
        \multirow{4}{*}{Data Structures:} & Type & Name & Size \\
        \cmidrule(r){2-4}
        & \textit{Array} & \textit{Suffix Table} ($suftab$) & $n$ \\
	    & \textit{Array} & \textit{Longest Common Prefix Table} ($lcptab$) & $n$ \\
        & \textit{Array} & \textit{New Child Table} ($newcldtab$) & $n$ \\
    \bottomrule
	\end{tabular}
	\captionof{table}{Linearised suffix tree properties.}
	\label{tab:lst}
\end{center}
\end{table}

\begin{definition}[Linearised Suffix Tree]
The \term{linearised suffix tree} for a sequence $S$ consists of three arrays of size $n$: the suffix table ($suftab$), the longest common prefix table ($lcptab$) and the new child table ($newcldtab$) for $S$.
\end{definition}

\input{lst.tex}

The linearised suffix tree for the sequence $S$ is shown in Figure \ref{fig:lst}.

As one can see, a linearised suffix tree has more intervals than an enhanced suffix array, although, as already noted, both represent exactly the same lcp-intervals. This happens because many intervals in a linearised suffix tree exist only for the sake of making it a binary tree and are not \term{proper intervals}. For this reason, we should be able to tell which intervals are proper and which are not.

\begin{definition}[Proper Interval]
An interval of a linearised suffix tree is called a proper interval if its lcp value is distinct from its parent's lcp value. The root interval is set to be a proper interval.
\end{definition}

For example, the interval $[3..4]$ in Figure \ref{fig:lst} is not a proper interval, since it has exactly the same lcp value as its parent, namely 1.

\subsection{New Child Table}

The $newcldtab$ array stores the structure of a modified lcp-interval tree. Since a modified lcp-interval tree is a binary tree, each non-terminal interval has exactly two children splitting it in two. Thus, knowing the starting index of the second child for each interval, we can reconstruct the entire tree structure. For an interval $[i..j]$, we shall denote this index as $child([i..j])$ and it will be stored in the $newcldtab$ as follows:

\begin{itemize}
\item if $[i..j]$ is the first child
    \begin{equation*}
        child([i..j])=newcldtab[j] \text{ (undefined if $[i..j]$ is a leaf node)}
    \end{equation*}
\item if $[i..j]$ is the second child or the root
    \begin{equation*}
        child([i..j])=newcldtab[i] \text{ (undefined if $[i..j]$ is a leaf node)}
    \end{equation*}
\end{itemize}

Traversing a binary lcp-interval tree is accomplished by recursively applying the rules above. Consider the example of traversing a binary lcp-interval tree for the sequence $S$ using a $newcldtab$. We start with the root node $[0..18]$, shown in Figure \ref{fig:newcldtab0}. Using the second rule, we know that the starting index of the second child of $[0..18]$ is

\begin{equation*}
child([0..18])=newcldtab[0]=11.
\end{equation*}

\noindent
Thus, the first child is $[0..10]$ and the second child is $[11..18]$. Knowing all the children of the root node, we are ready to move down one level. For the first child of the root, we use the first rule:

\begin{equation*}
child([0..10])=newcldtab[10]=6,
\end{equation*}

\noindent
i.e.\ the children are $[0..5]$ and $[6..10]$. For the second child, we use the second rule:

\begin{equation*}
child([11..18])=newcldtab[11]=17,
\end{equation*}

\noindent
i.e.\ the children are $[11..16]$ and $[17..18]$. This is shown in Figure \ref{fig:newcldtab1}. This simple procedure is repeated until we reach the leaf intervals (as shown in Figures \ref{fig:newcldtab2}, \ref{fig:newcldtab3}, \ref{fig:newcldtab4}, \ref{fig:newcldtab5} and \ref{fig:newcldtab6}). The complete binary lcp-interval tree is shown in Figure \ref{fig:newcldtab8}, accompanied by the complete $newcldtab$.

\input{newcldtab.tex}

\input{bitree.tex}
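The two storage rules can be summarised in one lookup: knowing whether $[i..j]$ is a first child, we read the start index $k$ of its second child and split the interval at $k$. The sketch below replays the first two levels of the worked example, using only the three $newcldtab$ entries quoted in the text (the rest of the table is omitted):

```python
def children(newcldtab, i, j, first_child):
    # the table stores the start index k of the second child of [i..j]:
    # at j for a first child, at i for a second child or the root
    k = newcldtab[j] if first_child else newcldtab[i]
    return (i, k - 1), (k, j)

# partial new child table entries read off the worked example above
newcldtab = {0: 11, 10: 6, 11: 17}

# the root uses the second rule, hence first_child=False
assert children(newcldtab, 0, 18, False) == ((0, 10), (11, 18))
assert children(newcldtab, 0, 10, True) == ((0, 5), (6, 10))
assert children(newcldtab, 11, 18, False) == ((11, 16), (17, 18))
```

Because the tree is binary, the single index $k$ determines both children, which is why one array of size $n$ suffices.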

\clearpage

\section{Discontinuous Repeats}
In the previous chapter, we went through an algorithm for repeat discovery. We will next introduce an extension to that algorithm for finding \term{discontinuous repeats}. We start by adding some notation which will be used later. So far we have talked about standalone sequences; we now shift our attention to relations between subsequences. Naturally, a subsequence can precede another subsequence in a sequence, thus one can speak of an order between subsequences based on their positions in a given sequence. Let $A$ be the set of all the subsequences of $S$.

\begin{definition}[Subsequence Precedence Relation $<_T$]
A binary relation $<_T$ between two subsequences $S[j..k] \in A$ and $S[l..m] \in A$ defined as $S[j..k] <_T S[l..m]$ if and only if $k<l$, is called a \term{subsequence precedence relation}. This relation is a well-order relation on a set $T \subset A$.
\end{definition}

In the following example, the subsequence $S[10..11]$ precedes the subsequence $S[15..17]$ in the subsequence precedence order.

\input{dsubsequence.tex}

It is clear from the definition above that the relation $<_T$ is not defined between every pair of subsequences in $A$. For instance, overlapping subsequences are not related by $<_T$. From now on we will focus only on those subsets of $A$ on which $<_T$ is a total order. Each such subset consists of zero or more subsequences of $S$ that do not overlap. We shall call these sets discontinuous subsequences of $S$. Note that the notion of a discontinuous subsequence is more general than the notion of a sequence, since one can think of any sequence $S$ as a discontinuous subsequence consisting of one element, namely the entire sequence $S$. The example above can be written as a discontinuous subsequence

\begin{equation*}
\mathbf{D}=(\{S[10..11], S[15..17]\},<_T)
\end{equation*}

\begin{definition}[Discontinuous Subsequence]
A set $T \subset A$ together with the well-order relation $<_T$ on $T$ constitute a well-ordered set called \term{discontinuous subsequence} of $S$, denoted by $\mathbf{D}=(T, <_T)$.
\end{definition}
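The defining condition -- pairwise non-overlapping subsequences, totally ordered by $<_T$ -- is easy to check programmatically. A sketch, representing each subsequence $S[i..j]$ by its $(i, j)$ index pair:

```python
def is_discontinuous_subsequence(T):
    """T: list of (start, end) index pairs. Valid iff the pairs are
    pairwise non-overlapping, so that <_T (one ending strictly before
    the next starts) totally orders them."""
    T = sorted(T)
    return all(T[k][1] < T[k + 1][0] for k in range(len(T) - 1))

# the example from the text: S[10..11] precedes S[15..17]
assert is_discontinuous_subsequence([(10, 11), (15, 17)])
# overlapping subsequences are not related by <_T
assert not is_discontinuous_subsequence([(1, 4), (3, 6)])
```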

Another useful notion when talking about subsequences forming a discontinuous subsequence is \term{immediate precedence}, i.e. whether a subsequence comes immediately before another subsequence in $\mathbf{D}$. The subsequence $S[10..11]$ in the previous example immediately precedes the subsequence $S[15..17]$. This is a rather trivial case since there are only two subsequences in the discontinuous subsequence and naturally one of them immediately precedes the other.

\begin{definition}[Immediately Preceding Subsequence]
For a discontinuous subsequence $\mathbf{D}=(T, <_T)$, a subsequence $S[g..h] \in T$ is said to be immediately preceding a subsequence $S[l..m] \in T$ if and only if $S[g..h] <_T S[l..m]$ and there exists no subsequence $S[i..j] \in T$ such that $S[g..h] <_T S[i..j] <_T S[l..m]$.
\end{definition}

\noindent
In the example below the discontinuous subsequence

\begin{equation*}
\mathbf{D}=(\{S[1..2], S[16..17]\}, <_T)
\end{equation*}

\noindent
is shown in purple.

\input{isubsequence.tex}

One can notice that the subsequence $S[1..2] \in T$ is followed by an equivalent subsequence $S[3..4] \not\in T$ and, at the same time, the subsequence $S[16..17] \in T$ is preceded by an equivalent subsequence $S[8..9] \not\in T$. In this case, the subsequences $S[3..4]$ and $S[8..9]$ are called \term{intervening subsequences}. Although $\mathbf{D}$ is a valid discontinuous subsequence according to the previously given definition, one often wants to obtain only discontinuous subsequences with no intervening subsequences.

\begin{definition}[Intervening Subsequence]
For a discontinuous subsequence $\mathbf{D}=(T, <_T)$ a subsequence $S[j..k] \not\in T$ is called an intervening subsequence if there exist two subsequences $S[g..h] \in T$ and $S[l..m] \in T $, where $S[g..h]$ immediately precedes $S[l..m]$, such that $S[j..k] \equiv_s S[l..m]$ or $S[j..k]  \equiv_s S[g..h]$, where $S[g..h]<_T S[j..k] <_T S[l..m]$.
\end{definition}

An \term{equivalence relation} similar to the one we already have for subsequences could be established for discontinuous subsequences.

\begin{definition}[Discontinuous Subsequence Equivalence Relation]
Let $\mathbf{D}_1=(T_1, <_T)$ and $\mathbf{D}_2=(T_2, <_T)$ be discontinuous subsequences. $\mathbf{D}_1$ and $\mathbf{D}_2$ are said to be in an equivalence relation, denoted by $\equiv_D$, if there exists an onto function $h:T_1 \mapsto T_2$ that maps any subsequence in $T_1$ to an equivalent subsequence in $T_2$, such that for any two subsequences $S_i, S_j \in T_1$, $h(S_i) <_T h(S_j)$ if and only if $S_i <_T S_j$. That is, $h$ is an order isomorphism from $(T_1, <_T)$ to $(T_2, <_T)$.
\label{def:dseqeq}
\end{definition}
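For concrete index pairs, Definition \ref{def:dseqeq} reduces to a position-by-position comparison of the subsequences taken in precedence order. A sketch, using a hypothetical string in place of the sequence $S$ from the figures:

```python
def equivalent(s, T1, T2):
    # order isomorphism check: the k-th subsequence of T1 (in precedence
    # order) must equal the k-th subsequence of T2
    T1, T2 = sorted(T1), sorted(T2)
    return len(T1) == len(T2) and all(
        s[i:j + 1] == s[k:l + 1] for (i, j), (k, l) in zip(T1, T2))

# hypothetical sequence containing the discontinuous repeat [in...ing] twice
s = "inxing_inying"
assert equivalent(s, [(0, 1), (3, 5)], [(7, 8), (10, 12)])   # in...ing both
assert not equivalent(s, [(0, 1), (3, 5)], [(7, 8), (7, 8)])
```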

If the sequence $S$ contains another discontinuous subsequence equivalent to a given one, we shall call the given discontinuous subsequence a discontinuous repeat. For example, the discontinuous subsequence

\begin{equation*}
\mathbf{D_1}=(\{S[10..11],S[15..17]\},<_T)
\end{equation*}

\noindent
is equivalent to the discontinuous subsequence

\begin{equation*}
\mathbf{D_2}=(\{S[1..2],S[3..5]\},<_T)
\end{equation*}

\noindent
and thus they are both repeats of each other. These are highlighted below:

\input{drepeat.tex}

As with sequence equivalence classes, we shall use an element in square brackets to denote its equivalence class. In the following example,

\begin{equation*}
[\mathbf{D_1}] = [\mathbf{D_2}]
\end{equation*}

\noindent
In the substrings notation, we shall write

\begin{equation*}
[\mathbf{D_2}] = [\texttt{in}\dots\texttt{ing}].
\end{equation*}

\begin{definition}[Discontinuous Repeat]
A discontinuous subsequence equivalence class $[\mathbf{D}]$ is called a discontinuous repeat if and only if its cardinality is greater than one.
\end{definition}

\section{Discovering Discontinuous Repeats}

In this chapter, we present a method for finding all discontinuous repeats in a given sequence. This method builds upon the research described in \cite{Gerdemann:2010}.

Since a discontinuous repeat is nothing more than an ordered set of repeats, we shall start the discontinuous repeat discovery by first selecting a repeat and then trying to establish whether it is followed by other repeats. We shall refer to this first repeat as the initial repeat. Of course, each occurrence of the initial repeat is not necessarily followed by an occurrence of another repeat; it is enough if only some occurrences of the initial repeat are followed by some occurrences of another repeat. Thus, the first step of the discontinuous repeat discovery is finding a repeat. This can be done by constructing either an enhanced suffix array or a linearised suffix tree, as presented in the previous chapters. Once we have one of these, we traverse or search the lcp-interval tree for an lcp-interval representing the initial repeat. Note that in the case of a binary lcp-interval tree, we should only consider proper intervals. For the chosen lcp-interval, we look at the section of the suffix table that it covers. Let us take, for example, the interval $[6..9]$ of the binary lcp-interval tree in Figure \ref{fig:bitree}. This interval represents the repeat $[\texttt{in}]$. The section of the suffix table covered by this interval, along with the corresponding suffixes, is shown in Figure \ref{fig:esuffixes}. One can check that these are all the suffixes starting with ``\texttt{in}". If we disregard this beginning and look at the resulting sequences (highlighted), these are all the sequences that follow ``\texttt{in}" in the original sequence $S$.

\input{esuffixes.tex}

As we have already established, we have to look for repeats following the initial repeat. Note that we have already limited our choice of sequences that might contain a desired repeat: if our goal is to find a discontinuous subsequence starting with the repeat $[\texttt{in}]$, we should look for repeats in the sequences highlighted in Figure \ref{fig:esuffixes}. We could just use these sequences as suffixes and construct an lcp-interval tree to find the repeats, but this would limit us to discovering only the repeats immediately following the repeat $[\texttt{in}]$. Since we would like to discover discontinuous repeats, extracting suffixes from the highlighted subsequences is the way to go. In practice, we might want to limit the distance from the end of the initial repeat to the beginning of the next one. Such a limit is called a \term{window}. Figure \ref{fig:window} shows the suffixes from Figure \ref{fig:esuffixes}, with a window of size 5 highlighted.

\input{window.tex}

By defining this window size, we limit ourselves to repeats located at a maximum distance of 5 positions from the end of the initial subsequence. We shall next construct a suffix table solely from the suffixes starting within the highlighted window. But before we do this, let us introduce an auxiliary table to save us the trouble of sorting the new suffix table. Since all the suffixes we are going to extract are a subset of all the suffixes of the original sequence $S$, which we have already sorted, we can just use this subset in exactly the same order as it appears in the suffix table. For this purpose, we introduce an \term{inverse suffix table}. The suffix table maps the position of a suffix in the lexicographical order to its index, whereas the inverse suffix table is the inverse mapping, i.e. it maps the indices of suffixes to their positions in the suffix table. The inverse suffix table and the original suffix table for $S$ are presented in Figure \ref{fig:isuftab} together with an index. Now, if we wish to find the position of the 10th suffix in the suffix table, we just look at the 10th position in the inverse suffix table, which is 6. This means that the 10th suffix is 6th in the lexicographical order. To check this, we can just look at the 6th position in the suffix table and indeed it contains the 10th suffix.

\input{isuftab}
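A sketch of the inverse suffix table construction and the round-trip property described above, using the toy string \texttt{banana\#} rather than the sequence $S$:

```python
s = "banana#"
# naive suffix table: suffix start positions in lexicographical order
suftab = sorted(range(len(s)), key=lambda i: s[i:])

# isuftab maps a suffix index to its rank (position) in the suffix table
isuftab = [0] * len(s)
for rank, idx in enumerate(suftab):
    isuftab[idx] = rank

# the two tables are inverse mappings of each other
assert all(suftab[isuftab[i]] == i for i in range(len(s)))
assert all(isuftab[suftab[r]] == r for r in range(len(s)))
```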

Now that we have the inverse suffix table, we shall write down all the suffixes starting within the window in Figure \ref{fig:window}. These are presented together with their suffix table values (in red) and their inverse suffix table values (in green) in Figure \ref{fig:ewinsuffixes}. Note the multiple instances of some suffixes.

\input{ewinsuffixes.tex}

In the next step, all the suffixes in Figure \ref{fig:ewinsuffixes} are sorted by their inverse suffix table value and multiple instances of the same suffix are removed. The result of the sorting is shown in Figure \ref{fig:esuftab}. The red column is the new suffix table which we shall refer to as the \term{embedded suffix table}, since it contains all the suffixes embedded in the initial interval.
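The construction just described -- collect the suffix start positions inside each window, then order them by their inverse suffix table rank -- can be sketched in a few lines of Python. The string \texttt{banana\#}, the repeat \texttt{an} and the window size 2 are our own toy example, not the running example $S$:

```python
def embedded_suffix_table(s, isuftab, occurrences, replen, window):
    # suffix start positions inside the window after each occurrence
    starts = set()
    for p in occurrences:
        for q in range(p + replen, min(p + replen + window, len(s))):
            starts.add(q)
    # reuse the global lexicographic order: sort by inverse suffix table rank
    return sorted(starts, key=lambda q: isuftab[q])

s = "banana#"
suftab = sorted(range(len(s)), key=lambda i: s[i:])
isuftab = [0] * len(s)
for rank, idx in enumerate(suftab):
    isuftab[idx] = rank

# the repeat "an" occurs at positions 1 and 3; window of size 2
esuftab = embedded_suffix_table(s, isuftab, [1, 3], 2, 2)
assert esuftab == [6, 5, 3, 4]
```

The deduplication of multiple instances and the reuse of the global order are exactly what spares us a fresh sort.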

\input{esuftab.tex}

\input{elcptab.tex}

Having a suffix table, we can construct the \term{embedded lcp table} shown in Figure \ref{fig:elcptab}. Note that some algorithms for lcp table construction, such as the one presented in \cite{DBLP:conf/cpm/KasaiLAAP01}, fail when presented with embedded suffix tables since these are not complete suffix tables, i.e. not all the suffixes are present.

\subsection{Embedded Suffix Tree}

Once we have an embedded suffix table and an embedded lcp table we can construct an entire \term{embedded binary lcp-interval tree}. For that we can use the procedure for construction of linearised suffix trees. We shall call the resulting structure an \term{embedded suffix tree}. Using the embedded suffix table and the embedded lcp table in the example above, we can construct the embedded suffix tree in Figure \ref{fig:est}. As it has exactly the same structure as a linearised suffix tree, there is no need to go into detail about it again. Its properties are given in Table \ref{tab:est}.

\begin{definition}[Embedded Suffix Tree]
 An embedded suffix tree for an interval $[i..j]$ consists of three arrays of size $m < (j-i+1)\cdot window$: the embedded suffix table ($suftab$), the embedded longest common prefix table ($lcptab$) and the embedded new child table ($newcldtab$) for $[i..j]$.
\end{definition}

\input{est.tex}

As we already know from linearised suffix trees, a binary lcp-interval tree could be constructed using the data from an embedded suffix tree. An example of such a tree is shown in Figure \ref{fig:etree}.

\input{etree.tex}

\begin{table}[!h]
\begin{center}
	\begin{tabular}{lllc}
        \toprule
		\multirow{5}{*}{Applications:} & \multicolumn{3}{l}{Exact discontinuous sequence matching} \\
        & \multicolumn{3}{l}{Finding a longest common discontinuous subsequence}\\
        & \multicolumn{3}{l}{Finding maximal discontinuous repeats} \\
        & \multicolumn{3}{l}{Finding supermaximal discontinuous repeats} \\
        & \multicolumn{3}{l}{Indexing} \\
        \midrule
        \midrule
        \multirow{4}{*}{Data Structures:} & Type & Name & Size \\
        \cmidrule(r){2-4}
        & \textit{Array} & \textit{Embedded Suffix Table} ($suftab$) & $m$ \\
	    & \textit{Array} & \textit{Longest Common Prefix Table} ($lcptab$) & $m$ \\
        & \textit{Array} & \textit{New Child Table} ($newcldtab$) & $m-1$ \\
    \bottomrule
	\end{tabular}
	\captionof{table}{Embedded suffix tree properties.}
	\label{tab:est}
\end{center}
\end{table}

\subsection{Recursively Embedded Suffix Tree}

So far, we have described a way to discover discontinuous repeats consisting of two subsequences. This procedure can be extended to extract discontinuous repeats with an unbounded number of subsequences. We start by constructing an lcp-interval tree; then we run Procedure \ref{pro:rest} with this tree and an empty discontinuous repeat as arguments.

\begin{algorithm}
\caption{$\mathbf{extract}(\mathbf{D}, tree)$ - recursive embedding of suffix trees}
\begin{algorithmic}
\REQUIRE $\mathbf{D}$
\COMMENT{a discontinuous repeat}
\REQUIRE $tree$
\COMMENT{an lcp-interval tree}
\FORALL{interval $i$ in $tree$}
    \STATE $\mathbf{C} \gets \mathbf{D}$
    \STATE append $label(i)$ to $\mathbf{C}$
    \STATE $tree \gets $ an embedded suffix tree for $i$
    \IF {the root node is the only node in $tree$}
        \PRINT $\mathbf{C}$
    \ELSE
        \STATE $\mathbf{extract}(\mathbf{C}, tree)$        
    \ENDIF
\ENDFOR
\end{algorithmic}
\label{pro:rest}
\end{algorithm}

Procedure \ref{pro:rest} recursively constructs embedded suffix trees. At a certain point, it reaches the base case, in which an embedded suffix tree consists only of the root node, and reports the entire discontinuous repeat produced so far.
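As a sanity check on the overall idea, the same kind of discontinuous repeats can be enumerated by brute force, without any suffix structures. The following sketch is quadratic at best and only practical for toy inputs; the string is a hypothetical example containing the repeat $[\texttt{in}\dots\texttt{ing}]$ twice. It mirrors the logic of Procedure \ref{pro:rest} for the two-subsequence case only:

```python
def occurrences(s, t):
    # all start positions of t in s, including overlapping ones
    return [i for i in range(len(s) - len(t) + 1) if s.startswith(t, i)]

def repeats(s, minlen=2, maxlen=3):
    # all substrings of bounded length occurring at least twice
    return {s[i:i + L]
            for i in range(len(s))
            for L in range(minlen, maxlen + 1)
            if len(s[i:i + L]) == L and len(occurrences(s, s[i:i + L])) >= 2}

def discontinuous_pairs(s, window=4, minlen=2, maxlen=3):
    # pairs (a, b) of repeats where, in at least two occurrences,
    # a is followed by b starting within `window` positions of its end
    pairs = set()
    reps = repeats(s, minlen, maxlen)
    for a in reps:
        for b in reps:
            support = sum(
                1 for i in occurrences(s, a)
                if any(i + len(a) <= j < i + len(a) + window
                       for j in occurrences(s, b)))
            if support >= 2:
                pairs.add((a, b))
    return pairs

# hypothetical toy sequence with the discontinuous repeat [in...ing]
assert ("in", "ing") in discontinuous_pairs("inxing_inying")
```

The suffix-tree formulation replaces the nested loops with a single traversal per embedding level, which is what makes it feasible on corpus-sized input.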

\clearpage
\part{Alignment}

\section{Preliminaries}

As this paper applies the presented techniques in the field of computational linguistics, some linguistic terminology needs to be introduced. So far, we have talked about sequences in general, and what exactly amounts to a symbol has been of little importance to us.

\subsection{Preprocessing}

In natural language processing one typically works with \term{corpora}. A corpus is a body of text generally collected from a single source, such as a newspaper, a magazine, or parliamentary proceeding transcripts. It commonly consolidates texts in one particular language. Before using a corpus, some amount of \term{preprocessing} might be required depending on the task at hand. Preprocessing typically includes \term{lower-casing}, i.e. converting words to lower case, breaking up some compound words, e.g. \textit{Entschließungsantrag}, and combining some words into one token, e.g. \textit{in spite of}. Some affixes might be normalised as well, e.g. plural forms might be converted to the singular. For some applications, punctuation might be omitted or replaced. In case \term{part-of-speech} or \term{word sense} information is available, we could annotate tokens with it. The selection of words we might be interested in also depends greatly on the particular application. For some applications, it could be a good idea to leave out the so-called \term{function words}, such as determiners and prepositions, as those are much more frequent than the \term{content words}, also called \term{lexical words}. The words to be excluded from a sequence are typically composed into a \term{stop list}. Figure \ref{fig:corpus} shows a simple corpus preprocessed by lower-casing and punctuation removal.

\input{corpus.tex}

Once a corpus is preprocessed it can be converted into a sequence of letters of a respective language augmented with punctuation, or annotations etc. Alternatively, for many applications, this sequence is further split into subsequences of symbols, called \term{tokens}, and a corpus is treated as a sequence of tokens. A token is similar to what is commonly known as a word. Although it might sound simple, \term{tokenization}, that is segmentation of a text into tokens, is a rather serious problem on its own and requires elaborate techniques to acquire tokens appropriate for a particular application.

For many tasks, such as information retrieval and data mining, a corpus is additionally divided into \term{documents}. A document is any closed piece of text, such as a newspaper article, a paragraph, or a sentence. In this section, we assume for each corpus that:

\begin{itemize}
  \item It is a sequence of documents,
  \item A document is a sequence of tokens,
  \item A token is a sequence of symbols.
\end{itemize}

The simple corpus from the previous example split into documents is shown in Figure \ref{fig:docs}.

\input{docs.tex}

\subsection{Parallel Corpora}

In the field of \term{machine translation}, so-called \term{parallel corpora} are used. A parallel corpus consists of a collection of texts along with their translations into another language. Parallel corpora are constructed from literary works, parliamentary proceedings and other government documents. The Bible along with its translations is one of the earliest examples of a parallel corpus. We shall refer to the \term{source language} part of a parallel corpus as the \term{source corpus} and the \term{target language} part as the \term{target corpus}. Figure \ref{fig:pcorpus} shows a simple parallel corpus preprocessed by lower-casing and punctuation removal.

\input{pcorpus.tex}

\subsection{Alignment}

An alignment is a mapping of sequences from a source corpus to the corresponding sequences in a target corpus. Roughly speaking, it tells us how well a sequence from a source corpus translates to a sequence from a target corpus and vice versa. For example, in document alignment, a document in a source corpus is aligned to the corresponding document in a target corpus. We shall be interested in the alignment of token sequences and particularly in the alignment of discontinuous token sequences. One commonly proceeds by first aligning parts of a corpus higher up in the hierarchy, such as documents, and then using this alignment to find more refined alignments. The parallel corpus from the previous example, split into documents and aligned by document, is shown in Figure \ref{fig:pdocs}.

\input{pdocs.tex}

\begin{definition}[Discontinuous Subsequences Alignment]
Let $A_S$ and $A_T$ be the sets of all discontinuous subsequences in a source and in a target corpus respectively. An alignment is a function of the form $A_S \times A_T \mapsto \mathbb{R}$ assigning a real value to each pair of discontinuous subsequences.
\end{definition}

\section{Vector Space Model}

\subsection{Vectors}

Vectors are commonly used to represent quantities with multiple components. For example, position, velocity and acceleration have both direction and magnitude and thus can be represented by vectors. A \term{vector} is an element of a \term{vector space}. We write vectors with an arrow, e.g. $\vec{v}$.

\begin{definition}[Vector Space]
A vector space over $\mathbb{R}$ (a real vector space) is a set $V$ together with a function of the form $V \times V \mapsto V$ called addition and a function of the form $\mathbb{R} \times V \mapsto V$ called scalar multiplication.

Addition, denoted $\vec{x} + \vec{y}$, must satisfy the following axioms:
\begin{itemize}
  \item \textit{Commutativity:} For all $\vec{x}, \vec{y} \in V$,  $\vec{x} + \vec{y} = \vec{y} + \vec{x}$,
  \item \textit{Associativity:} For all $\vec{x}, \vec{y}, \vec{z} \in V$,  $(\vec{x} + \vec{y}) + \vec{z} = \vec{x} + (\vec{y} + \vec{z})$,
  \item \textit{Additive identity:} There exists a \term{zero vector} in $V$, denoted $\vec{0}$, such that $\vec{0} + \vec{x} = \vec{x}$ for each $\vec{x} \in V$,
  \item \textit{Additive inverse:} For each element $\vec{x} \in V$, there exists an inverse element $-\vec{x} \in V$ such that $(-\vec{x}) + \vec{x} = \vec{0}$.
\end{itemize}

Scalar multiplication, denoted $c\vec{x}$, must satisfy the following axioms:
\begin{itemize}
  \item \textit{Distributivity over addition in $V$:} For each $a \in \mathbb{R}$ and $\vec{x}, \vec{y} \in V$,  $a(\vec{x} + \vec{y}) = a\vec{x} + a\vec{y}$,
  \item \textit{Distributivity over addition in $\mathbb{R}$:} For each $a, b \in \mathbb{R}$ and $\vec{x} \in V$,  $(a + b)\vec{x} = a\vec{x} + b\vec{x}$,
  \item \textit{Associativity:} For each $a, b \in \mathbb{R}$ and $\vec{x} \in V$,  $(ab) \vec{x} = a(b\vec{x})$,
  \item \textit{Multiplicative identity:} There exists an identity element in $\mathbb{R}$, denoted $1$, such that $1\vec{x} = \vec{x}$ for each $\vec{x} \in V$.
\end{itemize}

(Based on \cite{Hogben:2006} and \cite{widdows04geometry})
\end{definition}

If $n$ is a positive integer, $\mathbb{R}^n$ denotes the set of all $n$-tuples over $\mathbb{R}$. These form an $n$-dimensional real vector space. For $\vec{x} \in \mathbb{R}^n$, $x_j$ is called the $j$th coordinate of $\vec{x}$. For vectors

\begin{equation*}
\vec{x} = \left[\begin{array}{c}x_1\\\vdots\\x_n\end{array}\right], \vec{y} = \left[\begin{array}{c}y_1\\\vdots\\y_n\end{array}\right] \in \mathbb{R}^n
\end{equation*}

\noindent
and a scalar $c \in \mathbb{R}$, the addition is defined as

\begin{equation*}
\vec{x} + \vec{y} = \left[\begin{array}{c}x_1+y_1\\\vdots\\x_n+y_n\end{array}\right]
\end{equation*}

\noindent
scalar multiplication is defined as

\begin{equation*}
c \vec{x} = \left[\begin{array}{c}c x_1\\\vdots\\c x_n\end{array}\right]
\end{equation*}

\noindent
and $\vec{0}$ denotes the $n$-tuple of zeros.

\begin{definition}[Matrix]
An $m \times n$ \term{matrix} over $\mathbb{R}$ is an $m \times n$ rectangular array

\begin{equation*}
A = \left[\begin{matrix}a_{11} & \dots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \dots & a_{mn} \end{matrix}\right]
\end{equation*}

where the element in the $i$th row and $j$th column is denoted $a_{ij}$, with $a_{ij} \in \mathbb{R}$.

(\cite{Hogben:2006})
\end{definition}

If $A$ is an $m \times n$ matrix, row $i$ is

\begin{equation*}
\left[\begin{array}{ccc}a_{i1} & \dots & a_{in}\end{array}\right]
\end{equation*}

\noindent
and column $j$ is

\begin{equation*}
\left[\begin{array}{c}a_{1j} \\ \vdots \\ a_{mj}\end{array}\right].
\end{equation*}

\noindent
These are called a \term{row vector} and a \term{column vector} respectively.

\subsection{\emph{term-document} Matrix}

One of the ways to model document meaning, pioneered in the field of information retrieval by \cite{Salton:1971:SRS:1102022}, is to represent documents as vectors in a \term{term space} by building a \term{term-document matrix}. In such a matrix, row vectors correspond to terms (tokens) and column vectors correspond to documents. Thus each document is represented by a \term{multiset} (a set in which an element can occur multiple times) of tokens. This idea is based on the \term{bag of words hypothesis}, which states that the frequencies of tokens in a document indicate the relevance of the document to a given query. In order to find documents relevant to a query, a \term{query vector} is built from the tokens in the query and then compared with all the vectors in the term-document matrix using some \term{similarity measure}. The more similar the document and query vectors are, the more relevant the document is to the query. An example of a term-document matrix is shown in Figure \ref{fig:tdmatrix}.

\input{tdmatrix.tex}
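As a concrete illustration (not part of the system itself), the construction of a term-document matrix from a toy corpus can be sketched in Python; the function name and the corpus are our own invention:

```python
from collections import Counter

def term_document_matrix(documents):
    """Build a term-document matrix as a dictionary mapping each term
    to its row vector of per-document frequencies (bag-of-words counts)."""
    counts = [Counter(doc.split()) for doc in documents]
    vocabulary = sorted({term for c in counts for term in c})
    return {term: [c[term] for c in counts] for term in vocabulary}

# a toy corpus of three single-sentence documents
docs = ["suffix array suffix tree",
        "parallel corpus alignment",
        "suffix array alignment"]
matrix = term_document_matrix(docs)
# each row vector has one entry per document,
# e.g. matrix["suffix"] == [2, 0, 1]
```

A query vector would be built from a query string in the same way and compared against the columns of this matrix.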

\subsection{\emph{word–context} Matrix}

Conversely, the same matrix can be regarded as a \term{word–context matrix}, in which terms are represented as vectors in a document space. This idea, introduced by \cite{deerwester90indexing}, is based upon the \term{distributional hypothesis} in linguistics, which states that words that occur in similar contexts tend to have similar meanings. A word–context matrix is the transpose of a term-document matrix, i.e. its rows and columns are interchanged. We shall be using a variant of a word–context matrix in which repeats are represented in a document space. Given such a matrix, one can calculate the similarity of terms. A simple approach to term similarity will be presented later.

%%\input{wcmatrix.tex}

\subsection{Vector Space Models}
All vectors we have used to model our data contain event frequencies. In a term-document matrix, the occurrence of a word in a document is such an event. In a word–context matrix, an event frequency would be the number of times that a given word appears in a given context. We shall call this type of model a \term{vector space model}, or \term{VSM} for short. For an overview of VSMs see \cite{DBLP:journals/corr/abs-1003-1141}.

When dealing with word frequencies, it may be the case that some words appear across all the documents and are thus not characteristic of any particular set of documents. Using such words to discriminate between documents will not produce good results. In this case one says that they have low \term{information content}: encountering one of these words tells us little about the topic of a particular document. On the other hand, words that are specific to a small number of documents allow us to identify these documents easily. To reduce the effect of words with low information content, a number of weighting techniques have been proposed. We present two of them.

Let $T$ be the set of all terms, $D$ the set of all documents, and $N_{|T|\times|D|}$ a term–document frequency matrix, where $n_{ij}$ is the number of times the term $t_{i}$ appears in the document $d_{j}$.

\subsection{\emph{term frequency} $\times$ \emph{inverse document frequency}}

The first weighting technique is applying the \term{tf-idf} (\term{term frequency} $\times$ \term{inverse document frequency}) weighting function to all the elements of a matrix.

\begin{definition}[tf]
The frequency of a term $t_{i}$ in a document $d_{j}$ is defined as
\begin{equation*}
    tf_{ij} = \frac{n_{ij}}{\sum_{t_k \in T} n_{kj}},
\end{equation*}
\end{definition}

\begin{definition}[idf]
The inverse document frequency is defined as
\begin{equation*}
idf_{i} = \log \frac{|D|}{|\{j: t_{i} \in d_{j}\}|},
\end{equation*}
where $|D|$ is the total number of documents and $|\{j: t_{i} \in d_{j}\}|$ is the number of documents containing a given term $t_{i}$.
\end{definition}

\begin{definition}[tf-idf]
\begin{equation*}
    (tf\mbox{-}idf)_{ij} = tf_{ij} \times idf_{i}
\end{equation*}
\end{definition}
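The tf-idf weighting can be sketched in Python as follows; the function name and toy matrix are our own, and every term is assumed to occur in at least one document (otherwise the idf denominator would be zero):

```python
import math

def tf_idf(n):
    """Apply tf-idf weighting to a term-document count matrix n,
    given as a list of rows (one row per term), following the
    definitions above."""
    num_docs = len(n[0])
    # column totals: the number of term occurrences in each document
    col_totals = [sum(row[j] for row in n) for j in range(num_docs)]
    weighted = []
    for row in n:
        df = sum(1 for freq in row if freq > 0)  # documents containing the term
        idf = math.log(num_docs / df)
        weighted.append([(freq / col_totals[j]) * idf
                         for j, freq in enumerate(row)])
    return weighted

n = [[2, 0, 1],   # a term spread over two of three documents
     [0, 3, 0]]   # a term confined to a single document
w = tf_idf(n)
# the second term is highly characteristic of its document:
# w[1][1] == (3/3) * log(3/1)
```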


\subsection{Positive Pointwise Mutual Information}

The second weighting technique is called \term{positive pointwise mutual information}, or \term{PPMI} for short. It is calculated as in the following definition:

\begin{definition}[ppmi]
\begin{align*}
    p_{ij}&=\frac{n_{ij}}{\sum_{t_k \in T}\sum_{d_l \in D}n_{kl}}\\
    p_{i*}&=\frac{\sum_{d_l \in D}n_{il}}{\sum_{t_k \in T}\sum_{d_l \in D}n_{kl}}\\
    p_{*j}&=\frac{\sum_{t_k \in T}n_{kj}}{\sum_{t_k \in T}\sum_{d_l \in D}n_{kl}}
\end{align*}

\begin{equation*}
    pmi_{ij} = \log \left( \frac{p_{ij}}{p_{i*}p_{*j}}\right)
\end{equation*}

\begin{displaymath}
   ppmi_{ij} = \left\{
     \begin{array}{lr}
       pmi_{ij} & : pmi_{ij} > 0\\
       0 & : pmi_{ij} \leq 0
     \end{array}
   \right.
\end{displaymath}

\end{definition}
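A minimal Python sketch of the PPMI weighting, under the same conventions (rows are terms, columns are documents); the function name is ours:

```python
import math

def ppmi(n):
    """Positive pointwise mutual information weighting of a
    term-document count matrix n (list of rows, one per term)."""
    total = sum(sum(row) for row in n)
    row_sums = [sum(row) for row in n]                               # p_{i*} numerators
    col_sums = [sum(row[j] for row in n) for j in range(len(n[0]))]  # p_{*j} numerators
    weighted = []
    for i, row in enumerate(n):
        out_row = []
        for j, freq in enumerate(row):
            if freq == 0:
                out_row.append(0.0)  # log of zero is undefined; PPMI maps it to 0
            else:
                pmi = math.log((freq * total) / (row_sums[i] * col_sums[j]))
                out_row.append(max(pmi, 0.0))
        weighted.append(out_row)
    return weighted

n = [[4, 0],
     [0, 4]]
w = ppmi(n)
# perfectly associated pairs: pmi = log(4 * 8 / (4 * 4)) = log 2
```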

\subsection{Similarity}

Once we obtain a word–context matrix, we would like to compare its columns, i.e. terms, on the basis of their similarity. The most common way to do this is to calculate the angles between all possible vector pairs. The smaller the angle between a pair of \term{term vectors}, the more similar they are. Let $\vec{x}=\langle x_1,x_2,\dots,x_n\rangle$ and $\vec{y}=\langle y_1, y_2,\dots,y_n\rangle$ be the vectors making up a given term pair $p$. The cosine of the angle $\theta$ between the vectors $\vec{x}$ and $\vec{y}$ is calculated as follows:

\begin{align*}
  \cos(\theta) &= \frac{\vec{x}}{\|\vec{x}\|}\cdot\frac{\vec{y}}{\|\vec{y}\|} \\
  &= \frac{\vec{x}}{\sqrt{\vec{x}\cdot\vec{x}}}\cdot\frac{\vec{y}}{\sqrt{\vec{y}\cdot\vec{y}}} \\
  &= \frac{\sum_{i=1}^{n} x_{i} y_{i}}{\sqrt{\sum_{i=1}^{n} x_{i}^2\sum_{i=1}^{n} y_{i}^2}}.
\end{align*}

That is, the cosine of the angle between two vectors is the dot product of the vectors after each has been divided by its Euclidean norm. The vectors are thus normalized to unit vectors, and their length plays no role in determining the angle between them. Since we only have non-negative frequencies in our vectors, the cosine will be between $0$, for $90$ degrees, and $1$, for $0$ degrees. If the angle between two vectors is $90$ degrees, they are said to be \term{orthogonal} to each other; for term similarity, this would mean completely unrelated terms. The angle of $0$ degrees, on the other hand, corresponds to \term{parallel}, identically directed vectors, which means identical terms. All the angles in between signify varying degrees of term similarity.
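The cosine similarity is straightforward to express in Python; the function name is our own:

```python
import math

def cosine_similarity(x, y):
    """Cosine of the angle between two frequency vectors: the dot
    product of the vectors normalized to unit length."""
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = math.sqrt(sum(a * a for a in x))
    norm_y = math.sqrt(sum(b * b for b in y))
    return dot / (norm_x * norm_y)

# orthogonal vectors (unrelated terms) give 0;
# parallel vectors (identical terms) give 1 up to rounding
```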

\section{Alignment of Discontinuous Repeats}

In this chapter, we shall combine everything we have done so far into one system. First, we convert discontinuous repeats into vectors in a document space. We shall refer to these as \term{repeat vectors}. 

\subsection{\emph{index-document} Mapping}

Since we store an entire corpus in a single sequence, we do not know in which document each suffix is located. To mitigate this problem, we introduce a mapping from the indices at which documents start to the corresponding document ids. We shall call this the \term{index-document mapping}. The index-document mapping is created during sequence construction and thus adds no performance overhead to the suffix array construction. To convert a repeat into a repeat vector, Procedure \ref{pro:tovector} is used. It is based upon the ideas presented in \cite{Yamamoto:2001:USA:972778.972779}. The procedure goes through all the indices in an interval, fetches the corresponding sequence indices from the suffix table, and then identifies the document id corresponding to each sequence index. Finally, these document ids are combined into a repeat vector.

\begin{algorithm}
\caption{$\mathbf{tovector}([i..j], idmap)$ - converting an interval to a repeat vector}
\begin{algorithmic}
\REQUIRE $[i..j]$
\COMMENT{an interval}
\REQUIRE $idmap$
\COMMENT{index-document mapping}
\ENSURE $\vec{v}$
\COMMENT{a repeat vector}
\STATE $\vec{v} = \vec{0}$
\COMMENT{initialize the repeat vector to a zero vector}
\FORALL{index $k$, where $i\leq k \leq j$}
    \STATE $seqindex \gets suftab(k)$
    \STATE find the largest index $m$ in the $idmap$ such that $m\leq seqindex$
    \STATE $docid \gets idmap(m)$
    \COMMENT{fetch a document id from the index-document mapping}
    \STATE $\vec{v}_{docid} \gets \vec{v}_{docid} + 1$
    \COMMENT{increment the value corresponding to this document id in the vector}
\ENDFOR
\RETURN $\vec{v}$
\end{algorithmic}
\label{pro:tovector}
\end{algorithm}
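Procedure \ref{pro:tovector} can be sketched in Python; here $suftab$ is assumed to be a list, and $idmap$ a sorted list of (start index, document id) pairs recorded during sequence construction:

```python
from bisect import bisect_right

def tovector(interval, suftab, idmap, num_docs):
    """Convert an interval [i..j] over the suffix table into a repeat
    vector counting occurrences per document.  idmap is a sorted list
    of (start_index, doc_id) pairs, one per document."""
    i, j = interval
    starts = [start for start, _ in idmap]
    v = [0] * num_docs
    for k in range(i, j + 1):
        seqindex = suftab[k]
        # largest recorded document start m with m <= seqindex
        m = bisect_right(starts, seqindex) - 1
        docid = idmap[m][1]
        v[docid] += 1
    return v

# toy example: two documents starting at sequence positions 0 and 5
idmap = [(0, 0), (5, 1)]
suftab = [3, 7, 1, 6]   # hypothetical suffix table entries
# tovector((0, 3), suftab, idmap, 2) == [2, 2]
```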

An additional problem arising from having an entire corpus in one sequence is that repeats could span document boundaries. To obviate this problem, we introduce a \term{separator symbol} at the end of each document in the sequence. Each occurrence of a separator is a unique symbol that does not appear elsewhere in the corpus. This guarantees that there will be no repeats across document boundaries. Thus Procedure \ref{pro:tovector} can now be applied to the last interval of a discontinuous repeat only, since for each occurrence of a discontinuous repeat, all its sequences must now reside in the same document.
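Sequence construction with unique separator symbols, together with the accompanying index-document mapping, can be sketched as follows; representing each separator as a (symbol, document id) pair is merely one simple way of making it unique:

```python
def build_sequence(documents):
    """Concatenate tokenized documents into a single sequence,
    appending a unique separator after each document and recording
    the index-document mapping as (start_index, doc_id) pairs."""
    sequence, idmap = [], []
    for doc_id, doc in enumerate(documents):
        idmap.append((len(sequence), doc_id))
        sequence.extend(doc.split())
        sequence.append(("$", doc_id))  # unique per-document separator
    return sequence, idmap

seq, idmap = build_sequence(["a b", "c"])
# idmap == [(0, 0), (3, 1)]; separators sit at positions 2 and 4
```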

\subsection{\emph{repeat-context} Matrix}

Having prevented the repeats from occurring at the document boundary, we apply Procedure \ref{pro:tovector} to all the repeats in a corpus. Thus we obtain a \term{repeat-context matrix}.

\input{rspace.tex}

In the example from Figure \ref{fig:rspace}, the discontinuous repeats are shown as vectors in the document space and are highlighted in the respective documents.

The following procedure assumes that documents are already aligned; otherwise the vectors would not be in the same document space. We produce a repeat-context matrix for both the source and the target corpus. Then we compare column vectors between the two, i.e. each repeat vector from the source corpus with each repeat vector from the target corpus. For each vector in the source corpus, we choose the $n$ most similar ones from the target corpus.
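This comparison step can be sketched in Python; the vectors $[3,1]$ and $[1,1]$ are taken from the worked example in this section, while the second target vector is invented for illustration:

```python
import math

def n_best_alignments(source_vectors, target_vectors, n):
    """For each source repeat vector, return the ids of the n most
    similar target repeat vectors by cosine similarity.  Both corpora
    must share the same document space, i.e. be document-aligned."""
    def cos(x, y):
        dot = sum(a * b for a, b in zip(x, y))
        return dot / math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))
    return {sid: sorted(target_vectors,
                        key=lambda tid: cos(sv, target_vectors[tid]),
                        reverse=True)[:n]
            for sid, sv in source_vectors.items()}

src = {"both ... and": [3, 1]}
tgt = {"sowohl ... als auch": [1, 1],
       "und": [0, 5]}            # a hypothetical competing repeat
best = n_best_alignments(src, tgt, 1)
# best["both ... and"] == ["sowohl ... als auch"]
```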

\input{vectors.tex}

An example of a vector comparison is shown in Figure \ref{fig:vectors}. Let us calculate the cosine similarity for this particular example. From the highlighted discontinuous repeats in the documents, we first construct the \term{source repeat vector} $\vec{s}$ and the \term{target repeat vector} $\vec{t}$:

\begin{equation*}
  \vec{s} = \left[\begin{array}{c}3\\1\end{array}\right] , \vec{t} = \left[\begin{array}{c}1\\1\end{array}\right].
\end{equation*}

From the definition of cosine similarity, we get that

\begin{align*}
  \cos(\theta) &= \frac{\sum_{i=1}^{n} s_{i} t_{i}}{\sqrt{\sum_{i=1}^{n} s_{i}^2\sum_{i=1}^{n} t_{i}^2}} \\
  &= \frac{3\times1+1\times1}{\sqrt{(3^2+1^2)\times(1^2+1^2)}} \\
  &= \frac{4}{\sqrt{20}} \approx 0.894
\end{align*}

\noindent
which corresponds to an angle $\theta$ of $\approx 26.56$ degrees.
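The arithmetic of this worked example is easily checked in Python:

```python
import math

# the source and target repeat vectors from the worked example
s, t = [3, 1], [1, 1]
dot = sum(a * b for a, b in zip(s, t))  # 3*1 + 1*1 = 4
cos_theta = dot / math.sqrt(sum(a * a for a in s) * sum(b * b for b in t))
theta = math.degrees(math.acos(cos_theta))
# cos_theta == 4 / sqrt(20), roughly 0.894; theta roughly 26.6 degrees
```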

The first occurrence of \texttt{both \dots and} in the source corpus was translated as \texttt{und}, unlike the remaining occurrences, which were translated as \texttt{sowohl \dots als auch}. Such variation in translation is a common obstacle an alignment system has to deal with. Ideally, given a larger corpus, both translations should find their way into the $n$ best alignments. Another interesting issue we observe in this example is that there is an intervening sequence in the first document of the source corpus. Since we have offered no satisfactory treatment of this phenomenon, it has affected the count of the repeats.

\clearpage
\part{Implementation and Testing}

\section{Implementation}

In order to better understand the real-world behaviour and practicality of the alignment system proposed here, we have implemented a version of this system. The entire implementation, written in Java, is available online under an open-source license at \url{http://gaal.googlecode.com}. The implementation has been tested using the Europarl parallel corpus created by \cite{koehn2005epc}. The Europarl corpus is a collection of proceedings of the European Parliament translated into eleven languages. The examples above were also taken from the Europarl corpus and are indicative of the nature of the text. The corpus is widely used, particularly in academic research on machine translation. It is available online at \url{http://www.statmt.org/europarl/}. Since the Europarl corpus is already sentence-aligned, we have considered each sentence to be a document for the purpose of vector space model construction.

\section{Testing}

We have chosen English-German data as input for the testing phase. The parallel corpus for this language pair consists of 1,279,436 sentences. We collected $n$-part discontinuous repeats, i.e. discontinuous repeats with $n$ sequences, from the source corpus and $m$-part discontinuous repeats from the target corpus for various values of $n$ and $m$. Then we built repeat vectors and performed pairwise alignment. From now on, we shall refer to such an alignment as \term{$n$-to-$m$ alignment}. Note that in the examples below, words from the stoplist have been removed during preprocessing.

\subsection{$2$-to-$2$ Alignment of Discontinuous Repeats}

For the first stage of the testing phase, an alignment of two-part discontinuous repeats in the source and target corpora ($2$-to-$2$ alignment) has been performed.

\begin{table}[H]
\begin{tabular}{|p{6cm}|p{6cm}|}
\hline
\textsc{Source} & \textsc{Target} \\
\hline
\hline
i speak from [\textit{my own, my own limited, personal}] experience &
ich spreche aus [\textit{meiner eigenen begrenzten, eigener, persönlicher}] erfahrung \\
\hline
new eu institutions [\textit{require, demand}] resources &
neuen eu institutionen [\textit{brauchen, verbrauchen}] ressourcen \\
\hline
\end{tabular}
\captionof{table}{A sample of $2$-to-$2$ alignments.}
\label{tab:results2-2}
\end{table}

To illustrate how the alignments fit into their context in the original text, Table \ref{tab:results2-2} shows some $2$-to-$2$ alignments along with the sets of possible \term{fillers}. A filler is a sequence that fits in the gap between two sequences forming a discontinuous sequence. Its length can vary from zero, i.e. an empty sequence, to the window size.

\begin{table}[H]
\begin{tabular}{|r|p{5cm}|p{5.5cm}|l|}
\hline
\textsc{\#} & \textsc{Source} & \textsc{Target} & \textsc{Sim.} \\
\hline

1 & of yesterday \dots s sitting have been distributed & gestrigen sitzung \dots verteilt & 0.81\\
\hline
2 & i declare session \dots adjourned & ich erkläre sitzungsperiode \dots für unterbrochen & 0.70\\
\hline
3 & on behalf of my \dots group & im namen meiner \dots fraktion & 0.73\\
\hline
4 & on behalf of committee \dots on & im namen ausschusses \dots für & 0.78\\
\hline
5 & on behalf of confederal group of \dots left & im namen konföderalen fraktion \dots linken & 0.60\\
\hline
6 & on proposal \dots for & über vorschlag \dots für & 0.70\\
\hline
7 & on proposal for \dots council & über vorschlag für \dots rates & 0.74\\
\hline
8 & as author \dots is & da fragesteller \dots ist & 0.67\\
\hline
9 & debate on following \dots motions & aussprache über folgende \dots entschließungsanträge & 0.76\\
\hline
10 & debate is \dots closed & aussprache ist \dots geschlossen & 0.84\\
\hline
11 & next item is commission \dots statement & folgt erklärung \dots kommission & 0.62\\
\hline
12 & next item is joint debate \dots on & folgt gemeinsame aussprache \dots über & 0.84\\
%%\hline
%%next item is continuation of \dots on & folgt fortsetzung \dots über & 0.75\\
\hline
13 & next item is continuation of \dots debate & folgt fortsetzung \dots aussprache & 0.78\\
\hline
14 & for following \dots reasons & aus folgenden \dots gründen & 0.70\\
\hline
15 & for waiver of \dots immunity & auf aufhebung \dots immunität & 0.66\\
\hline
16 & president declared \dots common position & präsident erklärt \dots gemeinsamen standpunkt & 0.86\\
\hline
17 & president declared common position \dots as amended & präsident erklärt \dots geänderten gemeinsamen standpunkt & 0.77\\
\hline
18 & at european \dots level & auf europäischer \dots ebene & 0.62\\
\hline
19 & an oral \dots amendment & einen mündlichen \dots änderungsantrag & 0.61\\
\hline
20 & we shall now proceed to \dots vote & wir kommen nun \dots zur abstimmung & 0.62\\
\hline
21 & request for waiver of \dots immunity & antrag auf aufhebung \dots immunität & 0.68\\
\hline
22 & minutes of yesterday \dots s sitting have been distributed & protokoll gestrigen sitzung \dots verteilt & 0.81\\
\hline
23 & minutes of yesterday ' s \dots have been distributed & protokoll gestrigen \dots verteilt & 0.82\\
\hline
24 & joint debate \dots on & gemeinsame aussprache \dots über & 0.79\\
\hline
25 & danish social democrats in \dots parliament & dänischen sozialdemokraten im \dots parlament & 0.86\\
\hline
26 & guardian \dots treaties & hüterin \dots verträge & 0.68\\
\hline
27 & violence against \dots women & gewalt gegen \dots frauen & 0.72\\
\hline
\end{tabular}
\captionof{table}{A sample of $2$-to-$2$ alignments with similarity scores.}
\label{tab:results}
\end{table}

\noindent
Although our study does not deal with fillers, they can be a subject of analysis in their own right, as they represent the variation in a sequence, unlike repeats, which embody persistence. Filler sets are enclosed in square brackets to show that they are not themselves part of a discontinuous repeat. The similarity scores are not shown for lack of space.

Table \ref{tab:results} presents a sample of results of $2$-to-$2$ alignment. Each row consists of an example number, a source repeat, a target repeat, and a cosine similarity score between the corresponding vectors. In both source and target repeats, sequences are separated by dots which replace the fillers.

Let us examine some of the alignments in Table \ref{tab:results} more closely. In alignments \#1 and \#22, the word \texttt{sitting} and its German counterpart \texttt{sitzung} are located on different sides of the gap. This happens due to language-specific differences in noun phrase formation. Something similar can be observed in alignment \#11, but this time the German counterparts of the two words \texttt{commission} and \texttt{statement} are swapped in the aligned discontinuous repeat. In alignment \#17, the English and German versions differ due to the choice of syntactic structure. In alignment \#19, no exact correspondence for the English part can be found, since the German compound \texttt{änderungsantrag} has not been split during tokenization.

\subsection{$n$-to-$m$ Alignment of Discontinuous Repeats}

So far we have only seen $2$-to-$2$ alignments; next, we present some $n$-to-$m$ alignments with $n>2$ or $m>2$. For illustrative purposes, the alignments are presented with sets of possible fillers, and similarity scores are omitted.

\begin{table}[H]
\begin{tabular}{|p{6cm}|p{6cm}|}
\hline
\textsc{Source} & \textsc{Target} \\
\hline
\hline
congratulates mrs [\textit{dybkj, kathalijne buitenweg}] on [\textit{this, her}] report &
beglückwünscht frau [\textit{dybkjr, kathalijne buitenweg}] zu [\textit{ihrem, diesem}] bericht \\
\hline
more mines [\textit{were cleared, each year}] than [\textit{were, are}] laid &
mehr minen [\textit{zu räumen, geräumt würden}] als [\textit{neue, neu}] verlegt werden \\
\hline
vote will [\textit{be taken, be held, take place}] tomorrow [\textit{thursday, morning}] at &
abstimmung findet [\textit{erst, auf jeden fall}] morgen [\textit{vormittag, früh}] um \\
\hline
president declared [\textit{two, amended}] common [\textit{positions, position}] approved &
präsident erklärt [\textit{so geänderten, geänderten}] gemeinsamen [\textit{standpunkt, standpunkte}] für gebilligt \\
\hline

\end{tabular}
\captionof{table}{A sample of $3$-to-$3$ alignments.}
\label{tab:results3-3}
\end{table}

Table \ref{tab:results3-3} shows a sample of $3$-to-$3$ alignments. It is clear from both the $3$-to-$3$ and the $2$-to-$2$ alignments that some German phrases have more discontinuous components than their English counterparts. To investigate this issue, we have tried a number of $n$-to-$m$ alignments with $n \neq m$.

\begin{table}[H]
\begin{tabular}{|p{6cm}|p{6cm}|}
\hline
\textsc{Source} & \textsc{Target} \\
\hline
\hline
by concentrating resources [\textit{on least favoured, in most needy}] regions &
durch konzentration mittel [\textit{in, auf}] am stärksten [\textit{benachteiligten, bedürftigen}] regionen \\
\hline
leaders at their [\textit{berlin summit, summit in lisbon}] last year &
regierungschefs eu auf ihrem [\textit{gipfeltreffen, gipfel}] in [\textit{berlin, lissabon}] im [\textit{letzten, vergangenen}] jahr \\
\hline
\end{tabular}
\captionof{table}{A sample of $2$-to-$3$ alignments.}
\label{tab:results2-3}
\end{table}

In Table \ref{tab:results2-3}, a sample of $2$-to-$3$ alignments is shown. The system does a good job of identifying variation in sentences and of picking up persistent patterns, although in some examples the variation is located differently in the two languages.

Table \ref{tab:results3-4} shows an example of variation that is probably due to a typo.

\begin{table}[H]
\begin{tabular}{|p{6cm}|p{6cm}|}
\hline
\textsc{Source} & \textsc{Target} \\
\hline
on behalf of confederal group of [\textit{european}] united left [\textit{/}] nordic green left &
im namen konföderalen fraktion vereinigten [\textit{europäischen}] linken [\textit{/}] nordische [\textit{gründe, grüne}] linke \\
\hline
\end{tabular}
\captionof{table}{A sample of $3$-to-$4$ alignments.}
\label{tab:results3-4}
\end{table}

We conclude that in the $n$-to-$m$ alignments with $n \neq m$, variation is due more to vocabulary choice than to differences in syntactic structure.

As the examples make clear, having the same number of sequences in both the source and target discontinuous repeats might not always be optimal, and some heuristics are needed to pick appropriate $n$ and $m$ values for each particular language pair.

\clearpage

\section{Conclusion and Future Research Directions}

In the current paper, we have presented a system for alignment of discontinuous sequences. As part of this system, we have introduced a technique for recursive discovery of discontinuous subsequences that relies on a new suffix array based data structure called embedded suffix tree. This technique can reveal patterns consisting of an unbounded number of arbitrarily long phrases in a corpus. Thus, we are no longer constrained by predefined phrase sizes and/or pattern lengths. Thanks to the efficiency of underlying data structures, this technique can easily deal with large corpora, as our own experiments have demonstrated. Approaches like this are increasingly becoming vital tools in computational linguistics, making it possible to cope with constant growth in the amount of textual data available for research. For the alignment part, the vector space model approach has been chosen for its intuitiveness and straightforwardness. Despite its seeming simplicity, it has performed well during the experiments. In the light of the wealth of work on vector space models that suggests numerous possible enhancements, we strongly believe that this approach can pave the way for fruitful research.

In the first part of this paper, we first defined the notion of a continuous repeat. This was followed by a survey of the suffix array based data structures best suited for repeat discovery (enhanced suffix arrays and linearised suffix trees). We then turned our attention to discontinuous repeats and introduced an extension of the linearised suffix tree, which we have called the embedded suffix tree, specifically tailored for discontinuous repeat discovery. We have also described a procedure for the recursive construction of embedded suffix trees that allows for the discovery of discontinuous repeats consisting of any number of sequences.

In the second part, we have briefly reviewed the terminology used in the fields of natural language processing and machine translation. We have then defined alignment and introduced vector space models. After that, we have presented an alignment procedure that takes the discontinuous repeats defined in the first part and converts them to repeat vectors. Finally, we have described a simple way to compare these vectors based on the cosine similarity measure.

The paper is complemented with illustrations of the data structures and procedures. An implementation of the proposed system is available online as an open-source project, and readers are welcome to download it. This makes it possible to experiment with the system while reading the paper. Although the test implementation is an early prototype, it performs well and produces interesting results. At this early stage of development, it can already be utilized as a research tool to investigate variation in translation, and/or as a tool for \term{machine-aided human translation}.

As for further development, a number of possible enhancements can immediately be proposed. First, the treatment of intervening sequences during the discovery of discontinuous sequences remains an open problem. Another interesting idea is to apply \term{singular value decomposition} based \term{dimensionality reduction} to the repeat-context matrix (\cite{DBLP:books/daglib/0001349}). By reducing the dimensionality of a repeat-context matrix we can achieve performance improvements, thanks to a lower number of calculations, as well as improve alignment by working in a latent document space.

\section*{Acknowledgments}
I am thankful to my supervisor Dale Gerdemann for his indispensable advice, and to an anonymous reviewer who suggested numerous improvements in the presentation of the current paper.

\clearpage

\nocite{*}

\bibliography{sa}
\bibliographystyle{apalike}

\clearpage

\listoffigures
\listoftables

\clearpage

\printindex

\end{document}
