\documentclass[12pt,notitlepage]{book}

\pagestyle{plain}

\frenchspacing

\usepackage[utf8]{inputenc}
\usepackage[english]{babel}

\usepackage{a4wide}
\usepackage[left=4cm,right=4cm,top=2.5cm,bottom=2.5cm]{geometry}
\usepackage{setspace}

\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsthm}

\usepackage[pdftex]{graphicx}
\usepackage{epstopdf}

\usepackage{booktabs}
\usepackage{courier}
\usepackage{extarrows}
\usepackage{multirow}
\usepackage{wasysym}
\usepackage{shuffle}

\usepackage[ruled,vlined,commentsnumbered]{algorithm2e}

\usepackage{index}
\usepackage{enumerate}

\usepackage{hyperref}
\hypersetup{
    bookmarks=true,         % show bookmarks bar?
    unicode=true,           % non-Latin characters in Acrobat’s bookmarks
    pdftoolbar=true,        % show Acrobat’s toolbar?
    pdfmenubar=true,        % show Acrobat’s menu?
    pdffitwindow=false,     % window fit to page when opened
    pdfstartview={FitH},    % fits the width of the page to the window
    pdftitle={Clearing Restarting Automaton}, % title
    pdfauthor={Peter Cerno}, % author
    pdfkeywords={analysis by reduction, clearing restarting automata, 
formal languages, grammatical inference}, % list of keywords
    pdfnewwindow=true,      % links in new window
    colorlinks=false,       % false: boxed links; true: colored links
    linkcolor=red,          % color of internal links
    citecolor=green,        % color of links to bibliography
    filecolor=magenta,      % color of file links
    urlcolor=cyan           % color of external links
}
\usepackage[all]{hypcap}

\usepackage{emptypage}


\newcommand{\clRA}{\mbox{\sf cl-RA\/}}
\newcommand{\kclRA}[1][k]{\mbox{\sf $#1$-\clRA}}
\newcommand{\sclRA}{\mbox{\sf scl-RA}}
\newcommand{\ksclRA}[1][k]{\mbox{\sf $#1$-\sclRA}}
\newcommand{\CRS}{\mbox{\sf CRS\/}}
\newcommand{\kCRS}[1][k]{\mbox{\sf $#1$-\CRS}}
\newcommand{\DclRA}{\mbox{\sf $\Delta$cl-RA\/}}
\newcommand{\kDclRA}[1][k]{\mbox{\sf $#1$-\DclRA}}
\newcommand{\DXclRA}{\mbox{\sf $\Delta^*$cl-RA\/}}
\newcommand{\kDXclRA}[1][k]{\mbox{\sf $#1$-\DXclRA}}

\newcommand{\Pref}{\mbox{\it Pref\/}}
\newcommand{\Suff}{\mbox{\it Suff\/}}
\newcommand{\Int}{\mbox{\it Int\/}}
\newcommand{\Inf}{\mbox{\it Inf\/}}

\newcommand{\X}{\mbox{\sf X}}
\newcommand{\R}{\mbox{\sf R}}
\newcommand{\RW}{\mbox{\sf RW}}
\newcommand{\RWW}{\mbox{\sf RWW}}
\newcommand{\RR}{\mbox{\sf RR}}
\newcommand{\Rr}{\mbox{\sf R(R)}}
\newcommand{\RL}{\mbox{\sf RL}}
\newcommand{\RRW}{\mbox{\sf RRW}}
\newcommand{\RrW}{\mbox{\sf R(R)W}}
\newcommand{\RLW}{\mbox{\sf RLW}}
\newcommand{\RRWW}{\mbox{\sf RRWW}}
\newcommand{\RrWW}{\mbox{\sf R(R)WW}}
\newcommand{\RLWW}{\mbox{\sf RLWW}}

\newcommand{\calL}[1]{\mathcal{L}(#1)}

\newtheorem{definition}{Definition}[chapter]
\newtheorem{theorem}{Theorem}[chapter]
\newtheorem{proposition}{Proposition}[chapter]
\newtheorem{lemma}{Lemma}[chapter]
\newtheorem{remark}{Remark}[chapter]
\newtheorem{corollary}{Corollary}[chapter]
\newtheorem{example}{Example}[chapter]
\newtheorem{claim}{Claim}[chapter]
\newtheorem{statement}{Statement}[chapter]
\newtheorem{metaalgorithm}{Meta-Algorithm}[chapter]

\DeclareMathOperator{\size}{\mathsf{size}}

\newindex{default}{idx}{ind}{Index}

\frontmatter


\title{Clearing Restarting Automaton}
\author{Peter {\v C}erno}

\begin{document}

\selectlanguage{english}

\begin{titlepage}
\begin{center}
\ \\

\vspace{15mm}

\Large Project\\
{\bf Clearing Restarting Automaton}
\footnote{This work was partially supported by the Grant Agency of Charles University under 
Grant-No. 272111/A-INF/MFF and by the Czech Science Foundation under 
Grant-No. P103/10/0783 and Grant-No. P202/10/1333.}

\vspace{\fill}

{\Large\bf USER GUIDE}

\vspace{\fill}

%\normalsize
\Large
Peter Černo\\
\large
Prague, \the\year
\end{center}

\end{titlepage}

\newpage

\tableofcontents

\newpage

\chapter{Preface}

Restarting automata \cite{JMPV95} were introduced as a tool for modeling some 
techniques used in natural language processing. In particular, they are used for 
analysis by reduction, which is a method for checking the (syntactic) correctness 
or non-correctness of a sentence. While restarting automata are quite general 
(see \cite{O06} for an overview), they still lack some properties that could 
facilitate their wider use. One of their drawbacks is, for instance, the lack of 
an intuitive way to infer their instructions. There have been several attempts 
to learn their instructions by using genetic algorithms, but the results are far 
from applicable. 

Clearing restarting automata were introduced in \cite{CM09,CM10} as a new 
restricted model of restarting automata which, based on a limited context, 
can only delete a substring of the current content of its tape. 
The model is motivated by the need for simpler definitions and, at the same 
time, by the aim of making such automata efficiently learnable.
Clearing restarting automata are studied in \cite{CM10}. We only mention
that they can recognize all regular languages, some context-free languages
and even some non-context-free languages. Moreover, the model is effectively 
learnable from positive samples of reductions and it is even possible 
to infer some non-context-free languages in this way. However, there are some
context-free languages that are outside the class of languages accepted by clearing 
restarting automata. This limitation led to the development of extended versions 
of clearing restarting automata. Two such extensions were introduced in \cite{CM10} -- 
the so-called $\Delta$-clearing restarting automata and
$\Delta^*$-clearing restarting automata. Both of them may use only
a single auxiliary symbol $\Delta$. $\Delta$-clearing restarting
automata can leave a mark -- the symbol $\Delta$ -- at the place of deletion,
besides rewriting into the empty word $\lambda$.
$\Delta^*$-clearing restarting automata can rewrite a subword $w$ into
$\Delta^j$, where $j$ is bounded from above by the length of $w$.
It was shown in \cite{CM10} that $\Delta^*$-clearing restarting automata 
are powerful enough to recognize  all context-free languages. 
This result was later extended in \cite{CM11,CM11tech} to hold also for 
the more restricted $\Delta$-clearing restarting automata.
In \cite{C12tech} yet another model was proposed, the so-called 
subword-clearing restarting automaton, which, based on a limited context, can 
replace a substring $z$ of the current content of its tape by a proper substring 
of $z$. This model proved useful in some grammatical inference scenarios. It was 
shown that it is possible, by using a simple learning algorithm, to identify any 
clearing (subword-clearing) restarting automaton in the limit from any ``reasonable'' 
presentation of positive and negative samples.

The goal of the project \emph{Clearing Restarting Automaton} is to provide 
a basic development framework for implementing the algorithms concerning 
clearing restarting automata and other similar models (like subword-clearing 
restarting automata, $\Delta$-clearing restarting automata etc.). 
In other words, our aim is to bring the theory closer to the real world.
We do not expect the algorithms developed in this project to be directly applicable 
to real-world data. Instead, they can serve as tools for researchers who are 
interested in models used in the theory of automata and formal languages.
The project itself is hosted on the following website: 
\url{http://code.google.com/p/clearing-restarting-automata/}.


This guide has the following structure. Chapter \ref{chapter:theoretical-background} 
introduces the automata models used in this guide and fixes the notation. 
Chapter \ref{chapter:application} shows how to install the application and
how to use it to define and investigate our automata models. 
It also shows how to infer automata from a sample computation and 
from a given set of positive and negative samples.

The project \emph{Clearing Restarting Automaton} is freely licensed under the 
GNU GPL v3.0 and available for Windows, Linux and Unix platforms. It was developed 
in C\# 4 using Microsoft Visual Studio 2010, and the target platform 
is .NET Framework 4.0. If you want to use the application on Windows, you need 
to have the .NET Framework 4.0 installed on your computer. For Linux and Unix 
platforms, you can use the open source Mono project for running this application. 
You are very welcome to use and modify the source code, and even to contribute to 
the project itself, provided that you contact me and mention my authorship in your 
own projects.

\mainmatter

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Theoretical Background}\label{chapter:theoretical-background}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%


As our reference concerning the theory of automata and
formal languages we use the monograph \cite{HU69}. 

An \emph{alphabet} is a finite nonempty set. The elements of
an alphabet $\Sigma$ are called \emph{letters} or \emph{symbols}.
A \emph{word} or \emph{string} over an alphabet $\Sigma$ is a finite
sequence consisting of zero or more letters of $\Sigma$, whereby the same
letter may occur several times. The sequence of zero letters is called
the \emph{empty word}, written $\lambda$. The set of all words (all
nonempty words, respectively) over an alphabet $\Sigma$ is denoted by
$\Sigma^*$ ($\Sigma^+$, respectively). If $x$ and $y$ are words over
$\Sigma$, then so is their \emph{catenation} (or \emph{concatenation})
$xy$ (or $x \cdot y$), obtained by juxtaposition, that is, writing $x$
and $y$ one after another. Catenation is an associative operation and the
empty word $\lambda$ acts as an identity: $w \lambda = \lambda w = w$
holds for all words $w$. Because of the associativity, we may use the
notation $w^i$ in the usual way. By definition, $w^0 = \lambda$.

Let $u$ be a word in $\Sigma^*$, say $u = a_1 \ldots a_n$ with
$a_i \in \Sigma$. 
%We use $u[i]$ to denote the $i$th letter of $u$,
%i.e. $u[i] = a_i$.
We say that $n$ is the \emph{length} of $u$ and we
write $|u|=n$. The sets of all words over $\Sigma$ of length $k$, or at
most $k$, are denoted by $\Sigma^k$ and $\Sigma^{\le k}$, respectively.
By $|u|_a$, for $a \in \Sigma$, we denote the total number of occurrences
of the letter $a$ in $u$. The \emph{reversal} (\emph{mirror image}) of
$u$, denoted $u^R$, is the word $a_n \ldots a_1$. 
Finally, a \emph{factorization} of $u$ is any sequence $u_1, \ldots, u_t$ of
words such that $u = u_1 \cdots u_t$.

For a pair $u$, $v$ of words we define the following relations:
$u$ is a \emph{prefix} of $v$, if there exists
a word $z$ such that $v = uz$;
$u$ is a \emph{suffix} of $v$, if there exists
a word $z$ such that $v = zu$; and
$u$ is a \emph{factor} (or \emph{subword}) of $v$,
if there exist words $z$ and $z'$ such that $v = zuz'$.
Observe that $u$ itself and $\lambda$ are subwords, prefixes and
suffixes of $u$. Other subwords, prefixes and suffixes are called
\emph{proper}.
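
These three relations translate directly into code. The following minimal Python
sketch (the function names are our own, chosen for illustration) expresses them
via standard string operations:

```python
def is_prefix(u, v):
    # u is a prefix of v iff v = u z for some word z
    return v.startswith(u)

def is_suffix(u, v):
    # u is a suffix of v iff v = z u for some word z
    return v.endswith(u)

def is_factor(u, v):
    # u is a factor (subword) of v iff v = z u z' for some words z, z'
    return u in v
```

Note that, in accordance with the observation above, both the empty word and the
word $v$ itself pass all three tests. Exactly these suffix and prefix checks
reappear when instructions of context rewriting systems are applied.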

%In the following, $\Pref_k(u)$ denotes either the (nontrivial) prefix of length $k$ 
%of the word $u$ in case $|u|>k$, or the whole of $u$ in case $|u|\le k$.
%Similarly, $\Suff_k(u)$ denotes either the (nontrivial)suffix of length $k$ of 
%the word $u$ in case $|u|>k$, or the whole of $u$ in case $|u|\le k$.
%The set of all subwords of length $k$ of $u$ that occur in $u$ in a position
%other than the prefix or suffix is denoted $\Int_k(u)$ (interior words).

Subsets of $\Sigma^*$ are referred to as
(\emph{formal}) \emph{languages} over $\Sigma$.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Context Rewriting Systems}\label{se:crs}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

In this section we introduce our central concept, called 
\emph{context rewriting systems}, which will serve us as 
a framework for clearing (subword-clearing) restarting automata
and other similar models.

\begin{definition}[\cite{CM10}]\label{definition:crs}
Let $k$ be a positive integer. A \emph{$k$-context rewriting system}
(\emph{$\kCRS$} for short) is a system $M = (\Sigma, \Gamma, I)$,
where $\Sigma$ is an input alphabet, $\Gamma \supseteq \Sigma$ is a
working alphabet not containing the special symbols $\cent$ and $\$$,
called \emph{sentinels}, and $I$ is a finite set of \emph{instructions}
of the form:
$$(x, z \to t, y)\;,$$
where $x$ is called \emph{left context},
$x \in LC_k = \Gamma^k \cup \cent\cdot\Gamma^{\le k-1}$,
$y$ is called \emph{right context},
$y \in RC_k = \Gamma^k \cup \Gamma^{\le k-1}\cdot\$ $ and
$z \to t$ is called \emph{instruction-rule}, $z, t \in \Gamma^*$.
The \emph{width} of the instruction $i = (x, z \to t, y)$ is
$|i| = |xzty|$.

A word $w = uzv$ \emph{can be rewritten} into $utv$ (denoted as
$uzv \vdash_M utv$) if and only if there exists an instruction
$i = (x, z \to t, y) \in I$ such that $x$ is a suffix of $\cent \cdot u$
and $y$ is a prefix of $v \cdot \$ $.
We often underline the rewritten part of the word $w$, and if the
instruction $i$ is known we use $\vdash^{(i)}_M$ instead of $\vdash_M$, i.e.
$u \underline{z} v \vdash^{(i)}_M utv$.
The relation $\vdash_M \ \subseteq \Gamma^* \times \Gamma^*$ is called
the \emph{rewriting relation}.

Let $l \in \cent \cdot \Gamma^* \cup \Gamma^*$, and 
$r \in \Gamma^* \cup \Gamma^* \cdot \$$.
A word $w = uzv$ \emph{can be rewritten in the context $(l, r)$} into $utv$ 
(denoted as $uzv \to_R utv$ \emph{in the context $(l, r)$}) if and only if 
there exists an instruction $i = (x, z \to t, y) \in I$, such that 
$x$ is a suffix of $l \cdot u$ and $y$ is a prefix of $v \cdot r$. 
Each definition that somehow uses the rewriting relation $\to_R$ can be
relativized to any context $(l, r)$. Unless stated otherwise, we will use the 
\emph{standard context} $(l, r) = (\cent, \$)$.

The \emph{language} associated with $M$ is defined as
$L(M) = \{w \in \Sigma^* \mid w \vdash_M^* \lambda \}$,
where $\vdash_M^*$ is the reflexive and transitive closure of $\vdash_M$.
Note that, by definition, $\lambda \in L(M)$.

The \emph{characteristic language} associated with $M$ is defined as
$L_C(M) = \{w \in \Gamma^* \mid w \vdash_M^* \lambda \}$.
Similarly, by definition, $\lambda \in L_C(M)$.
Obviously, 
$L(M) = L_C(M) \cap \Sigma^*$.
\end{definition}
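
The rewriting relation $\vdash_M$ can be implemented directly from
Definition \ref{definition:crs}. The following Python sketch is our own
illustration (it is not part of the accompanying application); the sentinels
$\cent$ and $\$$ are encoded as single characters, and an instruction
$(x, z \to t, y)$ as a 4-tuple of strings:

```python
CENT, DOLLAR = "¢", "$"

def one_step(w, instructions):
    """All words w' such that w |-_M w': an instruction (x, z -> t, y)
    rewrites w = u z v into u t v iff x is a suffix of CENT + u and
    y is a prefix of v + DOLLAR."""
    results = set()
    for (x, z, t, y) in instructions:
        i = w.find(z)
        while i >= 0:                      # try every occurrence of z in w
            u, v = w[:i], w[i + len(z):]
            if (CENT + u).endswith(x) and (v + DOLLAR).startswith(y):
                results.add(u + t + v)
            i = w.find(z, i + 1)
    return results
```

For instance, with the clearing instructions $(a, ab \to \lambda, b)$ and
$(\cent, ab \to \lambda, \$)$, the word $aabb$ rewrites in one step only
to $ab$.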

\begin{remark}\label{remark:zerocontext}
We also include a special case $k = 0$ in Definition \ref{definition:crs}. 
In this case we define $LC_0 = RC_0 = \{\lambda\}$, 
and the rest of the definition remains the same.
\end{remark}

\begin{remark}\label{remark:setinstructions}
We also extend Definition \ref{definition:crs} with the following notation:
if $X \subseteq LC_k$ and $Y \subseteq RC_k$ are finite nonempty sets, and $Z$ is a 
finite nonempty set of rules of the form $z \to t$, $z, t \in \Gamma^*$,
then we define $(X, Z, Y) = \{(x, z \to t, y) \mid x \in X, (z \to t) \in Z, y \in Y \}$.
However, if $X = \{ x \}$, then instead of writing $(\{ x \}, Z, Y)$ we write simply 
$(x, Z, Y)$. The same convention applies to the sets $Z$ and $Y$.
\end{remark}

Naturally, if we increase the length of the contexts used in the instructions of a $\CRS$, 
we can only increase its power.

\begin{remark}
Based on the above observation, in Definition \ref{definition:crs} we can allow 
contexts of any length up to $k$, i.e. we can use:\\
\indent \index{$LC_{\le k}$}
$LC_{\le k} = \Gamma^{\le k} \cup \cent \cdot \Gamma^{\le k-1} = 
\bigcup_{i \le k} LC_i$ instead of $LC_k$ and\\
\indent \index{$RC_{\le k}$}
$RC_{\le k} = \Gamma^{\le k} \cup \Gamma^{\le k-1} \cdot \$ = 
\bigcup_{i \le k} RC_i$ instead of $RC_k$.
\end{remark}

It is easy to see that a general $\kCRS$ can simulate any type-0 grammar 
(according to the Chomsky hierarchy \cite{HU69}). Hence we will not 
consider $\kCRS$ in their general form, since they are
too powerful (they can represent all recursively enumerable languages).
Instead, we will always put some restrictions on the instruction-rules
and then study such restricted models. 
The first model we introduce is the so-called \emph{clearing restarting
automaton} which is a $\kCRS$ such that $\Sigma = \Gamma$ and
all instruction-rules are of the form $z \to \lambda$, where
$z \in \Sigma^+$.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Clearing Restarting Automata}\label{se:clra}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{definition}[\cite{CM10}]\label{definition:clra}
Let $k$ be a nonnegative integer. A \emph{$k$-clearing restarting automaton}
(\emph{$\kclRA$} for short) is a $\kCRS$ $M = (\Sigma, \Sigma, I)$
(or $M = (\Sigma, I)$, for short), where for each instruction
$i = (x, z \to t, y) \in I$: $z \in \Sigma^+$ and
$t = \lambda$. Since $t$ is always the empty word, we use the notation
$i = (x, z, y)$. 
\end{definition}

\begin{remark}\label{remark:lambda}
Speaking about a $\kclRA$ $M$ we use ``automata terminology,'' e.g.
we say that $M$ \emph{accepts} a word $w$ if $w \in L(M)$.
By definition, each $\kclRA$ accepts $\lambda$. If we say that
a $\kclRA$ $M$ \emph{recognizes} (or \emph{accepts}) a
language $L$, we always mean that $L(M) = L \cup \{\lambda\}$.

This implicit acceptance of the empty word can be avoided by a
slight modification of the definition of clearing restarting automata,
or even context rewriting systems, but in principle, we would not get
a more powerful model.
\end{remark}

\begin{example}\label{example:a^n_b^n}
Let $M = (\Sigma, I)$ be a $\kclRA[1]$ with $\Sigma = \{a, b\}$ 
and $I$ consisting of the following two instructions:
$$
\begin{array}{l}
(1) \quad (a, ab, b),\\
(2) \quad (\cent, ab, \$).
\end{array}
$$
Then we have $
aaa\underline{ab}bbb \vdash^{(1)}_M aa\underline{ab}bb \vdash^{(1)}_M
a\underline{ab}b \vdash^{(1)}_M \underline{ab} \vdash^{(2)}_M \lambda
$
which means that $aaaabbbb \vdash_M^* \lambda$. So the word $aaaabbbb$
is accepted by $M$. It is easy to see that $M$ recognizes the language
$L(M) = \{a^n b^n \mid n\ge 0\}$.
\end{example}
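
The computation above can also be replayed mechanically. The following Python
sketch (our own helper, not part of the application) decides $w \vdash_M^* \lambda$
for this automaton by exhaustive search; the search terminates because every
clearing step shortens the word:

```python
def accepts(w, instructions, cent="¢", dollar="$"):
    """Decide w |-*_M lambda for a clearing restarting automaton given as
    a list of instructions (x, z, y); each applicable instruction deletes
    the factor z, so the search space is finite."""
    seen, stack = set(), [w]
    while stack:
        cur = stack.pop()
        if cur == "":
            return True                   # reached the empty word lambda
        if cur in seen:
            continue
        seen.add(cur)
        for (x, z, y) in instructions:
            i = cur.find(z)
            while i >= 0:                 # try every occurrence of z
                u, v = cur[:i], cur[i + len(z):]
                if (cent + u).endswith(x) and (v + dollar).startswith(y):
                    stack.append(u + v)   # clearing: z -> lambda
                i = cur.find(z, i + 1)
    return False

# Instructions (1) and (2) of the example, with sentinels as characters.
I = [("a", "ab", "b"), ("¢", "ab", "$")]
```

Here `accepts` holds for $aaaabbbb$ and fails, e.g., for $aab$ and $ba$,
in accordance with $L(M) = \{a^n b^n \mid n \ge 0\}$.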

Clearing restarting automata are studied in \cite{CM10}. We only mention
that they can recognize all regular languages, some context-free languages
and even some non-context-free languages. However, there are some
context-free languages that are outside the class of languages
accepted by clearing restarting automata.

\begin{theorem}[\cite{CM10}]\label{theorem:a^n_c_b^n}
The language $L = \{a^n c b^n \mid n \ge 0\}$ is not
recognized by any $\kclRA$.
\end{theorem}

The above limitation led to the development of extended versions of 
clearing restarting automata. Two such extensions were introduced in \cite{CM10} -- 
the so-called $\Delta$-clearing restarting automata and
$\Delta^*$-clearing restarting automata. Both of them may use only
a single auxiliary symbol $\Delta$. $\Delta$-clearing restarting
automata can leave a mark -- the symbol $\Delta$ -- at the place of deletion,
besides rewriting into the empty word $\lambda$.
$\Delta^*$-clearing restarting automata can rewrite a subword $w$ into
$\Delta^j$, where $j$ is bounded from above by the length of $w$.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{$\Delta$-Clearing Restarting Automata}\label{se:dclra}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\begin{definition}[\cite{CM10}]\label{definition:dclra}
Let $k$ be a nonnegative integer. A \emph{$k$-$\Delta$-clearing restarting
automaton} (\emph{$\kDclRA$} for short) is a system
$M = (\Sigma, I)$, where $R = (\Sigma, \Gamma, I)$ is a
$\kCRS$ such that $\Delta \notin \Sigma$,
$\Gamma = \Sigma \cup \{\Delta\}$, and for each instruction
$i = (x,z \to t,y) \in I$: $z \in \Gamma^+$ and either $t = \lambda$,
or $t = \Delta$.

Analogously, a \emph{$k$-$\Delta^*$-clearing restarting
automaton} (\emph{$\kDXclRA$} for short) is a system
$M = (\Sigma, I)$, with $R$ and $\Gamma$ as above, such that for each instruction
$i = (x,z \to t,y) \in I$: $z \in \Gamma^+$ and
$t = \Delta^j$, where $0 \le j \le |z|$.

The $\kDclRA$ ($\kDXclRA$) $M$
\emph{recognizes} the language
$L(M) = \{w \in \Sigma^* \mid  w  \vdash_M^* \lambda \} = L(R)$,
where $\vdash_M$ is the \emph{rewriting relation} 
$\vdash_R$ of $R = (\Sigma, \Gamma, I)$.

The \emph{characteristic language} of $M$ is the language
$L_C(M) = L_C(R)$.
\end{definition}

\begin{example}\label{example:a^n_c_b^n}
Let $M = (\Sigma, I)$ be the \index{$\kDclRA[1]$}$\kDclRA[1]$ 
with $\Sigma = \{a, b, c\}$ and the set of instructions $I$ consisting of the 
following instructions:
$$
\begin{array}{l}
(1) \quad (a, c \to \Delta, b),\\
(2) \quad (a, a\Delta b \to \Delta, b),\\
(3) \quad (\cent, a \Delta b \to \Delta, \$),\\
(4) \quad (\cent, c \to \Delta, \$),\\
(5) \quad (\cent, \Delta \to \lambda, \$).
\end{array}
$$

An input word $a^n c b^n$, for arbitrary $n > 1$, is accepted by $M$ in the following way:
$$
a^n\underline{c}b^n \vdash_M^{(1)} a^{n-1}\underline{a \Delta b} b^{n-1} \vdash_M^{(2)} 
a^{n-1} \Delta b^{n-1} \vdash_M^{(2)} \ldots \vdash_M^{(2)} \underline{a \Delta b} 
\vdash_M^{(3)} \underline{\Delta} \vdash_M^{(5)} \lambda\ .
$$
First, $M$ rewrites $c$ into $\Delta$, thereby marking its position. In each of the 
following steps, $M$ deletes one $a$ and one $b$ around $\Delta$ until it obtains the 
single-letter word $\Delta$, which is then reduced to $\lambda$.

It is easy to see that $M$ recognizes the language 
$L = \{a^ncb^n \mid n\ge 0\} \cup \{\lambda\}$.

The \index{language!characteristic}characteristic language of $M$ is
$$L_C(M) = \{a^ncb^n,a^n \Delta b^n \mid n \ge 0 \} \cup\{\lambda\}\ .$$
\end{example}
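
This example, too, can be checked mechanically. In the Python sketch below
(our own illustration), instructions carry an explicit right-hand side $t$,
and the character `Δ` encodes the auxiliary symbol $\Delta$; the search
terminates because no instruction lengthens the word and visited words are
cached:

```python
def reduces_to_empty(w, instructions, cent="¢", dollar="$"):
    """Decide w |-*_M lambda by exhaustive search over all reductions;
    instructions are tuples (x, z, t, y) with |t| <= |z|, so words never
    grow and the set of reachable words is finite."""
    seen, stack = set(), [w]
    while stack:
        cur = stack.pop()
        if cur == "":
            return True
        if cur in seen:
            continue
        seen.add(cur)
        for (x, z, t, y) in instructions:
            i = cur.find(z)
            while i >= 0:
                u, v = cur[:i], cur[i + len(z):]
                if (cent + u).endswith(x) and (v + dollar).startswith(y):
                    stack.append(u + t + v)
                i = cur.find(z, i + 1)
    return False

D = "Δ"
I = [("a", "c", D, "b"),            # (1)
     ("a", "a" + D + "b", D, "b"),  # (2)
     ("¢", "a" + D + "b", D, "$"),  # (3)
     ("¢", "c", D, "$"),            # (4)
     ("¢", D, "", "$")]             # (5)
```

As expected, $aacbb$ and $c$ reduce to $\lambda$, while, e.g., $aacb$ and $ab$
do not.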

It was shown in \cite{CM10} that $\Delta^*$-clearing restarting automata 
are powerful enough to recognize  all context-free languages. 
This result was later extended in \cite{CM11,CM11tech} to hold also for 
the more restricted $\Delta$-clearing restarting automata.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Subword-Clearing Restarting Automata}\label{se:sclra}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

In \cite{C12tech} yet another model was proposed, the so-called subword-clearing
restarting automaton, which proved useful in some grammatical inference scenarios.

\begin{definition}\label{definition:sclra}
Let $k$ be a nonnegative integer. A \emph{$k$-subword-clearing restarting automaton}
(\emph{$\ksclRA$} for short) is a $\kCRS$ $M = (\Sigma, \Sigma, I)$, where for each 
instruction $i = (x, z \to t, y) \in I$: $z \in \Sigma^+$ and
$t$ is a proper subword of $z$.
\end{definition}

Subword-clearing restarting automata are strictly more powerful than clearing
restarting automata. They can, for instance, recognize the language 
$\{a^n c b^n \mid n \ge 0\} \cup \{\lambda\}$, which lies outside the class of 
languages accepted by clearing restarting automata. However, they still cannot 
recognize all context-free languages (consider, e.g., the language 
$\{ w w^R \mid w \in \Sigma^* \}$).
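
To illustrate the first claim, here is one possible $\ksclRA[1]$ for the language
$\{a^n c b^n \mid n \ge 0\} \cup \{\lambda\}$, together with a small Python sketch
that replays its reductions. The instruction set below is our own construction
for this guide (it is not taken from \cite{C12tech}); note that every right-hand
side is a proper subword of the corresponding left-hand side:

```python
def step(w, instructions, cent="¢", dollar="$"):
    """Return some w' with w |-_M w', or None if no instruction applies."""
    for (x, z, t, y) in instructions:
        i = w.find(z)
        while i >= 0:
            u, v = w[:i], w[i + len(z):]
            if (cent + u).endswith(x) and (v + dollar).startswith(y):
                return u + t + v
            i = w.find(z, i + 1)
    return None

# A candidate 1-scl-RA for { a^n c b^n | n >= 0 } (plus lambda):
I = [("a", "acb", "c", "b"),   # shorten the word around c in the middle
     ("¢", "acb", "c", "$"),   # remove the last a and b around c
     ("¢", "c", "", "$")]      # finally erase the remaining c

w = "aaacbbb"
while w:                       # replay the reduction until lambda or a dead end
    nxt = step(w, I)
    if nxt is None:
        break
    w = nxt
```

The loop reduces $aaacbbb$ all the way to $\lambda$, while, e.g., $acbb$ admits
no applicable instruction at all; this sketch is a sanity check, not a proof.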

\section{Grammatical Inference}\label{section:inference}

In this section we introduce a learning schema for clearing 
(subword-clearing) restarting automata and other similar models.
It is possible to identify any hidden target model in the limit by using 
this schema. We provide only a short introduction for the purposes of this 
guide. For more details we refer the interested reader to \cite{C12tech}.

The problem we are interested in can be best described as follows. 
Suppose that we have two finite sets of words over the alphabet $\Sigma$: 
the set of positive samples $S^+$ and the set of negative samples $S^-$. 
Our goal is to find an automaton $M$, such that: $S^+ \subseteq L(M)$ and 
$S^- \cap L(M) = \emptyset$. We may assume that 
$S^+ \cap S^- = \emptyset$ and $\lambda \in S^+$.

If we have no other restrictions, then the task becomes trivial even for
clearing restarting automata. Just consider
the instructions $I = \{ (\cent, w, \$) \mid w \in S^+, w \neq \lambda \}$. 
It follows trivially that in this case $L(M) = S^+$, where $M = (\Sigma, I)$.
Therefore, we impose a maximal allowed width $l \ge 1$ and 
a specific length $k \ge 0$ of contexts for the instructions of the 
resulting automaton. 

The learning schema itself is defined in Algorithm \ref{algorithm:learning}.

\begin{algorithm}
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\caption{Learning schema $\mathsf{Infer}(S^+, S^-, l, k)$}
\label{algorithm:learning}
%\DontPrintSemicolon
\LinesNumbered
\Input{The set of positive $S^+$ and negative $S^-$ samples over $\Sigma$,
$S^+ \cap S^- = \emptyset$, $\lambda \in S^+$.
The maximal width of instructions $l \ge 1$.
The length of contexts of instructions $k \ge 0$.}
\Output{An automaton consistent with $(S^+, S^-)$, or $\mathbf{Fail}$.}
$\Phi \leftarrow \mathsf{Assumptions}(S^+, l, k)$\label{algorithm:learning:assumptions}\;
\While{$\exists w_- \in S^-, w_+ \in S^+, \phi \in \Phi: w_- \vdash_{\phi} w_+$\label
{algorithm:learning:cycle}}
{$\Phi \leftarrow \Phi \setminus \{\phi\}$\;}
$\Phi \leftarrow \mathsf{Simplify}(\Phi)$\label{algorithm:learning:simplify}\;
\If{$\mathsf{Consistent}(\Phi, S^+, S^-)$\label{algorithm:learning:consistent}}
{\Return{Automaton with instructions $\Phi$}\;}
$\mathbf{Fail}$\;
\end{algorithm}

First, the function $\mathsf{Assumptions}(S^+, l, k)$ returns some set of instruction
candidates. Let us assume, for a moment, that this set already contains all 
instructions of the hidden target automaton. Then, in the loop on line 
\ref{algorithm:learning:cycle}, we gradually remove all instructions that allow 
a reduction from some negative sample to some positive sample.
(These filtered instructions definitely do not belong to the set of instructions
of the hidden target automaton.)
In Step \ref{algorithm:learning:simplify} 
we remove redundant instructions and in Step \ref{algorithm:learning:consistent} 
we check if the remaining set of instructions is consistent with the given 
input set of positive and negative samples. 
In other words, we check if (1) for all $w_+ \in S^+: w_+ \vdash^*_{\Phi} \lambda$ 
and (2) for all $w_- \in S^-: w_- \not\vdash^*_{\Phi} \lambda$.
The condition (1) always holds, provided that in Step 
\ref{algorithm:learning:assumptions} we already obtained all instructions of 
the hidden target automaton. However, the condition (2) may fail. 
The success of the above algorithm, therefore, depends both on the initial 
assumptions obtained in Step \ref{algorithm:learning:assumptions}, and on the 
given set of positive and negative samples. Nevertheless, 
if we have a ``reasonable'' implementation of the function $\mathsf{Assumptions}$, 
then there is always a set of positive samples $S_0^+$ and a set of negative samples 
$S_0^-$ such that the above schema converges to a correct solution for all sets 
of positive samples $S^+ \supseteq S_0^+$ and negative samples $S^- \supseteq S_0^-$
consistent with the hidden target automaton. 
This also implies that we can infer a correct solution in the limit from any 
presentation of labeled samples that covers all the samples from $S_0^+$ and 
$S_0^-$.

There are ``reasonable'' implementations of the function $\mathsf{Assumptions}$  
(both for clearing and subword-clearing restarting automata) that run
in polynomial time. In fact, they run in linear time if the maximal 
width of instructions $l$ and the length of contexts $k$ are considered 
to be fixed constants. 
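
For clearing restarting automata the whole schema can be sketched in a few lines
of Python. This is our own simplified illustration: the $\mathsf{Simplify}$ step
is omitted (it only shrinks the result and does not affect consistency), the
candidate set $\Phi$ is taken as an input, and the toy data at the end are
hypothetical:

```python
def one_step_words(w, phi, cent="¢", dollar="$"):
    """All w' with w |-_Phi w' for clearing instructions (x, z, y)."""
    out = set()
    for (x, z, y) in phi:
        i = w.find(z)
        while i >= 0:
            u, v = w[:i], w[i + len(z):]
            if (cent + u).endswith(x) and (v + dollar).startswith(y):
                out.add(u + v)
            i = w.find(z, i + 1)
    return out

def reduces_to_empty(w, phi):
    """Decide w |-*_Phi lambda (terminates: clearing steps shorten words)."""
    seen, stack = set(), [w]
    while stack:
        cur = stack.pop()
        if cur == "":
            return True
        if cur not in seen:
            seen.add(cur)
            stack.extend(one_step_words(cur, phi))
    return False

def infer(pos, neg, phi):
    """Learning schema: drop every instruction that reduces some negative
    sample to a positive one, then check consistency with (pos, neg).
    Returns the remaining instruction set, or None on Fail."""
    phi = set(phi)
    for f in list(phi):    # one pass suffices: the filter is per-instruction
        if any(wp in one_step_words(wm, [f]) for wm in neg for wp in pos):
            phi.discard(f)
    consistent = all(reduces_to_empty(w, phi) for w in pos) and \
                 not any(reduces_to_empty(w, phi) for w in neg)
    return phi if consistent else None

# Hypothetical toy data: the language { a^n b^n } with one bad candidate.
POS = {"", "ab", "aabb"}
NEG = {"a", "b", "ba", "abb"}
PHI = [("a", "ab", "b"), ("¢", "ab", "$"), ("¢", "b", "$")]
```

On this toy data the bad candidate $(\cent, b, \$)$ is filtered out, since it
reduces the negative sample $b$ to $\lambda \in S^+$, and the remaining two
instructions are consistent with $(S^+, S^-)$.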

\begin{example}\label{example:assumptions}
Here we define the two variants of the function $\mathsf{Assumptions}$ that
we use in our inference algorithm.

\begin{enumerate}
\item $\mathsf{Assumptions}_{weak}(S^+, l, k) := \{ (x, z, y) \mid
x \in LC_k, y \in RC_k, |z| > 0, |xzy| \le l$ and $\exists w_1, w_2 \in S^+: xzy$ 
is a subword of $\cent w_1 \$$ and $xy$ is a subword of $\cent w_2 \$ \}$.

The basic intuition behind this procedure is the assumption that if both patterns 
$xzy$ and $xy$ occur in the set of positive samples, then it is somehow justified to 
clear the word $z$ based on the context $(x, y)$. Note that the larger the length of 
contexts $k$, the fewer (or equally many) such patterns we will find. 
The contexts serve here as a safety cushion against the inference of incorrect 
instructions.

\item $\mathsf{Assumptions}_{strong}(S^+, l, k) := \{ (x, z, y) \mid
x \in LC_k, y \in RC_k, |z| > 0, |xzy| \le l$ and 
$\exists w_1, w_2 \in S^+: w_1 = \alpha z \beta, w_2 = \alpha \beta$, $x$ 
is a suffix of $\cent \alpha$ and $y$ is a prefix of $\beta \$ \}$.

This condition is more restrictive than the previous one.
It basically says that the instruction $(x, z, y)$ is justified only in the 
case when there are positive samples $w_1, w_2 \in S^+$ such that we can obtain $w_2$ 
from $w_1$ by using this instruction.
\end{enumerate}
\end{example}

All these functions can be computed in polynomial time with respect to 
$\size(S^+) = \sum_{w \in S^+} |w|$. In fact, if $l$ and $k$ are fixed constants, 
then these functions can be computed in linear time, since we need to consider 
only subwords of length bounded from above by the constant $l$. 
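
A direct, non-optimized Python sketch of $\mathsf{Assumptions}_{weak}$ follows
(our own illustration of the definition above, with the sentinels encoded as
single characters):

```python
def assumptions_weak(pos, l, k, cent="¢", dollar="$"):
    """All candidates (x, z, y) with x in LC_k, y in RC_k, |z| > 0 and
    |xzy| <= l such that xzy occurs in some ¢w1$ and xy in some ¢w2$,
    for w1, w2 in the set of positive samples pos."""
    subs = set()                          # all factors of all words ¢w$
    for w in pos:
        s = cent + w + dollar
        for i in range(len(s)):
            for j in range(i, len(s) + 1):
                subs.add(s[i:j])

    def in_lc(x):   # LC_k = Gamma^k  U  ¢ . Gamma^{<= k-1}
        return (len(x) == k and cent not in x and dollar not in x) \
            or (x.startswith(cent) and len(x) <= k and dollar not in x)

    def in_rc(y):   # RC_k = Gamma^k  U  Gamma^{<= k-1} . $
        return (len(y) == k and cent not in y and dollar not in y) \
            or (y.endswith(dollar) and len(y) <= k and cent not in y)

    candidates = set()
    for s in subs:
        if 0 < len(s) <= l:
            for i in range(len(s)):               # split s = x z y
                for j in range(i + 1, len(s) + 1):
                    x, z, y = s[:i], s[i:j], s[j:]
                    if cent in z or dollar in z:  # z must be over Gamma
                        continue
                    if in_lc(x) and in_rc(y) and x + y in subs:
                        candidates.add((x, z, y))
    return candidates
```

For example, for $S^+ = \{\lambda, ab, aabb\}$, $l = 4$ and $k = 1$, the result
contains both instructions of the $\kclRA[1]$ for $\{a^n b^n \mid n \ge 0\}$,
along with further, possibly incorrect, candidates that the filtering step of
the learning schema has to weed out.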

The above examples can easily be extended to the model of $\ksclRA$. The
only difference is that instead of the patterns $xzy$ and $xy$ we would 
consider the patterns $xzy$ and $xty$, where $t$ is a proper subword of $z$.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\chapter{Application}\label{chapter:application}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

This chapter shows how to model and investigate clearing (subword-clearing,
$\Delta$-clearing, etc.) restarting automata and, in general, all
context rewriting systems. For simplicity, we use the term \emph{automaton} to refer to 
any context rewriting system. In Section \ref{section:installation} we
show how to install the application both on \emph{Windows} and \emph{UNIX} platforms. 
In Section \ref{section:basics} we explain how to enter 
the instructions into the application and how to test the properties
of correctly defined automata. In Section \ref{section:learning_reductions} 
we showcase the inference of automata based on the set of sample reductions. 
Finally, in Section \ref{section:learning_data} we show how to infer 
clearing (subword-clearing) automata from the set of positive and negative 
samples.

\section{Installation}\label{section:installation}

In this section we explain how to install the project 
\emph{Clearing Restarting Automaton} on both \emph{Windows} and 
\emph{UNIX} platforms.

On the \emph{Windows} platform you need to have .NET Framework version 4.0 
or later installed on your computer. You can download the framework from 
the Microsoft website: \url{http://www.microsoft.com/}.
To run the application, just double-click the application executable:

\begin{verbatim}
ClearingRestartingAutomaton.exe
\end{verbatim}

On \emph{UNIX} platforms you need to have the Mono project installed 
on your computer. For more information, see the web page: 
\url{http://www.mono-project.com/}.
If Mono is installed correctly, you can run the application 
by entering the following command:

\begin{verbatim}
> mono ClearingRestartingAutomaton.exe
\end{verbatim}

If you want to make modifications to the code, we recommend using
\emph{Microsoft Visual Studio} as the development environment.

Both the source code and the executable file can be downloaded freely from
the following website:\\ 
\url{http://code.google.com/p/clearing-restarting-automata/}

\section{Basics}\label{section:basics}

\begin{figure}[htp]
\centering
\includegraphics[scale=0.8]{clearing_restarting_automaton.png}
\caption[Clearing Restarting Automaton.]{Clearing Restarting Automaton.}
\label{figure:clearing_restarting_automaton}
\end{figure}

After starting the \emph{Clearing Restarting Automaton} application, the window 
for entering instructions appears as in Figure 
\ref{figure:clearing_restarting_automaton}. The instructions entered into
this window define the corresponding automaton. In the first part of this 
Section we explain the syntax of these instructions. Once the automaton 
is correctly defined (i.e. the instructions of the automaton are correctly 
entered), it is possible to investigate the properties of this automaton
(e.g. the language it recognizes). Therefore, in the 
second part of this Section we describe \emph{Reduce/Generate Dialog},
which can be used to investigate the properties of correctly defined automata.

\begin{example}
Suppose that we want to model the $\clRA$ $M = (\{a, b\}, I)$,
where the instructions $I$ are:
\begin{eqnarray*}
(a, ab, b),\\
(\cent, ab, \$).
\end{eqnarray*}
We can represent these instructions as:
\begin{verbatim}
    [a]ab[b]
    [^]ab[$]
\end{verbatim}
Note that the symbol \verb$^$ represents the left sentinel $\cent$ and the
symbol \verb#$# represents the right sentinel $\$$.
The set notation for instructions is also supported. 
The set braces are represented by the square brackets \verb$[$ and \verb$]$,
and the elements inside these brackets are separated by whitespace or by 
the separators \verb$,$ and \verb$;$.
For instance, if we  want to represent the instruction 
$(\{\cent, a, b\}, ab, \{a, b, \$\})$, we can do it as follows:
\begin{verbatim}
    [^ a b] ab [a b $]
\end{verbatim}
Note that the whitespace inside the instruction (i.e. between the left and 
the right context) is ignored.

All instructions of $\CRS$ are supported. For instance, the instruction 
$(a, ab \to ba, b)$ can be represented as:
\begin{verbatim}
    [a] ab -> ba [b]
\end{verbatim}
However, only the following symbols can occur inside the words and contexts: \\
\verb$a-z A-Z 0-9 ! @ # % & * ( ) ' \ / _ + : " | ? . $ \\
The dot symbol \verb$.$ has a special meaning. It represents the empty word $\lambda$.
This means, for instance, that the following two instructions are equivalent:
\begin{verbatim}
    [a] ab [b]
    [a] ab -> . [b]
\end{verbatim}
You can also use empty contexts in instructions. For instance,
the following two instructions are equivalent:
\begin{verbatim}
    [] ab []
    [.] ab [.]
\end{verbatim}
\end{example}
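The instruction syntax described above is easy to parse mechanically.
The following Python sketch is an illustrative reconstruction of the syntax
rules only, not the application's internal parser; the function name and the
tuple representation are our own:

```python
import re

def parse_instruction(line):
    """Parse an instruction string such as '[^ a b] ab -> ba [a b $]' into
    (left_contexts, word, replacement, right_contexts).
    A '.' denotes the empty word; '^' and '$' stand for the sentinels."""
    m = re.match(r'^\[([^\]]*)\](.*)\[([^\]]*)\]$', line.strip())
    if not m:
        raise ValueError('malformed instruction: ' + line)

    def contexts(part):
        # context elements are separated by whitespace, ',' or ';'
        items = [x for x in re.split(r'[\s,;]+', part.strip()) if x]
        return [('' if x == '.' else x) for x in items] or ['']

    body = m.group(2).replace(' ', '')     # whitespace inside is ignored
    word, _, repl = body.partition('->')   # missing '->' means clearing
    return (contexts(m.group(1)),
            '' if word == '.' else word,
            '' if repl in ('', '.') else repl,
            contexts(m.group(3)))
```

For example, \verb$[a] ab -> ba [b]$ yields \verb$(['a'], 'ab', 'ba', ['b'])$,
and \verb$[a] ab [b]$ and \verb$[a] ab -> . [b]$ parse identically.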

In the following example we illustrate how to investigate the properties
of correctly defined automata.

\begin{example}
Suppose that we have a $\CRS$ $M = (\Sigma, \Sigma, I)$ with
$\Sigma = \{a, b\}$ and the following set of instructions $I$:
\begin{eqnarray}
(\cent, &ab \to \lambda,& \$)\label{i1},\\
(a, &ab \to \lambda,& b)\label{i2},\\
(\{\cent, a, b\}, &ba \to ab,& \{a, b, \$\})\label{i3}.
\end{eqnarray}
The automaton recognizes the language 
$L(M) = \{ w \in \{a, b\}^* \mid |w|_a = |w|_b \}$.
Indeed, if $w \in L(M)$, then $|w|_a = |w|_b$, because each instruction preserves
this property. On the other hand, if $w \in \{a, b\}^*$ and $|w|_a = |w|_b$,
then by using the instruction (\ref{i3}) finitely many times we can obtain the word 
$a^k b^k$, which can easily be reduced to $\lambda$ by using the instructions 
(\ref{i1}) and (\ref{i2}).

In \emph{Clearing Restarting Automaton} application we can represent the 
instructions of $M$ as:
\begin{verbatim}
    [^] ab [$]
    [a] ab [b]
    [^ a b] ba -> ab [a b $]
\end{verbatim}
After entering these instructions into the instruction window, click
on the \emph{Action} menu item and then on the \emph{Reduce/Generate} menu item. 
\emph{Reduce/Generate Dialog} will appear as in Figure \ref{figure:reduce_generate}.

\begin{figure}[htp]
\centering
\includegraphics[scale=0.8]{reduce_generate.png}
\caption[Reduce/Generate Dialog.]{Reduce/Generate Dialog.}
\label{figure:reduce_generate}
\end{figure}

If you click on \emph{Generate Button}, the first $20$ words of the 
language $L(M)$ will appear in \emph{Result ListView} as in Figure
\ref{figure:result_listview}. You can change the number of generated words by 
modifying the \emph{MaxCount} property ($20$ by default).
The button is called \emph{Generate Button} because the resulting set of words 
is \emph{generated} from \emph{Initial Word}, which in our case is set to the 
empty word $\lambda$, by using the breadth-first search technique. 
You can also specify the maximal length of the generated words by modifying 
the \emph{MaxLength} property. By default this property is set to $0$, 
which means that there is no upper bound on the length of the generated words.
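The breadth-first generation can be sketched as follows. The sketch below is
our own simplified Python model, not the application's code: it assumes
instructions with single contexts (an instruction with context sets must first
be expanded into one instruction per pair of contexts), with \verb$^$ and
\verb#$# as sentinels:

```python
from collections import deque

def generate(instructions, initial='', max_count=20, max_len=0):
    """Breadth-first search over inverse rewriting: collect words that
    reduce to `initial`.  Each instruction is (left, word, repl, right);
    to undo (x, z -> t, y), find an occurrence of x t y and put z back.
    max_len = 0 means no upper bound on word length (as in the tool)."""
    seen, out = {initial}, []
    queue = deque([initial])
    while queue and len(out) < max_count:
        w = queue.popleft()
        out.append(w)
        s = '^' + w + '$'                      # add the sentinels
        for left, word, repl, right in instructions:
            pat = left + repl + right
            for i in range(len(s) - len(pat) + 1):
                if s[i:i + len(pat)] == pat:
                    j = i + len(left)
                    # replace repl by word, strip the sentinels
                    v = (s[:j] + word + s[j + len(repl):])[1:-1]
                    if v not in seen and (max_len == 0 or len(v) <= max_len):
                        seen.add(v)
                        queue.append(v)
    return out
```

With the three instructions of $M$ above (the third expanded into nine
single-context instructions), the first $20$ generated words all contain as
many $a$'s as $b$'s.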

\begin{figure}[htp]
\centering
\includegraphics[scale=0.8]{result_listview.png}
\caption[Result ListView.]{Result ListView.}
\label{figure:result_listview}
\end{figure}

\begin{figure}[htp]
\centering
\includegraphics[scale=0.8]{smaller_listviews.png}
\caption[Smaller ListViews.]{Smaller ListViews.}
\label{figure:smaller_listviews}
\end{figure}

Now click on the word \verb$abba$. The two smaller \emph{ListView}s on the right
side of \emph{Result ListView} show the words from which the
selected word \verb$abba$ can be \emph{reduced} and the words to which
it can be \emph{reduced}. If a word in one of these two
smaller \emph{ListView}s is displayed in gray, it is not in  
\emph{Result ListView}. For instance, the word \verb$abba$ can be reduced from the word
\verb$baba$, but the word \verb$baba$ is not in \emph{Result ListView}.
For illustration see Figure \ref{figure:smaller_listviews}.

If you click on the word \verb$baba$, the reduction step from the word \verb$baba$
to the word \verb$abba$ will appear in \emph{Bottom TextBox}, together with the
instruction used, as in Figure \ref{figure:reduction_step}.

\begin{figure}[htp]
\centering
\includegraphics[scale=0.8]{reduction_step.png}
\caption[Reduction Step.]{Reduction Step.}
\label{figure:reduction_step}
\end{figure}

If you double-click on the word \verb$baba$, you add it to 
\emph{Result ListView}. However, the word will still be displayed in gray. 
In this way you can explore the derivation path of a word in both directions.

If you double-click on the word \verb$abba$ in \emph{Result ListView}, a 
reduction path of this word will appear in \emph{Bottom TextBox} as in 
Figure \ref{figure:reduction_path}.
Only one reduction path is shown in \emph{Bottom TextBox}.

\begin{figure}[htp]
\centering
\includegraphics[scale=0.8]{reduction_path.png}
\caption[Reduction Path.]{Reduction Path.}
\label{figure:reduction_path}
\end{figure}

If the resulting list of words is large, you can use a \emph{regular expression} to filter
the output. Set the \emph{MaxCount} property to $200$ and click on
\emph{Generate Button}. Now enter the regular expression \verb#^(ba)*$# into 
\emph{Filter TextBox} next to \emph{Filter Button} and then click on 
\emph{Filter Button}. The filtered \emph{Result ListView} is shown in 
Figure \ref{figure:result_listview_filtered}.

\begin{figure}[htp]
\centering
\includegraphics[scale=0.8]{result_listview_filtered.png}
\caption[Filtered Result ListView.]{Filtered Result ListView.}
\label{figure:result_listview_filtered}
\end{figure}

\emph{Reduce Button} can be used in the same way as \emph{Generate Button}.
The only difference is that \emph{Reduce Button} is used to find all words
that can be reduced from \emph{Initial Word}.

If you click on \emph{Instructions Button}, \emph{Information Dialog}
will appear with the list of instructions of the used automaton, 
as in Figure \ref{figure:instructions}. Note that the set notation is not used 
here because this list reflects the internal representation 
of the automaton.

\begin{figure}[htp]
\centering
\includegraphics[scale=0.8]{instructions.png}
\caption[Instructions.]{Instructions.}
\label{figure:instructions}
\end{figure}
\end{example}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Learning from Reductions}\label{section:learning_reductions}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

In this Section we show how to infer an automaton from a set of 
\emph{sample reductions}.

\begin{example}
Suppose that we have the following sequence of reductions:
\begin{eqnarray*}
& & ababababababab\underline{a}b  \vdash_M
ababababab\underline{a}babb  \vdash_M
  ababab\underline{a}babbabb  \vdash_M\\
& & ab\underline{a}babbabbabb  \vdash_M
  abbabbabba\underline{b}b  \vdash_M
  abbabba\underline{b}bab  \vdash_M\\
& & abba\underline{b}babab  \vdash_M
  a\underline{b}bababab  \vdash_M
  ababab\underline{a}b  \vdash_M\\
& & ab\underline{a}babb  \vdash_M
  abba\underline{b}b  \vdash_M
  a\underline{b}bab  \vdash_M\\
& & ab\underline{a}b  \vdash_M
  a\underline{b}b  \vdash_M
  \underline{ab}  \vdash_M
  \lambda  \mbox{ accept}.
\end{eqnarray*}
From this \emph{sample computation}, we can collect $15$ reductions. All these 
reductions have unambiguous factorizations (the deleted symbols are underlined).

To enter these reductions into the application, click on the \emph{Action} menu item 
and then on the \emph{Learn from Samples of Reductions} menu item. 
\emph{Learning Dialog} will appear as in Figure \ref{figure:learning}.

\begin{figure}[htp]
\centering
\includegraphics[scale=0.8]{learning.png}
\caption[Learning Dialog.]{Learning Dialog.}
\label{figure:learning}
\end{figure}

First enter the word \verb$abababababababab$ into \emph{Initial Word TextBox}
and then click on \emph{Start Button}. After clicking on \emph{Start Button},
this word will appear in \emph{Learning Step TextBox} and also in 
\emph{Learning Process TextBox}, as its first and 
only word. For illustration see Figure \ref{figure:learning_start}.

If you want to enter the first reduction:
$$ababababababab\underline{a}b  \vdash_M abababababababb$$
you only need to select the underlined letter in \emph{Learning Step TextBox},
as in Figure \ref{figure:learning_select}, and then click
on \emph{Reduce Button}. The situation after clicking on \emph{Reduce Button}
is illustrated in Figure \ref{figure:learning_step}.

\begin{figure}[htp]
\centering
\includegraphics[scale=0.8]{learning_start.png}
\caption[Learning Dialog Start.]{Learning Dialog Start.}
\label{figure:learning_start}
\end{figure}

\begin{figure}[htp]
\centering
\includegraphics[scale=0.8]{learning_select.png}
\caption[Learning Step Select.]{Learning Step Select.}
\label{figure:learning_select}
\end{figure}

\begin{figure}[htp]
\centering
\includegraphics[scale=0.8]{learning_step.png}
\caption[Learning Step.]{Learning Step.}
\label{figure:learning_step}
\end{figure}

Now we are left with the word \verb$abababababababb$ in \emph{Learning Step TextBox},
and we can repeat the same procedure with this word, and so on, until the whole 
\emph{sample computation} has been entered. The result of this process is shown in 
Figure \ref{figure:learning_result}.

\begin{figure}[htp]
\centering
\includegraphics[scale=0.8]{learning_result.png}
\caption[Learning Result.]{Learning Result.}
\label{figure:learning_result}
\end{figure}

If we want to append another sample computation, we just enter
the first word of this sample computation into \emph{Initial Word TextBox}
and then click on \emph{Append Button}. The process of entering 
the sample computation is then the same as described above. Note that if 
we click on \emph{Start Button} instead of \emph{Append Button}, 
the whole \emph{Learning Process TextBox} will be cleared, so that 
a new sample computation can be entered from scratch.

After we have entered all desired sample computations, everything is ready
for inferring the corresponding automaton. The only parameter we have to choose 
is $k$ -- the length of the contexts of the instructions. For the purposes
of this example we set $k = 4$. If we click on  
\emph{Generate Button}, \emph{Information Dialog} will appear, containing 
the instructions of the resulting inferred automaton, as in 
Figure \ref{figure:learning_automaton}.

\begin{figure}[htp]
\centering
\includegraphics[scale=0.8]{learning_automaton.png}
\caption[Resulting Inferred Automaton.]{Resulting Inferred Automaton.}
\label{figure:learning_automaton}
\end{figure}

The resulting inferred $\kclRA[4]$ $M = (\{a, b\}, I)$ has the following
set of instructions $I$:
$$(\{\cent ab, abab\},a,\{b\$, babb\}), \quad
(\{\cent a, abba\},b,\{b\$,bab\$,baba\}), \quad
   (\cent,ab,\$).$$
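The inference itself can be sketched compactly: each recorded reduction deletes
a subword $z$ at a known position, and its left and right contexts of length at
most $k$, taken in the word extended with the sentinels, are collected into the
instruction for $z$. The following Python sketch is our own illustrative
reconstruction of this idea (the names and the merging-by-$z$ convention are
ours, not the application's code):

```python
def learn_instructions(reductions, k):
    """Collect k-limited contexts from sample reductions.  Each sample is
    (w, pos, z): the word w reduces by deleting the subword z that starts
    at position pos.  Contexts are taken in the sentinel word '^' + w + '$',
    so they may be shorter than k near the borders.  Reductions deleting
    the same subword are merged into one instruction with context sets."""
    instr = {}
    for w, pos, z in reductions:
        s = '^' + w + '$'
        i = pos + 1                             # z starts here in s
        left = s[max(0, i - k):i]               # left context, length <= k
        right = s[i + len(z):i + len(z) + k]    # right context, length <= k
        lefts, rights = instr.setdefault(z, (set(), set()))
        lefts.add(left)
        rights.add(right)
    return [(sorted(l), z, sorted(r)) for z, (l, r) in instr.items()]
```

For instance, with $k = 4$ the first two reductions of the sample computation
both delete an $a$ with left context $abab$, and contribute the right contexts
$b\$$ and $babb$, exactly as in the first instruction above.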
It can be shown that the following holds:
$$L(M) \cap \{(ab)^n \mid n>0\} = \{(ab)^{2^l} \mid l \ge 0\}.$$
Suppose that we want to test this hypothesis. First we copy the inferred
instructions from Figure \ref{figure:learning_automaton} and paste them into the
main window of the \emph{Clearing Restarting Automaton} application (see Figure
\ref{figure:clearing_restarting_automaton}). Then we click on the \emph{Action}
menu item and then on the \emph{Reduce/Generate} menu item. We generate, for instance,
$600$ words and then filter them with the regular expression
\verb#^(ab)*$#. \emph{Result ListView}, shown in Figure \ref{figure:hypothesis},
supports our hypothesis.
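The hypothesis can also be checked programmatically, independently of the
application. The following Python sketch is a brute-force reducibility test for
the inferred automaton (our own illustration, not the application's code); it
confirms the claim on small instances of $(ab)^n$:

```python
from functools import lru_cache

# The inferred 4-cl-RA from above; '^' and '$' stand for the sentinels.
INSTRUCTIONS = [
    (('^ab', 'abab'), 'a', ('b$', 'babb')),
    (('^a', 'abba'), 'b', ('b$', 'bab$', 'baba')),
    (('^',), 'ab', ('$',)),
]

@lru_cache(maxsize=None)
def accepts(w):
    """True iff w can be cleared down to the empty word."""
    if w == '':
        return True
    s = '^' + w + '$'
    for lefts, z, rights in INSTRUCTIONS:
        for i in range(1, len(s) - len(z)):
            if (s[i:i + len(z)] == z
                    and any(s[:i].endswith(x) for x in lefts)
                    and any(s[i + len(z):].startswith(y) for y in rights)
                    and accepts((s[:i] + s[i + len(z):])[1:-1])):
                return True
    return False
```

For example, \verb$accepts('ab' * 8)$ holds, while \verb$accepts('ab' * 5)$
does not.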

Note that there are two other buttons in \emph{Learning Dialog}:
\emph{Reduce to $\sharp$ Button} and \emph{Reduce to: Button}. 
These can be used to enter the generalized reductions of $\DclRA$ or 
even of general $\CRS$. The symbol $\sharp$ is usually used to represent 
the symbol $\Delta$ of $\DclRA$.

\begin{figure}[htp]
\centering
\includegraphics[scale=0.8]{hypothesis.png}
\caption[Testing the Hypothesis.]{Testing the Hypothesis.}
\label{figure:hypothesis}
\end{figure}
\end{example}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Learning from Data}\label{section:learning_data}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

In this Section we show how to infer clearing (subword-clearing) automata from 
sets of positive and negative samples. We use the learning schema described
in Section \ref{section:inference}.

\begin{example}
Suppose that we want to infer a $\kclRA$ $M = (\{a, b\}, I)$ 
recognizing the language $L = \{ a^n b^n \mid n \ge 0 \}$.

First, click on the \emph{Action} menu item and then on the \emph{Infer
from Positive and Negative Samples} menu item. 
\emph{Inference Dialog} will appear as in Figure \ref{figure:inference}.
Note that in \emph{Assumptions SelectBox} the \emph{Weak Clearing Assumptions}
item is selected by default. There are several other options, as shown
in Figure \ref{figure:assumptions}.
The \emph{Weak Clearing Assumptions} item corresponds to the
function $\mathsf{Assumptions}_{weak}$ and the \emph{Strong Clearing Assumptions}
item corresponds to the function $\mathsf{Assumptions}_{strong}$, both defined in
Example \ref{example:assumptions}. The next two items are the equivalents
of these two functions for subword-clearing restarting automata. The last two items 
serve experimental purposes only and are not covered in detail in this Section: 
the item \emph{SEARCH Clearing Automaton} (\emph{SEARCH Delta-Clearing Automaton}) 
performs an exhaustive search for clearing ($\Delta$-clearing) restarting 
automata consistent with the given set of positive and negative samples. 

\begin{figure}[htp]
\centering
\includegraphics[scale=0.8]{inference.png}
\caption[Inference Dialog.]{Inference Dialog.}
\label{figure:inference}
\end{figure}

\begin{figure}[htp]
\centering
\includegraphics[scale=1.0]{assumptions.png}
\caption[Assumptions.]{Assumptions.}
\label{figure:assumptions}
\end{figure}

Since we want to find a clearing restarting automaton, we leave this selection
set to the default \emph{Weak Clearing Assumptions} item. As a first attempt,
we enter the positive samples \verb$ab$, \verb$aabb$ and 
\verb$aaabbb$ into \emph{Positive Samples TextBox}. These words 
can be separated by whitespace or by the separators \verb$,$ and \verb$;$. Since 
\emph{Negative Samples TextBox} may not be left empty, we enter the
following negative samples into it: \verb$aab$, \verb$abb$. After that, we click
on \emph{Infer Button}. The result is shown in Figure 
\ref{figure:inference_anbn_01} and the inferred consistent automaton is shown 
in Figure \ref{figure:inference_anbn_01a}.

\begin{figure}[htp]
\centering
\includegraphics[scale=0.8]{inference_anbn_01.png}
\caption[Inference Result.]{Inference Result.}
\label{figure:inference_anbn_01}
\end{figure}

\begin{figure}[htp]
\centering
\includegraphics[scale=0.8]{inference_anbn_01a.png}
\caption[Inferred Consistent Automaton.]{Inferred Consistent Automaton.}
\label{figure:inference_anbn_01a}
\end{figure}

It is easy to see that the resulting automaton is consistent with the given
set of positive and negative samples, but it does not recognize the target 
language $L$. We only mention that it is sufficient to add the following 
negative samples to \emph{Negative Samples TextBox} in order to obtain the 
desired automaton: \verb$aaab$, \verb$abbb$, \verb$aaabb$, \verb$aabbb$. 
The reason is that these negative samples filter out the
undesired instructions, as shown in \emph{Debug Output} in Figure
\ref{figure:debug_anbn}.
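The consistency requirement can be stated compactly: an automaton is consistent
with the samples iff it accepts every positive sample and rejects every
negative one. The following Python sketch (our own illustration, not the
application's code) checks this for the $\clRA$ for $L = \{a^n b^n \mid n \ge 0\}$
from the beginning of this chapter, using all the samples mentioned above:

```python
from functools import lru_cache

# The cl-RA for a^n b^n from the first example of this chapter:
# (a, ab, b) and (cent, ab, $), with '^'/'$' as sentinels.
TARGET = [('a', 'ab', 'b'), ('^', 'ab', '$')]

@lru_cache(maxsize=None)
def accepts(w):
    """True iff w can be cleared down to the empty word by TARGET."""
    if w == '':
        return True
    s = '^' + w + '$'
    for x, z, y in TARGET:
        pat = x + z + y
        for i in range(len(s) - len(pat) + 1):
            if s[i:i + len(pat)] == pat:
                j = i + len(x)
                if accepts((s[:j] + s[j + len(z):])[1:-1]):
                    return True
    return False

def consistent(positives, negatives):
    """Consistent: accepts all positive samples and no negative one."""
    return (all(accepts(w) for w in positives)
            and not any(accepts(w) for w in negatives))
```

With all six negative samples included, this target automaton is indeed
consistent with the samples, which is why the inference can converge to it.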

\begin{figure}[htp]
\centering
\includegraphics[scale=0.8]{debug_anbn.png}
\caption[Debug Output.]{Debug Output.}
\label{figure:debug_anbn}
\end{figure}

\end{example}

\backmatter

\bibliographystyle{abbrv}
\bibliography{ClearingRestartingAutomaton}

\end{document}