%  To be submitted to the Department of Computer Science in partial fulfillment of the
%  requirements for the degree of Masters of Computer Science at Western
%  Washington University

% TODOs
% Add standard deviation to the data table
%

%Paper needs the following
%    * Abstract (about 200 - 300 words)
%    * Introduction
%          o definition of the problem
%          o significance of the research/application
%          o scope of the research/application
%          o introduction to the paper
%    * Related Research (all important previously used methods)
%    * Background to understand your approach (if needed)
%    * Approach
%          o general description of your approach
%          o scope of the research/application
%          o input/output representation (in detail with examples)
%          o theorem(s) and proof(s)
%          o algorithm(s) (if applicable)
%          o program organization chart (if applicable)
%          o data flow chart (if applicable)
%    * Experimental Results (if applicable)
%          o experiment set up
%          o test methods
%          o test results (table, pictures, charts, etc.)
%          o comparison and discussion of your result(s) with that of the previous research
%    * Conclusions
%          o summary of your research (what you have accomplished)
%          o interesting points shown in your tests (if applicable)
%          o differences between your methods and the previous methods
%          o significance of your research/application
%          o limitations of your research/application
%          o future work (what can be done and/or what is your plan)
%    * Reference

\documentclass{acm_proc_article-sp}

\begin{document}

\title{Discovering Unknown Finite State Transducers Using Aperiodic Sequences}

%Thinking about using one of these instead
%\title{Exploring Unknown Environments With Limited Sensory Input}
%\title{Discovering Finite State Transducers Using Aperiodic Sequences}
%\title{Learning Finite State Transducers Using Aperiodic Sequences}

\numberofauthors{2}
\author{
  \alignauthor Martin Neal\\
  \affaddr{Western Washington University}\\
  \affaddr{516 High Street}\\
  \affaddr{Bellingham, WA, 98225}\\
  \email{Marty.Neal@gmail.com}
% To do: Ask Dr. Hearne if he _does_ want his name on the paper
  \alignauthor Dr. James Hearne\\
  \affaddr{Western Washington University}\\
  \affaddr{516 High Street}\\
  \affaddr{Bellingham, WA, 98225}\\
  \email{James.Hearne@wwu.edu}
}

\maketitle

\begin{abstract}
\vspace{10pt}
We present a new method for the discovery of finite-state transducers via
on-line exploration.  Our exploration method uses aperiodic sequences to
generate experiments which are tested on the target transducer.  The outputs
from the transducer are then used to generate additional experiments.  This
process is repeated until a confidence threshold is reached.  Our method never
requires a reset of the target transducer, nor does it need a teacher to give
counter examples.

\end{abstract}

%to do: list comma-separated keywords here
%\keywords{}

% Should mention that NFSTs are more powerful than DFSTs

\section{Introduction}
\vspace{10pt}
A finite state transducer (FST), which may be modeled by a Mealy Machine or a
Moore Machine, is an automaton which, given an input, transitions to a new state
and produces an output.  This research attempts to learn the states,
transitions, and outputs of any given FST by actively querying the machine.
First, a hypothesis for the machine is generated.  The hypothesis machine is an
approximation for the target machine, that is, the machine to be learned.  The
hypothesis machine and target machine are explored in parallel until an input
produces an output which is inconsistent between the machines.

Hypotheses are considered distinct only if they are semantically different.
More specifically, if the machines for two hypotheses are equivalent up to
isomorphism after reduction, the hypotheses are not considered distinct.  In
other words, if both machines, reduced as much as possible, have the same
structure, they are not distinct.

The hypothesis space is biased towards the infinite set of all strongly
connected machines.  A machine is said to be strongly connected if and only if
every state is accessible from any state within the machine.  Without the
possibility of resetting the machine, exploring a weakly connected machine may
lead to a situation in which it is impossible to transition out of a subset of
states.

Like other assumption biases, our learning method will never learn a machine
that does not conform to these assumptions.  Furthermore, given a nonconforming
machine, the learner will never exhaust the hypothesis space, and thus will
never halt.  Our method also induces a preference bias towards small machines.

The version space is every hypothesis in our hypothesis space which is
consistent with the results of the queries \cite{Mitchell}.  Before any query is
performed, the version space is equal to the hypothesis space.  The result for
every query to the target machine reduces the version space.  When the result of
a query is inconsistent with the hypothesis machine, the hypothesis machine is
replaced by the next smallest machine in the version space.

\section{Background}
\vspace{10pt}

Exploring automata with output has been studied extensively.  Drescher
describes the necessary requirements for modeling the cognitive development of
infants as described by Piaget.  In his paper, he likens concept learning to
evolving schemas, where a schema is a state-dependent cause-and-effect view of
the world.  His ``Schema Mechanism'' develops fundamental concepts using only
sensory and motor primitives.  For example, his Schema Mechanism models how an
infant develops the concept of a physical object from only sensory perceptions
\cite{Drescher86,Drescher87}.

In his seminal paper on learnability, Gold describes which classes of languages
may be learned in the limit.  A language may be learned in the limit if the
learner can correctly guess which strings are in the language after some finite
number of training examples.  He describes two basic methods of information
presentation: Text and Informant.  The Text method is less powerful, because it
provides only strings from the language.  Informant training is more powerful,
because it provides examples for both strings which are and are not in the
language.  The Informant method is most representative of our research; Text
information is not powerful enough to learn regular languages in the limit.

Gold describes two types of Informant presentations: Arbitrary Informant and
Request Informant.  Each has a different presentation order.  The Arbitrary
Informant presents the learner with examples in a random order.  With the
Request Informant method, the learner queries the Informant by presenting it
with an example.  The Informant responds by classifying the example as in the
language or not in the language.  Both of these types are equivalent in that
they are equally powerful for determining if a language is learnable in the
limit.  Our research further investigates the effects of the Informant type on
the number of examples required to learn the language \cite{Gold67}.

Trakhtenbrot and Barzdin' give a good survey of automaton identification,
covering resettability, dependence on the previous history of the process, and
dependence on a priori information.  An algorithm without a reset, with the
ability to use previous results, and which does not use prior knowledge is
called a simple conditional algorithm over an absolute black box.  Trakhtenbrot
and Barzdin' prove that there is no algorithm which can identify all absolute
black boxes with complete certainty.  They describe an iterative algorithm that
halts when the hypothesis machine is indistinguishable from the target machine
after testing an input word of fixed length \cite{Trakhtenbrot73}.

Angluin shows that there exists a polynomial time algorithm to learn a machine,
given a minimally adequate teacher.  The teacher must be able to answer
membership queries and equivalence queries.  A membership query is used to
determine if an input string is a member of the language.  An equivalence query
is used to determine if the hypothesis machine is equivalent to the target
machine; if not, the teacher provides a counterexample.  Her algorithm repeats
the following steps.  Extend the machine via membership queries until the
hypothesis machine has a transition from each state for every input, and until
it is consistent with the words provided to it.  Then test the hypothesis
machine for equivalence.  If the teacher replies with a counterexample, add the
example to the list and go back to extend the machine.  Otherwise, the machine
is correct and the algorithm finishes \cite{Angluin87}.

\section{Methods}
\vspace{10pt}
We begin with a high level overview of our algorithm to explore a target
machine.  The algorithm continually chooses queries for the target machine until
it encounters a query which is not satisfied by the current hypothesis machine.
When an inconsistency occurs, the algorithm chooses a new hypothesis machine
that is consistent with the new query and all previous queries.  Thus, the
algorithm is divided into two major sections: choosing a query, and choosing the
next hypothesis. These sections are repeated alternately until the hypothesis
has been explored enough without encountering any inconsistencies.  Here,
``enough'' is a parameter of the system.  We describe the two major sections of
the algorithm below, as well as how to initialize the hypothesis machine.

\subsection{Initializing the Machine}
\vspace{10pt}
We use Mealy machines to model our hypothesis and target machines.  A Mealy
machine is defined by the six-tuple $M = (Q, \Sigma, \Gamma, \delta, \gamma,
q_0)$.  $Q$ is the set of states.  The input and output alphabets, $\Sigma$ and
$\Gamma$, are parameters of the algorithm.  $\delta$ is a mapping from $Q
\times \Sigma \rightarrow Q$.  That is, $\delta$ determines which state each
transition leads to.  $\gamma$ is a mapping from $Q \times \Sigma \rightarrow
\Gamma$.  That is, $\gamma$ determines the output that each transition
produces.  $q_0$ is the start state \cite{Hein02}.

The hypothesis machine is initially defined as:

%tab me later
$Q = \{q_0\}$\\
$\delta(q_0,a_i) = q_0 \quad \forall a_i \in \Sigma$\\
$\gamma(q_0,a_i) = b_0 \quad \forall a_i \in \Sigma$, where $b_0 \in \Gamma$

That is, $Q$ contains only the start state $q_0$, and $\delta$ maps $q_0$ to
itself for each input in $\Sigma$.  Each of these reflexive transitions produces
the first element of the output set as defined by an arbitrary ordering on
$\Gamma$.
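
As a concrete illustration, the initial hypothesis can be sketched in Python.
This is a hypothetical encoding, not the implementation used in this research;
the names \texttt{sigma} and \texttt{gamma\_alphabet} are illustrative, with
$\delta$ and $\gamma$ stored as dictionaries keyed by (state, input) pairs.

```python
# A hypothetical sketch of the initial hypothesis machine (not the
# paper's implementation).
def initial_hypothesis(sigma, gamma_alphabet):
    """Single state q0; every input loops back to q0 and emits the first
    output symbol under an arbitrary ordering of Gamma."""
    q0 = 0
    delta = {(q0, a): q0 for a in sigma}                 # Q x Sigma -> Q
    gamma = {(q0, a): gamma_alphabet[0] for a in sigma}  # Q x Sigma -> Gamma
    return {"states": {q0}, "delta": delta, "gamma": gamma, "start": q0}
```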
\subsection{Choosing a Query}
\vspace{7pt}
The optimal query strategy is one that eliminates the most hypotheses from the
version space.  Determining the optimal query strategy is a difficult problem.
Our research experiments with aperiodic sequences as an approximately optimal
strategy.

A sequence which covers all possible machines must be aperiodic and infinite.
If it were periodic, there would exist a machine containing a cycle that a
period of the sequence would traverse.  Any transition that was not part of that
cycle would never be explored.  If the sequence were finite, then a machine
containing more states than elements in the sequence could never be fully
explored.

The chosen sequence should rely on what has already been tried in order to
formulate the remainder of the sequence.  The state history should be used as a
seed to the sequence.  Here, the state history refers to the sequence of queries
performed when the learner was exploring that state.  We investigate two
aperiodic sequences: de Bruijn sequences and Linus sequences.  We use a random
sequence as a base-line comparison.

\subsubsection{Linus Sequences}
\vspace{6pt}
A Linus sequence is a sequence designed to be as aperiodic as possible.  Each
symbol in the sequence is chosen to produce the shortest duplicated suffix in
the sequence.  When more than one symbol produces a duplicated suffix of length
zero, one is chosen arbitrarily.  The next symbol in a Linus sequence may be
generated in linear time, regardless of the seed.

For each element in the alphabet, calculate the length of the longest duplicated
suffix that would occur if the element were used as the next symbol in the
sequence.  The element which causes the shortest duplicate subsequence is chosen
as the next symbol.

An example Linus sequence over an alphabet, $\Sigma = \{1,2\}$ is
$$1, 2, 1, 1, 2, 2, 1, 2, 1, 1, 2, 1, ...$$
The first symbol, $1$, is chosen arbitrarily because no symbol produces a
duplicated suffix.  The symbol $2$ is chosen next because a $1$ would cause the
duplicated suffix $1,1$.  After that, a $1$ is chosen because a $2$ would cause
the duplicated suffix $2,2$.  Then, a $1$ is chosen again.  This causes a
duplicated suffix of $1,1$ but avoids the longer duplicated suffix of
$1,2,1,2$.  This process repeats indefinitely.  Further information can be found
in the On-line Encyclopedia of Integer Sequences \cite{Sloane}.
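
The generation rule can be sketched as follows.  This is a naive
quadratic-time version for clarity, not the linear-time algorithm; ties are
broken here by alphabet order, so later terms may differ from the published
sequence depending on the tie-breaking convention.

```python
def linus(alphabet, n):
    """Generate n terms; each term minimizes the length of the longest
    duplicated (doubled) suffix it would create.  Ties go to the symbol
    that comes first in the alphabet (an arbitrary convention)."""
    seq = []
    for _ in range(n):
        best_sym, best_len = None, None
        for sym in alphabet:
            cand = seq + [sym]
            # longest L such that the last 2L symbols form a doubled suffix
            dup = max((L for L in range(1, len(cand) // 2 + 1)
                       if cand[-2 * L:-L] == cand[-L:]), default=0)
            if best_len is None or dup < best_len:
                best_sym, best_len = sym, dup
        seq.append(best_sym)
    return seq
```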
\pagebreak
\subsubsection{de Bruijn Sequences}
\vspace{10pt}

A de Bruijn sequence $B_{cyclic}(k,n)$ is a cyclic sequence such that every
string of length $n$ over the alphabet $a$, which has size $k$, exists as a
contiguous subsequence in the sequence.  A linear de Bruijn sequence
$B_{linear}(k,n)$ has the same property, but is a linear sequence and thus has
$n-1$ more symbols than its cyclic counterpart, e.g.
$$B_{cyclic}(2,3) = \langle 0,0,0,1,0,1,1,1\rangle$$
$$B_{linear}(2,3) = \langle 0,0,0,1,0,1,1,1,0,0\rangle$$
An $n$-dimensional de Bruijn graph over $a$ is a directed graph with $k^n$
vertices, each uniquely labeled with a concatenation of $n$ symbols chosen from
$a$.  For any two vertices, $v_{head}$ and $v_{tail}$, with respective labels
$s_1s_2s_3...s_n$ and $t_1t_2t_3...t_n$, there exists an edge with the label
$t_n$ connecting $v_{head}$ and $v_{tail}$ iff $s_2s_3...s_n =
t_1t_2...t_{n-1}$.  Each vertex in a de Bruijn graph has an in-degree and
out-degree of $k$, and thus the graph is guaranteed to have an Eulerian
circuit.
%Also, The D in De Bruijn shouldn't be capitalized in the captions

\begin{figure}[h!]
  \centering
  \includegraphics{deBruijn2New.eps}
  \caption{A two dimensional de Bruijn Graph}
  \includegraphics{deBruijn3New.eps}
  \caption{A three dimensional de Bruijn Graph}
%  \includegraphics{DeBruijn2a.eps}
%  \includegraphics{DeBruijn3.eps}
\end{figure}
It is possible to obtain a de Bruijn sequence, $B_{cyclic}(k,n)$, by traversing
a Hamiltonian circuit on an $n$-dimensional de Bruijn graph and recording the
label of each edge, or by traversing an Eulerian circuit on an
$(n-1)$-dimensional de Bruijn graph and recording the edge labels.  The latter
can be achieved in polynomial time and thus is commonly used.
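
As a sketch, a cyclic de Bruijn sequence can also be generated without an
explicit graph traversal, using the standard recursive Lyndon-word (FKM)
construction.  This is an alternative to the Eulerian-circuit method described
above, not the method used in this research.

```python
def de_bruijn(k, n):
    """Return B_cyclic(k, n) over the alphabet {0, ..., k-1} using the
    recursive Lyndon-word (FKM) construction."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])  # emit one Lyndon word
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq
```

For example, \texttt{de\_bruijn(2, 3)} yields the cyclic sequence
$0,0,0,1,0,1,1,1$ shown above.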

\newpage
By construction, $B_{cyclic}(k,n)$ has length $k^n$ and $B_{linear}(k,n)$ has
length $k^n+n-1$.  We define a relaxed variation of $B_{linear}(k,n)$, called
$B_{relaxed}(k,n)$, by the recurrence:

$$B_{relaxed}(k,1) \equiv (s_1,s_2,\ldots,s_k), \quad s_1,s_2,\ldots,s_k \in a$$
$$B_{relaxed}(k,n) \equiv append(B_{relaxed}(k,n-1),S_n)$$

where $S_n$ is the shortest sequence of symbols that can be appended
to $B_{relaxed}(k,n-1)$, such that every string of length $n$ over the
alphabet $a$ exists as a contiguous subsequence in $B_{relaxed}(k,n)$.  Note
that there may be many $S_n$s that satisfy this requirement.
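
For small cases, an $S_n$ can be found by brute force.  The sketch below is
illustrative only; it is exponential in the extension length, far from an
efficient approach, and simply tries extensions of increasing length until
every length-$n$ string is covered.

```python
from itertools import product

def covers_all(seq, alphabet, n):
    """True iff every length-n string over the alphabet occurs as a
    contiguous subsequence of seq."""
    windows = {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}
    return all(w in windows for w in product(alphabet, repeat=n))

def shortest_extension(seq, alphabet, n):
    """Brute-force search for a shortest S_n extending seq so that all
    length-n strings are covered (for tiny inputs only)."""
    length = 0
    while True:
        for ext in product(alphabet, repeat=length):
            if covers_all(seq + list(ext), alphabet, n):
                return list(ext)
        length += 1
```

For example, starting from $B_{relaxed}(2,1) = (0,1)$, a shortest $S_2$ has
length three.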

Put into the framework of an $n$-dimensional de Bruijn graph, finding $S_n$ is
equivalent to finding the shortest route containing the subset of arcs which
have not yet been traversed by $B_{relaxed}(k,n-1)$.  Finding $S_n$ is
equivalent to the Directed Rural Postman Problem (DRPP).

The Directed Rural Postman Problem was introduced by Orloff (1974, 1976).  Given
a directed graph $G=(V = V_1 \cup V_2; A = A_1 \cup A_2)$ with a constant cost
function on all of the arcs, construct a minimum-cost closed walk which
traverses every arc in $A_1$ at least once.  The arcs in $A_2$ may be traversed
if necessary.  The key point in the DRPP is that the subgraph induced by $A_1$
need not be connected; however, the graph $G$ must be connected.  The DRPP was
shown to be NP-hard by Lenstra and Rinnooy Kan via a reduction from the
symmetric TSP \cite{Dror00}.

There is a special case in which the subgraph induced by the required arcs is
connected.  In this case, the problem reduces to the Directed Chinese Postman
Problem, which is solvable in polynomial time.

\subsubsection{Random Sequences}
\vspace{15pt}

One of the easiest sequences to produce is the random sequence.  A random
sequence is aperiodic with probability one.  It does not use any previously
gained information, and therefore should not outperform any sequence which does
use previously gathered information.  Random walks on finite state machines are
well studied.  Feller gives a thorough introduction to random walks
\cite{Feller61}.

\subsubsection{Variations}
\vspace{15pt}

There are a number of other sequences and alterations to sequences which may be
used as heuristics.  Some of these are listed in the Future Work section.  One
alteration to the algorithm found inconsistencies in the hypothesis machine
with fewer queries.

By keeping the history of inputs partitioned by state, some states have shorter
histories than others.  By traveling to these states using the fewest queries
possible, and then extending the history of those states via the chosen
sequence, inconsistencies in the hypothesis machine are found earlier.
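
The travel step can be sketched as a breadth-first search over the hypothesis
machine.  This is a hypothetical sketch, not the paper's implementation; it
assumes $\delta$ is a dictionary keyed by (state, input) pairs and that each
state's history length is known.

```python
from collections import deque

def path_to_least_explored(delta, current, history_len):
    """BFS for a fewest-query input sequence from the current state to
    the state with the shortest history."""
    target = min(history_len, key=history_len.get)
    seen, frontier = {current}, deque([(current, [])])
    while frontier:
        state, path = frontier.popleft()
        if state == target:
            return path
        for (q, a), nxt in delta.items():
            if q == state and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [a]))
    return None  # unreachable; cannot occur in a strongly connected machine
```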

Discrepancies between the hypothesis machine and the target machine are more
likely to occur where the hypothesis machine has been sparsely explored.  This
heuristic alteration can induce a regular pattern into the exploration.  As
stated before, any repeating pattern occurring at a regular interval may not
fully explore the target machine.

\subsection{Finding the Next Hypothesis}
\vspace{10pt}

When a query proves the hypothesis to be inconsistent, the search for a new
hypothesis continues by enumerating the remaining machines in our version
space until a machine is found that is consistent with the query history.  The
machines we encounter during this search are referred to as candidate
machines.

We enumerate through the queries beginning with the last query in the set of
queries from the previous search.  We continue to step through the search
queries and verify that the machine satisfies them.  If a transition is missing,
then a new transition is added that produces the same output as the query.  If a
query is not satisfied by the machine, the algorithm backtracks to the last
transition added to the candidate machine, and modifies the transition so that
its destination is a new state.  Once the algorithm has tried every state for a
particular transition, the algorithm backtracks to the previous transition.  If,
continuing in this manner, the algorithm attempts to backtrack from the original
transition added, and no states remain, a new state is added which becomes the
destination state for the initial transition.

There are $(n \cdot o)^{n \cdot i}$ possible machines, where $n$ is the number
of states, $i$ is the number of inputs, and $o$ is the number of outputs.  Of
these machines, many are not elements of our hypothesis space because of our
assumption bias.  However, there are still an exponential number of them with
respect to $n$.  Because it is impractical to store an exponential number of
candidate machines, we lazily construct them as they are needed.  We construct
candidate machines by beginning with the last machine that was returned.  This
choice allows the learning algorithm to safely prune a significant portion of
candidate machines.

When we are constructing a candidate machine, we enumerate all possible
transition mappings for $\delta$.  Using the query history and the chosen
$\delta$, we can deductively choose a complete $\gamma$ mapping to match the
query history.  This effectively removes the factor contributed by the output
set size $o$ from the number of machines in our version space.
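
The deduction of $\gamma$ from a fixed $\delta$ can be sketched as follows.
This is a simplified illustration, not the paper's code; it assumes the query
history is a list of (input, output) pairs observed starting from the start
state.

```python
def deduce_gamma(delta, start, history):
    """Replay the query history through delta, recording the output each
    transition must produce; return None if two queries force the same
    transition to produce different outputs (candidate is inconsistent)."""
    gamma, state = {}, start
    for inp, out in history:
        key = (state, inp)
        if gamma.setdefault(key, out) != out:
            return None  # conflicting outputs: reject this delta
        state = delta[key]
    return gamma
```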

During a depth first construction of the machine, an inconsistency may be found
before the machine is fully constructed.  For example, if the first two queries
to the target machine used the same input and the returned output was different,
the learning algorithm may deduce that the start state cannot have a self loop on
that input.  Having found this early non-feature, the algorithm does not need to
enumerate through any machine which has this feature.

When a consistent candidate machine is found, it becomes the new hypothesis
machine.  The search environment is saved, and the algorithm returns to choosing
queries.

\section{Results and Discussion}
\vspace{10pt}
To test the algorithm, we generated one hundred machines within our hypothesis space.
Our hypothesis space contained only machines with two states, two inputs, and
two outputs.  We attempted to learn each machine using three separate methods.
The first method traveled directly to the state with the shortest history and used de
Bruijn sequences to extend that history.  The second method was similar, but it
used Linus sequences to extend the history.  For the third method we used a
random input at each step; we used this method as our baseline comparison.

During the trials, the first two methods did not always converge to the correct
machine.  This is because traveling to the state with the shortest history first
sometimes induces a regular pattern into the exploration.  This phenomenon
represents a local minimum problem in our search heuristic.

The heuristic estimates that the state with the shortest history is the state
which will yield the most information, and thus it suggests the learner explore
that state first.  As the learner explores a state, the state's history
increases.  If there exists another state which also has a short history, the
heuristic may suggest that the learner switch its efforts to exploring the other
state.  If the histories of the states are close, the search may oscillate
between these states.

Sometimes the learner is forced to transition away from a state before it has
had enough time to explore that state.  The learner may then gain no
information about the state, and therefore may cease to make forward
progress.  Our learner became trapped in a local minimum in 10\% of the machines
we tested.  These data points were not included in the results below.

For the remaining 90\% of the machines, we measured the number of queries
required to learn the target machine.  The table below summarizes our results
for each method.

\begin{enumerate}
\item Linus Sequence:\\
\begin{tabular}{|c|c|c|c|c|c|}
\hline
   Min. & 1st Qu. & Median & Mean  & 3rd Qu. & Max.\\ \hline
   3.00 & 6.00    & 7.00   & 9.72  & 11.00   & 41.00\\ \hline
\end{tabular}
\item de Bruijn Sequence:\\
\begin{tabular}{|c|c|c|c|c|c|}
\hline
   Min. & 1st Qu. & Median & Mean  & 3rd Qu. & Max.\\ \hline
   3.00 & 6.00    & 7.00   & 7.86  & 8.00    & 15.00\\ \hline
\end{tabular}
\item Random Sequence:\\
\begin{tabular}{|c|c|c|c|c|c|}
\hline
   Min. & 1st Qu. & Median & Mean & 3rd Qu. & Max.\\ \hline
   3.00 & 5.00    & 6.00   & 7.80 & 9.00    & 20.00\\ \hline
\end{tabular}
\end{enumerate}

On average, random sequences discover the target machine in the fewest
queries.  They learn the target machine in significantly fewer queries than
Linus sequences, but are comparable to de Bruijn sequences.  De Bruijn
sequences have a smaller standard deviation than random sequences, which may be
desirable in some applications.
\pagebreak

One possible explanation for these results may be the size of the learning task.
For learning simple machines, random probing is expected to learn with few
queries because the deficiency of the graph is zero.  The deficiency of a graph
is the number of additional edges needed to make the graph Eulerian.  Graphs
with low deficiencies are easier to explore \cite{Angluin87}, and most
exploration methods are expected to do well on these types of machines.  This
may explain why random sequences and de Bruijn sequences are comparable.

\section{Conclusions and Future Work}
\vspace{10pt}

We have presented an algorithm for discovering strongly connected finite state
transducers by querying the machine and observing its outputs.  The query is
chosen from aperiodic sequences.  We keep the simplest hypothesis machine which
is consistent with all queries.  Constructed queries do not learn small machines
significantly faster than random queries.

When the machines are larger and the deficiencies for the machines are higher,
probing is expected to learn a large portion of the machine and have difficulty
learning the deficient parts of the machine.  To test this theory, we would like
to experiment on larger machines with a higher deficiency.

As previously discussed, determining the best query strategy is a difficult
problem.  Despite this, it would be interesting to compare the query choices
between the optimal query strategy and the methods that our research
investigated.  This experiment is only feasible for small machines because
of the computational complexity for determining the optimal query strategy.

\newpage

\bibliographystyle{plain}
\bibliography{Paper_Writeup}

%\balancecolumns


\end{document}
