%!TEX root = ../rapport.tex

\chapter{Implementation}
\label{chap:implementation}

In this chapter we briefly discuss the implementation of the concepts
presented in the preceding chapters. The presentation is deliberately
superficial; for details, the actual source code should be consulted.

\section{Representation of Lambda Terms}

A lambda term is represented as a directed acyclic graph (DAG). The nodes of the
DAG are encoded in Python as objects. There are three types of nodes: 
abstraction, application and variable:
\begin{description}
	\item[Abstraction nodes] define scopes for bound variables and can be thought
	of as the encoding of the $\lambda$s in the DAG-structure. This node type
	always has one child.
	
	\item[Application nodes] have two children. The first of these
	is either the DAG-representation of some \emph{active} subterm of 
	$M$, or another application node. The second child is the DAG-representation
	of the ``argument'', that is, the lambda term that is to be substituted
	in place of a bound variable. An application node represents a possible
	one-step $\beta$-reduction.
	
	\item[Variable nodes] represent free or bound variables. A bound 
	variable is associated with a scope, represented by an ancestral
	abstraction node. Variable nodes cannot have descendants.
\end{description}

This representation is straightforwardly encoded in Python with a general node
class, \pycode{LambdaNode}, which has a subclass for each node type.
\begin{itemize}
	\item Each \pycode{LambdaNode}-object maintains pointers to its immediate 
	descendants as well as a pointer to its parent node.
	
	\item The abstraction nodes furthermore maintain a list of pointers to the
	variable nodes that represent the variables bound by this particular abstractor.
	These variable nodes must be descendants of the abstraction node, although 
	not necessarily direct descendants.
	
	\item Every abstraction node knows the name of the variable that it binds.
	This makes it easy to build the list of bound variables after parsing a
	string representation of a lambda term. Once the list of variables has been
	built, the name is no longer essential, but it still serves as a means of
	producing human-readable output of the DAG.
	
	\item When a term is reduced, the problem is attacked ``top-down'', i.e. we first
	see the abstraction node, and then we have to deal with its variables.
	Therefore there is no need for a doubly linked connection between abstraction
	nodes and variable nodes, and we can safely encode the relationship as a simple
	list of bound nodes in each abstraction node.
	
	\item The comparison of lambda terms has to follow Convention \ref{con:alpha} regarding
	$\alpha$-congruence. Two DAGs are compared by their \emph{normalized} string representations:
	the lambda term with all variables renamed in a systematic, well-defined way,
	such that two terms that are equal modulo $\alpha$-congruence have the same
	normalized string representation.
\end{itemize}
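The class hierarchy described above can be sketched as follows. This is an illustrative reconstruction from the description, not the actual source code; constructor signatures and attribute names are assumptions.

```python
class LambdaNode(object):
    """Base class for DAG nodes (sketch; attribute names are assumed)."""
    def __init__(self):
        self.parent = None
        self.children = []

class Abstraction(LambdaNode):
    """Binds a variable; always has exactly one child (the body)."""
    def __init__(self, varname, body):
        LambdaNode.__init__(self)
        self.varname = varname        # name of the bound variable
        self.bound_variables = []     # Variable nodes bound by this abstractor
        self.children = [body]
        body.parent = self

class Application(LambdaNode):
    """A possible one-step beta-reduction; always has two children."""
    def __init__(self, function, argument):
        LambdaNode.__init__(self)
        self.children = [function, argument]
        function.parent = self
        argument.parent = self

class Variable(LambdaNode):
    """A free or bound variable; a leaf of the DAG."""
    def __init__(self, name):
        LambdaNode.__init__(self)
        self.name = name
```

For the redex $(\lambda x.x)\,y$ one would, under these assumptions, build \pycode{Application(Abstraction('x', v), Variable('y'))}, where \pycode{v = Variable('x')} is also appended to the abstraction's \pycode{bound\_variables} list.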

A term that is not in $\beta$-normal-form has at least one $\beta$-contractum,
and therefore the DAG-re\-pre\-sen\-ta\-tion of that term contains at least one
application node and one abstraction node. More specifically, for each
possible one-step $\beta$-reduction in a term $M$, its DAG will contain one
application node. The first descendant of this node will be an abstraction
node, while the second, ``rightmost'', descendant will be the argument that
will be applied to the function represented by the abstraction node.
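The normalized string representation used for comparing terms can be sketched as follows. Here terms are nested tuples rather than the \pycode{LambdaNode} objects of the program, and the naming scheme (\pycode{v0}, \pycode{v1}, \ldots) is an assumption; any deterministic renaming scheme would do.

```python
def normalized(term, env=None, counter=None):
    """Rename bound variables systematically so that alpha-congruent terms
    yield identical strings. Sketch only: terms are nested tuples
    ('lam', x, body) / ('app', f, a) / ('var', x)."""
    if env is None:
        env, counter = {}, [0]
    kind = term[0]
    if kind == 'var':
        return env.get(term[1], term[1])      # free variables keep their name
    if kind == 'app':
        return '(%s %s)' % (normalized(term[1], env, counter),
                            normalized(term[2], env, counter))
    fresh = 'v%d' % counter[0]                # 'lam': pick the next fresh name
    counter[0] += 1
    inner = dict(env)
    inner[term[1]] = fresh                    # shadow any outer binding of x
    return '(#%s.%s)' % (fresh, normalized(term[2], inner, counter))
```

Since $\alpha$-congruent terms have identical structure, the left-to-right traversal assigns the same fresh names to corresponding binders, so their normalized strings coincide.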

\section{Parsing}

We have implemented a parser that generates the aforementioned representation
for the grammar specified in Definition \ref{def:grammar} with a few subtle
differences. First of all, the symbol used for $\lambda$ is a \texttt{\#}.
Secondly, all lambda expressions fed into the parser are run through a
preprocessing step that adds all implied parentheses to the string. This step
is included to make the parser definition itself simpler. The preprocessing
step ensures that Conventions \ref{con:assoc} and \ref{con:extent} are
followed.
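A minimal sketch of this preprocessing is given below, assuming \texttt{\#} for $\lambda$ and the conventions that application associates to the left and that a lambda body extends as far to the right as possible. The token syntax is simplified and the function names are illustrative, not those of the actual parser.

```python
import re

def tokenize(s):
    # simplified token syntax: '#', parentheses, '.', identifiers
    return re.findall(r'#|\(|\)|\.|[A-Za-z]\w*', s)

def preprocess(s):
    """Insert all implied parentheses into a lambda term (sketch)."""
    result, rest = _term(tokenize(s))
    assert not rest
    return result

def _term(toks):
    if toks[0] == '#':                  # abstraction: body extends maximally right
        assert toks[2] == '.'
        body, rest = _term(toks[3:])
        return '(#%s.%s)' % (toks[1], body), rest
    return _app(toks)

def _app(toks):
    left, toks = _atom(toks)
    while toks and toks[0] != ')':
        if toks[0] == '#':              # a trailing lambda swallows the rest
            right, toks = _term(toks)
        else:
            right, toks = _atom(toks)
        left = '(%s %s)' % (left, right)   # application is left-associative
    return left, toks

def _atom(toks):
    if toks[0] == '(':
        inner, rest = _term(toks[1:])
        assert rest[0] == ')'
        return inner, rest[1:]          # drop the closing ')'
    return toks[0], toks[1:]
```

For example, \texttt{x y z} becomes \texttt{((x y) z)} and \texttt{\#x.x y} becomes \texttt{(\#x.(x y))}.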

We used a parser generator to make the parser, since we found that simpler and
more straightforward than implementing one from scratch. There exist numerous
parser generators for Python \cite{pythonParserGeneratorList}. We chose YAPPS2
(Yet Another Python Parser System 2) \cite{YAPPS2} because it is designed such
that it generates human-readable code. YAPPS2 takes a specification of an
LL(1) grammar and writes the Python code for the corresponding parser.

\section{Representation of $\beta$-reduction Graphs}

A reduction graph is represented in Python as an instance of the
\pycode{Graph}-class. A \pycode{Graph}-object contains its nodes, represented
as instances of the \pycode{Node}-class, in three different data structures:
\begin{enumerate}
	\item A dictionary with references to the nodes, where the keys are normalized string 
	representations of the lambda terms represented by the nodes.
	
	\item Another dictionary with references to the same nodes, but keyed on the
	names of the nodes instead of the lambda terms.
	
	\item A list of the nodes. This is included to ensure that the nodes can be
	accessed in a consistent order, since the Python method \pycode{dict.items()}
	returns the elements of a dictionary in an arbitrary order.
\end{enumerate}
A \pycode{Graph}-object contains exactly one \pycode{Node} for each lambda
term in the reduction graph; if one tries to add a node for an existing term,
nothing happens.

A \pycode{Node}-object has a name and a reference to the lambda term which it
represents. It furthermore has a list of its children and a list of its
parents. These lists contain instances of the \pycode{Edge}-class, which
basically just implements a coupling between two \pycode{Node}-objects.
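The relationship between the three classes can be sketched as follows. This is an illustrative reconstruction; attribute and method names are assumptions, not the actual API.

```python
class Node(object):
    def __init__(self, name, term):
        self.name = name
        self.term = term              # the lambda term this node represents
        self.children = []            # outgoing Edge objects
        self.parents = []             # incoming Edge objects

class Edge(object):
    """A coupling between two Node objects."""
    def __init__(self, source, target):
        self.source = source
        self.target = target

class Graph(object):
    def __init__(self):
        self.nodes_by_term = {}       # keyed on normalized string representation
        self.nodes_by_name = {}       # keyed on node name
        self.node_list = []           # consistent ordering of the nodes

    def add_node(self, node, normalized_term):
        # adding a node for an already existing term is a no-op
        if normalized_term in self.nodes_by_term:
            return self.nodes_by_term[normalized_term]
        self.nodes_by_term[normalized_term] = node
        self.nodes_by_name[node.name] = node
        self.node_list.append(node)
        return node

    def add_edge(self, source, target):
        edge = Edge(source, target)
        source.children.append(edge)
        target.parents.append(edge)
        return edge
```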

\section{Generation of $\beta$-reduction Graphs}

In the file \texttt{operations.py}, the functions necessary for the generation
of reduction graphs are implemented. The main function here is
\pycode{reductiongraphiter()} which returns an iterator over the different
``generations'' of the reduction graph. Each generation is a reduction graph
containing everything from the generation immediately before, along with the
contracta of all the redexes of one term that was not reduced in the graph of
the preceding generation. Thus, the longer one iterates, the larger and more
complete the returned reduction graph becomes.

The function takes three arguments: the lambda term in whose reduction graph
we are interested, a start parameter controlling when the iterator should
start emitting graphs (making it possible to return e.g. only the ten last
generations) and an end parameter that controls how long the computation shall
continue, i.e. how many terms should be reduced. Note that, in general, a
``large'' end value is preferred, since it allows the user to continue to step
forwards. Therefore, the value used in practice in the implementation is fixed
to $10^6$.
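The iteration scheme can be sketched abstractly as follows, with a \pycode{successors} function standing in for one-step $\beta$-reduction and plain hashable values standing in for \pycode{Node} objects. All names here are illustrative, not the actual API of \texttt{operations.py}.

```python
def graph_generations(root, successors, start=0, end=10**6):
    """Yield successive 'generations' of a reduction graph (sketch)."""
    seen = {root}
    frontier = [root]     # terms whose redexes have not yet been contracted
    edges = []
    for generation in range(end):
        if not frontier:
            break         # finite graph: fully computed, stop early
        term = frontier.pop(0)
        for succ in successors(term):   # all one-step contracta of `term`
            edges.append((term, succ))
            if succ not in seen:
                seen.add(succ)
                frontier.append(succ)
        if generation >= start:
            # snapshot copies, so later stages see distinct generations
            yield set(seen), list(edges)
```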

In the case of reduction graphs of infinite size, the computation only halts
when the given maximum is reached. For finite graphs, the computation halts
either when the maximum is reached or when the graph is fully computed,
depending on the chosen maximum value.

Each time a reduction graph is returned by Python's \pycode{yield} statement, a
deep copy operation is performed. This ensures that later stages in the program
see the different generations of graphs as distinct objects, instead of a
collection of references to the same set of \pycode{Node}s. The built-in copy
operation \pycode{copy.deepcopy()} proved too slow, so a custom deep copy
operation was implemented.
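The idea behind such a specialised deep copy can be sketched as follows. This is a simplified stand-in (nodes as plain dictionaries rather than \pycode{Node}/\pycode{Edge} objects); the point is that knowing the exact structure lets us skip the generic introspection that makes \pycode{copy.deepcopy()} slow.

```python
def copy_nodes(nodes):
    """Specialised deep copy of a node collection (sketch). Each node is a
    dict with 'name', 'term' and 'children' (references to other nodes)."""
    # first pass: create a twin for every node, without links
    twins = {id(n): {'name': n['name'], 'term': n['term'], 'children': []}
             for n in nodes}
    # second pass: rewire the links to point at the twins
    for n in nodes:
        twins[id(n)]['children'] = [twins[id(c)] for c in n['children']]
    return [twins[id(n)] for n in nodes]
```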
\section{Drawing Algorithms}

All drawing algorithms are implemented as classes extending the
interface-style class \pycode{DrawingAlgorithm}. The drawing functions use
only those methods that are defined in \pycode{DrawingAlgorithm}, so to extend
the software with new algorithms, the only requirement is that this
interface be implemented. The algorithms that we provide come in two flavours:
wrappers around the software package GraphViz \cite{GraphViz} and a
``home-cooked'' implementation. The code we provide for the
GraphViz-based algorithms is merely a layer that calls the underlying
GraphViz routines and ensures compatibility with the graph representations
that we use.

GraphViz is a tool meant for ``end-to-end'' treatment of graph drawing, i.e.
it has a graph description language and features for different graphic output
modes with different shapes for nodes etc. The only property relevant for us
is the node positions, so when running the algorithms from GraphViz we first
convert our graph model into GraphViz's format, ask GraphViz to compute the
node positions and finally copy them into our model. 
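In its simplest form, the conversion into GraphViz's format amounts to emitting the \texttt{dot} graph description language. The sketch below is illustrative (the program's actual conversion code may differ); since we only want positions, no shapes or labels are set.

```python
def to_dot(node_names, edges):
    """Emit a minimal directed graph in GraphViz's dot language."""
    lines = ['digraph G {']
    for name in node_names:
        lines.append('  "%s";' % name)          # declare each node
    for a, b in edges:
        lines.append('  "%s" -> "%s";' % (a, b))  # one reduction step
    lines.append('}')
    return '\n'.join(lines)
```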

A common task when working with graphs and drawing algorithms is to find the
graph-theoretical distance between all pairs of nodes. These distances are used
in the algorithm that we implemented ourselves, and it is likely that new
algorithms will need them too. An implementation of the all-pairs
distance algorithm from \cite{Seidel1992} is available for future use.
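For reference, the same all-pairs distances on an unweighted graph can be obtained with a breadth-first search from every node; Seidel's algorithm is asymptotically faster on dense graphs, but the BFS version below conveys what is being computed.

```python
from collections import deque

def all_pairs_distances(adj):
    """adj maps each node to its neighbours; returns dist[u][v], the
    unweighted graph-theoretical distance, for all v reachable from u."""
    dist = {}
    for source in adj:
        d = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    queue.append(v)
        dist[source] = d
    return dist
```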

The following drawing algorithms are implemented:
\begin{description}
	\item[Majorization] is an implementation of the algorithm by Gansner et al. 
	described in \cite{Gansner2004} that makes heavy use of the NumPy library \cite{NumPy}. 
	It is a force-based approach that uses stress majorization
	to minimize an energy function. This algorithm is also implemented in the GraphViz
	package, but there are several reasons why we chose to make our own implementation:
	\begin{itemize}
		\item we have complete control over the algorithm, allowing us to tweak it;
		\item for the same reason we are able to call functions to update the screen
		when we see fit, allowing us to create an animation-like look.
	\end{itemize}
	
	This is the only non-GraphViz algorithm implemented.
	
	In the program it is called ``Neato Animated''.

	\item[Neato] is GraphViz's implementation of the algorithm by Gansner et al. 
	Since it behaves like a ``black box'', it uses its own settings for the
	stress function, which we cannot change; in our own implementation this
	function has been modified.
		
	\item[Dot] is an algorithm that produces a hierarchical layout of a graph. It is the
	only implemented algorithm that does not necessarily draw edges as straight lines;
	instead it computes a (possibly straight) B\'ezier curve for each edge. Like the
	other algorithms except ``Majorization'', it is really a layer on top of the
	GraphViz-module Dot.
	
	\item[Circo] performs a circular layout of the nodes. It is an implementation of 
	the algorithm described in \cite{Six1999}.
	
	\item[Twopi] also performs a circular layout, but it places the nodes on several circles instead
	of only one as Circo does. 
	
	\item[Fdp] is another force-based algorithm. It is an implementation of 
	the algorithm described by Fruchterman et al. \cite{Fruchterman1991}.
\end{description}
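The energy (stress) function minimized by the majorization-based algorithms has, in the formulation of Gansner et al.\ \cite{Gansner2004}, the form $\sum_{i<j} w_{ij}\,(\lVert x_i - x_j\rVert - d_{ij})^2$ with weights $w_{ij} = d_{ij}^{-2}$, where $d_{ij}$ is the graph-theoretical distance. A direct NumPy sketch of this function (not the program's actual code, which modifies it) is:

```python
import numpy as np

def stress(X, D):
    """Stress of layout X (n x 2 positions) given graph distances D (n x n),
    with weights w_ij = d_ij**-2 as in Gansner et al."""
    n = len(X)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            delta = np.linalg.norm(X[i] - X[j])   # Euclidean distance on screen
            total += (delta - D[i, j]) ** 2 / D[i, j] ** 2
    return total
```

A layout is ideal (stress $0$) exactly when every pair of nodes is drawn at its graph-theoretical distance.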

\section{Graphical User Interface}

The focus of this work has not been on the design of a graphical user
interface. However, in order to make the program accessible, a draft user
interface has been implemented. We have chosen to use the widget library GTK
\cite{PyGTK}, since it runs on several platforms and is already widely used in
e.g. the Gnome desktop environment.

Sometimes the layout algorithms place the nodes in a suboptimal way compared
to the user's perception of the graph. Because of this, it is possible to drag
and manually reorder the nodes in the graph. Some algorithms try to optimize
the graph layout after a node has been dragged, while others simply let the
user move the nodes around. This difference exists because the algorithms are
very different in nature; for some of them, re-optimizing a reordered graph
would simply undo the manual reordering.