\chapter{Lambda calculus}

Lambda calculus is based on the so-called `lambda notation' for denoting
functions. In informal mathematics, when one wants to refer to a function, one
usually first gives the function an arbitrary name, and thereafter uses that
name, e.g.

\begin{quote}

Suppose $f:\real \to \real$ is defined by:

$$ f(x) = \left\{ \begin{array}{ll}
                                   0 & \mbox{if $x = 0$} \\
                    x^2 \sin(1 / x^2) & \mbox{if $x \not= 0$}
                 \end{array} \right. $$

Then $f'(x)$ is not Lebesgue integrable over the unit interval $[0,1]$.

\end{quote}

Most programming languages, C for example, are similar in this respect: we can
define functions only by giving them names. For example, even though the
successor function (which adds $1$ to its argument) is very simple, in order
to use it in nontrivial ways (e.g. to take a pointer to it) we need to name
it via some function definition such as:

\begin{verbatim}
  int suc(int n)
   { return n + 1;
   }
\end{verbatim}

In either mathematics or programming, this seems quite natural, and generally
works well enough. However it can get clumsy when higher order functions
(functions that manipulate other functions) are involved. In any case, if we
want to treat functions on a par with other mathematical objects, the
insistence on naming is rather inconsistent. When discussing an arithmetical
expression built up from simpler ones, we just write the subexpressions down,
without needing to give them names. Imagine if we always had to deal with
arithmetic expressions in this way:

\begin{quote}

Define $x$ and $y$ by $x = 2$ and $y = 4$ respectively. Then $x \cdot x = y$.

\end{quote}

Lambda notation allows one to denote functions in much the same way as any
other sort of mathematical object. There is a mainstream notation sometimes
used in mathematics for this purpose, though it's normally still used as part
of the definition of a temporary name. We can write

$$ x \mapsto t[x] $$

\noindent to denote the function mapping any argument $x$ to some arbitrary
expression $t[x]$, which usually, but not necessarily, contains $x$ (it is
occasionally useful to ``throw away'' an argument). However, we shall use a
different notation developed by \citeN{church-book}:

$$ \lamb{x} t[x]$$

\noindent which should be read in the same way. For example, $\lamb{x} x$ is
the identity function which simply returns its argument, while $\lamb{x} x^2$
is the squaring function.

The symbol $\lambda$ is completely arbitrary, and no significance should be
read into it. (Indeed one often sees, particularly in French texts, the
alternative notation $[x]\; t[x]$.) Apparently it arose by a complicated
process of evolution. Originally, the famous {\em Principia Mathematica}
\cite{whitehead-principia} used the `hat' notation $t[\hat{x}]$ for the
function of $x$ yielding $t[x]$. Church modified this to $\hat{x}.\;t[x]$, but
since the typesetter could not place the hat on top of the $x$, this appeared
as ${\tiny \wedge} x.\;t[x]$, which then mutated into $\lamb{x} t[x]$ in the
hands of another typesetter.

\section{The benefits of lambda notation}

Using lambda notation we can clear up some of the confusion engendered by
informal mathematical notation. For example, it's common to talk sloppily about
`$f(x)$', leaving context to determine whether we mean $f$ itself, or the
result of applying it to a particular argument $x$. A further benefit is that lambda
notation gives an attractive analysis of practically the whole of mathematical
notation. If we start with variables and constants, and build up expressions
using just lambda-abstraction and application of functions to arguments, we can
represent very complicated mathematical expressions.

We will use the conventional notation $f(x)$ for the application of a function
$f$ to an argument $x$, except that, as is traditional in lambda notation, the
brackets may be omitted, allowing us to write just $f\;x$. For reasons that
will become clear in the next paragraph, we assume that function application
associates to the left, i.e. $f\;x\;y$ means $(f(x))(y)$. As a shorthand for
$\lamb{x} \lamb{y} t[x,y]$ we will use $\lamb{x\;y} t[x,y]$, and so on. We also
assume that the scope of a lambda abstraction extends as far to the right as
possible. For example $\lamb{x} x\; y$ means $\lamb{x} (x\; y)$ rather than
$(\lamb{x} x)\; y$.

At first sight, we need some special notation for functions of several
arguments. However there is a way of breaking down such applications into
ordinary lambda notation, called {\em currying}, after the logician
\citeN{curry-comb}. (Actually the device had previously been used by both
\citeN{frege-arith} and \citeN{schonfinkel}, but it's easy to understand why
the corresponding appellations haven't caught the public imagination.) The idea
is to use expressions like $\lamb{x\;y} x + y$. This may be regarded as a
function $\real \to (\real \to \real)$, so it is said to be a `higher order
function' or `functional' since when applied to one argument, it yields another
function, which then accepts the second argument. In a sense, it takes its
arguments one at a time rather than both together. So we have for example:

$$ (\lamb{x\;y} x + y)\;1\;2 = (\lamb{y} 1 + y)\;2 = 1 + 2 $$

Observe that function application is assumed to associate to the left in lambda
notation precisely because currying is used so much.
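Currying is exactly how multi-argument functions are handled in most functional languages, and the same idea can be sketched with nested lambdas in Python (the names {\tt add} and {\tt inc} below are our own, purely for illustration):

```python
# λx y. x + y, taking its arguments one at a time:
add = lambda x: lambda y: x + y

inc = add(1)             # partial application yields λy. 1 + y
assert add(1)(2) == 3    # (λx y. x+y) 1 2 = (λy. 1+y) 2 = 1 + 2
assert inc(41) == 42
```

Note that {\tt add(1)(2)} associates to the left just as $f\;x\;y$ does in lambda notation.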

Lambda notation is particularly helpful in providing a unified treatment of
bound variables. Variables in mathematics normally express the dependency of
some expression on the value of that variable; for example, the value of $x^2 +
2$ depends on the value of $x$. In such contexts, we will say that a variable
is {\em free}. However there are other situations where a variable is merely
used as a place-marker, and does not indicate such a dependency. Two common
examples are the variable $m$ in

$$ \sum_{m = 1}^{n} m = \frac{n (n + 1)}{2} $$

\noindent and the variable $y$ in

$$ \int_{0}^{x} 2 y + a \ dy = x^2 + a x $$

In logic, the quantifiers $\all{x} P[x]$ (`for all $x$, $P[x]$') and $\ex{x}
P[x]$ (`there exists an $x$ such that $P[x]$') provide further examples, and in
set theory we have set abstractions like $\{x \mid P[x]\}$ as well as indexed
unions and intersections. In such cases, a variable is said to be {\em bound}.
In a certain subexpression it is free, but in the whole expression, it is bound
by a {\em variable-binding operation} like summation. The part `inside' this
variable-binding operation is called the {\em scope} of the bound variable.

A similar situation occurs in most programming languages, at least from Algol
60 onwards. Variables have a definite scope, and the formal arguments of
procedures and functions are effectively bound variables, e.g. $n$ in the C
definition of the successor function given above. One can actually regard
variable declarations as binding operations for the enclosed instances of the
corresponding variable(s). Note, by the way, that the {\em scope} of a variable
should be distinguished sharply from its {\em lifetime}. In the C function {\tt
rand} that we gave in the introduction, $n$ had a textually limited scope but
it retained its value even outside the execution of that part of the code.

We can freely change the name of a bound variable without changing the meaning
of the expression, e.g.

$$ \int_{0}^{x} 2 z + a \ dz = x^2 + a x $$

Similarly, in lambda notation, $\lamb{x} E[x]$ and $\lamb{y} E[y]$ are
equivalent; this is called {\em alpha}-equivalence and the process of
transforming between such pairs is called alpha-conversion. We should add the
proviso that $y$ is not a free variable in $E[x]$, or the meaning clearly may
change, just as

$$ \int_{0}^{x} 2 a + a \  da \not= x^2 + a x $$

It is possible to have identically-named free and bound variables in the same
expression; though this can be confusing, it is technically unambiguous, e.g.

$$ \int_{0}^{x} 2 x + a \ dx = x^2 + a x $$

\noindent In fact the usual Leibniz notation for derivatives has just this
property, e.g. in:

$$ \frac{d}{dx}x^2 = 2 x$$

\noindent $x$ is used both as a bound variable to indicate that differentiation
is to take place with respect to $x$, and as a free variable to show where to
evaluate the resulting derivative. This can be confusing; e.g. $f'(g(x))$ is
usually taken to mean something different from $\frac{d}{dx} f(g(x))$. Careful
writers, especially in multivariate work, often make the separation explicit by
writing:

$$ \left.\frac{d}{dx}x^2\right|_{x} = 2 x $$

\noindent or

$$ \left.\frac{d}{dz}z^2\right|_{x} = 2 x $$

Part of the appeal of lambda notation is that all variable-binding operations
like summation, differentiation and integration can be regarded as functions
applied to lambda-expressions. Subsuming all variable-binding operations by
lambda abstraction allows us to concentrate on the technical problems of bound
variables in one particular situation. For example, we can view
$\frac{d}{dx}x^2$ as a syntactic sugaring of $D\; (\lamb{x} x^2)\; x$ where
$D:(\real \to \real) \to \real \to \real$ is a differentiation operator,
yielding the derivative of its first (function) argument at the point indicated
by its second argument. Breaking down the everyday syntax completely into
lambda notation, we have $D\; (\lamb{x} \mbox{EXP } x\; 2)\; x$ for some
constant $\mbox{EXP}$ representing the exponential function.
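To see $D$ as an ordinary higher-order function, here is a crude numerical stand-in in Python; the step size and the central-difference formula are our own choices and purely illustrative, not anything fixed by the analysis above:

```python
# D : (R -> R) -> R -> R, approximated by a central difference.
# Applied to one (function) argument it yields another function,
# which is then applied to the point of evaluation.
def D(f):
    h = 1e-6
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

# D (λx. x^2) applied at x = 3 is close to 2·3 = 6:
assert abs(D(lambda x: x ** 2)(3.0) - 6.0) < 1e-4
```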

In this way, lambda notation is an attractively general `abstract syntax' for
mathematics; all we need is the appropriate stock of constants to start with.
Lambda abstraction seems, in retrospect, to be the appropriate primitive in
terms of which to analyze variable binding. This idea goes back to Church's
encoding of higher order logic in lambda notation, and as we shall see in the
next chapter, Landin has pointed out how many constructs from programming
languages have a similar interpretation. In recent times, the idea of using
lambda notation as a universal abstract syntax has been put especially clearly
by Martin-L\"of, and is often referred to in some circles as `Martin-L\"of's
theory of expressions and arities'.\footnote{This was presented at the Brouwer
Symposium in 1981, but was not described in the printed proceedings.}

\section{Russell's paradox}

As we have said, one of the appeals of lambda notation is that it permits an
analysis of more or less all of mathematical syntax. Originally, Church hoped
to go further and include set theory, which, as is well known, is powerful
enough to form a foundation for much of modern mathematics. Given any set $S$,
we can form its so-called {\em characteristic predicate} $\chi_S$, such that:

$$ \chi_S(x) = \left\{ \begin{array}{ll}
                        true & \mbox{if $x \in S$} \\
                        false & \mbox{if $x \not\in S$}
                 \end{array} \right. $$

Conversely, given any unary predicate (i.e. function of one argument) $P$, we
can consider the set of all $x$ satisfying $P(x)$ --- we will just write $P(x)$
for $P(x) = true$. Thus, we see that sets and predicates are just different
ways of talking about the same thing. Instead of regarding $S$ as a set, and
writing $x \in S$, we can regard it as a predicate and write $S(x)$.

This permits a natural analysis into lambda notation: we can allow arbitrary
lambda expressions as functions, and hence indirectly as sets. Unfortunately,
this turns out to be inconsistent. The simplest way to see this is to consider
the Russell paradox of the set of all sets that do not contain themselves:

$$ R = \{x \mid x \not\in x\} $$

We have $R \in R \Iff R \not\in R$, a stark contradiction. In terms of
lambda defined functions, we set $R = \lamb{x} \Not (x\; x)$, and find that
$R\; R = \Not(R\; R)$, obviously counter to the intuitive meaning of the
negation operator $\Not$.
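The computational face of the paradox is easy to observe: encoding $R = \lamb{x} \Not (x\; x)$ directly in Python, the application $R\;R$ can never settle on a truth value, and evaluation simply exhausts the interpreter's recursion limit (a sketch, not a formal argument):

```python
# R = λx. ¬(x x); asking for R R sends evaluation round in circles,
# since R R = ¬(R R) = ¬¬(R R) = ...
R = lambda x: not x(x)

try:
    R(R)
    outcome = 'returned a value'      # never reached
except RecursionError:
    outcome = 'no truth value: R R recurses indefinitely'

assert outcome.startswith('no truth value')
```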

To avoid such paradoxes, \citeN{church-types} followed Russell in augmenting
lambda notation with a notion of {\em type}; we shall consider this in a later
chapter. However the paradox itself is suggestive of some interesting
possibilities in the standard, untyped, system, as we shall see later.

\section{Lambda calculus as a formal system}

We have taken for granted certain obvious facts, e.g. that $(\lamb{y} 1 + y)\;2
= 1 + 2$, since these reflect the intended meaning of abstraction and
application, which are in a sense converse operations. Lambda {\em calculus}
arises if we enshrine certain such principles, and {\em only} those, as a set
of formal rules. The appeal of this is that the rules can then be used
mechanically, just as one might transform $x - 3 = 5 - x$ into $2 x = 5 + 3$
without pausing each time to think about {\em why} these rules about moving
things from one side of the equation to the other are valid. As
\citeN{whitehead-intro} says, symbolism and formal rules of manipulation:

\begin{quote}
[\ldots] have invariably been introduced to make things easy. [\ldots] by
the aid of symbolism, we can make transitions in reasoning almost mechanically
by the eye, which otherwise would call into play the higher faculties of the
brain. [\ldots] Civilisation advances by extending the number of important
operations which can be performed without thinking about them.
\end{quote}

\subsection{Lambda terms}

Lambda calculus is based on a formal notion of lambda term, and these terms are
built up from variables and some fixed set of constants using the operations of
function application and lambda abstraction. This means that every lambda term
falls into one of the following four categories:

\begin{enumerate}

\item {\bf Variables:} these are indexed by arbitrary alphanumeric strings;
typically we will use single letters from towards the end of the alphabet, e.g.
$x$, $y$ and $z$.

\item {\bf Constants:} how many constants there are in a given syntax of lambda
terms depends on context. Sometimes there are none at all. We will also
denote them by alphanumeric strings, leaving context to determine when they are
meant to be constants.

\item {\bf Combinations}, i.e. the application of a function $s$ to an argument
$t$; both these components $s$ and $t$ may themselves be arbitrary
$\lambda$-terms. We will write combinations simply as $s\;t$. We often
refer to $s$ as the `rator' and $t$ as the `rand' (short for `operator' and
`operand' respectively).

\item {\bf Abstractions} of an arbitrary lambda-term $s$ over a variable
$x$ (which may or may not occur free in $s$), denoted by $\lamb{x} s$.

\end{enumerate}

Formally, this defines the set of lambda terms inductively, i.e. lambda
terms arise {\em only} in these four ways. This justifies our:

\begin{itemize}

\item Defining functions over lambda terms by primitive recursion.

\item Proving properties of lambda terms by structural induction.

\end{itemize}

A formal discussion of inductive generation, and the notions of primitive
recursion and structural induction, may be found elsewhere. We hope most
readers who are unfamiliar with these terms will find that the examples below
give them a sufficient grasp of the basic ideas.

We can describe the syntax of lambda terms by a BNF (Backus-Naur form)
grammar, just as we do for programming languages:

$$ Exp = Var \mid Const \mid Exp\; Exp \mid \lambda\; Var . \; Exp $$

\noindent and, following the usual computer science view, we will identify
lambda terms with abstract syntax trees, rather than with sequences of
characters. This means that conventions such as the left-association of
function application, the reading of $\lamb{x\;y} s$ as $\lamb{x} \lamb{y} s$,
and the ambiguity over constant and variable names are purely a matter of
parsing and printing for human convenience, and not part of the formal system.
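One way to make the inductive definition concrete is to encode the four categories directly as data. The nested-tuple encoding and printer below are a minimal Python sketch of one possible representation, not a canonical one:

```python
# Lambda terms as nested tuples, mirroring the four categories:
# ('var', v), ('const', c), ('app', s, t) for combinations,
# and ('lam', x, s) for abstractions.
def show(t):
    tag = t[0]
    if tag in ('var', 'const'):
        return t[1]
    if tag == 'app':                                  # rator applied to rand
        return '(' + show(t[1]) + ' ' + show(t[2]) + ')'
    return '(\\' + t[1] + '. ' + show(t[2]) + ')'     # abstraction

identity = ('lam', 'x', ('var', 'x'))
assert show(('app', identity, ('var', 'y'))) == '((\\x. x) y)'
```

Since the terms are trees rather than strings, conventions like left-association of application never arise at this level; they belong to {\tt show} and to parsing.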

One feature worth mentioning is that we use single characters both to stand
for variables and constants in the formal system of lambda terms {\em and} as
so-called `metavariables' standing for arbitrary terms. For example, $\lamb{x}
s$ might represent the constant function with value $s$, or an arbitrary lambda
abstraction using the variable $x$. To make this less confusing, we will
normally use letters such as $s$, $t$ and $u$ for metavariables over terms. It
would be more precise if we denoted the variable $x$ by $V_x$ (the $x$'th
variable) and likewise the constant $k$ by $C_k$ --- then all the variables in
terms would have the same status. However this makes the resulting terms a bit
cluttered.

\subsection{Free and bound variables}

We now formalize the intuitive idea of free and bound variables in a
term, which, incidentally, gives a good illustration of defining a
function by primitive recursion. Intuitively, a variable in a term is
free if it does not occur inside the scope of a corresponding abstraction. We
will denote the set of free variables in a term $s$ by $FV(s)$, and
define it by recursion as follows:

\begin{eqnarray*}
   FV(x)          & = & \{ x \}                 \\
   FV(c)          & = & \emptyset               \\
   FV(s\; t)      & = & FV(s) \Union FV(t)      \\
   FV(\lamb{x} s) & = & FV(s) - \{ x \}
\end{eqnarray*}

\noindent Similarly we can define the set of bound variables in a term $BV(s)$:

\begin{eqnarray*}
   BV(x)          & = & \emptyset               \\
   BV(c)          & = & \emptyset               \\
   BV(s\; t)      & = & BV(s) \Union BV(t)      \\
   BV(\lamb{x} s) & = & BV(s) \Union \{ x \}
\end{eqnarray*}
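These recursion equations transcribe directly into code. Here is a minimal Python sketch, using an illustrative tuple encoding of terms, {\tt ('var',v)}, {\tt ('const',c)}, {\tt ('app',s,t)} and {\tt ('lam',x,s)}:

```python
# Free and bound variables by primitive recursion over the term structure.
def fv(t):
    tag = t[0]
    if tag == 'var':   return {t[1]}
    if tag == 'const': return set()
    if tag == 'app':   return fv(t[1]) | fv(t[2])
    return fv(t[2]) - {t[1]}              # ('lam', x, s)

def bv(t):
    tag = t[0]
    if tag in ('var', 'const'): return set()
    if tag == 'app':   return bv(t[1]) | bv(t[2])
    return bv(t[2]) | {t[1]}              # ('lam', x, s)

# s = (λx y. x) (λx. z x):
s = ('app', ('lam', 'x', ('lam', 'y', ('var', 'x'))),
            ('lam', 'x', ('app', ('var', 'z'), ('var', 'x'))))
assert fv(s) == {'z'} and bv(s) == {'x', 'y'}
```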

For example, if $s = (\lamb{x\; y} x)\; (\lamb{x} z\; x)$ we have $FV(s) = \{ z
\}$ and $BV(s) = \{ x, y \}$. Note that in general a variable can be both free
and bound in the same term, as illustrated by some of the mathematical examples
earlier. As an example of using structural induction to establish properties of
lambda terms, we will prove the following theorem (a similar proof works for
$BV$ too):

\begin{theorem}
For any lambda term $s$, the set $FV(s)$ is finite.

\proof By structural induction. Certainly if $s$ is a variable or a constant,
then by definition $FV(s)$ is finite, since it is either a singleton or empty.
If $s$ is a combination $t\;u$ then by the inductive hypothesis, $FV(t)$ and
$FV(u)$ are both finite, and then $FV(s) = FV(t) \Union FV(u)$, which is
therefore also finite (the union of two finite sets is finite). Finally, if $s$
is of the form $\lamb{x} t$ then $FV(t)$ is finite, by the inductive
hypothesis, and by definition $FV(s) = FV(t) - \{ x \}$ which must be finite
too, since it is no larger. \qed

\end{theorem}

\subsection{Substitution}

The rules we want to formalize include the stipulation that lambda abstraction
and function application are inverse operations. That is, if we take a term
$\lamb{x} s$ and apply it as a function to an argument term $t$, the answer is
the term $s$ with all free instances of $x$ replaced by $t$. We often make this
more transparent in discussions by using the notation $\lamb{x} s[x]$ and
$s[t]$ for the respective terms.

However this simple-looking notion of substituting one term for a variable in
another term is surprisingly difficult. Some notable logicians have made faulty
statements regarding substitution. In fact, this difficulty is rather
unfortunate since as we have said, the appeal of formal rules is that they can
be applied mechanically.

We will denote the operation of substituting a term $s$ for a variable $x$ in
another term $t$ by $t[s/x]$. One sometimes sees various other notations, e.g.
$t[x \mbox{:=} s]$, $[s/x]t$, or even $t[x/s]$. The notation we use is perhaps
most easily remembered by noting the vague analogy with multiplication of
fractions: $x[t/x] = t$. At first sight, one can define substitution formally
by recursion as follows:

\begin{eqnarray*}
   x[t/x]            & = & t                           \\
   y[t/x]            & = & y \mbox{ if $x \not= y$}    \\
   c[t/x]            & = & c                           \\
   (s_1\;s_2)[t/x]   & = & s_1[t/x] \; s_2[t/x]        \\
   (\lamb{x} s)[t/x] & = & \lamb{x} s                  \\
   (\lamb{y} s)[t/x] & = & \lamb{y} (s[t/x]) \mbox{ if $x \not= y$}
\end{eqnarray*}

However this isn't quite right. For example $(\lamb{y} x + y)[y/x] = \lamb{y} y
+ y$, which doesn't correspond to the intuitive answer.\footnote{We will
continue to use infix syntax for standard operators; strictly we should write
$+\; x\; y$ rather than $x + y$.} The original lambda term was `the
function that adds $x$ to its argument', so after substitution, we might expect
to get `the function that adds $y$ to its argument'. What we actually get is
`the function that doubles its argument'. The problem is that the variable $y$
that we have substituted is {\em captured} by the variable-binding operation
$\lamb{y} \ldots$. We should first rename the bound variable:

$$ (\lamb{y} x + y) = (\lamb{w} x + w) $$

\noindent and only now perform a naive substitution operation:

$$ (\lamb{w} x + w)[y/x] = \lamb{w} y + w $$

We can take two approaches to this problem. Either we can add a condition on
all instances of substitution to disallow it wherever variable capture would
occur, or we can modify the formal definition of substitution so that it
performs the appropriate renamings automatically. We will opt for this latter
approach. Here is the definition of substitution that we use:

\begin{eqnarray*}
   x[t/x]            & = & t                           \\
   y[t/x]            & = & y \mbox{  if $x \not= y$}    \\
   c[t/x]            & = & c                           \\
   (s_1\;s_2)[t/x]   & = & s_1[t/x] \; s_2[t/x]        \\
   (\lamb{x} s)[t/x] & = & \lamb{x} s                  \\
   (\lamb{y} s)[t/x] & = & \lamb{y} (s[t/x]) \mbox{  if $x \not= y$ and
                                                    either $x \not\in FV(s)$
                                                    or $y \not\in FV(t)$}\\
   (\lamb{y} s)[t/x] & = & \lamb{z} (s[z/y][t/x]) \mbox{  otherwise, where
                                               $z \not\in FV(s) \Union FV(t)$}
\end{eqnarray*}

The only difference is in the last two lines. We substitute as before in the
two safe situations where either $x$ isn't free in $s$, so the substitution is
trivial, or where $y$ isn't free in $t$, so variable capture won't occur (at
this level). However where these conditions fail, we first rename $y$ to a new
variable $z$, chosen not to be free in either $s$ or $t$, then proceed as
before. For definiteness, the variable $z$ can be chosen in some canonical way,
e.g. the lexicographically first name not occurring as a free variable in
either $s$ or $t$.\footnote{Cognoscenti may also be worried that this
definition is not, in fact, {\em primitive} recursive, because of the last
clause. However it can easily be modified into a primitive recursive definition
of multiple, parallel, substitution. This procedure is analogous to
strengthening an induction hypothesis during a proof by induction. Note that by
construction the pair of substitutions in the last line can be done in
parallel rather than sequentially without affecting the result.}
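The corrected clauses can be transcribed into Python as follows. The tuple encoding of terms is again purely illustrative, and for simplicity the fresh variable is chosen as the first single letter not free in $s$ or $t$, a slight simplification of the canonical choice described above:

```python
# Capture-avoiding substitution t[s/x], following the clauses above.
def fv(t):
    if t[0] == 'var':   return {t[1]}
    if t[0] == 'const': return set()
    if t[0] == 'app':   return fv(t[1]) | fv(t[2])
    return fv(t[2]) - {t[1]}

def subst(t, s, x):
    tag = t[0]
    if tag == 'var':
        return s if t[1] == x else t
    if tag == 'const':
        return t
    if tag == 'app':
        return ('app', subst(t[1], s, x), subst(t[2], s, x))
    y, body = t[1], t[2]
    if y == x:
        return t
    if x not in fv(body) or y not in fv(s):       # the two safe cases
        return ('lam', y, subst(body, s, x))
    # otherwise rename the bound variable to a fresh z first
    z = next(c for c in 'abcdefghijklmnopqrstuvwxyz'
             if c not in fv(body) | fv(s))
    return ('lam', z, subst(subst(body, ('var', z), y), s, x))

# (λy. + x y)[y/x] renames the binder instead of capturing y:
plus_x_y = ('app', ('app', ('const', '+'), ('var', 'x')), ('var', 'y'))
result = subst(('lam', 'y', plus_x_y), ('var', 'y'), 'x')
assert result == ('lam', 'a',
    ('app', ('app', ('const', '+'), ('var', 'y')), ('var', 'a')))
```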

\subsection{Conversions}

Lambda calculus is based on three `conversions', which transform one term into
another one intuitively equivalent to it. These are traditionally denoted by
the Greek letters $\alpha$ (alpha), $\beta$ (beta) and $\eta$
(eta).\footnote{These names are due to Curry. Church originally referred to
$\alpha$-conversion and $\beta$-conversion as `rule of procedure I' and `rule
of procedure II' respectively.} Here are the formal definitions of the
operations, using annotated arrows for the conversion relations.

\begin{itemize}

\item Alpha conversion: $\lamb{x} s \alphas \lamb{y} s[y/x]$ provided $y
\not\in FV(s)$. For example, $\lamb{u} u\; v \alphas \lamb{w} w\; v$, but
$\lamb{u} u\; v \not\alphas \lamb{v} v\; v$. The restriction avoids another
instance of variable capture.

\item Beta conversion: $(\lamb{x} s)\; t \betas s[t/x]$.

\item Eta conversion: $\lamb{x} t\; x \etas t$, provided $x \not\in FV(t)$. For
example $\lamb{u} v\; u \etas v$ but $\lamb{u} u\; u \not\etas u$.

\end{itemize}
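As a small illustration, the side condition on $\eta$-conversion is easy to check mechanically. The Python sketch below applies one $\eta$-step at the top level of a tuple-encoded term (the encoding being our own illustrative choice):

```python
# One top-level eta-conversion: λx. s x ~~> s, provided x ∉ FV(s).
def fv(t):
    if t[0] == 'var':   return {t[1]}
    if t[0] == 'const': return set()
    if t[0] == 'app':   return fv(t[1]) | fv(t[2])
    return fv(t[2]) - {t[1]}

def eta(t):
    if (t[0] == 'lam' and t[2][0] == 'app'
            and t[2][2] == ('var', t[1])
            and t[1] not in fv(t[2][1])):
        return t[2][1]
    return t               # no top-level eta-redex: leave unchanged

# λu. v u ~~> v, but λu. u u must stay as it is:
assert eta(('lam', 'u', ('app', ('var', 'v'), ('var', 'u')))) == ('var', 'v')
assert eta(('lam', 'u', ('app', ('var', 'u'), ('var', 'u')))) == \
       ('lam', 'u', ('app', ('var', 'u'), ('var', 'u')))
```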

Of the three, $\beta$-conversion is the most important one to us, since it
represents the evaluation of a function on an argument. $\alpha$-conversion is
a technical device to change the names of bound variables, while
$\eta$-conversion is a form of {\em extensionality} and is therefore mainly of
interest to those taking a logical, not a programming, view of lambda calculus.

\subsection{Lambda equality}

Using these conversion rules, we can define formally when two lambda terms are
to be considered equal. Roughly, two terms are equal if it is possible to get
from one to the other by a finite sequence of conversions ($\alpha$, $\beta$ or
$\eta$), either forward or backward, at any depth inside the term. We can say
that lambda equality is the {\em congruence closure} of the three reduction
operations together, i.e. the smallest relation containing the three conversion
operations and closed under reflexivity, symmetry, transitivity and
substitutivity. Formally we can define it inductively as follows, where the
horizontal lines should be read as `if what is above the line holds, then so
does what is below'.

$$ \frac{s \alphas t \mbox{ or } s \betas t \mbox{ or } s \etas t}{s = t} $$

$$ \frac{}{t = t} $$

$$ \frac{s = t}{t = s} $$

$$ \frac{s = t \mbox{ and } t = u}{s = u} $$

$$ \frac{s = t}{s\; u = t\; u} $$

$$ \frac{s = t}{u\; s = u\; t} $$

$$ \frac{s = t}{\lamb{x} s = \lamb{x} t}$$

Note that the use of the ordinary equality symbol ($=$) here is misleading. We
are actually {\em defining} the relation of lambda equality, and it isn't clear
that it corresponds to equality of the corresponding mathematical objects in
the usual sense.\footnote{Indeed, we haven't been very precise about what the
corresponding mathematical objects {\em are}. But there are models of the
lambda calculus where our lambda equality is interpreted as actual equality.}
Certainly it must be distinguished sharply from equality at the {\em syntactic}
level. We will refer to this latter kind of equality as `identity' and use the
special symbol $\equiv$. For example $\lamb{x} x \not\equiv \lamb{y} y$ but
$\lamb{x} x = \lamb{y} y$.

For many purposes, $\alpha$-conversions are immaterial, and often
$\equiv_{\alpha}$ is used instead of strict identity. This is defined like
lambda equality, except that only $\alpha$-conversions are allowed. For
example, $(\lamb{x} x) y \equiv_{\alpha} (\lamb{y} y) y$. Many writers use this
as identity on lambda terms, i.e. consider equivalence classes of terms under
$\equiv_{\alpha}$. There are alternative formalizations of syntax where bound
variables are unnamed \cite{debruijn-terms}, and here syntactic identity
corresponds to our $\equiv_{\alpha}$.
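The nameless representation also gives the standard way to implement $\equiv_{\alpha}$: translate each bound variable into the number of binders between it and its own binder, then compare the results syntactically. A Python sketch, with the tuple term encoding again a purely illustrative choice:

```python
# Bound variables become indices counting enclosing binders (de Bruijn
# style); free variables and constants are left exactly as they are.
def nameless(t, env=()):
    tag = t[0]
    if tag == 'var':
        return ('bound', env.index(t[1])) if t[1] in env else t
    if tag == 'const':
        return t
    if tag == 'app':
        return ('app', nameless(t[1], env), nameless(t[2], env))
    return ('lam', nameless(t[2], (t[1],) + env))   # push binder innermost

def alpha_eq(s, t):
    return nameless(s) == nameless(t)

# λx. x ≡α λy. y, and (λx. x) y ≡α (λy. y) y:
assert alpha_eq(('lam', 'x', ('var', 'x')), ('lam', 'y', ('var', 'y')))
assert alpha_eq(('app', ('lam', 'x', ('var', 'x')), ('var', 'y')),
                ('app', ('lam', 'y', ('var', 'y')), ('var', 'y')))
assert not alpha_eq(('lam', 'x', ('var', 'x')), ('lam', 'y', ('var', 'z')))
```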

\subsection{Extensionality}

We have said that $\eta$-conversion embodies a principle of {\em
extensionality}. In general philosophical terms, two properties are said to be
{\em extensionally} equivalent (or {\em coextensive}) when they are satisfied
by exactly the same objects. In mathematics, we usually take an extensional
view of sets, i.e. say that two sets are equal precisely if they have the same
elements. Similarly, we normally say that two functions are equal precisely if
they have the same domain and give the same result on all arguments in that
domain.

As a consequence of $\eta$-conversion, our notion of lambda equality is
extensional. Indeed, if $f\;x$ and $g\;x$ are equal for any $x$, then in
particular $f\;y = g\;y$ where $y$ is chosen not to be free in either $f$ or
$g$. Therefore by the last rule above, $\lamb{y} f\;y = \lamb{y} g\;y$. Now by
$\eta$-converting a couple of times at depth, we see that $f = g$. Conversely,
extensionality implies that all instances of $\eta$-conversion do indeed give a
valid equation, since by $\beta$-reduction, $(\lamb{x} t\;x)\;y = t\;y$ for any
$y$ when $x$ is not free in $t$. This is the import of $\eta$-conversion, and
having discussed that, we will largely ignore it in favour of the more
computationally significant $\beta$-conversion.

\subsection{Lambda reduction}

Lambda equality, unsurprisingly, is a symmetric relation. Though it captures
the notion of equivalence of lambda terms well, it is more interesting from a
computational point of view to consider an asymmetric version. We will define a
`reduction' relation $\goesto$ as follows:

$$ \frac{s \alphas t \mbox{ or } s \betas t \mbox{ or } s \etas t}
        {s \goesto t} $$

$$ \frac{}{t \goesto t} $$

$$ \frac{s \goesto t \mbox{ and } t \goesto u}{s \goesto u} $$

$$ \frac{s \goesto t}{s\; u \goesto t\; u} $$

$$ \frac{s \goesto t}{u\; s \goesto u\; t} $$

$$ \frac{s \goesto t}{\lamb{x} s \goesto \lamb{x} t}$$

Actually the name `reduction' (and one also hears $\beta$-conversion called
$\beta$-reduction) is a slight misnomer, since it can cause the size of a term
to grow, e.g.

\begin{eqnarray*}
(\lamb{x} x\; x\; x)\; (\lamb{x} x\; x\; x)
& \goesto & (\lamb{x} x\; x\; x)\; (\lamb{x} x\; x\; x)\;
(\lamb{x} x\; x\; x) \\
& \goesto & (\lamb{x} x\; x\; x)\; (\lamb{x} x\; x\; x)\; (\lamb{x} x\; x\;
x)\; (\lamb{x} x\; x\; x) \\ & \goesto & \ldots
\end{eqnarray*}

However reduction does correspond to a systematic attempt to evaluate a
term by repeatedly evaluating combinations $f\;x$ where $f$ is a lambda
abstraction. When no more reductions except for $\alpha$-conversions are
possible, we say that the term is in {\em normal form}.

\subsection{Reduction strategies}

Let us recall, in the middle of these theoretical considerations, the relevance
of all this to functional programming. A functional program is an {\em
expression} and executing it means evaluating the expression. In terms of the
concepts discussed here, we are proposing to start with the relevant term and
keep on applying reductions until there is nothing more to be evaluated. But
how are we to choose which reduction to apply at each stage? The reduction
relation is not deterministic, i.e. for some terms $t$ there are several $t_i$
such that $t \goesto t_i$. Sometimes this can make the difference between a
finite and infinite reduction sequence, i.e. between a program terminating and
failing to terminate. For example, by reducing the innermost {\em redex}
(reducible expression) in the following, we have an infinite reduction
sequence:

\begin{eqnarray*}
& & (\lamb{x} y)\; ((\lamb{x} x\; x\; x)\; (\lamb{x} x\; x\; x)) \\
& \goesto & (\lamb{x} y)\; ((\lamb{x} x\; x\; x)\; (\lamb{x} x\; x\; x)\;
                            (\lamb{x} x\; x\; x))               \\
& \goesto & (\lamb{x} y)\; ((\lamb{x} x\; x\; x)\; (\lamb{x} x\; x\; x)\;
                            (\lamb{x} x\; x\; x)\; (\lamb{x} x\; x\; x))  \\
& \goesto & \cdots
\end{eqnarray*}

\noindent and so ad infinitum. However the alternative of reducing the
outermost redex first gives:

$$ (\lamb{x} y)\; ((\lamb{x} x\; x\; x)\;(\lamb{x} x\; x\; x)) \goesto y $$

\noindent immediately, and there are no more reductions to apply.

The situation is clarified by the following theorems, whose proofs are too long
to be given here. The first one says that the situation we have noted above is
true in a more general sense, i.e. that reducing the leftmost outermost redex
is the best strategy for ensuring termination.

\begin{theorem}
If $s \goesto t$ with $t$ in normal form, then the reduction sequence that
arises from $s$ by always reducing the leftmost outermost redex is guaranteed
to terminate in normal form.
\end{theorem}

Formally, we define the `leftmost outermost' redex recursively: for a term
$(\lamb{x} s)\;t$ it is the term itself; for any other term $s\;t$ it is the
leftmost outermost redex of $s$, and for an abstraction $\lamb{x} s$ it is the
leftmost outermost redex of $s$. In terms of concrete syntax, we always reduce
the redex whose $\lambda$ is the furthest to the left.
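The leftmost-outermost strategy can be sketched directly in Python over tuple-encoded terms (an illustrative encoding, with substitution as defined earlier in the chapter):

```python
# One leftmost-outermost beta-step, or None if the term is in normal form.
# Terms: ('var',v) / ('const',c) / ('app',s,t) / ('lam',x,s).
def fv(t):
    if t[0] == 'var':   return {t[1]}
    if t[0] == 'const': return set()
    if t[0] == 'app':   return fv(t[1]) | fv(t[2])
    return fv(t[2]) - {t[1]}

def subst(t, s, x):                      # capture-avoiding t[s/x]
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'const':
        return t
    if t[0] == 'app':
        return ('app', subst(t[1], s, x), subst(t[2], s, x))
    y, body = t[1], t[2]
    if y == x: return t
    if x not in fv(body) or y not in fv(s):
        return ('lam', y, subst(body, s, x))
    z = next(c for c in 'abcdefghijklmnopqrstuvwxyz'
             if c not in fv(body) | fv(s))
    return ('lam', z, subst(subst(body, ('var', z), y), s, x))

def step(t):
    if t[0] == 'app':
        if t[1][0] == 'lam':             # the whole term is a redex
            return subst(t[1][2], t[2], t[1][1])
        s = step(t[1])                   # else try the rator first ...
        if s is not None: return ('app', s, t[2])
        s = step(t[2])                   # ... then the rand
        return None if s is None else ('app', t[1], s)
    if t[0] == 'lam':
        s = step(t[2])
        return None if s is None else ('lam', t[1], s)
    return None                          # variables and constants

# (λx. y) ((λx. x x x) (λx. x x x)) reaches normal form in one step:
omega3 = ('lam', 'x', ('app', ('app', ('var', 'x'), ('var', 'x')),
                       ('var', 'x')))
t = ('app', ('lam', 'x', ('var', 'y')), ('app', omega3, omega3))
assert step(t) == ('var', 'y')
```

Reducing the innermost redex of the same term instead would loop forever, as shown above.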

\subsection{The Church-Rosser theorem}

The next assertion, the famous Church-Rosser theorem, states that if we start
from a term $t$ and perform any two finite reduction sequences, there are
always two more reduction sequences that bring the two back to the same term
(though of course this might not be in normal form).

\begin{theorem}
If $t \goesto s_1$ and $t \goesto s_2$, then there is a term $u$ such that $s_1
\goesto u$ and $s_2 \goesto u$.
\end{theorem}

\noindent This has at least the following important consequences:

\begin{corollary}
If $t_1 = t_2$ then there is a term $u$ with $t_1 \goesto u$ and $t_2 \goesto
u$.

\proof It is easy to see (by structural induction) that the equality relation
$=$ is in fact the symmetric transitive closure of the reduction relation. Now
we can proceed by induction over the construction of the symmetric transitive
closure. However, less formally minded readers will probably find the following
diagram more convincing:

\begin{picture}(140,140)(-100,-20)

\put(-10,100){$t_1$}
\put(162,100){$t_2$}
\put(0,100){\vector(1,-1){20}}
\put(40,100){\vector(-1,-1){20}}
\put(40,100){\vector(1,-1){20}}
\put(80,100){\vector(-1,-1){20}}
\put(80,100){\vector(1,-1){20}}
\put(120,100){\vector(-1,-1){20}}
\put(120,100){\vector(1,-1){20}}
\put(160,100){\vector(-1,-1){20}}
\put(20,80){\vector(1,-1){20}}
\put(60,80){\vector(-1,-1){20}}
\put(60,80){\vector(1,-1){20}}
\put(100,80){\vector(-1,-1){20}}
\put(100,80){\vector(1,-1){20}}
\put(140,80){\vector(-1,-1){20}}
\put(40,60){\vector(1,-1){20}}
\put(80,60){\vector(-1,-1){20}}
\put(80,60){\vector(1,-1){20}}
\put(120,60){\vector(-1,-1){20}}
\put(60,40){\vector(1,-1){20}}
\put(100,40){\vector(-1,-1){20}}
\put(76,10){$u$}
\end{picture}

We assume $t_1 = t_2$, so there is some sequence of reductions in both directions
(i.e. the zigzag at the top) that connects them. Now the Church-Rosser theorem
allows us to fill in the remainder of the sides in the above diagram, and hence
reach the result by composing these reductions. \qed

\end{corollary}

\begin{corollary}
If $t = t_1$ and $t = t_2$ with $t_1$ and $t_2$ in normal form, then $t_1
\equiv_{\alpha} t_2$, i.e. $t_1$ and $t_2$ are equal apart from $\alpha$
conversions.

\proof By the first corollary, we have some $u$ with $t_1 \goesto u$ and $t_2
\goesto u$. But since $t_1$ and $t_2$ are already in normal form, these
reduction sequences to $u$ can only consist of alpha conversions. \qed

\end{corollary}

Hence normal forms, when they do exist, are unique up to alpha conversion. This
gives us the first proof we have available that the relation of lambda equality
isn't completely trivial, i.e. that there are any unequal terms. For example,
since $\lamb{x\; y} x$ and $\lamb{x\; y} y$ are not interconvertible by alpha
conversions alone, they cannot be equal.
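The joining of divergent reduction paths can be watched in miniature. In the
Python sketch below (representation and names ours), the term
$(\lamb{x} x\;x)\;((\lamb{y} y)\;a)$ is reduced innermost-first along one path
and outermost-first along the other; both paths arrive at the common reduct
$a\;a$, as the Church-Rosser theorem guarantees.

```python
# Terms: ("var", x) | ("app", f, a) | ("lam", x, body).

def subst(t, x, s):
    """Naive substitution t[s/x]; safe here as no capture can occur."""
    if t[0] == "var":
        return s if t[1] == x else t
    if t[0] == "app":
        return ("app", subst(t[1], x, s), subst(t[2], x, s))
    return t if t[1] == x else ("lam", t[1], subst(t[2], x, s))

def beta(t):
    """Contract a top-level redex (\\x. b) a; t must be such a redex."""
    assert t[0] == "app" and t[1][0] == "lam"
    return subst(t[1][2], t[1][1], t[2])

a = ("var", "a")
idy = ("lam", "y", ("var", "y"))
selfapp = ("lam", "x", ("app", ("var", "x"), ("var", "x")))
t = ("app", selfapp, ("app", idy, a))        # (\x. x x) ((\y. y) a)

# Path 1, innermost first: ... --> (\x. x x) a --> a a
s1 = beta(("app", selfapp, beta(("app", idy, a))))

# Path 2, outermost first: ... --> ((\y. y) a) ((\y. y) a) --> --> a a
s2 = beta(t)
s2 = ("app", beta(s2[1]), beta(s2[2]))

assert s1 == s2 == ("app", a, a)             # the paths rejoin
```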

Let us sum up the computational significance of all these assertions. In some
sense, reducing the leftmost outermost redex is the best strategy, since it
will work if any strategy will. This is known as {\em normal order reduction}.
On the other hand {\em any} terminating reduction sequence will always give the
same result, and moreover it is never too late to abandon a given strategy and
start using normal order reduction. We will see later how this translates into
practical terms.
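One such practical translation can be hinted at already. An eager language can
mimic normal order reduction by passing arguments as {\em thunks}
(zero-argument functions), evaluated only if actually used; the Python sketch
below (names ours) is the earlier example in this style.

```python
# Call-by-name in miniature: the constant function, like (\x. y), never
# forces its argument, so a nonterminating argument does no harm if it
# is passed as a thunk rather than evaluated eagerly.

def const_y(arg_thunk):
    return "y"                  # arg_thunk is never called

def omega():
    return omega()              # would recurse forever if ever called

assert const_y(omega) == "y"    # terminates: omega is passed, not called
```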

\section{Combinators}

Combinators were actually developed as an independent theory by
\citeN{schonfinkel} before lambda notation came along. Moreover Curry
rediscovered the theory soon afterwards, independently of Sch\"onfinkel and of
Church. (When he found out about Sch\"onfinkel's work, Curry attempted to
contact him, but by that time, Sch\"onfinkel had been committed to a lunatic
asylum.) We will distort the historical development by presenting the theory of
combinators as an aspect of lambda calculus.

We will define a {\em combinator} simply to be a lambda term with no free
variables. Such a term is also said to be {\em closed}; it has a fixed meaning
independent of the values of any variables. Now we will later, in the course of
functional programming, come across many useful combinators. But the
cornerstone of the theory of combinators is that one can get away with just a
few combinators, and express in terms of those and variables {\em any} term at
all: the operation of lambda abstraction is unnecessary. In particular, a
closed term can be expressed purely in terms of these few combinators. We start
by defining:

\begin{eqnarray*}
I & = & \lamb{x} x                              \\
K & = & \lamb{x\; y} x                          \\
S & = & \lamb{f\; g\; x} (f\; x)(g\; x)
\end{eqnarray*}
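These definitions can be rendered directly as curried closures in a
programming language. The following Python sketch is ours (in particular the
helper functions \verb!plus! and \verb!double! are just illustrations):

```python
# The three combinators as curried Python closures.
I = lambda x: x
K = lambda x: lambda y: x
S = lambda f: lambda g: lambda x: f(x)(g(x))

assert I(42) == 42
assert K("a")("b") == "a"          # K a is the constant function \y. a
plus = lambda a: lambda b: a + b
double = lambda x: 2 * x
assert S(plus)(double)(3) == 9     # the argument 3 is shared: 3 + (2*3)
```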

We can motivate the names as follows.\footnote{We are not claiming these are
the historical reasons for them.} $I$ is the identity function. $K$ produces
constant functions:\footnote{Konstant --- Sch\"onfinkel was German, though in
fact he originally used $C$.} when applied to an argument $a$ it gives the
function $\lamb{y} a$. Finally $S$ is a `sharing' combinator, which takes two
functions and an argument and shares out the argument among the functions. Now
we prove the following:

\begin{lemma}
For any lambda term $t$ not involving lambda abstraction, there is a term $u$
also not containing lambda abstractions, built up from $S$, $K$, $I$ and
variables, with $FV(u) = FV(t) - \{x\}$ and $u = \lamb{x} t$, i.e. $u$ is
lambda-equal to $\lamb{x} t$.

\proof By structural induction on the term $t$. By hypothesis, it cannot
be an abstraction, so there are just three cases to consider.

\begin{itemize}

\item If $t$ is a variable, then there are two possibilities. If it is equal to
$x$, then $\lamb{x} x = I$, so we are finished. If not, say $t = y$, then
$\lamb{x} y = K\; y$.

\item If $t$ is a constant $c$, then $\lamb{x} c = K\; c$.

\item If $t$ is a combination, say $s\; u$, then by the inductive hypothesis,
there are lambda-free terms $s'$ and $u'$ with $s' = \lamb{x} s$ and $u' =
\lamb{x} u$. Now we claim $S\;s'\;u'$ suffices. Indeed:

\begin{eqnarray*}
S\;s'\;u'\;x & = & S\;(\lamb{x} s)\; (\lamb{x} u)\; x     \\
             & = & ((\lamb{x} s)\; x) ((\lamb{x} u)\; x)  \\
             & = & s\; u                                  \\
             & = & t
\end{eqnarray*}

Therefore, by $\eta$-conversion, we have $S\;s'\;u' = \lamb{x} S\;s'\;u'\; x =
\lamb{x} t$, since by the inductive hypothesis $x$ is not free in $s'$ or $u'$.

\end{itemize}

\qed

\end{lemma}

\begin{theorem}
For any lambda term $t$, there is a lambda-free term $t'$ built up from $S$,
$K$, $I$ and variables, with $FV(t') = FV(t)$ and $t' = t$.

\proof By structural induction on $t$, using the lemma. For example, if $t$ is
$\lamb{x} s$, we first find, by the inductive hypothesis, a lambda-free
equivalent $s'$ of $s$. Now the lemma can be applied to $\lamb{x} s'$. The
other cases are straightforward. \qed

\end{theorem}

This remarkable fact can be strengthened, since $I$ is definable in terms of
$S$ and $K$. Note that for any $A$:

\begin{eqnarray*}
S\;K\;A\;x & = & (K\; x) (A\; x)                \\
           & = & (\lamb{y} x) (A\; x)           \\
           & = & x
\end{eqnarray*}

So again by $\eta$-converting, we see that $I = S\;K\;A$ for any $A$. It is
customary, for reasons that will become clear when we look at types, to use $A
= K$. So $I = S\;K\;K$, and we can avoid the use of $I$ in our combinatory
expression.

Note that the proofs above are constructive, in the sense that they describe a
definite procedure that, given a lambda term, produces an equivalent
combinatory term. One proceeds bottom-up, and at each lambda abstraction, which
by construction has a lambda-free body, applies the transformations given in
the lemma.
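That procedure can be transcribed almost literally. In the Python sketch below
(representation and names ours) the combinators are plain strings, the lemma's
three cases become the function \verb!abstract!, and the theorem's induction
becomes \verb!to_combinators!; a small interpreter then checks a translation
by running it.

```python
# Terms: "S" | "K" | "I" | ("var", x) | ("app", f, a) | ("lam", x, body).

def abstract(x, t):
    """Lambda-free u with u = \\x. t, for lambda-free t (the lemma)."""
    if t == ("var", x):
        return "I"                                  # \x. x = I
    if isinstance(t, tuple) and t[0] == "app":      # \x. s u = S [x]s [x]u
        return ("app", ("app", "S", abstract(x, t[1])), abstract(x, t[2]))
    return ("app", "K", t)                          # x not free: \x. t = K t

def to_combinators(t):
    """Lambda-free equivalent of an arbitrary term (the theorem)."""
    if isinstance(t, str) or t[0] == "var":
        return t
    if t[0] == "app":
        return ("app", to_combinators(t[1]), to_combinators(t[2]))
    return abstract(t[1], to_combinators(t[2]))     # body is lambda-free

# Check translations by interpreting S, K, I as closures.
RUN = {"S": lambda f: lambda g: lambda x: f(x)(g(x)),
       "K": lambda a: lambda b: a,
       "I": lambda a: a}

def run(t, env):
    if isinstance(t, str):
        return RUN[t]
    if t[0] == "var":
        return env[t[1]]
    return run(t[1], env)(run(t[2], env))

# twice = \f x. f (f x); its combinatory form applied to suc and 3 gives 5.
twice = ("lam", "f", ("lam", "x",
         ("app", ("var", "f"), ("app", ("var", "f"), ("var", "x")))))
suc = lambda n: n + 1
assert run(to_combinators(twice), {})(suc)(3) == 5
```

Translations produced this way grow quickly, which is why practical
combinator-based implementations use optimised variants of the algorithm.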

Although we have presented combinators as certain lambda terms, they can also
be developed as a theory in their own right. That is, one starts with a formal
syntax excluding lambda abstractions but including combinators. Instead of
$\alpha$, $\beta$ and $\eta$ conversions, one posits {\em conversion rules} for
expressions involving combinators, e.g. $K\; x\; y \goesto x$. As an
independent theory, this has many analogies with lambda calculus, e.g. the
Church-Rosser theorem holds for this notion of reduction too. Moreover, the
ugly difficulties connected with bound variables are avoided completely.
However the resulting system is, we feel, less intuitive, since combinatory
expressions can get rather obscure.

Apart from their purely logical interest, combinators have a certain practical
potential. As we have already hinted, and as will become much clearer in the
later chapters, lambda calculus can be seen as a simple functional language,
forming the core of real practical languages like ML. We might say that the
theorem of combinatory completeness shows that lambda calculus can be `compiled
down' to a `machine code' of combinators. This computing terminology is not as
fanciful as it appears. Combinators have been used as an implementation
technique for functional languages, and real hardware has been built to
evaluate combinatory expressions.

% \section{Proof of the Church-Rosser theorem*}
%
% \section{The semantics of untyped lambda calculus*}
%
% Should discuss stuff in general, and perhaps give $\powerset(\omega)$ or even
% $D_\infty$.
%
% Term model is possible and by CRT nontrivial, but there's little point; such a
% model is not so likely to give new insight.

\section*{Further reading}

An encyclopedic but clear book on lambda calculus is \citeN{barendregt}.
Another popular textbook is \citeN{hindley-seldin}. Both these contain proofs
of the results that we have merely asserted. A more elementary treatment
focused towards computer science is Part II of \citeN{gordon-plt}. Much of our
presentation here and later is based on this last book.

\section*{Exercises}

\begin{enumerate}

\item Find a normal form for $(\lamb{x\; x\; x} x)\; a\; b\; c$.

\item Define $twice = \lamb{f\; x} f(f x)$. What is the intuitive meaning of
$twice$? Find a normal form for $twice\; twice\; twice\; f\; x$. (Remember that
function application associates to the left.)

\item Find a term $t$ such that $t \betas t$. Is it true to say that a term is
in normal form if and only if whenever $t \goesto t'$ then $t \equiv_{\alpha}
t'$?

\item What are the circumstances under which $s [t/x] [u/y] \equiv_{\alpha} s
[u/y] [t/x]$?

\item Find an equivalent in terms of the $S$, $K$ and $I$ combinators alone for
$\lamb{f\; x} f(x\; x)$.

\item Find a {\em single} combinator $X$ such that all $\lambda$-terms are
equal to a term built from $X$ and variables. You may find it helpful to
consider $A = \lamb{p} p\; K\; S\; K$ and then think about $A\; A\; A$ and $A\;
(A\; A)$.

\item Prove that any $X$ is a fixed point combinator if and only if it is
itself a fixed point of $G$, where $G = \lamb{y\; m} m(y\; m)$.

\end{enumerate}
