
	\noindent
     Genetic Programming (GP) \cite{Koza89,koza90genetically,koza92genetic}\index{genetic programming}
     is an Evolutionary Computation (EC) search strategy
     in which solutions are represented as executable parse trees.
     GP systems evolve populations of parse trees using a selection process linked
     to the performance of the associated programs on a particular problem.
     Each parse tree in a GP population is associated with a program and
     the program's execution provides the metrics used to grade the
     quality of the tree. In this thesis, we borrow vocabulary from genetics: the
     trees are {\em genotypes}\index{genotype}, while the programs associated with the trees are {\em phenotypes}\index{phenotype}.
     \newline
     \newline
     \noindent
     This work is set in the context of the GP genetic representation problem and concerns the exploitation of recent advances in type theory to extend the currently used tree-based representation schemes. It describes \ABGP, a search strategy related to GP in which the genotypes are compositions of computational blocks, assembled from a pattern derived from a second-order logic proof. The blocks (which are called {\em genes}\index{genes}) are \SF{} \cite{Gir71,Rey77}\index{\SF{}} terms. \SF{} is an extension of the simply typed $\lambda$-calculus.

     \MySection{Motivations and Contributions}
     The PhD work presented in this thesis consists of the formulation of \ABGP{} and the construction, implementation and testing of the corresponding system. The primary motivation for \ABGP{} is the need for a GP system whose underlying programming language is naturally capable of expressing the computational tools typically available to human programmers, in particular data types, looping structures and recursive structures. This is difficult to achieve in GP, where programs are typically represented as non-modular tree constructions and new programs are created by splitting existing programs (chosen non-deterministically) at random points and randomly recombining the resulting program chunks into new programs. This operation, called {\em crossover}, resembles the non-sexual propagation method of grafting far more than it resembles sexual reproduction. Because GP keeps and uses no information about the genotype's structure, the kind of complex genotypic sub-structures required to express data types, recursion or looping are too fragile to become stable genetic material able to evolve and be reused.
     \newline
     \newline
     \noindent
     \ABGP{} addresses this shortcoming by providing a search-space partitioning system, based on the structure of the genotypes, that is similar to the species partitioning of living organisms. In addition, by using \SF{} to encode the unit computational blocks of the system, \ABGP{} provides a method to express the computational tools typically available to human programmers as closed blocks that may be plugged into other blocks. This thesis presents the results we obtained by applying this method to a set of problems.
     \MySection{\SF{} and $\lambda$-calculi}
     \SF{} is a {\em $\lambda$-calculus}. The first ``untyped'' $\lambda$-calculus was introduced by Alonzo Church and Stephen Cole Kleene in the 1930s.  It is a formal system designed to investigate function definition and application as well as recursion.
     It has since emerged as a valuable tool in computability or recursion theory. Most programming languages are rooted in the $\lambda$-calculus \cite{Lan65}, which provides the basic mechanisms for procedural abstraction and procedure (subprogram) application. The calculus is an idealized, minimalistic programming language capable of expressing any algorithm.
     \newline
     \newline
     \noindent
     Typed $\lambda$-calculi can be seen as refinements of the untyped calculus, which has limitations (see section \ref{lcalculi}), but they can also be considered the more fundamental theory, with the untyped calculus as a special case having only one type. Various typed $\lambda$-calculi have been studied. Modern functional languages,
     building on $\lambda$-calculi, include Erlang, Haskell, Lisp, ML, Scheme and OCaml.
     \newline
     \newline
     \noindent
     \SF{} is an extension of the simply typed $\lambda$-calculus obtained
     by the introduction of a universal quantification operation on types.

    \subsection{Types}
    \noindent
    Types arise naturally, even starting from untyped universes, in any domain to categorize objects according to their usage and behavior
    \cite{cardelli85understanding}. Sets of objects with uniform behavior may be named and are referred to as types.
     The major purpose of a type system is to avoid embarrassing questions about representations, and to forbid
     situations where these questions might come up. In mathematics, as in programming, types impose constraints which help to enforce correctness.
     \paragraph{Notation for Types:} In this work, we use the following conventions regarding types: the names of types begin with capital letters (as in $X,Y,Int,Bool$) and are enclosed in square brackets. $f$ of type $\typ{\AT{A}{B}}$ is a function that takes an object of type $\typ{A}$ as argument and returns an object of type $\typ{B}$. $f$ of type $\typ{\AT{A}{(\AT{B}{C})}}$ is a function that takes an object of type $\typ{A}$ as argument and returns an object of type $\typ{\AT{B}{C}}$. For types, the arrow associates to the right, so $\typ{\AT{A}{(\AT{B}{C})}}$ may equivalently be written $\typ{\AT{A}{\AT{B}{C}}}$.
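The same conventions carry over to modern functional-style languages, where the arrow in function types is likewise right-associative. As an illustrative sketch (in TypeScript, with names of our own choosing, not part of any system described in this thesis), a curried function of type $\typ{\AT{Int}{\AT{Int}{Int}}}$ can be written as:

```typescript
// The function arrow associates to the right:
// (x: number) => (y: number) => number reads as number -> (number -> number).
const add = (x: number) => (y: number): number => x + y;

// Supplying only the first argument yields a function of type number -> number.
const addTen = add(10);
```

Reading the type from the left, `add` takes one number and returns a function awaiting the second number, exactly as the right-associative notation suggests.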
     \subsection{GP and Types}
     \noindent
     A problem with using GP to solve large and complex problems is the considerable size of the search space \cite{haynes95strongly}. Montana \cite{montana93strongly} illustrated how the search space can contain on the order of $10^{27}$ possible parse trees, even for small problems.
     In a metaphysical sense, a type is a category of being; a mammal, for example, is a type of thing. In their traditional form, GP systems have no type structure. GP research suggests that the search space of untyped GP systems is often larger than necessary because it contains both type-correct and type-incorrect programs \cite{montana93strongly, yu01polymorphism}. The same research experiments with typed representations in GP as a way to reduce the size of the search space. This approach was originally suggested by Koza \cite{koza1989hga} and later extended by Montana, who developed Strongly Typed Genetic Programming (STGP) \cite{montana93strongly}. There are several inter-related meanings of the word ``type''; we use all of them at different times and in different contexts, so we clarify the nuances immediately.

     \subsection{Type Theory}
     Intuitionistic type theory \cite{martinlof1984itt} is a logical system and a set theory based on the principles of mathematical constructivism. Introduced by Per Martin-L\"{o}f in 1972, intuitionistic type theory is based on the analogy between propositions and types: a proposition is identified with the type of its proofs. This identification is usually called the Curry-Howard isomorphism\index{Curry-Howard isomorphism} \cite{degroote:chi}, and was originally formulated for propositional logic and the simply typed $\lambda$-calculus. The types of type theory play a role similar to that of sets in set theory, but functions definable in type theory are always computable. The most obvious application of type theory is in constructing type-checking algorithms for the semantic analysis phase of programming language compilers. Type theory is also widely used in theories of the semantics of natural language. Intuitionistic type theory developed the notion of dependent types and directly influenced the development of the calculus of constructions \cite{coquand:cc} and the logical framework LF \cite{pfenning1991lpl}. A number of popular computer-based proof systems are based on type theory, for example NuPRL \cite{allen2000nol}, LEGO \cite{pollack1994tlp} and Coq \cite{bertot2004itp}. The work presented in this thesis extends the applicability of type theory to GP.
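As a small illustration of the Curry-Howard reading (a TypeScript sketch of our own, unrelated to the proof systems cited above): a total program inhabiting a type is a proof of the corresponding proposition.

```typescript
// Under Curry-Howard, the type (A -> B) -> (B -> C) -> (A -> C) is the
// proposition "implication is transitive"; this total program is its proof.
const chain =
  <A, B, C>(f: (a: A) => B, g: (b: B) => C): ((a: A) => C) =>
  (a: A) => g(f(a));
```

The program is simply function composition: given evidence for $A \Rightarrow B$ and $B \Rightarrow C$, it constructs evidence for $A \Rightarrow C$.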

     \subsection{Abstract Data Types (ADT)}
     An {\em Abstract Data Type}\index{Abstract Data Type} is a description of a common representation of data and a building block for data structures. Abstract data types are among the most important concepts in programming because they allow representations for complex data to be built from simpler parts. An ADT is called abstract because it can be completely specified without dealing with implementation details. An ADT leaves some aspects of its own structure undefined, to be provided by the user of the data type. \SF{} can express ADTs (see section \ref{TypeFormat}).
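For concreteness, a minimal TypeScript sketch of a stack ADT (the names \texttt{Stack}, \texttt{push} and \texttt{pop} are illustrative, not drawn from any system in this thesis): the user relies only on the operations, while the underlying representation could be hidden behind a module boundary.

```typescript
// A stack ADT: users manipulate stacks only through empty, push and pop,
// so the underlying array representation may change without affecting them.
type Stack<A> = { readonly items: readonly A[] };

const empty = <A>(): Stack<A> => ({ items: [] });

const push = <A>(x: A, s: Stack<A>): Stack<A> => ({ items: [x, ...s.items] });

// pop returns undefined on the empty stack, so no question about the
// underlying representation is ever exposed to the user.
const pop = <A>(s: Stack<A>): [A, Stack<A>] | undefined =>
  s.items.length === 0
    ? undefined
    : [s.items[0], { items: s.items.slice(1) }];
```

Note that the stack is polymorphic in its element type, anticipating the type abstraction discussed later in this chapter.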

    \MySection{Abstraction}
     \noindent
     {\em Abstraction}\index{abstraction} is the generalization obtained by reducing the information content of a concept in order to retain only information which is relevant for a particular purpose. For example, a red bicycle may be studied as a bicycle (removing the color information component), or as a vehicle (removing all information related to its particularities) or, even more abstractly, as a human-made artifact. In humans, the capacity to abstract and to express abstractions is strongly related to learning and intelligence, as it is the main tool that allows the inference of generalizations from specific occurrences. We chose \SF{} as our representation scheme for several reasons (listed in Chapter \ref{Motivation}), but mainly because of its abstraction power. There are several levels of abstraction, which we describe in this section.

     \subsection{Functional Abstraction}
     {\em Functional abstraction}\index{abstraction!functional} is the particular brand of abstraction that allows a programmer to write a function that will perform a computation generically in terms of one or more named parameters. Functional abstraction facilitates reuse as it allows a programmer to avoid having to repeatedly  write the same calculation. Once the function has been written, it can be instantiated as needed, by providing values for the parameters in each case. Functional abstraction is a key feature of essentially all programming languages \cite{Pie02}. \SF{} uses the symbol $\lambda$ to denote anonymous function abstraction. For example, given a function $f$ of type $[\AT{A}{A}]$ (a function that takes an object of type $[A]$ as argument and outputs another object of type $[A]$), we would write the \SF{} program:
     \begin{equation}\label{eqex1}
     (\lambda x^A . f(f\ x))
     \end{equation}
     as a functional abstraction of the computation that applies $f$ twice to an argument.
     \subsubsection{Higher-order Functions}
     \noindent
     {\em Higher-order functions} are functions which do at least one of the following:
     \begin{itemize}
     \item take one or more functions as an input
     \item output a function
     \end{itemize}
     The derivative in calculus is a common example, since
     it maps a function to another function. Languages that fully
     support higher-order functions do not need to distinguish between lower-order and higher-order functions.
     For example, a function which takes two numbers as arguments and outputs a number has its type expressed as
     $[\AT{Number}{\AT{Number}{Number}}]$ which can be read as being the type of a
     higher-order function that takes a single number as input and returns a function as its output.
     The output function is a function that takes a single number as its only parameter and returns
     a number. This allows the formation of well-defined partially applied functions such as $(+\ 5)$, of
     type $[\AT{Number}{Number}]$. A small body of
     GP-related work \cite{Yu99,yu:2004:GPTP} suggests that program representations
     which support higher-order functions and abstraction are more effective than the basic
     GP system when applied to some problems, such as the general even-parity problem. \SF{} supports higher-order functions, so program \ref{eqex1} may be further abstracted by writing:
     \begin{equation}
     (\lambda y^{\AT{A}{A}}.\lambda x^A .y(y\ x))
     \label{eqex2}
     \end{equation}
     Program \ref{eqex2} expresses the computation of the double application of any function of type $[\AT{A}{A}]$ to an object of type $[A]$.
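Programs \ref{eqex1} and \ref{eqex2} have a direct analogue in modern functional-style languages. As an illustrative TypeScript sketch (the names are ours):

```typescript
// Abstract over the function being applied twice,
// as in the System F term (\y. \x. y (y x)).
const twice = <A>(f: (a: A) => A) => (x: A): A => f(f(x));

// Partial application: addFive plays the role of the term (+ 5)
// of type Number -> Number discussed above.
const addFive = (n: number): number => n + 5;
```

Here `twice` is a higher-order function: it consumes the function `f` and produces a new function, so `twice(addFive)` is itself a well-defined function that adds ten to its argument.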


     \subsection{Polymorphism and Type Abstraction}
     \SF{} supports an even stronger form of abstraction. The program \ref{eqex2} is only applicable in the context of some existing type $\typ{A}$. By adding another abstraction operator, $\Lambda$, \SF{} is able to express an even more general abstraction of the computation of program \ref{eqex2}:
     \begin{equation}
     (\Lambda X . \lambda y^{\AT{X}{X}}. \lambda x^X . y(y\ x))
     \label{eqex3}
     \end{equation}
     Program \ref{eqex3} expresses its computation independently of type: it is a {\em polymorphic}\index{polymorphism} function. This form of abstraction, called {\em type abstraction}, allows, for example, the definition of operations on lists of objects of any type. Type abstraction is called polymorphism in the functional programming world and ``generic programming'' in the imperative programming world (where, confusingly, polymorphism means something else). There is a particularly strong kind of polymorphism, called {\em parametric polymorphism} \cite{Stra67}, which allows the definition of functions with uniform behavior for all types. For example, a function $f$, described in English as ``a function that takes two arguments of the same type and returns the first of these arguments'', is parametric-polymorphic and its behavior is defined on all possible arguments, including other polymorphic functions such as $f$ itself. In \SF{}, the $f$ function would have type $[\Pi X.\AT{X}{\AT{X}{X}}]$, the type of functions that take two arguments of some type $[X]$ and return an object of type $[X]$. $(\Lambda X.\lambda x^X.\lambda y^X.x)$ is
     an example of a term of this type; it is the direct \SF{} translation of the sentence ``Take a type $X$ as argument, then two arguments $x$ and $y$, each of type $X$, and return $x$'', which is exactly the definition of the $f$ function. Instantiating a polymorphic function at a particular type is done via a {\em type application}\index{type application} operation (see section \ref{termFormalDef}). For example, the instantiation of $f$ with the type $[Int]$ yields the function $(\lambda x^{Int}.\lambda y^{Int}.x)$, obtained by removing the type abstraction and replacing all bound occurrences of the type variable $X$ by the type $Int$. This new function may also be written in its non-normalized application form $(f\ [Int])$; its type is $[\AT{Int}{\AT{Int}{Int}}]$.
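The parametric-polymorphic $f$ and its instantiation have a close analogue in languages with generics. As a TypeScript sketch (the names are illustrative; the generic type parameter plays the role of the $\Pi$-bound variable):

```typescript
// Analogue of (ΛX. λx^X. λy^X. x), of type ΠX. X -> X -> X:
// take two arguments of the same type and return the first.
const first = <X>(x: X, _y: X): X => x;

// Instantiation at number, the analogue of the type application (f [Int]):
// the type variable X is replaced by number throughout.
const firstNumber = (x: number, y: number): number => first<number>(x, y);
```

As with the \SF{} term, the behavior of `first` is uniform across all types, including function types, so it may be applied to other polymorphic functions.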

    \MySection{Organization of the Thesis}
     This work is structured as follows: Section \ref{BasGP} provides
      background on the general GP paradigm; Section \ref{PreviousWork} summarizes
      more specific related previous work; once all necessary background has been presented,
      Section \ref{Motivation} motivates this work and Section \ref{SystemF} describes \SF{}; Section \ref{SFGP}
      describes the \ABGPS, the \SF{}-based GP system that we built to support this research; Section \ref{Results}
      presents and discusses the experiments and results obtained using \ABGP{}; Section
      \ref{Discussion} is the concluding discussion.
