    \noindent
    In this chapter, more advanced GP concepts specific to our work are presented.
    Of particular relevance to this thesis is the work concerning search space partitioning and
    the work related to typed representation.
    \MySection{Fitness Landscape Analysis}
    \noindent
    Given a GP system \gpSysDef, each element of \genoSet{} represents a search point in the
    space of potential solutions to a given problem. If genotypes could be visualized in two dimensions, then a {\em fitness landscape}
    would be a three-dimensional map with the fitness of the genotypes as the height. Evolution causes populations to move along a
    fitness landscape in particular ways. The evolvability \cite{altenberg1993eeg} of an evolutionary system can be seen as movement toward {\em local peaks} in the fitness landscape. A local peak is not necessarily the highest point in the fitness landscape,
    but any small movement away from it leads to decreasing fitness.
    \begin{definition}[Evolvability (GP context)]
    {\em Evolvability}  is the ability of a population to produce variants fitter than any yet existing.
    \end{definition}
    By contrast, an {\em optimum} is a peak from which any movement will
    lead to a decrease in fitness. Many evolutionary algorithms are inherently convergent
    and stagnate after a certain number of generations. Evolutionary algorithm practitioners
    handle premature convergence using methods specifically designed to slow down this degradation of
    search capability. These include increasing the population size, tuning crossover and mutation rates,
    and dynamically adapting running parameters.  Unfortunately, these tricks often seem to only delay
    the stagnation process. \cite{GliSy00} explain premature convergence from the evolvability point
    of view: as the fitness of individuals increases, phenotypic variation becomes
    increasingly costly. Selection then tends to favor individuals that increasingly
    conserve their fitness and exhibit less and less phenotypic variation.
    \newline
    \newline
    \noindent
    An alternative approach to understanding how GP searches is based on the idea of dividing the search space into subspaces (called {\em schemata} \cite{Hol92,Gold89}). The idea of schemata originates from Genetic Algorithms (GAs), a branch
    of evolutionary computation concerned with the evolution of solutions to problems (in contrast with GP, which concerns
    itself with evolving problem-solving programs). Schemata were introduced with the objective of providing a formal model for
    the effectiveness of the GA search process. The concept was later adapted to GP. A schema groups together the genotypes that have certain
    macroscopic properties in common. Using such subspaces it is possible to study how and why the genotypes in the population move from one
    schema to another. The search might be studied from the point of view of how the percentage of the population that belongs to a certain schema
    varies from generation to generation during a run.
%In both GA and GP, the concept of good genetic material is linked to short, low-order, high-fitness schemata.
    \subsection{Program Component Schema Theories}
        Component analysis concentrates on the propagation of sub-components of genotypes. In some definitions \cite{koza:book,oreilly92troubling,whigham1995stc,whigham1996gbe,oreilly1995agp}, the components of a
    schema are {\em non-rooted} in the sense that the schema can potentially match components anywhere in the tree.
    The first definition of a schema in the context of GP appears in \cite{koza:book} and applies to the classical
    version of GP. \cite{koza:book} simply defines a schema as the subset of programs of a GP population that contain a
    specified set of sub-trees. For example, the schema $\{(+\ 1\ 2),(*\ time\ 10)\}$ represents all the genotypes of a population that include at least one
    occurrence of each of the sub-trees $(+\ 1\ 2)$ and $(*\ time\ 10)$. The definition specifies only the components of a schema, not their positions or number of occurrences, so the same schema may be matched in different ways by the same program. This contrasts with GA schemata, for which multiple matching is not possible.
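Koza's schema-membership test above can be sketched directly. The following is an illustrative sketch (not taken from the cited work; representation and helper names are our own), with program trees as nested Python tuples:

```python
# Sketch (illustrative, not from the cited work): Koza-style schema
# membership. A program is a nested tuple such as ('+', 1, 2); a schema
# is a set of required subtrees, with positions and multiplicities
# unconstrained.

def subtrees(tree):
    """Yield every subtree of a program tree, including the tree itself."""
    yield tree
    if isinstance(tree, tuple):
        for child in tree[1:]:
            yield from subtrees(child)

def in_schema(program, schema):
    """True iff the program contains at least one occurrence of each
    subtree in the schema (components only: no positional or
    multiplicity constraints, so multiple matchings are possible)."""
    found = list(subtrees(program))
    return all(component in found for component in schema)

program = ('if', ('+', 1, 2), ('*', 'time', 10), 0)
schema = {('+', 1, 2), ('*', 'time', 10)}
print(in_schema(program, schema))  # True: both components occur
```

Note that `in_schema` only asks whether each component occurs somewhere, which mirrors the non-rooted, position-free character of this schema definition.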
    \newline
    \newline
    \noindent
    Koza's work on schemata was formalized and refined by \cite{oreilly1995agp,oreilly92troubling},
    who derived a schema theorem for GP with fitness-proportional selection and crossover. They define a schema
    as a multiset of subtrees and tree fragments. They do this by adding two items to the definition of a schema:
    \begin{enumerate}
    \item a structural dimension by including tree templates called
    {\em tree fragments} that include a ``wildcard symbol'' (`\#') that can be matched by any legal expression.
    \item a number associated with the fragment (or complete legal sub-tree as this is still possible)
    indicating the number of times it must be included in the program in order to consider the program as an element of the schema.
    \end{enumerate}
    This definition of schema allows the concept of {\em order} to be defined. Order is related to the number of
    defining symbols (non-`\#' symbols): the fewer defining symbols, the lower the order. GP building blocks \cite{oreilly92troubling} would be low-order, consistently compact GP schemata with consistently above-average observed performance that are expected to be sampled
    at increasing or exponential rates in future generations. The average number of links needed to connect a schema's
    subtrees over all the instantiated programs is used to calculate the defining length of the schema,
    a measure equivalent to the number of crossover events that may destroy the schema. {\em Compactness} is the inverse of this metric. The
    longer the average defining length of a schema, the higher its probability of being destroyed by crossover. From
    these definitions, \cite{oreilly92troubling} derives a schema theorem, but because their definition allows genotypes
    to match a schema in several different manners, the defining length of a schema cannot be fixed as a constant. The conclusion
    that the probability of disruption of a schema is not a constant, but varies with the shape, size and composition of each genotype
    that matches it, leads to \cite{oreilly92troubling}'s suggestion that the propagation and use of building blocks may be unattainable in GP.
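The fragment-matching and order computations above can be sketched as follows; this is an illustrative sketch (trees as nested tuples, `\#' as the wildcard; the encoding is our own, not O'Reilly's):

```python
# Sketch (illustrative): O'Reilly-style tree fragments with a '#'
# wildcard, plus the order of a fragment (its count of defining,
# i.e. non-wildcard, symbols).

def matches(fragment, tree):
    """True iff the fragment matches the tree: '#' matches any legal
    expression; otherwise labels and child counts must agree."""
    if fragment == '#':
        return True
    if isinstance(fragment, tuple) and isinstance(tree, tuple):
        return (len(fragment) == len(tree)
                and all(matches(f, t) for f, t in zip(fragment, tree)))
    return fragment == tree

def order(fragment):
    """Number of defining (non-'#') symbols in a fragment."""
    if fragment == '#':
        return 0
    if isinstance(fragment, tuple):
        return sum(order(part) for part in fragment)
    return 1

frag = ('+', '#', 2)
print(matches(frag, ('+', ('*', 'x', 'y'), 2)))  # True
print(order(frag))                               # 2 defining symbols
```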
%        \newline
%        \newline
%        \noindent
%        Building blocks are a natural fit for the approach that is proposed in this document.
%        In section \ref{SFBuildingBlockStrategy23}, I will revisit the GP building block hypothesis in the context
%        of this work and show why I think that the conclusions of \cite{oreilly92troubling} may not be applicable
%        to our approach.





    \MySection{Related Representation Work}
        \subsection{Constrained Syntactic Structures}\label{ConstrainedPreviousWork}
        \noindent
The classical GP specification \cite{koza92genetic} requires the definitions of the terminals and
functions of a GP system to satisfy the {\em closure} requirement.
Closure is the constraint that all the elements used in the genotype
must be of the same type and that no evaluation step will produce a
run-time error (so that for example, division by 0 is artificially
made safe). Closure is satisfied when ``any possible composition
of functions and terminals produces a valid executable computer
program'' \cite{koza98}. From our model's point of view, this translates as saying that given \gpSysDef{}, $\genoSet=\compilableGenoSet$ holds iff \gpSysGreek{} satisfies the closure requirement. Closure usually implies
using a monomorphic programming language to code the genotypes. This
in turn implies carefully defining the terminals and functions so
as not to introduce multiple data types. Typically, general definitions for decision functions or predicates are avoided to remove the need for a Boolean type. Instead, combined constructs such as (\ref{DEFEX2}) are used.
    \begin{equation}\label{DEFEX2}
    (\mathit{if\_smaller\_than\_else}\mbox{ }x\mbox{ }y\mbox{ }u\mbox{ }v)
     \end{equation}

    \noindent
    where $x,y,u,v$ are all of the same type. This style of function definition introduces two issues:
    \begin{enumerate}
    \item In order to increase the expressive power of the language, the system's designer is
    forced to define several specialized functions instead of a few general functions that may be combined to form specialized structures.
    This makes the syntax pre-defined for the GP system heavier than needed and introduces a bias into the GP's search strategy.
    \item It lacks flexibility. Functions defined in this way prevent the evolution of specialized problem
    specific programming constructs. For example, it would be impossible for a GP system that
    respects the closure constraint to evolve an expression such as:
    \begin{equation}
    \mathit{if\_else}\mbox{ }((\mathit{smaller\_than}\mbox{ }x\mbox{ }y)\mbox{ }\mathit{or}\mbox{ }(\mathit{eq}\mbox{ }0\mbox{ }x))\mbox{ }\ldots
    \end{equation}
    \noindent
    \end{enumerate}
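As a concrete illustration of closure, the following minimal sketch (function names and the default value returned on division by zero are illustrative choices, not prescribed by \cite{koza92genetic}) shows a monomorphic primitive set in which division is artificially made safe and the predicate is fused into a branching construct:

```python
# Sketch (illustrative): a minimal monomorphic primitive set satisfying
# closure. Every primitive maps numbers to numbers, and division by zero
# is made artificially safe, so any composition of these primitives
# evaluates without a run-time error.

def protected_div(x, y):
    """Closure-safe division: returns 1.0 when the divisor is zero
    (the default 1.0 is an illustrative convention)."""
    return x / y if y != 0 else 1.0

def if_smaller_than_else(x, y, u, v):
    """A combined construct: predicate and branch fused into one
    number-typed primitive, avoiding the need for a Boolean type."""
    return u if x < y else v

print(protected_div(6.0, 0.0))                      # 1.0
print(if_smaller_than_else(1.0, 2.0, 10.0, 20.0))   # 10.0
```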
    Koza recognized this shortcoming and added a mechanism called
    {\em constrained syntactic structures} \cite{koza92genetic} to relax the closure property. Constrained syntactic structures define a set of problem-specific syntactic rules specifying
    which terminals and functions are allowed to be the child nodes of every function in the program trees.
    They are a way to enforce data type constraints so that only parse trees satisfying these constraints (``legal'' parse trees) are generated, and they are used for problems requiring data typing.

        \subsection{Context-free Grammar Approach}
           \noindent
     Also noting that the requirement of closure makes many program structures difficult to express, \cite{whigham95grammaticallybased} proposed the use of {\em context-free grammars} (CFGs) to specify the structure of the system's programs. A context-free grammar describes the admissible constructs of a language. Note in the following definition that {\em non-terminals} and {\em terminals} have different meanings in GP and in CFGs.
    \begin{definition}
    A CFG is a 4-tuple $\{S,N,T,P\}$ where $S$ is the start symbol, $N$ is the set of non-terminal symbols, $T$ is the set of terminal symbols and $P$ is the set of production rules.
    \end{definition}
    \noindent
    In \CFGBGP{}, program trees are derivation trees and they are generated from the grammar. At the root of each derivation tree is
    the start symbol $S$. The trees are iteratively built up from the start symbol by re-writing each non-terminal symbol into one of its
    derivations. Crossover  is restricted to swapping subtrees built on the same non-terminal (in the CFG sense) symbol.
%   This is done by rewriting the axioms of the grammar, using its rewrite rules. A specialized algorithm is used to generate legal program trees. the grammar allows the user to bias the initial GP structures, and automatically ensure typing and syntax are maintained by manipulating the explicit derivation tree from the grammar
     \newline
     \newline
     \noindent
     For rules whose non-terminal can be rewritten recursively, an upper bound on the number of recursive rewritings of the rule is specified and used to limit the size of the program tree.
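The generation procedure just described can be sketched as follows; the toy grammar, its symbols and the depth cap are illustrative assumptions, not taken from \cite{whigham95grammaticallybased}:

```python
# Sketch (illustrative): generating a random derivation tree from a toy
# CFG, with an upper bound on recursive rewritings of the recursive rule.
import random

GRAMMAR = {                     # non-terminal -> list of productions
    'EXPR': [('OP', 'EXPR', 'EXPR'), ('VAR',)],
    'OP':   [('+',), ('*',)],
    'VAR':  [('x',), ('y',)],
}

def derive(symbol, depth, max_depth=4):
    """Rewrite a symbol into a derivation tree; past the depth bound,
    only the non-recursive production (here: VAR) is allowed for EXPR,
    which limits the size of the program tree."""
    if symbol not in GRAMMAR:
        return symbol                       # CFG terminal symbol
    productions = GRAMMAR[symbol]
    if symbol == 'EXPR' and depth >= max_depth:
        productions = [('VAR',)]            # cut off recursion
    rhs = random.choice(productions)
    return (symbol,) + tuple(derive(s, depth + 1) for s in rhs)

random.seed(0)
tree = derive('EXPR', 0)
print(tree)  # a legal derivation tree rooted at the start symbol EXPR
```

Crossover restricted to subtrees rooted at the same non-terminal, as described above, would then always exchange derivations of the same grammar symbol.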

        \subsection{Typed GP Systems}\label{TypedPreviousWork}
            \noindent
There are two reasons to consider using a type system as a GP representation scheme: eliminating the closure constraint and narrowing the system's
      search space.
      In the classical representation scheme, the cardinality of the terminal set and the average arity
of its elements are the only factors that determine the size of the
search space. The terminal set must contain enough symbols to be able
to express a solution to the problem at hand, but increasing its size
increases the size of the search space. When functions and terminals need to be
embedded in a context-specific position in the genotype to correctly
express something, the probability of actually assembling a
correctly formed program by some pseudo-random recombination process
is very small. So, instead of adding more functions and more
terminals, we may consider increasing the expressive power of the
system by adding less specialized functions to the function set. The
problem lies in ensuring that these more general structures do not
increase the probability of producing nonsensical genotypes. This
realization led to the exploration of representation schemes in
which the terminals and functions are  associated with a
specification indicating in which contexts they may or may not be
used. Attaching a type to each element of the GP's terminal set
ensures that computational blocks may combine only with other
computational blocks in a sensible manner. Types can be used to
specify how a block may be plugged into another block.

\subsubsection{Strongly Typed Genetic Programming (STGP)}\label{stronglyTypedGP}
In general, types are used by human programmers to rule out nonsensical computations. Types can
also be used for this purpose in the context of GP. This has the positive side-effect \cite{yu01polymorphism} of restricting the
search space to allow only programs that make sense from a run-time point of view. \cite{montana93strongly} shows how types can be used to
eliminate the closure constraint of un-typed genetic programs and to restrict the search space of genetic programs. Montana named the
method {\em Strongly Typed Genetic Programming} (STGP). In STGP, each function has a specified type for each argument and for the
value it returns. For example, $(func, \typ{\AT{A}{\AT{A}{B}}})$ defines a function that takes two arguments of type $\typ{A}$ and returns
an object of type $\typ{B}$. The simplest version of Strongly Typed Genetic Programming \cite{montana93strongly} may be formalized as:


\begin{definition}[Terminal Set Definition (STGP)]\label{termsetdefsimpletypes}
Let $ty$ be a set of symbols. A type $\sigma$ on $ty$  is either an element of $ty$ or a construct of the form $\AT{\alpha}{\beta}$ where $\alpha$ and $\beta$ are types on $ty$. A set of terminals on $ty$ for a GP system, \typedTerminalSet{} is a set of elements  $(f,\sigma)$  such that  $f$ is a symbol and  $\sigma$ is a type on $ty$.
\end{definition}
We remark that this definition allows the inclusion of terminals that take functions as arguments or return functions. For example, it is possible to include a terminal of type:
\begin{equation*}
\typ{\AT{(\AT{Number}{\AT{Number}{Number}})}{\AT{Number}{(\AT{Number}{Number})}}}
\end{equation*}
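Definition \ref{termsetdefsimpletypes} admits a direct encoding. The following sketch (the tuple encoding of arrow types is our own illustrative choice) builds the higher-order type displayed above:

```python
# Sketch (illustrative): encoding types on a symbol set ty. An atomic
# type is a string; an arrow type a -> b is the tuple ('->', a, b),
# with '->' associating to the right.

def arrow(*types):
    """Right-fold a sequence of types into nested arrow constructs:
    arrow(a, b, c) encodes a -> (b -> c)."""
    if len(types) == 1:
        return types[0]
    return ('->', types[0], arrow(*types[1:]))

# The higher-order type displayed above:
# (Number -> Number -> Number) -> Number -> (Number -> Number)
binop = arrow('Number', 'Number', 'Number')
higher_order = arrow(binop, 'Number', arrow('Number', 'Number'))
print(higher_order)
```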


    \begin{table}
    \begin{tiny}
    \begin{center}
    \begin{tabular}{|l|l|}
    \hline\hline
    {\bf Type} & {\bf Kind}\\
    \hline\hline
    Boolean & User Defined Atomic Type (UDAT) \\ \hline
    Number & User Defined Atomic Type (UDAT)\\
    \hline\hline
    {\bf Functions} & {\bf Type}\\
    \hline\hline
    $plus$ & $\BiOpTy$\\ \hline
    $minus$ & $\BiOpTy$\\ \hline
    $div$ & $\BiOpTy$\\ \hline
    $times$ & $\BiOpTy$\\ \hline
    $gt$ & $\typ{Number\rightarrow Number\rightarrow Boolean}$\\ \hline
    {\bf Terminals} & {\bf Type}\\
    \hline\hline
    $[0, \ldots, 10]$ & $\typ{Number}$ \\ \hline
    $time$ & $\typ{Number}$               \\ \hline
    $[true, false]$ & $\typ{Boolean}$               \\ \hline\hline
    \end{tabular}
    \caption{Types, functions and terminals definition for typed version
    of the terminals and functions previously defined in table \ref{tab:ter1} }
    \label{typedSymRegressionTerminals}
    \end{center}
    \end{tiny}
    \end{table}

\noindent
Table \ref{typedSymRegressionTerminals} displays the typed version of the set of functions, terminals and constants defined in Table \ref{tab:ter1}, with the exception of the $\mathit{if\_then\_else}$ function. The type $\sigma$ in $(f,\sigma)\in \typedTerminalSet$ now contains not only the arity information related to $f$, but also the names of the sets to which the arguments of $f$ should belong. Now, $f$'s arguments may be chosen from {\em only some} of the elements of $F_{ty}$, restricting the ways $f$ may be used in a genotype. Types provide a stronger partial specification of how $f$ may be used than arity does. While there is now more control over how a specific operation may be used, the extra safety is paid for with an extra layer of complexity: assembling genotypes randomly is now a more complicated affair. Below is the definition of genotypes in the typed representation scheme.

\begin{definition}[Genotype Definition (STGP)]\label{simpletypesgenotypes}
Let \typedTerminalSet{} be a terminal set. A {\em genotype}\index{genotype ! simply-typed representation scheme} is now a tree in which:
\begin{itemize}
\item each leaf is labeled with $(f,\sigma)$ where $(f,\sigma)\in F_{ty}$, and
\item each inner node has two children and is labeled $\epsilon$ where $\epsilon$ is a type. The left child is either an inner node labeled $\AT{\alpha}{\epsilon}$ or a leaf labeled $(f,\AT{\alpha}{\epsilon})$
and the right child is either an inner node labeled $\alpha$ or a leaf labeled $(f,\alpha)$.
\end{itemize}
\end{definition}
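Definition \ref{simpletypesgenotypes} suggests a simple type-inference routine over such trees. In this illustrative sketch (encodings and names are our own), atomic types are strings, an arrow type $\AT{\alpha}{\epsilon}$ is the tuple ('->', a, e), and each inner node applies its left child to its right child:

```python
# Sketch (illustrative): type inference over typed genotypes. Leaves
# carry (symbol, type) pairs; an inner node applies its left child
# (of type a -> e) to its right child (of type a) and has type e.

def infer(tree):
    """Return the type of a typed genotype, or raise if it is ill-typed."""
    if isinstance(tree, tuple) and tree[0] == 'app':
        fun_type = infer(tree[1])
        arg_type = infer(tree[2])
        if fun_type[0] == '->' and fun_type[1] == arg_type:
            return fun_type[2]
        raise TypeError('ill-typed application')
    symbol, sigma = tree       # a leaf: (f, sigma)
    return sigma

# (move front): apply move : Direction -> Behavior to front : Direction
genotype = ('app', ('move', ('->', 'Direction', 'Behavior')),
                   ('front', 'Direction'))
print(infer(genotype))  # Behavior
```

A partly specified target type for evolution, as described below, then amounts to requiring that `infer` return a particular type at the root.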
Given the \typedTerminalSet{} described by table \ref{sideEffectTypedTerminals8990}, the
typed genotype depicted in figure \ref{perfectProgram17867} may be formed, and its type is
$\typ{Behavior}$. One of the advantages of this representation scheme is the new-found ability to partly specify
the type of the programs that are being evolved (for example, something that evaluates to a value of type $\typ{Behavior}$
rather than $\typ{Direction}$ or $\typ{Boolean}$).


    \begin{table}
    \begin{tiny}
    \begin{center}
    \begin{tabular}{|l|l|}
    \hline\hline
    {\bf Type} & {\bf Kind}\\
    \hline\hline
    $\typ{Behavior}$ & User Defined Atomic Type (UDAT) \\ \hline
    $\typ{Direction}$ & User Defined Atomic Type (UDAT)\\ \hline
    $\typ{Boolean}$ & User Defined Atomic Type (UDAT)\\
    \hline\hline
    {\bf Functions} & {\bf Type}\\
    \hline\hline
    $move$ & $\typ{\AT{Direction}{Behavior}}$\\ \hline
    $turn$ & $\typ{\AT{Direction}{Behavior}}$\\ \hline
    $detectFood$ & $\typ{\AT{Direction}{Boolean}}$\\ \hline
    $if\_then\_else_{Behavior}$ & $\typ{\AT{Boolean}{\AT{Behavior}{\AT{Behavior}{Behavior}}}}$\\ \hline
    {\bf Terminals} & {\bf Type}\\
    \hline\hline

    $front$ & $\typ{Direction}$\\ \hline
    $left$ & $\typ{Direction}$ \\ \hline
    $right$ & $\typ{Direction}$ \\ \hline\hline
    \end{tabular}
    \caption{A set \typedTerminalSet{} for a sample side-effect GP system capable
    of expressing the genotype of figure \ref{perfectProgram17867}}
    \label{sideEffectTypedTerminals8990}
    \end{center}
    \end{tiny}
    \end{table}



    \begin{figure}
    \centering \epsfig{file=perfectProgram1.eps}
    \caption{Strongly-typed genotype for a side-effect GP system based on the terminals set of table \ref{sideEffectTypedTerminals8990}} \label{perfectProgram17867}
    \end{figure}

    \noindent
    \cite{haynes95strongly}
    applied STGP to the problem of evolving cooperation strategies in a predator prey environment.
    They reported that the solutions produced by a STGP system consistently
    outperformed the solutions produced by a standard GP system. They suggest
    that the reduced search space is the
    cause of the performance improvements. They also showed that the programs generated by STGP
    tend to be easier to understand. Following these encouraging results, \cite{haynes96type} proposes the extension of STGP with a type hierarchy mechanism and describes an application to the problem of finding all the cliques in an undirected or directed graph.


\subsubsection{Polymorphic STGP}\label{STGPissueDescription}
\noindent
The basic STGP formulation is equivalent to Koza's approach to constrained syntactic structures
(section \ref{ConstrainedPreviousWork}) and has the same limitation: The need to specify multiple functions which perform the same operation.
For example, consider the classic example: a typed branching function $\mathit{if\_then\_else}$:
\begin{equation}\label{TypedIfEx1}
    \mathit{if\_then\_else}:Boolean\rightarrow TYPE_1\rightarrow TYPE_1\rightarrow TYPE_1
\end{equation}
    \noindent
    The $\mathit{if\_then\_else}$ function of (\ref{TypedIfEx1}) is usable
    as a decision function between two objects of type $TYPE_1$,
    but not in any other context. This implies that different versions
    of the same function need to be defined to handle other data types.
    \newline
    \newline
    \noindent
When a language, such as Pascal, is based on the idea that functions and procedures, and hence their operands, have a unique type, it
is said to be {\em monomorphic}\index{monomorphism}. Monomorphic programming languages may be contrasted with polymorphic
languages in which some values and variables may have more than one type. {\em Polymorphism} is a language feature that
allows values of different data types to be handled using a uniform interface. A polymorphic function is a function that can
evaluate to or be applied to values of different types. A data type that can appear to be of a generalized type, for example a
list with elements of arbitrary type, is a polymorphic data type. A polymorphic type system allows the expression of computations
in which the types of some of the inputs (or of the outputs) are left unspecified. A polymorphic type system is necessary to express
aspects of computations that behave the same independently of type.
\newline
\newline
\noindent
There are two fundamentally different kinds of polymorphism, as originally described by \cite{Stra67}. If the range of usable types
is finite and the combinations must be specified individually prior to use, it is called {\em ad-hoc polymorphism}\index{polymorphism ! ad-hoc}. The other, stronger kind, called {\em parametric polymorphism}, is the one that our research uses, and we will describe it in great detail in the next two chapters.
\newline
\newline
\noindent
Ad-hoc polymorphic functions can be applied to arguments of different types, but behave differently depending on the type
of the argument to which they are applied (this is also known as function overloading). The term ``ad hoc" refers to the
fact that this type of polymorphism is a dispatch mechanism rather than a fundamental feature of the type system:
code moving through one named function is dispatched to various other functions without having to specify the exact
function being called. Overloading is simply a mechanism that allows multiple functions taking different types to be defined
with the same name; the compiler or interpreter automatically calls the right one. Ad-hoc polymorphism can be generated
by adding a variable mechanism to the type system, providing a way to decrease the specificity of the type included with each element of $F_{ty}$.
STGP supports an ad-hoc form of polymorphism. Montana does not provide a formalization of his type system, opting
instead for an informal description. He defines {\em generic functions}\index{functions!generic} as functions
whose type contains a variable, and calls {\em instantiation}\index{polymorphism ! ad-hoc ! instantiation}
the operation of applying a type to a generic function to yield a non-generic function. Montana's generic functions
can only be used when a specialized terminal corresponding to the instantiation of the generic function exists.  It is
the specialized function that is embedded in the genotype. This is for practical reasons: directly including a parametric
polymorphic object in a genotype would require a system capable of evaluating all possible instantiations of the object.
Since there are infinitely many types that can be formed on any non-empty set $ty$, this is not feasible. Instead, Montana
provides some instantiation implementations and instantiates generic functions {\em before} they get included in any genotype,
after his system ensures that the implementation resulting from the instantiation of the variable exists. This greatly
complicates the underlying system implementation (which is described on page 15 of \cite{montana93strongly}). In STGP, generic functions are defined on named lists of argument types and have their return types inferred when they are embedded
in a new tree not built from a crossover operation.
  %  For example, the branching function would be typed:
%    \begin{equation}\label{TypedIfEx2}
%    IFTE:BOOL\rightarrow X\rightarrow X\rightarrow X
%    \end{equation}
%    $X$ is a type variable that gets replaced by a type when a program is  constructed
%    at the beginning of the GP's run.
     After a generic function has been instantiated (by being embedded in a new tree), it
     behaves as a standard typed function. This is also how it is passed on to the program's
     descendants.
%    \paragraph{Table possibility table}
        Montana uses a table-lookup mechanism to produce legal
        parse trees. A type possibilities table is computed beforehand to
        specify all the types that can possibly be generated at each tree
        depth level. This table provides type constraints to
        the function selection procedure used to generate type-correct programs.
        During the creation of the initial population, each parse tree is grown top-down by
        randomly choosing functions and terminals and verifying their validity
        as possible tree nodes against the type possibility table.
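The table-lookup mechanism can be sketched as follows; the miniature primitive set (drawn loosely from table \ref{sideEffectTypedTerminals8990}) and all function names are illustrative assumptions, not Montana's implementation:

```python
# Sketch (illustrative): a type possibilities table in miniature.
# table[d] holds every type that some tree of depth <= d can return;
# the generator consults it so that it never commits to a function
# whose arguments could not be filled within the depth budget.
import random

PRIMS = [                     # (name, return type, argument types)
    ('front', 'Direction', ()),
    ('move', 'Behavior', ('Direction',)),
    ('detectFood', 'Boolean', ('Direction',)),
    ('ite', 'Behavior', ('Boolean', 'Behavior', 'Behavior')),
]

def possibilities(max_depth):
    table = {0: set()}
    for d in range(1, max_depth + 1):
        table[d] = {ret for _, ret, args in PRIMS
                    if all(a in table[d - 1] for a in args)} | table[d - 1]
    return table

def grow(wanted, depth, table):
    """Grow a type-correct tree of the wanted type within the depth
    budget (the wanted type must appear in table[depth])."""
    legal = [(n, args) for n, ret, args in PRIMS
             if ret == wanted and all(a in table[depth - 1] for a in args)]
    name, args = random.choice(legal)
    return (name,) + tuple(grow(a, depth - 1, table) for a in args)

table = possibilities(4)
print(table[1])   # types reachable by depth-1 trees: {'Direction'}
random.seed(1)
print(grow('Behavior', 3, table))
```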
    \newline
    \newline
    \noindent
    \textbf{Generic types:} Generic functions eliminate the need to specify multiple functions
    which perform the same operation on different types.
    To produce generic programs in STGP, Montana
    also implemented a {\em generic type} system. {\em Generic programs}
    are programs that have generic data types as input
    or output types. The
    generic data types are instantiated when the generic program
    is executed. Since generic data types can be instantiated with
    many different type values, the generated programs are
    generic programs.
    \newline
    \newline
    \noindent
    Unfortunately, there are three basic limitations to STGP's implementation of polymorphism:
    \begin{enumerate}
    \item {\bf Type possibilities table and table-lookup mechanism to instantiate type variables:}
    \noindent
    With STGP, when a parse tree is generated and a generic
function is considered for inclusion, the underlying GP system must
ensure that the new element will make it possible to select legal
subtrees. For generic functions, the GP has to loop over all ways to
combine types from a type possibility table that is maintained
centrally by the system.
    \item {\bf Evolving polymorphic functions and structures: } Polymorphic functions are instantiated with concrete
    types when included in a newly spawned tree. Crossover acts on functions and terminals that
    are not polymorphic anymore. There is no possibility of evolving
    functions defined for all types. There is no possibility of
    evolving structures such as trees or lists.
    \item {\bf Lack of support for higher-order functions: } STGP has no
    general support for growing functions which take functions as arguments
    and/or return functions as results.
    \end{enumerate}
%
 \subsubsection{The PolyGP system}\label{PolyGP}
       \noindent
             The {\em PolyGP system} \cite{clack97performance,Yu99,yu01polymorphism}
             is also based on a type system. Used during program creation, crossover and mutation,
             the type system ensures that all programs created are type-correct. In PolyGP,
             polymorphism is implemented through the use of three different kinds of
             type variables:
             \begin{enumerate}
             \item {\em Generic type variables} are used, as in STGP, to specify that a generated program can accept
            inputs of any type.
            \item  {\em Dummy type variables} express the polymorphism of built-in
            functions: whenever they are used in a parse tree they must be
            instantiated to some other type (not a dummy type). If a dummy type variable
            occurs more than once, then it is necessary to
            instantiate all occurrences to the same type.
           % Where there are no constraints the dummy type is
%instantiated as a new temporary type variable. This delayed binding of temporary type variables
%supports a form of polymorphism within the parse tree as it is being created. A useful invariant is that
%a dummy type variable will never occur in a parse tree.
            \item {\em Temporary type variables} are used when the type of a dummy variable cannot be determined but must be instantiated to a known type. This can happen when a polymorphic function is selected to construct a program tree node.
            \end{enumerate}
            \noindent
            The main differences between the PolyGP system and the STGP system are:
            \begin{itemize}
                \item PolyGP uses a type unification algorithm rather than a table-lookup mechanism to instantiate type variables.
                \item PolyGP uses temporary type variables to support polymorphism within a program parse tree as it is being created.
                \item PolyGP uses generic type variables to represent polymorphism of
                the generated programs. However, unlike generic data
                types in STGP, generic type variables are never instantiated.
                \item PolyGP's type system supports higher-order functions
                (functions that take functions as arguments and/or return functions as outputs).
            \end{itemize}
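The unification-based alternative to a table lookup can be sketched as follows. This is a generic first-order unification routine (our own illustrative code, with the occurs check omitted for brevity), not PolyGP's actual implementation:

```python
# Sketch (illustrative): first-order type unification of the kind used
# in place of a table lookup. Type variables are strings starting with
# a lowercase letter; arrows are ('->', a, b) tuples. The occurs check
# is omitted for brevity.

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def resolve(t, subst):
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst=None):
    """Return a substitution making t1 and t2 equal, or None on clash."""
    subst = dict(subst or {})
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        subst[t1] = t2
        return subst
    if is_var(t2):
        subst[t2] = t1
        return subst
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                        # clash: distinct atomic types

# A polymorphic x -> x instantiated at Behavior arguments:
print(unify(('->', 'x', 'x'), ('->', 'Behavior', 'Behavior')))
# {'x': 'Behavior'}
```

Requiring all occurrences of a variable to unify to the same type, as for PolyGP's dummy type variables, falls out of the shared substitution.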


        \MySection{\RecAbGP}\label{AbstractionPreviousWork}

           \noindent
       In this section, the work related to GP and modularization is
       discussed. Modules encapsulate chunks of computation and, as
       such, are a form of abstraction. Recursion is usually used in
       conjunction with modularization and is discussed in this
       section as well, including a type of recursion used in GP
       called ``implicit recursion'', which needs no naming or module
       scheme to function.


       \subsection{ADFs}
       \noindent
       {\em Automatically Defined Functions} (ADFs) \cite{koza93simultaneous} are mechanisms devised by Koza to facilitate the creation and reuse of modules. A GP system that supports ADFs tries to discover reusable subroutines and to assemble the subroutines into genotypes. Automatic function definitions are an attempt to solve problems by automatic subproblem decomposition. The modules evolved during the system's run can be called by the programs that are being simultaneously evolved during the same run. ADF implementations rely on the use of Koza's constrained syntactic
        structure mechanism (section \ref{ConstrainedPreviousWork}).
        It was found that genetic programs with
        ADFs have an advantage on some problems, particularly
        when the problem has a high level of regularity in its solution.

    \subsection{ADMs}
    \noindent
    \cite{Spect96} has proposed the use of {\em Automatically Defined Macros} (ADMs).
    An ADM is evaluated in the main program's global environment, unlike an ADF, whose
    evaluation is performed in its local environment.  The idea is to
    simultaneously evolve programs and their control structures. When the
    name of an ADM is called in the main program, a
    macro expansion is performed. ADMs can therefore be used to implement program
    control structures if a block of code is passed as an argument to the ADM.
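The operational difference can be simulated as follows. This is our own illustration, not Spector's implementation: we model macro expansion by delaying evaluation of the argument with a thunk, so that the code block runs in the caller's context and may run zero or several times, which is exactly what a control structure needs:

```python
# Hedged sketch of the ADF/ADM distinction (our own simulation). A macro
# argument is modeled as a thunk (a zero-argument function), so the body
# decides if and how often the passed code block is evaluated.

calls = []

def side_effect():
    calls.append("evaluated")
    return 42

# ADF-style: the argument is evaluated exactly once, before the body runs,
# even if the body never uses it.
def adf_ignore(x):
    return 0

# ADM-style: the argument is a block of code; the macro's body controls
# its evaluation -- here it runs the block twice, like a DO-TWICE construct.
def adm_twice(block):
    return block() + block()

adf_ignore(side_effect())          # side_effect evaluated once
adm_twice(lambda: side_effect())   # side_effect evaluated twice more
```

An ordinary ADF could not implement a construct such as DO-TWICE or a conditional, because by the time its body runs, its arguments have already been evaluated exactly once.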


  %  \subsection{Abstraction in GP}
%        \noindent
%        \subsection{$\lambda$-abstraction}
%        Higher-order functions provide structure abstraction in
%        the program parse trees. The type system protects this structure
%        abstraction and helps GP to find good program structures
%        during program evolution.
%        [Yu and Clack, 1998] shows how higher-order functions can be used to support module creation and implicit recursion in GP.
%
\subsection{Recursion and GP}
 \noindent
    The first efforts to evolve programs that are able to use recursion can be found on page
    473 of~\cite{koza:book} where Koza investigates a problem-specific form of recursion to solve the
    Fibonacci sequence induction problem. Later,~\cite{Brave95} studied recursion use in GP in greater detail.
    There are currently two ways of providing
    recursion support in GP:
    \begin{enumerate}
    \item {\em Explicit recursion:} Giving names to the programs in the GP system so that they may refer to themselves
    \item {\em Implicit recursion:} Pre-defining recursion operators in the language of the system
    \end{enumerate}
    \subsubsection{Recursion by naming}\label{PGPRecursion}
     \noindent
     This approach, used in \cite{koza:book,clack97performance,Brave95}, involves giving a name
     to the program that is being grown. The name is included in the language of the GP system, but only
     for that program, so that the program may call itself as if the name were a built-in function
     at any point within the parse tree.
 %    When the limit is reached, the interpreter aborts with a flag that is
%     used in the computation of the fitness value of the program.
     This requires
     additional overhead: each program now uses a slightly different language from the other programs, and the names have
     to be stored and managed. Another problem with the
     recursion-by-naming scheme is that special mechanisms must be put in place
     to handle the cases where parts of programs that refer to themselves are used to construct
     a new program (with a different name) in a crossover operation.
     Finally, the number of recursive calls must be limited
     to avoid infinite loops, so recursive calls have to be counted while the program is
     running and a system of flags has to be implemented.
     In contrast, the recursion scheme of \ABGP{} does not need to provide programs with the ability
     to call themselves in order to support recursion. As a result of this and of the termination property of
     \SF{}, it has no need
     to check for infinite loops.
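The bookkeeping that recursion by naming requires can be sketched as follows. This is our own illustration (the names `run_named_program` and `MAX_CALLS` are hypothetical): the interpreter counts recursive calls and raises a flag when the limit is exceeded, and the fitness function can then penalize the program:

```python
# Hedged sketch (our own illustration) of the call counting and flagging
# that a recursion-by-naming scheme requires.

MAX_CALLS = 100

class CallLimitExceeded(Exception):
    pass

def run_named_program(n):
    """Interpret a program that has been given the name 'prog' and may
    refer to itself by that name, under a recursive-call limit."""
    state = {"calls": 0}

    def prog(k):                        # the genotype, named 'prog'
        state["calls"] += 1
        if state["calls"] > MAX_CALLS:
            raise CallLimitExceeded     # flag used when computing fitness
        if k < 2:
            return k
        return prog(k - 1) + prog(k - 2)   # self-reference by name

    try:
        return prog(n), False           # (result, limit-exceeded flag)
    except CallLimitExceeded:
        return None, True               # result unusable, flag raised

# fib(8) = 21 fits within the limit; fib(20) exceeds it and is flagged.
print(run_named_program(8))
```
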
     %Appendix \ref{ListDeriv2} gives an idea of how
%     such a system would naturally support recursion.
     \subsubsection{Recursion by specialized operator}
     \noindent
     In \cite{yu:1998:rlaGP98}, another approach for recursion in PolyGP is proposed.
     It is called {\em implicit recursion}, and it exploits PolyGP's support for higher-order functions.
     The idea is to implement the use of recursion using three pre-defined higher-order functions
     instead of explicit recursive calls. The three functions all work with lists, which are supported
     by PolyGP. They are:
     \begin{enumerate}
     \item {\em MAP:} applies the first argument, a monadic function (a function
     which takes a single argument), to each element of the second argument (a list) to produce a list of the results.
     In this work, we show how this can be done (using \SF{}) without defining an
     external function.
     %We note that the method described in Appendix \ref{ListDeriv1} works for other structures such as trees.
     \item {\em FOLD:} places the first argument, a function which takes two arguments, between each of the
    items in the list.
    \item {\em FILTER:} applies the first argument, a predicate operator (a
    function which returns True or False), to each element in
    the second argument (a list) to produce a list containing
    items which satisfy the predicate operator.
    \end{enumerate}
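The three operators can be sketched as ordinary higher-order functions. The definitions below are our own (PolyGP defines them within its typed functional language); the point is that each hides a fixed recursion pattern, so evolved programs never need to call themselves explicitly:

```python
# Hedged sketch (definitions ours) of the three pre-defined higher-order
# functions used for implicit recursion. Each one encapsulates a
# recursion pattern over lists.

def gp_map(f, xs):
    # MAP f [x1, x2, ...] = [f x1, f x2, ...]
    if not xs:
        return []
    return [f(xs[0])] + gp_map(f, xs[1:])

def gp_fold(f, acc, xs):
    # FOLD f z [x1, x2, ...] = f x1 (f x2 (... z)), i.e. f is placed
    # between the items of the list, with z as the final value.
    if not xs:
        return acc
    return f(xs[0], gp_fold(f, acc, xs[1:]))

def gp_filter(p, xs):
    # FILTER keeps the items satisfying the predicate p.
    if not xs:
        return []
    rest = gp_filter(p, xs[1:])
    return [xs[0]] + rest if p(xs[0]) else rest

# Example: sum of the squares of the even numbers in [1..6], written
# without any explicit recursive call.
result = gp_fold(lambda a, b: a + b, 0,
                 gp_map(lambda x: x * x,
                        gp_filter(lambda x: x % 2 == 0, [1, 2, 3, 4, 5, 6])))
```

Because the recursion lives inside the three operators, a genotype built from them always terminates on finite lists, without the call counting and flags that explicit recursion requires.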
    \noindent
    Increasing the number of pre-defined functions that are manipulated by the
    GP system increases its search space and bloats its language with programming
    constructs that are not directly related to the problem that is being solved.

    \MySection{Conclusion}
    In this chapter, other GP systems based on
    typed representation schemes were presented. In the next chapter, we will describe why we found the typed approach appealing, and we will list the features of current typed representation schemes for GP that we tried to improve. Then, in section \ref{SFGP}, we will describe our answer to the representation issues
    highlighted there and discuss the impact that our solution has on the search space of the system. In particular, we will show that our representation scheme provides an obvious, mathematically natural way to partition the GP search space, one that takes into account the structures of the genotypes as well as the components from which they are built.
