\newcommand{\hasstp}{\gg}
\newcommand{\caseofpi}[3]{\lim{case<#3>}\ #1\ \lim{of \{}\ #2\ \lim{\}}}
\newcommand{\stp}[2]{\langle #1, #2 \rangle}
\newcommand{\stparr}{\shortrightarrow}
\newcommand{\stpsub}{\mapsto_\pi}

\newcommand{\criticalterm}{\gg}

\section{Trimming the search space}
\label{sec:searchspace}

 For each goal, several different proof steps are applicable, and each proof step may be applicable in different parts of a term. As an example, for the goal
 \li{(x + y)} \li{+ (u + v)} \li{= (x + u)} \li{+ (y + v)}, there are seventeen different possibilities: goal factoring,
  induction on each of the four variables, generalization of each of the six subterms,
  as well as case analysis on each of the six subterms.
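To make the count concrete, the seventeen steps can be enumerated mechanically; the term type below is a toy sketch of our own, not Zeno's internal representation:

```haskell
import Data.List (nub)

-- A toy term type, assumed purely for illustration (not Zeno's AST).
data Term = Var String | Add Term Term deriving (Eq, Show)

vars :: Term -> [String]
vars (Var x)   = [x]
vars (Add a b) = vars a ++ vars b

-- Compound (non-variable) sub-terms: the candidates for [gen] and [cse].
compounds :: Term -> [Term]
compounds (Var _)     = []
compounds t@(Add a b) = t : compounds a ++ compounds b

goalLhs, goalRhs :: Term
goalLhs = Add (Add (Var "x") (Var "y")) (Add (Var "u") (Var "v"))
goalRhs = Add (Add (Var "x") (Var "u")) (Add (Var "y") (Var "v"))

-- One factoring step, one induction per variable, plus one
-- generalization and one case analysis per compound sub-term.
stepCount :: Int
stepCount = 1 + length vs + 2 * length subs
  where vs   = nub (vars goalLhs ++ vars goalRhs)
        subs = nub (compounds goalLhs ++ compounds goalRhs)
-- stepCount evaluates to 17
```

Here \li{stepCount} is $1 + 4 + 2 \times 6 = 17$: one factoring step, induction on each of the four variables, and a generalization and a case analysis for each of the six compound sub-terms.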
  
In general, Zeno will try any applicable step, backtracking if a step does not lead to a proof. We have, however, developed several heuristics for reducing the number of applicable steps.


\subsection{Deterministic steps - \lim{[eql]}, \lim{[con]}, \lim{[def]}}

These three steps have the highest priority; Zeno applies them whenever it can. Whenever Zeno generates a new goal, it applies any function definitions it can (\li{[def]}), then checks whether the two sides of the consequent are syntactically equal (\li{[eql]}), or whether the antecedents contain any contradictions (\li{[con]}). If neither is the case, the other proof steps are tried.
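As a rough sketch of this eager behaviour, consider a toy Peano-arithmetic term language (our own simplification, not Zeno's representation), where \li{reduce} plays the role of the eager \li{[def]} steps and \li{eql} of the syntactic-equality check:

```haskell
-- A toy term language with Peano addition, assumed for illustration.
data Term = Z | S Term | V String | Add Term Term
  deriving (Eq, Show)

-- Eager [def]: unfold the definition of addition wherever its scrutinee
-- reduces to a constructor; stop where it is stuck on a variable.
reduce :: Term -> Term
reduce (S a)     = S (reduce a)
reduce (Add a b) = case reduce a of
  Z    -> reduce b
  S a' -> S (reduce (Add a' b))
  a'   -> Add a' (reduce b)      -- stuck: scrutinee is not a constructor
reduce t         = t

-- [eql]: after [def], check the two sides for syntactic equality.
eql :: Term -> Term -> Bool
eql l r = reduce l == reduce r
```

Here \li{eql (Add (S Z) (S Z)) (S (S Z))} succeeds outright, while \li{eql (Add (V "x") Z) (V "x")} fails because the left side is stuck on the variable \li{x}, so a non-deterministic step such as induction is required.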

This eager application of function definitions could cause an infinite loop when presented with a function that does not terminate for a certain input, thus restricting Zeno to operating only over total functions.


\subsection{Controlling \li{[ind]}, \li{[cse]} and \li{[gen]} with critical terms}

The \emph{critical term} within our goal guides the decision of {\em which step} to apply, and {\em how} to apply it. It makes the choice so as to ensure function definitions will be applicable later on.

\begin{figure}
\[
\inferrule{
}{
\lim{x} \criticalterm \lim{x}
}
\]
\\
\[
\inferrule{
\tau_1 \leadsto^* \caseof{\tau_2}{\vect{A}} \\
\tau_2 \in \tau_1 \\
\tau_2 \criticalterm \tau_3
}{
\tau_1 \criticalterm \tau_3
}
\]
\\
\[
\inferrule{
\tau_1 \leadsto^* \caseof{\tau_2}{\vect{A}} \\
\tau_2 \notin \tau_1
}{
\tau_1 \criticalterm \tau_2
}
\]
\caption{Critical term of a term}
\label{fig:criticalterm}
\end{figure}

A critical term $\tau_c$ of a term $\tau$ is given by the relationship $\tau \criticalterm \tau_c$ defined in \fig{criticalterm}, where $\in$ stands for the sub-term relationship, e.g., $\lim{g x} \in \lim{f (g x)}$. A critical term is one we would have to replace with a constructor term in order to apply a function definition.\footnote{Note that, as for the \li{[def]} steps, the calculation of critical terms may not terminate when we consider non-terminating functions.} Therefore, if the critical term is a variable, Zeno tries performing induction on it, and if it is any other term, Zeno tries performing case-splitting on it. Using critical terms to direct these steps means that the proof is always moving towards being able to apply a function definition later on.

In \fig{criticaltermex} we give two example derivations of critical terms. The first example, \li{(x + y)} \li{+ z}, has the same critical term as \li{x + y}, since \li{x + y} is its case-analysed sub-term. The critical term of \li{x + y} is \li{x}, since \li{x} is a case-analysed variable. The second example, \li{max x y}, has the critical term \li{x <= y}, since evaluating \li{max x y} will need to perform case-analysis on \li{x <= y}, and \li{x <= y} is not a sub-term of \li{max x y}.
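The rules of \fig{criticalterm} can be read as a recursive search, sketched below over a toy term type; the \li{scrutinee} table is a hypothetical stand-in for symbolically executing a definition up to its first \li{case} expression:

```haskell
-- Toy term type, assumed for illustration only.
data Term = Var String | Fun String [Term] deriving (Eq, Show)

-- Hypothetical scrutinee table: evaluating `f ts` case-analyses this term.
scrutinee :: String -> [Term] -> Maybe Term
scrutinee "+"   [a, _] = Just a                  -- x + y cases on x
scrutinee "max" [a, b] = Just (Fun "<=" [a, b])  -- max x y cases on x <= y
scrutinee _     _      = Nothing                 -- e.g. a constructor term

subterm :: Term -> Term -> Bool
subterm s t@(Fun _ ts) = s == t || any (subterm s) ts
subterm s t            = s == t

-- The three rules: a variable is its own critical term; a sub-term
-- scrutinee is searched recursively; a non-sub-term scrutinee is
-- itself the critical term.
critical :: Term -> Maybe Term
critical v@(Var _)    = Just v
critical t@(Fun f ts) = do
  s <- scrutinee f ts
  if s `subterm` t then critical s else Just s
```

On \li{(x + y) + z} the search descends through both case-analysed sub-terms and yields \li{x}, while on \li{max x y} it stops at the non-sub-term scrutinee \li{x <= y}, matching the derivations of \fig{criticaltermex}.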

\begin{figure}
\[
\inferrule{
\inferrule{
\inferrule{
}{
\lim{x} \criticalterm \lim{x}
}
\\\\
\lim{x} \in \lim{x + y} \\
\lim{x + y} \leadsto \caseof{\lim{x}}{\vect{A}}
}{
\lim{x + y} \criticalterm \lim{x}
}
\\\\
\lim{x + y} \in \lim{(x + y)}\ \lim{+ z} \\
\lim{(x + y)}\ \lim{+ z} \leadsto \caseof{\lim{(x + y)}}{\vect{A}}
}{
\lim{(x + y)}\ \lim{+ z} \criticalterm \lim{x}
}
\]
\\
\[
\inferrule{
\lim{x <= y} \notin \lim{max x y} \\
\lim{max x y} \leadsto \caseof{\lim{(x <= y)}}{\vect{A}}
}{
\lim{max x y} \criticalterm \lim{x <= y}
}
\]
\caption{Finding the critical terms of \li{(x + y)} \li{+ z} and \li{max x y}}
\label{fig:criticaltermex}
\end{figure}

Every term has at most one critical term. Terms without a critical term are those which do not have, at the outermost level, a function whose definition could be applied, e.g., a constructor term like \li{S x}, or those which can be fully evaluated to a new term. The former we can ignore, whereas the latter should not occur, since we have already performed any available \li{[def]} step.

The critical terms of $\tau_1 = \tau_2$ are the union of the critical terms of $\tau_1$ and $\tau_2$. The critical terms of the
 goal $\equality_1\ \lim{:-}\ \equality_2, ..., \equality_n$ are the union of the critical terms of $\equality_1$ ... $\equality_n$. For example, the critical terms of \li{max x y = x :- x <= y = False} consist of those from \li{max x y}, from \li{x}, from \li{x <= y} and from \li{False}; this gives us two critical terms in total, namely \li{x} and \li{x <= y}.


\subsection{Usable critical terms}

When we generate a critical term (see \fig{criticalterm}) we call the path taken by this symbolic execution the \emph{critical path}. A critical term is only \emph{usable} if its critical path has not already been evaluated down the current branch of the proof tree; when we use a critical term, we store its critical path down that branch. This technique controls the unrolling of function definitions and gives Zeno a small, finite search space to explore for each proof. Unfortunately the exact details of this method are beyond the scope of this paper, but we hope to present them after further research.

If a goal has any usable critical terms then we can apply an induction step or a case-split step: induction if the critical term is a variable, and a case-split if it is not. This method was chosen because it drives Zeno towards the application of function definitions.

\begin{description}
% \subsubsection*
\item[Controlling Induction]
% \li{[ind x =>} $\tau$\li{]}]

Zeno tries induction only on a variable \li{x} appearing among the usable critical terms of the current goal. For example, in the first line of the proof of \li{x + 0 = x} in \fig{proofaddzero}, the variable \li{x} is the critical term of both sides. Therefore, the only step Zeno can take at this point is induction on \li{x}; this step allows the definition of \li{(+)} to be applied next.

% \subsubsection*
\item[Controlling Case-Splitting]
%{Applying \li{[cse} $\tau$ \li{=>} $\tau'$\li{]}]

Zeno tries a case-split only on a usable critical term that is not a variable.
%, that is to say the application of one term to another, we should try and 
% apply a case-split on it.
 For example, in the first line of \fig{proofmaxzero}, the critical term of \li{max x y} is \li{x <= y}, and therefore Zeno tries a case-split on \li{x <= y}. After the case split, the function definition of \li{max} is applicable down either branch.


\item[Controlling Generalization]
%{Applying \li{[gen} $\tau$ \li{ => x]}]

Zeno tries generalisation only on a term containing a variable which is a critical term of the goal. In our generalisation example in \fig{revrevgenex} the goal \li{rev (rev xs' ++ (x : []))} \li{= x : rev (rev xs')} has the critical term \li{xs'}. The term \li{rev xs'} contains this critical term so we can apply generalisation to it.

\end{description}
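The three controls above amount to a small dispatcher, sketched here under the same assumptions (the toy term type, the \li{Step} type and all names are ours, purely illustrative):

```haskell
-- Toy term and step types, assumed for illustration only.
data Term = Var String | Fun String [Term] deriving (Eq, Show)
data Step = Induction String | CaseSplit Term deriving (Eq, Show)

contains :: Term -> String -> Bool
contains (Var y)    x = x == y
contains (Fun _ ts) x = any (`contains` x) ts

-- Dispatch on a usable critical term: induction on a variable,
-- a case-split on anything else.
chooseStep :: Term -> Step
chooseStep (Var x) = Induction x
chooseStep t       = CaseSplit t

-- A sub-term is a generalisation candidate when it contains a
-- variable that is among the goal's critical terms.
genCandidate :: [Term] -> Term -> Bool
genCandidate criticals t = or [ t `contains` x | Var x <- criticals ]
```

For instance, given the critical term \li{xs'}, the sub-term \li{rev xs'} of the \fig{revrevgenex} goal is accepted by \li{genCandidate}, mirroring the generalisation step taken there.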


\subsection{Dynamic checking of sub-properties}

Some proof steps (\li{[gen]}, \li{[fac]}, \li{[hyp]}) yield a new goal which may be false without the original goal being false, i.e., the new goal implies the old but not conversely. Every time Zeno takes such a proof step it quickly checks for counter-examples to the new property. This stops Zeno going down ``false paths'' in a proof search, a technique taken from the ACL2s system. Zeno also applies this dynamic checking to the original goal in order to generate disproofs. We generate these test values using our critical term technique, making our approach similar to SmallCheck\cite{smallcheck} in that we use symbolic execution to generate small values to test with; but while SmallCheck takes a depth of recursion to search to, we use our usability technique to yield a small finite search space of values. This differs from the approach of ACL2s, which generates a constant number of completely random values, much like the tool QuickCheck\cite{quickcheck}.
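A minimal sketch of such a counter-example check, assuming Peano naturals enumerated up to a fixed small depth (a simple stand-in for the usability-bounded enumeration Zeno actually performs):

```haskell
import Data.Maybe (listToMaybe)

data Nat = Z | S Nat deriving (Eq, Show)

add :: Nat -> Nat -> Nat
add Z     b = b
add (S a) b = S (add a b)

-- Small test values up to a fixed depth; Zeno instead derives its finite
-- search space from usable critical paths, but the idea is the same.
smallNats :: Int -> [Nat]
smallNats d = take d (iterate S Z)

-- First counter-example to a binary property over small values, if any.
counterExample :: (Nat -> Nat -> Bool) -> Maybe (Nat, Nat)
counterExample p =
  listToMaybe [ (x, y) | x <- smallNats 4, y <- smallNats 4, not (p x y) ]
```

A false generalisation such as \li{x + y = y} is rejected immediately, since \li{counterExample} finds the pair \li{(S Z, Z)}, while a true property such as commutativity of \li{add} survives the check.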


\subsection{Only apply \li{[hcn]} if we can follow with \li{[gen]}}

One interesting heuristic we have discovered is that moving an inductive hypothesis into the goal conditions with a \li{[hcn]} step only seems helpful if this introduces a common sub-term which we can then generalise. In our example in \fig{verifyisort} we introduced the common sub-term \li{isort ys}, which we could then generalise. This heuristic greatly restricts the applicability of \li{[hcn]}, while also helping us ground any universally quantified hypothesis variables within this common sub-term.


%\subsection{Why separate inductive hypotheses?}

%It is important to discuss why we have chosen to keep inductive hypotheses in the background rather than always add them to the list of goal conditions. One reason for this is that Zeno does not support deep universal quantifiers, so we would not be able to keep our universally quantified variables in our hypothesis. Another is that goal conditions are used to apply function definitions (\li{[def]}), infer tautologous cases (\li{[icn]}), and prove our goal through inconsistency (\li{[con]}), but they are not used as general rewrite rules in the same way that inductive hypotheses are. Finally we use the heuristic that once an inductive hypothesis is used it can be removed, which is not the case for goal conditions.


\begin{comment}

\subsection{Strict pairs - controlling \li{[cse]} and \li{[ind]}}

As we said earlier, a \emph{strict pair} is composed of a \emph{strict term} and a \emph{strictness path}. The \emph{strict term} within our goal guides the decision of {\em which step} to apply, and {\em how} to apply it. It will be chosen so as to ensure function definitions will be applicable later on. The \emph{strictness path} is used to restrict when a \emph{strict term} can be used so that function definitions are not applied unnecessarily, reducing our search space.

To support strict pairs we must augment our {\bf HC} grammar to {\bf HC}$^\pi$ using the changes in \fig{hcpi}, where $\pi$ is a strictness path, \li{p} is a strict pair and \li{c} is a unique identifier for \li{case} statements. To convert code in {\bf HC} to {\bf HC}$^\pi$ you must label each \li{case} statement with a unique \li{c} value, and convert all variables \li{x} to variables with no stored strictness paths i.e., $\lim{x} \mapsto \lim{x[]}$, for simplicity though we will use \li{x} for \li{x[]}.

\begin{figure}[bhtp]
\small
\begin{align*}
& \lim{p} & ::= & \quad \epsilon \quad | \quad \lim{c} \stparr \lim{p} \\
& \pi & ::= & \quad \langle \tau, \lim{p} \rangle \\
\\
& \tau & ::= & \quad \lim{x<p}^* \lim{>} \quad | \quad \lim{f} \quad | \quad \lim{K} \quad | \quad \lim{(}\tau_1\ \tau_2\lim{)} \\
& Expr & ::= & \quad \tau \quad | \quad \lim{case<c>}\ \tau\ \lim{of \{}\ Alt\ (\lim{;}\ Alt)^*\ \lim{\}} \\
\end{align*}
\caption{Augmenting \textbf{HC} to \textbf{HC}$^\pi$}
\label{fig:hcpi}
\end{figure}

A strict pair $\pi$ of a term $\tau$ is given by the relationship $\tau \hasstp \pi$ defined in \fig{strictpair}, where $\in$ stands for the sub-term relationship, i.e. $\lim{g x} \in \lim{f (g x)}$. Its strict term is the one we would have to replace with a constructor term in order to apply a function definition.\footnote{Note that as for the \li{[def]} steps, the calculation of strict pairs may not terminate when we consider non-terminating functions.} Therefore, if the strict term is a variable, Zeno tries performing induction on it, and if it is another term, Zeno tries performing case-splitting on it. The usage of critical terms to direct these steps means that the proof is moving towards being able to apply a function definition later on. The strictness path generated represents the path this symbolic execution took in order to generate the strict term.

In \fig{strictpairex} we give two examples of the derivations of strict pairs, whereby we have given the single case-of in every function the same identifier (\li{c} value) as the name of the function itself. The first example is \li{(x + y)} \li{+ z} which has the same strict term as \li{x + y}, since it is a case-analysed sub-term, but with \li{+} added to the path. The strict pair of \li{x + y} is $\langle \lim{x}, \lim{+} \stparr \epsilon \rangle$, since this is a case-analysed variable. The second example, \li{max x y}, has the strict pair $\langle \lim{x <= y}, \lim{max} \stparr \epsilon \rangle$, since evaluation of \li{max x y} will need to perform case-analysis on \li{x <= y}, and \li{x <= y} is not a sub-term of \li{max x y}.

The strict term of a strict pair can be used by Zeno if the strictness path is not a supersequence of, or equal to, any strictness path within the strict term. Terms contain strictness paths in that terms contain variables and variables in {\bf HC}$^\pi$ keep a list of strictness paths. If a path is a supersequence of an existing one then we have already explored this pattern in our proof and do not need to explore it again. We do not consider strict pairs that have empty strictness paths as these were not generated through a function call so will not allow a function definition application if inducted upon.

\begin{figure}
\[
\inferrule{
}{
\lim{x} \hasstp \stp{\lim{x}}{\epsilon}
}
\]
\\
\[
\inferrule{
\tau_1 \leadsto^* \caseofpi{\tau_2}{\vect{A}}{c} \\
\tau_2 \in \tau_1 \\
\tau_2 \hasstp \stp{\tau_3}{\lim{p}}
}{
\tau_1 \hasstp \stp{\tau_3}{\lim{c} \stparr \lim{p}}
}
\]
\\
\[
\inferrule{
\tau_1 \leadsto^* \caseofpi{\tau_2}{\vect{A}}{c} \\
\tau_2 \notin \tau_1
}{
\tau_1 \hasstp \langle \tau_2, \lim{c} \stparr \epsilon \rangle
}
\]
\caption{Strict pair of a term}
\label{fig:strictpair}
\end{figure}

\begin{figure}
\[
\inferrule{
\inferrule{
\inferrule{
}{
\lim{x} \hasstp \stp{\lim{x}}{\epsilon}
}
\\\\
\lim{x} \in \lim{x + y} \\
\lim{x + y} \leadsto \caseofpi{\lim{x}}{\vect{A}}{+}
}{
\lim{x + y} \hasstp \stp{\lim{x}}{\lim{+} \stparr \epsilon}
}
\\\\
\lim{x + y} \in \lim{(x + y)}\ \lim{+ z} \\
\lim{(x + y)}\ \lim{+ z} \leadsto \caseofpi{\lim{(x + y)}}{\vect{A}}{+}
}{
\lim{(x + y)}\ \lim{+ z} \hasstp \stp{\lim{x}}{\lim{+} \stparr \lim{+} \stparr \epsilon}
}
\]
\\
\[
\inferrule{
\lim{x <= y} \notin \lim{max x y} \\
\lim{max x y} \leadsto \caseofpi{\lim{(x <= y)}}{\vect{A}}{max}
}{
\lim{max x y} \hasstp \stp{\lim{x <= y}}{\lim{max} \stparr \epsilon}
}
\]
\caption{Finding the strict pairs of \li{(x + y)} \li{+ z} and \li{max x y}}
\label{fig:strictpairex}
\end{figure}


\subsection{Substitution preserving strictness paths}
\label{sec:stpsub}

Many steps in our proof search perform term substitution (\li{[cse]}, \li{[ind]}, \li{[gen]}, \li{[hyp]}). If this substitution was performed na\"{i}vely it would replace all strictness paths from every variable it substitutes, rendering our method useless. We must augment our definition of substitution to preserve any strictness paths from any term it replaces. This new definition of substitution ($\stpsub$) is given in \fig{stpsub}.

\begin{figure}
\begin{displaymath}
\begin{array}{l l c l}
& \lim{add}^\pi\; : \;  \wp(\lim{p}) \rightarrow \tau \rightarrow \tau &&  \lim{all}^\pi : \tau \rightarrow \wp(\lim{p}) \\
\\
& \lim{add}^\pi\quad \vect{\lim{p}}_1\;\ \lim{x<}\vect{\lim{p}}_2\lim{>} \quad & = & \quad \lim{x<}\vect{\lim{p}}_1 \cup \vect{\lim{p}}_2\lim{>} \\
& \lim{add}^\pi\quad \vect{\lim{p}}\quad \lim{K} \quad & = & \quad \lim{K} \\
& \lim{add}^\pi\quad \vect{\lim{p}}\quad \lim{f} \quad & = & \quad \lim{f} \\
& \lim{add}^\pi\quad \vect{\lim{p}}\quad (\tau_1 \cdot \tau_2) \quad & = & \quad (\lim{add}^\pi\ \vect{\lim{p}}\ \tau_1) \cdot (\lim{add}^\pi\ \vect{\lim{p}}\ \tau_2) \\
\\
& \lim{all}^\pi\quad \lim{x<}\vect{\lim{p}}\lim{>} \quad & = & \quad \vect{\lim{p}} \\
& \lim{all}^\pi\quad \lim{K} \quad & = & \quad \emptyset \\
& \lim{all}^\pi\quad \lim{f} \quad & = & \quad \emptyset \\
& \lim{all}^\pi\quad (\tau_1 \cdot \tau_2) \quad & = & \quad (\lim{all}^\pi\ \tau_1) \cup (\lim{all}^\pi\ \tau_2)\\
\\
& \tau_1[\tau_2 \stpsub \tau_3] \quad & = & \quad \lim{add}^\pi\ (\lim{all}^\pi\ \tau_1)\ \tau_3 \\
& & & \quad \text{where}\ \tau_1 = \tau_2\ \text{ignoring variable strictness paths} \\
& (\tau_1 \cdot \tau_2) [\tau_3 \stpsub \tau_4] \quad & = & \quad \tau_1[\tau_3 \stpsub \tau_4] \cdot \tau_2[\tau_3 \stpsub \tau_4] \\
& \tau_1 [\tau_2 \stpsub \tau_3] \quad & = & \quad \tau_1
\end{array}
\end{displaymath}
\caption{Substitution preserving strictness paths ($\stpsub$)}
\label{fig:stpsub}
\end{figure}

\subsection{Example proof}

\begin{figure}[h]
\begin{lstlisting}[basicstyle=\ttfamily\footnotesize]
data List = [] | (:) T List

(++) :: List -> List -> List
xs ++ ys = case<++> xs of
  { [] -> ys ;
    x : xs' -> x : (xs' ++ ys) }

rev :: List -> List
rev xs = case<rev> xs of 
  { [] -> [] ; 
    x : xs' -> rev xs' ++ (x : []) }
\end{lstlisting}
\caption{Definition of list append and reverse in {\bf HC}$^\pi$}
\label{fig:revappdef}
\end{figure}

To demonstrate the usefulness of strict pairs we will use the proof that reversing a list twice returns the original list i.e., \li{rev (rev as)} \li{ = as}. For this we will use the definition of list append and reverse from \fig{revappdef}, using the Haskell infix notation of \li{:} for list cons, \li{++} for list append, along with a monomorphic list type of some arbitrary \li{T}. This proof is listed step by step in \fig{revrevprf} but will be explained here. We have used \li{p1} and \li{p2} to represent the strictness paths $\lim{rev} \stparr \lim{rev} \stparr \epsilon$ and $\lim{rev} \stparr \lim{++} \stparr \epsilon$ respectively. For simplicity we have ignored those branches of the proof for the inductive base cases i.e., the empty list, as these can be solved trivially.

Given the goal property \li{rev (rev as)} \li{= as} we get the strict pairs $\langle \lim{as}, \lim{rev} \stparr \lim{rev} \stparr \epsilon \rangle$ and $\langle \lim{as}, \epsilon \rangle$ from \li{rev (rev as)} and \li{as} respectively. $\langle \lim{as}, \epsilon \rangle$ can be ignored as it has an empty path so our first step is to perform induction on \li{as}, where every new variable created has the path $\lim{rev} \stparr \lim{rev} \stparr \epsilon$ (\li{p1}) added. After applying function definitions in step 2 we can recheck for strict pairs, getting $\langle \lim{bs<p1>}, \lim{rev} \stparr \lim{++} \stparr \lim{rev} \stparr \epsilon \rangle$, but this is not usable, as $\lim{rev} \stparr \lim{++} \stparr \lim{rev} \stparr \epsilon$ is a supersequence of \li{p1}, so we cannot apply induction or case splitting here. In fact, the \emph{only} applicable step at this point is to use our induction hypothesis.

After step 3 we still have no usable strict pairs, but we do have a common sub-term \li{rev bs}, so we can generalise this to some new \li{cs}. Notice that we have used our strictness path preserving substitution from Section \ref{sec:stpsub} so our new \li{cs} gains the strictness path \li{p1}. Now when we look for strict pairs we get $\langle \lim{cs[p1]}, \lim{rev} \stparr \lim{++} \stparr \epsilon \rangle$ and as $\lim{rev} \stparr \lim{++} \stparr \epsilon$ is not a supersequence of \li{p1} this is a usable strict pair. Now after doing induction on \li{cs} and applying the definition of \li{rev} we have no more strict pairs, but can apply the inductive hypothesis and function definitions to complete the proof.

This proof is a particularly good example as you can see how our strict pair heuristic forces us to use the exact steps that we need, no useless steps are taken at any point.


\begin{figure}[h]
\begin{lstlisting}[basicstyle=\ttfamily\scriptsize]
[goal] rev (rev as) = as
Has strict pair <as, p1>

[ind as => b<p1> : bs<p1>] 
  rev (rev (b<p1> : bs<p1>)) 
    = b<p1> : bs<p1>
  with rev (rev bs) = bs

[def] 
  rev (rev bs<p1> ++ (b<p1> : []))
    = b<p1> : bs<p1> 
No usable strict pairs!
    
[hyp rev (rev bs) = bs]
  rev (rev bs<p1> ++ (b<p1> : [])) 
    = b<p1> : rev (rev bs<p1>)
No usable strict pairs!
    
[gen rev bs => cs]
  rev (cs<p1> ++ (b<p1> : [])) 
    = b<p1> : rev cs<p1>
Has strict pair <cs, p2>
    
[ind cs => d[p2] : ds[p2]] 
  rev ((d<p1, p2> : ds<p1, p2>) ++ (b<p1> : []))
    = b<p1> : rev (d<p1, p2> : ds<p1, p2>)
  with forall b . rev (ds ++ (b : [])) = b : (rev ds)
  
[def]
  rev (ds<p1, p2> ++ (b<p1> : [])) ++ (d<p1, p2> : [])
    = b<p1> : (rev ds<p1, p2> ++ (d<p1, p2> : []))
No usable strict pairs!
    
[hyp rev (ds ++ (b : [])) = b : (rev ds)]
  (b<p1> : rev ds<p1, p2>) ++ (d<p1, p2> : [])
    = b<p1> : (rev ds<p1, p2> ++ (d<p1, p2> : []))  

[def] then [eql] to finish.

Proven: rev (rev as) = as
        rev (cs ++ (b : [])) = b : rev cs
\end{lstlisting}
\caption{Proving \li{rev (rev as)} \li{= as}}
\label{fig:revrevprf}
\end{figure}

\end{comment}

\begin{comment}

 It consists of three stages:

  In stage one, it tries ``simpler'', ``deterministic'' steps. Such simple steps are contradiction,   equality, and applying function definitions. These steps either terminate the proof immediately, or in the case of applying function definitions, they allow the program to ``guide'' the proof.
  
  In stage two, after trying these simpler steps, our heuristic attempts factoring (which replaces a larger goal by several simpler ones), or application of an induction hypothesis. Again, these two steps move the proof forwards.
   
    In stage three,  the heuristic tries the more ``complex'',  ``non-deterministic'' steps, i.e,  induction,   generalization, and case-splitting. In order to determine which of these to apply, and in which part of the goal, it makes use of the  \emph{critical term}s. A critical term is either a variable which appears in the goal, or a term which is not a sub-term of the goal, but whose value will need to be determined in order to calculate  the goal. Thus, the critical term guides the proof steps to make it possible to apply the function definitions in a later step.
    % allows us to trim the search space of our other steps through the analysis of function definitions.

% SD removed the following, ad I am not sure it first the flow, and I am not sure it is true,
% ie easy to find an example of losing a proof this way
% and I do not want to challenge the clever reviewer.
%\sidenote Our heuristic is based on our observation of inductive proofs. We have not aproven that  formal proofs that our method is no less complete with them. That is to say we do not know that we are not losing any proofs by applying them, but we know that we have not yet found a proof lost this way.

\subsection{Stage one - \li{[eql]}, \li{[con]} and \li{[def]}}

These three steps have the highest priority; Zeno applies them whenever it can do so.
 % SD removed the folowing as we said it earlier
 % \li{[eql]} and \li{[con]} are important as they will terminate a proof branch 
 %and stop our search. 
 Whenever Zeno generates a new goal it checks whether the two sides of the consequent are syntactically equal (\li{[eql]}), or whether the antecedents contain any contradictions  (\li{[con]}). If neither is the case, Zeno  applies any function definitions it can to the goal (\li{[def]}).

\sidenote This eager application of function definitions 
% is a very useful heuristic which eliminates a huge amount of our search space.
% SD removed the following, as I thibk it is not true
%It also removes the need for a more complex heuristic to guide this application, 
%such as rippling\cite{rippling}. 
%Unfortunately this behaviour 
could cause an infinite loop when presented with a function that does not terminate for a certain input, thus restricting Zeno to operating only over total functions.


\subsection{Stage two - \li{[fac]}, \li{[hyp]}}   

 
Zeno tries  to apply factoring (\li{[fac]}) whenever possible - no further heuristic is needed. Zeno also tries applying an inductive hypothesis (\li{[hyp} $\equality$\li{]}) whenever  possible, but it applies each induction hypothesis at most once for each of the branches. For  data constructors with more than one recursive argument, such as a binary tree,  we  generate as many inductive hypotheses as recursive arguments, and thus are at liberty where in the proof we use each of these hypotheses.

\sidenote A potential issue with the usage of inductive hypotheses is when not all of their universally quantified variables have been instantiated. This might occur if we were to apply a hypothesis such as \li{True = leq x' y} from left to right, leaving \li{y} free, if we assume \li{x'} to be our inductive variable. Our current technique is rather naively to try every variable from our goal property in turn for these variables so this bears further research.

\subsection{Stage three -  Critical terms and \li{[ind]}, \li{[cse]}, \li{[gen]}}

As we said earlier, the  \emph{critical term} within our goal guides the decision of {\em which step} to apply, and {\em how} to apply it. It makes the choice so as to ensure function definitions will be applicable later on.

\begin{figure}
\[
\inferrule{
}{
\lim{x} \criticalterm \lim{x}
}
\]
\\
\[
\inferrule{
\tau_1 \leadsto^* \caseof{\tau_2}{\vect{A}} \\
\tau_2 \in \tau_1 \\
\tau_2 \criticalterm \tau_3
}{
\tau_1 \criticalterm \tau_3
}
\]
\\
\[
\inferrule{
\tau_1 \leadsto^* \caseof{\tau_2}{\vect{A}} \\
\tau_2 \notin \tau_1
}{
\tau_1 \criticalterm \tau_2
}
\]
\caption{Critical term of a term}
\label{fig:criticalterm}
\end{figure}

A critical term $\tau_c$ of a term $\tau$ is given by the relationship $\tau \criticalterm \tau_c$ defined in \fig{criticalterm}, where $\in$ stands for the sub-term relationship, i.e. $\lim{g x} \in \lim{f (g x)}$. A critical term is one we would have to replace with a constructor term in order to apply a function definition.\footnote{ Note that as for the  \li{[def]}  steps, the calculation of critical terms may not terminate when we consider non-terminating functions.} Therefore, if the critical term is a variable, Zeno tries performing induction on it, and if it is another term, Zeno tries performing case-splitting on it. The usage of critical terms to direct these steps means that the proof is moving  towards being able to apply a function definition later on.

In \fig{criticaltermex} we give two examples of the derivations of critical terms. The first example is \li{add (add x y)} \li{z} which has the same critical term as \li{add x y}, since it is a case-analysed sub-term. The critical term of \li{add x y} is \li{x}, since this is a case-analysed variable. The second example, \li{max x y}, has the critical term \li{leq x y}, since evaluation of \li{max x y} will need to perform case-analysis on \li{leq x y}, and \li{leq x y}  is not a sub-term of \li{max x y}.

\begin{figure}
\[
\inferrule{
\inferrule{
\inferrule{
}{
\lim{x} \criticalterm \lim{x}
}
\\\\
\lim{x} \in \lim{add x y} \\
\lim{add x y} \leadsto \caseof{\lim{x}}{\vect{A}}
}{
\lim{add x y} \criticalterm \lim{x}
}
\\\\
\lim{add x y} \in \lim{add (add x y)}\ \lim{z} \\
\lim{add (add x y)}\ \lim{z} \leadsto \caseof{\lim{(add x y)}}{\vect{A}}
}{
\lim{add (add x y)}\ \lim{z} \criticalterm \lim{x}
}
\]
\\
\[
\inferrule{
\lim{leq x y} \notin \lim{max x y} \\
\lim{max x y} \leadsto \caseof{\lim{(leq x y)}}{\vect{A}}
}{
\lim{max x y} \criticalterm \lim{leq x y}
}
\]
\caption{Finding the critical terms of \li{add (add x y)} \li{z} and \li{max x y}}
\label{fig:criticaltermex}
\end{figure}

Every term has no more than one critical term. Terms without critical terms are those which do not have at the outermost level a function  whose definition could  be applied (such as a constructor term), or those which can be evaluated fully to a new term. The former we can ignore, whereas the latter should not occur in step three, since we have already performed any \li{[def]} step.

The critical terms of $\tau_1 = \tau_2$ is the union of the critical terms of $\tau_1$ and  $\tau_2$. The critical terms of the
 goal  $\equality_1 :- \equality_2, ..., \equality_n$ is the union of the  critical terms of $\equality_1$ ... $\equality_n$. For example the critical terms of \li{max x y = x :- leq x y = False} consists of those from \li{max x y}, from \li{x}, from \li{leq x y} and from \li{False}; this gives us two critical terms in total, i.e., \li{x} and \li{leq x y}.
 

The critical terms determine the next steps as follows.
\begin{description}
% \subsubsection*
\item[Induction]
% \li{[ind x =>} $\tau$\li{]}]

Zeno tries induction on any variable \li{x} appearing among the critical terms of the current goal. For example, in the first line of the proof of \li{add x Zero = x} in \fig{proofaddzero}, variable \li{x} is the critical term of both sides. Therefore, the  only step Zeno can take at this point is induction on \li{x}; this step  allows the application of  the definition of \li{add} as the next step.

% \subsubsection*
\item[Case Analysis]
%{Applying \li{[cse} $\tau$ \li{=>} $\tau'$\li{]}]

Zeno tries a case-split on any  critical term that is not a variable.
%, that is to say the application of one term to another, we should try and 
% apply a case-split on it.
 For example, in the first line of \fig{proofmaxzero}, the critical term of \li{max x y} is \li{leq x y}, and therefore Zeno tries a case-split on \li{leq x y}. After the case split, the function definition of \li{max} is applicable.

% \subsubsection*
\item[Generalization]
%{Applying \li{[gen} $\tau$ \li{ => x]}]

Zeno tries to generalise any term which appears more than once in the goal, and which contains a variable which is a critical term of that goal. For example, the goal \li{add (add x y)} \li{Zero = add x y} has variable   \li{x} as its critical term, and the term  \li{add x y} appears more than once. Therefore, Zeno could apply the step \li{[gen add x y => z]}.

\end{description}

\end{comment}