%
% you should only have one "documentclass" line.  the following lines
% are samples that give various options.  the nofrontmatter option is
% nice because it suppresses the title and signature pages when you want
% to focus only on the main body of the thesis
%
% Friday April 10 2010 Ray Hylock <ray-hylock@uiowa.edu>
% documentclass options:
%   abstractpage            if you want to add an internal abstract (optional)
%   ackpage                 if you would like to add an acknowledgements page (optional)
%   algorithms              if you want a list of algorithms (optional)
%   appendix                if you have an appendix (optional)
%   copyrightpage           if you wish to copyright your thesis (optional)
%   dedicationpage          if you wish to make a dedication (optional)
%   epigraphpage            if you would like to add an epigraph to the beginning of your thesis (optional)
%   examples                if you want a list of examples (this uses the ntheorem package)
%   exampleslemmas          if you want a combined list of examples and lemmas (this uses the ntheorem package) (optional)
%   examplestheorems        if you want a combined list of examples and theorems (this uses the ntheorem package) (optional)
%   exampleslemmastheorems  if you want a combined list of examples, lemmas, and theorems (this uses the ntheorem package) (optional)
%   figures                 if you have any figures (this is required if you have even one figure)
%   lemmas                  if you want a list of lemmas (this uses the ntheorem package) (optional)
%   lemmastheorems          if you want a combined list of lemmas and theorems (this uses the ntheorem package) (optional)
%   nofrontmatter           suppresses the title and signature pages for working on the body
%   tables                  if you have any tables (this is required if you have even one table)
%   theorems                if you want a list of theorems (this uses the ntheorem package) (optional)
%   phd                     if phd student; this will add the doctoral abstract (mandatory for PhD and DMA thesis candidates only)
%

% full options
%\documentclass[phd,abstractpage,copyrightpage,dedicationpage,epigraphpage,ackpage,figures,tables,lemmas,appendix]{uithesis}

% common options
%\documentclass[phd,dedicationpage,ackpage,figures,tables,appendix]{uithesis}

% example
\documentclass[phd,appendix]{uithesis}

%=============================================================================
% User packages
%=============================================================================
\usepackage{bookmark}		% [recommended] for PDF bookmark generation
\usepackage{blindtext} 	% example text generation
\usepackage[english]{babel}
\usepackage{amssymb} 
\usepackage{comment} 
\usepackage{manfnt}
\usepackage{stmaryrd}
\usepackage{proof} 
\usepackage{xcolor} 
\usepackage{amsmath, amsthm}
\usepackage{diagrams}

\newarrowfiller{dasheq} {==}{==}{==}{==}
\newarrow {Mapsto} |--->
\newarrow {Line} -----
\newarrow {Implies} ===={=>}
\newarrow {EImplies} {}{dasheq}{}{dasheq}{=>}
\newarrow {Onto} ----{>>}
\newarrow {Dashto}{}{dash}{}{dash}{>}
\newarrow {Dashtoo}{}{dash}{}{dash}{>>}

\definecolor{light-gray}{gray}{0.86}
\newcommand{\gray}[1]{\colorbox{light-gray}{#1}}
\newcommand{\mgray}[1]{\colorbox{light-gray}{$#1$}}
\newcommand{\ind}[0]{\mathsf{Ind}} 
\newtheorem{proposition}{Proposition}
\newtheorem{definition}{Definition}
\newtheorem{corollary}{Corollary}
\newtheorem{lemma}{Lemma}
\newtheorem{theorem}{Theorem}
  
% My own macros
\newcommand{\frank}[1]{\textcolor{blue}{\textbf{[#1 --Frank]}}}
\newcommand{\m}[2]{ \{\mu_{#1}\}_{#1 \in #2}} 
\newcommand{\M}[3]{\{#1_i \mapsto #2_i\}_{i \in #3}} 
\newcommand{\bm}[4]{
\{(#1_i:#2_i) \mapsto #3_i\}_{i \in #4}} 
\newcommand{\lam}[2]{\lambda #1 . #2}
\newcommand{\fpi}{\textbf{F}^\Pi}
\newcommand{\evals}[2]{#1 \downarrow #2}
\newcommand{\mlstep}[1]{\twoheadrightarrow_{\underline{#1}}}
\newcommand{\lstep}[1]{\to_{\underline{#1}}}
\newcommand{\mstep}[1]{\twoheadrightarrow_{#1}}
\newcommand{\ep}[0]{\epsilon} 
\newcommand{\nil}[0]{\mathsf{nil}} 
\newcommand{\cons}[0]{\mathsf{cons}} 
\newcommand{\vecc}[0]{\mathsf{vec}} 
\newcommand{\suc}[0]{\mathsf{S}} 
\newcommand{\pred}[0]{\mathsf{Pred}} 
\newcommand{\app}[0]{\mathsf{app}} 
\newcommand{\add}[0]{\mathsf{add}} 
\newcommand{\interp}[1]{\llbracket #1 \rrbracket} 
\newcommand{\intern}[1]{\llbracket #1 \rrbracket^{-1}} 
\newcommand{\self}[0]{\mathbf{S}} 
\newcommand{\selfstar}[0]{\mathsf{Selfstar}} 
\newcommand{\systemg}[0]{\mathfrak{G}} 
\newcommand{\fomega}[0]{\mathbf{F}_{\omega}} 
\newcommand{\nat}[0]{\mathsf{Nat}} 
\newcommand{\cc}[0]{\mathbf{CC}} 

%=============================================================================
% prelude
%=============================================================================

\title{Lambda Encodings in Type Theory}
\author{Peng Fu}
\dept{the Department of Computer Science}

% multipleSupervisors=true for two advisors
\setboolean{multipleSupervisors}{false}
\advisor{Associate Professor Aaron Stump}
% for multiple advisors; change <value> to line up the names
%\setboolean{multipleSupervisors}{true}
%\advisor{Advisor 1\\\hspace{<value>mm}Advisor 2...}
%
% edit the names below to have your committee members names appear
% on the signature page.  memberOne should be your advisor.
%
\memberOne{Aaron Stump}
\memberTwo{Cesare Tinelli}
\memberThree{Kasturi R. Varadarajan}
\memberFour{Ted Herman}
\memberFive{Douglas W. Jones}
\submitdate{July 2014}
\copyrightyear{2014}

\Abstract{
  Lambda encodings (such as the Church, Scott, and Parigot encodings) are methods for representing data in lambda calculus.
  The Curry-Howard correspondence relates the formulas
  and proofs of intuitionistic logic to the types and programs of
  typed functional programming languages. Roughly speaking, type theory (intuitionistic type theory) formulates intuitionistic logic in the style of a typed functional programming language.
This dissertation investigates mechanisms to support lambda encodings in type theory. A type
theory such as the Calculus of Constructions ($\cc$) does not directly support inductive data, because the induction principle for inductive data is provably not derivable. Thus inductive data, together with induction principles, are added to $\cc$ as primitives, leading to several nontrivial extensions, e.g. the Calculus of Inductive Constructions. In this dissertation, we explore alternative ways to incorporate inductive data in type theory. We propose adding an abstraction construct to intuitionistic types to support lambda-encoded data, while still being
able to derive the corresponding induction principle. The main benefit of this approach is
that we obtain relatively simple systems, which are easier to analyze and implement. 
  %% This dissertation investigates an abstraction construct called iota-binder, which is used to support lambda encodings in
 %%  type theory. Two different interpretations of the iota-binder
 %%  are presented, resulting in two different formalizations. Both formalizations are proved to
 %%  be logically consistent and type preserving, which are standard properties for the type
 %%  theory. 
 %%  The dissertation first shows an attempt to a more expressive type theory by incorporating
 %%  several meta-level concepts as primitives in type. However, the resulting type system has complicated metatheoretic property and does not support lambda encoded data. Then a type theory based on interpreting iota-binder as a type construct and global positive recursive definitions is described. It has nice metatheoretical property and support Church and Parigot encodings.   On the
 %%  other hand, a type thoery which interprets iota-binder as a set-forming construct does not require global positive recurisve definitions, and has the ability to reason about Scott encodings. 
 %%  Finally, a preliminary implementation based on the view that iota-binder is a set-forming construct is described and potential future improvements are discussed.   
  }

%\dedication{Dedication here (optional)}

%\epigraph{Epigraph here (optional)}

%\acknowledgements{Acknowledgements here (optional)}

\begin{document}

\frontmatter

%=============================================================================
\chapter{Introduction}
%=============================================================================

Inductively defined datatypes (inductive data), together with the \textit{pattern matching} mechanism, are commonly used in theorem proving and functional programming. Most typed functional programming languages and theorem provers (Haskell, OCaml, Agda \cite{Bove:2009}, Coq \cite{Coq}, TRELLYS \cite{kimmell2012}, \cite{casinghino2014}) support them as primitives. Usually the concepts of inductive data and program are kept separate: one can only perform pattern matching on inductive data. In lambda calculus, however, there is no distinction between program and data. For example, the Church numeral $2$ can be used as a higher-order function that takes a function $f$ and a datum $b$ as arguments and applies $f$ to $b$ twice.   %% On the other hand, it is well known that inductive data can be encoded in lambda calculus using Church encoding \cite{Church:1985}, Scott encoding \cite{CHS:72} or Parigot encoding \cite{parigot1988programming}. 

From the programming language design perspective, inductive datatypes and pattern matching increase the complexity of the design, analysis, and implementation of the language. For example, the pattern-matching \textit{case}-expression is considered the most complicated part of the Haskell core language\footnote{\url{http://hackage.haskell.org/trac/ghc/wiki/Commentary/Compiler/CoreSynType}}. %% For theoretical study, including primitive data type will make type safety such as type preservation hard to verify, see VanInwegen's type preservation report \cite{vaninwegen1996machine} for an example. 
Despite this complication, there are two main reasons why language designers choose primitive datatypes over lambda encodings. 
\begin{enumerate}
\item Defining functions by recursion seems more natural than defining functions by iteration. For example, defining a subdata accessor with pattern matching is almost trivial, while it is a challenging programming task in the Church encoding scheme \cite{Church:1985}. 
\item  Primitive datatypes and pattern matching fit well with Hindley-Milner polymorphic type inference (\cite{hindley1969principal}, \cite{milner1978theory}), which is a key component of most statically typed functional languages. With the Scott and Parigot encoding schemes, it is not clear how to directly achieve decidable type inference.
\end{enumerate}

\noindent We counter the first reason with the Scott encoding scheme. It is well known that primitive datatypes and pattern matching can be reduced to Scott-encoded data and recursive definitions in a direct way (\cite{Mogensen94}, \cite{CHS:72}), and subdata accessors can be defined easily with Scott encoding using recursion. For the second reason, we can use a surface language for type inference while using lambda calculus with recursive definitions as the untyped core language. Type inference/checking never interferes with the actual execution of the program; it only affects how the program is written. Once a program is accepted by the type checker, we can translate it to lambda calculus and execute it. So we think that primitive data and pattern matching in a functional language can reasonably be reduced to lambda calculus with Scott encoding, which simplifies the execution model of the language. We implement these ideas in Gottlob (see Chapter \ref{final}), which empirically shows that they are workable. 

\begin{comment}
t is inefficient to retrieve subdata from inductive data with Church encoding. For example, with Church-encoded numerals, it takes linear time to compute the predecessor of a numeral. While with pattern matching, it only takes constant time \cite{Girard:1989}. The inefficiency to retrieve subdata is not an issue for Scott encoding and Parigot encoding, since each of these data contains its own subdata. Fix point combinator is needed to perform computation with Scott numerals and it is not obvious to define terminating recursion without it. This however is not an issue for Parigot encoding, which combines the features of Church and Scott encoding, namely, easy to obtain subdata and can perform terminating recursion without appealing to the fix point combinator. So in theory, efficiency to retrieve subdata should not be an issue for Scott encoding and Parigot encoding. 
\end{comment}
%% So the functionality of inductive data and pattern matching can be emulated entirely with lambda encodings. 

If one wants to design an interactive theorem prover based on intuitionistic type theory \`a la  Martin-L\"of \cite{martin:1984}, then it is desirable to interpret an inhabitant of the type $D \to D$ as a total function on the inductive datatype $D$. This is hard to achieve with Scott encoding: since each Scott-encoded datum contains a piece of its subdata, one would need recursive type definitions to 
type these data and operations. It is well known that every type is inhabited once we admit unrestricted recursive type definitions\footnote{Certain restrictions are possible to retain totality; see \cite{parigot88}, \cite{Raffalli:1994} and \cite{matthes1999extensions}.}. Church encoding may be more suitable for intuitionistic typing, as it is already typable in System \textbf{F}. Besides the efficiency issue we mentioned before, there are three problems that prevent Church encoding from being adopted in interactive theorem provers based on intuitionistic types. 
\begin{enumerate}
\item  One cannot construct a proof of $0 \not = 1$ with Church encoding \cite{Werner:92}.
\item The induction principle is not derivable in extensions of System \textbf{F} such as the Calculus of Constructions ($\cc$) \cite{geuvers94}.
\item Computing a type from data is not possible with Church encoding.
\end{enumerate}

\noindent For the first problem, we propose to change the notion of contradiction, and we show how to prove $0 \not = 1$ with this new notion of contradiction. For the second problem, we propose a new type construct called \textit{self types} to derive the induction principle. We cover these two topics in depth in Chapter \ref{selftype}. The third problem we regard as fundamental to Church encoding in intuitionistic type theory, due to Girard's paradox \cite{Girard:1989}: in order to compute a type from a Church numeral, we would need impredicative polymorphism at the kind level, which is known to be inconsistent. One common practice to avoid this
kind of problem is to adopt an infinite predicative hierarchy, which is beyond the scope of this dissertation.  


\begin{comment}
When unrestricted recursive definition is added, we lose the intuitionistic notion of functional type expresses totality, namely, the type $D \to D$ will means a partial function with domain and codomain $D$ \cite{Winskel:1993}. One can adopt a variety of restrictions and techniques (\cite{parigot1988programming}, \cite{Raffalli:1994}, \cite{matthes1999extensions}) to restore totality, but then we can not get a uniform interpretion of $D \to D$ as an intuitionistic formula. 
From the philosophy perspective, assumming datatype and pattern matching as primitive is
undesirable, in the sense that when we are asked `` what is the number 1 ?'', we have no answer, since its existence is an atomic assumption. But this is not to say this \textit{extrinsic} point of view on data is not working, on the contrary, it works very well, in fact, most parts of mathematics (the notion of sets, groups, etc.) are based on this extrinsic point of veiw. In the context of lambda calculus, the ability to encode inductive datatype within lambda calculus in a sense \textit{reduces} the ontology of inductive datatypes to the ontology of lambda calculus. Assuming the ontology of lambda calculus, now we can answer the question of ``what is the number n?'', it just a lambda term that when applied to term $f$ and term $a$, it got \textit{beta-reduce} to $\underbrace{f ( f ( f...(f}_{n} a)...))$. So we now can explain the meaning of a number $n$ as doing something repeatly $n$ many times. We call this \textit{intrinsic} point of view.

This dissertation investigates an abstraction scheme called \textit{iota-binder}, which enables both Scott and Church encoding in intuitionistic type theory, while maintaining the Curry-Howard 
correspondence. Two formalizations of the type system are presented in order to study iota-binder. The first one is called system $\self$, which veiws iota-binder as a type construct. The types of $\self$ have a meaningful interpretation over $\fomega$, which is a variant of Girard's system $\mathbf{F}$ (second order intuitionistic propositional logic) with limited type level computations. The metatheory for system $\self$ is heavy due to the present of restricted mutually recursive definitions. The difficulties we encounter in designing system $\self$ inspire us to develop system $\systemg$, in which iota-binder is viewed as a set-forming abstraction.  System $\systemg$ is an improvement over $\self$ in the following senses: 

\begin{itemize}
  \item It supports both Church and Scott encoding without requiring any recursive definitions.  Thus $\systemg$ is simpler than $\self$ in terms of formulation and metatheory, but conceptually much richer. 
\item It is built strictly around Curry-Howard correspondence, and corresponds to second order intuitionistic predicate logic \`a la Takeuti \cite{takeuti1975proof}. So every ``type'' in $\systemg$ is a second order fomula. 
  \item It supports operational reasoning about lambda calculus. So both the totality and the termination behavior of a numeral function can be infered with $\systemg$.
    
    \item The concept of polymorphic dependently-typed programming can be expressed within $\systemg$. 
     
\end{itemize}
\noindent The results we obtains in the study of system $\systemg$ suggest that lambda encodings in intuitionistic type theory is achievable and can potentially simplify the 
desgin of the functional programming langauge. Moreover, operational reasoning about lambda calculus and polymorphic dependently-typed programming can be expressed within $\systemg$, which implies that $\systemg$ can be served as the foundation for an interactive theorem prover which supports reasoning about general programs. 
\end{comment}

The dissertation first describes fundamental concepts (Chapter \ref{pre}) such as lambda encodings, abstract reduction systems, and confluence. Then we discuss the confluence problem for the lambda-mu calculus (Chapter \ref{lambdamu}). In Chapter \ref{internalization}, we show a limited way to construct an expressive type theory based on the notion of \textit{internalization}. System $\self$ is presented in Chapter \ref{selftype}, where we introduce the \textit{self type} construct and use it
to derive the induction principle; metatheorems such as consistency and type preservation are proved. In Chapter \ref{comprehension}, System $\systemg$ is presented, which is based on interpreting the iota-binder as set abstraction. Unlike System $\self$, $\systemg$ does not require
recursive definitions to describe the induction principle, which simplifies its metatheory. We show that $\systemg$ is consistent, and we demonstrate some applications and special properties of $\systemg$. Finally, Chapter \ref{final} discusses the design and implemented features of Gottlob, whose logic is an extension of $\systemg$. Future improvements of Gottlob are also discussed.    

\chapter{Preliminaries}
\label{pre}
In this chapter, we first introduce abstract reduction systems. Then we review three lambda
encoding schemes, namely Church, Scott, and Parigot encoding. Finally, we 
discuss the confluence property, a key property of abstract reduction systems, including lambda calculus. 

\section{Abstract Reduction System}

\begin{definition}
 An abstract reduction system $\mathcal{R}$ is a tuple $(\mathcal{A}, \{ \to_{i}\}_{i \in \mathcal{I}})$, where $\mathcal{A}$ is a set and each $\to_i$ is a binary relation (called a reduction) on $\mathcal{A}$, indexed by a finite nonempty set $\mathcal{I}$.   
\end{definition}

In an abstract reduction system $\mathcal{R}$, we write $a \to_i b$ if $a,b \in \mathcal{A}$ satisfy the relation $\to_i$. For convenience, $\to_i$ also denotes the subset of $\mathcal{A}\times \mathcal{A}$ such that $(a,b) \in \to_i$ iff $a \to_i b$. 

\begin{definition}
Given an abstract reduction system $(\mathcal{A}, \{ \to_{i}\}_{i \in \mathcal{I}})$, the reflexive transitive closure of $\to_i$, written $\twoheadrightarrow_i$ or $\stackrel{*}{\to}_i$, is defined by: 
\begin{itemize}
\item $m \twoheadrightarrow_i m$. 
\item $ m \twoheadrightarrow_i n$ if $m \to_i n $.
\item $ m \twoheadrightarrow_i l$ if $m \twoheadrightarrow_i n, n \twoheadrightarrow_i l $.
\end{itemize}
  
\end{definition}

\begin{definition}
Given an abstract reduction system $(\mathcal{A}, \{ \to_{i}\}_{i \in \mathcal{I}})$, the convertibility relation $=_i$ is defined as the equivalence relation generated by $\to_i$:   
\begin{itemize}
\item $ m =_i n$ if $m \twoheadrightarrow_i n $.
\item $ n =_i m$ if $m =_i n $. 
\item $ m =_i l$ if $m =_i n, n =_i l$.
\end{itemize}

\end{definition}

\begin{definition}
 We say $a$ is \textit{reducible} if there is a $b$ such that $a \to_i b$; thus $a$ is in $i$-\textit{normal form} if and only if $a$ is not reducible. We say $b$ is a normal form of $a$ with respect to $\to_i$ if $a \twoheadrightarrow_i b$ and $b$ is not reducible. We say $a$ and $b$ are \textit{joinable} if there is a $c$ such that $a \twoheadrightarrow_i c$ and $b \twoheadrightarrow_i c$. An abstract reduction system is strongly normalizing if there is no infinite
reduction path.
\end{definition}
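For a finite system, the closure, normal-form, and joinability notions above can be computed directly. The following Python sketch is our own illustration (the names \texttt{reachable}, \texttt{normal\_forms}, and \texttt{joinable}, and the example relation, are invented for this purpose, not part of the thesis):

```python
# Sketch: a finite abstract reduction system given as an explicit relation
# (a set of pairs). Names and the example relation are our own illustration.

def reachable(rel, a):
    """All b with a ->> b (reflexive transitive closure starting from a)."""
    seen, frontier = {a}, [a]
    while frontier:
        x = frontier.pop()
        for (u, v) in rel:
            if u == x and v not in seen:
                seen.add(v)
                frontier.append(v)
    return seen

def normal_forms(rel, elems):
    """Elements with no outgoing reduction step."""
    return {a for a in elems if all(u != a for (u, _) in rel)}

def joinable(rel, a, b):
    """a and b are joinable iff some c is reachable from both."""
    return bool(reachable(rel, a) & reachable(rel, b))

# Example: a -> b, a -> c, b -> d, c -> d (a confluent "diamond").
rel = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")}
print(joinable(rel, "b", "c"))                  # True
print(normal_forms(rel, {"a", "b", "c", "d"}))  # {'d'}
```

Note that this only works for explicitly enumerated relations; lambda calculus, treated next, has infinitely many terms.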

\section{Lambda Encodings}
We use $x,y,z,s,n,x_1, x_2, \ldots$ to denote variables, $t,t', a,b, t_1, t_2, \ldots$ to denote terms, and $\equiv$ to denote syntactic equality. We write $[t'/x]t$ for the result of substituting $t'$ for the variable $x$ in $t$. The syntax and reduction rule of lambda calculus are given as follows.

\begin{definition}[Lambda Calculus]

\

\noindent Term  $t \ ::= \ x \ | \ \lambda x.t \ | \ t\  t'$ 

\noindent Reduction  $(\lambda x.t)t' \to_{\beta} [t'/x]t$ 
\end{definition}

\noindent For example, $(\lambda x.x\ x)(\lambda x.x\ x)$ and $\lambda y.y$ are concrete
terms in lambda calculus.  For a term $\lambda x.t$, we call $\lambda$ the \textit{binder} and say that $x$ is \textit{bound}, calling it a \textit{bound variable}. If a variable is not bound, we say it is a \textit{free} variable. We treat terms up to $\alpha$-equivalence, meaning that for any
term $t$, one can always rename the bound variables in $t$. So, for example, $\lambda x.x\ x$ is
the same as $\lambda y.y\ y$, and $\lambda x.\lambda y.x\ y$ is the same as $\lambda z.\lambda x .z\ x$. The sequence $(\lambda x.\lambda y.x\ y)\underline{((\lambda z.z)z_1)} \to_{\beta} \underline{(\lambda x.\lambda y.x\ y)z_1} \to_{\beta} \lambda y.z_1\ y$ is a valid reduction sequence in lambda calculus. For the reader's convenience we underline the part where the reduction is carried out (we will not do this again); the underlined term is called a \textit{redex}. For a comprehensive introduction to lambda calculus, we refer to \cite{Barendregt:1985}. 
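Substitution and beta reduction can be prototyped in a few lines. The following Python sketch is our own illustration (the tuple-based term representation and all function names are invented here): it implements capture-avoiding substitution and leftmost-outermost (normal-order) reduction, and replays the example reduction sequence above.

```python
# Sketch: terms as nested tuples, our own encoding for illustration.
# ('var', x) | ('lam', x, body) | ('app', t1, t2)

def free_vars(t):
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'lam':
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def fresh(x, avoid):
    """Prime the name until it avoids the given set."""
    while x in avoid:
        x += "'"
    return x

def subst(t, x, s):
    """[s/x]t with capture-avoiding renaming of bound variables."""
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'app':
        return ('app', subst(t[1], x, s), subst(t[2], x, s))
    y, body = t[1], t[2]
    if y == x:                         # x is shadowed; nothing to do
        return t
    if y in free_vars(s):              # rename y to avoid capturing it
        y2 = fresh(y, free_vars(s) | free_vars(body))
        body, y = subst(body, y, ('var', y2)), y2
    return ('lam', y, subst(body, x, s))

def normalize(t):
    """Repeatedly contract the leftmost-outermost beta redex."""
    if t[0] == 'var':
        return t
    if t[0] == 'lam':
        return ('lam', t[1], normalize(t[2]))
    f, a = t[1], t[2]
    if f[0] == 'lam':                  # (lam x. b) a  ->  [a/x]b
        return normalize(subst(f[2], f[1], a))
    f = normalize(f)
    if f[0] == 'lam':
        return normalize(('app', f, a))
    return ('app', f, normalize(a))

# (lam x. lam y. x y) ((lam z. z) z1)  normalizes to  lam y. z1 y
ex = ('app', ('lam', 'x', ('lam', 'y', ('app', ('var', 'x'), ('var', 'y')))),
             ('app', ('lam', 'z', ('var', 'z')), ('var', 'z1')))
print(normalize(ex))  # ('lam', 'y', ('app', ('var', 'z1'), ('var', 'y')))
```

Note that \texttt{normalize} loops forever on terms without a normal form, such as $(\lambda x.x\ x)(\lambda x.x\ x)$, which is unavoidable for any normalizer.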
    
\subsection{Church Encoding}

\begin{definition}[Church Numeral]

\

\noindent $0 \ := \lambda s.\lambda z. z $ 

\noindent $\mathsf{S} \ := \lambda n.\lambda s.\lambda z. s (n\ s\ z)$ 

\end{definition}

\noindent So $1 \ := \mathsf{S}\ 0 \equiv (\lambda n.\lambda s.\lambda z. s (n\ s\ z))(\lambda s.\lambda z. z) \to_{\beta} \lambda s.\lambda z. s ((\lambda s.\lambda z. z) s\ z) \to_{\beta} \lambda s.\lambda z. s\ z$.  Note that the last of the above reductions occurs underneath the lambda abstractions. Similarly $2\ :=  \suc \ (\suc\ 0) \to_\beta^* \lambda s.\lambda z. s\ (s\ z)$. 

Informally, we can interpret a lambda term as both data and function, so instead of thinking of the Church numeral $2$ as
data, one can think of it as a higher-order function $h$ that takes a function $f$ and a datum $a$
as arguments and applies $f$ to $a$ twice. We define the \textit{iterator} $\mathsf{It}\ n\ f\ t \ := n\ f \ t$. So $\mathsf{It}\ 0 \ f\ t =_{\beta} t $ and $\mathsf{It}\ (\mathsf{S}\ u) \ f\ t  =_{\beta} f\ (\mathsf{It}\ u \ f\ t) $. We can then use the iterator to define $\mathsf{Plus} \ n\ m := \mathsf{It}\ n\ \mathsf{S}\ m$.
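Since Church numerals are just higher-order functions, they transcribe directly into any language with first-class functions. Below is a Python sketch of $0$, $\mathsf{S}$, $\mathsf{It}$, and $\mathsf{Plus}$ as defined above (our own illustration; \texttt{to\_int} is an invented read-back helper, not part of the encoding):

```python
# Church numerals with native Python lambdas (our sketch of the definitions
# in the text; `to_int` is an invented helper for reading back results).
zero = lambda s: lambda z: z
suc  = lambda n: lambda s: lambda z: s(n(s)(z))

# It n f t := n f t, and Plus n m := It n S m, as in the text.
it   = lambda n: lambda f: lambda t: n(f)(t)
plus = lambda n: lambda m: it(n)(suc)(m)

def to_int(n):
    """Read back a Church numeral by iterating +1 over 0."""
    return n(lambda x: x + 1)(0)

two   = suc(suc(zero))
three = suc(two)
print(to_int(plus(two)(three)))  # 5
```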

\subsection{Scott Encoding}
\begin{definition}[Scott Numeral]

\

\noindent $0 \ := \lambda s.\lambda z. z $ 

\noindent $\mathsf{S} \ := \lambda n.\lambda s.\lambda z. s\ n$ 

\end{definition}

We can see that $1 \ := \lambda s.\lambda z. (s\ 0)$ and $2 \ := \lambda s.\lambda z. (s\ 1)$. 
We now define a notion of \textit{recursor}. We first give a
version of the \textit{fixed point operator} $\mathsf{Fix} := \lambda f.(\lambda x.f\ (x\ x)) (\lambda x.f\ (x\ x))$. It is called a fixed point operator because, when applied to a lambda expression, it gives a
fixed point of that expression (recall that, informally, each lambda expression is both data and function).
So 

$\mathsf{Fix} \ g \to_{\beta} (\lambda x.g\ (x\ x)) (\lambda x.g\ (x\ x)) \to_{\beta} g ((\lambda x.g\ (x\ x))\ (\lambda x.g\ (x\ x)) ) =_{\beta} g\ (\mathsf{Fix}\ g) $.

\noindent Now we can define the recursor: $\mathsf{Rec}\ := \ \mathsf{Fix}\ (\lambda r. \lambda n. \lambda f. \lambda v. n \ (\lambda m. f \ (r\ m\ f\ v)\ m)\ v)$. We get $\mathsf{Rec}\ 0\ f\ v {\twoheadrightarrow_{\beta}} v$ and $\mathsf{Rec}\ (\mathsf{S}\ n)\ f\ v {\twoheadrightarrow_{\beta}} f\ (\mathsf{Rec}\ n\ f\ v)\ n$. In a similar fashion, one can define $\mathsf{Plus} \ n\ m\ := \mathsf{Rec} \ n \ (\lambda x.\lambda y.\mathsf{S}\ x)\ m$. 

The predecessor function can easily be defined as $\mathsf{Pred}\ n\ :=  \mathsf{Rec}\ n\ (\lambda x.\lambda y.y)\ 0$. It takes only constant time (w.r.t.\ the number of beta reduction steps) to calculate the predecessor. This function is tricky to define with Church encoding: one first has to define a recursor in terms of the iterator, then use the recursor to define $\mathsf{Pred}$. To calculate $\mathsf{Pred}\ n$ with Church encoding, one has to perform at least $n$ steps, so it takes linear time \cite{Girard:1989}. 
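The Scott encoding can likewise be transcribed into Python, with one caveat: Python evaluates arguments eagerly, so the $\mathsf{Fix}$ given above would loop. The sketch below (our own adaptation, not from the thesis) therefore uses the eta-expanded strict variant, often called the Z combinator; \texttt{to\_int} is again an invented read-back helper.

```python
# Scott numerals in Python (our sketch). Because Python is strict, we use
# the eta-expanded Z combinator in place of the Fix of the text.
zero = lambda s: lambda z: z
suc  = lambda n: lambda s: lambda z: s(n)

fix = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Rec := Fix (lam r. lam n. lam f. lam v. n (lam m. f (r m f v) m) v)
rec = fix(lambda r: lambda n: lambda f: lambda v:
          n(lambda m: f(r(m)(f)(v))(m))(v))

plus = lambda n: lambda m: rec(n)(lambda x: lambda y: suc(x))(m)
pred = lambda n: rec(n)(lambda x: lambda y: y)(zero)

def to_int(n):
    """Read back a Scott numeral by case analysis plus Python recursion."""
    return n(lambda m: 1 + to_int(m))(0)

two, three = suc(suc(zero)), suc(suc(suc(zero)))
print(to_int(plus(two)(three)))  # 5
print(to_int(pred(three)))       # 2
```

Note that, unlike under normal-order reduction, this strict transcription evaluates the recursive call in $\mathsf{Rec}$ eagerly, so \texttt{pred} here is linear rather than constant time.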
\subsection{Parigot Encoding}

\begin{definition}[Parigot Numeral]

\

\noindent $0 \ := \lambda s.\lambda z. z $ 

\noindent $\mathsf{S} \ := \lambda n.\lambda s.\lambda z. s \ n\ (n\ s\ z)$ 

\end{definition}

Parigot encoding can be seen as a mixture of Church and Scott encoding: each datum contains its own subdata, and it supports a form of iteration similar to Church encoding. For example, we can
define $\mathsf{Pred}\ n\ :=  n\ (\lambda x.\lambda y.x)\ 0$ and $\mathsf{Plus} \ n\ m :=  n\ (\lambda x.\mathsf{S})\ m$. We do not need full recursion to compute with Parigot numerals, and we
can retrieve subdata in constant time. 
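A Python transcription of the Parigot numerals follows (our own sketch; \texttt{to\_int} is an invented read-back helper). Here \texttt{pred} projects out the stored predecessor with the first projection $\lambda x.\lambda y.x$; note that since Python is strict, the discarded iterate is still evaluated, so the constant-time claim holds only under normal-order reduction.

```python
# Parigot numerals in Python (our sketch): each successor passes both the
# predecessor n and the iterated result n s z to the step function s.
zero = lambda s: lambda z: z
suc  = lambda n: lambda s: lambda z: s(n)(n(s)(z))

# Pred n := n (lam x. lam y. x) 0   -- project the stored predecessor
# Plus n m := n (lam x. S) m        -- iterate S, ignoring predecessors
pred = lambda n: n(lambda x: lambda y: x)(zero)
plus = lambda n: lambda m: n(lambda x: suc)(m)

def to_int(n):
    """Read back by iteration, ignoring the embedded predecessors."""
    return n(lambda _: lambda acc: acc + 1)(0)

two   = suc(suc(zero))
three = suc(two)
print(to_int(pred(three)))       # 2
print(to_int(plus(two)(three)))  # 5
```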


\section{Confluence}
\label{Conf}
%(5-6 pages)

\begin{definition}
\label{c-r}
  Given an abstract reduction system $(\mathcal{A}, \{ \to_i\}_{i\in \mathcal{I}})$, let $\to$ denote $\bigcup_{i\in \mathcal{I}} \to_i$, let $=$ denote the equivalence relation generated by $\to$.
\begin{itemize}
\item Confluence: For any $a,b,c \in \mathcal{A}$, if $a \twoheadrightarrow b$ and $a \twoheadrightarrow c$, then there exists $d \in \mathcal{A}$ such that $b \twoheadrightarrow d$ and $c \twoheadrightarrow d$. 

\item Church-Rosser: For any $a,b \in \mathcal{A}$, if $a = b$, then there is a $c \in \mathcal{A}$ such that $a \twoheadrightarrow c$ and $b \twoheadrightarrow c$.

\end{itemize}
\end{definition}

\noindent The two properties above can be expressed by the following diagrams:

\
\begin{center}
\begin{tabular}{lll}
\begin{diagram}[size=1.5em,textflow]
 & & a & & \\
 & \ldOnto & & \rdOnto &  \\
 b & &  &  & c \\
 & \rdDashtoo & & \ldDashtoo &  \\
 & & d & & \\
\end{diagram}

&

&
\begin{diagram}[size=1.5em,textflow]
 a & & = &  & b \\
 & \rdDashtoo & & \ldDashtoo &  \\
 & & c & & \\
\end{diagram}

\end{tabular}
\end{center}

\begin{lemma}
\label{Conf-CR}
  An abstract reduction system $\mathcal{R}$ is confluent iff it is Church-Rosser.
\end{lemma}
\begin{proof}
  Assume the same notation as in Definition \ref{c-r}. 

 ``$\Leftarrow$'': Assume $\mathcal{R}$ is Church-Rosser. For any $a,b,c \in \mathcal{A}$, if $a \twoheadrightarrow b$ and $a \twoheadrightarrow c$, then this means $b = c$. By Church-Rosser, there is a $d \in \mathcal{A}$, such that $b \twoheadrightarrow d$ and $c \twoheadrightarrow d$. 

``$\Rightarrow$'': Assume $\mathcal{R}$ is confluent. For any $a,b \in \mathcal{A}$, if $a = b$, then we show there is a $c \in \mathcal{A}$ such that $a \twoheadrightarrow c$ and $b \twoheadrightarrow c$ by induction on the generation of $a = b$:  

If $a \twoheadrightarrow b \Rightarrow a = b$, then let $c$ be $b$.

If $b = a \Rightarrow a = b$, then by the induction hypothesis there is a $c$ such that $b \twoheadrightarrow c$ and $a \twoheadrightarrow c$. 

If $a = d, d = b \Rightarrow a = b$, then by the induction hypothesis there is a $c_1$ such that $a \twoheadrightarrow c_1$ and $d \twoheadrightarrow c_1$, and a $c_2$ such that $d \twoheadrightarrow c_2$ and $b \twoheadrightarrow c_2$. So we have $d \twoheadrightarrow c_1$ and $d \twoheadrightarrow c_2$; by confluence, there is a $c$ such that $c_1 \twoheadrightarrow c$ and $c_2 \twoheadrightarrow c$. Hence $a \twoheadrightarrow c_1 \twoheadrightarrow c$ and $b \twoheadrightarrow c_2 \twoheadrightarrow c$. This process is illustrated by the following diagram:

\begin{diagram}[size=1.5em,textflow]
 a &            & = &            & d &           & =  &           & b &  \\
   & \rdDashtoo &   & \ldDashtoo &   & \rdDashtoo &   & \ldDashtoo & & \\
   &            & c_1 &          &   &            & c_2 &            & & \\
   &            &  & \rdDashtoo         &   &    \ldDashtoo        &  &            & & \\
   &            &     &          & c  &            &  &            & & \\
\end{diagram}

\end{proof}

The definition of $=$ depends on $\twoheadrightarrow$, and the definition of $\twoheadrightarrow$ depends on $\to$. 
Confluence is often easier to prove than Church-Rosser, in the sense that it is easier to analyze $\twoheadrightarrow$ than $=$. Let us now see some consequences of confluence. 

\begin{corollary}
  If $\mathcal{R}$ is confluent, then every element in $\mathcal{A}$ has at most one normal form.
\end{corollary}
\begin{proof}
  Assume $a \in \mathcal{A}$ and that $b,c$ are two different normal forms of $a$. Then $a \twoheadrightarrow b$
and $a \twoheadrightarrow c$, so by confluence there exists a $d$ such that $b \twoheadrightarrow d$ and $c \twoheadrightarrow d$. But $b$ and $c$ are normal forms, so both must be the same as $d$, which contradicts the assumption that they are two different normal forms. 
\end{proof}

\begin{definition}
  An abstract reduction system $\mathcal{R}$ is \textit{trivial} if 
$a = b$ for all $a , b \in \mathcal{A}$.
\end{definition}

\begin{corollary}
  If $\mathcal{R}$ is confluent and there are at least two different normal forms, then $\mathcal{R}$ is
not trivial.
\end{corollary}

\section{Tait and Martin-L\"of's Method}
\label{tait-martin}    
We want to show that lambda calculus, viewed as an abstract reduction system, is confluent. We present
a method for proving confluence of an abstract reduction system due to W. Tait and P. Martin-L\"of (reported in \cite{Barendregt:1985}). Then we show how to
apply this method to prove that lambda calculus is confluent. 

\begin{definition}[Diamond Property]
  Given an abstract reduction system $(\mathcal{A}, \{ \to_i\}_{i\in \mathcal{I}})$, we say it has the diamond property if:

 For any $a, b, c \in \mathcal{A}$, if $a \to b$ and $a \to c$, then there exists $d \in \mathcal{A}$ such that $b \to d$ and $c \to d$.

\begin{diagram}[size=1.5em,textflow]
 & & a & & \\
 & \ldTo & & \rdTo &  \\
 b & &  &  & c \\
 & \rdDashto & & \ldDashto &  \\
 & & d & & \\
\end{diagram}


\end{definition}
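For a finite reduction relation, the definition above can be checked directly by exhaustive search. The following is a minimal sketch (our own illustration, not part of the development): the relation is given as a set of edges, and we test every peak $b \leftarrow a \to c$ for a one-step join.

```python
# Check the diamond property of a finite abstract reduction system,
# given as a set of (source, target) pairs.  Following the definition:
# whenever a -> b and a -> c, some d must satisfy b -> d and c -> d.
def has_diamond_property(steps):
    succ = {}
    for a, b in steps:
        succ.setdefault(a, set()).add(b)
    for a, bs in succ.items():
        for b in bs:
            for c in bs:
                # b and c must be joinable in exactly one step
                if not (succ.get(b, set()) & succ.get(c, set())):
                    return False
    return True
```

Note that, read literally, the property forces every reduct to have a successor; this is harmless for relations like parallel reduction, which are reflexive.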

\begin{lemma}
  If $\mathcal{R}$ has the diamond property, then it is confluent.
\end{lemma}
\begin{proof}
By simple diagram chasing, as suggested below:


\begin{diagram}[size=1.5em,textflow]
   &            &     &          & a  &            &  &            & & \\
   &            &     & \ldTo &   & \rdTo &   &  & & \\
   &            & c_1 &          &   &          & c_2 &            & & \\
   &  \ldTo     &     & \rdDashto        &   &       \ldDashto    &  &   \rdTo      & & \\
  e &            &     &            & d &           &   &           & b &  \\
   & \rdDashto &   & \ldDashto &   & \rdDashto &   & \ldDashto & & \\
   &            & d_1 &          &   &            & d_2 &            & & \\
   &            &  & \rdDashto         &   &    \ldDashto        &  &            & & \\
   &            &     &          & c  &            &  &            & & \\
\end{diagram}
\end{proof}

\begin{lemma}
\label{Subeq}
  If there exists some $\to_i$ with $\to \subseteq \to_i \subseteq \twoheadrightarrow$ such that $\to_i$ satisfies the diamond property, then
$\to$ is confluent.
\end{lemma}
\begin{proof}
  Since $\to \subseteq \to_i \subseteq \twoheadrightarrow$ implies $\twoheadrightarrow \subseteq {\twoheadrightarrow_i} \subseteq {\twoheadrightarrow}$, we have $\twoheadrightarrow_i = {\twoheadrightarrow}$. The diamond property of $\to_i$ implies that $\to_i$ is confluent, which in turn implies the confluence of $\to$.
\end{proof}

Sometimes $\to$ may not satisfy the diamond property; in that case one can look for an intermediate reduction $\to_i$ that has the diamond property. That is exactly what we will do for lambda calculus. Beta reduction itself does not satisfy the diamond property. For example, $(\lambda x.((\lambda u.u)\ v))\ ((\lambda y.y\ y)\ z) \to_{\beta} (\lambda x.((\lambda u.u)\ v))\  (z\ z)$ and $(\lambda x.((\lambda u.u)\ v))\ ((\lambda y.y\ y)\ z) \to_{\beta} (\lambda u.u)\ v$. One cannot join $(\lambda u.u)\ v$ and $(\lambda x.((\lambda u.u)\ v))\  (z\ z)$ in one step, although they are still joinable in several steps. This leads to the notion of parallel reduction.

\begin{definition}[Parallel Reduction]
\

\

  \begin{tabular}{llll}


\infer{ t \Rightarrow_{\beta} t}{}

&

\infer{\lambda x.t \Rightarrow_{\beta} \lambda x.t'}{t \Rightarrow_{\beta} t'}

&
\infer{t_1 t_2 \Rightarrow_{\beta} t_1' t_2'}{t_1 \Rightarrow_{\beta} t_1' & t_2 \Rightarrow_{\beta} t_2'}

&

\infer{(\lambda x.t_1) t_2 \Rightarrow_{\beta} [t_2'/x]t_1' }{t_1 \Rightarrow_{\beta} t_1' & t_2 \Rightarrow_{\beta} t_2'}

\\
\end{tabular}
\end{definition}

Intuitively, parallel reduction allows us to contract many beta redexes (or none at all) in one step. Under this notion of
one-step reduction, we can obtain the diamond property for $\Rightarrow_{\beta}$.
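One admissible $\Rightarrow_{\beta}$ step is the complete development, which contracts every redex of a term simultaneously. The following sketch (our own illustration; terms are encoded as nested tuples, and substitution is naive, which suffices for the closed examples here) computes it, and joins the two one-step reducts from the counterexample above in a single step each.

```python
# Terms: ('var', x), ('lam', x, body), ('app', f, a).
def subst(t, x, s):
    """Substitute s for the free variable x in t (no capture avoidance)."""
    tag = t[0]
    if tag == 'var':
        return s if t[1] == x else t
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def develop(t):
    """Complete development: contract every beta redex of t at once."""
    tag = t[0]
    if tag == 'var':
        return t
    if tag == 'lam':
        return ('lam', t[1], develop(t[2]))
    f, a = t[1], t[2]
    if f[0] == 'lam':  # beta redex: develop the parts, then contract
        return subst(develop(f[2]), f[1], develop(a))
    return ('app', develop(f), develop(a))
```

For instance, both $(\lambda u.u)\ v$ and $(\lambda x.((\lambda u.u)\ v))\ (z\ z)$ develop to $v$, so one parallel step joins them.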

\begin{lemma}
\label{Par:sub}
  If $ t_1 \Rightarrow_{\beta} t_1'$ and $ t_2 \Rightarrow_{\beta} t_2'$, then $[t_2/x]t_1 \Rightarrow_{\beta} [t_2'/x]t_1'$. 
\end{lemma}
\begin{proof}
  By induction on the derivation of $ t_1 \Rightarrow_{\beta} t_1'$. We will not prove this here.
\end{proof}

\begin{lemma}
\label{Par}
  $\Rightarrow_{\beta}$ satisfies diamond property.
\end{lemma}
\begin{proof}
  Assume $t \Rightarrow_{\beta} t_1$ and $t \Rightarrow_{\beta} t_2$. We need to show that
there exists a $t_3$ such that $t_1 \Rightarrow_{\beta} t_3$ and $t_2 \Rightarrow_{\beta} t_3$.
We prove this by induction on the derivation of $t \Rightarrow_{\beta} t_1$. 

\begin{itemize}
\item  \textbf{Case}:  \infer{ t \Rightarrow_{\beta} t}{}

  Simply let $t_3$ be $t$. 


\item \textbf{Case}:  
\infer{\lambda x.t' \Rightarrow_{\beta} \lambda x.t''}{t' \Rightarrow_{\beta} t''}

In this case $t$ is of the form $\lambda x.t'$, where $t' \Rightarrow_{\beta} t''$, and $t_1$ is of the form $\lambda x.t''$. $t_2$ must be of the form $\lambda x.t'''$, where $t' \Rightarrow_{\beta} t'''$. By induction, we have a $t_3'$ such that $t'' \Rightarrow_{\beta} t_3'$ and $t''' \Rightarrow_{\beta} t_3'$. Letting $t_3$ be $\lambda x.t_3'$, we get $t_1 \equiv \lambda x.t'' \Rightarrow_{\beta} \lambda x.t_3' \equiv t_3$ and $t_2 \equiv \lambda x.t'''\Rightarrow_{\beta} \lambda x.t_3' \equiv t_3$.

\item \textbf{Case}:  
\infer{(\lambda x.t_4) t_5 \Rightarrow_{\beta} [t_5'/x]t_4' }{t_4 \Rightarrow_{\beta} t_4' & t_5 \Rightarrow_{\beta} t_5'}

In this case $t$ is of the form $(\lambda x.t_4) t_5$,  $t_1$ is of the form $[t_5'/x]t_4'$,  $t_4 \Rightarrow_{\beta} t_4' $ and $ t_5 \Rightarrow_{\beta} t_5'$. 

If $t_2$ is of the form $(\lambda x.t_4'') t_5'' $, where $t_4 \Rightarrow_{\beta} t_4''$ and $t_5 \Rightarrow_{\beta} t_5''$, then by induction we have a $t_6$ such that $t_5'' \Rightarrow_{\beta} t_6$ and $t_5' \Rightarrow_{\beta} t_6$. Similarly, by induction there is a $t_7$ such that $t_4'' \Rightarrow_{\beta} t_7$ and $t_4' \Rightarrow_{\beta} t_7$. Letting $t_3$ be $[t_6/x]t_7$, we get $t_1 \equiv [t_5'/x]t_4'  \Rightarrow_{\beta} [t_6/x]t_7 \equiv t_3$ (by lemma \ref{Par:sub}) and $t_2 \equiv (\lambda x.t_4'') t_5'' \Rightarrow_{\beta} [t_6/x]t_7 \equiv t_3$.

If $t_2$ is of the form $[t_5''/ x]t_4'' $, where $t_4 \Rightarrow_{\beta} t_4''$ and $t_5 \Rightarrow_{\beta} t_5''$, then by induction we have a $t_6$ such that $t_5'' \Rightarrow_{\beta} t_6$ and $t_5' \Rightarrow_{\beta} t_6$. Similarly, by induction there is a $t_7$ such that $t_4'' \Rightarrow_{\beta} t_7$ and $t_4' \Rightarrow_{\beta} t_7$. Letting $t_3$ be $[t_6/x]t_7$, by lemma \ref{Par:sub} we get $t_1 \equiv [t_5'/x]t_4'  \Rightarrow_{\beta} [t_6/x]t_7 \equiv t_3$ and $t_2 \equiv [t_5''/ x]t_4'' \Rightarrow_{\beta} [t_6/x]t_7 \equiv t_3$.

\item \textbf{Case}:  
\infer{t_4 t_5 \Rightarrow_{\beta} t_4' t_5'}{t_4 \Rightarrow_{\beta} t_4' & t_5 \Rightarrow_{\beta} t_5'}

Similar to the arguments above. 
\end{itemize}
\end{proof}

\begin{lemma}
\label{Par:eq}
  $\to_{\beta} \subseteq \Rightarrow_{\beta} \subseteq \twoheadrightarrow_{\beta}$. 
\end{lemma}

\begin{theorem}
  $\to_{\beta}$ reduction is confluent.
\end{theorem}
\begin{proof}
  By lemma \ref{Subeq}, lemma \ref{Par} and lemma \ref{Par:eq}.
\end{proof}

\section{Hardin's Interpretation Method}
Sometimes it is inevitable to deal with reduction systems that contain more than one reduction, for example, $(\Lambda, \{ \to_{\beta}, \to_{\eta}\})$. The confluence problem for this kind of system requires some nontrivial effort to solve. Hardin's interpretation method \cite{Hardin:1989} provides a way to deal with some of those reduction systems.

\begin{lemma}[Interpretation lemma]
\label{interp}
Let $\to $ be $ \to_1 \cup \to_2$, 
with $\to_1$ confluent and strongly normalizing. We denote by $\nu(a)$ the $\to_1$-normal form of $a$. Suppose that there is some relation $\to_i$ on $\to_1$-normal forms satisfying:

\

$\to_i \subseteq \twoheadrightarrow$, and $a \to_2 b $ implies $ \nu(a)   {\twoheadrightarrow_i}    \nu(b)$ $(\dagger)$

\

\noindent Then the confluence of $\to_i$ implies the confluence of $\to$.
\end{lemma}

\begin{proof}
 Suppose $\to_i$ is confluent, and $a  {\twoheadrightarrow}  a'$ and $a  {\twoheadrightarrow}  a''$. Notice that $t  {\to_1^*}  t'$ implies $\nu(t) = \nu(t')$ (by confluence and strong normalization of $\to_1$). So by ($\dagger$), $\nu(a)  {\twoheadrightarrow_i}  \nu(a')$ and $\nu(a)  {\twoheadrightarrow_i}  \nu(a'')$. By confluence of $\to_i$, there exists $b$ such that $\nu(a')  {\twoheadrightarrow_i}  b$ and $\nu(a'')  {\twoheadrightarrow_i}  b$. Since $\to_i, \to_1 \subseteq \twoheadrightarrow$, we get $a' {\twoheadrightarrow}   \nu(a')  {\twoheadrightarrow}  b$ and $a'' {\twoheadrightarrow}   \nu(a'')  {\twoheadrightarrow}  b$. Hence $\to$ is confluent.

\begin{diagram}[size=2em,textflow]
& & & & a & & & &\\
& & &\ldLine &  & \rdLine & & &\\
& &\ldLine & &  & & \rdLine & &\\
&\ldOnto & & & \dDashtoo^1 &  &  & \rdOnto & \\
a'& &  & & \nu(a)  &  & & & a'' \\
&\rdDashtoo^1 &  &\ldDashtoo^i &   & \rdDashtoo^i  &  & \ldDashtoo^1 &\\
& & \nu(a')  &  &   &  & \nu(a'') & & \\
& &  & \rdDashtoo^i  &  & \ldDashtoo^i &   &  & \\
& &  &  & b  &   &  & & \\
\end{diagram}

\end{proof}

Hardin's method reduces the confluence problem of $\to_1 \cup \to_2$ to that of $\to_i$, given the confluence and strong
normalization of $\to_1$; this makes it possible to apply Tait-Martin-L\"of's method to prove confluence of $\to_i$.


\chapter{Confluence of Lambda-Mu Calculus}
\label{lambdamu}
In this chapter, we investigate the confluence problem of extending lambda calculus
with local definitions; we call the result the $\lambda \mu$ calculus, where $\mu$ represents the usual
let-rec binding available in functional programming languages. It is desirable to establish the confluence of $\lambda \mu$, since it may be needed to prove type preservation for type systems based on $\lambda\mu$, and it implies that equality reasoning is consistent. We give the formulation of
$\lambda \mu$ first (Section \ref{Local}). We discuss why traditional approaches to confluence fail on $\lambda \mu$ in Section \ref{fail}. Finally, we show how to use the interpretation method to prove confluence for a restricted version of the $\lambda\mu$ calculus (which we call the \textit{local} lambda-mu calculus) in Section \ref{conf:local}.

\section{Lambda-Mu Calculus}
%(6-7 pages)
\label{Local}
\begin{definition}[Syntax]

\

\noindent \textit{Terms} $t \ :: = \ x \ |  \ \lambda x.t \ | \ t t'  \ | \ \mu t$

\noindent \textit{Local Definitions}/\textit{Closures} $\mu \ ::= \M{x}{t}{\mathcal{I}}$
\end{definition}

%\noindent $\mathcal{I}$ is a finite nonempty index set.
\begin{definition}[Free Variables]

  \
  
\noindent  $\mathrm{FV}(x) := \{x\}$.
  
\noindent  $\mathrm{FV}(\lambda x.t) := \mathrm{FV}(t) - \{x\}$.
  
\noindent  $\mathrm{FV}(t\ t') := \mathrm{FV}(t) \cup \mathrm{FV}(t')$
  
\noindent  $\mathrm{FV}(\mu t) := (\mathrm{FV}(t) - \mathrm{dom}(\mu)) \cup \mathrm{FV}(\mu)$
  
\noindent  $\mathrm{FV}(\M{x}{t}{\mathcal{I}}) := (\bigcup_{i \in \mathcal{I}}\mathrm{FV}(t_i)) - \{x_i\}_{i \in \mathcal{I}}$  
  
\end{definition}
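The clauses above can be transcribed directly. The following is a minimal sketch (our own tuple encoding of terms, not part of the calculus): a closure is a tuple of $(x_i, t_i)$ pairs, and the last two clauses of the definition are combined in the `('mu', ...)` case.

```python
# Free variables of lambda-mu terms, following the definition above.
# Terms: ('var', x), ('lam', x, t), ('app', t, u), ('mu', defs, t),
# where defs is a nonempty tuple of (x_i, t_i) pairs.
def fv(t):
    tag = t[0]
    if tag == 'var':
        return {t[1]}                       # FV(x) = {x}
    if tag == 'lam':
        return fv(t[2]) - {t[1]}            # FV(lam x.t) = FV(t) - {x}
    if tag == 'app':
        return fv(t[1]) | fv(t[2])          # FV(t u) = FV(t) u FV(u)
    defs, body = t[1], t[2]
    dom = {x for x, _ in defs}
    fv_mu = set().union(*(fv(ti) for _, ti in defs)) - dom
    return (fv(body) - dom) | fv_mu         # FV(mu t)
```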

%% Each closure $\mu$ is \textit{independent}, i.e. the free variables in $\mu$ are not allow to be bounded by any other closures (note that they can be bounded by lambda-binders). For example, we do not allow terms like $\{ \delta \mapsto \lambda y. \alpha (\mathsf{S} y)\}( \{\alpha \mapsto \lambda x. \delta (\mathsf{S} x)\} \alpha)$, however, we can have $\{ \delta \mapsto \lambda y. \alpha (\mathsf{S} y), \alpha \mapsto \lambda x. \delta (\mathsf{S} x)\} \alpha$. By the independence restriction, for term $\{..., x_i \mapsto \mu t,...\} t'$,  $\mathrm{FV}_{\Gamma}(\mu) \cap \{ x_1,..., x_n\} = \emptyset$.

\begin{definition}[Beta-Reductions]
\fbox{$t \to_{\beta} t'$} 

\

\begin{tabular}{llll}
  
\infer{ (\lambda x.t)t' \to_{\beta} [t'/x]t}{}

&

 \infer{ \mu x_i \to_{\beta} \mu t_i}{(x_i \mapsto t_i) \in \mu}

&
 
\infer{ \lambda x.t \to_{\beta} \lambda x.t'}{ t \to_{\beta}t' }

&

\infer{ t t' \to_{\beta} t'' t'}{ t \to_{\beta}t''}

\\
\\

 \infer{ t\ t' \to_{\beta} t\ t''}{ t'\to_{\beta}t''}

 &
 
\infer{ \mu t \to_{\beta} \mu t'}{ t \to_{\beta}t' }

&

\infer{ \mu t \to_{\beta} \mu' t}{ \mu \to_{\beta} \mu'}

\end{tabular}

\end{definition}
\noindent Note that $\mu \to_{\beta}\mu'$ is shorthand for: there is exactly one $x_i \mapsto t_i \in \mu$ with
$t_i \to_\beta t_i'$, and $\mu'$ is the same as $\mu$ except that $x_i \mapsto t_i' \in \mu'$. Similarly, we have a shorthand for $ \mu \to_{\mu}\mu'$.

\begin{definition}[Mu-Reductions]
\fbox{$ t \to_{\mu} t'$} 

\

\begin{tabular}{lll}

 \infer{ \mu t \to_{\mu} t}{\mathrm{dom}(\mu) \# \mathrm{FV}(t)}

&

 \infer{ \mu(\lambda x.t) \to_{\mu} \lambda x.\mu t}{}

&

\infer{ \mu(t_1 t_2)  \to_{\mu} (\mu t_1 ) (\mu t_2)}{}

\\
\\

 \infer{ \lambda x.t \to_{\mu} \lambda x.t'}{ t \to_{\mu} t'}

&
\infer{ t\ t' \to_{\mu} t\ t''}{ t'\to_{\mu} t''}

&
\infer{ t\ t' \to_{\mu} t''\  t'}{ t \to_{\mu} t''}

\\
\\

\infer{ \mu t \to_{\mu} \mu t'}{ t \to_{\mu}t' }

&

\infer{ \mu t \to_{\mu} \mu' t}{ \mu \to_{\mu}\mu'}


\end{tabular}

\end{definition}

\

\noindent \textit{Mutual substitutions} within local definitions are not possible in $\lambda\mu$, because of Ariola and Klop's \cite{Ariola:1997} non-confluence example:

$\{ \delta \mapsto \underline{\lambda y. \alpha (\mathsf{S} y)}, \alpha \mapsto \lambda x. \delta (\mathsf{S} x)\} \alpha \to \{ \delta \mapsto \lambda y. \delta (\mathsf{S} (\mathsf{S} y)), \alpha \mapsto \lambda x. \delta (\mathsf{S} x)\} \alpha $

 $\{ \delta \mapsto \lambda y. \alpha (\mathsf{S} y), \alpha \mapsto \underline{\lambda x. \delta (\mathsf{S} x)}\} \alpha \to \{ \delta \mapsto \lambda y. \alpha (\mathsf{S} y), \alpha \mapsto \lambda x. \alpha  (\mathsf{S} (\mathsf{S} x))\} \alpha$.

\noindent It seems natural to allow mutual substitutions, but we consider them overly eager: in the above non-confluence example, only $\alpha$ is being used, namely, occurs in the body, so there is no need to reduce the $\alpha$ in the definiens of $\delta$ if one is ``lazy'' enough.

Another possible formulation of mu-reduction is $ (\mu t \ \mu t') \to \mu (t\ t')$ instead of pushing $\mu$ inside a term as we do. The potential drawback is that a term $(\mu (\lambda x.t))\ t'$, where there is no $\mu$ inside $t'$, is now stuck. One could add another
rule to repair this situation: $(\mu (\lambda x.t))\ t' \to \mu ([t'/x]t)$. Then one would need
yet another rule to deal with the case $(\mu_2 \mu_1 (\lambda x.t))\ t'$, and so on.

\section{A Failed Attempt to Prove Confluence of Lambda-Mu Calculus}
\label{fail}
We want to point out that directly applying Tait-Martin L\"of's method (Section \ref{tait-martin}, Chapter \ref{pre}) will not work for lambda-mu calculus. For example, let $\Rightarrow$ be
a direct parallelization of $\to_{\beta}$ and $\to_\mu$. Then 

$\mu ((\lambda x.x)\ t) \Rightarrow \mu' t'$, where $\mu \Rightarrow \mu', t \Rightarrow t'$.   

$\mu ((\lambda x.x)\ t) \Rightarrow (\mu' (\lambda x. x))\ (\mu' t)$, where $\mu \Rightarrow \mu'$.

\noindent We cannot bring $(\mu' (\lambda x.x))\ (\mu' t)$ and $ \mu' t'$ back together in
one $\Rightarrow$ step. Thus $\Rightarrow$ does not have the diamond property.


Since we know that the $\to_\mu$ reduction is convergent, we would hope to use Hardin's
interpretation lemma to reduce the confluence proof of $\to_{\beta} \cup \to_{\mu}$ to that of $\to_{\beta\mu}$, a reduction defined on \textit{mu-normal} terms, and then apply Tait-Martin L\"of's method to show confluence of $\to_{\beta\mu}$. We fail at the second step, namely, applying Tait-Martin L\"of's method to show confluence of $\to_{\beta\mu}$. We introduce several definitions before we discuss the reason we fail.

\begin{lemma}
  $\to_{\mu}$ is strongly normalizing and confluent.
\end{lemma}
\begin{proof}
  The number of $\mu$-redexes decreases with each $\to_{\mu}$-reduction, so $\to_{\mu}$ is strongly normalizing. We
  can then use local confluence (Newman's lemma) to prove confluence.
\end{proof}

\begin{definition}[$\mu$-Normal Forms]
  
\

\noindent Normal Term $n \ :: = \ x \ | \   \vec{\rho} x_i \ | \ \lambda x.n \ | \ n\ n'$, where $x_i \in \mathrm{dom}(\rho_i)$. 

\noindent Normal Local Definitions $\rho \ :: = \M{x}{n}{\mathcal{I}}$

\end{definition}

\noindent We use $\vec{\mu}t$ to denote $\mu_1...\mu_n t$ and $\vec{\rho}t$ to denote $\rho_1...\rho_n t$.

\begin{definition}[$\mu$-Normalize Function]
We define the function $\nu$ that maps a term to its $\mu$-normal form.

\

\begin{tabular}{ll}

 $ \nu(x) \ : = \  x$

& $\nu(\lambda y.t)\ : = \ \lambda y.\nu(t)$

\\

 $\nu(t_1 t_2)\ : = \ \nu(t_1) \nu(t_2)$

& 
 $ \nu(\vec{\mu}y) \ := y$ if $y \notin \mathrm{dom}(\vec{\mu})$.

\\
 $ \nu(\vec{\mu}y) \ := \nu(\vec{\mu}) y$ if $y \in \mathrm{dom}(\mu_i)$.

&  
 $\nu(\vec{\mu}(t t')) \ :=  \nu(\vec{\mu} t) \nu( \vec{\mu}t')$
\\
 $\nu(\vec{\mu}( \lambda x.t)) \ := \lambda x.  \nu(\vec{\mu}t)$.
&
$\nu (x \mapsto t, \mu)\ := x \mapsto \nu(t), \nu(\mu)$.
\\
\end{tabular}

\end{definition}

\begin{definition}[$\beta$ Reduction on $\mu$-normal Forms]
\fbox{$n \to_{\beta\mu} n'$} 

\
  
\begin{tabular}{llll}

\infer{ n \to_{\beta \mu} \nu(t)}{ n \to_{\beta}t}
&
\infer{ \lambda x.n \to_{\beta \mu} \lambda x.n'}{ n \to_{\beta \mu} n' }

&

\infer{ n\ n' \to_{\beta \mu} n\ n''}{ n' \to_{\beta \mu} n'' }

&

\infer{ n\ n' \to_{\beta \mu} n''\  n'}{ n \to_{\beta \mu} n'' }


\end{tabular}

\end{definition}

\noindent Intuitively, $\to_{\beta\mu}$ first performs a $\to_{\beta}$ step on a term and then applies the $\nu$
function to the contractum.

\begin{definition}[Parallelization]
\fbox{$ n \Rightarrow_{\beta\mu} n'$} 

\

  \begin{tabular}{lll}

  
\infer{ n \Rightarrow_{\beta \mu} n}{}

&
 \infer{ \vec{\rho} x_i \Rightarrow_{\beta\mu} \nu(\vec{\rho} n_i)}{ (x_i \mapsto n_i) \in \rho_i}

&

\infer{ (\lambda x.n_1) n_2 \Rightarrow_{\beta\mu} \nu([n_2'/x]n_1')}{  n_1\Rightarrow_{\beta\mu} n_1' &  n_2\Rightarrow_{\beta\mu} n_2'}

\\
\\


\infer{ \lambda x.n \Rightarrow_{\beta\mu} \lambda x.n'}{  n \Rightarrow_{\beta\mu}n' }

&

\infer{ n\ n' \Rightarrow_{\beta\mu} n''\ n'''}{ n' \Rightarrow_{\beta\mu} n''' &  n \Rightarrow_{\beta\mu}n'' }

&

 \infer{ \vec{\rho} x_i \Rightarrow_{\beta\mu} \vec{\rho'} x_i}{ \vec{\rho} \Rightarrow_{\beta\mu} \vec{\rho'}}


\end{tabular}

\end{definition}

\noindent Note that $ \vec{\rho} \Rightarrow_{\beta\mu}\vec{\rho'}$ is shorthand for: for all $\rho_i$ and for all $x_i \mapsto n_i \in \rho_i$, we have $ n_i \Rightarrow_{\beta\mu} n_i'$, and $\rho_i'$ consists of the corresponding $x_i \mapsto n_i'$.

The next step would be to show that $\Rightarrow_{\beta\mu}$ has the diamond property so that we can conclude the confluence of $\to_{\beta\mu}$. However, $\Rightarrow_{\beta\mu}$ does not have the diamond property, due to the following counterexample:

Let $\mu$ denote $\{x \mapsto (\lambda y.y) \ z, z \mapsto \lambda q.q\}$. 

$\{x \mapsto (\lambda y.y) \ z, z \mapsto \lambda q.q\}\ x \Rightarrow_{\beta\mu} \{x \mapsto z, z \mapsto \lambda q.q\}\ x$

$\{x \mapsto (\lambda y.y) \ z, z \mapsto \lambda q.q\}\ x \Rightarrow_{\beta\mu} (\lambda y.\mu y)\ (\mu z)$.

\noindent We cannot join $(\lambda y.\mu y)\ (\mu z)$ and $\{x \mapsto z, z \mapsto \lambda q.q\}\ x$ in one $\Rightarrow_{\beta\mu}$ step. So at this point, even though intuitively it seems that
the lambda-mu calculus is confluent, we have not found an adequate proof yet.

\section{Confluence of Local Lambda-Mu Calculus }
\label{conf:local}
In this section we prove confluence of a restricted version of the lambda-mu calculus, namely, the \textit{local} lambda-mu calculus. For local lambda-mu, given $\M{x}{t}{N} t$, we require that for any $ 1 \leq i \leq n $, the set of free variables of $t_i$ satisfies $\mathrm{FV}(t_i) \subseteq \mathrm{dom}(\mu) = \{x_1,..., x_n\}$, and we do not allow reduction, definitional substitution, or substitution inside the definitions.

\begin{definition}[Beta-Reductions]\fbox{$ t \to_{\beta} t'$} 

\begin{tabular}{lll}
\infer{(\lambda x.t)t' \to_{\beta} [t'/x]t}{}

&

 \infer{\mu x_i \to_{\beta} \mu t_i}{(x_i \mapsto t_i) \in \mu}

&

\infer{\lambda x.t \to_{\beta} \lambda x.t'}{t \to_{\beta}t' }

\\
\\

\infer{t t' \to_{\beta} t'' t'}{t \to_{\beta}t''}

&

 \infer{t t' \to_{\beta} t t''}{t'\to_{\beta}t''}

&

\infer{\mu t \to_{\beta} \mu t'}{t \to_{\beta}t' }

\end{tabular}

\end{definition}

\begin{definition}[Mu-Reductions]\fbox{$ t \to_{\mu} t'$} 

\begin{tabular}{lll}

 \infer{\mu t \to_{\mu} t}{\mathrm{dom}(\mu) \# \mathrm{FV}(t)}

&

 \infer{ \mu(\lambda x.t) \to_{\mu} \lambda x.\mu t}{}

&

\infer{ \mu(t_1 t_2)  \to_{\mu} (\mu t_1 ) (\mu t_2)}{}

\\
\\

 \infer{\lambda x.t \to_{\mu} \lambda x.t'}{t \to_{\mu} t'}

&
\infer{t t' \to_{\mu} t t''}{t'\to_{\mu} t''}

&
\infer{t t' \to_{\mu} t'' t'}{t \to_{\mu} t''}

\\
\\

\infer{\mu t \to_{\mu} \mu t'}{t \to_{\mu}t' }

\end{tabular}

\end{definition}

\begin{lemma}
  $\to_{\mu}$ is strongly normalizing and confluent.
\end{lemma}

\begin{definition}[$\mu$-Normal Forms]
  
\

\noindent $n \ :: = \ x \ | \   \mu x_i \ | \ \lambda x.n \ | \ n\ n'$

\end{definition}

\noindent We require $x_i \in \mathrm{dom}(\mu)$. 

\begin{definition}[$\mu$-Normalize Function]

\

\begin{tabular}{ll}

 $ \nu(x) \ : = \  x$

& $\nu(\lambda y.t)\ : = \ \lambda y.\nu(t)$

\\

 $\nu(t_1 t_2)\ : = \ \nu(t_1) \nu(t_2)$

& 
 $ \nu(\vec{\mu}y) \ := y$ if $y \notin \mathrm{dom}(\vec{\mu})$.

\\
 $ \nu(\vec{\mu}y) \ := \mu_i y$ if $y \in \mathrm{dom}(\mu_i)$.

&  
 $\nu(\vec{\mu}(t t')) \ :=  \nu(\vec{\mu} t) \nu( \vec{\mu}t')$
\\
 $\nu(\vec{\mu}( \lambda x.t)) \ := \lambda x.  \nu(\vec{\mu}t)$.

\\
\end{tabular}

\end{definition}
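The clauses above read off directly as a recursive function that accumulates the closure vector $\vec{\mu}$ while pushing it toward the leaves. The following is a minimal sketch (our own tuple encoding; we additionally assume the domains of distinct closures are disjoint, so the choice of $\mu_i$ at a variable is unambiguous).

```python
# mu-normalize function for the local calculus.  A closure mu is a
# tuple of (x_i, n_i) pairs; terms are ('var', x), ('lam', x, t),
# ('app', t, u), ('mu', mu, t).
def nu(t, mus=()):
    tag = t[0]
    if tag == 'mu':                        # accumulate the closure vector
        return nu(t[2], mus + (t[1],))
    if tag == 'var':
        for mu in reversed(mus):           # innermost closure first
            if any(x == t[1] for x, _ in mu):
                return ('mu', mu, t)       # nu(vec-mu y) = mu_i y
        return t                           # y not in dom(vec-mu)
    if tag == 'lam':                       # nu(vec-mu (lam x.t)) = lam x. nu(vec-mu t)
        return ('lam', t[1], nu(t[2], mus))
    return ('app', nu(t[1], mus), nu(t[2], mus))
```

For example, $\nu(\{x \mapsto \lambda q.q\}(\lambda y.\ x\ y)) = \lambda y.\ (\{x \mapsto \lambda q.q\}\ x)\ y$: the closure is pushed under the lambda and the application, and survives only on the variable it binds.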


\begin{definition}[$\beta$ Reduction on $\mu$-normal Forms]

\
  
\begin{tabular}{llll}

\infer{n \to_{\beta \mu} \nu(t)}{n \to_{\beta}t}
&
\infer{\lambda x.n \to_{\beta \mu} \lambda x.n'}{n \to_{\beta \mu} n' }

&

\infer{n n' \to_{\beta \mu} n n''}{n' \to_{\beta \mu} n'' }

&

\infer{n n' \to_{\beta \mu} n'' n'}{n \to_{\beta \mu} n'' }

\end{tabular}

\end{definition}


\begin{definition}[Parallelization]

\

\begin{tabular}{lll}

\infer{ n \Rightarrow_{\beta \mu} n}{}

&

 \infer{\mu x_i \Rightarrow_{\beta\mu} \nu(\mu t_i)}{(x_i \mapsto t_i) \in \mu}

&

\infer{(\lambda x.n_1) n_2 \Rightarrow_{\beta\mu} \nu([n_2'/x]n_1')}{  n_1\Rightarrow_{\beta\mu} n_1' & n_2\Rightarrow_{\beta\mu} n_2'}

\\
\\


\infer{\lambda x.n \Rightarrow_{\beta\mu} \lambda x.n'}{n \Rightarrow_{\beta\mu}n' }

&

\infer{n n' \Rightarrow_{\beta\mu} n'' n'''}{n' \Rightarrow_{\beta\mu} n''' & n \Rightarrow_{\beta\mu}n'' }
\end{tabular}
\end{definition}

\begin{lemma}
  $\to_{\beta\mu} \subseteq \Rightarrow_{\beta\mu} \subseteq \to_{\beta\mu}^*$.
\end{lemma}

\begin{lemma}
\label{norm:sub}
If $n_2 \Rightarrow_{\beta\mu} n_2'$, then $\nu([n_2/x]n_1) \Rightarrow_{\beta\mu} \nu([n_2'/x]n_1)$.
\end{lemma}
\begin{proof}
\noindent  By induction on the structure of $n_1$. 

\noindent \textbf{Base Cases}: $n_1= x$ or $n_1 = \mu x_i$: obvious.

\noindent \textbf{Step Case}: $n_1= \lambda y.n$. We have $ \nu(\lambda y.[n_2/x]n) \equiv \lambda y.\nu([n_2/x]n) \stackrel{IH}{\Rightarrow_{\beta\mu}} \lambda y.\nu([n_2'/x]n) \equiv \nu(\lambda y.[n_2'/x]n)$.

\noindent \textbf{Step Case}: $n_1= n\ n'$. We have $ \nu([n_2/x]n\ [n_2/x]n') \equiv \nu([n_2/x]n) \nu([n_2/x]n')\stackrel{IH}{\Rightarrow_{\beta\mu}} \nu([n_2'/x]n) \nu([n_2'/x]n')\equiv \nu([n_2'/x]n\ [n_2'/x]n')$.

\end{proof}

\noindent We use $\dot{\overrightarrow{\mu}}$ to denote zero or more $\mu$s.
\begin{lemma}
\label{norm:iden}
 $\nu(\nu(t)) \equiv \nu(t)$ and $\nu([\nu(t_1)/y] \nu(t_2)) \equiv \nu([t_1/y]t_2)$. 
\end{lemma}
\begin{proof}
We only prove the second equality here. We identify $t_2$ with $\dot{\overrightarrow{\mu_1}}t_2'$,
where $t_2'$ does not contain any closure at head position. We proceed by induction on the structure of $t_2'$:

\noindent \textbf{Base Cases}: For $t_2' = x$, we use $\nu(\nu(t)) \equiv \nu(t)$. 

\noindent \textbf{Step Cases}: If $t_2' = \lambda x.t_2''$, then 

$\nu(\dot{\overrightarrow{\mu_1}}(\lambda x.[t_1/y]t_2'')) \equiv \lambda x.\nu(\dot{\overrightarrow{\mu_1}}([t_1/y]t_2'')) \equiv \lambda x.\nu(\dot{\overrightarrow{\mu_1}}\dot{\overrightarrow{\mu_2}}([t_1/y]t_2'''))$, 

\noindent where we identify $t_2''$ with $\dot{\overrightarrow{\mu_2}} t_2'''$ and $t_2'''$ does not have any closure at head position. Since $t_2'''$ is structurally smaller than $\lambda x.t_2''$, by IH, $\nu(\dot{\overrightarrow{\mu_1}}\dot{\overrightarrow{\mu_2}}([t_1/y]t_2''')) \equiv \nu([t_1/y](\dot{\overrightarrow{\mu_1}}\dot{\overrightarrow{\mu_2}}t_2''')) \equiv \nu([\nu(t_1)/y] \nu(\dot{\overrightarrow{\mu_1}}\dot{\overrightarrow{\mu_2}}t_2'''))$. Thus $\lambda x.\nu(\dot{\overrightarrow{\mu_1}}\dot{\overrightarrow{\mu_2}}([t_1/y]t_2''')) \equiv \lambda x. \nu([\nu(t_1)/y] \nu(\dot{\overrightarrow{\mu_1}}\dot{\overrightarrow{\mu_2}}t_2'''))$. So $\nu([t_1/y]\dot{\overrightarrow{\mu_1}}(\lambda x.t_2'')) \equiv \nu( [\nu(t_1)/y] \nu(\lambda x.\dot{\overrightarrow{\mu_1}}\dot{\overrightarrow{\mu_2}}t_2''')) \equiv \nu( [\nu(t_1)/y] \nu(\lambda x.\dot{\overrightarrow{\mu_1}}t_2'')) \equiv \nu( [\nu(t_1)/y] \nu(\dot{\overrightarrow{\mu_1}}(\lambda x.t_2'')))$.

For $t_2' = t_a\ t_b$, we can argue similarly to the above.

\end{proof}

\begin{lemma}
\label{key}
If $n_1 \Rightarrow_{\beta\mu} n_1'$ and $ n_2 \Rightarrow_{\beta\mu} n_2'$, then $\nu([n_2/x]n_1) \Rightarrow_{\beta\mu} \nu([n_2'/x]n_1')$.
\end{lemma}
\begin{proof}

\noindent We prove this by induction on the derivation of $  n_1 \Rightarrow_{\beta\mu} n_1'$.
  
\begin{itemize}

\item \textbf{Base Case:} \infer{  n \Rightarrow_{\beta \mu} n}{}

\noindent By lemma \ref{norm:sub}.

\item \textbf{Base Case:} \noindent \infer{  \mu x_i\Rightarrow_{\beta\mu} \nu(\mu t_i)}{x_i \mapsto t_i \in \mu}

\noindent Because $y \notin \mathrm{FV}(\mu x_i)$ and $\mu$ is local. 

\item \textbf{Step Case:} \infer{  (\lambda x.n_a) n_b \Rightarrow_{\beta\mu} \nu([n_b'/x]n_a')}{  n_a\Rightarrow_{\beta\mu} n_a' &   n_b\Rightarrow_{\beta\mu} n_b'}


\noindent We have $  \nu((\lambda x.[n_2/y]n_a) [n_2/y] n_b) \equiv (\lambda x.\nu([n_2/y]n_a)) \nu([n_2/y] n_b)$

$ \stackrel{IH}{\Rightarrow_{\beta\mu}} \nu([\nu([n_2'/y] n_b')/x]\nu([n_2'/y] n_a')) \equiv \nu([n_2'/y]([n_b'/x]n_a'))$. The last equality is by lemma \ref{norm:iden}.


\item \textbf{Step Case:}  \infer{  \lambda x.n \Rightarrow_{\beta\mu} \lambda x.n'}{  n \Rightarrow_{\beta\mu}n' }

\noindent We have $  \nu(\lambda x.[n_2/y]n) \equiv \lambda x.\nu([n_2/y]n) \stackrel{IH}{\Rightarrow_{\beta\mu}} \lambda x.\nu([n_2'/y]n') \equiv \nu(\lambda x.[n_2'/y]n') $

\item \textbf{Step Case:} \infer{  n_a n_b \Rightarrow_{\beta\mu} n_a'n_b'}{   n_a\Rightarrow_{\beta\mu} n_a' &   n_b\Rightarrow_{\beta\mu} n_b'}

\noindent We have $  \nu([n_2/y]n_a [n_2/y] n_b) \equiv \nu([n_2/y]n_a) \nu([n_2/y] n_b)$

$ \stackrel{IH}{\Rightarrow_{\beta\mu}} \nu([n_2'/y] n_a') \nu([n_2'/y] n_b')\equiv \nu([n_2'/y](n_a'n_b'))$.
\end{itemize}
\end{proof}

\begin{lemma}[Diamond Property]
\label{diamond}
  If $ n \Rightarrow_{\beta\mu} n'$ and $ n \Rightarrow_{\beta\mu} n''$, then there exists $n'''$ such that $ n'' \Rightarrow_{\beta\mu} n'''$ and $ n' \Rightarrow_{\beta\mu} n'''$. So $\to_{\beta\mu}$ is confluent.
\end{lemma}
\begin{proof}
  \noindent By induction on the derivation of $  n \Rightarrow_{\beta\mu} n'$. 
  \begin{itemize}


\item \textbf{Base Case:} \infer{  n \Rightarrow_{\beta \mu} n}{} and \infer{  \mu x_i\Rightarrow_{\beta\mu} \nu(\mu t_i)}{}

\noindent Obvious. 

\item \textbf{Step Case:} \infer{  (\lambda x.n_1) n_2 \Rightarrow_{\beta\mu} \nu([n_2'/x]n_1')}{   n_1\Rightarrow_{\beta\mu} n_1' &  n_2\Rightarrow_{\beta\mu} n_2'}

\noindent Suppose $  (\lambda x.n_1) n_2 \Rightarrow_{\beta\mu}(\lambda x.n_1'') n_2''$, where $  n_1 \Rightarrow_{\beta\mu}n_1''$ and $  n_2 \Rightarrow_{\beta\mu} n_2''$. By IH, there are $n_1'''$ and $n_2'''$ such that $  n_1' \Rightarrow_{\beta\mu}n_1'''$, $  n_1'' \Rightarrow_{\beta\mu}n_1'''$, $  n_2' \Rightarrow_{\beta\mu} n_2'''$, and $  n_2'' \Rightarrow_{\beta\mu}n_2'''$. By lemma \ref{key}, we have $  \nu([n_2'/x]n_1') \Rightarrow_{\beta\mu} \nu([n_2'''/x]n_1''')$. We also have $  (\lambda x.n_1'') n_2''\Rightarrow_{\beta\mu} \nu([n_2'''/x]n_1''')$.

\noindent Suppose $  (\lambda x.n_1) n_2 \Rightarrow_{\beta\mu}\nu([n_2''/x]n_1'') $, where $  n_1 \Rightarrow_{\beta\mu}n_1''$ and $  n_2 \Rightarrow_{\beta\mu} n_2''$. By lemma \ref{key} and IH, we have $  \nu([n_2'/x]n_1') \Rightarrow_{\beta\mu} \nu([n_2'''/x]n_1''')$ and $  \nu([n_2''/x]n_1'') \Rightarrow_{\beta\mu} \nu([n_2'''/x]n_1''')$.

\noindent The other cases are either similar to the one above or easy.
  \end{itemize}
\end{proof}

\noindent One may also use Takahashi's method~\cite{Takahashi95} to prove the lemma above. We will not explore that here.

\begin{lemma}
\label{vec}
$\nu(\vec{\mu}\vec{\mu}t) \equiv \nu(\vec{\mu}t)$ and $\nu(\vec{\mu} ([t_2/x]t_1)) \equiv \nu( [\vec{\mu} t_2/x]\vec{\mu} t_1)$
\end{lemma}

\dbend
\begin{lemma}
\label{Interp}
If $a \to_{\beta} b$, then $ \nu(a)\to_{\beta\mu} \nu(b)$.
\end{lemma}
\begin{proof}
\noindent  We prove this by induction on the derivation (depth) of $  a \to_{\beta} b$. We list a few non-trivial cases:


\begin{itemize}

\item \textbf{Base Case:} \infer{  \mu x_i \to_{\beta} \mu t_i}{(x_i \mapsto t_i) \in \mu}

\noindent We have $  \nu(\mu x_i) \equiv \mu x_i \to_{\beta\mu} \nu(\mu  t_i)$.

\item \textbf{Base Case:} \infer{  (\lambda x.t)t' \to_{\beta} [t'/x]t}{}

\noindent We have $  \nu((\lambda x.t)t') \equiv (\lambda x.\nu(t))\nu(t') \to_{\beta\mu} \nu([\nu(t')/x]\nu(t)) \equiv \nu([t'/x]t)$.

\item \textbf{Step Case:} \infer{  \lambda x.t \to_{\beta} \lambda x.t'}{  t \to_{\beta}t' }

\noindent By IH, we have $  \nu(\lambda x.t)  \equiv  \lambda x.\nu(t)  \stackrel{IH}{\to_{\beta\mu}} \lambda x.\nu(t') \equiv \nu(\lambda x.t') $. 

\item \textbf{Step Case:} \infer{  \mu t \to_{\beta} \mu t'}{t \to_{\beta}t' }

\noindent We want to show $  \nu(\mu t) \to_{\beta\mu}  \nu(\mu t') $. If $\mathrm{dom}(\mu)\# \mathrm{FV}(t)$, then $  \nu(\mu t) \equiv \nu(t) \stackrel{IH}{\to_{\beta\mu}}  \nu(t') \equiv \nu(\mu t') $. Of course, here we assume that beta-reduction does not introduce any new free variables.

\noindent If $\mathrm{dom}(\mu)\cap \mathrm{FV}(t) \not = \emptyset$, then identify $t$ with $\dot{\overrightarrow{\mu_1}}t''$, where
$t''$ does not contain any closure at head position. We do a case analysis on the structure of $t''$:
\begin{itemize}

\item \textbf{Case.} $t''=x_i$ with $x_i \in \mathrm{dom}(\dot{\overrightarrow{\mu_1}})$ or $x_i \notin \mathrm{dom}(\dot{\overrightarrow{\mu_1}})$: these cases will not arise.

\item\textbf{Case.} $t'' = \lambda y.t_1$. Then it must be that $ t' = \dot{\overrightarrow{\mu_1}}(\lambda y.t_1')$, where $ t_1 \to_{\beta} t_1'$. So
we get $   \mu \dot{\overrightarrow{\mu_1}} t_1 \to_{\beta} \mu \dot{\overrightarrow{\mu_1}}t_1'$. By IH (the depth of $ \mu \dot{\overrightarrow{\mu_1}} t_1 \to_{\beta} \mu \dot{\overrightarrow{\mu_1}}t_1'$ is smaller), we have $  \nu(\mu \dot{\overrightarrow{\mu_1}}t_1) \to_{\beta\mu} \nu(\mu \dot{\overrightarrow{\mu_1}}t_1')$. Thus $  \nu(\mu\dot{\overrightarrow{\mu_1}}(\lambda y.t_1)) \equiv \lambda y.\nu(\mu\dot{\overrightarrow{\mu_1}} t_1) \to_{\beta\mu} \lambda y.\nu(\mu\dot{\overrightarrow{\mu_1}} t_1') \equiv \nu(\mu\dot{\overrightarrow{\mu_1}} (\lambda y.t_1'))$.

\item \textbf{Case.} $t'' = t_1 t_2$ and $t' = \dot{\overrightarrow{\mu_1}}(t_1' t_2)$, where $ t_1 \to_{\beta} t_1'$. We have  $  \mu\dot{\overrightarrow{\mu_1}} t_1 \to_{\beta } \mu\dot{\overrightarrow{\mu_1}} t_1'$. By IH (the depth of $\mu\dot{\overrightarrow{\mu_1}} t_1 \to_{\beta } \mu\dot{\overrightarrow{\mu_1}} t_1'$ is smaller),
$  \nu(\mu\dot{\overrightarrow{\mu_1}} t_1) \to_{\beta \mu} \nu(\mu\dot{\overrightarrow{\mu_1}} t_1')$. Thus $  \nu(\mu\dot{\overrightarrow{\mu_1}}(t_1 t_2)) \equiv \nu(\mu\dot{\overrightarrow{\mu_1}} t_1) \nu(\mu\dot{\overrightarrow{\mu_1}} t_2) \to_{\beta \mu} \nu(\mu\dot{\overrightarrow{\mu_1}} t_1') \nu(\mu \dot{\overrightarrow{\mu_1}}t_2) \equiv \nu(\mu\dot{\overrightarrow{\mu_1}}(t_1' t_2))$.
For the case $t' = \dot{\overrightarrow{\mu_1}}(t_1\ t_2')$, where $  t_2 \to_{\beta} t_2'$, we can argue similarly.

\item \textbf{Case.} $t'' = (\lambda y.t_1)t_2$ and $t' = \dot{\overrightarrow{\mu_1}}([t_2/y]t_1)$. We have $\nu(\mu\dot{\overrightarrow{\mu_1}} ((\lambda y.t_1)t_2)) \equiv (\lambda y.\nu(\mu\dot{\overrightarrow{\mu_1}} t_1))\nu(\mu \dot{\overrightarrow{\mu_1}}t_2)  \to_{\beta\mu} \nu( [\nu(\mu \dot{\overrightarrow{\mu_1}}t_2)/y] \nu(\mu \dot{\overrightarrow{\mu_1}} t_1)) \equiv \nu([\mu\dot{\overrightarrow{\mu_1}} t_2/y] \mu \dot{\overrightarrow{\mu_1}}t_1) \equiv \nu(\mu \dot{\overrightarrow{\mu_1}} [t_2/y]t_1)$ (lemma \ref{vec}).
\end{itemize}
\end{itemize}
\end{proof}

\begin{theorem}
  $\to_{\beta} \cup \to_{\mu}$ is confluent. 
\end{theorem}
\begin{proof}
  By the diamond property of $\Rightarrow_{\beta\mu}$, $\to_{\beta\mu}$ is confluent. Since
$\to_{\mu}$ is strongly normalizing and confluent, by lemma \ref{Interp} and Hardin's
interpretation lemma (lemma \ref{interp}), we conclude that $\to_{\beta} \cup \to_{\mu}$ is confluent.
\end{proof}



\chapter{An Attempt at Expressive Type Theory Through Internalization}
\label{internalization}
This chapter introduces the concept of an \textit{internalization structure}, which
can be used to incorporate certain relations into $\fpi$, a variant of system
\textbf{F}, while maintaining termination of the new system. We will call this
process of incorporation \textit{internalization}, $\fpi$ the \textit{base
system} and the new system after the incorporation the \textit{internalized
system}. We first specify the syntax, and then the semantics of $\fpi$ via the
Tait-Girard reducibility method (Section~\ref{sec:base}). We then define 
internalization structure (Section~\ref{sec:intstruct}). We show that we can obtain a terminating internalized system from an internalization structure (Section~\ref{sec:intsys}).
As motivating examples, we demonstrate how our framework can be
applied to internalize subtyping, full-beta term equality 
and term-type inhabitation relations (Section~\ref{sec:example}). Finally, we discuss some of the difficulties in Section~\ref{sec:related}.


\section{Background}
 Type systems often incorporate auxiliary judgments in their typing relations.
For example,
the subsumption rule for subtyping:

\[
\infer[\textit{sub}]{\Gamma \vdash t:T'}{\Gamma \vdash t:T & T<:T'}
\]
Likewise, the conversion rule for type-equivalence:  


\[
\infer[\textit{conv}]{\Gamma \vdash t:T'}{\Gamma \vdash t:T & T \equiv T'}
\]

\noindent We propose a framework for incorporating meta-level relations such
as $<:$ and $\equiv$ as types in the type system, and show that such extensions yield
terminating systems under call-by-name reduction. We call the deduction systems producing auxiliary judgments \textit{metasystems}, and refer to the typing rules that modify types based on such metasystem judgments as \textit{automatic conversion rules}. We will also consider cut-down type systems without automatic conversion rules, which we call \textit{base systems}. For instance, the subtyping and type-equivalence derivation systems are metasystems, and the rules \textit{sub} and \textit{conv} are automatic conversion rules. We will discuss a variant of System \textbf{F}, which is the base system for our extensions.


Type structure can be used to reflect metasystem
judgments. Indeed this has been done in several languages: 
equality sets in Martin-L\"of type theory enable reasoning about equality relations~\cite{nordstrom90:programming-tt}; Sj\"oberg and Stump's $T^{\texttt{vec}}$ uses types to reflect call-by-value term equality in the presence of divergence~\cite{sjoberg+10}; and
the AuraConf language uses proofs of type $e\ \mathbf{isa}\ t$
to indicate that expression $e$ may be cast to type $t$~\cite{vau09}.

%% If we view the base system as a kind of constructive logic, then once we internalized metasystem judgments as types, we can reason about these judgments using the logic provided by the base system. %% Moreover, using features of the internalized system we can derive
%% new, admissible judgments that may not derivable in either the base system or the metasystem alone.  Later we will see how the internalized system
%% makes this possible.





%The internalization framework is based on
%Tait-Girard reducibility
%~\cite{Girard1989}.
%Here all types are given an interpretation as sets of terms,
%and such \textit{reducibility
%sets} possess certain semantic properties.
%That each typable term is a member of the appropriate reducibility set
%corresponds to type soundness in set-theoretic interpretation of
%types~\cite{Mit96}.
%So to give interpretations to metasystem judgments,
%it is natural to identify each judgment with a set-theoretic
%relation between reducibility sets.  Take subtyping as an example; we
%interpret subtype judgment \(<:\) as the mathematical subset relation
%\(\subseteq\)
%on reducibility sets.  This follows a natural 
%view of subtyping proposed by Rehof~\cite{Rehof1996}. 



\section{The Base system $\fpi$}
\label{sec:base}
\noindent Internalization builds off of base system $\fpi$,
a variant of system \textbf{F}.
\begin{definition}[Syntax and Reductions]

  \
  
\noindent Types $ T \ ::=\ B \ | \ X  \ |  \ \Pi x:T.T \ | \ \forall X.T$

 \noindent Terms $t,u \ ::= \mathsf{axiom} \ | \ x \ | \ (t \ t) \ | \ \lambda x.t$

\noindent Contexts $\mathcal{C}\ ::= \ \cdot \ | \ \mathcal{C} \ t$

\noindent Values $v \ ::=\ \lambda x.t \ | \ \mathsf{axiom}$

\noindent Reductions $\mathcal{C}[(\lambda x.t) \ t'] \leadsto \mathcal{C}[[t'/x]t]$
  
\end{definition}

\noindent Note that we use a call-by-name reduction strategy: the contexts $\mathcal{C}$ only permit reduction at the head of an application spine, and arguments are substituted unevaluated.
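To make the reduction strategy concrete, the following Python sketch implements weak head call-by-name reduction for the term grammar above. The tuple encoding and the names \texttt{subst}, \texttt{cbn\_step}, and \texttt{cbn\_eval} are our own illustration, not part of the formal development; bound variables are assumed distinct, so substitution need not rename.

```python
# Hypothetical tuple encoding of fpi terms:
#   ("axiom",)  |  ("var", x)  |  ("lam", x, body)  |  ("app", t, u)

def subst(t, x, u):
    """[u/x]t; we assume bound names are distinct, so no renaming is needed."""
    tag = t[0]
    if tag == "var":
        return u if t[1] == x else t
    if tag == "lam":
        return t if t[1] == x else ("lam", t[1], subst(t[2], x, u))
    if tag == "app":
        return ("app", subst(t[1], x, u), subst(t[2], x, u))
    return t  # axiom

def cbn_step(t):
    """One step of C[(lam x.t) t'] ~> C[[t'/x]t]; None if no head redex."""
    if t[0] == "app":
        fun, arg = t[1], t[2]
        if fun[0] == "lam":        # contract the head redex, argument unevaluated
            return subst(fun[2], fun[1], arg)
        fun2 = cbn_step(fun)       # otherwise descend into the spine context C t
        if fun2 is not None:
            return ("app", fun2, arg)
    return None                    # values (lam, axiom) and variables are stuck

def cbn_eval(t, fuel=1000):
    """Iterate cbn_step until no step applies (or give up after `fuel` steps)."""
    for _ in range(fuel):
        t2 = cbn_step(t)
        if t2 is None:
            return t
        t = t2
    raise RuntimeError("no weak head normal form within fuel bound")
```

Note that a diverging argument such as $\Omega$ is discarded unevaluated by a constant function, which matches the motivation for call-by-name discussed later in this section.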

\begin{definition}[Kinding]
 \fbox{$\Gamma \vdash \mathsf{OK}$} 
  
  \begin{tabular}{ccccc}
		
    \infer{\cdot \vdash \mathsf{OK}}{}

&
\ \ 
&
 \infer{\Gamma,X  \vdash \mathsf{OK}}{\Gamma \vdash \mathsf{OK}}

&
\ \ 
&
 \infer{\Gamma, x:T \vdash \mathsf{OK}}{\Gamma \vdash \mathsf{OK} \ & \  \mathrm{FVar}(T)\subseteq \mathrm{dom}(\Gamma)}


  \end{tabular}

\end{definition}

\noindent $\mathrm{FVar}(T)$ denotes the set of free type variables and free
term variables in type $T$, and $\mathrm{dom}(\Gamma)$ denotes the domain of the context:
$e \in \mathrm{dom}(\Gamma)$ iff $e$ is either a type variable such that
$\Gamma\equiv \Gamma_1,e,\Gamma_2$, or a term variable such that $\Gamma\equiv \Gamma_1,e:T,\Gamma_2$.
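As an illustration, $\mathrm{FVar}$ restricted to base-system types can be computed as follows. The encoding is a hypothetical sketch of our own; since base-system types contain no term occurrences, only free type variables arise here (the term-variable component of $\mathrm{FVar}$ becomes relevant only for the extended types introduced later).

```python
def fvar(T):
    """Free type variables of a base-system type, as a set of names.
    Encoding: ("base", b) | ("tvar", X) | ("pi", x, T1, T2) | ("all", X, T)."""
    tag = T[0]
    if tag == "base":
        return set()
    if tag == "tvar":
        return {T[1]}
    if tag == "pi":                     # Pi x:T1.T2 binds a *term* variable x,
        return fvar(T[2]) | fvar(T[3])  # which never occurs in a base-system type
    if tag == "all":                    # forall X.T binds the type variable X
        return fvar(T[2]) - {T[1]}
    raise ValueError("not a type: %r" % (T,))
```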

\begin{definition}[Typing]  \fbox{$\Gamma \vdash t: T$}

\begin{tabular}{ll}
		
\infer[{\textit{$\Pi$-intro}}]{\Gamma\vdash \lam{x}{t} : \Pi x:T_1.T_2}{\Gamma,x:T_1 \  \vdash t:T_2}

& 
    \infer[{\textit{Var}}]{\Gamma\vdash x : T }{(x:T) \in \Gamma & \Gamma \vdash \mathsf{OK} } 


\\

\\

\infer[{\textit{$\Pi$-elim}}]{\Gamma\vdash t_1\ t_2 : [t_2/x]T_2}
      {\Gamma\vdash t_1 : \Pi x:T_1.T_2 \  &
       \Gamma\vdash t_2 : T_1 }
       
&

\infer[{\textit{$\forall$-intro}}]{\Gamma \vdash t:\forall X.T}{\Gamma,X \vdash t:T}

\\
\\

\infer[{\textit{$\forall$-elim}}]{\Gamma \vdash t:[T'/X]T}{\Gamma \vdash t:\forall X.T & \mathrm{FVar}(T') \subseteq \mathrm{dom}(\Gamma)}

  \end{tabular}

\end{definition}




\noindent The differences between \textbf{F} and $\fpi$ are as follows: 

\begin{enumerate}
  \item $\fpi$ is parametrized by a finite set $B$ of constant types and
        it contains constant terms like $\mathsf{axiom}$. Later, we will use $\mathsf{axiom}$ to inhabit special types.
  
  \item The notion of value is extended by including constant terms.
  
  \item $\fpi$ uses dependent product $\Pi$ instead of arrow $\to$ as the
  function type constructor anticipating the use of types that mention terms.  

\end{enumerate}

\noindent A word about the use of call-by-name reduction is warranted.  The main
result of this chapter is normalization for systems derived by
internalization from the base system $\fpi$.  Strong normalization does
not hold for all such systems, as we show by example in
Section~\ref{sec:exampleSubtyping}.  So (weak) normalization is all
that we can obtain.  An interesting result of our investigation into
internalization is that normalization with respect to call-by-name
reduction imposes fewer requirements on internalization structures
than with call-by-value reduction.  Specifically, the
$\lambda$-abstraction case of the proof of Theorem~\ref{typesoundness}
goes through more directly using call-by-name reduction; with
call-by-value reduction, dependent typing imposes additional
restrictions. 


\subsection{Interpretation of Types in $\fpi$}

\noindent Reducibility is a well-known technique for proving the normalization
of type systems such as \textbf{F}. In this chapter, we use it to
interpret $\fpi$'s types. Reducibility will
both provide intuition for $\fpi$'s semantics and yield a normalization result.

\begin{definition}
A reducibility candidate $\mathcal{R}$ is a set of terms that satisfies the following conditions:

\noindent \textbf{CR 1} If $t \in \mathcal{R}$, then $t \in \mathcal{V}$, where $\mathcal{V}$ is the set of closed terms that reduce to a value. 

\noindent \textbf{CR 2} If $t \in \mathcal{R}$ and $t \leadsto t'$, then $t' \in \mathcal{R}$.

\noindent \textbf{CR 3} If $t$ is a closed term, $t \leadsto t'$, and $t' \in \mathcal{R}$, then $t \in \mathcal{R}$.

\end{definition}  


\begin{definition}
  \label{envfun}
 Let $\mathfrak{R}$ be the set of all reducibility candidates. Let $\textit{TVar}$ be the set of all type variables. Let $\phi$ be a finite function with $\mathrm{dom}(\phi)\subseteq \textit{TVar}$ and $\mathrm{range}(\phi) \subseteq \mathfrak{R}$. If $\mathrm{dom}(\phi)=\{X_1,X_2,\ldots,X_n\}$, then we usually write $\phi$ as $[\mathcal{R}_1/X_1,\ldots,\mathcal{R}_n/X_n]$. 
  
\end{definition}
 
\begin{definition}[Interpretation of Types]

  \
  
\noindent $t \in \interp{B}_\phi$ iff $t \in \mathcal{R}_B$, where $\mathcal{R}_B \in \mathfrak{R}$.

\noindent $t \in \interp{X}_\phi$ iff $t \in \phi(X)$.

\noindent $t \in \interp{\Pi x:T_1.T_2}_\phi$ iff $t \in \mathcal{V}$ and $(\forall u \in \interp{T_1}_\phi \Rightarrow (t \ u) \in \interp{[u/x]T_2}_\phi)$.

\noindent $t \in \interp{\forall X.T}_\phi$ iff $\forall \mathcal{R} \in \mathfrak{R}, t \in  \interp{T}_{\phi[\mathcal{R}/X]}$. 
\end{definition}

\noindent Note that the constant types $B$ and their interpretations $\mathcal{R}_B$ are
left unspecified; these may be filled in later. For any $\interp{T}_\phi$, let $\mathrm{FV}(T)$ be the set of free type variables in $T$; we assume $\mathrm{FV}(T) \subseteq \mathrm{dom}(\phi)$. 
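As a sanity check on these clauses, one can verify directly that the identity term inhabits the interpretation of the polymorphic identity type:
\[
\lambda x.x \in \interp{\forall X.\Pi x:X.X}_\phi.
\]
Indeed, fix any $\mathcal{R} \in \mathfrak{R}$ and any $u \in \interp{X}_{\phi[\mathcal{R}/X]} = \mathcal{R}$. Then $(\lambda x.x)\ u \leadsto u \in \mathcal{R}$, and $(\lambda x.x)\ u$ is closed because $u$ is (by \textbf{CR 1}); hence $(\lambda x.x)\ u \in \mathcal{R} = \interp{[u/x]X}_{\phi[\mathcal{R}/X]}$ by \textbf{CR 3}. Since $\lambda x.x \in \mathcal{V}$, the clauses for $\Pi$ and $\forall$ give the claim.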

\subsection{Type Soundness}
\label{sec:tpsnd}

\noindent The theorem below shows that any typable closed term is
normalizing, and can be shown in a standard way using Tait-Girard
reducibility (cf.~\cite{Girard:1989}).  Several properties of the
interpretation of types are required, which can all be proved by
induction on the structure of types in $\fpi$. 


\begin{lemma}
\label{lemma:fpinterptype}
\noindent $\interp{T}_\phi \in \mathfrak{R}$; in other words, the interpretation of a type is indeed a reducibility candidate.
\end{lemma}
\begin{comment}
\begin{lemma}
\label{lemma:subst}
\noindent Let $\textit{Sub}$ be the set of all capture avoiding term-level substitutions with a domain of term variables and a range of terms that are in $\mathcal{V}$. $\forall \sigma \in \textit{Sub},\interp{\sigma T}_\phi=\interp{T}_\phi$.
\end{lemma}

\noindent Since $\fpi$ essentially is system \textbf{F}, it does not contain terms in the types, we have $\sigma T \equiv T$ in $\fpi$, thus this lemma is true. 

%\begin{lemma}[Dependent lemma]
%\label{lemma:dep}
%\noindent If $t \leadsto t'$, then $\interp{[t/y]T}_\phi = \interp{[t'/y]T}_\phi$.
%\end{lemma}
%
%\noindent While this is a trivial consequence of
%Lemma~\ref{lemma:subst} for the base system, it will require more
%significant proof for the internalized system below.
\begin{lemma}[Substitution lemma]
\label{lemma:substui}
$\interp{[T'/X]T}_\phi=\interp{T}_{ \phi[\interp{T'}_\phi /X]}$.
\end{lemma}
\end{comment} 
\begin{definition}
We define the set $[\Gamma]$ of well-typed substitutions $(\sigma,\delta)$ w.r.t. $\Gamma$ as follows:
$
\begin{array}{lllll}
\infer{(\emptyset,\emptyset) \in [.]}
      { }
      &\ \  &
\infer{(\sigma,\delta \cup \{ (X,\mathcal{R})\}) \in [\Gamma,X]}
      {(\sigma,\delta) \in [\Gamma] & \mathcal{R} \in \mathfrak{R}}

&\ \ &
\infer{(\sigma \cup \{ (x,t)\},\delta) \in [\Gamma,x:T]}
      {(\sigma,\delta) \in [\Gamma] & t \in \interp{\sigma T}_\delta}
\end{array}
$

\end{definition}
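\noindent For instance, unfolding the rules, $(\sigma,\delta) \in [X,\,x:X]$ holds exactly when $\delta$ maps $X$ to some $\mathcal{R} \in \mathfrak{R}$ and $\sigma$ maps $x$ to some term $t \in \interp{X}_\delta = \mathcal{R}$.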

\begin{theorem}[Type Soundness] 
\label{typesoundness}
If $\Gamma \vdash t:T$, then $\forall (\sigma,\delta) \in [\Gamma], 
(\sigma \ t) \in \interp{\sigma T}_\delta$.
\end{theorem}

\section{Internalization Structure}
\label{sec:intstruct}
An internalization structure is a triple
$(D,E,\mathcal{I})$.
The \textit{reflective relational sentences} $D$
define the syntax of metasystem propositions and identify valid metasystem
judgments. The \textit{elimination relation} $E$ defines automatic conversion rules
based on judgments from $D$. Finally, the \textit{interpretation} $\mathcal{I}$ gives semantics to reflective relational sentences as relations over sets of terms of the base system.
Every internalization structure requires that
$D$ and $E$ be \emph{sound}. As the central result of our work, we show that any
internalized system constructed from a sound internalization
structure is guaranteed to be terminating. An internalization structure records how to construct reflective relational sentences and how these sentences interact with the base system, and it gives the sentences meaning through the interpretation of types in the base system. Once we obtain a sound internalization structure, we can carry out internalization by first incorporating the reflective relational sentences as types, and then adding two new typing rules to handle them. 
 
\subsection{Reflective Relational Sentences-$D$}

\noindent We define the kind of judgments or relations that could be integrated into the base system. Essentially these are the relations on the terms 
and types from the base system. 

\begin{definition}
\noindent Let \textit{signature} $\Sigma \subseteq \mathbf{Symbols} \times \mathbb{N} \times \mathbb{N}$, where $\mathbf{Symbols}$ is a set of relation symbols and $\mathbb{N}$ is the set of natural numbers. We write $R^{(n\times m)} \in \Sigma$ to mean that $R \in \mathbf{Symbols}$ and the arity of $R$ is $n+m$. 

\end{definition}
\begin{definition}
\noindent A \textit{relational sentence on the base system} is a syntactic
object of the form
$R^{(n\times m)}(t_1,...,t_n,T_1,...,T_m)$, where the $t_i,T_j$ are defined in $\fpi$ and
$R^{(n\times m)} \in \Sigma$.
\end{definition}

\begin{definition}
\label{reflectivesen}
\noindent Let $\mathfrak{A}$ be the set of all relational sentences.
A set of \textit{reflective relational sentences} $D$ is a subset of the
relational sentences, i.e., $D \subseteq \mathfrak{A}$.

\end{definition}

 Reflective relational sentences are used to formalize a metasystem's
derivable judgments. When we specify exactly how to
recognize the reflective relational sentences among the relational sentences, we
obtain a kind of metasystem. This metasystem need not be recursive. 

\subsection{Elimination Relation-$E$}

\noindent An elimination relation is a syntactic constraint used to specify how the metasystem influences the base system. We will appeal to an elimination
relation when we add the elimination rule for the reflective relational sentences to the base system. Since the elimination relation is used after internalizing
reflective relational sentences as types, we need to extend the definitions of types and contexts accordingly.

\begin{definition}
\label{extendtype}
\noindent We define \textit{extended types} and \textit{extended contexts} as follows:

\noindent $\mathbf{RTypes} \ A \ ::= \ R^{(n\times m)}_1(t_1,...,t_n,T_1,...,T_m) \ | \ ... \ | \  R^{(n\times m)}_l(t_1,...,t_n,T_1,...,T_m)$

\noindent $\mathbf{ETypes}\ S\ ::= A \ | \ B \ | \ X  \ |  \ \Pi x:S.S \ | \ \forall X.S$

\noindent $\mathbf{EContext}\ \Delta\ ::=\ \cdot \ | \ \Delta, x:S \ | \ \Delta,X$ 

\end{definition}

 %% We can see that the extended types and the extended context defined above are really extended in the sense of adding the \textit{relational sentences} as new types.
\begin{definition}

\noindent We specify an elimination relation $E$ by: \\ $E \subseteq \mathbf{EContext} \times \mathbf{Terms} \times \mathbf{Terms} \times \mathfrak{A} \times \mathbf{ETypes} \times \mathbf{ETypes}$.

\end{definition}

\noindent For example, when we consider the specific internalization structure for subtyping below, we will define an elimination relation where $(\Delta,t,t',T<T',T,T') \in E$ holds iff, in the extended context $\Delta$, $t$ has type $T$ and $t'$ has type $T<T'$; in that case the type of $t$ may be changed to $T'$. 

\subsection{Interpretation-$\mathcal{I}$}

\noindent We defined the interpretation of the types of $\fpi$ above. Since the interpretation of a type is a set of terms and the reflective relational
sentences are relations between terms and types of $\fpi$, it is natural to understand the meaning of these reflective relational sentences as set-theoretic
relations between interpretations of types. Take subtyping as an example; we
interpret the subtype judgment $<:$ as the subset relation
$\subseteq$ on interpretations of types\footnote{This is also observed by Rehof~\cite{Rehof1996}.}. Interpretation-$\mathcal{I}$ is defined to capture this intuition. Later we will relate
interpretation-$\mathcal{I}$ to the reflective relational sentences and the elimination relation through two soundness properties.

\begin{definition}
\noindent  Let $\mathfrak{R}$ be the set of all reducibility candidates as defined in $\fpi$. We define an interpretation of $R^{(n\times m)}$--$\mathcal{I}_{R^{(n\times m)}}$ to be $\mathcal{I}_{R^{(n\times m)}} \subseteq \mathbf{Terms}^n \times \mathfrak{R}^m$. 
\end{definition}

\subsection{Soundness Properties}

\noindent Now that we have defined all parts of an internalization structure, we can formulate two soundness properties for an internalization structure. Since one of the soundness properties is related to the extended types, we first define the interpretation for extended types. Then we identify the soundness properties. 

\begin{definition} Let $\phi$ be an environment function w.r.t. type $S$, defined in the same way as in Definition~\ref{envfun} except extended to types $S$. 
Let $\mathcal{A}$ be the set of closed terms that normalize to $\mathsf{axiom}$. 
The interpretation of types $\interp{S}_\phi$ is defined inductively as follows:
\begin{itemize}

\item $t \in \interp{B}_\phi$ iff $t \in \mathcal{R}_B$.

\item $t \in \interp{R^{(n\times m)}(t_1,...,t_n,T_1,...,T_m)}_\phi$ iff $t \in \mathcal{A}$ and $(t_1,...,t_n,\interp{T_1}_\phi,...,\interp{T_m}_\phi) \in \mathcal{I}_{R^{(n\times m)}}$. 

\item $t \in \interp{X}_\phi$ iff $t \in \phi(X)$.


\item $t \in \interp{\Pi x:S_1.S_2}_\phi$ iff $t \in \mathcal{V} $ and $(\forall u \in \interp{S_1}_\phi \Rightarrow (t \ u) \in \interp{[u/x]S_2}_\phi)$.


\item $t \in \interp{\forall X.S}_\phi$ iff $\forall \mathcal{R} \in \mathfrak{R}, t \in  \interp{S}_{\phi[\mathcal{R}/X]}$.


\end{itemize}
\end{definition}

\noindent We define $(\sigma,\delta) \in [\Delta]$ in the same way as $(\sigma,\delta) \in [\Gamma]$, except with extended contexts and extended types.

\begin{definition} We say a tuple $\langle D,E,\mathcal{I}\rangle$ is an internalization structure iff it satisfies the following soundness properties:

\noindent Soundness of reflective relational sentences:

\noindent If $R^{(n\times m)}(t_1,...,t_n,T_1,...,T_m) \in D$, then $\forall \phi, \sigma, (\sigma t_1,...,\sigma t_n,\interp{\sigma T_1}_\phi,...,\interp{\sigma T_m}_\phi) \in \mathcal{I}_{R^{(n\times m)}}$.

\noindent Soundness of the elimination relation:

\noindent Suppose $(\Delta,t,t',R^{(n\times m)}(t_1,...,t_n,T_1,...,T_m),S,S') \in E$, $(\sigma,\delta) \in [\Delta]$, $\sigma(t) \in \interp{\sigma S}_\delta$ and $R^{(n\times m)}(t_1,...,t_n,T_1,...,T_m)\in D$.  Then $\sigma(t) \in \interp{\sigma S'}_\delta$.  

\end{definition}

 \textit{Soundness of reflective relational sentences} means that the
reflective relational sentences are
a conservative approximation of
interpretation-$\mathcal{I}$. \textit{Soundness of the elimination relation}
implies that the elimination rule of the internalized system
respects the Girard-Tait type interpretation and is semantically compatible
with the substitutions that arise during call-by-name evaluation.

\section{Internalized System}
\label{sec:intsys}

\noindent We have defined the internalization structure $(D,E,\mathcal{I})$. Using an internalization structure, we can construct
a new system--we call it the internalized system--from the internalization structure and $\fpi$. The term syntax and operational semantics of the internalized system are the same as those of $\fpi$, while the syntax of types and contexts is given by $\mathbf{RTypes},\mathbf{ETypes},\mathbf{EContexts}$ in Definition~\ref{extendtype}. The well-formedness judgment $\Delta \vdash \mathsf{OK}$ for extended contexts is defined just as before, except using $\mathbf{EContexts}$. 

\begin{definition}
\fbox{$\Delta \vdash t : S$}  

\

\begin{tabular}{ccc}

&
  
\infer[{\textit{A-intro}}]{\Delta \vdash \mathsf{axiom} : A}
      { A \in D & \mathrm{FVar}(A) \subseteq \mathrm{dom}(\Delta) &\Delta \vdash \mathsf{OK}}

&
      \\
      \\

      &
      
      \infer[{\textit{Var}}]{\Delta\vdash x : S }{\Delta(x) = S & \Delta \vdash \mathsf{OK} } 

  &
\\

\\
&

\infer[{\textit{A-elim}}]{\Delta \vdash t:T'}{\Delta \vdash t:T & \Delta \vdash t':A & E(\Delta,t,t',A,T,T')}

       
&

\\
\\

&

\infer[{\textit{$\Pi$\_intro}}]{\Delta\vdash \lam{x}{t} : \Pi x:S_1.S_2}
      {\Delta,x:S_1 \  \vdash t:S_2}

&
\\
\\
&
\infer[{\textit{$\Pi$\_elim}}]{\Delta\vdash t_1\ t_2 : [t_2/x]S_2}
      {\Delta\vdash t_1 : \Pi x:S_1.S_2 \  &
       \Delta\vdash t_2 : S_1 }


&
      
 \\
 \\
 
 &
 
\infer[{\textit{$\forall$\_intro}}]{\Delta \vdash t:\forall X.S}{\Delta,X \vdash t:S}

&

\\
\\

&
\infer[{\textit{$\forall$\_elim}}]{\Delta \vdash t:[S'/X]S}{\Delta \vdash t:\forall X.S & [S'/X]S \in \mathbf{ETypes} & \mathrm{FVar}(S') \subseteq \mathrm{dom}(\Delta)}
&
\end{tabular}

\end{definition}	
\noindent  We can see that the new type assignment system contains two new rules: $A$-intro and $A$-elim. The $A$-intro rule introduces reflective relational sentences as types in the internalized system, while the $A$-elim rule uses the reflective relational sentences to change the type of a term accordingly. The theorem below guarantees that the internalized system generated from $\fpi$ and an internalization structure is \textit{terminating}, which is the central result of internalization. 

\begin{theorem}[Type Soundness]
\label{soundness}
If $( D,E,\mathcal{I})$ is an internalization structure and $\Delta \vdash t:S$, then $\forall (\sigma,\delta) \in [\Delta], 
(\sigma \ t) \in \interp{\sigma S}_\delta$.
\end{theorem}


\begin{corollary}\label{cor:IntSysNorm}
If $\cdot \vdash t:S$, then $t \in \mathcal{V}$.
\end{corollary}

\noindent Because typing contexts may introduce spurious assumptions, some open contexts
may assign a type to a diverging term;
Section~\ref{sec:exampleSubtyping}
gives an example.  This is an expected outcome of reasoning from invalid
premises.  Indeed, Corollary~\ref{cor:IntSysNorm} may be strengthened to allow
contexts in which all variables are classified by inhabited types.  


\section{Examples}
\label{sec:example}

\noindent In the previous section, we encapsulated the development of an internalized system as the construction of a sound internalization structure. Now let us see how we can apply our formalization of internalization
to internalize the subtyping, full-beta term equality, and term-type inhabitation
relations as types. First, we specify an instance of $\fpi$. Namely, we
instantiate constant types as $B::=\top \ | \ \bot$. Additionally, we define
$\interp{\bot}_\phi:=\emptyset, \interp{\top}_\phi:= \mathcal{V}$. 
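It is routine to check that both assignments are reducibility candidates: $\emptyset$ satisfies \textbf{CR 1}--\textbf{CR 3} vacuously, while for $\mathcal{V}$, \textbf{CR 1} is immediate, and \textbf{CR 2} and \textbf{CR 3} hold because, reduction being deterministic, a closed term reduces to a value if and only if its reduct does.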

Recall that internalization works as follows. We first define the set
of reflective relational sentences, containing all the derivable judgments
of subtyping, full-beta term equality, and term-type inhabitation. Then
we define the elimination relation and the interpretation. We show that the
resulting internalization structure is sound. Finally, we present
the internalized system as the result of internalization.   
We will follow this recipe in the sequel.

\subsection{Subtyping}
\label{sec:exampleSubtyping}

\noindent We need to instantiate the three parts of the internalization structure $\langle D,E,\mathcal{I} \rangle$. First, we specify $\Sigma :=\{<^{(0\times 2)}\}$. Then
we know all reflective relational sentences have the form $T_1<T_2$. We identify the reflective relational sentences $D$ as follows:

\begin{definition}
  \fbox{$T < T' \in D$}
  
  \
  
\begin{tabular}{lll}
\infer{ T<\top  \in D}{}

&

\infer{ \bot<T  \in D}{}

&
\infer{ X<X  \in D}{}

\\
\\

\infer{\forall X.T_1 <\forall X.T_2  \in D}{T_1 <T_2  \in D}

&

\infer{ \Pi x:T_1.T_2 <\Pi x:T_1'.T_2'  \in D}{ T_1' <T_1  \in D & T_2 <T_2' \in D}

 \\
  \end{tabular}
\end{definition}
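Since the rules defining $D$ above are syntax-directed and structurally decreasing, membership $T<T' \in D$ is decidable. The following Python sketch implements the check; the tuple encoding is our own hypothetical illustration, and bound type variables are assumed to be named consistently (matching the $\forall X.T_1 < \forall X.T_2$ rule, which uses the same binder on both sides).

```python
def subtype(T1, T2):
    """Check T1 < T2 ∈ D for the five rules above.
    Types: ("top",) | ("bot",) | ("tvar", X) | ("pi", x, T1, T2) | ("all", X, T)."""
    if T2[0] == "top" or T1[0] == "bot":        # T < ⊤   and   ⊥ < T
        return True
    if T1[0] == "tvar" and T2[0] == "tvar":     # X < X
        return T1[1] == T2[1]
    if T1[0] == "all" and T2[0] == "all" and T1[1] == T2[1]:
        return subtype(T1[2], T2[2])            # ∀X.T1 < ∀X.T2
    if T1[0] == "pi" and T2[0] == "pi":         # contravariant domain, covariant codomain
        return subtype(T2[2], T1[2]) and subtype(T1[3], T2[3])
    return False
```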

 \noindent We can see that the way we identify $D$ is similar to the way we write subtyping rules. Now we define $(\Delta,t,t',T< T',T,T') \in E$. The meaning of this elimination relation is that if $t$ has type $T$ in context $\Delta$ and $t'$ has type $T<T'$, then $t$ may also be given
the type $T'$. We define $\mathcal{I}_{<}:=\{( \mathcal{R}_1,\mathcal{R}_2)\ | \ \mathcal{R}_1  \subseteq  \mathcal{R}_2\}$; that is, $\mathcal{I}_{<}$ captures the subset relation on reducibility candidates. The following two lemmas ensure that we obtain a sound internalization structure from the $( D,E,\mathcal{I}_{<})$ defined above. 

\begin{lemma}[Soundness of the Reflective Relational Sentence]
\label{soundsub}
\noindent If $(T < T') \in D$, then $ \forall \sigma,\forall \phi,(\interp{\sigma T}_\phi,\interp{\sigma T'}_\phi) \in \mathcal{I}_{<}$.
\end{lemma}

\begin{proof} Since $\interp{\sigma T}_\phi=\interp{T}_\phi$, we just need to show: If $(T < T') \in D$, then $ \forall \phi,(\interp{T}_\phi,\interp{ T'}_\phi) \in \mathcal{I}_{<}$. We will prove this by induction on the structure of $T$.
  \begin{itemize}

\item \textbf{Case}: $T=\top$ or $T=\bot$

\noindent By inversion, it holds. 

\item \textbf{Case}: $T=X$

\noindent By inversion, we know $T'=X$ or $T'=\top$; again, the claim holds.

\item \textbf{Case}: $T=\Pi x:T_1.T_2$

\noindent By inversion, $T'=\top$ or $T'=\Pi x:T_1'.T_2'$. Let us consider $T'=\Pi x:T_1'.T_2'$. In this case, by inversion, $ T_1'<T_1 \in D, T_2<T_2'\in D$. By IH, we have $ \interp{ T_1'}_\phi \subseteq \interp{T_1}_\phi$. Again, by IH, we have $ \interp{T_2}_\phi \subseteq \interp{ T_2'}_\phi$. For any $u \in \interp{ T_1'}_\phi \subseteq \interp{ T_1}_\phi$, if $t \in \interp{\Pi x: T_1. T_2}_\phi$, we have $tu \in \interp{[u/x] T_2}_\phi= \interp{ T_2}_\phi \subseteq \interp{ T_2'}_\phi$. So $t \in \interp{\Pi x:T_1'.T_2'}_\phi$.

\item \textbf{Case}: $T=\forall X.T_1$

\noindent By inversion, $T'=\top$ or $T'=\forall X.T_1'$. Let us consider $T'=\forall X.T_1'$. By inversion, we 
know $T_1<T_1' \in D$. For $t \in \interp{\forall X. T_1}_\phi$ we have $\forall \mathcal{R} \in \mathfrak{R},t \in \interp{ T_1}_{\phi[\mathcal{R}/X]}$. By the IH, $\interp{ T_1}_{\phi[\mathcal{R}/X]} \subseteq \interp{ T_1'}_{\phi[\mathcal{R}/X]}$. So $t \in \interp{\forall X.T_1'}_\phi$.

 \end{itemize}
\end{proof}

\begin{lemma}[Soundness of the Elimination Relation]

\noindent If $(\Delta,t,t',T_1 < T_2,T_1,T_2) \in E$, $(\sigma,\delta) \in [\Delta]$ and $\sigma(t) \in \interp{\sigma T_1}_\delta=\interp{T_1}_\delta$ and $T_1 < T_2 \in D$, then $\sigma(t) \in \interp{\sigma T_2}_\delta=\interp{T_2}_\delta$.
\end{lemma}


\noindent The subtyping setting also shows that diverging terms are possible in the internalized system under open
contexts and full-beta reduction (we write $T \to T'$ for the non-dependent $\Pi x{:}T.T'$). It is possible to derive
$y: (\top < (\top \to \top)) \vdash (\lambda x.xx)(\lambda x.xx):\top$,
using the underivable fact 
$\top < (\top \to \top)$ together with the derivable $(\top \to \top)<\top$
to establish an isomorphism between the types $\top$ and $\top \to \top$.
Sticking to closed terms means we need not worry about this derivation 
directly.  And call-by-name evaluation ensures that
$\cdot \vdash \lambda y. (\lambda x.xx)(\lambda x.xx): (\top < (\top \to \top)) \to \top$
does not reduce; in contrast, full reduction would loop.


\subsection{Term Equality and Term-Type Inhabitation}

\noindent We can go even further in exploring internalization structures. We
add two more relation symbols to the signature, so that
$\Sigma=\{\downarrow^{(2\times 0)},<^{(0\times 2)},\triangleleft^{(1\times 1)}\}$. For
simplicity, we usually do not write the arities. The relational sentences
now have the forms $t_1 \downarrow t_2$, $T_1<T_2$, and 
$t \triangleleft T$ for base-system terms $t$ and types $T$.
%where $t,T$ is defined in the base system and $\downarrow,<,\triangleleft
%\in \Sigma$. 

Now we are ready to specify more reflective relational sentences. We 
define the $\triangleleft$ reflective relational sentences by the following
condition:

 
\begin{center}
$t \triangleleft T \in D$ iff $ \forall \phi, t \in \interp{T}_\phi$
\end{center}
 
\noindent Notice that this definition is not algorithmic, which is fine since
our framework does not require decidability of the set $D$ of reflective
relational sentences.

The $\triangleleft$ symbol allows us to give
``morally correct'' types to terms which cannot otherwise be checked.
In practice, such terms are created when extracting
computational content from mechanically checked proofs. As a concrete example,
the Coq proof assistant uses an expressive language to define
functional programs and exports that code to OCaml for efficient compilation.
Resulting OCaml programs do not go wrong, but must use 
\texttt{Obj.magic} : $\alpha \to \beta$ to get past ML's weaker
type system.  Likewise, AuraConf~\cite{vau09} uses a type constructor
resembling $\triangleleft$ to inform the type checker about the concealed
types of opaque ciphertexts.  Note that weaker variants of $\triangleleft$
may be possible when, as in the case of extracted proofs, there is a
conservative procedure for checking semantic type inclusion:
$t \triangleleft_{alt} T \in D$ iff $\mathit{Oracle}(t,T)$.
(We do not consider such variants further.)

We define $t_1 \downarrow t_2 \in D$ by the following rules:

\begin{definition}
 \fbox{$t \downarrow t' \in D$} 
 
\begin{tabular}{ccc}
\infer{t \downarrow t \in D}{}

&

\infer{(\lambda x.t)t' \downarrow [t'/x]t \in D}{}

&
\infer{t_1 \  t \downarrow t_2 \ t \in D}{t_1 \downarrow t_2 \in D}


\\

\\

\infer{\lambda x.t_1 \downarrow \lambda x.t_2 \in D}{t_1 \downarrow t_2\in D}

&
\infer{t \ t_1 \downarrow t \ t_2 \in D}{t_1 \downarrow t_2 \in D}

&

\infer{t_1 \downarrow t_3 \in D}{t_1 \downarrow t_2 \in D & t_2 \downarrow t_3\in D}

\\
\\

\infer{t_2 \downarrow t_1\in D}{t_1 \downarrow t_2\in D}

&


&
  \end{tabular}
  
\end{definition}
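Although membership $t_1 \downarrow t_2 \in D$ is undecidable in general, for normalizing terms one can semi-decide it by comparing beta normal forms, since the rules above generate full beta-conversion. The Python sketch below does this with a step bound; the encoding is our own, alpha-conversion is ignored (bound names must match), and a fuel exhaustion is reported as "not known convertible," so this is only a partial check, not a decision procedure.

```python
def subst(t, x, u):
    """[u/x]t for terms ("var", x) | ("lam", x, t) | ("app", t, u); bound names unique."""
    if t[0] == "var":
        return u if t[1] == x else t
    if t[0] == "lam":
        return t if t[1] == x else ("lam", t[1], subst(t[2], x, u))
    if t[0] == "app":
        return ("app", subst(t[1], x, u), subst(t[2], x, u))
    return t  # constants such as axiom

def step(t):
    """One leftmost-outermost full-beta step, or None if t is a normal form."""
    if t[0] == "app":
        if t[1][0] == "lam":
            return subst(t[1][2], t[1][1], t[2])
        s = step(t[1])
        if s is not None:
            return ("app", s, t[2])
        s = step(t[2])
        if s is not None:
            return ("app", t[1], s)
    elif t[0] == "lam":
        s = step(t[2])
        if s is not None:
            return ("lam", t[1], s)
    return None

def convertible(t1, t2, fuel=1000):
    """Partial check of t1 ↓ t2: normalize both sides (up to `fuel` steps) and compare."""
    def nf(t):
        for _ in range(fuel):
            s = step(t)
            if s is None:
                return t
            t = s
        return None  # gave up: the term may diverge
    n1, n2 = nf(t1), nf(t2)
    return n1 is not None and n1 == n2
```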
  
\noindent The rules above define $\downarrow$ in the same way as conversion in the lambda calculus.
In this case, the syntax of extended types (as defined by the internalization framework) is:
 
\begin{center}
$\mathbf{EType} \ S::=\top \ | \ \bot \ | \ X  \ |  \ \Pi x:S.S \ | \ \forall X.S \ | \ t_1 \downarrow t_2 \ | \ T_1 <T_2 \ | \ t \triangleleft T$
\end{center}

\noindent The additional elimination relations are:
\begin{center}
  

$
\begin{array}{l}
(\Delta,t,t',t_1 \downarrow t_2,[t_1/x](t_3 \downarrow t_4),[t_2/x](t_3 \downarrow t_4)) \in E. \\
  \\
(\Delta,t,t',t \triangleleft T',T,T') \in E
\end{array}
$
\end{center}

\noindent The additional interpretations $\mathcal{I}_{\downarrow},\mathcal{I}_{\triangleleft}$ are:

\begin{itemize}
\item $\mathcal{I}_{\downarrow} \subseteq \mathbf{Terms} \times \mathbf{Terms}$ defined by $\mathcal{I}_{\downarrow}:=\{(t_1,t_2) \ | \ t_1 \downarrow t_2 \in D\}$.

\item $\mathcal{I}_{\triangleleft} \subseteq \mathbf{Terms} \times \mathfrak{R}$ defined by $\mathcal{I}_{\triangleleft}:=\{(t, \mathcal{R}) \ | \ t \in \mathcal{R}\}$.
\end{itemize}
 
\noindent We have now defined the three parts of the internalization structure. We need to show that this structure is sound. For that purpose, we have the following lemmas.

\begin{lemma}[Soundness of the Reflective Relational Sentence]
\

\begin{itemize}
\item If $(t_1 \downarrow t_2) \in D$, then $ \forall \sigma, (\sigma t_1,\sigma t_2) \in \mathcal{I}_{\downarrow}$.
\item If $(t \triangleleft T) \in D$, then $\forall \sigma, \forall \phi,(\sigma t, \interp{\sigma T}_\phi) \in \mathcal{I}_{\triangleleft}$.

\end{itemize}
\end{lemma}

\begin{proof} 
\noindent If $(t_1 \downarrow t_2) \in D$, we have $\forall \sigma, (\sigma t_1 \downarrow \sigma t_2) \in D$. This is because we define the $t \downarrow t'$ relation in the same way as conversion in the lambda calculus, and closure under substitution is one of its properties. Thus $(\sigma t_1,\sigma t_2) \in \mathcal{I}_{\downarrow}$ by definition of $\mathcal{I}_{\downarrow}$.

\noindent If $(t \triangleleft T) \in D$, by definition, we have $\forall \phi, t \in \interp{T}_\phi$. Since $t$ is closed, $\forall \sigma, \sigma t\equiv t$. And we have $\interp{\sigma T}_\phi=\interp{T}_\phi$. So $\forall \phi, \forall \sigma, \sigma t \in \interp{\sigma T}_\phi$. 
Thus $\forall \sigma, \forall \phi,(\sigma t, \interp{\sigma T}_\phi) \in \mathcal{I}_{\triangleleft}$.

\end{proof}

\begin{lemma}[Soundness of the Elimination Relation]

  \

\begin{itemize}
\item If $(\Delta,t,t',t_1 \downarrow t_2,[t_1/x](t_3 \downarrow t_4),[t_2/x](t_3 \downarrow t_4)) \in E$, $(\sigma,\delta) \in [\Delta]$ and $\sigma(t) \in \interp{\sigma [t_1/x](t_3 \downarrow t_4)}_\delta$ and $t_1 \downarrow t_2 \in D$, then $\sigma(t) \in \interp{\sigma [t_2/x](t_3 \downarrow t_4)}_\delta$.

\item If $(\Delta,t,t',t \triangleleft T',T,T') \in E$, $(\sigma,\delta) \in [\Delta]$ and $\sigma(t) \in \interp{\sigma T}_\delta$ and $t \triangleleft T' \in D$, then $\sigma(t) \in \interp{\sigma T'}_\delta$.
\end{itemize}
\end{lemma}

\begin{proof}
\noindent We have $\sigma(t) \in \interp{\sigma [t_1/x](t_3 \downarrow t_4)}_\delta$, thus $\sigma(t) \in \mathcal{A}$ and $(\sigma [t_1/x]t_3) \downarrow (\sigma [t_1/x]t_4) \in D$. Since $t_1 \downarrow t_2 \in D$, we have $(\sigma [t_2/x]t_3) \downarrow (\sigma [t_2/x]t_4) \in D$; this again follows from the properties of the $t \downarrow t'$ relation (it is a congruence and is symmetric and transitive). So $\sigma(t) \in \interp{\sigma [t_2/x](t_3 \downarrow t_4)}_\delta$. 

\noindent By \textit{soundness of the reflective relational sentence}, $t \triangleleft T' \in D$ implies $\forall \phi, \sigma t \equiv t \in \interp{T'}_\phi = \interp{\sigma T'}_\phi$. Hence $\sigma(t) \in \interp{\sigma T'}_\delta$.

\end{proof}

\noindent So the structure $(D,E,\mathcal{I}_<, \mathcal{I}_{\downarrow},\mathcal{I}_{\triangleleft})$ we have defined is a sound internalization structure. Let us see some instances of the \textit{A-elim} and \textit{A-intro} rules for the internalized system based on this internalization structure:

\begin{center}
$
\begin{array}{c}

\infer[{\textit{A-intro}}]{\Delta \vdash \mathsf{axiom} : t_1 \downarrow t_2}
      { t_1 \downarrow t_2 \in D & \mathrm{FVar}(t_1 \downarrow t_2) \subseteq \mathrm{dom}(\Delta) &\Delta \vdash \mathsf{OK}}

\\
\\
\infer[{\textit{A-elim}}]{\Delta \vdash t:[t_2/x](t_3 \downarrow t_4)}{\Delta \vdash t:[t_1/x](t_3 \downarrow t_4) & \Delta \vdash t':t_1 \downarrow t_2}

\\
\\

\infer[{\textit{A-intro}}]{\Delta \vdash \mathsf{axiom} : t \triangleleft T'}
      { t \triangleleft T' \in D & \mathrm{FVar}(t \triangleleft T') \subseteq \mathrm{dom}(\Delta) &\Delta \vdash \mathsf{OK}}

\\
\\

\infer[{\textit{A-elim}}]{\Delta \vdash t:T'}{\Delta \vdash t:T & \Delta \vdash t':t \triangleleft T'}

\\

\end{array} 
$
\end{center}
\noindent We can see that our elimination rule for $\downarrow$ realizes a more general form of transitivity. For example, if we have a term with a type $[t_2/y](t_1 \downarrow y)$ and $t_2 \downarrow t_3 \in D$, then we can assign this term a new type $[t_3/y](t_1 \downarrow y)$ by the elimination rule.
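\noindent For concreteness, this transitivity example is the following instance of the \textit{A-elim} rule, where the second premise can be obtained by \textit{A-intro} whenever $t_2 \downarrow t_3 \in D$:
\begin{center}
$
\infer[{\textit{A-elim}}]{\Delta \vdash t:[t_3/y](t_1 \downarrow y)}{\Delta \vdash t:[t_2/y](t_1 \downarrow y) & \Delta \vdash t':t_2 \downarrow t_3}
$
\end{center}
\noindent Since $[t_2/y](t_1 \downarrow y)$ is just $t_1 \downarrow t_2$, the conclusion assigns $t$ the type $t_1 \downarrow t_3$.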

%\subsection{CBN-Normalization}
%
%\noindent We define $t! \in D$ as follows:
%
%
%\infer{v! \in D}{}
%
%\
%
%\infer{t! \in D}{t \stackrel{*}{\leadsto} v}
%
%\
%
%\noindent We define $\mathcal{I}_!=\{t | t! \in D \}$.
%
%\
%
%\noindent Soundness of reflective relational sentence: If $t! \in D$, then $\forall \sigma \in \textit{Sub}, \sigma t \in \mathcal{I}_!$.
%
%\
%
%\noindent Proof: If $t$ is a value, then $\sigma t$ is also a value, so $\sigma t! \in D$, thus $\sigma t \in \mathcal{I}_!$. If $t$ is not a value, then $\exists v, t \stackrel{*}{\leadsto} v$. By compatibility, we have $\sigma t \stackrel{*}{\leadsto} \sigma v$. Since $\sigma v \in \mathbf{Values}$, we have $\sigma t! \in D$. Thus $\sigma t \in \mathcal{I}_!$.
%
%\
%
%\noindent Since we don't have a elimination rule for $t!$, we don't need to prove the soundness of elimination relation. 
\section{Summary}
\label{sec:related}
 
\noindent We have formalized the notion of internalization structure
and demonstrated that the internalized system is terminating. We also
have shown how our formalization can be applied to full-beta term
equality, subtyping and term-type inhabitation relation. Our approach
makes it easier to establish normalization for type theories with
these features, since the framework provides the analysis for all but
the internalization-specific parts of the language. 

In retrospect, the difficulty of this approach is that, with
internalization, the type system has the ability to make inconsistent
assumptions using the internalized relations such as $\triangleleft, <, \downarrow$,
which falsifies the type preservation property. For example, suppose
we manage to internalize a form of type equivalence under which one
type can automatically be converted to another, and suppose
$a : A \to B \equiv A \to C, d:A  \vdash t : A \to B$. Then 
we can have $a : A \to B \equiv A \to C, d:A  \vdash t\ d : B$
and $a : A \to B \equiv A \to C, d:A  \vdash t\ d : C$ due to automatic
conversion. But we know that $B$ and $C$ are not unifiable. Thus we have a counterexample
for type preservation\footnote{This counterexample was found by the UPenn group.}. 
This counterexample does not arise if we adopt Leibniz equality, namely, 
if we define $T \equiv T'$ as $\forall P. P(T) \to P(T')$. Then we would have $a : A \to B \equiv A \to C, d:A  \vdash t\ d : B$ and $a : A \to B \equiv A \to C, d:A  \vdash (a \ t)\ d : C$,
which is entirely legal, and type preservation still holds. This sequence of
developments suggests that we should at least be careful when we
try to ``lift'' propositional equivalence $\leftrightarrow$ to meta-level equivalence $\equiv$.
We will discuss Leibniz equality more in Chapter \ref{comprehension}. 

\chapter{Lambda Encodings with Dependent Types}
\label{selftype}
  In this Chapter, we revisit lambda encodings of data, proposing new solutions to
  several old problems, in particular dependent elimination with
  lambda encodings (Section \ref{s:overview}). We start with a type-assignment form of the
  Calculus of Constructions, restricted recursive definitions and
  Miquel's implicit product. We add a type construct $\iota x.T$,
  called a \emph{self type}, which allows $T$ to refer to the subject
  of typing (Section \ref{self}).  We show how the resulting System $\self$ with this novel
  form of dependency supports dependent elimination with lambda
  encodings, including induction principles (Section \ref{encoding}). 
  Strong normalization of $\self$ is established by defining an erasure from $\self$ to a
  version of $\fomega$ with positive recursive type definitions, which
  we analyze (Section \ref{s}). We also prove type preservation for $\self$. 


\section{Introduction} 
\label{sec:intro}

Modern type-theoretic tools such as Coq and Agda extend a typed lambda
calculus with a rich notion of primitive datatypes.  Both tools build
on established foundational concepts, but the interactions of these concepts,
particularly with datatypes and recursion, often lead to unexpected
problems.  For example, it is well-known that type preservation does
problems.  For example, it is well-known that type preservation does
not hold in Coq, due to the treatment of coinductive
types~\cite{gimenez96}.  Arbitrary nesting of coinductive and
inductive types is not supported by the current version of Agda,
leading to new proposals like co-patterns~\cite{abel+13}.  And new
issues are discovered with disturbing frequency; e.g., an unexpected
incompatibility of extensional consequences of Homotopy Type Theory
with both Coq and Agda was discovered in December,
2013~\cite{schepler13}.

The above issues are all related to the datatype system, which must
determine which inductive/coinductive datatypes are legal, in the
presence of indexing, dependency, and generalized induction (allowing
functional arguments to constructors).  And for formal study of the
type theory -- either on paper~\cite{werner:phd}, or in a proof
assistant~\cite{barras10} -- one must formalize the datatype system,
which can be daunting, even in very capable hands (cf. Section 2
of~\cite{capretta05}).

Fortunately, an alternative to primitive datatypes exists: lambda
encodings, like the well-known Church and Scott
encodings~\cite{Church:1985,CHS:72}.  Utilizing the core typed lambda
calculus for representing data means that no datatype system is needed
at all, greatly simplifying the formal theory.  We focus here just on
inductive types, since in extensions of System \textbf{F}, coinductive types
can be reduced to inductive ones~\cite{geuvers94}.

Several problems historically prevented lambda encodings from being
adopted in practical type theories.  Scott encodings are efficient but
do not inherently provide a form of iteration or recursion. Church
encodings inherently provide iteration, and are typable in System \textbf{F}.
Due to strong normalization of System \textbf{F}~\cite{Girard:72}, they are
thus suitable for use in a total (impredicative) type theory, but: 
\begin{enumerate}
\item The predecessor of $n$ takes $O(n)$ time to compute instead of constant time.
\item We cannot prove $0 \neq 1$ with the usual
definition of $\neq$.
\item Induction is not derivable~\cite{geuvers01}.
\item Large eliminations (computing types from data) are not supported.
\end{enumerate}
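To illustrate problem (1), here is a small untyped Python sketch (our own illustration, not part of the formal development; the names \texttt{church}, \texttt{to\_int}, and \texttt{pred} are ours). The classic pair-based predecessor must walk the entire numeral, hence the $O(n)$ cost; native Python tuples stand in for Church-encoded pairs for readability.

```python
def church(n):
    """Church numeral n = \\s.\\z. s (s (... (s z))), as a Python closure."""
    def numeral(s):
        def apply(z):
            for _ in range(n):
                z = s(z)
            return z
        return apply
    return numeral

def to_int(c):
    """Read back a Church numeral by counting applications of s."""
    return c(lambda k: k + 1)(0)

def pred(c):
    """Pair-based predecessor: iterate (a, b) -> (b, b + 1) from (0, 0);
    after n steps the first component is n - 1.  Every 's' in the
    numeral is inspected, hence O(n) work."""
    a, _ = c(lambda p: (p[1], p[1] + 1))((0, 0))
    return a
```

For instance, `pred(church(5))` walks all five successors just to return `4`.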
These issues motivated the development of the Calculus of Inductive
Constructions (cf.~\cite{Werner:92}).  Problem (1) is best known but
has a surprisingly underappreciated solution: if we accept positive
recursive definitions (which preserve normalization), then we can use
Parigot numerals, which are like Church numerals but based on
recursors not iterators~\cite{parigot88}.  Normal forms of Parigot
numerals are exponential in size, but a reasonable term-graph
implementation should be able to keep them linear via sharing.  The
other three problems have remained unsolved.
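To make the contrast concrete, here is a small untyped Python sketch of Parigot-style numerals (our own illustration, with hypothetical names). Each successor carries its predecessor directly, so the predecessor inspects only the outermost constructor; since Python is strict, we delay the recursive leg of the recursor with a thunk.

```python
# Parigot numeral: n s z = s (n-1) (delayed ((n-1) s z)); zero s z = z.
zero = lambda s: lambda z: z

def succ(n):
    # The recursor step s receives the predecessor n directly, plus a
    # delayed recursive result, so a consumer may ignore the recursion.
    return lambda s: lambda z: s(n)(lambda: n(s)(z))

def parigot(k):
    return zero if k == 0 else succ(parigot(k - 1))

# Constant-time predecessor: project out the carried predecessor and
# never force the recursive thunk.
pred = lambda n: n(lambda p: lambda rec: p)(zero)

# Read-back forces the thunks in order to count successors.
to_int = lambda n: n(lambda p: lambda rec: 1 + rec())(0)
```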

In this Chapter, we propose solutions to problems (2) and (3).  For
problem (2) we propose to change the definition
of falsehood from explosion ($\forall X.X$, everything is true) to
equational inconsistency ($\forall X.\Pi x : X. \Pi y : X. x =_X y$,
all inhabitants of any type are equal).  We point out that $0\neq 1$ is
derivable with this notion.  Our main contribution is for problem (3).
We adapt $\cc$ to support dependent elimination with Church or Parigot
encodings, using a novel type construct called \emph{self types},
$\iota x.T$, to express dependency of a type on its subject.  This
allows deriving induction principles in a total type theory, and we
believe it is the missing piece of the puzzle for dependent typing of
pure lambda calculus. For problem (4), we suspect it would be hard to
extend self types to support large eliminations, since that would require impredicative kind polymorphism, which is known to lead to Girard's paradox.

We summarize the main technical points:

\begin{itemize}
\item System $\self$, which enables us to encode Church
  and Parigot data and derive induction principles for these data.
  
\item We prove strong normalization of $\self$ by erasure to a version
  of $\fomega$ with positive recursive type definitions.  We prove
  strong normalization of this version of $\fomega$ by adapting a
  standard argument.
  
\item Type preservation for $\self$ is proved by extending Barendregt's
  method \cite{Barendregt:1993} to handle implicit products and making use
  of a confluence argument. %% crucial
  %% application of Hardin's interpretation method~\cite{Hardin:1989}.
  %% This is a new confluence result for the combination of standard
  %% $\beta$-reduction with permutation of recursive definitions.
\end{itemize}

Detailed arguments omitted here may be found in \cite{fu+14}.

\section{Overview of System $\self$}
\label{s:overview}
System $\self$ extends a type-assignment formulation of the Calculus of
Constructions ($\cc$)~\cite{Coquand:1988}. We allow   
global recursive definitions in a form we call a \emph{closure}:
\[
\bm{x}{S}{t}{N} \cup \bm{X}{\kappa}{T}{M}
\]
The $x_i$ are term variables which cannot appear in the terms $t_i$,
but can appear in the types $T_i$.  Occurrences in types are used to
express dependency, and are crucial for our approach.  Erasure to
$\fomega$ with positive recursive definitions will drop all such
occurrences. The $X_i$ are type variables that can appear 
positively in the $T_i$ or at erased positions (explained later). 

%% These are similar to what is proposed
%% in the $\Pi\Sigma$ system of Altenkirch et al~\cite{altenkirch+10},
%% although here we show confluence of reduction in the presence of steps
%% which push closures deeper into terms.  The work cited on $\Pi\Sigma$                          
%% does not establish confluence, which turns out to be rather intricate.
%% Allowing local recursive definitions is an attractive generalization
%% over global datatype declarations, although our confluence proof
%% currently requires that closures are closed: all variables in the
%% $t_i$ and $T_i$ must be among the $x_i$ and $X_i$.

The essential new construct is the self type $\iota x.T$.  Note that
this is different from self typing in the object-oriented (OO)
literature, where the central problem has been to allow
self-application while still validating natural record-subtyping
rules~\cite{odersky+03,abadi+94}.  Typing the self parameter of an
object's methods appears different from allowing a type to refer to
its subject, though Hickey proposes a type-theoretic encoding of
objects based on very dependent function types $\{ f \,|\, x:A\to
B\}$, where the range $B$ can depend on both $x$ and values of the
function $f$ itself~\cite{hickey96}.  The self types we propose appear to be simpler.


\subsection{Induction Principle}
Let us take a closer look at the difficulties of deriving an induction principle for Church
numerals in $\cc$, and then consider our solutions. 
In $\cc$ \`a la Curry, let $ \mathsf{Nat} := \forall X.(X \to X) \to X \to X$. One can obtain a notion of \textit{indexed iterator} by $\mathsf{It} := \lambda x.\lambda f.\lambda a. x\ f\ a$ and $\mathsf{It}: \forall X.\Pi x:\mathsf{Nat}. (X \to X) \to X \to X$. Thus we have 
 $\mathsf{It}\ \bar{n} =_{\beta} \lambda f.\lambda a. \bar{n}\ f\ a =_{\beta}\lambda f.\lambda a. \underbrace{f ( f ( f...(f}_{n} a)...))$. 
 One may wonder whether we can obtain a finer version, namely an induction principle $\mathsf{Ind}$ such that:

$\mathsf{Ind} :\forall P:\mathsf{Nat} \to *. \Pi x:\mathsf{Nat}. (\Pi y:\mathsf{Nat}.(P y \to P(\mathsf{S} y))) \to P\ \bar{0} \to P\ x$ 

\noindent Let us try to construct such $\mathsf{Ind}$. First observe the following beta-equalities and typings:


$\mathsf{Ind} \ \bar{0} =_{\beta} \lambda f.\lambda a.a $

$\mathsf{Ind} \ \bar{0} : (\Pi y:\mathsf{Nat}.(P y \to P(\mathsf{S} y))) \to P\ \bar{0} \to P\ \bar{0}$

$\mathsf{Ind} \ \bar{n} =_{\beta} \lambda f.\lambda a.\underbrace{f\ \overline{n-1} (... f\ \bar{1}\ (f}_{n>0}\ \bar{0}\ a)) $

$\mathsf{Ind} \ \bar{n} : (\Pi y:\mathsf{Nat}.(P y \to P(\mathsf{S} y))) \to P\ \bar{0} \to P\ \bar{n}$


with $f:\Pi y:\mathsf{Nat}.(P y \to P(\mathsf{S} y)), a: P\ \bar{0}$

\noindent These equalities suggest that $\mathsf{Ind} := \lambda x.\lambda f.\lambda a.x\ f\ a$, 
using Parigot numerals~\cite{parigot88}:

$\bar{0}:= \lambda s.\lambda z.z$

$\bar{n}:= \lambda s.\lambda z.s\ \overline{n-1}\ (\overline{n-1}\ s\ z)$ 

\noindent %% We want to remark that this notion of lambda numerals have been discovered independently by many others(cite).
Each numeral corresponds to its terminating recursor.  

Now, let us try to type these 
lambda numerals. It is reasonable to assign $s:\Pi y:\mathsf{Nat}.(P\ y \to P(\mathsf{S}\ y))$
and $z: P\ \bar{0}$. Thus we have the following typing relations: 

$\bar{0} : \Pi y:\mathsf{Nat}.(P\ y \to P(\mathsf{S}\ y)) \to P\ \bar{0} \to P\ \bar{0}$

$\bar{1} : \Pi y:\mathsf{Nat}.(P\ y \to P(\mathsf{S}\ y)) \to P\ \bar{0} \to P\ \bar{1}$

$\bar{n} : \Pi y:\mathsf{Nat}.(P\ y \to P(\mathsf{S}\ y)) \to P\ \bar{0} \to P\ \bar{n}$

\noindent So we want to define $\mathsf{Nat}$ to be something like:

$\forall P: \mathsf{Nat}\to *.\Pi y:\mathsf{Nat}.(P\ y \to P(\mathsf{S}\ y)) \to P\ \bar{0} \to P\ \bar{n}$ for any $\bar{n}$. 

\noindent Two problems arise with this scheme of encoding. The first problem involves recursiveness. The definiens of $\mathsf{Nat}$ contains $\mathsf{Nat}$ and $\mathsf{S}, \bar{0}$, while the type of $\mathsf{S}$ is $\mathsf{Nat} \to \mathsf{Nat}$ and the type of $\bar{0}$ is $\mathsf{Nat}$. So the typing of $\mathsf{Nat}$ will be mutually recursive. Observe that the recursive occurrences of $\mathsf{Nat}$ are all at the type-annotated positions; i.e., the right side of the ``$:$''. 

Note that the subdata of $\bar{n}$ is responsible for one recursive occurrence of $\mathsf{Nat}$, namely, $\Pi y:\mathsf{Nat}$. If one never computes with the subdata, then these numerals will behave just like Church numerals. This inspires us to use Miquel's implicit product \cite{miquel:2001}. In this case, we want to redefine $\mathsf{Nat}$ to be something like:

$\forall P: \mathsf{Nat}\to *.\forall y:\mathsf{Nat}.(P\ y \to P(\mathsf{S}\ y)) \to P\ \bar{0} \to P\ \bar{n}$ for any $\bar{n}$. 

\noindent Here $\forall y:\mathsf{Nat}$ is the implicit product. Now our numerals are exactly Church numerals instead of Parigot numerals. Even better, this definition of $\mathsf{Nat}$ can be erased to $\fomega$. Since $\fomega$'s types do not have dependency on terms, $P:\mathsf{Nat} \to *$ will get erased to $P:*$. It is known that one can also erase the implicit product~\cite{ahn:2013}. The erasure of $\mathsf{Nat}$ will be $\forall P:  *.(P \to P) \to P \to P$, which is the definition of $\mathsf{Nat}$ in $\fomega$. %% As long as 
%% we restrict the recursive occurrences of the type to be at the erased positions, we will have a meaningful interpretation over $\fomega$. 

The second problem is about quantification. We want to define a type $\mathsf{Nat}$ for any $\bar{n}$, but right now what we really have is one $\mathsf{Nat}$ for each numeral $\bar{n}$. 
 We solve this problem by introducing a new type construct $\iota x.T$ called a \textit{self type}. This allows us to make this
definition (for Church-encoded naturals):

$\mathsf{Nat} := \iota x.\forall P:\mathsf{Nat} \to *.\forall y:\mathsf{Nat}.(P\ y \to P(\mathsf{S}\ y)) \to P\ \bar{0} \to P\ x$

\noindent We require that the self type can only be instantiated/generalized by its own subject, so we add the following two rules:
\[
\begin{array}{ccc}
  \infer[\textit{selfGen}]{\Gamma \vdash t: \iota x.T}{\Gamma \vdash t: [t/x]T}
& \ & 
 \infer[\textit{selfInst}]{\Gamma \vdash t: [t/x]T}{\Gamma \vdash t: \iota x.T}
\\
\end{array}
\]

\noindent We have the following inferences\footnote{The double bar means that the converse of the inference also holds.}:
\[
\begin{array}{c}
\infer={\bar{n}: \iota x.\forall P:\mathsf{Nat} \to *.\forall y:\mathsf{Nat}.(P\ y \to P(\mathsf{S}\ y)) \to P\ \bar{0} \to P\ x}{\bar{n} : \forall P:\mathsf{Nat} \to *.\forall y:\mathsf{Nat}.(P\ y \to P(\mathsf{S}\ y)) \to P\ \bar{0} \to P\ \bar{n}} 
\end{array}
\]

\subsection{The Notion of Contradiction}

In $\cc$ \`a la Curry, it is customary to use $\forall X:*.X$ as the notion of contradiction, since an inhabitant of the type $\forall X:*.X$ will inhabit any type, so the law of explosion is subsumed by the type $\forall X:*.X$. However, this notion of contradiction is too strong to be useful. Let $t =_A t'$ denote $\forall C: A \to *. C\ t \to C\ t'$ with $t, t' : A$. Then $0 =_{\nat} 1$ can be expanded to $\forall C:\nat \to *. C\ 0 \to C\ 1$ ($0$ is Leibniz-equal to $1$). One cannot derive a proof of $(\forall C:\nat \to *. C\ 0 \to C\ 1) \to \forall X:*.X$, because the erasure of $(\forall C:\nat \to *. C\ 0 \to C\ 1) \to \forall X:*.X$ in System \textbf{F} would be $(\forall C:*. C \to C) \to \forall X:*.X$, and we know that $\forall C:*. C \to C$ is inhabited. So the inhabitation of $(\forall C:\nat \to *. C\ 0 \to C\ 1) \to \forall X:*.X$ would imply the inhabitation of $\forall X:*.X$ in System \textbf{F}, which does not hold. If we take Leibniz equality and use $\forall X:*.X$ as contradiction, then we cannot prove any negative results about equality. 

On the other hand, an equational theory is considered inconsistent if $a = b$ holds for all terms $a$ and $b$. So we propose to use $\forall A:*.\Pi x:A.\Pi y:A. x =_A y$ as the notion of contradiction in $\cc$. We first want to make sure it is uninhabited. To argue this, assume it is inhabited by some $t$. Since $\cc$ is strongly normalizing, the normal form of $t$ must be of the form\footnote{We use square brackets [ ] to show annotations that are not present in the inhabiting lambda term in Curry-style System \textbf{F}.} $[\lambda A:*.]\lambda x[:A].\lambda y[:A].[\lambda C:A \to *].\lambda z[: C\ x]. n$ for some normal term $n$ of type $C\ y$; but there is no combination of $x,y,z$ that yields a term of type $C\ y$. So the type $\forall A:*. \Pi x:A.\Pi y:A. \forall C: A \to *. C\ x \to C\ y$ is uninhabited. We can then
prove the following theorem.

\begin{theorem}
  \label{contract}
  $0 = 1 \to \bot$ is inhabited in $\cc$, where $\bot := \forall A:*.\Pi x:A.\Pi y:A. \forall C: A \to *. C\ x \to C\ y$, $0 := \lambda s.\lambda z.z$, $1 := \lambda s.\lambda z.s\ z$.
\end{theorem}
\begin{proof}
  Assume $\nat := \forall B:*. (B \to B) \to B\to B$. Let $\Gamma = a : (\forall D: \nat \to *. D 0 \to D 1), A:*, x:A, y:A, C:A \to *, c: C\ x$. We want to construct a term of type $C\ y$. Let $F := \lambda n[:\nat]. n\ [A]\ (\lambda q[:A].y) x$. Note that 
  $F: \nat \to A$. We know that
 $F\ 0 =_{\beta} x$ and $F\ 1 =_{\beta} y$. So we can indeed convert the type of $c$ from $C\ x$ to $C\ (F\ 0)$. We then instantiate the $D$ in $\forall D: \nat \to *. D\ 0 \to D\ 1$ with $\lambda x[:\nat].C\ (F\ x)$. So we have $C\ (F\ 0) \to C\ (F\ 1)$ as the type of $a$. So $a\ c : C\ (F\ 1)$, which means $a\ c : C\ y$. This shows how to inhabit $0 = 1 \to \bot$ in $\cc$. 
  
\end{proof}
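The computational content of this proof can be seen in a small untyped Python sketch (our own illustration; the name \texttt{discriminator} is ours). The term $F := \lambda n.\ n\ (\lambda q.y)\ x$ from the proof maps the Church numeral $0$ to $x$ and $1$ to $y$, which is exactly what lets a proof of $0 = 1$ collapse any two values.

```python
# Church numerals 0 and 1, as in the statement of the theorem.
zero = lambda s: lambda z: z
one = lambda s: lambda z: s(z)

def discriminator(x, y):
    """The term F := \\n. n (\\q. y) x from the proof: F 0 = x, F 1 = y."""
    return lambda n: n(lambda q: y)(x)

F = discriminator("x-value", "y-value")
```

Here `F(zero)` evaluates to `"x-value"` and `F(one)` to `"y-value"`, mirroring $F\ 0 =_{\beta} x$ and $F\ 1 =_{\beta} y$.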

Once $\bot$ is derived, one cannot distinguish the individuals of any domain. Note that this notion of contradiction does not subsume the law of explosion. 

\section{System $\self$}
\label{self}

\begin{definition}[Syntax]
\

\noindent Terms $t \ :: = \ x \ | \ \lambda x.t \ | \ t t' $

\noindent Types $T \ ::= \ X \ | \ \forall X:\kappa.T \ | \ \Pi x:T_1.T_2 \ | \ \mgray{\forall x:T_1.T_2}  | \mgray{\iota x.T} \ | \ \mgray{T \ t} \ | \ \lambda X.T \ | \ \mgray{\lambda x.T}\ | \ T_1 T_2 $

\noindent Kinds $\kappa \ ::= \ * \ | \ \mgray{\Pi x:T.\kappa} \ | \  \Pi X:\kappa'. \kappa$

\noindent Context $\Gamma \ ::= \ \cdot \ | \ \Gamma, x:T \ |  \ \Gamma, X:\kappa\ | \ \Gamma, \mu$

\noindent Closure $\mu \ ::= \ \bm{x}{S}{t}{N} \cup \bm{X}{\kappa}{T}{M}$
  
\end{definition}

%%\noindent Some remarks about the syntax are in order:

\textbf{Closures.} For $\bm{x}{S}{t}{N}$, we mean that the term variable $x_i$ of type $S_i$ is defined to be $t_i$, for each $i\in N$; similarly for $\bm{X}{\kappa}{T}{M}$.

\textbf{Legal positions for recursion in closures.} For $\bm{x}{S}{t}{N}$, we do not
allow any recursive (or mutually recursive) definitions. For $\bm{X}{\kappa}{T}{M}$, we only allow singly recursive type definitions, but not mutually recursive ones. This is not a
fundamental limitation of the approach; it is just for simplicity of
the normalization argument. The recursive occurrences of type
variables can only be at positive or erased positions.  Erased
positions, following the erasure function we will see in
Section~\ref{erasure}, are those in kinds or in the types for
$\forall$-bound variables.

%% \textbf{Annotated closures.} If $\mu$ is $\M{x}{t}{N}\cup
%% \M{X}{T}{M}$, then $\tilde{\mu}$ is an annotated closure
%% $\bm{x}{S}{t}{N} \cup \bm{X}{\kappa}{T}{N}$ for some type $S_i$, kind
%% $\kappa_i$.

\textbf{Variable restrictions for closures.}   Let $\mathrm{FV}(e)$ denote the set of free term variables in expression $e$ (either term, type, or kind), and let $\mathrm{FVar}(T)$ denote the set of free type variables in type $T$.  Then for $\bm{x}{S}{t}{N}\cup \bm{X}{\kappa}{T}{M}$, we make the simplifying assumption that for any $ 1 \leq i \leq n $, $\mathrm{FV}(t_i) = \emptyset$.   Also, for any $ 1 \leq i \leq m $, we require $\mathrm{FV}(T_i) \subseteq \mathrm{dom}(\mu)$, and $\mathrm{FVar}(T_i) \subseteq \{X_i\}$. All our examples below satisfy these conditions.  %% These restrictions are necessary in order to prove strong normalization and type preservation. ? 

\textbf{Notation for accessing closures.} $(t_i : S_i) \in \mu$ means $(x_i:S_i) \mapsto t_i \in \mu$ and $(T_i : \kappa_i) \in \mu$ means $(X_i:\kappa_i) \mapsto T_i \in \mu$. Also, $x_i \mapsto t_i \in \mu$ means $(x_i:S_i) \mapsto t_i \in \mu$ for some $S_i$ and $X_i \mapsto T_i \in \mu$ means $(X_i:\kappa_i) \mapsto T_i \in \mu$ for some $\kappa_i$. 

 \textbf{Well-formed annotated closures.} $\Gamma \vdash \mu \ \mathsf{ok}$ stands for $\{\Gamma, \mu \vdash t_j: T_j\}_{(t_j:T_j) \in {\mu}}$ and $\{\Gamma, \mu \vdash T_j: \kappa_j\}_{(T_j:\kappa_j) \in \mu}$.  In other words, the defining expressions in closures must be typable with respect to the context and the entire closure.
  

\textbf{Notation for equivalence.} $\cong $ is the congruence closure of $\to_{\beta}$. 
 
\textbf{Self type formation.} Typing and kinding do not depend on well-formedness of the
context, so the self type formation rule \textit{self} is not circular. %% In fact, we can even show: if $\Gamma \vdash \mathsf{wf}$ and $\Gamma \vdash t:t'$, then $\Gamma \vdash t':*$(see appendix).

\noindent \textbf{Well-formed Contexts} \fbox{$\Gamma \vdash \mathsf{wf}$}

\begin{center}

\begin{tabular}{llllllll}
\infer{ \cdot \vdash \mathsf{wf}}{}

&
\
&
\infer{ \Gamma, x:T \vdash \mathsf{wf}}{\Gamma \vdash \mathsf{wf} & \Gamma \vdash T:*}

&
\
&
\infer{ \Gamma, X:\kappa \vdash \mathsf{wf}}{\Gamma \vdash \mathsf{wf} & \Gamma \vdash \kappa:\Box}
&
\
&
\infer{\Gamma, \mu \vdash \mathsf{wf}}{\Gamma \vdash \mathsf{wf}
& \Gamma \vdash \mu \ \mathsf{ok}}
\end{tabular}

\end{center}

\noindent \textbf{Well-formed Kinds} \fbox{$\Gamma \vdash \kappa : \Box$}
\begin{center}

\begin{tabular}{lllll}

\infer{\Gamma \vdash *:\Box}{}

&
\
&
\infer{\Gamma \vdash \Pi X:\kappa'.\kappa: \Box}{\Gamma, X:\kappa' \vdash \kappa : \Box & 
\Gamma  \vdash \kappa' : \Box}

&
\
&
\gray{\infer{\Gamma \vdash \Pi x:T.\kappa: \Box}{\Gamma, x:T \vdash \kappa : \Box
& \Gamma \vdash T:*}}

\end{tabular}

\end{center}

\noindent \textbf{Kinding} \fbox{$\Gamma \vdash T : \kappa$}      
\begin{center}

\begin{tabular}{ll}
\infer{\Gamma \vdash X:\kappa}{(X:\kappa) \in \Gamma}

&
\infer{\Gamma \vdash T : \kappa'}{\Gamma \vdash T:
\kappa & \Gamma \vdash \kappa \cong \kappa' & \Gamma \vdash \kappa': \Box}

\\
\\
\infer{\Gamma \vdash \Pi x:T_1.T_2 : *}{ \Gamma \vdash T_1 : * &
\Gamma, x: T_1 \vdash T_2 : *}

&


\infer{\Gamma \vdash \forall X:\kappa. T : *}{ \Gamma, X:\kappa \vdash T : * & \Gamma \vdash \kappa:\Box}

\\
\\
\gray{
\infer{\Gamma \vdash \forall x:T_1.T_2 : *}{ \Gamma, x:T_1 \vdash T_2 : * &
\Gamma \vdash T_1 : *}
}

&

\gray{\infer[\textit{Self}]{\Gamma \vdash \iota x.T : *}{\Gamma, x:\iota x.T \vdash T : *}}
\\
\\

\infer{\Gamma \vdash \lambda X.T: \Pi X:\kappa. \kappa'}{\Gamma, X:\kappa \vdash T : \kappa' & \Gamma \vdash \kappa : \Box }

&
\gray{
\infer{\Gamma \vdash \lambda x.T: \Pi x:T'.\kappa}{\Gamma, x:T' \vdash T : \kappa & \Gamma \vdash T':*}
}
\\
\\
\gray{\infer{\Gamma \vdash S\ t: [t/x]\kappa}{\Gamma \vdash S: \Pi x:T.\kappa & 
\Gamma \vdash t:T}}

&

\infer{\Gamma \vdash S\ T: [T/X]\kappa}{\Gamma \vdash S: \Pi X:\kappa'.\kappa & 
\Gamma \vdash T:\kappa'}
\end{tabular}

\end{center}

\noindent \textbf{Typing} \fbox{$\Gamma \vdash t : T$}

\begin{center}

\begin{tabular}{ll}

\infer[\textit{Conv}]{\Gamma \vdash t : T_2}{\Gamma \vdash t:
T_1 & \Gamma \vdash T_1 \cong T_2 & \Gamma \vdash T_2:*}
&
\infer[\textit{Var}]{\Gamma \vdash x:T}{(x:T) \in \Gamma}

\\

\\
\gray{
\infer[\textit{SelfGen}]{\Gamma \vdash t : \iota x.T}{\Gamma
\vdash t: [t/x]T & \Gamma \vdash \iota x.T: *}
}
&

\gray{
\infer[\textit{SelfInst}]{\Gamma \vdash t: [t/x]T}{\Gamma
\vdash t : \iota x.T}
}
\\
\\
\gray{
\infer[\textit{Indx}]{\Gamma \vdash t : \forall x:T_1.T_2}
{\Gamma, x:T_1 \vdash t: T_2 & \Gamma \vdash T_1:* & x \notin \mathrm{FV}(t)}
}
&
\gray{
\infer[\textit{Dex}]{\Gamma \vdash t :[t'/x]T_2}{\Gamma
\vdash t: \forall x:T_1.T_2 & \Gamma \vdash t': T_1}
}
\\
\\
\infer[\textit{App}]{\Gamma \vdash t t':[t'/x]T_2}{\Gamma
\vdash t: \Pi x:T_1. T_2 & \Gamma \vdash t': T_1}


&

\infer[\textit{Poly}]{\Gamma \vdash  t :\forall X:\kappa.T}
{\Gamma, X:\kappa \vdash t: T & \Gamma \vdash \kappa:\Box}

\\
\\
\infer[\textit{Inst}]{\Gamma \vdash t:[T'/X]T}{\Gamma \vdash t: \forall X:\kappa.T 
& \Gamma \vdash T': \kappa}

&

\infer[\textit{Func}]{\Gamma \vdash \lambda x.t : \Pi x:T_1. T_2}
{\Gamma, x:T_1 \vdash t: T_2 & \Gamma \vdash T_1:*}
\end{tabular}

\end{center}
      
\noindent \textbf{Reductions} \fbox{$\Gamma \vdash t \to_{\beta} t'$}, \fbox{$\Gamma \vdash T \to_{\beta} T'$}
\begin{center}

\begin{tabular}{lllll}

%% \infer{\Gamma \vdash\mu x_i \to_{\beta} \mu t_i}{(x_i \mapsto
%% t_i) \in \mu}

\infer{\Gamma \vdash x \to_{\beta} t}{(x\mapsto t) \in \Gamma}

&\  &
\infer{\Gamma \vdash(\lambda x.t)t' \to_{\beta} [t'/x]t}{}
& \ & 

\infer{\Gamma \vdash X \to_{\beta} T}{(X\mapsto T) \in \Gamma}
\\
\\
\gray{
\infer{\Gamma \vdash(\lambda x.T)t \to_{\beta} [t/x]T}{}
}

&\
&

\infer{\Gamma \vdash(\lambda X.T)T' \to_{\beta} [T'/X]T}{}
\end{tabular}

\end{center}


\section{Lambda Encodings in $\self$}
\label{encoding}
Now let us see some concrete examples of lambda encoding in $\self$. For convenience, we write $T \to T'$ for $\Pi x:T.T'$ with $x \notin \mathrm{FV}(T')$, and similarly for kinds.

\subsection{Natural Numbers}
\begin{definition}[Church Numerals]
Let $\mu_c$ be the following closure:

\noindent $(\mathsf{Nat}:* ) \mapsto \iota x. \forall C: \mathsf{Nat}\to *.  (\forall n : \mathsf{Nat}. C\ n \to C\ (\mathsf{S}\ n)) \to C\ 0 \to C\ x$

\noindent $(\mathsf{S}: \mathsf{Nat} \to \mathsf{Nat} )\mapsto \lambda n.\lambda s.\lambda z. s \ (n\ s\ z)$

\noindent $(0:\mathsf{Nat})  \mapsto  \lambda s. \lambda z.z$
\end{definition}

\noindent With $s: \forall n : \mathsf{Nat}. C\ n \to C\ (\mathsf{S}\ n), z: C\ 0, n: \mathsf{Nat}$, we have $ {\mu}_c \vdash \mathsf{wf}$ (using the \textit{selfGen} and \textit{selfInst} rules). Also note
that ${\mu}_c$ satisfies the constraints on recursive definitions. Similarly, if we choose to use the explicit product, then we can define Parigot numerals. 

\begin{definition}[Parigot Numerals]
Let $ {\mu}_p$ be the following closure:

\noindent $(\mathsf{Nat}:* ) \mapsto \iota x. \forall C: \mathsf{Nat}\to *.  (\gray{\Pi} n : \mathsf{Nat}. C\ n \to C\ (\mathsf{S}\ n)) \to C\ 0 \to C\ x$

\noindent $(\mathsf{S}: \mathsf{Nat} \to \mathsf{Nat} )\mapsto \lambda n.\lambda s.\lambda z. s \ \gray{n} \ (n\ s\ z)$

\noindent $(0:\mathsf{Nat})  \mapsto  \lambda s. \lambda z.z$
\end{definition}

\noindent Note that the recursive occurrences of $\mathsf{Nat}$ in Parigot numerals are at positive positions. The rest of the examples are about Church numerals, but a similar development can be carried out with Parigot numerals.  

\begin{theorem}[Induction Principle]
\

\noindent $ {\mu}_c \vdash \mathsf{Ind} : \forall C: \mathsf{Nat}\to *. (\forall n : \mathsf{Nat}. C\ n \to C\ (\mathsf{S}\ n)) \to C\ 0 \to \Pi n:\mathsf{Nat}. C\ n$  

\noindent where $\mathsf{Ind}\ := \lambda s.\lambda z. \lambda n. n\ s\ z$

\noindent with $s:\forall n : \mathsf{Nat}. C\ n \to C\ (\mathsf{S}\ n), z: C\ 0, n : \mathsf{Nat}$.
\end{theorem}
\begin{proof}
  Let $\Gamma =  {\mu}_c, C: \mathsf{Nat}\to *, s:\forall n : \mathsf{Nat}. C\ n \to C\ (\mathsf{S}\ n), z: C\ 0, n : \mathsf{Nat}$. Since $n : \mathsf{Nat}$, by \textit{selfInst}, $n : \forall C: \mathsf{Nat}\to *.  (\forall y : \mathsf{Nat}. C\ y \to C\ (\mathsf{S}\ y)) \to C\ 0 \to C\ n$. Thus $n \ s\ z : C\ n$.
\end{proof}
\noindent It is worth noting that it is really the definition of $\mathsf{Nat}$ and the \textit{selfInst} rule that give us the induction principle, which is not 
derivable in $\cc$ \cite{coquand:inria-00075471}. 
\begin{definition}[Addition]
  $ m+n := \mathsf{Ind}\ \suc\ n\ m$
\end{definition}
\noindent One can check that $ {\mu}_c \vdash + : \nat \to \nat \to \nat$ by instantiating the $C$ in the type of $\ind$ with $\lambda y.\nat$; the type of $\ind$ then becomes $(\nat \to \nat) \to \nat \to (\nat \to \nat)$. 
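\noindent As a concrete illustration (a routine calculation), with $1 := \mathsf{S}\ 0$ and $2 := \mathsf{S}\ 1$ we can compute:

\noindent $1 + 1 \equiv \mathsf{Ind}\ \suc\ 1\ 1 \to_{\beta}^{*} 1\ \suc\ 1 \to_{\beta}^{*} \suc\ (0\ \suc\ 1) \to_{\beta}^{*} \suc\ 1 \equiv 2$.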
\begin{definition}[Leibniz's Equality]
\

$\mathsf{Eq}  :=  \lambda A[:*]. \lambda x[:A].\lambda y[:A]. \forall C: A\to *. C\ x \to C\ y$.
\end{definition}
\noindent Note that we use $x =_A y$ to denote $\mathsf{Eq}\ A \ x \ y$. We often write $t = t'$ when the type is clear. One can check that if $\vdash A:*$ and $\vdash x, y : A$, then $\vdash x =_A y : *$. 
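\noindent For example, reflexivity holds by the identity term: if $\vdash x : A$, then $\vdash \lambda c.c : x =_A x$, since for any $C : A \to *$, assuming $c : C\ x$ trivially gives $c : C\ x$.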

\begin{theorem}
\noindent $ {\mu}_c \vdash \Pi x : \mathsf{Nat}. x + 0 =_{\nat} x$
\end{theorem}

\begin{proof}
  We prove this by induction. We instantiate $C$ in the type of $\ind$ with $\lambda n. (n + 0) =_{\nat} n$. So by beta reduction at type level, we have $(\forall n : \mathsf{Nat}. ( n + 0 =_{\nat} n) \to ( (\mathsf{S}\ n) + 0 =_{\nat} \suc\ n)) \to  0+0 =_{\nat} 0 \to \Pi n:\mathsf{Nat}. n +0=_{\nat} n$. So for the base case, we need to show $0+0 =_{\nat} 0$, which is easy. For the step case, we assume $ n + 0 =_{\nat} n$ (Induction Hypothesis), and want to show $(\mathsf{S}\ n) + 0 =_{\nat} \suc\ n$. Since $(\mathsf{S}\ n) + 0 \to_{\beta} \suc\ (n \ \suc \ 0) =_{\beta} \suc (n+0)$,  by congruence on the induction hypothesis, we have $(\mathsf{S}\ n) + 0 =_{\nat} \suc\ n$. Thus $\Pi x : \mathsf{Nat}. x + 0 =_{\nat} x$.
\end{proof}

%% The above theorem is provable inside $\self$. It shows how to inhabit the type $\Pi x : \mathsf{Nat}. x + 0 =_{\nat} x$ given $ {\mu}_c$, using $\mathsf{Ind}$. %%Similar development can be carr

\subsection{Vector Encoding}
\label{vector}

\begin{definition}[Vector]
Let $ {\mu}_v$ be the following definitions:

\noindent  $(\mathsf{vec} :* \to \nat \to *) \mapsto$

$\lambda U:*.\lambda n:\nat . \gray{$\iota x$}. \forall C: \gray{$\Pi p:\nat.\vecc\ U\ p \to *$}.$

$(\Pi m: \mathsf{Nat}.\Pi u:U.\forall y: \vecc \ U\ m.(  C \  m \ y \  \to C \ (\mathsf{S}\ m)\ (\mathsf{cons}\ m\ u\ y)))$

$\to C \ 0 \ \mathsf{nil} \to  C\ n\ \gray{$x$}$

\noindent $(\nil:\forall U:*.\vecc\ U\ 0) \mapsto \lambda y. \lambda x.x$

\noindent $(\cons: \Pi n: \mathsf{Nat}.\forall U:*. U \to \vecc \ U\ n \to \vecc \ U\ (\suc \ n)) \mapsto \lambda n.\lambda v. \lambda l. \lambda y. \lambda x.y \ n\ v\ (l \ y\ x)$

\noindent where $n: \mathsf{Nat}, v: U, l: \vecc \ U\ n, y:\Pi m: \mathsf{Nat}.\Pi u:U.\forall z: \vecc\ U\ m.(  C \  m \ z \  \to C \ (\mathsf{S}\ m)\ (\mathsf{cons}\ m\ u\ z)) , x:  C \ 0 \ \nil $. 

\end{definition}

\noindent \textbf{Typing}: It is easy to see that $\nil$ is typable to $\forall U:*.\vecc \ U\ 0$. Now we show how $\cons$ is typable to $\Pi n: \mathsf{Nat}.\forall U:*. U \to \vecc\ U\ n \to \vecc \ U\ (\suc\ n)$. We can see that $l\ y\ x: C\ n\ l$ (using \textit{selfInst} on $l$). After the instantiation with $l$, the type of $y \ n\ v$ is $C\ n\ l \to  C\ (\mathsf{S}\ n)\ (\mathsf{cons}\ n\ v\ l)$. So $y\ n\ v \ (l\ y\ x):  C\ (\mathsf{S}\ n)\ (\mathsf{cons}\ n\ v\ l)$. So $\gray{$\lambda y. \lambda x. y\ n\ v \ (l\ y\ x)$} :  \forall C: (\Pi p:\nat.\vecc \ U\ p \to *).(\Pi m: \mathsf{Nat}.\Pi u:U.\forall y: \vecc\ U\ m.(  C \  m \ y \  \to C \ (\mathsf{S}\ m)\ (\mathsf{cons}\ m\ u\ y)))\to  C\ 0\ \nil \to C\ (\suc\ n) \ \gray{$(\lambda y. \lambda x. y\ n\ v \ (l\ y\ x))$} $. So by \textit{selfGen}, we have $\lambda y. \lambda x. y\ n\ v \ (l\ y\ x) : \vecc \ U\ (\suc \ n)$. Thus $ \cons : \Pi n:\mathsf{Nat}.\forall U : * . U \to \vecc\ U\ n \to \vecc\ U\ (\suc\ n)$.
  

\begin{definition}[Induction Principle for Vector]
\

  \noindent  $ {\mu}_v \vdash \mathsf{Ind}  : \forall U:*.\Pi n:\nat. \forall C: \Pi p:\nat.\vecc\ U\ p\to *.$
  
 $(\Pi m: \mathsf{Nat}.\Pi u:U.\forall y: \vecc\ U\ m .(  C \  m \ y \  \to C \ (\mathsf{S}\ m)\ (\mathsf{cons}\ m\ u\ y))) $
  
  $\to C\ 0\ \nil \to \Pi x: \vecc\ U\ n .(C\ n\ x)$

\noindent where $\mathsf{Ind} := \lambda n.\lambda s. \lambda z. \lambda x. x\ s\ z$

\noindent $n:\nat, s : \Pi m: \mathsf{Nat}.\Pi u:U.\forall y: \vecc\ U\ m.(  C \  m \ y \  \to C \ (\mathsf{S}\ m)\ (\mathsf{cons}\ m\ u\ y)), z: C\ 0\ \nil, x:  \vecc\ U\ n$. 

\end{definition}

\begin{definition}[Append]
%%   \noindent $\app := \lambda n_1. \lambda n_2. \lambda l_1. \lambda l_2. l_1\ (\lambda n. \lambda x.\lambda v. \cons  (n+n_2)\ x\ v)\ l_2$.

%% \noindent  We can use induction to define append
%% as well.

\noindent $ {\mu}_v \vdash \app :\forall U:*. \Pi n_1:\mathsf{Nat}. \Pi n_2:\mathsf{Nat}. \vecc\ U\ n_1 \to \vecc\ U\ n_2 \to \vecc\ U\ (n_1+n_2)$

\noindent where $\app := \lambda n_1. \lambda n_2.\lambda l_1.\lambda l_2. (\mathsf{Ind}\ n_1) \ (\lambda n. \lambda x.\lambda v. \cons\  (n+n_2)\ x\ v)\ l_2 \ l_1$. 
%We can show that $\app : $
\end{definition}


\noindent \textbf{Typing}: We want to show $\app :\forall U:*. \Pi n_1:\mathsf{Nat}. \Pi n_2:\mathsf{Nat}. \vecc\ U\ n_1 \to \vecc\ U\ n_2 \to \vecc\ U\ (n_1+n_2)$. Observe that $\lambda n. \lambda x.\lambda v. \cons\  (n+n_2)\ x\ v: \Pi n:\mathsf{Nat}. \Pi x:U. \vecc\ U \ (n+n_2) \to \vecc\ U\ (n+n_2+1) $. We instantiate $C :=  \lambda y.(\lambda x.\vecc\ U\ (y + n_2))$, where $x$ does not occur free in $\vecc\ U\ (y + n_2)$, in $\mathsf{Ind}\ n_1$. By beta reductions, we get $\mathsf{Ind}\ n_1 : (\Pi m: \mathsf{Nat}.\Pi u:U.\forall y: \vecc\ U\ m.(  \vecc\ U\ (m + n_2)  \to \vecc\ U\ ((\mathsf{S}\ m) + n_2))) \to \vecc\ U\  (0+n_2) \to \Pi x: \vecc\ U\ n_1. \vecc\ U\ (n_1+n_2)$. 

\noindent So $(\mathsf{Ind}\ n_1) \ (\lambda n. \lambda x.\lambda v. \cons  (n+n_2)\ x\ v) : \vecc\ U\ (0+n_2)  \to \Pi x: \vecc\ U\ n_1. \vecc\ U\ (n_1+n_2)$. We assume $l_1: \vecc\ U\ n_1, l_2:\vecc\ U\ n_2$. Thus $(\mathsf{Ind}\ n_1) \ (\lambda n. \lambda x.\lambda v. \cons  (n+n_2)\ x\ v) \ l_2 \ l_1: \vecc\ U\ (n_1+n_2)$. 
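\noindent As an illustration (a routine unfolding of the definitions), appending the empty vector on the left returns the second vector:

\noindent $\app\ 0\ n_2\ \nil\ l_2 \to_{\beta}^{*} \nil\ (\lambda n. \lambda x.\lambda v. \cons\  (n+n_2)\ x\ v)\ l_2 \to_{\beta}^{*} l_2$,

\noindent and the expected type $\vecc\ U\ (0+n_2)$ is $\beta$-convertible to $\vecc\ U\ n_2$.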

\section{Metatheory}
\label{s}
We first outline the erasure from $\self$ to $\fomega$ with positive recursive definitions, which shows the strong normalization of $\self$. We also prove type preservation for $\self$, which involves \textit{confluence analysis} (Section~\ref{confanalysis}) and \textit{morph analysis} (Section~\ref{morphanalysis}).  

\subsection{Strong Normalization}
\label{erasure}
We prove strong normalization of $\self$ through the strong normalization of $\fomega$ with positive recursive definitions. We first define the syntax for $\fomega$ with positive recursive definitions.  

\begin{definition}[Syntax for $\fomega$ with positive definitions]

\

\noindent \textit{Terms} $t \ ::= \ x \ | \ \lambda x.t \ | \ t t'$

\noindent \textit{Kinds} $\kappa \ ::= \ * \ | \ \kappa' \to \kappa$ 

\noindent \textit{Types} $T^\kappa \ ::= \ X^\kappa \ | \ (\forall X^\kappa.T^*)^* \ | \ (T_1^* \to T_2^*)^* \ | \ (\lambda X^{\kappa_1}.T^{\kappa_2})^{\kappa_1 \to \kappa_2} \ | \ (T_1^{\kappa_1 \to \kappa_2} T_2^{\kappa_1})^{\kappa_2}$

\noindent \textit{Context} $\Gamma \ ::= \ \cdot \ | \ \Gamma, x:T^{\kappa} \ | \ \Gamma, \mu$ 

\noindent \textit{Definitions} $\mu \ ::= \ \bm{x}{S^{\kappa}}{t}{N}\cup \M{X^{\kappa}} {T^{\kappa}}{M}$

\noindent \textit{Term definitions} $\rho \ ::= \ \M{x}{t}{N}$

\end{definition} 

Note that for every $x \mapsto t, X^\kappa \mapsto T^\kappa \in \mu$, we require $\mathrm{FV}(t) = \emptyset$ and $\mathrm{FVar}(T^\kappa) \subseteq \{X^\kappa\}$; moreover, $X^\kappa$ may occur only in positive positions in $T^\kappa$, and no mutually recursive definitions are allowed. We elide the typing rules for space reasons. We adopt kind-annotated types to obtain a clearer interpretation of types; e.g.\ with kind annotations, we do not need to worry about the interpretation of ill-formed types like $(\lambda X.X) \to (\lambda X. X)$.



\begin{definition}[Erasure for kinds]
We define a function $F$ that maps kinds in $\self$ to kinds in $\fomega$ with positive definitions.

  $F(*) \ := *$  

$F(\Pi x:T.\kappa) \ := F(\kappa)$

$F(\Pi X:\kappa'.\kappa) \ := F(\kappa') \to F(\kappa)$  


\end{definition}
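\noindent For example, the kind $\nat \to *$ (i.e.\ $\Pi x:\nat.*$) of the predicate $C$ in the definition of $\mathsf{Nat}$ erases as $F(\Pi x:\nat.*) = F(*) = *$, while $F(\Pi X:*.\Pi x:\nat.*) = * \to *$: term dependencies in kinds are discarded, and only the type-level arrow structure survives.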

\begin{definition}[Erasure relation]
\noindent We define relation $\Gamma \vdash T \triangleright T'^\kappa$ (intuitively, it means that type $T$ can be erased to $T'^\kappa$ under the context $\Gamma$), where $T,\Gamma$ are types and context in $\self$, $T'^\kappa$ is a type in $\fomega$ with positive definitions.

\

\begin{tabular}{ll}

\infer{\Gamma \vdash X \triangleright X^\kappa}{ F(\kappa') = \kappa & (X:\kappa') \in \Gamma} 


&

\infer{\Gamma \vdash \iota x.T \triangleright T_1^\kappa}{\Gamma \vdash T \triangleright T_1^\kappa}

\\
\\

\infer{\Gamma \vdash \forall X:\kappa.T \triangleright (\forall X^{F(\kappa)}. T_1^{*})^*}{\Gamma, X:\kappa \vdash T \triangleright T_1^{*}}


&
\infer{\Gamma \vdash \Pi x:T_1.T_2 \triangleright (T_a^* \to T_b^*)^*}{\Gamma \vdash T_1 \triangleright T_a^* & \Gamma \vdash T_2\triangleright T_b^*}

\\
\\

\infer{\Gamma \vdash \forall x:T_1.T_2 \triangleright T^\kappa}{\Gamma \vdash T_2 \triangleright T^\kappa}

&

\infer{\Gamma \vdash T_1 T_2 \triangleright (T_a^{\kappa_1 \to \kappa_2} T_b^{\kappa_1})^{\kappa_2}}
      {\Gamma \vdash  T_1 \triangleright T_a^{\kappa_1 \to \kappa_2} & \Gamma \vdash T_2 \triangleright T_b^{\kappa_1}}

\\
\\

\infer{\Gamma \vdash \lambda X.T \triangleright (\lambda X^{F(\kappa)}. T_a^{\kappa'})^{\kappa \to \kappa'}}{\Gamma, X:\kappa \vdash T \triangleright T_a^{\kappa'}}

&
\infer{\Gamma \vdash T\ t \triangleright T_1^\kappa}{\Gamma \vdash T \triangleright T_1^\kappa}


\\
\\
\infer{\Gamma \vdash \lambda x.T \triangleright T_1^\kappa}{\Gamma \vdash T \triangleright T_1^\kappa}

\end{tabular}

\end{definition}

\begin{definition}[Erasure for Context]
We define relation $\Gamma \triangleright \Gamma'$ inductively.  

\begin{tabular}{lll}
\infer{\Gamma, (X:\kappa) \mapsto T \triangleright \Gamma', X^{F(\kappa)} \mapsto T_a^{F(\kappa)}}{\Gamma \vdash T \triangleright T_a^{F(\kappa)} & \Gamma \triangleright \Gamma'} 

&
\infer{\Gamma, X:\kappa \triangleright\Gamma'}{ \Gamma \triangleright \Gamma'}
&
\infer{\cdot \triangleright \cdot}{}
\\
\\
\infer{\Gamma, (x:T) \mapsto t \triangleright\Gamma', x:T_a^\kappa \mapsto t}{\Gamma \vdash T \triangleright T_a^\kappa & \Gamma \triangleright \Gamma'}
&
\infer{\Gamma, x:T \triangleright\Gamma', x:T_a^\kappa}{\Gamma \vdash T \triangleright T_a^\kappa
  & \Gamma \triangleright \Gamma'}

\end{tabular}

\end{definition}

\begin{theorem}[Erasure Theorem]
\label{erase}
\
\begin{enumerate}
\item If $\Gamma \vdash T:\kappa$, then there exists a $T_a^{F(\kappa)}$ such that $\Gamma \vdash T \triangleright T_a^{F(\kappa)}$. 
  \item If $\Gamma \vdash t:T$ and $\Gamma \vdash \mathsf{wf}$, then there exist $T_a^{*}$ and $\Gamma'$ such that $\Gamma \vdash T \triangleright T_a^{*}$, $\Gamma \triangleright \Gamma'$ and $\Gamma' \vdash t:T_a^{*}$.

\end{enumerate}
\end{theorem}

%% \begin{theorem}
%%   \label{sn}
%%   If $\Gamma \vdash t:T^*$ and $\Gamma \vdash \mathsf{wf}$ in $\fomega$ with positive definitions, then $t$ is strongly normalizing.
%% \end{theorem}

%% \begin{definition}[Reducibility Candidate]
%% A reducibility candidate $\mathcal{R}_\rho$ is a set of terms such that: 
%% \begin{itemize}
%% \item (CR1) If $t \in \mathcal{R}_\rho$, then $\rho \vdash t $ is strongly normalizing.
%% \item (CR2) If $t \in \mathcal{R}_\rho$ and $\rho \vdash t \to^* t'$, then $t'\in \mathcal{R}_\rho$.
%%   \item (CR3) If $t$ is neutral and $\rho \vdash t \to^* t'$ with $t'\in \mathcal{R}_\rho$, then $t\in \mathcal{R}_\rho$. A term is neutral if it is of the form $x, t\ u$. 
%% \end{itemize}
%% \end{definition}

Now that we have obtained an erasure from $\self$ to $\fomega$ with positive definitions, we continue to show that the latter is strongly normalizing. The development below is in $\fomega$ with positive definitions. Let $\mathfrak{R}_\rho$ be the set of all reducibility candidates\footnote{The notion of reducibility candidate here slightly extends the standard one to handle definitional reduction: $\rho \vdash x \to_\beta t$, where $x \mapsto t \in \rho$. So it is parametrized by $\rho$.}. Let $\sigma$ be a mapping from type variables of kind $\kappa$ to elements of $\rho\interp{\kappa}$. 


\begin{definition}
  \
  
  \begin{itemize}
    \item $\rho\interp{*} := \mathfrak{R}_\rho$.
  \item $\rho\interp{\kappa \to \kappa'} := \{ f \ | \ \forall a \in \rho\interp{\kappa}, f(a) \in \rho\interp{\kappa'} \}$.
  \item $\rho\interp{X^\kappa}_\sigma := \sigma(X^\kappa)$. 
  \item $\rho\interp{(T_1^* \to T_2^*)^*}_{\sigma} := \{ t  \ |\ \forall u \in \rho\interp{T_1^*}_{\sigma}, tu \in \rho\interp{T_2^*}_{\sigma}\}$. 
  \item $\rho\interp{(\forall X^\kappa.T^*)^*}_{\sigma} := \bigcap_{f \in \rho\interp{\kappa}}\rho\interp{T^*}_{\sigma[f/X]}$. 
  %% \item $\interp{(\mu X^\kappa.T^\kappa)^\kappa}_{\sigma} := \mathrm{lfp}(f)$, where $f$ is the map $a \mapsto \interp{T^\kappa}_{\sigma[a/X]}$, with $a \in \interp{\kappa}$. 
    
  \item $\rho\interp{(\lambda X^{\kappa'}.T^\kappa)^{\kappa' \to \kappa}}_{\sigma} := f$ where $f$ is the map $a \mapsto \rho\interp{T^{\kappa}}_{\sigma[a/X]}$ for any $a \in \rho\interp{\kappa'}$. 
  \item $\rho\interp{(T_1^{\kappa' \to \kappa} T_2^{\kappa'})^{\kappa}}_{\sigma} := \rho\interp{T_1^{\kappa' \to \kappa}}_{\sigma}(\rho\interp{T_2^{\kappa'}}_{\sigma}) $.
    
  \end{itemize}
\end{definition}
\noindent Let $|\cdot|$ be a function that retrieves all the term definitions from the context $\Gamma$. 
\begin{definition}
Let $\rho = |\Gamma|$, and $\mathrm{FVar}(\Gamma)$ be the set of free type variables in $\Gamma$. We define $\sigma \in \rho\interp{\Gamma}$ if $\sigma(X^\kappa) \in \rho\interp{\kappa}$ for each undefined variable $X^\kappa$; and $\sigma(X^\kappa) = \mathrm{lfp}(b \mapsto \rho\interp{T^\kappa}_{\sigma[b/X^\kappa]})$, where $b$ ranges over $\rho\interp{\kappa}$, if $X^\kappa \mapsto T^\kappa \in \Gamma$. 
\end{definition}
\noindent Note that the least fixed point in $\mathrm{lfp}(b \mapsto \rho\interp{T^\kappa}_{\sigma[b/X^\kappa]})$ is well-defined, since we can extend the complete lattice of reducibility candidates to the complete lattice $(\rho\interp{\kappa}, \subseteq_{\kappa}, \cap_{\kappa})$. 
\begin{definition}
Let $\rho = |\Gamma|$ and $\sigma \in \rho\interp{\Gamma}$. We define the relation $\delta \in \rho\interp{\Gamma}$ inductively:

  \begin{tabular}{lllll}
    \infer{ \cdot  \in \rho\interp{\cdot}}{}
    &
    &
    \infer{ \delta[t/x] \in \rho\interp{\Gamma, x:T^{\kappa}}}{ \delta \in \rho\interp{\Gamma} & t \in \rho\interp{T^{\kappa}}_{\sigma}}
    &
    &
        \infer{\delta\in \rho\interp{\Gamma, (x:T^{\kappa}) \mapsto t}}{\delta \in \rho\interp{\Gamma}}


  \end{tabular}
\end{definition}

\begin{theorem}[Soundness theorem]
  \label{sn}
  Let $\rho = |\Gamma|$. If $\Gamma \vdash t:T^*$ and $\Gamma \vdash \mathsf{wf}$, then for any $\sigma, \delta \in \rho\interp{\Gamma}$, we have $\delta t \in \rho\interp{T^*}_\sigma$, with $\rho\interp{T^*}_\sigma \in \mathfrak{R}_{\rho}$. 
\end{theorem}

%% The proof of Theorem \ref{sn} apply the usual reducibility method and the method developed by Mendler~\cite{mendler:1987}.
\noindent Theorems \ref{erase} and \ref{sn} imply that every typable term in $\self$ is strongly normalizing.   %can be is consistent in the sense that not every type is inhabited. %% However, at type level, we will not have strong normalization(observe the type of $\nat$ is diverging), yet 
 %% \footnote{For readers who want to see the algorithmic typing, we provide an annotated version of the typing rules for $\self$ in appendix \ref{ann}.}
  

\subsection{Confluence Analysis}
\label{confanalysis}

The complications of proving type preservation are due to several
rules which are not syntax-directed. To prove type preservation, one
needs to ensure that if $\Pi x:T.T'$ can be transformed to $\Pi
x:T_1.T_2$, then it must be the case that $T$ can be transformed to
$T_1$ and $T'$ can be transformed to $T_2$. This is why we need to
show confluence for type-level reduction.  We first observe that the
\textit{selfGen} rule and \textit{selfInst} rule are mutually inverse,
and model the change of self type by the following reduction relation.

 %% first establish the confluence property for type level reductions, then we need a straightforward extension of Barendregt's method for proving Curry style \textbf{F} to handle the implicit product. Barendregt's method allows typing to \textit{quotient out} the \textit{transformations} of $\Pi X:\kappa.T$ and $\forall x:T.T'$. Then confluence for the type level reductions will 
 
%% Thus, one establishes the so called \textit{compatibility} property for the explicit dependent types, which is the key to achieve type preservation. 

%% We will first prove several confluence results (this process is called \textit{confluence analysis}), then we develop a technique to quotient out the transformations of $\forall x:T.T'$ and $\Pi X:\kappa.T$ (called \textit{morph analysis}). Finally, we establish the compatibility theorem, which is enough for proving preservation.
%% We want to emphasize that we give the outline of the whole process because we want to convince readers that the method itself is worth noting and can be adapted to many other Curry-style systems with mutually recursive definitions.   


\begin{definition}
\

\noindent  $\Gamma \vdash T_1 \to_{\iota} T_2 $ if $T_1 \equiv \iota x.T' $ and $T_2 \equiv [t/x]T' $ for some fixed term $t$. 
\end{definition}

\noindent Note that $\to_{\iota}$ models the \textit{selfInst} rule, and $\to_{\iota}^{-1}$ models the \textit{selfGen} rule. Importantly, the notion of $\iota$-reduction does not include congruence; that is, we do not allow rules such as: if $T \to_{\iota}T'$, then $\lambda x.T \to_{\iota} \lambda x.T'$. The purpose of $\iota$-reduction is to emulate the typing rules \textit{selfInst} and \textit{selfGen}. %% The goal of confluence analysis is to prove the following theorem.
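\noindent For example, taking (informally) $\mathsf{Nat} \equiv \iota x. \forall C: \mathsf{Nat}\to *.  (\forall m : \mathsf{Nat}. C\ m \to C\ (\mathsf{S}\ m)) \to C\ 0 \to C\ x$ from the definition of Church numerals, for any term $n$ we have $\Gamma \vdash \mathsf{Nat} \to_{\iota} \forall C: \mathsf{Nat}\to *.  (\forall m : \mathsf{Nat}. C\ m \to C\ (\mathsf{S}\ m)) \to C\ 0 \to C\ n$, which is precisely the instantiation performed by \textit{selfInst} in the proof of the induction principle.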

%% \begin{theorem}[Fundamental Theorem]
%% \label{conf}
%%   $\to_{o,\iota,\beta,\mu}$ is confluent.
%% \end{theorem}

 We first show confluence of
$\to_{\beta}$ by applying the standard Tait--Martin-L\"of method, and then apply the Hindley--Rosen commutativity theorem to show that $\to_{\iota}$ commutes with $\to_{\beta}$. %% Thus we can conclude Theorem \ref{conf}.
 We use $\to^*$ to denote the reflexive transitive closure of $\to$.

 \begin{lemma}
   \label{conf:beta}
   $\to_{\beta}$ is confluent.
 \end{lemma}
 
\begin{definition}[Commutativity]
  \noindent Let $\to_1, \to_2$ be two notions of reduction. 
Then $\to_1$ commutes with $\to_2$ iff $\leftarrow_1\cdot\to_2\ \subseteq\ \to_1\cdot\leftarrow_2$.

\end{definition}

\begin{proposition}
 Let $\to_1, \to_2$ be two notions of reduction. Suppose both $\to_1$ and $\to_2$ are
confluent, and $\to_1^*$ commutes with $\to_2^*$. Then $\to_1 \cup \to_2$ is confluent.
%% \item If $\to_1$ weak commutes with $\to_2$, then $\to_1^*$ and $\to_2^*$ commute\footnote{Taken from Chapter 3.3 of \cite{Barendregt:1985}.}.

\end{proposition}

\begin{lemma}
  $\to_{\beta}$ commutes with $\to_{\iota}$. Thus $\to_{\beta, \iota}$ is confluent, where $\to_{\beta, \iota} = \to_\beta \cup \to_\iota$.
\end{lemma}


\begin{theorem}[$\iota$-elimination]
\label{invsc}
If $\Gamma \vdash \Pi x:T_1.T_2 =_{\beta,\iota} \Pi x:T_1'.T_2'$, then $\Gamma \vdash T_1 =_{\beta} T_1'$ and $\Gamma \vdash T_2 =_{\beta} T_2'$. 
\end{theorem}

\begin{proof}
  If $\Gamma \vdash \Pi x:T_1.T_2 =_{\beta,\iota } \Pi x:T_1'.T_2'$, then by the confluence of $\to_{\beta,\iota}$, there exists a $T$ such that $\Gamma \vdash \Pi x:T_1.T_2 \to_{\iota,\beta }^* T$ and $\Gamma \vdash \Pi x:T_1'.T_2' \to_{  \iota,\beta }^* T$. Since every reduction step on $\Pi x:T_1.T_2$ preserves the structure of the dependent type, the $\to_{\iota}$-reduction can never apply; thus $\Gamma \vdash \Pi x:T_1.T_2 \to_{ \beta }^* T$ and $\Gamma \vdash \Pi x:T_1'.T_2' \to_{ \beta }^* T$. So $T$ must be of the form $\Pi x:T_3.T_4$, with $\Gamma \vdash T_1 \to_{ \beta }^* T_3$, $\Gamma \vdash T_1' \to_{ \beta}^* T_3$, $\Gamma \vdash T_2  \to_{ \beta }^* T_4$ and $\Gamma \vdash T_2' \to_{ \beta }^* T_4$. Finally, we have $\Gamma \vdash T_1 =_{\beta  } T_1'$ and $\Gamma \vdash T_2 =_{\beta  } T_2'$. 

\end{proof}


%% \subsection{Type Preservation Result}
%% \label{result}
%% \begin{lemma}
%% \label{type}
%%   Let $([\Gamma, \Delta], T_1) {\to_{o,\iota,\beta,\mu,i,g,I,G}}^* ([\Gamma], T_2)$. If $\Gamma, \Delta \vdash t:T_1$ with $\mathsf{dom}(\Delta) \# \mathsf{FV}(t)$, then $\Gamma \vdash t:T_2$. 
%% \end{lemma}

%% \noindent \textbf{Note}: We write $\stackrel{t}{\to}_{\beta,\mu,\iota,o, i, g, I, G}$ to means
%% the same thing as ${\to}_{\beta,\mu,\iota,o, i, g, I, G}^*$ with an emphasis on the subject $t$. 

%% \begin{lemma} 
%% \label{conv}
%%  If $([\Gamma], T_1) \stackrel{t}{\to}_{\beta,\mu,\iota,o, i, g, I, G} ([\Gamma'],T_2)$ and $\Gamma \vdash t =_{\beta,\mu} t'$, then $([\Gamma], T_1) \stackrel{t'}{\to}_{\beta,\mu,\iota,o, i, g, I, G} ([\Gamma'],T_2)$.  
%% \end{lemma} 
%% \begin{proof} By induction on length of $([\Gamma],T_1) \stackrel{t}{\to}_{\beta,\mu,\iota,o, i,g, I, G} ([\Gamma],T_2)$. Note that this lemma is \textbf{not} subject expansion, do not get confused.
%% \end{proof}


%% \begin{lemma} 
%%     \label{perm} If $\Gamma, \tilde{\mu}, y:T'
%% \vdash t:T$ , then $\Gamma, y: \mu  T',\tilde{\mu} 
%%  \vdash t :T$.  
%%  \end{lemma}
%% \begin{proof} By induction on the
%% derivation of $\Gamma, \tilde{\mu} , y:T' \vdash t:T$.
%% \end{proof}

%% \noindent \textbf{Note}: If $\Delta = x:T,..., X:\kappa$, then $\mu \Delta := x:\mu T,..., X:\mu \kappa$. 

%% \begin{lemma} 
%%     \label{metacong} 
%%     If $([\Gamma, \tilde{\mu}, \Delta], T) \stackrel{t}{\to}_{\beta,\mu,\iota,o, i, g, I, G} ([\Gamma, \tilde{\mu}, \Delta'], T')$ for some $\Delta, \Delta'$ and $\Gamma, \mu \Delta , \tilde{\mu}
%% \vdash \ \mathsf{ok}$, then 

%% \noindent $([\Gamma, \mu \Delta], \mu T) \stackrel{\mu t }{\to}_{\beta,\mu,\iota,o, i, g, I, G} ([\Gamma, \mu \Delta'], \mu T')$.
%%  \end{lemma} 

%% The proof of type preservation proceeds as usual. The inversion lemma and substitution lemma
%% are standard. Note that in the final preservation proof, we use the compatibility theorem.
 
%% \begin{lemma}[Inversion]

%%   \begin{itemize}
%%   \item If $\Gamma \vdash x:T$, then exist $\Delta, T_1$ such that $([\Gamma, \Delta], T_1) {\to_{o,\iota,\beta,\mu,i,g,I,G}^*} ([\Gamma], T)$ and $(x:T_1) \in \Gamma$.
%%  \item If $\Gamma \vdash t_1 t_2:T$, then exist $\Delta, T_1, T_2$ such that $\Gamma, \Delta \vdash t_1:\Pi x:T_1.T_2$ and $\Gamma, \Delta \vdash t_2:T_1$ and $([\Gamma, \Delta], [t_2/x]T_2) {\to_{o,\iota,\beta,\mu,i,g,I,G}^*} ([\Gamma],T)$. 

%% \item If $\Gamma \vdash \lambda x.t:T$, then exist $\Delta, T_1, T_2$ such that $\Gamma, \Delta , x:T_1 \vdash t:T_2$ and $([\Gamma, \Delta], \Pi x:T_1. T_2) {\to_{o,\iota,\beta,\mu,i,g,I,G}^*} ([\Gamma], T)$. 

%% \item If $\Gamma \vdash \mu t:T$, then exist $\Delta, T_1$ such that $\Gamma, \Delta, \tilde{\mu} \vdash t:T_1$ and $([\Gamma, \Delta], \mu T_1)$

%% $ {\to_{o,\iota,\beta,\mu,i,g,I,G}^*} ([\Gamma],T)$.
%%   \end{itemize}
%% \end{lemma}

%% \begin{lemma}[Substitution]
%% \label{subst1}
%% \

%%   \begin{enumerate}
%%   \item   If $\Gamma \vdash t:T$, then for any mixed substitution $\phi$ with $\mathrm{dom}(\phi) \# \mathrm{FV}(t) $, $\phi \Gamma \vdash t: \phi T$.
%% \item If $\Gamma, x:T \vdash t:T'$ and $\Gamma \vdash t':T$, then $\Gamma \vdash [t'/x]t:[t'/x]T'$.
%%   \end{enumerate}

%% \end{lemma}

%% \begin{proof}
%%   By induction on derivation.
%% \end{proof}
\subsection{Morph Analysis}
\label{morphanalysis}
The methods of the previous section are not suitable for dealing with implicit polymorphism, since, as a reduction relation, polymorphic instantiation is not confluent. For example, $\forall X:\kappa. X$ can be instantiated either to $T$ or to $T \to T$. The only syntactic method known to us for proving type preservation for Curry-style System \textbf{F} is Barendregt's method~\cite{Barendregt:1993}. We will extend his method to handle the instantiation of $\forall x:T.T'$. 
 

\begin{definition}[Morphing Relations]
\

  \begin{itemize}
  \item $([\Gamma], T_1) \to_{i} ([\Gamma], T_2)$ if $T_1 \equiv \forall X:\kappa.T' $ and $T_2 \equiv [T/X]T' $ for some $T$ such that $\Gamma \vdash T:\kappa$. 
  \item $([\Gamma,X:\kappa], T_1) \to_{g} ([\Gamma], T_2) $ if $T_2 \equiv \forall X:\kappa.T_1$ and $\Gamma \vdash \kappa:\Box$.
  \item $([\Gamma], T_1) \to_{I} ([\Gamma], T_2) $ if $T_1 \equiv \forall x:T.T' $ and $T_2 \equiv [t/x]T' $ for some $t$ such that $\Gamma \vdash t:T$. 
  \item $([\Gamma, x:T], T_1) \to_{G} ([\Gamma], T_2) $ if $T_2 \equiv \forall x:T.T_1 $ and $\Gamma \vdash T:*$. 

  \end{itemize}
\end{definition}
  
\noindent Intuitively, $([\Gamma], T_1) \to ([\Gamma'], T_2)$ means that $T_1$ can be transformed to $T_2$ with a change of context from $\Gamma$ to $\Gamma'$. One can view morphing relations as a way to model the typing rules that are not syntax-directed. Note that morphing relations are not intended to be viewed as rewrite relations. Instead of proving confluence for these morphing relations, we use substitutions to \textit{summarize} the effects of a sequence of morphing relations. Before we do that, we first ``lift'' $=_{\beta,\iota}$ to 
a form of morphing relation.
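\noindent For example, if $\Gamma \vdash \nat : *$, then $([\Gamma], \forall X:*. X \to X) \to_{i} ([\Gamma], \nat \to \nat)$, and conversely $([\Gamma, X:*], X \to X) \to_{g} ([\Gamma], \forall X:*. X \to X)$; the relations $\to_{I}$ and $\to_{G}$ play the analogous roles for the implicit product $\forall x:T.T'$.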

\begin{definition}
  $([\Gamma], T) =_{\beta,\iota} ([\Gamma], T')$ if $\Gamma \vdash T =_{\beta,\iota} T'$ and $\Gamma \vdash T:*$ and $\Gamma \vdash T':*$.
\end{definition}

\noindent The best way to understand the $E, G$ mappings below is through understanding Lemmas~\ref{igelim} and~\ref{IGelim}. They give concrete demonstrations of how to \textit{summarize} a sequence of morphing relations.

\begin{definition}
\

  \begin{tabular}{lll}
\gray{$E(\forall X:\kappa.T) := E(T)$}
&
  $E(X) := X$

&

 $E(\Pi x:T_1. T_2) := \Pi x:T_1. T_2$

\\
$E(\lambda X.T) := \lambda X.T$

&

$E(T_1 T_2) := T_1 T_2$
&

$E(\forall x:T'.T) := \forall x:T'.T$

\\
$E(\iota x.T) := \iota x.T$

&
$E( T \ t) := T \ t$

&
$E(\lambda x.T) := \lambda x.T$
\end{tabular}
\end{definition}

\begin{definition}
\

  \begin{tabular}{lll}
  $G(\forall X:\kappa.T) := \forall X:\kappa.T$
&
  $G(X) := X$

&

 $G(\Pi x:T_1. T_2) := \Pi x:T_1. T_2$

\\
$G(\lambda X.T) := \lambda X.T$

&

$G(T_1 T_2) := T_1 T_2$
&

\gray{$G(\forall x:T'.T) := G(T)$}

\\
$G(\iota x.T) := \iota x.T$

&
$G( T \ t) := T \ t$

&
$G(\lambda x.T) := \lambda x.T$
\end{tabular}
\end{definition}


\begin{lemma}
\label{subeg}
  $E([T'/X]T) \equiv [T''/X]E(T)$ for some $T''$; $G([t/x]T) \equiv [t/x]G(T)$.
\end{lemma}
\begin{proof}
  By induction on the structure of $T$.
\end{proof}

\begin{lemma}
\label{igelim}
  If $([\Gamma], T) {\to_{i,g}^*} ([\Gamma'],T')$, then there exists a type substitution $\sigma$ such that $\sigma E(T) \equiv E(T')$. 
\end{lemma}
\begin{proof}
  It suffices to consider $([\Gamma],T) {\to_{i,g}} ([\Gamma'], T')$.
If $T' \equiv \forall X:\kappa .T$ and $\Gamma = \Gamma',X:\kappa$, then $E(T') \equiv E(T)$.
If $T \equiv \forall X:\kappa .T_1$ and $T' \equiv [T''/X]T_1$ and $\Gamma = \Gamma'$, then $E(T) \equiv E(T_1)$. By Lemma~\ref{subeg}, we know $E(T') \equiv E([T''/X]T_1) \equiv [T_2/X]E(T_1)$ for some $T_2$. 
\end{proof}


\begin{lemma}
\label{IGelim}
  If $([\Gamma], T) {\to_{I,G}^*} ([\Gamma'], T')$, then there exists a term substitution $\delta$ such that $\delta G(T) \equiv G(T')$. 
\end{lemma}
\begin{proof}
  It suffices to consider $([\Gamma], T) {\to_{I,G}} ([\Gamma'], T')$.
If $T' \equiv \forall x:T_1 .T$ and $\Gamma = \Gamma',x:T_1$, then $G(T') \equiv G(T)$.
If $T \equiv \forall x:T_2 .T_1$ and $T' \equiv [t/x]T_1$ and $\Gamma = \Gamma'$, then $G(T) \equiv G(T_1)$. By Lemma~\ref{subeg}, we know $G(T') \equiv G([t/x]T_1) \equiv [t/x]G(T_1)$. 
\end{proof}

\begin{lemma}
\label{typesub}
  If $([\Gamma],\Pi x:T_1. T_2) {\to_{i,g}^*} ([\Gamma'], \Pi x:T_1'.T_2')$, then there exists a type substitution $\sigma$ such that $\sigma(\Pi x:T_1.T_2) \equiv \Pi x:T_1'.T_2'$.
\end{lemma}

\begin{proof}
  By Lemma~\ref{igelim}.
\end{proof}

\begin{lemma}
\label{termsub}
  If $([\Gamma], \Pi x:T_1. T_2) {\to_{I,G}^*} ([\Gamma'], \Pi x:T_1'.T_2')$, then there exists a term substitution $\delta$ such that $\delta(\Pi x:T_1.T_2) \equiv \Pi x:T_1'.T_2'$.
\end{lemma}

\begin{proof}
  By Lemma~\ref{IGelim}.
\end{proof}

\noindent Let $\to_{\iota,\beta,i,g,I,G}^*$ denote $(\to_{i,g,I,G} \cup =_{\iota,\beta})^*$.
Let $\to_{\iota,\beta,i,g,I,G}$ denote $\to_{i,g,I,G} \cup =_{\iota,\beta}$. The goal of confluence analysis and morph analysis is to establish the following \textit{compatibility} theorem. 

\begin{theorem}[Compatibility]
\label{comp}
  If $([\Gamma], \Pi x:T_1. T_2)  \to_{\iota,\beta,i,g,I,G}^* ([\Gamma'],\Pi x:T_1'. T_2')$, then there exists a mixed substitution\footnote{A substitution that contains both term substitution and type substitution.} $\phi$ such that $([\Gamma], \phi(\Pi x:T_1. T_2)) =_{\iota,\beta} ([\Gamma],\Pi x:T_1'.T_2')$. Thus $\Gamma \vdash \phi T_1 =_{\beta} T_1'$ and $\Gamma \vdash \phi T_2 =_{\beta} T_2'$ (by Theorem~\ref{invsc}).
\end{theorem}
\begin{proof}
By Lemma~\ref{termsub} and~\ref{typesub}, making use of the fact that if $\Gamma \vdash t =_{\iota,\beta} t'$, then for any mixed
substitution $\phi$, we have $\Gamma \vdash \phi t =_{\iota,\beta} \phi t'$.
\end{proof}

\begin{theorem}[Type Preservation]
  \label{presv}
  If $\Gamma \vdash t:T$ and $\Gamma \vdash t \to_{\beta} t'$ and $\Gamma \vdash \mathsf{wf}$, then $\Gamma \vdash t':T$.
\end{theorem}

\section{$0 \not = 1$ in $\self$}
\label{zero}
The proof of $0 \not = 1$ follows the same method as in Theorem \ref{contract}, while emptiness of $\bot$ needs the erasure and preservation theorems. Notice that in this section, by $a = b$, we mean $\forall C: A \to *. C\ a \to C\ b$ with $a, b :A$. 
\begin{definition}
  $\bot := \forall A:*.\forall x: A. \forall y:A. x = y$. 
\end{definition}

\begin{theorem}
 There is no term $t$ such that ${\mu}_c \vdash t: \bot$.
\end{theorem}
\begin{proof}
Suppose $ {\mu}_c \vdash t: \bot$. By the erasure theorem (Theorem \ref{erase}) in Section \ref{erasure}, we have $ F( {\mu}_c)\vdash t:\forall A:*. \forall C:*. C \to C$ in $\fomega$. We know that $\forall A:*.\forall C:*. C \to C$ is a singleton type\footnote{Note that we are dealing with Curry-style $\fomega$.}, inhabited (up to $\beta$-reduction) only by $\lambda z.z$. This means $t \to_{\beta}^* \lambda z.z$ (the term reductions of $\fomega$ with positive definitions are the same as those of $\self$) and $ {\mu}_c \vdash \lambda z.z: \bot$ in $\self$ (by type preservation, Theorem \ref{presv}). Then we would have $ {\mu}_c, A:*, x:A,y:A,  C: A \to *, z: C\ x \vdash z :C\ y$. We know this derivation is impossible since $C\ x \not \cong C\ y$. 
  
\end{proof}
\begin{theorem}
$ {\mu}_c \vdash 0 = 1 \to \bot$.
\end{theorem}
\begin{proof}
  This proof follows the method in Theorem \ref{contract}. Let $\Gamma =  {\mu}_c, a : (\forall B: \nat\to *. B\ 0 \to B\ 1), A:*, x:A, y:A, C:A\to *, c: C\ x$. We want to construct a term of type $C\ y$. Let $F := \lambda n[:\nat]. n\ [\lambda p:\nat. A]\ (\lambda q[:A].y)\ x$, and note that 
  $F: \nat \to A$. We know that
 $F\ 0 =_{\beta} x$ and $F\ 1 =_{\beta} y$. So we can indeed convert the type of $c$ from $C\ x$ to $C\ (F\ 0)$. And then we instantiate the $B$ in $\forall B:\nat\to *. B\ 0 \to B\ 1$ with $\lambda x[:\nat].C\ (F\ x)$. So we have $C\ (F\ 0) \to C\ (F\ 1)$ as the type of $a$. So $a\ c : C\ (F\ 1)$, which means $a\ c : C\ y$. So we have just shown how to inhabit $0 = 1 \to \bot$ in $\self$. 
\end{proof}
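The discriminator $F$ in the proof above has a direct computational reading on Church-encoded numerals. The following is a minimal Python sketch of that reading; the name \texttt{discriminator} is ours, added only for illustration.

```python
# Church numerals: a numeral takes a "successor" s and a "zero" z.
zero = lambda s: lambda z: z          # Church 0
one  = lambda s: lambda z: s(z)       # Church 1

# The proof's discriminator F := \n. n (\q. y) x,
# which sends Church 0 to x and Church 1 to y.
def discriminator(x, y):
    return lambda n: n(lambda q: y)(x)

F = discriminator("x", "y")
assert F(zero) == "x"   # F 0 beta-reduces to x
assert F(one) == "y"    # F 1 beta-reduces to y
```

If $0 = 1$ held, $C\ (F\ 0)$ and $C\ (F\ 1)$ would be interchangeable, which is exactly the step the proof exploits.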

\section{Summary}

We have revisited lambda encodings in type theory, and shown how a new
self type construct $\iota x. T$ supports dependent eliminations with
lambda encodings, including induction principles.  We considered
System $\self$, which incorporates self types together with implicit
products and a restricted version of global positive recursive
definition. The corresponding induction principles for Church- and
Parigot-encoded datatypes are derivable in $\self$.  By changing the
notion of contradiction from explosion to equational inconsistency, we
are able to show $0 \not = 1$ in both $\cc$ and $\self$. We proved
type preservation, which is nontrivial for $\self$ since several rules
are not syntax-directed.  We also defined an erasure from $\self$ to
$\fomega$ with positive definitions, and proved strong normalization
of $\self$ by showing strong normalization of $\fomega$ with positive
definitions. 

\chapter{Lambda Encoding with Comprehension}
\label{comprehension}
In this chapter, we investigate the iota-binder from a different perspective. Instead of viewing the iota-binder as a type construct, we view it as a set-forming construct. For example, if $F[x]$ is a formula containing a free term variable $x$, then $\iota x.F[x]$ describes the set of terms $t$ satisfying the formula, i.e. $t \in \iota x.F[x]$ iff $F[t]$. Recall that in Chapter \ref{selftype}, we have $\vdash t : \iota x.T$ iff $\vdash t:[t/x]T$. If we compare $t \in \iota x.F[x]$ with $\vdash t: \iota x.T$, we observe a similarity between the meta-level typing relation (denoted by ``$:$'') and the set membership relation ``$\in$'', which lies in the object logic. This observation is inspired by our earlier work on internalization (\cite{fu2011framework}; see also Chapter \ref{internalization}). At this point we are being informal, because it is hard to draw a precise connection between $F[t]$ and $\vdash t:[t/x]T$: equating $t \in \iota x.F[x]$ with $F[t]$ blurs the grammatical structure of the logic. Furthermore, one cannot na\"ively view the self type of Chapter \ref{selftype} as a formula. If both $\iota x.T$ and $T$ corresponded to formulas, then, since $\iota x.F[x]$ represents a set rather than a formula, it would again be incoherent to equate $\iota x.F[x]$ with $\iota x.T$. Nonetheless, the observation above motivates us to investigate the iota-binder from a purely logical perspective. 

Another source of inspiration for the work in this chapter is Hatcher's formulation of Frege's logic \cite{hatcher:1982}. Hatcher presents Frege's system \cite{frege1967basic} in modern notation, i.e. as a logic with a basic set-like construct and a comprehension axiom, and shows how to prove all of Peano's axioms in Frege's system. Although Frege's system is inconsistent, the development of Peano's axioms, and especially the derivation of the induction principle, is remarkable and deserves emphasis over the inconsistency. 

We first present Frege's system $\mathfrak{F}$ (Section \ref{frege}) to motivate our construction of arithmetic within the lambda calculus. We then give a formulation of a second-order theory of the lambda calculus based on the iota-binder $\iota$ and the epsilon relation $\ep$, which we call $\mathfrak{G}$ (Section \ref{sysg}). There are at least three similar systems, namely Girard's formulation of $\mathbf{HA}_2$ \`a la Takeuti \cite{Girard:1989}, Krivine's $\mathbf{FA}_2$ \cite{krivine2002lambda} and Takeuti's second-order logic \cite{takeuti1975proof}. There are two subtle differences between $\systemg$ and these systems. The first is that the domain of individuals of $\systemg$ consists of lambda terms rather than a primitive notion of number. The second is that $\systemg$ has the $(\ep, \iota)$-notation: set abstraction and the membership relation are explicit in the object language, so a comprehension axiom is needed in $\systemg$. The other systems instead use predication in place of the membership relation; set abstraction is implicit at the meta-level, and the comprehension axiom is admissible by performing substitution. We find that explicit $(\ep, \iota)$-notation together with an explicit comprehension axiom is easier to extend to a full-fledged higher-order system and easier to implement (see Chapter \ref{final}). In Section \ref{internal}, we define a notion of polymorphic-dependent typing within $\systemg$, which benefits from the fact that $\systemg$ admits explicit $(\ep, \iota)$-notation. We prove all of Peano's axioms in Section \ref{peano}. We enrich the reduction on lambda terms with $\eta$- and $\Omega$-reductions, and are then able to show that members of inductively defined sets such as $\nat$ are terminating with respect to \textit{head beta-reduction} (Section \ref{prog}). Finally, we show that the notion of Leibniz equality in $\systemg$ is \textit{faithful} to the conversions of the lambda calculus (Section \ref{leibniz}). 
 

\section{Frege's System $\mathfrak{F}$}
\label{frege}

Certain inconsistent systems and their corresponding antinomies are invaluable: not only can the antinomies serve as criteria for maintaining consistency, but, perhaps more importantly, they give us examples of how to reconstruct a large part of mathematics within these systems. Frege's system (\`a la Hatcher) belongs to this category\footnote{Of course, one should also mention Church's lambda calculus.}. In fact, $\systemg$ is inspired by the Fregean construction of numbers. We formalize an intuitionistic version of Frege's system $\mathfrak{F}$, then show how to derive basic arithmetic in $\mathfrak{F}$ and how the antinomy arises. 

\begin{definition}[Syntax]
\

\noindent \textit{Domain Terms/Set} $a, b, s \ :: = \ x \ | \ \iota x.F$

\noindent \textit{Formula} $F \ ::= \bot \ | \ s \ep s' \ | \ \ F_1 \to F_2 \ | \ \forall x.F \ | \ F \wedge F'$

\noindent \textit{Context} $\Gamma \ :: = \ \cdot \ | \ \Gamma, F$

\end{definition} 

\noindent We identify three syntactical categories in $\mathfrak{F}$, namely \textit{domain terms}, \textit{formulas} and \textit{contexts}; in $\mathfrak{F}$, the notion of \textit{set} coincides with the notion of domain term. Note that \textit{set} in this chapter is just a name for a syntactical category; it should not be confused with the notion of set in a set theory like $\mathbf{ZF}$.  

\begin{definition}[Deduction Rules] \fbox{$\Gamma \vdash F$} 

  \

\begin{tabular}{lll}
    
\infer{\Gamma \vdash F}{F \in \Gamma}

&
\infer{\Gamma \vdash  F_2}{\Gamma \vdash 
F_1 &  F_1 \cong F_2}

&

\infer{\Gamma \vdash \forall x.F}
{\Gamma \vdash  F &  x \notin \mathrm{FV}(\Gamma)}

\\
\\
\infer{\Gamma \vdash [s/x]F}{\Gamma
\vdash \forall x.F}

&

\infer{\Gamma \vdash F_1\to F_2}
{\Gamma, F_1 \vdash F_2}

&

\infer{\Gamma \vdash  F_2}{\Gamma
\vdash F_1 \to F_2 & \Gamma \vdash  F_1}
\\
\\

\infer{\Gamma \vdash F_i}{\Gamma
\vdash  F_1 \wedge F_2 }

&

\infer{\Gamma \vdash  F_1 \wedge F_2}{\Gamma
\vdash F_1 & \Gamma \vdash  F_2 }

\end{tabular}

\end{definition}

\noindent Note that $F_1 \cong F_2$ is specified by the comprehension axiom. 

\begin{definition}[Comprehension]
$s \ep (\iota x.F) \cong [s/x]F$
\end{definition}

\noindent The comprehension axiom is essential for the Fregean construction of numbers: both the definition of number and the induction principle for numbers rely on it. Because the notions of domain term and set coincide, the comprehension axiom makes $\mathfrak{F}$ inconsistent, as we will show later. %% There are many ways to avoid the antinomy, for examples,  Russell's type theory \cite{whitehead1927principia}, Quine's new foundation \cite{quine1937new} and Zermelo's $\mathbf{ZF}$ \cite{zermelo1908untersuchungen}. In next section, we present System $\systemg$, which separates the notion of domain terms and set, thus is in the spirit of Quine's stratefication.

\begin{definition}[Equality]
  $a = b := \forall z. (z \ep a \equiv z \ep b)$.
\end{definition}

\noindent For convenience, we write $a \equiv b$ to denote $(a \to b) \wedge (b \to a)$. We also write $a \not = b$
for $a = b \to \bot$, and $\exists a. A$ for $(\forall a.(A \to \bot)) \to \bot$. Now we can proceed to construct a na\"ive set theory in $\mathfrak{F}$.

\begin{definition}[Na\"ive Set Theory]
\

  $\Lambda := \iota x.(x  = x \to \bot)$.

  $\{ b\} := \iota y. y = b$

  $\bar{c} := \iota y. (y \ep c \to \bot)$.

  $ a \cap b := \iota z. (z \ep a \wedge z \ep b)$

 $ a \cup b := \iota z. (z \ep a \to \bot . \wedge . z \ep b \to \bot) \to \bot$
\end{definition}
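Reading sets as membership predicates (so that $s \ep A$ becomes the application $A(s)$), the na\"ive set operations above can be mimicked in Python. This is a loose sketch under our own encoding: Leibniz equality is replaced by Python's \texttt{==}, and the double-negation definition of union is rendered classically with \texttt{or}.

```python
# Sets as membership predicates: s ep A becomes A(s).
empty        = lambda x: False                         # Lambda: nothing satisfies x = x -> bottom
singleton    = lambda b: (lambda y: y == b)            # {b}
complement   = lambda c: (lambda y: not c(y))          # c-bar
intersection = lambda a, b: (lambda z: a(z) and b(z))  # a cap b
union        = lambda a, b: (lambda z: a(z) or b(z))   # classical reading of the double negation

A, B = singleton(1), singleton(2)
assert not intersection(A, B)(1)     # 1 is in A but not in B
assert union(A, B)(2)                # 2 is in B
assert complement(A)(2)              # 2 is not in A
```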

\begin{theorem}
\

  $\vdash \forall x.(x = x)$

  $\vdash \forall x.(x \ep \Lambda \to \bot)$.
\end{theorem}

\noindent We can take $x \ep \Lambda$ as our notion of contradiction, because $x \ep \Lambda$ implies $\bot$. We can now develop an elementary number theory in $\mathfrak{F}$. 

\begin{definition}[Fregean Numbers]
\

  $N := \iota x. \forall c.(\forall y.(y \ep c \to S y \ep c)) \to 0 \ep c \to x \ep c$.

  $0 := \{ \Lambda \}$.

  $S\ a := \iota y. \exists z.(z \ep y . \wedge . (y \cap \overline{\{z\}}) \ep a )$.
\end{definition}

\begin{theorem}
  \

$\vdash 0 \ep N$.
\end{theorem}
\begin{proof}
  We want to prove $\forall c.(\forall y.(y \ep c \to S y \ep c)) \to 0 \ep c \to 0 \ep c$. Assuming $\forall y.(y \ep c \to S y \ep c)$ and $0 \ep c$, we want to show $0 \ep c$, which
  is immediate\footnote{Observe that the lambda term for this proof is the Church numeral zero,
$\lambda s.\lambda z.z$.}. 
\end{proof}

\begin{theorem}
  $\vdash \forall y. (y \ep N \to S y \ep N)$.
\end{theorem}
\begin{proof}
Assume $y \ep N$; we want to show $S y \ep N$. By comprehension, it suffices to show $\forall c.(\forall y.(y \ep c \to S y \ep c)) \to 0 \ep c \to (S y) \ep c$. So assume $\forall y.(y \ep c \to S y \ep c)$ and $0 \ep c$; we need to show $(S y) \ep c$. We know that $y \ep N$ implies $\forall c.(\forall y.(y \ep c \to S y \ep c)) \to 0 \ep c \to y \ep c$. By modus ponens, we have $y \ep c$. By universal instantiation, we have $y \ep c \to S y \ep c$. So by modus ponens, we have $S y \ep c$, which completes the proof\footnote{The lambda term for this proof is the Church successor $\lambda n.\lambda s.\lambda z.s\ (n\ s\ z)$.}.
\end{proof}

\begin{theorem}[Induction Principle]
  $\vdash \forall c.(\forall y.(y \ep c \to S y \ep c)) \to 0 \ep c \to \forall x.(x \ep N \to x \ep c)$.
\end{theorem}
\begin{proof}
 Assume $\forall y.(y \ep c \to S y \ep c)$, $0 \ep c$ and $x\ep N$. We want to show $x \ep c$. We 
 know $x \ep N$ implies $\forall c.(\forall y.(y \ep c \to S y \ep c)) \to 0 \ep c \to x \ep c$. By modus ponens, we get $x \ep c$\footnote{The lambda term for this proof is the iterator $\lambda f.\lambda a.\lambda n.n\ f\ a$.}. 
\end{proof}
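The footnotes to the three theorems above identify their proof terms with the standard Church-numeral combinators. A small Python sketch of this correspondence follows; the decoding function \texttt{to\_int} is ours, added only for testing.

```python
zero     = lambda s: lambda z: z                     # proof of 0 ep N: Church numeral zero
succ     = lambda n: lambda s: lambda z: s(n(s)(z))  # proof of totality of S: Church successor
iterator = lambda f: lambda a: lambda n: n(f)(a)     # proof of induction: the iterator

to_int = lambda n: n(lambda k: k + 1)(0)             # decode a Church numeral for testing
two = succ(succ(zero))
assert to_int(two) == 2
assert to_int(iterator(succ)(zero)(two)) == 2        # iterating succ twice from zero gives two
```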

\noindent Observe that there is an algorithmic interpretation of a constructive proof of totality of certain kinds of functions. For example, the proof that $S$ is total, namely $\forall y. (y \ep N \to S y \ep N)$, can be encoded as the successor on Church numerals, $\lambda n.\lambda s.\lambda z. s\ (n \ s\ z)$. This was already known to Leivant \cite{leivant1983reasoning} and Krivine \cite{krivine2002lambda}. So one should at least admit that there is a constructive flavor to the Fregean construction of numbers. Of course, the system itself is inconsistent, i.e. $\bot$ is provable in system $\mathfrak{F}$: 

Let $A := (\iota u. (u \ep u \to \bot)) \ep (\iota u. (u \ep u \to \bot))$. By comprehension, $A \cong A \to \bot$. We have $\vdash A \to \bot$, because $A \vdash A$ and, by conversion, $A \vdash A \to \bot$, so $A \vdash \bot$. Also, from $A \to \bot \vdash A \to \bot$ we obtain $A \to \bot \vdash A$ by conversion, and by modus ponens $A \to \bot \vdash \bot$, thus $\vdash (A \to \bot) \to \bot$. By modus ponens, we derive $\vdash \bot$. It is worth noting that working intuitionistically is irrelevant here: it does not prevent the inconsistency.
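Under the usual proofs-as-programs reading, the proof term extracted from this derivation of $\vdash \bot$ is the self-application combinator applied to itself, $\Omega = (\lambda u.\, u\ u)\ (\lambda u.\, u\ u)$, which has no normal form. In Python, evaluating it exhausts the interpreter stack:

```python
# The non-normalizing proof term of the antinomy: (\u. u u)(\u. u u).
delta = lambda u: u(u)
try:
    delta(delta)            # Omega reduces only to itself; Python recurses until the stack overflows
    diverges = False
except RecursionError:
    diverges = True
assert diverges
```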


\section{System $\mathfrak{G}$}
\label{sysg}

System $\systemg$ is inspired by Frege's $\mathfrak{F}$ and by the possibility of understanding the iota-binder as set abstraction in higher-order logic. System $\systemg$ is a \textit{simple} logical system with the $(\ep, \iota)$-notation. 

\begin{definition}
\

\noindent \textit{Formula} $F \ ::= \  X^0 \ | \ t \ep S \ | \ \Pi X^1.F \ | \ \ F_1 \to F_2 \ | \ \forall x.F \ | \ \Pi X^0.F$ 

\noindent \textit{Set} $S \ ::= X^1 \ | \ \iota x.F$

%% \noindent \textit{Morphism} $M \ ::= t \ep S \ | \ \forall x.(x\ep S \to M)$

\noindent \textit{Domain Terms/Pure Lambda Terms} $t \ :: = \ x \ | \ \lambda x.t \ | \ t t'$

%\noindent \textit{Proof Terms} $p \ ::= \ a \ | \ \lambda a .p \ | \ p p'$

\noindent \textit{Context} $\Gamma \ :: = \ \cdot \ | \ \Gamma, F$

%\noindent \textit{Records} $\Delta \ :: = \ \cdot \ | \ \Delta, a: x \ep S$

\end{definition} 

\noindent Note that $X^0$ is a formula variable; it ranges over formulas. $X^1$ is a set variable; it ranges over sets. $\iota x.F$ is the set formed by the formula $F$. To avoid the inconsistency that arises in $\mathfrak{F}$, we separate
the notion of set from the notion of domain term; the domain terms of $\mathfrak{G}$ are pure lambda terms. Sets can only occur inside a formula; they have no rules and no identity of their own outside a formula. Again, please do not confuse the sets of this chapter with the sets of $\mathbf{ZF}$. $\Pi X^0.F$ is a formula formed by quantifying over formulas, and $\Pi X^1.F$ is formed by quantifying over sets. The symbols $\ep$ and $\iota$ are formal parts of the language of $\systemg$; together they are called the $(\ep, \iota)$-notation.

\begin{definition}[Deduction Rules]
\fbox{$\Gamma \vdash F$}

\

\begin{tabular}{lll}
    
\infer{\Gamma \vdash F}{F \in \Gamma}

&
\infer[\textit{Conv}]{\Gamma \vdash F_2}{\Gamma \vdash 
F_1 &  F_1 =_{\beta, \iota} F_2}

&

\infer{\Gamma \vdash \forall x.F}
{\Gamma \vdash F &  x \notin \mathrm{FV}(\Gamma)}

\\
\\
\infer{\Gamma \vdash [t/x]F}{\Gamma
\vdash \forall x.F}
&

\infer{\Gamma \vdash  \Pi X^i.F}
{\Gamma \vdash F & X^i \notin \mathrm{FV}(\Gamma) & i= 0,1}

&
\infer[\textit{Inst0}]{\Gamma \vdash [F'/X^0]F}{\Gamma \vdash \Pi X^0.F}

\\
\\

\infer{\Gamma \vdash F_1\to F_2}
{\Gamma, F_1 \vdash F_2}

&

\infer{\Gamma \vdash F_2}{\Gamma
\vdash  F_1 \to F_2 & \Gamma \vdash F_1}

&


\infer[\textit{Inst1}]{\Gamma \vdash [S/X^1]F}{\Gamma \vdash \Pi X^1.F}

\end{tabular}

\end{definition}

\noindent The rule \textit{Inst0} allows us to instantiate $X^0$ with 
any formula, just as instantiation does in System \textbf{F}, while the rule \textit{Inst1} allows us to instantiate a set variable $X^1$ with any set $S$.

\begin{definition}[Axioms]
$F_1 =_{\iota,\beta} F_2$ iff one of the following holds.

  \begin{enumerate}
  \item $F_1$ (or $F_2$) is of the form $t \ep (\iota x.F)$ and $F_2$ (or $F_1$) is of the form $[t/x]F$. 
    \item $F_1$ (or $F_2$) contains a term $t$ and $F_2$ (or $F_1$) is obtained from $F_1$ by replacing $t$ with its beta-equivalent term $t'$. 
  \end{enumerate}
\end{definition}

\noindent The first axiom corresponds to the comprehension axiom. The second axiom corresponds to the axiom of extensionality \cite{hatcher:1982}; it also relies on the beta-conversion axiom
of the lambda calculus. Since beta-conversion in the lambda calculus is Church-Rosser, not all lambda terms are identified. The reason we build the axioms into the \textit{Conv} rule is that doing so does not affect the overall shape of a proof tree; a direct consequence is that
consistency and subject reduction are relatively easy to prove, as we shall see next. 
 

\subsection{Consistency of System $\systemg$}
\label{gp}
We have presented the full specification of $\systemg$. Now we show that $\systemg$ is consistent, in the sense that not every formula is provable in $\systemg$. To prove consistency, we first devise a version of $\mathfrak{G}$ with proof-term annotations, denoted $\systemg\lbrack p \rbrack$. We then define a forgetful mapping from 
$\mathfrak{G}\lbrack p \rbrack$ to System $\mathbf{F}$. Finally, any derivable judgement of $\mathfrak{G}\lbrack p \rbrack$ can be mapped to a derivable judgement of System $\mathbf{F}$. Thus we conclude that the proof terms of $\systemg\lbrack p \rbrack$ are strongly normalizing and that not every formula of $\systemg$ is provable.

\begin{definition}[System $\systemg \lbrack p \rbrack$]
\

\noindent  \textit{Proof Terms} $p \ ::= \ a \ | \ \lambda a .p \ | \ p p'$

\noindent  \textit{Proof Context} $\Gamma \ ::= \ \cdot \ | \ a:F, \Gamma$
\end{definition}

\begin{definition}[Proof Annotation]
  \fbox{$\Gamma \vdash p : F$}
  \
  
\begin{tabular}{lll}
    
\infer{\Gamma \vdash p : \forall x.F}
{\Gamma \vdash p: F &  x \notin \mathrm{FV}(\Gamma)}

&
\infer{\Gamma \vdash p : F_2}{\Gamma \vdash p:
F_1 &  F_1 =_{\beta,\iota} F_2}

\\
\\
\infer{\Gamma \vdash a:F}{(a:F) \in \Gamma}

&

\infer{\Gamma \vdash p :[t'/x]F}{\Gamma
\vdash p: \forall x.F}
\\
\\

\infer{\Gamma \vdash  p :\Pi X^i.F}
{\Gamma \vdash p: F & X^i \notin \mathrm{FV}(\Gamma) & i= 0,1}

&
\infer{\Gamma \vdash p:[F'/X^0]F}{\Gamma \vdash p: \Pi X^0.F}

\\
\\

\infer{\Gamma \vdash \lambda a.p : F_1\to F_2}
{\Gamma, a:F_1 \vdash p: F_2}

&

\infer{\Gamma \vdash p p':F_2}{\Gamma
\vdash p: F_1 \to F_2 & \Gamma \vdash p': F_1}

\\
\\


\infer{\Gamma \vdash p:[S/X^1]F}{\Gamma \vdash p: \Pi X^1.F}

\end{tabular}
  



\end{definition}

\noindent Only the introduction and elimination rules for implication are annotated by proof terms; we say the system is in Curry style. 

\begin{definition}
We define $\phi$ to be a mapping from $\systemg[p]$ to System $\mathbf{F}$. 


\begin{tabular}{ll}
  $\phi(X^0) := X$

  &
  
  $\phi(t \ep S) := \phi(S)$
  
  \\
  
  $\phi(F_1 \to F_2) := \phi(F_1) \to \phi(F_2)$
  &
  
  $\phi(\Pi X^0.F) := \Pi X.\phi(F)$

  \\

    $\phi(\Pi X^1.F) := \Pi X.\phi(F)$

  &
  
    $\phi(\forall x.F) := \phi(F)$
  
  \\
   $\phi(X^1) := X$
  
  &
  $\phi(\iota x.F) := \phi(F)$
\end{tabular}
\end{definition}

\noindent Note that the function $\phi$ extends straightforwardly to proof contexts. It maps formulas and sets of $\systemg\lbrack p \rbrack$ to types of System $\mathbf{F}$.
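The forgetful map $\phi$ can be sketched over a small tuple-encoded syntax. The tagged-tuple encoding below is ours, not part of the thesis; both $X^0$ and $X^1$ are sent to a System $\mathbf{F}$ type variable with the same name, as in the definition.

```python
# Formulas/sets as tagged tuples:
#   ('fvar', X)  formula variable X^0      ('svar', X)   set variable X^1
#   ('mem', t, S)  membership t ep S       ('imp', F1, F2)  implication
#   ('pi0', X, F) / ('pi1', X, F)          ('forall', x, F)   ('iota', x, F)
def phi(f):
    tag = f[0]
    if tag in ('fvar', 'svar'):
        return ('tvar', f[1])              # both kinds of variable map to a type variable
    if tag == 'mem':
        return phi(f[2])                   # phi(t ep S) := phi(S)
    if tag == 'imp':
        return ('arrow', phi(f[1]), phi(f[2]))
    if tag in ('pi0', 'pi1'):
        return ('pi', f[1], phi(f[2]))     # both quantifiers map to Pi X
    if tag in ('forall', 'iota'):
        return phi(f[2])                   # term-level structure is forgotten
    raise ValueError(tag)

# Pi X^0. X^0 maps to Pi X. X:
assert phi(('pi0', 'X', ('fvar', 'X'))) == ('pi', 'X', ('tvar', 'X'))
# t ep (iota x. (x ep Y -> x ep Y)) maps to Y -> Y:
F = ('mem', 't', ('iota', 'x',
     ('imp', ('mem', 'x', ('svar', 'Y')), ('mem', 'x', ('svar', 'Y')))))
assert phi(F) == ('arrow', ('tvar', 'Y'), ('tvar', 'Y'))
```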

\begin{lemma}
  \label{eq-phi}
  \
  
  \begin{enumerate}
  \item If $F_1 =_{\beta, \iota} F_2$, then $\phi(F_1) \equiv \phi(F_2)$.
    \item  $\phi(F) \equiv \phi([t'/x]F)$.
      \item $\phi([F'/X^0]F) \equiv [\phi(F')/X]\phi(F)$. 
      \item $\phi([S/X^1]F) \equiv [\phi(S)/X]\phi(F)$. 
        
  \end{enumerate}
\end{lemma}

\noindent The following theorem connects System $\systemg \lbrack p \rbrack$ with System $\mathbf{F}$.

\begin{theorem}
  \label{const}
 If $\Gamma \vdash p:F$ in $\systemg \lbrack p \rbrack$, then $\phi(\Gamma)\vdash p:\phi(F)$ in $\mathbf{F}$.
\end{theorem}

\begin{proof}
  By induction on the derivation of $\Gamma \vdash p:F$.
  \begin{itemize}
  \item \textbf{Case}: \infer{\Gamma \vdash a:F}{(a:F) \in \Gamma}
  
  \noindent By $a:\phi(F)\in \phi(\Gamma)$.

    \item \textbf{Case}: \infer{\Gamma \vdash p : \forall x.F}
{\Gamma \vdash p: F &  x \notin \mathrm{FV}(\Gamma)}

    \noindent By IH, we know that $\phi(\Gamma) \vdash p: \phi(F) \equiv \phi(\forall x.F)$. 
    
      \item \textbf{Case}: \infer{\Gamma \vdash p : F_2}{\Gamma \vdash p:
F_1 &  F_1 =_{\beta,\iota} F_2}

\noindent By lemma \ref{eq-phi}, we know that $\phi(F_1) \equiv \phi(F_2)$.

  \item \textbf{Case}: \infer{\Gamma \vdash p :[t'/x]F}{\Gamma
\vdash p: \forall x.F}

\noindent By lemma \ref{eq-phi}, we know that $\phi(\forall x.F) \equiv \phi(F) \equiv \phi([t'/x]F)$. 

  \item \textbf{Case}: \infer{\Gamma \vdash  p :\Pi X^i.F}
{\Gamma \vdash p: F & X^i \notin \mathrm{FV}(\Gamma) & i= 0,1}

\noindent By IH, we know $\phi(\Gamma) \vdash p:\phi(F)$. Since $X \notin \mathrm{FV}(\phi(\Gamma))$, we have $\phi(\Gamma) \vdash p: \Pi X.\phi(F) \equiv \phi(\Pi X^i.F)$.

  \item \textbf{Case}: \infer{\Gamma \vdash p:[F'/X^0]F}{\Gamma \vdash p: \Pi X^0.F}

\noindent By IH, we know that $\phi(\Gamma) \vdash p: \Pi X.\phi(F)$. Thus $\phi(\Gamma) \vdash p: [\phi(F')/X]\phi(F) \equiv \phi([F'/X^0]F)$. The last equality is by lemma \ref{eq-phi}.

  \item \textbf{Case}: \infer{\Gamma \vdash \lambda a.p : F_1\to F_2}
{\Gamma, a:F_1 \vdash p: F_2}

\noindent By IH, we know $\phi(\Gamma), a:\phi(F_1) \vdash p: \phi(F_2)$. Thus $\phi(\Gamma) \vdash \lambda a.p: \phi(F_1) \to \phi(F_2)$.

  \item \textbf{Case}: \infer{\Gamma \vdash p p':F_2}{\Gamma
\vdash p: F_1 \to F_2 & \Gamma \vdash p': F_1}

\noindent By IH, $\phi(\Gamma) \vdash p: \phi(F_1) \to \phi(F_2)$ and $ \phi(\Gamma) \vdash p': \phi(F_1)$. Thus $\phi(\Gamma) \vdash p p':\phi(F_2)$. 

  \item \textbf{Case}: \infer{\Gamma \vdash p:[S/X^1]F}{\Gamma \vdash p: \Pi X^1.F}

\noindent By IH, we know that $\phi(\Gamma) \vdash p: \Pi X.\phi(F)$. Thus $\phi(\Gamma) \vdash p: [\phi(S)/X]\phi(F) \equiv \phi([S/X^1]F)$. The last equality is by lemma \ref{eq-phi}.
  \end{itemize}
\end{proof}

\noindent Theorem \ref{const} implies that if $\Gamma \vdash p:F$ in $\systemg\lbrack p \rbrack$, then $p$ is strongly normalizing. So the formula $\Pi X^0.X^0$ is unprovable in $\systemg$. 

\subsection{Preservation Theorem for $\systemg\lbrack p \rbrack$}
 
We need to establish the preservation property (subject reduction) for $\systemg\lbrack p \rbrack$ in order to explore more unprovable formulas of $\systemg$ (at the meta-level). The proof of the preservation theorem is an adaptation of Barendregt's method for proving preservation for System $\mathbf{F}$ \`a la Curry \cite{Barendregt:1993}. 

\begin{definition}[Formula Reduction]
\

\begin{itemize}
  \item $F_1 \to_{\beta} F_2$ if $t_1 \to_{\beta} t_2$, $F_1 \equiv F[t_1]$ and $F_2 \equiv F[t_2]$.
    \item $F_1 \to_{\iota} F_2$ if $F_1 \equiv t\ep \iota x.F$ and $F_2 \equiv [t/x]F$.
  \end{itemize}
\end{definition}

\noindent Note that $F[t_1]$ means that the lambda term $t_1$ appears inside the formula $F$, and 
$\to_{\beta, \iota}$ denotes $\to_{\beta} \cup \to_{\iota}$. 

\begin{lemma}
  $\to_{\beta, \iota}$ is confluent. 
\end{lemma}
\begin{proof}
  We know that $\to_{\beta}$ and $\to_{\iota}$ are confluent. We also know that $\to_{\beta}$
  commutes with $\to_{\iota}$, so $\to_{\beta, \iota}$ is confluent.
\end{proof}

\begin{definition}[Morphing Relations]
\

  \begin{itemize}
   \item $F_1 \to_i F_2$ if $F_1 \equiv \forall x.F$ and $F_2 \equiv [t/x]F$ for some term
     $t$.
   \item  $F_1 \to_g F_2$ if $F_2 \equiv \forall x.F_1$.

   \item $F_1 \to_I F_2$ if $F_1 \equiv \Pi X^0.F$ and $F_2 \equiv [F'/X^0]F$ for formula
     $F'$.
   \item  $F_1 \to_G F_2$ if $F_2 \equiv \Pi X^0.F_1$.

   \item $F_1 \to_{is} F_2$ if $F_1 \equiv \Pi X^1.F$ and $F_2 \equiv [S/X^1]F$ for some set
     $S$.
   \item  $F_1 \to_{gs} F_2$ if $F_2 \equiv \Pi X^1.F_1$.

  \end{itemize}
\end{definition}

\noindent Let $\twoheadrightarrow_{gi}$ denote the reflexive-transitive closure of 
$\to_{i,g, I, G, is, gs}$. 

\begin{lemma}
Suppose no free variable of $F$ occurs in $\Gamma$. If $\Gamma \vdash p: F$ and $F \twoheadrightarrow_{gi} F'$, then $\Gamma \vdash p: F'$.
\end{lemma}


\begin{definition}
\

  \begin{tabular}{lll}
  $E_0(\Pi X^0.F) := E_0(F)$
&
  $E_0(X^0) := X^0$

&

 $E_0(F_1\to F_2) := F_1\to F_2$

\\
  $E_0(\Pi X^1.F) := \Pi X^1.F$
&

$E_0(\forall x.F) := \forall x.F$

&

$E_0(t\ep S) := t\ep S$
\end{tabular}
\end{definition}

\begin{definition}
\

  \begin{tabular}{lll}
  $E_1(\Pi X^1.F) := E_1(F)$
&
  $E_1(X^0) := X^0$

&

 $E_1(F_1\to F_2) := F_1\to F_2$

\\
$E_1(t\ep S) := t\ep S$
&

$E_1(\forall x.F) := \forall x.F$

&

  $E_1(\Pi X^0.F) := \Pi X^0.F$

\end{tabular}
\end{definition}

\begin{definition}
\

  \begin{tabular}{lll}
  $G(\Pi X^i.F) := \Pi X^i.F$
&
  $G(X^0) := X^0$

&

 $G(F_1\to F_2) := F_1\to F_2$

\\

$G(\forall x.F) := G(F)$

&
$G(t\ep S) := t\ep S$

\end{tabular}
\end{definition}

\begin{lemma}
  \label{subeg}
  $E_0([F'/X^0]F) \equiv [F''/X^0]E_0(F)$ for some $F''$; 
  
  \noindent $E_1([S/X^1]F) \equiv [S/X^1]E_1(F)$; $G([t/x]F) \equiv [t/x]G(F)$.  
\end{lemma}

\begin{proof}
Proof by induction on the structure of $F$.  
\end{proof}

\begin{lemma}
  \label{termsub}
  If $F \twoheadrightarrow_{i, g} F'$, then there exists a substitution $\delta$, with term
  variables as domain and terms as codomain, such that $\delta G(F) \equiv G(F')$.
\end{lemma}

\begin{proof}
  It suffices to consider $ F {\to_{i,g}} F'$.
If $F' \equiv \forall x .F$, then $G(F') \equiv G(F)$.
If $F \equiv \forall x .F_1$ and $F' \equiv [t/x]F_1$, then $G(F) \equiv G(F_1)$. By lemma \ref{subeg}, we know $G(F') \equiv G([t/x]F_1) \equiv [t/x]G(F_1)$. 
\end{proof}

\begin{lemma}
  \label{formsub}
  If $F \twoheadrightarrow_{I, G} F'$, then there exists a substitution $\delta$, with formula
  variables as domain and formulas as codomain, such that $\delta E_0(F) \equiv E_0(F')$.
\end{lemma}
\begin{proof}
    It suffices to consider $F {\to_{I,G}} F'$. If $F' \equiv \Pi X^0.F$, then $E_0(F') \equiv E_0(F)$. If $F \equiv \Pi X^0.F_1$ and $F' \equiv [F''/X^0]F_1$, then $E_0(F) \equiv E_0(F_1)$. By lemma \ref{subeg}, we know $E_0(F') \equiv E_0([F''/X^0]F_1) \equiv [F_2/X^0]E_0(F_1)$ for some $F_2$. 

\end{proof}

\begin{lemma}
  \label{setsub}
  If $F \twoheadrightarrow_{is, gs} F'$, then there exists a substitution $\delta$, with set
  variables as domain and sets as codomain, such that $\delta E_1(F) \equiv E_1(F')$.
\end{lemma}
\begin{proof}
    It suffices to consider $ F {\to_{is,gs}} F'$.
If $F' \equiv \Pi X^1 .F$, then $E_1(F') \equiv E_1(F)$.
If $F \equiv \Pi X^1 .F_1$ and $F' \equiv [S/X^1]F_1$, then $E_1(F) \equiv E_1(F_1)$. By lemma \ref{subeg}, we know $E_1(F') \equiv E_1([S/X^1]F_1) \equiv [S/X^1]E_1(F_1)$. 
\end{proof}

\begin{theorem}[Compatibility]
  \label{comp}
  If $(F_1 \to F_2) \to_{\iota,\beta,i,g,I,G, is, gs}^*(F_1' \to F_2')$, then there exists a mixed substitution\footnote{A substitution combining term, set and formula substitutions.} $\delta$ such that $ \delta(F_1 \to F_2) \to_{\beta} F_1' \to F_2'$. Thus $\delta F_1 \to_{\beta} F_1'$ and $ \delta F_2 \to_{\beta} F_2'$. 
\end{theorem}

\begin{proof}
  By lemma \ref{termsub}, lemma \ref{formsub} and lemma \ref{setsub}, we have $\delta(F_1 \to F_2) \to_{\beta, \iota} F_1' \to F_2'$ for some mixed substitution $\delta$. Since no $\to_{\iota}$ reduction can occur in the sequence $\delta(F_1 \to F_2) \to_{\beta, \iota} F_1' \to F_2'$, we have $\delta(F_1 \to F_2) \to_{\beta} F_1' \to F_2'$. Thus $\delta F_1 \to_{\beta} F_1'$ and $\delta F_2 \to_{\beta} F_2'$. 

\end{proof}

\begin{lemma}[Inversion]
  
  \
  
  \begin{itemize}
  \item If $\Gamma \vdash a:F$, then there exists $F_1$ such that $F_1 {\to_{\iota,\beta,i,g,I,G, is,gs}^*} F$ and $(a:F_1) \in \Gamma$.

  \item If $\Gamma \vdash p_1 p_2:F$, then there exist $F_1, F_2$ such that $\Gamma \vdash p_1:F_1 \to F_2$ and $\Gamma \vdash p_2:F_1$ and $F_2 {\to_{\iota,\beta,i,g,I,G, is, gs}^*} F$. 

\item If $\Gamma \vdash \lambda a.p:F$, then there exist $F_1, F_2$ such that $\Gamma, a:F_1 \vdash p:F_2$ and $F_1 \to F_2 {\to_{\iota,\beta,i,g,I,G, is, gs}^*}F$. 

  \end{itemize}
\end{lemma}

\begin{lemma}[Substitution]
\label{subst1}
\

  \begin{enumerate}
  \item   If $\Gamma \vdash p:F$, then for any mixed substitution $\delta$, $\delta \Gamma \vdash p: \delta F$.
\item If $\Gamma, a:F \vdash p:F'$ and $\Gamma \vdash p':F$, then $\Gamma \vdash [p'/a]p:F'$.
  \end{enumerate}

\end{lemma}

\begin{theorem}[Preservation]
  \label{preservation}
If $\Gamma \vdash p : F$ and $p \to_{\beta} p'$, then $\Gamma \vdash p':F$.
\end{theorem}

\begin{proof}
  We list one interesting case:

  \

\infer{\Gamma \vdash p_1 p_2:F_2}{\Gamma
\vdash p_1: F_1 \to F_2 & \Gamma \vdash p_2: F_1}

\noindent Suppose $(\lambda a.p_1)\, p_2 \to_{\beta} [p_2/a]p_1$. We know that
$\Gamma \vdash \lambda a.p_1 : F_1 \to F_2$ and $\Gamma \vdash p_2:F_1$. By
inversion on $\Gamma \vdash \lambda a.p_1 : F_1 \to F_2$, there exist $F_1', F_2'$ such that $\Gamma, a:F_1' \vdash p_1:F_2'$
and $(F_1' \to F_2') {\to_{\iota,\beta,i,g,I,G, is,gsa}^*} (F_1 \to F_2)$. 
By theorem \ref{comp}, we have $\delta(F_1' \to F_2') =_{\beta} (F_1 \to F_2)$ for some mixed substitution $\delta$. By Church-Rosser
of $=_{\beta}$, we have $\delta F_1'=_{\beta} F_1$ and $\delta F_2' =_{\beta} F_2$. 
So by (1) of lemma \ref{subst1}, we have $\Gamma, a:\delta F_1' \vdash p_1: \delta F_2'$, and by the \textit{Conv} rule, $\Gamma, a: \delta F_1' \vdash p_1: F_2$. Since $\delta F_1' =_{\beta} F_1$, the \textit{Conv} rule also gives $\Gamma \vdash p_2:\delta F_1'$; by (2) of lemma \ref{subst1}, $\Gamma \vdash [p_2/a]p_1: F_2$. 

\end{proof}


\section{A Polymorphic Dependent Type System $\systemg \lbrack t \rbrack$}
\label{internal}
In this section, we first present a polymorphic dependent type system $\systemg \lbrack t \rbrack$. Then we define an embedding from $\systemg \lbrack t \rbrack$ to $\systemg$. The embedding is invertible, so we can transform (at the meta-level) a judgement of $\systemg$ into a judgement of $\systemg \lbrack t \rbrack$ and vice versa. We call this behavior \textit{reciprocity}. 

\begin{definition}[Syntax]
\

\noindent Lambda Terms $t \ ::= \ x \ | \ \lambda x.t \ | \ t t'$
 
\noindent Internal Types $U \ ::= \ X^1 \ | \ \iota x.Q \ | \ \Pi x:U.U' \ | \ \Delta X^1.U$

\noindent Internal Formulas $Q \ ::= \ X^0 \ | \ t \ep U \ | \ \Pi X^0.Q \ | \ Q \to Q' \ | \ \forall x.Q \ | \ \Pi X^1. Q$

\noindent Internal Contexts $\Psi \ ::= \ \cdot \ | \ \Psi, x\ep U$


\end{definition}

\noindent Besides basic sets formed by formulas and set variables, internal types include the dependent-type-like construct $\Pi x:U.U'$ and the polymorphic-type-like construct $\Delta X^1.U$. Internal formulas are the same as formulas of $\systemg$, except that the notion of set is replaced by the notion of internal type. We can view the $\ep$ relation as a kind of typing relation; thus we have the notion of an
internal context as a list of formulas of the form $x\ep U$, and the following internal typing relation. 

\begin{definition}[Internal Typing]
  \fbox{$\Psi \Vdash t \ep U$}
  
  \
  
    \begin{tabular}{lll}
%%\infer[toSet]{\intern{\Gamma} \Vdash t': t \ep \intern{S}}{\Gamma \vdash t':t\ep S}

\infer{\Psi \Vdash x \ep U}{x \ep U \in \Psi }

&

\infer{\Psi \Vdash \lambda x.t\ep \Pi x:U. U'}
{\Psi, x \ep U \Vdash  t \ep U'}

&

\infer{\Psi \Vdash t\ep \Delta X^1. U}
{\Psi \Vdash t\ep U & X^1 \notin FV(\Psi)}

\\
\\
\infer{\Psi \Vdash  t\ep [U'/X^1] U}
{\Psi \Vdash t \ep \Delta X^1.U}

& 
\infer{\Psi \Vdash  t_1 t_2 \ep [t_2/x]U}{\Psi
\Vdash  t_1 \ep \Pi x: U'.U & \Psi \Vdash  t_2 \ep U'}
\end{tabular}
  
\end{definition}

\noindent The internal typing rules look remarkably like those of the usual polymorphic dependent type
systems. But we want to emphasize that the meaning of internal typing in $\systemg \lbrack t \rbrack$ is different from the usual notion of typing: the internal typing relation is an internal formula of $\systemg \lbrack t \rbrack$ (it lies in the object language), while the usual notion of typing is a meta-level relation. For example, $t\ep U$ is a formula, while
$t : T$ is a meta-level relation. The availability of internal typing is a benefit of the $(\ep, \iota)$-notation. Now let us relate $\systemg \lbrack t \rbrack$ to $\systemg$. 

\begin{definition}
    $\interp{\cdot}$ is an \textbf{embedding} from internal types in $\systemg \lbrack t \rbrack$ to sets in $\systemg$, internal formulas in $\systemg \lbrack t \rbrack$ to formulas in $\systemg$.

\noindent  $\interp{X^1} := X^1$

\noindent  $\interp{\iota x.Q} := \iota x.\interp{Q}$

\noindent $\interp{\Pi x:U'.U} := \iota f. \forall x. (x \ep \interp{U'} \to f\ x \ep \interp{U})$, where $f$ is fresh.

\noindent $\interp{\Delta X^1.U} := \iota x. (\Pi X^1. x \ep \interp{U})$, where $x$ is fresh.

\noindent  $\interp{X^0} := X^0$

\noindent  $\interp{t\ep U} := t \ep \interp{U}$

\noindent  $\interp{Q \to Q'} := \interp{Q} \to \interp{Q'}$

\noindent  $\interp{\Pi X^i.Q} := \Pi X^i.\interp{Q}$.

\noindent  $\interp{\forall x.Q} := \forall x.\interp{Q}$.

\noindent  $\interp{x\ep U, \Psi} :=  x \ep \interp{U}, \interp{\Psi}$

\end{definition}
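The embedding can be sketched over a tuple-encoded syntax. Again, the encoding and the fresh-name scheme are ours; for brevity, internal formulas under $\iota$ are assumed to be already embedded.

```python
import itertools
fresh = (f'_v{i}' for i in itertools.count())   # supply of fresh variable names

# Internal types: ('svar', X) | ('iotaT', x, Q) | ('piT', x, U1, U2) | ('delta', X, U)
def embed(u):
    tag = u[0]
    if tag == 'svar':
        return u
    if tag == 'iotaT':                          # [[iota x.Q]] = iota x.[[Q]]
        return ('iota', u[1], u[2])             # Q assumed already embedded
    if tag == 'piT':                            # [[Pi x:U1.U2]] = iota f. forall x. (x ep [[U1]] -> f x ep [[U2]])
        f, (x, u1, u2) = next(fresh), u[1:]
        return ('iota', f, ('forall', x,
                ('imp', ('mem', x, embed(u1)),
                        ('mem', ('app', f, x), embed(u2)))))
    if tag == 'delta':                          # [[Delta X.U]] = iota x. Pi X^1. x ep [[U]]
        x = next(fresh)
        return ('iota', x, ('pi1', u[1], ('mem', x, embed(u[2]))))
    raise ValueError(tag)

# [[Delta X. X]] = iota x. Pi X. x ep X
t = embed(('delta', 'X', ('svar', 'X')))
assert t[0] == 'iota' and t[2] == ('pi1', 'X', ('mem', t[1], ('svar', 'X')))
```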


\begin{lemma}
\label{subterm}
$[t'/x]\interp{U} = \interp{[t'/x]U}$ and $[\interp{U'}/X^1] \interp{U} = \interp{[U'/X^1]U}$.
\end{lemma}
\begin{proof}
  By induction on structure of $U$.
\end{proof}

\begin{theorem}
 \label{ext}
 If $\Psi \Vdash t \ep U$, then $\interp{\Psi} \vdash  t\ep \interp{U}$.
\end{theorem}
\begin{proof}
  By induction on the derivation of $\Psi \Vdash t \ep U$. 
  \begin{itemize}
  \item \textbf{Case}: \infer{\Psi \Vdash x \ep U}{x\ep U \in \Psi }

\noindent $\interp{\Psi} \vdash x \ep \interp{U}$, since $x\ep \interp{U} \in \interp{\Psi}$.

\item \textbf{Case}: \infer{\Psi \Vdash \lambda x.t \ep \Pi x:U. U'}
{\Psi, x\ep U \Vdash t \ep U'}

\noindent By induction, we have $\interp{\Psi}, x\ep \interp{U} \vdash  t \ep \interp{U'}$.
So $\interp{\Psi} \vdash x\ep \interp{U} \to t \ep \interp{U'}$, and by the $\forall$-intro
rule, we have $\interp{\Psi} \vdash \forall x.(x\ep \interp{U} \to t \ep \interp{U'})$. By the comprehension rule and beta-reduction, we get $\interp{\Psi} \vdash \lambda x.t \ep \iota f.\forall x.(x\ep \interp{U} \to f \ x \ep \interp{U'})$, which is exactly $\interp{\Pi x:U.U'} = \iota f. \forall x. (x \ep \interp{U} \to f\ x \ep \interp{U'})$. 

\item \textbf{Case}: \infer{\Psi \Vdash t t' \ep [t'/x]U}{\Psi
\Vdash t \ep  \Pi x: U'.U & \Psi \Vdash t'\ep U'}

\noindent By induction, we have $\interp{\Psi} \vdash t \ep \iota f. \forall x. (x \ep \interp{U'} \to f\ x \ep \interp{U})$ and $ \interp{\Psi} \vdash t'\ep \interp{U'}$. By comprehension, we have $\interp{\Psi} \vdash \forall x. (x \ep \interp{U'} \to t\ x \ep \interp{U})$. Instantiating $x$ with $t'$, we have $\interp{\Psi} \vdash  t' \ep \interp{U'} \to t\ t' \ep [t'/x] \interp{U}$. So by modus ponens, we have $\interp{\Psi} \vdash t t' \ep [t'/x]\interp{U}$. By lemma \ref{subterm}, we know that $[t'/x]\interp{U} = \interp{[t'/x]U}$. So $\interp{\Psi} \vdash t t' \ep \interp{[t'/x]U}$.

\item \textbf{Case}: \infer{\Psi \Vdash t \ep \Delta X^1. U}
{\Psi \Vdash t \ep U & X^1 \notin FV(\Psi)}

\noindent By induction, one has $\interp{\Psi} \vdash  t \ep \interp{U}$. So one has 
$\interp{\Psi} \vdash \Pi X^1. t \ep \interp{U}$. So by comprehension, one has $\interp{\Psi} \vdash t\ep \iota x. \Pi X^1. x \ep \interp{U}$. 

\item \textbf{Case}: \infer{\Psi \Vdash t \ep [U'/X^1] U}
{\Psi \Vdash t \ep \Delta X^1.U}

\noindent By induction, one has $\interp{\Psi} \vdash t \ep \iota x. \Pi X^1. x \ep \interp{U}$. By comprehension, we have $\interp{\Psi} \vdash \Pi X^1. t \ep \interp{U}$. So by instantiation, we have $\interp{\Psi} \vdash t \ep [\interp{U'}/X^1] \interp{U}$. By lemma \ref{subterm}, $[\interp{U'}/X^1] \interp{U} = \interp{[U'/X^1]U}$, so $\interp{\Psi} \vdash t \ep \interp{[U'/X^1]U}$.
  \end{itemize}
\end{proof}

%We now define the inverse of $\interp{\cdot}$. 

\begin{definition}

\

\noindent $\intern{\cdot}$ is a \textbf{mapping} from the sets in $\systemg$ to the internal types in $\systemg \lbrack t \rbrack$, and from the formulas in $\systemg$ to the internal formulas in $\systemg \lbrack t \rbrack$.

\noindent $\intern{X^1} := X^1$

\noindent $\intern{\iota f. \forall x. (x \ep S' \to f\ x \ep S)} := \Pi x:\intern{S'}.\intern{S}$, where $f$ is fresh.

\noindent $\intern{\iota x. (\Pi X^1. x \ep S)} := \Delta X^1.\intern{S}$, where $x$ is fresh.

\noindent $\intern{\iota x.T} := \iota x.\intern{T}$

\noindent $\intern{X^0} := X^0$

\noindent $\intern{t\ep S} := t \ep \intern{S}$

\noindent $\intern{T \to T'} := \intern{T} \to \intern{T'}$

\noindent $\intern{\Pi X^i.T} := \Pi X^i.\intern{T}$.

\noindent $\intern{\forall x.T} := \forall x.\intern{T}$.

\noindent $\intern{x\ep S, \Gamma} := x\ep \intern{S}, \intern{\Gamma}$
\end{definition}

\begin{lemma}
\label{id}
  $\interp{\intern{S}} = S$ and $\intern{\interp{U}} = U$.
\end{lemma}

\begin{proof}
By induction. 
\end{proof}

\noindent By lemma \ref{id}, if we have $\Gamma \vdash t \ep S$ in $\systemg$, 
we can move to $\systemg \lbrack t \rbrack$ via $\intern{\Gamma} \Vdash t \ep \intern{S}$. Then, 
after a few deductions in $\systemg \lbrack t \rbrack$, we can use theorem \ref{ext} to return 
to $\systemg$. 

\section{Proving Peano's Axioms}
\label{peano}
In this section, we prove all of Peano's axioms~\cite{peano1889} in $\systemg$. First, let us
define the natural numbers as Scott numerals.

\begin{definition}[Scott Numerals]
  \
  
  \noindent $\mathsf{Nat} := \iota x. \Pi C^1.(\forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C)) \to 0 \ep C  \to x \ep C$

\noindent $\mathsf{S} \ := \lambda n. \lambda s.\lambda z. s \ n$

\noindent $0\  := \lambda s. \lambda z.z$

\end{definition}
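These definitions can be exercised directly, for instance as Python closures. The following is a small sketch of ours, not part of $\systemg$; the helper names \texttt{zero}, \texttt{succ}, and \texttt{to\_int} are our own.

```python
# Scott numerals as Python closures (a sketch; not part of System G).
zero = lambda s: lambda z: z                # 0   := \s.\z. z
succ = lambda n: lambda s: lambda z: s(n)   # S n := \s.\z. s n

def to_int(n):
    """Decode a Scott numeral by repeated case analysis."""
    count = 0
    while True:
        # Applying a numeral to a successor branch and a zero branch
        # returns z for 0 and s(predecessor) for a successor.
        tag, p = n(lambda m: ("succ", m))(("zero", None))
        if tag == "zero":
            return count
        count, n = count + 1, p

three = succ(succ(succ(zero)))
print(to_int(three))  # → 3
```

Unlike a Church numeral, a Scott numeral is its own case construct: the successor branch receives the predecessor directly, which is the shape the set $\mathsf{Nat}$ above quantifies over.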


\begin{theorem}[Peano's Axiom 1]
  $\vdash 0 \ep \nat$.
\end{theorem}
\begin{proof}
By comprehension, we want to show $\vdash \Pi C^1.(\forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C)) \to 0 \ep C  \to 0 \ep C$, which is obvious\footnote{Note that the proof term for this theorem is the Church numeral $0$.}. 
\end{proof}

\begin{definition}[Leibniz Equality]
  $x = y\ := \ \Pi C^1. x \ep C \to y \ep C$.
\end{definition}
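For instance (a sketch of ours, not from the text), reflexivity $x = x$ unfolds by this definition to an implication whose witness is the identity:

```latex
x = x \;:=\; \Pi C^1.\; x \ep C \to x \ep C,
\qquad\text{witnessed by the proof term } \lambda a.\,a .
```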
\begin{theorem}[Peano Axiom 2-4]
\

\begin{enumerate}
  \item $\forall x. x = x $.
  \item $\forall x.\forall y . x = y \to y = x$.
  \item $\forall x.\forall y. \forall z. x = y \to y = z \to x = z$.
  \end{enumerate}
\end{theorem}
\begin{proof}
  We only prove (2); the others are easy. Assume $\Pi C^1. x\ep C \to y \ep C$ (1); we want to show $y \ep A \to x \ep A$ for any $A^1$. Instantiate $C$ in (1) with $\iota z.  (z \ep A \to x \ep A)$. By comprehension, we get $(x \ep A \to x \ep A) \to (y \ep A \to x \ep A)$. Since $x \ep A \to x \ep A$ is derivable in our system, by modus ponens we get $y \ep A \to x \ep A$. 
\end{proof}

\begin{lemma}
\label{oconv}
   $\cdot \vdash \forall a. \forall b. \Pi P^1. ( a \ep P \to a = b \to b \ep P)$. 
\end{lemma}
\begin{proof}
  By modus ponens.
\end{proof}
\begin{theorem}[Peano's Axiom 5]
   $\cdot \vdash  \forall a. \forall b. (a \ep \mathsf{Nat} \to a = b \to b \ep \mathsf{Nat})$.
\end{theorem}
\begin{proof}
  Instantiate $P$ with $\iota x.x\ep \mathsf{Nat}$ in lemma \ref{oconv}. 
\end{proof}


\begin{theorem}[Peano's Axiom 6]
  \label{succ}
$\cdot \vdash \forall m. (m \ep \mathsf{Nat} \to \mathsf{S}m \ep \mathsf{Nat})$.
\end{theorem}
\begin{proof}
Assume $m \ep \mathsf{Nat}$. We want to show $\mathsf{S}m \ep \mathsf{Nat}$. By comprehension, we just need to show $ \Pi C^1.(\forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C)) \to 0 \ep C  \to \suc m \ep C$. By intros, we want to derive $m \ep \mathsf{Nat}, \forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C), 0 \ep C \vdash \suc m \ep C$. Since $m \ep \mathsf{Nat}$, we know that $\Pi C^1.(\forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C)) \to 0 \ep C  \to m \ep C$. By modus ponens, we have $ m \ep C$. We know that $m \ep \mathsf{Nat}, \forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C), 0 \ep C \vdash (m \ep C) \to (\mathsf{S} m)\ep C$. Thus we derive $m \ep \mathsf{Nat}, \forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C), 0 \ep C \vdash \suc m \ep C$, which is what we want\footnote{Note that the proof term for this theorem is the Church successor.}. 
  \end{proof}

\begin{theorem}[Induction Principle]
  \label{induction}
\

\noindent  $\vdash \Pi C^1. (\forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C)) \to 0 \ep C \to \forall m. (m \ep \mathsf{Nat} \to m \ep C)$

\end{theorem}
\begin{proof}
Assume $\forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C)$, $0 \ep C$, and $m \ep \mathsf{Nat}$. We need to show that $m \ep C$. Since $m \ep \mathsf{Nat}$ implies $\Pi C^1.(\forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C)) \to 0 \ep C  \to m \ep C$, by instantiation and modus ponens we get $m \ep C$.\footnote{The proof term for this theorem is $\lambda s. \lambda z. \lambda n. n\ s\ z$.}

\end{proof}

\noindent In order to proceed to prove Peano's axiom 7, we need to define a notion of contradiction in $\systemg$. 

\begin{definition}[Notion of Contradiction]
  $\bot := \forall x. \forall y. (x = y)$.  
\end{definition}

\begin{theorem}[Consistency (Meta)\footnote{Meaning the proof of this theorem relies on meta-level argument.}]
  \label{contradiction}
  $\bot$ is uninhabited in $\systemg \lbrack p \rbrack$.
\end{theorem}
\begin{proof}
  Suppose $\bot$ is inhabited, that is, there is a proof term $p$ such that 
  $\cdot \vdash p : \forall x. \forall y. \Pi C^1. x \ep C \to y\ep C$. By theorem \ref{const} and theorem \ref{preservation}, we know that $p$ must normalize to some normal proof term $p'$
  such that $\cdot \vdash p' : \forall x. \forall y. \Pi C^1. x \ep C \to y\ep C$. We know that $p'$ must be of the form $\lambda a.p''$ with $a: x \ep C$. Since $=_{\beta, \iota}$ is Church-Rosser, we cannot convert $x\ep C$ to $y \ep C$. So $p'$ cannot exist.   
\end{proof}

\begin{lemma}
  \label{zero1}
  $\vdash 0 = \suc 0 \to \bot$.
\end{lemma}
\begin{proof}
Assume $0 = \suc 0$, namely, $\Pi C^1. 0 \ep C \to \suc 0 \ep C$ $\dagger$. We want to show $\forall x.\forall y. \Pi A^1. x \ep A \to y \ep A$. 
Assume $x \ep A$ (1). We now instantiate $C$ with $\iota u. (((\lambda n. n\ (\lambda z.y)\ x)\ u) \ep A)$ in $\dagger$. By comprehension and beta reduction, we get $x \ep A \to y \ep A$ (2). By modus ponens on (1) and (2), we get $y \ep A$. 
\end{proof}

\noindent We also need predecessor to prove Peano's axiom 7. 
\begin{definition}
  $\pred := \lambda n. n  (\lambda x.x) 0$.
\end{definition}
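A quick sanity check of this definition (our own Python sketch; \texttt{zero}, \texttt{succ}, and the decoder \texttt{to\_int} are our helper names): applying $\pred$ to $\suc m$ selects the first branch $\lambda x.x$ and returns $m$, while applying it to $0$ returns $0$.

```python
# Predecessor on Scott numerals: pred := \n. n (\x.x) 0  (a sketch).
zero = lambda s: lambda z: z
succ = lambda n: lambda s: lambda z: s(n)
pred = lambda n: n(lambda x: x)(zero)   # S m |-> m,  0 |-> 0

def to_int(n):
    # Decode by repeated case analysis on the numeral.
    count = 0
    while True:
        tag, p = n(lambda m: ("succ", m))(("zero", None))
        if tag == "zero":
            return count
        count, n = count + 1, p

print(to_int(pred(succ(succ(zero)))))  # → 1
print(to_int(pred(zero)))              # → 0
```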

\begin{lemma}[Congruence of Equality]
  \label{cong}
  $\vdash \forall a.\forall b.\forall f. a = b \to f a = f b$.
\end{lemma}
\begin{proof}
  Assume $\Pi C. a \ep C \to b \ep C$. Let $C := \iota x. f x \ep P$ with $P$ free. Instantiating $C$ in the 
assumption, we get $a \ep (\iota x. f x \ep P) \to b \ep (\iota x. f x \ep P)$. By conversion, 
we get $f\ a \ep P \to f\ b \ep P$. So by polymorphic generalization, we get $f\ a = f\ b$.
  
\end{proof}

\begin{theorem}[Peano's Axiom 7]
  $\vdash \forall n. n \ep \nat \to (\suc n = 0 \to \bot)$
\end{theorem}
\begin{proof}
  We will use the induction principle (theorem \ref{induction}) to prove this. We instantiate 
  $C$ in theorem \ref{induction} with $\iota z. (\suc z = 0 \to \bot)$, obtaining $ \forall y . ( (\suc y = 0 \to \bot ) \to (\suc \suc y = 0 \to \bot)) \to (\suc 0 = 0 \to \bot) \to \forall m. (m \ep \mathsf{Nat} \to (\suc m = 0 \to \bot))$. The base case is by lemma \ref{zero1}. For the step case, we assume $\suc y = 0 \to \bot$ (IH) and want to show $\suc \suc y = 0 \to \bot$. 
  Assuming $\suc \suc y = 0$, we want to show $\bot$. By lemma \ref{cong}, we know that $\pred(\suc \suc y) = \pred 0$. By beta-reduction, we have $\suc y = 0$. Thus by IH, we have $\bot$. 
\end{proof}


\begin{theorem}[Peano's Axiom 8]
  $\forall m. \forall n. m\ep \nat \to n\ep \nat \to \suc m = \suc n \to m = n$.
\end{theorem}
\begin{proof}
Assume $\suc m = \suc n$. By lemma \ref{cong}, we have $\pred (\suc m) = \pred (\suc n)$. So 
by beta reduction, we have $m = n$.
\end{proof}

\noindent In order to state Peano's axiom 9, we extend the formulas of $\systemg$ with $F \wedge F'$. The proof of $F\wedge F'$ consists of both a proof of $F$ and a proof of $F'$\footnote{This extension can be avoided by defining $F\wedge F' := \Pi Y^0 . (F \to F' \to Y) \to Y$.}.

\begin{theorem}[Peano's Axiom 9, Weak Induction]
 \
 
  \noindent  $\vdash \Pi C^1. (\forall y . (y\ep \nat \wedge (y \ep C) \to (\mathsf{S} y) \ep C)) \to 0 \ep C \to \forall m. (m \ep \mathsf{Nat} \to m \ep C)$
\end{theorem}
\begin{proof}
  Assume $\forall y . (y\ep \nat \wedge (y \ep C) \to (\mathsf{S} y) \ep C)$ $\dagger$ and $0 \ep C$. We want to show that $\forall m. (m \ep \mathsf{Nat} \to m \ep C)$. It suffices to show $\forall m. (m \ep \mathsf{Nat} \to (m \ep \nat \wedge m \ep C))$. We prove this using theorem \ref{induction}. For the base case, it is obvious that $0 \ep \nat \wedge 0 \ep C$. For the step case, assuming $z\ep \nat \wedge z\ep C$ (IH), we need to show $\suc z \ep \nat \wedge \suc z\ep C$. By theorem \ref{succ}, we have $\suc z \ep \nat$. By $\dagger$, we know that $\suc z \ep C$. Thus $\forall z. (z \ep \mathsf{Nat} \to (z \ep \nat \wedge z \ep C))$. 
\end{proof}

\noindent We have now proved all nine of Peano's axioms. We leave the investigation of the relation between the strong and weak induction principles as future work. 

\section{Reasoning about Programs}
\label{prog}
System $\systemg$ is expressive enough to reason about programs. By programs we mean 
lambda terms with Scott encodings and recursive term definitions. We first show 
some simple examples involving Scott numerals, and then we show how to encode vectors in $\systemg$.

\begin{definition}
  $\mathsf{add} :=  \lambda n. \lambda m.n\ (\lambda p. \mathsf{add}\ p\ (\mathsf{S} m))\ m$
\end{definition}

\noindent We know that the above recursive equation can be solved by a fixpoint. 
For convenience, we simply use the definition as a kind of built-in beta equality, i.e., whenever
we see $\mathsf{add}$, we unfold it one step. 
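The one-step unfolding convention can be mirrored by ordinary recursion, e.g. in Python (a sketch of ours; \texttt{zero}, \texttt{succ}, and \texttt{to\_int} are our helper names, not part of $\systemg$):

```python
# add n m = n (\p. add p (S m)) m, with Python recursion standing in
# for the fixpoint (a sketch).
zero = lambda s: lambda z: z
succ = lambda n: lambda s: lambda z: s(n)

def add(n, m):
    # Case split on n: 0 yields m; S p recurses with an incremented m.
    return n(lambda p: add(p, succ(m)))(m)

def to_int(n):
    count = 0
    while True:
        tag, p = n(lambda q: ("succ", q))(("zero", None))
        if tag == "zero":
            return count
        count, n = count + 1, p

two, three = succ(succ(zero)), succ(succ(succ(zero)))
print(to_int(add(two, three)))  # → 5
```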

\begin{theorem}
$\cdot \vdash \forall n. (n \ep \mathsf{Nat} \to \mathsf{add}\ n\ 0 = n)$. 
\end{theorem}
\begin{proof}

We want to show $\forall n. (n \ep \mathsf{Nat} \to \mathsf{add}\ n\ 0 = n)$.
 Let $P := \iota x. \mathsf{add}\ x\ 0 = x$. Instantiating the $C^1$ in theorem \ref{induction} with $P$, we get $\forall y . ( \mathsf{add}\ y\ 0 = y \to \mathsf{add}\ (\mathsf{S} y)\ 0 = \mathsf{S} y) \to\mathsf{add}\ 0\ 0 = 0 \to \forall m. (m \ep \mathsf{Nat} \to m \ep P)$. We just have to prove $\forall y . ( \mathsf{add}\ y\ 0 = y \to \mathsf{add}\ (\mathsf{S} y)\ 0 = \mathsf{S} y)$ and $\mathsf{add}\ 0\ 0 = 0$. For the base case, we want to show $\Pi C. \mathsf{add}\ 0\ 0 \ep C \to 0 \ep C$. Assume $\mathsf{add}\ 0\ 0 \ep C$; since $\mathsf{add}\ 0\ 0 \to_{\beta} 0$, by conversion we get $0 \ep C$. The step case is a bit more involved: assume $\mathsf{add}\ y\ 0 = y$; we want to show $\mathsf{add}\ (\mathsf{S} y)\ 0 = \mathsf{S} y$. We have $\mathsf{add}\ y\ 0 \to_{\beta} y\ (\lambda p.\mathsf{add}\ p\ (\suc 0))\ 0$ and $\mathsf{add}\ (\mathsf{S}y)\ 0 \to_{\beta}^* \mathsf{add} \ y \ (\suc 0) \leftarrow_{\beta}^* \suc (\mathsf{add}\ y\ 0)$, so lemma \ref{cong} gives us the result. 
\end{proof}

\begin{theorem}
$\cdot \vdash \forall n. (n \ep \mathsf{Nat} \to \forall m.(m\ep \nat \to \mathsf{add}\ n\ m \ep \nat))$. 
  After transformation to $\systemg \lbrack t \rbrack$, we have $\Vdash \add \ep \nat \to \nat \to \nat$.\footnote{We write $U \to U'$ for $\Pi x:U.U'$ with $x \notin \mathrm{FV}(U')$.}
\end{theorem}
\begin{proof}
  Let $P := \iota z.\forall m.(m\ep \nat \to \mathsf{add}\ z\ m \ep \nat)$. We instantiate the
  $C$ in theorem \ref{induction} with $P$, obtaining $(\forall y . ( (y \ep P) \to (\mathsf{S} y) \ep P)) \to 0 \ep P \to \forall m. (m \ep \mathsf{Nat} \to m \ep P)$. For the base case, we need to show $\forall m.(m\ep \nat \to \mathsf{add}\ 0\ m \ep \nat)$. Since $\mathsf{add}\ 0\ m \to_{\beta} m$,
  we have the base case. For the step case, assuming $\forall m.(m\ep \nat \to \mathsf{add}\ y\ m \ep \nat)$ (IH), we need to show $\forall m.(m\ep \nat \to \mathsf{add}\ (\suc y)\ m \ep \nat)$. We know that $\mathsf{add}\ (\suc y)\ m \to_{\beta}^* \add \ y \ (\suc m)$. By (IH), we know
  $\add \ y \ (\suc m) \ep \nat$. So $\mathsf{add}\ (\suc y)\ m \ep \nat$. 
\end{proof}

In order to encode vectors in $\systemg$, we need to extend the formulas of $\systemg$
to specify binary relations, so we add the following syntactic categories.

\begin{definition}[Relation\footnote{We will show a more uniform extension of $\systemg$ in the next chapter.}]
  \
  
\noindent  Formula $F \ ::= ... \ | \ (t;t') \ep R \ | \ \Pi X^2. F$

\noindent  Binary Relation $R \ :: = X^2 \ | \ \iota(x;y).F$

\noindent Relational Comprehension $(t;t') \ep \iota(x;y).F =_{\iota} [(t;t')/(x;y)]F$
\end{definition}

\begin{definition}[Vector]
\

\noindent  $\mathsf{vec}(U, n) := $

$\iota x. \Pi C^2. (\forall y. \forall m. \forall u. (m\ep \mathsf{Nat} \to u \ep U \to (y;m) \ep C  \to (\mathsf{cons}\ m\ u\ y; \mathsf{S}m) \ep C )) \to (\mathsf{nil}; 0) \ep C \to (x; n) \ep C$

\noindent $\nil := \lambda y. \lambda x.x$

\noindent $\cons := \lambda n.\lambda v. \lambda l. \lambda y. \lambda x.y \ n\ v\ l$.

\end{definition}
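The list constructors can likewise be run as a sketch of ours in Python; we use plain Python values for the stored index and the elements, and \texttt{to\_list} is our own decoder (none of these names are part of $\systemg$):

```python
# Scott-encoded vectors: nil := \y.\x.x, cons := \n.\v.\l.\y.\x. y n v l
# (a sketch; indices are plain Python ints for readability).
nil = lambda y: lambda x: x
cons = lambda n: lambda v: lambda l: (lambda y: lambda x: y(n)(v)(l))

def to_list(vec):
    out = []
    while True:
        # Case-analyse: nil selects x, cons selects y applied to n, v, l.
        r = vec(lambda n: lambda v: lambda l: ("cons", n, v, l))(("nil",))
        if r[0] == "nil":
            return out
        out.append(r[2])   # the stored element
        vec = r[3]         # the tail

v = cons(1)("a")(cons(0)("b")(nil))   # a vector of length 2
print(to_list(v))  # → ['a', 'b']
```

Each cons cell stores the length of its tail, mirroring the index $m$ in the relation $(y;m) \ep C$ above.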
\begin{lemma}
  $\vdash \nil \ep \vecc(U, 0)$.
\end{lemma}

\begin{lemma}
  \label{veccons}
  $\vdash \forall n. n\ep \mathsf{Nat} \to \forall u. (u \ep U \to \forall l . (l \ep \vecc (U, n) \to (\cons\ n\ u\ l) \ep \vecc (U, \suc n)))$.
  Transformed to $\systemg \lbrack t \rbrack$, this reads $\Vdash \cons \ep \Pi n: \mathsf{Nat}.U \to \vecc (U, n) \to \vecc (U, \suc n)$. 
    
\end{lemma}
\begin{proof}
Assume $n\ep \mathsf{Nat}, u\ep U, l\ep \vecc (U, n)$. We want to show $(\cons\ n\ u\ l) \ep \vecc (U, \suc n)$. By comprehension, we need to show $\Pi C^2. (\forall y. \forall m. \forall u. (m\ep \mathsf{Nat} \to u \ep U \to (y;m) \ep C  \to (\mathsf{cons}\ m\ u\ y; \mathsf{S}m) \ep C )) \to (\mathsf{nil}; 0) \ep C \to ((\cons\ n\ u\ l); \suc n) \ep C$. Assume that we have $\forall y. \forall m. \forall u. (m\ep \mathsf{Nat} \to u \ep U \to (y;m) \ep C  \to (\mathsf{cons}\ m\ u\ y; \mathsf{S}m) \ep C )$ $\dagger$ and $(\mathsf{nil}; 0) \ep C$; we need to show that $((\cons\ n\ u\ l);\suc n) \ep C$. Since $l\ep \vecc (U, n)$, by comprehension we have 

$\Pi C^2. (\forall y. \forall m. \forall u. (m\ep \mathsf{Nat} \to u \ep U \to (y;m) \ep C  \to (\mathsf{cons}\ m\ u\ y; \mathsf{S}m) \ep C )) \to (\mathsf{nil}; 0) \ep C \to (l; n) \ep C$. 

\noindent By modus ponens, we have $(l; n) \ep C$. Instantiating $y$ with $l$, $m$ with $n$, and $u$ with $u$ in $\dagger$, we have $n\ep \mathsf{Nat} \to u \ep U \to (l;n) \ep C  \to (\mathsf{cons}\ n\ u\ l; \mathsf{S}n) \ep C $. So by modus ponens, we have $(\mathsf{cons}\ n\ u\ l; \mathsf{S}n) \ep C$.
  
\end{proof}

\begin{theorem}[Induction Principle]
  \label{indvec}
\

\noindent  $\vdash \mathsf{Ind}(U, n) := $

$\Pi C^2. (\forall y. \forall m. \forall u. (m\ep \mathsf{Nat} \to u \ep U \to (y;m) \ep C  \to (\mathsf{cons}\ m\ u\ y; \mathsf{S}m) \ep C )) \to (\mathsf{nil}; 0) \ep C \to \forall l.( l\ep \vecc(U,n) \to (l; n) \ep C)$
\end{theorem}
\begin{proof}
  \noindent Assume we have $ l\ep \vecc(U,n)$ and
  
  $\forall y. \forall m. \forall u. (m\ep \mathsf{Nat} \to u \ep U \to (y;m) \ep C  \to (\mathsf{cons}\ m\ u\ y; \mathsf{S}m) \ep C ), (\mathsf{nil}; 0) \ep C$.
  
  \noindent We want to show $(l; n) \ep C$. By comprehension, we have
  
 $\Pi C^2. (\forall y. \forall m. \forall u. (m\ep \mathsf{Nat} \to u \ep U \to (y;m) \ep C  \to (\mathsf{cons}\ m\ u\ y; \mathsf{S}m) \ep C )) \to (\mathsf{nil}; 0) \ep C \to (l; n) \ep C$. 
  
  \noindent By modus ponens, we have $(l; n) \ep C$.

\end{proof}

\begin{definition}[Append]
\

\noindent  $\app := \lambda n_1. \lambda n_2.\lambda l_1.\lambda l_2. l_1 (\lambda m. \lambda h. \lambda t. \cons\ (m+n_2) \ h \ (\app \ m \ n_2\ t\ l_2)) l_2$ 

\end{definition}
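The recursion of $\app$ can be sketched the same way (ours, not from the text; length indices are plain Python ints, and \texttt{nil}, \texttt{cons}, \texttt{to\_list} are the same helper names as before):

```python
# app n1 n2 l1 l2 = l1 (\m.\h.\t. cons (m+n2) h (app m n2 t l2)) l2
# (a sketch; indices are plain Python ints).
nil = lambda y: lambda x: x
cons = lambda n: lambda v: lambda l: (lambda y: lambda x: y(n)(v)(l))

def app(n1, n2, l1, l2):
    # n1 is carried along as in the definition, but the case split is on l1.
    return l1(lambda m: lambda h: lambda t:
              cons(m + n2)(h)(app(m, n2, t, l2)))(l2)

def to_list(vec):
    out = []
    while True:
        r = vec(lambda n: lambda v: lambda l: ("cons", n, v, l))(("nil",))
        if r[0] == "nil":
            return out
        out.append(r[2])
        vec = r[3]

v1 = cons(1)("a")(cons(0)("b")(nil))
v2 = cons(0)("c")(nil)
print(to_list(app(2, 1, v1, v2)))  # → ['a', 'b', 'c']
```

Note how each recursive call rebuilds the index as $m + n_2$, exactly the arithmetic that the typing $\vecc(U, n_1+n_2)$ of the theorem below tracks.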

\begin{theorem}
  $\Vdash \app \ep \Pi n_1:\mathsf{Nat}. \Pi n_2:\mathsf{Nat}. \vecc(U, n_1) \to \vecc(U, n_2) \to \vecc(U, n_1+n_2)$
\end{theorem}
\begin{proof}
  Note that we state the theorem in $\systemg \lbrack t \rbrack$. So we want to derive
  
 $n_1 \ep \nat, n_2 \ep \nat \Vdash \lambda l_1. \lambda l_2. l_1 (\lambda m. \lambda h. \lambda t. \cons\ (m+n_2) \ h \ (\app \ m \ n_2\ t\ l_2)) l_2\  \ep $
  
\noindent $\vecc(U, n_1) \to \vecc(U, n_2) \to \vecc(U, n_1+n_2)$. 
  
\noindent  Transforming it back to $\systemg$, we have: 

\noindent $n_1 \ep \nat, n_2 \ep \nat \vdash \forall x_1. x_1 \ep \vecc(U, n_1) \to \forall x_2. (x_2 \ep \vecc(U, n_2) \to  x_1 (\lambda m. \lambda h. \lambda t. \cons\ (m+n_2) \ h \ (\app \ m \ n_2\ t\ x_2)) x_2 \ep \vecc(U, n_1+n_2))$. 

\noindent We instantiate the $C$ in theorem \ref{indvec} with

\noindent $P := \iota \gray{$(l;n)$}.\forall x_2. (x_2 \ep \vecc(U, n_2) \to $

\noindent $\gray{$l$} (\lambda m'. \lambda h. \lambda t. \cons\ (m'+n_2) \ h \ (\app \ m' \ n_2\ t\ x_2)) x_2 \ep\vecc(U, \gray{$n$}+n_2))$. 

\noindent So we get 

$(\forall y. \forall m. \forall u. (m\ep \mathsf{Nat} \to u \ep U \to (y;m) \ep P  \to (\mathsf{cons}\ m\ u\ y; \mathsf{S}m) \ep P )) \to (\mathsf{nil}; 0) \ep P \to \forall l.( l\ep \vecc(U,n) \to (l; n) \ep P)$.

%% \noindent $(\forall y. \forall m. \forall u. (m\ep \mathsf{Nat} \to u \ep U \to $

%% \noindent $(\forall x_2. (x_2 \ep \vecc(U, n_2) \to  y (\lambda m'. \lambda h. \lambda t. \cons\ (m'+n_2) \ h \ (\app \ m' \ n_2\ t\ x_2)) x_2 \ep \vecc(U, m+n_2))) \to $

%% \noindent $(\forall x_2. (x_2 \ep \vecc(U, n_2) \to (\cons\ m\ u\ y) (\lambda m'. \lambda h. \lambda t. \cons\ (m'+n_2) \ h \ (\app \ m' \ n_2\ t\ x_2)) x_2 \ep \vecc(U, \mathsf{S}m+n_2)))$

%% \noindent $\to (\forall x_2. (x_2 \ep \vecc(U, n_2) \to (\nil (\lambda m'. \lambda h. \lambda t. \cons\ (m'+n_2) \ h \ (\app \ m' \ n_2\ t\ x_2)) x_2 \ep \vecc(U, 0+n_2)))) \to $

%% \noindent $(\forall x_1. x_1 \ep \vecc(U, n_1) \to \forall x_2. (x_2 \ep \vecc(U, n_2) \to $

%% \noindent $ x_1 (\lambda m. \lambda h. \lambda t. \cons\ (m+n_2) \ h \ (\app \ m \ n_2\ t\ x_2)) x_2 \ep \vecc(U, n_1+n_2)))$. 

\noindent For the base case, we can easily prove $\forall x_2. (x_2 \ep \vecc(U, n_2) \to (\nil (\lambda m'. \lambda h. \lambda t. \cons\ (m'+n_2) \ h \ (\app \ m' \ n_2\ t\ x_2)) x_2 \ep \vecc(U, 0+n_2)))$. For the step case, assume (IH)

\noindent $\forall x_2. (x_2 \ep \vecc(U, n_2) \to  y (\lambda m'. \lambda h. \lambda t. \cons\ (m'+n_2) \ h \ (\app \ m' \ n_2\ t\ x_2)) x_2 \ep \vecc(U, m+n_2))$, 

\noindent we want to show that $\forall x_2. (x_2 \ep \vecc(U, n_2) \to (\cons\ m\ u\ y) (\lambda m'. \lambda h. \lambda t. \cons\ (m'+n_2) \ h \ (\app \ m' \ n_2\ t\ x_2)) x_2 \ep \vecc(U, \mathsf{S}m+n_2))$. We know that 

\noindent $(\cons\ m\ u\ y) (\lambda m'. \lambda h. \lambda t. \cons\ (m'+n_2) \ h \ (\app \ m' \ n_2\ t\ x_2)) x_2 \to_{\beta}^* $

\noindent $\cons (m+n_2)\ u\ (\app\ m \ n_2 \ y\ x_2)\to_{\beta}^*$

\noindent $\cons\ (m+n_2)\ u\ (y \ (\lambda m'.\lambda h.\lambda t.\cons\ (m'+n_2)\ h \ (\app \ m'\ n_2 \ t\ x_2))\ x_2)$. 

\noindent By (IH), we know that

\noindent $y (\lambda m'. \lambda h. \lambda t. \cons\ (m'+n_2) \ h \ (\app \ m' \ n_2\ t\ x_2)) x_2 \ep \vecc(U, m+n_2)$. By lemma \ref{veccons}, $\cons\ (m+n_2)\ u\ (y \ (\lambda m'.\lambda h.\lambda t.\cons\ (m'+n_2)\ h \ (\app \ m'\ n_2 \ t\ x_2))\ x_2) \ep \vecc(U, \suc(m+n_2))$. Thus 
$(\cons\ m\ u\ y) (\lambda m'. \lambda h. \lambda t. \cons\ (m'+n_2) \ h \ (\app \ m' \ n_2\ t\ x_2)) x_2 \ep \vecc(U, \suc(m+n_2))$. Assuming $\suc(m+n_2) = \suc m + n_2$, we have the proof.

\end{proof}

\begin{theorem}[Associativity]
  
  $ \vdash \forall n_1. \forall n_2. \forall n_3. \forall v_1. \forall v_2. \forall v_3.(n_1 \ep \mathsf{Nat} \to n_2 \ep \mathsf{Nat} \to n_3 \ep \mathsf{Nat} \to v_1 \ep \vecc(U, n_1) \to v_2 \ep \vecc(U, n_2) \to v_3 \ep \vecc(U, n_3) \to $
  
  \noindent $\app \ n_1\ (n_2+n_3)\ v_1 \ (\app\ n_2\ n_3 \ v_2 \ v_3) = \app \ (n_1 + n_2) \ n_3\ (\app \ n_1 \ n_2 \ v_1 \ v_2) \ v_3)$
\end{theorem}

\begin{proof}
  Assume $n_1 \ep \mathsf{Nat}, n_2 \ep \mathsf{Nat}, n_3 \ep \mathsf{Nat}, v_2 \ep \vecc(U, n_2), v_3 \ep \vecc(U, n_3)$. We want to show 
  
  $\forall v_1. (v_1 \ep \vecc(U, n_1) \to \app \ n_1\ (n_2+n_3)\ v_1 \ (\app\ n_2\ n_3 \ v_2 \ v_3) = \app \ (n_1 + n_2) \ n_3\ (\app \ n_1 \ n_2 \ v_1 \ v_2) \ v_3)$. 
  
  \noindent Let $P:= \iota (y;z). (\app \ z\ (n_2+n_3)\ y \ (\app\ n_2\ n_3 \ v_2 \ v_3) = \app \ (z + n_2) \ n_3\ (\app \ z \ n_2 \ y \ v_2) \ v_3)$. We instantiate the $C$ in $\mathsf{Ind}(U,n_1)$ with $P$; by comprehension we have 
  
\noindent $(\forall y. \forall m. \forall u. (m\ep \mathsf{Nat} \to u \ep U \to (y;m) \ep P  \to (\mathsf{cons}\ m\ u\ y; \mathsf{S}m) \ep P )) \to (\mathsf{nil}; 0) \ep P \to \forall l.( l\ep \vecc(U,n_1) \to (l; n_1) \ep P)$. 
  
  \noindent So we just need to prove base case: 
  
  $\app \ 0\ (n_2+n_3)\ \nil \ (\app\ n_2\ n_3 \ v_2 \ v_3) = \app \ (0 + n_2) \ n_3\ (\app \ 0 \ n_2 \ \nil \ v_2) \ v_3$
  
  \noindent and step case:
  
  $\forall y. \forall m. \forall u.(m\ep \mathsf{Nat} \to u \ep U\to  (\app \ m\ (n_2+n_3)\ y \ (\app\ n_2\ n_3 \ v_2 \ v_3) = \app \ (m + n_2) \ n_3\ (\app \ m \ n_2 \ y \ v_2) \ v_3)\to (\app \ \suc m\ (n_2+n_3)\ (\mathsf{cons}\ m\ u\ y) \ (\app\ n_2\ n_3 \ v_2 \ v_3) = \app \ (\suc m + n_2) \ n_3\ (\app \ \suc m \ n_2 \ (\mathsf{cons}\ m\ u\ y) \ v_2) \ v_3))$. 
  
  \noindent For the base case, $\app \ 0\ (n_2+n_3)\ \nil \ (\app\ n_2\ n_3 \ v_2 \ v_3) \to_{\beta}^* \app\ n_2\ n_3 \ v_2 \ v_3 \leftarrow_{\beta}^* \app \ (0 + n_2) \ n_3\ (\app \ 0 \ n_2 \ \nil \ v_2) \ v_3$. For the step case, we assume $\app \ m\ (n_2+n_3)\ y \ (\app\ n_2\ n_3 \ v_2 \ v_3) = \app \ (m + n_2) \ n_3\ (\app \ m \ n_2 \ y \ v_2) \ v_3$ (IH); we want to show
  
\noindent  $\app \ \suc m\ (n_2+n_3)\ (\mathsf{cons}\ m\ u\ y) \ (\app\ n_2\ n_3 \ v_2 \ v_3) = $
  
  \noindent $\app \ (\suc m + n_2) \ n_3\ (\app \ \suc m \ n_2 \ (\mathsf{cons}\ m\ u\ y) \ v_2) \ v_3$(Goal). 
  
  \noindent We know that $\app \ \suc m\ (n_2+n_3)\ (\mathsf{cons}\ m\ u\ y) \ (\app\ n_2\ n_3 \ v_2 \ v_3) \to_{\beta}^* \cons (m+n_2+n_3)\ u \ \gray{$(\app \ m\ (n_2 + n_3)\ y \ (\app\ n_2 \ n_3\ v_2 \ v_3))$}$. The right-hand side of (Goal) can be reduced to $\cons (m+n_2+n_3)\ u \gray{$(\app \ (m+n_2)\ n_3\ (\app \ m \ n_2\ y\ v_2)\ v_3)$}$. So (IH) is enough to give us (Goal).
\end{proof}

\section{Termination Analysis in System $\systemg$}
In this section, we show that the elements of inductively defined sets are solvable. A direct consequence of this result is that these elements are terminating with respect to head reduction.

\subsection{Preliminaries}
\label{g:pre}
The definitions, lemmas, and theorems in this subsection come from Barendregt~\cite{Barendregt:1985}, Chapter 8.3.

\begin{definition}[Solvability]
\

  \begin{itemize}
  \item   A closed lambda term $t$, i.e. $\mathrm{FV}(t) = \emptyset$, is solvable if
there exist $t_1,..., t_n$ such that $t\ t_1 ... t_n =_{\beta} \lambda x.x$.
\item An arbitrary term $t$ is solvable if the closure $\lambda x_1...\lambda x_n.t$, where
$\{x_1,...,x_n\} = \mathrm{FV}(t)$, is solvable.
  \item $t$ is unsolvable iff $t$ is not solvable.
  \end{itemize}
\end{definition}
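As a standard illustration (ours, not from the text), the term $\Omega := (\lambda x.x\,x)\,(\lambda x.x\,x)$ is unsolvable: its head redex contracts to the whole term again, so no choice of arguments $t_1,\ldots,t_n$ can ever reach $\lambda x.x$:

```latex
\Omega\,t_1 \cdots t_n \;\to_h\; \Omega\,t_1 \cdots t_n \;\to_h\; \cdots
```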

\begin{lemma}
  Every term $t$ has one of the following forms:
  \begin{itemize}
  \item $\lambda x_1....\lambda x_n.x t_1...t_m$, where $n,m \geq 0$; such a term is called a head normal form. 
  \item $\lambda x_1....\lambda x_n.((\lambda y.t) t_1)...t_m$, where $m \geq 1$, $n \geq 0$; the subterm $(\lambda y.t) t_1$ is called the head redex.
  \end{itemize}
\end{lemma}
\begin{definition}[Head Reduction]
  $t \to_h t'$ if $t'$ results from contracting the head redex of $t$. 
\end{definition}
\begin{theorem}
  A term $t$ has a head normal form iff it is terminating with respect to head reduction.
\end{theorem}
\begin{theorem}[Wadsworth]
  $t$ is solvable iff $t$ has a head normal form. In particular, all terms in
normal forms are solvable, and unsolvable terms have no normal form.
\end{theorem}

\begin{theorem}[Genericity]
  \label{general}
  For an unsolvable term $t$, if $t_1 t =_{\beta} t_2$, where $t_2$ is in normal form, then
for any $t'$ we have $t_1 t' =_{\beta} t_2$.
\end{theorem}

\noindent Unsolvable terms are in general computationally irrelevant, so it is reasonable to equate
all unsolvable terms. 

\begin{definition}[Omega-Reduction]
Let $\Omega$ be $(\lambda x.xx)\lambda x.xx$; then $t \to_{\omega} \Omega$ iff $t$ is unsolvable and $t \not \equiv \Omega$.
%  $\lambda x.t x \to_{\eta} t$ if $x \notin \mathrm{FV}(t)$. 

\end{definition}

\begin{theorem}
  $\to_{\beta} \cup \to_{\omega}$ is Church-Rosser.
\end{theorem}

\subsection{Head Normalization}
\label{head}
We add omega-reduction as part of the term reduction in $\mathfrak{G}$. We now define another notion of contradiction: $\bot' := \forall x. x = \Omega$. Note that this implies $\forall x.\forall y. x = y$, so we can safely take it as a contradiction.

\begin{theorem}
  \label{omega}
  $\vdash \forall n. ( n \ep \mathsf{Nat} \to (n = \Omega \to \bot'))$.
\end{theorem}
\begin{proof}
  We will prove this by induction. Recall the induction principle:  
  
  $\Pi C^1. (\forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C)) \to 0 \ep C \to \forall m. (m \ep \mathsf{Nat} \to m \ep C)$.
  
  \noindent We instantiate $C$ with $\iota z. (z = \Omega \to \bot')$; by comprehension, we then have $(\forall y . ( (y = \Omega \to \bot'  ) \to (\mathsf{S} y = \Omega \to \bot')) \to (0 = \Omega \to \bot') \to \forall m. (m \ep \mathsf{Nat} \to (m = \Omega \to \bot'))$. It is enough to show $0 = \Omega \to \bot'$ and $\mathsf{S} y = \Omega \to \bot'$. Recall that for Scott numerals we have $0 := \lambda s.\lambda z.z$ and $\suc y := \lambda s.\lambda z.s\ y$. Assume $0 = \Omega = \lambda x_1.\lambda x_2.\Omega$, and let $F := \lambda u. u \ p\ q$. Assume $q \ep X^1$; then $F \ 0 \ep X^1$ (since $F\ 0 =_{\beta} q$). So $F \ (\lambda x_1.\lambda x_2.\Omega) \ep X^1$, thus $\Omega \ep X^1$. We have thus shown
$\Pi X^1. (q \ep X^1 \to \Omega \ep X^1)$, which means $\forall q. q = \Omega$. So $0 = \Omega \to \bot'$. Now let us show $\mathsf{S} y = \Omega \to \bot'$. Assume $\lambda s.\lambda z.s\ y = \Omega = \lambda x_1.\lambda x_2.\Omega$. Let $F := \lambda n.n\ (\lambda p.q)\ z$. Assume $q \ep X^1$; then $F\ (\lambda s.\lambda z.s\ y) \ep X^1$, thus $F\ (\lambda x_1.\lambda x_2.\Omega) \ep X^1$, meaning $\Omega \ep X^1$. So we have shown $\Pi X^1. (q \ep X^1 \to \Omega \ep X^1)$, thus $\forall q. q = \Omega$. So $\mathsf{S} y = \Omega \to \bot'$.  
\end{proof}

\noindent The above theorem implies that every member of $\mathsf{Nat}$ has a head normal form, and it can be generalized to show that the elements of inductively definable sets are solvable. To see this, we prove the following meta-theorem.

\begin{theorem}[Meta]
 If $\vdash t \ep \nat$, then $t \not =_{\beta,\omega} \Omega$.
\end{theorem}
\begin{proof}
By theorem \ref{omega}, we know that $\vdash t = \Omega \to \bot'$. We know that by the \textit{conv} rule, if $t =_{\beta, \omega} t'$, then $\vdash t = t'$ in $\systemg$. By contraposition, if $\not\vdash t = t'$, then $t \not =_{\beta, \omega} t'$. Since $\systemg$ is consistent (theorem \ref{contradiction}), we know that $\not \vdash t = \Omega$. So $t \not =_{\beta, \omega} \Omega$.
\end{proof}
\subsection{Leibniz Equality in $\mathfrak{G}$}
\label{leibniz}

We know that by the \textit{conv} rule, if $t =_{\beta,\eta, \omega} t'$, then $\vdash t = t'$ in $\systemg$. It is natural to ask whether the other direction holds, namely, whether $\vdash t = t'$ implies $t=_{\beta, \eta, \omega} t'$. By contraposition, we would need to prove: if $t\not =_{\beta, \eta, \omega} t'$, then $\not\vdash t = t'$. We conjecture that this is hard to prove, due to the genericity property (theorem \ref{general}) of the lambda calculus. Roughly, if $t$ is solvable and $t'$ is unsolvable, we cannot define a lambda term $F$ such that $F t =_{\beta} x$ and $F t' =_{\beta} y$: by genericity, we would have $F t =_{\beta} y$, thus $x =_{\beta}y$, which is impossible for beta-reduction. However, when $t$ and $t'$ are both solvable and $t \not =_{\beta,\eta} t'$, then by the results of Coppo et al. \cite{Coppo:1978} we can indeed define a lambda term $F$ such that $F t =_{\beta} x$ and $F t' =_{\beta} y$. So we can derive $\vdash t = t' \to \bot$ in System $\systemg$. 

%% \begin{theorem}\footnote{This theorem comes from Barendregt's \cite{Barendregt:1985}, page 396.} 
%%   \label{sep}
%%   Let $t_1, t_2$ be two closed beta-eta normal forms, then there exists a closed term 
%%   $F$ such that:
  
%%   $F t_1 t_2 =_{\beta} \mathsf{True} \equiv \lambda x.\lambda y.x$ if $t_1 \equiv t_2$. 
  
%%   $F t_1 t_2 =_{\beta} \mathsf{False} \equiv \lambda x.\lambda y.y$ if $t_1 \not \equiv t_2$. 
  
%% \end{theorem}



%% \begin{theorem}
%%   \label{neg}
%%   If $t_1$ and $t_2$ are distinct beta-eta normal forms, then $\vdash (t_1 = t_2) \to \bot$.
%% \end{theorem}
%% \begin{proof}
%%   Assume $t_1 = t_2$. By theorem \ref{sep}, we know that $F t_1 t_1 = \mathsf{True}$. Thus 
%%   we have $F t_1 t_2 = \mathsf{True} = \mathsf{False}$. This will lead to a contradiction.
%% \end{proof}

\begin{theorem}
  Assume $t_1, t_2$ are solvable terms. If $\vdash t_1 = t_2$ in $\mathfrak{G}$, then $t_1 =_{\beta\eta} t_2$. 
\end{theorem}
\begin{proof}
 By contraposition, we want to prove: if $t_1 \not =_{\beta\eta} t_2$, then $\not \vdash t_1 = t_2$. Since $t_1 \not=_{\beta\eta} t_2$, the terms $t_1$ and $t_2$ are \textit{separable}\footnote{See Barendregt \cite{Barendregt:1985}, page 256.}, i.e. there exists a lambda term $F$ such that $F t_1 =_{\beta} x$ and $F t_2 =_{\beta} y$. Thus we can derive $\vdash t_1 = t_2 \to \bot$. Since $\mathfrak{G}$ is consistent, we have $\not \vdash t_1 = t_2$.
\end{proof}

%% When we think about Leibniz law of identity in generals, it is also strongest version of 
%% equality, namely, identity. So intuitively, Leibniz equality in $\mathfrak{G}$ should be corresponded to a notion of intensional identity. And the conversion rule allows us to treat $\beta\eta\Omega$ equivalence as the intensional identity in $\mathfrak{G}$. 
\noindent The developments in this section, together with Section \ref{head}, show that if $\vdash t = t' \to \bot$ in $\systemg$, then $t \not=_{\beta,\eta,\omega} t'$; and that if $t_1, t_2$ are solvable terms, then $\vdash t_1 = t_2$ in $\mathfrak{G}$ implies $t_1 =_{\beta\eta} t_2$.



\section{Summary}
We present System $\systemg$ and develop Peano's axioms and a vector encoding in System $\systemg$ as evidence of its potential. The usefulness of $\systemg \lbrack t \rbrack$ is not obvious in this chapter. In the implementation, we make use of reciprocity to derive inductive sets based on algebraic data type definitions. The existence of $\systemg \lbrack t \rbrack$ provides a way to understand polymorphic-dependent types through $\systemg$. One difference between System $\systemg$ and $\mathrm{PTS}$-style systems is that computation at the formula level is currently not possible in $\systemg$; more research will be needed to explore this issue.

Compared to a usual typed functional programming language, the notion of set in System $\systemg$ is more precise than the notion of type. Not surprisingly, it is impossible to fully automate reasoning in $\systemg$. However, a degree of automation is still possible; together with human guidance, it would be an attractive tool to have alongside the usual type system. In fact, the implementation in the next chapter shows that it is possible to obtain such a system.



\chapter{Implementation and Future Improvements}
\label{final}
We first define the logic implemented in $\mathrm{Gottlob}$, which is an extension of $\systemg$. Then we discuss
the current implemented features of $\mathrm{Gottlob}$. Finally, we discuss some possible improvements over the current implementation. 

\section{The $\mathrm{Gottlob}$ System}
\label{logic}
The logic in the $\mathrm{Gottlob}$ system is an extension of System $\systemg$ with Church's simple types~\cite{church1940}, and it maintains the comprehension scheme \`a la Takeuti.

\begin{definition}[Syntax]
  \
  
  Simple Types  $\tau \ ::= \ \iota \ | \ o \ | \ \tau \to \tau' $
  
  Lambda Terms $t \ ::= \ x \ | \ \lambda x.t \ | \ t t'$
  
  PreFormula $F \ ::= x \ | \ \iota x.F \ | \ t \ep F \ | \ F \to F' \ | \ \forall x.F \ | \ F F' \ | \ F t $
  
  Proof $p \ ::= \ x \ | \ \mathrm{mp}\ p\ p' \ | \ \mathrm{inst}\ p\ t \ | \ \mathrm{cmp}\ p \ | \ \mathrm{ug}\ x . p \ | \ \mathrm{discharge}\ x:F. p$
  
  Type Context $\Delta \ ::= \ \cdot \ | \ \Delta, x:\tau$
  
  Proof Context $\Gamma \ ::= \ \cdot \ | \ \Gamma, x:F$
\end{definition}

\noindent The intended meaning of type $\iota$ is individuals, and of type $o$, formulas. With
Church's simple type device, the sets mentioned in the previous chapter become preformulas of type $\iota \to o$. 
\begin{definition}[Type Inference for PreFormula]

  \

\begin{tabular}{lll}
    
\infer{\Delta \vdash t : \iota}{}

&
\infer{\Delta \vdash x : \tau}{x : \tau \in \Delta}

&

\infer{\Delta \vdash \iota x.F : \tau' \to \tau}
{\Delta, x:\tau' \vdash  F : \tau}

\\
\\
\infer{\Delta \vdash F t : \tau}{\Delta \vdash F : \iota \to \tau}

&

\infer{\Delta \vdash F_1\to F_2 : o}
{\Delta \vdash  F_1 : o & \Delta \vdash F_2 : o}

&

\infer{\Delta \vdash t\ep F : o}{\Delta \vdash F : \iota \to o}

\\
\\

\infer{\Delta \vdash F F' : \tau'}{\Delta \vdash F : \tau \to \tau' & \Delta \vdash F' : \tau}

&

\infer{\Delta \vdash  \forall x . F : o}{\Delta, x:\tau \vdash F : o}

\end{tabular}  
  
\end{definition}

\noindent Note that type inference for preformulas is decidable. We call a preformula
of type $o$ a well-formed formula. 

\begin{definition}[Proof Checking Rules] \fbox{$\Gamma \vdash p : F$} 

  \

  \begin{tabular}{lll}
    
\infer{\Gamma \vdash \mathrm{ug}\ x. p : \forall x.F}
{\Gamma \vdash p: F &  x \notin \mathrm{FV}(\Gamma)}

&
\infer{\Gamma \vdash \mathrm{cmp}\ p : F_2}{\Gamma \vdash p:
F_1 &  F_1 \cong F_2}

\\
\\
\infer{\Gamma \vdash x:F}{(x:F) \in \Gamma}

&

\infer{\Gamma \vdash \mathrm{inst}\ p\ Q :[Q/x]F}{\Gamma
\vdash p: \forall x.F & Q ::= \ t\ |\  F}
\\
\\

\infer{\Gamma \vdash \mathrm{discharge}\ a : F_1 . p : F_1\to F_2}
{\Gamma, a:F_1 \vdash p: F_2}

&

\infer{\Gamma \vdash \mathrm{mp} \ p\ p':F_2}{\Gamma
\vdash p: F_1 \to F_2 & \Gamma \vdash p': F_1}
\end{tabular}
\end{definition}

\noindent We can see that this proof checking is actually simpler than the one in Section \ref{gp}, Chapter \ref{comprehension}. The proof checking rules are specifically designed so that given $\Gamma$ and $p$, we can deduce the $F$ with $\Gamma \vdash p : F$.

\begin{definition}
  $F \cong F'$ iff one of the following holds.
  \begin{itemize}
  \item $F \equiv [t/x]F_1$ and $F' \equiv [t'/x]F_1$ with $t =_{\beta} t'$.
  \item $F \equiv t \ep (\iota x.F_1)$ and $F' \equiv [t/x]F_1$.
  \item $F \equiv  \mathcal{C}[(\iota x.F_1)Q]$ and $F' \equiv \mathcal{C}[[Q/x]F_1]$, where $Q \ ::=\ t \ | \ F$ for some preformula context $\mathcal{C}$.
  \end{itemize}
\end{definition}

\noindent We have now seen the full specification of the logic in $\mathrm{Gottlob}$. It is considered
more flexible in the sense that Leibniz equality can now be defined as 

$\mathrm{Eq} : \iota \to \iota \to o := \iota a . \iota b . \forall C . a \ep C \to b \ep C$

\noindent Vectors can be defined as 

$\mathrm{Vec}\ : (\iota \to o) \to \iota \to \iota \to o := \iota U . \iota a . \iota x . \forall V ...$.

\noindent The point is that with the help of comprehension and simple types, we do not need to 
appeal to meta-level conventions as we did in the last chapter. $\mathrm{Gottlob}$ is considered more expressive than System $\systemg$ in the sense that we can now express sets of sets, namely, entities of type $(\iota \to o) \to o$, up to an arbitrary hierarchy. We can still define an erasure from $\mathrm{Gottlob}$ to Girard's System $\mathbf{F}$ (erasing everything except entities of type $o$), thus not every formula is provable in $\mathrm{Gottlob}$. 

\section{The Implemented Features of $\mathrm{Gottlob}$}
 $\mathrm{Gottlob}$ is implemented in Haskell. The code base is currently about 2700 lines of Haskell code (loc): about 600 loc for the parser and the pretty-printing module; about 400 loc to describe the syntax tree; about 700 loc to implement the proof checker; the rest of the code deals with program transformation and
polymorphic type checking. The project is available at \url{https://github.com/Fermat/Gottlob}.  

\textbf{Logic}. The logic in Gottlob is described in Section \ref{logic}. We implement a simple constraint-solving algorithm to check the well-formedness of a formula. Proofs are represented as objects, not as functions. So in $\mathrm{Gottlob}$, we do not use proofs as programs and we do not run proofs as programs. That is not to say we cannot program with proofs: as we will discuss later, there are many common proof patterns we want to capture, and in $\mathrm{Gottlob}$ we can use a notion of \textit{tactic} to capture them. Basically, a tactic is a meta-level function that takes in an object (a lambda term, formula, or proof) and returns a proof. In $\mathrm{Gottlob}$, formulas/sets cannot be defined by recursion/induction, so inductive formulas and inductive predicates are not supported. The proof language is in natural deduction style, while still allowing the user to write one big proof term to prove a theorem if she/he prefers. The proof language is carefully designed so that $\mathrm{Gottlob}$ can infer the formula of a proof
term.  

\textbf{Proof Pattern and Tactic}. After the second iteration of the implementation, the author realized that although treating proofs as objects simplifies the proof checking process, 
it forces the author to write long proofs most of the time, even for simple lemmas such as 
congruence of equality. Long proofs greatly affect readability, and readability
is one of the design goals of $\mathrm{Gottlob}$. We noticed that this issue can be fixed by introducing user-defined tactics in the proofs. By tactic we mean a meta-program that can take proofs/formulas/programs as arguments and produce a checkable proof. The idea is that there are many proof patterns that cannot easily be captured by a lemma, but can be captured reasonably by a tactic. For example, we know that we can always construct a proof of $t_1 =_{\mathrm{Leibniz}} t_2$ if $t_1$ can be evaluated to $t_2$. However, the notion of ``$t_1$ can be evaluated to $t_2$'' cannot be captured inside the language, so every time one wants to prove $t_1 =_{\mathrm{Leibniz}} t_2$, one would need to construct such a proof manually. It is easy to see that all these proofs are the same except that $t_1, t_2$ vary. This proof pattern can be captured by introducing a meta-program that takes
$t_1, t_2$ as arguments and produces a proof of $t_1 =_{\mathrm{Leibniz}} t_2$. This meta-program
does not need to be typed, because the correctness of its outputs will always be checked
by the proof checker. 
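To make the ``$t_1$ can be evaluated to $t_2$'' test concrete, the following Haskell sketch shows the kind of meta-level evaluator such a tactic could use. The data type and function names here are illustrative, not Gottlob's actual internals, and the substitution assumes all bound variables are distinct (no capture-avoiding renaming).

```haskell
-- Untyped lambda terms, as a tactic would manipulate them at the meta-level.
data Term = Var String | Lam String Term | App Term Term
  deriving (Eq, Show)

-- Naive substitution [s/x]t; assumes bound variables are globally
-- distinct, so no capture-avoiding renaming is performed.
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- One step of normal-order beta reduction, if a redex exists.
step :: Term -> Maybe Term
step (App (Lam x b) a) = Just (subst x a b)
step (App f a)         = case step f of
                           Just f' -> Just (App f' a)
                           Nothing -> App f <$> step a
step (Lam x b)         = Lam x <$> step b
step (Var _)           = Nothing

-- Does t1 reach t2 within n reduction steps?
reducesTo :: Int -> Term -> Term -> Bool
reducesTo n t1 t2
  | t1 == t2  = True
  | n <= 0    = False
  | otherwise = case step t1 of
                  Just t1' -> reducesTo (n - 1) t1' t2
                  Nothing  -> False
```

A tactic built around such a check would, on success, emit a candidate proof of $t_1 =_{\mathrm{Leibniz}} t_2$; since the emitted proof is always re-checked, the evaluator itself need not be trusted.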

Since the logic in Gottlob is higher order logic \`a la Takeuti, it can itself capture some common proof patterns. For example, for \textit{any} formula $F_1(x)$ with $x$ free in $F_1$, 
we know that we can always construct a proof of $\forall x.F_1(x) \to F_1(x)$. This pattern is
captured by a proof of $\forall C.\forall x. x \ep C \to x \ep C$. To obtain a proof of
$\forall x.F_1(x) \to F_1(x)$, one just needs to instantiate $C$ with $\iota x.F_1(x)$; then by
comprehension we get a proof of $\forall x.F_1(x) \to F_1(x)$. So we think that higher order logic in combination with tactics provides a good way to capture proof patterns.
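Spelled out with the proof checking rules of Section \ref{logic}, this is a two-step derivation (a sketch, treating $\cong$ as applying under the universal quantifier): given $p : \forall C . \forall x .\ x \ep C \to x \ep C$, the \textrm{inst} rule yields

$\mathrm{inst}\ p\ (\iota y . F_1(y)) \ : \ \forall x .\ x \ep (\iota y . F_1(y)) \to x \ep (\iota y . F_1(y))$

\noindent and then, since $x \ep (\iota y . F_1(y)) \cong F_1(x)$, the \textrm{cmp} rule yields

$\mathrm{cmp}\ (\mathrm{inst}\ p\ (\iota y . F_1(y))) \ : \ \forall x .\ F_1(x) \to F_1(x)$.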


\textbf{About Gottlob Programs}. We mentioned that the logic includes the untyped lambda calculus as its domain of individuals, and the untyped lambda calculus is the basis of programs in $\mathrm{Gottlob}$. It is natural to be concerned about the type discipline for programs. Empirically, we 
realized that type checking can catch a range of bugs without requiring the programmer to
annotate the program. So we implemented a version of Hindley-Milner polymorphic type inference based on \cite{jones1999}. Our type inference system can handle mutually recursive program definitions 
 \textit{naturally}, similar to the style of Haskell. We want to emphasize that even though polymorphic 
 type inference is convenient, it does not catch all bugs and certainly does not verify
 a program. One would need to use the logic of $\mathrm{Gottlob}$ to prove theorems about programs. $\mathrm{Gottlob}$'s logic treats programs as untyped lambda terms. We think this is appropriate, because one usually needs to reason about the execution behavior of a program, not 
 its typing behavior, so the type information of a program is not as relevant as one may think. So when the author writes programs and proves properties about them in $\mathrm{Gottlob}$, internally it first type-checks the programs, then elaborates them to 
 untyped lambda calculus, which is the execution model of the programs, and finally the reasoning is performed on the elaborated lambda terms\footnote{So it feels like reasoning directly on the compiled programs.}. 

\textbf{Pattern Matching and Algebraic Data Types}. Pattern matching and algebraic data types 
are central in $\mathrm{Gottlob}$. A program can be defined as a set of ``equations'', just like a Haskell function definition, and within each equation one can use a \textit{case} expression
to further pattern match on data. So at the surface level, $\mathrm{Gottlob}$ supports 
pattern matching. Internally, polymorphic type checking is first performed on functions defined by pattern matching. After type checking, $\mathrm{Gottlob}$ translates a set of equations
into a single equivalent function defined by case expressions (this process is described in \cite{peyton1987}), and then further translates this function into a lambda term. The translation from 
case expressions to lambda terms follows the Scott encoding scheme, so case expressions
are not primitive in the execution model. 

The translation of pattern matching only makes sense when the data is Scott encoded.
So for each algebraic data type declaration, $\mathrm{Gottlob}$ constructs a corresponding Scott-encoded lambda term for each data constructor. The data type declaration is also used
to automatically derive the corresponding inductively defined set and the corresponding induction principle, as well as for polymorphic type checking. Let us 
see a concrete example; the following code is the data declaration for lists and the append function in $\mathrm{Gottlob}$.  

\begin{verbatim}
data List U where
    nil :: List U
    cons :: U -> List U -> List U
  deriving Ind
append nil l = l
append (cons u l') l = cons u (append l' l)
\end{verbatim}

\noindent From the type annotations in the data type declaration, $\mathrm{Gottlob}$ will infer that the type of $\mathrm{append}$ is $\forall U. \mathrm{List}\ U \to \mathrm{List}\ U \to \mathrm{List}\ U$. It will also infer that

$\mathrm{nil}\ := \lambda n . \lambda c . n$ 

$\mathrm{cons} \ := \lambda a_2 . \lambda a_1 . \lambda n . \lambda c . c\ a_2\ a_1$

$\mathrm{List} \ : (\iota \to o) \to \iota \to o\ = \iota U . \iota x . \forall L...$

$\mathrm{indList} \ :=\ p : \forall U . \forall L . \mathrm{nil} \ep L\  U \to (\forall x . x \ep U \to \forall x0 . x0 \ep L\ U \to \mathrm{cons}\ x\ x0 \ep L\ U) \to \forall x . x \ep \mathrm{List}\ U \to x \ep L\ U$. 

\noindent Note that the $p$ in $\mathrm{indList}$ is the proof of the induction principle; $\mathrm{Gottlob}$ will check the proof $p$. Also, $\mathrm{Gottlob}$ will perform this process for
\textit{any} algebraic data type. 
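To illustrate the Scott encoding behind this translation, the following Haskell sketch mirrors the inferred terms above: $\mathrm{nil}$ and $\mathrm{cons}$ are exactly the inferred lambda terms (wrapped in a newtype so that Haskell's type checker accepts them), and $\mathrm{append}$ is the translation of the two equations into case analysis. This is an illustration of the encoding scheme, not Gottlob's actual generated code; the names \texttt{caseOf} and \texttt{toNative} are introduced here for exposition.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Scott-encoded list is its own case-analysis function, taking a
-- "nil branch" and a "cons branch".
newtype List a = List { caseOf :: forall r. r -> (a -> List a -> r) -> r }

-- nil := \n. \c. n
nil :: List a
nil = List (\n _ -> n)

-- cons := \a2. \a1. \n. \c. c a2 a1
cons :: a -> List a -> List a
cons x xs = List (\_ c -> c x xs)

-- The two Gottlob equations for append, compiled to case analysis:
--   append nil l          = l
--   append (cons u l') l  = cons u (append l' l)
append :: List a -> List a -> List a
append xs l = caseOf xs l (\u l' -> cons u (append l' l))

-- Convert back to a native Haskell list, for inspection.
toNative :: List a -> [a]
toNative xs = caseOf xs [] (\u l' -> u : toNative l')
```

Note how \texttt{append} never mentions constructors on the left-hand side: the scrutinee itself performs the case dispatch, which is exactly why case expressions need not be primitive in the execution model.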

\section{Future Improvements}

There is a lot of room for improvement in $\mathrm{Gottlob}$. 

\textbf{Equality Reasoning}. We want to implement 
an automatic equality reasoning feature to relieve the burden of simple equality proofs. Gottlob uses Leibniz equality extensively, so it is quite cumbersome to construct simple proofs
about equality even with the help of tactics. We would implement this feature at the meta-level and generate checkable proofs of equality, so that we do not need to trust the equality reasoning engine. We think this feature will greatly simplify the current equality proofs while still
giving the author enough information to understand the underlying mechanism. 

\textbf{Reasoning about States}. Some algorithms (for example, graph algorithms) 
are most naturally described with the help of state. To really demonstrate the usefulness of Gottlob, we would need to do a case study on verifying this kind of algorithm, so we would need
to provide a form of monadic framework in $\mathrm{Gottlob}$. 

\textbf{Polymorphic Type Checking}. Currently, the type checking system for Gottlob
can only handle rank-1 polymorphism. It would be interesting to explore possible extensions
of the type checking system that support richer notions of types while not requiring extensive annotations. 

On a more practical side, the author will need to think about compilation, I/O, and efficiency issues. It would also be nice to have an interpreter-like environment for the author to interact with the Gottlob system.


%%  We are currently implementing the tactic feature, and we would like to support Haskell style monadic framework in a untyped setting. Since head reduction is a almost-lazy operational semantics, we can afford definition like $f = \lambda s.\lambda a. t[\mathrm{let} f = f\ (\mathrm{update}\ s) \ \mathrm{in}\ t'[...]]$ $\dagger$, where $s$ represent state, $a$ is an argument for $f$ and ``$\mathrm{update}$'' is some function that take in $s$ and return a new state. $t[...]$ means in side the term $t$. So what we need to do is setting up a monadic syntax and systematically translate that in to the format $\dagger$ above. From a reasoning perspective, this will open the door of reasoning about stateful programs. Last but not least, we also want to implement an interpretor-like environment for user to interact with the proofs or the
%% programs in Gottlob. 





%=============================================================================
%\appendix
%=============================================================================

%=============================================================================
%\chapter{Background}





%% \chapter{Appendix}


%=============================================================================
% bibliography
%=============================================================================
\interlinepenalty=10000	% prevents bib items from splitting across pages
\bibliographystyle{plain}
\bibliography{dissertation}

\end{document}
