\documentclass{article} 
\usepackage{url} 
\usepackage{hyperref}
\usepackage{stmaryrd}
\usepackage{manfnt}
\usepackage{fullpage}
\usepackage{proof}
\usepackage{savesym}
\usepackage{amssymb} 
%% \savesymbol{mathfrak}
%\usepackage{MnSymbol} Overall mnsymbol is depressing.
%\restoresymbol{MN}{mathfrak}
\usepackage{xcolor} 
%\usepackage{mathrsfs}
\usepackage{amsmath, amsthm}
\usepackage{diagrams}
\makeatletter
\newsavebox{\@brx}
\newcommand{\llangle}[1][]{\savebox{\@brx}{\(\m@th{#1\langle}\)}%
  \mathopen{\copy\@brx\kern-0.6\wd\@brx\usebox{\@brx}}}
\newcommand{\rrangle}[1][]{\savebox{\@brx}{\(\m@th{#1\rangle}\)}%
  \mathclose{\copy\@brx\kern-0.6\wd\@brx\usebox{\@brx}}}
\makeatother


\newcommand{\frank}[1]{\textcolor{blue}{\textbf{[#1 --Frank]}}}
% My own macros
\newcommand{\m}[2]{ \{\mu_{#1}\}_{#1 \in #2}} 
\newcommand{\M}[3]{\{#1_i \mapsto #2_i\}_{i \in #3}} 
\newcommand{\bm}[4]{
\{(#1_i:#2_i) \mapsto #3_i\}_{i \in #4}} 

\newcommand{\mlstep}[1]{\twoheadrightarrow_{\underline{#1}}}
\newcommand{\lstep}[1]{\to_{\underline{#1}}}
\newcommand{\mstep}[1]{\twoheadrightarrow_{#1}}
\newcommand{\ep}[0]{\epsilon} 
\newcommand{\nil}[0]{\mathsf{nil}} 
\newcommand{\cons}[0]{\mathsf{cons}} 
\newcommand{\vecc}[0]{\mathsf{vec}} 
\newcommand{\suc}[0]{\mathsf{S}} 
\newcommand{\app}[0]{\mathsf{app}} 
\newcommand{\interp}[1]{\llbracket #1 \rrbracket} 
\newcommand{\intern}[1]{\llangle #1 \rrangle} 
\newcommand*\template[1]{\(\langle\)#1\(\rangle\)}
\newarrowfiller{dasheq} {==}{==}{==}{==}
\newarrow {Mapsto} |--->
\newarrow {Line} -----
\newarrow {Implies} ===={=>}
\newarrow {EImplies} {}{dasheq}{}{dasheq}{=>}
\newarrow {Onto} ----{>>}
\newarrow {Dashto}{}{dash}{}{dash}{>}
\newarrow {Dashtoo}{}{dash}{}{dash}{>>}

\newtheorem{prop}{Proposition}
\newtheorem{definition}{Definition}
\newtheorem{corollary}{Corollary}
\newtheorem{lemma}{Lemma}
\newtheorem{theorem}{Theorem}


\begin{document}
%\pagestyle{empty}
\title{Lambda Encoding in Type Theory \`a la Curry}
\author{Peng Fu \\
Computer Science, The University of Iowa}
\date{Last edited: \today}


\maketitle \thispagestyle{empty}

\begin{abstract}
The theme of this dissertation is lambda encoded data in type theory, more specifically,
intuitionistic type theory built around the Curry-Howard correspondence. The goal of this dissertation is to understand the relation between intuitionistic function types and provable formulas expressing the totality of functions. 
\end{abstract}

\section{Introduction}
\subsection{Motivation}
Inductively defined data types (inductive datatypes), together with the mechanism of \textit{pattern matching}, are the bread and butter of theorem proving and functional programming. Most programming languages (Agda, Coq, Ocaml, Haskell) treat them as primitive, namely, one can only use pattern matching to perform computation on inductive data. So the concepts of inductive data and program are separated. On the other hand, it is well known that inductive data can be encoded in lambda calculus through Church encoding. In lambda calculus, programs can be used as data and data can be used as programs. Each Church encoded datum can be used as a sort of iterator; e.g. the Church numeral $2$ can be used as a higher order function that takes a function $f$ and a datum $b$ as arguments, then applies $f$ to $b$ twice.   
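To make this concrete, here is a minimal Python sketch of the Church numeral $2$ used as an iterator; using Python lambdas as a stand-in for pure lambda terms is an illustrative assumption of ours, not part of the text.

```python
# Church numeral 2: a higher-order function that takes f and b
# and applies f to b twice.
two = lambda f: lambda b: f(f(b))

print(two(lambda x: x + 1)(0))       # applying "+1" twice to 0 gives 2
print(two(lambda s: s + "!")("hi"))  # applying "append !" twice gives "hi!!"
```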

From the language design perspective, it is quite hard to accommodate inductive datatypes and pattern matching. In the Haskell core language, the pattern matching \textit{case} expression is the most complicated part of the core(cite). Also, since most functional programming languages implement a version of lambda calculus, it seems redundant to implement inductive datatypes and pattern matching on top of it. A counterargument is that pattern matching makes it intuitive and straightforward to retrieve subdata from inductive data. For example, with Church encoded numerals the predecessor function takes linear time to compute, while with pattern matching it only takes constant time(cite). The problem of inefficient retrieval of subdata can be solved by Scott encoding(cite), another approach to representing data as lambda terms. Each Scott encoded datum contains its subdata. The drawback of Scott encoding is simply that a fixed point combinator is needed to perform recursion, and it is not obvious how to define terminating recursion without it. But this can be fixed by introducing yet another encoding scheme (discovered by many) which combines the features of both Church and Scott encoding, namely, subdata are easy to obtain and terminating recursion can be performed without appeal to a fixed point combinator.  

From the intuitionistic typing perspective, it is desirable for the type system to interpret an inhabitant of the type $D \to D$ as a total function on the inductive datatype $D$. This is not possible for Scott encoding and its derivatives: since a Scott encoded datum contains a piece of its subdata, one would need a recursive definition in order to define a type for Scott encoded data. When unrestricted recursive definitions are added, we lose the intuitionistic notion that a function type expresses totality, namely, the type $D \to D$ will mean a partial function with domain and codomain $D$. One can adopt a variety of restrictions and techniques(cite) to restore totality, but then we lose the computational content of the proof of the totality of a function\footnote{We shall see this more clearly later}. In this sense Church encoding is better suited for intuitionistic typing: it is already typable in System \textbf{F}, at the cost of sacrificing the efficiency we mentioned above. In this circumstance, we are willing to pay the price. 

From the philosophical perspective, assuming datatypes and pattern matching as primitive is
philosophically undesirable, in the sense that when we are asked ``what is the number 0, the number 1?'', we have no answer; we simply assume they exist. This is not to say that this \textit{extrinsic} point of view on data does not work; on the contrary, it works very well. In fact, most parts of mathematics (the notions of sets, groups, etc.) are based on this extrinsic point of view. In the context of lambda calculus, the ability to encode inductive datatypes within lambda calculus in a sense \textit{reduces} the ontology of inductive datatypes to the ontology of lambda calculus alone. Assuming the ontology of lambda calculus, we can now answer the question ``what is the number $n$?'': it is just a lambda term that, when applied to terms $f$ and $a$, \textit{beta-reduces} to $\underbrace{f ( f ( f...(f}_{n} a)...))$. So we can now explain the meaning of a number $n$ as doing something repeatedly $n$ many times. We call this the \textit{intrinsic} point of view. The extrinsic view of a concept allows us to explore the properties of the concept without really \textit{understanding} the concept itself; only when we obtain an intrinsic view of the concept can we say we understand it. 



\subsection{Overview}  
We want to study the notion of inductive datatypes using lambda encoding in the context of intuitionistic type theory (i.e. the kind constructed around the Curry-Howard correspondence), with the hope of achieving a more concise understanding of the relation between intuitionistic function types and provable formulas expressing the totality of functions. Specifically, to achieve this goal, in section \ref{tools} we first lay down the fundamental tools, such as lambda calculus and lambda encoding and intuitionistic type systems (system \textbf{F}, the Calculus of Constructions), which we are going to use throughout the thesis. In section \ref{self}, within the framework of the Calculus of Constructions, we will introduce the self type mechanism together with mutually recursive definitions to achieve a consistent type theory with lambda encoding. In section \ref{comp}, we will obtain an even broader view of lambda encoding when we start directly from Girard's system \textbf{F} instead of from the Calculus of Constructions.\footnote{
The recommended way to read this thesis is to first read the introduction, then go directly to each topic, referring to section \ref{tools} when the reader feels the need to understand certain basic assumptions. Then finally, of course, the last section.}
% conclude that it is nessesary and beneficial to study lambda encoding

\section{Fundamental Tools}
\label{tools}
\subsection{Abstract Reduction Systems}
\label{ars}
We first introduce some basic concepts about \textit{abstract reduction systems}, sometimes also called \textit{term rewriting systems}. 
%We follow the definitions in \cite{bezem2003term} and \cite{baader1999term}. 

\begin{definition}
 An abstract reduction system $\mathcal{R}$ is a tuple $(\mathcal{A}, \{ \to_{i}\}_{i \in \mathcal{I}})$, where $\mathcal{A}$ is a set and each $\to_i$ is a binary relation (called a reduction) on $\mathcal{A}$, indexed by a finite nonempty set $\mathcal{I}$.   
\end{definition}

In an abstract reduction system $\mathcal{R}$, we write $a \to_i b$ if $a,b \in \mathcal{A}$ satisfy the relation $\to_i$; for convenience, $\to_i$ also denotes the subset of $\mathcal{A}\times \mathcal{A}$ such that $(a,b) \in \to_i$ iff $a \to_i b$. 

\begin{definition}
Given an abstract reduction system $(\mathcal{A}, \{ \to_{i}\}_{i \in \mathcal{I}})$, the reflexive transitive closure of $\to_i$, written $\twoheadrightarrow_i$ or $\stackrel{*}{\to}_i$, is defined by: 
\begin{itemize}
\item $m \twoheadrightarrow_i m$. 
\item $ m \twoheadrightarrow_i n$ if $m \to_i n $.
\item $ m \twoheadrightarrow_i l$ if $m \twoheadrightarrow_i n, n \twoheadrightarrow_i l $.
\end{itemize}
  
\end{definition}

\begin{definition}
Given an abstract reduction system $(\mathcal{A}, \{ \to_{i}\}_{i \in \mathcal{I}})$, the convertibility relation $=_i$ is defined as the equivalence relation generated by $\to_i$:   
\begin{itemize}
\item $ m =_i n$ if $m \twoheadrightarrow_i n $.
\item $ n =_i m$ if $m =_i n $. 
\item $ m =_i l$ if $m =_i n, n =_i l$.
\end{itemize}

\end{definition}

\begin{definition}
 We say $a$ is \textit{reducible} if there is a $b$ such that $a \to_i b$. So $a$ is in $i$-\textit{normal form} if and only if $a$ is not reducible. We say $b$ is a normal form of $a$ with respect to $\to_i$ if $a \twoheadrightarrow_i b$ and $b$ is not reducible. We say $a$ and $b$ are joinable if there is a $c$ such that $a \twoheadrightarrow_i c$ and $b \twoheadrightarrow_i c$. An abstract reduction system is strongly normalizing if there is no infinite
reduction path.
\end{definition}
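These notions are easy to make executable for a finite system. The following Python sketch represents a reduction (here, the union of the $\to_i$) as a set of pairs; the particular toy relation is an illustrative assumption of ours, not from the text.

```python
# A toy reduction on {a, b, c, d}: a -> b, a -> c, b -> d, c -> d.
R = {('a', 'b'), ('a', 'c'), ('b', 'd'), ('c', 'd')}

def reducts(x, rel):
    """All one-step reducts of x."""
    return {b for (a, b) in rel if a == x}

def is_normal_form(x, rel):
    """x is in normal form iff it is not reducible."""
    return not reducts(x, rel)

def reachable(x, rel):
    """All y with x ->> y (reflexive transitive closure from x)."""
    seen, todo = {x}, [x]
    while todo:
        y = todo.pop()
        for z in reducts(y, rel):
            if z not in seen:
                seen.add(z)
                todo.append(z)
    return seen

def joinable(x, y, rel):
    """x and y are joinable iff they share a common reduct."""
    return bool(reachable(x, rel) & reachable(y, rel))
```

In this example `d` is a normal form of `a`, and `b` and `c` are joinable via `d`.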


\begin{definition}
\label{c-r}
  Given an abstract reduction system $(\mathcal{A}, \{ \to_i\}_{i\in \mathcal{I}})$, let $\to$ denote $\bigcup_{i\in \mathcal{I}} \to_i$, let $=$ denote the equivalence relation generated by $\to$.
\begin{itemize}
\item Confluence: For any $a,b,c \in \mathcal{A}$, if $a \twoheadrightarrow b$ and $a \twoheadrightarrow c$, then there exist $d \in \mathcal{A}$ such that $b \twoheadrightarrow d$ and $c \twoheadrightarrow d$. 

\item Church-Rosser: For any $a,b \in \mathcal{A}$, if $a = b$, then there is a $c \in \mathcal{A}$ such that $a \twoheadrightarrow c$ and $b \twoheadrightarrow c$.

\end{itemize}
\end{definition}

\noindent The two properties above can be expressed by the following diagrams:

\
\begin{center}
\begin{tabular}{lll}
\begin{diagram}[size=1.5em,textflow]
 & & a & & \\
 & \ldOnto & & \rdOnto &  \\
 b & &  &  & c \\
 & \rdDashtoo & & \ldDashtoo &  \\
 & & d & & \\
\end{diagram}

&

&
\begin{diagram}[size=1.5em,textflow]
 a & & = &  & b \\
 & \rdDashtoo & & \ldDashtoo &  \\
 & & c & & \\
\end{diagram}

\end{tabular}
\end{center}
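For a finite system, the confluence property in the left diagram can be checked by brute force. A small self-contained Python sketch follows; the carrier sets and relations below are illustrative assumptions of ours.

```python
def reachable(x, rel):
    """All y with x ->> y (reflexive transitive closure from x)."""
    seen, todo = {x}, [x]
    while todo:
        u = todo.pop()
        for (a, b) in rel:
            if a == u and b not in seen:
                seen.add(b)
                todo.append(b)
    return seen

def confluent(carrier, rel):
    """For every peak b <<- a ->> c, look for a common reduct d."""
    for a in carrier:
        reds = reachable(a, rel)
        for b in reds:
            for c in reds:
                if not (reachable(b, rel) & reachable(c, rel)):
                    return False
    return True

diamond = {('a', 'b'), ('a', 'c'), ('b', 'd'), ('c', 'd')}  # peaks joinable at d
fork = {('a', 'b'), ('a', 'c')}                             # b, c not joinable

print(confluent({'a', 'b', 'c', 'd'}, diamond))  # True
print(confluent({'a', 'b', 'c'}, fork))          # False
```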

\begin{lemma}
\label{Conf-CR}
  An abstract reduction system $\mathcal{R}$ is confluent iff it is Church-Rosser.
\end{lemma}
\begin{proof}
  Assume the same notation as definition \ref{c-r}. 

 ``$\Leftarrow$'': Assume $\mathcal{R}$ is Church-Rosser. For any $a,b,c \in \mathcal{A}$, if $a \twoheadrightarrow b$ and $a \twoheadrightarrow c$, then this means $b = c$. By Church-Rosser, there is a $d \in \mathcal{A}$, such that $b \twoheadrightarrow d$ and $c \twoheadrightarrow d$. 

``$\Rightarrow$'': Assume $\mathcal{R}$ is confluent. For any $a,b \in \mathcal{A}$, if $a = b$, then we show that there is a $c \in \mathcal{A}$ such that $a \twoheadrightarrow c$ and $b \twoheadrightarrow c$ by induction on the generation of $a = b$:  

If $a \twoheadrightarrow b \Rightarrow a = b$, then let $c$ be $b$.

If $b = a \Rightarrow a = b$, by induction, there is a $c$ such that $b \twoheadrightarrow c$ and $a \twoheadrightarrow c$. 

If $a = d, d = b \Rightarrow a = b$, by induction there is a $c_1$ such that $a \twoheadrightarrow c_1$ and $d \twoheadrightarrow c_1$; there is a $c_2$ such that $d \twoheadrightarrow c_2$ and $b \twoheadrightarrow c_2$. So now 
we get $d \twoheadrightarrow c_1$ and $d \twoheadrightarrow c_2$, by confluence, we have a $c$ such that $c_1 \twoheadrightarrow c$ and $c_2 \twoheadrightarrow c$. So $a \twoheadrightarrow c_1 \twoheadrightarrow c$ and $b \twoheadrightarrow c_2 \twoheadrightarrow c$. This process is illustrated by the following diagram:

\begin{diagram}[size=1.5em,textflow]
 a &            & = &            & d &           & =  &           & b &  \\
   & \rdDashtoo &   & \ldDashtoo &   & \rdDashtoo &   & \ldDashtoo & & \\
   &            & c_1 &          &   &            & c_2 &            & & \\
   &            &  & \rdDashtoo         &   &    \ldDashtoo        &  &            & & \\
   &            &     &          & c  &            &  &            & & \\
\end{diagram}

\end{proof}

The definition of $=$ depends on $\twoheadrightarrow$, and the definition of $\twoheadrightarrow$ depends on $\to$. 
Confluence is often easier to prove than Church-Rosser, in the sense that it is easier to analyze $\twoheadrightarrow$ than $=$. Now let us see some consequences of confluence. 

\begin{corollary}
  If $\mathcal{R}$ is confluent, then every element in $\mathcal{A}$ has at most one normal form.
\end{corollary}
\begin{proof}
  Assume $a \in \mathcal{A}$ and $b,c$ are two different normal forms of $a$. So we have $a \twoheadrightarrow b$
and $a \twoheadrightarrow c$; by confluence, there exists a $d$ such that $b \twoheadrightarrow d$ and $c \twoheadrightarrow d$. But $b,c$ are normal forms, which implies $b$ and $c$ are both the same as $d$, contradicting the assumption that they are two different normal forms. 
\end{proof}

\begin{definition}
  An abstract reduction system $\mathcal{R}$ is trivial if 
for any $a , b \in \mathcal{A}$, $a = b$.
\end{definition}

This notion of triviality can be generalized to a logical one, namely, by replacing the algorithmic
$=$ with the logical notion of equality. We will see later that we take this as the notion of absurdity to bypass a dilemma. 


\begin{corollary}
  If $\mathcal{R}$ is confluent and there are at least two different normal forms, then $\mathcal{R}$ is
not trivial.
\end{corollary}

For a nontrivial reduction system, we also say it is algorithmically consistent. We can see that confluence is only a sufficient condition; it would be nice to have other methods to establish 
algorithmic consistency, especially when confluence fails. 

 
\subsection{Lambda Calculus and Lambda Encoding}
We use $x,y,z,s,n,x_1, x_2, ...$ to denote individual variables, $t,t', a,b, t_1, t_2, ... $ to denote terms, and $\equiv$ to denote syntactic equality. $[t'/x]t$ denotes the result of substituting $t'$ for the variable $x$ in $t$. The syntax and reduction of lambda calculus are given as follows.

\begin{definition}[Lambda Calculus]

\

\noindent Term  $t \ ::= \ x \ | \ \lambda x.t \ | \ t\  t'$ 

\noindent Reduction  $(\lambda x.t)t' \to_{\beta} [t'/x]t$ 
\end{definition}

\noindent For example, $(\lambda x.x\ x)(\lambda x.x\ x)$ and $\lambda y.y$ are concrete
terms in lambda calculus. For a term $\lambda x.t$, we call $\lambda$ the \textit{binder} and say $x$ is \textit{bound}, called a \textit{bound variable}. If a variable is not bound, we say it is a \textit{free} variable. We will treat terms up to $\alpha$-equivalence, meaning that for any
term $t$, one can always rename the bound variables in $t$. So for example, $\lambda x.x\ x$ is
the same as $\lambda y.y\ y$, and $\lambda x.\lambda y.x\ y$ is the same as $\lambda z.\lambda x .z\ x$. $(\lambda x.\lambda y.x\ y)\underline{((\lambda z.z)z_1)} \to_{\beta} \underline{(\lambda x.\lambda y.x\ y)z_1} \to_{\beta} \lambda y.z_1\ y$ is a valid reduction sequence in lambda calculus. Note that for the reader's convenience we underline the part where we are going to carry out the reduction (we will not do this again); we call the underlined term a \textit{redex}. For a comprehensive introduction to lambda calculus, we refer to \cite{Barendregt:1985}. 
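The reduction relation can be implemented directly. Below is a minimal Python sketch of a leftmost-outermost $\beta$-reducer; the encoding of terms as tagged tuples is an assumption of ours, chosen for brevity.

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', f, a)

def free_vars(t):
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'lam':
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

_fresh = [0]
def fresh(x):
    _fresh[0] += 1
    return f"{x}{_fresh[0]}"

def subst(t, x, s):
    """[s/x]t, renaming bound variables to avoid capture."""
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'app':
        return ('app', subst(t[1], x, s), subst(t[2], x, s))
    y, body = t[1], t[2]
    if y == x:                      # x is shadowed: nothing to substitute
        return t
    if y in free_vars(s):           # alpha-rename the binder to avoid capture
        z = fresh(y)
        y, body = z, subst(body, y, ('var', z))
    return ('lam', y, subst(body, x, s))

def step(t):
    """One leftmost-outermost beta step, or None if t is in normal form."""
    if t[0] == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':           # the redex (lam x. b) a -> [a/x]b
            return subst(f[2], f[1], a)
        r = step(f)
        if r is not None:
            return ('app', r, a)
        r = step(a)
        return None if r is None else ('app', f, r)
    if t[0] == 'lam':
        r = step(t[2])
        return None if r is None else ('lam', t[1], r)
    return None                     # a variable is in normal form

def normalize(t, limit=1000):
    for _ in range(limit):
        r = step(t)
        if r is None:
            return t
        t = r
    raise RuntimeError("no normal form found within the step limit")

# The example from the text: (lam x. lam y. x y) ((lam z. z) z1)
ex = ('app',
      ('lam', 'x', ('lam', 'y', ('app', ('var', 'x'), ('var', 'y')))),
      ('app', ('lam', 'z', ('var', 'z')), ('var', 'z1')))
```

`normalize(ex)` yields `('lam', 'y', ('app', ('var', 'z1'), ('var', 'y')))`, i.e. $\lambda y.z_1\ y$, matching the reduction above (leftmost-outermost fires the redexes in a different order, but the normal form is the same).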

A few words need to be said about the notion of \textit{reduction}. Lambda calculus itself was at first intended to be a foundational system with only the ontology of functions(cite). Just as in na\"ive set theory every formula can be comprehended into a set, i.e. $a \in \{x | \phi(x)\} \Leftrightarrow  \phi(a)$, in Church's theory every function $f$ has a corresponding \textit{course-of-value} (denoted by $\lambda x.f[x]$), thus $(\lambda x.f[x]) a = f[a]$. By the way, one can find similar treatments in Frege's work(cite) and Russell's substitutional theory(cite). Russell's paradoxical method applies when we translate $\{x | x \notin x \}$ to $A:= \lambda x. N (x\ x)$, with $N:= \lambda x.(x \supset \bot)$. Then $A\ A = N(A\ A)$. Let $B := A\ A$. Thus we have $B = B \supset \bot$, namely $B = \neg B$, and we reach an antinomy. So the idea of trying to embed logic into an untyped lambda calculus is not feasible. What makes lambda calculus stand out, but not Frege's and Russell's formalisms, is a change of point of view: as a logical theory, lambda calculus fails like the others, but viewed as a process of computation, the paradoxical formula simply becomes a sort of diverging computation. 

\begin{theorem}[Confluence]
  $\to_{\beta}$ is confluent.
\end{theorem}

This theorem shows that lambda calculus is indeed consistent as a computational system. 
\begin{definition}[Church Numeral]
\label{churchnum}
\

\noindent $0 \ := \lambda s.\lambda z. z $ 

\noindent $\mathsf{S} \ := \lambda n.\lambda s.\lambda z. s (n\ s\ z)$ 

\end{definition}

From the above we know $1 \ := \mathsf{S}\ 0 \equiv (\lambda n.\lambda s.\lambda z. s (n\ s\ z))(\lambda s.\lambda z. z) \to_{\beta} \lambda s.\lambda z. s ((\lambda s.\lambda z. z) s\ z) \to_{\beta} \lambda s.\lambda z. s\ z$.  Note that the last of the above reductions occurs underneath the lambda abstractions. Similarly we can get $2\ :=  \lambda s.\lambda z. s \ (s\ z)$. 

Informally, we can interpret a lambda term as both data and function, so instead of thinking of $2$ as  
data, one can think of it as a higher order function $h$, which takes a function $f$ and a datum $a$
as arguments, then applies the function $f$ to $a$ two times. 

One can define a notion of \textit{iterator}: $\mathsf{It}\ n\ f\ t \ := n\ f \ t$. So $\mathsf{It}\ 0 \ f\ t =_{\beta} t $ and $\mathsf{It}\ (\mathsf{S}\ u) \ f\ t  =_{\beta} f (\mathsf{It}\ u \ f\ t) $. Now we can use the iterator to define $\mathsf{Plus} \ n\ m := \mathsf{It}\ n\ \mathsf{S}\ m$.
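The definitions of $0$, $\mathsf{S}$, $\mathsf{It}$ and $\mathsf{Plus}$ can be run directly, again using Python lambdas as a stand-in for pure lambda terms; the conversion helpers `church` and `to_int` are meta-level conveniences of our own, not part of the encoding.

```python
zero = lambda s: lambda z: z                     # 0  := \s.\z. z
suc  = lambda n: lambda s: lambda z: s(n(s)(z))  # S  := \n.\s.\z. s (n s z)

it   = lambda n: lambda f: lambda t: n(f)(t)     # It n f t  := n f t
plus = lambda n: lambda m: it(n)(suc)(m)         # Plus n m  := It n S m

def church(k):
    """Meta-level helper: the Church numeral for the Python int k."""
    c = zero
    for _ in range(k):
        c = suc(c)
    return c

def to_int(c):
    """Meta-level helper: read a Church numeral back as a Python int."""
    return c(lambda x: x + 1)(0)

print(to_int(plus(church(2))(church(3))))   # 5
```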

\begin{definition}[Scott Numeral]

\

\noindent $0 \ := \lambda s.\lambda z. z $ 

\noindent $\mathsf{S} \ := \lambda n.\lambda s.\lambda z. s\ n$ 

\end{definition}

We can see $1 \ := \lambda s.\lambda z. (s\ 0)$, $2 \ := \lambda s.\lambda z. (s\ 1)$. 
One can define a notion of \textit{recursor}. But before defining it, we give one
version of the \textit{fixed point operator}: $\mathsf{Fix} := \lambda f.(\lambda x.f\ (x\ x)) (\lambda x.f\ (x\ x))$. The reason it is called a fixed point operator is that when applied to a lambda expression, it gives a
fixed point of that lambda expression (recall that informally each lambda expression is both data and function).
So $\mathsf{Fix} \ g \to_{\beta} (\lambda x.g\ (x\ x)) (\lambda x.g\ (x\ x)) \to_{\beta} g ((\lambda x.g\ (x\ x))\ (\lambda x.g\ (x\ x)) ) =_{\beta} g\ (\mathsf{Fix}\ g) $. The fixed point operator enables lambda calculus to capture a \textit{meta} level recursive function $\Phi(x) := ... \Phi ... x...$ as the fixed point of the term $\mathsf{Fix}(\lambda f.\lambda x. ... f... x...)$. For example, if one defines $f(x) := f \ x$, then $(\mathsf{Fix}(\lambda f.\lambda x.f x)) x \to_{\beta}((\lambda f.\lambda x.f x) (\mathsf{Fix}(\lambda f.\lambda x.f x))) x \to_{\beta}  (\mathsf{Fix}(\lambda f.\lambda x.f x)) x \to_{\beta}...$. 

Since the fixed point operator is expressible as a lambda expression,
a direct consequence is that we can define a recursor: $\mathsf{Rec}\ := \ \mathsf{Fix}\ \lambda r. \lambda n. \lambda f. \lambda v. n \ (\lambda m. f \ (r\ m\ f\ v)\ m)\ v$. So we get $\mathsf{Rec}\ 0\ f\ v {\twoheadrightarrow_{\beta}} v$ and $\mathsf{Rec}\ (\mathsf{S}\ n)\ f\ v {\twoheadrightarrow_{\beta}} f\ (\mathsf{Rec}\ n\ f\ v)\ n$. In a similar fashion, one can define $\mathsf{Plus} \ n\ m\ := \mathsf{Rec} \ n \ (\lambda x.\lambda y.\mathsf{S}\ x)\ m$. 

%Thus $\mathsf{Rec}\ 0\ f\ v \leadsto (\lambda r. \lambda n. \lambda f. \lambda v. n \ (\lambda m. f \ (r\ m\ f\ v)\ m)\ v) \ \mathsf{Rec}\ 0\ f\ v \leadsto 0 \ (\lambda m. f \ (\mathsf{Rec}\ m\ f\ v)\ m)\ v  \leadsto v$. And $\mathsf{Rec}\ (\mathsf{S}\ n)\ f\ v \leadsto (\lambda r. \lambda n. \lambda f. \lambda v. n \ (\lambda m. f \ (r\ m\ f\ v)\ m)\ v) \ \mathsf{Rec}\ (\mathsf{S}\ n)\ f\ v \leadsto (\mathsf{S}\ n) \ (\lambda m. f \ (\mathsf{Rec}\ m\ f\ v)\ m)\ v  \leadsto (\lambda m. f \ (\mathsf{Rec}\ m\ f\ v)\ m)\ n \leadsto f\ (\mathsf{Rec}\ n\ f\ v)\ n $. 

The predecessor function can easily be defined as $\mathsf{Pred}\ n\ :=  \mathsf{Rec}\ n\ (\lambda x.\lambda y.y)\ 0$. It only takes constant time (w.r.t. the number of beta reduction steps) to calculate the predecessor. But this function is tricky to define with Church encoding: one needs to first define a recursor with the iterator, then use the recursor to define $\mathsf{Pred}$. To calculate $\mathsf{Pred}\ n$ with Church encoding, one has to perform at least $n$ steps, so it takes linear time \cite{Girard:1989}. 
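The Scott encoding, $\mathsf{Rec}$, and $\mathsf{Pred}$ can also be run in Python, with two adaptations of our own: since Python is strict, the self-application in $\mathsf{Fix}$ would diverge, so we use the eta-expanded call-by-value variant (the Z combinator); and `scott`/`to_int` are meta-level helpers. Note that under strict evaluation the discarded recursive call in $\mathsf{Pred}$ is still computed, so the constant-time claim concerns beta reduction under a call-by-name strategy, not this sketch.

```python
zero = lambda s: lambda z: z                 # 0 := \s.\z. z
suc  = lambda n: lambda s: lambda z: s(n)    # S := \n.\s.\z. s n

# Call-by-value fixed point (Z combinator): the eta-expansion delays the
# self-application that would diverge under Python's strict evaluation.
fix = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Rec := Fix (\r.\n.\f.\v. n (\m. f (r m f v) m) v)
rec = fix(lambda r: lambda n: lambda f: lambda v:
          n(lambda m: f(r(m)(f)(v))(m))(v))

plus = lambda n: lambda m: rec(n)(lambda x: lambda y: suc(x))(m)
pred = lambda n: rec(n)(lambda x: lambda y: y)(zero)

def scott(k):
    """Meta-level helper: the Scott numeral for the Python int k."""
    c = zero
    for _ in range(k):
        c = suc(c)
    return c

def to_int(c):
    """Meta-level helper: read a Scott numeral back as a Python int."""
    return c(lambda m: 1 + to_int(m))(0)

print(to_int(pred(scott(3))))               # 2
print(to_int(plus(scott(2))(scott(3))))     # 5
```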



\subsection{Intuitionistic Type Theory \`a la Curry}
The idea of intuitionistic type theory is built around the \textit{Curry-Howard} correspondence, meaning one can view \textit{types} as intuitionistic formulas and lambda terms as \textit{proofs}.
Of course, we restrict ourselves to considering only the minimal fragment of intuitionistic logic, which does not include the law of excluded middle, the principle of explosion, existential quantifiers, disjunction or conjunction. 

Within the scope of intuitionistic type theory, there are two different points of view on what
form a proof should take. On one view, lambda terms should be \textit{exactly} the proofs, so that one can recover the proof of a formula from the term alone. We call this intuitionistic type theory \`a la Church. In intuitionistic type theory \`a la Curry, by contrast, we do not insist
that lambda terms represent the exact proof; in this case, the typing derivation represents
the proof, and the lambda term represents the computational content of the proof. Of course, over the years the boundary between the two views has become blurred, while the distinctions are still there. In this thesis, we will consider only the Curry style, namely, typing derivations correspond to proofs, and terms are pure lambda terms, corresponding to the computational content of the proofs. Provability 
corresponds to whether a given type has a lambda term typable at that type. 

\subsection{Simply Typed Lambda Calculus}

%We use $A, B, C, X,Y, Z, ...$ to denote type variable, $T, S, U ...$ to denote any type. 

\begin{definition}
\

\noindent Type $T \ ::= \ B \ | \ T_1 \to T_2 $

\noindent Context $\Gamma \ ::= \ \cdot \ | \ \Gamma, x : T $
\end{definition}


We call $T_1 \to T_2$ an \textit{arrow type}. \textit{Typing} is a procedure for associating a term with a type. Typing is usually described by a set of rules, indicating how to associate
a term $t$ with a type $T$ in a given context $\Gamma$, denoted by $\Gamma \vdash t:T$.  We present \textit{simply typed lambda calculus} below.

\begin{definition}[Simply Typed]

\

\

\begin{tabular}{lll}
    
\infer[\textit{Var}]{\Gamma \vdash x:T}{(x:T) \in \Gamma}

&

\infer[\textit{Abs}]{\Gamma \vdash \lambda x.t :T_1 \to
T_2}{\Gamma, x:T_1 \vdash t: T_2}

&
\infer[\textit{App}]{\Gamma \vdash t\ t': T_2}{\Gamma
\vdash t:T_1 \to T_2 & \Gamma \vdash t': T_1}

\\
\end{tabular}
\label{typing-rules}
\end{definition}
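The three rules translate directly into a checker. In the Python sketch below, each binder is annotated with its domain type (Church style) so that checking needs no inference, whereas the text presents Curry-style rules; this simplification, and the tuple encoding of terms and types, are assumptions of ours.

```python
# Types: ('base', name) | ('arrow', T1, T2)
# Terms: ('var', x) | ('lam', x, T, body) | ('app', f, a)

def typecheck(ctx, t):
    """Return the type of t in context ctx (a dict from variables to types)."""
    if t[0] == 'var':                       # (Var): look x up in Gamma
        if t[1] not in ctx:
            raise TypeError(f"unbound variable {t[1]}")
        return ctx[t[1]]
    if t[0] == 'lam':                       # (Abs): extend Gamma with x:T1
        _, x, t1, body = t
        t2 = typecheck({**ctx, x: t1}, body)
        return ('arrow', t1, t2)
    _, f, a = t                             # (App): domain must match argument
    tf = typecheck(ctx, f)
    ta = typecheck(ctx, a)
    if tf[0] != 'arrow' or tf[1] != ta:
        raise TypeError("ill-typed application")
    return tf[2]

b = ('base', 'B')
ident = ('lam', 'x', b, ('var', 'x'))
print(typecheck({}, ident))   # ('arrow', ('base', 'B'), ('base', 'B'))
```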

Logically speaking, simply typed lambda calculus corresponds to minimal intuitionistic propositional logic \cite{hindley1997basic}. It is easy to see the correspondence: one just ignores the
lambda terms and considers the rules and types only; then we have the following inference rules for
minimal intuitionistic logic.

\begin{definition}[Minimal Logic]

\

\

\begin{tabular}{lll}
    
\infer[\textit{Assumption}]{\Gamma \vdash T}{T \in \Gamma}

&

\infer[\textit{E(liminating)H(ypothesis)}]{\Gamma \vdash T_1 \to
T_2}{\Gamma, T_1 \vdash  T_2}

&
\infer[\textit{M(odus)P(onens)}]{\Gamma \vdash  T_2}{\Gamma
\vdash T_1 \to T_2 & \Gamma \vdash  T_1}

\\
\end{tabular}

\end{definition}

One reads $\Gamma \vdash T$ as: under the assumptions $\Gamma$, one can prove $T$. We call this
close relation between intuitionistic logic and typed lambda calculus the Curry-Howard correspondence. 

Simply typed lambda calculus provides a basic framework for many sophisticated type systems. 
It is quite restrictive from both the logical and the programming point of view, since it only corresponds to minimal propositional logic and it
only accepts a small set of strongly normalizing terms. It has two key properties, namely, type preservation and strong normalization. For proofs of these two theorems we refer to \cite{Pierce:2002}. 

\begin{theorem}[Type Preservation]
  If $\Gamma \vdash t:T$ and $t \to_{\beta} t'$, then $\Gamma \vdash t':T$.
\end{theorem}

\begin{theorem}
  If $\Gamma \vdash t:T$, then $t$ is strongly normalizing.
\end{theorem}

\subsection{System \textbf{F}}

Introduced by Girard(cite), and independently by Reynolds(cite), system \textbf{F} is an extension
of simply typed lambda calculus. The only additions on types are a type variable $X$, representing a predicate variable of arity 0, and a \textit{polymorphic type} $\Pi X.T$, representing quantification over all predicates of arity 0, namely, formulas. Note that here $\Pi$ is a binder. The additional typing rules are as follows:

\

\begin{tabular}{lll}
    
\infer[\textit{Gen}]{\Gamma \vdash t:\Pi X.T}{\Gamma \vdash t:T & X \notin FV(\Gamma)}

&
&

\infer[\textit{Inst}]{\Gamma \vdash t :[T'/X]T
}{\Gamma \vdash t: \Pi X. T}

\\

\\
\end{tabular}


\noindent $X \notin FV(\Gamma)$ means $X$ is not a free type variable in the types of the typing context $\Gamma$. Under Curry-Howard correspondence, \textbf{F} corresponds to minimal second order propositional logic. 

For example, given the above typing rules, we can associate the identity function
with a polymorphic type, i.e. $\cdot \vdash \lambda x.x : \Pi X.X \to X$. And we also have 
$\cdot \vdash \lambda x.x : T \to T$ for any type $T$. 

We can type the Church numerals in definition \ref{churchnum}. Define $\mathsf{Nat} \ := \Pi X. (X \to X) \to X \to X$. One can type the constructors $0$ and $\mathsf{S}$
as follows.

\

\infer{\infer{\cdot \vdash \lambda s.\lambda z.z : \Pi X. (X \to X) \to X \to X }{\cdot \vdash \lambda s.\lambda z.z :  (X \to X) \to X \to X}}{\infer{s:X \to X, z:X \vdash z:X}{}}

\

\noindent For space reasons, we only show $\cdot \vdash 0 : \mathsf{Nat}$; similarly one can type:\\
\noindent $\cdot \vdash \mathsf{S} : \mathsf{Nat} \to \mathsf{Nat}$ \\
\noindent $\cdot \vdash \mathsf{It} : \Pi X. \mathsf{Nat} \to (X \to X) \to X \to X$ \\
\noindent $\cdot \vdash \mathsf{Plus} :  \mathsf{Nat} \to \mathsf{Nat} \to \mathsf{Nat}$ 

\

In system \textbf{F}, the computational content of an intuitionistic proof is obvious. Recall that we claimed earlier that in type theory \`a la Curry, the proof corresponds to the typing derivation and the lambda term corresponds to the computational content of the proof. Now let us examine this principle in \textbf{F}. We know that $\mathsf{Nat}$ is really an abbreviation for the formula $\Pi X. (X \to X) \to X \to X$ and $\mathsf{S}$ is an abbreviation for the pure lambda term $\lambda n.\lambda s.\lambda z. s\ (n\ s\ z)$. We know that we can construct a derivation for $\cdot \vdash \mathsf{S} : \mathsf{Nat} \to \mathsf{Nat}$; this really means that the computational content of 
the proof of the formula $\mathsf{Nat} \to \mathsf{Nat}$ is $\mathsf{S}$, namely $\lambda n.\lambda s.\lambda z. s\ (n\ s\ z)$, which, interestingly, is also the successor for Church numerals. In the Church style interpretation, a proof is \textit{identical} to a program, and a formula is \textit{identical} to a formula. The Curry style interpretation is more refined in the sense that we separate the notions of proof and program: a proof is \textit{identical} to a \textit{derivation}, a program is the computational content of a proof, and a formula is still \textit{identical} to a formula. This refinement is crucial; we shall see this notion of the computational content of a proof again in system $\mathfrak{G}$ in section \ref{comp}.

System \textbf{F} and Church encoding fit together really well; indeed, being able to define inductive datatypes within the type system is one of the motivations for devising system \textbf{F}\cite{Girard:1989}. The quantified proposition $\Pi X.T$ is considered \textit{impredicative}\footnote{For the reader who is familiar with Russell's type theory, we want to point out that the word ``impredicative'' in this thesis has a different meaning compared to the one in Russell's theory. In Russell's work, impredicative and predicative are properties of a term, while in our work the word is used to describe the behavior of a formula.} in the sense that $X$ can be instantiated by any proposition, including $\Pi X.T$ itself. System \textbf{F} is also type preserving and strongly normalizing \cite{Girard:1989}. 

\subsection{Introducing Axioms}
\label{ext}

System \textbf{F} is \textit{pure}, in the sense that it is purely a second order propositional logic without any axioms. Computationally, it is rather expressive: as Girard himself observed, any provably total function of second order Heyting Arithmetic ($\mathbf{HA}_2$) can be represented as a term in system \textbf{F}(cite). For logical expressiveness, however, \textbf{F} is not as
ideal as one might think: since all the formulas in \textbf{F} are propositional, there is no way to talk about individuals; thus, for instance, one cannot even write a formula expressing the induction principle. 

Various attempts have been made to add a notion of individuals to system \textbf{F}(cite: Howard, Martin-L\"of.), and similarly, there are several ways to add axioms about individuals. Under the principle we proposed for intuitionistic type theory \`a la Curry, only modus ponens and hypothesis elimination have computational content, so axioms, quantifications and instantiations will not have computational content. The way we introduce axioms in this thesis is therefore through the following
rule: 

\

\infer{\Gamma \vdash t: T'}{ \Gamma \vdash t: T & T \Leftrightarrow T'}

\

\noindent This rule expresses axioms of the form $T \Leftrightarrow T'$. For example, 
the axiom of comprehension: $a \ep \{ x| \phi(x)\} \Leftrightarrow \phi(a)$. And of course,
a form of the axiom of extensionality: $A(x) \Leftrightarrow A(y)$ if $x =_{\beta} y$. Note
that, formally, the axiom of extensionality should be $\forall x.\forall y.(x = y) \to (A(x) \Leftrightarrow A(y))$. The informal reading is that if two individuals $x$ and $y$ are equal, then they must share the property $A$ (in fact, any property). So we simply lift 
the formal axiom of extensionality to the meta level, and replace the strong identity $=$ with 
the weaker $=_{\beta}$\footnote{Note that here we just integrate the conversion rule; we are not trying to do logic inside the lambda calculus, cf. the Kleene-Rosser paradox.}. At the object level, we can define $x = y$ as $\Pi P. P(x) \to P(y)$. 
This is what is commonly done in the context of intuitionistic type theory. As we said, when dealing with individuals it is inevitable to add axioms; but as long as these axioms do not contribute to computation and are in principle justifiable, we consider this way of adding
axioms appropriate.   
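To illustrate, here is a hypothetical instance of the rule using the $=_{\beta}$-extensionality axiom; the predicate $A$ and term $y$ are ours, chosen only for illustration:

```latex
\infer{\Gamma \vdash t : A(y)}
      {\Gamma \vdash t : A((\lambda x.x)\ y)
        & A((\lambda x.x)\ y) \Leftrightarrow A(y)}
```

Note that the proof term $t$ is unchanged across the rule, so the axiom contributes no computational content, as intended.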

\section{Lambda Encoding with Mutual Recursion}
\label{self}

\subsection{Calculus of Constructions}
\subsection{A Digression about Recursive Types}


\section{Lambda Encoding with Comprehension}
\label{comp}
In this chapter, we will see that the provably total arithmetic functions $f$ of the second order
theory are \textit{Church realizable}: the computational content of a proof of totality is a (Church-encoded) lambda term $t$ such that $t\ \bar{a} =_{\beta} \overline{f(a)}$ for any numeral $a$ and its Church-encoded term $\bar{a}$. Of course, as one of the inventors of System $\mathbf{F}$, Girard must have known this result, and Leivant also observed it(cite); we merely rediscover a known fact. However, we take a small step further: we discover that inside the second order theory of the lambda calculus (system $\mathfrak{G}$) there is an internal functional programming language that
supports polymorphic, dependently typed total programming. This property of having an internal language is specific to system $\mathfrak{G}$ with Church encodings.  

We will first present an inconsistent system, namely Frege's $\mathfrak{F}$, to motivate our construction of arithmetic within the lambda calculus. Then we give a second order theory of the lambda calculus, which we call $\mathfrak{G}$; it is simpler than Girard's formulation of $\mathbf{HA}_2$ \`a la Takeuti, with the domain changed from Peano numbers to the lambda calculus and the Peano axioms replaced by a single rule expressing the axiom of extensionality (section \ref{ext}).
 The system $\mathfrak{G}$ can reason \textit{externally} about the lambda calculus, and thus about any computable function. It also has an internal language that directly supports \textit{internal} reasoning, namely total, polymorphic, dependently typed programming.    

 Of course, since all nine of Peano's axioms can be derived in $\mathfrak{G}$, and the consistency proof of $\mathbf{PA}_2$ or $\mathbf{HA}_2$ cannot be formulated within those systems themselves, some doubt is cast on the consistency status of $\mathfrak{G}$. However, at the \textbf{meta level} one can see that the equational theory of the lambda calculus is nontrivial (section \ref{ars}), so we conjecture that $\mathfrak{G}$ is consistent.

\subsection{Frege's System $\mathfrak{F}$}
\label{frege}
Certain inconsistent systems and their corresponding antinomies are invaluable: not only do their antinomies serve as criteria for maintaining a certain sense of consistency, but also, perhaps more importantly, they give us examples of how powerful machinery can be adopted to reduce a large part of mathematics to these systems. Frege's system (\`a la Hatcher) certainly belongs to this category. In fact, this thesis itself is inspired by the Fregean construction of numbers. We will familiarize ourselves with Frege's system $\mathfrak{F}$, see how to do basic arithmetic in $\mathfrak{F}$, and see how the paradox arises. 

We identify six syntactic categories, namely \textit{domain term}, \textit{set}\footnote{Do not confuse this with the ``set'' of ZFC; here it is just the name of a syntactic category.}, \textit{formula}, \textit{type}, \textit{proof term}\footnote{By which I mean the term corresponding to the computational content of a proof, not that the ``proof term'' \textit{is} the proof.}, and \textit{pure lambda term}. 

\begin{definition}[Syntax]
\

\noindent \textit{Domain Terms/Set} $b \ :: = \ u \ | \ \iota u.F$

\noindent \textit{Formula/Type} $F \ ::= \bot \ | \ B \ | \ b \ep b' \ | \ \ F_1 \to F_2 \ | \ \forall u.F \ | \ F \wedge F'$

\noindent \textit{Proof Terms} $t \ ::= \ x \ | \ \lambda x .t \ | \ t t' \ | \  \langle t_1, t_2 \rangle \ | \ \pi_1 t \ | \ \pi_2 t$

\noindent \textit{Proof Context} $\Gamma \ :: = \ \cdot \ | \ \Gamma, x: F$

\end{definition} 

\begin{definition}[Deduction Rules]
\

\footnotesize{
\begin{tabular}{lll}
    
\infer[\textit{Var}]{\Gamma \vdash x:F}{x:F \in \Gamma}

&
\infer[\textit{Conv}]{\Gamma \vdash t: F_2}{\Gamma \vdash 
t: F_1 &  F_1 \cong F_2}

&

\infer[\textit{Forall}]{\Gamma \vdash t: \forall u.F}
{\Gamma \vdash t: F &  u \notin \mathsf{FV}(\Gamma)}

\\
\\
\infer[\textit{Instantiate}]{\Gamma \vdash t: [b/u]F}{\Gamma
\vdash t: \forall u.F}

&

%% \infer[\textit{Poly}]{\Gamma \vdash  p :\Pi X.F}
%% {\Gamma \vdash p: F & X \notin \mathsf{FV}(\Gamma)}

%% &
%% \infer[\textit{Inst}]{\Gamma \vdash p:[F'/X]F}{\Gamma \vdash p: \Pi X.F}

\infer[\textit{Func}]{\Gamma \vdash  \lambda x.t: F_1\to F_2}
{\Gamma, x: F_1 \vdash t: F_2}

&

\infer[\textit{App}]{\Gamma \vdash t t': F_2}{\Gamma
\vdash t: F_1 \to F_2 & \Gamma \vdash  t': F_1}
\\
\\

\infer{\Gamma \vdash \pi_i t: F_i}{\Gamma
\vdash  t: F_1 \wedge F_2 }

&

\infer{\Gamma \vdash \langle t_1, t_2 \rangle: F_1 \wedge F_2}{\Gamma
\vdash t_1: F_1 & \Gamma \vdash t_2: F_2 }

\end{tabular}
}
\end{definition}

\noindent \textbf{Note}: $\cong$ is defined as the reflexive, transitive, and symmetric closure of 
$\to_{c}$. System $\mathfrak{F}$ as presented here was originally formalized by Hatcher; we have merely put his system
into natural deduction style.

\begin{definition}[Comprehension]

\

\infer{ b \ep (\iota u.F) \to_{c} [b/u]F}{}

\end{definition}

The power behind the Fregean number construction is this comprehension axiom. The definition of 
number and the derivation of the induction principle for numbers both rely on comprehension. Of course, 
this particular comprehension is so powerful that it leads to contradiction. There are many
ways to restrict the use of comprehension\footnote{Russell's type theory, Quine's stratification, Zermelo's $\mathbf{ZF}$.}; in the next section, we will pose a restricted version, which separates the notion of domain term from that of set, thus achieving a sense of stratification.   


\begin{definition}[Equality]
  $a = b := \forall z. (z \ep a \equiv z \ep b)$.
\end{definition}
For convenience, we write $a \equiv b := a \to b .\wedge . b \to a$. We also write $a \not = b$
for $a = b \to \bot$, and $\exists a. A $ for $ (\forall a.(A \to \bot)) \to \bot$. Now we can proceed to reduce na\"ive set theory to $\mathfrak{F}$.

\begin{definition}[Na\"ive Set Theory]
\

  $\Lambda := \iota x.(x  = x \to \bot)$.

  $\{ b\} := \iota y. y = b$

  $\bar{c} := \iota y. (y \ep c \to \bot)$.

  $ a \cap b := \iota z. (z \ep a \wedge z \ep b)$

 $ a \cup b := \iota z. (z \ep a \to \bot . \wedge . z \ep b \to \bot) \to \bot$
\end{definition}

\begin{theorem}
\

  $\vdash \forall x.(x = x)$

  $\vdash \forall x.(x \ep \Lambda \to \bot)$.
\end{theorem}
We can take $x \ep \Lambda$ as our notion of contradiction instead of $\bot$. 

\begin{definition}[Fregean Numbers]
\

  $N := \iota x. \forall c.(\forall y.(y \ep c \to S y \ep c)) \to 0 \ep c \to x \ep c$.

  $0 := \{ \Lambda \}$.

  $S\ a := \iota y. \exists z.(z \ep y . \wedge . (y \cap \overline{\{z\}}) \ep a )$.
\end{definition}

\begin{theorem}
  \

$\vdash 0 \ep N$.
\end{theorem}
\begin{proof}
  We want to prove $\forall c.(\forall y.(y \ep c \to S y \ep c)) \to 0 \ep c \to 0 \ep c$. Let 
$s: \forall y.(y \ep c \to S y \ep c), z: 0 \ep c$. The lambda term of the proof would be
$\lambda s.\lambda z.z$.
\end{proof}

\begin{theorem}
  $\vdash \lambda n.\lambda s.\lambda z.s (n\ s\ z):\forall y. (y \ep N \to S y \ep N)$.
\end{theorem}
\begin{proof}
  Let $n: y \ep N, s: \forall y.(y \ep c \to S y \ep c), z: 0 \ep c$.
\end{proof}

\begin{theorem}[Induction]
  $\vdash \lambda a. \lambda b. \lambda d. d\ a\ b : \forall c.(\forall y.(y \ep c \to S y \ep c)) \to 0 \ep c \to \forall x.(x \ep N \to x \ep c)$.
\end{theorem}
\begin{proof}
  Let $a:\forall y.(y \ep c \to S y \ep c), b: 0 \ep c, d: x\ep N$.
\end{proof}

Now we see the constructive aspect of the Fregean construction; this is the spirit we will follow
in this thesis. Namely, there is an algorithmic interpretation of a constructive proof of totality of a certain kind of function. For example, the proof that $S$ is total, namely of $\forall y. (y \ep N \to S y \ep N)$, is the Church numeral successor $\suc$. In fact, one could continue to develop \textit{analysis} within $\mathfrak{F}$. From this basic development of the natural numbers and our observation, one should at least admit the merit of the \textit{naturality} of the construction. Of course, the system itself is inconsistent. In Quine's terminology, $b\ep b'$ is unstratified. The rest of the story is known to most logicians and mathematicians, namely that the following are provable in system $\mathfrak{F}$.
Let $A := (\iota u_1. (u_1 \ep u_1 \to \bot)) \ep (\iota u_1. (u_1 \ep u_1 \to \bot))$; note that $A \cong A \to \bot$ by comprehension.

$ \vdash \lambda x.xx: A \to \bot$ and $ \vdash \lambda x.xx: (A \to \bot) \to \bot$

\noindent However, as we have repeatedly emphasized, it is the constructivist spirit that we should take upon ourselves; the antinomy only serves to mark the boundary. It is also worth noting that intuitionism is irrelevant to preventing the inconsistency here.
 


\subsection{Lambda Calculus with Second Order Logic}

We will now see a formalization of the second order theory of the lambda calculus (system $\mathfrak{G}$). In truth, we took several detours to arrive at $\mathfrak{G}$; we also want to 
emphasize that these detours gave us the insights to develop the internal language of $\mathfrak{G}$, so they were not worthless.

\subsubsection{System $\mathfrak{G}_0$}
\label{gnull}
\begin{definition}[Syntax]

\

\noindent \textit{Terms} $t \ :: = \ x \ | \ \lambda x.t \ | \ t t'$

\noindent \textit{Types} $T \ ::= \ X \ | \ \Pi X.T \ | \ \ T_1 \to T_2 \ | \ \forall x.T \ | \ \iota x.T \ | \ t \ep T $

\noindent \textit{Context} $\Gamma \ :: = \ \cdot \ | \ \Gamma, x:T$

\end{definition} 

A PTS type theorist will not find these definitions of type, term, and context at all
surprising\footnote{They may even find them cumbersome because they do not collapse the notions
of type and term together.}. Indeed, $\iota x.T$ and $t \ep T$ are just another rendering of $\lambda x.T$
and $T\ t$ in their eyes. Of course, it may even disturb them a little that
it is a \textit{Curry-variant} and does not have the dependent type $\Pi x:T.T'$.  

A logician will quickly find it peculiar, because it collapses the notions of set, formula, and type altogether. If one tries to view types as formulas, what does a type like $\iota x.T_1 \to \iota x.T_1$ correspond to? And does it make sense to accept a formula like $t \ep \Pi X. (X \to X)$? Indeed, these are questions we could not answer when we came up with this syntax; part of the reason we reached the $\mathfrak{G}$ formulation was the attempt to answer them. Looking at a type like $\iota x.T_1 \to \iota x.T_1$, we want to interpret it not as a formula, but as 
a description of a function from the set $\iota x.T_1$ to itself; but this explanation is too weak, in the sense that we still do not know the meaning of a formula like $\iota x.T \to t \ep T'$. 

We want to remark that $\mathfrak{G}_0$ is intended to collapse the notions of proof term and domain term (lambda terms). So it does retain a sense of domain, and $t \ep T$ expresses the stratification, exactly to avoid Russell's antinomy. 

\begin{definition}[Typing Rules]
\

\footnotesize{
\begin{tabular}{lll}
    
\infer[\textit{Var}]{\Gamma \vdash x:T}{(x:T) \in \Gamma}

&
\infer[\textit{Conv}]{\Gamma \vdash t : T_2}{\Gamma \vdash t:
T_1 &  T_1 \cong T_2}

&

\infer[\textit{toFormula}]{\Gamma \vdash t: t \ep (\iota x.T)}{\Gamma
\vdash t : \iota x.T}

\\
\\

\infer[\textit{toSet}]{\Gamma\vdash t : \iota x.T}{\Gamma \vdash t: t \ep (\iota x.T)}

&
\infer[\textit{Forall}]{\Gamma \vdash t : \forall x.T}
{\Gamma \vdash t: T &  x \notin \mathsf{FV}(\Gamma)}

&
\infer[\textit{Instantiate}]{\Gamma \vdash t :[t'/x]T_2}{\Gamma
\vdash t: \forall x.T}

\\
\\

\infer[\textit{Poly}]{\Gamma \vdash  t :\Pi X.T}
{\Gamma \vdash t: T & X \notin \mathsf{FV}(\Gamma)}

&
\infer[\textit{Inst}]{\Gamma \vdash t:[T'/X]T}{\Gamma \vdash t: \Pi X.T}

&

\infer[\textit{Func}]{\Gamma \vdash \lambda x.t : T_1\to T_2}
{\Gamma, x:T_1 \vdash t: T_2}

\\
\\
\infer[\textit{App}]{\Gamma \vdash t t':T_2}{\Gamma
\vdash t: T_1 \to T_2 & \Gamma \vdash t': T_1}

\end{tabular}
}
\end{definition}

\noindent \textbf{Note}: $\cong$ is defined as the reflexive, transitive, and symmetric closure of 
$\to_{\beta}\cup \to_{\iota}$.
\begin{definition}[Beta Reductions]

\


\begin{tabular}{ll}

\infer{(\lambda x.t)t' \to_{\beta} [t'/x]t}{}

&

\infer{t \ep (\iota x.T) \to_{\iota} [t/x]T}{}

\end{tabular}
  
\end{definition}


Again, a PTS type theorist will find this strange: it does no kinding, so there is
no termination at the type level. To this we answer yes, type-level termination 
is the last thing we want. As for the toSet and toFormula rules, they are un-Gentzen-like, in
the sense that every type construct should have its own introduction and elimination rules,
and these two rules fail to maintain this. The toSet and toFormula rules are the right
target, yet the attack from this Gentzen-like point of view comes from the wrong side, because
in Gentzen's style only \textit{logical connectives} have their own introduction and elimination
rules; axiomatic predicate symbols such as $=$ and $\ep$ need not obey this format.

A logician will simply ask: what do the toFormula and toSet rules (we call them formula-set reciprocity, or simply reciprocity) mean? As Girard once said: ``The best possible law has value only if one can justify it, i.e., show the effect of non-observance.'' Can we justify them by showing what happens when they are absent? At the point when we proposed these typing rules, we found we could not justify reciprocity. On the other hand, we were in awe of the power reciprocity brings, namely dependently typed programming. And we can show strong normalization of $\mathfrak{G}_0$ by mapping it to system $\mathbf{F}$. 


  
\begin{definition}
\

  $F(X) := X$

  $F(T_1 \to T_2) := F(T_1) \to F(T_2)$

  $F(\Pi X.T) := \Pi X.F(T)$

  $F(\forall x.T) := F(T)$

  $F(\iota x.T) := F(T)$

  $F(t \ep T) := F(T)$
\end{definition}

\begin{theorem}
\label{const}
  If $\Gamma \vdash t:T$ in $\mathfrak{G}_0$, then $F(\Gamma) \vdash t:F(T)$ in system $\mathbf{F}$. 
\end{theorem}
\begin{proof}
  Simply by induction.
\end{proof}

Now let us take a look at how one can prove some of Peano's axioms within $\mathfrak{G}_0$. 
We define $\mathsf{Nat}$ similarly to the definition in section \ref{frege}. 


\begin{definition}[Church Numerals]
\

  \noindent $\mathsf{Nat} := \iota x. \Pi C.(\forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C)) \to 0 \ep C  \to x \ep C$

\noindent $\mathsf{S} \ := \lambda n. \lambda s.\lambda z. s \ (n\ s\ z)$

\noindent $0\  := \lambda s. \lambda z.z$

\noindent With $s:\forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C), z: 0 \ep C, n: \mathsf{Nat}$, $0$ is typable to $\mathsf{Nat}$, $\mathsf{S}$ is typable to $\mathsf{Nat} \to \mathsf{Nat}$.
\end{definition}

Note that with reciprocity, we have  $\vdash \lambda n. \lambda s.\lambda z. s \ (n\ s\ z) : \mathsf{Nat} \to \mathsf{Nat}$. We can also type $\vdash \lambda n. \lambda s.\lambda z. s \ (n\ s\ z) : \forall x.( x\ep \mathsf{Nat} \to \suc x \ep \mathsf{Nat})$ with $s:\forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C), z: 0 \ep C, n: x\ep \mathsf{Nat}$. This behavior of $\mathfrak{G}_0$ is highly desirable: on the one hand, one can prove theorems about functions \textit{externally}, such as $\forall x.( x\ep \mathsf{Nat} \to \suc x \ep \mathsf{Nat})$; on the other hand, one can exhibit the same property \textit{internally}, with a judgement like $\vdash \suc : \mathsf{Nat} \to \mathsf{Nat}$ expressing the same thing. This behavior \textit{is} the idea behind
the concept of dependently typed programming.  
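The internal judgement $\vdash \suc : \mathsf{Nat} \to \mathsf{Nat}$ has a direct counterpart in ordinary functional programming. The following sketch (all names are ours, chosen for illustration) runs the Church encoding directly as Python lambdas; it only demonstrates the computational behavior, not the typing:

```python
# Church numerals: a numeral is its own iterator.
zero = lambda s: lambda z: z                    # 0 := \s.\z. z
suc = lambda n: lambda s: lambda z: s(n(s)(z))  # S := \n.\s.\z. s (n s z)

def to_int(n):
    """Read back a Church numeral by iterating (+1) starting from 0."""
    return n(lambda x: x + 1)(0)

three = suc(suc(suc(zero)))
print(to_int(three))  # prints 3
```

The iterator reading is exactly the one the typing $\mathsf{Nat} := \iota x. \Pi C.\ldots$ above assigns to a numeral.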



\begin{definition}[Induction]
\

\noindent  $\mathsf{Id} :  \Pi C. (\forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C)) \to 0 \ep C \to \forall m. (m \ep \mathsf{Nat} \to m \ep C)$

\noindent $\mathsf{Id} := \lambda s. \lambda z. \lambda n. n\ s\ z$

\noindent with $s:\forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C), z: 0 \ep C, n: m \ep \mathsf{Nat}$.
\end{definition}

\begin{definition}[Addition]
\

\noindent  $\mathsf{add}\ u\ v := \mathsf{Id}\ \mathsf{S}\ v \ u$


\end{definition}

\noindent The typing procedure for addition: instantiate $C$ with $\iota x.\mathsf{Nat}$ in the type of $\mathsf{Id}$, so $\mathsf{Id}: (\mathsf{Nat} \to \mathsf{Nat}) \to \mathsf{Nat} \to \forall m.(m \ep \mathsf{Nat} \to \mathsf{Nat})$. Note that $\mathsf{S}: \mathsf{Nat} \to \mathsf{Nat}$ and $v :\mathsf{Nat}$, and $u: \mathsf{Nat}$ is interchangeable with $u \ep \mathsf{Nat}$ by reciprocity. 
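Since $\mathsf{add}\ u\ v := \mathsf{Id}\ \mathsf{S}\ v\ u$ $\beta$-reduces to $u\ \mathsf{S}\ v$, addition just iterates the successor. A Python sketch over Church numerals (names are ours):

```python
# Church numerals and iterator-style addition.
zero = lambda s: lambda z: z
suc = lambda n: lambda s: lambda z: s(n(s)(z))

# Id := \s.\z.\n. n s z, the computational content of induction.
ident = lambda s: lambda z: lambda n: n(s)(z)

# add u v := Id S v u, which beta-reduces to u S v.
add = lambda u: lambda v: ident(suc)(v)(u)

to_int = lambda n: n(lambda x: x + 1)(0)
print(to_int(add(suc(suc(zero)))(suc(zero))))  # prints 3
```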

\begin{definition}
  $\mathsf{Void} := \iota x. \Pi C. x \ep C$
\end{definition}

\begin{definition}[Leibniz Equality]
  $a = b:= \Pi C. a \ep C \to b \ep C$.
\end{definition}

Note the difference between Leibniz equality and the equality in section \ref{frege}. 
Leibniz equality simply takes the axiom of extensionality (that is, $x = y \to \Pi C. (x \ep C \to y \ep C)$) as the notion of equality. 

With all these definitions in place, one can go ahead and
prove all nine of Peano's axioms inside $\mathfrak{G}_0$, and do dependently typed programming, such as a vector encoding, as well. We refer the interested reader to appendix \ref{devep}. 

So it seems that $\mathfrak{G}_0$ is practical enough to serve as our reason for not answering the
logician's questions. It is also consistent, in the sense that the $\mathsf{Void}$ type is uninhabited. But we feel obligated to justify formula-set reciprocity, which eventually led to the development of $\mathfrak{G}$. 

\subsubsection{System $\mathfrak{G}$}
\begin{definition}
\

\noindent \textit{Formula/Type} $T \ ::= \  X^0 \ | \ t \ep S \ | \ \Pi X^1.T \ | \ \ T_1 \to T_2 \ | \ \forall x.T \ | \ \Pi X^0.T$ 

\noindent \textit{Set/Objects} $S \ ::= X^1 \ | \ \iota x.T$

%% \noindent \textit{Morphism} $M \ ::= t \ep S \ | \ \forall x.(x\ep S \to M)$

\noindent \textit{Proof Terms/Domain Terms/Pure Lambda Terms} $t \ :: = \ x \ | \ \lambda x.t \ | \ t t'$

%\noindent \textit{Proof Terms} $p \ ::= \ a \ | \ \lambda a .p \ | \ p p'$

\noindent \textit{Context} $\Gamma \ :: = \ \cdot \ | \ \Gamma, x:T$

%\noindent \textit{Records} $\Delta \ :: = \ \cdot \ | \ \Delta, a: x \ep S$

\end{definition} 

$X^0$ is a predicate variable of arity $0$ (namely, a type variable of system \textbf{F}), while $X^1$ is a predicate variable of
arity $1$; it is essentially a variable describing a set. $\iota x.T$ is the set-forming abstraction: it allows one to form a set out of a formula. Unlike in $\mathfrak{G}_0$, we separate
the notions of set and formula; they are no longer the same thing in $\mathfrak{G}$. Sets can only 
occur inside a formula; they have no rules or identity of their own outside a formula. 
So, logically, the formulas are precisely the second order formulas \`a la Takeuti; the only difference is that we replace the number domain with the lambda calculus. 


\begin{definition}[Typing Rules]
\

\footnotesize{
\begin{tabular}{lll}
    
\infer[\textit{Var}]{\Gamma \vdash x:T}{(x:T) \in \Gamma}

&
\infer[\textit{Conv}]{\Gamma \vdash t : T_2}{\Gamma \vdash t:
T_1 &  T_1 \cong T_2}

&

\infer[\textit{Forall}]{\Gamma \vdash t : \forall x.T}
{\Gamma \vdash t: T &  x \notin \mathsf{FV}(\Gamma)}

\\
\\
\infer[\textit{Instantiate}]{\Gamma \vdash t :[t'/x]T}{\Gamma
\vdash t: \forall x.T}
&

\infer[\textit{Poly}]{\Gamma \vdash  t :\Pi X^i.T}
{\Gamma \vdash t: T & X^i \notin \mathsf{FV}(\Gamma) & i= 0,1}

&
\infer[\textit{Inst0}]{\Gamma \vdash t:[T'/X^0]T}{\Gamma \vdash t: \Pi X^0.T}

\\
\\

\infer[\textit{Func}]{\Gamma \vdash \lambda x.t : T_1\to T_2}
{\Gamma, x:T_1 \vdash t: T_2}

&

\infer[\textit{App}]{\Gamma \vdash t t':T_2}{\Gamma
\vdash t: T_1 \to T_2 & \Gamma \vdash t': T_1}

&


\infer[\textit{Inst1}]{\Gamma \vdash t:[S/X^1]T}{\Gamma \vdash t: \Pi X^1.T}

\end{tabular}
}
\end{definition}

\noindent \textbf{Note}: $\cong$ is defined as the reflexive, transitive, and symmetric closure of 
$\to_{\beta}\cup \to_{\iota}$.

\begin{definition}[Functional Extensionality and Comprehension]

\


\begin{tabular}{ll}

\infer{(\lambda x.t)t' \to_{\beta} [t'/x]t}{}

&

\infer{t \ep (\iota x.T) \to_{\iota} [t/x]T}{}

\end{tabular}
  
\end{definition}

There is no surprise in the rule Poly. Inst0 allows us to instantiate $X^0$ with 
any formula; this is the instantiation of system \textbf{F}. Inst1 allows us to instantiate a set variable $X^1$ with a set $S$. We have the same comprehension scheme and beta reduction as in $\mathfrak{G}_0$. Perhaps the biggest change from $\mathfrak{G}_0$ to $\mathfrak{G}$ is that $\mathfrak{G}$ does not have formula-set reciprocity: as we mentioned before, we 
want to justify the reciprocity principle without assuming it. 

\begin{definition}[Internal Functional Language]
\

\noindent Internal Types $U\ := X^1 \ | \ \iota x.Q \ | \ \Pi x:U.U \ | \ \Delta X^1.U$.

\noindent Internal Formula $Q \ := X^0 \ | \ t \ep U \ | \ \Pi X^0.Q \ | \ Q \to Q' \ | \ \forall x.Q \ | \  \Pi X^1. Q$

\noindent Internal Context $\Psi\ :=  \ \cdot \ | \ \Psi, x:U$.

\end{definition}

The concepts behind the internal types are similar to what we already know about dependent types and polymorphic types in functional programming languages. We intend to \textit{interpret} an internal type $U$ as a set $S$ of $\mathfrak{G}$. Be aware that we are not trying to give a \textit{set-theoretic} model of polymorphism\footnote{And we are never going to do that in this thesis, and not only because of Reynolds' result; even if polymorphism \textit{had} a set-theoretic model, see Girard's blind spot for the reasons.}, which Reynolds has shown to be impossible. The reason we add internal formulas is that we want $[U'/X^1]U$ to be well-defined, namely, $[U'/X^1]U$ should still be a well-formed internal type.  

In fact, we will show that every internal type $U$ corresponds to a set $S$ and vice versa. We call the process of transforming $S$ into $U$ \textit{internalization}, and that of transforming $U$ into $S$ \textit{externalization}. 
   

\begin{definition}[Internalization]
\

  $\intern{\cdot}$ is a mapping from sets to internal types and from formulas to internal formulas.

  $\intern{X^1} := X^1$

  $\intern{\iota f. \forall x. (x \ep S' \to f\ x \ep S)} := \Pi x:\intern{S'}.\intern{S}$, where $f$ is fresh.

  $\intern{\iota x. (\Pi X^1. x \ep S)} := \Delta X^1.\intern{S}$, where $x$ is fresh.

  $\intern{\iota x.T} := \iota x.\intern{T}$

  $\intern{X^0} := X^0$

  $\intern{t\ep S} := t \ep \intern{S}$

  $\intern{T \to T'} := \intern{T} \to \intern{T'}$

  $\intern{\Pi X^i.T} := \Pi X^i.\intern{T}$.

  $\intern{\forall x.T} := \forall x.\intern{T}$.

  $\intern{x:x\ep S, \Gamma} := x: \intern{S}, \intern{\Gamma}$


\end{definition}

The interesting cases are the internalization of $\iota f. \forall x. (x \ep S' \to f\ x \ep S)$ and of $\iota x. (\Pi X^1. x \ep S)$. The first case \textit{expresses} the set of total functions $f$ from $S'$ to $S$, while allowing $S$ to be indexed by $x$. The second case simply describes a polymorphic set, keeping $X^1$ parameterized. 
  
%% \noindent Note that for any $x:y \ep S \in \Gamma$ where $\Gamma \vdash t: t \ep S'$, we can rename to $x: x \ep S \in \Gamma$, with $\Gamma \vdash \underline{t} : \underline{t} \ep \underline{S'}$, then one can apply following internalization function to go to the internal world. 

\begin{definition}[Internal Typing]
\

\begin{tabular}{lll}
\infer[toSet]{\intern{\Gamma} \Vdash t: \intern{S}}{\Gamma \vdash t:t\ep S}

&    
\infer{\Psi \Vdash x : U}{x:U \in \Psi }

&

\infer{\Psi \Vdash \lambda x.t : \Pi x:U. U'}
{\Psi, x: U \Vdash t : U'}


\\
\\

\infer{\Psi \Vdash t : \Delta X^1. U}
{\Psi \Vdash t : U & X^1 \notin FV(\Psi)}

&

\infer{\Psi \Vdash t : [U'/X] U}
{\Psi \Vdash t : \Delta X^1.U}

&
\infer{\Psi \Vdash t t' :[t'/x]U}{\Psi
\Vdash t:  \Pi x: U'.U & \Psi \Vdash t': U'}


%% \infer{\Delta \vdash \lambda y.[t/x]t':S_1 \longrightarrow S_3}{\Delta
%% \vdash \lambda y.t: S_1 \longrightarrow S_2 & \Delta \vdash \lambda x.t': S_2 \longrightarrow S_3}

\end{tabular}
\end{definition}

The toSet rule above transforms a judgement $\Gamma \vdash t:T$ of $\mathfrak{G}$ into its internal language. The other rules are intuitively clear: they correspond to functional programming 
concepts such as dependent products and polymorphism. 

It is not good enough just to transform into the internal language; one also wants to go back to $\mathfrak{G}$ whenever one wishes. Thus we have the following externalization process.

\begin{definition}[Externalization]
\

  $\interp{\cdot}$ is a mapping from internal types to sets and from internal formulas to formulas.

  $\interp{X^1} := X^1$

  $\interp{\iota x.Q} := \iota x.\interp{Q}$

  $\interp{\Pi x:U'.U} := \iota f. \forall x. (x \ep \interp{U'} \to f\ x \ep \interp{U})$, where $f$ is fresh.

  $\interp{\Delta X^1.U} := \iota x. (\Pi X^1. x \ep \interp{U})$, where $x$ is fresh.

  $\interp{X^0} := X^0$

  $\interp{t\ep U} := t \ep \interp{U}$

  $\interp{Q \to Q'} := \interp{Q} \to \interp{Q'}$

  $\interp{\Pi X^i.Q} := \Pi X^i.\interp{Q}$.

  $\interp{\forall x.Q} := \forall x.\interp{Q}$.

  $\interp{x:U, \Psi} := x: x\ep \interp{U}, \interp{\Psi}$
\end{definition}

One can immediately see that externalization is exactly the inverse of internalization. So understanding one of these two concepts is enough; they are in a sense isomorphic. We can see the isomorphism in the following lemma.
\begin{lemma}
\label{id}
  $\interp{\intern{S}} = S$ and $\intern{\interp{U}} = U$.
\end{lemma}

\begin{proof}
By induction. 
\end{proof}
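Lemma \ref{id} can be checked mechanically on a small fragment. The Python sketch below (the AST and function names are ours) represents only the two interesting constructors plus set variables, and confirms that the round trip is the identity on this fragment:

```python
from dataclasses import dataclass

# A tiny fragment of the two syntaxes (names ours).
@dataclass(frozen=True)
class SVar:            # set variable X^1 (shared by both syntaxes)
    name: str

@dataclass(frozen=True)
class FunSet:          # iota f. forall x. (x ep S' -> f x ep S)
    var: str
    dom: object
    cod: object

@dataclass(frozen=True)
class PolySet:         # iota x. (Pi X^1. x ep S)
    tvar: str
    body: object

@dataclass(frozen=True)
class Pi:              # internal type Pi x:U'.U
    var: str
    dom: object
    cod: object

@dataclass(frozen=True)
class Delta:           # internal type Delta X^1.U
    tvar: str
    body: object

def internalize(s):
    """<< . >> : sets -> internal types."""
    if isinstance(s, SVar):
        return s
    if isinstance(s, FunSet):
        return Pi(s.var, internalize(s.dom), internalize(s.cod))
    if isinstance(s, PolySet):
        return Delta(s.tvar, internalize(s.body))
    raise ValueError(s)

def externalize(u):
    """[[ . ]] : internal types -> sets."""
    if isinstance(u, SVar):
        return u
    if isinstance(u, Pi):
        return FunSet(u.var, externalize(u.dom), externalize(u.cod))
    if isinstance(u, Delta):
        return PolySet(u.tvar, externalize(u.body))
    raise ValueError(u)

s = FunSet("x", SVar("A"), SVar("B"))
print(externalize(internalize(s)) == s)  # prints True
```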
\begin{lemma}
\label{subterm}
$[t'/x]\interp{U} = \interp{[t'/x]U}$ and $[\interp{U'}/X] \interp{U} = \interp{[U'/X]U}$.
\end{lemma}
\begin{proof}
  By induction on structure of $U$.
\end{proof}

The following theorem gives us the ability to go back to $\mathfrak{G}$ from its internal language. More specifically, it shows that the following rule (corresponding to the toFormula rule of section \ref{gnull}) is admissible: 

\

\infer[toFormula]{\interp{\Psi} \vdash t: t\ep \interp{U}}{\Psi \Vdash t: U} 

\

\noindent We call the process of internalization and externalization in $\mathfrak{G}$ \textit{reciprocity}.
 
\begin{theorem}[Externalization]
\label{ext}
  If $\Psi \Vdash t: U$, then $\interp{\Psi} \vdash t: t\ep \interp{U}$.
\end{theorem}
\begin{proof}
\noindent  By induction on the derivation. 

\noindent \textbf{Base Case}:

\

\infer{\Psi \Vdash x : U}{x:U \in \Psi }

\

\noindent $\interp{\Psi} \vdash x : x \ep \interp{U}$, since $x: x\ep \interp{U} \in \interp{\Psi}$.

\

\noindent \textbf{Base Case}:

\

\infer{\intern{\Gamma} \Vdash t: \intern{S}}{\Gamma \vdash t:t\ep S}

\

\noindent By lemma \ref{id}.

\

\noindent \textbf{Step Case}:

\

\infer{\Psi \Vdash \lambda x.t : \Pi x:U. U'}
{\Psi, x: U \Vdash t : U'}

\

\noindent By induction, we have $\interp{\Psi}, x:x\ep \interp{U} \vdash t : t \ep \interp{U'}$.
So $\interp{\Psi} \vdash \lambda x.t : x\ep \interp{U} \to t \ep \interp{U'}$, then by Forall
rule, we have $\interp{\Psi} \vdash \lambda x.t : \forall x.(x\ep \interp{U} \to t \ep \interp{U'})$. By comprehension rule and beta-reduction, we get $\interp{\Psi} \vdash \lambda x.t : \lambda x.t \ep \iota f.\forall x.(x\ep \interp{U} \to f \ x \ep \interp{U'})$. And we also know that $\interp{\Pi x:U.U'} := \iota f. \forall x. (x \ep \interp{U} \to f\ x \ep \interp{U'})$. So it is the case.

\

\noindent \textbf{Step Case}:

\

\infer{\Psi \Vdash t t' :[t'/x]U}{\Psi
\Vdash t:  \Pi x: U'.U & \Psi \Vdash t': U'}

\

\noindent By induction, we have $\interp{\Psi} \vdash t:t \ep \iota f. \forall x. (x \ep \interp{U'} \to f\ x \ep \interp{U})$ and $ \interp{\Psi} \vdash t':t'\ep \interp{U'}$. By comprehension, we have $\interp{\Psi} \vdash t : \forall x. (x \ep \interp{U'} \to t\ x \ep \interp{U})$. Instantiating $x$ with $t'$, we have $\interp{\Psi} \vdash t: t' \ep \interp{U'} \to t\ t' \ep [t'/x] \interp{U}$. So by the App rule, we have $\interp{\Psi} \vdash t t': t t' \ep [t'/x]\interp{U}$. By lemma \ref{subterm}, we know that $[t'/x]\interp{U} = \interp{[t'/x]U}$. So $\interp{\Psi} \vdash t t': t t' \ep \interp{[t'/x]U}$.

\


\noindent \textbf{Step Case}:

\

\infer{\Psi \Vdash t : \Delta X^1. U}
{\Psi \Vdash t : U & X^1 \notin FV(\Psi)}

\

\noindent By induction, one has $\interp{\Psi} \vdash t : t \ep \interp{U}$. So one has 
$\interp{\Psi} \vdash t : \Pi X^1. t \ep \interp{U}$. So by comprehension, one has $\interp{\Psi} \vdash t : t\ep \iota x. \Pi X^1. x \ep \interp{U}$. 

\

\noindent \textbf{Step Case}:

\


\infer{\Psi \Vdash t : [U'/X] U}
{\Psi \Vdash t : \Delta X^1.U}

\

\noindent By induction, one has $\interp{\Psi} \vdash t: t \ep \iota x. \Pi X^1. x \ep \interp{U}$. By comprehension, we have $\interp{\Psi} \vdash t: \Pi X^1. t \ep \interp{U}$. So by instantiation, we have $\interp{\Psi} \vdash t: t \ep [\interp{U'}/X] \interp{U}$. By lemma \ref{subterm}, we know $[\interp{U'}/X] \interp{U} = \interp{[U'/X]U}$, so it is the case. 
\end{proof}

\subsubsection{Generalized Reciprocity}

In our previous development of the internal language of $\mathfrak{G}$, one needs a judgement
of the form $\Gamma \vdash t:t\ep S$, where the proof term $t$ is the same as the $t$ in $t\ep S$, to enter the internal world; this means only Church numerals have this privilege. We now show how, in general, one can exploit a notion of reciprocity without being confined to Church encodings. We
relax the requirement for entering the internal world to $\Gamma \vdash t':t\ep S$, where $t$ and $t'$ are not necessarily the same. First we need to modify the previous definitions
slightly. 

\begin{definition}[Modifications]
\

\noindent Internal Context $\Psi\ :=  \ \cdot \ | \ \Psi, x: t \ep U$.

\noindent  $\interp{x:t \ep U, \Psi} := x: t\ep \interp{U}, \interp{\Psi}$

\noindent  $\intern{x:t\ep S, \Gamma} := x: t\ep \intern{S}, \intern{\Gamma}$
\end{definition}

\begin{definition}[Generalized Internal Typing]
\

\begin{tabular}{lll}
\infer[toSet]{\intern{\Gamma} \Vdash t': t \ep \intern{S}}{\Gamma \vdash t':t\ep S}

&    
\infer{\Psi \Vdash x : t \ep U}{x: t \ep U \in \Psi }

&

\infer{\Psi \Vdash \lambda y.t' : \lambda x.t\ep \Pi x:U. U'}
{\Psi, y: x \ep U \Vdash t' : t \ep U'}


\\
\\

\infer{\Psi \Vdash t' :t\ep \Delta X^1. U}
{\Psi \Vdash t' : t\ep U & X^1 \notin FV(\Psi)}

&

\infer{\Psi \Vdash t' : t\ep [U'/X] U}
{\Psi \Vdash t' :t \ep \Delta X^1.U}

&
\infer{\Psi \Vdash t' t'' : t_1 t_2 \ep [t_2/x]U}{\Psi
\Vdash t': t_1 \ep \Pi x: U'.U & \Psi \Vdash t'': t_2 \ep U'}


%% \infer{\Delta \vdash \lambda y.[t/x]t':S_1 \longrightarrow S_3}{\Delta
%% \vdash \lambda y.t: S_1 \longrightarrow S_2 & \Delta \vdash \lambda x.t': S_2 \longrightarrow S_3}

\end{tabular}
\end{definition}
 
\begin{theorem}[Externalization]
  If $\Psi \Vdash t': t\ep U$, then $\interp{\Psi} \vdash t': t\ep \interp{U}$.
\end{theorem}
\begin{proof}
  The proof is the same as that of theorem \ref{ext}.
\end{proof}
%% The best possible law has value only if one can justify it, i.e., show the effect of non-observance. Girard

This generalized version of reciprocity is highly desirable in the sense that Scott encoding
and its derivatives can now exploit reciprocity. For example, for the Scott-encoded $0$ and $\suc$, one
has proofs of $0 \ep \mathsf{Nat}$ and $\suc \ep \Pi x: \mathsf{Nat}. \mathsf{Nat}$, and one can then elaborate an inductive proof of $\mathsf{add} \ep \Pi x:\mathsf{Nat}. \Pi y: \mathsf{Nat}. \mathsf{Nat}$ from the definition of $\mathsf{add}$ (similar definitions can be written with OCaml's \textit{match} expression and Haskell's \textit{case} expression). 
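For contrast with the Church encoding, a Scott numeral is its own one-step \textit{match} construct rather than its own iterator, so $\mathsf{add}$ must be defined by explicit recursion. A Python sketch (names ours), using the Scott numerals $0 := \lambda s.\lambda z.z$ and $\suc y := \lambda s.\lambda z.s\ y$ as in the next section:

```python
# Scott numerals: a numeral is its own one-step case analysis.
zero = lambda s: lambda z: z               # 0 := \s.\z. z
suc = lambda y: lambda s: lambda z: s(y)   # S y := \s.\z. s y

def add(u, v):
    # match u with | 0 -> v | S p -> S (add p v)
    return u(lambda p: suc(add(p, v)))(v)

def to_int(n):
    # match n with | 0 -> 0 | S p -> 1 + to_int p
    return n(lambda p: 1 + to_int(p))(0)

print(to_int(add(suc(suc(zero)), suc(zero))))  # prints 3
```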
\subsection{Reasoning about Programs}

\subsubsection{Preliminary}
Most of this section comes from Barendregt's book on the lambda calculus.
\begin{definition}[Solvability]
\

  \begin{itemize}
  \item   A closed lambda term $t$ ($\mathrm{FV}(t) = \emptyset$) is solvable if
there exist $t_1,..., t_n$ such that $t t_1 ... t_n =_{\beta} \lambda x.x$.
\item An arbitrary term $t$ is solvable if the closure $\lambda x_1...\lambda x_n.t$, where
$\{x_1,...,x_n\} = \mathrm{FV}(t)$, is solvable.
  \item $t$ is unsolvable iff $t$ is not solvable.
  \end{itemize}
\end{definition}

\begin{definition}[Head Normal Form]
  A term $t$ is in head normal form if $t$ is of the form $\lambda x_1\ldots\lambda x_n.x\ t_1 \ldots t_m$, where $n,m \geq 0$.
\end{definition}

\begin{theorem}[Wadsworth]
  $t$ is solvable iff $t$ has a head normal form. In particular, all terms in
normal form are solvable, and unsolvable terms have no normal form.
\end{theorem}

\begin{theorem}[Genericity]
  For an unsolvable term $t$, if $t_1 t =_{\beta} t_2$, where $t_2$ is in normal form, then
for any $t'$, we have $t_1 t' =_{\beta} t_2$.
\end{theorem}

So an unsolvable term is, in general, computationally irrelevant; it is therefore reasonable to equate
all unsolvable terms. 

\begin{definition}[Omega-Reduction]
Let $\Omega$ be $(\lambda x.xx)(\lambda x.xx)$. Then $t \to_{\omega} \Omega$ iff $t$ is unsolvable and $t \not \equiv \Omega$.
\end{definition}

We add Omega-reduction as part of the term reduction in $\mathfrak{G}$.

\begin{theorem}
  $\to_{\beta} \cup \to_{\omega}$ is Church-Rosser.
\end{theorem}

\subsubsection{Reasoning about Programs}

We now define another notion of contradiction: $\bot := \forall x. x = \Omega$. Note that this implies $\forall x.\forall y. x = y$, so we can safely take it as contradictory.

\begin{theorem}
  $\vdash \forall n. ( n \ep \mathsf{Nat} \to (n = \Omega \to \bot))$.
\end{theorem}
\begin{proof}
  We prove this by induction. Recall the induction theorem: $\Pi C^1. (\forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C)) \to 0 \ep C \to \forall m. (m \ep \mathsf{Nat} \to m \ep C)$. We instantiate $C$ with $\iota z. (z = \Omega \to \bot)$; by comprehension, we then have $(\forall y . ((y = \Omega \to \bot) \to (\mathsf{S} y = \Omega \to \bot))) \to (0 = \Omega \to \bot) \to \forall m. (m \ep \mathsf{Nat} \to (m = \Omega \to \bot))$. It is enough to show $0 = \Omega \to \bot$ and $\mathsf{S} y = \Omega \to \bot$. Suppose we use Scott numerals, so $0 := \lambda s.\lambda z.z$ and $\suc y := \lambda s.\lambda z.s\ y$. Assume $0 = \Omega = \lambda x_1.\lambda x_2.\Omega$, and let $F := \lambda u. u \ p\ q$. Assume $q \ep X^1$; then $F \ 0 \ep X^1$. Also $F \ (\lambda x_1.\lambda x_2.\Omega) \ep X^1$, so $\Omega \ep X^1$. Thus we have shown
$\Pi X^1. (q \ep X^1 \to \Omega \ep X^1)$, i.e., $q = \Omega$; since $q$ is arbitrary, we get $\forall q.\ q = \Omega$, which is $\bot$. So $0 = \Omega \to \bot$. Now let us show $\mathsf{S} y = \Omega \to \bot$. Assume $\lambda s.\lambda z.s\ y = \Omega = \lambda x_1.\lambda x_2.\Omega$. Let $F := \lambda n.n\ (\lambda p.q)\ z$. Assume $q \ep X^1$; then $F\ (\lambda s.\lambda z.s\ y) \ep X^1$, and thus $F\ (\lambda x_1.\lambda x_2.\Omega) \ep X^1$, meaning $\Omega \ep X^1$. So we have shown $\Pi X^1. (q \ep X^1 \to \Omega \ep X^1)$, hence $\forall q.\ q = \Omega$, which is $\bot$. So $\mathsf{S} y = \Omega \to \bot$.  
\end{proof}

The theorem above means that no member of $\mathsf{Nat}$ equals $\Omega$; since all unsolvable terms are equated with $\Omega$, every member of $\mathsf{Nat}$ is solvable and thus has a head normal form. This establishes 
the fact that a numeric function $t: \mathsf{Nat} \to \mathsf{Nat}$ terminates 
on every input from $\mathsf{Nat}$. 

\subsection{Leibniz Equality in $\mathfrak{G}$}

\begin{theorem}
  \label{sep}
  Let $t_1, t_2$ be two closed $\beta\eta$-normal forms. Then there exists a closed term 
  $F$ such that:
  
  $F t_1 t_2 =_{\beta} \mathsf{True} \equiv \lambda x.\lambda y.x$ if $t_1 \equiv t_2$; 
  
  $F t_1 t_2 =_{\beta} \mathsf{False} \equiv \lambda x.\lambda y.y$ if $t_1 \not \equiv t_2$. 
  
\end{theorem}

\noindent This theorem comes from Barendregt's book on the lambda calculus, page 396. 

\begin{theorem}
  \label{neg}
  If $t_1$ and $t_2$ are distinct closed $\beta\eta$-normal forms, then $\vdash (t_1 = t_2) \to \bot$.
\end{theorem}
\begin{proof}
  Assume $t_1 = t_2$. By Theorem \ref{sep}, we know that $F t_1 t_1 = \mathsf{True}$ and, since $t_1 \not \equiv t_2$, that $F t_1 t_2 = \mathsf{False}$. From the assumption $t_1 = t_2$ we get 
  $\mathsf{True} = F t_1 t_1 = F t_1 t_2 = \mathsf{False}$, which leads to a contradiction.
\end{proof}

\begin{theorem}
  Assume $t_1, t_2$ are solvable terms. If $\vdash t_1 = t_2$ in $\mathfrak{G}$, then $t_1 =_{\beta\eta} t_2$. 
\end{theorem}
\begin{proof}
  Since $\mathfrak{G}$ is consistent, this follows by contraposition of Theorem \ref{neg}. 
\end{proof}

This development shows that Leibniz equality is the strongest notion of equality that one can get. When we think about Leibniz's law of identity in general, it is also the strongest version of 
equality, namely identity. So intuitively, Leibniz equality in $\mathfrak{G}$ should correspond to a notion of intensional identity, and the conversion rule allows us to treat $\beta\eta\Omega$-equivalence as the intensional identity in $\mathfrak{G}$. 

\subsection{Summary}
We have shown the development of $\mathfrak{G}$ from Frege's $\mathfrak{F}$ and the imperfect system
$\mathfrak{G}_0$. Inside $\mathfrak{G}$, we exhibit a concept of reciprocity through internalization and externalization. This reciprocity makes the logical system $\mathfrak{G}$ highly suitable for modern polymorphic and dependently typed functional programming. More excitingly, one does not even need primitive notions of datatypes and pattern matching; they are directly supported through Scott encoding.  

\section{Conclusion and Future Work}
\label{conc}
\cite{Girard:1989}
\bibliographystyle{plain}
\bibliography{thesis}

\appendix
\section{Developments inside System $\mathfrak{G}_0$}
\label{devep}

\begin{lemma}[Reflexivity of Equality]
\label{refl}
 There is a $t$ such that $\cdot \vdash t : \forall a. (a = a)$.
\end{lemma}
\begin{proof}
  Obvious.
\end{proof}

\begin{lemma}[Symmetry of Equality]
\label{symm}
 There is a $t$ such that $\cdot \vdash t : \forall a. \forall b. (a = b \to b = a)$.
\end{lemma}
\begin{proof}
  Assume $\Pi C. a\ep C \to b \ep C$ (1); we want to show $b \ep A \to a \ep A$ for any $A$. Instantiate $C$ in (1) with $\iota x.  (x \ep A \to a \ep A)$. By comprehension, we get 
$(a \ep A \to a \ep A) \to (b \ep A \to a \ep A)$. Since $a \ep A \to a \ep A$ is derivable in our system, by modus ponens we get $b \ep A \to a \ep A$. 
\end{proof}

\begin{lemma}[Transitivity of Equality]
\label{trans}
 There is a $t$ such that $\cdot \vdash t : \forall a. \forall b. \forall c. a = b \to b = c \to a = c$.
\end{lemma}
\begin{proof}
For any $a,b,c$, assume $a = b$ (i.e., $\Pi C. a \ep C \to b \ep C$) and $b = c$ (i.e., $\Pi C. b \ep C \to c \ep C$); we want to show $a = c$ (i.e., $a \ep A \to c \ep A$ for any $A$). One can see that this follows by 
syllogism. 
\end{proof}

\begin{lemma}[Congruence of Equality]
\label{congc}
 There is a $t$ such that $\cdot \vdash t : \forall a. \forall b. \forall f .( a = b \to f\ a = f\ b)$.
\end{lemma}

\begin{theorem}
There is a $t$ such that  $\cdot \vdash t : \forall n. (n \ep \mathsf{Nat} \to \mathsf{add}\ n\ 0 = n)$. 
\end{theorem}
\begin{proof}
We want to show $\forall n. (n \ep \mathsf{Nat} \to \mathsf{add}\ n\ 0 = n)$.
 Let $P := \iota x. \mathsf{add}\ x\ 0 = x$. Instantiate $C$ in $\mathsf{Id}$ with $P$; we get 
$\forall y . ( \mathsf{add}\ y\ 0 = y \to \mathsf{add}\ (\mathsf{S} y)\ 0 = \mathsf{S} y) \to\mathsf{add}\ 0\ 0 = 0 \to \forall m. (m \ep \mathsf{Nat} \to m \ep P)$. We just have to inhabit 
$\forall y . ( \mathsf{add}\ y\ 0 = y \to \mathsf{add}\ (\mathsf{S} y)\ 0 = \mathsf{S} y)$ and $\mathsf{add}\ 0\ 0 = 0$. For the base case, we want to show $\Pi C. \mathsf{add}\ 0\ 0 \ep C \to 0 \ep C$. Assume $\mathsf{add}\ 0\ 0 \ep C$; since $\mathsf{add}\ 0\ 0 \to_{\beta} 0$, by conversion we get $0 \ep C$. The step case is a bit more complicated: assume $\mathsf{add}\ y\ 0 = y$; we want to show $\mathsf{add}\ (\mathsf{S} y)\ 0 = \mathsf{S} y$. Since $\mathsf{add}\ y\ 0 \to_{\beta} y\ \mathsf{S}\ 0$, by conversion we have $y \ \mathsf{S}\ 0 = y$. And $\mathsf{add}\ (\mathsf{S}y)\ 0 \to_{\beta} \mathsf{S}(y \ \mathsf{S}\ 0)$, so we are trying to prove $\mathsf{S}(y \ \mathsf{S}\ 0) = \mathsf{S}y$, which follows from Lemma \ref{cong}. 
\end{proof}
\begin{lemma}[Object-level Conversion]
\label{oconv}
  There is a $t$ such that  $\cdot \vdash t: \forall a. \forall b. \Pi P. ( a \ep P \to a = b \to b \ep P)$. 
\end{lemma}
\begin{proof}
  By modus ponens.
\end{proof}
\begin{theorem}
   There is a $t$ such that $\cdot \vdash t : \forall a. \forall b. (a \ep \mathsf{Nat} \to a = b \to b \ep \mathsf{Nat})$.
\end{theorem}
\begin{proof}
  Let $P := \iota x.x\ep \mathsf{Nat}$ for lemma \ref{oconv}. 
\end{proof}

\begin{theorem}[Unprovability I]
There is no $t$ such that $\cdot \vdash t:  1 = 0 \to \mathsf{Void}$. 
\end{theorem}

\begin{proof}
By the erasure theorem, if such a $t$ existed, it would imply that $(\Pi C. C \to C) \to \Pi X.X$ is 
inhabited in System \textbf{F}. Since $\Pi C. C \to C$ is inhabited, $\Pi X.X$ would then be inhabited, contradicting the consistency of System \textbf{F}. 
\end{proof}

\begin{theorem}[Unprovability II]
There is no $t$ such that $\cdot \vdash t:\mathsf{Void}$. 
\end{theorem}

\noindent The unprovability results above suggest that our system, as a logical system, seemingly has the same drawback as $\mathbf{F}$, i.e., it is unable to interpret $0 \not = 1$ properly. We will see that this is actually not the case for our system. 

\subsection{The Notion of Contradiction}

\begin{definition}
  $\bot := \forall x. \forall y. (x = y)$.
\end{definition}

\noindent The meaning of this definition is obvious: every term is the same. Note that 
the erasure is $F(\bot) \equiv \Pi X. X \to X$, so $\bot$ is inhabited in System \textbf{F}. But 
one can prove that $\bot$ is uninhabited in our system. 

\begin{theorem}[Logical Consistency]
\label{logic}
There is no $t$ such that $\cdot \vdash t:\bot$. 
\end{theorem}
\begin{proof}
  By Theorem \ref{const}, we know that if there were such a $t$, it would have to be an
abstraction, so $u: x \ep C \vdash t' : y \ep C$ for any $x, y, C$. But 
according to our typing rules, there is no way to construct a term of type $y \ep C$
under the assumption $u : x \ep C$. 
\end{proof}

\noindent This theorem shows that although our system is computationally the same as System
\textbf{F}, it is logically strictly richer: we have just identified a property that cannot be
inhabited in our system but is inhabited in System \textbf{F}. Now, with this new notion of \textit{contradiction}, we can prove $0 = 1 \to \bot$.

\begin{theorem}
 There is a term $t$ such that $\cdot \vdash t : 0 = 1 \to \bot$.
\end{theorem}
\begin{proof}
  Assume $\Pi C. 0 \ep C \to 1 \ep C$ ($\dagger$); we want to prove, for any $x,y, A$, that $x \ep A \to y \ep A$. Assume $x \ep A$ (1). We now instantiate $C$ with $\iota u. (((\lambda n. n\ (\lambda z.y)\ x)\ u) \ep A)$ in $\dagger$. By comprehension and beta reduction, we get $x \ep A \to y \ep A$ (2). By modus ponens on (1) and (2), we get $y \ep A$. So we have exhibited an abstract proof
term $t$.   
\end{proof}

\noindent \textbf{Remarks}: 
\begin{itemize}
\item The theorem above shows that at least one axiom of $\textbf{HA}_2$
can be proved in our system and also has a well-behaved translation to System $\mathbf{F}$, namely, $F(0 = 1 \to \bot) \equiv (\Pi C. C \to C) \to (\Pi C. C \to C)$. 

\item It also shows that Girard's mapping from \textbf{F} with ``junk'' to \textbf{F} is very
well conceived, because that mapping maps his notion of contradiction, $\Pi X.X$, to $\Pi X. X \to X$, which is exactly the erasure of our notion of contradiction.  
\end{itemize}

\begin{theorem}
 There is a term $t$ such that $\cdot \vdash \forall m. ( m \ep \mathsf{Nat} \to \mathsf{S}m =  m \to \bot)$.
\end{theorem}

\subsection{Injectivity of $\mathsf{S}$}
\noindent Just as the $\mathsf{pred}$ function for Church numerals is notoriously hard to define, 
the injectivity of $\mathsf{S}$ is the hardest theorem we prove in our system.

\begin{definition}[Predecessor, Kleene]
$\mathsf{pred} := \lambda n.\lambda f. \lambda x. n \ (\lambda g.\lambda h. h\ (g\ f)) (\lambda u. x) (\lambda u.u)$.  
\end{definition}
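Kleene's definition can be checked concretely. The following Python sketch (the helper names \texttt{pred\_k} and \texttt{to\_int} are ours, purely for illustration) transcribes Church numerals and the predecessor literally; note that the numeral rebuilds itself step by step, which is why the predecessor of a Church numeral is not a constant-time operation.

```python
# Church numerals: n is the n-fold iterator \f.\x. f^n x.
zero = lambda f: lambda x: x
suc = lambda n: lambda f: lambda x: f(n(f)(x))

# Kleene's predecessor, transcribed literally:
# pred := \n.\f.\x. n (\g.\h. h (g f)) (\u.x) (\u.u)
pred_k = lambda n: lambda f: lambda x: \
    n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u)

# Read a Church numeral back as a Python int.
to_int = lambda n: n(lambda k: k + 1)(0)

three = suc(suc(suc(zero)))
print(to_int(pred_k(three)))  # 2
print(to_int(pred_k(zero)))   # 0
```

The trick is visible in the step function $\lambda g.\lambda h.\ h\ (g\ f)$: each iteration delays the application of $f$ by one step, and the final $\lambda u.u$ discards exactly one of them.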

\begin{lemma}
\label{tcong}
   $\cdot \vdash t: \forall a. \forall b. (a = b \to \lambda s.\lambda z. s\ (a\ s\ z) = \lambda s.\lambda z. s\ (b\ s\ z))$. 
\end{lemma}
\begin{proof}
Assume $\Pi C. a \ep C \to b \ep C$ ($\dagger$); we want to show that $\lambda s.\lambda z. s\ (a\ s\ z) \ep A \to \lambda s.\lambda z. s\ (b\ s\ z) \ep A$ for any $A$. Instantiate $C$ with $\iota x.(\lambda s.\lambda z. s\ (x\ s\ z) \ep A)$ in $\dagger$; by comprehension, we have $\lambda s.\lambda z. s\ (a\ s\ z) \ep A \to \lambda s.\lambda z. s\ (b\ s\ z) \ep A$. 
\end{proof}

\begin{lemma}[Intermediate Result]
\label{inter}
  $\cdot \vdash t: \forall m .(m \ep \mathsf{Nat} \to \lambda f.\lambda x.(m\ (\lambda g.\lambda h. h\ (g\ f))\ (\lambda u.x))\ f = m)$. 
\end{lemma}

\begin{proof}
  We prove this by induction. Let $P:= \iota q. \lambda f.\lambda x.(q\ (\lambda g.\lambda h. h\ (g\ f))\ (\lambda u.x))\ f = q$. Instantiate $C$ in $\mathsf{Id}$ with $P$; we get $\forall y . ( y \ep P \to  (\mathsf{S} y)\ep P) \to  0 \ep P \to \forall m. (m \ep \mathsf{Nat} \to  m \ep P)$. We just need to show $0 \ep P$ and $\forall y . ( y \ep P \to  (\mathsf{S} y)\ep P)$. For the base case, we want to prove $\lambda f.\lambda x.(0\ (\lambda g.\lambda h. h\ (g\ f))\ (\lambda u.x))\ f = 0$; this is easily done by evaluation. For the step case, for any $y$, we want to show $y \ep P \to  (\mathsf{S} y)\ep P$. Assume $y \ep P$; we need to show $(\mathsf{S} y)\ep P$. By comprehension and beta reduction, we are assuming $\lambda f.\lambda x.(y\ (\lambda g.\lambda h. h\ (g\ f))\ (\lambda u.x))\ f = y$ (1), and we want to show $\lambda f.\lambda x.((\lambda s.\lambda z.s\ (y\ s\ z))\ (\lambda g.\lambda h. h\ (g\ f))\ (\lambda u.x))\ f = \lambda s.\lambda z.s\ (y\ s\ z)$ (2). By Lemma \ref{tcong} and (1), we get $\lambda s.\lambda z.s\ (y\ s\ z) = \lambda s.\lambda z.s\ ((\lambda f.\lambda x.(y\ (\lambda g.\lambda h. h\ (g\ f))\ (\lambda u.x))\ f)\ s\ z)$ (3). Then by beta reductions we see that the right hand side of (3) is Leibniz equal to the left hand side of (2). So by transitivity and symmetry of the equality we prove (2). Thus we exhibit the abstract term $t$. 
\end{proof}

\begin{lemma}[Predecessor]
\label{pre}
  $\cdot \vdash t: \forall m . (m \ep \mathsf{Nat} \to \mathsf{pred} (\mathsf{S} m) = m)$.
\end{lemma}
\begin{proof}
Since $\mathsf{pred} (\mathsf{S} m) \to_{\beta}^* \lambda f.\lambda x.(m\ (\lambda g.\lambda h. h\ (g\ f))\ (\lambda u.x))\ f$, by lemma \ref{inter}, we get what we want.  
\end{proof}
\begin{lemma}[Congruence of Equality]
\label{cong}
 There is a $t$ such that $\cdot \vdash t : \forall a. \forall b. \forall f .( a = b \to f\ a = f\ b)$.
\end{lemma}
\begin{proof}
  Assume $\Pi C. a \ep C \to b \ep C$ (i.e., $a = b$). Let $C := \iota x. f x \ep P$ with $P$ free. Instantiate $C$ in the 
assumption; we get $a \ep (\iota x. f x \ep P) \to b \ep (\iota x. f x \ep P)$. By conversion, 
we get $f\ a \ep P \to f\ b \ep P$. So by polymorphic generalization over $P$, we get $f\ a = f\ b$. Discharging the hypothesis and generalizing, we get what we want.
\end{proof}

\begin{theorem}
  $\cdot \vdash t: \forall n.\forall m. (n \ep \mathsf{Nat} \to m \ep \mathsf{Nat} \to \mathsf{S}m = \mathsf{S}n \to m = n)$. 
\end{theorem}
\begin{proof}
Assume $n \ep \mathsf{Nat}$, $m \ep \mathsf{Nat}$, and $\mathsf{S}m = \mathsf{S}n$; we want to show $m = n$. Instantiate $a$ with $\mathsf{S}m$, $b$ with $\mathsf{S}n$, and $f$ with $\mathsf{pred}$ in Lemma \ref{cong}. By modus ponens, we have $\mathsf{pred}(\mathsf{S}m) = \mathsf{pred}(\mathsf{S}n)$. Thus by Lemma \ref{pre}, we have $m = n$.
\end{proof}

\noindent \textbf{Remarks}: The proof of this theorem benefits greatly from the fact that we do not 
need to type the $\mathsf{pred}$ function (even though we may be able to type it, we do not need to).

\subsection{Scott Encoding}

\begin{definition}[Scott numerals]
  \noindent $\mathsf{Nat} := \iota x. \Pi C.(\forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C)) \to 0 \ep C  \to x \ep C$

\noindent $\mathsf{S} \ := \lambda n. \lambda s.\lambda z. s \ n$

\noindent $0\  := \lambda s. \lambda z.z$

\end{definition}

\noindent \textbf{Note}:  $0$ is typable to $\mathsf{Nat}$, but $\mathsf{S}$ is not typable to $\mathsf{Nat} \to \mathsf{Nat}$. Also note that the proof of $1 \ep \mathsf{Nat}$ is actually the Church numeral $1$! This explains why Church numerals are special: they are in a sense \textit{initial}, meaning that for any encoding of $\bar{n}$ and $\mathsf{Nat}$, as long as the definition of $\mathsf{Nat}$ has the same form as the Church encoding, the proof of $\bar{n} \ep \mathsf{Nat}$ will
be the Church numeral $n$. 
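The computational behavior of Scott numerals is easy to observe in untyped code. In the following Python sketch (the helper names \texttt{to\_int} and the use of host-language recursion are ours, purely for illustration), a Scott numeral performs one step of case analysis on itself: the first argument is the successor branch, which receives the predecessor, and the second is the zero branch.

```python
# Scott numerals: 0 := \s.\z. z,  S y := \s.\z. s y.
zero = lambda s: lambda z: z
suc = lambda y: lambda s: lambda z: s(y)

# A numeral is its own case-analysis principle: applying n to
# a successor branch and a zero branch performs one match step.
def to_int(n):
    return n(lambda p: 1 + to_int(p))(0)

print(to_int(zero))                 # 0
print(to_int(suc(suc(suc(zero)))))  # 3
```

Unlike a Church numeral, a Scott numeral does not iterate; it only exposes its top constructor, which is exactly the shape of a \textit{match}/\textit{case} expression.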


\begin{definition}[Induction]
\

\noindent  $\mathsf{Id} :  \Pi C. (\forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C)) \to 0 \ep C \to \forall m. (m \ep \mathsf{Nat} \to m \ep C)$

\noindent $\mathsf{Id} := \lambda s. \lambda z. \lambda n. n\ s\ z$

\noindent with $s:\forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C), z: 0 \ep C, n: m \ep \mathsf{Nat}$.
\end{definition}

\begin{theorem}
  $\cdot \vdash t : 0 \ep \mathsf{Nat}$.
\end{theorem}
\begin{proof}
  Obvious.
\end{proof}

\begin{theorem}

  $\cdot \vdash t: \forall m. (m \ep \mathsf{Nat} \to \mathsf{S}m \ep \mathsf{Nat})$.
\end{theorem}
\begin{proof}
  By induction. Let $P:= \iota x. \mathsf{S} x \ep \mathsf{Nat}$. Instantiate $C$ in $\mathsf{Id}$ with $P$; we get $\forall y . ( \mathsf{S}y \ep \mathsf{Nat} \to  \mathsf{S}(\mathsf{S} y)\ep \mathsf{Nat}) \to  0 \ep \mathsf{Nat} \to \forall m. (m \ep \mathsf{Nat} \to  \mathsf{S}m \ep \mathsf{Nat})$. So we just need to show $\forall y . ( \mathsf{S}y \ep \mathsf{Nat} \to  \mathsf{S}(\mathsf{S} y)\ep \mathsf{Nat})$ and $0 \ep \mathsf{Nat}$. The base case is immediate. To show the step case, assume $\mathsf{S}y \ep \mathsf{Nat}$ for any $y$; we need to show 
$\mathsf{S}(\mathsf{S} y)\ep \mathsf{Nat}$. By comprehension, we are assuming $\Pi C. (\forall y. ((y \ep C) \to (\mathsf{S}y) \ep C)) \to 0 \ep C \to (\mathsf{S}y) \ep C$ ($\dagger$), and we want to show $(\forall y. ((y \ep A) \to (\mathsf{S}y) \ep A)) \to 0 \ep A \to \mathsf{S}(\mathsf{S} y) \ep A$ for any $A$. Assume $\forall y. ((y \ep A) \to (\mathsf{S}y) \ep A)$ (1) and $0 \ep A$; we need to show $\mathsf{S}(\mathsf{S} y) \ep A$. Instantiate $C$ with $A$ in $\dagger$; we get $(\forall y. ((y \ep A) \to (\mathsf{S}y) \ep A)) \to 0 \ep A \to (\mathsf{S}y) \ep A$. By modus ponens, we get $(\mathsf{S}y) \ep A$. Instantiate $y$ in (1) with $\mathsf{S} y$; we get $\mathsf{S} y \ep A \to \mathsf{S}(\mathsf{S} y) \ep A$. Thus by modus ponens, we get $\mathsf{S}(\mathsf{S} y) \ep A$. Thus we exhibit such an abstract term $t$. 
\end{proof}

\begin{definition}[Recursive Equation]
  $\mathsf{add} :=  \lambda n. \lambda m.n\ (\lambda p. \mathsf{add}\ p\ (\mathsf{S} m))\ m$
\end{definition}

\noindent We know that the above recursive equation can be solved by a fixed point, but we do not 
bother to solve it. Instead, we treat it as a kind of built-in beta equality: whenever
we see an occurrence of $\mathsf{add}$, we unfold it one step. 
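Using the host language's own recursion in place of the fixed point, the recursive equation can be run directly on Scott numerals. A Python sketch (the helpers \texttt{to\_int} and \texttt{from\_int} are ours, purely for illustration):

```python
# Scott numerals: 0 := \s.\z. z,  S y := \s.\z. s y.
zero = lambda s: lambda z: z
suc = lambda y: lambda s: lambda z: s(y)

# add := \n.\m. n (\p. add p (S m)) m, with Python's own
# recursion standing in for the fixed point: the argument m
# accumulates a successor at each case-analysis step on n.
def add(n, m):
    return n(lambda p: add(p, suc(m)))(m)

def to_int(n):
    return n(lambda p: 1 + to_int(p))(0)

def from_int(k):
    return zero if k == 0 else suc(from_int(k - 1))

print(to_int(add(from_int(2), from_int(3))))  # 5
```

Each unfolding performs exactly one case analysis on the first argument, matching the one-step unfolding discipline described above.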

\begin{theorem}
There is a $t$ such that  $\cdot \vdash t : \forall n. (n \ep \mathsf{Nat} \to \mathsf{add}\ n\ 0 = n)$. 
\end{theorem}
\begin{proof}

We want to show $\forall n. (n \ep \mathsf{Nat} \to \mathsf{add}\ n\ 0 = n)$.
 Let $P := \iota x. \mathsf{add}\ x\ 0 = x$. Instantiate $C$ in $\mathsf{Id}$ with $P$; we get 
$\forall y . ( \mathsf{add}\ y\ 0 = y \to \mathsf{add}\ (\mathsf{S} y)\ 0 = \mathsf{S} y) \to\mathsf{add}\ 0\ 0 = 0 \to \forall m. (m \ep \mathsf{Nat} \to m \ep P)$. We just have to inhabit 
$\forall y . ( \mathsf{add}\ y\ 0 = y \to \mathsf{add}\ (\mathsf{S} y)\ 0 = \mathsf{S} y)$ and $\mathsf{add}\ 0\ 0 = 0$. For the base case, we want to show $\Pi C. \mathsf{add}\ 0\ 0 \ep C \to 0 \ep C$. Assume $\mathsf{add}\ 0\ 0 \ep C$; since $\mathsf{add}\ 0\ 0 \to_{\beta} 0$, by conversion we get $0 \ep C$. The step case is a bit more complicated: assume $\mathsf{add}\ y\ 0 = y$; we want to show $\mathsf{add}\ (\mathsf{S} y)\ 0 = \mathsf{S} y$. Since $\mathsf{add}\ y\ 0 \to_{\beta} y\ (\lambda p.\mathsf{add}\ p\ (\suc 0))\ 0$ and $\mathsf{add}\ (\mathsf{S}y)\ 0 \to_{\beta} \mathsf{add} \ y \ (\suc 0) \leftarrow_{\beta}^* \suc (\mathsf{add}\ y\ 0)$, Lemma \ref{cong} gives us the result. 
\end{proof}

\begin{definition}
  $\mathsf{pred} := \lambda n.n\ (\lambda p.p)\ 0$
\end{definition}
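In contrast with the Kleene predecessor for Church numerals, the Scott predecessor is a single case analysis and runs in constant time. A Python sketch (the helper name \texttt{to\_int} is ours, purely for illustration):

```python
# Scott numerals: 0 := \s.\z. z,  S y := \s.\z. s y.
zero = lambda s: lambda z: z
suc = lambda y: lambda s: lambda z: s(y)

# pred := \n. n (\p. p) 0 : the successor branch simply returns
# the predecessor it is handed; one match step, constant time.
pred = lambda n: n(lambda p: p)(zero)

def to_int(n):
    return n(lambda p: 1 + to_int(p))(0)

print(to_int(pred(suc(suc(zero)))))  # 1
print(to_int(pred(zero)))            # 0
```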


\subsection{Dependent Product}

We extend $\mathfrak{G}$ with three new type constructs, $\Pi x:T.T'$, $T\ t$, and $\lambda x.T$, and replace the \textit{Func} and \textit{App} rules with the following two new typing rules: 

\

\begin{tabular}{ll}
\infer[\textit{Indx}]{\Gamma \vdash \lambda x.t : \Pi x: T_1.T_2}
{\Gamma, x:T_1 \vdash t: T_2}

&
\infer[\textit{App}]{\Gamma \vdash t t':[t'/x]T_2}{\Gamma
\vdash t: \Pi x:T_1.T_2 & \Gamma \vdash t': T_1}
  
\end{tabular}

\

\noindent And we need another type level reduction rule:

\infer{(\lambda x.T)t \to_{\beta} [t/x]T}{}
 
\noindent We also write $T_1 \to T_2$ for $\Pi x:T_1.T_2$ when $x \notin \mathsf{FV}(T_2)$. 

\noindent \textbf{Remarks}
\begin{itemize}
\item  We want to investigate the indexed product because we want to see whether it is possible to obtain formula-set reciprocity for the vector datatype, which is canonical in dependently typed programming languages. In this section we take natural numbers to be Church numerals.

\item We want to distinguish three kinds of quantification: $\forall x.T$,  $\Pi x:T_1.T_2$, and $\forall x. x \ep T_1 \to T_2$. The first is the strongest in the sense that it quantifies over all terms; the second quantifies over all terms of type $T_1$; the third quantifies over the terms that have the self type $T_1$. 
\end{itemize}

\

\begin{definition}[Vector]
\

\noindent  $\mathsf{vec}(U, n) := \iota x. \Pi C. (\forall y. (\Pi m: \mathsf{Nat}. \Pi u:U. y \ep C m \to (\mathsf{cons}\ m\ u\ y) \ep C (\mathsf{S}m))) \to \mathsf{nil} \ep C 0 \to x \ep C n$

\noindent $\nil := \lambda y. \lambda x.x : \vecc(U, 0)$

\noindent $\cons := \lambda n.\lambda v. \lambda l. \lambda y. \lambda x.y \ n\ v\ (l \ y\ x) : \Pi n: \mathsf{Nat}.U \to \vecc (U, n) \to \vecc (U, \suc n)$.

\noindent where $n: \mathsf{Nat}, v: U, l: \vecc (U, n), y:\forall y. (\Pi m: \mathsf{Nat}. \Pi u:U. y \ep C m \to (\mathsf{cons}\ m\ u\ y) \ep C (\mathsf{S}m)), x: \nil \ep C 0$. 


\end{definition}

\begin{proof}
\noindent It is easy to see that $\nil$ is typable to $\vecc (U, 0)$. Now we show how $\cons$ is typable to $\Pi n: \mathsf{Nat}.U \to \vecc (U, n) \to \vecc (U, \suc n)$. We can see that $l\ y\ x: l \ep C n$. After the instantiation, the type of $y \ n\ v$ is $l \ep C n \to (\mathsf{cons}\ n\ v\ l) \ep C (\mathsf{S}n)$. So $y\ n\ v \ (l\ y\ x): (\mathsf{cons}\ n\ v\ l) \ep C (\mathsf{S}n)$. So $\lambda y. \lambda x. y\ n\ v \ (l\ y\ x) : \Pi C. (\forall y. (\Pi m: \mathsf{Nat}. \Pi u:U. y \ep C m \to (\mathsf{cons}\ m\ u\ y) \ep C (\mathsf{S}m))) \to  \nil \ep C 0 \to  \lambda y. \lambda x. y\ n\ v \ (l\ y\ x) \ep C(\suc n)$. So $\lambda y. \lambda x. y\ n\ v \ (l\ y\ x) : \vecc (U, \suc n)$, and thus $\cons : \Pi n:\mathsf{Nat}. U \to \vecc(U, n) \to \vecc(U, \suc n)$.
  
\end{proof}


\noindent The above development suggests that dependent types can be included in the framework
of System $\mathfrak{G}$: we just need to extend the erasure function with $F(\Pi x:T_1.T_2) := F(T_1) \to F(T_2)$, $F(T\ t) := F(T)$, and $F(\lambda x.T) := F(T)$, and then we can still translate back to System \textbf{F}. More importantly, our vector encoding has the formula-set reciprocity; this is a highly desirable property that 
enables us to do dependent programming effectively in $\mathfrak{G}$. 

\begin{definition}[Induction Principle]
\

  \noindent  $\mathsf{ID}(U, n) : \Pi C. (\forall y. (\Pi m: \mathsf{Nat}. \Pi u:U. y \ep C m \to (\mathsf{cons}\ m\ u\ y) \ep C (\mathsf{S}m))) \to \mathsf{nil} \ep C 0 \to \forall x. (x \ep \vecc(U,n) \to x \ep C n)$

\noindent $\mathsf{ID}(U,n) := \lambda s. \lambda z. \lambda n. n\ s\ z$

\noindent Let $s : (\forall y. (\Pi m: \mathsf{Nat}. \Pi u:U. y \ep C m \to (\mathsf{cons}\ m\ u\ y) \ep C (\mathsf{S}m))), z: \mathsf{nil} \ep C 0, n: x \ep \vecc(U,n)$
\end{definition}

\begin{definition}[append]
\

\noindent $\app := \lambda n_1. \lambda n_2. \lambda l_1. \lambda l_2. l_1\ (\lambda n. \lambda x.\lambda v. \cons  (n+n_2)\ x\ v)\ l_2$.

\noindent By $n+n_2$ we mean $\mathsf{add}\ n\ n_2$. We can also define append
by induction:

\noindent $\app := \lambda n_1. \lambda n_2. \mathsf{ID}(U, n_1) (\lambda n. \lambda x.\lambda v. \cons  (n+n_2)\ x\ v)\ l_2 \ l_1$. 
\end{definition}

\begin{proof}
  We want to show $\app : \Pi n_1:\mathsf{Nat}. \Pi n_2:\mathsf{Nat}. \vecc(U, n_1) \to \vecc(U, n_2) \to \vecc(U, n_1+n_2)$. Observe that $\lambda n. \lambda x.\lambda v. \cons  (n+n_2)\ x\ v: \Pi n:\mathsf{Nat}. \Pi x:U. v \ep \vecc(U, n+n_2) \to \vecc(U, n+n_2+1)$. We instantiate $C :=  \lambda y.(\iota x.\vecc(U, y + n_2))$, where $x$ is fresh for $\vecc(U, y + n_2)$, in $\mathsf{ID}(U, n_1)$; by comprehension and beta reductions, we get $\mathsf{ID}(U, n_1) : \forall y. (\Pi m: \mathsf{Nat}. \Pi u:U.  \vecc(U, m+n_2) \to  \vecc (U, \mathsf{S}m+n_2)) \to \vecc(U, 0+n_2)  \to \forall x. (x \ep \vecc(U,n_1) \to  \vecc(U, n_1+n_2))$. So $\mathsf{ID}(U, n_1) \ (\lambda n. \lambda x.\lambda v. \cons  (n+n_2)\ x\ v) : \vecc(U, 0+n_2)  \to \forall x. (x \ep \vecc(U,n_1) \to  \vecc(U, n_1+n_2))$. Of course we assume $l_1: \vecc(U, n_1)$ and $l_2:\vecc(U, n_2)$, so $\mathsf{ID}(U, n_1) \ (\lambda n. \lambda x.\lambda v. \cons  (n+n_2)\ x\ v) \ l_2 \ l_1: \vecc(U, n_1+n_2)$. 
\end{proof}
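The first definition of $\app$ can be executed on the untyped encoding directly. In the following Python sketch (the helpers \texttt{to\_list} and \texttt{from\_list} are ours, purely for illustration), the fold built into a vector rebuilds $l_1$, with its indices shifted by $n_2$, on top of $l_2$; the length arguments just go along for the ride.

```python
# Fold-style vectors, as in the definition:
# nil := \y.\x. x,  cons := \n.\v.\l.\y.\x. y n v (l y x),
# where in cons n v l, n is the length of the tail l.
nil = lambda y: lambda x: x
cons = lambda n: lambda v: lambda l: lambda y: lambda x: y(n)(v)(l(y)(x))

# app := \n1.\n2.\l1.\l2. l1 (\n.\x.\v. cons (n+n2) x v) l2
app = lambda n1: lambda n2: lambda l1: lambda l2: \
    l1(lambda n: lambda x: lambda v: cons(n + n2)(x)(v))(l2)

# Conversions for inspection (not part of the encoding).
def from_list(xs):
    if not xs:
        return nil
    return cons(len(xs) - 1)(xs[0])(from_list(xs[1:]))

to_list = lambda l: l(lambda n: lambda u: lambda r: [u] + r)([])

v1, v2, v3 = from_list("ab"), from_list("cd"), from_list("e")
# Both associations of append yield the same vector of length 5.
print(to_list(app(2)(3)(v1)(app(2)(1)(v2)(v3))))
print(to_list(app(4)(1)(app(2)(2)(v1)(v2))(v3)))
```

Running both lines prints the five-element list \texttt{['a', 'b', 'c', 'd', 'e']}, illustrating concretely the associativity theorem proved below.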

\begin{theorem}[Associativity]
  $\cdot \vdash t: \forall n_1.\forall n_2.\forall n_3.\forall v_1.\forall v_2.\forall v_3. (n_1 \ep \mathsf{Nat} \to n_2 \ep \mathsf{Nat} \to n_3 \ep \mathsf{Nat} \to v_1 \ep \vecc(U, n_1) \to v_2 \ep \vecc(U, n_2) \to v_3 \ep \vecc(U, n_3) \to \app \ n_1\ (n_2+n_3)\ v_1 \ (\app\ n_2\ n_3 \ v_2 \ v_3) = \app \ (n_1 + n_2) \ n_3\ (\app \ n_1 \ n_2 \ v_1 \ v_2) \ v_3)$
\end{theorem}

\begin{proof}
  Assume $x_1: n_1 \ep \mathsf{Nat}, x_2: n_2 \ep \mathsf{Nat}, x_3: n_3 \ep \mathsf{Nat}, y_2: v_2 \ep \vecc(U, n_2), y_3: v_3 \ep \vecc(U, n_3)$. We want to show $\forall v_1. (v_1 \ep \vecc(U, n_1) \to \app \ n_1\ (n_2+n_3)\ v_1 \ (\app\ n_2\ n_3 \ v_2 \ v_3) = \app \ (n_1 + n_2) \ n_3\ (\app \ n_1 \ n_2 \ v_1 \ v_2) \ v_3)$. Let $P:= \lambda z.\iota y. (\app \ z\ (n_2+n_3)\ y \ (\app\ n_2\ n_3 \ v_2 \ v_3) = \app \ (z + n_2) \ n_3\ (\app \ z \ n_2 \ y \ v_2) \ v_3)$. We instantiate the $C$ in $\mathsf{ID}(U,n_1)$ with $P$; we have $\mathsf{ID}(U,n_1):  (\forall y. (\Pi m: \mathsf{Nat}. \Pi u:U. y \ep P m \to (\mathsf{cons}\ m\ u\ y) \ep P (\mathsf{S}m))) \to \mathsf{nil} \ep P 0 \to \forall x. (x \ep \vecc(U,n_1) \to x \ep P n_1)$. So we just need to prove the base case, $\app \ 0\ (n_2+n_3)\ \nil \ (\app\ n_2\ n_3 \ v_2 \ v_3) = \app \ (0 + n_2) \ n_3\ (\app \ 0 \ n_2 \ \nil \ v_2) \ v_3$, and the step case, $\Pi m: \mathsf{Nat}. \Pi u:U.  (\app \ m\ (n_2+n_3)\ y \ (\app\ n_2\ n_3 \ v_2 \ v_3) = \app \ (m + n_2) \ n_3\ (\app \ m \ n_2 \ y \ v_2) \ v_3)\to (\app \ \suc m\ (n_2+n_3)\ (\mathsf{cons}\ m\ u\ y) \ (\app\ n_2\ n_3 \ v_2 \ v_3) = \app \ (\suc m + n_2) \ n_3\ (\app \ \suc m \ n_2 \ (\mathsf{cons}\ m\ u\ y) \ v_2) \ v_3)$. For the base case, $\app \ 0\ (n_2+n_3)\ \nil \ (\app\ n_2\ n_3 \ v_2 \ v_3) \to_{\beta}^* \app\ n_2\ n_3 \ v_2 \ v_3 \leftarrow_{\beta}^* \app \ (0 + n_2) \ n_3\ (\app \ 0 \ n_2 \ \nil \ v_2) \ v_3$. For the step case, we assume $\app \ m\ (n_2+n_3)\ y \ (\app\ n_2\ n_3 \ v_2 \ v_3) = \app \ (m + n_2) \ n_3\ (\app \ m \ n_2 \ y \ v_2) \ v_3$ (IH); we want to show $\app \ \suc m\ (n_2+n_3)\ (\mathsf{cons}\ m\ u\ y) \ (\app\ n_2\ n_3 \ v_2 \ v_3) = \app \ (\suc m + n_2) \ n_3\ (\app \ \suc m \ n_2 \ (\mathsf{cons}\ m\ u\ y) \ v_2) \ v_3$ (Goal). 
We know that $\app \ \suc m\ (n_2+n_3)\ (\mathsf{cons}\ m\ u\ y) \ (\app\ n_2\ n_3 \ v_2 \ v_3) \to_{\beta}^* \cons\ (m+n_2+n_3)\ u \ (y\ \mathcal{X}\ (\app\ n_2\ n_3 \ v_2 \ v_3))$, where $\mathcal{X}:= \lambda n. \lambda x.\lambda v. \cons  (n+n_2+n_3)\ x\ v$. The left hand side of (IH) can be beta reduced to $(y\ \mathcal{X}\ (\app\ n_2\ n_3 \ v_2 \ v_3))$. The right hand side of the (Goal) can be reduced to $\app \ (\suc m + n_2) \ n_3 (\cons\ (m+n_2)\ u \ (y\ \mathcal{C}\ v_2)) v_3 \to_{\beta}^* \cons \ (m+n_2+n_3) \ u \ ((y\ \mathcal{C}\ v_2) \ \mathcal{Q}\ v_3)$, where $\mathcal{C}:= \lambda n. \lambda x.\lambda v. \cons  (n+n_2)\ x\ v, \mathcal{Q}:= \lambda n. \lambda x.\lambda v. \cons  (n+n_3)\ x\ v$. The right hand side of (IH) can be reduced to $((y\ \mathcal{C}\ v_2) \ \mathcal{Q}\ v_3)$. 
So (IH) can be simplified to $y\ \mathcal{X}\ (\app\ n_2\ n_3 \ v_2 \ v_3) = (y\ \mathcal{C}\ v_2) \ \mathcal{Q}\ v_3$. Congruence with $f:= \cons \ (m+n_2+n_3) \ u$ gives us (Goal). 
\end{proof}

\section{Developments inside System $\mathfrak{G}$}

\end{document}
