\documentclass{article} 
\usepackage{url} 
\usepackage{hyperref}
\usepackage{stmaryrd}
\usepackage{manfnt}
\usepackage{fullpage}
\usepackage{proof}
\usepackage{savesym}
\usepackage{amssymb} 
%% \savesymbol{mathfrak}
%\usepackage{MnSymbol} Overall mnsymbol is depressing.
%\restoresymbol{MN}{mathfrak}
\usepackage{xcolor} 
%\usepackage{mathrsfs}
\usepackage{amsmath, amsthm}
%\usepackage{diagrams}
\makeatletter
\newsavebox{\@brx}
\newcommand{\llangle}[1][]{\savebox{\@brx}{\(\m@th{#1\langle}\)}%
  \mathopen{\copy\@brx\kern-0.6\wd\@brx\usebox{\@brx}}}
\newcommand{\rrangle}[1][]{\savebox{\@brx}{\(\m@th{#1\rangle}\)}%
  \mathclose{\copy\@brx\kern-0.6\wd\@brx\usebox{\@brx}}}
\makeatother


\newcommand{\frank}[1]{\textcolor{blue}{\textbf{[#1 --Frank]}}}
% My own macros
\newcommand{\m}[2]{ \{\mu_{#1}\}_{#1 \in #2}} 
\newcommand{\M}[3]{\{#1_i \mapsto #2_i\}_{i \in #3}} 
\newcommand{\bm}[4]{
\{(#1_i:#2_i) \mapsto #3_i\}_{i \in #4}} 

\newcommand{\mlstep}[1]{\twoheadrightarrow_{\underline{#1}}}
\newcommand{\lstep}[1]{\to_{\underline{#1}}}
\newcommand{\mstep}[1]{\twoheadrightarrow_{#1}}
\newcommand{\ep}[0]{\epsilon} 
\newcommand{\nil}[0]{\mathsf{nil}} 
\newcommand{\cons}[0]{\mathsf{cons}} 
\newcommand{\vecc}[0]{\mathsf{vec}} 
\newcommand{\suc}[0]{\mathsf{S}} 
\newcommand{\app}[0]{\mathsf{app}} 
\newcommand{\interp}[1]{\llbracket #1 \rrbracket} 
\newcommand{\intern}[1]{\llangle #1 \rrangle} 
\newcommand*\template[1]{\(\langle\)#1\(\rangle\)}
%% \newarrowfiller{dasheq} {==}{==}{==}{==}
%% \newarrow {Mapsto} |--->
%% \newarrow {Line} -----
%% \newarrow {Implies} ===={=>}
%% \newarrow {EImplies} {}{dasheq}{}{dasheq}{=>}
%% \newarrow {Onto} ----{>>}
%% \newarrow {Dashto}{}{dash}{}{dash}{>}
%% \newarrow {Dashtoo}{}{dash}{}{dash}{>>}

\newtheorem{prop}{Proposition}
\newtheorem{definition}{Definition}
\newtheorem{corollary}{Corollary}
\newtheorem{lemma}{Lemma}
\newtheorem{theorem}{Theorem}


\begin{document}
%\pagestyle{empty}
\title{System $\mathfrak{G}$}
\author{Peng Fu \\
Computer Science, The University of Iowa}
\date{Last edited: \today}


\maketitle \thispagestyle{empty}


\section{System $\mathfrak{G}$}
\begin{definition}
\

\noindent \textit{Formula/Type} $T \ ::= \  X^0 \ | \ t \ep S \ | \ \Pi X^1.T \ | \ \ T_1 \to T_2 \ | \ \forall x.T \ | \ \Pi X^0.T$ 

\noindent \textit{Set/Objects} $S \ ::= X^1 \ | \ \iota x.T$

%% \noindent \textit{Morphism} $M \ ::= t \ep S \ | \ \forall x.(x\ep S \to M)$

\noindent \textit{Proof Terms/Domain Terms/Pure Lambda Terms} $t \ :: = \ x \ | \ \lambda x.t \ | \ t t'$

%\noindent \textit{Proof Terms} $p \ ::= \ a \ | \ \lambda a .p \ | \ p p'$

\noindent \textit{Context} $\Gamma \ :: = \ \cdot \ | \ \Gamma, x:T$

%\noindent \textit{Records} $\Delta \ :: = \ \cdot \ | \ \Delta, a: x \ep S$

\end{definition} 

$X^0$ is a predicate variable of arity $0$ (namely, the type variable of System \textbf{F}), while $X^1$ is a predicate variable of
arity $1$; it is essentially a variable that describes a set. $\iota x.T$ is the set-forming abstraction: it allows one to form a set out of a formula. Unlike $\mathfrak{G}_0$, we separate
the notions of set and formula; they are no longer the same thing in $\mathfrak{G}$. Sets can only
occur inside a formula; they have no rules or identity of their own outside a formula.
So logically, the formulas are precisely the second-order formulas \`a la Takeuti; the only difference is that we replace the number domain with the lambda calculus.


\begin{definition}[Typing Rules]
\

\footnotesize{
\begin{tabular}{lll}
    
\infer[\textit{Var}]{\Gamma \vdash x:T}{(x:T) \in \Gamma}

&
\infer[\textit{Conv}]{\Gamma \vdash t : T_2}{\Gamma \vdash t:
T_1 &  T_1 \cong T_2}

&

\infer[\textit{Forall}]{\Gamma \vdash t : \forall x.T}
{\Gamma \vdash t: T &  x \notin \mathsf{FV}(\Gamma)}

\\
\\
\infer[\textit{Instantiate}]{\Gamma \vdash t :[t'/x]T}{\Gamma
\vdash t: \forall x.T}
&

\infer[\textit{Poly}]{\Gamma \vdash  t :\Pi X^i.T}
{\Gamma \vdash t: T & X^i \notin \mathsf{FV}(\Gamma) & i= 0,1}

&
\infer[\textit{Inst0}]{\Gamma \vdash t:[T'/X^0]T}{\Gamma \vdash t: \Pi X^0.T}

\\
\\

\infer[\textit{Func}]{\Gamma \vdash \lambda x.t : T_1\to T_2}
{\Gamma, x:T_1 \vdash t: T_2}

&

\infer[\textit{App}]{\Gamma \vdash t t':T_2}{\Gamma
\vdash t: T_1 \to T_2 & \Gamma \vdash t': T_1}

&


\infer[\textit{Inst1}]{\Gamma \vdash t:[S/X^1]T}{\Gamma \vdash t: \Pi X^1.T}

\end{tabular}
}
\end{definition}

\noindent \textbf{Note}: $\cong$ is defined as the reflexive, transitive, and symmetric closure of
$\to_{\beta}\cup \to_{\iota}$.

\begin{definition}[Functional Extensionality and Comprehension]

\


\begin{tabular}{ll}

\infer{(\lambda x.t)t' \to_{\beta} [t'/x]t}{}

&

\infer{t \ep (\iota x.T) \to_{\iota} [t/x]T}{}

\end{tabular}
  
\end{definition}

For the rule Poly, there is no surprise there. Inst0 allows us to instantiate $X^0$ with
any formula; this is the instantiation of System \textbf{F}. Inst1 allows us to instantiate a set variable $X^1$ with a set $S$. We have a comprehension scheme and beta reduction similar to those of $\mathfrak{G}_0$. Perhaps the biggest change from $\mathfrak{G}_0$ to $\mathfrak{G}$ is that we do not have formula-set reciprocity in $\mathfrak{G}$. As we mentioned before, we
want to justify the reciprocity principle without reciprocity.

\begin{definition}[Internal Functional Language]
\

\noindent Internal Types $U\ := X^1 \ | \ \iota x.Q \ | \ \Pi x:U.U \ | \ \Delta X^1.U$.

\noindent Internal Formula $Q \ := X^0 \ | \ t \ep U \ | \ \Pi X^0.Q \ | \ Q \to Q' \ | \ \forall x.Q \ | \  \Pi X^1. Q$

\noindent Internal Context $\Psi\ :=  \ \cdot \ | \ \Psi, x:U$.

\end{definition}

The concepts behind the internal types are similar to what we already know about dependent types and polymorphic types in functional programming languages. We intend to \textit{interpret} an internal type $U$ as a set $S$ in $\mathfrak{G}$. Be aware that we are not trying to give a \textit{set-theoretic} model for polymorphism\footnote{And we are never going to do that in this thesis, and not only because of Reynolds's results; even if polymorphism \textit{had} a set-theoretic model, see Girard's blind spot for the reasons.}, which has been shown impossible by Reynolds. The reason we add internal formulas is that we want $[U'/X^1]U$ to be well-defined, namely, that $[U'/X^1]U$ is still a well-formed internal type.

In fact, we will show that every internal type $U$ corresponds to a set $S$ and vice versa. We call the process of transforming $S$ to $U$ \textit{internalization}, and that of transforming $U$ to $S$ \textit{externalization}.
   

\begin{definition}[Internalization]
\

  $\intern{\cdot}$ is a mapping from sets to internal types and from formulas to internal formulas.

  $\intern{X^1} := X^1$

  $\intern{\iota f. \forall x. (x \ep S' \to f\ x \ep S)} := \Pi x:\intern{S'}.\intern{S}$, where $f$ is fresh.

  $\intern{\iota x. (\Pi X^1. x \ep S)} := \Delta X^1.\intern{S}$, where $x$ is fresh.

  $\intern{\iota x.T} := \iota x.\intern{T}$

  $\intern{X^0} := X^0$

  $\intern{t\ep S} := t \ep \intern{S}$

  $\intern{T \to T'} := \intern{T} \to \intern{T'}$

  $\intern{\Pi X^i.T} := \Pi X^i.\intern{T}$.

  $\intern{\forall x.T} := \forall x.\intern{T}$.

  $\intern{x:x\ep S, \Gamma} := x: \intern{S}, \intern{\Gamma}$


\end{definition}

The interesting cases are the internalization of $\iota f. \forall x. (x \ep S' \to f\ x \ep S)$ and of $\iota x. (\Pi X^1. x \ep S)$. The first case \textit{expresses} the set of total functions $f$ from $S'$ to $S$, while allowing $S$ to be indexed by $x$. The second case simply describes a polymorphic set, keeping $X^1$ parameterized.
  
%% \noindent Note that for any $x:y \ep S \in \Gamma$ where $\Gamma \vdash t: t \ep S'$, we can rename to $x: x \ep S \in \Gamma$, with $\Gamma \vdash \underline{t} : \underline{t} \ep \underline{S'}$, then one can apply following internalization function to go to the internal world. 

\begin{definition}[Internal Typing]
\

\begin{tabular}{lll}
\infer[toSet]{\intern{\Gamma} \Vdash t: \intern{S}}{\Gamma \vdash t:t\ep S}

&    
\infer{\Psi \Vdash x : U}{x:U \in \Psi }

&

\infer{\Psi \Vdash \lambda x.t : \Pi x:U. U'}
{\Psi, x: U \Vdash t : U'}


\\
\\

\infer{\Psi \Vdash t : \Delta X^1. U}
{\Psi \Vdash t : U & X^1 \notin FV(\Psi)}

&

\infer{\Psi \Vdash t : [U'/X] U}
{\Psi \Vdash t : \Delta X^1.U}

&
\infer{\Psi \Vdash t t' :[t'/x]U}{\Psi
\Vdash t:  \Pi x: U'.U & \Psi \Vdash t': U'}


%% \infer{\Delta \vdash \lambda y.[t/x]t':S_1 \longrightarrow S_3}{\Delta
%% \vdash \lambda y.t: S_1 \longrightarrow S_2 & \Delta \vdash \lambda x.t': S_2 \longrightarrow S_3}

\end{tabular}
\end{definition}

The toSet rule above transforms a judgement $\Gamma \vdash t:T$ in $\mathfrak{G}$ into its internal language. The other rules are intuitively clear; they correspond to functional programming
concepts such as dependent products and polymorphism.

It is not good enough just to move into the internal language; one also wants to go back to $\mathfrak{G}$ at will. Thus we have the following externalization process.

\begin{definition}[Externalization]
\

  $\interp{\cdot}$ is a mapping from internal types to sets and from internal formulas to formulas.

  $\interp{X^1} := X^1$

  $\interp{\iota x.Q} := \iota x.\interp{Q}$

  $\interp{\Pi x:U'.U} := \iota f. \forall x. (x \ep \interp{U'} \to f\ x \ep \interp{U})$, where $f$ is fresh.

  $\interp{\Delta X^1.U} := \iota x. (\Pi X^1. x \ep \interp{U})$, where $x$ is fresh.

  $\interp{X^0} := X^0$

  $\interp{t\ep U} := t \ep \interp{U}$

  $\interp{Q \to Q'} := \interp{Q} \to \interp{Q'}$

  $\interp{\Pi X^i.Q} := \Pi X^i.\interp{Q}$.

  $\interp{\forall x.Q} := \forall x.\interp{Q}$.

  $\interp{x:U, \Psi} := x: x\ep \interp{U}, \interp{\Psi}$
\end{definition}

One can immediately see that externalization is exactly the inverse of internalization. So understanding one of these two concepts is enough; they are in a sense isomorphic. We can see the isomorphism in the following lemma.
\begin{lemma}
\label{id}
  $\interp{\intern{S}} = S$ and $\intern{\interp{U}} = U$.
\end{lemma}

\begin{proof}
By induction. 
\end{proof}
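To make the two translations concrete, here is a small Python sketch that represents internal types and sets as tagged tuples. The representation and all tag names are our own invention, not part of the system; only the two interesting clauses of each direction are inverted explicitly, with $\iota x.T$ handled by a default clause. It checks the round-trip property $\intern{\interp{U}} = U$ of the lemma above on a sample type.

```python
# Hypothetical AST encoding (ours, not the paper's):
# internal types: ("tvar", X) | ("pi", x, U', U) | ("delta", X, U) | ("iota", x, Q)
# sets:           ("svar", X) | ("siota", x, T)

def externalize(u):
    """[[.]] : internal types -> sets."""
    tag = u[0]
    if tag == "tvar":                 # [[X^1]] := X^1
        return ("svar", u[1])
    if tag == "iota":                 # [[iota x.Q]] := iota x.[[Q]] (Q kept abstract here)
        return ("siota", u[1], u[2])
    if tag == "pi":                   # [[Pi x:U'.U]] := iota f. forall x.(x eps [[U']] -> f x eps [[U]])
        _, x, u1, u2 = u
        return ("siota", "f",
                ("forall", x,
                 ("impl", ("mem", x, externalize(u1)),
                          ("mem", ("app", "f", x), externalize(u2)))))
    if tag == "delta":                # [[Delta X.U]] := iota x. Pi X. x eps [[U]]
        _, X, u1 = u
        return ("siota", "x", ("PiX", X, ("mem", "x", externalize(u1))))

def internalize(s):
    """<.> : sets -> internal types, inverting the two interesting clauses."""
    if s[0] == "svar":
        return ("tvar", s[1])
    _, v, body = s                    # s = ("siota", v, body)
    if body[0] == "forall":           # iota f. forall x.(x eps S' -> f x eps S)
        _, x, impl = body
        if impl[0] == "impl":
            _, p, q = impl
            if p[0] == "mem" and p[1] == x and \
               q[0] == "mem" and q[1] == ("app", v, x):
                return ("pi", x, internalize(p[2]), internalize(q[2]))
    if body[0] == "PiX":              # iota x. Pi X. x eps S
        _, X, m = body
        if m[0] == "mem" and m[1] == v:
            return ("delta", X, internalize(m[2]))
    return ("iota", v, body)          # default clause: <iota x.T> := iota x.T

# Sample internal type: Pi x:C. Delta X. X
U = ("pi", "x", ("tvar", "C"), ("delta", "X", ("tvar", "X")))
```

Running `internalize(externalize(U))` on the sample type returns `U` unchanged, which is the second half of the lemma on this fragment.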
\begin{lemma}
\label{subterm}
$[t'/x]\interp{U} = \interp{[t'/x]U}$ and $[\interp{U'}/X] \interp{U} = \interp{[U'/X]U}$.
\end{lemma}
\begin{proof}
  By induction on the structure of $U$.
\end{proof}

The following theorem gives us the ability to go back to $\mathfrak{G}$ from its internal language. More specifically, it shows that the following rule (corresponding to the toFormula rule in Section \ref{gnull}) is admissible:

\

\infer[toFormula]{\interp{\Psi} \vdash t: t\ep \interp{U}}{\Psi \Vdash t: U} 

\

\noindent We call the process of internalization and externalization in $\mathfrak{G}$ \textit{reciprocity}.
 
\begin{theorem}[Externalization]
\label{ext}
  If $\Psi \Vdash t: U$, then $\interp{\Psi} \vdash t: t\ep \interp{U}$.
\end{theorem}
\begin{proof}
\noindent  By induction on the derivation. 

\noindent \textbf{Base Case}:

\

\infer{\Psi \Vdash x : U}{x:U \in \Psi }

\

\noindent $\interp{\Psi} \vdash x : x \ep \interp{U}$, since $x: x\ep \interp{U} \in \interp{\Psi}$.

\

\noindent \textbf{Base Case}:

\

\infer{\intern{\Gamma} \Vdash t: \intern{S}}{\Gamma \vdash t:t\ep S}

\

\noindent By lemma \ref{id}.

\

\noindent \textbf{Step Case}:

\

\infer{\Psi \Vdash \lambda x.t : \Pi x:U. U'}
{\Psi, x: U \Vdash t : U'}

\

\noindent By induction, we have $\interp{\Psi}, x:x\ep \interp{U} \vdash t : t \ep \interp{U'}$.
So $\interp{\Psi} \vdash \lambda x.t : x\ep \interp{U} \to t \ep \interp{U'}$, and then by the Forall
rule, we have $\interp{\Psi} \vdash \lambda x.t : \forall x.(x\ep \interp{U} \to t \ep \interp{U'})$. By the comprehension rule and beta reduction, we get $\interp{\Psi} \vdash \lambda x.t : \lambda x.t \ep \iota f.\forall x.(x\ep \interp{U} \to f \ x \ep \interp{U'})$. Since $\interp{\Pi x:U.U'} := \iota f. \forall x. (x \ep \interp{U} \to f\ x \ep \interp{U'})$, this case holds.

\

\noindent \textbf{Step Case}:

\

\infer{\Psi \Vdash t t' :[t'/x]U}{\Psi
\Vdash t:  \Pi x: U'.U & \Psi \Vdash t': U'}

\

\noindent By induction, we have $\interp{\Psi} \vdash t:t \ep \iota f. \forall x. (x \ep \interp{U'} \to f\ x \ep \interp{U})$ and $ \interp{\Psi} \vdash t':t'\ep \interp{U'}$. By comprehension, we have $\interp{\Psi} \vdash t : \forall x. (x \ep \interp{U'} \to t\ x \ep \interp{U})$. Instantiate $x$ with $t'$, we have $\interp{\Psi} \vdash t: t' \ep \interp{U'} \to t\ t' \ep [t'/x] \interp{U}$. So by App rule, we have $\interp{\Psi} \vdash t t': t t' \ep [t'/x]\interp{U}$. By lemma \ref{subterm}, we know that $[t'/x]\interp{U} = \interp{[t'/x]U}$. So $\interp{\Psi} \vdash t t': t t' \ep \interp{[t'/x]U}$.

\


\noindent \textbf{Step Case}:

\

\infer{\Psi \Vdash t : \Delta X^1. U}
{\Psi \Vdash t : U & X^1 \notin FV(\Psi)}

\

\noindent By induction, one has $\interp{\Psi} \vdash t : t \ep \interp{U}$. So one has 
$\interp{\Psi} \vdash t : \Pi X^1. t \ep \interp{U}$. So by comprehension, one has $\interp{\Psi} \vdash t : t\ep \iota x. \Pi X^1. x \ep \interp{U}$. 

\

\noindent \textbf{Step Case}:

\


\infer{\Psi \Vdash t : [U'/X] U}
{\Psi \Vdash t : \Delta X^1.U}

\

\noindent By induction, one has $\interp{\Psi} \vdash t: t \ep \iota x. \Pi X^1. x \ep \interp{U}$. By comprehension, we have $\interp{\Psi} \vdash t: \Pi X^1. t \ep \interp{U}$. So by instantiation, we have $\interp{\Psi} \vdash t: t \ep [\interp{U'}/X] \interp{U}$. By lemma \ref{subterm}, we know $[\interp{U'}/X] \interp{U} = \interp{[U'/X]U}$, so this case holds.
\end{proof}

\section{Generalized Reciprocity}

In our previous development of the internal language for $\mathfrak{G}$, one needs a judgement
of the form $\Gamma \vdash t:t\ep S$, where the proof term coincides with the term $t$ in $t\ep S$, to enter the internal world; this means only Church numerals have this privilege. Now we show how, in general, one can exploit a notion of reciprocity without being confined to the Church encoding. We
will relax the requirement for entering the internal world to $\Gamma \vdash t':t\ep S$, where $t$ and $t'$ are not necessarily the same. First we need only slightly modify the previous
definitions.

\begin{definition}[Modifications]
\

\noindent Internal Context $\Psi\ :=  \ \cdot \ | \ \Psi, x: t \ep U$.

\noindent  $\interp{x:t \ep U, \Psi} := x: t\ep \interp{U}, \interp{\Psi}$

\noindent  $\intern{x:t\ep S, \Gamma} := x: t\ep \intern{S}, \intern{\Gamma}$
\end{definition}

\begin{definition}[Generalized Internal Typing]
\

\begin{tabular}{lll}
\infer[toSet]{\intern{\Gamma} \Vdash t': t \ep \intern{S}}{\Gamma \vdash t':t\ep S}

&    
\infer{\Psi \Vdash x : t \ep U}{x: t \ep U \in \Psi }

&

\infer{\Psi \Vdash \lambda y.t' : \lambda x.t\ep \Pi x:U. U'}
{\Psi, y: x \ep U \Vdash t' : t \ep U'}


\\
\\

\infer{\Psi \Vdash t' :t\ep \Delta X^1. U}
{\Psi \Vdash t' : t\ep U & X^1 \notin FV(\Psi)}

&

\infer{\Psi \Vdash t' : t\ep [U'/X] U}
{\Psi \Vdash t' :t \ep \Delta X^1.U}

&
\infer{\Psi \Vdash t' t'' : t_1 t_2 \ep [t_2/x]U}{\Psi
\Vdash t': t_1 \ep \Pi x: U'.U & \Psi \Vdash t'': t_2 \ep U'}


%% \infer{\Delta \vdash \lambda y.[t/x]t':S_1 \longrightarrow S_3}{\Delta
%% \vdash \lambda y.t: S_1 \longrightarrow S_2 & \Delta \vdash \lambda x.t': S_2 \longrightarrow S_3}

\end{tabular}
\end{definition}
 
\begin{theorem}[Externalization]
  If $\Psi \Vdash t': t\ep U$, then $\interp{\Psi} \vdash t': t\ep \interp{U}$.
\end{theorem}
\begin{proof}
  The proof is the same as that of Theorem \ref{ext}.
\end{proof}
%% The best possible law has value only if one can justify it, i.e., show the effect of non-observance. Girard

This generalized version of reciprocity is highly desirable in the sense that the Scott encoding
and its derivatives can now exploit reciprocity. For example, for the Scott encoding of $0$ and $\suc$, one
will have a proof of $0 \ep \mathsf{Nat}$ and of $\suc \ep \Pi x: \mathsf{Nat}. \mathsf{Nat}$, and then one can elaborate an inductive proof of $\mathsf{add} \ep \Pi x:\mathsf{Nat}. \Pi y: \mathsf{Nat}. \mathsf{Nat}$ from the definition of $\mathsf{add}$ (similar definitions can be written with OCaml's \textit{match} expression and Haskell's \textit{case} expression).
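As an illustration of how Scott numerals support match-style case analysis, here is a minimal Python transcription; the helper names \texttt{is\_zero} and \texttt{pred} are ours, chosen only to make the analogy with \textit{match}/\textit{case} visible.

```python
# Scott numerals: a numeral carries its own case analysis. Applying n to a
# successor branch s and a zero branch z plays the role of a match expression.
zero = lambda s: lambda z: z                  # 0 := \s.\z. z
suc  = lambda n: lambda s: lambda z: s(n)     # S := \n.\s.\z. s n

def is_zero(n):
    # "match n with S p -> False | 0 -> True"
    return n(lambda p: False)(True)

def pred(n):
    # "match n with S p -> p | 0 -> 0"  (Scott predecessor is immediate)
    return n(lambda p: p)(zero)
```

Note how the Scott predecessor is a one-liner, in contrast with the Kleene predecessor for Church numerals discussed later.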
\section{Reasoning about Programs}

\subsection{Preliminary}
Most of this section comes from Barendregt's book on the lambda calculus.
\begin{definition}[Solvability]
\

  \begin{itemize}
  \item A closed lambda term $t$ (i.e., $\mathrm{FV}(t) = \emptyset$) is solvable if
there exist $t_1,\dots,t_n$ such that $t\ t_1 \dots t_n =_{\beta} \lambda x.x$.
\item An arbitrary term $t$ is solvable if its closure $\lambda x_1\dots\lambda x_n.t$, where
$\{x_1,\dots,x_n\} = \mathrm{FV}(t)$, is solvable.
  \item $t$ is unsolvable iff $t$ is not solvable.
  \end{itemize}
\end{definition}
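As a concrete illustration (the particular term is our own example, not one from Barendregt), a solvable term can be driven to the identity by a suitable argument, which we can observe directly in Python:

```python
# t = \x. x (\y.y) is solvable: applying it to t1 = \z.\w.w yields
#   t t1  =_beta  t1 (\y.y)  =_beta  \w.w,
# i.e. the identity, witnessing solvability with n = 1.
t  = lambda x: x(lambda y: y)
t1 = lambda z: lambda w: w
solved = t(t1)        # behaves as the identity \w.w
```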

\begin{definition}[Head Normal Form]
  A term $t$ is in head normal form if it is of the form $\lambda x_1\dots\lambda x_n.x\ t_1\dots t_m$, where $n,m \geq 0$.
\end{definition}

\begin{theorem}[Wadsworth]
  $t$ is solvable iff $t$ has a head normal form. In particular, all terms in
normal form are solvable, and unsolvable terms have no normal form.
\end{theorem}

\begin{theorem}[Genericity]
  For an unsolvable term $t$, if $t_1\ t =_{\beta} t_2$, where $t_2$ is in normal form, then
for any $t'$, we have $t_1\ t' =_{\beta} t_2$.
\end{theorem}

So unsolvable terms are, in general, computationally irrelevant; it is thus reasonable to equate
all unsolvable terms.

\begin{definition}[Omega-Reduction]
Let $\Omega$ be $(\lambda x.x\ x)(\lambda x.x\ x)$; then $t \to_{\omega} \Omega$ iff $t$ is unsolvable and $t \not \equiv \Omega$.
\end{definition}

We add Omega-reduction as part of the term reduction in $\mathfrak{G}$.

\begin{theorem}
  $\to_{\beta} \cup \to_{\omega}$ is Church-Rosser.
\end{theorem}

\subsection{Reasoning about Programs}

We now define another notion of contradictory: $\bot := \forall x.\, x = \Omega$. Note that this implies $\forall x.\forall y.\, x = y$, so we can safely take it as contradictory.

\begin{theorem}
  $\vdash \forall n. ( n \ep \mathsf{Nat} \to (n = \Omega \to \bot))$.
\end{theorem}
\begin{proof}
  We will prove this by induction. Recall the induction theorem: $\Pi C^1. (\forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C)) \to 0 \ep C \to \forall m. (m \ep \mathsf{Nat} \to m \ep C)$. We instantiate $C$ with $\iota z. (z = \Omega \to \bot)$; by comprehension, we then have $(\forall y . ((y = \Omega \to \bot) \to (\mathsf{S} y = \Omega \to \bot))) \to (0 = \Omega \to \bot) \to \forall m. (m \ep \mathsf{Nat} \to (m = \Omega \to \bot))$. It is enough to show $0 = \Omega \to \bot$ and $\mathsf{S} y = \Omega \to \bot$. Say we use Scott numerals, so $0 := \lambda s.\lambda z.z$ and $\suc y := \lambda s.\lambda z.s\ y$. Assume $0 = \Omega = \lambda x_1.\lambda x_2.\Omega$, and let $F := \lambda u. u \ p\ q$. Assume $q \ep X^1$; then $F \ 0 \ep X^1$. Also $F \ (\lambda x_1.\lambda x_2.\Omega) \ep X^1$, so $\Omega \ep X^1$. Thus we have shown
$\Pi X^1. (q \ep X^1 \to \Omega \ep X^1)$, which means $\forall q.\, q = \Omega$, a contradiction. So $0 = \Omega \to \bot$. Now let us show $\mathsf{S} y = \Omega \to \bot$. Assume $\lambda s.\lambda z.s\ y = \Omega = \lambda x_1.\lambda x_2.\Omega$. Let $F := \lambda n.n\ (\lambda p.q)\ z$. Assume $q \ep X^1$; then $F\ (\lambda s.\lambda z.s\ y) \ep X^1$, thus $F\ (\lambda x_1.\lambda x_2.\Omega) \ep X^1$, meaning $\Omega \ep X^1$. So we have shown $\Pi X^1. (q \ep X^1 \to \Omega \ep X^1)$, thus $\forall q.\, q = \Omega$, a contradiction. So $\mathsf{S} y = \Omega \to \bot$.
\end{proof}

The theorem above means that every member of $\mathsf{Nat}$ has a normal form. This establishes
the fact that a numeric function $t: \mathsf{Nat} \to \mathsf{Nat}$ terminates
on every input from $\mathsf{Nat}$.


\cite{Girard:1989}
\bibliographystyle{plain}
\bibliography{system-g}

\appendix
\section{Developments inside System $\mathfrak{G}_0$}
\label{devep}

\begin{lemma}[Reflexivity of Equality]
\label{refl}
 There is a $t$ such that $\cdot \vdash t : \forall a. (a = a)$.
\end{lemma}
\begin{proof}
  Obvious.
\end{proof}

\begin{lemma}[Symmetry of Equality]
\label{symm}
 There is a $t$ such that $\cdot \vdash t : \forall a. \forall b. (a = b \to b = a)$.
\end{lemma}
\begin{proof}
  Assume $\Pi C. a\ep C \to b \ep C$ (1); we want to show $b \ep A \to a \ep A$ for any $A$. Instantiate $C$ in (1) with $\iota x.  (x \ep A \to a \ep A)$. By comprehension, we get
$(a \ep A \to a \ep A) \to (b \ep A \to a \ep A)$. We know that $a \ep A \to a \ep A$ is derivable in our system, so by modus ponens we get $b \ep A \to a \ep A$.
\end{proof}

\begin{lemma}[Transitivity of Equality]
\label{trans}
 There is a $t$ such that $\cdot \vdash t : \forall a. \forall b. \forall c. a = b \to b = c \to a = c$.
\end{lemma}
\begin{proof}
For any $a,b,c$, assume $a = b$ (i.e., $\Pi C. a \ep C \to b \ep C$) and $b = c$ (i.e., $\Pi C. b \ep C \to c \ep C$); we want to show $a = c$ (i.e., $a \ep A \to c \ep A$ for any $A$). One can see that this
follows by syllogism.
\end{proof}

\begin{lemma}[Object-level Conversion]
\label{oconv}
  There is a $t$ such that  $\cdot \vdash t: \forall a. \forall b. \Pi P. ( a \ep P \to a = b \to b \ep P)$. 
\end{lemma}
\begin{proof}
  By modus ponens.
\end{proof}
\begin{theorem}
   There is a $t$ such that $\cdot \vdash t : \forall a. \forall b. (a \ep \mathsf{Nat} \to a = b \to b \ep \mathsf{Nat})$.
\end{theorem}
\begin{proof}
  Let $P := \iota x.x\ep \mathsf{Nat}$ for lemma \ref{oconv}. 
\end{proof}

\begin{theorem}[Unprovability I]
There is no $t$ such that $\cdot \vdash t:  1 = 0 \to \mathsf{Void}$.
\end{theorem}

\begin{proof}
By the erasure theorem, if such a $t$ existed, it would imply that $(\Pi C. C \to C) \to \Pi X.X$ is
inhabited, and thus that $\Pi X.X$ is inhabited.
\end{proof}

\begin{theorem}[Unprovability II]
There is no $t$ such that $\cdot \vdash t:\mathsf{Void}$.
\end{theorem}

\noindent The unprovability results above suggest that our system, as a logic, seemingly has the same drawback as $\mathbf{F}$, i.e., it is unable to interpret $0 \neq 1$ properly. We will see that this is actually not the case for our system.

\subsection{The Notion of Contradictory}

\begin{definition}
  $\bot := \forall x. \forall y. (x = y)$.
\end{definition}

\noindent The meaning of this definition is obvious: every term is equal to every other. Note that
the erasure is $F(\bot) \equiv \Pi X. X \to X$, so it is inhabited in System \textbf{F}. But
one can prove that $\bot$ is uninhabited in our system.

\begin{theorem}[Logical Expressivity]
\label{logic}
There is no $t$ such that $\cdot \vdash t:\bot$.
\end{theorem}
\begin{proof}
  By theorem \ref{const}, we know that if there were such a $t$, it would have to be an
abstraction, so $u: x \ep C \vdash t' : y \ep C$ for any $x, y, C$. But according to our
typing rules, there is no way to construct a term of type $y \ep C$
under the assumption $u : x \ep C$.
\end{proof}

\noindent This theorem shows that although our system is computationally the same as System
\textbf{F}, logically it is strictly richer: we have just identified a property that cannot be
inhabited in our system but is inhabited in System \textbf{F}. Now, with this new notion of \textit{contradictory}, we can prove $0 = 1 \to \bot$.

\begin{theorem}
 There is a term $t$ such that $\cdot \vdash t : 0 = 1 \to \bot$.
\end{theorem}
\begin{proof}
  Assume $\Pi C. 0 \ep C \to 1 \ep C$ ($\dagger$); we want to prove $x \ep A \to y \ep A$ for any $x,y,A$. Assume $x \ep A$ (1). We now instantiate $C$ with $\iota u. (((\lambda n. n\ (\lambda z.y)\ x)\ u) \ep A)$ in $\dagger$. By comprehension and beta reduction, we get $x \ep A \to y \ep A$ (2). By modus ponens with (1) and (2), we get $y \ep A$. So we have exhibited an abstract proof
term $t$.
\end{proof}

\noindent \textbf{Remarks}: 
\begin{itemize}
\item The theorem above shows that at least one axiom of $\textbf{HA}_2$
can be proved in our system and also has a well-behaved translation to System $\mathbf{F}$, namely $F(0 = 1 \to \bot) \equiv (\Pi C. C \to C) \to (\Pi C. C \to C)$.

\item It also shows that Girard's mapping from \textbf{F} with ``junk'' to \textbf{F} is very
well conceived: that mapping sends his notion of contradictory, $\Pi X.X$, to $\Pi X. X \to X$, which is exactly the erasure of our notion of contradictory.
\end{itemize}

\begin{theorem}
 There is a term $t$ such that $\cdot \vdash \forall m. ( m \ep \mathsf{Nat} \to \mathsf{S}m =  m \to \bot)$.
\end{theorem}

\subsection{Injectivity of $\mathsf{S}$}
\noindent Just as the $\mathsf{pred}$ function for Church numerals is notoriously hard to define,
the injectivity of $\mathsf{S}$ is considered the hardest theorem to prove in our system.

\begin{definition}[Predecessor, Kleene]
$\mathsf{pred} := \lambda n.\lambda f. \lambda x. n \ (\lambda g.\lambda h. h\ (g\ f)) (\lambda u. x) (\lambda u.u)$.  
\end{definition}
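Transcribed into Python (the decoding helper \texttt{to\_int} is ours, added only for inspection), one can check that this definition really computes the predecessor of a Church numeral:

```python
# Church numerals and Kleene's predecessor, transcribed into Python.
zero = lambda f: lambda x: x                         # Church 0
suc  = lambda n: lambda f: lambda x: f(n(f)(x))      # Church successor
pred = lambda n: lambda f: lambda x: \
    n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u)

def to_int(n):                                       # decode for inspection
    return n(lambda k: k + 1)(0)

three = suc(suc(suc(zero)))
```

The trick is that each pair-shuffling step `\g.\h. h (g f)` delays one application of `f`, so the final application to the identity discards exactly one successor; note also that `pred zero` evaluates back to `zero`.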

\begin{lemma}
\label{tcong}
   $\cdot \vdash t: \forall a. \forall b. (a = b \to \lambda s.\lambda z. s\ (a\ s\ z) = \lambda s.\lambda z. s\ (b\ s\ z))$. 
\end{lemma}
\begin{proof}
Assume $\Pi C. a \ep C \to b \ep C$ $\dagger$, we want to show that $\lambda s.\lambda z. s\ (a\ s\ z) \ep A \to \lambda s.\lambda z. s\ (b\ s\ z) \ep A$ for any $A$. Instantiate $C$ by $\iota x.(\lambda s.\lambda z. s\ (x\ s\ z) \ep A)$ in $\dagger$, by comprehension, we have $\lambda s.\lambda z. s\ (a\ s\ z) \ep A \to \lambda s.\lambda z. s\ (b\ s\ z) \ep A$. 
\end{proof}

\begin{lemma}[Intermediate Result]
\label{inter}
  $\cdot \vdash t: \forall m .(m \ep \mathsf{Nat} \to \lambda f.\lambda x.(m\ (\lambda g.\lambda h. h\ (g\ f))\ (\lambda u.x))\ f = m)$. 
\end{lemma}

\begin{proof}
  We prove this by induction. Let $P:= \iota q. \lambda f.\lambda x.(q\ (\lambda g.\lambda h. h\ (g\ f))\ (\lambda u.x))\ f = q$. Instantiating $C$ in $\mathsf{Id}$ with $P$, we get $\forall y . ( y \ep P \to  (\mathsf{S} y)\ep P) \to  0 \ep P \to \forall m. (m \ep \mathsf{Nat} \to  m \ep P)$. We just need to show $0 \ep P$ and $\forall y . ( y \ep P \to  (\mathsf{S} y)\ep P)$. For the base case, we want to prove $\lambda f.\lambda x.(0\ (\lambda g.\lambda h. h\ (g\ f))\ (\lambda u.x))\ f = 0$; this is easily done by evaluation. For the step case, for any $y$, we want to show $y \ep P \to  (\mathsf{S} y)\ep P$. Assume $y \ep P$; we need to show $(\mathsf{S} y)\ep P$. By comprehension and beta reduction, we are assuming $\lambda f.\lambda x.(y\ (\lambda g.\lambda h. h\ (g\ f))\ (\lambda u.x))\ f = y$ (1), and we want to show $\lambda f.\lambda x.((\lambda s.\lambda z.s\ (y\ s\ z))\ (\lambda g.\lambda h. h\ (g\ f))\ (\lambda u.x))\ f = \lambda s.\lambda z.s\ (y\ s\ z)$ (2). By lemma \ref{tcong} and (1), we get $\lambda s.\lambda z.s\ (y\ s\ z) = \lambda s.\lambda z.s\ ((\lambda f.\lambda x.(y\ (\lambda g.\lambda h. h\ (g\ f))\ (\lambda u.x))\ f)\ s\ z)$ (3). Then by beta reductions we see that the right-hand side of (3) is Leibniz-equal to the left-hand side of (2). So by transitivity and symmetry of equality we prove (2), and thus we exhibit the abstract term $t$.
\end{proof}

\begin{lemma}[Predecessor]
\label{pre}
  $\cdot \vdash t: \forall m . (m \ep \mathsf{Nat} \to \mathsf{pred} (\mathsf{S} m) = m)$.
\end{lemma}
\begin{proof}
Since $\mathsf{pred} (\mathsf{S} m) \to_{\beta}^* \lambda f.\lambda x.(m\ (\lambda g.\lambda h. h\ (g\ f))\ (\lambda u.x))\ f$, by lemma \ref{inter}, we get what we want.  
\end{proof}
\begin{lemma}[Congruence of Equality]
\label{cong}
 There is a $t$ such that $\cdot \vdash t : \forall a. \forall b. \forall f .( a = b \to f\ a = f\ b)$.
\end{lemma}
\begin{proof}
  Assume $\Pi C. a \ep C \to b \ep C$ (i.e., $a = b$). Let $C := \iota x. f\ x \ep P$ with $P$ free. Instantiating the
assumption with this $C$, we get $a \ep (\iota x. f\ x \ep P) \to b \ep (\iota x. f\ x \ep P)$. By conversion,
we get $f\ a \ep P \to f\ b \ep P$. So by polymorphic generalization, we get $f\ a = f\ b$. Discharging the hypothesis and applying a series of generalizations, we get what we want.
\end{proof}

\begin{theorem}
  $\cdot \vdash t: \forall n.\forall m. (n \ep \mathsf{Nat} \to m \ep \mathsf{Nat} \to \mathsf{S}m = \mathsf{S}n \to m = n)$. 
\end{theorem}
\begin{proof}
Assume $n \ep \mathsf{Nat}, m \ep \mathsf{Nat}, \mathsf{S}m = \mathsf{S}n$, we want to show $m = n$. Instantiate $a$ with $\mathsf{S}m$, instantiate $b$ with $\mathsf{S}n$, $f$ with $\mathsf{pred}$ in lemma \ref{cong}. By modus ponens, we have $\mathsf{pred}(\mathsf{S}m) = \mathsf{pred}(\mathsf{S}n)$. Thus by lemma \ref{pre}, we have $m = n$.
\end{proof}

\noindent \textbf{Remarks}: The proof of this theorem benefits greatly from the fact that we do not
need to type the $\mathsf{pred}$ function (even though we may be able to type it, we do not need to).

\subsection{Scott Encoding}

\begin{definition}[Scott numerals]
  \noindent $\mathsf{Nat} := \iota x. \Pi C.(\forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C)) \to 0 \ep C  \to x \ep C$

\noindent $\mathsf{S} \ := \lambda n. \lambda s.\lambda z. s \ n$

\noindent $0\  := \lambda s. \lambda z.z$

\end{definition}

\noindent \textbf{Note}:  $0$ is typable to $\mathsf{Nat}$, but $\mathsf{S}$ is not typable to $\mathsf{Nat} \to \mathsf{Nat}$. Also note that the proof of $1 \ep \mathsf{Nat}$ is actually the Church numeral $1$! This explains why Church numerals are special: they are in a sense \textit{initial}, meaning that for any encoding of $\bar{n}$ and $\mathsf{Nat}$, as long as the definition of $\mathsf{Nat}$ has the same form as the Church encoding, the proof of $\bar{n} \ep \mathsf{Nat}$ will
be the Church numeral $n$.


\begin{definition}[Induction]
\

\noindent  $\mathsf{Id} :  \Pi C. (\forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C)) \to 0 \ep C \to \forall m. (m \ep \mathsf{Nat} \to m \ep C)$

\noindent $\mathsf{Id} := \lambda s. \lambda z. \lambda n. n\ s\ z$

\noindent with $s:\forall y . ( (y \ep C) \to (\mathsf{S} y) \ep C), z: 0 \ep C, n: m \ep \mathsf{Nat}$.
\end{definition}

\begin{theorem}
  $\cdot \vdash t : 0 \ep \mathsf{Nat}$.
\end{theorem}
\begin{proof}
  Obvious.
\end{proof}

\begin{theorem}

  $\cdot \vdash t: \forall m. (m \ep \mathsf{Nat} \to \mathsf{S}m \ep \mathsf{Nat})$.
\end{theorem}
\begin{proof}
  By induction. Let $P:= \iota x. \mathsf{S} x \ep \mathsf{Nat}$. Instantiating $C$ in $\mathsf{Id}$ with $P$, we get $\forall y . ( \mathsf{S}y \ep \mathsf{Nat} \to  \mathsf{S}(\mathsf{S} y)\ep \mathsf{Nat}) \to  0 \ep \mathsf{Nat} \to \forall m. (m \ep \mathsf{Nat} \to  \mathsf{S}m \ep \mathsf{Nat})$. So we just need to show $\forall y . ( \mathsf{S}y \ep \mathsf{Nat} \to  \mathsf{S}(\mathsf{S} y)\ep \mathsf{Nat})$ and $0 \ep \mathsf{Nat}$. The base case is immediate. To show the step case, let us assume $\mathsf{S}y \ep \mathsf{Nat}$ for any $y$; we need to show
$\mathsf{S}(\mathsf{S} y)\ep \mathsf{Nat}$. By comprehension, we are assuming $\Pi C. (\forall y. ((y \ep C) \to (\mathsf{S}y) \ep C)) \to 0 \ep C \to (\mathsf{S}y) \ep C$ ($\dagger$), and we want to show $(\forall y. ((y \ep A) \to (\mathsf{S}y) \ep A)) \to 0 \ep A \to (\mathsf{S} (\mathsf{S} y)) \ep A$ for any $A$. Assume $\forall y. ((y \ep A) \to (\mathsf{S}y) \ep A)$ (1) and $0 \ep A$; we need to show $(\mathsf{S} (\mathsf{S} y)) \ep A$. Instantiating $C$ with $A$ in $\dagger$, we get $(\forall y. ((y \ep A) \to (\mathsf{S}y) \ep A)) \to 0 \ep A \to (\mathsf{S}y) \ep A$. By modus ponens, we get $(\mathsf{S}y) \ep A$. Instantiating $y$ in (1) with $\mathsf{S} y$, we get $\mathsf{S} y \ep A \to \mathsf{S} (\mathsf{S} y) \ep A$. Thus by modus ponens, we get $\mathsf{S} (\mathsf{S} y) \ep A$, and we exhibit such an abstract term $t$.
\end{proof}

\begin{definition}[Recursive Equation]
  $\mathsf{add} :=  \lambda n. \lambda m.n\ (\lambda p. \mathsf{add}\ p\ (\mathsf{S} m))\ m$
\end{definition}

\noindent We know that the above recursive equation can be solved by a fixpoint, but we do not
bother to solve it. Instead we treat it as a kind of built-in beta equality: whenever
we see $\mathsf{add}$, we unfold it one step.
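The one-step unfolding discipline can be mirrored directly in Python, where each call to \texttt{add} performs exactly one unfolding of the recursive equation (the \texttt{to\_int} helper is illustrative only):

```python
zero = lambda s: lambda z: z
succ = lambda n: lambda s: lambda z: s(n)

# add n m = n (\p. add p (S m)) m: each call unfolds the recursive
# equation once, decrementing n and incrementing m.
def add(n, m):
    return n(lambda p: add(p, succ(m)))(m)

# Illustrative read-back helper, not part of the system.
def to_int(n):
    return n(lambda p: 1 + to_int(p))(0)
```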

\begin{theorem}
There is a $t$ such that  $\cdot \vdash t : \forall n. (n \ep \mathsf{Nat} \to \mathsf{add}\ n\ 0 = n)$. 
\end{theorem}
\begin{proof}

We want to show $\forall n. (n \ep \mathsf{Nat} \to \mathsf{add}\ n\ 0 = n)$.
 Let $P := \iota x. \mathsf{add}\ x\ 0 = x$. Instantiating the $C$ in $\mathsf{Id}$ with $P$, we get
$  \forall y . ( \mathsf{add}\ y\ 0 = y \to \mathsf{add}\ (\mathsf{S} y)\ 0 = \mathsf{S} y) \to\mathsf{add}\ 0\ 0 = 0 \to \forall m. (m \ep \mathsf{Nat} \to m \ep P)$. We just have to inhabit
$\forall y . ( \mathsf{add}\ y\ 0 = y \to \mathsf{add}\ (\mathsf{S} y)\ 0 = \mathsf{S} y)$ and $\mathsf{add}\ 0\ 0 = 0$. For the base case, we want to show $\Pi C. \mathsf{add}\ 0\ 0 \ep C \to 0 \ep C$. Assume $\mathsf{add}\ 0\ 0 \ep C$; since $\mathsf{add}\ 0\ 0 \to_{\beta}^* 0$, by conversion we get $0 \ep C$. The step case is a bit more involved: assume $\mathsf{add}\ y\ 0 = y$; we want to show $\mathsf{add}\ (\mathsf{S} y)\ 0 = \mathsf{S} y$. We have $\mathsf{add}\ y\ 0 \to_{\beta} y\ (\lambda p.\mathsf{add}\ p\ (\suc 0))\ 0$ and $\mathsf{add}\ (\mathsf{S}y)\ 0 \to_{\beta} \mathsf{add} \ y \ (\suc 0) \leftarrow_{\beta}^* \suc (\mathsf{add}\ y\ 0)$, so Lemma \ref{cong} gives us the result.
\end{proof}

\begin{definition}
  $\mathsf{pred} := \lambda n.n\ (\lambda p.p)\ 0$
\end{definition}
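On Scott numerals, $\mathsf{pred}$ is a constant-time operation: a successor simply returns its stored predecessor. A Python sketch (with the illustrative \texttt{to\_int} helper):

```python
zero = lambda s: lambda z: z
succ = lambda n: lambda s: lambda z: s(n)

# pred := \n. n (\p. p) 0: on S p, return p in one step; on 0, return 0.
pred = lambda n: n(lambda p: p)(zero)

# Illustrative read-back helper, not part of the system.
def to_int(n):
    return n(lambda p: 1 + to_int(p))(0)
```

This constant-time predecessor is the classical advantage of the Scott encoding over the Church encoding.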


\subsection{Dependent Product}

We extend $\mathfrak{G}$ with three new type constructs, $\Pi x:T.T'$, $T\ t$ and $\lambda x.T$, and replace the Func and App rules with the following two new typing rules:

\

\begin{tabular}{ll}
\infer[\textit{Indx}]{\Gamma \vdash \lambda x.t : \Pi x: T_1.T_2}
{\Gamma, x:T_1 \vdash t: T_2}

&
\infer[\textit{App}]{\Gamma \vdash t t':[t'/x]T_2}{\Gamma
\vdash t: \Pi x:T_1.T_2 & \Gamma \vdash t': T_1}
  
\end{tabular}

\

\noindent And we need another type level reduction rule:

\infer{(\lambda x.T)t \to_{\beta} [t/x]T}{}
 
\noindent We also write $T_1 \to T_2$ for $\Pi x:T_1.T_2$ when $x \notin \mathsf{FV}(T_2)$.

\noindent \textbf{Remarks}
\begin{itemize}
\item  We want to investigate the indexed product because we want to see whether it is possible to obtain formula-set reciprocity for the vector data type, which is canonical in dependently typed programming languages. In this section we take natural numbers to be Church numerals.

\item We want to distinguish three kinds of quantification: $\forall x.T$,  $\Pi x:T_1.T_2$ and $\forall x. x \ep T_1 \to T_2$. The first is the strongest in the sense that it quantifies over all terms; the second quantifies over all terms of type $T_1$; the third quantifies over the terms that have the self type $T_1$.
\end{itemize}

\

\begin{definition}[Vector]
\

\noindent  $\mathsf{vec}(U, n) := \iota x. \Pi C. (\forall y. (\Pi m: \mathsf{Nat}. \Pi u:U. y \ep C m \to (\mathsf{cons}\ m\ u\ y) \ep C (\mathsf{S}m))) \to \mathsf{nil} \ep C 0 \to x \ep C n$

\noindent $\nil := \lambda y. \lambda x.x : \vecc(U, 0)$

\noindent $\cons := \lambda n.\lambda v. \lambda l. \lambda y. \lambda x.y \ n\ v\ (l \ y\ x) : \Pi n: \mathsf{Nat}.U \to \vecc (U, n) \to \vecc (U, \suc n)$.

\noindent where $n: \mathsf{Nat}, v: U, l: \vecc (U, n), y:\forall y. (\Pi m: \mathsf{Nat}. \Pi u:U. y \ep C m \to (\mathsf{cons}\ m\ u\ y) \ep C (\mathsf{S}m)), x: \nil \ep C0 $. 


\end{definition}
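The erasures of $\nil$ and $\cons$ again translate directly to Python: a vector is its own fold over a cons case and a nil case. The \texttt{to\_list} helper is illustrative (not part of the system).

```python
# nil := \y.\x. x    cons := \n.\v.\l.\y.\x. y n v (l y x)
# In cons, n is the length of the tail l.
nil = lambda y: lambda x: x
cons = lambda n: lambda v: lambda l: lambda y: lambda x: y(n)(v)(l(y)(x))

zero = lambda s: lambda z: z
succ = lambda n: lambda s: lambda z: s(n)

# Illustrative read-back to a Python list: the cons case receives the
# tail length, the head, and the already-folded tail.
def to_list(l):
    return l(lambda n: lambda v: lambda rest: [v] + rest)([])
```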

\begin{proof}
\noindent It is easy to see that $\nil$ is typable to $\vecc (U, 0)$. Now we show that $\cons$ is typable to $\Pi n: \mathsf{Nat}.U \to \vecc (U, n) \to \vecc (U, \suc n)$. We can see that $l\ y\ x: l \ep C\ n$. After the instantiation, the type of $y \ n\ v$ is $l \ep C\ n \to (\mathsf{cons}\ n\ v\ l) \ep C\ (\mathsf{S}n)$. So $y\ n\ v \ (l\ y\ x): (\mathsf{cons}\ n\ v\ l) \ep C\ (\mathsf{S}n)$. So $\lambda y. \lambda x. y\ n\ v \ (l\ y\ x) : \Pi C. (\forall y. (\Pi m: \mathsf{Nat}. \Pi u:U. y \ep C m \to (\mathsf{cons}\ m\ u\ y) \ep C (\mathsf{S}m))) \to  \nil \ep C\ 0 \to  \lambda y. \lambda x. y\ n\ v \ (l\ y\ x) \ep C\ (\suc n)$. So $\lambda y. \lambda x. y\ n\ v \ (l\ y\ x) : \vecc (U, \suc n)$. So $ \cons : \Pi n:\mathsf{Nat}. U \to \vecc(U, n) \to \vecc(U, \suc n)$.
  
\end{proof}


\noindent The above development suggests that dependent types can be included in the framework
of system $\mathfrak{G}$: we just need to extend the erasure function with $F(\Pi x:T_1.T_2) := F(T_1) \to F(T_2)$, $F(T\ t) := F(T)$ and $F(\lambda x.T) := F(T)$, and then we can still go back to system \textbf{F}. More importantly, our vector encoding has the formula-set reciprocity, a highly desirable property that
enables us to do dependent programming effectively in $\mathfrak{G}$.

\begin{definition}[Induction Principle]
\

  \noindent  $\mathsf{ID}(U, n) : \Pi C. (\forall y. (\Pi m: \mathsf{Nat}. \Pi u:U. y \ep C m \to (\mathsf{cons}\ m\ u\ y) \ep C (\mathsf{S}m))) \to \mathsf{nil} \ep C 0 \to \forall x. (x \ep \vecc(U,n) \to x \ep Cn)$

\noindent $\mathsf{ID}(U,n) := \lambda s. \lambda z. \lambda n. n\ s\ z$

\noindent Let $s : (\forall y. (\Pi m: \mathsf{Nat}. \Pi u:U. y \ep C m \to (\mathsf{cons}\ m\ u\ y) \ep C (\mathsf{S}m))), z: \mathsf{nil} \ep C 0, n: x \ep \vecc(U,n)$
\end{definition}

\begin{definition}[append]
\

\noindent $\app := \lambda n_1. \lambda n_2. \lambda l_1. \lambda l_2. l_1\ (\lambda n. \lambda x.\lambda v. \cons  (n+n_2)\ x\ v)\ l_2$.

\noindent For $n+n_2$ we mean $\mathsf{add}\ n\ n_2$. We can use induction to define append
as well.

\noindent $\app := \lambda n_1. \lambda n_2. \lambda l_1. \lambda l_2. \mathsf{ID}(U, n_1)\ (\lambda n. \lambda x.\lambda v. \cons  (n+n_2)\ x\ v)\ l_2 \ l_1$.
\end{definition}
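The erasure of $\app$ folds the first vector, rebuilding each $\cons$ with its index shifted by $n_2$, with the second vector as the base case. A Python sketch (\texttt{to\_list} is an illustrative read-back helper):

```python
zero = lambda s: lambda z: z
succ = lambda n: lambda s: lambda z: s(n)
nil = lambda y: lambda x: x
cons = lambda n: lambda v: lambda l: lambda y: lambda x: y(n)(v)(l(y)(x))

# One-step unfolding of the recursive equation for add.
def add(n, m):
    return n(lambda p: add(p, succ(m)))(m)

# app n1 n2 l1 l2: fold l1, rebuilding each cons at index n + n2,
# with l2 as the nil case.
def app(n1, n2, l1, l2):
    return l1(lambda n: lambda x: lambda v: cons(add(n, n2))(x)(v))(l2)

# Illustrative read-back helper, not part of the system.
def to_list(l):
    return l(lambda n: lambda v: lambda rest: [v] + rest)([])
```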

\begin{proof}
  We want to show $\app : \Pi n_1:\mathsf{Nat}. \Pi n_2:\mathsf{Nat}. \vecc(U, n_1) \to \vecc(U, n_2) \to \vecc(U, n_1+n_2)$. Observe that $\lambda n. \lambda x.\lambda v. \cons  (n+n_2)\ x\ v: \Pi n:\mathsf{Nat}. \Pi x:U. \vecc(U, n+n_2) \to \vecc(U, \suc n+n_2) $. We instantiate $C :=  \lambda y.(\iota x.\vecc(U, y + n_2))$, where $x$ does not occur free in $\vecc(U, y + n_2)$, in $\mathsf{ID}(U, n_1)$; by comprehension and beta reductions, we get $\mathsf{ID}(U, n_1) : \forall y. (\Pi m: \mathsf{Nat}. \Pi u:U.  \vecc(U, m+n_2) \to  \vecc (U, \mathsf{S}m+n_2)) \to \vecc(U, 0+n_2)  \to \forall x. (x \ep \vecc(U,n_1) \to  x \ep \vecc(U, n_1+n_2))$. So $\mathsf{ID}(U, n_1) \ (\lambda n. \lambda x.\lambda v. \cons  (n+n_2)\ x\ v) : \vecc(U, 0+n_2)  \to \forall x. (x \ep \vecc(U,n_1) \to  x \ep \vecc(U, n_1+n_2))$. Assuming $l_1: \vecc(U, n_1)$ and $l_2:\vecc(U, n_2)$, we get $\mathsf{ID}(U, n_1) \ (\lambda n. \lambda x.\lambda v. \cons  (n+n_2)\ x\ v) \ l_2 \ l_1: \vecc(U, n_1+n_2)$.
\end{proof}

\begin{theorem}[Associativity]
  There is a $t$ such that $\cdot \vdash t: \forall n_1. \forall n_2. \forall n_3. \forall v_1. \forall v_2. \forall v_3. (n_1 \ep \mathsf{Nat} \to n_2 \ep \mathsf{Nat} \to n_3 \ep \mathsf{Nat} \to v_1 \ep \vecc(U, n_1) \to v_2 \ep \vecc(U, n_2) \to v_3 \ep \vecc(U, n_3) \to \app \ n_1\ (n_2+n_3)\ v_1 \ (\app\ n_2\ n_3 \ v_2 \ v_3) = \app \ (n_1 + n_2) \ n_3\ (\app \ n_1 \ n_2 \ v_1 \ v_2) \ v_3)$
\end{theorem}

\begin{proof}
  Assume $x_1: n_1 \ep \mathsf{Nat}, x_2: n_2 \ep \mathsf{Nat}, x_3: n_3 \ep \mathsf{Nat}, y_2: v_2 \ep \vecc(U, n_2), y_3: v_3 \ep \vecc(U, n_3)$. We want to show $\forall v_1. (v_1 \ep \vecc(U, n_1) \to \app \ n_1\ (n_2+n_3)\ v_1 \ (\app\ n_2\ n_3 \ v_2 \ v_3) = \app \ (n_1 + n_2) \ n_3\ (\app \ n_1 \ n_2 \ v_1 \ v_2) \ v_3)$. Let $P:= \lambda z.\iota y. (\app \ z\ (n_2+n_3)\ y \ (\app\ n_2\ n_3 \ v_2 \ v_3) = \app \ (z + n_2) \ n_3\ (\app \ z \ n_2 \ y \ v_2) \ v_3)$. Instantiating the $C$ in $\mathsf{ID}(U,n_1)$ with $P$, we have $\mathsf{ID}(U,n_1):  (\forall y. (\Pi m: \mathsf{Nat}. \Pi u:U. y \ep P m \to (\mathsf{cons}\ m\ u\ y) \ep P (\mathsf{S}m))) \to \mathsf{nil} \ep P 0 \to \forall x. (x \ep \vecc(U,n_1) \to x \ep P n_1)$. So we just need to prove the base case $\app \ 0\ (n_2+n_3)\ \nil \ (\app\ n_2\ n_3 \ v_2 \ v_3) = \app \ (0 + n_2) \ n_3\ (\app \ 0 \ n_2 \ \nil \ v_2) \ v_3$ and the step case $\forall y. \Pi m: \mathsf{Nat}. \Pi u:U.  (\app \ m\ (n_2+n_3)\ y \ (\app\ n_2\ n_3 \ v_2 \ v_3) = \app \ (m + n_2) \ n_3\ (\app \ m \ n_2 \ y \ v_2) \ v_3)\to (\app \ \suc m\ (n_2+n_3)\ (\mathsf{cons}\ m\ u\ y) \ (\app\ n_2\ n_3 \ v_2 \ v_3) = \app \ (\suc m + n_2) \ n_3\ (\app \ \suc m \ n_2 \ (\mathsf{cons}\ m\ u\ y) \ v_2) \ v_3)$. For the base case, $\app \ 0\ (n_2+n_3)\ \nil \ (\app\ n_2\ n_3 \ v_2 \ v_3) \to_{\beta}^* \app\ n_2\ n_3 \ v_2 \ v_3 \leftarrow_{\beta}^* \app \ (0 + n_2) \ n_3\ (\app \ 0 \ n_2 \ \nil \ v_2) \ v_3$. For the step case, we assume $\app \ m\ (n_2+n_3)\ y \ (\app\ n_2\ n_3 \ v_2 \ v_3) = \app \ (m + n_2) \ n_3\ (\app \ m \ n_2 \ y \ v_2) \ v_3$ (IH) and want to show $\app \ \suc m\ (n_2+n_3)\ (\mathsf{cons}\ m\ u\ y) \ (\app\ n_2\ n_3 \ v_2 \ v_3) = \app \ (\suc m + n_2) \ n_3\ (\app \ \suc m \ n_2 \ (\mathsf{cons}\ m\ u\ y) \ v_2) \ v_3$ (Goal).
We know that $\app \ \suc m\ (n_2+n_3)\ (\mathsf{cons}\ m\ u\ y) \ (\app\ n_2\ n_3 \ v_2 \ v_3) \to_{\beta}^* \cons\ (m+n_2+n_3)\ u \ (y\ \mathcal{X}\ (\app\ n_2\ n_3 \ v_2 \ v_3))$, where $\mathcal{X}:= \lambda n. \lambda x.\lambda v. \cons  (n+n_2+n_3)\ x\ v$. The left hand side of (IH) beta reduces to $y\ \mathcal{X}\ (\app\ n_2\ n_3 \ v_2 \ v_3)$. The right hand side of (Goal) reduces to $\app \ (\suc m + n_2) \ n_3\ (\cons\ (m+n_2)\ u \ (y\ \mathcal{C}\ v_2))\ v_3 \to_{\beta}^* \cons \ (m+n_2+n_3) \ u \ ((y\ \mathcal{C}\ v_2) \ \mathcal{Q}\ v_3)$, where $\mathcal{C}:= \lambda n. \lambda x.\lambda v. \cons  (n+n_2)\ x\ v$ and $\mathcal{Q}:= \lambda n. \lambda x.\lambda v. \cons  (n+n_3)\ x\ v$. The right hand side of (IH) reduces to $(y\ \mathcal{C}\ v_2) \ \mathcal{Q}\ v_3$.
So (IH) can be simplified to $y\ \mathcal{X}\ (\app\ n_2\ n_3 \ v_2 \ v_3) = (y\ \mathcal{C}\ v_2) \ \mathcal{Q}\ v_3$, and congruence over $f:= \cons \ (m+n_2+n_3) \ u$ gives us (Goal).
\end{proof}
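The associativity equation can also be checked concretely on the erased encodings. The following Python sketch builds small vectors and compares both sides of the equation after read-back; the helpers \texttt{num}, \texttt{vec} and \texttt{to\_list} are illustrative conversions, not part of the system.

```python
zero = lambda s: lambda z: z
succ = lambda n: lambda s: lambda z: s(n)
nil = lambda y: lambda x: x
cons = lambda n: lambda v: lambda l: lambda y: lambda x: y(n)(v)(l(y)(x))

def add(n, m):  # one-step unfolding of the recursive equation
    return n(lambda p: add(p, succ(m)))(m)

def app(n1, n2, l1, l2):  # append: fold l1 over l2, shifting indices
    return l1(lambda n: lambda x: lambda v: cons(add(n, n2))(x)(v))(l2)

def to_list(l):  # illustrative read-back helper
    return l(lambda n: lambda v: lambda rest: [v] + rest)([])

def num(k):  # illustrative: Scott numeral for a Python int
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

def vec(xs):  # illustrative: encoded vector from a Python list
    v, n = nil, zero
    for x in reversed(xs):
        v, n = cons(n)(x)(v), succ(n)
    return v
```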

\section{Developments inside System $\mathfrak{G}$}

\end{document}
