\documentclass{article} 
\usepackage{url} 
\usepackage{manfnt}
\usepackage{fullpage}
\usepackage{proof}
\usepackage{amssymb} 
%\usepackage{latexsym}
\usepackage{xcolor} 
%\usepackage{mathrsfs}
\usepackage{amsmath, amsthm}
\usepackage{diagrams}

\newcommand{\frank}[1]{\textcolor{blue}{\textbf{[#1 --Frank]}}}
% My own macros
\newcommand{\m}[2]{ \{\mu_{#1}\}_{#1 \in #2}} 
\newcommand{\M}[3]{\{#1_i \mapsto #2_i\}_{i \in #3}} 
\newcommand{\bm}[4]{
\{(#1_i:#2_i) \mapsto #3_i\}_{i \in #4}} 

\newcommand{\mlstep}[1]{\twoheadrightarrow_{\underline{#1}}}
\newcommand{\lstep}[1]{\to_{\underline{#1}}}
\newcommand{\mstep}[1]{\twoheadrightarrow_{#1}}
\newarrowfiller{dasheq} {==}{==}{==}{==}
\newarrow {Mapsto} |--->
\newarrow {Line} -----
\newarrow {Implies} ===={=>}
\newarrow {EImplies} {}{dasheq}{}{dasheq}{=>}
\newarrow {Onto} ----{>>}
\newarrow {Dashto}{}{dash}{}{dash}{>}
\newarrow {Dashtoo}{}{dash}{}{dash}{>>}

\newtheorem{prop}{Proposition}
\newtheorem{definition}{Definition}
\newtheorem{corollary}{Corollary}
\newtheorem{lemma}{Lemma}
\newtheorem{theorem}{Theorem}


\begin{document}
%\pagestyle{empty}
\title{Lambda Encoding, Types and Confluence }
\author{Peng Fu \\
Computer Science, The University of Iowa}
\date{Last edited: \today}


\maketitle \thispagestyle{empty}

\begin{abstract}
We review \textit{lambda encoded data} in both untyped and typed forms, and survey several methods for proving \textit{confluence}. To address the problems that arise with Church encoding in dependent type theory, we propose a novel type system called $\mathsf{Selfstar}$, which not only enables us to type both Scott-encoded and Church-encoded data, but also allows us to derive the corresponding induction and case-analysis principles. 

\end{abstract}

\section{Introduction}
%Problem of dependent typed lambda encoding (2 pages)
% The first paragraph is for general audience.
\subsection{Background}
Modern computers store instructions as well as data in memory \cite{von1993first}, leaving the distinction between data and instructions purely conceptual. Most programming languages distinguish a program from the data on which it operates, but in LISP and its dialects programs and data are essentially indistinguishable. Treating programs as data gives one the ability to manipulate programs, which leads to the idea of metaprogramming. Treating data as programs is less well known; perhaps the exploitation of buffer overflows \cite{dowd2006art} provides an intuitive example: the attacker supplies a large chunk of data (including malicious code) as input to a program, so that the malicious code gets executed. The idea of data as programs can be expressed more
naturally in lambda calculus; for example, the number $2$ can be encoded as a lambda term expressing the idea of doing something twice. Concretely, $2\ (f, a) = f\ (f\ (a))$ means the function $2$ takes two arguments $f$ and $a$, where $f$ is a function, and returns the result of applying $f$ twice to $a$; at the same time, $2$ as data can be operated on, say $\mathsf{Plus}\ (2, 1) = 3$. Though this idea of data as programs is familiar in functional programming languages, it has not been widely adopted for handling algebraic data types. One of the purposes of this report is to explore the possibility of this approach.
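As an informal illustration (in Python, with our own choice of names), the encoding of $2$ is just a function that applies its first argument twice:

```python
# "2" as a program: take a function f and a value a, then apply f twice
two = lambda f: lambda a: f(f(a))

# applying "add one" twice to 0 indeed gives the ordinary number 2
assert two(lambda x: x + 1)(0) == 2
```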

\textit{Lambda calculus} provides a syntactic way to model programming languages, in the sense that
function application and function construction can be expressed explicitly. For example, given a function (or program) denoted by the symbol $f$, the lambda expression corresponding to this function is $\lambda x.f\ x$, while calling this function on an argument denoted by the symbol $a$ can be expressed by the lambda expression $(\lambda x.f\ x)\ a$. The only rule for reducing a lambda expression is called beta reduction; for our example, $(\lambda x.f\ x)\ a \to_{\beta} f\ a$. (This way of understanding lambda calculus is inspired by Frege \cite{frege1967basic}.) This kind of syntactic abstraction may seem quite limited, but surprisingly it is strong enough to capture many powerful computational concepts such as recursion, which we will introduce later in this report. Due to its computational power, lambda calculus gives rise to the paradigm of functional programming.  

\textit{Types} were introduced into programming languages with the purpose of reducing bugs. It is hard to give a direct answer to what the meaning of a type is, but we can say that types
provide a way to express certain assumptions about programs and data, so that the compiler can
check whether these assumptions are met when composing them together. For example, when a programmer writes a function with type $\mathsf{int} \to \mathsf{int}$, he expects the program to
take an integer as input, with no need to worry about handling a string given
as input.

In functional programming languages like Haskell and OCaml, the programmer can be implicit about this kind of \textit{type} assumption: the compiler can automatically deduce the \textit{type} from the definition of the function, e.g. the type of the function $f\ n = n+1$ can be automatically inferred as $\mathsf{int} \to \mathsf{int}$. As the assumptions get heavier, say one wants to write a function that takes a number $n$ and returns an indexed string array of size $n$ (notationally, $\Pi n:\mathsf{Num}. \mathsf{Array}(\mathsf{String} , n)$), it is
often hard for the compiler to deduce the type from the program, and a certain amount
of annotation is needed. With this notion of type, one will reasonably want types to be invariant during the execution of a program. For example, let $f\ n = n+1$ and $g\ n = n+2$; one wants
the return value of $f\ (g\ 3)$ to have type $\mathsf{int}$. This requirement, reflecting the agreement between the type system and the actual execution, is a kind of \textit{soundness} property; we will formulate it later as the type preservation property. 



\subsection{Motivations}
It is well known that natural numbers can be encoded as lambda terms using
Church encoding \cite{Church:1985} or Scott encoding (reported in \cite{CHS:72}), so that operations such as \textit{plus} and
\textit{multiplication} can be performed by beta reduction (syntactic 
substitution) on lambda terms. Not only natural numbers but also
other interesting inductive data structures, such as trees and lists (\cite{Barendregt:97}, chapter 11 of \cite{Girard:1989}),
can be represented in a similar fashion. For discussions on the prospect of
adopting Scott encoding in functional programming, we refer to Stump \cite{Stump:2009} and Jansen et al. \cite{Jansen:2011}.

Through the Curry-Howard correspondence, a type in typed lambda calculi corresponds to
a formula in intuitionistic logic, and a typed term corresponds to a proof of its 
formula (type) \cite{Howard:1980}. Due to this feature, typed lambda calculi, especially 
dependently typed lambda calculi \cite{martin:1984}, have been included
in the core languages of interactive theorem provers such as Agda and Coq (\cite{Bove:2009}, \cite{Coq}), and
of experimental functional programming languages such as Epigram 2 and Guru (\cite{McBride:2005}, \cite{Stump:2009-2}). 
Another part of the core is the add-on datatype system, where various forms of data, including
but not limited to inductive \cite{Paulin:1993}, coinductive \cite{gimenez2005tutorial}, and non-positive \cite{Pfenning:1988} datatypes, are taken
as primitives. 
From the language design point of view, if one wants to adopt a rigorous design methodology and define a type-safe functional core language,
then it is necessary to show that the core language definition satisfies
\textit{type preservation} and \textit{progress} \cite{Wright:1994}. This requires a substantial amount of
effort spent writing proofs, even though most of the proofs can be done by case analysis and induction. 
For a minimal core language such as Barendregt's lambda cube \cite{Barendregt:92}, the type preservation argument is given. For a core language definition that extends the lambda cube (or part of it), it is necessary and practical to have a notion of datatype; but when datatype and pattern matching facilities are added to the core language, together with binders, bound variables, and alpha-conversion problems \cite{aydemir2005mechanized}, they substantially complicate the type preservation argument. For an example of this, see the Standard ML definition \cite{Milner:1997} and the type preservation report on Standard ML by VanInwegen \cite{vaninwegen1996machine}. Furthermore, if one wants the core language to be a 
total type theory, i.e. one usable for reasoning, then a \textit{termination} argument is required in order to show \textit{logical consistency} \cite{Gentzen:1964}. In this case, the presence of datatypes makes it hard to analyze the termination behavior of \textit{well-typed} terms. Indeed, for the core language of the theorem prover Coq, the \textit{Calculus of Inductive Constructions} (CIC), which extends the \textit{Calculus of Constructions} \cite{Coquand:1988} with inductive datatypes \cite{Coquand:1990} and restricted recursion, strong normalization of well-typed terms is still a conjecture \cite{Gregoire:2010}. 

%(problem with lambda encoding as programming language, or as logical system)
The above discussion leads to the consideration of lambda-encoded data as an alternative 
way to handle datatypes. Church-encoded data can be typed in System \textbf{F} \cite{Girard:72}, part of Barendregt's lambda cube, so type preservation and strong normalization are not an issue. The drawbacks of this approach, as summarized in \cite{Werner:92}, are that certain operations on datatypes, e.g. the \textit{predecessor} and \textit{minus} functions, are inefficient to define; that the induction principle is not derivable; and that one is unable to prove $0 \not = 1$. This gives reason to fall back to G\"odel's System \textbf{T} (chapter 7 of \cite{Girard:1989}), which takes booleans and natural numbers as primitives. Scott encoding does not suffer from the inefficiency problem that arises with Church encoding, so as a basis for functional programming languages, Scott encoding seems to be a better fit than Church encoding \cite{Jansen:2011}. Scott encoding was claimed to be typable in System \textbf{F} \cite{abadi93}, but it is unclear how to type recursive functions on such encodings in System \textbf{F}. 

We propose a novel type system called $\mathsf{Selfstar}$, which not only enables us to type Scott-encoded and Church-encoded data, but also allows us to derive the corresponding induction and case-analysis principles. This makes it a possible candidate for a core functional language.  

\subsection{Overview}

Definitions of abstract reduction systems, lambda calculus, and simple types are given in Section \ref{Pre}. We present Scott and Church numerals in both untyped and typed forms in Section \ref{Types}. Dependent type systems and the related problem with Church encoding are discussed in detail in Section \ref{Dep}. In Section \ref{Conf}, several methods to show \textit{confluence} are given. We give an outline of a confluence proof for the term system of $\mathsf{Selfstar}$ (Section \ref{Local}). The relation of confluence to type preservation is discussed in Section \ref{Conf:Presv}. We present the system $\mathsf{Selfstar}$ (Section \ref{Self}), which not only enables us to type Scott-encoded and Church-encoded data, but also allows us to derive the corresponding induction and case-analysis principles. 

%$\mathsf{Selfstar}$ is
\section{Preliminaries}
\label{Pre}
%Scott and Church encoding(untyped, system F, Fw with recursive type, 5-6 pages)
\subsection{Abstract Reduction System}

We first introduce some basic concepts about \textit{abstract reduction systems}, which are sometimes also called \textit{term rewriting systems} or \textit{labelled transition systems}. 
%We follow the definitions in \cite{bezem2003term} and \cite{baader1999term}. 

\begin{definition}
 An abstract reduction system $\mathcal{R}$ is a tuple $(\mathcal{A}, \{ \to_{i}\}_{i \in \mathcal{I}})$, where $\mathcal{A}$ is a set and each $\to_i$ is a binary relation (called a reduction) on $\mathcal{A}$, indexed by a finite nonempty set $\mathcal{I}$.   
\end{definition}

In an abstract reduction system $\mathcal{R}$, we write $a \to_i b$ if $a,b \in \mathcal{A}$ are related by $\to_i$. For convenience, $\to_i$ also denotes the subset of $\mathcal{A}\times \mathcal{A}$ such that $(a,b) \in \to_i$ iff $a \to_i b$. 

\begin{definition}
Given an abstract reduction system $(\mathcal{A}, \{ \to_{i}\}_{i \in \mathcal{I}})$, the reflexive transitive closure of $\to_i$, written $\twoheadrightarrow_i$ or $\stackrel{*}{\to}_i$, is defined by: 
\begin{itemize}
\item $m \twoheadrightarrow_i m$. 
\item $ m \twoheadrightarrow_i n$ if $m \to_i n $.
\item $ m \twoheadrightarrow_i l$ if $m \twoheadrightarrow_i n, n \twoheadrightarrow_i l $.
\end{itemize}
  
\end{definition}

\begin{definition}
Given an abstract reduction system $(\mathcal{A}, \{ \to_{i}\}_{i \in \mathcal{I}})$, the convertibility relation $=_i$ is defined as the equivalence relation generated by $\to_i$:   
\begin{itemize}
\item $ m =_i n$ if $m \twoheadrightarrow_i n $.
\item $ n =_i m$ if $m =_i n $. 
\item $ m =_i l$ if $m =_i n, n =_i l$.
\end{itemize}

\end{definition}

\begin{definition}
 We say $a$ is \textit{reducible} (with respect to $\to_i$) if there is a $b$ such that $a \to_i b$; thus $a$ is in $i$-\textit{normal form} if and only if $a$ is not reducible. We say $b$ is a normal form of $a$ with respect to $\to_i$ if $a \twoheadrightarrow_i b$ and $b$ is not reducible. Elements $a$ and $b$ are joinable if there is a $c$ such that $a \twoheadrightarrow_i c$ and $b \twoheadrightarrow_i c$. An abstract reduction system is strongly normalizing if there is no infinite
reduction path.
\end{definition}
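When the set of elements reachable from a given one is finite, these notions are directly computable. A small Python sketch (the \texttt{step} function, mapping an element to the list of its one-step reducts, is our own way of presenting a reduction relation):

```python
def reducts(a, step):
    """All b with a ->> b: the reflexive-transitive closure from a."""
    seen, todo = {a}, [a]
    while todo:
        x = todo.pop()
        for y in step(x):
            if y not in seen:
                seen.add(y)
                todo.append(y)
    return seen

def joinable(a, b, step):
    """a and b are joinable iff some c is reachable from both."""
    return bool(reducts(a, step) & reducts(b, step))

def normal_forms(a, step):
    """Normal forms of a: reachable elements with no outgoing reduction."""
    return {x for x in reducts(a, step) if not step(x)}
```

For instance, in the system $a \to b$, $a \to c$, $b \to d$, $c \to d$, the elements $b$ and $c$ are joinable, and $d$ is the unique normal form of $a$.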

\subsection{Lambda Calculus}
We use $x,y,z,s,n,x_1, x_2, ...$ to denote variables, $t,t', a,b, t_1, t_2, ... $ to denote terms, and $\equiv$ to denote syntactic equality. We write $[t'/x]t$ for the result of substituting $t'$ for the variable $x$ in $t$. The syntax and reduction rule of lambda calculus are given as follows.

\begin{definition}[Lambda Calculus]

\

\noindent Term  $t \ ::= \ x \ | \ \lambda x.t \ | \ t\  t'$ 

\noindent Reduction  $(\lambda x.t)t' \to_{\beta} [t'/x]t$ 
\end{definition}

\noindent For example, $(\lambda x.x\ x)(\lambda x.x\ x)$ and $\lambda y.y$ are concrete
terms of the lambda calculus. For a term $\lambda x.t$, we call $\lambda$ the \textit{binder}, and say $x$ is \textit{bound}, calling it a \textit{bound variable}. If a variable is not bound, we say it is a \textit{free} variable. We will treat terms up to $\alpha$-equivalence, meaning that for any
term $t$, one can always rename the bound variables in $t$. So, for example, $\lambda x.x\ x$ is
the same as $\lambda y.y\ y$, and $\lambda x.\lambda y.x\ y$ is the same as $\lambda z.\lambda x .z\ x$. The sequence $(\lambda x.\lambda y.x\ y)\underline{((\lambda z.z)z_1)} \to_{\beta} \underline{(\lambda x.\lambda y.x\ y)z_1} \to_{\beta} \lambda y.z_1\ y$ is a valid reduction sequence in lambda calculus. Note that for the reader's convenience we underline the part where the reduction is carried out (we will not do this again); the underlined term is called a \textit{redex}. For a comprehensive introduction to lambda calculus, we refer to \cite{Barendregt:1985}. 
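The substitution operation $[t'/x]t$ and one-step beta reduction can be made concrete with a small Python sketch; the tuple representation of terms and the helper names are our own illustrative choices, and bound variables are renamed with fresh names to avoid capture:

```python
# Terms: ("var", x) | ("lam", x, body) | ("app", t1, t2)
import itertools

_fresh = itertools.count()

def free_vars(t):
    if t[0] == "var":
        return {t[1]}
    if t[0] == "lam":
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def subst(t, x, s):
    """Capture-avoiding substitution [s/x]t."""
    if t[0] == "var":
        return s if t[1] == x else t
    if t[0] == "app":
        return ("app", subst(t[1], x, s), subst(t[2], x, s))
    y, body = t[1], t[2]
    if y == x:                      # x is shadowed, nothing to substitute
        return t
    if y in free_vars(s):           # alpha-rename to avoid capturing y
        y2 = f"{y}_{next(_fresh)}"
        body, y = subst(body, y, ("var", y2)), y2
    return ("lam", y, subst(body, x, s))

def step(t):
    """One leftmost-outermost beta step, or None if t is in normal form."""
    if t[0] == "app":
        if t[1][0] == "lam":        # redex: (lam x. b) s  ->  [s/x]b
            return subst(t[1][2], t[1][1], t[2])
        r = step(t[1])
        if r is not None:
            return ("app", r, t[2])
        r = step(t[2])
        return None if r is None else ("app", t[1], r)
    if t[0] == "lam":
        r = step(t[2])
        return None if r is None else ("lam", t[1], r)
    return None                     # a variable is in normal form

def normalize(t, limit=1000):
    """Iterate step until no redex remains (may not terminate in general)."""
    for _ in range(limit):
        r = step(t)
        if r is None:
            return t
        t = r
    raise RuntimeError("no normal form found within the step limit")
```

On the example above, \texttt{normalize} sends $(\lambda x.\lambda y.x\ y)((\lambda z.z)z_1)$ to $\lambda y.z_1\ y$, contracting the redexes in a different but equivalent order.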

\subsection{Simple Types}

We use $A, B, C, X,Y, Z, ...$ to denote type variables and $T, S, U, ...$ to denote types. 

\begin{definition}
\

\noindent Type $T \ :: =  \ X \ | \ T_1 \to T_2 $

\noindent Context $\Gamma \ ::= \ \cdot \ | \ \Gamma, x : T $
\end{definition}

We call $T_1 \to T_2$ an \textit{arrow type}. \textit{Typing} is a procedure that associates a term with a type. Typing is usually described by a set of rules indicating how to associate
a term $t$ with a type $T$ in a given context $\Gamma$, denoted by $\Gamma \vdash t:T$.  We present the \textit{simply typed lambda calculus} below.

\begin{definition}

\

\

\begin{tabular}{lll}
    
\infer[\textit{Var}]{\Gamma \vdash x:T}{(x:T) \in \Gamma}

&

\infer[\textit{Abs}]{\Gamma \vdash \lambda x.t :T_1 \to
T_2}{\Gamma, x:T_1 \vdash t: T_2}

&
\infer[\textit{App}]{\Gamma \vdash t\ t': T_2}{\Gamma
\vdash t:T_1 \to T_2 & \Gamma \vdash t': T_1}

\\
\end{tabular}
\label{typing-rules}
\end{definition}

The simply typed lambda calculus provides a basic framework for many sophisticated type systems. 
It is quite restrictive from both the logical and the programming point of view, since logically it 
corresponds to minimal intuitionistic propositional logic \cite{hindley1997basic} and it
only accepts a small set of strongly normalizing terms. It has two key properties, namely type preservation and strong normalization. For proofs of these two theorems we refer to \cite{Pierce:2002}. 

\begin{theorem}[Type Preservation]
  If $\Gamma \vdash t:T$ and $t \to_{\beta} t'$, then $\Gamma \vdash t':T$.
\end{theorem}

\begin{theorem}[Strong Normalization]
  If $\Gamma \vdash t:T$, then $t$ is strongly normalizing.
\end{theorem}

\section{Lambda Encoding with Types}
\label{Types}
\subsection{Church Encoding}

\begin{definition}[Church Numeral]

\

\noindent $0 \ := \lambda s.\lambda z. z $ 

\noindent $\mathsf{S} \ := \lambda n.\lambda s.\lambda z. s (n\ s\ z)$ 

\end{definition}

From the above we know $1 \ := \mathsf{S}\ 0 \equiv (\lambda n.\lambda s.\lambda z. s (n\ s\ z))(\lambda s.\lambda z. z) \to_{\beta} \lambda s.\lambda z. s ((\lambda s.\lambda z. z) s\ z) \to_{\beta} \lambda s.\lambda z. s\ z$.  Note that the last of the above reductions occurs underneath the lambda abstractions. Similarly we get $2\ :=  \lambda s.\lambda z. s \ (s\ z)$. 

Informally, we can interpret a lambda term as both data and function. So instead of thinking of $2$ as  
data, one can think of it as a higher-order function $h$, which takes a function $f$ and a datum $a$
as arguments, then applies the function $f$ to $a$ two times. 

One can define a notion of \textit{iterator}: $\mathsf{It}\ n\ f\ t \ := n\ f \ t$. Then $\mathsf{It}\ 0 \ f\ t =_{\beta} t $ and $\mathsf{It}\ (\mathsf{S}\ u) \ f\ t  =_{\beta} f\ (\mathsf{It}\ u \ f\ t) $. We can now use the iterator to define $\mathsf{Plus} \ n\ m := \mathsf{It}\ n\ \mathsf{S}\ m$.
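These definitions transcribe directly into Python lambdas; in this sketch, \texttt{to\_int}, \texttt{two}, and \texttt{three} are our own testing helpers, not part of the encoding:

```python
# Church numerals: n s z applies s to z exactly n times
zero = lambda s: lambda z: z
succ = lambda n: lambda s: lambda z: s(n(s)(z))   # the constructor S

it   = lambda n: lambda f: lambda t: n(f)(t)      # It n f t := n f t
plus = lambda n: lambda m: it(n)(succ)(m)         # Plus n m := It n S m

to_int = lambda n: n(lambda k: k + 1)(0)          # decode, for testing only

two   = succ(succ(zero))
three = plus(two)(succ(zero))
```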

%We define $\mathsf{plus} \ := \lambda m.\lambda n.\lambda s.\lambda z. m\ s (n\ s\ z)$.
\subsection{Scott Encoding}
\begin{definition}[Scott Numeral]

\

\noindent $0 \ := \lambda s.\lambda z. z $ 

\noindent $\mathsf{S} \ := \lambda n.\lambda s.\lambda z. s\ n$ 

\end{definition}

We can see $1 \ := \lambda s.\lambda z. (s\ 0)$ and $2 \ := \lambda s.\lambda z. (s\ 1)$. 
One can define a notion of \textit{recursor}, but before doing so, we give one
version of the \textit{fixed point operator}: $\mathsf{Fix} := \lambda f.(\lambda x.f\ (x\ x)) (\lambda x.f\ (x\ x))$. It is called a fixed point operator because, when applied to a lambda expression, it yields a
fixed point of that expression (recall that, informally, each lambda expression is both data and function).
So $\mathsf{Fix} \ g \to_{\beta} (\lambda x.g\ (x\ x)) (\lambda x.g\ (x\ x)) \to_{\beta} g\ ((\lambda x.g\ (x\ x))\ (\lambda x.g\ (x\ x)) ) =_{\beta} g\ (\mathsf{Fix}\ g) $. 

Since the fixed point operator is expressible as a lambda expression,
a direct consequence is that we can define a recursor: $\mathsf{Rec}\ := \ \mathsf{Fix}\ (\lambda r. \lambda n. \lambda f. \lambda v. n \ (\lambda m. f \ (r\ m\ f\ v)\ m)\ v)$. We get $\mathsf{Rec}\ 0\ f\ v {\twoheadrightarrow_{\beta}} v$ and $\mathsf{Rec}\ (\mathsf{S}\ n)\ f\ v {\twoheadrightarrow_{\beta}} f\ (\mathsf{Rec}\ n\ f\ v)\ n$. In a similar fashion, one can define $\mathsf{Plus} \ n\ m\ := \mathsf{Rec} \ n \ (\lambda x.\lambda y.\mathsf{S}\ x)\ m$. 

%Thus $\mathsf{Rec}\ 0\ f\ v \leadsto (\lambda r. \lambda n. \lambda f. \lambda v. n \ (\lambda m. f \ (r\ m\ f\ v)\ m)\ v) \ \mathsf{Rec}\ 0\ f\ v \leadsto 0 \ (\lambda m. f \ (\mathsf{Rec}\ m\ f\ v)\ m)\ v  \leadsto v$. And $\mathsf{Rec}\ (\mathsf{S}\ n)\ f\ v \leadsto (\lambda r. \lambda n. \lambda f. \lambda v. n \ (\lambda m. f \ (r\ m\ f\ v)\ m)\ v) \ \mathsf{Rec}\ (\mathsf{S}\ n)\ f\ v \leadsto (\mathsf{S}\ n) \ (\lambda m. f \ (\mathsf{Rec}\ m\ f\ v)\ m)\ v  \leadsto (\lambda m. f \ (\mathsf{Rec}\ m\ f\ v)\ m)\ n \leadsto f\ (\mathsf{Rec}\ n\ f\ v)\ n $. 

The predecessor function can easily be defined as $\mathsf{Pred}\ n\ :=  \mathsf{Rec}\ n\ (\lambda x.\lambda y.y)\ 0$. It takes only constant time (w.r.t. the number of beta reduction steps) to compute the predecessor. But this function is tricky to define with Church encoding: one needs first to define the recursor in terms of the iterator, and then use the recursor to define $\mathsf{Pred}$. To compute $\mathsf{Pred}\ n$ with Church encoding, one has to perform at least $n$ steps, so it takes linear time \cite{Girard:1989}. 
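A Python transcription of these definitions (helper names are our own; since Python is strict, $\mathsf{Fix}$ must be eta-expanded into its call-by-value variant, and the constant-time behavior of $\mathsf{Pred}$ holds only under normal-order reduction, not under Python's eager evaluation):

```python
# Scott numerals: S n stores its predecessor n itself
zero = lambda s: lambda z: z
succ = lambda n: lambda s: lambda z: s(n)

# call-by-value (eta-expanded) fixed point combinator
fix = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Rec := Fix (\r. \n. \f. \v. n (\m. f (r m f v) m) v)
rec = fix(lambda r: lambda n: lambda f: lambda v:
          n(lambda m: f(r(m)(f)(v))(m))(v))

plus = lambda n: lambda m: rec(n)(lambda x: lambda y: succ(x))(m)

# Pred discards the recursive result and keeps the stored predecessor
pred = lambda n: rec(n)(lambda x: lambda y: y)(zero)

def to_int(n):   # decode, for testing only
    return n(lambda m: 1 + to_int(m))(0)
```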

\subsection{Church Encoding in System \textbf{F}}

System \textbf{F} is an extension of the simply typed lambda calculus; the only addition is the \textit{polymorphic type} $\forall X.T$. Note that here $\forall$ is a binder. The additional typing rules are as follows:

\

\begin{tabular}{lll}
    
\infer[\textit{Gen}]{\Gamma \vdash t:\forall X.T}{\Gamma \vdash t:T & X \notin FV(\Gamma)}

&
&

\infer[\textit{Inst}]{\Gamma \vdash t :[T'/X]T
}{\Gamma \vdash t: \forall X. T}

\\

\\
\end{tabular}


\noindent $X \notin FV(\Gamma)$ means $X$ does not occur as a free type variable in the types of the typing context $\Gamma$. For example, with the above typing rules we can associate the identity function
with a polymorphic type, i.e. $\cdot \vdash \lambda x.x : \forall X.X \to X$. We also have 
$\cdot \vdash \lambda x.x : T \to T$ for any type $T$. 

We define  $\mathsf{Nat} \ := \forall X. (X \to X) \to X \to X$. One can type the constructors $0$ and $\mathsf{S}$
as follows.

\

\infer{\infer{\cdot \vdash \lambda s.\lambda z.z : \forall X. (X \to X) \to X \to X }{\cdot \vdash \lambda s.\lambda z.z :  (X \to X) \to X \to X}}{\infer{s:X \to X, z:X \vdash z:X}{}}

\

\noindent For space reasons, we only show the derivation of $\cdot \vdash 0 : \mathsf{Nat}$; similarly, one can derive:\\
\noindent $\cdot \vdash \mathsf{S} : \mathsf{Nat} \to \mathsf{Nat}$ \\
\noindent $\cdot \vdash \mathsf{It} : \forall X. \mathsf{Nat} \to (X \to X) \to X \to X$ \\
\noindent $\cdot \vdash \mathsf{Plus} :  \mathsf{Nat} \to \mathsf{Nat} \to \mathsf{Nat}$ 

\

System \textbf{F} and Church encoding fit together really well; indeed, being able to define inductive datatypes within the type system was one of the motivations for devising System \textbf{F} \cite{Girard:1989}. Through the Curry-Howard correspondence, one can view types in System \textbf{F} as intuitionistic propositions, where the quantified proposition $\forall X.T$ is \textit{impredicative} in the sense that $X$ can be instantiated by any proposition, including $\forall X.T$ itself, and terms of System \textbf{F} become proofs of propositions. System \textbf{F} also enjoys type preservation and strong normalization \cite{Girard:1989}. 

\subsection{Scott Encoding with Recursive Types}
It is not obvious how to type Scott encoding in System \textbf{F}. Instead, we extend the simply typed lambda calculus with \textit{recursive types} $\mu X.T$. Note that here $\mu$ is a binder.  The additional typing rules are as follows:

\

\begin{tabular}{lll}
    
\infer[\textit{Fold}]{\Gamma \vdash t:\mu X.T}{\Gamma \vdash t:[\mu X.T/X]T}

&
&

\infer[\textit{unFold}]{\Gamma \vdash t :[\mu X.T/X]T
}{\Gamma \vdash t: \mu X. T}

\\

\\
\end{tabular}

\

With recursive types, define $\mathsf{Nat} \ := \mu X. (X \to U) \to U \to U$ for any type $U$. We introduce the notation $T \sim T'$, called the \textit{morphing} relation, to mean that there exists a derivation from $\Gamma \vdash t:T$ to $\Gamma \vdash t:T'$. Thus $\mathsf{Nat} \sim (\mathsf{Nat} \to U) \to U \to U$.

\

\begin{tabular}{lll}

\infer{\infer{\cdot \vdash \lambda s.\lambda z.z : \mathsf{Nat} }{\cdot \vdash \lambda s.\lambda z.z :  (\mathsf{Nat} \to U) \to U \to U}}{\infer{s:\mathsf{Nat} \to U, z:U \vdash z:U}{}}

&

&
\infer{\infer{n:\mathsf{Nat} \vdash \lambda s.\lambda z.s \ n : \mathsf{Nat} }{n:\mathsf{Nat} \vdash \lambda s.\lambda z.s \ n :  (\mathsf{Nat} \to U) \to U \to U}}{\infer{n: \mathsf{Nat}, s:\mathsf{Nat} \to U, z:U \vdash s\ n:U}{}}

\\
\end{tabular}

\

\noindent $\cdot \vdash \mathsf{Fix}: (U \to U) \to U$ for any type $U$. 

\noindent $\cdot \vdash \mathsf{Rec}: \mathsf{Nat} \to (U \to \mathsf{Nat}\to U) \to U \to U$. 

\noindent $\cdot \vdash \mathsf{Plus}: \mathsf{Nat} \to \mathsf{Nat} \to \mathsf{Nat}$. 

\

Recursive types are powerful enough to capture the typing of Church encoding: define $\mathsf{Nat} \ := \mu X.(X \to X) \to X \to X$. Thus $\mathsf{Nat} \sim (\mathsf{Nat} \to \mathsf{Nat}) \to \mathsf{Nat} \to \mathsf{Nat}$.

\

\begin{tabular}{lll}

\infer{\infer{\cdot \vdash \lambda s.\lambda z.z : \mathsf{Nat} }{\cdot \vdash \lambda s.\lambda z.z :  (\mathsf{Nat} \to \mathsf{Nat}) \to \mathsf{Nat} \to \mathsf{Nat}}}{\infer{s:\mathsf{Nat} \to \mathsf{Nat}, z:\mathsf{Nat} \vdash z:\mathsf{Nat}}{}}

&

&
\infer{\infer{n:\mathsf{Nat} \vdash \lambda s.\lambda z.s \ (n\ s\ z) : \mathsf{Nat} }{n:\mathsf{Nat} \vdash \lambda s.\lambda z.s \ (n\ s\ z) :  (\mathsf{Nat} \to \mathsf{Nat}) \to \mathsf{Nat} \to \mathsf{Nat}}}{\infer{n: \mathsf{Nat}, s:\mathsf{Nat} \to \mathsf{Nat}, z:\mathsf{Nat} \vdash s\ (n\ s\ z):\mathsf{Nat}}{}}

\\
\end{tabular}

\

\noindent $\cdot\vdash \mathsf{It} : \mathsf{Nat} \to (\mathsf{Nat} \to \mathsf{Nat}) \to \mathsf{Nat} \to \mathsf{Nat} $. We can see here that, because of the lack of polymorphic types, the iterator for Church-encoded
numerals can only iterate functions on numerals. Note that recursive types cannot be interpreted as formulas under the
Curry-Howard correspondence. 

Recursive types and their denotational semantics have been studied extensively in \cite{Winskel:1993}, \cite{Barendregt:92}. The recursive type system is type preserving but not strongly normalizing. 
%One can already see here the tension between logical expressiveness and computability. 

\section{Dependent Type}
\label{Dep}
%% 1. Dependent elimination
%% 2. Multi-level 
%% (1-2 pages)

In order to allow types to mention terms, we extend the types of System \textbf{F} 
with the \textit{dependent type} (product type) $\Pi x:T.T'$ and the \textit{indexed type} $T\ t$. 
The additional typing rules are: 

\

\begin{tabular}{lll}
    
\infer[\textit{Pi}]{\Gamma \vdash \lambda x.t:\Pi x: T'.T}{\Gamma,x:T' \vdash t:T & x \in FV(T)}

&
&

\infer[\textit{Elim}]{\Gamma \vdash t\ t' :[t'/x]T}{\Gamma \vdash t: \Pi x:T'.T & \Gamma \vdash t':T' }

\\

\\

\infer[\textit{Conv}]{\Gamma \vdash t:[t_2/x]T}{\Gamma \vdash t:[t_1/x]T & t_1 =_{\beta} t_2}

\\
\end{tabular}

\

So far the typing rules do not prevent us from writing things like $(T_1 \to T_2)\ x$, so we introduce \textit{kinds} (denoted by $\kappa$) and a \textit{kinding} judgment to regulate the types we write. We extend the notion of context and change the form of the System \textbf{F} type $\forall X.T$ to $\forall X:\kappa.T$ to allow a finer classification of types. We say type $T$ has kind $\kappa$ under the context $\Gamma$, denoted by $\Gamma \vdash T:\kappa$. 


\begin{definition}[Kind and Kinding]
\noindent Kind $\kappa \ ::= \ * \ | \ \xi x:T.\kappa$

\noindent Context $\Gamma \ ::= \ \cdot \ | \ \Gamma,x:T \ | \ \Gamma, X:\kappa$


\

\begin{tabular}{lll}
    
\infer[\textit{K-Var}]{\Gamma \vdash X:\kappa}{X:\kappa \in \Gamma}

&
&

\infer[\textit{K-App}]{\Gamma \vdash S\ t' :[t'/x]\kappa}{\Gamma \vdash S: \xi x:T'.\kappa & \Gamma \vdash t':T' & \Gamma \vdash T':* }

\\

\\

\infer[\textit{K-Pi}]{\Gamma \vdash \Pi x:T'.T:*}{\Gamma \vdash T':* & \Gamma, x:T'\vdash T:* }

&

&

\infer[\textit{K-Forall}]{\Gamma \vdash \forall X:\kappa.T:*}{\Gamma, X:\kappa \vdash T:*  }

\\
\\

\infer[\textit{K-Arrow}]{\Gamma \vdash T_1 \to T_2:*}{\Gamma \vdash T_1:* & \Gamma \vdash T_2:* }

&

&

\\
\end{tabular}
\end{definition}

\noindent Note that the $\xi$ in $\xi x:T.\kappa$ is a binder whose scope is $\kappa$. 

\begin{definition}[Revised System \textbf{F} Typing]

\

\

  \begin{tabular}{lll}
    
\infer[\textit{Gen}]{\Gamma \vdash t:\forall X:\kappa.T}{\Gamma, X:\kappa \vdash t:T & X \notin FV(\Gamma)}

&
&

\infer[\textit{Inst}]{\Gamma \vdash t :[T'/X]T
}{\Gamma \vdash t: \forall X:\kappa. T & \Gamma \vdash T':\kappa}

\\

\\
\end{tabular}

\end{definition}

\

Now let us see an example. We have already seen that the Church numerals $\mathsf{Nat}$ with the operation $\mathsf{Plus}$ can 
be encoded in System \textbf{F}. With dependent types, we can do some lightweight external reasoning on 
our Church numerals. One can express \textit{Leibniz's law} as $\mathsf{Eq}\ [A] \ x \ y := \forall C:(\xi z:A.*). C\ x \to C\ y$. It reads: given any $x$ and $y$, they are in the relation $\mathsf{Eq}(x, y)$ if, for any predicate $C$, $C(x)$ implies $C(y)$ \cite{sep-identity-indiscernible}. Now we can show $\cdot \vdash \lambda x.x : \mathsf{Eq}\ [\mathsf{Nat}]\ (\mathsf{Plus}\ 1\ 1)\ 2$, by the following derivation:

\

\infer[\textit{Def}]
{\cdot \vdash \lambda x.x : \mathsf{Eq}\ [\mathsf{Nat}]\ (\mathsf{Plus}\ 1\ 1)\ 2 }
{
  \infer[Gen]{\cdot \vdash \lambda x.x : \forall C:(\xi z:\mathsf{Nat}.*). C\ (\mathsf{Plus}\ 1\ 1) \to C\ 2} {\infer[\textit{Abs}]{C:(\xi z:\mathsf{Nat}.*) \vdash \lambda x.x : C\ (\mathsf{Plus}\ 1\ 1) \to C\ 2} 
         {   \infer[\textit{Conv}]{C:(\xi z:\mathsf{Nat}.*) ,x :  C\ (\mathsf{Plus}\ 1\ 1) \vdash x :  C\ 2 } 
                   {    \infer[\textit{Var}]{C:(\xi z:\mathsf{Nat}.*) ,x :  C\ (\mathsf{Plus}\ 1\ 1) \vdash x :  C\ (\mathsf{Plus}\ 1\ 1)}{} & (\mathsf{Plus}\ 1\ 1) =_{\beta} 2
                   }
         }}
} 


\

\noindent Note that the last step is by the definition of $\mathsf{Eq}$. Similarly, 
the following derivation proves $\cdot \vdash \lambda a.\lambda x.x : \forall A:*.\Pi a: A.\ \mathsf{Eq}\ [A] \ a \ a$.

\
\infer[\textit{Gen}]{\cdot \vdash \lambda a.\lambda x.x :\forall A:*.\Pi a: A. \mathsf{Eq}\ [A]\ a\ a}{
\infer[\textit{Abs}]{A:*  \vdash \lambda a.\lambda x.x :\Pi a: A. \mathsf{Eq}\ [A]\ a\ a}{
\infer[\textit{Def}]
{A:*, a:A \vdash \lambda x.x : \mathsf{Eq}\ [A]\ a\ a }
{
  \infer[Gen]{A:*,a:A \vdash \lambda x.x : \forall C:(\xi z:A.*). C\ a \to C\ a} {\infer[\textit{Abs}]{A:*,a:A, C:(\xi z:A.*) \vdash \lambda x.x : C\ a \to C\ a} 
         {   \infer[\textit{Var}]{A:*,a:A, C:(\xi z:A.*), x:C\ a \vdash x : C\ a}{} 
         }}
} }}

\

\noindent The derivation above can be viewed as a \textit{proof} of \textit{reflexivity} for $\mathsf{Eq}$ (namely, of the formula $\forall A:*.\Pi a: A. \mathsf{Eq}\ [A]\ a\ a$); or it can be viewed as a procedure 
associating the type $\forall A:*.\Pi a: A. \mathsf{Eq}\ [A]\ a\ a$ with the lambda term $\lambda a.\lambda x.x$, 
since under the Curry-Howard correspondence, dependent types can be interpreted as intuitionistic formulas \cite{Barendregt:92}. One can also derive a proof of $\forall A:*. \Pi a:A.\Pi b:A.\forall B:(\xi x:A.*). B \ a \to (\mathsf{Eq}\ [A]\ a \ b) \to B\ b$, which is called the
\textit{substitution property} of Leibniz equality.  


%% Let the context $\Gamma \ := \ \mathsf{Nat}:\star, \mathsf{ListLen}: \Pi x:\mathsf{Nat}.\star,\mathsf{Nil}:\mathsf{ListLen}\ 0, \mathsf{Cons}:\Pi x:\mathsf{Nat}. (\mathsf{Nat} \to \mathsf{ListLen}\ x \to \mathsf{ListLen}\ x+1)$.
%% One can see:
%% \noindent $\Gamma \vdash \mathsf{Nil}: \mathsf{ListLen}\ 0$

%% \noindent $\Gamma \vdash \mathsf{Cons} \ 0\ 10\ \mathsf{Nil} : \mathsf{ListLen}\ 1$.

%% \noindent $\Gamma \vdash \mathsf{Cons}\ 1\ 11\ (\mathsf{Cons} \ 0\ 10\ \mathsf{Nil}) : \mathsf{ListLen}\ 2$.

%% \noindent $\Gamma \vdash \mathsf{Cons}\ \mathsf{n} : \mathsf{ListLen}\ \mathsf{n} \to \mathsf{ListLen}\ \mathsf{n}+1$ for any $\mathsf{n}:\mathsf{Nat}$.

%% \noindent $\Gamma \vdash \mathsf{It}: \mathsf{Nat} \to ()$

%\subsection{Problems of Church Encoding with Dependent Type}

We have seen that we can prove some very basic properties about Church numerals with the operation $\mathsf{Plus}$ and the relation $\mathsf{Eq}$. 
For functions like $\mathsf{Plus}$, one would also like to prove properties such as commutativity and associativity. Proofs of such properties normally involve an induction argument, but it is known that induction is not 
derivable in the second-order dependent type system \cite{Geuvers:2001}. Note that the induction principle can be expressed
as $\mathsf{Id}\ := \ \forall P: (\xi x: \mathsf{Nat}.*). P\ 0 \to (\Pi y:\mathsf{Nat}. (P \ y) \to (P\ (\mathsf{S}\ y))) \to \Pi x:\mathsf{Nat}.P\ x$. This means there is no term $t$ and derivation such that $\cdot \vdash t : \mathsf{Id}$. Informally, this is because we can only derive 

%% \noindent $P:(\xi x: \mathsf{Nat}.*), a : P\ 0, b : \Pi y:\mathsf{Nat}. (P \ y) \to (P\ (\mathsf{S}\ y)), x:\mathsf{Nat} \vdash t' : P\ x $ 

%% \noindent in the derivation. Let $\Gamma := P:(\xi x: \mathsf{Nat}.*), a : P\ 0, b : \Pi y:\mathsf{Nat}. (P \ y) \to (P\ (\mathsf{S}\ y)), x:\mathsf{Nat}$ . Then $\Gamma \vdash (b\ 0\ a): P\ (\mathsf{S}\ 0)$, $\Gamma \vdash (b\ 1\ (b\ 0\ a)): P\ (\mathsf{S}\ 1)$, etc. (or 

\noindent $x:\mathsf{Nat} \vdash 0 : \forall P: (\xi x: \mathsf{Nat}.*). P\ 0 \to (\Pi y:\mathsf{Nat}. (P \ y) \to (P\ (\mathsf{S}\ y))) \to P\ 0$ 

\noindent $x:\mathsf{Nat} \vdash \bar{n} : \forall P: (\xi x: \mathsf{Nat}.*). P\ 0 \to (\Pi y:\mathsf{Nat}. (P \ y) \to (P\ (\mathsf{S}\ y))) \to P\ \bar{n}$, for any Church numerals $\bar{n}$ 

\noindent Here we can see that the dependent type system cannot capture the notion of \textit{for any Church numeral $\bar{n}$}, since it involves a meta-level quantification. This problem is also discussed by Coquand \cite{coquand:inria-00075471}. In a later section we propose $\mathsf{Selfstar}$, which, with the help of recursive definitions and the self type mechanism, can effectively capture this kind of meta-level quantification. 

The second problem is that in dependent type theory with Church encoding \cite{Werner:92}, $0 \not = 1$ is not derivable. Note that $0 \not = 1$ can be represented as $(\mathsf{Eq}\ [\mathsf{Nat}]\ 0\ 1) \to \bot$, where $\bot\ := \forall Q:*.Q$. In
intuitionistic logic, $\neg A$ is expressed as $A \to \bot$, where $\bot$ means contradiction; it can be encoded in 
system \textbf{F} as $\forall Q:*.Q$, because there is no derivation of $\cdot \vdash t: \forall Q:*.Q$ \cite{Girard:1989}. Because of the non-derivability of $\bot$, $(\mathsf{Eq}\ [\mathsf{Nat}]\ 0\ 1) \to \bot$ becomes underivable. Thus $0 \not = 1$ is not a theorem in this dependent type system. 

In a sense these problems are not surprising; this is why 
\textit{Peano arithmetic} and \textit{Heyting arithmetic} \cite{peano1889arithmetices}, \cite{heyting1930formalen} both take numbers as primitives and adopt induction, $0 \not = 1$, and many other properties as axioms.  

\section{Confluence}
\label{Conf}
%(5-6 pages)

We have seen the use of \textit{reduction} ($\to_\beta$) to convey a notion of \textit{equation} ($=_{\beta}$). 
Informally, the equation $a\ =\ b$ means that $a$ and $b$ denote the same thing, while the reduction $a \to b$
conveys how one can obtain the expression $b$ from $a$, even though both $a$ and $b$ denote
the same thing. So to interpret the equation $\mathsf{S}\ 0\ +\ (\mathsf{S}\ \mathsf{S}\ 0)\ =\ \mathsf{S}\ \mathsf{S}\ \mathsf{S}\ 0$ by saying that the expressions on both sides
denote the same thing is uninteresting; but $\mathsf{S}\ 0\ +\ (\mathsf{S}\ \mathsf{S}\ 0)\ \to 0\ + \ (\mathsf{S}\ \mathsf{S}\ \mathsf{S}\ 0)\ \to \mathsf{S}\ \mathsf{S}\ \mathsf{S}\ 0$ shows how one can obtain the expression $\mathsf{S}\ \mathsf{S}\ \mathsf{S}\ 0$ from the expression $\mathsf{S}\ 0\ +\ (\mathsf{S}\ \mathsf{S}\ 0)$ by applying appropriate rules. The difference between reduction and equation is discussed by Girard (\cite{Girard:1989}, Chapter 1); for a philosophical perspective on this distinction (Frege's \textit{sense and denotation}), we refer to \cite{frege1967basic}.  
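The rewrite steps in this example are entirely mechanical. As an illustration (the term representation below is our own, not part of the text), the two rules used above can be implemented directly:

```python
# A minimal sketch of the two rewrite rules used in the example:
#   S m + n  ->  m + S n        and        0 + n  ->  n
# Numerals are 'Z' or ('S', t); sums are ('+', m, n).

def step(t):
    """Perform one reduction step at the top of the term, if possible."""
    if isinstance(t, tuple) and t[0] == '+':
        m, n = t[1], t[2]
        if m == 'Z':
            return n                      # 0 + n -> n
        if isinstance(m, tuple) and m[0] == 'S':
            return ('+', m[1], ('S', n))  # S m + n -> m + S n
    return t

t = ('+', ('S', 'Z'), ('S', ('S', 'Z')))  # S 0 + (S S 0)
while isinstance(t, tuple) and t[0] == '+':
    t = step(t)
print(t)  # ('S', ('S', ('S', 'Z')))
```

The trace of intermediate terms is exactly the reduction sequence displayed above.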

Beta reduction $\to_{\beta}$ will be analyzed in this section. We first identify the \textit{confluence} property and the \textit{Church-Rosser} property for abstract reduction systems (similar treatments can be found in \cite{bezem2003term} and \cite{baader1999term}) and illustrate some consequences of confluence and Church-Rosser; then we survey three methods to show that $\to_{\beta}$ reduction is confluent.

\begin{definition}
\label{c-r}
  Given an abstract reduction system $(\mathcal{A}, \{ \to_i\}_{i\in \mathcal{I}})$, let $\to$ denote $\bigcup_{i\in \mathcal{I}} \to_i$, let $=$ denote the equivalence relation generated by $\to$.
\begin{itemize}
\item Confluence: For any $a,b,c \in \mathcal{A}$, if $a \twoheadrightarrow b$ and $a \twoheadrightarrow c$, then there exists a $d \in \mathcal{A}$ such that $b \twoheadrightarrow d$ and $c \twoheadrightarrow d$. 

\item Church-Rosser: For any $a,b \in \mathcal{A}$, if $a = b$, then there is a $c \in \mathcal{A}$ such that $a \twoheadrightarrow c$ and $b \twoheadrightarrow c$.

\end{itemize}
\end{definition}

\noindent The two properties above can be expressed by following diagrams:

\
\begin{center}
\begin{tabular}{lll}
\begin{diagram}[size=1.5em,textflow]
 & & a & & \\
 & \ldOnto & & \rdOnto &  \\
 b & &  &  & c \\
 & \rdDashtoo & & \ldDashtoo &  \\
 & & d & & \\
\end{diagram}

&

&
\begin{diagram}[size=1.5em,textflow]
 a & & = &  & b \\
 & \rdDashtoo & & \ldDashtoo &  \\
 & & c & & \\
\end{diagram}

\end{tabular}
\end{center}
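For a \textit{finite} abstract reduction system, confluence can be checked mechanically by computing reflexive-transitive closures. The sketch below (the dictionary encoding of the relation is our own, purely illustrative) does exactly that:

```python
# Checking confluence of a *finite* abstract reduction system, given as a
# dict mapping each element to the list of its one-step reducts.

def reachable(rel, a):
    """The set of elements b with a ->> b (reflexive-transitive closure)."""
    seen, stack = {a}, [a]
    while stack:
        x = stack.pop()
        for y in rel.get(x, ()):
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def is_confluent(rel):
    """For all a ->> b and a ->> c, some d must join b and c."""
    elems = set(rel) | {y for ys in rel.values() for y in ys}
    for a in elems:
        reducts = reachable(rel, a)
        for b in reducts:
            for c in reducts:
                if not (reachable(rel, b) & reachable(rel, c)):
                    return False
    return True

# a -> b, a -> c, b -> d, c -> d: confluent (d joins b and c)
print(is_confluent({'a': ['b', 'c'], 'b': ['d'], 'c': ['d']}))  # True
# a -> b, a -> c with b, c distinct normal forms: not confluent
print(is_confluent({'a': ['b', 'c']}))                          # False
```

On infinite systems such as the lambda calculus this brute-force check does not apply, which is why the proof methods surveyed below are needed.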

\begin{lemma}
\label{Conf-CR}
  An abstract reduction system $\mathcal{R}$ is confluent iff it is Church-Rosser.
\end{lemma}
\begin{proof}
  Assume the same notation as Definition \ref{c-r}. 

 ``$\Leftarrow$'': Assume $\mathcal{R}$ is Church-Rosser. For any $a,b,c \in \mathcal{A}$, if $a \twoheadrightarrow b$ and $a \twoheadrightarrow c$, then this means $b = c$. By Church-Rosser, there is a $d \in \mathcal{A}$, such that $b \twoheadrightarrow d$ and $c \twoheadrightarrow d$. 

``$\Rightarrow$'': Assume $\mathcal{R}$ is Confluent. For any $a,b \in \mathcal{A}$, if $a = b$, then we show there is a $c \in \mathcal{A}$ such that $a \twoheadrightarrow c$ and $b \twoheadrightarrow c$ by induction on the generation of $a = b$:  

If $a = b$ is generated from $a \twoheadrightarrow b$, then let $c$ be $b$.

If $a = b$ is generated from $b = a$ by symmetry, then by induction there is a $c$ such that $b \twoheadrightarrow c$ and $a \twoheadrightarrow c$. 

If $a = b$ is generated from $a = d$ and $d = b$ by transitivity, then by induction there is a $c_1$ such that $a \twoheadrightarrow c_1$ and $d \twoheadrightarrow c_1$, and a $c_2$ such that $d \twoheadrightarrow c_2$ and $b \twoheadrightarrow c_2$. Since 
$d \twoheadrightarrow c_1$ and $d \twoheadrightarrow c_2$, by confluence there is a $c$ such that $c_1 \twoheadrightarrow c$ and $c_2 \twoheadrightarrow c$. So $a \twoheadrightarrow c_1 \twoheadrightarrow c$ and $b \twoheadrightarrow c_2 \twoheadrightarrow c$. This process is illustrated by the following diagram:

\begin{diagram}[size=1.5em,textflow]
 a &            & = &            & d &           & =  &           & b &  \\
   & \rdDashtoo &   & \ldDashtoo &   & \rdDashtoo &   & \ldDashtoo & & \\
   &            & c_1 &          &   &            & c_2 &            & & \\
   &            &  & \rdDashtoo         &   &    \ldDashtoo        &  &            & & \\
   &            &     &          & c  &            &  &            & & \\
\end{diagram}

\end{proof}

The definition of $=$ depends on $\twoheadrightarrow$, and the definition of $\twoheadrightarrow$ depends on $\to$. 
Confluence is often easier to prove than Church-Rosser, in the sense that it is easier to analyze $\twoheadrightarrow$ than $=$. Now let us see some consequences of confluence. 

\begin{corollary}
  If $\mathcal{R}$ is confluent, then every element in $\mathcal{A}$ has at most one normal form.
\end{corollary}
\begin{proof}
  Assume $a \in \mathcal{A}$ and that $b,c$ are two different normal forms of $a$. So we have $a \twoheadrightarrow b$
and $a \twoheadrightarrow c$; by confluence, there exists a $d$ such that $b \twoheadrightarrow d$ and $c \twoheadrightarrow d$. But $b,c$ are normal forms, so both $b$ and $c$ are the same as $d$, which contradicts the assumption that they are two different normal forms. 
\end{proof}

\begin{definition}
  An abstract reduction system $\mathcal{R}$ is trivial if 
$a = b$ for all $a , b \in \mathcal{A}$.
\end{definition}

\begin{corollary}
  If $\mathcal{R}$ is confluent and there are at least two different normal forms, then $\mathcal{R}$ is
not trivial.
\end{corollary}

\subsection{The Tait--Martin-L\"of Method}

We want to show that the lambda calculus, viewed as an abstract reduction system, is confluent. We present
a method for proving confluence in abstract reduction systems due to W. Tait and P. Martin-L\"of (reported in \cite{Barendregt:1985}). Then we show how this
method applies to the lambda calculus. 

\begin{definition}[Diamond Property]
  Given an abstract reduction system $(\mathcal{A}, \{ \to_i\}_{i\in \mathcal{I}})$, it has the diamond property if:

 for any $a, b, c \in \mathcal{A}$, if $a \to b$ and $a \to c$, then there exists a $d \in \mathcal{A}$ such that $b \to d$ and $c \to d$.

\begin{diagram}[size=1.5em,textflow]
 & & a & & \\
 & \ldTo & & \rdTo &  \\
 b & &  &  & c \\
 & \rdDashto & & \ldDashto &  \\
 & & d & & \\
\end{diagram}


\end{definition}

\begin{lemma}
  If $\mathcal{R}$ has the diamond property, then it is confluent.
\end{lemma}
\begin{proof}
By simple diagram chasing, as suggested below:


\begin{diagram}[size=1.5em,textflow]
   &            &     &          & a  &            &  &            & & \\
   &            &     & \ldTo &   & \rdTo &   &  & & \\
   &            & c_1 &          &   &          & c_2 &            & & \\
   &  \ldTo     &     & \rdDashto        &   &       \ldDashto    &  &   \rdTo      & & \\
  e &            &     &            & d &           &   &           & b &  \\
   & \rdDashto &   & \ldDashto &   & \rdDashto &   & \ldDashto & & \\
   &            & c_1 &          &   &            & c_2 &            & & \\
   &            &  & \rdDashto         &   &    \ldDashto        &  &            & & \\
   &            &     &          & c  &            &  &            & & \\
\end{diagram}
\end{proof}

\begin{lemma}
\label{Subeq}
  If there exists some $\to_i$ with $\to \subseteq \to_i \subseteq \twoheadrightarrow$ such that $\to_i$ satisfies the diamond property, then 
$\to$ is confluent.
\end{lemma}
\begin{proof}
  Since $\to \subseteq \to_i \subseteq \twoheadrightarrow$ implies $\twoheadrightarrow \subseteq {\twoheadrightarrow_i} \subseteq {\twoheadrightarrow}$, we have $\twoheadrightarrow_i = {\twoheadrightarrow}$. The diamond property of $\to_i$ implies that $\to_i$ is confluent, which in turn implies the confluence of $\to$. 
\end{proof}

Sometimes $\to$ does not satisfy the diamond property; in that case one can try to construct an 
intermediate reduction $\to_i$ that does. That is exactly what we will do for the lambda
calculus.

\subsubsection{Confluence of the Lambda Calculus}

Beta reduction itself does not satisfy the diamond property. For example, $(\lambda x.((\lambda u.u)\ v))\ ((\lambda y.y\ y)\ z) \to_{\beta} (\lambda x.((\lambda u.u)\ v))\  (z\ z)$ and $(\lambda x.((\lambda u.u)\ v))\ ((\lambda y.y\ y)\ z) \to_{\beta} (\lambda u.u)\ v$. One cannot join $(\lambda u.u)\ v$ and $(\lambda x.((\lambda u.u)\ v))\  (z\ z)$ in one step, although they are still joinable in several steps. This leads to the notion of parallel reduction. 

\begin{definition}[Parallel Reduction]
\

\

  \begin{tabular}{llll}


\infer{ t \Rightarrow_{\beta} t}{}

&

\infer{\lambda x.t \Rightarrow_{\beta} \lambda x.t'}{t \Rightarrow_{\beta} t'}

&
\infer{t_1 t_2 \Rightarrow_{\beta} t_1' t_2'}{t_1 \Rightarrow_{\beta} t_1' & t_2 \Rightarrow_{\beta} t_2'}

&

\infer{(\lambda x.t_1) t_2 \Rightarrow_{\beta} [t_2'/x]t_1' }{t_1 \Rightarrow_{\beta} t_1' & t_2 \Rightarrow_{\beta} t_2'}

\\
\end{tabular}
\end{definition}

Intuitively, parallel reduction allows us to contract many beta redexes (or none at all) in one step. Under this notion of 
one-step reduction, we can obtain the diamond property for $\Rightarrow_{\beta}$. 

\begin{lemma}
\label{Par:sub}
  If $ t_1 \Rightarrow_{\beta} t_1'$ and $ t_2 \Rightarrow_{\beta} t_2'$, then $[t_2/x]t_1 \Rightarrow_{\beta} [t_2'/x]t_1'$. 
\end{lemma}
\begin{proof}
  By induction on the derivation of $ t_1 \Rightarrow_{\beta} t_1'$. We will not prove this here.
\end{proof}

\begin{lemma}
\label{Par}
  $\Rightarrow_{\beta}$ satisfies diamond property.
\end{lemma}
\begin{proof}
  Assume $t \Rightarrow_{\beta} t_1$ and $t \Rightarrow_{\beta} t_2$, we need to show
there exists a $t_3$ such that $t_1 \Rightarrow_{\beta} t_3$ and $t_2 \Rightarrow_{\beta} t_3$.
We prove this by induction on the derivation of $t \Rightarrow_{\beta} t_1$. 

\

\noindent \textbf{Case}:  
\infer{ t \Rightarrow_{\beta} t}{}
Simply let $t_3$ be $t$. 

\

\noindent \textbf{Case}:  
\infer{\lambda x.t' \Rightarrow_{\beta} \lambda x.t''}{t' \Rightarrow_{\beta} t''}

In this case $t$ is of the form $\lambda x.t'$, where $t' \Rightarrow_{\beta} t''$; $t_1$ is of the form $\lambda x.t''$. $t_2$ must be of the form $\lambda x.t'''$, where $t' \Rightarrow_{\beta} t'''$. Thus by induction, we have a $t_3'$ such that $t'' \Rightarrow_{\beta} t_3'$ and $t''' \Rightarrow_{\beta} t_3'$. Thus let $t_3$ be $\lambda x.t_3'$, we get $t_1 \equiv \lambda x.t'' \Rightarrow_{\beta} \lambda x.t_3' \equiv t_3$ and $t_2 \equiv \lambda x.t'''\Rightarrow_{\beta} \lambda x.t_3' \equiv t_3$. 

\

\noindent \textbf{Case}:  
\infer{(\lambda x.t_4) t_5 \Rightarrow_{\beta} [t_5'/x]t_4' }{t_4 \Rightarrow_{\beta} t_4' & t_5 \Rightarrow_{\beta} t_5'}

In this case $t$ is of the form $(\lambda x.t_4) t_5$,  $t_1$ is of the form $[t_5'/x]t_4'$,  $t_4 \Rightarrow_{\beta} t_4' $ and $ t_5 \Rightarrow_{\beta} t_5'$. 

Suppose $t_2$ is of the form $(\lambda x.t_4'') t_5'' $, where $t_4 \Rightarrow_{\beta} t_4''$ and $t_5 \Rightarrow_{\beta} t_5''$. By induction, we have a $t_6$ such that $t_5'' \Rightarrow_{\beta} t_6$ and $t_5' \Rightarrow_{\beta} t_6$; similarly, by induction there is a $t_7$ such that $t_4'' \Rightarrow_{\beta} t_7$ and $t_4' \Rightarrow_{\beta} t_7$. Let $t_3$ be $[t_6/x]t_7$; we get $t_1 \equiv [t_5'/x]t_4'  \Rightarrow_{\beta} [t_6/x]t_7 \equiv t_3$ (by Lemma \ref{Par:sub}) and $t_2 \equiv (\lambda x.t_4'') t_5'' \Rightarrow_{\beta} [t_6/x]t_7 \equiv t_3$. 

Suppose $t_2$ is of the form $[t_5''/ x]t_4'' $, where $t_4 \Rightarrow_{\beta} t_4''$ and $t_5 \Rightarrow_{\beta} t_5''$. By induction, we have a $t_6$ such that $t_5'' \Rightarrow_{\beta} t_6$ and $t_5' \Rightarrow_{\beta} t_6$; similarly, by induction there is a $t_7$ such that $t_4'' \Rightarrow_{\beta} t_7$ and $t_4' \Rightarrow_{\beta} t_7$. Let $t_3$ be $[t_6/x]t_7$; by Lemma \ref{Par:sub}, we get $t_1 \equiv [t_5'/x]t_4'  \Rightarrow_{\beta} [t_6/x]t_7 \equiv t_3$ and $t_2 \equiv [t_5''/ x]t_4'' \Rightarrow_{\beta} [t_6/x]t_7 \equiv t_3$. 

Note: the careful reader is encouraged to draw the corresponding diagrams while reading this proof.
 
\

\noindent \textbf{Case}:  
\infer{t_4 t_5 \Rightarrow_{\beta} t_4' t_5'}{t_4 \Rightarrow_{\beta} t_4' & t_5 \Rightarrow_{\beta} t_5'}

Similar to the arguments above. 

\end{proof}

\begin{lemma}
\label{Par:eq}
  $\to_{\beta} \subseteq \Rightarrow_{\beta} \subseteq \twoheadrightarrow_{\beta}$. 
\end{lemma}

\begin{theorem}
  $\to_{\beta}$ reduction is confluent.
\end{theorem}
\begin{proof}
  By lemma \ref{Subeq}, lemma \ref{Par} and lemma \ref{Par:eq}.
\end{proof}

\subsubsection{Takahashi's Method}
Based on the notion of parallel reduction, Takahashi \cite{Takahashi95} observed that, instead of trying to prove that $\Rightarrow_{\beta}$ has the diamond property directly, one can prove a stronger property. 

\begin{definition}
$\Rightarrow_{\beta}$ is said to satisfy the triangle property if $t \Rightarrow_{\beta} t'$ implies $t' \Rightarrow_{\beta} t^*$, where $t^*$ (which we call the \textit{parallel contraction} of $t$) is defined as:   

\noindent  $x^* \ := \ x$.

\noindent  $(\lambda x.t)^* \ := \ \lambda x. t^*$.

\noindent  $(t_1\ t_2)^* \ := \ t_1^* \ t_2^*$ if $t_1 \ t_2$ is not a beta redex.

\noindent  $((\lambda x.t_1)\ t_2)^* \ := \ [t_2^*/x]t_1^* $.

\begin{diagram}[size=1.5em,textflow]
 & & t & & \\
 &  & & \rdImplies &  \\
  & & \dMapsto &  & t' \\
 &  & & \ldEImplies &  \\
 & & t^* & & \\
\end{diagram}

\end{definition}

One can see that the definition of $t^*$ depends only on $t$: $\_^*$ is a recursively defined function that contracts all the redexes in $t$. So once we prove that $\Rightarrow_{\beta}$ has the triangle property (the name is from \cite{bezem2003term}), the diamond property follows. 
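Since $\_^*$ is an ordinary recursive function, it can be implemented directly. The sketch below uses our own illustrative term representation; the substitution is naive, so we assume all bound variables are distinct (no capture):

```python
# Sketch of the parallel contraction t*:
# terms are ('var', x), ('lam', x, body), ('app', t1, t2).

def subst(t, x, s):
    """[s/x]t, assuming no variable capture."""
    tag = t[0]
    if tag == 'var':
        return s if t[1] == x else t
    if tag == 'lam':
        return ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def star(t):
    """Contract every beta redex of t in one pass."""
    tag = t[0]
    if tag == 'var':
        return t
    if tag == 'lam':
        return ('lam', t[1], star(t[2]))
    t1, t2 = t[1], t[2]
    if t1[0] == 'lam':                      # ((lam x. b) t2)* = [t2*/x]b*
        return subst(star(t1[2]), t1[1], star(t2))
    return ('app', star(t1), star(t2))      # (t1 t2)* = t1* t2* otherwise

# ((lam x. x x) (lam y. y))* = (lam y. y) (lam y. y)
t = ('app', ('lam', 'x', ('app', ('var', 'x'), ('var', 'x'))),
     ('lam', 'y', ('var', 'y')))
print(star(t))  # ('app', ('lam', 'y', ('var', 'y')), ('lam', 'y', ('var', 'y')))
```

Note that `star` performs only one complete pass: applying it again to the result would contract the newly created redex as well.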

\begin{lemma}
  If $\Rightarrow_{\beta}$ has triangle property, then it has diamond property.
\end{lemma}
\begin{proof}
   
\begin{diagram}[size=1.6em,textflow]
 & & t & & \\
 & \ldImplies & & \rdImplies &  \\
 t_1 & & \dMapsto &  & t_2 \\
 &\rdEImplies  & & \ldEImplies &  \\
 & & t^* & & \\
\end{diagram}
\end{proof}

\begin{lemma}
  $\Rightarrow_{\beta}$ has triangle property. 
\end{lemma}
\begin{proof}
Assume $t \Rightarrow_{\beta} t'$, we prove this by induction on the derivation of $t \Rightarrow_{\beta} t'$.

\

\noindent \textbf{Case}:  
\infer{ t \Rightarrow_{\beta} t}{}

We need to show $t \Rightarrow_{\beta} t^*$. This can be proved by induction on the form of $t$; we will
not go through the proof here. 

\

\noindent \textbf{Case}:  
\infer{\lambda x.t_1 \Rightarrow_{\beta} \lambda x.t_1'}{t_1 \Rightarrow_{\beta} t_1'}

$t$ is of the form $\lambda x.t_1$, where $t_1 \Rightarrow_{\beta} t_1'$; $t'$ is of the form $\lambda x.t_1'$. By induction, there exists a reduction $t_1' \Rightarrow_{\beta} t_1^*$. Thus there is a reduction $\lambda x.t_1'  \Rightarrow_{\beta} \lambda x.t_1^* \equiv (\lambda x.t_1)^*$. 

\

\noindent \textbf{Case}:  
\infer{(\lambda x.t_4) t_5 \Rightarrow_{\beta} [t_5'/x]t_4' }{t_4 \Rightarrow_{\beta} t_4' & t_5 \Rightarrow_{\beta} t_5'}

$t$ is of the form $(\lambda x.t_4) t_5$,  $t'$ is of the form $[t_5'/x]t_4'$,  $t_4 \Rightarrow_{\beta} t_4' $ and $ t_5 \Rightarrow_{\beta} t_5'$. By induction, there are reductions $t_4' \Rightarrow_{\beta} t_4^*$ and $t_5' \Rightarrow_{\beta} t_5^*$. Thus there is a reduction $[t_5'/x]t_4' \Rightarrow_{\beta} [t_5^*/x]t_4^* \equiv ((\lambda x.t_4)\ t_5)^*$ (Lemma \ref{Par:sub}). 

 
\

\noindent \textbf{Case}:  
\infer{t_4 t_5 \Rightarrow_{\beta} t_4' t_5'}{t_4 \Rightarrow_{\beta} t_4' & t_5 \Rightarrow_{\beta} t_5'}

$t$ is of the form $t_4\ t_5$,  $t'$ is of the form $t_4' \ t_5'$,  $t_4 \Rightarrow_{\beta} t_4' $ and $ t_5 \Rightarrow_{\beta} t_5'$. 

Assume $t_4 \equiv \lambda x.t_6$; then $t_4'$ must be of the form $\lambda x.t_6'$ with $t_6 \Rightarrow_{\beta} t_6' $. By induction, there are reductions $t_6' \Rightarrow_{\beta} t_6^*$ and $t_5' \Rightarrow_{\beta} t_5^*$. So $t_4'\ t_5' \equiv (\lambda x.t_6')\ t_5' \Rightarrow_{\beta} [t_5^*/x] t_6^* \equiv ((\lambda x.t_6)\ t_5)^* \equiv (t_4\ t_5)^*$. 

Assume $t_4 $ is not of the form $\lambda x.t_6$. By induction, there are reductions $t_4' \Rightarrow_{\beta} t_4^*$ and $t_5' \Rightarrow_{\beta} t_5^*$. Thus there is a reduction $t_4' t_5' \Rightarrow_{\beta} t_4^*\ t_5^* \equiv (t_4\ t_5)^*$. 

\end{proof}

%% Takahashi's method benefits from the existence of parallel contraction, there is some parallel reduction
%% system in which is hard to get a well-defined parallel contraction, even hard to obtain the triangle property, but a direct proof of diamond property is still possible(citemynote).  


\subsubsection{Barendregt's Labelling Method}

Barendregt provides a method to prove the confluence of beta reduction for the lambda calculus without appealing 
to the diamond property \cite{Barendregt:1985}. It has an advantage over the Tait--Martin-L\"of (and Takahashi's) method
in that one does not need to formulate parallel reduction. 

The new concepts involved are \textit{labelled terms} and \textit{labelled reduction}, both of which are extensions of the usual terms and reduction. 

\begin{definition}[Labelled Terms]

\

\noindent $t \ :: = \ x \ |  \ \lambda x.t \ | \ t t'  \ | \ (\underline{\lambda} x.t)t'$ 

\end{definition}

We simply label certain beta redexes. Note that $\underline{\lambda}x.t$ on its own is not a well-formed labelled term, but $ (\underline{\lambda} x.t)t'$ is. The labelled beta reduction extends the usual beta reduction \textit{naturally}, in the sense that it can also reduce labelled beta redexes. 

\begin{definition}[Labelled Beta Reduction]

\

\begin{tabular}{llll}

 \infer{(\lambda x.t)t' \lstep{\beta} [t'/x]t}{}

&

\infer{(\underline{\lambda} x.t)t' \lstep{\beta} [t'/x]t}{}

&

 \infer{\lambda x.t \lstep{\beta} \lambda x.t'}{t \lstep{\beta}t' }

&

\infer{t t' \lstep{\beta} t'' t'}{ t \lstep{\beta} t''}

\\

\\

\infer{t t' \lstep{\beta} t t''}{t' \lstep{\beta}t'' }

&

\infer{(\underline{\lambda}x.u) t' \lstep{\beta} (\underline{\lambda}x.u') t'}{ u \lstep{\beta} u' }

& 

\infer{(\underline{\lambda}x.u) t' \lstep{\beta} (\underline{\lambda}x.u) t''}{ t' \lstep{\beta} t'' }

\\
\end{tabular}
\end{definition}


 It is easy to check that if $t$ is a well-formed labelled term and $t \lstep{\beta} t'$, then $t'$ is also a well-formed labelled term, by induction on the derivation of $t \lstep{\beta} t'$. We will use $\underline{\Lambda}$ to denote the set of all labelled terms and $\Lambda$ to denote the set of unlabelled terms, so $\Lambda \subset \underline{\Lambda}$. As usual, $\twoheadrightarrow_{\underline{\beta}}$ denotes the reflexive and transitive closure of $\to_{\underline{\beta}}$, where $\to_{\underline{\beta}} $ is defined on terms in $\underline{\Lambda}$. Note that $\to_{\beta} \subseteq \lstep{\beta}$.

\begin{definition}[Erasure]
We define the erasure function $e: \underline{\Lambda} \to \Lambda$ as below:

\

\noindent $e(x)\ := x$

\noindent $e(t t')\ := e(t) e(t')$

\noindent $e(\lambda x. t')\ := \lambda x. e(t')$

\noindent $e((\underline{\lambda} x.t) t')\ :=  (\lambda x.e(t)) e(t')$

\noindent In diagrams, erasure is denoted by $\to_{e}$.

\end{definition}

\begin{definition}[Contraction]

\noindent We define a contraction function $\phi: \underline{\Lambda} \to \Lambda$ as below:  

\

\noindent $\phi(x)\ := x$

\noindent $\phi(t t')\ := \phi(t) \phi(t')$

\noindent $\phi(\lambda x. t')\ := \lambda x. \phi(t')$

\noindent $\phi((\underline{\lambda} x.t) t')\ :=  [\phi(t')/x]\phi(t)$

\noindent In diagrams, contraction is denoted by $\to_{\phi}$.

\end{definition}

The erasure function recursively removes all the labels in a term $t \in \underline{\Lambda}$ without
changing the structure of the term. The contraction function recursively reduces all the labelled redexes in a labelled term. 
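Both functions are straightforward structural recursions. The sketch below uses our own illustrative representation, with `('lapp', x, b, t2)` standing for the labelled redex $(\underline{\lambda}x.b)\,t'$; substitution is naive, so we assume no variable capture:

```python
# Sketch of erasure e and contraction phi on labelled terms:
# ('var', x), ('lam', x, b), ('app', t1, t2), and
# ('lapp', x, b, t2) for the labelled redex (underlined-lam x. b) t2.

def subst(t, x, s):
    """[s/x]t, assuming no variable capture."""
    tag = t[0]
    if tag == 'var':
        return s if t[1] == x else t
    if tag == 'lam':
        return ('lam', t[1], subst(t[2], x, s))
    if tag == 'app':
        return ('app', subst(t[1], x, s), subst(t[2], x, s))
    return ('lapp', t[1], subst(t[2], x, s), subst(t[3], x, s))

def erase(t):
    """e(t): forget the labels without changing the term's structure."""
    tag = t[0]
    if tag == 'var':
        return t
    if tag == 'lam':
        return ('lam', t[1], erase(t[2]))
    if tag == 'app':
        return ('app', erase(t[1]), erase(t[2]))
    return ('app', ('lam', t[1], erase(t[2])), erase(t[3]))

def phi(t):
    """phi(t): contract every labelled redex."""
    tag = t[0]
    if tag == 'var':
        return t
    if tag == 'lam':
        return ('lam', t[1], phi(t[2]))
    if tag == 'app':
        return ('app', phi(t[1]), phi(t[2]))
    return subst(phi(t[2]), t[1], phi(t[3]))

t = ('lapp', 'x', ('var', 'x'), ('var', 'y'))  # (underlined-lam x. x) y
print(erase(t))  # ('app', ('lam', 'x', ('var', 'x')), ('var', 'y'))
print(phi(t))    # ('var', 'y')
```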

\begin{lemma}
\label{e/m}
If $t_1 \to_{\beta} t_2$ and $t_1' \to_{e} t_1$, then there exists a $t_2'$ such that $t_1' \to_{\underline{\beta}} t_2'$ and $t_2' \to_{e} t_2$. 

\begin{diagram}[size=2em,textflow]
t_1' & \rDashto_{\underline{\beta}} & t_2' \\
\dTo_{e} & &  \dDashto_{e} \\
t_1 &  \rTo_{\beta}  & t_2 \\
\end{diagram}

\end{lemma}
\begin{proof}
\noindent If $t_1 \to_{\beta} t_2$, then $t_2$ is obtained by contracting one redex $\Delta$ in $t_1$. Reducing $\Delta$ (either labelled or unlabelled) in $t_1'$ yields a term $t_2'$ (recall that $\to_{\beta} \subseteq \lstep{\beta}$) with $e(t_2') = t_2$.   

\end{proof}
\begin{lemma}

\begin{diagram}[size=2em,textflow]
t_1' & \rDashtoo_{\underline{\beta}} & t_2' \\
\dTo_{e} & &  \dDashto_{e} \\
t_1 &  \rOnto_{\beta}  & t_2 \\
\end{diagram}


\end{lemma}

\begin{proof}
By Lemma \ref{e/m} and transitivity.
\end{proof}

\begin{lemma}
\label{contr}
$\phi([t'/x]t) = [\phi(t')/x]\phi(t)$.

\end{lemma}

\begin{proof}
By induction on the structure of labelled term $t$.
\end{proof}

\begin{lemma}
\begin{diagram}[size=2em,textflow]
t_1 & \rTo_{\underline{\beta}} & t_2 \\
\dTo_{\phi} & &  \dTo_{\phi} \\
\phi(t_1) &  \rDashtoo_{\beta}  & \phi(t_2) \\
\end{diagram}
\end{lemma}

\begin{proof}
By induction on the derivation of $t_1 \lstep{\beta} t_2$. Using lemma \ref{contr}.
\end{proof}


\begin{lemma}
\begin{diagram}[size=2em,textflow]
 & & t & & \\
 & \ldTo^e & & \rdTo^{\phi} &  \\
e(t) & & \rDashtoo_{\beta} &  & \phi(t) \\
\end{diagram}
\end{lemma}

\begin{proof}
By induction on the structure of $t$.
\end{proof}

\begin{lemma}[Strip Lemma]
\begin{diagram}[size=2em,textflow]
 & & t & & \\
 & \ldTo^{\beta} & & \rdOnto^{\beta} &  \\
t_1 & &  &  & t_2 \\
 & \rdDashtoo_{\beta} & & \ldDashtoo_{\beta} &  \\
 & & t' & & \\
\end{diagram}
\end{lemma}

\begin{proof}
Let $t_1$ be the result of reducing the redex $\Delta$ in $t$. Let $t_3 \in \underline{\Lambda}$ be the term obtained from $t$ by labelling $\Delta$. So $\phi(t_3) \equiv t_1$. 
By the following diagram:

\begin{diagram}[size=3em,textflow]
 & & t & & & \\
 & \ldTo^{\beta} & \uTo^e & \rdOnto^{\beta} &  & \\
t_1 &  \lTo_{\phi} &   t_3 & &  t_2\\
 & \rdDashtoo_{\beta}  & & \rdDashtoo_{\underline{\beta}} \ldDashtoo^{\beta} &  \uDashto_e \\
 & & t'  & \lTo_{\phi} & t_4 \\
\end{diagram}

\end{proof}

The strip lemma implies confluence by simple diagram chasing; thus we can conclude the confluence of the lambda calculus. 
The above proof of the strip lemma relies on the fact that for any $t \to_{\beta} t_1$, there exists $t_3 \in \underline{\Lambda}$ such that $e(t_3) \equiv t$ and $\phi(t_3) \equiv t_1$. This limits the applicability of the method for systems that contain multiple kinds of redexes and reductions, for example the lambda calculus extended with $\lambda x.t\ x \to_{\eta} t $. The term $\lambda x.(\lambda y.y\ z)\ x$ contains both a beta redex and an eta redex, and contracting one makes the other disappear. With the definitions of $\phi$ and $e$ unchanged, consider $\lambda x.(\lambda y.y\ z)\ x \to_{\eta} \lambda y.y\ z $. Since this step does not contract a beta redex, if we let $t_3 \equiv \lambda x.(\lambda y.y\ z)\ x \in \underline{\Lambda}$, then $e(t_3) \equiv \lambda x.(\lambda y.y\ z)\ x$ and $\phi(t_3) \equiv \lambda x.(\lambda y.y\ z)\ x$, so $\phi(t_3) \not \equiv \lambda y.y\ z$. Hence this method
cannot be directly generalized to the lambda calculus with both beta and eta reductions, nor to other reduction systems with multiple kinds of redexes and reductions. 

\subsection{Hardin's Interpretation Method}
Sometimes it is inevitable to deal with reduction systems that contain more than one reduction, for example $(\Lambda, \{ \to_{\beta}, \to_{\eta}\})$. The confluence problem for this kind of system requires some nontrivial effort to prove. Hardin's interpretation method \cite{Hardin:1989} provides a way to deal with some of these reduction systems. 

\begin{lemma}[Interpretation lemma]
\label{interp}
Let $\to $ be $ \to_1 \cup \to_2$, 
where $\to_1$ is confluent and strongly normalizing. We denote by $\nu(a)$ the $\to_1$-normal form of $a$. Suppose that there is some relation $\to_i$ on $\to_1$-normal forms satisfying:

\

$\to_i \subseteq \twoheadrightarrow$, and $a \to_2 b $ implies $ \nu(a)   {\twoheadrightarrow_i}    \nu(b)$ $(\dagger)$

\

\noindent Then the confluence of $\to_i$ implies the confluence of $\to$.
\end{lemma}

\begin{proof}
 Suppose $\to_i$ is confluent, and assume $a  {\twoheadrightarrow}  a'$ and $a  {\twoheadrightarrow}  a''$. Notice that $t  {\to_1^*}  t'$ implies $\nu(t) = \nu(t')$ (by confluence and strong normalization of $\to_1$); so by ($\dagger$), $\nu(a)  {\twoheadrightarrow_i}  \nu(a')$ and $\nu(a)  {\twoheadrightarrow_i}  \nu(a'')$. By confluence of $\to_i$, there exists $b$ such that $\nu(a')  {\twoheadrightarrow_i}  b$ and $\nu(a'')  {\twoheadrightarrow_i}  b$. Since $\to_i, \to_1 \subseteq \twoheadrightarrow$, we get $a' {\twoheadrightarrow}   \nu(a')  {\twoheadrightarrow}  b$ and $a'' {\twoheadrightarrow}   \nu(a'')  {\twoheadrightarrow}  b$. Hence $\to$ is confluent.

\begin{diagram}[size=2em,textflow]
& & & & a & & & &\\
& & &\ldLine &  & \rdLine & & &\\
& &\ldLine & &  & & \rdLine & &\\
&\ldOnto & & & \dDashtoo^1 &  &  & \rdOnto & \\
a'& &  & & \nu(a)  &  & & & a'' \\
&\rdDashtoo^1 &  &\ldDashtoo^i &   & \rdDashtoo^i  &  & \ldDashtoo^1 &\\
& & \nu(a')  &  &   &  & \nu(a'') & & \\
& &  & \rdDashtoo^i  &  & \ldDashtoo^i &   &  & \\
& &  &  & b  &   &  & & \\
\end{diagram}

\end{proof}

Hardin's method reduces the confluence problem of $\to_1 \cup \to_2$ to that of $\to_i$, given the confluence and strong
normalization of $\to_1$; this makes it possible to apply the Tait--Martin-L\"of (or Takahashi's) method to prove the confluence of $\to_i$. 


\subsubsection{Local $\lambda \mu$ Calculus}
%(6-7 pages)
\label{Local}
We now show an application of Hardin's method on a concrete example, which arises naturally in proving 
type preservation for $\mathsf{Selfstar}$. The approach we adopt is similar to the one in \cite{CurienHL96}. The proofs are in the appendix. 
\begin{definition}[Local Lambda Mu Terms]

\

\noindent \textit{Terms} $t \ :: = \ x \ |  \ \lambda x.t \ | \ t t'  \ | \ \mu t$

\noindent \textit{Closure} $\mu \ ::= \M{x}{t}{\mathcal{I}}$

\end{definition}

The closure is basically a set of mutually recursive definitions. Let $\mathcal{I}$ be a finite nonempty index set. For $\M{x}{t}{\mathcal{I}}$, we require that for every $ 1 \leq i \leq n $ the set of free variables of $t_i$ satisfies $\mathsf{FV}(t_i) \subseteq dom(\mu) = \{x_1,..., x_n\}$. We also do not allow reduction, definition substitution, or substitution inside the closure; we call this the \textit{local property}. Without it, we are in danger of losing confluence (see \cite{Ariola:1997} for a detailed discussion). We write $\mu \in t$ to mean that the closure $\mu$ appears in $t$; $\vec{\mu}t$ denotes $\mu_1...\mu_n t$; and $[t'/x](\mu t )\ \equiv \mu([t'/x]t)$. So $\mathsf{FV}(\mu t) = \mathsf{FV}(t) - dom(\mu)$.

\begin{definition}[Beta-Reductions]

\

\begin{tabular}{llllll}
\infer{(\lambda x.t)t' \to_{\beta} [t'/x]t}{}

&

 \infer{\mu x_i \to_{\beta} \mu t_i}{(x_i \mapsto t_i) \in \mu}

&

\infer{\lambda x.t \to_{\beta} \lambda x.t'}{t \to_{\beta}t' }

&

\infer{t t' \to_{\beta} t'' t'}{t \to_{\beta}t''}

&

 \infer{t t' \to_{\beta} t t''}{t'\to_{\beta}t''}

&

\infer{\mu t \to_{\beta} \mu t'}{t \to_{\beta}t' }
\\

\end{tabular}

\end{definition}

\begin{definition}[Mu-Reductions]

\

\begin{tabular}{llll}

 \infer{\mu t \to_{\mu} t}{dom(\mu) \# \mathsf{FV}(t)}

&

 \infer{ \mu(\lambda x.t) \to_{\mu} \lambda x.\mu t}{}

&

\infer{ \mu(t_1 t_2)  \to_{\mu} (\mu t_1 ) (\mu t_2)}{}

&

 \infer{\lambda x.t \to_{\mu} \lambda x.t'}{t \to_{\mu} t'}

\\
\\
\infer{t t' \to_{\mu} t t''}{t'\to_{\mu} t''}

&
\infer{t t' \to_{\mu} t'' t'}{t \to_{\mu} t''}

&

\infer{\mu t \to_{\mu} \mu t'}{t \to_{\mu}t' }

&
\\

\end{tabular}

\end{definition}


\subsubsection{Confluence of Local $\lambda_{\mu}$ Calculus}


\begin{lemma}
  $\to_{\mu}$ is strongly normalizing and confluent.
\end{lemma}

\begin{definition}[$\mu$-Normal Forms]
  
\

\noindent $n \ :: = \ x \ | \   \mu x_i \ | \ \lambda x.n \ | \ n n'$

\end{definition}

\noindent We require $x_i \in dom(\mu)$. 

\begin{definition}[$\mu$-Normalize Function]

\

\begin{tabular}{ll}

 $ m(x) \ : = \  x$

& $m(\lambda y.t)\ : = \ \lambda y.m(t)$

\\

 $m(t_1 t_2)\ : = \ m(t_1) m(t_2)$

& 
 $ m(\vec{\mu}y) \ := y$ if $y \notin dom(\vec{\mu})$.

\\
 $ m(\vec{\mu}y) \ := \mu_i y$ if $y \in dom(\mu_i)$.

&  
 $m(\vec{\mu}(t t')) \ :=  m(\vec{\mu} t) m( \vec{\mu}t')$
\\
 $m(\vec{\mu}( \lambda x.t)) \ := \lambda x.  m(\vec{\mu}t)$.

\\
\end{tabular}

\end{definition}
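The clauses above translate directly into a recursive function that pushes the closure vector down to the variables. The sketch below uses our own illustrative representation; when a variable lies in the domain of several enclosing closures we take the innermost one, which is an assumption on our part (the definition leaves this case implicit, and under the local property the case should not matter):

```python
# Sketch of the mu-normalize function m:
# terms are ('var', x), ('lam', x, b), ('app', t1, t2), and
# ('clo', mu, b) for a closure mu (a dict {x: t}) applied to body b.

def mnorm(t, mus=()):
    """Compute m(t); mus is the vector of enclosing closures, outermost first."""
    tag = t[0]
    if tag == 'clo':                       # m(mu t): push mu onto the vector
        return mnorm(t[2], mus + (t[1],))
    if tag == 'var':
        x = t[1]
        for mu in reversed(mus):           # m(vec-mu x) = mu_i x if x in dom(mu_i)
            if x in mu:
                return ('clo', mu, t)
        return t                           # m(vec-mu x) = x otherwise
    if tag == 'lam':                       # m(vec-mu (lam y. t)) = lam y. m(vec-mu t)
        return ('lam', t[1], mnorm(t[2], mus))
    return ('app', mnorm(t[1], mus), mnorm(t[2], mus))

mu = {'x': ('lam', 'z', ('var', 'z'))}
t = ('clo', mu, ('lam', 'y', ('app', ('var', 'x'), ('var', 'y'))))
print(mnorm(t))
# ('lam', 'y', ('app', ('clo', {...}, ('var', 'x')), ('var', 'y')))
```

The result is a $\mu$-normal form: closures survive only as labels $\mu_i\,x_i$ wrapped around variables of their domain.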

\begin{lemma}
\label{norm:fun}
 Let $\Phi$ denote the set of $\mu$-normal forms; for any term $t$, $m(t)\in \Phi$.
\end{lemma}

\begin{definition}[$\beta$ Reduction on $\mu$-normal Forms]

\
  
\begin{tabular}{llll}

\infer{n \to_{\beta \mu} m(t)}{n \to_{\beta}t}
&
\infer{\lambda x.n \to_{\beta \mu} \lambda x.n'}{n \to_{\beta \mu} n' }

&

\infer{n n' \to_{\beta \mu} n n''}{n' \to_{\beta \mu} n'' }

&

\infer{n n' \to_{\beta \mu} n'' n'}{n \to_{\beta \mu} n'' }

\end{tabular}

\end{definition}

\noindent Note that the last three rules follow from the first rule. For the second one, $ n \to_{\beta} t$ implies $ \lambda x.n \to_{\beta} \lambda x.t$ and $m(\lambda x.t) \equiv \lambda x.m(t)$. The others follow similarly. 


\begin{definition}[Parallelization]

\

\begin{tabular}{lll}

\infer{ n \Rightarrow_{\beta \mu} n}{}

&

 \infer{\mu x_i \Rightarrow_{\beta\mu} m(\mu t_i)}{(x_i \mapsto t_i) \in \mu}

&

\infer{(\lambda x.n_1) n_2 \Rightarrow_{\beta\mu} m([n_2'/x]n_1')}{  n_1\Rightarrow_{\beta\mu} n_1' & n_2\Rightarrow_{\beta\mu} n_2'}

\\
\\


\infer{\lambda x.n \Rightarrow_{\beta\mu} \lambda x.n'}{n \Rightarrow_{\beta\mu}n' }

&

\infer{n n' \Rightarrow_{\beta\mu} n'' n'''}{n' \Rightarrow_{\beta\mu} n''' & n \Rightarrow_{\beta\mu}n'' }
\end{tabular}
\end{definition}

\begin{lemma}
  $\to_{\beta\mu} \subseteq \Rightarrow_{\beta\mu} \subseteq \to_{\beta\mu}^*$.
\end{lemma}

\begin{lemma}
\label{norm:sub}
If $n_2 \Rightarrow_{\beta\mu} n_2'$, then $m([n_2/x]n_1) \Rightarrow_{\beta\mu} m([n_2'/x]n_1)$.
\end{lemma}

\begin{lemma}
\label{norm:iden}
 $m(m(t)) \equiv m(t)$ and $m([m(t_1)/y] m(t_2)) \equiv m([t_1/y]t_2)$. 
\end{lemma}

\begin{lemma}
\label{key}
If $n_1 \Rightarrow_{\beta\mu} n_1'$ and $ n_2 \Rightarrow_{\beta\mu} n_2'$, then $m([n_2/x]n_1) \Rightarrow_{\beta\mu} m([n_2'/x]n_1')$.
\end{lemma}

\begin{lemma}
\label{diamond}
  If $ n \Rightarrow_{\beta\mu} n'$ and $ n \Rightarrow_{\beta\mu} n''$, then there exists $n'''$ such that $ n'' \Rightarrow_{\beta\mu} n'''$ and $ n' \Rightarrow_{\beta\mu} n'''$. Hence $\to_{\beta\mu}$ is confluent.
\end{lemma}


One can also use Takahashi's method to prove the lemma above. We will not explore that here.

\begin{lemma}
\label{Interp}
If $a \to_{\beta} b$, then $ m(a)\to_{\beta\mu}^* m(b)$.
\end{lemma}

\begin{theorem}
  $\to_{\beta} \cup \to_{\mu}$ is confluent. 
\end{theorem}
\begin{proof}
  By the interpretation method: Lemma \ref{Interp} together with Lemma \ref{diamond}.
\end{proof}

\subsection{Type Preservation and Confluence}
\label{Conf:Presv}
Recall the statement of type preservation: if $\Gamma \vdash t:T$ and $t \to t'$, then $\Gamma \vdash t':T$. 
In dependent type systems, the following conversion rule is present: 

\

\infer[\textit{Conv}]{\Gamma \vdash t:T}{\Gamma \vdash t:T' & T = T'}

\

\noindent The common method to prove type preservation is by induction on the derivation of $\Gamma \vdash t:T$.
One will reach the case of $\Gamma \vdash (\lambda x.t_1)t_2:T$, where $\Gamma \vdash \lambda x.t_1: T_1 \to T_2$
, $\Gamma \vdash t_2:T_1$ and $T_2 = T$. Since $(\lambda x.t_1)t_2 \to [t_2/x]t_1$, we need to show $\Gamma \vdash [t_2/x]t_1:T$. $\Gamma \vdash \lambda x.t_1: T_1 \to T_2$ implies $\Gamma, x:T_1' \vdash t_1: T_2'$ and $T_1'\to T_2' = T_1 \to T_2$. It would be desirable to have $T_1' = T_1$ and $T_2' = T_2$, then we would
have $\Gamma, x:T_1' \vdash t_1:T_2$ and $\Gamma \vdash t_2: T_1'$, so we should be able to get $\Gamma \vdash [t_2/x]t_1:T_2$ and $T_2 = T$. 

So the question is: given that $T_1'\to T_2' = T_1 \to T_2$, is it true that $T_1'=T_1$ and $T_2'=T_2$? We call this the \textit{inverse structure congruence} problem. We know that given $T_1'= T_1$ and $T_2' = T_2$, one can conclude that $T_1'\to T_2' = T_1 \to T_2$. It is not immediate that the inverse holds, so we need
to analyze the convertibility relation between $T_1'\to T_2'$ and $T_1 \to T_2$. 

\noindent We would like the following invertibility property to hold.

\begin{definition}[Inverse Structure Congruence]
$T_1 \to T_2 = T_1' \to T_2'$ implies $T_1 = T_1'$ and $T_2 = T_2'$.  
\end{definition}

A reduction system $(\mathcal{T}, \rightarrowtail)$, where $\mathcal{T}$ is a set of types, arises when we analyze
the relation $T = T'$. For such a system we typically want to design $\rightarrowtail$ so that $T_1 \to T_2 $ can only be reduced to $ T_1' \to T_2'$ when $T_1 \rightarrowtail T_1'$ or $T_2 \rightarrowtail T_2'$. Confluence of $(\mathcal{T}, \rightarrowtail)$ then implies that, for $T_1 \to T_2 = T_1' \to T_2'$, there is a $T_3$ such that $T_1 \to T_2 \stackrel{*}{\rightarrowtail} T_3$ and $T_1' \to T_2' \stackrel{*}{\rightarrowtail} T_3$. Such a $T_3$ must be of the form $T_4 \to T_5$, so $T_1 \stackrel{*}{\rightarrowtail} T_4 \stackrel{*}{\leftarrowtail} T_1'$ and $T_2 \stackrel{*}{\rightarrowtail} T_5 \stackrel{*}{\leftarrowtail} T_2'$, thus $T_1 = T_1'$ and $T_2 = T_2'$. This is how confluence yields the inverse structure congruence property, and with it type preservation. The machinery is best illustrated by example, namely the proof of type preservation for $\mathsf{Selfstar}$, but we leave that to future work.
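To make the machinery concrete, here is a small hypothetical instance in which $\rightarrowtail$ is $\beta$-reduction on types. From the conversion

\

\noindent $((\lambda X.X)\ \mathsf{Nat}) \to \mathsf{Nat} \ = \ \mathsf{Nat} \to ((\lambda X.X)\ \mathsf{Nat})$

\

\noindent confluence yields the common reduct $\mathsf{Nat} \to \mathsf{Nat}$, which is of the form $T_4 \to T_5$; hence $(\lambda X.X)\ \mathsf{Nat} = \mathsf{Nat}$ in both the argument and the result position, exactly as inverse structure congruence requires.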

\section{Future Explorations and Conclusions}  

\subsection{System $\mathsf{Selfstar}$}
\label{Self}
%% (3-4 pages)
%% 1. examples
%% 2. reduction rules

In this section we present a novel type system that extends dependent types with recursive definitions, $*:*$ and self types. The types for Church numerals and Scott numerals reflect a form of induction principle. As a dependently typed programming language, we think it provides an alternative design approach to handling data types in functional
programming languages. Due to the presence of recursive definitions and $*:*$, the system has no Curry-Howard correspondence, so terms in $\mathsf{Selfstar}$ do not correspond to proofs in intuitionistic logic.

\begin{definition}

\

\noindent Term $t \ :: = \ * \ | \ x \ | \ \lambda x.t \ |
\ t t' \ | \ \mu t \ |\ \Pi x:t_1.t_2 \ | \ \iota x.t$.

\noindent Closure $\mu \ ::= \M{x}{t}{\mathcal{I}}$

\noindent \textit{Context} $\Gamma \ :: = \ \cdot \ | \ \Gamma, x:t\ | \ \Gamma, \tilde{\mu}
 $

\end{definition}

We call $\iota x.t$ a self type. The closure $\mu$ is used for mutually recursive definitions; it follows the same conventions as in Section \ref{Local}. $\tilde{\_}$ is an operation (we call it \textit{lifting}): if $\mu$ is $\M{x}{t}{\mathcal{I}}$, then $\tilde{\mu}$ is $\bm{x}{a}{t}{\mathcal{I}}$.

 We collapse the syntax of terms and types, so the notion of a type
only arises in a judgement $\Gamma \vdash t:t'$; we call $t'$ the type of $t$. We list
only some essential typing rules.

\begin{definition}[Typing]

\

\begin{tabular}{lll}
    

\infer[\textit{Pi}]{\Gamma \vdash \Pi x:t_1.t_2 : *}{\Gamma,
x: t_1 \vdash t_2 : * & \Gamma \vdash t_1 : * }

&

\infer[\textit{Var}]{\Gamma \vdash x:t}{(x:t) \in \Gamma}

&
\infer[\textit{Star}]{\Gamma \vdash *:*}{}

\\
\\

\infer[\textit{App}]{\Gamma \vdash t t':[t'/x] t_2}{\Gamma
\vdash t:\Pi x:t_1. t_2 & \Gamma \vdash t': t_1}


&
\infer[\textit{SelfInst}]{\Gamma \vdash t: [t/x]t'}{\Gamma
\vdash t : \iota x.t'}


&
\infer[\textit{SelfGen}]{\Gamma \vdash t : \iota x.t'}{\Gamma
\vdash t: [t/x]t'}
\\
\\

\infer[\textit{Conv}]{\Gamma \vdash t : t_2}{\Gamma \vdash t:
t_1 & \Gamma \vdash t_1 = t_2}

&
\infer[\textit{Lam}]{\Gamma \vdash \lambda x.t :\Pi x:t_1.
t_2}{\Gamma, x:t_1 \vdash t: t_2 & \Gamma \vdash t_1:*}

&
\infer[\textit{Self}]{\Gamma \vdash \iota x.t : *}{\Gamma,
x:\iota x.t \vdash t : * }

\\
\\

\infer[\textit{Mu}]{\Gamma \vdash \mu t: \mu t'}{\Gamma, \tilde{\mu}
\vdash t:t' &  \{\Gamma, \tilde{\mu} \vdash t_j: a_j\}_{((x_j:a_j) \mapsto t_j) \in \tilde{\mu}} }

\\
\end{tabular}

\end{definition}

For a type $\Pi x:t_1.t_2$, if the variable $x$ does not appear in $t_2$, we write $t_1 \to t_2$ instead. Note that
$\to$ in this section has nothing to do with reduction. Now we can see how to type Church encoding and Scott encoding with self types and recursive definitions.

\begin{definition}[Church Encoding]
Let $\tilde{\mu_c}$ be the following recursive definitions:

\noindent $(\mathsf{Nat}:* ) \mapsto \iota x. \Pi C: \mathsf{Nat} \to *.  (\Pi n : \mathsf{Nat}. (C\ n) \to (C\ (\mathsf{S}\ n))) \to (C\ 0) \to (C\ x)$

\noindent $(\mathsf{S}: \mathsf{Nat} \to \mathsf{Nat} )\mapsto \lambda n.\lambda C. \lambda s.\lambda z. s \ n\ (n\ C\ s\ z)$

\noindent $(0:\mathsf{Nat})  \mapsto \lambda C. \lambda s. \lambda z.z$
\end{definition}
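Erasing the type argument $C$, the recursive definitions above yield ordinary untyped lambda terms. The following Python sketch is only an illustration (not part of the formal system): it shows the induction-style Church numerals, where the step function receives both the predecessor and the recursive result, and addition by iteration.

```python
# Induction-style Church numerals with the type argument C erased:
# the step function s receives the predecessor and the recursive result.
zero = lambda s: lambda z: z
suc  = lambda n: lambda s: lambda z: s(n)(n(s)(z))

# Since Ind = \C s z n. n C s z, addition by iteration becomes
# add n m = n (\pred. S) m: iterate S on m, ignoring the predecessor.
add = lambda n: lambda m: n(lambda pred: suc)(m)

# Decode to a Python int, for inspection only.
to_int = lambda n: n(lambda pred: lambda rec: rec + 1)(0)
```

For instance, `to_int(add(suc(suc(zero)))(suc(zero)))` evaluates to `3`.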

 Now let us see how we can derive $\tilde{\mu_c} \vdash \lambda C. \lambda s. \lambda z.z  : \mathsf{Nat}$ and $\tilde{\mu_c} \vdash \lambda n.\lambda C. \lambda s.\lambda z. s \ n\ (n\ C\ s\ z) : \mathsf{Nat} \to \mathsf{Nat}$.

\

\infer[Conv]{\tilde{\mu_c} \vdash \lambda C. \lambda s. \lambda z.z  : \mathsf{Nat}}
{\infer[SelfGen]{\tilde{\mu_c} \vdash \lambda C. \lambda s. \lambda z.z  : \iota x. \Pi C: \mathsf{Nat} \to *.  (\Pi n : \mathsf{Nat}. (C\ n) \to (C\ (\mathsf{S}\ n))) \to (C\ 0) \to (C\ x)}
{
\infer[Lam]{\tilde{\mu_c} \vdash \lambda C. \lambda s. \lambda z.z  :  \Pi C: \mathsf{Nat} \to *.  (\Pi n : \mathsf{Nat}. (C\ n) \to (C\ (\mathsf{S}\ n))) \to (C\ 0) \to (C\ (\lambda C. \lambda s. \lambda z.z))}
{ \infer[Conv]{\tilde{\mu_c}, C: \mathsf{Nat} \to *, s: (\Pi n : \mathsf{Nat}. (C\ n) \to (C\ (\mathsf{S}\ n))), z: C\ 0 \vdash  z  :  C\ (\lambda C. \lambda s. \lambda z.z)}
{\infer[Var]{\tilde{\mu_c}, C: \mathsf{Nat} \to *, s: (\Pi n : \mathsf{Nat}. (C\ n) \to (C\ (\mathsf{S}\ n))), z: C\ 0 \vdash  z  :  C\ 0}{}}}
}}

\

\infer[Lam]{\tilde{\mu_c} \vdash \lambda n.\lambda C. \lambda s.\lambda z. s \ n\ (n\ C\ s\ z) : \mathsf{Nat} \to \mathsf{Nat}}
{\infer{\tilde{\mu_c}, n:\mathsf{Nat} \vdash \lambda C. \lambda s.\lambda z. s \ n\ (n\ C\ s\ z) : \mathsf{Nat}}
{
\infer[\textit{$=\iota$}]{
\tilde{\mu_c}, n:\mathsf{Nat} \vdash \lambda C. \lambda s.\lambda z. s \ n\ (n\ C\ s\ z) :  \iota x. \Pi C: \mathsf{Nat} \to *.  (\Pi n : \mathsf{Nat}. (C\ n) \to (C\ (\mathsf{S}\ n))) \to (C\ 0) \to (C\ x)
  }
{ \infer[Lam]{
\tilde{\mu_c}, n:\mathsf{Nat} \vdash \lambda C. \lambda s.\lambda z. s \ n\ (n\ C\ s\ z) :  \Pi C: \mathsf{Nat} \to *.  (\Pi n : \mathsf{Nat}. (C\ n) \to (C\ (\mathsf{S}\ n))) \to (C\ 0) \to (C\ (\mathsf{S}\ n) )
}
{\infer[App]{\tilde{\mu_c}, n: \mathsf{Nat}, C: \mathsf{Nat} \to *, s: (\Pi n : \mathsf{Nat}. (C\ n) \to (C\ (\mathsf{S}\ n))), z: C\ 0 \vdash  s \ n\ (n\ C\ s\ z): C\  (\mathsf{S}\ n)
 }{
\infer[App]{\tilde{\mu_c},n: \mathsf{Nat}, \Gamma \vdash  s \ n: (C\ n) \to (C\ (\mathsf{S}\ n))}{\Delta_1} & 
\infer[App]{\tilde{\mu_c}, n: \mathsf{Nat}, \Gamma \vdash n\ C\ s\ z: C\ n}
{\Delta_2
}}}}
}}

\

In the derivation above, $\Gamma = C: \mathsf{Nat} \to *, s: (\Pi n : \mathsf{Nat}. (C\ n) \to (C\ (\mathsf{S}\ n))), z: C\ 0$, and the $=\iota$ step first converts $\mathsf{S}\ n$ to $\lambda C. \lambda s.\lambda z. s \ n\ (n\ C\ s\ z)$ and then applies the \textit{SelfGen} rule. $\Delta_1$ is the subderivation: 

\

\infer{\Delta_1}{\infer[Var]{\tilde{\mu_c},n: \mathsf{Nat}, \Gamma \vdash  s  : \Pi n : \mathsf{Nat}. (C\ n) \to (C\ (\mathsf{S}\ n))}{} & \infer[Var]{\tilde{\mu_c},n: \mathsf{Nat}, \Gamma \vdash  n: \mathsf{Nat}}{}}

\

\noindent $\Delta_2$ is the subderivation: 

\

\infer{\Delta_2}{\infer[SelfInst]{\tilde{\mu_c},n: \mathsf{Nat}, \Gamma \vdash  n  : \Pi C: \mathsf{Nat} \to *.  (\Pi n : \mathsf{Nat}. (C\ n) \to (C\ (\mathsf{S}\ n))) \to (C\ 0) \to (C\ n)}
{
\infer[Conv]{\tilde{\mu_c},n: \mathsf{Nat}, \Gamma \vdash  n : \iota x. \Pi C: \mathsf{Nat} \to *.  (\Pi n : \mathsf{Nat}. (C\ n) \to (C\ (\mathsf{S}\ n))) \to (C\ 0) \to (C\ x) }{\infer[Var]{\tilde{\mu_c},n: \mathsf{Nat}, \Gamma \vdash  n:\mathsf{Nat}}{}}} & \Delta_3}

\

\noindent where $\Delta_3$: 

\

\infer{\Delta_3}{\infer[Var]{\tilde{\mu_c},n: \mathsf{Nat}, \Gamma \vdash C:\mathsf{Nat} \to * }{} & 
\infer[Var]{\tilde{\mu_c},n: \mathsf{Nat}, \Gamma \vdash s: \Pi n : \mathsf{Nat}. (C\ n) \to (C\ (\mathsf{S}\ n))}{} & \infer[Var]{\tilde{\mu_c},n: \mathsf{Nat}, \Gamma \vdash z: C\ 0}{}  }

\

The derivations above are somewhat lengthy; we present them only for demonstration and will omit further derivations. The induction principle for Church encoding can now be expressed as: 
$ \tilde{\mu_c} \vdash (\mathsf{Ind}\ :=\lambda C. \lambda s.\lambda z. \lambda n. n\ C\ s\ z): \Pi C: \mathsf{Nat} \to *. (\Pi n:\mathsf{Nat}.(C\ n)\to (C\ (\mathsf{S}\ n))) \to C\ 0 \to \Pi n:\mathsf{Nat}. C\ n$.  

With $*:*$, we can now define Leibniz equality as $\mathsf{Eq}\ :=  \lambda A. \lambda x. \lambda y.  \Pi C:(A \to*). C\ x \to C\ y$, which gives the judgement $\cdot \vdash \mathsf{Eq} : \Pi A:*. A \to A \to *$. Now define
addition:

\noindent $\tilde{\mu_c} \vdash (\mathsf{add}:= \lambda n. \lambda m. \mathsf{Ind}\ (\lambda y.\mathsf{Nat})\ (\lambda x.\mathsf{S})\ m\ n) : \mathsf{Nat} \to \mathsf{Nat} \to \mathsf{Nat}$.

\noindent Now one can use the induction principle to derive $\tilde{\mu_c} \vdash t : \Pi m: \mathsf{Nat}.(\mathsf{Eq}\ \mathsf{Nat}\ (\mathsf{add}\ m\ 0)\ m) $ for some term $t$; this $t$ is a lambda expression that encodes a proof of the formula $\Pi m: \mathsf{Nat}.(\mathsf{Eq}\ \mathsf{Nat}\ (\mathsf{add}\ m\ 0)\ m)$ (namely, by induction on the natural number $m$). Recall that this system has no Curry-Howard correspondence, so $t$ does not in general correspond to a proof of its type; but since the derivation of $\tilde{\mu_c} \vdash t : \Pi m: \mathsf{Nat}.(\mathsf{Eq}\ \mathsf{Nat}\ (\mathsf{add}\ m\ 0)\ m) $ uses no illogical principle, we still want to regard it as a valid proof. For future work, we want to identify a fragment of 
$\mathsf{Selfstar}$ that is logically consistent. %% thus this leads to a notion of Curry-Howard \textit{embedding}, where higher order intuitionistic logic can be embedded in this system. Details of exploring the notion of Curry-Howard \textit{embedding} will have to be left as future works. 
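As a small example of what is derivable with $\mathsf{Eq}$ (a sketch: by the \textit{Conv} rule, $\mathsf{Eq}\ \mathsf{Nat}\ x\ x$ unfolds to $\Pi C:\mathsf{Nat} \to *. C\ x \to C\ x$), reflexivity is witnessed by the polymorphic identity:

\

\noindent $\tilde{\mu_c} \vdash (\lambda x. \lambda C. \lambda p. p) : \Pi x:\mathsf{Nat}.\ \mathsf{Eq}\ \mathsf{Nat}\ x\ x$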

\begin{definition}[Scott Encoding]
Let $\tilde{\mu_s}$ be the following recursive definitions:

\noindent $(\mathsf{Nat}:* ) \mapsto \iota x. \Pi C: \mathsf{Nat} \to *.  (\Pi n : \mathsf{Nat}.C\ (\mathsf{S}\ n)) \to (C\ 0) \to (C\ x)$

\noindent $(\mathsf{S}: \mathsf{Nat} \to \mathsf{Nat} )\mapsto \lambda n.\lambda C. \lambda s.\lambda z. s \ n$

\noindent $(0:\mathsf{Nat})  \mapsto \lambda C. \lambda s. \lambda z.z$
\end{definition}

With Scott numerals defined above, one can derive a case analysis principle:

\noindent $\tilde{\mu_s} \vdash (\mathsf{Case}\ := \lambda C. \lambda s.\lambda z. \lambda n. n\ C\ s\ z): \Pi C: \mathsf{Nat} \to *. (\Pi n:\mathsf{Nat}. C\ (\mathsf{S}\ n)) \to C\ 0 \to \Pi n:\mathsf{Nat}. C\ n$

\noindent The addition function can also be defined by extending the closure $\tilde{\mu_s}$ with 

\noindent $(\mathsf{add}: \mathsf{Nat} \to \mathsf{Nat} \to \mathsf{Nat} ) \mapsto \lambda n.\lambda m. \mathsf{Case}\ (\lambda n.\mathsf{Nat})\ \ (\lambda p . (\mathsf{S}\ (\mathsf{add}\ p\ m)) )\ m\ n $

One can further prove theorems about the $\mathsf{add}$ function as we did for the Church encoding; we will
not pursue that here. Interestingly, the expressions for $\mathsf{Case}$ and $\mathsf{Ind}$ are the same; they are used in the $\mathsf{add}$ operation for typing purposes. Comparing with the Church version, one sees two styles of defining addition, one through iteration and the other through recursion, both of which are expressible within the $\mathsf{Selfstar}$ system. 

\subsection{Conclusion and Future Works}

\textbf{Conclusion}: We presented two methods to represent natural numbers as lambda terms, namely Church encoding and Scott encoding.
We also surveyed type systems from the simply typed lambda calculus to second order and dependent type systems. Church encoding in system \textbf{F} and Scott encoding with recursive types were discussed, along with some of the problems of Church-encoded data in dependent type systems. System $\mathsf{Selfstar}$ was presented as a response to the problems arising in dependent type systems, and also to general data type design in functional programming languages. 

The type preservation problem of $\mathsf{Selfstar}$ led us to another line of work related to term rewriting. The notion of an abstract reduction system was introduced, and several methods to prove confluence were surveyed. A fragment of the term system of $\mathsf{Selfstar}$ was shown to be confluent, and the connection between confluence and type preservation was illustrated. Church and Scott encoded numerals were typed in $\mathsf{Selfstar}$, together with the corresponding induction and case analysis principles; some simple theorems were presented to demonstrate logical reasoning. 

\noindent \textbf{Future Works}: We want to extend the confluence of $\lambda_{\mu}$ to the whole
$\mathsf{Selfstar}$ system, and then establish type preservation. We also want to identify a logical fragment of $\mathsf{Selfstar}$ and show that it is consistent. Last but not least, we want to refine the prototype system to reflect some of the new ideas from the analysis of $\mathsf{Selfstar}$. 




\bibliographystyle{plain}
\bibliography{exam}

\appendix

\section{Proofs}

\subsection{Proof of Lemma \ref{norm:fun}}

 Let $\Phi$ denote the set of $\mu$-normal forms; for any term $t$, $m(t)\in \Phi$.

\begin{proof}
  One way to prove this is to first identify $t$ as $\dot{\overrightarrow{\mu_1}}t'$, where $\dot{\overrightarrow{\mu_1}}$ stands for
zero or more closures and $t'$ does not contain any closure at head position.
 Then we proceed by induction on the structure of $t'$:

\

\noindent \textbf{Base Case}: $t' = x$, obvious.

\

\noindent \textbf{Step Cases}: If $t' = \lambda x.t''$, 
then $m(\dot{\overrightarrow{\mu_1}}(\lambda x.t'')) \equiv \lambda x.m(\dot{\overrightarrow{\mu_1}} t'')$. Now we can
again identify $t''$ as $\dot{\overrightarrow{\mu_2}} t'''$, where $t'''$ does not have any closure at head position. Since $t'''$ is structurally smaller than $\lambda x.t''$, by IH, $m(\dot{\overrightarrow{\mu_1}}\dot{\overrightarrow{\mu_2}} t''') \in \Phi$, thus $m(\dot{\overrightarrow{\mu_1}}(\lambda x.t'')) \equiv \lambda x.m(\dot{\overrightarrow{\mu_1}} t'') \in \Phi$.

For $t' = t_1 t_2$, we can argue similarly as above.

\end{proof}

\subsection{Proof of Lemma \ref{norm:sub}}
If $ n_2 \Rightarrow_{\beta\mu} n_2'$, then $ m([n_2/x]n_1) \Rightarrow_{\beta\mu} m([n_2'/x]n_1)$.

\begin{proof}
\noindent  By induction on the structure of $n_1$. We list a few non-trivial cases:

\

\noindent \textbf{Base Cases}: $n_1= x$ or $n_1 = \mu x_i$: obvious. 

\

\noindent \textbf{Step Case}: $n_1= \lambda y.n$. We have $ m(\lambda y.[n_2/x]n) \equiv \lambda y.m([n_2/x]n) \stackrel{IH}{\Rightarrow_{\beta\mu}} \lambda y.m([n_2'/x]n) \equiv m(\lambda y.[n_2'/x]n)$.

\

\noindent \textbf{Step Case}: $n_1= n n'$. We have $ m([n_2/x]n\ [n_2/x]n') \equiv m([n_2/x]n) m([n_2/x]n')\stackrel{IH}{\Rightarrow_{\beta\mu}} m([n_2'/x]n) m([n_2'/x]n')\equiv m([n_2'/x]n\ [n_2'/x]n')$.

\end{proof}

\subsection{Proof of Lemma \ref{norm:iden}}
 $m(m(t)) \equiv m(t)$ and $m([m(t_1)/y] m(t_2)) \equiv m([t_1/y]t_2)$. 

\begin{proof}
The first equality is by Lemma \ref{norm:fun}. For the second equality, we 
prove it by a method similar to Lemma \ref{norm:fun}: we identify $t_2$ as $\dot{\overrightarrow{\mu_1}}t_2'$, where
 $t_2'$ does not contain any closure at head position. We proceed by induction on the structure of $t_2'$:

\

\noindent \textbf{Base Cases}: For $t_2' = x$, we use $m(m(t)) \equiv m(t)$. 

\

\noindent \textbf{Step Cases}: If $t_2' = \lambda x.t_2''$, 
then $m(\dot{\overrightarrow{\mu_1}}(\lambda x.[t_1/y]t_2'')) \equiv \lambda x.m(\dot{\overrightarrow{\mu_1}}([t_1/y]t_2'')) \equiv \lambda x.m(\dot{\overrightarrow{\mu_1}}\dot{\overrightarrow{\mu_2}}([t_1/y]t_2'''))$, where $t_2''$ is identified as $\dot{\overrightarrow{\mu_2}} t_2'''$ and $t_2'''$ does not have any closure at head position. Since $t_2'''$ is structurally smaller than $\lambda x.t_2''$, by IH, $m(\dot{\overrightarrow{\mu_1}}\dot{\overrightarrow{\mu_2}}([t_1/y]t_2''')) \equiv m([t_1/y](\dot{\overrightarrow{\mu_1}}\dot{\overrightarrow{\mu_2}}t_2''')) \equiv m([m(t_1)/y] m(\dot{\overrightarrow{\mu_1}}\dot{\overrightarrow{\mu_2}}t_2'''))$. Thus $\lambda x.m(\dot{\overrightarrow{\mu_1}}\dot{\overrightarrow{\mu_2}}([t_1/y]t_2''')) \equiv \lambda x. m([m(t_1)/y] m(\dot{\overrightarrow{\mu_1}}\dot{\overrightarrow{\mu_2}}t_2'''))$. So $m([t_1/y]\dot{\overrightarrow{\mu_1}}(\lambda x.t_2'')) \equiv m( [m(t_1)/y] m(\lambda x.\dot{\overrightarrow{\mu_1}}\dot{\overrightarrow{\mu_2}}t_2''')) \equiv m( [m(t_1)/y] m(\lambda x.\dot{\overrightarrow{\mu_1}}t_2'')) \equiv m( [m(t_1)/y] m(\dot{\overrightarrow{\mu_1}}(\lambda x.t_2'')))$.

For $t_2' = t_a t_b$,  we can argue similarly as above.



\end{proof}

\subsection{Proof of Lemma \ref{key}}

If $  n_1 \Rightarrow_{\beta\mu} n_1'$ and $  n_2 \Rightarrow_{\beta\mu} n_2'$, then $  m([n_2/y]n_1) \Rightarrow_{\beta\mu} m([n_2'/y]n_1')$.


\begin{proof}

\noindent We prove this by induction on the derivation of $  n_1 \Rightarrow_{\beta\mu} n_1'$.
  
\

\noindent \textbf{Base Case:}

\

\noindent \infer{  n \Rightarrow_{\beta \mu} n}{}

\

\noindent By Lemma \ref{norm:sub}.

\

\noindent \textbf{Base Case:}

\

\noindent \infer{  \mu x_i\Rightarrow_{\beta\mu} m(\mu t_i)}{(x_i \mapsto t_i) \in \mu}

\

\noindent Because $y \notin \mathsf{FV}(\mu x_i)$ and $\mu$ is local. 

\


\noindent \textbf{Step Case:}

\

\noindent \infer{  (\lambda x.n_a) n_b \Rightarrow_{\beta\mu} m([n_b'/x]n_a')}{  n_a\Rightarrow_{\beta\mu} n_a' &   n_b\Rightarrow_{\beta\mu} n_b'}

\

\noindent We have $  m((\lambda x.[n_2/y]n_a) [n_2/y] n_b) \equiv (\lambda x.m([n_2/y]n_a)) m([n_2/y] n_b)$

$ \stackrel{IH}{\Rightarrow_{\beta\mu}} m([m([n_2'/y] n_b')/x]m([n_2'/y] n_a')) \equiv m([n_2'/y]([n_b'/x]n_a'))$. The last equality is by Lemma \ref{norm:iden}.

\

\noindent \textbf{Step Case:}

\

\noindent \infer{  \lambda x.n \Rightarrow_{\beta\mu} \lambda x.n'}{  n \Rightarrow_{\beta\mu}n' }

\

\noindent We have $  m(\lambda x.[n_2/y]n) \equiv \lambda x.m([n_2/y]n) \stackrel{IH}{\Rightarrow_{\beta\mu}} \lambda x.m([n_2'/y]n') \equiv m(\lambda x.[n_2'/y]n') $

\

\noindent \textbf{Step Case:}

\

\noindent \infer{  n_a n_b \Rightarrow_{\beta\mu} n_a'n_b'}{   n_a\Rightarrow_{\beta\mu} n_a' &   n_b\Rightarrow_{\beta\mu} n_b'}

\

\noindent We have $  m([n_2/y]n_a [n_2/y] n_b) \equiv m([n_2/y]n_a) m([n_2/y] n_b)$

$ \stackrel{IH}{\Rightarrow_{\beta\mu}} m([n_2'/y] n_a') m([n_2'/y] n_b')\equiv m([n_2'/y](n_a'n_b'))$.

\end{proof}

\subsection{Proof of Lemma \ref{diamond}}

  If $   n \Rightarrow_{\beta\mu} n'$ and $  n \Rightarrow_{\beta\mu} n''$, then there exist $n'''$ such that $   n'' \Rightarrow_{\beta\mu} n'''$ and $   n' \Rightarrow_{\beta\mu} n'''$.
\begin{proof}
  \noindent By induction on the derivation of $  n \Rightarrow_{\beta\mu} n'$. 

\noindent \textbf{Base Case:}

\

\noindent \infer{  n \Rightarrow_{\beta \mu} n}{}

\

\noindent Obvious.

\

\noindent \textbf{Base Case:}

\

\noindent \infer{  \mu x_i\Rightarrow_{\beta\mu} m(\mu t_i)}{(x_i \mapsto t_i) \in \mu}

\

\noindent Obvious. 

\

\noindent \textbf{Step Case:}

\

\noindent \infer{  (\lambda x.n_1) n_2 \Rightarrow_{\beta\mu} m([n_2'/x]n_1')}{   n_1\Rightarrow_{\beta\mu} n_1' &  n_2\Rightarrow_{\beta\mu} n_2'}

\

\noindent Suppose $  (\lambda x.n_1) n_2 \Rightarrow_{\beta\mu}(\lambda x.n_1'') n_2''$, where $  n_1 \Rightarrow_{\beta\mu}n_1''$ and $  n_2 \Rightarrow_{\beta\mu} n_2''$. By the IH there exist $n_1''', n_2'''$ such that $  n_1'' \Rightarrow_{\beta\mu}n_1'''$, $  n_1' \Rightarrow_{\beta\mu}n_1'''$, $  n_2'' \Rightarrow_{\beta\mu} n_2'''$ and $  n_2' \Rightarrow_{\beta\mu}n_2'''$. By Lemma \ref{key}, we then have $  m([n_2'/x]n_1') \Rightarrow_{\beta\mu} m([n_2'''/x]n_1''')$, and also $  (\lambda x.n_1'') n_2''\Rightarrow_{\beta\mu} m([n_2'''/x]n_1''')$.

\

\noindent Suppose $  (\lambda x.n_1) n_2 \Rightarrow_{\beta\mu}m([n_2''/x]n_1'') $, where $  n_1 \Rightarrow_{\beta\mu}n_1''$ and $  n_2 \Rightarrow_{\beta\mu} n_2''$. By Lemma \ref{key} and the IH, we have $  m([n_2'/x]n_1') \Rightarrow_{\beta\mu} m([n_2'''/x]n_1''')$ and $  m([n_2''/x]n_1'') \Rightarrow_{\beta\mu} m([n_2'''/x]n_1''')$.

\

\noindent The other cases are either similar to the one above or easy.

\end{proof}
\subsection{Proof of Lemma \ref{Interp}}
\begin{lemma}
\label{vec}
$m(\vec{\mu}\vec{\mu}t) \equiv m(\vec{\mu}t)$ and $m(\vec{\mu} ([t_2/x]t_1)) \equiv m( [\vec{\mu} t_2/x]\vec{\mu} t_1)$
\end{lemma}

\begin{proof}
We can prove this using the same method as Lemma \ref{norm:fun}; we omit the details.
\end{proof}


\begin{prop}
If $  a \to_{\beta} b$, then $  m(a)\to_{\beta\mu} m(b)$.
\end{prop}

\begin{proof}
\noindent   We prove this by induction on the depth of the derivation of $  a \to_{\beta} b$. We list a few non-trivial cases:

\

\noindent \textbf{Base Case:}

\

\noindent \infer{  \mu x_i \to_{\beta} \mu t_i}{(x_i \mapsto t_i) \in \mu}

\

\noindent We have $  m(\mu x_i) \equiv \mu x_i \to_{\beta\mu} m(\mu  t_i)$.

\

\noindent \textbf{Base Case:}

\

\noindent \infer{  (\lambda x.t)t' \to_{\beta} [t'/x]t}{}

\

\noindent We have $  m((\lambda x.t)t') \equiv (\lambda x.m(t))m(t') \to_{\beta\mu} m([m(t')/x]m(t)) \equiv m([t'/x]t)$.

\

\noindent \textbf{Step Case:}

\

\noindent \infer{  \lambda x.t \to_{\beta} \lambda x.t'}{  t \to_{\beta}t' }

\

\noindent By IH, we have $  m(\lambda x.t)  \equiv  \lambda x.m(t)  \stackrel{IH}{\to_{\beta\mu}} \lambda x.m(t') \equiv m(\lambda x.t') $. 

\

\noindent \textbf{Step Case:}

\

\noindent \infer{  \mu t \to_{\beta} \mu t'}{t \to_{\beta}t' }

\

\noindent We want to show $  m(\mu t) \to_{\beta\mu}  m(\mu t') $. If $dom(\mu)\# \mathsf{FV}(t)$, then $  m(\mu t) \equiv m(t) \stackrel{IH}{\to_{\beta\mu}}  m(t') \equiv m(\mu t') $. Here we assume that beta-reduction does not introduce any new free variables.

\

\noindent If $dom(\mu)\cap \mathsf{FV}(t) \not = \emptyset$, then identify $t$ as $\dot{\overrightarrow{\mu_1}}t''$, where
$t''$ does not contain any closure at head position. We do a case analysis on the structure of $t''$: 

%\noindent $t \not = \m{y}{t'}{\bar{\mu}.y_i}$ since it will violate our assumption. 
\

\textbf{Case.} $t''=x_i \in dom(\dot{\overrightarrow{\mu_1}})$ or $x_i \notin dom(\dot{\overrightarrow{\mu_1}})$, these cases will not arise.

\

\textbf{Case.} $t'' = \lambda y.t_1$, then it must be that $ t' = \dot{\overrightarrow{\mu_1}}(\lambda y.t_1')$ where $ t_1 \to_{\beta} t_1'$. So 
we get $   \mu \dot{\overrightarrow{\mu_1}} t_1 \to_{\beta} \mu \dot{\overrightarrow{\mu_1}}t_1'$. By the IH (the depth of $ \mu \dot{\overrightarrow{\mu_1}} t_1 \to_{\beta} \mu \dot{\overrightarrow{\mu_1}}t_1'$ is smaller), we have $  m(\mu \dot{\overrightarrow{\mu_1}}t_1) \to_{\beta\mu} m(\mu \dot{\overrightarrow{\mu_1}}t_1')$. Thus $  m(\mu\dot{\overrightarrow{\mu_1}}(\lambda y.t_1)) \equiv \lambda y.m(\mu\dot{\overrightarrow{\mu_1}} t_1) \to_{\beta\mu} \lambda y.m(\mu\dot{\overrightarrow{\mu_1}} t_1') \equiv m(\mu\dot{\overrightarrow{\mu_1}} (\lambda y.t_1'))$. 

\

\textbf{Case.} $t'' = t_1 t_2$ and $t' = \dot{\overrightarrow{\mu_1}}(t_1' t_2)$, where $ t_1 \to_{\beta} t_1'$. We have  $  \mu\dot{\overrightarrow{\mu_1}} t_1 \to_{\beta } \mu\dot{\overrightarrow{\mu_1}} t_1'$. By the IH (the depth of $\mu\dot{\overrightarrow{\mu_1}} t_1 \to_{\beta } \mu\dot{\overrightarrow{\mu_1}} t_1'$ is smaller),
$  m(\mu\dot{\overrightarrow{\mu_1}} t_1) \to_{\beta \mu} m(\mu\dot{\overrightarrow{\mu_1}} t_1')$. Thus $  m(\mu\dot{\overrightarrow{\mu_1}}(t_1 t_2)) \equiv m(\mu\dot{\overrightarrow{\mu_1}} t_1) m(\mu\dot{\overrightarrow{\mu_1}} t_2) \to_{\beta \mu} m(\mu\dot{\overrightarrow{\mu_1}} t_1') m(\mu \dot{\overrightarrow{\mu_1}}t_2) \equiv m(\mu\dot{\overrightarrow{\mu_1}}(t_1' t_2))$.
For $t'' = t_1 t_2'$, where $  t_2 \to_{\beta} t_2'$, we can argue similarly. 

\

\textbf{Case.} $t'' = (\lambda y.t_1)t_2$ and $t' = \dot{\overrightarrow{\mu_1}}([t_2/y]t_1)$. Then $  m(\mu\dot{\overrightarrow{\mu_1}} ((\lambda y.t_1)t_2)) \equiv (\lambda y.m(\mu\dot{\overrightarrow{\mu_1}} t_1))\ m(\mu \dot{\overrightarrow{\mu_1}}t_2)  \to_{\beta\mu} m( [m(\mu \dot{\overrightarrow{\mu_1}}t_2)/y] m(\mu \dot{\overrightarrow{\mu_1}} t_1)) \equiv m([\mu\dot{\overrightarrow{\mu_1}} t_2/y] \mu \dot{\overrightarrow{\mu_1}}t_1) \equiv m(\mu \dot{\overrightarrow{\mu_1}} [t_2/y]t_1)$ (Lemma \ref{vec}).

\end{proof}





\end{document}
