\documentclass{article} 
\usepackage{url} 
\usepackage{hyperref}
\usepackage{stmaryrd}
\usepackage{manfnt}
\usepackage{fullpage}
\usepackage{proof}
\usepackage{savesym}
\usepackage{amssymb} 
\usepackage{titling}
\newcommand{\subtitle}[1]{%
  \posttitle{%
    \par\end{center}
    \begin{center}\large#1\end{center}
    \vskip0.5em}%
}

%% \savesymbol{mathfrak}
%\usepackage{MnSymbol} Overall mnsymbol is depressing.
%\restoresymbol{MN}{mathfrak}
\usepackage{xcolor} 
%\usepackage{mathrsfs}
\usepackage{amsmath, amsthm}
%\usepackage{diagrams}
\makeatletter
\newsavebox{\@brx}
\newcommand{\llangle}[1][]{\savebox{\@brx}{\(\m@th{#1\langle}\)}%
  \mathopen{\copy\@brx\kern-0.6\wd\@brx\usebox{\@brx}}}
\newcommand{\rrangle}[1][]{\savebox{\@brx}{\(\m@th{#1\rangle}\)}%
  \mathclose{\copy\@brx\kern-0.6\wd\@brx\usebox{\@brx}}}
\makeatother
\newcommand{\selfstar}[0]{\mathsf{Selfstar}} 
\newcommand{\cc}[0]{\mathbf{CC}} 
\newcommand{\systemt}[0]{\mathbf{T}} 
\newcommand{\systemg}[0]{\mathfrak{G}} 
\newcommand{\frank}[1]{\textcolor{blue}{\textbf{[#1 --Frank]}}}
% My own macros
\newcommand{\m}[2]{ \{\mu_{#1}\}_{#1 \in #2}} 
\newcommand{\M}[3]{\{#1_i \mapsto #2_i\}_{i \in #3}} 
\newcommand{\bm}[4]{
\{(#1_i:#2_i) \mapsto #3_i\}_{i \in #4}} 
\newcommand{\self}[0]{\mathbf{S}} 
\newcommand{\fomega}[0]{\mathbf{F}_{\omega}} 

\newcommand{\mlstep}[1]{\twoheadrightarrow_{\underline{#1}}}
\newcommand{\lstep}[1]{\to_{\underline{#1}}}
\newcommand{\mstep}[1]{\twoheadrightarrow_{#1}}
\newcommand{\ep}[0]{\epsilon} 
\newcommand{\nil}[0]{\mathsf{nil}} 
\newcommand{\cons}[0]{\mathsf{cons}} 
\newcommand{\vecc}[0]{\mathsf{vec}} 
\newcommand{\suc}[0]{\mathsf{S}} 
\newcommand{\app}[0]{\mathsf{app}} 
\newcommand{\interp}[1]{\llbracket #1 \rrbracket} 
\newcommand{\intern}[1]{\llangle #1 \rrangle} 
\newcommand*\template[1]{\(\langle\)#1\(\rangle\)}
%% \newarrowfiller{dasheq} {==}{==}{==}{==}
%% \newarrow {Mapsto} |--->
%% \newarrow {Line} -----
%% \newarrow {Implies} ===={=>}
%% \newarrow {EImplies} {}{dasheq}{}{dasheq}{=>}
%% \newarrow {Onto} ----{>>}
%% \newarrow {Dashto}{}{dash}{}{dash}{>}
%% \newarrow {Dashtoo}{}{dash}{}{dash}{>>}

\newtheorem{prop}{Proposition}
\newtheorem{definition}{Definition}
\newtheorem{corollary}{Corollary}
\newtheorem{lemma}{Lemma}
\newtheorem{theorem}{Theorem}


\begin{document}
%\pagestyle{empty}
\title{Lambda Encoding in Type Theory \`a la Curry}
\subtitle{Dissertation Proposal}
\author{Peng Fu \\
Computer Science, The University of Iowa}
\date{\today}


\maketitle \thispagestyle{empty}

%% \begin{abstract}
%%  The title in each section in this proposal will correspond to a chapter in the dissertation (Except the first section.). The dissertation will be based on four articles that I involved with during the years 2011-2013. 
%%   Among them, ``A Framework for Internalizing Relations into Type Theory'' is a published joint work with Aaron Stump and Jeff Vaughan. ``Dependently-typed Programming with Scott Encoding'' is currently unpublished; ``Church Encoding with Dependent Type'' is currently under review, both are joint works with Aaron Stump. ``Lambda Encoding with Comprehension'' is an unpublished article written by me, which is available from my homepage.  
%% \end{abstract}
\section{Background Information}
Functional programming languages such as Scheme, OCaml, and Haskell have long been emphasized in
academia. Many programming-language conferences have large sections devoted to concepts related to functional programming. Recently, ideas from functional programming have been incorporated into languages like Scala and F\#, and some functional languages have been used in industrial and commercial applications\footnote{E.g., Galois, Inc.\ uses Haskell and Jane Street uses OCaml.}. Compilation techniques for functional programming languages are also mature\footnote{See benchmark data for the Haskell compiler (GHC): \url{http://benchmarksgame.alioth.debian.org/u32q/which-programs-are-fastest.php}}.

Among the many desirable features of functional programming \cite{hughes1989functional}, my dissertation focuses on reliability, more specifically, on verifying the correctness of functional programs. One way to carry out such verification is to write the program and prove its correctness in a tool that supports verification, such as Coq or Agda, and then extract the verified program to another language such as Haskell or OCaml. Programs written in Coq or Agda are also functional, so the extraction can be done in a straightforward way. The frameworks underlying these verification tools are largely inspired by the theoretical work of Girard \cite{Girard:72,Girard:1989}, Martin-L\"of \cite{martin:1984}, and Coquand \cite{Coquand:1988}. Informally, these logical frameworks consist of two language fragments: one for writing programs, and one for writing proofs and statements about the programs. To be more concrete, let us look at a simple example in Coq.

First, the programmer specifies the data using an \textit{algebraic data type}, in this case \textit{nat}.
Then a program or function (called \textit{plus}) operating on $\mathrm{nat}$ is defined using pattern matching.

\begin{verbatim}
Inductive nat : Type :=
    | O : nat
    | S : nat -> nat.

Fixpoint plus (n : nat) (m : nat) : nat :=
  match n with
    | O => m
    | S n' => S (plus n' m)
  end.
\end{verbatim}

There is nothing special about this program; in principle, this kind of program can be
translated to any functional programming language that supports recursion. What is interesting
is that we can prove a property about the program \textit{plus}.

\begin{verbatim}
Theorem plus_0_r : forall n:nat, plus n 0 = n.
Proof.
  intros n. induction n as [| n'].
  Case "n = 0". reflexivity.
  Case "n = S n'". simpl. rewrite -> IHn'. reflexivity. Qed.
\end{verbatim}

The above proof may be a little hard to read for a first-time user. The idea is that we prove
the equation $\mathrm{plus}\ n\ 0 = n$ by induction on $n$. Usually Coq users prove theorems \textit{interactively}: the user evaluates each line of the proof script, and Coq's IDE provides immediate feedback on the progress of the proof. Coq is also responsible for checking that
the proof scripts we write actually prove the formula $\forall n:\mathrm{nat}.\ \mathrm{plus}\ n\ 0 = n$.

We have now seen how verification is typically done with tools like Coq or Agda. It is natural to consider how to design such a tool, or logical framework, to achieve this effect. When designing
such a framework, there are many design decisions to make, and many constraints that
need to be satisfied depending on those choices.
\begin{itemize}
\item First and foremost, the framework must be consistent, in the sense that one cannot prove both a formula $F$ and its negation $\neg F$ within the system. Depending on what kind of logical system the designer chooses, consistency may or may not imply a restriction to terminating programs. However, we want to point out that most of these logical frameworks do require termination of the program fragment to establish consistency. This restriction is considerable, since
  there are many interesting nonterminating programs. There has been much research on designing
  consistent logical frameworks that do not require program termination \cite{kimmell2012equational,casinghinocombining}.
  
  \item Secondly, the designer may choose to implement a built-in version of algebraic data types (e.g. \textit{nat} above). Algebraic data types arise naturally in statically typed languages (they first appeared in a small functional programming language called HOPE). One standard criterion for most statically typed programming languages is \textit{type preservation}, i.e. the behavior of a well-typed program is predictable. For example, if a program/function has type $\mathsf{Nat} \to \mathsf{Nat}$, it is reasonable to expect that if it terminates for a given input $n$ of type $\mathsf{Nat}$, the resulting value will also be of type $\mathsf{Nat}$. Usually the process of proving type preservation is quite involved. One of the motivations for my dissertation work is to explore an alternative way to represent algebraic data types through lambda-encodings, which not only simplifies the design of the logical framework, but also makes the type preservation argument feasible.
   \item Built-in algebraic data types imply a built-in induction principle for reasoning about these algebraic data. This means extra rules that we need to take care of when proving properties about the logical framework. With lambda-encoding and the \textit{self type} mechanism proposed in the dissertation, the induction principle for an algebraic data type is
     derivable, so there is no need for a built-in induction principle, which simplifies
     consistency proofs.
\end{itemize}
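To give a first taste of lambda-encoding, here is a minimal sketch in Haskell (not one of the systems studied in this dissertation; all names are illustrative): a Church-encoded boolean is its own case analysis, taking the two branches and returning one of them.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Church-encoded boolean takes the "then" and "else"
-- branches and selects one of them.
newtype CBool = CBool (forall r. r -> r -> r)

true, false :: CBool
true  = CBool (\t _ -> t)
false = CBool (\_ f -> f)

-- If-then-else is just function application.
ifte :: CBool -> a -> a -> a
ifte (CBool b) = b

notB :: CBool -> CBool
notB b = ifte b false true
```

No built-in datatype or pattern matching is needed: the encoding carries its own elimination form.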

\section{Fundamental Concepts}
In this Chapter, we review the notions of abstract reduction systems, the lambda calculus, and untyped lambda encodings. The Church-Rosser property is a key property of an abstract rewriting system. We will survey several methods for proving the Church-Rosser property of a rewrite system; these methods are used in Sections \ref{scott} and \ref{church}. The idea of the Curry-Howard correspondence will be outlined, and then we will give some examples (such as System $\mathbf{F}$ and the Calculus of Constructions) to illustrate Curry-style typing systems.

\section{A Framework for Internalizing Relations into Type Theory}
This Chapter is based on \cite{fu2011framework}. It is about incorporating relations such
as equality, membership, and subtyping as types in a dependent type system. We call this process internalization. Thus $t = t'$, $t \in T$, and $T <: T'$ become types in the object language of the type system instead of relations at the meta level. The purpose of internalization is to be able to use the object language to reason about these relations. For example, we can form a type like $t_1 = t_2 \to t_2 = t_3 \to t_1 = t_3$. We show that the resulting system is normalizing with respect to call-by-name reduction, and thus consistent in the sense that not every type has an inhabitant.

In retrospect, the internalization approach is considerably
\textit{ad hoc}, in the sense that these relations are incorporated through additional specialized rules: for each relation, we add a specific rule to incorporate it into
the system. A direct drawback is that this makes the type preservation theorem hard to prove; in fact, we are not able to prove it without adding further new rules (see the injectivity rules in \cite{casinghinocombining}, for example). Perhaps the major contribution of the work on internalization is that the idea of representing $t \in T$ as a type provided the inspiration for designing System $\systemg$.

%% It would be interesting to compare the work of internalization with Plotkin and Abadi's Logic for Parametric Polymorphism \cite{plotkin1993logic}. The major difference is that we treat relation as type instead of keeping the notion of type and relation separated. 

%% All these issues have been solved in a satisfiable fashion in System $\mathfrak{G}$, all these relation are representable as a formula in $\mathfrak{G}$, so no primitive elimination rules are needed.

\section{Dependently-typed Programming with Scott Encoding}
\label{scott}
We introduce $\selfstar$, a Curry-style dependent type system featuring the \textit{self} type $\iota x.t$, together with mutually recursive definitions and $*:*$. The motivation for devising $\selfstar$ is to obtain Scott-encoded datatypes and the ability to \textit{reason} about these data without treating datatypes and pattern matching as primitives. We show how to encode and reason about numerals and vectors, demonstrating the power of $\selfstar$ as a dependently-typed programming language. Standard metatheorems such as type preservation and progress are proved. We are able to adapt the proof method to prove type preservation for System $\self$ in Section \ref{church} and for System $\systemg$.
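To convey the flavor of Scott encoding, here is a sketch in Haskell rather than in $\selfstar$ itself (the names are illustrative): a Scott numeral is its own pattern-match, taking one case per constructor, and the successor case receives the predecessor directly, so predecessor is constant-time.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Scott numeral takes a successor case and a zero case;
-- the successor case receives the predecessor directly.
newtype Nat = Nat (forall r. (Nat -> r) -> r -> r)

zero :: Nat
zero = Nat (\_ z -> z)

suc :: Nat -> Nat
suc n = Nat (\s _ -> s n)

-- Predecessor is a single case analysis: O(1).
predN :: Nat -> Nat
predN (Nat n) = n id zero

toInt :: Nat -> Int
toInt (Nat n) = n (\m -> 1 + toInt m) 0
```

Note that `toInt` must call itself: Scott data supports case analysis directly, but iteration has to come from general recursion, which is why $\selfstar$ needs its recursive definitions.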

Due to $*:*$ and unrestricted mutually recursive definitions, $\selfstar$ is inconsistent as a logic. So by \textit{reasoning} in $\selfstar$, we really mean the ability to do type-level casting. For example, we can inhabit a type like $\Pi x:\mathsf{Nat}.\ x + 0 = x$ in $\selfstar$; we should not read this as a formula, but as a way to convert any type $P(x + 0)$ to $P(x)$ given $x:\mathsf{Nat}$.

\section{Church Encoding with Dependent Type}
\label{church}
It seems impossible to obtain a consistent type system that supports Scott encoding. In this chapter, we introduce $\self$, which rectifies the inconsistency of $\selfstar$ by restricting mutually recursive definitions and adopting Miquel's implicit product $\forall x:T.T'$ to obtain Church-encoded datatypes and the corresponding induction principles. A notion of erasure from $\self$ to $\fomega$ with let-bindings is defined, thereby establishing strong normalization of the terms. Type preservation is also proved.

It is not possible to encode Scott data in $\self$. Compared to Scott encoding, Church encoding is inefficient at retrieving subdata. $\self$ is consistent in the sense that not every type is
inhabited.
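The inefficiency just mentioned can be illustrated with a Haskell sketch (again with illustrative names, not $\self$ itself): a Church numeral only offers iteration, so the predecessor must rebuild the number with the well-known pair trick, taking linearly many steps, in contrast with the constant-time Scott predecessor.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Church numeral is its own iterator.
newtype Nat = Nat (forall r. (r -> r) -> r -> r)

zero :: Nat
zero = Nat (\_ z -> z)

suc :: Nat -> Nat
suc (Nat n) = Nat (\s z -> s (n s z))

-- Predecessor via the pair trick: iterate
-- (p, c) |-> (c, c+1) starting from (0, 0) and keep
-- the first component.  This takes O(n) steps.
predN :: Nat -> Nat
predN (Nat n) = fst (n (\(_, c) -> (c, suc c)) (zero, zero))

toInt :: Nat -> Int
toInt (Nat n) = n (+ 1) 0
```

Here `toInt` needs no recursion, since iteration is exactly what a Church numeral provides; the price is paid when retrieving subdata, as in `predN`.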

\section{Lambda Encoding with Comprehension}

In this Chapter, we view the self type from a different perspective. Instead of viewing it as
a type, we view it as a form of \textit{set abstraction}. So instead of $t:\iota x.T$, a typing relation at the meta level, we have $t \ep \iota x.F$ (where $F$ is a formula possibly containing $x$) as a formula at the object level. The self type mechanism is thus identified with a
comprehension principle: $t \ep \iota x.F = [t/x]F$. We formalize this idea in System $\systemg$. We show that $\systemg$ is consistent and has the subject reduction\footnote{This property is called type preservation in pure type systems.} property.

Compared to $\selfstar$ and $\self$, System $\systemg$ differs in several aspects: (1) there is no explicit notion of type; however, types can be emulated inside $\systemg$; (2) $\systemg$ can reason about Scott-encoded data consistently, which cannot be done in $\selfstar$ or $\self$; (3) $\systemg$ cannot perform any formula-level computation; for example, one cannot compute a formula from an input $\mathsf{Nat}$.

\section{Future Work}
For future work, we would like to implement System $\systemg$ as a proof system\footnote{In fact, an implementation is currently under way.}. We also want to investigate the prospect of adding an external layer of type inference for the term language in $\systemg$. Further extensions of System $\systemg$ are needed to encode dependent vectors and to represent the parametricity of vectors and lists; we expect these extensions to be straightforward.


\bibliographystyle{plain}
\bibliography{prop}


\end{document}
