\documentclass{article} 
\usepackage{url} 
\usepackage{hyperref}
\usepackage{stmaryrd}
\usepackage{manfnt}
\usepackage{fullpage}
\usepackage{proof}
\usepackage{savesym}
\usepackage{amssymb} 
%% \savesymbol{mathfrak}
%\usepackage{MnSymbol} Overall mnsymbol is depressing.
%\restoresymbol{MN}{mathfrak}
\usepackage{xcolor} 
%\usepackage{mathrsfs}
\usepackage{amsmath, amsthm}
%\usepackage{diagrams}
\makeatletter
\newsavebox{\@brx}
\newcommand{\llangle}[1][]{\savebox{\@brx}{\(\m@th{#1\langle}\)}%
  \mathopen{\copy\@brx\kern-0.6\wd\@brx\usebox{\@brx}}}
\newcommand{\rrangle}[1][]{\savebox{\@brx}{\(\m@th{#1\rangle}\)}%
  \mathclose{\copy\@brx\kern-0.6\wd\@brx\usebox{\@brx}}}
\makeatother


\newcommand{\frank}[1]{\textcolor{blue}{\textbf{[#1 --Frank]}}}
% My own macros
\newcommand{\m}[2]{ \{\mu_{#1}\}_{#1 \in #2}} 
\newcommand{\M}[3]{\{#1_i \mapsto #2_i\}_{i \in #3}} 
\newcommand{\bm}[4]{
\{(#1_i:#2_i) \mapsto #3_i\}_{i \in #4}} 

\newcommand{\mlstep}[1]{\twoheadrightarrow_{\underline{#1}}}
\newcommand{\lstep}[1]{\to_{\underline{#1}}}
\newcommand{\mstep}[1]{\twoheadrightarrow_{#1}}
\newcommand{\ep}[0]{\epsilon} 
\newcommand{\nil}[0]{\mathsf{nil}} 
\newcommand{\cons}[0]{\mathsf{cons}} 
\newcommand{\vecc}[0]{\mathsf{vec}} 
\newcommand{\suc}[0]{\mathsf{S}} 
\newcommand{\app}[0]{\mathsf{app}} 
\newcommand{\interp}[1]{\llbracket #1 \rrbracket} 
\newcommand{\intern}[1]{\llangle #1 \rrangle} 
\newcommand*\template[1]{\(\langle\)#1\(\rangle\)}
%% \newarrowfiller{dasheq} {==}{==}{==}{==}
%% \newarrow {Mapsto} |--->
%% \newarrow {Line} -----
%% \newarrow {Implies} ===={=>}
%% \newarrow {EImplies} {}{dasheq}{}{dasheq}{=>}
%% \newarrow {Onto} ----{>>}
%% \newarrow {Dashto}{}{dash}{}{dash}{>}
%% \newarrow {Dashtoo}{}{dash}{}{dash}{>>}

\newtheorem{prop}{Proposition}
\newtheorem{definition}{Definition}
\newtheorem{corollary}{Corollary}
\newtheorem{lemma}{Lemma}
\newtheorem{theorem}{Theorem}


\begin{document}
%\pagestyle{empty}
\title{Research Exposition}
\author{Peng Fu \\
Computer Science, The University of Iowa}
\date{\today}


\maketitle \thispagestyle{empty}


\section{Introduction}
Functional programming languages such as Scheme, OCaml, and Haskell have long been emphasized in
academia. Major programming languages conferences such as ICFP\footnote{The International Conference on Functional Programming}, POPL\footnote{The ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages}, and ESOP\footnote{The European Symposium on Programming} devote large portions of their programs to concepts related to functional programming. More recently, ideas from functional programming have been incorporated into languages like Scala and F\#, and some functional languages are used in industrial and commercial applications\footnote{For example, Galois, Inc.\ uses Haskell and Jane Street uses OCaml.}. Compilation techniques for functional programming languages are also mature\footnote{See the benchmark data for the Haskell compiler (GHC) at \url{http://benchmarksgame.alioth.debian.org/u32q/which-programs-are-fastest.php}.}.

Functional programming offers many desirable features \cite{hughes1989functional}; in this article I will focus on reliability, more specifically, on verifying the correctness of functional programs. One way to carry out such verification is to write the program and prove its correctness in a tool that supports verification, such as Coq or Agda, and then extract the verified program to another language such as Haskell or OCaml. Programs written in Coq or Agda are also functional, so the extraction can be done in a straightforward way. The design of these verification tools is based on logical frameworks largely inspired by the theoretical work of Girard \cite{Girard:72} \cite{Girard:1989}, Martin-L\"of \cite{martin:1984}, and Coquand \cite{Coquand:1988}. These logical frameworks typically have two language fragments: one is a language for writing programs, the other a language for writing proofs and statements about the programs. To be more concrete, let us look at a simple example in Coq.

First, the programmer specifies the data using an \textit{algebraic data type}, which is \textit{nat}
in this case. Then a program or function (called \textit{plus}) operating on $\mathrm{nat}$ is defined using pattern matching.

\begin{verbatim}
Inductive nat : Type :=
    | O : nat
    | S : nat -> nat.

Fixpoint plus (n : nat) (m : nat) : nat :=
  match n with
    | O => m
    | S n' => S (plus n' m)
  end.
\end{verbatim}

There is nothing special about this program; in principle this kind of program can be
translated to any functional programming language that supports recursion. What is interesting
is that we can prove properties about the program \textit{plus}.

\begin{verbatim}
Theorem plus_0_r : forall n:nat, plus n 0 = n.
Proof.
  intros n. induction n as [| n'].
  Case "n = 0". reflexivity.
  Case "n = S n'". simpl. rewrite -> IHn'. reflexivity. Qed.
\end{verbatim}

The above proof may be a little hard to read for a first-time reader. The idea is that we are proving
the equation $\mathrm{plus}\ n\ 0 = n$ by induction on $n$. Usually Coq users prove theorems \textit{interactively}: the user evaluates each line of the proof script, and Coq's IDE provides immediate feedback on the progress of the proof. Coq is also responsible for checking
that the proof script we write actually proves the formula $\forall n{:}\mathrm{nat}.\ \mathrm{plus}\ n\ 0 = n$.

We have now seen how verification is typically done with tools like Coq or Agda. It is natural to ask how to design such a tool, or rather such a logical framework, to achieve this effect. When designing
such a framework, there are many design decisions to make, and many constraints that
need to be satisfied depending on those choices.
\begin{itemize}
\item First and foremost, the framework must be consistent, in the sense that one cannot prove both a formula $F$ and its negation $\neg F$ within the system. Depending on what kind of logical system the designer chooses, consistency may or may not imply a restriction to terminating programs. However, we want to point out that most of these logical frameworks do require termination of the program fragment to establish consistency. This restriction is considerable, since
  many interesting programs do not terminate. There has been much research on designing
  a consistent logical framework that does not require termination of programs \cite{kimmell2012equational}, \cite{casinghinocombining}; part of my dissertation work provides a consistent logical system that does not require termination and is still able to reason soundly about programs.
  \item Secondly, the designer may\footnote{In fact, most designers do.} choose to implement a built-in version of algebraic data types (e.g.\ \textit{nat} above). Algebraic data types arise naturally in statically typed languages (they first appeared in a small functional programming language called Hope). One standard criterion for most statically typed programming languages is \textit{type preservation}, i.e.\ the behavior of a well-typed program is predictable. For example, if a program/function has type $\mathsf{Nat} \to \mathsf{Nat}$, it is reasonable to expect that if it terminates on a given input $n$ of type $\mathsf{Nat}$, the resulting value will also be of type $\mathsf{Nat}$. The proof of type preservation is usually quite involved, and built-in algebraic data types only further complicate it. The major motivation of my dissertation work is to explore an alternative way to represent algebraic data types through lambda-encodings, which not only simplifies the design of the logical framework, but also makes the type preservation argument feasible to achieve.
   \item Built-in algebraic data types imply built-in induction principles for reasoning about these data. Built-in induction principles mean extra rules that we need to take care of when proving properties about the logical framework. With lambda-encodings and the \textit{self type} mechanism proposed in the dissertation, the induction principle for an algebraic data type is
     derivable, so there is no need to build it in, which further simplifies the proofs of
     both consistency and type preservation.
\end{itemize}
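To make the lambda-encoding alternative concrete, the following is a minimal sketch of Church-encoded natural numbers, written with plain (untyped) Python lambdas for brevity; the names \texttt{czero}, \texttt{csucc}, and \texttt{cplus} are mine and purely illustrative, not from the dissertation. A Church numeral $n$ is the function that applies its first argument $n$ times to its second, so the analogue of \textit{plus} requires neither a built-in data type nor recursion:

```python
# Church-encoded naturals: the numeral n is the iterator that
# applies s exactly n times to z.
czero = lambda s: lambda z: z

def csucc(n):
    # successor: apply s one more time
    return lambda s: lambda z: s(n(s)(z))

def cplus(m, n):
    # addition without recursion: iterate s m more times on top of n
    return lambda s: lambda z: m(s)(n(s)(z))

def to_int(n):
    # read back an ordinary integer by iterating (+1) over 0
    return n(lambda x: x + 1)(0)
```

For example, \texttt{to\_int(cplus(csucc(czero), csucc(csucc(czero))))} evaluates to $3$. Note, however, that recovering the \emph{predecessor} of a Church numeral requires rebuilding it from the bottom up, which is the inefficiency in retrieving subdata discussed below.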

\section{Results in the Dissertation}

In this section, I will highlight three systems (System $\mathbf{S}$, $\mathsf{Selfstar}$, and System $\mathfrak{G}$) presented in the dissertation. Among them, System $\mathbf{S}$ and System $\mathfrak{G}$ are consistent logical frameworks. $\mathsf{Selfstar}$ is intended as a Turing-complete programming language; although it has a rich type system, it is inconsistent as a logical
framework.
\begin{itemize}
\item System $\mathbf{S}$ \cite{Pfu:2013} incorporates the self type mechanism and represents algebraic data types by Church encoding. It is consistent as a logic, and the proof of its consistency is much
  simpler than the consistency proofs of other logical systems that treat algebraic data types as primitives. We prove type preservation for System $\mathbf{S}$ as well; again, this is only possible due to the simple design of $\mathbf{S}$. There are three notable limitations of $\mathbf{S}$ compared to other logical systems. First, it does not support so-called \textit{large elimination}; informally, this means that one cannot write a program that computes a formula\footnote{Usually this formula is encoded by data, and one uses some mechanism to go back and forth between the data and the corresponding formula in the logical system. Thus the term-formula distinction is blurred, which typically leads to inconsistency. Please do not confuse this with blurring the program-data distinction, which is the engine behind lambda calculus.}. This is a \textit{reflective} feature, similar to writing programs that write programs in Lisp. It is known to be tricky to deal with and may lead to inconsistency \cite{Girard:72}. Further research is needed to handle large elimination in $\mathbf{S}$. Second, the Church encoding scheme is known to be inefficient for retrieving subdata, which is a common operation in functional programming. My answer to this is that one should not compile a program written in $\mathbf{S}$ to Church encoding, but rather translate it to Scott encoding and run the Scott-encoded version. To do this, we would need to know that the translation preserves the operational semantics, which should be straightforward. One can then write a program and prove properties about it in $\mathbf{S}$, and compile it to Scott encoding to run it. Third, the type checking algorithm for $\mathbf{S}$ is undecidable, which means that the programmer needs to write enough annotations to type check the program. This burden could be lightened by heuristic automation in
  the surface language. Although $\mathbf{S}$ has its limitations in practice, it is the first system proved to be both consistent and type safe without assuming algebraic data types as primitive, which we consider its chief theoretical merit.
  
  \item $\mathsf{Selfstar}$ \cite{Pfu-stump:2013} is an attempt to avoid the inefficiency of Church encoding. It also uses the self type mechanism but adopts the Scott encoding scheme, so retrieving subdata is not a problem for $\mathsf{Selfstar}$. In order to type Scott encodings, we sacrifice consistency as a logic. However, as a Turing-complete programming language, $\mathsf{Selfstar}$ has a very rich notion of type, resulting in flexible type-level \textit{casting}. For example, one can form a terminating program of a type like $\Pi x{:}\mathsf{Nat}.\ x + 0 = x$, so any program of type $P(n+0)$ can be cast to type $P(n)$ without changing its behavior. Since we sacrifice consistency in $\mathsf{Selfstar}$, the term-type distinction can be blurred, which means we can write programs that compute types\footnote{Note that types and formulas are different concepts in general.}. We prove a type preservation theorem for $\mathsf{Selfstar}$. The main drawback of $\mathsf{Selfstar}$ as a practical functional programming language is again the undecidability of type checking. It is well known that for a sufficiently rich type system, decidability of type checking will always be an issue. The research on $\mathsf{Selfstar}$ made us realize the tension between consistency and efficiency, and the importance of decidable type checking.
    \item System $\mathfrak{G}$ \cite{fu:2013} addresses the problems arising in $\mathbf{S}$ and $\mathsf{Selfstar}$ in a satisfactory way. To resolve the tension between Scott encoding and consistency, System $\mathfrak{G}$ admits only the notion of formula, leaving the lambda terms untyped. The self type mechanism becomes a comprehension scheme, which gives $\mathfrak{G}$ the ability to reason about Scott encodings. System $\mathfrak{G}$ is shown to be consistent. Since $\mathfrak{G}$ does not use types as formulas, it has the ability to reason about general programs, and a decidable type system can be designed separately for the programming fragment of $\mathfrak{G}$. I believe $\mathfrak{G}$ is the best result of the dissertation, and it is currently being implemented in Haskell.
      
\end{itemize}
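To illustrate why Scott encoding retrieves subdata efficiently, here is a companion sketch in the same untyped Python-lambda style (again, the names are illustrative only). A Scott-encoded numeral is its own one-level pattern match: the successor case is handed the predecessor directly, so the predecessor operation takes constant time, while recursion (e.g.\ converting back to an integer) is not built into the encoding and must be supplied externally:

```python
# Scott-encoded naturals: a numeral is its own one-level pattern match,
# taking a zero-branch z and a successor-branch s.
szero = lambda z: lambda s: z

def ssucc(n):
    # successor stores its argument and hands it to the S-branch
    return lambda z: lambda s: s(n)

def spred(n):
    # constant-time predecessor: a single pattern match
    return n(szero)(lambda m: m)

def to_int(n):
    # recursion is supplied externally, not by the encoding itself
    return n(0)(lambda m: 1 + to_int(m))
```

For example, \texttt{to\_int(spred(ssucc(ssucc(szero))))} evaluates to $1$ after inspecting only the outermost constructor, in contrast with the Church encoding, where computing a predecessor traverses the whole numeral.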


\bibliographystyle{plain}
\bibliography{statement}


\end{document}
