\documentclass[preprint]{sigplanconf}

\usepackage{amsmath,amssymb,amsthm}
\usepackage{xspace}
\usepackage{url}
\usepackage{wasysym} % nicer \leadsto
\usepackage{xcolor}
\usepackage{mathpartir}

%% begin for ott
\usepackage{supertabular}
\input{selfstar_unan-inc}
\input{selfstar_ann-inc}
%% end for ott

\newcommand{\selfstar}[0]{Selfstar\xspace}
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}{Lemma}
\newtheorem{proposition}{Proposition}
\newtheorem{definition}{Definition}

\usepackage{comment}
\newif\ifcomments\commentstrue

\ifcomments
\newcommand{\gk}[1]{\textcolor{blue}{\textbf{[#1 ---GK]}}}
\newcommand{\frank}[1]{\textcolor{red}{\textbf{[#1 ---Frank]}}}
\else
\newcommand{\gk}[1]{}
\newcommand{\frank}[1]{}
\fi

\newcommand{\ifrName}[1]{\ottdrulename{#1}}


\begin{document}

%\conferenceinfo{POPL 2013}{date, City.} 
%\copyrightyear{2005} 
%\copyrightdata{[to be supplied]} 

%\titlebanner{banner above paper title}        % These are ignored unless
%\preprintfooter{short description of paper}   % 'preprint' option specified.

\title{Dependent Lambda Encodings with Self Types}

\authorinfo{Aaron Stump \and  Harley Eades \and Frank Fu \and Garrin Kimmell}
           {Computer Science, The University of Iowa, USA}
           {\{aaron-stump, harley-eades, peng-fu, garrin-kimmell\}@uiowa.edu}

\maketitle

\begin{abstract}
This paper shows how to obtain dependently typed versions of Church
and Scott encodings of datatypes, using a new typing feature called
self types.  The type $\ottkw{self}\ x.T$ allows the type $T$ to
refer, via bound variable $x$, to the term which the self type is
typing.  This enables the type of a term to mention the term itself,
just what is needed for dependent eliminations of lambda-encoded data.
The paper defines and analyzes \selfstar, a dependent lambda calculus
with $\star:\star$, self types, call-by-value and call-by-name
abstractions, and mutual recursion. Using \selfstar, we show how
standard datatypes, including indexed datatypes, can be Church- and
Scott-encoded as pure lambda terms with dependent eliminations.
Thanks to the inclusion of call-by-name as well as call-by-value
abstractions, the approach also applies to (mixed) inductive and
coinductive types.
\end{abstract}

\category{D.3.2}{Programming Languages}{Applicative (Functional) Languages}
\category{F.3.2}{Logics and Meanings of Programs}{Semantics of Programming Languages}

\terms
Type theory, Lambda calculus

\keywords
Church encoding, Self types, Dependent types

\section{Lambda Encodings and Dependent Types}
\label{sec:intro}

The design, analysis, and implementation of a dependently typed
functional language is a challenging endeavor, particularly if one
wishes to keep the formal definition of the language small.
% (to
%facilitate computer-checked proofs of its metatheoretic properties,
%for example).
A central point of difficulty is the datatype system.
Richly varied forms of datatypes are known and desirable in practice:
inductive, coinductive, mixed inductive and coinductive, indexed,
non-positive, with compile-time arguments, and more.  This diversity
leads to complex notions of pattern matching, which have also received
substantial attention; a noteworthy issue is whether or not data may
be eliminated (recursively decomposed) at the type level of the
language, as well as the term level.  Type-level elimination of data,
also called large or strong elimination, enables a range of advanced
techniques, including datatype-generic programming and other methods
based on denotational semantics for types.  If one is designing a
Turing-complete programming language, then the difficulties end there.
For total type theories, one must also devise a theory of well-founded
recursion and productive co-recursion as well.

It is a pity that we must endure all this complexity, when there is a
well-known and simple solution to the problem of datatypes: lambda
encodings.  The Church encoding defines data as their own iterators,
while the Scott encoding defines data as their own pattern-matching
constructs (these encodings are reviewed in Section~\ref{sec:backgr}
below).  With lambda encodings, datatypes are not taken as primitives,
but reduced completely to pure lambda calculus.  The resulting
language is much simpler to define and implement, and the large
literature on pure lambda calculus can be applied to analyze it.

The idea of lambda-encoding data is well-known to the Type Theory
community, and one can find signs of that community's regret at being
forced to add datatypes as primitives in papers from the original
proposals for extending the simple and elegant Calculus of
Constructions to the much more complex Calculus of Inductive
Constructions, down to contemporary times (related work on this issue
is discussed further in Section~\ref{sec:related} below).  But lambda
encodings have not, up to now, been shown to be compatible with the
following two features, which are highly desirable or crucial for
dependently typed languages:

\begin{itemize}
\item \textbf{Multi-level eliminations.} In the Calculus of Inductive
  Constructions, an element of an inductive datatype can be eliminated
  at distinct levels of the language.  For example, from a natural
  number $n$ we can recursively define the $(\textit{add}\ n)$
  function which maps another natural number $m$ to the value $n+m$.
  This is a term-level elimination of $n$.  But we can also 
  eliminate $n$ at the type level, for example to form a type like
  $A\to \cdots \to A$, with $n+1$ copies of $A$.  Such large
  eliminations are very useful for advanced idioms like
  datatype-generic programming.

\item \textbf{Dependent eliminations.} Let us call the type of the
  result which is being computed by eliminating $n$ the
  \emph{elimination type}.  So for the \textit{add} example above, the
  elimination type is $\textit{nat}\to\textit{nat}$, since from $n$ we
  compute the $(\textit{add}\ n)$ operation on natural numbers.  A
  \emph{dependent elimination} of $n$ is one where the elimination
  type mentions $n$.  Dependent eliminations are crucial for
  dependently typed programming, and for representing proofs as
  functional programs via the Curry-Howard isomorphism.  For example,
  we might wish to eliminate $n$ at type $(\textit{add}\ n\ 0) = n$
  (assuming an equality type of some kind), in order to prove a basic
  property of the \textit{add} function.  Here the elimination type
  mentions $n$, so this is a dependent elimination.
\end{itemize}

In this paper, we develop a language called \selfstar embodying
solutions to both these problems.  For the problem of multi-level
eliminations, we follow $\Pi\Sigma$ (see Section~\ref{sec:related}) in
adopting the typing principle $\star:\star$ (i.e., ``Type'' is a
type).  This has the effect of collapsing all typing levels into one.
Since the language does not have multiple typing levels, the problem
of multi-level elimination does not arise.  Adding $\star:\star$ is
known to render the language non-normalizing, but like the designers
of $\Pi\Sigma$, we propose to handle the issue of totality checking --
for example, for functions intended as proofs under the Curry-Howard
isomorphism -- via a separate static analysis.  We are able to achieve
a language with a decidable type-checking problem, by putting the use
of the conversion rule (which runs code, in this case possibly
non-normalizing code) under explicit programmer control (see
Section~\ref{sec:alg}).  This makes \selfstar the first type system we
are aware of which includes $\star:\star$ while retaining termination
of the type-checking algorithm.

For the problem of dependent eliminations, we propose a novel typing
construct called \emph{self types} (see Section~\ref{sec:related}
below for the relation to self types as considered in object-oriented
programming).  These are types of the form $\ottkw{self}\ x.t$.  The
intuition for self types is that the type $t$ may refer to the subject
of the type via the name $x$.  So whenever we have
$t':\ottkw{self}\ x.t$, we also have $t':[t'/x]t$.  Thus the type for
subject $t'$ can mention $t'$ itself.  Section~\ref{sec:lang} gives
formal typing rules for self types, including rules for
\emph{subjective equality} $\Gamma \vdash t_1 \stackrel{t}{=} t_2$,
where $t$ is the subject of equal types $t_1$ and $t_2$.  We will see
in Section~\ref{sec:encode} that self types with subjective equality
enable lambda-encoding data with dependent eliminations.  We also show
how to support coinductive datatypes using call-by-name
$\lambda$-abstractions in the encodings, instead of call-by-value
ones.

The main contributions of this paper are:
\begin{itemize}
\item self types and subjective equality, allowing types to refer to their subjects.
\item Scott- and Church-encodings with dependent eliminations, for a
  variety of example datatypes.
\item support for (mixed) inductive and co-inductive datatypes using
  call-by-name and call-by-value abstractions in the lambda encodings.
\item the first type system with decidable type-checking and $\star:\star$.
\end{itemize}

Source code for a prototype implementation of \selfstar, written in
OCaml, may be found at
\url{http://trellys.googlecode.com/svn/trunk/lib/selfstar-popl13}. All
examples from this paper (see the \texttt{examples/} subdirectory)
type-check with this implementation.

\section{Related Work}
\label{sec:related}

\textbf{Languages with $\star:\star$.}  Adopting the $\star:\star$
principle simplifies a number of advanced typing developments.  For
example, Chapman et al. use this as a temporary convenience, to avoid
the need for level-polymorphism in their approach to datatypes for
Epigram 2~\cite{chapman+10}.  Similarly, Weirich and Casinghino
simplify their development of arity-generic, datatype generic
programming by adopting $\star:\star$~\cite{weirich+10}.  For both
those works, $\star:\star$ is explicitly presented as a convenient
device, which can and should be eliminated in practice for use in
total type theories.  The reason for this reluctance is that
$\star:\star$ is well-known to render pure lambda-calculus
non-normalizing~\cite{girard72} (see also~\cite{coquand86,meyer+86}).
Nevertheless, Altenkirch et al. have proposed basing their $\Pi\Sigma$
dependently typed programming language on $\star:\star$, with totality
checking to be handled separately~\cite{altenkirch+10}.  Abel and
Altenkirch present a type-checking algorithm for a type theory with
$\star:\star$, but their algorithm is not guaranteed to terminate.
Cardelli shows how $\star:\star$ can be used for programming, and
defines a denotational semantics based on the concept of
retraction~\cite{cardelli86}.

\textbf{Object calculi with self typing.} In many object-oriented
programming languages, an object's methods are invoked with the entire
object itself as the argument for an implicit \textit{self} or
\textit{this} parameter.  For typing of this self parameter, the
central technical problem has been to devise type systems supporting
self-application while still validating natural record-subtyping
rules.  Solutions are proposed in~\cite{odersky+03,crary99,abadi+94},
for example.  The issue of typing the self parameter of an object's
methods appears different from that of allowing a type to refer to its
subject.  A connection is made, however, in the work of Hickey, who
proposes a type-theoretic encoding of objects based on very dependent
function types $\{ f \,|\, x:A\to B\}$, where the range $B$ can depend
on both $x$ and values of the function $f$ itself on inputs which are
strictly smaller than $x$ in some well-founded
ordering~\cite{hickey96}.  The self types we propose also allow a type
to refer to its subject, as the very dependent function type refers to
its subject $f$.  Self types as developed below are a simpler concept,
however, as (for example) no well-founded ordering is required for
their use.

\textbf{Lambda encodings.} Scott encodings are described
in~\cite{CHS72} (page 504); the article of Scott's cited there as the
source for the encodings appears never to have been published.  Scott
encodings can be carried out in System F~\cite{abadi+93}, though it is
not clear how to write recursive functions on such encodings; and in
object calculi (e.g., Section 4.2.2 of~\cite{abadi+94}).  Church
encodings are well-known to be typable in System F (see, e.g., Chapter
11 of~\cite{gtl90}).

\textbf{Type theory and inductive types.} Work on Epigram 2 and
$\Pi\Sigma$ seeks to give a foundational account for
datatypes~\cite{chapman+10,altenkirch+10}.  In addition to resulting
in simpler languages that are easier to implement and analyze, such an
account also can provide new semantic insights.  For example, the work
on $\Pi\Sigma$ paved the way for Agda's approach to mixed inductive
and coinductive types~\cite{danielsson+10}.  Non-derivability of
dependent eliminations was the central reason for extending the
Calculus of Constructions to the Calculus of Inductive Constructions.
Already in the original paper, the need for extension to inductive
types was acknowledged~\cite{coq88} (see also~\cite{geuvers01} for a
related model-theoretic proof); this was confirmed in subsequent works
proposing such extensions~\cite{pfenning+89,coquand+88}.

\section{Background: Church and Scott Encodings}
\label{sec:backgr}

In this section, we review the Church and Scott lambda-encodings of
basic datatypes, and the problems with basing a dependently typed
language on lambda encodings.  

\subsection{The Church encoding}
\label{sec:church}

The best known lambda-encoding is probably the Church encoding, where
each element of an inductive datatype is encoded as its own iterator.
For example, the natural numbers $\mathbb{N}$ are encoded like this:
\[
\begin{array}{lll}
0 & := & \lambda s.\lambda z. z \\
1 & := & \lambda s.\lambda z. (s\ z) \\
2 & := & \lambda s.\lambda z. (s\ (s\ z)) \\
\ldots &\ &\ \\
n & := & \lambda s.\lambda z. (s^n\ z)
\end{array}
\]
\noindent To formulate the datatype by means of constructors, one defines:
\[
\begin{array}{lll}
0 & := & \lambda s.\lambda z. z \\
S & := & \lambda n.\lambda s.\lambda z.(s\ (n\ s\ z))
\end{array}
\]
\noindent Church-encoded datatypes like this are well-known to be
typable in System F (or $F^\omega$, if one is encoding parametrized
types like the type of homogeneous lists).  If we use a Church-style
formulation of System F (with polymorphic types $\Pi A.T$, explicit
type annotations on $\lambda$-bound variables, and explicit type
abstractions $\Lambda A.t$ and instantiations $t[A]$), we would
define:
\[
\begin{array}{lll}
\textit{Nat} & := & \Pi A.(A\to A)\to A\to A\\
0 & := & \Lambda A. \lambda s:(A\to A).\lambda z:A. z \\
S & := & \lambda n : \textit{Nat}.\Lambda A.\lambda s:(A\to A).\lambda z:A. (s\ (n[A]\ s\ z)) 
\end{array}
\]
\noindent Many common operations on Church-encoded data (addition on
numbers, append on lists, etc.) can then be defined as typable System
F terms.  Since terms typable in System F are strongly normalizing,
all these operations are thus guaranteed to be total.
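Because the encoding is pure untyped lambda calculus, it can be exercised directly in any language with first-class functions.  The following Python sketch is our own illustration (the helper `to_int` is not part of the encoding; it simply instantiates $s$ and $z$ concretely so results can be inspected):

```python
# Church numerals as plain lambda terms: n = \s.\z. s^n z.
zero = lambda s: lambda z: z
succ = lambda n: lambda s: lambda z: s(n(s)(z))

# Addition iterates succ, using the number itself as its own iterator.
add = lambda n: lambda m: n(succ)(m)

# Convert to a native int by instantiating s and z concretely.
to_int = lambda n: n(lambda x: x + 1)(0)

two = succ(succ(zero))
three = succ(two)
```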

\subsection{The Scott encoding}
\label{sec:scott}

The Scott encoding defines data as their own case statements.  In
untyped lambda calculus, numerals are defined by:
\[
\begin{array}{lll}
0 & := & \lambda s.\lambda z. z \\
1 & := & \lambda s.\lambda z. (s\ 0) \\
2 & := & \lambda s.\lambda z. (s\ 1) \\
\ldots &\ &\ \\
n+1 & := & \lambda s.\lambda z. (s\ n)
\end{array}
\]
\noindent The constructors are then:
\[
\begin{array}{lll}
0 & := & \lambda s.\lambda z. z \\
S & := & \lambda n.\lambda s.\lambda z.(s\ n)
\end{array}
\]
\noindent The most natural way to type these
(\emph{pace}~\cite{abadi+93}) is using recursive types.  If we work in
an extension of System F with singly recursive top-level definitions,
we can define a typed Scott encoding of the natural numbers this way:
\[
\begin{array}{lll}
\textit{Nat} & := & \Pi A.(\textit{Nat}\to A)\to A\to A\\
0 & := & \Lambda A. \lambda s:(\textit{Nat}\to A).\lambda z:A. z \\
S & := & \lambda n : \textit{Nat}.\Lambda A.\lambda s:(\textit{Nat}\to A).\lambda z:A. (s\ n) 
\end{array}
\]
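As with the Church encoding, the untyped Scott numerals can be run concretely.  In the Python sketch below (our own; `pred`, `is_zero`, and `to_int` are illustrative helpers), case analysis is just application, so taking the predecessor is a constant-time operation:

```python
# Scott numerals: each number is its own case statement.
#   0   = \s.\z. z      (select the zero branch)
#   n+1 = \s.\z. s n    (pass the predecessor to the successor branch)
zero = lambda s: lambda z: z
succ = lambda n: lambda s: lambda z: s(n)

# Case analysis is application; the predecessor is handed over
# directly, so pred is constant time.
pred = lambda n: n(lambda p: p)(zero)
is_zero = lambda n: n(lambda p: False)(True)

# Recursion is not built into the data: to fold over a Scott numeral
# we recurse explicitly (here via Python's own recursion).
def to_int(n):
    return 0 if is_zero(n) else 1 + to_int(pred(n))

three = succ(succ(succ(zero)))
```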

\subsection{Lambda encodings and type theories} 
\label{ssec:lambda-and-type-theories}

As summarized by Werner~\cite{werner92}, the three problems with using
lambda-encodings in impredicative type theories like the Calculus of
Constructions are:
\begin{itemize}
\item inefficiency of operations returning immediate subdata (like the
  predecessor of a natural number) for Church-encoded data.
\item non-derivability of induction principles (see the references in
  Section~\ref{sec:related} above).
\item non-derivability of clash principles like $0 \neq (S\ x)$, for
  data headed by distinct constructors; Werner gives an elegant
  argument for this by reduction to $F^\omega$.
\end{itemize}
\noindent The first problem is an issue with Church encodings in
general, not with their use in type theories.  This problem does not
afflict Scott encodings, and so we will focus our attention below on
those (though Section~\ref{sec:egchurch} shows that Church encodings
are indeed dependently typable in \selfstar).  
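The first problem is easy to observe concretely: on Church numerals, the standard predecessor (Kleene's pairing trick) must rebuild the number from below, taking time linear in its value, whereas a Scott numeral simply hands over its predecessor.  A Python sketch of the Church predecessor (the helper names are our own):

```python
# Church pairs and numerals.
pair = lambda a: lambda b: lambda f: f(a)(b)
fst = lambda p: p(lambda a: lambda b: a)
snd = lambda p: p(lambda a: lambda b: b)

zero = lambda s: lambda z: z
succ = lambda n: lambda s: lambda z: s(n(s)(z))
to_int = lambda n: n(lambda x: x + 1)(0)

# Kleene's trick: step maps (a, b) to (b, b+1); iterating it n times
# from (0, 0) leaves n-1 in the first component, so pred n costs n
# iteration steps rather than one.
step = lambda p: pair(snd(p))(succ(snd(p)))
pred = lambda n: fst(n(step)(pair(zero)(zero)))
```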

To Werner's list we can add the problem of multi-level elimination,
which is the source of the third problem.  Since lambda-encoded data
are defined to be some form of their own eliminators (iterators or
case statements), the definition must choose a level for those
eliminations.  We cannot use a \textit{Nat} with the definition above
(Section~\ref{sec:church}) to compute a type.  That is because such a
\textit{Nat} can only be used to compute an $A$ where $A:\star$.  We
cannot compute an $A$ where $A:\Box$ (that is, where $A$ is a kind)
from this definition.  Adding impredicative quantification to the kind
level of the Calculus of Constructions is known to lead to
non-normalization~\cite[Section 9]{coquand86}, so even using multiple
copies of the encoded data (one for term-level and another for
type-level computation) is not an option for total type theories.  The
problem is not significantly lessened in predicative type theories,
since there one must duplicate the encoded numbers at each level, or
else add some form of level polymorphism~\cite{harper+91}.  The lack
of multi-level eliminations is what leads to the non-derivability of
clash principles, which can easily be derived in the Calculus of
Inductive Constructions, where multi-level elimination of primitive
data is supported.  We will see below (Section~\ref{sec:clash}) that
clash principles can be derived in \selfstar, due to the lack of
multiple type levels.

The non-derivability of induction principles is essentially the
problem of dependent eliminations mentioned in
Section~\ref{sec:intro}, and is the main target of the present paper.
In the Calculus of Constructions, for example, we would like to have
an inhabitant of the following type, representing the induction
principle for natural numbers:
\[
\begin{array}{l}
\Pi C : (\textit{Nat} \to *) . (\Pi n : \textit{Nat}. (C\ n) \to (C\ (S\ n))) \to \\
                   (C\ 0) \to  \Pi n:\textit{Nat}.(C\ n)
\end{array}
\]
\noindent It is quite puzzling how this should be represented for
Church-encoded data (Scott-encoded data pose similar problems).  If we
erase all dependencies, this type is just $\Pi C:*.(C \to C)\to
C \to \textit{Nat} \to C$.  Except for the extra \textit{Nat}, this is
exactly the System-F type which we listed above
(Section~\ref{sec:church}) as the definition of \textit{Nat}.  So this
type is clearly close to what we might have expected, based on our
System F typing.  But for data encoded as their own iterators, we
should have
\[
\begin{array}{lll}
n & : & \Pi C : (\textit{Nat} \to *) . (\Pi n : \textit{Nat}. (C\ n) \to (C\ (S\ n))) \to\\
\ & \ &  (C\ 0) \to (C\ n)
\end{array}
\]
\noindent That is, a Church-encoded number is one where given a
predicate $C$ on natural numbers, a step case showing how to go from
$(C\ n)$ to $(C\ (S\ n))$, and a base case showing $(C\ 0)$, the
number should show you how to compute $C$ of the number itself.  So we
could reasonably expect the type of $n$ to mention $n$ -- something
that does not seem possible in existing type theories like the
Calculus of Constructions.  We will turn now to a new typing
construct, self types, which enables a type to refer to its subject,
and will allow the above typing for a natural number $n$.

\section{Self types}
\label{sec:self}

The locus of the above problem with dependent lambda encodings is the
lack of a typing construct which would allow a type to refer to the
subject of its typing.  So let us add such a construct to our set of
types: $\ottkw{self}\ x.t$ is the type which can refer, in body $t$,
to its subject via the bound variable $x$.  We will study this concept
formally in Section~\ref{sec:lang}, but informally, we can observe
several important points about self types.

\begin{itemize}
\item Suppose we have $t':\ottkw{self}\ x.t$ (and we use the same
  meta-variable $t$ at the term and type levels, since the presence of
  $\star:\star$ will make it impossible to distinguish terms and types
  syntactically).  Then we should also have $t':[t'/x]\ t$, and vice
  versa.
\item This suggests that $\ottkw{self}\ x.t$ and $[t'/x]\ t$ should
  be viewed as equal types for the subject $t'$.
\item So we will develop our type theory with a typing judgment
  $\Gamma\vdash t:t'$ as expected, and also a judgment $\Gamma \vdash
  t_1\stackrel{t}{=}t_2$ of \emph{subjective equality}, stating that
  types $t_1$ and $t_2$ are equal types for subject $t$ in context $\Gamma$.
\item Sometimes we will not have a subject in our subjective equality,
  for example when trying to equate domain-types $t_1$ and $t_2$ of
  two different $\Pi$-types.  Then we will just write $\Gamma \vdash
  t_1 \stackrel{\_}{=} t_2$, where the $\_$ above the equality symbol
  indicates that the equality is true for any subject.
\item When typing $\ottkw{self}\ x.t$, we must assign some type for
  $x$, for purposes of typing the body $t$.  Since we will replace $x$
  with a term which has the entire self type as its type, we will
  assign the entire self type ($\ottkw{self}\ x.t$) to $x$ when typing
  $t$.
\item When defining subjective equality, there is a special case we
  can consider when the subject is a $\lambda$-abstraction, or more
  generally, any term $t$ such that for all $t'$, if $t\leadsto t'$,
  then $t$ and $t'$ must be instances of the same term construct.  In
  order to prove $\Gamma \vdash \Pi x:t_1.t_2 \stackrel{\lambda
    x.t}{=} \Pi x:t_1'.t_2'$, it will suffice to prove $\Gamma \vdash
  t_1 \stackrel{\_}{=} t_1'$ and $\Gamma ,x:t_1 \vdash t_2 \stackrel{t}{=}
  t_2'$.  The important point here is that (as in polite conversation)
  we are allowed to \emph{change the subject}, in this case from
  $\lambda x.t$ to just the body $t$.
\end{itemize}
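The first two observations can be summarized as a pair of informal rules (a sketch only, eliding well-formedness premises; the official rules appear in Section~\ref{sec:lang}):
\[
\inferrule{\Gamma \vdash t' : \ottkw{self}\ x.t}
          {\Gamma \vdash t' : [t'/x]\ t}
\qquad
\inferrule{\Gamma \vdash t' : [t'/x]\ t}
          {\Gamma \vdash t' : \ottkw{self}\ x.t}
\]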

Let us see how self types can be applied for a very simple example,
namely the lambda-encoded booleans.  Since the Church and Scott
encodings agree for non-recursive datatypes, this example is the same
no matter which encoding we choose.  We would like to use self types
to express the dependence of the type for booleans on its subject.  So
we can begin with the following (which is not yet a correct
definition):
\[
\textit{bool}\ :=\ \ottkw{self}\ b.\Pi C:\textit{bool}\to \star.(C\ \textit{true})\to(C\ \textit{false})\to (C\ b)
\]
\noindent While self types are allowing us to express the crucial
dependency, there is still an issue to resolve: the type we are
attempting to take as the definition of \textit{bool} mentions
\textit{bool}, as well as \textit{true} and \textit{false}.  So we
must define all three of these mutually recursively.  For this
purpose, \selfstar includes a mutually recursive fixed-point construct
$\mu x_1 = t_1,\ldots,x_n = t_n.t$, where the definitions of bound
variables $x_1$ through $x_n$ are in scope both for the defining terms
$t_1$ through $t_n$, and also the body $t$ of the construct.  To type
such a term, we must assign types $t_1',\ldots,t_n'$ to the defining
terms $t_1,\ldots,t_n$, respectively.  We will extend the typing context
with the typed equations $x_1:t_1' = t_1,\ldots,x_n:t_n' = t_n$, when
typing each defining term, and when typing the body $t$.

With this construct, we can complete our definition.  We are going to
start our formal development below with a Curry-style (unannotated)
formulation of \selfstar (Section~\ref{sec:lang}), so we use that
here.  (A Church-style (annotated) formulation with algorithmic typing
is given in Section~\ref{sec:alg} below.)  Here, \textit{body} is any
further term we wish to type with these definitions:
\[
\begin{array}{lll}
\mu\  \textit{bool} & = & \ottkw{self}\ b.\Pi C:\textit{bool}\to \star.\\
 \ &\ & (C\ \textit{true})\to(C\ \textit{false})\to (C\ b),\\
\ \ \ \textit{true} & = & \lambda C.\lambda t.\lambda f. t, \\
\ \ \  \textit{false} & = & \lambda C.\lambda t.\lambda f. f\ . \\
\textit{body} &\ &\ 
\end{array}
\]
\noindent When type-checking this term, we can assign \textit{bool}
the type $\star$, and \textit{true} and \textit{false} the type
\textit{bool}.  Let us see informally why these typings make sense.
First, we can type \textit{bool} with type $\star$ because if we
assign the $\ottkw{self}$-bound variable $b$ the entire self type,
then the body of the self type has type $\star$.  To see this, we note
that since we check all defining terms in the fully extended context,
including both the typing and the definition of each of the
recursively defined terms, we have available the assumption that
$\textit{bool}:\star$.  So $\textit{bool}\to\star$ has type $\star$.
The subterms $(C\ \textit{true})$ and $(C\ \textit{false})$ also have
type $\star$, since we have assumptions that
$\textit{true}:\textit{bool}$ and $\textit{false}:\textit{bool}$ in
the extended typing context.  Finally,
we can type $(C\ b)$ because we have the assumption that $b$ has
the entire self type as its type, and also the assumption that this
self type equals \textit{bool}.

Now we should consider how $\lambda C.\lambda t.\lambda f. t$ can be
assigned the type \textit{bool} (the case for the defining term for
\textit{false} is similar).  The definition of subjective equality for
self types (and the defining equation for \textit{bool}) gives us:
\[
\textit{bool} \stackrel{\textit{true}}{=} \Pi C:\textit{bool}\to \star.(C\ \textit{true})\to(C\ \textit{false})\to (C\ \textit{true})
\]
\noindent So it suffices to assign the type on the right-hand side of
this equation to $\lambda C.\lambda t.\lambda f. t$, which can easily
be seen to be a legal type assignment.
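Stripped of their types, the definitions of \textit{true} and \textit{false} above are ordinary lambda terms, and elimination is just application.  A Python sketch of the erased terms (our own illustration; the dependent typing itself, of course, exists only in \selfstar):

```python
# Erased lambda-encoded booleans: true = \C.\t.\f.t, false = \C.\t.\f.f.
# We keep the computationally irrelevant C argument to mirror the
# terms above; None stands in for the erased predicate.
true  = lambda C: lambda t: lambda f: t
false = lambda C: lambda t: lambda f: f

# An if-then-else is just the boolean applied to its two branches.
ite = lambda b: lambda then_val: lambda else_val: b(None)(then_val)(else_val)
```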

This simple example illustrates the heart of our approach to dependent
lambda encodings using self types.  To see more of the details, and
how this approach can be applied to more complex datatypes than just
the booleans, we will turn first to a formal definition of \selfstar
(Sections~\ref{sec:lang} and~\ref{sec:alg}).  We will then
consider more examples in Section~\ref{sec:encode}.

\section{Type-assignment Formulation of \selfstar}
\label{sec:lang}

In turning now to the definition of \selfstar, we will follow a
2-level design methodology for dependently typed languages, which we
and others have found very useful.  One first designs a
non-algorithmic type-assignment system, including notions of equality
-- whether provable equalities or automatic conversion relations --
operating on Curry-style, unannotated terms.  We call this the
unannotated system, and define unannotated \selfstar in the current
section.  Then one develops a scheme of annotations for such terms,
with algorithmic typing rules.  We present annotated \selfstar in
Section~\ref{sec:alg}.

The advantages of this approach are that (1) metatheoretic results are
proved about the unannotated language, which has less syntactic
clutter than the annotated version (this is an advantage well known
from the study of languages without dependent types); and (2) the
annotated language can still base its notions of equality on
unannotated (hence, erased) terms.  This gives a very clean and direct
way of obtaining ``annotation-irrelevance'' for equality: terms are
considered equal iff their erased (unannotated) versions are, so
annotations are irrelevant for equality.  

A notable recent work using this methodology is the work on
$\textnormal{ICC}^*$, the algorithmic counterpart of the
(non-algorithmic) Implicit Calculus of Constructions
(ICC)~\cite{miquel01,barras+08}.  Sj\"oberg and Stump extend this
approach to consider conversions using provable equality as
annotations themselves, and hence subject to
erasure~\cite{sjoberg+10}.  We will follow their approach here, though
in our setting, (subjective) equality is a judgment only, while
in~\cite{sjoberg+10} it is a type (though see Section~\ref{sec:future}
for our future plans on taking subjective equality to be a type).

\subsection{Syntax of terms}

The syntactic categories of unannotated \selfstar are given below.  In
order to support co-inductive and mixed inductive/co-inductive types,
we include call-by-name as well as call-by-value abstractions.  The
difference is indicated by a strategy annotation $s$ on $\lambda$- and
$\Pi$-abstractions.  In a few places below we will write $[[\m xs = ts
    . t']]$ for $[[\m x1 = t1 , dots , xn = tn . t']]$, for
typographic reasons.

\ 

\ottgrammartabular{
\ottstrategy\ottinterrule
\ottterm\ottafterlastrule
}

\subsection{Syntax of contexts and environments}

The typing, subjective equality, and reduction judgments defined below
rely on typing contexts $\Gamma$ and environments $E$.  We will
implicitly assume that contexts and environments do not contain
bindings of the same form (e.g., typings $x:t$) either individually or
when paired together in judgments containing both a typing context and
an environment.  Bound variables can always be renamed to achieve
this.

\ 

\ottgrammartabular{
\ottbinding\ottinterrule
\ottassump\ottinterrule
\ottmucontext\ottafterlastrule
}

\subsection{Reduction, typing, subjective equality}

\begin{comment}

This section defines the judgments of our system, of which the
central ones are marked with a $\star$
\begin{itemize}
\item[$\star$] $[[G |- t : t']]$ for assigning $t'$ as a type for $t$ in context $\Gamma$.
\item[$\star$] $[[G |- E ; t ~> E' ; t']]$ for reducing $t$ in one step to $t'$ with starting environment $E$ and ending environment $E'$, in context $\Gamma$.
\item $[[G |- E ; t ~> m E' ; t']]$ for reducing $t$ in $m$ steps to $t'$ with starting environment $E$ and ending environment $E'$, in context $\Gamma$.
\item $[[G |- [ t ? : t1 ] = t2]]$ subjective equality: the terms $t_1$ and $t_2$ are equal types for subject $t$, if $t$ is present in the judgment, in context $\Gamma$.  If $t$ is absent, we write $\_$.
\item[$\star$] $[[G |- [ t ? : t1 ] => t2]]$ directed subjective equality; the meaning is the same as for subjective equality, but the relation is not symmetric.  
\item $[[ G |- val t ]]$ term $t$, possibly containing free variables,
  is a value in context $\Gamma$.  A free variable $x$ is considered a
  value if it was introduced to the context for typing a call-by-value
  abstraction.  In this case, the context will contain an additional
  binding $[[val x]]$.
\end{itemize}
\end{comment}
\noindent Subjective equality is defined below as a form of
joinability with respect to directed subjective equality.  One could
give a more declarative definition of subjective equality including
versions of the rules for our directed subjective equality as well as
symmetry.  We have found the directed approach convenient in practice,
however, as we will see further in Section~\ref{sec:alg}.  So we
base our unannotated development on directed equality.

The typing rules use a couple of meta-level helper functions:

\ 

\ottfundefnsmaybeval
%\ottfundefnsclose

\ 

\noindent The rules for the judgments are then as follows:

\ 

\ottdefnsJtyp


\subsection{Metatheoretic properties}
\label{sec:meta}

\begin{comment}
\begin{theorem}[Progress]
  \label{thm:progress}
  If $[[. |- t : t']]$ then either $[[. |- val t]]$ or $\exists E,t''.[[. |- .;t ~> E;t'']]$.
\end{theorem}

\begin{lemma}
\label{lem:diamondeval}
If $[[G |- . ; t1 ~> m E' ; t2]]$ and $[[ G |- [ t ? : t1 ] -> t3]]$,
then there exists $m'$, $E''$, and $t_4$ such that
\begin{itemize}
\item $[[G |- . ; close(E',t2) ~> m' E'' ; t4]]$, and
\item $[[G |- [ t ? : t3 ] -> close(E'',t4)]]$.
\end{itemize}
\end{lemma}


\begin{theorem}[Preservation]
  \label{thm:preservation}
 If $[[G |- t : t']]$ and $[[G |- . ; t ~> E ; t1]]$, then $[[G |- close (E, t1): t']]$.
\end{theorem}


\begin{theorem}[Type Safety]
\label{thm:safety}
TODO
\end{theorem}

\end{comment}


\section{Algorithmic Formulation of \selfstar}
\label{sec:alg}

The typing judgment for the unannotated language presented in
Section~\ref{sec:lang} is non-algorithmic, due to the \textrm{Conv}
rule and the Curry-style formulation of $\lambda$- and
$\mu$-abstractions. In this section, we present an annotated version
of the system, slightly modified so that type checking is
syntax-directed. The term syntax and typing rules for the annotated
language are shown below.

\annottgrammartabular{
\annottafterlastrule
\annottterm\annottafterlastrule
}

\annottdefntyping

\annottdefnequality


In addition to explicit annotations of $\lambda$ and $\mu$-bound
variables, the language includes a \textbf{conv} construct that is
used to guide the type checker in the application of the directed
subjective equality rules.


%% Fixme -- can't ott filter here, so need to typeset term constructs.
The \textbf{conv} form \texttt{conv t to t' by p1 , p2} changes the
type of a term \texttt{t} to the type \texttt{t'}, using the
programmer-supplied proofs \texttt{p1} and \texttt{p2}. The first
proof \texttt{p1} specifies the derivation necessary to change the
inferred type of the subject of conversion term to some intermediate
term \texttt{} , while the second proof \texttt{p2} specifies the
derivation necessary to change the target type \texttt{} to the same
intermediate term. \gk{Is the following informative, or even correct%
  terminology?}  This provides notational convenience, as it allows
proof rules to match the directional orientation of reduction, rather
than the symmetric notion of equality, which would require (for
example), proof terms for $\beta$-expansion and folding.

The language for proofs is shown below; following the language of proofs
are the rules for directed subjective equality 
  \verb!G |- proof : [p1 : t1]  => t2! for the annotated language.  

\annottgrammartabular{
\annottafterlastrule
\annottpf\annottafterlastrule
}



Proof terms fall into two main categories. The first are the
$\annottkw{refl}$, $\annottkw{eval}$, $\annottkw{substself}$, %
$\texttt{p1 ; p2}$ ($\annottkw{trans}$), and $\annottkw{unfold}$ constructs, that guide
the application of the corresponding rules. The second are those that
closely match the syntax of annotated terms; these proof constructs are
used to apply the directed subjective equality congruence
rules. 

The $\annottdrulename{Red\_RedPiVal}$ and
$\annottdrulename{Red\_CongPi}$ rules overlap syntactically. In this
case, the subject term is used to disambiguate; if it is a $\lambda$
term, then $\annottdrulename{Red\_RedPiVal}$ is applied, otherwise
$\annottdrulename{Red\_CongPi}$ is used. In either case, the $\Pi$
proof syntax is used. The $\annottdrulename{Red\_RedRefl}$ rule is
derivable from $\annottdrulename{Red\_RedEval}$, as terms trivially
reduce to themselves in zero steps. However, we include this rule to
remain faithful to the implementation, where a $\annottkw{refl}$ proof
serves to indicate a subterm of the type that remains constant under
conversion.\\

\annottdefnareduction\annottafterlastrule




\section{Lambda-encoding Datatypes in \selfstar}
\label{sec:encode}

This section presents a number of example datatypes which we have
lambda-encoded in \selfstar, with dependent eliminations.  The examples
are written in the concrete syntax of our prototype implementation, where
\begin{itemize}
\item \verb|! x : t1 . t2| denotes $[[! cbv x : t_1. t_2]]$.
\item \verb|! x :: t1 . t2| denotes $[[! cbn x : t_1. t_2]]$.
\item \verb|\ x : t1 . t2| denotes $\lambda^V x : t_1. t_2$.
\item \verb|\ x :: t1 . t2| denotes $\lambda^N x : t_1. t_2$.
\item \verb|t1 -> t2| abbreviates $[[! cbv x : t_1. t_2]]$ when $x$ is not free in $t_2$.
\item \verb|t1 => t2| abbreviates $[[! cbn x : t_1. t_2]]$ when $x$ is
  not free in $t_2$.
\end{itemize}

\noindent We write \texttt{fix} for $\mu$, and make use of notation
for top-level \texttt{Fix} constructs as syntactic sugar for nested
$\mu$-terms.  As explained further in Section~\ref{sec:future}, our
goal is to develop \selfstar into a core language for dependently
typed programming.  A surface language would include many further
conveniences, notably automation to infer many type annotations or
elided arguments, and to construct as many as possible of the proof
terms we use to prove judgmental equalities.  We do not propose that
end programmers write such proofs themselves, as this is (as the
reader will soon see) quite tedious.  The point of this section is to
show that these examples can be type-checked algorithmically in
\selfstar.  We leave automated reasoning and type inference for
\selfstar as future work.

\paragraph{Unit and Void} 

Figure~\ref{fig:voidandunit} shows the \selfstar definitions for the
\texttt{void} type (with no constructors) and \texttt{unit} type that
includes a single nullary \texttt{mkunit} constructor. These
definitions illustrate the basic structure of a Scott-encoded datatype
in \selfstar as a comma-delimited group of mutually recursive
definitions -- one definition for the type itself, followed by a series
of definitions, one for each constructor. 

Recall that the Scott encoding of a dependent type may mention the
type itself. Consequently, the definitions of the \texttt{void} and
\texttt{unit} type constructors each begin with the \texttt{self} form
to introduce a name for the type being defined -- abstracting this
name plays a crucial role in the definition of the constructors for
the Scott-encoded type, as we describe below. Defined types will
then take an argument, \texttt{C} (the ``motive'', using the
terminology of \citet{mcbride:04:view}), mapping from the type being defined to the elimination
type. 

Following the motive, the type will take a sequence of arguments
corresponding to the various data constructors of the type. When the
Scott encoding of a type is viewed as a $\lambda$-encoding of the
type's (dependent) case, these arguments correspond to the various
branches of the case, one for each constructor. Because we are
encoding the \emph{dependent} eliminator of a type, these arguments
should have the type of the motive applied to the respective
constructor. In the case of the empty \texttt{void} type, there are no
constructors; in the case of the \texttt{unit} type, we have a single
argument for the \texttt{mkunit} constructor.

Finally, the form of a data type \texttt{t} encoded in \selfstar should have the
range \texttt{C t} -- in the case of our examples, \texttt{C void} and
\texttt{C unit} respectively. However, observe from the
$\annottdrulename{Self}$ rule that in a term $[[self u . t]]$, the
variable $[[u]]$ will have the type $[[self u . t]]$. Consequently, we
have to use the annotated \textbf{conv} rule to cast $[[u]]$ to the
type being defined. In the case of the \texttt{void} and \texttt{unit} types,
this is done in the same way, by unfolding the definition of the
target type. 

Thus in the case of the \texttt{void} type, we join the inferred
type of \texttt{self u. ! C : (void -> *). (C conv u to void by refl,
  unfold)} and the target of the cast, \texttt{void}, using
\texttt{refl} to leave the inferred type unchanged, and using
\texttt{unfold} to replace the defined type \texttt{void} with its
definition,  \texttt{self u. ! C : (void -> *). (C conv u to void by refl,
  unfold)}. The same conversion proofs are used in the case of the
\texttt{unit} type.

The \texttt{mkunit} binding is part of the set of mutually recursive
bindings defining the \texttt{unit} type. It
consists of a $\lambda$-encoding -- \texttt{\ C : unit
  -> * . \ u : (C mkunit) . u} -- wrapped in a \texttt{conv} to cast the
encoding to the proper type, \texttt{unit}. The inferred type of the
$\lambda$-encoding is 

\texttt{! C : unit -> * . ! u : (C mkunit) . (C  mkunit)}. 

Using the congruence proof constructs and unfolding of definitions,
we convert this type to an intermediate type

 \texttt{(! C : (unit -> *) . (! u : (C mkunit) . (C $t$)))}

 where $t$ is the body of the \texttt{mkunit} constructor. 

The target type \texttt{unit}
is converted to the same intermediate type by using the proof term
\texttt{[ unfold ; substself ]}. This proof will first unfold \texttt{unit} to 

\texttt{self u . ! C : (unit -> *). (C mkunit) -> (C conv u to unit by
  refl, unfold)}. 

Next, the \texttt{substself} rule replaces
\texttt{u} in this type with the term being converted, in this case
the entire body, named $t$ above, of the \texttt{mkunit} constructor,
resulting in the type 

\texttt{! C : (unit -> *). (C mkunit) -> (C conv $t$ to unit by refl,
  unfold)}. 

\gk{Do we need to say something about TJoin equating terms
  modulo conv-dropping?}
This type matches (modulo dropping \texttt{conv} terms, which have no
computational content) the type to which we converted the original
inferred type of the $\lambda$-encoding of the constructor, \texttt{\ C : unit
  -> * . \ u : (C mkunit) . u}.

The \texttt{unit} type illustrates the key concept of using
\texttt{self} types to allow dependent elimination of
$\lambda$-encoded data. In the remainder of this section, we present
a series of examples illustrating the use of \texttt{self} types for a
variety of common dependently typed programming idioms.
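As an informal aside (our own untyped rendering in Python, not \selfstar
syntax), the computational content of the \texttt{unit} encoding is tiny:
the \texttt{conv} proofs erase completely, leaving only the motive
argument \texttt{C} and the single branch, and dependent elimination is
plain application.

```python
# Untyped sketch of the Scott-encoded unit type: conv proofs have no
# computational content, so mkunit is the one-branch case expression
# that simply returns its branch.
mkunit = lambda C, u: u  # erases \ C : unit -> * . \ u : (C mkunit) . u

# Dependent elimination is application: supply a motive and a branch
# for mkunit. The motive does no computational work at runtime.
motive = lambda t: None            # stand-in for C
branch = "value for the mkunit branch"
assert mkunit(motive, branch) == branch
```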


\begin{figure*}
\begin{verbatim}
Fix void : * = self u. ! C : (void -> *). (C conv u to void by refl, unfold)

Fix unit : * = self u . ! C : (unit -> *). (C mkunit) -> (C conv u to unit by refl, unfold) ,
    mkunit : unit =
         conv \ C : unit -> * . \ u : (C mkunit) . u 
         to unit
         by (! C : refl . (! z : refl . (C unfold))),
            [ unfold ; substself ]
\end{verbatim}
\caption{\texttt{void} and \texttt{unit} types}
\label{fig:voidandunit}
\end{figure*}


\paragraph{Natural Numbers}

Figure~\ref{fig:nats} shows a definition of Scott-encoded natural
numbers corresponding to the definition from Section~\ref{sec:scott},
modified to support dependent elimination through the use of the
\texttt{self} type. Moreover, the motive \texttt{C} is
defined as a call-by-name abstraction (indicated by the \texttt{=>}
arrow in the type ascription for \texttt{C} in the definition of
\texttt{nat}). \gk{need i say more about why we use call by name? why
  do we use call by name here?} Finally, in the first proof changing the
type of \texttt{succ}, we change \texttt{C (succ n)} to the
target type by first unfolding \texttt{succ} to its definition (a
call-by-value $\lambda$-abstraction), and then using \texttt{eval} to
contract the resulting $\beta$-redex.

Figure~\ref{fig:nats} includes a definition, \texttt{add}, of natural
number addition. As we begin programming with $\lambda$-encoded data
types, we need to supply a large number of proofs for converting
types. As \selfstar is intended to be a core language, we are willing
to bear this programmer burden. However, to simplify programs we will
often define elimination functions, such as \texttt{nat\_elim}, to
localize the necessary conversion. 
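For intuition, the runtime behavior of these definitions can be sketched
in untyped Python (our own informal rendering, not \selfstar syntax).
Erasing types, motives, and \texttt{conv} proofs, a Scott numeral is its
own case expression, and \texttt{add} recurses just as the
\texttt{fix}-defined version does.

```python
# Scott-encoded naturals with annotations erased (motive dropped):
# a numeral takes a successor branch and a zero branch.
zero = lambda s, z: z
succ = lambda n: (lambda s, z: s(n))  # successor branch receives the predecessor

def to_int(n):
    # Case-analyze n; the succ branch recurses on the predecessor.
    return n(lambda p: 1 + to_int(p), 0)

def add(n, m):
    # General recursion, playing the role of fix in the paper.
    return n(lambda p: succ(add(p, m)), m)

two = succ(succ(zero))
three = succ(two)
assert to_int(add(two, three)) == 5
```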

\begin{figure*}
\begin{verbatim}
Fix nat : * = 
        self n . ! C : (nat => *) . (! n : nat. (C (succ n))) -> (C zero) -> 
                   (C conv n to nat by refl, unfold) 
    ,
    zero : nat = 
         conv \ C : nat => * . \ s : (! n : nat. (C (succ n))) . \ z : (C zero) . z 
         to nat 
         by (! C : refl . (! s : refl . (! z : refl . (C unfold)))),
            [ unfold ; substself ]
           
    ,
    succ : nat -> nat = 
         \ n : nat . 
           conv \ C : nat => * . \ s : (! n : nat. (C (succ n))) . \ z : (C zero) . (s n) 
           to nat
           by ( ! C : refl . refl -> refl -> (C [ (unfold refl) ; eval ])) ,
              [ unfold ; substself ]


Define nat_elim : ! n : nat . 
                  ! C : (nat => *) . 
                  (! n : nat. (C (succ n))) -> 
                  (C zero) -> 
                  (C n) =
  \ n : nat .
  (conv n to ! C : (nat => *) . (! n : nat. (C (succ n))) -> (C zero) -> (C n) 
             by [ unfold ; substself ] , refl)

Fix add : nat -> nat -> nat =
    \ n : nat . \ m : nat .
       (conv
          (nat_elim n
            (\ n :: nat . nat))
         to ((nat -> nat) -> nat -> nat)
         by ((refl -> [ (refl (unfold refl)); eval]) -> eval -> eval), refl
         (\ p : nat . (succ (add p m))) 
         m)
\end{verbatim}

\caption{Natural numbers}
\label{fig:nats}
\end{figure*}


\paragraph{Coinductive datatypes}

Figure~\ref{fig:coinductive-types} demonstrates the use of call-by-name
abstractions to Scott-encode coinductive data types. As in the
definition of the \texttt{nat} type, the motive is
call-by-name. However, in contrast to \texttt{nat}, arguments
corresponding to case alternatives are call-by-name, as indicated by
the \texttt{=>} arrows and the double-colon \texttt{::} type
ascription on $\lambda$-bound variables. Moreover, the predecessor
argument to \texttt{succ} is also call-by-name. Infinite values, such
as \texttt{inf}, can be defined directly, as can functions such as the
predecessor.

Using call-by-value and call-by-name abstractions, inductive and
coinductive data can be freely mixed, as shown in the \texttt{colist}
example of Figure~\ref{fig:coinductive-types}. We elide the definitions
of the constructors, but their types are sufficient to illustrate that
the \texttt{cons} constructor takes a call-by-value head, and a
call-by-name tail.
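To see why laziness makes infinite values like \texttt{inf} definable,
here is an informal untyped sketch (our rendering in Python, not
\selfstar), where call-by-name positions are modeled by thunks
(zero-argument functions).

```python
# Coinductive Scott naturals, with call-by-name positions modeled by
# Python thunks. Informal sketch only; annotations erased.
cozero = lambda s, z: z()
cosucc = lambda n: (lambda s, z: s(n))  # n is a thunk delaying the predecessor

# An infinite conat: its predecessor thunk returns inf itself.
inf = cosucc(lambda: inf)

def copred(n):
    # The cosucc branch forces the delayed predecessor; the cozero
    # branch just returns cozero again.
    return n(lambda p: p(), lambda: cozero)

assert copred(inf) is inf
```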

\begin{figure*}
\begin{verbatim}
Fix conat : * = 
        self n . ! C :: (conat => *) . (! n :: conat. (C (cosucc n))) => (C cozero) => 
                   (C conv n to conat by refl, unfold) 
    ,
    cozero : conat = 
         conv \ C :: conat => * . \ s :: (! n :: conat. (C (cosucc n))) . \ z :: (C cozero) . z 
         to conat 
         by (! C :: refl . (! s :: refl . (! z :: refl . (C unfold)))),
            [ unfold ; substself ]
    ,
    cosucc : conat => conat = 
         \ n :: conat . 
           conv \ C :: conat => * . \ s :: (! n :: conat. (C (cosucc n))) . \ z :: (C cozero) . (s n) 
           to conat
           by ( ! C :: refl . refl => refl => (C [ (unfold refl) ; eval ])) ,
              [ unfold ; substself ]

Fix inf : conat = (cosucc inf)
\end{verbatim}

\begin{verbatim}
  Fix colist : * -> * =
  \A : * . self p . 
    !C : ((colist A) => *) . 
       ! nilCase: (C (conil A)) .
       ! consCase: (! hd:A . ! tl:: (colist A) . (C (cocons A hd tl))).
        (C (conv p to (colist A) by refl, [(unfold refl); eval])),

   conil : ! A:* . (colist A) = ... , 
   cocons : ! A:* . A -> (colist A) => (colist A) = ...
\end{verbatim}
\caption{Coinductive Types}
\label{fig:coinductive-types}
\end{figure*}

\paragraph{Large Eliminations} 

The \verb!add! function, defined above, eliminates Scott-encoded
natural numbers at the term level. The inclusion of the \verb!*:*!
axiom in \selfstar allows us to utilize the same Scott-encoding of
natural numbers to eliminate to a type. Figure~\ref{fig:large-elim}
shows such a function, which takes a natural number \texttt{n} as an
argument and calculates the type of functions that take \texttt{n}
natural number arguments and return a \texttt{nat}. Such eliminations
enable powerful generic programming idioms~\cite{weirich+10}.
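Because \verb!*:*! collapses the type/term distinction, computing a type
is just ordinary recursion on a numeral. The following informal Python
sketch (our rendering, not \selfstar) mimics \texttt{atype} by building
a textual description of the $n$-ary function type.

```python
# Scott naturals, annotations erased as before.
zero = lambda s, z: z
succ = lambda n: (lambda s, z: s(n))

def atype(n):
    # "Large elimination" to a type, here rendered as a string:
    # each successor contributes one nat argument.
    return n(lambda p: "nat -> " + atype(p), "nat")

assert atype(succ(succ(zero))) == "nat -> nat -> nat"
```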

\begin{figure*}
\begin{verbatim}
Fix atype : nat -> * =
    conv
       \n : nat . 
             (nat_elim n
              (\ n :: nat . *)
              (conv (\n' : nat . nat -> (atype n'))
               to (! n' : nat . ((\ n :: nat . *) (succ n')))
               by refl, (! n' : nat . eval))
              (conv nat to ((\ n :: nat . *) zero) by refl, eval)
              )
   to nat -> * 
   by (! n : nat . eval), refl
\end{verbatim}

%% \begin{verbatim}
%%   Fix chomp_n : ! n : nat . (atype n)  =
%%   conv
%%    \n : nat . 
%%       (nat_elim n
%%         (\ n' :: nat . (atype n'))
%%           # Succ branch
%%           conv 
%%            (\pred : nat . \ m : nat . (chomp_n pred))
%%           to (! pred : nat . ((\ n' :: nat . (atype n')) (succ pred)))
%%           by refl, (! pred : nat . [eval; (unfold refl); eval; (refl (unfold refl)); 
%%                                     eval; (unfold refl refl refl refl) ; eval ])
%%           # Zero branch
%%           conv zero 
%%             to ((\ n' :: nat . (atype n')) zero) 
%%             by refl, [eval; (unfold refl); (refl unfold);  eval; (unfold refl refl refl refl); eval]
%%         )
%%    to (! n : nat . (atype n))
%%    by (! n : nat . eval), refl
%% \end{verbatim}

\caption{Large Eliminations}
\label{fig:large-elim}
\end{figure*}

\paragraph{Indexed Families} 

\gk{Does this statement cause problems since we don't have a logical
  sublanguage?}  One of the most powerful idioms that dependent types
enable is the encoding of data type invariants internally in types as
indices. Figure~\ref{fig:vectors} shows an example of the archetypal
indexed type of length-indexed polymorphic lists, or vectors, defined
as a Scott-encoded type. For space reasons, we only give the types of
the definitions.

As with the \texttt{colist} type, \texttt{vec} is parameterized over a
type \texttt{A}, which occurs uniformly in the range of each of the
constructors. Additionally, \texttt{vec} is also indexed by a natural
number representing the length of the vector. This index is
allowed to vary in the range of constructors -- \texttt{zero} in the
case of \texttt{vnil} and as \texttt{succ m} (with \texttt{m} the length
of the tail of the list) in the case of the \texttt{vcons}
constructor.

This distinction is further reflected in the type of the motive
\texttt{C}, which takes, in addition to the data being eliminated, an
argument for the length index. In contrast, the type parameter
\texttt{A} is captured by the top-level \texttt{$\lambda$}-abstraction
in the definition of the \texttt{vec} type.

Using length-indexed vectors, we can write functions, such as the
\texttt{vappend} whose type is shown, that preserve length invariants.
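At runtime the length index and the motive do no computational work, so
a vector erases to a plain Scott-encoded list. An informal untyped
Python sketch (our rendering, with indices and proofs dropped) of
\texttt{vnil}, \texttt{vcons}, and \texttt{vappend}:

```python
# Scott-encoded vectors with types, indices, and conv proofs erased:
# what remains is an ordinary Scott-encoded list.
vnil = lambda nilCase, consCase: nilCase
vcons = lambda x, xs: (lambda nilCase, consCase: consCase(x, xs))

def vappend(p, q):
    # Recurse on p; in Selfstar the result length (add m n) is
    # tracked only in the type.
    return p(q, lambda x, xs: vcons(x, vappend(xs, q)))

def to_list(v):
    return v([], lambda x, xs: [x] + to_list(xs))

v = vappend(vcons(1, vcons(2, vnil)), vcons(3, vnil))
assert to_list(v) == [1, 2, 3]
```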

\begin{figure*}
  \begin{verbatim}
Fix vec :  !A:* . nat -> * =  
   \ A : * . \n : nat . self l . ! C :  (! m:nat. (vec A m) -> *) .  
          ! vnilCase : (C zero (vnil A)) .
          ! vconsCase : (!p:nat . (! x : A . ! xs : (vec A p).  (C (succ p) (vcons A p x xs)))) .
                        (C n (conv l to (vec A n) by refl , [(unfold refl refl); eval])),
    vnil : ! A : * . (vec A zero) = ...,
    vcons:  ! A : * . ! n : nat . A -> (vec A n) -> (vec A (succ n)) =  ...

Fix vappend : !A : * . ! m:nat . ! n:nat . 
      ! p: (vec A m) . ! q : (vec A n) . (vec A (add m n)) = ...
  \end{verbatim}
\caption{Length-indexed Vectors}
\label{fig:vectors}  
\end{figure*}


Furthermore, we can encode types that internalize equality of terms, such
as the \texttt{eq} type shown in Figure~\ref{fig:prop-eq}. The
\texttt{eq} type is parameterized over some (homogeneous) type, and is
indexed by two terms of that type. Elements of \texttt{eq A t1 t2},
constructed with \texttt{eqrefl}, represent the proposition that \texttt{t1} and \texttt{t2}
are equal. Both the type argument and the first term argument are
parameters to \texttt{eq}, while the second term argument is an
index. The motive \texttt{C} takes both the index \texttt{b}, as well as the data
(of type \texttt{eq A a b}), as arguments. In common use, \texttt{C}
will depend only on the index; consequently we include a separate
definition \texttt{eqconv} which allows elimination of \texttt{eq}
data with a simplified motive.
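Computationally the \texttt{eq} type is almost trivial: \texttt{eqrefl}
erases to a function returning its single branch, so transport along a
proof is the identity. An informal untyped sketch (our rendering in
Python, not \selfstar):

```python
# Propositional equality, erased: the only constructor returns its
# single branch, so transport along a proof gives back the input.
eqrefl = lambda C, p: p  # erases \ C . \ p : (C a (eqrefl A a)) . p

def eqconv(Ca, eq_proof):
    # Eliminate the proof with a dummy motive; the refl branch hands
    # back Ca unchanged.
    return eq_proof(lambda b: None, Ca)

assert eqconv(42, eqrefl) == 42
```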

\begin{figure*}
\begin{verbatim}
Fix eq : ! A : *. A => A => * = 
         \ A : * . \ a :: A . \ b :: A . 
           self p. ! C : (! b :: A . (eq A a b) => *) .
           (C a (eqrefl A a)) ->
           (C b conv p to (eq A a b) by refl , [ (unfold A a b) ; eval ] )
      ,
      eqrefl : ! A : * . ! a :: A . (eq A a a) = 
         \ A : * . \ a :: A .
         conv
         \ C : (! b :: A. (eq A a b) => *) .
         \ p : (C a (eqrefl A a)). p
         to (eq A a a)
         by (! C : refl . ! p : refl . (C a [ (unfold A a) ; eval ])), [ (unfold A a a) ; eval ; substself ]


Define eqconv : 
  ! A : *. ! a :: A . ! b :: A . ! C : (A => *) . (C a) -> (eq A a b) -> (C b) = ...
  
\end{verbatim}
\caption{Equality}
\label{fig:prop-eq}
\end{figure*}



\subsection{Derivability of clash principles}
\label{sec:clash}

As mentioned in section~\ref{ssec:lambda-and-type-theories}, a
limitation of type theories such as the Calculus of Constructions is
that clash principles -- the absurdity of equations between terms
headed by distinct constructors -- cannot be derived. In
Figure~\ref{fig:clash} we give such a derivation of $0 \not= 1$,
represented by the type \texttt{eq nat zero (succ zero) -> void},
directly in \selfstar.
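The computational heart of the derivation is the discriminating motive
\texttt{et}, which sends \texttt{zero} to \texttt{unit} and successors
to \texttt{void}. Its erased behavior can be checked informally (untyped
Python sketch, our rendering, not \selfstar):

```python
# Scott naturals, annotations erased.
zero = lambda s, z: z
succ = lambda n: (lambda s, z: s(n))

def et(n):
    # nat_elim_simple with branches (\ p . void) and unit, here
    # rendered as string markers: zero maps to "unit", any successor
    # to "void", so no value can inhabit both images.
    return n(lambda p: "void", "unit")

assert et(zero) == "unit"
assert et(succ(zero)) == "void"
```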

\begin{figure*}
\begin{verbatim}
Define nat_elim_simple :  ! n :: nat .  ! C : *.  (! n : nat. C) ->   C ->  C = ...

Define not_zero_eq_one : (eq nat zero (succ zero)) -> void =
  \ x : (eq nat zero (succ zero)) .
    fix et : nat => * = (\x::nat. (nat_elim_simple x * (\p:nat.void) unit)) in
    conv
    (eqconv nat zero (succ zero)  
       et
       conv mkunit to (et zero) by refl, [ (unfold unfold) ; eval; (unfold refl refl refl refl); eval ]
       x)
    to void
    by [(unfold (unfold refl));eval; (unfold refl refl refl refl); eval] , refl
\end{verbatim}
\caption{Clash principle for \texttt{nat}}
\label{fig:clash}
\end{figure*}

\subsection{Church-encoded data}
\label{sec:egchurch}

In the previous examples in this section, we defined Scott-encoded
data types, where types $\lambda$-encode their own case
expressions. The Church encoding, in contrast,
$\lambda$-encodes data types as their own
iterators. Figure~\ref{fig:church-nat} shows a Church encoding of
natural numbers, demonstrating the flexibility in choosing an encoding
of data types using \selfstar.
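The erased behavior of this encoding can again be sketched informally in
untyped Python (our rendering, not \selfstar): the successor branch
receives both the predecessor and the recursively computed result, so
iteration needs no explicit \texttt{fix}.

```python
# Church-style naturals in the paper's primitive-recursion flavor:
# the successor branch gets the predecessor and the recursive result.
czero = lambda s, z: z
csucc = lambda n: (lambda s, z: s(n, n(s, z)))

def to_int(n):
    return n(lambda pred, rec: 1 + rec, 0)

def add(n, m):
    # Pure iteration: fold csucc over n starting from m; no fix needed.
    return n(lambda pred, rec: csucc(rec), m)

three = csucc(csucc(csucc(czero)))
assert to_int(three) == 3
assert to_int(add(three, csucc(czero))) == 4
```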

\begin{figure*}
\begin{verbatim}
  Fix nat : * = 
        self n . ! C : (nat -> *) . (! n : nat. (C n) -> (C (succ n))) -> (C zero) -> 
                   (C conv n to nat by refl, unfold) 
    ,
    zero : nat = 
         conv \ C : nat -> * . \ s : (! n : nat. (C n) -> (C (succ n))) . \ z : (C zero) . z 
         to nat 
         by (! C : refl . (! s : refl . (! z : refl . (C unfold)))),
            [ unfold ; substself ]
           
    ,
    succ : nat -> nat = 
         \ n : nat . 
           conv \ C : nat -> * . \ s : (! n : nat. (C n) -> (C (succ n))) . \ z : (C zero) . 
             (s n ((conv n to ! C : (nat -> *) . (! n : nat. (C n) -> (C (succ n))) -> (C zero) -> (C n)
                      by [ unfold ; substself ] , refl) 
                    C s z)) 
           to nat
           by ( ! C : refl . refl -> refl -> (C [ (unfold refl) ; eval ])) ,
              [ unfold ; substself ]
\end{verbatim}
\caption{Church-encoded natural numbers}
\label{fig:church-nat}
\end{figure*}


\section{Towards a Verified-Programming Language}
\label{sec:future}

In this section, we want to summarize the crucial ingredients we
believe must still be added to \selfstar to make it a full-fledged
core language for dependently typed programming, and say a little about
the solutions we are considering.

\textbf{Compile-time arguments.}  We believe these are crucial for
effective dependently typed programming, as they allow function
arguments which play a purely specificational role to be erased, both
when compiling code for final execution, and (at least as importantly)
when reasoning about code.  This has been studied previously in the
Implicit Calculus of Constructions, as well as several of our
co-authored works~\cite{sjoberg+12,sjoberg+10,miquel01}.  The
solutions in these approaches should be suitable as the basis for
\selfstar.  We expect the lambda-encoding approach to shed light,
however, on at least one puzzling question we have encountered in some
of this previous work.  Suppose one wishes to allow some arguments to
type constructors to be erased.  This may seem esoteric, but one could
imagine, for example, a type $\textit{Sub}\ l\ u\ [p]$ for the integer
subrange $[l,u]$, where $p$ is a compile-time argument (indicated by
the square brackets) serving as evidence that $l \le u$.  We may wish
to treat such proofs as irrelevant, so that subrange types can be
proved equal without reasoning about their embedded proofs $p$.  

But when arguments to type constructors are marked compile-time, it
appears that certain arguments to term constructors should also be
marked compile-time, or else type soundness can be violated.  If the
type constructor for homogeneous lists, for example, were declared to
take its argument as a compile-time one (which would not be useful in
practice, but illustrates the point), then we could prove
$\textit{List}\ [\textit{Bool}] = \textit{List}\ [\textit{Nat}]$,
since we reason about erased terms.  If the data in the lists are
run-time, we would be able to cast a list of \textit{Bool}s and
unsoundly extract a \textit{Nat}, using the equation between the
types.  So the data must be marked compile-time if the argument to
\textit{List} is.  Consider this Scott-encoding of the \textit{List}
type, augmented with square brackets to indicate which arguments are
compile-time (this feature is not yet supported by our
implementation):

\begin{verbatim}
Fix list : * -> * =
 \ [A] : * . self l . 
 ! [C] :  ((list [A]) -> *) .  
  ! nilCase : (C (nil [A])) .
  ! consCase : (! x : A . ! xs : (list [A]).  
                (C (cons [A] x xs))) .
     (C (conv l to (list [A]) 
           by refl , [(unfold refl); eval])),
 ...
\end{verbatim}

\noindent The usual requirement is that compile-time variables cannot
be used later in their scope except in positions that will be erased.
This is clearly violated above, since \texttt{A} is used as the type
for run-time variable $x$ of \texttt{consCase}.  So our extended type
system would reject this declaration as ill-typed, thus retaining type
soundness.  Indeed, this example suggests a refinement to the usual
notion of restricting compile-time variables to erased positions.  The
domain types of $\Pi$-types are not erased, and yet for $\Pi$-types
introducing a compile-time variable, this \textit{List} example suggests that
we could consider them as compile-time positions.  This would allow us
to mark $x$ above as a compile-time variable, and allow the term to
type-check with that modification.  So the lambda-encoding has
provided a natural path to a more hereditary notion of compile-time
argument, where marking one argument compile-time may force marking
others as compile-time, too.

\textbf{Totality.} Terminating recursion and productive co-recursion
are needed if we wish to use part of the language as a sound logic.
We plan to apply ideas from recent work of Casinghino et al. on using
a modal type system to separate total from general recursive fragments
of a language, while still allowing the two to
interact~\cite{ccasin:msfp12}.  In previous work, we proposed a
flexible scheme for terminating recursion using a primitive type $t_1
< t_2$ expressing the strict subterm ordering on the values of $t_1$
and $t_2$ (if they both have values)~\cite{kimmell+12}.  The terminating recursor is
typed in such a way that recursive calls must be provided with proof
terms showing that the new argument for the parameter of recursion is
strictly smaller in this ordering than the original parameter to the
recursive function.  The Scott encoding works beautifully with this
approach for inductive data, since, for example $1 < 2$ under this
strict subterm ordering.  This approach must be generalized, however,
for coinductive data and productive co-recursion.  There, we
conjecture that the ordering must take into account a distinction
between call-by-name and call-by-value positions within a term; and that
the terminating co-recursor must somehow require that the output of
any inner co-recursive call is smaller than the output that will be
produced by the outer co-recursive call.

\textbf{Classical reasoning.} When reasoning about general recursive
functions, we have found it critical to be able to case split on
whether or not a term terminates~\cite{kimmell+12}.  This is a
paradigmatic form of non-constructive reasoning.  Fortunately,
integrating constructs corresponding to classical reasoning with a
pure lambda calculus is a well studied
area~\cite{barthe+97,Rehof:1994,Parigot:1992}.  We intend to
incorporate the $\Delta$ operator of Rehof and S{\o}rensen into
\selfstar.  This will likely require additional axioms for reasoning
about when terms lack values.

\section{Conclusion}
\label{sec:conclusion}

We have seen how to use self types to support lambda-encodings with
dependent eliminations.

\textbf{Acknowledgments.} This work is part of the \textsc{Trellys}
project (NSF award 0910510), which is seeking to design and implement
a next-generation dependently typed functional programming language.
We thank the other members of this project for their feedback on
\selfstar: Ki-Yung Ahn, Chris Casinghino, Nathan Collins, Tim Sheard,
Vilhelm Sj\"oberg, and Stephanie Weirich.

\bibliographystyle{abbrvnat}
\bibliography{main}
% The bibliography should be embedded for final submission.

\appendix

\section{About $[[ -> ]]$ Reduction}

\begin{theorem}
$[[->]]$ is terminating.
\end{theorem}

\begin{proof}
Let the measure $|t|$ on terms be the number of $\mu$-binders occurring in the strict subterms of $t$.
We can easily verify that if $ [[G |- t -> t' ]]$, then $|t| > |t'|$. 

\end{proof}

\begin{theorem}
$[[->]]$ is confluent.
\end{theorem}

\begin{proof}
Since $[[->]]$ is terminating, it suffices to show that $[[->]]$ is locally confluent, and for this it is enough to check that all the 
``overlapping'' $\mu$-redexes are locally confluent. The overlapping redex for $\ottdrulename{Mu1}$ and $\ottdrulename{Mu2}$ is 
$[[(\m xs = ts . t) (\m ys = as .t') ]]$; those for $\ottdrulename{Mu1}$ and $\ottdrulename{Mu3}$ are $[[ \m xs = ts. \m ys = as . ( t ( \m zs = bs . t')) ]]$, $[[  (\m xs = ts. \m ys = as . t) ( \m zs = bs . t') ]]$ and $[[  ( t (\m xs = ts. \m ys = as . \m zs = bs . t')) ]]$; those for $\ottdrulename{Mu2}$ and $\ottdrulename{Mu3}$ are $[[ \m xs = ts. \m ys = as . ( (\m zs = bs .t )  t') ]]$, $[[  (\m xs = ts. \m ys = as . \m zs = bs . t) t' ]]$ and $[[  ( \m zs = bs .t) (\m xs = ts. \m ys = as .  t') ]]$. We can verify that they are all locally confluent. 
\end{proof}

\section{Diamond Property of $[[G |- [t : t1] => t2 ]]$ Relation}

\begin{lemma}

If $[[G |- t1 --> t2]]$ and $[[G |- [t : t1] => t3]]$, then there exist $t_4$ such that $[[G |- [t : t2] => t4 ]]$ and $[[G |- [t : t3] => t4 ]]$.

\end{lemma}

\begin{proof}
By inversion on $[[G |- t1 --> t2]]$, we have $[[G |- t1 -*> t3]]$ and $[[G |- t3 ~> t4]]$ and $[[G |- t4 -*> t2]]$ and $[[G |- t3 -\> ]]$ and $[[G |- t2 -\> ]]$. We then proceed by induction on the derivation of $[[G |- t3 ~> t4]]$. 

\begin{itemize}
\item 

 \[\ottdruleNeed{} \]

This means $[[t1 = x t' ]]$ and $[[t3 = x t'' ]]$ and $[[G |- t' -*> t'' ]]$. But the only way to derive a $t_5$ such that $[[G |- [t : x t' ] => t5 ]]$ is the $\ottdrulename{D\_Eval}$ rule, so the lemma holds in this case since $\hookrightarrow$ is deterministic.

\item 

\[ \ottdruleBetaV{} \]

This case and the remaining cases follow the same argument as the first case. 

\end{itemize}

\end{proof}

\begin{lemma}
If $[[G |- [t: t1] => t2]]$, then $[[G |- [t: [t'/x]t1] => [t'/x]t2]]$.

\end{lemma}


\begin{theorem}[Diamond]
If $[[G |- [t: t1] => t2]]$ and $[[G |- [t: t1] => t3]]$, then there exists a $t_4$ such that $[[G |- [t: t2] => t4]]$ and $[[G |- [t: t3] => t4]]$.
\end{theorem}

\begin{proof}

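A standard route, sketched here only in outline (it assumes the diamond property for the underlying one-step form of the relation, in the sense of the preceding lemmas), is tiling: decompose the two assumed derivations into single steps and close the resulting grid one square at a time, each square by the one-step diamond property. Schematically, writing $\to$ for the one-step relation:
\[
\begin{array}{ccccc}
t_1 & \to & \cdot & \to & t_2 \\
\downarrow & & \downarrow & & \downarrow \\
\cdot & \to & \cdot & \to & \cdot \\
\downarrow & & \downarrow & & \downarrow \\
t_3 & \to & \cdot & \to & t_4
\end{array}
\]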
\end{proof}

\section{Type Preservation}

\begin{lemma}
If $[[G |- t -> t']]$ and $[[G |- [t : t1 ] = t2 ]]$, then $[[G |- [t' : t1 ] = t2 ]]$.
\end{lemma}

\begin{lemma}
If $[[G |- t1 -> t2]]$, then $[[G |- [t : [t1/x]t' ] = [t2/x]t' ]]$.
\end{lemma}

\begin{lemma}
If $[[G |- [\m xs = ts . t: t1] = t2]]$, then, letting $[[G' = G, x1 = t1, ... , xn = tn]]$, we have $[[G' |- [t: t1] = t2 ]]$.

\end{lemma}

\begin{theorem}[Modulo Mu Preservation]
If $[[G |- t -> t' ]]$ and $[[G |- t : t1]]$, then $[[G |- t' : t1]]$.
\end{theorem}

\begin{lemma}
If $[[G |- t ~> t']]$ and $[[G |- [t : t1 ] = t2 ]]$, then $[[G |- [t' : t1 ] = t2 ]]$.
\end{lemma}

\begin{lemma}
If $[[G |- t1 ~> t2]]$, then $[[G |- [t : [t1/x]t' ] = [t2/x]t' ]]$.
\end{lemma}

\begin{definition}
A variable $x$ is substitutable by $t$ with respect to $[[G]]$ iff the following
conditions hold:
\begin{itemize}
 \item $[[ G = G1, x:c, G2]]$ and $[[G1 |- t : c]]$.
 \item If $[[val x in G]]$, then $[[G1 |- val t]]$.
 \item There does not exist $t'$ such that $[[x = t' in G]]$.
\end{itemize}
\end{definition}

\begin{lemma}[Substitution Lemma]
\label{subst}
If $[[ G1, x:a, G2 |- b : c ]]$ and $x$ is substitutable by $t$ w.r.t. $[[G1, x:a, G2]]$, then $[[ G1, [t/x]G2 |- [t/x]b : [t/x]c ]] $. 

\end{lemma}

\begin{lemma}
\label{gen}
If $[[ G |- \ s x . t : t' ]]$, then there is some $t_1 , t_2$ such that $[[G , x : t1 , maybeval(s,x) |- t : t2 ]]$ and $[[ G |- [ \ s x.t : t'] = ! s x : t1 . t2 ]]$.
\end{lemma}


\begin{theorem}[Preservation]
If $[[G |- t ~> t' ]]$ and $[[G |- t : t1]]$, then $[[G |- t' : t1]]$.
\end{theorem}

\begin{comment}


\section{Progress}
\label{sec:progress}
\begin{lemma}[Value Inhabitants of $\Pi$-Types are $\lambda$-Abstractions]
  \label{lemma:value_inhabitants_of_pi-types_are_lambda-abstractions}
  If $[[. |- t : ! s x : t1 . t2]]$ and $[[. |- val t]]$ then $\exists [[t']].[[t == \ s x.t']]$.
\end{lemma}
\begin{proof}
  This can be shown by straightforward induction on the structure of the assumed typing derivation.
\end{proof}

\begin{theorem}[Progress]
  \label{thm:progress}
  If $[[. |- t : t']]$ then either $[[. |- val t]]$ or $\exists E,t''.[[. |- .;t ~> E;t'']]$.
\end{theorem}
\begin{proof}
  This is a proof by induction on the structure of the assumed typing derivation.  All cases
  where $[[. |- val t]]$ holds are trivial.  Consider the remaining cases.
  
  \begin{itemize}
  \item[Case.] \ \\
    \begin{center}
      \begin{math}
        \mprset{flushleft}
        \inferrule* [right=\ifrName{Conv}] {
          [[. |- t : a]]
          \\\\
          [[. |- [t : a] = b]]
        }{[[. |- t : b]]}
      \end{math}
    \end{center}
    This case holds easily by the induction hypothesis. 
    
  \item[Case.]  \ \\
    \begin{center}
      \begin{math}
        \mprset{flushleft}
        \inferrule* [right=\ifrName{Mu}] {
          [[G' = . , x1 : t'1 , ... , xn : t'n , x1 = t1 , ... , xn = tn]]
          \\\\
          [[forall i in {1,dots,n}. G' |- ti : t'i]]
          \\\\
          [[G' |- t : t']]
          \\\\
          [[FV(t') # {x1,...,xn}]]
        }{[[. |- \m x1 = t1 , dots , xn = tn.t : t']]}
      \end{math}
    \end{center}
    We know $\mu$-terms are never values, so they must take a step.  Indeed, this holds
    by $\ottdrulename{MuRed}$, taking $[[E]] = [[. , x1 : t'1 , ... , xn : t'n , x1 = t1 , ... , xn = tn]]$.

  \item[Case.] \ \\
    \begin{center}
      \begin{math}
        \mprset{flushleft}
        \inferrule* [right=\ifrName{App}] {
          [[. |- t1 : ! s x : a . b]]
          \\\\
          [[. |- t2 : a]]
        }{[[. |- t1 t2 :  (\ s x . b) t2]]}
      \end{math}
    \end{center}
    By the induction hypothesis we know that $[[t1]]$ either can be judged a value or
    takes a step in some environment $[[E1]]$, and that $[[t2]]$ either can be judged a value or
    takes a step in some environment $[[E2]]$.  Since applications can never be
    judged values, it suffices to show that $[[t1 t2]]$ takes a step in some environment $[[E3]]$.

    \ \\
    Suppose $[[t1]]$ is not a value but takes a step to $[[t1']]$.  Then, regardless of whether $[[t2]]$ is a value or takes a step,
    we know by $\ottdrulename{App1}$ that $[[. |- .;t1 t2 ~> E1;t'1 t2]]$ where $[[. |- .;t1 ~> E1;t1']]$.

    \ \\
    Now suppose that $[[. |- val t1]]$.  Then by Lemma~\ref{lemma:value_inhabitants_of_pi-types_are_lambda-abstractions}
    we know there exists a term $[[t1']]$ such that $[[t1 == \s x.t1']]$.  We now must case split on the value of $[[s]]$.
    
    \ \\
    Suppose $[[s]] [[==]] [[cbv]]$.  If $[[t2]]$ is not a value but takes a step to $[[t2']]$, then by $\ottdrulename{AppCbv}$
    we know $[[. |- .;t1 t2 ~> E2; t1 t2']]$ where $[[. |- .;t2 ~> E2;t2']]$.  Now suppose $[[. |- val t2]]$.  Then
    by $\ottdrulename{BetaV}$ we know that $[[. |- .;t1 t2 ~> .;[t2/x]t1']]$.

    \ \\
    Suppose $[[s]] [[==]] [[cbn]]$.  Then, again, regardless of whether $[[t2]]$ can be judged a value or takes a step,
    we may apply $\ottdrulename{BetaN}$ to obtain $[[. |- .;t1 t2 ~> .;[t2/x]t1']]$.
  \end{itemize}

\end{proof}
% section progress (end)

\section{The Diamond Property for Directed Subjective Equality}
\label{sec:diamond}

Let us write $[[ G |- [ t ? : t1 ] -> t2]]$ if $[[ G |- [ t ? : t1 ] =>
    t2]]$ is provable without using the $\ottdrulename{Red\_Trans}$
rule.

\begin{lemma}
\label{lem:diamond}
If $[[ G |- [ t ? : t1 ] -> t2]]$ and $[[ G |- [ t ? : t1 ] -> t3]]$ then
there exists $[[t4]]$ such that $[[ G |- [ t ? : t2 ] -> t4]]$ and 
$[[ G |- [ t ? : t3 ] -> t4]]$.
\end{lemma}
\begin{proof}
The proof is by induction on the sum of the depths of the two assumed
derivations.  We begin with a case analysis on the first assumed
derivation:

\begin{itemize}
\item[Case.]

\[\ottdruleRedXXEval{}\]

This follows from Lemma~\ref{lem:diamondeval}, instantiating $E$ in
that lemma with $\cdot$.

\item[Case.]

\[\ottdruleRedXXSelfSubst{}\]

\item[Case.]

\[\ottdruleRedXXPiVal{}\]

\item[Case.]

\[\ottdruleRedXXDef{}\]

\item[Case.]

\[\ottdruleRedXXCongPi{}\]

\item[Case.]

\[\ottdruleRedXXCongLam{}\]

\item[Case.]

\[\ottdruleRedXXCongApp{}\]

\item[Case.]

\[\ottdruleRedXXCongSelf{}\]

\item[Case.]

\[\ottdruleRedXXCongMu{}\]

\end{itemize}
\end{proof}

\begin{lemma}
\label{lem:diamondeval}
If $[[G |- . ; t1 ~> m E' ; t2]]$ and $[[ G |- [ t ? : t1 ] -> t3]]$,
then there exists $m'$, $E''$, and $t_4$ such that
\begin{itemize}
\item $[[G |- . ; close(E',t2) ~> m' E'' ; t4]]$, and
\item $[[G |- [ t ? : t3 ] -> close(E'',t4)]]$.
\end{itemize}
\end{lemma}
\begin{proof}
The proof is by induction on $m$.  If $m$ is $0$, then $t_2 \equiv
t_1$ and $E' \equiv \cdot$.  We can take $0$ for $m'$, $\cdot$ for
$E''$, and $t_3$ for $t_4$.  The required properties then hold, where
we obtain $[[G |- [ t ? : t3 ] -> t3 ]]$ using \ottdrulename{Red\_Eval}
and a reduction sequence of length $0$. 

So suppose $m = (m'+1)$.  Suppose the second assumed derivation is
also an inference by $\ottdrulename{Red\_Eval}$, so that we have $[[ G |- . ; t1 ~> m' E'' ; t3']]$ for some $E''$, $m'$, and $t_3'$
with $t_3 = [[close(E'',t3')]]$.  Suppose w.l.o.g. that $m'>m$.  By
determinism of reduction, we have $[[ G  |- E' ; t2 ~> ( m' - m ) E'' ; t3' ]]$.  By Lemma~\ref{lem:close}, this implies $[[ G , E |- . ; close(E',t2) ~> ( m' - m + 1 ) E'' ; t3' ]]$.  So we can
take $t_4$ to be $t_3'$, $E''$ to be itself, and $m'$ to be $m'-m + 1$.
The required properties then hold (again using $0$-step evaluation to
prove $[[ G  |- [ t ?  : t3 ] -> t3]]$, and using $t_3 =
[[close(E'',t3')]]$).

Suppose now that the second assumed derivation is not an inference by
$\ottdrulename{Red\_Eval}$.  The only other possible rules that could
be applied are $\ottdrulename{Red\_CongMu}$ and
$\ottdrulename{Red\_CongApp}$.

\newcommand{\lemRedXXCongMu}[1]{\ottdrule[#1]{%
[[G  , x1 = t1 , ... , xn = tn |- [ _ : t' ] -> t'']] \\ 
}{
  [[G  |- [ t ? : \m xs = ts . t' ] -> \m xs = ts . t'']]}{%
{\ottdrulename{Red\_CongMu}}{}%
}}

\newcommand{\lemMuRed}[1]{\ottdrule[#1]{%
}{
[[G  |- . ; \m xs = ts . t' ~> x1 = t1 , dots , xn = tn ; t']]
}{%
{\ottdrulename{MuRed}}{}%
}}

\begin{itemize}
\item[Case.]
\[\lemRedXXCongMu{}\]

In this case, the assumed reduction sequence must start with a
$\ottdrulename{MuRed}$ step:
\[\lemMuRed{}\]
\noindent This step is then followed by a reduction sequence described
by $[[ G  |- x1 = t1 , dots , xn = tn ; t' ~> m' E' ; t2]]$.  By
Lemma~\ref{lem:invred}, $E' = [[(x1 = t1 , dots , xn = tn , E1)]]$.
By Lemma~\ref{lem:shift}, we have $[[ G , x1 = t1 , dots , xn = tn
    |- . ; t' ~> m' E1 ; t2]]$.  So we may apply our IH to obtain an
$E''$, $m'$, and $t_4$ such that:
\begin{itemize}
\item $[[ G  , xs = ts |- . ; close(E1,t2) ~> m' E'' ; t4]]$
\item $[[ G  , xs = ts |- [ t ? : t'' ] -> t4]]$
\end{itemize}
\noindent Now applying Lemmas~\ref{lem:shift} and~\ref{lem:close} to the
first of these facts,
we obtain 
\[
[[ G  |- . ; close(xs = ts, close(E1,t2)) ~> (m'+1) E'' ; t4]]
\]
\noindent Since $E' = [[(x1 = t1 , dots , xn = tn , E1)]]$, this is
equivalent to the following, which is one of the required properties
we must show (taking $m'+1$ for $m'$, $E''$ for $E''$, and $t_4$ for
$t_4$):
\[
[[ G  |- . ; close(E',t2) ~> m' E'' ; t4]]
\]
\noindent For the other required property, we need to prove:
\[
[[ G  |- [ t ? : \m xs = ts . t'' ] -> \m xs = ts. t4]]
\]
\noindent But this follows from the second bulleted fact above (of the
ones derived by the IH), using the \ottdrulename{Red\_CongMu} rule.


\newcommand{\lemRedXXCongApp}[1]{\ottdrule[#1]{%
\ottpremise{ [[ G |- [ _ : a1 ] => a1']] }%
\ottpremise{ [[ G |- [ _ : a2 ] => a2']]}%
}{
[[ G |- [ t ? : a1 a2 ] => a1' a2']] }{%
{\ottdrulename{Red\_CongApp}}{}%
}}

\item[Case.]
\[\lemRedXXCongApp{}\]

In this case, there are four possibilities for the first reduction 
step.
\begin{itemize}
\item[Case.]

\[
\inferrule* [right=\ifrName{BetaV}] {
[[G |- val a2]]
}{[[G |- E ; (\ cbv x.t') a2 ~> E ; [a2/x]t']]}
\]

\item[Case.]

\[
\inferrule* [right=\ifrName{BetaN}] {
\ 
}{[[G |- E ; (\ cbn x.t') a2 ~> E ; [a2/x]t']]}
\]

\item[Case.]

\[
\inferrule* [right=\ifrName{App1}] {
[[G |- E ; a1 ~> E' ; a1']]
}{[[G |- E ; a1 a2 ~> E'; a'1 a2]]}
\]

\item[Case.]

\[
\inferrule* [right=\ifrName{AppCbv}] {
[[G |- E ; a2 ~> E' ; a2']]
}{[[G |- E ; (\ cbv x . t') a2 ~> E'; (\ cbv x . t') a2']]}
\]

\end{itemize}
\end{itemize}
\end{proof}

\begin{lemma}
\label{lem:close}
If $[[G |- E ; t ~> m E' ; t']]$,
then we also have $[[G |- . ; close(E, t) ~> (m + 1) E' ; t']]$.
\end{lemma}
\begin{proof}
We simply apply the $\ottdrulename{MuRed}$ rule first, to obtain $[[ G
    |- . ; close(E,t) ~> E ; t]]$, and then use
$\ottdrulename{Multi\_Succ}$ to connect that with the assumed
derivation.
\end{proof}

\begin{lemma}
\label{lem:invred}
If $[[ G |- E ; t' ~> m' E' ; t2]]$, then there exists $E''$ such that
$E' = E, E''$.
\end{lemma}
\begin{proof}
This is a straightforward induction on the assumed derivation, and
so we omit the details.
\end{proof}

\begin{lemma}
\label{lem:shift}
If $[[ G |- E ; t ~> m E' ; t']]$, then also $[[ G , E |- . ; t ~> m E' ; t']]$;
the reverse direction also holds.
\end{lemma}
\begin{proof}
Both directions are straightforward inductions on the assumed
derivation, since reduction does not consult the environment $E$.  We
omit the details.
\end{proof}


\section{Type Preservation}

\begin{theorem}

 If $[[G |- t : t']]$ and $[[G |- . ; t ~> E ; t1]]$, then $[[G |- close (E, t1): t']]$.

\end{theorem}


\begin{proof}
We prove this theorem by induction on the derivation of $[[G |- t : t']]$.

  \begin{itemize}
     \item[Case.] $\ottdrulename{Var}$ and $\ottdrulename{Type}$: trivial.
     \item[Case.] $\ottdrulename{Pi}$, $\ottdrulename{Self}$, and $\ottdrulename{Lam}$: trivial.

     \item[Case.] 

     \[ \ottdruleConv{} \]

   By the induction hypothesis (IH), we know that if $[[G |- . ; t ~> E; t']]$,
then $[[G |- close (E, t') : a ]]$. By Lemma~\ref{lemma0}, we have $[[G |- [ close ( E, t' ) : a ] = b]]$. So, applying the $\ottdrulename{Conv}$ rule again, we get $[[G |- close (E, t') : b]]$.

    \item[Case.]
     \[ \ottdruleApp{} \]

  




   \end{itemize}


\end{proof}

\begin{lemma}
\label{lemma0}
If $[[G |- [ t : t1 ] = t2]]$ and $[[G |- . ; t ~> E ; t']]$, then $[[G |- [ close ( E, t' ) : t1 ] = t2]]$.
\end{lemma}

\begin{proof}
 By inversion on $[[G |- [ t : t1 ] = t2]]$, we have $[[G |- [ t : t1 ] => t3]]$ and $[[G |- [ t : t2 ] => t3]]$. By Lemma~\ref{lemma1}, there exists a $t_3'$ such that $[[G |- [ close ( E, t' ) : t1 ] => t3']]$ and $[[G |- [ close ( E, t' ) : t3 ] => t3']]$, and likewise a $t_3''$ such that $[[G |- [ close ( E, t' ) : t2 ] => t3'']]$ and $[[G |- [ close ( E, t' ) : t3 ] => t3'']]$. By confluence of $\triangleright$, there exists a $t_4$ such that $[[G |- [ close ( E, t' ) : t3' ] => t4]]$ and $[[G |- [ close ( E, t' ) : t3'' ] => t4]]$. Thus, applying the $\ottdrulename{Trans}$ rule, we have $[[G |- [ close ( E, t' ) : t1 ] => t4]]$ and $[[G |- [ close ( E, t' ) : t2 ] => t4]]$. By the $\ottdrulename{Join}$ rule, we get $[[G |- [ close ( E, t' ) : t1 ] = t2]]$.

\end{proof}

\begin{lemma}
\label{lemma1}

If $[[G |- [ t : t1 ] => t2]]$ and $[[G |- . ; t ~> E ; t']]$, then there exists a $t_3$ such that $[[G |- [ close ( E, t' ) : t1 ] => t3]]$ and  $[[G |- [ close ( E, t' ) : t2 ] => t3]]$.

\end{lemma}

\begin{proof}
This lemma can be proved by induction on the derivation of $[[G |- [ t : t1 ] => t2]]$.
\
  \begin{itemize}
     \item[Case.]
  
  \[ \ottdruleRedXXEval{} \]

  Since $[[ G |- . ; t1 ~> m E' ; t2]]$, we have $[[G |- [ close ( E, t' ) : t1 ] => t2]]$ and
$[[G |- [ close ( E, t' ) : t2 ] => t2]]$.   
  \
   \item[Case.]

   \[ \ottdruleRedXXSelfSubst{} \]

  We are assuming $[[  G |- [t : self x.t1] => [t/x]t1 ]]$ and $[[G |- . ; t ~> E ; t']]$. We have $[[  G |- [close (E, t') : self x.t1] => [close (E, t')/x]t1 ]]$ by the $\ottdrulename{Red\_SelfSubst}$ rule. By Lemma~\ref{lem:congruence}, we have $[[  G |- [close (E, t') : [t/x]t1] => [close (E, t')/x]t1 ]]$.  



%By $\ottdrulename{Red\_Cong}$ rule, we have $[[ G |-  [close (E, t') : [t/x]t1] = [close (E, t')/x]t1 ]]$. By $\ottdrulename{Red\_Trans}, \ottdrulename{Red\_Sym}$ rule, we have $[[ G |- [close (E, t') : self x.t1] = [t/x]t1 ]]$. So it is the case.
      \
     \item[Case.] 
 \[\ottdruleRedXXPiVal{} \]

 This case cannot arise because $[[\ s x.t]]\not\leadsto$.

 \
%    \item[Case.]
% \[\ottdruleRedXXSym{} \]
 
%  By IH, we know that $ [[ G |- [close (E,t') : a] = b ]]$, so again by applying the $\ottdrulename{Red\_Sym}$ rule, we have $[[ G |- [close (E,t') : b] = a  ]]$.

    \item[Case.]
 \[\ottdruleRedXXTrans{} \]
 
Assume $[[ G |- [t : t1] => b ]]$, $[[ G |- [t : b] => t2 ]]$, $[[ G |- [t : t1] => t2 ]]$, and $[[G |- . ; t ~> E ; t']]$. We must show that there exists a $c$ such that $[[ G |- [close (E, t') : t1] => c ]]$ and $[[ G |- [close (E, t') : t2] => c ]]$. By the IH, there exist $t_3, t_4 $ such that $[[ G |- [close (E, t') : t1] => t3 ]]$ and $[[ G |- [close (E, t') : b] => t3 ]]$, and $[[ G |- [close (E, t') : b] => t4 ]]$ and $[[ G |- [close (E, t') : t2] => t4 ]]$. By confluence of $\triangleright$, there exists a $c$ such that $[[ G |- [close (E, t') : t3] => c ]]$ and $[[ G |- [close (E, t') : t4] => c ]]$. Thus, by the $\ottdrulename{Red\_Trans}$ rule, we have $[[ G |- [close (E, t') : t1] => c ]]$ and $[[ G |- [close (E, t') : t2] => c ]]$.


%By IH, we know that $ [[ G |- [close (E,t') : a] = b ]]$ and $[[ G |- [close (E,t') : b] = c]]$, so again by applying the $\ottdrulename{Red\_Trans}$ rule, we have $[[ G |- [close (E,t') : a] = c  ]]$.
\


    \item[Case.]
   \[ \ottdruleRedXXDef{}\]

Assume $[[x = t2 in G]]$ and $[[G |- . ; t ~> E ; t']]$ and $[[G |- [t : x] => t2]]$. We have $[[ G |- [close (E, t') : x] => t2 ]]$ by $\ottdrulename{Red\_Def}$ and $[[ G |- [close (E, t') : t2] => t2 ]]$ by $\ottdrulename{Red\_Eval}$.


    \item[Case.]

   \[ \ottdruleRedXXCongPi{}\]

Assume $[[  G |- [ t  : ! s x : t1 . t2 ] => ! s x : t1' . t2']]$ and $[[G |- . ; t ~> E ; t']]$ . Since $[[  G |- [ _ : t1 ] => t1']]$ and $[[  G , maybeval(s,x) |- [ _ : t2 ] => t2']]$, we always have $[[  G |- [ close (E, t') : ! s x : t1 . t2 ] => ! s x : t1' . t2']]$ and $[[  G |- [ close (E, t') : ! s x : t1'. t2' ] => ! s x : t1' . t2']]$.

    \item[Case.]

   \[ \ottdruleRedXXCongLam{}\]
Same argument as $\ottdrulename{Red\_CongPi}$.
    \item[Case.]

   \[ \ottdruleRedXXCongApp{}\]
Same argument as $\ottdrulename{Red\_CongPi}$.
    \item[Case.]

   \[ \ottdruleRedXXCongSelf{}\]
Same argument as $\ottdrulename{Red\_CongPi}$.
    \item[Case.]

   \[ \ottdruleRedXXCongMu{}\]
Same argument as $\ottdrulename{Red\_CongPi}$.

   \end{itemize}



\end{proof}


\begin{lemma}
\label{gen}
If $[[ G |- \ s x . t : t' ]]$, then there is some $t_1 , t_2$ such that $[[G , x : t1 , maybeval(s,x) |- t : t2 ]]$ and $[[ G |- [ \ s x.t : t'] = ! s x : t1 . t2 ]]$.
\end{lemma}

\begin{proof}
Consider the derivation of $[[ G |- \ s x . t : t' ]]$. It must contain a use of the $\ottdrulename{Lam}$ rule, since the $\ottdrulename{Conv}$ rule does not change the subject term $[[\ s x. t ]]$. Following the branch of the derivation where $[[\ s x. t ]]$ is first introduced, only applications of the $\ottdrulename{Conv}$ rule can lead to $[[ G |- \ s x . t : t' ]]$. So there are some $t_1 , t_2$ such that $[[G , x : t1 , maybeval(s,x) |- t : t2 ]]$, and we obtain $[[ G |- [ \ s x.t : t'] = ! s x : t1 . t2 ]]$ by Lemma~\ref{lem:trans}.
\end{proof}

\begin{definition}
A variable $x$ is substitutable by $t$ with respect to $[[G]]$ iff the following
conditions hold:
\begin{itemize}
 \item $[[ G = G1, x:c, G2]]$ and $[[G1 |- t : c]]$.
 \item If $[[val x in G]]$, then $[[G1 |- val t]]$.
 \item There does not exist $t'$ such that $[[x = t' in G]]$.
\end{itemize}
\end{definition}


\begin{lemma}[Substitution Lemma]
\label{subst}
If $[[ G1, x:a, G2 |- b : c ]]$ and $x$ is substitutable by $t$ w.r.t. $[[G1, x:a, G2]]$, then $[[ G1, [t/x]G2 |- [t/x]b : [t/x]c ]] $. 

\end{lemma}
\begin{proof}
By induction on the derivation of $[[ G1, x:a, G2 |- b : c ]]$

\begin{itemize}

\item[Case.]
 \[\ottdruleVar{} \]

If $[[G = G1,x : a, G2 ]]$, then we want to show $[[G1,[t/x]G2 |- t : [t/x] a ]]$. Since 
$[[G1 |- a : * ]]$, we have $[[ [t/x]a = a ]]$. We know that $[[G1 |- t : a ]]$ implies $[[G1,[t/x]G2 |- t : a ]]$ by weakening.

If $[[G1,x : a, G2' , y: t', G2 |- y : t']]$, then we will have $[[G1,[t/x] G2' , y:[t/x] t',[t/x] G2 |- y :[t/x] t']]$. 

\item[Case.]
  \[\ottdruleType{} \]

In this case, we will always have $[[G1, [t/x]G2 |- * : *]]$.

\item[Case.]
 \[  \ottdrulePi{}  \]

Assume $[[G = G1,y : a, G2]]$ and $[[ G1 |- t : a]]$ and $[[G1,y : a, G2 |- ! s x: t1.t2 : *]]$. We want to show $[[G1, [t/y]G2 |- ! s x: [t/y]t1.[t/y]t2 : *]]$. By IH, we know that
$[[G1, [t/y]G2 |- [t/y]t1 :* ]]$ and $[[G1, [t/y]G2, x: [t/y]t1, maybeval(s, x) |- [t/y]t2 : *]]$. So by applying the $\ottdrulename{Pi}$ rule again, we get the result.

\item[Case.]
 \[  \ottdruleSelf{}  \]

Assume $[[G = G1,y : a, G2]]$ and $[[ G1 |- t : a]]$ and $[[G1,y : a, G2 |- self x .t2 : *]]$. We want to show $[[G1, [t/y]G2 |- self x . [t/y]t2 : *]]$. By IH, we know that
$[[G1, [t/y]G2, x: self x.[t/y]t2 |- [t/y]t2 : *]]$. So by applying the $\ottdrulename{Self}$ rule again, we get the result.

\item[Case.]
 \[  \ottdruleConv{}  \]

Assume $[[G = G1,y : a, G2]]$ and $[[ G1 |- t : a]]$ and $[[G1,y : a, G2 |- t1 : t2 ]]$ and $[[G |- [ t1 : t2] = t2']]$ and $[[G1,y : a, G2 |- t1 : t2' ]]$. We want to show $[[G1, [t/y]G2 |-  [t/y]t1 : [t/y] t2' ]]$. By IH, we know that
$[[G1, [t/y]G2 |- [t/y]t1 : [t/y]t2 ]]$. By Lemma~\ref{lemma4} we have $[[G1, [t/y]G2 |- [ [t/y]t1 : [t/y]t2] = [t/y]t2']]$. So by applying the $\ottdrulename{Conv}$ rule again, we get the result.

\item[Case.]
 \[  \ottdruleLam{}  \]

Assume $[[G = G1,y : a, G2]]$ and $[[ G1 |- t : a]]$ and $[[G1,y : a, G2 |- \ s x . t3 : ! s x: t1.t2 ]]$. We want to show $[[G1, [t/y]G2 |- \ s x . [t/y]t3 : ! s x: [t/y]t1.[t/y]t2 ]]$. By IH, we know that
$[[G1, [t/y]G2 |- [t/y]t1 :* ]]$ and $[[ G1, [t/y]G2, x: [t/y]t1, maybeval(s, x) |- [t/y]t3 :[t/y]t2 ]]$. So by applying the $\ottdrulename{Lam}$ rule again, we get the result.

\item[Case.]
 \[  \ottdruleMu{}  \]

Assume $[[G = G1,y : a, G2]]$ and $[[ G1 |- t : a]]$ and $[[G1,y : a, G2 |- \m x1 = t1 , dots , xn = tn. t' : t'']]$. We want to show $[[G1,[t/y]G2 |- \m x1 =[t/y] t1 , dots , xn = [t/y]tn.[t/y] t' : [t/y]t'']]$. Let $[[ G' = G1, y:a, G2 , x1 : t'1 , ... , xn : t'n , x1 = t1 , ... , xn = tn]]$. By IH, we have $[[forall i in {1,dots,n}. G1,[t/y] G2 , x1 : [t/y]t'1 , ... , xn : [t/y]t'n , x1 = [t/y]t1 , ... , xn = [t/y]tn |- [t/y]ti :[t/y] t'i]]$ and $[[ G1,[t/y] G2 , x1 : [t/y]t'1 , ... , xn : [t/y]t'n , x1 = [t/y]t1 , ... , xn = [t/y]tn |- [t/y]t' : [t/y]t'']]$. So by applying the $\ottdrulename{Mu}$ rule again, we get the result.

\item[Case.]
 \[  \ottdruleApp{}  \]

Assume $[[G = G1,y : a, G2]]$ and $[[ G1 |- t : a]]$ and $[[G1,y : a, G2 |-t1 t2 : (! s x: t3.t4) t2 ]]$. We want to show $[[G1, [t/y]G2 |- ([t/y]t1)[t/y] t2 : (! s x:[t/y]t3.[t/y]t4) [t/y]t2 ]]$. By IH, we know that
$[[G1, [t/y]G2 |- [t/y]t2 : [t/y]t3 ]]$ and $[[ G1, [t/y]G2 |- [t/y]t1 :! s x:[t/y]t3.[t/y]t4  ]]$. So by applying the $\ottdrulename{App}$ rule again, we get the result.


\end{itemize}
\end{proof}

\begin{lemma}
\label{lemma4}
If $[[G1, y: c, G2 |- [t: a] = b ]]$ and $y$ is substitutable by $t1$ w.r.t. $[[G1, y: c, G2]]$, then $[[G1, [t1/y]G2 |- [ [t1/y]t: [t1/y]a] = [t1/y]b ]]$.
\end{lemma}

\begin{proof}
By inversion on $[[G1, y: c, G2 |- [t: a] = b ]]$, we have $[[G1, y: c, G2 |- [t: b] => t2 ]]$ and $[[G1, y: c, G2 |- [t: a] => t2 ]]$. By Lemma~\ref{lemma4.5}, we have $[[G1, [t1/y]G2 |- [ [t1/y]t: [t1/y]a] => [t1/y]t2 ]]$ and $[[G1, [t1/y]G2 |- [ [t1/y]t: [t1/y]b] => [t1/y]t2 ]]$. Thus,
by the $\ottdrulename{Eq\_Join}$ rule, we have $[[G1, [t1/y]G2 |- [ [t1/y]t: [t1/y]a] = [t1/y]b ]]$.

\end{proof}

\begin{lemma}
\label{lemma4.5}
If $[[G1, y: c, G2 |- [t ?: a] => b ]]$ and $y$ is substitutable by $t_1$ w.r.t. $[[G1, y: c, G2]]$, then $[[G1, [t1/y]G2 |- [ [t1/y](t ?): [t1/y]a] => [t1/y]b ]]$.
\end{lemma}

\begin{proof}
Let $[[G = G1, y: c, G2]]$. We prove this by induction on the derivation of $[[G1, y: c, G2 |- [t ?: a] => b ]]$.
\begin{itemize}

\item[Case.]
\[\ottdruleRedXXEval{} \]

We want to show $[[G1, [t3/y]G2 |- [ [t3/y](t ?): [t3/y]t1 ] => [t3/y]close(E', t2) ]]$. We know that $[[  G |- . ; [t3/y]t1 ~> m [t3/y]E' ; [t3/y]t2]]$ by Lemma~\ref{lemma5}. So by applying the $\ottdrulename{Red\_Eval}$ rule again, we get the result.

\item[Case.]
\[\ottdruleRedXXSelfSubst{} \]
In this case we only consider $[[t ? = t]]$.
 We want to show $[[G1, [t1/y]G2 |- [ [t1/y]t: self x.[t1/y]t'] => [t1/y]([t/x]t') ]]$. By $\ottdrulename{Red\_SelfSubst}$ rule we know that $[[G1, [t1/y]G2|- [ [t1/y]t: self x.[t1/y]t'] => [ [t1/y]t/x]([t1/y]t') ]]$. So it is the case.

\item[Case.]
\[\ottdruleRedXXPiVal{} \]

We want to show $[[G1, [t3/y]G2 |- [\ s x . [t3/y]t : ! s x : [t3/y]t1. [t3/y]t2 ] => ! s x : [t3/y]t1'. [t3/y]t2' ]]$. By IH, we have $[[G1, [t3/y]G2 |- [ _ :  [t3/y]t1 ] =>  [t3/y]t1' ]]$ and $[[G1, [t3/y]G2 |- [ [t3/y]t :  [t3/y]t2 ] =>  [t3/y]t2' ]]$. So applying $\ottdrulename{Red\_PiVal}$ rule again, we get the result.

\item[Case.]
\[\ottdruleRedXXTrans{} \]

We want to show $[[G1, [t3/y]G2 |- [ [t3/y]t :  [t3/y]t1 ] =>  [t3/y]t4 ]]$. By IH, we have $[[G1, [t3/y]G2 |- [ [t3/y]t :  [t3/y]t1 ] =>  [t3/y]t2 ]]$ and $[[G1, [t3/y]G2 |- [ [t3/y]t :  [t3/y]t2 ] =>  [t3/y]t4 ]]$. So applying $\ottdrulename{Red\_Trans}$ rule again, we get the result.

\item[Case.]
\[\ottdruleRedXXDef{} \]
In this case, assume $[[G = G1, y:c, G2]]$ and $[[x = t in G2]]$. So $[[x = [t3/y]t in [t3/y]G2]]$. Thus, by applying the $\ottdrulename{Red\_Def}$ rule, we have $[[ G1, [t3/y]G2 |-  [ [t3/y](t ?) :x] => [t3/y]t ]] $. If $[[x = t in G1]]$, then $[[y notin FV ( t ) ]]$ by our assumption about 
$[[G]]$. Thus $[[ G1, [t3/y]G2 |-  [ [t3/y](t ?) :x] => t ]]$. 

    \item[Case.]

   \[ \ottdruleRedXXCongPi{}\]
Same argument as $\ottdrulename{Red\_Trans}$.

    \item[Case.]

   \[ \ottdruleRedXXCongLam{}\]
Same argument as $\ottdrulename{Red\_Trans}$.
    \item[Case.]

   \[ \ottdruleRedXXCongApp{}\]
Same argument as $\ottdrulename{Red\_Trans}$.
    \item[Case.]

   \[ \ottdruleRedXXCongSelf{}\]
Same argument as $\ottdrulename{Red\_Trans}$.
    \item[Case.]

   \[ \ottdruleRedXXCongMu{}\]
Same argument as $\ottdrulename{Red\_Trans}$.


\end{itemize}

\end{proof}


\begin{lemma}
\label{lemma5}
If $[[G1, y: c,G2 |- E ; t ~> E' ; t' ]]$ and $y$ is substitutable by $t1$ w.r.t. $[[G1, y: c, G2]]$, then $[[G1, [t1/y]G2 |- [t1/y]E; [t1/y]t ~> [t1/y]E';[t1/y]t' ]]$.

\end{lemma}

\begin{proof}
We will prove this by induction on the derivation of $[[G1, y: c,G2 |- E ; t ~> E' ; t' ]]$.  

\begin{itemize}
\item[Case.]
\[ \ottdruleBetaV{} \]

Let $[[G = G1, y: c,G2]]$; we have $[[G1, y: c,G2 |- val t]]$ and $[[G1, y: c,G2 |- E; (\ cbv x . t') t ~> E; [t/x] t' ]]$. We
want to show that $[[G1, [t1/y]G2 |- [t1/y]E; (\ cbv x .[t1/y] t') [t1/y]t ~> [t1/y]E;[t1/y] ([t/x] t') ]]$. By Lemma~\ref{lemma6}
we know that $[[G1, [t1/y]G2 |- val [t1/y]t]]$. So by applying the rule $\ottdrulename{BetaV}$, we have $[[G1, [t1/y]G2 |- [t1/y]E; (\ cbv x .[t1/y] t') [t1/y]t ~> [t1/y]E;[ [t1/y]t/x] ([t1/y] t') ]]$, as required.

\item[Case.]
\[ \ottdruleBetaN{} \]
Let $[[G = G1, y: c,G2]]$; we have $[[G1, y: c,G2 |- E; (\ cbn x . t') t ~> E; [t/x] t' ]]$. We want to show that $[[G1, [t1/y]G2 |- [t1/y]E; (\ cbn x .[t1/y] t') [t1/y]t ~> [t1/y]E;[t1/y] ([t/x] t') ]]$. By applying the rule $\ottdrulename{BetaN}$, we have $[[G1, [t1/y]G2 |- [t1/y]E; (\ cbn x .[t1/y] t') [t1/y]t ~> [t1/y]E;[ [t1/y]t/x] ([t1/y] t') ]]$, as required.

\item[Case.]
\[ \ottdruleMuRed{} \]

Assume $y$ is substitutable by $a$ w.r.t. $[[G]]$ and let $[[G = G1, y: c,G2]]$; we have $[[   G |- E ; \m xs = ts . t ~> E , x1 = t1 , dots , xn = tn ; t ]]$. We want to show that $[[ G1, [a/y]G2 |- [a/y]E ; \m  x1 = [a/y]t1 , dots , xn = [a/y]tn .[a/y] t ~> [a/y]E , x1 = [a/y]t1 , dots , xn = [a/y]tn ; [a/y]t ]]$. This follows directly by applying the $\ottdrulename{MuRed}$ rule.

\item[Case.]
\[ \ottdruleAppOne{} \]

Assume $y$ is substitutable by $a$ w.r.t. $[[G]]$ and $[[G = G1, y: c,G2]]$. By IH, we have 
$[[ G1, [a/y]G2 |- [a/y]E ;[a/y]t1 ~> [a/y]E';[a/y]t1' ]]$. Thus by applying $\ottdrulename{App1}$ rule again, we get $[[ G1, [a/y]G2 |- [a/y]E ;[a/y]t1 ([a/y]t2)~> [a/y]E';[a/y]t1'([a/y]t2) ]]$. 

\item[Case.]
\[ \ottdruleAppCbv{} \]

Assume $y$ is substitutable by $a$ w.r.t. $[[G]]$ and $[[G = G1, y: c,G2]]$. By IH, we have 
$[[ G1, [a/y]G2 |- [a/y]E ;[a/y]t2 ~> [a/y]E';[a/y]t2' ]]$. Thus by applying $\ottdrulename{AppCbv}$ rule again, we get 

$[[ G1, [a/y]G2 |- [a/y]E ;(\ cbv x .[a/y]t1) ([a/y]t2)~> [a/y]E';( \ cbv x. [a/y]t1)([a/y]t2') ]]$. 


%Let $[[G = G1, y: c,G2]]$, we have $[[G1, y: c,G2 |- E;  x  ~> E; t ]]$ and $[[x = t in E]]$. Thus $[[x notin dom (G) ]]$. We want to show that $[[G1, [t1/y]G2 |- [t1/y]E; x  ~> [t1/y]E;[t1/y] t ]]$. Since $[[ x = [t1/y]t in [t1/y]E ]]$, it is the case.

%\item[Case.]
%\[ \ottdruleSubstGlobal{} \]
%Let $[[G = G1, y: c,G2]]$, we have $[[G1, y: c,G2 |- E;  x  ~> E; t ]]$ and $[[x = t in G]]$. We want to show that $[[G1, [t1/y]G2 |- [t1/y]E; x  ~> [t1/y]E;[t1/y] t ]]$. Since $[[ x = [t1/y]t in [t1/y]E ]]$, it is the case.




\end{itemize}

\end{proof}


\begin{lemma}
\label{lemma6}
If $[[G1, y: c,G2 |- val t]]$ and $y$ is substitutable by $t1$ w.r.t. $[[G1, y:c, G2]]$, then $[[G1, [t1/y]G2 |- val [t1/y]t]]$.
\end{lemma}

\begin{proof}
By inversion on the derivation of $[[G1, y: c,G2 |- val t]]$.
\begin{itemize}
\item[Case. ]
 \[ \ottdruleValueStar{} \]
 In this case, we always get $[[G1, [t1/y]G2 |- val *]]$.

\item[Case.]
 \[\ottdruleValueVar{} \]
 If $x \neq y$, then the result is immediate. If $x = y$, then 
we have $[[G1, [t1/y]G2 |- val t1]]$, which follows from $[[G1 |- val t1]]$ by Lemma~\ref{lem:weakening}.

\item[Case.]
\[\ottdruleValueLam{} \]
We always have $[[G1, [t1/y]G2 |- val \ s x. [t1/y]t]]$.

\item[Case.]
\[\ottdruleValueSelf{} \]
We always have $[[G1, [t1/y]G2 |- val self x. [t1/y]t]]$.

\item[Case.]
\[\ottdruleValuePi{} \]
We always have $[[G1, [t1/y]G2 |- val ! s x: [t1/y]t'. [t1/y]t]]$.

\end{itemize}

\end{proof}

\begin{lemma}[Weakening]
\label{lem:weakening}
If $[[G |-  t: t']]$, then $[[G, y: c  |- t : t']]$.
\end{lemma}


\begin{lemma}[Transitivity]
\label{lem:trans}
If $[[G |-  [t: t1] = t2 ]]$ and $[[G |-  [t: t2] = t3 ]]$, then $[[G |-  [t: t1] = t3 ]]$.
\end{lemma}

\begin{proof}
By inversion, there is some $a$ to which both $t_1$ and $t_2$ reduce, and some $b$ to which both $t_2$ and $t_3$ reduce. By confluence of the $\triangleright$ relation, $a$ and $b$ have a common reduct $c$; hence $t_1$ and $t_3$ both reduce to $c$, and the result follows by the $\ottdrulename{Join}$ rule.
\end{proof}

\begin{lemma}[Congruence]
\label{lem:congruence}
If $[[G |- . ; t ~> E; t' ]]$ then $[[G |-  [ t ? : [t/x]t'' ] => [close (E , t)/x]t'' ]]$.
\end{lemma}

\begin{proof}

\end{proof}

\end{comment}
\end{document}
