\iffalse
** COMMANDS: 
ctrl C + ctrl C          = Compile
ctrl C + ctrl V          = View
ctrl C + ctrl T + ctrl P = toggle .pdf and .dvi

** TODO: final

There must not be any 'I', 'we', ... It has to be completely impersonal
The tempus must be present tense
Read it all through as if I didn't know what the thesis was about
Go through all CAPS parts and replace them appropriately
Make sure none of the lines go right of the margin...
   particularly math mode names $...$
   also the code boxes
Preservation and progress definitions are correct! Do not change them.
Do not reference THIS as a paper, but as a thesis!

\fi

\documentclass{article}

\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{proof}
\usepackage[backend=bibtex,style=numeric]{biblatex}
\usepackage{tabu}
\usepackage{datetime}
\usepackage{epstopdf}
\usepackage{epsfig}

% equation
\newcommand{\E}[1]{\begin{equation} \begin{split} #1 \end{split} \end{equation}}
% quotes
\newcommand{\Q}[1]{\begin{quotation} #1 \end{quotation}}
% code
\newcommand{\C}[1]{\texttt{#1}}
% emphasized - used for function names
\newcommand{\N}[1]{\emph{#1}}
% overline code
\newcommand{\OC}[1]{$\overline{\C{#1}}$}
% space
\renewcommand{\S}[0]{\text{ }}
% double space
\renewcommand{\SS}[0]{\text{  }}
% standard box with short bottom margin
\newcommand{\BB}[1]{\C{\begin{center} \begin{tabular}{|l|} \hline #1 \hline \end{tabular} \end{center}}}
% syntax box
\newcommand{\BS}[1]{\C{\begin{center} \begin{tabular}{|l l r|} \hline #1 \hline \end{tabular} \end{center}}\text{}\\}
% rule box - used for the grammar
\newcommand{\BR}[1]{\begin{center} \tabulinesep=0.6mm \begin{tabu}{|l c r|} \hline #1 \hline \end{tabu} \end{center}\text{}\\}
% header for rule box
\newcommand{\BRH}[1]{\multicolumn{2}{|l}{\textbf{#1}} &\\ &&\\}

\addbibresource{refs.bib}

\begin{document}

\pagestyle{empty} 
\pagenumbering{roman} 
\vspace*{\fill}\noindent{\rule{\linewidth}{1mm}\\[4ex]
{\huge\sf Lightweight Family Polymorphism revisited}\\[2ex]
{\huge\sf Mads Pedersen, 20083364}\\[2ex]
\noindent\rule{\linewidth}{1mm}\\[4ex]
\noindent{\Large\sf Master's Thesis, Computer Science\\[1ex] 
\monthname\ \the\year  \\[1ex] Advisor: Erik Ernst\\[15ex]}\\[\fill]}
\epsfig{file=logo.eps}\clearpage

\pagestyle{plain} 
\pagenumbering{roman} 

\begin{abstract}
Object-oriented languages have a deficiency when it comes to handling inheritance over groups of mutually referencing classes: no construction available in the mainstream object-oriented languages allows sound and reusable inheritance between groups of classes. The concept of Family Polymorphism by Ernst solves the problem by introducing class families that serve as containers for groups of classes. These families can inherit from each other, and a class nested in one family can inherit from the corresponding class in another class family, when that class family is a super family of the first. Classes that can exist in more than one class family are called relative classes, and their actual type is resolved at runtime using late binding.

Lightweight Family Polymorphism (.FJ) is a formalization of Family Polymorphism in Java that is less general but for which a type safety proof is much simpler. The goal of this thesis is to mechanically prove .FJ type safe using the Coq proof assistant. Type safety is defined as the combination of preservation and progress, as proposed by Milner, and a formal definition is introduced and discussed.

The Coq proof assistant is a program that allows the user to write mathematical definitions and proofs in a formal language and have the proofs checked. Coq itself relies on a small core that has been extensively verified, and the primary strength of Coq is that this core does not change with each new proof and therefore need not be re-checked, as it would have to be for proofs by pen and paper.

.FJ is an extension of Featherweight Java (FJ) by Igarashi et al., and FJ is well suited for proving type safety of extensions to Java: it omits many features that would crowd a proof of type safety but retains the most important and characteristic features of Java, albeit as a completely functional subset. A formalization of the .FJ calculus in Coq is proposed with some reasonable changes, all of which are discussed. The formalization is then proven type safe by following the structure of an existing Coq proof of type safety for FJ by De Fraine et al.; the lack of type arguments and relative types makes that proof considerably simpler.

This thesis introduces the concepts needed to understand the calculus and its type safety proof in Coq, and discusses the more interesting choices made. The most difficult intermediate lemmas stem from retaining type parameters throughout most of the proof, which means that type arguments, and consequently type substitution, have to be implemented.

In the end, formalizing a proof in Coq makes for an extremely rigorous proof; thus, while parsimony is vital in proofs by pen and paper, verbosity can be preferable for added clarity in this context.
\end{abstract}

\newpage

\newpage
\tableofcontents
\newpage

\pagestyle{plain}
\pagenumbering{arabic} 

\section{Introduction}
The purpose of this thesis is to prove that Lightweight Family Polymorphism (.FJ), a formalization of Family Polymorphism in Java proposed by Igarashi et al.\cite{dotFJ}, is type safe using the Coq Proof Assistant.\\

The concept of Family Polymorphism\cite{fampoly} is introduced and it is argued that it can be immensely helpful when dealing with groups of mutually referencing classes. Since Family Polymorphism introduces some new constructions to programming that might seem foreign, an example from the .FJ paper is given, where the inheritance relations of two families of mutually referencing classes are briefly described. This relation is formally defined later in the thesis.\\

When implementing new language features, it is good to know that, once implemented, they will not compromise the type safety of the language, i.e. any type error will be caught at compile time and will not show up as a runtime crash. Alternatively, proving a language feature type safe after having implemented it, as was the case with generics in Java\cite{oracleGenerics}, is also helpful. Generics, however, were at the time of their implementation in Java (2004, within J2SE 5.0) considered a well-understood problem, even though they were not explicitly proven type safe.\\

To this end, a formalization of Java called Featherweight Java (FJ) proposed by Igarashi et al.\cite{FJ} is introduced as the background for the .FJ calculus.
Then the .FJ calculus is explained in detail. First its differences from the FJ calculus are explained, then its grammar and rules. Finally a type safety proof is discussed.\\ 

The proof of type safety presented by this thesis adheres to the calculus of .FJ and only strays when strictly necessary. It is an adapted version of a type safety proof of FJ by De Fraine et al.\cite{FJcode}. However, the addition of type parameters and Family Polymorphism makes the formalization and proof more complicated.\\

The notions of types and errors are defined in order to strictly define type safety using conventional logic. Then the Coq Proof Assistant is introduced and a short introduction to its syntax is given, with an example of some of the code from the proof presented by this thesis. Coq's differences from other proof assistants are also discussed, along with an introduction to proof assistants in general.\\

The proof of type safety presented by this thesis is then discussed, starting with its structure and how to appropriately formalize type safety in Coq when the proof spans several files. It should be noted that, throughout the thesis, a proof is only paraphrased when it shows something central or particularly interesting; to see how the proofs are \N{actually} constructed, it is best to look at the code in the appendix.\\

Due to time constraints there is one admitted lemma, which means that the lemma has not been proven and is only assumed correct. It is an intermediate lemma that should not cause any problems but will most likely be time consuming to implement because it handles method invocations on special types that need some work to resolve correctly. The admitted lemma is discussed as part of the more general lemma it helps to prove and in the conclusion it is discussed again and some perspective is given.\\

The thesis is structured as follows: Section 2 explains most of the background material, such as what Family Polymorphism is, how Java is represented in the proof, and how exactly the proof is formalized. Section 3 briefly covers related work in the theorem proving field using proof assistants. Section 4 discusses implementation issues, such as problems encountered and the more interesting choices made. Section 5 describes some experiments with example code and presents the findings. Finally, section 6 concludes.\\

\section{Background} 
This thesis introduces many aspects of types in object oriented languages, mathematical proving, and formal systems. Therefore the following will be a short introduction to the background material of the most important constructions used throughout the thesis. Some basic knowledge about typing and logic is assumed. 

When referencing sections throughout the thesis, the numbers correspond to the appropriate section or subsection. When referencing a nested subsection, which is indicated by three numbers such as 1.2.3, only the number of the subsection is given.\\

\subsection{Family Polymorphism}
The term Family Polymorphism was coined by Erik Ernst in \cite{fampoly} and is a programming language feature that allows inheritance relations between groups of classes. This work was prompted by the inability of most object-oriented languages to ensure statically that a particular subclass $x'$ of some class $x$ is always paired up with the appropriate subclass $y'$ of some class $y$, in such a way that there is a relation between $x'$ and $y'$ similar to that of $x$ and $y$.\\

Some attempts at handling family polymorphism using currently available constructions are discussed in Ernst's paper, and they are all found to sacrifice either flexibility or safety, both of which are highly desirable. This effectively demonstrates a deficiency in the way Java and C++ handle polymorphism between groups of classes. Ernst then argues that the problem is not at all specific to those two languages but rather a fault of traditional polymorphism, for which no solutions other than Family Polymorphism are known. Additionally, the problem is likely to become increasingly visible as the amount of variability in software increases.\\

To mend this problem, Ernst introduces class families, which act as containers for mutually referencing classes, and allows these class families to inherit from each other under specific rules that ensure flexibility and safety in multi-object inheritance. The main concept that facilitates class families is dependent types, which, depending on what family they are in, resolve to different classes. By the use of late binding, a class family is statically known to be \N{some} class family, but the actual binding is not known statically.\\

An actual implementation of Family Polymorphism in gbeta is also presented in Ernst's paper. In that construction, classes are attributes of objects and these objects then act as class families. However, this thesis will not discuss the gbeta implementation.\\

\subsubsection{Example program}
To show off Family Polymorphism, the canonical example is formalizing a graph. Imagine two classes \N{Node} and \N{Edge} that are members of the family \N{Graph} and mutually refer to each other: each node has a list of references to connected edges, and each edge has references to its source and destination nodes. Now a specialized graph is created, much like the \N{OnOffGraph} in Ernst's paper, called \N{ColorWeightGraph} with \N{ColorNode} and \N{WeightEdge}, so that the weights of the edges depend on the colors of their source and destination nodes. By letting \N{ColorWeightGraph} extend \N{Graph} in a way that lets the use of \N{Node} and \N{Edge} be changed to their more specific counterparts when used in an appropriately specific context, code reuse is achieved safely and completely without the use of type casts.\\
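
The deficiency that motivates this construction can be made concrete in plain Java. The sketch below reuses the class names from the example above; the specific fields and the \C{weight} method are hypothetical. Because the static type of an edge's endpoints remains \N{Node} in the derived family, the color can only be recovered through a downcast that the compiler cannot verify:

```java
import java.util.ArrayList;
import java.util.List;

class Graph {
    static class Node {
        final List<Edge> edges = new ArrayList<>();
    }
    static class Edge {
        final Node src, dst;
        Edge(Node src, Node dst) { this.src = src; this.dst = dst; }
    }
}

class ColorWeightGraph extends Graph {
    static class ColorNode extends Graph.Node {
        final String color;
        ColorNode(String color) { this.color = color; }
    }
    static class WeightEdge extends Graph.Edge {
        WeightEdge(ColorNode src, ColorNode dst) { super(src, dst); }
        // src and dst have static type Graph.Node, so the color can
        // only be reached via downcasts the compiler cannot check.
        int weight() {
            return ((ColorNode) src).color.length()
                 + ((ColorNode) dst).color.length();
        }
    }
}
```

With family polymorphism, the relative type \C{.Node} would give \C{src} and \C{dst} the appropriately specific node type inside the derived family, making the casts unnecessary.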

\subsection{Featherweight Java}
Featherweight Java (FJ)\cite{FJ} is another lightweight version of Java, in which most complex features are omitted.\\

There is always a trade-off between completeness and compactness of a model. Modeling a language in complete detail incurs the cost of added complexity, which might not be worth the extra features when it comes to proving things about the model. Omitting all but the core features of a language, on the other hand, yields a strict subset of the original language that can resemble it quite well, while being much easier to prove things about and considerably more transparent. FJ is a calculus that consists of a grammar and rules that govern how to reason about that grammar.
\Q{FJ favors compactness over completeness almost obsessively.\cite{FJ}}
This makes the model easy to use as a basis for a proposed extension to Java, as it allows the focus to be on the extension itself and not on the complete list of features of the underlying language. A key criterion is that the calculus should retain the most important and characteristic parts of the original language in order to model it well.\\

What is to be proven is usually type safety, and the overall purpose of FJ is to make such a type safety proof short and transparent. To that end, all features that were not interesting were omitted. Casts, however, are a notable feature that did get included in the model. A very interesting omission is assignment, meaning that FJ is a purely functional model of Java in which all variables are final. The full list of omissions can be found in the FJ paper.\\

There are some idiosyncrasies belonging to FJ. In FJ, \N{this} is a variable and not a keyword. \N{Object} does not appear in the class table and is regarded as a special case in all rules that operate on it. A program is considered to be a pair of a set of class definitions and an expression that is to be evaluated. Dropping assignment makes the language functional and eliminates side effects. A small-step reduction relation is used for evaluation in order to reason about each individual step of the evaluation; proving soundness for such a system also implies soundness for the special case of Java's evaluation strategy. The reduction relation from one expression $e$ to another expression $e'$ in one step is written $e \longrightarrow e'$.\\
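
As a concrete illustration, consider the canonical \N{Pair} example from the FJ paper, where a pair has fields \C{fst} and \C{snd}: projecting a field out of a freshly constructed object takes exactly one reduction step.
\E{\C{new Pair(new A(), new B()).snd} \longrightarrow \C{new B()}}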

\subsubsection{Notation}
For the sake of clarity and rigor, the following is a brief explanation of the syntax used in this thesis. It is, whenever possible, consistent with the reference material.\\

The class table \N{CT} is simply a map from a class name to its definition. It does not allow any cycles in the subtyping relation due to well-formedness constraints that will always ensure soundness of the class table. This means that the $subtype$ relation, denoted by $<:$, is antisymmetric. Naturally, mutual recursion between class definitions is allowed.\\

Comma denotes list concatenation and semicolon separates different environments. For $n$ pairs, the over-bar notation behaves as follows, using traditional BNF notation:
\E{
\overline{C} \S \overline{f} &::= C_{1} \S f_{1}, C_{2} \S f_{2}, ... \S, C_{n} \S f_{n}\\
this.\overline{f} = \overline{f} &::= this.f_{1} = f_{1}, this.f_{2} = f_{2}, ... \S, this.f_{n} = f_{n}}
When in a mathematical or logical context implications will always be denoted by $\Rightarrow$ and empty lists will be denoted by $\bullet$. When in a Coq context, implications will be denoted by $\rightarrow$ and empty lists by \C{nil}, both of which are actual syntax in Coq.\\

\subsubsection{Syntax}
Below is a short introduction to some of the syntax of FJ.\\

The \N{extends} relation that models the subclass relation between a class and its immediate super is denoted by $\triangleleft$. For example, $C \triangleleft D$ means that class $C$ has the immediate super class $D$. The triangle can be read as ``less than'' in the sense of a class's height in the inheritance hierarchy.\\

$fields(C) = \overline{C} \S \overline{f}$ models the field variables of each class. The function returns a list of class names and field variable names given a class name. The class names indicate the type of each field variable name.\\

$mtype(m,C) = \overline{B} \rightarrow B$ models method types. The function returns a list of argument types and a return type given a class name and method name.\\

$mbody(m,C) = \overline{x}.e$ models a method body. The function returns a list of variables and an expression given a class name and method name.\\

$\Gamma : \overline{x} \rightarrow \overline{C}$ is the type environment which is a mapping from variables to types.\\

$\Delta : \overline{X} \rightarrow \overline{N}$ is the bound environment which is a mapping from type variables to non-variable types.\\

$bound_{\Delta}(T)$ is a lookup in the bound environment.\\

$\Delta \vdash S <: T$  means that $S$ is a subtype of $T$ in $\Delta$.\\

$\Delta ; \Gamma \vdash e:T$  means the expression $e$ has type $T$ under the environment $\Delta$ and $\Gamma$.\\

Type parameters are invariant in subtyping. It should also be mentioned that the use of environment naming in this thesis is slightly different from that of the FJ paper, as it seems more natural to name the environments after how they are used and not after their keys.\\

\subsubsection{Existing Coq proof}
The proof presented by this thesis is structured after a cast-free proof of FJ written by Bruno De Fraine, with help from Erik Ernst and Mario Südholt, in 2008\cite{FJcode}. The proof of FJ is considerably shorter and less difficult than the one presented by this thesis because of the additions .FJ makes to FJ. These additions have required thousands of lines of original code, and not much of the FJ proof is left except the basic structure. The exact number of lines of code is in the appendix along with all the code. The structure of the proof is explained in section 4.1, where it is made explicit where the additions have been made.\\

\subsection{Lightweight Family Polymorphism}
This section introduces .FJ, its syntax, and how it differs from FJ. The formal definitions of the .FJ calculus are all taken directly from the .FJ paper and help to make this thesis self-contained. The inclusion is justified by the mathematical proof presented by this thesis being a significant contribution. No changes have been made, which ensures that the proof presented by this thesis exactly models the .FJ calculus.\\

The .FJ calculus was created by Igarashi, Saito, and Viroli in 2005\cite{dotFJ}. It tackles the challenge of implementing Ernst's formalization of Family Polymorphism on top of FJ. The main modification of Ernst's formalization is the way of representing class families: in .FJ, class families are modeled as classes instead of objects, and are thus static instead of dynamic. Additionally, inheritance is not considered subtyping. This greatly simplifies a proof of soundness, but while the calculus is still Turing complete, it does lose expressive power, i.e. the omissions are not just syntactic sugar; the calculus is actually less expressive than Java, using Felleisen's definition\cite{exp}.\\

Families are represented as top-level classes and their members as nested classes. The example code that explains Family Polymorphism is taken from the .FJ paper, and those class names will be used as examples. Types of the form \N{Graph.Node} and \N{Graph.Edge} are known as fully qualified types. Types of the form \N{.Node} and \N{.Edge} are relative path types, which means that they are to be looked up in the current family. To sum up, this means that \N{ColorWeightGraph.Node} inherits all properties of \N{Graph.Node} but is not considered a subtype. In the .FJ paper, the ability to subclass families comes mostly from the use of relative class names, which are resolved using late binding.\\

The formalization in .FJ only deals with a single level of class nesting instead of an arbitrary level as in standard Java. Some expressive power is lost by this omission, but the feature is not strictly needed and would make for a more complicated type safety proof. Type casts are omitted, since the point of the .FJ formalization is to model Family Polymorphism without casts. Much of the syntactic sugar that Java offers has been replaced by stricter syntax rules; for example, every parametric method invocation has to provide its type arguments. Method invocation on super is also omitted.\\

It is important to note that .FJ, with the omission of assignment, is an entirely functional model of Java. This means that a proof about this model is not a proof about the imperative language that Java is.\cite{dotFJ}\\

\subsubsection{Syntax}
Below is the grammar for .FJ, where \N{decls} is an abbreviation for declarations. Since there is no ambiguity, the terms \N{fully qualified class names} and \N{qualified class names} will both be used throughout the thesis and mean exactly the same. Usually, when \N{fully} is included in the term, it is in an explicit context or to emphasize that it is a completely resolved class name.
\BS{
P,Q &::= \SS C $|$ X &family names\\
A,B &::= \SS C $|$ C.C &qualified class names\\
S,T,U &::= \SS P $|$ P.C $|$ .C &types\\
L &::= \SS class C $\triangleleft$ C \{\OC{T}  \OC{f}; K \OC{M} \OC{N}\} &top class decls\\
K &::= \SS C(\OC{T} \OC{f})\{super(\OC{f}); this.\OC{f} = \OC{f}\} &constructor decls\\
M &::= \SS <\OC{X} $\triangleleft$ \OC{C}> T m(\OC{T} \OC{x})\{return e;\} &method decls\\
N &::= \SS class C \{\OC{T} \OC{f}; K \OC{M}\} &nested class decls\\
d,e &::= \SS x $|$ e.f $|$ e.m<\OC{P}>(\OC{e}) $|$ new A(\OC{e}) &expressions\\
v &::= \SS new A(\OC{v}) &values\\
}

\subsubsection{Rules}
This section contains all the rule boxes from .FJ and an explanation of the naming conventions that are used throughout the thesis, except for types, which are handled slightly differently than in the .FJ paper and are explained in section 4.5.

$C,D,E$ are simple class names. $X,Y$ are type variable names. $f,g$ are field names, $m$ is a method name, $x$ is a variable.\\

Just like field lookup in FJ, $fields(C) = \overline{C} \S \overline{f}$ also describes field lookup in .FJ.
\BR{
\BRH{Field Lookup}
& \C{$fields$(Object) = $\bullet$} &(F-TObject)\\
&&\\
& \C{class C$\triangleleft$D \{\OC{T} \OC{f};...\}  \SS  $fields$(D) = \OC{U} \OC{g}} &(F-TClass)\\
\cline{2-2}
& \C{$fields$(C) = \OC{U} \OC{g}, \OC{T} \OC{f}} &\\
&&\\
& \C{$fields$(Object.C) = $\bullet$} &(F-NObject)\\
&&\\
& \C{class C$\triangleleft$D \{...\OC{N}\}  \SS  class E \{\OC{T} \OC{f};...\}$\in$\OC{N}} &(F-NClass)\\
& \C{$fields$(D.E) = \OC{U} \OC{g}} &\\
\cline{2-2}
& \C{$fields$(C.E) = \OC{U} \OC{g}, \OC{T} \OC{f}} &\\
&&\\
& \C{class C$\triangleleft$D \{...\OC{N}\}  \SS  E$\notin$\OC{N}  \SS  $fields$(D.E) = \OC{U} \OC{g}} &(F-NSuper)\\
\cline{2-2}
& \C{$fields$(C.E) = \OC{U} \OC{g}} &\\
}
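
As a hypothetical worked example of the nested-class rules, suppose \C{class Graph} declares a nested \C{class Node \{.Edge e; ...\}} and \C{class ColorWeightGraph $\triangleleft$ Graph} declares a nested \C{class Node \{Color c; ...\}}. Since \C{$fields$(Object.Node) = $\bullet$} by (F-NObject), applying (F-NClass) twice yields\\
\C{$fields$(Graph.Node) = .Edge e, \S $fields$(ColorWeightGraph.Node) = .Edge e, Color c}\\
so the fields of the node class in the super family are inherited and extended in the derived family.\\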

The \N{mtype} function now returns bounds for the method's type variables, if any exist. This means that \N{mtype} now looks like this:\\ 
$mtype(m,A) = \C{<}\overline{X}\triangleleft\overline{C}\C{>}\overline{T} \rightarrow T_{0}$\\
Also like in the FJ paper, \N{Object} is handled as a special case so that\\
$mtype(m,Object)$ and $\forall C, \S mtype(m,Object.C)$ are both undefined.
\BR{
\BRH{Method Type Lookup}
& \C{class C$\triangleleft$D \{...\OC{M}\}  \SS  <\OC{X}$\triangleleft$\OC{C}>T$_{0}$ m(\OC{T} \OC{x})\{ return e; \}$\in$\OC{M}} &(MT-TClass)\\
\cline{2-2}
& \C{$mtype$(m,C) = <\OC{X}$\triangleleft$\OC{C}>\OC{T}$\rightarrow$T$_{0}$} &\\
&&\\
& \C{class C$\triangleleft$D \{...\OC{M}...\}  \SS  m$\notin$\OC{M}} &(MT-TSuper)\\
\cline{2-2}
& \C{$mtype$(m,C) = $mtype$(m,D)} &\\
&&\\
& \C{class C$\triangleleft$D \{...\OC{N}\}  \SS  class E \{...\OC{M}\}$\in$\OC{N}} &(MT-NClass)\\
& \C{<\OC{X}$\triangleleft$\OC{C}>T$_{0}$ m(\OC{T} \OC{x})\{ return e; \}$\in$\OC{M}} &\\
\cline{2-2}
& \C{$mtype$(m,C.E) = <\OC{X}$\triangleleft$\OC{C}>\OC{T}$\rightarrow$T$_{0}$} &\\
&&\\
& \C{class C$\triangleleft$D \{...\OC{N}\}  \SS  class E \{...\OC{M}\}$\in$\OC{N}  \SS  m$\notin$\OC{M}} &(MT-NSuper1)\\
\cline{2-2}
& \C{$mtype$(m,C.E) = $mtype$(m,D.E)} &\\
&&\\
& \C{class C$\triangleleft$D \{...\OC{N}\}  \SS  E$\notin$\OC{N}} &(MT-NSuper2)\\
\cline{2-2}
& \C{$mtype$(m,C.E) = $mtype$(m,D.E)} &\\
}

The \N{bound} function works like the one in FJ but needs some additions because of the separation between top and nested classes and the introduction of type variables:\\
\C{$bound_{\Delta}$(A) = A, \S $bound_{\Delta}$(X) = $\Delta$(X), \S $bound_{\Delta}$(X.C) = $\Delta$(X).C}\\
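For instance, under a hypothetical bound environment $\Delta$ that maps \C{X} to \C{Graph}, the three equations give\\
\C{$bound_{\Delta}$(Graph.Node) = Graph.Node, \S $bound_{\Delta}$(X) = Graph, \S $bound_{\Delta}$(X.Node) = Graph.Node}\\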

Just like in FJ, the subtyping relation looks like $\Delta \vdash S <: T$, for the bound environment $\Delta$.
\BR{
\BRH{Subtyping}
& \C{$\Delta \vdash$ T <:T} &(S-Refl)\\
&&\\
& \C{$\Delta \vdash$ X <:$\Delta$(X)} &(S-Var)\\
&&\\
& \C{$\Delta \vdash$ T <:Object} &(S-Object)\\
&&\\
& \C{$\Delta \vdash$ S <:T  \SS $\Delta \vdash$ T <:U} &(S-Trans)\\
\cline{2-2}
& \C{$\Delta \vdash$ S <:U} &\\
&&\\
& \C{class C $\triangleleft$ D \{...\}} &(S-Class)\\
\cline{2-2}
& \C{$\Delta \vdash$ C <:D} &\\
}

$\Delta \vdash$ \N{T} ok in \N{A} means that T is well-formed in class A under the environment $\Delta$. Object is well-formed, and the type checker makes sure that all classes in the class table are well-formed as well.
\BR{
\BRH{Type Well-formedness}
& $\Delta \vdash$ \C{Object} ok in \C{A} &(WF-Object)\\
&&\\
& \C{A} $\in$ \N{dom}(\N{CT}) &(WF-Cls)\\
\cline{2-2}
& $\Delta \vdash$ \C{A} ok in \C{B} &\\
&&\\
& \C{class C $\triangleleft$ D \{...\OC{N}\}  \SS  E $\notin$ \OC{N}} &(WF-SNCls)\\
& $\Delta \vdash$ \C{D.E} ok in \C{A} &\\
\cline{2-2}
& $\Delta \vdash$ \C{C.E} ok in \C{A} &\\
&&\\
& $\Delta \vdash \Delta$ \C{(X)} ok in \C{A} &(WF-Var)\\
\cline{2-2}
& $\Delta \vdash$ \C{X} ok in \C{A} &\\
&&\\
& $\Delta \vdash \Delta$ \C{(X).C} ok in \C{A} &(WF-AbsFam)\\
\cline{2-2}
& $\Delta \vdash$ \C{X.C} ok in \C{A} &\\
&&\\
& $\Delta \vdash$ \C{C.E} ok in \C{C.D} &(WF-Rel)\\
\cline{2-2}
& $\Delta \vdash$ \C{.E} ok in \C{C.D} &\\
}

Most of the rules require absolute path types as arguments, which means that relative types will need to be resolved. To this end, the at expression \C{T@S} is introduced and it denotes the class name that \C{T} refers to in \C{S}. Essentially, \C{T@S} denotes the resolution of \C{T} at \C{S}.\\
\C{.D@P.C = P.D, \S .D@.C = .D, \S P@T = P, \S P.C@T = P.C}\\
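For instance, with the class names from the graph example, these equations resolve\\
\C{.Edge@Graph.Node = Graph.Edge, \S .Edge@.Node = .Edge, \S Graph@.Node = Graph, \S Graph.Edge@.Node = Graph.Edge}\\
so a relative type only becomes absolute when resolved at a fully qualified type, and absolute types are never changed by resolution.\\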

\N{Typing} in .FJ has the same structure as in FJ: $\Delta ; \Gamma \vdash e:T$ in $A$ means that $e$ has type $T$ under the environments $\Delta$ and $\Gamma$ in the context of class $A$.
\BR{
\BRH{Expression Typing}
& $\Delta ; \Gamma \vdash$ \C{x:$\Gamma$(x)} in \C{A} &(T-Var)\\
&&\\
& $\Delta ; \Gamma \vdash$ \C{e$_{0}$:T$_{0}$} in \C{A  \SS  $fields$($bound_{\Delta}$(T$_{0}$@A))} = \OC{T} \OC{f} &(T-Field)\\
\cline{2-2}
& $\Delta ; \Gamma \vdash$ \C{e$_{0}$.f$_{i}$:T$_{i}$@T$_{0}$} in \C{A} &\\
&&\\
& $\Delta ; \Gamma \vdash$ \C{e$_{0}$:T$_{0}$} in \C{A  \SS  $mtype$(m, $bound_{\Delta}$(T$_{0}$@A))} = \C{<}\OC{X}$\triangleleft$\OC{C}\C{>}\OC{U}$\rightarrow$\C{U} &(T-Invk)\\
& $\Delta \vdash$ \OC{P} \C{<:} \OC{C} \C{  \SS  } $\Delta ; \Gamma \vdash$ \OC{e} \C{:} \OC{T} in \C{A  \SS  $\Delta \vdash$ \OC{T} <: ([\OC{P}/\OC{X}]\OC{U})@T$_{0}$} &\\
\cline{2-2}
& $\Delta ; \Gamma \vdash$ \C{e$_{0}$.m<\OC{P}>(\OC{e}):([\OC{P}/\OC{X}]U)@T$_{0}$} in \C{A} &\\
&&\\
& \C{$fields$(A$_{0}$) = \OC{T} \OC{f}  \SS  $\Delta ; \Gamma \vdash$ \OC{x}:\OC{U}} in \C{A  \SS  $\Delta \vdash$ \OC{U}<:\OC{T}@A$_{0}$} &(T-New)\\
\cline{2-2}
& \C{$\Delta ; \Gamma \vdash$ new A$_{0}$(\OC{e}):A$_{0}$} in \C{A} &\\
}

The method typing rule has an if clause that handles method overriding. Essentially, if a method is already declared in a super class, then the overriding method's type arguments, argument types, and return type must all be the same as in the super.
\BR{
\BRH{Method Typing}
& \C{$\Delta$ = \OC{X}<:\OC{C}  \SS  $\Delta$;\OC{x}:\OC{T},this:$thistype$(A)$\vdash$e$_{0}$:U$_{0}$} in \C{A} &(T-Method)\\
& \C{$\Delta \vdash$U$_{0}$<:T$_{0}$  \SS  $\Delta \vdash$T$_{0}$,\OC{T},\OC{C}} ok in \C{A} &\\
& if \C{$mtype$(m,$superclass$(A)) = <\OC{Y}$\triangleleft$\OC{D}>\OC{S}$\rightarrow$S$_{0}$} &\\
& then \C{\OC{C}=\OC{D}} and \C{\OC{T},T$_{0}$=[\OC{X}/\OC{Y}](\OC{S},S$_{0}$)} &\\
\cline{2-2}
& \C{<\OC{X}$\triangleleft$\OC{C}>T$_{0}$ m(\OC{T} \OC{x})\{ return e$_{0}$; \}} ok in \C{A} &\\
}

The definitions \N{thistype} and \N{superclass} are necessary and do not exist in FJ. Both of the definitions do what their names suggest.\\
\C{$thistype$(C) = C, \S $thistype$(C.E) = .E}\\
\C{$superclass$(C) = D, \S $superclass$(C.E) = D.E}, \S where \C{class C $\triangleleft$ D\{...\}}\\
Also, Ø is used to mean the empty environment.
\BR{
\BRH{Class Typing}
& \C{K = E(\OC{U} \OC{g}, \OC{T} \OC{f})\{super(\OC{g}); this.\OC{f}=\OC{f}\}} &(T-NClass)\\
& \C{$fields$($superclass$(C).E)=\OC{U} \OC{g}  \SS  \OC{M}} ok in \C{C.E  \SS  Ø$\vdash$\OC{T}} ok in \C{C.E} &\\
\cline{2-2}
& \C{class E\{\OC{T} \OC{f}; K \OC{M}\}} ok in \C{A} &\\
&&\\
& \C{K=C(\OC{U} \OC{g}, \OC{T} \OC{f})\{super(\OC{g}); this.\OC{f}=\OC{f}\}} &(T-TClass)\\
& \C{$fields$(D)=\OC{U} \OC{g}  \SS  \OC{M}} ok in \C{C  \SS  \OC{N}} ok in \C{C  \SS  Ø$\vdash$\OC{T},D} ok in \C{C} &\\
\cline{2-2}
& \C{class C$\triangleleft$D\{\OC{T} \OC{f}; K \OC{M} \OC{N}\}} ok &\\
}

To express the operational semantics, a lookup function is needed for method bodies. Just like for \N{mtype}, \N{Object} is handled as a special case, which means that\\
$mbody(m\C{<}\overline{P}>,Object)$ and $\forall C, \S mbody(m\C{<}\overline{P}>,Object.C)$ \\
are both undefined.
\BR{
\BRH{Method Body Lookup}
& \C{class C$\triangleleft$D\{...\OC{M}...\}  \SS  <\OC{X}$\triangleleft$\OC{C}>T m(\OC{T} \OC{x})\{ return e$_{0}$; \}$\in$\OC{M}} &(MB-TClass)\\
\cline{2-2}
& \C{$mbody$(m<\OC{P}>,C)=\OC{x}.[\OC{P}/\OC{X}]\OC{e}$_{0}$} &\\
&&\\
& \C{class C$\triangleleft$D\{...\OC{M}...\}  \SS  m$\notin$\OC{M}} &(MB-TSuper)\\
\cline{2-2}
& \C{$mbody$(m<\OC{P}>,C)=$mbody$(m<\OC{P}>,D)} &\\
&&\\
& \C{class C$\triangleleft$D\{...\OC{N}\}  \SS  class E\{...\OC{M}\}$\in$\OC{N}} &(MB-NClass)\\
& \C{<\OC{X}$\triangleleft$\OC{C}>T m(\OC{T} \OC{x})\{ return e$_{0}$; \}$\in$\OC{M}} &\\
\cline{2-2}
& \C{$mbody$(m<\OC{P}>,C.E)=\OC{x}.[\OC{P}/\OC{X}]e$_{0}$} &\\
&&\\
& \C{class C$\triangleleft$D\{...\OC{N}\}  \SS  E$\notin$\OC{N}} &(MB-NSuper1)\\
\cline{2-2}
& \C{$mbody$(m<\OC{P}>,C.E)=$mbody$(m<\OC{P}>,D.E)} &\\
&&\\
& \C{class C$\triangleleft$D\{...\OC{N}\}  \SS  class E\{...\OC{M}\}$\in$\OC{N}  \SS  m$\notin$\OC{M}} &(MB-NSuper2)\\
\cline{2-2}
& \C{$mbody$(m<\OC{P}>,C.E)=$mbody$(m<\OC{P}>,D.E)} &\\
}

The reduction relation, written $e \longrightarrow e'$, means that $e$ reduces to $e'$ in one step.
\BR{
\BRH{Computation}
& \C{$fields$(A) = \OC{T} \OC{f}} &(R-Field)\\
\cline{2-2}
& \C{new A(\OC{e}).f$_{i}$ $\longrightarrow$ e$_{i}$} &\\
&&\\
& \C{$mbody$(m<\OC{P}>,A) = \OC{x}.e$_{0}$} &(R-Invk)\\ 
\cline{2-2}
& \C{new A(\OC{e}).m<\OC{P}>(\OC{d}) $\longrightarrow$ [\OC{d}/\OC{x}, new A(\OC{e})/this]e$_{0}$ }&\\
}

Because the reduction relation only applies at the outermost level of an expression, congruence rules are needed to evaluate subexpressions in different contexts. They are formalized like this:
\BR{
\BRH{Congruence}
& \C{e$_{0}$ $\longrightarrow$ e$_{0}$'} &(RC-Field)\\
\cline{2-2}
& \C{e$_{0}$.f $\longrightarrow$ e$_{0}$'.f} &\\
&&\\
& \C{e$_{0}$ $\longrightarrow$ e$_{0}$'} &(RC-Invk-Recv)\\
\cline{2-2}
& \C{e$_{0}$.m(\OC{e}) $\longrightarrow$ e$_{0}$'.m(\OC{e})} &\\
&&\\
& \C{e$_{i}$ $\longrightarrow$ e$_{i}$'} &(RC-Invk-Arg)\\
\cline{2-2}
& \C{e$_{0}$.m(...,e$_{i}$,...) $\longrightarrow$ e$_{0}$.m(...,e$_{i}$',...)} &\\
&&\\
& \C{e$_{i}$ $\longrightarrow$ e$_{i}$'} &(RC-New-Arg)\\
\cline{2-2}
& \C{new C(...,e$_{i}$,...) $\longrightarrow$ new C(...,e$_{i}$',...)} &\\
}

\subsection{Type safety}
The following definitions are quite simple but help to pin down exactly what type safety is. First an informal definition by Milner is given, and then a more precise, although still informal, definition is presented.\\

A \N{type system} is a collection of typing rules that each operate on the syntax of a language and assign a \N{type} to each construction. This could be expressions, variables, etc.\\

At \N{Compile time}, the type information present is referred to as static type information since it does not change during compilation, although some of it may be lost. Much research has gone into exploring the boundaries of what can be done with static type information. A typing based on static type information is usually a conservative guess at an upper bound of the type.\\

At \N{Run time}, dynamic information is available as expressions are evaluated and this provides the actual types instead of a conservative typing as at compile time.\\

A \N{Type error} is an error that occurs whenever a type is used in a way that it does not support. For instance, it is not allowed to compile code that assigns an instantiation of class $A$ to a variable of type $B$, given that no reasonable translation from $A$ to $B$ exists.\\

A language is \N{type safe} if and only if the compiler will not allow the use of types that would cause type errors at runtime. This is why Java captures many bugs at compile time that appear as sporadic crashes in languages that are inherently type unsafe, such as JavaScript, Python, and many others. Sometimes type safety is considered a gradient property, such that a language or even a program can be said to have a high degree of type safety but not absolute type safety. This thesis only deals with the absolute form of type safety.\\

Most type errors can likely be caught by test cases at runtime, but it is very difficult to test every possible program state. In a type-safe language these errors are instead caught at compilation, and thus an entire category of errors disappears from compiled code, making the property of type safety highly desirable.\\

The following colloquial definition was offered by Robin Milner to describe type safety:
\Q{Well-typed programs cannot ``go wrong''.\cite{milner}}

\subsubsection{Formalizing type safety}
When formalizing type safety, it is necessary to know exactly how to model that some code does not ``go wrong''. The canonical definition of type safety has become the following one by Wright and Felleisen\cite{wright}, which is used by many others, among them Pierce\cite{pierce}. Note that this definition leaves out environments and is thus informal. A formal definition and the corresponding Coq code is given in section 4.2.\\
\E{
\text{Ty}& \text{pe safety} = \text{Preservation} + \text{Progress}\\
\\
\text{Progress: }& t : T \Rightarrow t \text{ value} \lor \exists \S t' \text{ st. } t \longrightarrow t'\\
\text{Preservation: }& (t : T \land \exists \S t' \text{ st. } t \longrightarrow t') \Rightarrow t' : T
}\\

Progress means that if a term $t$ correctly resolves to a type $T$, then $t$ is either in the form of a value and cannot be evaluated further, or there exists another term $t'$ such that $t$ evaluates to $t'$. Similarly, preservation means that if a term $t$ of type $T$ evaluates to another term $t'$ in one step, then $t'$ is also of type $T$.\\

The reason for the parentheses in the definition of preservation is the following order of binding strength of the logical operators:\\
$\lnot$ binds stronger than $\Rightarrow$, which binds stronger than $\land$, which binds stronger than $\lor$.\cite{skalka}\\

Using this formalization of type safety together with a small-step semantics where computation is on a step-by-step basis, a proof of type safety becomes more manageable and modular.\\

\newpage
\subsection{The Coq Proof Assistant}
Here is the definition taken from the Coq website:
\Q{Coq is a formal proof management system. It provides a formal language to write mathematical definitions, executable algorithms and theorems together with an environment for semi-interactive development of machine-checked proofs.\cite{frontCoq}}

Coq is written in OCaml (with a bit of C) and uses a language called Gallina. The project started in 1984 as a separate program written by Thierry Coquand and Gérard Huet and was extended in 1991 by Christine Paulin. About forty people have worked on the project and it is distributed under the GNU Lesser General Public License Version 2.1 (LGPL).\cite{aboutCoq}\\

Coq, being a proof assistant, can be conservative, which sometimes causes proofs to be verbose, although less so with its procedural style than with the declarative counterparts discussed later. However, there is also a certain freedom in showing proofs in their entirety. Proofs by pen and paper are no stronger than the people who have reviewed them, and many proofs published in journals contain obvious mistakes, not all of which are simple typos. Some of the mistakes are easily fixed by someone competent in the field, but the mere fact that such errors make it through peer review makes mathematical rigor and parsimony highly desirable. The entire category of errors caused by typos and unjustified jumps in logic disappears when using a proof assistant such as Coq. There is a trade-off, however: the Coq core code must then be trusted instead, but it is quite small and does not change for each new proof.\\

\subsubsection{Coq introduction}
As an introduction to Coq, the $type$ construction from the proof is presented first. Then a lemma is given as an example of a logical statement along with the corresponding Coq code, and finally some insight into their equivalence is discussed.\\

The compound $type$ in the Coq proof is a definition that subsumes family names and qualified class names. These compound types will be discussed in detail in section 4.5; for now it is enough to know that they are a container for all possible types.

$type$ is defined by specifying constructors that, given some input, will create an appropriate $type$. Two such constructors are \N{t\_C C}, meaning a top class $C$, and \N{t\_CdotC C D}, meaning a nested class $D$ in a top class $C$.

Both the top class and nested class constructors have the property of being qualified class names. To check if a given type is a qualified class name, \N{typeis\_qcname} takes one $type$ as an argument and checks if the structure of the given $type$ is either of the two constructors described, which are the only kinds of qualified class names. Three other constructors exist for $type$ but they are not important for the purpose of this section.\\

Finally, \C{++} denotes concatenation of lists in both Coq and the following examples. The rest of the syntax should be recognizable from the FJ and .FJ sections.\\

Expressing a lemma with traditional predicate logic:
\E{
&typeis\_qcname \S u \S \land \\ 
&E \vdash u <: v \S \land \\
&fields(v) = fsv \Rightarrow \\
&\exists fs : \S fields(u) = (fsv \C{++} fs)
}

Expressing the same lemma in Coq:
\BB{
forall E u v fsv, \\
\hspace{2.5 mm} typeis\_qcname u $\rightarrow$ \\
\hspace{2.5 mm} sub E u v $\rightarrow$ \\
\hspace{2.5 mm} fields v fsv $\rightarrow$ \\
\hspace{2.5 mm} exists fs, \\
\hspace{5 mm} fields u (fsv ++ fs).\\
}

Paraphrasing the \N{typeis\_qcname} rule slightly for the purpose of this example, it can be written as
\E{\exists C : \S u = t\_C \S C \lor \exists C,D : \S u = t\_CdotC \S C \S D}
Writing this in the appropriate syntax as the definition of \N{typeis\_qcname} in both lemmas shows how to destruct intermediate lemmas and rules.\\

Again, with predicate logic:
\E{
&\exists C : \S u = t\_C \S C \lor \exists C,D : \S u = t\_CdotC \S C \S D \S \land \\
&E \vdash u <: v \S \land \\
&fields(v) = fsv \Rightarrow \\
&\exists fs : \S fields(u) = (fsv \C{++} fs)
}

And the corresponding Coq code:
\BB{
forall E u v fsv, \\
\hspace{2.5 mm} (exists C, u = t\_C C $\lor$ \\
\hspace{2.5 mm} exists C D, u = t\_CdotC C D) $\rightarrow$ \\
\hspace{2.5 mm} sub E u v $\rightarrow$ \\
\hspace{2.5 mm} fields v fsv $\rightarrow$ \\
\hspace{2.5 mm} exists fs, \\
\hspace{5 mm} fields u (fsv ++ fs). \\
}

This comparison shows how to translate back and forth from predicate logic to Coq syntax but also shows how similar the two are.\\

As explained earlier, a single right arrow has been used instead of the dash-greater-than syntax that is traditional in Coq, although the single right arrow can be used there as well. Likewise, the logical AND and OR symbols have been used instead of slashes and backslashes. The symbols for forall and exists, however, are not used in the Coq code examples and thus only appear in logical contexts.\\

\subsubsection{Proof assistants in general}
There are different kinds of proof assistants: some are declarative and some procedural. Coq is of the procedural kind, in the sense that the code must be evaluated step by step to see what the next piece of code does. This can be inferred to some degree with proper conventions such as naming and indentation, and the procedural style makes for much smaller proofs than its declarative counterpart. Here is an example from H. Geuvers\cite{geuvers} of a theorem stating that doubling a natural number and then dividing it by two returns that very same number.\\

Procedural style (Coq):
\BB{
Theorem double\_div2: forall (n : nat), div2 (double n) = n.\\
simple induction n; auto with arith.\\
intros n0 H.\\
rewrite double\_S; pattern n0 at 2; rewrite $\leftarrow$ H; simpl; auto.\\
Qed.\\
}

Declarative style (Corbineau's language):
\BB{
Theorem double\_div2: forall (n : nat), div2 (double n) = n.\\
proof.\\
assume n:nat.\\
per induction on n.\\
suppose it is 0.\\
thus thesis.\\
suppose it is (S m) and IH:thesis for m.\\
have (div2 (double (S m)) = div2 (S (S (double m)))).\\
\textasciitilde = (S (div2 (double m))).\\
thus \textasciitilde = (S m) by IH.\\
end induction.\\
end proof.\\
Qed.\\
}

Clearly there are trade-offs between the two styles. For someone reading the code to understand it in great detail, the declarative style seems like the best way to represent a proof. If one is more interested in the result and the structure of the proof, the procedural style provides a shorter and potentially faster proof. Faster in this context means only the time it takes to write the proof, which, along with the time it takes to evaluate the proof, will not be discussed further. Moreover, the example above is not a proof over a complex structure such as a programming language but over natural numbers, so the induction proof is much shorter with many fewer cases, which is another reason for this thesis to use the procedural style of proving.\\

\section{Related work}
Much work has been done in the type safety field using Coq, and a substantial amount of work is going into formalizing many mathematical theorems using Coq.\cite{ccorn}\cite{coqmath} As explained in section 2.5 about the proof assistant, there are other proof assistants out there. This is indicative of a growing and worthwhile field, since proof assistants produce extremely rigorous proofs for complex problems.\\

As far as the author of this thesis knows, there is no other proof for .FJ or any other Java implementation with Family Polymorphism that is checked mechanically using a proof assistant.\\

Below are two examples of the use of Coq. The first is of both historical and mathematical value; the second, while also not very close to the topic of this thesis, deals with type safety in ML.

The next three papers are closer to the topic of this thesis and they all deal with type safety in Java. The papers on Featherweight Java and Lightweight Family Polymorphism are also closely related to this thesis but have already been discussed in detail and as such there is no point in describing them again.\\

\subsection{Four Color Theorem}
The Four Color Theorem states that for any plane separated into regions, only four colors are necessary to ensure that no two adjacent regions are of the same color. It was proposed in 1852 by Francis Guthrie and was not proven until 1976 by Appel and Haken\cite{4color}. A part of this proof was a huge case analysis that was checked by a computer; it was the first time a major theorem was proven with the help of a computer rather than by mathematicians with pen and paper alone. Many flaws were later found in the proof and much effort went into fixing it. One such research effort to formally prove the theorem was that of Gonthier in 2005\cite{gonthier}, who meticulously defined the problem in Coq and proved it using over 60,000 lines of code. That proof is based on another proof from 1994 by Robertson, Sanders, Seymour and Thomas. This thesis will not go into the proof itself as it is somewhat orthogonal to proving type safety, but this example of a longstanding theorem being proven in Coq makes the case for Coq being of great value in mathematics and formal modeling.\\

\subsection{ML type safety}
Another example, this one by Catherine Dubois\cite{ML} and closer to the topic of this thesis, proves a formalization of ML called W type safe, but using a big-step semantics instead of the small-step semantics used in this thesis. The W calculus is defined in Coq and a proof is discussed. Dubois uses the same formalization of type safety as this thesis, namely Milner's definition presented in section 2.4 and later formally defined in section 4.2.\\

\subsection{Adding Wildcards to Java}
The paper ``Adding Wildcards to the Java Programming Language'' by Torgersen, Hansen, Ernst, Ahe, Bracha, and Gafter from 2004\cite{addingWildcards} discusses the issues in adding Wildcards, which were a new language construction at the time. The point of Wildcards was:\\
\Q{[...] to increase the flexibility of object oriented type systems with parameterized classes.\cite{addingWildcards}}
The paper was a collaboration between Aarhus University and Sun Microsystems, and the point was to discuss the implementation of Wildcards in the new version of Java that would be extended with parametric polymorphism, also known as generics. Some implementational issues are discussed and the authors argue why the chosen form of Wildcards is better suited for expressing unspecified type arguments.\\

\subsection{GJ: Extending Java with type parameters}
The paper ``GJ: Extending the Java TM programming language with type parameters'' by Bracha, Odersky, Stoutamire, and Wadler from 1998\cite{GJ} discusses the issues in adding type parameters to Java. The semantics of GJ is given by a translation that erases type arguments, replaces type variables with their bounds, adds casts, and inserts new methods to facilitate overriding. Erasure as part of GJ is also discussed in the final part of the FJ paper, which is not covered in this thesis. The paper gives examples of how its implementation of type parameters works and discusses the problems and choices made in doing so. Throughout the paper, static safety of the type parameters is discussed, and their construction is deemed type safe.\\

\subsection{Adding genericity to Java}
The paper ``Making the future safe for the past: Adding Genericity to the Java TM Programming Language'' by Bracha, Odersky, Stoutamire, and Wadler from 1998\cite{addingGenericity} also discusses the issues in adding generic types and methods to Java in the form of GJ. It discusses many of the same issues as the other Bracha et al. paper, but also compares GJ to another formalization called NextGen, which is a superset of GJ just like GJ is a superset of Java.\\

\section{Proving type safety}
The point of this thesis is to formally prove .FJ type safe using the Coq Proof Assistant. The following sections discuss some of the more important choices that have been made in implementing such a proof, along with some pitfalls of doing so. First, however, comes a short introduction to the overall structure of the proof and to how exactly type safety is modeled in Coq.\\

\subsection{Proof modules}
The proof presented in this thesis is separated into three main files describing the formalization of .FJ and the type safety proof. A fourth file contains example code, which is interesting because it shows how .FJ can be used in practice. These first four files contain very little of the original code and are the real contribution of this thesis. There are also three files containing auxiliary functionality; they are mostly unmodified. Finally, there is a short description of compilation.\\

Understanding the structure of the first three files greatly helps to understand the proof and its basic structure along with the original FJ proof.\\

\subsubsection{Main files}
FJ\_Definitions.v contains most of the rules and definitions. This is where the syntax is defined along with the environments and declarations. The parameters for the class table \N{CT}, \N{this}, and \N{Object} are declared, and auxiliary functions are defined to turn a program into environments that can be clearly reasoned about. It also contains a fair amount of intermediate lemmas about the rules. At the end of the file, type safety is defined.\\

FJ\_Facts.v contains facts about the rules defined in FJ\_Definitions.v and intermediate lemmas. Most of the file is related to proving that type variables behave correctly.\\

FJ\_Properties.v is where it all comes together for the major theorems of progress and preservation. At the end of the file, it is checked that the definition of type safety from FJ\_Definitions.v has indeed been proven.\\

\subsubsection{Example file}
FJ\_Example.v models an example given in the FJ paper, to show the use of classes, fields, instantiation, and method invocation. The changes to the syntax to accommodate generics and Family Polymorphism have required substantial rewrites in the formalization of the example and its proofs but the overall structure closely resembles the original code.\\

\subsubsection{Auxiliary files}
AdditionalTactics.v contains some tactics that are borrowed from the POPLmark Wiki\cite{popl} and Cocorico Wiki\cite{cocorico}. The proofs could have been written without these additional tactics, but they simplify some proofs and have mainly made it into this thesis because the original proofs use the file extensively. The file is used for delineating subgoals of proofs and not much more. The reference in the file to the Cocorico Wiki is deprecated but has been left in its original state, so the file is completely unaltered. The currently working link is in the references.\\

Atom.v was also borrowed from elsewhere. It was written by Arthur Chargueraud and Brian Aydemir and describes a way of getting unique objects from a finite collection. The atoms are modeled on top of natural numbers and equality is handled using these numbers. It is a simple way to make new types, and this file was also not altered. The file exists in many versions online.\\

Metatheory.v has some lemmas on equality over atoms but mostly describes the basic structure of environments and how to do lookup on them with the functions \N{binds} and \N{get}. This will be described in more detail in section 4.7. The file also implements functionality for operating on lists of pairs, such as extracting keys and images, which is used by the environments but also extensively on intermediate structures in the lemmas and rules. Also in this file are some facts that facilitate the use of \N{binds} and \N{get} in the proofs throughout the rest of the files. This file was not altered.\\

The order of compilation is dictated by the \C{Require Import} statements in the files, which in turn are dictated by the order of the definitions. The three auxiliary files must be compiled first, in the order in which they appear above; then the three main files, in the order in which they appear; and finally the example file. A simple makefile with a few lines calling \C{coqc} on all the files should do the trick regardless of operating system.\\
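As an illustrative sketch of such a makefile (recipe lines in a real makefile must be indented with tabs), the compilation order described above amounts to:
\BB{
all:\\
\hspace{5 mm} coqc AdditionalTactics.v\\
\hspace{5 mm} coqc Atom.v\\
\hspace{5 mm} coqc Metatheory.v\\
\hspace{5 mm} coqc FJ\_Definitions.v\\
\hspace{5 mm} coqc FJ\_Facts.v\\
\hspace{5 mm} coqc FJ\_Properties.v\\
\hspace{5 mm} coqc FJ\_Example.v\\
}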

\subsection{Expressing type safety in Coq}
The previous definition of type safety based on Milner's definition was informal, and this section needs a formal definition in order to model it in Coq. This is done by including environments in the definition, as done below. $\Delta$ is the bound environment and $\Gamma$ is the type environment.\\
\E{
\text{Progress: }& \Delta ; \Gamma \vdash t : T \Rightarrow t \text{ value} \lor \exists \S t' \text{ st. } t \longrightarrow t'\\
\text{Preservation: }& (\Delta ; \Gamma \vdash t : T \land \exists \S t' \text{ st. } t \longrightarrow t') \Rightarrow \Delta ; \Gamma \vdash t' : T
}

In FJ\_Definitions.v the following two parameters are defined, which together imply type safety as explained above.
\BB{
Module Type Safety (H: Hyps).\\
\hspace{2.5 mm} Parameter preservation: forall E tE e e' T A,\\
\hspace{5 mm} typing E tE e T A $\rightarrow$\\
\hspace{5 mm} eval e e' $\rightarrow$\\
\hspace{5 mm} wide\_typing E tE e' T A.\\
\\
\hspace{2.5 mm} Parameter progress: forall e ae A,\\
\hspace{5 mm} typing nil nil e ae A $\rightarrow$\\
\hspace{5 mm} value e $\lor$ (exists e', eval e e').\\
End Safety.\\
}

In FJ\_Properties.v, another module is created and proven, and at the very end of the file, it is checked that the previous definition of type safety from FJ\_Definitions.v has indeed been proven. This is done by the following line:
\BB{
Module SafetyProof : Safety := Properties.\\
}

By proving the definitions of preservation and progress above correct in FJ\_Properties.v, this final module will check that the definitions match. Since they both turn out to be equivalent to the original formalization and are also proven correct, type safety is proven.\\

\subsection{Separate syntax and semantics}
There was more than one iteration of formalizing the syntax and rules of .FJ. In the first iteration, several compromises were made with the basic constructs which made writing the auxiliary functionality easier. Because the syntax also represented the semantics, compromises had to be made in the typing and evaluation rules.\\

The compromises can be avoided by strictly separating the syntax and the semantics. All syntactic constructs are called declarations (including the class table as it is commonly known, which is called cdecls) and the semantic constructs are called environments. This means that auxiliary functions are needed to handle the conversion from declarations to environments; these are discussed below. The conversion is only done once, when the typing is initiated at the top level, which makes it very manageable. The separation is not necessary but makes for a much cleaner formalization and proof.\\

Continuing with the pragmatic approach of separating all instances of the syntax from instances of the semantics, the declarations were translated to environments in a somewhat verbose manner. 
\BB{
d ::= declaration\\ 
e ::= environment\\
f ::= declaration $\rightarrow$ environment\\
}

\noindent{The shortest / simplest way would be to write}
\BB{
... (f  d) ...\\
} 

\noindent{when an environment was needed, but following the pragmatic approach, writing}
\BB{
e = f d $\rightarrow$\\
... e ...\\
}
yields a stronger separation between syntax and semantics, never using a syntactic construct in a semantic context. Another property of the separation is that the expression $e$ above now matches many more cases, thus making the proofs easier.\\

\subsubsection{Creating the environments}
As mentioned in section 2.2, a program is a set of class definitions and an expression. The class definitions, or declarations, are considered the syntactic constructions that need to be translated into environments. This is done by the \N{make\_cenv} relation that, given a list of class declarations, will produce a class environment, or class table, called CT throughout the thesis.\\

The definitions that do the actual translation are listed below, and whenever lists are used, the function \N{List.map} from the standard Coq library is used. It works like any other map function, applying the first argument to all elements of the second argument, which should be a list.
\BB{
Definition fdecl2fsin (fs:fdecl) : fsin :=\\
\hspace{2.5 mm} match fs with (T,fn) $\Rightarrow$ (fn,T) end.\\
\\
Definition kdecl2kenv (kd:kdecl) : kenv :=\\
\hspace{2.5 mm} match kd with (C,fs) $\Rightarrow$ (C,List.map fdecl2fsin fs) end.\\
\\
Definition mdecl2msin (md:mdecl) : msin :=\\
\hspace{2.5 mm} match md with (E,T,m,tE,e) $\Rightarrow$ (m,(E,T,tE,e)) end.\\
\\
Definition ncdecl2ncsin (nd:ncdecl) : ncsin :=\\
\hspace{2.5 mm} match nd with (C,fs,k,ms) $\Rightarrow$\\
\hspace{5 mm} (C,(List.map fdecl2fsin fs, kdecl2kenv k, \\
\hspace{5 mm}     List.map mdecl2msin ms)) end.\\
\\
Definition cdecl2csin (cd:tcdecl) : csin :=\\
\hspace{2.5 mm} match cd with  (C,D,fs,k,ms,ns) $\Rightarrow$\\
\hspace{5 mm} (C, (D, List.map fdecl2fsin fs, kdecl2kenv k,\\
\hspace{5 mm}  List.map mdecl2msin ms, List.map ncdecl2ncsin ns)) end.\\
\\
Definition make\_cenv (cds:cdecls) : cenv :=\\
\hspace{2.5 mm} List.map cdecl2csin cds.\\
}

This is what facilitates the separation between syntax and semantics. The two can be mixed in the proof, however, if the gains of doing so clearly outweigh the cost of straying from the clear split.\\

\N{fdecl2fsin} turns \N{fdecl} into \N{fsin} by flipping the key and image. \N{kdecl2kenv} turns \N{kdecl} into \N{kenv} by calling \N{List.map} with \N{fdecl2fsin} on the field. \N{mdecl2msin} turns \N{mdecl} into \N{msin} by making a pair with the method name as the key and the rest as the image. This way, a method definition is found by looking up atomic method names in the environment. \N{ncdecl2ncsin} turns \N{ncdecl} into \N{ncsin} by making a pair as before and calling \N{List.map} with \N{fdecl2fsin} on the fields, \N{kdecl2kenv} on the constructor, and \N{List.map} with \N{mdecl2msin} on the methods. \N{cdecl2csin} turns \N{cdecl} into \N{csin}, again by making a pair to do lookup on the atomic top class names, and calling the same functions as before along with \N{List.map} with \N{ncdecl2ncsin} on the nested classes. \N{make\_cenv} calls \N{List.map} with \N{cdecl2csin} on cdecls, which is the list of class declarations that along with an expression makes up a program, and this recursively applies all the previous definitions to create the entire class table.\\ 

Alternatively, the above could have been written as Fixpoints operating on lists when appropriate, to eliminate the use of \N{List.map}, but that would require more code and be less transparent. The above clearly expresses the intent.\\

The definitions can be seen in action in the example file where an actual list of class declarations is transformed into a class environment. Given a definition of a list of field declarations \N{pair\_flds'} and a list of method declarations \N{pair\_mths'}, a list of class declarations called \N{class\_decls} is defined as follows:\\

\noindent{\N{(a,Object,nil,(a,nil),nil,nil),\\ 
(b,Object,nil,(b,nil),nil,nil),\\
(pair,Object,pair\_flds',(pair,pair\_flds'),pair\_mths',nil)}}\\

where comma denotes list concatenation as explained in section 2.2.\\ 

The class table \N{CT} is fixed to \N{make\_cenv class\_decls}, which is what turns the definition of the program into a semantic representation that is easier to work with. The declarations are only used to construct the class declaration list, and the environments are only used to reason semantically about type safety. Looking over the example code can be immensely helpful in understanding this section and how .FJ works in general.\\
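In essence, this fixing amounts to a single line; the following is only a sketch, and the example file may phrase it differently since \N{CT} is declared as a parameter:
\BB{
Definition CT : cenv := make\_cenv class\_decls.\\
}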

\subsection{Well-formedness of the type environment}
The type environment $\Gamma$ is only checked in the typing rule $t\_var$. This means that as long as the environment is not used to look up some variable, it does not matter if it is not well-formed. For the type environment to be well-formed, it must not contain duplicate keys, which could make a lookup ambiguous.\\
A different approach could have been to put another such check in the rule $t\_invk$, which would mean that duplicate keys would never be put into the type environment, thus ensuring continual well-formedness. This, however, is not needed to show type safety, and a type environment that is not used can be disregarded even if it is not well-formed.\\

\subsection{Family names, fully qualified class names, and types}
In the .FJ paper, family names are primarily denoted by $P$, fully qualified class names by $A$, and types by $T$. Additionally they are modeled as separate constructs in the following way:
\BB{
P ::= C $|$ X\\
A ::= C $|$ C.C\\
T ::= P $|$ P.C $|$ .C\\
}

The first two are subsets of the third, which encourages modeling them as one construction \N{type} and supplying the predicate definitions \N{typeis\_qcname} and \N{typeis\_famname} that specify whether a given type is a \N{fully qualified class name} or a \N{family name}. When the constructions are consolidated, types need to be expressed only in terms of atoms since \C{P} is no longer available.
\BB{
T ::= C $|$ X $|$ C.C $|$ X.C $|$ .C\\
}

This also exposes the admittedly obvious structure of the syntax as being non-circular and therefore finite. This results in shorter and simpler proofs, which in turn makes for better readability.\\

The new definitions \N{typeis\_qcname} and \N{typeis\_famname} check that a given type has the correct form of a \N{fully qualified class name} and a \N{family name}, respectively. In Coq, this is done by checking the constructors of the types. For \N{typeis\_qcname}, these are \N{t\_C} and \N{t\_CdotC}, modeling a top class name and a nested class name. For \N{typeis\_famname}, these are \N{t\_C} and \N{t\_X}, modeling a top class name and a top class variable name. This implies that a top class name is both a \N{typeis\_qcname} and a \N{typeis\_famname}, as is the case with the original syntax.\\

The additional definitions \N{typeis\_nested} and \N{typeis\_top} proved helpful to write and prove intermediate lemmas. The appropriate constructors for a nested type are \N{t\_CdotC C D}, \N{t\_XdotC X D}, and \N{t\_dotC C}. For a top type they are \N{t\_C C} and \N{t\_X X}. The five constructors mentioned are all the constructors of the type definition modeling the box above.\\
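To make the five constructors concrete, a minimal sketch of the consolidated definition could look as follows, assuming both class names and type variable names are atoms; the actual declaration in FJ\_Definitions.v may differ in details:
\BB{
Inductive type : Set :=\\
\hspace{2.5 mm} $|$ t\_C : atom $\rightarrow$ type\\
\hspace{2.5 mm} $|$ t\_X : atom $\rightarrow$ type\\
\hspace{2.5 mm} $|$ t\_CdotC : atom $\rightarrow$ atom $\rightarrow$ type\\
\hspace{2.5 mm} $|$ t\_XdotC : atom $\rightarrow$ atom $\rightarrow$ type\\
\hspace{2.5 mm} $|$ t\_dotC : atom $\rightarrow$ type.\\
}
The five constructors correspond, in order, to the forms \C{C}, \C{X}, \C{C.C}, \C{X.C}, and \C{.C} from the box above.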

\subsection{Omitting the values construct}
In the .FJ syntax, values are explicitly modeled as 
\BB{
v ::= new A(\OC{v})\\
}

but they are only used to denote whether a term is in the form of a value, which cannot be evaluated further; more specifically, a value is a term that consists only of \N{e\_new} expressions. This means that such a construction is not necessary. Using a predicate definition that checks whether the term is of the following form reduces clutter in the proofs and simplifies the syntax. Such a definition acts on input of the form
\BB{
value (e\_new T es)\\
}

where \N{T} must be a qualified class name and all the expressions in the list of expressions \N{es} must themselves be values.\\
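One way to express such a predicate is as an inductive definition with a single constructor; the following is only a sketch, and the actual definition in the proof may be stated differently:
\BB{
Inductive value : exp $\rightarrow$ Prop :=\\
\hspace{2.5 mm} $|$ value\_new : forall T es,\\
\hspace{7.5 mm} typeis\_qcname T $\rightarrow$\\
\hspace{7.5 mm} (forall e, In e es $\rightarrow$ value e) $\rightarrow$\\
\hspace{7.5 mm} value (e\_new T es).\\
}
Note how the second premise requires every expression in \C{es} to be a value itself, matching the description above.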

\subsection{Structure of the class table}
The .FJ paper defines the class table as a mapping from fully qualified class names to top level or nested classes. This thesis models the class table as a class environment, which is a collection of top level classes. Each of these contains a list of nested classes, in accordance with the .FJ formalization.\\
Nested classes are not allowed to appear directly in the class table but only indirectly through their top level class, which means that the lookup functionality can be written uniformly to work for all collections in the language, using the \N{binds} definition, which in turn uses the recursively defined \N{get}, as explained below.\\

The three definitions below use "option types". First, a quick reminder of what it means to be a total or partial function in mathematics: total functions map all elements of their domain to elements of their co-domain. Partial functions do not, which means that some elements of the function's domain are not mapped to its co-domain, and thus not all inputs produce an output.\\

The option types act as containers for values and are used in definitions that are not total. Coq only allows total definitions, so option types are used to simulate partial ones: a result is either \N{Some value} or \N{None}. The first models an actual return value and the second models the absence of one.\\

\N{get} takes as arguments an atom \N{x} and a list of pairs of atoms and values \N{(y,v)}. It returns an option type of \N{v}. If \N{x} is found as the key in one of the pairs \N{(y,v)}, in other words \N{x=y} for some pair, then \N{Some v} is returned. Otherwise \N{None} is returned.\\

\N{binds} simply builds on this definition. Given an atom \N{x}, a value \N{v}, and a list of pairs of atoms and values \N{E}, it checks whether \N{get x E} is structurally equal to \N{Some v}.\\
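A minimal sketch of the two definitions follows, assuming they live in a section where \N{A} is a type variable and \N{eq\_atom\_dec} is a decidable equality on atoms; these names are assumptions for illustration.
\BB{
Fixpoint get (x:atom) (E:list (atom*A)) : option A :=\\
\hspace{2.5 mm} match E with\\
\hspace{5 mm} $|$ nil $\Rightarrow$ None\\
\hspace{5 mm} $|$ (y,v)::E' $\Rightarrow$ if eq\_atom\_dec x y\\
\hspace{10 mm} then Some v else get x E'\\
\hspace{2.5 mm} end.\\
\\
Definition binds (x:atom) (v:A) (E:list (atom*A)) :=\\
\hspace{2.5 mm} get x E = Some v.\\
}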

\N{binds} can then look up class names in the \N{CT}. \N{binds C CD CT} means that \N{C} is bound to its definition \N{CD} in the class table \N{CT}. \N{C} would be a simple class name while \N{CD} would have the following structure:\\
(simple class name, field declarations, constructor, method declarations, nested classes); usually written with variables like this: \N{binds C (D,fs,c,ms,ns) CT}.\\

Looking up a nested class is done by looking into the nested class list \N{ns} of the top class. This is done in a very similar fashion, but the structure of the definition of a nested class is a little different. \N{binds E ED ns} means that \N{E} is bound to its definitions \N{ED} in its containing top class' list of nested classes \N{ns}. \N{E} would again be a simple class name while \N{ED} would have the following structure:\\
(field declarations, constructor, method declarations); usually written with variables like this: \N{binds E (fs',c',ms') ns}. The constructors of the top level class and the nested class are not the same so the nested constructor is denoted with a prime, same as its methods.\\

\subsection{Quantifying with forall vs. exists}
There is a big difference between quantifying with $\forall$ or $\exists$ over variables in proofs, but in a complex context it can be non-obvious which quantifier to use. When quantifying over variables in the assumptions, $\forall$ should always be used if the variable is sufficiently constrained by the assumptions. Either the assumptions do not constrain the variables enough to prove the goal, in which case adding further assumptions should be considered, or they are strong enough to allow quantification with $\forall$. Here is a very simple example, where the predicates \N{odd} and \N{even} should be self-explanatory:
\E{\exists x : odd \S x \Rightarrow \lnot even \S x}
The assumption \N{odd} clearly constrains \N{x} enough to prove the goal, so there is no reason to use $\exists$ as the quantifier. Many examples that quantify with $\exists$ can be written, none of which are interesting. It simply means that instead of constraining variables with assumptions that can be reasoned about, they are constrained in the quantification, which is poor practice.\\
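In Coq syntax, the recommended $\forall$-quantified version of the statement would simply read as follows, assuming \N{odd} and \N{even} are defined as predicates:
\BB{
Lemma odd\_not\_even:\\
\hspace{2.5 mm} forall x, odd x $\rightarrow$ $\lnot$ even x.\\
}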

When the quantifications happen in the goal, the situation is much the same as in the assumptions, but using $\exists$ is sometimes reasonable. Here is another simple example to show the difference, where the predicate $prime$ should again be self-explanatory:
\E{\forall x : prime \S x \Rightarrow \forall y : \lnot(y|x) \lor y = x \lor y = 1}
\E{\forall x : prime \S x \Rightarrow \exists y : \lnot(y|x) \lor y = x \lor y = 1}
Clearly the first statement is the stronger one and the second is trivially true. Sometimes the predicates in the goal are not strong enough to warrant quantification with $\forall$; then it is completely reasonable to use $\exists$. Moreover, one might not want to use $\forall$ in the goal even though it is provable, simply because it is an intermediate statement in a larger proof and does not need to be as strong as possible. This is the true difference between quantifying in the assumptions and in the goal: one must gauge how strong the lemmas need to be for the task at hand.\\

\subsection{Dividing rules based on their formal parameters}
For some rules, different parts have different formal parameters. An example of this is \N{method} in FJ\_Definitions.v, which has cases for methods in the given context and in its super. To handle these different cases, one can either split the rule into several smaller rules based on the formal parameters, or add extra parameters so that every case of the rule takes some information it does not use. The latter is chosen for this thesis because it makes for more uniform proofs. Splitting up the rules would also propagate the split into all lemmas relying on the rule, regardless of whether the separation makes sense in that context. This way of dealing with differing formal parameters causes very little extra work for the applying lemma, as it only has to come up with the extra terms by quantifying over them and passing them along, without any constraints. This could be a source of code bloat and poor readability, but with proper naming in the applying lemmas it should not be an issue.\\

Taking the converse approach, there would be no such code bloat, but one might instead be left with disproportionately many smaller proofs, the number of which would completely negate the positive effect of removing the passing of unused arguments. However, in another proof context, with more or fewer constructions and shorter or longer proofs, it may prove advantageous to choose a different approach than the one taken in this thesis.\\

\subsection{Extract assumption from ok\_meth}
According to the .FJ paper, there is supposed to be an assumption in \N{t\_meth} of the kind $\Delta = \overline{X} <: \overline{C}$, but this property should always hold for a well-formed bound environment. Consequently, it has been extracted as a separate lemma. It is expressed in the following way:
\BB{
Fact subEE:\\
\hspace{2.5 mm} forall \S E, \\
\hspace{5 mm} ok E $\rightarrow$ \\
\hspace{5 mm} forall\_env (sub' E) E. \\
}

The lemma is proven at the end of FJ\_Definitions and requires some ancillary lemmas. One could argue that \N{ok E} should be inserted into the rule from which the lemma was extracted, but since \N{ok\_cenv} has been run on the program, any environment is guaranteed not to contain duplicates.\\

\N{sub'} is defined in terms of \N{sub}, which closely models the rule from .FJ, but with slightly different arguments.
\BB{
Definition sub' (E:benv) (X:tvname) (C:scname) : Prop :=\\
\hspace{2.5 mm} sub E (t\_X X) (t\_C C).\\
}

This means that \N{sub'} will always use the \N{s\_var} or \N{s\_refl} case of the \N{sub} rule. The other significant definition in \N{subEE} is \N{forall\_env} which is defined below.
\BB{
Variable P: atom $\rightarrow$ A $\rightarrow$ Prop.\\
Inductive forall\_env: list (atom * A) $\rightarrow$ Prop :=\\
\hspace{2.5 mm} $|$ fa\_nil: forall\_env nil\\
\hspace{2.5 mm} $|$ fa\_cons: forall E x v,\\
\hspace{5 mm} forall\_env E $\rightarrow$\\
\hspace{5 mm} P x v $\rightarrow$\\
\hspace{5 mm} forall\_env ((x,v)::E).\\
}

Given some predicate definition \N{P}, if \N{P} holds for all pairs in the list $E$, then $forall\_env$ holds.\\

\subsection{Uniqueness lemmas}
Some of the definitions in the proof have the property that, given the same first arguments, the last arguments must be the same as well. Five lemmas are described below that each deal with this property for a different definition. These properties are immensely helpful when, for instance, two methods are bound in the assumptions of a proof and the method names and receivers are the same. The remaining arguments to the \N{method} relations must then be equal, since no two methods of the same name can be available in the same class, whether by direct declaration or by inheritance from a super. This allows other assumptions binding the same arguments to be rewritten, and such intermediate lemmas often prove necessary.\\

\N{binds\_fun} says \N{binds x a E $\rightarrow$ binds x b E $\rightarrow$ a = b}. This means that if $x$ is bound to $a$ and $b$ in $E$, then $a$ and $b$ must be the same. This property comes from the environment being $ok$.\\

\N{fields\_fun} says \N{fields t fs $\rightarrow$ fields t fs' $\rightarrow$ fs = fs'}. This means that there is only one set of fields for a class.\\

\N{method\_fun} says \N{method m T env t tenv e $\rightarrow$ method m T env' t' tenv' e' $\rightarrow$ env = env' $\land$ t = t' $\land$ tenv = tenv' $\land$ e = e'}. This means that there is only one method declaration for a method name in a class and its subclasses.\\

\N{extends\_fun} says \N{extends C D $\rightarrow$ extends C D' $\rightarrow$ D = D'}. This means that a class' super is unique.\\

\N{eval\_at\_fun} says \N{eval\_at (t,A) T $\rightarrow$ eval\_at (t,A) T' $\rightarrow$ T = T'}. This means that if $t$ evaluates to $T$ and $T'$ in $A$, then $T$ and $T'$ must be equal.\\
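As an illustration, the first of these lemmas might be stated in Coq as below; whether the $ok$ assumption appears explicitly in the statement or is supplied by the surrounding module is a detail of the actual code.
\BB{
Lemma binds\_fun:\\
\hspace{2.5 mm} forall x a b E,\\
\hspace{5 mm} ok E $\rightarrow$\\
\hspace{5 mm} binds x a E $\rightarrow$\\
\hspace{5 mm} binds x b E $\rightarrow$\\
\hspace{5 mm} a = b.\\
}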

\subsection{List of supers}
The structure of the .FJ formalization does not reflect the inheritance hierarchy for each class well. This means that it requires some extra structure apart from the formalization to access the list of supers for a given class (super list). This is strictly necessary and it is implemented as an inductively defined list and a series of proofs.\\

Access to a class' list of supers is vital for object oriented languages, but the cost of keeping the formalization insensitive to inheritance hierarchies, handling them only in rules such as fields rather than in intermediate rules, and supplying super lists as a separate construction, is probably dwarfed by the cost of a much heavier formalization that would carry super lists throughout. The exact construction made to handle the concept of super lists is shown in the box below.
\BB{
Inductive super\_list: list scname $\rightarrow$ Prop := \\
\hspace{2.5 mm} $|$ sl\_object: super\_list (Object :: nil) \\
\hspace{2.5 mm} $|$ sl\_extends: forall C D supers, \\
\hspace{5 mm} extends C D $\rightarrow$ \\
\hspace{5 mm} super\_list (D :: supers) $\rightarrow$ \\
\hspace{5 mm} super\_list (C :: D :: supers). \\
}

Using the above structure, it is possible to prove the $sub\_fields$ lemma, the example used in section 2.5 to show a translation between predicate logic proof trees and Coq syntax. To give a short recap: given a subtype $u$ of some type $v$ with fields \N{fsv}, the fields of $u$ must be a list starting with \N{fsv}. If $v$ is the super of $u$ and $u$ declares no fields itself, the lists of fields are identical. Any subclass of $v$ that declares fields itself has a strictly longer list of fields, where each new field is appended to that list.\\

This extra construction requires a significant amount of work but is nicely encapsulated, held separate from the majority of lemmas that do not need it. This means that the extra complexity it adds is largely hidden from the rest of the proofs. This is yet another reason to keep lemmas as modular as possible; such separations would not be possible with only a few monolithic lemmas.\\

The structure for supers of nested classes has also been added and is called $super\_nesting\_list$. It is similar to $super\_list$ but more complex, to accommodate the additional lookup in the top class' nested classes. $super\_nesting\_list$ has three cases instead of two and takes three arguments instead of one. The arguments are a nested class name $E$, a list of top classes $L$, which is just a super list beginning at $E$'s top class and ending at $Object$, and another list of top classes $L'$, which is a subset of $L$ containing only the top classes that bind $E$ in their nested class lists.

The first case of the relation is the \N{nil} case where any nested class has the list of top class supers \C{Object::nil}. The second case is the \N{cons} case where the top class does not bind $E$ in its nested class list and thus only prepends the top class name to $L$. The third case is the other \N{cons} case where the top class binds $E$ in its nested class list. This means that the top class name is added to both $L$ and $L'$.\\

The code for the $super\_nesting\_list$ relation is not included here, but it is both interesting and necessary for the proofs to work, and thus warrants this explanation. See the appendix for the actual implementation.\\

There are several auxiliary lemmas written to facilitate the use of super lists. Perhaps one of the most important ones is $super\_list\_from\_fields$ which says that given a top class and its fields, there exists some super list starting with that very top class name.\\

\subsection{The importance of modules}
In the Coq proof, the module called $Hyps$ assumes two parameters. They ensure that $Object$ is not bound in the class table and that $ok\_cenv$ holds for the class table, and consequently for all classes.
\BB{
Parameter ct\_noobj: no\_binds Object CT.\\
Parameter ok\_ct: ok\_cenv CT.\\
}

Modules are used to encapsulate properties in such a way that lemmas shown within a certain module are not globally available unless the module is imported. This makes it clear which proofs rely on no other properties than the ones present in the current module, which in essence is the proof context. It is a clean way to show dependencies in larger proofs.\\

In the Facts.v file there are two modules: $NoObjFacts$ and $OkTableAndNoObjFacts$. The latter imports the former out of necessity, which is the reason for the compound name. The camel-casing, instead of the underscores that seem to be historical practice in Coq, is used only to distinguish modules from everything else.\\

\subsection{Typing all classes}
As described above, the \N{Hyps} module ensures that \N{ok\_cenv} is used on the class table, which makes sure that there are no ambiguous class names and uses the \N{ok\_tclass} rule on all classes to make sure that all classes are well-formed. Among other things, \N{ok\_tclass} makes sure that the field variables are well-formed and the \N{fields} rule in turn recursively checks the fields for all supers and thereby makes sure there is no cyclical inheritance or undefined class names.\\

\N{ok\_tclass} checks the following environments of each top class: fields \N{fenv}, constructor \N{kenv}, methods \N{menv}, and nested classes \N{ncenv}. All of these are of the form explained in section 4.3. \N{ok\_tclass} also checks that all types used in the class follow the \N{ok\_type} rule.\\

The \N{ok\_type} rule is equivalent to the "Type well-formedness" rule in the .FJ introduction. For a class to be well-formed, each type mentioned in that class must resolve to a declared type that is appropriate in its context.\\

\subsection{Method typing}
The method typing rule both checks that the method is typeable and that the given expression is indeed the method body expression. The dual purpose of the rule adds a little to the complexity but makes up for it in the lemmas that reason about the rule. This is the sort of insight that is not easy to pick up on initially. It is an optimization of the proof, rather than a necessity or a compromise. Splitting the method rule into two and writing all the proofs verbosely using either the first or the second rule would make the proofs more transparent but not enough to outweigh the cost of code bloat.\\

\subsubsection{Renaming of variables}
The \N{method} rule from jFP\cite{jFP}, another version of the .FJ paper, is used for a more explicit approach which subsumes the renaming of variables.\\

Besides this change, there are no semantic differences in the rules in the two papers and as such there are no inconsistencies in using the other, less opaque, rule for methods. Specifically, the syntax used is
\BB{
(\OC{X},\OC{C},\OC{T},T$_{0}$) = (\OC{Y},\OC{D},\OC{S},S$_{0}$)\\
}

\noindent{and the original syntax is}
\BB{
\OC{C} = \OC{D} $\land$ \OC{T},T$_{0}$ = [\OC{X}/\OC{Y}](\OC{S},S$_{0}$)\\
}

\subsection{Throwing away static type information}
Static type information in this context is the knowledge of the structure of a class at compile time, i.e.\ whether it is a top class or a nested class inside a top class.\\

Essentially, if the static type information is used, most of the rules can be divided into two: one for top classes and one for nested classes. The rules become simpler and shorter but there are twice as many, and the same goes for the proofs. Using this approach, even the very simple rules have to be divided into two, even where it does not make sense in that particular context. Alternatively, the type information can be forgotten in only the simpler rules by treating the two possible class structures as one aggregate type structure. However, this solution would again be confusing and is best avoided.\\

The clear advantage of using the static type information is that needless information is never sent to the rules. They can be tailored to receive only the strictly necessary information, without any wrapping or unwrapping. This is, in part, because of the way $eval\_at$ and $eval\_bound$ preserve the separation, where top classes stay top classes and nested classes stay nested classes.\\

If, on the other hand, the static type information is not used and all classes are considered to be the aggregate type structures, the rules become as compact as possible and intuitively easier to understand, which has proven to be important when dealing with large, complex, formalizations and their proofs. The downside of this is that some amount of unnecessary information has to be passed around in order to satisfy the aggregate rules.\\

Some of the intuition about the rules and proofs is lost by using the type information because the type information does not add much in terms of intuition about the overall structure of the formalization, which is the main reason this thesis does not use the static type information.\\

\subsection{Subclass rule}
The $sub$ relation does not allow nested classes to inherit from anything other than themselves and $Object$. In the actual language, however, nested classes inherit from the same nested classes in their top class' supers. \\

The $sub$ relation could be written differently to make explicit that nested classes are not handled by any cases other than \N{reflexivity} and \N{Object}. This would cause a bit of code bloat but make the rule more transparent and reduce the number of cases in the proofs using the sub relation. However, the rule is not changed in this way, since it is more important to model the .FJ calculus as closely as possible.\\

Another possible change that is \N{not} part of the final thesis is the addition of an assumption in the \N{Object} case of the \N{sub} rule. The assumption would be that only top classes are allowed to inherit from \N{Object}, thus disallowing nested classes from inheriting from \N{Object}, an inheritance that is only symbolic since \N{Object} has neither fields nor methods.\\

\subsubsection{Addition to the Object subtyping}
The \N{s\_object} rule in $sub$ can be given an extra assumption of $ok\_type$ on the type that is said to be a subtype of \N{Object}. This is not strictly necessary, but it costs nothing to add, since it can be extracted from $ok\_cenv$ in all contexts with that assumption. The reason for this is that any type that can be considered a subtype of $Object$ will already have gone through the $make\_cenv$ function and has therefore been checked to be both bound in the class table and well-formed. In earlier iterations of the proof, this assumption was included and made some of the proofs easier and simpler, but as it is not strictly necessary and the focus of this thesis is to model .FJ closely, it was removed in later iterations.\\

\subsubsection{Inheritance vs. subtyping}
There is a need for a new construction to model inheritance since subclassing is a stricter relation and does not model nested class inheritance as explained above. The inheritance relation was then written to subsume the inheritance of the \N{sub} relation along with implementing nested class inheritance and allowing \N{thistype} constructions to relate an explicitly nested type to a relative type.
\BB{
Inductive inheritance (env:benv) : type $\rightarrow$ type $\rightarrow$ Prop :=\\
$|$ inh\_top : forall C D,\\
\hspace{2.5 mm} sub env (t\_C C) (t\_C D) $\rightarrow$\\
\hspace{2.5 mm} inheritance env (t\_C C) (t\_C D)\\
$|$ inh\_nested : forall C D E t tenv m e f,\\
\hspace{2.5 mm} sub env (t\_C C) (t\_C D) $\rightarrow$\\
\hspace{2.5 mm} (method m (t\_CdotC D E) env t tenv e $\lor$ \\
\hspace{2.5 mm}  field (t\_CdotC D E) f t) $\rightarrow$\\
\hspace{2.5 mm} inheritance env (t\_CdotC C E) (t\_CdotC D E)\\
$|$ inh\_tt : forall C E,\\
\hspace{2.5 mm} inheritance env (t\_CdotC C E) (t\_dotC E).\\
}

As seen above, the \N{method} or \N{field} relation is used to check that the nested class $E$ does exist in the top class $D$; either being defined there, inherited from a super, or both. A more explicit approach can be taken to look up the actual bindings, replacing the \N{method} or \N{field} assumption with two $binds$ assumptions for the top class in the class table and the nested class in the top class' list of nested classes, but that will require some digging in the lemmas using the rule. Moreover, the \N{method} or \N{field} assumption is chosen because it is readily available in all the intermediate lemmas using the $inheritance$ relation. This means the approach taken nicely encapsulates the assumption even though it carries more information than is needed.\\

The \N{inheritance} relation is critical for \N{method\_implies\_typing''}, where the primes are present to tell this lemma apart from earlier iterations closer to the original. The iterations of the lemma are explained below.
\BB{
Lemma method\_implies\_typing'':\\
\hspace{2.5 mm} forall m t E t0 tE e,\\
\hspace{5 mm} method m t E t0 tE e $\rightarrow$\\
\hspace{5 mm} exists t' A, \\
\hspace{7.5 mm} inheritance E t t' $\land$ \\
\hspace{7.5 mm} thistype A t' $\land$ \\
\hspace{7.5 mm} wide\_typing E ((this,t')::tE) e t0 A. \\
}

\N{method\_implies\_typing''} states that if a method has the body $e$ and return type \N{t0}, then the expression $e$ types to a subtype of \N{t0}. The \N{inheritance} and \N{thistype} relations are there to handle the environment and the context in which the typing is \N{ok}.\\

This lemma captures something very intuitive. It is expected that the type of the expression returned from a method is some subtype of the declared type of the method.\\

The reason for the two primes after the name is that the lemma is the third iteration. The first resembles the original lemma but was found to be much too restrictive when it came to nested and relative types, and as such the method could only reside in top classes. The second iteration had the flaw that it did not properly model inheritance but only subtyping, which meant that vital information was lost. This third iteration retains all the information needed to reason about the type of the return expression and the context in which it is used.\\

To be able to prove \N{method\_implies\_typing''}, the following auxiliary definitions and facts are written in FJ\_Facts.v along with others in FJ\_Definitions.v and FJ\_Properties.v.
\BB{
Fact ok\_ctable\_class: \\
\hspace{2.5 mm} forall ct C D fs ms k ns,\\
\hspace{5 mm} ok\_cenv ct $\rightarrow$\\
\hspace{5 mm} binds C (D,fs,k,ms,ns) ct $\rightarrow$\\
\hspace{5 mm} ok\_tclass C D fs k ms ns.\\
\\
Fact ok\_ctable\_nclass: \\
\hspace{2.5 mm} forall ct C D fs ms k ns F fs' k' ms',\\
\hspace{5 mm} ok\_cenv ct $\rightarrow$\\
\hspace{5 mm} binds C (D,fs,k,ms,ns) ct $\rightarrow$\\
\hspace{5 mm} binds F (fs',k',ms') ns $\rightarrow$\\
\hspace{5 mm} ok\_nclass F fs' k' ms' C.\\
\\
Definition ok\_ct\_class C D fs k ms ns := \\
\hspace{2.5 mm} ok\_ctable\_class \_ C D fs k ms ns ok\_ct.\\
\\
Definition ok\_ct\_nclass C D fs ms k ns F fs' k' ms' := \\
\hspace{2.5 mm} ok\_ctable\_nclass \_ C D fs ms k ns F fs' k' ms' ok\_ct.\\
\\
Definition ok\_ct\_meth C D fs k ms ns m t tE E e H :=\\
\hspace{2.5 mm} ok\_class\_meth C D fs ms m t E tE e k ns \\
\hspace{5 mm}(ok\_ct\_class \_ \_ \_ \_ \_ \_ H).\\
\\
Definition ok\_ct\_nmeth C D fs k ms ns m t tE E e F fs' k' ms' H H2 :=\\
\hspace{2.5 mm}ok\_nclass\_meth C D fs ms m t E tE e k ns F fs' k' ms'\\
\hspace{5 mm} (ok\_ct\_class \_ \_ \_ \_ \_ \_ H)\\
\hspace{5 mm} (ok\_ct\_nclass \_ \_ \_ \_ \_ \_ \_ \_ \_ \_ H H2).\\
}

The type checker has been executed on the code and consequently \N{ok\_tclass} and \N{ok\_nclass} have checked that all classes are well-formed. The two facts that reason about \N{ok\_tclass} and \N{ok\_nclass} state that if a qualified class is bound in the class table, then that class is \N{ok}; and while they express something quite simple, it does take some proving to dig out the correct class and inspect all its attributes for correctness as well.\\

The Coq proof assistant has its own repository of available lemmas that it will try to apply to solve a goal. This "bag of tricks" can be applied to any goal by using the \N{eauto} tactic and it is possible to tell \N{eauto} to use new lemmas by "hinting" them. These new lemmas will then be added to Coq's "bag of tricks". The two facts above are both hinted to \N{eauto} and are used in FJ\_Properties.v.\\

\subsection{Method invariance}
In programming languages, the types used as method return types, method argument types, and method type arguments are usually allowed to change when the method is overridden. Covariance means that a type in an overriding method is allowed to be a subtype of the corresponding type in the overridden method. Contravariance means that it is allowed to be a supertype. Invariance means that the type cannot change in overriding methods. 

In most languages, when the variance is not upheld, overloading is used. Overloading can be seen as the opposite of overriding and means that an entirely new method is declared that has a similar signature to another method, but for some key differences. This is not always intuitive and much research has gone into exploring variance and how to handle it.\\

The current Java version has covariance for method return types but invariant parameter types. If the parameter types differ, the method is handled with overloading, meaning that a similar method is defined, only with different parameter types, and either method can be called depending on the arguments passed. Contravariance also exists in Java and can be viewed as the inverse of covariance, in the sense that instead of the types being subtypes they must be supertypes. This form of flexibility in parameter types is arguably not used as often as covariance but can be quite useful.\\

Neither of these forms of flexibility is allowed in .FJ, even though standard FJ does have covariance. This means that method types are invariant in .FJ. This omission, like most of the others, is not exactly insignificant, but its inclusion would not bring anything interesting to the type safety proof.\\

\subsection{Congruence rules}
The structure of the congruence rules of the .FJ calculus has already been introduced in section 2.3, and this section discusses their implementation.

Instead of stating the congruence rules explicitly, they are expressed in terms of expression contexts. These act like context free grammars, where another expression can be inserted around an existing expression. This is a typical approach that is also taken elsewhere\cite{ML}.\\

There are two main constructions to facilitate this approach:\\

$exps\_context$, which is a predicate definition over a function from an expression to a list of expressions. This models wrapping an expression in a list of expressions.\\

$exp\_context$, which is also a predicate definition, but over a function from an expression to another single expression, thus modeling the single step semantics that explicitly handles the four ways to form an expression: a field access, a method invocation with no arguments, a method invocation with one or more arguments, and a new expression.\\
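A sketch of the two constructions follows; the constructor names and argument order are illustrative assumptions, not the actual code, but the expression constructors \N{e\_field}, \N{e\_meth}, and \N{e\_new} match their use in the evaluation relation.
\BB{
Inductive exps\_context: (exp $\rightarrow$ list exp) $\rightarrow$ Prop :=\\
\hspace{2.5 mm} $|$ esc\_head: forall es,\\
\hspace{5 mm} exps\_context (fun e $\Rightarrow$ e::es)\\
\hspace{2.5 mm} $|$ esc\_tail: forall e' EE,\\
\hspace{5 mm} exps\_context EE $\rightarrow$\\
\hspace{5 mm} exps\_context (fun e $\Rightarrow$ e'::EE e).\\
\\
Inductive exp\_context: (exp $\rightarrow$ exp) $\rightarrow$ Prop :=\\
\hspace{2.5 mm} $|$ ec\_field: forall f,\\
\hspace{5 mm} exp\_context (fun e $\Rightarrow$ e\_field e f)\\
\hspace{2.5 mm} $|$ ec\_meth: forall m Ps,\\
\hspace{5 mm} exp\_context (fun e $\Rightarrow$ e\_meth e m Ps nil)\\
\hspace{2.5 mm} $|$ ec\_meth\_arg: forall e0 m Ps EE,\\
\hspace{5 mm} exps\_context EE $\rightarrow$\\
\hspace{5 mm} exp\_context (fun e $\Rightarrow$ e\_meth e0 m Ps (EE e))\\
\hspace{2.5 mm} $|$ ec\_new: forall T EE,\\
\hspace{5 mm} exps\_context EE $\rightarrow$\\
\hspace{5 mm} exp\_context (fun e $\Rightarrow$ e\_new T (EE e)).\\
}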

These expression contexts merit a particular rule in the evaluation relation as will be explained in section 4.20. It is a very syntax-light way of representing congruence rules that most importantly does not cause crowding in the typing rules of the language and hence makes it easier to get an overview of the rules and their structure. Such transparency is key in both the design and proving of formal models.\\

\subsubsection{Monotonicity}
The preservation lemmas $preservation\_over\_esc$ and $preservation\_over\_ec$ work with implicit monotonicity of subtyping. This means that given either expression context $exps\_context$ or $exp\_context$, let us call it $EE$, then if some expression $e'$ is a subtype of another expression $e$, then by the property of monotonicity over expression contexts, $EE \S e'$ must also be a subtype of $EE \S e$. Written formally:
\E{\forall \S e \S e' \S EE, \S e' <: e \Rightarrow (EE \S e') <: (EE \S e)}
This is a property of the way the expressions and evaluation rules are constructed, and it is nicely encapsulated and hidden. To briefly describe general monotonicity in mathematics: a function is said to be monotonic if it preserves the order of its domain in its co-domain. Essentially, any two values ordered in a monotonic function's domain have images ordered the same way in its co-domain, thus preserving the order.\\

\subsection{Evaluation rules}
The evaluation (or computation) rules of the proof closely model those presented in section 2.3. The full evaluation relation is presented below, with the usual predicate definition. One new function is introduced here, however: \N{env\_zip} takes a list of pairs $(x,t)$ as its first argument and a list of expressions $e$ as its second argument, and ensures that the third argument is the list of pairs $(x,e)$, i.e.\ the pairing of each key of the first list with the corresponding element of the second list.
\BB{
Inductive eval : exp $\rightarrow$ exp $\rightarrow$ Prop :=\\
$|$ eval\_field : forall C fs es f e fes,\\
\hspace{2.5 mm} fields C fs $\rightarrow$\\
\hspace{2.5 mm} env\_zip fs es fes $\rightarrow$\\
\hspace{2.5 mm} binds f e fes $\rightarrow$\\
\hspace{2.5 mm} eval (e\_field (e\_new C es) f) e\\
$|$ eval\_meth : forall C m E e e' es ves es0 t tenv Ps XTs,\\
\hspace{2.5 mm} method m C E t tenv e $\rightarrow$\\
\hspace{2.5 mm} env\_zip tenv es ves $\rightarrow$\\
\hspace{2.5 mm} env\_zip E Ps XTs $\rightarrow$\\
\hspace{2.5 mm} subst\_type\_exp XTs e = e' $\rightarrow$\\
\hspace{2.5 mm} eval (e\_meth (e\_new C es0) m Ps es) \\
\hspace{5 mm} (subst\_exp ((this,(e\_new C es0))::ves) e')\\
$|$ eval\_context : forall EE e e',\\
\hspace{2.5 mm} eval e e' $\rightarrow$\\
\hspace{2.5 mm} exp\_context EE $\rightarrow$\\
\hspace{2.5 mm} eval (EE e) (EE e').\\
}

A field can only be evaluated on a \N{new} expression; otherwise it is handled by the context rule. The fields of the class corresponding to the \N{new} expression are looked up. Among those fields, the given field's corresponding expression among the arguments to the \N{new} expression is chosen as the result of the evaluation. Like fields, a method can also only be evaluated with a new expression as the receiver of the method invocation. The method's variables are paired with the arguments given in the method invocation, the method's type variables are paired with the actual type arguments in the method invocation, and the expression that is the method body is then the result of the evaluation, after having its variables substituted. The final evaluation rule deals with expression contexts. This is where the congruence rules are explicitly applied, instead of being inherent in all typing rules. It states that given an evaluation from one expression to another, and an expression context, applying the expression context to both expressions still preserves the evaluation.\\
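The \N{env\_zip} relation used in the rules above can be sketched as the following inductive definition, where \N{A} and \N{B} stand for the value and element types of the two lists; the actual code may differ in its parameterization.
\BB{
Inductive env\_zip: list (atom*A) $\rightarrow$ list B\\
\hspace{10 mm} $\rightarrow$ list (atom*B) $\rightarrow$ Prop :=\\
\hspace{2.5 mm} $|$ zip\_nil: env\_zip nil nil nil\\
\hspace{2.5 mm} $|$ zip\_cons: forall x t e E es Es,\\
\hspace{5 mm} env\_zip E es Es $\rightarrow$\\
\hspace{5 mm} env\_zip ((x,t)::E) (e::es) ((x,e)::Es).\\
}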

\subsection{Evaluation of at expressions}
The at expressions introduced in section 2.3 are pervasive throughout the type safety proof and are written below again, for ease of reference.\\
\C{.D@P.C = P.D, \S .D@.C = .D, \S P@T = P, \S P.C@T = P.C}\\
These expressions are pervasive because whenever a $type$ is used where it could be a dependent (relative) type, which is not always the case and is explained below, an assumption binding this possible relative type to another $type$ in a given context is carried along from the applying lemma. These assumptions binding the possible relative type are carried from the typing rules through the intermediate lemmas all the way to the preservation and progress lemmas that together form type safety. Besides the \N{eval\_at} relation implemented in the Coq proof to model the at expressions, there is an \N{eval\_ats} relation that works on lists of at expressions, and this relation is also carried all the way from the typing rules to the final two lemmas.\\

As to when a type cannot possibly be a relative type, this can be seen from the rules, since there is no rule along the lines of \C{.D@C}. This means that relative types cannot be resolved in top classes, and as such cannot be referenced in one. Hence, for an expression like \C{T@C}, where \C{T} is some unknown class name, not only can \C{T} not be a relative type, but since at expressions only transform relative types, it follows that \C{T@C = T}. More formally, in mathematical notation:\\
$\forall T,C \in type : typeis\_qcname \S C \Rightarrow T@C = T \S \land \not\exists D : T = .D$\\

The change to the structure of types makes the Coq formalization more verbose, since rules like \C{P@T = P} have to be written as two cases because the family names \C{P} are no longer in the syntax of the language but are handled by a predicate.\\
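To illustrate, the \N{eval\_at} relation can be sketched along the following lines. The constructor names are illustrative, and the sketch simply maps each of the four rules above to one constructor; as just noted, the actual formalization splits the rule \C{P@T = P} into further cases, since family names are identified by a predicate rather than by the syntax.
\BB{
(* Sketch only; constructor names are illustrative *)\\
Inductive eval\_at : (type*type) $\rightarrow$ type $\rightarrow$ Prop :=\\
$|$ ea\_rel\_nested : forall D P C, (* .D@P.C = P.D *)\\
\hspace{2.5 mm} eval\_at (t\_dotC D, t\_CdotC P C) (t\_CdotC P D)\\
$|$ ea\_rel\_rel : forall D C, (* .D@.C = .D *)\\
\hspace{2.5 mm} eval\_at (t\_dotC D, t\_dotC C) (t\_dotC D)\\
$|$ ea\_top : forall C T, (* P@T = P *)\\
\hspace{2.5 mm} eval\_at (t\_C C, T) (t\_C C)\\
$|$ ea\_nested : forall C D T, (* P.C@T = P.C *)\\
\hspace{2.5 mm} eval\_at (t\_CdotC C D, T) (t\_CdotC C D).\\
}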

The introduction of relative types is one of the things that broke all the code from Fraine et al.\cite{FJcode}, and it takes significant effort to resolve these new possible relative types in most contexts. An example of a lemma where not just the proof but also the statement itself changes substantially is \N{binds\_zip'}; the rewrite is shown below. It is quite interesting to see, for instance, how the \N{(imgs \_)} assumption is factored out into the \N{eval\_ats} assumption.\\

The old formalization:
\BB{
Fact binds\_zip: 
\hspace{2.5 mm} forall E0 E ds Eds v t,\\
\hspace{5 mm} wide\_typings E0 ds (imgs E) $\rightarrow$\\
\hspace{5 mm} env\_zip E ds Eds $\rightarrow$\\
\hspace{5 mm} binds v t E $\rightarrow$\\
\hspace{5 mm} (exists2 e, binds v e Eds \& wide\_typing E0 e t).\\
}

\newpage
The new formalization:
\BB{
Fact binds\_zip':\\
\hspace{2.5 mm} forall E0 E tE0 ds Eds v t T T0 ts A,\\
\hspace{5 mm} typeis\_qcname A $\rightarrow$\\
\hspace{5 mm} eval\_ats (imgs E) T0 ts $\rightarrow$\\
\hspace{5 mm} wide\_typings E0 tE0 ds ts A $\rightarrow$\\
\hspace{5 mm} env\_zip E ds Eds $\rightarrow$\\
\hspace{5 mm} binds v T E $\rightarrow$\\
\hspace{5 mm} eval\_at (T,T0) t $\rightarrow$\\
\hspace{5 mm} (exists2 e, binds v e Eds \& wide\_typing E0 tE0 e t A).\\
}

The first added assumption, on $A$, simply ensures that the context is well-formed. It is not strictly needed, as it could be dug out of the \N{wide\_typing} assumption. The second, \N{eval\_ats}, is present to handle relative types, and the \N{imgs} relation has been moved here from the next assumption. The next two assumptions mirror those of the original formalization. The final assumption again serves to handle the relative type of the \N{wide\_typing} in the goal.\\

The proofs are omitted, but it should be noted that this proof, as is the case with all of the proofs, is significantly longer than the original, both in the number of cases and in their complexity, and thus in the number of lines needed to prove it.\\

\subsection{Substitution of types}
In order to be able to substitute types properly in the \N{eval} relation for a method invocation, the function \N{subst\_type\_exp} is written. It takes as arguments a type environment and an expression. It then splits the expression into the four kinds of expressions. Nothing is done to variables. Fields have their receiver expression given as argument to a recursive call to \N{subst\_type\_exp}. Method invocations also have their receiver expression recursively computed by \N{subst\_type\_exp}, their type arguments computed by another function \N{subst\_types}, and finally the expressions that act as arguments for the method invocation are all recursively computed by \N{subst\_type\_exp} again. The final case is a \N{new} expression, where the class name that is to be instantiated is computed by \N{subst\_type} and the expressions that act as arguments are recursively computed by \N{subst\_type\_exp} as in the other cases.

\N{subst\_types} takes a type environment and a list of types as arguments and if any of the types are variables and any of those variables appear in the type environment, the corresponding types of those variables replace the argument types.

The \N{subst\_type} function resembles \N{subst\_types}, but with the second parameter being a single type instead of a list of types. Below is the code for \N{subst\_type'} along with an explanation of the implementation of \N{subst\_type} and \N{subst\_types}, and then the code for \N{subst\_type\_exp} is presented to help with the explanation above.
\BB{
Definition subst\_type' (XT:(tvname*type)) (t:type) :\\
\hspace{2.5 mm} type := match XT, t with\\
\hspace{5 mm} $|$ (X,t\_C C), t\_X Y $\Rightarrow$ \\
\hspace{7.5 mm} if X == Y then t\_C C else t\\
\hspace{5 mm} $|$ (X,t\_C C), t\_XdotC Y D $\Rightarrow$\\
\hspace{7.5 mm} if X == Y then t\_CdotC C D else t\\
\hspace{5 mm} $|$ \_, \_ $\Rightarrow$ t\\
\hspace{2.5 mm} end.\\
}

As explained above, this definition only changes variables, and only if the variable matches the key of the pair \N{XT}. \N{subst\_type} is given a type environment which, as described in section 2.3, is a list of pairs of the same form as \N{XT}. \N{subst\_type} is a Fixpoint that recursively applies \N{subst\_type'} with the head of the type environment until the entire list has been processed. This can be written in a way that stops as soon as a substitution happens in \N{subst\_type'}, since another substitution cannot happen once the type is no longer a variable but a qualified class name. However, this formalization is simpler and more transparent. The recursive call can also be written inside \N{subst\_type'}, changing Definition to Fixpoint, which would save an application of the unfold tactic and some lines of code, but it is a general principle of this thesis to formalize rules as simply as possible. The separation of definition and recursion is also more appropriate when taking into account the definition of \N{subst\_types}, which is simply Coq's List.map function applied to \N{subst\_type}. Thus the two levels of recursion, on the first and on the second parameter of \N{subst\_type'}, are kept separate from its definition, a property that nicely mirrors the general attempt at modularity throughout the proof.
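Following the description above, \N{subst\_type} and \N{subst\_types} can be sketched as follows; this is an illustrative reconstruction, and the actual code may differ slightly.
\BB{
(* Sketch: fold subst\_type' over the type environment *)\\
Fixpoint subst\_type (XTs:list (tvname*type)) (t:type) :\\
\hspace{2.5 mm} type := match XTs with\\
\hspace{5 mm} $|$ nil $\Rightarrow$ t\\
\hspace{5 mm} $|$ XT::XTs' $\Rightarrow$ subst\_type XTs' (subst\_type' XT t)\\
\hspace{2.5 mm} end.\\
\\
(* Sketch: map subst\_type over a list of types *)\\
Definition subst\_types (XTs:list (tvname*type))\\
\hspace{2.5 mm} (ts:list type) : list type :=\\
\hspace{2.5 mm} List.map (subst\_type XTs) ts.\\
}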
\BB{
Fixpoint subst\_type\_exp (XTs:list (tvname*type)) (e:exp) :\\ 
\hspace{2.5 mm} exp := match e with\\
\hspace{5 mm} $|$ e\_var \_ $\Rightarrow$ e\\
\hspace{5 mm} $|$ e\_field e0 f $\Rightarrow$ e\_field (subst\_type\_exp XTs e0) f\\
\hspace{5 mm} $|$ e\_meth e0 m Ps es $\Rightarrow$ \\
\hspace{7.5 mm} e\_meth (subst\_type\_exp XTs e0) m (subst\_types XTs Ps)\\
\hspace{10 mm} (List.map (subst\_type\_exp XTs) es)\\
\hspace{5 mm} $|$ e\_new C es $\Rightarrow$ e\_new (subst\_type XTs C) \\
\hspace{7.5 mm} (List.map (subst\_type\_exp XTs) es)\\
\hspace{2.5 mm} end.\\
}

\N{subst\_type\_exp} destructs the expression, and while it is explained abstractly above, a more thorough understanding of the structure of the formalization can be achieved by looking over the actual code. The structure is similar to that of all other functions that match over an Inductive to handle the cases differently.\\

All the functions discussed in this section are implemented to facilitate the use of type arguments. Without type arguments, concepts like genericity would not exist. Generics add much to the expressive power of a language and are one of the few features to make it into the .FJ calculus. The code above could have been written more succinctly, but some verbosity makes for greater transparency, and using the lemmas in the proof requires almost no unfolding, since the lemmas dealing with evaluation all use other intermediate lemmas.\\

As an example of some of the intermediate lemmas involved with type substitution, here are a couple that are worth mentioning: \C{subst\_type XTs (t\_C C) = t\_C C} and \C{subst\_type XTs (t\_CdotC C D) = t\_CdotC C D} for fully qualified class names. \C{subst\_type XTs (t\_dotC C) = t\_dotC C} for relative types. \C{subst\_type XTs (t\_X X) = t\_X X $\lor$ exists C, subst\_type XTs (t\_X X) = t\_C C} for top class variables and similarly for the nested class of a variable. These lemmas are all quite specific, in the sense that they only reason about a single constructor each, but they greatly simplify the constructions in the applying lemmas. \\

\subsection{Decidability properties}
It is necessary to determine whether two constructions are equal when looking them up in an environment. These constructions are \N{type} and \N{exp}, and while \N{type} decidability is quite simple to prove, as it is just a matter of destructing all cases, \N{exp} decidability is difficult since expressions can be arbitrarily large.\\

The formal definition of decidability (or decidable equality) on X can be expressed like this:
\E{\forall x, x' \in X : x = x' \lor \lnot (x = x')}

In Coq, the formalization looks like this:
\BB{forall (x x':X), x = x' $\lor$ x <> x'\\}

When looking up a construction in an appropriate environment, it is often through induction, so that something must be matched against the first element of the list (environment). This means that whenever an assumption or goal looks like \N{binds a b ((x,v)::env')}, decidability must be checked with respect to \N{a} and \N{x}. It is not necessary to check whether \N{b} equals \N{v}: if \N{a} and \N{x} are equal, then \N{b} and \N{v} must also be equal by the \N{binds\_fun} lemma, and if \N{a} and \N{x} are not equal, then it does not matter whether \N{b} and \N{v} are equal. There can be many duplicate images in the environments, but as mentioned earlier, an environment is only \N{ok} if there are no duplicate keys. Most often the keys \N{a} and \N{x} are not compound but rather atoms, for which decidability is already defined.\\

\subsubsection{Type decidability}
The decidability lemma for types is aptly called \N{type\_dec}, and proving it is just a matter of destructing \N{type} into its constructors and checking the atoms they are made of; since there is no recursion in the definition of types, types are always finite.\\

Another lemma, \N{list\_type\_dec}, is also defined to prove decidability on lists of types. This is done quite simply by induction on the list, using the previously defined \N{type\_dec} in both the base case and the induction step.\\
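The statements of the two lemmas mirror the definition of decidable equality above and can be written like this; the statements are reconstructions, and the proofs are omitted.
\BB{
(* Proven by destructing the constructors of type *)\\
Lemma type\_dec : forall (t t':type),\\
\hspace{2.5 mm} t = t' $\lor$ t <> t'.\\
\\
(* Proven by induction on the list, using type\_dec *)\\
Lemma list\_type\_dec : forall (ts ts':list type),\\
\hspace{2.5 mm} ts = ts' $\lor$ ts <> ts'.\\
}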

\subsubsection{Exp decidability}
The induction scheme created by Coq for the \N{exp} construction is not strong enough to prove decidability on expressions, and thus a new induction scheme, \N{exp\_ind2}, is written to handle the recursive nature of expressions. As the rectangle presenting the grammar of .FJ clearly shows, expressions do not exhibit the nice property of finiteness that types do. Inside an expression there can be another expression, and even a list of expressions. This means that when destructing \N{exp} into its constructors, the standard induction principle of iterating over all cases of a constructor or creating an induction hypothesis over a list simply does not work. This prompts the tactic invocation \N{induction \_ using exp\_ind2}, where the underscore represents the term that is to be the subject of induction.\\

\N{exp\_ind2} is a lemma that, given a term \N{P} of type $exp \rightarrow Prop$, states how the different constructors of an expression hold for \N{P}. In other words, it is a new induction scheme for the inductive definition of \N{exp}, and its proof is not much more than a refinement of the goal using \N{exp\_ind2'}. Instead of constructing the correct function in the lemma by the use of tactics, it is formalized in the definition \N{exp\_ind2'}. This function is given a term \N{P} of type $exp \rightarrow Prop$ and a term \N{Q} of type \N{list exp $\rightarrow$ Prop} to handle the lists that are nested inside some expressions. The change from the induction principle constructed by Coq to this new function is that the lists inside expressions are now checked by \N{P} and recursed over using \N{Q}, which has two added clauses in the function to handle the list \N{nil} and \N{cons} cases. A term \N{F}, which is given an expression $e$ and returns \N{P} applied to that $e$, then uses the rules defined to construct exactly the function that proves \N{exp\_ind2}.\\
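The statement of \N{exp\_ind2} can be sketched along these lines, with the list arguments handled by quantifying over their elements; the exact statement in FJ\_Properties.v may differ.
\BB{
(* Sketch of the improved induction principle *)\\
Lemma exp\_ind2 : forall (P : exp $\rightarrow$ Prop),\\
\hspace{2.5 mm} (forall x, P (e\_var x)) $\rightarrow$\\
\hspace{2.5 mm} (forall e f, P e $\rightarrow$ P (e\_field e f)) $\rightarrow$\\
\hspace{2.5 mm} (forall e m Ps es, P e $\rightarrow$\\
\hspace{5 mm} (forall e', In e' es $\rightarrow$ P e') $\rightarrow$\\
\hspace{5 mm} P (e\_meth e m Ps es)) $\rightarrow$\\
\hspace{2.5 mm} (forall C es, (forall e', In e' es $\rightarrow$ P e') $\rightarrow$\\
\hspace{5 mm} P (e\_new C es)) $\rightarrow$\\
\hspace{2.5 mm} forall e, P e.\\
}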

The explanations of both lemma and definition are quite high-level and to really get a feel for how the induction principle is constructed and proven, the code can be found in the beginning of FJ\_Properties.v, just below the definition of expressions.\\

\subsection{Proving preservation}
The proof of preservation uses, among other lemmas, the \N{method\_implies\_typing''} lemma described below. This lemma produces a \N{wide\_typing} in a superclass where the method is \N{ok}, because the actual place of declaration of the method needs to be checked: the type checker recurses down into inheriting classes but only states that the method found is \N{ok} in its declaring class.\\

A couple of additions in FJ\_Definitions.v and FJ\_Facts.v ensure that the bound environment is empty. This change is made to facilitate the proof of preservation and its intermediate lemmas because of time constraints. The reason for removing the bound environment for the sake of the preservation lemma is that relative types are the main focus of the proof, and type arguments are considered well enough understood to be a candidate for omission in order to facilitate a proof of the former.\\

Most of the lemmas do not depend on the bound environment being empty and thus reintroducing the bound environment should not be too disruptive to the rest of the proof. In the case of preservation, some of the intermediate lemmas would need rewriting but the overall structure should not change.\\

In the dynamic context of running a piece of code, the substitution of types is applied first and then the substitution of terms. This means that in the static context of doing type analysis on a piece of code, substitution of terms must be applied before substitution of types. The substitution of types, as explained in section 4.22, is handled by \N{ea\_substitutivity\_top} and \N{ea\_substitutivity\_nested}, and the substitution of terms is done almost exactly as in the .FJ paper.\\

The following sections will discuss the type substitution lemmas. The reason why the names of both lemmas start with \N{ea} is that they primarily resolve the \N{eval\_at} relations, which has the effect of translating any relative types into qualified types.\\

\subsubsection{ea\_substitutivity\_top}
The lemma \N{ea\_substitutivity\_top} states that given a \N{wide\_typing} for an expression $b$ to a type $U$ in the context \N{t\_C D}, with the empty bound environment and a type environment beginning with the pair \N{(this, t\_C D)}, then for some subclass \N{t\_C C} of \N{t\_C D}, there is a \N{wide\_typing} for $b$ to $U$ in an unrelated type on the form of a qualified class name and the beginning pair of the type environment changes to \N{(this, t\_C C)}.\\

The lemma is proven by induction using the new induction principle \N{exp\_ind2} explained above. This produces a subgoal for each of the four kinds of expressions. The first case, \N{e\_var}, involves checking the type environment for the correct binding. This is done using some intermediate lemmas that reason about the structure of \N{env\_zip}.

The second subgoal is \N{e\_field}, which requires considerable work, but still less than the \N{e\_invk} case, because the typing of method invocation involves many assumptions such as evaluation of types and bounds, typing the receiver, checking well-formedness of bounds and contexts, and most critically, finding the right \N{method} assumption that describes the method in the appropriate context. 

The subgoal for \N{e\_invk} is proven in much the same way, where the structure of the goal is guessed at and the appropriate assumptions are constructed and passed to the typing rule. 

The fourth case is \N{e\_new}, which is the shortest case: the appropriate types are guessed, and an intermediate lemma that acts on lists of expressions uses the induction hypothesis to change the context of a \N{wide\_typings} assumption on the arguments to the \N{new} expression.\\

\subsubsection{ea\_substitutivity\_nested}
The proof for the nested case is admitted due to time constraints, but there should be no problems in proving it by induction on the expression with \N{exp\_ind2}, just like the top lemma. The proof will most likely require many lines of code, since the relative types need to be cased on to resolve them properly.\\

\subsection{Proving progress}
\N{progress} is formalized like this in FJ\_Properties.v:
\BB{
Theorem progress:\\
\hspace{2.5 mm} forall e t A,\\
\hspace{5 mm} typing nil nil e t A $\rightarrow$\\
\hspace{5 mm} value e $\lor$ (exists e', eval e e').\\
Proof.\\
\hspace{2.5 mm} intros.\\
\hspace{2.5 mm} destruct progress' as [Hpro \_].\\
\hspace{2.5 mm} apply (Hpro nil nil e t A H eq\_refl).\\
Qed.\\
}

The above says that if expression $e$ can be typed to $t$ in the context $A$, then $e$ is either a value or it can be evaluated to another expression. The proof is very short but uses the intermediate lemma \N{progress'} which does all the work. \\

\N{progress'} is destructed in such a way that the left part of the first conjunction is named $Hpro$ and the rest is discarded. The new assumption \N{Hpro} is then applied with the appropriate variables, along with the hypothesis $H$, which is the name Coq automatically gives to the \N{typing} assumption, and the last argument, \N{eq\_refl}.\\

\N{eq\_refl} simply states that, given a $Type$ (note that this is not the $type$ formalized in this thesis but a sort in Coq denoting an actual type) and some variable of that type, that variable equals itself. In short, $\forall T \in Type, x \in T \Rightarrow x = x$. This is an easy way to get rid of equalities that are left over after applying lemmas in Coq.\\
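As a small illustration, \N{eq\_refl} can discharge a reflexive equality directly:
\BB{
(* eq\_refl proves x = x for any x *)\\
Goal forall (T:Type) (x:T), x = x.\\
Proof. intros. exact eq\_refl. Qed.\\
}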

The \N{progress'} lemma is defined like this:
\BB{
Theorem progress':\\
\hspace{2.5 mm} (forall E tE e t A, \\
\hspace{5 mm} typing E tE e t A $\rightarrow$ tE = nil $\rightarrow$\\
\hspace{5 mm} value e $\lor$ exists e', eval e e') $\land$\\
\hspace{2.5 mm} (forall E tE e t A, \\
\hspace{5 mm} wide\_typing E tE e t A $\rightarrow$ tE = nil $\rightarrow$\\
\hspace{5 mm} value e $\lor$ exists e', eval e e') $\land$\\
\hspace{2.5 mm} (forall E tE ds env A, \\
\hspace{5 mm} wide\_typings E tE ds env A $\rightarrow$ tE = nil $\rightarrow$\\
\hspace{5 mm} (forall v, In v ds $\rightarrow$ value v) $\lor$\\
\hspace{5 mm} exists EE e0 e0', \\
\hspace{7.5 mm} exps\_context EE $\land$\\
\hspace{7.5 mm} EE e0 = ds $\land$\\
\hspace{7.5 mm} eval e0 e0').\\
}

This definition seems like more work than necessary, since only the first conjunct is needed. The reason \N{progress'} is defined like this is to be able to apply the \N{typings\_mutind} lemma, an induction scheme that is explained below.\\

After applying \N{typings\_mutind}, there are seven subgoals that must be dealt with. The first is rather trivial, as an assumption reads \C{binds x t nil}, which clearly cannot happen, and that contradiction solves the first subgoal. As a reminder, this means that \C{x} is bound to \C{t} in \C{nil}; in other words, the pair \C{(x,t)} exists in the empty list.

The second subgoal states that a particular field expression is either a value or it can be evaluated to some expression. Since fields can never be values, the way to prove the subgoal is to find the correct expression that the field expression evaluates to and this is done using intermediate lemmas. 

The third subgoal states that a method invocation is either a value or can be evaluated. Again, it cannot be a value, since only new expressions can be values, and again, using intermediate lemmas, the correct expression that is the reduction of the method invocation can be found. This case is a bit trickier because it is necessary to come up with the right expression context in order to arrive at the correct expression. 

The fourth subgoal is fairly simple, as it states that a \N{new} expression is either a value or can be evaluated. This means that the expressions that act as arguments to the constructor must themselves be values, and an appropriate assumption in the form of a disjunction is available to prove the subgoal.

The fifth and sixth subgoals are not interesting and are proven with a single line each. 

The seventh and most difficult subgoal states that a disjunction must hold that matches an assumption, except for the addition of one element in a list of expressions that both the assumption and the subgoal bind. This resembles the method invocation subgoal: disjunctive assumptions must be destructed to give more subgoals, intermediate lemmas are used, and then the correct expression context is found and appropriately evaluated.\\

\subsubsection{Induction schemes}
Using the \N{Scheme} keyword in Coq allows for the automatic creation of induction schemes. This is done for \N{typing}, \N{wide\_typing}, and \N{wide\_typings} and combined into one induction scheme \N{typings\_mutind} using the \N{Combined Scheme} syntax. Then \N{typings\_mutind} is used to prove \N{typings\_implies\_ok\_env} which says that if something can be typed by either \N{typing}, \N{wide\_typing}, or \N{wide\_typings} then the type environment used is well-formed, which simply means that no duplicate keys exist. Finally, \N{typings\_implies\_ok\_env} is divided into three separate definitions again, one for each kind of typing relation, and the three parts are then used throughout the proofs as they are hinted to eauto.\\
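The construction of the combined scheme can be sketched as follows; the intermediate scheme names are illustrative, and the actual code may differ.
\BB{
(* Sketch of the combined induction scheme *)\\
Scheme typing\_mind := Minimality for typing Sort Prop\\
\hspace{2.5 mm} with wide\_typing\_mind := Minimality for\\
\hspace{5 mm} wide\_typing Sort Prop\\
\hspace{2.5 mm} with wide\_typings\_mind := Minimality for\\
\hspace{5 mm} wide\_typings Sort Prop.\\
\\
Combined Scheme typings\_mutind from typing\_mind,\\
\hspace{2.5 mm} wide\_typing\_mind, wide\_typings\_mind.\\
}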

The combined induction scheme \N{typings\_mutind} is at least as important as the lemmas it proves even though the syntax for applying it is quite cumbersome. It is applied in a few key places, and the lemmas it is applied in are usually appended with a prime to show that they are rewritten to accommodate the use of \N{typings\_mutind}.\\

\subsection{Optimizing proofs}
Some of the proofs could be written shorter and with fewer uses of \N{eauto} to improve clarity. There is always a trade-off between compactness and clarity when it comes to proofs, and both could be emphasized more in this thesis. Optimizing the proofs is considered outside the scope of this thesis, since most of the time has gone into the formalization and making sure it is as clean and clever as possible.\\

There are also some lemmas and facts that are not explicitly used, or that are hinted to \N{eauto} but never used by it. These lemmas and their proofs could be removed, but as explained earlier, some lemmas are kept in the proof to give the reader a better understanding of the structure and the choices made. Optimization of the proofs is not a key issue in this thesis and is thus left as possible future work.\\

As discussed in section 2.5, parsimony is a highly desirable property in mathematical proofs, and the proof presented in this thesis would benefit greatly from considerable effort put into making it cleaner. It is highly probable that some assumptions and even entire lemmas can be removed from the type safety proof, which is exactly what parsimony is all about, perhaps best described by Occam's Razor.\\

\subsection{General proof techniques}
This section discusses the different practices used in the proofs; while it is about Coq constructions and proofs, it should be applicable to most proof assistants.\\

\subsubsection{Catch-all tactics}
The tactic \N{eauto} is used many times, and as explained in section 4.26, its use should be kept to a minimum for better transparency in the proofs. \N{eauto} works well to solve many subgoals easily, but it disguises which lemmas are actually used and is much slower to evaluate, since many lemmas are hinted to \N{eauto} and it often has to go through many of them to find the right one.\\

\N{eauto} belongs to a group of Coq tactics that do some work but do not tell the user exactly what work has been done. Other such tactics are \N{trivial}, which, unlike \N{eauto}, is not recursive, \N{simpl}, which tries to unfold constants and reduce the goal, etc. Many more such tactics exist\cite{coqTactics}.\\

\subsubsection{Definitions vs. proofs}
Proving is an iterative process. First the definitions and rules need to be expressed before anything can be proven. Then, as the proofs progress, it is often prudent to go back and alter the definitions and rules, either because errors have been discovered or because new information about the structure of the system has revealed itself and it would be advantageous to do some restructuring. This process is healthy and should not be avoided.\\ 

The Coq Proof Assistant only assists in proofs and overall consistency. It cannot reason about the semantics of the system that is being reasoned about. It is essentially up to the people doing the proving to make sure that the code models that which is to be modeled. Thus, with regard to models, proof assistants offer nothing more than pen and paper, except that rigor is enforced.\\

\subsubsection{Indentation}
There are two distinct indentation styles present in the files, and their use depends on the context. Relatively simple proofs, or proofs with only a few cases, usually have one level of indentation. Sometimes, for clarity, each case is separated by a blank line.

The other indentation style can be seen at the end of FJ\_Definitions.v and quite frequently in FJ\_Facts.v and FJ\_Properties.v. Here, the first case is indented with a number of spaces equal to twice the total number of subgoals, and each subsequent subgoal is indented 2 spaces less than its predecessor. This gives an easy overview of how many subgoals there are, how many lines it takes to prove each goal, and which subgoal is the most complex, or at the very least which subgoal needs the most work to solve.\\
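As a schematic illustration of the second style, a proof with three subgoals starts the first subgoal at six spaces of indentation and steps back by two spaces for each subsequent subgoal; the tactics below are placeholders.
\BB{
(* Schematic; the tactics are placeholders *)\\
Proof.\\
\hspace{2.5 mm} destruct t.\\
\hspace{7.5 mm} eauto. (* subgoal 1: 6 spaces *)\\
\hspace{5 mm} eauto. (* subgoal 2: 4 spaces *)\\
\hspace{2.5 mm} eauto. (* subgoal 3: 2 spaces *)\\
Qed.\\
}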

The proof was written using the Coq Proof Assistant 8.4pl2 in Emacs 24.2.1 with Proof General 4.2. This setup supports only the first way of indenting code, while the second is probably preferable if proper auto-indentation support existed. It is, however, outside of the scope of this thesis to add support in Emacs for the most appropriate indentation with the Coq Proof Assistant. Adding support for multiple ways of doing auto-indentation could be a very helpful mechanism to make proofs more transparent and in many ways make the code look more like its declarative counterpart. This could be an interesting problem for someone familiar with Elisp to tackle.\\

\section{Examples}
As explained in section 4.1, FJ\_Example.v models the example code from the FJ paper. The code seems verbose since Igarashi et al. opted for syntactic regularity by always including the supertype, writing out the constructor, and writing the receiver for a field access. The code below is an exact copy of the code from the FJ paper.\\

\newpage
\subsection{Example code}
\BB{
class A extends Object \{\\
\hspace{2.5 mm} A() \{ super(); \}\\
\}\\
class B extends Object \{\\
\hspace{2.5 mm} B() \{ super(); \}\\
\}\\
class Pair extends Object \{\\
\hspace{2.5 mm} Object fst;\\
\hspace{2.5 mm} Object snd;\\
\hspace{2.5 mm} Pair(Object fst, Object snd) \{\\
\hspace{5 mm} super(); this.fst=fst; this.snd=snd;\\
\hspace{2.5 mm} \}\\
\hspace{2.5 mm} Pair setfst(Object newfst) \{\\
\hspace{5 mm} return new Pair(newfst, this.snd);\\
\hspace{2.5 mm} \}\\
\}\\
\\
new Pair(new A(), new B()).setfst(new B())\\ 
$\longrightarrow$ new Pair(new B(), new B())\\
}

Two simple classes \N{A} and \N{B} are declared along with the more interesting \N{Pair} class. \N{Pair} has two fields \N{fst} and \N{snd} of type \N{Object} which are set when the class is first instantiated. The field \N{fst} can be set using the method \N{setfst} which instantiates a new object of the class \N{Pair} with a new argument \N{newfst} and the previous \N{snd} field.\\

The final line shows how evaluation works. A \N{Pair} object is created with an instantiation of \N{A} as \N{fst} and an instantiation of \N{B} as \N{snd} and the \N{setfst} method is then called on the newly created \N{Pair} object with an instantiation of \N{B} as its argument. By evaluation, this reduces to a new instantiation of \N{Pair} with both \N{fst} and \N{snd} being an instantiation of \N{B}.\\

Since there are no generics in FJ, the example code in FJ\_Example.v has empty bound environments and no type arguments. The code models the constructions above using a mix of parameters and definitions to express the classes and their environments. Some new names and renaming must be introduced to properly code the example in Coq. \N{Pair}, \N{A}, and \N{B} are types. \N{fst} and \N{snd} are field names. \N{setfst} is a method name. \N{newfst} is a variable. \N{pair\_flds} is a definition returning \N{Pair}'s field environment. \N{setfst\_env} is a definition returning \N{setfst}'s type environment, which in this context is its formal parameters. \N{new\_pair} is a definition returning a new-expression with \N{Pair} as its type and a list of two parameters given to \N{new\_pair}. \N{setfst\_body} returns \N{new\_pair} with the first parameter \N{(e\_var newfst)} and second parameter \N{(e\_field (e\_var this) snd)}. Finally, \N{pair\_mths} returns \N{Pair}'s method environment which only contains \N{setfst}.\\
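A few of these definitions could look roughly as follows in Coq; this is an illustrative reconstruction, and the exact definitions in FJ\_Example.v may differ in details such as type annotations.
\BB{
(* Illustrative reconstruction of the example helpers *)\\
Definition new\_pair (a b:exp) : exp :=\\
\hspace{2.5 mm} e\_new Pair (a::b::nil).\\
\\
Definition setfst\_body : exp :=\\
\hspace{2.5 mm} new\_pair (e\_var newfst) (e\_field (e\_var this) snd).\\
}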

\subsection{Intermediate lemmas}
A hypothesis called \N{ct\_fix} is used to fix a specific class table that contains only the three classes and their declarations. The declarations, the environments, and the translation from the former to the latter are explained in section 4.3.\\

A lemma called \N{ct\_noobj}, just like in the type safety proof, is written for the fixed class table and is wrapped in a module called \N{ExNoObj}. Many other intermediate lemmas are also formalized, almost all of which exist in a slightly different form in the original code base. The most important ones are:
\BB{
Lemma ex\_ok\_setfst: \\
\hspace{2.5 mm} ok\_meth nil Pair setfst setfst\_env setfst\_body Pair.\\
\\
Lemma ex\_ok\_pair: \\
\hspace{2.5 mm} ok\_tclass pair Object pair\_flds (pair,pair\_flds)\\
\hspace{5 mm} pair\_mths nil.\\
\\
Lemma ex\_step\_field: forall a b, \\
\hspace{2.5 mm} eval (e\_field (new\_pair a b) snd) b.\\
\\
Lemma ex\_type\_field:\\
\hspace{2.5 mm} typing nil nil (e\_field (new\_pair (e\_new A nil)\\ 
\hspace{5 mm} (e\_new B nil)) snd) obj Pair.\\
\\
subst\_exp ((this,new\_pair a b)::(newfst,c)::nil) setfst\_body = \\
\hspace{2.5 mm} new\_pair c (e\_field (new\_pair a b) snd).\\
}

The proof of \N{ex\_ok\_setfst} is quite long but not too complex. First, \N{ok\_meth} is unfolded, and each part of the resulting conjunction is dealt with using repeated application of the typing relation to shape the goals into something immediately provable. The many applications of the typing relation give many new subgoals, which is the real reason for the size of the proof. Moreover, the environments are quite small, but they still need to be unfolded to dig out the right terms. Stepping through the proof in Coq is recommended, as it gives a great deal of intuition as to how the .FJ construction works for an actual program. The last subgoal of the proof handles the case where a method is defined in a superclass. This means that a series of equalities has to be proven, since the attributes of the method have to match those declared in the superclass; as previously described, method declaration is invariant in Java, and as such method return types cannot change for overriding methods. Again, this causes the proof to take up many lines of code while not being overly complex.\\

\N{ex\_ok\_pair} is simple to prove, even though it is unfolded similarly to \N{ex\_ok\_setfst} and all the subgoals are dealt with separately. The \N{pair} construction is simple enough that none of the subgoals' proofs explode in size, and most of the proof obligations are handled with \N{eauto} or \N{fa\_nil} because of the empty type environments.\\

Proving \N{ex\_step\_field} is just a matter of changing the names of the new constructions introduced by type arguments. The formalization and proof are almost exactly the same as in the original.\\

The proof of \N{ex\_type\_field} is done by applying the typing relation and then digging out the right terms from the environments (class table, constructors, field variables, and methods), again shaping the goals to be immediately provable. The proof might look complex given the number of explicit parameters, but because of the small environments it never becomes too complex and should be easy to understand when stepping through it.\\
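For intuition, the evaluation and typing facts that \N{ex\_step\_field} and \N{ex\_type\_field} formalize can be read as a plain-Java sketch. The names \N{Pair}, \N{A}, \N{B}, and \N{snd} follow the lemmas above; the Java encoding itself (in particular the field types) is illustrative and is not the thesis's actual example source.

```java
// Plain-Java sketch of ex_step_field: field access on a freshly
// constructed Pair projects out the corresponding constructor argument.
class A { }
class B extends A { }

class Pair {
    final A fst;
    final A snd;
    Pair(A fst, A snd) { this.fst = fst; this.snd = snd; }
}

public class FieldAccess {
    public static void main(String[] args) {
        A a = new A();
        B b = new B();
        // ex_step_field: (new Pair(a, b)).snd evaluates to b.
        System.out.println(new Pair(a, b).snd == b);  // prints "true"
    }
}
```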

\subsection{Final proof}
Below is the final lemma, formalizing the evaluation of \N{new Pair(new A(), new B()).setfst(new B())} to \N{new Pair(new B(), new B())}, with the introduction of a third variable $c$ as the resulting first element of the pair.
\BB{
Lemma ex\_step\_meth: forall a b c,\\
\hspace{2.5 mm} eval (e\_meth (new\_pair a b) setfst nil (c::nil)) \\
\hspace{5 mm} (new\_pair c (e\_field (new\_pair a b) snd)).\\
}

Owing to the many intermediate lemmas, the proof of this final lemma is very short. Most of the proof obligations are factored out into other lemmas because of the modular nature of the statement.\\
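The behaviour that \N{ex\_step\_meth} certifies can likewise be sketched in plain Java. The body of \N{setfst} follows the substitution lemma above, \N{subst\_exp \ldots setfst\_body = new\_pair c (e\_field (new\_pair a b) snd)}, i.e.\ \N{setfst} returns \N{new Pair(newfst, this.snd)}; the field types are assumed for illustration.

```java
// Plain-Java sketch of ex_step_meth: invoking setfst on a pair
// rebuilds it with a new first component and the old second component.
class A { }
class B extends A { }

class Pair {
    final A fst;
    final A snd;
    Pair(A fst, A snd) { this.fst = fst; this.snd = snd; }
    // setfst_body corresponds to: new_pair newfst (e_field this snd)
    Pair setfst(A newfst) { return new Pair(newfst, this.snd); }
}

public class Example {
    public static void main(String[] args) {
        A a = new A();
        B b = new B();
        B c = new B();
        // ex_step_meth: new Pair(a, b).setfst(c) evaluates to a pair
        // whose second component is (new Pair(a, b)).snd, i.e. b.
        Pair result = new Pair(a, b).setfst(c);
        System.out.println(result.fst == c);  // prints "true"
        System.out.println(result.snd == b);  // prints "true"
    }
}
```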

Since the basic structure of the original example code works with the extended model of FJ, this can be seen as evidence that the FJ formalization is not only well-suited for extension, one of the goals in writing it, but that the .FJ model is also a good way to model the specific extension of Family Polymorphism.\\

\section{Conclusion}
Family Polymorphism successfully fixes a deficiency in most object-oriented languages: the lack of subtyping between groups of mutually referencing classes. While Lightweight Family Polymorphism is considerably less general than Family Polymorphism and standard Java, its calculus retains the characteristic and important properties needed to model a language well. Igarashi et al. already provide a paper proof of type safety, but a Coq formalization provides confidence in the proof of the model without the need for extensive peer review. This is due to the rigor enforced on all proofs, which does not allow unjustified jumps in logic or any other form of logical error. The Coq proof of .FJ works, apart from one admitted lemma, whose role in the proof is a small, well-defined statement.\\

Type safety is divided into the properties of progress and preservation as defined by Milner. Progress is proven, but only three of the four cases of preservation are proven, these being variable expressions, field access expressions, and new expressions. The missing method invocation case is split into a lemma for top class receivers and a lemma for nested class receivers. The top class receivers lemma is proven, and while the nested class receivers lemma still needs to be proven, there should be no problems in doing so, as explained in section 4.24; its omission is purely due to time constraints.\\

The example code shows how an actual program can be transformed from syntactic class declarations and an expression into semantic class environments and an evaluated expression using the mostly theoretical formalization in Coq. This effectively argues that .FJ, and by extension the Coq formalization, models something very useful. While the example code is only a proof of the correct evaluation of a particular expression, it does grant insight into how the .FJ calculus works.\\

The main contribution of this thesis is the formalization of .FJ and its proof. When using the same formalization as in this thesis, the lemmas reasoning about it are all inherited, and each of them is a result in its own right. As such, looking over the code is a great way to gain insight into the formalization and its properties. Most of the lemmas written have not been referenced in the thesis, simply because there are too many. Additionally, the subjects discussed throughout section 4 provide insight into not only the .FJ formalization but formal modeling and theorem proving in general.\\

The proof of the missing lemma is left as future work. The formalization of .FJ is already very clean and transparent, but the proofs could benefit from some cleaning and possibly optimization. Again, due to time constraints, the structure of the intermediate lemmas is not optimal, and some restructuring would most likely produce a more concise proof. However, it should be mentioned that one of the strengths of using a proof assistant is that parsimony is not as important a property as in proofs by pen and paper, since the proofs do not have to be reviewed by hand.\\

The proof can be found in its entirety in the appendix along with the number of lines and lemmas.\\

\clearpage

\section{Appendix}
\noindent{Website containing the source code and compiled code for the project:}\\
\texttt{cs.au.dk/\~{}lonkz/projects/fampoly/}\\

\noindent{The proof uses 6249 lines of code.}\\

\noindent{There are 172 lemmas and facts. Of these 172 lemmas, 107 are not explicitly used but are hinted to eauto and thus their use is concealed. This leaves 65 lemmas that are explicitly applied.}\\

% References: BibLaTeX
\clearpage
\printbibliography

\end{document}
