%%This is a very basic article template.
%%There is just one section and two subsections.

\documentclass[a4paper]{article} 
\usepackage[latin1]{inputenc}
\usepackage{calc}
%\usepackage{setspace}
\usepackage{graphicx}
\usepackage{multicol}
\usepackage[normalem]{ulem}
%% Please set your language here
\usepackage[english]{babel}
\usepackage{color}
\usepackage{hyperref}
\usepackage{natbib}

\usepackage{moreverb}
%%\newenvironment{code}{\catcode`\@=\other\relax\small\verbatimtab}{%
%%\endverbatimtab\normalsize\catcode`\@=\active}
%%\newenvironment{code}{ \begin{verbatim} } { \end{verbatim} }

\definecolor{darkblue}{rgb}{0, 0, 0.4}
%%\newenvironment{code}  {\begin{quote} \small \darkblue \sf } { \end{quote}}
\newenvironment{code} {\small } { }



\bibpunct{[}{]}{,}{a}{}{;} 


\begin{document}



\title{Static typing and Scala}
\author{Jelle Pelfrene}
\date{}

\maketitle

\section{Introduction to static type systems}


Historically the progress of computer science can be seen as a road to
ever-higher levels of abstraction. There is a struggle to find and define the
right abstractions and methods of abstraction. This process is mirrored
in program development by programmers searching for the right abstractions for a
particular program. Just as the programmer then has to implement his
carefully chosen abstractions correctly and efficiently, so too language
developers must build the machinery to correctly and efficiently implement ever
higher abstractions. 


As always, it is extremely important to get these foundations right.
Since the abstraction techniques supported by a programming language influence
the ease and cleanliness with which programs in that language can be
written, work in programming languages will pay double dividends for everyone building on them. 


Type systems have been an important driver in programming language
research for decades and profoundly influence the languages we think and
program in.
\\

\subsection{Type checking in the program lifecycle}
A computer program takes different forms
during the development cycle. The programmer typically works on a textual representation. The end-user cares only about the run-time 
behaviour, given by the final binary form. The compiler stands between both: it 
transforms a syntactically correct program text into a tree form, analyses 
this tree and finally generates the code that implements the high-level behaviour specified.
\\

This middle step of analysis is highly language-specific. Languages 
that are called `dynamic' let you run any program that adheres to the syntax.
These are also known as scripting languages, with well-known
instances such as Python and Ruby. Other languages are more `static' and
perform more advanced analyses on a program before allowing it to be
executed. Haskell, Java and Scala are examples of this category of
languages. 

We will study the theory and practice of type
checking, a common correctness-validating analysis run at the
beginning of the analysis stage. After program correctness has been
verified, typically many more analyses are run to optimize the program.
\\
\subsection{Early detection of errors}
Programs need further verification besides syntactic checking because 
the syntax allows statements that correspond to clearly nonsensical 
behaviour. For example, in an object-oriented language a method call 
can be syntactically correct but actually refer to a non-existent 
method. Or possibly the reference to the receiver object is not 
valid but a null reference, or a write to an array exceeds its bounds.
\\

Programming languages need to either specify a way to deal with these situations or
prevent them from occurring. Dynamic languages must check everything at
runtime before performing an
operation. If the next operation would be illegal, they throw an exception at that moment, signifying that the 
message was not understood or that a null pointer is being dereferenced.
Static 
languages use compile-time analysis to guarantee these invalid operations will 
never occur during an execution of the program.  


A static type system uses classification to build an
abstraction of the program. Every expression is classified according to the
type of values generated on execution. By only allowing expressions to be
combined in compatible ways, the type system proves
that prohibited error classes can never occur in validated
programs. For example, in an object-oriented language the type system 
will typically check method calls on objects. It will classify objects by
their interfaces and prove that
all objects called will contain the called method. If the method is not defined,
the programmer will be alerted by the compiler and the program refused.
\\
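To make this concrete, the following small Scala sketch (the names \texttt{Greeter}, \texttt{English} and \texttt{welcome} are our own illustrations, not from any library) shows a compiler using an interface as a classification: \texttt{welcome} may call \texttt{greet} because the type \texttt{Greeter} guarantees it, while a call to an undeclared method would be refused at compile time.

\begin{code}\begin{verbatimtab}
trait Greeter { def greet(name: String): String }

class English extends Greeter {
	def greet(name: String) = "Hello, " + name
}

object Classified {
	// welcome accepts any object classified as a Greeter, so the
	// call to greet is proved safe at compile time. A call such as
	// g.shout("world") would be refused by the compiler, because
	// the type Greeter proves nothing about a method called shout.
	def welcome(g: Greeter): String = g.greet("world")
}
\end{verbatimtab}
\end{code}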

%A companion technique is program verification using verification conditions 
%where pre- and post-conditions in the program provide a specification. A
%theorem prover then attempts to verify the implementation against this
%specification and only allows provably correct programs to be run. 

\subsection{Further benefits}
%Static type checking as a discipline has further repercussions besides
%preventing errors that stem from trivial coding errors or disregarding side
%cases. 
Type systems can do more than detect simple coding errors early, with guarantees.
Type systems have been used to prove more general kinds of properties of a program, from never crashing because of illegal interactions, over being deadlock-free~\citep{SafeJava}, to proving mathematical theorems.


A type system can also be a documentation tool for the
programmer. The
types occurring in a program say a lot about the structure of the
program. In dynamic languages this information is usually preserved in
documentation or perhaps using Hungarian notation for variable
names. Documentation can be made inconsistent by modifications, whereas a statically typed program enforces
this knowledge in the program itself.

A type system can also aid maintainability, by enforcing the use of
interfaces and the separation of concerns in the program. Manual
refactoring is aided because the compiler
will signal inconsistencies once a change has been made.

There are also consequences for performance. Statically typed
programs can run faster because some run-time checks for properties 
already proved at compile time can be eliminated,
and because the types provide extra information for laying out data in memory efficiently.
\\
\subsection{Limitations}
Static type systems do have an impact on succinctness, 
flexibility and expressiveness. 


Type annotations typically make a statically typed program more verbose than a
corresponding program written in an untyped language. However, the difference
in verbosity need not be big. Static typing can be combined with a system of type
inference, relieving the programmer of the need to provide type annotations for all expressions. There is a whole
spectrum of languages, from verbose like Java to terse like
Haskell. The intricacy of the
program can also play a part: sometimes extra annotations are necessary 
to help the type inference engine along.
 

Static typing may also interfere with prototyping 
by refusing to run the program before every possible path has been fully checked. The 
ability to selectively turn off parts of the type system is still an active research 
domain. For now, most statically typed languages also provide a
read-eval-print loop to enable testing key parts of a system.


Finally and most fundamentally, employing a type system imposes
limits to expressiveness. We want
absolute guarantees from our type checker even though it can
only do a static analysis of the program. Therefore the type system
will always err on the side of safety. It will need to conservatively 
disallow some programs that cannot be proved safe, even though the program 
can in fact be executed without problems.
\\


This last limitation of type systems is a major driver in type system 
development. The goal is a type system that admits not all possible 
programs, but all useful ones. The trade-off lies in pushing the bounds of 
expressiveness while keeping the language convenient to use, by not requiring 
too many explicit typing hints from the programmer. 
\\

\subsection{To the future}
These are exciting times for type system designers: the escalating security
requirements of always-connected computer systems have made proofs of security properties a lot more 
worthwhile. Also, the field has been developed academically to a point where it can now 
take on real industrial-strength languages instead of just purely academic proofs of concept.
Indeed, recent features added to mainstream programming languages, such as Java
generics~\citep{bracha98making} and $C^\sharp$ nullable value types, are
already firmly founded in enhanced type systems, and there is hope for more tangible improvements in this direction.
Specifically, research on how to combine static and soft type checking
\citep{wadlerblame} and typical features of dynamic languages \citep{ego} will
hopefully make this body of research and experience useful in further contexts.


\newpage

\section{Typing lambda calculus}

Historically, most of the research into type systems up to the 1990s was 
performed in a foundational computational framework called the lambda
calculus~\citep{barendregt88introduction}.
\\

Computation in the lambda calculus is based on substitution. As an example, the substitution 
of 3 for x in x*(x+y), written (x*(x+y))[x:=3], equals
3*(3+y). We could perform a further substitution of 11 for y on the result:
(3*(3+y))[y:=11] equals 3*(3+11), which (with
primitive operations + and * available)
results in 3*14 = 42. The operation we have performed here can clearly model the
action of calling a function with actual parameters 3 and 11 for formal parameters x and y.
\\

In lambda calculus notation one would use very little syntax and write the
above function as $\lambda x . \lambda y . (x*(x+y))$ . A call 
of this function with
3 for x and 11 for y is written as  {\tt $\lambda x . \lambda y . (x*(x+y))$
$ 3$ $ 11$ } by just appending actual parameters to the right.
\\

This is actually the complete form of the untyped lambda calculus: a term can
be a variable, a function abstraction with a formal parameter and body, written 
``$\lambda$ param . body'', or a function application of a function term to a 
parameter term, written by juxtaposition as ``function parameter''.

\begin{tabbing}
term \=::=  \= x     \hspace{70pt}       \= variable \\ 
       \>	 $\|$ \>	$\lambda$x . term     \>	abstraction: function with parameter x\\
       \>	 $\|$ \>	term  term           \>	application: right term is parameter for left function\\
\end{tabbing}
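As a sketch of how a meta-program might represent this grammar (here in Scala, anticipating the case classes discussed later; all names are our own), each line of the grammar becomes one case class, and substitution becomes a recursive function over terms. Note that this naive version ignores the renaming of bound variables needed to avoid variable capture.

\begin{code}\begin{verbatimtab}
sealed abstract class Term
case class Vari(name: String) extends Term
case class Lam(param: String, body: Term) extends Term
case class App(fun: Term, arg: Term) extends Term

object Subst {
	// Naive substitution t[x := v] over the term structure.
	def subst(t: Term, x: String, v: Term): Term = t match {
		case Vari(n)   => if (n == x) v else t
		case Lam(p, b) => if (p == x) t else Lam(p, subst(b, x, v))
		case App(f, a) => App(subst(f, x, v), subst(a, x, v))
	}
}
\end{verbatimtab}
\end{code}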

This simple system is Turing-complete and thus lets us express any computable
function. Concepts such as numbers, booleans and pairs can all be encoded as functions in the lambda calculus through 
so-called Church encodings~\citep{TAPL,barendregt88introduction}.
If we extend this calculus with enough syntactic sugar and primitive implementations of the base 
values and operations on them, the end product is a usable full-fledged 
programming language; this can indeed be seen as the foundation of
Scheme~\citep{schemereport}.
\\

However, such a dynamic framework lets us write expressions that make no sense.
For example, suppose that in this untyped system we wrote a function that increments its parameter by two,
	$\lambda n . 2+n$, again using a primitive operation on numbers.
We could use this function correctly on 1: $(\lambda n . 2+n)  1$  $\Rightarrow$ through
substitution: $2+1$ $\Rightarrow$  $3$. However, somewhere deep down in our
program we might accidentally cause it to be applied to, say, the word ``hello''. This is an
invalid operation and we would expect a runtime error.
\\

Erroneous expressions such as these can be prevented by a very simple and lightweight 
static type system. As type systems work through classification of values and expressions by types, 
we will first investigate the categories of values that exist in this so-called 
``simply typed lambda calculus''. 
The possible values consist of the primitive values we add for convenience, plus function values.


Classifying some base datatypes is easy: we might introduce a type Nat that classifies all natural 
numbers, a type String for words and a type Bool for the logical constants true and false. 
But what to do for functions? Suppose we introduce a type Function. This is still insufficient 
to differentiate between proper function application and inconsistent 
application. The relevant information about a function is not just that it is one, but 
what type of parameter it takes and what type the return value is. Traditionally the 
notation A $\rightarrow$ B is used for the type that classifies functions that return a B when 
applied to an A. This means String $\rightarrow$ Nat would be the type of a function that gives the 
length of a string. The `$\rightarrow$' itself is then a type constructor,
since it is not a type itself but constructs one when given type parameters.
\\

The specification of our language now looks like this: \label{STLC_grammar}
\begin{tabbing}
term 		\=::=  \= x     \hspace{70pt}              \= variable \\
       \>	 $\|$ \>	$\lambda$x: type . term   \>	abstraction with param x with given type\\
       \>	 $\|$ \>	term  term         		  \>	application\\
type \>	::= \>	base type 						\>	(Nat, Bool,\ldots depending on actual language) \\
	 \>	$\|$	\>	type $\rightarrow$ type 	\>	 function type: argument type
	 `$\rightarrow$' result type
\end{tabbing}

% \begin{verbatim}
% term ::= x	                     variable
%        | \lambda x:type .term    abstraction with param x with given type
%        | term term               application 
% type ::= base type               (Nat, Bool,\ldots depending on actual language) 
%        | type -> type            function type: argument type '->' result type
% \end{verbatim}

Generally, we write the syntax term `:' type to express that the given term has
the given type. Although we need the type of a function to contain both its source 
type and target type, simply annotating a function with just the type 
of parameter it takes turns out to be enough to allow the typechecker for the simply typed lambda calculus 
to check function applications for validity. 
\\

For an approximate account of how a typechecker would function, suppose `+' is
shorthand for a built-in function that takes two Nat numbers and returns a Nat.
Then we can reason as follows about our example function, now annotated as 
$\lambda n:Nat . 2+n$ to prevent it being applied to anything but numbers. Our function takes parameters of type Nat
(by annotation) and returns values of whatever type its body has when given a Nat. 
The type of the body when given a Nat equals the result type of the 
function `+' when given two Nats, being Nat. Thus the type of our example
function is Nat $\rightarrow$ Nat. 
  
  
  Applying this function to the String ``hello'' should give rise 
  to an error from the compiler to the programmer, because functions should only
  be given parameters of the correct type. If we apply our function to the parameter 3, which is of type Nat, 
  this is permissible: applying a function that takes a Nat to a Nat, to a parameter of 
  type Nat, naturally gives us a result of type Nat. 

There is a small set of simple rules guiding this decision making process. \label{STLC_typing}
\begin{enumerate}
  \item The type of a primitive value is its predefined base type.
  \item The type of a variable inside a function is the annotated type remembered from its declaration site.
  \item The type of a function with annotated parameter type A is the function type A$\rightarrow$B, \
			where B is the type found for the body in the knowledge that the newly introduced parameter has type A.
  \item The type of the application of a function f: A$\rightarrow$B to a parameter gives \begin{enumerate}
                                                                                            \item if the actual parameter has type A, result type B
                                                                                            \item if the actual parameter does not have type A, a type error 
                                                                                          \end{enumerate}
\end{enumerate}
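These rules translate almost directly into code. The following Scala sketch (with our own names, and only Nat as base type) implements the four rules over a small term representation, returning \texttt{None} on a type error; the environment \texttt{env} remembers the annotated types from declaration sites, as rule 2 requires.

\begin{code}\begin{verbatimtab}
sealed abstract class Ty
case object NatT extends Ty
case class Arrow(from: Ty, to: Ty) extends Ty

sealed abstract class Tm
case class Lit(n: Int) extends Tm                  // primitive value
case class Ref(name: String) extends Tm            // variable
case class Fun(param: String, ptype: Ty, body: Tm) extends Tm
case class Ap(fun: Tm, arg: Tm) extends Tm

object Check {
	def typeOf(t: Tm, env: Map[String, Ty]): Option[Ty] = t match {
		case Lit(_)       => Some(NatT)            // rule 1
		case Ref(x)       => env.get(x)            // rule 2
		case Fun(x, a, b) =>                       // rule 3
			typeOf(b, env + (x -> a)).map(Arrow(a, _))
		case Ap(f, arg)   =>                       // rule 4
			(typeOf(f, env), typeOf(arg, env)) match {
				case (Some(Arrow(a, b)), Some(a2)) if a == a2 =>
					Some(b)                        // rule 4(a)
				case _ =>
					None                           // rule 4(b): type error
			}
	}
}
\end{verbatimtab}
\end{code}

Note how each case of the match corresponds to exactly one form in the grammar.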
These rules allow us to write an algorithm that determines whether 
the annotated types lead to a consistent program or whether it is flawed.
Very importantly, the different cases of the algorithm are mutually 
distinct. The structure of the term being analysed, corresponding to a line in the grammar, 
fully specifies which single branch applies.
This `syntax-directedness' of the type checking rules simplifies writing the
type-checker.
\\

However, there are a lot of functions that are intuitively perfectly fine 
that we cannot write in this simply typed lambda calculus. It is very simple but lacks expressivity.
The most straightforward example is the identity function: the function that just
returns its parameter. In the untyped lambda calculus 
we can just write $\lambda n . n$ and apply this function to any
value to have the same value returned. If our
programming system only accepts the simply typed lambda calculus, this term will not parse: we need to annotate the parameter with a type. 
There is no way this can be done once and for all: 
if we want the identity function for natural numbers, we need to write the function 
as $\lambda n : Nat . n$, for booleans $\lambda n : Bool . n$, and so on.


We need to make our type system more expressive to be able to express polymorphic functions like this, which 
work for multiple types. The extension with so-called universal types allows us to write the 
identity function as 
	\mbox{$\Lambda X .\ \lambda n : X .\ n$} with type \mbox{$\forall X .\
	X\rightarrow X$}. This extension has been fully worked out and, like any,
	complicates the type system while improving expressiveness. The story is similar for subtyping, as best known 
from object-oriented languages, and a host of other
extensions~\citep{TAPL, cardelli85understanding}. 
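Scala's generic methods, discussed later, package exactly this universal quantification. A minimal sketch, with a hypothetical object name:

\begin{code}\begin{verbatimtab}
object Poly {
	// One definition for all types: the type parameter X plays
	// the role of the universal quantifier "forall X".
	def identity[X](x: X): X = x
}
\end{verbatimtab}
\end{code}

Both \texttt{Poly.identity(3)} and \texttt{Poly.identity("hello")} type-check against this single definition.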






\newpage
\section{The Scala programming language}
\subsection{Scala's background}

The Scala language \citep{ScalaOverview} designed by Martin Odersky is an
effort to create a multi-paradigm scalable language by joining the Object-Oriented and Functional programming paradigms. 
The language is designed to be scalable by using the same core abstractions 
for both very small programs and big systems. The core abstractions are
derived by comparing and unifying Object-Oriented and Functional design practices.  


The Object-Oriented 
way to decompose a problem is as a set of objects that send messages to each 
other. Objects are first-class values so object methods can take and return other objects. 
Objects only depend on the interfaces of their peers as a way to separate 
concerns. Each object neatly hides local state and uses it to make decisions
internally. Extending a program should ideally be as easy as replacing a class 
with an enhanced subtype through inheritance. In this way Object-Orientation aims to build 
flexible and easily extendable systems. 


The Functional way to decompose a problem is as a set of 
functions that operate on external data. Functions are first-class values, so
so-called higher-order functions can operate on other 
functions and return functions. 
Data is modelled using algebraic data types and decisions are made external to the data by 
functions pattern matching over the way the data was constructed. The usage of state 
is minimised which makes it easier to reason about and prove properties of the 
program. Together this provides a concise way of building large programs out 
of small building blocks~\citep{SICP}.


Furthermore, Scala has a static type system but does not need fully explicit type declarations.
A type inference engine derives partial type information, 
so the programmer can leave a lot of type annotations implicit which makes 
for shorter code compared to explicit languages such as Java. This allows 
Scala to aspire to the conciseness of dynamic scripting languages while 
providing the guarantees of a static type system.

 
Finally, Scala currently can compile to bytecode for the Java virtual 
machine and can call Java code seamlessly so Scala code integrates well into the current Java ecosystem.


\subsection{Scala as a meta-programming language}

Though Scala on the whole is a multi-paradigm language usable as a successor 
to Java, some features in particular make it very practical 
for writing language-handling programs.


Firstly, pattern matching as in Haskell is possible on case class 
hierarchies, the Scala version of algebraic data types. Writing programming-language 
oriented code involves a lot of operations on trees, since the object 
program is represented as a tree of abstract syntax elements. Scala has a
`match' expression that implements the traditionally functional language feature of pattern matching, which 
is a way of externally defining a function over all the variants of an algebraic data type.
The code is localized in the function instead of distributed 
over the different subtypes, as would be simulated in other object-oriented languages 
by the use of the visitor pattern. 
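As a sketch with invented names, here is a small expression language whose evaluator is one externally defined function over all variants, in the style just described:

\begin{code}\begin{verbatimtab}
sealed abstract class Expr
case class Num(n: Int) extends Expr
case class Add(l: Expr, r: Expr) extends Expr
case class Mul(l: Expr, r: Expr) extends Expr

object Eval {
	// The whole evaluator is localized here; no visitor pattern
	// and no method spread over the subclasses is needed.
	def eval(e: Expr): Int = e match {
		case Num(n)    => n
		case Add(l, r) => eval(l) + eval(r)
		case Mul(l, r) => eval(l) * eval(r)
	}
}
\end{verbatimtab}
\end{code}

Here \texttt{Eval.eval(Mul(Num(3), Add(Num(3), Num(11))))} computes 3*(3+11) = 42, echoing the earlier lambda calculus example.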


Secondly, Scala is suitable as a host language for domain specific languages because of a combination of 
flexible syntax and user-definable implicit conversions. The standard library provides an 
implementation of the `executable grammar' or `parser combinator'
\citep{MonadicParserGenerators} idea. This reduces the time needed to implement parsers compared to manual 
implementation. As an embedded domain specific language it can be integrated 
better than an external parser generator such as those from the yacc family.
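The standard library's actual combinators are richer, but the underlying idea can be sketched in a few self-contained lines (the names below are our own, not the library's API): a parser is a function from input to an optional result plus remaining input, and combinators build bigger parsers from smaller ones.

\begin{code}\begin{verbatimtab}
object Combinators {
	// A parser consumes a prefix of the input and yields a result
	// together with the rest of the input, or None on failure.
	type Parser[A] = String => Option[(A, String)]

	def char(c: Char): Parser[Char] =
		in => if (in.nonEmpty && in.head == c) Some((c, in.tail))
			else None

	// Sequencing: run p, then run q on the remaining input.
	def andThen[A, B](p: Parser[A], q: Parser[B]): Parser[(A, B)] =
		in => p(in).flatMap { case (a, rest) =>
			q(rest).map { case (b, rest2) => ((a, b), rest2) } }

	// Choice: try p, and fall back to q if p fails.
	def orElse[A](p: Parser[A], q: Parser[A]): Parser[A] =
		in => p(in) orElse q(in)
}
\end{verbatimtab}
\end{code}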


Thirdly, Scala has mixin composition using traits, which allows expressing a 
component model using objects. Whereas in Java the interface of
an object only specifies which functionality it provides, Scala allows defining both the interface the component provides and those it depends on. 
Traits can be mixed into classes, allowing the composition of separate
subcomponents and dependencies into larger structures. In Scala this
happens not in a separate module definition language but in a type-safe
way in the language itself.

\subsection{Scala's type system}
The present-day Scala type system is based on the $\nu$Obj calculus, 
with restrictions to make type checking a decidable algorithm. This provides a
familiar object-oriented system where values are objects whose class is their
type. Besides normal classes, Scala has mixin traits. It extends these by not
only allowing value members for an object but also abstract or concrete type
members. This type abstraction mechanism is also available packaged as familiar
generic classes and generic methods. In Scala, variance is declaration-site
based, as opposed to the usage-site variance of Java.
Scala also includes path-dependent types, self types and structural types. It
couples all of these with a type inference engine that limits the manual type annotations necessary, because
\begin{quote}
The more interesting your types get, the less fun it is to write them down! - Benjamin C. Pierce
\end{quote}


\subsection{Feature overview with examples}

\subsubsection{Concise syntax}
One of the big initial payoffs when switching from Java to Scala is the concise
and supple syntax.  An explicit and verbose rendering of the traditional simple hello world program 
in Scala would be:

\begin{code}\begin{verbatimtab}
object HelloWorld {
	def main(args: Array[String]) : Unit = {
		println("Hello, world!")
	}
}
\end{verbatimtab}
\end{code}

\noindent We repeat the standard Java version for easy comparison:

\begin{code}\begin{verbatimtab}
public class HelloWorld{
	public static void main(String[] args) {
		System.out.println("Hello, world!");
	}
}
\end{verbatimtab}
\end{code}

We can already notice some syntax differences between Scala and 
Java in this first example. Firstly, no semicolons are needed to end the last
statement on a line, but they are allowed and useful as separator.  Secondly,
there are a number of differences regarding type annotations. In Scala, type
declarations come after the element they are annotating and are separated from it by a colon. This goes both for the parameters the method \texttt{main}
takes and for the return type of the method itself. In the \texttt{args} formal
parameter it is visible that Scala uses square brackets instead of angled
brackets to denote type parameters.
The \texttt{Unit} type takes the place of
Java's \texttt{void} to indicate a method without useful result 
except for side-effects. The only value of type \texttt{Unit} is written as a
pair of parentheses \texttt{()}. 

Towards shorter code, the return type of non-recursive methods
like this one can be inferred by the compiler. Scala requires the equals
sign to link the method body to the declaration; just leaving out the body would make the method abstract.
When the body 
fits on one line, the curly braces delimiting the block are optional.
Thus the short
version of this method would be:	
\begin{code}\begin{verbatimtab}
	def main(args: Array[String]) = println("Hello, world")
\end{verbatimtab}
\end{code}

Scala actually has a specific idiom for the common case of an object
created as the entry point of an application: making it extend \texttt{Application}. The code in the body of the object is run when the main method is called.
\begin{code}\begin{verbatimtab}
object HelloWorld extends Application{
	println("Hello, world")
}
\end{verbatimtab}
\end{code}

This first example also already shows a non-cosmetic property of Scala
reflected in the syntax: it is intended to be more purely
object-oriented than Java. Scala does away with Java's notion of \texttt{static} members. 
In Java, static members do not belong to an object at all but to a class;
however, these static class members do not participate in inheritance like
ordinary members. In Scala you can declare objects directly, and on first usage a 
singleton object is created automatically. A frequent pattern is to combine a
class with a helper object that contains the features that would in Java have
been static class members.
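A sketch of this class-plus-companion-object pattern (the class \texttt{Point} and its members are illustrative):

\begin{code}\begin{verbatimtab}
class Point(val x: Int, val y: Int)

// The companion object holds what would be static members in Java.
object Point {
	val origin = new Point(0, 0)
	def distance(a: Point, b: Point): Int =
		(a.x - b.x).abs + (a.y - b.y).abs
}
\end{verbatimtab}
\end{code}

Clients write \texttt{Point.origin} much as they would access a static field in Java, but the object \texttt{Point} is itself an ordinary value that can be passed around.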

Scala's syntax employs quite a bit of syntactic sugar to provide familiar
syntax on top of a more uniform object-oriented model. Because in Scala every
value is an object, operators on values are actually ordinary methods. Code written as \texttt{3 +
4} is just desugared into \texttt{3.+(4)} on auto-boxed integer values. 
Indeed, this operator-like syntax is enabled not just for 
the default operators in the standard library, but for every method that takes a single parameter. 
Because Scala does not reserve the traditional operator names, this provides
both the capability to use operator names for your own classes and a clean syntax for method calls regardless of method name. 
An extra bit of very useful syntactic sugar is Scala's special handling 
of function call syntax. Using \texttt{a()} where \texttt{a} is an object is
desugared into a method call \texttt{a.apply()}, whereas on the left-hand side
of an assignment the form \texttt{a() = x} is desugared into
\texttt{a.update(x)}. Again this works for all objects: by simply providing the
methods \texttt{apply} and \texttt{update} on an object, this short-hand syntax
becomes available.

This allows us, as a second example, to write a crude version of a simple cell
that holds one \texttt{Int} value:
\begin{code} \begin{verbatimtab}

object OperatorAndParenthesesSyntax {
	class IntCell {
		var contents: Int	= 0
		def apply()		= contents
		def update(i: Int)	= contents = i
		def +=(i: Int)		= contents = contents + i
    }
    def main(args: Array[String]) = {
		val c: IntCell = new IntCell
		println(c())	  //prints 0
		c() = c() + 42	  //converted into c.update(c.apply() + 42)
		println(c())	  //prints 42
    }
}
\end{verbatimtab}
\end{code}
As shown, this \texttt{update} and \texttt{apply} desugaring also works if the
methods take parameters. The syntax for \texttt{Array}s is indeed implemented precisely 
this way, by exploiting uniform desugaring instead of special-case syntax. For
example, arrays of element type A contain methods \mbox{\texttt{apply(i: Int):
A}} as well as \mbox{\texttt{update(i: Int, x: A): Unit}}. Scala will transform
the expression \mbox{\texttt{a(2) = a(3) + 1}} into \mbox{\texttt{a.update(2,
a.apply(3) + 1)}}.


Also shown here is that Scala marks the syntactic difference between immutable and mutable references not by a 
preceding \texttt{final} as in Java, but by using \texttt{val} for the 
declaration instead of \texttt{var}. In keeping with the functional
programming style that mutable data should be avoided, this bit of
Scala syntax makes it convenient to use \texttt{val} by default and only
use \texttt{var} if mutable state is explicitly needed.


\subsubsection{Higher-Order Functions}
Scala is a functional language, thus every function is a value and can be passed 
around and used by other functions. This enables the easy composition of
functionality which allows functional languages to express the essence of algorithms succinctly. 

Scala does not force this style upon the programmer but enables it. However,
method application is a basic operation of Scala and methods are part of
objects, not first-class objects themselves. This fact is concealed by
generating a corresponding first-class object of class \texttt{Function}
whenever a method is used as a first-class value. These \texttt{Function}
objects get their function call syntax by providing an \texttt{apply} method,
using the syntactic sugar explained previously. 

Scala also allows curried functions, which allow applying one argument at a
time, as in the lambda calculus or Haskell. A curried function has a
partitioned formal parameter list. For example,  \texttt{def compare(x: T)(y:T)}
is the curried version of  \texttt{def compare(x:T, y: T)}.
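Supplying only the first parameter list of a curried method yields a function awaiting the rest, which makes specialized variants cheap to define. A small sketch with invented names, using \texttt{Int} for concreteness:

\begin{code}\begin{verbatimtab}
object Curried {
	def compare(x: Int)(y: Int): Int = x - y

	// Partial application: fixing the first parameter list gives
	// a first-class function from the second one to the result.
	val compareToTen: Int => Int = compare(10)
}
\end{verbatimtab}
\end{code}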


By combining higher-order functions with control over the evaluation order, a
language gains the ability to abstract custom control structures. Scala
normally evaluates the arguments of a method application first, in a
\mbox{call-by-value} evaluation order. Specifying that a particular formal parameter
should use \mbox{call-by-name} semantics is possible by syntactically preceding 
the parameter type with `$\Rightarrow$' in the method declaration; this
use of `$\Rightarrow$' echoes the Scala notation for function types.
One can think of the evaluation of the parameter as being postponed by wrapping it
into a function of no arguments, a thunk. Since functions are values, the thunk is considered 
fully evaluated when passed as a parameter and the expression within is left
intact. Scala also automatically converts a block with a result expression of
type T on the caller side into a thunk of type \texttt{$\Rightarrow$ T} when
needed, to enable the following code:

\begin{code}\begin{verbatimtab}
def mywhile(e: => Boolean)(body: => Unit): Unit = {
	if (e) {
		body
		mywhile(e)(body)
	}
}
def main(args: Array[String]) = {
	var i = 10
	mywhile(i > 0) {
		println(i)
		i = i - 1
	}
}
\end{verbatimtab}
\end{code}
Here we combine several new features compared to Java: the function 
\texttt{mywhile} is curried, its first actual parameter is a closure of an
anonymous inline function over the value of the mutable variable \texttt{i}, and its second
parameter is actually a block automatically converted to a thunk of type
\texttt{$\Rightarrow$ Unit}.
 
\subsubsection{Mixins with Traits}

Scala generalizes the well-known system of  
inheritance with a single base class and multiple implementation-less
interfaces by incorporating traits. Instead of extending a base class and
implementing several interfaces, a class can extend a base class and have
several traits mixed into it. Traits fully replace interfaces and are more general. Traits can contain default
implementations of methods as well as variables. A nice example is the \texttt{Ordered} trait from the standard Scala library.

\begin{code}\begin{verbatimtab}
trait Ordered[A] {

  /** Result of comparing this with operand that.
   *  returns x where
   *  x <   0 iff   this < that
   *  x == 0 iff  this == that
   *  x >   0 iff  this > that
   */
  def compare(that: A): Int

  def <  (that: A): Boolean = (this compare that) <  0
  def >  (that: A): Boolean = (this compare that) >  0
  def <= (that: A): Boolean = (this compare that) <= 0
  def >= (that: A): Boolean = (this compare that) >= 0
  def compareTo(that: A): Int = compare(that)
}
\end{verbatimtab}
\end{code} 
If this were an interface, each class that needed to be \texttt{Ordered} would
have to reimplement all of the methods, which is mostly duplication of common
code. In this trait, however, the default implementations depend on
\texttt{compare}, the one abstract method that is left. This means that by just
implementing the method \texttt{compare} and mixing in the \texttt{Ordered} trait, a class
gets a lot of functionality without repetition.

Actually mixing in \texttt{Ordered} would look something like this:
\begin{code}\begin{verbatimtab}
	class Date extends Superclass with Ordered[Date] {
		def compare(that: Date) = { ... implementation ...}
	}
\end{verbatimtab}
\end{code}

So while adding an interface to a class only gives it more obligations 
to its clients, which need to be implemented for each class 
separately, a trait can really modularize behaviour.
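As a sketch of this modularization (with hypothetical trait names, not from the text), traits can even stack behaviour around one another via \texttt{abstract override}:

```scala
trait Doer { def doSomething(): Unit }

trait HelloPrinter extends Doer {
  def doSomething() = println("hello")
}

trait Logger extends Doer {
  // `abstract override` lets this trait wrap whatever concrete
  // implementation it ends up stacked on in the linearization
  abstract override def doSomething() = {
    println("before")
    super.doSomething()
    println("after")
  }
}

object StackingDemo {
  def main(args: Array[String]): Unit =
    (new HelloPrinter with Logger).doSomething() // before / hello / after
}
```

The logging behaviour lives in one trait and can be mixed into any class providing \texttt{doSomething}, instead of being reimplemented per class.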


%%Traits can even be used to get the effect of aspect-oriented around
%%advice
%%object TraitsExample extends Application{
%%    abstract class EmptyBaseClass {}
%%  
%%  trait Doer { def doSomething()  }
%%  trait Twice extends Doer{
%%    def doSomething()
%%    def doSomethingTwice() = { 
%%      doSomething()
%%      doSomething()
%%    }
%%  }
%%trait HelloPrinter extends Doer{
%%     def doSomething() = println("hello")
%%  }
%%trait Logger extends Doer{
%%    abstract override def doSomething() = {
%%      println("before dosomething")
%%      super.doSomething()
%%      println("after dosomething")
%%    }
%%  }
%%
%%The base class we are mixing our traits in is empty 
%%in this case. We define a top trait Doer analogous to an abstract
%%aspect. If we just mix in the HelloPrinter implementation of Doer we
%%can get the following run:
%%
%%    val hp = new EmptyBaseClass with HelloPrinter
%%    hp.doSomething() 
%%output:
%%hello
%%
%%By mixing in the trait Twice we get the extra functionality:
%%    val hp2 = new EmptyBaseClass with HelloPrinter with Twice
%%    hp2.doSomethingTwice()
%%output:
%%hello
%%hello
%%
%%The trait Logger implements what would traditionally be an around 
%%advice. Note that we need to specify the keywords abstract override
%%to be able to use a super call as a proceed().
%%al hp3 = new EmptyBaseClass with HelloPrinter with Twice with Logger
%%    
%%    hp3.doSomethingTwice()
%%
%%
%%
%%output:
%%before dosomething
%%hello
%%after dosomething
%%before dosomething
%%hello
%%after dosomething
%%
 
\subsubsection{Modules with abstract type members and self types}


An important principle when building components is to avoid
hard-coded links. Expressing dependencies of a component either as
formal parameters or abstract members of the component can replace the brittle
use of global state and scope. 


Instantiating an abstracted 
component in the case of parametric abstraction is done by applying
actual parameters. This is well-known from Java as passing constructor
arguments in case of value parameters, or passing type arguments in
case of type parameters of a Java-1.5 generic class. 


Abstraction
through abstract members is possible in Java only for
abstract methods. Instantiation is then performed by creating a fully
concrete subclass and creating an instance of the subclass.
\\

In Scala both parametric abstraction and abstract member abstraction
are supported equally. A class can take both types and values as
formal parameters, and have both abstract type members and abstract value
members. 
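A compact sketch (hypothetical names, assuming nothing beyond the text's claim) showing all four forms of abstraction side by side:

```scala
// type parameter A and value parameter initial; one abstract
// type member and one abstract value member
abstract class Container[A](val initial: A) {
  type Elem        // abstract type member
  val capacity: Int // abstract value member
}

object AbstractionForms {
  // instantiation fills in every abstract member
  val c = new Container[Int](1) {
    type Elem = String
    val capacity = 10
  }
  def main(args: Array[String]): Unit = println(c.initial + c.capacity)
}
```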


The canonical example\citep{odersky:scala-experiment} is a
symbol table component for a compiler. This structure consists of
two mutually dependent subcomponents \texttt{Types} and \texttt{Symbols}.
The dependency can be expressed in Scala by using an abstract type
member.

\begin{code}\begin{verbatimtab}
trait Symbols {
  type Type //abstract because not implemented!
  class Symbol { def tpe: Type }
}

trait Types {
  type Symbol
  class Type { def sym: Symbol }
}

class SymbolTable extends Symbols with Types

\end{verbatimtab}
\end{code}

The mixin composition of these two mutually dependent structures
overrides the abstract definitions with the concrete ones, creating
one class where everything snaps together.
\\

This same concept can be formulated in a parallel way in Scala using
self types. Giving a trait an explicit self type means that a class with
this trait mixed in can only be instantiated when all
the components listed in the trait's self type are mixed in as well. This implies that we
can register dependencies by simply incorporating them into the self type. The self type is then the assumed type of the implicit \texttt{this}
reference within the trait. All elements belonging to the self type, including
dependencies, can be used inside the body, because no actual instance can be
created unless they are present. Self types are optional: if
no self type is explicitly given, it is simply taken to be the class
itself. 

\begin{code}\begin{verbatimtab}
trait Symbols { self: Symbols with Types => // selfname:selftype =>
   class Symbol {def tpe: Type}
}
trait Types { self: Types with Symbols =>
   class Type { def sym: Symbol}
}

\end{verbatimtab}
\end{code}

So components can be built in Scala by \begin{enumerate}
         \item making each discernible piece of functionality a trait,
         \item listing each trait's dependencies in its self type,
		 \item instantiating a full component by mixing the right
traits together.
\end{enumerate}


\subsubsection{Generics with declaration site variance}
The concept of variance comes up when a language combines subtyping with type
parametrization. The question is: how does the subtyping work between two
generic classes with type instantiations that are themselves subtypes of each
other? One simple example is a read-only cell. 
\begin{code}\begin{verbatimtab}
 class A(a: Int) {
    def getVal() = a
    override def toString():String = "A with val "+a.toString
 }
 class B(b: Int) extends A(b) {
    override def toString():String = "B with val "+b.toString
 }
      
 class ReadOnlyCell[+T](elem:T) {
    def get:T = elem
 }
\end{verbatimtab}
\end{code}
The \texttt{`+'} in front of the type parameter T of \texttt{ReadOnlyCell}
declares the class as covariant in T. In this specific case where B
is a subtype of A, written B $<:$ A, a covariant \texttt{ReadOnlyCell} means
\texttt{ReadOnlyCell[B]} should be a subtype of \texttt{ReadOnlyCell[A]}. The
subtyping of the generic class goes in the same direction as the subtyping of
the type parameters. This relation needs to hold because
reading A's from a cell of B's should be safe: every B read from the cell is also
an A. Thus we should be able to use a \texttt{ReadOnlyCell[B]} as a \texttt{ReadOnlyCell[A]}.

\begin{code}\begin{verbatimtab}
    val ro = new ReadOnlyCell[B](new B(42))
    val ro_alias: ReadOnlyCell[A] = ro
    println(ro.get)
    println(ro_alias.get)
prints out: 
    B with val 42
    B with val 42
\end{verbatimtab}
\end{code}

The opposite variance declaration using \texttt{-} also exists and is called
contravariance. This occurs in the predefined type
\texttt{Function1}, which is the class of function objects that take one
parameter.

\begin{code}\begin{verbatimtab}
    trait Function1[-T1, +R] extends AnyRef {
        def apply(v1:T1): R
    }
\end{verbatimtab}
\end{code}

We see in the signature of the \texttt{apply} method that T1 is the type of the 
first argument to the function and R is the return type. Reasoning in this case
can be done by analogy to the safe substitution principle for method
overriding: result types may become more specific in a subtype, while argument types can only become 
more lenient. 

As for read-only cells, functions are covariant in their result type.
However, according to the safe substitution principle, a function is only 
more specific than another if it is more lenient in the types of arguments it
accepts. This is contravariance, annotated with \texttt{`-'}.  

This fundamental concept can be expressed very succinctly in Scala. Scala
avoids the complexities of wildcards as in Java by making the designer of the
class specify the variance. The compiler will then not accept a class when the
declared variance and the signatures of the methods of the class are
in conflict.
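A small self-contained sketch (with its own minimal \texttt{A} and \texttt{B}, mirroring the classes above) of what the contravariant argument position permits:

```scala
object VarianceDemo {
  class A
  class B extends A

  // a handler written for the supertype...
  val describeA: A => String = (_: A) => "handled as A"

  // ...can be used wherever a handler for the subtype is expected,
  // because Function1 is contravariant in its argument type:
  // (A => String) <: (B => String) since B <: A
  val describeB: B => String = describeA

  def main(args: Array[String]): Unit = println(describeB(new B))
}
```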


\subsubsection{Pattern matching and case classes}

In a system with a certain number 
of datatypes and a number of operations over them, the operations can be
implemented either internally as methods or externally as
functions. Implementing the proper datatype-specific behaviour as a
method in each of the subclasses is the object-oriented style. Using an external function 
that pattern matches over the algebraic data types is the functional
style. 

The OO variant makes it easy to extend the system with a 
new subclass since all the original code can be left untouched, while the 
functional style makes it easy to add new operations for the same 
reason. The final goal is a scheme that allows easy modifications in both
directions \citep{wadlerexpressionproblem}. Scala has been a vehicle for further research into these problems. In
\citep{odersky-zenger:fool12} solutions to the expression problem using a
combination of abstract type members, self types and mixins are worked out.
\\

As a hybrid OO-functional language,
Scala does not need a visitor pattern to emulate the functional style but has
this built in. 
All that is needed to enable pattern matching against a
certain class or object is to precede its definition with the keyword
\texttt{case}. Scala thus unifies algebraic data type definitions as in Haskell with object-oriented class hierarchies.


Actually, for a case class declaration the Scala compiler also automatically 
generates \texttt{hashCode}, \texttt{equals} and \texttt{toString} methods based
on the parameters of the default constructor, as well as accessor methods 
for these parameters. To construct an instance of a case class, a companion
object to the class introduces a factory method, so even the `\texttt{new}'
becomes superfluous. This makes case classes a natural fit for value objects and
relatively dumb data.
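A sketch with a hypothetical \texttt{Point} case class showing what the compiler provides:

```scala
case class Point(x: Int, y: Int)

object CaseClassDemo {
  def main(args: Array[String]): Unit = {
    val p = Point(1, 2)       // companion factory method: no `new` needed
    println(p)                // generated toString prints Point(1,2)
    println(p == Point(1, 2)) // generated structural equals prints true
    println(p.x)              // generated accessor prints 1
  }
}
```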

Normally the hierarchy of case classes is externally extensible, but using
the optional keyword \texttt{sealed} on the base type of the hierarchy
disables this and fixes the subclasses to the ones defined in the original
file. This enables more checking in pattern matches: in a pattern match over
this hierarchy the compiler will emit warnings when the match is not
defined for all cases. A very small but still useful example outside of the
tree-processing realm is the Scala rendition of the \texttt{Maybe} concept from Haskell. 
A summarised version of the class hierarchy looks as follows: 

\begin{code}\begin{verbatimtab}
sealed abstract class Option[+A] extends Product {
	def isEmpty: Boolean
	def get: A
}
final case class Some[+A](x: A) extends Option[A] {
	def isEmpty = false
	def get = x
}
case object None extends Option[Nothing] {
  def isEmpty = true
  def get = throw new NoSuchElementException("None.get")
}
\end{verbatimtab}
\end{code}

This forms an idiom for optional values making explicit that the 
value may be missing. In Java one would normally just pass \texttt{null} and 
depend on the receiver checking for \texttt{null} for every optional 
parameter. The Option idiom makes the distinction explicit and 
moves this constraint into the type system. The type system will not allow a
value of \texttt{Option[T]} to be used where one of type T is needed. The
optional value must be unpacked first. In this way the type system forces the
programmer to remember these values could be non-existent and handle both
cases. 

An actual pattern match over an optional value could then look as follows:
\begin{code} \begin{verbatimtab} 
def handleOptionalParam(t: Option[T]) = t match {
	case None    => //the param was not given, set default?
	case Some(x) => //the param was given and the name x refers to the actual value
	case _       => //always matches in a normal pattern match: cannot happen here
}
\end{verbatimtab}
\end{code}

A relatively recent enhancement to pattern matching in Scala is known as
extractors \citep{LAMP-REPORT-2006-006}. Extractor objects enable a
representation interface of case classes that stands between the user pattern
matching over a hierarchy and the implementer of the actual classes. This extra
indirection allows pattern matching while still guaranteeing the representation
independence typical of an OO setting.
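As an illustration (hypothetical \texttt{Email} object, not taken from the cited report), an extractor object supplies \texttt{apply}/\texttt{unapply} methods so that clients can pattern match without the underlying representation being a case class at all:

```scala
// The representation stays a plain String; the extractor mediates
object Email {
  def apply(user: String, domain: String): String = user + "@" + domain
  def unapply(s: String): Option[(String, String)] = {
    val parts = s.split("@")
    if (parts.length == 2) Some((parts(0), parts(1))) else None
  }
}

object ExtractorDemo {
  def main(args: Array[String]): Unit =
    "jelle@example.org" match {
      case Email(user, domain) => println(user + " at " + domain)
      case _                   => println("not an email address")
    }
}
```

The implementer is free to change the internal representation later, as long as \texttt{unapply} keeps working.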



% \subsubsection{Path-dependent types}
% Here is a type of cells using object-oriented abstraction.
% \begin{code}\begin{verbatimtab}
%     abstract class AbsCell {
%        type T
%        val init : T
%        private var value : T = init
%        def get : T = value
%        def set(x : T) = { value = x }
%     }
% \end{verbatimtab}
% \end{code}
% 
%  It is also possible to access AbsCell without knowing the binding
%   of its type member.
%  For instance:                   def reset(c : AbsCell): unit = c.set(c.init);
%  Why does this work?
%      . c.init has type c.T
%      . The method c.set has type c.T ) unit.
%      . So the formal parameter type and the argument type
%         coincide.
%   c.T is an instance of a path-dependent type.
%   


\subsubsection{Implicit parameters}


Scala makes ad-hoc polymorphism like Haskell typeclasses
\mbox{\citep{wadler89how}} possible with a feature called implicit parameters. 

A formal parameter of a method can be preceded by the keyword \texttt{implicit}. 
This allows the method to be called both normally
and without actually providing the corresponding argument. If the argument
corresponding to the implicit parameter is missing in a call, the compiler will
automatically try to find a suitable value and pass it behind the
scenes. Suitable values are marked as being available for use by the compiler
as stand-ins by preceding their declaration with the same keyword
\texttt{implicit}. The compiler will consider all implicit declarations in scope
as possible arguments and
select the single conforming stand-in as the implicit argument. In case no
suitable stand-ins are in scope, or there are multiple options generating
ambiguity, the compiler will emit an error.

This can be used to emulate typeclasses by listing the type class
implementation as an implicit parameter of any function that needs it.
\\

In the following example we wish to have a general testing function that
checks whether the order defined on values is correct. The testing
method will use the \texttt{`<'} operator, which is defined for numerical
values and for \texttt{Ordered} values. To be able to use the same
method also for dates, we decorate \texttt{java.util.Date} with the
\texttt{Ordered} trait.

We define as implicit a conversion function from \texttt{Date} to \texttt{Ordered}. The
implementation of the trait \texttt{Ordered} is complete with just the
implementation of the \texttt{compare} method. 
The testing method is applicable to any type for which an \texttt{Ordered}
implementation can be found. By not requiring subtyping but just a
conversion function, this code can even work with classes whose source we cannot
modify. Dates can then be compared in the testing function just like numeric
values, once this implicit conversion method is pulled into scope by an import statement.


% to be able to emulate a supposed typeclass that
%provides another Integer representation of a datatype. The method
%\texttt{returnIntRepresentation} takes a parameter x and takes as implicit
%parameter a function providing the other Integer representation.

%Both objects impl1 as impl2 provide a suitable method with the
%implicit keyword. Depending on which is currently in scope,
%the provider will automatically find one to complete the method call.
\begin{code}\begin{verbatimtab}
object OrderedImplicit {
  import java.util.Date
  implicit def date2ordered(x: Date): Ordered[Date] = new Ordered[Date]{
    def compare(y: Date): Int = x.compareTo(y)
  }
}
object OrderedTest extends Application{
  import java.util.Date
  val first: Date = new Date()
  val later: Date = new Date(first.getTime() + 10000)
  
  def testOrder[T](left:T,right:T)(implicit isordered: T => Ordered[T]){
    println(left+ " smaller than "+right+": "+(left < right))
  }
  testOrder(1,2)
  import OrderedImplicit._
  testOrder(first,later)
}
\end{verbatimtab}
\end{code}
% \begin{code}\begin{verbatimtab}
% object OrdinaryImplicit {
%   
%   class myList[A](xs: A*) {
%     def length: Int = xs.length
%   }
%   object impl1 {  implicit def intrep[T](x: myList[T]): Int = x.length   }
%   object impl2 {  implicit def intrep[T](x: myList[T]) : Int = 2*x.length }
% 
%   def returnIntRepresentation[T](x: myList[T])(implicit repr: myList[T] => Int) = {
%     repr(x)
%   }
%     
%   def main(args: Array[String]) = {
%     val l = new myList(1,2,3,4)
%     import impl1._
%     println(returnIntRepresentation(l)) //prints 4
%     import impl2._ 
%     println(returnIntRepresentation(l)) //prints 8
%   }
% }
% \end{verbatimtab}
% \end{code}
The exact same mechanism is used in a simpler form for ordinary implicit
conversions. When the types in an expression don't match, because a returned
object is not of the requested type, or a method call is requested on a type that doesn't support it, 
the Scala compiler will look in the current scope for an implicit converter
function that it can automatically plug in between as an adapter. This can be used to 
enrich, from the outside, a provided class for which the source code is not available.

An example from the standard library: 

\begin{code}\begin{verbatimtab}
final class RichChar(c: Char) {
  def isDigit: Boolean = Character.isDigit(c)
  // isLetter, isWhitespace, etc.
}
object RichCharTest {
  implicit def charWrapper(c: Char) = new RichChar(c) //definition of the implicit convertor
  def main(args: Array[String]) {
    println('0'.isDigit)
  }
}
\end{verbatimtab}
\end{code}

% 
% Implicits can also be used as a work-around for a limitation in the trait 
% mixin system of Scala. If we have a trait that takes type parameters, we 
% cannot mix in two versions of the trait with different parameters in the 
% same class. Suppose we try to mixin the representation function on myLists 
% from the first example of the higher example.
% 
% \begin{code}\begin{verbatimtab}
% object NoImplicitsDoubleImplementation {
%   import OrdinaryImplicit.{myList}
%   def main(args: Array[String]) = {
%     trait reprBuilder[S,T] extends myList[T]{
%       def repr[T]():S
%     }
%     trait IntReprBuilder[T] extends reprBuilder[Int,T]{
%       def repr[T]() = this.length
%     }
%     trait DoubleReprBuilder[T] extends reprBuilder[Double,T]{
%       def repr[T]() = this.length.toDouble
%     }
%     
%     class fulllist[A](xs: A*) extends myList[A] { self: fulllist[A] with reprBuilder[Int,A] with 	reprBuilder[Double,A]=>
%     //further stuff here that uses the repr method
%     }
%     val l2 = new fulllist(5,6,7,8) with IntReprBuilder[Int] with DoubleReprBuilder[Int]
%     println(l2 repr)
%   }
% }
% \end{verbatimtab}
% \end{code}
% The compiler will complain on the line where we try and instantiate our list l2 by mixing 
% in the two instantiations of the same trait: ``illegal inheritance; template ... inherits 
% different type instances of trait reprBuilder: reprBuilder[Int, Int] and reprBuilder[Double, Int]''
% 
% 
% 
% Implicits allows us to make the two versions available for use without actually 
% mixing them into the base class. 
% 
% \begin{code}\begin{verbatimtab}
% object WithImplicitsDoubleImplementation {
%   import OrdinaryImplicit.{myList}
% 
%     object IntReprBuilder {
%       implicit def repr[T](x: myList[T]):Int = x.length
%     }
%     object DoubleReprBuilder {
%       implicit def repr2[T](x: myList[T]):Double = x.length.toDouble
%     }
%     def returnRepresentation[T,U](x: myList[T])(implicit repr: myList[T] => U):U = {
%       repr(x)
%     }
%     
%     def main(args: Array[String]) = {
%       val l3 = new myList(5,6,7,8)
%       import IntReprBuilder._
%       import DoubleReprBuilder._
%       println(returnRepresentation[Int,Int](l3))         //prints 4
%       println(returnRepresentation[Int,Double](l3))      //prints 4.0
%     
%   }
% }
% \end{verbatimtab}
% \end{code}
% Unfortunately we still cannot name our implicit methods the same in 
% both objects, but because we can trigger the implicit on the type 
% of the method and not of a object containing the method the name 
% becomes irrelevant and the example works.

% \subsubsection{Structural subtyping}
% Determining whether a type is a subtype of another is commonly
% done either nominally or structurally. Nominal subtyping is the 
% kind we are used to in object-oriented languages: A is a subtype of B 
% if and only if it is declared that way by an A extends B declaration 
% and objects of class A are safely substitutable for objects 
% of class B. If subtyping is structural, we drop the first 
% requirement. Objects can then implement interfaces if they 
% match the interface even if they haven't been declared that 
% way. This is a very nice feature to have when you are integrating 
% code instead of writing it and the interface you would like to use 
% was not used by the implementers. Another use case would in prototype 
% programming to get some typechecking before you freeze everything 
% into named interfaces.
% 
% \begin{code}\begin{verbatimtab}
% 
% package introexamples;
% case class X(x: String){}
% case class Y(y: String){}
% case class Z(z: String){}
% 
% object StructuralSubtyping {
%   type hasY = { def getY():Y } //structural type defined here
%   def printYVal(o: hasY) = println(o.getY().y)
% 
%   def main(args: Array[String]) = {
%     printYVal(new VendorA.classA())        //prints classA.Y
%     printYVal(new VendorB.classB())        //prints classB.Y
%     
%   }
% }
% package VendorA {
%   trait hasXY {
%     def getX(): X
%     def getY(): Y
%   }
%   class classA extends hasXY{
% 	def getX() = new X("classA.X")
%         def getY() = new Y("classA.Y")
%   }
% }
% package VendorB {
%   trait hasYZ {
%     def getY() : Y
%     def getZ() : Z
%   }
%   class classB extends hasYZ{
%     	def getY() = new Y("classB.Y")
%         def getZ() = new Z("classB.Z")
%   }
% }
% 
% \end{verbatimtab}\end{code}

\newpage
\section{Implementing the Simply Typed Lambda Calculus in Scala}
 
A toy interpreter for the Simply Typed Lambda Calculus (STLC) can be
structured like a pipeline, adopting parts of a reference architecture for
compilers \citep{appelcompiler}. Each stage performs an operation on the
current program representation or translates between representations.

The front end parses the textual
representation into an abstract syntax tree. Then, the
typechecker checks whether the program is valid. Finally, the
evaluator performs the actual computation in order to arrive at
the result of the program.

This leads to the following structure for an interpreter:
\begin{enumerate}
  \item A representation for the abstract syntax tree as a datatype
  of tree nodes
  \item A parser that transforms a text representation into this tree
  representation
  \item A pretty printer to show the tree representation of the result as
  text
  \item A typechecker that validates a tree representation
  \item An evaluator that reduces a tree representation to a value
\end{enumerate}
The large-scale dependencies in this structure are formulated in Scala using
explicit self types, consisting of those interfaces a component depends on.
 The general top level \texttt{Interpreter} trait mentions the required
subcomponents for the different operations. The general trait 
for the typechecker is also given as a second example. Besides the abstract
syntax definition, this subcomponent is also coupled to the pretty printer to enable nicely
printed error messages.

\begin{code}\begin{verbatimtab}
trait Interpreter { self: Interpreter with Evaluator with TypeChecker 
      with PrettyPrinter with TextToAbstractSTParser with AbstractSyntax =>
  def interpret(line:String):String = {
    val canontree = parse(line) //from TextToAbstractSTParser
    typeCheck(canontree)        //from TypeChecker, throws typeException on error
    val result = evaluate(canontree) //from Evaluator
    prettyPrint(result)              //from PrettyPrinter
  }
}

//The typechecker also gets a prettyprinter mixed in to print errors textually
trait TypeChecker { self: TypeChecker with PrettyPrinter with AbstractSyntax =>
   def typeOf(t: LTerm) :LType }

class SimplyTypedInterpreter extends Interpreter with SimplyTypedEvaluator 
      with SimplyTypedTypeCheck with SimplyTypedPrettyPrinter
      with SimplyTypedTextToASTParser with AbstractSyntax{ }

\end{verbatimtab}
\end{code}
An interpreter instance can be launched by instantiating an object of
class \texttt{SimplyTypedInterpreter}. This class is formed by mixing in the
corresponding implementation trait for each component interface trait 
mentioned in the self type.


\subsection{Tree representation}
For the representation of the abstract syntax nodes, the implementation uses
 DeBruijn indices, following the examples in `Types and Programming
 Languages' \mbox{\citep{TAPL}}. Where in written examples of the STLC an
 occurrence of a variable refers to the innermost formal parameter with an identical
 name, in the DeBruijn index representation this linking occurs numerically. 
 %Using DeBruijn indices for variable representation means that the link between an occurrence of a
 %variable and the function where this variable was introduced as formal
 %parameter happens numerically. 
 The index of a variable refers to the outward distance between the
 occurrence and the declaring scope. A variable
 with index 0 is bound in the innermost scope; index 1 refers to the formal parameter that is introduced one level of scope
 higher. As an example, \mbox{$\lambda$f:Bool$\Rightarrow$Bool . $\lambda$ 
 b:Bool .  f b} has as DeBruijn index form \mbox{
 $\lambda$:Bool$\Rightarrow$Bool . $\lambda$:Bool . 1 0}. 
 This nameless representation solves the problem of $\alpha$-equivalence
 \citep{barendregt88introduction} between equivalently structured expressions
 that differ only in the chosen variable names. Indeed, two different but
 equivalent expressions like \mbox{$\lambda$x:Nat. x} and \mbox{$\lambda$y:Nat. y}, both of
 which return their argument, are represented the same as \mbox{$\lambda$:Nat. 0}. Thus DeBruijn indices form a canonical representation where $\alpha$-equivalence is
the same as simple equality.
% This is a canonical nameless representation , so
%two different but equivalent expressions like $\lambda$:Nat x.x and
%$\lambda$:Nat y.y, both of which return their argument, are represented the
%same as $\lambda$:Nat 0.
\\
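The canonical-representation property can be made concrete with a toy AST (simplified, hypothetical node names; the actual implementation follows below):

```scala
sealed trait Term
case class Var(n: Int) extends Term
case class Lam(body: Term) extends Term
case class App(f: Term, a: Term) extends Term

object DeBruijnDemo {
  // λ.0 : the identity, the variable refers to the innermost binder
  val id = Lam(Var(0))
  // λ.λ.1 : returns the outer argument, one binder further out
  val konst = Lam(Lam(Var(1)))

  // α-equivalent terms have identical nameless representations,
  // so structural case-class equality decides α-equivalence
  def main(args: Array[String]): Unit = println(id == Lam(Var(0)))
}
```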

We model the nodes as case classes in straightforward
correspondence to the grammar of the simply typed
lambda calculus as on page \pageref{STLC_grammar}.
\begin{code}\begin{verbatimtab}
trait AbstractSyntax  { self : AbstractSyntax => 
sealed trait LTerm 
case class Var(n: Int) extends LTerm
case class Lam(hint: VarHint, ty: LType, body: LTerm) extends LTerm
case class App(funt:LTerm, argt:LTerm) extends LTerm
...


sealed trait LType extends LTerm
case class TyBool() extends LType
...
//Function type with argument fundom and result funrange
case class TyArr(fundom: LType, funrange: LType) extends LType 
}
\end{verbatimtab}
\end{code}

\subsection{Parsing and printing}

We use the parser combinator library that is included with the Scala
standard library to implement the parser.
%Since the Scala standard library
%includes a parser combinator library, it is natural to use this framework when
%implementing a parser.

This library makes it easy to implement a parser from the linear text
representation to a tree form. However, this tree form still references
variable occurrences using their name instead of their DeBruijn index.
%onlyto
%some representation where variables are still defined by their textual name
%instead of their DeBruijn number.


A new trait \texttt{TextualSyntax} contains a syntax tree hierarchy parallel to
that in \texttt{AbstractSyntax}. The trait
\texttt{SimplyTypedLambdaToTextualParser} contains a parser built in the
parser combinator framework to parse text into these concrete syntax
trees. 

An additional component then performs the recursive transformation from the syntax tree with textual variables to the abstract
syntax tree with DeBruijn variable indices.

The identification of components with Scala traits with self types allows
hiding this subdivision of responsibility in the parser. The top-level
interpreter trait just depends on a parser \texttt{TextToAbstractSTParser}
from code text to abstract syntax. It just so happens that the implementation of
this parser is itself a composite component. The trait \texttt{TextualSyntax}
is used strictly internally in both of the subcomponents and never leaks to the
self type of our top-level parser.

\begin{code}\begin{verbatimtab}
trait TextToAbstractSTParser { self: TextToAbstractSTParser with AbstractSyntax =>
    def parseToAST(str: String) : Option[LTerm]
}

trait SimplyTypedTextToASTParser extends TextToAbstractSTParser 
   with SimplyTypedLambdaToTextualParser with
   SimplyTypedTextualToAbstractTransform with TextualSyntax { 
   self: SimplyTypedTextToASTParser with AbstractSyntax => ...
}
\end{verbatimtab}
\end{code}


The component \texttt{SimplyTypedTextualToAbstractTransform} 
shares a lot of structure with the pretty printer. 
The first can be seen as a function from textual syntax nodes to abstract syntax
nodes. The second takes abstract syntax nodes to string representations.


Both need to gather context about variable bindings as they traverse the tree.
The first needs to remember the naming hints derived from the original input to find 
the distance to the closest scope binding the variable name. The second
remembers what naming hints it has already encountered in order to create new unused variable names to make a clearly readable representation.

Both are implemented as \texttt{Function} objects whose \texttt{apply} method provides 
the specific functionality. Making them real subclasses of \texttt{Function} 
even enables their use in higher-order functions just like any other 
function.


The functionality for adding data to the context was lifted into a common
ancestor, \texttt{ContextualFun}. The differing functionality is split up between some 
helper functions and a different \texttt{apply} method.
However, the \texttt{extend} method that creates a new object with a
properly extended context cannot naively be lifted into
\texttt{ContextualFun}. To be able to chain calls properly, this method needs
to declare a return type equal to that of the actual work-performing subclass,
not that of the superclass \texttt{ContextualFun} to which it is delegated.

One way to make this structure inheritable, without overriding methods in a
subtype just to further constrain the return type, is to introduce a factory
method \texttt{create} whose return type is a type parameter representing the
concrete subclass. An upper bound on that parameter expresses that it must be
a subtype of \texttt{ContextualFun}. Each subclass can then instantiate the superclass with the correct
type parameters, and the factory method is properly typed without any overrides.

\begin{code}\begin{verbatimtab}
trait ContextualFun[A,B, Self <: ContextualFun[A,B,Self]] 
                                  extends Function1[A,B]{
  type Ctx = StringBindingContext[NameBinding]
  val ctx: Ctx
  def extendtransformed(str:String, trans:String=>String) = {
    val ctx2 = ctx + (trans(str), new NameBinding(trans(str)))
    create(ctx2) }
  def extend(str:String) = extendtransformed(str,hintTransform)
  val hintTransform:(String =>String)
  def create(ctx: Ctx) : Self
  def pickUniqueName(n: String) : String = {
    if (ctx.isNameBound(n)) pickUniqueName(n+"'")
    else (n)
  }
}

trait PrinterCtxFun extends ContextualFun[LTerm,String,PrinterCtxFun] {
    def create(ctxnew: Ctx) = new PrinterCtxFun { val ctx = ctxnew }
    def apply(t:LTerm):String = ... }
trait DeBruijnifyCtxFun extends ContextualFun[RawTerm,LTerm,DeBruijnifyCtxFun]{
    def create(ctxnew: Ctx)=  new DeBruijnifyCtxFun {val ctx = ctxnew}
    def apply(t:RawTerm):LTerm = ... }
\end{verbatimtab}
\end{code}
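The payoff of the \texttt{Self} type parameter can be illustrated with a small,
self-contained sketch. The names \texttt{Accumulator} and \texttt{Renderer} below
are hypothetical stand-ins, not part of the interpreter code: they play the roles
of \texttt{ContextualFun} and its concrete subclasses.

```scala
// Illustrative version of the Self-type-parameter pattern.
trait Accumulator[Self <: Accumulator[Self]] {
  val items: List[String]
  def create(items: List[String]): Self          // factory, as in ContextualFun
  def add(s: String): Self = create(items :+ s)  // analogue of extend
}

class Renderer(val items: List[String]) extends Accumulator[Renderer] {
  def create(is: List[String]) = new Renderer(is)
  def render: String = items.mkString(",")       // subclass-specific member
}

// Chaining preserves the concrete type: render is still available
// after two add calls, without overriding add in Renderer.
val out = new Renderer(Nil).add("a").add("b").render  // "a,b"
```

Had \texttt{add} been declared with return type \texttt{Accumulator}, the chained
call to \texttt{render} would not typecheck; the type parameter removes the need
to override \texttt{add} in every subclass just to narrow its return type.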


\subsection{Typechecking}
As mentioned previously, typechecking for the simply typed lambda
calculus is `syntax-directed'. The syntactic structure of the term being
examined fully specifies which single typing rule is applicable.


This shows nicely in an implementation of the recursive algorithm from page~\pageref{STLC_typing}
as a pattern matching function. Each typing rule of the algorithm is
reflected as a leg of the pattern match. 

\begin{code}\begin{verbatimtab}
trait TypeChecker { self: TypeChecker with PrettyPrinter with AbstractSyntax =>
  def typeOf(t: LTerm) :LType
}
trait SimplyTypedTypeCheck extends TypeChecker{self: TypeChecker 
		with PrettyPrinter with AbstractSyntax =>
...
  def typeOf(t: LTerm)= t match {
    case Tru()        => TyBool
      ...
    case Var(n)       =>   { 
      recallTypeOfVar(n)
    }
    case Lam(hint,ty,body) => {
      TyArr(ty, rememberingTypeOfVar(hint,ty).typeOf(body))
    } ...
\end{verbatimtab}
\end{code}
The base type case is covered by implementing branches linking each built-in
term to its specific base type.  When encountering a variable, the algorithm
uses a helper method which, based on the DeBruijn index, looks up the
remembered type annotation for this variable. The counterpart to this is the branch
for a new function. Here the algorithm needs to remember the type declaration
on the formal parameter, which also forms the domain type for the function. 
The type of the body is then calculated with that extra piece of information. 
The function \texttt{rememberingTypeOfVar} constructs the extended typing context 
used by \texttt{recallTypeOfVar}.
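How such a helper pair can work is sketched below in a minimal, self-contained
form. The \texttt{List}-based representation and the names \texttt{TypingCtx},
\texttt{remember} and \texttt{recall} are assumptions for illustration; the actual
implementation builds on the binding-context machinery shared with the parser.

```scala
// Hypothetical sketch of a DeBruijn-indexed typing context,
// stored as a List where index 0 is the innermost binder.
sealed trait LType
case object TyBool extends LType
case class TyArr(dom: LType, range: LType) extends LType

class TypingCtx(ctx: List[LType]) {
  // analogue of rememberingTypeOfVar: push the new binding in front
  def remember(ty: LType): TypingCtx = new TypingCtx(ty :: ctx)
  // analogue of recallTypeOfVar: the DeBruijn index is the list position
  def recall(i: Int): LType = ctx(i)
}

// Entering two nested binders: the innermost one ends up at index 0.
val ctx = new TypingCtx(Nil).remember(TyBool).remember(TyArr(TyBool, TyBool))
```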


In the STLC, errors can only occur at function 
application. Composition of functions is only correct if the argument has the expected type. 
If the argument has the wrong type, or the term in function position is not 
of function type at all, the program is faulty and will be refused.

\begin{code}\begin{verbatimtab}
    case App(tfun,targ)  => {  
      val funty = typeOf(tfun)
      val argty = typeOf(targ)
      funty match {
        case TyArr(fundom, funrange) => {
          //concrete and formal parameter types need to match
          if  (argty == fundom ) funrange 
          else throw new TypeExc("wrong application argument type: ...")
        }
        case _ => throw new TypeExc("applying to non function type: ...  ")
      }}}}
\end{verbatimtab}
\end{code}


\subsection{Evaluation}
Evaluation too is a straightforward transcription of a simple algorithm. 
Reducing a full term to a value 
can be implemented by repeatedly taking a single local step of
evaluation. In the STLC
the only place where actual work is performed is function application.
Function application is performed by replacing the whole application by the
body of the function, with the argument value suitably inserted. This requires
an implementation of the substitution process
suited for the specific term representation as background machinery. 

Because of this separation of concerns, the evaluation function can become an
uncluttered pattern matching function on a given term in abstract
syntax. The code is a straightforward reflection of evaluation drilling down to 
the level of a function application. In the case of an application syntax
node, the algorithm branches depending on whether the argument needs
further reduction or not. Because pattern match clauses are tried from top to
bottom, the most specific case must come first. Choosing
\mbox{call-by-value} semantics means that
function arguments need to be fully reduced 
to values before the function can be applied. Thus the pattern match leg that
tests for a fully reduced argument comes first.
Afterwards come the cases where, by fall-through, either the function or the argument needs
further reduction before performing the substitution.


\begin{code}\begin{verbatimtab}
trait Evaluator { self: Evaluator with AbstractSyntax =>
   def evaluate(t: LTerm) : LTerm  
}
trait SimplyTypedEvaluator extends Evaluator 
   with SimplyTypedDeBruijnSubstitution{ self: SimplyTypedEvaluator 
                                             with AbstractSyntax =>

   def eval1(t: LTerm): LTerm = t match {
   
    //E-AppAbs: Reduction is possible
    case App( Lam(hint,ty,body), v) if (isValue(v))
      => substituteterm(v).asTop.intoterm(body)
          
    //E-App2: we have a function but need an argument
    case App(v1, t2)             if (isValue(v1))  
      => App(v1 ,eval1(t2))
      
    //E-App1: we still need to reduce our function
    case App(t1, t2)                             
      => App(eval1(t1), t2)
      
   }}
\end{verbatimtab}
\end{code}
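The outer loop that drives \texttt{eval1} to a normal form is not shown above.
A generic version of the ``repeat one step until no rule applies'' driver could
look like the sketch below, where \texttt{Option} signals a stuck term; in the
actual implementation \texttt{eval1} instead simply fails to match.

```scala
// Generic sketch of repeated single-step evaluation; `step` returns
// None when no reduction rule applies, i.e. the term is a normal form.
def evaluate[T](step: T => Option[T])(t: T): T =
  step(t) match {
    case Some(t2) => evaluate(step)(t2)
    case None     => t
  }

// Toy usage with integers standing in for terms: repeatedly subtract
// one until zero, the "normal form" of this toy reduction system.
val result = evaluate[Int](n => if (n > 0) Some(n - 1) else None)(5)  // 0
```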
The full evaluation trait also contains the function \texttt{isValue()}. Whether
to put this functionality in the abstract syntax definition as a normal 
method or in the evaluation trait, as done here, is a matter of taste.
On the one hand, it needs to be kept in sync with the definition of the 
abstract syntax tree. On the other hand, it is only used during 
evaluation. The same design choice needs to be made regarding the
implementation of substitution. In this implementation it is factored out
into a separate trait acting as a subcomponent of the concrete evaluation trait.
\\
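For reference, what \texttt{isValue} amounts to in the STLC can be sketched in a
self-contained form. The constructors below are simplified stand-ins for the
interpreter's abstract syntax (in particular, \texttt{Lam} here omits the type
annotation carried by the real \texttt{Lam} node): abstractions and the built-in
constants are the only values.

```scala
// Simplified stand-in AST, for illustration only.
sealed trait LTerm
case class Tru() extends LTerm
case class Var(index: Int) extends LTerm
case class Lam(hint: String, body: LTerm) extends LTerm
case class App(fun: LTerm, arg: LTerm) extends LTerm

// A term is a value when no evaluation rule applies to it on its own:
// abstractions and built-in constants, but never an application.
def isValue(t: LTerm): Boolean = t match {
  case Lam(_, _) | Tru() => true
  case _                 => false
}
```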

The \texttt{TermSubstitutionProvider} trait exposes an interface like\\
\mbox{\texttt{substituteterm(arg).asTop.intoterm(body)}}. This fluent style is
implemented using a chain of methods and abstract classes. Each method
returns another small object with the next methods in the chain defined
on it. 
Scala's supporting features for family polymorphism
\citep{ScalableComponentAbstractions} make it easy for the concrete
trait to subclass this whole structure. Any premature hard links between
the abstract classes can be avoided by linking to an 
abstract type instead. The abstract
types are bounded by an abstract class defining the minimum exposed interface.

\begin{code}\begin{verbatimtab} 
trait TermSubstitutionProvider { self: TermSubstitutionProvider with AbstractSyntax =>

  type partial <: examplepartial
  type topsubst <: exampletopsubst
  
  def substituteterm(v:LTerm):partial
  abstract class examplepartial(v:LTerm){
    def asTop: topsubst
    override def toString="[? := "+v+"]"
  }
  abstract class exampletopsubst(v:LTerm){
    def intoterm(term:LTerm):LTerm
    override def toString="[top := "+v+"]"
}}
\end{verbatimtab}
\end{code}
The extending trait can then define subclasses of the different
syntax building classes. By implementing the binding
type members, the knot between the subclasses is fully tied.

\begin{code}\begin{verbatimtab} 
trait SimplyTypedDeBruijnSubstitution extends TermSubstitutionProvider 
              {self: TermSubstitutionProvider with AbstractSyntax=> 
              
  type partial = bruijnpart
  type topsubst = bruijntopsubst
  
  def substituteterm(v:LTerm)= new bruijnpart(v)
  class bruijnpart(v:LTerm) extends examplepartial(v) {
    def asTop = new bruijntopsubst(v)
  }
  class bruijntopsubst(v:LTerm) extends exampletopsubst(v) {
    def intoterm(term: LTerm)= ...
}}
\end{verbatimtab}
\end{code}


\newpage
\bibliographystyle{abbrvnat}
\bibliography{types}



\end{document}
