%%This is a very basic article template.
%%There is just one section and two subsections.
\documentclass[a4paper,11pt]{report}

%\sloppy %adds necessary line breaks
\usepackage[a4paper]{geometry}

\usepackage[latin1]{inputenc}
\usepackage{calc}
%\usepackage{setspace}
\usepackage{graphicx}
\usepackage{multicol}
\usepackage[normalem]{ulem}
%% Please set your language here
\usepackage[english]{babel}
\usepackage{color} 
\usepackage{hyperref}
\usepackage{natbib}

\usepackage{moreverb}



%%\newenvironment{code}{\catcode`\@=\other\relax\small\verbatimtab}{%
%%\endverbatimtab\normalsize\catcode`\@=\active}
\newenvironment{code}{ \begin{verbatimtab} } { \end{verbatimtab} }
%\newenvironment{verbatimtab} {} {}
 
\definecolor{darkblue}{rgb}{0, 0, 0.4}
%%\newenvironment{code}  {\begin{quote} \small \darkblue \sf } { \end{quote}}
%%\newenvironment{code} {\small } { }

%usage of package listings tnx to Adriaan

\usepackage{listings}
\newcommand{\codein}[1]{\lstinline{#1}}

\newenvironment{listingwide}{\begin{lstlisting}[xleftmargin=-0.5cm,xrightmargin=-1.6cm] }{ \end{lstlisting}}
% The literate mappings below use \toplus and \tominus, which are otherwise
% undefined; plain definitions (assumed glyphs) are provided so the file compiles.
\newcommand{\toplus}{\mathrel{\stackrel{+}{\to}}}
\newcommand{\tominus}{\mathrel{\stackrel{-}{\to}}}




\lstset{ literate= {=>}{$\Rightarrow$}{2}
  {->}{$\to$}{2}
  {-(+)>}{$\toplus$}{2}  
  {-(-)>}{$\tominus$}{2}  
  {<-}{$\leftarrow$}{2}
  % {\\}{$\lambda$}{1}
  {<~}{$\prec$}{2}
  {<|}{$\triangleleft$}{2}
  {<:}{$<:$}{1}
}

\lstdefinelanguage{Scala}{% 
       morekeywords={% 
                try, catch, throw, private, public, protected, import, package, implicit, final, trait, type, class, val, def, var, if, this, else, extends, with, while, new, abstract, object, requires, case, match, sealed, override},% 
         sensitive=true, % 
   morecomment=[s]{/*}{*/},morecomment=[l]{//},% 
   escapeinside={/*\%}{*/},%
   rangeprefix= /*< ,rangesuffix= >*/,%
   morestring=[d]{"}% 
 }
 
\lstdefinelanguage{Haskell}{%
   otherkeywords={=>},%
   morekeywords={abstype,break,class,case,data,deriving,do,else,if,instance,newtype,of,return,then,where},%
   sensitive,%
   morecomment=[l]--,%
   morecomment=[n]{\{-}{-\}},%
   morestring=[b]"%
  }

%  numberbychapter=false,
\lstset{breaklines=true,language=Scala} 
%\lstset{basicstyle=\footnotesize\ttfamily, breaklines=true, language=scala, tabsize=2, columns=fixed, mathescape=false,includerangemarker=false}
% thank you, Burak 
% (lstset tweaking stolen from
% http://lampsvn.epfl.ch/svn-repos/scala/scala/branches/typestate/docs/tstate-report/datasway.tex)
\lstset{
    xleftmargin=-0.5em,%
    frame=single,%  TODO REMOVE only for floating listings
    captionpos=b,%
    fontadjust=true,%
    columns=[c]fixed,%
    keepspaces=true,%
    basewidth={0.56em, 0.52em},%
    tabsize=2,%
    basicstyle=\renewcommand{\baselinestretch}{1.0}\small\tt,% \small\tt
    %Adriaan has 0.97
    commentstyle=\small\tt,%
    keywordstyle=\bfseries,%
    belowcaptionskip={-10pt}
}









\bibpunct{[}{]}{,}{a}{}{;} 










\begin{document}



\title{Type Systems and Datatype-Generic programming in Scala}
\author{Jelle Pelfrene}
\date{}

%\maketitle


\pagenumbering{roman}
 
\tableofcontents





\newpage
\section*{Acknowledgements}
\addcontentsline{toc}{subsection}{Acknowledgements}
I would like to thank my supervisors, Prof. Dr. ir. Frank Piessens and Prof. Dr. ir.
Wouter Joossen, and my advisor, ir. Adriaan Moors. They helped me carry out a
thesis of my own on programming languages, and guided me with their extensive
knowledge of the field when it was all still hazy to me.

I would also like to thank my fellow students. Where else do you find that
camaraderie and those typical conversations, with humour understood only by
computer science students.

Finally, a word of thanks to my parents, brother and grandparents for their
years of support, and certainly not forgetting my fianc\'ee, Tamarah.


\newpage
\section*{Summary}
\addcontentsline{toc}{subsection}{Summary}
Programming is a craft. Both workmanship and materials determine a good
result. The development of new, better programming languages is therefore
crucial.

When is one such language better than another? Opinions on this differ
enormously, but sometimes a new generation of languages emerges that suddenly
makes programming a great deal more pleasant. Such a language eliminates
chores, lets the programmer express in general terms what he wants, and does
not force him to repeat the same filler phrases again and again. Most
important of all: when you spontaneously make fewer mistakes while
programming, mistakes you would otherwise have to hunt down later, you have
made real progress.

Many of these frustrations fall under the header boilerplate: boilerplate is
code that is not specific to the problem or the solution, but that you must
write for the program to work. Boilerplate code arises when the language does
not allow certain pieces of code to be captured once and for all in an
abstraction.
\\
\\
Theoretical programming language research often concentrates on a functional
model of computation that abstracts away from the hardware. For decades,
innovations have trickled down from these languages into more mainstream
programming languages, which have meanwhile evolved into the familiar
object-oriented languages such as Java.
Functional languages such as Haskell combine conciseness with expressive power
and a strong type system. Such a type system is like a harness that protects
you against mistakes, comparable to checking dimensions in a physics
calculation. But learning such a language sometimes feels like landing in a
foreign country where everything is done differently and you do not understand
a single word.

Hence the need for languages that package the same power in an
easier-to-digest dose. The language Scala is such a new
hybrid object-oriented and functional language.
\\
\\ 
The first part of this thesis gives an introduction to type systems and the
language Scala, with as case study the construction of an interpreter for a
model of a simple typed functional language. Even though Scala allows this in
a more concise and more modular way than Java, for some of the parts that
handle variables you would much rather be able to type ``and this is
trivial'' instead of spelling out the code.


In the second part of the thesis I study a new trend within typed functional
languages, datatype-generic programming. It aims precisely at making code
even more concise and general by handling different data structures
abstractly. As a result, some code no longer needs to be written at all, and
in other contexts it becomes genuinely possible to omit all the uninteresting
cases.

Code written in this style promises to be shorter, more general, more
flexible and more reusable.
\\

On the one hand, applying this style in Scala lets us write a more powerful
kind of interface. Not only traditional object-oriented interfaces can be
improved this way, but also constantly repeated code, such as checking for
null pointers or traversing a structure with a loop, becomes packageable in
an interface. I have ported several of these abstractions to Scala, and
identified problems and extensions that are needed to make elegant use of
them.


On the other hand, datatype-generic programming has also led to techniques
for replacing many instances of iteration over the elements of irregularly
shaped hierarchical structures. I have carried several approaches over to
Scala, lying at different points on the spectrum between elegantly usable and
generally applicable.
As a case study, a somewhat more intricate version of the interpreter from
the introduction is adapted. In this version the code that handles variables
and substitution is converted to a different implementation technique, and a
variation on the preceding techniques is then used to strip the trivial cases
out of this code.
\\
The abstract interfaces do indeed lead to very reusable, general code.
The case study shows that removing the trivial cases succeeds: the code
becomes more concise, better modularized and clearer.

There is still room for improvement. A few small additions to the language
are needed to fully reap the benefits of the more abstract interfaces in a
style that is more conventional for many programmers. Moreover, the
mechanisms that now allow the trivial code to be stripped from the case
study themselves require supporting code that drives the amount of code back
up.


 
 
 
 
 
 
 
 
 
 
\chapter{Introduction}
\pagenumbering{arabic}
Programming is both an art and a profession. Both know-how and good tools are
instrumental for a good result. Because a programmer's primary tool is his
programming language, the development of new and better programming languages
is crucial to succeeding at the ever greater scale of programs demanded by
industry.

The question when one programming language is better than another is not an
easy one.

Different people have vastly different opinions, but sometimes a new
generation of languages comes to the foreground that really does make
programming more convenient.
Such a next-generation language eliminates the small frictions of
programming: it is more expressive, lets you say precisely what you want, and
lets you say it only once. And the most important criterion: noticing that a
language is better because it, by itself, causes you to make fewer errors.

A lot of the common frustrations fall under the header ``boilerplate code''.
Boilerplate code is all the code that is not related to the problem or the
solution, but must be written simply to make the program work. It is the code
you write again and again because your language does not allow enough
abstraction.

With the explosion of new languages over the last few years, I noticed one
subject that was a source of constant strife: type systems.
Language designers as well as programmers seemed deeply divided over this
notion, and I wanted to learn more about the topic. In short, a type system
is a mechanism within the language that keeps you from discovering mistakes
only when it is too late, by checking that the structure of your program
makes sense, like dimensional analysis for physical calculations.

The hard core of theoretical language and type system research focuses on a
functional model of computation that abstracts away from the hardware. For
decades now, features from this community have slowly entered more mainstream
programming languages.

Among the most interesting new languages is the programming language Scala.
Scala is gaining prominence by aiming for the unification of functional and
object-oriented programming, coupled with a strong type system.
\\
The first part of this thesis gives an introduction to type systems and Scala.
This consists of a
sketch of type systems as a mathematical and engineering construct. Then the
lambda calculus as a model of programming and computation is
introduced, together with how a simple type system works for this example. An
extensive chapter introduces the Scala language and gives a quick example-based
tutorial. As a case study we treat the structuring of a toy interpreter for the
lambda calculus.

Even though Scala allows this in a way that is a lot
more concise and modular compared to Java, some parts still smelled of
boilerplate code, waiting for a good abstraction to come along.
\\

In the second part of the thesis, I study a new trend within advanced
typed functional languages: datatype-generic programming. This programming
technique aims squarely at the elimination of boilerplate code. It
allows writing more concise, general and reusable code by treating whole
data structures in an abstract way. Using this style, some code need not be
written anymore at all, while in other contexts you can give only the
interesting parts of a calculation and let the rest follow by analogy.

The question of this thesis is whether the abstraction mechanisms of Scala
suffice to write such shorter, general, flexible code. 
\\

A first point of attack uses this style to write a more powerful system of
interfaces. This can not only improve traditional object-oriented interfaces,
but also allows abstraction over whole new kinds of code. The endless stream of
loops and bookkeeping code like checking for null pointers can be abstracted
away. 
I have ported some of these abstractions to Scala, and identified design
problems and language extensions that are needed to allow such abstraction in
an elegant way. 

A second point of attack for datatype-generic programming is the traversal
over the elements of irregular structures. I have explored different
techniques in Scala, lying on a spectrum between very powerful and elegant,
and more widely usable.

One of the methods is applied to an extended version of the case study
interpreter, showing how the boilerplate related to variable handling can be
tackled.

Finally, we focus on the lessons learned.












\chapter{Introduction to static type systems}


Historically, the progress of computer science can be seen as a road to
ever-higher levels of abstraction. There is a struggle to find and define the
right abstractions and methods of abstraction. This process is mirrored
in program development by programmers searching for the right abstractions
for a particular program. Just as the programmer then has to implement his
carefully chosen abstractions correctly and efficiently, so too language
developers develop the machinery to correctly and efficiently implement ever
higher abstractions.


As always, it is extremely important to get these foundations right.
Since the abstraction techniques supported by a programming language influence
the ease and cleanliness with which programs in that language can be
written, work in programming languages will pay double dividends for everyone 
building on them. 


Type systems have been an important driver in programming language research for 
decades and profoundly influence the languages we think and
program in.
Unlike other design considerations, type systems are more mathematically
grounded because of a deep correspondence with
logic \citep[section 9.4]{TAPL}. 

\subsubsection*{Type checking in the program lifecycle}
A computer program takes different forms during the development cycle. The 
programmer typically works on a textual representation. The end-user cares only 
about the run-time behaviour, given by the final binary form. The compiler 
stands between both: it transforms a syntactically correct program text into a 
tree form, analyses this tree and finally generates the code that implements 
the high-level behaviour specified. 
\\

This middle step of analysis is highly language-specific. Languages
that are called `dynamic' let you run any program that adheres to the syntax.
These are also known as scripting languages, with well-known
instances such as Python and Ruby. Other languages are more `static' and
perform more advanced analyses on a program before allowing it to be
executed. Haskell, Java and Scala are examples of this category of
languages.

We will study the theory and practice of type
checking, a common correctness-validating analysis run at the
beginning of the analysis stage. After program
correctness has been verified, typically many more analyses are run to
optimize the program.

\subsubsection*{Early detection of errors}
Programs need further verification besides syntactic checking because 
the syntax allows statements that correspond to clearly nonsensical 
behaviour. For example, in an object-oriented language a method call 
can be syntactically correct but actually refer to a non-existent 
method. Or possibly the reference to the receiver object is not 
valid but a null reference, or a write to an array exceeds its bounds.

Programming languages need to either specify a way to deal with these
situations or prevent them from occurring. At one extreme, fully dynamic
languages check everything at runtime before performing an
operation. If the next operation would be illegal, they throw an exception at
that moment, signifying that the message was not understood or that a null
pointer is being dereferenced.
Static
languages use compile-time analysis to guarantee that some categories of
invalid operations will never occur during an execution of the program.

A static type system uses classification to build an
abstraction of the program. Every expression is classified according to the
type of values generated on execution. By only allowing expressions to be
combined in compatible ways, the type system proves
that prohibited error classes can never occur in validated
programs. For example, in an object-oriented language the type system 
will typically check method calls on objects. It will classify objects by
their interfaces and prove that
all objects called will contain the called method. If the method is not defined,
the programmer will be alerted by the compiler and the program is refused.
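As a concrete illustration of such interface-based classification (the names below are illustrative, not taken from the thesis), the compiler classifies the receiver by its interface and verifies that the called method exists before the program may run:

```scala
// The compiler classifies `g` by the interface `Greeter` and proves
// statically that every object passed in will contain `greet`.
trait Greeter { def greet(name: String): String }

class English extends Greeter {
  def greet(name: String): String = "Hello, " + name
}

object Demo {
  // `g.greet` is checked at compile time against the Greeter interface;
  // a call to a method not declared in Greeter would be refused.
  def welcome(g: Greeter): String = g.greet("world")
}
```

A call such as `g.shout("world")` would be rejected by the compiler, since `shout` is not part of the `Greeter` interface.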



\subsubsection*{Further benefits}

Type systems can do more than detect simple coding errors early, with
guarantees. Type systems have been used to prove more general properties of a
program, from never crashing because of illegal interactions, to deadlock
freedom~\citep{SafeJava}, to proving mathematical theorems.


A type system can also be a documentation tool for the
programmer. The
types occurring in a program say a lot about the structure of the
program. In dynamic languages this information is usually preserved in
documentation or perhaps using Hungarian notation for variable
names. Documentation can be made inconsistent by modifications, whereas a statically typed program enforces
this knowledge in the program itself.

A type system can be enlisted for maintainability by using
the type system to enforce the use of interfaces and separation of
concerns in the program. Manual refactoring is aided because the compiler
will signal inconsistencies when a change has been made.

There are also consequences for performance. Statically typed
programs can run faster because some run-time checks for properties
proved at compile time can be eliminated,
and because the extra information allows data to be laid out efficiently in
memory.


\subsubsection*{Limitations}
Static type systems do have an impact on succinctness, 
flexibility and expressiveness. 


Type annotations typically make a statically typed program more verbose than a
corresponding program written in an untyped language. However, the difference
in verbosity need not be big. Static typing can be combined with a system of
type inference, relieving the programmer from the need to provide type
annotations for all expressions. The well-known Hindley-Milner type system
underlying many functional programming languages needs no annotations at all,
while more complex type systems may require some type annotations. It also
depends on the intricacy of the particular program, which may need some extra
type information to help the inference engine along.
There is therefore a whole spectrum
of languages: from rather verbose like Java to more succinct like Haskell.
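Scala itself sits in the middle of this spectrum: some positions, such as method parameters, require annotations, while others are inferred. A tiny illustrative sketch:

```scala
object Inference {
  val n = 3                         // type Int is inferred from the literal
  val f = (s: String) => s.length   // parameter annotated; result type Int inferred
  def twice(x: Int): Int = x * 2    // method parameters must be annotated
}
```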


Static typing may also interfere with prototyping
by refusing to run
the program before every possible path is fully checked. The
ability to selectively turn off parts of the type system is still an active
research domain. For now, statically typed languages often also provide a
read-eval-print loop to enable testing key parts of a system.


Most fundamentally, employing a type system imposes
limits on expressiveness. We want
absolute guarantees from our type checker, even though it can
only do a static analysis of the program. Therefore the type system
will always need to err on the side of safety: it will conservatively
disallow some programs that cannot be proved safe, even though they could
in fact be executed without problems.


This last limitation of type systems is a major driver in type system
development. The goal is a type system that allows not all possible
but all useful programs to be checked. The trade-off is pushing the bounds of
expressiveness while keeping the language convenient to use, by not requiring
too many explicit type annotations from the programmer.


\subsubsection*{To the future}
These are exciting times for type system designers: the escalating security
requirements for always-connected computer systems have made proof of security properties a lot more 
 worthwhile. Also, the field has been academically developed to a point where it can now 
 take on real industry-proof languages instead of just purely academic proof of concepts.
Indeed, recent features added to mainstream program languages such as Java
generics ~\citep{bracha98making} and $C^\sharp$ nullable value types are
already firmly founded in enhanced type systems, and there is hope for more tangible improvements in this direction.
Specifically, research on how to combine static and soft type checking
\citep{wadlerblame} and typical features of dynamic languages \citep{ego} will
hopefully make this body of research and experience useful in further contexts.





















\chapter{Typing lambda calculus}

Historically, most of the research into type systems up to the 1990s was
performed in a foundational computational framework called the lambda
calculus~\citep{barendregt88introduction}.


Computation in the lambda calculus is based on substitution. As an example,
the substitution of 3 for $x$ in $x*(x+y)$, written $(x*(x+y))[x:=3]$, equals
$3*(3+y)$. We could perform a further substitution of 11 for $y$ on the
result: $(3*(3+y))[y:=11]$ equals $3*(3+11)$, which (with
primitive operations $+$ and $*$ available) evaluates to $3*14 = 42$.
Clearly, the operation we have performed here can model the action of calling
a function with actual parameters 3 and 11 for formal parameters $x$ and $y$.


In lambda calculus notation one uses very little syntax and writes the
above function as $\lambda x . \lambda y . (x*(x+y))$. A call
of this function with
3 for $x$ and 11 for $y$ is written as
$(\lambda x . \lambda y . (x*(x+y)))\ 3\ 11$, by just appending the actual
parameters on the right.
\\

This is actually the complete form of the untyped lambda calculus: a term can
be a variable, a function abstraction with a formal parameter and body,
written ``$\lambda$ param . body'', or a function application of a function
term to a parameter term, written by juxtaposition as ``function parameter''.
 
\begin{tabbing}
term \=::=  \= x     \hspace{70pt}       \= variable \\ 
       \>	 $\|$ \>	$\lambda$x . term     \>	abstraction: function with parameter x\\
       \>	 $\|$ \>	term  term           \>	application: right term is parameter for left function
\end{tabbing}
This simple system is Turing-complete and thus lets us express any computable
function. Concepts such as numbers, booleans and pairs can all be encoded as
functions in the lambda calculus through
so-called Church encodings~\citep{TAPL,barendregt88introduction}.
If we extend this calculus with enough syntactic sugar and primitive
implementations of the base values and the operations on them, the end
product becomes a usable, full-fledged programming language, which can indeed
be seen as the foundation of Scheme~\citep{schemereport}.
\\
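The three-case grammar above can be rendered directly as a Scala case-class hierarchy, in the spirit of the interpreter case study later in this thesis; the names and the naive, capture-unaware substitution below are illustrative simplifications:

```scala
// The untyped lambda calculus as an algebraic data type.
sealed abstract class Term
case class Var(name: String) extends Term                // x
case class Abs(param: String, body: Term) extends Term   // \x . body
case class App(fun: Term, arg: Term) extends Term        // fun arg

object Untyped {
  // (\x . x) y  as a tree
  val example: Term = App(Abs("x", Var("x")), Var("y"))

  // Naive substitution t[x := s]; it does not rename bound variables to
  // avoid capture, which suffices for simple closed examples.
  def subst(t: Term, x: String, s: Term): Term = t match {
    case Var(y)    => if (y == x) s else t
    case Abs(y, b) => if (y == x) t else Abs(y, subst(b, x, s))
    case App(f, a) => App(subst(f, x, s), subst(a, x, s))
  }
}
```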

However, such a dynamic framework lets us write expressions that make no
sense. For example, suppose that in this untyped system, enriched with some
primitive datatypes and operators, we wrote a function that increments its
parameter by two, $\lambda n . 2+n$, using a primitive addition operation on
numbers. We could use this function correctly on 1: $(\lambda n . 2+n)\ 1$
$\Rightarrow$ through substitution $2+1$ $\Rightarrow$ $3$. However,
somewhere deep down in our program we might accidentally cause it to be
applied to, say, the word $``hello''$. This is an invalid operation, and as
programmers we would hope to get a runtime error when the system attempts to
perform it, instead of encountering bogus data in memory later.
\\

Erroneous expressions such as these can be prevented by using a very simple and lightweight 
static type system. As type systems work through classification of values and expressions by types, 
we will first investigate the categories of values that exist in this so-called 
 ``simply typed lambda calculus''. 
The possible values consist of a set of primitive values that we add for
convenience, and function values.


Classifying primitive values is easy: we might introduce a type Nat that
classifies all natural numbers, a type String for words and a type Bool for the logical constants true and false. 
But what to do for functions? Suppose we introduce a type Function. This is still insufficient 
to detect inconsistent function
application. The relevant information about a function is not just that it is one, but 
what type of parameter it takes and what type of value it results in. Traditionally 
the notation A $\rightarrow$ B is used for the type that classifies 
functions that return a B when 
applied to an A. This means String $\rightarrow$ Nat would be the type of a 
function that calculates the 
length of a string. The `$\rightarrow$' itself is then a type constructor, 
since it is not a type itself but constructs one when given type parameters.
\label{first_typeconstructor_mention}
\\

The specification of our language now looks like this: \label{STLC_grammar}
\begin{tabbing}
term 		\=::=  \= x     \hspace{70pt}              \= variable \\
       \>	 $\|$ \>	$\lambda$x: type . term   \>	abstraction with param x with given type\\
       \>	 $\|$ \>	term  term         		  \>	application\\
type \>	::= \>	base type 						\>	(Nat, Bool,\ldots depending on actual language) \\
	 \>	$\|$	\>	type $\rightarrow$ type 	\>	 function type: argument type
	 `$\rightarrow$' result type
\end{tabbing}

% \begin{verbatim}
% term ::= x	                     variable
%        | \lambda x:type .term    abstraction with param x with given type
%        | term term               application 
% type ::= base type               (Nat, Bool,\ldots depending on actual language) 
%        | type -> type            function type: argument type '->' result type
% \end{verbatim}

Generally, we write the syntax term `:' type to express that the given term has
the given type. Although we need the type of a function to contain both its source 
type and target type, simply annotating a function with just the type 
of parameter it takes, turns out to be enough to allow the typechecker for the simply typed lambda calculus 
to check function applications for validity. 
\\

For an approximate account of how a typechecker would function, suppose `+' is
shorthand for a built-in function that takes two Nat numbers and returns a Nat.
Then we can reason as follows about our example function, now annotated as 
$\lambda n:Nat . 2+n$ to prevent it being applied to anything but numbers. 
Our function is a function that takes parameters of type Nat
 (by annotation), and returns values of the same type as those which the 
 function in its body returns when given a Nat. The 
 type of the body when given a Nat is equal to the type of the result of the 
 function `+' when given two Nats, being Nat.  Thus the type of our example
  function is Nat $\rightarrow$ Nat. 
  
  
  Applying this function to the String $``hello''$ should cause the compiler
  to report an error to the programmer, because functions should only
  be given parameters of the correct type. If we apply our function to the
  parameter 3, which is of type Nat, this is permissible: applying a function
  that takes a Nat to a Nat, to a parameter of type Nat, naturally gives us a
  result of type Nat.

There is a small set of simple rules guiding this decision making process. \label{STLC_typing}
\begin{enumerate}
  \item The type of a primitive value is its predefined base type.
  \item The type of a variable inside a function is the annotated type remembered from its declaration site.
  \item The type of a function with annotated parameter type A is the function type A$\rightarrow$B, \
			where B is the type found for the body in the knowledge that the newly introduced parameter has type A.
  \item The type of the application of a function f: A$\rightarrow$B to a parameter gives \begin{enumerate}
                                                                                            \item if the actual parameter has type A, result type B
                                                                                            \item if the actual parameter does not have type A, a type error 
                                                                                          \end{enumerate}
\end{enumerate}
These rules allow us to write an algorithm that determines whether
the annotated types lead to a consistent program or whether it is flawed.
It is very important that the different cases of the algorithm are mutually
exclusive: the structure of the term being analysed corresponds to a
line in the grammar, and also fully specifies which single rule applies.
This `syntax-directedness' of the type checking rules simplifies writing the
type checker.
\\
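To make the syntax-directedness concrete, the four rules above can be sketched as a small typechecker in Scala; the names (`Ty`, `Arrow`, `typeOf`) and the treatment of errors are illustrative choices, not fixed by the text:

```scala
// Types: a base type Nat and function types A -> B.
sealed abstract class Ty
case object Nat extends Ty
case class Arrow(from: Ty, to: Ty) extends Ty

// Terms of the simply typed lambda calculus with number literals.
sealed abstract class Tm
case class Lit(n: Int) extends Tm                        // rule 1: primitive value
case class Vr(name: String) extends Tm                   // rule 2: variable
case class Lam(x: String, ty: Ty, body: Tm) extends Tm   // rule 3: \x: ty . body
case class Ap(fun: Tm, arg: Tm) extends Tm               // rule 4: application

object Check {
  // The environment remembers annotated types from declaration sites.
  // Each term constructor selects exactly one branch: syntax-directed.
  def typeOf(t: Tm, env: Map[String, Ty]): Ty = t match {
    case Lit(_)       => Nat
    case Vr(x)        => env(x)
    case Lam(x, a, b) => Arrow(a, typeOf(b, env + (x -> a)))
    case Ap(f, a) =>
      typeOf(f, env) match {
        case Arrow(from, to) if typeOf(a, env) == from => to
        case _ => sys.error("type error")
      }
  }
}
```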

However, there are a lot of functions that are intuitively perfectly fine
that we cannot write in this simply typed lambda calculus: it is very simple,
but lacks expressiveness.
The most straightforward example is the identity function: the function that
just returns its parameter. In the untyped lambda calculus
we can just write $\lambda n . n$ and apply this function to any
value to have the same value returned. If our
programming system only accepts the simply typed lambda calculus, this term
will be rejected: we need to annotate the parameter with a type.
There is no way this can be done once and for all:
if we want the identity function for natural numbers, we need to write the
function as $\lambda n : Nat . n$, for booleans $\lambda n : Bool . n$, and
so on.


We need to make our type system more expressive to be able to express
polymorphic functions like this that work for multiple types. The extension
with so-called universal types allows us to model the identity function as
	\mbox{$\forall X . \lambda n : X . n$} with type \mbox{$\forall X .
	X\rightarrow X$}. A restricted form of this extension has been worked out
	as the Hindley-Milner type system and, like any extension, complicates the
	typing rules while improving expressiveness. The story is similar for
	subtyping, best known from object-oriented languages, and a host of other
extensions~\citep{TAPL, cardelli85understanding}.
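Scala's type parameters provide exactly this kind of universal quantification; the polymorphic identity can be written once and applied at any type:

```scala
object Poly {
  // One definition for all types X: the analogue of forall X. X -> X.
  def id[X](x: X): X = x
}
```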























\chapter{The Scala programming language}
\section{Scala's background}

The Scala language \citep{ScalaOverview}, designed by Martin Odersky, is an
effort to create a multi-paradigm, scalable language by joining the
Object-Oriented and Functional programming paradigms.
The language is designed to be scalable by using the same core abstractions
for both very small programs and big systems. The core abstractions are
derived by unifying Object-Oriented and Functional design practices.


The Object-Oriented 
way to decompose a problem is as a set of objects that send messages to each 
other. Objects are first-class values, so object methods can take and return
other objects. Objects only depend on the interfaces of their peers as a way to separate 
concerns. Each object neatly hides local state and uses it to make decisions
internally. Extending a program should ideally be as easy as replacing a class 
with an enhanced subtype through inheritance. In this way Object-Orientation aims to build 
flexible and easily extendable systems. 


The Functional way to decompose a problem is as a set of 
functions that operate on external data. Functions are first-class values, so
so-called higher-order functions can operate on other 
functions and return functions. 
Data is modelled using algebraic data types and decisions are made external to the data by 
pattern matching over the way the data was constructed. The usage of state 
is minimised, which makes it easier to reason about and prove properties of the 
program. Together this provides a concise way of building large programs out 
of small building blocks~\citep{SICP}.


Furthermore, Scala has a static type system but does not need fully explicit type declarations.
A type inference engine derives partial type information, 
so the programmer can leave a lot of type annotations implicit, which makes 
for shorter code compared to explicit languages such as Java. This allows 
Scala to aspire to the conciseness of dynamic scripting languages while 
providing the guarantees of a static type system.

 
Finally, Scala compiles to bytecode for the Java virtual 
machine and can use Java libraries seamlessly so Scala code integrates well into
the current Java ecosystem.


\section{Scala as a meta-programming language}

Though Scala on the whole is a multi-paradigm language that can be viewed as a
successor to Java, some features in particular make it very practical 
for writing language-handling programs.


Firstly, pattern matching as in Haskell is possible on case class 
hierarchies, the Scala version of algebraic data types, as well as through
representation-hiding ``extractor objects''~\citep{LAMP-REPORT-2006-006}. Writing
programming-language oriented code involves a lot of operations on trees since the object program is represented as a tree of abstract syntax elements. Scala has a
`match' statement that implements the traditionally functional language feature of pattern matching which 
is a way of externally defining a function over all the variants of an abstract data type.
The code is localized in the function instead of distributed 
over the different subtypes, as can be simulated in other object-oriented
languages by the use of the visitor
pattern~\citep{wadlerexpressionproblem, gof_design_patterns}.


Secondly, Scala is suitable as a host language for domain specific
languages~\citep{hudak_dsel} because of a combination of flexible syntax and
user-definable implicit conversions. The standard library provides an implementation of the `executable grammar' or `parser combinator'
idea~\citep{MonadicParserGenerators}. This reduces the time needed to implement parsers compared to manual 
implementation. As an embedded domain specific language it can be integrated 
more smoothly than an external parser generator such as those from the yacc family.


Thirdly, Scala has mixin composition using traits, which allows expressing a 
component model using objects~\citep{ScalableComponentAbstractions}. Whereas
in Java the interface of an object only specifies which functionality it provides, Scala allows defining both the 
interface the component provides and those it depends on. 
Traits can be mixed into classes, allowing the composition of separate
subcomponents and dependencies into larger structures. In Scala this
happens not in a separate module definition language but in a type-safe
way in the language itself.

\section{Scala's type system}
The Scala type system is based on the $\nu Obj$ calculus, 
with restrictions to make type checking decidable. This provides a
familiar object-oriented system where values are objects, whose class is their
type. Besides normal classes Scala has mixin traits. It extends these by
allowing an object to have not only value members but also abstract or concrete type
members. This type abstraction mechanism is also available packaged as familiar
generic classes and generic methods. In Scala the interaction between
type parameters and subtyping is fixed where the type parameter is
declared.
%with declaration-site variance as opposed to usage-site variance as in
%Java. 


Scala also includes self types and structural types. It
couples all these with a type inference engine that reduces the amount of manual type annotations necessary, because
\begin{quote}
The more interesting your types get, the less fun it is to write them down! - Benjamin C. Pierce
\end{quote}


\section{Feature overview with examples}

\subsection{Concise syntax}
One of the big initial payoffs when switching from Java to Scala is the concise
and supple syntax.  An explicit and verbose rendering of the traditional simple hello world program 
in Scala is:

\begin{lstlisting}[language=Scala]
object HelloWorld {
	def main(args: Array[String]) : Unit = {
		println("Hello, world!")
	}
}
\end{lstlisting}

\noindent We repeat the standard Java version for easy comparison:

\begin{lstlisting} 
public class HelloWorld{
	public static void main(String[] args) {
		System.out.println("Hello, world!");
	}
}
\end{lstlisting}




We can already notice some syntax differences between Scala and 
Java in this first example. Firstly, no semicolons are needed to end the last
statement on a line, but they are permitted and can be used as explicit separators. 
Secondly, there are a number of differences regarding type annotations. In Scala, a type
declaration comes after the element that it classifies and
is separated from it by a colon. This goes both for the parameters 
the method \texttt{main} takes and for the return type of the 
method itself. The \texttt{args} formal
parameter shows that Scala uses square brackets instead of angle
brackets to denote type parameters.
The \texttt{Unit} type takes the place of
Java's \texttt{void} to indicate a method without useful result 
except for side-effects. The only value of type \texttt{Unit} is written as a
pair of parentheses \texttt{()}. 

This code can be written more succinctly because the return type of
non-recursive methods like this one can be inferred by the compiler. 
Scala requires an equals
sign to link the method body to the declaration. Just leaving out the body makes
the method abstract. But in case the body 
is just one expression, the curly braces for delimiting the block are
optional. Thus the short
version of this method would be:
\begin{lstlisting}[frame=none]
	def main(args: Array[String]) = println("Hello, world")
\end{lstlisting}

Scala actually has a specific idiom for the common case of an object
that functions as entry point for an application, by making it extend 
\texttt{Application}. The code in the body of the object will be 
run when the main method is called.
\begin{lstlisting}
object HelloWorld extends Application{
	println("Hello, world")
}
\end{lstlisting}

This first example also already shows a non-cosmetic property of Scala
that is reflected in the syntax: it is intended to be more purely
object-oriented than Java. Scala does away with the notion of 
\texttt{static} members as present in Java. 
In Java static members do not belong to an object at all but to a class.
However, these static class members do not participate in inheritance like
ordinary members. In Scala you can write an object directly and on first
usage an automatic singleton instance will be created. A frequent pattern is to
combine a class with a helper object that contains the features that would in Java have
been static class members.
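As an illustrative sketch of this pattern (the \texttt{Counter} class and its members are hypothetical names of ours), a class can be paired with a companion object that holds what would have been static members in Java:

\begin{lstlisting}[language=Scala]
// Hypothetical example: a class plus a companion object that takes
// over the role of Java's static members.
class Counter(private var value: Int) {
  def increment() { value += 1 }
  def current: Int = value
}

object Counter {
  // plays the role of a static factory method
  def apply(start: Int): Counter = new Counter(start)
  // plays the role of a static constant
  val Max = 100
}

val c = Counter(0)   // desugared into Counter.apply(0), no `new' needed
c.increment()
println(c.current)   // prints 1
\end{lstlisting}

The call \texttt{Counter(0)} works because of the \texttt{apply} desugaring discussed below.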

Scala's syntax employs quite a bit of syntactic sugar to provide familiar
syntax on top of a more uniform object-oriented model. Because in Scala every
value is an object, operators on values are actually ordinary methods. 
Code written as \texttt{3 +
4} is just desugared into \texttt{3.+(4)} on auto-boxed integer values. 
Indeed this operator-like syntax is enabled not just for 
the default operators in the standard library, but for every method that takes 
a single parameter. 
Because Scala does not reserve the traditional operator names this provides
both the capability to use operator names for your own classes and a clean 
syntax for method calls regardless of method name. 
An extra bit of very useful syntactic sugar is Scala's special handling 
of function call syntax. The syntax \texttt{a()} where \texttt{a} is an object
is desugared into a method call \texttt{a.apply()}, whereas on the left-hand side
of an assignment the form \texttt{a() = x} is desugared into
\texttt{a.update(x)}. Again this works for all objects: by simply providing the
methods \texttt{apply} and \texttt{update} on an object, this short-hand syntax
will be available.

This allows, as a second example, a crude version of a simple cell
that holds one Int value as in listing \ref{intcell}.
\begin{lstlisting}[float,label=intcell, caption= A cell containing an integer]
object OperatorAndParenthesesSyntax {
  class IntCell {
    var contents: Int   =   0
    def apply()	        =   contents
    def update(i: Int)	=   contents = i
    def +=(i: Int)      =   contents = contents + i
  }
  def main(args: Array[String]) = {
    val c: IntCell = new IntCell
    println(c())        //prints 0
    c() = c() + 42      //converted into c.update(c.apply() + 42)
    println(c())        //prints 42
  }
}
\end{lstlisting}
As shown, this \texttt{update} and \texttt{apply} desugaring also works if the
methods take parameters. The syntax for \texttt{Array}s is indeed implemented precisely 
this way, by exploiting uniform desugaring instead of special-case syntax. For
example, arrays of element type A contain methods \mbox{\texttt{apply(i: Int):
A}} as well as \mbox{\texttt{update(i: Int, x: A): Unit}}. Scala will transform
the expression \mbox{\texttt{a(2) = a(3) + 1}} into \mbox{\texttt{a.update(2,
a.apply(3) + 1)}}.


Also shown here is that Scala makes the syntactic difference between immutable and mutable references not by a 
preceding \texttt{final} as in Java but by using \texttt{val} for the 
declaration instead of \texttt{var}. In keeping with the functional
programming style that mutable data should be avoided, this bit of
Scala syntax makes it convenient to use \texttt{val} by default and only
use \texttt{var} if mutable state is explicitly needed.


\subsection{Higher-Order Functions}\label{currying}
Scala is a functional language, thus every function is a value and can be passed 
around and used by other functions. This enables the easy composition of
functionality which allows functional languages to express the essence of algorithms succinctly. 

Scala does not force this style upon the programmer but enables it. 
Method application is a basic operation of Scala and methods are part of
objects, not first-class objects themselves. This fact is concealed by
generating a corresponding first-class object of class \texttt{Function}
whenever a method is used as a first-class value. These \texttt{Function}
objects are automatically provided with an \texttt{apply} method and thus 
get function call syntax, using the
syntactic sugar explained previously.
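A minimal sketch of this mechanism, using hypothetical names of ours: a method can be used where a function value is expected, and the resulting \texttt{Function} object supports both call forms.

\begin{lstlisting}[language=Scala]
// A method used as a function value: the compiler wraps it
// in a Function object.
def double(x: Int): Int = x * 2

// eta-expansion: `double _' yields a Function1[Int, Int] object
val f: Int => Int = double _

// Function objects have an apply method, so call syntax works unchanged
println(f(21))        // prints 42
println(f.apply(21))  // the desugared form, also prints 42
\end{lstlisting}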

Scala also allows curried functions, which allow applying one argument at
a time, as in the lambda calculus or Haskell. A curried function has a
partitioned formal parameter list. For example,  \texttt{def compare(x: T)(y:T)}
is the curried version of  \texttt{def compare(x:T, y: T)}.
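A small sketch of partial application with such a curried method (the names \texttt{greaterThanTen} and the \texttt{Int} instantiation are our own illustrative choices):

\begin{lstlisting}[language=Scala]
// Curried comparison on Ints: arguments can be applied one list at a time.
def compare(x: Int)(y: Int): Boolean = x < y

// Partially applying the first parameter list yields a new function
val greaterThanTen: Int => Boolean = compare(10) _

println(greaterThanTen(42))  // prints true:  10 < 42
println(greaterThanTen(3))   // prints false: 10 < 3 does not hold
\end{lstlisting}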


By combining higher-order functions with control over the evaluation order, a
language gains the ability to abstract custom control structures. Scala
normally evaluates the arguments of a method application first in a
\mbox{call-by-value} evaluation order. Specifying a particular formal parameter
should use \mbox{call-by-name} semantics is possible by syntactically preceding 
the parameter type with `$\Rightarrow$' in the method declaration. This
use of `$\Rightarrow$' is also the Scala notation for the type `function'.
One can think of the evaluation of the parameter being postponed by wrapping it
into a function of no arguments, a thunk. Since functions are values, a passed
thunk is considered a fully evaluated parameter and the
expression within is left intact. Scala also automatically converts a block with a result expression of
type T on the caller side into a thunk of type \texttt{() $\Rightarrow$ T} when
needed, to enable code like listing \ref{mywhile}.

\begin{lstlisting}[float=t!, label=mywhile,caption=Reimplementing a while loop]
def mywhile(e: => Boolean)(body: => Unit) {
	if (e) {
		body
		mywhile(e)(body)
	}
}
def main(args: Array[String]) = {
	var i = 10
	mywhile(i>0) {
		println(i)
		i= i-1
	}
}

\end{lstlisting}
This combines several new features compared to Java: the function 
\texttt{mywhile} is curried, its first actual parameter is a closure of an
anonymous inline function over the value of the mutable variable \texttt{i}, and its second
parameter is actually a block automatically converted to a thunk of type
\texttt{$\Rightarrow$ Unit}.
 

 
\subsection{Mixins with Traits}

Scala generalizes the well-known system of  
inheritance with a single base class and multiple implementation-less
interfaces by incorporating traits. Instead of extending a base class and
implementing several interfaces, a class can extend a base class and have
several traits mixed into it. Traits fully replace interfaces and are more general. Traits can contain default
implementations of methods as well as variables. A nice example is 
the \texttt{Ordered} trait from the standard Scala library in listing
\ref{ordered}.

\begin{lstlisting}[float,label=ordered, caption = Trait specifying a total ordering]
trait Ordered[A] {
  def compare(that: A): Int

  def <  (that: A): Boolean = (this compare that) <  0
  def >  (that: A): Boolean = (this compare that) >  0
  def <= (that: A): Boolean = (this compare that) <= 0
  def >= (that: A): Boolean = (this compare that) >= 0
  def compareTo(that: A): Int = compare(that)
}

\end{lstlisting} 
If this were a Java interface, each class that needed to be \texttt{Ordered}
would need to reimplement all of the methods, which is mostly duplication of common
code. In this trait however, the default implementations depend on
\texttt{compare}, the one abstract method that is left. This means that by just
implementing the method \texttt{compare} and mixing in the \texttt{Ordered} trait, a class
gets a lot of functionality without repetition.

Actually mixing in \texttt{Ordered} would look something like this:
\begin{lstlisting}
	class Date extends Superclass with Ordered[Date] {
		def compare(that: Date) = { ... implementation ...}
	}

\end{lstlisting}

So while adding an interface to a class only gives it more obligations 
to its clients that need to be implemented for each class 
separately, a trait can really modularize behaviour.
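To make this concrete, consider a hypothetical \texttt{Temperature} class of our own devising: implementing only \texttt{compare} and mixing in \texttt{Ordered} yields all the comparison operators for free.

\begin{lstlisting}[language=Scala]
// Hypothetical class: only `compare' is implemented, the
// operators <, >, <=, >= come from the Ordered trait.
class Temperature(val degrees: Int) extends Ordered[Temperature] {
  def compare(that: Temperature): Int = this.degrees - that.degrees
}

val cold = new Temperature(-5)
val warm = new Temperature(20)
println(cold < warm)   // prints true, via Ordered's default `<'
println(warm <= cold)  // prints false
\end{lstlisting}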


%%Traits can even be used to get the effect of aspect-oriented around
%%advice
%%object TraitsExample extends Application{
%%    abstract class EmptyBaseClass {}
%%  
%%  trait Doer { def doSomething()  }
%%  trait Twice extends Doer{
%%    def doSomething()
%%    def doSomethingTwice() = { 
%%      doSomething()
%%      doSomething()
%%    }
%%  }
%%trait HelloPrinter extends Doer{
%%     def doSomething() = println("hello")
%%  }
%%trait Logger extends Doer{
%%    abstract override def doSomething() = {
%%      println("before dosomething")
%%      super.doSomething()
%%      println("after dosomething")
%%    }
%%  }
%%
%%The base class we are mixing our traits in is empty 
%%in this case. We define a top trait Doer analogous to an abstract
%%aspect. If we just mix in the HelloPrinter implementation of Doer we
%%can get the following run:
%%
%%    val hp = new EmptyBaseClass with HelloPrinter
%%    hp.doSomething() 
%%output:
%%hello
%%
%%By mixing in the trait Twice we get the extra functionality:
%%    val hp2 = new EmptyBaseClass with HelloPrinter with Twice
%%    hp2.doSomethingTwice()
%%output:
%%hello
%%hello
%%
%%The trait Logger implements what would traditionally be an around 
%%advice. Note that we need to specify the keywords abstract override
%%to be able to use a super call as a proceed().
%%al hp3 = new EmptyBaseClass with HelloPrinter with Twice with Logger
%%    
%%    hp3.doSomethingTwice()
%%
%%
%%
%%output:
%%before dosomething
%%hello
%%after dosomething
%%before dosomething
%%hello
%%after dosomething
%%
 
\subsection{Modules with abstract type members and self types}


An important principle when building components is to avoid
hard-coded links. Expressing dependencies of a component either as
formal parameters or abstract members of the component can replace the brittle
use of global state and scope. 


Instantiating an abstracted 
component in the case of parametric abstraction is done by applying
actual parameters. This is well-known from Java as passing constructor
arguments in case of value parameters, or passing type arguments in
case of type parameters of a Java-1.5 generic class. 


Abstraction through abstract members is possible in Java only for
abstract methods. Instantiation is then performed by creating a fully
concrete subclass and creating an instance of the subclass.
\\

In Scala both parametric abstraction and abstract member abstraction
are supported equally. A class can take both types and values as
formal parameters, and have both abstract type members and abstract value
members. 


The canonical example~\citep{odersky:scala-experiment} is a
symbol table component for a compiler. This structure consists of
two mutually dependent subcomponents \texttt{Types} and \texttt{Symbols}.
The dependency can be expressed by using abstract type
members as in listing \ref{type_members}.

\begin{lstlisting}[float,label=type_members,caption=symbol table using abstract type members]
trait Symbols {
  type Type //abstract because not implemented!
  class Symbol { def tpe: Type }
}

trait Types {
  type Symbol
  class Type { def sym: Symbol }
}

class SymbolTable extends Symbols with Types
\end{lstlisting}

The mixin composition of these two mutually dependent structures
overwrites the abstract definitions with the concrete ones, to create
one class where everything snaps together.
\\

This same concept can be formulated in a parallel way in Scala using
self types as in listing \ref{with_selftypes}. Giving a trait an explicit self
type means that a class with this trait mixed in can only be instantiated when all
the components listed in the trait's self type are also mixed in. 
This implies that we
can register dependencies by just incorporating them into the self type. The self type is 
then the supposed type of the implicit \texttt{this}
reference within the trait. All elements belonging to the self type, including
dependencies, can be used inside the body. This must be so, because 
if the self type assumption 
were not valid, the instance that the code belongs to could not have been created, 
as the constructor call would signal a type error. Self types are optional: if
no self type is explicitly given it is simply taken to be the class
itself. 

\begin{lstlisting}[float=h,label=with_selftypes,caption=symbol table using self types]
trait Symbols { self: Symbols with Types => // selfname:selftype =>
   class Symbol {def tpe: Type}
}
trait Types { self: Types with Symbols =>
   class Type { def sym: Symbol}
}
\end{lstlisting}

So components can be built in Scala by \begin{enumerate}
         \item making each discernible piece of functionality a trait
         \item listing each trait's dependencies in its self type 
		 \item instantiating a full component by mixing the right
traits together.
\end{enumerate}
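To see the pieces snap together, the following sketch varies the listing by giving the members mutable fields (an assumption of ours, so a composed symbol table can actually be instantiated and linked):

\begin{lstlisting}[language=Scala]
trait Symbols { self: Symbols with Types =>
  class Symbol { var tpe: Type = _ }   // Type comes from the self type
}
trait Types { self: Types with Symbols =>
  class Type { var sym: Symbol = _ }   // Symbol comes from the self type
}

// Only a class mixing in both traits satisfies both self types,
// so this is the way to instantiate the component.
class SymbolTable extends Symbols with Types

val table = new SymbolTable
val sym = new table.Symbol
val tpe = new table.Type
sym.tpe = tpe   // the mutually dependent pieces link up
\end{lstlisting}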


\subsection{Generics with declaration site variance}
The concept of variance comes up when a language combines subtyping with type
parametrization. The question is: how does the subtyping work between two
generic classes with type instantiations that are themselves subtypes of each
other? One simple example is a read-only cell. 
\begin{lstlisting}
 class A(a: Int) {
    def getVal() = a
    override def toString():String = "A with val "+a.toString
 }
 class B(b: Int) extends A(b) {
    override def toString():String = "B with val "+b.toString
 }
      
 class ReadOnlyCell[+T](elem:T) {
    def get:T = elem
 }

\end{lstlisting}
The \texttt{`+'} in front of the type parameter T of \texttt{ReadOnlyCell}
declares the class as covariant in T. In this specific case where B
is a subtype of A, written B $<:$ A, a covariant \texttt{ReadOnlyCell} means
\texttt{ReadOnlyCell[B]} should be a subtype of \texttt{ReadOnlyCell[A]}. The
subtyping of the generic class goes in the same direction as the subtyping in
the type parameters. This relation needs to hold because
reading A's from a cell of B's should be ok: every B read from the cell is also
an A. Thus we should be able to use a \texttt{ReadOnlyCell[B]} as \texttt{ReadOnlyCell[A]}.

\begin{lstlisting}
    val ro = new ReadOnlyCell[B](new B(42))
    val ro_alias: ReadOnlyCell[A] = ro
    println(ro.get)        //prints: B with val 42
    println(ro_alias.get)  //prints: B with val 42
\end{lstlisting}

The opposite variance declaration using \texttt{-} also exists and is called
contravariance. This occurs in the predefined type
\texttt{Function1}, which is the class of function objects that take one
parameter.

\begin{lstlisting}
    trait Function1[-T1, +R] extends AnyRef {
        def apply(v1:T1): R
    }

\end{lstlisting}

We see in the signature of the \texttt{apply} method that T1 is the type of the 
first argument to the function and R is the return type. Reasoning in this case
can be done by analogy to the safe substitution principle for method
overriding: result types may become more specific in a subtype, while argument types can only become 
more lenient. 

Like for read-only cells, functions are covariant in their result type.
However, according to the safe substitution principle a function is only 
more specific than another, if it is more lenient in the type of arguments it
accepts. This is contravariance, annotated with \texttt{`-'}.  

This fundamental concept can be expressed very succinctly in Scala. Scala
avoids the complexities of wildcards as in Java by making the designer of the
class specify the variance. The compiler does not accept a class when the
declared variance and the signatures of the methods of the class are
in conflict.
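A sketch of such a conflict, with hypothetical names of ours: a covariant cell with a setter is rejected because the covariant parameter would occur in an argument position; dropping the annotation makes the class invariant and acceptable.

\begin{lstlisting}[language=Scala]
// The compiler would reject this definition:
//
//   class Cell[+T](var elem: T) {
//     def set(x: T) { elem = x }  // error: covariant type T occurs
//   }                             // in contravariant position
//
// Without the annotation the cell is invariant and accepted:
class Cell[T](private var elem: T) {
  def get: T = elem
  def set(x: T) { elem = x }
}

val c = new Cell[Int](1)
c.set(2)
println(c.get)  // prints 2
\end{lstlisting}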


\subsection{Pattern matching and case classes}

In a system with a certain number 
of datatypes and a number of operations over them, the operations can either
be implemented as methods internally or in a function
externally. Implementing the proper datatype specific behavior as a
method in each of the subclasses is the object-oriented style. Using an external function 
that pattern matches over the abstract data types is the functional
style. 

The OO variant makes it easy to extend the system with a 
new subclass since all the original code can be left untouched, while the 
functional style makes it easy to add new operations for the same 
reason. The final goal is a scheme that allows easy modifications in both
directions \citep{wadlerexpressionproblem}. Scala has been a vehicle for further research into these problems. In
\citet{odersky-zenger:fool12}, solutions to the expression problem using a
combination of abstract type members, self types and mixins are worked out.
\\

As a hybrid OO-Functional language,
Scala does not need a visitor pattern to emulate the functional style but has
this built in. 
All that is needed to enable pattern matching against a
certain class or object is to precede its definition with the keyword
\texttt{case}. Scala thus unifies algebraic data type definitions as in Haskell with object-oriented class hierarchies.


Actually, for a case class declaration the Scala compiler also automatically 
generates \texttt{hashCode}, \texttt{equals} and \texttt{toString} methods based
on the parameters of the default constructor, as well as accessor methods 
for these parameters. To construct an instance of a case class, a companion
object to the class introduces a factory method so even the `\texttt{new}'
becomes superfluous. This makes case classes a natural fit for value objects and
relatively dumb data.
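A minimal hypothetical case class (our own example) shows the generated machinery:

\begin{lstlisting}[language=Scala]
case class Point(x: Int, y: Int)

val p = Point(1, 2)            // factory method, no `new' needed
println(p.x)                   // generated accessor, prints 1
println(p == Point(1, 2))      // structural equality, prints true
println(p)                     // generated toString, prints Point(1,2)
\end{lstlisting}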

Normally the hierarchy of case classes is externally extensible, but using
the optional keyword \texttt{sealed} on the base type of the hierarchy
disables this and fixes the child classes to the ones defined in the original
compilation unit. This enables more checking in pattern matches: in a pattern match on
this hierarchy the compiler will emit warnings when the pattern match is not
defined for all cases. A very small but still useful example outside of the
tree-processing realm is the Scala rendition of the \texttt{Maybe} concept from Haskell. 
The structure of this hierarchy can be seen in listing
\ref{optiondefinitionlisting}.

\begin{lstlisting}[float,label=optiondefinitionlisting,caption= the Option type]
sealed abstract class Option[+A] {
	def isEmpty: Boolean
	def get: A
}
final case class Some[+A](x: A) extends Option[A] {
	def isEmpty = false
	def get = x
}
case object None extends Option[Nothing] {
  def isEmpty = true
  def get = throw new NoSuchElementException("None.get")
}
\end{lstlisting}

This forms an idiom for optional values making explicit that the 
value may be missing. In Java one would normally just pass \texttt{null} and 
depend on the receiver checking for \texttt{null} for every optional 
parameter. The Option idiom makes the distinction explicit and 
moves this constraint into the type system. The type system will not allow a
value of \texttt{Option[T]} to be used where one of type T is needed. The
optional value must be unpacked first. In this way the type system prevents the
programmer from ignoring that these values could be non-existent and forces him to handle both
cases. 

An actual pattern match over an optional value could then look as follows:
\begin{lstlisting}
def handleOptionalParam(t: Option[T]) = t match {
	case None 	 => //the param was not given, set default?
	case Some(x) => //the param is present: x is its value.
    }
\end{lstlisting}

A relatively recent enhancement to pattern matching in Scala is known as
extractors \citep{LAMP-REPORT-2006-006}. Extractor objects provide a
representation interface that stands between the user pattern
matching over a hierarchy and the implementer of the actual classes. This extra
indirection allows pattern matching while still guaranteeing the representation
independence typical of an OO setting.
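A sketch along the lines of the e-mail example in that report: an extractor object lets plain strings be pattern matched as addresses, without a dedicated class for them.

\begin{lstlisting}[language=Scala]
object EMail {
  // injection, used for construction
  def apply(user: String, domain: String): String = user + "@" + domain
  // extraction, used by pattern matching
  def unapply(s: String): Option[(String, String)] = {
    val parts = s.split("@")
    if (parts.length == 2) Some((parts(0), parts(1))) else None
  }
}

"scala@epfl.ch" match {
  case EMail(user, domain) => println(user + " at " + domain)
  case _                   => println("not an email address")
}
// prints: scala at epfl.ch
\end{lstlisting}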

\subsection{Sequence comprehensions}
Scala has a powerful \texttt{for} loop syntax like the list comprehensions of
Haskell and Python. 
An example shows this feature best. The following method generates the squares
of all even numbers in a range:
\begin{lstlisting}[language=Scala]
def squaresofevens(low:Int,high:Int) = 
    for {  n <- List.range(low,high)  if n % 2 == 0
        } yield n*n
\end{lstlisting}
This syntax is purely syntactic sugar and is transformed into a combination of the higher-order 
functions \texttt{map}, \texttt{foreach}, \texttt{filter} and \texttt{flatMap}.
The more functional style using those functions looks like 
\begin{lstlisting}[language=Scala]
def squaresofevens(low: Int, high: Int) =
   List.range(low, high).filter((n: Int) => n % 2 == 0).map((e: Int) => e*e)
\end{lstlisting}
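With multiple generators the desugaring nests \texttt{flatMap} and \texttt{map}. The following illustrative sketch (our own example) produces all pairs from two lists both ways:

\begin{lstlisting}[language=Scala]
// sugared: all pairs (i, j) with i and j each drawn from List(1, 2)
val pairs  = for { i <- List(1, 2); j <- List(1, 2) } yield (i, j)

// the equivalent desugared form
val pairs2 = List(1, 2).flatMap(i => List(1, 2).map(j => (i, j)))

println(pairs)   // prints List((1,1), (1,2), (2,1), (2,2))
println(pairs2)  // prints the same list
\end{lstlisting}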
 
 


\subsection{Implicit parameters}


Scala makes ad-hoc polymorphism like Haskell typeclasses
\mbox{\citep{wadler89how}} possible with a feature called implicit parameters. 

One formal parameter list of a method (see \ref{currying}) can be preceded 
by the keyword \texttt{implicit}. 
This allows the method to be called both normally
and without actually providing the missing argument. If the argument
corresponding to the implicit parameter is missing in a call, the compiler will
automatically infer a suitable implicit value and pass it behind the
scenes. Values are marked as being available for use by the compiler
as implicit arguments by preceding their declaration with the same keyword
\texttt{implicit}. The compiler will consider all implicit declarations in scope
as possible arguments and
select the single conforming instance as the implicit argument. In cases where no
suitable stand-ins are in scope, or there are multiple options generating
ambiguity, the compiler will emit an error.

This can be used to emulate typeclasses by listing the type class
implementation as an implicit parameter in any function that needs it, instead
of using a type constraint as in Haskell.
\\

In the following example we wish to have a general testing function that
checks whether the order defined on values is correct. The testing
method will use the \texttt{`<'} operator, which is defined for numerical
values and for \texttt{Ordered} values. 
To be able to use the same
method also for dates, we decorate the \texttt{java.util.Date} with the \texttt{Ordered} trait.
So we define as implicit a conversion function from Date to \texttt{Ordered}.
The implementation of the trait \texttt{Ordered} is complete with just the
implementation of the \texttt{compare} method. 
\\

The testing method is now applicable for any type for which an \texttt{Ordered}
implementation can be found, even if the type does not actually inherit from
\texttt{Ordered}. By not requiring subtyping but just a conversion function this
code can work with classes that are extended externally.
Dates can then be compared in the testing function just like numeric values,
for which such a conversion is always in scope, once our extra implicit
conversion method is brought into scope by an import statement.
 


\begin{lstlisting}
object OrderedImplicit {
  import java.util.Date
  implicit def date2ordered(x: Date): Ordered[Date] = new Ordered[Date]{
    def compare(y: Date): Int = x.compareTo(y)
  }
}
object OrderedTest extends Application{
  import java.util.Date
  val first: Date = new Date()
  val later: Date = new Date(first.getTime() + 10000)
  
  def testOrder[T](left:T,right:T)(implicit isordered: T => Ordered[T]){
    println(left+ " smaller than "+right+": "+(left < right))
  }
  testOrder(1,2)
  import OrderedImplicit._
  testOrder(first,later)
}

\end{lstlisting}
% \begin{lstlisting}
% object OrdinaryImplicit {
%   
%   class myList[A](xs: A*) {
%     def length: Int = xs.length
%   }
%   object impl1 {  implicit def intrep[T](x: myList[T]): Int = x.length   }
%   object impl2 {  implicit def intrep[T](x: myList[T]) : Int = 2*x.length }
% 
%   def returnIntRepresentation[T](x: myList[T])(implicit repr: myList[T] => Int) = {
%     repr(x)
%   }
%     
%   def main(args: Array[String]) = {
%     val l = new myList(1,2,3,4)
%     import impl1._
%     println(returnIntRepresentation(l)) //prints 4
%     import impl2._ 
%     println(returnIntRepresentation(l)) //prints 8
%   }
% }
% 
% \end{lstlisting}
The exact same mechanism is used in a simpler form for ordinary implicit
conversions. When the types in an expression don't match, because a returned
object is not of the requested type, or a method call is requested on a type that doesn't support it, 
the Scala compiler will look in the current scope for an implicit conversion
function that it can automatically use as an adapter between call
and argument. This can be used to enrich, from the outside, a provided class for which the source
code is not available~\citep{odersky_pimp_my_library}.

An example from the standard library: 

\begin{lstlisting}
final class RichChar(c: Char) {
  def isDigit: Boolean = Character.isDigit(c)
  // isLetter, isWhitespace, etc.
}
object RichCharTest {
  implicit def charWrapper(c: Char) = new RichChar(c) //definition of the implicit converter
  def main(args: Array[String]) {
    println('0'.isDigit)
  }
}
\end{lstlisting}

% 
% Implicits can also be used as a work-around for a limitation in the trait 
% mixin system of Scala. If we have a trait that takes type parameters, we 
% cannot mix in two versions of the trait with different parameters in the 
% same class. Suppose we try to mixin the representation function on myLists 
% from the first example of the higher example.
% 
% \begin{lstlisting}
% object NoImplicitsDoubleImplementation {
%   import OrdinaryImplicit.{myList}
%   def main(args: Array[String]) = {
%     trait reprBuilder[S,T] extends myList[T]{
%       def repr[T]():S
%     }
%     trait IntReprBuilder[T] extends reprBuilder[Int,T]{
%       def repr[T]() = this.length
%     }
%     trait DoubleReprBuilder[T] extends reprBuilder[Double,T]{
%       def repr[T]() = this.length.toDouble
%     }
%     
%     class fulllist[A](xs: A*) extends myList[A] { self: fulllist[A] with reprBuilder[Int,A] with 	reprBuilder[Double,A]=>
%     //further stuff here that uses the repr method
%     }
%     val l2 = new fulllist(5,6,7,8) with IntReprBuilder[Int] with DoubleReprBuilder[Int]
%     println(l2 repr)
%   }
% }
% 
% \end{lstlisting}
% The compiler will complain on the line where we try and instantiate our list l2 by mixing 
% in the two instantiations of the same trait: ``illegal inheritance; template ... inherits 
% different type instances of trait reprBuilder: reprBuilder[Int, Int] and reprBuilder[Double, Int]''
% 
% 
% 
% Implicits allows us to make the two versions available for use without actually 
% mixing them into the base class. 
% 
% \begin{lstlisting}
% object WithImplicitsDoubleImplementation {
%   import OrdinaryImplicit.{myList}
% 
%     object IntReprBuilder {
%       implicit def repr[T](x: myList[T]):Int = x.length
%     }
%     object DoubleReprBuilder {
%       implicit def repr2[T](x: myList[T]):Double = x.length.toDouble
%     }
%     def returnRepresentation[T,U](x: myList[T])(implicit repr: myList[T] => U):U = {
%       repr(x)
%     }
%     
%     def main(args: Array[String]) = {
%       val l3 = new myList(5,6,7,8)
%       import IntReprBuilder._
%       import DoubleReprBuilder._
%       println(returnRepresentation[Int,Int](l3))         //prints 4
%       println(returnRepresentation[Int,Double](l3))      //prints 4.0
%     
%   }
% }
% 
% \end{lstlisting}
% Unfortunately we still cannot name our implicit methods the same in 
% both objects, but because we can trigger the implicit on the type 
% of the method and not of a object containing the method the name 
% becomes irrelevant and the example works.



\subsection{Structural subtyping}
 Determining whether a type is a subtype of another is commonly
 done either nominally or structurally. Nominal subtyping is the 
 kind we are used to in object-oriented languages: A is a subtype of B 
 if and only if it is declared that way by an \texttt{A extends B} declaration 
 and objects of class A are safely substitutable for objects 
 of class B. If subtyping is structural, we drop the first 
 requirement. Objects can then implement interfaces if they 
 ``match'' the contents of the interface even though they haven't been declared
 that way. An object matches a structural interface when it contains every member specified by the interface.
This is a very useful feature to have when you are integrating
 code instead of writing it, and the interface you would like to use
 was not declared by the implementers. Another use case is prototype
 programming, where it provides some typechecking before you freeze everything
 into named interfaces.
 
 \begin{lstlisting}
 case class X(x: String){}
 case class Y(y: String){}
 
 object StructuralSubtyping {
   type hasY = { def getY():Y } //structural type defined here
   def printYVal(o: hasY) = println(o.getY().y)
 
   def main(args: Array[String]) = {
     printYVal(new VendorA.classA())        //prints classA.Y
     printYVal(new VendorB.classB())        //prints classB.Y
     
   }
 }
 package VendorA {
   class classA {
   def getX() = new X("classA.X")
    def getY() = new Y("classA.Y")
   }
 }
 package VendorB {
   class classB {
    def getY() = new Y("classB.Y")
   }
 }
\end{lstlisting}


This feature of Scala is not paramount for beginners, but structural
subtyping can be used to simulate rank-two types, a common extension to Haskell.
\label{rank-two-trick}\label{ranktwoinscala}\label{ranktwoscala}

Universal polymorphism is normally left implicit in Haskell, whereas it is explicit
in Scala. The function taking the head element of a list is of
course totally independent of the element type of the list.
Haskell: 
\lstinline[language=Haskell] {hd :: [a] -> a } 
vs Scala: \lstinline[language=Scala] {def hd[a] (list:List[a]) :a }.

In more advanced Haskell you sometimes need to pass such type-independent functions
around as arguments. Suppose the handling function is declared as working for any type
parameter b, and needs a function that itself is also independent of
its type parameter. If we wrote in Haskell
\lstinline[language=Haskell] {takes2 :: (b -> b) -> b -> [b]} then this is interpreted 
in the same way as the following Scala method: 
\mbox{\lstinline[language=Scala] { def takes2[b] (fun: b=>b, arg: b): List[b]} }. 
This means that the user of \texttt{takes2} chooses a particular 
type \texttt{b}, and can then pass any function \texttt{b $\Rightarrow$ b}. 
But in this case we want to express that the user may choose \texttt{b}, but has to pass a
type-independent function, not a function \texttt{b $\Rightarrow$ b}. Then the 
implementation of \texttt{takes2} gets to instantiate the type on the 
passed function instead of the caller of \texttt{takes2}.
The Haskell syntax for this use case is
\lstinline[language=Haskell] { takes2 :: (forall b. b -> b) -> b -> [b]}.
In Scala we cannot write this directly because there is no syntax such as 
\lstinline[language=Scala] { def takes2[b] (fun: [b]b=>b, arg: b): List[b] }. 
There is also no trait like \texttt{PolyFunction1} available in the standard library 
that captures functions with this structure.
However, it is possible to express the \texttt{forall} by using a modified version
 of the \texttt{Function1} signature, writing it out structurally: 
\begin{lstlisting}[frame=none, language=Scala] 
def takes2[b] (fun: { def apply[b](arg:b):b}  , arg: b)  : List[b]
\end{lstlisting}
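As a sketch of how this encoding behaves (the names \texttt{RankTwoDemo} and \texttt{polyId} are hypothetical, and the inner type parameter is renamed to \texttt{c} for readability; structural method calls are resolved reflectively at run time):

\begin{lstlisting}[language=Scala]
import scala.language.reflectiveCalls // needed on Scala 2.10 and later

object RankTwoDemo {
  // takes2 itself instantiates the type parameter of fun at every use
  def takes2[b](fun: { def apply[c](arg: c): c }, arg: b): List[b] =
    List(fun.apply(arg))

  object polyId { def apply[c](arg: c): c = arg }

  def main(args: Array[String]) {
    println(takes2(polyId, 42))   // fun is applied at type Int here
    println(takes2(polyId, "hi")) // ... and at type String here
  }
}
\end{lstlisting}

The caller still chooses \texttt{b}, but \texttt{polyId} must work for every type, which is exactly the rank-two constraint.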






















\chapter{Implementing the Simply Typed Lambda Calculus in Scala}
 
A toy interpreter for the Simply Typed Lambda Calculus (STLC) can be
structured like a pipeline, adopting parts of a reference architecture for
compilers \citep{appelcompiler}. Each stage performs an operation on the
current program representation or translates between representations.

The front end parses the textual
representation into an abstract syntax tree. Then, the
typechecker checks whether the program is valid. Finally, the
evaluator performs the actual computation in order to arrive at
the result of the program.

This leads to the following structure for an interpreter:
\begin{enumerate}
  \item A representation for the abstract syntax tree as a datatype
  of tree nodes
  \item A parser that transforms a text representation into this tree
  representation
  \item A pretty printer to show the tree representation of the result as
  text
  \item A typechecker that validates a tree representation
  \item An evaluator that reduces a tree representation to a value
\end{enumerate}

\begin{lstlisting}[float,label=large-scale,caption=large-scale dependencies,xleftmargin=-0.5cm,xrightmargin=-1.6cm]
trait Interpreter { self: Interpreter with Evaluator with TypeChecker 
      with PrettyPrinter with TextToAbstractSTParser with AbstractSyntax =>
  def interpret(line:String):String = {
    val canontree = parse(line) //from TextToAbstractSTParser
    typeOf(canontree)        //from TypeChecker, throws typeException on error
    val result = evaluate(canontree) //from Evaluator
    prettyPrint(result)              //from PrettyPrinter
  }
}

//The typechecker also gets a prettyprinter mixed in to print errors textually
trait TypeChecker { self: TypeChecker with PrettyPrinter with AbstractSyntax =>
   def typeOf(t: LTerm) :LType }

class SimplyTypedInterpreter extends Interpreter with SimplyTypedEvaluator 
      with SimplyTypedTypeCheck with SimplyTypedPrettyPrinter
      with SimplyTypedTextToASTParser with AbstractSyntax{ }
      
\end{lstlisting}

The large-scale dependencies in this structure are formulated in Scala using
explicit self types, consisting of those interfaces a component depends on, 
as seen in listing \ref{large-scale}.
 The general top level \texttt{Interpreter} trait mentions the required
subcomponents for the different operations. The general trait 
for the typechecker is given as a second example. Besides the abstract
syntax definition, this subcomponent is also coupled to the pretty printer to enable nicely
printed error messages. 



An interpreter instance can be launched by instantiating an object of
class\\
 \texttt{SimplyTypedInterpreter}. This class is formed by mixing in the
corresponding implementation trait for each component interface trait 
mentioned in the self type.
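A hypothetical driver could then look as follows; the concrete input syntax shown is an assumption about what the parser component accepts:

\begin{lstlisting}[language=Scala]
object Launcher {
  def main(args: Array[String]) {
    val interpreter = new SimplyTypedInterpreter()
    // assumed concrete syntax for (\x:Bool. x) applied to true
    println(interpreter.interpret("(\\x:Bool. x) true"))
  }
}
\end{lstlisting}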


\section{Tree representation}
For the representation of the abstract syntax nodes, the implementation uses
 DeBruijn indices following the examples in `Types and Programming
 Languages' \mbox{\citep{TAPL}}. Where in written examples of the STLC an
 occurrence of a variable refers to the innermost formal parameter with an identical
 name, in DeBruijn index representation this linking occurs numerically. 
 The index of a variable refers to the outward distance between the
 occurrence and the declaring scope. A variable
 with number 0 is bound in the innermost scope. Number 1 refers to the formal parameter that is introduced one level of scope
 higher. As an example, \mbox{$\lambda$ f:Bool$\Rightarrow$Bool . $\lambda$ 
 b:Bool .  f b} has as DeBruijn index form \mbox{
 $\lambda$:Bool$\Rightarrow$Bool . $\lambda$:Bool 1 0}. 
 This nameless representation solves the problem of $\alpha$-equivalence
 \citep{barendregt88introduction} between equivalently structured expressions
 that differ only in the chosen variable names. Indeed, two different but
 equivalent expressions like $\lambda$x:Nat. x and $\lambda$y:Nat. y, both of
 which return their argument, are represented the same as $\lambda$:Nat. 0. Thus 
DeBruijn indices form a canonical representation where $\alpha$-equivalence 
coincides with simple equality.
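The numbering itself can be computed with a simple list of binder names, innermost first; \texttt{DeBruijnIndexDemo} is a hypothetical helper, not part of the interpreter:

\begin{lstlisting}[language=Scala]
object DeBruijnIndexDemo {
  // the context lists binder names from the innermost scope outwards;
  // the DeBruijn index of a name is its position in that list
  def indexOf(ctx: List[String], name: String): Int = ctx.indexOf(name)

  def main(args: Array[String]) {
    // inside \f:Bool=>Bool. \b:Bool. f b the body sees ctx = List("b", "f")
    println(indexOf(List("b", "f"), "f")) // prints 1
    println(indexOf(List("b", "f"), "b")) // prints 0
  }
}
\end{lstlisting}

Together these lookups recover exactly the \mbox{1 0} numbering of the example above.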
\\

We model the nodes as case classes (see listing \ref{syntaxtree}) in straightforward
correspondence to the grammar of the simply typed
lambda calculus as on page \pageref{STLC_grammar}.
\begin{lstlisting}[float, label=syntaxtree,caption=abstract syntax tree]
trait AbstractSyntax  { self : AbstractSyntax => 
sealed trait LTerm 
case class Var(n: Int) extends LTerm
case class Lam(hint: String, ty: LType, body: LTerm) extends LTerm
case class App(funt:LTerm, argt:LTerm) extends LTerm
...


sealed trait LType extends LTerm
case class TyBool() extends LType
...
//Function type with argument fundom and result funrange
case class TyArr(fundom: LType, funrange: LType) extends LType 
}
\end{lstlisting}
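Using these case classes, the DeBruijn form of the earlier example, \mbox{$\lambda$:Bool$\Rightarrow$Bool . $\lambda$:Bool 1 0}, is built as follows; the \texttt{hint} fields keep the original names for printing:

\begin{lstlisting}[language=Scala]
val example: LTerm =
  Lam("f", TyArr(TyBool(), TyBool()), // \f:Bool=>Bool.
    Lam("b", TyBool(),                //   \b:Bool.
      App(Var(1), Var(0))))           //     f b
\end{lstlisting}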
 

\section{Parsing and printing}

We use the parser combinator library that is included with the Scala
standard library to implement the parser.
%Since the Scala standard library
%includes a parser combinator library, it is natural to use this framework when
%implementing a parser.

This library makes it easy to implement a parser from the linear text
representation to a tree form. However, this tree form still references
variable occurrences using their name instead of their DeBruijn index.
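To give the flavour of the combinator style, a minimal, hypothetical grammar fragment (not the interpreter's actual parser) could look like this; small parsers are composed with alternation (\texttt{|}) and sequencing:

\begin{lstlisting}[language=Scala]
import scala.util.parsing.combinator.RegexParsers

object MiniLambdaParser extends RegexParsers {
  def ident: Parser[String] = """[a-z]+""".r
  def atom: Parser[String]  = ident | "(" ~> term <~ ")"
  def term: Parser[String]  = rep1(atom) ^^ { parts => parts.mkString(" ") }
}
\end{lstlisting}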



A new trait \texttt{TextualSyntax} contains a syntax tree hierarchy parallel to
that in \texttt{AbstractSyntax}. The trait
\texttt{SimplyTypedLambdaToTextualParser} contains a parser built in the
parser combinator framework to parse text into these concrete syntax
trees. 

An additional component then performs the recursive transformation from the syntax tree with textual variables to the abstract
syntax tree with DeBruijn variable indices.

The identification of components with Scala traits with self types allows
hiding this subdivision of responsibility in the parser 
(see listing \ref{texttoabstract}). The top level
interpreter trait just depends on a parser \texttt{TextToAbstractSTParser}
from code text to abstract syntax. It just so happens that the implementation of
this parser is itself a composite component. The trait \texttt{TextualSyntax}
is used strictly internally in both of the subcomponents and never leaks to the
self type of our top-level parser.
\\
\begin{lstlisting}[float=b,label=texttoabstract,caption=from text to abstract syntax tree]
trait TextToAbstractSTParser { self:TextToAbstractSTParser 
                                    with AbstractSyntax =>
    def parseToAST(str: String) : Option[LTerm]
}

trait SimplyTypedTextToASTParser extends TextToAbstractSTParser 
   with SimplyTypedLambdaToTextualParser with
   SimplyTypedTextualToAbstractTransform with TextualSyntax { 
   self: SimplyTypedTextToASTParser with AbstractSyntax => ...
}
\end{lstlisting}


The component \texttt{SimplyTypedTextualToAbstractTransform} 
shares a lot of structure with the pretty printer: 
the former can be seen as a function from textual syntax nodes to abstract syntax
nodes, and the latter takes abstract syntax nodes to string representations.


Both need to gather context about variable bindings as they traverse the tree.
The first needs to remember the naming hints derived from the original input to find 
the distance to the closest scope binding the variable name. The second
remembers what naming hints it has already encountered in order to create new unused variable names to make a clearly readable representation.

Both are implemented as \texttt{Function} objects whose \texttt{apply} method provides 
the specific functionality. Making them real subclasses of \texttt{Function} 
even enables their use in higher-order functions just like any other 
function.
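As a standalone sketch of this design (hypothetical, outside the interpreter):

\begin{lstlisting}[language=Scala]
object FunObjectDemo {
  // a context-carrying function object is still an ordinary Function1
  val lookupName = new Function1[Int, String] {
    val ctx = List("b", "f") // innermost binder first
    def apply(n: Int): String = ctx(n)
  }

  def main(args: Array[String]) {
    println(List(1, 0).map(lookupName)) // prints List(f, b)
  }
}
\end{lstlisting}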


The functionality for adding data to the context was lifted into a common
ancestor, \texttt{ContextualFun}. The differing functionality is split up between some 
helper functions and a different \texttt{apply} method (see listing \ref{contextualfuns}).
However, the \texttt{extend} method that creates a new object with a
properly extended context cannot naively be lifted into
\texttt{ContextualFun}. To be able to chain calls properly, this method needs
to declare a return type equal to that of the actual work-performing subclass,
not that of the superclass \texttt{ContextualFun} where it is delegated to.

One way to make this structure inheritable, without overriding methods in a
subtype just to further constrain the return type, is to introduce a factory
method \texttt{create} with a type parameter as return type. This type parameter represents the concrete subclass. Using a type bound on
the parameter we can express that the parameter represents a subtype of
\texttt{ContextualFun}. Now each subclass can instantiate the superclass with the correct
type parameters and the factory method will be properly typed without overrides.
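A sketch of the resulting shape, assuming the context is a list of names; \texttt{Self} stands for the concrete subclass, so the lifted \texttt{extend} returns the precise type without any override:

\begin{lstlisting}[language=Scala]
trait ContextualFun[A, B, Self <: ContextualFun[A, B, Self]]
    extends Function1[A, B] {
  type Ctx = List[String] // assumed shape of the context
  val ctx: Ctx
  def create(ctxnew: Ctx): Self // factory implemented by each subclass
  def extend(hint: String): Self = create(hint :: ctx)
}
\end{lstlisting}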



\begin{lstlisting}[float,label=contextualfuns,caption=printer and namer share structure]
trait ContextualFun[A,B, Self <: ContextualFun[A,B,Self]] 
                                  extends Function1[A,B] { ... }

trait PrinterCtxFun extends ContextualFun[LTerm,String,PrinterCtxFun] {
    def create(ctxnew: Ctx) = new PrinterCtxFun { val ctx = ctxnew }
    def apply(t:LTerm):String = ... }
trait DeBruijnifyCtxFun extends ContextualFun[RawTerm,LTerm,DeBruijnifyCtxFun]{
    def create(ctxnew: Ctx)=  new DeBruijnifyCtxFun {val ctx = ctxnew}
    def apply(t:RawTerm):LTerm = ... }
\end{lstlisting}


\section{Typechecking}
As mentioned previously, typechecking for the simply typed lambda
calculus is `syntax-directed'. The syntactic structure of the term being
examined fully specifies which single typing rule is applicable.


This shows nicely in an implementation of the recursive algorithm from page \pageref{STLC_typing}
using a pattern matching function. Each typing rule of the algorithm is
reflected as a leg of the pattern match as in listing \ref{typecheckingpatternmatch}. 

\begin{lstlisting}[float,label=typecheckingpatternmatch,caption=type checking as pattern matching]
trait SimplyTypedTypeCheck extends TypeChecker{self: TypeChecker 
		with PrettyPrinter with AbstractSyntax =>
...
  def typeOf(t: LTerm)= t match {
    case Tru()        => TyBool()
      ...
    case Var(n)       =>   { 
      recallTypeOfVar(n)
    }
    case Lam(hint,ty,body) => {
      TyArr(ty, rememberingTypeOfVar(hint,ty).typeOf(body))
    } ...
\end{lstlisting}
The base type case is covered by implementing branches linking each built-in
term to its specific base type.  When encountering a variable, the algorithm
uses a helper method, which, based on the DeBruijn index, looks up the
remembered type annotation for this variable. The counterpart to this is the branch
for a function abstraction. Here the algorithm needs to remember the type declared
on the formal parameter, which also forms the domain type of the function. 
The type of the body is then calculated with that extra piece of information. 
The function \texttt{rememberingTypeOfVar} constructs the extended typing context 
used by \texttt{recallTypeOfVar}.
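A hypothetical standalone sketch of these two helpers (with \texttt{String} standing in for \texttt{LType} to keep it self-contained); the typing context is a stack of binder types, so the DeBruijn index doubles as the lookup position:

\begin{lstlisting}[language=Scala]
class TypingCtx(ctx: List[String]) {
  // the variable's DeBruijn index is its position in the stack
  def recallTypeOfVar(n: Int): String = ctx(n)
  // push the declared parameter type before checking the function body
  def rememberingTypeOfVar(hint: String, ty: String) = new TypingCtx(ty :: ctx)
}
\end{lstlisting}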


In the STLC errors can only occur at function 
application. Application of a function to an argument is only correct 
if the argument has the exact expected type. 
If the argument has the wrong type or the supposedly-function term is not 
of function type at all,  the program is faulty and will be refused, 
as seen in listing \ref{typecheckapp}.

\begin{lstlisting}[float,label=typecheckapp,caption=checking function application]
    case App(tfun,targ)  => {  
      val funty = typeOf(tfun)
      val argty = typeOf(targ)
      funty match {
        case TyArr(fundom, funrange) => {
          //concrete and formal parameter types need to match
          if  (argty == fundom ) funrange 
          else throw new TypeExc("wrong application argument type:...")
        }
        case _ => throw new TypeExc("applying to non function type:...")
      }}}}

\end{lstlisting}


\section{Evaluation}
Evaluation too is a straightforward transcription of a simple algorithm. 
Reducing a full term to a value 
can be implemented by repeatedly taking a single local step of
evaluation. In the STLC
the only place where actual work is performed is function application.
%Opting for \mbox{call-by-value} semantics means that function arguments 
%need to be fully reduced 
%to values before the function can be applied. Then finally the 
Function application is performed by replacing the whole application by the
body of the function, with the argument value suitably inserted. This requires
an implementation of the substitution process
suited for the specific term representation as background machinery. 

Because of this separation of concerns, the evaluation function can become an
uncluttered pattern matching function on a given term in abstract
syntax. The code is a straightforward reflection of evaluation drilling down to 
the level of a function application. In the case of an application syntax
node (see listing \ref{evaluationapp}), the algorithm branches depending on whether the argument needs
further reduction or not. Because pattern match clauses are tried from top to bottom, you need to match on the most specific case
first. A choice for \mbox{call-by-value} semantics means that
function arguments need to be fully reduced 
to values before the function can be applied. Thus the pattern match leg that
tests whether this is applicable in this case comes first.
Afterwards come the cases where, by fall-through, either the function or the argument needs
further reduction before performing the substitution.


\begin{lstlisting}[float,label=evaluationapp,caption=evaluating function application]
trait Evaluator { self: Evaluator with AbstractSyntax =>
   def evaluate(t: LTerm) : LTerm  
}
trait SimplyTypedEvaluator extends Evaluator 
   with SimplyTypedDeBruijnSubstitution{ self: SimplyTypedEvaluator 
                                             with AbstractSyntax =>
   def eval1(t: LTerm): LTerm = t match {
   
    //E-AppAbs: Reduction is possible
    case App( Lam(hint,ty,body), v) if (isValue(v))
      => substituteterm(v).asTop.intoterm(body)
          
    //E-App2: we have a function but need an argument
    case App(v1, t2)             if (isValue(v1))  
      => App(v1 ,eval1(t2))
      
    //E-App1: we still need to reduce our function
    case App(t1, t2)                             
      => App(eval1(t1), t2)
      
   }}

\end{lstlisting}
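The single-step function \texttt{eval1} can then be driven to a result by repetition. In \citep{TAPL} a dedicated exception signals that no rule applies; in this sketch the \texttt{MatchError} of the exhausted pattern match plays that role:

\begin{lstlisting}[language=Scala]
def evaluate(t: LTerm): LTerm =
  try evaluate(eval1(t))              // keep stepping while a rule applies
  catch { case e: MatchError => t }   // no rule applies: t is a value
\end{lstlisting}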
The full evaluation trait also contains the function \texttt{isValue()}. Putting this 
functionality in the abstract syntax definition as a normal 
method vs. in the evaluation trait as done here is a matter of taste.
On the one hand, it needs to be kept in sync with the definition of the 
abstract syntax tree. On the other hand it is only used during 
evaluation. The same design choice needs to be made regarding the
implementation of substitution. In this implementation it is factored out
into a separate trait as a subcomponent of the concrete evaluation trait (see listing \ref{substitution}).
\\

The \texttt{TermSubstitutionProvider} trait (see listing \ref{abstractsubstitution}) exposes an interface like\\
\mbox{\texttt{substituteterm(arg).asTop.intoterm(body)}}. This fluid style is
implemented using a chain of methods and abstract classes. Each method
returns another small object with the next methods in the chain defined
on it. 
Scala's supporting features for family polymorphism
\citep{ScalableComponentAbstractions} make it easy for the concrete
trait to subclass this whole structure. Any premature hard links between
the abstract classes can be avoided by linking to an 
abstract type instead. The abstract
types are bounded by an abstract class defining the minimum exposed interface.

\begin{lstlisting}[float,label=abstractsubstitution,caption=fluid substitution interface]
trait TermSubstitutionProvider { self: TermSubstitutionProvider with AbstractSyntax =>

  type partial <: examplepartial
  type topsubst <: exampletopsubst
  
  def substituteterm(v:LTerm):partial
  abstract class examplepartial(v:LTerm){
    def asTop: topsubst
    override def toString="[? := "+v+"]"
  }
  abstract class exampletopsubst(v:LTerm){
    def intoterm(term:LTerm):LTerm
    override def toString="[top := "+v+"]"
}}

\end{lstlisting}
The implementing trait (see listing \ref{substitution}) can then define subclasses of the different
syntax building classes. By implementing the binding
type members, the knot between the subclasses is fully tied. The functionality is implemented 
as a transliteration of the algorithm in \citep[chapter 7]{TAPL}. In this
algorithm the indices in the DeBruijn terms are shifted to deal with the 
new shape of the syntax tree. This is a highly non-trivial but well-documented procedure. 
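For reference, the core of the procedure is the shifting walk of \citep[chapter 7]{TAPL}, sketched here on the three basic node types: every free index in \texttt{t} is raised by \texttt{d}, while \texttt{c} counts the binders passed so far:

\begin{lstlisting}[language=Scala]
def shift(d: Int, c: Int, t: LTerm): LTerm = t match {
  case Var(n)           => if (n >= c) Var(n + d) else Var(n) // free: shift
  case Lam(h, ty, body) => Lam(h, ty, shift(d, c + 1, body))  // one binder deeper
  case App(f, a)        => App(shift(d, c, f), shift(d, c, a))
}
\end{lstlisting}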

\begin{lstlisting}[float,label=substitution,caption=substitution implementation]
trait SimplyTypedDeBruijnSubstitution extends TermSubstitutionProvider 
              {self: TermSubstitutionProvider with AbstractSyntax=> 
              
  type partial = bruijnpart
  type topsubst = bruijntopsubst
  
  def substituteterm(v:LTerm)=new bruijnpart(v)
  class bruijnpart(v:LTerm) extends examplepartial(v) {
    def asTop = new bruijntopsubst(v)
  }
  class bruijntopsubst(v:LTerm) extends exampletopsubst(v) {
    def intoterm(term: LTerm)= ...
}}

\end{lstlisting}


\section{Extending to System F}
In the simply typed lambda calculus, we can only abstract over values to form a function.
Such a function has a fixed type, and this forces the programmer to duplicate a lot of code for every type.


In System F, the polymorphic lambda calculus, we can also abstract over a type parameter.
We now have two abstractions that give a term: ordinary lambda abstraction that 
takes a term argument, and a new universal quantification abstraction, identified 
with capital lambda $\Lambda$, that takes a type and returns a term.

By wrapping a universal type abstraction around an ordinary term abstraction, we can write polymorphic functions.
The simplest one, the identity function, can now be 
expressed as \texttt{$\Lambda a . \lambda t : a . t$}.

The typing rules become more complicated, because we now have functions at the type level.
Indeed, the type of a polymorphic function is no longer a base type, but a type abstraction 
using a type-level lambda over another type, in this case ``forall a . a''.

There is of course also a second form of abstraction over types: existential instead of universal quantification.
This introduces its own abstraction form of terms over types, and its own composite type. This can be used to model data hiding as in module systems.
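A hypothetical sketch of what this adds to the abstract syntax of listing \ref{syntaxtree} (the node names are chosen here for illustration):

\begin{lstlisting}[language=Scala]
case class TyVar(n: Int) extends LType                     // type variable (DeBruijn)
case class TyAll(hint: String, body: LType) extends LType  // forall a . T
case class TySome(hint: String, body: LType) extends LType // {exists a, T}
case class TyAbs(hint: String, body: LTerm) extends LTerm  // /\ a . t
case class TyApp(fun: LTerm, ty: LType) extends LTerm      // t [T]
\end{lstlisting}

The typechecker and evaluator then gain one pattern match leg per new node.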
\\
Implementing the typing and evaluation rules for these extensions is again not too hard.
During typechecking, we now need to perform substitution of type arguments in composite types. During
evaluation, substitution of both types and terms in a term is needed. By factoring this 
out to the fluid substitution interface, typechecking and evaluation remain simple to 
implement as a pattern match.

However, the factoring into a separate substitution component only moves the complexity.
While \citet{TAPL} spells out the substitution algorithms, understanding them and 
perhaps finding implementation errors is no longer straightforward, because of the index manipulations.

Chapter 9 documents the technique of Nominal Abstract Syntax we used as a replacement for 
DeBruijn indices in a revision of the System F interpreter, and how datatype-generic 
programming techniques help to reduce the size of the then trivial substitution code.
































\chapter{Introduction to Datatype-generic programming}

Datatype-generic programming is a recent branch within functional programming
that uses abstraction over types to create more flexible programs,
pushing the limits of what is possible within a static type system.

As mentioned earlier, the evolution from early to modern computer languages has
manifested itself in rising mechanisms for abstraction. 
The basic function, abstracting from concrete values using parameters for a
block of code, has been mainstream for decades. The theory behind this is
backed by the simply typed lambda calculus as we have seen and implemented. 
An extension of this model, System F, also allows abstraction over types.



With the
introduction of `generics' into Java 5 and C$\sharp$ 2.0, a partial implementation 
of such type abstraction is now mainstream. Programmers can now write typesafe functions 
that work for all value types and use classes parameterized over a simple element type.
%As an example, a function returning the first
%element of a list can now be declared as in Scala syntax as 
%\lstinline{ def first[E](list:List[E]):E }. 
This feature has reshaped the collection frameworks in these languages 
significantly, as one can now put elements into a collection and retrieve them 
without restoring lost type information through typecasts, as was necessary before.



More recent developments within functional programming, especially in Haskell,
push the flexibility limits of type systems higher, but require fuller support for type
abstraction. 
Since the name ``generic programming'' is now generally recognized as referring 
to generics implementations in mainstream object-oriented languages, 
the new name datatype-generic programming (DGP) has been coined for these
efforts.


A first branch of datatype-generic programming follows directly from taking 
the abstraction over element types one step further and also abstracting over
the container type. With this extra power, you can write functions that work for
a number of different datatypes. For example, a function \texttt{size} can
now be parametrized by both the container type and the element type, so it works
for lists, trees, and other datastructures.
A limited version of this concept is already available from the user's point of
view by following the standard object-oriented advice of using
methods in interfaces instead of implementation classes. 
Scala
allows writing more expressive interfaces that are easier to implement. It is also possible to use the same
mechanisms to abstract over APIs and effects instead of just collections
of elements. This can for example be used to abstract over null-checking and
exception handling.

%It is possible in Java to have an
%interface Countable with a method size that works for any iterable datatype. a
%function that This way one can calculate our size example function for any
%iterator. But now Scala improves over Java with some features that make
%interfaces both more expressive and a lot more convenient. Use cases for this
%include collection libraries, but also control abstractions as are well-known
%in the Haskell community



A second subcategory of datatype-generic programming uses a different kind
of type information. In an environment where the structure of the
type is known, we can write functions that work guided purely by this structure, without needing a 
specific interface with datatype-specific code. A set of Haskell techniques
aims to enable general functions for serializing and deserializing, printing
and parsing. This eliminates the need to write this boilerplate code for
every datatype over and over again. The way this happens now in Java is through
reflection and any validity checks only happen at runtime. These Haskell methods 
can inspire a properly abstracted, safer and more convenient way of writing such functions.


A third subcategory focuses specifically on the boilerplate code needed when
querying or transforming a complex datastructure. When in most of the cases the
behaviour needed is very ordinary and only a few cases are actually interesting,
these methods aim to have the programmer write only the non-trivial cases and
just use default behaviour in all other cases.

 
 


The promises made by datatype-generic programming in general include:
\begin{itemize}
  \item greater resilience against changes
  \item less repeated code
  \item greater reuse
  \item clearer code
\end{itemize} 
Code that is not specific to any single type will automatically adapt when
the types used in the program are refactored.
Because they work for more cases, they can obviously lead to better
reuse. When boilerplate code like handling nulls or handling uninteresting cases
in a traversal can be abstracted, the remaining code handles only the
interesting parts of the algorithm and makes the code clearer. 



\section{Type abstraction taxonomy}
Somewhat like the constant widening of the term Artificial Intelligence,
the term generic programming refers to abstracting over things
mainstream languages cannot abstract over. A subdivision with specific terms is needed to
communicate accurately. Currently we can distinguish the following subdivisions
of the term genericity or polymorphism within functions that handle abstract
types:
\begin{itemize}
  \item A function is parametrically polymorphic with regard to a parameter if
  the evaluation logic in the function does not depend on the parameter in any
  way. Examples are the head and tail functions on a list, which are parametric in the element type. These
  functions have a type parameter with no extra information, and thus work for
  all types. 
  \item A function uses ad-hoc polymorphism if it depends on the specific type
  parameter passed to it, but hides it syntactically using overloading. Static
  method overloading in Java and type classes in Haskell both allow the same
  name to refer to different implementations based on the type. Scala's
  implicit parameters as type class emulation also fall here.
  \item A function uses bounded parametric polymorphism if it only needs partial
  type information about its parameter and not full implementation information.
  An example is subtyping using base classes and interfaces or traits in Java
  and Scala.
  \item A function is polytypic with regard to a parameter if it needs just the
structure of the parameter type to work without any specific implementation information. 
Usage is mainly for parsing, pretty-printing, serialization. This can happen
through function specialization at compile time or through reflection at run
time.
\end{itemize}



\section{Type constructor polymorphism}
The main step that is needed to go from the type abstraction of Java and
C$\sharp$ to that of Scala is the addition of type-constructor
polymorphism \citep{adriaantcpoly} in version $2.5$.

As mentioned in the explanation of type checking on page
\pageref{first_typeconstructor_mention}, we can make a distinction between
different subsets of types. 
The most common are value types, the ones that directly classify values, such as Int and String. However, when our data is not primitive but composed, we get types such as List
of Int or Tree of String. List and Tree are type constructors or type functions:
they take a type and yield another type. To differentiate between these sets of
types, we can categorize them using `kinds', just as types classify values. Ordinary value types have kind \texttt{*}, while List and Tree
both take one type parameter and are of kind \texttt{* $\Rightarrow$ *}.

In Scala syntax a method with a single type parameter of kind \texttt{*} is
written as \lstinline{def methodname[parametername] ...}, and a method with a
type constructor parameter of kind \texttt{* $\Rightarrow$ *} is written
\lstinline{def methodname[parametername[_]]}. This syntax demonstrates that the
type constructor parameter can be applied to a type itself.
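The two declaration forms can be sketched as follows (the method names are invented); note how the type constructor parameter \texttt{C} is applied to concrete types inside the second signature.
\begin{lstlisting}[language=Scala]
// a type parameter of kind *
def singletonList[A](a: A): List[A] = List(a)

// a type constructor parameter of kind * => * : inside the
// signature, C is applied to types such as Int and String
def twoContainers[C[_]](intC: C[Int], stringC: C[String]): (C[Int], C[String]) =
  (intC, stringC)
\end{lstlisting}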

\section{Weak interfaces lead to boilerplate}

If a language has no way to abstract over types, neither by type parameters nor
by subsumption, a specific function is needed for every operation on every
datatype. The best one can do in this case is to write different
datatype-specific functions, and have the client implement a wrapper using
static overloading so that the same function name can be used in all the
dispatched cases. 
In Java and similar languages, subsumption and interfaces can turn this
manual dispatch into dynamic dispatch and make the client side polymorphic.
The client needs to write his function only once, but every implementer that
exposes the interface needs to implement every function of the interface. So
the number of implementations that really do something other than redirect is
still the same.


In Scala before version 2.5, the fact that traits can carry default
implementations already lowers the implementation effort tremendously. A lot of
methods with fixed return types can be layered on other more general functions.
However, some functions in interfaces need to return a value of the datatype
that implements the interface. The best we can do is have an implicit upcast
and return the value as being of the interface type. If the implementing classes
want to prevent this loss of type information, they have to override each
inherited method in order to specify a more specific return type. So then we
still cannot properly inherit default methods from traits.

Once a language has type constructor polymorphism, you can use the implementing
type as a type parameter of the interface. In the interface methods can then
get the more precise return type, and the implementation class can truly
inherit the implementation of default methods without any more boilerplate.
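This pattern can be sketched in Scala as follows (the trait \texttt{Filterable} and class \texttt{Box} are invented for illustration): the implementing constructor passes itself as the type parameter \texttt{This}, so the interface method keeps its precise return type.
\begin{lstlisting}[language=Scala]
// the implementing constructor passes itself as the parameter This,
// so filter keeps its precise return type in every implementation
trait Filterable[A, This[X] <: Filterable[X, This]] {
  def filter(p: A => Boolean): This[A]
}

case class Box[A](items: List[A]) extends Filterable[A, Box] {
  // the return type is Box[A], not the weaker Filterable[A, Box]
  def filter(p: A => Boolean): Box[A] = Box(items.filter(p))
}
\end{lstlisting}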

In Haskell this polymorphism has been available for a long time, and some
mathematical concepts have been implemented as type classes, requiring type
constructor polymorphism. Some interfaces are so general that they can provide an
abstract API to eliminate a very different type of boilerplate: code that
chains different operations on data.
By this chaining I mean the code that is necessary between two consecutive
function applications on a value. If the functions are normal and pure, we can
safely call the second function directly on the result of the first one. But
a lot of functions are not so well behaved and can for instance return null
instead of a proper value. Every programmer recognizes the pattern of applying
a function on a value, checking if it returned null and only in the case of a
valid result calling the next function. Or in case of functions that can throw
exceptions: calling a sequence of functions within nested try-catch blocks.

The interfaces that can be implemented once type constructor polymorphism is
available make it possible to write general code using an abstract function
chaining operator. We will tackle these in chapter 7.

\section{Traversals lead to boilerplate}

Boilerplate code refers to code that needs to be repeated, identically or
almost identically, and obscures the important pieces of code. One particular 
source of boilerplate is caused by the internal structure of datatypes.
%One of the areas which needs a lot of boilerplate in an object-oriented or
%functional approach, is the traversal of complex hierarchical datastructures.
A lot of operations on hierarchical datastructures consist of 
either traversing a datastructure and changing
something: a transformation/mutation; or traversing a datastructure and
calculating a result: query/inspection.
\\

The way to handle such operations depends on what paradigm your data storage
interface follows. If the data in the system is actually backed by a relational model, it is 
possible to step away from the hierarchical model and operate directly on the
database.  However, current industry standard object-oriented
modelling \citep{fowler_peaa} necessitates a dual view on the data: an
object-oriented domain layer, backed by a relational data store, with a 
mapping in between. In this case extra decisions must be made about which 
operations to perform on the object side and let percolate to the relational side, 
and which in the other direction. This leads to a full additional layer in the
software architecture that is responsible for marshalling between the object-oriented
and relational view and encapsulating the representation information. One way
of reducing this boilerplate is to have an existing framework handle it, such
as those adhering to the Java Persistence API.


A newer trend is using XML for data storage, either with the typical XML
query language XQuery and transformation language XSLT to perform operations
directly on the storage model, or again with an object-XML mapping layer such as the
Java Architecture for XML Binding specification, combined with object-oriented
traversal implementations.
\\

A different option is to stay within a single pure representation. This allows
you to scrap the pure boilerplate of the mapping layer. 
%The current solution used in the industry seems to be performing
%these operations directly on a database representation of the data, thus necessitating a second view on all your data,
%one structured live in the system and one relational in the database
%representation. A goal here would be to make these operations as convenient in
%a pure model as in a relational one.
However, the intuitive pure ways of implementing traversal code typically
require even greater amounts of boilerplate code, hence the need for a simpler model, even
absorbing the cost of a mapping layer. This seems to be true whether the
programming language is functional or object-oriented.
 

Take the following sketch of a hierarchical system model
\begin{itemize}
\item CarPark : list of cars
\item Car : body, chassis
\item Chassis : frame, powertrain, list of wheels
\item Wheel : position, tire\_id, tirepressure
\end{itemize}
An example of a transformation operation would be changing the tires: for every
wheel, the tire\_id could need to be swapped between winter and summer tires.
%perform transform: change winter/summer tire (oldid {$\rightarrow$} newid)
An example of a query would be gathering the average tire pressure over a
certain selection of cars.
\\


Whether we are using the object-oriented or functional paradigm, to model 
these as high-level operations requires boilerplate code on each level to distribute
the operation to its children.
\\



Proper object-oriented style advises following the Law of Demeter to hide internal 
structure. This law says an object should only pass messages to its friends.
This avoids change-sensitive and repetitive user code drilling down in another 
object using member selection and loops.
In such a
model of the carpark system, not only the \texttt{Wheel} class has a method
\texttt{changeTireId( \ldots) } , but also the top level class \texttt{CarPark}, 
to provide opaque access, as well as all the intermediate levels. Each method above \texttt{Wheel} 
will pass on the method call to its children, either as a single call, 
like within \texttt{Car} to its instance of \texttt{Chassis} using simple 
indirection, or as multiple calls in a loop, like \texttt{Chassis} to each wheel in its
list of \texttt{Wheel}s.
%Law of Demeter says hide internal structure, don't peek in other objects
\\
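The provider-side boilerplate for a single such method might look as follows (a sketch; the class members are simplified from the model above, and only the body of \texttt{Wheel.changeTireId} is interesting code):
\begin{lstlisting}[language=Scala]
case class Wheel(position: String, var tireId: Int, pressure: Double) {
  def changeTireId(oldId: Int, newId: Int): Unit =
    if (tireId == oldId) tireId = newId            // the interesting code
}
class Chassis(val wheels: List[Wheel]) {
  def changeTireId(oldId: Int, newId: Int): Unit =
    wheels.foreach(_.changeTireId(oldId, newId))   // loop boilerplate
}
class Car(val chassis: Chassis) {
  def changeTireId(oldId: Int, newId: Int): Unit =
    chassis.changeTireId(oldId, newId)             // indirection boilerplate
}
class CarPark(val cars: List[Car]) {
  def changeTireId(oldId: Int, newId: Int): Unit =
    cars.foreach(_.changeTireId(oldId, newId))     // loop boilerplate
}
\end{lstlisting}
Every additional traversing method repeats the three boilerplate bodies.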

This data hiding gives flexibility benefits, and makes sure that the traversal 
logic stays on the provider side instead of the client side.
However, the traversal boilerplate is repeated by the provider in every class 
and for every method that needs to be shielded. If we have a class hierarchy
that is $c$ deep, and we wish to implement $m$ traversing methods, this comes to
$c \cdot m$ instances of boilerplate, obviously not a good scaling factor.
\\



On the other hand, defining this in the functional way leads us to the
same problem: we create different functions for each of the datatypes, again
each distributing the operation to the level below. Otherwise the user has to
decompose data using pattern matching to drill down. Because of the duality between 
object-oriented and functional styles, again boilerplate code must be 
repeated for every function and for every datatype. Again $c \cdot m$ instances of
boilerplate code obscure our code.
\\




Such amounts of boilerplate code have a lot of disadvantages:

\begin{itemize}
  \item Whether retyped or produced faster by copy-paste, such code remains
  hard to maintain.
  \item To add a single method or a single class, modifications are needed in
  more than one place.
  \item The repeated code obscures the relevant sections.
\end{itemize}


The goal of datatype-generic methods for such cases, as we will treat in
chapter 8, is to have a general traversal method defined automatically for each
datatype based on its structure. We also need a composition system to compose particular and general parts. Using
such a mechanism we can write each different function that we want to distribute 
over different levels, in the form of a thin layer specifying only the particular 
cases and redirecting all others to the general traversal method. 


Now
for each class we only have to provide a single general traversal function per
traversal pattern. % For each class we have to make the type structure clear so the general
%traversal function can be derived. 
This comes to a total of $c + m$ instances of boilerplate code, so
datatype-generic programming can reduce the amount of necessary boilerplate dramatically.




















\chapter{Mathematically inspired abstractions in Scala}
In functional programming in Haskell, the available abstraction mechanisms suffice 
to adapt and instantiate abstract concepts from category theory as named
program abstractions. These concepts embody very general patterns, and can be
recognized in a lot of places once they are known. Though patterns are already
useful just as naming instruments for easier communication, they are more
beneficial once they can be implemented as a named concept.
Then they can be reused instead of constantly reimplemented, and
programmers can write general utility functions for all instances.
\\
In Haskell these are used mainly to deal with either mappings of
functions over complex container structures, or to introduce, compose and
wrap the use of functions with side-effects into a pure language. 


While programming with collection frameworks, using interfaces and abstraction
over simple types, has seen great use in mainstream languages, side-effects 
are often not well contained. As interaction between
side-effects is a major source of bugs and, in the case of statefulness, 
concurrency problems, it is worth trying to 
see how far we can use the same containment principles in a next-generation 
mainstream language such as Scala.

Our main running example will be abstraction from the boilerplate that deals
with possible null values.
\\


Since these mathematical concepts sometimes blur the distinction between 
distributing a function over a collection and composing functions with
side effects, they are sometimes a bit too abstract on first encounter. Most
of the time, thinking about what the rather abstract operations mean in the case of
ordinary lists can help to grasp the vocabulary.
\\

These abstractions are structured in Haskell libraries as type classes and type constructor classes.
They can be rendered in Scala in a similar functional style, where methods take
a trait containing function implementations as an implicit parameter. 
However, a less literal translation from Haskell, that exposes the abstract
functionality as methods on an object, better fits the current Scala idiom of
objects with higher-order methods.


Since the functional view is more straightforwardly implementable, this will be used to
introduce the concepts via Haskell translated into Scala. A typeclass is then a 
trait with some necessary methods and some other convenience methods layered on top.
Generic functions are parametrized in a type, and take an implicit argument of
the typeclass trait with the same type parameter. This implicit
argument is a witness that the type actually has the methods
available. Then in the body of the generic method, the functions needed can be
accessed as elements of the implicit witness parameter.
\\
% part of what helped me get this was
% http://www.haskell.org/haskellwiki/The_Other_Prelude
The coming sections of this chapter will show some of these abstractions, 
how they can be combined and how they can be used as common structures for API's.
Then we will tackle the port to a more object-oriented syntax.



\section{combinable}
This section introduces concepts that can be used to combine different values
into one.


\subsection{monoids}%dutch: unitaire halfgroep%
A monoid is a mathematical structure consisting of a set of elements, a neutral
element and an associative binary operation on those elements. A monoid is 
probably most familiar as being almost a group, since it lacks the ``inverse'' 
operation of the latter.

\begin{lstlisting}[language = Scala]
trait Monoid[A] {
  def mempty:A 
  def mappend(a1: A, a2: A): A 
}
\end{lstlisting}

Quite a lot of structures in programming are monoidal, because making 
combinations is such a basic feature. The natural numbers form a monoid with 
`0' as neutral element and addition as operator, and a second one with `1' and 
multiplication. 
Booleans have a useful conjunctive monoid with `true' as neutral element and `and' as associative operation, and a second 
disjunctive one with `false' and `or'. 

But not only sets of value types can have monoidal structure.
So do Lists, with the empty list as
neutral element and append as operation, and the Scala Option type, with None
as neutral element and two possible associative operations: taking either the first or the second
operand as result when both are actual values, and the non-dummy one when only one of the two contains a value.
And further removed from the idea of container of elements,
endomorphisms, being functions whose range equals their domain, form a monoid
with the identity function and function composition.
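Rendered against the \texttt{Monoid} trait, some of these instances could look as follows (a sketch; the instance names are our own, and the trait is repeated to keep the fragment self-contained):
\begin{lstlisting}[language=Scala]
// the Monoid trait from above
trait Monoid[A] {
  def mempty: A
  def mappend(a1: A, a2: A): A
}

object IntPlusMonoid extends Monoid[Int] {
  def mempty = 0
  def mappend(a1: Int, a2: Int) = a1 + a2
}
object BooleanDisjunctiveMonoid extends Monoid[Boolean] {
  def mempty = false
  def mappend(a1: Boolean, a2: Boolean) = a1 || a2
}
// Option: keep the first actual value when both are defined
def optionFirstSomeMonoid[A]: Monoid[Option[A]] = new Monoid[Option[A]] {
  def mempty = None
  def mappend(a1: Option[A], a2: Option[A]) = if (a1.isDefined) a1 else a2
}
\end{lstlisting}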

This monoidal structure on type constructors cannot directly be fitted into the 
existing trait \texttt{Monoid} because that abstracts over a value type. For any 
type \texttt{T}, \texttt{List[T]} will be a \texttt{Monoid}, and this 
makes \texttt{List} an instance of a second parallel trait \texttt{Monoid1}
(listing \ref{monoid1}) abstracting over a type constructor.
\begin{lstlisting}[language = Scala,float,label=monoid1,caption=Variation on monoid]
//kind * -> * : extra structure on type constructors with one parameter
trait Monoid1[M[_]] {
  def mempty[A]:M[A]
  def mappend[A](m1: M[A], m2: M[A]) :M[A]
}
\end{lstlisting}

So now we have an interface that expresses combining two elements. This can be 
built upon to write functions that abstract over elements, bounded by the fact that those elements must be combinable.
For example, different functions on lists follow such a pattern. Calculating the sum of a list of integers
is traditionally done in a language without higher-order methods using a counter 
initialized to zero and a loop over all the 
elements that adds each element to the counter.
In Scala the conventional solution is a use of the foldLeft function 
on the list taking the initial element and combining function explicitly.
Using \texttt{Monoid} we can also use a function \texttt{fold} that takes the 
parameters packed together as a \texttt{Monoid} instance.
As an external function:
\begin{lstlisting}[language=Scala]
  def fold[A](list:List[A])(implicit witness:Monoid[A]):A
\end{lstlisting}
The function summing an integer list is then \lstinline[language=Scala]{ def sumlist(list:List[Int]):Int = fold(list)(Monoid.IntPlusMonoid)}.
A different variant, \texttt{foldMap}, first evaluates the elements into a
monoid type using a passed function, before combining them.
This allows writing the \texttt{exists} function as a map using a predicate function 
followed by combining the booleans using the disjunctive monoid.
\begin{lstlisting}[language=Scala]
  def foldMap[A,O](fun: A=>O, arg:List[A])(implicit witness:Monoid[O]):O
  def exists[A](pred: A=>Boolean, arg:List[A]):Boolean 
    = foldMap(pred,arg)(Monoid.BooleanDisjunctiveMonoid)
\end{lstlisting}
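One possible rendering of these two functions for lists, layered on the standard \texttt{foldLeft} (a sketch; the \texttt{Monoid} trait is repeated for self-containment):
\begin{lstlisting}[language=Scala]
trait Monoid[A] {
  def mempty: A
  def mappend(a1: A, a2: A): A
}

// fold combines all elements using the operations packed in the witness
def fold[A](list: List[A])(implicit witness: Monoid[A]): A =
  list.foldLeft(witness.mempty)(witness.mappend)

// foldMap first evaluates each element into the monoid type
def foldMap[A, O](fun: A => O, arg: List[A])(implicit witness: Monoid[O]): O =
  fold(arg.map(fun))
\end{lstlisting}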





\subsection{foldable}

\texttt{Foldable} is an abstraction for a container that can use a combining function
to repeatedly combine its elements into a result value.
Where we just defined \texttt{fold} and \texttt{foldMap} on lists and referred to \texttt{foldLeft}, 
\texttt{Foldable} is the type constructor interface, packaging these functions, that lists as well as other containers should implement.



A fold operation is a very expressive construct, used most of the time to calculate a kind of
summary of a composite construct. Traditionally some different variants
are provided: combining from the left or from the right; performing the first combination using the first two elements, or
using an explicitly passed initial default value; combining the elements
directly, or using an extra evaluation function to map each element into a
combinable element.


It is easy to see that layering these different variants on a single function
allows users to choose the most natural dialect of fold for each usage
scenario, while keeping the implementation effort per datatype small.
This makes it a concrete case where Scala traits with default implementations
again provide a big reduction in needed code compared to Java interfaces.




In the Haskell implementation, all \texttt{fold} variants are layered on top of 
a general \texttt{foldRight} and a \texttt{foldMap} function with equivalent
Scala signatures as in listing \ref{foldable_spec}.
\begin{lstlisting}[float,label=foldable_spec,caption=foldable specification]
trait Foldable[FO[_]] {
  def foldMap[A,M](intomonoid: A=>M , arg: FO[A])(implicit mon:Monoid[M]):M
  def foldRight[A,B](combiner: (B,A) => B, init: B, arg: FO[A]):B
...
\end{lstlisting}
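As a concrete instance, a List can implement both functions directly by delegating to its own folds, instead of going through the generic layering (a sketch; note that this \texttt{foldRight} consumes the elements starting from the right, consistent with the \texttt{Endo}-based derivation below):
\begin{lstlisting}[language=Scala]
trait Monoid[M] {
  def mempty: M
  def mappend(m1: M, m2: M): M
}

trait Foldable[FO[_]] {
  def foldMap[A, M](intomonoid: A => M, arg: FO[A])(implicit mon: Monoid[M]): M
  def foldRight[A, B](combiner: (B, A) => B, init: B, arg: FO[A]): B
}

implicit object ListFoldable extends Foldable[List] {
  def foldMap[A, M](intomonoid: A => M, arg: List[A])(implicit mon: Monoid[M]): M =
    arg.map(intomonoid).foldLeft(mon.mempty)(mon.mappend)
  // consume the elements from the right
  def foldRight[A, B](combiner: (B, A) => B, init: B, arg: List[A]): B =
    arg.reverse.foldLeft(init)(combiner)
}
\end{lstlisting}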

Each of these two can be implemented using the other with a piece of rather succinct Haskell.
This layering can also be expressed in Scala.
It is much easier to see how a \texttt{foldMap} can be written on top of \texttt{foldRight} than the other 
way around: the foldMap just passes the \texttt{mempty} and \texttt{mappend} elements 
contained in its \texttt{Monoid} parameter as initial value and combiner to \texttt{foldRight}.
The opposite way round is slightly trickier.
A combiner function taking a B and an A to a B is available. To \texttt{foldMap} we can only 
pass a function from A to a type for which we have a \texttt{Monoid} instance.
We need to rely on the monoid for endofunctions, the functions with equal argument and result type that can be combined with function composition.

The trick we need is to construct a mapping function from an A into an endofunction B to B, that 
takes the outer A and the B parameter and produces a B using the combiner function. 
\begin{lstlisting}[language=Scala]
  def foldMap[A,M](fun:A=>M, arg:FO[A])(implicit witness:Monoid[M]):M = 
    foldRight[A,M]( (acc:M, next:A) => witness.mappend(fun(next),acc), witness.mempty, arg)
  
  def foldRight[A,B](fun: (B,A)=>B, init:B, arg:FO[A]):B = {
     val endomon:Monoid[Endo[B]] = EndoRightToLeftMonoid[B]
     val endoresult:Endo[B] = foldMap[A,Endo[B]]( (a:A) => Endo( (b:B)=> fun(b,a) ), arg)(endomon)
     endoresult(init) 
  }
\end{lstlisting}
Then once we have a \texttt{foldRight} to bootstrap \texttt{foldMap}, or a \texttt{foldMap} itself, we can in 
a similar way, using anonymous functions, layer the other fold dialects 
on top of these functions. These implementations can of course be overridden to provide 
more direct implementations when speed is more important, but we can express the general layering principle.
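The \texttt{Endo} wrapper and its monoid used above are not spelled out in the text; one possible rendering is the following sketch, where \texttt{mappend(f, g)} applies \texttt{g} first and then \texttt{f}:
\begin{lstlisting}[language=Scala]
// endofunction wrapper: a function whose domain equals its range
case class Endo[B](run: B => B) {
  def apply(b: B): B = run(b)
}

trait Monoid[A] {
  def mempty: A
  def mappend(a1: A, a2: A): A
}

// right-to-left composition: mappend(f, g) applies g first, then f
def EndoRightToLeftMonoid[B]: Monoid[Endo[B]] = new Monoid[Endo[B]] {
  def mempty = Endo((b: B) => b)
  def mappend(f: Endo[B], g: Endo[B]) = Endo((b: B) => f.run(g.run(b)))
}
\end{lstlisting}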
 

The introduction of \texttt{Foldable} is useful because we can now define functions 
layered on a fold such as the 
\texttt{exists} example, for all \texttt{Foldable}s at once, instead of writing the 
layering over and over again as boilerplate per datatype.
The dual function of \texttt{exists}, \texttt{forall}, that checks whether a predicate is 
true for all elements, and the \texttt{find} operation that returns the first element 
for which a predicate is true are also prime candidates for specifying as default 
methods within \texttt{Foldable} as in listing \ref{foldable_functions}.
The different \texttt{Monoid} instances described earlier can find heavy use
here. 
\begin{lstlisting}[language=Scala,float,label=foldable_functions,caption=general functions layered on foldMap]
  def forall[A](arg:T[A])(p: A => Boolean): Boolean = 
    foldMap[A,Boolean](p, arg)(Monoid.BooleanAndMonoid)
  def exists[A](arg:T[A])(p: A => Boolean): Boolean = 
    foldMap[A,Boolean](p, arg)(Monoid.BooleanOrMonoid)
  def find[A](arg:T[A])(p: A => Boolean): Option[A] = 
    foldMap[A,Option[A]]({(a:A)=> if(p(a)) Some(a) else None}, arg)
          (Monoid.OptionFirstSomeMonoid)
\end{lstlisting}
These examples show that enough abstraction power allows us to use abstractions 
that reduce code to its essence.


%But folds can be used to construct any value, not just what mentally maps
%to a summary values. Examples using the fold structure both as a proof technique to encapsulate induction, and with tuple-producing functions to generate drowWhile or even Ackermann's function can be found in
%\citep{A_Tutorial_on_Fold_Graham_Hutton} describes how the fold structure
%can be used as a proof technique to encapsulate induction and how folds
%constructing tuples lead to dropWhile or even Ackermann's function.

\section{chainable computations}
While a container type such as List is often a good mental picture of a type
constructor, here we will see classes that use type constructors more as composable effectful
computations. The concepts here can be used to transform pure functions into
functions that deal with nullability of values, failure, or non-determinism in
a structured way.


When working in a language such as Java, the programmer has to keep in mind that null
values and exceptions are possible at any time, and in principle should be
guarded against everywhere. Just like always checking the error code of a C
function before its result is used, these measures are necessary but form pervasive concerns. 
The techniques developed by the functional programming community and distilled in the
following concepts allow structured and abstracted use of these effects, to
separate the code chaining those functions together from the actual
useful functions that implement the functionality.


In Scala, as in Haskell, structuring code using the following concepts indicates 
the presence of effects and generalizes the code over a specific effect. But
unfortunately Scala does not currently have a way of indicating and enforcing the absence of effects.

\subsection{functors}
The most basic notion here, of which all the rest are specializations, is the
functor. A generalized container is a functor when the container can be transformed by specifying how to transform
each of the elements. In a node and arrows diagram of the datastructure the new
container will typically have the same structure, but each of the nodes will be
changed by a single transforming function. A functor is a container that knows
how to distribute a function over its elements. 
If we are working with side-effects, as when the Option type encapsulates possibly null values, 
a functor can apply a pure function to its contents 
while keeping the side-effect untouched. 
\\

% In object-oriented terms, a functor is like a container with a accompanying
% transforming visitor interface.

A common introductory exercise when familiarizing students with higher-order 
functions is to let them implement the map function for lists and trees. 
Well now, \texttt{Functor} (see listing \ref{functor_spec}) is precisely the
abstraction of anything with this \texttt{map} function. In Scala we also add \texttt{foreach} for
when we do not care about the result, but for example a mutable variable needs
to be updated.

Since the output of the map function is of the same container type 
as the input, abstracting over the container type is crucial.
\begin{lstlisting}[language = Scala,float=p,label=functor_spec,caption=functor specification]
trait Functor[F[_]] {
  def map[A,B](fun: A=>B, arg:F[A]): F[B]
  def foreach[A](fun: A=>Unit, arg: F[A]): Unit = map[A,Unit](fun, arg)
}
\end{lstlisting}
\label{functor_definition}

A container like \texttt{List} is a functor by just using ordinary function
application on each of its elements, abstracting away a simple loop from the
client code.
But not only obvious containers adhere to this interface; the Option type,
which Scala uses to reflect nullability in the type system, is also a functor, as in
listing \ref{option_functor_impl}.

\begin{lstlisting}[float=p,language = Scala,label=option_functor_impl,caption=Option Functor implementation]
implicit object OptionFunctor extends Functor[Option] {
  def map[A, B](f: A => B, ft: Option[A]) = if (ft.isEmpty) None else Some(f(ft.get))  } 
\end{lstlisting}
Listing \ref{option_map_java} shows how nullability would be handled in Java,
while listing \ref{option_map_scala} shows how it can be handled using the
functional-style trait above, and how it can look using a more object-oriented version we aim for
later in this chapter and when we use the sequence comprehension syntactic
sugar. 
\begin{lstlisting}[float, label=option_map_java,caption=Null checking boilerplate code in Java] 
String repeatStringPure(String str) { return str + str; }
String useRepeatString(String maybenull) {
  return (maybenull == null) ? null : repeatStringPure(maybenull);
}
\end{lstlisting}
\begin{lstlisting}[float=p, label=option_map_scala,caption=Null check abstracted away in Scala]
def repeatStringPure(str:String) :String = str + str
//functional style using witness trait
def useFun(maybenull:Option[String]) = OptionFunctor.map(repeatStringPure,maybenull)
//object-oriented style we move to later
def useOO(maybenull:Option[String]) = maybenull map repeatStringPure
//sequence comprehension syntax
def useSequence(maybenull:Option[String]) = for { s <- maybenull} yield repeatStringPure(s)
\end{lstlisting}
 
Besides \texttt{List} and \texttt{Option}, if you think a bit deeper, you can see a functor in ordinary functions.
A function has two type parameters, while a functor has one. 
If we fix the first parameter and create for example \texttt{
FunctionProducing[A]}, this is a \texttt{Functor} with 
function composition serving as \texttt{map} operator. 
We can map it with a function A$\Rightarrow$B to a 
\texttt{FunctionProducing[B]} by applying our mapping 
function to the result of the original producing function.
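This producing-function functor can be sketched in Scala using the type-lambda idiom to fix the first parameter of \texttt{Function1} (the name \texttt{producerFunctor} is invented; the \texttt{Functor} trait is repeated for self-containment):
\begin{lstlisting}[language=Scala]
trait Functor[F[_]] {
  def map[A, B](fun: A => B, arg: F[A]): F[B]
}

// functions from a fixed type Env, seen as producers of their result
def producerFunctor[Env]: Functor[({ type F[R] = Env => R })#F] =
  new Functor[({ type F[R] = Env => R })#F] {
    // mapping is simply composition after the producer
    def map[A, B](fun: A => B, arg: Env => A): Env => B = arg andThen fun
  }
\end{lstlisting}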


% have only one
% free type parameter left, and make a possible functor, we can
% partially apply either the type parameter for the domain or the range of the
% function. For an example, suppose we fix one parameter to Int.
% Our two derived types map to FunctionProducingInt[T] and
% FunctionConsumingInt[T].
% %derived types by choosing which type parameter to fix, either functions 
% %to Int from a param `t', or functions from Int to a param `T'
% If we start with a function that produces T's from Integers, and we
% get a function from $T \Rightarrow T2 $ in fmap, we can indeed transform into a function
% producing T2 by just using function composition: we return an anonymous
% function which on receiving an Int, first applies the original function 
% and subsequently the new one. So the map operation for producers is 
% really function composition.


% Can we also find a functor instance for the other derived type? To do this, we
% ned to transform a function from $T \Rightarrow Int$ to a function from
% $T2 \Rightarrow Int$, given a function $T \Rightarrow T2$. There is no order of
% function application that can give us the result. 
% 
% 
% However, by symmetry we see that it is possible
% to define a symmetric operation co-map where the mapping function is not of
% type $T \Rightarrow T2$ , but of the inverse type $T2 \Rightarrow T$. In this
% case the operation cofmap is simply the composition of our original function after the
% transforming functor. So while producing functions can be made instances of
% functor, consuming functions can be made instances of a dual notion,
% `cofunctor' or `contrafunctor'

A last interesting fact about functors is that they are automatically
composable. A combination of two type constructors that are \texttt{Functor}s, like
\texttt{List[Option[T]]}, is again always a functor in T. While the inner functor
distributes the passed function, the outer one distributes the inner map, a concept that can be very
succinctly expressed in Haskell.

% \begin{lstlisting}[language=Haskell,float,label=functor_comp_haskell,caption=composite functors in Haskell]
% newtype (f `Compose`  g) a = Comp { unComp :: (f(g a)) }
% 
% instance (Functor f, Functor g) => Functor (Compose f g) where
%   fmap = fmapComposeFunctors        
% 
% fmapComposeFunctors:: (Functor f, Functor g) => (a->b) -> Compose f g a -> Compose f g b
% fmapComposeFunctors  fun (Comp fgs)  = Comp ( fmap (fmap fun) fgs )
% \end{lstlisting}
Using infix type syntax in Scala and \# to select a type from a trait, we can
get a similar effect in Scala as listing \ref{functor_comp_scala}. Using this
composition we can abstract over combinations of for loops and null checking.
\begin{lstlisting}[language=Scala,float,label=functor_comp_scala,caption=composite functors in Scala] 
trait O[F[_],G[_]] {
  type O[foo] = F[G[foo]]
}
def CompositeOfFunctors2Functor[F[_],G[_]](implicit ff:Functor[F], 
                               gf: Functor[G]):Functor[ (F O G)#O] 
                                       = new Functor[ (F O G)#O] {
  def map[A,B](fun:A=>B, arg: F[G[A]]):F[G[B]] =
     ff.map[G[A],G[B]]( (ga:G[A]) => gf.map(fun,ga)   , arg)
}

val lo = List(Some(1),None,Some(3))
println(CompositeOfFunctors2Functor[List,Option].map((x:Int)=>1+x, lo))
//prints List(Some(2), None, Some(4))

\end{lstlisting}









\subsection{monads}
Further down the path of abstracting over effect chaining, we encounter the 
concept ``monad'' that allows us to chain functions that cause effects.
A container that is a functor 
knows how to map side-effect-free functions over its content; a monad also 
knows how to compose functions with side-effects.
The monad structure puts more requirements on the datatype, so not all functors are monads, but the
concept is more powerful.
 


For a given container/effect to be a monad, it needs an operation \texttt{unit} that lifts an
ordinary value into the monad using a dummy effect, and an operation \texttt{bind} that transforms the contained
values, as in listing \ref{monad_spec}.

\begin{lstlisting}[language=Scala,float,label=monad_spec,caption=monad specification]
trait Monad[M[_]] extends Functor[M] {

  def bind[A,B](fun: A=>M[B], arg: M[A]): M[B]
  def unit[A](arg: A): M[A]
}
\end{lstlisting}

The effectful function that a monad can handle with its \texttt{bind} operator
gets to look at the contents of the monad, and generates a new 
container/effect. This obviously introduces 
extra structure/effects, and knowing how to resolve this is precisely the characteristic of a monad.

A first example is lists: \texttt{unit} lifts an argument into a singleton list, and \texttt{bind} takes a function from each element to a list, applies it to the elements and flattens the results into one list.
\\
A simpler example is \texttt{Option}. 
We already know how to map a pure function \texttt{A $\Rightarrow$ B} over an \texttt{Option[A]}. 
When the effect is nullability, an effectful function is one like \texttt{A $\Rightarrow$ Option[B]}, which can fail 
for certain inputs by returning \texttt{None}.

The Option monad hides the null checking in the \texttt{bind} operation, so we just repeatedly \texttt{bind} a 
function over the value instead of applying the function to it. The \texttt{unit} operation 
wraps a value into the monad without adding a side-effect, so we model it as a 
succeeding computation instead of a failing one.
\begin{lstlisting}[language=Scala]
  implicit object OptionMonad extends Monad[Option] {
    def unit[A](arg:A):Option[A] = Some(arg)
    def bind[A,B](fun: A=>Option[B], arg: Option[A]):Option[B] = arg match {
	    case None    => None
	    case Some(a) => fun(a)
    }
  }
\end{lstlisting}
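The list case mentioned above can be written in the same shape. A minimal self-contained sketch (the trait is repeated from listing \ref{monad_spec}; the object name \texttt{ListMonad} is an assumption):

```scala
import scala.language.higherKinds

// repeated from the monad specification in the text
trait Monad[M[_]] {
  def bind[A, B](fun: A => M[B], arg: M[A]): M[B]
  def unit[A](arg: A): M[A]
}

// lists as a monad: unit builds a singleton, bind maps and flattens
object ListMonad extends Monad[List] {
  def unit[A](arg: A): List[A] = List(arg)
  def bind[A, B](fun: A => List[B], arg: List[A]): List[B] =
    arg.flatMap(fun)
}

println(ListMonad.bind((x: Int) => List(x, x + 10), List(1, 2)))
// prints List(1, 11, 2, 12)
```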
 
The boilerplate between the applications of two effectful functions in the
traditional style is the repetition of the null checks, list flattening, or some other operation,
and the sequencing of
operations using semicolons. In the new functional style,
nested calls to \texttt{bind} with the next function in the sequence handle both.
\begin{lstlisting}[language=Scala]
given fun1: A=>Option[B]
      fun2: B=>Option[C]
      fun3: C=>Option[D]
      val arg = Option[A]
      import OptionMonad.bind
//functional style
val result :Option[D] = bind(fun3,
                             bind(fun2,
                                  bind(fun1, arg)))
//object-oriented style
val result :Option[D] = arg.flatMap(fun1).flatMap(fun2).flatMap(fun3)
//sequence comprehension syntax
val result :Option[D] = for { a <- arg;
                              b <- fun1(a);
                              c <- fun2(b);
                              d <- fun3(c) } yield d
\end{lstlisting}

 

Other common examples use monads to implement a read-only configuration 
environment, backtracking search or joining probability distributions using
Bayes' rule \citep{probability_monads}. 
In Haskell monads are also necessary to be able to work with state and
input/output, but a non-pure language like Scala supports these natively.
Monadic style is thus most valuable for expressing features that are not
available in, or not nicely contained within, the core language.


Because many different concepts can be seen as monads, it is possible
to write and reuse libraries of functions that work on all
monads. 
For convenience it is also important to provide synonyms, so that the method
\texttt{bind} can be used as \texttt{flatMap} in the desugaring
of sequence comprehensions.
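As an illustration of such a reusable library function, here is a sketch of a generic \texttt{sequence}: it turns a list of monadic values into a monadic list of values, written once for every monad. (The helper name \texttt{sequence} and the inlined definitions are assumptions; only the \texttt{Monad} trait shape comes from the listings above.)

```scala
import scala.language.higherKinds

trait Monad[M[_]] {
  def bind[A, B](fun: A => M[B], arg: M[A]): M[B]
  def unit[A](arg: A): M[A]
}

// works for ANY monad witness: collapse a List[M[A]] into an M[List[A]]
def sequence[M[_], A](ms: List[M[A]])(implicit m: Monad[M]): M[List[A]] =
  ms.foldRight(m.unit(List.empty[A])) { (ma, acc) =>
    m.bind((a: A) => m.bind((as: List[A]) => m.unit(a :: as), acc), ma)
  }

implicit object OptionMonad extends Monad[Option] {
  def unit[A](arg: A): Option[A] = Some(arg)
  def bind[A, B](fun: A => Option[B], arg: Option[A]): Option[B] =
    arg.flatMap(fun)
}

println(sequence(List(Option(1), Option(2))))  // Some(List(1, 2))
println(sequence(List(Option(1), None)))       // None: one failure fails all
```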


However, the power of monads comes with a snag: they do not compose cleanly.
While the composition of two functors is a functor, the composition of two
monads is not necessarily another monad. A separate concept of monad
transformers has been introduced in Haskell to create nested monads in those
specific cases where it is possible.

A monadic container wraps a value or computation, and \texttt{bind} provides a safe way of
operating on the monad with functions that take the unwrapped value. There is, however, no general safe way of taking the value out of the monadic container.
This has to be done by a monad-specific function, whose name conventionally starts with \texttt{run}.
This unwrapping function is called only at the interface between pure and monadic code.


Every monad must also be a functor, and in fact we can provide a default
implementation of the \texttt{map} operation from \texttt{Functor} in terms of
the monad operations. We simply transform the
pure function argument of \texttt{map} into a bindable function using
\texttt{unit}. Because this should give the same result as a real map with a
pure function, it is important that the \texttt{unit} operation introduces a
dummy effect without any repercussions.
\begin{lstlisting}[language=Scala] 
def fmap_mon[A,B](fun: A=>B, arg: M[A]): M[B] = bind(liftfun(fun), arg) 
def liftfun[A,B](fun: A=>B):A=>M[B] = fun andThen unit[B]
\end{lstlisting}
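A quick check of this derived \texttt{map}, with the definitions inlined so the sketch stands alone (the object name \texttt{OptionMonad} matches the earlier listing):

```scala
import scala.language.higherKinds

trait Monad[M[_]] {
  def bind[A, B](fun: A => M[B], arg: M[A]): M[B]
  def unit[A](arg: A): M[A]
  // default Functor map, derived from the monad operations:
  // lift the pure function with unit, then bind it
  def fmap_mon[A, B](fun: A => B, arg: M[A]): M[B] =
    bind((a: A) => unit(fun(a)), arg)
}

object OptionMonad extends Monad[Option] {
  def unit[A](arg: A): Option[A] = Some(arg)
  def bind[A, B](fun: A => Option[B], arg: Option[A]): Option[B] =
    arg.flatMap(fun)
}

println(OptionMonad.fmap_mon((x: Int) => x + 1, Some(41)))           // Some(42)
println(OptionMonad.fmap_mon((x: Int) => x + 1, None: Option[Int]))  // None
```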

% 
% 
% \subsubsection{Monad and Comonad example}
% A Monad example that is not a simple collection is a probability
% distribution\citep{probability_monads}. To make an interface to probabilities a
% monad, we have to define a useful \texttt{unit} and \texttt{bind} operator. \texttt{Unit} should convert 
% an element into a distribution of the same type. 
% This is straightforward: it can create a distribution that has 
% probability 1 of generating the provided argument.
% The \texttt{bind} operator takes a probability distribution on A, and combines it with a
% function from A to a probability on B into a resulting probability on B. 
% This looks just like the merging process of Bayes rule with a conditional probability.
% 
% 
% Once we have identified probabilities as monads, we can use the common monad operations as a basis for an API.
% We can also specify a probability monad interface as a subset of monad, and 
% then write generic functions that will work for any implementation of probabilities.
% Now we can also make any ordinary function work with probabilities for its
% arguments by using the `Applicatively' function or sequence comprehensions. 

% 
% 
% A comonad structure is useful for modelling functions 
% on arrays that require the whole or partial context to derive a new value. Such
% a comonad basis would be a good foundation for a domain specific language of
% array processors.

% 
% Another nice example of such a problem is the game of
% life\citep{gameoflife_comonad}. The comonad here would be combining functions
% from a 3 by 3 neighborhood to a single cell value. The bind operation implements 
% the iteration over the whole game area.
% 
% %This is also related to the zipper structure\citep{zipper}, which provides a
% %navigational interface from a hole in a data structure.
%  









\subsection{Applicative functors}
Recently Conor McBride and Ross Paterson introduced a concept in between functors and monads
that leaves out just enough power to make composition work again. These
applicative functors \citep{idioms_paper}, also known as idioms, are again special functors wrapping
effects, with a lifting function and a transforming function like monads.


An applicative functor has the methods seen in listing \ref{applicative_spec} in addition to the \texttt{map} from \texttt{Functor}:
\begin{lstlisting}[language=Scala,float,label=applicative_spec,caption= applicative functor specification]
trait ApplicativeFunctor[AF[_]] extends Functor[AF] {
  def pure[A](arg: A): AF[A]
  def ap[C,D](afun: AF[C=>D], arg: AF[C]): AF[D]
}
\end{lstlisting}

Again, since an applicative functor is also a functor, we can define a default \texttt{map} implementation using \texttt{pure} and \texttt{ap}.
\begin{lstlisting}[language=Scala]
  def fmap_af[A,B](fun: A=>B, arg: AF[A]): AF[B] = ap(pure(fun),arg)
\end{lstlisting}

The \texttt{pure} operation has a signature identical to the monad's \texttt{unit}, while 
\texttt{ap} gives yet another interpretation of chaining an effectful function.
The functions distributed by an applicative functor can have effects, but unlike
in a monadic \texttt{bind}, the effect cannot depend on the argument going into
the function: the effect attached to a function is fixed statically and is not
influenced by the input during execution.

The definition of the applicative functor structure for \texttt{Option} is again straightforward.
\begin{lstlisting}[language=Scala]
implicit object Option2ApplicativeFunctor extends ApplicativeFunctor[Option] {
  def pure[E](e:E)=Some(e)
  def ap[A,B](lted:Option[A=>B], me:Option[A]):Option[B] = 
    if (me.isEmpty || lted.isEmpty) None else Some(lted.get(me.get))
}
\end{lstlisting}
This encapsulates the pattern where we have to check for nulls on both the argument and the function. 
Also visible below is the function \texttt{Applicatively}, a helper that is 
overloaded on different function arities and abstracts over nested uses of \texttt{ap}.
\begin{lstlisting}[language=Scala]
given fun: Option[Int=>Int=>Int]
      val arg1 = Option[Int]
      val arg2 = Option[Int]
      import Option2ApplicativeFunctor.ap
//functional style
val result :Option[B] = ap (ap (fun, arg1) , arg2)
//syntactic sugar with overloading
val result :Option[B] = Applicatively( fun) (arg1,arg2)
//sequence comprehension syntax
val result :Option[B] = for { a <- arg1;
                              b <- arg2;
                              f <- fun } yield f(a)(b)
\end{lstlisting}
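The overloads of \texttt{Applicatively} are not shown in this text; the following is a sketch of what the arity-two case could look like, with the trait and an \texttt{Option} instance inlined so it stands alone (all names besides \texttt{pure} and \texttt{ap} are assumptions):

```scala
import scala.language.higherKinds

trait ApplicativeFunctor[AF[_]] {
  def pure[A](arg: A): AF[A]
  def ap[C, D](afun: AF[C => D], arg: AF[C]): AF[D]
}

implicit object OptionAF extends ApplicativeFunctor[Option] {
  def pure[A](arg: A): Option[A] = Some(arg)
  def ap[C, D](afun: Option[C => D], arg: Option[C]): Option[D] =
    for (f <- afun; c <- arg) yield f(c)
}

// one nested `ap` per curried argument; the real helper would be
// overloaded for each arity instead of defined once
def Applicatively[AF[_], A, B, C](fun: AF[A => B => C])(arg1: AF[A], arg2: AF[B])
                                 (implicit af: ApplicativeFunctor[AF]): AF[C] =
  af.ap(af.ap(fun, arg1), arg2)

val add: Option[Int => Int => Int] = Some((x: Int) => (y: Int) => x + y)
println(Applicatively(add)(Some(3), Some(4)))  // Some(7)
println(Applicatively(add)(Some(3), None))     // None
```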
% Speaking in terms of a logging effect, the function cannot determine the text that is 
% logged based on the value in the container, but there can be a fixed static logging text 
% attached to the function and written to the log when the function is executed.



Not only are the specifications for applicative functors and monads similar; 
in fact, every monad gives rise to an applicative functor. So where we earlier said 
that monads inherit from functors, the applicative functor actually lies between them. 
Again we can provide default implementations for the supertype 
operations, now \texttt{pure} and \texttt{ap} in terms of \texttt{unit} and \texttt{bind}. 
\texttt{unit} and \texttt{pure} can simply be 
identified, so that part is easy, but deriving \texttt{ap} is a bit trickier.
\begin{lstlisting}[language = Scala]
def ap_mon[A,B](fun: M[A=>B], arg: M[A]): M[B] =  
  bind[A=>B,B] ( (func:A=>B)=> bind[A,B]((a:A)=> unit(func(a)), arg),fun)
\end{lstlisting}
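Exercising this derived \texttt{ap} for the \texttt{Option} monad; a standalone sketch with the definitions inlined:

```scala
import scala.language.higherKinds

trait Monad[M[_]] {
  def bind[A, B](fun: A => M[B], arg: M[A]): M[B]
  def unit[A](arg: A): M[A]
  // ap derived from the monad operations, as in the text:
  // bind over the wrapped function, then over the wrapped argument
  def ap_mon[A, B](fun: M[A => B], arg: M[A]): M[B] =
    bind[A => B, B]((func: A => B) => bind[A, B]((a: A) => unit(func(a)), arg), fun)
}

object OptionMonad extends Monad[Option] {
  def unit[A](arg: A): Option[A] = Some(arg)
  def bind[A, B](fun: A => Option[B], arg: Option[A]): Option[B] =
    arg.flatMap(fun)
}

println(OptionMonad.ap_mon(Some((x: Int) => x * 2), Some(21)))   // Some(42)
println(OptionMonad.ap_mon(None: Option[Int => Int], Some(21)))  // None
```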

However, the fact that a monad structure on a type always implies an 
applicative functor structure does not mean that this is the only one.
For lists, the \texttt{ap} method layered on \texttt{bind} uses flattening semantics. 
There is another option for \texttt{ap} that uses pairwise application, as in listing \ref{pairwise}.
\begin{lstlisting}[language=Scala,float,label=pairwise,caption=different applicative functors for list]
val funlist:List[Int=>Int] = List ( (x:Int)=>x+1, (x:Int)=>x+2)
val arglist:List[Int] = List(10,20)

val resflattening = ListMonad.ap(funlist,arglist)
//resflattening: List[Int] = List(11, 21, 12, 22)

val respairwise = ListPairwiseApplicativeFunctor.ap(funlist,arglist)
//respairwise: List[Int] = List(11,22)
\end{lstlisting}
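\texttt{ListPairwiseApplicativeFunctor} is used in listing \ref{pairwise} but not defined in this text; a minimal sketch of what it could look like follows. Note that a fully law-abiding \texttt{pure} would need an infinite repetition of the element, as in Haskell's \texttt{ZipList}; the singleton used here is a simplification.

```scala
import scala.language.higherKinds

trait ApplicativeFunctor[AF[_]] {
  def pure[A](arg: A): AF[A]
  def ap[C, D](afun: AF[C => D], arg: AF[C]): AF[D]
}

object ListPairwiseApplicativeFunctor extends ApplicativeFunctor[List] {
  // simplification: a law-abiding pure would repeat `arg` indefinitely
  def pure[A](arg: A): List[A] = List(arg)
  // zip functions and arguments positionally, then apply each pair
  def ap[C, D](afun: List[C => D], arg: List[C]): List[D] =
    afun.zip(arg).map { case (f, c) => f(c) }
}

val funlist: List[Int => Int] = List((x: Int) => x + 1, (x: Int) => x + 2)
println(ListPairwiseApplicativeFunctor.ap(funlist, List(10, 20)))
// prints List(11, 22)
```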

Because \texttt{ApplicativeFunctor} is a weaker abstraction than \texttt{Monad},
the usual object-oriented advice of writing code against the weakest interface
you need applies: if your code does not use the \texttt{bind} operator, write it
against the \texttt{ApplicativeFunctor} interface. This keeps the options open
for multiple implementations and gains the composability benefits.


The way applicative functors can generally 
be composed into a new applicative functor is again harder to express in Scala than in Haskell, because we lack 
the special syntactic sugar introduced in \citep{idioms_paper}.
Luckily, because applicative functors are universally composable, a single implementation is enough.

The following, more elaborate definition was derived by reworking and generalising a 
concrete instance, guided by the type structure.
\begin{lstlisting}[language=Scala]
def Composite2Applicative[F[_],G[_]](implicit faf:ApplicativeFunctor[F], 
             gaf:ApplicativeFunctor[G]): ApplicativeFunctor[(F O G)#O] =
                                     new ApplicativeFunctor[(F O G)#O] {
  def pure[A](a:A):F[G[A]] = faf.pure(gaf.pure(a))
  def ap[A,B](lted:F[G[A=>B]], arg: F[G[A]]) :F[G[B]] = {
    def innercurried[X,Y] :G[X=>Y]=>G[X]=>G[Y] 
                                     = Function.curried( gaf.ap[X,Y] _) 
    def liftinnerap[X,Y] :F[G[X=>Y]=>G[X]=>G[Y]]
                                     = faf.pure(innercurried[X,Y])
    def aptofuncs = faf.ap(liftinnerap[A,B], lted)
    faf.ap(aptofuncs, arg)
  }
}
\end{lstlisting}
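To see the composition at work, here is a self-contained sketch specialised to \texttt{List} composed with \texttt{Option} (the instance names are assumptions; the trait shape follows listing \ref{applicative_spec}):

```scala
import scala.language.higherKinds

trait ApplicativeFunctor[AF[_]] {
  def pure[A](arg: A): AF[A]
  def ap[C, D](afun: AF[C => D], arg: AF[C]): AF[D]
}

object ListAF extends ApplicativeFunctor[List] {
  def pure[A](a: A): List[A] = List(a)
  def ap[C, D](fs: List[C => D], xs: List[C]): List[D] =
    for (f <- fs; x <- xs) yield f(x)
}

object OptionAF extends ApplicativeFunctor[Option] {
  def pure[A](a: A): Option[A] = Some(a)
  def ap[C, D](f: Option[C => D], x: Option[C]): Option[D] =
    for (g <- f; c <- x) yield g(c)
}

// the composed effect: a list of possibly-missing values
type LO[A] = List[Option[A]]
val composed: ApplicativeFunctor[LO] = new ApplicativeFunctor[LO] {
  def pure[A](a: A): LO[A] = ListAF.pure(OptionAF.pure(a))
  def ap[C, D](fs: LO[C => D], xs: LO[C]): LO[D] =
    // lift the inner ap into the outer functor, then apply twice
    ListAF.ap(ListAF.ap(ListAF.pure((OptionAF.ap[C, D] _).curried), fs), xs)
}

val funs: LO[Int => Int] = List(Some((x: Int) => x + 1), None)
println(composed.ap(funs, List(Some(10), Some(20))))
// prints List(Some(11), Some(21), None, None)
```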

%Where the map in functor instances wraps a very simple pattern of for loop into a higher-order function,
%applicative do the same but effectfully. Cases where the for loop caused a side-effect 
%instead of just performing a mapping, can be encapsulated as a higher-order function with ap.

%Appfunctors lead to effectful transforming visitors, which is a very common use
%case. 

McBride and Paterson also note that applicative functors can be derived automatically from any monoid.
This involves introducing a wrapper indexed by two types, one of which is a vacuous phantom type, 
which I was able to carry through to Scala as in listing \ref{wrappedmonoid}.
\begin{lstlisting}[language=Scala,float,label=wrappedmonoid,caption=wrapping a monoid as an applicative functor]
class WrappedMonoid[O](implicit monoid:Monoid[O]) {
  case class Accy[o,a](acc:o)(implicit mon:Monoid[o])
  //the A type does not influence the result
  def wrap[A](acc:O):AF[A] = Accy[O,A](acc)(monoid)
  type samemon[Z] = Accy[O,Z]  //simulate partial type application
  type AF[T] = samemon[T]   //provide type alias WrappedMonoid[Elem]#AF 

  val aftr:ApplicativeFunctor[samemon] = new ApplicativeFunctor[samemon] {
    def pure[E](e:E):samemon[E]= Accy[O,E](monoid.mempty) 
    def ap[A,B](lfun:samemon[A=>B], arg: samemon[A]):samemon[B] = 
      new Accy[O,B](monoid.mappend(lfun.acc,arg.acc))
  }
}
\end{lstlisting}
This is a bit esoteric, but this general adapter from the monoid interface to the applicative functor 
interface will be very convenient later.
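The mechanics of the phantom type can be seen in a small standalone sketch: every \texttt{ap} step only \texttt{mappend}s the accumulators, while the phantom type parameter tracks the shape of the "computation". (The free-standing \texttt{pure} and \texttt{ap} here are simplified stand-ins for the members of \texttt{aftr} above.)

```scala
trait Monoid[O] {
  def mempty: O
  def mappend(a: O, b: O): O
}

implicit object IntSumMonoid extends Monoid[Int] {
  def mempty = 0
  def mappend(a: Int, b: Int) = a + b
}

// Accy carries only the accumulator; the second type parameter is phantom
case class Accy[O, A](acc: O)

def pure[O, A](a: A)(implicit m: Monoid[O]): Accy[O, A] = Accy(m.mempty)
def ap[O, A, B](f: Accy[O, A => B], x: Accy[O, A])
               (implicit m: Monoid[O]): Accy[O, B] =
  Accy(m.mappend(f.acc, x.acc))

// each ap step appends accumulators: 1 + 2 + 3
val step1 = ap(Accy[Int, Int => Int => Int](1), Accy[Int, Int](2))
val total = ap(step1, Accy[Int, Int](3))
println(total.acc)  // prints 6
```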


With regard to the syntax for handling these applicative functors: in Haskell the 
most common way of applying a pure function to effectful arguments is a family 
of functions generalising \texttt{map} to functions of different arities.
These are called \texttt{liftM2}, \texttt{liftM3} and so on, because they were defined to lift a pure function to arguments within a monad.
But the monad requirement can be weakened, because one \texttt{pure} to lift the function followed by $n$ applications of \texttt{ap}
is equivalent to \texttt{liftM}$n$.


In Scala, to apply a pure function to multiple effectful parameters, the normal idiom is 
\begin{lstlisting}[frame=none,language=Scala]
  for {val1 <- arg1; val2 <- arg2; ...} yield fun(val1,val2,...)
\end{lstlisting}
However, this needs a monadic \texttt{flatMap} operator, while we have seen that the monad structure is not needed in principle.
Therefore I have introduced a method \texttt{Applicatively} (other names could be ``liftfunction'' or ``effectfully'') 
that is overloaded to provide the different sequences of \texttt{pure} and \texttt{ap}.
% 
% \subsubsection{Composed applicative functors example}
% Composing different functors or applicative functors into one by composition 
% is like merging different loops over the same structure into one, purely 
% or combining the different sideeffects.
% This can even be taken literally. If we model two dimensional arrays as lists 
% of lists, the iteration operators map and ap are automatically derived 
% % as the composition of the iterations over the inner and the outer structure.


% we can use \texttt{pure} and \texttt{ap} directly in a similar way. Such a combination sequence of \texttt{pure} and \texttt{ap} 
% then is equivalent to the but we can 
% also use method overloading to provide a set of functions \texttt{Applicatively(function of arity n)(argument list of length n)}.
% This provides an applicative view on what can also be written in sequence comprehension syntax as
% \begin{lstlisting}[language=Scala] 
%   for (val1 <- arg1; val2 <- arg2; ...) yield function(val1,val2,...)
% \end{lstlisting}
% So the applicative functor notion and notation makes it possible to abstract 
% over simple patterns of sequence comprehensions that express just function application. 
% Because some applicative functors are not monads, the applicative syntax is 
% also usable more widely.
% 
% \section{Dual notions}
% When using such universal patterns, it occurs often that slight variants on 
% known patterns are not wrong attempts but are useful patterns in their own right.
% 
% %used in gui or other component programming, drop unless fitting reactive
% % programming example
% \subsection{Cofunctors}
% 
% While a functor[A] composes functions that produce any B from an A, a cofunctor[A] composes functions that consume
% values of type B into an A. As seen earlier in the discussion of functors, the
% function arrow notion leads to a functor when producing and to a cofunctor when
% consuming.
% \begin{lstlisting}
% trait CoFunctor[CF[_]] {
%   def cofmap[A, B](ft: CF[A], f: B => A): CF[B]
% }
% \end{lstlisting}
% These have been used as abstractions of producers and consumers in Haskell user interface libraries. \citep{phoey}
% 
% \subsection{Comonads}
% Just like monads are a useful abstraction, its dual is too. Whereas a monad
% gives a function \texttt{bind} that safely handles functions from unwrapped to
% wrapped values, a comonad lets you safely transform using functions from wrapped to
% unwrapped values. The wrapped value is then some kind of context-decorated value.
% A good example is the combination of an array and a pointer
% into that array. Imagine a blurring filter on data, or the game of Life. Every
% iteration, each element is transformed in a way that depends on its
% surroundings. If you write a function that takes an array and pointer and gives
% a single element, the comonad implementation will distribute this function over
% the entire array and return the new array as it is one iteration later.
% 
% Mind you, a comonad, just like a monad, is a specialization of functor, not of
% cofunctor!
% 
% \citep{essence of dataflow programming}
% %%insert example code
% \begin{lstlisting}
% trait CoMonad[CM[_]] extends Functor[CM]{
%   def counit[A](cma: CM[A]): A
%   def cobind[A,B](cma: M[A], f: CM[A] => B): CM[B]
%   //general layering of functor map on top of other operations
%   def cmap[A,B](cma: CM[A], f: A=>B):CM[B]= {
%     def drop[X]:CM[X]=>X = (cm:CM[X])=> (counit[X](cm))
%     cobind(cma, drop[A] andThen f)
%   }
%   def map[A,B](fun: A=>B, arg:A) = cmap(ft, f)
% \end{lstlisting}
% CoMonads are not applicative functors like monads are, because monads and
% applicative functors share the unit/pure operation, while comonads depend on counit which does the exact opposite.
% 
% 
% \subsubsection{No CoApplicative Functors}
% Now we know that we have a cofunctor as dual to functor, and a comonad as dual to monad, and that a comonad is not an ordinary applicative functor, 
% one would expect there to be a coapplicative functor, layered between functor and comonad.
% 
% However, if we postulate such an interface, consisting of counit and ap, it is not a practically useful one.
% Because while the unit and ap of an applicative functor suffice to
% define map, we cannot combine counit and ap to derive map.
% 
% \begin{lstlisting}
% //Is not a useful abstraction: given ap and counit we cannot implement fmap
% trait CoApplicativeFunctor[CAF[_]] extends Functor[CAF]{
%   //need to implement counit and ap
%   def counit[A](arg: CAF[A]): A
%   def ap[A,B](fun: CAF[A=>B], arg: CAF[A]): CAF[B] 
%   
%   override def map[A,B](fun: A=>B, arg: CAF[A]): CAF[B] = fmap_af(fun,arg)
%   def fmap_af[A,B](fun: A=>B, arg: CAF[A]): CAF[B] 
% }
% \end{lstlisting}


\section{Traversable}
In the same paper \citep{idioms_paper} that formalises applicative functors, McBride and Paterson also
introduce \texttt{Traversable}. This depends on a single function \texttt{traverse}, as easy to
define as Functor \texttt{fmap}, that implements an effectful mapping. 
Here we demand an extra implicit parameter to witness the \texttt{ApplicativeFunctor} 
interface to the effect we want to use.
\begin{lstlisting} [language=Scala]
trait Traversable[T[_]] extends Functor[T]{
  def traverse[AF[_],A,B](f:A=> AF[B], ta: => T[A])
        (implicit aftr:ApplicativeFunctor[AF]):AF[T[B]]
}
\end{lstlisting}
The type of the distributed function looks like the one in monadic \texttt{bind}, but with split type parameters.
While a \texttt{Monad} composes functions into the \texttt{Monad} itself to transform its contents, 
\texttt{traverse} is a distributing notion, like \texttt{Functor}'s \texttt{map}.
But the result type of \texttt{traverse} shows a swapped order of type constructors compared to an ordinary \texttt{map}:
where \texttt{map} would distribute an \texttt{A $\Rightarrow$ B} over a \texttt{T[A]} 
into a \texttt{T[B]}, 
\texttt{traverse} 
takes an effectful function and distributes it over the structure and 
into the effect at the same time.

A container type \texttt{Cont} with a
given element type \texttt{El} can now lift an effectful mapping function on its
elements, \texttt{El $\Rightarrow$ Effect[B]}, into an effect wrapped around the resulting structure:
\texttt{Effect[Cont[B]]}.

A small example uses the \texttt{Option} effect as applicative functor. 
Suppose we have a number generator and we want to triage the numbers it produces to see if they are even. We can use 
a normal filtering function that takes a number and wraps it in an \texttt{Option}, so that by 
simple function application all odd numbers get reduced to \texttt{None}.
But as a next step we want to use the same filtering function to triage trees of even numbers. 
Such a tree is valid if it has the 
correct shape and all numbers in it are even, so once a single faulty number shows 
up we have to discard the tree.
\\
All these constructions are of course implementable as lower-level for loops, 
by mixing the test on each individual number with the bookkeeping of a state flag 
that is set whenever a number fails the test.
The \texttt{traverse} function allows us to untangle these two matters.
This mixing of side-effects such as failure with the iteration itself can be properly separated by letting 
the Option applicative functor handle the dependency on previous values, and using the 
traverse as an effect-enabled iteration.
The \texttt{Traversable} structure threads the optionality through the sequence of values.
In this way, \texttt{traverse} solves those cases where a for loop could not be written as a simple map 
because inside the loop `something more' needed to happen than just iterating.
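The even-number triage from the text can be sketched with a list instead of a tree for brevity; \texttt{traverseOption} below is \texttt{traverse} specialised by hand to the \texttt{Option} effect, and the function names are assumptions:

```scala
// the filtering function: odd numbers fail by returning None
def evenOrFail(n: Int): Option[Int] = if (n % 2 == 0) Some(n) else None

// traverse specialised to List and Option: distribute the effectful
// function over the list and into the Option; a single None fails all
def traverseOption[A, B](f: A => Option[B], ta: List[A]): Option[List[B]] =
  ta.foldRight(Option(List.empty[B])) { (a, acc) =>
    for (b <- f(a); bs <- acc) yield b :: bs
  }

println(traverseOption(evenOrFail, List(2, 4, 6)))  // Some(List(2, 4, 6))
println(traverseOption(evenOrFail, List(2, 3, 6)))  // None
```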



Rather surprisingly at first, \texttt{Traversable} is also a \texttt{Functor} 
and a \texttt{Foldable} (listing \ref{travfunctfold}). The
automatic default definition of \texttt{map} lifts the mapping function into the no-effect \texttt{Id}
applicative functor. The \texttt{Foldable} implementation works by implementing the
\texttt{foldMap} method that receives a neutral element and an associative operator as a
\texttt{Monoid}. Every monoid leads to an applicative functor as we have seen previously,
and it is this applicative functor that is used in traverse.


\begin{lstlisting}[language=Scala,float=t,label=travfunctfold,caption=Traversable leads to Functor and Foldable]
trait Traversable[T[_]] extends Functor[T] with Foldable[T]{
  
  def traverse[AF[_],A,B](f:A=> AF[B], ta: => T[A])
        (implicit aftr:ApplicativeFunctor[AF]):AF[T[B]]

  //implements functor
  type F[X] = T[X]
  def map[A, B](ft: => F[A], f: A => B): F[B] 
      = traverse[Id,A,B]( (x:A) => Ident(f(x))  , ft)(ApplicativeFunctor.Id2ApplicativeFunctor) 
        .unId

  //implements foldable
  def foldMap[A, M](fun: A=>M, arg: => T[A])(implicit mon:Monoid[M]):M 
      =  accumulate[M,A](fun,arg)
  
  def accumulate[O,A](counter:A=>O, arg: => T[A])(implicit mon:Monoid[O]):O = {
    val wrapmon  :ApplicativeFunctor.WrappedMonoid[O] = new ApplicativeFunctor.
           WrappedMonoid[O] 
    type AF[Z] = wrapmon.AF[Z]
    implicit val aftr:ApplicativeFunctor[AF] = wrapmon.aftr
    val liftfun = (a:A)=>wrapmon.wrap[O](counter(a))
    val result = traverse[AF,A,O](liftfun, arg)(aftr)    .acc
    result
  }
}
\end{lstlisting}

\subsubsection{Iterable}
When implementing access to a collection, there are two different schools of
thought. The first option is to provide external iterators, where the
contents of the collection are exposed as a lazy list or Iterator the client can walk over.
The client code contains the control in this case.


The second option is to
provide internal iterators, where the collection abstraction exposes a number of
higher-order functions. These are the well-known \texttt{fold}, \texttt{map},
\texttt{filter} and \texttt{zip} functions, and others like them built on top of these.


Scala currently uses the external iterator interface, with \texttt{hasNext} and \texttt{next}
methods as primitives, and the \texttt{Iterator} trait implements some common functions on
top of these. However, it is possible to refactor this iterator interface and
split it into pieces according to which higher-order functions the various
functionality needs. Modelling this with traits using type constructor
polymorphism at the same time eliminates the need to override the return types of
methods when implementing iterable for a given container type.

The \texttt{Iterable} trait in Scala contains elements from functor and from foldable.
But since \texttt{Traversable} subsumes \texttt{Functor} and \texttt{Foldable}, this trait is a candidate basis for
a new collection interface for OO languages with first-class functions.
\citep{iterator_essence} shows how this effectful mapping operation
can be used to wrap many non-standard loops into higher-order operations,
giving internal iteration the same power as external iteration.






\newpage
\section{Structures overview}
Because this was quite a menagerie of different concepts, a short overview or cheat sheet might be in order.
\\
combinables:
\begin{lstlisting}
Mon is monoid: neutral element and associative combining operation
  mempty: Mon
  mappend: (arg1:Mon) => (arg2:Mon) => Mon

FO[_] is foldable: can be combined into a single value by repeatedly
combining the elements. User function manages result structure.
  foldMap:     (fun: A=>M)  => (mon: Monoid[M])  => (arg:FO[A])   => M
  foldRight:   (comb: A=>B=>B)  => (init: B)     => (arg:FO[A])   => B
\end{lstlisting}

\noindent distributing functions:\\
\begin{lstlisting}
F[_] is functor:   can distribute pure function over elements, maintaining same structure 
  fmap:  (fun:A=>B)        =>  (arg:F[A])    =>  F[B]

T[_] is traversable: can distribute effectful function over elements and into the applicative functor effect
  if AF is applicative functor:  
  traverse: (fun: A=>AF[B]) =>  (arg: T[A])  =>  AF[T[B]]
  also functor and foldable
\end{lstlisting}


\noindent composing functions: (illustrated by fig \ref{function_chaining_figure})
\begin{lstlisting}
Functor again: composes a pure function into a new functor

AF[_] ApplicativeFunctor: extends Functor with ap
  pure: (a:A)    => AF[A]    = unit
  ap: (afun: AF[A=>B])    => (arg: AF[A])   =>  AF[B]

M[_]  Monad: extends Applicative Functor with bind
  unit: synonym for pure from applicative functor   
  bind:  (wrapper: A=> M[B]) => (arg: M[A])    => M[B]
\end{lstlisting}

\begin{figure}
\label{function_chaining_figure}
\caption{The inheritance structure of the function chaining abstractions. 
    The container types and the values they classify are in blue.
    The interesting chaining operators are in red. The functions that are composed over the container are in yellow.
    The functions that lift into the container and the values they lift are in green.
}
\includegraphics[scale=0.8]{F_AF_M_CM.png}
\end{figure}
 
 
\newpage
\section{Porting to Scala: problems and conclusions}


\subsection{Design considerations: introducing object-oriented syntax}
To make the abstractions we implemented fit in with the Scala container libraries 
and to enable sequence comprehension syntax, they should be converted to object-oriented versions.
While implementing code using the functional interfaces
we have seen is certainly possible, it feels a bit like writing Haskell in Scala with less syntactic support.

In principle it should be easy to perform the translation. In the functional 
version each typeclass instance is reified as a witness trait that contains methods modelling functions.
Those functions are made available by selecting them as members of the witness 
trait, or by importing the contents of the witness and then using the functions freely.
What we want is a style where functionality instead becomes available as member 
methods on the principal object the functions act on.
This is a form of partial application to the principal dispatch object, where we 
go from an explicit object reference to the implicit ``this'' pointer.

Given an implemented
functional interface, an implementation of the corresponding object-oriented
one can be made by calling the former with the \texttt{this} object added to the
parameter list. The other way around, a functional interface implementation can
be created by wrapping the object to dispatch on in the implemented
object-oriented version and calling the partially applied method with the remaining arguments.

This process works flawlessly for interfaces such as the \texttt{Functor} type constructor class.
\begin{lstlisting}[language=Scala]
trait FunctorWitness[F[_]] {
  def map[A,B](fun: A=>B, arg:F[A]): F[B] 
  def foreach[A](fun: A => Unit, arg: F[A]): Unit
      = { map[A,Unit](fun, arg); () }
}
trait IFunctor[Container[_],A] {
  def map[B](fun: A=>B):Container[B]
  def foreach(fun: A=>Unit):Unit = { map[Unit](fun); () }
}
\end{lstlisting}
The ideal syntax for the object-oriented version we aim to reach on the client 
side makes the methods available through member selection syntax like a base method.
If the implementing datatype inherits the interface, the methods are automatically available. 
If it acquires the interface through an implicit conversion, the conversion 
transparently wraps the base object and the methods are again selectable through member syntax.
\begin{lstlisting}[language=Scala]
def duplex[F[_],A](arg:F[A])(implicit lift:F[A]=>IFunctor[F,A]):F[(A,A)] 
     =  arg.map( (a:A) => (a,a))
\end{lstlisting}


However, when we try to do the same for \texttt{ApplicativeFunctor} we hit a snag. The 
\texttt{pure} method does not dispatch on an \texttt{IApplicativeFunctor} value; 
it is a wrapping function on any contained element.
So if we cannot place 
the method \texttt{pure} in the trait \texttt{IApplicativeFunctor}, where should we put it?


In fact, when we want to write a function that works for any \texttt{ApplicativeFunctor}, we are not 
just specifying the operations on objects of the type, we also need operations on the type itself.

In Java terms, we want to abstract over the objects of the type and a static 
factory method at the same time. A class only implements the interface 
ApplicativeFunctor if it has the object method \texttt{ap} and the static class method \texttt{pure}.
So we would need specify static methods in an interface.
In Scala, static elements are shown as companion objects. We would need to be able to specify a companion object with an abstract method.
%http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4093687
Adding this feature to the JVM platform has been an open request for years now, filed as bug 4093687 on November 18th, 1997. 

The current status of the request for enhancement is `being considered for Dolphin', where Dolphin is the codename of the Java 7 release.


Some workarounds are possible. 


It is possible to create a second trait that 
specifies the contract of the companion object. A generic method would then always 
need access to an implicit parameter for the companion and an implicit conversion that decorates objects: 
two implicit parameters that jointly implement one concept.
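Such a split might be sketched as follows; the names \texttt{ApplicativeCompanion} and \texttt{IApplicative} are hypothetical, chosen for illustration only:
\begin{lstlisting}[language=Scala]
// Hypothetical sketch: the type-level operation (pure) lives in a
// companion-style trait, the object-level operation (ap) in a decorator.
trait ApplicativeCompanion[F[_]] {
  def pure[A](a: A): F[A]
}
trait IApplicative[F[_], A] {
  def ap[B](fun: F[A => B]): F[B]
}

// A generic method needs both implicits to express the single concept:
def pairUp[F[_], A](x: F[A], y: F[A])
    (implicit comp: ApplicativeCompanion[F],
              lift: F[A] => IApplicative[F, A]): F[(A, A)] =
  lift(y).ap(lift(x).ap(comp.pure((a: A) => (b: A) => (a, b))))

// Example instances for Option:
implicit val optionCompanion: ApplicativeCompanion[Option] =
  new ApplicativeCompanion[Option] { def pure[A](a: A): Option[A] = Some(a) }
implicit def optionApplicative[A](o: Option[A]): IApplicative[Option, A] =
  new IApplicative[Option, A] {
    def ap[B](fun: Option[A => B]): Option[B] =
      for (f <- fun; a <- o) yield f(a)
  }
\end{lstlisting}
With these in scope, \lstinline{pairUp(Option(1), Option(2))} yields \lstinline{Some((1,2))}; the price is that every generic signature drags both implicit parameters along.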


Another option is to use a wrapper. Within this wrapper trait would be a specification 
for the method pure, and a specification for an inner trait.
The object-oriented interface would then be named something like ApplicativeFunctor\#OO instead of IApplicativeFunctor directly.
This second option requires a virtual trait OO within the trait ApplicativeFunctor. While automatic virtual 
class desugaring is in the works for Scala, for now the programmer has to implement a straightforward pattern by hand.
The drawback is that this requires more boilerplate to fix the types and override a constructor method.


I tried multiple variants; they are workable, but not as nice as what a solution to the static-methods-in-interfaces bug would allow.
\\

A second problem involves how to bind the datatype itself with its implementation of the interface.
The two options are straightforward inheritance and external decorating using implicits.
If we opt for straightforward inheritance, we can require in the interface that the container 
type we are abstracting over indeed implements the required interface. 
The interface can then be annotated with a self type Container[Elem].
Declarations would then look like \lstinline{List[A] extends IFunctor[List,A] }
with the header of IFunctor being 
\begin{lstlisting}[language=Scala]
trait IFunctor[Container[X] <: IFunctor[Container,X], A] {self:Container[A]=> 
    def map[B](fun:A=>B):Container[B]
  ...
\end{lstlisting}
This might look a bit strange initially, but since these headers are only written by the library 
provider it is not necessarily a problem for the programmer who uses the library.
On the client side, documentation for \texttt{List[A]} would just have the method 

\noindent \lstinline{ def map[B](fun: A=>B):List[B] } and the 
IFunctor interface need not be explicitly visible.
This comes at great cost however: since our interface requires that the
container implements it and the self type requires that the trait be combined
with the actual instance, we gain mixin type safety but we can no longer use implicit decorators to
extend the existing classes with this Traversable interface. A more flexible alternative is to 
not require that the container implements the interface, and create an 
implicit method from T to ITraversable[T], given an implicit instance of TraversableWitness[T]. 
Now the trait can be used both in conversions and for inheritance.

There is a second part to this binding problem. If we don't actually inherit the 
trait, we cannot depend on the ``this'' object pointing to the base object. 
``This'' within the wrapper trait will refer to the wrapper instance. All 
that is needed to fix this is to make the base object available, for example 
under the name ``self'', and to set the ``self'' value to the object 
that the implicit conversion is wrapping.
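A minimal sketch of this binding, combining the witness-based implicit conversion with the ``self'' member (trait and method names hypothetical):
\begin{lstlisting}[language=Scala]
trait FunctorWitness[F[_]] {
  def map[A, B](fun: A => B, arg: F[A]): F[B]
}
// The wrapper trait exposes the wrapped base object under the name self.
trait IFunctorOps[F[_], A] {
  val self: F[A]                    // the base object, set by the conversion
  def witness: FunctorWitness[F]
  def map[B](fun: A => B): F[B] = witness.map(fun, self)
}
// The implicit conversion fills in self with the object it wraps:
implicit def decorate[F[_], A](base: F[A])
    (implicit w: FunctorWitness[F]): IFunctorOps[F, A] =
  new IFunctorOps[F, A] { val self = base; def witness = w }
\end{lstlisting}
The same trait can also be mixed in directly by a datatype that simply defines \lstinline{self} as itself, so inheritance and decoration share one code path.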


A third problem is also related to the use of implicit conversions to wrap instances.
Taking an implicit parameter to perform an implicit conversion 

\noindent \texttt{Cont[Elem] $\Rightarrow$ IAbstraction[Cont,Elem]} is not sufficient for what we really want to do.
This only gives us the convenience of working with objects of type \texttt{Cont[Elem]} as though 
they inherited the interface. 

But the important usage in the context of datatype-generic 
programming is not per se to work with specific objects, but to be able to write 
general reusable methods abstracting over the types.
The type abstraction as modelled with an implicit conversion 
does not express the structure we want to demand on the container type.

The simplest example that shows a version of the problem is a convenience method that performs two functorial maps one after the other.
\begin{lstlisting}[language=Scala]
// functional:
def maptwice[F[_],A,B,C](first:A=>B, second:B=>C, arg:F[A])(implicit witness:Functor[F]):F[C] =
   witness.map(second, witness.map(first, arg))
// OO:
def maptwice[F[_],A,B,C](first:A=>B, second:B=>C, arg:F[A])(implicit lift: F[A]=>IFunctor[F,A]):F[C] =
  arg.map(first) ... // cannot then perform .map(second)
\end{lstlisting}
The compiler inserts the implicit conversion \texttt{lift} for us to wrap \texttt{arg} in a functor instance.
Then the first map works out fine and returns an \texttt{F[B]}.
But when we try to use \texttt{map} on this result, the implicit conversion 
does not apply because we don't have an \texttt{F[A]}.
Of course this is nonsense: The functor structure belongs to the container type 
and should be independent of the element type.

The same problem occurs when we try to implement our \texttt{Applicatively} syntax or to implement the \texttt{traverse} function on a datatype.
They need to use the \texttt{ap} method for different intermediate types.



What we really need is a function that, for all T, 
gives an implicit conversion from \texttt{AF[T]} to \texttt{IApplicativeFunctor[AF,T]}. 
This means that we need to express rank-two polymorphic types, 
so the problem can be resolved using the function-as-object structural typing trick seen on page \pageref{ranktwoscala}. 
The solution is then to not pass a single implicit conversion to IApplicativeFunctor around, 
but a polymorphic generator that for every T returns a conversion function from T to IApplicativeFunctor[T].
Unfortunately, the extra step needed to reach the implicit conversions means that Scala will no longer apply those conversions implicitly. We have to wrap 
each instance manually, losing the seamless availability of the methods as if they were inherited members.
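A sketch of such a generator, here with a named trait instead of the structural type, and hypothetical names:
\begin{lstlisting}[language=Scala]
trait IFunctor[F[_], A] {
  def map[B](fun: A => B): F[B]
}
// A rank-two conversion generator: for every element type T it
// produces a conversion from F[T] to the decorated interface.
trait FunctorLifter[F[_]] {
  def apply[T](arg: F[T]): IFunctor[F, T]
}

def maptwice[F[_], A, B, C](first: A => B, second: B => C, arg: F[A])
    (implicit lifter: FunctorLifter[F]): F[C] =
  lifter(lifter(arg).map(first)).map(second)   // every wrap is now manual

implicit val listLifter: FunctorLifter[List] = new FunctorLifter[List] {
  def apply[T](arg: List[T]): IFunctor[List, T] = new IFunctor[List, T] {
    def map[B](fun: T => B): List[B] = arg.map(fun)
  }
}
\end{lstlisting}
The intermediate \lstinline{F[B]} can now be wrapped again, but only because we call the lifter explicitly; the compiler will not insert it for us.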









A fourth detail we have to get right here to enable inheritance is the variance declaration of our type constructor classes. 
We have to choose the co- and contravariance in 
\texttt{Functor[$?$Container[$?\_$],$?$Elem]}. To fit in with the Scala library and
the properties of immutable datatypes, the type constructors on which we want to operate 
should be covariant in their element type.  
This choice does mean that, in case of real adoption in the standard library, 
a second parallel hierarchy would be needed for mutable data structures, which are invariant in their element type.
Having fixed our second question mark, the third must be identical, because we know we will operate on Container[Elem] elements.
This leaves the choice of variance in the container type. Since the safe subsumption conditions for neither covariance 
nor contravariance seem guaranteed for an entire inheritance hierarchy, we pick invariance.



A last measure we can take to improve usability is to provide the
methods map, foreach and flatMap under exactly those names, so that the object-oriented versions 
will automatically be usable with the standard sequence comprehension syntax.
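As a sketch of why the names matter: once a wrapper defines map and foreach, Scala's for-comprehensions desugar to those methods, even when they are only reachable through an implicit conversion. The \lstinline{Box} type below is a hypothetical stand-in for a datatype without its own map:
\begin{lstlisting}[language=Scala]
case class Box[A](value: A)            // hypothetical datatype without map

class BoxOps[A](b: Box[A]) {
  def map[B](fun: A => B): Box[B] = Box(fun(b.value))
  def foreach(fun: A => Unit): Unit = fun(b.value)
}
implicit def boxOps[A](b: Box[A]): BoxOps[A] = new BoxOps(b)

// The comprehension desugars to boxOps(Box(21)).map(x => x * 2):
val doubled = for (x <- Box(21)) yield x * 2
\end{lstlisting}
Had the wrapper called its method \lstinline{fmap} instead, the comprehension syntax would not be available.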


\subsection{Other language enhancements}
There are some enhancements that, if made to Scala, would make the design and usage of these abstractions much nicer.

Firstly, if the implicit conversion mechanism could be extended to the sort of polymorphic 
implicit conversion generators we need, we would recover member selection syntax on our wrapped objects.


Secondly, sometimes you want to define a method that relies on the object 
being of the container type with non-atomic type parameters. 
This problem occurs in 
ApplicativeFunctor to write a method dispatching on the lifted function instead
of the lifted argument, to provide additional syntactic sugar according to Scala's
dispatching rules, and it occurs when defining the flatten/join method on Monads. In this second case, 
you really want the
main object type Container[El] to be a Container[Container[El]]. One workaround 
is to introduce an extra type parameter [RealEl] on these methods, demand an
implicit conversion from El to Cont[RealEl], and use RealEl as the element type in
the result type of the method. The implicit method in these cases is just the
identity function, but it is needed as a hint to the type checker. A low-priority
bug was opened at the end of March 2008 as ticket 679, but it has been
classified as postponed, so the workaround will probably be needed for a while
longer.
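A sketch of the flatten workaround, using Scala's built-in \lstinline{<:<} evidence as the identity-like hint (trait and method names hypothetical):
\begin{lstlisting}[language=Scala]
trait MonadOps[F[_], El] {
  val self: F[El]
  def bind[B](fun: El => F[B]): F[B]
  // flatten only makes sense when El is itself an F[RealEl]; the
  // evidence parameter, an identity-like conversion, tells the
  // type checker so.
  def flatten[RealEl](implicit ev: El <:< F[RealEl]): F[RealEl] =
    bind((x: El) => ev(x))
}

def listOps[El](l: List[El]): MonadOps[List, El] = new MonadOps[List, El] {
  val self = l
  def bind[B](fun: El => List[B]): List[B] = l.flatMap(fun)
}
\end{lstlisting}
Calling \lstinline{flatten} on a wrapped \lstinline{List[List[Int]]} then typechecks, while calling it on a \lstinline{List[Int]} is rejected at compile time.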

Thirdly, type constructor inference would make the client side code neater. In Scala 2.7, 
type inference only works for types of kind \texttt{*}, so code that uses 
these abstractions needs a lot of type annotations. Worse, when the programmer 
specifies the type-constructor parameter on a method, all other types that 
could be inferred then have to be explicitly annotated as well, since there 
is no partial type inference or partial type application.




\subsection{Conclusions}

Though a number of small language changes would make 
programming with datatype-generic interfaces in Scala nicer, it is definitely workable in Scala as opposed to Java.
Code in this style contains a lot less boilerplate than is traditional.
Because of default implementations in traits coupled with abstraction over 
container types, methods with a default definition can be inherited as is.

This reduces the implementation cost of these interfaces considerably.
It also becomes possible to write general fully reusable methods for very-high level concepts.
By encapsulating different patterns of loops in a named interfaces and higher-order functions, 
code becomes much less spread out over the page, and easier to read once the concepts are familiar. 


Compared to implementing these abstractions in Haskell, some aspects compare favorably for Scala. 

Haskell type classes don't allow a subclass of a type class to specify a
default implementation for a function in the superclass. Scala's implicit
traits use the normal subtyping implementation, which allows deferred
methods to be given default implementations in subclasses, and even overridden
later on.
In Haskell, instances of type classes have global scope.
The mechanism of newtyping is needed to select which type class
implementation one wants if there are multiple possibilities. A newtype is
somewhat like a type synonym, but newtypes are distinct at compile time while being compiled to the same
underlying type: a compile-time syntactic wrapper only. 
In Scala there exists no newtype, but we could emulate it by using a small
wrapper class. However, this is not necessary just to drive the selection of 
implicit parameters since we can just refer by name
to the monoid needed. Here the implementation of implicit parameters as 
ordinary parameters saves a bit of hassle compared to Haskell.
\\

However, when using these mechanisms, a programmer should always remember that Scala is fundamentally non-pure.
These systems can document the presence of effects, 
but programmers can always sidestep them. There is as yet no way to 
check that a piece of code is free of effects.
So it is documentation, better style and much less boilerplate code that we 
can gain from using this style of programming, 
not the kind of hard type-with-effect guarantees we get in Haskell. 
On the other hand, it is only because of this tradeoff that the learning curve from Java to Scala can be somewhat gradual.
Also, Scala encourages the single-assignment style, so mutable state is not as big a problem in idiomatic Scala.
% 
% 
% Thus, this type of datatype-generic programming allows us to 
% effectively make the most of a small bit of implementation per datatype. We not
% only get the flexibility benefits on the client side of using interfaces, we
% also get a lot more reuse when implementing them.























\chapter{Porting structure-dependent DGP techniques from Haskell}


The research in Haskell has evolved from initial experimentation over full-fledged compiler
extensions to implementations in libraries. We do not discuss those
earlier systems that require much compiler support, such as Generic Haskell or
Template Haskell, but focus on some newer library-based implementations.


\section{Regular functors}

A datatype is a regular functor if it is parameterized by element type and
contains only instances of the element type and ordinary recursive components
with the same element type. Lists with their head and tail, binary trees with 
their element and two branches all fit into this structure.

Gibbons \citep{gibbons_regular_functors} shows how to derive useful functions by looking at such a regular
functor as a ``bifunctor''. This bifunctor view is a very specific structural type
presentation, and hence is very powerful. It only applies to some datatypes, 
but when it applies it can provide a varied set of functions for the datatype, 
without any more information than how it is a bifunctor
\\
A bifunctor is the extension of a functor (see \ref{functor_definition}) and its
map operation to two type variables. 

\begin{lstlisting}
trait BiFtor[A,B,Self[_,_]] { 
  def bimap[C,D](f: A=>C, g: B=>D):Self[C,D]
}
\end{lstlisting}
A regular functor can be transformed to a bifunctor by making the recursion in
the type explicit. For example, a List[E] contains a head:E and a tail:List[E]
and is thus isomorphic to a \texttt{Pair[E,REC](first:E, second:REC)} with the type
constraint REC==List[E]. This inner list of E can again be seen as a
bifunctor by the same transformation.
The trick to do this generally is to introduce a fixpoint operator \texttt{Fix} in the type:
List[E] is isomorphic to Pair[E,List[E]], with List[E] == Fix[Pair,E] ==
Pair[E,Fix[Pair,E]].
Using type synonyms we can say List[a] = Fix[IListF,a].
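In Scala the construction might be sketched like this, with hypothetical names \lstinline{ListF} for the bifunctor pattern and \lstinline{Fix} for the fixpoint; unlike the Pair presentation above, the sketch includes the nil case explicitly:
\begin{lstlisting}[language=Scala]
// The bifunctor pattern of a list: E is the element type,
// R marks the positions where the recursion occurs.
sealed trait ListF[+E, +R]
case object NilF extends ListF[Nothing, Nothing]
case class ConsF[E, R](head: E, tail: R) extends ListF[E, R]

// Tying the knot: R is instantiated with the fixpoint itself.
case class Fix[F[_, _], E](out: F[E, Fix[F, E]])

def fromList[E](l: List[E]): Fix[ListF, E] = l match {
  case Nil     => Fix[ListF, E](NilF)
  case x :: xs => Fix[ListF, E](ConsF(x, fromList(xs)))
}
\end{lstlisting}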

Now, what is at first remarkable is that by providing implementations of the
bimap operation for the bifunctor view (covering both the cons and the nil case, of
course) we can automatically layer the well-known map and fold, but also the
less well-known unfold, hylomorphism and other \ldots morphism functions on top of \texttt{bimap}.
All these functions name a specific recursive pattern. 

The syntax looks a bit less lightweight in Scala compared to Haskell, because
Scala does not have type inference for type constructors yet, and we have to
simulate the Haskell ``newtype'' feature by a new class in this case. However, the manual wrapping in and out of the
newtype in Haskell can be performed by implicit transformations
\citep{gibbons_adriaan_implicitwrapping} in Scala, so that the syntax weight
goes down again.

The concept of folding functions and its usage has been covered in the previous 
chapter when implementing an interface for 
fold-capable structures.

An unfold is, as the name implies, the opposite of a fold. Whereas a fold takes a
composite structure and a method to combine a value and the rest of the
composite structure into a single summary value, an unfold starts with a single
value and builds up a structure by repeated application of a function. This function
generates a piece of composite structure and a new seed value from the current
value, or signals the base case of the structure. And just as the result of a fold
isn't necessarily an information-reducing summary (we can in principle implement
\texttt{map} using fold by letting the combining operator apply the constructors of the
datatype to the transformed elements, rebuilding the same structure), we can use
more complex generators for unfolds too.
Then the hylomorphism is nothing but a combined unfold and fold. In theory, using a hylomorphism 
can help the compiler generate 
code that does not use much intermediate data, by feeding the produced 
elements directly to the consuming function without wrapping and unwrapping them.
\\
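On lists, the three patterns can be sketched directly; \lstinline{unfold}, \lstinline{hylo} and \lstinline{fact} are illustrative names, not part of any library discussed here. The seed-consuming step function returns \lstinline{None} at the base case:
\begin{lstlisting}[language=Scala]
// unfold: grow a list from a seed value
def unfold[A, B](seed: B)(step: B => Option[(A, B)]): List[A] =
  step(seed) match {
    case None            => Nil
    case Some((a, next)) => a :: unfold(next)(step)
  }

// hylo: a fused unfold-then-fold that never builds the intermediate list
def hylo[A, B, C](seed: B)(step: B => Option[(A, B)])
                 (zero: C, combine: (A, C) => C): C =
  step(seed) match {
    case None            => zero
    case Some((a, next)) => combine(a, hylo(next)(step)(zero, combine))
  }

// factorial as a hylomorphism: "unfold" n, n-1, ..., 1 and fold with *
def fact(n: Int): Int =
  hylo(n)(i => if (i == 0) None else Some((i, i - 1)))(1, (a: Int, c: Int) => a * c)
\end{lstlisting}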

However, the explicit recursion this style requires seems far from an 
intuitive way for the programmer to declare
datatypes. It would be better to have the declaration occur the normal way and use
behind-the-scenes machinery to implement the extension. Possibilities without
actual language extensions include the use of implicit conversions to map
between the regular and the bifunctor view, and the use of a compiler plugin with suitable annotations as
a metaprogramming tool to extend the datatype at compile time.








\section{Derivable Type Classes: polytypic programming}

 
For some type classes the Haskell compiler can provide a default implementation, 
for example equality checking and printing. Functions that perform these operations 
can be automatically generated for a
datatype by using the `deriving TypeClassName` form.

In the Java and Scala world too, the compiler autogenerates dummy toString, equals and hashCode
methods. Creating better ones manually is a lot of repetitive work. This problem can
currently be circumvented by run-time reflection, for example through the
classes provided by the Apache Commons ReflectionToStringBuilder 
\citep{apache_reflectiontostringbuilder}.

Because these methods are declared on the supertype Object, this works only for 
methods whose parameter and result types are known base
types. The \texttt{equals} method compares with another object of type Object, instead
of an object which conforms to the type of the receiver. The return type
is String or Boolean, not the type of the object that receives the message.



The concept of derivable type classes is to extend the range of 
functions that can be automatically derived like this. This can happen through 
code specialization at compile time: the generated function is specified by fixing its behaviour inductively, 
by pattern matching on the type structure in a template language.
It can also happen at runtime, by analysis of the structure of a runtime value that reifies type information.

The general operations valid on all types that such projects normally aim at are 
parsing and printing to text, parsing and printing to binary formats, equality,
and hashing.

There are a number of libraries \citep{comparing_libraries_for_dgp} for Haskell, differing in the 
type representation they use. They also differ in how the type representations 
are accessed: by passing them around together with the value, or by linking a value
 to its representation through a type class that models ``representable'' types.

In the world of the Java Virtual Machine, some type information is saved 
automatically, and any value has a pointer to the ``classfile'' of its class. 
Runtime reflection then works on the elements exposed by this classfile.


To introduce a better way of examining the type structure, either the 
information must be put into this class file or a parallel system of 
representations must be introduced.
Making the type structure information available and automatically
deriving method implementations could in the future be possible using
compiler plugins. One can imagine a system of specific annotations and a plugin
that writes extra methods to the class file. 
For now we have to rely on a parallel representation mechanism to simulate such
a system.













\section{Scrap Your Boilerplate: general traversal}

The original Scrap Your Boilerplate \citep{SYB1} approach tries to eliminate
boilerplate code in traversals by a combination of type testing, polymorphism 
and automated recursion.

The goal of this traversal library is to specify only the cases where the function does 
interesting work, combine those, and lift the function to one that 
can polymorphically operate on any single level of the hierarchy. A
recursive template function for queries, or one for transformations, 
provided at each level of the type hierarchy, then distributes the 
function over the entire tree of subnodes.

For the car park example of changing all wheels, one would define the interesting function and 
use combinators to create a general traversing function.

The solution in Haskell looks like listing \ref{haskellcarpark}.
\begin{lstlisting}[language=Haskell,float,label=haskellcarpark,caption=changing all wheels]
changeTireWheel :: Id -> Id -> Wheel -> Wheel
changeTireWheel oldid newid (Wheel pos tire press)  = ... useful part ...

changeTireCarPark :: Id -> Id -> CarPark -> CarPark
changeTireCarPark oldid newid carpark = 
       everywhere (mkT (changeTireWheel oldid newid)) carpark
\end{lstlisting}

Here the `mkT' (make transform) combinator takes a typed function and wraps it 
in a polymorphic function that acts like the identity on any `wrong' datatype.
The `everywhere' function takes a polymorphic transform and creates 
a function that takes any datatype decorated with the recursion 
template, and applies the transformation recursively in the whole element tree.
Similar machinery is available for queries.

This requires quite a bit of type hacking in Haskell to introduce 
type testing and casting, as well as rank-two polymorphism.


However, by using the native JVM typetesting and typecasting, this was all quite
readily transferable to Scala. The only complication is again having to
simulate rank-two polymorphic types using the object-function duality trick (see page \pageref{ranktwoscala}).
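A sketch of \lstinline{mkT} in Scala, here using a \lstinline{ClassTag} (from later Scala versions) to make the runtime type test explicit; the name \lstinline{mkT} is carried over from SYB:
\begin{lstlisting}[language=Scala]
import scala.reflect.ClassTag

// mkT: wrap a monomorphic transform into one applicable to any value,
// acting as the identity on values of the "wrong" runtime type.
def mkT[T: ClassTag](fun: T => T): Any => Any = {
  case t: T  => fun(t)   // runtime type test via the ClassTag
  case other => other
}
\end{lstlisting}
For example, \lstinline{mkT((i: Int) => i + 1)} increments integers and leaves every other value untouched.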

When we translate the classic SYB example query of counting the total 
wages in a company to Scala, the function looks as follows:
\begin{lstlisting}[language=Scala]
object TestCompanyBill extends Structure with ExampleCompany with StructureSYBImpls{
  def main(args : Array[String]) : Unit = {
    def salaryBill(comp:Company):Float = {
      import syb1.combinators._
      def billS(s:Salary):Float=s.amount
     
      val lifted = liftQuery(0.toFloat)(billS)
      val andrecurse:Company=>Float = everything 
             ((a:Float, b:Float) => a + b)(lifted)
      andrecurse(comp)
    }
  println(salaryBill(com))
  }
}
\end{lstlisting}
The boilerplate we must implement to enable any number of functions like these 
is one implicit parameter per level in the hierarchy. This implicit parameter defines that level's 
instances of the functions that map polymorphic queries and transformations over the direct children 
of the node, as an instance of the trait \texttt{OneTraversable}.
The pattern for the definition of these mappings is easy to follow (listing \ref{employeescala}):
for a general transformation (gmapT), an object with the same structure is returned, but with the transformation applied to its subnodes; 
for a query (gmapQ), a list of values is returned.

\begin{lstlisting}[language=Scala,float,label=employeescala,caption=Enabling traversal over Employee]
//for one level:
case class Employee(p:Person, s:Salary) extends CompanyElem
implicit def EmployeeIsOneTraversable(e:Employee):OneTraversable[Employee] = new OneTraversable[Employee] {
   def gmapT( func: r2func1T):Employee = Employee(func(e.p),func(e.s))
   def gmapQ[R]( func: r2func1Q[R]):List[R]=  List(func(e.p),func(e.s))
}
\end{lstlisting}




Note that this method does not operate on the structure of the whole type as such, 
but on the values of individual constructors.


%mkT : this wrapper makes a type-specific function into a general transform\\
%mapT: applies general transform to one nodes direct children\\
%everywhere: change transform on node to transform on all its descendants\\
%topdown, bottom up variants
Because it works at the constructor 
level, this technique also applies to datatypes that do 
not neatly follow a specific structure, as regular functors must. 
We also don't have to make every level inherit from a base class 
CompanyElement when that is not a natural way to model the hierarchy.


The Scrap Your Boilerplate system was very successful and has been extended in follow-up papers.
\citep{SYB2} explains how functions that create, rather than consume, values 
can also be written generically, with a polymorphic version of unfolding.
\citep{SYB3} uses intricacies of the Haskell type class system 
to open up the generic functions so they can easily be extended.



 
\section{compos: traversal on syntax trees }

For the common use case (at least in the Haskell developer community) of abstract
syntax tree operations, a less general but syntactically lighter method was
developed in \citep{compospaper} under the name ``compos''.
\\
The goal here is to implement a recursive traversal on a tree by pattern
matching on the tree node, implementing the special cases explicitly and
forwarding all unmatched cases to a general traversal operator.
In the simplest case we have one \texttt{Exp} abstract data type 
modelling a tree of Expressions, implemented in Scala using a sealed case class hierarchy, 
where each node contains only primitive data and further child Expressions. 
This is the same pattern we used for our syntax tree in the simply typed 
lambda calculus, before converting it to DeBruijn indices for variable handling.
\begin{lstlisting}[language=Scala]
 sealed trait Exp
 case class EAbs(s:String,e:Exp) extends Exp
 case class EApp(f:Exp,a:Exp) extends Exp
 case class EVar(s:String) extends Exp
\end{lstlisting}

The authors start by defining the general transformation function
\lstinline{def composOp (recursefun: Exp => Exp)(arg: Exp) :Exp}.
This needs to be
implemented for each datatype you want to make traversable. The general schema for each node 
follows the pattern match 
\begin{lstlisting}[language=Scala,frame=none]
case Constructor (childnode1, ... , childnodeN)  
        => Constructor ( childnode1', ... ,childnodeN')
\end{lstlisting}
On the right hand side, the new version of a child node is either the child node transformed by the \texttt{recursefun}
function in case it is an \texttt{Exp}, or the unchanged element if it is primitive data.
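For the Exp type above, the implementation follows this schema mechanically (the declarations are repeated so the sketch stands alone; the \lstinline{prime} traversal is a hypothetical example built on top):
\begin{lstlisting}[language=Scala]
sealed trait Exp
case class EAbs(s: String, e: Exp) extends Exp
case class EApp(f: Exp, a: Exp) extends Exp
case class EVar(s: String) extends Exp

def composOp(recursefun: Exp => Exp)(arg: Exp): Exp = arg match {
  case EAbs(s, e) => EAbs(s, recursefun(e))             // s is primitive data
  case EApp(f, a) => EApp(recursefun(f), recursefun(a)) // both children recurse
  case EVar(s)    => EVar(s)                            // no child expressions
}

// A traversal built on top: prime every variable name in the tree.
def prime(e: Exp): Exp = e match {
  case EVar(s) => EVar(s + "'")
  case other   => composOp(prime)(other)
}
\end{lstlisting}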

So where the full Scrap your boilerplate system uses a polymorphic map that operates on every child, 
in this system the types are already specialised.

A second function is
the query operator 
\begin{lstlisting}[language=Scala]
def composOpFold[b](init: b)(combine: b => b => b)
                (fulltrans: Exp => b)(arg: Exp):b
\end{lstlisting}
Implementing this for each node just means evaluating \texttt{fulltrans} on each meaningful 
subnode and combining the results into a single value. If all the subnodes are 
primitive types, the default zero value is returned.
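Again for the Exp type, the query operator is one line per constructor (the Exp declarations are repeated so the sketch stands alone; \lstinline{countVars} is a hypothetical example query):
\begin{lstlisting}[language=Scala]
sealed trait Exp
case class EAbs(s: String, e: Exp) extends Exp
case class EApp(f: Exp, a: Exp) extends Exp
case class EVar(s: String) extends Exp

def composOpFold[B](init: B)(combine: B => B => B)
                   (fulltrans: Exp => B)(arg: Exp): B = arg match {
  case EAbs(s, e) => fulltrans(e)                        // one Exp child
  case EApp(f, a) => combine(fulltrans(f))(fulltrans(a)) // two Exp children
  case EVar(s)    => init                                // only primitive data
}

// A query built on top: count the variable occurrences in a tree.
def countVars(e: Exp): Int = e match {
  case EVar(_) => 1
  case other   => composOpFold(0)((a: Int) => (b: Int) => a + b)(countVars)(other)
}
\end{lstlisting}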

Notice here another difference with Scrap Your Boilerplate: the operation performed on a child node is 
just fulltrans(node), so any recursive
operation must be defined by the layer on top of composOpFold, by making the
fulltrans function itself manually recursive, where SYB does this with the ``everywhere'' combinator.
\\
A next step, traditional for a Haskell system, generalizes the pure function composOp
into an effectful variant 
\lstinline{def composOpM[M[_]](recurse: Exp=>M[Exp])(arg:Exp)(implicit mon: Monad[M]):M[Exp] }
\\
This follows the known pattern whereby only \texttt{ap} and \texttt{unit} are necessary, 
so this can be generalized to work for applicative functors instead of just monads.
Finally the authors generalize \texttt{composOpM} and \texttt{composOpFold} into a general function
\texttt{compos}, by explicitly passing the \texttt{unit} and \texttt{ap} functions.

The Haskell type signature here uses rank-two types which are not natively available in
Scala.
\begin{lstlisting}[language=Haskell]
compos :: (forall a. a -> m a) 
       -> (forall a b. m (a -> b) -> m a -> m b) 
       -> (Exp -> m Exp) -> Exp -> m Exp 
\end{lstlisting}
An attempt at a literal transcription using the trick where a rank-two type is 
simulated by a structural function type (see \ref{ranktwoscala}), which worked for SYB,
fails us in this case.
On the following code 
\begin{lstlisting}[language=Scala]
def compos[AF[_]] ( unit: { def apply[A](arg:A):AF[A]})
          ( ap:   { def apply[A,B](fun: AF[A=>B], arg: AF[A]):AF[B] })
                  ( lfun: Exp => AF[Exp], e: Exp) 
                  : AF[Exp] = ...

\end{lstlisting}
the Scala compiler signals that 
\lstinline { Parameter type in structural refinement may not refer to abstract type defined outside that same refinement}
The standard recourse would be to introduce a new named trait, but since these two functions
 are the defining parts of an Applicative Functor instance, we can reuse 
this piece of the library we implemented. We just pass an implicit parameter for the Applicative 
Functor witness for the type constructor AF.
\\
The definition of \texttt{compos} is then as in listing \ref{composimpl},
\begin{lstlisting}[language=Scala,float,label=composimpl,caption=definition of compos]
def compos[AF[_]] (lfun: Exp => AF[Exp], e: Exp)
                  (implicit af: ApplicativeFunctor[AF])
                  : AF[Exp] 
  = e match {
    case EAbs(s,b) => af.Applicatively[String,Exp,Exp]( EAbs.apply _)(af.pure(s), lfun(b) )
    case EApp(f,a) => af.Applicatively[Exp,Exp,Exp]( EApp.apply _)(lfun(f),lfun(a))
    case other => af.pure(other)
}
\end{lstlisting}
which is exactly the same as our original composM function, but generalized to ApplicativeFunctor.
\\

This tradeoff does mean that where in Haskell functions can be passed as such, in 
Scala we need to wrap them up in an implementation of ApplicativeFunctor.
Consider for instance the definition of composOpFold in Haskell:
\begin{lstlisting}[language=Haskell]
newtype C b a = C { unC :: b } 

composOpFold :: b -> (b -> b -> b) 
        -> (Exp -> b) -> Exp -> b 
composOpFold z c f = 
    unC . compos (\_ -> C z) 
    (\(C x) (C y) -> C (c x y)) (C . f) 
\end{lstlisting}
This definition requires a dummy type C, 
which is used to throw away the tree result, keeping the b result 
which we are interested in.

In Scala we need to package the functions \texttt{pure} and \texttt{ap} as an ApplicativeFunctor instance. 
But if we just pay attention to the init and combine parameters, we recognize the elements of a monoid.
This is the same pattern we encountered when implementing 
\lstinline[language=Scala]{ def foldMap[A, M](fun: A=>M, arg: => T[A])(implicit mon:Monoid[M]):M }
as a specialization of 
\lstinline[language=Scala]{ def traverse[AF[_],A,B](f:A=> AF[B], ta: => T[A])(implicit aftr:ApplicativeFunctor[AF]):AF[T[B]]}
 with an applicative functor derived from a Monoid.
The only difference here is that our expression tree datatype is declared as a base type 
instead of a type constructor.
We could in principle use the parallelism between these 
definitions by implementing an 
implicit conversion from our Exp type to a new datatype 
ExpAsContainer[Exp], and then making that an instance of \texttt{Traversable}.
However, the translation from Exp to ExpAsContainer[Exp] is akin to unwrapping 
one level of the fixpoint, as in the regular functor case.
This is logical, since our simple datatype is just a tree of expressions, of which every node can be seen as a product of expressions.
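The applicative functor derived from a monoid, mentioned above, can be sketched as a Const-style wrapper that ignores its element type, mirroring the Haskell newtype C (names hypothetical):
\begin{lstlisting}[language=Scala]
trait Monoid[M] { def zero: M; def append(a: M, b: M): M }

// Const ignores its second type parameter and only accumulates M values.
case class Const[M, A](value: M)

def pure[M, A](a: A)(implicit m: Monoid[M]): Const[M, A] = Const(m.zero)
def ap[M, A, B](fun: Const[M, A => B], arg: Const[M, A])
               (implicit m: Monoid[M]): Const[M, B] =
  Const(m.append(fun.value, arg.value))
\end{lstlisting}
Running a traversal with this applicative functor combines the per-node results with the monoid while discarding the rebuilt tree, which is exactly what composOpFold does.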

So what we have here is evidence that the abstractions from chapter 7, whose 
relevance to programming may have seemed a bit far-fetched, run rather deep indeed.
\\

So far the method works great, but the situation becomes more complicated when 
our datatype of expression trees is no longer recursive in just 
expressions, but for example contains expressions and statements or, in the case of our System F interpreter,
term level constructions and type level constructions.
We would still like to write one function that works for both expressions and statements, 
but the actions taken are no longer always the same.

The Scrap your boilerplate approach doesn't have this problem because it works with fully 
polymorphic functions, not functions using subtyping to catch different case classes.
\\
In the compos paper this is tackled by declaring a datatype Tree that 
contains all constructors of both expressions and statements. 
Every constructor now contains Trees, so within a node static 
information is forgotten. To reintroduce this information they leave standard 
Haskell and turn to Generalized Algebraic Datatypes (GADTs) where every tree node
carries a type index signifying 
whether it is a statement, an expression, a variable or a type. GADTs are in principle supported 
in Scala, but the type inferencer still has limitations in dealing with them.
The bigger problem, similar to the regular functors case, is that 
the data definition now looks very strange from an object-oriented background.






\section{uniplate: traversal on syntax trees }

For the same subcase as covered by the compos approach, where 
we are interested in one type of subnodes of our case classes, the uniplate
project \citep{uniplate_paper} has worked to provide nicer, more natural
interfaces to the recursion.  Instead of the general SYB approach of providing a one level map and letting the user choose
the recursive traversal method, the uniplate library prepackages some of the
most common traversals. The definition of the generic functions becomes
accessible through sequence comprehensions.


The structure that uniplate exposes consists of an ordinary list of 
subexpressions, and a factory method that reconstructs a node based 
on such a list of subexpressions.
Elements of nodes that are not of the expression type cannot be 
reached directly, although one can get the nearest enclosing node 
of the right type and access it from there. When reconstructing, the untouched version of such elements will be reused in the new transformed version of the node.
To enable this style, again all that is needed is one general boilerplate method per case class.
The form this method takes is easiest to show with an example.
\begin{lstlisting}
case class Add(e1:Expr,e2:Expr) extends Expr {
  def uniplate = (List(e1,e2), {case List(c1,c2)=>Add(c1,c2)} )}
case class Let(s:String, varexpr:Expr,body:Expr) extends Expr{
  def uniplate = (List(varexpr,body), {case List(c1,c2)=>Let(s,c1,c2)})}
\end{lstlisting}


A small example of gathering all used variable names in an expression can be expressed as follows in this paradigm:
\begin{lstlisting}[language=Scala]
def variables(e:Expr): List[String] = for {Var(y) <- universe(e)} yield y
\end{lstlisting}

By exposing lists, this builds on the mapping, folding and sequence comprehension syntax instead of reimplementing it.
What this system does is turn the previous methods inside out. 
Previously we let loose on each node a function that encapsulated transformations or queries;
the node boilerplate managed the application of the function to its children and the recomposition of the node.
Here the code in each node exposes the actual subnodes and the reconstruction 
logic. Instead of pushing a node into the functions, 
we pull all nodes out into a function.
Of course, we also don't need to use the default List implementation. A general definition 
would use an abstract type of which we only know that it is Traversable. The official library implements this using continuations in the background to be faster than a naive implementation.

On top of the general uniplate method there are some other frequently 
used recursion patterns that are implemented in terms of uniplate.
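To make this concrete, here is a small standalone sketch (not the uniplate library itself) of how \texttt{universe} and a bottom-up \texttt{transform} can be defined in terms of the per-node uniplate method; the \texttt{Expr}, \texttt{Var} and \texttt{Add} classes are hypothetical examples in the style of the listings above.

```scala
// Sketch: deriving common traversals from the per-node uniplate method.
sealed trait Expr {
  // (direct subexpressions, factory rebuilding the node from such a list)
  def uniplate: (List[Expr], List[Expr] => Expr)
}
case class Var(name: String) extends Expr {
  def uniplate = (Nil, _ => this)
}
case class Add(e1: Expr, e2: Expr) extends Expr {
  def uniplate = (List(e1, e2), { case List(c1, c2) => Add(c1, c2) })
}

// universe: the node itself followed by all transitive subexpressions
def universe(e: Expr): List[Expr] =
  e :: e.uniplate._1.flatMap(universe)

// transform: rewrite every node, children first
def transform(f: Expr => Expr)(e: Expr): Expr = {
  val (children, rebuild) = e.uniplate
  f(rebuild(children map transform(f)))
}
```

With these two in place, queries like the variables example above become ordinary sequence comprehensions over \texttt{universe(e)}.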






\section{Scrap your boilerplate, reloaded}
A later version \citep{SYBreloaded} of the Scrap your boilerplate approach uses a different kind of 
implementation. Where in the earlier papers the traversal operations are composed from
combinators, in this new version the structure of a value is exposed as what 
is called the spine view. The spine view basically exposes a case class 
as a constructor function applied to a set of values. The function
 application here is now reified as data, 
and one can write transforming functions somewhat like folds over the spine of the value.

% This looks a bit like the case class apply and unapply functions used in 
% pattern matching, but instead of specific named patterns, it matches 
% the value into a foldable-like representation.


For every type we now need a reified type representer or converter named TypeMan 
(Type in the original paper). Such a converter knows how to convert a 
value of the proper type into that value's structural spine representation. 
\begin{lstlisting}[language=Scala]
  //contains function concrete => structural for datatype
  //specific for each datatype
  trait TypeMan[A]{  //kind: *=>*
    def toValSpine(a:A):ValSpine[A]
  }
\end{lstlisting}
This type converter then implements one part of the view 
between nominal and structural representations of a value.
The other direction is implemented by the datatype for the structural representation itself:
\begin{lstlisting}[language=Scala]
  //structural view of constructor application to values
  //contains function structural => concrete for datatype: structural info is enough to do this
  sealed trait ValSpine[A] { //stages in constructing a value
    def buildFromSpine:A
  }
\end{lstlisting}

The spine view itself looks somewhat like the uniplate approach in the structure it exposes, except 
it mingles the constructing function and the list of parameters together into a single list-like 
structure. This list-like structure reifies the partially applied constructor function.
The spine view consists of two case classes: a ConstrExt case denoting a whole value, 
and an Apped case containing a parameter as head and a tail, so we can operate using pattern matching.
\begin{lstlisting}[language=Scala]
  type ConstrDesc = {val n:String} //extra info: name
  case class ConstrExt[A](a:A,desc:ConstrDesc) extends ValSpine[A] {
    def buildFromSpine = a
  }
  case class Apped[A,B](constred: ValSpine[A=>B] , a_ :Typed[A]) extends ValSpine[B] {
    def buildFromSpine = constred.buildFromSpine (a_._1)
  }

  case class Typed[A](value:A,typerep:TypeMan[A]) extends Tuple2[A,  TypeMan[A]](value,typerep)
\end{lstlisting}

Where in uniplate we knew the types of the elements in the list of subnodes because we only 
considered one type, here we need to carry the type information with us. So 
the head of an Apped case contains not just the value, but a ``Typed'', a tuple of 
the value and the specific type converter that handles the value. 

The way a spine value can turn itself back into a nominal version is not too difficult.
The ConstrExt case reifies a fully applied constructor method: 
it contains the resulting nominal value and can just return it.
In the Apped case the object contains one subelement of the class within its Typed, 
as well as a structural representation of the already partially applied 
rest of the constructor. 
It can transform the structural constructor back into an actual 
constructor function and feed that function the constructor parameter it carries.

All structural behaviour is then implemented by first fetching the structural 
representation of a value of type T using its type converter TypeMan[T].
Then you operate on this structural view in whatever way is wanted, and finally 
the modified structural representation of type ValSpine[T] can transform itself back into a T.
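A concrete, self-contained sketch can make this round trip tangible. The datatype and instance names below are invented for illustration; \texttt{Typed} is simplified to a plain case class and \texttt{ConstrDesc} to a plain String field, so that the snippet stands alone and compiles on current Scala versions.

```scala
// Simplified re-statement of the spine machinery for this sketch.
trait TypeMan[A] { def toValSpine(a: A): ValSpine[A] }
sealed trait ValSpine[A] { def buildFromSpine: A }
case class ConstrExt[A](a: A, name: String) extends ValSpine[A] {
  def buildFromSpine = a
}
case class Apped[A, B](constred: ValSpine[A => B], arg: Typed[A]) extends ValSpine[B] {
  def buildFromSpine = constred.buildFromSpine(arg.value)
}
case class Typed[A](value: A, typerep: TypeMan[A])

// A tiny expression type with hand-written TypeMan instances.
sealed trait Expr
case class Num(i: Int) extends Expr
case class Plus(l: Expr, r: Expr) extends Expr

object IntMan extends TypeMan[Int] {
  def toValSpine(i: Int) = ConstrExt(i, "Int") // primitives have no spine
}
object ExprMan extends TypeMan[Expr] {
  def toValSpine(e: Expr): ValSpine[Expr] = e match {
    case Num(i) =>
      Apped(ConstrExt((n: Int) => Num(n): Expr, "Num"), Typed(i, IntMan))
    case Plus(l, r) =>
      Apped(Apped(ConstrExt((a: Expr) => (b: Expr) => Plus(a, b): Expr, "Plus"),
                  Typed(l, ExprMan)),
            Typed(r, ExprMan))
  }
}
```

Deconstructing a value into its spine and rebuilding it is then the identity: \texttt{ExprMan.toValSpine(e).buildFromSpine == e}.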


The middle part of such a transformation, that performs the changes on the 
structural spine of the value, is implemented as a pattern of 
two corecursive functions.

In the simplest queries, a function on nominal types just dispatches to a second 
function that performs a fold on the structural representation of the value.
A small example that calculates the number of subelements a case class has 
can demonstrate the way a fold over the spine is expressed.
\begin{lstlisting}
  def arity[T](t:T)(implicit tman:TypeMan[T]):Int = arityprim(tman.toValSpine(t))
  def arityprim[T](sp:ValSpine[T]):Int = sp match {
    case ConstrExt(a,desc) => 0
    case Apped(f, Typed(x,t)) => arityprim(f)+1
  }
\end{lstlisting}

Another example (listing \ref{sumexample}) exhibits the full typical corecursive structure, 
computing the sum of the integers at any level within a value.
\begin{lstlisting}[float,label=sumexample,caption=generic deep sum]
  def sum[A](a:A)(implicit tman:TypeMan[A]):Int = tman match {
      case IntMan => a
      case _    => sumstructural (tman.toValSpine(a))
  }
  def sumstructural[B](s:ValSpine[B]):Int = s match {
      case ConstrExt(c,i) => 0
      case Apped(f, Typed(x,t)) => sumstructural(f) + sum(x)(t)
  }
\end{lstlisting}
Here we see in the function for normal types, how ad-hoc behaviour can be 
defined by pattern matching on the type representer to decide the type of
the value the function is applied to. This works because there are some
predefined instances IntMan, StringMan and so on.

So the pattern consists of two functions. The first one takes normal
values and either catches specific behaviour or redirects to the
second one. The second one handles the structural spines.
The structural function returns a default value in the case of a fully applied
value, and else performs an operation
on the result of a deeper structural call and the result of the first function
on the actual subelement. It is for this last call that we need to keep the
TypeMan of the constructor parameter.


The boilerplate needed to enable this pattern is a version of TypeMan per datatype you want to traverse.
If we could get this information inside 
the class file for all values, this syntactic overhead of carrying the
representation explicitly could be eliminated, just as with ordinary
reflection, where the class file of a value can be accessed at all times. As it is, we
carry the representation along.



The instances of TypeMan can of course be written explicitly for each datatype. 
However, this follows a rigid template and could be automated. Ideally a
compiler plugin would generate an automatic instance for a case class. 
Until Scala compiler plugins are viable, it is easier to use a set of helper 
functions that generate a TypeMan instance for a case class.
\begin{lstlisting}[language=Scala]
  def CaseClass1ToValSpine[T,Param1]
      (clazz:Class[T])(implicit p1tyman: TypeMan[Param1]) :TypeMan[T]
  def CaseClass2ToValSpine[T,Param1,Param2]
      (clazz:Class[T])(implicit p1tyman: TypeMan[Param1], 
                                p2tyman:TypeMan[Param2]):TypeMan[T]
  ...
\end{lstlisting}
The TypeMans that result from these functions construct ValSpines based on the primary 
constructor of the case class gathered by reflection on the class file. This way providing 
a TypeMan instance costs just one line of enabling boilerplate code.






















\chapter{Casestudy: scrap your nameplate}

As explained in section 5.5, the original interpreter for the simply typed 
lambda calculus was extended to handle the polymorphic lambda calculus or System F.
While the typing and evaluation rules are still readily implementable, the 
implementation of capture-avoiding substitution becomes quite complex. We
now need to perform type substitutions in terms and types, and term
substitutions in terms. The rules for dealing with the proper bookkeeping of the DeBruijn
indices in these cases are not very intuitive. There is not a whole lot of
code (only about 90 lines), but it is very uninviting.

Therefore we will transfer our implementation of substitution from using
DeBruijn indices to a newer technique called Nominal Abstract Syntax.
Then we will try to scrap the boilerplate code this technique requires, using
the structural spines from ``Scrap your boilerplate, reloaded'', roughly guided
by the Functional Pearl ``Scrap your nameplate''
\citep{SYnameplate} demonstrating such a process in Haskell. 

% \section{structure of System F}
% In the simply typed lambda calculus, we can only abstract over values and form a function.
% In System F, the polymorphic lambda calculus, we can also abstract over a type parameter.
% Thus we can abstract a term over a
% type into a polymorphic function. The type of such a type-abstracted term is no
% longer a base type, but a type abstracted over another type. This can be either
% universal abstraction where the type will then be of the form 'forall a . T', or
% existential abstraction where the type hase the form 'exist a  . T' While
% universal abstractions can model ordinary colection-type type abstraction,
% existential abstraction models abstract data type or a module system.
% \\
% 
% To implement this, it is obvious
% we now get abstractions over terms and abstractions over types. When performing
% substitution, we can now need to substitute a term into a term, a type into a
% term, or a type into a type.
\section{switching to nominal abstract syntax}
Dealing with variable names, correct capture-avoiding substitution and
equivalence up to renaming is a notoriously tricky subject to get right.
Over time, a lot of different options have been explored. Options include
first-order abstract syntax with explicit substitution boilerplate; name-free
approaches such as the DeBruijn indices used in \citep{TAPL} and in our toy
interpreter; and higher-order abstract syntax, which reuses the variable and
binding system of the meta-language.
Because of increased interest in elegant solutions to this problem, related to
the popularity of machine-checked proofs, and the wish to have such proofs resemble
a natural style, a new ecosystem of ``Nominal'' logics and abstract syntax has
been developed. The theoretic principles were developed in
~\citep{PittsAM:newaas-jv} and implemented in a number of languages.
A recent Functional Pearl paper \citep{SYnameplate} explains how to port the
approach as a library to Haskell.

Because we have an intermediate interface that guards the substitution
function, from the point of view of the type checking and evaluation routines,
this change in substitution mechanisms is invisible.

\subsection{structure of nominal abstract syntax}
Nominal abstract syntax is based on a number of new concepts compared to
DeBruijn indices. It introduces names as a separate first-class entity, and
bindings as a datatype encapsulating a binding name and an expression where
this name is in scope, written here as name//body.
The insight behind nominal abstract syntax
\citep{cheney_nominal_logic_and_abstract_syntax} is the use of reversible
swapping of names instead of replacement of one name by another as a mechanism
of renaming. An invertible swapping operation that exchanges two names is better
behaved than replacement. 
We define a swapping operation [a$<>$b] on a name c as follows: if c == a,
the result \texttt{c[a$<>$b]} is b; if c == b, the result is a; otherwise the
result remains c. If we perform a swapping on a binding, (name//body)[a$<>$b], the result simply
propagates the swap to both elements: (name[a$<>$b]//body[a$<>$b]).

This operation preserves equality of names: if two names at different locations
are equal, and we perform the same swapping on them, the resulting names will
still be equal. It also preserves inequality: if they are different names, the results
after the swap will still be different from each other. Another property is
freshness: a name is fresh for a term if there are no unbound occurrences of
the name within the term. Again, if a name is fresh, the definition of swapping
ensures that the name after swapping will still be fresh for the swapped term.

This is better behaved than replacement, because replacement only respects
equality. Indeed, if t==u then t[x:=y] == u[x:=y]. But inequality is not necessarily
maintained: if x != y then x[x:=y] == y == y[x:=y]. 
Substitution can also change a name from fresh to not fresh: x is fresh for $\lambda$ x. f x y,
but x[y:=x] == x is not fresh for 
($\lambda$ x. f x y)[y:=x] == ($\lambda$ x1. f x1 x).
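These failure modes of replacement are easy to check concretely on names represented as plain strings; the following is a throwaway sketch, not the implementation used later in this chapter.

```scala
// swapping [a<>b] applied to a name c
def swap(c: String, a: String, b: String): String =
  if (c == a) b else if (c == b) a else c

// replacement [x := y] applied to a name c
def replace(c: String, x: String, y: String): String =
  if (c == x) y else c

// swapping keeps distinct names distinct ...
val (s1, s2) = (swap("x", "x", "y"), swap("y", "x", "y"))
// ... while replacement can collapse them onto the same name
val (r1, r2) = (replace("x", "x", "y"), replace("y", "x", "y"))
```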

The concepts that must be implemented are the following:
\begin{itemize}
  \item names
  \item a name-binder that couples a name to a term in scope, and for which
  alpha-equivalence coincides with equality.
  \item a swapping operation on expressions, that switches between two names
  \item a freshness operation on expressions, false if the name occurs free
  \item a name generation mechanism that can generate fresh names 
\end{itemize}

Just like for the swapping operation, the freshness and equality rules are
defined in the paper and quite easy to implement.
A name is fresh to another name if they are different. It is also always fresh
to a primitive that contains no names. Freshness on a composite term is the
conjunction of freshness on the subterms. Lastly, a name is fresh for a binding
if it is equal to the bound name (because all occurrences then are bound and not
free) or, if the name is different, when it is fresh to the bound name and to the
term in scope.

The equality on bindings should be defined so that, when the bindings share the
same bound name, they are equal if the bodies are equal. Lastly, and this is
the only tricky case: if the names of the bindings are different (a//t vs b//u),
the bindings are equal when the name of the first binding is fresh to the
body of the second (a fresh to u), and the body of the first is equal to the
body of the second one with both bound names swapped (t equal to u[a$<>$b]).
\\
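This binding-equality rule translates almost literally into code. Here is a standalone sketch on a minimal term type; the names \texttt{V}, \texttt{Bind}, \texttt{alphaEq} are ours, not the thesis implementation.

```scala
// Minimal terms: a name reference and a binder (name//body).
sealed trait Tm
case class V(name: String) extends Tm
case class Bind(binder: String, body: Tm) extends Tm

// swapping [a<>b], propagated structurally
def swap(t: Tm, a: String, b: String): Tm = t match {
  case V(c)        => V(if (c == a) b else if (c == b) a else c)
  case Bind(n, bd) => Bind(if (n == a) b else if (n == b) a else n, swap(bd, a, b))
}

// freshness: a name is fresh for a binder it equals, or if fresh in the body
def fresh(a: String, t: Tm): Boolean = t match {
  case V(c)        => c != a
  case Bind(n, bd) => n == a || fresh(a, bd)
}

// a//t == b//u  iff  a == b and t == u,
//               or   a fresh to u and t == u[a<>b]
def alphaEq(x: Bind, y: Bind): Boolean =
  if (x.binder == y.binder) x.body == y.body
  else fresh(x.binder, y.body) &&
       x.body == swap(y.body, x.binder, y.binder)
```

With this definition, a//var(a) and b//var(b) compare equal, as alpha-equivalence demands.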
Capture avoiding substitution is then defined on top of these definitions as
follows, using ``var(name)'' for a syntax node that refers to a bound name:
\begin{itemize}
  \item var(a)[a:= P] == P
  \item var(b)[a:= P] == var(b)
  \item (b//body) [a:= P] == b//body2 if b fresh to P and body[a:= P] == body2 
\end{itemize}
For a composite structure, the result of substitution is reached by simply
passing the substitution on to the children nodes.

The case for substitution on a binder is only specified if the binding variable
is fresh to the structure being substituted for the name. To implement this
algorithm, we must make sure this is always the case. This is easy if we have
the traditional endless source of new names fresh to the entire environment.
When we want to perform substitution on a binding, we generate a totally new
name and use the swap operation between the old binding name and the new name
on the body of the binding. The new binding so derived is alpha-equivalent to
the original one (this procedure performs exactly the steps to make the above
definition for alpha-equivalence true) and since the new name is new to the
system it is fresh to the structure being substituted.
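The rename-then-substitute procedure can be sketched standalone on a minimal term type with its own fresh-name counter; all names here are hypothetical and not the System F implementation.

```scala
// Minimal terms: a name reference, a composite node, and a binder (name//body).
sealed trait Term
case class Ref(name: String) extends Term
case class App(f: Term, a: Term) extends Term
case class Abs(binder: String, body: Term) extends Term

// an endless source of names fresh to the entire environment
var counter = 0
def freshName(): String = { counter += 1; "v" + counter }

def swap(t: Term, a: String, b: String): Term = t match {
  case Ref(c)     => Ref(if (c == a) b else if (c == b) a else c)
  case App(f, x)  => App(swap(f, a, b), swap(x, a, b))
  case Abs(n, bd) => Abs(if (n == a) b else if (n == b) a else n, swap(bd, a, b))
}

def subst(t: Term, a: String, p: Term): Term = t match {
  case Ref(c)    => if (c == a) p else t                 // the two variable rules
  case App(f, x) => App(subst(f, a, p), subst(x, a, p))  // pass on to the children
  case Abs(n, bd) =>                                     // swap in a brand-new binder,
    val n2 = freshName()                                 // fresh to p by construction
    Abs(n2, subst(swap(bd, n, n2), a, p))
}
```

Substituting x for y under a binder on x now cannot capture, because the binder is renamed before the substitution descends into the body.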
\\
One extra operation is the ``support'' of a structure, a list of all the free
variables that occur within. Having easy access to the support helps during one
particular type checking rule.

\subsection{conventional implementation of nominal abstract syntax}
Following the structure of the scrap your nameplate paper, we introduce the
following interfaces \texttt{Nominal} and \texttt{CanSubstIn} to our Scala
implementation, together with a class \texttt{Name} and a binding class \texttt{Binds},
synonymous with the \texttt{\\} syntax:
\begin{lstlisting}
trait Nominal[Self] {
  def swap(a: Name, b: Name): Self
  def fresh(a: Name): Boolean
  def supp: List[Name]
}
trait CanSubstIn[SubstParam, Self] { 
  def subst(sub: Name => Option[SubstParam]): Self
}
\end{lstlisting}
Both names and bindings are \texttt{Nominal}, and since most of the rules of
abstract nominal syntax have to do with these cases, these can be nicely
packaged away as a library. %The code that deals with generating new variables
%as wel as some syntactic niceties can also be hidden from the view of the user
%by using \texttt{apply} and \texttt{unapply} methods on companion objects.
\begin{lstlisting}
class Name(val name: String) extends Nominal[Name] {
  def swap(a: Name, b: Name) = if(this == a) b else if(this == b) a else this
  def fresh(a: Name) = this != a
  def supp = List(this)
  ...
}
type Binds[x] = \\[x]
class \\[T](private val binder: Name, private val body: T)(
            implicit val bodynom: T => Nominal[T]) extends Nominal[\\[T]] {

  def unabs: (Name, T) = { val newBinder = Name(binder);  
                             (newBinder, body swap (binder, newBinder)) }

  def swap(a: Name, b: Name) = \\(binder swap(a, b), body swap(a, b)) // boilerplate
  def fresh(a: Name) = if(a == binder) true else body fresh (a)
  def supp = body.supp filter (_ != binder)
  ...  
}
\end{lstlisting}

The first thing we then have to do is change our abstract syntax nodes, to use
names and binding constructs instead of integers. We go for example from 
\lstinline[language=Scala]
{case class Lam(hint: VarHint, ty: LType, body:LTerm) extends LTerm }
to
\lstinline[language=Scala]
{case class Lam(ty:LType, abs: \\[LTerm]) extends LTerm }

Because the methods in a \texttt{Binds} depend on those in its body, and in System F we have abstractions that bind LTerms and abstractions that bind
LTypes, we need to make both LTerm and LType \texttt{Nominal}. We can do this
just by making LTerm inherit \texttt{Nominal[LTerm]} and likewise for LType.
Then in each nodetype we have to implement the three methods, according to the
trivial rules for composite datastructures. This needs three lines of
boilerplate code per node in the abstract syntax tree. We can catch the
identical cases for all the leaf nodes by subtyping, so this alone saves us
quite a bit of boilerplate. All in all we need about 60 lines of user-written
code for these trivial definitions.


Of course this \texttt{Nominal} framework will serve to implement
substitution. Implementing substitution takes again some code on the library
side for names and binders, but the code on the user side is no longer homogeneous.
We have to deal with the pattern for composite structures, which is
straightforward, as well as with the specific behaviour for variables.
Using inheritance to specify \texttt{CanSubstIn} would be ideal, but then LTerm
needs to be declared \texttt{LTerm extends Nominal[LTerm] with (LTerm
CanSubstIn LTerm) with (LType CanSubstIn LTerm)}, since terms contain
abstraction over terms and types. Scala unfortunately does not allow inheriting from
the same trait twice with different type parameters. The solution is to use
implicit conversions.

In our substitution interface, the arguments are decorated with a specific
instance of \texttt{CanSubstIn}, based on the combination of the type of the
substitution and the type of the element that is substituted. So we implement
three different implicit conversions with their own substitution function:
\texttt{LTermTakesLTypeParam(t)}, \texttt{LTermTakesLTermParam(t)} and
\texttt{LTypeTakesLTypeParam(t)}. Each leads to a substitution function that
pattern matches on the case class and performs the substitution logic, either
applying the substitution in case of a variable or redirecting to the library
code in case of a binding. 
In this library code for bindings 
we need to manually partially apply a type parameter, so a call looks a bit unwieldy:
\texttt{(new AbsTakesSubstitution[LTerm]).AbsIsSubstable(t)
         (LTermTakesLTermParam).subst(sub) }

In the majority of cases, where we have an
ordinary non-leaf class, the substitution function is simply distributed over the
children (using the same implicit conversions to make them substitutable).
All this leads to about 90 lines of boilerplate code.

The code as a whole is much cleaner, but we will see whether we can't do 
something about the amount of dumb code we are implementing.



\section{Generic \texttt{Nominal}}
The ``scrap your nameplate'' approach tackles automatic definition of
\texttt{Nominal} by implementing it as a derivable type class. 
As there is as of
yet no compile-time system for derivable type classes in Scala, everything will
be done at runtime through reflection. Future work, once plugins for the Scala
compiler are viable, may remedy this.
\\
Because the elements of \texttt{Nominal} can be derived purely structurally,
they can also be implemented using well-behaved corecursive functions
over the typespine of a value.
The code that implements \texttt{fresh} is given in listing
\ref{freshname_listing}. In the non-structural function we specifically capture
cases where an inherited implementation is available to dispatch to the
cases for names and bindings. We then need to provide the implicit conversions
\texttt{LTerm => Nominal[LTerm]} and \texttt{LType => Nominal[LType]}.

\begin{lstlisting}[language=Scala,float,label=freshname_listing,caption=datatype-generic \texttt{fresh}] 
def freshname[T](a:Name)(t:T)(implicit tman: TypeMan[T]):Boolean = t match 
  { case inheritancetrumps: Nominal[T] => inheritancetrumps.fresh(a) 
    case _ => freshname_gen(tman.toValSpine(t))(a)}
def freshname_gen[T](ts:ValSpine[T])(a:Name):Boolean = ts match {
    case ConstrExt(_,_) => true
    case Apped(deep,Typed(arg,innertman))=> freshname(a)(arg)(innertman) && freshname_gen(deep)(a)
}
\end{lstlisting}

Using the library class \texttt{GenNominal} we can implement the implicit
conversion as in listing \ref{termtonominalconv}. The specific implicit conversion 
is the place to specify cases that should not be handled generically through the spine view.
\begin{lstlisting}[language=Scala,float,label=termtonominalconv,caption=Layering \texttt{Nominal}]
implicit def LTerm2Nominal(arg:LTerm):Nominal[LTerm] = new GenNominal[LTerm](arg) {
  override def fresh(a:Name) = arg match 
    { case special:Record => special.fresh(a)
      case _ => super.fresh(a)
    }
  override def swap(a:Name,b:Name) = arg match {
    case special: Record => special.swap(a,b)
    case _ => super.swap(a,b)
  }
  override def supp:List[Name] = arg match {
    case special:Record => special.supp
    case _ => super.supp
  }}


  //use to layer nominal on spineview
  class GenNominal[Self]( val self:Self)(implicit val tyman:TypeMan[Self]) extends Nominal[Self]{
    assume (tyman != null, "creating gennominal with tyman null: "+self)
    def swap(a: Name, b: Name): Self = (swapnames_gen(tyman.toValSpine(self))(a,b)).buildFromSpine
    def fresh(a: Name):Boolean  = freshname_gen(tyman.toValSpine(self))(a)
    def supp: List[Name] = suppnames_gen(tyman.toValSpine(self))
  }
\end{lstlisting}
One such implicit conversion for \texttt{LTerm} and one for \texttt{LType} 
together amount to almost 30 lines of user-written code.

But to enable such generic operations on the spine of values, there must be 
instances of \texttt{TypeMan}, to convert a value to its spine representation.
Creating these \texttt{TypeMan}s through reflection as explained previously
takes 14 lines, one for each case class. We also need to add a TypeMan for 
\texttt{Binds[LTerm]} and one for \texttt{Binds[LType]} because these occur 
within our nodes and the TypeMan for a node depends on the 
TypeMan instances for its subnodes. For these two instances, the TypeMan inherits 
from a library-provided PrimitiveTypeMan. The TypeMans will not really be used 
because of the exception clause seen in listing \ref{freshname_listing}.

This last reason also applies to all the case classes whose children are 
declared as \texttt{LTerm} or \texttt{LType}. We need a \texttt{TypeMan} for those
classes up in the hierarchy.
These are also the \texttt{TypeMan} instances that the call to \texttt{GenNominal} requires.
These have to be implemented by manually pattern matching on the argument and 
redirecting to the proper TypeMan implementation, as listing \ref{typetypemandispatcher} demonstrates. 
The two dispatchers come to an additional 25 lines of boilerplate.

\begin{lstlisting}[language=Scala,float,label=typetypemandispatcher,caption=dispatching function for LType]
//dispatcher
implicit object LTypeTypeMan extends TypeMan[LType] {
        def toValSpine(t:LType):ValSpine[LType] = t match {
                case x:BaseType => new sybrevolutions.forvalues.PrimitiveTypeMan(x.name).toValSpine(x)

                case x:TyVar => TyVarTypeMan.toValSpine(x)
                case x:TyArr => TyArrTypeMan.toValSpine(x)
                case x:TyUniv => TyUnivTypeMan.toValSpine(x)
                case x:TyExist => TyExistTypeMan.toValSpine(x)
}}
\end{lstlisting}


\section{Generic substitution}

The structure of substitution is conceptually a bit harder than that of \texttt{Nominal}.
The base cases of the inductive definition are now variable references, names and bindings.

We still need three versions of \texttt{CanSubstIn}: \texttt{LType CanSubstIn LType}, 
\texttt{LType CanSubstIn LTerm} and \texttt{LTerm CanSubstIn LTerm}.

We again implement the functionality using mutually recursive functions over the structural spine of a value.
We need two such sets of functions: one for substituting \texttt{LTerm} into \texttt{LTerm}, 
and one for \texttt{LType} into anything. 

The full code for performing term into term substitution can be seen in listing \ref{termintotermsubstable}.
When the function \texttt{subst} is initially called from the substitution interface, 
it is always to operate on a term. Therefore the value is joined 
with a reference to the TypeMan[LTerm] and the flow continues to the specific function. 
The specific function captures some exceptional cases, variables, and the general 
case through ordinary pattern matching on the incoming value. Because a pattern match 
is an object of type \texttt{PartialFunction}, it is possible to write 
combinators such as \texttt{Except} and \texttt{orElse} that lift ordinary 
PartialFunctions into a polymorphic variation. This way we can create the 
more complicated behaviour of this function as a composition.
\begin{lstlisting}[language=Scala]
trait PartialPolyFunction1 {
  def isDefinedAt[T](t:T):Boolean
  def apply[T](t:T):T
  def orElse(second:PartialPolyFunction1) = new HigherToLowerPrecedencePartialPolyFunction1(this,second)
}
\end{lstlisting}
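The thesis code for the combinators is not reproduced in full here; the following is our own sketch of how an \texttt{Except} constructor and the \texttt{orElse} combination could be realized. It inlines the role of \texttt{HigherToLowerPrecedencePartialPolyFunction1} into an anonymous class and uses a cast that is only safe for type-preserving clauses.

```scala
// Polymorphic partial functions, combinable with first-wins precedence.
trait PartialPolyFunction1 { self =>
  def isDefinedAt[T](t: T): Boolean
  def apply[T](t: T): T
  def orElse(second: PartialPolyFunction1): PartialPolyFunction1 =
    new PartialPolyFunction1 {
      def isDefinedAt[T](t: T) = self.isDefinedAt(t) || second.isDefinedAt(t)
      def apply[T](t: T): T = if (self.isDefinedAt(t)) self(t) else second(t)
    }
}

object Except {
  // S only documents the intended argument type; the clauses are untyped,
  // so the clause body must return a value of the same type it matched
  def apply[S](pf: PartialFunction[Any, Any]): PartialPolyFunction1 =
    new PartialPolyFunction1 {
      def isDefinedAt[T](t: T) = pf.isDefinedAt(t)
      def apply[T](t: T): T = pf(t).asInstanceOf[T]
    }
}
```

An exceptional clause for strings composed with a catch-all then behaves as one polymorphic function: the first clause fires where it is defined, the fallback everywhere else.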
The same exception for records as in \texttt{Nominal} is joined here by a shortcut for \texttt{LType}.
We encounter such nodes in this specific function because the structural function 
applies it to all of the subelements, including types, in cases such as the node for function abstraction.
The pattern match on bindings happens differently from all the others, by 
pattern matching on the type representation value. If we use 
a normal typetest, type erasure on the JVM doesn't differentiate between a 
\texttt{Binds[LTerm]} and a \texttt{Binds[LType]}.

\begin{lstlisting}[language=Scala,float,label=termintotermsubstable,caption=performing a term substitution on a term]
class TermintoTermSubstable(self:LTerm) extends (LTerm CanSubstIn LTerm) {
  def subst(sub: Name => Option[LTerm]): LTerm 
    = { specificSubstituteWithTerm[LTerm](sub, self)(findTermManhere) }
      
  //function that gets called on children:
  // if a variable => perform substitution
  // if a binds => perform substitution
  // special behaviour => through exception clause
  // anything else => convert to valspine
  def specificSubstituteWithTerm[T](sub: substitution[LTerm], t:T)
                                     (implicit tman:TypeMan[T]):T = {
     //first any exceptions
     val excclauses = Except [Any] { 
         case r@Record(fields) => 
           Record(fields.map( {case (lbl,term) => 
 (lbl, specificSubstituteWithTerm[LTerm](sub,term)(findTermManhere)) }))
         case typ:LType => typ // because ltypes do not take lterms
     }
     //then we get the variable case 
     val variableclause = Except[Any] {
         case dis @RefersByName(n) => (sub(n)).getOrElse(dis)
     } 
     //then the redirect to the general cases
     val general = Except[T] {
         case general =>
         GenericLTermIntoTSubst(tman.toValSpine(general),sub).buildFromSpine
     }
     val combined = excclauses orElse variableclause orElse general
     
    tman match {
       case TypeManDefs.BindsTermTypeMan => 
              (new AbsTakesSubstitution[LTerm]).AbsIsSubstable(t)(LTermTakesLTermParam).subst(sub)
       case _   => combined[T](t)
  }}
      
  def GenericLTermIntoTSubst[T](vs:ValSpine[T], sub: substitution[LTerm]):ValSpine[T] = vs match{
     case c@ConstrExt(_,_) => c
     case Apped(deep,Typed(arg,argtman)) => { 
        implicit val findTMan = argtman
        val argmod = specificSubstituteWithTerm(sub,arg)
        Apped(GenericLTermIntoTSubst(deep,sub), Typed(argmod,argtman)) 
}}}
\end{lstlisting}
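The \codein{orElse} chaining of the clauses above is the standard Scala \codein{PartialFunction} combinator. A minimal stand-alone sketch, with hypothetical example clauses rather than the thesis's \codein{Except} values:
\begin{lstlisting}[language=Scala]
object OrElseSketch {
  //three partial clauses, analogous to excclauses, variableclause, general
  val special:  PartialFunction[Int, String] = { case 0 => "zero" }
  val negative: PartialFunction[Int, String] = { case n if n < 0 => "negative" }
  val general:  PartialFunction[Int, String] = { case n => "positive: " + n }

  //clauses are tried left to right; the first one defined at the argument wins
  val combined: PartialFunction[Int, String] =
    special orElse negative orElse general
}
\end{lstlisting}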
Besides the syntax for combinator polymorphic partial functions, 
I have also added an interface \texttt{RefersByName} and a 
companion extractor object that allow matching both variable 
node types, \texttt{Var} and \texttt{TyVar}, with the same syntax.
\begin{lstlisting}[language=Scala]
object RefersByName {
  def unapply(ref:RefersByName):Option[Name] = Some(ref.getName)
}
trait RefersByName {
  def getName:Name
}
\end{lstlisting}
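As a usage sketch, with hypothetical, simplified \codein{Name}, \codein{Var} and \codein{TyVar} definitions (the real nodes carry more structure), a single pattern now covers both variable node types:
\begin{lstlisting}[language=Scala]
object RefersByNameSketch {
  case class Name(id: String)

  trait RefersByName { def getName: Name }
  object RefersByName {
    def unapply(ref: RefersByName): Option[Name] = Some(ref.getName)
  }

  //hypothetical, simplified variable nodes
  case class Var(name: Name)   extends RefersByName { def getName = name }
  case class TyVar(name: Name) extends RefersByName { def getName = name }

  //one extractor pattern matches term and type variables alike
  def describe(node: Any): String = node match {
    case RefersByName(n) => "reference to " + n.id
    case _               => "not a reference"
  }
}
\end{lstlisting}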


In the second function, which performs type substitution (shown 
partially in listing \ref{partial_typesubst}), pattern matching on the \texttt{TypeMan} instead of on the value is even more important.
If we simply wrote two dynamic type tests here to distinguish 
\texttt{Binds[LTerm]} from \texttt{Binds[LType]}, type erasure would always 
cause the first match to succeed. In ordinary programming on the JVM this is
something you either have to live with 
or work around by implementing specific identification methods such as ``bindsTerm()'' and ``bindsType()''.
That would destroy the opportunity for a single parametric \texttt{Binds[T]}.
Because our \texttt{TypeMan}s carry 
full type information, we can escape the erasure problem here.
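The pitfall is easy to reproduce in a self-contained example using standard lists instead of \codein{Binds}: after erasure both patterns test for the same runtime class, so the first one matches any list.
\begin{lstlisting}[language=Scala]
object ErasureDemo {
  //both patterns erase to the runtime class List, so the first case
  //matches every list; the @unchecked annotations silence the warning
  def classify(x: Any): String = x match {
    case _: List[Int @unchecked]    => "list of ints"
    case _: List[String @unchecked] => "list of strings"  //unreachable
    case _                          => "other"
  }
}
\end{lstlisting}
Hence \codein{classify(List("a"))} reports a list of ints, exactly the behaviour the \texttt{TypeMan} match avoids.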

\begin{lstlisting}[language=Scala,float,label=partial_typesubst,caption=fragment of type substitution]
//we would like to write
case body:Binds[LTerm] => (new AbsSubstTo[LType]).AbsIsSubstable(body)(LTermTakesLTypeParam).subst(sub)
case body:Binds[LType] => (new AbsSubstTo[LType]).AbsIsSubstable(body)(LTypeTakesLTypeParam).subst(sub)
//without full type information from TypeMan we have to write
case binding:Binds[LTerm] =>  try {
      (new AbsSubstTo[LType]).AbsIsSubstable(binding)(LTermTakesLTypeParam).subst(sub).asInstanceOf[T]
   } catch { case e:ClassCastException =>  t match{
        case binding:Binds[LType] => (new AbsSubstTo[LType]).AbsIsSubstable(binding)(LTypeTakesLTypeParam).subst(sub).asInstanceOf[T] }
   }
}

//using TypeMan we now write
tman match {
  case TypeManDefs.BindsTermTypeMan => 
    (new AbsTakesSubstitution[LType]).AbsIsSubstable(t)(LTermTakesLTypeParam).subst(sub)
  case TypeManDefs.BindsTypeTypeMan =>
     (new AbsTakesSubstitution[LType]).AbsIsSubstable(t)(LTypeTakesLTypeParam).subst(sub)
   case _   =>  combined[T](t) 
}
\end{lstlisting}



Every user of a library in this style must extend it by writing their own substitution functions, 
composed from library code and the \texttt{TypeMan} pattern.

The substitution functions themselves now come to around 70 lines of code.




\newpage
\section{Conclusions interpreter case study}

The conclusion to be drawn from the working interpreter is that Scala is 
indeed expressive enough to enable datatype-generic programming and to make it useful.

Datatype-generic programming promised
\begin{itemize}
  \item less boilerplate code
  \item greater resilience against changes
  \item greater reuse
  \item clearer code
\end{itemize}

With regard to the amount of boilerplate, the story is rather mixed.
We started off with around 90 lines of compacted, hard-to-understand 
De Bruijn index manipulation code. The whole implementation fits into this, without any reusable library code.

By moving to Nominal Abstract Syntax, the size of the code blows up. We need around 60 lines 
to implement the \texttt{Nominal} base functionality, with another 90 to layer 
substitution on top. And that is with only rather trivial code remaining on the 
user's side, the tricky rules having been implemented as a library.

By using the ``Scrap your boilerplate, Reloaded'' generic mechanism, we need 30 
lines to implement \texttt{Nominal} generically, and around 70 to implement substitution.
However, this method requires \texttt{TypeMan} boilerplate code of its own, which comes to another 40 lines.

Hence there is a small reduction in code size, but not a significant one. However, even now 
the cost of providing the \texttt{TypeMan}s is amortized over two features. 
I believe that in a more featureful interpreter or compiler, the code reduction versus the baseline would be more pronounced.
\\
The code has become more resilient to changes: adding or removing nodes requires only the corresponding modification of the \texttt{TypeMan} instances.
If the new datatypes need exceptional treatment, there are clear locations where those exceptions can be specified.
\\
The library parts are now reusable. This is more a matter of adding the right abstractions 
by moving from De Bruijn indices to Nominal Abstract Syntax. Modelling names 
and bindings as classes instead of by convention centralizes the code. We didn't even 
need advanced abstraction mechanisms, just the right abstractions.
\\
Finally, although reading more abstract code takes some getting used to, it is 
my feeling that the code now indeed expresses its intention more clearly. 
This is an effect of modularization: splitting the decisions and the traversal logic apart.
The code is a lot more structured than if everything had been written directly against the ordinary JVM reflection API. 
\\
\\
\subsubsection*{Possible enhancements}
Further conciseness is possible by integrating the type reification mechanism of 
\texttt{TypeMan} with the standard class files, or with another representation 
without type erasure. Then the implementation could also become more type-safe.

The general structural functions could be made more conspicuous by providing default abstractions for common patterns.

The use of manually dispatching functions should be made redundant 
by auto-generating them for sealed case class hierarchies that correspond to abstract data types.

Finally, more experience on bigger projects than this toy interpreter could 
lead to an implementation fit for the standard library, thus enabling true reuse.







\chapter{Conclusion}
In this thesis I have studied the field of typed languages, written an introduction 
to a new programming language, and written a toy interpreter for the simply typed 
and the System F lambda calculi.
I have studied different techniques, known from the literature, for writing programs 
that work over different types. I have ported several to Scala 
and identified problems. I have experimented with name binding and investigated, 
as a case study, how far a particular approach can improve the interpreter.

Throughout, I gained a lot of experience with the Scala language and respect for the language design process. 
\\


Scala is an impressive language, capable of concise and elegant programs. The porting process from Haskell 
as well as the application to a case study have shown that it is capable of datatype-generic 
programming.

For both code structuring using advanced interfaces and for minimising traversal 
boilerplate, datatype-generic programming in Scala has proven its worth.

It does make code more concise, more flexible and more reusable.



Scala is also a language that is not frozen yet.
I have in the process of this thesis identified some areas where the language could 
be adapted to make workarounds and extra verbosity unnecessary. 

Some specific extra features that would make the solutions more elegant include:
\begin{itemize}
\item  Abstracting over class methods in an interface
\item  Rank-two polymorphic implicit conversions
\item  Rank-two polymorphic syntax and standard traits
\item  Overloading similar types with different kinds
\item  Generalized type constraints
\item  Structural syntax for type composition and partial type application
\item  Type constructor inference
\end{itemize}

All in all, I hope that in my future professional career 
I will be able to work with a language as flexible as Scala.






%benefits of type level constructor passing :
%can work with single version with weakest interface, not a function per strength of interface to not lose typing information.
%``Again, this generalization is only possible because in Scala we can pass the original type constructor AF. We don't loose the information when AF is stronger and a monad because we never need to upcast.''


% \subsubsection{extra problems in scala: put somewhere els}
% 
% Monoid vs Monoid1: need definitions per kindedness
% 
% manifest: expose product view of caseclasses? we still need to have the type
% parameters? => try if this improves with inverted typemans
% 
% manifest for sealed should give access to subclasses; perhaps proper lift from
% either on subclasses to func on top in a less ad-hoc way.
% \newpage 
% 











\newpage
\bibliographystyle{abbrvnat}
\bibliography{types,DGP}





\end{document}
