\documentclass[runningheads]{llncs}  %  

\usepackage[utf8]{inputenc}

% to get hyperref to stop complaining about bookmarks
% \makeatletter
% \providecommand*{\toclevel@title}{0}
% \providecommand*{\toclevel@author}{0}
% \makeatother

\usepackage{ifthen}
\usepackage{listings}
\usepackage{graphicx}
\usepackage{xcolor}
\usepackage{hyperref} % generates  "Latex Error: ./tcpoly.tex:116 Missing \endcsname inserted."

% \definecolor{dullmagenta}{rgb}{0.4,0,0.4}   % #660066
% \definecolor{darkblue}{rgb}{0,0,0.4}
%\hypersetup{linkcolor=darkblue,citecolor=darkblue,filecolor=dullmagenta,urlcolor=darkblue} % coloured links

\usepackage{courier}
\usepackage{times}
\usepackage{paralist}

\newcommand{\AWK}{{\color{red}AWK}}
\newcommand{\comment}[1]{}
\newcommand{\TODO}[1]{\mbox{{\color{red}TODO}}\{{\footnotesize{#1}}\}}


\newcommand{\code}[1]{\lstinline{#1}}
\newcommand{\class}[1]{\code{#1}}
\newcommand{\type}[1]{\code{#1}}
\newcommand{\kind}[1]{\code{#1}}
\newcommand{\method}[1]{\code{#1}}
\newcommand{\kto}[1]{\ensuremath{\rightarrow}}
\newcommand{\tmfun}[1]{\ensuremath{\rightarrow}}
\newcommand{\tpfun}[1]{\ensuremath{\Rightarrow}}
\newcommand{\nuObj}{$\nu$Obj}
\newcommand{\OmegaLang}{$\Omega$mega}
\newcommand{\CSharp}{C$^{\#}$}
\def\toplus{\hbox{$\, \buildrel {\tiny +}\over {\to}\,$}}
\def\tominus{\hbox{$\, \buildrel {\tiny -}\over {\to}\,$}}

\lstset{
  literate=
  {=>}{$\Rightarrow$}{2}
  {->}{$\to$}{2}
  {-(+)>}{$\toplus$}{2}  
  {-(-)>}{$\tominus$}{2}  
  {<-}{$\leftarrow$}{2}
  % {\\}{$\lambda$}{1}
  {<~}{$\prec$}{2}
  {<|}{$\triangleleft$}{2}
  {<:}{$<:$}{1}
}

\lstdefinelanguage{scala}{% 
       morekeywords={% 
                try, catch, throw, private, public, protected, import, package, implicit, final, trait, type, class, val, def, var, if, this, else, extends, with, while, new, abstract, object, requires, case, match, sealed, override},% 
         sensitive=true,% 
   morecomment=[s]{/*}{*/},morecomment=[l]{//},% 
   escapeinside={/*\%}{*/},%
   rangeprefix= /*< ,rangesuffix= >*/,%
   morestring=[d]{"}% 
 }
 
\lstdefinelanguage{Haskell}{%
   otherkeywords={=>},%
   morekeywords={abstype,break,class,case,data,deriving,do,else,if,instance,newtype,of,return,then,where},%
   sensitive,%
   morecomment=[l]--,%
   morecomment=[n]{\{-}{-\}},%
   morestring=[b]"%
  }
  
\lstset{numberbychapter=false,breaklines=true,language=scala} 
%\lstset{basicstyle=\footnotesize\ttfamily, breaklines=true, language=scala, tabsize=2, columns=fixed, mathescape=false,includerangemarker=false}
% thank you, Burak 
% (lstset tweaking stolen from
% http://lampsvn.epfl.ch/svn-repos/scala/scala/branches/typestate/docs/tstate-report/datasway.tex)
\lstset{
    xleftmargin=1em,%
    frame=single,%  TODO REMOVE only for floating listings
    captionpos=b,%
    fontadjust=true,%
    columns=[c]fixed,%
    keepspaces=true,%
    basewidth={0.56em, 0.52em},%
    tabsize=2,%
    basicstyle=\renewcommand{\baselinestretch}{0.97}\small\tt,% \small\tt
    commentstyle=\textit,%
    keywordstyle=\bfseries,%
}

\input{ott}

\bibliographystyle{abbrv}

\renewcommand{\floatpagefraction}{0.90} 



\title{Generics of a Higher Kind}
\author{Adriaan Moors\inst{1} \and Frank Piessens\inst{1} \and Martin Odersky\inst{2}}
\institute{K.U. Leuven \\ \email{\{adriaan, frank\}@cs.kuleuven.be} \and EPFL \\ \email{martin.odersky@epfl.ch}}
 
% \usepackage{prebutterma} 
% \idline{Draft (Version of \today)} 
% capitalise dblp.bib
  
\begin{document}
\maketitle
% \thispagestyle{electronic} 

\begin{abstract}
With Java 5 and \CSharp 2.0, first-order parametric polymorphism was
introduced in mainstream object-oriented programming languages
under the name of {\em generics}. Although the first-order variant of
generics is very useful, it also imposes some restrictions: it is
possible to abstract over a type, but the resulting type constructor
cannot be abstracted over. This can lead to code duplication. We
removed this restriction in Scala, by allowing type constructors as
type parameters and abstract types. This paper presents the design and
implementation of the resulting type constructor polymorphism.  It
combines type constructor polymorphism with implicit parameters to
yield constructs similar to, and at times more expressive than,
Haskell's constructor type classes.  The paper also studies
interactions with other object-oriented language constructs, and
discusses the gains in expressiveness.
\end{abstract}



%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}

First-order parametric polymorphism is now a standard element of
statically typed programming languages.  Starting with System
F \cite{girard:thesis,DBLP:conf/programm/Reynolds74} and functional
programming languages, the constructs have found their way into
mainstream languages such as Java and C\#.  In these languages, 
first-order parametric polymorphism is usually called {\em generics}. Generics rest on sound theoretical foundations, established by Abadi and Cardelli \cite{DBLP:journals/iandc/AbadiC96,DBLP:journals/scp/AbadiC95},
Igarashi et al. \cite{DBLP:journals/toplas/IgarashiPW01}, and many
others; they are well-understood by now. 

One standard application area of generics is collections. For
instance, the type \lstinline@List[A]@ represents lists of a given element type
\lstinline@A@, which can be chosen freely. In fact, generics can be seen as a
generalisation of the type of arrays, which has always been parametric
in the type of its elements.

First-order parametric polymorphism has some limitations, however. It makes
it possible to abstract over types, yielding {\em type constructors}
such as \lstinline@List@. However, the resulting type constructors cannot
themselves be abstracted over.  For instance, one cannot usually pass
a type constructor such as \lstinline@List@ as a type argument to another type
constructor. It turns out that this restriction prevents the
formulation of some quite natural abstractions and thus leads to
unnecessary duplication of code. We provide several examples of such
abstractions in this paper.

More generality can be achieved by passing to higher-order
{\em type constructor polymorphism}.  
The generalisation to higher-order polymorphism has been a
natural step in lambda calculus
\cite{girard:thesis,DBLP:conf/programm/Reynolds74,DBLP:journals/iandc/BruceMM90}
and it has also influenced the design of functional programming
languages. For instance, the Haskell programming language
\cite{DBLP:journals/sigplan/HudakPWBFFGHHJKNPP92} supports type
constructor polymorphism, which is also integrated with its type class
concept \cite{DBLP:journals/jfp/Jones95}.  This generalisation to
types that abstract over types that abstract over types
(``higher-kinded types'') has many practical applications. For
example, comprehensions \cite{DBLP:journals/mscs/Wadler92}, parser
combinators \cite{Hutton96:monpars,LeijenMeijer:parsec}, and recent
work on embedded Domain Specific Languages (DSL's)
\cite{DBLP:conf/aplas/CaretteKS07} critically rely on higher-kinded
types.

The same needs -- as well as more specific ones -- arise in
object-oriented programming. LINQ introduced direct support for
comprehensions on .NET~\cite{DBLP:conf/ecoop/BiermanMS05,DBLP:conf/oopsla/Meijer07},
Scala has had a similar feature from the start, and Java 5
introduced a lightweight variation. Parser combinators are also
gaining momentum: Bracha uses them as the underlying technology for
his Executable Grammars \cite{1314923}, and Scala's distribution
includes a library \cite{moors07:sparsec} that allows users to express
parsers directly in Scala, in a notation that closely resembles EBNF.

In this paper, we study the design and implementation of type
constructor polymorphism in the Scala programming language. These
higher-order generics have been available in Scala since version 2.5,
which was released as a public distribution in May 2007.  We
motivate why abstracting over type constructors is useful and how it
can avoid code redundancies. We develop a system of kinds for
characterising types and type constructors. Kinds express which types
or type constructors are admissible instances at an abstraction point;
they play the same role for types that types play for values. Our
kinds capture three different aspects of a type or type constructor:
its shape, its lower and/or upper bounds, and its variance.

We then show how type constructor polymorphism can be combined with
Scala's implicit parameters to provide expressiveness analogous to
type constructor classes in Haskell. In fact, the combination of
higher-kinded types, implicits, and subtyping lets us express concepts
such as bounded monads that are beyond the reach of standard type
constructor classes.

Languages with virtual types or virtual classes can encode type
constructor polymorphism through abstract type members. The idea is to model
a type constructor such as \lstinline@List@ as a simple abstract type that has
a type member describing the element type.  Scala belongs to this
category of languages.  For instance, in Scala you could also define
\lstinline@List@ as a class with an abstract type member instead of as a 
type-parameterised class:
\begin{lstlisting}[frame=no]
abstract class List { type Elem }
\end{lstlisting}
Then, a concrete instantiation of \lstinline@List@ could be modelled as a type refinement, as in
\lstinline@List { type Elem = String }@. The crucial point is that in this encoding \lstinline@List@ is a type, not
a type constructor. So first-order polymorphism suffices to pass the \lstinline@List@ constructor
as a type argument or as an instance of an abstract type.
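For illustration, here is a sketch of how the encoded \lstinline@List@ can be manipulated as an ordinary type (the names \lstinline@StringList@ and \lstinline@sizeOf@ are hypothetical, not part of any actual library):
\begin{lstlisting}[frame=no]
// a concrete instantiation is a type refinement:
type StringList = List { type Elem = String }

// the encoded constructor is an ordinary type, so first-order
// polymorphism can abstract over it:
def sizeOf[Coll <: List](xs: Coll): Int = // omitted
\end{lstlisting}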

Compared to type constructor polymorphism, this encoding has three
disadvantages, however. First, it is considerably more
verbose. Second, it requires the definition of named members
representing the element types, which induces the risk of accidental
name clashes in class inheritance hierarchies.  Third, the encoding
permits the definition of certain nonsensical type abstractions that
cannot be instantiated to concrete values later on. By contrast, type
constructor polymorphism has a {\em kind soundness} property which
guarantees that well-kinded type applications never result in
nonsensical types. These are three reasons that argue in favour of
including type constructor polymorphism in object-oriented programming.

The main contributions of this paper are as follows:
\begin{itemize}
  \item We describe an implementation of type constructor polymorphism 
        in a widely used object-oriented language.
  \item We develop a kind-system that captures both lower and upper bounds and variances of types.
  \item We combine higher-kinded types with Scala's implicit parameters to
        provide expressiveness analogous to Haskell's type constructor classes.
  \item We show that the combination of higher-kinded types, 
        implicit parameters, and subtyping can express concepts such as a bounded monad, which cannot be expressed by classical type constructor classes.
  \item We discuss an encoding of type constructor polymorphism using Scala's 
        abstract type members.
  \item We formulate the kind soundness property of type constructor polymorphism, 
        and explain why it gets lost in the encoding to abstract type members.
\end{itemize}

The rest of this paper is structured as follows.  Section
\ref{sec:dup} demonstrates that type constructor polymorphism reduces
boilerplate that arises from the use of genericity. We start out with
a simple example, and extend it to a full implementation of the
comprehensions fragment of \type{Iterable}.  Section
\ref{sec:implicits} shows the utility of implicits in OOP, as well as
how they can be used to encode Haskell's constructor type classes.
Section \ref{sec:bounds} further extends our kind system so that we
can safely abstract over type constructors with bounded type
parameters. We then apply this generalisation to \type{Iterable} and
our encoding of type classes.  Section \ref{sec:embedding} relates the
functional and object-oriented styles of building abstractions, and
introduces the notion of \emph{kind soundness}.  Section
\ref{sec:variance} illustrates the need for higher-order variance
annotations, which are required for type soundness.  Finally, we
summarise related work in Section \ref{sec:related} and conclude in
Section \ref{sec:conclusion}.

% % conclusion



%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Reducing Code Duplication with Type Constructor Polymorphism \label{sec:dup}}
%\subsection{Introduction}

In this section, we illustrate the benefits of generalising genericity to type constructor polymorphism using the well-known \type{Iterable} abstraction. We begin with a simple example, which is due to Alexander Spoon, but we will extend it to more realistic proportions in Section \ref{sec:scala:iterable}.

% \begin{lstlisting}[float,caption=Limitations of Genericity,label=lst:iter:gen]
% trait Iterable[T] {
%   def filter(p: T => Boolean): Iterable[T]
%   def remove(p: T => Boolean): Iterable[T] = filter (x => !p(x))
% }
% 
% trait List[T] extends Iterable[T] {
%   def filter(p: T => Boolean): List[T] 
%   override def remove(p: T => Boolean): List[T] 
%     = filter (x => !p(x))
% } 
% \end{lstlisting}
\begin{figure}
  \centering\fbox{
    \includegraphics[width=\textwidth]{IterableRedundant}}
  \caption{Limitations of Genericity}
  \label{lst:iter:gen}
\end{figure}


Figure \ref{lst:iter:gen} shows a Scala \cite{LAMP-REPORT-2006-001} implementation of the trait \type{Iterable[T]}, which is an abstract class that supports mixin composition. It contains an abstract method \code{filter} and a convenience method \code{remove}. Subclasses should implement \code{filter} so that it creates a new collection by retaining  only the elements of the current collection that satisfy the predicate \code{p}. This predicate is modelled as a function that takes an element of the collection, which has type \type{T}, and returns a \type{Boolean}. \code{remove} is implemented in terms of \code{filter}, as it simply inverts the meaning of the predicate.

Naturally, when filtering a list, we expect to again receive a list. Thus, \type{List} overrides \code{filter} to refine its result type covariantly. For brevity, we omitted \type{List}'s subclasses, which implement this method. For consistency, \code{remove} should have the same result type, but the only way to achieve this is by overriding it as well. The resulting code duplication is a clear indicator of a limitation of the type system: both methods in \type{List} are redundant, but the type system is not powerful enough to express them at the required level of abstraction in \type{Iterable}.
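In code, the duplication of Fig.~\ref{lst:iter:gen} looks approximately as follows:
\begin{lstlisting}[frame=no]
trait Iterable[T] {
  def filter(p: T => Boolean): Iterable[T]
  def remove(p: T => Boolean): Iterable[T] = filter(x => !p(x))
}

trait List[T] extends Iterable[T] {
  def filter(p: T => Boolean): List[T]
  // redundant: repeats Iterable's logic only to refine the result type
  override def remove(p: T => Boolean): List[T] = filter(x => !p(x))
}
\end{lstlisting}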


% \begin{lstlisting}[float,caption=Removing Code Duplication,label=lst:iter:tcpoly]
% trait Iterable[T, Container[X]] {
%   def filter(p: T => Boolean): Container[T]
%   def remove(p: T => Boolean): Container[T] = filter (x => !p(x))
% }
% 
% trait List[T] extends Iterable[T, List]
% \end{lstlisting}

\begin{figure}
  \centering\fbox{
    \includegraphics[width=\textwidth]{IterableTcpoly}}
  \caption{Removing Code Duplication}
  \label{lst:iter:tcpoly}
\end{figure}


Our solution, depicted in Fig. \ref{lst:iter:tcpoly}, is to abstract over the type constructor that represents the container of the result of \code{filter} and \code{remove}. Our improved \type{Iterable} now takes two type parameters: the first one, \type{T}, stands for the type of its elements, and the second one, \type{Container}, represents the \emph{type constructor} that determines part of the result type of the \code{filter} and \code{remove} methods. More specifically, \type{Container} is a type parameter that itself takes one type parameter. Note that the name of this higher-order type parameter (\code{X}) is not relevant here.

Now, to denote that applying \code{filter} or \code{remove} to a \type{List[T]} returns a \type{List[T]}, \type{List} simply instantiates \type{Iterable}'s type parameter to the \type{List} type constructor.

In this simple example, we could also have used a construct like Bruce's \type{MyType} \cite{DBLP:conf/ecoop/BruceSG95}. However, this scheme breaks down in more complex cases, as we will demonstrate in Section \ref{sec:scala:iterable}. First, we introduce type constructor polymorphism in more detail.

\begin{figure}
  \centering
    \includegraphics[width=\textwidth]{hierarchy}
  \caption{Diagram of levels}
  \label{fig:levels}
\end{figure}  

\subsection{Type constructors and kinds}

Type constructor polymorphism generalises genericity so that we can abstract over  type constructors (such as \type{List}) as well as proper types (such as \type{Int} or \type{List[Int]}). To distinguish proper types from type constructors, we use ``kinds'' (a term borrowed from functional programming). Kinds are to types as types are to values. This divides our language into three levels: at the bottom, we have objects (our values), as depicted in Fig. \ref{fig:levels}. Objects are classified by types, which reside in the next level. Finally, types are classified by kinds. %As higher levels imply a higher level of abstraction, the number of ``entities'' in a level becomes more constrained as we go up in the hierarchy: the number of objects isn't known until run time, types must be known at compile time, and the number of essentially different kinds is fixed by the language specification. 


Unlike types, kinds are purely structural: they simply reflect the kinds of the type parameters that a type expects. Since proper types all take the same number of type parameters (i.e., none), they are classified by the same kind, \lstinline!*!. To classify type constructors, we use a kind \emph{constructor} \lstinline!From -> To!, where \lstinline!From! is the kind of the expected type argument and \lstinline!To! is the kind of the type that results from applying the type constructor to such an argument.

For example, \lstinline!class List[T]! gives rise to a type constructor \lstinline!List! that is classified by the kind \lstinline!* -> *!, as applying \lstinline!List! to a proper type yields a proper type. Note that, since kinds are structural, given e.g., \mbox{\lstinline!class Animal[FoodType]!,} \lstinline!Animal! has the exact same kind as \lstinline!List!.
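A few further illustrative declarations, annotated with the kinds they give rise to (the class names are hypothetical):
\begin{lstlisting}[frame=no]
class Set[T]         // Set     : * -> *  -- same kind as List
class Functor[F[X]]  // Functor : (* -> *) -> *
\end{lstlisting}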

Our initial\footnote{In Section \ref{sec:bounds}, we will extend this model with support for bounds, and Section \ref{sec:variance} describes the impact of variance on the level of kinds.} model of the level of kinds can be described using the following grammar:

$$
\ottnt{K} ::= \quad \ottkw{*} \quad | \quad \ottnt{K} \; \rightarrow \; \ottnt{K}
$$

The rules that define the well-formedness of types in a language without type constructor polymorphism correspond to the rules that assign the kind \lstinline!*! to a type. Our extensions generalise this to the notion of kind checking, which is to types as type checking is to values and expressions.

A class, an unbounded type parameter, or an abstract type member receives the kind \kind{K' -> *} if it has one type parameter with kind \kind{K'}. For bounded type parameters or abstract members, the kind \kind{K' -> K} is assigned, where \kind{K} corresponds to the bound.  We use currying to generalise this scheme to multiple type parameters. The type application \type{T[T']} has kind \kind{K} if \type{T} has kind \kind{K' -> K} and \type{T'} is classified by the kind \kind{K'}.
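As a sketch of these rules at work, consider a hypothetical two-parameter class and the kinding of its (partial) applications:
\begin{lstlisting}[frame=no]
class Pair[A, B]   // Pair : * -> * -> *  (curried)

// kind checking the application Pair[Int, String]:
//   Pair              : * -> (* -> *)
//   Pair[Int]         : * -> *   since Int    : *
//   Pair[Int, String] : *        since String : *
\end{lstlisting}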

Finally, the syntactical impact of extending Scala with type constructor polymorphism is minor. Before, only classes and type aliases could declare formal type parameters; now, type parameters and abstract type members may declare them as well.  Figure \ref{lst:iter:tcpoly} already introduced the notation for type constructor parameters, and Listing \ref{lst:iter:mem} completes the picture with an alternative formulation of our running example using an abstract type constructor member.

\begin{lstlisting}[float,caption=\type{Iterable} with an abstract type constructor member,label=lst:iter:mem]
trait Iterable[T] {
  type Container[X]
  
  def filter(p: T => Boolean): Container[T]
}
\end{lstlisting}
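To make the abstract member variant concrete, a subclass fixes \type{Container} using a type alias; the class \lstinline@MyList@ below is a hypothetical sketch, not part of the actual library:
\begin{lstlisting}[frame=no]
class MyList[T] extends Iterable[T] {
  type Container[X] = MyList[X]  // fix the abstract member

  def filter(p: T => Boolean): MyList[T] = // omitted
}
\end{lstlisting}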


\subsection{Improving Iterable \label{sec:scala:iterable}}
In this section we design and implement the abstraction that underlies comprehensions. Type constructor polymorphism plays an essential role in expressing the design constraints, as well as in factoring out boilerplate code without losing type safety. More specifically, we discuss the signature and implementation of \type{Iterable}'s \code{map}, \mbox{\code{filter},} and \code{flatMap} methods. The LINQ project introduced these on the .NET platform as \code{Select}, \code{Where}, and \code{SelectMany} \cite{DBLP:conf/oopsla/Meijer06}. % the essence of data access .. does not discuss Select, etc.

These methods interpret a user-supplied function in different ways in order to derive a new collection from the elements of an existing one: \code{map} transforms the elements as specified by that function, \code{filter} interprets that function as a predicate and retains only the elements that satisfy it, and \code{flatMap} uses the given function to produce a collection of elements for every element in the original collection, and then collects the elements in these collections in the resulting collection.
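Using the standard Scala \type{List}, the three methods behave as follows (illustrative only; the signatures developed below differ slightly, as they take an extra \type{Buildable} argument):
\begin{lstlisting}[frame=no]
List(1, 2, 3).map(x => x * 2)           // List(2, 4, 6)
List(1, 2, 3).filter(x => x % 2 == 1)   // List(1, 3)
List(1, 2, 3).flatMap(x => List(x, x))  // List(1, 1, 2, 2, 3, 3)
\end{lstlisting}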

\begin{lstlisting}[float,caption=\type{Builder} and \type{Iterator},label=lst:builder]
trait Builder[Container[X], T] {
  def +=(el: T): Unit
  def finalise(): Container[T]
}

trait Iterator[T] {
  def next(): T
  def hasNext: Boolean
  
  def foreach(op: T => Unit): Unit = while(hasNext) op(next())
}
\end{lstlisting}

Thus, if we can factor out iterating over a collection as well as producing a new one, these  methods can be implemented in \type{Iterable} once and for all. Listing \ref{lst:builder} shows the well-known \type{Iterator} abstraction that encapsulates iterating over a collection, as well as our \type{Builder} abstraction, which may be thought of as its dual.

\type{Builder} abstracts over the type constructor that represents the collection that it builds, as well as over the type of the elements. The \code{+=} method is used to supply these elements in the order in which they should appear in the collection. The collection itself is returned by \code{finalise}. For example, a \type{Builder[List, Int]} can be thought of as a \type{ListBuffer[Int]}, as both can be used to create a \type{List[Int]} by supplying its elements in turn.
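For example, a generic helper that assembles a collection from its elements could be sketched as follows (\lstinline@fromElems@ is hypothetical, not part of the library):
\begin{lstlisting}[frame=no]
def fromElems[C[X], T](b: Builder[C, T])(els: T*): C[T] = {
  for (el <- els) b += el  // supply the elements in order
  b.finalise()             // obtain the finished collection
}
\end{lstlisting}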

With these abstractions in place, we turn to Listing \ref{lst:iterable}, and show how an even more flexible trio \code{mapTo}/\code{filterTo}/\code{flatMapTo} is implemented. The generalisation consists of decoupling the original collection from the produced one -- they need not be the same, as long as there is a way of building the target collection. %Using implicits, the compiler infers the right strategy based on the type of the target collection.

As iterating over a collection is orthogonal to building a collection, the \code{build} method from \type{Buildable} does not belong in \type{Iterable}. In other words, it is not necessary to be able to build a collection in order to simply iterate over its elements. However, more complex operations, such as \code{mapTo}, do require an instance of \type{Buildable[C]}. Thus, \type{Iterable}'s methods that build a collection \type{C} take an extra parameter of type \type{Buildable[C]}. Section \ref{sec:implicits} will show how an orthogonal feature of Scala can be used to relieve callers from supplying an actual argument for this parameter.

The result types of \code{map}, \code{flatMap}, and their generalisations illustrate why a \type{MyType}-based solution would not work: whereas the type of \code{this} would be \type{C[T]}, the result type of these methods is \type{C[U]}: it is the same type constructor, but applied to different type arguments!

\begin{lstlisting}[float,label=lst:iterable,caption=\type{Buildable} and \type{Iterable}]
trait Buildable[Container[X]] {
  def build[T]: Builder[Container, T]
}

trait Iterable[T] {
  type Container[X] <: Iterable[X]
  
  def elements: Iterator[T]

  def mapTo[U, C[X]](f: T => U)(b: Buildable[C]): C[U] = { 
    val buff = b.build[U]
    val elems = elements
    
    while(elems.hasNext){
      buff += f(elems.next)
    }  
    buff.finalise()
  }    
  def filterTo[C[X]](p: T => Boolean)(b: Buildable[C]): C[T] = { 
    val buff = b.build[T]
    val elems = elements
    
    while(elems.hasNext){
      val el = elems.next
      if(p(el)) buff += el
    }
    buff.finalise()
  }
  def flatMapTo[U,C[X]](f: T=>Iterable[U])(b: Buildable[C]): C[U]={ 
    val buff = b.build[U]
    val elems = elements
    
    while(elems.hasNext){
      f(elems.next).elements.foreach{ el => buff += el }
    }
    buff.finalise()
  }
  
  def map[U](f: T => U)(b: Buildable[Container]): Container[U] 
    = mapTo[U, Container](f)(b)
  def filter(p: T => Boolean)(b: Buildable[Container]):Container[T] 
    = filterTo[Container](p)(b)
  def flatMap[U](f: T => Container[U])
                (b: Buildable[Container]): Container[U] 
    = flatMapTo[U, Container](f)(b)
}
\end{lstlisting}



\begin{lstlisting}[float=p,label=lst:buildable1,caption=Building a \type{List}]
object ListIsBuildable extends Buildable[List] {
  def build[T]: Builder[List, T] = new ListBuffer[T] with Builder[List, T] {
    // += is inherited from ListBuffer (Scala standard library)
    def finalise(): List[T] = toList
  }
}
\end{lstlisting}

\begin{lstlisting}[float=p,label=lst:buildable2,caption=Building an \type{Option}]
object OptionIsBuildable extends Buildable[Option] {
  def build[T]: Builder[Option, T] = new Builder[Option, T] {
    var res: Option[T] = None
    
    def +=(el: T) = if(res.isEmpty) res = Some(el) 
       else throw new UnsupportedOperationException(">1 elements")
              
    def finalise(): Option[T] = res
  }
} 
\end{lstlisting}


Listings \ref{lst:buildable1} and \ref{lst:buildable2} show the objects that implement the \lstinline|Buildable| interface for \lstinline!List! and \lstinline!Option!. An \type{Option} corresponds to a list that contains either zero or one element, and is commonly used in Scala to avoid \code{null} values.
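For instance, given the definitions in Listing \ref{lst:buildable2}, building an \type{Option} proceeds as follows (the intermediate value \lstinline@b@ is named only for clarity):
\begin{lstlisting}[frame=no]
val b = OptionIsBuildable.build[Int]
b += 42        // res is now Some(42); a second += would throw
b.finalise()   // Some(42)
\end{lstlisting}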


Finally, a brief note on methodology: \type{Container} is a parameter in \type{Buildable} because its main characteristic is which container it builds, whereas we use a type member in \type{Iterable}, as its external clients are generally only interested in the type of its elements. Thus, the \type{Container} member is more of an internal matter in \mbox{\type{Iterable}'s} subclassing hierarchy.

  
\subsubsection{Example} 
Suppose we're developing a social networking site, and we want to know the average age of our users.
Since users do not have to enter their birthday, we set out with a \type{List[Option[Date]]}. An \type{Option[Date]} either holds a date or nothing. Listing \ref{lst:iterex} shows how to proceed.

First, we introduce a small helper that computes the current age in years from a date of birth. To collect the known ages, we transform an optional date to an optional age using \code{map} and then collect the results into a list using \code{flatMapTo}.
Note that we use the more general \code{flatMapTo}. If we had used \code{flatMap}, the inner \code{map} would have had to convert its result from an \type{Option} to a \type{List}, as \code{flatMap(f)} returns its results in the same kind of container as produced by the function \code{f} (the inner \code{map}).
Finally, we aggregate the results using \code{reduceLeft}. The full code of the example is available on the paper's homepage\footnote{\url{http://www.cs.kuleuven.be/~adriaan/?q=genericshk}}. 
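For comparison, the same computation written with \code{flatMap} could be sketched as follows; the inner step must produce a \type{List} (here via \code{mapTo}), since \code{flatMap}'s function must return the same kind of container as the result:
\begin{lstlisting}[frame=no]
val ages: List[Int] = bdays.flatMap{ optBd =>
  optBd.mapTo[Int, List]{d => toYrs(d)}(ListIsBuildable)
}(ListIsBuildable)
\end{lstlisting}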

Note that the Scala compiler infers most proper types (we added some annotations to aid understanding), but it does not infer type constructor arguments. Thus, type argument lists that contain type constructors must be supplied manually.

\begin{lstlisting}[float=p,label=lst:iterex,caption=Example: using \type{Iterable}]
val bdays: List[Option[Date]] = List(
  Some(new Date("1981/08/07")), None, Some(new Date("1990/04/10")))
def toYrs(bd: Date): Int = // omitted

val ages: List[Int] = bdays.flatMapTo[Int, List]{ optBd => 
                        optBd.map{d => toYrs(d)}(OptionIsBuildable)
                      }(ListIsBuildable)

val avgAge = ages.reduceLeft[Int](_ + _) / ages.length
\end{lstlisting}

Finally, the only type constructor that must be written explicitly in the example is the \type{List} type argument, as it cannot be inferred. This demonstrates that the complexity of type constructor polymorphism, much like with genericity, is concentrated in the internals of the library. The upside is that library designers and implementers have more control over the interfaces of the library, while clients remain blissfully ignorant of the underlying complexity. Furthermore, the next section discusses how the arguments \code{OptionIsBuildable} and \code{ListIsBuildable} can be omitted.

\section{Leveraging Scala's implicits \label{sec:implicits}}

Since there generally is only one instance of \type{Buildable[C]} for
a particular type constructor \type{C}, it becomes quite tedious to
supply it as an argument whenever calling one of \type{Iterable}'s
methods that requires it.

Fortunately, Scala's implicits
\cite{odersky06:pmtc,odersky:scala-reference} can be used to shift
this burden to the compiler. It suffices to add the \code{implicit}
keyword to the parameter list that contains the \code{b: Buildable[C]}
parameter, and to the \type{XXXIsBuildable} objects. With this change,
which is sketched in Listing \ref{lst:implicits}, callers (such as in
the example of Listing \ref{lst:iterex}) typically do not need to
supply this argument.

\begin{lstlisting}[float=p,label=lst:implicits,caption=Snippet: leveraging implicits in \type{Iterable}]
trait Iterable[T] {
  def map[U](f: T => U)
            (implicit b: Buildable[Container]): Container[U] 
    = mapTo[U, Container](f) // no need to pass b explicitly    
  // similar for other methods
}

implicit object ListIsBuildable extends Buildable[List] { ... }
implicit object OptionIsBuildable extends Buildable[Option] { ... }

// client code (previous example, using succinct function syntax):
val ages: List[Int] = bdays.flatMapTo[Int, List]{_.map{toYrs(_)}}
\end{lstlisting}

Implicits are one of the major innovations in Scala. They have been
implemented since version 1.4 of the language, but have not yet
been described in a conference or journal publication. One aspect of
implicits is that they can encode type classes, as they are found in
Haskell. The ``overhead'' in language concepts to do this encoding is
very small, because implicits make use of the standard object-oriented
machinery whereas Haskell introduces more specialised concepts.  In a
C++ context, implicits resemble Siek and
Lumsdaine's ``concepts'' \cite{DBLP:conf/oopsla/GregorJSSRL06}.

With the introduction of type constructor polymorphism, our encoding of type classes is
extended to constructor classes, such as \type{Monad}, as discussed in Section
\ref{sec:bounds:tpclass}. Moreover, our encoding exceeds the original because we
integrate type constructor polymorphism with subtyping, so that we can abstract over
bounds. This would correspond to abstracting over type class contexts, which is not
supported in Haskell
\cite{hughes99:restricted,jones94:setmonad,kidd07:setmonad,chak07:classfamilies}.
Section \ref{sec:bounds:tpclass} discusses this in more detail.

Listing \ref{lst:monoid} introduces implicits by way of a simple example. It defines an
abstract class of monoids and two concrete implementations, \code{StringMonoid} and
\code{IntMonoid}. The two implementations are marked with an \lstinline@implicit@
modifier.

The principal idea behind implicit parameters is that their arguments
can be left out of a method call. If the arguments corresponding to an
implicit parameter section are missing, they are inferred by the Scala compiler.

Listing \ref{lst:monoid:sum} implements a \lstinline@sum@ method, which works
over arbitrary monoids. \lstinline@sum@'s second parameter is marked
\lstinline@implicit@. Note that \code{sum}'s recursive call does not need to pass along the \code{m} implicit argument.


\begin{lstlisting}[caption=Using implicits to model monoids,label=lst:monoid,float]
abstract class Monoid[T] {
  def add(x: T, y: T): T
  def unit: T
}

object Monoids {
  implicit object stringMonoid extends Monoid[String] {
    def add(x: String, y: String): String = x.concat(y)
    def unit: String = ""
  }
  implicit object intMonoid extends Monoid[Int] {
    def add(x: Int, y: Int): Int = x + y
    def unit: Int = 0
  }
}
\end{lstlisting}

\begin{lstlisting}[caption=Summing lists over arbitrary monoids,label=lst:monoid:sum,float]
def sum[T](xs: List[T])(implicit m: Monoid[T]): T 
  = if (xs.isEmpty) m.unit else m.add(xs.head, sum(xs.tail))
\end{lstlisting}

The actual arguments that are eligible to be passed to an implicit
parameter include all identifiers that are marked \code{implicit} and that can be accessed at the point
of the method call without a prefix. For instance, we can open up the scope of the
\lstinline@Monoids@ object with an import statement such as ~\mbox{\lstinline!import Monoids._!}. This makes the two implicit definitions \lstinline@stringMonoid@ and \lstinline@intMonoid@
eligible to be passed as implicit arguments, so that we can write:

\begin{lstlisting}[frame=no]
sum(List("a", "bc", "def"))
sum(List(1, 2, 3))
\end{lstlisting}
These applications of \lstinline@sum@ are equivalent to the following two applications, 
where the formerly implicit argument is now given explicitly.
\begin{lstlisting}[frame=no]
sum(List("a", "bc", "def"))(stringMonoid)
sum(List(1, 2, 3))(intMonoid)
\end{lstlisting}

If there are several eligible arguments which match an implicit
parameter's type, a most specific one will be chosen using the
standard rules of Scala's static overloading resolution. If there is
no unique most specific eligible implicit definition, the 
call is ambiguous and will result in a static error.
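The ambiguity rule can be illustrated with a deliberately conflicting sketch. The names \code{addMonoid} and \code{mulMonoid} are hypothetical; \code{Monoid} and \code{sum} follow Listings \ref{lst:monoid} and \ref{lst:monoid:sum}:

```scala
object Ambiguity {
  abstract class Monoid[T] {
    def add(x: T, y: T): T
    def unit: T
  }
  // Two eligible Monoid[Int] instances, neither more specific than the other
  implicit object addMonoid extends Monoid[Int] {
    def add(x: Int, y: Int): Int = x + y
    def unit: Int = 0
  }
  implicit object mulMonoid extends Monoid[Int] {
    def add(x: Int, y: Int): Int = x * y
    def unit: Int = 1
  }
  def sum[T](xs: List[T])(implicit m: Monoid[T]): T =
    if (xs.isEmpty) m.unit else m.add(xs.head, sum(xs.tail))

  // sum(List(1, 2, 3))  // rejected at compile time: ambiguous implicits
  val product = sum(List(1, 2, 3))(mulMonoid) // explicit argument resolves it
}
```

Passing the argument explicitly always remains possible, and is the standard way to disambiguate.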

\subsection{Implicit parameters generalise subtype bounds} Adding a parameter list to a method clearly constrains how that method may be called. For example, since \code{mapTo} declares a parameter of type \type{Buildable[C]}, it can only be called when an argument of type \type{Buildable[C]} can be supplied. Making this parameter list implicit shifts this burden to the compiler, but the constraint remains.

Similarly, subtype bounds on a method's type parameters constrain how this method may be called, as the actual type arguments supplied by the client must meet the declared bounds. Conceptually, this may be thought of as an implicit parameter that represents the coercion that corresponds to the subtype bound. 

Thus, both mechanisms can be used to restrict method calls. In both cases, the compiler carries the burden of providing the witness to that constraint. However, implicit parameters are more general than subtype bounds in two ways.

First, the programmer has access to the value that witnesses the constraint. With subtyping, a bound on a type parameter \type{T} simply results in more information being available on values of type \type{T}. To use that information, a value of type \type{T} is needed. 

In the case of type constructor parameters, such as \type{C} in \code{mapTo}, it is less useful to gain more information about values of type \type{C[T]} for some type \type{T}.  More concretely, suppose we used a subtype constraint \type{C[X] <: Buildable[C]} instead of an implicit parameter, so that, for any \type{T}, a \type{C[T]} is also a \mbox{\type{Buildable[C]}}. This results in a Catch-22: we need an instance of \type{Buildable[C]} to create an object of type \type{C[T]}, but the only way to acquire a \type{Buildable[C]} is to coerce a value of type \type{C[T]}. Moreover, the type \type{T} needs to be specified even though it is irrelevant for the \type{Buildable} abstraction.

%To make the consequences of using the bound \type{C[X] <: Buildable[C]} more concrete: this would require that \type{List[T]} extend \type{Buildable[List]}. However, this does not make sense, as \type{Buildable[List]}'s \code{build} method must then be called on a \type{List[T]}, even though its purpose is precisely to create such instances.

Using an implicit parameter instead of a bound solves both problems, without increasing the burden on clients of the abstraction. The compiler automatically supplies the right instance of \type{Buildable[C]}, without the need to specify a type \type{T} or acquire a value of type \type{C[T]}.

Second, implicit parameters can constrain other abstract types besides the method's own type parameters. As an example of such a generalised constraint \cite{DBLP:conf/ecoop/EmirKRY06}, the \code{map}/\code{filter}/\code{flatMap} methods all require an implicit parameter of type \type{Buildable[Container]}. This effectively constrains the \type{Container} type constructor member, which is not possible using ordinary subtype bounds.
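As a hedged sketch of such a generalised constraint, consider a simplified \type{Buildable} with a hypothetical \code{fromList} method (the real interface uses \code{build} and \type{Builder}). The implicit parameter of \code{map} constrains the abstract type member \type{Container}, which no ordinary subtype bound could do:

```scala
object GeneralisedConstraint {
  // simplified Buildable: knows how to construct a C from a list of elements
  trait Buildable[C[X]] {
    def fromList[T](elems: List[T]): C[T]
  }
  implicit object listIsBuildable extends Buildable[List] {
    def fromList[T](elems: List[T]): List[T] = elems
  }

  trait Iterable[T] {
    type Container[X]
    def elems: List[T]
    // the implicit parameter constrains the abstract member Container:
    // map can only be called when a Buildable[Container] is available
    def map[U](f: T => U)(implicit b: Buildable[Container]): Container[U] =
      b.fromList(elems map f)
  }

  class IntList(val elems: List[Int]) extends Iterable[Int] {
    type Container[X] = List[X]
  }

  val doubled = new IntList(List(1, 2, 3)).map(_ * 2)
}
```

At the call site, \type{Container} is known to equal \type{List}, so the compiler supplies \code{listIsBuildable}.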


%\TODO{novel approach to the builder pattern, makes it possible to implement a generalised \code{sequence}}


\subsection{Encoding Haskell's type classes with implicits \label{sec:tpclass}}
Haskell's type classes have grown from a simple mechanism that deals with overloading \cite{DBLP:conf/popl/WadlerB89}, to an important tool in dealing with the challenges of modern software engineering. Its success has prompted others to explore similar features in Java \cite{DBLP:conf/ecoop/WehrLT07}.


\lstset{language=Haskell}

\begin{lstlisting}[float,caption=Using type classes to overload \code{<=} in Haskell,label=lst:ord:haskell,language=haskell]
class  Ord a  where
  (<=) :: a -> a -> Bool
  
instance Ord Date where
  (<=)     = ...
  
max     :: Ord a => a -> a -> a
max x y = if x <= y then y else x
\end{lstlisting}

\subsubsection{An example in Haskell} Listing \ref{lst:ord:haskell} defines a simplified version of the well-known  \type{Ord} type class. Essentially, this definition says that if a type \type{a} is in the \type{Ord} \emph{type class}, the function \code{<=} with type \lstinline!a -> a -> Bool! is available. The \emph{instance declaration} \lstinline!instance Ord Date! gives a concrete implementation of the \code{<=} operation on \type{Date}s and thus adds \type{Date} as an \emph{instance} to the \type{Ord} type class. To constrain an abstract type to instances of a type class, \emph{contexts} are employed. For example, \code{max}'s signature constrains \type{a} to be an instance of \type{Ord} using the context \code{Ord a}, which is separated from the function's type by a \code{=>}.

Conceptually, a context that constrains a type \type{a} is translated into an extra parameter that supplies the implementations of the type class's methods, packaged in a so-called ``method dictionary''. An instance declaration specifies the contents of the method dictionary for this particular type.


\lstset{language=Scala}


\begin{lstlisting}[float,caption=Encoding type classes using Scala's implicits,label=lst:ord:scala]
trait Ord[T] {
  def <= (other: T): Boolean
}

import java.util.Date

implicit def dateAsOrd(self: Date) = new Ord[Date] {
  def <= (other: Date) = self.equals(other) || self.before(other)
}

def max[T <% Ord[T]](x: T, y: T): T = if(x <= y) y else x
\end{lstlisting}

\subsubsection{Encoding the example in Scala} It is natural to turn a type class into a class, as shown in Listing \ref{lst:ord:scala}. Thus, an instance of that class corresponds to a method dictionary, as it supplies the actual implementations of the methods declared in the class. The instance declaration \lstinline[language=Haskell]!instance Ord Date!  is translated into an implicit method that converts a \type{Date} into an \type{Ord[Date]}. An object of type \type{Ord[Date]} encodes the method dictionary of the \type{Ord} type class for the instance \type{Date}. 

Because of Scala's object-oriented nature, the creation of method dictionaries is driven by member selection. Whereas the Haskell compiler selects the right method dictionary fully automatically, in Scala this process is triggered by calling a method that is missing from an object's type, but provided by a type class of which that type is an instance. When a type class method, such as \code{<=}, is selected on a type \type{T} that does not define it, the compiler searches for an implicit value that converts a value of type \type{T} into a value that does support this method. In this case, the implicit method \code{dateAsOrd} is selected when \type{T} equals \type{Date}.
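A minimal self-contained sketch of this mechanism, reusing the definitions of Listing \ref{lst:ord:scala} (the wrapping object and the \code{earlier} value are ours):

```scala
object MemberSelection {
  import java.util.Date

  trait Ord[T] { def <= (other: T): Boolean }

  implicit def dateAsOrd(self: Date): Ord[Date] = new Ord[Date] {
    def <= (other: Date) = self.equals(other) || self.before(other)
  }

  // Date itself has no <= method, so the compiler inserts dateAsOrd(...)
  // around the receiver, turning it into an Ord[Date]
  val earlier = new Date(0) <= new Date(1000)
}
```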

Note that Scala's scoping rules for implicits differ from Haskell's. Briefly, the search for an implicit is performed locally in the scope of the method call that triggered it, whereas this is a global process in Haskell.

Contexts are another trigger for selecting method dictionaries. The \code{Ord a} context of the \code{max} method is encoded as a view bound \lstinline!T <% Ord[T]!,
 which is syntactic sugar for an implicit parameter that converts the bounded type to its view bound. Thus, when the \code{max} method is called, the compiler must find the appropriate implicit conversion. Listing \ref{lst:max1} removes this syntactic sugar, and Listing \ref{lst:max2} goes even further and makes the implicits explicit. Clients would then have to supply the implicit conversion explicitly: \code{max(dateA, dateB)(dateAsOrd)}.


\begin{lstlisting}[float,label=lst:max1,caption=Desugaring view bounds]
def max[T](x: T, y: T)(implicit conv: T => Ord[T]): T 
  = if(x <= y) y else x
\end{lstlisting}

\begin{lstlisting}[float,label=lst:max2,caption=Making implicits explicit]
def max[T](x: T, y: T)(c: T => Ord[T]): T = if(c(x).<=(y)) y else x
\end{lstlisting}

\subsubsection{Conditional implicits}
By defining implicit methods that themselves take implicit parameters, we can encode Haskell's conditional instance declarations. For example:

\begin{lstlisting}[language=haskell,frame=no]
instance Ord a => Ord (List a) where
  (<=)     = ...
\end{lstlisting}
This is encoded in Scala as:
\begin{lstlisting}[frame=no]
implicit def listAsOrd[T](self: List[T])(implicit v: T => Ord[T]) = 
  new Ord[List[T]] {
    def <= (other: List[T]) = // compare elements in self and other
  }
\end{lstlisting}
Thus, two lists with elements of type \type{T} can be compared as long as their elements are comparable. To ensure that the compiler's search for implicit arguments terminates, the Scala Language Specification defines a contractiveness check for implicit methods \cite{odersky:scala-reference}.
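A hedged, self-contained sketch of such a conditional instance in action, with a hypothetical lexicographic implementation standing in for the elided comparison:

```scala
object CondImplicits {
  trait Ord[T] { def <= (other: T): Boolean }

  implicit def intAsOrd(self: Int): Ord[Int] = new Ord[Int] {
    def <= (other: Int) = self <= other // primitive Int comparison
  }

  // conditional instance: List[T] is comparable whenever T is
  implicit def listAsOrd[T](self: List[T])(implicit v: T => Ord[T]): Ord[List[T]] =
    new Ord[List[T]] {
      def <= (other: List[T]): Boolean = (self, other) match {
        case (Nil, _)           => true
        case (_, Nil)           => false
        case (x :: xs, y :: ys) =>
          if (!(v(x) <= y)) false      // head x > head y
          else if (!(v(y) <= x)) true  // head x < head y
          else listAsOrd(xs)(v) <= ys  // heads equal: compare the tails
      }
    }

  val cmp = listAsOrd(List(1, 2)) <= List(1, 3)
}
```

The implicit argument for \code{v} is supplied by eta-expanding \code{intAsOrd}, so lists of \type{Int}s become comparable without any further declarations.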

Type classes and implicits both provide ad-hoc polymorphism. Like parametric
polymorphism, this allows methods or classes to be applicable to
arbitrary types. However, parametric polymorphism implies that a
method or a class is truly indifferent to the actual argument of its
type parameter, whereas ad-hoc polymorphism maintains this illusion by
selecting different methods or classes for different actual type
arguments.

This ad-hoc nature of type classes and implicits can be seen as a
retro-active extension mechanism. In OOP, virtual classes
\cite{ernst99b} have been proposed as an alternative that is better
suited to retro-active extension. However, ad-hoc polymorphism also
allows types to drive the selection of functionality as demonstrated
by the selection of (implicit) instances of \type{Buildable[C]} in our
\type{Iterable} example\footnote{Java's static overloading mechanism
  is another example of ad-hoc polymorphism.}. \type{Buildable}
clearly could not be truly polymorphic in its parameter, as that would
imply that there could be one \type{Buildable} that knew how to supply
a strategy for building any type of container.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Bounds for Type Constructors \label{sec:bounds}}
Subtyping and parameterisation are both important sources of polymorphism. We discuss how they interact and how this impacts our model of kinds. To illustrate the advantages of integrating subtyping and type constructors, we extend \type{Iterable} to deal with bounded collections. In this regard, Scala's type system is more expressive than Haskell's: it is not possible to abstract over type class contexts in Haskell, so that \type{Set} (a bounded collection) cannot be made into a \type{Monad} (an \type{Iterable}) \cite{hughes99:restricted,chak07:classfamilies}. 


\subsection{Tracking bounds in kinds}
Given the definition \code{class NumericList[T <: Number]}, \lstinline!NumericList! must not simply be classified as \lstinline!* -> *!. Otherwise, the kinding rule for type applications would consider \type{NumericList[String]} well-kinded, which clearly is not the case.

Thus, we must make information about the bounds on type parameters  available to the kinding system. We enrich the definition of \lstinline!*! so that it includes the lower and upper bounds that must be satisfied by the types that it classifies:

$$
\ottnt{K} ::= \quad \ottkw{*(} \;  T \; \ottkw{,} \; T \; \ottkw{)}   \quad | \quad 
              \ottnt{K} \; \rightarrow \; \ottnt{K}
$$

This improvement neatly incorporates bounded types into our three-level model of classification.
As \type{Any} represents the top of Scala's subtype lattice, and \type{Nothing} is the bottom type, we recover \kind{*} as \kind{*(Nothing, Any)}, but we will keep using it as a shorthand. Since we predominantly use upper bounds, we abbreviate \kind{*(Nothing, T)} to \kind{*(T)}. 
Thus, \type{NumericList} has kind \kind{*(Number) -> *}.

We further adapt our kinding system to assign the kind \kind{*(T, T)} to a type \type{T} that previously simply received kind \kind{*}, so that \type{String} has kind \kind{*(String, String)}. With these improvements, the illegal application \mbox{\type{NumericList[String]}} is ruled out. 

Unfortunately, even though \code{Int <: Number}, \type{NumericList[Int]} is not well-kinded in this system, as \kind{*(Int, Int)} cannot be related to \kind{*(Number)}. To achieve this, we introduce subkinding and kind subsumption.

\subsection{Subkinding}
A kind \kind{*(S, T)} is a subkind of \kind{*(S', T')} if \type{S' <: S} and \type{T <: T'}, which corresponds to the usual notion of interval inclusion. The \lstinline|->| kind constructor behaves like the type constructor for function types: it is contravariant in the kind of the parameter, and covariant in the kind of the result.
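In inference-rule form (writing ${<::}$ for the subkind relation, a notation we use only here), these two rules read:

$$
\frac{S' <: S \qquad T <: T'}
     {\ottkw{*(} \; S \; \ottkw{,} \; T \; \ottkw{)} \;\; {<::} \;\; \ottkw{*(} \; S' \; \ottkw{,} \; T' \; \ottkw{)}}
\qquad\qquad
\frac{K_1' \; {<::} \; K_1 \qquad K_2 \; {<::} \; K_2'}
     {K_1 \rightarrow K_2 \;\; {<::} \;\; K_1' \rightarrow K_2'}
$$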

We also allow kind subsumption. That is, if \type{T: K} and \kind{K} is a subkind
of \kind{K'}, then \type{T: K'}. For instance,  \type{Int} : \kind{*(Int, Int)} implies \type{Int} : \kind{*(Nothing, Number)}. Thus, \type{NumericList[Int]} is well-kinded.
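Concretely, taking \type{Number} to be \code{java.lang.Number}, kind subsumption plays out as follows (a sketch; the kind annotations appear as comments):

```scala
object KindSubsumption {
  class NumericList[T <: java.lang.Number]   // kind *(Number) -> *

  // java.lang.Integer : *(Integer, Integer), which is a subkind of *(Nothing, Number),
  // so the application is well-kinded
  val ok = new NumericList[java.lang.Integer]

  // new NumericList[String]  // rejected: *(String, String) is not a subkind of *(Number)
}
```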


\subsection{Bounded Iterable \label{sec:boundediter}}
Given \type{Iterable}'s definition in Listing \ref{lst:iterable}, it is illegal to subclass it as in Listing \ref{lst:numlistmem}. If this were allowed, \type{Iterable}'s abstract type member \type{Container} would forget the bound on \type{NumericList}, so that a client could use the method \code{map[String]} on a \code{NumericList} to coax it into producing a \type{NumericList[String]}. 
 
The kinding rules that we introduced earlier suffice to detect this problem. To make them easier to apply, Listing \ref{lst:numlistparam} rephrases the problem using only type parameters (this equivalence is discussed in Section \ref{sec:embedding}).

\begin{lstlisting}[float,label=lst:numlistmem,caption=\type{NumericList}: an illegal subclass of \type{Iterable}]
trait NumericList[T <: Number] extends Iterable[T] {
  type Container[X <: Number] = NumericList[X]
}  
\end{lstlisting}

\begin{lstlisting}[float,label=lst:numlistparam,caption=Rephrasing \type{NumericList} with type parameters]
trait Iterable[T, Container[X]]

trait NumericList[T <: Number] extends Iterable[T, NumericList]
\end{lstlisting}

The type application \type{Iterable[T, NumericList]} is not well-kinded because \type{Iterable} declares the kind \kind{* -> *} for the \type{Container} parameter, whereas \type{NumericList} has kind \kind{*(Number) -> *}, which does not conform to \kind{* -> *}.

\begin{lstlisting}[caption=Essential changes to extend Iterable with support for bounds,label=lst:boundediter,float]
trait Builder[Container[X <: B], T <: B, B]
trait Buildable[Container[X <: B], B] {
  def build[T <: B]: Builder[Container, T, B]
}
trait Iterable[T <: Bound, Bound] {
  type Container[X <: Bound] <: Iterable[X, Bound]

  def map[U <: Bound](f: T => U)
           (implicit b: Buildable[Container, Bound]): Container[U]
}
\end{lstlisting}

To allow subclasses of \type{Iterable} to declare a bound on the type of elements they accept, \type{Iterable} must abstract over this bound. Listing \ref{lst:boundediter} generalises the interface of the original \type{Iterable} from Listing \ref{lst:iterable}. The implementation is not affected by this change. \type{NumericList} can now be defined as shown in Listing \ref{lst:numlistbound}.

\begin{lstlisting}[float,label=lst:numlistbound,caption=Safely subclassing \type{Iterable}]
trait NumericList[T <: Number] extends Iterable[T, Number] {
  type Container[X <: Number] = NumericList[X]
}  
\end{lstlisting}

Again, the client of the collections API is not exposed to the relative complexity of Listing \ref{lst:boundediter}.  However, without it, a significant fraction of the collection classes could not be unified under the same \type{Iterable} abstraction. Thus, the clients of the library benefit, as a unified interface means that they need to learn fewer concepts.


\subsection{Exceeding type classes \label{sec:bounds:tpclass}}
\begin{lstlisting}[float,caption=\type{Set} cannot be made into a \type{Monad} in Haskell,label=lst:setmonad:haskell,language=haskell]
class Monad m where
  (>>=) :: m a -> (a -> m b) -> m b
  
data (Ord a) => Set a = ...

instance Monad Set where
 -- (>>=) :: Set a -> (a -> Set b) -> Set b
\end{lstlisting}

As shown fragmentarily in Listing \ref{lst:setmonad:haskell}, Haskell's \type{Monad} abstraction \cite{DBLP:conf/afp/Wadler95} does not apply to type constructors with a constrained type parameter, such as \mbox{\type{Set}.} The reason is similar to why the \type{NumericList} from the previous section could not be made into a subclass of \type{Iterable}, unless the latter accommodates bounds from the start. Resolving this issue in Haskell is an active research topic \cite{chak07:classfamilies,DBLP:conf/popl/ChakravartyKJM05,hughes99:restricted}.

In this example, the \type{Monad} abstraction\footnote{In fact, the main difference between our \type{Iterable} and Haskell's \type{Monad} is spelling.} does not accommodate constraints on the type parameter of the \type{m} type constructor that it abstracts over. Since \type{Set} is a type constructor that constrains its type parameter, it is not a valid argument for \type{Monad}'s \type{m} type parameter: \type{m a} is allowed for any type \type{a}, whereas \type{Set a} is only allowed if \type{a} is an instance of the \type{Ord} type class. Thus, passing \type{Set} as \type{m} could lead to violating this constraint.

\begin{lstlisting}[float,caption=\type{Monad} in Scala,label=lst:monad:scala]
trait Monad[A, M[X]] {
  def >>= [B](f: A => M[B]): M[B] 
}
\end{lstlisting}

\begin{lstlisting}[float,caption=\type{Set} as a \type{BoundedMonad} in Scala,label=lst:setmonad:scala]
trait BoundedMonad[A <: Bound[A], M[X <: Bound[X]], Bound[X]] {
  def >>= [B <: Bound[B]](f: A => M[B]): M[B] 
}

trait Set[T <: Ord[T]]

implicit def SetIsBoundedMonad[T <: Ord[T]](s: Set[T])
                : BoundedMonad[T, Set, Ord] = ...
\end{lstlisting}

For reference, Listing \ref{lst:monad:scala} shows a direct encoding of the \type{Monad} type class. To solve the problem in Scala, we then generalise \type{Monad} to \type{BoundedMonad} in Listing \ref{lst:setmonad:scala} to deal with bounded type constructors. Finally, implicits are used to retro-actively turn \type{Set} into a \type{BoundedMonad}, as explained in Section \ref{sec:tpclass}.

\comment{Note that a more faithful encoding would use view bounds instead of subtype bounds, but this is not yet accepted by the Scala compiler. The problem is that the compiler has to detect termination of implicit
applications, which is complicated by type constructors. In practice, this limitation is not often encountered, and if it is, it can be worked around by using subtype constraints instead of view bounds. Nonetheless, we hope to relax this restriction in the future.}



%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Embedding in an Object-Oriented Language \label{sec:embedding}}
Scala supports two styles of abstraction: the functional style uses parameterisation, whereas abstract members represent the object-oriented way. Both styles have different strengths  \cite{DBLP:conf/ecoop/BruceOW98,DBLP:conf/ecoop/ThorupT99,DBLP:conf/ecoop/Ernst01}, and the examples in the previous sections have employed them in a way that is idiomatic to Scala.

It is natural to ask how these styles relate, and whether one style could be used exclusively. To answer this, we show how type constructor polymorphism can be encoded using abstract type members and abstract type member refinement, which are purely object-oriented mechanisms. 

Although the encoding works for correct programs, it does not preserve the safety properties of type constructor polymorphism. Certain illegal type applications are encoded as valid Scala programs, which delays the detection of these errors.

By analogy to ordinary soundness, which ensures that a well-typed program does not contain function applications that pass the wrong type of arguments at run time, we use the term ``kind soundness'' for the property that well-kinded type applications never result in vacuous types. Recently, we developed a purely object-oriented formalism that possesses this property  \cite{moors08:scalina}. 


\subsection{Encoding type constructors using abstract type members \label{sec:embedding:encoding}}
At first sight, Scala's abstract type members closely correspond to type parameters, and abstract type member refinement (a restricted form of mixin composition) is the object-oriented counterpart of type application. Abstract type member refinement allows abstract type members to be overridden with concrete ones. The rules for application and abstraction carry over straightforwardly to this approach. In fact, via this correspondence, Scala already provided preliminary support for type constructor polymorphism before our extension. Listing \ref{lst:iter:oo} illustrates this encoding using our running example.


\begin{lstlisting}[float,caption=Encoding the \type{Container} type parameter as an abstract member,label=lst:iter:oo ]
trait TypeFunction1 { type A }

trait Iterable extends TypeFunction1 {
  type Container <: TypeFunction1

  def map[B](f: A => B): Container{type A = B}
}

trait List extends Iterable { type Container = List }
\end{lstlisting}



\subsection{Kind soundness \label{sec:embedding:kindsoundness}}
Listing \ref{lst:numericlis:oo} illustrates the limitations of the encoding. As discussed in Section \ref{sec:boundediter}, \type{NumericList} is not a valid subclass of an \type{Iterable} that does not accommodate bounds. Thus, it must be rejected by the compiler, as the inherited \code{map} method's result type may apply \type{NumericList} to any type, but \type{NumericList} only accepts subtypes of \type{Number} as its type argument. 


\begin{lstlisting}[float,caption=\type{NumericList} eludes the type checker,label=lst:numericlis:oo ]
trait NumericList extends Iterable {
  type A <: Number
  type Container = NumericList // Incorrect, but no error reported!
}
\end{lstlisting}

The Scala compiler silently accepts this program, even though we could never complete its implementation (at some point we will have to instantiate a \type{NumericList} for an arbitrary type of elements, and the compiler will catch our mistake).

Note that this indulgence does not imply \emph{type} unsoundness, as these erroneous types cannot be instantiated. Nonetheless, we regard it as a shortcoming of the compiler that these degenerate types are allowed to slip by unnoticed. Even though they are prevented from being instantiated, they could be unmasked earlier.

To motivate this desire for early detection of these inconsistencies, consider the analogy with abstract classes. Suppose classes were allowed to be abstract implicitly, so that accidental abstract classes would not be discovered until a client attempts to instantiate them. Most languages consider this situation undesirable and require an abstract class to be marked as such explicitly. This eliminates the possibility that the programmer simply forgot to implement a method.

Not detecting erroneous type applications, which manifest themselves as intersection types that unexpectedly do not have any instances, has the same effect as allowing any class to be abstract implicitly: eventually, the error is detected, but it could have been signalled earlier. Even though other uses of intersection types might sensibly result in empty types, we do not consider this to be one of them.

This kind unsoundness has its roots in the \nuObj~calculus \cite{DBLP:conf/ecoop/OderskyCRZ03}, which allows abstract type members to be refined \emph{covariantly}. In this case, this means that \lstinline!NumericList <: TypeFunction1! holds, whereas the corresponding subkinding judgement \lstinline!*(Number) -> * <: * -> *! does not hold.

Related work seems to deviate from \nuObj's design, although making a precise comparison is complicated by the differences in features supported by the various approaches. In the notation of Cardelli \cite{DBLP:conf/edbt/Cardelli88}, the two main types are classified as follows:

\begin{lstlisting}
NumericList : ALL[A::POWER[Number]] TYPE
TypeFunction1 : ALL[X::TYPE] TYPE
\end{lstlisting}

Cardelli does not define subkinding for these kinds, but does define subtyping for polymorphic functions (``All[X::K]B \textless{}: All[X::K']B' if K'\textless{}::K (where \textless{}:: denotes a subkind relation[, \ldots{}]), and B\textless{}:B' under the assumption that X::K'''). It seems reasonable to lift this rule (which deals with functions that take a type to yield a \emph{value}) to the level of kinds, which results in our rule that deals with functions that take a type to yield a \emph{type}.

Similarly, in the notation of Compagnoni and Goguen \cite{DBLP:journals/iandc/CompagnoniG03}:

\begin{lstlisting}
NumericList : Pi A <: Number : *. *
TypeFunction1 : Pi X <: T* : *. *
\end{lstlisting}

Although the authors require these bounds to be equal for the kinds to be comparable (their treatment does not include subkinding), we generalise based on the same observation as in the previous paragraph, using a slightly different source of inspiration: Full System F$_{<:}$'s \cite{DBLP:journals/iandc/CardelliMMS94} rule for bounded quantification at the value level (Sub Forall) also requires \emph{contravariance} for the bounds of the quantifier.

We recover early error detection in Scalina \cite{moors08:scalina}, a purely object-oriented calculus, by differentiating covariant and contravariant members, instead of assuming they all behave covariantly. This distinction corresponds to the fact that some members abstract over input, whereas others represent the output of the abstraction. Input members should behave contravariantly, like the types of a method's parameters, whereas covariance is required for output members, which correspond to a method's result type. With this distinction, a purely object-oriented calculus can encode functional-style abstraction with the same safety guarantees.


\subsection{Dependencies}
Like other members, type members are conceptually selected on values. However, for soundness and decidability, the set of values on which an abstract type member may be selected is restricted to \emph{paths} \cite{DBLP:conf/ecoop/OderskyCRZ03}. Roughly, a path is an immutable value. Formally, type members are selected on types, which may embed paths by way of singleton types, a lightweight form of dependent types. Abstract type members may only be selected on singleton types. Thus, even though we distinguish three levels (values, types, and kinds), they are not strictly separated. Scala has always supported path-dependent types, and now kinds may also depend on types.

It is important to note that these dependencies are quite restricted. Types may only depend on paths, immutable values for which a simple --- statically decidable --- notion of equality is defined. This design seems like a good trade-off between a fully dependently typed language (such as Cayenne \cite{DBLP:conf/afp/Augustsson98} or Epigram \cite{DBLP:conf/afp/McBride04}) and a language that maintains a strict phase separation (such as Haskell \cite{DBLP:journals/sigplan/HudakPWBFFGHHJKNPP92} and \OmegaLang). Interestingly, singleton types are considered an important pattern in \OmegaLang~\cite{citeulike:975433}.



%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Type Constructors and Variance \label{sec:variance}}

Another facet of the interaction between subtyping and type constructors is seen in Scala's support for definition-site variance annotations \cite{DBLP:conf/ecoop/EmirKRY06}. Essentially, variance annotations provide the information required to decide subtyping of types that result from applying the same type constructor to different types.

As the classical example, consider the definition of the class of immutable lists, \lstinline!class List[+T]!. The \lstinline!+! before \lstinline!List!'s type parameter denotes that \lstinline!List[T]! is a subtype of \lstinline!List[U]! if \lstinline!T! is a subtype of \lstinline!U!. We say that \lstinline!+! introduces a covariant type parameter, \lstinline!-! denotes contravariance (the subtyping relation between the type arguments is the inverse of the resulting relation between the constructed types), and the lack of an annotation means that these type arguments must be identical.
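The two annotations can be demonstrated with a minimal sketch; the class and value names are hypothetical, and \type{Function1}'s contravariant parameter illustrates the \lstinline!-! annotation:

```scala
object VarianceDemo {
  class Animal
  class Dog extends Animal

  // List[+T]: covariance lets a List[Dog] be used where a List[Animal] is expected
  val dogs: List[Dog] = List(new Dog)
  val animals: List[Animal] = dogs

  // Function1[-S, +R]: contravariance in the argument works the other way around:
  // a function accepting any Animal also accepts any Dog
  val feedAnimal: Animal => String = _ => "fed"
  val feedDog: Dog => String = feedAnimal

  val result = feedDog(new Dog)
}
```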

Variance annotations pose the same kind of challenge to our model of kinds as bounded type parameters did: our kinds must encompass them, as they represent information that should not be glossed over when passing around type constructors. The same strategy as for including bounds in \lstinline!*! can be applied here, except that variance is a property of type \emph{constructors}, so we should track it in \lstinline!->!.

Without going into too much detail, we illustrate the need for variance annotations on higher-order type parameters and how they influence kind conformance.

Listing \ref{lst:variance} defines a perfectly valid \type{Seq} abstraction, albeit with a contrived \code{lift} method. Because \type{Seq} declares \type{C}'s type parameter \type{X} to be covariant, it may use its covariant type parameter \type{A} as an argument for \type{C}, so that \type{C[A] <: C[B]} when \type{A <: B}. 

\type{Seq} declares the type of its \code{this} variable to be \type{C[A]} (\code{self: C[A] =>} declares \code{self} as an alias for \code{this}, and gives it an explicit type). Thus, the \code{lift} method may return \code{this}, as its type can be subsumed to \type{C[B]}.

Suppose that we allowed a type constructor that is invariant in its first type parameter, to be passed as the argument for a type constructor parameter that assumes its first type parameter to be covariant. This would foil the type system's first-order variance checks: \type{Seq}'s definition would be invalid if \type{C} were invariant in its first type parameter.

The remainder of Listing \ref{lst:variance} sets up a concrete example that would result in a run-time error if the type application \type{Seq[A, Cell]} were not ruled out statically.

More generally, a type constructor parameter that does not declare any variance for its parameters does not impose any restrictions on the variance of the parameters of its type argument. However, when either covariance or contravariance is assumed, the corresponding parameters of the type argument must have the same variance.
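As a sketch of this conformance rule (the trait and class names below are hypothetical, chosen only for illustration), an unannotated higher-order parameter accepts constructors of any variance, whereas an annotated one demands a matching annotation:

\begin{lstlisting}
trait Agnostic[C[X]]      // assumes nothing about C's parameter
trait Covariant[C[+X]]    // assumes C's parameter is covariant

class Inv[X]              // invariant type constructor
class Cov[+X]             // covariant type constructor

class A1 extends Agnostic[Inv]    // OK: no assumption to violate
class A2 extends Agnostic[Cov]    // OK: no assumption to violate
class C1 extends Covariant[Cov]   // OK: variance matches
// class C2 extends Covariant[Inv] // rejected: Inv is not covariant in X
\end{lstlisting}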

\begin{lstlisting}[caption=Example of unsoundness if higher-order variance annotations are not enforced.,label=lst:variance,float]
trait Seq[+A, C[+X]] { self: C[A] =>
  def lift[B >: A]: C[B] = this
}

class Cell[A] extends Seq[A, Cell] { // (only) compile-time error
  private var cell: A = _
  def set(x: A) = cell = x
  def get: A = cell
}

class Top
class Ext extends Top {def bar() = println("bar")}

val exts: Cell[Ext] = new Cell[Ext]
val tops: Cell[Top] = exts.lift[Top]
tops.set(new Top)
exts.get.bar()  // run-time error if compile-time error is ignored
\end{lstlisting}
\comment{Scala has definition-site variance, e.g., Iterable is covariant in first arg
--> trait Iterable[+T] 
  --> type Container[X] --> PROBLEM: Container[T] illegal
   --> type Container[+X], otherwise we can't write Container[T] 
 (T is covariant, cannot appear in invariant position)

example of problem if we ignore these rules
}



%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Related Work \label{sec:related}}
Since the seminal work of Girard and Reynolds in the early 1970s, fragments of the higher-order polymorphic lambda calculus or System F$_\omega$ \cite{girard:thesis,DBLP:conf/programm/Reynolds74,DBLP:journals/iandc/BruceMM90} have served as the basis for many programming languages. The most notable example is Haskell \cite{DBLP:journals/sigplan/HudakPWBFFGHHJKNPP92}, which has supported higher-kinded types for over 15 years \cite{DBLP:conf/hopl/HudakHJW07}. 

Although Haskell has higher-kinded types, it eschews subtyping. Most of the use cases for subtyping are subsumed by type classes, which handle overloading systematically \cite{DBLP:conf/popl/WadlerB89}. However, it is not (yet) possible to abstract over class contexts \cite{hughes99:restricted,jones94:setmonad,kidd07:setmonad,chak07:classfamilies}. In our setting, this corresponds to abstracting over a type that is used as a bound, as discussed in Section \ref{sec:bounds}.

The interaction between higher-kinded types and subtyping is a well-studied subject \cite{DBLP:conf/edbt/Cardelli88,DBLP:journals/tcs/PierceS97,DBLP:journals/iandc/CompagnoniG03}. As far as we know, none of these approaches combines bounded type constructors, subkinding, subtyping \emph{and} variance, although each of these features is included in at least one of them. Of particular interest is Cardelli's notion of power types \cite{DBLP:conf/popl/Cardelli88}, which corresponds to our bounds-tracking kind \mbox{\lstinline|*(L, U)|.}

%Another interaction that occurs in Scala is that between subtyping and dependent types \cite{DBLP:journals/tcs/AspinallC01}
% subtyping and matching \cite{DBLP:journals/toplas/AbadiC96}

%\OmegaLang~\cite{citeulike:975433} is a Haskell-based language that (most notably) supports user-defined kinds and type-level computation. To a certain extent, it seems possible to encode the first mechanism using sealed hierarchies of abstract classes in Scala. Scala's singleton types are a good match for the Singleton pattern, which --- according to Sheard ---  is an important concept in \OmegaLang.  It may be possible to encode a limited form of type-level computation using Scala's implicits. However, dedicated support is clearly needed for this feature to be powerful enough. Part of our ongoing work is geared towards bringing the essence of \OmegaLang's power to Scala. The main goal of this effort is to realise an extensible type system that can be used for program verification.

%The main ideas can be implemented in Java or \CSharp, see Altherr and Cremet \cite{Altherr07:fgjomega} for a proposal of a Java extension.

Type constructor polymorphism has recently started to trickle down to object-oriented languages. Cremet and Altherr's work on extending Featherweight Generic Java with higher-kinded types \cite{Altherr07:fgjomega} partly inspired the design of our syntax. Other than that, we are not aware of other contemporary object-oriented languages with a similar set of features. C++'s template mechanism is related, but, while templates are very flexible, this comes at a steep price: they can only be type-checked after they have been expanded. Recent work on ``concepts'' aims to alleviate this \cite{DBLP:conf/oopsla/GregorJSSRL06}. 





%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Conclusion \label{sec:conclusion}}

Genericity is a proven technique to reduce code duplication in object-oriented libraries, as well as making them easier to use by clients. The prime example is a collections library, where clients no longer need to cast the elements they retrieve from a generic collection.

Unfortunately, genericity's first-order nature makes it self-defeating: abstracting over proper types gives rise to type constructors, which cannot themselves be abstracted over. Thus, by using genericity to reduce code duplication, other kinds of boilerplate arise. Type constructor polymorphism makes it possible to eliminate these redundancies as well, as it generalises genericity to type constructors. 
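The canonical instance of this generalisation is an interface that abstracts over a container's type constructor rather than over its element type. The following sketch (the name \type{Mappable} is hypothetical) would have to be duplicated for every container class under first-order genericity:

\begin{lstlisting}
// C abstracts over the container's type constructor
trait Mappable[C[X]] {
  def map[A, B](xs: C[A])(f: A => B): C[B]
}

// one instance per container, instead of one interface per container
object ListMappable extends Mappable[List] {
  def map[A, B](xs: List[A])(f: A => B): List[B] = xs.map(f)
}
\end{lstlisting}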

As with genericity, most use cases for type constructor polymorphism arise in library design and implementation, where it provides more control over the interfaces that are exposed to clients, while reducing code duplication.  Moreover, clients are not exposed to the complexity that is inherent to these advanced abstraction mechanisms. In fact, clients \emph{benefit} from the more precise interfaces that can be expressed with type constructor polymorphism, just like genericity reduced the number of casts that clients of a collections library had to write.

We implemented type constructor polymorphism in Scala 2.5. The essence of our solution carries over easily to Java; see Altherr et al. for a proposal \cite{Altherr07:fgjomega}. 

Finally, we have only reported on one of several applications that we experimented with. Embedded domain-specific languages (DSLs) \cite{DBLP:conf/aplas/CaretteKS07} are another promising application area for type constructor polymorphism. We are currently applying these ideas to our parser combinator library, a DSL for writing EBNF grammars in Scala \cite{moors07:sparsec}. Independently, Ostermann et al. \cite{ostermann07:privcomm} are investigating similar applications, which critically rely on type constructor polymorphism. 



%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Acknowledgements}
The authors would like to thank Dave Clarke, Marko van Dooren, Burak Emir, Erik Ernst, Bart Jacobs, Jan Smans, and Alexander Spoon for their insightful comments and interesting discussions. We also gratefully acknowledge the Scala community for providing a fertile testbed for this research.
% Miles Sabin. Lauri Alanko, Ross Judson, 

The first author is supported by a grant from the Flemish IWT. Part of the reported work was performed during a 3-month stay at EPFL.



\bibliography{manual,dblp}
%\bibliography{final}

\end{document}
