\newcommand{\svnid}{\code{$Id:tcpoly.tex 47 2007-09-11 16:23:38Z adriaanm $}}
\documentclass[preprint]{sigplanconf}

% don't change the next three lines, fonts will get messed up!
\usepackage[T1]{fontenc}
\usepackage[scaled]{luximono}
\usepackage{times}

\usepackage{textcomp}

\usepackage{hyperref}
\usepackage{amsmath}
\usepackage{color}


\newcommand{\AWK}{{\color{red}AWK}}
\newcommand{\comment}[1]{}
\newcommand{\TODO}[1]{\mbox{{\color{red}TODO}}\{{\footnotesize{#1}}\}}

\usepackage{listings}
\newcommand{\code}[1]{\lstinline{#1}}
\newcommand{\class}[1]{\code{#1}}
\newcommand{\type}[1]{\code{#1}}
\newcommand{\method}[1]{\code{#1}}
\newcommand{\kto}{\ensuremath{\rightarrow}}
\newcommand{\tmfun}{\ensuremath{\rightarrow}}
\newcommand{\tpfun}{\ensuremath{\Rightarrow}}
\newcommand{\nuObj}{$\nu$Obj}

\lstdefinelanguage{scala}{% 
       morekeywords={% 
                private, public, protected, import, package, implicit, final, package, trait, type, class, val, def, var, if, this, else, extends, with, while, new, abstract, object, requires, case, match, sealed},% 
         sensitive=t, % 
   morecomment=[s]{/*}{*/},morecomment=[l]{//},% 
   escapeinside={/*\%}{*/},%
   rangeprefix= /*< ,rangesuffix= >*/,%
   morestring=[d]{"}% 
 }
 
\lstset{breaklines=true, language=scala} 
%\lstset{basicstyle=\footnotesize\ttfamily, breaklines=true, language=scala, tabsize=2, columns=fixed, mathescape=false,includerangemarker=false}
% thank you, Burak 
% (lstset tweaking stolen from
% http://lampsvn.epfl.ch/svn-repos/scala/scala/branches/typestate/docs/tstate-report/datasway.tex)
\lstset{
    fontadjust=true,%
    columns=[c]fixed,%
    keepspaces=true,%
    basewidth={0.56em, 0.52em},%
    tabsize=2,%
    basicstyle=\renewcommand{\baselinestretch}{0.97}\tt,% \small\tt
    commentstyle=\textit,%
    keywordstyle=\bfseries,%
}

\lstset{
  literate=
  {=>}{$\Rightarrow$}{2}
  {->}{$\to$}{2}
  {<-}{$\leftarrow$}{2}
  % {\\}{$\lambda$}{1}
  {<~}{$\prec$}{2}
  {<|}{$\triangleleft$}{2}
  {<:}{$<:$}{1}
}


\usepackage{amsmath,amssymb}
\usepackage{supertabular}
%\usepackage{geometry} %% EVIL
\usepackage{ifthen}

\input{ott}

\begin{document}

\conferenceinfo{WXYZ '05}{date, City.} 
\copyrightyear{2007} 
\copyrightdata{[to be supplied]} 

\titlebanner{working draft \svnid}        % These are ignored unless
\preprintfooter{Use cases for and meta-theory of higher-order nominal subtyping in Scala.}   % 'preprint' option specified.

\title{Taking Scala Higher}
\subtitle{Higher-Kinded Types for the Masses}

\authorinfo{Adriaan Moors \and Frank Piessens} % \and \mbox{Wouter Joosen}
           {K.U. Leuven}
           {\emph{first}.\emph{last}@cs.kuleuven.be}
\authorinfo{Martin Odersky}
           {EPFL}
           {\emph{first}.\emph{last}@epfl.ch}

\maketitle

\begin{abstract}
\end{abstract}

\category{CR-number}{subcategory}{third-level}

\terms
term1, term2

\keywords
keyword1, keyword2

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
Contemporary object-oriented languages -- such as Java, C\#, and Scala -- all support ``genericity'', or parametric polymorphism. This mechanism allows programmers to abstract over types. The typical example is \type{List}, which abstracts over the type of its elements. However, in doing so, \type{List} loses its status as a first-class type: the abstraction mechanism does not deal with the higher-order case, so one cannot in turn abstract over \type{List} itself. 

The thesis of this paper is that this limitation should be lifted in object-oriented languages. Functional programming languages, such as Haskell, have been putting this kind of higher-order polymorphism to good use for well over a decade. Scala 2.0 already provided partial support via encodings. Since 2.5, it is fully supported. We call this feature ``type constructor polymorphism''. We will illustrate and motivate it using examples and a formalisation of a core subset of Scala, called ``Scalina''.

Type constructor polymorphism is part of Scala's \TODO{ambition} to cleanly integrate functional and object-oriented concepts into one language. This strategy has already been shown to be very fruitful in the recent past \cite{DBLP:conf/oopsla/OderskyZ05,emir07:oopatmatch}. We seek to continue this trend with our extension.

\TODO{select subset of the things we actually discuss: interaction of higher-kinded types with nominal subtyping and variance, type parameters and abstract type members, path-dependent types, and pattern matching.  mechanised the meta-theory of our calculus in Twelf}


The prime example of the utility of type constructor polymorphism \TODO{}

Our formal account closely follows the spirit of \nuObj \cite{DBLP:conf/ecoop/OderskyCRZ03}, in that our main design goal is to provide one abstraction mechanism to support FP-style and OO-style, as well as value-level and type-level abstraction. Abstract members play this crucial role. Application is modelled using mixin composition. 

However, there are some differences between Scalina and \nuObj. First, we model polymorphic methods using objects with abstract members, whereas \nuObj uses ``first-class'' classes for that purpose. Essentially, we allow classes with abstract members to be instantiated (although it is possible to require certain members to be concrete). Member selection is only allowed when all members of an object are concrete. Objects with abstract members are our lambdas. Value refinement (a limited form of mixin composition) is used to make abstract members concrete, and thus corresponds to application.

Second, we do not allow the bounds on abstract members to be strengthened in subclasses, since this leads to nonsensical type applications, as discussed in Section \TODO{}. \TODO{In full F-sub, the subtyping rule for universally quantified types requires the bounds to vary contravariantly. I think this invariant behaviour of bounds may enable decidability.}

\TODO{Ernst said state is essential when modelling path-dependent types. I agree. Should we also model variance?}

FP-style vs OO-style abstraction
\comment{
  functional
    abstraction: parameterisation (position-based)
    concretisation: application
  object-oriented
    abstraction: abstract members (name-based)
    concretisation: refinement
Value vs Type abstraction
  uniform mechanism on value-level and type-level
}

\paragraph{Contribution} 

\paragraph{Structure of the paper}
In the next section, we will introduce our extension using a few examples in a subset of Scala (its syntax is described in Section \ref{sec:syntax:subscala}). Section \ref{sec:elab} specifies how this surface syntax is translated into Scalina. We will use a running example to illustrate the various features of our extension. More advanced applications of higher-kinded types in Scala are discussed at the end of the paper, in Section \ref{sec:advex}. After the informal introduction, we will discuss Scalina's syntax (Section \ref{sec:syntax:scalina}), its small-step structural operational semantics (Section \ref{sec:opsem}), its static semantics (Section \ref{sec:statsem}), and the main results of the meta-theory (Section \ref{sec:metatheo}).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Examples}


\subsection{Iterable}
\begin{lstlisting}[language=Scala,float,caption=Extract of Scala's  Iterable interface,label=lst:iterable0]
trait Iterable[A] {
  def map[B](f: A => B): Iterable[B]
  def flatMap[B](f: A => Iterable[B]): Iterable[B]
  def filter(p: A => Boolean): Iterable[A] 
}
\end{lstlisting}

Fig. \ref{lst:iterable0} depicts an extract from Scala's \type{Iterable} interface, which provides the necessary operations to support Scala's for-compre\-hen\-sions. \code{map} takes a function from \type{A} to \type{B} and applies it to every element of the current collection to produce a new collection of \type{B}'s. \code{flatMap} generalises this behaviour: for every element of type \type{A}, the user-supplied function may produce a whole collection of \type{B}'s instead of just a single element; the produced collections are then merged into one. Finally, \code{filter} examines every element in the current collection and returns a new one containing only those elements that satisfy the user-supplied predicate \method{p}.
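Since these are exactly the methods that for-comprehensions expand into, the contract can be illustrated concretely. The following sketch (the values \code{xs}, \code{r}, and \code{r2} are our own) shows a comprehension over the standard \type{List} next to a hand-written expansion in terms of \code{filter}, \code{flatMap}, and \code{map}:

```scala
// A for-comprehension and a hand-written desugaring; both use only
// the Iterable operations discussed above.
val xs = List(1, 2, 3)
val r  = for (x <- xs if x % 2 == 1; y <- List(10, 20)) yield x * y
val r2 = xs.filter(x => x % 2 == 1).flatMap(x => List(10, 20).map(y => x * y))
// r and r2 are both List(10, 20, 30, 60)
```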

Using parametric polymorphism (``genericity''), the type \type{Iterable[A]} expresses that it provides iteration over elements of type \type{A}. The type of the constituents of a collection produced by one of the methods discussed above is specified similarly. However, our knowledge of the type of the new \emph{container} is less precise: the programmer is left in the dark as to whether mapping a function over a \type{List} (a typical subclass of \type{Iterable}) will return a new \type{List}. It might as well return a \type{Set}: the only thing a client of these methods knows about the container is that the result is again an \type{Iterable}.

The current Scala libraries use a rather ad-hoc solution to this problem. Without type constructor polymorphism, we cannot abstract over the part of the result type that varies, as it is a type constructor. Therefore, every subclass of \type{Iterable} refines the result type of the relevant methods covariantly, scattering this part of \type{Iterable}'s contract over the class hierarchy. Furthermore, this ad-hoc solution only works for result types. 

With type constructor polymorphism, we can abstract over the type constructor that represents the container, just like we could already abstract over the type that represents the contained elements. Fig. \ref{lst:iterable1} shows a refined \type{Iterable} interface. A subclass, such as \type{List}, can now override the single abstract type member \type{MyType[x]} with \code{type MyType[x] = List[x]}. Note that the name of the type member merely alludes to Bruce's work \cite{TODO}; we leave the integration of these orthogonal features to future work. We could also use Scala's support for explicit self types to further restrict \type{MyType}, but, again, we consider such refinements outside the scope of this paper.

\begin{lstlisting}[language=Scala,float,caption=Initial refinement of Iterable with higher-kinded types,label=lst:iterable1]
trait Iterable[A] {
  type MyType[x] <: Iterable[x]
  
  def map[B](f: A => B): MyType[B]
  def flatMap[B](f: A => MyType[B]): MyType[B]
  def filter(p: A => Boolean): MyType[A] 
}
\end{lstlisting}
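The refined interface can be exercised in current Scala. The following is a minimal sketch in which the names \type{MyIterable} and \type{MyList} are ours, chosen to avoid clashing with the standard library:

```scala
// Sketch: a subclass fixes MyType once, and map statically returns MyList.
trait MyIterable[A] {
  type MyType[x] <: MyIterable[x]
  def map[B](f: A => B): MyType[B]
}

class MyList[A](val elems: List[A]) extends MyIterable[A] {
  type MyType[x] = MyList[x]
  def map[B](f: A => B): MyList[B] = new MyList(elems.map(f))
}

// The result is statically known to be a MyList, not a bare MyIterable.
val doubled: MyList[Int] = new MyList(List(1, 2, 3)).map(_ * 2)
```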


\subsection{Bounded Iterable}
Unfortunately, Fig. \ref{lst:iterable1}'s version of \type{Iterable} excludes reusing it for collections that require their elements to be bounded, such as \type{OrderedSet}:
\begin{lstlisting}
trait OrderedSet[T <: Ordered[T]] extends Iterable[T] {
  type MyType[x] = OrderedSet[x]
  // ...
}
\end{lstlisting}
This definition is illegal, since \type{MyType} is a type function that accepts any kind of argument, whereas \type{OrderedSet[x]} is only well-formed if \code{x <: Ordered[x]}. That constraint is `hidden' by equating \type{MyType[x]} to \type{OrderedSet[x]}. This problem is solved in Fig. \ref{lst:boundediterable}: the declaration of \type{MyType} exposes the bound on the type of the elements by taking a type argument \type{x} that is bounded by \type{Bound[x]}.

Note that this definition of \type{Iterable} subsumes the previous one, as \type{Bound} may be instantiated to \type{Any}\footnote{\type{Any} is the kind-polymorphic top-type: it's a type constructor that takes any number of type arguments. All well-formed types are subtypes of \type{Any}, and, dually, supertypes of \type{Nothing}.}.

\begin{lstlisting}[language=Scala,float,caption=Iterable that also abstracts over the elements' bounds,label=lst:boundediterable]
trait Iterable[A <: Bound[A], Bound[_]] {
  type MyType[x <: Bound[x]] <: Iterable[x, Bound]
  
  def map[B <: Bound[B]](f: A => B): MyType[B]
  def flatMap[B <: Bound[B]](f: A => MyType[B]): MyType[B]
  def filter(p: A => Boolean): MyType[A] 
}

trait OrderedSet[T <: Ordered[T]] extends Iterable[T, Ordered] { 
  type MyType[x <: Ordered[x]] = OrderedSet[x]
}
\end{lstlisting}
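Current Scala has no kind-polymorphic \type{Any}, but the claim that the bounded interface subsumes the unbounded one can be approximated with a hypothetical alias \type{Unbounded} that ignores its argument (the names \type{Iter} and \type{Bag} are also ours; Scala 2 may require \code{-language:higherKinds}):

```scala
// Sketch of Fig. lst:boundediterable's interface in current Scala.
trait Iter[A <: Bound[A], Bound[_]] {
  type MyType[x <: Bound[x]] <: Iter[x, Bound]
  def elems: List[A]
}

// Every type x trivially satisfies x <: Unbounded[x], so instantiating
// Bound to Unbounded recovers the original, unbounded interface.
type Unbounded[x] = Any

class Bag[A](val elems: List[A]) extends Iter[A, Unbounded] {
  type MyType[x <: Unbounded[x]] = Bag[x]
}

val ints: Bag[Int] = new Bag(List(1, 2))
```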


\section{Informal description}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Surface Syntax \label{sec:syntax:subscala}}
\begin{figure}
\begin{verbatim}
Path ::= Id |  Path `.' Id

Expr ::= 
    FormalsV `=>' Expr
  | if `(' Expr `)' Expr [`else' Expr]
  | [SimpleExpr `.'] Id `=' Expr
  | SimpleExpr
  
SimpleExpr    ::= 
    new Template
  | Path
  | SimpleExpr `.' Id 
  | SimpleExpr ActualsT
  | SimpleExpr ActualsV
  | `{' {Expr}`;' `}'
  | `(' Expr `)'

ActualsV ::= `(' {Expr}`,' `)' 
ActualsT ::= `[' {Type}`,' `]'


Type ::= SimpleType {with SimpleType} [`{' 
  {Dcl | type Id [FormalsT] `=' Type} `}']

SimpleType ::=  
    SimpleType ActualsT 
  | SimpleType `#' Id
  | Path
  | Path `.' type
  | `(' Type `)'

Template ::= SimpleType [ActualsV] 
  {with SimpleType} 
  [`{' [id [`:' Type] `=>'] {Def | Dcl} `}']

FormalV  ::= Id `:' Type
FormalsV ::= `(' {FormalV}`,'  `)'
         
FormalT  ::= Id [FormalsT] [`>:' Type] 
                           [`<:' Type] 
FormalsT ::= `[' {FormalT}`,'  `]'

Dcl ::= 
    val FormalV
  | var FormalV
  | def Id [FormalsT] [FormalsV] `:' Type
  | type FormalT

Def ::= 
    val FormalV `=' Expr
  | var FormalV `=' Expr
  | def Id [FormalsT] [FormalsV] `:' Type 
       `=' Expr
  | type Id [FormalsT] `=' Type
  | class Id [FormalsT] [FormalsV] 
      'extends' Template
\end{verbatim}
\caption{A Subset of Scala Syntax (TODO: formatting + further reduction) \label{fig:syntax:subscala}}
\end{figure}

Fig. \ref{fig:syntax:subscala} describes the subset of the full Scala syntax that we use in this paper. For now, we simply introduce it by example; the following sections go into more detail, along with more interesting matters such as the type system. The listing in Fig. \ref{lst:booleans:subscala} \TODO{illustrates the core concepts}

\lstinputlisting[language=Scala,float,caption=Encoding the Booleans using the surface syntax,label=lst:booleans:subscala]{booleans.scala}

Syntactically, adding support for higher-kinded types had very little impact: abstract type members and type parameters may now declare type parameters of their own. Before, the signature of an abstract type (\TODO{FormalT}) was just an identifier followed by bounds; now, the identifier may additionally be followed by a list of formal type parameters. 

For example, a type parameter \type{m} that takes one type argument is declared as \type{m[x]}. \type{x} is in scope in the immediately enclosing list of parameters, as well as in \type{m}'s bounds. Thus, this is a valid declaration: \type{m[x <: Ordered[x]] <: Collection[x]}. This \type{m} may only be applied to arguments \type{t} for which \type{t <: Ordered[t]}. The instantiation of the type \type{m} itself is constrained by this requirement, as well as by the bound \type{<: Collection[x]}.
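Such a declaration is accepted by current Scala. The following sketch, in which all names (\type{Coll}, \type{Factory}, \type{OrdColl}, \type{Nat}) are hypothetical, declares a bounded higher-order type parameter, implements it, and applies it:

```scala
import scala.math.Ordered

trait Coll[A] { def elems: List[A] }

// m may only be applied to arguments t with t <: Ordered[t],
// and any such application m[t] must be a subtype of Coll[t].
trait Factory[m[x <: Ordered[x]] <: Coll[x]] {
  def make[t <: Ordered[t]](ts: List[t]): m[t]
}

class OrdColl[x <: Ordered[x]](val elems: List[x]) extends Coll[x]

object OrdColls extends Factory[OrdColl] {
  def make[t <: Ordered[t]](ts: List[t]): OrdColl[t] = new OrdColl(ts)
}

// An element type satisfying the bound t <: Ordered[t].
case class Nat(n: Int) extends Ordered[Nat] {
  def compare(that: Nat): Int = n.compare(that.n)
}

val s = OrdColls.make(List(Nat(2), Nat(1)))
```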

\subsection{Type checking type constructor polymorphism}
To build up some intuition as to how type constructor polymorphism affects the type checking of Scala programs, we will consider the relevant rules in turn: first the syntax-directed type checking rules, then the subtyping relation. Before doing so, we establish the terminology used in the rest of this paper.

\subsubsection{Of terms, types, and kinds}
One of the key differences between Scala and languages like Java or Haskell is that Scala does not maintain a strict separation between terms and types: under certain conditions, a type may contain a term. The details do not matter for the purpose of this section, but this decision selects a different subset of the design space for type systems than, e.g., Java and Haskell do.

Haskell is an example of a language that enforces a strict separation between the levels of terms, types and kinds. Entities in one level are said to be ``classified'' by one or more entities in the level immediately above that level.  In Java, the level of kinds is not made explicit. Nonetheless, the essence of Haskell's three-level system applies if we consider Java's level of kinds to consist of exactly one kind. Haskell calls this kind ``$\star$''. It classifies proper types, i.e., those that in turn classify values. Haskell expands the level of kinds by introducing a kind constructor \tpfun, that produces kinds that classify type constructors. This constructor may be thought of as the kind-level equivalent of the function type constructor \tmfun.

In Haskell, a one-argument function is given a type like $s \tmfun t$. Scala and Java follow a different approach for defining methods: instead of directly assigning them a ``method type'', methods are -- in a way -- defined \emph{prototypically}: we describe the shape of a correct invocation by specifying typed (and named) formal arguments. The same difference exists at the type level. In Haskell, the \type{List} type constructor may be explicitly classified by the kind $\star \tpfun \star$, whereas the Scala syntax for defining ``functions'' at the type level follows the same philosophy as for value-level methods. This design was first proposed by Altherr and Cremet \cite{}.

While this syntax is convenient for programmers already familiar with the way methods are defined, an explicit notion of function types and kinds makes for more elegant typing rules. At the value level, a method \lstinline|def plus(a: Int)(b: Int): Int| corresponds to the function type \type{Int => Int => Int}, and a method with dependent types, \lstinline|def duplicate(scope: Scope)(binder: scope.Binder): scope.Binder|, yields the dependent function type \lstinline|(scope: Scope) => scope.Binder => scope.Binder|. In other words, \code{duplicate} is a value constructor that takes a value \code{scope} of type \type{Scope} and a value \code{binder} of type \type{scope.Binder}, and returns a value of type \type{scope.Binder}.
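Dependent method types let us run a version of this example in current Scala; \type{Scope}, \type{Binder}, and the member \code{dup} below are hypothetical stand-ins for the prose example:

```scala
// Sketch: duplicate's result type depends on its first argument.
trait Scope { type Binder; def dup(b: Binder): Binder }

def duplicate(scope: Scope)(binder: scope.Binder): scope.Binder =
  scope.dup(binder)

object IntScope extends Scope {
  type Binder = Int
  def dup(b: Int): Int = b
}

// Statically an IntScope.Binder, i.e. an Int.
val d = duplicate(IntScope)(3)
```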
  
This translation can be lifted to the level of types by replacing uses of the classification relation (`\code{:}') by subtyping (`\code{<:}'), and using the kind constructor `\code{=>}' instead of the type constructor `\code{->}'. Thus, the abstract type declaration \lstinline|m[x, y <: x] <: Pair[y, x]| expresses that \type{m} is of kind \lstinline|(x <: Any*) -> (y <: x) -> Pair[y, x]|, i.e., it is a type constructor that takes a type \type{x} of kind $\star$, a type \type{y} that is a subtype of \type{x} and produces a subtype of \lstinline|Pair[y, x]|. Note that classification by $\star$ is interchangeable with requiring a subtype of \type{Any*}, since \type{Any*} : $\star$ and subtyping is only defined on types of the same kind.  \TODO{find better notation for Any*, note that an unqualified Any is kind-polymorphic (maybe the latter usage should get a different notation)}


\paragraph{Checking type application} There are three syntactic constructs that perform type application. Two of them realise the functional interpretation of application: given the list of declared formal type parameters and the list of supplied actual arguments, we must check that these lists have the same length, and that each actual argument (simultaneously taking into consideration all the actual arguments) conforms to the bound declared for the corresponding formal parameter. 

Note that this definition is exactly the same as for value application, modulo the translation discussed above. For value application, conformance means that the type of the actual argument must be a subtype of the type declared for the formal argument. Conformance for type application amounts to checking that the kind of the actual type argument is a \emph{subkind} of the kind declared for the formal argument. 

\TODO{The third kind of application results from mixin composition. Despite its different appearance, it behaves according to the same rules as functional-style application.}

\paragraph{Subkinding}

\begin{verbatim}
((x <: S) -> T)  <:  ((x <: S') -> T')

if  S' <: S  and, forall x <: S',  T <: T'
\end{verbatim}

For a type function to be a subkind of another type function, it must accept at least the same arguments and produce a result type that's at least as good. In other words, the bound on the argument may vary contravariantly, and the bound on the result type may vary covariantly.

\subsection{Execution}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{More Advanced Applications\label{sec:advex}}

\subsection{Idioms and Traversals}
\begin{lstlisting}[language=Scala,float,caption=Idiomatic Iterable,label=lst:iterableidi]
trait Iterable[A, This[_]] {
  def imap[B, m[_]](f: A => m[B])(implicit accum: Accumulator[m, This]): MyType[B]
}

trait Accumulator[ElemStruct, Container] {
// type id[x] = x
// trait Const1[c] { type res[x]=c }    // trait Const1 { type Const1_a1; type res={type a1; type res=Const1_a1}}
//  ElemStruct = id ==> map
//  ElemStruct = Iterable ==> flatMap (?)
//  ElemStruct = Const1[Boolean]#res ==> filter
}
\end{lstlisting}


\subsection{Datatype-Generic Programming}

\subsection{TODO: interaction with path-dependent types and GADT's?}

\subsection{Units of Measurement}
% TODO: acknowlegde Don Syme for this example
To reflect that units with their operations form an abelian group, we could use the coercion interpretation of subtyping and pass the witnesses around via implicits.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Scalina}
\begin{verbatim}
Based on \nuObj, but
  - simplified initialisation
     - no terms in body (everything goes 
       through indirection)
  - first-order abstract objects (first-
     order functions)
    - concreteness is specified at the 
      member-level, enforced for member
      selection, not required for instantiation
    - value refinement
  - well-kinding ==> type application never fails
  - variance annotations?
  - decidable (?): kernel F-sub
  
\end{verbatim}


\subsection{Core Syntax \label{sec:syntax:scalina}}
\begin{figure*}
\ottgrammartabular{
\ottt\ottinterrule
\ottp\ottinterrule
\ottT\ottinterrule
\ottm\ottinterrule
\ottcm\ottinterrule
\ottafterlastrule
}

\caption{Scalina Syntax \label{fig:syntax:scalina}}
\end{figure*}

\lstinputlisting[language=Scala,float,caption=Encoding the Booleans using the core syntax,label=lst:booleans:scalina]{booleans.scalina}

To build up intuition for the correspondence between the surface syntax and the core calculus, Fig. \ref{lst:booleans:scalina} shows how the fragment in Fig. \ref{lst:booleans:subscala} can be expressed in pure Scalina. The translation itself is the subject of the next section; first, we discuss the listing as a program in its own right.

At the top level, Fig. \ref{lst:booleans:scalina} defines a record type with four members. These members can refer to each other recursively through the self variable, \code{root}. Scalina syntax requires the self variable and the self type to be specified explicitly (as syntactic sugar, we allow them to be omitted, which corresponds to declaring a fresh self variable with type \type{self}). Here, the recursive type \type{self}, which stands for the surrounding record, is used as the self type. \TODO{self can only be used in the self type}

The first member is the nominal type binding \type{Bool}, which is defined to expand to the given record type. Scalina, like \nuObj, introduces nominal subtyping in an otherwise structural calculus using the nominal type binding, which is different from both an abstract and a concrete type: \type{Bool} is indeed concrete, but is not \emph{equal} to the structural type on the right.

\type{Bool}'s \type{and} type member is a type alias for a structural type that has one value member that is explicitly defined abstract, which means that the record type may be instantiated without this member being concrete. This generalisation allows us to model a function (i.e., a $\lambda$-abstraction) as an object with abstract members for its arguments. 

The \code{_false} and \code{_true} instances of \type{Bool} provide concrete implementations for the abstract methods by mixing in the corresponding type aliases with structural types that make the \code{res} value member concrete, forming a type that may be instantiated.

Finally, \code{root.res} is considered the result of the Scalina program in Fig. \ref{lst:booleans:scalina}. It shows how \code{root._true}'s \code{and} ``method'' can be invoked using value refinement to pass \code{root._false} as the first argument. Operationally, running the program corresponds to instantiating the top-level type and selecting its \code{res} member.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{From Scala to Scalina \label{sec:elab}}
Scala offers two styles of abstraction: through parameterisation (functional) or abstract members (OO). In our core calculus, we follow \nuObj, and only offer object-style abstraction. In this section, we will show how functional-style abstraction at the level of types and values can be expressed uniformly using abstract members and mixin composition.

Let's start with the simplest of value-level abstractions: the identity function on integers. In Scala, we write \code{def id(x: Int): Int = x}. In Scalina (but still using Scala syntax), this becomes: \lstinline|trait id { val x: Int; val res: Int = x }|. The argument \code{x} is replaced by an abstract value member. The method's body is encoded as a concrete value member that may reference the abstract members that represent the arguments. An application such as \code{id(42)} is desugared to \lstinline|(new id {val x=42}).res|. This composes the trait with a refinement that makes the argument concrete (which corresponds to argument passing), instantiates the refined class, and selects the result member (the equivalent of computing the actual result of the function).

To see how abstraction works uniformly for types and values, consider the encoding of the polymorphic identity function: \lstinline|def id[t](x: t): t = x|. We simply add an abstract type member to our previous encoding: \lstinline|trait id {type t; val x: t; val res: t = x }|. Application must now pass one more argument: \lstinline|(new id {type t=Int; val x=42}).res|.
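These encodings can be executed in current Scala, with one caveat that we flag explicitly: Scala initialises the trait body before the refinement, so the result member must be \code{lazy} for the sketch to work (Scalina itself has no such initialisation-order issue):

```scala
// Polymorphic identity, encoded with abstract type and value members;
// lazy ensures res reads x only after the refinement has made it concrete.
trait Id { type t; val x: t; lazy val res: t = x }

// "Application" = refine, instantiate, select.
val applied = (new Id { type t = Int; val x = 42 }).res
```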

Finally, this scheme works equally well at the level of types: the identity on types, \lstinline|type id[x]=x|, becomes \lstinline|trait id{type x; type res=x}|, and \lstinline|id[Int]| is rewritten to \lstinline|(id{type x=Int})#res|.
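The type-level encoding can also be checked in current Scala (the name \type{TId} is ours). Scala 2 accepts the projection \lstinline|(TId{type x=Int})#res| directly; the sketch below instead goes through a path-dependent value, which also satisfies Scala 3's restrictions on type projection:

```scala
// The identity on types, encoded with an abstract type member.
trait TId { type x; type res = x }

// "Type application": fix x by refinement, then read off res.
val idInt = new TId { type x = Int }
val n: idInt.res = 3  // idInt.res is an alias for Int here
```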

Alas, there are a couple of complications, which stem from the mismatch between the visibility and accessibility of parameters and members. The names of parameters are only visible from inside the defining class; on the outside, only their position is relevant. Members are always accessed by name, and since we do not model access modifiers in Scalina, they are visible to subclasses and clients alike.

An obvious way of preserving the position of parameters is to encode it in their names: the $i$-th type parameter is encoded as a member named $\mathit{ta}_i$, and the $i$-th value argument is named $\mathit{va}_i$. \TODO{We chose different names for type and value arguments purely for presentation purposes: their namespaces are distinct.}

\subsection{Rewrite Rules}



\subsection{A couple of Snags}
This naive approach quickly leads to (minor) problems. \lstinline|trait Riap[a, b] extends Pair[b, a]| cannot be encoded: we get \lstinline|trait Pair{type ta_1; type ta_2}| and \lstinline|trait Riap extends Pair {type ta_1; type ta_2}|. The distinct sets of parameters are collapsed, ignoring the permutation.  This could be solved by incorporating a global renaming into the encoding, or by introducing the obvious renaming operator into the calculus: 
\begin{lstlisting}
trait Riap extends Pair[ta_* => Pair_ta_$1] { 
  type ta_1; type ta_2; 
  type Pair_ta_1=ta_2; type Pair_ta_2=ta_1
  type res = Riap { type ta_1 = Riap.this.ta_1; type ta_2 = Riap.this.ta_2 }
}
\end{lstlisting}
As similar mechanisms have been discussed in detail by others \cite{vanDooren:ecoop2007}, we will not deal with this issue explicitly.

Note, however, that a global renaming approach complicates the encoding of type application to abstract type members, since these members may receive classes with differently renamed type members. To accommodate the variation in the naming of the members that represent the type parameters, either different versions of the application are needed, or the global renaming has to be undone when a class is passed as an anonymous type constructor. Luckily, the \type{res} type member provides the needed extra level of indirection:
\begin{lstlisting}
trait Riap extends Pair { 
  type Riap_ta_1; type Riap_ta_2; 
  type Pair_ta_1=Riap_ta_2; type Pair_ta_2=Riap_ta_1
  type res = Riap { type Riap_ta_1 = Riap.this.Riap_ta_1; type Riap_ta_2 = Riap.this.Riap_ta_2 }
}
// to encode Foo[Riap] (where trait Foo[m[a, b]]):
Foo { type Foo_ta_1 = {type ta_1; type ta_2; type res=Riap{type Riap_ta_1=ta_1; type Riap_ta_2=ta_2}} }
\end{lstlisting}

Bounded abstract type members pose a similar, though more intricate, problem. How should the encoding deal with the bound on \code{flipped} in \lstinline|trait Foo { type normal[a, b]; type flipped[a, b] <: normal[b, a]}|?  Mechanically applying the encoding for type application yields a first attempt (using informal notation):
\begin{lstlisting}
trait Foo { 
  type (normal{type ta_1=a; type ta_2=b})#res <: Any
  type (flipped{type ta_1=a; type ta_2=b})#res  <: (normal{type ta_1=b; type ta_2=a})#res
}
\end{lstlisting}

This can be disambiguated as 
\begin{lstlisting}
trait Foo { 
  type normal  where Forall a. Forall b. (normal{type ta_1=a; type ta_2=b})#res <: Any
  type flipped where Forall a. Forall b. (flipped{type ta_1=a; type ta_2=b})#res  <: (normal{type ta_1=b; type ta_2=a})#res
}
\end{lstlisting}

Cleaning up gives
\begin{lstlisting}
trait Foo { 
  type normal  <: {type ta_1; type ta_2; type res}
  type flipped where flipped#res  <: (normal{type ta_1=flipped#ta_2; type ta_2=flipped#ta_1})#res
}
\end{lstlisting}

However, although we can use recursion to omit the explicit quantification, it seems we cannot get around the need for generalised constraints \cite{emir06}.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{figure*}
\ottdefnsJcorestat
\end{figure*}

\begin{figure*}
\ottdefnsJhelpersstat
\end{figure*}

\subsection{Operational Semantics \label{sec:opsem}}


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Type System \label{sec:statsem}}

\TODO{normal form of types: beta expansion corresponds to performing mixin composition, eta-expansion is not necessary}

\TODO{canonical form for paths as in vc}

\TODO{kind unsoundness: can variance help? whether you can strengthen the bounds on a type member depends on its variance
"the function position of a type application is a covariant position (?), but in order to strengthen the bounds of a type member, it must be marked contravariant"
}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Meta-Theory \label{sec:metatheo}}




%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Related Work}
\subsection{Calculi}
  F-omega-sub, Subtyping Dependent Types
  FGJ-$\omega$ (FTfJP)
  \TODO{cite Cremet\&Altherr \cite{Altherr07:fgjomega}}

\subsection{Functional Programming Languages}
Haskell: the equivalent of abstracting over bounds would be abstracting over type-class contexts, which is not supported

\subsection{Object-oriented Programming Languages}
  OO and higher-kinded types: 
    C++ templates, OCaml, 

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Conclusion}

% \appendix
% \section{Appendix Title}
% 
% This is the text of the appendix, if you need one.


\acks

Bart Jacobs, Lauri Alanko, Marko van Dooren, Burak Emir, Don Syme,
the Scala community

\bibliographystyle{abbrv}
\bibliography{tcpoly,dblp}
% \begin{thebibliography}{}
% 
% \bibitem{smith02}
% Smith, P. Q. reference text
% 
% \end{thebibliography}

\end{document}