\newcommand{\svnid}{\code{$Id: scalina.tex 120 2007-12-17 13:30:53Z adriaanm $}}

\documentclass[natbib]{sigplanconf} %preprint,

% don't change the next three lines, fonts will get messed up!
\usepackage[T1]{fontenc}
\usepackage[scaled]{luximono}
\usepackage{times}

\usepackage{graphicx}

\usepackage[utf8]{inputenc}

\usepackage{textcomp}
\usepackage{stmaryrd}

\usepackage[colorlinks,breaklinks=true]{hyperref}

\usepackage{listings}
\usepackage{amsmath,amssymb}
\usepackage{ifthen}
%\usepackage{fancyheadings}
\usepackage{supertabular}

\usepackage{paralist}

\usepackage{xcolor}
\definecolor{dullmagenta}{rgb}{0.4,0,0.4}   % #660066
\definecolor{darkblue}{rgb}{0,0,0.4}
\hypersetup{linkcolor=darkblue,citecolor=darkblue,filecolor=dullmagenta,urlcolor=darkblue} % coloured links


\newcommand{\AWK}{\mbox{\color{red}AWK}}
\newcommand{\comment}[1]{}
\newcommand{\notnow}[1]{}
\newcommand{\TODO}[1]{\mbox{{\color{red}TODO}}\{{\footnotesize{#1}}\}}


\newcommand{\code}[1]{\lstinline{#1}}
\newcommand{\class}[1]{\code{#1}}
\newcommand{\type}[1]{\code{#1}}
\newcommand{\kind}[1]{\code{#1}}
\newcommand{\method}[1]{\code{#1}}
\newcommand{\kto}[1]{\ensuremath{\rightarrow}}
\newcommand{\tmfun}[1]{\ensuremath{\rightarrow}}
\newcommand{\tpfun}[1]{\ensuremath{\Rightarrow}}
\newcommand{\nuObj}{$\nu$Obj}
\newcommand{\OmegaLang}{$\Omega$mega}

\lstdefinelanguage{scala}{% 
       morekeywords={% 
                try, catch, throw, kind, private, public, protected, import, package, implicit, final, trait, type, class, val, def, var, if, this, else, extends, with, while, new, abstract, object, requires, case, match, sealed, deferred},% 
         sensitive=true, % 
   morecomment=[s]{/*}{*/},morecomment=[l]{//},% 
   escapeinside={/*\%}{*/},%
   rangeprefix= /*< ,rangesuffix= >*/,%
   morestring=[d]{"}% 
 }
 
\lstdefinelanguage{scalina}{% 
       morekeywords={% 
                type, val, new, Un, In, Struct, Nominal, Concrete, *, Any, Nothing},% 
         sensitive=true, % 
   morecomment=[s]{/*}{*/},morecomment=[l]{//},% 
   escapeinside={/*\%}{*/},%
   rangeprefix= /*< ,rangesuffix= >*/,%
   morestring=[d]{"}% 
 }
 
\lstset{breaklines=true, language=scala} 
%\lstset{basicstyle=\footnotesize\ttfamily, breaklines=true, language=scala, tabsize=2, columns=fixed, mathescape=false,includerangemarker=false}
% thank you, Burak 
% (lstset tweaking stolen from
% http://lampsvn.epfl.ch/svn-repos/scala/scala/branches/typestate/docs/tstate-report/datasway.tex)
\lstset{
    fontadjust=true,%
    columns=[c]fixed,%
    keepspaces=true,%
    basewidth={0.56em, 0.52em},%
    tabsize=2,%
    basicstyle=\renewcommand{\baselinestretch}{0.97}\small\tt,% \small\tt
    commentstyle=\textit,%
    keywordstyle=\bfseries,%
    numberstyle=\footnotesize,%
    stepnumber=2,%
    numbersep=5pt
}

\def\toplus{\hbox{$\, \buildrel {\tiny +}\over {\to}\,$}}
\def\tominus{\hbox{$\, \buildrel {\tiny -}\over {\to}\,$}}

\lstset{
  literate=
  {=>}{$\Rightarrow$}{2}
  {->}{$\to$}{2}
  {-(+)>}{$\toplus$}{2}  
  {-(-)>}{$\tominus$}{2}  
  {<-}{$\leftarrow$}{2}
  % {\\}{$\lambda$}{1}
  {<~}{$\prec$}{2}
  {<\{}{$\triangleleft\{$}{2}
  {\{}{$\{$}{1}  
  {\}}{$\}$}{1}  
  {[]}{$\Box$}{1}
  {[|}{$\llbracket$}{1}
  {|]}{$\rrbracket$}{1}
  {<:}{$<:$}{1}
  {*}{$\star$}{1}
}

\newcommand{\SystemFOmegaSub}{System~$F_{\omega}^{sub}$}
\newcommand{\sr}{\scalinadrulename}
\input{grammar}
\input{theory}
\input{theory_override}
\usepackage{ottlayout}
%\ottstyledefaults{premiselayout=justify,numberpremises=yes,numbercolour=gray} 

\begin{document}
   \authorpermission

   \conferenceinfo{FOOL '08}{13 January, San Francisco, California, USA.} 
   \copyrightyear{2008} 
   \copyrightdata{} 
   
%\titlebanner{working draft \svnid}        % These are ignored unless
%\preprintfooter{Submission for FOOL '08}   % 'preprint' option specified.

\title{Safe Type-level Abstraction in Scala}
%\subtitle{Towards Type-level Computation in Scala}

\authorinfo{Adriaan Moors\thanks{The first author is supported by a grant from the Flemish IWT. Part of the reported work was performed during a 3-month stay at EPFL. }  \and Frank Piessens} % \and \mbox{Wouter Joosen}
           {K.U. Leuven}
           {\{adriaan, frank\}@cs.kuleuven.be}
\authorinfo{Martin Odersky}
           {EPFL}
           {martin.odersky@epfl.ch}
%LAMP, Station 14 
%Swiss Federal Institute of Technology in Lausanne (EPFL) 
%CH-1015 Lausanne 

\maketitle

%  link to theorystdalone.pdf / placeholder for meta-theory

\begin{abstract}
Most formal accounts of object-oriented languages have focussed on type soundness: the safety that type checking provides with respect to \emph{term-level} computation and abstractions. However, with type-level abstraction mechanisms becoming increasingly sophisticated, bringing this guarantee to the level of types has become quite pressing. We call this property \emph{kind soundness}: kind checking ensures that type constructors are never applied to unexpected type arguments. We present Scalina, a purely object-oriented calculus that employs the same abstraction mechanisms at the level of types as at the level of terms. Soundness for both levels can thus be proven by essentially the same arguments. Kind soundness finally allows designers of type-level abstractions to join their term-level colleagues in relying on the compiler to catch deficiencies before they are discovered by their clients.
\end{abstract}

\section{Introduction} \label{intro}
Scalina is a purely object-oriented calculus that provides the formal underpinning for our implementation of higher-kinded types \cite{moors07:tcpoly} in Scala \cite{LAMP-REPORT-2006-001}. Scalina introduces a number of novelties with respect to earlier object-oriented calculi \cite{DBLP:journals/toplas/IgarashiPW01,DBLP:conf/ecoop/OderskyCRZ03,DBLP:conf/mfcs/CremetGLO06}. The most notable improvement over the \nuObj~calculus is that kind checking ensures type applications never ``go wrong'': we dub this property \emph{kind soundness}. 

Traditionally, most object-oriented languages and their underlying formalisms use a mix of FP-style and OO-style abstraction. The former is based on lambda abstraction and function application, whereas OO-style abstractions are built from abstract members and composition (via subclassing or mixin composition). 

Java, for example, uses functional abstraction for methods and classes, which may be parametric in types and values. Of course, Java also supports OO-style abstraction: a class with an abstract method abstracts from the implementation of that method. A subclass is expected to provide the concrete implementation. 

Like \nuObj, Scalina is a purely object-oriented calculus: there are no constructs for parameterisation. Yet, as we will demonstrate, Scalina is able to express the same abstractions as, for example, \SystemFOmegaSub \cite{DBLP:conf/popl/Cardelli88,DBLP:journals/tcs/PierceS97,DBLP:journals/iandc/CompagnoniG03}, with the same safety guarantees.
% System F-omega: girard:thesis,DBLP:conf/programm/Reynolds74

The rest of this section elaborates on the problem statement and gives some initial insight into our solution. Then, we get our feet wet with Scalina's syntax and intuitions in Section \ref{scalina}, before delving deeper into the levels of terms (Section \ref{terms}) and types (Section \ref{types}). The latter two sections discuss computation and classification at the respective levels. We briefly motivate Scalina's design and position it in the design space in Section \ref{sec:design}. In Section \ref{encoding} we make the relation between Scalina and \SystemFOmegaSub~more precise. We sketch the meta-theory in Section \ref{meta}. Finally, we briefly discuss related work (Section \ref{related}) before concluding in Section \ref{conclusion}.




\subsection{Kind soundness}
Scala supports two styles of abstraction: the functional style uses parameterisation, whereas abstract members represent the object-oriented way. 
It is natural to ask whether one style can be used exclusively. At first sight, the object-oriented style can encode the functional one. We restrict the discussion to the technicalities of the encoding; the impact on the programming experience is outside the scope of this paper.

Scala's abstract type members closely correspond to type parameters, and abstract type member refinement can be seen as the object-oriented counterpart of type application. Abstract type member refinement is a restricted form of mixin composition that can be used to override abstract type members with concrete ones. However, it turns out that this encoding does not preserve the safety properties that are ensured by parameterisation.
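The correspondence can be sketched in a few lines of Scala. The \type{PBox}/\type{MBox} pair below is a hypothetical illustration (not one of the paper's examples): the refinement \lstinline!MBox { type A = Int }! plays the role of the type application \type{PBox[Int]}.

```scala
// Hypothetical illustration: a type parameter...
trait PBox[A] { def value: A }

// ...versus an abstract type member; refining the member corresponds
// to applying the type constructor to a type argument.
trait MBox { type A; def value: A }

object BoxDemo {
  val p: PBox[Int] = new PBox[Int] { def value = 42 }
  // "Type application" via abstract type member refinement:
  val m: MBox { type A = Int } = new MBox { type A = Int; def value = 42 }
}
```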


\begin{lstlisting}[float,caption=Expressing \type{Iterable} using parameterisation,label=lst:iter:fp]
trait Iterable[A, Container[X]] {
  def map[B](f: A => B): Container[B]
}

trait List[A] extends Iterable[A, List]
\end{lstlisting}


To make this concrete, Listing \ref{lst:iter:fp} uses parameterisation to express the well-known \type{Iterable} abstraction in Scala. The \type{Iterable} trait (an abstract class) takes two type parameters: the first one represents the type of the elements, and the second one abstracts over the type constructor of the container. To denote that it abstracts over a type constructor, the \type{Container} parameter declares a formal type parameter \type{X}.

\begin{lstlisting}[float,caption=Encoding \type{Iterable}'s type parameters as members,label=lst:iter:oo ]
trait TypeFunction1 { type A }

trait Iterable extends TypeFunction1 {
  type Container <: TypeFunction1

  def map[B](f: A => B): Container{type A = B}
}

trait List extends Iterable { type Container = List }
\end{lstlisting}

Listing \ref{lst:iter:oo} demonstrates the object-oriented style. Here, \type{Iterable} abstracts over the type of its elements and the container using abstract members. The \type{A} type member is inherited from \type{TypeFunction1}, and the \type{Container} type constructor parameter is represented as an abstract type member that is bounded by \type{TypeFunction1}. \code{map}'s result type is expressed by refining \type{Container}'s abstract type member \type{A} so that it equals \type{B}.


\begin{lstlisting}[float,label=lst:numlistmem,caption=\type{NumericList}: an illegal subclass of \type{Iterable}]
trait NumericList[A <: Number] 
    extends Iterable[A, NumericList]
\end{lstlisting}

So far, the encoding remained faithful to the original. However, a discrepancy emerges when we encode an erroneous program. The type application \type{Iterable[A, NumericList]} in Listing \ref{lst:numlistmem} is not allowed by the compiler, whereas we will see its encoding is accepted without warning. If it were not ruled out, \code{map}'s result type could apply any type \type{B} to \type{NumericList}, while it accepts only subtypes of \type{Number}. By ruling out \mbox{\type{Iterable[A, NumericList]}}, the compiler prevents this error from ever happening.

\begin{lstlisting}[float,caption=The encoding of \type{NumericList} eludes the type checker,label=lst:numericlis:oo ]
trait NumericList extends Iterable {
  type A <: Number
  type Container = NumericList // Incorrect, but no error reported!
}
\end{lstlisting}

Unfortunately, the encoding does not preserve this property, which we call ``kind soundness''. This is illustrated by Listing \ref{lst:numericlis:oo}, which is considered a valid Scala program. The compiler silently accepts this program, even though we could never complete its implementation (at some point we will have to instantiate a \type{NumericList} for an arbitrary type of elements, and the compiler will catch our mistake). To relate this to type soundness, the value-level equivalent of this oversight would be to allow passing a function of type, e.g., \type{Number => Any} to a function that expects an \type{Any => Any}. 
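The term-level analogue can be checked directly in Scala (a minimal sketch with hypothetical names): function arguments are contravariant, so an \type{Any => String} may stand in for a \type{Number => String}, but not the other way around.

```scala
object VarianceDemo {
  // Arguments are contravariant: Any => String <: Number => String ...
  val f: Any => String = _.toString
  val g: Number => String = f            // accepted
  // ...but the converse is rejected by the compiler, just as the type
  // application Iterable[A, NumericList] is rejected at the type level:
  // val h: Any => String = g            // type mismatch
}
```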

Note that this indulgence does not imply \emph{type} unsoundness, as these erroneous types cannot be instantiated. Nonetheless, we regard it as a shortcoming of the compiler that these vacuous intersection types are allowed to slip by unnoticed. Even though they are prevented from being instantiated, they could be unmasked earlier.

To motivate this desire for early detection of these inconsistencies, consider the analogy with abstract classes. Suppose classes were allowed to be abstract implicitly, so that an accidentally abstract class would not be discovered until a client attempts to instantiate it. Most languages consider this situation undesirable and require an abstract class to be marked as such explicitly. This eliminates the possibility that the programmer simply forgot to implement a method.

Not detecting erroneous type applications, which manifest themselves as intersection types that unexpectedly do not have any instances, has the same effect as allowing any class to be abstract implicitly: the error is detected eventually, but it could have been signalled earlier. Even though other uses of intersection types might sensibly result in empty types, we do not consider this to be one of them.

This kind unsoundness has its roots in the \nuObj~calculus \cite{DBLP:conf/ecoop/OderskyCRZ03}, which allows abstract type members to be refined \emph{covariantly}, thus \lstinline!NumericList <: TypeFunction1!, so that the encoding of the erroneous type application results in a valid program.

We recover early error detection in Scalina by differentiating covariant and contravariant members, instead of assuming they all behave covariantly. This distinction corresponds to the fact that some members abstract over input, whereas others represent the output of the abstraction. Input members should behave contravariantly, like the types of function arguments, whereas covariance is required for output members, which correspond to a function's result type. With this distinction, a purely object-oriented calculus can encode functional-style abstraction with the same safety guarantees.

If we look at the problem from the point of view of the \emph{clients} of an abstraction, we distinguish external and internal clients. External clients supply information to an abstraction without knowing exactly which subtype of the abstraction they are dealing with. Therefore, the constraints on these missing pieces of information must only be \emph{weakened} in subtypes. Internal clients, which are tightly related by subtyping, should be able to strengthen the result of the abstraction.

Thus, Scalina complements Scala's covariant type members with contravariant ones, which we shall call ``un-members''. Listing \ref{lst:numericlis:scalina} shows a pseudo-Scala rendition of the encoding, where un-members are indicated using the \code{deferred} keyword. They are made concrete by external clients using the \lstinline!... <{ ... }! construct. 

Since the \type{A} type member is an input to the abstraction, it must behave contravariantly, so that \type{NumericList} is not allowed to strengthen the bound on  the \type{A} un-member that it inherited from \type{Iterable}.

\begin{lstlisting}[float,caption=Using un-members to recover kind soundness,label=lst:numericlis:scalina]
trait TypeFunction1 { deferred type A }

trait Iterable extends TypeFunction1 {
  type Container <: TypeFunction1

  def map[B](f: A => B): Container<{type A = B}
}

trait List extends Iterable { type Container = List }

trait NumericList extends Iterable {
  deferred type A <: Number // error: covariant change not allowed
  type Container = NumericList 
}
\end{lstlisting}




\subsection{Methodology}
To summarise the above example, a programmer should use un-members to model the input to an abstraction. This corresponds to the arguments of a method or the type parameters of a generic class. Normal members are used to define the result of the abstraction.

Note that un-members and abstract members impose an ordering discipline. A type un-member that classifies a value un-member must be refined before the value can be supplied. This corresponds to a polymorphic value in functional programming. Furthermore, types may contain abstract members, but objects must not. Therefore, an object cannot be created until the abstract members have been made concrete. 
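In Scala terms, this ordering discipline is familiar from polymorphic values (a minimal sketch with hypothetical names): the type argument must be fixed before a value of that type can be supplied.

```scala
object OrderingDemo {
  // The type "un-member" A classifies the value argument, so it must be
  // refined (instantiated) before the value can be supplied:
  def identity0[A]: A => A = x => x
  val idInt: Int => Int = identity0[Int]  // type supplied first
}
```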

The example in Section \ref{example:list} will illustrate these points in more detail.

\subsection{Contributions}
Functional abstraction clearly distinguishes a function's arguments from its result. In the object-oriented setting, a similar distinction must be made for abstract members. We introduce un-members, which safely model the input to an abstraction, and re-use traditional members to represent the result of the abstraction. Thus, an object with un-members may be thought of as a curried function that takes its keyword arguments in any order. The members of such an object represent its results.
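The curried-function-with-keyword-arguments analogy can be made concrete with Scala's named and curried parameters (hypothetical helpers, a sketch):

```scala
object KeywordDemo {
  // Named arguments may be supplied in any order, like un-members:
  def map[A, B](xs: List[A], fun: A => B): List[B] = xs.map(fun)
  val r1 = map(fun = (x: Int) => x + 1, xs = List(1, 2))

  // Currying refines one "argument un-member" at a time; the fully
  // applied result then corresponds to selecting the result member:
  def mapC[A, B](fun: A => B)(xs: List[A]): List[B] = xs.map(fun)
  val inc: List[Int] => List[Int] = mapC((x: Int) => x + 1)
}
```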

We study purely object-oriented abstraction in a dependently typed, three-level calculus that uses the same concepts for abstraction and computation on terms and types. As in the \nuObj~calculus, function application is decomposed into refinement and member selection. Because the level of types is modelled after the level of terms, a type-level function is modelled as a type with type un-members. 

The distinction between un-members, which behave contravariantly, and normal, covariant, members, is instrumental in proving soundness on the level of types and kinds. Due to the symmetric design of our calculus, the soundness proofs proceed by similar arguments at both levels.



%: un-members are input members, objects with un-members are like curried functions with keyword arguments that can be passed in any order

\notnow{Finally, we foresee extending Scalina with extensions such as mutable state, virtual classes \cite{DBLP:conf/popl/ErnstOC06,DBLP:conf/aosd/ClarkeDNW07}, and type-level computation. This is beyond the scope of the paper.
}
\section{Scalina: Syntax and Intuitions} \label{scalina}


Scalina is a three-level object-oriented calculus: we distinguish terms (objects), types, and kinds. Terms are for computation, types are used for classification as well as computation, and the role of kinds is strictly limited to classification. Computation is performed using two mechanisms: member selection and member refinement. Classification is more intricate, ranging from merely structural descriptions of the classified entities to nominal classification, intersections of classifiers, singletons, and strictly empty classifiers.

\comment{We use ``classifier'' to denote a type or a kind. An ``entity'' -- a term or a type -- performs  computation and is subject to classification -- by a type or a kind, respectively.}

\subsection{Syntax}
\begin{figure}
\grammartabularSTY{\scalinagrammt\\\scalinagrammTT\\\scalinagrammT\\\scalinagrammm\\\scalinagrammcm}
  \caption{Scalina Syntax (terms and types)}
  \label{fig:grammar}
\end{figure}

Figures \ref{fig:grammar} and \ref{fig:grammarX} outline Scalina's syntax. We use `$\llbracket$ \ldots $\rrbracket$' to denote the optionality of `\ldots'.

The term level consists of member selection, member refinement, and instantiation. Analogously, a type may be a type selection, a refinement or a structural type. A structural type binds the self variable $x$ in the members it includes; if the type of the self variable is not specified, it is assumed to be the structural type itself. We use the meta-variable $R$ to refer to a structural type. Additionally, a type may be an intersection type, a singleton type (that depends on a path), the top or the bottom of the subtype lattice, or an un-type. Finally, we introduce $\Box T$, which stands for the result of refining all of $T$'s un-members with unknown terms and types. We will discuss this construct in more detail in Section \ref{concepts:types}.

Figure \ref{fig:grammarX} defines the shape of kinds, paths, values, and the typing context $\Gamma$. A path is a chain of member selections that starts with a variable or an instantiation expression \code{new T}, which represents an object. We mainly restrict the shape of paths to simplify the proofs in the meta-theory. 
\subsection{Core concepts}
Before describing the rules that define computation and classification in Scalina, we build up intuitions about the core concepts that underlie these mechanisms.

\subsubsection{Members and un-members} \label{concepts:members}
Members are the liaisons between the different levels: a type describes the value members that may be selected on the terms it classifies, as well as the type members that may be selected on the type itself. The description of a member consists of the label of the member, the classifier of the entity it stands for and -- if the member is concrete -- the actual entity it is bound to (its right-hand side, or RHS). For value members, the classifier is a type and the RHS is a term, and type members specify the kind that classifies the type they are bound to.\comment{ In Scala, kinds are implicit and type bounds play the role of classifier in abstract type members. }

Scalina's \emph{un-members} are a more radical departure from Scala. Un-members are used to encode parameterisation: they are placeholders for members that must be provided by the client of the abstraction, much like the arguments of a function. Un-members are turned into normal members using member refinement, which corresponds to passing arguments to a function. An entity with multiple un-members is the equivalent of a curried function: refining one of the un-members results in an entity with one less un-member to be refined. Once all un-members have been refined, the member representing the function's result may be selected to complete the application. This constitutes the essence of computation -- on terms as well as types -- in Scalina.

Members and un-members can be seen as the two halves of the contract specified by a classifier: members are \emph{available} to the client, whereas the client must \emph{supply} the un-members. Note that abstract members have different semantics from un-members: an abstract member is made concrete using composition within a subtyping hierarchy, while an un-member is to be supplied by an external client. A type with abstract members cannot be instantiated. An abstract type can however be constrained (using the kind \kind{Concrete(R)}) so that it does not contain any abstract members.
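The difference between the two kinds of completion can be illustrated in Scala (hypothetical \type{Greeter}, a sketch): an abstract member is completed by composition within the subtyping hierarchy, whereas a parameter, like an un-member, is supplied by an external client.

```scala
object ContractDemo {
  // An abstract member is made concrete via composition in a subtype...
  trait Greeter { def greeting: String; def greet: String = greeting + "!" }
  object Hi extends Greeter { def greeting = "hi" }

  // ...whereas a parameter (~ un-member) is supplied by the caller:
  def greet(greeting: String): String = greeting + "!"
}
```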

\subsubsection{Terms}
The canonical form of a term is an object. For syntactic economy, and since Scalina does not model effects yet, an object is represented by the instantiation of a type without abstract members. Conceptually, an entity is just a vessel that records to which entity each of its members -- as described by the entity's classifier -- is bound. Thus, an object contains mappings (from a label to a term) for all of the members specified in its type. Operationally, un-members can be thought of as members that are simply absent from this mapping.

\begin{figure}[t]
\grammartabularSTY{\scalinagrammKK\\\scalinagrammK\\\scalinagrammp\\\scalinagrammv\\\scalinagrammG}
  \caption{Scalina Syntax (kinds, etc.)}
  \label{fig:grammarX}
\end{figure}

\subsubsection{Types} \label{concepts:types}
\begin{minipage}{\columnwidth}
\begin{quote}
  If, on the term level, parameterising over functions is useful, doing the same on the level of types sounds like an obvious thing to do.
\end{quote} \hfill Erik Meijer\\
\end{minipage}

To generalise Meijer's motivation for higher-kinded types \cite{meijer07:confessions}, rephrasing in our terminology: ``If, on the term level, abstracting over terms that themselves abstract over terms is useful, doing the same on the level of types sounds like an obvious thing to do.'' Scalina manifestly supports this view by using the same abstraction mechanism on both levels: entities that abstract over other entities (using un-members) are themselves first-class entities.
  
Types play a dual role: besides computation, their main purpose is classifying terms. As explained in the introduction, types differ from terms in that they may contain abstract members for abstraction towards subtyping clients. Another difference from the term level is that we intend to tone down type-level computation so that it becomes decidable (this is future work).

Types classify terms by specifying the labels and the types of the members that may be selected on these terms. A structural type classifies all terms that have the prescribed members. Note that we use kinds to distinguish nominal types from structural ones. An intersection type is inhabited by the terms that inhabit both its constituent types. A singleton type classifies exactly one object and an un-type does not classify any terms at all. An un-type is used as the classifier of a value un-member.
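Several of these classifiers are directly expressible in Scala; for instance, an intersection (compound) type is inhabited exactly by the terms that inhabit both of its constituents (hypothetical traits, a sketch):

```scala
object ClassifyDemo {
  trait HasName { def name: String }
  trait HasId { def id: Int }
  // An inhabitant of the intersection satisfies both constituent types:
  def describe(x: HasName with HasId): String = x.name + "#" + x.id
  val both: HasName with HasId =
    new HasName with HasId { def name = "n"; def id = 1 }
}
```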
  
Type-level computation uses the same concepts as computation at the term level. However, because types may contain abstract members, we must be more careful. For soundness, type member selection is only allowed on types that (eventually) consist solely of concrete members, although the exact RHS need not be known. Type selection on a singleton type is always safe, even if the selected type member's right-hand side is not known statically: as long as the member is not an un-member, the object that the singleton type depends on could not have been created unless that member was concrete. 

In Scala, these abstract type members may \emph{only} be selected on singleton types. Scalina generalises this to the notion of \emph{concrete types}, so that abstract type members may be selected on any type that necessarily contains only concrete type members, which naturally includes singleton types. 
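Scala's restriction can be seen in the following sketch (hypothetical names): the abstract type member \type{A} is selected on the singleton type of the stable value \code{h}, which is safe because \code{h} could not have been created unless \type{A} were concrete.

```scala
object PathDemo {
  trait HasA { type A; def mk: A }
  // Selection on a singleton type: the result type depends on the path h.
  def use(h: HasA): h.A = h.mk
  val h: HasA { type A = String } =
    new HasA { type A = String; def mk = "hi" }
}
```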

Similarly, it is always safe to assume that the type of the self variable does not contain any un-members: the self-variable can only be accessed as a consequence of an external member selection, which in turn is not allowed on objects with un-members. To exploit this invariant, we introduce the type $\Box T$, which stands for the result of refining all $T$'s un-members. We shall illustrate this with an example in Section \ref{example:list}.

\comment{
page 5, left, unclear: "Note that singleton types do provide stronger
guarantees with respect to equality of types: types that are selected on
singleton types that depend on paths that refer to the same object may safely
be used interchangeably."

page 5, left, I'm missing intuition to understand the paragraph starting with
"Similar to the fact ..."

page 5, left, example Scala definition. The name u is not bound. After reading
the paper through, I understand that it must stand for the universe type, but
first time readers will find this to be unbound.
}

\comment{following paragraphs are unclear
Note that singleton types do provide stronger guarantees with respect to equality of types: types that are selected on singleton types that depend on paths that refer to the same object may safely be used interchangeably. Selecting a concrete type member with an unknown RHS on a non-singleton type clearly provides less information, but is equally useful in the context of type-level computation. The discussion on the rules for kind assignment in Section \ref{theory:kinding} delves deeper into this issue. 


To make this concrete, the Scala definition \lstinline!class Foo {x => ... }! is encoded as \lstinline!type Foo: Nominal({x => ...}) = {x: []u.type#Foo => ... }!, which means that the self variable \code{x} may be used to access any un-members in \code{Foo}. Here \code{u}, is the self variable of the enclosing universe object that represents the implicit top-level package -- this is explained in more detail in the example in Section \ref{example:list}.
}

The canonical form of a type is computed by performing all allowed member selections. This corresponds to the $\beta$-normal form in functional calculi.   %: member refinement is analogous to the term-level refinement



\subsubsection{Kinds}


Kinds are only used for classifying types: they denote which members may be selected on the types they classify. An interval kind takes over the role of the bounds of a Scala-style abstract type member: \lstinline!In(S, T)! is inhabited by types that are subtypes of \type{T} and supertypes of \type{S}. 
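The Scala counterpart of the interval kind is a bounded abstract type member (hypothetical trait, a sketch): \lstinline!type T >: S <: U! is inhabited by the types between \type{S} and \type{U}.

```scala
object IntervalDemo {
  // T is constrained to the "interval" between Nothing and Number,
  // i.e. the Scala analogue of the kind In(Nothing, Number):
  trait Elem { type T >: Nothing <: Number; def first: T }
  val ints: Elem { type T = Integer } =
    new Elem { type T = Integer; def first = Integer.valueOf(7) }
}
```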


\lstinline!Struct(R)! is inhabited by types that have at least the members specified in \type{R}. These members must be well-formed under the assumption that the self variable has the declared self type. \lstinline!Nominal(R)! is similar to \lstinline!Struct(R)!, except that it serves as a marker for concrete type bindings that represent classes: normalisation should not replace a type selection of this kind with its right-hand side.

Finally, $T$ has kind \lstinline!Concrete(R)! if it has at least the members specified in \type{R}, and none of these are abstract. Furthermore, $\Box T$ must be a subtype of the self type declared in \type{R}, so that such a type may be instantiated (if it is not a singleton type) or be used as the target of type member selection.


\subsection{Example: polymorphic lists}\label{example:list}
\lstset{language=scalina} 
Listing \ref{lst:list} implements polymorphic lists with \code{map} to illustrate Scalina's support for parametric polymorphism and higher-order functions.

First, we introduce a little syntactic sugar. \begin{itemize}
  \item The kind \kind{*} should be expanded to \lstinline!Struct({x => })!,
  \item the type \type{p.L} is shorthand for \type{p.type#L},
  \item the following type members are easily expanded:
  \begin{itemize}
    \item \code{type L = R} becomes \code{type L : Struct(R) = R},
    \item \code{type L <~ T} means \code{type L : Nominal(R) = T}, where \type{R} is the expansion of \type{T} to its least structural supertype (by the $\prec\hspace*{-0.3em}\prec$ relation defined in Fig. \ref{fig:typeExp}).
  \end{itemize}
\end{itemize}



\begin{lstlisting}[caption=Polymorphic List in Scalina,label=lst:list,float=t,language=scalina]
new { u => 
  type Fun1 <~ {self : [] u.Fun1 =>
    type T1   : Un[*]
    type T2   : Un[*]
    val v     : Un[self.T1]
    val apply : self.T2
  }

  type List <~ {self : [] u.List =>
    type Element : Un[*]
    type map = { selfMap : self.map =>
      type Tgt : Un[*]
      val fun: Un[u.Fun1<{type T1=self.Element}
                        <{type T2=selfMap.Tgt}]
      
      val apply: u.List<{type Element=selfMap.Tgt}
    }
    
    val map: self.map
  }

  type Nil <~ u.List & {self : [] u.Nil =>
    val map : self.map = 
      new self.map & { s : self.map =>
        val apply: u.List<{type Element = s.Tgt} 
          = new (u.Nil <{type Element = s.Tgt})
      }
  }

  type Cons <~ u.List & {self : [] u.Cons =>
    val hd: self.Element
    val tl: u.List<{type Element=self.Element}
    
    val map : self.map = 
      new self.map & { s : [] self.map =>
        val apply: u.List<{type Element=s.Tgt} 
          = new u.Cons<{type Element=s.Tgt} & {sc =>
              val hd: s.Tgt
                = (fun<{val v=self.hd}).apply
              val tl: u.List<{type Element=s.Tgt} 
                = (self.tl.map
                     <{type Tgt=s.Tgt}
                     <{val fun=s.fun}).apply
          }
    }
  }  
}  
\end{lstlisting}

\begin{lstlisting}[caption=Parametric List in Scala,label=lst:listscala,float=t,language=scala]
abstract class List[Element] {
  def map[Tgt](fun: Element => Tgt): List[Tgt]
}

class Nil[Element] extends List[Element] {
  def map[Tgt](fun: Element => Tgt) = new Nil[Tgt]
}

abstract class Cons[Element] extends 
                 List[Element] { self =>
  val hd: Element
  val tl: List[Element]
    
  def map[Tgt](fun: Element => Tgt) = new Cons[Tgt]{
    val hd: Tgt = fun(self.hd)
    val tl: List[Tgt] = self.tl.map[Tgt](fun)
  }
}
\end{lstlisting}

Since type members must always be nested in other types, our program is a term that instantiates the structural type that represents our ``universe'' (hence the \code{u} as the self variable). The type \type{u.type#Fun1}, or using syntactic sugar, \type{u.Fun1}, corresponds to a top-level class in Scala.

The first abstraction is a polymorphic unary function. \type{Fun1} is a nominal type that expands to a structural type with self variable \code{self}, whose type is assumed to be the nominal type itself, with all its un-members refined. This special self type is crucial: without it, the body of the function could not access its arguments, as these would be considered un-members. In this example, $\Box u.Fun1$ expands to the structural type \lstinline!{x => type T1: *; type T2: *; val v: x.T1; val apply: x.T2}!

 \type{Fun1} takes two type arguments: the type of its value argument (\type{T1}) and the type of its result (\type{T2}). It also requires one value argument (\type{v}). These arguments are un-members, which must be provided by the caller of the function. The abstract \code{apply} member models the function's body. It must be made concrete before an actual function value can be created.
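For comparison, a Scala analogue of \type{Fun1} might look as follows. This is an illustrative sketch (not one of the paper's listings): in Scala, the value un-member \code{v} becomes an ordinary parameter of \method{apply}, supplied at the call site rather than by refinement.

\begin{lstlisting}[language=scala]
trait Fun1[T1, T2] {
  def apply(v: T1): T2
}
\end{lstlisting}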

\type{List} abstracts over the type of its elements (\type{Element}) and declares one abstract method, \code{map}. We define a structural type, \type{map}, and an abstract value member with the same name. This makes it more convenient to make the member concrete: subclasses of \type{List} may simply use an instance of the composition of \type{map} with another type that makes the \code{apply} method concrete.


The implementation of the \code{map} ``method'' in \type{Nil} simply returns a new instance of \type{Nil} with the appropriate element type. In \type{Cons}, the result is another cons cell that applies the supplied function to the head of the list and recurses on the tail.

Note that \code{hd} and \code{tl} model constructor arguments: since they are required for an object of this type to be created, we use abstract members and not un-members.
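For instance, a one-element list could be created along the following lines -- a hypothetical sketch that follows the pattern of the \code{map} implementation in \type{Cons}, assuming some element type \type{u.T} and a term \code{x} of that type:

\begin{lstlisting}[language=scalina]
new u.Cons<{type Element=u.T} & {self =>
  val hd: u.T = x
  val tl: u.List<{type Element=u.T}
    = new (u.Nil<{type Element=u.T})
}
\end{lstlisting}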

Listing \ref{lst:listscala} shows a Scala rendition of the example that stays as close as possible to the Scalina version, using an idiomatic mix of functional and object-oriented abstractions.

%\TODO{perhaps show some reduction sequence to go with the polymorphic lists syntactic example?}



\section{Terms} \label{terms}
\subsection{Computation}
\begin{figure}[t]
%\ottstyledefaults{premiselayout=oneline}
\begin{small}
    \scalinadefnsunfold
\end{small}
%\ottstyledefaults{premiselayout=justify}
  \caption{Type Expansion for Run-time Lookup}
  \label{fig:unfold}
\end{figure}

\begin{figure}
\begin{small}
  \scalinadefnseval  
\end{small}
  \caption{Term Evaluation}
  \label{fig:eval}
\end{figure}
Before we turn to the evaluation rules, we briefly consider how members are looked up at run time. For now, type members are statically bound and the role of types during evaluation is strictly limited to mapping the labels of the members of an object to terms. However, we anticipate support for virtual classes, which requires run-time lookup of types. In future work, we will prove that, for the current system, our approach to lookup is equivalent to statically expanding types that are instantiated to mappings of labels to terms, with the corresponding trivial run-time lookup function.

To look up a member at run time, we use the unfold relation ($\prec$) defined in Fig. \ref{fig:unfold}, which relies on the following helper relations: $\scalinant{T} \, \ni \, \scalinant{ll} \, \mapsto \, \scalinant{e} \, ~\backslash\backslash~ \, \mathit{x}$ denotes that $\scalinant{T}$ expands to a structural type that contains a member with label $\scalinant{ll}$ and right-hand side $\scalinant{e}$ in which the self variable $\mathit{x}$ is bound. $\scalinant{ll}$ stands for either a term- or a type-label, and $\scalinant{e}$ is a term or a type. Similarly, $\scalinant{T} \, \ni^{\mbox{\scriptsize un}} \, \scalinant{ll}$ is derivable if $\scalinant{T}$ has an un-member with the specified label: it expects this un-member to be refined. 

Furthermore, we factor out what it means to refine a single member:
$\scalinakw{refineIf}( \scalinant{m'} ,  \scalinant{cm} ) $ can be seen as a function that returns the refinement of the un-member $\scalinant{m'}$ with the $\scalinant{cm}$'s RHS if their respective labels are the same, otherwise it simply returns $\scalinant{m'}$.  Similarly,
$\scalinant{m} = \scalinakw{refines}( \scalinant{m'} ,  \scalinant{cm} ) $ holds if $\scalinant{m'}$ and $\scalinant{cm}$ have the same label and $\scalinant{m}$ is the result of refining $\scalinant{m'}$ with $\scalinant{cm}$. Finally, intersecting structural types corresponds to taking the union ($\uplus$) of the corresponding sets of members, with concrete members in the right type overriding corresponding members in the left one.

The actual lookup proceeds by expanding a type to the corresponding structural type, after which looking up the required label is easy. The only tricky rule in the definition of the expansion relation $\prec$ is \sr{lu\_singX}. During evaluation, all types are of the shape \code{(new T).l1. ... .ln.type}. To reduce a selection $p.l$ to the base case, which is handled by \sr{lu\_sing}, we must look up $l$ in $p.type$ and inductively expand the resulting singleton type. To avoid extra complexity in the meta-theory, we factor in the evaluation rule for value selection instead of using evaluation directly.


The small-step evaluation relation that defines Scalina's operational semantics \cite{DBLP:journals/iandc/WrightF94} is shown in Fig. \ref{fig:eval}. It consists of two evaluation rules and four congruence rules. The first evaluation rule, \sr{E\_sel}, rewrites a member selection on an object to the RHS of that member after replacing the self variable by the object that was the target of the selection. The hypothesis that the label must be present in the object is represented as a lookup on the type. The side-condition that the member's RHS must be a path is crucial for proving type preservation: a path may only be replaced by a path.  For now, all terms are paths in Scalina. However, in anticipation of adding effects to the calculus, we already distinguish paths and arbitrary terms.

\sr{E\_rfn} deals with refinement: it checks that the refined member was indeed an un-member (i.e., that it was missing from the object), and then adds it to the object by refining the type that is used to track its members. The side-condition that $l$ was an un-member is not necessary for proving type soundness, as the typing rules ensure that a well-typed term always meets it. We include it so that we can prove that un-members are never refined more than once: a program that violates this rule gets stuck, and, by progress, well-typed terms never get stuck.
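As an illustrative sketch (using a hypothetical type \type{u.T} and path \code{p}), refining and then selecting on a simple object proceeds as follows: \sr{E\_rfn} first records the concrete binding for \code{a} in the type that tracks the object's members, after which \sr{E\_sel} can substitute the target for the self variable.

\begin{lstlisting}[language=scalina]
// let T' = {x => val a: u.T = p; val b: u.T = x.a}
((new {x => val a: Un[u.T]; val b: u.T = x.a})
   <{val a = p}).b
  --> (new T').b   // E_rfn: refinement pushed into the type
  --> (new T').a   // E_sel: b's RHS, with x := new T'
  --> p            // E_sel: a's RHS is now the path p
\end{lstlisting}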

The only non-trivial congruence rule, \sr{E\_ctxMem}, performs evaluation under member bindings, which can be thought of as running the constructor. This congruence rule is necessary to fulfil the side-condition of the rule for member selection. The shape of the type $U$ is a technicality required by the proof of type preservation. It can be seen as an artefact of our use of full-blown types simply to track the members of an object. The remaining congruence rules are standard.




\subsection{Classification}
\begin{figure}
\begin{small}
  \scalinadefnstyping 
\end{small}
  \caption{Term Classification}
  \label{fig:termcls}
\end{figure}

Figure \ref{fig:termcls} defines the shape of well-typed terms. When checking a value member selection, we treat the case where the target of the selection is a path (\sr{T\_selpath}) differently from when it is not (\sr{T\_sel}). Suppose we treated both cases equally. Consider e.g., \mbox{\lstinline!t: \{x => val a: x.b.type = x.b; val b: Any\}!}, so that \mbox{\lstinline!t.a : t.b.type!}. Now, for the singleton type \type{t.b.type} to be well-formed, \code{t} must be a path. Therefore, the selection is not allowed if this is not the case. If the declared type of the member does not rely on the self variable, the target need not be a path.

Note that the rules for member selection rely on subsumption to discard all other members in the type of the target (as well as the selected member's RHS). This is not just a matter of cosmetics: this formulation ensures that the type of the target does not contain any other un-members, as they cannot be forgotten by subsumption. In terms of function types, the underlying intuition is that subsumption cannot change the number of arguments that a function takes. We will discuss this in more detail in the section on subtyping.

\sr{T\_rfn} classifies member refinement -- in a sense, the dual of member selection. Essentially, this corresponds to checking the type of the argument while typing function application. This check is performed by requiring that there is a member with the same label as $cm$, and that the result of refining this member is well-formed.

We cannot use subsumption in this rule, as the target that is being refined may have several un-members. The type of a refined term is the corresponding refinement of the original term's type. Note that this type refinement could not be replaced by an intersection type. For such a type $T \& S$ to be well-kinded, $S$'s members must conform to $T$'s, but here, $T$ contains an un-member whereas $S$ does not, and subtyping can never relate un-members to regular members.

According to \sr{T\_new}, \textbf{new} $T$ is well-typed with type $T$ if $T$ statically expands to the structural type $R$ (by $\prec\hspace*{-0.3em}\prec$, defined in Fig. \ref{fig:typeExp}), where $R$ is of kind \textbf{Concrete}$(R)$. The remaining side conditions rule out degenerate cases. It is necessary to expand $T$ to $R$ and then check that $R$ has kind \textbf{Concrete}$(R)$, because checking only that \mbox{$T$ : \textbf{Concrete}$(R)$} implies that a \emph{subset} $R$ of $T$ is safe to instantiate, but not necessarily $T$ itself.

Finally, a path has the corresponding singleton type if it is well-typed using the other rules (we assume finite derivations). Subsumption gives a well-typed term a less precise type.


\section{Types and Kinds} \label{types}
\subsection{Computation}
\begin{figure}
\begin{small}
  \scalinadefnstypeNorm
\end{small}
  \caption{Type Normalisation}
  \label{fig:typeNorm}
\end{figure}
  
Type normalisation, as shown in Fig. \ref{fig:typeNorm}, is the ``operational semantics'' of the type level. To compute the normal form of a type, all allowed type member selections are performed, refinements and compositions of structural types are normalised to the corresponding structural type, and paths are safely rewritten if they are statically known to refer to the same object. 

The selection of a type member with declared kind $\mathbf{Nominal}(R)$ is in normal form: these bindings must not be crossed. Hence the side-condition in \sr{N\_sel}. If this condition were omitted, normalisation would no longer be kind-preserving, as a type of kind $\mathbf{Nominal}(R)$ would be replaced by a type of kind $\mathbf{Struct}(R)$, which is not a subkind of $\mathbf{Nominal}(R)$. By analogy to the term level, normalisation checks only the minimal side conditions; a separate theorem proves that it is kind-preserving.

\begin{figure}
\begin{small}
  \scalinadefnstypeExp
\end{small}
  \caption{Type Expansion}
  \label{fig:typeExp}
\end{figure}

Type expansion includes type normalisation, but is more aggressive: it replaces a nominal type binding with its (structural) right-hand side and widens singleton types. This is needed when calculating all the members in a type. Since type expansion must yield the least structural supertype of a type, we cannot use typing in the rules \sr{X\_sing*}, as this may invoke subsumption. 

\sr{X\_singVar} expands a singleton type that depends on a variable, which must therefore be in $\Gamma$. \sr{X\_singNew} handles the other base case, similarly to run-time expansion of types. Finally, \sr{X\_singSel} peels one layer of member selection from the path by approximating the outermost selection by its declared type.

\sr{X\_ncsry} expands $\Box T$ to the expansion of $T$, after essentially stripping all the \type{Un[...]}'s from the declared types and kinds of its members. It achieves this by ``pretending'' to refine every un-member with an unknown right-hand side, so that un-members essentially become abstract members.

\subsection{Subtyping}

Subtyping is mostly standard; the main novelties result from the interaction with un-members. Un-types introduce contravariance (by \sr{ST\_un}), thus deviating from the norm of covariance. Since member subtyping is covariant, an un-member with declared type $\mathbf{Un}[T]$ may only be overridden by an un-member with a declared type that is a subtype of $\mathbf{Un}[T]$, thus it has the shape $\mathbf{Un}[T']$ with $T <: T'$. This means the overriding member weakens the restriction on the term that must be supplied by the client.
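For example (an illustrative sketch, assuming a member type \type{u.T}), the structural type below on the second line is a subtype of the one on the first line, since \type{u.T <: Any} gives \type{Un[Any] <: Un[u.T]}; the override accepts any value from the client, a weaker requirement than demanding a \type{u.T}:

\begin{lstlisting}[language=scalina]
{x => val v : Un[u.T]  }  // supertype: client must supply a u.T
{x => val v : Un[Any]  }  // subtype: Un[Any] <: Un[u.T]
\end{lstlisting}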

If a type $S$ expands to a type $T$, then surely it is a subtype of that type $T$. Expanding $S$ can be thought of as computing a least structural supertype of $S$, following type selections, crossing nominal type bindings and widening singleton types. Similarly, type equality ($\cong$) -- the least reflexive, symmetric, and transitive relation that includes normalisation -- is included (by \sr{ST\_eq}).

The rules \sr{ST\_abs\_upper} and \sr{ST\_abs\_lower} incorporate the declared kinds of abstract type members into the subtyping relation.

For simplicity, the current version of Scalina does not model variance for type constructors, which explains why \sr{ST\_invar} considers type un-members to be invariant.

\comment{This would require extra annotations on type un-members. Consider the example in listing \ref{lst:list}: the type of the argument, $T1$, should behave contravariantly, whereas covariance is expected for the type of the result, $T2$. Yet, both members are classified by the same kind.

To address this, we could split the $\mathbf{Un}[K]$ un-kind into $\mathbf{Con}[K]$ and $\mathbf{Co}[K]$, as follows:

\begin{lstlisting}
type Fun1 <~ {self : u.Fun1 =>
  type T1   : Con[*]
  type T2   : Co[*]
  val v     : Un[self.T1]
  val apply : self.T2
}
\end{lstlisting}

This illustrates that these new kinds are related to the $\mathbf{Un}[T]$ un-type: if the embedded $T$ is allowed to vary, it must be of kind $\mathbf{Con}[K]$. For simplicity, we do not address this issue in the current iteration of Scalina, which explains why \sr{ST\_invar} considers type un-members to be invariant.}

Besides the usual width- and depth-subtyping, subtyping of structural types must take extra care to never forget any un-members during subsumption. Intuitively, subsumption allows the client of a type to relax the expectations it has of that type, but it should not result in the client having fewer obligations.

Subtyping of members is defined in Fig. \ref{fig:subm}. Value members always behave covariantly; a type member becomes invariant as soon as it is made concrete. This is related to the fact that Scalina does not admit late-binding for type members.

To relate this to subtyping of function types in \SystemFOmegaSub, a type of the shape $S \rightarrow T$ can only be a subtype of a type with the same shape, i.e., a function of the same arity. In our system, the number of un-members denotes the ``arity'' and types can only be subtypes if they have the same un-members. \type{Any} constitutes the only safe exception to this rule. It is safe for a structural type with un-members to be a subtype of \type{Any}, as no members can be selected on a term that is only known to have type \type{Any}. 

Similarly, if subtyping forgets either constituent of an intersection type, any un-members in the forgotten type must still be present in the remaining one. For example, suppose we have a term of type \lstinline!{x: S => val a: Un[T]; val b: T=x.a} & {x : S => val b : T}!, with \lstinline!S = {x => val a: T; val b: T}!. If we were allowed to subsume the term's type to \lstinline!{x : S => val b : T}!, we could access $b$ before $a$ had been refined.

For brevity, we use $\scalinant{m} \, \scalinakw{deferred}$ to check that $\scalinant{m}$'s classifier is of the shape \type{Un[T]} or \type{Un[K]}.

% does not make sense
% As an aside, we do not allow F-bounded polymorphism\footnote{This restriction is not yet implemented in the typing rules.} \cite{DBLP:conf/fpca/CanningCHOM89}. However, it can be encoded in Scalina using explicit self-types and intersection types. 


\begin{figure}[p]
  \begin{small}
  \scalinadefnssubtyping
  \end{small}
  \caption{Subtyping}
  \label{fig:subtyping}
\end{figure}
\begin{figure}[p] 
 \begin{small}
  \scalinadefnskinding
  \end{small}
  \caption{Classifying Types}
  \label{fig:kinding}
\end{figure}

\begin{figure}[p]
  \begin{small}
  \scalinadefnsmsub
  \end{small}
  \caption{Subtyping for members}
  \label{fig:subm}
\end{figure}

\begin{figure}[p]
  \begin{small}
  \scalinadefnsmwf
  \end{small}
  \caption{Well-formedness of members}
  \label{fig:mwf}
\end{figure}

\subsection{Classification} \label{theory:kinding}

For the constructs that are shared by terms and types, classification is largely analogous. The main difference is that we have to be careful to only select types that will eventually become concrete. For objects, this is always the case, but a type may still contain abstract type members. Whereas a term with type $T$ is known to contain concrete versions of all members (not including un-members) in $T$, a type with kind $\mathbf{Struct}(R)$ may contain abstract members. Therefore, we introduce the kind $\mathbf{Concrete}(R)$, which classifies types whose members are all concrete.

% TODO: ..., and the assumption that the self type has the inferred kind (to avoid cycles) (?)
The kind of a structural type reflects the type members that may be selected on that type. To be well-kinded according to \sr{K\_R}, the members of a structural type must be well-formed under the assumption that the self variable has the declared self type. The well-formedness judgement for members is defined in Fig. \ref{fig:mwf}. %For convenience, the kind tracks \emph{all} member declarations and definitions in the type, even though it would suffice to retain only the labels and the classifiers of the type and value members. % must track value members for Concrete(R) to have the intended meaning

The intersection of two structural types is classified by the kind that tracks the union of their members. Note that the self type of the overriding type (the right-most constituent) must be a subtype\footnote{This is a slight simplification of \nuObj, where $S_1$ need not be a subtype of $S_2$. \nuObj's composition operator requires the self type for the composition to be specified explicitly. This new self type must be a subtype of the $S_i$.} of the type containing the overridden members. Each overriding member must be a submember of the corresponding member in $T_1$.

There are two ways to derive that a set of members of a type is concrete. The easy way is if that type is a singleton type. Otherwise, for a type $T$ to be classified as having a certain set of concrete members $m_1 .. m_n$, it must have a structural kind with declared self-type $S$ and $\Box T \,<:\, S$. Naturally, this structural kind must list $m_1 .. m_n$ as concrete. However, due to subsumption, this set of members may be a \emph{subset} of the actual members of $T$. Nonetheless, any type member in $R$ may safely be selected: it will eventually become concrete.

Given the notion of types with concrete members -- which was not necessary at the lower level since terms may not contain abstract members -- type member selection is classified analogously to value member selection. Type refinement is almost literally the same as at the term level, as is subsumption. 

Finally, the top and the bottom of the subtype lattice must be classified, as well as un-types.
\type{Any}, and certainly \type{Nothing}, are not essential to the type system. However, \type{Any} is needed to be able to select a member on the universe (the top-level object): that member must have a type that does not depend on the universe's self variable, but all user-defined types are (indirectly) selected on the universe's self type. Since \type{Any} exists outside of the user-defined universe, it can serve this purpose. An alternative would be to introduce another variable binding construct, such as \code{let}. \type{Nothing} is used as the default lower bound of the interval kind. It may be given the same kind as any well-kinded structural type.

% (necessarily,  as in modal logic)

\subsection{Subkinding}
\begin{figure}
\begin{small}
  \scalinadefnssubkinding
\end{small}
  \caption{Subkinding}
  \label{fig:subkinding}
\end{figure}

Subkinding (Fig. \ref{fig:subkinding}) introduces contravariance for un-kinds (\sr{SK\_un}), so that type un-members conform contravariantly. Other than that, the relation defines a simple lattice, with the interval kind at the top.

A nominal type can be subsumed to a structural one (\sr{SK\_nom}) -- but not vice versa! A concrete type is also a structural one (\sr{SK\_CONC}). A structural type that includes all the members in \type{R} is thus in the interval \code{(Nothing, R)} (\sr{SK\_struct}). 

Interval inclusion gives rise to subkinding (\sr{SK\_ctx\_in}). The kinds that classify structural types and concrete types have similar subsumption properties based on subtyping (\sr{SK\_ctx\_conc}, \sr{SK\_ctx\_struct}). 





% design space
%\comment{


\newcommand\irn[1]{}
\newcommand\ir[1]{\textsuperscript{{\tiny\ref{#1}}}}
\newcommand\il[1]{\textsuperscript{{\tiny\item\label{#1}}}}


\section{Design Space \label{sec:design}}
After introducing Scalina in detail, we look at the bigger picture by briefly positioning its abstraction mechanisms in the design space. Scalina's main goal is to provide the essential features to model an object-oriented language -- such as objects with named members, mutual recursion through the self variable, and mixin composition -- while also allowing functional concepts to be encoded with the same safety guarantees as in functional calculi.

For an expedient exploration of the design space of abstraction mechanisms, we shall restrict ourselves to investigating the instantiations of the following question: ``Is a \emph{term/type} that abstracts from a \emph{term/type} using \emph{a parameter/an abstract member} a first-class \emph{term/type}?'' We will answer these questions for Java, Scala, \SystemFOmegaSub, and Scalina.

Note that we use `term' to denote anything that resides at the `base' level, such as an object in OO, or a function in FP. We do not imply any connection to syntactic terms. A `type' is something that classifies terms, and thus resides at the next level. We use `entity' to mean either a term or a type, when it only matters that the denoted entity can perform computation. Finally, a `classifier' classifies an entity: a type classifies a term, and a kind classifies a type.

Table \ref{table:absdesign} gives an overview of the analysis discussed below. The row of an entry determines what is abstracted from, and the column denotes the level of the abstraction. When the constructs in a part of the table are not all first-class constructs, {\small{\textbf{(+)}}} and {\small{(-)}} are used to make the distinction. We consider a construct ``first-class'' if it can be abstracted over. `/' means `not supported'. The superscripts in parentheses are intended to aid the reader in correlating the schematic representation in table \ref{table:absdesign} and the following discussion. 

%TODO: define first-class (try: can be bound as param, or can be abstracted over --> but OO can abstract over methods, even they're not first-class in the FP sense)

\begin{inparaenum}[1]
In Java, a term may only abstract from a term\il{pttJ} or a type\il{pTtJ} using parameterisation (functional abstraction): as already mentioned, a method abstracts from the concrete values of its arguments, but a method is not a first-class term in Java. Similarly, a polymorphic method may have type parameters, but again, such a method is not a first-class entity. Terms cannot have abstract members\mbox{\il{attJ}\textsuperscript{\hspace*{-.25em},~}}\il{aTtJ}. 

At the type level, still in Java, a constructor argument\il{ptTJ} can be considered as parameterising a type in a value (a constructor, like a method, is not a first-class term). Since Java 5.0, a class (a type) may take type parameters\il{pTTJ}, but a parameterised type is not a first-class type unless it is fully applied. Finally, a class with an abstract method\il{atTJ} is a first-class type that abstracts from a term. When deciding whether a type is first-class, we do not take into account whether it may be instantiated.

Scala introduces several improvements over Java. Firstly, $\lambda$-abstraction\il{pttS2} is directly supported, thus a term abstracting from a term is a value. Secondly, we recently implemented direct support for type constructor polymorphism in Scala 2.5, so that a parameterised type\il{pTTS} is considered a first-class (higher-kinded) type \cite{moors07:tcpoly}. Finally, a class may have abstract \emph{type} members\il{aTTS}.

\SystemFOmegaSub~is a purely functional calculus. Naturally, we only consider its support for abstraction using parameterisation. A term that abstracts from a term is written as $\lambda x: T. t$\il{pttF}. A term may also be parametric in a type: $\lambda X: K. t$\il{pTtF}. A type abstracts from a type as $\lambda X: K. T$\il{pTTF}. To abstract from terms at the level of types\il{ptTF}, we must turn to dependently typed versions of the calculus.

The overview in table \ref{table:absdesign} contains a striking void in the quadrant of terms with abstract members\ir{attJ}\textsuperscript{,}\ir{aTtJ}\textsuperscript{,}\mbox{\il{attS}\textsuperscript{\hspace*{-.25em},~}}\il{aTtS}. 

Nevertheless, Self, one of the earliest OO languages, represents a method as an object with ``argument slots'' \cite{DBLP:conf/oopsla/UngarS87}. In other words, a method is a first-class term (i.e., an object) that uses abstract members to abstract from other terms. 

Finally, Scalina's object-oriented abstraction mechanisms are split out with respect to the clients they cater to. An object can abstract over an object that is to be supplied by an external client using a value un-member\il{Scalina1}. A type may abstract over a value in the same way\il{Scalina2}. Since objects are not allowed to have abstract members, they cannot abstract over terms\il{Scalina3} or types\il{Scalina7} for internal clients. Types, on the other hand, may contain abstract value\il{Scalina4} or type\il{Scalina8} members. Finally, an object\il{Scalina5} or a type\il{Scalina6} can abstract over a type using a type un-member.
\end{inparaenum}


Thus, our brief survey has shown that Scalina supports all variations of abstraction mechanisms that are used in practice, without admitting too many features that do not appear in a full language. We designed Scalina so that it includes the main concepts of object-oriented languages, such as objects with named members and mutual recursion through a self variable, mixin composition, subtyping, and lightweight dependent types. Furthermore, although Scalina does not contain any mechanisms for parameterisation, it can safely and straightforwardly encode functional-style abstraction as well. Others have studied the advantages of OO-style abstraction over the functional style, and vice versa \cite{DBLP:conf/ecoop/BruceOW98,DBLP:conf/ecoop/ThorupT99,DBLP:conf/ecoop/Ernst01,DBLP:conf/jmlc/Ernst06}.



\begin{table}
  \begin{center}\begin{small}
  \begin{tabular}{l|c|c}
Construct           & \ldots in term (1st class?) & \ldots in type (1st class?) \\
\hline
\hspace*{-.7em}Java &                             &                                \\
Parameter (FP)      &                             &                                \\
\hfill Term         &  method (-)  \ir{pttJ}      & constructor (-)    \ir{ptTJ}   \\
\hfill Type         &  method (-)  \ir{pTtJ}      & generic class (-)  \ir{pTTJ}   \\
Abs. mem. (OO)      &                             &                                \\
\cline{2-2}
\hfill Term         &            / \ir{attJ}      & class w/abs. method (\textbf{+}) \ir{atTJ}     \\
\hfill Type         &            / \ir{aTtJ}      &            /                   \\
\hline
\hspace*{-.7em}Scala&                              &                                \\
Parameter (FP)      &                              &                                \\
\hfill Term         &  method (-) \irn{pttS}        & constructor (-)     \irn{ptTS} \\
                    &  anon. function (\textbf{+}) \ir{pttS2} &                     \\
\hfill Type         &  method (-) \irn{pTtS}        & generic class (+)   \ir{pTTS} \\
Abs. mem. (OO)      &                              &                                \\
\cline{2-2}
\hfill Term         &            / \ir{attS}        & abs. \code{val}/\code{def} (\textbf{+})  \irn{atTS}   \\
\hfill Type         &            / \ir{aTtS}        & abs. type member (\textbf{+})   \ir{aTTS} \\
\hline
\hspace*{-.7em}\SystemFOmegaSub &                    &                                 \\
\hfill Term         & $\lambda x : T. t$  \ir{pttF}  &              /\ir{ptTF}        \\
\hfill Type         & $\lambda X <: T. t$  \ir{pTtF} &  $\lambda X :: K. T$   \ir{pTTF}\\
\hline
\hspace*{-.7em}Scalina &                    &                                 \\
\hfill Term (ext.) &  obj. w/value un-member \ir{Scalina1}      &  type w/value un-member  \ir{Scalina2}  \\          
\hfill Term (int.) &          / \ir{Scalina3}                  &   type w/value abs. mem.  \ir{Scalina4}  \\
\hfill Type (ext.) &  obj. w/type un-member \ir{Scalina5}      &   type w/type un-member \ir{Scalina6}   \\          
\hfill Type (int.) &          /   \ir{Scalina7}                &   type w/type abs. mem. \ir{Scalina8}    \\          

  \end{tabular}
    \end{small}
  \end{center}

  \caption{Abstraction mechanisms: overview}
  {\scriptsize \hfill (The superscripts link the entries in the table to the relevant part of the discussion.) \hfill }
  \label{table:absdesign}
\end{table}

%}

\begin{table*}[th]
  \lstset{mathescape=true,language=scalina}
  \begin{center}
  \begin{tabular}{lcl}
$\llbracket t \rrbracket ^ {t'}$           & $\equiv$ & replace the free variable in the encoding of $t$ with $t'$ \\
\hline\\
$\llbracket \lambda x : T. t \rrbracket$   & $\equiv$ & \lstinline!new {self => val a: Un[[|T|]]; val apply: T' = [|t|]$^{\mbox{self.a}}$}! \\
$\llbracket t \, t' \rrbracket$            & $\equiv$ & \lstinline!([|t|] <{val a = [|t'|]}).apply!          \\
$\llbracket \lambda X <: T. t \rrbracket$  & $\equiv$ & \lstinline!new {self => type a: Un[In(Nothing, [|T|])]; val apply: T' = [|t|]$^{\mbox{self.a}}$}!\\
$\llbracket t \, [T] \rrbracket$           & $\equiv$ & \lstinline!([|t|] <{type a = [|T|]}).apply!          \\
\hline\\
$\llbracket \mbox{Top} \rrbracket$         & $\equiv$ & \lstinline!Any!              \\
$\llbracket T \rightarrow T' \rrbracket$   & $\equiv$ & \lstinline!{val a: Un[[|T|]]; val apply: [|T'|]}!              \\
$\llbracket \forall X <: T. T' \rrbracket$ & $\equiv$ & \lstinline!{type a: Un[In(Nothing, [|T|])]; val apply: [|T'|]$^{\mbox{self.a}}$}!\\
$\llbracket \lambda X :: K. T \rrbracket$  & $\equiv$ & \lstinline!{type a: Un[[|K|]]; type apply : K' = [|T|]$^{\mbox{self.a}}$}!       \\
$\llbracket T \, T' \rrbracket$            & $\equiv$ & \lstinline!([|T|] <{type a = [|T'|]})#apply!          \\
\hline\\
$\llbracket * \rrbracket$                  & $\equiv$ & \lstinline!Struct({x => })!          \\
$\llbracket K \Rightarrow K' \rrbracket$   & $\equiv$ & \lstinline!Struct({self => type a: [|K|]; type apply: [|K'|]})!          \\
  \end{tabular}
  \end{center}
  \caption{Informal encoding of \SystemFOmegaSub~syntax in Scalina}
  \label{table:systemf-in-scalina}
  \lstset{mathescape=false,language=scala}
\end{table*}

\section{Encoding \SystemFOmegaSub} \label{encoding}
Table~\ref{table:systemf-in-scalina} shows how terms, types and kinds from \SystemFOmegaSub~\cite[Ch. 31]{pierce02:tapl} can be encoded in Scalina. Following Pierce's terminology, an abstraction is modelled as an object with an un-member \code{a} that represents the argument, and a member \code{apply} that encodes the body of the abstraction. Note that we have to infer the type \type{T'}. Application is decomposed into refining the un-member \code{a} with the encoding of the actual argument, and then selecting the \code{apply} member.
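To make the first two rows of the table concrete, consider the identity function and its application (an informal sketch; \code{T} stands for the encoding of the argument type and \code{v} for an already-encoded argument):

\begin{lstlisting}[language=scalina]
/* [| \x:T. x |]: the bound variable becomes the un-member a */
new {self => val a: Un[T]; val apply: T = self.a}

/* [| (\x:T. x) v |]: refine a, then select apply */
(new {self => val a: Un[T]; val apply: T = self.a} <{val a = v}).apply
\end{lstlisting}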

The encoding of a polymorphic value reuses the pattern we used for term abstraction, except that the argument is now a type un-member instead of a term-level one. We use an interval kind to model type bounds: `$<: T$' becomes `\code{: In(Nothing, [|T|])}'. Type application does not present new challenges.
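For instance, the polymorphic identity function is obtained by nesting the two patterns (a sketch; \code{T} and \code{v} are placeholders, and the inner self variable is renamed to \code{s} to avoid shadowing):

\begin{lstlisting}[language=scalina]
/* [| \X<:Top. \x:X. x |] */
val id = new { self =>
  type a: Un[In(Nothing, Any)]
  val apply: {val a: Un[self.a]; val apply: self.a}
    = new { s => val a: Un[self.a]; val apply: self.a = s.a }
}

/* [| (id [T]) v |]: refine the type un-member, then the value one */
((id <{type a = T}).apply <{val a = v}).apply
\end{lstlisting}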

At the level of types, function types and universal types become the structural types that we already established when encoding (polymorphic) function values. Similarly, we simply hoist our term-level abstraction and application to the type level to replace operator abstraction and application. The kind-level encoding is easily derived from the type that encodes operator abstraction.
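As an illustration, the type-level identity operator and its application read as follows (a sketch; the kind ascription on \code{apply} is elided, and \code{self} refers to the enclosing structural type as in Table~\ref{table:systemf-in-scalina}):

\begin{lstlisting}[language=scalina]
/* [| \X::*. X |] */
{type a: Un[Struct({x => })]; type apply = self.a}

/* [| (\X::*. X) T |] */
({type a: Un[Struct({x => })]; type apply = self.a} <{type a = T})#apply
\end{lstlisting}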

The evaluation of the encoding of a value application proceeds by \sr{E\_rfn} pushing the refinement of the object into the object's type, so that \sr{E\_sel} can look up the \code{apply} member in the type, which now has a concrete value for it. Evaluating a type application also uses \sr{E\_rfn} to push the concrete type information into the type of the value, which tracks the value's members. However, this binding is never used during later evaluation steps, as the only types that interact with evaluation are those that can be used to instantiate a new object. These types must statically expand to a structural type, which is not possible for type un-members.
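Schematically, an encoded value application evaluates as follows (an abbreviated sketch; the intermediate forms are elided, as their precise shape is dictated by the \sr{E\_rfn} and \sr{E\_sel} rules):

\begin{lstlisting}[language=scalina]
(new {self => val a: Un[T]; val apply: T = self.a} <{val a = v}).apply
  /* E_rfn: the refinement <{val a = v} is pushed into the
     object's type, which then records v as the member a */
  /* E_sel: apply is looked up in that type; self.a yields v */
--> v
\end{lstlisting}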

Finally, we note that the contravariant rule for un-member conformance means that Scalina can encode full \SystemFOmegaSub, and that the undecidability of the latter should thus carry over to Scalina. We defer a more formal account of the correspondence with \SystemFOmegaSub~to future work.


\section{Meta-theory} \label{meta}
The traditional term-level safety proofs show that it suffices to type check a program once in order to guarantee certain properties for every possible evaluation trace. In Scalina, we ensure that member lookup never fails to find the required label with its corresponding right-hand side, and that an un-member is refined at most once on the same object.
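The latter guarantee rules out, for example, refining the same un-member twice (a sketch with placeholder names \code{T}, \code{v} and \code{w}):

\begin{lstlisting}[language=scalina]
val o = new {self => val a: Un[T]; val apply: T = self.a}
(o <{val a = v}) <{val a = w}  /* ill-typed: a was already refined */
\end{lstlisting}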

The type-level guarantees are similar, though more subtle. Since type selection is only well-kinded if the target of the selection is known to become a concrete type during type checking, we ensure that selection can always proceed on types of kind $\mathbf{Concrete}(R)$. Note that we consider certain other type selections, such as selecting a nominal type, to be in canonical form, so that this kind of selection is not expected to proceed.

We are actively working on the proofs of the meta-theory and their precise formulation.

\section{Related Work} \label{related}

\subsection{Safe type-level abstraction}
Since the seminal work of Girard and Reynolds in the early 1970s, fragments of the higher-order polymorphic lambda calculus, or System F$_\omega$ \cite{girard:thesis,DBLP:conf/programm/Reynolds74,DBLP:journals/iandc/BruceMM90}, have served as the basis for many programming languages. Furthermore, the interaction between higher-kinded types (types with un-members) and subtyping is a well-studied subject \cite{DBLP:journals/tcs/PierceS97,DBLP:journals/iandc/CompagnoniG03}. Of particular interest is Cardelli's notion of power type \cite{DBLP:conf/popl/Cardelli88}, which corresponds to Scalina's \lstinline|In(S, T)| kind.

Despite the vast volume of work on type-level abstraction in functional programming languages, object-oriented languages offer comparatively limited support. Most OO languages do provide parametric polymorphism \cite{DBLP:conf/oopsla/BrachaOSW98}, but few give type constructors first-class status. Cremet and Altherr extend Featherweight Generic Java with higher-kinded types \cite{Altherr07:fgjomega}. To the best of our knowledge, besides OCaml \cite[6.8.1]{ocaml:manual}, Scala is the only OO language that supports type constructor parameters \cite{moors07:tcpoly}. Of course, type constructor polymorphism can be encoded in languages with abstract type members, such as gbeta \cite{ernst99b}.

We briefly mention type-level computation \cite{citeulike:975433,DBLP:conf/flops/SulzmannWS06,schrijvers07:typefun}, which can be used to enforce properties of term-level programs. Importantly, the programmer writing the term-level program need not be the one who designed the type-level machinery that enforces these properties.

% OO in FP: fine

% OO in FP+OO: overkill

% Pure OO: that's us, baby!
\subsection{Modelling OO}
Given the wealth of research on extensions of the $\lambda$-calculus, it is only natural that studies of the essence of object-oriented languages build on these ideas. Even though encoding objects requires considerable extra machinery, such as records, subtyping, and recursive types, this complexity is probably inherent. However, modelling OO using a combination of FP \emph{and} OO seems to fail Occam's razor. Nonetheless, many object calculi fall into this category \cite{DBLP:journals/toplas/IgarashiPW01,DBLP:conf/popl/FlattKF98,DBLP:conf/mfcs/CremetGLO06}.

The other side of the spectrum -- using a purely object-oriented calculus without FP concepts -- can be traced back to Abadi and Cardelli's seminal work \cite{DBLP:journals/scp/AbadiC95,DBLP:journals/iandc/AbadiC96}. However, in their first-order system, ``an object type is invariant in its component types''. Thus, object types cannot encode function types in the presence of subtyping, as the latter require a mix of contravariance and covariance. To solve this, they introduce universal and existential quantification in their second-order system. Universal quantification, like un-members, behaves contravariantly. Similarly, existential quantification introduces covariance, which we allow for normal members. 

In other respects, Abadi and Cardelli's first-order system is more powerful than Scalina: our refinement operator does not allow recursion through the self variable. However, this limitation simplifies the calculus without ruling out refinement's primary use, which is similar to function application: supplying a value to a function does not rely on the values supplied earlier.

 To further the similarity with application, refinements do not require type or kind annotations for the supplied entities. Nonetheless, it is possible to have type un-members that classify value un-members: the type un-members must then be refined before the value un-members, because the supplied types determine the acceptable values for the value un-members. Note that we use first-class types, which do have a self variable, for overriding.
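The required refinement order can be sketched as follows (hypothetical member names; \code{T} and \code{v} stand for a concrete type and a value acceptable for that type):

\begin{lstlisting}[language=scalina]
val cell = new { self =>
  type Elem: Un[In(Nothing, Any)]
  val init: Un[self.Elem]
  val get: self.Elem = self.init
}
/* Elem determines the acceptable values for init,
   so it must be refined first */
((cell <{type Elem = T}) <{val init = v}).get
\end{lstlisting}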

Scalina was directly inspired by the \nuObj~calculus \cite{DBLP:conf/ecoop/OderskyCRZ03}. The main difference is that Scalina introduces un-members and refinement at the term and type level. \nuObj~uses class templates for term-level abstraction, and only provides covariant abstract type members for type-level abstraction. The latter implies that well-formed type-level applications may surprise the type function with unexpected arguments. It is important to note that this does not have any impact on the run-time behaviour of such programs. It does however fail to provide the type-level equivalent of the guarantees that term-level abstraction builders have come to rely on.

%\TODO{In what way is a notion of inputs and outputs of type abstractions not FP (disguised by notation)? If we set aside this bias, is there and expressiveness gain from using un-members instead of FP-style abstraction?}


\section{Conclusion and Ongoing Work} \label{conclusion}
% soundness of higher-kinded types
The immediate goal of Scalina is to provide a foundation for proving our extension of Scala with type constructor polymorphism sound. In this paper, we have shown how our calculus improves over the \nuObj~calculus with respect to safe type-level abstraction. More specifically, we formulated the notion of \emph{kind soundness}, which ensures safety for type-level abstractions. To achieve this, we distinguish ``input'' and ``output'' members. Given the covariant nature of Scala's abstract type members, they should only be used for output; we introduce un-members to deal with input. Furthermore, we illustrated Scalina's uniform, and purely object-oriented, treatment of term-level and type-level abstractions as first-class entities.

Although we are well on our way to proving Scalina sound at both levels, the meta-theory is not yet complete. Once these results have been established, we will define a type-preserving translation from an essential subset of Scala with type constructor polymorphism into Scalina. Similarly, the full correspondence with variants of \SystemFOmegaSub~remains an interesting topic to explore. 

A broader perspective on our work is that more powerful type-level abstractions are an important tool in improving the robustness of software written in tomorrow's languages. In order to make it practical to use these techniques, we must be able to write type-level abstractions once and re-use them safely in different settings. 

\notnow{More concretely, we see Scalina as a modest initial step on our way to a pluggable type system \cite{bracha04:pluggable}. Type-level computation is intended to enforce properties of term-level programs, but the programmer writing the traditional term-level program need not be the one who designed the type-level machinery to enforce these properties. Many others are working on this topic, both in the FP setting \cite{citeulike:975433,DBLP:conf/flops/SulzmannWS06,schrijvers07:typefun}, as well as in OO \cite{DBLP:conf/oopsla/AndreaeNMM06}.
}

\comment{\begin{itemize}

	\item add split objects: the term-level counterpart of intersection types, align abstract members and un-members more (former covariant, latter contravariant, both may be absent in objects, must track them -- refinement: no classifiers, no recursion through self, individual)

	\item add effects

	\item provides dynamic object-based inheritance

	\item late binding of type members (new T is allowed it T is abstract member) $\rightarrow$ virtual classes + deep mixin composition (?)
	
	\item refine un-member dependencies: members list which un-members they rely on, can read any member whose dependent un-members have already been refined
	
	\item decidability: add some decreasing metric to tame recursion of type members
	
	\item untangle types and kinds: types only contain value members, kinds describe type members of the type they classify
\end{itemize}
}



\acks
The authors would like to thank Erik Ernst for his extremely thorough feedback on earlier drafts of this paper. Furthermore, we gratefully acknowledge Dave Clarke, Burak Emir, Bart Jacobs, Sean McDirmid, and Marko van Dooren for their insightful comments and interesting discussions. Finally, we thank the anonymous reviewers for their helpful suggestions and insightful comments. 
%  We also gratefully acknowledge the Scala community, and especially Lauri Alanko, for providing a fertile testbed for this research.
% Miles Sabin
We typeset Scalina's theory using OTT (with \texttt{ottlayout.sty}) \cite{ott-sub}.

\bibliographystyle{abbrv}
%\bibliography{manual,dblp}
\bibliography{final}

\end{document}
