\documentclass[12pt]{report}
\newcommand{\svnid}{\code{$Id: sparsec.tex 165 2008-02-12 16:52:15Z adriaanm $}}
\usepackage{pdfsync}

\usepackage[margin=2.5cm]{geometry}                % See geometry.pdf to learn the layout options. There are lots.
\geometry{a4paper}                   % ... or a4paper or a5paper or ... 
%\geometry{landscape}                % Activate for for rotated page geometry
%\usepackage[parfill]{parskip}    % Activate to begin paragraphs with an empty line rather than an indent

% don't change the next three lines, fonts will get messed up!
\usepackage[T1]{fontenc}
% \usepackage[scaled]{luximono}
\usepackage{times}

\usepackage{graphicx}

\usepackage[utf8]{inputenc}

\usepackage{textcomp}

\usepackage[colorlinks,breaklinks=true]{hyperref}
\usepackage{amsmath}
\usepackage{xcolor}
\definecolor{dullmagenta}{rgb}{0.4,0,0.4}   % #660066
\definecolor{darkblue}{rgb}{0,0,0.4}
\hypersetup{linkcolor=darkblue,citecolor=darkblue,filecolor=dullmagenta,urlcolor=darkblue} % coloured links


\newcommand{\AWK}{{\color{red}AWK}}
\newcommand{\comment}[1]{}
\newcommand{\TODO}[1]{\mbox{{\color{red}TODO}}\{{\footnotesize{#1}}\}}

\usepackage{listings}
\newcommand{\code}[1]{\lstinline{#1}}
\newcommand{\class}[1]{\code{#1}}
\newcommand{\type}[1]{\code{#1}}
\newcommand{\kind}[1]{\code{#1}}
\newcommand{\method}[1]{\code{#1}}
\newcommand{\kto}[1]{\ensuremath{\rightarrow}}
\newcommand{\tmfun}[1]{\ensuremath{\rightarrow}}
\newcommand{\tpfun}[1]{\ensuremath{\Rightarrow}}
\newcommand{\nuObj}{$\nu$Obj}
\newcommand{\OmegaLang}{$\Omega$mega}

\lstdefinelanguage{scala}{% 
       morekeywords={% 
                try, catch, throw, private, public, protected, import, package, implicit, final, trait, type, class, val, def, var, if, this, else, extends, with, while, new, abstract, object, case, match, sealed, for, yield},% 
         sensitive=true, % 
   morecomment=[s]{/*}{*/},morecomment=[l]{//},% 
   escapeinside={/*\%}{*/},%
   rangeprefix= /*< ,rangesuffix= >*/,%
   morestring=[d]{"}% 
 }
 
\lstset{breaklines=true, language=scala} 
%\lstset{basicstyle=\footnotesize\ttfamily, breaklines=true, language=scala, tabsize=2, columns=fixed, mathescape=false,includerangemarker=false}
% thank you, Burak 
% (lstset tweaking stolen from
% http://lampsvn.epfl.ch/svn-repos/scala/scala/branches/typestate/docs/tstate-report/datasway.tex)
\lstset{
    fontadjust=true,%
    columns=[c]fixed,%
    keepspaces=true,%
    basewidth={0.56em, 0.52em},%
    tabsize=2,%
    basicstyle=\renewcommand{\baselinestretch}{0.97}\small\tt,% \small\tt
    commentstyle=\textit,%
    keywordstyle=\bfseries,%
    frame=single,%
}

\def\toplus{\hbox{$\, \buildrel {\tiny +}\over {\to}\,$}}
\def\tominus{\hbox{$\, \buildrel {\tiny -}\over {\to}\,$}}

\lstset{
  literate=
  {=>}{$\Rightarrow$}{2}
  {->}{$\to$}{2}
  %{-(+)>}{$\toplus$}{2}  
  %{-(-)>}{$\tominus$}{2}  
  {<-}{$\leftarrow$}{2}
  % {\\}{$\lambda$}{1}
  % {<~}{$\prec$}{2}
  %{<|}{$\triangleleft$}{2}
  {<:}{$<:$}{1}
}


\usepackage{amsmath,amssymb}
\usepackage{ifthen}
\usepackage{fancyhdr}
\usepackage{framed}

\bibliographystyle{abbrv}
%\thanks{Draft~\svnid~of report CW491, final version to appear at \url{http://www.cs.kuleuven.be/publicaties/rapporten/cw/CW491.abs.html}}
\title{Parser Combinators in Scala}
\author{Adriaan Moors \and Frank Piessens \and Martin Odersky}
%\date{}                                           % Activate to display a given date or no date

\newtheorem{ex}{Exercise}[chapter]
\newtheorem{sol}{Solution}[chapter]

\usepackage{pdfpages}
\begin{document}
\includepdf[pages={1-2}]{cover.pdf}
  
% \maketitle
% \abstract{Parser combinators are well-known in functional programming languages such as Haskell. In this paper, we describe how they are implemented as a library in Scala, a functional object-oriented language. Thanks to Scala's flexible syntax, we are able to closely approximate the EBNF notation supported by dedicated parser generators. For the uninitiated, we first explain the concept of parser combinators by developing a minimal library from scratch. We then turn to the existing Scala library, and discuss its features using various examples.}


\chapter{Introduction}
In this paper we describe our Scala \cite{LAMP-REPORT-2006-001} implementation of an embedded domain-specific language (DSL) for specifying grammars in an EBNF-like notation. We use the parser combinators approach \cite{DBLP:conf/fpca/Wadler85,DBLP:journals/jfp/Hutton92,DBLP:conf/afp/Fokker95, DBLP:conf/afp/SwierstraD96,LeijenMeijer:parsec} to implement this language as a Scala library.

The next chapter provides a tutorial on how to implement a library of parser combinators from scratch. No prior knowledge of functional programming is required. We do assume familiarity with Scala, and basic notions of parsing. Chapter \ref{ch:sparsec} exemplifies the utility of the actual library of parser combinators that is part of the standard Scala distribution.

By defining the parser directly in the general-purpose language that is used to process the parser's results, the full power of that language is at our disposal when constructing the grammar. There is no need to learn a separate language for things that can already be expressed in the general-purpose language. Thus, the only bump in the learning curve is understanding the combinators offered by the library. 

Any sufficiently complicated DSL is doomed to re-invent many of the mechanisms already available in a general-purpose language. For example, Bracha shows how to leverage inheritance in his Executable Grammars, so that the concrete grammar and the construction of the corresponding abstract syntax trees can be decoupled \cite{DBLP:journals/entcs/Bracha07} in Newspeak, a unityped \cite{harper08:pfpl} language. It remains an open question to devise a type system that can harness this scheme. 

The downsides of not having a special-purpose language for describing the grammar are limited: essentially, the syntax may be more verbose and performance may be affected. We will show that our library minimises the syntactical overhead. We leave performance benchmarks and optimisations for future work. The Parsec library in Haskell was shown to be quite efficient \cite{LeijenMeijer:parsec}, so we expect similar results can be achieved in our library if practical use indicates optimisation is necessary.

%TODO: cite executable grammars by Bracha


Our parser combinators produce back-tracking top-down parsers that use recursive descent with arbitrary look-ahead and semantic predicates. The library provides combinators for ordered choice and many other high-level constructs for repetition, optionality, easy elimination of left-recursion, and so on. In Chapter \ref{ch:sparsec}, we will show how to incorporate variable scoping in a grammar.

There are a few known limitations on the expressible grammars. Left-recursion is not supported directly, although we provide combinators that largely obviate the need for it. In principle, it is possible to implement a different back-end that adds support for left recursion, while exposing the same interface. Recent work has shown how to implement support for left-recursion in Packrat parsers \cite{Warth08:leftrec}. We have investigated using Packrat parsing \cite{DBLP:conf/icfp/Ford02}, but defer a full implementation until we focus on optimising the performance of our library. Besides the implementation effort, Packrat parsing assumes parsers are pure (free from effects), but this cannot (yet) be enforced in Scala.

Furthermore, the choice operator is sensitive to the order of the alternatives. To get the expected behaviour, parsers that match ``longer''  substrings should come first. To increase performance, the ordered choice combinator commits to the first alternative that succeeds. In practice, this seems to work out quite nicely. 
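To make this ordering sensitivity concrete, here is a minimal stand-in for the library (the type \code{P} and the functions \code{lit} and \code{or} below are our own names, not part of the Scala distribution) that shows what goes wrong when the parser for a prefix comes first:

```scala
// A minimal stand-in for the library (P, lit, and or are our names,
// not the distribution's) showing why alternative order matters.
type P[A] = String => Option[(A, String)]

def lit(s: String): P[String] =
  in => if (in.startsWith(s)) Some((s, in.drop(s.length))) else None

// ordered choice: commit to the first alternative that succeeds
def or[A](p: P[A], q: P[A]): P[A] =
  in => p(in).orElse(q(in))

// "for" matches a prefix of "foreach", so listing it first "wins"
// and leaves "each" unconsumed -- usually not what we want.
val wrong = or(lit("for"), lit("foreach"))
val right = or(lit("foreach"), lit("for"))

val w = wrong("foreach")  // Some(("for", "each"))
val r = right("foreach")  // Some(("foreach", ""))
```

Putting the parser for the longer keyword first resolves the ambiguity without any backtracking across alternatives.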

\input{scratch}

\chapter{Scala's Parser Combinators by Example\label{ch:sparsec}}
The library of parser combinators in the standard Scala distribution is very similar to the library that we developed in the previous chapter. For detailed documentation, please consult the API-documentation included in the distribution. In this chapter, we discuss a number of interesting examples.

\section{Parsing and evaluating simple arithmetic expressions}
An introduction to parsing would not be complete without a parser for arithmetic expressions. Except for the import statements, Listing \ref{lst:arith} is a complete implementation of such a parser. Let's dissect this example line by line.

The first line declares an application object \class{ArithmeticParser}, which is suitable as a main class (running it will evaluate the expressions in its body). More importantly, \class{ArithmeticParser} is a \class{StdTokenParsers}, which means it contains parsers that operate on a stream of tokens. \class{StdTokenParsers} earned its `Std' prefix by providing a couple of commonly used token classes, such as keywords, identifiers and literals (strings and numbers). If these defaults don't suit you, simply go over its head and use its superclass, \class{TokenParsers}.

A token parser abstracts over the type of tokens it parses. This abstraction is made concrete in line 2: we use \class{StdLexical} for our lexical analysis. It's important to note that lexical analysis is done using the same parsers that we use for syntax analysis. The only difference is that lexical parsers operate on streams of characters to produce tokens, whereas syntactical parsers consume streams of tokens and produce a more complex kind of structured data. To conclude the lexical aspects of our example, line 3 specifies which characters should be recognised as delimiters (and returned as keyword tokens).
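The idea that one set of combinators serves both phases can be sketched with our own minimal parser type (none of the names below are the library's), abstracted over the element type it consumes:

```scala
// Our own sketch (not the library's definitions) of a parser type
// that abstracts over the element type it consumes.
type P[Elem, A] = List[Elem] => Option[(A, List[Elem])]

// elem succeeds on a single element satisfying the predicate
def elem[E](pred: E => Boolean): P[E, E] = {
  case e :: rest if pred(e) => Some((e, rest))
  case _                    => None
}

// the very same combinator does lexical work on characters...
val digit = elem[Char](_.isDigit)
val d = digit("7+".toList)  // Some(('7', List('+')))

// ...and syntactical work on tokens
sealed trait Token
case object Plus extends Token
case class NumLit(n: Int) extends Token

val plusTok = elem[Token](_ == Plus)
val p = plusTok(List(Plus, NumLit(1)))  // Some((Plus, List(NumLit(1))))
```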

Now we get to the actual grammar. \method{expr} returns a parser that parses a list of \method{term}s, separated by either a \code{"+"} or a \code{"-"}, and returns an integer, which, unsurprisingly, corresponds to the evaluation of the expression. An implicit conversion (\method{keyword} in \class{StdTokenParsers}) automatically lifts a string to the \class{UnitParser} that matches that string and returns nothing. \class{UnitParser}s are parsers whose results are discarded. 

In general, \code{p*} means repeat parser \code{p} zero or more times and collect the results of \code{p} in a list. \code{p*(q)} generalises this to repeating \code{p} alternated with \code{q}. If \code{q} returns a result (i.e., it's not a \class{UnitParser}), its result must be a function that combines the result of the \code{p} parsed just before it with the result of the \code{p} parsed just after it. In our case, \code{q} is \code{"+"} \verb=^^=  \code{\{(x: int, y: int) => x + y\} | "-"} \verb=^^=  \code{\{(x: int, y: int) => x - y\}}, which, when it sees a \code{"+"}, returns the function that sums two integers, and similarly when it encounters a \code{"-"}. \code{p*(q)} uses this function to ``fold'' the list of results into a single result. Again, in our case, this is the sum (or difference) of the constituent terms. When \code{q} is a \class{UnitParser}, it's freed from returning such a function, and the combinator just collects \code{p}'s results in a list.
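The folding behaviour of \code{p*(q)} (i.e., \code{chainl1}) can be illustrated without any parsing machinery. Assuming the terms and operator functions below are what \code{p} and \code{q} would have produced for the input \code{"1+2-3"}, the combinator conceptually performs a left fold:

```scala
// A conceptual illustration (ours, not library code) of how p*(q)
// -- i.e. chainl1 -- folds its intermediate results.
val plus  = (x: Int, y: Int) => x + y
val minus = (x: Int, y: Int) => x - y

// Parsing "1+2-3" conceptually yields a first term and a list of
// (operator-function, term) pairs:
val first = 1
val rest  = List((plus, 2), (minus, 3))

// The combinator folds from the left, so the expression associates
// as ((1 + 2) - 3):
val result = rest.foldLeft(first) { case (acc, (f, t)) => f(acc, t) }
// result == 0
```

The left fold is what makes \code{-} and \code{/} associate to the left, as one would expect from arithmetic.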

Let's pick \code{"+"} \verb=^^=  \code{\{(x: int, y: int) => x + y\} | "-"} \verb=^^=  \code{\{(x: int, y: int) => x - y\}} apart a bit further. Because of Scala's operator precedence, \method{|} is the first combinator to be applied. \code{p | q} is the parser that first tries \code{p} on the input and, if successful, simply returns its result. If \code{p}'s result was a \class{Failure} (and not an \class{Error}), the second parser, \code{q}, is used. Note that this combinator is sensitive to the ordering of its constituents. It does not try to produce all possible parses -- it stops as soon as it encounters the first successful one.

The \verb=^^= combinator takes a parser and a function and returns a new parser whose result is the function applied to the original parser's result. In the case of a \class{UnitParser}, the function can only be a constant function (i.e., a value).
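A stand-alone sketch of what \verb=^^= does, using a hand-rolled parser type (the names \code{P}, \code{num}, and \code{mapP} are ours, not the library's): it is simply \code{map} on the parse result.

```scala
// A stand-in for the library's parsers (the names P, num, and mapP
// are ours) showing that ^^ is simply `map` on the parse result.
type P[A] = String => Option[(A, String)]

// num recognises a non-empty run of digits and returns it as a String
def num: P[String] = in => {
  val ds = in.takeWhile(_.isDigit)
  if (ds.nonEmpty) Some((ds, in.drop(ds.length))) else None
}

// mapP(p)(f) parses with p, then applies f to the result, leaving
// the remaining input untouched -- exactly the role of ^^
def mapP[A, B](p: P[A])(f: A => B): P[B] =
  in => p(in).map { case (a, rest) => (f(a), rest) }

val intLit = mapP(num)(_.toInt)
val parsed = intLit("42+1")  // Some((42, "+1"))
```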

Finally, we create a scanner that does lexical analysis on a string and returns a stream of tokens, which is then passed on to the expression parser. The latter's result is then printed on the console.

\begin{lstlisting}[float, caption=Parsing 1+1 etc., label=lst:arith]
object ArithmeticParser extends StdTokenParsers with Application {   
  type Tokens = StdLexical ; val lexical = new StdLexical
  lexical.delimiters ++= List("(", ")", "+", "-", "*", "/")

  def expr = term*("+" ^^ {(x: int, y: int) => x + y} 
                 | "-" ^^ {(x: int, y: int) => x - y})
  def term = factor*("*" ^^ {(x: int, y: int) => x * y} 
                   | "/" ^^ {(x: int, y: int) => x / y})
  def factor: Parser[int] = "(" ~ expr ~ ")" 
                          | numericLit ^^ (_.toInt)
  
  Console.println(expr(new lexical.Scanner("1+2*3*7-1")))
}
\end{lstlisting}     

To illustrate what all this syntactic sugar really boils down to, Listing \ref{lst:arithdesugar} shows what we'd have to write if Scala's syntax weren't as liberal. In the same stoic spirit, the code also does without implicit conversions.

\begin{lstlisting}[float, caption=Desugared version of listing \ref{lst:arith}., label=lst:arithdesugar]
def expr = chainl1(term, (keyword("+").^^{(x: int, y: int) => x + y}).|(keyword("-").^^{(x: int, y: int) => x - y}))
def term = chainl1(factor, (keyword("*").^^{(x: int, y: int) => x * y}).|(keyword("/").^^{(x: int, y: int) => x / y}))
def factor: Parser[int] = keyword("(").~(expr.~(keyword(")"))).|(numericLit.^^(x => x.toInt))
\end{lstlisting}    
  
% \section{Overview}
% TODO:  list combinators in order of precedence and discuss briefly
% 
% ~
% ~! (and commit)
%|
%^^
%* or rep
%*(q) or repsep or chainl1
%+ or rep1
%? or opt  

%elem, accept, discard

%fail/failure, success
 


% \section{More examples}
\section{Context sensitivity in parsing XML}
Listing \ref{lst:xml}, which was inspired by \cite{LeijenMeijer:parsec}, shows how to make context-sensitive parsers. It is a self-contained parser for an extremely minimal subset of XML. Thus, it also demonstrates how combinator parsers can be used for scanner-less parsing. Although it is typically more convenient to separate lexical and syntactical parsing, they can also be performed by the same parser.

The essential part of the example is: 
\begin{lstlisting}
openTag into {name => rep(xml) <~ endTag(name) ^^ (ContainerNode(name, _))}
\end{lstlisting}

This constructs a parser that first tries the \code{openTag} parser, and if successful, feeds its result into the next part:
\begin{lstlisting}
rep(xml) <~ endTag(name) ^^ (ContainerNode(name, _))
\end{lstlisting}
This parser, where \code{name} is bound to the tag name that was parsed by \code{openTag}, accepts any number of nested constructs, followed by a closing tag with the right name. Thus, the end result is that this parser rejects documents whose tags are not properly closed.
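The sketch below re-creates this behaviour with a hand-rolled parser type; every name in it is ours, not the library's. \code{into} is monadic sequencing (\code{flatMap} on parsers), so the parser for the closing tag can be computed from the name that the opening tag produced -- that is the context sensitivity:

```scala
// A hand-rolled sketch of the `into` idea; every name below is ours,
// not the library's. `into` is flatMap on parsers: the second parser
// may depend on the first one's result.
type P[A] = String => Option[(A, String)]

def lit(s: String): P[String] =
  in => if (in.startsWith(s)) Some((s, in.drop(s.length))) else None

def letters: P[String] = in => {
  val ls = in.takeWhile(_.isLetter)
  if (ls.nonEmpty) Some((ls, in.drop(ls.length))) else None
}

def text: P[String] = in => {
  val t = in.takeWhile(_ != '<')
  Some((t, in.drop(t.length)))
}

def into[A, B](p: P[A])(f: A => P[B]): P[B] =
  in => p(in).flatMap { case (a, rest) => f(a)(rest) }

def openTag: P[String] = in =>
  for { (_, r1) <- lit("<")(in)
        (n, r2) <- letters(r1)
        (_, r3) <- lit(">")(r2) } yield (n, r3)

// the parser for the closing tag is computed from the tag name
// produced by openTag
def element: P[String] =
  into(openTag) { name => in =>
    for { (body, r1) <- text(in)
          (_,    r2) <- lit("</" + name + ">")(r1) } yield (body, r2)
  }

val ok  = element("<b>bold</b>")  // Some(("bold", ""))
val bad = element("<b>bold</i>")  // None: mismatched closing tag
```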

\begin{lstlisting}[float, caption=A sketch for a context-sensitive parser for XML, label=lst:xml]
import scala.util.parsing.combinator._

object XMLParser extends Parsers with Application {
  type Elem = Char
  
  trait Node
  case class ContainerNode(name: String, content: List[Node]) extends Node
  case class TextNode(content: String) extends Node
  
  def str1(what: String, pred: Char => Boolean) = rep1(elem(what, pred)) ^^ (_.mkString(""))
  
  def openTag: Parser[String] = '<' ~> str1("tag name", _.isLetter) <~ '>'
  def endTag(name: String) = ('<' ~ '/')  ~> accept(name.toList) <~ '>'
  def xmlText: Parser[Node] = str1("xml text", {c => !(c == '<' || c == '>')}) ^^ TextNode
                     
  def xml: Parser[Node] = (
        (openTag into {name => rep(xml) <~ endTag(name) ^^ (ContainerNode(name, _))})
       | xmlText ) 

       
  import scala.util.parsing.input.CharArrayReader

  def phrase[T](p: Parser[T]): Parser[T] = p <~ accept(CharArrayReader.EofCh)

  println(phrase(xml)(new CharArrayReader("<b>bold</b>".toArray)))
}
\end{lstlisting}                             

\section{Tracking variable binding}
Listing \ref{lst:lambda} implements a parser for the lambda calculus that also enforces the scoping rules for variables. Essentially, we nest our productions in a class \type{Context}, which models the current scope. Then, in a given scope, a term is a sequence of applications, where a single term in an application may be a parenthesized term, an abstraction, or a bound variable. An abstraction binds an identifier\footnote{The identifier may be surrounded by whitespace -- the lexical parsers are shown in Listing \ref{lst:lexical}.}, and brings it into scope by calling the \code{term} production on the context that is produced by the \code{bind} combinator.

The combinators that maintain which variables are bound are shown in Listing \ref{lst:binding}. Basically, a context is a function that takes a variable name (as a \type{String}) and returns the (unique) \type{Name} that represents the variable, if it is bound. The \code{bind} combinator is parameterised in a parser that yields a string representation \code{name}, which is then turned into a fresh name \code{n}. The result of this combinator is a pair consisting of a new context that binds \code{name} to \code{n}, and \code{n} itself. This redundancy is needed for the implementation of \code{in}, which wraps the result of a parser for a construct (of type \type{T}) in which a variable is bound in a \lstinline!\\[T]!. 

The \code{bound} combinator succeeds if the given name (as a \type{String}) is in scope, and returns the corresponding \type{Name}.
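The shadowing behaviour of such contexts can be sketched independently of any parsing. In the sketch below (ours, not the library's), \type{Name} is simplified to \code{Int} purely for illustration; the library keeps it abstract:

```scala
// A sketch of the context idea on its own, outside any parser.
// Name is simplified to Int purely for illustration; the paper's
// library keeps it abstract.
type Name = Int
type Context = String => Option[Name]

val empty: Context = _ => None

// binding `raw` shadows whatever `raw` meant in the enclosing context
def bind(ctx: Context, raw: String, n: Name): Context =
  s => if (s == raw) Some(n) else ctx(s)

val outer = bind(empty, "x", 0)   // the outer \x.
val inner = bind(outer, "x", 1)   // a nested \x.

val i = inner("x")  // Some(1): the innermost binder wins
val o = outer("x")  // Some(0)
val u = inner("y")  // None: y is unbound
```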

Finally, Listing \ref{lst:syntax} shows the essence of the case classes that model the abstract syntax tree. 

The implementation of the \type{Binding} trait is out of scope for this paper. It should be clear that these combinators are amenable to most common approaches to dealing with variable binding.


\begin{lstlisting}[float, caption=Parser for the Lambda calculus that tracks variable binding, label=lst:lambda]
trait LambdaParser extends Syntax with Lexical with ContextualParsers {  
  // Context tracks which variables are bound, 
  // the generic functionality is implemented in ContextualParsers
  // Here, we only specify how to create our Context class
  object Context extends ContextCompanion {
    def apply(f: String => Option[Name]): Context 
      = new Context { def apply(n: String) = f(n) }
  }
  
  // Since parsing a term depends on the variable bindings that we have 
  // previously added to the context, we put these parsers in a Context, 
  // which provides the `bind`, `in`, and `bound` combinators.
  trait Context extends ContextCore {
    def term = chainl1(termSingle, ws ^^^ (App(_: Term, _: Term)))
    def termSingle: Parser[Term] = 
      ( '(' ~> term <~ ')'
      | '\\' ~> bind(wss(ident)) >> in{ctx => '.' ~> wss(ctx.term)} ^^ Abs
      | bound(ident)                                                ^^ Var
      )
  }
 
  import scala.util.parsing.input.Reader
  def parse(input: Reader[Char]) = (wss(Context.empty.term) <~ eoi)(input)
}
\end{lstlisting}

\begin{lstlisting}[float, caption=Infrastructure for dealing with variable binding, label=lst:binding]
trait Binding {
  // represents a variable name
  type Name
  // creates a fresh name
  def Name(n: String): Name

  // something that contains Name's
  type HasBinder[T]

  // some construct of type T in which the Name n is bound
  type \\[T]
  def \\[T](n: Name, scope: HasBinder[T]): \\[T]
}

trait ContextualParsers extends Parsers with Binding {
  type Context <: ContextCore

  val Context: ContextCompanion
  trait ContextCompanion {
    val empty: Context = apply{name => None}
    def apply(f: String => Option[Name]): Context
  }
  
  trait ContextCore extends (String => Option[Name]) {
    def bind(nameParser: Parser[String]): Parser[(Context, Name)] 
      = (for(name <- nameParser) yield {
            val n=Name(name)
            (this(name) = n, n)
        })
        
    def bound(nameParser: Parser[String]): Parser[Name]  
      = (for(name   <- nameParser;
             binder <- lookup(name)) yield binder)
             
    def lookup(name: String): Parser[Name] = this(name) match {
      case None => failure explain("unbound name: "+name)
      case Some(b) => success(b)
    }
    
    def in[T](p: Context => Parser[HasBinder[T]]): Pair[Context, Name] => Parser[\\[T]] = {case (ctx, n) => p(ctx) ^^ (\\(n, _))}
    
    def update(rawName: String, binder: Name): Context = Context{name => if(name == rawName) Some(binder) else this(name) }
  }
}
\end{lstlisting}

\begin{lstlisting}[float, caption=Abstract syntax (details omitted), label=lst:syntax]
  trait Term extends HasBinder[Term]
  case class Var(name: Name) extends Term
  case class Abs(abs: \\[Term]) extends Term 
  case class App(fun: Term, arg: Term) extends Term
\end{lstlisting}

\begin{lstlisting}[float, caption=Lexical parsing, label=lst:lexical]
trait Lexical extends Parsers {
  type Elem = Char
  import scala.util.parsing.input.CharArrayReader
  
  // these combinators do the lexical analysis, which we have not separated explicitly from the syntactical analysis
  def letter = acceptIf(_.isLetter) expected("letter")
  def digit = acceptIf(_.isDigit) expected("digit")
  def ws = rep1(accept(' ')) expected("whitespace")
  // hint: only wrap wss(...) around the parsers that really need it
  def wss[T](p: Parser[T]): Parser[T] = opt(ws) ~> p <~ opt(ws)
  def ident = rep1(letter, letter | digit) ^^ {_.mkString("")} expected("identifier")
  def eoi = accept(CharArrayReader.EofCh)
}
\end{lstlisting}

\chapter*{Acknowledgments}
The authors would like to thank Eric Willigers for his feedback on earlier drafts.


\appendix
\input{solutions}

\bibliography{local,dblp,sparsec}
\end{document}