%!TEX root = /Users/adriaan/src/kinded-scala/papers/sparsec/sparsec.tex

\chapter{Parser Combinators from the Ground Up \label{ch:scratch}}

\section{Intuitions}
As a first approximation, a parser consumes input. In functional programming, we model an ``input consumer'' as a function that takes some input and returns the rest of the input that has not been consumed yet. 

Thus, the type of a parser that examines a string (as its input) can be written as \lstinline!String => String!. The identity function represents a parser that does not consume any input. Another example is a parser that always consumes the first character of its input: \lstinline!(in: String) => in.substring(1)!. Now, what should we do when the input is empty?  Or, how can we implement a parser that refuses certain input?
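Before answering these questions, the two consumers mentioned above can be tried out directly (the name \code{dropFirst} is ours, purely for illustration):

\begin{lstlisting}
val id: String => String = in => in
val dropFirst: String => String = in => in.substring(1)

dropFirst("xyz") // "yz": the first character has been consumed
\end{lstlisting}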

Naturally, a parser does not accept just any input -- it has to conform to a certain \emph{grammar}. A parser should not just denote how much input it consumed, but also whether it considered the input valid or not. Furthermore, for valid input, a parser typically returns a result based on that input. Invalid input gives rise to an error message. 

Let us refine our model of a parser so that it meets these criteria: a parser is a function that takes some input -- generalising this to be of the abstract type \type{Input} -- and that produces a result, which is modelled by the type \type{Result[T]}. Listing \ref{lst:SimpleResults} implements this in Scala.

\begin{lstlisting}[float, caption=A component for modelling results, label=lst:SimpleResults]
trait SimpleResults { 
  type Input

  trait Result[+T] {
    def next: Input
  }

  case class Success[+T](result: T, next: Input) extends Result[T] 
  case class Failure(msg: String, next: Input) 
                                           extends Result[Nothing]
}
\end{lstlisting}   

Given this \type{SimpleResults} component, we will model a parser that produces results of type \type{T} as a function of type \type{Input => Result[T]}. First, we examine listing \ref{lst:SimpleResults} more carefully. 

A result is either a success, which carries a result value of type \type{T}, or a failure, which provides an error message. In either case, the result specifies how much input was consumed, by tracking the input that should be supplied to the following parser. 

The declaration \lstinline!trait Result[+T]! says \type{Result} is a \emph{type constructor} that is \emph{covariant} in its first type argument. Because it is a type constructor, we must apply \type{Result} to a concrete type argument, such as \type{String}, in order to \emph{construct} a type that can be instantiated, such as \type{Result[String]} (this is necessary, but not sufficient, as \type{Result} itself is an abstract trait). 

Because of the covariance annotation (the `\code{+}'), \type{Result[A]} and \type{Result[B]} are related with respect to subtyping in the same way as \type{A} and \type{B}: \lstinline!Result[A] <: Result[B]! whenever \lstinline!A <: B!. Note that \type{Nothing} is a subtype of any well-formed type.
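A small self-contained sketch illustrates the interplay between covariance and \type{Nothing} (here we fix \type{Input} to \type{String} for concreteness; this is not part of the component above):

\begin{lstlisting}
trait Result[+T] { def next: String }
case class Success[+T](result: T, next: String) extends Result[T]
case class Failure(msg: String, next: String) extends Result[Nothing]

// Result[Nothing] <: Result[Int], so a Failure is a valid Result[Int]:
val r: Result[Int] = Failure("oops", "rest")
\end{lstlisting}

Thanks to this, a single \type{Failure} class can stand in for the failed result of any parser, regardless of its result type.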

Before we study more complicated parsers, consider the parser that only accepts the character \code{'x'}, as shown in listing \ref{lst:xparser}. 

\begin{lstlisting}[float, caption=Parsing `x', label=lst:xparser]
object XParser extends SimpleResults {
  type Input = String
  val acceptX: Input => Result[Char] = {(in: String) => 
    if(in.charAt(0) == 'x') Success('x', in.substring(1))
    else Failure("expected an x", in)
  }
}
\end{lstlisting}


  \begin{ex}[Experimenting]
  What happens when you apply this parser to the input \code{"xyz"}? (That is, what is the result of \code{acceptX("xyz")}?) Try to work out the result on paper before pasting\footnote{Note that the arrow symbol in the listings is typed as \texttt{=>} in plain source code.} the listings into the Scala interpreter to verify your expectations.
  \end{ex}

Notice how the parser denotes that it consumed the first character of the input: the \code{next} field of the result is set to the input minus its first character (the substring that starts after the first character).


  \begin{ex}[Generalisation and Robustness]
  Generalise \code{acceptX} so that it can be used to make a parser that matches on other characters than \code{'x'}. Improve it further so that it deals with the empty input.
  \end{ex}



\section{Sequence and Alternation}
Now that we know how to define parsers that accept a single element of input, we will see how they can be combined into parsers that recognise more complex grammars. Once it is clear how to implement the two most typical ways of combining parsers, we shall gradually improve our implementation.

Listing \ref{lst:SimpleParsers} shows a straightforward implementation of alternation and sequencing. A \type{Parser[T]} is a subtype of \type{Input => Result[T]}, which is syntactic sugar for \type{Function1[Input, Result[T]]}. Thus, an instance of \type{Parser[T]} is (an object that represents) a function that takes an instance of \type{Input} to a \type{Result[T]}. As a reminder, the abstract \code{apply} method, which is inherited from \type{Function1}, is included explicitly in listing \ref{lst:SimpleParsers}.


If \code{p} and \code{q} are \type{Parser}s, \code{p | q} is a \type{Parser} that first tries \code{p}. If this is successful, it returns \code{p}'s result; \code{q} is only tried if \code{p} fails. Similarly, \code{p ~ q} results in a parser that succeeds if \code{p} \emph{and then} \code{q} succeed. There is nothing special about \lstinline!~! and \code{|}: they are just methods that happen to have names that consist solely of symbols. \code{p ~ q | r} is syntactic sugar for \lstinline!(p.~(q)).|(r)!. The precedence and associativity of such method names are defined in the Scala reference \cite[Sec. 6.12.3]{odersky:scala-reference}.

The \code{|} method takes an argument \code{p}, which is a parser that produces results of type \type{U}. \type{U} must be a super-type of the type of the results produced by the parser on which the \code{|} method is called (denoted as \code{Parser.this}). The method returns a new parser that produces results of type \type{U} by first trying \code{Parser.this}, or else \code{p}.

When the parser that is returned by \lstinline!|!, is applied to input, it passes this input on to \code{Parser.this} (the parser on which \lstinline!|! was called originally). The result of this parser is examined using pattern matching. The first case is selected when \code{Parser.this} failed. Then (and only then -- see below), the alternative parser is computed and applied to the same input as \code{Parser.this}. The outcome of this parser determines the result of the combined parser. In case \code{Parser.this} succeeded, this result is simply returned (and \code{p} is never computed).

Note that \code{p} is passed call-by-name (CBN)\footnote{Call-by-name arguments are denoted by prefixing the argument type with \code{=>}.}. When \lstinline!|! is called, the compiler silently wraps a zero-argument function around its argument, so that its value is not yet computed. This is delayed until \code{p} is ``forced'' in the body of the method. 

More concretely, \emph{every time} \code{p}'s actual value is \emph{required}, the wrapper function is applied (to the empty list of arguments). Consider the expression \lstinline!q | p!. When the \lstinline!|! combinator is called on \code{q}, \code{p}'s value need not be known, as the method simply returns a new parser. This combined parser textually contains an occurrence of \code{p}, but its actual value does not become relevant until the combined parser is applied to input \emph{and} \code{q} fails.
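The mechanics of call-by-name parameters can be observed in isolation; in this sketch (our own, not part of the library), a counter tracks how often the argument expression is evaluated:

\begin{lstlisting}
var evaluated = 0
def delay(p: => Int): () => Int = () => p // p is not evaluated here

val f = delay { evaluated += 1; 42 }
// evaluated == 0: creating f did not touch the argument
f(); f()
// evaluated == 2: the argument is re-evaluated on every access
\end{lstlisting}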

The sequence combinator is implemented by the \lstinline!~! method. The main difference is that we must be more careful with the input that is passed to each parser: the first one receives the input supplied to the combined parser, and the second parser (\code{p}) is applied to the input that was left over after the first one. If both parsers succeed, their results are combined in the pair \code{(x, x2)}. The first parser to fail determines the unsuccessful outcome of the combined parser.

\begin{lstlisting}[float,caption=Combinators for Alternation and Sequencing,label=lst:SimpleParsers]
trait SimpleParsers extends SimpleResults {
  trait Parser[+T] extends (Input => Result[T]) { 
    def apply(in: Input): Result[T]
    
    def | [U >: T](p: => Parser[U]): Parser[U] 
      = new Parser[U]{ def apply(in: Input) = 
          Parser.this(in) match {
            case Failure(_, _) => p(in)
            case Success(x, n) => Success(x, n)
          }
        } 
          
    def ~ [U](p: => Parser[U]): Parser[Pair[T, U]] 
      = new Parser[Pair[T, U]]{ def apply(in: Input) = 
          Parser.this(in) match {
            case Success(x, next) => p(next) match {
              case Success(x2, next2) => Success((x, x2), next2)
              case Failure(m, n) => Failure(m, n)
            } 
            case Failure(m, n) => Failure(m, n)
          }
        } 
  }  
}
\end{lstlisting}   

  \begin{ex}[Cycles] What happens when you leave off the `\code{=>}' of the types of the arguments of \lstinline!|! and \lstinline!~!? Write down a grammar that relies on the arguments being call-by-name.
  \end{ex}

With these combinators in place, let us construct our first working parser. We combine simple parsers that each accept a single character into one that accepts one or more occurrences of ``oxo'', where subsequent occurrences are separated by a white space.

First, we generalise our \code{acceptX} parser generator in listing \ref{lst:StringParsers}. The only non-trivial difference is that we allow checking for the end of input using the parser generated by the method \code{eoi}. Internally, we use the character with code \code{0} to denote that the end of the input has been reached.

\begin{lstlisting}[float,caption=Parsing Strings,label=lst:StringParsers]
trait StringParsers extends SimpleParsers {
  type Input = String
  private val EOI = 0.toChar
  
  def accept(expected: Char) = new Parser[Char]{
    def apply(in: String) = 
      if(in == "") {
        if(expected == EOI) 
          Success(expected, "")
        else 
          Failure("no more input", in)
      } else if(in.charAt(0) == expected) 
        Success(expected, in.substring(1))
      else 
        Failure("expected \'"+expected+"\'", in)
  }

  def eoi = accept(EOI)
}
\end{lstlisting}

Finally, the object \code{OXOParser} (listing \ref{lst:OXOParser}) constitutes a valid Scala program whose first command-line argument must be exactly \code{"oxo"}, \code{"oxo oxo"}, \code{"oxo oxo oxo"}, and so on.

\begin{lstlisting}[float,caption=oxo oxo \ldots oxo,label=lst:OXOParser]
object OXOParser extends StringParsers {
  def oxo = accept('o') ~ accept('x') ~ accept('o')
  def oxos: Parser[Any] = 
    ( oxo ~ accept(' ') ~ oxos
    | oxo
    )
  
  def main(args: Array[String]) = println((oxos ~ eoi)(args(0)))
}
\end{lstlisting}

To emphasise that this grammar is expressed purely as method calls, listing \ref{lst:OXOParserdesugar} reformulates \code{oxo} and \code{oxos} in a more traditional syntax.

  \begin{ex}
    Verify the correctness of listing \ref{lst:OXOParserdesugar} by compiling the version of listing \ref{lst:OXOParser} using \code{scalac -Xprint:typer}, which prints the compiled source after type checking. At that point, syntactic sugar has been expanded and the omitted types have been inferred.
  \end{ex}
  
\begin{lstlisting}[float,caption=oxo oxo \ldots oxo (hold the syntactic sugar),label=lst:OXOParserdesugar]
  def oxo = accept('o').~(accept('x')).~(accept('o'))
  def oxos: Parser[Any] = oxo.~(accept(' ')).~(oxos).|(oxo)
\end{lstlisting}
        
\begin{ex} Write down the order in which the various parsers are executed for a given input. Work out at least one example for input on which the parser should fail. Verify your solution using the \code{log} combinator of listing \ref{lst:log}. What changes when you omit the \lstinline!~ eoi! in the \code{main} method?
\end{ex}

\begin{lstlisting}[float,caption=Logging,label=lst:log]
def log[T](p: => Parser[T])(name: String) = new Parser[T]{
  def apply(in: Input) : Result[T] = {
    println("trying "+ name +" at \'"+ in + "\'")
    val r = p(in)
    println(name +" --> "+ r)
    r
  }
}
\end{lstlisting}

\section{Factoring out the plumbing}
We will now improve our first implementation using standard techniques from functional programming. Our combinators for alternation and sequencing worked correctly, but were somewhat tricky to get right. More specifically, we had to pay attention to ``threading'' the input correctly when combining the parsers. In this section, we will encapsulate this.

\begin{lstlisting}[float,caption=Improved Results,label=lst:monadicresults]
trait SimpleResults { 
  type Input

  trait Result[+T] {
    def next: Input

    def map[U](f: T => U): Result[U]
    def flatMapWithNext[U](f: T => Input => Result[U]): Result[U]
    def append[U >: T](alt: => Result[U]): Result[U]
  }

  case class Success[+T](result: T, next: Input) extends Result[T] {
    def map[U](f: T => U)
      = Success(f(result), next)
    def flatMapWithNext[U](f: T => Input => Result[U]) 
      = f(result)(next)    
    def append[U >: T](alt: => Result[U])
      = this
  }

  case class Failure(msg: String, next: Input) extends Result[Nothing] {
    def map[U](f: Nothing => U)                        
      = this
    def flatMapWithNext[U](f: Nothing => Input => Result[U])
      = this
    def append[U](alt: => Result[U])
      = alt
  }
}
\end{lstlisting}

The new \type{Result}, as defined in listing \ref{lst:monadicresults}, provides three simple methods.
When called on a \type{Success}, \code{map} produces a new \type{Success} that contains the transformed result value; it is useful for transforming the result of a combinator. The function passed to \code{flatMapWithNext} produces a new \type{Result} from both the result value and the remaining input \code{next}; we will use this method for chaining combinators. Finally, \code{append} is the only method whose implementation does not follow directly from its type signature. Our simple model of results does not allow for multiple successful results, so appending an alternative to a success is a no-op: it simply returns the current \type{Result}. (In a more sophisticated system that supports multiple results, \code{append} would add \code{alt} to a collection.)

For a \type{Failure}, the methods behave dually: \code{map} and \code{flatMapWithNext} return the failure unchanged, whereas \code{append} evaluates and returns the alternative result.
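In a stripped-down model (our own \type{Ok}/\type{Bad}, which ignore input threading), this duality reads as follows:

\begin{lstlisting}
sealed trait Res[+T] {
  def append[U >: T](alt: => Res[U]): Res[U]
}
case class Ok[+T](value: T) extends Res[T] {
  def append[U >: T](alt: => Res[U]) = this // success: ignore the alternative
}
case class Bad(msg: String) extends Res[Nothing] {
  def append[U](alt: => Res[U]) = alt       // failure: fall through to it
}

Ok(1) append Bad("x") // Ok(1)
Bad("x") append Ok(2) // Ok(2)
\end{lstlisting}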

These methods may seem a bit arbitrary, but with them, we can re-implement \type{Parser} as shown in listing \ref{lst:monadicparser}. \code{flatMap}, \code{map}, and \lstinline!|! simply create \type{Parser}'s that call the corresponding methods on the results they produce for a given input. 

Finally, \lstinline!~! can now be implemented very naturally using Scala's for-comprehension syntax. Its implementation can be read as: ``perform parser \code{this} and \emph{bind} its result to \code{a} in the \emph{next} computation, which performs \code{p} and maps its result \code{b} to the pair \code{(a, b)}''. Note that we do not have to do \emph{any} bookkeeping on which input to pass to which parser! (This interpretation of for-comprehensions explains why \code{flatMap} is sometimes also called ``\code{bind}''.)
 
For-comprehensions \cite[Sec 6.19]{odersky:scala-reference} are syntactic sugar for nested calls to \code{flatMap}, \code{map}, and \code{filter} (we do not yet use the latter). More concretely, \code{for(a <- this; b <- p) yield (a,b)} is shorthand for \lstinline!this.flatMap{a => p.map{b => (a, b)}}!.
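The same desugaring can be tried out with any type that provides \code{flatMap} and \code{map}, for instance \type{Option}:

\begin{lstlisting}
val sugared   = for (a <- Some(1); b <- Some(2)) yield (a, b)
val desugared = Some(1).flatMap { a => Some(2).map { b => (a, b) } }
// both evaluate to Some((1, 2))
\end{lstlisting}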

Our existing \type{OXOParser} can be used as-is with this new version of \mbox{\type{SimpleParsers}.}

\begin{lstlisting}[float,caption=Parsing,label=lst:monadicparser]
trait SimpleParsers extends SimpleResults {
  abstract class Parser[+T] extends (Input => Result[T]) { 
    def apply(in: Input): Result[T]

    def flatMap[U](f: T => Parser[U]): Parser[U] 
      = new Parser[U]{def apply(in: Input) 
                        = Parser.this(in) flatMapWithNext(f)}

    def map[U](f: T => U): Parser[U] 
      = new Parser[U]{def apply(in: Input) 
                        = Parser.this(in) map(f)}
    
    def | [U >: T](p: => Parser[U]): Parser[U] 
      = new Parser[U]{def apply(in: Input) 
                        = Parser.this(in) append p(in)}
  
    def ~ [U](p: => Parser[U]): Parser[Pair[T, U]] 
      = for(a <- this; b <- p) yield (a,b)
  }  
}
\end{lstlisting}

\begin{ex} Implement \code{def flatMap[U](f: T => Result[U]): Result[U]} in the appropriate classes. Experiment with for-comprehensions over \type{Result}'s. Can you think of other applications besides parsing?
\end{ex}

\begin{ex} \label{ex:equivalence} Convince yourself that the two implementations of \lstinline!~! and \lstinline!|! are indeed equivalent, without actually executing anything. Inline the method calls made by the new version until you arrive at our first implementation. (See p. \pageref{sol:equivalence} for the solution to this exercise.)
\end{ex}

\begin{ex} Improve \type{OXOParser} so that \code{oxos} produces a parser that returns a list of strings (where each string equals \code{"oxo"}). (Hint: use \code{map}.)
\end{ex}


\section{More Polish and Advanced Features}
\subsection{Encapsulating Parser Instantiation}
Listing \ref{lst:smaller} shows how we can get rid of the repetitive \lstinline|new Parser[U]{def apply(in: Input) = ... }| fragment. We simply define a method \type{Parser} that makes a new instance of \type{Parser} given a function that fully defines the parser's logic. (We've also omitted the abstract \code{apply} method -- as said before, it is inherited from \type{Function1} anyway.)

\begin{lstlisting}[float,caption=Reducing SimpleParsers,label=lst:smaller]
trait SimpleParsers extends SimpleResults {
  def Parser[T](f: Input => Result[T]) 
    = new Parser[T]{ def apply(in: Input) = f(in) }

  abstract class Parser[+T] extends (Input => Result[T]) { 
    def flatMap[U](f: T => Parser[U]): Parser[U] 
      = Parser{in => Parser.this(in) flatMapWithNext(f)}

    def map[U](f: T => U): Parser[U] 
      = Parser{in => Parser.this(in) map(f)}
    
    def | [U >: T](p: => Parser[U]): Parser[U] 
      = Parser{in => Parser.this(in) append p(in)}
  
    def ~ [U](p: => Parser[U]): Parser[Pair[T, U]] 
      = for(a <- this; b <- p) yield (a,b)
  }  
}
\end{lstlisting}

\subsection[Improving accept]{Improving \code{accept}}
As it stands, our oxo-grammar is already pretty close to BNF notation:

\begin{lstlisting}
  def oxo = accept('o') ~ accept('x') ~ accept('o')
  def oxos: Parser[Any] = 
    ( oxo ~ accept(' ') ~ oxos
    | oxo
    )  
\end{lstlisting}

We will now see how we can reduce this to:
\begin{lstlisting}
  def oxo = 'o' ~ 'x' ~ 'o'
  def oxos: Parser[Any] = 
    ( oxo ~ ' ' ~ oxos
    | oxo
    )  
\end{lstlisting}
    
Without further intervention, this code will not compile, as \type{Char} does not have a \lstinline!~! method. We can use implicit conversions to remedy this.

Simply adding the \code{implicit} keyword to the signature of \code{accept} does the trick:
\begin{lstlisting}
implicit def accept(expected: Char): Parser[Char] = ... // as before
\end{lstlisting}

Now, whenever the compiler encounters a value of type \type{Char} where a \type{Parser[Char]} is expected, it will automatically insert a call to the \code{accept} method! 
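Implicit conversions are a general Scala mechanism, not specific to parsing. A self-contained sketch, with an illustrative \type{Meters} class of our own:

\begin{lstlisting}
import scala.language.implicitConversions

case class Meters(value: Double)
implicit def intToMeters(i: Int): Meters = Meters(i.toDouble)

def describe(m: Meters): String = s"${m.value}m"

describe(3) // the compiler silently rewrites this to describe(intToMeters(3))
\end{lstlisting}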

In the following sections we will see how we can express the oxo-grammar even more succinctly using more advanced combinators. First, we will refactor \code{accept} and add initial machinery to improve error-reporting.
    
\subsection{Filtering}
It is now time to change the \code{accept} we defined in listing \ref{lst:StringParsers}, so that it is less tightly coupled to the kind of input we are examining. 

\code{accept} generates a parser that accepts a given element of input. To do this, it suffices that it can retrieve one element of input as well as the input that follows this element. We introduce a new abstract type \type{Elem} to represent an element of input, which can be retrieved using \lstinline!def first(in: Input): Elem!. The rest of the input is returned by \lstinline!def rest(in: Input): Input!. Given these abstractions, we can implement \code{accept} once and for all.

For reference, listing \ref{lst:stringparsers2} shows the simplified \type{StringParsers}, which now only contains the essential methods that deal with the specific kind of input that is supported by this component. (Note that \code{first} and \code{rest} correspond to the typical \code{head} and \code{tail} functions that are used to access lists in functional programming.)

\begin{lstlisting}[caption=StringParsers with first and rest,label=lst:stringparsers2,float=htbp]
trait StringParsers extends SimpleParsers {
  type Input = String
  type Elem = Char
  private val EOI = 0.toChar

  def first(in: Input): Elem = if(in == "") EOI else in(0)
  def rest(in: Input): Input = if(in == "") in else in.substring(1)

  def eoi = accept(EOI) // accept is now defined in SimpleParsers
}  
\end{lstlisting}

To further deconstruct \code{accept}'s functionality, we define a primitive parser that unconditionally consumes the first element of input, whatever it is, and passes the \emph{rest} of the input on to the next parser. A separate method will then decide whether this element is acceptable.

\begin{lstlisting}
def consumeFirst: Parser[Elem] = Parser{in => 
  Success(first(in), rest(in))
}
\end{lstlisting}

We need one more standard method in \type{Parser}: \code{filter}. This method wraps an existing parser so that it accepts only results that meet a certain predicate (which is modelled as a function \type{T => Boolean}).

\begin{lstlisting}
  def filter(f: T => Boolean): Parser[T]
    = Parser{in => this(in) filter(f)}
\end{lstlisting}

Finally, we recover \code{accept} by filtering the parser that consumes any element of input, checking that the produced result equals the expected element.

\begin{lstlisting}
def acceptIf(p: Elem => Boolean): Parser[Elem] 
  = consumeFirst filter(p)
implicit def accept(e: Elem): Parser[Elem] = acceptIf(_ == e) 
\end{lstlisting}

\begin{ex}\label{ex:filter}
  Implement the corresponding \code{filter} method in \type{Result} and its subclasses. Note that this is not entirely trivial! Try out your implementation. Compare the resulting \code{accept} method to our original one -- where did it go wrong? (See p. \pageref{sol:filter} for the solution and an explanation.)
\end{ex}


\subsection{More Combinators}
To make it easier to define more advanced combinators, we add three more methods to \type{Parser}:

\begin{lstlisting}
  def ~> [U](p: => Parser[U]): Parser[U] = for(a <- this; b <- p) yield b
  def <~ [U](p: => Parser[U]): Parser[T] = for(a <- this; b <- p) yield a

  def ^^ [U](f: T => U): Parser[U] = map(f)
\end{lstlisting}

These methods allow sequencing parsers when we only care about the result of either the right-hand or the left-hand one. Because the following combinators rely heavily on \code{map}, we define the shorthand \lstinline!^^! for it.

\begin{lstlisting}[caption=More Advanced Combinators,label=lst:morecombi,float=htbp]
trait MoreCombinators extends SimpleParsers {
  def success[T](v: T): Parser[T] 
    = Parser{in => Success(v, in)(Failure("unknown failure", in))}
  
  def opt[T](p: => Parser[T]): Parser[Option[T]] 
    = ( p ^^ {x: T => Some(x)} 
      | success(None)
      )

  def rep[T](p: => Parser[T]): Parser[List[T]] 
    = rep1(p) | success(List())

  def rep1[T](p: => Parser[T]): Parser[List[T]] 
    = rep1(p, p)
  
  def rep1[T](first: => Parser[T], p: => Parser[T]): Parser[List[T]] 
    = first ~ rep(p) ^^ mkList

  def repsep[T, S](p: => Parser[T], q: => Parser[S]): Parser[List[T]]  
    = rep1sep(p, q) | success(List())
    
  def rep1sep[T, S](p: => Parser[T], q: => Parser[S]): Parser[List[T]]
    = rep1sep(p, p, q)
    
  def rep1sep[T, S](first: => Parser[T], p: => Parser[T], q: => Parser[S]): Parser[List[T]] 
    = first ~ rep(q ~> p) ^^ mkList
    
  def chainl1[T](p: => Parser[T], q: => Parser[(T, T) => T]): Parser[T] 
    = chainl1(p, p, q)
    
  def chainl1[T, U](first: => Parser[T], p: => Parser[U], q: => Parser[(T, U) => T]): Parser[T] 
    = first ~ rep(q ~ p) ^^ {
        case (x, xs) => xs.foldLeft(x){case (a, (f, b)) => f(a, b)}
      }

  def acceptSeq[ES <% Iterable[Elem]](es: ES): Parser[List[Elem]] = {
      def acceptRec(x: Elem, pxs: Parser[List[Elem]]) = (accept(x) ~ pxs) ^^ mkList
      es.foldRight[Parser[List[Elem]]](success(Nil))(acceptRec _)
  }
  
  private def mkList[T] = (_ : Pair[T, List[T]]) match {case (x, xs) => x :: xs }
}
\end{lstlisting}

The combinators in listing \ref{lst:morecombi} implement optionality, repetition (zero or more, or one or more), repetition with a separator, and chaining (to deal with left-recursion). These combinators were inspired by Hutton and Meijer's excellent introduction, in which they explain them in more detail \cite{Hutton96:combis}. We will not discuss their implementation here, but we will use them later in this paper. (Note that \code{success} passes a \type{Failure} in a second argument list to \type{Success}; this anticipates the solution to exercise \ref{ex:filter}, in which \type{Success} is extended to carry the result to produce when a filter rejects its value.)
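The left-associative fold at the heart of \code{chainl1} can be examined in isolation. In this sketch (\code{sub} and the input list are our own illustration), the first operand is combined with a list of (operator, operand) pairs, as produced by \lstinline!rep(q ~ p)!:

\begin{lstlisting}
val sub: (Int, Int) => Int = _ - _
val firstOperand = 10
val rest = List((sub, 2), (sub, 3)) // as if parsed from "10 - 2 - 3"

val result = rest.foldLeft(firstOperand) { case (acc, (f, b)) => f(acc, b) }
// result == 5, i.e. (10 - 2) - 3: left-associative, as desired
\end{lstlisting}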

We can leverage these combinators to further shorten our oxo-parser as follows (additionally, we improve the output):

\begin{lstlisting}
object OXOParser extends StringParsers with MoreCombinators {
  def oxo  = acceptSeq("oxo") ^^ {x => x.mkString("")}
  def oxos = rep1sep(oxo, ' ')
  
  def main(args: Array[String]) = println((oxos <~ eoi)(args(0)))
}
\end{lstlisting}


\subsection{Controlling Backtracking}
To manage the way in which a parser performs backtracking, we introduce a different type of unsuccessful result, \type{Error}, whose \code{append} method does not try the alternative. This effectively disables backtracking. To trigger this behaviour, we add the \code{dontBacktrack} method to \type{Parser}, which turns failures into errors. Its implementation is more subtle than simply turning every \code{Failure} into an \code{Error}: a \code{Success} must also record that subsequent \code{Failure}s are to be turned into \code{Error}s.

We explain the usage of the \lstinline|~!| combinator in section \ref{sec:errorbt}.

\begin{lstlisting}[caption=Error,label=lst:error,float=htbp]
case class Error(msg: String, next: Input) extends Result[Nothing] {
  def map[U](f: Nothing => U)                        = this
  def flatMap[U](f: Nothing => Result[U])            = this
  def flatMapWithNext[U](f: Nothing => Input => Result[U]) = this
  def filter(f: Nothing => Boolean): Result[Nothing] = this
  def append[U](alt: => Result[U])                   = this
  
  def explain(ei: String) = Error(ei, next)
}  
\end{lstlisting}

\begin{lstlisting}[caption=Disabling Backtracking,label=lst:dontbacktrack,float=htbp]
    def dontBacktrack: Parser[T] = ... /* a Parser whose Failures become Errors.
    This behaviour propagates through all other parsers that follow this one. */

    def ~! [U](p: => Parser[U]): Parser[Pair[T, U]] 
      = dontBacktrack ~ p
\end{lstlisting}


% \clearpage      
\subsection{Error Reporting}
\subsubsection{Better Messages}
Until now, the user of our library could not easily influence the error message produced when a parser fails. To solve this, we add three more methods to \type{Parser}:

\begin{lstlisting}
  def explainWith(msg: Input => String): Parser[T] = Parser{in =>
    this(in) explain msg(in)
  }

  def explain(msg: String): Parser[T] = Parser{in =>
    Parser.this(in) explain msg
  }

  def expected(kind: String): Parser[T] 
    = explainWith{in => ""+ kind +" expected, but \'"+ first(in) +"\' found."}
\end{lstlisting}

As usual, this requires a modest change to \type{Result}. This is the implementation for \mbox{\type{Failure}:}

\begin{lstlisting}
  def explain(ei: String) = Failure(ei, next)
\end{lstlisting}

A parser can now be customised with an appropriate error message by calling one of these new methods. For example, here is an improved version of \code{accept}:

\begin{lstlisting}
  implicit def accept(e: Elem): Parser[Elem] = acceptIf(_ == e).expected(e.toString)
\end{lstlisting}


\subsubsection{Failing Better}\label{sec:errorbt}
Besides better error messages, we can also improve the location where parsing breaks down. Generally, the further a parser gets into the input before failing, the more informative its error message will be. By disabling backtracking when we know it will not help, the parser is prevented from going back to an earlier point in the input, thus improving the error message.

For example, consider a simplified parser for a single (Scala-style) member declaration: 

\begin{lstlisting}
def member = ("val" ~! ident | "def" ~! ident)  
\end{lstlisting}

As soon as we have encountered the \code{val} keyword, but fail to parse an identifier, there is no point in going back to look for the \code{def} keyword. Thus, if a branch of the ordered choice signals an error, the subsequent alternatives are pre-empted.


% \subsection{}
% refactoring input so we can keep track of an environment
% 
% \begin{lstlisting}[caption=Encapsulating dealing with Input,label=lst:lab,float=htbp]
% trait SimpleInput { self: SimpleInput with SimpleResults =>
%   // The type of the elements consumed by this parser
%   type Elem
%   type Input
%   implicit def InputIsParseInput(in: Input): ParseInput
%   trait ParseInput {
%     def first: Elem
%     def rest: Input
% 
%     def result[T](r: T): Success[T]
%     def failure: Failure
%     def error: Error
%   }
% }
% \end{lstlisting}
