<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">

<HTML>

<HEAD>
  <TITLE>Elkhound Manual</TITLE>
  <meta http-equiv="Content-Type" content="text/html; charset=US-ASCII">
  <style type="text/css">
    H1 { font-size: 150% }
    H2 { font-size: 125% }
    H3 { font-size: 100% }
    P.title { font-size: 175% }
    a.toc:link { text-decoration: none }
    a.toc:visited { text-decoration: none }
  </style>
</HEAD>

<body>

<center>
<p class="title"><b>Elkhound Manual</b></p>
</center>

<p>
This page describes the input format for Elkhound grammars, and the
features of the Elkhound parser generator.

<p>
Related pages:
<ul>
<li><a href="index.html">Elkhound overview</a>
<li><a href="tutorial.html">Elkhound tutorial</a>
</ul>

<p>
If you'd like to look at a simple grammar while reading this
description, see <a
href="examples/arith/arith.gr">examples/arith/arith.gr</a>, a parser
for simple arithmetic expressions.

<p>
To find out how to run the tool, run <tt>./elkhound</tt> without
arguments (or search for "usage:" in 
<a href="gramanl.cc">gramanl.cc</a>).


<h1>Contents</h1>
                          
<p>
<!-- BEGIN CONTENTS -->
<!-- automatically generated by insert-html-toc; do not edit the TOC directly -->
<ul>
  <li><a class="toc" href="#lexical">1. Lexical structure</a>
  <li><a class="toc" href="#context_class">2. Context Class</a>
  <li><a class="toc" href="#terminals">3. Terminals</a>
  <ul>
    <li><a class="toc" href="#token_types">3.1 Token Types</a>
    <li><a class="toc" href="#token_dup_del">3.2 dup/del</a>
    <li><a class="toc" href="#token_classify">3.3 classify</a>
    <li><a class="toc" href="#token_prec_assoc">3.4 Precedence/associativity specifications</a>
    <li><a class="toc" href="#lexerint">3.5 Lexer Interface</a>
  </ul>
  <li><a class="toc" href="#nonterminals">4. Nonterminals</a>
  <ul>
    <li><a class="toc" href="#dup">4.1 dup</a>
    <li><a class="toc" href="#del">4.2 del</a>
    <li><a class="toc" href="#merge">4.3 merge</a>
    <li><a class="toc" href="#keep">4.4 keep</a>
    <li><a class="toc" href="#precedence">4.5 precedence</a>
    <li><a class="toc" href="#forbid">4.6 forbid_next</a>
  </ul>
  <li><a class="toc" href="#options">5. Options</a>
  <ul>
    <li><a class="toc" href="#useGCDefaults">5.1 useGCDefaults</a>
    <li><a class="toc" href="#defaultMergeAborts">5.2 defaultMergeAborts</a>
    <li><a class="toc" href="#expected_stats">5.3 Expected conflicts, unreachable symbols</a>
    <li><a class="toc" href="#allow_continued_nonterminals">5.4 allow_continued_nonterminals</a>
  </ul>
  <li><a class="toc" href="#ocaml">6. OCaml</a>
  <li><a class="toc" href="#prec_assoc">7. Precedence and Associativity</a>
  <ul>
    <li><a class="toc" href="#prec_assoc_meaning">7.1 Meaning of prec/assoc specifications</a>
    <li><a class="toc" href="#prec_assoc_attach">7.2 Attaching prec/assoc to tokens and productions</a>
    <li><a class="toc" href="#prec_assoc_resolution">7.3 Conflict resolution with prec/assoc specifications</a>
    <li><a class="toc" href="#prec_assoc_example_arith">7.4 Example: Arithmetic grammar</a>
    <li><a class="toc" href="#prec_assoc_example_else">7.5 Example: Dangling else</a>
    <li><a class="toc" href="#prec_assoc_further">7.6 Further directions</a>
  </ul>
</ul>
<!-- END CONTENTS -->

<a name="lexical"></a>
<h1>1. Lexical structure</h1>

<p>
The grammar file format is free-form, meaning that all whitespace is
considered equivalent.  In the C tradition, grouping is generally
denoted by enclosing things in braces ("{" and "}").  Strings are
enclosed in double-quotes ("").

<p>
Grammar files may include other grammar files, by writing
<tt>include("other_file_name")</tt>.

<p>
Comments can use the C++ "//" syntax or the C "/**/" syntax.
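<p>
Combining these rules, the top of a grammar file might look like this
(the file name here is illustrative):
<pre>
  // C++-style comment
  /* C-style comment */
  include("tokens.gr")      // pull in declarations from another file
</pre>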

<a name="context_class"></a>
<h1>2. Context Class</h1>

<p>
The parser's action functions are all members of a C++ context
class.  As the grammar author, you must define the context class.
The class is introduced with the "<tt>context_class</tt>" keyword,
followed by ordinary C++ syntax for classes (ending with "<tt>};</tt>").
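<p>
For example, a minimal context class might be (the class name and
members are illustrative; any ordinary C++ class body is allowed):
<pre>
  context_class ArithContext {
  public:
    int nodeCount;        // state shared by the reduction actions
  };
</pre>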


<a name="terminals"></a>
<h1>3. Terminals</h1>

<p>
The user must declare all of the tokens, also called terminals.  A
block of terminal declarations looks like:
<pre>
  terminals {
    0 : TOK_EOF;
    1 : TOK_NUMBER;              // no alias
    2 : TOK_PLUS     "+";        // alias is "+" (including quotes)
    3 : TOK_MINUS    "-";
    4 : TOK_TIMES    "*";
    5 : TOK_DIVIDE   "/";
    6 : TOK_LPAREN   "(";
    7 : TOK_RPAREN   ")";
  }
</pre>

<p>
Each statement gives a unique numeric code (e.g. 3), a canonical
name (e.g. TOK_MINUS), and an optional alias (e.g. "-").  Either the
name or the alias may appear in the grammar productions, though the
usual style is to use aliases for tokens that always have the same
spelling (like "+"), and the name for others (like TOK_NUMBER).

<p>
Normally the tokens are expected to be declared in their own
file, and the <a href="make-tok">make-tok</a> script will create
the token list seen above.

<a name="token_types"></a>
<h2>3.1 Token Types</h2>

<p>
In addition to declaring the numeric codes and aliases of the tokens,
the user must declare types for semantic values of tokens, if those
values are used by reduction actions (specifically, if their occurrence
on a right-hand-side includes a label, denoted with a colon ":").

<p>
The syntax for declaring a token type is
<blockquote>
  <tt>token(</tt><i>type</i><tt>) </tt><i>token_name</i><tt>;</tt>
</blockquote>
or, if specifying terminal functions,
<blockquote>
  <tt>token(</tt><i>type</i><tt>) </tt><i>token_name</i><tt> {</tt><br>
  &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<i>terminal_functions</i><br>
  <tt>}</tt>
</blockquote>
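<p>
For instance, to match the terminals list above, one might write (the
choice of <tt>int</tt> is illustrative):
<pre>
  token(int) TOK_NUMBER;        // semantic value is the number's value
</pre>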

<p>
The terminal functions are explained in the next sections.

<a name="token_dup_del"></a>
<h2>3.2 dup/del</h2>

<p>
Terminals can have <tt>dup</tt> and <tt>del</tt> functions, just like
nonterminals.  See <a href="#dup">below</a> for more information.

<a name="token_classify"></a>
<h2>3.3 classify</h2>

<p>
In some situations, it is convenient to be able to alter the classification
of a token after it is yielded by the lexer but before the parser sees it,
in particular before it is compared to lookahead sets.  For this purpose,
each time a token is yielded from the lexer, it is passed to that token's
<tt>classify()</tt> function.  <tt>classify</tt> accepts a single argument,
the semantic value associated with that token.  It returns the token's
new classification, as a token id.  (It cannot change the semantic value.)

<p>
The main way it differs from simply modifying the lexer is that the
<tt>classify</tt> function has access to the parser context class, whereas
the lexer presumably does not.  In any case, it's something of a hack, and
best used sparingly.

<p>
As a representative example, here is the <tt>classify</tt> function from
<a href="c/c.gr">c/c.gr</a>, used to implement the lexer hack for a C parser:
<pre>
  token(StringRef) L2_NAME {
    fun classify(s) {
      if (isType(s)) {
        return L2_TYPE_NAME;
      }
      else {
        return L2_VARIABLE_NAME;
      }
    }
  }
</pre>

<a name="token_prec_assoc"></a>
<h2>3.4 Precedence/associativity specifications</h2>

<p>
Inside a block that begins with "<tt>precedence {</tt>" and ends with
"<tt>}</tt>", one may list directives of the form
<blockquote>
  <i>dir</i> &nbsp;&nbsp;&nbsp; <i>num</i> &nbsp;&nbsp;&nbsp; <i>tok...</i> &nbsp;&nbsp;&nbsp; <tt>;</tt>
</blockquote>
where <i>dir</i> is a precedence direction, <i>num</i> a precedence
number, and <i>tok...</i> a sequence of tokens (either token names or
aliases).  The effect is to associate <i>dir</i>/<i>num</i> with
each listed token.  The meaning of such specifications is explained
in Section&nbsp;7, below.
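<p>
For example, the arithmetic grammar of Section&nbsp;7.4 uses:
<pre>
  precedence {
    left 20 "*" "/";      // dir=left, num=20, tokens "*" and "/"
    left 10 "+" "-";      // a smaller number means lower precedence
  }
</pre>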


<a name="lexerint"></a>
<h2>3.5 Lexer Interface</h2>

<p>
The terminals specification given in the grammar file is a description
of the output of the lexer.  This section describes the mechanism by which
the lexer provides that output.

<p>
The parser interacts with the lexer through the API defined in the
<a href="lexerint.h"><tt>lexerint.h</tt></a> file.  The lexer must
implement (inherit from) LexerInterface.  When the parser asks the
lexer for a token by invoking its <tt>NextTokenFunc</tt>, the
lexer describes the current token by filling in the three data fields 
of LexerInterface:
<ul>
<li><tt>type</tt>: The token code as defined in the <tt>terminals</tt> list
    (Section&nbsp;3).
<li><tt>sval</tt>: The semantic value, a value of the type given in the
    Token Types specification (Section&nbsp;3.1) for the token whose
    code is in <tt>type</tt>.
<li><tt>loc</tt>: The source location of the start of the current token.
    You only need to fill this in if you plan on using the value of
    <tt>loc</tt> from within reduction action functions.
</ul>
The parser will then do its work by reading these fields, and when it
is ready for the next token, will invoke the <tt>NextTokenFunc</tt>
again.

<p>
<blockquote>
<b>Rationale:</b> The rationale for this very imperative style of
interface is performance.  It's faster to share storage between the
parser and lexer than to have the parser make a separate copy of this
information.  Since token retrieval is in the inner loop of the
parser, saving just a few instructions can be significant.
</blockquote>

<p>
The actual mechanism for invoking <tt>NextTokenFunc</tt> is a little
convoluted.  The parser first calls <tt>getTokenFunc()</tt>, a virtual
method, which returns a function pointer.  This pointer is stored in
the parser when it first starts up.  The parser then invokes this
function pointer each time it wants a new token, passing a pointer
to the LexerInterface as an argument.

<p>
This means that the lexer must define <em>two</em> functions, a
static method (or a nonmember function) that does token retrieval,
and a virtual method that just returns a pointer to the first
function.

<p>
<blockquote>
<b>Rationale</b>: Again, performance is the main consideration.  The
cost of a virtual method dispatch is two memory reads plus an
indirect function call (with data dependencies separating each step),
whereas calling a function pointer requires
only the indirect call.  Originally, Elsa used a virtual function
dispatch, and switching to this approach produced a significant
speedup (I no longer remember the exact numbers; I'd guess it was
5-10%).
</blockquote>

<p>
Finally, there are two functions, <tt>tokenDesc</tt> and
<tt>tokenKindDesc</tt>, that are used for diagnostic purposes.
When there is a parse error, the parser will call these functions
to obtain descriptions of tokens for use in the error message.


<a name="nonterminals"></a>
<h1>4. Nonterminals</h1>

<p>
Following the terminals, the bulk of the grammar is one or more
nonterminals.  Each nonterminal declaration specifies all of the
productions for which it is the left-hand-side.

<p>
A simple nonterminal might be:
<pre>
  nonterm(int) Exp {
    -&gt; e1:Exp "+" e2:Exp        { return e1 + e2; }
    -&gt; n:TOK_NUMBER             { return n; }
  }
</pre>

<p>
The type of the semantic value yielded by the productions is given
in parentheses, after the keyword "<tt>nonterm</tt>".  In this case,
<tt>int</tt> is the type.  The type can be omitted if productions
do not yield interesting semantic values.

<p>
In the example, Exp has two productions, <tt>Exp -&gt; Exp "+"
Exp</tt> and <tt>Exp -&gt; TOK_NUMBER</tt>.  The "<tt>-&gt;</tt>"
keyword introduces a production.

<p>
Right-hand-side symbols can be given names, by putting the name
before a colon (":") and the symbol.  These names can be used in
the action functions to refer to the semantic values of the
subtrees (like Bison's <tt>$1</tt>, <tt>$2</tt>, etc.).  Note that
action functions <tt>return</tt> their value, as opposed to (say)
assigning to <tt>$$</tt>.

<p>
There are four kinds of nonterminal functions, described below.

<a name="dup"></a>
<h2>4.1 dup</h2>

<p>
Because of the way the GLR algorithm operates, a semantic value
yielded (returned) by one action may be passed as an argument
to <em>more than one</em> action.  This is in contrast to Bison,
where each semantic value is yielded exactly once.

<p>
Depending on what the actions actually do, i.e. what the semantic
values actually mean, the user may need to intervene to help
manage the sharing of semantic values.  For example, if the
values form a tree where memory is managed by reference counting,
then the reference count of a value would need to be increased
each time it is yielded.

<p>
The <tt>dup()</tt> nonterminal function is intended to support the
kind of sharing management alluded to above.  Each time a semantic
value is to be passed to an action, it first is passed to the
associated <tt>dup()</tt> function.  The value returned by
<tt>dup()</tt> is stored back in the parser's data structures,
for use the next time the value must be passed to an action.
Effectively, by calling <tt>dup()</tt>, the parser is announcing,
"I am about to surrender this value to an action; please give me
a value to use in its place next time."

<p>
Common <tt>dup()</tt> strategies:
<ul>
<li>Return <tt>NULL</tt>.  This is the default.  This expresses the
    intent that the value <em>not</em> be passed more than once (since
    any subsequent actions will receive <tt>NULL</tt> arguments).
<li>Return the argument.  This is the default if the
    <tt>useGCDefaults</tt> option is specified.  This is useful when
    the semantic values can be shared arbitrarily without special handling.
<li>Increment a reference count, make a deep copy, etc.  Various
    possibilities exist depending on the particular sharing
    management in use.
</ul>
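<p>
For instance, under a reference-counting scheme, a <tt>dup()</tt> might
look like the following sketch (the <tt>Node</tt> type and its
<tt>refct</tt> field are hypothetical):
<pre>
  nonterm(Node*) Exp {
    fun dup(n) { n-&gt;refct++; return n; }    // hand out the same value again
    ...
  }
</pre>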

<a name="del"></a>
<h2>4.2 del</h2>

<p>
A natural counterpart to <tt>dup()</tt>, <tt>del()</tt> accepts values
that are not going to be passed to any more actions (this happens
when, for example, one of the potential parsers fails to make further
progress).  It does not return anything.

<p>
Common <tt>del()</tt> strategies:
<ul>
<li>Print a warning.  This is the default, since an unhandled
    <tt>del()</tt> may be the cause of a memory leak, depending
    on the memory management strategy in use.
<li>Do nothing.  This is the default if the <tt>useGCDefaults</tt>
    option is specified.
<li>Decrement a reference count, deallocate, etc.  Depending on
    the programmer's intended memory management scheme, the
    value passed to <tt>del()</tt> can be treated appropriately.
</ul>
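<p>
For instance, under a reference-counting scheme, a <tt>del()</tt> might
look like this sketch (the <tt>Node</tt> type and its <tt>refct</tt>
field are hypothetical):
<pre>
  nonterm(Node*) Exp {
    fun del(n) { if (--(n-&gt;refct) == 0) delete n; }   // value is now dead
    ...
  }
</pre>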

<a name="merge"></a>
<h2>4.3 merge</h2>

<p>
An <em>ambiguity</em> arises when a single sequence of
tokens can be parsed as some nonterminal in more than one way.
During parsing, when an ambiguity is encountered, the semantic
values from the different parses are passed to the nonterminal's
<tt>merge()</tt> function, two at a time.

<p>
Merge accepts two competing semantic value arguments, and returns a
semantic value that will stand for the ambiguous region in all future
reductions.  Both the arguments and the return value have the type of
the nonterminal's usual semantic values.

<p>
If there are more than two parses, the first two will be merged, the
result of which will be merged with the third, and so on until they
are all merged.  At each step, the first argument is the one that
may have resulted from a previous <tt>merge()</tt>, and the second
argument is not (unless it is the result of merging from further
down in the parse forest).

<p>
Common <tt>merge()</tt> strategies:
<ul>
<li>Print a message to the effect that the ambiguity is unexpected,
    and return one of the arguments arbitrarily.  This is the
    default behavior.
<li>Abort the program.  This is a more severe response than the
    first one, and is the default if the <tt>defaultMergeAborts</tt>
    option is specified.
<li>Examine the two semantic values, and apply some disambiguation
    criteria to choose which to retain.  Then return only the
    retained value.  Note that <tt>merge()</tt> is being given exclusive
    right to access both values; if it chooses to only return one of
    them for future reductions, then the other should probably be
    treated similarly to calling <tt>del()</tt> (though in fact, literally
    calling <tt>del()</tt> is not possible in the current implementation).
<li>Create some explicit representation of the ambiguity.  This is how
    the Elsa C++ parser handles most of the ambiguities in its C++ grammar,
    due to the type/variable ambiguity.
</ul>
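<p>
A <tt>merge()</tt> implementing the third strategy might look like this
sketch (the <tt>Node</tt> type and the <tt>preferable()</tt> criterion
are hypothetical):
<pre>
  nonterm(Node*) Exp {
    fun merge(a, b) {
      if (preferable(a, b)) { return a; }    // retain one value...
      else { return b; }                     // ...discarding the other
    }
    ...
  }
</pre>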

<p>
<b>Note that ambiguity is different from a reduce/reduce conflict.</b>
A reduce/reduce conflict happens when
the top few symbols of the parse stack can be reduced by two different
rules.  It is not (necessarily) an ambiguity, as the reduced input
portions are different.  In <tt>merge()</tt>, the merging fragments arise from
reductions applied to the <em>same</em> sequence of ground terminals.

<p>
For example, the following unambiguous(!) grammar
<pre>
  S -&gt; B a c | b A a d
  B -&gt; b a a
  A -&gt; a a
</pre>
has a reduce/reduce conflict because after seeing "baa" with "a" in the
lookahead, the parser cannot tell whether to reduce the topmost "aa" to A,
or the entire "baa" to B, since it only has one token of lookahead and
can't see whether the token after the lookahead is "c" or "d".

<p>
On the other hand, the ambiguous grammar
<pre>
  S -&gt; A b | a b
  A -&gt; a
</pre>
contains no reduce/reduce conflicts.  The only conflict is a shift/reduce,
as after seeing "a" with "b" as lookahead the parser cannot tell whether
to reduce "a" to A, or shift the "b" and later reduce the combination
directly as S.

<p>
Conceptually, if you imagine a nondeterministic LALR parsing algorithm,
conflicts are split (choice) points and ambiguities are join points.  You
cannot have a join without a split, but there is no easy way (in fact it's
undecidable) to compute a precise relationship between splits and possible
future joins.  Both shift/reduce and reduce/reduce are split points,
whereas <tt>merge()</tt> is a join.


<a name="keep"></a>
<h2>4.4 keep</h2>

<p>
Sometimes, a potential ambiguity can be prevented if a semantic value
can be determined to be invalid in isolation (as opposed to waiting to
see a competing alternative in <tt>merge()</tt>).  To support such
determination, each nonterminal can have a <tt>keep()</tt> function,
which returns <tt>true</tt> if its semantic value argument should be
retained (as usual) or <tt>false</tt> if its argument should be
suppressed, as if the reduction never happened.

<p>
If <tt>keep</tt> returns <tt>false</tt>, the parser does <em>not</em>
call <tt>del()</tt> on that value; it is regarded as disposed by
<tt>keep</tt>.

<p>
Common <tt>keep</tt> strategies:
<ul>
<li>Always return <tt>true</tt>.  This is the default.
<li>Look at the argument, and return <tt>false</tt> if for some reason
    it should be discarded, particularly if it will otherwise lead to
    an ambiguity.  This is the intended usage.
<li>As a variation of the above, the reduction action itself can make
    the determination of suitability, and return a special value like
    <tt>NULL</tt> if it wants to cancel the reduction.  Then,
    <tt>keep()</tt> simply checks for the special value.  In fact,
    since <tt>keep()</tt> is a somewhat expensive option in terms of
    performance, I am considering eliminating <tt>keep()</tt> in favor
    of a built-in <tt>NULL</tt> check.
</ul>
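<p>
The third strategy above might look like this sketch (the nonterminal
and its type are hypothetical):
<pre>
  nonterm(Decl*) Declaration {
    fun keep(d) { return d != NULL; }    // cancel reductions that yielded NULL
    ...
  }
</pre>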


<a name="precedence"></a>
<h2>4.5 precedence</h2>

<p>
A rule can be annotated with an explicit precedence specification,
for example:
<pre>
  nonterm N {
    -&gt; A B C     precedence("+")   { /*...*/ }
  }
</pre>

<p>
This specification has the effect of assigning the rule
"<tt>N -&gt; A B C</tt>" the same precedence level as the terminal "+".
See <a href="#prec_assoc">Section 7</a> for more information on what
this does.


<a name="forbid"></a>
<h2>4.6 forbid_next</h2>

<p>
A rule can also be annotated with one or more <em>forbidden
lookahead</em> declarations, for example:
<pre>
  nonterm N {
    -&gt; A B C     forbid_next("+") forbid_next("*")  { /*...*/ }
  }
</pre>

<p>
This specification means that the rule "<tt>N -&gt; A B C</tt>" cannot
be used to reduce if the next symbol is either "+" or "*".


<a name="options"></a>
<h1>5. Options</h1>

<p>
A number of variations in parser generator behavior can be requested
through the use of the <tt>option</tt> syntax:
<blockquote>
  <tt>option </tt><i>option_name</i><tt>;</tt>
</blockquote>
or, for options that accept an argument:
<blockquote>
  <tt>option </tt><i>option_name</i> <i>option_argument</i><tt>;</tt>
</blockquote>
The <i>option_name</i> is an identifier from among those listed below,
and <i>option_argument</i> is an integer.

<p>
Option processing is implemented in 
<a href="grampar.cc"><tt>grampar.cc</tt></a>,
function <tt>astParseOptions</tt>.
The various options are described in the following sections.

<a name="useGCDefaults"></a>
<h2>5.1 useGCDefaults</h2>

<p>
The command
<pre>
  option useGCDefaults;
</pre>
instructs the parser generator to make the tacit assumption that sharing
management is automatic (e.g. via a garbage collector), and hence set the
default terminal and nonterminal functions appropriately.

<p>
In fact, most users of Elkhound will probably want to specify this
option during initial grammar development, to reduce the amount
of specification needed to get started.  The rationale for not making
<tt>useGCDefaults</tt> the global default is that users should be aware
that the issue of sharing management is being swept under the carpet.

<a name="defaultMergeAborts"></a>
<h2>5.2 defaultMergeAborts</h2>

<p>
The command
<pre>
  option defaultMergeAborts;
</pre>
instructs the parser generator that if the grammar does not specify a
<tt>merge()</tt> function, the supplied default should print a message
and then abort the program.  This is a good idea once it is believed
that all the ambiguities have been handled by <tt>merge()</tt> functions.

<a name="expected_stats"></a>
<h2>5.3 Expected conflicts, unreachable symbols</h2>

<p>
Nominally, the parser generator expects there to be no shift/reduce
and no reduce/reduce conflicts, and no unreachable (from the start
symbol) symbols.  Of course, the whole point of using GLR is to allow
conflicts, but it is still generally profitable to keep track of how
many conflicts are present at a given stage of grammar development,
since a sudden explosion of conflicts often indicates a grammar bug.

<p>
So, the user can declare how many conflicts of each type are expected.
For example,
<pre>
  option shift_reduce_conflicts 40;
  option reduce_reduce_conflicts 30;
</pre>
specifies that 40 S/R conflicts and 30 R/R conflicts are expected.  If
the parser generator finds matching statistics, it will suppress
reporting of such statistics; if there is a difference, it will be
reported.

<p>
Similarly, one can indicate the expected number of unreachable
symbols (this usually corresponds to a grammar in development, where
part of the grammar has been deliberately disabled by making it
inaccessible):
<pre>
  option unreachable_nonterminals 3;
  option unreachable_terminals 2;
</pre>

<a name="allow_continued_nonterminals"></a>
<h2>5.4 allow_continued_nonterminals</h2>

<p>
By default, Elkhound will complain if there is more than one
definition of a given nonterminal.  However, if you say
<pre>
  option allow_continued_nonterminals;
</pre>
then Elkhound will treat input like
<pre>
  nonterm(type) N {
    -&gt; A ;
  }
  nonterm(type) N {
    -&gt; B ;
  }
</pre>
as equivalent to
<pre>
  nonterm(type) N {
    -&gt; A ;
    -&gt; B ;
  }
</pre>


<p>
The nonterminal types, if specified, must be identical.  

<p>
Since Elkhound just concatenates the bodies, specification functions 
(like <tt>merge()</tt>) must be given at most once across all
continuations, and they apply to the whole (concatenated) nonterminal.

<p>
This feature is mostly useful for automatically-generated grammars,
particularly those created by textually combining elements from
two or more human-written grammars.


<a name="ocaml"></a>
<h1>6. OCaml</h1>

<p>
By default, Elkhound generates a parser in C++.  By specifying
"<tt>-ocaml</tt>" on the command line,
the user can request that Elkhound generate OCaml code instead.
Please see <a href="ocaml/">ocaml/</a>, probably starting with
the example driver <a href="ocaml/main.ml">ocaml/main.ml</a>.

<p>
The lexer interface for OCaml is given in 
<a href="ocaml/lexerint.ml">ocaml/lexerint.ml</a>.  It is similar
to the C++ interface except I have not yet (2005-06-13) implemented
support for source location propagation, so there is no <tt>loc</tt>
field.  Also, since OCaml does not have function pointers, it uses
an ordinary virtual dispatch to get the next token.

<a name="prec_assoc"></a>
<h1>7. Precedence and Associativity</h1>

<p>
Precedence and associativity ("prec/assoc") declarations are a
technique for statically resolving conflicts.  They are often
convenient, but can also be confusing.

<p>
Precisely understanding prec/assoc requires some familiarity with the
details of LR parsing, in particular the difference between "shift"
and "reduce".  The following description assumes the reader is
familiar with these concepts.  If you are not, Section&nbsp;2 of the
<a href="http://www.cs.berkeley.edu/~smcpeak/elkhound/elkhound.ps">Elkhound
technical report</a> contains a brief description.

<a name="prec_assoc_meaning"></a>
<h2>7.1 Meaning of prec/assoc specifications</h2>

<p>
A prec/assoc specification is a precedence number, and an associativity
direction.  Precedence numbers are integers, where larger integers
have "higher precedence".  An associativity direction is one of
the following:
<table border="2">
  <tr>
    <th>Name</th>
    <th>Mnemonic</th>
    <th>Description</th>
  </tr>
  <tr>
    <td>left</td>
    <td>left associative</td>
    <td>Resolve shift/reduce by reducing.</td>
  </tr>
  <tr>
    <td>right</td>
    <td>right associative</td>
    <td>Resolve shift/reduce by shifting.</td>
  </tr>
  <tr>
    <td>nonassoc</td>
    <td>non associative</td>
    <td>"Resolve" shift/reduce by deleting <em>both</em> possibilities,
        consequently making associative uses into parse-time syntax errors.</td>
  </tr>
  <tr>
    <td>prec</td>
    <td>specify only precedence</td>
    <td>This directive asserts that a grammar does not have a conflict that
        would require using the associativity direction; if it does,
        it is a parser-generation-time error.</td>
   </tr>
   <tr>
     <td>assoc_split</td>
     <td></td>
     <td>Do not resolve the conflict; the GLR algorithm will split the
         parse stack as necessary.</td>
   </tr>
</table>

<a name="prec_assoc_attach"></a>
<h2>7.2 Attaching prec/assoc to tokens and productions</h2>

<p>
Each token can be associated with a prec/assoc specification.
By default, a token has no prec/assoc specification.  However, a
specification can be attached via the <tt>precedence</tt> block of the
<tt>terminals</tt> section of the grammar file (see Section 3.4, above).

<p>
Each production can be associated with a precedence number.
If a production includes any terminals among its RHS elements, then
the production inherits the precedence number of the <em>rightmost</em>
terminal in the RHS; if the rightmost terminal has no prec/assoc
specification, then the rule has no precedence.  The precedence
number can also be explicitly supplied by writing
<blockquote>
  <tt>precedence (</tt> <i>tok</i> <tt>)</tt>
</blockquote>
at the end of the RHS (just before the action).  This will use the
precedence of <i>tok</i> instead of the rightmost terminal.


<a name="prec_assoc_resolution"></a>
<h2>7.3 Conflict resolution with prec/assoc specifications</h2>

<p>
The resulting token and rule prec/assoc info is used during parser
generation (only), to resolve shift/reduce and reduce/reduce conflicts.
"Resolving" a conflict means removing one of the possible actions from the
parse tables.  It is implemented in <tt>GrammarAnalysis::resolveConflicts()</tt>.

<p>
When a shift/reduce conflict is encountered between a token and a rule, if
the token has higher precedence, then a shift is used, otherwise a reduce
is used.  If they have the same precedence, then the associativity direction
of the token is consulted, using the resolution procedures described in
the table above.

<p>
When a reduce/reduce conflict is encountered, the rule with the highest
precedence is chosen.

<p>
In both cases, if either conflicting entity does not have any prec/assoc
specification, then no resolution is done, and the GLR algorithm will
split the parse stack at parse time as necessary.


<a name="prec_assoc_example_arith"></a>
<h2>7.4 Example: Arithmetic grammar</h2>

<p>
The canonical example of using prec/assoc is to give a grammar for
arithmetic expressions, using an ambiguous grammar (for convenience)
and prec/assoc to resolve the ambiguities.  The grammar is typically
(e.g., <a href="examples/arith/arith.gr">arith.gr</a>) something like
<pre>
  nonterm Expr {
    -&gt; Expr "+" Expr ;
    -&gt; Expr "-" Expr ;
    -&gt; Expr "*" Expr ;
    -&gt; Expr "/" Expr ;
    -&gt; TOK_NUMBER ;
  }
</pre>
with a precedence specification like
<pre>
  precedence {
    left 20 "*" "/";
    left 10 "+" "-";
  }
</pre>

<p>
Consider parsing the input "1 + 2 * 3".  This should parse like
"1 + (2 * 3)".  The crucial moment during the parsing algorithm
comes when the parser sees the "*" in its lookahead.  At that moment, 
the parse stack looks like
<pre>
  Expr(2)                       <-- top of stack (most recently shifted)
  +
  Expr(1)                       <-- bottom of stack
</pre>

<p>
If the parser were to <em>reduce</em>, via the rule "<tt>Expr -&gt; Expr + Expr</tt>",
then the stack would become
<pre>
  Expr(Expr(1)+Expr(2))         <-- top/bottom of stack
</pre>
which means the final parse would be "(1 + 2) * 3", which is wrong.

<p>
Alternatively, if the parser were to <em>shift</em>, the stack becomes
<pre>
  *                             &lt;-- top of stack
  Expr(2)
  +
  Expr(1)                       &lt;-- bottom of stack
</pre>
Then after one more shift, we have
<pre>
  Expr(3)                       &lt;-- top of stack
  *
  Expr(2)
  +
  Expr(1)                       &lt;-- bottom of stack
</pre>
and the only choice is to reduce via "<tt>Expr -&gt; Expr * Expr</tt>", leading to
the configuration
<pre>
  Expr(Expr(2)*Expr(3))         &lt;-- top of stack
  +
  Expr(1)                       &lt;-- bottom of stack
</pre>
and one more reduce yields
<pre>
  Expr( Expr(1) + Expr(Expr(2)*Expr(3)) )    &lt;-- top/bottom
</pre>
which is the desired "1 + (2 * 3)" interpretation.

<p>
The parsing choice above is a <em>shift/reduce conflict</em>; the parser
must choose between shifting "*" and reducing via "<tt>Expr -&gt; Expr + Expr</tt>".
Now, the prec/assoc spec says that "*" has higher precedence than "+",
and the rule is (by default) given the precedence of the latter.
Consequently, the shift is chosen, so that the "*" will ultimately
be reduced before the "+".

<p>
One can do similar reasoning for the case of "1 + 2 + 3", which should
parse as "(1 + 2) + 3", and does so because "+" is declared to be left-associative:
reduces are preferred to shifts (reduce early instead of reducing late).

<a name="prec_assoc_example_else"></a>
<h2>7.5 Example: Dangling else</h2>

<p>
As another example of using prec/assoc to resolve grammar ambiguity,
we can consider the classic "dangling else" problem.  Many languages,
such as C, have two rules for the "if" statement:
<pre>
  nonterm Stmt {
    -&gt; "if" "(" Expr ")" Stmt ;
    -&gt; "if" "(" Expr ")" Stmt "else" Stmt ;
    ...
  }
</pre>
This grammar is ambiguous; for example, the input
<pre>
  if (P)   if (Q)   a=b;   else   c=d;
</pre>
could be parsed as
<pre>
  if (P) {
    if (Q)
      a=b;
    else        // associated with inner "if"
      c=d;
  }
</pre>
or as
<pre>
  if (P) {
    if (Q)
      a=b;
  }
  else          // associated with outer "if"
    c=d;
</pre>

<p>
As with all uses of prec/assoc, it is possible to resolve the
ambiguity by modifying the grammar.  (Actually, I'm not 100% sure
about this fact, especially because of LALR vs LR.  But it seems
to be true in practice.)  However, in this case, doing so
means duplicating the entire Stmt nonterm: one version for a Stmt that
appears as the then-branch of an "if" statement that has an
"else" clause (in which case the else-less "if" form must be
excluded), and another version for everywhere else.  Such savagery to
the grammar is usually unacceptable.
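<p>
For comparison, the classic grammar-only fix splits Stmt into a
"matched" form, in which every "if" has its "else", and an "unmatched"
form.  A sketch in Elkhound notation (the nonterminal names are
illustrative, and all actions are omitted):

```
nonterm Stmt {
  -> MatchedStmt ;
  -> UnmatchedStmt ;
}

nonterm MatchedStmt {
  -> "if" "(" Expr ")" MatchedStmt "else" MatchedStmt ;
  ...    // all the non-"if" statement forms go here
}

nonterm UnmatchedStmt {
  -> "if" "(" Expr ")" Stmt ;
  -> "if" "(" Expr ")" MatchedStmt "else" UnmatchedStmt ;
}
```

Only MatchedStmt may appear before an "else", so the else-less "if"
form is excluded exactly where it would cause ambiguity.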

<p>
There is an easy fix with prec/assoc, however: simply declare
"else" to be right-associative, and (explicitly) give both "if" rules
the precedence of the "else" token:
<pre>
  terminals {
    precedence {
      right 100 "else";     // precedence number irrelevant
    }
  }
  ...
  nonterm Stmt {
    -&gt; "if" "(" Expr ")" Stmt                   precedence("else");
    -&gt; "if" "(" Expr ")" Stmt "else" Stmt       precedence("else");
    ...
  }
</pre>
At the crucial moment, the lookahead is "else" and the stack is
<pre>
  Stmt(a=b;)                          &lt;-- top
  ")"
  Expr(Q)
  "("
  "if"
  ")"
  Expr(P)
  "("
  "if"                                &lt;-- bottom
</pre>
The conflict is between shifting "else" and reducing via
"<tt>Stmt -&gt; if ( Expr ) Stmt</tt>".  Since the precedence of both
the token and the rule is the same, the associativity direction
of the token is consulted.  That direction is "right", which means
to reduce late, i.e. shift.  After shifting "else" and the rest of the
input, and reducing "c=d;" to a Stmt, we have
<pre>
  Stmt(c=d;)                          &lt;-- top
  "else"
  Stmt(a=b;)
  ")"
  Expr(Q)
  "("
  "if"
  ")"
  Expr(P)
  "("
  "if"                                &lt;-- bottom
</pre>
Finally, the algorithm reduces via
"<tt>Stmt -&gt; if ( Expr ) Stmt else Stmt</tt>", yielding
<pre>
  If(Q, Stmt(a=b;), Stmt(c=d;))       &lt;-- top
  ")"
  Expr(P)
  "("
  "if"                                &lt;-- bottom
</pre>
and one more reduce yields
<pre>
  If(P,   If(Q, Stmt(a=b;), Stmt(c=d;))  )
</pre>
which corresponds to binding the "else" to the innermost "if".

<a name="prec_assoc_further"></a>
<h2>7.6 Further directions</h2>

<p>
As illustrated in the "dangling else" example, prec/assoc specifications
need not involve "operators" with a classic "precedence order".  Arguably,
such cases constitute a hack, wherein the grammar designer takes advantage
of knowledge of the parsing algorithm in use.  It would perhaps be better
if there were some kind of parsing-algorithm-neutral specification technique
that could subsume LR prec/assoc, but be translated into LR prec/assoc
if that is the algorithm in use.  But I do not know of such a technique.
(At one point I played with using attribute grammars and planned some
static analysis to figure out when an attribute grammar's inherited
attributes could be replaced with an equivalent prec/assoc spec, but it turned
out to be hard so I abandoned the attempt.)

<p>
Not every use of prec/assoc is to resolve an actual grammatical
ambiguity.  It is possible for the grammar to be unambiguous, but
still have a conflict, in which case a prec/assoc specification
may be a good way to resolve it, if performance is a concern.
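<p>
A standard illustration of an unambiguous grammar with a conflict is
the grammar of odd-length strings of a single token, i.e. palindromes
over "x" (a hypothetical grammar, not one from this distribution):

```
nonterm Palindrome {
  -> "x" ;
  -> "x" Palindrome "x" ;
}
```

Each accepted input has exactly one parse, yet no fixed amount of
lookahead lets an LR parser tell whether the next "x" is the midpoint
(reduce) or not (shift), so the tables contain a shift/reduce
conflict.  In this extreme case neither static choice is always
correct, and the conflict must simply be left for the GLR algorithm to
explore, at some cost in speed.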

<p>
I think the best advice is: if you are not very familiar with LR
parsing, ignore prec/assoc.  Either write an unambiguous grammar, or
use Elkhound's <tt>merge</tt> feature to do disambiguation; in either
case, the conflicts only affect performance and so can be ignored.
If you <em>are</em> familiar with LR parsing, then use your judgment
as to whether a given situation is right for prec/assoc.  Such
specifications can seem appealing because they yield fast parsers, but
it can be difficult to determine the actual language accepted by an LR
parser that uses them.


<p>
  <a href="http://validator.w3.org/check/referer"><img border="0"
      src="http://www.w3.org/Icons/valid-html401"
      alt="Valid HTML 4.01!" height="31" width="88"></a>
</p>

</body>

</HTML>
