\documentclass[conference]{IEEEtran}

\usepackage{k}
\usepackage{graphicx}

\renewcommand{\textfraction}{0.05}
\renewcommand{\topfraction}{0.95}
\renewcommand{\bottomfraction}{0.95}
\renewcommand{\floatpagefraction}{0.35}
\setcounter{totalnumber}{5}

\ifCLASSINFOpdf
\else
\fi





























\usepackage{algpseudocode}
\usepackage{algorithm}


% correct bad hyphenation here
\hyphenation{op-tical net-works semi-conduc-tor}


\newenvironment{mylisting}
{\begin{list}{}{\setlength{\leftmargin}{1em}}\item\scriptsize\bfseries}
{\end{list}}


\begin{document}
%
% paper title
% can use linebreaks \\ within to get better formatting as desired
\title{Automating Abstract Syntax Tree Construction for Context-Free Grammars}


% author names and affiliations
% use a multiple column layout for up to three different
% affiliations
\author{
\IEEEauthorblockN{Daniel Ionu{\c t} Vicol and Andrei Arusoaie}
\IEEEauthorblockA{Faculty of Computer Science\\
``Alexandru Ioan Cuza'' University of Ia{\c s}i\\
}}

% conference papers do not typically use \thanks and this command
% is locked out in conference mode. If really needed, such as for
% the acknowledgment of grants, issue a \IEEEoverridecommandlockouts
% after \documentclass





% use for special paper notices
%\IEEEspecialpapernotice{(Invited Paper)}




% make the title area
\maketitle


\begin{abstract}
In most compilers and programming language tools, parsers are used to transform human-readable code into parse trees or Abstract Syntax Trees (ASTs). A popular method to create parsers is to use a parser generator. Advanced parser generators (e.g., ANTLR, SDF) can generate ASTs directly if the grammar is annotated with AST generation rules. These annotations are typically written manually, by adding a constructor to each grammar rule and associating an AST component with each constructor. This can be inconvenient for people with little experience in writing grammars, or for those who already have a grammar for their language.

In this paper, we present a generic method for inferring such AST generation rules and a tool which automatically generates the annotated grammar. Given a grammar for a language, some input programs, and their corresponding ASTs, the tool infers the rules for constructing the AST. If the input programs cover the whole range of the language's syntactic constructs, then the parser corresponding to the generated annotated grammar can parse any program of the given language and transform it into an AST.

% In order to test our method we used the tool in the context of the \K framework~\cite{rosu-serbanuta-2010-jlap,k-primer-2012-v25}, which is a framework designed for defining programming languages. 
%In \K, one can define its own language by giving both the syntax and semantics of the language; the \K tool generates a parser and an interpreter which can parse and execute programs.
% Because the builtin \K parser has some limitations, the framework allows the use of external parsers that generate the \K specific AST. We used our tool to generate external parsers for a few languages defined in \K.

\end{abstract}
% IEEEtran.cls defaults to using nonbold math in the Abstract.
% This preserves the distinction between vectors and scalars. However,
% if the conference you are submitting to favors bold math in the abstract,
% then you can use LaTeX's standard command \boldmath at the very start
% of the abstract to achieve this. Many IEEE journals/conferences frown on
% math in the abstract anyway.

% no keywords




% For peer review papers, you can put extra information on the cover
% page as needed:
% \ifCLASSOPTIONpeerreview
% \begin{center} \bfseries EDICS Category: 3-BBND \end{center}
% \fi
%
% For peerreview papers, this IEEEtran command inserts a page break and
% creates the second title. It will be ignored for other modes.
\IEEEpeerreviewmaketitle


\section{Introduction}
Ever since the idea of encoding programs on punched cards appeared, programs have been stored in a specific format so that they can be {\it ``understood''} by a computer. Nowadays, when creating new programming languages, people use {\it parsers} to transform computer programs into a standard representation (the most common representations are tree-like data structures) before running them.
Parsing is an important and difficult problem in computer science, and researchers have spent decades creating algorithms and developing parser generators that take high-level grammars as input~\cite{conf/pldi/ParrF11}. There are several important approaches (PEGs~\cite{Ford:2004:PEG:964001.964011}, GLR~\cite{Tomita:91}, LR~\cite{Knuth:1965:LR}, LL~\cite{Rosenkrantz:1969:PDT:800169.805431}) which today are successfully used for parsing the most popular programming languages. There are also advanced parser generators which come with tools for processing the data structures resulting from parsing, the most relevant being Rascal~\cite{rascal-klint} and Stratego~\cite{StrategoDoc06} (incorporated into Spoofax~\cite{KatsVisser2010}, based on SDF~\cite{Heering:1989:SDF:71605.71607}). Rascal allows manipulation of the parse tree using pattern-directed function definition and invocation, while Stratego is a strategy language able to apply strategies over an Abstract Syntax Tree (AST). Both of them offer a toolset for ``program transformation''. Program transformations are important because they are meant to improve the reliability, productivity, and analysis of software. The tools enumerated above achieve this by post-processing the tree resulting from parsing, which consists of writing tool-specific patterns/rules/strategies that map each piece of syntax into a specific construct. \\
Most of the existing tools have a straightforward approach to generating an AST from a program: the grammar production rules are annotated manually by the user with information about tree construction, which is used at parse time.\\
In this paper we address the problem of creating ASTs in a different manner: we try to infer AST construction rules given a non-annotated context-free grammar of a language and some pairs $(P,P_{AST})$, where $P$ is a program and $P_{AST}$ is its corresponding AST. We propose an algorithm which, given the grammar of a language, some programs, and their corresponding ASTs, is able to automatically infer the annotations and generate the annotated grammar. The contributions of our research are listed below:
\begin{itemize}
\item We developed an algorithm able to infer the Abstract Syntax Tree construction rules based on a Context Free Grammar, and a set of pairs: $(P, P_{AST})$, where $P_{AST}$ is the AST corresponding to  {\it P}.
\item We implemented the algorithm in a tool which generates the annotated grammar and the corresponding parser using ANTLR.
\item We successfully tested the generated parser on several programming languages.
\end{itemize}
One major advantage of this approach is that users never have to write rules or strategies, which would require learning new languages. The idea of generating the AST from examples of its shape was inspired by the \K framework~\cite{rosu-serbanuta-2010-jlap}. In \K, people can formally define programming languages using rewriting rules. Programs are \K terms, which are nothing else than ASTs with a specific format. \K users know the ASTs corresponding to their programs, and using the tool we present here they can generate a parser for their language which outputs \K terms directly from programs. More details about \K and the way it treats programs are discussed in Subsection \ref{ex}.

The rest of the paper is organized as follows: Section \ref{pre} contains some basic notions about parsing and context-free grammars which are used throughout this paper. In Section \ref{automating:AST} we explain why we need such a tool, describing a motivating example in the context of the \K framework, and we fully describe and explain our algorithm by example. The implementation is presented in Section \ref{implementation} and the test results are shown in Section \ref{eval}. We conclude in Section \ref{conclusions}.

\section{Prerequisites}
\label{pre}

This section includes a short introduction to formal languages and compilers, focusing on those concepts that are essential to understanding the remainder of the paper.

\subsection{Grammars, Parsers and Parser Generators}
A {\it parser} is a component which checks an input string, typically a program, for syntactic correctness and builds a data structure called a {\it parse tree}. A parser can be created from scratch or it can be generated by a {\it parser generator}. Parser generators are programs able to ``understand'' a {\it grammar} and to generate a parser from it. Grammars are sets of rules meant to describe the syntax of a language by specifying the constructs allowed by that language. Formally, a grammar is defined as a tuple $G=(V,\Sigma,\delta,S)$, where:

\begin{itemize}
\item $V$ is a finite set of non-terminals; a non-terminal can be seen as a variable representing a syntactic category. Each non-terminal represents a sub-language defined by $G$.
\item $\Sigma$ is a finite set of terminals, disjoint from $V$. $\Sigma$ is also called the alphabet of $G$.
\item $\delta \subseteq V \times (V \cup \Sigma)^\star$ is a finite set of grammar productions, where $\_^\star$ denotes the Kleene closure.
\item $S$ is the start non-terminal.
\end{itemize}

A grammar as defined above is a Context-Free Grammar (CFG)~\cite{DBLP:conf/stoc/AhoU69} because all production rules can be written in the form $A \rightarrow w$, where $A \in V$ and $w$ is a string of terminals and non-terminals. There are four types of grammars according to the Chomsky hierarchy~\cite{chomsky}; we will only refer to context-free grammars in what follows.\\
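As a concrete illustration of the definition, the tuple $G=(V,\Sigma,\delta,S)$ can be encoded as plain data. The Python sketch below is our own illustration (with hypothetical names), not code from the tool presented later; it derives $V$ and $\Sigma$ from a small production list:

```python
# The tuple G = (V, Sigma, delta, S) encoded as plain data, for a fragment
# of an expression grammar. Illustrative sketch only.

# delta: the productions, one (lhs, rhs) pair per rule
delta = [
    ("AExp", ("Int",)),
    ("AExp", ("Id",)),
    ("AExp", ("AExp", "+", "AExp")),
]

V = {lhs for lhs, _ in delta}                      # non-terminals
Sigma = {s for _, rhs in delta for s in rhs} - V   # terminals: all other symbols
S = "AExp"                                         # start non-terminal

# Every production has the form A -> w with A in V and w a string over
# V and Sigma, so this grammar is context-free.
```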
There are two main types of CFGs used in parsing: LL grammars and LR grammars. LL grammars are a subset of the context-free grammars which impose restrictions on the grammar in order to simplify parsing. The main restriction is that the grammar must not contain left recursion. LL grammars can be parsed using top-down predictive parsing algorithms of complexity $O(n)$ ($n$ being the length of the input). LL parsers scan the input left-to-right and produce a leftmost derivation of the input. They have an input buffer, a push-down stack, a parsing table, and an output buffer. The input buffer contains the program to be parsed, followed by the \$ delimiter. The stack contains a sequence of terminals or non-terminals, with another delimiter (\#) which marks the bottom of the stack. Initially, the input pointer points to the first symbol of the input, and the stack contains the start non-terminal ($S$) above \#.\\
The most popular parser generators which accept as input LL grammars are: ANTLR~\cite{conf/pldi/ParrF11, DBLP:conf/pldi/2011}, Coco/R~\cite{coco}, and Parsec~\cite{Leijen01parsec:a}. \\
LR grammars represent another subset of context-free grammars. Unlike LL grammars, the main restriction here concerns right recursion. These grammars are parsed using bottom-up parsing algorithms. LR parsers scan the input left-to-right and try to find the inverse of the sequence of productions used in a rightmost derivation. Like LL parsers, LR parsers have an input buffer, a push-down stack, a parsing table, and an output buffer; similarly, the input ends with \$. At every moment of the parsing, the stack contains a sequence $q_mX_mq_{m-1} \ldots X_1q_0$, where each $X_i$ is a symbol of the grammar and each $q_i$ is a state symbol (which summarizes the information in the stack below it). Many parser generators use a variant of LR called LALR, which was invented to address the practical difficulties of implementing a canonical LR parser. Here we mention a few examples: GoldParser~\cite{goldparser}, YACC, and Bison.
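To make the LL restriction concrete: the left-recursive rule $AExp ::= AExp$ {\tt +} $AExp$ must first be rewritten, e.g. as $AExp ::= Primary$ {\tt (+} $Primary${\tt )*} with $Primary ::= Int \mid Id$. The sketch below is our own minimal illustration of top-down predictive (recursive-descent) parsing for that rewritten rule ({\tt Primary} is a hypothetical helper category), not one of the generators listed above:

```python
# Minimal recursive-descent (LL) parser sketch for
#   AExp ::= Primary ("+" Primary)* ,   Primary ::= Int | Id
# Our own illustration; trees are nested tuples.

def tokenize(text):
    return text.split()            # assume whitespace-separated tokens

def parse_aexp(tokens, pos=0):
    """Scan left-to-right, returning (tree, next_pos)."""
    left, pos = parse_primary(tokens, pos)
    while pos < len(tokens) and tokens[pos] == "+":
        right, pos = parse_primary(tokens, pos + 1)
        left = ("+", left, right)  # fold left-associatively
    return left, pos

def parse_primary(tokens, pos):
    tok = tokens[pos]
    if tok.isdigit():
        return ("Int", tok), pos + 1
    return ("Id", tok), pos + 1

# parse_aexp(tokenize("a + 45"))[0] == ("+", ("Id", "a"), ("Int", "45"))
```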


\subsection{Parse Trees vs. Abstract  Syntax Trees}
Parse trees (PTs) are trees which represent the syntactic structure of an input string according to a given grammar. Usually, the internal nodes are labelled with names of non-terminals of the grammar, while the leaf nodes are labelled with terminals. Abstract Syntax Trees (ASTs) represent the abstract syntactic structure, ignoring details that appear in the original syntax. Each node of the tree denotes a construct which occurs in the source code. An AST captures the essential structure of the input in tree form while omitting unnecessary syntactic details. ASTs can be distinguished from parse trees by the fact that they omit tree nodes representing punctuation marks such as semicolons (used as statement delimiters) or commas (used to separate function arguments).

\section{Automating Abstract Syntax Tree construction}
\label{automating:AST}

In this section we use a small imperative language to explain the steps of the algorithm we developed. We give a precise description of each step and illustrate its effect on the example.

\subsection{Motivation}
\label{ex}
The \K framework allows people to formally define their own languages by giving both the syntax and the semantics of the language. Several real languages have been defined in \K, the most important being the executable formal semantics of C~\cite{ellison-rosu-2012-popl}. In the context of the \K framework, we tried to automate the program parsing step. The \K parser has some limitations because it is an experimental feature, and the way it generates the parser from the syntax of the language is not yet well-defined. Because of this limitation, the \K tool allows people to connect their own parser for the language, which must output a specific AST from programs. When executing a program using the semantics, \K applies the semantic rules over that specific AST and {\it consumes} the program. \\
Suppose one has an existing parser for a specific language. It can be quite tricky to make it output the \K-specific AST (we will call this operation {\it kast}) from a program, because the parser has to be modified, and this could be difficult for people not familiar with parsing.\\
What we want to do here is to automate this transformation ({\it kast-ing}) knowing only some programs and their transformations. We found a method and developed an algorithm capable of inferring the transformation by analyzing the grammar of the language and a set of pairs $(P,P_{AST})$, where $P$ is a program and $P_{AST}$ is the AST of $P$.

\subsection{Example}
IMP is a small imperative language, used as a canonical example in many research papers about programming languages. It contains arithmetic expressions over the domain of arbitrarily large integer numbers, Boolean expressions, assignment statements, conditional statements, while-loop statements, and sequential composition of statements. All variables used in an IMP program are expected to be declared at the beginning of the program and can hold only integer values. The syntax of IMP, given in Backus-Naur Form (BNF), is shown in Figure~\ref{imp:syntax}.

\begin{figure}
\begin{center}
\begin{tabular}{|l l l|}
	\hline
	Int & $::=$ & the domain of (unbounded)\\
	&& integer numbers \\
	Bool & $::=$ & the domain of Booleans\\
	Id & $::=$ & standard identifiers \\
	$AExp$ & $::=$ & Int \\
	 & \hspace{7pt}$|$ & Id \\
	 & \hspace{7pt}$|$ & $AExp$ + $AExp$\\
	 & \hspace{7pt}$|$ & $AExp$ / $AExp$\\
	$BExp$ & $::=$ & Bool \\
	 & \hspace{7pt}$|$ & not $BExp$ \\
	 & \hspace{7pt}$|$ & $AExp$ <= $AExp$\\
	 & \hspace{7pt}$|$ & $BExp$ and $BExp$\\
	 $Stmt$ & $::=$ & skip\\
	 & \hspace{7pt}$|$ & $Id$ $:=$ $AExp$\\
	 & \hspace{7pt}$|$ & $Stmt$ ; $Stmt$\\
	 & \hspace{7pt}$|$ & if $BExp$ then $Stmt$ else $Stmt$\\
	 & \hspace{7pt}$|$ & while $BExp$ do $Stmt$\\
	 $Ids$ & $::=$ & List\{Id\}\\
	$Pgm$ & $::=$ & var $Ids$ ; $Stmt$ \\
 	\hline
\end{tabular}
\end{center}
\caption{Syntax of IMP, using algebraic BNF~\cite{McCracken:2003:BF:1074100.1074155}}
\label{imp:syntax}
\end{figure}

A complete semantics of IMP can be found in \cite{rosu-serbanuta-2010-jlap}. It is based on rewriting and has been developed using the \K framework.
Figure~\ref{sample} shows an IMP program on the left and its corresponding AST on the right. Having a CFG for IMP and pairs $(P,P_{AST})$, where $P$ is a program and $P_{AST}$ is the AST of $P$, as shown in Figure~\ref{sample}, we are able to generate an annotated grammar such that the parser generated from this grammar takes a program as input and directly outputs an AST. In the context of our previous example, we can generate a parser able to recognize IMP programs and output the corresponding \K-specific AST.

\begin{figure*}
\centering
\begin{tabular}{|l | l|}

\hline

\begin{minipage}[b]{0.55\linewidth}
\begin{tabbing}
\tt var\=\ \tt a,b;\\
\tt a := a + 45; \\
\tt if not true\\
\>\tt then a:=3;\\
\>\tt else b:=5;\\
\tt b := 5/3 ;\\
\tt while a <= a and false do \\
\>\tt a:=3+4;\\
\end{tabbing}
\end{minipage}

& 

\begin{minipage}[b]{0.55\linewidth}
\begin{tabbing}
\tt var\_;\_(\_.\_(id(a),id(b)),\\
\tt \_;\_(\_:=\_(id(a),\_+\_(id(a),int(45))),\\
\tt \_;\_(if\_then\_else\_( not\_( bool( true )),\\
\tt \_;\_(\_:=\_(id(a),int(3))),\\
\tt \_;\_(\_:=\_(id(b),int(5)))),\\
\tt \_;\_(\_:=\_(id(b),\_/\_(int(5),int(3))),\\
\tt \_;\_(while\_do\_(\_and\_(\_<=\_(id(a),id(a)),bool(false)),\\
\tt \_;\_(\_:=\_(id(a),\_+\_(int(3),int(4))))))))))\\
\end{tabbing}
\end{minipage}\\

\hline
\end{tabular}
\caption{Left: an IMP program $P$. Right: the textual representation of the corresponding AST of $P$. The construct ``var\_;\_'' represents a label which in this case is applied to other labels ``\_.\_'' and ``\_;\_''. The underscores (`\_') represent the positions of the arguments of the current label.}
\label{sample}
\end{figure*}

\subsection{The algorithm}
The goal of our algorithm is to enrich the grammar of a given language with annotations such that the parser for the annotated grammar is able to generate the ASTs. We can infer these annotations (constructors for the AST, associated to production rules) automatically by finding similarities between the AST and the parse tree. Of course, one may argue that these annotations could be added manually without much difficulty. The problem is that the algorithm sometimes also appends grammar rules to the original grammar. Advanced languages have very complex grammars, and for those, many examples (pairs as shown in Figure~\ref{sample}) must be created. Since writing whole programs might take a lot of time, we decided to allow users to specify only parts of a program and their corresponding ASTs. For that, we remove the start symbol of the grammar and put it back such that it points to a production rule which reduces that specific part of the program. In order to provide this facility we have to be able to remove and append grammar rules dynamically.\\
Another reason for adding rules to the grammar comes from the node elimination step of our algorithm, which is discussed later in the paper.

From now on, we will consider $G$ to be the grammar of a given language, $P$ a program, and $P_{AST}$ the corresponding AST of $P$.\\
In order to infer annotations, the algorithm first parses the program to obtain the parse tree, and then keeps editing the parse tree until it has the same structure as the given AST. Once both trees become structurally similar, the algorithm simply maps a label from the AST to each production rule in the grammar. In this section, the words {\it annotation} and {\it constructor} are both used to denote the label of a production rule. The name of the annotation or constructor of a production rule becomes the name of the node generated in the parse tree by that specific rule.

The main steps of the algorithm (steps 2-7 have to be applied to each pair $(P, P_{AST})$) are:
{\it 
\begin{enumerate}
\item Pre-processing $G$
\item Parse $P$ and store the parse tree in $PT$
\item Load $P_{AST}$
\item Remove nodes corresponding to ``non-productive'' production rules from $PT$
\item Remove the terminal nodes from the $P_{AST}$
\item Adjust $PT$ using the edit distance between subtrees of $PT$ and subtrees of $P_{AST}$ until they have the same number of nodes
\item Traverse both trees and map $P_{AST}$ labels to grammar production rules, making use of $PT$ nodes.
\end{enumerate}
}
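As a concrete sketch of step 7: once $PT$ and $P_{AST}$ are structurally identical, the mapping is a simultaneous traversal of both trees. The Python fragment below is our own illustration (trees as {\tt (label, children)} tuples, hypothetical function names), not the tool's code:

```python
# Step 7 sketch: once the parse tree and the AST have the same shape, walk
# them in lockstep and record, for each rule constructor, the AST label it
# should be annotated with. Names here are our own illustration.

def map_labels(pt, ast, mapping=None):
    if mapping is None:
        mapping = {}
    pt_label, pt_children = pt
    ast_label, ast_children = ast
    mapping[pt_label] = ast_label          # rule constructor -> AST label
    for p_child, a_child in zip(pt_children, ast_children):
        map_labels(p_child, a_child, mapping)
    return mapping

pt  = ("AExp_2", [("Id_0", []), ("Int_0", [])])   # parse tree for "a + 45"
ast = ("_+_",    [("id", []),   ("int", [])])     # its AST
# map_labels(pt, ast)
# == {"AExp_2": "_+_", "Id_0": "id", "Int_0": "int"}
```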

\subsubsection*{Preprocessing}
The first step of the algorithm is the preprocessing of the language grammar. Here, a parser for the grammar itself is used to store all the productions of the grammar in a list. For optimization reasons, the list is ordered by a comparison criterion between production rules: if $r_i$ and $r_j$ are production rules and $i<j$, then $r_i$ and $r_j$ are swapped if $r_j$ is triggered by $r_i$. This is very helpful when detecting the start production rule and the start non-terminal.\\
Having this data structure, we can adjust the grammar by annotating each production with a constructor, which is a string formed by concatenating the rule name, a delimiter, and a number chosen such that the constructor name is unique. The rule name is given by the name of the rule's non-terminal, the delimiter can be any character (e.g., `\_'), and as the number we can use the index of the production.
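This naming scheme can be sketched in a few lines of Python (our own illustration under hypothetical names, not the tool's implementation):

```python
# Sketch of the constructor-naming scheme: the rule's non-terminal name,
# a delimiter ("_"), and the production's index within that non-terminal.
# Function and variable names are our own illustration.

def annotate(productions):
    """productions: list of (lhs, rhs) pairs -> list of (lhs, rhs, label)."""
    counters = {}
    annotated = []
    for lhs, rhs in productions:
        n = counters.get(lhs, 0)           # index of this production of lhs
        counters[lhs] = n + 1
        annotated.append((lhs, rhs, f"{lhs}_{n}"))   # e.g. "AExp_0"
    return annotated

rules = [("AExp", "Int"), ("AExp", "Id"),
         ("AExp", "AExp + AExp"), ("AExp", "AExp / AExp")]
# labels produced: AExp_0, AExp_1, AExp_2, AExp_3
```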

After preprocessing, each production rule is annotated as below:

\begin{center}
\small{
{\tt
\begin{tabular}{l l l l}
	$AExp$ & $::=$ & Int & [AExp\_0]\\
	 & \hspace{7pt}$|$ & Id & [AExp\_1]\\
	 & \hspace{7pt}$|$ & $AExp$ + $AExp$ & [AExp\_2]\\
	 & \hspace{7pt}$|$ & $AExp$ / $AExp$ & [AExp\_3]\\
\end{tabular}
}}
\end{center}
The label for the production $AExp$ $::=$ Int is [AExp\_0] because the production's non-terminal is $AExp$ and this is the ``first'' production of $AExp$.
Since each production has a label, the resulting tree is the parse tree: it captures all the rules applied when parsing the input program.

\subsubsection*{Parse the program}
From the annotated grammar we can generate a parser which produces a parse tree whose nodes are labeled with the constructors attached to the production rules. An important fact here is that each applied production rule is represented in the parse tree by a node. For instance, given the program {\tt a + 45}, the parse tree is the following:
\begin{center}
\small{
{\tt
\begin{tabular}{l l l}
AExp\_2&\\
&\hspace{-1cm}AExp\_1&\\
&&\hspace{-1cm}Id\_0(a)\\
&\hspace{-1cm}AExp\_0&\\
&&\hspace{-1cm}Int\_0(45)\\
\end{tabular}
}
}
\end{center}


\subsubsection*{Load the AST}
This step consists of parsing the AST of the program using a generic parser for ASTs. For this parser we assume that all ASTs have the form {\tt constructor(list\_of\_ASTs)}, where {\tt constructor} is the root node and {\tt list\_of\_ASTs} is a list of subtrees and terminals. The result of parsing the AST is stored in a tree-like data structure. An example of a valid AST is shown in the right-hand side of Figure \ref{sample}.
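A recursive parser for this textual form fits in a few lines. The sketch below is our own illustration of the ``load the AST'' step (hypothetical function names), not the tool's generic parser:

```python
# Minimal recursive parser for the textual AST form constructor(arg, ..., arg),
# e.g. "_+_(id(a),int(45))". Trees are (name, children) tuples.

def parse_ast(text):
    tree, _ = _node(text, 0)
    return tree

def _node(s, i):
    j = i
    while j < len(s) and s[j] not in "(),":
        j += 1                       # scan the constructor or terminal name
    name = s[i:j].strip()
    if j < len(s) and s[j] == "(":
        children = []
        j += 1                       # skip "("
        while s[j] != ")":
            child, j = _node(s, j)
            children.append(child)
            if s[j] == ",":
                j += 1               # skip the argument separator
        return (name, children), j + 1   # skip ")"
    return (name, []), j             # a leaf: a terminal such as "a" or "45"

# parse_ast("_+_(id(a),int(45))")
# == ("_+_", [("id", [("a", [])]), ("int", [("45", [])])])
```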

\subsubsection*{Remove ``non-productive'' production rules}
This step consists of ``cleaning'' the parse tree so that it becomes ``more abstract''. First, all the constructors corresponding to rules of the form {\tt N ::= N'}, where {\tt N} and {\tt N'} are non-terminals, are collected in a list {\tt Labels}. Then, the parse tree is traversed and every node whose constructor is in the {\tt Labels} list is replaced by its child nodes. These nodes are produced by so-called ``non-productive'' rules because, in general, they only reduce a non-terminal to another non-terminal, without affecting the structure of the AST. \\

The parse tree for the program {\tt a + 45} contains two nodes generated by ``non-productive'' rules: {\tt AExp\_1} and {\tt AExp\_0}. The corresponding production rules of these nodes simply reduce the $AExp$ non-terminal to the $Id$ and $Int$ non-terminals.
The transformed parse tree is shown below:

\begin{center}
\small{
{\tt
\begin{tabular}{l l}
AExp\_2&\\
&\hspace{-1cm}Id\_0(a)\\
&\hspace{-1cm}Int\_0(45)\\
\end{tabular}
}
}
\end{center}
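The elimination pass itself can be sketched in Java as follows: a node is spliced out, in its parent's child list, whenever its constructor label occurs in the {\tt Labels} set. Names are illustrative, not the tool's actual code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Sketch of "non-productive" node elimination: a node whose label appears
// in the Labels set is replaced by its own (already cleaned) children.
class PtNode {
    final String label;
    final List<PtNode> children = new ArrayList<>();
    PtNode(String label, PtNode... cs) {
        this.label = label;
        for (PtNode c : cs) children.add(c);
    }

    // Returns the cleaned subtree list: either this node (with cleaned
    // children) or, if this node is non-productive, its cleaned children.
    List<PtNode> clean(Set<String> labels) {
        List<PtNode> flat = new ArrayList<>();
        for (PtNode c : children) flat.addAll(c.clean(labels));
        if (labels.contains(label)) return flat;  // splice children upward
        children.clear();
        children.addAll(flat);
        List<PtNode> self = new ArrayList<>();
        self.add(this);
        return self;
    }
}
```

On the example above, cleaning {\tt AExp\_2(AExp\_1(Id\_0), AExp\_0(Int\_0))} with {\tt Labels = \{AExp\_1, AExp\_0\}} yields {\tt AExp\_2(Id\_0, Int\_0)}.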



This removal of nodes causes some issues and raises some questions about the abstraction level of the AST. In effect, we restrict the abstraction level of the AST so that it contains only nodes induced by ``productive'' rules. Let $A_1$ and $A_2$ be two different ASTs for $P$. If $A_1$ contains more nodes than $A_2$, then $A_2$ is {\it ``more'' abstract} than $A_1$ in the sense that it contains less information. If $A_2$ is the abstract tree that contains only nodes generated by ``productive'' rules, then $A_1$ probably contains nodes which correspond to ``non-productive'' grammar rules, which, according to our algorithm, will be eliminated! Because the elimination step is essential to the algorithm, we cannot drop it, so we developed a mechanism which recovers such nodes when necessary. The idea is to keep the original parse tree and, in the penultimate step of the algorithm, when adjusting the transformed parse tree, if no corresponding nodes from the AST are found in the parse tree, we use the original parse tree to recreate the nodes previously deleted from the parse tree.

Another issue arises from the fact that CFGs can be written using an extension of BNF which allows constructions like ``*'' or ``+''. An example of such a production rule is shown below:
\begin{center}
{\tt N ::= N' (+ N')*}\\
\end{center}
The ``*'' says that the language accepts {\tt N'}, {\tt N' + N'}, {\tt N' + N' + N'}, and so on, the number of occurrences of {\tt + N'} being zero or more.
Since we can modify the grammar, we split the rule above into two equivalent ones:
\begin{center}
{\tt N ::= N' } and {\tt N ::= N' (+ N')+}
\end{center}
Here, ``+'' marks one or more occurrences of {\tt + N'}. Generating these production rules makes the detection of ``non-productive'' rules easier.
 
 
\subsubsection*{Remove terminal nodes from AST}
This step consists of traversing the AST and removing all the terminals, i.e., the leaves of the tree. Terminals are not important because only intermediary nodes of the AST can be mapped to grammar rules. The result of applying this step to the AST listed in Figure~\ref{sample} is shown in Figure~\ref{AST}.
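This traversal can be sketched in Java as follows, under the assumption that terminals occur only as leaves whose parents are constructor nodes (names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the terminal-removal traversal: leaf children of the original
// AST are terminals and are pruned; constructor nodes are kept even if
// they become leaves afterwards (e.g. id(a) becomes a bare id node).
class Ast {
    final String label;
    final List<Ast> children = new ArrayList<>();
    Ast(String label, Ast... cs) {
        this.label = label;
        for (Ast c : cs) children.add(c);
    }

    // Drops every leaf child (a terminal), recursing into the remaining
    // subtrees; the leaf test is done before recursion, so constructor
    // nodes that lose all their children are preserved.
    void removeTerminals() {
        children.removeIf(c -> c.children.isEmpty());
        for (Ast c : children) c.removeTerminals();
    }
}
```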

\begin{figure}
\centering
\begin{tabular}{|l|}
\hline
\begin{minipage}[b]{\linewidth}
\small{
\begin{tabbing}
\tt var\_;\_(\_.\_(id,id),\\
\tt \_;\_(\_:=\_(id,\_+\_(id,int)),\\
\tt \_;\_(if\_then\_else\_( not\_( bool),\\
\tt \_;\_(\_:=\_(id,int)),\\
\tt \_;\_(\_:=\_(id,int))),\\
\tt \_;\_(\_:=\_(id,\_/\_(int,int)),\\
\tt \_;\_(while\_do\_(\_and\_(\_<=\_(id,id),bool),\\
\tt \_;\_(\_:=\_(id,\_+\_(int,int)))))))))\\
\end{tabbing}
}
\end{minipage}\\
\hline
\end{tabular}
\caption{AST without terminals}
\label{AST}
\end{figure}

\subsubsection*{Adjust the parse tree}
After preprocessing the parse tree, both trees have similar structures but not the same number of nodes. They can differ at a node in the number of children (caused by the node-elimination step) or in the order of the children. To get rid of these differences we use an algorithm for computing the edit distance between two trees, proposed in \cite{DBLP:journals/siamcomp/ZhangS89}: the edit distance between $s_1$ and $s_2$ is the minimum number of edit operations required to transform $s_1$ into $s_2$. The basic operations are delete, insert, and modify. For optimization reasons, since all the nodes have to be modified (re-labelled), we set the cost of the ``modify'' operation to 0. The edit distance is given by the function $ed:N_{AST} \times N_{PT} \to N$. Together with the edit distance function, let us make the following assumptions:
\begin{itemize}
\item Let $d:N_{AST} \times N_{PT} \to N$ be a function representing the distance between nodes. $d$ will be defined by the function $FindMapping$ and will also represent all possible mappings between nodes from $P_{AST}$ and $PT$. $N_{AST}$ and $N_{PT}$ denote the sets of nodes of $P_{AST}$ and $PT$, respectively.
\item $P_{AST}$ is the root node of the AST of program $P$; $C$ represents the set of children of node $P_{AST}$; $c \in C$ 
\item $PT$ is the root node of the parse tree of program $P$ after step $4$ (non-productive nodes elimination); $C'$ represents the set of children of node $PT$; $c' \in C'$
\item $PT_i$ is the root node of the initial parse tree of program $P$; $C''$ represents the set of children of node $PT_i$; $c'' \in C''$
\item Since $PT$ is obtained from $PT_i$ by eliminating non-productive nodes we can consider that there is a function $f$ which for each node from $PT$ returns the corresponding node in $PT_i$. 
\item $parent(c)$ represents the parent node of $c$ and $h(c)$ represents the height of the tree having root $c$.
\end{itemize}

\begin{figure}[t]
\begin{minipage}[h]{0.9\linewidth}
\begin{algorithmic}
% \State{This function will define function $d$ and will modify $PT$ }
\Function{FindMapping}{$P_{AST}$, $PT$, $PT_i$}
\For{$c \in C$,$c'\in C'$} 
\State{$c'':=f(c')$}
\If{$h(c) = h(c')$} \State{$d(c,c') := ed(c,c')$}
\Else \Comment{$h(c) < h(c'')$}
\If {$parent(c'') \not \in N_{PT}$}
% \hspace{2cm}\State{The parent of node c'' will be inserted as being the parent of c' in PT, without inheriting links from $PT_{i}$. Note that $PT_{i}$ remains unchanged.}
\hspace{2cm}\State{Let $n$ be a new node such that}
\hspace{2cm}\State{$label(n) = label(parent(c''))$}
\hspace{2cm}\State{$parent(n) := parent(c')$}
\hspace{2cm}\State{$parent(c') := n$}
% \hspace{2cm} \Comment{Since the corresponding node for c is n, we compute $d(c,n)$}
\hspace{2cm}\State{$d(c, n) := ed(c, n)$}
\hspace{2cm}\State{$f(n) := parent(c'')$}
\Else
\State{$d(c,c') := ed(c, c')$}
\EndIf
\EndIf
\EndFor
\EndFunction
\end{algorithmic}
\end{minipage}
 \caption{Adjusting $PT$ according to $P_{AST}$ and $PT_i$. Note that $C$ is the set of children of $P_{AST}$, $C'$ the set of children of $PT$, and $f$ a function mapping nodes from $PT$ to $PT_i$.}
\label{fg:fig}
\end{figure}

Function $FindMapping$, shown in Figure~\ref{fg:fig}, computes the distance between nodes and at the same time appends to $PT$ the nodes which are present in $P_{AST}$ but were eliminated because they corresponded to non-productive rules. These nodes are added back using the initial parse tree $PT_i$. Deleting and then re-adding nodes turns out to be more efficient than detecting directly in $PT_i$ the nodes which have no correspondent in $P_{AST}$. The main difficulty in working directly with $PT_i$ is that the solution space explodes when multiple choices are available. For instance, if $c$ is a node in $P_{AST}$ and we find the set of candidate solutions $S=\{c'' ; h(c) = h(c'')\}$, then in the next call of $FindMapping$ we have to take into account all possible combinations $(c, c'')$ for every $c \in C$, which runs in exponential time. In our approach we assume that nodes introduced in $PT_i$ by non-productive rules have no counterpart in $P_{AST}$; when they do have one, we add the missing nodes back to $PT$. In this way the restriction $h(c) \leq h(c')$ always holds. Moreover, the algorithm appends a node to $PT$ in constant time, and this operation is repeated no more than $m \times n$ times, where $m$ and $n$ are the numbers of nodes of $P_{AST}$ and $PT$. \\
Function $FindMapping$ processes two nodes at a time, initially $P_{AST}$ and $PT$. For each child node $c \in C$ we compute the distance to $c' \in C'$ if the trees rooted at $c$ and $c'$ have the same height. If not, since we stated above that $h(c) \leq h(c')$, there is only one branch to consider: $h(c) < h(c')$. In this case, we analyze $PT_i$ to check whether the node of $PT_i$ corresponding to $c'$ ($c''=f(c')$) can be used to adjust $PT$, that is, whether we can add to $PT$ the node of $PT_i$ which is the parent of $c''$ and was generated by a non-productive rule. In other words, if $parent(c'')$ is not in $PT$ but exists both in $PT_i$ and $P_{AST}$, we add it to $PT$ by inserting $parent(c'')$ as the child of $parent(c')$ and the parent of $c'$. Note that $c'$ goes one level down the tree and will be considered again when calling $FindMapping(c,parent(c'),PT_i)$. In this way, if a branch of $P_{AST}$ contains more than one node corresponding to a non-productive rule, all their counterparts from $PT_i$ will be added to $PT$. On the other hand, if $h(c) > h(c'')$, then no nodes can be added to $PT$, which means that computing $d(c,c')$ is infeasible.\\
In the context of the \K framework, $P_{AST}$ is generated only from productions which contain terminals. This means that this step is not supposed to modify $PT$, because $P_{AST}$ will not contain any non-productive nodes. Even so, this step cannot be eliminated completely, because it may still adjust the parse tree by changing the order of children.


%Just to give reader a glipmse of the effect of this we will use the following IMP program:
%\begin{center}
%\begin{tabular}{l}
%\begin{minipage}[b]{0.55\linewidth}
%\small{
%\begin{tabbing}
%\tt whil\=\tt e a <= 5 do ( \\
%\>\tt a := a + 1\\
%\tt )\\
%\end{tabbing}}
%\end{minipage}
%\end{tabular}
%\end{center}
%Suppose now, that the user wants to transform this program into an AST which emulates a C-like {\tt do-while} statement:
%\begin{center}
%\small{
%{\tt
%\begin{tabular}{l l}
%do\_while\_(&\\
%&\hspace{-1cm}\_:=\_(Id(a),\_+\_(Id(a),Int(1))),\\
%&\hspace{-1cm}\_<=\_(Id(a),Int(5))\\
%)&\\
%\end{tabular}
%}}
%\end{center}
%The parse tree of the piece of code above, after eliminating the nodes induced by ``non-productive'' will look like:
%\begin{center}
%\small{
%{\tt
%\begin{tabular}{l l}
%Stmt\_4(&\\
%&\hspace{-1cm}BExp\_2(Id\_0(a),Int\_0(5))\\
%&\hspace{-1cm}Stmt\_1(Id\_0(a),AExp\_2(Id\_0(a),Int\_0(1))),\\
%)&\\
%\end{tabular}
%}}
%\end{center}
%
%The trees are simultaneously traversed in pre-order, the nodes {\tt do\_while\_} and {\tt Stmt\_4} being the first nodes visited. Let $d_{\small {\_:=\_,\textit{BExp}\_2}}$ be the distance between subtrees having roots {\tt \_:=\_} and {\tt BExp\_2} computed using the edit distance function described in \cite{DBLP:journals/siamcomp/ZhangS89}. Using the same notation policy, the algorithm will compute for node {\tt \_:=\_} the distances $d_{\_:=\_,\textit{BExp}\_2}$, $d_{\_:=\_,\textit{Stmt}\_1}$, and for node {\tt \_<=\_} the distances $d_{\small {\_<=\_,\textit{BExp}\_2}}$, $d_{\small {\_<=\_,\textit{Stmt}\_1}}$. As we can observe $d_{\_:=\_,\textit{BExp}\_2}$ > $d_{\_=\_,\textit{Stmt}\_1}$ and $d_{\_<=\_,\textit{BExp}\_2}$ $\leq$ $d_{\_<=\_,\textit{Stmt}\_1}$; this means that we found a non-ambiguous correspondence between all nodes from AST to all nodes from transformed parse tree: {\tt \_:=\_} corresponds to {\tt Stmt\_1} and {\tt \_<=\_} corresponds to {\tt BExp\_2}. Note that even if the order of the children is not the same we still computed the needed distance function on nodes of interest.
%%Note that in case the number of children of {\tt do\_while\_} is not equal to number of children of node {\tt Stmt\_4}. When consulting the original parse tree is taken into consideration: it searches for node labeled {\tt Stmt\_4} in the parse tree and searches recursively for a group of children which is the best match for children of {\tt do\_while\_}. The best match is considered to be the set of children with the minimum sum of editing distances to children of {\tt do\_while\_}. In such a case, the parent of this children will be added back as a direct child of {\tt Stmt\_4}.\\
%% The result of applying this step should produce two trees which are isomorphic: the have the same number of nodes and the same number of children for each node. If this step fails it means that it is not possible to annotate the grammar such that it produces the right AST. These situations are reported and the user should modify the AST accordingly.

%\begin{tabular}{l l l}
%
%\end{tabular}
\subsubsection*{Detect all the mappings between AST and grammar rules}
Each node in the parse tree is linked to the grammar rule which generated it. We also have links to the nodes in the AST, as explained in the step above. This means that we can easily assign an AST node to each grammar production rule. This assignment consists of an annotation which will be considered when generating the parser. For instance, some production rules from the syntax of IMP are mapped to annotations as follows:
\begin{itemize}
\item $Stmt$ $::=$ while $BExp$ do $Stmt$ $\mapsto$ {\tt while\_do\_}
\item $Stmt$ $::=$ Id $:=$ $AExp$ $\mapsto$ {\tt \_:=\_} 
\item $BExp$ $::=$ $AExp$ $<=$ $AExp$ $\mapsto$ {\tt \_<=\_}
\item ...
\end{itemize}

An additional check can be performed here: if every production rule of the grammar has an AST node assigned, then the user can be notified that the grammar is now completely annotated.

For the IMP language, the algorithm is able to infer annotations using only the pair $(P,P_{AST})$ shown in Figure~\ref{sample}. For complex languages the algorithm must receive more pairs as input. After exploring all the pairs, if there is an ambiguity, that is, the algorithm finds more than one possible annotation for a production rule, then it chooses the annotation with the highest frequency. The motivation for using frequencies is to make the algorithm fault tolerant because, in practice, it is very likely that mistakes are made when writing the AST of a program by hand. If the frequencies are equal (e.g., for a production rule there are only two possible annotations, each with the same frequency), then the algorithm cannot decide which of them is right. In this case, an error is reported to the user specifying the pairs which introduced the ambiguity.
Intuitively, the rules generated by the algorithm are the same rules as in the grammar, only carrying annotations, which means that the generated output is sound. Also, every rule of the grammar is covered by the algorithm, since it is either annotated or left as it is.
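The frequency-based choice described above can be sketched in Java as follows (hypothetical names; the actual implementation may differ):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of frequency-based disambiguation: for one production rule, each
// candidate annotation found across the (P, P_AST) pairs is counted, and
// the most frequent one wins; a tie is reported as an ambiguity (null here).
class AnnotationChooser {
    // candidate annotation -> number of occurrences, for one production rule
    final Map<String, Integer> counts = new HashMap<>();

    void record(String annotation) {
        counts.merge(annotation, 1, Integer::sum);
    }

    // Returns the winning annotation, or null when the top candidates tie.
    String choose() {
        String best = null;
        int bestCount = 0;
        boolean tie = false;
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            if (e.getValue() > bestCount) {
                best = e.getKey();
                bestCount = e.getValue();
                tie = false;
            } else if (e.getValue() == bestCount) {
                tie = true;  // ambiguity: would be reported to the user
            }
        }
        return tie ? null : best;
    }
}
```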

\section{Implementation}
\label{implementation}
In this section we present the implementation of the solution described in Section \ref{automating:AST}. We mostly used Java as the programming language, together with the ANTLR parser generator, which accepts annotated LL grammars as input.
\subsection{ANTLR}
As described in \cite{conf/pldi/ParrF11}, the ANTLR parser generator is based on a top-down parsing strategy called $LL(^{*})$. The input for ANTLR is a CFG augmented with syntactic and semantic predicates and embedded actions. Syntactic predicates are given as part of the grammar, while semantic predicates and embedded actions must be given in the host language. ANTLR supports numerous host languages; we chose Java for our implementation. ANTLR imposes a restriction on left-recursive grammars: they are not allowed, because the generated top-down parser could fall into an infinite loop.
Figure \ref{antlr:grammar} contains a sample of the ANTLR grammar of the IMP language.
There are two advantages when using ANTLR in our implementation:
\begin{itemize}
\item ANTLR allows rules to be annotated, so we can easily generate an AST from the annotations
\item ANTLR is portable and at the same time faster than other parser generators
\end{itemize}
\begin{figure}[h]
\begin{center}
	\begin{tabular}{|l c c c r|}
		\hline
		$aExp$ & $:$ & $simpleE$ & $'+'$ &  $aExp$ \\
	   		& $|$ & $simpleE$ & $'/'$ & $aExp$ \\
	   		& $|$ & $simpleE$ & & \\
			& $;$ &&&\\
		\hline
	\end{tabular}
\end{center}
\caption{ANTLR grammar rules for arithmetic expressions (sample)}
\label{antlr:grammar}
\end{figure}


\begin{figure}
\begin{center}
\includegraphics[scale=0.4]{arhitecture.pdf}
\end{center}
\caption{The architecture of the tool}
\label{arhitectura}
\end{figure}



\subsection{The Java implementation}
Since this subsection is dedicated to the implementation details of the tool, we start by describing the required data structures. Each grammar production rule is represented by a unique name and contains its own textual representation from the grammar, the set of non-terminals, the set of its productions, the set of possible {\it rewritings}, a mapping of these rewriting rules to their number of occurrences, and a list with the labels of each production.\\
A {\it rewriting} encapsulates the index of the sub-rule used, the AST to be generated for the production, and the parse tree which has been mapped to the given AST.\\
For storing the parse trees and the ASTs, we use the same treelike data structure, which encapsulates the label of the current node, its parent, its list of children, and the leftmost child (information needed for computing the edit distance between the two trees).
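A minimal Java sketch of this shared treelike structure might look as follows; field and method names are ours, and the leftmost-leaf information is what the edit-distance computation of \cite{DBLP:journals/siamcomp/ZhangS89} requires:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the treelike structure shared by parse trees and ASTs:
// label, parent link, children, and the leftmost leaf descendant
// (needed by the Zhang-Shasha tree edit-distance algorithm).
class Tree {
    String label;
    Tree parent;
    final List<Tree> children = new ArrayList<>();

    Tree(String label) { this.label = label; }

    void add(Tree child) {
        child.parent = this;
        children.add(child);
    }

    // Leftmost leaf of the subtree rooted here.
    Tree leftmost() {
        return children.isEmpty() ? this : children.get(0).leftmost();
    }

    // Height of the subtree rooted here (a single node has height 0);
    // this is the h(c) used when adjusting the parse tree.
    int height() {
        int h = -1;
        for (Tree c : children) h = Math.max(h, c.height());
        return h + 1;
    }
}
```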

\begin{table*}[!tp]
\caption{Time to collect and analyze data}
\centering
\begin{tabular} {c c c c c c c c }
\hline
Language&Example&No. of covered&Parse Tree&AST&Time to remove& Tree &Analyze\\
&NO.&productions&generation&loading&``non-productive'' rules &synchronization&trees\\ [0.5ex]
\hline\hline
IMP & \#1 & 22 & 78 & 3 & 1 & 50 & 1 \\[0.5ex]
\hline
 & \#1 & 18 & 79 & 1 & 0 & 51 & 1 \\
 & \#2 & 20 & 100 & 2 & 0 & 66 & 2 \\
SIMPLE\_UNTYPED & \#3 & 30 & 120 & 2 & 3 & 53 & 4 \\
 & \#4 & 12 & 47 & 0 & 0 & 20 & 0 \\
 & \#5 & 25 & 73 & 1 & 2 & 37 & 1 \\[0.5ex]
\hline
	&\#1 & 40 & 240 & 5 & 9 & 74 & 5\\
	&\#2 & 25 & 200 & 4 & 6 & 60 & 3\\
	&\#3 & 62 & 453 & 7 & 9 & 133 & 7\\
JAVA-CORE&\#4 & 36 & 250 & 4 & 10 & 58 & 5\\
	&\#5 & 23 & 233 & 5 & 6 & 42 & 4\\
	&\#6 & 11 & 121 & 0 & 2 & 25 & 0\\
	&\#7 & 5 & 40 & 0 & 0 & 4 & 0\\[0.5ex]
\hline
\end{tabular}
\label{eval:analyze}
\end{table*}

The main architecture of the tool is shown in Figure~\ref{arhitectura}. It consists of four main components, each associated with a few steps of the algorithm described in Section \ref{automating:AST}.
The {\it Preprocessor} component is responsible for the first step of the algorithm: it analyzes the input grammar and generates a parser. This parser, given a program, outputs the program's corresponding parse tree, as discussed in Section~\ref{automating:AST}.\\
To analyze the grammar of the language and find out its defining rules,
the program makes use of a modified variant of the parser defined for the ANTLR grammar.
It encapsulates a list of rules, and when reaching the definition of a rule in the
ANTLR grammar, it adds the data about the rule (name of the rule, non-terminals used by it, and all its productions) to this list.
For example, when reaching the rule shown in Figure \ref{antlr:grammar}, this parser adds the rule ``aExp'' with the list of non-terminals $[simpleE, aExp]$ and the productions list $[simpleE\ '+'\ aExp, simpleE\ '/'\ aExp, simpleE]$ to its stored rule list.
This list will subsequently be ordered by the following relations: 
\begin {itemize}
\item{a rule is ``greater" than another one if the latter can be found in its list of non-terminals}
\item{a rule is ``smaller" than another one if it can be found in the latter's list of non-terminals}
\item{two rules are ``equal" if both of the previous relations hold simultaneously, or neither does}
\end{itemize}	
This ordering is not strictly necessary, but it stems from the idea of optimizing the detection of a starting rule for one part of a program.
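The ordering relation above can be sketched as a comparator-like function in Java (illustrative names; the actual tool may implement this differently):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of the partial ordering on rules: a rule is "greater" than
// another when the other appears among the non-terminals it references.
// The return values mimic a comparator: 1, -1, or 0.
class RuleOrder {
    // rule name -> non-terminals referenced by that rule's productions
    final Map<String, Set<String>> uses = new HashMap<>();

    void addRule(String name, String... nonTerminals) {
        uses.put(name, new HashSet<>(Arrays.asList(nonTerminals)));
    }

    // 1 if a > b, -1 if a < b, 0 if "equal" (both or neither reference
    // the other), following the three relations described above.
    int compare(String a, String b) {
        boolean aUsesB = uses.containsKey(a) && uses.get(a).contains(b);
        boolean bUsesA = uses.containsKey(b) && uses.get(b).contains(a);
        if (aUsesB == bUsesA) return 0;
        return aUsesB ? 1 : -1;
    }
}
```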
For the next step of preprocessing, we need to add to the grammar the code for constructing the parse tree that encodes each production used when parsing an input.
Since ANTLR accepts embedded actions, we insert Java source code into the original grammar in order to obtain the desired parse tree. After these changes, ANTLR is used to generate the parser that parses an input program and produces its parse tree.
After preprocessing the input grammar, the {\it Data collecting unit} component parses each program using the parser generated in the previous step and creates the parse tree. It also parses the program's corresponding AST and loads it into a treelike data structure. This component implements steps 2 and 3 of the algorithm; its output consists of two trees.\\
The {\it Analyzer} gets both trees as input, removes the ``non-productive'' rules from the parse tree and the terminals from the AST, and then applies step 6. The parse tree is changed using the basic operations on trees. This component is the most complex one, since it implements steps 4, 5, and 6 of the algorithm. The output of this component should be two isomorphic trees. At this level, conflicts in the input pairs are detected and reported to the user.\\
The last component extracts the labels from the AST nodes which correspond to nodes in the transformed parse tree and appends them as annotations to the rules which generated those parse tree nodes. In the implementation, this component also generates the parser, not only the annotated grammar shown in Figure~\ref{arhitectura}.
The source files and information about the project can be found at \url{http://code.google.com/p/ast-generator} and some examples at \url{http://code.google.com/p/ast-generator/source/browse/svn/branches/examples}.



\section{Evaluation}
\label{eval}
In order to test the implementation, we annotated three grammars by hand with AST rewrite rules and checked whether the grammars were correctly annotated. Tables \ref{eval:preprocess} and \ref{eval:analyze} contain the test results for the languages we chose as benchmarks. The time measurement unit is one millisecond.

\begin{center}
\begin{table}[ht]
\caption{Time for loading language grammar}
\begin{tabular} {c  c  c  c  c}
\hline
Language & No of production rules & $T_g$ & $T_{PT}$ & $T_p$ \\
\hline
\hline
IMP &  23 & 15 & 5 & 6585 \\
%\hline
SIMPLE\_UNTYPED & 59 & 17 & 20 & 22577 \\
%\hline
JAVA-CORE & 120 & 68 & 31 & 24649 \\
\hline

\end{tabular}
\label{eval:preprocess}
\end{table}
\end{center}

In Table \ref{eval:preprocess}, $T_g$ represents the time to retrieve the grammar rules, while $T_{PT}$ represents the time elapsed to compute the annotations by processing the parse tree. The last column, labelled $T_p$, represents the time to generate the parser from the annotated grammar. As we can observe, the tool slows down as the number of production rules in the grammar increases. This happens because we process each rule in the grammar.

Table \ref{eval:analyze} displays more detailed information about the tests. For each language we have a number of examples which cover all the syntactical constructs. The table contains the following information for each example:
\begin{itemize}
\item the number of production rules covered by the example
\item the time to parse the example
\item the time to load the AST
\item the time to remove ``non-productive'' rules
\item the time to adjust the parse tree until an isomorphism with the AST is obtained
\item the time to collect and generate the grammar annotations.
\end{itemize}
Summarizing the results shown in both tables, we observe that most of the time is spent parsing the examples and generating the final parser.
Since the times for parsing and for generating a parser depend on the parser generator and cannot be improved by us, we analyze the time spent synchronizing the trees.
The most complex example, \#3 from JAVA-CORE, which covers 62 of the 120 production rules of the grammar, takes only 0.133 seconds to edit the parse tree. Note that example \#3 has 212 lines of regular Java code! If the examples are larger, that is, they cover more production rules, then the time per example increases, but fewer examples are needed to infer the annotations. On the other hand, if the examples are small, the time for treating each example is smaller than a tenth of a second, but the larger number of examples needed could increase the total time. Observing the test results, we can see that examples which cover around 25--30 production rules have acceptable running times and cover enough production rules that the number of examples remains small. We ran all our tests on a regular computer with a dual-core processor clocked at 1.90GHz and 2GB of physical memory.
\section{Conclusions and Future work}
\label{conclusions}
The work presented in this paper is intended to be used in the context of the \K framework. \K is a framework for giving semantics to programming languages using rewrite rules, which are applied to \K-specific terms. One such term is the \K configuration, which holds the entire state of the program (variables, functions, the program itself, etc.) in cells. The program is itself a \K ground term.
%being a part of the configuration, usually stored in a cell labelled {\it <k>}. 
Since a \K term is actually an AST of the program, the tool presented here becomes very useful: a user only needs to provide a program and its corresponding \K AST, and we can generate a parser which outputs the \K AST of any program directly.
The tool was tested successfully on several languages of different degrees of complexity: IMP, which has the simplest grammar we tested; SIMPLE, which contains considerably more syntactical constructs; and a core of Java, which is a real language.
Since the test results were quite promising, in the near future we intend to test our tool on the whole Java language, on C, and on several functional languages.
As a long-term plan, we would like to get rid of the grammar entirely, in the sense that a user would not have to provide a grammar at all. There is already some relevant work on grammar inference from programs, described in \cite{Crepinsek:2005:EGP:1064165.1064171}.


% conference papers do not normally have an appendix


% use section* for acknowledgement
\section*{Acknowledgment}



The results presented in this paper would not have been possible without the effective advice of Prof. Dr. Dorel Lucanu. We especially want to thank him for his efforts and fruitful ideas. We also want to mention that the work presented here is supported by Contract 161/15.06.2010, SMISCSNR 602-12516 (DAK).





% trigger a \newpage just before the given reference
% number - used to balance the columns on the last page
% adjust value as needed - may need to be readjusted if
% the document is modified later
%\IEEEtriggeratref{8}
% The "triggered" command can be changed if desired:
%\IEEEtriggercmd{\enlargethispage{-5in}}

% references section

% can use a bibliography generated by BibTeX as a .bbl file
% BibTeX documentation can be easily obtained at:
% http://www.ctan.org/tex-archive/biblio/bibtex/contrib/doc/
% The IEEEtran BibTeX style support page is at:
% http://www.michaelshell.org/tex/ieeetran/bibtex/
\bibliographystyle{IEEEtran}
% argument is your BibTeX string definitions and bibliography database(s)
%\bibliography{IEEEabrv,../bib/paper}
%
% <OR> manually copy in the resultant .bbl file
% set second argument of \begin to the number of references
% (used to reserve space for the reference number labels box)
%\begin{thebibliography}{1}

%\bibitem{IEEEhowto:kopka}
%H.~Kopka and P.~W. Daly, \emph{A Guide to \LaTeX}, 3rd~ed.\hskip 1em plus
%  0.5em minus 0.4em\relax Harlow, England: Addison-Wesley, 1999.
%
%
%\end{thebibliography}
%\bibliographystyle{amsplain}
\bibliography{references}





% that's all folks
\end{document}


