%
% File coling2014.tex
%
% Contact: jwagner@computing.dcu.ie
%%
%% Based on the style files for ACL-2014, which were, in turn,
%% Based on the style files for ACL-2013, which were, in turn,
%% Based on the style files for ACL-2012, which were, in turn,
%% based on the style files for ACL-2011, which were, in turn, 
%% based on the style files for ACL-2010, which were, in turn, 
%% based on the style files for ACL-IJCNLP-2009, which were, in turn,
%% based on the style files for EACL-2009 and IJCNLP-2008...

%% Based on the style files for EACL 2006 by 
%%e.agirre@ehu.es or Sergi.Balari@uab.es
%% and that of ACL 08 by Joakim Nivre and Noah Smith


\documentclass[11pt]{article}
\usepackage{acl2014}
\usepackage{times}
\usepackage{url}
\usepackage{latexsym}

\usepackage{graphics,rotating}
\usepackage{sidecap}
%\usepackage{natbib}

% You can expand the titlebox if you need extra space
% to show all the authors. Please do not make the titlebox
% smaller than 5cm (the original size); we will check this
% in the camera-ready version and ask you to change it back.


\title{Semantic Parsing Using Content and Context:\\ A Case Study from Requirements Elicitation}
%\author{Reut Tsarfaty\\ Weizmann Institute  \\ Rehovot, Israel
 %  \And  Eli Pogrebetzky \\ Interdisciplinary Center  \\ Herzliya, Israel
 %  \And    Jorden Valancy \\ Weizmann Institute \\Rehovot, Israel
  %  \And  Guy Weiss \\ Weizmann Institute  \\Rehovot, Israel
   %  \AND  Yaarit Natan \\ Weizmann Institute \\ Rehovot, Israel
    % \And    Smadar Szekely\\ Weizmann Institute  \\ Rehovot, Israel
     % \And       David Harel   \\ Weizmann Institute\\Rehovot, Israel}

\date{}

\begin{document}
\maketitle
\begin{abstract}
We present a model for the automatic semantic analysis of requirements elicitation documents.
Our target semantic representation employs {\em live sequence charts} (LSC), a multi-modal visual language
for scenario-based programming, which has a direct translation into executable code.
The architecture we propose integrates sentence-level and discourse-level processing in a generative probabilistic model for the analysis and disambiguation of individual sentences in context.
We empirically show that the joint model consistently outperforms a sentence-based model in terms of constructing a system that reflects all the static (entities, properties) and dynamic (behavioral scenarios) requirements in the document.

\end{abstract}

\section{Introduction}
Requirements elicitation is a process whereby a system analyst gathers information from a stakeholder about desired software to be implemented. The knowledge collected by the analyst may be {\em static}, referring to the conceptual model (the entities, their properties, the possible values), or {\em dynamic}, referring to the behaviour that the system should follow (who does what to whom, when, how, etc.).
A stakeholder interested in the software typically has a specific (static and dynamic) domain in mind, but he or she cannot necessarily prescribe any formal models or
 code artifacts. The term {\em requirements elicitation} as we use it here refers to a piece of natural language discourse
 by means of which a stakeholder communicates their desiderata to the system analyst.
 
The role of a system analyst is to understand the different  requirements and transform them into formal constructs
 (formal diagrams or executable code). 
 Moreover, the analyst  needs to consolidate the different pieces of information to uncover a single shared domain.  
Studies in software engineering aim to develop intuitive symbolic systems with which human agents can encode requirements that would then be unambiguously translated into a formal model \cite{fuchs95,bryant02}. However, it has repeatedly been shown that the more natural and NL-like the formal system is, the harder it is to develop an unambiguous translation mechanism \cite{kuhn14cnl}. In this paper
we accept the ambiguity of requirements descriptions as a premise, and aim to answer the following question: can we automatically recover a formal representation of the complete system which {\em best} reflects the human-perceived interpretation of the document?

Recent advances in natural language processing, particularly in semantic parsing \cite{zettlemoyer05,liang11dcs,artzi13,liang2014semantics}, use different formalisms and various kinds of learning signals. In particular, the model of \newcite{lei13} induces input parsers from format descriptions. However, these modeling techniques rarely take document context into account. The key idea we promote here is that discourse context provides substantial disambiguating information for the automatic analysis of sentences. We propose a novel architecture that integrates sentence-level and discourse-level processing in a joint generative probabilistic model, from which we can recover a representation of the specified system.


Specifically, we view the text-to-code task as a structure prediction task, where we accept a piece of discourse as input and aim to automatically predict a formal model of the static and dynamic domain as output. Our input is given in a simplified --- yet highly ambiguous --- fragment of English. The output, in contrast, is unambiguous and well-formed. We represent the semantic analyses of the requirements via a sequence of formal constructs called {\em live sequence charts} (LSC) \cite{damm01lscs,harel03lscs} that capture the dynamic behavior of the system, tied to a single shared code-base ontology called a {\em system model} (SM).

Our probabilistic model instantiates a {\em noisy channel} model, where the observed signal is a sequence of sentences describing requirements, and the message to decode is a single SM that satisfies all the requirements. Our solution takes the form of a {\em hidden Markov model} in which emission probabilities reflect the grammaticality and interpretability of individual textual requirements, and transition probabilities model the overlap between SM snapshots that represent the shared domain. Using efficient Viterbi decoding, we search for the sequence of SM snapshots that has most likely generated the requirements. We empirically show that such an integrated model consistently outperforms a sentence-based model learned from the same data.
 %The resulting LSCs along with the SM can then automatically generate executable java code that implements the system.
 
 The remainder of this document is organized as follows. In Section~\ref{task} we describe the task and spell out our assumptions concerning the input and the output. In Section~\ref{formal} we present our target semantic representation and a specially tailored notion of {\em grounding} requirements in a code-base. In Section~\ref{model} we develop the sentence-based and discourse-based models, and in Section~\ref{eval} we evaluate our models on several hand-annotated case studies. In Section~\ref{conc} we summarize and conclude.
\section{Parsing Requirements Elicitation Documents: Task Description}\label{task}

There is an inherent discrepancy between the input and the output of software engineering processes. The input, the system requirements, is usually specified in natural, informal language. The output, the system itself, is ultimately implemented in a formal, unambiguous programming language. Can we automatically recover a formal representation of a complete system from a set of requirements, one that {\em best} reflects the human-perceived interpretation of the entire document?

\paragraph{The Input.}
%Here we assume a simplified version of the requirements parsing task. First, 
In this work we assume a {\em scenario-based programming} paradigm (a.k.a.\ {\em behavioral programming} (BP) \cite{harel12bp}) in which development is seen as an {\em incremental} process whereby humans describe the expected behavior of the system by means of ``short stories'', formally called {\em scenarios}
\cite{harel01scenario}.
% The  internal structure of the specified system is   an artefact, rather than a pre-condition, of the behavioural requirements.  
%Moreover,
% In particular, we assume that a single requirement always corresponds to a single scenario.
%On a more technical level 
We further assume that a given requirements document describes exactly one system, and that each sentence in the document describes a single, possibly complex, scenario.
The requirements we aim to parse are given in a simple form of English, specifically, the English fragment described by \newcite{gordon09}. Contrary to strictly formal specification languages,
this simplified variant of English allows for an open-domain lexicon and exhibits extensive syntactic and semantic ambiguity.\footnote{Formally, this variant may be viewed as a CNL of degree P2 E3 N4 S4
with properties
C,F,W,A \cite[pp.~6--12]{kuhn14cnl}.}

\paragraph{The Output.}
Our target semantic representation  employs  {\em live sequence charts} (LSC), a diagrammatic formal language
  %which was introduced
   for scenario-based  programming \cite{damm01lscs}. 
Formally, LSCs are an extension of the well-known UML message sequence diagrams \cite{harel06}, and they have a direct translation into executable code \cite{harel07s2a}.\footnote{It can be shown that the execution semantics of the LSC language is embedded in a fragment of the branching temporal logic CTL* \cite{Kugler05}.}
Using LSC diagrams for software modelling has the advantages of being easily learnable \cite{harel09teaching}, intuitively interpretable \cite{eitan11}, and straightforwardly amenable to execution \cite{harel02playout} and verification \cite{harelkatz13}.
The LSC language is particularly suited for representing natural language requirements since its basic formal constructs, {\em scenarios}, align nicely with {\em events}, the primitive objects of Neo-Davidsonian semantics \cite{parsons90}.

%%%%%
%%%%% TO USE
%%%%%
% The LSC formalism is particularly suited for requirements specification as it allows for incrementally constructing a single, shared, ontology describing the domain. We refer to this ontology as the {\em system model} (SM). Every requirement  adds  facets, such as the entities, actions, properties and values, to the domain represented by the SM. Writing a   program that satisfies the requirements in the document minimally requires generating a SM that satisfies the shared  domain. 


\paragraph{Live Sequence Charts and Code Artifacts.} A live sequence chart (LSC) is a diagram that describes a possible or necessary run of the specified system. In a single LSC diagram, entities are represented as vertical lines called {\em lifelines}, and interactions between entities are represented using horizontal arrows between lifelines called {\em messages}.
Messages may refer to other entities (or properties of entities) as arguments.
Time in an LSC proceeds from top to bottom, imposing a partial order on the execution of messages.
LSC messages can be {\em hot} (red, ``must happen'') or {\em cold} (blue, ``may happen''). A message may also have an execution status, which designates it as {\em monitored} (dashed arrow, ``wait for'') or {\em executed} (solid arrow, ``do''). The LSC language also encompasses conditions and control structures, and it further allows defining requirements in terms of negation of charts.
% 
An example of a requirement represented as an LSC is provided in Figure 1.
The LSC in Figure 1(a) is the semantic representation of the requirement.
The SM in Figure 1(b) represents the code hierarchy that is minimally required for executing it.

 %The SM for the example in Figure 1 is presented in Figure 2.
% Figure 1(a) shows the LSC representation of the sentence ``When the user clicks the button, the display  must change its color  to red.". 
%These elements address the structural (static) domain. 
 %\subsection{Live Sequence Charts}
%\subsection{Play-In and Play-Out}

%%%%  \\ \\ \\ \\ \\
%%%%
%%%%  Formalizing
%%%%
%%%%  \\ \\ \\ \\ \\


\section{Formal Settings}\label{formal}


%We are finally ready to  define our  text-to-code generation task. Specifically, we now formally articulate the input, output spaces   \(\mathcal{X,Y}\) that we  referred to at the beginning of this section as follows:
%\paragraph{Text-to-Code Generation}
  In the text-to-code generation task we aim to implement a function
\[f:\mathcal{D}\rightarrow \mathcal{M} \]
where \(D\in\mathcal{D}\) is a piece of discourse \(D=d_1,d_2,...,d_n\) consisting of an ordered sequence of requirements,
and \(f(D)=M\in\mathcal{M}\) is a code-base hierarchy that grounds the semantic representation of \(D\), i.e.,
\(M\triangleright {\bf sem}(d_1,...,d_n) \).

%Our ultimate goal is to implement a prediction function \(f:\mathcal{X}\rightarrow\mathcal{Y}\) where \(\mathcal{X}\) represents a set of requirements  documents and \(\mathcal{Y}\) is a set of actual programs. Both \(x\in\mathcal{X}\) and \(y\in\mathcal{Y}\) 
Both \(D\) and \(M\) are complex objects, so we first define them formally. We then describe the semantic interpretation function \({\bf sem}(\cdot)\), and finally spell out the definition of grounding \((\triangleright)\).
%We start out by identifying describing assumptions underlying \(D\). We continue by presenting the LSC formalism in set theoretic terms, so that later we can assign construct probability distributions over them. We finally define the notion of grounding \(\triangleright\) and present the code-base hierarchies which ground the semantic representation of the discourse in the realm of software engineering.

% the linguistic entities that are at play in the requirements document %and assumptions concerning
%and the  relation between them. We start  by making explicit our assumptions about the surface structures
 %with a recourse to the  principle of compsitionality. 
 %Last but not least, we define the 
 \paragraph{Surface Structures:}
Let \(\Sigma\) be a finite lexicon and let   \(\mathcal{L}_{req}\subseteq \Sigma^*\) be a language fragment for specifying requirements. 
We assume the sentences in \(\mathcal{L}_{req}\) have been  generated by a context free grammar  \(G=\langle \mathcal{N},\Sigma, S\in\mathcal{N}, \mathcal{R} \rangle\) where \(\mathcal{N}\) is a set of non-terminals, \(\Sigma\) is the aforementioned lexicon, \(S\in \mathcal{N}\) is a start symbol and \(\mathcal{R}\) is a set of context-free rules  \(\{A\rightarrow\alpha| A\in \mathcal{N}, \alpha\in(\mathcal{N}\cup\Sigma)^*\}\). 
For each utterance \(u \in \mathcal{L}_{req}\) we can find a sequential application of rules \(u= r_1\circ ... \circ r_k, \forall i : r_i\in\mathcal{R}\) that generates it. These derivations are graphically depicted as parse trees, with \(u\) being the sequence of terminals at the leaves of the tree.
We define \(\mathcal{T}_{req}\) to be the set of trees strongly generated by \(G\), and an auxiliary yield function \(yield: \mathcal{T}_{req}\rightarrow \mathcal{L}_{req}\) returning the leaves of a tree. While each tree \(t\) defines a single yield, the converse does not hold --- different parse trees can generate the same utterance. The task of analysing the surface structure of an utterance \(u\in \mathcal{L}_{req}\) is modeled via a function \(syn: \mathcal{L}_{req}\rightarrow \mathcal{T}_{req}\) that returns the correct, human-perceived parse of \(u\).
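The yield function and the one-to-many relation between trees and yields can be sketched as follows (an illustrative Python sketch; the sentence and the trees are hypothetical, not drawn from our grammar):

```python
# A parse tree is a (label, child, child, ...) tuple; leaves are strings.
def tree_yield(tree):
    """Return the terminals of a tree, left to right (the yield function)."""
    if isinstance(tree, str):
        return [tree]
    _label, *children = tree
    return [leaf for child in children for leaf in tree_yield(child)]

# Two distinct derivations (an attachment ambiguity) with the same yield:
t1 = ("S", ("NP", "user"),
           ("VP", ("V", "clicks"),
                  ("NP", ("NP", "button"), ("PP", "on", ("NP", "display")))))
t2 = ("S", ("NP", "user"),
           ("VP", ("VP", ("V", "clicks"), ("NP", "button")),
                  ("PP", "on", ("NP", "display"))))
```

Both trees yield the same utterance yet encode different structures, which is exactly why \(syn\) must single out the human-perceived parse.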


% Like in any form of natural language, 
%In this work we assume syntax-based semantics, that is, we assume a function \(syn:  \mathcal{L}_{req} \rightarrow \mathcal{T}_{req} \) from requirements to  syntax trees  and a function \(sem:  \mathcal{T}_{req} \rightarrow \mathcal{C}\) from trees to  LSC diagrams.



\paragraph{Semantic Structures:}  Our target semantic representation of a  requirement  \(sem(d)=c\) is a diagrammatic structure called a {\em live sequence chart} (LSC). The LSC language is event-based, similarly to many  Neo-Davidsonian semantic theories, and therefore it is particularly suited as a formal semantic representation for natural language. 
%The LSC language  differs from   first-order predicate logic for representing NL semantics in that it has inherent temporal and aspectual dimensions in the representation of events.
%Let us first define the events of an LSC language, and then provide the formal definition of an LSC.

Let us  assume  that \(L\)  is  a dictionary of entities  (lifelines), \(A\) is a dictionary of actions, \(P\) is a dictionary of attribute names and \(V\) a dictionary of attribute values.
The set of simple events in the  LSC  formal system is defined as follows:
\[E_{active} \subseteq L\times A\times L \times (L\times P \times V)^*\times\]\[   \{hot,cold\} \times  \{executed,monitored\}   \]
%,  \{hot,cold\} a set of temperature values , and \{executed,monitored\} a  set of execution modes. We define an event as a triplet
%\[\forall e : e= \langle msg, t, x \rangle \vee e = \langle cond, t, x \rangle \]
%Where \(m\in L\times A\times L, t\in \{hot,cold\}, x\in \{executed,monitored\}   \). \(m\)
where \(e=\langle l_1, a, l_2, \{l_i:p_i:v_i\}_{i=3}^{k}, temp, exe\rangle\) and \(l_i\in L, a\in A, p_i\in P, v_i\in V, temp\in \{hot,cold\},\) \( exe\in \{executed,monitored\} \). Event \(e\) is called a {\em message}, in which an action \(a\) is carried from a sender \(l_1\) to a receiver \(l_2\).\footnote{The LSC language distinguishes static lifelines from dynamically bound lifelines.
For brevity, we omit this from the formal description of events, and simply assert that it may be one of the properties of the relevant lifeline.} The set \(\{l_i:p_i:v_i\}_{i=3}^{k}\) depicts a set of entity:property:value triples provided as arguments to the action \(a\). The temperature \(temp\) marks the modality of the action (may, must), and the execution status \(exe\) distinguishes actions to be performed from actions to be waited for.
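For concreteness, an active event can be encoded as a record with the above fields (an illustrative Python sketch; the field names are ours and are not part of the LSC tooling):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Message:
    """Active event <l1, a, l2, {li:pi:vi}, temp, exe> (sketch, names ours)."""
    sender: str            # l1: the lifeline initiating the action
    action: str            # a: the action carried to the receiver
    receiver: str          # l2: the lifeline receiving the action
    args: Tuple[Tuple[str, str, str], ...] = ()  # (lifeline, property, value)
    temp: str = "cold"     # modality: "hot" (must) or "cold" (may)
    exe: str = "executed"  # "executed" (do) or "monitored" (wait for)

# Hypothetical events for "when the user clicks the button,
# the display must change its color to red":
click = Message("user", "click", "button", temp="cold", exe="monitored")
change = Message("display", "changeColor", "display",
                 args=(("display", "color", "red"),), temp="hot")
```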

An event \(e\) can also be a {\em stative} event, called a {\em condition}, in which
a logical expression is evaluated over a set of property:value pairs of entities in the system at a given point in time:
\[E_{stative} \subset Exp\times   (L\times P \times V)^*\times\]\[  \{hot,cold\} \times  \{executed,monitored\}   \]
Specifically, \(e=\langle exp, \{l_i:p_i:v_i\}_{i=1}^k, temp, exe\rangle\) is a stative event to be evaluated, where \(l_i\in L, p_i\in P, v_i\in V, temp\in \{hot,cold\},\) \( exe\in \{executed,monitored\} \). The condition \(exp\) is a first-order logic formula using the usual operators (\(\vee,\wedge,\rightarrow,\neg\)).
The set \(\{l_i:p_i:v_i\}_{i=1}^{k}\) depicts a (possibly empty) set of entity:property:value triples of objects in the system that have to be evaluated. Executing a condition, namely, evaluating the logical expression specified by \(exp\),
also has a modality (may/must) and an execution status (performed/waited for), just like active events.
%can also have a modality and an execution status,  
%. For brevity, we omit its formal description, but for all practical purposes it is interpreted analogously to \(m\).
 
The LSC language further allows us to define arbitrarily complex events by combining partially ordered sets of basic events with control structures.
\[E_{complex} \subset 
{N} \times  E_{stative} \times  \]\[\{ \langle E_c , < \rangle  | \langle E_c, < \rangle \textit{ is a poset }  \}\]
%We define a complex event as a triplet 
%\[\forall e : e= \langle \#, cond , \langle E, < \rangle \rangle 
 %\]
where \(N\) is the set of non-negative integers, \(E_{stative}\) is the set of stative events described above, and each element \(\langle E_c,<\rangle \) is a partially ordered set of events. This structure allows us to derive three kinds of control structures:
\begin{itemize}
\item \(e= \langle \#, \emptyset , \langle E, < \rangle \rangle \) is a loop in which \(\langle E, < \rangle\) is executed \(\#\) times. 
\item \(e= \langle 0, cond , \langle E, < \rangle \rangle \) is a conditioned execution. If \(cond\) holds, \(\langle E, < \rangle\) is executed.
\item \(e= \langle \#, \{cond\}_{i=1}^{\#} , \{\langle E, < \rangle\}_{i=1}^{\#} \rangle \) is a switch: in case \(i\), if the condition \(i\) holds, \(\langle E,<\rangle_i\) is executed. 
\end{itemize}

   \paragraph{Definition 1 (LSC).}
An LSC is a partially ordered set of events  \(c = \langle E, < \rangle \) where
\[\forall e\in E : e \in E_{active} \vee e\in E_{stative} \vee e \in E_{complex} \]
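Definition 1 requires the event order \(<\) to be a strict partial order; the following sketch (our own illustrative check, not part of the LSC formalism) verifies that a candidate precedence relation over events is acyclic, i.e., a valid strict partial order after transitive closure:

```python
def is_partial_order(events, before):
    """Check that `before` (a set of (e1, e2) pairs meaning "e1 precedes e2")
    induces a strict partial order over `events`, i.e., has no cycles."""
    # compute the transitive closure by repeated expansion
    closure = set(before)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    # irreflexivity of the closure is equivalent to acyclicity
    return all((e, e) not in closure for e in events)
```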
   
   
   
   \paragraph{Grounded Semantics:}
The information represented in an LSC provides the recipe for a rigorous construction of the code-base that will implement the program. This code-base is said to {\em ground} the semantic representation.
 In this work we propose to explicitly model this code-base as the requirements context.
  If our target programming language is an OO language such as Java, then the code-base will include the entities, methods, and properties that are minimally required for executing the LSCs.
  We refer to this construct as a system model, or SM for short, as defined below.
%In this work we focus on the static (entity base) denotation of an LSC chart, leaving the notion of dynamic denotation (that is, time based) for further research. 

   
 \paragraph{Definition 2 (SM).}
Let \(L_m\) be a set of implemented objects, \(A_m\) a set of implemented methods, \(P_m\) a set of arguments, and \(V_m\) a set of argument values.
Additionally, we define auxiliary functions \(methods:A_m\rightarrow L_m\), \( props: P_m\rightarrow L_m\) and \(values: V_m\rightarrow L_m \times P_m\) identifying, respectively: the entity that contains the implementation of the method, the entity that contains the property, and the entity attribute that bears the value. A system model is a tuple \(m\) representing the implemented architecture:
%depicting properties (that is, attribute values feature structures).
\[m=\langle L_m, A_m, P_m, V_m, methods, props, values\rangle \]
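A minimal encoding of Definition 2 might look as follows (an illustrative Python sketch; we represent the auxiliary functions as dictionaries mapping each method, property, and value to its owner, and the names are ours):

```python
from dataclasses import dataclass

@dataclass
class SystemModel:
    """m = <Lm, Am, Pm, Vm, methods, props, values> (sketch; names ours)."""
    objects: set   # Lm: implemented objects
    methods: dict  # methods: method name -> entity implementing it
    props: dict    # props: property name -> entity containing it
    values: dict   # values: value -> (entity, property) bearing it

# A hypothetical SM for "the display must change its color to red":
sm = SystemModel(
    objects={"user", "display"},
    methods={"changeColor": "display"},
    props={"color": "display"},
    values={"red": ("display", "color")},
)
```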

Analogously to {\em interpretation} functions in logic and natural language semantics, we assume here an {\em implementation} function, denoted \([[  . ]]\), which maps each formal entity in the LSC semantic representation to its instantiation in the code-base.
Using this function we define a notion of grounding that captures the fact that a certain code-base satisfies the requirements of an LSC \(c\).
%Intuitively it means that the requirements represented by the semantic representation can be executed in the  domain \(m\). 

   
\paragraph{Definition 3(a) (Grounding).} Let \(\mathcal{M}\) be the set of system models and let \(\mathcal{C}\) be the set of all definable LSC charts. We say that \(m\) grounds \(c=\langle E, < \rangle\), and write \(m\triangleright c\), iff
\(\forall e\in E:\) \(m\triangleright e\),
where:
\begin{itemize}
%\item  \(m\triangleright l\) iff \([[l]]\in L_m\)
\item  if \(e\in E_{eventive}\) then: \\ \(m\triangleright e\) \(\Leftrightarrow\) 
\begin{itemize}
\item[] \([[l_1]], [[l_2]]\in L_m \) and
\item[] \( [[a]]\in methods([[l_2]])  \) and
\item[] \(\forall i : \langle l:p:v\rangle_i \Rightarrow [[l]]\in L_m \ \& \ [[p]] \in props([[l]]) \ \& \ [[v]]\in values([[l]],[[p]]) \)
\end{itemize}
\item  if \(e\in E_{stative}\) then: \\ \(m\triangleright e\) \(\Leftrightarrow\)  \\
 \(\forall i : \langle l:p:v\rangle_i \Rightarrow [[l]]\in L_m \ \& \ [[p]] \in props([[l]]) \ \& \ [[v]]\in values([[l]],[[p]]) \)
 

\item if \(e=\langle \#, e_{s}, \langle E_c,<\rangle\rangle\)  \(\in E_{complex}\) then: \\ \(m\triangleright e\) \(\Leftrightarrow\) 
\(m\triangleright e_s\) \& \(m\triangleright E_c\) 
%\item[] \(\forall i : \langle l:p:v\rangle_i\in cond \Rightarrow [[l]]\in L_m \& [[p]] \in props[[l]] \& v\in values([[l]],[[p]]) \)

\end{itemize}
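As a minimal sketch of the grounding test \(m\triangleright e\) for a stative event, the condition reduces to membership checks against the SM; the dict encoding of an SM below is an illustrative assumption.

```python
def grounds_stative(sm, constraints):
    """Check that every <l:p:v> constraint is implemented in the SM."""
    for (l, p, v) in constraints:
        if l not in sm["entities"]:                    # [[l]] in L_m
            return False
        if sm["props"].get(p) != l:                    # [[p]] is a property of [[l]]
            return False
        if v not in sm["values"].get((l, p), set()):   # [[v]] is admissible
            return False
    return True

# Hypothetical SM snapshot.
sm = {
    "entities": {"button"},
    "props": {"color": "button"},
    "values": {("button", "color"): {"red", "green"}},
}
assert grounds_stative(sm, [("button", "color", "red")])
assert not grounds_stative(sm, [("button", "color", "blue")])
```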

We have thus far defined how an SM grounds the semantics of an LSC. In the real world, however, a requirements document is interpreted as a complete whole, conveying a single shared domain that satisfies all dynamic requirements.  
Let us assume a requirements  document containing \(n\) requirements \(
{\bf d} = d_1, d_2,...,d_n\), \( d_i\in \mathcal{L}_{req}\). 
We assume that  \( {\bf sem}({\bf d})\)  is a discourse interpretation function that returns a single semantic representation for the document where identical elements across sentences share the same reference. %which of course requires its own definition, as it encompasses a process of synthesizing different requirements. 
%
% abstraction and inheritance and so on. 
%
%We assert that the semantics of \(sem(u_1,u_2,...,u_n)\) returns a single system model \(M\in \mathcal{M}\) such that the 
%There may be  a sequence of LSCs, that are grounded in a single system model.
We finally assume that \(\sqcup\) is a unification operation, returning the formal unification of two SMs if one exists, and the empty SM otherwise. We can now define grounding at the document level.
% the semantics of the discourse.

\paragraph{Definition 3(b): (Grounding)} Let \({\bf d}\) be a requirements document and let \(M=\langle {\bf m}, \sqcup \rangle \) be a sequence of models and a unification operation. We say that  \( M \triangleright {\bf sem}({\bf d})\) iff    \(\forall i : m_i\triangleright sem(d_i) \) and \((m_1 \sqcup m_2 \sqcup \dots \sqcup m_n)\triangleright {\bf sem}(d_1, \dots, d_n) \).

The discourse semantic interpretation provided by \({\bf sem}\) can be as simple as asserting that all elements with the same string name refer to the same element (entity, action, etc.), or as complex as taking into account synonyms 
 (``{\em clicks} the button'' and ``{\em presses} the button''), anaphora (``when the user clicks the button, {\em it} changes colour''), binding (``when the user clicks {\em any} button, {\em this} button is highlighted''), and so on.
 In this work, we assume % \({\bf sem}({\bf u})\) implies  a 
a simple discourse interpretation function where entities, methods, properties and values that are referred to using the same string  are identical. 

This simple assumption already provides a substantial amount of disambiguating information concerning individual requirements. For example, if we have seen a ``click'' method over a ``button'' object in sentence \(i\), this may help us disambiguate a later attachment ambiguity, favoring structures where ``button'' attaches to ``click'' over other attachment alternatives. Our goal is then to model discourse-level context to support accurate analysis of individual requirements.
% probabilistically.  We start out by describing a simple sentence based probabilistic model, and then extend into to a joint sentence and discourse based modeling strategy.

\section{Probabilistic Modeling}\label{model}


%A more stringent notion of satisfiability takes into account dynamic aspects of the sets of events,  in which we see that the set of events indeed results in an executable scenario. For a formal definition of the  formal language that capture dynamic aspects of the execution  refer to \cite{pnueli}.
 
% In this work we define a basic mechanism for learning to automatically recover such grounded semantics over simple unification \(\sqcup\). We leave the extension of   \(\sqcup\) for treating cross-sentential anaphora, ellipsis,  synonyms, and different kind of ontological commitments concerning abstraction for future research. The modelling principles we propose, however, are compatible with such future extensions.

%%%%  \\ \\ \\ \\ \\
%%%%
%%%%  Modeling
%%%%
%%%%  \\ \\ \\ \\ \\




%Let \(D\in\mathcal{D}\) be a document \(D = d_1,..., d_n\)  consisting of \(n\) requirements, each requirement is a sentence in the language defined above   \(\forall i: d_i \in L_{req}\).  We  view the process of mapping elicited requirements  to a single SM representation as a structure prediction task, in which we map   a   discourse unit  \(D\in\mathcal{D}\) to a  single overall  system model that satisfies all requirements  \(M\in\mathcal{M}\).

%\footnote{Our system model (to be distinguished from our probabilistic model) may also be viewed as an ontology, a set of concepts, inter-relations and constraints that describe the  static domain   \cite{zapata-losada}. In our case this is simply a tree structure but one may stipulate that the ontology is specified using a formal description logic such as OWL \url{http://owlapi.sourceforge.net}.}  
%\[f: \mathcal{D} \rightarrow \mathcal{M}\]
%We propose two  modes of interpretation of this document: a sentence based and a discourse based mode.

\subsection{Sentence-Based Modeling}
%Let \(D\in\mathcal{D}\) be a document \(D = d_1,..., d_n\)  consisting of \(n\) requirements, each requirement is a sentence in the language defined above, that is,  \(\forall i: d_i \in L_{req}\). 
%Our first probabilistic model for parsing requirements documents assumes we parse individual sentences separately.
The task of our sentence-based model is  to learn a function that maps each requirement sentence to its correct LSC diagram and SM snapshot. 


Given a discourse \(D=d_1...d_n\) we think of each \(d_i\) as having been generated by a context-free grammar.
The syntactic analysis of \(d_i\) may be ambiguous, 
%that is, we can find more than one tree that derives it. 
%In the sentence-based mode, 
so we first implement a syntactic analysis function \(syn:\mathcal{L}_{req}\rightarrow \mathcal{T}_{req} \) using a probabilistic model that selects the most likely syntax tree \(t\) of each \(d\) individually. 
%\[ t^* = argmax_{t\in \mathcal{T}_{req}} P(t|d)\]
We can simplify \(syn(d)\), since \(d\) is constant with respect to the maximisation:
\[
\begin{array}{ccl}
syn(d) & = & argmax_{t\in \mathcal{T}_{req}} P(t|d)\\
 & = &  {argmax}_{t\in\mathcal{T}_{req}} \frac{P(t,d)}{P(d)}    \\ 
& = &  {argmax}_{t\in\mathcal{T}_{req}} P(t,d)    \\ 
  \end{array}
\]
Since every tree \(t\) for \(d\) must include \(d\) in the leaves we can simplify further:
\[
\begin{array}{ccl}
syn(d)  &   = &  {argmax}_{t\in\{t| t\in\mathcal{T}_{req}, yield(t)=d\}} P(t)    \\ 
  \end{array}
\]
%& = &  {argmax}_{t\in\{t| t\in\mathcal{T}_{req}, yield(t)=d\}} P(t)    \\ 
 % \end{array}
%\]
In order to define a probability model over trees \(P(t)\), we define a probability distribution over the rules, \(P:\mathcal{R}\rightarrow [0,1]\), such that for all rules of \(G\) it holds that \(\sum_\alpha P(A\rightarrow \alpha) = 1\). Because of context-freeness we get \(P(t)=\prod_{r\in der(t)} P(r)\),
where \(der(t)\) returns the rules that derive \(t\). 
The resulting probability distribution \(P:\mathcal{T}_{req} \rightarrow [0,1]\) defines a probabilistic language model for the requirements language \(\mathcal{L}_{req}\). 
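The product \(P(t)=\prod_{r\in der(t)} P(r)\) can be sketched as a recursive computation over a tree; the toy grammar and its probabilities below are hypothetical.

```python
import math

def tree_prob(tree, rule_probs):
    """P(t) = product of rule probabilities over the derivation der(t)."""
    if isinstance(tree, str):               # a leaf word contributes no rule
        return 1.0
    label, *children = tree
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    p = rule_probs[(label, rhs)]            # probability of rule label -> rhs
    for c in children:
        p *= tree_prob(c, rule_probs)
    return p

# Hypothetical toy PCFG: probabilities sum to 1 per left-hand side.
rules = {("S", ("NP", "VP")): 1.0,
         ("NP", ("user",)): 0.6, ("NP", ("button",)): 0.4,
         ("VP", ("clicks",)): 1.0}
t = ("S", ("NP", "user"), ("VP", "clicks"))
assert math.isclose(tree_prob(t, rules), 0.6)
```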
%\[\sum_{d\in \mathcal{L}_{req}} P(d) = \sum_{d \in \mathcal{L}_{req}} \sum_{t \in \mathcal{T}_{req}, yield(t)=d} P(t) = 1\]

%\paragraph{Compositionality:}

Syntactic parse trees are complex entities, assigning structures to the seemingly flat sequences of words.
% It is widely accepted in linguistic theory that the role of these structures is to facilitate the mapping of utterances onto deeper,  structured, representation of meaning. 
The principle of compositionality asserts that the meaning of a complex syntactic entity is a function of the meaning of its parts and their mode of combination. 
We  assume a function \(sem: \mathcal{T}\rightarrow \mathcal{C}\) mapping trees to semantic constructs  in the LSC language.
The semantics of a tree \(t\in\mathcal{T}_{req}\) is derived compositionally from the interpretation of the rules in \(\mathcal{R}\).
%By augmenting each syntactic \(A\rightarrow \alpha\in \mathcal{R}\) with an interpretation function, we define how the meaning of the daughters in \(\alpha\) is to be combined  to deliver the meaning of    \(A\).   
We overload the \(sem\) notation to define \(sem: \mathcal{R} \rightarrow \mathcal{C}\), a function assigning LSC constructs to rules, with \(\hat\circ\) merging the different posets of events.
%\semantic interpretation operations that define how the meaning of daughters should be combine to yield the meaning of the parent. 
We illustrate how \(sem\) maps trees to LSC constructs in the appendix.
%,  and retain \(\hat\circ\) for the interpretation composition.
 %and the operator \(\circ\) as a sequential application of these combination operation.  
Our sentence-based compositional semantics is summarized as:
% where \(u\) is an utterance describing a single requirement and \(c\) is a chart representing the utterance semantics.
 %follows. We illustrate this process in Figure 3 below.
\[sem(d)=sem(syn(d))=sem(r_1\circ ... \circ r_n) =\]\[ sem(r_1) \hat\circ   ...  \hat\circ  sem(r_n) = c_1\hat\circ ... \hat\circ c_n = c \]
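This composition can be sketched as a left-to-right fold of rule interpretations; encoding LSC fragments as ordered event lists, and the particular events shown, are illustrative simplifications.

```python
from functools import reduce

def merge(c1, c2):
    """The merge operator: union of event posets, preserving first-seen order."""
    return c1 + [e for e in c2 if e not in c1]

# Hypothetical rule interpretations sem(r_1), sem(r_2), sem(r_3).
fragments = [["user.click(button)"],
             ["button.set(color=red)"],
             ["user.click(button)"]]       # duplicate event, merged away
chart = reduce(merge, fragments, [])
assert chart == ["user.click(button)", "button.set(color=red)"]
```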



%We propose to view the process of mapping each elicited requirements  to a single SM snapshot as a prediction task, in which we map   a   sentence in English  \(d\in\Sigma^*\) to a snapshot \(m\in\{m_i \sqsubseteq M  | M\in \mathcal{M}\} \). 
%\[f:\Sigma^* \rightarrow \{m_i \sqsubseteq M  | M\in \mathcal{M}\} \]
%We assume an intermediate syntactic representation between the surface form of sentences \(d\) and their semantic content \(m\). In particular, we assume that  each syntactic analysis is mapped to a single semantic interpretation. The is, we assume a deterministic mapping \(sem(t)=c  \)  (The converse doesn't hold of course, that is, a single semantic analysis can be expressed via different syntactic parse trees). 

%\paragraph{Grounding:}
 %In the sentence-based model we assume that suffices it to find the most probably  tree and then assign a single LSC  interpretation \(c\) to it. 
 Once \(c\) is determined, one can straightforwardly construct an implementation for every entity, action and property in the chart. Then, by design, we obtain a system model \(m\) that grounds \(c\), that is, \(m\triangleright c\).
 %
To construct the system model for the entire discourse we simply return 
 \(f(d_1,...,d_n)= \sqcup_{i=1}^n m_i\), where \(\forall i : m_i \triangleright sem(syn(d_i))\).
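The union \(\sqcup_{i=1}^n m_i\) over snapshots can be sketched as follows; the dict encoding of snapshots and the deliberately minimal conflict test (same method name assigned to different owning entities) are assumptions made here for illustration.

```python
def unify(m1, m2):
    """Unification of two SM snapshots; returns the empty SM on conflict."""
    conflict = any(m1["methods"].get(a) not in (None, l)
                   for a, l in m2["methods"].items())
    if conflict:
        return {"entities": set(), "methods": {}}
    return {"entities": m1["entities"] | m2["entities"],
            "methods": {**m1["methods"], **m2["methods"]}}

# Hypothetical snapshots from two consecutive requirements.
m1 = {"entities": {"user", "button"}, "methods": {"click": "button"}}
m2 = {"entities": {"button", "light"}, "methods": {"turnOn": "light"}}
M = unify(m1, m2)
assert M["entities"] == {"user", "button", "light"}
assert M["methods"] == {"click": "button", "turnOn": "light"}
```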

%\paragraph{Training:} To train the probabilistic syntactic parameters we employ a treebank PCFG learned from annotated examples.
%\paragraph{Decoding:} To provide the 1-best parse of the requirement we use  a standard CKY decoder. The runtime complexity of the decoder is \(O(n^3|G|^3)\)

\subsection{Discourse-Based Modeling}
%\subsection{Modeling}

%Let \(\mathcal{D}\) be a set of documents and \(\mathcal{M}\) be a set of system models. In this paper, we aim to implement a function \[f: \mathcal{D} \rightarrow \mathcal{M} \] Such that \(f(D) = M\), and for each \(d\in D: M\models sem(d)\)


We assume a given document \(D\in\mathcal{D}\) and aim to find the most probable system model \(M\in\mathcal{M}\) that satisfies the requirements. We assume that \(M\) reflects a single domain that the stakeholders have in mind, and that we are given ambiguous natural language evidence, the elicited discourse, in which they convey it.
We instantiate this view as a {\em noisy channel} model \cite{shannon48}, which also provided the foundation for    NLP applications such as speech recognition
\cite{bah83} and machine translation \cite{brown93}. 

According to the noisy channel model,   when a signal is received it does not uniquely identify the message being sent.  A probabilistic model is then used to decode the original message. In our case, the signal is the discourse  and the message is the overall system model.  In formal terms, we want to find a model \(M\) that maximises the following:
\[f(D)=  {argmax}_{M\in\mathcal{M}} P(M|D)\]
We can simplify further using Bayes' law, since \(D\) is constant with respect to the maximisation.
\[
\begin{array}{ccl}
f(D)& = &  {argmax}_{M\in\mathcal{M}} P(M|D)   \\
  &  =  & {argmax}_{M\in\mathcal{M}} \frac{P(D|M)\times P(M)}{ P(D) }   \\
  &  =  &    {argmax}_{M\in\mathcal{M}} P(D|M)\times P(M)  
  \end{array}
\]
 We would thus like to estimate two types of probability distributions: \(P(M)\) over the source and \(P(D|M)\) over the channel.
 
Both \(M\) and \(D\) are objects with complex internal structure. In order to assign distributions to events involving such complex structures, it is customary to break them down into simpler, more basic events. We know that \(D=d_1, d_2, \dots , d_n\) is composed of \(n\) individual sentences, each representing a certain aspect of the model \(M\).
We  assume a sequence of  {\em  snapshots} of \(M\) that correspond to the timestamps \(1...n\), that is: \(m_1, m_2, ... , m_n\). The complete SM is given by the union of its different snapshots, i.e.,  \(M= \bigsqcup_i m_i\). We  then rephrase  \(P(M)\)    and \(P(D|M)\) as:
\[
\begin{array}{rcl}
 P(D|M) & = &P(d_1....d_n|m_1...m_n) 
 \\
 P(M) & = & P(m_1...m_n)
 \end{array}
\]
%We get two types of events
%\begin{itemize}
%\item The   probability of sentence \(d_i\) depends on the   sentences so far and the  system model snapshots.
%\item The  probability of a snapshot \(m_i\) depends on the sequence of previous snapshots.  
%\end{itemize}
These events may be seen as points in a high-dimensional space. In practice, they are too complex to estimate directly. 
%We therefore assume two dimensionality reduction ("history-based") functions \(\phi, \psi\)  that select only relevant aspects of the conditioning context.
We therefore define two independence assumptions. First, we assume that a system model snapshot at time \(i\) depends only on a fixed number \(k\) of previous snapshots (a stationary distribution). Secondly, we assume that each sentence \(d_i\) depends only on the SM snapshot at time \(i\). We now get:
\[
\begin{array}{ccl}
 P(d_1...d_{n}| m_1...m_n) & \approx &   \prod_i P(d_i|m_i)  \\
 P(m_1...m_{n}) & \approx &  \prod_i P(m_i|m_{i-1}... m_{i-k}) \\
  \end{array}
\]
Assuming a bi-gram transition model, our objective function is now represented as follows:
%\[P(m_1...m_n,d_1....d_n) \approx \prod_i P(m_{i}|m_{i-1}) P(d_i|m_i)\]
\[f(D)=   {{argmax}_{M\in\mathcal{M}}}  {\prod_{i=1}^n P(m_{i}|m_{i-1}) P(d_i|m_i)}\]

\subsection{Training and Inference}
Our model is in essence a Hidden Markov Model in which state-transition probabilities model transitions between SM snapshots and emission probabilities model the verbal description of each state. 
To implement this, we need two different algorithms: a decoding algorithm that searches through all possible state sequences, and a training algorithm that can automatically learn the values of the, still rather complex, parameters \(P(m_{i}|m_{i-1})\) and \(P(d_i|m_i)\) from corpora.
\[f(D)=  \underbrace{{argmax}_{M\in\mathcal{M}}}_{decoding} \underbrace{\prod_{i=1}^n P(m_{i}|m_{i-1}) P(d_i|m_i)}_{training}\]



\paragraph{Training:}
We assume a supervised training setting in which we are given a set of examples annotated by a human expert. For instance, these can be requirements that an analyst has formulated and encoded using an LSC editor, disambiguating the analyses by hand. 

We are provided with a set of pairs \(\{(D_i,M_i)\}_{i=1}^n\) containing \(n\) documents, where each pair is represented as a set of triples \(\{(d_j,t_j,m_j)\}_{j=1}^{n_i}\).
For all \(j\), \(t_j = syn(d_j)\) and \(m_j \triangleright sem(t_j)\). The union of the system model snapshots yields the entire model, \(\bigsqcup_j m_{j} = M_i\), which satisfies the set of requirements: \(M_i\triangleright {\bf sem}(d_{1},...,d_{n_i})\). 
%We assume a bi-gram   parameteric decomposition:
%\[P(m_1...m_n,d_1....d_n) \approx \prod_i P(m_{i}|m_{i-1}) P(d_i|m_i)\]
%\[f(D)=   {{argmax}_{M\in\mathcal{M}}}  {\prod_{i=1}^n \frac{P(m_{i},m_{i-1})}{P(m_{i-1})} \times \frac{P(d_i,t_i,m_i)}{P(m_i)}}\]
 %Thus we have to estimate three sets of parameters:
 
 
\paragraph{(i) Emission Parameters}
Our emission parameters \(P(d_i|m_{i})\) represent the probability of a verbal description given the SM snapshot that grounds the semantics of the sentence. The same semantic representation may result from different syntactic analyses of the sentence. We calculate this probability by marginalizing over the syntactic trees that are grounded by the same SM snapshot.
\[P(d|m) =
 \frac{ \sum_{t\in\{t|leaves(t)=d,\; m\triangleright sem(t)\}} P(t)}{ \sum_{t\in\{t|t\in\mathcal{T}_{req},\; m\triangleright sem(t)\}} P(t)} \]
% {Estimating Transition Parameters.}
 %\(\hat{P}(d,t,m)= \sum_{\{t|yield(t)=d,m\models sem(t)\}} \hat{P}(t)= \prod_{r\in t} \hat{P}(r)\). 
%Now, we wish to also estimate the probability of a given system model \(m_i\). In principle, we can sum up over all trees and all sentences that are interpreted to yield this model \[  P(m)=   \sum_{t\in \{t\in{T}_{req}; sem(t)=m\}} P(t) \]
The probability \(\hat{P}(t)\) is estimated using a generative model learned as a treebank PCFG \cite{charniak96} from all pairs \(\langle d_j,t_j\rangle \) in the annotated corpus. We estimate rule probabilities using maximum-likelihood estimation and apply simple smoothing for unknown lexical rules using rare-word distributions.  
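The marginalization above can be sketched over a restricted set of candidate trees; the triples below are hypothetical stand-ins for (yield, grounding snapshot, tree probability).

```python
def emission_prob(d, m, trees):
    """P(d|m): mass of trees yielding d and grounded by m, over all trees grounded by m."""
    num = sum(p for (y, sm, p) in trees if y == d and sm == m)
    den = sum(p for (y, sm, p) in trees if sm == m)
    return num / den if den else 0.0

# Hypothetical candidate trees: (yield, snapshot id, probability).
trees = [("the user clicks", "m1", 0.3),
         ("the user clicks", "m1", 0.1),   # a second derivation, same snapshot
         ("user the clicks", "m1", 0.1),
         ("the user clicks", "m2", 0.2)]
assert abs(emission_prob("the user clicks", "m1", trees) - 0.8) < 1e-9
```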



\paragraph{(ii) Transition Parameters}

Our transition parameters \(P(m_i|m_{i-1})\) represent the amount of overlap between SM snapshots. We look at the current and the previous system model, and aim to estimate how likely the current model is given the previous one. There are different assumptions that may underlie this probability distribution, reflecting different principles of human communication.  
We first define a generic estimator as follows:
\[\hat{P}(m_i|m_j) = \frac{gap(m_i,m_j)}{\sum_{m_i} gap(m_i, m_j)}\]
where \(gap(\cdot)\) quantifies the information sharing between SM snapshots. Regardless of our implementation of \(gap\),
it can easily be shown that \(P: \mathcal{M}\times \mathcal{M} \rightarrow [0,1]\) is a conditional probability distribution, with \(\sum_{m_i}  P(m_i|m_j)= 1\) for every \(m_j\). In practice, \(\mathcal{M}\) is a restricted universe considered during inference, as described below.
% (see below).

We define different \(gap\) implementations, reflecting different assumptions about the discourse.
The first assumption is that different SM snapshots refer to the same conceptual world, so there should be a large overlap between them. We call this  the {\bf max-overlap} assumption. A second assumption is that in collaborative communication a new requirement will only be stated if it provides new information, akin to \newcite{grice}. %  in this case,  the greater the expansion of the model is, the most likely the transition is. 
This is the {\bf max-expansion} assumption. 
A third assumption prefers ``easy'' transitions over ``hard'' ones; this is the {\bf min-distance} assumption. The different \(gap\) calculations are listed in Table~\ref{trans}.
% the more \(m_{i-1}\) to \(m_i\), the less likely this transition is. 

%We approximate this distribution using a similarity metric based on tree-edit distance between the two snapshots, normalised by the worst case scenario of completely disjoint trees. 

%\[P(m_i|m_{i-1}... m_{i-k}) = \frac{P(m_i,m_{i-1}...,m_{i-k})}{P(m_{i-1}... m_{i-k})} \]
%As opposed to many sequence models in NLP, thes events \(m_i,d_i\) are complex events as well. How can we define them formally? How can we assign probability distributions to them?
\begin{table}
\centering
\begin{tabular}{|rc|}
\hline
Transition & \(gap(m_{curr},m_{prev})\) \\ 
\hline\hline
{\bf max-overlap}  &\(\frac{ |m_{curr} \cap m_{prev}|}{|m_{prev}|}\) \\
{\bf max-expansion} & \(\frac{ |m_{curr} - m_{prev}|}{|m_{prev}|+|m_{curr}|}\) \\
{\bf min-distance} &  \(1-\frac{ted(m_{prev},m_{curr})}{|m_{prev}|+|m_{curr}|}\) \\
\hline
\end{tabular}
\caption{Quantifying the gap between  snapshots.}\label{trans}
\end{table}
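The three gap functions of Table~\ref{trans} can be sketched over flattened snapshot sets; using the symmetric difference as a stand-in for the tree edit distance \(ted(\cdot)\) is an assumption made here for brevity.

```python
def max_overlap(curr, prev):
    """|curr ∩ prev| / |prev|: snapshots should largely agree."""
    return len(curr & prev) / len(prev)

def max_expansion(curr, prev):
    """|curr - prev| / (|prev| + |curr|): new requirements add information."""
    return len(curr - prev) / (len(prev) + len(curr))

def min_distance(curr, prev):
    """1 - ted / (|prev| + |curr|): prefer 'easy' transitions."""
    ted = len(curr ^ prev)                  # stand-in for tree edit distance
    return 1 - ted / (len(prev) + len(curr))

# Hypothetical flattened snapshots.
prev = {"button", "click"}
curr = {"button", "click", "light"}
assert max_overlap(curr, prev) == 1.0       # 2/2
assert max_expansion(curr, prev) == 0.2     # 1/5
assert min_distance(curr, prev) == 0.8      # 1 - 1/5
```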



\paragraph{Inference}

An input   document contains \(n\) requirements. %We  assume  a standard CKY parser which  can find in polynomial time the 
Our decoding algorithm considers the N-best syntactic analyses for each requirement. %The probabilistic model we assume is thus an HMM 
At each time step \(i=1...n\) we assume \(N\) states representing the semantics of the \(N\) best syntax trees. Thus, setting \(N=1\) reduces to the sentence-based model, in which for each sentence we simply select the most likely tree according to the probabilistic grammar and construct a semantic representation for it.

For each document of length \(n\), we assume that our entire universe of system models \(\mathcal{M}\) is composed of the \(N\times n\) snapshots
reflecting the \(N\times n\) most likely analyses of the sentences, as provided by the probabilistic syntactic model.
%This is a simplifying assumptions that we make. 
(As we shall see shortly, even with this simplifying assumption concerning the size of our universe \(\mathcal{M}\), our discourse-based model provides substantial improvements over a sentence-based model.)\footnote{This restriction is akin to pseudo-likelihood estimation, as in \newcite{arnold91}. In pseudo-likelihood estimation, instead of normalizing over the entire set of elements, one uses a subset that reflects only the possible outcomes. Here, instead of summing SM probabilities over all possible sentences in the language, we sum over the alternative SM analyses of the observed sentences in the document only. This estimation could also be addressed via, e.g., sampling methods.}


Our discourse-based model is an HMM in which each requirement is an observed signal, and each \(i=1...N\) is a state representing the SM that grounds the \(i\)-th best tree. Because of the Markov independence assumption, our setup satisfies the {\em optimal substructure} and {\em overlapping subproblems} properties, and we can use efficient Viterbi decoding to exhaustively search through the different state sequences and find the most probable sequence that generated the sequence of requirements according to our probabilistic model. 
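Viterbi decoding over the objective \(\prod_{i} P(m_{i}|m_{i-1}) P(d_i|m_i)\) can be sketched as follows; the state names and the probability tables are hypothetical toy values.

```python
def viterbi(states_per_step, trans, emit):
    """Return the most probable state sequence under the bigram HMM objective."""
    # best[s] = (score of best path ending in s, that path)
    best = {s: (emit[0][s], [s]) for s in states_per_step[0]}
    for i in range(1, len(states_per_step)):
        new = {}
        for s in states_per_step[i]:
            score, path = max(
                (best[p][0] * trans[(p, s)] * emit[i][s], best[p][1])
                for p in states_per_step[i - 1])
            new[s] = (score, path + [s])
        best = new
    return max(best.values())[1]

# Hypothetical two-step decoding problem with N=2 states per step.
states = [["a", "b"], ["c", "d"]]
trans = {("a", "c"): 0.9, ("a", "d"): 0.1, ("b", "c"): 0.2, ("b", "d"): 0.8}
emit = [{"a": 0.6, "b": 0.4}, {"c": 0.5, "d": 0.5}]
assert viterbi(states, trans, emit) == ["a", "c"]
```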
%Our decoder than returns the sequence of snapshots that maximizes the values of our overall objective function \[f(D)=   {{\textit{argmax}}_{M\in\mathcal{M}}}  {\prod_{i=1}^n P(m_{i}|m_{i-1}) P(d_i|m_i)}\]
%\paragraph{Sentence-Based Inference}
%\paragraph{Discourse-Based Inference}

%\paragraph{Runtime Complexity:}
The overall complexity of the decoder for a requirements document with \(n\) sentences whose maximum length is \(l\), a grammar \(G\) of size \(|G|\), and a constant \(N\) is given by the following expression: \[O(n \times N^2 + n\times  l^3 \times |G|^3 + l^2 \times n\times N  )\]
We can break down this expression as follows: (i) in \(O(n\times l^3 \times |G|^3)\) we generate the \(N\) best trees for each of the \(n\) requirements,
(ii) in \(O(l^2\times N\times n) \) we create the universe \(\mathcal{M}\) based on the \(N\) best trees of each of the \(n\) requirements, and
(iii) in \(O(n\times N^2)\) we traverse the time/state grid with the Viterbi decoder, considering transitions between the \(N\) states at each of the \(n\) time steps.

%More intuitively, in each time step \(i=1...n\) for each model in the universe, we consider the entire set of models in the universe as possible previous states, yielding the latter expression. Since both G and N and are finite constants,  our algorithmic run time complexity is polynomial in the length \(m\) of the utterance and the number of requirements \(n\) in the document.
%IThat is, in each of the \(i\) time-steps we review all states at time \(i\) and at time \(i+1\) (both bound by N), we exhaustively parse the requirements of length \(m\) using \(G\) and we construct, for each state, a semantic representation for each of the N analyses in square time in the length of the length of the sentence (marked here using the averaged length  \(m\)). 



%%%
%%% TO USE?
%%% 
%\subsection{Overall Architecture}
%The system we present extends the PG tool, and eclipse-based platform for playing-in software requirements, with additional action. Previously the system allowed entering requirements that are supported by the writing guidelines of \cite{} in an interaction mode, where ambiguities that arise are resolved by a human agent through extensive interaction.   In this work, we replace to human-extensive work with a statistical model that achieves precisely this goal. The resulting system has thus three input modes:
%\begin{itemize}
%\item {\em Interactive}: The system uses a chart parser to incrementally parse the sentence, using the formal grammar rules. Whenever the system encounters an ambiguity, it issues a query to the user. Based on the reply the system constructs the system model and LSC sequences. This is our baseline system, and works exactly as presented Gordon and Harel \cite{}. 
%\item {\em Sentence-based} The system uses a CKY parser trained on the data presented in \ref{eval}. The most likely syntax tree for each requirements sentence is translated, using deterministic rules, to an LSC sequence and SM. The resulting SM is a  union of all the snapshots.
%\item {\em Discourse-based}  The system uses a sequence-based viterbi decoder  presented in \ref{eval}, and provides a single system model and a sequence of LSCs at once.
%\end{itemize}
%As we have seen in Section \ref{eval} results may not be 100\% accurate as there are difficult ambiguous cases that our system cannot tackle yet. The PlayGo tool, within which the system is embedded, allows for interactive post editing of the resulting sequences using an intuitive visual editor, so a human user may easily post edit these scenarios to the desired one. We extended the PG system which a specifically tailored language generation component, that automatically transforms LSC sequences into textual requirements that contain all relevant SM information. The description of the generation code is out of the scope of this paper. However, using this Post-Editing/Generation cycle the platform allows non-programmers to effectively and efficiently obtain more, and more accurate, semantically annotated requirements data.



\section{Experiments}\label{eval}
\paragraph{Goal.}
%TODO: tables for training data, graphs for 1-best parsing,  for N-best parsing,  for Oracle parsing.
We set out to evaluate the accuracy of representing and grounding the semantics of requirements documents in the two new modes of analysis. Our evaluation methodology is standard in machine learning and NLP. Given a set of annotated examples, we divide them into disjoint training and test sets. We train our statistical model on the training documents and predict the semantic analysis of the test requirements documents. We then compare the predicted structures to the gold semantic analyses of the test documents, in order to empirically quantify our prediction error.

\paragraph{Metrics.}
Our LSC semantic objects are formally trees. Therefore, we can use standard tree evaluation metrics, such as the ParseEval scores implemented in evalb. In order to evaluate the accuracy of the LSC representations we use three kinds of scores:
\begin{itemize}
%\item {\bf Precision:}  number of  correctly predicted nodes , normalized by  size of  predicted tree.
%\item {\bf Recall:} number of  correctly predicted nodes , normalized by  size of  gold tree.
\item[] {\bf POS:} the percentage of part-of-speech tags predicted correctly.
\item[] {\bf LSC-F1:} the harmonic mean of precision and recall on the tree. 
\item[] {\bf LSC-EM:} 1 if the predicted tree is an exact match to the gold tree, 0 otherwise.
\end{itemize}

In the case of SM trees, as opposed to the LSC trees, we cannot assume identity of yield between the gold and predicted trees, so we cannot use ParseEval. Therefore, we use a distance-based metric in the spirit of \newcite{tsarfaty12}.
To evaluate the accuracy of the system model representation we use two kinds of scores:
\begin{itemize}
\item[] {\bf SM-F1:} the normalized edit distance between the gold and predicted SM trees, subtracted from one.
\item[] {\bf SM-EM:} 1 if the predicted SM is an exact match with the gold SM, and 0 otherwise.
%, normalized by the sizes of both gold tree and parse tree. 
\end{itemize}
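The SM-F1 score can be sketched as follows; approximating the tree edit distance by a symmetric difference over flattened node sets is a simplification made here for illustration only.

```python
def sm_f1(gold, pred):
    """One minus the edit distance between gold and predicted SMs,
    normalized by the sizes of the two trees."""
    dist = len(gold ^ pred)                 # stand-in for tree edit distance
    return 1 - dist / (len(gold) + len(pred))

# Hypothetical flattened SM node sets.
gold = {"button", "click", "color"}
pred = {"button", "click"}
assert abs(sm_f1(gold, pred) - 0.8) < 1e-9  # 1 - 1/5
```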

\paragraph{Data.}
We have  a small seed  of correctly annotated  requirements-specification case studies that describe simple reactive systems in the LSC language. 
%The requirements are specified one scenario per line, and are provided
Each requirement is annotated with the correct LSC sequence, and the specified systems are grounded in a Java implementation. We use the set of case studies provided by \cite{gordon09}. 
%The scenarios we have are Phone, WristWatch, Chess, Vending Machine. 
%These system resulted from case studies in liberated software engineering (cf.\ \cite{harel}). % studies, course and demos in \cite{harel,zvika,etc}. 
Table~\ref{sample} lists the case studies and basic statistics concerning these data.

%The annotation of requirements documents of complete systems is a complex and time consumer manner, and typically require a programmer expert in order to be conducted accurately.  On the other hand, with modest amount of about 100 annotated scenarios altogether,
This annotated seed is extremely small, and it is hard to generalize from it to unseen examples. In particular, we are not guaranteed to have observed all possible structures that are theoretically permitted by the assumed grammar.
To cope with this, we create a synthetic set of examples using the grammar of \cite{gordon09} in generation mode, randomly generating trees 
\(t\in\mathcal{T}_{req}\). We assume a grammar with a fixed functional lexicon and an open set of nouns, verbs and adjectives. 

The grammar \(G\) we use to generate the synthetic examples clearly {\em over-generates}. That is, it creates many trees that do not have a sound human interpretation. In fact, according to our semantic interpretation function, only 3000 out of 10000 generated examples have a sound semantic interpretation grounded in an SM. Nonetheless, they allow us to smooth the syntactic distributions observed from the seed alone.
%that have not been observed in our small training data. 
%We try different ways to interpolate the real-world examples seed and the synthetic examples 100/0, 99/1, 50/50. Preliminary experiments on our development set as shown in show that  99/1 provide superior results.

%How many synthetic examples should we generate for training our sentence-based model? To determine this, we conduct another experiment in which we vary the generated examples train-set size. The curve in Table~\ref{synthetic} shows that increase in accuracy as we increase the generated test set size. We see that at around 10,000 our curve rounds off. We thus select this model as our baseline.





\paragraph{Results.}

\begin{table}
\center
\scalebox{0.8}{
\begin{tabular}{|l||c|c|}
\hline 
System & \#Scenarios &  avg sentence length \\
 % & & \\
\hline
Phone & 21 & 24.33 \\
WristWatch & 15 & 29.8 \\
Chess & 18&  15.83\\
Baby Monitor &14 &  20\\
\hline
Total  & 68 &  22.395\\
\hline
\end{tabular}}
\caption{Seed Gold-Annotated  Requirements}\label{sample}
\end{table}

%To evaluate our sentence-based model, we performed a cross-fold validation experiment in the same settings as the baseline described above. In each fold, we hold out one document, and report two scores: the macro-average on the lsc trees accuracy, and the accuracy on the overall system model. The results of this experiment is given in table \ref{}. While the documents vary in length, we prefer not to split and mix documents in order not to break the document coherence.


%To evaluate the discourse based model, we perform a set of experiments in which each requirements is first assigned its N-best syntactic representation, and then we construct a document-based model to parse it. Table \ref{} shows the accuracy results for an Oracle experiment where we vary the value of N (1,2,4,8,16, 32,64,128). We see that we get a substantial improvement with growing N, which indicates that the correct parse is present in the list more often than in smaller Ns. In order to recover it without the Oracle function, we choose to rely on context-based information. Here we see a substantial improvement on all metrics for  \(N \leq 64\), with a slight drop at \(N=128\).




%We evaluate our HMM discourse-based model on N=128, again in a cross-ford settings, and show the results in table \ref{}. We see that, in order discourse-based model, results on both LSC accuracy and SM accuracy significantly improve, relative to the sentence-based model.
%We performed a qualitative error analysis to view the kind of errors that are resolved using the discourse based context. In table X we can see three trees, a gold tree, a tree predicted using the sentence based model and a tree predicted by the discourse based model. The ambiguity in the syntactic structure has been resolved successfully in the discourse based model, taking the contextual information into account.


Our first set of experiments aims to evaluate how sentence-based PCFG parsing compares to the discourse model, building on our small seed and a large set of synthetic examples. Table~\ref{phone} presents our results for parsing the {\em Phone} example, which acts as our development set. 


We first present the results of an Oracle experiment, wherein for every time stamp the system simply selects the highest-scoring tree (in terms of the LSC score defined above) available in the list. The initial ranking is provided by a PCFG learned from our seed set, interpolated with the probabilities learned from the automatically generated corpus. This experiment provides an upper bound on the results we can obtain for each metric at each value of \(N\) (where \(N\) has a direct effect on run-time complexity). Next we present the results of our discourse-based model, where the grammar rules are estimated based on the arbitrarily generated trees, but which incorporates transition probabilities that respect the max-overlap assumption. Already in this experiment we see a mild improvement for \(N>1\) relative to the \(N=1\) results, indicating that even an extremely weak signal, such as the overlap between the domains of neighboring sentences, helps discern more appropriate parses in context. The third part of the table lists the results, for all \(N\), of the discourse-based model whose PCFG interpolates rule weights from the seed as well as the synthetic examples. The transitions are again estimated to reflect the state overlap.
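Selecting among the per-sentence N-best lists with transition probabilities amounts to Viterbi decoding in an HMM whose states are the candidate trees. The following sketch illustrates the idea under simplifying assumptions (scores as plain floats, illustrative function names); it is not the paper's actual implementation.

```python
import math

def viterbi_over_nbest(nbest_lists, emit, trans):
    """Viterbi decoding over per-sentence N-best lists.
    Each sentence's candidates act as HMM states; emit(t) is the
    (e.g. interpolated-PCFG) score of tree t, and trans(t1, t2) the
    discourse transition score between adjacent candidates."""
    first = nbest_lists[0]
    delta = [[math.log(emit(t)) for t in first]]   # best log-score so far
    back = [[None] * len(first)]                   # backpointers
    for i in range(1, len(nbest_lists)):
        row, ptr = [], []
        for t in nbest_lists[i]:
            scores = [delta[i - 1][j] + math.log(trans(prev, t))
                      for j, prev in enumerate(nbest_lists[i - 1])]
            j = max(range(len(scores)), key=scores.__getitem__)
            row.append(scores[j] + math.log(emit(t)))
            ptr.append(j)
        delta.append(row)
        back.append(ptr)
    # Backtrace the best sequence of candidate trees.
    k = max(range(len(delta[-1])), key=delta[-1].__getitem__)
    path = [k]
    for i in range(len(nbest_lists) - 1, 0, -1):
        k = back[i][k]
        path.append(k)
    path.reverse()
    return [nbest_lists[i][k] for i, k in enumerate(path)]
```

Note that the per-sentence winner under \texttt{emit} alone need not lie on the globally best path: a weaker parse can be preferred when it coheres better with its neighbors.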
 


In our next experiment we compare different implementations of the \(gap(m_i,m_j)\) estimation. We estimate probability distributions that reflect each of the assumptions we discussed, and add an additional condition, called {\bf hybrid}, in which we interpolate the {\bf max-expansion} and {\bf max-overlap} estimates with equal weights, aiming to capture both assumptions about the discourse semantics. For all \(N\), the trend from the previous experiment is repeated. Notably, the hybrid model provides a larger error reduction than its two components used separately, indicating that capturing discourse context may well require balancing possibly conflicting factors of the interpretation.
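To make the distinction concrete, the sketch below shows one possible instantiation of the two assumptions, and their hybrid, over sets of mentioned entities. The specific formulas (Jaccard overlap, fraction of new entities) and the function names are illustrative stand-ins, not the estimates actually used in the paper.

```python
def overlap_gap(m_i, m_j):
    """Max-overlap assumption: adjacent sentences tend to talk about
    the same entities, so score by entity-set overlap (Jaccard)."""
    a, b = m_i["entities"], m_j["entities"]
    return len(a & b) / len(a | b) if (a | b) else 0.0

def expansion_gap(m_i, m_j):
    """Max-expansion assumption: a sentence tends to extend the model
    with new material, so score by the fraction of new entities."""
    a, b = m_i["entities"], m_j["entities"]
    return len(b - a) / len(b) if b else 0.0

def hybrid_gap(m_i, m_j, w=0.5):
    """Hybrid condition: equal-weight interpolation of the two."""
    return w * overlap_gap(m_i, m_j) + (1 - w) * expansion_gap(m_i, m_j)
```

The two component scores pull in opposite directions (repeating vs.\ introducing entities), which is exactly why interpolating them can help.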
	


\begin{table} 
\centering
\scalebox{0.7}{
\begin{tabular}{|r|cccc|}
\hline 
Data Set & N=1 &  32 & 64 & 128 \\
\hline \hline
{\em Baby Monitor} & & &  &  \\
\hline
POS  &  94.29	 &  96.07	 & 96.07	 & 96.07	 \\
LSC-F1 &  91.50	& 94.96& 94.96 & 94.96	 \\ 
LSC-EM &  14.29&21.43 &21.43 & 21.43\\ 
SM-TED & 88.63 & 91.11 &  91.11& 91.11 \\ 
SM-EM &  28.57& 50.00 & 50.00 & 50.00 \\ 


\hline
{\em Chess} & & &  & \\
\hline
POS   & 92.63	  &  93.68	  & 93.68	   & 93.68	   \\
LSC-F1 &  95.79	  & 96.16 & 96.16	  &  96.16	  \\ 
LSC-EM &  5.56 &  11.11 &   11.11&  11.11 \\ 
SM-TED & 94.90 & 97.10& 97.10 & 97.10\\ 
SM-EM &  61.11& 66.67& 66.67& 66.67 \\ 

\hline
{\em Phone} &  & &  & \\
\hline
POS  & 91.59	  &  94.72	&  94.91 & 94.32 \\
LSC-F1 &  88.05	 & 92.15	 &   92.42	 &   92.07 \\ 
LSC-EM &  14.29 &   47.62&   47.62&  47.62 \\ 
SM-TED & 85.17 & 94.87 &  95.75& 93.83 \\ 
SM-EM & 14.29 & 57.14 & 57.14 & 57.14 \\ 

\hline
{\em WristWatch} & & & & \\
\hline
POS  &  34.23	 &  34.45	   &    34.45	    &    34.45	    \\
LSC-F1 & 50.06	   &   51.05	&  51.05	&  51.05	 \\ 
LSC-EM &   26.67&  26.67&  26.67&   26.67\\ 
SM-TED & 71.15 &  72.73&72.73 & 72.73 \\ 
SM-EM & 	26.67 &33.33 &33.33 &33.33 \\ 

\hline

\hline
\end{tabular}}
\caption{Cross-fold validation for \(N=1, 32, 64, 128\), for the Seed+Generated-trained system with the {\bf Hybrid} transition parameters.}\label{disc-based}
\end{table}




\begin{table}
\scalebox{0.76}{
\begin{tabular}{|r|ccccc|}
\hline
N=1 & POS   & LSC-F1 & LSC-EM & SM-TED & SM-EM \\
\hline
{\bf Gen-Only} &  85.52	 & 84.40	  &   9.52& 84.25 & 9.52 \\
{\bf Gen+Seed} & {\bf 91.59} & {\bf 88.05} & {\bf 14.29} & {\bf 85.17}& {\bf 14.29} \\
\hline
\end{tabular}}
\caption{Sentence-Based modeling: Accuracy results  on the {\em Phone} development set.}
\end{table}

\begin{table}
\centering


\scalebox{0.7}{
\begin{tabular}{|r|ccccccc|}
\hline 
System &      N=2 & 4 & 8 & 16 & 32 & 64 & 128 \\
 % & & EM / P / R / F \\
 \hline
 {\bf Oracle} &  & & & & & & \\
\hline
POS  &    91.98	&  93.54&   94.91 &   95.30	   &  96.09	 & 96.67	  &  {\bf 96.87}	  \\
%  P  &90.45	  &   91.02 &  92.79	 & 94.30	 &    95.26	&  95.80	 &    96.34	 &   96.76	  \\
%  R & 85.77 & 86.55 & 89.91& 92.11 &  93.53& 94.44 &  95.47& 96.64 \\
LSC-F1  &  	    88.73	 &   91.33	 & 93.19	 &   94.39&    95.11& 95.91 &    {\bf 96.70}\\
  LSC-EM  &     23.81&  42.86 & 61.90 &   61.90& 66.67 &  76.19&  {\bf 76.19} \\
  SM-TED  &   86.54 &  91.28& 94.28  & 94.88&  96.24& 97.51&  {\bf 98.80}\\
  SM-EM  &  23.81  &  42.86& 66.67&  71.43&  76.19&  76.19& 76.19 \\ 
 \hline
  {\bf Gen-Only}    & & & & & & & \\ 
  \hline
 POS    &  85.52	  &  86.30	 &    87.67	  &  88.45	         &   88.85	      &    88.85	   &     88.85	     \\
%   P  & & & & & & & & \\
% R  & & & & & & & & \\
  LSC-F1  &     84.40	  &     85.35	 &   86.31	 &  87.51	  &  88.81	   &   89.30	    &     89.51	      \\
  LSC-EM  & 9.52 & 9.52& 14.29 & 14.29 & 14.29 &  14.29&  14.29\\
  SM-TED  &  84.25	&  85.94& 89.14 &  91.90&  92.81	&  	93.31& 92.70 \\
  SM-EM  &   9.52 &  19.05& 33.33 & 33.33 & 33.33  & 	38.10 & 33.33 \\ 

  \hline
  {\bf Gen+Seed}   & & & & & & & \\ 
  \hline
 POS  &   	   91.78	   &   92.95&   93.54&   93.35	  & 94.32	 & 94.52&   93.93\\
%  P   & 90.45	   & 90.57	 & 91.94	&  92.40	& 92.51	   &   92.97	   &   93.12	 &   93.08	  \\
%R & 85.77 &  85.77&  88.49& 89.65&89.52 &90.69 &91.07 &  90.43\\
 LSC-F1   &     88.11 &  90.18	 &   91.00	&    90.99&  91.81	&  92.09	 &    91.73 \\
  LSC-EM  &      19.05 &    38.10&  42.86 &    42.86&  42.86 &  42.86 &      42.86 \\
  SM-TED  &     85.49& 90.78 & 93.59 & 93.02&94.81 & 95.69 &  93.76\\
  SM-EM  & 19.05 & 38.10 &  52.38& 52.38 &  52.38& 52.38 & 52.38 \\ 
 
\hline
\end{tabular}}
\caption{Discourse-Based modeling: Accuracy results  on the {\em Phone} dev set. The system (in {\bf bold}) specifies the model that selects the N-best candidates.  The Oracle experiment selects the highest  scoring LSC tree among the N-candidates, providing an accuracy upper bound. The Gen-Only relies on   synthetic examples, providing a lower bound.  Gen+Seed adds a seed of annotated examples, providing a strong baseline for the task.
% and provides a lower bound for all experiments.  
%using an interpolated grammar that is based on a real-world seed and generated requirements for estimating emissions, and the max-overlap metric for estimating transitions.
}\label{sent-based}
\end{table}


    
    
\begin{table}[t]
\centering
\scalebox{0.7}{
\begin{tabular}{|r|ccccccc|}
\hline
 Transitions &    N=2 & 4 & 8 & 16 & 32 & 64 & 128 \\
 \hline
 \hline
  {\bf No Trans}   & & & & & & & \\ 
  \hline

  POS    &    91.78& 92.56	 &93.35 &93.15 & 94.32	 & 94.52 &   93.93	 \\
%   Precision  & & & & & & & & \\
% Recall  & & & & & & & & \\
  LSC-F1  &  	  88.11	  &  89.49	 &    90.67	 & 90.66	   &  91.81	  &92.09	    &  91.73	 \\
  LSC-EM  & 19.05 & 38.10& 42.86&  42.86& 42.86&  42.86&  42.86\\
  SM-TED  &  85.49 & 90.76& 93.68&93.11 & 94.81&95.69 &93.76  \\
  SM-EM  &   19.05 &  38.10&  52.38& 52.38 & 52.38 &  52.38& 52.38 \\ 

 \hline\hline
  {\bf Edit Dist}   & & & & & & & \\ 
  \hline	 


  POS    &   91.98& 92.76 &  93.54 &  93.35&  94.32	&    94.52&  93.93 \\
%   Precision  & & & & & & & & \\
% Recall  & & & & & & & & \\
  LSC-F1  &   	      88.39	    &  89.77	  &  91.00	   &   90.99	  &   91.81	   &92.09	     &    91.73	   \\
  LSC-EM  &    23.81 &  42.86 &  47.62 &   47.62&    47.62& 47.62 &  47.62 \\
  SM-TED  &  86.54&91.71 & 94.38&93.81 &95.57 &96.43 & 94.53 \\
  SM-EM  &  23.81&  42.86&  57.14& 57.14 & 57.14 &  57.14&  57.14\\ 

  \hline
  {\bf Overlap}   & & & & & & & \\ 
  \hline


  POS    &   91.78	    &   92.95	&   93.54&    93.35 &    94.32  & 94.52		 &  93.93 \\
%   Precision  & & & & & & & & \\
% Recall  & & & & & & & & \\
  LSC-F1  & 	    88.11	  & 90.18	  & 91.00	   & 90.99	    &  91.81	 &  92.09	   & 91.73	   \\
  LSC-EM  &      19.05 &   38.10&  42.86&   42.86&  42.86&  42.86&  42.86\\
  SM-TED  & 85.49 &90.78 & 93.59 &93.02 &94.81 & 95.69& 93.76 \\
  SM-EM  &   19.05& 38.10 &52.38 &52.38 &52.38 &52.38 &52.38 \\ 
  \hline
  {\bf Expand}  & & & & & & & \\ 
  \hline

  POS    &     91.98	 &  92.76	&   93.74 & 93.54 & 94.32 &94.52	   &  93.93 \\
%   Precision  & & & & & & & & \\
% Recall  & & & & & & & & \\
  LSC-F1  &      88.39	 &   89.71	     &  91.00	 & 90.99	 &  91.68	 & 91.96 & 91.60	\\
  LSC-EM  &   23.81&42.86 &47.62 & 47.62&47.62 &47.62 &47.62 \\
  SM-TED  &  86.54 & 91.93& 93.75&93.18 &94.79 & 95.66&  93.75\\
  SM-EM  &   23.81 & 42.86 & 57.14 & 57.14 &  57.14&  57.14& 57.14\\ 


  \hline
  {\bf Hybrid}   & & & & & & & \\ 
  \hline

  POS    &     91.78& 92.95	  & 93.93	 &  93.74 & 94.72 & 94.91&  94.32	\\
%   Precision  & & & & & & & & \\
% Recall  & & & & & & & & \\
  LSC-F1  &          88.11	& 90.18& 91.34	 &    91.33	 & 92.15	    &   92.42	 & 92.07	   \\
  LSC-EM  &    19.05& 38.10 & 47.62 &47.62 &47.62 &47.62 &47.62 \\
  SM-TED  &   85.49&90.78 &93.66 &93.09 &94.87 &95.75 &93.83 \\
  SM-EM  &   19.05& 38.10&  57.14& 57.14 &  57.14& 57.14 & 57.14 \\ 
\hline
        
\hline
 \hline
  {\bf  No Emit}    & & & & & & & \\ 
  \hline 
  
  POS       & 91.78	    &91.98	 & 92.37& 92.37&92.17 &   92.76	 & 93.15	 \\
%   Precision  & & & & & & & & \\
% Recall  & & & & & & & & \\
  LSC-F1   	  &88.11 &88.79	   & 89.12	&89.12	  &89.39	 &89.67	 & 89.89	 \\
  LSC-EM &  19.05& 19.05& 23.81&23.81 &  23.81&  23.81&  23.81\\
  SM-TED  &85.49 &	85.74 &85.82 & 85.82&85.87 & 86.85& 86.92\\
  SM-EM  & 19.05 & 19.05 & 23.81 & 23.81 &  23.81& 23.81 & 23.81 \\ 
\hline
\end{tabular}}

\caption{Experiments on the {\em Phone} development set. The system (in {\bf bold}) specifies the estimation procedure for the transition probabilities. In all experiments we use the Generated+Seed grammar, based on a real-world seed and generated requirements.}

%  The Oracle experiment selects the highest LSC score tree among the N-candidates, providing the accuracy upper bound for all metrics. The Generated-Only relies only on synthetic examples, and provides a lower bound for all experiments. }\label{sent-based}
\end{table}




Our final experiment is a cross-fold experiment in which we leave one document out as a test case and shuffle our training seed. We provide a snapshot of our results in Table \ref{}. Here we see that the discourse-based model outperforms the sentence-based model (\(N=1\)) in all cases. Moreover, the drop at \(N=128\) appears to be incidental to this data set. While these test sets are all very small, the consistent improvement we obtain in semantically parsing the dynamic (LSC) and static (SM) domains, on all metrics, is consistent with our initial modeling hypothesis: modeling the interpretation process within the discourse context in which it takes place has a substantial benefit for the overall automatic understanding of the text.
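The cross-fold protocol itself is straightforward; a minimal sketch, with \texttt{train} and \texttt{evaluate} as placeholders for the actual training and scoring pipeline:

```python
def leave_one_out(documents, train, evaluate):
    """Leave-one-document-out evaluation: hold out each document in
    turn, train on the remaining documents, and macro-average the
    per-fold scores. `train` and `evaluate` are pipeline placeholders."""
    scores = []
    for i, held_out in enumerate(documents):
        model = train([d for j, d in enumerate(documents) if j != i])
        scores.append(evaluate(model, held_out))
    return sum(scores) / len(scores)
```

Holding out whole documents, rather than individual sentences, preserves the discourse coherence that the context-based model relies on.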
 

\section{Conclusion}\label{conc}
\vspace{-0.03in}
The requirements-understanding task presents an exciting challenge to CL/NLP. It requires us to automatically recover the entities in the discourse, the actions they take, conditions, temporal constraints, and constraints on execution. 
 %(modalities such as may, must or forbidden), and so on.
Furthermore, it requires us to extract a single ontology that satisfies all individual requirements.
 The contribution of this paper is three-fold: we formalize the task, propose a formalism for semantic representation and its grounding in code, and empirically evaluate two models on discourse-to-model prediction. We show consistent improvements of discourse-based over sentence-based models in four case studies. 
 %We present supervised statistical training and an efficient decoding procedure that seeks for an overall model that is most probable given all possible interpretations
At the same time, this work assumes a restricted fragment of English. In the future we intend to apply this model to interpreting requirements written in a less restrictive NL fragment.  
%We further intend to explore the use of this discourse-based generative modeling  on other semantic tasks augmented with a knowledge-acquisition component.

\bibliography{text2code}
\bibliographystyle{acl}

\appendix
 
 
\begin{figure}[t]
\scalebox{0.65}{\includegraphics{lsc-ex}}
\caption{An LSC representation of the sentence ``When the user clicks the button then the display must set its color to red.'' In the diagram we see three different entities: a User, a Button and a Display. The Button has a method click() and the Display has a color property that may be accessed using a setter. The dynamics of the scenario require that the clicking event by the user is monitored and optional; in case it does happen, the color-setting event must occur.
%These User, Button and Display entitied and their method address the structural (static) domain.  while 
}
\end{figure}

% \section{Semantic Interpretation of CFG rules}\
 
 
 \begin{table}
\scalebox{0.65}{
\begin{tabular}{|lcl|l|}
\hline 
A & \(\rightarrow\) & \( \alpha \)& A.sem  \\
\hline 
S  & \(\rightarrow\) &  S\(_{LSC}\)  &S\(_{LSC}\).sem \\
%S\(_{TOP}\) &   \(\rightarrow\) & MODEL &  {MODEL.sem} \\
S\(_{LSC}\) &  \(\rightarrow\) &  CLAUSE\(_{WHENEVER}\) &  {{\em fCreateLsc}(CLAUSE\(_{WHENEVER}\).sem)}    \\
S\(_{LSC}\)&  \(\rightarrow\) &  CLAUSE\(_{FORBID}\) RB\(_{THEN}\) CLAUSE & {{\em fCreateForbiddenLsc}(CLAUSE.sem)}  \\
S\(_{LSC}\) &  \(\rightarrow\) &  CLAUSE\(_{FORBID}\)   RB\(_{THEN}\)   CLAUSE\(_{WHENEVER}\)   & {{\em fCreateForbiddenLsc}(CLAUSE\(_{WHENEVER}\).sem)} \\\hline
CLAUSE\(_{WHENEVER}\)&  \(\rightarrow\) &    RB\(_{WHEN}\) CLAUSE\(_{MAY}\)   RB\(_{THEN}\) CLAUSE\(_{MUST}\) & {{\em fAddPreMain}(CLAUSE\(_{MAY}\).sem, CLAUSE\(_{MUST}\).sem)} \\
CLAUSE\(_{FORBID}\) &  \(\rightarrow\) & DT\(_{THE}\) NN\(_{FOLLOWING}\) AUX\(_{CAN}\) RB\(_{NEVER}\) VB\(_{HAPPEN}\) &  \\
CLAUSE\(_{FORBID}\)  &  \(\rightarrow\) &  DT\(_{THE}\) NN\(_{FOLLOWING}\) COP\(_{IS}\) JJ\(_{FORBIDEN}\)  & \\
CLAUSE\(_{MAY}\)  &  \(\rightarrow\) &  CLAUSE & {CLAUSE.sem} \\
CLAUSE\(_{MUST}\)    & \(\rightarrow\) &  CLAUSE &   {CLAUSE.sem}  \\
CLAUSE\(_{LOOP}\)    & \(\rightarrow\) &  RB\(_{WHILE}\) S\(_{EXPRESSION}\)  RB\(_{THEN}\) CLAUSE & {{\em fCreateLoopWithCondition}(S\(_{EXPRESSION}\).sem, CLAUSE.sem)} \\
CLAUSE\(_{COND}\)    & \(\rightarrow\) & CLAUSE\(_{IF}\)   & { CLAUSE\(_{IF}\).sem} \\
CLAUSE\(_{COND}\)    & \(\rightarrow\) &  CLAUSE\(_{IF}\)   CLAUSE\(_{ELSEIF}\)   & {{\em fAddCharts}(IF-CLAUSE.sem, NULL.sem, ELSE-IF-CLAUSE.sem)} \\
CLAUSE\(_{COND}\)    & \(\rightarrow\) &  CLAUSE\(_{IF}\) CLAUSE\(_{ELSE}\) & {{\em fAddCharts}(IF-CLAUSE.sem, NULL.sem, ELSE-CLAUSE.sem)} \\
CLAUSE\(_{COND}\)     & \(\rightarrow\) &   CLAUSE\(_{IF}\)  CLAUSE\(_{ELSEIF}\)  CLAUSE\(_{ELSE}\)  & {{\em fAdd3Charts}(CLAUSE\(_{IF}\).sem, CLAUSE\(_{ELSEIF}\).sem, CLAUSE\(_{ELSE}\) .sem)} \\
CLAUSE\(_{IF}\)    & \(\rightarrow\) &  PREP\(_{IF}\) S\(_{EXPRESSION}\)  RB\(_{THEN}\) CLAUSE & {{\em fCreateCondAddChart}(S\(_{EXPRESSION}\).sem, CLAUSE.sem)} \\
CLAUSE\(_{ELSEIF}\)   & \(\rightarrow\) &  CLAUSE\(_{ELSEIF}\)  S\(_{EXPRESSION}\)  RB\(_{THEN}\) CLAUSE & {{\em fCreateElseCondAddChart}(S\(_{EXPRESSION}\) .sem, CLAUSE.sem)} \\
CLAUSE\(_{ELSEIF}\)   & \(\rightarrow\) &  CLAUSE\(_{ELSEIF}\)  CLAUSE\(_{ELSEIF}\)  & {{\em fAddCharts}(0.CLAUSE\(_{ELSEIF}\) .sem, null , 1.CLAUSE\(_{ELSEIF}\) .sem)} \\
CLAUSE\(_{ELSE}\)   & \(\rightarrow\) &  [RB\(_{THEN}\)] RB\(_{ELSE}\) [RB\(_{THEN}\)] CLAUSE & {{\em fCreateElseAddChart}(CLAUSE.sem)} \\
CLAUSE   & \(\rightarrow\) &  CLAUSE COORD CLAUSE &  {{\em fAddCharts}(CLAUSE[0].sem, COORD.sem, CLAUSE[1].sem)} \\
CLAUSE  & \(\rightarrow\) & CLAUSE\(_{COND}\)  &  {CLAUSE\(_{COND}\).sem} \\
CLAUSE  & \(\rightarrow\) &  CLAUSE\(_{ELSEIF}\)  &{{\em fAddToLsc}(CLAUSE\(_{ELSEIF}\).sem)} \\
CLAUSE  & \(\rightarrow\) & CLAUSE\(_{ELSE}\)  & {{\em fAddToLsc}(CLAUSE\(_{ELSE}\).sem)} \\
CLAUSE   & \(\rightarrow\) &  CLAUSE\(_{LOOP}\) & {CLAUSE\(_{LOOP}\).sem} \\
CLAUSE   & \(\rightarrow\) &  CLAUSE\(_{WHENEVER}\) & {{\em fAddToLsc}(CLAUSE\(_{WHENEVER}\).sem)} \\

CLAUSE   & \(\rightarrow\) &  S\(_{MSG}\)  & {S\(_{MSG}\).sem} \\
CLAUSE   & \(\rightarrow\) &  S\(_{PROP-CHANGE}\)   & {S\(_{PROP-CHANGE}\)  .sem} \\
CLAUSE  & \(\rightarrow\) &   S\(_{EXPRESSION}\)  &  {{\em fAddToLsc}(S\(_{EXPRESSION}\).sem)} \\
CLAUSE   & \(\rightarrow\) &   S\(_{TIME-CHANGE}\)  &  {  CLAUSE\(_{TIME-CHANGE}\) .sem} \\
\hline
S\(_{MSG}\)    & \(\rightarrow\) &  NP\(_{SBJ}\) [MD] VB\(_{METHOD}\) NP\(_{OBJ}\) [PROP-VAL] 	& {{\em fCreateMessage}(NP\(_{SBJ}\).sem, NP\(_{OBJ}\).sem, TEMP.sem, METHOD.sem, PROP-VAL.sem)} \\
S\(_{MSG}\)    & \(\rightarrow\) &  NP\(_{OP}\) [MD] VB\(_{METHOD}\) [PROP-VAL]	 & {{\em fCreateMessage}(OP.sem, OP.sem, TEMP.sem, METHOD.sem, PROP-VAL.sem)} \\
S\(_{MSG}\)    & \(\rightarrow\) &  NP\(_{SBJ}\) [MD] VB\(_{METHOD}\) PROP-VAL PREP NP\(_{OBJ}\)	& {{\em fCreateMessage}(NP\(_{SBJ}\).sem, NP\(_{OBJ}\).sem, TEMP.sem, METHOD.sem, PROP-VAL.sem)} \\
S\(_{MSG}\)      & \(\rightarrow\) &  NP\(_{SBJ}\) [MD] VB\(_{METHOD}\) NP\(_{OBJ}\) NN\(_{PROP-NAME}\) & {{\em fCreateMessageWithPropArg}(NP\(_{SBJ}\).sem, NP\(_{OBJ}\).sem, TEMP.sem, METHOD.sem, NN\(_{PROP-NAME}\).sem)} \\
S\(_{MSG}\)    & \(\rightarrow\) & NP\(_{SBJ}\) [MD] VB\(_{METHOD}\) DET NN\(_{PROP-NAME}\) OF NP\(_{OBJ}\) &  {{\em fCreateMessageWithPropArg}(NP\(_{SBJ}\).sem, NP\(_{OBJ}\).sem, TEMP.sem, METHOD.sem, NN\(_{PROP-NAME}\).sem)} \\
S\(_{PROP-CHANGE}\)   & \(\rightarrow\) &  NP\(_{SBJ}\) [MD] SET-PROP NP\(_{OBJ}\) NN\(_{PROP-NAME}\) [PROP-VAL] & {fCreatePropChange(NP\(_{SBJ}\).sem, NP\(_{OBJ}\).sem, TEMP.sem, NN\(_{PROP-NAME}\).sem, PROP-VAL.sem)}  \\

 

S\(_{PROP-CHANGE}\)   & \(\rightarrow\) &  NP\(_{OP}\) NN\(_{PROP-NAME}\) [MD] SET-PROP [PROP-VAL] 	& {{\em fCreatePropChange}(OP.sem, OP.sem, TEMP.sem, NN\(_{PROP-NAME}\).sem, PROP-VAL.sem)} \\
S\(_{PROP-CHANGE}\)   & \(\rightarrow\) &  NP\(_{OP}\) [MD] SET-PROP [PROP-VAL] NN\(_{PROP-NAME}\) & 	{{\em fCreatePropChange}(OP.sem, OP.sem, TEMP.sem, NN\(_{PROP-NAME}\).sem, PROP-VAL.sem)} \\
S\(_{PROP-CHANGE}\)   & \(\rightarrow\) &  NP\(_{OP}\) [MD] SET-PROP ITS NN\(_{PROP-NAME}\)	[PROP-VAL] & {{\em fCreatePropChange}(OP.sem, OP.sem, TEMP.sem, NN\(_{PROP-NAME}\).sem, PROP-VAL.sem)} \\
S\(_{PROP-CHANGE}\)   & \(\rightarrow\) &  NP\(_{SBJ}\) NN\(_{PROP-NAME}\)1 [MD] SET-PROP [PREP] [DET] OBJECT PROP-NAME2 	& {{\em fCreatePropChangeEx}(NP\(_{SBJ}\).sem, NP\(_{SBJ}\).sem, TEMP.sem, PROP-NAME1.sem, OBJECT.sem, PROP-NAME2.sem)}\\
S\(_{PROP-CHANGE}\)  & \(\rightarrow\) &  NP\(_{SBJ}\) PROP-NAME1 [MD] SET-PROP [PREP] [DET] OBJECT PROP-NAME2 	&{{\em fCreatePropChangeEx}(NP\(_{SBJ}\).sem, NP\(_{SBJ}\).sem, TEMP.sem, PROP-NAME1.sem, OBJECT.sem, PROP-NAME2.sem)} \\
S\(_{PROP-CHANGE}\)   & \(\rightarrow\) & NP\(_{SBJ}\) [MD] VP\(_{SET-PROP-BY-VAL}\) NP\(_{OBJ}\) NN\(_{PROP-NAME}\) PROP-VAL  & {{\em fCreatePropChangeByVal}(NP\(_{SBJ}\).sem, NP\(_{OBJ}\).sem, TEMP.sem, NN\(_{PROP-NAME}\).sem, PROP-VAL.sem, SET-PROP-BY-VAL.sem)} \\
S\(_{PROP-CHANGE}\)  & \(\rightarrow\) &  NP\(_{OP}\) NN\(_{PROP-NAME}\) [MD] SET-PROP-BY-VAL PROP-VAL & 	{{\em fCreatePropChangeByVal}(OP.sem, OP.sem, TEMP.sem, NN\(_{PROP-NAME}\).sem, PROP-VAL.sem, SET-PROP-BY-VAL.sem)} \\
S\(_{PROP-CHANGE}\)  & \(\rightarrow\) &  NP\(_{OP}\) [MD] SET-PROP-BY-VAL PROP-VAL NN\(_{PROP-NAME}\)	& {{\em fCreatePropChangeByVal}(OP.sem, OP.sem, TEMP.sem, NN\(_{PROP-NAME}\).sem, PROP-VAL.sem, SET-PROP-BY-VAL.sem)}\\

S\(_{PROP-CHANGE}\)  & \(\rightarrow\) &  NP\(_{OP}\) [MD] VP\(_{SET-PROP-BY-VAL}\) PRP\(_{ITS}\) NN\(_{PROP-NAME}\) PREP\(_{BY}\) PROP-VAL 
&  {{\em fCreatePropChangeByVal}(NP\(_{OP}\).sem, NP\(_{OP}\).sem, TEMP.sem, PROP-NAME.sem, PROP-VAL.sem, SET-PROP-BY-VAL.sem)} \\
S\(_{EXPRESSION}\)  & \(\rightarrow\) &   S\(_{EXPRESSION}\)   CC\(_{AND}\) S\(_{EXPRESSION}\)   & {{\em fConcatExpressions}(0.S\(_{EXPRESSION}\).sem, 2.S\(_{EXPRESSION}\).sem)} \\
S\(_{EXPRESSION}\)  & \(\rightarrow\) &   NP\(_{OP}\) NN\(_{PROP-NAME}\) COMPARE PROP-VAL & {{\em fCreateExpression}(OP.sem, PROP-NAME.sem, NULL.sem, COMPARE.sem, NULL.sem, NULL.sem, PROP-VAL.sem)} \\
S\(_{EXPRESSION}\)  & \(\rightarrow\) &  NP\(_{OP}\) NN\(_{PROP-NAME}\) TEMP COMPARE PROP-VAL &  {{\em fCreateExpression}(OP.sem, PROP-NAME.sem, TEMP.sem, COMPARE.sem, NULL.sem, NULL.sem, PROP-VAL.sem)} \\
S\(_{EXPRESSION}\)    & \(\rightarrow\) &NN\(_{OP}\) NN\(_{PROP-NAME}\) COMPARE NN\(_{OP}\) PROP-NAME  & {{\em fCreateExpression}(0.OP.sem, 1.PROP-NAME.sem, NULL.sem, COMPARE.sem, 3.OP.sem, 4.PROP-NAME.sem, NULL.sem)} \\
S\(_{EXPRESSION}\)  & \(\rightarrow\) &   NP\(_{OP}\) NN\(_{PROP-NAME}\) MD COMPARE NN\(_{OP}\) NN\(_{PROP-NAME}\)& {{\em fCreateExpression}(0.OP.sem, 1.PROP-NAME.sem, TEMP.sem, COMPARE.sem, 4.OP.sem, 5.NN\(_{PROP-NAME}\).sem, NULL.sem)} \\
S\(_{EXPRESSION}\)  & \(\rightarrow\) &   NP\(_{OP}\) NN\(_{PROP-NAME}\) COMPARE NN\(_{ENTITY}\) & {{\em fCreateExpressionForObject}(0.OP.sem, 1.PROP-NAME.sem, NULL.sem, COMPARE.sem, OBJECT.sem)} \\
S\(_{EXPRESSION}\)  & \(\rightarrow\) &   NP\(_{OP}\) NN\(_{PROP-NAME}\) MD COMPARE NN\(_{ENTITY}\)  &{{\em fCreateExpressionForObject}(0.OP.sem, 1.PROP-NAME.sem, TEMP.sem, COMPARE.sem, OBJECT.sem)}
\\
S\(_{EXPRESSION}\)   & \(\rightarrow\) &S\(_{TIME-CHANGE}\) & {TIME-CHANGE.sem} \\

S\(_{TIME-CHANGE}\)    & \(\rightarrow\) & NP\(_{TIME-INTERVAL}\)  [MD] VB\(_{TIME-PASSED}\) & {fCreateTimeChange(TIME-INTERVAL.sem, TEMP.sem)} \\

\hline
\end{tabular}}
\caption{Sentence-Level Categories}
 \end{table}
 
 \begin{table}
\scalebox{0.65}{
\begin{tabular}{|lcl|l|}
\hline 
A & \(\rightarrow\) & \( \alpha \)& A.sem  \\
\hline 
NP\(_{OP}\) & \(\rightarrow\) &DET NN\(_{ENTITY}\)	& {{\em fCreateObject}(DET.sem, NN\(_{ENTITY}\).sem)} \\
NP\(_{OP}\)   & \(\rightarrow\) & DET NN\(_{ENTITY}\) PREP\(_{WITH}\) NN\(_{PROP-NAME}\) PROP-VAL & {{\em fCreateObjectWithCond}(DET.sem, OBJECT.sem, PROP-NAME.sem, PROP-VAL.sem)} \\
NP\(_{OP}\)   & \(\rightarrow\) & DET JJ\(_{PROP-VAL}\) NN\(_{PROP-NAME}\) NN\(_{ENTITY}\)	& {{\em fCreateObjectWithCond}(DET.sem, OBJECT.sem, PROP-NAME.sem, PROP-VAL.sem)} \\
NP\(_{OP}\)   & \(\rightarrow\) & DET\(_{INDEF}\)  NN\(_{CLASS-NAME}\) & {{\em fCreateObject}(DET-INDEFINITE.sem, CLASS.sem)} \\
NP\(_{GENERIC}\)   & \(\rightarrow\) & NN\(_{ENTITY}\) &  {{\em fCreateObject}(eInstance, OBJECT.sem)} \\
NP\(_{SBJ}\)   & \(\rightarrow\) & NP\(_{OP}\)  & {NP\(_{OP}\) .sem} \\
NP\(_{OBJ}\)   & \(\rightarrow\) & NP\(_{OP}\)  &  {NP\(_{OP}\) .sem} \\
NP\(_{OP}\)   & \(\rightarrow\) &NP\(_{GENERIC}\)& {\(_{GENERIC}\).sem} \\

NP\(_{PROP-NAME1}\)  & \(\rightarrow\) & NP\(_{PROP-NAME}\) & {PROP-NAME.sem} \\
NP\(_{PROP-NAME2}\)  & \(\rightarrow\) & NP\(_{PROP-NAME}\)  & {PROP-NAME.sem} \\


NP\(_{TIME-INTERVAL}\)    & \(\rightarrow\) &  NUMBER NN\(_{TIME-UNIT}\) & {{\em fCreateTimeInterval}(NUMBER.sem, TIME-UNIT.sem)} \\
VP\(_{SAVE-VARIABLE}\)    & \(\rightarrow\) &  VB\(_{SAVE}\) OP PROP-NAME IN UNKNOWN-NAME & {{\em fCreateSetVariable}(OP.sem, PROP-NAME.sem, UNKNOWN-NAME.sem, eSave)} \\
VP\(_{SAVE-VARIABLE}\)    & \(\rightarrow\) &  VB\(_{SAVE}\) [THE] CURRENT OP-NO-DET PROP-NAME &  {{\em fCreateSetVariable}(OP-NO-DET.sem, PROP-NAME.sem, NULL.sem, eSave)}
\\

VP\(_{SET-PROP-BY-VAL}\)  & \(\rightarrow\) & INCREASE [PREP\(_{BY}\)]  & {ePropIncrease} \\
VP\(_{SET-PROP-BY-VAL}\) & \(\rightarrow\) & DECREASE [PREP\(_{BY}\)] & {ePropDecrease} \\

NUMBER    & \(\rightarrow\) &  CARDINAL-NUMBER & {CARDINAL-NUMBER.sem} \\
NUMBER    & \(\rightarrow\) &  CARDINAL-NUMBER NUMBER & {{\em fCreateNumber}(CARDINAL-NUMBER.sem, NUMBER.sem)} \\



DET & \(\rightarrow\) &DET\(_{INDEF}\) & {DET-INDEFINITE.sem} \\
DET  & \(\rightarrow\) &DET\(_{DEF}\) & {DET-DEFINITE.sem} \\
MD   & \(\rightarrow\) & MD\(_{HOT}\) &  {eTempHot } \\
MD  & \(\rightarrow\) & MD\(_{COLD}\)  & {eTempCold} \\
MD & \(\rightarrow\) & MD\(_{HOT-NOT}\) & {eTempHotNot}  \\
MD & \(\rightarrow\) & MD\(_{COLD-NOT}\) & {eTempColdNot} \\
MD\(_{HOT}\)  & \(\rightarrow\) & MUST [EVENTUALLY] \(|\) EVENTUALLY \(|\) [EVENTUALLY] MUST \\
MD\(_{COLD-NOT}\) & \(\rightarrow\) & MUST NOT \(|\) CANNOT \(|\) TEMP-COLD NOT \\
MD\(_{HOT-NOT}\)  & \(\rightarrow\) & TEMP-HOT NEVER \(|\) TEMP-COLD NEVER \\



ELSE  & \(\rightarrow\) &  else \(|\) [THEN] OTHERWISE & {else} \\
ELSE-IF  & \(\rightarrow\) & ELSE IF & {elseif} \\

COORD   & \(\rightarrow\) &RB\(_{THEN}\) & {eControlExit} \\
COORD   & \(\rightarrow\) &CC\(_{AND}\)  RB\(_{AFTER}\)  THAT & {eControlSync} \\
COORD   & \(\rightarrow\) &CC\(_{AND}\) RB\(_{ONLY}\) RB\(_{THEN}\) &  {eControlSync} \\
COORD   & \(\rightarrow\) & CC\(_{AND}\) & {eControlContinue} \\
COORD   & \(\rightarrow\) & RB\(_{THEN}\)  CC\(_{AND}\) & {eControlContinue} \\
COORD   & \(\rightarrow\) & RB\(_{UNLESS}\)  &  {eControlColdForbid} \\
COORD   & \(\rightarrow\) & RB\(_{UNTIL}\)  & {eControlColdForbid} \\

COMPARE  & \(\rightarrow\) & IS & {eEqual} \\
COMPARE   & \(\rightarrow\) & IS NOT & {eNotEqual} \\
COMPARE   & \(\rightarrow\) & BE & {eEqual} \\
COMPARE   & \(\rightarrow\) &CANNOT BE &  {eNotEqual} \\
COMPARE  & \(\rightarrow\) & EQUALS [TO] & {eEqual} \\
COMPARE   & \(\rightarrow\) & EQUAL [TO] & {eEqual} \\
COMPARE   & \(\rightarrow\) &IS NOT EQUAL [TO]  & {eNotEqual} \\
COMPARE   & \(\rightarrow\) &IS EQUAL [TO] &  {eEqual} \\
COMPARE  & \(\rightarrow\) & IS GREATER THAN & {eGreaterThan} \\
COMPARE   & \(\rightarrow\) & IS LESS THAN & {eLessThan} \\
COMPARE   & \(\rightarrow\) & IS GREATER [THAN] OR EQUAL [TO]  & {eGreaterEqual} \\
COMPARE   & \(\rightarrow\) & IS LESS [THAN] OR EQUAL [TO] & {eLessEqual} \\


COMPARE    & \(\rightarrow\) &  COP\(_{IS}\) RB\(_{NOT}\) EQUAL TO & {eNotEqual} \\
COMPARE   & \(\rightarrow\) &  AUX\(_{DOES}\) RB\(_{NOT}\) EQUAL [TO] &  {eNotEqual} \\

%SOMETIMES    & \(\rightarrow\) &  sometimes \(|\) OTHER TIMES \\

\hline
\end{tabular}}
\caption{Phrase-Level Categories}
\end{table}









\begin{table}

\scalebox{0.7}{
\begin{tabular}{|lcl|l|}
\hline
A & \(\rightarrow\) & B & \\\hline
RB\(_{WHEN}\)  & \(\rightarrow\) &  when \(|\) whenever \\
RB\(_{THEN}\)  & \(\rightarrow\) &  then \(|\) , \(|\) do &  {eControlExit} \\
RB\(_{WHILE}\)    & \(\rightarrow\) &  while  \(|\) AS LONG AS \\
RB\(_{UNTIL}\)  & \(\rightarrow\) &  until \\
RB\(_{UNLESS}\)  & \(\rightarrow\) &  unless \\
RB\(_{OTHERWISE}\)  & \(\rightarrow\) &  otherwise \\
RB\(_{NOT}\)  & \(\rightarrow\) &  not \\
RB\(_{THAN}\)  & \(\rightarrow\) &  than \\
RB\(_{NEVER}\)  & \(\rightarrow\) & never \\
RB\(_{AS}\)  & \(\rightarrow\) &  as \\ 
RB\(_{ONLY}\)  & \(\rightarrow\) &  only \\

VB\(_{STORED}\)    & \(\rightarrow\) &  stored \(|\) saved \\
VB\(_{TIME-PASSED}\)    & \(\rightarrow\) &  [have] PASSED  \(|\) [have] ELAPSED \(|\) elapses  \(|\) passes \(|\) elapse \(|\) pass \\
VB\(_{SAVE}\)  & \(\rightarrow\) &  save \(|\) store \(|\) stores \\
VB\(_{LOAD}\)  & \(\rightarrow\) &  load \\
VB\(_{PASSED}\)  & \(\rightarrow\) & passed \\
VB\(_{ELAPSED}\)  & \(\rightarrow\) &  elapsed \\
VB\(_{DOES}\)  & \(\rightarrow\) &  does \\
VB\(_{INCREASE}\) & \(\rightarrow\) &  increase  \(|\) increases \(|\) increased \\
VB\(_{DECREASE}\)  & \(\rightarrow\) & decrease  \(|\) decreases   \(|\) decreased \\
%VB\(_{METHOD}\)  & \(\rightarrow\) &  increase \(|\)  increases  \(|\) increased  & {increase} \\
%VB\(_{METHOD}\) & \(\rightarrow\) &  decrease \(|\)  decreases  \(|\) decreased &  {decrease} \\
VB\(_{HAPPEN}\)  & \(\rightarrow\) &  happen \\
VB\(_{SET}\) & \(\rightarrow\) &  set \\
VB\(_{FORBIDEN}\)  & \(\rightarrow\) &  forbiden \\
VB\(_{SET-PROP}\) & \(\rightarrow\) & turn  \(|\) change  \(|\)  set  \(|\)  turns  \(|\)  changes  \(|\)  IS SET \(|\)  sets & {ePropSet} \\

AUX\(_{MUST}\) & \(\rightarrow\) & must  \(|\) shall  \(|\)  should \(|\)  will \\
AUX\(_{MAY}\)  & \(\rightarrow\) &  may \(|\) could \(|\) can \(|\)  does \\
AUX\(_{DOES}\)  & \(\rightarrow\) &  does \\
AUX\(_{CAN}\)  & \(\rightarrow\) & can \\
AUX\(_{CANNOT}\)   & \(\rightarrow\) &cannot \\

COP\(_{IS}\)  & \(\rightarrow\) &  is \(|\)  be \\
COP\(_{ARE}\) & \(\rightarrow\) &  are \\

%DET\(_{INDEF}\)  & \(\rightarrow\) &  a | an \\
DET\(_{DEF}\)  & \(\rightarrow\) &  the \\
DET\(_{INDEF}\)  & \(\rightarrow\) &a \(|\) an \(|\) any \(|\)  all \(|\)  some \(|\)  other \(|\)  another  & {eSymbolic} \\
DET\(_{DEF}\)   & \(\rightarrow\) & the  & {eInstance} \\
DET\(_{THAT}\)  & \(\rightarrow\) &  that \\
JJ\(_{LAST}\)  & \(\rightarrow\) &  last \\
JJ\(_{CURRENT}\)  & \(\rightarrow\) & current \\
JJ\(_{LONG}\)  & \(\rightarrow\) &  long \\ 
JJ\(_{EQUAL}\)  & \(\rightarrow\) &  equal \\
JJR\(_{LESS}\)  & \(\rightarrow\) &  less \\
JJR\(_{GREATER}\)  & \(\rightarrow\) &  greater \\
 
PREP & \(\rightarrow\) & TO  \(|\)  from \(|\) by  \(|\) IN \(|\)  of \\
PREP\(_{IF}\)  & \(\rightarrow\) &  if \\
PREP\(_{IN}\)  & \(\rightarrow\) & in \\
PREP\(_{OF}\)  & \(\rightarrow\) & of \\ 
PREP\(_{WITH}\)  & \(\rightarrow\) &  with \\
PREP\(_{AFTER}\)  & \(\rightarrow\) &  after \\

CC\(_{OR}\)  & \(\rightarrow\) & or \\
CC\(_{AND}\)  & \(\rightarrow\) &  and \\

NN\(_{TIME-UNIT}\)    & \(\rightarrow\) & minutes \(|\) minute \(|\)  seconds  \(|\) second  \(|\) hour  \(|\)  hours   \(|\) day \(|\) days  \\
NN\(_{TIME}\)  & \(\rightarrow\) &  time \\
NN\(_{TYPE}\)  & \(\rightarrow\) & type \\
NN\(_{CLASS}\)  & \(\rightarrow\) &  Category \\
NN\(_{PERCENT}\)  & \(\rightarrow\) &  percent \\
NN\(_{TIMES}\)  & \(\rightarrow\) &  times \\
NN\(_{OTHER}\)  & \(\rightarrow\) &  other \\
NN\(_{FOLLOWING}\)  & \(\rightarrow\) &  following \\ 


PUNCT\(_{DOT}\)  & \(\rightarrow\) &  . \\

PRP\(_{ITS}\)   & \(\rightarrow\) &  its \\

TO  & \(\rightarrow\) &  to \\
\hline

\end{tabular}}

\caption{Lexical Categories}
\end{table}

\end{document}


\appendix 
\section{Live Sequence Charts}
\paragraph{Definition 1}
Every LSC can be represented as a tuple:
\[\textit{LSC}=\langle L, M, C, Alt, Lp \rangle\]
\begin{itemize}
\item \(L\) represents a set of lifelines. Each lifeline is a feature structure containing the following attributes: 
class name, object name, binding (static/dynamic), methods, and properties. This set minimally consists of the elements \(\textit{User},\textit{Clock},\textit{Env}\in L\), which represent the user, the internal clock, and the external environment. 
\item  \(M\) represents a set of message instances. Each message is a feature structure containing the following attributes: a sender lifeline, a receiver lifeline, a temperature (hot/cold), and an execution mode (executed/monitored).
\item \(C\) represents a set of condition instances. Each condition is a feature structure containing the following attributes: an expression, a temperature, and an execution mode. Conditions are expressions in first-order predicate logic.
\item \(\textit{Alt}\) represents a set of alternative control (switch) structures. Each alternative control structure contains a set of alternatives, where each alternative contains the following feature structure: a number of iterations (up to infinity) and an ordered set of messages.
\item \(\textit{Lp}\) represents a set of loop structures. Each loop structure contains the following feature structure: a number of iterations (up to infinity) and an ordered set of messages.
\end{itemize}
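For concreteness, the feature structures of Definition 1 can be sketched as follows. This is a minimal Python sketch; the class and attribute names are taken from the definition itself rather than from any released implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Lifeline:
    class_name: str
    object_name: str
    binding: str                      # "static" or "dynamic"
    methods: List[str] = field(default_factory=list)
    properties: List[str] = field(default_factory=list)

@dataclass
class Message:
    sender: Lifeline
    receiver: Lifeline
    temperature: str                  # "hot" or "cold"
    execution_mode: str               # "executed" or "monitored"

@dataclass
class Condition:
    expression: str                   # first-order predicate over entities and properties
    temperature: str
    execution_mode: str

# Every LSC minimally contains the User, Clock, and Env lifelines.
user = Lifeline("User", "user", "static")
clock = Lifeline("Clock", "clock", "static")
env = Lifeline("Env", "env", "static")
lsc_lifelines = [user, clock, env]
```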

\paragraph{Definition 2}
Every LSC can be represented as a sequence:
\[\textit{LSC}=\langle e_1, e_2,..., e_n \rangle\]

An LSC is a partially ordered set of events that depicts dynamic system behaviour: 
\begin{itemize}
\item simple events:
\begin{itemize}
\item message \(\langle a( l_1,l_2, args),modal,exe\rangle \)
\item condition \(\langle exp, modal, exe \rangle\) 
\end{itemize}
\item complex events
\begin{itemize}
  
\item for \(\{ \#  \langle  e_1,..,e_k  \rangle \}\) 
\item while \(\{ exp  \langle  e_1,..,e_k  \rangle \}\) 
\item switch \(\{  exp_j \langle  e_1,..,e_k   \rangle\}_{j=1}^m\) 
\end{itemize}
\end{itemize}

Implicit in the LSC is a system model \(SM\), that is, a description of the assumed static domain:
\begin{itemize}
\item classes
\begin{itemize}
\item name
\item properties
\begin{itemize}
\item[] name
\item[] type
\item[] enum
\item[] default
\end{itemize}
\item methods
\begin{itemize}
\item[] name
\item[] args 
\end{itemize}
\item instances
\begin{itemize}
\item[] name
\item[] binding
\end{itemize}
\end{itemize}
\end{itemize}
The semantics of the LSC language is provided by an explicit and non-trivial translation of the language into temporal logic. 

Events may be basic or complex. Basic events may be of two   types:
\begin{itemize}
\item a message: \(msg = \langle a, l_1, l_2, args[], temp, exec\rangle \)
\\ where \(\forall i: l_i\in L\), \(\forall j : args[j]\in L\), \(a\in A\), \(temp \in \{hot,cold\}\), \(exec \in \{executed,monitored\}\).
\item a condition: \(cond = \langle exp, temp, exec\rangle \)
\\ where \(exp\) is an expression in first-order logic over entities in the domain and their properties.
\end{itemize}
Intuitively, a message is an event wherein an action is being carried out. The action may involve one or more entities (that is, it can be reflexive, transitive, ditransitive, and so on). A condition, on the other hand, represents a stative event: a declaration of a state of affairs, how things are (or ought to be).\footnote{In Vendler's terminology \cite{vendler}, a message is an event of type `activity', `accomplishment', or `achievement'; a condition is an event of type `state'.} 


Complex events may contain charts, and may be of three different types:
\begin{itemize}
\item a  loop \(lp = \langle \#, c \rangle \)\\ where \(\#\) is an integer and \(c\) is an LSC chart.\footnote{We only consider finite loops in this definition, as infinite loops change the expressive power of LSCs, taking them beyond the limits of temporal logic.}
\item an alternative \(alt = \langle cond, c\rangle \) \\where \(cond\) is a condition  and \(c\) is an LSC chart.
\item a switch \(\{ \langle cond_j, c_j\rangle \}_{j=1}^{\#}\) \\ which depicts a finite set of alternatives.
\end{itemize}
Intuitively, complex events represent control structures, which are linguistically expressed using complex tenses in natural language; for instance, the habitual, conditionals, and counterfactuals.
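A minimal sketch of the three complex-event types, with illustrative Python class names; a chart is rendered here simply as an ordered list of events.

```python
from dataclasses import dataclass
from typing import List, Tuple

# A chart is an ordered sequence of events (basic or complex).
Chart = List[object]

@dataclass
class Loop:
    iterations: int                    # finite only: infinite loops exceed temporal logic
    chart: Chart

@dataclass
class Alternative:
    condition: str                     # guard expression
    chart: Chart

@dataclass
class Switch:
    branches: List[Tuple[str, Chart]]  # finite set of (condition, chart) pairs

# e.g. a loop executing a one-branch switch three times
inner = Switch(branches=[("light.on == True", ["dim(light)"])])
lp = Loop(iterations=3, chart=[inner])
```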

\section{Related Work}\label{related}
\begin{itemize}
\item Modeling with LSCs: Michal Gordon
\item Modeling with other CNLs survey
\item NLP for NLP
\item Semantic parsing in general
\begin{itemize}
\item Collins / Zettlemoyer
\item Zettlemoyer / Artzi
\item Ray Mooney
\item Steedman / Luis
\item Percy / Berant
\end{itemize}
\item Semantic parsing into temporal logic
\item the use of temporal logic in natural language semantics
\end{itemize}


Requirement sentences \(d\) are inherently ambiguous, and we seek the most probable interpretation of the sentence. So the objective function is 
\[
f(d) =  {argmax}_{m} P(m|d)   
\]

We further assume that the requirements language \(\mathcal{L}\) is strongly generated by a probabilistic context-free grammar \(G\), where \(P(t)\) is given by the product of the probabilities of the rules that derive \(t\). A sentence \(d\in \mathcal{L}\) may be derived in different ways, represented by syntactic parse trees, and the trees deriving \(d\) are partitioned by the model they construct, so: \[P(m,d)=\sum_{t\in\{t|leaves(t)=d, model(t)=m\}}  P(t) \]
But we approximate it as
\[P(m,d)\approx\max_{t\in\{t|leaves(t)=d, model(t)=m\}}  P(t) \]
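A toy illustration of the difference between summing over trees and taking only the best tree per model; the tree probabilities below are invented purely for illustration.

```python
from collections import defaultdict

# Toy parse forest for one sentence d: (model, tree probability) pairs.
trees = [("m1", 0.40), ("m1", 0.05), ("m2", 0.30), ("m2", 0.25)]

# Exact objective: P(m, d) = sum of tree probabilities per model.
exact = defaultdict(float)
for model, p in trees:
    exact[model] += p

# Approximation: P(m, d) ~ probability of the single best tree per model.
approx = defaultdict(float)
for model, p in trees:
    approx[model] = max(approx[model], p)

best_exact = max(exact, key=exact.get)     # summing favours m2 (0.55 vs 0.45)
best_approx = max(approx, key=approx.get)  # the max approximation favours m1 (0.40 vs 0.30)
```

Note that the two objectives may disagree, as here: the mass of many mediocre trees can outweigh one high-probability tree.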

%{Estimating Emission Parameters.} 
Our emission parameters  \(P(d_i|m_{i})\) represent the probability of a verbal description given a model snapshot, which, for our present purposes, represents the semantics of the sentence. This semantic representation may result from different syntactic representations of the sentence. We calculate this probability by marginalising out the syntactic trees that gave rise to the same SM snapshot.
\[P(d|m) = \sum_t P (d,t | m) = \sum_t \frac{P(d, t, m)}{P(m)} \]
In order to generate all trees \(t\) that respect \(d,m\), and to assign probabilities to the \(P(t,d,m)\) parameters, we assume a generative probabilistic model over the syntactic structures generating strings in the requirements language. In particular, we assume a probabilistic context-free extension of the context-free grammar developed by Gordon and Harel \cite{?}. Each sentence in the language may correspond to one or more syntactic analyses, but each syntactic analysis corresponds to a single action in constructing the LSC chart. A detailed example of such a semantic construction is provided in \cite{Kugler05}. We calculate the probability of a string as the sum of the probabilities of all the syntactic trees for \(d\) according to the grammar, where the probability of a syntax tree \(P(t)\) is estimated by multiplying the probabilities of the context-free rules that participate in its derivation. 
\[P(d) = \sum_{t} P(d,t) = \sum_{\{t|yield(t)=d\}} \prod_{\{r|r\in t\}} P(r) \]
The parameters \(P(r)\) are estimated from a parallel corpus in which each requirement is paired up with its gold syntactic structure. 
For efficiency reasons we replace the summation over all possible trees with a summation over only the \(N\) most probable trees. Later, we show empirically that as \(N\) grows larger the parser becomes more accurate, though at a price in parsing speed. 
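The N-best truncation can be sketched as follows; the forest and its probabilities are invented for illustration.

```python
import heapq

# Hypothetical parse forest for one sentence: (probability, tree id) pairs.
forest = [(0.30, "t1"), (0.25, "t2"), (0.15, "t3"),
          (0.12, "t4"), (0.10, "t5"), (0.08, "t6")]

def string_probability(forest, n=None):
    """P(d) as a sum over trees; if n is given, keep only the n most probable."""
    trees = forest if n is None else heapq.nlargest(n, forest)
    return sum(p for p, _ in trees)

exact = string_probability(forest)        # sums all six trees
p_at_3 = string_probability(forest, n=3)  # 0.30 + 0.25 + 0.15
# Larger n brings the truncated sum closer to the exact one,
# at the cost of decoding more trees per sentence.
```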



\begin{itemize}
\item \(\hat{P}(m)\): The system model \(m\) represents a complete implementation of classes, methods, properties, and value enumerations. Estimating its probability directly would require us to estimate the probability that computer programs in general possess certain classes, methods, and properties. Since our data set is relatively small, we replace the estimate of the model with an estimate of its size, \(P(|m|)\). We assume that the size of a model per requirement is normally distributed and estimate the distribution parameters \(\mu,\sigma\) from our \(m_i\) examples.
\item \(\hat{P}(m_i,m_{i-1})\): We would finally like to estimate the probability of the intersection of models, which represents the amount of overlap between them. In this work, we replace this parameter with a normalized size of the intersection,
\(\frac{\#(m_i\cap m_{i-1})}{K(m_i, m_{i-1})}\). Note that if we sum over all the intersections we obtain a convolution kernel over \(m_i,m_{i-1}\), that is, the inner product of the two trees, where each tree is represented as a vector of subtrees.
\end{itemize}
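Both surrogates can be sketched in a few lines, assuming hypothetical model sizes and treating a model as a set of named elements; the normaliser \(K\) is instantiated here, purely for illustration, as the size of the union (i.e.\ the Jaccard coefficient).

```python
import math
from statistics import mean, pstdev

# Hypothetical model sizes |m_i| observed per requirement in the seed corpus.
sizes = [4, 6, 5, 7, 5, 6, 4, 5]
mu, sigma = mean(sizes), pstdev(sizes)

def p_size(k):
    """Normal density used as a surrogate for P(|m|)."""
    return math.exp(-((k - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def overlap(m_prev, m_curr):
    """Normalised intersection size; the normaliser K is chosen here
    as the union size, which is one illustrative option among many."""
    return len(m_prev & m_curr) / len(m_prev | m_curr)

m1 = {"Light", "Light.on", "turnOn"}
m2 = {"Light", "Light.on", "turnOff"}
score = overlap(m1, m2)   # 2 shared elements out of 4 in the union
```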

\begin{table}
\centering
\begin{tabular}{|rl||ccc|}
\hline
Synthetic & Real-World & Precision & Recall & F-Score \\
\hline  
100\% &0\% & & & \\
99\% & 1\% & & & \\
50\% &  50\% & & & \\   \hline
\end{tabular}
\caption{Mixing Synthetic and Real World Examples}
\end{table}


\begin{table}
\centering
\begin{tabular}{|r||ccc|}
\hline
Synthetic  & Precision & Recall & F-Score \\
\hline  
100 & & & \\
1000 &   & & \\
10000 &    & & \\   \hline
\end{tabular}
\caption{Synthetic Training and Devset Accuracy}
\end{table}