

\chapter{Relational Networks}
\label{ch04rr}


The modeling strategies discussed so far assume that the probabilistic grammar defines the probabilities of phrase-structure trees, while hand-coded rules encode linguistic constraints (hard or soft) and derive grammatical functions from the trees.  It is also possible to define the probabilistic grammar itself as a set of soft morphosyntactic constraints that ensure a coherent predicate-argument structure by mapping grammatical functions to phrase-structure trees. 
%he benefits of such approach are twofold. Firstly, the functional information will help to improve the accuracy of the prediction. Secondly, 
The predicate-argument structure can then be read off directly from the trees, without any need to post-process the data.  In addition, the functional information can help to improve the quality of the tree prediction.  This is the motivation underlying the Relational-Realizational modeling framework proposed by Tsarfaty \cite{tsarfaty10rr}, which is applied to modeling Semitic phenomena by Tsarfaty et al.\  \cite{tsarfaty08rr,tsarfaty09alternative,tsarfaty10morphosyntactic}.


\section{Representation}


 
 
\section{Modeling}

\subsection{Grammar-Based Modeling}
The Relational-Realizational model assumes a grammar called an RR-PCFG, a tuple
\(RR=\langle \mathcal{T, N, GR}, S\in\mathcal{N}, \mathcal{R}\rangle\), which extends a PCFG with a finite set of grammatical-relation labels \(\mathcal{GR}=\{gr_1,gr_2,\ldots\}\) such as subject, predicate, object, and modifier (\(\mathcal{GR}\cap\mathcal{N}=\emptyset\)).
The set \(\mathcal{R}\) contains three kinds of rules, which alternate between generating structural and functional notions:
\[\mathcal{R}=\mathcal{R}_{projection}\cup\mathcal{R}_{configuration}\cup\mathcal{R}_{realization}\]
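As a sketch, the grammar tuple might be represented as follows. The class name, field names, and dictionary-based rule encoding are assumptions made for illustration; they are not prescribed by the RR formalism.

```python
from dataclasses import dataclass, field

# A minimal sketch of the RR-PCFG tuple <T, N, GR, S, R>, assuming
# string labels for categories and relations (illustrative only).
@dataclass
class RRGrammar:
    terminals: set          # T: terminal symbols (words)
    nonterminals: set       # N: syntactic categories
    relations: set          # GR: grammatical-relation labels, disjoint from N
    start: str              # S, a member of N
    # R = R_projection ∪ R_configuration ∪ R_realization,
    # each mapping a rule (as a hashable key) to its probability.
    projection: dict = field(default_factory=dict)
    configuration: dict = field(default_factory=dict)
    realization: dict = field(default_factory=dict)

g = RRGrammar(
    terminals={"ani", "awhb", "awth"},
    nonterminals={"S", "NP_nom", "NP_acc", "VB"},
    relations={"sbj", "prd", "obj"},
    start="S",
)
assert g.relations.isdisjoint(g.nonterminals)   # GR ∩ N = ∅
```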




\begin{figure}
\center
\begin{tabular}{c}
(a)\scalebox{0.9}{
\Tree[.S 
[.NP_{nom}-\sbj~ {\em ani}\\I ] 
[.VB-\prd~ {\em awhb}\\like ] 
[.NP_{acc}-\obj~ {\em awth}\\her ]
 ]
$\Rightarrow$
\Tree[.S [.\{\sbj,\prd,\obj\}
 [.\sbj~ [.NP$_{nom}$  {\em ani}\\I ]  ] 
 [.\prd~ [.VB {\em awhb}\\like  ] ]
  [.\obj~ [.NP$_{acc}$  {\em awth}\\her ] ] 
   ] ]}\\
(b)\scalebox{0.9}{
%\Tree[.S  VB-\prd~ NP-\sbj~ NB-\obj~ ]
\Tree[.S 
[.VB-\prd~ {\em awhb}\\like ] 
[.NP_{nom}-\sbj~ {\em ani}\\I ] 
[.NP_{acc}-\obj~ {\em awth}\\her ]
 ]
$\Rightarrow$
\Tree[.S [.\{\sbj,\prd,\obj\}
 [.\prd~ [.VB {\em awhb}\\like  ] ]
 [.\sbj~ [.NP$_{nom}$  {\em ani}\\I ]  ] 
  [.\obj~ [.NP$_{acc}$  {\em awth}\\her ] ] 
   ] ]
%\Tree[.S [.\{\sbj,\prd,\obj\} [.\prd~ VB ]   [.\sbj~ NP ] [.\obj~ NP ]  ] ]
}
\end{tabular}
\caption{\small{Generating canonical and non-canonical configurations using the RR model: the S-level CFG productions on the
LHS of (a) and (b) differ. The RR-CFG representations on the RHS of (a) and (b) share the projection and realization parameters and differ only in their configuration.
%the order in which they are realized. (a) is an SVO order, (b) is a  Verb-Initial construction
%The NPs in subject and object position are identical across trees, but differ in their realization parameters.
}}\label{rr:proj}
\end{figure}




Let us assume that \(C_P \rightarrow C_1\ldots C_n\) is a context-free production in the original phrase-structure tree, where each daughter constituent \(C_i\in\mathcal{N}\) bears the grammatical relation \(gr_i\in\mathcal{GR}\) to the parent category.
The set \(\{gr_1,\ldots,gr_n\}\)  is defined as the {\em relational network} (a.k.a.\
 the {\em projection} or {\em subcategorization}) of the parent category. Each grammatical relation \(gr_i@C_P\) is realized as a syntactic category \(C_i\) that carries the morphological marking appropriate to signaling this grammatical relation.
%
The RR grammar articulates
the generation of such a construction in three phases:
\begin{itemize}
\item {\bf Projection:} \(C_P\rightarrow \{{gr_i}\}_{i=1}^n@C_P\)\\
The projection stage generates the function of a constituent as a set of grammatical
relations (henceforth, the relational network) between its subconstituents. 

\item {\bf Configuration:}  \( \{{gr_i}\}_{i=1}^n@C_P\rightarrow gr_1@C_P \ldots gr_n@C_P\)\\
The configuration stage arranges the grammatical
relations in the relational network into the linear order in which they occur on the surface.\footnote{The configuration phase can
place  ``realizational slots''
 between the ordered elements,
signaling periphrastic adjunction and/or punctuation.
Modification may generate more
than one constituent. See the original publication \cite{tsarfaty08rr} for further details.}
\item {\bf Realization:} \({gr_i@C_P\rightarrow C_i}\)\\
In realization, every grammatical relation  generates the
morphosyntactic representation of the child constituent
that realizes the already-generated function. 
\end{itemize}
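The three phases above can be sketched as a toy generation cycle for the SVO clause of Figure (a). The Python encoding and rule entries below are illustrative assumptions, not part of the RR formalism.

```python
# One hypothetical projection-configuration-realization cycle
# for the SVO clause "ani awhb awth" (toy entries, not treebank rules).

PARENT = "S"

# Projection: S -> {sbj, prd, obj}@S
relational_network = frozenset({"sbj", "prd", "obj"})

# Configuration: {sbj, prd, obj}@S -> sbj@S prd@S obj@S (SVO order)
ordered_relations = ("sbj", "prd", "obj")
assert set(ordered_relations) == relational_network

# Realization: gr@S -> C_i, a morphosyntactically marked category
realize = {"sbj": "NP_nom", "prd": "VB", "obj": "NP_acc"}
children = [realize[gr] for gr in ordered_relations]
print(children)   # ['NP_nom', 'VB', 'NP_acc']
```

Generating the verb-initial clause of Figure (b) would reuse the same projection and realization entries and change only the ordered tuple in the configuration step.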

The RR model is a recursive history-based model, in which every constituent node triggers a generation cycle conditioned on that node's category.
% in fact a PCFG, where CFG rules capture the three stages of generation. Every
Every time the projection-configuration-realization
cycle is applied, the probability of the constituent node is replaced with
the product of the probabilities of the three stages:



\begin{center}
 \begin{tabular}{ll}
  \({\bf P_{RR}} (r)=\)  & \\
{\em Projection} &  \({\bf P_{projection}}(\{gr_i\}_{i=1}^{n}|C_P)\) \({\bf \times} \)  \\

{\em Configuration} & \( {\bf P_{configuration}}(\langle gr_0{:}gr_1,gr_1,\ldots,gr_n,gr_n{:}gr_{n+1}\rangle | \{gr_i\}_{i=1}^{n}, C_P)\) \(\times\)\\

{\em Realization} &  \(\prod_{i=1}^{n}  {\bf  P_{realization}}( {C_i} | gr_{i}, C_P)\) \(\times\) \\

& \({\bf P_{adjunction}}(\langle {C_{0_1}}, ... ,{C_{0_{m_0}}} \rangle | gr_{0}:gr_{1}, C_P)\) \(\times \) \\


& \(\prod_{i=1}^{n}  {\bf P_{adjunction}}(\langle {C_{i_1}}, ... ,{C_{i_{m_i}}} \rangle | gr_{i}:gr_{i+1}, C_P)\) \\
\end{tabular}
\end{center}
The first multiplication implements an independence
assumption between grammatical relations and their configurational positions. The second multiplication implements an independence assumption between grammatical positions and the morphological markings of the nodes at those positions. These assumptions are appropriate for languages with flexible word order and rich morphology. The latter multiplications implement independence between the generation of complements and the generation of adjuncts.\footnote{
Discussion of the distinction between complements and adjuncts is omitted here; refer to \cite{tsarfaty10rr} for further details.}
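To make the factorization concrete, here is a toy computation of \(P_{RR}\) for a single generation cycle, ignoring the adjunction terms; all probability values are invented for illustration.

```python
import math

# Toy stage probabilities for the clause in Figure (a); the values
# are invented, not estimated from any treebank.
p_projection    = 0.7                        # P({sbj,prd,obj} | S)
p_configuration = 0.6                        # P(<sbj,prd,obj> | {sbj,prd,obj}, S)
p_realization   = {"sbj": 0.8, "prd": 0.9, "obj": 0.8}

# P_RR = P_projection * P_configuration * prod_i P_realization(C_i | gr_i, S)
p_rr = p_projection * p_configuration * math.prod(p_realization.values())
print(round(p_rr, 4))   # 0.2419
```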

%One of the main advantages of the RR modeling method is that it makes explicit the commonalities and differences we expect to find across syntactic structures. For instance, 
Figure \ref{rr:proj}
shows the RR generation process for the two clauses discussed before. In their simple context-free representation, there is no parameter sharing between the two constructions. In the RR representation, the trees share the {\em projection} and {\em realization} parameters, and differ only in the {\em configuration} parameter, which captures their alternative word-order patterns. By making the commonalities and differences between the clauses explicit, the grammar can generalize to configurations unseen in training. 
%bearing identical RNs that are in turn realized in
%different possible configurations.
% and allows to directly learn relevant
%This factoring of the model helps to generalize morphosyntactic patterns  (agreement, case marking, etc.)  by directly observing them and estimating their probability  from corpus statistics.
% and function underlying the Separation Hypothesis, and the conditioning we articulate captures one possible way to model a systematic many-to-many correspondence.
%This three-step process does not generate functional elements such as auxiliary verbs and particles, or punctuation marks,  which are outside of constituents’ RNs. 
%\begin{itemize}
%\item Projection:\(C_p\rightarrow \{{gr_i}\}_{i=1}^n@C_P\)
%\item Configuration:
%\item Realization:\({gr_i@C_P\rightarrow C_i}\),\({gr_i:gr_{i+1}@C_P\rightarrow C_{i_1}... C_{i_k}}\)
%\end{itemize}



Training RR grammars can be done using the transform-detransform method outlined in Algorithm 3.  It starts from a set of phrase-structure trees in which every node specifies the phrase label, grammatical function, and morphological features of that node. The treebank then undergoes a transformation that (i) separates grammatical functions from their realization, and (ii) separates the position of every relation from its morphological marking. Training then proceeds in the standard way, using maximum-likelihood estimates. Since the resulting grammar assumes independence between the rules, a simple chart parser over word sequences or over lattices can be used to parse unseen sentences.
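The maximum-likelihood step for one rule type can be sketched as relative-frequency estimation over the transformed trees. The event encoding below (parent category paired with its relational network) is a hypothetical simplification of the transformed treebank.

```python
from collections import Counter, defaultdict

# MLE sketch for projection rules: P({gr_i} | C_P) as relative
# frequencies of (parent, relational-network) events extracted from
# transformed trees (the event format here is an assumption).
def estimate_projection(events):
    """events: iterable of (parent_category, frozenset_of_relations)."""
    counts = Counter(events)
    parent_totals = defaultdict(int)
    for (parent, rn), c in counts.items():
        parent_totals[parent] += c
    return {(parent, rn): c / parent_totals[parent]
            for (parent, rn), c in counts.items()}

events = [("S", frozenset({"sbj", "prd", "obj"}))] * 3 + \
         [("S", frozenset({"prd", "obj"}))]
probs = estimate_projection(events)
print(probs[("S", frozenset({"sbj", "prd", "obj"}))])   # 0.75
```

The configuration and realization distributions would be estimated the same way, with events keyed on their respective conditioning contexts.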
% using the resulting Relational-Realizational treebank grammar. 


\subsection{Graph-Based Modeling}
\subsection{Transition-Based Modeling}


%\section{Evaluation}

%\section{Coping with Non-Configurationality}

\section{Summary and Further Reading}
