
\chapter{Introduction to Parsing Morphologically Rich Languages}
\label{ch01}

%Parsing systems  aim to automatically analyze input sentences in natural language, that is, they assign each sentence with a   structure that reflects its human perceived interpretation.  

The automatic syntactic analysis of sentences, or in short, {\em  parsing}, is the task of accepting a natural language sentence as input and returning its underlying structure as output. The syntactic analysis of  the sentence ``Alice likes Bob'', for instance, would reveal  two entities, ``Alice'' and ``Bob'', and  a ``like'' relation between them. ``Alice does not like Bob'' would reveal the same entities with a modified (negated) relation between them.  Parsing reveals crucial information concerning the {\em predicate-argument} structure of sentences, or as it is often informally put: ``who did what to whom, when, where'' and so on.  Parsing is  an important step towards natural language understanding (NLU),  and   parsers are key components in   applications such as question answering, information extraction, and machine translation. 


%The best parsing systems to-date are data-driven and statistical. That is, they  are statistical structure prediction models that are trained on the patterns and frequencies observed in syntactically annotated corpora. State of the art  parsing systems trained on English corpora were shown to parse English texts in  high accuracy, but ross-linguistic evaluation campaigns have shown that  these these systems do not perform as well when they are trained to parse languages with structures and characteristics that are substantially different than English.

Morphologically rich languages (MRLs) are languages such as Arabic, Czech, Farsi, Finnish, Hebrew, Hungarian, Turkish, and many more. The shared characteristic of MRLs is that words are internally  complex,  and each single word (or rather, each  space-delimited token) may carry multiple meaning-bearing elements called {\em morphemes}.   Word-level morphemes express   functional information that is typically expressed using word {\em order} patterns in English.
 %(for instance, that in ``Alice likes Bob" Alice is the sentence's subject and Bob is the object). 
 As a result, MRLs allow for more  flexible word order patterns than English. Complex {\em word structure} and variable {\em word order} are two characteristics that  pose challenges for  parsing models that have been developed with English in mind. %Using such models out of the bix then  impairs the development of  MRL processing  technology. 


The inherent difficulty in the development of   parsers that effectively cope with   MRLs is that this development typically involves profound understanding of the two extreme ends of the computational linguistics/natural language processing (CL/NLP)  cross-disciplinary scale. On the one hand, we aim to model  non-trivial  linguistic structures that involve more complex morphosyntactic interactions than in English. 
%(Perhaps  incidentally, these structures are more complex and  less well-behaved than syntactic structures that we find  in English.) 
On the other hand, prediction of these complex  structures  requires  advanced  methods for  structure prediction, one of the most complicated forms  of prediction, which in turn  relies on  advanced,  and sometimes specially tailored,  learning and decoding algorithms. Developing state-of-the-art solutions for parsing MRLs thus requires  
%structure-prediction systems that can effectively cope with predicting the complex linguistic structures that characterize MRLs requires 
synthesizing these two, profoundly different, domains of knowledge. 
%, alongside binary and multi-class classification. Beyond that, these structures require the development of  algorithms than can efficiently generate and traverse such structures.

This chapter lays the ground for the development of such systems by presenting the linguistic challenges exhibited by MRLs on the one hand, and the technological challenges in building parsing systems on the other.
We start by defining  parsing  as a structure prediction task and identifying   the obligatory components  of  any statistical parsing architecture (representation, learning, decoding, and evaluation).
%and defining each of these components formally.
%This chapter introduces the topic of parsing MRLs (henceforth, PMRL) by juxtaposing the string relation between the complexity of the linguistic task and the technical challenge . The syntactic structure in MRLs is more complex those used for parsing English due to  rich morphosyntactic interaction, which have to  to be adequately represented. This means that advanced structure prediction algorithms have to be developed in order to cope with accurately prediction such structures.
We then outline  important characteristics of   word-level (morphological) and sentence-level (syntactic)  structures  in MRLs, and discuss why these structures are hard to predict. We   outline the overarching challenges in parsing MRLs and present a general strategy for approaching them using {\em joint modeling}, and we   refine the task of  each individual  component in the parsing  architecture accordingly.

% demonstrated in Figure~\ref{ambiguity}.  A parsing system aims to find a single  syntactic analysis reflecting the human perceived interpretation of the sentence. That is, for the sentence ``Time flies like an arrow" we would like to pick the (a) analysis, while for the sentence ``Fruit flies like a banana", though superficially similar, we would pick the analysis reflected in (b). Syntactic ambiguity may be effectively resolved using data-driven  statistical  methods.

% three overarching challenges that have to be addressed when parsing such languages, regardless of the parsing framework used.

%This book builds from the ground up a formal framework that accommodates the intricate structures of MRLs. It covers parsers for different representation types (Phrase-Structures, Dependency-Structures, Relational Networks) and applies both generative and discriminative statistical methodsS
\section{What is Parsing?}
\begin{figure}
\center
\begin{tabular}{ccc}
(a)~\scalebox{0.95}{\Tree[.S [.NP Alice ]  [.VP [.VB likes ]  [.NP Bob ] ] ]}
&
(b)~\scalebox{0.95}{\Tree[.Root [.likes [.subject Alice ] [.object  Bob ] ]  ]}
&
(c)~\scalebox{0.95}{\begin{tabular}{|ll|}
subject: & 
\begin{tabular}{|ll|}
 lemma: & Alice \\
gender: & feminine \\
 number: & singular \\
 \end{tabular}
 \\ & \\
object: &  \begin{tabular}{|ll|}
 lemma: & Bob \\
 gender: & masculine \\
 number: & singular \\
 \end{tabular}
 \\ & \\
predicate: &  \begin{tabular}{|ll|}
 lemma: & like \\
 number: & singular \\
 tense: & present \\
 \end{tabular} \end{tabular}}
 \end{tabular}
 \caption{Syntactic representation alternatives for the sentence ``Alice likes Bob''. (a) is a phrase-structure tree, (b) is a  labeled dependency tree, and (c) is a typed feature structure.}\label{alice}
\end{figure}

A parser is a computer program that takes a sentence in  natural language as input and provides an analysis of its underlying structure as output.  There are different  levels of natural language parsing, on a par with different  levels of linguistic description. {\em Morphological parsing} analyzes word-structure, {\em syntactic parsing} analyzes sentence structure, {\em semantic parsing} analyzes sentence meaning, and so on. 
In CL/NLP, parsing is  typically understood as recovering the {\em syntactic} representation of the sentence, that is, analyzing the ways words are combined to form phrases and sentences.

A {\em  syntactic representation} is a theoretically loaded term; there are as many ways to represent syntactic structures as there are  grammatical  formalisms studied in linguistics. The sentence ``Alice likes Bob'', for instance, may be represented using a {\em constituency tree}, as in phrase structure grammars \cite{chomsky57structures}, using a {\em dependency tree}, as in dependency grammars \cite{tesniere59elements}, or using a {\em feature structure}, as in unification-based grammars \cite{shieber86unification}. All of these representation types capture the same  grammatical notions: that  ``Alice'' is the subject, ``Bob'' is the  object, and ``like'' is the  predicate. We  refer to  these structures using the cover term {\em parse}.

%\end{itemize}
%Traditional grammatical notions such as \emph{subject,
%  predicate, object,} etc., are crucial for semantic interpretation,
%as they capture the \emph{predicate-argument structure} of the
%sentence, or, as is often informally put, ``who did what to whom".
 %If one can identify syntactic entities and establish their semantic
%reference or denotation, then the grammatical relations provide the
%necessary information to determine the sentence's meaning
%The syntactic analysis of a sentence is typically a  graph in which nodes represent entities and arcs represent relations between these entities. 
%It is customary to distinguish between the linguistic data (e.g., a transitive sentence), its format representation (constituency? dependency?) and the annotation theory (label choices, level of nestedness, etc).

% \cite{chomsky57structures} \cite[ch.\
%4]{chomsky57structures}.
%The 

%{Constituency-Based
%Dependency-Based
%Deep-Grammars}





%%%%% \subsubsection{Data-Driven Parsing}

%The best performing parsing systems to date are supervised, data-driven and statistical.
%This means that the system is first presented with a set annotated examples called a {\em treebank} --- a list of sentences annotated with their syntactic parse-trees --- from which it aims to induce a model that can predict the syntactic analysis of unseen sentences. 

% (Figure \ref{fig:arch}). % In a {\em statistical} parsing model, the choice between competing parsing alternatives (i.e., `disambiguation') is made based on corpus statistics.
% then when the system is given a novel sentence as input, it aims to construct an analysis based on the generalization extracted from the training 
%\subsection{Parsing as Structure Prediction}




Formally, we define  parsing as a {\em structure prediction} task. We assume that  the input space \(\mathcal{X}\) is a set of sentences in natural language  and the output space  \(\mathcal{Y}\) is a set of possible parses for sentences in that language.   A parsing system  implements a prediction function \(h\)  such that \(h(x)
\in\mcy\) is a parse for \(x\in\mcx\). 
%A graphical depiction of the architecture is given in Figure~\ref{fig:arch1}.
% \[h:\mcx\rightarrow\mcy\]
\begin{align} % requires amsmath; align* for no eq. number
\label{parser}h:\mcx\rightarrow\mcy
\end{align}
 There are many conceivable instantiations of the  function \(h\).
 If \(\mathcal{X}\) contains all sentences in English and \(\mcy\) is the space of all possible phrase structure  trees of English sentences, then \eqref{parser} defines a phrase-structure parser for English. If  \(\mcx\) is the set  of Swedish sentences and \(\mcy\) is the set of dependency trees for  all sentences in Swedish, then  \eqref{parser} defines a dependency-based parser for Swedish.  

 %Formally this means that each element \(x\in\mathcal{X}\) is potentially mapped to a  set of valid output structures \(y\in\mathcal{Y}\) that represents different  interpretations. We refer to this set as \(\mathcal{Y}_x\subseteq\mathcal{Y}\).  
 
A pervasive problem in   natural language parsing is  ambiguity. Natural language sentences are inherently ambiguous, and may be assigned different syntactic analyses that reflect different interpretations. The sentence ``Time flies like an arrow'', for example, admits the four different analyses in Figure~\ref{ambiguity}, in either representation type. Due to  ambiguity in  language, parsers are required not only to  analyze the sentence but also to {\em disambiguate} it, that is, to select a single syntactic analysis, or  {\em parse}, \(y\in\mcy\) for the given sentence  \(x\in\mcx\).\footnote{It is not uncommon that such disambiguation depends on extra-sentential or even extra-linguistic factors. These factors will not concern us here. Throughout this book we assume that the  linguistic material in the sentence -- words, morphemes, order, lexical material, etc. -- is sufficient for selecting the best analysis. This assumption underlies all state-of-the-art parsing models that have been developed for English.}    
%``Best" is understood here as picking the analysis that most likely reflects the sentence's human precieved interpretation.
%\footnote{It is not uncommon that the best interpretation is dependent on extra-sentential or even extra linguistic factors. These factors will not concern us here. Throughout this book we assume that the  linguistic material in the sentence -- words, morphemes, order, lexical materialsetc -- is sufficient for disambiguation.}   
 Formally this means that each   \(x\in\mathcal{X}\) can be mapped to a set of valid output structures \(y\in\mcy\) that reflect different interpretations. We refer to this set as \(\mathcal{Y}_x\subseteq\mathcal{Y}\).    The  function  \(h\)   then selects the best parse from \(\mcy_x\)  by means of a scoring function.
 \begin{align} 
\label{argmax}
h(x)= \text{argmax}_{y\in\mcy_x} Score(x,y)
\end{align}
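The selection rule in \eqref{argmax} can be sketched in a few lines of Python. This is an illustrative toy: the candidate enumerator and the scoring function below are our own stand-ins for a trained model, not components of any parser discussed here:

```python
def parse(x, candidates, score):
    """Select the highest-scoring analysis y in Y_x for sentence x.

    `candidates(x)` enumerates the admissible parses Y_x;
    `score(x, y)` is the model's scoring function."""
    return max(candidates(x), key=lambda y: score(x, y))

# Toy usage: two competing analyses for an ambiguous sentence,
# with hand-picked scores standing in for a trained model.
def candidates(x):
    return ["analysis-1", "analysis-2"]

def score(x, y):
    return {"analysis-1": 0.3, "analysis-2": 0.7}[y]

print(parse("Time flies like an arrow", candidates, score))  # analysis-2
```

Everything that follows in this chapter amounts to making the two ingredients of this sketch precise: how \(\mcy_x\) is represented and enumerated, and how the scoring function is defined and estimated.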
 
%  In order to pick the most likely interpretation, we assume  {\em supervised statistical parsing architecture} in which we use use a set of  example sentences annotated by human experts to learn a good predictor \(h:\mcx\rightarrow\mcy\) that finds the most likely syntactic analysis of a sentence \(x\in\mcx\).  
  
 

Ideally, we would like to develop a single parsing architecture that can learn to parse sentences in different languages. 
This book thus focuses on {\em data-driven} parsing. We set out to develop a general statistical parsing  model that can acquire a language-specific prediction function \(h\)  automatically when applied to language-specific data, and use it for parsing naturally-occurring sentences in that language.
%How do we go about developing such a system?  

% In practice, such systems exhibit large differences in the performance of the system when it is applied to different languages. One of the goals of this book is to help the bridging this cross-linguistic gap -- by demonstrating how differences in  languages structure and characteristic affect the strategies that have been employed for effective structure prediction.
 

 
\begin{figure}
\begin{center}
\scalebox{0.9}{
\begin{tabular}{cc}
(a)~\Tree[.TOP [.S [.NP Time ] [.VP [.VP flies ] [.PP [.P like ] [.NP [.DET an ] [.NN arrow ] ] ] ] ] ]
 & 
  \Tree[.root [.flies time [.like [.arrow an ] ] ] ]  
\\
(b)~\Tree[.TOP [.VP [.V Time ]   [.NP flies ]  [.PP [.P like ] [.NP [.DET an ] [.NN arrow ] ] ] ] ] 
 & 
\Tree[.root [.time  flies [.like  [.arrow an ] ] ]   ]
\\
(c)~\Tree[.TOP [.NP [.NP [.NN Time ] [.NN  flies ] ]   [.PP [.P like ] [.NP [.DET an ] [.NN arrow ] ] ]  ] ]

 & 
 \Tree[.root [.like [.flies  time ]  [.arrow an ] ] ]   
 \\
(d)~ \Tree[.TOP [.S [.NP [.NN Time ] [.NN  flies ] ]  [.VP [.V like ] [.NP [.DET an ] [.NN arrow ] ] ]  ] ]
 
  & 
  \Tree[.root [.flies  time [.like  [.arrow an ] ] ]   ]
  \end{tabular}}
\end{center}
\caption{Syntactic ambiguity in the sentence ``Time flies like an arrow''. Each row represents a single interpretation: the left-hand side shows a phrase-structure tree and the right-hand side shows a dependency representation. The trees in different rows reflect different interpretations; the two trees in each row represent the same interpretation.}\label{ambiguity}
\end{figure}

\subsection{Representation}

The development of any model for structure prediction must begin with a formal definition of   the  input and the output spaces, \(\mathcal{X}\) and \(\mathcal{Y}\), respectively. For the time being, we assume the standard definition of  the input space \(\mathcal{X}\)  used  in most frameworks for parsing English (we alter  this definition  later in this chapter). 
%
Let  \(\Sigma\) be a finite vocabulary of English words (or, a {\em lexicon}). The input space  for \(h\) is defined to  contain all finite sequences over this alphabet,  \(\mathcal{X}=\Sigma^*\). According to our formal definition, the output space \(\mathcal{Y}\) contains syntactic structures for  all sentences in \(\Sigma^*\),  but what do these  structures look like?
 %In fact, the  syntactic  representations provided by a parser \(y\in\mcy\)  crucially depends on the  representation type that is chosen by the parser developer. 


%The term `syntactic structure' is  theoretically loaded. A widely accepted common wisdom in linguistic theory is that the  syntactic structure  of a sentence reflects a systematic correspondence between the sentence form and the sentence's meaning. Researchers in linguistics have  developed  different formal frameworks for representing such structures.\footnote{This effectively means that our development of a  parsing  model employs an implicit assumption that {\em there is an underlying structure to be recovered}, and that this structure  is relevant for recovering the sentence's interpretation. This a necessary for developing mechanism for structure learning.} The  syntactic  representation provided by a parser \(y\in\mcy\)  thus crucially depends on the formal  representation type chosen by the parser developer.  Let us  survey  common choices.


 



%And perhaps also provide hints for more complex relations (that is, that ``tuna" is the {\em external subject}of the predicate ``eating".
When analyzing the syntactic structure of a sentence we would like to   identify the {entities}  in the sentence and the  {relations} between them. For instance, in  the sentence ``My cat likes eating tuna'', we want to identify ``My cat'' and  ``tuna'' as the main entities, and to reveal a \emph{subject} relation between ``My cat''  and the verb ``likes'',   a  {\em complement}  relation between ``eating'' and ``likes'', and an   {\em   object} relation between ``tuna'' and ``eating''. 

One way to represent this information is  by means of a {\em constituency tree} \cite{bloomfield33language, chomsky57structures},
a linearly-ordered tree whose  terminal nodes are the input words and whose   non-terminal (internal) nodes are labeled spans. Phrase-structure trees are constituency trees wherein the spans are labeled with phrase types, such as Noun Phrase (NP), Verb Phrase (VP),  Sentence (S), and others.\footnote{See Table \ref{tag-gc} for the standardly used syntactic categories.} %\footnote{The resulting structure is also known  as phrase-structure trees \cite{chomsky57structures}.} 
%
The phrase-structure tree for   our example sentence  is illustrated in Figure \ref{ex:pstree}(a).  
We discuss phrase-structure parsing at length   in Chapter \ref{ch02const}.

A different way to represent syntactic structures is by means of {\em dependency trees}.
%which follow on the rich linguistic tradition of \cite{tesniere59elements} and the prague School. 
A dependency tree is composed of arcs, each of which connects two words in the sentence. Arc labels define the type of grammatical relation between the connected words. A dependency  tree spans all the words in the sentence and an artificial root.   A dependency tree for  our  example   is illustrated in Figure~\ref{ex:pstree}(b).  We discuss dependency parsing systems in Chapter \ref{ch03dep}.
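For concreteness, a labeled dependency tree can be encoded very compactly. The following toy Python sketch (the array encoding, with 1-based word indices and 0 for the artificial root, is our own illustrative choice, not a format prescribed here) encodes the tree for ``My cat likes eating tuna'':

```python
sentence = ["My", "cat", "likes", "eating", "tuna"]
# heads[i] is the head of word i+1 (1-based); 0 denotes the artificial root.
heads  = [2, 3, 0, 3, 4]
labels = ["det", "sbj", "root", "xcomp", "obj"]

def arcs(heads, labels):
    """Return the labeled arcs as (head, dependent, label) triples."""
    return [(h, d + 1, lab) for d, (h, lab) in enumerate(zip(heads, labels))]

for head, dep, lab in arcs(heads, labels):
    print(head, dep, lab)
```

Since every word has exactly one head, the head array and label array together fully determine the tree.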
 %

Both constituency structures and dependency structures express a set of grammatical  relations, such as {\em subject of, object of, complement of,} and {\em modifier of}.
%These relations are called {\em grammatical relations} or {\em grammatical functions}. 
Dependency structures deliver an explicit representation of these  relations as labeled arcs. In constituency trees, these relations are typically  marked as ``dash features'' on the internal nodes. {When the grammatical relations are not specified explicitly, 
  it is common to derive the grammatical relations based on tree positions. For instance, in English it is customary to identify  the leftmost NP under S as the sentence {\em subject},  the  rightmost NP daughter of a VP as the {\em object}, and similarly to define rules for inferring all other relations.}
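This position-based heuristic can be sketched in a few lines; the tuple encoding of trees and the function name below are our own illustrative choices:

```python
def subject_of(tree):
    """Find the leftmost NP daughter of the first S node: the heuristic
    'subject' position for English described in the text.
    A tree is (label, children); children are subtrees or word strings."""
    label, children = tree
    if label == "S":
        for child in children:
            if isinstance(child, tuple) and child[0] == "NP":
                return child
    for child in children:
        if isinstance(child, tuple):
            found = subject_of(child)
            if found is not None:
                return found
    return None

tree = ("S", [("NP", ["Alice"]),
              ("VP", [("VB", ["likes"]), ("NP", ["Bob"])])])
print(subject_of(tree))  # ('NP', ['Alice'])
```

Analogous rules (e.g., the rightmost NP daughter of a VP for the object) can be implemented in the same traversal style.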

We can  express the information that is shared between these two representation types   by means of a {\em function tree}  --- a constituency tree that  identifies the meaningful spans in the sentence and labels each phrase with its grammatical relation to the dominating phrase. 
%The information concerning  grammatical relation between spans is common to both representations.  
The function tree in Figure~\ref{ex:pstree}(c) represents the spans and grammatical relations that are shared between the phrase structure in (a) and the dependency structure in (b).
%\footnote{Different studies use different terminology. Grammatical relation labels are also often referred to as {\em grammatical functions}, or even just {\em dependency types}, but they all refer to the same linguistic information.}  



Constituency trees  and dependency trees are commonly used in the development of statistical parsing  systems, but they are by no means the only  possible alternatives. Additional representation types that have been developed by linguists
 include Lexical-Functional Grammar (LFG,
  \cite{bresnan00lfg}), Head-Driven Phrase-Structure Grammar (HPSG, \cite{sag99hpsg}),  Combinatory Categorial Grammar (CCG, \cite{steedman96surface}), and others.
  %\footnote{The latter has been proven particularly useful for recovering a semantic interpretation directly from sentence form, see \cite{x}.}  
  Developing parsing models for these representation types requires deep linguistic understanding and would take us far afield.  However, statistical methods have been effectively applied to developing parsers based on these formalisms (for LFG \cite{cahill08lfg}, HPSG \cite{miyao08cl}, CCG \cite{hockenmaier02models,cc07cl}, etc.), and such strategies may certainly benefit from the morphosyntactic modeling strategies we discuss in this book.
  
%This book then focuses on constituency-based and dependency-based parsing systems, and on novel extensions that have been developed to address  morphosyntactic modeling in statistical frameworks. The question that  would be most relevant for all representation types we discuss, is:  how should existing represented types be extended, if at all, in order to cope with the complex output structures that characterize MRLs?

 
 \begin{figure}
 \begin{center}
 \begin{tabular}{cc}
(a) & \scalebox{0.9}{\Tree[.-ROOT- [.{S-root} [.NP-sbj [.DT-det {\bf My} ] [.NN-hd {\bf cat} ] ]   [.VP-prd [.VB-hd {\bf likes} ] [.S-xcomp [.VP-prd [.VBG-hd {\bf eating} ]  [.NP-obj [.NN-hd {\bf tuna} ] ] ] ] ] ] ]}
%(b)~\Tree[.\sign{S} \qroof{Five committee members}.\sign{NP}   [.\sign{VP} [.\sign{V} read ] [.\sign{NP} [.\sign{D} this ] [.\sign{N} book ] ] ] ]
\\ \\
(b) & \scalebox{0.9}{\Tree [.-ROOT- [.prd [.{\bf likes} [.sbj  [.{\bf cat} [.det {\bf My} ] ]  ] [.xcomp  [.{\bf eating} [.obj {\bf tuna} ] ] ] ] ] ]
%\Tree[.\sign{S} [.\np~ I ]   [.\vp~ [.\vb~ read ] [.\np~ [.\dt~ this ] [.\nn~ book ] ] ] ] }
%(b)~\Tree[.\sign{S} \qroof{Five committee members}.\sign{NP}   [.\sign{VP} [.\sign{V} read ] [.\sign{NP} [.\sign{D} this ] [.\sign{N} book ] ] ] ]
%(a)~\Tree[.-ROOT- [.{root} [.sbj [.det All ]  cats  ]   [.pred [.VB like ]  [.com  eating   [.dobj tuna ] ] ]  ] ] ]  ] ] ] ]}
} \\
\\
(c)~ & \scalebox{0.9}{\Tree[.-ROOT- [.{root} [.sbj [.det {\bf My} ]  {\bf  cat}  ]    {\bf likes}  [.xcomp   {\bf eating}  [.obj {\bf tuna}  ] ] ] ]}
\end{tabular}
\end{center}
\caption{The syntactic representation of natural language sentences: a phrase-structure tree (a), a dependency tree (b), and a function tree (c) for the  sentence ``My cat likes eating tuna''.}\label{ex:pstree}
\end{figure}


 \subsection{Modeling}
 
We defined a parser as a structure prediction function \(h\) that automatically assigns a single parse tree  to an input  sentence in some language.
Due to inherent syntactic ambiguity,
  %the parser  does not only analyze a sentence, but also  {\em disambiguates} it, that is, 
 the parser has to select the best   analysis from a set of possible candidates  \(\mcy_x\).
% for a  sentence \(x\in\mcx\) that best reflects its human precieved interpretation. 
%This ambiguity will be reflected regardless of any of the syntactic representation types; four different admissible analysis for the sentence "Time flies like an arrow", may be expressed in both constituency-based in dependency-based terms. 
The choice between the  candidates is guided by a scoring function: \[h(x)= \text{argmax}_{y\in\mcy_x} Score(x,y)\]
A parsing {\em model}  determines   the form of the scoring function  \(Score(x,y)\). 
% that performs syntactic analysis and disambiguation, by selecting the highest scoring analysis from all the permissible analyses for a  sentence.
%\begin{align} 
%\label{score}
%h(x)= \text{argmax}_{y\in\mcy_x} Score(y;x)
%\end{align}

%The scoring function  \(Score(y;x)\) is designed to reflect how good the structure \(y\) is as a syntactic analysis of the given sentence \(x\). 

%In order to define a scoring function over a joint event  involving two complex objects  \(x\in\mcx,y\in\mcy\),  we aggregate scores of smaller events that characterize this pair --- these may be parts or  input sentence \(x\),  parts of the output structure \(y\), or properties of the correspondence between the two. 

 %
 In this book, we focus on linear models that define linear scoring functions.
 \[Score(x,y) = {\bf w}^T {\bf g}(x,y)\]
Let us define the scoring function more formally.
\begin{exe}
\ex
Let \({\bf g}\)  be a function mapping each (input,output) pair to an \(n\)-dimensional  vector. 
We  call {\bf g} a  {\em feature function} and we refer to each \(g_i(x,y)\) as a  {\em feature} of the model.
 \begin{align} 
\label{fung}
{\bf g}: \mcx \times \mcy \rightarrow \mathds{R}^n
\end{align}
\ex Let \({\bf w}\in \mathds{R}^n\) be a parameter vector of the same dimensionality. 
We call {\bf w} a weight vector, where each \(w_i\) represents the weight of the feature \(g_i(x,y)\).
\ex  We can now define the scoring function as  the dot product of the feature vector and the parameterized weight vector. Our prediction is now given by 
\begin{align} 
 h(x) =\argmax_{y\in\mcyx} {\bf w}^T {\bf g}(x,y)
\label{linear}
\end{align}
\end{exe}
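The linear prediction rule in \eqref{linear} can be made concrete with a toy Python sketch; the two features and the hand-set weights below are our own illustrative assumptions, not features of any parser discussed in this book:

```python
def g(x, y):
    """Toy feature function g: X x Y -> R^2."""
    return [1.0 if "(NP" in y else 0.0,   # does the parse contain an NP?
            float(len(x.split()))]        # sentence length

def score(w, x, y):
    """Linear score w^T g(x, y): dot product of weights and features."""
    return sum(wi * gi for wi, gi in zip(w, g(x, y)))

def h(x, candidates, w):
    """Prediction rule: argmax over the candidate set Y_x."""
    return max(candidates, key=lambda y: score(w, x, y))

w = [2.0, 0.1]
Y_x = ["(S (NP Alice) (VP likes (NP Bob)))", "(FRAG Alice likes Bob)"]
print(h("Alice likes Bob", Y_x, w))
```

Here the first candidate fires the NP feature and therefore receives the higher score; changing {\bf w} changes which candidate wins, which is exactly the sense in which the weight vector parameterizes the scoring function.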


The syntactic representation \(y\) is a complex structure.
The strategy we employ here  is  to represent the complex structure by means of smaller pieces (factors or features)  of the overall representation. The function {\bf g} selects the features or parts of the structure that are relevant for the prediction, and  the vector {\bf w} quantifies the importance of each of the features in making good predictions.    Using a linear representation  has the additional advantage of allowing us to straightforwardly use  ready-made machine learning algorithms for learning the model parameters.


Our {\em linear} parsing model can thus be abbreviated as a tuple  \(M\), where \(\mcx,\mcy\)  define the input and output spaces,  {\bf g} is a  feature function, and {\bf w} is a parameterized {\em weight vector}.
 %mapping any input-output pair into an ordered set of observed features, and a vector 
% {\bf w} is a parameterized {\em weight vector}.
\begin{align} 
\label{model}
M = \langle \mcx, \mcy, {\bf g}, {\bf w} \rangle
\end{align} 
%Where \(\mcx,\mcy\)  are the input and output spaces, {\bf g} is a  feature function} and  {\bf w} is a parameterized {\em weight vector}.
%

We already discussed the choice of \(\mcx,\mcy\). How should we choose the function \({\bf g}(x,y) \) such that the model makes accurate predictions? In actuality, the feature function \({\bf g}(x,y)\) relies on our knowledge of the domain and often complements the formal representation choices that we have made. Let us take a few commonly used examples:
 
\begin{itemize}
\item {\bf Grammar-Based Modeling.} A  grammar is a formal  system consisting of rules that can be applied sequentially in order to derive a syntactic parse tree. In  grammar-based parsing systems, the model  features are grammar rules. The function \({\bf g}(x,y)\)  maps every syntactic parse to the set of rules that are observed in  its tree derivation, and the model parameters {\bf w} are the weights of the grammar rules.  
%We discuss grammar-based modeling in Chapter \ref{ch2const}.%as observed in gold trees
% according to their occurrences in the treebank when viewing gold parse trees as 
%encoding tree derivations. 

%When using generative modeling the decoding algorithm  is often based on dynamic programming, and it is designed to efficiently build, pack and explore the space of possible derivations of a sentence.  Statistical models that are based on probabilistic context-free grammars  are an example of a simple generative parsing architecture that lies at the backbone of many  broad-coverage parsers to date. We discuss such systems for constituency parsing  in \textsection\ref{sec:const}.
    
\item {\bf Transition-Based Modeling.} A transition-based system is a formal machine that defines states and possible transitions between them. 
Every transition between states  dictates an action that can be applied for constructing a parse tree. 
%The transition system defines a start state and a finish state, and transitions e
Every transition sequence that starts in an initial state and ends at a final state corresponds to a set of actions that constructs a permissible parse. The model feature vector \({\bf g}(x,y)\) describes state-transition pairs.  The model parameters are  the weights of each  transition choice in each possible state. %We  study transition-based modeling in Chapter \ref{ch03dep}.
% pair in the sequence defining the tree construction. 
%Transition-based parsers can be made very efficient. For instance, it is possible to construct a decoding algorithm can predict a dependency parse tree in time complexity linear in the length of the input (\(O(n))\).
 %If the algorithm explores various paths at each state, the complexity may be higher.  We discuss varieties of transition-based dependency parsing for Semitic languages in \textsection\ref{sec:dep}.

\item {\bf Graph-Based Modeling.} In graph-based systems the model  features are pieces (factors) of the graph representation of the parse-tree. The  function  \({\bf g}(x,y)\) maps the graph to a set of its subparts (factors) and properties, and the weights {\bf w} are assigned directly to the graph factors.
% and features.\footnote{In cases that grammar rule constitute substructures of the tree, grammar based methods may be subsumed under graph-based methods. In practice, this is not always the case, and we going to discuss these different types of models separately.}  We  discuss graph-based modeling in Chapter \ref{ch03dep}.
\end{itemize}
Technically, {\bf g} maps a  novel complex  event to a set of smaller events that may be observed in our data. The vector {\bf w} regulates the contribution of each of the  features to the prediction of the overall structure. In principle, every value assignment to the vector \({\bf w}\) yields a different scoring function, and thus delivers different empirical predictions.
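To illustrate the graph-based case concretely, the following toy Python sketch factors a dependency tree into first-order (single-arc) factors and scores the tree as the sum of per-arc weights; the feature templates and weight values are our own illustrative assumptions:

```python
from collections import Counter

def g(x, y):
    """Map a dependency tree y, given as (head, dependent, label) arcs over
    the words of x (1-based indices, 0 = artificial root), to a sparse
    count vector of arc factors."""
    words = ["*root*"] + x.split()
    return Counter((words[h], words[d], lab) for h, d, lab in y)

def score(w, x, y):
    """Arc-factored score: the tree score is the sum of its arc weights."""
    return sum(w.get(f, 0.0) * c for f, c in g(x, y).items())

x = "Alice likes Bob"
y = [(2, 1, "sbj"), (0, 2, "root"), (2, 3, "obj")]
w = {("likes", "Alice", "sbj"): 1.5,
     ("*root*", "likes", "root"): 1.0,
     ("likes", "Bob", "obj"): 1.25}
print(score(w, x, y))  # 3.75
```

Because the score decomposes over arcs, a decoder can search for the highest-scoring tree by reasoning about one arc at a time, which is what makes graph-based factoring attractive.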

The model \(M\) is thus a theoretical construct that defines   possible input  and output structures,  and a set of scoring functions. We would like to pick a scoring function that best reflects our data. In order to implement a parsing system we  need to complete the following  tasks: find the best {\bf w} assignment, search for the best scoring candidate, and evaluate the result of the prediction. 
%
Every parsing system  then relies on three algorithms:
 \begin{itemize}
 \item {\bf Learn:}
  the learning algorithm  accepts a model \(M\) and a set of  data as input, and returns a weight vector assignment \(\omega\in{\bf w}\) as output.
 \item {\bf Decode:}
the decoding algorithm   accepts a model  \(M\),  a weight vector assignment \(\omega\in{\bf w}\)  and a sentence \(x\in\mcx\) as input and returns a syntactic tree \(y\in\mathcal{Y}_x\) as output.
 \item {\bf Evaluate:} an evaluation algorithm accepts a predicted structure and a gold structure for \(x\in\mcx\) as input, and returns  the accuracy of the prediction as output.
 \end{itemize}
% The resulting parsing architecture,  from a birds eye view, is shown in Figure~\ref{fig:arch3}.
The resulting parsing architecture is depicted in Figure~\ref{fig:arch2}. 
The general workflow can be defined as follows. A set of data enters a learning phase in which the relevant features are observed and the model parameters are estimated. The parameters are passed on to the decoding phase, which accepts a new sentence as input. Based on the parameters and the formal constraints on the representation, the decoder finds the highest-scoring analysis and provides it as output. 
We elaborate on the three types of algorithms in turn.




\begin{figure}
\center
\scalebox{0.55}{
\includegraphics{system-01-02}
}
%sentence \(\rightarrow\)\begin{tabular}{c}
%\fbox{\begin{tabular}{c} \ \fbox{parsing} \\  \(\uparrow\) \\\fbox{training}\\\end{tabular} }\\ \(\uparrow\) \\ treebank
%\end{tabular} \(\rightarrow\) \begin{tabular}{c}parse-tree\\ \(\downarrow\)  \\  \fbox{eval} \\ \(\uparrow\) \\ gold-tree\end{tabular}\(\rightarrow\) evaluation scores
\caption{An Architecture for Statistical Parsing and Evaluation from a Bird's-Eye View}
\label{fig:arch2}
\end{figure}

% Every instantiation of the weight vector {\bf w} defines a different prediction function that respects the formal constraints and modeling assumptions of \(M\). The following list contains  examples for  abstract models that are commonly  used in  statistical parsing.

%The tuple \(M\) then encapsulates our formal assumptions, i.e., what formal constraints \(x,y\) must adhere to, and our {\em linguistic} assumptions, i.e., what subparts of the (input, output) pair and the correspondence are relevant for designing scoring function that performs good predictions. \(M\) also defines the goals of  statistical learningL what parameters have to be estimated.





 %The function \({\bf g}(x,y)\) that selects relevant features of the input sentence \(x\),  the output structure \(y\), and the correspondence between the two, The vector \({\bf w}\) is a parameter vector, (or a {\em weight} vector) of the same dimensionality, defining the weight of each element in {\bf g}. 
 
 
\subsection{Learning}
A parsing  model \(M\)  determines the form of the input and output objects \(x\in\mcx\) and \(y\in\mcy\),  the form of the feature function \ffn~ and the   model parameters \wv.
\[M = \langle \mcx, \mcy, \ffn, \wv \rangle\] 
We can say that the model \(M\) defines a hypothesis class \(\mathcal{H}\).
% and a set of model parameters {\bf w}. 
%Each instance of the model \(m\in M\)   represents 
Each  specific  value assignment   \(\omega\) to the vector \wv~ instantiates a specific  hypothesis \(h\in\mathcal{H}\) that can be used for prediction. 
% represents a single prediction hypothesis \(h\in\mathcal{H}\).
 % that obey the same formal constrains and modeling assumptions. In order to use this model for prediction we will require a specific value assignment for the parameter vector \({\bf b}\in R^n\).
  % Each instance in \(M\) may be thought of as a particular   value assignment to the weight vector {\bf w}.The task of the learning algorithm is to pick and instance of the model \(m\in M\)  which represents a specific value assignment   \(\omega\)
%We define model instances \(m\in M\) as a tuple,  representing the different hypotheses in the class.
%\[m = \langle \mcx, \mcy, {\bf g}, \omega \rangle\] 
We overload the \(h\) notation to introduce the specific   assignment \(\omega\) in the scoring function.
\[h(x,\omega)=\argmax_{y\in \mcyx}  \omega^T  \ffn(x,y)\]
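Assuming, for illustration only, that the candidate set \(\mcyx\) is small enough to enumerate explicitly (real decoders avoid this enumeration), the prediction function \(h\) can be sketched in a few lines. The feature function below, counting head-tag/dependent-tag pairs of a dependency tree, is a hypothetical example.

```python
# A sketch of h(x, omega) = argmax over an explicitly given candidate set.
from collections import Counter

def g(x, y):
    """Sparse feature vector: counts of (head-tag, dependent-tag) pairs
    read off a tree y (a list of head->dependent index pairs) over tags x."""
    return Counter((x[h], x[d]) for h, d in y)

def h(x, omega, candidates):
    """argmax_{y in Y_x} of omega . g(x, y), with Y_x enumerated explicitly."""
    return max(candidates,
               key=lambda y: sum(omega.get(f, 0.0) * c
                                 for f, c in g(x, y).items()))
```

For instance, a single weight rewarding a verb-headed noun attaches the noun under the verb rather than the reverse.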

%In principle, the vector \(\omega\) could be set arbitrarily or based on some expert knowledge.  
In  statistical settings, \(\omega\) is {estimated} from data.
We assume a data set of \(k\) sentences in the desired language, each paired with its correct syntactic tree (i.e., its {\em gold} tree). \begin{align} 
\label{eq:argmax}
\mathcal{D}=\{(x_i,y_i)\}_{i=1}^{k}
\end{align}
We refer to \(\mathcal{D}\) as our {\em treebank}. 
The learning task is  now more accurately defined as follows:
\begin{exe}
\ex Given a model \(M\) and a treebank \(\mathcal{D}\),   pick a    parameter value assignment \(\omega\) that would make an optimal prediction with respect to the empirical observations in  \(\mathcal{D}\) and the formal constraints in \(M\).
\end{exe}

When the examples are not annotated (\(\data=\{(x_i,\emptyset)\}_{i=1}^k\)), the learning is {\em unsupervised}. When only \(l<k\) examples are annotated,  
or when the example sentences are annotated with an incomplete output representation, the learning is {\em semi-supervised}.
In this book we concentrate  on {\em supervised} learning, where our data is correctly and completely annotated, and we  occasionally resort to semi-supervised methods that can utilize unannotated examples ---  which are more cheaply available.




% respecting the formal constrains defines by \(\mathcal{X}, \mcy\) and the linguistic assumptions underlying {\bf g}.

%We can now define t statistical learning as follows. Given a  corpus   \(\mathcal{D}\)  of  sentences annotated with their correct syntactic representation, a the formal definition of a model \(M\) we would like to  define  relevant  events or features of the syntactic representation, and learn weights for these events from observations in  \(\mathcal{D}\), in a way that maximizes a certain objective -- for instance, minimizing an expected error on the structure. Given these weights, the parsing algorithm has to efficiently traverse all possible syntactic trees  for the given input sentence and return the highest scoring analysis according to the learned model.
 

Different learning methods differ in what they define to be an ``optimal'' assignment.
All learning methods assume a learning objective \(L(\data,M)\) that the learning procedure aims to optimize.
Supervised learning comes in (at least) two flavors, {\em generative} and  {\em discriminative}, differing in how they define the objective function for optimization.
%\footnote{Strictly speaking, probabilistic methods may be either generative or conditional, depending on the kind of distribution that they are assumed to impose on our data. We retain the term "generative" for both, to comply with existing terminology in machine learning.}
%\footnote{Both forms of these forms of learning are {\em parametric}, we assume that we know which parameters we need to estimate and how many of them we have. The are also non-parametric learning methods, that is, when we do not know up front the number of model parameters, or when we do not have model parameters at all (example-based learning, k-nearest neighbor). We do not discuss such methods here.}
%This methods differ in their definition of "optimal", and in they rely on different modeling assumptions.
% Let us briefly distinguish the two.
 
\begin{itemize}
\item {\bf Generative  methods}    assume that the training data \data has been generated by a    probability distribution.   These learning algorithms aim to maximize the likelihood of the data under this modeling assumption. Assuming that all examples are independent, the likelihood of the data is the product of the probabilities of the individual examples.
% , which is in turn equivalent to  maximizing sum of their log-likelihood.
\begin{align} 
\label{learn-gen}
\argmax_{\omega} L(\mathcal{D}; M) & = \argmax_{\omega}  \text{likelihood}(\mathcal{D};\omega) \\
    &  = \argmax_{\omega}  \prod_{i=1}^k P (x_i,y_i;\omega ) 
%\\   &  = \argmax_{\omega}   \sum_{i=1}^k \log{} P(x_i,y_i;\omega )
\end{align}
%When we maximize this term we in fact seek estimates that are {\em unbiased} and {\em consistent}, that is,  as the data grows to infinity the estimates to converge to the true probabilities.  
We elaborate on the use of these methods for estimating the parameters of  grammar-based models in Chapter~\ref{ch02const}.
\item {\bf Discriminative methods.}  In discriminative methods, we do not assume anything about the input sentences \(x\in\mcx\); we model, for any particular \(x\), which outcomes are good.  The objective function  consists of  a loss function over the data for each parameter assignment, and a term that captures model complexity under this assignment (regularization). When we minimize this objective function we minimize the empirical loss while retaining a model that is as simple as possible.
\begin{align} 
\label{learn}
L(\data; M) =   \text{loss}(\omega, \mathcal{D}) + \text{complexity}(\omega)
\end{align}
%There are different ways to define the loss and complexity terms, again respecting different assumptions about the data.  
We will demonstrate the different uses of  discriminative learning methods in parsing in  each of the following chapters.
\end{itemize}
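As a concrete instance of the generative flavor, maximum-likelihood estimation for a grammar-based model reduces to relative-frequency counting: the probability of a rule is its count divided by the count of its left-hand side. The following sketch assumes the treebank has already been read off as a flat list of productions; the encoding is illustrative.

```python
# Relative-frequency (maximum-likelihood) estimation of rule probabilities.
from collections import Counter

def mle(rules):
    """rules: a list of (lhs, rhs) productions read off the treebank trees.
    Returns P(rule) = count(rule) / count(lhs)."""
    rule_counts = Counter(rules)
    lhs_counts = Counter(lhs for lhs, _ in rules)
    return {rule: count / lhs_counts[rule[0]]
            for rule, count in rule_counts.items()}
```

With two occurrences of VP \(\rightarrow\) V NP and one of VP \(\rightarrow\) V, the estimates are 2/3 and 1/3 respectively.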
 
%Generative methods have an {\em explanatory} advantage -- they  take into account both the structure of the input and the structure of the  output. Discriminative methods do not aim to explain anything about the input structure, they model directly the end result of the classification.
% The specific form of the learning function depends on the modeling assumptions we encoded in \(M\). If we assume a generative probabilistic framework, the learning function depends on the definition of the probability of the candidate structures. It also implicitly depends on the assumption that the data has been generated by this particular distribution -- and assumption that is not always empirically accurate. 
% If we assume a discriminative framework, our expression depends on the form of the loss function and on term that quantify model complexity (often referred to as a regularization term).

When the learning objective \(L(\data;M)\) is well behaved (e.g., it is convex) we can solve it analytically and get an  {\em exact}  solution. If it is not well behaved, and cannot be solved analytically, we need to resort to {\em approximate} methods. 

Learning techniques may also be classified according to the learning objective.  {\em Local} learning procedures  focus on optimizing a set of local decisions that are used when constructing the overall parse. Each decision may be reduced to a multiclass classification problem. {\em Global} methods, in contrast, are trained to optimize the score of the entire structure, where the number of potential outcomes is in principle unbounded. This is done by repeatedly calculating scores for the entire structure under each parameter assignment.
% which repeatedly calculate empirical loss, or empiricla risk and improve the estimates until reaching a desired  criterion. This may involve multiple training passes over the data and may be computational expensive. 

An additional question we can ask about learning techniques is whether they are  {\em batch} or {\em online}.   In online methods, the result of the  prediction in training epoch \(i\) is used for optimizing the parameter vector in epoch \(i+1\), and we do so repeatedly. Batch methods  do not use the result of the prediction in each round, though they may rely on other inference algorithms to calculate the empirical loss or regularization terms. 
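A single online update in the perceptron style (a sketch, not a full training procedure) moves the weights toward the gold features and away from the predicted ones; the sparse feature dictionaries are assumed to come from a feature function such as {\bf g}.

```python
# One perceptron-style online update: omega += g(x, y_gold) - g(x, y_pred).
def perceptron_update(omega, feats_gold, feats_pred):
    """Mutates and returns the sparse weight dictionary omega."""
    for f, c in feats_gold.items():
        omega[f] = omega.get(f, 0.0) + c
    for f, c in feats_pred.items():
        omega[f] = omega.get(f, 0.0) - c
    return omega
```

Features present only in the gold parse gain weight, features present only in the erroneous prediction lose weight, and shared features cancel out.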

In the chapters to follow we focus on the model aspects that have to be taken into account when selecting a learning method, and on designing feature functions accordingly. A   classification of the  learning methods that we use in this book is found in Table~\ref{tab:learn}.\footnote{This book does not go into the details of  deriving the different estimates or estimation procedures, as these are not  relevant to our discussion, and they may be studied  elsewhere.  We briefly present the relevant algorithms in the context of their use, and discuss the details of their application for parse-tree prediction. 
}
%We look at learning methods from the other way round, after defining an abstract model, 

%A pre-condition for statistical estimation is a  set of  example sentences in the particular language. When the sentences are paired up with their correct and complete syntactic representation, we are concerned with supervised learning. When we have no annotated examples whatsoever we are concerned with unsupervised  If our data is incomplete, that is, when the examples are annotated with only a partial representation with respect to the predicted structure, or if only a subset of our data is annotated, we are concerned with semi-supervised learning.  In this book we focus mainly on supervised learning, with occasional recourse to semi-supervized method that would allow us to leverage un-annotated data).
%When we have no annotated examples whatsoever (strictly speaking, when \(\mathcal{D}=\{(x_i)\}_{i=1}^{k}\), we are concerned with unsupervised learning


 %A different axes along which learning algorithms may be characterized is {\bf batch} versus {\bf online} learning. 
 
 
\begin{table}
\center
\scalebox{0.9}{\begin{tabular}{l|lllll}
Method & Generative or & Probabilistic or & Exact or  & Local or & Batch or\\
 & Discriminative? & Non-Probabilistic? & Approximate? & Global? & Online?\\
\hline
MLE           & Generative     & Probabilistic       & Exact  &  Local & Batch \\
EM             & Generative     & Probabilistic       &  Approximate &  Local & Batch \\
LL/MaxEnt  & Discriminative & Probabilistic      & Exact  & Local & Batch   \\
CRF            & Discriminative & Probabilistic      & Exact  & Global & Batch   \\
SVM          & Discriminative & Non-Prob.\ & Exact  & Local & Online \\
MIRA          & Discriminative & Non-Prob.\  & Exact  & Global & Online \\

%Structured Perceptron (Collins 2002) - "discriminative" on the 1-0 loss, sort of, in certain conditions - nonprob - exact - local -   online
%Structured Perceptron (Collins and Roark 2004) - "discriminative" on the 1-0 loss, sort of, in certain conditions - nonprob - "exact" - global -  online
\end{tabular}}
\caption{A Classification of the learning methods used in this book}\label{tab:learn}
\end{table}
 
  \subsection{Decoding}




%Implementing a parsing system based on one of these (or any other) model,  requires implementing two separate  algorithms:  a decoding algorithm that accepts a sentence and a weights vector, and seeks the structure \(y\mathcal{Y}_x\) that obtains the highest score, and  a learning algorithm, that assigns values to the the weights vector {\bf w}. In statistical supervized parsing system, the parameter values are set based on a sample of manually annotated examples which we call a treebank. The overall parsing architecture is outlines in Figure \ref{fig:arch}. We discuss important characteristics of  the decoding and learning algorithms in turn.

%In this book we assume that the score of a parse tree is defined by a linear model.
%The score of an analysis \(y\) is defined to be the dot-product of a weight vector {\bf w} with a vector \(\phi(x,y)\) representing  events or features of the parse tree \(y\in\mathcal{Y}\) as a syntactic representation for the given sentence \(x\in\mcx\). 
%In what follows we refer to elements in  \({\bf g}(x,y)\) as the model {\em events}, and to the weight vector {\bf w} as the vector of model {\em parameters}.   The goal of the learning algorithms is the set values to the weights in the model parameters vector.  Due to inherent ambiguity in natural language, a sentence  may admit  multiple syntactic analyses that reflect different interpretations.\footnote{In fact, the number of possible phrase-structure trees grows exponentially  with sentence length, but only a small subset of them would be considered acceptable by a human.} 

%As oppose to simple prediction, for instance in the case of  binary  or multi-category classification, the set of candidate outputs of for an input sentence is not fixed in advance --- it depends on the input string. 
Our learning algorithm delivers an estimate \(\omega\) for {\bf w}, which defines a particular model instance \(m\in M\). 
A decoding algorithm receives a model instance \(m\in M\)  and a sentence \(x\in\mcx\) as input, and returns an optimal output structure \(y\in\mcy_x\)  respecting the modeling assumptions. The decoding algorithm solves the prediction task  defined above, repeated here for convenience.
\[h(x)=\argmax_{y\in\mcy_x} Score(x,y)\]

%The different parts of the prediction function define the different tasks assumed for such a decoder.
As opposed to binary  or multiclass prediction,  the set of outcomes (parse candidates) for a given sentence is not fixed in advance, and can in principle grow exponentially with the length of the input.\footnote{The number of binary trees that may be constructed for an input string equals the number of ways in which we can place balanced brackets, known as its {\em Catalan number}.}
The decoding algorithm thus has to  complete several  tasks, reflecting the different parts of the formula:
\begin{exe}
\ex
\begin{itemize}
\item Generating parse candidates for \(x\in\mcx\) (the \(\mcyx\) set)
\item Calculating the score of each parse candidate (the \(Score(x,y)\) function)
\item Searching for the highest scoring candidate (the \argmax~expression)
\end{itemize}
\end{exe}
We assume that calculating the score of a parse is linear in the size of the parameter vector and may be done efficiently. 
The time-intensive part of decoding involves generating and traversing the possible candidates, and keeping track of the highest-scoring one.
%The decoding algorithm in fact  solves the {\em argmax} expression in the function \(h\) repeated below.
%To do so, it has to complete the following tasks: traverse the relevant candidates, calculate their scores, and return the highest scoring one.


%
%We assume that the  formal constraints on \(\mcx,\mcy\) and \({\bf g}\) provide  sufficient information for constructing relevant output candidates. 
 {\em Exhaustive} search algorithms   traverse the  set of all permissible output candidates for a sentence. Such algorithms are  {guaranteed} to  return an optimal solution.    {\em Greedy} search algorithms make a sequence of locally optimal decisions, in the hope that such a sequence  will lead to a globally optimal, or at least a reasonable, solution. 
In the general case, greedy algorithms do not come with  guarantees that the returned output structure is globally optimal.  However, there  are particular cases where it is possible to prove that a  sequence of locally optimal decisions leads to a globally optimal solution. 
%This condition very much depends on the nature of the problem and the modeling assumptions, and in particular on the definition of {\bf g}  in our scoring function.

%Exhaustive  and greedy algorithms represent two extreme approaches for finding an optimal solution.  Exhaustive algorithms  are  slower  but they come with formal guarantees. Greedy ones may be  extremely fast but are not guaranteed to find the best solution. 
For a subset of the structure prediction problems that  respect certain  properties, we can perform an exhaustive search   that traverses an exponential number of  candidates in polynomial time  using dynamic programming (DP) methods.   
\begin{exe}
\ex Conditions for Dynamic Programming
\begin{xlist}
\ex The optimal substructure property: the optimal solution of a large problem is composed of optimal solutions of smaller subproblems.
\ex The overlapping subproblems property:  calculating the optimal solution of a large problem requires the optimal solutions of the same smaller subproblems repeatedly.
\end{xlist}
\end{exe}
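Both properties are exploited by chart parsers of the CKY family. The following toy sketch (with an invented grammar encoding: log-probabilities over unary lexical entries and binary rules) computes the best log-probability for every span and symbol, reusing each sub-span's best score across all the splits that contain it.

```python
# A toy CKY-style dynamic program over log-probabilities.
import math
from collections import defaultdict

def cky(words, lexicon, binary_rules):
    """lexicon: {(tag, word): logprob}; binary_rules: {(A, B, C): logprob}
    for rules A -> B C. Returns best logprob per (start, end, symbol)."""
    n = len(words)
    chart = defaultdict(lambda: -math.inf)
    for i, w in enumerate(words):                      # width-1 spans
        for (tag, word), lp in lexicon.items():
            if word == w and lp > chart[(i, i + 1, tag)]:
                chart[(i, i + 1, tag)] = lp
    for span in range(2, n + 1):                       # larger spans
        for i in range(n - span + 1):
            k = i + span
            for j in range(i + 1, k):                  # split point
                for (a, b, c), lp in binary_rules.items():
                    s = chart[(i, j, b)] + chart[(j, k, c)] + lp
                    if s > chart[(i, k, a)]:
                        chart[(i, k, a)] = s
    return chart
```

The chart has \(O(n^2)\) spans, and each is filled from smaller spans, so an exponential candidate set is searched in polynomial time.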
%DP methods for tree recognition may be easily extended for keeping track of the best tree, and these methods have dominated decoding approaches for statistical parsing.  
 For real-world situations, however, this may not be sufficient. A DP algorithm is polynomial not only in the length of the input but also in the size of the parameter vector. The parameter vector may be prohibitively large, and then even a DP program may not be sufficiently fast. To achieve decoding efficiency, researchers  develop {\em approximate}  methods that allow us to speed up  a DP algorithm  without sacrificing much accuracy, sometimes  even retaining  formal guarantees. 

\begin{itemize}
\item {\bf Beam Search}  limits the search at each stage to a fixed number of candidates, and thus keeps a manageable time/space complexity.
\item {\bf A* Search}  does not explore all possible alternatives at each decision point, but employs an admissible heuristic function to pick the next best candidate. A* search methods provide a formal guarantee that the first solution found in this way is also the optimal one.
\item {\bf Coarse-to-Fine}  When the set of alternative candidates is too large to explore exhaustively, it is possible to define a coarse-grained projection of the candidate space, decode this space exhaustively, and then use only the best result(s) for searching through the fine-grained candidates that are compatible with it.
\item {\bf Pruning}  Another common way to speed up decoding is to define a certain threshold, and skip the candidates that do not meet the threshold conditions. 
%\item {\bf Dual Decomposition}
%\item {\bf Factored Models} A common way to increase model efficiency is factoring: defining two separate models 
\end{itemize}
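A beam search over a fixed number of decision steps can be sketched as follows. The \texttt{expand} function, which supplies scored continuations of a partial analysis, is a hypothetical parameter standing in for the model's local scores.

```python
# A minimal beam search sketch: only `beam_size` partial analyses
# survive each decision step.
import heapq

def beam_search(init_state, expand, steps, beam_size):
    """expand(state) -> iterable of (score_delta, next_state) pairs."""
    beam = [(0.0, init_state)]
    for _ in range(steps):
        candidates = [(score + delta, nxt)
                      for score, state in beam
                      for delta, nxt in expand(state)]
        beam = heapq.nlargest(beam_size, candidates, key=lambda c: c[0])
    return max(beam, key=lambda c: c[0])
```

With beam size 1 this degenerates to greedy search; with an unbounded beam it is exhaustive. The beam size thus trades accuracy for speed.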

To sum up: decoding algorithms implement a  prediction function that in principle has to traverse an exponential number of  output candidates and find the best one. Enumerating all candidates and keeping track of all scores would be prohibitively expensive. If the modeling assumptions in \(\mcx,\mcy\) and our feature function {\bf g} meet the DP properties, decoding may be solved exactly and efficiently. In other cases, when \(M\) does not respect the DP properties, or when exhaustive search is not fast enough for practical purposes, approximate methods may be employed. In general, we may say that the expressive power of the feature function limits the application of DP and related approximations, and vice versa.  The art of model design is in finding the right balance between model expressivity and computational efficiency.

\subsection{Evaluation}
 We follow common practice in machine learning where the annotated data (the treebank) is first split into disjoint training and test sets. The learning algorithm is applied to the training set, and the decoding algorithm is applied to raw sentences from the test set.
The part of the treebank reserved as test set contains the gold analyses of the parsed sentences, and comparing parse hypotheses with gold trees allows us to quantify the parser's performance as scores (Figure \ref{fig:arch3}). 

 
%How can we tell how well a parser has performed? To do this we need to parse sentences that have not been seen during training, and compare their parses with the correct analyses. 

A naive way to score a predicted parse tree would be to check for an exact match with the gold tree:
\begin{align}
Eval(y_p,y_g) =
\begin{cases}
 1 & y_p = y_g \\
 0 & \text{otherwise}
\end{cases}
\end{align} 
However, in a structure prediction setting such exact matches are hard to come by. In order to gain a meaningful empirical evaluation of the strengths and weaknesses of our system we would like to be able to quantify partial success. 
We can more accurately quantify the success of a parser using targeted evaluation metrics.

Evaluation metrics for structure prediction typically measure the dissimilarity between the parser output and the gold structure, and normalize it in order to obtain a metric that ranges from 0 (disjoint structures) to 1 (exact match).
The way to define (dis)similarity often depends on the representation chosen for the formal output.
% and may be done, e.g., using distance-based metrics. 
\begin{align}
Eval(y_p,y_g) = \frac{\text{similarity}(y_p,y_g)}{\text{norm}(y_p,y_g)} = 1-\frac{\text{dissimilarity}(y_p,y_g)}{\text{norm}(y_p,y_g)}
\end{align} 
 Different ways of defining and normalizing the similarity measure give rise to different evaluation metrics.
A particularly  useful technique in structure prediction is to reduce the structure to a set of objects, and normalize using the structure size. This formally yields the well-known precision and recall scores.
 \begin{itemize}
 \item {\bf Precision} \((y_p,y_g)\)
 \[P=\frac{|y_p \cap y_g |}{|y_p|}\]
 \item {\bf Recall} \((y_p,y_g)\)
 \[R = \frac{|y_p \cap y_g |}{|y_g|}\]
 \item {\bf F-score} \((y_p,y_g)\)
 \[F= \frac{2\times P\times R}{P+R}\]
 \end{itemize}
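Once a tree is reduced to a set of objects, these scores are straightforward to compute; the labeled spans below are one illustrative reduction (constituency evaluation typically uses labeled bracket spans).

```python
# Precision, recall and F1 over trees reduced to sets of objects
# (here: labeled spans (label, start, end)).
def prf(predicted, gold):
    tp = len(predicted & gold)                       # matched objects
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0        # harmonic mean
    return p, r, f
```

For example, two matches out of three predicted and three gold spans give precision, recall, and F-score of 2/3 each.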
 The way in which the structures are broken down into sets varies to fit the structure definition.
 We will discuss some of the most useful metrics in Chapter \ref{ch05eval}. 


Evaluation metrics are typically not used to score individual trees, but rather to average the results over an entire set of examples. The two most common ways to average accuracy results over a set of parsed examples are macro and micro averaging. In parsing, we typically use the latter, pooling the counts over the whole set, as it is less sensitive to variation in sentence length.

\begin{itemize}
\item {\bf Macro average:} every sentence contributes equally.
\begin{align} 
\label{macro}
eval(\{(g_i,p_i)\}_{i=1}^n) = \frac{1}{n}\sum_{i=1}^n eval(p_i,g_i)
\end{align}
\item {\bf Micro average:} the counts are pooled over the whole set.
\begin{align} 
\label{micro}
eval(\{(g_i,p_i)\}_{i=1}^n)= \frac{\sum_{i=1}^n \text{similarity} (p_i, g_i)}{\sum_{i=1}^n \text{norm}(p_i,g_i)}
\end{align}
\end{itemize}
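The two averaging schemes can be sketched as follows, under the standard convention that the macro average scores each sentence equally while the micro average pools the similarity and normalization counts over the whole set; the set-overlap similarity used in the example is illustrative.

```python
# Macro vs. micro averaging over (predicted, gold) pairs.
def macro_avg(pairs, similarity, norm):
    """Average of per-sentence scores: every sentence weighs the same."""
    return sum(similarity(p, g) / norm(p, g) for p, g in pairs) / len(pairs)

def micro_avg(pairs, similarity, norm):
    """Pooled counts: longer sentences contribute proportionally more."""
    return (sum(similarity(p, g) for p, g in pairs)
            / sum(norm(p, g) for p, g in pairs))
```

On a set with one short and one long sentence, the short sentence's perfect score dominates the macro average but not the micro average.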



Evaluation metrics and procedures that depend on the particular form of the output structure are called {\em intrinsic}.
There also exist {\em extrinsic} evaluation campaigns that do not score the structures directly: they score parsing performance indirectly, based on the performance of an embedding task. Extrinsic evaluation campaigns give a faithful indication of the usefulness of the system for a particular task, but this indication does not typically generalize across different tasks (translation, information extraction, etc.).
\subsection{Syntactic Analysis and Disambiguation}
Statistical parsers are computer programs that perform syntactic analysis and disambiguation of natural language sentences.
An input sentence  to a parser is drawn from a set of sentences in a   language \(\mcx=\Sigma^*\), and  %\item The Syntactic Representation % A {\bf probability model}  for  assigning probabilities to parse trees
 the output   \(y\in\mcy\)  is a formal representation of a syntactic parse tree. 
 %The set of constraints on $\mathcal{X}$ and $\mathcal{Y}$  define the form of the syntactic parses, that is, whether they are dependency-trees or constituency-trees, and which trees are admissible. 
The syntactic analysis of a sentence is a complex structure, so we employ methods for structure prediction, which represent a complex structure by means of a set of simpler events that are extracted using a feature function and that can be used to score different analyses. These events may be the rules of a formal grammar, transitions in a state machine, or simply factors of the parse tree itself. The probabilities, scores, or weights of these events are the model parameters, and they are estimated from annotated data based on corpus statistics. 

The prediction is done using a linear model that assumes a feature function {\bf g} mapping any \((x,y)\) pair to a high-dimensional feature vector, and a parameterized weight vector {\bf w}.
\[h(x)=\argmax_{y\in\mcy_x}  {\bf w}^T {\bf g}(x,y) \]
%This linear model  is abbreviated as the tuple \(M\).
%These events should be observable  in our  data. 
%We refer to the formal constraints over the syntactic representation \(\Gamma\) and to the modeled events and feature-templates  definition as  \(\Phi\).%{\bf training algorithm} for estimating the probability distribution \(p\) 
%Then we can use this assignment in a  linear model for prediction
%\[h(x)=\argmax_{y\in\mcy_x} {\bf g}^T \omega\]
%We can compare our prediction in hindsight to gold hypothesis assigned by a human expert in order to empirically quantify the parsers succes.
%The different ways in which data-driven parsing systems are modeled, formalized and implemented  can be roughly grouped into  (i) grammar-based, (ii) transition-based and (iii) graph-based methods. These methods are orthogonal to the kind of formal syntactic representation being used. Let us characterize these systems in turn.

When implementing a parser based on this form, we must solve the following tasks:
\begin{itemize}
\item {\bf modeling:}  formally defining the constraints on the input space \mcx, the recipe for generating parse candidates \mcy, and the parse decomposition (or, feature function).
\[M = \langle \mathcal{X}, \mathcal{Y}, {\bf g}, {\bf w}\rangle \]
\item {\bf learning:}  devising a learning algorithm that accepts a model \(M\) and a data set \data~as input, and returns an optimal  parameters assignment \(\omega^*\) as output.
\[\omega^*=\argmin_{\omega} \text{loss}(\data,\omega) + \text{complexity}(\omega)\]
\item {\bf decoding:}  devising a decoding algorithm that accepts a model instance \(m\in M\) and an input sentence \(x\in\mcx\) as input, and returns a syntactic analysis \(y\in\mathcal{Y}_x\) as output.
\[h(x,\omega^*)=\argmax_{y\in\mcy_x}  {\omega^*}^T {\bf g}(x,y) \]
\item {\bf evaluation:} devising an evaluation algorithm that accepts a predicted parse and a gold parse as input, and calculates the accuracy of the prediction as output.
\[eval(y_p,y_g)=\frac{\text{similarity}(y_p,y_g)}{\text{norm}(y_p,y_g)}\]
\end{itemize}
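The four tasks can be instantiated end to end on toy data. Everything below (the feature function, the candidate set, the data) is illustrative, and a perceptron-style learner stands in for the loss-minimization objective.

```python
# A toy end-to-end instantiation of modeling, learning, decoding,
# and evaluation. Parses are tuples of atomic "parts".
from collections import Counter

def g(x, y):
    return Counter(y)                  # modeling: features = parts of the parse

def decode(omega, x, candidates):      # decoding: argmax over candidates
    return max(candidates,
               key=lambda y: sum(omega.get(f, 0.0) * c
                                 for f, c in g(x, y).items()))

def learn(treebank, candidates, epochs=5):   # learning: perceptron-style
    omega = {}
    for _ in range(epochs):
        for x, y_gold in treebank:
            y_pred = decode(omega, x, candidates)
            if y_pred != y_gold:
                for f, c in g(x, y_gold).items():
                    omega[f] = omega.get(f, 0.0) + c
                for f, c in g(x, y_pred).items():
                    omega[f] = omega.get(f, 0.0) - c
    return omega

def evaluate(y_pred, y_gold):          # evaluation: normalized overlap
    return len(set(y_pred) & set(y_gold)) / len(set(y_gold))
```

After training on a single example, the decoder recovers the gold candidate and the evaluation score is 1.0.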
The elaborated architecture is depicted in Figure~\ref{fig:arch3}. The general workflow can be defined as follows. A treebank enters a training phase in which the modeled events are observed and the model parameters (the weights/scores of the modeled events) are estimated. The parameters then enter the decoding phase, and the decoding algorithm now accepts a new sentence as input. Based on the parameters and the formal constraints on the representation, it finds the highest-scoring analysis and provides it as output.


\begin{figure}
\center
\scalebox{0.55}{
\includegraphics{system-01-03}
}
%sentence \(\rightarrow\)\begin{tabular}{c}
%\fbox{\begin{tabular}{c} \ \fbox{parsing} \\  \(\uparrow\) \\\fbox{training}\\\end{tabular} }\\ \(\uparrow\) \\ treebank
%\end{tabular} \(\rightarrow\) \begin{tabular}{c}parse-tree\\ \(\downarrow\)  \\  \fbox{eval} \\ \(\uparrow\) \\ gold-tree\end{tabular}\(\rightarrow\) evaluation scores
\caption{An Architecture for Statistical Parsing and Evaluation from a Bird's-Eye View}
\label{fig:arch3}
\end{figure}


%We define decoding by specifying a search  algorithm is given the specification of permissible output structures \(\Gamma\),  the definition of modeled events or features templates \(\Phi\),  the  learned parameters  vector {\bf w}, and an input sentence in the language \(x\in L\). Its goal is to traverse all permissible  structures and return the highest scoring structure according to the linear model.

These different  tasks are of course not independent.  %  The identification of  features and feature-templates  relies on  knowledge expert, while the feature selection and weight assignment relies on  machine learning methods.  
The efficiency of the decoding algorithm depends on the ability to search efficiently through the space of possible structures, which in turn depends on the independence between subparts of the structure. Such independence assumptions greatly affect the design of the feature function. The efficiency of the decoding algorithm may also affect the efficiency of learning -- in online learning methods, for instance, we compare an empirical prediction to a gold annotated structure and update the parameter vector accordingly. Discriminative parsing methods involve repeatedly re-parsing the training set and updating the parameter vector, and may suffer greatly if decoding is inefficient.
 %In such cases, training time is a function of the size of the data set and the efficiency of the decoding algorithm. 
 The evaluation metrics depend on the formal definition of \(\mcx,\mcy\) and are often structure-specific. These metrics are sensitive to different formal representations and typically do not transfer well across representation types and languages. The definition of the evaluation metrics may also affect learning, in case the objective function is one that tries to minimize the empirical error of a prediction. 
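The online, mistake-driven learning loop alluded to above can be sketched as a structured-perceptron-style update. This is a simplified sketch: `decode` stands in for the search procedure, `g` for the feature function, and features are represented as sparse dictionaries; all names are illustrative.

```python
def perceptron_update(weights, x, gold_y, decode, g, lr=1.0):
    """One mistake-driven update: decode, compare to gold, adjust weights."""
    predicted_y = decode(x, weights)          # one full decoding per example
    if predicted_y != gold_y:
        for feat, value in g(x, gold_y).items():       # promote gold features
            weights[feat] = weights.get(feat, 0.0) + lr * value
        for feat, value in g(x, predicted_y).items():  # demote predicted ones
            weights[feat] = weights.get(feat, 0.0) - lr * value
    return weights
```

Since `decode` is called once per training example per pass over the data, total training cost scales directly with decoding cost, which is why inefficient decoding is especially harmful to discriminative training.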
 
 The architecture defined here underlies all of the best-performing broad-coverage parsers developed for English, and many optimization algorithms have been employed for the different tasks. One of the key questions we address in this book is whether, or how, this architecture has to be altered in order to deal with languages that are substantially different from English.

%There are currently several 

% Let us conclude our brief survey by reviewing these trends in building parsing systems.

%Such parsing systems typically assume a generative component that generates all possible parse trees \(\mcy_x\) for a given sentence \(x\) (usually in  a packed representation) and the decoding algorithm  explores the different possible trees and seeks the highest scoring one. In order for the decoding algorithms to be efficient, these models are usually factored, that is, consider a division of the parse to independent events, so that we can efficiently decode by traversing and scoring the trees without repeating computations. The level of factoring determines to a large extent the complexity of the decoding algorithm, as well as the feasibility of training the weights using a given amount annotated data. We will not discuss here graph-based models, but we comment on their adaptation and use in \textsection\ref{sec:conclude}.


\section{What are Morphologically Rich Languages?}\label{ch1:mrls}

Morphologically rich languages, MRLs for short, are languages in which a significant amount of information is expressed at the word level.
Let us consider the following examples.
\begin{exe}
 
\ex\label{heb} Hebrew 
\begin{xlist}
\ex \gll{\em wkfmhbit} {\em hlkti}\\
and-when-from-the-house go-past-1st.sing\\
\trans{`and when I went out of the house'}
\end{xlist}
\ex \label{turk} Turkish 
\begin{xlist} \ex
  \gll{\em al+Hn+ymH\$~}\\
was-2nd.poss-red\\
\trans{`it was your red one'}
\end{xlist}
\ex Warlpiri \begin{xlist} 
\ex\label{warl} 
\gll {\em  Maliki-rli-ji} {\em yarlku-rnu} {\em wiri-ngki} \\
dog-erg-1st.sing-obj bite-past big-erg \\
\trans{`a big dog bit me'}
\end{xlist}
\end{exe}
The word-token {\em wkfmhbit} in the Semitic language Modern Hebrew corresponds to five different vocabulary items in English that carry different meanings and have different roles in the phrase in (\ref{heb}): {\em w} (`and', conjunction), {\em kf} (`when', relativizer), {\em m} (`from', preposition), {\em h} (`the', determiner), {\em bit} (`house', noun). The Turkish word {\em al+Hn+ymH~} in (\ref{turk}) corresponds to the phrase ``it was your red one''. The first Warlpiri word in (\ref{warl}), {\em Maliki-rli-ji}, indicates that {\em Maliki} (a dog) is performing an action of which I am the object, where the actual action (``bite'', yarlku) is specified by a different word-token.

Word structure in MRLs thus breaks a basic assumption concerning words that we have made so far. Up to this point we have assumed that input word-tokens are primitive units, each corresponding to a single meaning-bearing element. In MRLs, each input token has an internally complex structure and may contain multiple meaning-bearing units called {\em morphemes}.

The complex internal structure of words is not only interesting in its own right; it has profound effects on the syntactic structure of the language --- that is, on the way words may combine in order to form meaningful phrases and sentences.
Consider, for instance, the phrase ``big dog'' in Example~(\ref{warl}). English grammar forces us to place the words ``big'' and ``dog'' together in a certain order to indicate that ``big'' is a property of ``dog''. The grammar of Warlpiri, in contrast, allows its speakers to indicate the ``modifying property'' relation by different means, namely, by marking both the words ``dog'' and ``big'' with an ergative suffix.\footnote{Readers who are not familiar with the linguistic abbreviations may consult the table in Appendix A.}  

These phenomena are not as obscure as they may seem, and they are found in many MRLs around the world. In fact, we find morphological phenomena also in English, albeit on a smaller scale. Consider the following clauses:
\begin{exe}
\ex\label{case} 
\begin{xlist}
\ex She likes her
\ex Her, she likes
\end{xlist}
\ex\label{agreement}
\begin{xlist}
\ex The girl who, I think,  really likes cookies, eats them all the time.
\end{xlist}
\end{exe} 

Example~(\ref{case}) shows two English sentences that use the same words to express the same meaning, but their word order varies. The way English grammar allows us to distinguish who likes whom here is by the morphological marking of pronouns. Both ``her'' and ``she'' refer to a third person singular feminine pronoun. The pronoun ``her'' additionally carries the accusative property, indicating that it is the {\em object} of ``like'' --- as opposed to the pronoun ``she'', which is in nominative form, indicating that it is the {\em subject}. The linguistic phenomenon of indicating the role of words in the sentence by varying their form is called {\em case marking}.

Now, consider Example~(\ref{agreement}). The noun ``girl'' is clearly the subject of the verb ``like'', even though it is quite distant from it. The noun ``girl'' is unmarked in and of itself, but English grammar allows us to identify it as the subject of ``like'' by indicating the properties 3rd person singular on the verb, using the suffix ``s''. Inflecting a verb with certain properties thus indicates what kind of subject it can take. Similarly, the properties of ``them'' (3rd person plural) allow us to link it to the referent ``cookies''. The linguistic phenomenon of connecting two different elements by means of marking one with the grammatical properties of the other is called {\em agreement}.


Languages vary in their capacity to create internally complex words, which, in turn, has implications for the kinds of syntactic structures these words appear in. 
The crucial observation is the following: {\em the more information words can express morphologically, using word form and structure, the less the grammar of that language needs to rely on patterns such as word order and grouping to indicate grammatical relations.}
Case marking and agreement are only two examples of the ways the internal structure of words affects sentence form and sentence meaning, and this influence may go both ways. 

In order to design models  that  effectively predict  syntactic structures in MRLs, we ought to model the internal structure of words and the implied {\em morphological-syntactic} interactions.
This chapter is dedicated to characterizing the morphological forms, morphological functions and morphological processes that are found in the world's languages. It does not describe any one language in particular but surveys an arsenal of morphological means that are common in MRLs. We first introduce basic terms in morphological theory, and demonstrate how complex word structures reflect grammatical relations and facets of meaning. Then we discuss the tasks of morphological analysis and disambiguation and the main challenges faced by computational approaches that attempt to solve them.


%\section{The Structure of Words}


\subsection{Words and Morphemes}
Morphology (from Greek ``morph'': form, shape) is the study of word structure.
Linguists in the American  structuralist traditions define a  {\em morpheme} as the smallest unit of form:function correspondence that is found  in natural language.\footnote{There indeed exist smaller units in natural language, such as syllables, but a syllable does not bear a relation to particular meaning.} 
Some familiar examples from English morphology are shown in \eqref{ex:morphology}.\footnote{When we want to refer to the lexical material of a word regardless of other morphological material, we use an upper-case lemma enclosed in /XXX/. More on this later.} 
\begin{exe}
\ex\label{ex:morphology}
\begin{tabular}{lllll}
 "cat" & : Noun,/CAT/ &+   "s" &: Plural & $\leadsto$ ``cats'' \\
 "eat" &: Verb/EAT/  &+  "s" &: Present, 1st person, singular & $\leadsto$ ``eats''\\
 "fix" &: Noun,/FIX/ &+ "es" &: Plural &  $\leadsto$ ``fixes''   \\
 "fix" &: Verb/FIX/ &+  "ed" &: Past tense &  $\leadsto$ ``fixed''\\
\end{tabular}
\end{exe}
These simple examples already illustrate the basic terminology in morphological theory.
A {\em morph} is a morphological form without a function (``cat'', ``s'', ``eat'', ``es'', etc.). {\em Allomorphs} are two or more morphs that express the same function (``s'' and ``es'' above). Morphs may be ambiguous, that is, they may be used to express different functions. The morph ``fix'' has two different functions -- it may serve as a noun or as a verb in a sentence. The morph ``s'' in English has two different functions -- it can mark plural, as in ``cats'', or it may inflect present tense verbs into 3rd person singular, as in ``eats''.
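The form:function ambiguity just described can be made concrete as a toy mapping from morphs to the sets of functions they may express, using the examples from the text. The dictionary layout and the function labels are ours, not those of a standard lexical resource.

```python
# Each morph maps to the set of functions it may realize.
MORPH_FUNCTIONS = {
    "s":   {"Noun.Plural", "Verb.Present.3rd.Sing"},  # "cats" vs. "eats"
    "es":  {"Noun.Plural"},                           # allomorph of plural "s"
    "fix": {"Noun./FIX/", "Verb./FIX/"},              # noun or verb
}

def analyses(morph):
    """All functions a morph may realize (empty set if unknown)."""
    return MORPH_FUNCTIONS.get(morph, set())

print(sorted(analyses("s")))  # the morph "s" is two-ways ambiguous
```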

There is an intuitive separation between morphemes that express lexical material, such as those that are found at the left hand side of (\ref{ex:morphology}), and those that mark the value of properties such as tense, number, person, case, animacy and more, as those that are found in the right hand side.  We will refer to the former as {\em lexical morphemes} and to the latter as {\em functional morphemes}.  
It is typical that a morphologically complex word contains a single lexical morpheme and multiple functional morphemes, in which case the lexical morpheme contributes the core lexical material and is referred to as the {\em host}. The morphological form of the host, that is, the morph of the host, is commonly referred to as the {\em stem} of the word.\footnote{This defines the name and the task of a well-known and widely-used NLP application called ``stemming'': identifying the morph of the hosting lexical morpheme in every word.}
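The stemming task mentioned in the footnote can be sketched as naive suffix stripping. Real stemmers (e.g., Porter's) apply ordered rewrite rules; this toy version, with a suffix list of our own choosing, merely strips one English functional suffix, longest first.

```python
# Illustrative functional suffixes; a real stemmer has many more rules.
SUFFIXES = ["s", "es", "ed", "ing"]

def stem(word):
    """Return the candidate stem after stripping one functional suffix."""
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        # Require a plausibly long remainder so "ox" is not stripped.
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

print(stem("cats"), stem("fixes"), stem("fixed"))  # cat fix fix
```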

Morphemes that can stand on their own are called {\em free morphemes} and those that cannot stand on their own and must attach to a host are called {\em bound morphemes}. It might appear tempting at this point to associate free morphemes with lexical morphemes and bound morphemes with functional morphemes, but this is not necessarily so. The English morpheme ``of'' contributes a genitive (possession) property, just like the suffix ``'s'' does, but the preposition ``of'' is a free morpheme and the possessive suffix ``'s'' is a bound morpheme. 
Furthermore, lexical morphemes may also be bound, as in ``greenhouse'', to derive more complex lexical meanings.\footnote{The phenomenon of creating complex lexical meanings by compounding simple lexical morphemes is very common in Germanic languages. For example, the term ``Sprachabteilungsleiter'' in German denotes the head (Leiter) of the language department (Sprache+Abteilung).} 
%In English, and certainly in languages which have richer morphology, there is a wide range of mis-matches between the lexical/functional and free/bound distinctions, 
%so we will refrain from making any a-priory association between them.

 %Any text processing platform needs a notion of word. One of the simplest ways to define words is to equate them with space-delimited tokens. This definition breaks down already in English, where the sentence "I'm not New York-based" contains four space delimited tokens, although we would conceive it as containing six word elements "I am not New York based". In fact, some may argue that it contain only five word-elements as in "I am not New\_York based", where New York is taken to be a multiword expression. Even in English then, the mapping between space-delimited tokens and words is in fact many to many.

%A morpheme is the smallest unit of form function correspondence.

% A morph is
 
 %An allomorph is
 
\subsection{Morphological Forms}

The brief exposition above of morphology in English, a language which is considered morphologically impoverished, shows a fair amount of structure in its word forms. However, the morphological forms involved appear to be quite simple and regular. There are exceptions to this apparent simplicity. Consider, for instance, the following.
\begin{exe}
\ex\label{ex:morp-form}
\begin{xlist}
\ex\label{form1} ox : N.singular vs.\ oxen : N.plural 
\ex\label{form2}  child  : N.singular  vs.\  children  : N.plural 
\ex\label{form3}  sheep : N.singular vs.\  sheep : N.plural 
\ex\label{form4}  eat  : V.present vs.\  ate : V.past 
\ex\label{form5}  read :  V.present vs.\  read : V.past 
\end{xlist}
\end{exe}
%In all of the above example we recognize a ``stem", the morph that is associated with the core meaning of the word (ox, eat, child, sheep). 
In \eqref{form1} there is a concatenation of the morpheme ``en'' that indicates a plural function. In \eqref{form2} there is a concatenation of the morpheme ``en'' and the stem undergoes a change of vocalization. In \eqref{form4} the stem undergoes a complete transformation and vocalization change to indicate the past tense, and in \eqref{form3} no apparent morphological change occurs to indicate the plural (this is often called a null morpheme).
%

MRLs present a wide variety of form changes that express grammatical properties and grammatical relations. Let us survey the most common morphological forms that are observed world-wide and studied in morphological theory.\footnote{Examples are taken from the introductory textbook of \cite{morphology}.}
\begin{itemize}
\item {\bf Concatenation:} Placing two meaning-bearing units one next to the other is the simplest way for the grammar of a language to indicate that they are related. In syntax, this is the main mechanism used for combining words into phrases and sentences. Likewise, in morphology this is a commonly used way to create morphologically complex words.

Affixes are bound morphemes that are simply appended to the host, and they are a very common form of morphological marking. We distinguish between prefixes, which attach before the host, and suffixes, which attach after the host, as in (\ref{eng-do}a) and (\ref{eng-do}b) respectively.
%Prefixes and Suffixes: strings concatenated before or after the step, respectively.
\begin{exe}
\ex\label{eng-do} Prefixes and Suffixes in English \label{ex:morp-eng}
\begin{xlist}
\ex un : neg + do : Verb \(\leadsto\) undo : Verb
\ex undo : Verb + ing : pres.prog \(\leadsto\) undoing : Verb.pres.prog
\end{xlist}
\end{exe}
 Infixes are affixes that are inserted at one or more positions within the stem. For instance, the diminutive affix in Spanish may be an infix.
\begin{exe}
\ex Infixes in Spanish \label{ex:morp-infix}
\begin{xlist}
\ex   Edgar : NNP +  u\'{i}t : diminutive   \(\leadsto\)  Edgu\'{i}tar : NNP.diminutive
\end{xlist}
\end{exe}
\item {\bf Vocalization}: Languages may indicate a change in a grammatical property using 
a change in vocalization or in prosody, which is reflected in pronunciation. Well-known examples are ablaut and umlaut morphs. Ablaut is a vowel alternation indicating a change in grammatical properties, familiar from English and Latin. 
\begin{exe}
\ex Ablaut in English \label{ex:morp-ablaut-en}
\begin{xlist}
\ex   \gll sing vs.\ sang vs.\ sung\\
sing vs.\  sing.past vs.\  sing.past.perfect  \\
\end{xlist}
\ex Ablaut in Latin \label{ex:morp-ablaut-la}
\begin{xlist}
\ex   \gll vid\={e}o vs.\ vidi\\
see.present vs.\  see.perfect  \\
\end{xlist}
\end{exe}
Umlaut is a modification of a vowel under certain conditions, familiar from German. 
\begin{exe}
\ex Umlaut in German \label{ex:morp-umlaut}
\begin{xlist}
\ex   \gll Vater vs.\ V\"{a}ter \\
father vs.\ fathers  \\
\ex   \gll Mutter vs.\ M\"{u}tter \\
mother vs.\ mothers  \\
\end{xlist}
\end{exe}
\item {\bf Subtraction}: We have gotten used to thinking about morphemes as adding functional material. Sometimes, however, moving from a simple unmarked form to a morphologically marked form, for instance adding a ``plural'' property, involves the deletion of an affix rather than the addition of one. This process is called subtractive morphology. A widely studied example is the form of plural verbs in {Koasati, a Native American language spoken in Elton, Louisiana and Livingston, Texas.}
\begin{exe}
\ex Subtractive Morphology in Koasati\label{ex:morp-koasati}
\begin{xlist}
\ex\gll   lataf-kan vs.\ lat-kam\\
to-kick-something.singular vs.\ to-kick-something.plural \\
\end{xlist}
\end{exe}
\item {\bf Reduplication}: morphs that are doubled in the stem in order to indicate a grammatical feature or a grammatical function. Reduplication may be partial, as in (\ref{samoan}), or it may be complete, as in (\ref{malay}).  
\begin{exe}
\ex Partial Reduplication in Samoan \label{samoan}
\begin{xlist}
\ex \gll manao vs.\ mananao \\
'(he)-wishes' vs.\ '(they)-wish' \\
\ex \gll galue vs.\ galulue \\
'(he)-works' vs.\ '(they)-work' \\
\end{xlist}
\ex Complete Reduplication in Malay \label{malay}
\begin{xlist}
\ex \gll kursi vs.\ kursikursi \\
'chair' vs.\ 'chairs' \\
\ex \gll medzah vs.\ medzahmedzah \\
'table' vs.\ 'tables' \\
\end{xlist}
\end{exe}
In English, French and other European languages, complete reduplication serves as an intensifier (as in ``goody-goody''). Partial reduplication is, however, much more common for expressing grammatical functions in the world's languages. 
\item {\bf Root and Template:} Semitic languages employ a particularly complex way to derive verb forms using templatic morphology. A morphological lexicon of Semitic languages defines the consonantal roots that define core lexical meanings. These roots may be inserted into templates that define affixes, infixes and vocalization patterns, in order to derive nouns, verbs and adjectives in the languages.
\begin{exe}
\ex Templatic Morphology in Modern Standard Arabic \label{ex:morp-msa}
\begin{xlist} 
\ex   K.T.B + \(\Box\)a\(\Box\)a\(\Box \leadsto \)  KaTaB  \\
write +  XaXaX \( \leadsto \)  to write \\
\ex   K.T.B + ma\(\Box\)\(\Box\)u\(\Box \leadsto\)  maKTuB \\
write +  maXXuX \( \leadsto \)  letter \\
\end{xlist}
\ex Templatic Morphology in Modern Hebrew \label{ex:morp-heb}
\begin{xlist} 
\ex   K.T.B + hit\(\Box\)a\(\Box\)e\(\Box \leadsto \)  hitKaTeB  \\
write +  hitXaXX \( \leadsto \)  correspond \\
\ex   K.T.B + mi\(\Box\)\(\Box\)a\(\Box \leadsto\)  miKTaB \\
write +  miXXaX \( \leadsto \)  letter \\
\ex   K.T.B +  \(\Box\)\(\Box\)u\(\Box \leadsto\)  KaTuB \\
write +  XaXuX \( \leadsto \)  written \\
\end{xlist}
\end{exe}
\end{itemize}
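Root-and-template realization lends itself to a direct computational sketch. Following the Hebrew examples above, the slots written \(\Box\) in the text are encoded here as underscores, and roots are dot-separated consonant sequences; this encoding is ours, for illustration only.

```python
def realize(root, template):
    """Fill the template's slots ('_') with the root consonants, in order."""
    consonants = iter(root.split("."))
    return "".join(next(consonants) if ch == "_" else ch for ch in template)

print(realize("K.T.B", "hit_a_e_"))  # hitKaTeB ('correspond')
print(realize("K.T.B", "mi__a_"))    # miKTaB   ('letter')
print(realize("K.T.B", "_a_u_"))     # KaTuB    ('written')
```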
In sum, there are many different ways in which word-forms may be created or changed in order to express grammatical properties, grammatical functions, relations between words, and several aspects of word meaning. The choice of morphological forms is language-specific, and is determined by the grammar of the language.
%Morphemes are therefore  appropriately understood as abstract processes that alter the form of words in order to lead to a change in its meaning.
 
\subsection{Morphological Functions}
While  languages have a variety of means to create morphological forms, these different means are usually used to express the same abstract functions.
\begin{itemize}
\item {\bf Lexical Material.} The morphological form of a word indicates, first and foremost,  its core meaning, or its {sense}. 
Let us denote this core meaning of the word as its {\em lexeme} or {\em lemma}. In what follows we will make a clear distinction between a lexeme, which contains only semantics, and a stem, which is the morph paired with the lexeme. We will indicate lexemes with upper-case letters.
So, we will say that in the word form ``cats'', ``cat'' is the stem, CAT is the lexeme, and cat:CAT is the lexical morpheme which carries the core meaning.
The `physical' words ``cat'' and ``cats'' will be referred to as word-forms realizing the lemma with different grammatical properties.
%The core lexeme of a word is sometimes referred to as a {\em host} for additional morphemes that have other functions.

\item {\bf Grammatical Properties.} Morphemes may also indicate {\em grammatical properties},\footnote{In linguistics, these are sometimes called {grammatical features}, {\em feature value pairs} or just {\em features}. In this book we will reserve the word {\em features} for its technical (machine learning) sense, and use {\em properties} for the linguistic features, to avoid any confusion.} that is, linguistic attribute:value pairs that are indicated on top of the core lexical meaning.
The properties could be inherent semantic properties of expressions, such as {\em gender, number, person, animacy}. They can be properties of the situation talked about, such as {\em tense, aspect, mood, modality}, or they can be properties indicating the grammatical relation of the hosting lexeme to other elements in the utterance, for instance morphemes indicating {\em case} or morphosyntactic {\em agreement}.

\item {\bf Grammatical Relations.} Morphemes may be bound to a host but indicate a relation to a completely different element in the sentence, with a different set of properties.
Examples are {\em pronominal clitics}, which indicate a grammatical relation to a different element on top of the host.
The Spanish word ``damelo'', for instance, means ``give me it'' (or ``give it to me'' in free translation). Here, the first morpheme indicates the lemma, the verb ``dar'' (to give); the second morpheme indicates an indirect object, the pronoun ``me'' (or, ``to me''), marked with 1st person singular; and the last morpheme indicates a direct object, by means of the pronoun ``it'', marked with 3rd person singular. Here, ``da'' is the host and it carries two clitics.

\item {\bf Compounding.} Conjoining lexical morphemes is a means to create more complicated lemmas, using a process called compounding. In many European languages, compounding is equivalent to creating {\em multi-word expressions (MWEs)}. For instance, the German word ``Donaudampfschiffahrtsgesellschaftskapitän'' is translated into the English multiword expression ``Danube steamship company captain''. A different way in which morphology supports compounding is by means of {\em construct-state nouns} (these are known as {\em smixut} in Hebrew, {\em idafa} in Arabic, {\em ezafe} in Farsi, and are found in many more languages). In Hebrew, the word `bit' (vocalized ``bayit'') indicates a house, whereas the phrase `bit spr' (vocalized ``beit sepher'') means `school' (literally, `book house'). Here the vocalization change marks an unbreakable semantic relation between `book' and `house'.\end{itemize}
As opposed to the first three  morphological functions we discussed, compounding may apply recursively, delivering a productive way to  create words which are unbounded in length.

\paragraph{A  note on function words and phrase-Level morphology.}

So far we have treated morphology as a way of altering a word's properties, functions and meaning. But in fact, the effect of morphology goes far beyond the word on which it is marked. Morphological marking on a word may affect the meaning, properties, or relations of an entire phrase or clause. A common example is the possessive suffix ``'s'' in English. In ``The king of France's hat'', ``'s'' is hosted by ``France'', but it indicates a possession (genitive) relation of the whole phrase ``The king of France'' to ``hat''. 

Morphological elements that alter the function of an entire phrase or clause are referred to as ``phrase-level morphemes''. While word-level morphemes are very strict in their distribution (tense is marked on verbs but not on nouns, person is marked on nouns but not on adjectives, etc.), phrase-level morphemes appear at the periphery of the phrase, and can attach to just about anything. The genitive suffix may attach to a noun (``John's book''), a numerical expression (``the only one's book''), adverbials (``the one and only's book'') and even verbs (``the boy I like's book'') -- these are all grammatical, as the suffix marks the relation between the book and an entity denoted by the phrase, not the word. 

 
%This phenomena of morphemes altering the Anderson calls such morphemes phrase-level morphology.

The term phrase-level morphology\footnote{As coined and discussed by Anderson in his book \cite{x}.} may also cover all sorts of function words, such as determiners, prepositions, or auxiliaries, which are free functional morphemes altering the properties of the phrase or the clause. The placement of phrase-level morphemes, whether they are free or bound, is often determined by the rules of syntax, and not by the rules of morphology.




 \subsection{Morphological Processes}
 Having characterized the different morphological forms that are found in the world's languages and the functions that these morphological forms realize, we are now ready to characterize the stuff words are made of, or more accurately, the morphological processes that take place in creating (any) word in (any) language.
 
Let us assume that the full morphological analysis of a word form may consist of a lemma, grammatical properties, clitics indicating relations to other elements in the sentence, and possibly other related entities.
There are distinct morphological processes that take place in the formation of words, which may be classified by their function: (i) derivational morphemes create lemmas, (ii) inflectional morphemes add properties, and (iii) compounding/clitics indicate or connect to other elements in the sentence.
 
\paragraph{(i) Derivational Morphology.} Derivational processes determine the core lexical meaning of the word.
Derivational morphology processes may combine  lexical and functional morphemes to create lemmas.
By joining and conjugating derivational morphemes, the sense of the lemma, and its syntactic category, may be altered and refined.

\begin{exe}
\ex English
\begin{xlist}
\ex believe : (verb)
\ex believe + able \(\rightarrow\) believable : (adjective)
\ex un + believe + able \(\rightarrow\) unbelievable : (adjective)
%\ex Un+Believe+ Able+ ty  \(\rightarrow\) Unbelievablity : (noun)
\end{xlist} 
\end{exe}
Derivational morphological processes may employ any kind of morphological form that we have seen; they can range from simple concatenation processes, as in Turkish, to templatic morphology, as in Semitic languages.
\begin{exe}
\ex Turkish
\begin{xlist}
\ex 
ruhsat +lan +dır +ıl+ama +ma+sı+nda +ki \(\rightarrow\) ruhsatlandırılamamasındaki\\
having had the inability to license (adjective)
\end{xlist} 
\ex Hebrew
\begin{xlist}
\ex k.t.b : write + shi\(\Box\)\(\Box\)e\(\Box\) : redo  \(\rightarrow\) shikteb : re-wrote  (verb)
\end{xlist} 
\end{exe} 
 

\paragraph{(ii) Inflectional Morphology.} Inflectional processes add grammatical properties to lemmas. These properties may be inherent properties (such as gender, number and person), or properties that recover the role of a word in the sentence (case or agreement). Inflectional processes take a lemma and create its {\em paradigm}, the set of inflectionally related word forms.

 \begin{exe}
 \ex
 \label{greek} A Greek Inflectional Paradigm
 \begin{center}
 \begin{tabular}{|rrc|cc|}
 \hline
lemma: l\'{u}\={o}  (I loose)&  &  & Present & Imperfect \\
& number & person & & \\
\hline
 & singular & 1 & luo&eluon \\
 & singular & 2 & lueis & elues \\
 & singular & 3 &luei &  elue\\
 \hline
 & dual & 2 &lueton & elueton\\
 & dual & 3 &lueton &  eluet\={e}n\\
 \hline
 & plural & 1 &luomen & eluomen\\
 & plural & 2 &luete & eluete \\
 & plural & 3 & luousi & eluon \\
 \hline
 \end{tabular}
 \end{center}
 \end{exe}

 %Inflectional morphology creates word forms which share core meaning and lexical material, but differ in the grammatical properties. Word forms that share a lemma may differ in their semantic properties of the entities they refer to in the world (feminine or masculine, singular or plural) they may differ in  Inflectional morphology typically does not change the category of the word, it merely adds information about the semantic and syntactic contexts in which it may appear. 
 
% \begin{figure}
 %-- Paradigm in English -- Paradigm in Hebrew-- paradigm in German\\
 %\end{figure}
 
 A paradigm is a multidimensional array that indicates the range of different properties that can be marked on top of the core lexical meaning. The coordinates of each cell indicate a particular combination of grammatical properties, and the content of the cell indicates the word-form that realizes the lemma with these inflectional properties.
 
 In many languages we find the phenomenon of paradigm {\em syncretism}: a single word-form that appears in multiple cells (such as the word-form lueton in (\ref{greek})). Paradigm syncretism adds another level of ambiguity to the form of words -- it is not only the case that simple morphs may mark different features, but complex morphological forms may also be ambiguous as to the feature bundles that they realize.
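A paradigm fragment and its syncretism can be represented as a mapping from cell coordinates to word-forms; reversing the mapping exposes the ambiguity. The fragment below is modeled on the Greek example, with (number, person, tense) coordinates of our own choosing, for illustration only.

```python
# A paradigm fragment: cell coordinates -> word-form.
PARADIGM = {
    ("singular", 1, "present"): "luo",
    ("singular", 2, "present"): "lueis",
    ("singular", 3, "present"): "luei",
    ("dual", 2, "present"): "lueton",
    ("dual", 3, "present"): "lueton",
}

def cells_of(form):
    """All paradigm cells a given word-form may realize."""
    return sorted(cell for cell, f in PARADIGM.items() if f == form)

print(cells_of("lueton"))  # two cells: the form is syncretic
```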
  
 

\paragraph{(iii) Compounds and Clitics.}
 Compounding and cliticizing  are morphological processes that may conjoin  word-forms from different paradigms.
 In some languages, these morphological processes are the way of generating multiword expressions.
 
 \begin{exe}
\ex Swedish
\begin{xlist}
\ex Realisationsvinstbeskattning\\
capital gains tax
\end{xlist}
\ex Persian
\begin{xlist}
\ex dox'tær-e zi'b\={a}-ye d\={u}stæm\\
girl beautiful-of friend-my\\
my friend's beautiful daughter
\end{xlist}
\end{exe} 

This effectively means that we have different elements with different functions in the same expression, and each element may have its own position in the syntactic (and semantic) representation.
 
 \paragraph{Morphological Synthesis Across Languages}
 As a general rule, there is a universal order in which  morphological processes are employed in  synthesizing words in natural language: derivational processes usually  precede inflectional processes, which in turn  precede compounding and cliticizing additional elements. In morphology theory, it is customary to hear that `derivation feeds inflection' (or  that `inflection bleeds derivation').   %This ordering is depicted  in (\ref{morph-order}).
 \begin{align}\label{morph-order}
  \text{\em derivation} <  \text{\em inflection} < \text{\em clitics/compounding} 
  \end{align}

Considering the paradigmatic view of morphology, this ordering logically follows: we must derive a lemma before we can inflect it, and we must realize the inflected word-forms before we can conjoin, compound, or contract them. We  assume that analysis should recover the same components, and may take place in the reverse order, following similar   reasoning.
 \begin{align}\label{analysis-order}
 \text{\em tokenizing clitics/compounds}  <  \text{\em recognizing inflection} < \text{\em analyzing derivation}
  \end{align}

 The dimensions of the paradigms and the extent of paradigm syncretism are subject to immense cross-linguistic variation. While English employs paradigms that are fairly modest, paradigms in richly inflected languages may have much larger dimensions, indicating a variety of properties and employing tens to hundreds, or even thousands, of forms that may realize the same lemma in different syntactic contexts. The productivity of derivational and compounding processes is similarly variable -- from a handful of idiosyncratic examples in one language to extremely elaborate and productive systems in others.
 
\subsection{Morphological Analysis and  Disambiguation}

 
 %The longest word in German.

 
 % \paragraph{The Structure of the Lexicon}
In the previous section we assumed that input words are theoretical primitives, and that \(\mcx=\Sigma^*\). In MRLs this assumption breaks down, as words are internally complex and may contribute different elements to the syntactic representation.
Our computational account of parsing MRLs then begins by formally defining a morphologically rich lexicon (henceforth, an MRLon) that relates morphological forms to morphological functions, assuming the different morphological processes we outlined above.

 %that do not map directly onto words in closed set \(\Sigma\). 
\begin{exe}
\ex
\begin{itemize}
\item Let \(\mathcal{L}\) be a finite set of   lemmas (a.k.a.\ word senses)
\item Let \(\mathcal{C}\) be a set of  categories (a.k.a.\ part-of-speech tags)
\item Let \(\mathcal{A}\) be a set of  property names (a.k.a.\ features or properties)
\item Let \(\mathcal{V}\) be a set of  property values (a.k.a.\ feature values or feature assignments)
\end{itemize}
For brevity, we denote by \(\mathcal{V}_a\) the set of values that an attribute \(a\in\mathcal{A}\) may be assigned. 
\ex
A  lexicon entry \(e\) is a tuple: \[e=\langle l, c, b\rangle\] 

such that \(b\) is a property bundle denoting a set of attribute:value pairs
  \[b=\{a:v|a\in \mathcal{A}, v\in \mathcal{V}_a\}\]
\ex

We define the  unlexicalized morphosyntactic representation (MSR) of a lexical entry \(\langle l,c,b\rangle\)  to be the respective unlexicalized tuple  \( \langle c,b \rangle \).
\end{exe}
Examples of lexical entries for a fragment of English are given in Table \ref{lex:e}. Note that all entries contain an MSR, but the lemma \(l\) may remain empty in the case of function words (a.k.a.\ ``stop words" in information retrieval applications). The property bundle \(b\) is empty in case no grammatical properties are indicated beyond the word-sense. The category \(c\) is mandatory. %\footnote{\url{http://en.wikipedia.org/wiki/A_Boy_Named_Sue}} 

\begin{table}\center
\scalebox{0.9}{\begin{tabular}{|r|cll|}
\hline
token & lemma \(l\) & category \(c\) & property bundle \(b\)\\
\hline
``Lucy" & Lucy & NN & \{gender:feminine, number:singular\}\\
``Lucy" & Lucy & NN & \{gender:masculine, number:singular\}\\
``snowing" & snow & VB & \{tense:present\} \\
%``snowing" & snowing & NN & \{countable:-\} \\
``apples" & apple & NN & \{number:plural,countable:+\} \\
``sings" & sing & VB & \{person:3,number:singular,tense:present\} \\
``the" & - & DET & \{definiteness:+\} \\
``a" &  - & DET & \{definiteness:-\} \\
``will" & -  & AUX & \{tense:future\} \\
``is" &  - & COP & \{person:3,number:singular,tense:present\} \\
``his" &  - & PRP\$ & \{gender:masculine, number:singular\} \\
``yesterday" &  yesterday & RB & - \\
``knowingly" &  knowingly & RB & - \\
\hline
\end{tabular}}
\caption{A morphologically-rich lexicon for a fragment of English}\label{lex:e}
\end{table}
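The entries of Table \ref{lex:e} can be given a minimal computational sketch. The encoding below is illustrative only (the class and field names are our own, not a committed implementation): an entry is a triple of lemma, category, and property bundle, and the MSR is obtained by dropping the lemma.

```python
from dataclasses import dataclass

# A sketch of an MRLon entry <l, c, b>; names are illustrative.
@dataclass(frozen=True)
class Entry:
    lemma: str                       # l; empty string for function words
    cat: str                         # c; the mandatory POS category
    bundle: frozenset = frozenset()  # b; a set of (attribute, value) pairs

    def msr(self):
        """The unlexicalized morphosyntactic representation <c, b>."""
        return (self.cat, self.bundle)

# Two entries for the surface form "Lucy": same lemma and category,
# different property bundles (an ambiguous proper noun).
lucy_f = Entry("Lucy", "NN", frozenset({("gender", "feminine"), ("number", "singular")}))
lucy_m = Entry("Lucy", "NN", frozenset({("gender", "masculine"), ("number", "singular")}))

# A function word: the lemma slot stays empty, but the category is mandatory.
the = Entry("", "DET", frozenset({("definiteness", "+")}))
```

Note that the two ``Lucy" entries differ only in their bundles, while sharing the same MSR category; this distinction is what the paradigm function below will exploit.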

\paragraph{Some observations on Table \ref{lex:e}}

\begin{itemize}
\item In each entry, the category \(c\) is fixed: the word "knowingly" does not decompose into a lemma and features as distinct entries in the MRLon. A derivational process fused the semantics of "knowing"+"ly" to obtain this adverbial, but for the purpose of \emph{syntactic} parsing, all semantically-bearing derivational processes must have already taken place. One reason for this is that derivational morphemes encode semantic information that may also change the category \(c\) of the token (in the "knowingly" example, V has changed to RB), but in order to proceed with syntactic parsing, we want to view the category \(c\) as fixed. 

\item  In each entry, the category \(c\) is mandatory: note that the word "sings" does not decompose into "sing" and "s" as distinct entries in the MRLon. "s" is an inflectional morpheme that adds features (third person singular) to the syntactic category, and does not carry its own part-of-speech tag. 
% In order to proceed with syntactic parsing, we want to view the .
\item In each entry, the category \(c\) is unique: you might wonder, at some later point, what distinguishes a single lexical entry with many features from a complex token that contains multiple entries. A rule of thumb is that the number of independent POS categories defines the number of lexicon entries "lumped together" in a token. E.g., "knowing" is a verb marked with tense features, and "ing" cannot be a standalone entry with its own syntactic category. A word like "won't", in contrast, contains two different POS tags, "will" (AUX) and "not" (RB), and thus we do separate it into two morphemes.

\end{itemize}
This MRLon is based on organizational principles that fit {\em any} language, and we easily applied it here to English; it can equally be applied to a range of other language types.
In the case of morphology-free (isolating) languages, \(b=\emptyset\) and the definition of an MRLon entry collapses into a tagged word. In the case of morphologically rich languages, \(b\not=\emptyset\) and there are multiple lexical entries for each word sense. We refer to the set of lexical entries that share a POS category and a word sense as a {\em paradigm}.

\[\rho(\langle l,c,b\rangle) =\{\langle l_i,c_i,b_i \rangle \mid l_i=l, c_i=c\} \]
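A minimal sketch of the paradigm function \(\rho\) over a toy lexicon fragment (the entries and names are illustrative assumptions, not drawn from any particular resource):

```python
# Entries are (lemma, cat, bundle) triples, following the MRLon definition.
LEXICON = [
    ("sing", "VB", frozenset({("person", "3"), ("number", "singular"), ("tense", "present")})),
    ("sing", "VB", frozenset({("tense", "past")})),
    ("sing", "VB", frozenset({("form", "infinitive")})),
    ("apple", "NN", frozenset({("number", "plural")})),
]

def paradigm(entry, lexicon=LEXICON):
    """rho: all entries sharing the lemma (word sense) and POS category."""
    lemma, cat, _ = entry
    return [e for e in lexicon if e[0] == lemma and e[1] == cat]

# All three inflected entries of the verb "sing" fall into one paradigm,
# while the noun "apple" forms a singleton paradigm in this toy lexicon.
sing_paradigm = paradigm(LEXICON[0])
```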

%\subsection{Morphological  Analysis}
\begin{exe}
\ex Let us assume an MRLon \({\Sigma}\). We define a morphologically rich language as a triplet 
\[L = \langle \mathcal{T}, \Sigma, O\rangle \] 
such that:
\begin{itemize}
\item \(\mathcal{T}\) is a finite set of  space-delimited tokens 
\item \({\Sigma}\) is a finite set of lexical entries 
\item  \(O \) is a finite set of aggregation operations that may aggregate  lexical entries.
%\footnote{Such aggregation operations may go beyond simple concatenation, as has been demonstrated before.} 
\end{itemize}
We require that \(\forall t\in \mathcal{T} : \exists\, m_1\ldots m_n \in {\Sigma},\ \oplus_1 \ldots \oplus_{n-1}\in O  : ((m_1 \oplus_1 m_2) \oplus_2 \cdots \oplus_{n-1} m_n) =t\) 
\end{exe}
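The requirement above can be sketched for the simplest case, where the only aggregation operation is string concatenation (the toy entry set is illustrative; as noted, real MRLs may require richer, non-concatenative operations):

```python
# A toy entry set; surface forms follow the Hebrew BCLM example below.
SIGMA = {"B", "CL", "FL", "HM", "BCL", "CLM", "BCLM", "H"}

def spells_out(morphemes, token):
    """Check that ((m1 + m2) + ... + mn) == t under plain concatenation."""
    return all(m in SIGMA for m in morphemes) and "".join(morphemes) == token

# "B"+"CLM" concatenates to the surface token "BCLM"; by contrast,
# "BCL"+"FL"+"HM" does NOT concatenate to "BCLM" -- recovering that
# analysis requires aggregation operations beyond concatenation.
```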

\paragraph{Morphological Analysis}
Let \(L= \langle\mathcal{T}_L,\Sigma_L, O_L\rangle\) define a morphologically rich language. 
%and let \(\mathcal{S}\)  be the set of all sequences composed of elements from \({S_L}\), that is, \(\mathcal{S}={S_L}^*\).  
%and let \({S_L}\) be the set of valid spellouts over \(\Sigma_L, O_L\).
A sentence \(x\in\mathcal{X}\) of length \(n\) in the MRL \(L\) is defined to be \(x=x_1\ldots x_n\) where \(\forall i : x_i\in \mathcal{T}_L\).
A token \(t\in \mathcal{T}_L\) may be {\em spelled out} as a sequence of lexicon entries \(e_1\ldots e_m\) which were aggregated to form the space-delimited tokens in the written text.   
{\em Spelling out} the content of tokens in the input stream into valid lexicon entries is the first step for {\em any} language and any NLP system.\footnote{In English NLP, this task is known as {\em tokenization}, where word tokens such as ``I'm", ``John's" or ``Thanks!" are spelled out as ``I+am", ``John+is" and ``Thanks+!" respectively.}

%A sentence \(x\in \mathcal{X}\) of lengths \(n\)  contains a sequence of potentially-complex space-delimited  tokens \(x_1...x_n\in\mathcal{T}\) 

\begin{exe}
\ex The set of possible spellouts in the language \(L\) is a relation between the set of tokens and sequences of MRLon entries:
 \[\mathcal{S}\subseteq \mathcal{T}\times {\Sigma}^*\] 
 \end{exe}
%
By definition, each space-delimited token must have at least one element in this set. 
 Ambiguity arises when there are multiple spellout possibilities for a single token. A simple example of spellout ambiguity in English is "John's", which can be spelled out in two ways: "John + is" (a simple contraction of two lexical entries) and "John + 's" (a lexical morpheme and a possessive clitic). %Morphologically richer languages typically show far more severe spellout ambiguity.
  An example of far more severe spellout ambiguity in Hebrew is shown in Tables \ref{heb:lex} and \ref{heb:tok}.

\begin{table}\center
\begin{tabular}{|r|c|}
\hline
``aren't" &  are + not \\
``won't" &  will + not \\
``John's" &  John + his\\
\hline
\end{tabular}
\caption{Complex space-delimited tokens in English}\label{eng:tokens}
\end{table}


 \begin{table}\center
\begin{tabular}{|r|c|}
\hline
``BCLM" &  B + CL + FL + HM \\
``BCLM" &  BCL + FL + HM \\
``BCLM" &  B+CLM \\
``BCLM" &  B+H+CLM \\
``BCLM" &  BCLM \\
\hline
\end{tabular}
\caption{An ambiguous space-delimited token in Hebrew}\label{heb:tok}
\end{table}


\begin{table}\center
\begin{tabular}{|r|cll|}
\hline
``B" & in & IN & - \\
``BCL" & onion & NN & - \\
``BCLM" & organization & NNP & - \\
``FL" & of & IN & - \\
``CL" & shadow & NN & \{gender:masculine, number:singular\} \\
``HM" & they & PRP & \{gender:masculine, number:plural,case:nom\} \\
\hline
\end{tabular}
\caption{A morphologically-rich lexicon for a fragment of Hebrew}\label{heb:lex}

\end{table}


\begin{exe}
\ex For any MRL we assume a morphological analysis function   \[\mathcal{M}: \mathcal{T}\rightarrow\mathcal{P}({\Sigma}^*)\] 
\end{exe}
The function assigns to each token a set of all of its spellout possibilities. 
This set may be represented as a lattice structure, as shown in Figure \ref{x}.
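As a concrete sketch, \(\mathcal{M}\) may be backed by a simple lookup table over the spellouts of Tables \ref{eng:tokens} and \ref{heb:tok}. The table-driven form here is purely illustrative; a real analyzer would be rule-based, finite-state, or learned from data, as discussed below.

```python
# A sketch of the morphological analysis function M: T -> P(Sigma*).
# The spellout table is a toy stand-in for a real analyzer.
SPELLOUTS = {
    "John's": [("John", "is"), ("John", "'s")],
    "BCLM":   [("B", "CL", "FL", "HM"),
               ("BCL", "FL", "HM"),
               ("B", "CLM"),
               ("B", "H", "CLM"),
               ("BCLM",)],
}

def analyze(token):
    """Return the set of spellout possibilities of a space-delimited token."""
    # By definition every token has at least one analysis; as a fallback,
    # the token itself is treated as a single lexicon entry.
    return SPELLOUTS.get(token, [(token,)])

def analyze_sentence(tokens):
    """M(x): the per-token analyses, to be assembled into a lattice."""
    return [analyze(t) for t in tokens]
```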
%\subsection{Morphological Disambiguation}

Every token \(x_i\in \mathcal{T}_L\) can have multiple spellouts, that is, a set of elements from \(\mathcal{S}\).
% that is \[\{ (x_i,s) |(x_i,s) \in S_L\} \geq 1\]
%\(x\in T_L,SP,t\in \mathcal{S}^*:s1...snSPt, x=s_1...s_n\)
%A sentence in a Morphology Rich Language (MRL) L is a sequence of tokens: \(x=x_1...x_n\)
Morphological analysis maps each token in \(\mathcal{T}_L\) to its set of spellout possibilities:
\[\mathcal{M}(x_i) = \{(x_i,s_1) , (x_i,s_2), \ldots, (x_i,s_n)\}\] where \[(x_i, s_j)\in \mathcal{S}\]
Note that by our definition of an MRL it is always the case that \(|\mathcal{M}(x_i)|>0\).
For a sentence \(x=x_1\ldots x_n\), the morphological analysis \(\mathcal{M}(x)\) is the concatenation of the individual \(\mathcal{M}(x_i)\):
\[\mathcal{M}(x)= \mathcal{M}(x_1)+\mathcal{M}(x_2)+\ldots+\mathcal{M}(x_n)\]
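Assembling \(\mathcal{M}(x)\) from the per-token analyses can be sketched as follows: each path through the resulting lattice corresponds to one choice of spellout per token (the toy analyses are illustrative):

```python
from itertools import product

# Toy per-token analyses; a real system would obtain these from M.
TOKEN_ANALYSES = {"John's": [("John", "is"), ("John", "'s")]}

def lattice_paths(tokens):
    """Enumerate the paths of M(x): one spellout choice per token."""
    per_token = [TOKEN_ANALYSES.get(t, [(t,)]) for t in tokens]
    # Concatenate the chosen spellout tuples along each path.
    return [sum(choice, ()) for choice in product(*per_token)]
```

In practice the lattice is kept packed rather than enumerated, since the number of paths grows multiplicatively with sentence length.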
%A sentence mapping \(s^*\) is a function from tokens to their correct spellout in context.

The morphological analysis function may be spelled out manually as a set of hand-written rules; it may be implemented computationally using a formal grammar, for instance a regular grammar (an equivalent view is to implement MA as a finite-state machine); or it may be acquired in a data-driven way from a set of annotated or unannotated data.


%A spelled out sentence is a concatention   of the spellouts of \(x\), given S:
%\(S_M=s0s01..s0js11..s1k...sn1..snl\) where s0is the special token ROOT.
\paragraph{Morphological Disambiguation}
We define a mapping \(\gamma\) to be a set of ordered pairs of tokens and their correct spellouts:
\[\gamma=\{(x_i,e_1\ldots e_{m_i})\mid e_1,\ldots,e_{m_i} \in \Sigma\}\]

The task of {\em morphological disambiguation} picks out such a mapping for a sentence \(x\in\mathcal{X}\). We will later define \(s^*\) as the probabilistic (or highest-scoring) choice of \(\gamma\).
%\[s^* = argmax_{s\in\mathcal{M}(x)} P(s|x)\]

%We understand \(s^*\) as the correct spellout (or correct segmentation, in the simple case) of a word in context.
\[s^* = argmax_{s\in\mathcal{M}(x)} Score(s,x)\]

Morphological disambiguation thus presupposes a morphological analysis function, and performs disambiguation in context;
without context this would be impossible. Even with context, disambiguation may still be hard, even for a human hearer: depending on extra-linguistic factors, different decisions may be taken. 
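A toy sketch of the argmax above, with invented unigram scores standing in for a learned scoring function (both the analyses and the scores are illustrative):

```python
# Disambiguation as s* = argmax_{s in M(x_i)} Score(s, x), where Score
# here sums invented log-probability-like unigram scores per morpheme.
ANALYSES = {
    "BCLM": [("B", "CL", "FL", "HM"), ("BCL", "FL", "HM"), ("B", "CLM"), ("BCLM",)],
}
UNIGRAM = {"B": -1.0, "CL": -2.0, "FL": -1.5, "HM": -1.5,
           "BCL": -4.0, "CLM": -2.5, "BCLM": -6.0}

def score(spellout):
    """Score(s, x): a toy context-free stand-in for a learned scorer."""
    return sum(UNIGRAM.get(m, -10.0) for m in spellout)

def disambiguate(token):
    """Pick the highest-scoring spellout among the analyses of a token."""
    return max(ANALYSES.get(token, [(token,)]), key=score)
```

Under these invented scores the two-segment analysis wins; a context-sensitive scorer could of course prefer a different path for the same token in a different sentence.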

We may design models for morphological analysis in an analogous way to our parsing models, assuming a set of annotated or unannotated examples.
The model definition will be as follows
\[M=\langle \mcx,\mcy,\mathcal{M}, {\bf g}, {\bf w}\rangle\]
This definition is almost the same as our definition of a statistical parser, with one difference: we explicitly assume a morphological analysis component as part of the architecture, which pre-processes the input to provide all possible morphological analyses, represented as a lattice structure.

In case we have a data set \(\mathcal{D}\) in which each space-delimited token is morphologically tokenized and analyzed, implementing a model for morphological disambiguation will follow the already familiar path:
\begin{itemize}
\item {\bf modeling:} Designing the input and output spaces and the morphological analysis function for generating all the morphological analysis candidates for an input string
\[\mathcal{M}: \mcx\rightarrow\mathcal{P}({\Sigma}^*)\]
\item {\bf learning:} Assuming a model \(M\) and a data set \(\mathcal{D}\), we want to select an instance of the model, where \(\omega\) is a parameter value assignment for \wv.
\[m=\langle \mcx,\mcy,\mathcal{M}, {\bf g}, \omega\rangle\]
\item {\bf decoding:} Assuming a model instance \(m\) and an input sentence \(x\in\mcx\) as defined above, we want to recover the best path in the morphological analysis lattice.
\[s^* = argmax_{s\in\mathcal{M}(x)} Score(s,x)\]
\item {\bf evaluation:} Given a predicted spellout \(y_p\) and the gold spellout \(y_g\) of \(x\), the evaluation algorithm assigns a score that reflects the (normalized) similarity between the two. 
\[Eval(y_p,y_g) = \frac{similarity(y_p,y_g)}{norm(y_p,y_g)}\]
\end{itemize}
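One common instantiation of the evaluation step above is segment-level F1, sketched here; the particular choices of \(similarity\) and \(norm\) are ours, for illustration, and other instantiations are possible:

```python
# Normalized similarity between a predicted spellout y_p and a gold
# spellout y_g, computed as F1 over morphological segments.
def segment_f1(predicted, gold):
    pred, ref = set(predicted), set(gold)
    if not pred or not ref:
        return 0.0
    overlap = len(pred & ref)          # segments shared by both spellouts
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Note that the predicted and gold spellouts may differ in length, which is exactly why a normalized measure is needed here.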
A graphical depiction of the architecture can be found in Figure \ref{fig:arch4}. The main difference from Figure \ref{fig:arch3} is that the input first goes into an MA component that outputs a lattice structure, which then enters decoding.

%In parsing, the part of the model that specified all possible output candidates, trees or hyperpaths, is assumed to be a part of the decoder. We didn't we assume so here? The reason for that is that in contrast to syntactic analysis that assume composition operations of a general abstract rules, the morphological analysis function requires a set of language specific morphological operations to be defined. Separating the MA component from the rest of the model, and thus approach data-driven morphological disambiguation in a language independent way, just like we did in parsing.

\begin{figure}
\center
\scalebox{0.45}{
\includegraphics{system-01-03}
}
%sentence \(\rightarrow\)\begin{tabular}{c}
%\fbox{\begin{tabular}{c} \ \fbox{parsing} \\  \(\uparrow\) \\\fbox{training}\\\end{tabular} }\\ \(\uparrow\) \\ treebank
%\end{tabular} \(\rightarrow\) \begin{tabular}{c}parse-tree\\ \(\downarrow\)  \\  \fbox{eval} \\ \(\uparrow\) \\ gold-tree\end{tabular}\(\rightarrow\) evaluation scores
\caption{TODO: change to an Architecture for Morphological Analysis and Disambiguation}
\label{fig:arch4}
\end{figure}


\section{What is this Book About?}
Before we can say what this book is about, it is important to clarify what it is not about.
This book is not about the design of morphological models for analysis and disambiguation, and it is not about developing a purely syntactic parser for a particular language. Rather, this book is about designing  parsing models for {\em morphosyntactic} analysis and disambiguation, taking into account the inter-relations between morphology and syntax, and developing computational methods that can cope with it.

%Before we dive into the specifics of the different models for morphosyntactic in the next chapters, this 
This section serves to instantiate an argument in favor of developing {\em joint} models for morphosyntactic analysis and disambiguation (Section~\ref{sec:joint}). We outline the overarching challenges in developing such architectures (Section~\ref{sec:challenges}) and then present the structure and plan for the rest of this book (Section~\ref{sec:structure}).

\subsection{Morphosyntactic Analysis and Disambiguation}\label{sec:joint}

Syntactic analysis reveals the underlying structure of sentences, a crucial step towards representing sentence meaning.
In English and similar languages, the input to the parser consists of space-delimited  tokens that represent the basic units of  syntactic analysis.
In morphologically rich languages 
each  space-delimited word-token  may contain multiple units which carry different aspects of meaning. In particular, it may define a lemma, a set of properties, additional elements, or the relation to other parts of the syntactic structure.
%To illustrate, recall the Hebrew token {\em wkfmhbit} in example (\ref{heb-seg}).
% \heb{◊ï◊õ◊©◊û◊î◊ë◊ô◊™}.
%in example  \eqref{heb-seg}
% It contains multiple morphological units  indicated with their own part-of-speech tags (and/CC when/REL from/PREP the/DT house/NN).

 In order to parse a sentence in an MRL, the text first has to go through morphological analysis, which uncovers the sequence of relevant, morphologically rich lexical entries that can then be combined into phrases and sentences. The morphological analysis of the Hebrew phrase {\em wkfmhbit icati} uncovers six lexical entries, each of which is a different terminal in the syntactic parse tree.
 \begin{exe}
 \ex  {\em wkfmhbit icati}
 \begin{xlist}
 \ex {\em w}/CC {\em kf}/REL {\em m}/IN {\em h}/DT {\em bit}/NN  {\em icati}/VB  \\
 and when from the house left.1st.singular
 \end{xlist}
 \end{exe}
 
 \begin{figure}
 \center
\scalebox{0.85}{ \Tree[.ROOT [.FRAG [.CC and ] [.SBAR [.REL when ] [.S [.PP [.IN from ] [.NP the house ] ]  [.VP [.VB left ] ]  [.NP [.PRP *PRO* ]   ] ] ] ] ]}
 \end{figure}
% A morphological analyzer has to identify, at the very least, the different morphological segments,  their corresponding part-of-speech tags, and possibly a set of inflectional features.
 %\footnote{To know more on morphological analysis for Semitic languages, consult chapter 1, this volume.}  
%
The morphological analysis in MRLs may be highly ambiguous, due to high word-form variation, complex orthography, and, in some languages, the omission of diacritics. For example, a Hebrew word like {\em bclm} may admit different analyses, as illustrated in \eqref{ex:segments}.
%\footnote{TODO: add arabic and maltese examples, add averages of ambiguity level on available corpora.}
%which imposes different segmentation into syntacticaly relevant units.
\begin{exe}
\ex {\em bclm}  
\label{ex:segments}
%\begin{xlist}
%\ex Hebrew\\ {\em bclm}
%\heb{◊ë◊¶◊ú◊ù}
\begin{xlist}
\ex {\em b}/IN {\em clm}/NN
\ex {\em b}/IN {\em h}/DT  {\em clm}/NN
\ex {\em b}/IN {\em h}/DT  {\em cl}/NN  {\em fl}/POSS  {\em hm}/PRN
\ex {\em bcl}/NN    {\em fl}/POSS  {\em hm}/PRN
%\ex \heb{◊ë◊¶◊ú}/NN  \heb{◊©◊ú}/POSS  \heb{◊î◊ù}/PRN
%\ex  \heb{◊ë}/IN  \heb{◊î}/DT \heb{◊¶◊ú◊ù}/NN
%\ex \heb{◊ë}/IN \heb{◊¶◊ú}/NN  \heb{◊©◊ú}/POSS  \heb{◊î◊ù}/PRN
%\ex \heb{◊ë◊¶◊ú}/NN  \heb{◊©◊ú}/POSS  \heb{◊î◊ù}/PRN
%\ex \heb{◊ë}/IN \heb{◊¶◊ú◊ù}/NN
%\ex  \heb{◊ë}/IN  \heb{◊î}/DT \heb{◊¶◊ú◊ù}/NN
%\ex \heb{◊ë}/IN \heb{◊¶◊ú}/NN  \heb{◊©◊ú}/POSS  \heb{◊î◊ù}/PRN
%\ex \heb{◊ë◊¶◊ú}/NN  \heb{◊©◊ú}/POSS  \heb{◊î◊ù}/PRN
\end{xlist}
%\ex Arabic
%\ex Amharic
%\end{xlist}
\end{exe}
Such ambiguity can only be resolved in context. In languages with relatively fixed word order, such as English, disambiguation methods that take into account the local linear context, such as neighboring words, may be sufficient for accurately disambiguating the morphological analysis. In MRLs, which exhibit relative flexibility in the placement of words and phrases, a word that contains disambiguating cues may be arbitrarily distant from the word that needs to be disambiguated. 

To illustrate, consider now the Hebrew word  {\em hneim} in Example~(\ref{disamb}).  % may appear in different phrases:
\begin{exe}
\ex\label{disamb} \begin{xlist}
\ex\label{disambig-adj}  {\em bclm \underline{hneim} fl hecim} % \heb{◊ë◊¶◊ú ◊î◊†◊¢◊ô◊ù ◊©◊ú ◊î◊¢◊¶◊ô◊ù}
\\ in-\underline{{\sc def}-shadow} \underline{{\sc def}-pleasant} of the-trees
\\ in the pleasant shadow of the trees
\ex\label{disambig-verb}  {\em bcl fl hecim hwa at zmninw \underline{hneim}} %\heb{◊ë◊¶◊ú ◊©◊ú ◊î◊¢◊¶◊ô◊ù ◊î◊ï◊ê ◊ê◊™  ◊ñ◊û◊†◊ô◊†◊ï ◊î◊†◊¢◊ô◊ù }
\\ in-{\sc def}-shadow of {\sc def}-trees \underline{he.MascSing} {\sc acc} time-ours \underline{made-pleasant.3MascSing}
\\ In the shadow of the trees he made our time pleasant
\end{xlist}
\end{exe}
 %\heb{◊î◊†◊¢◊ô◊ù}
In \eqref{disambig-adj} there is agreement on the definite article with the preceding noun, which renders the correct analysis of {\em hneim} a definite adjective. In \eqref{disambig-verb}, there is agreement with a preceding pronoun ``he", which is the subject of the entire sentence.
% and which appears distant  from the predicate due to the flexible order of phrases in Hebrew.
 This agreement helps us understand {\em hneim}
% \heb{◊î◊†◊¢◊ô◊ù}
 as a verb, inflected to reflect the agreeing  properties. Here, the existence of an arbitrarily distant agreeing element helps to pick out the correct morphological analysis of this word.

So, we argue that in order to assign a syntactic analysis we first need to identify the correct lexical entries that are involved and their morphosyntactic representation; but in order to disambiguate morphological analyses in context, we argue for the need to employ information concerning the overall syntactic structure, which is obtained through full-fledged parsing. Now, assuming we accept both premises, how can we break out of this loop?

% 

%Such ambiguity os again may be resolved using statistical modeling, where we aim to implement a prediction function  that selects the most probable sequence of terminals given the sequence of space delimited words in a language.
%But, as has been argued repeatedly,


 
% in order to predict the correct morphological segmentation, we need cues from the correct syntactic structure. So it appears as though in order to syntactically parse the sentence we need to first analyze it morphologically, but in order to correctly disambiguate morphological analyses, we need to first 
 A popular approach is to employ a pipeline architecture, as depicted in Figure \ref{pipeline}:
we first set up a morphological disambiguation component,
%utilizing surface cues from immediate context (neighboring words,  part-of-speech tags, prefixes, suffixes, etc.). We
and then provide the most probable morphological segmentation as input to a standard parsing model. The parser thus aims to assign a tree to the given sequence of morphological segments. The appealing property of a pipeline approach is its simplicity and modularity.
The downside of a pipeline is that errors in the morphological disambiguation component may propagate to the parser and seriously undermine its prediction accuracy. 




An alternative way to   build the architecture is to  select the most probable morphological segmentation and syntactic analysis at once. 
%The two types of architectures for parsing Semitic languages are sketched in Figures \ref{pipeline} and \ref{joint} respectively.
The main advantage of the joint strategy,  as advocated by \cite{tsarfaty06integrated, cohen07joint, rtyg08joint,goldberg11pcfgla}, is that
it  allows us to use syntactic information to disambiguate morphological information, and vice versa. It also helps to avoid error propagation. 
The joint strategy may be more challenging to develop and implement. Computationally, it may require us to traverse a larger search space combining possible morphological and syntactic analyses. Learning-wise, the interweaving of morphological and syntactic information in the overall feature function may create a complex model that is hard to acquire from data.
Add to that that the data sets available for parsing MRLs, which are far less studied than English, are often fairly small; this may pose additional challenges for devising appropriate learning techniques that generalize well and do not over-fit the training data.
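The contrast between the two architectures can be sketched on a toy example. All analyses and scores below are invented for illustration; real systems would use learned morphological and syntactic models:

```python
# Pipeline vs. joint disambiguation. The pipeline commits to the 1-best
# segmentation before parsing; the joint model maximizes the combined
# score over all (segmentation, tree) pairs.
SEGMENTATIONS = {"BCLM": [("B", "CLM"), ("BCL", "FL", "HM")]}
SEG_SCORE = {("B", "CLM"): 0.6, ("BCL", "FL", "HM"): 0.4}
# Score of the best tree over each segmentation (invented):
SYN_SCORE = {("B", "CLM"): 0.1, ("BCL", "FL", "HM"): 0.9}

def pipeline(token):
    """Commit early to the best segmentation, then parse it."""
    seg = max(SEGMENTATIONS[token], key=SEG_SCORE.get)
    return seg, SEG_SCORE[seg] * SYN_SCORE[seg]

def joint(token):
    """Maximize the combined morphosyntactic score over all segmentations."""
    return max(((s, SEG_SCORE[s] * SYN_SCORE[s]) for s in SEGMENTATIONS[token]),
               key=lambda pair: pair[1])
```

Here the pipeline commits to the morphologically preferred segmentation and cannot recover, while the joint model selects the segmentation with the better overall morphosyntactic score, illustrating how syntax can disambiguate morphology.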
%However, we show in section \ref{sec:const} a representation in which the morphological and syntactic structures interact via  lattice parsing, allows to retain  efficiency as well as accurate parsing architecture.

%Correctly resolving the morphological ambiguity in the input is essential for obtaining a correct parse, but whether this morphological disambiguation should be done before or jointly with the parser is an empirical question.
% In Section \textsection\ref{sec:const} we present  an efficient lattice-based decoder that caters for a joint solution for syntactic and morphological disambiguation. %Semitic languages.



\begin{figure}
\begin{center}
%{\em sentence} \(\rightarrow\) \fbox{analyzer}  \(\rightarrow\) {\em segmented sentence}  \(\rightarrow\) \fbox{parser}  \(\rightarrow\) {\em parse-tree}
\scalebox{0.4}{\includegraphics{system-01-04}}
%\[s* = \text{argmax}_{s\in MA(x)} p(s|x) \text{\hspace{0.2in}} y* = \text{argmax}_{\{y|yield(y)=s\}} p(y|s)\]

\end{center}
\caption{A pipeline architecture for parsing morphologically rich languages.}\label{pipeline}
\end{figure}




\begin{figure}
\begin{center}
%{\em sentence} \(\rightarrow\) \fbox{parser} \(\rightarrow\)\begin{tabular}{c} {\em  parse-tree }\\ {\em segmented sentence}\end{tabular}
\scalebox{0.4}{\includegraphics{system-01-05}}

%\[s*,y* = \text{argmax}_{yield(y)\in MA(x)}p(y,s|x)\]
\end{center}\caption{A joint architecture for parsing morphologically rich languages.}\label{joint}

\end{figure}


%\paragraph{Morphology complements syntax in the realization of grammatical relations.} Our departure point has been the definition of syntactic analysis as an automatic natural language processing task with the primary goal of  recovering the predicate argument structure of natural language sentences. In language which are morphologically impoverished, it is possible to infer, with relatively high accuracy, predicate argument structures from syntactic information alone. In MRLs, this is not the case.  The rich functional information we find in bound morphemes, and word order variation that are allowed in such languages, makes syntactic information insufficient for recovering a complete and semantically  useful  representation of the grammatical relations in the sentence.   This primary goal of this book is therefore to introduce effective methods for morpho-syntactic analysis and disambiguation, that would be informative and useful for further semantic processing. TODO: example figure

%\paragraph{Morphological disambiguation is necessary for syntactic parsing} Morphological information provides invaluable information to the syntactic parser --- first and foremost it provides essential information about the input signal, namely, how the space-delimited input tokens should be spelled-out a list of  lexical entries that may serve as basic components for the construction of syntactic structures (constructing ,meanigful phrases and sentences out of them, and/or the introduction of dependencies between them). Secondly, syntactic models for MRLs may crudely generate a large number of potential parse candidate -- this is a common property of data driven parsing models but it is further magnified in langauages that allow a fair amount of lexibility in the ordering of word or phrases. Syntactic parsing than crucially depends on the availability of disambiguating information in the statistical models, and in many MRLs, the only way to rule out impossible analysis if by means of verifying that the morphological marking of the different syntactic components are complete and coherent.  Finally, morphological disambiguation identifies the core semantic content of sentences, the lemmas, and are thus necessary for  the construction of abstract, cross-linguistic semantic resources that would not need to deal with language-specific inflectional and derivational information.
% disambiguation recovers complete MSRs for the lexical entries, and these entries have crucial role in selecting the 
%TODO: example figure

%\paragraph{Syntactic parsing is oftentimes necessary for morphological disambiguation}

%The above two premises suggest NLP architectures that are composed as a pipeline, in which a morphological model for analysis and disambiguation 
%TODO: example figure

%\paragraph{Joint Modeling} This book is  about joint modeling of morphology and syntax.  The alternative architectures one may consider for this task are summerized in figure 1. 

\subsection{The Overarching Challenges}\label{sec:challenges}
 
We view {\em statistical parsing} of morphologically rich languages as a structure prediction task where we aim to induce a prediction function \(f:\mathcal{X}\rightarrow\mathcal{Y}\), where \(\mathcal{X}\) is the set of sentences in an MRL and \(\mathcal{Y}\) is the set of syntactic parse-trees, from a set of annotated examples. To define the function \(f\) we first need to define the input and output spaces, \(\mathcal{X}\) and \(\mathcal{Y}\), respectively. 
In English parsing, \(x\in\mathcal{X}\) is a sequence of words from a finite vocabulary \(\Sigma\), that is, \(x\in \Sigma^*\), and \(\mathcal{Y}\) contains parse trees with elements of \(\Sigma\) as nodes. In MRLs, elements in \(x\) have a more complex internal structure, and thus are not necessarily contained as-is in \(y\in\mathcal{Y}\). This poses challenges for each of the architectural components we specified above:


\begin{itemize}
\item  
 {\bf The Representation Challenge:}  The input signal in MRLs contains tokens that are internally complex and inherently ambiguous, and the input tokens do not reside as-is in the syntactic representation. How should we then define the input and output spaces, \mcx~and \mcy, respectively? How can we define formal devices that generate for each input sequence the set of morphosyntactic parse candidates?
\item 
 {\bf The Modeling Challenge:}  To obtain computationally tractable models we typically have to decompose the overall structure and impose independence assumptions between its different parts. How should we define the structure decomposition? How should we design the feature function? And, most crucially, what kind of independence assumptions are appropriate for the morphosyntactic parsing task?
\item 
 {\bf The Decoding Challenge:}  In MRLs, the input signal for the decoder is inherently ambiguous, and the sequence of lexical entries may be spelled out in different ways. Each possible morphological spell-out gives rise to a potentially exponential number of parse candidates to select from, yielding a huge search space. What search strategy would be appropriate for traversing this space efficiently, or for performing a more limited search that retains formal guarantees or, alternatively, does not sacrifice much accuracy?
 \item 
 {\bf The Learning Challenge:} In acquiring models from annotated data we have to generalize from seen events to unseen combinations. For MRL parsing, the challenge of data-driven acquisition of a statistical model is heavily magnified. Both the lexical and the syntactic parts of our annotated examples will cover only a small fragment of the possible combinations. Lexical entries that belong to rich morphological paradigms are subject to an immense number of realization possibilities, and we will not be able to observe all combinations up front. At the syntactic level, variability in word-ordering patterns implies a huge space of crude over-generation, which will have to be effectively constrained or controlled. 
 What learning algorithms would be appropriate for obtaining good generalization of morphosyntactic structures while avoiding over-fitting?
 \item    {\bf The Evaluation Challenge:}  Parse evaluation metrics and procedures assume that the input sequence \(x\in\mcx\) is effectively a part of the output structure, and that competing analyses may be compared with respect to its indexing. When parsing MRLs, \(x\) is not necessarily a part of the output structure, and in case the morphological hypothesis is incorrect, the gold parse and the parse hypothesis may differ in length and have different lexical yields. What metrics and procedures are appropriate for comparing morphosyntactic candidates?
\end{itemize}
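As a toy illustration of the decoding challenge above, the following sketch enumerates all ways a single space-delimited token may be segmented into known morphemes. It is a simplification that uses a small hypothetical transliterated lexicon rather than any real morphological analyzer; in practice, each such segmentation would further admit its own set of parse candidates.

```python
def segmentations(token, lexicon):
    """Yield every way to split `token` into a sequence of morphemes
    that all appear in `lexicon` (a naive exhaustive enumeration)."""
    if not token:
        yield []
        return
    for i in range(1, len(token) + 1):
        prefix = token[:i]
        if prefix in lexicon:
            for rest in segmentations(token[i:], lexicon):
                yield [prefix] + rest

# A hypothetical Hebrew-like example: the transliterated token "bbyt"
# ("in the house") admits several readings depending on segmentation.
lexicon = {"b", "h", "byt", "by", "y", "t"}
analyses = list(segmentations("bbyt", lexicon))
# e.g., ["b", "byt"] is one of the competing morphological spell-outs
```

Even this toy lexicon yields multiple competing analyses for a four-character token; with a realistic lexicon and sentence-length input, the number of morphological spell-outs, and hence parse candidates, grows very quickly.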

This book then develops, from the ground up, a theory for representing structures and modeling morphosyntactic parsing in constituency-based, dependency-based, and hybrid approaches. Our overarching goal is to develop systems and strategies that effectively address the aforementioned challenges.

% In this chapter we  concentrate on  defining \(x\in\mathcal{X}\)  The next chapter will in turn examine the  intricate relation of elements in \(\mathcal{X}\) to the formal syntactic structures in \(\mathcal{Y}\), and how the  parsers may learn to predict such structures. When parsing morphologically languages, it is  crucial to understand and model correctly  the structure of the input signal, otherwise, the parsing results may be unpredictable or uninterpretable.  This chapter is dedicated to formally define what are words, what is the input signal to the parser, and what is the relation between them.

%This book addresses the question how to build models and algorithms that can cope with morphologically rich languages. In practice, we are still concerned with a structure prediction task where we want to induce a prediction function  \(h:\mcx \rightarrow\mcy\) where \(x\in\mcx\) is a sentence in an MRL and \(y\in\mcy\) is it syntactic, or rather, morphosyntactic, representation.

%We have so far assumed that sentences are drawn from a set of sentences over a certain vocabulary \(\Sigma\), and that each element  \(s\in\Sigma\) corresponds to an input space-delimited token. In morphologically rich languages, the correspondence between vocabulary items and space-delimited input tokens is not that clear. 
 
 
\subsection{Structure and Plan}\label{sec:structure}

This chapter introduced the statistical parsing architecture at a high level and reviewed its basic components. It also surveyed basic linguistic characteristics of MRLs, and outlined the overarching challenges that result from the intersection of methods for structure prediction with the complex linguistic structures that we would like to predict in MRLs. The following chapters are dedicated to the development of systems that effectively cope with these challenges, in the context of different formal frameworks.
%We introduced the standard architecture assumed in all statistical parsing frameworks, the 

%In Chapter 2 redefine \(\mcx\) and \(\mcy\) our input and output spaces, and devise models and algorithms that  can effectively learn a parsing model and efficiently parse with it. The rest of the book is organized as follows.
%Chapter 2 is entirely devoted to defining the  structure of the input sequences in \(\mcx\). 

Chapters 2, 3 and 4 are organized according to the formal representation types that the parsing system assumes. In each of these chapters we define the representation type, modeling assumptions, learning techniques and decoding algorithms for that particular kind of output structure. In Chapter 2 we focus on phrase-structure trees and discuss grammar-based models. In Chapter 3 we focus on dependency parsing, and discuss transition-based as well as graph-based models.

Chapter 4 discusses a joint representation, called Relational-Realizational, that combines constituency, dependency and morphology information, and devises models that can be used for parsing with this representation. On a technical level, the latter chapter demonstrates how the techniques introduced in the previous two chapters can be effectively incorporated into models based on a different representation type. On a linguistic level, it shows how hybrid representations allow for modeling assumptions that better capture our linguistic intuitions about these languages.
%In Chapter 5 we develop a new representation format for parsing MRL, and show how we can apply either grammar-based modeling or transition% based modeling, This chapter acts as a case study, and illustrates how, given expert domain knowledge, appropriate learning and decoding algorithms may be defines based on the solution we presented in previous chapters. In fact, the presented solutions would be applicable for a wide range of syntactic representations, those that this book does not aim to cover.

Chapter 5 is finally devoted to the evaluation of parsers for MRLs, and addresses some non-trivial challenges that emerge in the context of this book, for instance: how can we evaluate the prediction accuracy of joint morphosyntactic structures? How can we compare the accuracy of predictions across different formal frameworks? What are the implications of label-set and annotation idiosyncrasies when we perform such cross-parser and cross-framework comparisons?
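To make the yield-mismatch problem concrete, one simple workaround is to anchor both the gold and the predicted segments to character offsets in the raw input rather than to token indices, so that analyses with different segmentations can still be scored against each other. The sketch below implements a segment-level F1 in this spirit; it is a deliberately simplified, hypothetical metric (with made-up segment forms and tags), not one of the evaluation procedures developed later in this book.

```python
def char_spans(segments):
    """Map a list of (form, tag) segments to character-anchored spans,
    so analyses with different segmentations remain comparable."""
    spans, offset = set(), 0
    for form, tag in segments:
        spans.add((offset, offset + len(form), tag))
        offset += len(form)
    return spans

def f1(gold, pred):
    """Segment-level F1 over character-anchored (start, end, tag) spans."""
    g, p = char_spans(gold), char_spans(pred)
    tp = len(g & p)
    if tp == 0:
        return 0.0
    prec, rec = tp / len(p), tp / len(g)
    return 2 * prec * rec / (prec + rec)

# Hypothetical gold vs. predicted segmentations of the token "bbyt":
gold = [("b", "IN"), ("byt", "NN")]
pred = [("b", "IN"), ("by", "NN"), ("t", "NN")]
score = f1(gold, pred)  # only the prefix span (0, 1, "IN") matches
```

Here the two analyses have different lengths and different lexical yields, yet the character anchoring still allows partial credit for the correctly segmented prefix; token-indexed metrics would have no well-defined way to align the two.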

%\subsection{Structure and Plan} This remainder of this book is organized as follows. In chapter \ref{ch-morphology} we are going to formally define the structure of the input, and the structure of a morphologically rich lexicon. This chapter is relevant for anyone who would like to implement or apply a parser for a morphologically rich language. In fact, this chapter is also necessary for anyone who wants to develop any NLP application for a morphologically rich language -- be it speech recognition, machine translation, or information extractions. The terminology and the formal definitions we introduce in this chapter will lay the ground for any statistical parsing framework we are going to further develop.

%\subsection{Acknowledgments}

 \section{Summary and Further Reading}
In this chapter we defined syntactic parsing and morphological processing, and presented the challenges that emerge from the interaction between these morphological and syntactic levels of processing. We have shown that for languages with rich morphology, basic assumptions concerning the parsing process break down, including the assumption that the input to the parser provides a deterministic set of nodes, and that an overall re-consideration of the architecture and its various components is needed.

The design of the data-driven parsers of the kind presented here relies on the formal representation of syntactic structures as developed by linguists over the last century. The constituency-based representation was initially proposed by the American structuralist linguists Bloomfield \cite{bloomfield33language} and Harris \cite{harris60}, and was later adopted by the linguist Noam Chomsky in his definition of phrase structures and generative grammars \cite{chomsky57structures,chomsky65aspects}. A different linguistic tradition in Europe, following the work of Tesni\`ere \cite{tesniere59elements}, focused on dependency-based grammar formalisms. The descriptive work that relies on dependency-based frameworks is vast, and is led by the studies of Hudson \cite{hudson84wg} and Mel'\v{c}uk \cite{melcuk88dep}.

Later on, additional grammatical formalisms were developed in order to provide a computationally tractable alternative to Chomsky's generative grammars \cite{chomsky81lectures,chomsky95minimalist}. These developments include Relational Grammar \cite{postal77passive}, Generalized Phrase Structure Grammar \cite{gazdar85gpsg}, Lexical Functional Grammar \cite{bresnan82formal}, and Head-Driven Phrase Structure Grammar \cite{sag99hpsg}, among others. 
%
An additional family of grammar formalisms, the mildly context-sensitive grammars, includes TAG \cite{joshi77tag} and CCG \cite{steedman96surface}, which were designed in order to cope with linguistic phenomena that are not expressible in the other formalisms. Many of the advanced statistical parsing frameworks we present in this book draw on insights obtained via these developments, explicitly and implicitly.


Morphological theory in linguistics has mostly been studied separately from syntax. Good introductions to morphological theory are provided in \cite{matthews93morphology} and \cite{spencer01handbook}. 
%Particular morphostyntactic phenomena such as Case and Agreement are discussed in \cite{}
In particular, Stump \cite{stump01paradigm} provides a taxonomy that characterizes the different approaches to morphological description. Numerous linguists have argued for tighter relations between morphology and syntax. Notably, this book may be viewed as introducing theoretical ideas from the linguist Stephen Anderson's A-Morphous Morphology \cite{anderson92amorphus} into the computational domain.

This book is tightly related to two other books in this Human Language Technology lecture series. On the one hand, this book may be seen as an application of the structure prediction methods discussed in {\em Linguistic Structure Prediction} \cite{smith11}. The book can also be seen as 
% book may be seen as taking the general characterization of Smith for Linguistic Structure Prediction and applying it to a specific task, syntactic parsing.
%On the other hand, this book may be seen as  
generalizing and extending the {\em Dependency Parsing} book \cite{kubler09book}, introducing a wider range of parsing frameworks that can effectively cope with a broader range of language types. 
%Additional books on  computational approaches to morphology and syntax  are\cite{rpark07comp,comp}. 
 These books deal with the morphological and syntactic processing tasks separately, without a formal or an empirical link between the two. 


This book focuses on joint computational modeling of morphology and syntax, as has been argued for and motivated in, at least, \cite{tsarfaty06integrated,rtyg08joint,cohen07joint}. Joint modeling is relevant not only to the morphosyntactic task; it has been successfully employed in additional domains: parsing and named entity recognition, parsing and multiword expression identification, and parsing and speech recognition. Many of the ideas presented here are relevant for these other domains too.
%We concentrate  on theory rather than implementation details, but many of the models we discussed have been applied to different MRLs. 
Our appendix provides a per-language survey of the main resources that are available for parsing MRLs and the main empirical results that have been obtained by statistical parsers developed to cope with them. This book, along with the resources provided at the recent SPMRL shared task,\footnote{\url{http://www.spmrl.org/spmrl2013-sharedtask.html}} will hopefully inspire and facilitate the development of effective models for these and other MRLs, and for universal parsing.

%Statistical parsing - meagerman - charniak - collins

%Learning methods? - Decoding methods? - Evaluaton methods? 


%Morphological theory - intro book - case and agreement books - typological essays - nonconfigurationality


%typology  greenberg features greenberg word order

%\section*{Problems}
%\begin{itemize}
%\item[1.1]
%Define each application on slide 10 as a structure prediction task. What is the function? what is the input? what is the output?
%\item[1.2]
%Define each of the following terms: Morpheme, Morph, Allomorph, Stem, Affix, Lemma, Feature, Gender, Paradigm, Syncretism, MSR.
%\item[1.3]
%Write rich MSRs for the following words: Jack, snowing, apples, sings, a, the, could, his, yesterday, John's, won't. Hint: in some cases you may need more than one MSR per word.
%\item[1.4]
%Find three examples of syncretism in three languages. Why might it challenge parsing?
%\end{itemize}


