
The term {\em parsing} in its broad sense refers to the process of automatically analyzing a string of symbols in order to reveal how they are combined into a complete {\em structure} that represents {\em meaning}.
 %The parsing task embodies an assumption of an underlying structure, or a grammar, that is a set of rules that defines the set of possible combinations and their meaning. In formal  computer languages, parsing pre-supposes a set of rules that unequivocally determines  the meaning of the input string. In   natural language processing, much of language technology these days relies on the development  of computer systems to can  automatically  parse sentences in a human language (henceforth,  in natural language) in order to uncover its intended meaning. 
In computational linguistics/natural language processing (CL/NLP), the term {\em parsing} is reserved for the automatic syntactic analysis of natural language sentences, that is, the task of analyzing the way in which words are combined to form meaningful phrases and sentences. Parsers assign sentences a rich structure that reflects their human-perceived interpretation, and they are thus an essential part of almost any language technology application, from text analytics and information extraction to machine translation.

A syntactic parser is typically a pipeline, consisting of different levels of
processing, corresponding to different levels of linguistic analysis: {\em phonological} processing analyzes the structure of sound waves and identifies syllables and words; {\em morphological} processing analyzes the structure of words and uncovers the elements that contribute to their meanings; {\em syntactic} processing (or, parsing) analyzes the structure of sentences in order to identify the main entities and relations; {\em semantic} processing aims to uncover the meaning of complete utterances, by considering the meaning of the pieces as well as the semantics of their combinations. Finally, processing {\em pragmatics/discourse structure} aims to recover implicit aspects of human communication, for instance, clustering referring expressions by their shared referent, or uncovering the rhetorical structure of a discourse.

Figure~\ref{fig:levels-of-analysis} demonstrates the outcome of the different levels of analysis for a sentence. Each level provides cues for the next level of analysis. The analysis of sound waves as words provides the input to the morphological processing component; the analysis of words according to their part-of-speech categories and various features, such as Singular, 3rd Person, Negation, and so on, helps to determine how they are grouped into phrases; and the syntactic analysis helps to uncover the predicate-argument structure (who did what to whom), which can more easily be mapped to logical formulae (more so than directly mapping words onto meaning).

%The analysis are ordered may be ordered from  "shallow" or "deep", gradually moving from observable language signal (speech or text) to uncovering a deep notion of interpretation.\footnote{For our present purposes, let us assume that human-perceived interpretation is event-based. That is, it consists of a mental representation of the events describes, the participants in the events, and the inter-relation between  events,  between participants, and between events and participants. In linguistics and philosophy, this event-based semantics is termed neo-davidsonian semantics, and it is  employed in modeling semantic parsing.}


\begin{figure}
\begin{itemize}
\item She doesn't think her task is unattainable
\begin{itemize}
\item Morphological analysis \\
\begin{tabular}{llllllll}
She& does & not & think & her & task & is & unattainable\\
PRN.3Sing & AUX.3Sing & RB.neg & VB.Inf & PRP.Sing & NN.Sing& COP.3Sing & JJ \\
\end{tabular}
\item Syntactic Analysis \\
\scalebox{0.75}{\Tree[.S [.NP [.PRN.3Sing She ] ] [.VP [.AUX.3Sing does ] [.RB not ]  [.VP [.VB.Inf think ] [.SBAR [.S [.NP [.PRP her ] [.NN task ] ] [.VP  [.V is ] [.ADJP [.JJ unattainable ] ] ] ] ] ] ] ]}
\item Semantic Analysis \\
\(\neg\)Think(She, \(\neg\)Can(Be(Attain(Task(Her)))))
\item Co-Reference Resolution\\
\(\neg\)Think(She(*), \(\neg\)Can(Be(Attain(Task(Her(*))))))
\item Human perceived interpretation:\\
It is not the case that she thinks the following: that her  task cannot be attained. From this we  infer that she has a task, and  she thinks this  task can be attained.
\end{itemize}
\end{itemize}
\caption{Levels of Processing: Morphological, Syntactic, Semantic and Pragmatic analysis of the sentence ``She does not think her task is unattainable.''}\label{fig:levels-of-analysis}
\end{figure}




The earlier levels of the pipeline provide essential information to the deeper levels of processing. For instance, the word ``doesn't'' needs to be analyzed as a contracted word containing two elements: the auxiliary verb ``does'' and the negative marker ``not''. Each of these elements has its own role in the syntactic representation. The syntactic parse itself contributes information to the semantic component, for instance, that the highest NP element under S is the sentence {\em subject}, the participant that is driving the situation, and that an NP inside the VP, following the verb, is a direct object.

Moreover, information from earlier in the pipeline may be needed much later: the structure of the word ``unattainable'', consisting of the morphemes ``un'' (negation), ``attain'' (action) and ``able'' (can be), corresponds directly to logical elements in the semantic representation. Similarly, the grammatical features Fem.3Sing of the terms ``her'' and ``she'' are what help a pragmatic component identify that they co-refer to the same entity.
In order to develop a natural language processing engine that can faithfully convert observable utterances into meaningful analyses, we need to make accurate predictions at every level of processing.

As opposed to formal computer languages, natural language exhibits ample ambiguity at all levels of processing. At the phonological level, the sound sequence ``si'' may correspond to both the words ``see'' and ``sea''. At the morphological level, ``John's'' may be interpreted as ``John is'' or ``of John''. At the syntactic level, ``I wrote a book with a pencil'' may mean that I did the writing with a pencil, or that the book had a pencil as one of its characters. And at the semantic level, ``every man loves a woman'' may be interpreted as a single woman whom all men love, or as every man having his own beloved woman. This ambiguity is resolved using statistical models that aim to select the best possible analysis based on its linguistic coherence and relation to context.
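The role of such a statistical model can be sketched in a few lines of code: each competing analysis of an ambiguous sentence is assigned a score, and the highest-scoring analysis is selected. In the following minimal sketch, the candidate analyses and their scores are invented purely for illustration:

```python
# A minimal sketch of statistical disambiguation: each candidate
# analysis of an ambiguous sentence receives a score, and the
# highest-scoring analysis is selected.  The candidate analyses and
# their scores below are invented for illustration.

candidates = {
    "(I wrote (a book) (with a pencil))": 0.7,  # instrument reading
    "(I wrote (a book (with a pencil)))": 0.3,  # modifier reading
}

def disambiguate(scored_candidates):
    """Return the highest-scoring analysis."""
    return max(scored_candidates, key=scored_candidates.get)

print(disambiguate(candidates))  # the instrument reading wins
```

In a real parser the scores would of course not be given in advance; they are computed by a trained statistical model, as formalized in the next section.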

The syntactic analysis of a sentence may be constituency-based or dependency-based (as in the figure), or it may conform to any computational linguistic formalism such as HPSG, LFG, GPSG, CCG, CG and more. In this book we focus on three forms of representation -- a constituency-based one, a dependency-based one, and a combined one -- but the methods we present have been used with, and may be successfully applied to, a range of other statistical frameworks.

%Parsers are key components in the architecture of  technological applications, including: Question Answering, Information Extraction,  Machine Translation, and more.   

%\begin{quote}
%T'was brillig, and the slithy toves\\
%Did gyre and gimble in the wabe;
%All mimsy were the borogoves,\\
%And the mome raths outgrabe.\\
%\end{quote}


%\begin{itemize}

\begin{figure}
\Tree[.S [.NP John ] [.VP [.VB loves ] [.NP Mary ] ] ]
\Tree[.ROOT [.loves John Mary ] ]
\caption{A constituency tree and a dependency tree for the sentence ``John loves Mary.''}
\end{figure}


\begin{figure}
\scalebox{0.75}{\Tree[.ROOT [.S [.NP [.PRN.3Sing She ] ] [.VP [.AUX.3Sing does ] [.RB not ]  [.VP [.VB.Inf think ] [.SBAR [.S [.NP [.PRP her ] [.NN task ] ] [.VP  [.V is ] [.ADJP [.JJ unattainable ] ]  ] ] ] ] ] ] ]}
\\
\Tree[.ROOT [.VB.Inf\\think PRN.3Sing\\she does not [.unattainable is [.task her ] ] ] ]
\\
\Tree[.ROOT [.sbj {PRN.3Sing\\she} ]  [.aux AUX.3Sing\\does ] [.neg RB\\not ] [.prd VB.Inf\\think  [.comp [.sbj [.det her ]  task ] [.prd [.cop is ] unattainable ] ] ] ]
\caption{The Nature of Syntactic Representation: Constituency Structures, Dependency Structures and Function Trees}
\end{figure}


\subsection{Structure Prediction}

We formally define parsing as a structure prediction task \(f:\mathcal{X}\rightarrow\mathcal{Y}\), where the observable signal \(x\in\mathcal{X}\) is a sentence in a human language and \(y\in\mathcal{Y}\) is its syntactic structure.
 


In order to cope with  ambiguity we employ statistical models in which 

\[f(x) = \operatorname*{argmax}_{y\in\mathcal{Y}} \mathit{Score}(x,y)\]

The scoring function depends on a set of model parameters \(\Theta\), and so

\[f(x) = \operatorname*{argmax}_{y\in\mathcal{Y}} \mathit{Score}(x,y;\Theta)\]

In maximum likelihood estimation, we look for the \(\Theta\) that maximizes the likelihood \(L\) of the annotated training data \(D\):

\[\Theta^* = \operatorname*{argmax}_{\Theta} L(D;\Theta)\]
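For a generative model such as a probabilistic context-free grammar (PCFG), maximizing the likelihood of an annotated corpus has a closed-form solution: the maximum-likelihood probability of a rule \(A\rightarrow\alpha\) is its relative frequency, \(count(A\rightarrow\alpha)/count(A)\). A minimal sketch, with invented toy rule observations:

```python
from collections import Counter

# Maximum-likelihood estimation for a PCFG: the optimal probability of
# a rule A -> alpha is count(A -> alpha) / count(A).  The toy rule
# observations below are invented for illustration.
observed_rules = [
    ("S", ("NP", "VP")),
    ("S", ("NP", "VP")),
    ("NP", ("PRN",)),
    ("NP", ("PRP", "NN")),
    ("VP", ("VB", "NP")),
]

def mle_estimate(rules):
    rule_counts = Counter(rules)               # count(A -> alpha)
    lhs_counts = Counter(lhs for lhs, _ in rules)  # count(A)
    return {r: c / lhs_counts[r[0]] for r, c in rule_counts.items()}

theta = mle_estimate(observed_rules)
print(theta[("S", ("NP", "VP"))])  # 1.0
print(theta[("NP", ("PRN",))])     # 0.5
```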


In order to define a parser we need to specify
\begin{itemize}
\item The input and output spaces
\item The learning algorithm (trainer)
\item The search algorithm (decoder)
\item The evaluation metrics
\end{itemize}
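These ingredients can be sketched as a hypothetical interface; all class and function names here are illustrative, not a standard API:

```python
class Parser:
    """A hypothetical skeleton of the components of a statistical
    parser; all names here are illustrative, not a standard API."""

    def __init__(self, trainer, decoder, metric):
        self.trainer = trainer   # the learning algorithm
        self.decoder = decoder   # the search algorithm
        self.metric = metric     # the evaluation metric
        self.theta = None        # model parameters

    def train(self, annotated_corpus):   # (sentence, structure) pairs
        self.theta = self.trainer(annotated_corpus)

    def parse(self, sentence):
        return self.decoder(sentence, self.theta)

    def evaluate(self, gold_corpus):
        preds = [self.parse(x) for x, _ in gold_corpus]
        golds = [y for _, y in gold_corpus]
        return self.metric(preds, golds)

# A trivial instantiation: memorize the training trees and score by
# exact-match accuracy.
memorize = lambda corpus: dict(corpus)
lookup = lambda x, theta: theta.get(x)
accuracy = lambda ps, gs: sum(p == g for p, g in zip(ps, gs)) / len(gs)

corpus = [("John loves Mary", "(S (NP John) (VP loves (NP Mary)))")]
parser = Parser(memorize, lookup, accuracy)
parser.train(corpus)
print(parser.evaluate(corpus))  # 1.0
```

The rest of this chapter fills in each of these slots with real algorithms.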

\subsection{Models and Algorithms}

\subsubsection{Representation}
There are as many representations as there are syntactic theories, and each representation reflects different theoretical notions.
%
Constituency structures, as presented in Figure~\ref{}, focus on grouping words into phrases and phrases into clauses, thereby identifying the main entities and the relations between them.
Dependency structures, as presented in Figure~\ref{}, focus on the direct relations between the words of the sentence.
Relational networks, as in Figure~\ref{}, completely ignore the order of words, and focus on grouping them into nested sets.\footnote{Advanced theories of computational syntax such as LFG, HPSG and CCG define even more complex structures that include reference to both syntactic and semantic information. We will not discuss them here.}
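Dependency structures in particular admit a very compact computational encoding, used for instance in the CoNLL data formats: each word records the index of its head, with 0 denoting the root. A sketch encoding the dependency analysis of the running example:

```python
# A dependency tree encoded as (index, word, head) triples, where
# head 0 denotes the root.  The analysis follows the dependency tree
# for "She does not think her task is unattainable".
tree = [
    (1, "She", 4),
    (2, "does", 4),
    (3, "not", 4),
    (4, "think", 0),          # the root of the tree
    (5, "her", 6),
    (6, "task", 8),
    (7, "is", 8),
    (8, "unattainable", 4),
]

def dependents(tree, head_index):
    """All words whose head is the word at the given index."""
    return [w for i, w, h in tree if h == head_index]

print(dependents(tree, 4))  # ['She', 'does', 'not', 'unattainable']
```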
\subsubsection{Model}
The model defines the objective function that our parsing algorithm aims to maximize.
The model may be generative or discriminative. A generative model aims to learn the joint distribution of sentences and their structures, and performs analysis by synthesis: the probability with which a given structure is generated defines its score. In a discriminative model, we assume that we know the space of possible structures for a given utterance, and we merely aim to discriminate between them. The parameters of a generative model are typically estimated by maximum likelihood over an annotated corpus.
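The relationship between the two can be illustrated numerically: a generative model assigns joint probabilities \(P(x,y)\) to sentence--structure pairs, while a discriminative model targets the conditional \(P(y\,|\,x)\) directly, which the generative model recovers only by normalizing over the candidate structures of \(x\). The toy probabilities below are invented for illustration:

```python
# Toy illustration of generative vs. discriminative scoring; the joint
# probabilities are invented.  A generative model assigns a joint
# probability P(x, y) to every sentence-structure pair; conditioning
# on the observed sentence x yields the discriminative target P(y | x).
joint = {("x", "y1"): 0.03, ("x", "y2"): 0.01}  # P(x, y)

observed = "x"
z = sum(p for (sent, _), p in joint.items() if sent == observed)  # P(x)
conditional = {y: p / z
               for (sent, y), p in joint.items() if sent == observed}

print(conditional)  # y1 gets probability 0.03 / 0.04 = 0.75
```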
\subsubsection{Training}
The training procedure sets the values of the model parameters, weights or probabilities, based on corpus data. In this book we focus on supervised (and semi-supervised) approaches, meaning that the training component has access to  annotated examples.
\subsubsection{Decoding}

A decoding algorithm is an algorithm that can consider all competing analyses of an input sentence, assign scores to them based on the model parameters, and find the highest-scoring parse tree.
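For PCFG-style models the classical decoder is the CKY algorithm, which uses dynamic programming over sentence spans to find the highest-probability tree without enumerating all analyses explicitly. A minimal sketch for a toy grammar in Chomsky normal form, with invented grammar and probabilities:

```python
# A minimal CKY decoder for a toy PCFG in Chomsky normal form.
# The grammar and probabilities are invented for illustration.

lexicon = {                       # P(tag -> word)
    "John": {"NP": 1.0},
    "Mary": {"NP": 1.0},
    "loves": {"VB": 1.0},
}
rules = {                         # binary rules (left, right) -> {parent: prob}
    ("NP", "VP"): {"S": 1.0},
    ("VB", "NP"): {"VP": 1.0},
}

def cky(words):
    n = len(words)
    # chart[i][j] maps a nonterminal over span (i, j) to (prob, backpointer)
    chart = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):               # width-1 spans: the lexicon
        for tag, p in lexicon.get(w, {}).items():
            chart[i][i + 1][tag] = (p, w)
    for width in range(2, n + 1):               # wider spans, bottom-up
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):           # split point
                for left, (lp, _) in chart[i][k].items():
                    for right, (rp, _) in chart[k][j].items():
                        for parent, p in rules.get((left, right), {}).items():
                            score = p * lp * rp
                            if score > chart[i][j].get(parent, (0, None))[0]:
                                chart[i][j][parent] = (score, (left, i, k, right, j))
    return chart

def best_tree(chart, sym, i, j):
    """Read the highest-scoring tree off the chart via backpointers."""
    _, back = chart[i][j][sym]
    if isinstance(back, str):                   # a lexical leaf
        return f"({sym} {back})"
    left, li, k, right, rj = back
    return (f"({sym} {best_tree(chart, left, li, k)} "
            f"{best_tree(chart, right, k, rj)})")

words = ["John", "loves", "Mary"]
chart = cky(words)
print(best_tree(chart, "S", 0, len(words)))
# (S (NP John) (VP (VB loves) (NP Mary)))
```

Chapter-specific decoders refine this basic scheme with richer grammars and pruning.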

\subsubsection{Evaluation}
How do we know whether we parsed correctly? We need metrics that allow us to quantify the success of the predicted structure. Because we are dealing with complex structures, a boolean all-or-nothing metric would not suffice. Therefore, task-intrinsic evaluation metrics are defined for each output structure. This, of course, poses a problem when comparing parsers with different output structures.
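For constituency trees, the standard intrinsic metric (PARSEVAL) compares the labeled spans of the predicted tree against those of the gold tree, reporting precision, recall, and their harmonic mean F\(_1\). A minimal sketch over sets of \((label, start, end)\) spans, with invented example spans:

```python
# PARSEVAL-style evaluation over labeled spans (label, start, end).
# The example span sets below are invented for illustration.

def parseval(pred_spans, gold_spans):
    matched = len(pred_spans & gold_spans)
    precision = matched / len(pred_spans)
    recall = matched / len(gold_spans)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

gold = {("S", 0, 3), ("NP", 0, 1), ("VP", 1, 3), ("NP", 2, 3)}
pred = {("S", 0, 3), ("NP", 0, 1), ("VP", 1, 3), ("PP", 2, 3)}

p, r, f1 = parseval(pred, gold)
print(p, r, f1)  # 0.75 0.75 0.75
```

Analogous span- or arc-based metrics exist for dependency structures (attachment scores), but scores computed over different output structures are not directly comparable.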