%
% File naaclhlt2012.tex
%

\documentclass[11pt,letterpaper]{article}
\usepackage{naaclhlt2012}
\usepackage{times}
\usepackage{latexsym}
\usepackage{algorithm}
\usepackage[noend]{algorithmic}

\usepackage{graphicx}
\usepackage[encapsulated]{CJK}

\setlength\titlebox{6.5cm}    % Expanding the titlebox

\def\namecite{\newcite}

\title{Parser Projection for Tree-to-String Translation}

\author{Author 1\\
	    XYZ Company\\
	    111 Anywhere Street\\
	    Mytown, NY 10000, USA\\
	    {\tt author1@xyz.org}
	  \And
	Author 2\\
  	ABC University\\
  	900 Main Street\\
  	Ourcity, PQ, Canada A1A 1T2\\
  {\tt author2@abc.ca}}

\date{}

\begin{document}

\begin{CJK}{UTF8}{cyberbit}

\maketitle
\begin{abstract}
In this paper we explore the effectiveness of using projected parsers for
tree-to-string machine translation. We test dependency parsers trained with
trees projected by two different algorithms: a Maximum Spanning Tree-based
dependency projection algorithm, and a Synchronous Context-Free Grammar-based
projection algorithm. We find that projected parsers help tree-to-string
translation, and that the SCFG projection algorithm outperforms the MST
projection algorithm for the translation task because of the isomorphism
between the projection algorithm and the SCFG rule extraction algorithm used in
tree-to-string translation.
\end{abstract}

\section{Introduction}

Training high-quality statistical parsers requires manually annotated treebanks
that are costly to build. The wide availability of bilingual parallel data has
prompted researchers to investigate the feasibility of building parsers for a
resource-scarce language with the help of existing high-accuracy parsers
(typically for English) and word aligned sentence pairs. Such parser projection
efforts \cite{hwa2005bootstrapping,mcdonald2011multi,jiang2010effective} are
usually evaluated in terms of parsing accuracy scores, and so far have seen
limited success in producing accuracy scores that are comparable to what can be
achieved with hand-annotated data.

A major application of automated parsers has been machine translation.  Many
syntax-based machine translation models explicitly incorporate parser output in
their training and/or decoding phase
\cite{galley-naacl04,Liu-acl06,shen2008new}, and have outperformed traditional
phrase-based translation models \cite{OchCL} for many language pairs. The
parsers used are typically trained on hand-annotated gold trees, and the
existence of such a parser restricts the applicability of syntax-based machine
translation models. For example, most results for the string-to-tree model are
reported for translation from a foreign language into English.

We have reasons to believe that although parsers trained on projected trees
fare poorly in standalone evaluations, they can still be useful in an
end-to-end evaluation with a syntax-based machine translation system. On the
one hand, it is well known that a syntactic constituent may not be the best
translation unit, and syntactic parser output has indeed proven to be too
restrictive for the machine translation task, in many cases having to be
relaxed. On the other hand, a less-than-perfect parser could still be
informative to the translation model by offering some guide to structure and
tagging.

In this paper, we investigate applying parsers trained on projected trees to the
popular tree-to-string translation model. We test a method of generating
projected trees based on arc projection \cite{hwa2005bootstrapping}, and develop
a constituent/dependency tree projection algorithm based on synchronous
context-free grammar (SCFG). We demonstrate the effectiveness of projected
parsers with experiments on various language pairs.


\section{Related Work}

Our work extends and tests the practicality of a line of work on building
monolingual analysis tools with the help of a parallel corpus.

Other work in this direction includes \cite{kuhn2004experiments} and
\cite{ryan2011training}.

Specifically for machine translation, \namecite{denero2011inducing} have
experimented with inducing a monolingual parsing model and reordering model
from just the parallel data.

\section{Parser Projection for Tree-to-String Translation Pipeline}

In this work, we choose to use the dependency representation for the
tree-to-string model. Using dependency trees allows for an easy comparison of
arc-based dependency tree projection algorithms and the SCFG-based tree
projection algorithm we develop in this paper. Dependency parsing is also
faster in practice, which benefits its use in tree-to-string translation. We should note,
however, that the SCFG-based tree projection algorithm is readily applicable to
constituent tree projection and tree-to-string translation with constituent
parsing.

Taking German-English translation as an example, our pipeline is as follows:

\begin{enumerate}
\item Generate a projected German treebank.
\begin{itemize}
\item Parse the English side of a German-English parallel corpus.
\item Project English trees to German trees.
\item Filter the projected trees.
\end{itemize}
\item Train a German parser with the projected German treebank.
\item Use this German parser at both training and decoding time for
German-English tree-to-string translation.
\end{enumerate}
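As a rough sketch, the pipeline above can be expressed as a single driver function. The callables \texttt{parse\_english}, \texttt{project}, and \texttt{train\_parser} are hypothetical placeholders for the actual tools, not an API from this work:

```python
def build_projected_parser(parallel_corpus, parse_english, project,
                           train_parser, threshold=0.1):
    """Sketch of the projection pipeline; all four callables are
    hypothetical placeholders for the actual components."""
    projected = []
    for src_sent, tgt_sent, alignment in parallel_corpus:
        src_tree = parse_english(src_sent)                        # step 1a
        tgt_tree, score = project(src_tree, tgt_sent, alignment)  # step 1b
        if score >= threshold:                                    # step 1c
            projected.append(tgt_tree)
    return train_parser(projected)                                # step 2
```

The trained target-language parser is then used at both training and decoding time by the tree-to-string system.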

\section{Dependency Parser Projection}

\subsection{POS-Tag Projection}

The accuracy of dependency parsing depends heavily on the accuracy of
part-of-speech tagging. In our experiments, we train a projected POS tagger by
first projecting source tags over the word alignment to the target side, and
then training an $n$-gram part-of-speech tagger for the target language on the
projected tags. As a last step, the target corpus is re-tagged with this
$n$-gram tagger. Orthogonal to the tree projection algorithms, other POS tag
projection algorithms could be employed, such as the universal tagger of
\namecite{das2011unsupervised}.
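A minimal sketch of the tag projection step, assuming a simple list-of-pairs alignment format (the subsequent $n$-gram tagger training is omitted):

```python
from collections import Counter

def project_pos_tags(src_tags, tgt_len, alignment):
    """Project POS tags over word alignments (illustrative sketch).

    alignment: list of (src_idx, tgt_idx) pairs.  A target word aligned
    to several source words takes the most frequent of their tags;
    unaligned target words are left as None and can be filtered out
    before tagger training."""
    tags = [None] * tgt_len
    linked = [[] for _ in range(tgt_len)]
    for i, j in alignment:
        linked[j].append(src_tags[i])
    for j, candidates in enumerate(linked):
        if candidates:
            tags[j] = Counter(candidates).most_common(1)[0][0]
    return tags
```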

\subsection{MST Projection}

The Maximum Spanning Tree dependency projection algorithm is an extension of the
arc-based dependency projection heuristics described by
\newcite{hwa2005bootstrapping}. Instead of performing a ``hard'' projection of
the dependency arcs, all source-side dependencies are first projected over the
word alignment to the target side, and the maximum spanning tree dependency
parsing algorithm of \newcite{mcdonald2005non} is then used to find the best
dependency tree on the target side. The edge cost for the MST is... Since
dependency trees generated by the MST algorithm can be non-projective, they are
projectivized (how?) before being used to train a parser.
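The arc-collection step can be sketched as follows; the simple vote-count edge score below is an illustrative assumption rather than the exact cost, and a maximum spanning tree algorithm (e.g.\ Chu-Liu-Edmonds) would then be run over the returned scores:

```python
from collections import defaultdict

def projected_arc_scores(src_arcs, alignment):
    """Collect candidate target-side arcs from projected source dependencies.

    src_arcs: list of (head_idx, dep_idx) source dependencies.
    alignment: list of (src_idx, tgt_idx) links.  Each source arc votes
    for every (tgt_head, tgt_dep) pair its endpoints align to; the vote
    count used as an edge score here is an assumption for illustration."""
    links = defaultdict(list)
    for i, j in alignment:
        links[i].append(j)
    scores = defaultdict(int)
    for h, d in src_arcs:
        for th in links[h]:
            for td in links[d]:
                if th != td:
                    scores[(th, td)] += 1
    return dict(scores)
```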

\subsection{SCFG Projection}

\begin{figure}
\begin{center}
\begin{tabular}{lll}
A & $\rightarrow$ & I {\bf got} B C, 我 C {\bf 买} 了 B \\
B & $\rightarrow$ & a {\bf gift}, {\bf 礼物} \\
C & $\rightarrow$ & {\bf for} D, {\bf 给} D \\
D & $\rightarrow$ & my {\bf brother}, {\bf 弟弟} \\
\end{tabular}
\caption{Example of head-annotated SCFG rules. This set of rules derives the
trees in Figure~\ref{fig:scfg-projection}. Text in bold represents head words.}
\label{fig:rules}
\end{center}
\end{figure}

Following the formal definition of SCFG in \namecite{SattaPeserico05}, an SCFG
$G$ is a tuple $(V_N, V_T, P, S)$, where $V_N$ is the set of nonterminal
symbols and $V_T$ the set of terminal symbols. $S \in V_N$ is a unique start
symbol. Each production rule in $P$ has the form $[A \rightarrow \alpha_1,
\alpha_2, \pi]$, where $A$ is a nonterminal in $V_N$, and $\alpha_1$ and
$\alpha_2$ are strings in $(V_N \cup V_T)^*$, both containing $n$ nonterminals.
$\pi$ is a permutation of the indices $1 \dots n$, defining a one-to-one
correspondence between the $i$'th nonterminal in $\alpha_1$ and the $\pi(i)$'th
nonterminal in $\alpha_2$. A derivation step $[\gamma_1, \gamma_2, \pi]
\Rightarrow_G^s [\delta_1, \delta_2, \pi']$ rewrites a pair of corresponding
nonterminals in the string pair $[\gamma_1, \gamma_2, \pi]$ into $[\delta_1,
\delta_2, \pi']$ with rule $s = [A \rightarrow \alpha_1, \alpha_2, \pi_s]$. A
sequence of derivation steps $\sigma = s_1 \dots s_t$ defines a pair of parse
trees, $t_1(\sigma)$ on the source side and $t_2(\sigma)$ on the target side.
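The synchronous derivation can be illustrated with the rules of Figure~\ref{fig:rules}. For simplicity, the sketch below identifies corresponding nonterminals by symbol name instead of an explicit permutation $\pi$, which suffices here because each nonterminal occurs at most once per rule:

```python
# One rule per nonterminal, as in Figure 1: (source side, target side).
RULES = {
    "A": (["I", "got", "B", "C"], ["我", "C", "买", "了", "B"]),
    "B": (["a", "gift"], ["礼物"]),
    "C": (["for", "D"], ["给", "D"]),
    "D": (["my", "brother"], ["弟弟"]),
}

def derive(symbol, side):
    """Expand `symbol` on one side (0 = source, 1 = target) of the
    synchronous grammar.  Corresponding nonterminal occurrences are
    matched by name here; a real SCFG indexes them explicitly."""
    out = []
    for sym in RULES[symbol][side]:
        out.extend(derive(sym, side) if sym in RULES else [sym])
    return out
```

Expanding \texttt{"A"} on both sides yields the sentence pair derived by the grammar.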

We call an SCFG {\em head-annotated} if each production rule has the form $[A
\rightarrow \alpha_1, \alpha_2, \pi, i, j]$, which denotes that the $i$'th
symbol in $\alpha_1$ and the $j$'th symbol in $\alpha_2$ are head words. Either
both symbols are terminals, or they are a pair of corresponding nonterminals.
Figure~\ref{fig:rules} shows a set of such head-annotated SCFG rules that derives the
parse trees in Figure~\ref{fig:scfg-projection}. Blue edges of the trees mark
the head words. It is easy to convert such a head-annotated tree to a projective
dependency tree.
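The conversion can be sketched with the source sides of the Figure~\ref{fig:rules} rules. The head indices below restate the bold head words, and the recursion attaches the lexical head of each non-head child to the lexical head of the head child:

```python
# (symbols, head_index) per nonterminal, source side of Figure 1.
HEAD_RULES = {
    "A": (["I", "got", "B", "C"], 1),
    "B": (["a", "gift"], 1),
    "C": (["for", "D"], 0),
    "D": (["my", "brother"], 1),
}

def to_dependencies(symbol, arcs):
    """Return the lexical head of `symbol`, appending (dependent, head)
    arcs for every non-head child along the way."""
    syms, h = HEAD_RULES[symbol]
    heads = [to_dependencies(s, arcs) if s in HEAD_RULES else s
             for s in syms]
    head_word = heads[h]
    for i, w in enumerate(heads):
        if i != h:
            arcs.append((w, head_word))
    return head_word

arcs = []
root = to_dependencies("A", arcs)
# root is "got"; arcs pair each remaining word with its governor,
# e.g. ("a", "gift") and ("brother", "for")
```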

In the context of tree-to-string translation, the rule extraction procedure
\cite{galley-naacl04} defines an SCFG that is consistent with both the English
parse tree and word alignments. By finding {\em frontier nodes} on the input
English parse tree where the tree can be safely segmented into tree fragments
without violating word alignment links, we arrive at a set of SCFG rules that
derives both a source parse tree and a target parse tree. The source and target
trees are isomorphic except for the ordering of child nodes under each tree
node. We take advantage of this tree isomorphism property to directly project
the source parse tree onto the target language, using the same procedure as that
of tree-to-string rule extraction.

When the goal is to project to target dependency trees, it remains to determine
how the head words are selected at each node of the projected target tree.
Since our input constituent tree is converted from a dependency tree, the GHKM
rule extraction procedure essentially gives us an SCFG in which the head symbol
is marked on the source side but not on the target side. If the source-side
head symbol is a nonterminal, tree isomorphism dictates that the target-side
head symbol is the corresponding nonterminal in the SCFG rule. If the
source-side head symbol is a terminal, however, we rely on word alignments to
identify the target-side head symbol. Because in less-than-ideal scenarios the
source-side head terminal can be unaligned or aligned to multiple target-side
words, we use the following weighting scheme to guarantee that 1) a well-formed
target-side head-annotated tree can always be generated, and 2) when all
source-side head words are aligned to a single target-side word, the generated
target tree respects the isomorphism of the SCFG derivation.

% TODO: Include example of non-isomorphic projection.

%SCFG-based tree projection algorithm mirrors GHKM rule extraction. GHKM rule
%extraction defines a set of SCFG production rules that derive a sentence pair,
%complete with word alignment links. This naturally gives us both a source side
%constituent tree and target side constituent tree. Additionally, dependency
%links can be projected by mirroring source side head rules to the target side.
%The only problematic case is when an SCFG rule has a terminal as its head, and
%this terminal is either unaligned or maps to multiple words on the target side.
%We use the following weighting scheme shown in this picture to handle malformed
%word alignments.

Algorithm~\ref{proj_algo} shows the pseudocode of the SCFG tree projection
algorithm. The projection process consists of three steps:

\begin{enumerate}
\item
In the source side tree, each node is initialized with a weight equal to the
length of the span it covers (line 1). A top-down pass (lines 2-3) then
percolates the weights down the head-word spines, eventually assigning each
source word a weight equal to the length of the largest span it controls as a
head word.
%Top-down on the source side tree: each frontier node is given a weight equal to
%its parent if it is labelled as head, otherwise it is given a weight equal to
%the length of the span it covers.  As a result, each source word is given a
%weight equal to the length of the largest span it controls as a head word.
\item
Each target word is given a weight equal to the maximum weight of the source
words it aligns to (lines 5-7).
\item
In the target side tree, a bottom-up process is used to determine the head
relations (lines 8-12): each target tree node is given a weight equal to the
maximum weight of its children. The child with the maximum weight is labelled
as head.
\end{enumerate}
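The three steps above can be sketched in Python. The tree encodings below (nested tuples, with words assumed unique within a sentence) and the example word alignment are illustrative assumptions, not the system's actual data structures:

```python
def leaf_weights(node):
    """Step 1: a source node is a leaf token (str) or a pair
    (children, head_index).  Each leaf receives the length of the
    largest span it controls as a head word."""
    w = {}

    def leaves(n):
        return [n] if isinstance(n, str) else \
            [x for c in n[0] for x in leaves(c)]

    def down(n, inherited):
        if isinstance(n, str):
            w[n] = inherited          # weight percolated down the spine
            return
        children, h = n
        for i, c in enumerate(children):
            down(c, inherited if i == h else len(leaves(c)))

    down(node, len(leaves(node)))
    return w

def project_heads(node, align, w_src):
    """Steps 2-3: bottom-up head selection on the target tree.  A target
    node is a leaf token (str) or a tuple of children; align maps each
    target leaf to its aligned source leaves.  Returns (weight,
    head-annotated tree), where an annotated node is (children, head)."""
    if isinstance(node, str):
        return max((w_src[s] for s in align.get(node, [])), default=0), node
    scored = [project_heads(c, align, w_src) for c in node]
    best = max(range(len(scored)), key=lambda i: scored[i][0])
    return scored[best][0], ([t for _, t in scored], best)

# Figure 1 example: source tree for "I got a gift for my brother",
# target tree for "我 给 弟弟 买 了 礼物" (alignment assumed for illustration).
src = (["I", "got", (["a", "gift"], 1),
        (["for", (["my", "brother"], 1)], 0)], 1)
tgt = ("我", ("给", ("弟弟",)), "买", "了", ("礼物",))
align = {"我": ["I"], "买": ["got"], "给": ["for"],
         "弟弟": ["brother"], "礼物": ["gift"]}
weight, annotated = project_heads(tgt, align, leaf_weights(src))
# the head child of the target root is "买", mirroring the source head "got"
```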

\begin{algorithm}[t]
\small
\caption{The algorithm for projecting dependency relations. The source tree $t$
and target tree $t'$ are isomorphic, and the head relations $h$ are known for
the nodes in the source tree. ($h(n) = n'$ means that the child $n'$ of $n$ is
selected as the head node under $n$.) The head relations $h'$ for the target
tree are to be discovered.}

\begin{algorithmic}[1]
\STATE $\forall n \in t: w(n) \leftarrow $ the length of the span that $n$ covers
\FOR{node $n \in t$ in top-down topological order}
    \STATE $w(h(n)) \leftarrow w(n)$
\ENDFOR
\STATE $\forall n \in t': w(n) \leftarrow 0$
\FOR{leaf node pair $n \in t, n' \in t'$ that are linked by word alignment}
    \IF{$w(n) > w(n')$}
        \STATE $w(n') \leftarrow w(n)$
    \ENDIF
\ENDFOR
\FOR{node $n \in t'$ in bottom-up topological order}
    \FOR{child node $n'$ of node $n$}
        \IF{$w(n') > w(n)$}
            \STATE $w(n) \leftarrow w(n')$
            \STATE $h'(n) \leftarrow n'$
        \ENDIF
    \ENDFOR
\ENDFOR
\end{algorithmic}
\label{proj_algo}
\end{algorithm}

\begin{figure*}
\begin{center}
\includegraphics[scale=0.6]{proj.pdf}
\caption{Example of applying the SCFG tree projection algorithm. Red dots in
the trees mark the frontier nodes. The source and target trees are isomorphic.
Head rules are marked in blue.}
\label{fig:scfg-projection}
\end{center}
\end{figure*}

The dependency trees generated by the SCFG projection algorithm are always
projective.

\subsection{Filtering Threshold}

Parallel corpora are typically huge and much less clean than manually crafted
treebanks. It is thus important to filter the corpus for a set of projected
trees that are high in quality and manageable in size for training a parser.

A filtering score that naturally follows from SCFG tree projection is the
percentage of frontier nodes among all nodes in the source tree. This score
prefers trees for which the word alignment is more consistent with the source
tree, and the projected tree structure is therefore more meaningful.
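This score can be sketched using the standard GHKM frontier condition on half-open source spans; the span-list and alignment formats are illustrative assumptions:

```python
def is_frontier(span, align):
    """span: half-open source span (i, j); align: list of (src, tgt)
    links.  A node is a frontier node if no target word is linked both
    inside and outside its span (the GHKM condition, sketched)."""
    inside = {t for s, t in align if span[0] <= s < span[1]}
    outside = {t for s, t in align if not span[0] <= s < span[1]}
    return bool(inside) and not inside & outside

def frontier_score(node_spans, align):
    """Filtering score: fraction of frontier nodes among all tree nodes."""
    return sum(is_frontier(sp, align) for sp in node_spans) / len(node_spans)
```

Trees whose score falls below the chosen threshold are dropped before parser training.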

% TODO: Describe thresholding.

\section{Experiments}

\subsection{Setup}

The English parser we used in our experiments is a shift-reduce dependency
parser \cite{nivre2004deterministic} trained on... The same parser is trained on
the projected treebank and applied to the tree-to-string system. Dependency
trees are converted to constituent trees by propagating the part-of-speech tags
of the head words to corresponding phrase structures.
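This conversion can be sketched as follows; the flat, non-binarized output shape is an assumption for illustration:

```python
def dep_to_constituent(words, tags, heads):
    """Convert a dependency tree to a (flat, non-binarized) constituent
    tree, labelling each phrase with its head word's POS tag.  heads are
    indices, -1 for the root; projectivity is assumed."""
    children = [[] for _ in words]
    root = None
    for i, h in enumerate(heads):
        if h < 0:
            root = i
        else:
            children[h].append(i)

    def build(i):
        if not children[i]:
            return (tags[i], words[i])
        parts = [build(j) if j != i else (tags[i], words[i])
                 for j in sorted(children[i] + [i])]
        return (tags[i], parts)   # phrase label = head word's POS tag

    return build(root)
```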

The tree-to-string translation system we used binarizes input trees into a parse
forest before rule extraction at training time or rule matching at decoding
time \cite{zhang2011binarized}.

The German parser is trained on...

To build a projected treebank, we first apply the projection algorithm and the
filtering threshold to the entire parallel corpus and get a filtered set of
foreign language trees. We then randomly take 80\% of the filtered set as the
training set, 10\% as the development set, and 10\% as the test set for training
and testing the projected dependency parser. We report POS tagging accuracy
and, for dependency parsing accuracy, the Unlabeled Attachment Score (UAS),
i.e., the fraction of words whose head words are correctly identified. It
should be noted that although these scores are reported in the same column in
Table~\ref{tab:proj} and Table~\ref{tab:thres}, each score is evaluated on a
different test set. These scores should be taken as a measure of the
learnability of the projected trees, which reflects their consistency.
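UAS as defined above is straightforward to compute; a minimal sketch:

```python
def uas(gold_heads, pred_heads):
    """Unlabeled Attachment Score: the fraction of words whose head word
    is correctly identified (heads given as indices, -1 for the root)."""
    assert len(gold_heads) == len(pred_heads)
    return sum(g == p for g, p in zip(gold_heads, pred_heads)) / len(gold_heads)
```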

BLEU score \cite{Papineni-02} is used to evaluate machine translation
performance.
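For reference, a single-reference sentence-level variant of BLEU can be sketched as below; the actual evaluation uses the corpus-level metric of \newcite{Papineni-02}, which aggregates $n$-gram counts over the whole test set:

```python
import math
from collections import Counter

def sentence_bleu(candidate, reference, max_n=4):
    """Single-reference sentence-level BLEU sketch (no smoothing):
    geometric mean of modified n-gram precisions times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(tuple(candidate[i:i + n])
                       for i in range(len(candidate) - n + 1))
        ref = Counter(tuple(reference[i:i + n])
                      for i in range(len(reference) - n + 1))
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        if overlap == 0:
            return 0.0
        precisions.append(overlap / sum(cand.values()))
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```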

\subsection{Results}

Table~\ref{tab:proj} summarizes the result of applying different projected
parsers to the tree-to-string system. As a baseline, we compare the performance
of the tree-to-string system to a state-of-the-art phrase-based translation
system. \texttt{t2s-mst} and \texttt{t2s-scfg} use a projected parser trained
with the MST and SCFG projection algorithms, respectively. We observe that both
systems were able to significantly outperform the phrase-based baseline in terms
of BLEU score, confirming our hypothesis that the translation model can benefit
from a parser trained on projected trees. The SCFG projection algorithm
outperforms the MST projection algorithm by half a BLEU point. We also show the
result of utilizing a parser trained on treebank trees (\texttt{t2s-sup}). The
performance of tree-to-string translation using a parser trained on projected
trees falls between the phrase-based baseline and tree-to-string translation
using a parser trained on gold trees. We also report the Unlabeled Attachment
Score (UAS) and POS tagging accuracy from the parser training phase. With
projected POS tags we were able to learn a tagger whose accuracy is adequate in
comparison to the tagger trained on gold tags. However, the accuracy of
dependency parsers trained on projected trees is significantly lower than that
of the parser trained on gold trees, indicating a lack of consistency across
the projected trees. Interestingly, we found no direct correlation between
parsing accuracy and machine translation performance when a parser trained on
projected trees was used; we do, however, find a correlation between rule table
size and machine translation performance.

\begin{table*}
\begin{center}
\begin{tabular}{lrrrrr}
 & UAS & POS & Treebank Size & BLEU & Rule Count \\
\hline
pb        &       &       &        & 0.2269 &      \\
t2s-mst   & 70.61 & 92.51 & 196889 & 0.2325 &  85M \\
t2s-scfg  & 63.52 & 92.53 & 200733 & 0.2376 &  99M \\
t2s-sup   & 88.56 & 97.45 &  35290 & 0.2444 & 109M \\
\end{tabular}
\end{center}
\caption{Results on German-English translation.}
\label{tab:proj}
\end{table*}

It is interesting to investigate the effect of varying the filtering threshold
for generating the projected trees for parser training. From
Table~\ref{tab:thres}, we see that in general a tighter threshold results in a
smaller treebank size and higher accuracy for the trained parser. Higher
parsing accuracy, however, is no guarantee of better machine translation
performance: we observe that the BLEU score peaks at a filtering threshold of
0.1. We hypothesize that a tradeoff has to be made between generating a more
consistent training corpus and letting the parser see more patterns in the
training data.

\begin{table*}
\begin{center}
\begin{tabular}{rrrrr}
Threshold & UAS & POS & Treebank Size & BLEU \\
\hline
0.02 & 72.00 & 92.70 &  77899 & 0.2347 \\
0.05 & 68.31 & 92.56 &  98892 & 0.2372 \\
0.10 & 63.52 & 92.53 & 200733 & 0.2376 \\
0.15 & 59.37 & 92.61 & 385751 & 0.2370 \\
\end{tabular}
\end{center}
\caption{Effects of using different filtering thresholds for projection.}
\label{tab:thres}
\end{table*}

We also tested our tree-to-string system with projected parsers on two more
language pairs: French-English and Spanish-English. For these two language pairs,
the original tree-to-string system with a parser trained on gold trees has a
slight disadvantage in comparison with the phrase-based baseline system. We
observe that in such cases, the difference between using parsers trained on gold
trees and those trained on projected trees tends to diminish. For
French-English, \texttt{t2s-scfg} even outperforms \texttt{t2s-sup}.

% TODO: Discuss isomorphism.

\begin{table*}
\begin{center}
\begin{tabular}{lrrrr}
Language Pair & Treebank Size & pb & t2s-scfg & t2s-sup \\
\hline
French-English  & 115339 & 0.3006 & 0.3001 & 0.2984 \\
Spanish-English &  91390 & 0.3112 & 0.3055 & 0.3060 \\
\end{tabular}
\end{center}
\caption{Results for other language pairs.}
\label{tab:lang}
\end{table*}

\section{Conclusion}

In this paper we explored the effectiveness of using projected parsers for
tree-to-string machine translation. We tested dependency parsers trained with
trees projected by two different algorithms: a Maximum Spanning Tree-based
dependency projection algorithm and a Synchronous Context-Free Grammar-based
projection algorithm. We found that projected parsers help tree-to-string
translation, and that the SCFG projection algorithm outperforms the MST
projection algorithm for the translation task because of the isomorphism
between the projection algorithm and the SCFG rule extraction algorithm used in
tree-to-string translation.

\section*{Acknowledgments}

\bibliographystyle{fullname}
\bibliography{long}

\end{CJK}
\end{document}
