gem_id (string, 37-41) | paper_id (string, 3-4) | paper_title (string, 19-183) | paper_abstract (string, 168-1.38k) | paper_content (sequence) | paper_headers (sequence) | slide_id (string, 37-41) | slide_title (string, 2-85) | slide_content_text (string, 11-2.55k) | target (string, 11-2.55k) | references (list)
---|---|---|---|---|---|---|---|---|---|---
GEM-SciDuet-train-1#paper-954#slide-0 | 954 | Incremental Syntactic Language Models for Phrase-based Translation | This paper describes a novel technique for incorporating syntactic knowledge into phrase-based machine translation through incremental syntactic parsing. Bottom-up and top-down parsers typically require a completed string as input. This requirement makes it difficult to incorporate them into phrase-based translation, which generates partial hypothesized translations from left-to-right. Incremental syntactic language models score sentences in a similar left-to-right fashion, and are therefore a good mechanism for incorporating syntax into phrase-based translation. We give a formal definition of one such linear-time syntactic language model, detail its relation to phrase-based decoding, and integrate the model with the Moses phrase-based translation system. We present empirical results on a constrained Urdu-English translation task that demonstrate a significant BLEU score improvement and a large decrease in perplexity. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158
],
"paper_content_text": [
"Introduction Early work in statistical machine translation viewed translation as a noisy channel process comprised of a translation model, which functioned to posit adequate translations of source language words, and a target language model, which guided the fluency of generated target language strings (Brown et al., This research was supported by NSF CAREER/PECASE award 0447685, NSF grant IIS-0713448, and the European Commission through the EuroMatrixPlus project.",
"Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the sponsors or the United States Air Force.",
"Cleared for public release (Case Number 88ABW-2010-6489) on 10 Dec 2010.",
"1990).",
"Drawing on earlier successes in speech recognition, research in statistical machine translation has effectively used n-gram word sequence models as language models.",
"Modern phrase-based translation using large scale n-gram language models generally performs well in terms of lexical choice, but still often produces ungrammatical output.",
"Syntactic parsing may help produce more grammatical output by better modeling structural relationships and long-distance dependencies.",
"Bottom-up and top-down parsers typically require a completed string as input; this requirement makes it difficult to incorporate these parsers into phrase-based translation, which generates hypothesized translations incrementally, from left-to-right.",
"1 As a workaround, parsers can rerank the translated output of translation systems (Och et al., 2004) .",
"On the other hand, incremental parsers (Roark, 2001; Henderson, 2004; Schuler et al., 2010; Huang and Sagae, 2010) process input in a straightforward left-to-right manner.",
"We observe that incremental parsers, used as structured language models, provide an appropriate algorithmic match to incremental phrase-based decoding.",
"We directly integrate incremental syntactic parsing into phrase-based translation.",
"This approach re-exerts the role of the language model as a mechanism for encouraging syntactically fluent translations.",
"The contributions of this work are as follows: • A novel method for integrating syntactic LMs into phrase-based translation ( §3) • A formal definition of an incremental parser for statistical MT that can run in linear-time ( §4) • Integration with Moses ( §5) along with empirical results for perplexity and significant translation score improvement on a constrained Urdu-English task ( §6) Related Work Neither phrase-based (Koehn et al., 2003) nor hierarchical phrase-based translation (Chiang, 2005) take explicit advantage of the syntactic structure of either source or target language.",
"The translation models in these techniques define phrases as contiguous word sequences (with gaps allowed in the case of hierarchical phrases) which may or may not correspond to any linguistic constituent.",
"Early work in statistical phrase-based translation considered whether restricting translation models to use only syntactically well-formed constituents might improve translation quality (Koehn et al., 2003) but found such restrictions failed to improve translation quality.",
"Significant research has examined the extent to which syntax can be usefully incorporated into statistical tree-based translation models: string-to-tree (Yamada and Knight, 2001; Gildea, 2003; Imamura et al., 2004; Galley et al., 2004; Graehl and Knight, 2004; Melamed, 2004; Galley et al., 2006; Huang et al., 2006; Shen et al., 2008) , tree-to-string (Liu et al., 2006; Liu et al., 2007; Huang and Mi, 2010) , tree-to-tree (Abeillé et al., 1990; Shieber and Schabes, 1990; Poutsma, 1998; Eisner, 2003; Shieber, 2004; Cowan et al., 2006; Nesson et al., 2006; Zhang et al., 2007; DeNeefe et al., 2007; DeNeefe and Knight, 2009; Liu et al., 2009; Chiang, 2010) , and treelet (Ding and Palmer, 2005; Quirk et al., 2005) techniques use syntactic information to inform the translation model.",
"Recent work has shown that parsing-based machine translation using syntax-augmented (Zollmann and Venugopal, 2006) hierarchical translation grammars with rich nonterminal sets can demonstrate substantial gains over hierarchical grammars for certain language pairs (Baker et al., 2009) .",
"In contrast to the above tree-based translation models, our approach maintains a standard (non-syntactic) phrase-based translation model.",
"Instead, we incorporate syntax into the language model.",
"Traditional approaches to language models in speech recognition and statistical machine translation focus on the use of n-grams, which provide a simple finite-state model approximation of the target language.",
"Chelba and Jelinek (1998) proposed that syntactic structure could be used as an alternative technique in language modeling.",
"This insight has been explored in the context of speech recognition (Chelba and Jelinek, 2000; Collins et al., 2005) .",
"Hassan et al.",
"(2007) and use supertag n-gram LMs.",
"Syntactic language models have also been explored with tree-based translation models.",
"Charniak et al.",
"(2003) use syntactic language models to rescore the output of a tree-based translation system.",
"Post and Gildea (2008) investigate the integration of parsers as syntactic language models during binary bracketing transduction translation (Wu, 1997) ; under these conditions, both syntactic phrase-structure and dependency parsing language models were found to improve oracle-best translations, but did not improve actual translation results.",
"Post and Gildea (2009) use tree substitution grammar parsing for language modeling, but do not use this language model in a translation system.",
"Our work, in contrast to the above approaches, explores the use of incremental syntactic language models in conjunction with phrase-based translation models.",
"Our syntactic language model fits into the family of linear-time dynamic programming parsers described in (Huang and Sagae, 2010) .",
"Like (Galley and Manning, 2009 ) our work implements an incremental syntactic language model; our approach differs by calculating syntactic LM scores over all available phrase-structure parses at each hypothesis instead of the 1-best dependency parse.",
"The syntax-driven reordering model of Ge (2010) uses syntax-driven features to influence word order within standard phrase-based translation.",
"The syntactic cohesion features of Cherry (2008) encourages the use of syntactically well-formed translation phrases.",
"These approaches are fully orthogonal to our proposed incremental syntactic language model, and could be applied in concert with our work.",
"Parser as Syntactic Language Model in Phrase-Based Translation Parsing is the task of selecting the representationτ (typically a tree) that best models the structure of s τ 0 s thẽ τ 1 1 s that τ 1 2 s president τ 1 3 .",
".",
".",
"the president τ 2 1 that president τ 2 2 president Fridaỹ τ 2 3 .",
".",
".",
"president meets τ 3 1 Obama met τ 3 2 .",
".",
".",
"Figure 1 : Partial decoding lattice for standard phrase-based decoding stack algorithm translating the German sentence Der Präsident trifft am Freitag den Vorstand.",
"Each node h in decoding stack t represents the application of a translation option, and includes the source sentence coverage vector, target language ngram state, and syntactic language model stateτ t h .",
"Hypothesis combination is also shown, indicating where lattice paths with identical n-gram histories converge.",
"We use the English translation The president meets the board on Friday as a running example throughout all Figures.",
"sentence e, out of all such possible representations τ .",
"This set of representations may be all phrase structure trees or all dependency trees allowed by the parsing model.",
"Typically, treeτ is taken to be: τ = argmax τ P(τ | e) (1) We define a syntactic language model P(e) based on the total probability mass over all possible trees for string e. This is shown in Equation 2 and decomposed in Equation 3.",
"P(e) = τ ∈τ P(τ, e) (2) P(e) = τ ∈τ P(e | τ )P(τ ) (3) Incremental syntactic language model An incremental parser processes each token of input sequentially from the beginning of a sentence to the end, rather than processing input in a top-down (Earley, 1968) or bottom-up (Cocke and Schwartz, 1970; Kasami, 1965; Younger, 1967) fashion.",
"After processing the tth token in string e, an incremental parser has some internal representation of possible hypothesized (incomplete) trees, τ t .",
"The syntactic language model probability of a partial sentence e 1 ...e t is defined: P(e 1 ...e t ) = τ ∈τt P(e 1 ...e t | τ )P(τ ) (4) In practice, a parser may constrain the set of trees under consideration toτ t , that subset of analyses or partial analyses that remains after any pruning is performed.",
"An incremental syntactic language model can then be defined by a probability mass function (Equation 5) and a transition function δ (Equation 6 ).",
"The role of δ is explained in §3.3 below.",
"Any parser which implements these two functions can serve as a syntactic language model.",
"P(e 1 ...e t ) ≈ P(τ t ) = τ ∈τ t P(e 1 ...e t | τ )P(τ ) (5) δ(e t ,τ t−1 ) →τ t (6) 3.2 Decoding in phrase-based translation Given a source language input sentence f , a trained source-to-target translation model, and a target language model, the task of translation is to find the maximally probable translationê using a linear combination of j feature functions h weighted according to tuned parameters λ (Och and Ney, 2002) .",
"e = argmax e exp( j λ j h j (e, f )) (7) Phrase-based translation constructs a set of translation options -hypothesized translations for contiguous portions of the source sentence -from a trained phrase table, then incrementally constructs a lattice of partial target translations (Koehn, 2010) .",
"To prune the search space, lattice nodes are organized into beam stacks (Jelinek, 1969) according to the number of source words translated.",
"An n-gram language model history is also maintained at each node in the translation lattice.",
"The search space is further trimmed with hypothesis recombination, which collapses lattice nodes that share a common coverage vector and n-gram state.",
"Incorporating a Syntactic Language Model Phrase-based translation produces target language words in an incremental left-to-right fashion, generating words at the beginning of a translation first and words at the end of a translation last.",
"Similarly, incremental parsers process sentences in an incremental fashion, analyzing words at the beginning of a sentence first and words at the end of a sentence last.",
"As such, an incremental parser with transition function δ can be incorporated into the phrase-based decoding process in a straightforward manner.",
"Each node in the translation lattice is augmented with a syntactic language model stateτ t .",
"The hypothesis at the root of the translation lattice is initialized withτ 0 , representing the internal state of the incremental parser before any input words are processed.",
"The phrase-based translation decoding process adds nodes to the lattice; each new node contains one or more target language words.",
"Each node contains a backpointer to its parent node, in whichτ t−1 is stored.",
"Given a new target language word e t andτ t−1 , the incremental parser's transition function δ calculatesτ t .",
"Figure 1 a sample phrase-based decoding lattice where each translation lattice node is augmented with syntactic language model stateτ t .",
"In phrase-based translation, many translation lattice nodes represent multi-word target language phrases.",
"For such translation lattice nodes, δ will be called once for each newly hypothesized target language word in the node.",
"Only the final syntactic language model state in such sequences need be stored in the translation lattice node.",
"Incremental Bounded-Memory Parsing with a Time Series Model Having defined the framework by which any incremental parser may be incorporated into phrasebased translation, we now formally define a specific incremental parser for use in our experiments.",
"The parser must process target language words incrementally as the phrase-based decoder adds hypotheses to the translation lattice.",
"To facilitate this incremental processing, ordinary phrase-structure trees can be transformed into right-corner recur- r 1 t−1 r 2 t−1 r 3 t−1 s 1 t−1 s 2 t−1 s 3 t−1 r 1 t r 2 t r 3 t s 1 t s 2 t s 3 t e t−1 e t .",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
"Figure 4 : Graphical representation of the dependency structure in a standard Hierarchic Hidden Markov Model with D = 3 hidden levels that can be used to parse syntax.",
"Circles denote random variables, and edges denote conditional dependencies.",
"Shaded circles denote variables with observed values.",
"sive phrase structure trees using the tree transforms in Schuler et al.",
"(2010) .",
"Constituent nonterminals in right-corner transformed trees take the form of incomplete constituents c η /c ηι consisting of an 'active' constituent c η lacking an 'awaited' constituent c ηι yet to come, similar to non-constituent categories in a Combinatory Categorial Grammar (Ades and Steedman, 1982; Steedman, 2000) .",
"As an example, the parser might consider VP/NN as a possible category for input \"meets the\".",
"A sample phrase structure tree is shown before and after the right-corner transform in Figures 2 and 3.",
"Our parser operates over a right-corner transformed probabilistic context-free grammar (PCFG).",
"Parsing runs in linear time on the length of the input.",
"This model of incremental parsing is implemented as a Hierarchical Hidden Markov Model (HHMM) (Murphy and Paskin, 2001) , and is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.",
"The parser runs in O(n) time, where n is the number of words in the input.",
"This model is shown graphically in Figure 4 and formally defined in §4.1 below.",
"The incremental parser assigns a probability (Eq.",
"5) for a partial target language hypothesis, using a bounded store of incomplete constituents c η /c ηι .",
"The phrase-based decoder uses this probability value as the syntactic language model feature score.",
"Formal Parsing Model: Scoring Partial Translation Hypotheses This model is essentially an extension of an HHMM, which obtains a most likely sequence of hidden store states,ŝ 1..D 1..T , of some length T and some maximum depth D, given a sequence of observed tokens (e.g.",
"generated target language words), e 1..T , using HHMM state transition model θ A and observation symbol model θ B (Rabiner, 1990) : s 1..D 1..T def = argmax s 1..D 1..T T t=1 P θ A (s 1..D t | s 1..D t−1 )·P θ B (e t | s 1..D t ) (8) The HHMM parser is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.",
"The model generates each successive store (using store model θ S ) only after considering whether each nested sequence of incomplete constituents has completed and reduced (using reduction model θ R ): P θ A (s 1..D t | s 1..D t−1 ) def = r 1 t ..r D t D d=1 P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) · P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) (9) Store elements are defined to contain only the active (c η ) and awaited (c ηι ) constituent categories necessary to compute an incomplete constituent probability: s d t def = c η , c ηι (10) Reduction states are defined to contain only the complete constituent category c r d t necessary to compute an inside likelihood probability, as well as a flag f r d t indicating whether a reduction has taken place (to end a sequence of incomplete constituents): r d t def = c r d t , f r d t (11) The model probabilities for these store elements and reduction states can then be defined (from Murphy and Paskin 2001) to expand a new incomplete constituent after a reduction has taken place (f r d t = 1; using depth-specific store state expansion model θ S-E,d ), transition along a sequence of store elements if no reduction has taken place (f r d t = 0; using depthspecific store state transition model θ S-T,d ): 2 P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) def = if f r d+1 t = 1, f r d t = 1 : P θ S-E,d (s d t | s d−1 t ) if f r d+1 t = 1, f r d t = 0 : P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) if f r d+1 t = 0, f r d t = 0 : s d t = s d t−1 (12) and possibly reduce a store element (terminate a sequence) if the store state below it has reduced (f r d+1 t = 1; using depth-specific reduction model θ R,d ): P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if f r d+1 t = 0 : r d t = r ⊥ if f r d+1 t = 1 : P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) (13) where r ⊥ is a null state resulting from the failure of an incomplete constituent to complete, and constants are defined for the edge conditions of s 0 t and r D+1 t .",
"Figure 5 illustrates this model in action.",
"These pushdown automaton operations are then refined for right-corner parsing (Schuler, 2009) , distinguishing active transitions (model θ S-T-A,d , in which an incomplete constituent is completed, but not reduced, and then immediately expanded to a 2 An indicator function · is used to denote deterministic probabilities: φ = 1 if φ is true, 0 otherwise.",
"new incomplete constituent in the same store element) from awaited transitions (model θ S-T-W,d , which involve no completion): P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) def = if r d t = r ⊥ : P θ S-T-A,d (s d t | s d−1 t r d t ) if r d t = r ⊥ : P θ S-T-W,d (s d t | s d t−1 r d+1 t ) (14) P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if c r d+1 t = x t : r d t = r ⊥ if c r d+1 t = x t : P θ R-R,d (r d t | s d t−1 s d−1 t−1 ) (15) These HHMM right-corner parsing operations are then defined in terms of branch-and depth-specific PCFG probabilities θ G-R,d and θ G-L,d : 3 3 Model probabilities are also defined in terms of leftprogeny probability distribution E θ G-RL * ,d which is itself defined in terms of PCFG probabilities: are calculated by transition function δ (Eq.",
"6, as defined by §4.1), but are not stored.",
"Observed random variables (e 3 ..e 5 ) are shown for clarity, but are not explicitly stored in any syntactic language model state.",
"E θ G-RL * ,d (cη 0 → cη0 ...) def = c η1 P θ G-R,d (cη → cη0 cη1) (16) E θ G-RL * ,d (cη k → c η0 k 0 ...) def = c η0 k E θ G-RL * ,d (cη k−1 → c η0 k ...) · c η0 k 1 P θ G-L,d (c η0 k → c η0 k 0 c η0 k 1 ) (17) E θ G-RL * ,d (cη * → cηι ...) def = ∞ k=0 E θ G-RL * ,d (cη k → cηι ...) (18) E θ G-RL * ,d (cη + → cηι ...) def = E θ G-RL * ,d (cη * → cηι ...) − E θ G-RL * ,d (cη 0 → cηι ...) (19) coder's hypothesis stacks.",
"Figure 1 illustrates an excerpt from a standard phrase-based translation lattice.",
"Within each decoder stack t, each hypothesis h is augmented with a syntactic language model stateτ t h .",
"Each syntactic language model state is a random variable store, containing a slice of random variables from the HHMM.",
"Specifically,τ t h contains those random variables s 1..D t that maintain distributions over syntactic elements.",
"By maintaining these syntactic random variable stores, each hypothesis has access to the current language model probability for the partial translation ending at that hypothesis, as calculated by an incremental syntactic language model defined by the HHMM.",
"Specifically, the random variable store at hypothesis h provides P(τ t h ) = P(e h 1..t , s 1..D 1..t ), where e h 1..t is the sequence of words in a partial hypothesis ending at h which contains t target words, and where there are D syntactic random variables in each random variable store (Eq.",
"5).",
"During stack decoding, the phrase-based decoder progressively constructs new hypotheses by extending existing hypotheses.",
"New hypotheses are placed in appropriate hypothesis stacks.",
"In the simplest case, a new hypothesis extends an existing hypothesis by exactly one target word.",
"As the new hypothesis is constructed by extending an existing stack element, the store and reduction state random variables are processed, along with the newly hypothesized word.",
"This results in a new store of syntactic random variables (Eq.",
"6) that are associated with the new stack element.",
"When a new hypothesis extends an existing hypothesis by more than one word, this process is first carried out for the first new word in the hypothesis.",
"It is then repeated for the remaining words in the hypothesis extension.",
"Once the final word in the hypothesis has been processed, the resulting random variable store is associated with that hypothesis.",
"The random variable stores created for the non-final words in the extending hypothesis are discarded, and need not be explicitly retained.",
"Figure 6 illustrates this process, showing how a syntactic language model stateτ 5 1 in a phrase-based decoding lattice is obtained from a previous syntactic language model stateτ 3 1 (from Figure 1) by parsing the target language words from a phrasebased translation option.",
"Our syntactic language model is integrated into the current version of Moses .",
"Results As an initial measure to compare language models, average per-word perplexity, ppl, reports how surprised a model is by test data.",
"Equation 25 calculates ppl using log base b for a test set of T tokens.",
"ppl = b −log b P(e 1 ...e T ) T (25) We trained the syntactic language model from §4 (HHMM) and an interpolated n-gram language model with modified Kneser-Ney smoothing (Chen and Goodman, 1998); models were trained on sections 2-21 of the Wall Street Journal (WSJ) treebank (Marcus et al., 1993 HHMM and n-gram LMs (Figure 7) .",
"To show the effects of training an LM on more data, we also report perplexity results on the 5-gram LM trained for the GALE Arabic-English task using the English Gigaword corpus.",
"In all cases, including the HHMM significantly reduces perplexity.",
"We trained a phrase-based translation model on the full NIST Open MT08 Urdu-English translation model using the full training data.",
"We trained the HHMM and n-gram LMs on the WSJ data in order to make them as similar as possible.",
"During tuning, Moses was first configured to use just the n-gram LM, then configured to use both the n-gram LM and the syntactic HHMM LM.",
"MERT consistently assigned positive weight to the syntactic LM feature, typically slightly less than the n-gram LM weight.",
"In our integration with Moses, incorporating a syntactic language model dramatically slows the decoding process.",
"Figure 8 illustrates a slowdown around three orders of magnitude.",
"Although speed remains roughly linear to the size of the source sentence (ruling out exponential behavior), it is with an extremely large constant time factor.",
"Due to this slowdown, we tuned the parameters using a constrained dev set (only sentences with 1-20 words), and tested using a constrained devtest set (only sentences with 1-20 words).",
"Figure 9 shows a statistically significant improvement to the BLEU score when using the HHMM and the n-gram LMs together on this reduced test set.",
"Discussion This paper argues that incremental syntactic languages models are a straightforward and appro-Moses LM(s) BLEU n-gram only 18.78 HHMM + n-gram 19.78 Figure 9 : Results for Ur-En devtest (only sentences with 1-20 words) with HHMM beam size of 2000 and Moses settings of distortion limit 10, stack size 200, and ttable limit 20. priate algorithmic fit for incorporating syntax into phrase-based statistical machine translation, since both process sentences in an incremental left-toright fashion.",
"This means incremental syntactic LM scores can be calculated during the decoding process, rather than waiting until a complete sentence is posited, which is typically necessary in top-down or bottom-up parsing.",
"We provided a rigorous formal definition of incremental syntactic languages models, and detailed what steps are necessary to incorporate such LMs into phrase-based decoding.",
"We integrated an incremental syntactic language model into Moses.",
"The translation quality significantly improved on a constrained task, and the perplexity improvements suggest that interpolating between n-gram and syntactic LMs may hold promise on larger data sets.",
"The use of very large n-gram language models is typically a key ingredient in the best-performing machine translation systems (Brants et al., 2007) .",
"Our n-gram model trained only on WSJ is admittedly small.",
"Our future work seeks to incorporate largescale n-gram language models in conjunction with incremental syntactic language models.",
"The added decoding time cost of our syntactic language model is very high.",
"By increasing the beam size and distortion limit of the baseline system, future work may examine whether a baseline system with comparable runtimes can achieve comparable translation quality.",
"A more efficient implementation of the HHMM parser would speed decoding and make more extensive and conclusive translation experiments possible.",
"Various additional improvements could include caching the HHMM LM calculations, and exploiting properties of the right-corner transform that limit the number of decisions between successive time steps."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.3",
"4",
"4.1",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Parser as Syntactic Language Model in",
"Incremental syntactic language model",
"Incorporating a Syntactic Language Model",
"Incremental Bounded-Memory Parsing with a Time Series Model",
"Formal Parsing Model: Scoring Partial Translation Hypotheses",
"Results",
"Discussion"
]
} | GEM-SciDuet-train-1#paper-954#slide-0 | Syntax in Statistical Machine Translation | Translation Model vs Language Model
Syntactic LM Decoder Integration Results Questions? | Translation Model vs Language Model
Syntactic LM Decoder Integration Results Questions? | [] |
GEM-SciDuet-train-1#paper-954#slide-1 | 954 | Incremental Syntactic Language Models for Phrase-based Translation | (paper_abstract, paper_content, and paper_headers identical to the slide-0 row above) | GEM-SciDuet-train-1#paper-954#slide-1 | Syntax in the Language Model | Translation Model vs Language Model
Syntactic LM Decoder Integration Results Questions?
An incremental syntactic language model uses an incremental statistical parser to define a probability model over the dependency or phrase structure of target language strings.
Phrase-based decoder produces translation in the target language incrementally from left-to-right
Phrase-based syntactic LM parser should parse target language hypotheses incrementally from left-to-right
Galley & Manning (2009) obtained 1-best dependency parse using a greedy dependency parser
We use a standard HHMM parser (Schuler et al., 2010)
Engineering simple model, equivalent to PPDA
Algorithmic elegant fit into phrase-based decoder
Cognitive nice psycholinguistic properties | Translation Model vs Language Model
Syntactic LM Decoder Integration Results Questions?
An incremental syntactic language model uses an incremental statistical parser to define a probability model over the dependency or phrase structure of target language strings.
Phrase-based decoder produces translation in the target language incrementally from left-to-right
Phrase-based syntactic LM parser should parse target language hypotheses incrementally from left-to-right
Galley & Manning (2009) obtained 1-best dependency parse using a greedy dependency parser
We use a standard HHMM parser (Schuler et al., 2010)
Engineering: simple model, equivalent to PPDA
Algorithmic: elegant fit into phrase-based decoder
Cognitive: nice psycholinguistic properties | []
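The rows above describe plugging an incremental syntactic language model into a left-to-right phrase-based decoder through two functions: a prefix probability over the partial translation and a transition function δ that extends the parser state by one target word. The sketch below illustrates that interface in Python; it is not Moses or HHMM code, and every name in it (ParserState, transition, prefix_logprob, step_fn, the beam width) is an illustrative assumption rather than the systems' actual API.

```python
# Illustrative sketch of the two-function interface an incremental syntactic LM
# exposes to a phrase-based decoder (not Moses code; names are assumptions).
import math
from typing import Callable, Iterable, Tuple

ParserState = Tuple[Tuple[object, float], ...]   # pruned (partial analysis, logprob) pairs

def transition(state: ParserState, word: str,
               step_fn: Callable[[object, str], Iterable[Tuple[object, float]]],
               beam: int = 2000) -> ParserState:
    """delta(e_t, state_{t-1}) -> state_t: extend each surviving partial analysis by one word."""
    extended = []
    for analysis, logprob in state:
        for new_analysis, step_logprob in step_fn(analysis, word):
            extended.append((new_analysis, logprob + step_logprob))
    extended.sort(key=lambda pair: pair[1], reverse=True)
    return tuple(extended[:beam])                 # beam pruning keeps per-word work bounded

def prefix_logprob(state: ParserState) -> float:
    """log P(e_1..e_t), approximated by summing over the pruned analyses (cf. Eq. 5)."""
    if not state:
        return float("-inf")
    best = max(lp for _, lp in state)             # log-sum-exp for numerical stability
    return best + math.log(sum(math.exp(lp - best) for _, lp in state))
```

In this sketch each decoder hypothesis would store only its current ParserState, mirroring how the paper augments every lattice node with a syntactic language model state.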
GEM-SciDuet-train-1#paper-954#slide-2 | 954 | Incremental Syntactic Language Models for Phrase-based Translation | This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machine translation through incremental syntactic parsing. Bottom-up and topdown parsers typically require a completed string as input. This requirement makes it difficult to incorporate them into phrase-based translation, which generates partial hypothesized translations from left-to-right. Incremental syntactic language models score sentences in a similar left-to-right fashion, and are therefore a good mechanism for incorporating syntax into phrase-based translation. We give a formal definition of one such lineartime syntactic language model, detail its relation to phrase-based decoding, and integrate the model with the Moses phrase-based translation system. We present empirical results on a constrained Urdu-English translation task that demonstrate a significant BLEU score improvement and a large decrease in perplexity. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158
],
"paper_content_text": [
"Introduction Early work in statistical machine translation viewed translation as a noisy channel process comprised of a translation model, which functioned to posit adequate translations of source language words, and a target language model, which guided the fluency of generated target language strings (Brown et al., This research was supported by NSF CAREER/PECASE award 0447685, NSF grant IIS-0713448, and the European Commission through the EuroMatrixPlus project.",
"Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the sponsors or the United States Air Force.",
"Cleared for public release (Case Number 88ABW-2010-6489) on 10 Dec 2010.",
"1990).",
"Drawing on earlier successes in speech recognition, research in statistical machine translation has effectively used n-gram word sequence models as language models.",
"Modern phrase-based translation using large scale n-gram language models generally performs well in terms of lexical choice, but still often produces ungrammatical output.",
"Syntactic parsing may help produce more grammatical output by better modeling structural relationships and long-distance dependencies.",
"Bottom-up and top-down parsers typically require a completed string as input; this requirement makes it difficult to incorporate these parsers into phrase-based translation, which generates hypothesized translations incrementally, from left-to-right.",
"1 As a workaround, parsers can rerank the translated output of translation systems (Och et al., 2004) .",
"On the other hand, incremental parsers (Roark, 2001; Henderson, 2004; Schuler et al., 2010; Huang and Sagae, 2010) process input in a straightforward left-to-right manner.",
"We observe that incremental parsers, used as structured language models, provide an appropriate algorithmic match to incremental phrase-based decoding.",
"We directly integrate incremental syntactic parsing into phrase-based translation.",
"This approach re-exerts the role of the language model as a mechanism for encouraging syntactically fluent translations.",
"The contributions of this work are as follows: • A novel method for integrating syntactic LMs into phrase-based translation ( §3) • A formal definition of an incremental parser for statistical MT that can run in linear-time ( §4) • Integration with Moses ( §5) along with empirical results for perplexity and significant translation score improvement on a constrained Urdu-English task ( §6) Related Work Neither phrase-based (Koehn et al., 2003) nor hierarchical phrase-based translation (Chiang, 2005) take explicit advantage of the syntactic structure of either source or target language.",
"The translation models in these techniques define phrases as contiguous word sequences (with gaps allowed in the case of hierarchical phrases) which may or may not correspond to any linguistic constituent.",
"Early work in statistical phrase-based translation considered whether restricting translation models to use only syntactically well-formed constituents might improve translation quality (Koehn et al., 2003) but found such restrictions failed to improve translation quality.",
"Significant research has examined the extent to which syntax can be usefully incorporated into statistical tree-based translation models: string-to-tree (Yamada and Knight, 2001; Gildea, 2003; Imamura et al., 2004; Galley et al., 2004; Graehl and Knight, 2004; Melamed, 2004; Galley et al., 2006; Huang et al., 2006; Shen et al., 2008) , tree-to-string (Liu et al., 2006; Liu et al., 2007; Huang and Mi, 2010) , tree-to-tree (Abeillé et al., 1990; Shieber and Schabes, 1990; Poutsma, 1998; Eisner, 2003; Shieber, 2004; Cowan et al., 2006; Nesson et al., 2006; Zhang et al., 2007; DeNeefe et al., 2007; DeNeefe and Knight, 2009; Liu et al., 2009; Chiang, 2010) , and treelet (Ding and Palmer, 2005; Quirk et al., 2005) techniques use syntactic information to inform the translation model.",
"Recent work has shown that parsing-based machine translation using syntax-augmented (Zollmann and Venugopal, 2006) hierarchical translation grammars with rich nonterminal sets can demonstrate substantial gains over hierarchical grammars for certain language pairs (Baker et al., 2009) .",
"In contrast to the above tree-based translation models, our approach maintains a standard (non-syntactic) phrase-based translation model.",
"Instead, we incorporate syntax into the language model.",
"Traditional approaches to language models in speech recognition and statistical machine translation focus on the use of n-grams, which provide a simple finite-state model approximation of the target language.",
"Chelba and Jelinek (1998) proposed that syntactic structure could be used as an alternative technique in language modeling.",
"This insight has been explored in the context of speech recognition (Chelba and Jelinek, 2000; Collins et al., 2005) .",
"Hassan et al.",
"(2007) and use supertag n-gram LMs.",
"Syntactic language models have also been explored with tree-based translation models.",
"Charniak et al.",
"(2003) use syntactic language models to rescore the output of a tree-based translation system.",
"Post and Gildea (2008) investigate the integration of parsers as syntactic language models during binary bracketing transduction translation (Wu, 1997) ; under these conditions, both syntactic phrase-structure and dependency parsing language models were found to improve oracle-best translations, but did not improve actual translation results.",
"Post and Gildea (2009) use tree substitution grammar parsing for language modeling, but do not use this language model in a translation system.",
"Our work, in contrast to the above approaches, explores the use of incremental syntactic language models in conjunction with phrase-based translation models.",
"Our syntactic language model fits into the family of linear-time dynamic programming parsers described in (Huang and Sagae, 2010) .",
"Like (Galley and Manning, 2009 ) our work implements an incremental syntactic language model; our approach differs by calculating syntactic LM scores over all available phrase-structure parses at each hypothesis instead of the 1-best dependency parse.",
"The syntax-driven reordering model of Ge (2010) uses syntax-driven features to influence word order within standard phrase-based translation.",
"The syntactic cohesion features of Cherry (2008) encourages the use of syntactically well-formed translation phrases.",
"These approaches are fully orthogonal to our proposed incremental syntactic language model, and could be applied in concert with our work.",
"Parser as Syntactic Language Model in Phrase-Based Translation Parsing is the task of selecting the representationτ (typically a tree) that best models the structure of s τ 0 s thẽ τ 1 1 s that τ 1 2 s president τ 1 3 .",
".",
".",
"the president τ 2 1 that president τ 2 2 president Fridaỹ τ 2 3 .",
".",
".",
"president meets τ 3 1 Obama met τ 3 2 .",
".",
".",
"Figure 1 : Partial decoding lattice for standard phrase-based decoding stack algorithm translating the German sentence Der Präsident trifft am Freitag den Vorstand.",
"Each node h in decoding stack t represents the application of a translation option, and includes the source sentence coverage vector, target language ngram state, and syntactic language model stateτ t h .",
"Hypothesis combination is also shown, indicating where lattice paths with identical n-gram histories converge.",
"We use the English translation The president meets the board on Friday as a running example throughout all Figures.",
"sentence e, out of all such possible representations τ .",
"This set of representations may be all phrase structure trees or all dependency trees allowed by the parsing model.",
"Typically, treeτ is taken to be: τ = argmax τ P(τ | e) (1) We define a syntactic language model P(e) based on the total probability mass over all possible trees for string e. This is shown in Equation 2 and decomposed in Equation 3.",
"P(e) = τ ∈τ P(τ, e) (2) P(e) = τ ∈τ P(e | τ )P(τ ) (3) Incremental syntactic language model An incremental parser processes each token of input sequentially from the beginning of a sentence to the end, rather than processing input in a top-down (Earley, 1968) or bottom-up (Cocke and Schwartz, 1970; Kasami, 1965; Younger, 1967) fashion.",
"After processing the tth token in string e, an incremental parser has some internal representation of possible hypothesized (incomplete) trees, τ t .",
"The syntactic language model probability of a partial sentence e 1 ...e t is defined: P(e 1 ...e t ) = τ ∈τt P(e 1 ...e t | τ )P(τ ) (4) In practice, a parser may constrain the set of trees under consideration toτ t , that subset of analyses or partial analyses that remains after any pruning is performed.",
"An incremental syntactic language model can then be defined by a probability mass function (Equation 5) and a transition function δ (Equation 6 ).",
"The role of δ is explained in §3.3 below.",
"Any parser which implements these two functions can serve as a syntactic language model.",
"P(e 1 ...e t ) ≈ P(τ t ) = τ ∈τ t P(e 1 ...e t | τ )P(τ ) (5) δ(e t ,τ t−1 ) →τ t (6) 3.2 Decoding in phrase-based translation Given a source language input sentence f , a trained source-to-target translation model, and a target language model, the task of translation is to find the maximally probable translationê using a linear combination of j feature functions h weighted according to tuned parameters λ (Och and Ney, 2002) .",
"e = argmax e exp( j λ j h j (e, f )) (7) Phrase-based translation constructs a set of translation options -hypothesized translations for contiguous portions of the source sentence -from a trained phrase table, then incrementally constructs a lattice of partial target translations (Koehn, 2010) .",
"To prune the search space, lattice nodes are organized into beam stacks (Jelinek, 1969) according to the number of source words translated.",
"An n-gram language model history is also maintained at each node in the translation lattice.",
"The search space is further trimmed with hypothesis recombination, which collapses lattice nodes that share a common coverage vector and n-gram state.",
"Incorporating a Syntactic Language Model Phrase-based translation produces target language words in an incremental left-to-right fashion, generating words at the beginning of a translation first and words at the end of a translation last.",
"Similarly, incremental parsers process sentences in an incremental fashion, analyzing words at the beginning of a sentence first and words at the end of a sentence last.",
"As such, an incremental parser with transition function δ can be incorporated into the phrase-based decoding process in a straightforward manner.",
"Each node in the translation lattice is augmented with a syntactic language model stateτ t .",
"The hypothesis at the root of the translation lattice is initialized withτ 0 , representing the internal state of the incremental parser before any input words are processed.",
"The phrase-based translation decoding process adds nodes to the lattice; each new node contains one or more target language words.",
"Each node contains a backpointer to its parent node, in whichτ t−1 is stored.",
"Given a new target language word e t andτ t−1 , the incremental parser's transition function δ calculatesτ t .",
"Figure 1 a sample phrase-based decoding lattice where each translation lattice node is augmented with syntactic language model stateτ t .",
"In phrase-based translation, many translation lattice nodes represent multi-word target language phrases.",
"For such translation lattice nodes, δ will be called once for each newly hypothesized target language word in the node.",
"Only the final syntactic language model state in such sequences need be stored in the translation lattice node.",
"Incremental Bounded-Memory Parsing with a Time Series Model Having defined the framework by which any incremental parser may be incorporated into phrasebased translation, we now formally define a specific incremental parser for use in our experiments.",
"The parser must process target language words incrementally as the phrase-based decoder adds hypotheses to the translation lattice.",
"To facilitate this incremental processing, ordinary phrase-structure trees can be transformed into right-corner recur- r 1 t−1 r 2 t−1 r 3 t−1 s 1 t−1 s 2 t−1 s 3 t−1 r 1 t r 2 t r 3 t s 1 t s 2 t s 3 t e t−1 e t .",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
"Figure 4 : Graphical representation of the dependency structure in a standard Hierarchic Hidden Markov Model with D = 3 hidden levels that can be used to parse syntax.",
"Circles denote random variables, and edges denote conditional dependencies.",
"Shaded circles denote variables with observed values.",
"sive phrase structure trees using the tree transforms in Schuler et al.",
"(2010) .",
"Constituent nonterminals in right-corner transformed trees take the form of incomplete constituents c η /c ηι consisting of an 'active' constituent c η lacking an 'awaited' constituent c ηι yet to come, similar to non-constituent categories in a Combinatory Categorial Grammar (Ades and Steedman, 1982; Steedman, 2000) .",
"As an example, the parser might consider VP/NN as a possible category for input \"meets the\".",
"A sample phrase structure tree is shown before and after the right-corner transform in Figures 2 and 3.",
"Our parser operates over a right-corner transformed probabilistic context-free grammar (PCFG).",
"Parsing runs in linear time on the length of the input.",
"This model of incremental parsing is implemented as a Hierarchical Hidden Markov Model (HHMM) (Murphy and Paskin, 2001) , and is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.",
"The parser runs in O(n) time, where n is the number of words in the input.",
"This model is shown graphically in Figure 4 and formally defined in §4.1 below.",
"The incremental parser assigns a probability (Eq.",
"5) for a partial target language hypothesis, using a bounded store of incomplete constituents c η /c ηι .",
"The phrase-based decoder uses this probability value as the syntactic language model feature score.",
"Formal Parsing Model: Scoring Partial Translation Hypotheses This model is essentially an extension of an HHMM, which obtains a most likely sequence of hidden store states,ŝ 1..D 1..T , of some length T and some maximum depth D, given a sequence of observed tokens (e.g.",
"generated target language words), e 1..T , using HHMM state transition model θ A and observation symbol model θ B (Rabiner, 1990) : s 1..D 1..T def = argmax s 1..D 1..T T t=1 P θ A (s 1..D t | s 1..D t−1 )·P θ B (e t | s 1..D t ) (8) The HHMM parser is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.",
"The model generates each successive store (using store model θ S ) only after considering whether each nested sequence of incomplete constituents has completed and reduced (using reduction model θ R ): P θ A (s 1..D t | s 1..D t−1 ) def = r 1 t ..r D t D d=1 P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) · P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) (9) Store elements are defined to contain only the active (c η ) and awaited (c ηι ) constituent categories necessary to compute an incomplete constituent probability: s d t def = c η , c ηι (10) Reduction states are defined to contain only the complete constituent category c r d t necessary to compute an inside likelihood probability, as well as a flag f r d t indicating whether a reduction has taken place (to end a sequence of incomplete constituents): r d t def = c r d t , f r d t (11) The model probabilities for these store elements and reduction states can then be defined (from Murphy and Paskin 2001) to expand a new incomplete constituent after a reduction has taken place (f r d t = 1; using depth-specific store state expansion model θ S-E,d ), transition along a sequence of store elements if no reduction has taken place (f r d t = 0; using depthspecific store state transition model θ S-T,d ): 2 P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) def = if f r d+1 t = 1, f r d t = 1 : P θ S-E,d (s d t | s d−1 t ) if f r d+1 t = 1, f r d t = 0 : P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) if f r d+1 t = 0, f r d t = 0 : s d t = s d t−1 (12) and possibly reduce a store element (terminate a sequence) if the store state below it has reduced (f r d+1 t = 1; using depth-specific reduction model θ R,d ): P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if f r d+1 t = 0 : r d t = r ⊥ if f r d+1 t = 1 : P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) (13) where r ⊥ is a null state resulting from the failure of an incomplete constituent to complete, and constants are defined for the edge conditions of s 0 t and r D+1 t .",
"Figure 5 illustrates this model in action.",
"These pushdown automaton operations are then refined for right-corner parsing (Schuler, 2009) , distinguishing active transitions (model θ S-T-A,d , in which an incomplete constituent is completed, but not reduced, and then immediately expanded to a 2 An indicator function · is used to denote deterministic probabilities: φ = 1 if φ is true, 0 otherwise.",
"new incomplete constituent in the same store element) from awaited transitions (model θ S-T-W,d , which involve no completion): P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) def = if r d t = r ⊥ : P θ S-T-A,d (s d t | s d−1 t r d t ) if r d t = r ⊥ : P θ S-T-W,d (s d t | s d t−1 r d+1 t ) (14) P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if c r d+1 t = x t : r d t = r ⊥ if c r d+1 t = x t : P θ R-R,d (r d t | s d t−1 s d−1 t−1 ) (15) These HHMM right-corner parsing operations are then defined in terms of branch-and depth-specific PCFG probabilities θ G-R,d and θ G-L,d : 3 3 Model probabilities are also defined in terms of leftprogeny probability distribution E θ G-RL * ,d which is itself defined in terms of PCFG probabilities: are calculated by transition function δ (Eq.",
"6, as defined by §4.1), but are not stored.",
"Observed random variables (e 3 ..e 5 ) are shown for clarity, but are not explicitly stored in any syntactic language model state.",
"E θ G-RL * ,d (cη 0 → cη0 ...) def = c η1 P θ G-R,d (cη → cη0 cη1) (16) E θ G-RL * ,d (cη k → c η0 k 0 ...) def = c η0 k E θ G-RL * ,d (cη k−1 → c η0 k ...) · c η0 k 1 P θ G-L,d (c η0 k → c η0 k 0 c η0 k 1 ) (17) E θ G-RL * ,d (cη * → cηι ...) def = ∞ k=0 E θ G-RL * ,d (cη k → cηι ...) (18) E θ G-RL * ,d (cη + → cηι ...) def = E θ G-RL * ,d (cη * → cηι ...) − E θ G-RL * ,d (cη 0 → cηι ...) (19) coder's hypothesis stacks.",
"Figure 1 illustrates an excerpt from a standard phrase-based translation lattice.",
"Within each decoder stack t, each hypothesis h is augmented with a syntactic language model stateτ t h .",
"Each syntactic language model state is a random variable store, containing a slice of random variables from the HHMM.",
"Specifically,τ t h contains those random variables s 1..D t that maintain distributions over syntactic elements.",
"By maintaining these syntactic random variable stores, each hypothesis has access to the current language model probability for the partial translation ending at that hypothesis, as calculated by an incremental syntactic language model defined by the HHMM.",
"Specifically, the random variable store at hypothesis h provides P(τ t h ) = P(e h 1..t , s 1..D 1..t ), where e h 1..t is the sequence of words in a partial hypothesis ending at h which contains t target words, and where there are D syntactic random variables in each random variable store (Eq.",
"5).",
"During stack decoding, the phrase-based decoder progressively constructs new hypotheses by extending existing hypotheses.",
"New hypotheses are placed in appropriate hypothesis stacks.",
"In the simplest case, a new hypothesis extends an existing hypothesis by exactly one target word.",
"As the new hypothesis is constructed by extending an existing stack element, the store and reduction state random variables are processed, along with the newly hypothesized word.",
"This results in a new store of syntactic random variables (Eq.",
"6) that are associated with the new stack element.",
"When a new hypothesis extends an existing hypothesis by more than one word, this process is first carried out for the first new word in the hypothesis.",
"It is then repeated for the remaining words in the hypothesis extension.",
"Once the final word in the hypothesis has been processed, the resulting random variable store is associated with that hypothesis.",
"The random variable stores created for the non-final words in the extending hypothesis are discarded, and need not be explicitly retained.",
"Figure 6 illustrates this process, showing how a syntactic language model stateτ 5 1 in a phrase-based decoding lattice is obtained from a previous syntactic language model stateτ 3 1 (from Figure 1) by parsing the target language words from a phrasebased translation option.",
"Our syntactic language model is integrated into the current version of Moses .",
"Results As an initial measure to compare language models, average per-word perplexity, ppl, reports how surprised a model is by test data.",
"Equation 25 calculates ppl using log base b for a test set of T tokens.",
"ppl = b −log b P(e 1 ...e T ) T (25) We trained the syntactic language model from §4 (HHMM) and an interpolated n-gram language model with modified Kneser-Ney smoothing (Chen and Goodman, 1998); models were trained on sections 2-21 of the Wall Street Journal (WSJ) treebank (Marcus et al., 1993 HHMM and n-gram LMs (Figure 7) .",
"To show the effects of training an LM on more data, we also report perplexity results on the 5-gram LM trained for the GALE Arabic-English task using the English Gigaword corpus.",
"In all cases, including the HHMM significantly reduces perplexity.",
"We trained a phrase-based translation model on the full NIST Open MT08 Urdu-English translation model using the full training data.",
"We trained the HHMM and n-gram LMs on the WSJ data in order to make them as similar as possible.",
"During tuning, Moses was first configured to use just the n-gram LM, then configured to use both the n-gram LM and the syntactic HHMM LM.",
"MERT consistently assigned positive weight to the syntactic LM feature, typically slightly less than the n-gram LM weight.",
"In our integration with Moses, incorporating a syntactic language model dramatically slows the decoding process.",
"Figure 8 illustrates a slowdown around three orders of magnitude.",
"Although speed remains roughly linear to the size of the source sentence (ruling out exponential behavior), it is with an extremely large constant time factor.",
"Due to this slowdown, we tuned the parameters using a constrained dev set (only sentences with 1-20 words), and tested using a constrained devtest set (only sentences with 1-20 words).",
"Figure 9 shows a statistically significant improvement to the BLEU score when using the HHMM and the n-gram LMs together on this reduced test set.",
"Discussion This paper argues that incremental syntactic languages models are a straightforward and appro-Moses LM(s) BLEU n-gram only 18.78 HHMM + n-gram 19.78 Figure 9 : Results for Ur-En devtest (only sentences with 1-20 words) with HHMM beam size of 2000 and Moses settings of distortion limit 10, stack size 200, and ttable limit 20. priate algorithmic fit for incorporating syntax into phrase-based statistical machine translation, since both process sentences in an incremental left-toright fashion.",
"This means incremental syntactic LM scores can be calculated during the decoding process, rather than waiting until a complete sentence is posited, which is typically necessary in top-down or bottom-up parsing.",
"We provided a rigorous formal definition of incremental syntactic languages models, and detailed what steps are necessary to incorporate such LMs into phrase-based decoding.",
"We integrated an incremental syntactic language model into Moses.",
"The translation quality significantly improved on a constrained task, and the perplexity improvements suggest that interpolating between n-gram and syntactic LMs may hold promise on larger data sets.",
"The use of very large n-gram language models is typically a key ingredient in the best-performing machine translation systems (Brants et al., 2007) .",
"Our n-gram model trained only on WSJ is admittedly small.",
"Our future work seeks to incorporate largescale n-gram language models in conjunction with incremental syntactic language models.",
"The added decoding time cost of our syntactic language model is very high.",
"By increasing the beam size and distortion limit of the baseline system, future work may examine whether a baseline system with comparable runtimes can achieve comparable translation quality.",
"A more efficient implementation of the HHMM parser would speed decoding and make more extensive and conclusive translation experiments possible.",
"Various additional improvements could include caching the HHMM LM calculations, and exploiting properties of the right-corner transform that limit the number of decisions between successive time steps."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.3",
"4",
"4.1",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Parser as Syntactic Language Model in",
"Incremental syntactic language model",
"Incorporating a Syntactic Language Model",
"Incremental Bounded-Memory Parsing with a Time Series Model",
"Formal Parsing Model: Scoring Partial Translation Hypotheses",
"Results",
"Discussion"
]
} | GEM-SciDuet-train-1#paper-954#slide-2 | Incremental Parsing | DT NN VP PP
The president VB NP IN NP
meets DT NN on Friday NP/NN NN VP/NP DT board
Motivation Decoder Integration Results Questions?
the president VB NP VP/NN
Transform right-expanding sequences of constituents into left-expanding sequences of incomplete constituents
NP VP S/NP NP
the board DT president VB the
Incomplete constituents can be processed incrementally using a
Hierarchical Hidden Markov Model parser. (Murphy & Paskin, 2001; Schuler et al. | DT NN VP PP
The president VB NP IN NP
meets DT NN on Friday NP/NN NN VP/NP DT board
Motivation Decoder Integration Results Questions?
the president VB NP VP/NN
Transform right-expanding sequences of constituents into left-expanding sequences of incomplete constituents
NP VP S/NP NP
the board DT president VB the
Incomplete constituents can be processed incrementally using a
Hierarchical Hidden Markov Model parser. (Murphy & Paskin, 2001; Schuler et al. | [] |
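The paper content above explains that a multi-word translation option is scored by calling the parser's transition function once per new target word, and that only the final syntactic language model state needs to be stored in the lattice node. The fragment below sketches that loop in Python; it reuses the hypothetical transition and prefix_logprob helpers from the earlier sketch, and the function name and feature definition are assumptions rather than the decoder's actual API.

```python
# Hypothetical hypothesis extension for a multi-word phrase.
# Assumes transition() and prefix_logprob() as defined in the earlier sketch.
def extend_hypothesis(prev_state, phrase_words, step_fn):
    """Apply the parser transition word by word; keep only the final parser state."""
    state = prev_state
    for word in phrase_words:                     # delta is called once per new target word
        state = transition(state, word, step_fn)  # intermediate states can be discarded
    # Syntactic LM feature score for this extension: change in prefix log-probability.
    feature_score = prefix_logprob(state) - prefix_logprob(prev_state)
    return state, feature_score
```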
GEM-SciDuet-train-1#paper-954#slide-3 | 954 | Incremental Syntactic Language Models for Phrase-based Translation | This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machine translation through incremental syntactic parsing. Bottom-up and topdown parsers typically require a completed string as input. This requirement makes it difficult to incorporate them into phrase-based translation, which generates partial hypothesized translations from left-to-right. Incremental syntactic language models score sentences in a similar left-to-right fashion, and are therefore a good mechanism for incorporating syntax into phrase-based translation. We give a formal definition of one such lineartime syntactic language model, detail its relation to phrase-based decoding, and integrate the model with the Moses phrase-based translation system. We present empirical results on a constrained Urdu-English translation task that demonstrate a significant BLEU score improvement and a large decrease in perplexity. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158
],
"paper_content_text": [
"Introduction Early work in statistical machine translation viewed translation as a noisy channel process comprised of a translation model, which functioned to posit adequate translations of source language words, and a target language model, which guided the fluency of generated target language strings (Brown et al., This research was supported by NSF CAREER/PECASE award 0447685, NSF grant IIS-0713448, and the European Commission through the EuroMatrixPlus project.",
"Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the sponsors or the United States Air Force.",
"Cleared for public release (Case Number 88ABW-2010-6489) on 10 Dec 2010.",
"1990).",
"Drawing on earlier successes in speech recognition, research in statistical machine translation has effectively used n-gram word sequence models as language models.",
"Modern phrase-based translation using large scale n-gram language models generally performs well in terms of lexical choice, but still often produces ungrammatical output.",
"Syntactic parsing may help produce more grammatical output by better modeling structural relationships and long-distance dependencies.",
"Bottom-up and top-down parsers typically require a completed string as input; this requirement makes it difficult to incorporate these parsers into phrase-based translation, which generates hypothesized translations incrementally, from left-to-right.",
"1 As a workaround, parsers can rerank the translated output of translation systems (Och et al., 2004) .",
"On the other hand, incremental parsers (Roark, 2001; Henderson, 2004; Schuler et al., 2010; Huang and Sagae, 2010) process input in a straightforward left-to-right manner.",
"We observe that incremental parsers, used as structured language models, provide an appropriate algorithmic match to incremental phrase-based decoding.",
"We directly integrate incremental syntactic parsing into phrase-based translation.",
"This approach re-exerts the role of the language model as a mechanism for encouraging syntactically fluent translations.",
"The contributions of this work are as follows: • A novel method for integrating syntactic LMs into phrase-based translation ( §3) • A formal definition of an incremental parser for statistical MT that can run in linear-time ( §4) • Integration with Moses ( §5) along with empirical results for perplexity and significant translation score improvement on a constrained Urdu-English task ( §6) Related Work Neither phrase-based (Koehn et al., 2003) nor hierarchical phrase-based translation (Chiang, 2005) take explicit advantage of the syntactic structure of either source or target language.",
"The translation models in these techniques define phrases as contiguous word sequences (with gaps allowed in the case of hierarchical phrases) which may or may not correspond to any linguistic constituent.",
"Early work in statistical phrase-based translation considered whether restricting translation models to use only syntactically well-formed constituents might improve translation quality (Koehn et al., 2003) but found such restrictions failed to improve translation quality.",
"Significant research has examined the extent to which syntax can be usefully incorporated into statistical tree-based translation models: string-to-tree (Yamada and Knight, 2001; Gildea, 2003; Imamura et al., 2004; Galley et al., 2004; Graehl and Knight, 2004; Melamed, 2004; Galley et al., 2006; Huang et al., 2006; Shen et al., 2008) , tree-to-string (Liu et al., 2006; Liu et al., 2007; Huang and Mi, 2010) , tree-to-tree (Abeillé et al., 1990; Shieber and Schabes, 1990; Poutsma, 1998; Eisner, 2003; Shieber, 2004; Cowan et al., 2006; Nesson et al., 2006; Zhang et al., 2007; DeNeefe et al., 2007; DeNeefe and Knight, 2009; Liu et al., 2009; Chiang, 2010) , and treelet (Ding and Palmer, 2005; Quirk et al., 2005) techniques use syntactic information to inform the translation model.",
"Recent work has shown that parsing-based machine translation using syntax-augmented (Zollmann and Venugopal, 2006) hierarchical translation grammars with rich nonterminal sets can demonstrate substantial gains over hierarchical grammars for certain language pairs (Baker et al., 2009) .",
"In contrast to the above tree-based translation models, our approach maintains a standard (non-syntactic) phrase-based translation model.",
"Instead, we incorporate syntax into the language model.",
"Traditional approaches to language models in speech recognition and statistical machine translation focus on the use of n-grams, which provide a simple finite-state model approximation of the target language.",
"Chelba and Jelinek (1998) proposed that syntactic structure could be used as an alternative technique in language modeling.",
"This insight has been explored in the context of speech recognition (Chelba and Jelinek, 2000; Collins et al., 2005) .",
"Hassan et al.",
"(2007) and use supertag n-gram LMs.",
"Syntactic language models have also been explored with tree-based translation models.",
"Charniak et al.",
"(2003) use syntactic language models to rescore the output of a tree-based translation system.",
"Post and Gildea (2008) investigate the integration of parsers as syntactic language models during binary bracketing transduction translation (Wu, 1997) ; under these conditions, both syntactic phrase-structure and dependency parsing language models were found to improve oracle-best translations, but did not improve actual translation results.",
"Post and Gildea (2009) use tree substitution grammar parsing for language modeling, but do not use this language model in a translation system.",
"Our work, in contrast to the above approaches, explores the use of incremental syntactic language models in conjunction with phrase-based translation models.",
"Our syntactic language model fits into the family of linear-time dynamic programming parsers described in (Huang and Sagae, 2010) .",
"Like (Galley and Manning, 2009 ) our work implements an incremental syntactic language model; our approach differs by calculating syntactic LM scores over all available phrase-structure parses at each hypothesis instead of the 1-best dependency parse.",
"The syntax-driven reordering model of Ge (2010) uses syntax-driven features to influence word order within standard phrase-based translation.",
"The syntactic cohesion features of Cherry (2008) encourages the use of syntactically well-formed translation phrases.",
"These approaches are fully orthogonal to our proposed incremental syntactic language model, and could be applied in concert with our work.",
"Parser as Syntactic Language Model in Phrase-Based Translation Parsing is the task of selecting the representationτ (typically a tree) that best models the structure of s τ 0 s thẽ τ 1 1 s that τ 1 2 s president τ 1 3 .",
".",
".",
"the president τ 2 1 that president τ 2 2 president Fridaỹ τ 2 3 .",
".",
".",
"president meets τ 3 1 Obama met τ 3 2 .",
".",
".",
"Figure 1 : Partial decoding lattice for standard phrase-based decoding stack algorithm translating the German sentence Der Präsident trifft am Freitag den Vorstand.",
"Each node h in decoding stack t represents the application of a translation option, and includes the source sentence coverage vector, target language ngram state, and syntactic language model stateτ t h .",
"Hypothesis combination is also shown, indicating where lattice paths with identical n-gram histories converge.",
"We use the English translation The president meets the board on Friday as a running example throughout all Figures.",
"sentence e, out of all such possible representations τ .",
"This set of representations may be all phrase structure trees or all dependency trees allowed by the parsing model.",
"Typically, treeτ is taken to be: τ = argmax τ P(τ | e) (1) We define a syntactic language model P(e) based on the total probability mass over all possible trees for string e. This is shown in Equation 2 and decomposed in Equation 3.",
"P(e) = τ ∈τ P(τ, e) (2) P(e) = τ ∈τ P(e | τ )P(τ ) (3) Incremental syntactic language model An incremental parser processes each token of input sequentially from the beginning of a sentence to the end, rather than processing input in a top-down (Earley, 1968) or bottom-up (Cocke and Schwartz, 1970; Kasami, 1965; Younger, 1967) fashion.",
"After processing the tth token in string e, an incremental parser has some internal representation of possible hypothesized (incomplete) trees, τ t .",
"The syntactic language model probability of a partial sentence e 1 ...e t is defined: P(e 1 ...e t ) = τ ∈τt P(e 1 ...e t | τ )P(τ ) (4) In practice, a parser may constrain the set of trees under consideration toτ t , that subset of analyses or partial analyses that remains after any pruning is performed.",
"An incremental syntactic language model can then be defined by a probability mass function (Equation 5) and a transition function δ (Equation 6 ).",
"The role of δ is explained in §3.3 below.",
"Any parser which implements these two functions can serve as a syntactic language model.",
"P(e 1 ...e t ) ≈ P(τ t ) = τ ∈τ t P(e 1 ...e t | τ )P(τ ) (5) δ(e t ,τ t−1 ) →τ t (6) 3.2 Decoding in phrase-based translation Given a source language input sentence f , a trained source-to-target translation model, and a target language model, the task of translation is to find the maximally probable translationê using a linear combination of j feature functions h weighted according to tuned parameters λ (Och and Ney, 2002) .",
"e = argmax e exp( j λ j h j (e, f )) (7) Phrase-based translation constructs a set of translation options -hypothesized translations for contiguous portions of the source sentence -from a trained phrase table, then incrementally constructs a lattice of partial target translations (Koehn, 2010) .",
"To prune the search space, lattice nodes are organized into beam stacks (Jelinek, 1969) according to the number of source words translated.",
"An n-gram language model history is also maintained at each node in the translation lattice.",
"The search space is further trimmed with hypothesis recombination, which collapses lattice nodes that share a common coverage vector and n-gram state.",
"Incorporating a Syntactic Language Model Phrase-based translation produces target language words in an incremental left-to-right fashion, generating words at the beginning of a translation first and words at the end of a translation last.",
"Similarly, incremental parsers process sentences in an incremental fashion, analyzing words at the beginning of a sentence first and words at the end of a sentence last.",
"As such, an incremental parser with transition function δ can be incorporated into the phrase-based decoding process in a straightforward manner.",
"Each node in the translation lattice is augmented with a syntactic language model stateτ t .",
"The hypothesis at the root of the translation lattice is initialized withτ 0 , representing the internal state of the incremental parser before any input words are processed.",
"The phrase-based translation decoding process adds nodes to the lattice; each new node contains one or more target language words.",
"Each node contains a backpointer to its parent node, in whichτ t−1 is stored.",
"Given a new target language word e t andτ t−1 , the incremental parser's transition function δ calculatesτ t .",
"Figure 1 a sample phrase-based decoding lattice where each translation lattice node is augmented with syntactic language model stateτ t .",
"In phrase-based translation, many translation lattice nodes represent multi-word target language phrases.",
"For such translation lattice nodes, δ will be called once for each newly hypothesized target language word in the node.",
"Only the final syntactic language model state in such sequences need be stored in the translation lattice node.",
"Incremental Bounded-Memory Parsing with a Time Series Model Having defined the framework by which any incremental parser may be incorporated into phrasebased translation, we now formally define a specific incremental parser for use in our experiments.",
"The parser must process target language words incrementally as the phrase-based decoder adds hypotheses to the translation lattice.",
"To facilitate this incremental processing, ordinary phrase-structure trees can be transformed into right-corner recur- r 1 t−1 r 2 t−1 r 3 t−1 s 1 t−1 s 2 t−1 s 3 t−1 r 1 t r 2 t r 3 t s 1 t s 2 t s 3 t e t−1 e t .",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
".",
"Figure 4 : Graphical representation of the dependency structure in a standard Hierarchic Hidden Markov Model with D = 3 hidden levels that can be used to parse syntax.",
"Circles denote random variables, and edges denote conditional dependencies.",
"Shaded circles denote variables with observed values.",
"sive phrase structure trees using the tree transforms in Schuler et al.",
"(2010) .",
"Constituent nonterminals in right-corner transformed trees take the form of incomplete constituents c η /c ηι consisting of an 'active' constituent c η lacking an 'awaited' constituent c ηι yet to come, similar to non-constituent categories in a Combinatory Categorial Grammar (Ades and Steedman, 1982; Steedman, 2000) .",
"As an example, the parser might consider VP/NN as a possible category for input \"meets the\".",
"A sample phrase structure tree is shown before and after the right-corner transform in Figures 2 and 3.",
"Our parser operates over a right-corner transformed probabilistic context-free grammar (PCFG).",
"Parsing runs in linear time on the length of the input.",
"This model of incremental parsing is implemented as a Hierarchical Hidden Markov Model (HHMM) (Murphy and Paskin, 2001) , and is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.",
"The parser runs in O(n) time, where n is the number of words in the input.",
"This model is shown graphically in Figure 4 and formally defined in §4.1 below.",
"The incremental parser assigns a probability (Eq.",
"5) for a partial target language hypothesis, using a bounded store of incomplete constituents c η /c ηι .",
"The phrase-based decoder uses this probability value as the syntactic language model feature score.",
"Formal Parsing Model: Scoring Partial Translation Hypotheses This model is essentially an extension of an HHMM, which obtains a most likely sequence of hidden store states,ŝ 1..D 1..T , of some length T and some maximum depth D, given a sequence of observed tokens (e.g.",
"generated target language words), e 1..T , using HHMM state transition model θ A and observation symbol model θ B (Rabiner, 1990) : s 1..D 1..T def = argmax s 1..D 1..T T t=1 P θ A (s 1..D t | s 1..D t−1 )·P θ B (e t | s 1..D t ) (8) The HHMM parser is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.",
"The model generates each successive store (using store model θ S ) only after considering whether each nested sequence of incomplete constituents has completed and reduced (using reduction model θ R ): P θ A (s 1..D t | s 1..D t−1 ) def = r 1 t ..r D t D d=1 P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) · P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) (9) Store elements are defined to contain only the active (c η ) and awaited (c ηι ) constituent categories necessary to compute an incomplete constituent probability: s d t def = c η , c ηι (10) Reduction states are defined to contain only the complete constituent category c r d t necessary to compute an inside likelihood probability, as well as a flag f r d t indicating whether a reduction has taken place (to end a sequence of incomplete constituents): r d t def = c r d t , f r d t (11) The model probabilities for these store elements and reduction states can then be defined (from Murphy and Paskin 2001) to expand a new incomplete constituent after a reduction has taken place (f r d t = 1; using depth-specific store state expansion model θ S-E,d ), transition along a sequence of store elements if no reduction has taken place (f r d t = 0; using depthspecific store state transition model θ S-T,d ): 2 P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) def = if f r d+1 t = 1, f r d t = 1 : P θ S-E,d (s d t | s d−1 t ) if f r d+1 t = 1, f r d t = 0 : P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) if f r d+1 t = 0, f r d t = 0 : s d t = s d t−1 (12) and possibly reduce a store element (terminate a sequence) if the store state below it has reduced (f r d+1 t = 1; using depth-specific reduction model θ R,d ): P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if f r d+1 t = 0 : r d t = r ⊥ if f r d+1 t = 1 : P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) (13) where r ⊥ is a null state resulting from the failure of an incomplete constituent to complete, and constants are defined for the edge conditions of s 0 t and r D+1 t .",
"Figure 5 illustrates this model in action.",
"These pushdown automaton operations are then refined for right-corner parsing (Schuler, 2009) , distinguishing active transitions (model θ S-T-A,d , in which an incomplete constituent is completed, but not reduced, and then immediately expanded to a 2 An indicator function · is used to denote deterministic probabilities: φ = 1 if φ is true, 0 otherwise.",
"new incomplete constituent in the same store element) from awaited transitions (model θ S-T-W,d , which involve no completion): P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) def = if r d t = r ⊥ : P θ S-T-A,d (s d t | s d−1 t r d t ) if r d t = r ⊥ : P θ S-T-W,d (s d t | s d t−1 r d+1 t ) (14) P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if c r d+1 t = x t : r d t = r ⊥ if c r d+1 t = x t : P θ R-R,d (r d t | s d t−1 s d−1 t−1 ) (15) These HHMM right-corner parsing operations are then defined in terms of branch-and depth-specific PCFG probabilities θ G-R,d and θ G-L,d : 3 3 Model probabilities are also defined in terms of leftprogeny probability distribution E θ G-RL * ,d which is itself defined in terms of PCFG probabilities: are calculated by transition function δ (Eq.",
"6, as defined by §4.1), but are not stored.",
"Observed random variables (e 3 ..e 5 ) are shown for clarity, but are not explicitly stored in any syntactic language model state.",
"E θ G-RL * ,d (cη 0 → cη0 ...) def = c η1 P θ G-R,d (cη → cη0 cη1) (16) E θ G-RL * ,d (cη k → c η0 k 0 ...) def = c η0 k E θ G-RL * ,d (cη k−1 → c η0 k ...) · c η0 k 1 P θ G-L,d (c η0 k → c η0 k 0 c η0 k 1 ) (17) E θ G-RL * ,d (cη * → cηι ...) def = ∞ k=0 E θ G-RL * ,d (cη k → cηι ...) (18) E θ G-RL * ,d (cη + → cηι ...) def = E θ G-RL * ,d (cη * → cηι ...) − E θ G-RL * ,d (cη 0 → cηι ...) (19) coder's hypothesis stacks.",
"Figure 1 illustrates an excerpt from a standard phrase-based translation lattice.",
"Within each decoder stack t, each hypothesis h is augmented with a syntactic language model stateτ t h .",
"Each syntactic language model state is a random variable store, containing a slice of random variables from the HHMM.",
"Specifically,τ t h contains those random variables s 1..D t that maintain distributions over syntactic elements.",
"By maintaining these syntactic random variable stores, each hypothesis has access to the current language model probability for the partial translation ending at that hypothesis, as calculated by an incremental syntactic language model defined by the HHMM.",
"Specifically, the random variable store at hypothesis h provides P(τ t h ) = P(e h 1..t , s 1..D 1..t ), where e h 1..t is the sequence of words in a partial hypothesis ending at h which contains t target words, and where there are D syntactic random variables in each random variable store (Eq.",
"5).",
"During stack decoding, the phrase-based decoder progressively constructs new hypotheses by extending existing hypotheses.",
"New hypotheses are placed in appropriate hypothesis stacks.",
"In the simplest case, a new hypothesis extends an existing hypothesis by exactly one target word.",
"As the new hypothesis is constructed by extending an existing stack element, the store and reduction state random variables are processed, along with the newly hypothesized word.",
"This results in a new store of syntactic random variables (Eq.",
"6) that are associated with the new stack element.",
"When a new hypothesis extends an existing hypothesis by more than one word, this process is first carried out for the first new word in the hypothesis.",
"It is then repeated for the remaining words in the hypothesis extension.",
"Once the final word in the hypothesis has been processed, the resulting random variable store is associated with that hypothesis.",
"The random variable stores created for the non-final words in the extending hypothesis are discarded, and need not be explicitly retained.",
"Figure 6 illustrates this process, showing how a syntactic language model stateτ 5 1 in a phrase-based decoding lattice is obtained from a previous syntactic language model stateτ 3 1 (from Figure 1) by parsing the target language words from a phrasebased translation option.",
"Our syntactic language model is integrated into the current version of Moses .",
"Results As an initial measure to compare language models, average per-word perplexity, ppl, reports how surprised a model is by test data.",
"Equation 25 calculates ppl using log base b for a test set of T tokens.",
"ppl = b −log b P(e 1 ...e T ) T (25) We trained the syntactic language model from §4 (HHMM) and an interpolated n-gram language model with modified Kneser-Ney smoothing (Chen and Goodman, 1998); models were trained on sections 2-21 of the Wall Street Journal (WSJ) treebank (Marcus et al., 1993 HHMM and n-gram LMs (Figure 7) .",
"To show the effects of training an LM on more data, we also report perplexity results on the 5-gram LM trained for the GALE Arabic-English task using the English Gigaword corpus.",
"In all cases, including the HHMM significantly reduces perplexity.",
"We trained a phrase-based translation model on the full NIST Open MT08 Urdu-English translation model using the full training data.",
"We trained the HHMM and n-gram LMs on the WSJ data in order to make them as similar as possible.",
"During tuning, Moses was first configured to use just the n-gram LM, then configured to use both the n-gram LM and the syntactic HHMM LM.",
"MERT consistently assigned positive weight to the syntactic LM feature, typically slightly less than the n-gram LM weight.",
"In our integration with Moses, incorporating a syntactic language model dramatically slows the decoding process.",
"Figure 8 illustrates a slowdown around three orders of magnitude.",
"Although speed remains roughly linear to the size of the source sentence (ruling out exponential behavior), it is with an extremely large constant time factor.",
"Due to this slowdown, we tuned the parameters using a constrained dev set (only sentences with 1-20 words), and tested using a constrained devtest set (only sentences with 1-20 words).",
"Figure 9 shows a statistically significant improvement to the BLEU score when using the HHMM and the n-gram LMs together on this reduced test set.",
"Discussion This paper argues that incremental syntactic languages models are a straightforward and appro-Moses LM(s) BLEU n-gram only 18.78 HHMM + n-gram 19.78 Figure 9 : Results for Ur-En devtest (only sentences with 1-20 words) with HHMM beam size of 2000 and Moses settings of distortion limit 10, stack size 200, and ttable limit 20. priate algorithmic fit for incorporating syntax into phrase-based statistical machine translation, since both process sentences in an incremental left-toright fashion.",
"This means incremental syntactic LM scores can be calculated during the decoding process, rather than waiting until a complete sentence is posited, which is typically necessary in top-down or bottom-up parsing.",
"We provided a rigorous formal definition of incremental syntactic languages models, and detailed what steps are necessary to incorporate such LMs into phrase-based decoding.",
"We integrated an incremental syntactic language model into Moses.",
"The translation quality significantly improved on a constrained task, and the perplexity improvements suggest that interpolating between n-gram and syntactic LMs may hold promise on larger data sets.",
"The use of very large n-gram language models is typically a key ingredient in the best-performing machine translation systems (Brants et al., 2007) .",
"Our n-gram model trained only on WSJ is admittedly small.",
"Our future work seeks to incorporate largescale n-gram language models in conjunction with incremental syntactic language models.",
"The added decoding time cost of our syntactic language model is very high.",
"By increasing the beam size and distortion limit of the baseline system, future work may examine whether a baseline system with comparable runtimes can achieve comparable translation quality.",
"A more efficient implementation of the HHMM parser would speed decoding and make more extensive and conclusive translation experiments possible.",
"Various additional improvements could include caching the HHMM LM calculations, and exploiting properties of the right-corner transform that limit the number of decisions between successive time steps."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.3",
"4",
"4.1",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Parser as Syntactic Language Model in",
"Incremental syntactic language model",
"Incorporating a Syntactic Language Model",
"Incremental Bounded-Memory Parsing with a Time Series Model",
"Formal Parsing Model: Scoring Partial Translation Hypotheses",
"Results",
"Discussion"
]
} | GEM-SciDuet-train-1#paper-954#slide-3 | Incremental Parsing using HHMM Schuler et al 2010 | Hierarchical Hidden Markov Model
Circles denote hidden random variables
Edges denote conditional dependencies
NP/NN NN VP/NP DT board
Isomorphic Tree Path DT president VB the
Shaded circles denote observed values
Motivation Decoder Integration Results Questions?
Analogous to Maximally Incremental
e1 =The e2 =president e3 =meets e4 =the e5 =board e =on e7 =Friday
Push-Down Automata NP VP/NN NN | Hierarchical Hidden Markov Model
Circles denote hidden random variables
Edges denote conditional dependencies
NP/NN NN VP/NP DT board
Isomorphic Tree Path DT president VB the
Shaded circles denote observed values
Motivation Decoder Integration Results Questions?
Analogous to Maximally Incremental
e1 =The e2 =president e3 =meets e4 =the e5 =board e =on e7 =Friday
Push-Down Automata NP VP/NN NN | [] |
GEM-SciDuet-train-1#paper-954#slide-4 | 954 | Incremental Syntactic Language Models for Phrase-based Translation | GEM-SciDuet-train-1#paper-954#slide-4 | Phrase Based Translation | Der Prasident trifft am Freitag den Vorstand
The president meets the board on Friday
s president president Friday
s that that president Obama met
AAAAAA EAAAAA EEAAAA EEIAAA
s s the the president president meets
Stack Stack Stack Stack
Motivation Syntactic LM Results Questions? | Der Prasident trifft am Freitag den Vorstand
The president meets the board on Friday
s president president Friday
s that that president Obama met
AAAAAA EAAAAA EEAAAA EEIAAA
s s the the president president meets
Stack Stack Stack Stack
Motivation Syntactic LM Results Questions? | [] |
GEM-SciDuet-train-1#paper-954#slide-5 | 954 | Incremental Syntactic Language Models for Phrase-based Translation | GEM-SciDuet-train-1#paper-954#slide-5 | Phrase Based Translation with Syntactic LM | represents parses of the partial translation at node h in stack t
s president president Friday
s that that president Obama met
AAAAAA EAAAAA EEAAAA EEIAAA
s s the the president president meets
Stack Stack Stack Stack
Motivation Syntactic LM Results Questions? | represents parses of the partial translation at node h in stack t
s president president Friday
s that that president Obama met
AAAAAA EAAAAA EEAAAA EEIAAA
s s the the president president meets
Stack Stack Stack Stack
Motivation Syntactic LM Results Questions? | [] |
GEM-SciDuet-train-1#paper-954#slide-6 | 954 | Incremental Syntactic Language Models for Phrase-based Translation | "This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machi(...TRUNCATED) | {"paper_content_id":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29(...TRUNCATED) | {"paper_header_number":["1","2","3","3.1","3.3","4","4.1","6","7"],"paper_header_content":["Introduc(...TRUNCATED) | GEM-SciDuet-train-1#paper-954#slide-6 | Integrate Parser into Phrase based Decoder | "EAAAAA EEAAAA EEIAAA EEIIAA\ns the the president president meets meets the\nMotivation Syntactic LM(...TRUNCATED) | "EAAAAA EEAAAA EEIAAA EEIIAA\ns the the president president meets meets the\nMotivation Syntactic LM(...TRUNCATED) | [] |
GEM-SciDuet-train-1#paper-954#slide-7 | 954 | Incremental Syntactic Language Models for Phrase-based Translation | "This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machi(...TRUNCATED) | {"paper_content_id":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29(...TRUNCATED) | {"paper_header_number":["1","2","3","3.1","3.3","4","4.1","6","7"],"paper_header_content":["Introduc(...TRUNCATED) | GEM-SciDuet-train-1#paper-954#slide-7 | Direct Maximum Entropy Model of Translation | "e argmax exp jhj(e,f)\nh Distortion model n-gram LM\nSet of j feature weights\nSyntactic LM P( th)\(...TRUNCATED) | "e argmax exp jhj(e,f)\nh Distortion model n-gram LM\nSet of j feature weights\nSyntactic LM P( th)\(...TRUNCATED) | [] |
GEM-SciDuet-train-1#paper-954#slide-8 | 954 | Incremental Syntactic Language Models for Phrase-based Translation | "This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machi(...TRUNCATED) | {"paper_content_id":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29(...TRUNCATED) | {"paper_header_number":["1","2","3","3.1","3.3","4","4.1","6","7"],"paper_header_content":["Introduc(...TRUNCATED) | GEM-SciDuet-train-1#paper-954#slide-8 | Does an Incremental Syntactic LM Help Translation | "but will it make my BLEU score go up?\nMotivation Syntactic LM Decoder Integration Questions?\nMose(...TRUNCATED) | "but will it make my BLEU score go up?\nMotivation Syntactic LM Decoder Integration Questions?\nMose(...TRUNCATED) | [] |
GEM-SciDuet-train-1#paper-954#slide-9 | 954 | Incremental Syntactic Language Models for Phrase-based Translation | "This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machi(...TRUNCATED) | {"paper_content_id":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29(...TRUNCATED) | {"paper_header_number":["1","2","3","3.1","3.3","4","4.1","6","7"],"paper_header_content":["Introduc(...TRUNCATED) | GEM-SciDuet-train-1#paper-954#slide-9 | Perplexity Results | "Language models trained on WSJ Treebank corpus\nMotivation Syntactic LM Decoder Integration Questio(...TRUNCATED) | "Language models trained on WSJ Treebank corpus\nMotivation Syntactic LM Decoder Integration Questio(...TRUNCATED) | [] |
Dataset Card for GEM/SciDuet
Link to Main Data Card
You can find the main data card on the GEM Website.
Dataset Summary
This dataset supports the document-to-slide generation task where a model has to generate presentation slide content from the text of a document.
You can load the dataset via:
import datasets
data = datasets.load_dataset('GEM/SciDuet')
The data loader can be found here.
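As a minimal sketch of how to inspect what was loaded (assuming the standard train/validation/test splits and the column names shown in the data preview above, such as gem_id, paper_title, slide_title, and target):

import datasets

data = datasets.load_dataset('GEM/SciDuet')

# List the available splits and the number of paper-slide pairs in each.
for split_name, split in data.items():
    print(split_name, len(split))

# Inspect one example; the field names below follow the dataset schema.
example = data['train'][0]
print(example['gem_id'])
print(example['paper_title'])
print(example['slide_title'])
print(example['target'])  # reference slide content text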
Website: see the GEM Website (main data card)
Paper: D2S: Document-to-Slide Generation Via Query-Based Text Summarization (https://aclanthology.org/2021.naacl-main.111)
Authors: Edward Sun, Yufang Hou, Dakuo Wang, Yunfeng Zhang, Nancy Wang
Dataset Overview
Where to find the Data and its Documentation
Webpage
Download
Paper
BibTeX
@inproceedings{sun-etal-2021-d2s,
title = "{D}2{S}: Document-to-Slide Generation Via Query-Based Text Summarization",
author = "Sun, Edward and
Hou, Yufang and
Wang, Dakuo and
Zhang, Yunfeng and
Wang, Nancy X. R.",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.111",
doi = "10.18653/v1/2021.naacl-main.111",
pages = "1405--1418",
abstract = "Presentations are critical for communication in all areas of our lives, yet the creation of slide decks is often tedious and time-consuming. There has been limited research aiming to automate the document-to-slides generation process and all face a critical challenge: no publicly available dataset for training and benchmarking. In this work, we first contribute a new dataset, SciDuet, consisting of pairs of papers and their corresponding slides decks from recent years{'} NLP and ML conferences (e.g., ACL). Secondly, we present D2S, a novel system that tackles the document-to-slides task with a two-step approach: 1) Use slide titles to retrieve relevant and engaging text, figures, and tables; 2) Summarize the retrieved context into bullet points with long-form question answering. Our evaluation suggests that long-form QA outperforms state-of-the-art summarization baselines on both automated ROUGE metrics and qualitative human evaluation.",
}
Has a Leaderboard?
no
Languages and Intended Use
Multilingual?
no
Covered Languages
English
License
apache-2.0: Apache License 2.0
Intended Use
Promote research on the task of document-to-slides generation
Primary Task
Text-to-Slide
Credit
Curation Organization Type(s)
industry
Curation Organization(s)
IBM Research
Dataset Creators
Edward Sun, Yufang Hou, Dakuo Wang, Yunfeng Zhang, Nancy Wang
Funding
IBM Research
Who added the Dataset to GEM?
Yufang Hou (IBM Research), Dakuo Wang (IBM Research)
Dataset Structure
How were labels chosen?
The original papers and slides (both in PDF format) are carefully processed with a combination of PDF and image processing toolkits. The text contents of multiple slides that correspond to the same slide title are merged.
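As an illustration only (not the authors' actual pipeline code), merging the text of slides that share a title could look like the sketch below; the (slide_title, slide_text) pair structure is a hypothetical simplification:

# Sketch: group OCR'd slide text by slide title and join it into one target string.
from collections import defaultdict

def merge_by_title(slides):
    # slides: iterable of (slide_title, slide_text) pairs from one slide deck
    merged = defaultdict(list)
    for title, text in slides:
        merged[title].append(text.strip())
    return {title: '\n'.join(texts) for title, texts in merged.items()}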
Data Splits
The training, validation, and test sets contain 136, 55, and 81 papers from the ACL Anthology, respectively, together with their corresponding slides.
Splitting Criteria
The dataset integrated into GEM is the ACL portion of the whole dataset described in the paper. It contains the full Dev and Test sets and a portion of the Train set. Note that although we cannot release the whole training dataset due to copyright issues, researchers can still use our released data procurement code to generate the training dataset from the online ICML/NeurIPS anthologies.
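A small illustrative check of these counts against the released GEM splits (hypothetical code; note that the GEM train split covers only a portion of the original 136 training papers):

import datasets

data = datasets.load_dataset('GEM/SciDuet')

# Each row is one paper-slide pair, so count distinct papers per split.
for split_name, split in data.items():
    n_papers = len(set(split['paper_id']))
    print(f'{split_name}: {n_papers} papers, {len(split)} paper-slide pairs')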
Dataset in GEM
Rationale for Inclusion in GEM
Why is the Dataset in GEM?
SciDuet is the first publicly available dataset for the challenging task of document-to-slides generation, which requires a model to "understand" long-form text, choose appropriate content, and generate key points.
Similar Datasets
no
Ability that the Dataset measures
content selection, long-form text understanding and generation
GEM-Specific Curation
Modificatied for GEM?
no
Additional Splits?
no
Getting Started with the Task
Previous Results
Previous Results
Measured Model Abilities
content selection, long-form text understanding, and key point generation
Metrics
ROUGE
Proposed Evaluation
Automatic evaluation metric: ROUGE. Human evaluation along three dimensions (Readability, Informativeness, Consistency):
- Readability: The generated slide content is coherent, concise, and grammatically correct;
- Informativeness: The generated slide provides sufficient and necessary information that corresponds to the given slide title, regardless of its similarity to the original slide;
- Consistency: The generated slide content is similar to the original author’s reference slide.
Previous results available?
yes
Other Evaluation Approaches
ROUGE + Human Evaluation
Relevant Previous Results
Paper "D2S: Document-to-Slide Generation Via Query-Based Text Summarization" reports 20.47, 5.26 and 19.08 for ROUGE-1, ROUGE-2 and ROUGE-L (f-score).
Dataset Curation
Original Curation
Original Curation Rationale
Provide a benchmark dataset for the document-to-slides task.
Sourced from Different Sources
no
Language Data
How was Language Data Obtained?
Other
Data Validation
not validated
Data Preprocessing
Text from papers was extracted with Grobid. Figures and captions were extracted with pdffigures. Text from slides was extracted with the IBM Watson Discovery package and OCR via pytesseract. Figures and tables that appear on both slides and papers were linked through multiscale template matching with OpenCV. Further dataset cleaning was performed with standard string-based heuristics for sentence building, equation and floating-caption removal, and duplicate line deletion.
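A rough sketch (not the authors' pipeline code) of the multiscale template matching step with OpenCV; the file paths, scales, and the interpretation of the score are hypothetical:

# Link a paper figure to a slide page by multiscale template matching.
import cv2

def best_match_score(slide_path, figure_path, scales=(0.5, 0.75, 1.0, 1.25, 1.5)):
    slide = cv2.imread(slide_path, cv2.IMREAD_GRAYSCALE)
    figure = cv2.imread(figure_path, cv2.IMREAD_GRAYSCALE)
    best = -1.0
    for s in scales:
        template = cv2.resize(figure, None, fx=s, fy=s, interpolation=cv2.INTER_AREA)
        if template.shape[0] > slide.shape[0] or template.shape[1] > slide.shape[1]:
            continue  # skip scales where the template no longer fits the slide image
        result = cv2.matchTemplate(slide, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, _ = cv2.minMaxLoc(result)
        best = max(best, max_val)
    return best  # scores near 1.0 indicate a likely figure-slide match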
Was Data Filtered?
algorithmically
Filter Criteria
The slide content text should not contain additional formatting information such as "*** University".
Structured Annotations
Additional Annotations?
none
Annotation Service?
no
Consent
Any Consent Policy?
yes
Consent Policy Details
The original dataset was open-sourced under Apache-2.0.
Some of the original dataset creators are part of the GEM v2 dataset infrastructure team and take care of integrating this dataset into GEM.
Private Identifying Information (PII)
Contains PII?
yes/very likely
Categories of PII
generic PII
Any PII Identification?
no identification
Maintenance
Any Maintenance Plan?
no
Broader Social Context
Previous Work on the Social Impact of the Dataset
Usage of Models based on the Data
no
Impact on Under-Served Communities
Addresses needs of underserved Communities?
no
Discussion of Biases
Any Documented Social Biases?
unsure
Considerations for Using the Data
PII Risks and Liability
Licenses
Copyright Restrictions on the Dataset
non-commercial use only
Copyright Restrictions on the Language Data
research use only
Known Technical Limitations