Datasets: GEM/SciDuet

Tasks: Other
Languages: English
Multilinguality: unknown
Size Categories: unknown
Language Creators: unknown
Annotations Creators: none
Source Datasets: original
Licenses: apache-2.0
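
As a quick orientation, the sketch below shows one way to load and inspect this dataset with the Hugging Face datasets library. It assumes the data is hosted on the Hub under an identifier like "GEM/SciDuet" with a train split; check the Hub page for the exact name and available splits.

    from datasets import load_dataset

    # Load the training split of the paper-to-slides dataset (identifier assumed).
    dataset = load_dataset("GEM/SciDuet", split="train")

    # Each record pairs a paper (title, abstract, parsed content) with one slide.
    print(dataset.column_names)
    print(dataset[0]["paper_title"])
    print(dataset[0]["target"])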
Dataset Preview
Columns: gem_id (string), paper_id (string), paper_title (string), paper_abstract (string), paper_content (sequence), paper_headers (sequence), slide_id (string), slide_title (string), slide_content_text (string), target (string), references (list)
"GEM-SciDuet-train-1#paper-954#slide-0"
"954"
"Incremental Syntactic Language Models for Phrase-based Translation"
"This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machine translation through incremental syntactic parsing. Bottom-up and topdown parsers typically require a completed string as input. This requirement makes it difficult to incorporate them into phrase-based translation, which generates partial hypothesized translations from left-to-right. Incremental syntactic language models score sentences in a similar left-to-right fashion, and are therefore a good mechanism for incorporating syntax into phrase-based translation. We give a formal definition of one such lineartime syntactic language model, detail its relation to phrase-based decoding, and integrate the model with the Moses phrase-based translation system. We present empirical results on a constrained Urdu-English translation task that demonstrate a significant BLEU score improvement and a large decrease in perplexity."
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158 ], "paper_content_text": [ "Introduction Early work in statistical machine translation viewed translation as a noisy channel process comprised of a translation model, which functioned to posit adequate translations of source language words, and a target language model, which guided the fluency of generated target language strings (Brown et al., This research was supported by NSF CAREER/PECASE award 0447685, NSF grant IIS-0713448, and the European Commission through the EuroMatrixPlus project.", "Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the sponsors or the United States Air Force.", "Cleared for public release (Case Number 88ABW-2010-6489) on 10 Dec 2010.", "1990).", "Drawing on earlier successes in speech recognition, research in statistical machine translation has effectively used n-gram word sequence models as language models.", "Modern phrase-based translation using large scale n-gram language models generally performs well in terms of lexical choice, but still often produces ungrammatical output.", "Syntactic parsing may help produce more grammatical output by better modeling structural relationships and long-distance dependencies.", "Bottom-up and top-down parsers typically require a completed string as input; this requirement makes it difficult to incorporate these parsers into phrase-based translation, which generates hypothesized translations incrementally, from left-to-right.", "1 As a workaround, parsers can rerank the translated output of translation systems (Och et al., 2004) .", "On the other hand, incremental parsers (Roark, 2001; Henderson, 2004; Schuler et al., 2010; Huang and Sagae, 2010) process input in a straightforward left-to-right manner.", "We observe that incremental parsers, used as structured language models, provide an appropriate algorithmic match to incremental phrase-based decoding.", "We directly integrate incremental syntactic parsing into phrase-based translation.", "This approach re-exerts the role of the language model as a mechanism for encouraging syntactically fluent translations.", "The contributions of this work are as follows: • A novel method for integrating syntactic LMs into phrase-based translation ( §3) • A formal definition of an incremental parser for statistical MT that can run in linear-time ( §4) • Integration with Moses ( §5) along with empirical results for perplexity and significant translation score improvement on a constrained Urdu-English task ( §6) Related Work Neither phrase-based (Koehn et al., 2003) nor hierarchical phrase-based translation (Chiang, 2005) take explicit advantage of the syntactic structure of either source or target language.", "The translation models in these techniques define phrases as contiguous word sequences (with gaps allowed in the 
case of hierarchical phrases) which may or may not correspond to any linguistic constituent.", "Early work in statistical phrase-based translation considered whether restricting translation models to use only syntactically well-formed constituents might improve translation quality (Koehn et al., 2003) but found such restrictions failed to improve translation quality.", "Significant research has examined the extent to which syntax can be usefully incorporated into statistical tree-based translation models: string-to-tree (Yamada and Knight, 2001; Gildea, 2003; Imamura et al., 2004; Galley et al., 2004; Graehl and Knight, 2004; Melamed, 2004; Galley et al., 2006; Huang et al., 2006; Shen et al., 2008) , tree-to-string (Liu et al., 2006; Liu et al., 2007; Huang and Mi, 2010) , tree-to-tree (Abeillé et al., 1990; Shieber and Schabes, 1990; Poutsma, 1998; Eisner, 2003; Shieber, 2004; Cowan et al., 2006; Nesson et al., 2006; Zhang et al., 2007; DeNeefe et al., 2007; DeNeefe and Knight, 2009; Liu et al., 2009; Chiang, 2010) , and treelet (Ding and Palmer, 2005; Quirk et al., 2005) techniques use syntactic information to inform the translation model.", "Recent work has shown that parsing-based machine translation using syntax-augmented (Zollmann and Venugopal, 2006) hierarchical translation grammars with rich nonterminal sets can demonstrate substantial gains over hierarchical grammars for certain language pairs (Baker et al., 2009) .", "In contrast to the above tree-based translation models, our approach maintains a standard (non-syntactic) phrase-based translation model.", "Instead, we incorporate syntax into the language model.", "Traditional approaches to language models in speech recognition and statistical machine translation focus on the use of n-grams, which provide a simple finite-state model approximation of the target language.", "Chelba and Jelinek (1998) proposed that syntactic structure could be used as an alternative technique in language modeling.", "This insight has been explored in the context of speech recognition (Chelba and Jelinek, 2000; Collins et al., 2005) .", "Hassan et al.", "(2007) and use supertag n-gram LMs.", "Syntactic language models have also been explored with tree-based translation models.", "Charniak et al.", "(2003) use syntactic language models to rescore the output of a tree-based translation system.", "Post and Gildea (2008) investigate the integration of parsers as syntactic language models during binary bracketing transduction translation (Wu, 1997) ; under these conditions, both syntactic phrase-structure and dependency parsing language models were found to improve oracle-best translations, but did not improve actual translation results.", "Post and Gildea (2009) use tree substitution grammar parsing for language modeling, but do not use this language model in a translation system.", "Our work, in contrast to the above approaches, explores the use of incremental syntactic language models in conjunction with phrase-based translation models.", "Our syntactic language model fits into the family of linear-time dynamic programming parsers described in (Huang and Sagae, 2010) .", "Like (Galley and Manning, 2009 ) our work implements an incremental syntactic language model; our approach differs by calculating syntactic LM scores over all available phrase-structure parses at each hypothesis instead of the 1-best dependency parse.", "The syntax-driven reordering model of Ge (2010) uses syntax-driven features to influence word order within standard phrase-based 
translation.", "The syntactic cohesion features of Cherry (2008) encourages the use of syntactically well-formed translation phrases.", "These approaches are fully orthogonal to our proposed incremental syntactic language model, and could be applied in concert with our work.", "Parser as Syntactic Language Model in Phrase-Based Translation Parsing is the task of selecting the representationτ (typically a tree) that best models the structure of s τ 0 s thẽ τ 1 1 s that τ 1 2 s president τ 1 3 .", ".", ".", "the president τ 2 1 that president τ 2 2 president Fridaỹ τ 2 3 .", ".", ".", "president meets τ 3 1 Obama met τ 3 2 .", ".", ".", "Figure 1 : Partial decoding lattice for standard phrase-based decoding stack algorithm translating the German sentence Der Präsident trifft am Freitag den Vorstand.", "Each node h in decoding stack t represents the application of a translation option, and includes the source sentence coverage vector, target language ngram state, and syntactic language model stateτ t h .", "Hypothesis combination is also shown, indicating where lattice paths with identical n-gram histories converge.", "We use the English translation The president meets the board on Friday as a running example throughout all Figures.", "sentence e, out of all such possible representations τ .", "This set of representations may be all phrase structure trees or all dependency trees allowed by the parsing model.", "Typically, treeτ is taken to be: τ = argmax τ P(τ | e) (1) We define a syntactic language model P(e) based on the total probability mass over all possible trees for string e. This is shown in Equation 2 and decomposed in Equation 3.", "P(e) = τ ∈τ P(τ, e) (2) P(e) = τ ∈τ P(e | τ )P(τ ) (3) Incremental syntactic language model An incremental parser processes each token of input sequentially from the beginning of a sentence to the end, rather than processing input in a top-down (Earley, 1968) or bottom-up (Cocke and Schwartz, 1970; Kasami, 1965; Younger, 1967) fashion.", "After processing the tth token in string e, an incremental parser has some internal representation of possible hypothesized (incomplete) trees, τ t .", "The syntactic language model probability of a partial sentence e 1 ...e t is defined: P(e 1 ...e t ) = τ ∈τt P(e 1 ...e t | τ )P(τ ) (4) In practice, a parser may constrain the set of trees under consideration toτ t , that subset of analyses or partial analyses that remains after any pruning is performed.", "An incremental syntactic language model can then be defined by a probability mass function (Equation 5) and a transition function δ (Equation 6 ).", "The role of δ is explained in §3.3 below.", "Any parser which implements these two functions can serve as a syntactic language model.", "P(e 1 ...e t ) ≈ P(τ t ) = τ ∈τ t P(e 1 ...e t | τ )P(τ ) (5) δ(e t ,τ t−1 ) →τ t (6) 3.2 Decoding in phrase-based translation Given a source language input sentence f , a trained source-to-target translation model, and a target language model, the task of translation is to find the maximally probable translationê using a linear combination of j feature functions h weighted according to tuned parameters λ (Och and Ney, 2002) .", "e = argmax e exp( j λ j h j (e, f )) (7) Phrase-based translation constructs a set of translation options -hypothesized translations for contiguous portions of the source sentence -from a trained phrase table, then incrementally constructs a lattice of partial target translations (Koehn, 2010) .", "To prune the search space, lattice nodes are organized into 
beam stacks (Jelinek, 1969) according to the number of source words translated.", "An n-gram language model history is also maintained at each node in the translation lattice.", "The search space is further trimmed with hypothesis recombination, which collapses lattice nodes that share a common coverage vector and n-gram state.", "Incorporating a Syntactic Language Model Phrase-based translation produces target language words in an incremental left-to-right fashion, generating words at the beginning of a translation first and words at the end of a translation last.", "Similarly, incremental parsers process sentences in an incremental fashion, analyzing words at the beginning of a sentence first and words at the end of a sentence last.", "As such, an incremental parser with transition function δ can be incorporated into the phrase-based decoding process in a straightforward manner.", "Each node in the translation lattice is augmented with a syntactic language model stateτ t .", "The hypothesis at the root of the translation lattice is initialized withτ 0 , representing the internal state of the incremental parser before any input words are processed.", "The phrase-based translation decoding process adds nodes to the lattice; each new node contains one or more target language words.", "Each node contains a backpointer to its parent node, in whichτ t−1 is stored.", "Given a new target language word e t andτ t−1 , the incremental parser's transition function δ calculatesτ t .", "Figure 1 a sample phrase-based decoding lattice where each translation lattice node is augmented with syntactic language model stateτ t .", "In phrase-based translation, many translation lattice nodes represent multi-word target language phrases.", "For such translation lattice nodes, δ will be called once for each newly hypothesized target language word in the node.", "Only the final syntactic language model state in such sequences need be stored in the translation lattice node.", "Incremental Bounded-Memory Parsing with a Time Series Model Having defined the framework by which any incremental parser may be incorporated into phrasebased translation, we now formally define a specific incremental parser for use in our experiments.", "The parser must process target language words incrementally as the phrase-based decoder adds hypotheses to the translation lattice.", "To facilitate this incremental processing, ordinary phrase-structure trees can be transformed into right-corner recur- r 1 t−1 r 2 t−1 r 3 t−1 s 1 t−1 s 2 t−1 s 3 t−1 r 1 t r 2 t r 3 t s 1 t s 2 t s 3 t e t−1 e t .", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", "Figure 4 : Graphical representation of the dependency structure in a standard Hierarchic Hidden Markov Model with D = 3 hidden levels that can be used to parse syntax.", "Circles denote random variables, and edges denote conditional dependencies.", "Shaded circles denote variables with observed values.", "sive phrase structure trees using the tree transforms in Schuler et al.", "(2010) .", "Constituent nonterminals in right-corner transformed trees take the form of incomplete constituents c η /c ηι consisting of an 'active' constituent c η lacking an 'awaited' constituent c ηι yet to come, similar to non-constituent categories in a Combinatory Categorial Grammar (Ades and Steedman, 1982; Steedman, 2000) .", "As an example, the parser might consider VP/NN as a possible category for input \"meets the\".", "A sample phrase structure tree is shown before and after the right-corner transform in 
Figures 2 and 3.", "Our parser operates over a right-corner transformed probabilistic context-free grammar (PCFG).", "Parsing runs in linear time on the length of the input.", "This model of incremental parsing is implemented as a Hierarchical Hidden Markov Model (HHMM) (Murphy and Paskin, 2001) , and is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The parser runs in O(n) time, where n is the number of words in the input.", "This model is shown graphically in Figure 4 and formally defined in §4.1 below.", "The incremental parser assigns a probability (Eq.", "5) for a partial target language hypothesis, using a bounded store of incomplete constituents c η /c ηι .", "The phrase-based decoder uses this probability value as the syntactic language model feature score.", "Formal Parsing Model: Scoring Partial Translation Hypotheses This model is essentially an extension of an HHMM, which obtains a most likely sequence of hidden store states,ŝ 1..D 1..T , of some length T and some maximum depth D, given a sequence of observed tokens (e.g.", "generated target language words), e 1..T , using HHMM state transition model θ A and observation symbol model θ B (Rabiner, 1990) : s 1..D 1..T def = argmax s 1..D 1..T T t=1 P θ A (s 1..D t | s 1..D t−1 )·P θ B (e t | s 1..D t ) (8) The HHMM parser is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The model generates each successive store (using store model θ S ) only after considering whether each nested sequence of incomplete constituents has completed and reduced (using reduction model θ R ): P θ A (s 1..D t | s 1..D t−1 ) def = r 1 t ..r D t D d=1 P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) · P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) (9) Store elements are defined to contain only the active (c η ) and awaited (c ηι ) constituent categories necessary to compute an incomplete constituent probability: s d t def = c η , c ηι (10) Reduction states are defined to contain only the complete constituent category c r d t necessary to compute an inside likelihood probability, as well as a flag f r d t indicating whether a reduction has taken place (to end a sequence of incomplete constituents): r d t def = c r d t , f r d t (11) The model probabilities for these store elements and reduction states can then be defined (from Murphy and Paskin 2001) to expand a new incomplete constituent after a reduction has taken place (f r d t = 1; using depth-specific store state expansion model θ S-E,d ), transition along a sequence of store elements if no reduction has taken place (f r d t = 0; using depthspecific store state transition model θ S-T,d ): 2 P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) def =    if f r d+1 t = 1, f r d t = 1 : P θ S-E,d (s d t | s d−1 t ) if f r d+1 t = 1, f r d t = 0 : P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) if f r d+1 t = 0, f r d t = 0 : s d t = s d t−1 (12) and possibly reduce a store element (terminate a sequence) if the store state below it has reduced (f r d+1 t = 1; using depth-specific reduction model θ R,d ): P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if f r d+1 t = 0 : r d t = r ⊥ if f r d+1 t = 1 : P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) (13) where r ⊥ is a null state resulting from the failure of an incomplete constituent to complete, and constants are defined for the edge conditions of s 0 t and r D+1 t .", "Figure 5 illustrates this model in action.", "These pushdown automaton operations are then refined for right-corner parsing 
(Schuler, 2009) , distinguishing active transitions (model θ S-T-A,d , in which an incomplete constituent is completed, but not reduced, and then immediately expanded to a 2 An indicator function · is used to denote deterministic probabilities: φ = 1 if φ is true, 0 otherwise.", "new incomplete constituent in the same store element) from awaited transitions (model θ S-T-W,d , which involve no completion): P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) def = if r d t = r ⊥ : P θ S-T-A,d (s d t | s d−1 t r d t ) if r d t = r ⊥ : P θ S-T-W,d (s d t | s d t−1 r d+1 t ) (14) P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if c r d+1 t = x t : r d t = r ⊥ if c r d+1 t = x t : P θ R-R,d (r d t | s d t−1 s d−1 t−1 ) (15) These HHMM right-corner parsing operations are then defined in terms of branch-and depth-specific PCFG probabilities θ G-R,d and θ G-L,d : 3 3 Model probabilities are also defined in terms of leftprogeny probability distribution E θ G-RL * ,d which is itself defined in terms of PCFG probabilities: are calculated by transition function δ (Eq.", "6, as defined by §4.1), but are not stored.", "Observed random variables (e 3 ..e 5 ) are shown for clarity, but are not explicitly stored in any syntactic language model state.", "E θ G-RL * ,d (cη 0 → cη0 ...) def = c η1 P θ G-R,d (cη → cη0 cη1) (16) E θ G-RL * ,d (cη k → c η0 k 0 ...) def = c η0 k E θ G-RL * ,d (cη k−1 → c η0 k ...) · c η0 k 1 P θ G-L,d (c η0 k → c η0 k 0 c η0 k 1 ) (17) E θ G-RL * ,d (cη * → cηι ...) def = ∞ k=0 E θ G-RL * ,d (cη k → cηι ...) (18) E θ G-RL * ,d (cη + → cηι ...) def = E θ G-RL * ,d (cη * → cηι ...) − E θ G-RL * ,d (cη 0 → cηι ...) (19) coder's hypothesis stacks.", "Figure 1 illustrates an excerpt from a standard phrase-based translation lattice.", "Within each decoder stack t, each hypothesis h is augmented with a syntactic language model stateτ t h .", "Each syntactic language model state is a random variable store, containing a slice of random variables from the HHMM.", "Specifically,τ t h contains those random variables s 1..D t that maintain distributions over syntactic elements.", "By maintaining these syntactic random variable stores, each hypothesis has access to the current language model probability for the partial translation ending at that hypothesis, as calculated by an incremental syntactic language model defined by the HHMM.", "Specifically, the random variable store at hypothesis h provides P(τ t h ) = P(e h 1..t , s 1..D 1..t ), where e h 1..t is the sequence of words in a partial hypothesis ending at h which contains t target words, and where there are D syntactic random variables in each random variable store (Eq.", "5).", "During stack decoding, the phrase-based decoder progressively constructs new hypotheses by extending existing hypotheses.", "New hypotheses are placed in appropriate hypothesis stacks.", "In the simplest case, a new hypothesis extends an existing hypothesis by exactly one target word.", "As the new hypothesis is constructed by extending an existing stack element, the store and reduction state random variables are processed, along with the newly hypothesized word.", "This results in a new store of syntactic random variables (Eq.", "6) that are associated with the new stack element.", "When a new hypothesis extends an existing hypothesis by more than one word, this process is first carried out for the first new word in the hypothesis.", "It is then repeated for the remaining words in the hypothesis extension.", "Once the final word in the hypothesis has been 
processed, the resulting random variable store is associated with that hypothesis.", "The random variable stores created for the non-final words in the extending hypothesis are discarded, and need not be explicitly retained.", "Figure 6 illustrates this process, showing how a syntactic language model stateτ 5 1 in a phrase-based decoding lattice is obtained from a previous syntactic language model stateτ 3 1 (from Figure 1) by parsing the target language words from a phrasebased translation option.", "Our syntactic language model is integrated into the current version of Moses .", "Results As an initial measure to compare language models, average per-word perplexity, ppl, reports how surprised a model is by test data.", "Equation 25 calculates ppl using log base b for a test set of T tokens.", "ppl = b −log b P(e 1 ...e T ) T (25) We trained the syntactic language model from §4 (HHMM) and an interpolated n-gram language model with modified Kneser-Ney smoothing (Chen and Goodman, 1998); models were trained on sections 2-21 of the Wall Street Journal (WSJ) treebank (Marcus et al., 1993 HHMM and n-gram LMs (Figure 7) .", "To show the effects of training an LM on more data, we also report perplexity results on the 5-gram LM trained for the GALE Arabic-English task using the English Gigaword corpus.", "In all cases, including the HHMM significantly reduces perplexity.", "We trained a phrase-based translation model on the full NIST Open MT08 Urdu-English translation model using the full training data.", "We trained the HHMM and n-gram LMs on the WSJ data in order to make them as similar as possible.", "During tuning, Moses was first configured to use just the n-gram LM, then configured to use both the n-gram LM and the syntactic HHMM LM.", "MERT consistently assigned positive weight to the syntactic LM feature, typically slightly less than the n-gram LM weight.", "In our integration with Moses, incorporating a syntactic language model dramatically slows the decoding process.", "Figure 8 illustrates a slowdown around three orders of magnitude.", "Although speed remains roughly linear to the size of the source sentence (ruling out exponential behavior), it is with an extremely large constant time factor.", "Due to this slowdown, we tuned the parameters using a constrained dev set (only sentences with 1-20 words), and tested using a constrained devtest set (only sentences with 1-20 words).", "Figure 9 shows a statistically significant improvement to the BLEU score when using the HHMM and the n-gram LMs together on this reduced test set.", "Discussion This paper argues that incremental syntactic languages models are a straightforward and appro-Moses LM(s) BLEU n-gram only 18.78 HHMM + n-gram 19.78 Figure 9 : Results for Ur-En devtest (only sentences with 1-20 words) with HHMM beam size of 2000 and Moses settings of distortion limit 10, stack size 200, and ttable limit 20. 
priate algorithmic fit for incorporating syntax into phrase-based statistical machine translation, since both process sentences in an incremental left-toright fashion.", "This means incremental syntactic LM scores can be calculated during the decoding process, rather than waiting until a complete sentence is posited, which is typically necessary in top-down or bottom-up parsing.", "We provided a rigorous formal definition of incremental syntactic languages models, and detailed what steps are necessary to incorporate such LMs into phrase-based decoding.", "We integrated an incremental syntactic language model into Moses.", "The translation quality significantly improved on a constrained task, and the perplexity improvements suggest that interpolating between n-gram and syntactic LMs may hold promise on larger data sets.", "The use of very large n-gram language models is typically a key ingredient in the best-performing machine translation systems (Brants et al., 2007) .", "Our n-gram model trained only on WSJ is admittedly small.", "Our future work seeks to incorporate largescale n-gram language models in conjunction with incremental syntactic language models.", "The added decoding time cost of our syntactic language model is very high.", "By increasing the beam size and distortion limit of the baseline system, future work may examine whether a baseline system with comparable runtimes can achieve comparable translation quality.", "A more efficient implementation of the HHMM parser would speed decoding and make more extensive and conclusive translation experiments possible.", "Various additional improvements could include caching the HHMM LM calculations, and exploiting properties of the right-corner transform that limit the number of decisions between successive time steps." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.3", "4", "4.1", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Parser as Syntactic Language Model in", "Incremental syntactic language model", "Incorporating a Syntactic Language Model", "Incremental Bounded-Memory Parsing with a Time Series Model", "Formal Parsing Model: Scoring Partial Translation Hypotheses", "Results", "Discussion" ] }
"GEM-SciDuet-train-1#paper-954#slide-0"
"Syntax in Statistical Machine Translation"
"Translation Model vs Language Model Syntactic LM Decoder Integration Results Questions?"
"Translation Model vs Language Model Syntactic LM Decoder Integration Results Questions?"
[]
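
Given a record like the slide-0 row above, a typical use is document-to-slide generation: condition on the paper fields and predict the slide text in target. The sketch below is illustrative only; the field names follow the preview, but the joining and truncation choices are arbitrary, and it reuses the dataset object loaded earlier.

    def make_example(record, max_chars=2000):
        # paper_content holds parallel lists of sentence ids and sentence strings.
        paper_text = " ".join(record["paper_content"]["paper_content_text"])
        source = " ".join([record["paper_title"], record["paper_abstract"], paper_text])
        return {"source": source[:max_chars], "target": record["target"]}

    example = make_example(dataset[0])
    print(example["target"])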
"GEM-SciDuet-train-1#paper-954#slide-1"
"954"
"Incremental Syntactic Language Models for Phrase-based Translation"
"This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machine translation through incremental syntactic parsing. Bottom-up and topdown parsers typically require a completed string as input. This requirement makes it difficult to incorporate them into phrase-based translation, which generates partial hypothesized translations from left-to-right. Incremental syntactic language models score sentences in a similar left-to-right fashion, and are therefore a good mechanism for incorporating syntax into phrase-based translation. We give a formal definition of one such lineartime syntactic language model, detail its relation to phrase-based decoding, and integrate the model with the Moses phrase-based translation system. We present empirical results on a constrained Urdu-English translation task that demonstrate a significant BLEU score improvement and a large decrease in perplexity."
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158 ], "paper_content_text": [ "Introduction Early work in statistical machine translation viewed translation as a noisy channel process comprised of a translation model, which functioned to posit adequate translations of source language words, and a target language model, which guided the fluency of generated target language strings (Brown et al., This research was supported by NSF CAREER/PECASE award 0447685, NSF grant IIS-0713448, and the European Commission through the EuroMatrixPlus project.", "Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the sponsors or the United States Air Force.", "Cleared for public release (Case Number 88ABW-2010-6489) on 10 Dec 2010.", "1990).", "Drawing on earlier successes in speech recognition, research in statistical machine translation has effectively used n-gram word sequence models as language models.", "Modern phrase-based translation using large scale n-gram language models generally performs well in terms of lexical choice, but still often produces ungrammatical output.", "Syntactic parsing may help produce more grammatical output by better modeling structural relationships and long-distance dependencies.", "Bottom-up and top-down parsers typically require a completed string as input; this requirement makes it difficult to incorporate these parsers into phrase-based translation, which generates hypothesized translations incrementally, from left-to-right.", "1 As a workaround, parsers can rerank the translated output of translation systems (Och et al., 2004) .", "On the other hand, incremental parsers (Roark, 2001; Henderson, 2004; Schuler et al., 2010; Huang and Sagae, 2010) process input in a straightforward left-to-right manner.", "We observe that incremental parsers, used as structured language models, provide an appropriate algorithmic match to incremental phrase-based decoding.", "We directly integrate incremental syntactic parsing into phrase-based translation.", "This approach re-exerts the role of the language model as a mechanism for encouraging syntactically fluent translations.", "The contributions of this work are as follows: • A novel method for integrating syntactic LMs into phrase-based translation ( §3) • A formal definition of an incremental parser for statistical MT that can run in linear-time ( §4) • Integration with Moses ( §5) along with empirical results for perplexity and significant translation score improvement on a constrained Urdu-English task ( §6) Related Work Neither phrase-based (Koehn et al., 2003) nor hierarchical phrase-based translation (Chiang, 2005) take explicit advantage of the syntactic structure of either source or target language.", "The translation models in these techniques define phrases as contiguous word sequences (with gaps allowed in the 
case of hierarchical phrases) which may or may not correspond to any linguistic constituent.", "Early work in statistical phrase-based translation considered whether restricting translation models to use only syntactically well-formed constituents might improve translation quality (Koehn et al., 2003) but found such restrictions failed to improve translation quality.", "Significant research has examined the extent to which syntax can be usefully incorporated into statistical tree-based translation models: string-to-tree (Yamada and Knight, 2001; Gildea, 2003; Imamura et al., 2004; Galley et al., 2004; Graehl and Knight, 2004; Melamed, 2004; Galley et al., 2006; Huang et al., 2006; Shen et al., 2008) , tree-to-string (Liu et al., 2006; Liu et al., 2007; Huang and Mi, 2010) , tree-to-tree (Abeillé et al., 1990; Shieber and Schabes, 1990; Poutsma, 1998; Eisner, 2003; Shieber, 2004; Cowan et al., 2006; Nesson et al., 2006; Zhang et al., 2007; DeNeefe et al., 2007; DeNeefe and Knight, 2009; Liu et al., 2009; Chiang, 2010) , and treelet (Ding and Palmer, 2005; Quirk et al., 2005) techniques use syntactic information to inform the translation model.", "Recent work has shown that parsing-based machine translation using syntax-augmented (Zollmann and Venugopal, 2006) hierarchical translation grammars with rich nonterminal sets can demonstrate substantial gains over hierarchical grammars for certain language pairs (Baker et al., 2009) .", "In contrast to the above tree-based translation models, our approach maintains a standard (non-syntactic) phrase-based translation model.", "Instead, we incorporate syntax into the language model.", "Traditional approaches to language models in speech recognition and statistical machine translation focus on the use of n-grams, which provide a simple finite-state model approximation of the target language.", "Chelba and Jelinek (1998) proposed that syntactic structure could be used as an alternative technique in language modeling.", "This insight has been explored in the context of speech recognition (Chelba and Jelinek, 2000; Collins et al., 2005) .", "Hassan et al.", "(2007) and use supertag n-gram LMs.", "Syntactic language models have also been explored with tree-based translation models.", "Charniak et al.", "(2003) use syntactic language models to rescore the output of a tree-based translation system.", "Post and Gildea (2008) investigate the integration of parsers as syntactic language models during binary bracketing transduction translation (Wu, 1997) ; under these conditions, both syntactic phrase-structure and dependency parsing language models were found to improve oracle-best translations, but did not improve actual translation results.", "Post and Gildea (2009) use tree substitution grammar parsing for language modeling, but do not use this language model in a translation system.", "Our work, in contrast to the above approaches, explores the use of incremental syntactic language models in conjunction with phrase-based translation models.", "Our syntactic language model fits into the family of linear-time dynamic programming parsers described in (Huang and Sagae, 2010) .", "Like (Galley and Manning, 2009 ) our work implements an incremental syntactic language model; our approach differs by calculating syntactic LM scores over all available phrase-structure parses at each hypothesis instead of the 1-best dependency parse.", "The syntax-driven reordering model of Ge (2010) uses syntax-driven features to influence word order within standard phrase-based 
translation.", "The syntactic cohesion features of Cherry (2008) encourages the use of syntactically well-formed translation phrases.", "These approaches are fully orthogonal to our proposed incremental syntactic language model, and could be applied in concert with our work.", "Parser as Syntactic Language Model in Phrase-Based Translation Parsing is the task of selecting the representationτ (typically a tree) that best models the structure of s τ 0 s thẽ τ 1 1 s that τ 1 2 s president τ 1 3 .", ".", ".", "the president τ 2 1 that president τ 2 2 president Fridaỹ τ 2 3 .", ".", ".", "president meets τ 3 1 Obama met τ 3 2 .", ".", ".", "Figure 1 : Partial decoding lattice for standard phrase-based decoding stack algorithm translating the German sentence Der Präsident trifft am Freitag den Vorstand.", "Each node h in decoding stack t represents the application of a translation option, and includes the source sentence coverage vector, target language ngram state, and syntactic language model stateτ t h .", "Hypothesis combination is also shown, indicating where lattice paths with identical n-gram histories converge.", "We use the English translation The president meets the board on Friday as a running example throughout all Figures.", "sentence e, out of all such possible representations τ .", "This set of representations may be all phrase structure trees or all dependency trees allowed by the parsing model.", "Typically, treeτ is taken to be: τ = argmax τ P(τ | e) (1) We define a syntactic language model P(e) based on the total probability mass over all possible trees for string e. This is shown in Equation 2 and decomposed in Equation 3.", "P(e) = τ ∈τ P(τ, e) (2) P(e) = τ ∈τ P(e | τ )P(τ ) (3) Incremental syntactic language model An incremental parser processes each token of input sequentially from the beginning of a sentence to the end, rather than processing input in a top-down (Earley, 1968) or bottom-up (Cocke and Schwartz, 1970; Kasami, 1965; Younger, 1967) fashion.", "After processing the tth token in string e, an incremental parser has some internal representation of possible hypothesized (incomplete) trees, τ t .", "The syntactic language model probability of a partial sentence e 1 ...e t is defined: P(e 1 ...e t ) = τ ∈τt P(e 1 ...e t | τ )P(τ ) (4) In practice, a parser may constrain the set of trees under consideration toτ t , that subset of analyses or partial analyses that remains after any pruning is performed.", "An incremental syntactic language model can then be defined by a probability mass function (Equation 5) and a transition function δ (Equation 6 ).", "The role of δ is explained in §3.3 below.", "Any parser which implements these two functions can serve as a syntactic language model.", "P(e 1 ...e t ) ≈ P(τ t ) = τ ∈τ t P(e 1 ...e t | τ )P(τ ) (5) δ(e t ,τ t−1 ) →τ t (6) 3.2 Decoding in phrase-based translation Given a source language input sentence f , a trained source-to-target translation model, and a target language model, the task of translation is to find the maximally probable translationê using a linear combination of j feature functions h weighted according to tuned parameters λ (Och and Ney, 2002) .", "e = argmax e exp( j λ j h j (e, f )) (7) Phrase-based translation constructs a set of translation options -hypothesized translations for contiguous portions of the source sentence -from a trained phrase table, then incrementally constructs a lattice of partial target translations (Koehn, 2010) .", "To prune the search space, lattice nodes are organized into 
beam stacks (Jelinek, 1969) according to the number of source words translated.", "An n-gram language model history is also maintained at each node in the translation lattice.", "The search space is further trimmed with hypothesis recombination, which collapses lattice nodes that share a common coverage vector and n-gram state.", "Incorporating a Syntactic Language Model Phrase-based translation produces target language words in an incremental left-to-right fashion, generating words at the beginning of a translation first and words at the end of a translation last.", "Similarly, incremental parsers process sentences in an incremental fashion, analyzing words at the beginning of a sentence first and words at the end of a sentence last.", "As such, an incremental parser with transition function δ can be incorporated into the phrase-based decoding process in a straightforward manner.", "Each node in the translation lattice is augmented with a syntactic language model stateτ t .", "The hypothesis at the root of the translation lattice is initialized withτ 0 , representing the internal state of the incremental parser before any input words are processed.", "The phrase-based translation decoding process adds nodes to the lattice; each new node contains one or more target language words.", "Each node contains a backpointer to its parent node, in whichτ t−1 is stored.", "Given a new target language word e t andτ t−1 , the incremental parser's transition function δ calculatesτ t .", "Figure 1 a sample phrase-based decoding lattice where each translation lattice node is augmented with syntactic language model stateτ t .", "In phrase-based translation, many translation lattice nodes represent multi-word target language phrases.", "For such translation lattice nodes, δ will be called once for each newly hypothesized target language word in the node.", "Only the final syntactic language model state in such sequences need be stored in the translation lattice node.", "Incremental Bounded-Memory Parsing with a Time Series Model Having defined the framework by which any incremental parser may be incorporated into phrasebased translation, we now formally define a specific incremental parser for use in our experiments.", "The parser must process target language words incrementally as the phrase-based decoder adds hypotheses to the translation lattice.", "To facilitate this incremental processing, ordinary phrase-structure trees can be transformed into right-corner recur- r 1 t−1 r 2 t−1 r 3 t−1 s 1 t−1 s 2 t−1 s 3 t−1 r 1 t r 2 t r 3 t s 1 t s 2 t s 3 t e t−1 e t .", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", "Figure 4 : Graphical representation of the dependency structure in a standard Hierarchic Hidden Markov Model with D = 3 hidden levels that can be used to parse syntax.", "Circles denote random variables, and edges denote conditional dependencies.", "Shaded circles denote variables with observed values.", "sive phrase structure trees using the tree transforms in Schuler et al.", "(2010) .", "Constituent nonterminals in right-corner transformed trees take the form of incomplete constituents c η /c ηι consisting of an 'active' constituent c η lacking an 'awaited' constituent c ηι yet to come, similar to non-constituent categories in a Combinatory Categorial Grammar (Ades and Steedman, 1982; Steedman, 2000) .", "As an example, the parser might consider VP/NN as a possible category for input \"meets the\".", "A sample phrase structure tree is shown before and after the right-corner transform in 
Figures 2 and 3.", "Our parser operates over a right-corner transformed probabilistic context-free grammar (PCFG).", "Parsing runs in linear time on the length of the input.", "This model of incremental parsing is implemented as a Hierarchical Hidden Markov Model (HHMM) (Murphy and Paskin, 2001) , and is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The parser runs in O(n) time, where n is the number of words in the input.", "This model is shown graphically in Figure 4 and formally defined in §4.1 below.", "The incremental parser assigns a probability (Eq.", "5) for a partial target language hypothesis, using a bounded store of incomplete constituents c η /c ηι .", "The phrase-based decoder uses this probability value as the syntactic language model feature score.", "Formal Parsing Model: Scoring Partial Translation Hypotheses This model is essentially an extension of an HHMM, which obtains a most likely sequence of hidden store states,ŝ 1..D 1..T , of some length T and some maximum depth D, given a sequence of observed tokens (e.g.", "generated target language words), e 1..T , using HHMM state transition model θ A and observation symbol model θ B (Rabiner, 1990) : s 1..D 1..T def = argmax s 1..D 1..T T t=1 P θ A (s 1..D t | s 1..D t−1 )·P θ B (e t | s 1..D t ) (8) The HHMM parser is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The model generates each successive store (using store model θ S ) only after considering whether each nested sequence of incomplete constituents has completed and reduced (using reduction model θ R ): P θ A (s 1..D t | s 1..D t−1 ) def = r 1 t ..r D t D d=1 P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) · P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) (9) Store elements are defined to contain only the active (c η ) and awaited (c ηι ) constituent categories necessary to compute an incomplete constituent probability: s d t def = c η , c ηι (10) Reduction states are defined to contain only the complete constituent category c r d t necessary to compute an inside likelihood probability, as well as a flag f r d t indicating whether a reduction has taken place (to end a sequence of incomplete constituents): r d t def = c r d t , f r d t (11) The model probabilities for these store elements and reduction states can then be defined (from Murphy and Paskin 2001) to expand a new incomplete constituent after a reduction has taken place (f r d t = 1; using depth-specific store state expansion model θ S-E,d ), transition along a sequence of store elements if no reduction has taken place (f r d t = 0; using depthspecific store state transition model θ S-T,d ): 2 P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) def =    if f r d+1 t = 1, f r d t = 1 : P θ S-E,d (s d t | s d−1 t ) if f r d+1 t = 1, f r d t = 0 : P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) if f r d+1 t = 0, f r d t = 0 : s d t = s d t−1 (12) and possibly reduce a store element (terminate a sequence) if the store state below it has reduced (f r d+1 t = 1; using depth-specific reduction model θ R,d ): P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if f r d+1 t = 0 : r d t = r ⊥ if f r d+1 t = 1 : P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) (13) where r ⊥ is a null state resulting from the failure of an incomplete constituent to complete, and constants are defined for the edge conditions of s 0 t and r D+1 t .", "Figure 5 illustrates this model in action.", "These pushdown automaton operations are then refined for right-corner parsing 
(Schuler, 2009) , distinguishing active transitions (model θ S-T-A,d , in which an incomplete constituent is completed, but not reduced, and then immediately expanded to a 2 An indicator function · is used to denote deterministic probabilities: φ = 1 if φ is true, 0 otherwise.", "new incomplete constituent in the same store element) from awaited transitions (model θ S-T-W,d , which involve no completion): P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) def = if r d t = r ⊥ : P θ S-T-A,d (s d t | s d−1 t r d t ) if r d t = r ⊥ : P θ S-T-W,d (s d t | s d t−1 r d+1 t ) (14) P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if c r d+1 t = x t : r d t = r ⊥ if c r d+1 t = x t : P θ R-R,d (r d t | s d t−1 s d−1 t−1 ) (15) These HHMM right-corner parsing operations are then defined in terms of branch-and depth-specific PCFG probabilities θ G-R,d and θ G-L,d : 3 3 Model probabilities are also defined in terms of leftprogeny probability distribution E θ G-RL * ,d which is itself defined in terms of PCFG probabilities: are calculated by transition function δ (Eq.", "6, as defined by §4.1), but are not stored.", "Observed random variables (e 3 ..e 5 ) are shown for clarity, but are not explicitly stored in any syntactic language model state.", "E θ G-RL * ,d (cη 0 → cη0 ...) def = c η1 P θ G-R,d (cη → cη0 cη1) (16) E θ G-RL * ,d (cη k → c η0 k 0 ...) def = c η0 k E θ G-RL * ,d (cη k−1 → c η0 k ...) · c η0 k 1 P θ G-L,d (c η0 k → c η0 k 0 c η0 k 1 ) (17) E θ G-RL * ,d (cη * → cηι ...) def = ∞ k=0 E θ G-RL * ,d (cη k → cηι ...) (18) E θ G-RL * ,d (cη + → cηι ...) def = E θ G-RL * ,d (cη * → cηι ...) − E θ G-RL * ,d (cη 0 → cηι ...) (19) coder's hypothesis stacks.", "Figure 1 illustrates an excerpt from a standard phrase-based translation lattice.", "Within each decoder stack t, each hypothesis h is augmented with a syntactic language model stateτ t h .", "Each syntactic language model state is a random variable store, containing a slice of random variables from the HHMM.", "Specifically,τ t h contains those random variables s 1..D t that maintain distributions over syntactic elements.", "By maintaining these syntactic random variable stores, each hypothesis has access to the current language model probability for the partial translation ending at that hypothesis, as calculated by an incremental syntactic language model defined by the HHMM.", "Specifically, the random variable store at hypothesis h provides P(τ t h ) = P(e h 1..t , s 1..D 1..t ), where e h 1..t is the sequence of words in a partial hypothesis ending at h which contains t target words, and where there are D syntactic random variables in each random variable store (Eq.", "5).", "During stack decoding, the phrase-based decoder progressively constructs new hypotheses by extending existing hypotheses.", "New hypotheses are placed in appropriate hypothesis stacks.", "In the simplest case, a new hypothesis extends an existing hypothesis by exactly one target word.", "As the new hypothesis is constructed by extending an existing stack element, the store and reduction state random variables are processed, along with the newly hypothesized word.", "This results in a new store of syntactic random variables (Eq.", "6) that are associated with the new stack element.", "When a new hypothesis extends an existing hypothesis by more than one word, this process is first carried out for the first new word in the hypothesis.", "It is then repeated for the remaining words in the hypothesis extension.", "Once the final word in the hypothesis has been 
processed, the resulting random variable store is associated with that hypothesis.", "The random variable stores created for the non-final words in the extending hypothesis are discarded, and need not be explicitly retained.", "Figure 6 illustrates this process, showing how a syntactic language model stateτ 5 1 in a phrase-based decoding lattice is obtained from a previous syntactic language model stateτ 3 1 (from Figure 1) by parsing the target language words from a phrasebased translation option.", "Our syntactic language model is integrated into the current version of Moses .", "Results As an initial measure to compare language models, average per-word perplexity, ppl, reports how surprised a model is by test data.", "Equation 25 calculates ppl using log base b for a test set of T tokens.", "ppl = b −log b P(e 1 ...e T ) T (25) We trained the syntactic language model from §4 (HHMM) and an interpolated n-gram language model with modified Kneser-Ney smoothing (Chen and Goodman, 1998); models were trained on sections 2-21 of the Wall Street Journal (WSJ) treebank (Marcus et al., 1993 HHMM and n-gram LMs (Figure 7) .", "To show the effects of training an LM on more data, we also report perplexity results on the 5-gram LM trained for the GALE Arabic-English task using the English Gigaword corpus.", "In all cases, including the HHMM significantly reduces perplexity.", "We trained a phrase-based translation model on the full NIST Open MT08 Urdu-English translation model using the full training data.", "We trained the HHMM and n-gram LMs on the WSJ data in order to make them as similar as possible.", "During tuning, Moses was first configured to use just the n-gram LM, then configured to use both the n-gram LM and the syntactic HHMM LM.", "MERT consistently assigned positive weight to the syntactic LM feature, typically slightly less than the n-gram LM weight.", "In our integration with Moses, incorporating a syntactic language model dramatically slows the decoding process.", "Figure 8 illustrates a slowdown around three orders of magnitude.", "Although speed remains roughly linear to the size of the source sentence (ruling out exponential behavior), it is with an extremely large constant time factor.", "Due to this slowdown, we tuned the parameters using a constrained dev set (only sentences with 1-20 words), and tested using a constrained devtest set (only sentences with 1-20 words).", "Figure 9 shows a statistically significant improvement to the BLEU score when using the HHMM and the n-gram LMs together on this reduced test set.", "Discussion This paper argues that incremental syntactic languages models are a straightforward and appro-Moses LM(s) BLEU n-gram only 18.78 HHMM + n-gram 19.78 Figure 9 : Results for Ur-En devtest (only sentences with 1-20 words) with HHMM beam size of 2000 and Moses settings of distortion limit 10, stack size 200, and ttable limit 20. 
priate algorithmic fit for incorporating syntax into phrase-based statistical machine translation, since both process sentences in an incremental left-toright fashion.", "This means incremental syntactic LM scores can be calculated during the decoding process, rather than waiting until a complete sentence is posited, which is typically necessary in top-down or bottom-up parsing.", "We provided a rigorous formal definition of incremental syntactic languages models, and detailed what steps are necessary to incorporate such LMs into phrase-based decoding.", "We integrated an incremental syntactic language model into Moses.", "The translation quality significantly improved on a constrained task, and the perplexity improvements suggest that interpolating between n-gram and syntactic LMs may hold promise on larger data sets.", "The use of very large n-gram language models is typically a key ingredient in the best-performing machine translation systems (Brants et al., 2007) .", "Our n-gram model trained only on WSJ is admittedly small.", "Our future work seeks to incorporate largescale n-gram language models in conjunction with incremental syntactic language models.", "The added decoding time cost of our syntactic language model is very high.", "By increasing the beam size and distortion limit of the baseline system, future work may examine whether a baseline system with comparable runtimes can achieve comparable translation quality.", "A more efficient implementation of the HHMM parser would speed decoding and make more extensive and conclusive translation experiments possible.", "Various additional improvements could include caching the HHMM LM calculations, and exploiting properties of the right-corner transform that limit the number of decisions between successive time steps." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.3", "4", "4.1", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Parser as Syntactic Language Model in", "Incremental syntactic language model", "Incorporating a Syntactic Language Model", "Incremental Bounded-Memory Parsing with a Time Series Model", "Formal Parsing Model: Scoring Partial Translation Hypotheses", "Results", "Discussion" ] }
"GEM-SciDuet-train-1#paper-954#slide-1"
"Syntax in the Language Model"
"Translation Model vs Language Model Syntactic LM Decoder Integration Results Questions? An incremental syntactic language model uses an incremental statistical parser to define a probability model over the dependency or phrase structure of target language strings. Phrase-based decoder produces translation in the target language incrementally from left-to-right Phrase-based syntactic LM parser should parse target language hypotheses incrementally from left-to-right Galley & Manning (2009) obtained 1-best dependency parse using a greedy dependency parser We use a standard HHMM parser (Schuler et al., 2010) Engineering simple model, equivalent to PPDA Algorithmic elegant fit into phrase-based decoder Cognitive nice psycholinguistic properties"
"Translation Model vs Language Model Syntactic LM Decoder Integration Results Questions? An incremental syntactic language model uses an incremental statistical parser to define a probability model over the dependency or phrase structure of target language strings. Phrase-based decoder produces translation in the target language incrementally from left-to-right Phrase-based syntactic LM parser should parse target language hypotheses incrementally from left-to-right Galley & Manning (2009) obtained 1-best dependency parse using a greedy dependency parser We use a standard HHMM parser (Schuler et al., 2010) Engineering simple model, equivalent to PPDA Algorithmic elegant fit into phrase-based decoder Cognitive nice psycholinguistic properties"
[]
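The slide text in the row above summarizes the interface the paper assumes between an incremental parser and a left-to-right phrase-based decoder: the parser exposes a transition function over partial-parse states and a probability score for each partial hypothesis, and for a multi-word translation option only the final parser state needs to be kept with the new decoder hypothesis. The sketch below illustrates that interface in Python; the class and function names (IncrementalParser, SyntacticLMState, extend_hypothesis) are hypothetical, not identifiers from the paper or from Moses.

```python
# Minimal sketch of the incremental syntactic LM interface described in the paper:
# a transition function delta(word, state) -> state and a score over the partial
# hypothesis e_1..e_t. All names here are illustrative placeholders.

class SyntacticLMState:
    """Holds the parser's internal state (the pruned set of partial analyses)."""
    def __init__(self, analyses):
        self.analyses = analyses  # e.g. weighted partial parses retained so far

class IncrementalParser:
    def initial_state(self):
        """The parser state before any target-language word is processed."""
        raise NotImplementedError

    def delta(self, word, state):
        """Transition function: consume one target word, return the new state."""
        raise NotImplementedError

    def log_prob(self, state):
        """Syntactic LM score for the partial hypothesis ending in this state."""
        raise NotImplementedError

def extend_hypothesis(parser, prev_state, phrase_words):
    """Score a multi-word translation option word by word; only the final
    parser state needs to be stored with the extended decoder hypothesis."""
    state = prev_state
    for word in phrase_words:
        state = parser.delta(word, state)
    return state, parser.log_prob(state)
```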
"GEM-SciDuet-train-1#paper-954#slide-2"
"954"
"Incremental Syntactic Language Models for Phrase-based Translation"
"This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machine translation through incremental syntactic parsing. Bottom-up and topdown parsers typically require a completed string as input. This requirement makes it difficult to incorporate them into phrase-based translation, which generates partial hypothesized translations from left-to-right. Incremental syntactic language models score sentences in a similar left-to-right fashion, and are therefore a good mechanism for incorporating syntax into phrase-based translation. We give a formal definition of one such lineartime syntactic language model, detail its relation to phrase-based decoding, and integrate the model with the Moses phrase-based translation system. We present empirical results on a constrained Urdu-English translation task that demonstrate a significant BLEU score improvement and a large decrease in perplexity."
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158 ], "paper_content_text": [ "Introduction Early work in statistical machine translation viewed translation as a noisy channel process comprised of a translation model, which functioned to posit adequate translations of source language words, and a target language model, which guided the fluency of generated target language strings (Brown et al., This research was supported by NSF CAREER/PECASE award 0447685, NSF grant IIS-0713448, and the European Commission through the EuroMatrixPlus project.", "Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the sponsors or the United States Air Force.", "Cleared for public release (Case Number 88ABW-2010-6489) on 10 Dec 2010.", "1990).", "Drawing on earlier successes in speech recognition, research in statistical machine translation has effectively used n-gram word sequence models as language models.", "Modern phrase-based translation using large scale n-gram language models generally performs well in terms of lexical choice, but still often produces ungrammatical output.", "Syntactic parsing may help produce more grammatical output by better modeling structural relationships and long-distance dependencies.", "Bottom-up and top-down parsers typically require a completed string as input; this requirement makes it difficult to incorporate these parsers into phrase-based translation, which generates hypothesized translations incrementally, from left-to-right.", "1 As a workaround, parsers can rerank the translated output of translation systems (Och et al., 2004) .", "On the other hand, incremental parsers (Roark, 2001; Henderson, 2004; Schuler et al., 2010; Huang and Sagae, 2010) process input in a straightforward left-to-right manner.", "We observe that incremental parsers, used as structured language models, provide an appropriate algorithmic match to incremental phrase-based decoding.", "We directly integrate incremental syntactic parsing into phrase-based translation.", "This approach re-exerts the role of the language model as a mechanism for encouraging syntactically fluent translations.", "The contributions of this work are as follows: • A novel method for integrating syntactic LMs into phrase-based translation ( §3) • A formal definition of an incremental parser for statistical MT that can run in linear-time ( §4) • Integration with Moses ( §5) along with empirical results for perplexity and significant translation score improvement on a constrained Urdu-English task ( §6) Related Work Neither phrase-based (Koehn et al., 2003) nor hierarchical phrase-based translation (Chiang, 2005) take explicit advantage of the syntactic structure of either source or target language.", "The translation models in these techniques define phrases as contiguous word sequences (with gaps allowed in the 
case of hierarchical phrases) which may or may not correspond to any linguistic constituent.", "Early work in statistical phrase-based translation considered whether restricting translation models to use only syntactically well-formed constituents might improve translation quality (Koehn et al., 2003) but found such restrictions failed to improve translation quality.", "Significant research has examined the extent to which syntax can be usefully incorporated into statistical tree-based translation models: string-to-tree (Yamada and Knight, 2001; Gildea, 2003; Imamura et al., 2004; Galley et al., 2004; Graehl and Knight, 2004; Melamed, 2004; Galley et al., 2006; Huang et al., 2006; Shen et al., 2008) , tree-to-string (Liu et al., 2006; Liu et al., 2007; Huang and Mi, 2010) , tree-to-tree (Abeillé et al., 1990; Shieber and Schabes, 1990; Poutsma, 1998; Eisner, 2003; Shieber, 2004; Cowan et al., 2006; Nesson et al., 2006; Zhang et al., 2007; DeNeefe et al., 2007; DeNeefe and Knight, 2009; Liu et al., 2009; Chiang, 2010) , and treelet (Ding and Palmer, 2005; Quirk et al., 2005) techniques use syntactic information to inform the translation model.", "Recent work has shown that parsing-based machine translation using syntax-augmented (Zollmann and Venugopal, 2006) hierarchical translation grammars with rich nonterminal sets can demonstrate substantial gains over hierarchical grammars for certain language pairs (Baker et al., 2009) .", "In contrast to the above tree-based translation models, our approach maintains a standard (non-syntactic) phrase-based translation model.", "Instead, we incorporate syntax into the language model.", "Traditional approaches to language models in speech recognition and statistical machine translation focus on the use of n-grams, which provide a simple finite-state model approximation of the target language.", "Chelba and Jelinek (1998) proposed that syntactic structure could be used as an alternative technique in language modeling.", "This insight has been explored in the context of speech recognition (Chelba and Jelinek, 2000; Collins et al., 2005) .", "Hassan et al.", "(2007) and use supertag n-gram LMs.", "Syntactic language models have also been explored with tree-based translation models.", "Charniak et al.", "(2003) use syntactic language models to rescore the output of a tree-based translation system.", "Post and Gildea (2008) investigate the integration of parsers as syntactic language models during binary bracketing transduction translation (Wu, 1997) ; under these conditions, both syntactic phrase-structure and dependency parsing language models were found to improve oracle-best translations, but did not improve actual translation results.", "Post and Gildea (2009) use tree substitution grammar parsing for language modeling, but do not use this language model in a translation system.", "Our work, in contrast to the above approaches, explores the use of incremental syntactic language models in conjunction with phrase-based translation models.", "Our syntactic language model fits into the family of linear-time dynamic programming parsers described in (Huang and Sagae, 2010) .", "Like (Galley and Manning, 2009 ) our work implements an incremental syntactic language model; our approach differs by calculating syntactic LM scores over all available phrase-structure parses at each hypothesis instead of the 1-best dependency parse.", "The syntax-driven reordering model of Ge (2010) uses syntax-driven features to influence word order within standard phrase-based 
translation.", "The syntactic cohesion features of Cherry (2008) encourages the use of syntactically well-formed translation phrases.", "These approaches are fully orthogonal to our proposed incremental syntactic language model, and could be applied in concert with our work.", "Parser as Syntactic Language Model in Phrase-Based Translation Parsing is the task of selecting the representationτ (typically a tree) that best models the structure of s τ 0 s thẽ τ 1 1 s that τ 1 2 s president τ 1 3 .", ".", ".", "the president τ 2 1 that president τ 2 2 president Fridaỹ τ 2 3 .", ".", ".", "president meets τ 3 1 Obama met τ 3 2 .", ".", ".", "Figure 1 : Partial decoding lattice for standard phrase-based decoding stack algorithm translating the German sentence Der Präsident trifft am Freitag den Vorstand.", "Each node h in decoding stack t represents the application of a translation option, and includes the source sentence coverage vector, target language ngram state, and syntactic language model stateτ t h .", "Hypothesis combination is also shown, indicating where lattice paths with identical n-gram histories converge.", "We use the English translation The president meets the board on Friday as a running example throughout all Figures.", "sentence e, out of all such possible representations τ .", "This set of representations may be all phrase structure trees or all dependency trees allowed by the parsing model.", "Typically, treeτ is taken to be: τ = argmax τ P(τ | e) (1) We define a syntactic language model P(e) based on the total probability mass over all possible trees for string e. This is shown in Equation 2 and decomposed in Equation 3.", "P(e) = τ ∈τ P(τ, e) (2) P(e) = τ ∈τ P(e | τ )P(τ ) (3) Incremental syntactic language model An incremental parser processes each token of input sequentially from the beginning of a sentence to the end, rather than processing input in a top-down (Earley, 1968) or bottom-up (Cocke and Schwartz, 1970; Kasami, 1965; Younger, 1967) fashion.", "After processing the tth token in string e, an incremental parser has some internal representation of possible hypothesized (incomplete) trees, τ t .", "The syntactic language model probability of a partial sentence e 1 ...e t is defined: P(e 1 ...e t ) = τ ∈τt P(e 1 ...e t | τ )P(τ ) (4) In practice, a parser may constrain the set of trees under consideration toτ t , that subset of analyses or partial analyses that remains after any pruning is performed.", "An incremental syntactic language model can then be defined by a probability mass function (Equation 5) and a transition function δ (Equation 6 ).", "The role of δ is explained in §3.3 below.", "Any parser which implements these two functions can serve as a syntactic language model.", "P(e 1 ...e t ) ≈ P(τ t ) = τ ∈τ t P(e 1 ...e t | τ )P(τ ) (5) δ(e t ,τ t−1 ) →τ t (6) 3.2 Decoding in phrase-based translation Given a source language input sentence f , a trained source-to-target translation model, and a target language model, the task of translation is to find the maximally probable translationê using a linear combination of j feature functions h weighted according to tuned parameters λ (Och and Ney, 2002) .", "e = argmax e exp( j λ j h j (e, f )) (7) Phrase-based translation constructs a set of translation options -hypothesized translations for contiguous portions of the source sentence -from a trained phrase table, then incrementally constructs a lattice of partial target translations (Koehn, 2010) .", "To prune the search space, lattice nodes are organized into 
beam stacks (Jelinek, 1969) according to the number of source words translated.", "An n-gram language model history is also maintained at each node in the translation lattice.", "The search space is further trimmed with hypothesis recombination, which collapses lattice nodes that share a common coverage vector and n-gram state.", "Incorporating a Syntactic Language Model Phrase-based translation produces target language words in an incremental left-to-right fashion, generating words at the beginning of a translation first and words at the end of a translation last.", "Similarly, incremental parsers process sentences in an incremental fashion, analyzing words at the beginning of a sentence first and words at the end of a sentence last.", "As such, an incremental parser with transition function δ can be incorporated into the phrase-based decoding process in a straightforward manner.", "Each node in the translation lattice is augmented with a syntactic language model stateτ t .", "The hypothesis at the root of the translation lattice is initialized withτ 0 , representing the internal state of the incremental parser before any input words are processed.", "The phrase-based translation decoding process adds nodes to the lattice; each new node contains one or more target language words.", "Each node contains a backpointer to its parent node, in whichτ t−1 is stored.", "Given a new target language word e t andτ t−1 , the incremental parser's transition function δ calculatesτ t .", "Figure 1 a sample phrase-based decoding lattice where each translation lattice node is augmented with syntactic language model stateτ t .", "In phrase-based translation, many translation lattice nodes represent multi-word target language phrases.", "For such translation lattice nodes, δ will be called once for each newly hypothesized target language word in the node.", "Only the final syntactic language model state in such sequences need be stored in the translation lattice node.", "Incremental Bounded-Memory Parsing with a Time Series Model Having defined the framework by which any incremental parser may be incorporated into phrasebased translation, we now formally define a specific incremental parser for use in our experiments.", "The parser must process target language words incrementally as the phrase-based decoder adds hypotheses to the translation lattice.", "To facilitate this incremental processing, ordinary phrase-structure trees can be transformed into right-corner recur- r 1 t−1 r 2 t−1 r 3 t−1 s 1 t−1 s 2 t−1 s 3 t−1 r 1 t r 2 t r 3 t s 1 t s 2 t s 3 t e t−1 e t .", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", "Figure 4 : Graphical representation of the dependency structure in a standard Hierarchic Hidden Markov Model with D = 3 hidden levels that can be used to parse syntax.", "Circles denote random variables, and edges denote conditional dependencies.", "Shaded circles denote variables with observed values.", "sive phrase structure trees using the tree transforms in Schuler et al.", "(2010) .", "Constituent nonterminals in right-corner transformed trees take the form of incomplete constituents c η /c ηι consisting of an 'active' constituent c η lacking an 'awaited' constituent c ηι yet to come, similar to non-constituent categories in a Combinatory Categorial Grammar (Ades and Steedman, 1982; Steedman, 2000) .", "As an example, the parser might consider VP/NN as a possible category for input \"meets the\".", "A sample phrase structure tree is shown before and after the right-corner transform in 
Figures 2 and 3.", "Our parser operates over a right-corner transformed probabilistic context-free grammar (PCFG).", "Parsing runs in linear time on the length of the input.", "This model of incremental parsing is implemented as a Hierarchical Hidden Markov Model (HHMM) (Murphy and Paskin, 2001) , and is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The parser runs in O(n) time, where n is the number of words in the input.", "This model is shown graphically in Figure 4 and formally defined in §4.1 below.", "The incremental parser assigns a probability (Eq.", "5) for a partial target language hypothesis, using a bounded store of incomplete constituents c η /c ηι .", "The phrase-based decoder uses this probability value as the syntactic language model feature score.", "Formal Parsing Model: Scoring Partial Translation Hypotheses This model is essentially an extension of an HHMM, which obtains a most likely sequence of hidden store states,ŝ 1..D 1..T , of some length T and some maximum depth D, given a sequence of observed tokens (e.g.", "generated target language words), e 1..T , using HHMM state transition model θ A and observation symbol model θ B (Rabiner, 1990) : s 1..D 1..T def = argmax s 1..D 1..T T t=1 P θ A (s 1..D t | s 1..D t−1 )·P θ B (e t | s 1..D t ) (8) The HHMM parser is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The model generates each successive store (using store model θ S ) only after considering whether each nested sequence of incomplete constituents has completed and reduced (using reduction model θ R ): P θ A (s 1..D t | s 1..D t−1 ) def = r 1 t ..r D t D d=1 P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) · P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) (9) Store elements are defined to contain only the active (c η ) and awaited (c ηι ) constituent categories necessary to compute an incomplete constituent probability: s d t def = c η , c ηι (10) Reduction states are defined to contain only the complete constituent category c r d t necessary to compute an inside likelihood probability, as well as a flag f r d t indicating whether a reduction has taken place (to end a sequence of incomplete constituents): r d t def = c r d t , f r d t (11) The model probabilities for these store elements and reduction states can then be defined (from Murphy and Paskin 2001) to expand a new incomplete constituent after a reduction has taken place (f r d t = 1; using depth-specific store state expansion model θ S-E,d ), transition along a sequence of store elements if no reduction has taken place (f r d t = 0; using depthspecific store state transition model θ S-T,d ): 2 P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) def =    if f r d+1 t = 1, f r d t = 1 : P θ S-E,d (s d t | s d−1 t ) if f r d+1 t = 1, f r d t = 0 : P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) if f r d+1 t = 0, f r d t = 0 : s d t = s d t−1 (12) and possibly reduce a store element (terminate a sequence) if the store state below it has reduced (f r d+1 t = 1; using depth-specific reduction model θ R,d ): P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if f r d+1 t = 0 : r d t = r ⊥ if f r d+1 t = 1 : P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) (13) where r ⊥ is a null state resulting from the failure of an incomplete constituent to complete, and constants are defined for the edge conditions of s 0 t and r D+1 t .", "Figure 5 illustrates this model in action.", "These pushdown automaton operations are then refined for right-corner parsing 
(Schuler, 2009) , distinguishing active transitions (model θ S-T-A,d , in which an incomplete constituent is completed, but not reduced, and then immediately expanded to a 2 An indicator function · is used to denote deterministic probabilities: φ = 1 if φ is true, 0 otherwise.", "new incomplete constituent in the same store element) from awaited transitions (model θ S-T-W,d , which involve no completion): P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) def = if r d t = r ⊥ : P θ S-T-A,d (s d t | s d−1 t r d t ) if r d t = r ⊥ : P θ S-T-W,d (s d t | s d t−1 r d+1 t ) (14) P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if c r d+1 t = x t : r d t = r ⊥ if c r d+1 t = x t : P θ R-R,d (r d t | s d t−1 s d−1 t−1 ) (15) These HHMM right-corner parsing operations are then defined in terms of branch-and depth-specific PCFG probabilities θ G-R,d and θ G-L,d : 3 3 Model probabilities are also defined in terms of leftprogeny probability distribution E θ G-RL * ,d which is itself defined in terms of PCFG probabilities: are calculated by transition function δ (Eq.", "6, as defined by §4.1), but are not stored.", "Observed random variables (e 3 ..e 5 ) are shown for clarity, but are not explicitly stored in any syntactic language model state.", "E θ G-RL * ,d (cη 0 → cη0 ...) def = c η1 P θ G-R,d (cη → cη0 cη1) (16) E θ G-RL * ,d (cη k → c η0 k 0 ...) def = c η0 k E θ G-RL * ,d (cη k−1 → c η0 k ...) · c η0 k 1 P θ G-L,d (c η0 k → c η0 k 0 c η0 k 1 ) (17) E θ G-RL * ,d (cη * → cηι ...) def = ∞ k=0 E θ G-RL * ,d (cη k → cηι ...) (18) E θ G-RL * ,d (cη + → cηι ...) def = E θ G-RL * ,d (cη * → cηι ...) − E θ G-RL * ,d (cη 0 → cηι ...) (19) coder's hypothesis stacks.", "Figure 1 illustrates an excerpt from a standard phrase-based translation lattice.", "Within each decoder stack t, each hypothesis h is augmented with a syntactic language model stateτ t h .", "Each syntactic language model state is a random variable store, containing a slice of random variables from the HHMM.", "Specifically,τ t h contains those random variables s 1..D t that maintain distributions over syntactic elements.", "By maintaining these syntactic random variable stores, each hypothesis has access to the current language model probability for the partial translation ending at that hypothesis, as calculated by an incremental syntactic language model defined by the HHMM.", "Specifically, the random variable store at hypothesis h provides P(τ t h ) = P(e h 1..t , s 1..D 1..t ), where e h 1..t is the sequence of words in a partial hypothesis ending at h which contains t target words, and where there are D syntactic random variables in each random variable store (Eq.", "5).", "During stack decoding, the phrase-based decoder progressively constructs new hypotheses by extending existing hypotheses.", "New hypotheses are placed in appropriate hypothesis stacks.", "In the simplest case, a new hypothesis extends an existing hypothesis by exactly one target word.", "As the new hypothesis is constructed by extending an existing stack element, the store and reduction state random variables are processed, along with the newly hypothesized word.", "This results in a new store of syntactic random variables (Eq.", "6) that are associated with the new stack element.", "When a new hypothesis extends an existing hypothesis by more than one word, this process is first carried out for the first new word in the hypothesis.", "It is then repeated for the remaining words in the hypothesis extension.", "Once the final word in the hypothesis has been 
processed, the resulting random variable store is associated with that hypothesis.", "The random variable stores created for the non-final words in the extending hypothesis are discarded, and need not be explicitly retained.", "Figure 6 illustrates this process, showing how a syntactic language model stateτ 5 1 in a phrase-based decoding lattice is obtained from a previous syntactic language model stateτ 3 1 (from Figure 1) by parsing the target language words from a phrasebased translation option.", "Our syntactic language model is integrated into the current version of Moses .", "Results As an initial measure to compare language models, average per-word perplexity, ppl, reports how surprised a model is by test data.", "Equation 25 calculates ppl using log base b for a test set of T tokens.", "ppl = b −log b P(e 1 ...e T ) T (25) We trained the syntactic language model from §4 (HHMM) and an interpolated n-gram language model with modified Kneser-Ney smoothing (Chen and Goodman, 1998); models were trained on sections 2-21 of the Wall Street Journal (WSJ) treebank (Marcus et al., 1993 HHMM and n-gram LMs (Figure 7) .", "To show the effects of training an LM on more data, we also report perplexity results on the 5-gram LM trained for the GALE Arabic-English task using the English Gigaword corpus.", "In all cases, including the HHMM significantly reduces perplexity.", "We trained a phrase-based translation model on the full NIST Open MT08 Urdu-English translation model using the full training data.", "We trained the HHMM and n-gram LMs on the WSJ data in order to make them as similar as possible.", "During tuning, Moses was first configured to use just the n-gram LM, then configured to use both the n-gram LM and the syntactic HHMM LM.", "MERT consistently assigned positive weight to the syntactic LM feature, typically slightly less than the n-gram LM weight.", "In our integration with Moses, incorporating a syntactic language model dramatically slows the decoding process.", "Figure 8 illustrates a slowdown around three orders of magnitude.", "Although speed remains roughly linear to the size of the source sentence (ruling out exponential behavior), it is with an extremely large constant time factor.", "Due to this slowdown, we tuned the parameters using a constrained dev set (only sentences with 1-20 words), and tested using a constrained devtest set (only sentences with 1-20 words).", "Figure 9 shows a statistically significant improvement to the BLEU score when using the HHMM and the n-gram LMs together on this reduced test set.", "Discussion This paper argues that incremental syntactic languages models are a straightforward and appro-Moses LM(s) BLEU n-gram only 18.78 HHMM + n-gram 19.78 Figure 9 : Results for Ur-En devtest (only sentences with 1-20 words) with HHMM beam size of 2000 and Moses settings of distortion limit 10, stack size 200, and ttable limit 20. 
priate algorithmic fit for incorporating syntax into phrase-based statistical machine translation, since both process sentences in an incremental left-toright fashion.", "This means incremental syntactic LM scores can be calculated during the decoding process, rather than waiting until a complete sentence is posited, which is typically necessary in top-down or bottom-up parsing.", "We provided a rigorous formal definition of incremental syntactic languages models, and detailed what steps are necessary to incorporate such LMs into phrase-based decoding.", "We integrated an incremental syntactic language model into Moses.", "The translation quality significantly improved on a constrained task, and the perplexity improvements suggest that interpolating between n-gram and syntactic LMs may hold promise on larger data sets.", "The use of very large n-gram language models is typically a key ingredient in the best-performing machine translation systems (Brants et al., 2007) .", "Our n-gram model trained only on WSJ is admittedly small.", "Our future work seeks to incorporate largescale n-gram language models in conjunction with incremental syntactic language models.", "The added decoding time cost of our syntactic language model is very high.", "By increasing the beam size and distortion limit of the baseline system, future work may examine whether a baseline system with comparable runtimes can achieve comparable translation quality.", "A more efficient implementation of the HHMM parser would speed decoding and make more extensive and conclusive translation experiments possible.", "Various additional improvements could include caching the HHMM LM calculations, and exploiting properties of the right-corner transform that limit the number of decisions between successive time steps." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.3", "4", "4.1", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Parser as Syntactic Language Model in", "Incremental syntactic language model", "Incorporating a Syntactic Language Model", "Incremental Bounded-Memory Parsing with a Time Series Model", "Formal Parsing Model: Scoring Partial Translation Hypotheses", "Results", "Discussion" ] }
"GEM-SciDuet-train-1#paper-954#slide-2"
"Incremental Parsing"
"DT NN VP PP The president VB NP IN NP meets DT NN on Friday NP/NN NN VP/NP DT board Motivation Decoder Integration Results Questions? the president VB NP VP/NN Transform right-expanding sequences of constituents into left-expanding sequences of incomplete constituents NP VP S/NP NP the board DT president VB the Incomplete constituents can be processed incrementally using a Hierarchical Hidden Markov Model parser. (Murphy & Paskin, 2001; Schuler et al."
"DT NN VP PP The president VB NP IN NP meets DT NN on Friday NP/NN NN VP/NP DT board Motivation Decoder Integration Results Questions? the president VB NP VP/NN Transform right-expanding sequences of constituents into left-expanding sequences of incomplete constituents NP VP S/NP NP the board DT president VB the Incomplete constituents can be processed incrementally using a Hierarchical Hidden Markov Model parser. (Murphy & Paskin, 2001; Schuler et al."
[]
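This row's slide describes the right-corner transform, which rewrites right-expanding constituent structure into left-expanding sequences of incomplete constituents of the form active/awaited; the paper gives VP/NN as a possible category for the prefix "meets the". The snippet below is a small, hedged illustration of that category representation only; it is not the paper's transform implementation, and any example category other than VP/NN is an assumption made for illustration.

```python
# Illustrative representation of right-corner "incomplete constituent" categories
# of the form c_eta / c_eta-iota: an active constituent lacking an awaited one.
from dataclasses import dataclass

@dataclass(frozen=True)
class Incomplete:
    active: str   # c_eta, e.g. "VP"
    awaited: str  # c_eta-iota, e.g. "NN"

    def __str__(self):
        return f"{self.active}/{self.awaited}"

# The paper mentions VP/NN as a possible category for the prefix "meets the";
# NP/NN for "the" (a noun phrase still awaiting its noun) is assumed by analogy.
examples = {
    ("the",): Incomplete("NP", "NN"),
    ("meets", "the"): Incomplete("VP", "NN"),
}

for prefix, cat in examples.items():
    print(" ".join(prefix), "->", cat)
```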
"GEM-SciDuet-train-1#paper-954#slide-3"
"954"
"Incremental Syntactic Language Models for Phrase-based Translation"
"This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machine translation through incremental syntactic parsing. Bottom-up and topdown parsers typically require a completed string as input. This requirement makes it difficult to incorporate them into phrase-based translation, which generates partial hypothesized translations from left-to-right. Incremental syntactic language models score sentences in a similar left-to-right fashion, and are therefore a good mechanism for incorporating syntax into phrase-based translation. We give a formal definition of one such lineartime syntactic language model, detail its relation to phrase-based decoding, and integrate the model with the Moses phrase-based translation system. We present empirical results on a constrained Urdu-English translation task that demonstrate a significant BLEU score improvement and a large decrease in perplexity."
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158 ], "paper_content_text": [ "Introduction Early work in statistical machine translation viewed translation as a noisy channel process comprised of a translation model, which functioned to posit adequate translations of source language words, and a target language model, which guided the fluency of generated target language strings (Brown et al., This research was supported by NSF CAREER/PECASE award 0447685, NSF grant IIS-0713448, and the European Commission through the EuroMatrixPlus project.", "Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the sponsors or the United States Air Force.", "Cleared for public release (Case Number 88ABW-2010-6489) on 10 Dec 2010.", "1990).", "Drawing on earlier successes in speech recognition, research in statistical machine translation has effectively used n-gram word sequence models as language models.", "Modern phrase-based translation using large scale n-gram language models generally performs well in terms of lexical choice, but still often produces ungrammatical output.", "Syntactic parsing may help produce more grammatical output by better modeling structural relationships and long-distance dependencies.", "Bottom-up and top-down parsers typically require a completed string as input; this requirement makes it difficult to incorporate these parsers into phrase-based translation, which generates hypothesized translations incrementally, from left-to-right.", "1 As a workaround, parsers can rerank the translated output of translation systems (Och et al., 2004) .", "On the other hand, incremental parsers (Roark, 2001; Henderson, 2004; Schuler et al., 2010; Huang and Sagae, 2010) process input in a straightforward left-to-right manner.", "We observe that incremental parsers, used as structured language models, provide an appropriate algorithmic match to incremental phrase-based decoding.", "We directly integrate incremental syntactic parsing into phrase-based translation.", "This approach re-exerts the role of the language model as a mechanism for encouraging syntactically fluent translations.", "The contributions of this work are as follows: • A novel method for integrating syntactic LMs into phrase-based translation ( §3) • A formal definition of an incremental parser for statistical MT that can run in linear-time ( §4) • Integration with Moses ( §5) along with empirical results for perplexity and significant translation score improvement on a constrained Urdu-English task ( §6) Related Work Neither phrase-based (Koehn et al., 2003) nor hierarchical phrase-based translation (Chiang, 2005) take explicit advantage of the syntactic structure of either source or target language.", "The translation models in these techniques define phrases as contiguous word sequences (with gaps allowed in the 
case of hierarchical phrases) which may or may not correspond to any linguistic constituent.", "Early work in statistical phrase-based translation considered whether restricting translation models to use only syntactically well-formed constituents might improve translation quality (Koehn et al., 2003) but found such restrictions failed to improve translation quality.", "Significant research has examined the extent to which syntax can be usefully incorporated into statistical tree-based translation models: string-to-tree (Yamada and Knight, 2001; Gildea, 2003; Imamura et al., 2004; Galley et al., 2004; Graehl and Knight, 2004; Melamed, 2004; Galley et al., 2006; Huang et al., 2006; Shen et al., 2008) , tree-to-string (Liu et al., 2006; Liu et al., 2007; Huang and Mi, 2010) , tree-to-tree (Abeillé et al., 1990; Shieber and Schabes, 1990; Poutsma, 1998; Eisner, 2003; Shieber, 2004; Cowan et al., 2006; Nesson et al., 2006; Zhang et al., 2007; DeNeefe et al., 2007; DeNeefe and Knight, 2009; Liu et al., 2009; Chiang, 2010) , and treelet (Ding and Palmer, 2005; Quirk et al., 2005) techniques use syntactic information to inform the translation model.", "Recent work has shown that parsing-based machine translation using syntax-augmented (Zollmann and Venugopal, 2006) hierarchical translation grammars with rich nonterminal sets can demonstrate substantial gains over hierarchical grammars for certain language pairs (Baker et al., 2009) .", "In contrast to the above tree-based translation models, our approach maintains a standard (non-syntactic) phrase-based translation model.", "Instead, we incorporate syntax into the language model.", "Traditional approaches to language models in speech recognition and statistical machine translation focus on the use of n-grams, which provide a simple finite-state model approximation of the target language.", "Chelba and Jelinek (1998) proposed that syntactic structure could be used as an alternative technique in language modeling.", "This insight has been explored in the context of speech recognition (Chelba and Jelinek, 2000; Collins et al., 2005) .", "Hassan et al.", "(2007) and use supertag n-gram LMs.", "Syntactic language models have also been explored with tree-based translation models.", "Charniak et al.", "(2003) use syntactic language models to rescore the output of a tree-based translation system.", "Post and Gildea (2008) investigate the integration of parsers as syntactic language models during binary bracketing transduction translation (Wu, 1997) ; under these conditions, both syntactic phrase-structure and dependency parsing language models were found to improve oracle-best translations, but did not improve actual translation results.", "Post and Gildea (2009) use tree substitution grammar parsing for language modeling, but do not use this language model in a translation system.", "Our work, in contrast to the above approaches, explores the use of incremental syntactic language models in conjunction with phrase-based translation models.", "Our syntactic language model fits into the family of linear-time dynamic programming parsers described in (Huang and Sagae, 2010) .", "Like (Galley and Manning, 2009 ) our work implements an incremental syntactic language model; our approach differs by calculating syntactic LM scores over all available phrase-structure parses at each hypothesis instead of the 1-best dependency parse.", "The syntax-driven reordering model of Ge (2010) uses syntax-driven features to influence word order within standard phrase-based 
translation.", "The syntactic cohesion features of Cherry (2008) encourages the use of syntactically well-formed translation phrases.", "These approaches are fully orthogonal to our proposed incremental syntactic language model, and could be applied in concert with our work.", "Parser as Syntactic Language Model in Phrase-Based Translation Parsing is the task of selecting the representationτ (typically a tree) that best models the structure of s τ 0 s thẽ τ 1 1 s that τ 1 2 s president τ 1 3 .", ".", ".", "the president τ 2 1 that president τ 2 2 president Fridaỹ τ 2 3 .", ".", ".", "president meets τ 3 1 Obama met τ 3 2 .", ".", ".", "Figure 1 : Partial decoding lattice for standard phrase-based decoding stack algorithm translating the German sentence Der Präsident trifft am Freitag den Vorstand.", "Each node h in decoding stack t represents the application of a translation option, and includes the source sentence coverage vector, target language ngram state, and syntactic language model stateτ t h .", "Hypothesis combination is also shown, indicating where lattice paths with identical n-gram histories converge.", "We use the English translation The president meets the board on Friday as a running example throughout all Figures.", "sentence e, out of all such possible representations τ .", "This set of representations may be all phrase structure trees or all dependency trees allowed by the parsing model.", "Typically, treeτ is taken to be: τ = argmax τ P(τ | e) (1) We define a syntactic language model P(e) based on the total probability mass over all possible trees for string e. This is shown in Equation 2 and decomposed in Equation 3.", "P(e) = τ ∈τ P(τ, e) (2) P(e) = τ ∈τ P(e | τ )P(τ ) (3) Incremental syntactic language model An incremental parser processes each token of input sequentially from the beginning of a sentence to the end, rather than processing input in a top-down (Earley, 1968) or bottom-up (Cocke and Schwartz, 1970; Kasami, 1965; Younger, 1967) fashion.", "After processing the tth token in string e, an incremental parser has some internal representation of possible hypothesized (incomplete) trees, τ t .", "The syntactic language model probability of a partial sentence e 1 ...e t is defined: P(e 1 ...e t ) = τ ∈τt P(e 1 ...e t | τ )P(τ ) (4) In practice, a parser may constrain the set of trees under consideration toτ t , that subset of analyses or partial analyses that remains after any pruning is performed.", "An incremental syntactic language model can then be defined by a probability mass function (Equation 5) and a transition function δ (Equation 6 ).", "The role of δ is explained in §3.3 below.", "Any parser which implements these two functions can serve as a syntactic language model.", "P(e 1 ...e t ) ≈ P(τ t ) = τ ∈τ t P(e 1 ...e t | τ )P(τ ) (5) δ(e t ,τ t−1 ) →τ t (6) 3.2 Decoding in phrase-based translation Given a source language input sentence f , a trained source-to-target translation model, and a target language model, the task of translation is to find the maximally probable translationê using a linear combination of j feature functions h weighted according to tuned parameters λ (Och and Ney, 2002) .", "e = argmax e exp( j λ j h j (e, f )) (7) Phrase-based translation constructs a set of translation options -hypothesized translations for contiguous portions of the source sentence -from a trained phrase table, then incrementally constructs a lattice of partial target translations (Koehn, 2010) .", "To prune the search space, lattice nodes are organized into 
beam stacks (Jelinek, 1969) according to the number of source words translated.", "An n-gram language model history is also maintained at each node in the translation lattice.", "The search space is further trimmed with hypothesis recombination, which collapses lattice nodes that share a common coverage vector and n-gram state.", "Incorporating a Syntactic Language Model Phrase-based translation produces target language words in an incremental left-to-right fashion, generating words at the beginning of a translation first and words at the end of a translation last.", "Similarly, incremental parsers process sentences in an incremental fashion, analyzing words at the beginning of a sentence first and words at the end of a sentence last.", "As such, an incremental parser with transition function δ can be incorporated into the phrase-based decoding process in a straightforward manner.", "Each node in the translation lattice is augmented with a syntactic language model stateτ t .", "The hypothesis at the root of the translation lattice is initialized withτ 0 , representing the internal state of the incremental parser before any input words are processed.", "The phrase-based translation decoding process adds nodes to the lattice; each new node contains one or more target language words.", "Each node contains a backpointer to its parent node, in whichτ t−1 is stored.", "Given a new target language word e t andτ t−1 , the incremental parser's transition function δ calculatesτ t .", "Figure 1 a sample phrase-based decoding lattice where each translation lattice node is augmented with syntactic language model stateτ t .", "In phrase-based translation, many translation lattice nodes represent multi-word target language phrases.", "For such translation lattice nodes, δ will be called once for each newly hypothesized target language word in the node.", "Only the final syntactic language model state in such sequences need be stored in the translation lattice node.", "Incremental Bounded-Memory Parsing with a Time Series Model Having defined the framework by which any incremental parser may be incorporated into phrasebased translation, we now formally define a specific incremental parser for use in our experiments.", "The parser must process target language words incrementally as the phrase-based decoder adds hypotheses to the translation lattice.", "To facilitate this incremental processing, ordinary phrase-structure trees can be transformed into right-corner recur- r 1 t−1 r 2 t−1 r 3 t−1 s 1 t−1 s 2 t−1 s 3 t−1 r 1 t r 2 t r 3 t s 1 t s 2 t s 3 t e t−1 e t .", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", "Figure 4 : Graphical representation of the dependency structure in a standard Hierarchic Hidden Markov Model with D = 3 hidden levels that can be used to parse syntax.", "Circles denote random variables, and edges denote conditional dependencies.", "Shaded circles denote variables with observed values.", "sive phrase structure trees using the tree transforms in Schuler et al.", "(2010) .", "Constituent nonterminals in right-corner transformed trees take the form of incomplete constituents c η /c ηι consisting of an 'active' constituent c η lacking an 'awaited' constituent c ηι yet to come, similar to non-constituent categories in a Combinatory Categorial Grammar (Ades and Steedman, 1982; Steedman, 2000) .", "As an example, the parser might consider VP/NN as a possible category for input \"meets the\".", "A sample phrase structure tree is shown before and after the right-corner transform in 
Figures 2 and 3.", "Our parser operates over a right-corner transformed probabilistic context-free grammar (PCFG).", "Parsing runs in linear time on the length of the input.", "This model of incremental parsing is implemented as a Hierarchical Hidden Markov Model (HHMM) (Murphy and Paskin, 2001) , and is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The parser runs in O(n) time, where n is the number of words in the input.", "This model is shown graphically in Figure 4 and formally defined in §4.1 below.", "The incremental parser assigns a probability (Eq.", "5) for a partial target language hypothesis, using a bounded store of incomplete constituents c η /c ηι .", "The phrase-based decoder uses this probability value as the syntactic language model feature score.", "Formal Parsing Model: Scoring Partial Translation Hypotheses This model is essentially an extension of an HHMM, which obtains a most likely sequence of hidden store states,ŝ 1..D 1..T , of some length T and some maximum depth D, given a sequence of observed tokens (e.g.", "generated target language words), e 1..T , using HHMM state transition model θ A and observation symbol model θ B (Rabiner, 1990) : s 1..D 1..T def = argmax s 1..D 1..T T t=1 P θ A (s 1..D t | s 1..D t−1 )·P θ B (e t | s 1..D t ) (8) The HHMM parser is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The model generates each successive store (using store model θ S ) only after considering whether each nested sequence of incomplete constituents has completed and reduced (using reduction model θ R ): P θ A (s 1..D t | s 1..D t−1 ) def = r 1 t ..r D t D d=1 P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) · P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) (9) Store elements are defined to contain only the active (c η ) and awaited (c ηι ) constituent categories necessary to compute an incomplete constituent probability: s d t def = c η , c ηι (10) Reduction states are defined to contain only the complete constituent category c r d t necessary to compute an inside likelihood probability, as well as a flag f r d t indicating whether a reduction has taken place (to end a sequence of incomplete constituents): r d t def = c r d t , f r d t (11) The model probabilities for these store elements and reduction states can then be defined (from Murphy and Paskin 2001) to expand a new incomplete constituent after a reduction has taken place (f r d t = 1; using depth-specific store state expansion model θ S-E,d ), transition along a sequence of store elements if no reduction has taken place (f r d t = 0; using depthspecific store state transition model θ S-T,d ): 2 P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) def =    if f r d+1 t = 1, f r d t = 1 : P θ S-E,d (s d t | s d−1 t ) if f r d+1 t = 1, f r d t = 0 : P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) if f r d+1 t = 0, f r d t = 0 : s d t = s d t−1 (12) and possibly reduce a store element (terminate a sequence) if the store state below it has reduced (f r d+1 t = 1; using depth-specific reduction model θ R,d ): P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if f r d+1 t = 0 : r d t = r ⊥ if f r d+1 t = 1 : P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) (13) where r ⊥ is a null state resulting from the failure of an incomplete constituent to complete, and constants are defined for the edge conditions of s 0 t and r D+1 t .", "Figure 5 illustrates this model in action.", "These pushdown automaton operations are then refined for right-corner parsing 
(Schuler, 2009) , distinguishing active transitions (model θ S-T-A,d , in which an incomplete constituent is completed, but not reduced, and then immediately expanded to a 2 An indicator function · is used to denote deterministic probabilities: φ = 1 if φ is true, 0 otherwise.", "new incomplete constituent in the same store element) from awaited transitions (model θ S-T-W,d , which involve no completion): P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) def = if r d t = r ⊥ : P θ S-T-A,d (s d t | s d−1 t r d t ) if r d t = r ⊥ : P θ S-T-W,d (s d t | s d t−1 r d+1 t ) (14) P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if c r d+1 t = x t : r d t = r ⊥ if c r d+1 t = x t : P θ R-R,d (r d t | s d t−1 s d−1 t−1 ) (15) These HHMM right-corner parsing operations are then defined in terms of branch-and depth-specific PCFG probabilities θ G-R,d and θ G-L,d : 3 3 Model probabilities are also defined in terms of leftprogeny probability distribution E θ G-RL * ,d which is itself defined in terms of PCFG probabilities: are calculated by transition function δ (Eq.", "6, as defined by §4.1), but are not stored.", "Observed random variables (e 3 ..e 5 ) are shown for clarity, but are not explicitly stored in any syntactic language model state.", "E θ G-RL * ,d (cη 0 → cη0 ...) def = c η1 P θ G-R,d (cη → cη0 cη1) (16) E θ G-RL * ,d (cη k → c η0 k 0 ...) def = c η0 k E θ G-RL * ,d (cη k−1 → c η0 k ...) · c η0 k 1 P θ G-L,d (c η0 k → c η0 k 0 c η0 k 1 ) (17) E θ G-RL * ,d (cη * → cηι ...) def = ∞ k=0 E θ G-RL * ,d (cη k → cηι ...) (18) E θ G-RL * ,d (cη + → cηι ...) def = E θ G-RL * ,d (cη * → cηι ...) − E θ G-RL * ,d (cη 0 → cηι ...) (19) coder's hypothesis stacks.", "Figure 1 illustrates an excerpt from a standard phrase-based translation lattice.", "Within each decoder stack t, each hypothesis h is augmented with a syntactic language model stateτ t h .", "Each syntactic language model state is a random variable store, containing a slice of random variables from the HHMM.", "Specifically,τ t h contains those random variables s 1..D t that maintain distributions over syntactic elements.", "By maintaining these syntactic random variable stores, each hypothesis has access to the current language model probability for the partial translation ending at that hypothesis, as calculated by an incremental syntactic language model defined by the HHMM.", "Specifically, the random variable store at hypothesis h provides P(τ t h ) = P(e h 1..t , s 1..D 1..t ), where e h 1..t is the sequence of words in a partial hypothesis ending at h which contains t target words, and where there are D syntactic random variables in each random variable store (Eq.", "5).", "During stack decoding, the phrase-based decoder progressively constructs new hypotheses by extending existing hypotheses.", "New hypotheses are placed in appropriate hypothesis stacks.", "In the simplest case, a new hypothesis extends an existing hypothesis by exactly one target word.", "As the new hypothesis is constructed by extending an existing stack element, the store and reduction state random variables are processed, along with the newly hypothesized word.", "This results in a new store of syntactic random variables (Eq.", "6) that are associated with the new stack element.", "When a new hypothesis extends an existing hypothesis by more than one word, this process is first carried out for the first new word in the hypothesis.", "It is then repeated for the remaining words in the hypothesis extension.", "Once the final word in the hypothesis has been 
processed, the resulting random variable store is associated with that hypothesis.", "The random variable stores created for the non-final words in the extending hypothesis are discarded, and need not be explicitly retained.", "Figure 6 illustrates this process, showing how a syntactic language model stateτ 5 1 in a phrase-based decoding lattice is obtained from a previous syntactic language model stateτ 3 1 (from Figure 1) by parsing the target language words from a phrasebased translation option.", "Our syntactic language model is integrated into the current version of Moses .", "Results As an initial measure to compare language models, average per-word perplexity, ppl, reports how surprised a model is by test data.", "Equation 25 calculates ppl using log base b for a test set of T tokens.", "ppl = b −log b P(e 1 ...e T ) T (25) We trained the syntactic language model from §4 (HHMM) and an interpolated n-gram language model with modified Kneser-Ney smoothing (Chen and Goodman, 1998); models were trained on sections 2-21 of the Wall Street Journal (WSJ) treebank (Marcus et al., 1993 HHMM and n-gram LMs (Figure 7) .", "To show the effects of training an LM on more data, we also report perplexity results on the 5-gram LM trained for the GALE Arabic-English task using the English Gigaword corpus.", "In all cases, including the HHMM significantly reduces perplexity.", "We trained a phrase-based translation model on the full NIST Open MT08 Urdu-English translation model using the full training data.", "We trained the HHMM and n-gram LMs on the WSJ data in order to make them as similar as possible.", "During tuning, Moses was first configured to use just the n-gram LM, then configured to use both the n-gram LM and the syntactic HHMM LM.", "MERT consistently assigned positive weight to the syntactic LM feature, typically slightly less than the n-gram LM weight.", "In our integration with Moses, incorporating a syntactic language model dramatically slows the decoding process.", "Figure 8 illustrates a slowdown around three orders of magnitude.", "Although speed remains roughly linear to the size of the source sentence (ruling out exponential behavior), it is with an extremely large constant time factor.", "Due to this slowdown, we tuned the parameters using a constrained dev set (only sentences with 1-20 words), and tested using a constrained devtest set (only sentences with 1-20 words).", "Figure 9 shows a statistically significant improvement to the BLEU score when using the HHMM and the n-gram LMs together on this reduced test set.", "Discussion This paper argues that incremental syntactic languages models are a straightforward and appro-Moses LM(s) BLEU n-gram only 18.78 HHMM + n-gram 19.78 Figure 9 : Results for Ur-En devtest (only sentences with 1-20 words) with HHMM beam size of 2000 and Moses settings of distortion limit 10, stack size 200, and ttable limit 20. 
priate algorithmic fit for incorporating syntax into phrase-based statistical machine translation, since both process sentences in an incremental left-toright fashion.", "This means incremental syntactic LM scores can be calculated during the decoding process, rather than waiting until a complete sentence is posited, which is typically necessary in top-down or bottom-up parsing.", "We provided a rigorous formal definition of incremental syntactic languages models, and detailed what steps are necessary to incorporate such LMs into phrase-based decoding.", "We integrated an incremental syntactic language model into Moses.", "The translation quality significantly improved on a constrained task, and the perplexity improvements suggest that interpolating between n-gram and syntactic LMs may hold promise on larger data sets.", "The use of very large n-gram language models is typically a key ingredient in the best-performing machine translation systems (Brants et al., 2007) .", "Our n-gram model trained only on WSJ is admittedly small.", "Our future work seeks to incorporate largescale n-gram language models in conjunction with incremental syntactic language models.", "The added decoding time cost of our syntactic language model is very high.", "By increasing the beam size and distortion limit of the baseline system, future work may examine whether a baseline system with comparable runtimes can achieve comparable translation quality.", "A more efficient implementation of the HHMM parser would speed decoding and make more extensive and conclusive translation experiments possible.", "Various additional improvements could include caching the HHMM LM calculations, and exploiting properties of the right-corner transform that limit the number of decisions between successive time steps." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.3", "4", "4.1", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Parser as Syntactic Language Model in", "Incremental syntactic language model", "Incorporating a Syntactic Language Model", "Incremental Bounded-Memory Parsing with a Time Series Model", "Formal Parsing Model: Scoring Partial Translation Hypotheses", "Results", "Discussion" ] }
"GEM-SciDuet-train-1#paper-954#slide-3"
"Incremental Parsing using HHMM Schuler et al 2010"
"Hierarchical Hidden Markov Model Circles denote hidden random variables Edges denote conditional dependencies NP/NN NN VP/NP DT board Isomorphic Tree Path DT president VB the Shaded circles denote observed values Motivation Decoder Integration Results Questions? Analogous to Maximally Incremental e1 =The e2 =president e3 =meets e4 =the e5 =board e =on e7 =Friday Push-Down Automata NP VP/NN NN"
"Hierarchical Hidden Markov Model Circles denote hidden random variables Edges denote conditional dependencies NP/NN NN VP/NP DT board Isomorphic Tree Path DT president VB the Shaded circles denote observed values Motivation Decoder Integration Results Questions? Analogous to Maximally Incremental e1 =The e2 =president e3 =meets e4 =the e5 =board e =on e7 =Friday Push-Down Automata NP VP/NN NN"
[]
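To inspect rows like the one above programmatically, a minimal sketch with the Hugging Face `datasets` library follows. It assumes the dataset is published on the Hub under the `GEM/SciDuet` identifier shown in this card; the field names are taken directly from the preview columns.

```python
from datasets import load_dataset

# Assumed Hub identifier, matching the card header; adjust if the dataset
# is hosted under a different name.
ds = load_dataset("GEM/SciDuet", split="train")

example = ds[0]
print(example["gem_id"])       # e.g. "GEM-SciDuet-train-1#paper-954#slide-0"
print(example["paper_title"])  # source paper title
print(example["slide_title"])  # title of the slide to generate
print(example["target"])       # reference slide text (the generation target)

# The paper body is stored as parallel sequences of sentence ids and text.
sentences = example["paper_content"]["paper_content_text"]
print(len(sentences), "sentences in the source paper")
```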
"GEM-SciDuet-train-1#paper-954#slide-4"
"954"
"Incremental Syntactic Language Models for Phrase-based Translation"
"This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machine translation through incremental syntactic parsing. Bottom-up and topdown parsers typically require a completed string as input. This requirement makes it difficult to incorporate them into phrase-based translation, which generates partial hypothesized translations from left-to-right. Incremental syntactic language models score sentences in a similar left-to-right fashion, and are therefore a good mechanism for incorporating syntax into phrase-based translation. We give a formal definition of one such lineartime syntactic language model, detail its relation to phrase-based decoding, and integrate the model with the Moses phrase-based translation system. We present empirical results on a constrained Urdu-English translation task that demonstrate a significant BLEU score improvement and a large decrease in perplexity."
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158 ], "paper_content_text": [ "Introduction Early work in statistical machine translation viewed translation as a noisy channel process comprised of a translation model, which functioned to posit adequate translations of source language words, and a target language model, which guided the fluency of generated target language strings (Brown et al., This research was supported by NSF CAREER/PECASE award 0447685, NSF grant IIS-0713448, and the European Commission through the EuroMatrixPlus project.", "Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the sponsors or the United States Air Force.", "Cleared for public release (Case Number 88ABW-2010-6489) on 10 Dec 2010.", "1990).", "Drawing on earlier successes in speech recognition, research in statistical machine translation has effectively used n-gram word sequence models as language models.", "Modern phrase-based translation using large scale n-gram language models generally performs well in terms of lexical choice, but still often produces ungrammatical output.", "Syntactic parsing may help produce more grammatical output by better modeling structural relationships and long-distance dependencies.", "Bottom-up and top-down parsers typically require a completed string as input; this requirement makes it difficult to incorporate these parsers into phrase-based translation, which generates hypothesized translations incrementally, from left-to-right.", "1 As a workaround, parsers can rerank the translated output of translation systems (Och et al., 2004) .", "On the other hand, incremental parsers (Roark, 2001; Henderson, 2004; Schuler et al., 2010; Huang and Sagae, 2010) process input in a straightforward left-to-right manner.", "We observe that incremental parsers, used as structured language models, provide an appropriate algorithmic match to incremental phrase-based decoding.", "We directly integrate incremental syntactic parsing into phrase-based translation.", "This approach re-exerts the role of the language model as a mechanism for encouraging syntactically fluent translations.", "The contributions of this work are as follows: • A novel method for integrating syntactic LMs into phrase-based translation ( §3) • A formal definition of an incremental parser for statistical MT that can run in linear-time ( §4) • Integration with Moses ( §5) along with empirical results for perplexity and significant translation score improvement on a constrained Urdu-English task ( §6) Related Work Neither phrase-based (Koehn et al., 2003) nor hierarchical phrase-based translation (Chiang, 2005) take explicit advantage of the syntactic structure of either source or target language.", "The translation models in these techniques define phrases as contiguous word sequences (with gaps allowed in the 
case of hierarchical phrases) which may or may not correspond to any linguistic constituent.", "Early work in statistical phrase-based translation considered whether restricting translation models to use only syntactically well-formed constituents might improve translation quality (Koehn et al., 2003) but found such restrictions failed to improve translation quality.", "Significant research has examined the extent to which syntax can be usefully incorporated into statistical tree-based translation models: string-to-tree (Yamada and Knight, 2001; Gildea, 2003; Imamura et al., 2004; Galley et al., 2004; Graehl and Knight, 2004; Melamed, 2004; Galley et al., 2006; Huang et al., 2006; Shen et al., 2008) , tree-to-string (Liu et al., 2006; Liu et al., 2007; Huang and Mi, 2010) , tree-to-tree (Abeillé et al., 1990; Shieber and Schabes, 1990; Poutsma, 1998; Eisner, 2003; Shieber, 2004; Cowan et al., 2006; Nesson et al., 2006; Zhang et al., 2007; DeNeefe et al., 2007; DeNeefe and Knight, 2009; Liu et al., 2009; Chiang, 2010) , and treelet (Ding and Palmer, 2005; Quirk et al., 2005) techniques use syntactic information to inform the translation model.", "Recent work has shown that parsing-based machine translation using syntax-augmented (Zollmann and Venugopal, 2006) hierarchical translation grammars with rich nonterminal sets can demonstrate substantial gains over hierarchical grammars for certain language pairs (Baker et al., 2009) .", "In contrast to the above tree-based translation models, our approach maintains a standard (non-syntactic) phrase-based translation model.", "Instead, we incorporate syntax into the language model.", "Traditional approaches to language models in speech recognition and statistical machine translation focus on the use of n-grams, which provide a simple finite-state model approximation of the target language.", "Chelba and Jelinek (1998) proposed that syntactic structure could be used as an alternative technique in language modeling.", "This insight has been explored in the context of speech recognition (Chelba and Jelinek, 2000; Collins et al., 2005) .", "Hassan et al.", "(2007) and use supertag n-gram LMs.", "Syntactic language models have also been explored with tree-based translation models.", "Charniak et al.", "(2003) use syntactic language models to rescore the output of a tree-based translation system.", "Post and Gildea (2008) investigate the integration of parsers as syntactic language models during binary bracketing transduction translation (Wu, 1997) ; under these conditions, both syntactic phrase-structure and dependency parsing language models were found to improve oracle-best translations, but did not improve actual translation results.", "Post and Gildea (2009) use tree substitution grammar parsing for language modeling, but do not use this language model in a translation system.", "Our work, in contrast to the above approaches, explores the use of incremental syntactic language models in conjunction with phrase-based translation models.", "Our syntactic language model fits into the family of linear-time dynamic programming parsers described in (Huang and Sagae, 2010) .", "Like (Galley and Manning, 2009 ) our work implements an incremental syntactic language model; our approach differs by calculating syntactic LM scores over all available phrase-structure parses at each hypothesis instead of the 1-best dependency parse.", "The syntax-driven reordering model of Ge (2010) uses syntax-driven features to influence word order within standard phrase-based 
translation.", "The syntactic cohesion features of Cherry (2008) encourages the use of syntactically well-formed translation phrases.", "These approaches are fully orthogonal to our proposed incremental syntactic language model, and could be applied in concert with our work.", "Parser as Syntactic Language Model in Phrase-Based Translation Parsing is the task of selecting the representationτ (typically a tree) that best models the structure of s τ 0 s thẽ τ 1 1 s that τ 1 2 s president τ 1 3 .", ".", ".", "the president τ 2 1 that president τ 2 2 president Fridaỹ τ 2 3 .", ".", ".", "president meets τ 3 1 Obama met τ 3 2 .", ".", ".", "Figure 1 : Partial decoding lattice for standard phrase-based decoding stack algorithm translating the German sentence Der Präsident trifft am Freitag den Vorstand.", "Each node h in decoding stack t represents the application of a translation option, and includes the source sentence coverage vector, target language ngram state, and syntactic language model stateτ t h .", "Hypothesis combination is also shown, indicating where lattice paths with identical n-gram histories converge.", "We use the English translation The president meets the board on Friday as a running example throughout all Figures.", "sentence e, out of all such possible representations τ .", "This set of representations may be all phrase structure trees or all dependency trees allowed by the parsing model.", "Typically, treeτ is taken to be: τ = argmax τ P(τ | e) (1) We define a syntactic language model P(e) based on the total probability mass over all possible trees for string e. This is shown in Equation 2 and decomposed in Equation 3.", "P(e) = τ ∈τ P(τ, e) (2) P(e) = τ ∈τ P(e | τ )P(τ ) (3) Incremental syntactic language model An incremental parser processes each token of input sequentially from the beginning of a sentence to the end, rather than processing input in a top-down (Earley, 1968) or bottom-up (Cocke and Schwartz, 1970; Kasami, 1965; Younger, 1967) fashion.", "After processing the tth token in string e, an incremental parser has some internal representation of possible hypothesized (incomplete) trees, τ t .", "The syntactic language model probability of a partial sentence e 1 ...e t is defined: P(e 1 ...e t ) = τ ∈τt P(e 1 ...e t | τ )P(τ ) (4) In practice, a parser may constrain the set of trees under consideration toτ t , that subset of analyses or partial analyses that remains after any pruning is performed.", "An incremental syntactic language model can then be defined by a probability mass function (Equation 5) and a transition function δ (Equation 6 ).", "The role of δ is explained in §3.3 below.", "Any parser which implements these two functions can serve as a syntactic language model.", "P(e 1 ...e t ) ≈ P(τ t ) = τ ∈τ t P(e 1 ...e t | τ )P(τ ) (5) δ(e t ,τ t−1 ) →τ t (6) 3.2 Decoding in phrase-based translation Given a source language input sentence f , a trained source-to-target translation model, and a target language model, the task of translation is to find the maximally probable translationê using a linear combination of j feature functions h weighted according to tuned parameters λ (Och and Ney, 2002) .", "e = argmax e exp( j λ j h j (e, f )) (7) Phrase-based translation constructs a set of translation options -hypothesized translations for contiguous portions of the source sentence -from a trained phrase table, then incrementally constructs a lattice of partial target translations (Koehn, 2010) .", "To prune the search space, lattice nodes are organized into 
beam stacks (Jelinek, 1969) according to the number of source words translated.", "An n-gram language model history is also maintained at each node in the translation lattice.", "The search space is further trimmed with hypothesis recombination, which collapses lattice nodes that share a common coverage vector and n-gram state.", "Incorporating a Syntactic Language Model Phrase-based translation produces target language words in an incremental left-to-right fashion, generating words at the beginning of a translation first and words at the end of a translation last.", "Similarly, incremental parsers process sentences in an incremental fashion, analyzing words at the beginning of a sentence first and words at the end of a sentence last.", "As such, an incremental parser with transition function δ can be incorporated into the phrase-based decoding process in a straightforward manner.", "Each node in the translation lattice is augmented with a syntactic language model stateτ t .", "The hypothesis at the root of the translation lattice is initialized withτ 0 , representing the internal state of the incremental parser before any input words are processed.", "The phrase-based translation decoding process adds nodes to the lattice; each new node contains one or more target language words.", "Each node contains a backpointer to its parent node, in whichτ t−1 is stored.", "Given a new target language word e t andτ t−1 , the incremental parser's transition function δ calculatesτ t .", "Figure 1 a sample phrase-based decoding lattice where each translation lattice node is augmented with syntactic language model stateτ t .", "In phrase-based translation, many translation lattice nodes represent multi-word target language phrases.", "For such translation lattice nodes, δ will be called once for each newly hypothesized target language word in the node.", "Only the final syntactic language model state in such sequences need be stored in the translation lattice node.", "Incremental Bounded-Memory Parsing with a Time Series Model Having defined the framework by which any incremental parser may be incorporated into phrasebased translation, we now formally define a specific incremental parser for use in our experiments.", "The parser must process target language words incrementally as the phrase-based decoder adds hypotheses to the translation lattice.", "To facilitate this incremental processing, ordinary phrase-structure trees can be transformed into right-corner recur- r 1 t−1 r 2 t−1 r 3 t−1 s 1 t−1 s 2 t−1 s 3 t−1 r 1 t r 2 t r 3 t s 1 t s 2 t s 3 t e t−1 e t .", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", "Figure 4 : Graphical representation of the dependency structure in a standard Hierarchic Hidden Markov Model with D = 3 hidden levels that can be used to parse syntax.", "Circles denote random variables, and edges denote conditional dependencies.", "Shaded circles denote variables with observed values.", "sive phrase structure trees using the tree transforms in Schuler et al.", "(2010) .", "Constituent nonterminals in right-corner transformed trees take the form of incomplete constituents c η /c ηι consisting of an 'active' constituent c η lacking an 'awaited' constituent c ηι yet to come, similar to non-constituent categories in a Combinatory Categorial Grammar (Ades and Steedman, 1982; Steedman, 2000) .", "As an example, the parser might consider VP/NN as a possible category for input \"meets the\".", "A sample phrase structure tree is shown before and after the right-corner transform in 
Figures 2 and 3.", "Our parser operates over a right-corner transformed probabilistic context-free grammar (PCFG).", "Parsing runs in linear time on the length of the input.", "This model of incremental parsing is implemented as a Hierarchical Hidden Markov Model (HHMM) (Murphy and Paskin, 2001) , and is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The parser runs in O(n) time, where n is the number of words in the input.", "This model is shown graphically in Figure 4 and formally defined in §4.1 below.", "The incremental parser assigns a probability (Eq.", "5) for a partial target language hypothesis, using a bounded store of incomplete constituents c η /c ηι .", "The phrase-based decoder uses this probability value as the syntactic language model feature score.", "Formal Parsing Model: Scoring Partial Translation Hypotheses This model is essentially an extension of an HHMM, which obtains a most likely sequence of hidden store states,ŝ 1..D 1..T , of some length T and some maximum depth D, given a sequence of observed tokens (e.g.", "generated target language words), e 1..T , using HHMM state transition model θ A and observation symbol model θ B (Rabiner, 1990) : s 1..D 1..T def = argmax s 1..D 1..T T t=1 P θ A (s 1..D t | s 1..D t−1 )·P θ B (e t | s 1..D t ) (8) The HHMM parser is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The model generates each successive store (using store model θ S ) only after considering whether each nested sequence of incomplete constituents has completed and reduced (using reduction model θ R ): P θ A (s 1..D t | s 1..D t−1 ) def = r 1 t ..r D t D d=1 P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) · P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) (9) Store elements are defined to contain only the active (c η ) and awaited (c ηι ) constituent categories necessary to compute an incomplete constituent probability: s d t def = c η , c ηι (10) Reduction states are defined to contain only the complete constituent category c r d t necessary to compute an inside likelihood probability, as well as a flag f r d t indicating whether a reduction has taken place (to end a sequence of incomplete constituents): r d t def = c r d t , f r d t (11) The model probabilities for these store elements and reduction states can then be defined (from Murphy and Paskin 2001) to expand a new incomplete constituent after a reduction has taken place (f r d t = 1; using depth-specific store state expansion model θ S-E,d ), transition along a sequence of store elements if no reduction has taken place (f r d t = 0; using depthspecific store state transition model θ S-T,d ): 2 P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) def =    if f r d+1 t = 1, f r d t = 1 : P θ S-E,d (s d t | s d−1 t ) if f r d+1 t = 1, f r d t = 0 : P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) if f r d+1 t = 0, f r d t = 0 : s d t = s d t−1 (12) and possibly reduce a store element (terminate a sequence) if the store state below it has reduced (f r d+1 t = 1; using depth-specific reduction model θ R,d ): P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if f r d+1 t = 0 : r d t = r ⊥ if f r d+1 t = 1 : P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) (13) where r ⊥ is a null state resulting from the failure of an incomplete constituent to complete, and constants are defined for the edge conditions of s 0 t and r D+1 t .", "Figure 5 illustrates this model in action.", "These pushdown automaton operations are then refined for right-corner parsing 
(Schuler, 2009) , distinguishing active transitions (model θ S-T-A,d , in which an incomplete constituent is completed, but not reduced, and then immediately expanded to a 2 An indicator function · is used to denote deterministic probabilities: φ = 1 if φ is true, 0 otherwise.", "new incomplete constituent in the same store element) from awaited transitions (model θ S-T-W,d , which involve no completion): P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) def = if r d t = r ⊥ : P θ S-T-A,d (s d t | s d−1 t r d t ) if r d t = r ⊥ : P θ S-T-W,d (s d t | s d t−1 r d+1 t ) (14) P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if c r d+1 t = x t : r d t = r ⊥ if c r d+1 t = x t : P θ R-R,d (r d t | s d t−1 s d−1 t−1 ) (15) These HHMM right-corner parsing operations are then defined in terms of branch-and depth-specific PCFG probabilities θ G-R,d and θ G-L,d : 3 3 Model probabilities are also defined in terms of leftprogeny probability distribution E θ G-RL * ,d which is itself defined in terms of PCFG probabilities: are calculated by transition function δ (Eq.", "6, as defined by §4.1), but are not stored.", "Observed random variables (e 3 ..e 5 ) are shown for clarity, but are not explicitly stored in any syntactic language model state.", "E θ G-RL * ,d (cη 0 → cη0 ...) def = c η1 P θ G-R,d (cη → cη0 cη1) (16) E θ G-RL * ,d (cη k → c η0 k 0 ...) def = c η0 k E θ G-RL * ,d (cη k−1 → c η0 k ...) · c η0 k 1 P θ G-L,d (c η0 k → c η0 k 0 c η0 k 1 ) (17) E θ G-RL * ,d (cη * → cηι ...) def = ∞ k=0 E θ G-RL * ,d (cη k → cηι ...) (18) E θ G-RL * ,d (cη + → cηι ...) def = E θ G-RL * ,d (cη * → cηι ...) − E θ G-RL * ,d (cη 0 → cηι ...) (19) coder's hypothesis stacks.", "Figure 1 illustrates an excerpt from a standard phrase-based translation lattice.", "Within each decoder stack t, each hypothesis h is augmented with a syntactic language model stateτ t h .", "Each syntactic language model state is a random variable store, containing a slice of random variables from the HHMM.", "Specifically,τ t h contains those random variables s 1..D t that maintain distributions over syntactic elements.", "By maintaining these syntactic random variable stores, each hypothesis has access to the current language model probability for the partial translation ending at that hypothesis, as calculated by an incremental syntactic language model defined by the HHMM.", "Specifically, the random variable store at hypothesis h provides P(τ t h ) = P(e h 1..t , s 1..D 1..t ), where e h 1..t is the sequence of words in a partial hypothesis ending at h which contains t target words, and where there are D syntactic random variables in each random variable store (Eq.", "5).", "During stack decoding, the phrase-based decoder progressively constructs new hypotheses by extending existing hypotheses.", "New hypotheses are placed in appropriate hypothesis stacks.", "In the simplest case, a new hypothesis extends an existing hypothesis by exactly one target word.", "As the new hypothesis is constructed by extending an existing stack element, the store and reduction state random variables are processed, along with the newly hypothesized word.", "This results in a new store of syntactic random variables (Eq.", "6) that are associated with the new stack element.", "When a new hypothesis extends an existing hypothesis by more than one word, this process is first carried out for the first new word in the hypothesis.", "It is then repeated for the remaining words in the hypothesis extension.", "Once the final word in the hypothesis has been 
processed, the resulting random variable store is associated with that hypothesis.", "The random variable stores created for the non-final words in the extending hypothesis are discarded, and need not be explicitly retained.", "Figure 6 illustrates this process, showing how a syntactic language model state τ̃^5_1 in a phrase-based decoding lattice is obtained from a previous syntactic language model state τ̃^3_1 (from Figure 1) by parsing the target language words from a phrase-based translation option.", "Our syntactic language model is integrated into the current version of Moses.", "Results As an initial measure to compare language models, average per-word perplexity, ppl, reports how surprised a model is by test data.", "Equation 25 calculates ppl using log base b for a test set of T tokens.", "ppl = b^(-(log_b P(e_1 ... e_T)) / T) (25) We trained the syntactic language model from §4 (HHMM) and an interpolated n-gram language model with modified Kneser-Ney smoothing (Chen and Goodman, 1998); models were trained on sections 2-21 of the Wall Street Journal (WSJ) treebank (Marcus et al., 1993). Figure 7 compares the perplexity of the HHMM and n-gram LMs.", "To show the effects of training an LM on more data, we also report perplexity results for the 5-gram LM trained for the GALE Arabic-English task using the English Gigaword corpus.", "In all cases, including the HHMM significantly reduces perplexity.", "We trained a phrase-based translation model for the NIST Open MT08 Urdu-English task using the full training data.", "We trained the HHMM and n-gram LMs on the WSJ data in order to make them as similar as possible.", "During tuning, Moses was first configured to use just the n-gram LM, then configured to use both the n-gram LM and the syntactic HHMM LM.", "MERT consistently assigned positive weight to the syntactic LM feature, typically slightly less than the n-gram LM weight.", "In our integration with Moses, incorporating a syntactic language model dramatically slows the decoding process.", "Figure 8 illustrates a slowdown of around three orders of magnitude.", "Although decoding time remains roughly linear in the length of the source sentence (ruling out exponential behavior), the constant factor is extremely large.", "Due to this slowdown, we tuned the parameters using a constrained dev set (only sentences with 1-20 words), and tested using a constrained devtest set (only sentences with 1-20 words).", "Figure 9 shows a statistically significant improvement in BLEU score when using the HHMM and the n-gram LMs together on this reduced test set.", "Figure 9: Results for Ur-En devtest (only sentences with 1-20 words) with HHMM beam size of 2000 and Moses settings of distortion limit 10, stack size 200, and ttable limit 20: n-gram only, 18.78 BLEU; HHMM + n-gram, 19.78 BLEU.", "Discussion This paper argues that incremental syntactic language models are a straightforward and appropriate algorithmic fit for incorporating syntax into phrase-based statistical machine translation, since both process sentences in an incremental left-to-right fashion.", "This means incremental syntactic LM scores can be calculated during the decoding process, rather than waiting until a complete sentence is posited, as is typically necessary in top-down or bottom-up parsing.", "We provided a rigorous formal definition of incremental syntactic language models, and detailed the steps necessary to incorporate such LMs into phrase-based decoding.", "We integrated an incremental syntactic language model into Moses.", "Translation quality significantly improved on a constrained task, and the perplexity improvements suggest that interpolating between n-gram and syntactic LMs may hold promise on larger data sets.", "The use of very large n-gram language models is typically a key ingredient in the best-performing machine translation systems (Brants et al., 2007).", "Our n-gram model, trained only on WSJ, is admittedly small.", "Our future work seeks to incorporate large-scale n-gram language models in conjunction with incremental syntactic language models.", "The added decoding-time cost of our syntactic language model is very high.", "By increasing the beam size and distortion limit of the baseline system, future work may examine whether a baseline system with comparable runtimes can achieve comparable translation quality.", "A more efficient implementation of the HHMM parser would speed decoding and make more extensive and conclusive translation experiments possible.", "Various additional improvements could include caching the HHMM LM calculations, and exploiting properties of the right-corner transform that limit the number of decisions between successive time steps." ] }
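Sections 3.1 and 3.3 of the paper text above reduce the integration to two operations: a transition function δ that folds one new target word into the parser state (Eq. 6), and a probability read-out over that state (Eq. 5). The sketch below shows how a phrase-based hypothesis extension might call such an interface; the class and method names are illustrative and are not the actual Moses or HHMM code.

```python
from dataclasses import dataclass

class IncrementalSyntacticLM:
    """Interface implied by Eqs. 5-6: any incremental parser exposing
    these operations can serve as a syntactic language model."""

    def initial_state(self):
        """Return tau_0, the parser state before any target words are seen."""
        raise NotImplementedError

    def transition(self, state, word):
        """delta(e_t, tau_{t-1}) -> tau_t (Eq. 6)."""
        raise NotImplementedError

    def log_prob(self, state):
        """log P(tau_t), used as the syntactic LM feature score (Eq. 5)."""
        raise NotImplementedError


@dataclass
class Hypothesis:
    words: tuple       # target words produced so far
    syn_state: object  # syntactic LM state stored at this lattice node


def extend_hypothesis(hyp, phrase_words, syn_lm):
    """Apply a (possibly multi-word) translation option to a hypothesis.

    delta is called once per new target word, but only the final parser
    state is kept in the new lattice node; the intermediate states are
    discarded, as described in the text above.
    """
    state = hyp.syn_state
    for w in phrase_words:
        state = syn_lm.transition(state, w)
    new_hyp = Hypothesis(words=hyp.words + tuple(phrase_words), syn_state=state)
    return new_hyp, syn_lm.log_prob(state)
```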
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.3", "4", "4.1", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Parser as Syntactic Language Model in", "Incremental syntactic language model", "Incorporating a Syntactic Language Model", "Incremental Bounded-Memory Parsing with a Time Series Model", "Formal Parsing Model: Scoring Partial Translation Hypotheses", "Results", "Discussion" ] }
"GEM-SciDuet-train-1#paper-954#slide-4"
"Phrase Based Translation"
"Der Prasident trifft am Freitag den Vorstand The president meets the board on Friday s president president Friday s that that president Obama met AAAAAA EAAAAA EEAAAA EEIAAA s s the the president president meets Stack Stack Stack Stack Motivation Syntactic LM Results Questions?"
"Der Prasident trifft am Freitag den Vorstand The president meets the board on Friday s president president Friday s that that president Obama met AAAAAA EAAAAA EEAAAA EEIAAA s s the the president president meets Stack Stack Stack Stack Motivation Syntactic LM Results Questions?"
[]
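The slide above depicts the stack-based search described in §3.2: hypotheses are binned by the number of source words they cover, and hypotheses sharing a coverage vector and n-gram state are recombined. A deliberately simplified sketch of that bookkeeping (not the actual Moses data structures) is given below.

```python
from collections import defaultdict

def recombination_key(hyp):
    # Hypotheses sharing coverage vector and n-gram history collapse
    # into a single lattice node (hypothesis recombination).
    return (hyp["coverage"], hyp["ngram_state"])

def push(stacks, hyp, beam_size=200):
    """Place a hypothesis in the stack indexed by the number of covered
    source words, keep only the best hypothesis per recombination key,
    and prune the stack to the beam size."""
    stack = stacks[sum(hyp["coverage"])]
    key = recombination_key(hyp)
    if key not in stack or hyp["score"] > stack[key]["score"]:
        stack[key] = hyp
    if len(stack) > beam_size:
        worst = sorted(stack.items(), key=lambda kv: kv[1]["score"])[: len(stack) - beam_size]
        for k, _ in worst:
            del stack[k]

stacks = defaultdict(dict)  # stack index -> {recombination key: best hypothesis}
# Toy hypothesis for the 7-word example source sentence from the figures.
push(stacks, {"coverage": (1, 0, 0, 0, 0, 0, 0), "ngram_state": ("the",), "score": -1.2})
```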
"GEM-SciDuet-train-1#paper-954#slide-5"
"954"
"Incremental Syntactic Language Models for Phrase-based Translation"
"This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machine translation through incremental syntactic parsing. Bottom-up and topdown parsers typically require a completed string as input. This requirement makes it difficult to incorporate them into phrase-based translation, which generates partial hypothesized translations from left-to-right. Incremental syntactic language models score sentences in a similar left-to-right fashion, and are therefore a good mechanism for incorporating syntax into phrase-based translation. We give a formal definition of one such lineartime syntactic language model, detail its relation to phrase-based decoding, and integrate the model with the Moses phrase-based translation system. We present empirical results on a constrained Urdu-English translation task that demonstrate a significant BLEU score improvement and a large decrease in perplexity."
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158 ], "paper_content_text": [ "Introduction Early work in statistical machine translation viewed translation as a noisy channel process comprised of a translation model, which functioned to posit adequate translations of source language words, and a target language model, which guided the fluency of generated target language strings (Brown et al., This research was supported by NSF CAREER/PECASE award 0447685, NSF grant IIS-0713448, and the European Commission through the EuroMatrixPlus project.", "Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the sponsors or the United States Air Force.", "Cleared for public release (Case Number 88ABW-2010-6489) on 10 Dec 2010.", "1990).", "Drawing on earlier successes in speech recognition, research in statistical machine translation has effectively used n-gram word sequence models as language models.", "Modern phrase-based translation using large scale n-gram language models generally performs well in terms of lexical choice, but still often produces ungrammatical output.", "Syntactic parsing may help produce more grammatical output by better modeling structural relationships and long-distance dependencies.", "Bottom-up and top-down parsers typically require a completed string as input; this requirement makes it difficult to incorporate these parsers into phrase-based translation, which generates hypothesized translations incrementally, from left-to-right.", "1 As a workaround, parsers can rerank the translated output of translation systems (Och et al., 2004) .", "On the other hand, incremental parsers (Roark, 2001; Henderson, 2004; Schuler et al., 2010; Huang and Sagae, 2010) process input in a straightforward left-to-right manner.", "We observe that incremental parsers, used as structured language models, provide an appropriate algorithmic match to incremental phrase-based decoding.", "We directly integrate incremental syntactic parsing into phrase-based translation.", "This approach re-exerts the role of the language model as a mechanism for encouraging syntactically fluent translations.", "The contributions of this work are as follows: • A novel method for integrating syntactic LMs into phrase-based translation ( §3) • A formal definition of an incremental parser for statistical MT that can run in linear-time ( §4) • Integration with Moses ( §5) along with empirical results for perplexity and significant translation score improvement on a constrained Urdu-English task ( §6) Related Work Neither phrase-based (Koehn et al., 2003) nor hierarchical phrase-based translation (Chiang, 2005) take explicit advantage of the syntactic structure of either source or target language.", "The translation models in these techniques define phrases as contiguous word sequences (with gaps allowed in the 
case of hierarchical phrases) which may or may not correspond to any linguistic constituent.", "Early work in statistical phrase-based translation considered whether restricting translation models to use only syntactically well-formed constituents might improve translation quality (Koehn et al., 2003) but found such restrictions failed to improve translation quality.", "Significant research has examined the extent to which syntax can be usefully incorporated into statistical tree-based translation models: string-to-tree (Yamada and Knight, 2001; Gildea, 2003; Imamura et al., 2004; Galley et al., 2004; Graehl and Knight, 2004; Melamed, 2004; Galley et al., 2006; Huang et al., 2006; Shen et al., 2008) , tree-to-string (Liu et al., 2006; Liu et al., 2007; Huang and Mi, 2010) , tree-to-tree (Abeillé et al., 1990; Shieber and Schabes, 1990; Poutsma, 1998; Eisner, 2003; Shieber, 2004; Cowan et al., 2006; Nesson et al., 2006; Zhang et al., 2007; DeNeefe et al., 2007; DeNeefe and Knight, 2009; Liu et al., 2009; Chiang, 2010) , and treelet (Ding and Palmer, 2005; Quirk et al., 2005) techniques use syntactic information to inform the translation model.", "Recent work has shown that parsing-based machine translation using syntax-augmented (Zollmann and Venugopal, 2006) hierarchical translation grammars with rich nonterminal sets can demonstrate substantial gains over hierarchical grammars for certain language pairs (Baker et al., 2009) .", "In contrast to the above tree-based translation models, our approach maintains a standard (non-syntactic) phrase-based translation model.", "Instead, we incorporate syntax into the language model.", "Traditional approaches to language models in speech recognition and statistical machine translation focus on the use of n-grams, which provide a simple finite-state model approximation of the target language.", "Chelba and Jelinek (1998) proposed that syntactic structure could be used as an alternative technique in language modeling.", "This insight has been explored in the context of speech recognition (Chelba and Jelinek, 2000; Collins et al., 2005) .", "Hassan et al.", "(2007) and use supertag n-gram LMs.", "Syntactic language models have also been explored with tree-based translation models.", "Charniak et al.", "(2003) use syntactic language models to rescore the output of a tree-based translation system.", "Post and Gildea (2008) investigate the integration of parsers as syntactic language models during binary bracketing transduction translation (Wu, 1997) ; under these conditions, both syntactic phrase-structure and dependency parsing language models were found to improve oracle-best translations, but did not improve actual translation results.", "Post and Gildea (2009) use tree substitution grammar parsing for language modeling, but do not use this language model in a translation system.", "Our work, in contrast to the above approaches, explores the use of incremental syntactic language models in conjunction with phrase-based translation models.", "Our syntactic language model fits into the family of linear-time dynamic programming parsers described in (Huang and Sagae, 2010) .", "Like (Galley and Manning, 2009 ) our work implements an incremental syntactic language model; our approach differs by calculating syntactic LM scores over all available phrase-structure parses at each hypothesis instead of the 1-best dependency parse.", "The syntax-driven reordering model of Ge (2010) uses syntax-driven features to influence word order within standard phrase-based 
translation.", "The syntactic cohesion features of Cherry (2008) encourages the use of syntactically well-formed translation phrases.", "These approaches are fully orthogonal to our proposed incremental syntactic language model, and could be applied in concert with our work.", "Parser as Syntactic Language Model in Phrase-Based Translation Parsing is the task of selecting the representationτ (typically a tree) that best models the structure of s τ 0 s thẽ τ 1 1 s that τ 1 2 s president τ 1 3 .", ".", ".", "the president τ 2 1 that president τ 2 2 president Fridaỹ τ 2 3 .", ".", ".", "president meets τ 3 1 Obama met τ 3 2 .", ".", ".", "Figure 1 : Partial decoding lattice for standard phrase-based decoding stack algorithm translating the German sentence Der Präsident trifft am Freitag den Vorstand.", "Each node h in decoding stack t represents the application of a translation option, and includes the source sentence coverage vector, target language ngram state, and syntactic language model stateτ t h .", "Hypothesis combination is also shown, indicating where lattice paths with identical n-gram histories converge.", "We use the English translation The president meets the board on Friday as a running example throughout all Figures.", "sentence e, out of all such possible representations τ .", "This set of representations may be all phrase structure trees or all dependency trees allowed by the parsing model.", "Typically, treeτ is taken to be: τ = argmax τ P(τ | e) (1) We define a syntactic language model P(e) based on the total probability mass over all possible trees for string e. This is shown in Equation 2 and decomposed in Equation 3.", "P(e) = τ ∈τ P(τ, e) (2) P(e) = τ ∈τ P(e | τ )P(τ ) (3) Incremental syntactic language model An incremental parser processes each token of input sequentially from the beginning of a sentence to the end, rather than processing input in a top-down (Earley, 1968) or bottom-up (Cocke and Schwartz, 1970; Kasami, 1965; Younger, 1967) fashion.", "After processing the tth token in string e, an incremental parser has some internal representation of possible hypothesized (incomplete) trees, τ t .", "The syntactic language model probability of a partial sentence e 1 ...e t is defined: P(e 1 ...e t ) = τ ∈τt P(e 1 ...e t | τ )P(τ ) (4) In practice, a parser may constrain the set of trees under consideration toτ t , that subset of analyses or partial analyses that remains after any pruning is performed.", "An incremental syntactic language model can then be defined by a probability mass function (Equation 5) and a transition function δ (Equation 6 ).", "The role of δ is explained in §3.3 below.", "Any parser which implements these two functions can serve as a syntactic language model.", "P(e 1 ...e t ) ≈ P(τ t ) = τ ∈τ t P(e 1 ...e t | τ )P(τ ) (5) δ(e t ,τ t−1 ) →τ t (6) 3.2 Decoding in phrase-based translation Given a source language input sentence f , a trained source-to-target translation model, and a target language model, the task of translation is to find the maximally probable translationê using a linear combination of j feature functions h weighted according to tuned parameters λ (Och and Ney, 2002) .", "e = argmax e exp( j λ j h j (e, f )) (7) Phrase-based translation constructs a set of translation options -hypothesized translations for contiguous portions of the source sentence -from a trained phrase table, then incrementally constructs a lattice of partial target translations (Koehn, 2010) .", "To prune the search space, lattice nodes are organized into 
beam stacks (Jelinek, 1969) according to the number of source words translated.", "An n-gram language model history is also maintained at each node in the translation lattice.", "The search space is further trimmed with hypothesis recombination, which collapses lattice nodes that share a common coverage vector and n-gram state.", "Incorporating a Syntactic Language Model Phrase-based translation produces target language words in an incremental left-to-right fashion, generating words at the beginning of a translation first and words at the end of a translation last.", "Similarly, incremental parsers process sentences in an incremental fashion, analyzing words at the beginning of a sentence first and words at the end of a sentence last.", "As such, an incremental parser with transition function δ can be incorporated into the phrase-based decoding process in a straightforward manner.", "Each node in the translation lattice is augmented with a syntactic language model stateτ t .", "The hypothesis at the root of the translation lattice is initialized withτ 0 , representing the internal state of the incremental parser before any input words are processed.", "The phrase-based translation decoding process adds nodes to the lattice; each new node contains one or more target language words.", "Each node contains a backpointer to its parent node, in whichτ t−1 is stored.", "Given a new target language word e t andτ t−1 , the incremental parser's transition function δ calculatesτ t .", "Figure 1 a sample phrase-based decoding lattice where each translation lattice node is augmented with syntactic language model stateτ t .", "In phrase-based translation, many translation lattice nodes represent multi-word target language phrases.", "For such translation lattice nodes, δ will be called once for each newly hypothesized target language word in the node.", "Only the final syntactic language model state in such sequences need be stored in the translation lattice node.", "Incremental Bounded-Memory Parsing with a Time Series Model Having defined the framework by which any incremental parser may be incorporated into phrasebased translation, we now formally define a specific incremental parser for use in our experiments.", "The parser must process target language words incrementally as the phrase-based decoder adds hypotheses to the translation lattice.", "To facilitate this incremental processing, ordinary phrase-structure trees can be transformed into right-corner recur- r 1 t−1 r 2 t−1 r 3 t−1 s 1 t−1 s 2 t−1 s 3 t−1 r 1 t r 2 t r 3 t s 1 t s 2 t s 3 t e t−1 e t .", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", "Figure 4 : Graphical representation of the dependency structure in a standard Hierarchic Hidden Markov Model with D = 3 hidden levels that can be used to parse syntax.", "Circles denote random variables, and edges denote conditional dependencies.", "Shaded circles denote variables with observed values.", "sive phrase structure trees using the tree transforms in Schuler et al.", "(2010) .", "Constituent nonterminals in right-corner transformed trees take the form of incomplete constituents c η /c ηι consisting of an 'active' constituent c η lacking an 'awaited' constituent c ηι yet to come, similar to non-constituent categories in a Combinatory Categorial Grammar (Ades and Steedman, 1982; Steedman, 2000) .", "As an example, the parser might consider VP/NN as a possible category for input \"meets the\".", "A sample phrase structure tree is shown before and after the right-corner transform in 
Figures 2 and 3.", "Our parser operates over a right-corner transformed probabilistic context-free grammar (PCFG).", "Parsing runs in linear time on the length of the input.", "This model of incremental parsing is implemented as a Hierarchical Hidden Markov Model (HHMM) (Murphy and Paskin, 2001) , and is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The parser runs in O(n) time, where n is the number of words in the input.", "This model is shown graphically in Figure 4 and formally defined in §4.1 below.", "The incremental parser assigns a probability (Eq.", "5) for a partial target language hypothesis, using a bounded store of incomplete constituents c η /c ηι .", "The phrase-based decoder uses this probability value as the syntactic language model feature score.", "Formal Parsing Model: Scoring Partial Translation Hypotheses This model is essentially an extension of an HHMM, which obtains a most likely sequence of hidden store states,ŝ 1..D 1..T , of some length T and some maximum depth D, given a sequence of observed tokens (e.g.", "generated target language words), e 1..T , using HHMM state transition model θ A and observation symbol model θ B (Rabiner, 1990) : s 1..D 1..T def = argmax s 1..D 1..T T t=1 P θ A (s 1..D t | s 1..D t−1 )·P θ B (e t | s 1..D t ) (8) The HHMM parser is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The model generates each successive store (using store model θ S ) only after considering whether each nested sequence of incomplete constituents has completed and reduced (using reduction model θ R ): P θ A (s 1..D t | s 1..D t−1 ) def = r 1 t ..r D t D d=1 P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) · P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) (9) Store elements are defined to contain only the active (c η ) and awaited (c ηι ) constituent categories necessary to compute an incomplete constituent probability: s d t def = c η , c ηι (10) Reduction states are defined to contain only the complete constituent category c r d t necessary to compute an inside likelihood probability, as well as a flag f r d t indicating whether a reduction has taken place (to end a sequence of incomplete constituents): r d t def = c r d t , f r d t (11) The model probabilities for these store elements and reduction states can then be defined (from Murphy and Paskin 2001) to expand a new incomplete constituent after a reduction has taken place (f r d t = 1; using depth-specific store state expansion model θ S-E,d ), transition along a sequence of store elements if no reduction has taken place (f r d t = 0; using depthspecific store state transition model θ S-T,d ): 2 P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) def =    if f r d+1 t = 1, f r d t = 1 : P θ S-E,d (s d t | s d−1 t ) if f r d+1 t = 1, f r d t = 0 : P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) if f r d+1 t = 0, f r d t = 0 : s d t = s d t−1 (12) and possibly reduce a store element (terminate a sequence) if the store state below it has reduced (f r d+1 t = 1; using depth-specific reduction model θ R,d ): P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if f r d+1 t = 0 : r d t = r ⊥ if f r d+1 t = 1 : P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) (13) where r ⊥ is a null state resulting from the failure of an incomplete constituent to complete, and constants are defined for the edge conditions of s 0 t and r D+1 t .", "Figure 5 illustrates this model in action.", "These pushdown automaton operations are then refined for right-corner parsing 
(Schuler, 2009) , distinguishing active transitions (model θ S-T-A,d , in which an incomplete constituent is completed, but not reduced, and then immediately expanded to a 2 An indicator function · is used to denote deterministic probabilities: φ = 1 if φ is true, 0 otherwise.", "new incomplete constituent in the same store element) from awaited transitions (model θ S-T-W,d , which involve no completion): P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) def = if r d t = r ⊥ : P θ S-T-A,d (s d t | s d−1 t r d t ) if r d t = r ⊥ : P θ S-T-W,d (s d t | s d t−1 r d+1 t ) (14) P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if c r d+1 t = x t : r d t = r ⊥ if c r d+1 t = x t : P θ R-R,d (r d t | s d t−1 s d−1 t−1 ) (15) These HHMM right-corner parsing operations are then defined in terms of branch-and depth-specific PCFG probabilities θ G-R,d and θ G-L,d : 3 3 Model probabilities are also defined in terms of leftprogeny probability distribution E θ G-RL * ,d which is itself defined in terms of PCFG probabilities: are calculated by transition function δ (Eq.", "6, as defined by §4.1), but are not stored.", "Observed random variables (e 3 ..e 5 ) are shown for clarity, but are not explicitly stored in any syntactic language model state.", "E θ G-RL * ,d (cη 0 → cη0 ...) def = c η1 P θ G-R,d (cη → cη0 cη1) (16) E θ G-RL * ,d (cη k → c η0 k 0 ...) def = c η0 k E θ G-RL * ,d (cη k−1 → c η0 k ...) · c η0 k 1 P θ G-L,d (c η0 k → c η0 k 0 c η0 k 1 ) (17) E θ G-RL * ,d (cη * → cηι ...) def = ∞ k=0 E θ G-RL * ,d (cη k → cηι ...) (18) E θ G-RL * ,d (cη + → cηι ...) def = E θ G-RL * ,d (cη * → cηι ...) − E θ G-RL * ,d (cη 0 → cηι ...) (19) coder's hypothesis stacks.", "Figure 1 illustrates an excerpt from a standard phrase-based translation lattice.", "Within each decoder stack t, each hypothesis h is augmented with a syntactic language model stateτ t h .", "Each syntactic language model state is a random variable store, containing a slice of random variables from the HHMM.", "Specifically,τ t h contains those random variables s 1..D t that maintain distributions over syntactic elements.", "By maintaining these syntactic random variable stores, each hypothesis has access to the current language model probability for the partial translation ending at that hypothesis, as calculated by an incremental syntactic language model defined by the HHMM.", "Specifically, the random variable store at hypothesis h provides P(τ t h ) = P(e h 1..t , s 1..D 1..t ), where e h 1..t is the sequence of words in a partial hypothesis ending at h which contains t target words, and where there are D syntactic random variables in each random variable store (Eq.", "5).", "During stack decoding, the phrase-based decoder progressively constructs new hypotheses by extending existing hypotheses.", "New hypotheses are placed in appropriate hypothesis stacks.", "In the simplest case, a new hypothesis extends an existing hypothesis by exactly one target word.", "As the new hypothesis is constructed by extending an existing stack element, the store and reduction state random variables are processed, along with the newly hypothesized word.", "This results in a new store of syntactic random variables (Eq.", "6) that are associated with the new stack element.", "When a new hypothesis extends an existing hypothesis by more than one word, this process is first carried out for the first new word in the hypothesis.", "It is then repeated for the remaining words in the hypothesis extension.", "Once the final word in the hypothesis has been 
processed, the resulting random variable store is associated with that hypothesis.", "The random variable stores created for the non-final words in the extending hypothesis are discarded, and need not be explicitly retained.", "Figure 6 illustrates this process, showing how a syntactic language model state τ̃^5_1 in a phrase-based decoding lattice is obtained from a previous syntactic language model state τ̃^3_1 (from Figure 1) by parsing the target language words from a phrase-based translation option.", "Our syntactic language model is integrated into the current version of Moses.", "Results As an initial measure to compare language models, average per-word perplexity, ppl, reports how surprised a model is by test data.", "Equation 25 calculates ppl using log base b for a test set of T tokens.", "ppl = b^(-(log_b P(e_1 ... e_T)) / T) (25) We trained the syntactic language model from §4 (HHMM) and an interpolated n-gram language model with modified Kneser-Ney smoothing (Chen and Goodman, 1998); models were trained on sections 2-21 of the Wall Street Journal (WSJ) treebank (Marcus et al., 1993). Figure 7 compares the perplexity of the HHMM and n-gram LMs.", "To show the effects of training an LM on more data, we also report perplexity results for the 5-gram LM trained for the GALE Arabic-English task using the English Gigaword corpus.", "In all cases, including the HHMM significantly reduces perplexity.", "We trained a phrase-based translation model for the NIST Open MT08 Urdu-English task using the full training data.", "We trained the HHMM and n-gram LMs on the WSJ data in order to make them as similar as possible.", "During tuning, Moses was first configured to use just the n-gram LM, then configured to use both the n-gram LM and the syntactic HHMM LM.", "MERT consistently assigned positive weight to the syntactic LM feature, typically slightly less than the n-gram LM weight.", "In our integration with Moses, incorporating a syntactic language model dramatically slows the decoding process.", "Figure 8 illustrates a slowdown of around three orders of magnitude.", "Although decoding time remains roughly linear in the length of the source sentence (ruling out exponential behavior), the constant factor is extremely large.", "Due to this slowdown, we tuned the parameters using a constrained dev set (only sentences with 1-20 words), and tested using a constrained devtest set (only sentences with 1-20 words).", "Figure 9 shows a statistically significant improvement in BLEU score when using the HHMM and the n-gram LMs together on this reduced test set.", "Figure 9: Results for Ur-En devtest (only sentences with 1-20 words) with HHMM beam size of 2000 and Moses settings of distortion limit 10, stack size 200, and ttable limit 20: n-gram only, 18.78 BLEU; HHMM + n-gram, 19.78 BLEU.", "Discussion This paper argues that incremental syntactic language models are a straightforward and appropriate algorithmic fit for incorporating syntax into phrase-based statistical machine translation, since both process sentences in an incremental left-to-right fashion.", "This means incremental syntactic LM scores can be calculated during the decoding process, rather than waiting until a complete sentence is posited, as is typically necessary in top-down or bottom-up parsing.", "We provided a rigorous formal definition of incremental syntactic language models, and detailed the steps necessary to incorporate such LMs into phrase-based decoding.", "We integrated an incremental syntactic language model into Moses.", "Translation quality significantly improved on a constrained task, and the perplexity improvements suggest that interpolating between n-gram and syntactic LMs may hold promise on larger data sets.", "The use of very large n-gram language models is typically a key ingredient in the best-performing machine translation systems (Brants et al., 2007).", "Our n-gram model, trained only on WSJ, is admittedly small.", "Our future work seeks to incorporate large-scale n-gram language models in conjunction with incremental syntactic language models.", "The added decoding-time cost of our syntactic language model is very high.", "By increasing the beam size and distortion limit of the baseline system, future work may examine whether a baseline system with comparable runtimes can achieve comparable translation quality.", "A more efficient implementation of the HHMM parser would speed decoding and make more extensive and conclusive translation experiments possible.", "Various additional improvements could include caching the HHMM LM calculations, and exploiting properties of the right-corner transform that limit the number of decisions between successive time steps." ] }
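Equation 7 in the paper text above scores candidate translations with a log-linear combination of weighted features, and the syntactic LM simply enters as one more feature next to the n-gram LM. A toy sketch follows; the feature values and weights are invented for illustration and are not the tuned MERT values reported by the authors.

```python
def model_score(features, weights):
    """Log-linear model of Eq. 7: score(e, f) = sum_j lambda_j * h_j(e, f).
    The argmax over candidate translations is unchanged by the exp()."""
    return sum(weights[name] * h for name, h in features.items())

# Invented feature values (log-domain) for one candidate translation.
features = {
    "ngram_lm": -42.7,      # n-gram LM log-probability
    "syntactic_lm": -51.3,  # HHMM syntactic LM log-probability (Eq. 5)
    "phrase_table": -18.9,
    "word_penalty": -7.0,
}

# Invented weights; in the paper, MERT gave the syntactic LM a positive
# weight, typically slightly below the n-gram LM weight.
weights = {"ngram_lm": 0.5, "syntactic_lm": 0.4, "phrase_table": 0.3, "word_penalty": -0.1}

print(model_score(features, weights))
```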
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.3", "4", "4.1", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Parser as Syntactic Language Model in", "Incremental syntactic language model", "Incorporating a Syntactic Language Model", "Incremental Bounded-Memory Parsing with a Time Series Model", "Formal Parsing Model: Scoring Partial Translation Hypotheses", "Results", "Discussion" ] }
"GEM-SciDuet-train-1#paper-954#slide-5"
"Phrase Based Translation with Syntactic LM"
"represents parses of the partial translation at node h in stack t s president president Friday s that that president Obama met AAAAAA EAAAAA EEAAAA EEIAAA s s the the president president meets Stack Stack Stack Stack Motivation Syntactic LM Results Questions?"
"represents parses of the partial translation at node h in stack t s president president Friday s that that president Obama met AAAAAA EAAAAA EEAAAA EEIAAA s s the the president president meets Stack Stack Stack Stack Motivation Syntactic LM Results Questions?"
[]
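The state τ̃ that the slide above attaches to each lattice node is, per §4.1, a bounded-depth store of HHMM random variables: each depth holds an active/awaited incomplete-constituent pair (Eq. 10), alongside reduction states carrying a completed category and a flag (Eq. 11). The containers below sketch one possible representation; the names and example values are illustrative, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class StoreElement:
    """s^d_t = <c_active, c_awaited>: an incomplete constituent such as VP/NN."""
    active: str
    awaited: str

@dataclass(frozen=True)
class ReductionState:
    """r^d_t = <c_r, f_r>: a completed category plus a flag marking whether
    a reduction ended a sequence of incomplete constituents."""
    category: str
    reduced: bool

@dataclass(frozen=True)
class SyntacticLMState:
    """tau^t_h: the slice of HHMM random variables kept at lattice node h.
    The store is bounded to D depths (D = 3 in Figure 4), so its size per
    node is constant."""
    store: Tuple[StoreElement, ...]
    log_prob: float  # log P(e_1..t, s^{1..D}_{1..t}), used as the feature score

# Illustrative state after parsing "the president meets the"; the paper
# mentions VP/NN as a plausible category for "meets the".
state = SyntacticLMState(
    store=(StoreElement("S", "VP"), StoreElement("VP", "NN"), StoreElement("-", "-")),
    log_prob=-23.4,  # invented value
)
print(state.store[1])
```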
"GEM-SciDuet-train-1#paper-954#slide-6"
"954"
"Incremental Syntactic Language Models for Phrase-based Translation"
"This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machine translation through incremental syntactic parsing. Bottom-up and topdown parsers typically require a completed string as input. This requirement makes it difficult to incorporate them into phrase-based translation, which generates partial hypothesized translations from left-to-right. Incremental syntactic language models score sentences in a similar left-to-right fashion, and are therefore a good mechanism for incorporating syntax into phrase-based translation. We give a formal definition of one such lineartime syntactic language model, detail its relation to phrase-based decoding, and integrate the model with the Moses phrase-based translation system. We present empirical results on a constrained Urdu-English translation task that demonstrate a significant BLEU score improvement and a large decrease in perplexity."
"GEM-SciDuet-train-1#paper-954#slide-6"
"Integrate Parser into Phrase based Decoder"
"EAAAAA EEAAAA EEIAAA EEIIAA s the the president president meets meets the Motivation Syntactic LM Results Questions? president meets the board"
"EAAAAA EEAAAA EEIAAA EEIIAA s the the president president meets meets the Motivation Syntactic LM Results Questions? president meets the board"
[]
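The slide row above illustrates extending a partial translation with a multi-word phrase such as "president meets the board" while the parser keeps pace word by word. The standalone sketch below shows the general pattern under the same caveat as the previous snippet: delta and extend_with_phrase are hypothetical placeholders, and only the final parser state would be stored with the new lattice node.

```python
from typing import List, Tuple

# Toy stand-in for a transition function delta(e_t, tau_{t-1}) -> tau_t:
# the "state" here is just the target prefix plus an accumulated log score.
State = Tuple[Tuple[str, ...], float]

def delta(state: State, word: str) -> State:
    prefix, logprob = state
    return prefix + (word,), logprob - 1.0  # a real parser would rescore here

def extend_with_phrase(state: State, phrase: List[str]) -> State:
    """Step the parser once per new target word of a translation option.

    Intermediate states for the non-final words are only needed transiently;
    the new hypothesis keeps just the state after the last word.
    """
    for word in phrase:
        state = delta(state, word)
    return state

initial: State = (("<s>",), 0.0)
print(extend_with_phrase(initial, ["president", "meets", "the", "board"]))
```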
"GEM-SciDuet-train-1#paper-954#slide-7"
"954"
"Incremental Syntactic Language Models for Phrase-based Translation"
"This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machine translation through incremental syntactic parsing. Bottom-up and topdown parsers typically require a completed string as input. This requirement makes it difficult to incorporate them into phrase-based translation, which generates partial hypothesized translations from left-to-right. Incremental syntactic language models score sentences in a similar left-to-right fashion, and are therefore a good mechanism for incorporating syntax into phrase-based translation. We give a formal definition of one such lineartime syntactic language model, detail its relation to phrase-based decoding, and integrate the model with the Moses phrase-based translation system. We present empirical results on a constrained Urdu-English translation task that demonstrate a significant BLEU score improvement and a large decrease in perplexity."
"GEM-SciDuet-train-1#paper-954#slide-7"
"Direct Maximum Entropy Model of Translation"
"e argmax exp jhj(e,f) h Distortion model n-gram LM Set of j feature weights Syntactic LM P( th) AAAAAA EAAAAA EEAAAA EEIAAA s s the the president president meets Stack Stack Stack Stack Motivation Syntactic LM Results Questions?"
"e argmax exp jhj(e,f) h Distortion model n-gram LM Set of j feature weights Syntactic LM P( th) AAAAAA EAAAAA EEAAAA EEIAAA s s the the president president meets Stack Stack Stack Stack Motivation Syntactic LM Results Questions?"
[]
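The slide row above lists the feature functions entering the decoder's direct maximum entropy model, where the chosen translation maximizes exp of the weighted sum of feature scores h_j(e, f), with the syntactic LM score added as one more feature next to the n-gram LM and distortion model. Below is a small numeric sketch of that weighted combination; the candidate strings, feature values, and weights are all invented purely for illustration.

```python
import math

# Toy log-linear scoring: each h_j is a log-domain feature value and
# lambda_j its tuned weight (as would come out of MERT). All numbers below
# are made up for illustration only.
def loglinear_score(features: dict, weights: dict) -> float:
    return sum(weights[name] * value for name, value in features.items())

candidates = {
    "the president meets the board on friday":
        {"ngram_lm": -12.3, "syntactic_lm": -10.1, "distortion": -2.0, "phrase_tm": -8.4},
    "the president meets friday the board":
        {"ngram_lm": -13.0, "syntactic_lm": -14.2, "distortion": -3.5, "phrase_tm": -8.1},
}
weights = {"ngram_lm": 0.5, "syntactic_lm": 0.4, "distortion": 0.3, "phrase_tm": 0.6}

# argmax over candidates of exp(sum_j lambda_j * h_j(e, f)); exp is monotone,
# so comparing the weighted sums directly picks the same translation.
best = max(candidates, key=lambda e: loglinear_score(candidates[e], weights))
print(best, math.exp(loglinear_score(candidates[best], weights)))
```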
"GEM-SciDuet-train-1#paper-954#slide-8"
"954"
"Incremental Syntactic Language Models for Phrase-based Translation"
"This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machine translation through incremental syntactic parsing. Bottom-up and topdown parsers typically require a completed string as input. This requirement makes it difficult to incorporate them into phrase-based translation, which generates partial hypothesized translations from left-to-right. Incremental syntactic language models score sentences in a similar left-to-right fashion, and are therefore a good mechanism for incorporating syntax into phrase-based translation. We give a formal definition of one such lineartime syntactic language model, detail its relation to phrase-based decoding, and integrate the model with the Moses phrase-based translation system. We present empirical results on a constrained Urdu-English translation task that demonstrate a significant BLEU score improvement and a large decrease in perplexity."
beam stacks (Jelinek, 1969) according to the number of source words translated.", "An n-gram language model history is also maintained at each node in the translation lattice.", "The search space is further trimmed with hypothesis recombination, which collapses lattice nodes that share a common coverage vector and n-gram state.", "Incorporating a Syntactic Language Model Phrase-based translation produces target language words in an incremental left-to-right fashion, generating words at the beginning of a translation first and words at the end of a translation last.", "Similarly, incremental parsers process sentences in an incremental fashion, analyzing words at the beginning of a sentence first and words at the end of a sentence last.", "As such, an incremental parser with transition function δ can be incorporated into the phrase-based decoding process in a straightforward manner.", "Each node in the translation lattice is augmented with a syntactic language model stateτ t .", "The hypothesis at the root of the translation lattice is initialized withτ 0 , representing the internal state of the incremental parser before any input words are processed.", "The phrase-based translation decoding process adds nodes to the lattice; each new node contains one or more target language words.", "Each node contains a backpointer to its parent node, in whichτ t−1 is stored.", "Given a new target language word e t andτ t−1 , the incremental parser's transition function δ calculatesτ t .", "Figure 1 a sample phrase-based decoding lattice where each translation lattice node is augmented with syntactic language model stateτ t .", "In phrase-based translation, many translation lattice nodes represent multi-word target language phrases.", "For such translation lattice nodes, δ will be called once for each newly hypothesized target language word in the node.", "Only the final syntactic language model state in such sequences need be stored in the translation lattice node.", "Incremental Bounded-Memory Parsing with a Time Series Model Having defined the framework by which any incremental parser may be incorporated into phrasebased translation, we now formally define a specific incremental parser for use in our experiments.", "The parser must process target language words incrementally as the phrase-based decoder adds hypotheses to the translation lattice.", "To facilitate this incremental processing, ordinary phrase-structure trees can be transformed into right-corner recur- r 1 t−1 r 2 t−1 r 3 t−1 s 1 t−1 s 2 t−1 s 3 t−1 r 1 t r 2 t r 3 t s 1 t s 2 t s 3 t e t−1 e t .", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", "Figure 4 : Graphical representation of the dependency structure in a standard Hierarchic Hidden Markov Model with D = 3 hidden levels that can be used to parse syntax.", "Circles denote random variables, and edges denote conditional dependencies.", "Shaded circles denote variables with observed values.", "sive phrase structure trees using the tree transforms in Schuler et al.", "(2010) .", "Constituent nonterminals in right-corner transformed trees take the form of incomplete constituents c η /c ηι consisting of an 'active' constituent c η lacking an 'awaited' constituent c ηι yet to come, similar to non-constituent categories in a Combinatory Categorial Grammar (Ades and Steedman, 1982; Steedman, 2000) .", "As an example, the parser might consider VP/NN as a possible category for input \"meets the\".", "A sample phrase structure tree is shown before and after the right-corner transform in 
Figures 2 and 3.", "Our parser operates over a right-corner transformed probabilistic context-free grammar (PCFG).", "Parsing runs in linear time on the length of the input.", "This model of incremental parsing is implemented as a Hierarchical Hidden Markov Model (HHMM) (Murphy and Paskin, 2001) , and is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The parser runs in O(n) time, where n is the number of words in the input.", "This model is shown graphically in Figure 4 and formally defined in §4.1 below.", "The incremental parser assigns a probability (Eq.", "5) for a partial target language hypothesis, using a bounded store of incomplete constituents c η /c ηι .", "The phrase-based decoder uses this probability value as the syntactic language model feature score.", "Formal Parsing Model: Scoring Partial Translation Hypotheses This model is essentially an extension of an HHMM, which obtains a most likely sequence of hidden store states,ŝ 1..D 1..T , of some length T and some maximum depth D, given a sequence of observed tokens (e.g.", "generated target language words), e 1..T , using HHMM state transition model θ A and observation symbol model θ B (Rabiner, 1990) : s 1..D 1..T def = argmax s 1..D 1..T T t=1 P θ A (s 1..D t | s 1..D t−1 )·P θ B (e t | s 1..D t ) (8) The HHMM parser is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The model generates each successive store (using store model θ S ) only after considering whether each nested sequence of incomplete constituents has completed and reduced (using reduction model θ R ): P θ A (s 1..D t | s 1..D t−1 ) def = r 1 t ..r D t D d=1 P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) · P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) (9) Store elements are defined to contain only the active (c η ) and awaited (c ηι ) constituent categories necessary to compute an incomplete constituent probability: s d t def = c η , c ηι (10) Reduction states are defined to contain only the complete constituent category c r d t necessary to compute an inside likelihood probability, as well as a flag f r d t indicating whether a reduction has taken place (to end a sequence of incomplete constituents): r d t def = c r d t , f r d t (11) The model probabilities for these store elements and reduction states can then be defined (from Murphy and Paskin 2001) to expand a new incomplete constituent after a reduction has taken place (f r d t = 1; using depth-specific store state expansion model θ S-E,d ), transition along a sequence of store elements if no reduction has taken place (f r d t = 0; using depthspecific store state transition model θ S-T,d ): 2 P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) def =    if f r d+1 t = 1, f r d t = 1 : P θ S-E,d (s d t | s d−1 t ) if f r d+1 t = 1, f r d t = 0 : P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) if f r d+1 t = 0, f r d t = 0 : s d t = s d t−1 (12) and possibly reduce a store element (terminate a sequence) if the store state below it has reduced (f r d+1 t = 1; using depth-specific reduction model θ R,d ): P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if f r d+1 t = 0 : r d t = r ⊥ if f r d+1 t = 1 : P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) (13) where r ⊥ is a null state resulting from the failure of an incomplete constituent to complete, and constants are defined for the edge conditions of s 0 t and r D+1 t .", "Figure 5 illustrates this model in action.", "These pushdown automaton operations are then refined for right-corner parsing 
(Schuler, 2009) , distinguishing active transitions (model θ S-T-A,d , in which an incomplete constituent is completed, but not reduced, and then immediately expanded to a 2 An indicator function · is used to denote deterministic probabilities: φ = 1 if φ is true, 0 otherwise.", "new incomplete constituent in the same store element) from awaited transitions (model θ S-T-W,d , which involve no completion): P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) def = if r d t = r ⊥ : P θ S-T-A,d (s d t | s d−1 t r d t ) if r d t = r ⊥ : P θ S-T-W,d (s d t | s d t−1 r d+1 t ) (14) P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if c r d+1 t = x t : r d t = r ⊥ if c r d+1 t = x t : P θ R-R,d (r d t | s d t−1 s d−1 t−1 ) (15) These HHMM right-corner parsing operations are then defined in terms of branch-and depth-specific PCFG probabilities θ G-R,d and θ G-L,d : 3 3 Model probabilities are also defined in terms of leftprogeny probability distribution E θ G-RL * ,d which is itself defined in terms of PCFG probabilities: are calculated by transition function δ (Eq.", "6, as defined by §4.1), but are not stored.", "Observed random variables (e 3 ..e 5 ) are shown for clarity, but are not explicitly stored in any syntactic language model state.", "E θ G-RL * ,d (cη 0 → cη0 ...) def = c η1 P θ G-R,d (cη → cη0 cη1) (16) E θ G-RL * ,d (cη k → c η0 k 0 ...) def = c η0 k E θ G-RL * ,d (cη k−1 → c η0 k ...) · c η0 k 1 P θ G-L,d (c η0 k → c η0 k 0 c η0 k 1 ) (17) E θ G-RL * ,d (cη * → cηι ...) def = ∞ k=0 E θ G-RL * ,d (cη k → cηι ...) (18) E θ G-RL * ,d (cη + → cηι ...) def = E θ G-RL * ,d (cη * → cηι ...) − E θ G-RL * ,d (cη 0 → cηι ...) (19) coder's hypothesis stacks.", "Figure 1 illustrates an excerpt from a standard phrase-based translation lattice.", "Within each decoder stack t, each hypothesis h is augmented with a syntactic language model stateτ t h .", "Each syntactic language model state is a random variable store, containing a slice of random variables from the HHMM.", "Specifically,τ t h contains those random variables s 1..D t that maintain distributions over syntactic elements.", "By maintaining these syntactic random variable stores, each hypothesis has access to the current language model probability for the partial translation ending at that hypothesis, as calculated by an incremental syntactic language model defined by the HHMM.", "Specifically, the random variable store at hypothesis h provides P(τ t h ) = P(e h 1..t , s 1..D 1..t ), where e h 1..t is the sequence of words in a partial hypothesis ending at h which contains t target words, and where there are D syntactic random variables in each random variable store (Eq.", "5).", "During stack decoding, the phrase-based decoder progressively constructs new hypotheses by extending existing hypotheses.", "New hypotheses are placed in appropriate hypothesis stacks.", "In the simplest case, a new hypothesis extends an existing hypothesis by exactly one target word.", "As the new hypothesis is constructed by extending an existing stack element, the store and reduction state random variables are processed, along with the newly hypothesized word.", "This results in a new store of syntactic random variables (Eq.", "6) that are associated with the new stack element.", "When a new hypothesis extends an existing hypothesis by more than one word, this process is first carried out for the first new word in the hypothesis.", "It is then repeated for the remaining words in the hypothesis extension.", "Once the final word in the hypothesis has been 
processed, the resulting random variable store is associated with that hypothesis.", "The random variable stores created for the non-final words in the extending hypothesis are discarded, and need not be explicitly retained.", "Figure 6 illustrates this process, showing how a syntactic language model stateτ 5 1 in a phrase-based decoding lattice is obtained from a previous syntactic language model stateτ 3 1 (from Figure 1) by parsing the target language words from a phrasebased translation option.", "Our syntactic language model is integrated into the current version of Moses .", "Results As an initial measure to compare language models, average per-word perplexity, ppl, reports how surprised a model is by test data.", "Equation 25 calculates ppl using log base b for a test set of T tokens.", "ppl = b −log b P(e 1 ...e T ) T (25) We trained the syntactic language model from §4 (HHMM) and an interpolated n-gram language model with modified Kneser-Ney smoothing (Chen and Goodman, 1998); models were trained on sections 2-21 of the Wall Street Journal (WSJ) treebank (Marcus et al., 1993 HHMM and n-gram LMs (Figure 7) .", "To show the effects of training an LM on more data, we also report perplexity results on the 5-gram LM trained for the GALE Arabic-English task using the English Gigaword corpus.", "In all cases, including the HHMM significantly reduces perplexity.", "We trained a phrase-based translation model on the full NIST Open MT08 Urdu-English translation model using the full training data.", "We trained the HHMM and n-gram LMs on the WSJ data in order to make them as similar as possible.", "During tuning, Moses was first configured to use just the n-gram LM, then configured to use both the n-gram LM and the syntactic HHMM LM.", "MERT consistently assigned positive weight to the syntactic LM feature, typically slightly less than the n-gram LM weight.", "In our integration with Moses, incorporating a syntactic language model dramatically slows the decoding process.", "Figure 8 illustrates a slowdown around three orders of magnitude.", "Although speed remains roughly linear to the size of the source sentence (ruling out exponential behavior), it is with an extremely large constant time factor.", "Due to this slowdown, we tuned the parameters using a constrained dev set (only sentences with 1-20 words), and tested using a constrained devtest set (only sentences with 1-20 words).", "Figure 9 shows a statistically significant improvement to the BLEU score when using the HHMM and the n-gram LMs together on this reduced test set.", "Discussion This paper argues that incremental syntactic languages models are a straightforward and appro-Moses LM(s) BLEU n-gram only 18.78 HHMM + n-gram 19.78 Figure 9 : Results for Ur-En devtest (only sentences with 1-20 words) with HHMM beam size of 2000 and Moses settings of distortion limit 10, stack size 200, and ttable limit 20. 
priate algorithmic fit for incorporating syntax into phrase-based statistical machine translation, since both process sentences in an incremental left-toright fashion.", "This means incremental syntactic LM scores can be calculated during the decoding process, rather than waiting until a complete sentence is posited, which is typically necessary in top-down or bottom-up parsing.", "We provided a rigorous formal definition of incremental syntactic languages models, and detailed what steps are necessary to incorporate such LMs into phrase-based decoding.", "We integrated an incremental syntactic language model into Moses.", "The translation quality significantly improved on a constrained task, and the perplexity improvements suggest that interpolating between n-gram and syntactic LMs may hold promise on larger data sets.", "The use of very large n-gram language models is typically a key ingredient in the best-performing machine translation systems (Brants et al., 2007) .", "Our n-gram model trained only on WSJ is admittedly small.", "Our future work seeks to incorporate largescale n-gram language models in conjunction with incremental syntactic language models.", "The added decoding time cost of our syntactic language model is very high.", "By increasing the beam size and distortion limit of the baseline system, future work may examine whether a baseline system with comparable runtimes can achieve comparable translation quality.", "A more efficient implementation of the HHMM parser would speed decoding and make more extensive and conclusive translation experiments possible.", "Various additional improvements could include caching the HHMM LM calculations, and exploiting properties of the right-corner transform that limit the number of decisions between successive time steps." ] }
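The paper text above defines an incremental syntactic LM abstractly: a transition function delta maps a parser state and the next target word to a new state (Eq. 6), Eq. 5 gives the probability mass over the surviving partial parses, and each decoder hypothesis carries one parser state, with only the final state kept when a multi-word translation option is applied. The sketch below illustrates that bookkeeping; the parser object and its method names (initial_state, step, logprob) are hypothetical stand-ins, not the paper's HHMM implementation.

```python
# Sketch of how a phrase-based decoder hypothesis could carry an incremental
# syntactic LM state, following the description above. The `parser` argument is
# assumed to expose initial_state(), step(state, word) and logprob(state);
# these names are illustrative, not taken from the paper's code.

from dataclasses import dataclass
from typing import Any, List


@dataclass
class Hypothesis:
    words: List[str]      # target words produced so far
    syn_state: Any        # tau_t: parser state after those words
    syn_logprob: float    # log P(e_1..e_t) under the syntactic LM


def initial_hypothesis(parser) -> Hypothesis:
    state = parser.initial_state()  # tau_0, before any target word is processed
    return Hypothesis(words=[], syn_state=state, syn_logprob=parser.logprob(state))


def extend(parser, hyp: Hypothesis, phrase: List[str]) -> Hypothesis:
    """Apply a multi-word translation option: call delta once per new word,
    but store only the final syntactic LM state in the new hypothesis."""
    state = hyp.syn_state
    for word in phrase:
        state = parser.step(state, word)  # delta(e_t, tau_{t-1}) -> tau_t  (Eq. 6)
    return Hypothesis(words=hyp.words + phrase,
                      syn_state=state,
                      syn_logprob=parser.logprob(state))
```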
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.3", "4", "4.1", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Parser as Syntactic Language Model in", "Incremental syntactic language model", "Incorporating a Syntactic Language Model", "Incremental Bounded-Memory Parsing with a Time Series Model", "Formal Parsing Model: Scoring Partial Translation Hypotheses", "Results", "Discussion" ] }
"GEM-SciDuet-train-1#paper-954#slide-8"
"Does an Incremental Syntactic LM Help Translation"
"but will it make my BLEU score go up? Motivation Syntactic LM Decoder Integration Questions? Moses with LM(s) BLEU Using n-gram LM only Using n-gram LM + Syntactic LM NIST OpenMT 2008 Urdu-English data set Moses with standard phrase-based translation model Tuning and testing restricted to sentences 20 words long Results reported on devtest set n-gram LM is WSJ 5-gram LM"
"but will it make my BLEU score go up? Motivation Syntactic LM Decoder Integration Questions? Moses with LM(s) BLEU Using n-gram LM only Using n-gram LM + Syntactic LM NIST OpenMT 2008 Urdu-English data set Moses with standard phrase-based translation model Tuning and testing restricted to sentences 20 words long Results reported on devtest set n-gram LM is WSJ 5-gram LM"
[]
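The slide row above compares BLEU for Moses with the n-gram LM alone against the n-gram LM plus the syntactic LM. In the paper text, both language models enter the decoder through the log-linear model of Eq. 7, and MERT reportedly gave the syntactic LM a positive weight slightly below the n-gram LM weight. A toy sketch of that feature combination follows; the feature names, values, and weights are illustrative, not the tuned values from the paper.

```python
def loglinear_score(features: dict, weights: dict) -> float:
    """Eq. 7 style scoring: sum_j lambda_j * h_j(e, f). The decoder keeps the
    argmax, so the exp() in Eq. 7 is monotone and can be dropped during search."""
    return sum(weights[name] * value for name, value in features.items())


# Illustrative log-domain feature values for one hypothesis; the weights below
# are made up for the example, not the MERT-tuned values reported in the paper.
features = {"tm": -4.1, "ngram_lm": -7.3, "syntactic_lm": -8.0, "word_penalty": -5.0}
weights = {"tm": 0.3, "ngram_lm": 0.25, "syntactic_lm": 0.2, "word_penalty": -0.1}
print(loglinear_score(features, weights))
```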
"GEM-SciDuet-train-1#paper-954#slide-9"
"954"
"Incremental Syntactic Language Models for Phrase-based Translation"
"This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machine translation through incremental syntactic parsing. Bottom-up and topdown parsers typically require a completed string as input. This requirement makes it difficult to incorporate them into phrase-based translation, which generates partial hypothesized translations from left-to-right. Incremental syntactic language models score sentences in a similar left-to-right fashion, and are therefore a good mechanism for incorporating syntax into phrase-based translation. We give a formal definition of one such lineartime syntactic language model, detail its relation to phrase-based decoding, and integrate the model with the Moses phrase-based translation system. We present empirical results on a constrained Urdu-English translation task that demonstrate a significant BLEU score improvement and a large decrease in perplexity."
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158 ], "paper_content_text": [ "Introduction Early work in statistical machine translation viewed translation as a noisy channel process comprised of a translation model, which functioned to posit adequate translations of source language words, and a target language model, which guided the fluency of generated target language strings (Brown et al., This research was supported by NSF CAREER/PECASE award 0447685, NSF grant IIS-0713448, and the European Commission through the EuroMatrixPlus project.", "Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the sponsors or the United States Air Force.", "Cleared for public release (Case Number 88ABW-2010-6489) on 10 Dec 2010.", "1990).", "Drawing on earlier successes in speech recognition, research in statistical machine translation has effectively used n-gram word sequence models as language models.", "Modern phrase-based translation using large scale n-gram language models generally performs well in terms of lexical choice, but still often produces ungrammatical output.", "Syntactic parsing may help produce more grammatical output by better modeling structural relationships and long-distance dependencies.", "Bottom-up and top-down parsers typically require a completed string as input; this requirement makes it difficult to incorporate these parsers into phrase-based translation, which generates hypothesized translations incrementally, from left-to-right.", "1 As a workaround, parsers can rerank the translated output of translation systems (Och et al., 2004) .", "On the other hand, incremental parsers (Roark, 2001; Henderson, 2004; Schuler et al., 2010; Huang and Sagae, 2010) process input in a straightforward left-to-right manner.", "We observe that incremental parsers, used as structured language models, provide an appropriate algorithmic match to incremental phrase-based decoding.", "We directly integrate incremental syntactic parsing into phrase-based translation.", "This approach re-exerts the role of the language model as a mechanism for encouraging syntactically fluent translations.", "The contributions of this work are as follows: • A novel method for integrating syntactic LMs into phrase-based translation ( §3) • A formal definition of an incremental parser for statistical MT that can run in linear-time ( §4) • Integration with Moses ( §5) along with empirical results for perplexity and significant translation score improvement on a constrained Urdu-English task ( §6) Related Work Neither phrase-based (Koehn et al., 2003) nor hierarchical phrase-based translation (Chiang, 2005) take explicit advantage of the syntactic structure of either source or target language.", "The translation models in these techniques define phrases as contiguous word sequences (with gaps allowed in the 
case of hierarchical phrases) which may or may not correspond to any linguistic constituent.", "Early work in statistical phrase-based translation considered whether restricting translation models to use only syntactically well-formed constituents might improve translation quality (Koehn et al., 2003) but found such restrictions failed to improve translation quality.", "Significant research has examined the extent to which syntax can be usefully incorporated into statistical tree-based translation models: string-to-tree (Yamada and Knight, 2001; Gildea, 2003; Imamura et al., 2004; Galley et al., 2004; Graehl and Knight, 2004; Melamed, 2004; Galley et al., 2006; Huang et al., 2006; Shen et al., 2008) , tree-to-string (Liu et al., 2006; Liu et al., 2007; Huang and Mi, 2010) , tree-to-tree (Abeillé et al., 1990; Shieber and Schabes, 1990; Poutsma, 1998; Eisner, 2003; Shieber, 2004; Cowan et al., 2006; Nesson et al., 2006; Zhang et al., 2007; DeNeefe et al., 2007; DeNeefe and Knight, 2009; Liu et al., 2009; Chiang, 2010) , and treelet (Ding and Palmer, 2005; Quirk et al., 2005) techniques use syntactic information to inform the translation model.", "Recent work has shown that parsing-based machine translation using syntax-augmented (Zollmann and Venugopal, 2006) hierarchical translation grammars with rich nonterminal sets can demonstrate substantial gains over hierarchical grammars for certain language pairs (Baker et al., 2009) .", "In contrast to the above tree-based translation models, our approach maintains a standard (non-syntactic) phrase-based translation model.", "Instead, we incorporate syntax into the language model.", "Traditional approaches to language models in speech recognition and statistical machine translation focus on the use of n-grams, which provide a simple finite-state model approximation of the target language.", "Chelba and Jelinek (1998) proposed that syntactic structure could be used as an alternative technique in language modeling.", "This insight has been explored in the context of speech recognition (Chelba and Jelinek, 2000; Collins et al., 2005) .", "Hassan et al.", "(2007) and use supertag n-gram LMs.", "Syntactic language models have also been explored with tree-based translation models.", "Charniak et al.", "(2003) use syntactic language models to rescore the output of a tree-based translation system.", "Post and Gildea (2008) investigate the integration of parsers as syntactic language models during binary bracketing transduction translation (Wu, 1997) ; under these conditions, both syntactic phrase-structure and dependency parsing language models were found to improve oracle-best translations, but did not improve actual translation results.", "Post and Gildea (2009) use tree substitution grammar parsing for language modeling, but do not use this language model in a translation system.", "Our work, in contrast to the above approaches, explores the use of incremental syntactic language models in conjunction with phrase-based translation models.", "Our syntactic language model fits into the family of linear-time dynamic programming parsers described in (Huang and Sagae, 2010) .", "Like (Galley and Manning, 2009 ) our work implements an incremental syntactic language model; our approach differs by calculating syntactic LM scores over all available phrase-structure parses at each hypothesis instead of the 1-best dependency parse.", "The syntax-driven reordering model of Ge (2010) uses syntax-driven features to influence word order within standard phrase-based 
translation.", "The syntactic cohesion features of Cherry (2008) encourages the use of syntactically well-formed translation phrases.", "These approaches are fully orthogonal to our proposed incremental syntactic language model, and could be applied in concert with our work.", "Parser as Syntactic Language Model in Phrase-Based Translation Parsing is the task of selecting the representationτ (typically a tree) that best models the structure of s τ 0 s thẽ τ 1 1 s that τ 1 2 s president τ 1 3 .", ".", ".", "the president τ 2 1 that president τ 2 2 president Fridaỹ τ 2 3 .", ".", ".", "president meets τ 3 1 Obama met τ 3 2 .", ".", ".", "Figure 1 : Partial decoding lattice for standard phrase-based decoding stack algorithm translating the German sentence Der Präsident trifft am Freitag den Vorstand.", "Each node h in decoding stack t represents the application of a translation option, and includes the source sentence coverage vector, target language ngram state, and syntactic language model stateτ t h .", "Hypothesis combination is also shown, indicating where lattice paths with identical n-gram histories converge.", "We use the English translation The president meets the board on Friday as a running example throughout all Figures.", "sentence e, out of all such possible representations τ .", "This set of representations may be all phrase structure trees or all dependency trees allowed by the parsing model.", "Typically, treeτ is taken to be: τ = argmax τ P(τ | e) (1) We define a syntactic language model P(e) based on the total probability mass over all possible trees for string e. This is shown in Equation 2 and decomposed in Equation 3.", "P(e) = τ ∈τ P(τ, e) (2) P(e) = τ ∈τ P(e | τ )P(τ ) (3) Incremental syntactic language model An incremental parser processes each token of input sequentially from the beginning of a sentence to the end, rather than processing input in a top-down (Earley, 1968) or bottom-up (Cocke and Schwartz, 1970; Kasami, 1965; Younger, 1967) fashion.", "After processing the tth token in string e, an incremental parser has some internal representation of possible hypothesized (incomplete) trees, τ t .", "The syntactic language model probability of a partial sentence e 1 ...e t is defined: P(e 1 ...e t ) = τ ∈τt P(e 1 ...e t | τ )P(τ ) (4) In practice, a parser may constrain the set of trees under consideration toτ t , that subset of analyses or partial analyses that remains after any pruning is performed.", "An incremental syntactic language model can then be defined by a probability mass function (Equation 5) and a transition function δ (Equation 6 ).", "The role of δ is explained in §3.3 below.", "Any parser which implements these two functions can serve as a syntactic language model.", "P(e 1 ...e t ) ≈ P(τ t ) = τ ∈τ t P(e 1 ...e t | τ )P(τ ) (5) δ(e t ,τ t−1 ) →τ t (6) 3.2 Decoding in phrase-based translation Given a source language input sentence f , a trained source-to-target translation model, and a target language model, the task of translation is to find the maximally probable translationê using a linear combination of j feature functions h weighted according to tuned parameters λ (Och and Ney, 2002) .", "e = argmax e exp( j λ j h j (e, f )) (7) Phrase-based translation constructs a set of translation options -hypothesized translations for contiguous portions of the source sentence -from a trained phrase table, then incrementally constructs a lattice of partial target translations (Koehn, 2010) .", "To prune the search space, lattice nodes are organized into 
beam stacks (Jelinek, 1969) according to the number of source words translated.", "An n-gram language model history is also maintained at each node in the translation lattice.", "The search space is further trimmed with hypothesis recombination, which collapses lattice nodes that share a common coverage vector and n-gram state.", "Incorporating a Syntactic Language Model Phrase-based translation produces target language words in an incremental left-to-right fashion, generating words at the beginning of a translation first and words at the end of a translation last.", "Similarly, incremental parsers process sentences in an incremental fashion, analyzing words at the beginning of a sentence first and words at the end of a sentence last.", "As such, an incremental parser with transition function δ can be incorporated into the phrase-based decoding process in a straightforward manner.", "Each node in the translation lattice is augmented with a syntactic language model stateτ t .", "The hypothesis at the root of the translation lattice is initialized withτ 0 , representing the internal state of the incremental parser before any input words are processed.", "The phrase-based translation decoding process adds nodes to the lattice; each new node contains one or more target language words.", "Each node contains a backpointer to its parent node, in whichτ t−1 is stored.", "Given a new target language word e t andτ t−1 , the incremental parser's transition function δ calculatesτ t .", "Figure 1 a sample phrase-based decoding lattice where each translation lattice node is augmented with syntactic language model stateτ t .", "In phrase-based translation, many translation lattice nodes represent multi-word target language phrases.", "For such translation lattice nodes, δ will be called once for each newly hypothesized target language word in the node.", "Only the final syntactic language model state in such sequences need be stored in the translation lattice node.", "Incremental Bounded-Memory Parsing with a Time Series Model Having defined the framework by which any incremental parser may be incorporated into phrasebased translation, we now formally define a specific incremental parser for use in our experiments.", "The parser must process target language words incrementally as the phrase-based decoder adds hypotheses to the translation lattice.", "To facilitate this incremental processing, ordinary phrase-structure trees can be transformed into right-corner recur- r 1 t−1 r 2 t−1 r 3 t−1 s 1 t−1 s 2 t−1 s 3 t−1 r 1 t r 2 t r 3 t s 1 t s 2 t s 3 t e t−1 e t .", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", "Figure 4 : Graphical representation of the dependency structure in a standard Hierarchic Hidden Markov Model with D = 3 hidden levels that can be used to parse syntax.", "Circles denote random variables, and edges denote conditional dependencies.", "Shaded circles denote variables with observed values.", "sive phrase structure trees using the tree transforms in Schuler et al.", "(2010) .", "Constituent nonterminals in right-corner transformed trees take the form of incomplete constituents c η /c ηι consisting of an 'active' constituent c η lacking an 'awaited' constituent c ηι yet to come, similar to non-constituent categories in a Combinatory Categorial Grammar (Ades and Steedman, 1982; Steedman, 2000) .", "As an example, the parser might consider VP/NN as a possible category for input \"meets the\".", "A sample phrase structure tree is shown before and after the right-corner transform in 
Figures 2 and 3.", "Our parser operates over a right-corner transformed probabilistic context-free grammar (PCFG).", "Parsing runs in linear time on the length of the input.", "This model of incremental parsing is implemented as a Hierarchical Hidden Markov Model (HHMM) (Murphy and Paskin, 2001) , and is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The parser runs in O(n) time, where n is the number of words in the input.", "This model is shown graphically in Figure 4 and formally defined in §4.1 below.", "The incremental parser assigns a probability (Eq.", "5) for a partial target language hypothesis, using a bounded store of incomplete constituents c η /c ηι .", "The phrase-based decoder uses this probability value as the syntactic language model feature score.", "Formal Parsing Model: Scoring Partial Translation Hypotheses This model is essentially an extension of an HHMM, which obtains a most likely sequence of hidden store states,ŝ 1..D 1..T , of some length T and some maximum depth D, given a sequence of observed tokens (e.g.", "generated target language words), e 1..T , using HHMM state transition model θ A and observation symbol model θ B (Rabiner, 1990) : s 1..D 1..T def = argmax s 1..D 1..T T t=1 P θ A (s 1..D t | s 1..D t−1 )·P θ B (e t | s 1..D t ) (8) The HHMM parser is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The model generates each successive store (using store model θ S ) only after considering whether each nested sequence of incomplete constituents has completed and reduced (using reduction model θ R ): P θ A (s 1..D t | s 1..D t−1 ) def = r 1 t ..r D t D d=1 P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) · P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) (9) Store elements are defined to contain only the active (c η ) and awaited (c ηι ) constituent categories necessary to compute an incomplete constituent probability: s d t def = c η , c ηι (10) Reduction states are defined to contain only the complete constituent category c r d t necessary to compute an inside likelihood probability, as well as a flag f r d t indicating whether a reduction has taken place (to end a sequence of incomplete constituents): r d t def = c r d t , f r d t (11) The model probabilities for these store elements and reduction states can then be defined (from Murphy and Paskin 2001) to expand a new incomplete constituent after a reduction has taken place (f r d t = 1; using depth-specific store state expansion model θ S-E,d ), transition along a sequence of store elements if no reduction has taken place (f r d t = 0; using depthspecific store state transition model θ S-T,d ): 2 P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) def =    if f r d+1 t = 1, f r d t = 1 : P θ S-E,d (s d t | s d−1 t ) if f r d+1 t = 1, f r d t = 0 : P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) if f r d+1 t = 0, f r d t = 0 : s d t = s d t−1 (12) and possibly reduce a store element (terminate a sequence) if the store state below it has reduced (f r d+1 t = 1; using depth-specific reduction model θ R,d ): P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if f r d+1 t = 0 : r d t = r ⊥ if f r d+1 t = 1 : P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) (13) where r ⊥ is a null state resulting from the failure of an incomplete constituent to complete, and constants are defined for the edge conditions of s 0 t and r D+1 t .", "Figure 5 illustrates this model in action.", "These pushdown automaton operations are then refined for right-corner parsing 
(Schuler, 2009) , distinguishing active transitions (model θ S-T-A,d , in which an incomplete constituent is completed, but not reduced, and then immediately expanded to a 2 An indicator function · is used to denote deterministic probabilities: φ = 1 if φ is true, 0 otherwise.", "new incomplete constituent in the same store element) from awaited transitions (model θ S-T-W,d , which involve no completion): P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) def = if r d t = r ⊥ : P θ S-T-A,d (s d t | s d−1 t r d t ) if r d t = r ⊥ : P θ S-T-W,d (s d t | s d t−1 r d+1 t ) (14) P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if c r d+1 t = x t : r d t = r ⊥ if c r d+1 t = x t : P θ R-R,d (r d t | s d t−1 s d−1 t−1 ) (15) These HHMM right-corner parsing operations are then defined in terms of branch-and depth-specific PCFG probabilities θ G-R,d and θ G-L,d : 3 3 Model probabilities are also defined in terms of leftprogeny probability distribution E θ G-RL * ,d which is itself defined in terms of PCFG probabilities: are calculated by transition function δ (Eq.", "6, as defined by §4.1), but are not stored.", "Observed random variables (e 3 ..e 5 ) are shown for clarity, but are not explicitly stored in any syntactic language model state.", "E θ G-RL * ,d (cη 0 → cη0 ...) def = c η1 P θ G-R,d (cη → cη0 cη1) (16) E θ G-RL * ,d (cη k → c η0 k 0 ...) def = c η0 k E θ G-RL * ,d (cη k−1 → c η0 k ...) · c η0 k 1 P θ G-L,d (c η0 k → c η0 k 0 c η0 k 1 ) (17) E θ G-RL * ,d (cη * → cηι ...) def = ∞ k=0 E θ G-RL * ,d (cη k → cηι ...) (18) E θ G-RL * ,d (cη + → cηι ...) def = E θ G-RL * ,d (cη * → cηι ...) − E θ G-RL * ,d (cη 0 → cηι ...) (19) coder's hypothesis stacks.", "Figure 1 illustrates an excerpt from a standard phrase-based translation lattice.", "Within each decoder stack t, each hypothesis h is augmented with a syntactic language model stateτ t h .", "Each syntactic language model state is a random variable store, containing a slice of random variables from the HHMM.", "Specifically,τ t h contains those random variables s 1..D t that maintain distributions over syntactic elements.", "By maintaining these syntactic random variable stores, each hypothesis has access to the current language model probability for the partial translation ending at that hypothesis, as calculated by an incremental syntactic language model defined by the HHMM.", "Specifically, the random variable store at hypothesis h provides P(τ t h ) = P(e h 1..t , s 1..D 1..t ), where e h 1..t is the sequence of words in a partial hypothesis ending at h which contains t target words, and where there are D syntactic random variables in each random variable store (Eq.", "5).", "During stack decoding, the phrase-based decoder progressively constructs new hypotheses by extending existing hypotheses.", "New hypotheses are placed in appropriate hypothesis stacks.", "In the simplest case, a new hypothesis extends an existing hypothesis by exactly one target word.", "As the new hypothesis is constructed by extending an existing stack element, the store and reduction state random variables are processed, along with the newly hypothesized word.", "This results in a new store of syntactic random variables (Eq.", "6) that are associated with the new stack element.", "When a new hypothesis extends an existing hypothesis by more than one word, this process is first carried out for the first new word in the hypothesis.", "It is then repeated for the remaining words in the hypothesis extension.", "Once the final word in the hypothesis has been 
processed, the resulting random variable store is associated with that hypothesis.", "The random variable stores created for the non-final words in the extending hypothesis are discarded, and need not be explicitly retained.", "Figure 6 illustrates this process, showing how a syntactic language model stateτ 5 1 in a phrase-based decoding lattice is obtained from a previous syntactic language model stateτ 3 1 (from Figure 1) by parsing the target language words from a phrasebased translation option.", "Our syntactic language model is integrated into the current version of Moses .", "Results As an initial measure to compare language models, average per-word perplexity, ppl, reports how surprised a model is by test data.", "Equation 25 calculates ppl using log base b for a test set of T tokens.", "ppl = b −log b P(e 1 ...e T ) T (25) We trained the syntactic language model from §4 (HHMM) and an interpolated n-gram language model with modified Kneser-Ney smoothing (Chen and Goodman, 1998); models were trained on sections 2-21 of the Wall Street Journal (WSJ) treebank (Marcus et al., 1993 HHMM and n-gram LMs (Figure 7) .", "To show the effects of training an LM on more data, we also report perplexity results on the 5-gram LM trained for the GALE Arabic-English task using the English Gigaword corpus.", "In all cases, including the HHMM significantly reduces perplexity.", "We trained a phrase-based translation model on the full NIST Open MT08 Urdu-English translation model using the full training data.", "We trained the HHMM and n-gram LMs on the WSJ data in order to make them as similar as possible.", "During tuning, Moses was first configured to use just the n-gram LM, then configured to use both the n-gram LM and the syntactic HHMM LM.", "MERT consistently assigned positive weight to the syntactic LM feature, typically slightly less than the n-gram LM weight.", "In our integration with Moses, incorporating a syntactic language model dramatically slows the decoding process.", "Figure 8 illustrates a slowdown around three orders of magnitude.", "Although speed remains roughly linear to the size of the source sentence (ruling out exponential behavior), it is with an extremely large constant time factor.", "Due to this slowdown, we tuned the parameters using a constrained dev set (only sentences with 1-20 words), and tested using a constrained devtest set (only sentences with 1-20 words).", "Figure 9 shows a statistically significant improvement to the BLEU score when using the HHMM and the n-gram LMs together on this reduced test set.", "Discussion This paper argues that incremental syntactic languages models are a straightforward and appro-Moses LM(s) BLEU n-gram only 18.78 HHMM + n-gram 19.78 Figure 9 : Results for Ur-En devtest (only sentences with 1-20 words) with HHMM beam size of 2000 and Moses settings of distortion limit 10, stack size 200, and ttable limit 20. 
priate algorithmic fit for incorporating syntax into phrase-based statistical machine translation, since both process sentences in an incremental left-toright fashion.", "This means incremental syntactic LM scores can be calculated during the decoding process, rather than waiting until a complete sentence is posited, which is typically necessary in top-down or bottom-up parsing.", "We provided a rigorous formal definition of incremental syntactic languages models, and detailed what steps are necessary to incorporate such LMs into phrase-based decoding.", "We integrated an incremental syntactic language model into Moses.", "The translation quality significantly improved on a constrained task, and the perplexity improvements suggest that interpolating between n-gram and syntactic LMs may hold promise on larger data sets.", "The use of very large n-gram language models is typically a key ingredient in the best-performing machine translation systems (Brants et al., 2007) .", "Our n-gram model trained only on WSJ is admittedly small.", "Our future work seeks to incorporate largescale n-gram language models in conjunction with incremental syntactic language models.", "The added decoding time cost of our syntactic language model is very high.", "By increasing the beam size and distortion limit of the baseline system, future work may examine whether a baseline system with comparable runtimes can achieve comparable translation quality.", "A more efficient implementation of the HHMM parser would speed decoding and make more extensive and conclusive translation experiments possible.", "Various additional improvements could include caching the HHMM LM calculations, and exploiting properties of the right-corner transform that limit the number of decisions between successive time steps." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.3", "4", "4.1", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Parser as Syntactic Language Model in", "Incremental syntactic language model", "Incorporating a Syntactic Language Model", "Incremental Bounded-Memory Parsing with a Time Series Model", "Formal Parsing Model: Scoring Partial Translation Hypotheses", "Results", "Discussion" ] }
"GEM-SciDuet-train-1#paper-954#slide-9"
"Perplexity Results"
"Language models trained on WSJ Treebank corpus Motivation Syntactic LM Decoder Integration Questions? WSJ 5-gram + WSJ SynLM ...and n-gram model for larger English Gigaword corpus. Gigaword 5-gram + WSJ SynLM"
"Language models trained on WSJ Treebank corpus Motivation Syntactic LM Decoder Integration Questions? WSJ 5-gram + WSJ SynLM ...and n-gram model for larger English Gigaword corpus. Gigaword 5-gram + WSJ SynLM"
[]
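The slide row above summarizes the perplexity comparison between the n-gram LMs and the syntactic LM. Eq. 25 in the paper text defines average per-word perplexity as ppl = b^(-log_b P(e_1...e_T) / T). A small sketch of that computation follows, with made-up numbers; it assumes the model's total log-probability of the test set has already been computed.

```python
def perplexity(total_logprob: float, num_tokens: int, base: float = 10.0) -> float:
    """Eq. 25: ppl = base ** (-log_base P(e_1..e_T) / T), where total_logprob
    is log_base P(e_1..e_T) for the whole T-token test set."""
    return base ** (-total_logprob / num_tokens)


# Example with made-up numbers: a 1000-token test set whose total log10
# probability under some LM is -2500 gives perplexity 10 ** 2.5, about 316.2.
print(perplexity(total_logprob=-2500.0, num_tokens=1000))
```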
"GEM-SciDuet-train-1#paper-954#slide-10"
"954"
"Incremental Syntactic Language Models for Phrase-based Translation"
"This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machine translation through incremental syntactic parsing. Bottom-up and topdown parsers typically require a completed string as input. This requirement makes it difficult to incorporate them into phrase-based translation, which generates partial hypothesized translations from left-to-right. Incremental syntactic language models score sentences in a similar left-to-right fashion, and are therefore a good mechanism for incorporating syntax into phrase-based translation. We give a formal definition of one such lineartime syntactic language model, detail its relation to phrase-based decoding, and integrate the model with the Moses phrase-based translation system. We present empirical results on a constrained Urdu-English translation task that demonstrate a significant BLEU score improvement and a large decrease in perplexity."
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158 ], "paper_content_text": [ "Introduction Early work in statistical machine translation viewed translation as a noisy channel process comprised of a translation model, which functioned to posit adequate translations of source language words, and a target language model, which guided the fluency of generated target language strings (Brown et al., This research was supported by NSF CAREER/PECASE award 0447685, NSF grant IIS-0713448, and the European Commission through the EuroMatrixPlus project.", "Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the sponsors or the United States Air Force.", "Cleared for public release (Case Number 88ABW-2010-6489) on 10 Dec 2010.", "1990).", "Drawing on earlier successes in speech recognition, research in statistical machine translation has effectively used n-gram word sequence models as language models.", "Modern phrase-based translation using large scale n-gram language models generally performs well in terms of lexical choice, but still often produces ungrammatical output.", "Syntactic parsing may help produce more grammatical output by better modeling structural relationships and long-distance dependencies.", "Bottom-up and top-down parsers typically require a completed string as input; this requirement makes it difficult to incorporate these parsers into phrase-based translation, which generates hypothesized translations incrementally, from left-to-right.", "1 As a workaround, parsers can rerank the translated output of translation systems (Och et al., 2004) .", "On the other hand, incremental parsers (Roark, 2001; Henderson, 2004; Schuler et al., 2010; Huang and Sagae, 2010) process input in a straightforward left-to-right manner.", "We observe that incremental parsers, used as structured language models, provide an appropriate algorithmic match to incremental phrase-based decoding.", "We directly integrate incremental syntactic parsing into phrase-based translation.", "This approach re-exerts the role of the language model as a mechanism for encouraging syntactically fluent translations.", "The contributions of this work are as follows: • A novel method for integrating syntactic LMs into phrase-based translation ( §3) • A formal definition of an incremental parser for statistical MT that can run in linear-time ( §4) • Integration with Moses ( §5) along with empirical results for perplexity and significant translation score improvement on a constrained Urdu-English task ( §6) Related Work Neither phrase-based (Koehn et al., 2003) nor hierarchical phrase-based translation (Chiang, 2005) take explicit advantage of the syntactic structure of either source or target language.", "The translation models in these techniques define phrases as contiguous word sequences (with gaps allowed in the 
case of hierarchical phrases) which may or may not correspond to any linguistic constituent.", "Early work in statistical phrase-based translation considered whether restricting translation models to use only syntactically well-formed constituents might improve translation quality (Koehn et al., 2003) but found such restrictions failed to improve translation quality.", "Significant research has examined the extent to which syntax can be usefully incorporated into statistical tree-based translation models: string-to-tree (Yamada and Knight, 2001; Gildea, 2003; Imamura et al., 2004; Galley et al., 2004; Graehl and Knight, 2004; Melamed, 2004; Galley et al., 2006; Huang et al., 2006; Shen et al., 2008) , tree-to-string (Liu et al., 2006; Liu et al., 2007; Huang and Mi, 2010) , tree-to-tree (Abeillé et al., 1990; Shieber and Schabes, 1990; Poutsma, 1998; Eisner, 2003; Shieber, 2004; Cowan et al., 2006; Nesson et al., 2006; Zhang et al., 2007; DeNeefe et al., 2007; DeNeefe and Knight, 2009; Liu et al., 2009; Chiang, 2010) , and treelet (Ding and Palmer, 2005; Quirk et al., 2005) techniques use syntactic information to inform the translation model.", "Recent work has shown that parsing-based machine translation using syntax-augmented (Zollmann and Venugopal, 2006) hierarchical translation grammars with rich nonterminal sets can demonstrate substantial gains over hierarchical grammars for certain language pairs (Baker et al., 2009) .", "In contrast to the above tree-based translation models, our approach maintains a standard (non-syntactic) phrase-based translation model.", "Instead, we incorporate syntax into the language model.", "Traditional approaches to language models in speech recognition and statistical machine translation focus on the use of n-grams, which provide a simple finite-state model approximation of the target language.", "Chelba and Jelinek (1998) proposed that syntactic structure could be used as an alternative technique in language modeling.", "This insight has been explored in the context of speech recognition (Chelba and Jelinek, 2000; Collins et al., 2005) .", "Hassan et al.", "(2007) and use supertag n-gram LMs.", "Syntactic language models have also been explored with tree-based translation models.", "Charniak et al.", "(2003) use syntactic language models to rescore the output of a tree-based translation system.", "Post and Gildea (2008) investigate the integration of parsers as syntactic language models during binary bracketing transduction translation (Wu, 1997) ; under these conditions, both syntactic phrase-structure and dependency parsing language models were found to improve oracle-best translations, but did not improve actual translation results.", "Post and Gildea (2009) use tree substitution grammar parsing for language modeling, but do not use this language model in a translation system.", "Our work, in contrast to the above approaches, explores the use of incremental syntactic language models in conjunction with phrase-based translation models.", "Our syntactic language model fits into the family of linear-time dynamic programming parsers described in (Huang and Sagae, 2010) .", "Like (Galley and Manning, 2009 ) our work implements an incremental syntactic language model; our approach differs by calculating syntactic LM scores over all available phrase-structure parses at each hypothesis instead of the 1-best dependency parse.", "The syntax-driven reordering model of Ge (2010) uses syntax-driven features to influence word order within standard phrase-based 
translation.", "The syntactic cohesion features of Cherry (2008) encourages the use of syntactically well-formed translation phrases.", "These approaches are fully orthogonal to our proposed incremental syntactic language model, and could be applied in concert with our work.", "Parser as Syntactic Language Model in Phrase-Based Translation Parsing is the task of selecting the representationτ (typically a tree) that best models the structure of s τ 0 s thẽ τ 1 1 s that τ 1 2 s president τ 1 3 .", ".", ".", "the president τ 2 1 that president τ 2 2 president Fridaỹ τ 2 3 .", ".", ".", "president meets τ 3 1 Obama met τ 3 2 .", ".", ".", "Figure 1 : Partial decoding lattice for standard phrase-based decoding stack algorithm translating the German sentence Der Präsident trifft am Freitag den Vorstand.", "Each node h in decoding stack t represents the application of a translation option, and includes the source sentence coverage vector, target language ngram state, and syntactic language model stateτ t h .", "Hypothesis combination is also shown, indicating where lattice paths with identical n-gram histories converge.", "We use the English translation The president meets the board on Friday as a running example throughout all Figures.", "sentence e, out of all such possible representations τ .", "This set of representations may be all phrase structure trees or all dependency trees allowed by the parsing model.", "Typically, treeτ is taken to be: τ = argmax τ P(τ | e) (1) We define a syntactic language model P(e) based on the total probability mass over all possible trees for string e. This is shown in Equation 2 and decomposed in Equation 3.", "P(e) = τ ∈τ P(τ, e) (2) P(e) = τ ∈τ P(e | τ )P(τ ) (3) Incremental syntactic language model An incremental parser processes each token of input sequentially from the beginning of a sentence to the end, rather than processing input in a top-down (Earley, 1968) or bottom-up (Cocke and Schwartz, 1970; Kasami, 1965; Younger, 1967) fashion.", "After processing the tth token in string e, an incremental parser has some internal representation of possible hypothesized (incomplete) trees, τ t .", "The syntactic language model probability of a partial sentence e 1 ...e t is defined: P(e 1 ...e t ) = τ ∈τt P(e 1 ...e t | τ )P(τ ) (4) In practice, a parser may constrain the set of trees under consideration toτ t , that subset of analyses or partial analyses that remains after any pruning is performed.", "An incremental syntactic language model can then be defined by a probability mass function (Equation 5) and a transition function δ (Equation 6 ).", "The role of δ is explained in §3.3 below.", "Any parser which implements these two functions can serve as a syntactic language model.", "P(e 1 ...e t ) ≈ P(τ t ) = τ ∈τ t P(e 1 ...e t | τ )P(τ ) (5) δ(e t ,τ t−1 ) →τ t (6) 3.2 Decoding in phrase-based translation Given a source language input sentence f , a trained source-to-target translation model, and a target language model, the task of translation is to find the maximally probable translationê using a linear combination of j feature functions h weighted according to tuned parameters λ (Och and Ney, 2002) .", "e = argmax e exp( j λ j h j (e, f )) (7) Phrase-based translation constructs a set of translation options -hypothesized translations for contiguous portions of the source sentence -from a trained phrase table, then incrementally constructs a lattice of partial target translations (Koehn, 2010) .", "To prune the search space, lattice nodes are organized into 
beam stacks (Jelinek, 1969) according to the number of source words translated.", "An n-gram language model history is also maintained at each node in the translation lattice.", "The search space is further trimmed with hypothesis recombination, which collapses lattice nodes that share a common coverage vector and n-gram state.", "Incorporating a Syntactic Language Model Phrase-based translation produces target language words in an incremental left-to-right fashion, generating words at the beginning of a translation first and words at the end of a translation last.", "Similarly, incremental parsers process sentences in an incremental fashion, analyzing words at the beginning of a sentence first and words at the end of a sentence last.", "As such, an incremental parser with transition function δ can be incorporated into the phrase-based decoding process in a straightforward manner.", "Each node in the translation lattice is augmented with a syntactic language model stateτ t .", "The hypothesis at the root of the translation lattice is initialized withτ 0 , representing the internal state of the incremental parser before any input words are processed.", "The phrase-based translation decoding process adds nodes to the lattice; each new node contains one or more target language words.", "Each node contains a backpointer to its parent node, in whichτ t−1 is stored.", "Given a new target language word e t andτ t−1 , the incremental parser's transition function δ calculatesτ t .", "Figure 1 a sample phrase-based decoding lattice where each translation lattice node is augmented with syntactic language model stateτ t .", "In phrase-based translation, many translation lattice nodes represent multi-word target language phrases.", "For such translation lattice nodes, δ will be called once for each newly hypothesized target language word in the node.", "Only the final syntactic language model state in such sequences need be stored in the translation lattice node.", "Incremental Bounded-Memory Parsing with a Time Series Model Having defined the framework by which any incremental parser may be incorporated into phrasebased translation, we now formally define a specific incremental parser for use in our experiments.", "The parser must process target language words incrementally as the phrase-based decoder adds hypotheses to the translation lattice.", "To facilitate this incremental processing, ordinary phrase-structure trees can be transformed into right-corner recur- r 1 t−1 r 2 t−1 r 3 t−1 s 1 t−1 s 2 t−1 s 3 t−1 r 1 t r 2 t r 3 t s 1 t s 2 t s 3 t e t−1 e t .", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", "Figure 4 : Graphical representation of the dependency structure in a standard Hierarchic Hidden Markov Model with D = 3 hidden levels that can be used to parse syntax.", "Circles denote random variables, and edges denote conditional dependencies.", "Shaded circles denote variables with observed values.", "sive phrase structure trees using the tree transforms in Schuler et al.", "(2010) .", "Constituent nonterminals in right-corner transformed trees take the form of incomplete constituents c η /c ηι consisting of an 'active' constituent c η lacking an 'awaited' constituent c ηι yet to come, similar to non-constituent categories in a Combinatory Categorial Grammar (Ades and Steedman, 1982; Steedman, 2000) .", "As an example, the parser might consider VP/NN as a possible category for input \"meets the\".", "A sample phrase structure tree is shown before and after the right-corner transform in 
Figures 2 and 3.", "Our parser operates over a right-corner transformed probabilistic context-free grammar (PCFG).", "Parsing runs in linear time on the length of the input.", "This model of incremental parsing is implemented as a Hierarchical Hidden Markov Model (HHMM) (Murphy and Paskin, 2001) , and is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The parser runs in O(n) time, where n is the number of words in the input.", "This model is shown graphically in Figure 4 and formally defined in §4.1 below.", "The incremental parser assigns a probability (Eq.", "5) for a partial target language hypothesis, using a bounded store of incomplete constituents c η /c ηι .", "The phrase-based decoder uses this probability value as the syntactic language model feature score.", "Formal Parsing Model: Scoring Partial Translation Hypotheses This model is essentially an extension of an HHMM, which obtains a most likely sequence of hidden store states,ŝ 1..D 1..T , of some length T and some maximum depth D, given a sequence of observed tokens (e.g.", "generated target language words), e 1..T , using HHMM state transition model θ A and observation symbol model θ B (Rabiner, 1990) : s 1..D 1..T def = argmax s 1..D 1..T T t=1 P θ A (s 1..D t | s 1..D t−1 )·P θ B (e t | s 1..D t ) (8) The HHMM parser is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The model generates each successive store (using store model θ S ) only after considering whether each nested sequence of incomplete constituents has completed and reduced (using reduction model θ R ): P θ A (s 1..D t | s 1..D t−1 ) def = r 1 t ..r D t D d=1 P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) · P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) (9) Store elements are defined to contain only the active (c η ) and awaited (c ηι ) constituent categories necessary to compute an incomplete constituent probability: s d t def = c η , c ηι (10) Reduction states are defined to contain only the complete constituent category c r d t necessary to compute an inside likelihood probability, as well as a flag f r d t indicating whether a reduction has taken place (to end a sequence of incomplete constituents): r d t def = c r d t , f r d t (11) The model probabilities for these store elements and reduction states can then be defined (from Murphy and Paskin 2001) to expand a new incomplete constituent after a reduction has taken place (f r d t = 1; using depth-specific store state expansion model θ S-E,d ), transition along a sequence of store elements if no reduction has taken place (f r d t = 0; using depthspecific store state transition model θ S-T,d ): 2 P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) def =    if f r d+1 t = 1, f r d t = 1 : P θ S-E,d (s d t | s d−1 t ) if f r d+1 t = 1, f r d t = 0 : P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) if f r d+1 t = 0, f r d t = 0 : s d t = s d t−1 (12) and possibly reduce a store element (terminate a sequence) if the store state below it has reduced (f r d+1 t = 1; using depth-specific reduction model θ R,d ): P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if f r d+1 t = 0 : r d t = r ⊥ if f r d+1 t = 1 : P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) (13) where r ⊥ is a null state resulting from the failure of an incomplete constituent to complete, and constants are defined for the edge conditions of s 0 t and r D+1 t .", "Figure 5 illustrates this model in action.", "These pushdown automaton operations are then refined for right-corner parsing 
(Schuler, 2009) , distinguishing active transitions (model θ S-T-A,d , in which an incomplete constituent is completed, but not reduced, and then immediately expanded to a 2 An indicator function · is used to denote deterministic probabilities: φ = 1 if φ is true, 0 otherwise.", "new incomplete constituent in the same store element) from awaited transitions (model θ S-T-W,d , which involve no completion): P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) def = if r d t = r ⊥ : P θ S-T-A,d (s d t | s d−1 t r d t ) if r d t = r ⊥ : P θ S-T-W,d (s d t | s d t−1 r d+1 t ) (14) P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if c r d+1 t = x t : r d t = r ⊥ if c r d+1 t = x t : P θ R-R,d (r d t | s d t−1 s d−1 t−1 ) (15) These HHMM right-corner parsing operations are then defined in terms of branch-and depth-specific PCFG probabilities θ G-R,d and θ G-L,d : 3 3 Model probabilities are also defined in terms of leftprogeny probability distribution E θ G-RL * ,d which is itself defined in terms of PCFG probabilities: are calculated by transition function δ (Eq.", "6, as defined by §4.1), but are not stored.", "Observed random variables (e 3 ..e 5 ) are shown for clarity, but are not explicitly stored in any syntactic language model state.", "E θ G-RL * ,d (cη 0 → cη0 ...) def = c η1 P θ G-R,d (cη → cη0 cη1) (16) E θ G-RL * ,d (cη k → c η0 k 0 ...) def = c η0 k E θ G-RL * ,d (cη k−1 → c η0 k ...) · c η0 k 1 P θ G-L,d (c η0 k → c η0 k 0 c η0 k 1 ) (17) E θ G-RL * ,d (cη * → cηι ...) def = ∞ k=0 E θ G-RL * ,d (cη k → cηι ...) (18) E θ G-RL * ,d (cη + → cηι ...) def = E θ G-RL * ,d (cη * → cηι ...) − E θ G-RL * ,d (cη 0 → cηι ...) (19) coder's hypothesis stacks.", "Figure 1 illustrates an excerpt from a standard phrase-based translation lattice.", "Within each decoder stack t, each hypothesis h is augmented with a syntactic language model stateτ t h .", "Each syntactic language model state is a random variable store, containing a slice of random variables from the HHMM.", "Specifically,τ t h contains those random variables s 1..D t that maintain distributions over syntactic elements.", "By maintaining these syntactic random variable stores, each hypothesis has access to the current language model probability for the partial translation ending at that hypothesis, as calculated by an incremental syntactic language model defined by the HHMM.", "Specifically, the random variable store at hypothesis h provides P(τ t h ) = P(e h 1..t , s 1..D 1..t ), where e h 1..t is the sequence of words in a partial hypothesis ending at h which contains t target words, and where there are D syntactic random variables in each random variable store (Eq.", "5).", "During stack decoding, the phrase-based decoder progressively constructs new hypotheses by extending existing hypotheses.", "New hypotheses are placed in appropriate hypothesis stacks.", "In the simplest case, a new hypothesis extends an existing hypothesis by exactly one target word.", "As the new hypothesis is constructed by extending an existing stack element, the store and reduction state random variables are processed, along with the newly hypothesized word.", "This results in a new store of syntactic random variables (Eq.", "6) that are associated with the new stack element.", "When a new hypothesis extends an existing hypothesis by more than one word, this process is first carried out for the first new word in the hypothesis.", "It is then repeated for the remaining words in the hypothesis extension.", "Once the final word in the hypothesis has been 
processed, the resulting random variable store is associated with that hypothesis.", "The random variable stores created for the non-final words in the extending hypothesis are discarded, and need not be explicitly retained.", "Figure 6 illustrates this process, showing how a syntactic language model stateτ 5 1 in a phrase-based decoding lattice is obtained from a previous syntactic language model stateτ 3 1 (from Figure 1) by parsing the target language words from a phrasebased translation option.", "Our syntactic language model is integrated into the current version of Moses .", "Results As an initial measure to compare language models, average per-word perplexity, ppl, reports how surprised a model is by test data.", "Equation 25 calculates ppl using log base b for a test set of T tokens.", "ppl = b −log b P(e 1 ...e T ) T (25) We trained the syntactic language model from §4 (HHMM) and an interpolated n-gram language model with modified Kneser-Ney smoothing (Chen and Goodman, 1998); models were trained on sections 2-21 of the Wall Street Journal (WSJ) treebank (Marcus et al., 1993 HHMM and n-gram LMs (Figure 7) .", "To show the effects of training an LM on more data, we also report perplexity results on the 5-gram LM trained for the GALE Arabic-English task using the English Gigaword corpus.", "In all cases, including the HHMM significantly reduces perplexity.", "We trained a phrase-based translation model on the full NIST Open MT08 Urdu-English translation model using the full training data.", "We trained the HHMM and n-gram LMs on the WSJ data in order to make them as similar as possible.", "During tuning, Moses was first configured to use just the n-gram LM, then configured to use both the n-gram LM and the syntactic HHMM LM.", "MERT consistently assigned positive weight to the syntactic LM feature, typically slightly less than the n-gram LM weight.", "In our integration with Moses, incorporating a syntactic language model dramatically slows the decoding process.", "Figure 8 illustrates a slowdown around three orders of magnitude.", "Although speed remains roughly linear to the size of the source sentence (ruling out exponential behavior), it is with an extremely large constant time factor.", "Due to this slowdown, we tuned the parameters using a constrained dev set (only sentences with 1-20 words), and tested using a constrained devtest set (only sentences with 1-20 words).", "Figure 9 shows a statistically significant improvement to the BLEU score when using the HHMM and the n-gram LMs together on this reduced test set.", "Discussion This paper argues that incremental syntactic languages models are a straightforward and appro-Moses LM(s) BLEU n-gram only 18.78 HHMM + n-gram 19.78 Figure 9 : Results for Ur-En devtest (only sentences with 1-20 words) with HHMM beam size of 2000 and Moses settings of distortion limit 10, stack size 200, and ttable limit 20. 
priate algorithmic fit for incorporating syntax into phrase-based statistical machine translation, since both process sentences in an incremental left-toright fashion.", "This means incremental syntactic LM scores can be calculated during the decoding process, rather than waiting until a complete sentence is posited, which is typically necessary in top-down or bottom-up parsing.", "We provided a rigorous formal definition of incremental syntactic languages models, and detailed what steps are necessary to incorporate such LMs into phrase-based decoding.", "We integrated an incremental syntactic language model into Moses.", "The translation quality significantly improved on a constrained task, and the perplexity improvements suggest that interpolating between n-gram and syntactic LMs may hold promise on larger data sets.", "The use of very large n-gram language models is typically a key ingredient in the best-performing machine translation systems (Brants et al., 2007) .", "Our n-gram model trained only on WSJ is admittedly small.", "Our future work seeks to incorporate largescale n-gram language models in conjunction with incremental syntactic language models.", "The added decoding time cost of our syntactic language model is very high.", "By increasing the beam size and distortion limit of the baseline system, future work may examine whether a baseline system with comparable runtimes can achieve comparable translation quality.", "A more efficient implementation of the HHMM parser would speed decoding and make more extensive and conclusive translation experiments possible.", "Various additional improvements could include caching the HHMM LM calculations, and exploiting properties of the right-corner transform that limit the number of decisions between successive time steps." ] }
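The paper_content above (§3) defines an incremental syntactic language model by just two operations: a probability mass function over the pruned set of partial analyses (Eq. 5) and a transition function δ that maps a parser state plus the next target word to a new state (Eq. 6); §3.3 adds that, for a multi-word translation option, δ is applied once per new word and only the final state is stored in the lattice node. Below is a minimal sketch of that interface; the names (ParserState, IncrementalSyntacticLM, delta, extend_hypothesis) and the fixed per-word cost are illustrative assumptions, not the paper's HHMM implementation.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical parser state: a pruned set of partial analyses with their log
# probabilities, standing in for the tau-tilde_t of Eq. 5.
@dataclass(frozen=True)
class ParserState:
    analyses: Tuple[Tuple[str, float], ...]

    def logprob(self) -> float:
        # log P(e_1..e_t), approximated by the mass over surviving analyses (Eq. 5).
        return math.log(sum(math.exp(lp) for _, lp in self.analyses))

class IncrementalSyntacticLM:
    """Any parser exposing these two operations can serve as the syntactic LM."""

    def initial_state(self) -> ParserState:
        # tau-tilde_0: the parser state before any target word is processed.
        return ParserState(analyses=(("ROOT", 0.0),))

    def delta(self, word: str, state: ParserState) -> ParserState:
        # Transition function of Eq. 6. A real parser would expand, score and
        # prune analyses here; this placeholder just charges a fixed log cost.
        assumed_word_logprob = -5.0  # toy value, not a trained model
        return ParserState(
            analyses=tuple((a, lp + assumed_word_logprob) for a, lp in state.analyses)
        )

def extend_hypothesis(lm: IncrementalSyntacticLM,
                      prev_state: ParserState,
                      phrase: List[str]) -> Tuple[ParserState, float]:
    """Apply delta once per new target word of a multi-word translation option;
    only the final syntactic LM state is kept for the new lattice node (Sec. 3.3)."""
    state = prev_state
    for word in phrase:
        state = lm.delta(word, state)  # intermediate states need not be stored
    return state, state.logprob()      # syntactic LM feature score for the hypothesis

lm = IncrementalSyntacticLM()
state, score = extend_hypothesis(lm, lm.initial_state(), ["the", "president"])
print(score)  # roughly -10.0 given the toy per-word cost above
```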
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.3", "4", "4.1", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Parser as Syntactic Language Model in", "Incremental syntactic language model", "Incorporating a Syntactic Language Model", "Incremental Bounded-Memory Parsing with a Time Series Model", "Formal Parsing Model: Scoring Partial Translation Hypotheses", "Results", "Discussion" ] }
"GEM-SciDuet-train-1#paper-954#slide-10"
"Summary"
"Straightforward general framework for incorporating any Incremental Syntactic LM into Phrase-based Translation We used an Incremental HHMM Parser as Syntactic LM Syntactic LM shows substantial decrease in perplexity on out-of-domain data over n-gram LM when trained on same data Syntactic LM interpolated with n-gram LM shows even greater decrease in perplexity on both in-domain and out-of-domain data, even when n-gram LM is trained on substantially larger corpus +1 BLEU on Urdu-English task with Syntactic LM All code is open source and integrated into Moses Motivation Syntactic LM Decoder Integration Results"
"Straightforward general framework for incorporating any Incremental Syntactic LM into Phrase-based Translation We used an Incremental HHMM Parser as Syntactic LM Syntactic LM shows substantial decrease in perplexity on out-of-domain data over n-gram LM when trained on same data Syntactic LM interpolated with n-gram LM shows even greater decrease in perplexity on both in-domain and out-of-domain data, even when n-gram LM is trained on substantially larger corpus +1 BLEU on Urdu-English task with Syntactic LM All code is open source and integrated into Moses Motivation Syntactic LM Decoder Integration Results"
[]
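The Summary slide above reports using the syntactic HHMM LM alongside the n-gram LM, and the paper text notes that MERT gave the syntactic LM a positive weight slightly below the n-gram LM's. In the log-linear model of Eq. 7 both LMs are simply additional weighted features; the sketch below shows that scoring, with entirely made-up feature values and weights.

```python
from typing import Dict

def hypothesis_score(features: Dict[str, float], weights: Dict[str, float]) -> float:
    """Log-linear scoring of Eq. 7: the decoder keeps the translation maximizing
    sum_j lambda_j * h_j(e, f); exponentiating, as in Eq. 7, does not change the argmax."""
    return sum(weights[name] * h for name, h in features.items())

# Illustrative numbers only: log-probability-style feature values, and weights
# mimicking the observation that the tuned syntactic LM weight was positive and
# slightly smaller than the n-gram LM weight.
weights = {"translation_model": 0.9, "ngram_lm": 0.5, "syntactic_lm": 0.4, "word_penalty": -0.3}
features = {"translation_model": -12.4, "ngram_lm": -35.1, "syntactic_lm": -40.7, "word_penalty": 9.0}
print(hypothesis_score(features, weights))
```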
"GEM-SciDuet-train-1#paper-954#slide-11"
"954"
"Incremental Syntactic Language Models for Phrase-based Translation"
"This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machine translation through incremental syntactic parsing. Bottom-up and topdown parsers typically require a completed string as input. This requirement makes it difficult to incorporate them into phrase-based translation, which generates partial hypothesized translations from left-to-right. Incremental syntactic language models score sentences in a similar left-to-right fashion, and are therefore a good mechanism for incorporating syntax into phrase-based translation. We give a formal definition of one such lineartime syntactic language model, detail its relation to phrase-based decoding, and integrate the model with the Moses phrase-based translation system. We present empirical results on a constrained Urdu-English translation task that demonstrate a significant BLEU score improvement and a large decrease in perplexity."
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158 ], "paper_content_text": [ "Introduction Early work in statistical machine translation viewed translation as a noisy channel process comprised of a translation model, which functioned to posit adequate translations of source language words, and a target language model, which guided the fluency of generated target language strings (Brown et al., This research was supported by NSF CAREER/PECASE award 0447685, NSF grant IIS-0713448, and the European Commission through the EuroMatrixPlus project.", "Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the sponsors or the United States Air Force.", "Cleared for public release (Case Number 88ABW-2010-6489) on 10 Dec 2010.", "1990).", "Drawing on earlier successes in speech recognition, research in statistical machine translation has effectively used n-gram word sequence models as language models.", "Modern phrase-based translation using large scale n-gram language models generally performs well in terms of lexical choice, but still often produces ungrammatical output.", "Syntactic parsing may help produce more grammatical output by better modeling structural relationships and long-distance dependencies.", "Bottom-up and top-down parsers typically require a completed string as input; this requirement makes it difficult to incorporate these parsers into phrase-based translation, which generates hypothesized translations incrementally, from left-to-right.", "1 As a workaround, parsers can rerank the translated output of translation systems (Och et al., 2004) .", "On the other hand, incremental parsers (Roark, 2001; Henderson, 2004; Schuler et al., 2010; Huang and Sagae, 2010) process input in a straightforward left-to-right manner.", "We observe that incremental parsers, used as structured language models, provide an appropriate algorithmic match to incremental phrase-based decoding.", "We directly integrate incremental syntactic parsing into phrase-based translation.", "This approach re-exerts the role of the language model as a mechanism for encouraging syntactically fluent translations.", "The contributions of this work are as follows: • A novel method for integrating syntactic LMs into phrase-based translation ( §3) • A formal definition of an incremental parser for statistical MT that can run in linear-time ( §4) • Integration with Moses ( §5) along with empirical results for perplexity and significant translation score improvement on a constrained Urdu-English task ( §6) Related Work Neither phrase-based (Koehn et al., 2003) nor hierarchical phrase-based translation (Chiang, 2005) take explicit advantage of the syntactic structure of either source or target language.", "The translation models in these techniques define phrases as contiguous word sequences (with gaps allowed in the 
case of hierarchical phrases) which may or may not correspond to any linguistic constituent.", "Early work in statistical phrase-based translation considered whether restricting translation models to use only syntactically well-formed constituents might improve translation quality (Koehn et al., 2003) but found such restrictions failed to improve translation quality.", "Significant research has examined the extent to which syntax can be usefully incorporated into statistical tree-based translation models: string-to-tree (Yamada and Knight, 2001; Gildea, 2003; Imamura et al., 2004; Galley et al., 2004; Graehl and Knight, 2004; Melamed, 2004; Galley et al., 2006; Huang et al., 2006; Shen et al., 2008) , tree-to-string (Liu et al., 2006; Liu et al., 2007; Huang and Mi, 2010) , tree-to-tree (Abeillé et al., 1990; Shieber and Schabes, 1990; Poutsma, 1998; Eisner, 2003; Shieber, 2004; Cowan et al., 2006; Nesson et al., 2006; Zhang et al., 2007; DeNeefe et al., 2007; DeNeefe and Knight, 2009; Liu et al., 2009; Chiang, 2010) , and treelet (Ding and Palmer, 2005; Quirk et al., 2005) techniques use syntactic information to inform the translation model.", "Recent work has shown that parsing-based machine translation using syntax-augmented (Zollmann and Venugopal, 2006) hierarchical translation grammars with rich nonterminal sets can demonstrate substantial gains over hierarchical grammars for certain language pairs (Baker et al., 2009) .", "In contrast to the above tree-based translation models, our approach maintains a standard (non-syntactic) phrase-based translation model.", "Instead, we incorporate syntax into the language model.", "Traditional approaches to language models in speech recognition and statistical machine translation focus on the use of n-grams, which provide a simple finite-state model approximation of the target language.", "Chelba and Jelinek (1998) proposed that syntactic structure could be used as an alternative technique in language modeling.", "This insight has been explored in the context of speech recognition (Chelba and Jelinek, 2000; Collins et al., 2005) .", "Hassan et al.", "(2007) and use supertag n-gram LMs.", "Syntactic language models have also been explored with tree-based translation models.", "Charniak et al.", "(2003) use syntactic language models to rescore the output of a tree-based translation system.", "Post and Gildea (2008) investigate the integration of parsers as syntactic language models during binary bracketing transduction translation (Wu, 1997) ; under these conditions, both syntactic phrase-structure and dependency parsing language models were found to improve oracle-best translations, but did not improve actual translation results.", "Post and Gildea (2009) use tree substitution grammar parsing for language modeling, but do not use this language model in a translation system.", "Our work, in contrast to the above approaches, explores the use of incremental syntactic language models in conjunction with phrase-based translation models.", "Our syntactic language model fits into the family of linear-time dynamic programming parsers described in (Huang and Sagae, 2010) .", "Like (Galley and Manning, 2009 ) our work implements an incremental syntactic language model; our approach differs by calculating syntactic LM scores over all available phrase-structure parses at each hypothesis instead of the 1-best dependency parse.", "The syntax-driven reordering model of Ge (2010) uses syntax-driven features to influence word order within standard phrase-based 
translation.", "The syntactic cohesion features of Cherry (2008) encourages the use of syntactically well-formed translation phrases.", "These approaches are fully orthogonal to our proposed incremental syntactic language model, and could be applied in concert with our work.", "Parser as Syntactic Language Model in Phrase-Based Translation Parsing is the task of selecting the representationτ (typically a tree) that best models the structure of s τ 0 s thẽ τ 1 1 s that τ 1 2 s president τ 1 3 .", ".", ".", "the president τ 2 1 that president τ 2 2 president Fridaỹ τ 2 3 .", ".", ".", "president meets τ 3 1 Obama met τ 3 2 .", ".", ".", "Figure 1 : Partial decoding lattice for standard phrase-based decoding stack algorithm translating the German sentence Der Präsident trifft am Freitag den Vorstand.", "Each node h in decoding stack t represents the application of a translation option, and includes the source sentence coverage vector, target language ngram state, and syntactic language model stateτ t h .", "Hypothesis combination is also shown, indicating where lattice paths with identical n-gram histories converge.", "We use the English translation The president meets the board on Friday as a running example throughout all Figures.", "sentence e, out of all such possible representations τ .", "This set of representations may be all phrase structure trees or all dependency trees allowed by the parsing model.", "Typically, treeτ is taken to be: τ = argmax τ P(τ | e) (1) We define a syntactic language model P(e) based on the total probability mass over all possible trees for string e. This is shown in Equation 2 and decomposed in Equation 3.", "P(e) = τ ∈τ P(τ, e) (2) P(e) = τ ∈τ P(e | τ )P(τ ) (3) Incremental syntactic language model An incremental parser processes each token of input sequentially from the beginning of a sentence to the end, rather than processing input in a top-down (Earley, 1968) or bottom-up (Cocke and Schwartz, 1970; Kasami, 1965; Younger, 1967) fashion.", "After processing the tth token in string e, an incremental parser has some internal representation of possible hypothesized (incomplete) trees, τ t .", "The syntactic language model probability of a partial sentence e 1 ...e t is defined: P(e 1 ...e t ) = τ ∈τt P(e 1 ...e t | τ )P(τ ) (4) In practice, a parser may constrain the set of trees under consideration toτ t , that subset of analyses or partial analyses that remains after any pruning is performed.", "An incremental syntactic language model can then be defined by a probability mass function (Equation 5) and a transition function δ (Equation 6 ).", "The role of δ is explained in §3.3 below.", "Any parser which implements these two functions can serve as a syntactic language model.", "P(e 1 ...e t ) ≈ P(τ t ) = τ ∈τ t P(e 1 ...e t | τ )P(τ ) (5) δ(e t ,τ t−1 ) →τ t (6) 3.2 Decoding in phrase-based translation Given a source language input sentence f , a trained source-to-target translation model, and a target language model, the task of translation is to find the maximally probable translationê using a linear combination of j feature functions h weighted according to tuned parameters λ (Och and Ney, 2002) .", "e = argmax e exp( j λ j h j (e, f )) (7) Phrase-based translation constructs a set of translation options -hypothesized translations for contiguous portions of the source sentence -from a trained phrase table, then incrementally constructs a lattice of partial target translations (Koehn, 2010) .", "To prune the search space, lattice nodes are organized into 
beam stacks (Jelinek, 1969) according to the number of source words translated.", "An n-gram language model history is also maintained at each node in the translation lattice.", "The search space is further trimmed with hypothesis recombination, which collapses lattice nodes that share a common coverage vector and n-gram state.", "Incorporating a Syntactic Language Model Phrase-based translation produces target language words in an incremental left-to-right fashion, generating words at the beginning of a translation first and words at the end of a translation last.", "Similarly, incremental parsers process sentences in an incremental fashion, analyzing words at the beginning of a sentence first and words at the end of a sentence last.", "As such, an incremental parser with transition function δ can be incorporated into the phrase-based decoding process in a straightforward manner.", "Each node in the translation lattice is augmented with a syntactic language model stateτ t .", "The hypothesis at the root of the translation lattice is initialized withτ 0 , representing the internal state of the incremental parser before any input words are processed.", "The phrase-based translation decoding process adds nodes to the lattice; each new node contains one or more target language words.", "Each node contains a backpointer to its parent node, in whichτ t−1 is stored.", "Given a new target language word e t andτ t−1 , the incremental parser's transition function δ calculatesτ t .", "Figure 1 a sample phrase-based decoding lattice where each translation lattice node is augmented with syntactic language model stateτ t .", "In phrase-based translation, many translation lattice nodes represent multi-word target language phrases.", "For such translation lattice nodes, δ will be called once for each newly hypothesized target language word in the node.", "Only the final syntactic language model state in such sequences need be stored in the translation lattice node.", "Incremental Bounded-Memory Parsing with a Time Series Model Having defined the framework by which any incremental parser may be incorporated into phrasebased translation, we now formally define a specific incremental parser for use in our experiments.", "The parser must process target language words incrementally as the phrase-based decoder adds hypotheses to the translation lattice.", "To facilitate this incremental processing, ordinary phrase-structure trees can be transformed into right-corner recur- r 1 t−1 r 2 t−1 r 3 t−1 s 1 t−1 s 2 t−1 s 3 t−1 r 1 t r 2 t r 3 t s 1 t s 2 t s 3 t e t−1 e t .", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", "Figure 4 : Graphical representation of the dependency structure in a standard Hierarchic Hidden Markov Model with D = 3 hidden levels that can be used to parse syntax.", "Circles denote random variables, and edges denote conditional dependencies.", "Shaded circles denote variables with observed values.", "sive phrase structure trees using the tree transforms in Schuler et al.", "(2010) .", "Constituent nonterminals in right-corner transformed trees take the form of incomplete constituents c η /c ηι consisting of an 'active' constituent c η lacking an 'awaited' constituent c ηι yet to come, similar to non-constituent categories in a Combinatory Categorial Grammar (Ades and Steedman, 1982; Steedman, 2000) .", "As an example, the parser might consider VP/NN as a possible category for input \"meets the\".", "A sample phrase structure tree is shown before and after the right-corner transform in 
Figures 2 and 3.", "Our parser operates over a right-corner transformed probabilistic context-free grammar (PCFG).", "Parsing runs in linear time on the length of the input.", "This model of incremental parsing is implemented as a Hierarchical Hidden Markov Model (HHMM) (Murphy and Paskin, 2001) , and is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The parser runs in O(n) time, where n is the number of words in the input.", "This model is shown graphically in Figure 4 and formally defined in §4.1 below.", "The incremental parser assigns a probability (Eq.", "5) for a partial target language hypothesis, using a bounded store of incomplete constituents c η /c ηι .", "The phrase-based decoder uses this probability value as the syntactic language model feature score.", "Formal Parsing Model: Scoring Partial Translation Hypotheses This model is essentially an extension of an HHMM, which obtains a most likely sequence of hidden store states,ŝ 1..D 1..T , of some length T and some maximum depth D, given a sequence of observed tokens (e.g.", "generated target language words), e 1..T , using HHMM state transition model θ A and observation symbol model θ B (Rabiner, 1990) : s 1..D 1..T def = argmax s 1..D 1..T T t=1 P θ A (s 1..D t | s 1..D t−1 )·P θ B (e t | s 1..D t ) (8) The HHMM parser is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The model generates each successive store (using store model θ S ) only after considering whether each nested sequence of incomplete constituents has completed and reduced (using reduction model θ R ): P θ A (s 1..D t | s 1..D t−1 ) def = r 1 t ..r D t D d=1 P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) · P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) (9) Store elements are defined to contain only the active (c η ) and awaited (c ηι ) constituent categories necessary to compute an incomplete constituent probability: s d t def = c η , c ηι (10) Reduction states are defined to contain only the complete constituent category c r d t necessary to compute an inside likelihood probability, as well as a flag f r d t indicating whether a reduction has taken place (to end a sequence of incomplete constituents): r d t def = c r d t , f r d t (11) The model probabilities for these store elements and reduction states can then be defined (from Murphy and Paskin 2001) to expand a new incomplete constituent after a reduction has taken place (f r d t = 1; using depth-specific store state expansion model θ S-E,d ), transition along a sequence of store elements if no reduction has taken place (f r d t = 0; using depthspecific store state transition model θ S-T,d ): 2 P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) def =    if f r d+1 t = 1, f r d t = 1 : P θ S-E,d (s d t | s d−1 t ) if f r d+1 t = 1, f r d t = 0 : P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) if f r d+1 t = 0, f r d t = 0 : s d t = s d t−1 (12) and possibly reduce a store element (terminate a sequence) if the store state below it has reduced (f r d+1 t = 1; using depth-specific reduction model θ R,d ): P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if f r d+1 t = 0 : r d t = r ⊥ if f r d+1 t = 1 : P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) (13) where r ⊥ is a null state resulting from the failure of an incomplete constituent to complete, and constants are defined for the edge conditions of s 0 t and r D+1 t .", "Figure 5 illustrates this model in action.", "These pushdown automaton operations are then refined for right-corner parsing 
(Schuler, 2009) , distinguishing active transitions (model θ S-T-A,d , in which an incomplete constituent is completed, but not reduced, and then immediately expanded to a 2 An indicator function · is used to denote deterministic probabilities: φ = 1 if φ is true, 0 otherwise.", "new incomplete constituent in the same store element) from awaited transitions (model θ S-T-W,d , which involve no completion): P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) def = if r d t = r ⊥ : P θ S-T-A,d (s d t | s d−1 t r d t ) if r d t = r ⊥ : P θ S-T-W,d (s d t | s d t−1 r d+1 t ) (14) P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if c r d+1 t = x t : r d t = r ⊥ if c r d+1 t = x t : P θ R-R,d (r d t | s d t−1 s d−1 t−1 ) (15) These HHMM right-corner parsing operations are then defined in terms of branch-and depth-specific PCFG probabilities θ G-R,d and θ G-L,d : 3 3 Model probabilities are also defined in terms of leftprogeny probability distribution E θ G-RL * ,d which is itself defined in terms of PCFG probabilities: are calculated by transition function δ (Eq.", "6, as defined by §4.1), but are not stored.", "Observed random variables (e 3 ..e 5 ) are shown for clarity, but are not explicitly stored in any syntactic language model state.", "E θ G-RL * ,d (cη 0 → cη0 ...) def = c η1 P θ G-R,d (cη → cη0 cη1) (16) E θ G-RL * ,d (cη k → c η0 k 0 ...) def = c η0 k E θ G-RL * ,d (cη k−1 → c η0 k ...) · c η0 k 1 P θ G-L,d (c η0 k → c η0 k 0 c η0 k 1 ) (17) E θ G-RL * ,d (cη * → cηι ...) def = ∞ k=0 E θ G-RL * ,d (cη k → cηι ...) (18) E θ G-RL * ,d (cη + → cηι ...) def = E θ G-RL * ,d (cη * → cηι ...) − E θ G-RL * ,d (cη 0 → cηι ...) (19) coder's hypothesis stacks.", "Figure 1 illustrates an excerpt from a standard phrase-based translation lattice.", "Within each decoder stack t, each hypothesis h is augmented with a syntactic language model stateτ t h .", "Each syntactic language model state is a random variable store, containing a slice of random variables from the HHMM.", "Specifically,τ t h contains those random variables s 1..D t that maintain distributions over syntactic elements.", "By maintaining these syntactic random variable stores, each hypothesis has access to the current language model probability for the partial translation ending at that hypothesis, as calculated by an incremental syntactic language model defined by the HHMM.", "Specifically, the random variable store at hypothesis h provides P(τ t h ) = P(e h 1..t , s 1..D 1..t ), where e h 1..t is the sequence of words in a partial hypothesis ending at h which contains t target words, and where there are D syntactic random variables in each random variable store (Eq.", "5).", "During stack decoding, the phrase-based decoder progressively constructs new hypotheses by extending existing hypotheses.", "New hypotheses are placed in appropriate hypothesis stacks.", "In the simplest case, a new hypothesis extends an existing hypothesis by exactly one target word.", "As the new hypothesis is constructed by extending an existing stack element, the store and reduction state random variables are processed, along with the newly hypothesized word.", "This results in a new store of syntactic random variables (Eq.", "6) that are associated with the new stack element.", "When a new hypothesis extends an existing hypothesis by more than one word, this process is first carried out for the first new word in the hypothesis.", "It is then repeated for the remaining words in the hypothesis extension.", "Once the final word in the hypothesis has been 
processed, the resulting random variable store is associated with that hypothesis.", "The random variable stores created for the non-final words in the extending hypothesis are discarded, and need not be explicitly retained.", "Figure 6 illustrates this process, showing how a syntactic language model stateτ 5 1 in a phrase-based decoding lattice is obtained from a previous syntactic language model stateτ 3 1 (from Figure 1) by parsing the target language words from a phrasebased translation option.", "Our syntactic language model is integrated into the current version of Moses .", "Results As an initial measure to compare language models, average per-word perplexity, ppl, reports how surprised a model is by test data.", "Equation 25 calculates ppl using log base b for a test set of T tokens.", "ppl = b −log b P(e 1 ...e T ) T (25) We trained the syntactic language model from §4 (HHMM) and an interpolated n-gram language model with modified Kneser-Ney smoothing (Chen and Goodman, 1998); models were trained on sections 2-21 of the Wall Street Journal (WSJ) treebank (Marcus et al., 1993 HHMM and n-gram LMs (Figure 7) .", "To show the effects of training an LM on more data, we also report perplexity results on the 5-gram LM trained for the GALE Arabic-English task using the English Gigaword corpus.", "In all cases, including the HHMM significantly reduces perplexity.", "We trained a phrase-based translation model on the full NIST Open MT08 Urdu-English translation model using the full training data.", "We trained the HHMM and n-gram LMs on the WSJ data in order to make them as similar as possible.", "During tuning, Moses was first configured to use just the n-gram LM, then configured to use both the n-gram LM and the syntactic HHMM LM.", "MERT consistently assigned positive weight to the syntactic LM feature, typically slightly less than the n-gram LM weight.", "In our integration with Moses, incorporating a syntactic language model dramatically slows the decoding process.", "Figure 8 illustrates a slowdown around three orders of magnitude.", "Although speed remains roughly linear to the size of the source sentence (ruling out exponential behavior), it is with an extremely large constant time factor.", "Due to this slowdown, we tuned the parameters using a constrained dev set (only sentences with 1-20 words), and tested using a constrained devtest set (only sentences with 1-20 words).", "Figure 9 shows a statistically significant improvement to the BLEU score when using the HHMM and the n-gram LMs together on this reduced test set.", "Discussion This paper argues that incremental syntactic languages models are a straightforward and appro-Moses LM(s) BLEU n-gram only 18.78 HHMM + n-gram 19.78 Figure 9 : Results for Ur-En devtest (only sentences with 1-20 words) with HHMM beam size of 2000 and Moses settings of distortion limit 10, stack size 200, and ttable limit 20. 
priate algorithmic fit for incorporating syntax into phrase-based statistical machine translation, since both process sentences in an incremental left-toright fashion.", "This means incremental syntactic LM scores can be calculated during the decoding process, rather than waiting until a complete sentence is posited, which is typically necessary in top-down or bottom-up parsing.", "We provided a rigorous formal definition of incremental syntactic languages models, and detailed what steps are necessary to incorporate such LMs into phrase-based decoding.", "We integrated an incremental syntactic language model into Moses.", "The translation quality significantly improved on a constrained task, and the perplexity improvements suggest that interpolating between n-gram and syntactic LMs may hold promise on larger data sets.", "The use of very large n-gram language models is typically a key ingredient in the best-performing machine translation systems (Brants et al., 2007) .", "Our n-gram model trained only on WSJ is admittedly small.", "Our future work seeks to incorporate largescale n-gram language models in conjunction with incremental syntactic language models.", "The added decoding time cost of our syntactic language model is very high.", "By increasing the beam size and distortion limit of the baseline system, future work may examine whether a baseline system with comparable runtimes can achieve comparable translation quality.", "A more efficient implementation of the HHMM parser would speed decoding and make more extensive and conclusive translation experiments possible.", "Various additional improvements could include caching the HHMM LM calculations, and exploiting properties of the right-corner transform that limit the number of decisions between successive time steps." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.3", "4", "4.1", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Parser as Syntactic Language Model in", "Incremental syntactic language model", "Incorporating a Syntactic Language Model", "Incremental Bounded-Memory Parsing with a Time Series Model", "Formal Parsing Model: Scoring Partial Translation Hypotheses", "Results", "Discussion" ] }
"GEM-SciDuet-train-1#paper-954#slide-11"
"This looks a lot like CCG"
"Our parser performs some CCG-style operations: Type raising in conjunction with forward function composition Motivation Syntactic LM Decoder Integration Results"
"Our parser performs some CCG-style operations: Type raising in conjunction with forward function composition Motivation Syntactic LM Decoder Integration Results"
[]
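This slide says the right-corner parser's operations resemble CCG type raising combined with forward function composition, and the paper likens its active/awaited incomplete constituents (e.g. VP/NN for "meets the") to CCG non-constituent categories. The toy sketch below only illustrates those two combinators; the Cat encoding and function names are my own simplification, not the parser described in the paper.

```python
from dataclasses import dataclass
from typing import Optional

# A category is either atomic (arg is None) or a slash category like S/NP or
# S\NP; the flat string encoding is a simplification of real CCG notation.
@dataclass(frozen=True)
class Cat:
    result: str
    slash: Optional[str] = None  # "/" (forward) or "\\" (backward)
    arg: Optional[str] = None

    def __str__(self) -> str:
        return self.result if self.slash is None else f"({self.result}{self.slash}{self.arg})"

def type_raise(x: str, t: str = "S") -> Cat:
    # Forward type raising: X  =>  T/(T\X)
    return Cat(result=t, slash="/", arg=f"{t}\\{x}")

def forward_compose(left: Cat, right: Cat) -> Optional[Cat]:
    # Forward function composition: X/Y  Y/Z  =>  X/Z
    if left.slash == "/" and right.slash == "/" and left.arg == right.result:
        return Cat(result=left.result, slash="/", arg=right.arg)
    return None

# A type-raised subject S/(S\NP) forward-composes with a transitive verb
# (S\NP)/NP to give the incomplete constituent S/NP -- an "active" category
# still awaiting its object, analogous to categories such as VP/NN.
subj = type_raise("NP")                          # S/(S\NP)
verb = Cat(result="S\\NP", slash="/", arg="NP")  # (S\NP)/NP
print(subj, "+", verb, "=>", forward_compose(subj, verb))
```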
"GEM-SciDuet-train-1#paper-954#slide-12"
"954"
"Incremental Syntactic Language Models for Phrase-based Translation"
"This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machine translation through incremental syntactic parsing. Bottom-up and topdown parsers typically require a completed string as input. This requirement makes it difficult to incorporate them into phrase-based translation, which generates partial hypothesized translations from left-to-right. Incremental syntactic language models score sentences in a similar left-to-right fashion, and are therefore a good mechanism for incorporating syntax into phrase-based translation. We give a formal definition of one such lineartime syntactic language model, detail its relation to phrase-based decoding, and integrate the model with the Moses phrase-based translation system. We present empirical results on a constrained Urdu-English translation task that demonstrate a significant BLEU score improvement and a large decrease in perplexity."
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158 ], "paper_content_text": [ "Introduction Early work in statistical machine translation viewed translation as a noisy channel process comprised of a translation model, which functioned to posit adequate translations of source language words, and a target language model, which guided the fluency of generated target language strings (Brown et al., This research was supported by NSF CAREER/PECASE award 0447685, NSF grant IIS-0713448, and the European Commission through the EuroMatrixPlus project.", "Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the sponsors or the United States Air Force.", "Cleared for public release (Case Number 88ABW-2010-6489) on 10 Dec 2010.", "1990).", "Drawing on earlier successes in speech recognition, research in statistical machine translation has effectively used n-gram word sequence models as language models.", "Modern phrase-based translation using large scale n-gram language models generally performs well in terms of lexical choice, but still often produces ungrammatical output.", "Syntactic parsing may help produce more grammatical output by better modeling structural relationships and long-distance dependencies.", "Bottom-up and top-down parsers typically require a completed string as input; this requirement makes it difficult to incorporate these parsers into phrase-based translation, which generates hypothesized translations incrementally, from left-to-right.", "1 As a workaround, parsers can rerank the translated output of translation systems (Och et al., 2004) .", "On the other hand, incremental parsers (Roark, 2001; Henderson, 2004; Schuler et al., 2010; Huang and Sagae, 2010) process input in a straightforward left-to-right manner.", "We observe that incremental parsers, used as structured language models, provide an appropriate algorithmic match to incremental phrase-based decoding.", "We directly integrate incremental syntactic parsing into phrase-based translation.", "This approach re-exerts the role of the language model as a mechanism for encouraging syntactically fluent translations.", "The contributions of this work are as follows: • A novel method for integrating syntactic LMs into phrase-based translation ( §3) • A formal definition of an incremental parser for statistical MT that can run in linear-time ( §4) • Integration with Moses ( §5) along with empirical results for perplexity and significant translation score improvement on a constrained Urdu-English task ( §6) Related Work Neither phrase-based (Koehn et al., 2003) nor hierarchical phrase-based translation (Chiang, 2005) take explicit advantage of the syntactic structure of either source or target language.", "The translation models in these techniques define phrases as contiguous word sequences (with gaps allowed in the 
case of hierarchical phrases) which may or may not correspond to any linguistic constituent.", "Early work in statistical phrase-based translation considered whether restricting translation models to use only syntactically well-formed constituents might improve translation quality (Koehn et al., 2003) but found such restrictions failed to improve translation quality.", "Significant research has examined the extent to which syntax can be usefully incorporated into statistical tree-based translation models: string-to-tree (Yamada and Knight, 2001; Gildea, 2003; Imamura et al., 2004; Galley et al., 2004; Graehl and Knight, 2004; Melamed, 2004; Galley et al., 2006; Huang et al., 2006; Shen et al., 2008) , tree-to-string (Liu et al., 2006; Liu et al., 2007; Huang and Mi, 2010) , tree-to-tree (Abeillé et al., 1990; Shieber and Schabes, 1990; Poutsma, 1998; Eisner, 2003; Shieber, 2004; Cowan et al., 2006; Nesson et al., 2006; Zhang et al., 2007; DeNeefe et al., 2007; DeNeefe and Knight, 2009; Liu et al., 2009; Chiang, 2010) , and treelet (Ding and Palmer, 2005; Quirk et al., 2005) techniques use syntactic information to inform the translation model.", "Recent work has shown that parsing-based machine translation using syntax-augmented (Zollmann and Venugopal, 2006) hierarchical translation grammars with rich nonterminal sets can demonstrate substantial gains over hierarchical grammars for certain language pairs (Baker et al., 2009) .", "In contrast to the above tree-based translation models, our approach maintains a standard (non-syntactic) phrase-based translation model.", "Instead, we incorporate syntax into the language model.", "Traditional approaches to language models in speech recognition and statistical machine translation focus on the use of n-grams, which provide a simple finite-state model approximation of the target language.", "Chelba and Jelinek (1998) proposed that syntactic structure could be used as an alternative technique in language modeling.", "This insight has been explored in the context of speech recognition (Chelba and Jelinek, 2000; Collins et al., 2005) .", "Hassan et al.", "(2007) and use supertag n-gram LMs.", "Syntactic language models have also been explored with tree-based translation models.", "Charniak et al.", "(2003) use syntactic language models to rescore the output of a tree-based translation system.", "Post and Gildea (2008) investigate the integration of parsers as syntactic language models during binary bracketing transduction translation (Wu, 1997) ; under these conditions, both syntactic phrase-structure and dependency parsing language models were found to improve oracle-best translations, but did not improve actual translation results.", "Post and Gildea (2009) use tree substitution grammar parsing for language modeling, but do not use this language model in a translation system.", "Our work, in contrast to the above approaches, explores the use of incremental syntactic language models in conjunction with phrase-based translation models.", "Our syntactic language model fits into the family of linear-time dynamic programming parsers described in (Huang and Sagae, 2010) .", "Like (Galley and Manning, 2009 ) our work implements an incremental syntactic language model; our approach differs by calculating syntactic LM scores over all available phrase-structure parses at each hypothesis instead of the 1-best dependency parse.", "The syntax-driven reordering model of Ge (2010) uses syntax-driven features to influence word order within standard phrase-based 
translation.", "The syntactic cohesion features of Cherry (2008) encourages the use of syntactically well-formed translation phrases.", "These approaches are fully orthogonal to our proposed incremental syntactic language model, and could be applied in concert with our work.", "Parser as Syntactic Language Model in Phrase-Based Translation Parsing is the task of selecting the representationτ (typically a tree) that best models the structure of s τ 0 s thẽ τ 1 1 s that τ 1 2 s president τ 1 3 .", ".", ".", "the president τ 2 1 that president τ 2 2 president Fridaỹ τ 2 3 .", ".", ".", "president meets τ 3 1 Obama met τ 3 2 .", ".", ".", "Figure 1 : Partial decoding lattice for standard phrase-based decoding stack algorithm translating the German sentence Der Präsident trifft am Freitag den Vorstand.", "Each node h in decoding stack t represents the application of a translation option, and includes the source sentence coverage vector, target language ngram state, and syntactic language model stateτ t h .", "Hypothesis combination is also shown, indicating where lattice paths with identical n-gram histories converge.", "We use the English translation The president meets the board on Friday as a running example throughout all Figures.", "sentence e, out of all such possible representations τ .", "This set of representations may be all phrase structure trees or all dependency trees allowed by the parsing model.", "Typically, treeτ is taken to be: τ = argmax τ P(τ | e) (1) We define a syntactic language model P(e) based on the total probability mass over all possible trees for string e. This is shown in Equation 2 and decomposed in Equation 3.", "P(e) = τ ∈τ P(τ, e) (2) P(e) = τ ∈τ P(e | τ )P(τ ) (3) Incremental syntactic language model An incremental parser processes each token of input sequentially from the beginning of a sentence to the end, rather than processing input in a top-down (Earley, 1968) or bottom-up (Cocke and Schwartz, 1970; Kasami, 1965; Younger, 1967) fashion.", "After processing the tth token in string e, an incremental parser has some internal representation of possible hypothesized (incomplete) trees, τ t .", "The syntactic language model probability of a partial sentence e 1 ...e t is defined: P(e 1 ...e t ) = τ ∈τt P(e 1 ...e t | τ )P(τ ) (4) In practice, a parser may constrain the set of trees under consideration toτ t , that subset of analyses or partial analyses that remains after any pruning is performed.", "An incremental syntactic language model can then be defined by a probability mass function (Equation 5) and a transition function δ (Equation 6 ).", "The role of δ is explained in §3.3 below.", "Any parser which implements these two functions can serve as a syntactic language model.", "P(e 1 ...e t ) ≈ P(τ t ) = τ ∈τ t P(e 1 ...e t | τ )P(τ ) (5) δ(e t ,τ t−1 ) →τ t (6) 3.2 Decoding in phrase-based translation Given a source language input sentence f , a trained source-to-target translation model, and a target language model, the task of translation is to find the maximally probable translationê using a linear combination of j feature functions h weighted according to tuned parameters λ (Och and Ney, 2002) .", "e = argmax e exp( j λ j h j (e, f )) (7) Phrase-based translation constructs a set of translation options -hypothesized translations for contiguous portions of the source sentence -from a trained phrase table, then incrementally constructs a lattice of partial target translations (Koehn, 2010) .", "To prune the search space, lattice nodes are organized into 
beam stacks (Jelinek, 1969) according to the number of source words translated.", "An n-gram language model history is also maintained at each node in the translation lattice.", "The search space is further trimmed with hypothesis recombination, which collapses lattice nodes that share a common coverage vector and n-gram state.", "Incorporating a Syntactic Language Model Phrase-based translation produces target language words in an incremental left-to-right fashion, generating words at the beginning of a translation first and words at the end of a translation last.", "Similarly, incremental parsers process sentences in an incremental fashion, analyzing words at the beginning of a sentence first and words at the end of a sentence last.", "As such, an incremental parser with transition function δ can be incorporated into the phrase-based decoding process in a straightforward manner.", "Each node in the translation lattice is augmented with a syntactic language model stateτ t .", "The hypothesis at the root of the translation lattice is initialized withτ 0 , representing the internal state of the incremental parser before any input words are processed.", "The phrase-based translation decoding process adds nodes to the lattice; each new node contains one or more target language words.", "Each node contains a backpointer to its parent node, in whichτ t−1 is stored.", "Given a new target language word e t andτ t−1 , the incremental parser's transition function δ calculatesτ t .", "Figure 1 a sample phrase-based decoding lattice where each translation lattice node is augmented with syntactic language model stateτ t .", "In phrase-based translation, many translation lattice nodes represent multi-word target language phrases.", "For such translation lattice nodes, δ will be called once for each newly hypothesized target language word in the node.", "Only the final syntactic language model state in such sequences need be stored in the translation lattice node.", "Incremental Bounded-Memory Parsing with a Time Series Model Having defined the framework by which any incremental parser may be incorporated into phrasebased translation, we now formally define a specific incremental parser for use in our experiments.", "The parser must process target language words incrementally as the phrase-based decoder adds hypotheses to the translation lattice.", "To facilitate this incremental processing, ordinary phrase-structure trees can be transformed into right-corner recur- r 1 t−1 r 2 t−1 r 3 t−1 s 1 t−1 s 2 t−1 s 3 t−1 r 1 t r 2 t r 3 t s 1 t s 2 t s 3 t e t−1 e t .", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", "Figure 4 : Graphical representation of the dependency structure in a standard Hierarchic Hidden Markov Model with D = 3 hidden levels that can be used to parse syntax.", "Circles denote random variables, and edges denote conditional dependencies.", "Shaded circles denote variables with observed values.", "sive phrase structure trees using the tree transforms in Schuler et al.", "(2010) .", "Constituent nonterminals in right-corner transformed trees take the form of incomplete constituents c η /c ηι consisting of an 'active' constituent c η lacking an 'awaited' constituent c ηι yet to come, similar to non-constituent categories in a Combinatory Categorial Grammar (Ades and Steedman, 1982; Steedman, 2000) .", "As an example, the parser might consider VP/NN as a possible category for input \"meets the\".", "A sample phrase structure tree is shown before and after the right-corner transform in 
Figures 2 and 3.", "Our parser operates over a right-corner transformed probabilistic context-free grammar (PCFG).", "Parsing runs in linear time on the length of the input.", "This model of incremental parsing is implemented as a Hierarchical Hidden Markov Model (HHMM) (Murphy and Paskin, 2001) , and is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The parser runs in O(n) time, where n is the number of words in the input.", "This model is shown graphically in Figure 4 and formally defined in §4.1 below.", "The incremental parser assigns a probability (Eq.", "5) for a partial target language hypothesis, using a bounded store of incomplete constituents c η /c ηι .", "The phrase-based decoder uses this probability value as the syntactic language model feature score.", "Formal Parsing Model: Scoring Partial Translation Hypotheses This model is essentially an extension of an HHMM, which obtains a most likely sequence of hidden store states,ŝ 1..D 1..T , of some length T and some maximum depth D, given a sequence of observed tokens (e.g.", "generated target language words), e 1..T , using HHMM state transition model θ A and observation symbol model θ B (Rabiner, 1990) : s 1..D 1..T def = argmax s 1..D 1..T T t=1 P θ A (s 1..D t | s 1..D t−1 )·P θ B (e t | s 1..D t ) (8) The HHMM parser is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The model generates each successive store (using store model θ S ) only after considering whether each nested sequence of incomplete constituents has completed and reduced (using reduction model θ R ): P θ A (s 1..D t | s 1..D t−1 ) def = r 1 t ..r D t D d=1 P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) · P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) (9) Store elements are defined to contain only the active (c η ) and awaited (c ηι ) constituent categories necessary to compute an incomplete constituent probability: s d t def = c η , c ηι (10) Reduction states are defined to contain only the complete constituent category c r d t necessary to compute an inside likelihood probability, as well as a flag f r d t indicating whether a reduction has taken place (to end a sequence of incomplete constituents): r d t def = c r d t , f r d t (11) The model probabilities for these store elements and reduction states can then be defined (from Murphy and Paskin 2001) to expand a new incomplete constituent after a reduction has taken place (f r d t = 1; using depth-specific store state expansion model θ S-E,d ), transition along a sequence of store elements if no reduction has taken place (f r d t = 0; using depthspecific store state transition model θ S-T,d ): 2 P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) def =    if f r d+1 t = 1, f r d t = 1 : P θ S-E,d (s d t | s d−1 t ) if f r d+1 t = 1, f r d t = 0 : P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) if f r d+1 t = 0, f r d t = 0 : s d t = s d t−1 (12) and possibly reduce a store element (terminate a sequence) if the store state below it has reduced (f r d+1 t = 1; using depth-specific reduction model θ R,d ): P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if f r d+1 t = 0 : r d t = r ⊥ if f r d+1 t = 1 : P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) (13) where r ⊥ is a null state resulting from the failure of an incomplete constituent to complete, and constants are defined for the edge conditions of s 0 t and r D+1 t .", "Figure 5 illustrates this model in action.", "These pushdown automaton operations are then refined for right-corner parsing 
(Schuler, 2009) , distinguishing active transitions (model θ S-T-A,d , in which an incomplete constituent is completed, but not reduced, and then immediately expanded to a 2 An indicator function · is used to denote deterministic probabilities: φ = 1 if φ is true, 0 otherwise.", "new incomplete constituent in the same store element) from awaited transitions (model θ S-T-W,d , which involve no completion): P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) def = if r d t = r ⊥ : P θ S-T-A,d (s d t | s d−1 t r d t ) if r d t = r ⊥ : P θ S-T-W,d (s d t | s d t−1 r d+1 t ) (14) P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if c r d+1 t = x t : r d t = r ⊥ if c r d+1 t = x t : P θ R-R,d (r d t | s d t−1 s d−1 t−1 ) (15) These HHMM right-corner parsing operations are then defined in terms of branch-and depth-specific PCFG probabilities θ G-R,d and θ G-L,d : 3 3 Model probabilities are also defined in terms of leftprogeny probability distribution E θ G-RL * ,d which is itself defined in terms of PCFG probabilities: are calculated by transition function δ (Eq.", "6, as defined by §4.1), but are not stored.", "Observed random variables (e 3 ..e 5 ) are shown for clarity, but are not explicitly stored in any syntactic language model state.", "E θ G-RL * ,d (cη 0 → cη0 ...) def = c η1 P θ G-R,d (cη → cη0 cη1) (16) E θ G-RL * ,d (cη k → c η0 k 0 ...) def = c η0 k E θ G-RL * ,d (cη k−1 → c η0 k ...) · c η0 k 1 P θ G-L,d (c η0 k → c η0 k 0 c η0 k 1 ) (17) E θ G-RL * ,d (cη * → cηι ...) def = ∞ k=0 E θ G-RL * ,d (cη k → cηι ...) (18) E θ G-RL * ,d (cη + → cηι ...) def = E θ G-RL * ,d (cη * → cηι ...) − E θ G-RL * ,d (cη 0 → cηι ...) (19) coder's hypothesis stacks.", "Figure 1 illustrates an excerpt from a standard phrase-based translation lattice.", "Within each decoder stack t, each hypothesis h is augmented with a syntactic language model stateτ t h .", "Each syntactic language model state is a random variable store, containing a slice of random variables from the HHMM.", "Specifically,τ t h contains those random variables s 1..D t that maintain distributions over syntactic elements.", "By maintaining these syntactic random variable stores, each hypothesis has access to the current language model probability for the partial translation ending at that hypothesis, as calculated by an incremental syntactic language model defined by the HHMM.", "Specifically, the random variable store at hypothesis h provides P(τ t h ) = P(e h 1..t , s 1..D 1..t ), where e h 1..t is the sequence of words in a partial hypothesis ending at h which contains t target words, and where there are D syntactic random variables in each random variable store (Eq.", "5).", "During stack decoding, the phrase-based decoder progressively constructs new hypotheses by extending existing hypotheses.", "New hypotheses are placed in appropriate hypothesis stacks.", "In the simplest case, a new hypothesis extends an existing hypothesis by exactly one target word.", "As the new hypothesis is constructed by extending an existing stack element, the store and reduction state random variables are processed, along with the newly hypothesized word.", "This results in a new store of syntactic random variables (Eq.", "6) that are associated with the new stack element.", "When a new hypothesis extends an existing hypothesis by more than one word, this process is first carried out for the first new word in the hypothesis.", "It is then repeated for the remaining words in the hypothesis extension.", "Once the final word in the hypothesis has been 
processed, the resulting random variable store is associated with that hypothesis.", "The random variable stores created for the non-final words in the extending hypothesis are discarded, and need not be explicitly retained.", "Figure 6 illustrates this process, showing how a syntactic language model stateτ 5 1 in a phrase-based decoding lattice is obtained from a previous syntactic language model stateτ 3 1 (from Figure 1) by parsing the target language words from a phrasebased translation option.", "Our syntactic language model is integrated into the current version of Moses .", "Results As an initial measure to compare language models, average per-word perplexity, ppl, reports how surprised a model is by test data.", "Equation 25 calculates ppl using log base b for a test set of T tokens.", "ppl = b −log b P(e 1 ...e T ) T (25) We trained the syntactic language model from §4 (HHMM) and an interpolated n-gram language model with modified Kneser-Ney smoothing (Chen and Goodman, 1998); models were trained on sections 2-21 of the Wall Street Journal (WSJ) treebank (Marcus et al., 1993 HHMM and n-gram LMs (Figure 7) .", "To show the effects of training an LM on more data, we also report perplexity results on the 5-gram LM trained for the GALE Arabic-English task using the English Gigaword corpus.", "In all cases, including the HHMM significantly reduces perplexity.", "We trained a phrase-based translation model on the full NIST Open MT08 Urdu-English translation model using the full training data.", "We trained the HHMM and n-gram LMs on the WSJ data in order to make them as similar as possible.", "During tuning, Moses was first configured to use just the n-gram LM, then configured to use both the n-gram LM and the syntactic HHMM LM.", "MERT consistently assigned positive weight to the syntactic LM feature, typically slightly less than the n-gram LM weight.", "In our integration with Moses, incorporating a syntactic language model dramatically slows the decoding process.", "Figure 8 illustrates a slowdown around three orders of magnitude.", "Although speed remains roughly linear to the size of the source sentence (ruling out exponential behavior), it is with an extremely large constant time factor.", "Due to this slowdown, we tuned the parameters using a constrained dev set (only sentences with 1-20 words), and tested using a constrained devtest set (only sentences with 1-20 words).", "Figure 9 shows a statistically significant improvement to the BLEU score when using the HHMM and the n-gram LMs together on this reduced test set.", "Discussion This paper argues that incremental syntactic languages models are a straightforward and appro-Moses LM(s) BLEU n-gram only 18.78 HHMM + n-gram 19.78 Figure 9 : Results for Ur-En devtest (only sentences with 1-20 words) with HHMM beam size of 2000 and Moses settings of distortion limit 10, stack size 200, and ttable limit 20. 
priate algorithmic fit for incorporating syntax into phrase-based statistical machine translation, since both process sentences in an incremental left-toright fashion.", "This means incremental syntactic LM scores can be calculated during the decoding process, rather than waiting until a complete sentence is posited, which is typically necessary in top-down or bottom-up parsing.", "We provided a rigorous formal definition of incremental syntactic languages models, and detailed what steps are necessary to incorporate such LMs into phrase-based decoding.", "We integrated an incremental syntactic language model into Moses.", "The translation quality significantly improved on a constrained task, and the perplexity improvements suggest that interpolating between n-gram and syntactic LMs may hold promise on larger data sets.", "The use of very large n-gram language models is typically a key ingredient in the best-performing machine translation systems (Brants et al., 2007) .", "Our n-gram model trained only on WSJ is admittedly small.", "Our future work seeks to incorporate largescale n-gram language models in conjunction with incremental syntactic language models.", "The added decoding time cost of our syntactic language model is very high.", "By increasing the beam size and distortion limit of the baseline system, future work may examine whether a baseline system with comparable runtimes can achieve comparable translation quality.", "A more efficient implementation of the HHMM parser would speed decoding and make more extensive and conclusive translation experiments possible.", "Various additional improvements could include caching the HHMM LM calculations, and exploiting properties of the right-corner transform that limit the number of decisions between successive time steps." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.3", "4", "4.1", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Parser as Syntactic Language Model in", "Incremental syntactic language model", "Incorporating a Syntactic Language Model", "Incremental Bounded-Memory Parsing with a Time Series Model", "Formal Parsing Model: Scoring Partial Translation Hypotheses", "Results", "Discussion" ] }
"GEM-SciDuet-train-1#paper-954#slide-12"
"Why not just use CCG"
"No probablistic version of incremental CCG Our parser is constrained (we dont have backward composition) We do use those components of CCG (forward function application and forward function composition) which are useful for probabilistic incremental parsing Motivation Syntactic LM Decoder Integration Results"
"No probablistic version of incremental CCG Our parser is constrained (we dont have backward composition) We do use those components of CCG (forward function application and forward function composition) which are useful for probabilistic incremental parsing Motivation Syntactic LM Decoder Integration Results"
[]
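The paper content reproduced in the row above describes the decoder-facing contract of an incremental syntactic LM: a start state, a transition function δ (Eq. 6) applied once per newly hypothesized target word, and a prefix probability over the surviving partial parses (Eq. 5), with only the final parser state stored on each lattice node. The sketch below is purely illustrative of that contract; the class and function names are hypothetical and do not correspond to the paper's HHMM implementation or to Moses internals.

```python
# Illustrative sketch only (hypothetical names): the interface a phrase-based
# decoder needs from an incremental syntactic language model, per Section 3.
from typing import Tuple


class IncrementalSyntacticLM:
    def initial_state(self):
        """Parser state before any target word is consumed (tau_0)."""
        raise NotImplementedError

    def delta(self, state, word) -> Tuple[object, float]:
        """Consume one target word (Eq. 6) and return the new parser state
        together with log P(e_1..e_t), the prefix probability over the
        surviving partial parses (Eq. 5)."""
        raise NotImplementedError


def extend_hypothesis(lm: IncrementalSyntacticLM, parent_state, phrase_words):
    """Extend a decoder hypothesis with a multi-word translation option.

    delta is called once per newly hypothesized word; only the final parser
    state is kept on the new lattice node, as described in Section 3.3.
    """
    state = parent_state
    prefix_logprob = 0.0
    for w in phrase_words:
        state, prefix_logprob = lm.delta(state, w)  # score of the whole prefix so far
    return state, prefix_logprob  # stored on the new hypothesis; used as the LM feature
```

A concrete parser, such as the HHMM of Section 4, would implement `delta` by advancing its bounded store of incomplete constituents for the new word.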
"GEM-SciDuet-train-1#paper-954#slide-13"
"954"
"Incremental Syntactic Language Models for Phrase-based Translation"
"This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machine translation through incremental syntactic parsing. Bottom-up and topdown parsers typically require a completed string as input. This requirement makes it difficult to incorporate them into phrase-based translation, which generates partial hypothesized translations from left-to-right. Incremental syntactic language models score sentences in a similar left-to-right fashion, and are therefore a good mechanism for incorporating syntax into phrase-based translation. We give a formal definition of one such lineartime syntactic language model, detail its relation to phrase-based decoding, and integrate the model with the Moses phrase-based translation system. We present empirical results on a constrained Urdu-English translation task that demonstrate a significant BLEU score improvement and a large decrease in perplexity."
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158 ], "paper_content_text": [ "Introduction Early work in statistical machine translation viewed translation as a noisy channel process comprised of a translation model, which functioned to posit adequate translations of source language words, and a target language model, which guided the fluency of generated target language strings (Brown et al., This research was supported by NSF CAREER/PECASE award 0447685, NSF grant IIS-0713448, and the European Commission through the EuroMatrixPlus project.", "Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the sponsors or the United States Air Force.", "Cleared for public release (Case Number 88ABW-2010-6489) on 10 Dec 2010.", "1990).", "Drawing on earlier successes in speech recognition, research in statistical machine translation has effectively used n-gram word sequence models as language models.", "Modern phrase-based translation using large scale n-gram language models generally performs well in terms of lexical choice, but still often produces ungrammatical output.", "Syntactic parsing may help produce more grammatical output by better modeling structural relationships and long-distance dependencies.", "Bottom-up and top-down parsers typically require a completed string as input; this requirement makes it difficult to incorporate these parsers into phrase-based translation, which generates hypothesized translations incrementally, from left-to-right.", "1 As a workaround, parsers can rerank the translated output of translation systems (Och et al., 2004) .", "On the other hand, incremental parsers (Roark, 2001; Henderson, 2004; Schuler et al., 2010; Huang and Sagae, 2010) process input in a straightforward left-to-right manner.", "We observe that incremental parsers, used as structured language models, provide an appropriate algorithmic match to incremental phrase-based decoding.", "We directly integrate incremental syntactic parsing into phrase-based translation.", "This approach re-exerts the role of the language model as a mechanism for encouraging syntactically fluent translations.", "The contributions of this work are as follows: • A novel method for integrating syntactic LMs into phrase-based translation ( §3) • A formal definition of an incremental parser for statistical MT that can run in linear-time ( §4) • Integration with Moses ( §5) along with empirical results for perplexity and significant translation score improvement on a constrained Urdu-English task ( §6) Related Work Neither phrase-based (Koehn et al., 2003) nor hierarchical phrase-based translation (Chiang, 2005) take explicit advantage of the syntactic structure of either source or target language.", "The translation models in these techniques define phrases as contiguous word sequences (with gaps allowed in the 
case of hierarchical phrases) which may or may not correspond to any linguistic constituent.", "Early work in statistical phrase-based translation considered whether restricting translation models to use only syntactically well-formed constituents might improve translation quality (Koehn et al., 2003) but found such restrictions failed to improve translation quality.", "Significant research has examined the extent to which syntax can be usefully incorporated into statistical tree-based translation models: string-to-tree (Yamada and Knight, 2001; Gildea, 2003; Imamura et al., 2004; Galley et al., 2004; Graehl and Knight, 2004; Melamed, 2004; Galley et al., 2006; Huang et al., 2006; Shen et al., 2008) , tree-to-string (Liu et al., 2006; Liu et al., 2007; Huang and Mi, 2010) , tree-to-tree (Abeillé et al., 1990; Shieber and Schabes, 1990; Poutsma, 1998; Eisner, 2003; Shieber, 2004; Cowan et al., 2006; Nesson et al., 2006; Zhang et al., 2007; DeNeefe et al., 2007; DeNeefe and Knight, 2009; Liu et al., 2009; Chiang, 2010) , and treelet (Ding and Palmer, 2005; Quirk et al., 2005) techniques use syntactic information to inform the translation model.", "Recent work has shown that parsing-based machine translation using syntax-augmented (Zollmann and Venugopal, 2006) hierarchical translation grammars with rich nonterminal sets can demonstrate substantial gains over hierarchical grammars for certain language pairs (Baker et al., 2009) .", "In contrast to the above tree-based translation models, our approach maintains a standard (non-syntactic) phrase-based translation model.", "Instead, we incorporate syntax into the language model.", "Traditional approaches to language models in speech recognition and statistical machine translation focus on the use of n-grams, which provide a simple finite-state model approximation of the target language.", "Chelba and Jelinek (1998) proposed that syntactic structure could be used as an alternative technique in language modeling.", "This insight has been explored in the context of speech recognition (Chelba and Jelinek, 2000; Collins et al., 2005) .", "Hassan et al.", "(2007) and use supertag n-gram LMs.", "Syntactic language models have also been explored with tree-based translation models.", "Charniak et al.", "(2003) use syntactic language models to rescore the output of a tree-based translation system.", "Post and Gildea (2008) investigate the integration of parsers as syntactic language models during binary bracketing transduction translation (Wu, 1997) ; under these conditions, both syntactic phrase-structure and dependency parsing language models were found to improve oracle-best translations, but did not improve actual translation results.", "Post and Gildea (2009) use tree substitution grammar parsing for language modeling, but do not use this language model in a translation system.", "Our work, in contrast to the above approaches, explores the use of incremental syntactic language models in conjunction with phrase-based translation models.", "Our syntactic language model fits into the family of linear-time dynamic programming parsers described in (Huang and Sagae, 2010) .", "Like (Galley and Manning, 2009 ) our work implements an incremental syntactic language model; our approach differs by calculating syntactic LM scores over all available phrase-structure parses at each hypothesis instead of the 1-best dependency parse.", "The syntax-driven reordering model of Ge (2010) uses syntax-driven features to influence word order within standard phrase-based 
translation.", "The syntactic cohesion features of Cherry (2008) encourages the use of syntactically well-formed translation phrases.", "These approaches are fully orthogonal to our proposed incremental syntactic language model, and could be applied in concert with our work.", "Parser as Syntactic Language Model in Phrase-Based Translation Parsing is the task of selecting the representationτ (typically a tree) that best models the structure of s τ 0 s thẽ τ 1 1 s that τ 1 2 s president τ 1 3 .", ".", ".", "the president τ 2 1 that president τ 2 2 president Fridaỹ τ 2 3 .", ".", ".", "president meets τ 3 1 Obama met τ 3 2 .", ".", ".", "Figure 1 : Partial decoding lattice for standard phrase-based decoding stack algorithm translating the German sentence Der Präsident trifft am Freitag den Vorstand.", "Each node h in decoding stack t represents the application of a translation option, and includes the source sentence coverage vector, target language ngram state, and syntactic language model stateτ t h .", "Hypothesis combination is also shown, indicating where lattice paths with identical n-gram histories converge.", "We use the English translation The president meets the board on Friday as a running example throughout all Figures.", "sentence e, out of all such possible representations τ .", "This set of representations may be all phrase structure trees or all dependency trees allowed by the parsing model.", "Typically, treeτ is taken to be: τ = argmax τ P(τ | e) (1) We define a syntactic language model P(e) based on the total probability mass over all possible trees for string e. This is shown in Equation 2 and decomposed in Equation 3.", "P(e) = τ ∈τ P(τ, e) (2) P(e) = τ ∈τ P(e | τ )P(τ ) (3) Incremental syntactic language model An incremental parser processes each token of input sequentially from the beginning of a sentence to the end, rather than processing input in a top-down (Earley, 1968) or bottom-up (Cocke and Schwartz, 1970; Kasami, 1965; Younger, 1967) fashion.", "After processing the tth token in string e, an incremental parser has some internal representation of possible hypothesized (incomplete) trees, τ t .", "The syntactic language model probability of a partial sentence e 1 ...e t is defined: P(e 1 ...e t ) = τ ∈τt P(e 1 ...e t | τ )P(τ ) (4) In practice, a parser may constrain the set of trees under consideration toτ t , that subset of analyses or partial analyses that remains after any pruning is performed.", "An incremental syntactic language model can then be defined by a probability mass function (Equation 5) and a transition function δ (Equation 6 ).", "The role of δ is explained in §3.3 below.", "Any parser which implements these two functions can serve as a syntactic language model.", "P(e 1 ...e t ) ≈ P(τ t ) = τ ∈τ t P(e 1 ...e t | τ )P(τ ) (5) δ(e t ,τ t−1 ) →τ t (6) 3.2 Decoding in phrase-based translation Given a source language input sentence f , a trained source-to-target translation model, and a target language model, the task of translation is to find the maximally probable translationê using a linear combination of j feature functions h weighted according to tuned parameters λ (Och and Ney, 2002) .", "e = argmax e exp( j λ j h j (e, f )) (7) Phrase-based translation constructs a set of translation options -hypothesized translations for contiguous portions of the source sentence -from a trained phrase table, then incrementally constructs a lattice of partial target translations (Koehn, 2010) .", "To prune the search space, lattice nodes are organized into 
beam stacks (Jelinek, 1969) according to the number of source words translated.", "An n-gram language model history is also maintained at each node in the translation lattice.", "The search space is further trimmed with hypothesis recombination, which collapses lattice nodes that share a common coverage vector and n-gram state.", "Incorporating a Syntactic Language Model Phrase-based translation produces target language words in an incremental left-to-right fashion, generating words at the beginning of a translation first and words at the end of a translation last.", "Similarly, incremental parsers process sentences in an incremental fashion, analyzing words at the beginning of a sentence first and words at the end of a sentence last.", "As such, an incremental parser with transition function δ can be incorporated into the phrase-based decoding process in a straightforward manner.", "Each node in the translation lattice is augmented with a syntactic language model stateτ t .", "The hypothesis at the root of the translation lattice is initialized withτ 0 , representing the internal state of the incremental parser before any input words are processed.", "The phrase-based translation decoding process adds nodes to the lattice; each new node contains one or more target language words.", "Each node contains a backpointer to its parent node, in whichτ t−1 is stored.", "Given a new target language word e t andτ t−1 , the incremental parser's transition function δ calculatesτ t .", "Figure 1 a sample phrase-based decoding lattice where each translation lattice node is augmented with syntactic language model stateτ t .", "In phrase-based translation, many translation lattice nodes represent multi-word target language phrases.", "For such translation lattice nodes, δ will be called once for each newly hypothesized target language word in the node.", "Only the final syntactic language model state in such sequences need be stored in the translation lattice node.", "Incremental Bounded-Memory Parsing with a Time Series Model Having defined the framework by which any incremental parser may be incorporated into phrasebased translation, we now formally define a specific incremental parser for use in our experiments.", "The parser must process target language words incrementally as the phrase-based decoder adds hypotheses to the translation lattice.", "To facilitate this incremental processing, ordinary phrase-structure trees can be transformed into right-corner recur- r 1 t−1 r 2 t−1 r 3 t−1 s 1 t−1 s 2 t−1 s 3 t−1 r 1 t r 2 t r 3 t s 1 t s 2 t s 3 t e t−1 e t .", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", "Figure 4 : Graphical representation of the dependency structure in a standard Hierarchic Hidden Markov Model with D = 3 hidden levels that can be used to parse syntax.", "Circles denote random variables, and edges denote conditional dependencies.", "Shaded circles denote variables with observed values.", "sive phrase structure trees using the tree transforms in Schuler et al.", "(2010) .", "Constituent nonterminals in right-corner transformed trees take the form of incomplete constituents c η /c ηι consisting of an 'active' constituent c η lacking an 'awaited' constituent c ηι yet to come, similar to non-constituent categories in a Combinatory Categorial Grammar (Ades and Steedman, 1982; Steedman, 2000) .", "As an example, the parser might consider VP/NN as a possible category for input \"meets the\".", "A sample phrase structure tree is shown before and after the right-corner transform in 
Figures 2 and 3.", "Our parser operates over a right-corner transformed probabilistic context-free grammar (PCFG).", "Parsing runs in linear time on the length of the input.", "This model of incremental parsing is implemented as a Hierarchical Hidden Markov Model (HHMM) (Murphy and Paskin, 2001) , and is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The parser runs in O(n) time, where n is the number of words in the input.", "This model is shown graphically in Figure 4 and formally defined in §4.1 below.", "The incremental parser assigns a probability (Eq.", "5) for a partial target language hypothesis, using a bounded store of incomplete constituents c η /c ηι .", "The phrase-based decoder uses this probability value as the syntactic language model feature score.", "Formal Parsing Model: Scoring Partial Translation Hypotheses This model is essentially an extension of an HHMM, which obtains a most likely sequence of hidden store states,ŝ 1..D 1..T , of some length T and some maximum depth D, given a sequence of observed tokens (e.g.", "generated target language words), e 1..T , using HHMM state transition model θ A and observation symbol model θ B (Rabiner, 1990) : s 1..D 1..T def = argmax s 1..D 1..T T t=1 P θ A (s 1..D t | s 1..D t−1 )·P θ B (e t | s 1..D t ) (8) The HHMM parser is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The model generates each successive store (using store model θ S ) only after considering whether each nested sequence of incomplete constituents has completed and reduced (using reduction model θ R ): P θ A (s 1..D t | s 1..D t−1 ) def = r 1 t ..r D t D d=1 P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) · P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) (9) Store elements are defined to contain only the active (c η ) and awaited (c ηι ) constituent categories necessary to compute an incomplete constituent probability: s d t def = c η , c ηι (10) Reduction states are defined to contain only the complete constituent category c r d t necessary to compute an inside likelihood probability, as well as a flag f r d t indicating whether a reduction has taken place (to end a sequence of incomplete constituents): r d t def = c r d t , f r d t (11) The model probabilities for these store elements and reduction states can then be defined (from Murphy and Paskin 2001) to expand a new incomplete constituent after a reduction has taken place (f r d t = 1; using depth-specific store state expansion model θ S-E,d ), transition along a sequence of store elements if no reduction has taken place (f r d t = 0; using depthspecific store state transition model θ S-T,d ): 2 P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) def =    if f r d+1 t = 1, f r d t = 1 : P θ S-E,d (s d t | s d−1 t ) if f r d+1 t = 1, f r d t = 0 : P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) if f r d+1 t = 0, f r d t = 0 : s d t = s d t−1 (12) and possibly reduce a store element (terminate a sequence) if the store state below it has reduced (f r d+1 t = 1; using depth-specific reduction model θ R,d ): P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if f r d+1 t = 0 : r d t = r ⊥ if f r d+1 t = 1 : P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) (13) where r ⊥ is a null state resulting from the failure of an incomplete constituent to complete, and constants are defined for the edge conditions of s 0 t and r D+1 t .", "Figure 5 illustrates this model in action.", "These pushdown automaton operations are then refined for right-corner parsing 
(Schuler, 2009) , distinguishing active transitions (model θ S-T-A,d , in which an incomplete constituent is completed, but not reduced, and then immediately expanded to a 2 An indicator function · is used to denote deterministic probabilities: φ = 1 if φ is true, 0 otherwise.", "new incomplete constituent in the same store element) from awaited transitions (model θ S-T-W,d , which involve no completion): P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) def = if r d t = r ⊥ : P θ S-T-A,d (s d t | s d−1 t r d t ) if r d t = r ⊥ : P θ S-T-W,d (s d t | s d t−1 r d+1 t ) (14) P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if c r d+1 t = x t : r d t = r ⊥ if c r d+1 t = x t : P θ R-R,d (r d t | s d t−1 s d−1 t−1 ) (15) These HHMM right-corner parsing operations are then defined in terms of branch-and depth-specific PCFG probabilities θ G-R,d and θ G-L,d : 3 3 Model probabilities are also defined in terms of leftprogeny probability distribution E θ G-RL * ,d which is itself defined in terms of PCFG probabilities: are calculated by transition function δ (Eq.", "6, as defined by §4.1), but are not stored.", "Observed random variables (e 3 ..e 5 ) are shown for clarity, but are not explicitly stored in any syntactic language model state.", "E θ G-RL * ,d (cη 0 → cη0 ...) def = c η1 P θ G-R,d (cη → cη0 cη1) (16) E θ G-RL * ,d (cη k → c η0 k 0 ...) def = c η0 k E θ G-RL * ,d (cη k−1 → c η0 k ...) · c η0 k 1 P θ G-L,d (c η0 k → c η0 k 0 c η0 k 1 ) (17) E θ G-RL * ,d (cη * → cηι ...) def = ∞ k=0 E θ G-RL * ,d (cη k → cηι ...) (18) E θ G-RL * ,d (cη + → cηι ...) def = E θ G-RL * ,d (cη * → cηι ...) − E θ G-RL * ,d (cη 0 → cηι ...) (19) coder's hypothesis stacks.", "Figure 1 illustrates an excerpt from a standard phrase-based translation lattice.", "Within each decoder stack t, each hypothesis h is augmented with a syntactic language model stateτ t h .", "Each syntactic language model state is a random variable store, containing a slice of random variables from the HHMM.", "Specifically,τ t h contains those random variables s 1..D t that maintain distributions over syntactic elements.", "By maintaining these syntactic random variable stores, each hypothesis has access to the current language model probability for the partial translation ending at that hypothesis, as calculated by an incremental syntactic language model defined by the HHMM.", "Specifically, the random variable store at hypothesis h provides P(τ t h ) = P(e h 1..t , s 1..D 1..t ), where e h 1..t is the sequence of words in a partial hypothesis ending at h which contains t target words, and where there are D syntactic random variables in each random variable store (Eq.", "5).", "During stack decoding, the phrase-based decoder progressively constructs new hypotheses by extending existing hypotheses.", "New hypotheses are placed in appropriate hypothesis stacks.", "In the simplest case, a new hypothesis extends an existing hypothesis by exactly one target word.", "As the new hypothesis is constructed by extending an existing stack element, the store and reduction state random variables are processed, along with the newly hypothesized word.", "This results in a new store of syntactic random variables (Eq.", "6) that are associated with the new stack element.", "When a new hypothesis extends an existing hypothesis by more than one word, this process is first carried out for the first new word in the hypothesis.", "It is then repeated for the remaining words in the hypothesis extension.", "Once the final word in the hypothesis has been 
processed, the resulting random variable store is associated with that hypothesis.", "The random variable stores created for the non-final words in the extending hypothesis are discarded, and need not be explicitly retained.", "Figure 6 illustrates this process, showing how a syntactic language model stateτ 5 1 in a phrase-based decoding lattice is obtained from a previous syntactic language model stateτ 3 1 (from Figure 1) by parsing the target language words from a phrasebased translation option.", "Our syntactic language model is integrated into the current version of Moses .", "Results As an initial measure to compare language models, average per-word perplexity, ppl, reports how surprised a model is by test data.", "Equation 25 calculates ppl using log base b for a test set of T tokens.", "ppl = b −log b P(e 1 ...e T ) T (25) We trained the syntactic language model from §4 (HHMM) and an interpolated n-gram language model with modified Kneser-Ney smoothing (Chen and Goodman, 1998); models were trained on sections 2-21 of the Wall Street Journal (WSJ) treebank (Marcus et al., 1993 HHMM and n-gram LMs (Figure 7) .", "To show the effects of training an LM on more data, we also report perplexity results on the 5-gram LM trained for the GALE Arabic-English task using the English Gigaword corpus.", "In all cases, including the HHMM significantly reduces perplexity.", "We trained a phrase-based translation model on the full NIST Open MT08 Urdu-English translation model using the full training data.", "We trained the HHMM and n-gram LMs on the WSJ data in order to make them as similar as possible.", "During tuning, Moses was first configured to use just the n-gram LM, then configured to use both the n-gram LM and the syntactic HHMM LM.", "MERT consistently assigned positive weight to the syntactic LM feature, typically slightly less than the n-gram LM weight.", "In our integration with Moses, incorporating a syntactic language model dramatically slows the decoding process.", "Figure 8 illustrates a slowdown around three orders of magnitude.", "Although speed remains roughly linear to the size of the source sentence (ruling out exponential behavior), it is with an extremely large constant time factor.", "Due to this slowdown, we tuned the parameters using a constrained dev set (only sentences with 1-20 words), and tested using a constrained devtest set (only sentences with 1-20 words).", "Figure 9 shows a statistically significant improvement to the BLEU score when using the HHMM and the n-gram LMs together on this reduced test set.", "Discussion This paper argues that incremental syntactic languages models are a straightforward and appro-Moses LM(s) BLEU n-gram only 18.78 HHMM + n-gram 19.78 Figure 9 : Results for Ur-En devtest (only sentences with 1-20 words) with HHMM beam size of 2000 and Moses settings of distortion limit 10, stack size 200, and ttable limit 20. 
priate algorithmic fit for incorporating syntax into phrase-based statistical machine translation, since both process sentences in an incremental left-toright fashion.", "This means incremental syntactic LM scores can be calculated during the decoding process, rather than waiting until a complete sentence is posited, which is typically necessary in top-down or bottom-up parsing.", "We provided a rigorous formal definition of incremental syntactic languages models, and detailed what steps are necessary to incorporate such LMs into phrase-based decoding.", "We integrated an incremental syntactic language model into Moses.", "The translation quality significantly improved on a constrained task, and the perplexity improvements suggest that interpolating between n-gram and syntactic LMs may hold promise on larger data sets.", "The use of very large n-gram language models is typically a key ingredient in the best-performing machine translation systems (Brants et al., 2007) .", "Our n-gram model trained only on WSJ is admittedly small.", "Our future work seeks to incorporate largescale n-gram language models in conjunction with incremental syntactic language models.", "The added decoding time cost of our syntactic language model is very high.", "By increasing the beam size and distortion limit of the baseline system, future work may examine whether a baseline system with comparable runtimes can achieve comparable translation quality.", "A more efficient implementation of the HHMM parser would speed decoding and make more extensive and conclusive translation experiments possible.", "Various additional improvements could include caching the HHMM LM calculations, and exploiting properties of the right-corner transform that limit the number of decisions between successive time steps." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.3", "4", "4.1", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Parser as Syntactic Language Model in", "Incremental syntactic language model", "Incorporating a Syntactic Language Model", "Incremental Bounded-Memory Parsing with a Time Series Model", "Formal Parsing Model: Scoring Partial Translation Hypotheses", "Results", "Discussion" ] }
"GEM-SciDuet-train-1#paper-954#slide-13"
"Speed Results"
"Mean per-sentence decoding time Parser beam sizes are indicated for the syntactic LM Parser runs in linear time, but were parsing all paths through the Moses lattice as they are generated by the decoder More informed pruning, but slower decoding Motivation Syntactic LM Decoder Integration Results"
"Mean per-sentence decoding time Parser beam sizes are indicated for the syntactic LM Parser runs in linear time, but were parsing all paths through the Moses lattice as they are generated by the decoder More informed pruning, but slower decoding Motivation Syntactic LM Decoder Integration Results"
[]
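The results portion of the paper content in the row above compares language models by average per-word perplexity (Eq. 25). As a minimal sketch of that formula, assuming per-sentence log-probabilities are already available from a model (the function name and example numbers below are made up for illustration, not the paper's evaluation code):

```python
# Minimal sketch of Equation 25: average per-word perplexity of a test set.
import math
from typing import List


def perplexity(sentence_logprobs: List[float], num_tokens: int) -> float:
    """ppl = exp(-(sum of log P(sentence)) / T) for a test set of T tokens,
    using natural logs; any base b gives the same value when used consistently."""
    total_logprob = sum(sentence_logprobs)
    return math.exp(-total_logprob / num_tokens)


# Example: two sentences with log-probabilities -20.0 and -35.0 over 12 tokens.
print(perplexity([-20.0, -35.0], num_tokens=12))  # approximately 97.8
```

Lower values mean the model is less surprised by the test data, which is the sense in which the paper reports that adding the HHMM reduces perplexity relative to the n-gram LMs alone.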
"GEM-SciDuet-train-1#paper-954#slide-14"
"954"
"Incremental Syntactic Language Models for Phrase-based Translation"
"This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machine translation through incremental syntactic parsing. Bottom-up and topdown parsers typically require a completed string as input. This requirement makes it difficult to incorporate them into phrase-based translation, which generates partial hypothesized translations from left-to-right. Incremental syntactic language models score sentences in a similar left-to-right fashion, and are therefore a good mechanism for incorporating syntax into phrase-based translation. We give a formal definition of one such lineartime syntactic language model, detail its relation to phrase-based decoding, and integrate the model with the Moses phrase-based translation system. We present empirical results on a constrained Urdu-English translation task that demonstrate a significant BLEU score improvement and a large decrease in perplexity."
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158 ], "paper_content_text": [ "Introduction Early work in statistical machine translation viewed translation as a noisy channel process comprised of a translation model, which functioned to posit adequate translations of source language words, and a target language model, which guided the fluency of generated target language strings (Brown et al., This research was supported by NSF CAREER/PECASE award 0447685, NSF grant IIS-0713448, and the European Commission through the EuroMatrixPlus project.", "Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the sponsors or the United States Air Force.", "Cleared for public release (Case Number 88ABW-2010-6489) on 10 Dec 2010.", "1990).", "Drawing on earlier successes in speech recognition, research in statistical machine translation has effectively used n-gram word sequence models as language models.", "Modern phrase-based translation using large scale n-gram language models generally performs well in terms of lexical choice, but still often produces ungrammatical output.", "Syntactic parsing may help produce more grammatical output by better modeling structural relationships and long-distance dependencies.", "Bottom-up and top-down parsers typically require a completed string as input; this requirement makes it difficult to incorporate these parsers into phrase-based translation, which generates hypothesized translations incrementally, from left-to-right.", "1 As a workaround, parsers can rerank the translated output of translation systems (Och et al., 2004) .", "On the other hand, incremental parsers (Roark, 2001; Henderson, 2004; Schuler et al., 2010; Huang and Sagae, 2010) process input in a straightforward left-to-right manner.", "We observe that incremental parsers, used as structured language models, provide an appropriate algorithmic match to incremental phrase-based decoding.", "We directly integrate incremental syntactic parsing into phrase-based translation.", "This approach re-exerts the role of the language model as a mechanism for encouraging syntactically fluent translations.", "The contributions of this work are as follows: • A novel method for integrating syntactic LMs into phrase-based translation ( §3) • A formal definition of an incremental parser for statistical MT that can run in linear-time ( §4) • Integration with Moses ( §5) along with empirical results for perplexity and significant translation score improvement on a constrained Urdu-English task ( §6) Related Work Neither phrase-based (Koehn et al., 2003) nor hierarchical phrase-based translation (Chiang, 2005) take explicit advantage of the syntactic structure of either source or target language.", "The translation models in these techniques define phrases as contiguous word sequences (with gaps allowed in the 
case of hierarchical phrases) which may or may not correspond to any linguistic constituent.", "Early work in statistical phrase-based translation considered whether restricting translation models to use only syntactically well-formed constituents might improve translation quality (Koehn et al., 2003) but found such restrictions failed to improve translation quality.", "Significant research has examined the extent to which syntax can be usefully incorporated into statistical tree-based translation models: string-to-tree (Yamada and Knight, 2001; Gildea, 2003; Imamura et al., 2004; Galley et al., 2004; Graehl and Knight, 2004; Melamed, 2004; Galley et al., 2006; Huang et al., 2006; Shen et al., 2008) , tree-to-string (Liu et al., 2006; Liu et al., 2007; Huang and Mi, 2010) , tree-to-tree (Abeillé et al., 1990; Shieber and Schabes, 1990; Poutsma, 1998; Eisner, 2003; Shieber, 2004; Cowan et al., 2006; Nesson et al., 2006; Zhang et al., 2007; DeNeefe et al., 2007; DeNeefe and Knight, 2009; Liu et al., 2009; Chiang, 2010) , and treelet (Ding and Palmer, 2005; Quirk et al., 2005) techniques use syntactic information to inform the translation model.", "Recent work has shown that parsing-based machine translation using syntax-augmented (Zollmann and Venugopal, 2006) hierarchical translation grammars with rich nonterminal sets can demonstrate substantial gains over hierarchical grammars for certain language pairs (Baker et al., 2009) .", "In contrast to the above tree-based translation models, our approach maintains a standard (non-syntactic) phrase-based translation model.", "Instead, we incorporate syntax into the language model.", "Traditional approaches to language models in speech recognition and statistical machine translation focus on the use of n-grams, which provide a simple finite-state model approximation of the target language.", "Chelba and Jelinek (1998) proposed that syntactic structure could be used as an alternative technique in language modeling.", "This insight has been explored in the context of speech recognition (Chelba and Jelinek, 2000; Collins et al., 2005) .", "Hassan et al.", "(2007) and use supertag n-gram LMs.", "Syntactic language models have also been explored with tree-based translation models.", "Charniak et al.", "(2003) use syntactic language models to rescore the output of a tree-based translation system.", "Post and Gildea (2008) investigate the integration of parsers as syntactic language models during binary bracketing transduction translation (Wu, 1997) ; under these conditions, both syntactic phrase-structure and dependency parsing language models were found to improve oracle-best translations, but did not improve actual translation results.", "Post and Gildea (2009) use tree substitution grammar parsing for language modeling, but do not use this language model in a translation system.", "Our work, in contrast to the above approaches, explores the use of incremental syntactic language models in conjunction with phrase-based translation models.", "Our syntactic language model fits into the family of linear-time dynamic programming parsers described in (Huang and Sagae, 2010) .", "Like (Galley and Manning, 2009 ) our work implements an incremental syntactic language model; our approach differs by calculating syntactic LM scores over all available phrase-structure parses at each hypothesis instead of the 1-best dependency parse.", "The syntax-driven reordering model of Ge (2010) uses syntax-driven features to influence word order within standard phrase-based 
translation.", "The syntactic cohesion features of Cherry (2008) encourages the use of syntactically well-formed translation phrases.", "These approaches are fully orthogonal to our proposed incremental syntactic language model, and could be applied in concert with our work.", "Parser as Syntactic Language Model in Phrase-Based Translation Parsing is the task of selecting the representationτ (typically a tree) that best models the structure of s τ 0 s thẽ τ 1 1 s that τ 1 2 s president τ 1 3 .", ".", ".", "the president τ 2 1 that president τ 2 2 president Fridaỹ τ 2 3 .", ".", ".", "president meets τ 3 1 Obama met τ 3 2 .", ".", ".", "Figure 1 : Partial decoding lattice for standard phrase-based decoding stack algorithm translating the German sentence Der Präsident trifft am Freitag den Vorstand.", "Each node h in decoding stack t represents the application of a translation option, and includes the source sentence coverage vector, target language ngram state, and syntactic language model stateτ t h .", "Hypothesis combination is also shown, indicating where lattice paths with identical n-gram histories converge.", "We use the English translation The president meets the board on Friday as a running example throughout all Figures.", "sentence e, out of all such possible representations τ .", "This set of representations may be all phrase structure trees or all dependency trees allowed by the parsing model.", "Typically, treeτ is taken to be: τ = argmax τ P(τ | e) (1) We define a syntactic language model P(e) based on the total probability mass over all possible trees for string e. This is shown in Equation 2 and decomposed in Equation 3.", "P(e) = τ ∈τ P(τ, e) (2) P(e) = τ ∈τ P(e | τ )P(τ ) (3) Incremental syntactic language model An incremental parser processes each token of input sequentially from the beginning of a sentence to the end, rather than processing input in a top-down (Earley, 1968) or bottom-up (Cocke and Schwartz, 1970; Kasami, 1965; Younger, 1967) fashion.", "After processing the tth token in string e, an incremental parser has some internal representation of possible hypothesized (incomplete) trees, τ t .", "The syntactic language model probability of a partial sentence e 1 ...e t is defined: P(e 1 ...e t ) = τ ∈τt P(e 1 ...e t | τ )P(τ ) (4) In practice, a parser may constrain the set of trees under consideration toτ t , that subset of analyses or partial analyses that remains after any pruning is performed.", "An incremental syntactic language model can then be defined by a probability mass function (Equation 5) and a transition function δ (Equation 6 ).", "The role of δ is explained in §3.3 below.", "Any parser which implements these two functions can serve as a syntactic language model.", "P(e 1 ...e t ) ≈ P(τ t ) = τ ∈τ t P(e 1 ...e t | τ )P(τ ) (5) δ(e t ,τ t−1 ) →τ t (6) 3.2 Decoding in phrase-based translation Given a source language input sentence f , a trained source-to-target translation model, and a target language model, the task of translation is to find the maximally probable translationê using a linear combination of j feature functions h weighted according to tuned parameters λ (Och and Ney, 2002) .", "e = argmax e exp( j λ j h j (e, f )) (7) Phrase-based translation constructs a set of translation options -hypothesized translations for contiguous portions of the source sentence -from a trained phrase table, then incrementally constructs a lattice of partial target translations (Koehn, 2010) .", "To prune the search space, lattice nodes are organized into 
beam stacks (Jelinek, 1969) according to the number of source words translated.", "An n-gram language model history is also maintained at each node in the translation lattice.", "The search space is further trimmed with hypothesis recombination, which collapses lattice nodes that share a common coverage vector and n-gram state.", "Incorporating a Syntactic Language Model Phrase-based translation produces target language words in an incremental left-to-right fashion, generating words at the beginning of a translation first and words at the end of a translation last.", "Similarly, incremental parsers process sentences in an incremental fashion, analyzing words at the beginning of a sentence first and words at the end of a sentence last.", "As such, an incremental parser with transition function δ can be incorporated into the phrase-based decoding process in a straightforward manner.", "Each node in the translation lattice is augmented with a syntactic language model stateτ t .", "The hypothesis at the root of the translation lattice is initialized withτ 0 , representing the internal state of the incremental parser before any input words are processed.", "The phrase-based translation decoding process adds nodes to the lattice; each new node contains one or more target language words.", "Each node contains a backpointer to its parent node, in whichτ t−1 is stored.", "Given a new target language word e t andτ t−1 , the incremental parser's transition function δ calculatesτ t .", "Figure 1 a sample phrase-based decoding lattice where each translation lattice node is augmented with syntactic language model stateτ t .", "In phrase-based translation, many translation lattice nodes represent multi-word target language phrases.", "For such translation lattice nodes, δ will be called once for each newly hypothesized target language word in the node.", "Only the final syntactic language model state in such sequences need be stored in the translation lattice node.", "Incremental Bounded-Memory Parsing with a Time Series Model Having defined the framework by which any incremental parser may be incorporated into phrasebased translation, we now formally define a specific incremental parser for use in our experiments.", "The parser must process target language words incrementally as the phrase-based decoder adds hypotheses to the translation lattice.", "To facilitate this incremental processing, ordinary phrase-structure trees can be transformed into right-corner recur- r 1 t−1 r 2 t−1 r 3 t−1 s 1 t−1 s 2 t−1 s 3 t−1 r 1 t r 2 t r 3 t s 1 t s 2 t s 3 t e t−1 e t .", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", ".", "Figure 4 : Graphical representation of the dependency structure in a standard Hierarchic Hidden Markov Model with D = 3 hidden levels that can be used to parse syntax.", "Circles denote random variables, and edges denote conditional dependencies.", "Shaded circles denote variables with observed values.", "sive phrase structure trees using the tree transforms in Schuler et al.", "(2010) .", "Constituent nonterminals in right-corner transformed trees take the form of incomplete constituents c η /c ηι consisting of an 'active' constituent c η lacking an 'awaited' constituent c ηι yet to come, similar to non-constituent categories in a Combinatory Categorial Grammar (Ades and Steedman, 1982; Steedman, 2000) .", "As an example, the parser might consider VP/NN as a possible category for input \"meets the\".", "A sample phrase structure tree is shown before and after the right-corner transform in 
Figures 2 and 3.", "Our parser operates over a right-corner transformed probabilistic context-free grammar (PCFG).", "Parsing runs in linear time on the length of the input.", "This model of incremental parsing is implemented as a Hierarchical Hidden Markov Model (HHMM) (Murphy and Paskin, 2001) , and is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The parser runs in O(n) time, where n is the number of words in the input.", "This model is shown graphically in Figure 4 and formally defined in §4.1 below.", "The incremental parser assigns a probability (Eq.", "5) for a partial target language hypothesis, using a bounded store of incomplete constituents c η /c ηι .", "The phrase-based decoder uses this probability value as the syntactic language model feature score.", "Formal Parsing Model: Scoring Partial Translation Hypotheses This model is essentially an extension of an HHMM, which obtains a most likely sequence of hidden store states,ŝ 1..D 1..T , of some length T and some maximum depth D, given a sequence of observed tokens (e.g.", "generated target language words), e 1..T , using HHMM state transition model θ A and observation symbol model θ B (Rabiner, 1990) : s 1..D 1..T def = argmax s 1..D 1..T T t=1 P θ A (s 1..D t | s 1..D t−1 )·P θ B (e t | s 1..D t ) (8) The HHMM parser is equivalent to a probabilistic pushdown automaton with a bounded pushdown store.", "The model generates each successive store (using store model θ S ) only after considering whether each nested sequence of incomplete constituents has completed and reduced (using reduction model θ R ): P θ A (s 1..D t | s 1..D t−1 ) def = r 1 t ..r D t D d=1 P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) · P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) (9) Store elements are defined to contain only the active (c η ) and awaited (c ηι ) constituent categories necessary to compute an incomplete constituent probability: s d t def = c η , c ηι (10) Reduction states are defined to contain only the complete constituent category c r d t necessary to compute an inside likelihood probability, as well as a flag f r d t indicating whether a reduction has taken place (to end a sequence of incomplete constituents): r d t def = c r d t , f r d t (11) The model probabilities for these store elements and reduction states can then be defined (from Murphy and Paskin 2001) to expand a new incomplete constituent after a reduction has taken place (f r d t = 1; using depth-specific store state expansion model θ S-E,d ), transition along a sequence of store elements if no reduction has taken place (f r d t = 0; using depthspecific store state transition model θ S-T,d ): 2 P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) def =    if f r d+1 t = 1, f r d t = 1 : P θ S-E,d (s d t | s d−1 t ) if f r d+1 t = 1, f r d t = 0 : P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) if f r d+1 t = 0, f r d t = 0 : s d t = s d t−1 (12) and possibly reduce a store element (terminate a sequence) if the store state below it has reduced (f r d+1 t = 1; using depth-specific reduction model θ R,d ): P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if f r d+1 t = 0 : r d t = r ⊥ if f r d+1 t = 1 : P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) (13) where r ⊥ is a null state resulting from the failure of an incomplete constituent to complete, and constants are defined for the edge conditions of s 0 t and r D+1 t .", "Figure 5 illustrates this model in action.", "These pushdown automaton operations are then refined for right-corner parsing 
(Schuler, 2009) , distinguishing active transitions (model θ S-T-A,d , in which an incomplete constituent is completed, but not reduced, and then immediately expanded to a 2 An indicator function · is used to denote deterministic probabilities: φ = 1 if φ is true, 0 otherwise.", "new incomplete constituent in the same store element) from awaited transitions (model θ S-T-W,d , which involve no completion): P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) def = if r d t = r ⊥ : P θ S-T-A,d (s d t | s d−1 t r d t ) if r d t = r ⊥ : P θ S-T-W,d (s d t | s d t−1 r d+1 t ) (14) P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if c r d+1 t = x t : r d t = r ⊥ if c r d+1 t = x t : P θ R-R,d (r d t | s d t−1 s d−1 t−1 ) (15) These HHMM right-corner parsing operations are then defined in terms of branch-and depth-specific PCFG probabilities θ G-R,d and θ G-L,d : 3 3 Model probabilities are also defined in terms of leftprogeny probability distribution E θ G-RL * ,d which is itself defined in terms of PCFG probabilities: are calculated by transition function δ (Eq.", "6, as defined by §4.1), but are not stored.", "Observed random variables (e 3 ..e 5 ) are shown for clarity, but are not explicitly stored in any syntactic language model state.", "E θ G-RL * ,d (cη 0 → cη0 ...) def = c η1 P θ G-R,d (cη → cη0 cη1) (16) E θ G-RL * ,d (cη k → c η0 k 0 ...) def = c η0 k E θ G-RL * ,d (cη k−1 → c η0 k ...) · c η0 k 1 P θ G-L,d (c η0 k → c η0 k 0 c η0 k 1 ) (17) E θ G-RL * ,d (cη * → cηι ...) def = ∞ k=0 E θ G-RL * ,d (cη k → cηι ...) (18) E θ G-RL * ,d (cη + → cηι ...) def = E θ G-RL * ,d (cη * → cηι ...) − E θ G-RL * ,d (cη 0 → cηι ...) (19) coder's hypothesis stacks.", "Figure 1 illustrates an excerpt from a standard phrase-based translation lattice.", "Within each decoder stack t, each hypothesis h is augmented with a syntactic language model stateτ t h .", "Each syntactic language model state is a random variable store, containing a slice of random variables from the HHMM.", "Specifically,τ t h contains those random variables s 1..D t that maintain distributions over syntactic elements.", "By maintaining these syntactic random variable stores, each hypothesis has access to the current language model probability for the partial translation ending at that hypothesis, as calculated by an incremental syntactic language model defined by the HHMM.", "Specifically, the random variable store at hypothesis h provides P(τ t h ) = P(e h 1..t , s 1..D 1..t ), where e h 1..t is the sequence of words in a partial hypothesis ending at h which contains t target words, and where there are D syntactic random variables in each random variable store (Eq.", "5).", "During stack decoding, the phrase-based decoder progressively constructs new hypotheses by extending existing hypotheses.", "New hypotheses are placed in appropriate hypothesis stacks.", "In the simplest case, a new hypothesis extends an existing hypothesis by exactly one target word.", "As the new hypothesis is constructed by extending an existing stack element, the store and reduction state random variables are processed, along with the newly hypothesized word.", "This results in a new store of syntactic random variables (Eq.", "6) that are associated with the new stack element.", "When a new hypothesis extends an existing hypothesis by more than one word, this process is first carried out for the first new word in the hypothesis.", "It is then repeated for the remaining words in the hypothesis extension.", "Once the final word in the hypothesis has been 
processed, the resulting random variable store is associated with that hypothesis.", "The random variable stores created for the non-final words in the extending hypothesis are discarded, and need not be explicitly retained.", "Figure 6 illustrates this process, showing how a syntactic language model stateτ 5 1 in a phrase-based decoding lattice is obtained from a previous syntactic language model stateτ 3 1 (from Figure 1) by parsing the target language words from a phrasebased translation option.", "Our syntactic language model is integrated into the current version of Moses .", "Results As an initial measure to compare language models, average per-word perplexity, ppl, reports how surprised a model is by test data.", "Equation 25 calculates ppl using log base b for a test set of T tokens.", "ppl = b −log b P(e 1 ...e T ) T (25) We trained the syntactic language model from §4 (HHMM) and an interpolated n-gram language model with modified Kneser-Ney smoothing (Chen and Goodman, 1998); models were trained on sections 2-21 of the Wall Street Journal (WSJ) treebank (Marcus et al., 1993 HHMM and n-gram LMs (Figure 7) .", "To show the effects of training an LM on more data, we also report perplexity results on the 5-gram LM trained for the GALE Arabic-English task using the English Gigaword corpus.", "In all cases, including the HHMM significantly reduces perplexity.", "We trained a phrase-based translation model on the full NIST Open MT08 Urdu-English translation model using the full training data.", "We trained the HHMM and n-gram LMs on the WSJ data in order to make them as similar as possible.", "During tuning, Moses was first configured to use just the n-gram LM, then configured to use both the n-gram LM and the syntactic HHMM LM.", "MERT consistently assigned positive weight to the syntactic LM feature, typically slightly less than the n-gram LM weight.", "In our integration with Moses, incorporating a syntactic language model dramatically slows the decoding process.", "Figure 8 illustrates a slowdown around three orders of magnitude.", "Although speed remains roughly linear to the size of the source sentence (ruling out exponential behavior), it is with an extremely large constant time factor.", "Due to this slowdown, we tuned the parameters using a constrained dev set (only sentences with 1-20 words), and tested using a constrained devtest set (only sentences with 1-20 words).", "Figure 9 shows a statistically significant improvement to the BLEU score when using the HHMM and the n-gram LMs together on this reduced test set.", "Discussion This paper argues that incremental syntactic languages models are a straightforward and appro-Moses LM(s) BLEU n-gram only 18.78 HHMM + n-gram 19.78 Figure 9 : Results for Ur-En devtest (only sentences with 1-20 words) with HHMM beam size of 2000 and Moses settings of distortion limit 10, stack size 200, and ttable limit 20. 
priate algorithmic fit for incorporating syntax into phrase-based statistical machine translation, since both process sentences in an incremental left-toright fashion.", "This means incremental syntactic LM scores can be calculated during the decoding process, rather than waiting until a complete sentence is posited, which is typically necessary in top-down or bottom-up parsing.", "We provided a rigorous formal definition of incremental syntactic languages models, and detailed what steps are necessary to incorporate such LMs into phrase-based decoding.", "We integrated an incremental syntactic language model into Moses.", "The translation quality significantly improved on a constrained task, and the perplexity improvements suggest that interpolating between n-gram and syntactic LMs may hold promise on larger data sets.", "The use of very large n-gram language models is typically a key ingredient in the best-performing machine translation systems (Brants et al., 2007) .", "Our n-gram model trained only on WSJ is admittedly small.", "Our future work seeks to incorporate largescale n-gram language models in conjunction with incremental syntactic language models.", "The added decoding time cost of our syntactic language model is very high.", "By increasing the beam size and distortion limit of the baseline system, future work may examine whether a baseline system with comparable runtimes can achieve comparable translation quality.", "A more efficient implementation of the HHMM parser would speed decoding and make more extensive and conclusive translation experiments possible.", "Various additional improvements could include caching the HHMM LM calculations, and exploiting properties of the right-corner transform that limit the number of decisions between successive time steps." ] }
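The formulas embedded in the extracted paper text above were garbled during PDF extraction. The following LaTeX is a tentative reconstruction, based on the surrounding definitions rather than the authors' typesetting, of the HHMM Viterbi objective (Eq. 8), its transition decomposition (Eq. 9), and the per-word perplexity (Eq. 25):

```latex
% Tentative reconstruction (assumes amsmath); equation numbers follow the paper.
\begin{align}
\hat{s}^{1..D}_{1..T} &\overset{\mathrm{def}}{=}
  \operatorname*{argmax}_{s^{1..D}_{1..T}}
  \prod_{t=1}^{T} P_{\theta_A}\bigl(s^{1..D}_{t} \mid s^{1..D}_{t-1}\bigr)
  \cdot P_{\theta_B}\bigl(e_{t} \mid s^{1..D}_{t}\bigr) \tag{8} \\
P_{\theta_A}\bigl(s^{1..D}_{t} \mid s^{1..D}_{t-1}\bigr) &\overset{\mathrm{def}}{=}
  \sum_{r^{1}_{t}..r^{D}_{t}} \; \prod_{d=1}^{D}
  P_{\theta_R}\bigl(r^{d}_{t} \mid r^{d+1}_{t}\, s^{d}_{t-1}\, s^{d-1}_{t-1}\bigr)
  \cdot P_{\theta_S}\bigl(s^{d}_{t} \mid r^{d+1}_{t}\, r^{d}_{t}\, s^{d}_{t-1}\, s^{d-1}_{t}\bigr) \tag{9} \\
\mathrm{ppl} &= b^{\,-\frac{\log_b P(e_1 \ldots e_T)}{T}} \tag{25}
\end{align}
```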
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.3", "4", "4.1", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Parser as Syntactic Language Model in", "Incremental syntactic language model", "Incorporating a Syntactic Language Model", "Incremental Bounded-Memory Parsing with a Time Series Model", "Formal Parsing Model: Scoring Partial Translation Hypotheses", "Results", "Discussion" ] }
"GEM-SciDuet-train-1#paper-954#slide-14"
"Phrase Based Translation w ntactic"
"e string of n target language words e1. . .en et the first t words in e, where tn t set of all incremental parses of et def t subset of parses t that remain after parser pruning e argmax P( e) t1 t Motivation Syntactic LM Decoder Integration Results"
"e string of n target language words e1. . .en et the first t words in e, where tn t set of all incremental parses of et def t subset of parses t that remain after parser pruning e argmax P( e) t1 t Motivation Syntactic LM Decoder Integration Results"
[]
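To make the decoder-integration idea in this row concrete, here is a minimal sketch of how a phrase-based hypothesis could carry a syntactic language model state and apply the parser's transition function δ once per newly hypothesized target word, keeping only the final state for a multi-word phrase. All names (Hypothesis, extend, parser.delta) are hypothetical illustrations, not the authors' Moses/HHMM code.

```python
# Hypothetical sketch of carrying an incremental-parser state on decoder hypotheses.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Hypothesis:
    words: Tuple[str, ...]   # target words produced so far
    parser_state: object     # syntactic LM state (tau_t)
    syn_lm_score: float      # accumulated log-probability of the partial translation


def extend(hyp: Hypothesis, phrase: List[str], parser) -> Hypothesis:
    """Apply the parser transition delta once per new target word.

    Only the state reached after the last word of the phrase is stored on the
    new hypothesis; intermediate states are discarded.
    """
    state, score = hyp.parser_state, hyp.syn_lm_score
    for word in phrase:
        state, log_p = parser.delta(state, word)   # tau_t and log P(e_t | tau_{t-1})
        score += log_p
    return Hypothesis(hyp.words + tuple(phrase), state, score)
```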
"GEM-SciDuet-train-2#paper-957#slide-0"
"957"
"LINA: Identifying Comparable Documents from Wikipedia"
"This paper describes the LINA system for the BUCC 2015 shared track. Following (Enright and Kondrak, 2007), our system identify comparable documents by collecting counts of hapax words. We extend this method by filtering out document pairs sharing target documents using pigeonhole reasoning and cross-lingual information."
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53 ], "paper_content_text": [ "Introduction Parallel corpora, that is, collections of documents that are mutual translations, are used in many natural language processing applications, particularly for statistical machine translation.", "Building such resources is however exceedingly expensive, requiring highly skilled annotators or professional translators (Preiss, 2012) .", "Comparable corpora, that are sets of texts in two or more languages without being translations of each other, are often considered as a solution for the lack of parallel corpora, and many techniques have been proposed to extract parallel sentences (Munteanu et al., 2004; Abdul-Rauf and Schwenk, 2009; Smith et al., 2010) , or mine word translations (Fung, 1995; Rapp, 1999; Chiao and Zweigenbaum, 2002; Morin et al., 2007; Vulić and Moens, 2012) .", "Identifying comparable resources in a large amount of multilingual data remains a very challenging task.", "The purpose of the Building and Using Comparable Corpora (BUCC) 2015 shared task 1 is to provide the first evaluation of existing approaches for identifying comparable resources.", "More precisely, given a large collection of Wikipedia pages in several languages, the task is to identify the most similar pages across languages.", "1 https://comparable.limsi.fr/bucc2015/ In this paper, we describe the system that we developed for the BUCC 2015 shared track and show that a language agnostic approach can achieve promising results.", "Proposed Method The method we propose is based on (Enright and Kondrak, 2007) 's approach to parallel document identification.", "Documents are treated as bags of words, in which only blank separated strings that are at least four characters long and that appear only once in the document (hapax words) are indexed.", "Given a document in language A, the document in language B that share the largest number of these words is considered as parallel.", "Although very simple, this approach was shown to perform very well in detecting parallel documents in Wikipedia (Patry and Langlais, 2011) .", "The reason for this is that most hapax words are in practice proper nouns or numerical entities, which are often cognates.", "An example of hapax words extracted from a document is given in Table 1 .", "We purposely keep urls and special characters, as these are useful clues for identifying translated Wikipedia pages.", "website major gaston links flutist marcel debost states sources college crunelle conservatoire principal rampal united currently recorded chastain competitions music http://www.oberlin.edu/faculty/mdebost/ under international flutists jean-pierre profile moyse french repertoire amazon lives external *http://www.amazon.com/micheldebost/dp/b000s9zsk0 known teaches conservatory school professor studied kathleen orchestre replaced michel Here, we experiment with this approach for detecting near-parallel (comparable) documents.", "Following (Patry and Langlais, 2011) , we first search for the potential source-target document pairs.", "To do so, we select for each document in the source language, the N = 20 documents in the target language that share the largest number of hapax words (hereafter baseline).", "Scoring each pair of documents independently of other candidate pairs leads to several source documents being paired 
to a same target document.", "As indicated in Table 2 , the percentage of English articles that are paired with multiple source documents is high (57.3% for French and 60.4% for German).", "To address this problem, we remove potential multiple source documents by keeping the document pairs with the highest number of shared words (hereafter pigeonhole).", "This strategy greatly reduces the number of multiply assigned source documents from roughly 60% to 10%.", "This in turn removes needlessly paired documents and greatly improves the effectiveness of the method.", "In an attempt to break the remaining score ties between document pairs, we further extend our model to exploit cross-lingual information.", "When multiple source documents are paired to a given English document with the same score, we use the paired documents in a third language to order them (hereafter cross-lingual).", "Here we make two assumptions that are valid for the BUCC 2015 shared Task: (1) we have access to comparable documents in a third language, and (2) source documents should be paired 1-to-1 with target documents.", "Strategy An example of two French documents (doc fr 1 and doc fr 2) being paired to the same English document (doc en ) is given in Figure 1 .", "We use the German document (doc de ) paired with doc en and select the French document that shares the largest number of hapax words, which for this example is doc fr 2.", "This strategy further reduces the number of multiply assigned source documents from 10% to less than 4%.", "Experiments Experimental settings The BUCC 2015 shared task consists in returning for each Wikipedia page in a source language, up to five ranked suggestions to its linked page in English.", "Inter-language links, that is, links from a page in one language to an equivalent page in another language, are used to evaluate the effectiveness of the systems.", "Here, we only focus on the French-English and German-English pairs.", "Following the task guidelines, we use the following evaluation measures investigate the effectiveness of our method: • Mean Average Precision (MAP).", "Average of precisions computed at the point of each correctly paired document in the ranked list of paired documents.", "• Success (Succ.).", "Precision computed on the first returned paired document.", "• Precision at 5 (P@5).", "Precision computed on the 5 topmost paired documents.", "Results Results are presented in Table 3 .", "Overall, we observe that the two strategies that filter out multiply assigned source documents improve the performance of the method.", "The largest part of the improvement comes from using pigeonhole reasoning.", "The use of cross-lingual information to Table 3 : Performance in terms of MAP, success (Succ.)", "and precision at 5 (P@5) of our model.", "break ties between the remaining multiply assigned source documents only gives a small improvement.", "We assume that the limited number of potential source-target document pairs we use in our experiments (N = 20) is a reason for this.", "Interestingly, results are consistent across languages and datasets (test and train).", "Our best configuration, that is, with pigeonhole and crosslingual, achieves nearly 60% of success for the first returned pair.", "Here we show that a simple and straightforward approach that requires no language-specific resources still yields some interesting results.", "Discussion In this paper we described the LINA system for the BUCC 2015 shared track.", "We proposed to extend (Enright and Kondrak, 2007) 's 
approach to parallel document identification by filtering out document pairs sharing target documents using pigeonhole reasoning and cross-lingual information.", "Experimental results show that our system identifies comparable documents with a precision of about 60%.", "Scoring document pairs using the number of shared hapax words was first intended to be a baseline for comparison purposes.", "We tried a finer grained scoring approach relying on bilingual dictionaries and information retrieval weighting schemes.", "For reasonable computation time, we were unable to include low-frequency words in our system.", "Partial results were very low and we are still in the process of investigating the reasons for this." ] }
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4" ], "paper_header_content": [ "Introduction", "Proposed Method", "Experimental settings", "Results", "Discussion" ] }
"GEM-SciDuet-train-2#paper-957#slide-0"
"Introduction"
"I How far can we go with a language agnostic model? I We experiment with [Enright and Kondrak, 2007]s parallel document identification I We adapt the method to the BUCC-2015 Shared task based on two assumptions: Source documents should be paired 1-to-1 with target documents We have access to comparable documents in several languages"
"I How far can we go with a language agnostic model? I We experiment with [Enright and Kondrak, 2007]s parallel document identification I We adapt the method to the BUCC-2015 Shared task based on two assumptions: Source documents should be paired 1-to-1 with target documents We have access to comparable documents in several languages"
[]
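As a rough illustration of the hapax-word indexing used by this system, the sketch below keeps blank-separated tokens of at least four characters that occur exactly once in a document; it is a plausible reading of the described method, not the LINA implementation.

```python
from collections import Counter


def hapax_words(text: str, min_len: int = 4) -> set:
    """Blank-separated tokens of length >= min_len that occur exactly once.

    URLs and special characters are deliberately kept, since they are useful
    clues for identifying translated Wikipedia pages.
    """
    counts = Counter(tok for tok in text.split() if len(tok) >= min_len)
    return {tok for tok, n in counts.items() if n == 1}
```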
"GEM-SciDuet-train-2#paper-957#slide-1"
"957"
"LINA: Identifying Comparable Documents from Wikipedia"
"This paper describes the LINA system for the BUCC 2015 shared track. Following (Enright and Kondrak, 2007), our system identify comparable documents by collecting counts of hapax words. We extend this method by filtering out document pairs sharing target documents using pigeonhole reasoning and cross-lingual information."
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53 ], "paper_content_text": [ "Introduction Parallel corpora, that is, collections of documents that are mutual translations, are used in many natural language processing applications, particularly for statistical machine translation.", "Building such resources is however exceedingly expensive, requiring highly skilled annotators or professional translators (Preiss, 2012) .", "Comparable corpora, that are sets of texts in two or more languages without being translations of each other, are often considered as a solution for the lack of parallel corpora, and many techniques have been proposed to extract parallel sentences (Munteanu et al., 2004; Abdul-Rauf and Schwenk, 2009; Smith et al., 2010) , or mine word translations (Fung, 1995; Rapp, 1999; Chiao and Zweigenbaum, 2002; Morin et al., 2007; Vulić and Moens, 2012) .", "Identifying comparable resources in a large amount of multilingual data remains a very challenging task.", "The purpose of the Building and Using Comparable Corpora (BUCC) 2015 shared task 1 is to provide the first evaluation of existing approaches for identifying comparable resources.", "More precisely, given a large collection of Wikipedia pages in several languages, the task is to identify the most similar pages across languages.", "1 https://comparable.limsi.fr/bucc2015/ In this paper, we describe the system that we developed for the BUCC 2015 shared track and show that a language agnostic approach can achieve promising results.", "Proposed Method The method we propose is based on (Enright and Kondrak, 2007) 's approach to parallel document identification.", "Documents are treated as bags of words, in which only blank separated strings that are at least four characters long and that appear only once in the document (hapax words) are indexed.", "Given a document in language A, the document in language B that share the largest number of these words is considered as parallel.", "Although very simple, this approach was shown to perform very well in detecting parallel documents in Wikipedia (Patry and Langlais, 2011) .", "The reason for this is that most hapax words are in practice proper nouns or numerical entities, which are often cognates.", "An example of hapax words extracted from a document is given in Table 1 .", "We purposely keep urls and special characters, as these are useful clues for identifying translated Wikipedia pages.", "website major gaston links flutist marcel debost states sources college crunelle conservatoire principal rampal united currently recorded chastain competitions music http://www.oberlin.edu/faculty/mdebost/ under international flutists jean-pierre profile moyse french repertoire amazon lives external *http://www.amazon.com/micheldebost/dp/b000s9zsk0 known teaches conservatory school professor studied kathleen orchestre replaced michel Here, we experiment with this approach for detecting near-parallel (comparable) documents.", "Following (Patry and Langlais, 2011) , we first search for the potential source-target document pairs.", "To do so, we select for each document in the source language, the N = 20 documents in the target language that share the largest number of hapax words (hereafter baseline).", "Scoring each pair of documents independently of other candidate pairs leads to several source documents being paired 
to a same target document.", "As indicated in Table 2 , the percentage of English articles that are paired with multiple source documents is high (57.3% for French and 60.4% for German).", "To address this problem, we remove potential multiple source documents by keeping the document pairs with the highest number of shared words (hereafter pigeonhole).", "This strategy greatly reduces the number of multiply assigned source documents from roughly 60% to 10%.", "This in turn removes needlessly paired documents and greatly improves the effectiveness of the method.", "In an attempt to break the remaining score ties between document pairs, we further extend our model to exploit cross-lingual information.", "When multiple source documents are paired to a given English document with the same score, we use the paired documents in a third language to order them (hereafter cross-lingual).", "Here we make two assumptions that are valid for the BUCC 2015 shared Task: (1) we have access to comparable documents in a third language, and (2) source documents should be paired 1-to-1 with target documents.", "Strategy An example of two French documents (doc fr 1 and doc fr 2) being paired to the same English document (doc en ) is given in Figure 1 .", "We use the German document (doc de ) paired with doc en and select the French document that shares the largest number of hapax words, which for this example is doc fr 2.", "This strategy further reduces the number of multiply assigned source documents from 10% to less than 4%.", "Experiments Experimental settings The BUCC 2015 shared task consists in returning for each Wikipedia page in a source language, up to five ranked suggestions to its linked page in English.", "Inter-language links, that is, links from a page in one language to an equivalent page in another language, are used to evaluate the effectiveness of the systems.", "Here, we only focus on the French-English and German-English pairs.", "Following the task guidelines, we use the following evaluation measures investigate the effectiveness of our method: • Mean Average Precision (MAP).", "Average of precisions computed at the point of each correctly paired document in the ranked list of paired documents.", "• Success (Succ.).", "Precision computed on the first returned paired document.", "• Precision at 5 (P@5).", "Precision computed on the 5 topmost paired documents.", "Results Results are presented in Table 3 .", "Overall, we observe that the two strategies that filter out multiply assigned source documents improve the performance of the method.", "The largest part of the improvement comes from using pigeonhole reasoning.", "The use of cross-lingual information to Table 3 : Performance in terms of MAP, success (Succ.)", "and precision at 5 (P@5) of our model.", "break ties between the remaining multiply assigned source documents only gives a small improvement.", "We assume that the limited number of potential source-target document pairs we use in our experiments (N = 20) is a reason for this.", "Interestingly, results are consistent across languages and datasets (test and train).", "Our best configuration, that is, with pigeonhole and crosslingual, achieves nearly 60% of success for the first returned pair.", "Here we show that a simple and straightforward approach that requires no language-specific resources still yields some interesting results.", "Discussion In this paper we described the LINA system for the BUCC 2015 shared track.", "We proposed to extend (Enright and Kondrak, 2007) 's 
approach to parallel document identification by filtering out document pairs sharing target documents using pigeonhole reasoning and cross-lingual information.", "Experimental results show that our system identifies comparable documents with a precision of about 60%.", "Scoring document pairs using the number of shared hapax words was first intended to be a baseline for comparison purposes.", "We tried a finer grained scoring approach relying on bilingual dictionaries and information retrieval weighting schemes.", "For reasonable computation time, we were unable to include low-frequency words in our system.", "Partial results were very low and we are still in the process of investigating the reasons for this." ] }
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4" ], "paper_header_content": [ "Introduction", "Proposed Method", "Experimental settings", "Results", "Discussion" ] }
"GEM-SciDuet-train-2#paper-957#slide-1"
"Method"
"I Fast parallel document identification [Enright and Kondrak, 2007] I Documents = bags of hapax words I Words = blank separated strings that are 4+ characters long I Given a document in language A, the document in language B that shares the largest number of words is considered as parallel I Works very well for parallel documents I 80% precision on Wikipedia [Patry and Langlais, 2011] I We use this approach as baseline for detecting comparable documents"
"I Fast parallel document identification [Enright and Kondrak, 2007] I Documents = bags of hapax words I Words = blank separated strings that are 4+ characters long I Given a document in language A, the document in language B that shares the largest number of words is considered as parallel I Works very well for parallel documents I 80% precision on Wikipedia [Patry and Langlais, 2011] I We use this approach as baseline for detecting comparable documents"
[]
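A minimal sketch of the baseline pairing step summarized on this slide, assuming hapax-word sets have already been extracted per document (as in the hapax_words helper sketched earlier); the function name and data layout are illustrative only. For each source document, the N = 20 target documents sharing the most hapax words are kept as candidates.

```python
def candidate_pairs(source_hapax: dict, target_hapax: dict, n: int = 20) -> dict:
    """source_hapax / target_hapax: document id -> set of hapax words.

    Returns, for each source id, its top-n (shared-word count, target id) candidates.
    """
    candidates = {}
    for sid, s_words in source_hapax.items():
        scored = [(len(s_words & t_words), tid) for tid, t_words in target_hapax.items()]
        scored.sort(reverse=True)          # most shared hapax words first
        candidates[sid] = scored[:n]
    return candidates
```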
"GEM-SciDuet-train-2#paper-957#slide-2"
"957"
"LINA: Identifying Comparable Documents from Wikipedia"
"This paper describes the LINA system for the BUCC 2015 shared track. Following (Enright and Kondrak, 2007), our system identify comparable documents by collecting counts of hapax words. We extend this method by filtering out document pairs sharing target documents using pigeonhole reasoning and cross-lingual information."
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53 ], "paper_content_text": [ "Introduction Parallel corpora, that is, collections of documents that are mutual translations, are used in many natural language processing applications, particularly for statistical machine translation.", "Building such resources is however exceedingly expensive, requiring highly skilled annotators or professional translators (Preiss, 2012) .", "Comparable corpora, that are sets of texts in two or more languages without being translations of each other, are often considered as a solution for the lack of parallel corpora, and many techniques have been proposed to extract parallel sentences (Munteanu et al., 2004; Abdul-Rauf and Schwenk, 2009; Smith et al., 2010) , or mine word translations (Fung, 1995; Rapp, 1999; Chiao and Zweigenbaum, 2002; Morin et al., 2007; Vulić and Moens, 2012) .", "Identifying comparable resources in a large amount of multilingual data remains a very challenging task.", "The purpose of the Building and Using Comparable Corpora (BUCC) 2015 shared task 1 is to provide the first evaluation of existing approaches for identifying comparable resources.", "More precisely, given a large collection of Wikipedia pages in several languages, the task is to identify the most similar pages across languages.", "1 https://comparable.limsi.fr/bucc2015/ In this paper, we describe the system that we developed for the BUCC 2015 shared track and show that a language agnostic approach can achieve promising results.", "Proposed Method The method we propose is based on (Enright and Kondrak, 2007) 's approach to parallel document identification.", "Documents are treated as bags of words, in which only blank separated strings that are at least four characters long and that appear only once in the document (hapax words) are indexed.", "Given a document in language A, the document in language B that share the largest number of these words is considered as parallel.", "Although very simple, this approach was shown to perform very well in detecting parallel documents in Wikipedia (Patry and Langlais, 2011) .", "The reason for this is that most hapax words are in practice proper nouns or numerical entities, which are often cognates.", "An example of hapax words extracted from a document is given in Table 1 .", "We purposely keep urls and special characters, as these are useful clues for identifying translated Wikipedia pages.", "website major gaston links flutist marcel debost states sources college crunelle conservatoire principal rampal united currently recorded chastain competitions music http://www.oberlin.edu/faculty/mdebost/ under international flutists jean-pierre profile moyse french repertoire amazon lives external *http://www.amazon.com/micheldebost/dp/b000s9zsk0 known teaches conservatory school professor studied kathleen orchestre replaced michel Here, we experiment with this approach for detecting near-parallel (comparable) documents.", "Following (Patry and Langlais, 2011) , we first search for the potential source-target document pairs.", "To do so, we select for each document in the source language, the N = 20 documents in the target language that share the largest number of hapax words (hereafter baseline).", "Scoring each pair of documents independently of other candidate pairs leads to several source documents being paired 
to a same target document.", "As indicated in Table 2 , the percentage of English articles that are paired with multiple source documents is high (57.3% for French and 60.4% for German).", "To address this problem, we remove potential multiple source documents by keeping the document pairs with the highest number of shared words (hereafter pigeonhole).", "This strategy greatly reduces the number of multiply assigned source documents from roughly 60% to 10%.", "This in turn removes needlessly paired documents and greatly improves the effectiveness of the method.", "In an attempt to break the remaining score ties between document pairs, we further extend our model to exploit cross-lingual information.", "When multiple source documents are paired to a given English document with the same score, we use the paired documents in a third language to order them (hereafter cross-lingual).", "Here we make two assumptions that are valid for the BUCC 2015 shared Task: (1) we have access to comparable documents in a third language, and (2) source documents should be paired 1-to-1 with target documents.", "Strategy An example of two French documents (doc fr 1 and doc fr 2) being paired to the same English document (doc en ) is given in Figure 1 .", "We use the German document (doc de ) paired with doc en and select the French document that shares the largest number of hapax words, which for this example is doc fr 2.", "This strategy further reduces the number of multiply assigned source documents from 10% to less than 4%.", "Experiments Experimental settings The BUCC 2015 shared task consists in returning for each Wikipedia page in a source language, up to five ranked suggestions to its linked page in English.", "Inter-language links, that is, links from a page in one language to an equivalent page in another language, are used to evaluate the effectiveness of the systems.", "Here, we only focus on the French-English and German-English pairs.", "Following the task guidelines, we use the following evaluation measures investigate the effectiveness of our method: • Mean Average Precision (MAP).", "Average of precisions computed at the point of each correctly paired document in the ranked list of paired documents.", "• Success (Succ.).", "Precision computed on the first returned paired document.", "• Precision at 5 (P@5).", "Precision computed on the 5 topmost paired documents.", "Results Results are presented in Table 3 .", "Overall, we observe that the two strategies that filter out multiply assigned source documents improve the performance of the method.", "The largest part of the improvement comes from using pigeonhole reasoning.", "The use of cross-lingual information to Table 3 : Performance in terms of MAP, success (Succ.)", "and precision at 5 (P@5) of our model.", "break ties between the remaining multiply assigned source documents only gives a small improvement.", "We assume that the limited number of potential source-target document pairs we use in our experiments (N = 20) is a reason for this.", "Interestingly, results are consistent across languages and datasets (test and train).", "Our best configuration, that is, with pigeonhole and crosslingual, achieves nearly 60% of success for the first returned pair.", "Here we show that a simple and straightforward approach that requires no language-specific resources still yields some interesting results.", "Discussion In this paper we described the LINA system for the BUCC 2015 shared track.", "We proposed to extend (Enright and Kondrak, 2007) 's 
approach to parallel document identification by filtering out document pairs sharing target documents using pigeonhole reasoning and cross-lingual information.", "Experimental results show that our system identifies comparable documents with a precision of about 60%.", "Scoring document pairs using the number of shared hapax words was first intended to be a baseline for comparison purposes.", "We tried a finer grained scoring approach relying on bilingual dictionaries and information retrieval weighting schemes.", "For reasonable computation time, we were unable to include low-frequency words in our system.", "Partial results were very low and we are still in the process of investigating the reasons for this." ] }
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4" ], "paper_header_content": [ "Introduction", "Proposed Method", "Experimental settings", "Results", "Discussion" ] }
"GEM-SciDuet-train-2#paper-957#slide-2"
"Improvements using 1 to 1 alignments"
"I In baseline, document pairs are scored independently I Multiple source documents are paired to a same target document I 60% of English pages are paired with multiple pages in French or German I We remove multiply assigned source documents using pigeonhole reasoning I From 60% to 11% of multiply assigned source documents"
"I In baseline, document pairs are scored independently I Multiple source documents are paired to a same target document I 60% of English pages are paired with multiple pages in French or German I We remove multiply assigned source documents using pigeonhole reasoning I From 60% to 11% of multiply assigned source documents"
[]
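One way to picture the pigeonhole filtering on this slide is a greedy 1-to-1 assignment in which the highest-scoring pairs claim target documents first; this is a sketch of that idea under stated assumptions, not necessarily the authors' exact procedure.

```python
def pigeonhole(candidates: dict) -> dict:
    """Greedy 1-to-1 assignment: pairs with the most shared hapax words claim targets first.

    candidates: source id -> list of (shared-word count, target id), as produced above.
    Returns source id -> (target id, score) with each target used at most once.
    """
    all_pairs = [(score, sid, tid)
                 for sid, cands in candidates.items()
                 for score, tid in cands]
    assignment, used_targets = {}, set()
    for score, sid, tid in sorted(all_pairs, reverse=True):
        if sid not in assignment and tid not in used_targets:
            assignment[sid] = (tid, score)
            used_targets.add(tid)
    return assignment
```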
"GEM-SciDuet-train-2#paper-957#slide-3"
"957"
"LINA: Identifying Comparable Documents from Wikipedia"
"This paper describes the LINA system for the BUCC 2015 shared track. Following (Enright and Kondrak, 2007), our system identify comparable documents by collecting counts of hapax words. We extend this method by filtering out document pairs sharing target documents using pigeonhole reasoning and cross-lingual information."
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53 ], "paper_content_text": [ "Introduction Parallel corpora, that is, collections of documents that are mutual translations, are used in many natural language processing applications, particularly for statistical machine translation.", "Building such resources is however exceedingly expensive, requiring highly skilled annotators or professional translators (Preiss, 2012) .", "Comparable corpora, that are sets of texts in two or more languages without being translations of each other, are often considered as a solution for the lack of parallel corpora, and many techniques have been proposed to extract parallel sentences (Munteanu et al., 2004; Abdul-Rauf and Schwenk, 2009; Smith et al., 2010) , or mine word translations (Fung, 1995; Rapp, 1999; Chiao and Zweigenbaum, 2002; Morin et al., 2007; Vulić and Moens, 2012) .", "Identifying comparable resources in a large amount of multilingual data remains a very challenging task.", "The purpose of the Building and Using Comparable Corpora (BUCC) 2015 shared task 1 is to provide the first evaluation of existing approaches for identifying comparable resources.", "More precisely, given a large collection of Wikipedia pages in several languages, the task is to identify the most similar pages across languages.", "1 https://comparable.limsi.fr/bucc2015/ In this paper, we describe the system that we developed for the BUCC 2015 shared track and show that a language agnostic approach can achieve promising results.", "Proposed Method The method we propose is based on (Enright and Kondrak, 2007) 's approach to parallel document identification.", "Documents are treated as bags of words, in which only blank separated strings that are at least four characters long and that appear only once in the document (hapax words) are indexed.", "Given a document in language A, the document in language B that share the largest number of these words is considered as parallel.", "Although very simple, this approach was shown to perform very well in detecting parallel documents in Wikipedia (Patry and Langlais, 2011) .", "The reason for this is that most hapax words are in practice proper nouns or numerical entities, which are often cognates.", "An example of hapax words extracted from a document is given in Table 1 .", "We purposely keep urls and special characters, as these are useful clues for identifying translated Wikipedia pages.", "website major gaston links flutist marcel debost states sources college crunelle conservatoire principal rampal united currently recorded chastain competitions music http://www.oberlin.edu/faculty/mdebost/ under international flutists jean-pierre profile moyse french repertoire amazon lives external *http://www.amazon.com/micheldebost/dp/b000s9zsk0 known teaches conservatory school professor studied kathleen orchestre replaced michel Here, we experiment with this approach for detecting near-parallel (comparable) documents.", "Following (Patry and Langlais, 2011) , we first search for the potential source-target document pairs.", "To do so, we select for each document in the source language, the N = 20 documents in the target language that share the largest number of hapax words (hereafter baseline).", "Scoring each pair of documents independently of other candidate pairs leads to several source documents being paired 
to a same target document.", "As indicated in Table 2 , the percentage of English articles that are paired with multiple source documents is high (57.3% for French and 60.4% for German).", "To address this problem, we remove potential multiple source documents by keeping the document pairs with the highest number of shared words (hereafter pigeonhole).", "This strategy greatly reduces the number of multiply assigned source documents from roughly 60% to 10%.", "This in turn removes needlessly paired documents and greatly improves the effectiveness of the method.", "In an attempt to break the remaining score ties between document pairs, we further extend our model to exploit cross-lingual information.", "When multiple source documents are paired to a given English document with the same score, we use the paired documents in a third language to order them (hereafter cross-lingual).", "Here we make two assumptions that are valid for the BUCC 2015 shared Task: (1) we have access to comparable documents in a third language, and (2) source documents should be paired 1-to-1 with target documents.", "Strategy An example of two French documents (doc fr 1 and doc fr 2) being paired to the same English document (doc en ) is given in Figure 1 .", "We use the German document (doc de ) paired with doc en and select the French document that shares the largest number of hapax words, which for this example is doc fr 2.", "This strategy further reduces the number of multiply assigned source documents from 10% to less than 4%.", "Experiments Experimental settings The BUCC 2015 shared task consists in returning for each Wikipedia page in a source language, up to five ranked suggestions to its linked page in English.", "Inter-language links, that is, links from a page in one language to an equivalent page in another language, are used to evaluate the effectiveness of the systems.", "Here, we only focus on the French-English and German-English pairs.", "Following the task guidelines, we use the following evaluation measures investigate the effectiveness of our method: • Mean Average Precision (MAP).", "Average of precisions computed at the point of each correctly paired document in the ranked list of paired documents.", "• Success (Succ.).", "Precision computed on the first returned paired document.", "• Precision at 5 (P@5).", "Precision computed on the 5 topmost paired documents.", "Results Results are presented in Table 3 .", "Overall, we observe that the two strategies that filter out multiply assigned source documents improve the performance of the method.", "The largest part of the improvement comes from using pigeonhole reasoning.", "The use of cross-lingual information to Table 3 : Performance in terms of MAP, success (Succ.)", "and precision at 5 (P@5) of our model.", "break ties between the remaining multiply assigned source documents only gives a small improvement.", "We assume that the limited number of potential source-target document pairs we use in our experiments (N = 20) is a reason for this.", "Interestingly, results are consistent across languages and datasets (test and train).", "Our best configuration, that is, with pigeonhole and crosslingual, achieves nearly 60% of success for the first returned pair.", "Here we show that a simple and straightforward approach that requires no language-specific resources still yields some interesting results.", "Discussion In this paper we described the LINA system for the BUCC 2015 shared track.", "We proposed to extend (Enright and Kondrak, 2007) 's 
approach to parallel document identification by filtering out document pairs sharing target documents using pigeonhole reasoning and cross-lingual information.", "Experimental results show that our system identifies comparable documents with a precision of about 60%.", "Scoring document pairs using the number of shared hapax words was first intended to be a baseline for comparison purposes.", "We tried a finer grained scoring approach relying on bilingual dictionaries and information retrieval weighting schemes.", "For reasonable computation time, we were unable to include low-frequency words in our system.", "Partial results were very low and we are still in the process of investigating the reasons for this." ] }
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4" ], "paper_header_content": [ "Introduction", "Proposed Method", "Experimental settings", "Results", "Discussion" ] }
"GEM-SciDuet-train-2#paper-957#slide-3"
"Improvements using cross lingual information"
"I Simple document weighting function score ties I We break the remaining score ties using a third language I From 11% to less than 4% of multiply assigned source documents"
"I Simple document weighting function score ties I We break the remaining score ties using a third language I From 11% to less than 4% of multiply assigned source documents"
[]
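The cross-lingual tie-breaking on this slide can be sketched as follows: when several source documents tie for the same English page, the third-language (e.g. German) document already paired with that page votes for the source document it shares the most hapax words with. Inputs are precomputed hapax-word sets; all names are hypothetical.

```python
def break_tie(tied_source_ids, en_id, en_to_de, source_hapax, de_hapax):
    """Prefer the tied source document that shares the most hapax words with the
    third-language document already paired with the same English page.

    source_hapax / de_hapax map document ids to their sets of hapax words;
    en_to_de maps an English page id to its paired German page id.
    """
    de_id = en_to_de.get(en_id)
    if de_id is None:
        return tied_source_ids[0]        # no third-language evidence: keep current order
    de_words = de_hapax[de_id]
    return max(tied_source_ids, key=lambda sid: len(source_hapax[sid] & de_words))
```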
"GEM-SciDuet-train-2#paper-957#slide-4"
"957"
"LINA: Identifying Comparable Documents from Wikipedia"
"This paper describes the LINA system for the BUCC 2015 shared track. Following (Enright and Kondrak, 2007), our system identify comparable documents by collecting counts of hapax words. We extend this method by filtering out document pairs sharing target documents using pigeonhole reasoning and cross-lingual information."
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53 ], "paper_content_text": [ "Introduction Parallel corpora, that is, collections of documents that are mutual translations, are used in many natural language processing applications, particularly for statistical machine translation.", "Building such resources is however exceedingly expensive, requiring highly skilled annotators or professional translators (Preiss, 2012) .", "Comparable corpora, that are sets of texts in two or more languages without being translations of each other, are often considered as a solution for the lack of parallel corpora, and many techniques have been proposed to extract parallel sentences (Munteanu et al., 2004; Abdul-Rauf and Schwenk, 2009; Smith et al., 2010) , or mine word translations (Fung, 1995; Rapp, 1999; Chiao and Zweigenbaum, 2002; Morin et al., 2007; Vulić and Moens, 2012) .", "Identifying comparable resources in a large amount of multilingual data remains a very challenging task.", "The purpose of the Building and Using Comparable Corpora (BUCC) 2015 shared task 1 is to provide the first evaluation of existing approaches for identifying comparable resources.", "More precisely, given a large collection of Wikipedia pages in several languages, the task is to identify the most similar pages across languages.", "1 https://comparable.limsi.fr/bucc2015/ In this paper, we describe the system that we developed for the BUCC 2015 shared track and show that a language agnostic approach can achieve promising results.", "Proposed Method The method we propose is based on (Enright and Kondrak, 2007) 's approach to parallel document identification.", "Documents are treated as bags of words, in which only blank separated strings that are at least four characters long and that appear only once in the document (hapax words) are indexed.", "Given a document in language A, the document in language B that share the largest number of these words is considered as parallel.", "Although very simple, this approach was shown to perform very well in detecting parallel documents in Wikipedia (Patry and Langlais, 2011) .", "The reason for this is that most hapax words are in practice proper nouns or numerical entities, which are often cognates.", "An example of hapax words extracted from a document is given in Table 1 .", "We purposely keep urls and special characters, as these are useful clues for identifying translated Wikipedia pages.", "website major gaston links flutist marcel debost states sources college crunelle conservatoire principal rampal united currently recorded chastain competitions music http://www.oberlin.edu/faculty/mdebost/ under international flutists jean-pierre profile moyse french repertoire amazon lives external *http://www.amazon.com/micheldebost/dp/b000s9zsk0 known teaches conservatory school professor studied kathleen orchestre replaced michel Here, we experiment with this approach for detecting near-parallel (comparable) documents.", "Following (Patry and Langlais, 2011) , we first search for the potential source-target document pairs.", "To do so, we select for each document in the source language, the N = 20 documents in the target language that share the largest number of hapax words (hereafter baseline).", "Scoring each pair of documents independently of other candidate pairs leads to several source documents being paired 
to a same target document.", "As indicated in Table 2 , the percentage of English articles that are paired with multiple source documents is high (57.3% for French and 60.4% for German).", "To address this problem, we remove potential multiple source documents by keeping the document pairs with the highest number of shared words (hereafter pigeonhole).", "This strategy greatly reduces the number of multiply assigned source documents from roughly 60% to 10%.", "This in turn removes needlessly paired documents and greatly improves the effectiveness of the method.", "In an attempt to break the remaining score ties between document pairs, we further extend our model to exploit cross-lingual information.", "When multiple source documents are paired to a given English document with the same score, we use the paired documents in a third language to order them (hereafter cross-lingual).", "Here we make two assumptions that are valid for the BUCC 2015 shared Task: (1) we have access to comparable documents in a third language, and (2) source documents should be paired 1-to-1 with target documents.", "Strategy An example of two French documents (doc fr 1 and doc fr 2) being paired to the same English document (doc en ) is given in Figure 1 .", "We use the German document (doc de ) paired with doc en and select the French document that shares the largest number of hapax words, which for this example is doc fr 2.", "This strategy further reduces the number of multiply assigned source documents from 10% to less than 4%.", "Experiments Experimental settings The BUCC 2015 shared task consists in returning for each Wikipedia page in a source language, up to five ranked suggestions to its linked page in English.", "Inter-language links, that is, links from a page in one language to an equivalent page in another language, are used to evaluate the effectiveness of the systems.", "Here, we only focus on the French-English and German-English pairs.", "Following the task guidelines, we use the following evaluation measures investigate the effectiveness of our method: • Mean Average Precision (MAP).", "Average of precisions computed at the point of each correctly paired document in the ranked list of paired documents.", "• Success (Succ.).", "Precision computed on the first returned paired document.", "• Precision at 5 (P@5).", "Precision computed on the 5 topmost paired documents.", "Results Results are presented in Table 3 .", "Overall, we observe that the two strategies that filter out multiply assigned source documents improve the performance of the method.", "The largest part of the improvement comes from using pigeonhole reasoning.", "The use of cross-lingual information to Table 3 : Performance in terms of MAP, success (Succ.)", "and precision at 5 (P@5) of our model.", "break ties between the remaining multiply assigned source documents only gives a small improvement.", "We assume that the limited number of potential source-target document pairs we use in our experiments (N = 20) is a reason for this.", "Interestingly, results are consistent across languages and datasets (test and train).", "Our best configuration, that is, with pigeonhole and crosslingual, achieves nearly 60% of success for the first returned pair.", "Here we show that a simple and straightforward approach that requires no language-specific resources still yields some interesting results.", "Discussion In this paper we described the LINA system for the BUCC 2015 shared track.", "We proposed to extend (Enright and Kondrak, 2007) 's 
approach to parallel document identification by filtering out document pairs sharing target documents using pigeonhole reasoning and cross-lingual information.", "Experimental results show that our system identifies comparable documents with a precision of about 60%.", "Scoring document pairs using the number of shared hapax words was first intended to be a baseline for comparison purposes.", "We tried a finer grained scoring approach relying on bilingual dictionaries and information retrieval weighting schemes.", "For reasonable computation time, we were unable to include low-frequency words in our system.", "Partial results were very low and we are still in the process of investigating the reasons for this." ] }
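The paper content above reduces each document to its hapax words (blank-separated strings of at least four characters that occur exactly once in the document) and pairs a source document with the target documents sharing the most of them. The following is a minimal Python sketch of that baseline step; the function names, the dict-based document collections, and everything not stated in the text are assumptions for illustration, not the authors' code.

```python
from collections import Counter

def hapax_words(text):
    # Blank-separated strings of at least four characters that occur exactly
    # once in the document; URLs and special characters are deliberately kept.
    counts = Counter(text.split())
    return {w for w, c in counts.items() if c == 1 and len(w) >= 4}

def top_candidates(source_docs, target_docs, n=20):
    # Baseline strategy: for each source document, keep the N = 20 target
    # documents that share the largest number of hapax words with it.
    target_hapax = {tid: hapax_words(t) for tid, t in target_docs.items()}
    candidates = {}
    for sid, text in source_docs.items():
        s_hapax = hapax_words(text)
        scored = sorted(((len(s_hapax & t_hapax), tid)
                         for tid, t_hapax in target_hapax.items()), reverse=True)
        candidates[sid] = [(tid, score) for score, tid in scored[:n]]
    return candidates
```

Scoring each pair independently is what produces the multiply assigned target documents discussed in the same content, which the pigeonhole and cross-lingual strategies then filter out.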
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4" ], "paper_header_content": [ "Introduction", "Proposed Method", "Experimental settings", "Results", "Discussion" ] }
"GEM-SciDuet-train-2#paper-957#slide-4"
"Experimental settings"
"I We focus on the French-English and German-English pairs I The following measures are considered relevant I Mean Average Precision (MAP)"
"I We focus on the French-English and German-English pairs I The following measures are considered relevant I Mean Average Precision (MAP)"
[]
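The "Experimental settings" slide and the corresponding paper section list three evaluation measures: Mean Average Precision, Success (precision of the first returned pair) and Precision at 5. A small sketch of how they could be computed from ranked suggestions follows; the input format (one gold English page per source page, taken from inter-language links) is an assumption, and with a single gold link per page P@5 is sometimes reported instead as "correct page within the top 5".

```python
def evaluate(ranked, gold):
    # ranked: source page id -> up to five suggested English page ids, best first
    # gold:   source page id -> linked English page id (inter-language links)
    ap, succ, p5 = [], [], []
    for sid, suggestions in ranked.items():
        correct = gold.get(sid)
        hits = [rank for rank, tid in enumerate(suggestions, start=1) if tid == correct]
        ap.append(1.0 / hits[0] if hits else 0.0)   # single relevant document per query
        succ.append(1.0 if suggestions and suggestions[0] == correct else 0.0)
        p5.append(len(hits) / 5.0)                  # literal precision over the top 5
    n = max(len(ranked), 1)
    return sum(ap) / n, sum(succ) / n, sum(p5) / n
```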
"GEM-SciDuet-train-2#paper-957#slide-5"
"957"
"LINA: Identifying Comparable Documents from Wikipedia"
"This paper describes the LINA system for the BUCC 2015 shared track. Following (Enright and Kondrak, 2007), our system identify comparable documents by collecting counts of hapax words. We extend this method by filtering out document pairs sharing target documents using pigeonhole reasoning and cross-lingual information."
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53 ], "paper_content_text": [ "Introduction Parallel corpora, that is, collections of documents that are mutual translations, are used in many natural language processing applications, particularly for statistical machine translation.", "Building such resources is however exceedingly expensive, requiring highly skilled annotators or professional translators (Preiss, 2012) .", "Comparable corpora, that are sets of texts in two or more languages without being translations of each other, are often considered as a solution for the lack of parallel corpora, and many techniques have been proposed to extract parallel sentences (Munteanu et al., 2004; Abdul-Rauf and Schwenk, 2009; Smith et al., 2010) , or mine word translations (Fung, 1995; Rapp, 1999; Chiao and Zweigenbaum, 2002; Morin et al., 2007; Vulić and Moens, 2012) .", "Identifying comparable resources in a large amount of multilingual data remains a very challenging task.", "The purpose of the Building and Using Comparable Corpora (BUCC) 2015 shared task 1 is to provide the first evaluation of existing approaches for identifying comparable resources.", "More precisely, given a large collection of Wikipedia pages in several languages, the task is to identify the most similar pages across languages.", "1 https://comparable.limsi.fr/bucc2015/ In this paper, we describe the system that we developed for the BUCC 2015 shared track and show that a language agnostic approach can achieve promising results.", "Proposed Method The method we propose is based on (Enright and Kondrak, 2007) 's approach to parallel document identification.", "Documents are treated as bags of words, in which only blank separated strings that are at least four characters long and that appear only once in the document (hapax words) are indexed.", "Given a document in language A, the document in language B that share the largest number of these words is considered as parallel.", "Although very simple, this approach was shown to perform very well in detecting parallel documents in Wikipedia (Patry and Langlais, 2011) .", "The reason for this is that most hapax words are in practice proper nouns or numerical entities, which are often cognates.", "An example of hapax words extracted from a document is given in Table 1 .", "We purposely keep urls and special characters, as these are useful clues for identifying translated Wikipedia pages.", "website major gaston links flutist marcel debost states sources college crunelle conservatoire principal rampal united currently recorded chastain competitions music http://www.oberlin.edu/faculty/mdebost/ under international flutists jean-pierre profile moyse french repertoire amazon lives external *http://www.amazon.com/micheldebost/dp/b000s9zsk0 known teaches conservatory school professor studied kathleen orchestre replaced michel Here, we experiment with this approach for detecting near-parallel (comparable) documents.", "Following (Patry and Langlais, 2011) , we first search for the potential source-target document pairs.", "To do so, we select for each document in the source language, the N = 20 documents in the target language that share the largest number of hapax words (hereafter baseline).", "Scoring each pair of documents independently of other candidate pairs leads to several source documents being paired 
to a same target document.", "As indicated in Table 2 , the percentage of English articles that are paired with multiple source documents is high (57.3% for French and 60.4% for German).", "To address this problem, we remove potential multiple source documents by keeping the document pairs with the highest number of shared words (hereafter pigeonhole).", "This strategy greatly reduces the number of multiply assigned source documents from roughly 60% to 10%.", "This in turn removes needlessly paired documents and greatly improves the effectiveness of the method.", "In an attempt to break the remaining score ties between document pairs, we further extend our model to exploit cross-lingual information.", "When multiple source documents are paired to a given English document with the same score, we use the paired documents in a third language to order them (hereafter cross-lingual).", "Here we make two assumptions that are valid for the BUCC 2015 shared Task: (1) we have access to comparable documents in a third language, and (2) source documents should be paired 1-to-1 with target documents.", "Strategy An example of two French documents (doc fr 1 and doc fr 2) being paired to the same English document (doc en ) is given in Figure 1 .", "We use the German document (doc de ) paired with doc en and select the French document that shares the largest number of hapax words, which for this example is doc fr 2.", "This strategy further reduces the number of multiply assigned source documents from 10% to less than 4%.", "Experiments Experimental settings The BUCC 2015 shared task consists in returning for each Wikipedia page in a source language, up to five ranked suggestions to its linked page in English.", "Inter-language links, that is, links from a page in one language to an equivalent page in another language, are used to evaluate the effectiveness of the systems.", "Here, we only focus on the French-English and German-English pairs.", "Following the task guidelines, we use the following evaluation measures investigate the effectiveness of our method: • Mean Average Precision (MAP).", "Average of precisions computed at the point of each correctly paired document in the ranked list of paired documents.", "• Success (Succ.).", "Precision computed on the first returned paired document.", "• Precision at 5 (P@5).", "Precision computed on the 5 topmost paired documents.", "Results Results are presented in Table 3 .", "Overall, we observe that the two strategies that filter out multiply assigned source documents improve the performance of the method.", "The largest part of the improvement comes from using pigeonhole reasoning.", "The use of cross-lingual information to Table 3 : Performance in terms of MAP, success (Succ.)", "and precision at 5 (P@5) of our model.", "break ties between the remaining multiply assigned source documents only gives a small improvement.", "We assume that the limited number of potential source-target document pairs we use in our experiments (N = 20) is a reason for this.", "Interestingly, results are consistent across languages and datasets (test and train).", "Our best configuration, that is, with pigeonhole and crosslingual, achieves nearly 60% of success for the first returned pair.", "Here we show that a simple and straightforward approach that requires no language-specific resources still yields some interesting results.", "Discussion In this paper we described the LINA system for the BUCC 2015 shared track.", "We proposed to extend (Enright and Kondrak, 2007) 's 
approach to parallel document identification by filtering out document pairs sharing target documents using pigeonhole reasoning and cross-lingual information.", "Experimental results show that our system identifies comparable documents with a precision of about 60%.", "Scoring document pairs using the number of shared hapax words was first intended to be a baseline for comparison purposes.", "We tried a finer grained scoring approach relying on bilingual dictionaries and information retrieval weighting schemes.", "For reasonable computation time, we were unable to include low-frequency words in our system.", "Partial results were very low and we are still in the process of investigating the reasons for this." ] }
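The pigeonhole step described in the content above keeps, for every target document claimed by several source documents, only the pair sharing the most hapax words. A simplified sketch, operating on the candidate lists produced by the baseline step and looking only at each source's top candidate, is given below; tie handling is not specified in the text, so earlier sources simply win here.

```python
def pigeonhole(candidates):
    # candidates: source id -> list of (target id, shared-hapax count), best first
    best_for_target = {}
    for sid, pairs in candidates.items():
        if not pairs:
            continue
        tid, score = pairs[0]
        if tid not in best_for_target or score > best_for_target[tid][1]:
            best_for_target[tid] = (sid, score)
    # invert back to a source -> (target, score) mapping, now (mostly) 1-to-1
    return {sid: (tid, score) for tid, (sid, score) in best_for_target.items()}
```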
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4" ], "paper_header_content": [ "Introduction", "Proposed Method", "Experimental settings", "Results", "Discussion" ] }
"GEM-SciDuet-train-2#paper-957#slide-5"
"Results FR EN"
"Strategy MAP Succ. P@5 MAP Succ. P@5"
"Strategy MAP Succ. P@5 MAP Succ. P@5"
[]
"GEM-SciDuet-train-2#paper-957#slide-6"
"957"
"LINA: Identifying Comparable Documents from Wikipedia"
"This paper describes the LINA system for the BUCC 2015 shared track. Following (Enright and Kondrak, 2007), our system identify comparable documents by collecting counts of hapax words. We extend this method by filtering out document pairs sharing target documents using pigeonhole reasoning and cross-lingual information."
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53 ], "paper_content_text": [ "Introduction Parallel corpora, that is, collections of documents that are mutual translations, are used in many natural language processing applications, particularly for statistical machine translation.", "Building such resources is however exceedingly expensive, requiring highly skilled annotators or professional translators (Preiss, 2012) .", "Comparable corpora, that are sets of texts in two or more languages without being translations of each other, are often considered as a solution for the lack of parallel corpora, and many techniques have been proposed to extract parallel sentences (Munteanu et al., 2004; Abdul-Rauf and Schwenk, 2009; Smith et al., 2010) , or mine word translations (Fung, 1995; Rapp, 1999; Chiao and Zweigenbaum, 2002; Morin et al., 2007; Vulić and Moens, 2012) .", "Identifying comparable resources in a large amount of multilingual data remains a very challenging task.", "The purpose of the Building and Using Comparable Corpora (BUCC) 2015 shared task 1 is to provide the first evaluation of existing approaches for identifying comparable resources.", "More precisely, given a large collection of Wikipedia pages in several languages, the task is to identify the most similar pages across languages.", "1 https://comparable.limsi.fr/bucc2015/ In this paper, we describe the system that we developed for the BUCC 2015 shared track and show that a language agnostic approach can achieve promising results.", "Proposed Method The method we propose is based on (Enright and Kondrak, 2007) 's approach to parallel document identification.", "Documents are treated as bags of words, in which only blank separated strings that are at least four characters long and that appear only once in the document (hapax words) are indexed.", "Given a document in language A, the document in language B that share the largest number of these words is considered as parallel.", "Although very simple, this approach was shown to perform very well in detecting parallel documents in Wikipedia (Patry and Langlais, 2011) .", "The reason for this is that most hapax words are in practice proper nouns or numerical entities, which are often cognates.", "An example of hapax words extracted from a document is given in Table 1 .", "We purposely keep urls and special characters, as these are useful clues for identifying translated Wikipedia pages.", "website major gaston links flutist marcel debost states sources college crunelle conservatoire principal rampal united currently recorded chastain competitions music http://www.oberlin.edu/faculty/mdebost/ under international flutists jean-pierre profile moyse french repertoire amazon lives external *http://www.amazon.com/micheldebost/dp/b000s9zsk0 known teaches conservatory school professor studied kathleen orchestre replaced michel Here, we experiment with this approach for detecting near-parallel (comparable) documents.", "Following (Patry and Langlais, 2011) , we first search for the potential source-target document pairs.", "To do so, we select for each document in the source language, the N = 20 documents in the target language that share the largest number of hapax words (hereafter baseline).", "Scoring each pair of documents independently of other candidate pairs leads to several source documents being paired 
to a same target document.", "As indicated in Table 2 , the percentage of English articles that are paired with multiple source documents is high (57.3% for French and 60.4% for German).", "To address this problem, we remove potential multiple source documents by keeping the document pairs with the highest number of shared words (hereafter pigeonhole).", "This strategy greatly reduces the number of multiply assigned source documents from roughly 60% to 10%.", "This in turn removes needlessly paired documents and greatly improves the effectiveness of the method.", "In an attempt to break the remaining score ties between document pairs, we further extend our model to exploit cross-lingual information.", "When multiple source documents are paired to a given English document with the same score, we use the paired documents in a third language to order them (hereafter cross-lingual).", "Here we make two assumptions that are valid for the BUCC 2015 shared Task: (1) we have access to comparable documents in a third language, and (2) source documents should be paired 1-to-1 with target documents.", "Strategy An example of two French documents (doc fr 1 and doc fr 2) being paired to the same English document (doc en ) is given in Figure 1 .", "We use the German document (doc de ) paired with doc en and select the French document that shares the largest number of hapax words, which for this example is doc fr 2.", "This strategy further reduces the number of multiply assigned source documents from 10% to less than 4%.", "Experiments Experimental settings The BUCC 2015 shared task consists in returning for each Wikipedia page in a source language, up to five ranked suggestions to its linked page in English.", "Inter-language links, that is, links from a page in one language to an equivalent page in another language, are used to evaluate the effectiveness of the systems.", "Here, we only focus on the French-English and German-English pairs.", "Following the task guidelines, we use the following evaluation measures investigate the effectiveness of our method: • Mean Average Precision (MAP).", "Average of precisions computed at the point of each correctly paired document in the ranked list of paired documents.", "• Success (Succ.).", "Precision computed on the first returned paired document.", "• Precision at 5 (P@5).", "Precision computed on the 5 topmost paired documents.", "Results Results are presented in Table 3 .", "Overall, we observe that the two strategies that filter out multiply assigned source documents improve the performance of the method.", "The largest part of the improvement comes from using pigeonhole reasoning.", "The use of cross-lingual information to Table 3 : Performance in terms of MAP, success (Succ.)", "and precision at 5 (P@5) of our model.", "break ties between the remaining multiply assigned source documents only gives a small improvement.", "We assume that the limited number of potential source-target document pairs we use in our experiments (N = 20) is a reason for this.", "Interestingly, results are consistent across languages and datasets (test and train).", "Our best configuration, that is, with pigeonhole and crosslingual, achieves nearly 60% of success for the first returned pair.", "Here we show that a simple and straightforward approach that requires no language-specific resources still yields some interesting results.", "Discussion In this paper we described the LINA system for the BUCC 2015 shared track.", "We proposed to extend (Enright and Kondrak, 2007) 's 
approach to parallel document identification by filtering out document pairs sharing target documents using pigeonhole reasoning and cross-lingual information.", "Experimental results show that our system identifies comparable documents with a precision of about 60%.", "Scoring document pairs using the number of shared hapax words was first intended to be a baseline for comparison purposes.", "We tried a finer grained scoring approach relying on bilingual dictionaries and information retrieval weighting schemes.", "For reasonable computation time, we were unable to include low-frequency words in our system.", "Partial results were very low and we are still in the process of investigating the reasons for this." ] }
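The cross-lingual strategy described above breaks the remaining score ties through a third language: when several French documents are paired with the same English document with equal scores, the German document already paired with that English document decides. A rough sketch with precomputed hapax-word sets; all argument names and data structures are illustrative assumptions.

```python
def break_tie_cross_lingual(tied_fr_ids, doc_en, en_to_de, fr_hapax, de_hapax):
    # tied_fr_ids: French documents paired to doc_en with the same score
    # en_to_de:    English document id -> its paired German document id
    # fr_hapax / de_hapax: document id -> set of hapax words
    doc_de = en_to_de.get(doc_en)
    if doc_de is None:
        return tied_fr_ids[0]          # no third-language evidence available
    de_words = de_hapax[doc_de]
    return max(tied_fr_ids, key=lambda fr_id: len(fr_hapax[fr_id] & de_words))
```

In the paper's Figure 1 example this selects doc fr 2, the French document sharing the most hapax words with the German page.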
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4" ], "paper_header_content": [ "Introduction", "Proposed Method", "Experimental settings", "Results", "Discussion" ] }
"GEM-SciDuet-train-2#paper-957#slide-6"
"Results DE EN"
"Strategy MAP Succ. P@5 MAP Succ. P@5"
"Strategy MAP Succ. P@5 MAP Succ. P@5"
[]
"GEM-SciDuet-train-2#paper-957#slide-7"
"957"
"LINA: Identifying Comparable Documents from Wikipedia"
"This paper describes the LINA system for the BUCC 2015 shared track. Following (Enright and Kondrak, 2007), our system identify comparable documents by collecting counts of hapax words. We extend this method by filtering out document pairs sharing target documents using pigeonhole reasoning and cross-lingual information."
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53 ], "paper_content_text": [ "Introduction Parallel corpora, that is, collections of documents that are mutual translations, are used in many natural language processing applications, particularly for statistical machine translation.", "Building such resources is however exceedingly expensive, requiring highly skilled annotators or professional translators (Preiss, 2012) .", "Comparable corpora, that are sets of texts in two or more languages without being translations of each other, are often considered as a solution for the lack of parallel corpora, and many techniques have been proposed to extract parallel sentences (Munteanu et al., 2004; Abdul-Rauf and Schwenk, 2009; Smith et al., 2010) , or mine word translations (Fung, 1995; Rapp, 1999; Chiao and Zweigenbaum, 2002; Morin et al., 2007; Vulić and Moens, 2012) .", "Identifying comparable resources in a large amount of multilingual data remains a very challenging task.", "The purpose of the Building and Using Comparable Corpora (BUCC) 2015 shared task 1 is to provide the first evaluation of existing approaches for identifying comparable resources.", "More precisely, given a large collection of Wikipedia pages in several languages, the task is to identify the most similar pages across languages.", "1 https://comparable.limsi.fr/bucc2015/ In this paper, we describe the system that we developed for the BUCC 2015 shared track and show that a language agnostic approach can achieve promising results.", "Proposed Method The method we propose is based on (Enright and Kondrak, 2007) 's approach to parallel document identification.", "Documents are treated as bags of words, in which only blank separated strings that are at least four characters long and that appear only once in the document (hapax words) are indexed.", "Given a document in language A, the document in language B that share the largest number of these words is considered as parallel.", "Although very simple, this approach was shown to perform very well in detecting parallel documents in Wikipedia (Patry and Langlais, 2011) .", "The reason for this is that most hapax words are in practice proper nouns or numerical entities, which are often cognates.", "An example of hapax words extracted from a document is given in Table 1 .", "We purposely keep urls and special characters, as these are useful clues for identifying translated Wikipedia pages.", "website major gaston links flutist marcel debost states sources college crunelle conservatoire principal rampal united currently recorded chastain competitions music http://www.oberlin.edu/faculty/mdebost/ under international flutists jean-pierre profile moyse french repertoire amazon lives external *http://www.amazon.com/micheldebost/dp/b000s9zsk0 known teaches conservatory school professor studied kathleen orchestre replaced michel Here, we experiment with this approach for detecting near-parallel (comparable) documents.", "Following (Patry and Langlais, 2011) , we first search for the potential source-target document pairs.", "To do so, we select for each document in the source language, the N = 20 documents in the target language that share the largest number of hapax words (hereafter baseline).", "Scoring each pair of documents independently of other candidate pairs leads to several source documents being paired 
to a same target document.", "As indicated in Table 2 , the percentage of English articles that are paired with multiple source documents is high (57.3% for French and 60.4% for German).", "To address this problem, we remove potential multiple source documents by keeping the document pairs with the highest number of shared words (hereafter pigeonhole).", "This strategy greatly reduces the number of multiply assigned source documents from roughly 60% to 10%.", "This in turn removes needlessly paired documents and greatly improves the effectiveness of the method.", "In an attempt to break the remaining score ties between document pairs, we further extend our model to exploit cross-lingual information.", "When multiple source documents are paired to a given English document with the same score, we use the paired documents in a third language to order them (hereafter cross-lingual).", "Here we make two assumptions that are valid for the BUCC 2015 shared Task: (1) we have access to comparable documents in a third language, and (2) source documents should be paired 1-to-1 with target documents.", "Strategy An example of two French documents (doc fr 1 and doc fr 2) being paired to the same English document (doc en ) is given in Figure 1 .", "We use the German document (doc de ) paired with doc en and select the French document that shares the largest number of hapax words, which for this example is doc fr 2.", "This strategy further reduces the number of multiply assigned source documents from 10% to less than 4%.", "Experiments Experimental settings The BUCC 2015 shared task consists in returning for each Wikipedia page in a source language, up to five ranked suggestions to its linked page in English.", "Inter-language links, that is, links from a page in one language to an equivalent page in another language, are used to evaluate the effectiveness of the systems.", "Here, we only focus on the French-English and German-English pairs.", "Following the task guidelines, we use the following evaluation measures investigate the effectiveness of our method: • Mean Average Precision (MAP).", "Average of precisions computed at the point of each correctly paired document in the ranked list of paired documents.", "• Success (Succ.).", "Precision computed on the first returned paired document.", "• Precision at 5 (P@5).", "Precision computed on the 5 topmost paired documents.", "Results Results are presented in Table 3 .", "Overall, we observe that the two strategies that filter out multiply assigned source documents improve the performance of the method.", "The largest part of the improvement comes from using pigeonhole reasoning.", "The use of cross-lingual information to Table 3 : Performance in terms of MAP, success (Succ.)", "and precision at 5 (P@5) of our model.", "break ties between the remaining multiply assigned source documents only gives a small improvement.", "We assume that the limited number of potential source-target document pairs we use in our experiments (N = 20) is a reason for this.", "Interestingly, results are consistent across languages and datasets (test and train).", "Our best configuration, that is, with pigeonhole and crosslingual, achieves nearly 60% of success for the first returned pair.", "Here we show that a simple and straightforward approach that requires no language-specific resources still yields some interesting results.", "Discussion In this paper we described the LINA system for the BUCC 2015 shared track.", "We proposed to extend (Enright and Kondrak, 2007) 's 
approach to parallel document identification by filtering out document pairs sharing target documents using pigeonhole reasoning and cross-lingual information.", "Experimental results show that our system identifies comparable documents with a precision of about 60%.", "Scoring document pairs using the number of shared hapax words was first intended to be a baseline for comparison purposes.", "We tried a finer grained scoring approach relying on bilingual dictionaries and information retrieval weighting schemes.", "For reasonable computation time, we were unable to include low-frequency words in our system.", "Partial results were very low and we are still in the process of investigating the reasons for this." ] }
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "4" ], "paper_header_content": [ "Introduction", "Proposed Method", "Experimental settings", "Results", "Discussion" ] }
"GEM-SciDuet-train-2#paper-957#slide-7"
"Summary"
"I Unsupervised, hapax words-based method I Promising results, about 60% of success using pigeonhole reasoning I Using a third language slightly improves the performance I Finding the optimal alignment across the all languages I Relaxing the hapax-words constraint"
"I Unsupervised, hapax words-based method I Promising results, about 60% of success using pigeonhole reasoning I Using a third language slightly improves the performance I Finding the optimal alignment across the all languages I Relaxing the hapax-words constraint"
[]
"GEM-SciDuet-train-3#paper-964#slide-0"
"964"
"Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation"
"The encoder-decoder dialog model is one of the most prominent methods used to build dialog systems in complex domains. Yet it is limited because it cannot output interpretable actions as in traditional systems, which hinders humans from understanding its generation process. We present an unsupervised discrete sentence representation learning method that can integrate with any existing encoderdecoder dialog models for interpretable response generation. Building upon variational autoencoders (VAEs), we present two novel models, DI-VAE and DI-VST that improve VAEs and can discover interpretable semantics via either auto encoding or context predicting. Our methods have been validated on real-world dialog datasets to discover semantic representations and enhance encoder-decoder models with interpretable generation. 1"
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230 ], "paper_content_text": [ "Introduction Classic dialog systems rely on developing a meaning representation to represent the utterances from both the machine and human users (Larsson and Traum, 2000; Bohus et al., 2007) .", "The dialog manager of a conventional dialog system outputs the system's next action in a semantic frame that usually contains hand-crafted dialog acts and slot values (Williams and Young, 2007) .", "Then a natural language generation module is used to generate the system's output in natural language based on the given semantic frame.", "This approach suffers from generalization to more complex domains because it soon become intractable to man-ually design a frame representation that covers all of the fine-grained system actions.", "The recently developed neural dialog system is one of the most prominent frameworks for developing dialog agents in complex domains.", "The basic model is based on encoder-decoder networks and can learn to generate system responses without the need for hand-crafted meaning representations and other annotations.", "Although generative dialog models have advanced rapidly (Serban et al., 2016; Li et al., 2016; , they cannot provide interpretable system actions as in the conventional dialog systems.", "This inability limits the effectiveness of generative dialog models in several ways.", "First, having interpretable system actions enables human to understand the behavior of a dialog system and better interpret the system intentions.", "Also, modeling the high-level decision-making policy in dialogs enables useful generalization and dataefficient domain adaptation (Gašić et al., 2010) .", "Therefore, the motivation of this paper is to develop an unsupervised neural recognition model that can discover interpretable meaning representations of utterances (denoted as latent actions) as a set of discrete latent variables from a large unlabelled corpus as shown in Figure 1 .", "The discovered meaning representations will then be integrated with encoder decoder networks to achieve interpretable dialog generation while preserving all the merit of neural dialog systems.", "We focus on learning discrete latent representations instead of dense continuous ones because discrete variables are easier to interpret (van den Oord et al., 2017) and can naturally correspond to categories in natural languages, e.g.", "topics, dialog acts and etc.", "Despite the difficulty of learning discrete latent variables in neural 
networks, the recently proposed Gumbel-Softmax offers a reliable way to back-propagate through discrete variables (Maddison et al., 2016; Jang et al., 2016) .", "However, we found a simple combination of sentence variational autoencoders (VAEs) (Bowman et al., 2015) and Gumbel-Softmax fails to learn meaningful discrete representations.", "We then highlight the anti-information limitation of the evidence lowerbound objective (ELBO) in VAEs and improve it by proposing Discrete Information VAE (DI-VAE) that maximizes the mutual information between data and latent actions.", "We further enrich the learning signals beyond auto encoding by extending Skip Thought (Kiros et al., 2015) to Discrete Information Variational Skip Thought (DI-VST) that learns sentence-level distributional semantics.", "Finally, an integration mechanism is presented that combines the learned latent actions with encoder decoder models.", "The proposed systems are tested on several realworld dialog datasets.", "Experiments show that the proposed methods significantly outperform the standard VAEs and can discover meaningful latent actions from these datasets.", "Also, experiments confirm the effectiveness of the proposed integration mechanism and show that the learned latent actions can control the sentence-level attributes of the generated responses and provide humaninterpretable meaning representations.", "Related Work Our work is closely related to research in latent variable dialog models.", "The majority of models are based on Conditional Variational Autoencoders (CVAEs) (Serban et al., 2016; Cao and Clark, 2017) with continuous latent variables to better model the response distribution and encourage diverse responses.", "further introduced dialog acts to guide the learning of the CVAEs.", "Discrete latent variables have also been used for task-oriented dialog systems (Wen et al., 2017) , where the latent space is used to represent intention.", "The second line of related work is enriching the dialog context encoder with more fine-grained information than the dialog history.", "Li et al., (2016) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "The proposed method also relates to sentence representation learning using neural networks.", "Most work learns continuous distributed representations of sentences from various learning signals (Hill et al., 2016) , e.g.", "the Skip Thought learns representations by predicting the previous and next sentences (Kiros et al., 2015) .", "Another area of work focused on learning regularized continuous sentence representation, which enables sentence generation by sampling the latent space (Bowman et al., 2015; Kim et al., 2017) .", "There is less work on discrete sentence representations due to the difficulty of passing gradients through discrete outputs.", "The recently developed Gumbel Softmax (Jang et al., 2016; Maddison et al., 2016) and vector quantization (van den Oord et al., 2017) enable us to train discrete variables.", "Notably, discrete variable models have been proposed to discover document topics (Miao et al., 2016) and semi-supervised sequence transaction (Zhou and Neubig, 2017) Our work differs from these as follows: (1) we focus on learning interpretable variables; in prior research the semantics of 
latent variables are mostly ignored in the dialog generation setting.", "(2) we improve the learning objective for discrete VAEs and overcome the well-known posterior collapsing issue (Bowman et al., 2015; Chen et al., 2016) .", "(3) we focus on unsupervised learning of salient features in dialog responses instead of hand-crafted features.", "Proposed Methods Our formulation contains three random variables: the dialog context c, the response x and the latent action z.", "The context often contains the discourse history in the format of a list of utterances.", "The response is an utterance that contains a list of word tokens.", "The latent action is a set of discrete variables that define high-level attributes of x.", "Before introducing the proposed framework, we first identify two key properties that are essential in or-der for z to be interpretable: 1. z should capture salient sentence-level features about the response x.", "2.", "The meaning of latent symbols z should be independent of the context c. The first property is self-evident.", "The second can be explained: assume z contains a single discrete variable with K classes.", "Since the context c can be any dialog history, if the meaning of each class changes given a different context, then it is difficult to extract an intuitive interpretation by only looking at all responses with class k ∈ [1, K].", "Therefore, the second property looks for latent actions that have context-independent semantics so that each assignment of z conveys the same meaning in all dialog contexts.", "With the above definition of interpretable latent actions, we first introduce a recognition network R : q R (z|x) and a generation network G. The role of R is to map an sentence to the latent variable z and the generator G defines the learning signals that will be used to train z's representation.", "Notably, our recognition network R does not depend on the context c as has been the case in prior work (Serban et al., 2016) .", "The motivation of this design is to encourage z to capture context-independent semantics, which are further elaborated in Section 3.4.", "With the z learned by R and G, we then introduce an encoder decoder network F : p F (x|z, c) and and a policy network π : p π (z|c).", "At test time, given a context c, the policy network and encoder decoder will work together to generate the next response viã x = p F (x|z ∼ p π (z|c), c).", "In short, R, G, F and π are the four components that comprise our proposed framework.", "The next section will first focus on developing R and G for learning interpretable z and then will move on to integrating R with F and π in Section 3.3.", "Learning Sentence Representations from Auto-Encoding Our baseline model is a sentence VAE with discrete latent space.", "We use an RNN as the recognition network to encode the response x.", "Its last hidden state h R |x| is used to represent x.", "We define z to be a set of K-way categorical variables z = {z 1 ...z m ...z M }, where M is the number of variables.", "For each z m , its posterior distribution is defined as q R (z m |x) = Softmax(W q h R |x| + b q ).", "During training, we use the Gumbel-Softmax trick to sample from this distribution and obtain lowvariance gradients.", "To map the latent samples to the initial state of the decoder RNN, we define {e 1 ...e m ...e M } where e m ∈ R K×D and D is the generator cell size.", "Thus the initial state of the generator is: h G 0 = M m=1 e m (z m ).", "Finally, the generator RNN is used to reconstruct the response given h 
G 0 .", "VAEs is trained to maxmimize the evidence lowerbound objective (ELBO) (Kingma and Welling, 2013) .", "For simplicity, later discussion drops the subscript m in z m and assumes a single latent z.", "Since each z m is independent, we can easily extend the results below to multiple variables.", "Anti-Information Limitation of ELBO It is well-known that sentence VAEs are hard to train because of the posterior collapse issue.", "Many empirical solutions have been proposed: weakening the decoder, adding auxiliary loss etc.", "(Bowman et al., 2015; Chen et al., 2016; .", "We argue that the posterior collapse issue lies in ELBO and we offer a novel decomposition to understand its behavior.", "First, instead of writing ELBO for a single data point, we write it as an expectation over a dataset: L VAE = E x [E q R (z|x) [log p G (x|z)] − KL(q R (z|x) p(z))] (1) We can expand the KL term as Eq.", "2 (derivations in Appendix A.1) and rewrite ELBO as: E x [KL(q R (z|x) p(z))] = (2) I(Z, X)+KL(q(z) p(z)) L VAE = E q(z|x)p(x) [log p(x|z)] − I(Z, X) − KL(q(z) p(z)) (3) where q(z) = E x [q R (z|x)] and I(Z, X) is the mutual information between Z and X.", "This expansion shows that the KL term in ELBO is trying to reduce the mutual information between latent variables and the input data, which explains why VAEs often ignore the latent variable, especially when equipped with powerful decoders.", "VAE with Information Maximization and Batch Prior Regularization A natural solution to correct the anti-information issue in Eq.", "3 is to maximize both the data likeli-hood lowerbound and the mutual information between z and the input data: L VAE + I(Z, X) = E q R (z|x)p(x) [log p G (x|z)] − KL(q(z) p(z)) (4) Therefore, jointly optimizing ELBO and mutual information simply cancels out the informationdiscouraging term.", "Also, we can still sample from the prior distribution for generation because of KL(q(z) p(z)).", "Eq.", "4 is similar to the objectives used in adversarial autoencoders (Makhzani et al., 2015; Kim et al., 2017) .", "Our derivation provides a theoretical justification to their superior performance.", "Notably, Eq.", "4 arrives at the same loss function proposed in infoVAE (Zhao S et al., 2017) .", "However, our derivation is different, offering a new way to understand ELBO behavior.", "The remaining challenge is how to minimize KL(q(z) p(z)), since q(z) is an expectation over q(z|x).", "When z is continuous, prior work has used adversarial training (Makhzani et al., 2015; Kim et al., 2017) or Maximum Mean Discrepancy (MMD) (Zhao S et al., 2017) to regularize q(z).", "It turns out that minimizing KL(q(z) p(z)) for discrete z is much simpler than its continuous counterparts.", "Let x n be a sample from a batch of N data points.", "Then we have: q(z) ≈ 1 N N n=1 q(z|x n ) = q (z) (5) where q (z) is a mixture of softmax from the posteriors q(z|x n ) of each x n .", "We can approximate KL(q(z) p(z)) by: KL(q (z) p(z)) = K k=1 q (z = k) log q (z = k) p(z = k) (6) We refer to Eq.", "6 as Batch Prior Regularization (BPR).", "When N approaches infinity, q (z) approaches the true marginal distribution of q(z).", "In practice, we only need to use the data from each mini-batch assuming that the mini batches are randomized.", "Last, BPR is fundamentally different from multiplying a coefficient < 1 to anneal the KL term in VAE (Bowman et al., 2015) .", "This is because BPR is a non-linear operation log sum exp.", "For later discussion, we denote our discrete infoVAE with BPR as DI-VAE.", "Learning 
Sentence Representations from the Context DI-VAE infers sentence representations by reconstruction of the input sentence.", "Past research in distributional semantics has suggested the meaning of language can be inferred from the adjacent context (Harris, 1954; Hill et al., 2016) .", "The distributional hypothesis is especially applicable to dialog since the utterance meaning is highly contextual.", "For example, the dialog act is a wellknown utterance feature and depends on dialog state (Austin, 1975; Stolcke et al., 2000) .", "Thus, we introduce a second type of latent action based on sentence-level distributional semantics.", "Skip thought (ST) is a powerful sentence representation that captures contextual information (Kiros et al., 2015) .", "ST uses an RNN to encode a sentence, and then uses the resulting sentence representation to predict the previous and next sentences.", "Inspired by ST's robust performance across multiple tasks (Hill et al., 2016) , we adapt our DI-VAE to Discrete Information Variational Skip Thought (DI-VST) to learn discrete latent actions that model distributional semantics of sentences.", "We use the same recognition network from DI-VAE to output z's posterior distribution q R (z|x).", "Given the samples from q R (z|x), two RNN generators are used to predict the previous sentence x p and the next sentences x n .", "Finally, the learning objective is to maximize: L DI-VST = E q R (z|x)p(x)) [log(p n G (x n |z)p p G (x p |z))] − KL(q(z) p(z)) (7) Integration with Encoder Decoders We now describe how to integrate a given q R (z|x) with an encoder decoder and a policy network.", "Let the dialog context c be a sequence of utterances.", "Then a dialog context encoder network can encode the dialog context into a distributed representation h e = F e (c).", "The decoder F d can generate the responsesx = F d (h e , z) using samples from q R (z|x).", "Meanwhile, we train π to predict the aggregated posterior E p(x|c) [q R (z|x)] from c via maximum likelihood training.", "This model is referred as Latent Action Encoder Decoder (LAED) with the following objective.", "L LAED (θ F , θ π ) = E q R (z|x)p(x,c) [logp π (z|c) + log p F (x|z, c)] (8) Also simply augmenting the inputs of the decoders with latent action does not guarantee that the generated response exhibits the attributes of the give action.", "Thus we use the controllable text generation framework (Hu et al., 2017) by introducing L Attr , which reuses the same recognition network q R (z|x) as a fixed discriminator to penalize the decoder if its generated responses do not reflect the attributes in z. 
L Attr (θ F ) = E q R (z|x)p(c,x) [log q R (z|F(c, z))] (9) Since it is not possible to propagate gradients through the discrete outputs at F d at each word step, we use a deterministic continuous relaxation (Hu et al., 2017) by replacing output of F d with the probability of each word.", "Let o t be the normalized probability at step t ∈ [1, |x|], the inputs to q R at time t are then the sum of word embeddings weighted by o t , i.e.", "h R t = RNN(h R t−1 , Eo t ) and E is the word embedding matrix.", "Finally this loss is combined with L LAED and a hyperparameter λ to have Attribute Forcing LAED.", "L attrLAED = L LAED + λL Attr (10) Relationship with Conditional VAEs It is not hard to see L LAED is closely related to the objective of CVAEs for dialog generation (Serban et al., 2016; , which is: L CVAE = E q [log p(x|z, c)]−KL(q(z|x, c) p(z|c)) (11) Despite their similarities, we highlight the key differences that prohibit CVAE from achieving interpretable dialog generation.", "First L CVAE encourages I(x, z|c) (Agakov, 2005), which learns z that capture context-dependent semantics.", "More intuitively, z in CVAE is trained to generate x via p(x|z, c) so the meaning of learned z can only be interpreted along with its context c. Therefore this violates our goal of learning context-independent semantics.", "Our methods learn q R (z|x) that only depends on x and trains q R separately to ensure the semantics of z are interpretable standalone.", "Experiments and Results The proposed methods are evaluated on four datasets.", "The first corpus is Penn Treebank (PTB) (Marcus et al., 1993) used to evaluate sentence VAEs (Bowman et al., 2015) .", "We used the version pre-processed by Mikolov (Mikolov et al., 2010) .", "The second dataset is the Stanford Multi-Domain Dialog (SMD) dataset that contains 3,031 human-Woz, task-oriented dialogs collected from 3 different domains (navigation, weather and scheduling) (Eric and Manning, 2017) .", "The other two datasets are chat-oriented data: Daily Dialog (DD) and Switchboard (SW) (Godfrey and Holliman, 1997), which are used to test whether our methods can generalize beyond task-oriented dialogs but also to to open-domain chatting.", "DD contains 13,118 multi-turn human-human dialogs annotated with dialog acts and emotions.", "(Li et al., 2017) .", "SW has 2,400 human-human telephone conversations that are annotated with topics and dialog acts.", "SW is a more challenging dataset because it is transcribed from speech which contains complex spoken language phenomenon, e.g.", "hesitation, self-repair etc.", "Comparing Discrete Sentence Representation Models The first experiment used PTB and DD to evaluate the performance of the proposed methods in learning discrete sentence representations.", "We implemented DI-VAE and DI-VST using GRU-RNN (Chung et al., 2014) and trained them using Adam (Kingma and Ba, 2014) .", "Besides the proposed methods, the following baselines are compared.", "Unregularized models: removing the KL(q|p) term from DI-VAE and DI-VST leads to a simple discrete autoencoder (DAE) and discrete skip thought (DST) with stochastic discrete hidden units.", "ELBO models: the basic discrete sentence VAE (DVAE) or variational skip thought (DVST) that optimizes ELBO with regularization term KL(q(z|x) p(z)).", "We found that standard training failed to learn informative latent actions for either DVAE or DVST because of the posterior collapse.", "Therefore, KL-annealing (Bowman et al., 2015) and bag-of-word loss are used to force these two models 
learn meaningful representations.", "We also include the results for VAE with continuous latent variables reported on the same PTB .", "Additionally, we report the perplexity from a standard GRU-RNN language model (Zaremba et al., 2014) .", "The evaluation metrics include reconstruction perplexity (PPL), KL(q(z) p(z)) and the mutual information between input data and latent vari-ables I (x, z) .", "Intuitively a good model should achieve low perplexity and KL distance, and simultaneously achieve high I(x, z).", "The discrete latent space for all models are M =20 and K=10.", "Mini-batch size is 30.", "Table 1 shows that all models achieve better perplexity than an RNNLM, which shows they manage to learn meaningful q(z|x).", "First, for autoencoding models, DI-VAE is able to achieve the best results in all metrics compared other methods.", "We found DAEs quickly learn to reconstruct the input but they are prone to overfitting during training, which leads to lower performance on the test data compared to DI-VAE.", "Also, since there is no regularization term in the latent space, q(z) is very different from the p(z) which prohibits us from generating sentences from the latent space.", "In fact, DI-VAE enjoys the same linear interpolation properties reported in (Bowman et al., 2015) (See Appendix A.2).", "As for DVAEs, it achieves zero I(x, z) in standard training and only manages to learn some information when training with KL-annealing and bag-of-word loss.", "On the other hand, our methods achieve robust performance without the need for additional processing.", "Similarly, the proposed DI-VST is able to achieve the lowest PPL and similar KL compared to the strongly regularized DVST.", "Interestingly, although DST is able to achieve the highest I(x, z), but PPL is not further improved.", "These results confirm the effectiveness of the proposed BPR in terms of regularizing q(z) while learning meaningful posterior q(z|x).", "In order to understand BPR's sensitivity to batch size N , a follow-up experiment varied the batch size from 2 to 60 (If N =1, DI-VAE is equivalent to DVAE).", "Figure 2 show that as N increases, perplexity, I(x, z) monotonically improves, while KL(q p) only increases from 0 to 0.159.", "After N > 30, the performance plateaus.", "Therefore, using mini-batch is an efficient trade-off between q(z) estimation and computation speed.", "The last experiment in this section investigates the relation between representation learning and the dimension of the latent space.", "We set a fixed budget by restricting the maximum number of modes to be about 1000, i.e.", "K M ≈ 1000.", "We then vary the latent space size and report the same evaluation metrics.", "Table 2 shows that models with multiple small latent variables perform significantly better than those with large and few latent variables.", "K, M K M PPL KL(q p) I(x, z) 1000, 1 1000 75.61 0.032 0.335 10, 3 1000 71.42 0.071 0.607 4, 5 1024 68.43 0.088 0.809 Table 2 : DI-VAE on PTB with different latent dimensions under the same budget.", "Interpreting Latent Actions The next question is to interpret the meaning of the learned latent action symbols.", "To achieve this, the latent action of an utterance x n is obtained from a greedy mapping: a n = argmax k q R (z = k|x n ).", "We set M =3 and K=5, so that there are at most 125 different latent actions, and each x n can now be represented by a 1 -a 2 -a 3 , e.g.", "\"How are you?\"", "→ 1-4-2.", "Assuming that we have access to manually clustered data according to certain classes 
(e.g.", "dialog acts), it is unfair to use classic cluster measures (Vinh et al., 2010) to evaluate the clusters from latent actions.", "This is because the uniform prior p(z) evenly distribute the data to all possible latent actions, so that it is expected that frequent classes will be assigned to several latent actions.", "Thus we utilize the homogeneity metric (Rosenberg and Hirschberg, 2007 ) that measures if each latent action contains only members of a single class.", "We tested this on the SW and DD, which contain human annotated features and we report the latent actions' homogeneity w.r.t these features in Table 3 .", "On DD, results show DI-VST SW DD Act Topic Act Emotion DI-VAE 0.48 0.08 0.18 0.09 DI-VST 0.33 0.13 0.34 0.12 works better than DI-VAE in terms of creating actions that are more coherent for emotion and dialog acts.", "The results are interesting on SW since DI-VST performs worse on dialog acts than DI-VAE.", "One reason is that the dialog acts in SW are more fine-grained (42 acts) than the ones in DD (5 acts) so that distinguishing utterances based on words in x is more important than the information in the neighbouring utterances.", "We then apply the proposed methods to SMD which has no manual annotation and contains taskoriented dialogs.", "Two experts are shown 5 randomly selected utterances from each latent action and are asked to give an action name that can describe as many of the utterances as possible.", "Then an Amazon Mechanical Turk study is conducted to evaluate whether other utterances from the same latent action match these titles.", "5 workers see the action name and a different group of 5 utterances from that latent action.", "They are asked to select all utterances that belong to the given actions, which tests the homogeneity of the utterances falling in the same cluster.", "Negative samples are included to prevent random selection.", "Table 4 shows that both methods work well and DI-VST achieved better homogeneity than DI-VAE.", "Since DI-VAE is trained to reconstruct its input and DI-VST is trained to model the context, they group utterances in different ways.", "For example, DI-VST would group \"Can I get a restaurant\", \"I am looking for a restaurant\" into one action where Dialog Response Generation with Latent Actions Finally we implement an LAED as follows.", "The encoder is a hierarchical recurrent encoder (Serban et al., 2016) with bi-directional GRU-RNNs as the utterance encoder and a second GRU-RNN as the discourse encoder.", "The discourse encoder output its last hidden state h e |x| .", "The decoder is another GRU-RNN and its initial state of the decoder is obtained by h d 0 = h e |x| + M m=1 e m (z m ), where z comes from the recognition network of the proposed methods.", "The policy network π is a 2-layer multi-layer perceptron (MLP) that models p π (z|h e |x| ).", "We use up to the previous 10 utterances as the dialog context and denote the LAED using DI-VAE latent actions as AE-ED and the one uses DI-VST as ST-ED.", "First we need to confirm whether an LAED can generate responses that are consistent with the semantics of a given z.", "To answer this, we use a pre-trained recognition network R to check if a generated response carries the attributes in the given action.", "We generate dialog responses on a test dataset viax = F(z ∼ π(c), c) with greedy RNN decoding.", "The generated responses are passed into the R and we measure attribute accuracy by countingx as correct if z = argmax k q R (k|x).", "Table 6 : Results for attribute 
accuracy with and without attribute loss.", "responses are highly consistent with the given latent actions.", "Also, latent actions from DI-VAE achieve higher attribute accuracy than the ones from DI-VST, because z from auto-encoding is explicitly trained for x reconstruction.", "Adding L attr is effective in forcing the decoder to take z into account during its generation, which helps the most in more challenging open-domain chatting data, e.g.", "SW and DD.", "The accuracy of ST-ED on SW is worse than the other two datasets.", "The reason is that SW contains many short utterances that can be either a continuation of the same speaker or a new turn from the other speaker, whereas the responses in the other two domains are always followed by a different speaker.", "The more complex context pattern in SW may require special treatment.", "We leave it for future work.", "The second experiment checks if the policy network π is able to predict the right latent action given just the dialog context.", "We report both accuracy, i.e.", "argmax k q R (k|x) = argmax k p π (k |c) and perplexity of p π (z|c).", "The perplexity measure is more useful for open domain dialogs because decision-making in complex dialogs is often one-to-many given a similar context .", "Table 7 shows the prediction scores on SMD AE-ED 3.045 (51.5% sys 52.4% usr 50.5%) ST-ED 1.695 (75.5% sys 82.1% usr 69.2%) DD SW AE-ED 4.47 (35.8%) 4.46 (31.68%) ST-ED 3.89 (47.5%) 3.68 (33.2%) Table 7 : Performance of policy network.", "L attr is included in training.", "the three dialog datasets.", "These scores provide useful insights to understand the complexity of a dialog dataset.", "For example, accuracy on opendomain chatting is harder than the task-oriented SMD data.", "Also, it is intuitive that predicting system actions is easier than predicting user actions on SMD.", "Also, in general the prediction scores for ST-ED are higher the ones for AE-ED.", "The reason is related to our previous discussion about the granularity of the latent actions.", "Since latent actions from DI-VST mainly model the the type of utterances used in certain types of context, it is easier for the policy network to predict latent actions from DI-VST.", "Therefore, choosing the type of latent actions is a design choice and depends on the type of interpretability that is needed.", "We finish with an example generated from the two variants of LAED on SMD as shown in Table 8 .", "Given a dialog context, our systems are able to output a probability distribution over different latent actions that have interpretable meaning along with their natural language realizations.", "c usr: Where does my friend live?", "Model Action Generated Responses AE-ED give loc info -Tom is at 753 University Ave, and a road block.", "p(z|c)=0.34 -Comfort Inn is at 7 miles away.", "give user info -Your home address is 5671 barringer street.", "p(z|c)=0.22 -Your home is at 10 ames street.", "ST-ED give loc info -Jill's house is 8 miles away at 347 Alta Mesa Ave. p(z|c)=0.93 -Jill lives at 347 Alta Mesa Ave. 
Table 8 : Interpretable dialog generation on SMD with top probable latent actions.", "AE-ED predicts more fine-grained but more error-prone actions.", "Conclusion and Future Work This paper presents a novel unsupervised framework that enables the discovery of discrete latent actions and interpretable dialog response generation.", "Our main contributions reside in the two sentence representation models DI-VAE and DI-VST, and their integration with the encoder decoder models.", "Experiments show the proposed methods outperform strong baselines in learning discrete latent variables and showcase the effectiveness of interpretable dialog response generation.", "Our findings also suggest promising future research directions, including learning better context-based latent actions and using reinforce-ment learning to adapt policy networks.", "We believe that this work is an important step forward towards creating generative dialog models that can not only generalize to large unlabelled datasets in complex domains but also be explainable to human users." ] }
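The paper_content excerpt above describes how the LAED decoder's initial state is built from the latent action, h_d0 = h_e + sum_m e_m(z_m), with z proposed by the policy network at test time. The following is a minimal PyTorch-style sketch of that wiring, not the authors' released code; the module names, tensor sizes, and the policy-MLP shape are illustrative assumptions.

```python
# Hedged sketch of the LAED decoder initialization described above:
# h_d0 = h_e + sum_m e_m(z_m), with z sampled from a policy network at test time.
# All names and sizes are assumptions for illustration, not the paper's released code.
import torch
import torch.nn as nn

M, K, D = 3, 5, 256  # number of latent variables, classes per variable, hidden size

class LatentToInitState(nn.Module):
    def __init__(self, m=M, k=K, d=D):
        super().__init__()
        # one K x D embedding table e_m per discrete variable z_m
        self.latent_emb = nn.ModuleList([nn.Embedding(k, d) for _ in range(m)])

    def forward(self, h_e, z):
        # h_e: [batch, D] discourse encoding; z: [batch, M] integer latent actions
        h_d0 = h_e
        for m, emb in enumerate(self.latent_emb):
            h_d0 = h_d0 + emb(z[:, m])          # h_e + sum_m e_m(z_m)
        return h_d0

# Test-time use: sample z from an (assumed) policy MLP over the context encoding.
policy_mlp = nn.Sequential(nn.Linear(D, 128), nn.Tanh(), nn.Linear(128, M * K))
h_e = torch.randn(2, D)
logits = policy_mlp(h_e).view(2, M, K)
z = torch.distributions.Categorical(logits=logits).sample()   # [2, M]
h_d0 = LatentToInitState()(h_e, z)                             # initial decoder state
```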
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Proposed Methods", "Learning Sentence Representations from Auto-Encoding", "Anti-Information Limitation of ELBO", "VAE with Information Maximization and Batch Prior Regularization", "Learning Sentence Representations from the Context", "Integration with Encoder Decoders", "Relationship with Conditional VAEs", "Experiments and Results", "Comparing Discrete Sentence Representation Models", "Interpreting Latent Actions", "Dialog Response Generation with Latent Actions", "Conclusion and Future Work" ] }
"GEM-SciDuet-train-3#paper-964#slide-0"
"Sentence Representation in Conversations"
"Traditional System: hand-crafted semantic frame Not scalable to complex domains Neural dialog models: continuous hidden vectors Directly output system responses in words Hard to interpret & control [Ritter et al 2011, Vinyals et al"
"Traditional System: hand-crafted semantic frame Not scalable to complex domains Neural dialog models: continuous hidden vectors Directly output system responses in words Hard to interpret & control [Ritter et al 2011, Vinyals et al"
[]
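The paper_content excerpt in the row above also reports two evaluation signals for the latent actions: cluster homogeneity against human dialog-act labels, and attribute accuracy (a generated response counts as correct if the recognition network recovers the action it was conditioned on). A small illustrative sketch follows; the label arrays are toy placeholders, not numbers from the paper, and scikit-learn's homogeneity_score stands in for the Rosenberg and Hirschberg (2007) metric.

```python
# Hedged sketch of the two evaluation steps described in the excerpt above:
# (1) homogeneity of latent-action clusters w.r.t. human labels,
# (2) attribute accuracy: a generated response counts as correct if the
#     recognition network recovers the latent action it was conditioned on.
# The arrays below are toy placeholders, not results from the paper.
import numpy as np
from sklearn.metrics import homogeneity_score

# (1) homogeneity: true dialog-act labels vs. greedy latent-action assignments
true_acts   = np.array([0, 0, 1, 1, 2, 2])       # human annotations
latent_acts = np.array([3, 3, 1, 1, 0, 4])       # argmax_k q_R(z=k|x_n)
print("homogeneity:", homogeneity_score(true_acts, latent_acts))

# (2) attribute accuracy over generated responses
given_z     = np.array([2, 0, 1, 4])             # actions the decoder was conditioned on
recovered_z = np.array([2, 0, 3, 4])             # argmax_k q_R(k | generated response)
print("attribute accuracy:", np.mean(given_z == recovered_z))
```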
"GEM-SciDuet-train-3#paper-964#slide-1"
"964"
"Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation"
"The encoder-decoder dialog model is one of the most prominent methods used to build dialog systems in complex domains. Yet it is limited because it cannot output interpretable actions as in traditional systems, which hinders humans from understanding its generation process. We present an unsupervised discrete sentence representation learning method that can integrate with any existing encoderdecoder dialog models for interpretable response generation. Building upon variational autoencoders (VAEs), we present two novel models, DI-VAE and DI-VST that improve VAEs and can discover interpretable semantics via either auto encoding or context predicting. Our methods have been validated on real-world dialog datasets to discover semantic representations and enhance encoder-decoder models with interpretable generation. 1"
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230 ], "paper_content_text": [ "Introduction Classic dialog systems rely on developing a meaning representation to represent the utterances from both the machine and human users (Larsson and Traum, 2000; Bohus et al., 2007) .", "The dialog manager of a conventional dialog system outputs the system's next action in a semantic frame that usually contains hand-crafted dialog acts and slot values (Williams and Young, 2007) .", "Then a natural language generation module is used to generate the system's output in natural language based on the given semantic frame.", "This approach suffers from generalization to more complex domains because it soon become intractable to man-ually design a frame representation that covers all of the fine-grained system actions.", "The recently developed neural dialog system is one of the most prominent frameworks for developing dialog agents in complex domains.", "The basic model is based on encoder-decoder networks and can learn to generate system responses without the need for hand-crafted meaning representations and other annotations.", "Although generative dialog models have advanced rapidly (Serban et al., 2016; Li et al., 2016; , they cannot provide interpretable system actions as in the conventional dialog systems.", "This inability limits the effectiveness of generative dialog models in several ways.", "First, having interpretable system actions enables human to understand the behavior of a dialog system and better interpret the system intentions.", "Also, modeling the high-level decision-making policy in dialogs enables useful generalization and dataefficient domain adaptation (Gašić et al., 2010) .", "Therefore, the motivation of this paper is to develop an unsupervised neural recognition model that can discover interpretable meaning representations of utterances (denoted as latent actions) as a set of discrete latent variables from a large unlabelled corpus as shown in Figure 1 .", "The discovered meaning representations will then be integrated with encoder decoder networks to achieve interpretable dialog generation while preserving all the merit of neural dialog systems.", "We focus on learning discrete latent representations instead of dense continuous ones because discrete variables are easier to interpret (van den Oord et al., 2017) and can naturally correspond to categories in natural languages, e.g.", "topics, dialog acts and etc.", "Despite the difficulty of learning discrete latent variables in neural 
networks, the recently proposed Gumbel-Softmax offers a reliable way to back-propagate through discrete variables (Maddison et al., 2016; Jang et al., 2016) .", "However, we found a simple combination of sentence variational autoencoders (VAEs) (Bowman et al., 2015) and Gumbel-Softmax fails to learn meaningful discrete representations.", "We then highlight the anti-information limitation of the evidence lowerbound objective (ELBO) in VAEs and improve it by proposing Discrete Information VAE (DI-VAE) that maximizes the mutual information between data and latent actions.", "We further enrich the learning signals beyond auto encoding by extending Skip Thought (Kiros et al., 2015) to Discrete Information Variational Skip Thought (DI-VST) that learns sentence-level distributional semantics.", "Finally, an integration mechanism is presented that combines the learned latent actions with encoder decoder models.", "The proposed systems are tested on several realworld dialog datasets.", "Experiments show that the proposed methods significantly outperform the standard VAEs and can discover meaningful latent actions from these datasets.", "Also, experiments confirm the effectiveness of the proposed integration mechanism and show that the learned latent actions can control the sentence-level attributes of the generated responses and provide humaninterpretable meaning representations.", "Related Work Our work is closely related to research in latent variable dialog models.", "The majority of models are based on Conditional Variational Autoencoders (CVAEs) (Serban et al., 2016; Cao and Clark, 2017) with continuous latent variables to better model the response distribution and encourage diverse responses.", "further introduced dialog acts to guide the learning of the CVAEs.", "Discrete latent variables have also been used for task-oriented dialog systems (Wen et al., 2017) , where the latent space is used to represent intention.", "The second line of related work is enriching the dialog context encoder with more fine-grained information than the dialog history.", "Li et al., (2016) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "The proposed method also relates to sentence representation learning using neural networks.", "Most work learns continuous distributed representations of sentences from various learning signals (Hill et al., 2016) , e.g.", "the Skip Thought learns representations by predicting the previous and next sentences (Kiros et al., 2015) .", "Another area of work focused on learning regularized continuous sentence representation, which enables sentence generation by sampling the latent space (Bowman et al., 2015; Kim et al., 2017) .", "There is less work on discrete sentence representations due to the difficulty of passing gradients through discrete outputs.", "The recently developed Gumbel Softmax (Jang et al., 2016; Maddison et al., 2016) and vector quantization (van den Oord et al., 2017) enable us to train discrete variables.", "Notably, discrete variable models have been proposed to discover document topics (Miao et al., 2016) and semi-supervised sequence transaction (Zhou and Neubig, 2017) Our work differs from these as follows: (1) we focus on learning interpretable variables; in prior research the semantics of 
latent variables are mostly ignored in the dialog generation setting.", "(2) we improve the learning objective for discrete VAEs and overcome the well-known posterior collapsing issue (Bowman et al., 2015; Chen et al., 2016) .", "(3) we focus on unsupervised learning of salient features in dialog responses instead of hand-crafted features.", "Proposed Methods Our formulation contains three random variables: the dialog context c, the response x and the latent action z.", "The context often contains the discourse history in the format of a list of utterances.", "The response is an utterance that contains a list of word tokens.", "The latent action is a set of discrete variables that define high-level attributes of x.", "Before introducing the proposed framework, we first identify two key properties that are essential in or-der for z to be interpretable: 1. z should capture salient sentence-level features about the response x.", "2.", "The meaning of latent symbols z should be independent of the context c. The first property is self-evident.", "The second can be explained: assume z contains a single discrete variable with K classes.", "Since the context c can be any dialog history, if the meaning of each class changes given a different context, then it is difficult to extract an intuitive interpretation by only looking at all responses with class k ∈ [1, K].", "Therefore, the second property looks for latent actions that have context-independent semantics so that each assignment of z conveys the same meaning in all dialog contexts.", "With the above definition of interpretable latent actions, we first introduce a recognition network R : q R (z|x) and a generation network G. The role of R is to map an sentence to the latent variable z and the generator G defines the learning signals that will be used to train z's representation.", "Notably, our recognition network R does not depend on the context c as has been the case in prior work (Serban et al., 2016) .", "The motivation of this design is to encourage z to capture context-independent semantics, which are further elaborated in Section 3.4.", "With the z learned by R and G, we then introduce an encoder decoder network F : p F (x|z, c) and and a policy network π : p π (z|c).", "At test time, given a context c, the policy network and encoder decoder will work together to generate the next response viã x = p F (x|z ∼ p π (z|c), c).", "In short, R, G, F and π are the four components that comprise our proposed framework.", "The next section will first focus on developing R and G for learning interpretable z and then will move on to integrating R with F and π in Section 3.3.", "Learning Sentence Representations from Auto-Encoding Our baseline model is a sentence VAE with discrete latent space.", "We use an RNN as the recognition network to encode the response x.", "Its last hidden state h R |x| is used to represent x.", "We define z to be a set of K-way categorical variables z = {z 1 ...z m ...z M }, where M is the number of variables.", "For each z m , its posterior distribution is defined as q R (z m |x) = Softmax(W q h R |x| + b q ).", "During training, we use the Gumbel-Softmax trick to sample from this distribution and obtain lowvariance gradients.", "To map the latent samples to the initial state of the decoder RNN, we define {e 1 ...e m ...e M } where e m ∈ R K×D and D is the generator cell size.", "Thus the initial state of the generator is: h G 0 = M m=1 e m (z m ).", "Finally, the generator RNN is used to reconstruct the response given h 
G 0 .", "VAEs is trained to maxmimize the evidence lowerbound objective (ELBO) (Kingma and Welling, 2013) .", "For simplicity, later discussion drops the subscript m in z m and assumes a single latent z.", "Since each z m is independent, we can easily extend the results below to multiple variables.", "Anti-Information Limitation of ELBO It is well-known that sentence VAEs are hard to train because of the posterior collapse issue.", "Many empirical solutions have been proposed: weakening the decoder, adding auxiliary loss etc.", "(Bowman et al., 2015; Chen et al., 2016; .", "We argue that the posterior collapse issue lies in ELBO and we offer a novel decomposition to understand its behavior.", "First, instead of writing ELBO for a single data point, we write it as an expectation over a dataset: L VAE = E x [E q R (z|x) [log p G (x|z)] − KL(q R (z|x) p(z))] (1) We can expand the KL term as Eq.", "2 (derivations in Appendix A.1) and rewrite ELBO as: E x [KL(q R (z|x) p(z))] = (2) I(Z, X)+KL(q(z) p(z)) L VAE = E q(z|x)p(x) [log p(x|z)] − I(Z, X) − KL(q(z) p(z)) (3) where q(z) = E x [q R (z|x)] and I(Z, X) is the mutual information between Z and X.", "This expansion shows that the KL term in ELBO is trying to reduce the mutual information between latent variables and the input data, which explains why VAEs often ignore the latent variable, especially when equipped with powerful decoders.", "VAE with Information Maximization and Batch Prior Regularization A natural solution to correct the anti-information issue in Eq.", "3 is to maximize both the data likeli-hood lowerbound and the mutual information between z and the input data: L VAE + I(Z, X) = E q R (z|x)p(x) [log p G (x|z)] − KL(q(z) p(z)) (4) Therefore, jointly optimizing ELBO and mutual information simply cancels out the informationdiscouraging term.", "Also, we can still sample from the prior distribution for generation because of KL(q(z) p(z)).", "Eq.", "4 is similar to the objectives used in adversarial autoencoders (Makhzani et al., 2015; Kim et al., 2017) .", "Our derivation provides a theoretical justification to their superior performance.", "Notably, Eq.", "4 arrives at the same loss function proposed in infoVAE (Zhao S et al., 2017) .", "However, our derivation is different, offering a new way to understand ELBO behavior.", "The remaining challenge is how to minimize KL(q(z) p(z)), since q(z) is an expectation over q(z|x).", "When z is continuous, prior work has used adversarial training (Makhzani et al., 2015; Kim et al., 2017) or Maximum Mean Discrepancy (MMD) (Zhao S et al., 2017) to regularize q(z).", "It turns out that minimizing KL(q(z) p(z)) for discrete z is much simpler than its continuous counterparts.", "Let x n be a sample from a batch of N data points.", "Then we have: q(z) ≈ 1 N N n=1 q(z|x n ) = q (z) (5) where q (z) is a mixture of softmax from the posteriors q(z|x n ) of each x n .", "We can approximate KL(q(z) p(z)) by: KL(q (z) p(z)) = K k=1 q (z = k) log q (z = k) p(z = k) (6) We refer to Eq.", "6 as Batch Prior Regularization (BPR).", "When N approaches infinity, q (z) approaches the true marginal distribution of q(z).", "In practice, we only need to use the data from each mini-batch assuming that the mini batches are randomized.", "Last, BPR is fundamentally different from multiplying a coefficient < 1 to anneal the KL term in VAE (Bowman et al., 2015) .", "This is because BPR is a non-linear operation log sum exp.", "For later discussion, we denote our discrete infoVAE with BPR as DI-VAE.", "Learning 
Sentence Representations from the Context DI-VAE infers sentence representations by reconstruction of the input sentence.", "Past research in distributional semantics has suggested the meaning of language can be inferred from the adjacent context (Harris, 1954; Hill et al., 2016) .", "The distributional hypothesis is especially applicable to dialog since the utterance meaning is highly contextual.", "For example, the dialog act is a wellknown utterance feature and depends on dialog state (Austin, 1975; Stolcke et al., 2000) .", "Thus, we introduce a second type of latent action based on sentence-level distributional semantics.", "Skip thought (ST) is a powerful sentence representation that captures contextual information (Kiros et al., 2015) .", "ST uses an RNN to encode a sentence, and then uses the resulting sentence representation to predict the previous and next sentences.", "Inspired by ST's robust performance across multiple tasks (Hill et al., 2016) , we adapt our DI-VAE to Discrete Information Variational Skip Thought (DI-VST) to learn discrete latent actions that model distributional semantics of sentences.", "We use the same recognition network from DI-VAE to output z's posterior distribution q R (z|x).", "Given the samples from q R (z|x), two RNN generators are used to predict the previous sentence x p and the next sentences x n .", "Finally, the learning objective is to maximize: L DI-VST = E q R (z|x)p(x)) [log(p n G (x n |z)p p G (x p |z))] − KL(q(z) p(z)) (7) Integration with Encoder Decoders We now describe how to integrate a given q R (z|x) with an encoder decoder and a policy network.", "Let the dialog context c be a sequence of utterances.", "Then a dialog context encoder network can encode the dialog context into a distributed representation h e = F e (c).", "The decoder F d can generate the responsesx = F d (h e , z) using samples from q R (z|x).", "Meanwhile, we train π to predict the aggregated posterior E p(x|c) [q R (z|x)] from c via maximum likelihood training.", "This model is referred as Latent Action Encoder Decoder (LAED) with the following objective.", "L LAED (θ F , θ π ) = E q R (z|x)p(x,c) [logp π (z|c) + log p F (x|z, c)] (8) Also simply augmenting the inputs of the decoders with latent action does not guarantee that the generated response exhibits the attributes of the give action.", "Thus we use the controllable text generation framework (Hu et al., 2017) by introducing L Attr , which reuses the same recognition network q R (z|x) as a fixed discriminator to penalize the decoder if its generated responses do not reflect the attributes in z. 
L Attr (θ F ) = E q R (z|x)p(c,x) [log q R (z|F(c, z))] (9) Since it is not possible to propagate gradients through the discrete outputs at F d at each word step, we use a deterministic continuous relaxation (Hu et al., 2017) by replacing output of F d with the probability of each word.", "Let o t be the normalized probability at step t ∈ [1, |x|], the inputs to q R at time t are then the sum of word embeddings weighted by o t , i.e.", "h R t = RNN(h R t−1 , Eo t ) and E is the word embedding matrix.", "Finally this loss is combined with L LAED and a hyperparameter λ to have Attribute Forcing LAED.", "L attrLAED = L LAED + λL Attr (10) Relationship with Conditional VAEs It is not hard to see L LAED is closely related to the objective of CVAEs for dialog generation (Serban et al., 2016; , which is: L CVAE = E q [log p(x|z, c)]−KL(q(z|x, c) p(z|c)) (11) Despite their similarities, we highlight the key differences that prohibit CVAE from achieving interpretable dialog generation.", "First L CVAE encourages I(x, z|c) (Agakov, 2005), which learns z that capture context-dependent semantics.", "More intuitively, z in CVAE is trained to generate x via p(x|z, c) so the meaning of learned z can only be interpreted along with its context c. Therefore this violates our goal of learning context-independent semantics.", "Our methods learn q R (z|x) that only depends on x and trains q R separately to ensure the semantics of z are interpretable standalone.", "Experiments and Results The proposed methods are evaluated on four datasets.", "The first corpus is Penn Treebank (PTB) (Marcus et al., 1993) used to evaluate sentence VAEs (Bowman et al., 2015) .", "We used the version pre-processed by Mikolov (Mikolov et al., 2010) .", "The second dataset is the Stanford Multi-Domain Dialog (SMD) dataset that contains 3,031 human-Woz, task-oriented dialogs collected from 3 different domains (navigation, weather and scheduling) (Eric and Manning, 2017) .", "The other two datasets are chat-oriented data: Daily Dialog (DD) and Switchboard (SW) (Godfrey and Holliman, 1997), which are used to test whether our methods can generalize beyond task-oriented dialogs but also to to open-domain chatting.", "DD contains 13,118 multi-turn human-human dialogs annotated with dialog acts and emotions.", "(Li et al., 2017) .", "SW has 2,400 human-human telephone conversations that are annotated with topics and dialog acts.", "SW is a more challenging dataset because it is transcribed from speech which contains complex spoken language phenomenon, e.g.", "hesitation, self-repair etc.", "Comparing Discrete Sentence Representation Models The first experiment used PTB and DD to evaluate the performance of the proposed methods in learning discrete sentence representations.", "We implemented DI-VAE and DI-VST using GRU-RNN (Chung et al., 2014) and trained them using Adam (Kingma and Ba, 2014) .", "Besides the proposed methods, the following baselines are compared.", "Unregularized models: removing the KL(q|p) term from DI-VAE and DI-VST leads to a simple discrete autoencoder (DAE) and discrete skip thought (DST) with stochastic discrete hidden units.", "ELBO models: the basic discrete sentence VAE (DVAE) or variational skip thought (DVST) that optimizes ELBO with regularization term KL(q(z|x) p(z)).", "We found that standard training failed to learn informative latent actions for either DVAE or DVST because of the posterior collapse.", "Therefore, KL-annealing (Bowman et al., 2015) and bag-of-word loss are used to force these two models 
learn meaningful representations.", "We also include the results for VAE with continuous latent variables reported on the same PTB .", "Additionally, we report the perplexity from a standard GRU-RNN language model (Zaremba et al., 2014) .", "The evaluation metrics include reconstruction perplexity (PPL), KL(q(z) p(z)) and the mutual information between input data and latent vari-ables I (x, z) .", "Intuitively a good model should achieve low perplexity and KL distance, and simultaneously achieve high I(x, z).", "The discrete latent space for all models are M =20 and K=10.", "Mini-batch size is 30.", "Table 1 shows that all models achieve better perplexity than an RNNLM, which shows they manage to learn meaningful q(z|x).", "First, for autoencoding models, DI-VAE is able to achieve the best results in all metrics compared other methods.", "We found DAEs quickly learn to reconstruct the input but they are prone to overfitting during training, which leads to lower performance on the test data compared to DI-VAE.", "Also, since there is no regularization term in the latent space, q(z) is very different from the p(z) which prohibits us from generating sentences from the latent space.", "In fact, DI-VAE enjoys the same linear interpolation properties reported in (Bowman et al., 2015) (See Appendix A.2).", "As for DVAEs, it achieves zero I(x, z) in standard training and only manages to learn some information when training with KL-annealing and bag-of-word loss.", "On the other hand, our methods achieve robust performance without the need for additional processing.", "Similarly, the proposed DI-VST is able to achieve the lowest PPL and similar KL compared to the strongly regularized DVST.", "Interestingly, although DST is able to achieve the highest I(x, z), but PPL is not further improved.", "These results confirm the effectiveness of the proposed BPR in terms of regularizing q(z) while learning meaningful posterior q(z|x).", "In order to understand BPR's sensitivity to batch size N , a follow-up experiment varied the batch size from 2 to 60 (If N =1, DI-VAE is equivalent to DVAE).", "Figure 2 show that as N increases, perplexity, I(x, z) monotonically improves, while KL(q p) only increases from 0 to 0.159.", "After N > 30, the performance plateaus.", "Therefore, using mini-batch is an efficient trade-off between q(z) estimation and computation speed.", "The last experiment in this section investigates the relation between representation learning and the dimension of the latent space.", "We set a fixed budget by restricting the maximum number of modes to be about 1000, i.e.", "K M ≈ 1000.", "We then vary the latent space size and report the same evaluation metrics.", "Table 2 shows that models with multiple small latent variables perform significantly better than those with large and few latent variables.", "K, M K M PPL KL(q p) I(x, z) 1000, 1 1000 75.61 0.032 0.335 10, 3 1000 71.42 0.071 0.607 4, 5 1024 68.43 0.088 0.809 Table 2 : DI-VAE on PTB with different latent dimensions under the same budget.", "Interpreting Latent Actions The next question is to interpret the meaning of the learned latent action symbols.", "To achieve this, the latent action of an utterance x n is obtained from a greedy mapping: a n = argmax k q R (z = k|x n ).", "We set M =3 and K=5, so that there are at most 125 different latent actions, and each x n can now be represented by a 1 -a 2 -a 3 , e.g.", "\"How are you?\"", "→ 1-4-2.", "Assuming that we have access to manually clustered data according to certain classes 
(e.g.", "dialog acts), it is unfair to use classic cluster measures (Vinh et al., 2010) to evaluate the clusters from latent actions.", "This is because the uniform prior p(z) evenly distribute the data to all possible latent actions, so that it is expected that frequent classes will be assigned to several latent actions.", "Thus we utilize the homogeneity metric (Rosenberg and Hirschberg, 2007 ) that measures if each latent action contains only members of a single class.", "We tested this on the SW and DD, which contain human annotated features and we report the latent actions' homogeneity w.r.t these features in Table 3 .", "On DD, results show DI-VST SW DD Act Topic Act Emotion DI-VAE 0.48 0.08 0.18 0.09 DI-VST 0.33 0.13 0.34 0.12 works better than DI-VAE in terms of creating actions that are more coherent for emotion and dialog acts.", "The results are interesting on SW since DI-VST performs worse on dialog acts than DI-VAE.", "One reason is that the dialog acts in SW are more fine-grained (42 acts) than the ones in DD (5 acts) so that distinguishing utterances based on words in x is more important than the information in the neighbouring utterances.", "We then apply the proposed methods to SMD which has no manual annotation and contains taskoriented dialogs.", "Two experts are shown 5 randomly selected utterances from each latent action and are asked to give an action name that can describe as many of the utterances as possible.", "Then an Amazon Mechanical Turk study is conducted to evaluate whether other utterances from the same latent action match these titles.", "5 workers see the action name and a different group of 5 utterances from that latent action.", "They are asked to select all utterances that belong to the given actions, which tests the homogeneity of the utterances falling in the same cluster.", "Negative samples are included to prevent random selection.", "Table 4 shows that both methods work well and DI-VST achieved better homogeneity than DI-VAE.", "Since DI-VAE is trained to reconstruct its input and DI-VST is trained to model the context, they group utterances in different ways.", "For example, DI-VST would group \"Can I get a restaurant\", \"I am looking for a restaurant\" into one action where Dialog Response Generation with Latent Actions Finally we implement an LAED as follows.", "The encoder is a hierarchical recurrent encoder (Serban et al., 2016) with bi-directional GRU-RNNs as the utterance encoder and a second GRU-RNN as the discourse encoder.", "The discourse encoder output its last hidden state h e |x| .", "The decoder is another GRU-RNN and its initial state of the decoder is obtained by h d 0 = h e |x| + M m=1 e m (z m ), where z comes from the recognition network of the proposed methods.", "The policy network π is a 2-layer multi-layer perceptron (MLP) that models p π (z|h e |x| ).", "We use up to the previous 10 utterances as the dialog context and denote the LAED using DI-VAE latent actions as AE-ED and the one uses DI-VST as ST-ED.", "First we need to confirm whether an LAED can generate responses that are consistent with the semantics of a given z.", "To answer this, we use a pre-trained recognition network R to check if a generated response carries the attributes in the given action.", "We generate dialog responses on a test dataset viax = F(z ∼ π(c), c) with greedy RNN decoding.", "The generated responses are passed into the R and we measure attribute accuracy by countingx as correct if z = argmax k q R (k|x).", "Table 6 : Results for attribute 
accuracy with and without attribute loss.", "responses are highly consistent with the given latent actions.", "Also, latent actions from DI-VAE achieve higher attribute accuracy than the ones from DI-VST, because z from auto-encoding is explicitly trained for x reconstruction.", "Adding L attr is effective in forcing the decoder to take z into account during its generation, which helps the most in more challenging open-domain chatting data, e.g.", "SW and DD.", "The accuracy of ST-ED on SW is worse than the other two datasets.", "The reason is that SW contains many short utterances that can be either a continuation of the same speaker or a new turn from the other speaker, whereas the responses in the other two domains are always followed by a different speaker.", "The more complex context pattern in SW may require special treatment.", "We leave it for future work.", "The second experiment checks if the policy network π is able to predict the right latent action given just the dialog context.", "We report both accuracy, i.e.", "argmax k q R (k|x) = argmax k p π (k |c) and perplexity of p π (z|c).", "The perplexity measure is more useful for open domain dialogs because decision-making in complex dialogs is often one-to-many given a similar context .", "Table 7 shows the prediction scores on SMD AE-ED 3.045 (51.5% sys 52.4% usr 50.5%) ST-ED 1.695 (75.5% sys 82.1% usr 69.2%) DD SW AE-ED 4.47 (35.8%) 4.46 (31.68%) ST-ED 3.89 (47.5%) 3.68 (33.2%) Table 7 : Performance of policy network.", "L attr is included in training.", "the three dialog datasets.", "These scores provide useful insights to understand the complexity of a dialog dataset.", "For example, accuracy on opendomain chatting is harder than the task-oriented SMD data.", "Also, it is intuitive that predicting system actions is easier than predicting user actions on SMD.", "Also, in general the prediction scores for ST-ED are higher the ones for AE-ED.", "The reason is related to our previous discussion about the granularity of the latent actions.", "Since latent actions from DI-VST mainly model the the type of utterances used in certain types of context, it is easier for the policy network to predict latent actions from DI-VST.", "Therefore, choosing the type of latent actions is a design choice and depends on the type of interpretability that is needed.", "We finish with an example generated from the two variants of LAED on SMD as shown in Table 8 .", "Given a dialog context, our systems are able to output a probability distribution over different latent actions that have interpretable meaning along with their natural language realizations.", "c usr: Where does my friend live?", "Model Action Generated Responses AE-ED give loc info -Tom is at 753 University Ave, and a road block.", "p(z|c)=0.34 -Comfort Inn is at 7 miles away.", "give user info -Your home address is 5671 barringer street.", "p(z|c)=0.22 -Your home is at 10 ames street.", "ST-ED give loc info -Jill's house is 8 miles away at 347 Alta Mesa Ave. p(z|c)=0.93 -Jill lives at 347 Alta Mesa Ave. 
Table 8 : Interpretable dialog generation on SMD with top probable latent actions.", "AE-ED predicts more fine-grained but more error-prone actions.", "Conclusion and Future Work This paper presents a novel unsupervised framework that enables the discovery of discrete latent actions and interpretable dialog response generation.", "Our main contributions reside in the two sentence representation models DI-VAE and DI-VST, and their integration with the encoder decoder models.", "Experiments show the proposed methods outperform strong baselines in learning discrete latent variables and showcase the effectiveness of interpretable dialog response generation.", "Our findings also suggest promising future research directions, including learning better context-based latent actions and using reinforce-ment learning to adapt policy networks.", "We believe that this work is an important step forward towards creating generative dialog models that can not only generalize to large unlabelled datasets in complex domains but also be explainable to human users." ] }
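The paper_content above derives Batch Prior Regularization (Eqs. 5-6): the marginal q(z) is approximated by averaging the per-example posteriors over a mini-batch, and its KL divergence to the uniform prior is penalized instead of the per-example KL term. Below is a hedged PyTorch sketch of that term under assumed shapes; it is not the authors' implementation.

```python
# Hedged sketch of Batch Prior Regularization (Eqs. 5-6 in the excerpt above):
# q'(z) is approximated by averaging the posteriors q(z|x_n) over a mini-batch,
# then KL(q'(z) || p(z)) is computed against a uniform prior. Shapes and names are assumptions.
import math
import torch

def batch_prior_regularization(posterior_logits: torch.Tensor) -> torch.Tensor:
    # posterior_logits: [N, K] unnormalized log q(z|x_n) for one K-way latent variable
    n, k = posterior_logits.shape
    log_q = torch.log_softmax(posterior_logits, dim=-1)            # log q(z|x_n)
    # log q'(z) = log (1/N) sum_n q(z|x_n), computed in log-space for stability
    log_q_marginal = torch.logsumexp(log_q, dim=0) - math.log(n)
    log_p = -math.log(k)                                           # uniform prior p(z) = 1/K
    kl = torch.sum(log_q_marginal.exp() * (log_q_marginal - log_p))
    return kl  # added to the DI-VAE objective in place of the per-example KL term

# Example: a batch of 30 posteriors over K=10 classes
print(batch_prior_regularization(torch.randn(30, 10)))
```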
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Proposed Methods", "Learning Sentence Representations from Auto-Encoding", "Anti-Information Limitation of ELBO", "VAE with Information Maximization and Batch Prior Regularization", "Learning Sentence Representations from the Context", "Integration with Encoder Decoders", "Relationship with Conditional VAEs", "Experiments and Results", "Comparing Discrete Sentence Representation Models", "Interpreting Latent Actions", "Dialog Response Generation with Latent Actions", "Conclusion and Future Work" ] }
"GEM-SciDuet-train-3#paper-964#slide-1"
"Why discrete sentence representation"
"1. Inrepteablity & controbility & multimodal distribution 2. Semi-supervised Learning [Kingma et al 2014 NIPS, Zhou et al 2017 ACL] 3. Reinforcement Learning [Wen et al 2017] X = What time do you want to travel? Model Z1Z2Z3 Encoder Decoder"
"1. Inrepteablity & controbility & multimodal distribution 2. Semi-supervised Learning [Kingma et al 2014 NIPS, Zhou et al 2017 ACL] 3. Reinforcement Learning [Wen et al 2017] X = What time do you want to travel? Model Z1Z2Z3 Encoder Decoder"
[]
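The slide in the row above motivates discrete codes (the Z1 Z2 Z3 bottleneck between encoder and decoder), and the accompanying paper relies on the Gumbel-Softmax trick to back-propagate through them. Below is a minimal sketch of drawing M K-way codes from an utterance encoding using PyTorch's built-in gumbel_softmax; the GRU recognition encoder and all sizes are assumptions for illustration only.

```python
# Hedged sketch of sampling M K-way discrete codes with the straight-through
# Gumbel-Softmax, in the spirit of the recognition network q_R(z|x) described above.
# The GRU encoder and all sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

M, K, H = 3, 5, 128                      # latent variables, classes each, hidden size

encoder = nn.GRU(input_size=64, hidden_size=H, batch_first=True)
to_logits = nn.Linear(H, M * K)          # W_q h + b_q, one softmax per latent variable

x = torch.randn(2, 10, 64)               # a toy batch of 2 "utterances", 10 steps each
_, h_last = encoder(x)                   # h_last: [1, 2, H], last hidden state
logits = to_logits(h_last[-1]).view(2, M, K)

# hard=True gives one-hot samples in the forward pass, soft gradients in the backward pass
z_onehot = F.gumbel_softmax(logits, tau=1.0, hard=True)    # [2, M, K]
z_index = z_onehot.argmax(dim=-1)                          # integer codes, e.g. "1-4-2"
print(z_index)
```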
"GEM-SciDuet-train-3#paper-964#slide-2"
"964"
"Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation"
"The encoder-decoder dialog model is one of the most prominent methods used to build dialog systems in complex domains. Yet it is limited because it cannot output interpretable actions as in traditional systems, which hinders humans from understanding its generation process. We present an unsupervised discrete sentence representation learning method that can integrate with any existing encoderdecoder dialog models for interpretable response generation. Building upon variational autoencoders (VAEs), we present two novel models, DI-VAE and DI-VST that improve VAEs and can discover interpretable semantics via either auto encoding or context predicting. Our methods have been validated on real-world dialog datasets to discover semantic representations and enhance encoder-decoder models with interpretable generation. 1"
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230 ], "paper_content_text": [ "Introduction Classic dialog systems rely on developing a meaning representation to represent the utterances from both the machine and human users (Larsson and Traum, 2000; Bohus et al., 2007) .", "The dialog manager of a conventional dialog system outputs the system's next action in a semantic frame that usually contains hand-crafted dialog acts and slot values (Williams and Young, 2007) .", "Then a natural language generation module is used to generate the system's output in natural language based on the given semantic frame.", "This approach suffers from generalization to more complex domains because it soon become intractable to man-ually design a frame representation that covers all of the fine-grained system actions.", "The recently developed neural dialog system is one of the most prominent frameworks for developing dialog agents in complex domains.", "The basic model is based on encoder-decoder networks and can learn to generate system responses without the need for hand-crafted meaning representations and other annotations.", "Although generative dialog models have advanced rapidly (Serban et al., 2016; Li et al., 2016; , they cannot provide interpretable system actions as in the conventional dialog systems.", "This inability limits the effectiveness of generative dialog models in several ways.", "First, having interpretable system actions enables human to understand the behavior of a dialog system and better interpret the system intentions.", "Also, modeling the high-level decision-making policy in dialogs enables useful generalization and dataefficient domain adaptation (Gašić et al., 2010) .", "Therefore, the motivation of this paper is to develop an unsupervised neural recognition model that can discover interpretable meaning representations of utterances (denoted as latent actions) as a set of discrete latent variables from a large unlabelled corpus as shown in Figure 1 .", "The discovered meaning representations will then be integrated with encoder decoder networks to achieve interpretable dialog generation while preserving all the merit of neural dialog systems.", "We focus on learning discrete latent representations instead of dense continuous ones because discrete variables are easier to interpret (van den Oord et al., 2017) and can naturally correspond to categories in natural languages, e.g.", "topics, dialog acts and etc.", "Despite the difficulty of learning discrete latent variables in neural 
networks, the recently proposed Gumbel-Softmax offers a reliable way to back-propagate through discrete variables (Maddison et al., 2016; Jang et al., 2016) .", "However, we found a simple combination of sentence variational autoencoders (VAEs) (Bowman et al., 2015) and Gumbel-Softmax fails to learn meaningful discrete representations.", "We then highlight the anti-information limitation of the evidence lowerbound objective (ELBO) in VAEs and improve it by proposing Discrete Information VAE (DI-VAE) that maximizes the mutual information between data and latent actions.", "We further enrich the learning signals beyond auto encoding by extending Skip Thought (Kiros et al., 2015) to Discrete Information Variational Skip Thought (DI-VST) that learns sentence-level distributional semantics.", "Finally, an integration mechanism is presented that combines the learned latent actions with encoder decoder models.", "The proposed systems are tested on several realworld dialog datasets.", "Experiments show that the proposed methods significantly outperform the standard VAEs and can discover meaningful latent actions from these datasets.", "Also, experiments confirm the effectiveness of the proposed integration mechanism and show that the learned latent actions can control the sentence-level attributes of the generated responses and provide humaninterpretable meaning representations.", "Related Work Our work is closely related to research in latent variable dialog models.", "The majority of models are based on Conditional Variational Autoencoders (CVAEs) (Serban et al., 2016; Cao and Clark, 2017) with continuous latent variables to better model the response distribution and encourage diverse responses.", "further introduced dialog acts to guide the learning of the CVAEs.", "Discrete latent variables have also been used for task-oriented dialog systems (Wen et al., 2017) , where the latent space is used to represent intention.", "The second line of related work is enriching the dialog context encoder with more fine-grained information than the dialog history.", "Li et al., (2016) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "The proposed method also relates to sentence representation learning using neural networks.", "Most work learns continuous distributed representations of sentences from various learning signals (Hill et al., 2016) , e.g.", "the Skip Thought learns representations by predicting the previous and next sentences (Kiros et al., 2015) .", "Another area of work focused on learning regularized continuous sentence representation, which enables sentence generation by sampling the latent space (Bowman et al., 2015; Kim et al., 2017) .", "There is less work on discrete sentence representations due to the difficulty of passing gradients through discrete outputs.", "The recently developed Gumbel Softmax (Jang et al., 2016; Maddison et al., 2016) and vector quantization (van den Oord et al., 2017) enable us to train discrete variables.", "Notably, discrete variable models have been proposed to discover document topics (Miao et al., 2016) and semi-supervised sequence transaction (Zhou and Neubig, 2017) Our work differs from these as follows: (1) we focus on learning interpretable variables; in prior research the semantics of 
latent variables are mostly ignored in the dialog generation setting.", "(2) we improve the learning objective for discrete VAEs and overcome the well-known posterior collapsing issue (Bowman et al., 2015; Chen et al., 2016) .", "(3) we focus on unsupervised learning of salient features in dialog responses instead of hand-crafted features.", "Proposed Methods Our formulation contains three random variables: the dialog context c, the response x and the latent action z.", "The context often contains the discourse history in the format of a list of utterances.", "The response is an utterance that contains a list of word tokens.", "The latent action is a set of discrete variables that define high-level attributes of x.", "Before introducing the proposed framework, we first identify two key properties that are essential in or-der for z to be interpretable: 1. z should capture salient sentence-level features about the response x.", "2.", "The meaning of latent symbols z should be independent of the context c. The first property is self-evident.", "The second can be explained: assume z contains a single discrete variable with K classes.", "Since the context c can be any dialog history, if the meaning of each class changes given a different context, then it is difficult to extract an intuitive interpretation by only looking at all responses with class k ∈ [1, K].", "Therefore, the second property looks for latent actions that have context-independent semantics so that each assignment of z conveys the same meaning in all dialog contexts.", "With the above definition of interpretable latent actions, we first introduce a recognition network R : q R (z|x) and a generation network G. The role of R is to map an sentence to the latent variable z and the generator G defines the learning signals that will be used to train z's representation.", "Notably, our recognition network R does not depend on the context c as has been the case in prior work (Serban et al., 2016) .", "The motivation of this design is to encourage z to capture context-independent semantics, which are further elaborated in Section 3.4.", "With the z learned by R and G, we then introduce an encoder decoder network F : p F (x|z, c) and and a policy network π : p π (z|c).", "At test time, given a context c, the policy network and encoder decoder will work together to generate the next response viã x = p F (x|z ∼ p π (z|c), c).", "In short, R, G, F and π are the four components that comprise our proposed framework.", "The next section will first focus on developing R and G for learning interpretable z and then will move on to integrating R with F and π in Section 3.3.", "Learning Sentence Representations from Auto-Encoding Our baseline model is a sentence VAE with discrete latent space.", "We use an RNN as the recognition network to encode the response x.", "Its last hidden state h R |x| is used to represent x.", "We define z to be a set of K-way categorical variables z = {z 1 ...z m ...z M }, where M is the number of variables.", "For each z m , its posterior distribution is defined as q R (z m |x) = Softmax(W q h R |x| + b q ).", "During training, we use the Gumbel-Softmax trick to sample from this distribution and obtain lowvariance gradients.", "To map the latent samples to the initial state of the decoder RNN, we define {e 1 ...e m ...e M } where e m ∈ R K×D and D is the generator cell size.", "Thus the initial state of the generator is: h G 0 = M m=1 e m (z m ).", "Finally, the generator RNN is used to reconstruct the response given h 
G 0 .", "VAEs is trained to maxmimize the evidence lowerbound objective (ELBO) (Kingma and Welling, 2013) .", "For simplicity, later discussion drops the subscript m in z m and assumes a single latent z.", "Since each z m is independent, we can easily extend the results below to multiple variables.", "Anti-Information Limitation of ELBO It is well-known that sentence VAEs are hard to train because of the posterior collapse issue.", "Many empirical solutions have been proposed: weakening the decoder, adding auxiliary loss etc.", "(Bowman et al., 2015; Chen et al., 2016; .", "We argue that the posterior collapse issue lies in ELBO and we offer a novel decomposition to understand its behavior.", "First, instead of writing ELBO for a single data point, we write it as an expectation over a dataset: L VAE = E x [E q R (z|x) [log p G (x|z)] − KL(q R (z|x) p(z))] (1) We can expand the KL term as Eq.", "2 (derivations in Appendix A.1) and rewrite ELBO as: E x [KL(q R (z|x) p(z))] = (2) I(Z, X)+KL(q(z) p(z)) L VAE = E q(z|x)p(x) [log p(x|z)] − I(Z, X) − KL(q(z) p(z)) (3) where q(z) = E x [q R (z|x)] and I(Z, X) is the mutual information between Z and X.", "This expansion shows that the KL term in ELBO is trying to reduce the mutual information between latent variables and the input data, which explains why VAEs often ignore the latent variable, especially when equipped with powerful decoders.", "VAE with Information Maximization and Batch Prior Regularization A natural solution to correct the anti-information issue in Eq.", "3 is to maximize both the data likeli-hood lowerbound and the mutual information between z and the input data: L VAE + I(Z, X) = E q R (z|x)p(x) [log p G (x|z)] − KL(q(z) p(z)) (4) Therefore, jointly optimizing ELBO and mutual information simply cancels out the informationdiscouraging term.", "Also, we can still sample from the prior distribution for generation because of KL(q(z) p(z)).", "Eq.", "4 is similar to the objectives used in adversarial autoencoders (Makhzani et al., 2015; Kim et al., 2017) .", "Our derivation provides a theoretical justification to their superior performance.", "Notably, Eq.", "4 arrives at the same loss function proposed in infoVAE (Zhao S et al., 2017) .", "However, our derivation is different, offering a new way to understand ELBO behavior.", "The remaining challenge is how to minimize KL(q(z) p(z)), since q(z) is an expectation over q(z|x).", "When z is continuous, prior work has used adversarial training (Makhzani et al., 2015; Kim et al., 2017) or Maximum Mean Discrepancy (MMD) (Zhao S et al., 2017) to regularize q(z).", "It turns out that minimizing KL(q(z) p(z)) for discrete z is much simpler than its continuous counterparts.", "Let x n be a sample from a batch of N data points.", "Then we have: q(z) ≈ 1 N N n=1 q(z|x n ) = q (z) (5) where q (z) is a mixture of softmax from the posteriors q(z|x n ) of each x n .", "We can approximate KL(q(z) p(z)) by: KL(q (z) p(z)) = K k=1 q (z = k) log q (z = k) p(z = k) (6) We refer to Eq.", "6 as Batch Prior Regularization (BPR).", "When N approaches infinity, q (z) approaches the true marginal distribution of q(z).", "In practice, we only need to use the data from each mini-batch assuming that the mini batches are randomized.", "Last, BPR is fundamentally different from multiplying a coefficient < 1 to anneal the KL term in VAE (Bowman et al., 2015) .", "This is because BPR is a non-linear operation log sum exp.", "For later discussion, we denote our discrete infoVAE with BPR as DI-VAE.", "Learning 
Sentence Representations from the Context DI-VAE infers sentence representations by reconstruction of the input sentence.", "Past research in distributional semantics has suggested the meaning of language can be inferred from the adjacent context (Harris, 1954; Hill et al., 2016) .", "The distributional hypothesis is especially applicable to dialog since the utterance meaning is highly contextual.", "For example, the dialog act is a wellknown utterance feature and depends on dialog state (Austin, 1975; Stolcke et al., 2000) .", "Thus, we introduce a second type of latent action based on sentence-level distributional semantics.", "Skip thought (ST) is a powerful sentence representation that captures contextual information (Kiros et al., 2015) .", "ST uses an RNN to encode a sentence, and then uses the resulting sentence representation to predict the previous and next sentences.", "Inspired by ST's robust performance across multiple tasks (Hill et al., 2016) , we adapt our DI-VAE to Discrete Information Variational Skip Thought (DI-VST) to learn discrete latent actions that model distributional semantics of sentences.", "We use the same recognition network from DI-VAE to output z's posterior distribution q R (z|x).", "Given the samples from q R (z|x), two RNN generators are used to predict the previous sentence x p and the next sentences x n .", "Finally, the learning objective is to maximize: L DI-VST = E q R (z|x)p(x)) [log(p n G (x n |z)p p G (x p |z))] − KL(q(z) p(z)) (7) Integration with Encoder Decoders We now describe how to integrate a given q R (z|x) with an encoder decoder and a policy network.", "Let the dialog context c be a sequence of utterances.", "Then a dialog context encoder network can encode the dialog context into a distributed representation h e = F e (c).", "The decoder F d can generate the responsesx = F d (h e , z) using samples from q R (z|x).", "Meanwhile, we train π to predict the aggregated posterior E p(x|c) [q R (z|x)] from c via maximum likelihood training.", "This model is referred as Latent Action Encoder Decoder (LAED) with the following objective.", "L LAED (θ F , θ π ) = E q R (z|x)p(x,c) [logp π (z|c) + log p F (x|z, c)] (8) Also simply augmenting the inputs of the decoders with latent action does not guarantee that the generated response exhibits the attributes of the give action.", "Thus we use the controllable text generation framework (Hu et al., 2017) by introducing L Attr , which reuses the same recognition network q R (z|x) as a fixed discriminator to penalize the decoder if its generated responses do not reflect the attributes in z. 
L Attr (θ F ) = E q R (z|x)p(c,x) [log q R (z|F(c, z))] (9) Since it is not possible to propagate gradients through the discrete outputs at F d at each word step, we use a deterministic continuous relaxation (Hu et al., 2017) by replacing output of F d with the probability of each word.", "Let o t be the normalized probability at step t ∈ [1, |x|], the inputs to q R at time t are then the sum of word embeddings weighted by o t , i.e.", "h R t = RNN(h R t−1 , Eo t ) and E is the word embedding matrix.", "Finally this loss is combined with L LAED and a hyperparameter λ to have Attribute Forcing LAED.", "L attrLAED = L LAED + λL Attr (10) Relationship with Conditional VAEs It is not hard to see L LAED is closely related to the objective of CVAEs for dialog generation (Serban et al., 2016; , which is: L CVAE = E q [log p(x|z, c)]−KL(q(z|x, c) p(z|c)) (11) Despite their similarities, we highlight the key differences that prohibit CVAE from achieving interpretable dialog generation.", "First L CVAE encourages I(x, z|c) (Agakov, 2005), which learns z that capture context-dependent semantics.", "More intuitively, z in CVAE is trained to generate x via p(x|z, c) so the meaning of learned z can only be interpreted along with its context c. Therefore this violates our goal of learning context-independent semantics.", "Our methods learn q R (z|x) that only depends on x and trains q R separately to ensure the semantics of z are interpretable standalone.", "Experiments and Results The proposed methods are evaluated on four datasets.", "The first corpus is Penn Treebank (PTB) (Marcus et al., 1993) used to evaluate sentence VAEs (Bowman et al., 2015) .", "We used the version pre-processed by Mikolov (Mikolov et al., 2010) .", "The second dataset is the Stanford Multi-Domain Dialog (SMD) dataset that contains 3,031 human-Woz, task-oriented dialogs collected from 3 different domains (navigation, weather and scheduling) (Eric and Manning, 2017) .", "The other two datasets are chat-oriented data: Daily Dialog (DD) and Switchboard (SW) (Godfrey and Holliman, 1997), which are used to test whether our methods can generalize beyond task-oriented dialogs but also to to open-domain chatting.", "DD contains 13,118 multi-turn human-human dialogs annotated with dialog acts and emotions.", "(Li et al., 2017) .", "SW has 2,400 human-human telephone conversations that are annotated with topics and dialog acts.", "SW is a more challenging dataset because it is transcribed from speech which contains complex spoken language phenomenon, e.g.", "hesitation, self-repair etc.", "Comparing Discrete Sentence Representation Models The first experiment used PTB and DD to evaluate the performance of the proposed methods in learning discrete sentence representations.", "We implemented DI-VAE and DI-VST using GRU-RNN (Chung et al., 2014) and trained them using Adam (Kingma and Ba, 2014) .", "Besides the proposed methods, the following baselines are compared.", "Unregularized models: removing the KL(q|p) term from DI-VAE and DI-VST leads to a simple discrete autoencoder (DAE) and discrete skip thought (DST) with stochastic discrete hidden units.", "ELBO models: the basic discrete sentence VAE (DVAE) or variational skip thought (DVST) that optimizes ELBO with regularization term KL(q(z|x) p(z)).", "We found that standard training failed to learn informative latent actions for either DVAE or DVST because of the posterior collapse.", "Therefore, KL-annealing (Bowman et al., 2015) and bag-of-word loss are used to force these two models 
learn meaningful representations.", "We also include the results for VAE with continuous latent variables reported on the same PTB .", "Additionally, we report the perplexity from a standard GRU-RNN language model (Zaremba et al., 2014) .", "The evaluation metrics include reconstruction perplexity (PPL), KL(q(z) p(z)) and the mutual information between input data and latent vari-ables I (x, z) .", "Intuitively a good model should achieve low perplexity and KL distance, and simultaneously achieve high I(x, z).", "The discrete latent space for all models are M =20 and K=10.", "Mini-batch size is 30.", "Table 1 shows that all models achieve better perplexity than an RNNLM, which shows they manage to learn meaningful q(z|x).", "First, for autoencoding models, DI-VAE is able to achieve the best results in all metrics compared other methods.", "We found DAEs quickly learn to reconstruct the input but they are prone to overfitting during training, which leads to lower performance on the test data compared to DI-VAE.", "Also, since there is no regularization term in the latent space, q(z) is very different from the p(z) which prohibits us from generating sentences from the latent space.", "In fact, DI-VAE enjoys the same linear interpolation properties reported in (Bowman et al., 2015) (See Appendix A.2).", "As for DVAEs, it achieves zero I(x, z) in standard training and only manages to learn some information when training with KL-annealing and bag-of-word loss.", "On the other hand, our methods achieve robust performance without the need for additional processing.", "Similarly, the proposed DI-VST is able to achieve the lowest PPL and similar KL compared to the strongly regularized DVST.", "Interestingly, although DST is able to achieve the highest I(x, z), but PPL is not further improved.", "These results confirm the effectiveness of the proposed BPR in terms of regularizing q(z) while learning meaningful posterior q(z|x).", "In order to understand BPR's sensitivity to batch size N , a follow-up experiment varied the batch size from 2 to 60 (If N =1, DI-VAE is equivalent to DVAE).", "Figure 2 show that as N increases, perplexity, I(x, z) monotonically improves, while KL(q p) only increases from 0 to 0.159.", "After N > 30, the performance plateaus.", "Therefore, using mini-batch is an efficient trade-off between q(z) estimation and computation speed.", "The last experiment in this section investigates the relation between representation learning and the dimension of the latent space.", "We set a fixed budget by restricting the maximum number of modes to be about 1000, i.e.", "K M ≈ 1000.", "We then vary the latent space size and report the same evaluation metrics.", "Table 2 shows that models with multiple small latent variables perform significantly better than those with large and few latent variables.", "K, M K M PPL KL(q p) I(x, z) 1000, 1 1000 75.61 0.032 0.335 10, 3 1000 71.42 0.071 0.607 4, 5 1024 68.43 0.088 0.809 Table 2 : DI-VAE on PTB with different latent dimensions under the same budget.", "Interpreting Latent Actions The next question is to interpret the meaning of the learned latent action symbols.", "To achieve this, the latent action of an utterance x n is obtained from a greedy mapping: a n = argmax k q R (z = k|x n ).", "We set M =3 and K=5, so that there are at most 125 different latent actions, and each x n can now be represented by a 1 -a 2 -a 3 , e.g.", "\"How are you?\"", "→ 1-4-2.", "Assuming that we have access to manually clustered data according to certain classes 
(e.g.", "dialog acts), it is unfair to use classic cluster measures (Vinh et al., 2010) to evaluate the clusters from latent actions.", "This is because the uniform prior p(z) evenly distribute the data to all possible latent actions, so that it is expected that frequent classes will be assigned to several latent actions.", "Thus we utilize the homogeneity metric (Rosenberg and Hirschberg, 2007 ) that measures if each latent action contains only members of a single class.", "We tested this on the SW and DD, which contain human annotated features and we report the latent actions' homogeneity w.r.t these features in Table 3 .", "On DD, results show DI-VST SW DD Act Topic Act Emotion DI-VAE 0.48 0.08 0.18 0.09 DI-VST 0.33 0.13 0.34 0.12 works better than DI-VAE in terms of creating actions that are more coherent for emotion and dialog acts.", "The results are interesting on SW since DI-VST performs worse on dialog acts than DI-VAE.", "One reason is that the dialog acts in SW are more fine-grained (42 acts) than the ones in DD (5 acts) so that distinguishing utterances based on words in x is more important than the information in the neighbouring utterances.", "We then apply the proposed methods to SMD which has no manual annotation and contains taskoriented dialogs.", "Two experts are shown 5 randomly selected utterances from each latent action and are asked to give an action name that can describe as many of the utterances as possible.", "Then an Amazon Mechanical Turk study is conducted to evaluate whether other utterances from the same latent action match these titles.", "5 workers see the action name and a different group of 5 utterances from that latent action.", "They are asked to select all utterances that belong to the given actions, which tests the homogeneity of the utterances falling in the same cluster.", "Negative samples are included to prevent random selection.", "Table 4 shows that both methods work well and DI-VST achieved better homogeneity than DI-VAE.", "Since DI-VAE is trained to reconstruct its input and DI-VST is trained to model the context, they group utterances in different ways.", "For example, DI-VST would group \"Can I get a restaurant\", \"I am looking for a restaurant\" into one action where Dialog Response Generation with Latent Actions Finally we implement an LAED as follows.", "The encoder is a hierarchical recurrent encoder (Serban et al., 2016) with bi-directional GRU-RNNs as the utterance encoder and a second GRU-RNN as the discourse encoder.", "The discourse encoder output its last hidden state h e |x| .", "The decoder is another GRU-RNN and its initial state of the decoder is obtained by h d 0 = h e |x| + M m=1 e m (z m ), where z comes from the recognition network of the proposed methods.", "The policy network π is a 2-layer multi-layer perceptron (MLP) that models p π (z|h e |x| ).", "We use up to the previous 10 utterances as the dialog context and denote the LAED using DI-VAE latent actions as AE-ED and the one uses DI-VST as ST-ED.", "First we need to confirm whether an LAED can generate responses that are consistent with the semantics of a given z.", "To answer this, we use a pre-trained recognition network R to check if a generated response carries the attributes in the given action.", "We generate dialog responses on a test dataset viax = F(z ∼ π(c), c) with greedy RNN decoding.", "The generated responses are passed into the R and we measure attribute accuracy by countingx as correct if z = argmax k q R (k|x).", "Table 6 : Results for attribute 
accuracy with and without attribute loss.", "responses are highly consistent with the given latent actions.", "Also, latent actions from DI-VAE achieve higher attribute accuracy than the ones from DI-VST, because z from auto-encoding is explicitly trained for x reconstruction.", "Adding L attr is effective in forcing the decoder to take z into account during its generation, which helps the most in more challenging open-domain chatting data, e.g.", "SW and DD.", "The accuracy of ST-ED on SW is worse than the other two datasets.", "The reason is that SW contains many short utterances that can be either a continuation of the same speaker or a new turn from the other speaker, whereas the responses in the other two domains are always followed by a different speaker.", "The more complex context pattern in SW may require special treatment.", "We leave it for future work.", "The second experiment checks if the policy network π is able to predict the right latent action given just the dialog context.", "We report both accuracy, i.e.", "argmax k q R (k|x) = argmax k p π (k |c) and perplexity of p π (z|c).", "The perplexity measure is more useful for open domain dialogs because decision-making in complex dialogs is often one-to-many given a similar context .", "Table 7 shows the prediction scores on SMD AE-ED 3.045 (51.5% sys 52.4% usr 50.5%) ST-ED 1.695 (75.5% sys 82.1% usr 69.2%) DD SW AE-ED 4.47 (35.8%) 4.46 (31.68%) ST-ED 3.89 (47.5%) 3.68 (33.2%) Table 7 : Performance of policy network.", "L attr is included in training.", "the three dialog datasets.", "These scores provide useful insights to understand the complexity of a dialog dataset.", "For example, accuracy on opendomain chatting is harder than the task-oriented SMD data.", "Also, it is intuitive that predicting system actions is easier than predicting user actions on SMD.", "Also, in general the prediction scores for ST-ED are higher the ones for AE-ED.", "The reason is related to our previous discussion about the granularity of the latent actions.", "Since latent actions from DI-VST mainly model the the type of utterances used in certain types of context, it is easier for the policy network to predict latent actions from DI-VST.", "Therefore, choosing the type of latent actions is a design choice and depends on the type of interpretability that is needed.", "We finish with an example generated from the two variants of LAED on SMD as shown in Table 8 .", "Given a dialog context, our systems are able to output a probability distribution over different latent actions that have interpretable meaning along with their natural language realizations.", "c usr: Where does my friend live?", "Model Action Generated Responses AE-ED give loc info -Tom is at 753 University Ave, and a road block.", "p(z|c)=0.34 -Comfort Inn is at 7 miles away.", "give user info -Your home address is 5671 barringer street.", "p(z|c)=0.22 -Your home is at 10 ames street.", "ST-ED give loc info -Jill's house is 8 miles away at 347 Alta Mesa Ave. p(z|c)=0.93 -Jill lives at 347 Alta Mesa Ave. 
Table 8 : Interpretable dialog generation on SMD with top probable latent actions.", "AE-ED predicts more fine-grained but more error-prone actions.", "Conclusion and Future Work This paper presents a novel unsupervised framework that enables the discovery of discrete latent actions and interpretable dialog response generation.", "Our main contributions reside in the two sentence representation models DI-VAE and DI-VST, and their integration with the encoder decoder models.", "Experiments show the proposed methods outperform strong baselines in learning discrete latent variables and showcase the effectiveness of interpretable dialog response generation.", "Our findings also suggest promising future research directions, including learning better context-based latent actions and using reinforce-ment learning to adapt policy networks.", "We believe that this work is an important step forward towards creating generative dialog models that can not only generalize to large unlabelled datasets in complex domains but also be explainable to human users." ] }
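The greedy mapping a_n = argmax_k q_R(z = k|x_n) used above to interpret latent actions is straightforward to sketch. Below is an illustrative NumPy version (the function name and toy posteriors are assumptions, not the authors' code) showing how posteriors over M = 3 five-way latent variables reduce to an action code in the paper's "How are you?" → 1-4-2 style.

```python
import numpy as np

def latent_action_code(posteriors):
    """posteriors: array of shape (M, K); each row is q_R(z_m = k | x) over K classes."""
    ids = posteriors.argmax(axis=-1)                 # greedy class index per latent variable
    return "-".join(str(int(i) + 1) for i in ids)    # 1-based labels, matching the 1-4-2 style

# Toy posteriors for M = 3, K = 5 (numbers are illustrative only).
q = np.array([[0.70, 0.10, 0.10, 0.05, 0.05],
              [0.10, 0.10, 0.10, 0.60, 0.10],
              [0.20, 0.50, 0.10, 0.10, 0.10]])
print(latent_action_code(q))  # -> 1-4-2
```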
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Proposed Methods", "Learning Sentence Representations from Auto-Encoding", "Anti-Information Limitation of ELBO", "VAE with Information Maximization and Batch Prior Regularization", "Learning Sentence Representations from the Context", "Integration with Encoder Decoders", "Relationship with Conditional VAEs", "Experiments and Results", "Comparing Discrete Sentence Representation Models", "Interpreting Latent Actions", "Dialog Response Generation with Latent Actions", "Conclusion and Future Work" ] }
"GEM-SciDuet-train-3#paper-964#slide-2"
"Baseline Discrete Variational Autoencoder VAE"
"M discrete K-way latent variables z with RNN recognition & generation network. Reparametrization using Gumbel-Softmax [Jang et al., 2016; Maddison et al., 2016] M discrete K-way latent variables z with GRU encoder & decoder. FAIL to learn meaningful z because of posterior collapse (z is constant regardless of x) MANY prior solution on continuous VAE, e.g. (not exhaustive), yet still open-ended question KL-annealing, decoder word dropout [Bowman et a2015] Bag-of-word loss [Zhao et al 2017] Dilated CNN decoder"
"M discrete K-way latent variables z with RNN recognition & generation network. Reparametrization using Gumbel-Softmax [Jang et al., 2016; Maddison et al., 2016] M discrete K-way latent variables z with GRU encoder & decoder. FAIL to learn meaningful z because of posterior collapse (z is constant regardless of x) MANY prior solution on continuous VAE, e.g. (not exhaustive), yet still open-ended question KL-annealing, decoder word dropout [Bowman et a2015] Bag-of-word loss [Zhao et al 2017] Dilated CNN decoder"
[]
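The slide above relies on the Gumbel-Softmax reparametrization for M discrete K-way latent variables. A minimal sketch of that trick, following the standard formulation in Jang et al. (2016) and Maddison et al. (2016), is given below; the temperature, seed, and shapes (M = 20, K = 10, as in the paper's experiments) are assumptions for illustration, and this is not the authors' implementation.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """logits: (M, K) unnormalized scores; returns (M, K) relaxed one-hot samples."""
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(1e-10, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))                       # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y -= y.max(axis=-1, keepdims=True)            # numerical stability before softmax
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)      # differentiable approximation of one-hot

sample = gumbel_softmax(np.zeros((20, 10)))       # M = 20, K = 10 as in the experiments
print(sample.shape, np.allclose(sample.sum(axis=-1), 1.0))
```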
"GEM-SciDuet-train-3#paper-964#slide-3"
"964"
"Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation"
"The encoder-decoder dialog model is one of the most prominent methods used to build dialog systems in complex domains. Yet it is limited because it cannot output interpretable actions as in traditional systems, which hinders humans from understanding its generation process. We present an unsupervised discrete sentence representation learning method that can integrate with any existing encoderdecoder dialog models for interpretable response generation. Building upon variational autoencoders (VAEs), we present two novel models, DI-VAE and DI-VST that improve VAEs and can discover interpretable semantics via either auto encoding or context predicting. Our methods have been validated on real-world dialog datasets to discover semantic representations and enhance encoder-decoder models with interpretable generation. 1"
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230 ], "paper_content_text": [ "Introduction Classic dialog systems rely on developing a meaning representation to represent the utterances from both the machine and human users (Larsson and Traum, 2000; Bohus et al., 2007) .", "The dialog manager of a conventional dialog system outputs the system's next action in a semantic frame that usually contains hand-crafted dialog acts and slot values (Williams and Young, 2007) .", "Then a natural language generation module is used to generate the system's output in natural language based on the given semantic frame.", "This approach suffers from generalization to more complex domains because it soon become intractable to man-ually design a frame representation that covers all of the fine-grained system actions.", "The recently developed neural dialog system is one of the most prominent frameworks for developing dialog agents in complex domains.", "The basic model is based on encoder-decoder networks and can learn to generate system responses without the need for hand-crafted meaning representations and other annotations.", "Although generative dialog models have advanced rapidly (Serban et al., 2016; Li et al., 2016; , they cannot provide interpretable system actions as in the conventional dialog systems.", "This inability limits the effectiveness of generative dialog models in several ways.", "First, having interpretable system actions enables human to understand the behavior of a dialog system and better interpret the system intentions.", "Also, modeling the high-level decision-making policy in dialogs enables useful generalization and dataefficient domain adaptation (Gašić et al., 2010) .", "Therefore, the motivation of this paper is to develop an unsupervised neural recognition model that can discover interpretable meaning representations of utterances (denoted as latent actions) as a set of discrete latent variables from a large unlabelled corpus as shown in Figure 1 .", "The discovered meaning representations will then be integrated with encoder decoder networks to achieve interpretable dialog generation while preserving all the merit of neural dialog systems.", "We focus on learning discrete latent representations instead of dense continuous ones because discrete variables are easier to interpret (van den Oord et al., 2017) and can naturally correspond to categories in natural languages, e.g.", "topics, dialog acts and etc.", "Despite the difficulty of learning discrete latent variables in neural 
networks, the recently proposed Gumbel-Softmax offers a reliable way to back-propagate through discrete variables (Maddison et al., 2016; Jang et al., 2016) .", "However, we found a simple combination of sentence variational autoencoders (VAEs) (Bowman et al., 2015) and Gumbel-Softmax fails to learn meaningful discrete representations.", "We then highlight the anti-information limitation of the evidence lowerbound objective (ELBO) in VAEs and improve it by proposing Discrete Information VAE (DI-VAE) that maximizes the mutual information between data and latent actions.", "We further enrich the learning signals beyond auto encoding by extending Skip Thought (Kiros et al., 2015) to Discrete Information Variational Skip Thought (DI-VST) that learns sentence-level distributional semantics.", "Finally, an integration mechanism is presented that combines the learned latent actions with encoder decoder models.", "The proposed systems are tested on several realworld dialog datasets.", "Experiments show that the proposed methods significantly outperform the standard VAEs and can discover meaningful latent actions from these datasets.", "Also, experiments confirm the effectiveness of the proposed integration mechanism and show that the learned latent actions can control the sentence-level attributes of the generated responses and provide humaninterpretable meaning representations.", "Related Work Our work is closely related to research in latent variable dialog models.", "The majority of models are based on Conditional Variational Autoencoders (CVAEs) (Serban et al., 2016; Cao and Clark, 2017) with continuous latent variables to better model the response distribution and encourage diverse responses.", "further introduced dialog acts to guide the learning of the CVAEs.", "Discrete latent variables have also been used for task-oriented dialog systems (Wen et al., 2017) , where the latent space is used to represent intention.", "The second line of related work is enriching the dialog context encoder with more fine-grained information than the dialog history.", "Li et al., (2016) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "The proposed method also relates to sentence representation learning using neural networks.", "Most work learns continuous distributed representations of sentences from various learning signals (Hill et al., 2016) , e.g.", "the Skip Thought learns representations by predicting the previous and next sentences (Kiros et al., 2015) .", "Another area of work focused on learning regularized continuous sentence representation, which enables sentence generation by sampling the latent space (Bowman et al., 2015; Kim et al., 2017) .", "There is less work on discrete sentence representations due to the difficulty of passing gradients through discrete outputs.", "The recently developed Gumbel Softmax (Jang et al., 2016; Maddison et al., 2016) and vector quantization (van den Oord et al., 2017) enable us to train discrete variables.", "Notably, discrete variable models have been proposed to discover document topics (Miao et al., 2016) and semi-supervised sequence transaction (Zhou and Neubig, 2017) Our work differs from these as follows: (1) we focus on learning interpretable variables; in prior research the semantics of 
latent variables are mostly ignored in the dialog generation setting.", "(2) we improve the learning objective for discrete VAEs and overcome the well-known posterior collapsing issue (Bowman et al., 2015; Chen et al., 2016) .", "(3) we focus on unsupervised learning of salient features in dialog responses instead of hand-crafted features.", "Proposed Methods Our formulation contains three random variables: the dialog context c, the response x and the latent action z.", "The context often contains the discourse history in the format of a list of utterances.", "The response is an utterance that contains a list of word tokens.", "The latent action is a set of discrete variables that define high-level attributes of x.", "Before introducing the proposed framework, we first identify two key properties that are essential in or-der for z to be interpretable: 1. z should capture salient sentence-level features about the response x.", "2.", "The meaning of latent symbols z should be independent of the context c. The first property is self-evident.", "The second can be explained: assume z contains a single discrete variable with K classes.", "Since the context c can be any dialog history, if the meaning of each class changes given a different context, then it is difficult to extract an intuitive interpretation by only looking at all responses with class k ∈ [1, K].", "Therefore, the second property looks for latent actions that have context-independent semantics so that each assignment of z conveys the same meaning in all dialog contexts.", "With the above definition of interpretable latent actions, we first introduce a recognition network R : q R (z|x) and a generation network G. The role of R is to map an sentence to the latent variable z and the generator G defines the learning signals that will be used to train z's representation.", "Notably, our recognition network R does not depend on the context c as has been the case in prior work (Serban et al., 2016) .", "The motivation of this design is to encourage z to capture context-independent semantics, which are further elaborated in Section 3.4.", "With the z learned by R and G, we then introduce an encoder decoder network F : p F (x|z, c) and and a policy network π : p π (z|c).", "At test time, given a context c, the policy network and encoder decoder will work together to generate the next response viã x = p F (x|z ∼ p π (z|c), c).", "In short, R, G, F and π are the four components that comprise our proposed framework.", "The next section will first focus on developing R and G for learning interpretable z and then will move on to integrating R with F and π in Section 3.3.", "Learning Sentence Representations from Auto-Encoding Our baseline model is a sentence VAE with discrete latent space.", "We use an RNN as the recognition network to encode the response x.", "Its last hidden state h R |x| is used to represent x.", "We define z to be a set of K-way categorical variables z = {z 1 ...z m ...z M }, where M is the number of variables.", "For each z m , its posterior distribution is defined as q R (z m |x) = Softmax(W q h R |x| + b q ).", "During training, we use the Gumbel-Softmax trick to sample from this distribution and obtain lowvariance gradients.", "To map the latent samples to the initial state of the decoder RNN, we define {e 1 ...e m ...e M } where e m ∈ R K×D and D is the generator cell size.", "Thus the initial state of the generator is: h G 0 = M m=1 e m (z m ).", "Finally, the generator RNN is used to reconstruct the response given h 
G 0 .", "VAEs is trained to maxmimize the evidence lowerbound objective (ELBO) (Kingma and Welling, 2013) .", "For simplicity, later discussion drops the subscript m in z m and assumes a single latent z.", "Since each z m is independent, we can easily extend the results below to multiple variables.", "Anti-Information Limitation of ELBO It is well-known that sentence VAEs are hard to train because of the posterior collapse issue.", "Many empirical solutions have been proposed: weakening the decoder, adding auxiliary loss etc.", "(Bowman et al., 2015; Chen et al., 2016; .", "We argue that the posterior collapse issue lies in ELBO and we offer a novel decomposition to understand its behavior.", "First, instead of writing ELBO for a single data point, we write it as an expectation over a dataset: L VAE = E x [E q R (z|x) [log p G (x|z)] − KL(q R (z|x) p(z))] (1) We can expand the KL term as Eq.", "2 (derivations in Appendix A.1) and rewrite ELBO as: E x [KL(q R (z|x) p(z))] = (2) I(Z, X)+KL(q(z) p(z)) L VAE = E q(z|x)p(x) [log p(x|z)] − I(Z, X) − KL(q(z) p(z)) (3) where q(z) = E x [q R (z|x)] and I(Z, X) is the mutual information between Z and X.", "This expansion shows that the KL term in ELBO is trying to reduce the mutual information between latent variables and the input data, which explains why VAEs often ignore the latent variable, especially when equipped with powerful decoders.", "VAE with Information Maximization and Batch Prior Regularization A natural solution to correct the anti-information issue in Eq.", "3 is to maximize both the data likeli-hood lowerbound and the mutual information between z and the input data: L VAE + I(Z, X) = E q R (z|x)p(x) [log p G (x|z)] − KL(q(z) p(z)) (4) Therefore, jointly optimizing ELBO and mutual information simply cancels out the informationdiscouraging term.", "Also, we can still sample from the prior distribution for generation because of KL(q(z) p(z)).", "Eq.", "4 is similar to the objectives used in adversarial autoencoders (Makhzani et al., 2015; Kim et al., 2017) .", "Our derivation provides a theoretical justification to their superior performance.", "Notably, Eq.", "4 arrives at the same loss function proposed in infoVAE (Zhao S et al., 2017) .", "However, our derivation is different, offering a new way to understand ELBO behavior.", "The remaining challenge is how to minimize KL(q(z) p(z)), since q(z) is an expectation over q(z|x).", "When z is continuous, prior work has used adversarial training (Makhzani et al., 2015; Kim et al., 2017) or Maximum Mean Discrepancy (MMD) (Zhao S et al., 2017) to regularize q(z).", "It turns out that minimizing KL(q(z) p(z)) for discrete z is much simpler than its continuous counterparts.", "Let x n be a sample from a batch of N data points.", "Then we have: q(z) ≈ 1 N N n=1 q(z|x n ) = q (z) (5) where q (z) is a mixture of softmax from the posteriors q(z|x n ) of each x n .", "We can approximate KL(q(z) p(z)) by: KL(q (z) p(z)) = K k=1 q (z = k) log q (z = k) p(z = k) (6) We refer to Eq.", "6 as Batch Prior Regularization (BPR).", "When N approaches infinity, q (z) approaches the true marginal distribution of q(z).", "In practice, we only need to use the data from each mini-batch assuming that the mini batches are randomized.", "Last, BPR is fundamentally different from multiplying a coefficient < 1 to anneal the KL term in VAE (Bowman et al., 2015) .", "This is because BPR is a non-linear operation log sum exp.", "For later discussion, we denote our discrete infoVAE with BPR as DI-VAE.", "Learning 
Sentence Representations from the Context DI-VAE infers sentence representations by reconstruction of the input sentence.", "Past research in distributional semantics has suggested the meaning of language can be inferred from the adjacent context (Harris, 1954; Hill et al., 2016) .", "The distributional hypothesis is especially applicable to dialog since the utterance meaning is highly contextual.", "For example, the dialog act is a wellknown utterance feature and depends on dialog state (Austin, 1975; Stolcke et al., 2000) .", "Thus, we introduce a second type of latent action based on sentence-level distributional semantics.", "Skip thought (ST) is a powerful sentence representation that captures contextual information (Kiros et al., 2015) .", "ST uses an RNN to encode a sentence, and then uses the resulting sentence representation to predict the previous and next sentences.", "Inspired by ST's robust performance across multiple tasks (Hill et al., 2016) , we adapt our DI-VAE to Discrete Information Variational Skip Thought (DI-VST) to learn discrete latent actions that model distributional semantics of sentences.", "We use the same recognition network from DI-VAE to output z's posterior distribution q R (z|x).", "Given the samples from q R (z|x), two RNN generators are used to predict the previous sentence x p and the next sentences x n .", "Finally, the learning objective is to maximize: L DI-VST = E q R (z|x)p(x)) [log(p n G (x n |z)p p G (x p |z))] − KL(q(z) p(z)) (7) Integration with Encoder Decoders We now describe how to integrate a given q R (z|x) with an encoder decoder and a policy network.", "Let the dialog context c be a sequence of utterances.", "Then a dialog context encoder network can encode the dialog context into a distributed representation h e = F e (c).", "The decoder F d can generate the responsesx = F d (h e , z) using samples from q R (z|x).", "Meanwhile, we train π to predict the aggregated posterior E p(x|c) [q R (z|x)] from c via maximum likelihood training.", "This model is referred as Latent Action Encoder Decoder (LAED) with the following objective.", "L LAED (θ F , θ π ) = E q R (z|x)p(x,c) [logp π (z|c) + log p F (x|z, c)] (8) Also simply augmenting the inputs of the decoders with latent action does not guarantee that the generated response exhibits the attributes of the give action.", "Thus we use the controllable text generation framework (Hu et al., 2017) by introducing L Attr , which reuses the same recognition network q R (z|x) as a fixed discriminator to penalize the decoder if its generated responses do not reflect the attributes in z. 
L Attr (θ F ) = E q R (z|x)p(c,x) [log q R (z|F(c, z))] (9) Since it is not possible to propagate gradients through the discrete outputs at F d at each word step, we use a deterministic continuous relaxation (Hu et al., 2017) by replacing output of F d with the probability of each word.", "Let o t be the normalized probability at step t ∈ [1, |x|], the inputs to q R at time t are then the sum of word embeddings weighted by o t , i.e.", "h R t = RNN(h R t−1 , Eo t ) and E is the word embedding matrix.", "Finally this loss is combined with L LAED and a hyperparameter λ to have Attribute Forcing LAED.", "L attrLAED = L LAED + λL Attr (10) Relationship with Conditional VAEs It is not hard to see L LAED is closely related to the objective of CVAEs for dialog generation (Serban et al., 2016; , which is: L CVAE = E q [log p(x|z, c)]−KL(q(z|x, c) p(z|c)) (11) Despite their similarities, we highlight the key differences that prohibit CVAE from achieving interpretable dialog generation.", "First L CVAE encourages I(x, z|c) (Agakov, 2005), which learns z that capture context-dependent semantics.", "More intuitively, z in CVAE is trained to generate x via p(x|z, c) so the meaning of learned z can only be interpreted along with its context c. Therefore this violates our goal of learning context-independent semantics.", "Our methods learn q R (z|x) that only depends on x and trains q R separately to ensure the semantics of z are interpretable standalone.", "Experiments and Results The proposed methods are evaluated on four datasets.", "The first corpus is Penn Treebank (PTB) (Marcus et al., 1993) used to evaluate sentence VAEs (Bowman et al., 2015) .", "We used the version pre-processed by Mikolov (Mikolov et al., 2010) .", "The second dataset is the Stanford Multi-Domain Dialog (SMD) dataset that contains 3,031 human-Woz, task-oriented dialogs collected from 3 different domains (navigation, weather and scheduling) (Eric and Manning, 2017) .", "The other two datasets are chat-oriented data: Daily Dialog (DD) and Switchboard (SW) (Godfrey and Holliman, 1997), which are used to test whether our methods can generalize beyond task-oriented dialogs but also to to open-domain chatting.", "DD contains 13,118 multi-turn human-human dialogs annotated with dialog acts and emotions.", "(Li et al., 2017) .", "SW has 2,400 human-human telephone conversations that are annotated with topics and dialog acts.", "SW is a more challenging dataset because it is transcribed from speech which contains complex spoken language phenomenon, e.g.", "hesitation, self-repair etc.", "Comparing Discrete Sentence Representation Models The first experiment used PTB and DD to evaluate the performance of the proposed methods in learning discrete sentence representations.", "We implemented DI-VAE and DI-VST using GRU-RNN (Chung et al., 2014) and trained them using Adam (Kingma and Ba, 2014) .", "Besides the proposed methods, the following baselines are compared.", "Unregularized models: removing the KL(q|p) term from DI-VAE and DI-VST leads to a simple discrete autoencoder (DAE) and discrete skip thought (DST) with stochastic discrete hidden units.", "ELBO models: the basic discrete sentence VAE (DVAE) or variational skip thought (DVST) that optimizes ELBO with regularization term KL(q(z|x) p(z)).", "We found that standard training failed to learn informative latent actions for either DVAE or DVST because of the posterior collapse.", "Therefore, KL-annealing (Bowman et al., 2015) and bag-of-word loss are used to force these two models 
learn meaningful representations.", "We also include the results for VAE with continuous latent variables reported on the same PTB .", "Additionally, we report the perplexity from a standard GRU-RNN language model (Zaremba et al., 2014) .", "The evaluation metrics include reconstruction perplexity (PPL), KL(q(z) p(z)) and the mutual information between input data and latent vari-ables I (x, z) .", "Intuitively a good model should achieve low perplexity and KL distance, and simultaneously achieve high I(x, z).", "The discrete latent space for all models are M =20 and K=10.", "Mini-batch size is 30.", "Table 1 shows that all models achieve better perplexity than an RNNLM, which shows they manage to learn meaningful q(z|x).", "First, for autoencoding models, DI-VAE is able to achieve the best results in all metrics compared other methods.", "We found DAEs quickly learn to reconstruct the input but they are prone to overfitting during training, which leads to lower performance on the test data compared to DI-VAE.", "Also, since there is no regularization term in the latent space, q(z) is very different from the p(z) which prohibits us from generating sentences from the latent space.", "In fact, DI-VAE enjoys the same linear interpolation properties reported in (Bowman et al., 2015) (See Appendix A.2).", "As for DVAEs, it achieves zero I(x, z) in standard training and only manages to learn some information when training with KL-annealing and bag-of-word loss.", "On the other hand, our methods achieve robust performance without the need for additional processing.", "Similarly, the proposed DI-VST is able to achieve the lowest PPL and similar KL compared to the strongly regularized DVST.", "Interestingly, although DST is able to achieve the highest I(x, z), but PPL is not further improved.", "These results confirm the effectiveness of the proposed BPR in terms of regularizing q(z) while learning meaningful posterior q(z|x).", "In order to understand BPR's sensitivity to batch size N , a follow-up experiment varied the batch size from 2 to 60 (If N =1, DI-VAE is equivalent to DVAE).", "Figure 2 show that as N increases, perplexity, I(x, z) monotonically improves, while KL(q p) only increases from 0 to 0.159.", "After N > 30, the performance plateaus.", "Therefore, using mini-batch is an efficient trade-off between q(z) estimation and computation speed.", "The last experiment in this section investigates the relation between representation learning and the dimension of the latent space.", "We set a fixed budget by restricting the maximum number of modes to be about 1000, i.e.", "K M ≈ 1000.", "We then vary the latent space size and report the same evaluation metrics.", "Table 2 shows that models with multiple small latent variables perform significantly better than those with large and few latent variables.", "K, M K M PPL KL(q p) I(x, z) 1000, 1 1000 75.61 0.032 0.335 10, 3 1000 71.42 0.071 0.607 4, 5 1024 68.43 0.088 0.809 Table 2 : DI-VAE on PTB with different latent dimensions under the same budget.", "Interpreting Latent Actions The next question is to interpret the meaning of the learned latent action symbols.", "To achieve this, the latent action of an utterance x n is obtained from a greedy mapping: a n = argmax k q R (z = k|x n ).", "We set M =3 and K=5, so that there are at most 125 different latent actions, and each x n can now be represented by a 1 -a 2 -a 3 , e.g.", "\"How are you?\"", "→ 1-4-2.", "Assuming that we have access to manually clustered data according to certain classes 
(e.g.", "dialog acts), it is unfair to use classic cluster measures (Vinh et al., 2010) to evaluate the clusters from latent actions.", "This is because the uniform prior p(z) evenly distribute the data to all possible latent actions, so that it is expected that frequent classes will be assigned to several latent actions.", "Thus we utilize the homogeneity metric (Rosenberg and Hirschberg, 2007 ) that measures if each latent action contains only members of a single class.", "We tested this on the SW and DD, which contain human annotated features and we report the latent actions' homogeneity w.r.t these features in Table 3 .", "On DD, results show DI-VST SW DD Act Topic Act Emotion DI-VAE 0.48 0.08 0.18 0.09 DI-VST 0.33 0.13 0.34 0.12 works better than DI-VAE in terms of creating actions that are more coherent for emotion and dialog acts.", "The results are interesting on SW since DI-VST performs worse on dialog acts than DI-VAE.", "One reason is that the dialog acts in SW are more fine-grained (42 acts) than the ones in DD (5 acts) so that distinguishing utterances based on words in x is more important than the information in the neighbouring utterances.", "We then apply the proposed methods to SMD which has no manual annotation and contains taskoriented dialogs.", "Two experts are shown 5 randomly selected utterances from each latent action and are asked to give an action name that can describe as many of the utterances as possible.", "Then an Amazon Mechanical Turk study is conducted to evaluate whether other utterances from the same latent action match these titles.", "5 workers see the action name and a different group of 5 utterances from that latent action.", "They are asked to select all utterances that belong to the given actions, which tests the homogeneity of the utterances falling in the same cluster.", "Negative samples are included to prevent random selection.", "Table 4 shows that both methods work well and DI-VST achieved better homogeneity than DI-VAE.", "Since DI-VAE is trained to reconstruct its input and DI-VST is trained to model the context, they group utterances in different ways.", "For example, DI-VST would group \"Can I get a restaurant\", \"I am looking for a restaurant\" into one action where Dialog Response Generation with Latent Actions Finally we implement an LAED as follows.", "The encoder is a hierarchical recurrent encoder (Serban et al., 2016) with bi-directional GRU-RNNs as the utterance encoder and a second GRU-RNN as the discourse encoder.", "The discourse encoder output its last hidden state h e |x| .", "The decoder is another GRU-RNN and its initial state of the decoder is obtained by h d 0 = h e |x| + M m=1 e m (z m ), where z comes from the recognition network of the proposed methods.", "The policy network π is a 2-layer multi-layer perceptron (MLP) that models p π (z|h e |x| ).", "We use up to the previous 10 utterances as the dialog context and denote the LAED using DI-VAE latent actions as AE-ED and the one uses DI-VST as ST-ED.", "First we need to confirm whether an LAED can generate responses that are consistent with the semantics of a given z.", "To answer this, we use a pre-trained recognition network R to check if a generated response carries the attributes in the given action.", "We generate dialog responses on a test dataset viax = F(z ∼ π(c), c) with greedy RNN decoding.", "The generated responses are passed into the R and we measure attribute accuracy by countingx as correct if z = argmax k q R (k|x).", "Table 6 : Results for attribute 
accuracy with and without attribute loss.", "responses are highly consistent with the given latent actions.", "Also, latent actions from DI-VAE achieve higher attribute accuracy than the ones from DI-VST, because z from auto-encoding is explicitly trained for x reconstruction.", "Adding L attr is effective in forcing the decoder to take z into account during its generation, which helps the most in more challenging open-domain chatting data, e.g.", "SW and DD.", "The accuracy of ST-ED on SW is worse than the other two datasets.", "The reason is that SW contains many short utterances that can be either a continuation of the same speaker or a new turn from the other speaker, whereas the responses in the other two domains are always followed by a different speaker.", "The more complex context pattern in SW may require special treatment.", "We leave it for future work.", "The second experiment checks if the policy network π is able to predict the right latent action given just the dialog context.", "We report both accuracy, i.e.", "argmax k q R (k|x) = argmax k p π (k |c) and perplexity of p π (z|c).", "The perplexity measure is more useful for open domain dialogs because decision-making in complex dialogs is often one-to-many given a similar context .", "Table 7 shows the prediction scores on SMD AE-ED 3.045 (51.5% sys 52.4% usr 50.5%) ST-ED 1.695 (75.5% sys 82.1% usr 69.2%) DD SW AE-ED 4.47 (35.8%) 4.46 (31.68%) ST-ED 3.89 (47.5%) 3.68 (33.2%) Table 7 : Performance of policy network.", "L attr is included in training.", "the three dialog datasets.", "These scores provide useful insights to understand the complexity of a dialog dataset.", "For example, accuracy on opendomain chatting is harder than the task-oriented SMD data.", "Also, it is intuitive that predicting system actions is easier than predicting user actions on SMD.", "Also, in general the prediction scores for ST-ED are higher the ones for AE-ED.", "The reason is related to our previous discussion about the granularity of the latent actions.", "Since latent actions from DI-VST mainly model the the type of utterances used in certain types of context, it is easier for the policy network to predict latent actions from DI-VST.", "Therefore, choosing the type of latent actions is a design choice and depends on the type of interpretability that is needed.", "We finish with an example generated from the two variants of LAED on SMD as shown in Table 8 .", "Given a dialog context, our systems are able to output a probability distribution over different latent actions that have interpretable meaning along with their natural language realizations.", "c usr: Where does my friend live?", "Model Action Generated Responses AE-ED give loc info -Tom is at 753 University Ave, and a road block.", "p(z|c)=0.34 -Comfort Inn is at 7 miles away.", "give user info -Your home address is 5671 barringer street.", "p(z|c)=0.22 -Your home is at 10 ames street.", "ST-ED give loc info -Jill's house is 8 miles away at 347 Alta Mesa Ave. p(z|c)=0.93 -Jill lives at 347 Alta Mesa Ave. 
Table 8 : Interpretable dialog generation on SMD with top probable latent actions.", "AE-ED predicts more fine-grained but more error-prone actions.", "Conclusion and Future Work This paper presents a novel unsupervised framework that enables the discovery of discrete latent actions and interpretable dialog response generation.", "Our main contributions reside in the two sentence representation models DI-VAE and DI-VST, and their integration with the encoder decoder models.", "Experiments show the proposed methods outperform strong baselines in learning discrete latent variables and showcase the effectiveness of interpretable dialog response generation.", "Our findings also suggest promising future research directions, including learning better context-based latent actions and using reinforce-ment learning to adapt policy networks.", "We believe that this work is an important step forward towards creating generative dialog models that can not only generalize to large unlabelled datasets in complex domains but also be explainable to human users." ] }
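Batch Prior Regularization (Eqs. 5-6 above) amounts to averaging the per-example posteriors of a mini-batch and measuring KL against the prior. A self-contained NumPy sketch for a single K-way variable under a uniform prior follows; the helper name and the toy batch are illustrative assumptions, not the authors' code.

```python
import numpy as np

def batch_prior_regularization(posteriors):
    """posteriors: (N, K) rows q(z | x_n) for one mini-batch; returns KL(q'(z) || uniform p(z))."""
    q_marginal = posteriors.mean(axis=0)                  # q'(z) = (1/N) * sum_n q(z | x_n)
    k = q_marginal.shape[0]
    return float(np.sum(q_marginal * np.log(q_marginal * k + 1e-20)))  # KL to p(z) = 1/K

# Toy mini-batch: N = 4 sharply peaked posteriors over K = 5 classes (illustrative numbers).
batch = np.array([[0.900, 0.025, 0.025, 0.025, 0.025],
                  [0.025, 0.900, 0.025, 0.025, 0.025],
                  [0.025, 0.025, 0.900, 0.025, 0.025],
                  [0.025, 0.025, 0.025, 0.900, 0.025]])
print(batch_prior_regularization(batch))  # much smaller than the KL of any single peaked row
```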
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Proposed Methods", "Learning Sentence Representations from Auto-Encoding", "Anti-Information Limitation of ELBO", "VAE with Information Maximization and Batch Prior Regularization", "Learning Sentence Representations from the Context", "Integration with Encoder Decoders", "Relationship with Conditional VAEs", "Experiments and Results", "Comparing Discrete Sentence Representation Models", "Interpreting Latent Actions", "Dialog Response Generation with Latent Actions", "Conclusion and Future Work" ] }
"GEM-SciDuet-train-3#paper-964#slide-3"
"Anti Info Nature in Evidence Lower Bound ELBO"
"Write ELBO as an expectation over the whole dataset Expand the KL term, and plug back in: Minimize I(Z, X) to 0 Posterior collapse with powerful decoder."
"Write ELBO as an expectation over the whole dataset Expand the KL term, and plug back in: Minimize I(Z, X) to 0 Posterior collapse with powerful decoder."
[]
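The slide above compresses the key derivation; written out, the decomposition it refers to (a typeset restatement of Eqs. 2-3 in the paper content, with q(z) = E_x[q_R(z|x)]) is:

```latex
\mathbb{E}_{x}\big[\,\mathrm{KL}\!\left(q_R(z \mid x)\,\|\,p(z)\right)\big]
    = I(Z, X) + \mathrm{KL}\!\left(q(z)\,\|\,p(z)\right)

\mathcal{L}_{\mathrm{VAE}}
    = \mathbb{E}_{q_R(z \mid x)\,p(x)}\big[\log p_G(x \mid z)\big]
      - I(Z, X) - \mathrm{KL}\!\left(q(z)\,\|\,p(z)\right)
```

Maximizing ELBO therefore pushes I(Z, X) down, which is exactly the posterior-collapse behavior the slide points to, and adding I(Z, X) back (DI-VAE) cancels the information-discouraging term.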
"GEM-SciDuet-train-3#paper-964#slide-4"
"964"
"Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation"
"The encoder-decoder dialog model is one of the most prominent methods used to build dialog systems in complex domains. Yet it is limited because it cannot output interpretable actions as in traditional systems, which hinders humans from understanding its generation process. We present an unsupervised discrete sentence representation learning method that can integrate with any existing encoderdecoder dialog models for interpretable response generation. Building upon variational autoencoders (VAEs), we present two novel models, DI-VAE and DI-VST that improve VAEs and can discover interpretable semantics via either auto encoding or context predicting. Our methods have been validated on real-world dialog datasets to discover semantic representations and enhance encoder-decoder models with interpretable generation. 1"
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230 ], "paper_content_text": [ "Introduction Classic dialog systems rely on developing a meaning representation to represent the utterances from both the machine and human users (Larsson and Traum, 2000; Bohus et al., 2007) .", "The dialog manager of a conventional dialog system outputs the system's next action in a semantic frame that usually contains hand-crafted dialog acts and slot values (Williams and Young, 2007) .", "Then a natural language generation module is used to generate the system's output in natural language based on the given semantic frame.", "This approach suffers from generalization to more complex domains because it soon become intractable to man-ually design a frame representation that covers all of the fine-grained system actions.", "The recently developed neural dialog system is one of the most prominent frameworks for developing dialog agents in complex domains.", "The basic model is based on encoder-decoder networks and can learn to generate system responses without the need for hand-crafted meaning representations and other annotations.", "Although generative dialog models have advanced rapidly (Serban et al., 2016; Li et al., 2016; , they cannot provide interpretable system actions as in the conventional dialog systems.", "This inability limits the effectiveness of generative dialog models in several ways.", "First, having interpretable system actions enables human to understand the behavior of a dialog system and better interpret the system intentions.", "Also, modeling the high-level decision-making policy in dialogs enables useful generalization and dataefficient domain adaptation (Gašić et al., 2010) .", "Therefore, the motivation of this paper is to develop an unsupervised neural recognition model that can discover interpretable meaning representations of utterances (denoted as latent actions) as a set of discrete latent variables from a large unlabelled corpus as shown in Figure 1 .", "The discovered meaning representations will then be integrated with encoder decoder networks to achieve interpretable dialog generation while preserving all the merit of neural dialog systems.", "We focus on learning discrete latent representations instead of dense continuous ones because discrete variables are easier to interpret (van den Oord et al., 2017) and can naturally correspond to categories in natural languages, e.g.", "topics, dialog acts and etc.", "Despite the difficulty of learning discrete latent variables in neural 
networks, the recently proposed Gumbel-Softmax offers a reliable way to back-propagate through discrete variables (Maddison et al., 2016; Jang et al., 2016) .", "However, we found a simple combination of sentence variational autoencoders (VAEs) (Bowman et al., 2015) and Gumbel-Softmax fails to learn meaningful discrete representations.", "We then highlight the anti-information limitation of the evidence lowerbound objective (ELBO) in VAEs and improve it by proposing Discrete Information VAE (DI-VAE) that maximizes the mutual information between data and latent actions.", "We further enrich the learning signals beyond auto encoding by extending Skip Thought (Kiros et al., 2015) to Discrete Information Variational Skip Thought (DI-VST) that learns sentence-level distributional semantics.", "Finally, an integration mechanism is presented that combines the learned latent actions with encoder decoder models.", "The proposed systems are tested on several realworld dialog datasets.", "Experiments show that the proposed methods significantly outperform the standard VAEs and can discover meaningful latent actions from these datasets.", "Also, experiments confirm the effectiveness of the proposed integration mechanism and show that the learned latent actions can control the sentence-level attributes of the generated responses and provide humaninterpretable meaning representations.", "Related Work Our work is closely related to research in latent variable dialog models.", "The majority of models are based on Conditional Variational Autoencoders (CVAEs) (Serban et al., 2016; Cao and Clark, 2017) with continuous latent variables to better model the response distribution and encourage diverse responses.", "further introduced dialog acts to guide the learning of the CVAEs.", "Discrete latent variables have also been used for task-oriented dialog systems (Wen et al., 2017) , where the latent space is used to represent intention.", "The second line of related work is enriching the dialog context encoder with more fine-grained information than the dialog history.", "Li et al., (2016) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "The proposed method also relates to sentence representation learning using neural networks.", "Most work learns continuous distributed representations of sentences from various learning signals (Hill et al., 2016) , e.g.", "the Skip Thought learns representations by predicting the previous and next sentences (Kiros et al., 2015) .", "Another area of work focused on learning regularized continuous sentence representation, which enables sentence generation by sampling the latent space (Bowman et al., 2015; Kim et al., 2017) .", "There is less work on discrete sentence representations due to the difficulty of passing gradients through discrete outputs.", "The recently developed Gumbel Softmax (Jang et al., 2016; Maddison et al., 2016) and vector quantization (van den Oord et al., 2017) enable us to train discrete variables.", "Notably, discrete variable models have been proposed to discover document topics (Miao et al., 2016) and semi-supervised sequence transaction (Zhou and Neubig, 2017) Our work differs from these as follows: (1) we focus on learning interpretable variables; in prior research the semantics of 
latent variables are mostly ignored in the dialog generation setting.", "(2) we improve the learning objective for discrete VAEs and overcome the well-known posterior collapsing issue (Bowman et al., 2015; Chen et al., 2016) .", "(3) we focus on unsupervised learning of salient features in dialog responses instead of hand-crafted features.", "Proposed Methods Our formulation contains three random variables: the dialog context c, the response x and the latent action z.", "The context often contains the discourse history in the format of a list of utterances.", "The response is an utterance that contains a list of word tokens.", "The latent action is a set of discrete variables that define high-level attributes of x.", "Before introducing the proposed framework, we first identify two key properties that are essential in or-der for z to be interpretable: 1. z should capture salient sentence-level features about the response x.", "2.", "The meaning of latent symbols z should be independent of the context c. The first property is self-evident.", "The second can be explained: assume z contains a single discrete variable with K classes.", "Since the context c can be any dialog history, if the meaning of each class changes given a different context, then it is difficult to extract an intuitive interpretation by only looking at all responses with class k ∈ [1, K].", "Therefore, the second property looks for latent actions that have context-independent semantics so that each assignment of z conveys the same meaning in all dialog contexts.", "With the above definition of interpretable latent actions, we first introduce a recognition network R : q R (z|x) and a generation network G. The role of R is to map an sentence to the latent variable z and the generator G defines the learning signals that will be used to train z's representation.", "Notably, our recognition network R does not depend on the context c as has been the case in prior work (Serban et al., 2016) .", "The motivation of this design is to encourage z to capture context-independent semantics, which are further elaborated in Section 3.4.", "With the z learned by R and G, we then introduce an encoder decoder network F : p F (x|z, c) and and a policy network π : p π (z|c).", "At test time, given a context c, the policy network and encoder decoder will work together to generate the next response viã x = p F (x|z ∼ p π (z|c), c).", "In short, R, G, F and π are the four components that comprise our proposed framework.", "The next section will first focus on developing R and G for learning interpretable z and then will move on to integrating R with F and π in Section 3.3.", "Learning Sentence Representations from Auto-Encoding Our baseline model is a sentence VAE with discrete latent space.", "We use an RNN as the recognition network to encode the response x.", "Its last hidden state h R |x| is used to represent x.", "We define z to be a set of K-way categorical variables z = {z 1 ...z m ...z M }, where M is the number of variables.", "For each z m , its posterior distribution is defined as q R (z m |x) = Softmax(W q h R |x| + b q ).", "During training, we use the Gumbel-Softmax trick to sample from this distribution and obtain lowvariance gradients.", "To map the latent samples to the initial state of the decoder RNN, we define {e 1 ...e m ...e M } where e m ∈ R K×D and D is the generator cell size.", "Thus the initial state of the generator is: h G 0 = M m=1 e m (z m ).", "Finally, the generator RNN is used to reconstruct the response given h 
G 0 .", "VAEs is trained to maxmimize the evidence lowerbound objective (ELBO) (Kingma and Welling, 2013) .", "For simplicity, later discussion drops the subscript m in z m and assumes a single latent z.", "Since each z m is independent, we can easily extend the results below to multiple variables.", "Anti-Information Limitation of ELBO It is well-known that sentence VAEs are hard to train because of the posterior collapse issue.", "Many empirical solutions have been proposed: weakening the decoder, adding auxiliary loss etc.", "(Bowman et al., 2015; Chen et al., 2016; .", "We argue that the posterior collapse issue lies in ELBO and we offer a novel decomposition to understand its behavior.", "First, instead of writing ELBO for a single data point, we write it as an expectation over a dataset: L VAE = E x [E q R (z|x) [log p G (x|z)] − KL(q R (z|x) p(z))] (1) We can expand the KL term as Eq.", "2 (derivations in Appendix A.1) and rewrite ELBO as: E x [KL(q R (z|x) p(z))] = (2) I(Z, X)+KL(q(z) p(z)) L VAE = E q(z|x)p(x) [log p(x|z)] − I(Z, X) − KL(q(z) p(z)) (3) where q(z) = E x [q R (z|x)] and I(Z, X) is the mutual information between Z and X.", "This expansion shows that the KL term in ELBO is trying to reduce the mutual information between latent variables and the input data, which explains why VAEs often ignore the latent variable, especially when equipped with powerful decoders.", "VAE with Information Maximization and Batch Prior Regularization A natural solution to correct the anti-information issue in Eq.", "3 is to maximize both the data likeli-hood lowerbound and the mutual information between z and the input data: L VAE + I(Z, X) = E q R (z|x)p(x) [log p G (x|z)] − KL(q(z) p(z)) (4) Therefore, jointly optimizing ELBO and mutual information simply cancels out the informationdiscouraging term.", "Also, we can still sample from the prior distribution for generation because of KL(q(z) p(z)).", "Eq.", "4 is similar to the objectives used in adversarial autoencoders (Makhzani et al., 2015; Kim et al., 2017) .", "Our derivation provides a theoretical justification to their superior performance.", "Notably, Eq.", "4 arrives at the same loss function proposed in infoVAE (Zhao S et al., 2017) .", "However, our derivation is different, offering a new way to understand ELBO behavior.", "The remaining challenge is how to minimize KL(q(z) p(z)), since q(z) is an expectation over q(z|x).", "When z is continuous, prior work has used adversarial training (Makhzani et al., 2015; Kim et al., 2017) or Maximum Mean Discrepancy (MMD) (Zhao S et al., 2017) to regularize q(z).", "It turns out that minimizing KL(q(z) p(z)) for discrete z is much simpler than its continuous counterparts.", "Let x n be a sample from a batch of N data points.", "Then we have: q(z) ≈ 1 N N n=1 q(z|x n ) = q (z) (5) where q (z) is a mixture of softmax from the posteriors q(z|x n ) of each x n .", "We can approximate KL(q(z) p(z)) by: KL(q (z) p(z)) = K k=1 q (z = k) log q (z = k) p(z = k) (6) We refer to Eq.", "6 as Batch Prior Regularization (BPR).", "When N approaches infinity, q (z) approaches the true marginal distribution of q(z).", "In practice, we only need to use the data from each mini-batch assuming that the mini batches are randomized.", "Last, BPR is fundamentally different from multiplying a coefficient < 1 to anneal the KL term in VAE (Bowman et al., 2015) .", "This is because BPR is a non-linear operation log sum exp.", "For later discussion, we denote our discrete infoVAE with BPR as DI-VAE.", "Learning 
Sentence Representations from the Context DI-VAE infers sentence representations by reconstruction of the input sentence.", "Past research in distributional semantics has suggested the meaning of language can be inferred from the adjacent context (Harris, 1954; Hill et al., 2016) .", "The distributional hypothesis is especially applicable to dialog since the utterance meaning is highly contextual.", "For example, the dialog act is a wellknown utterance feature and depends on dialog state (Austin, 1975; Stolcke et al., 2000) .", "Thus, we introduce a second type of latent action based on sentence-level distributional semantics.", "Skip thought (ST) is a powerful sentence representation that captures contextual information (Kiros et al., 2015) .", "ST uses an RNN to encode a sentence, and then uses the resulting sentence representation to predict the previous and next sentences.", "Inspired by ST's robust performance across multiple tasks (Hill et al., 2016) , we adapt our DI-VAE to Discrete Information Variational Skip Thought (DI-VST) to learn discrete latent actions that model distributional semantics of sentences.", "We use the same recognition network from DI-VAE to output z's posterior distribution q R (z|x).", "Given the samples from q R (z|x), two RNN generators are used to predict the previous sentence x p and the next sentences x n .", "Finally, the learning objective is to maximize: L DI-VST = E q R (z|x)p(x)) [log(p n G (x n |z)p p G (x p |z))] − KL(q(z) p(z)) (7) Integration with Encoder Decoders We now describe how to integrate a given q R (z|x) with an encoder decoder and a policy network.", "Let the dialog context c be a sequence of utterances.", "Then a dialog context encoder network can encode the dialog context into a distributed representation h e = F e (c).", "The decoder F d can generate the responsesx = F d (h e , z) using samples from q R (z|x).", "Meanwhile, we train π to predict the aggregated posterior E p(x|c) [q R (z|x)] from c via maximum likelihood training.", "This model is referred as Latent Action Encoder Decoder (LAED) with the following objective.", "L LAED (θ F , θ π ) = E q R (z|x)p(x,c) [logp π (z|c) + log p F (x|z, c)] (8) Also simply augmenting the inputs of the decoders with latent action does not guarantee that the generated response exhibits the attributes of the give action.", "Thus we use the controllable text generation framework (Hu et al., 2017) by introducing L Attr , which reuses the same recognition network q R (z|x) as a fixed discriminator to penalize the decoder if its generated responses do not reflect the attributes in z. 
L Attr (θ F ) = E q R (z|x)p(c,x) [log q R (z|F(c, z))] (9) Since it is not possible to propagate gradients through the discrete outputs at F d at each word step, we use a deterministic continuous relaxation (Hu et al., 2017) by replacing output of F d with the probability of each word.", "Let o t be the normalized probability at step t ∈ [1, |x|], the inputs to q R at time t are then the sum of word embeddings weighted by o t , i.e.", "h R t = RNN(h R t−1 , Eo t ) and E is the word embedding matrix.", "Finally this loss is combined with L LAED and a hyperparameter λ to have Attribute Forcing LAED.", "L attrLAED = L LAED + λL Attr (10) Relationship with Conditional VAEs It is not hard to see L LAED is closely related to the objective of CVAEs for dialog generation (Serban et al., 2016; , which is: L CVAE = E q [log p(x|z, c)]−KL(q(z|x, c) p(z|c)) (11) Despite their similarities, we highlight the key differences that prohibit CVAE from achieving interpretable dialog generation.", "First L CVAE encourages I(x, z|c) (Agakov, 2005), which learns z that capture context-dependent semantics.", "More intuitively, z in CVAE is trained to generate x via p(x|z, c) so the meaning of learned z can only be interpreted along with its context c. Therefore this violates our goal of learning context-independent semantics.", "Our methods learn q R (z|x) that only depends on x and trains q R separately to ensure the semantics of z are interpretable standalone.", "Experiments and Results The proposed methods are evaluated on four datasets.", "The first corpus is Penn Treebank (PTB) (Marcus et al., 1993) used to evaluate sentence VAEs (Bowman et al., 2015) .", "We used the version pre-processed by Mikolov (Mikolov et al., 2010) .", "The second dataset is the Stanford Multi-Domain Dialog (SMD) dataset that contains 3,031 human-Woz, task-oriented dialogs collected from 3 different domains (navigation, weather and scheduling) (Eric and Manning, 2017) .", "The other two datasets are chat-oriented data: Daily Dialog (DD) and Switchboard (SW) (Godfrey and Holliman, 1997), which are used to test whether our methods can generalize beyond task-oriented dialogs but also to to open-domain chatting.", "DD contains 13,118 multi-turn human-human dialogs annotated with dialog acts and emotions.", "(Li et al., 2017) .", "SW has 2,400 human-human telephone conversations that are annotated with topics and dialog acts.", "SW is a more challenging dataset because it is transcribed from speech which contains complex spoken language phenomenon, e.g.", "hesitation, self-repair etc.", "Comparing Discrete Sentence Representation Models The first experiment used PTB and DD to evaluate the performance of the proposed methods in learning discrete sentence representations.", "We implemented DI-VAE and DI-VST using GRU-RNN (Chung et al., 2014) and trained them using Adam (Kingma and Ba, 2014) .", "Besides the proposed methods, the following baselines are compared.", "Unregularized models: removing the KL(q|p) term from DI-VAE and DI-VST leads to a simple discrete autoencoder (DAE) and discrete skip thought (DST) with stochastic discrete hidden units.", "ELBO models: the basic discrete sentence VAE (DVAE) or variational skip thought (DVST) that optimizes ELBO with regularization term KL(q(z|x) p(z)).", "We found that standard training failed to learn informative latent actions for either DVAE or DVST because of the posterior collapse.", "Therefore, KL-annealing (Bowman et al., 2015) and bag-of-word loss are used to force these two models 
learn meaningful representations.", "We also include the results for VAE with continuous latent variables reported on the same PTB .", "Additionally, we report the perplexity from a standard GRU-RNN language model (Zaremba et al., 2014) .", "The evaluation metrics include reconstruction perplexity (PPL), KL(q(z) p(z)) and the mutual information between input data and latent vari-ables I (x, z) .", "Intuitively a good model should achieve low perplexity and KL distance, and simultaneously achieve high I(x, z).", "The discrete latent space for all models are M =20 and K=10.", "Mini-batch size is 30.", "Table 1 shows that all models achieve better perplexity than an RNNLM, which shows they manage to learn meaningful q(z|x).", "First, for autoencoding models, DI-VAE is able to achieve the best results in all metrics compared other methods.", "We found DAEs quickly learn to reconstruct the input but they are prone to overfitting during training, which leads to lower performance on the test data compared to DI-VAE.", "Also, since there is no regularization term in the latent space, q(z) is very different from the p(z) which prohibits us from generating sentences from the latent space.", "In fact, DI-VAE enjoys the same linear interpolation properties reported in (Bowman et al., 2015) (See Appendix A.2).", "As for DVAEs, it achieves zero I(x, z) in standard training and only manages to learn some information when training with KL-annealing and bag-of-word loss.", "On the other hand, our methods achieve robust performance without the need for additional processing.", "Similarly, the proposed DI-VST is able to achieve the lowest PPL and similar KL compared to the strongly regularized DVST.", "Interestingly, although DST is able to achieve the highest I(x, z), but PPL is not further improved.", "These results confirm the effectiveness of the proposed BPR in terms of regularizing q(z) while learning meaningful posterior q(z|x).", "In order to understand BPR's sensitivity to batch size N , a follow-up experiment varied the batch size from 2 to 60 (If N =1, DI-VAE is equivalent to DVAE).", "Figure 2 show that as N increases, perplexity, I(x, z) monotonically improves, while KL(q p) only increases from 0 to 0.159.", "After N > 30, the performance plateaus.", "Therefore, using mini-batch is an efficient trade-off between q(z) estimation and computation speed.", "The last experiment in this section investigates the relation between representation learning and the dimension of the latent space.", "We set a fixed budget by restricting the maximum number of modes to be about 1000, i.e.", "K M ≈ 1000.", "We then vary the latent space size and report the same evaluation metrics.", "Table 2 shows that models with multiple small latent variables perform significantly better than those with large and few latent variables.", "K, M K M PPL KL(q p) I(x, z) 1000, 1 1000 75.61 0.032 0.335 10, 3 1000 71.42 0.071 0.607 4, 5 1024 68.43 0.088 0.809 Table 2 : DI-VAE on PTB with different latent dimensions under the same budget.", "Interpreting Latent Actions The next question is to interpret the meaning of the learned latent action symbols.", "To achieve this, the latent action of an utterance x n is obtained from a greedy mapping: a n = argmax k q R (z = k|x n ).", "We set M =3 and K=5, so that there are at most 125 different latent actions, and each x n can now be represented by a 1 -a 2 -a 3 , e.g.", "\"How are you?\"", "→ 1-4-2.", "Assuming that we have access to manually clustered data according to certain classes 
(e.g.", "dialog acts), it is unfair to use classic cluster measures (Vinh et al., 2010) to evaluate the clusters from latent actions.", "This is because the uniform prior p(z) evenly distribute the data to all possible latent actions, so that it is expected that frequent classes will be assigned to several latent actions.", "Thus we utilize the homogeneity metric (Rosenberg and Hirschberg, 2007 ) that measures if each latent action contains only members of a single class.", "We tested this on the SW and DD, which contain human annotated features and we report the latent actions' homogeneity w.r.t these features in Table 3 .", "On DD, results show DI-VST SW DD Act Topic Act Emotion DI-VAE 0.48 0.08 0.18 0.09 DI-VST 0.33 0.13 0.34 0.12 works better than DI-VAE in terms of creating actions that are more coherent for emotion and dialog acts.", "The results are interesting on SW since DI-VST performs worse on dialog acts than DI-VAE.", "One reason is that the dialog acts in SW are more fine-grained (42 acts) than the ones in DD (5 acts) so that distinguishing utterances based on words in x is more important than the information in the neighbouring utterances.", "We then apply the proposed methods to SMD which has no manual annotation and contains taskoriented dialogs.", "Two experts are shown 5 randomly selected utterances from each latent action and are asked to give an action name that can describe as many of the utterances as possible.", "Then an Amazon Mechanical Turk study is conducted to evaluate whether other utterances from the same latent action match these titles.", "5 workers see the action name and a different group of 5 utterances from that latent action.", "They are asked to select all utterances that belong to the given actions, which tests the homogeneity of the utterances falling in the same cluster.", "Negative samples are included to prevent random selection.", "Table 4 shows that both methods work well and DI-VST achieved better homogeneity than DI-VAE.", "Since DI-VAE is trained to reconstruct its input and DI-VST is trained to model the context, they group utterances in different ways.", "For example, DI-VST would group \"Can I get a restaurant\", \"I am looking for a restaurant\" into one action where Dialog Response Generation with Latent Actions Finally we implement an LAED as follows.", "The encoder is a hierarchical recurrent encoder (Serban et al., 2016) with bi-directional GRU-RNNs as the utterance encoder and a second GRU-RNN as the discourse encoder.", "The discourse encoder output its last hidden state h e |x| .", "The decoder is another GRU-RNN and its initial state of the decoder is obtained by h d 0 = h e |x| + M m=1 e m (z m ), where z comes from the recognition network of the proposed methods.", "The policy network π is a 2-layer multi-layer perceptron (MLP) that models p π (z|h e |x| ).", "We use up to the previous 10 utterances as the dialog context and denote the LAED using DI-VAE latent actions as AE-ED and the one uses DI-VST as ST-ED.", "First we need to confirm whether an LAED can generate responses that are consistent with the semantics of a given z.", "To answer this, we use a pre-trained recognition network R to check if a generated response carries the attributes in the given action.", "We generate dialog responses on a test dataset viax = F(z ∼ π(c), c) with greedy RNN decoding.", "The generated responses are passed into the R and we measure attribute accuracy by countingx as correct if z = argmax k q R (k|x).", "Table 6 : Results for attribute 
accuracy with and without attribute loss.", "responses are highly consistent with the given latent actions.", "Also, latent actions from DI-VAE achieve higher attribute accuracy than the ones from DI-VST, because z from auto-encoding is explicitly trained for x reconstruction.", "Adding L attr is effective in forcing the decoder to take z into account during its generation, which helps the most in more challenging open-domain chatting data, e.g.", "SW and DD.", "The accuracy of ST-ED on SW is worse than the other two datasets.", "The reason is that SW contains many short utterances that can be either a continuation of the same speaker or a new turn from the other speaker, whereas the responses in the other two domains are always followed by a different speaker.", "The more complex context pattern in SW may require special treatment.", "We leave it for future work.", "The second experiment checks if the policy network π is able to predict the right latent action given just the dialog context.", "We report both accuracy, i.e.", "argmax k q R (k|x) = argmax k p π (k |c) and perplexity of p π (z|c).", "The perplexity measure is more useful for open domain dialogs because decision-making in complex dialogs is often one-to-many given a similar context .", "Table 7 shows the prediction scores on SMD AE-ED 3.045 (51.5% sys 52.4% usr 50.5%) ST-ED 1.695 (75.5% sys 82.1% usr 69.2%) DD SW AE-ED 4.47 (35.8%) 4.46 (31.68%) ST-ED 3.89 (47.5%) 3.68 (33.2%) Table 7 : Performance of policy network.", "L attr is included in training.", "the three dialog datasets.", "These scores provide useful insights to understand the complexity of a dialog dataset.", "For example, accuracy on opendomain chatting is harder than the task-oriented SMD data.", "Also, it is intuitive that predicting system actions is easier than predicting user actions on SMD.", "Also, in general the prediction scores for ST-ED are higher the ones for AE-ED.", "The reason is related to our previous discussion about the granularity of the latent actions.", "Since latent actions from DI-VST mainly model the the type of utterances used in certain types of context, it is easier for the policy network to predict latent actions from DI-VST.", "Therefore, choosing the type of latent actions is a design choice and depends on the type of interpretability that is needed.", "We finish with an example generated from the two variants of LAED on SMD as shown in Table 8 .", "Given a dialog context, our systems are able to output a probability distribution over different latent actions that have interpretable meaning along with their natural language realizations.", "c usr: Where does my friend live?", "Model Action Generated Responses AE-ED give loc info -Tom is at 753 University Ave, and a road block.", "p(z|c)=0.34 -Comfort Inn is at 7 miles away.", "give user info -Your home address is 5671 barringer street.", "p(z|c)=0.22 -Your home is at 10 ames street.", "ST-ED give loc info -Jill's house is 8 miles away at 347 Alta Mesa Ave. p(z|c)=0.93 -Jill lives at 347 Alta Mesa Ave. 
Table 8: Interpretable dialog generation on SMD with top probable latent actions.", "AE-ED predicts more fine-grained but more error-prone actions.", "Conclusion and Future Work This paper presents a novel unsupervised framework that enables the discovery of discrete latent actions and interpretable dialog response generation.", "Our main contributions reside in the two sentence representation models DI-VAE and DI-VST, and their integration with the encoder decoder models.", "Experiments show the proposed methods outperform strong baselines in learning discrete latent variables and showcase the effectiveness of interpretable dialog response generation.", "Our findings also suggest promising future research directions, including learning better context-based latent actions and using reinforcement learning to adapt policy networks.", "We believe that this work is an important step forward towards creating generative dialog models that can not only generalize to large unlabelled datasets in complex domains but also be explainable to human users." ] }
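To make the recognition step of Section 3.1 in the paper above concrete, the sketch below shows one way it could be implemented: a GRU encodes the response, a linear layer produces M sets of K-way logits, the Gumbel-Softmax trick yields differentiable samples, and learned embeddings e_m(z_m) are summed to initialize the generator. Module names, layer sizes, and the use of torch.nn.functional.gumbel_softmax are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscreteRecognition(nn.Module):
    """Sketch of q_R(z|x): M independent K-way posteriors from the last GRU state."""
    def __init__(self, vocab_size, emb_dim=200, hid_dim=512, M=20, K=10, gen_dim=512):
        super().__init__()
        self.M, self.K = M, K
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.to_logits = nn.Linear(hid_dim, M * K)                  # W_q h^R_|x| + b_q
        self.latent_emb = nn.Parameter(torch.randn(M, K, gen_dim))  # e_m in R^{K x D}

    def forward(self, x, tau=1.0):
        # x: (batch, seq_len) token ids of the response
        _, h = self.rnn(self.embed(x))                              # h: (1, batch, hid_dim)
        logits = self.to_logits(h.squeeze(0)).view(-1, self.M, self.K)
        # Gumbel-Softmax gives low-variance, differentiable (straight-through) one-hot samples
        z = F.gumbel_softmax(logits, tau=tau, hard=True, dim=-1)    # (batch, M, K)
        # h^G_0 = sum_m e_m(z_m): initial state of the generator RNN
        h_g0 = torch.einsum('bmk,mkd->bd', z, self.latent_emb)
        return logits, z, h_g0
```

With M=20 and K=10 this matches the latent-space size used in the experiments reported above.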
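Batch Prior Regularization (Eq. 5 and 6 above) replaces the per-example KL term with the KL between the batch-averaged posterior q'(z) and the prior. Below is a minimal sketch for a single K-way variable; the function name and the default uniform prior are assumptions.

```python
import math
import torch
import torch.nn.functional as F

def batch_prior_regularization(logits, log_prior=None):
    """KL(q'(z) || p(z)) with q'(z) = (1/N) * sum_n q(z|x_n)   (Eq. 5 and 6).

    logits:    (N, K) unnormalized scores of q(z|x_n) for one mini-batch
    log_prior: (K,) log p(z); defaults to a uniform prior over K classes
    """
    n, k = logits.shape
    if log_prior is None:
        log_prior = torch.full((k,), -math.log(k), device=logits.device)
    q_batch = F.softmax(logits, dim=-1).mean(dim=0)     # mixture of softmaxes over the batch
    return torch.sum(q_batch * (torch.log(q_batch + 1e-12) - log_prior))

# Note: averaging the posteriors *before* taking the log (a log-sum-exp-like, non-linear
# operation) is what distinguishes BPR from scaling the per-example KL by a coefficient < 1.
```

With M latent variables, the term would be computed for each z_m and summed.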
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Proposed Methods", "Learning Sentence Representations from Auto-Encoding", "Anti-Information Limitation of ELBO", "VAE with Information Maximization and Batch Prior Regularization", "Learning Sentence Representations from the Context", "Integration with Encoder Decoders", "Relationship with Conditional VAEs", "Experiments and Results", "Comparing Discrete Sentence Representation Models", "Interpreting Latent Actions", "Dialog Response Generation with Latent Actions", "Conclusion and Future Work" ] }
"GEM-SciDuet-train-3#paper-964#slide-4"
"Discrete Information VAE DI VAE"
"A natural solution is to maximize both data log likelihood & mutual information. Match prior result for continuous VAE. [Mazhazni et al 2015, Kim et al 2017] Propose Batch Prior Regularization (BPR) to minimize KL [q(z)||p(z)] for discrete latent Fundamentally different from KL-annealing, since"
"A natural solution is to maximize both data log likelihood & mutual information. Match prior result for continuous VAE. [Mazhazni et al 2015, Kim et al 2017] Propose Batch Prior Regularization (BPR) to minimize KL [q(z)||p(z)] for discrete latent Fundamentally different from KL-annealing, since"
[]
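The slide row above summarizes the DI-VAE objective (Eq. 4 in the paper): keep the reconstruction term and replace the anti-information KL with KL(q(z)||p(z)), estimated per mini-batch by BPR. The sketch below shows how the pieces could be assembled, reusing the DiscreteRecognition and batch_prior_regularization sketches given earlier; the decoder and its nll method are hypothetical stand-ins for a GRU generator's token-level cross-entropy.

```python
def di_vae_loss(recognition, decoder, x, log_prior):
    """Negated Eq. 4: -E_q[log p_G(x|z)] + sum_m KL(q'(z_m) || p(z_m))."""
    logits, z, h_g0 = recognition(x)          # q_R(z|x) logits and Gumbel-Softmax samples
    recon_nll = decoder.nll(x, h_g0)          # hypothetical: -log p_G(x|z) given h^G_0
    bpr = sum(batch_prior_regularization(logits[:, m], log_prior)
              for m in range(recognition.M))  # BPR applied to each latent variable
    return recon_nll + bpr
```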
"GEM-SciDuet-train-3#paper-964#slide-5"
"964"
"Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation"
"The encoder-decoder dialog model is one of the most prominent methods used to build dialog systems in complex domains. Yet it is limited because it cannot output interpretable actions as in traditional systems, which hinders humans from understanding its generation process. We present an unsupervised discrete sentence representation learning method that can integrate with any existing encoderdecoder dialog models for interpretable response generation. Building upon variational autoencoders (VAEs), we present two novel models, DI-VAE and DI-VST that improve VAEs and can discover interpretable semantics via either auto encoding or context predicting. Our methods have been validated on real-world dialog datasets to discover semantic representations and enhance encoder-decoder models with interpretable generation. 1"
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230 ], "paper_content_text": [ "Introduction Classic dialog systems rely on developing a meaning representation to represent the utterances from both the machine and human users (Larsson and Traum, 2000; Bohus et al., 2007) .", "The dialog manager of a conventional dialog system outputs the system's next action in a semantic frame that usually contains hand-crafted dialog acts and slot values (Williams and Young, 2007) .", "Then a natural language generation module is used to generate the system's output in natural language based on the given semantic frame.", "This approach suffers from generalization to more complex domains because it soon become intractable to man-ually design a frame representation that covers all of the fine-grained system actions.", "The recently developed neural dialog system is one of the most prominent frameworks for developing dialog agents in complex domains.", "The basic model is based on encoder-decoder networks and can learn to generate system responses without the need for hand-crafted meaning representations and other annotations.", "Although generative dialog models have advanced rapidly (Serban et al., 2016; Li et al., 2016; , they cannot provide interpretable system actions as in the conventional dialog systems.", "This inability limits the effectiveness of generative dialog models in several ways.", "First, having interpretable system actions enables human to understand the behavior of a dialog system and better interpret the system intentions.", "Also, modeling the high-level decision-making policy in dialogs enables useful generalization and dataefficient domain adaptation (Gašić et al., 2010) .", "Therefore, the motivation of this paper is to develop an unsupervised neural recognition model that can discover interpretable meaning representations of utterances (denoted as latent actions) as a set of discrete latent variables from a large unlabelled corpus as shown in Figure 1 .", "The discovered meaning representations will then be integrated with encoder decoder networks to achieve interpretable dialog generation while preserving all the merit of neural dialog systems.", "We focus on learning discrete latent representations instead of dense continuous ones because discrete variables are easier to interpret (van den Oord et al., 2017) and can naturally correspond to categories in natural languages, e.g.", "topics, dialog acts and etc.", "Despite the difficulty of learning discrete latent variables in neural 
networks, the recently proposed Gumbel-Softmax offers a reliable way to back-propagate through discrete variables (Maddison et al., 2016; Jang et al., 2016) .", "However, we found a simple combination of sentence variational autoencoders (VAEs) (Bowman et al., 2015) and Gumbel-Softmax fails to learn meaningful discrete representations.", "We then highlight the anti-information limitation of the evidence lowerbound objective (ELBO) in VAEs and improve it by proposing Discrete Information VAE (DI-VAE) that maximizes the mutual information between data and latent actions.", "We further enrich the learning signals beyond auto encoding by extending Skip Thought (Kiros et al., 2015) to Discrete Information Variational Skip Thought (DI-VST) that learns sentence-level distributional semantics.", "Finally, an integration mechanism is presented that combines the learned latent actions with encoder decoder models.", "The proposed systems are tested on several realworld dialog datasets.", "Experiments show that the proposed methods significantly outperform the standard VAEs and can discover meaningful latent actions from these datasets.", "Also, experiments confirm the effectiveness of the proposed integration mechanism and show that the learned latent actions can control the sentence-level attributes of the generated responses and provide humaninterpretable meaning representations.", "Related Work Our work is closely related to research in latent variable dialog models.", "The majority of models are based on Conditional Variational Autoencoders (CVAEs) (Serban et al., 2016; Cao and Clark, 2017) with continuous latent variables to better model the response distribution and encourage diverse responses.", "further introduced dialog acts to guide the learning of the CVAEs.", "Discrete latent variables have also been used for task-oriented dialog systems (Wen et al., 2017) , where the latent space is used to represent intention.", "The second line of related work is enriching the dialog context encoder with more fine-grained information than the dialog history.", "Li et al., (2016) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "The proposed method also relates to sentence representation learning using neural networks.", "Most work learns continuous distributed representations of sentences from various learning signals (Hill et al., 2016) , e.g.", "the Skip Thought learns representations by predicting the previous and next sentences (Kiros et al., 2015) .", "Another area of work focused on learning regularized continuous sentence representation, which enables sentence generation by sampling the latent space (Bowman et al., 2015; Kim et al., 2017) .", "There is less work on discrete sentence representations due to the difficulty of passing gradients through discrete outputs.", "The recently developed Gumbel Softmax (Jang et al., 2016; Maddison et al., 2016) and vector quantization (van den Oord et al., 2017) enable us to train discrete variables.", "Notably, discrete variable models have been proposed to discover document topics (Miao et al., 2016) and semi-supervised sequence transaction (Zhou and Neubig, 2017) Our work differs from these as follows: (1) we focus on learning interpretable variables; in prior research the semantics of 
latent variables are mostly ignored in the dialog generation setting.", "(2) we improve the learning objective for discrete VAEs and overcome the well-known posterior collapsing issue (Bowman et al., 2015; Chen et al., 2016) .", "(3) we focus on unsupervised learning of salient features in dialog responses instead of hand-crafted features.", "Proposed Methods Our formulation contains three random variables: the dialog context c, the response x and the latent action z.", "The context often contains the discourse history in the format of a list of utterances.", "The response is an utterance that contains a list of word tokens.", "The latent action is a set of discrete variables that define high-level attributes of x.", "Before introducing the proposed framework, we first identify two key properties that are essential in or-der for z to be interpretable: 1. z should capture salient sentence-level features about the response x.", "2.", "The meaning of latent symbols z should be independent of the context c. The first property is self-evident.", "The second can be explained: assume z contains a single discrete variable with K classes.", "Since the context c can be any dialog history, if the meaning of each class changes given a different context, then it is difficult to extract an intuitive interpretation by only looking at all responses with class k ∈ [1, K].", "Therefore, the second property looks for latent actions that have context-independent semantics so that each assignment of z conveys the same meaning in all dialog contexts.", "With the above definition of interpretable latent actions, we first introduce a recognition network R : q R (z|x) and a generation network G. The role of R is to map an sentence to the latent variable z and the generator G defines the learning signals that will be used to train z's representation.", "Notably, our recognition network R does not depend on the context c as has been the case in prior work (Serban et al., 2016) .", "The motivation of this design is to encourage z to capture context-independent semantics, which are further elaborated in Section 3.4.", "With the z learned by R and G, we then introduce an encoder decoder network F : p F (x|z, c) and and a policy network π : p π (z|c).", "At test time, given a context c, the policy network and encoder decoder will work together to generate the next response viã x = p F (x|z ∼ p π (z|c), c).", "In short, R, G, F and π are the four components that comprise our proposed framework.", "The next section will first focus on developing R and G for learning interpretable z and then will move on to integrating R with F and π in Section 3.3.", "Learning Sentence Representations from Auto-Encoding Our baseline model is a sentence VAE with discrete latent space.", "We use an RNN as the recognition network to encode the response x.", "Its last hidden state h R |x| is used to represent x.", "We define z to be a set of K-way categorical variables z = {z 1 ...z m ...z M }, where M is the number of variables.", "For each z m , its posterior distribution is defined as q R (z m |x) = Softmax(W q h R |x| + b q ).", "During training, we use the Gumbel-Softmax trick to sample from this distribution and obtain lowvariance gradients.", "To map the latent samples to the initial state of the decoder RNN, we define {e 1 ...e m ...e M } where e m ∈ R K×D and D is the generator cell size.", "Thus the initial state of the generator is: h G 0 = M m=1 e m (z m ).", "Finally, the generator RNN is used to reconstruct the response given h 
G 0 .", "VAEs is trained to maxmimize the evidence lowerbound objective (ELBO) (Kingma and Welling, 2013) .", "For simplicity, later discussion drops the subscript m in z m and assumes a single latent z.", "Since each z m is independent, we can easily extend the results below to multiple variables.", "Anti-Information Limitation of ELBO It is well-known that sentence VAEs are hard to train because of the posterior collapse issue.", "Many empirical solutions have been proposed: weakening the decoder, adding auxiliary loss etc.", "(Bowman et al., 2015; Chen et al., 2016; .", "We argue that the posterior collapse issue lies in ELBO and we offer a novel decomposition to understand its behavior.", "First, instead of writing ELBO for a single data point, we write it as an expectation over a dataset: L VAE = E x [E q R (z|x) [log p G (x|z)] − KL(q R (z|x) p(z))] (1) We can expand the KL term as Eq.", "2 (derivations in Appendix A.1) and rewrite ELBO as: E x [KL(q R (z|x) p(z))] = (2) I(Z, X)+KL(q(z) p(z)) L VAE = E q(z|x)p(x) [log p(x|z)] − I(Z, X) − KL(q(z) p(z)) (3) where q(z) = E x [q R (z|x)] and I(Z, X) is the mutual information between Z and X.", "This expansion shows that the KL term in ELBO is trying to reduce the mutual information between latent variables and the input data, which explains why VAEs often ignore the latent variable, especially when equipped with powerful decoders.", "VAE with Information Maximization and Batch Prior Regularization A natural solution to correct the anti-information issue in Eq.", "3 is to maximize both the data likeli-hood lowerbound and the mutual information between z and the input data: L VAE + I(Z, X) = E q R (z|x)p(x) [log p G (x|z)] − KL(q(z) p(z)) (4) Therefore, jointly optimizing ELBO and mutual information simply cancels out the informationdiscouraging term.", "Also, we can still sample from the prior distribution for generation because of KL(q(z) p(z)).", "Eq.", "4 is similar to the objectives used in adversarial autoencoders (Makhzani et al., 2015; Kim et al., 2017) .", "Our derivation provides a theoretical justification to their superior performance.", "Notably, Eq.", "4 arrives at the same loss function proposed in infoVAE (Zhao S et al., 2017) .", "However, our derivation is different, offering a new way to understand ELBO behavior.", "The remaining challenge is how to minimize KL(q(z) p(z)), since q(z) is an expectation over q(z|x).", "When z is continuous, prior work has used adversarial training (Makhzani et al., 2015; Kim et al., 2017) or Maximum Mean Discrepancy (MMD) (Zhao S et al., 2017) to regularize q(z).", "It turns out that minimizing KL(q(z) p(z)) for discrete z is much simpler than its continuous counterparts.", "Let x n be a sample from a batch of N data points.", "Then we have: q(z) ≈ 1 N N n=1 q(z|x n ) = q (z) (5) where q (z) is a mixture of softmax from the posteriors q(z|x n ) of each x n .", "We can approximate KL(q(z) p(z)) by: KL(q (z) p(z)) = K k=1 q (z = k) log q (z = k) p(z = k) (6) We refer to Eq.", "6 as Batch Prior Regularization (BPR).", "When N approaches infinity, q (z) approaches the true marginal distribution of q(z).", "In practice, we only need to use the data from each mini-batch assuming that the mini batches are randomized.", "Last, BPR is fundamentally different from multiplying a coefficient < 1 to anneal the KL term in VAE (Bowman et al., 2015) .", "This is because BPR is a non-linear operation log sum exp.", "For later discussion, we denote our discrete infoVAE with BPR as DI-VAE.", "Learning 
Sentence Representations from the Context DI-VAE infers sentence representations by reconstruction of the input sentence.", "Past research in distributional semantics has suggested the meaning of language can be inferred from the adjacent context (Harris, 1954; Hill et al., 2016) .", "The distributional hypothesis is especially applicable to dialog since the utterance meaning is highly contextual.", "For example, the dialog act is a wellknown utterance feature and depends on dialog state (Austin, 1975; Stolcke et al., 2000) .", "Thus, we introduce a second type of latent action based on sentence-level distributional semantics.", "Skip thought (ST) is a powerful sentence representation that captures contextual information (Kiros et al., 2015) .", "ST uses an RNN to encode a sentence, and then uses the resulting sentence representation to predict the previous and next sentences.", "Inspired by ST's robust performance across multiple tasks (Hill et al., 2016) , we adapt our DI-VAE to Discrete Information Variational Skip Thought (DI-VST) to learn discrete latent actions that model distributional semantics of sentences.", "We use the same recognition network from DI-VAE to output z's posterior distribution q R (z|x).", "Given the samples from q R (z|x), two RNN generators are used to predict the previous sentence x p and the next sentences x n .", "Finally, the learning objective is to maximize: L DI-VST = E q R (z|x)p(x)) [log(p n G (x n |z)p p G (x p |z))] − KL(q(z) p(z)) (7) Integration with Encoder Decoders We now describe how to integrate a given q R (z|x) with an encoder decoder and a policy network.", "Let the dialog context c be a sequence of utterances.", "Then a dialog context encoder network can encode the dialog context into a distributed representation h e = F e (c).", "The decoder F d can generate the responsesx = F d (h e , z) using samples from q R (z|x).", "Meanwhile, we train π to predict the aggregated posterior E p(x|c) [q R (z|x)] from c via maximum likelihood training.", "This model is referred as Latent Action Encoder Decoder (LAED) with the following objective.", "L LAED (θ F , θ π ) = E q R (z|x)p(x,c) [logp π (z|c) + log p F (x|z, c)] (8) Also simply augmenting the inputs of the decoders with latent action does not guarantee that the generated response exhibits the attributes of the give action.", "Thus we use the controllable text generation framework (Hu et al., 2017) by introducing L Attr , which reuses the same recognition network q R (z|x) as a fixed discriminator to penalize the decoder if its generated responses do not reflect the attributes in z. 
L Attr (θ F ) = E q R (z|x)p(c,x) [log q R (z|F(c, z))] (9) Since it is not possible to propagate gradients through the discrete outputs at F d at each word step, we use a deterministic continuous relaxation (Hu et al., 2017) by replacing output of F d with the probability of each word.", "Let o t be the normalized probability at step t ∈ [1, |x|], the inputs to q R at time t are then the sum of word embeddings weighted by o t , i.e.", "h R t = RNN(h R t−1 , Eo t ) and E is the word embedding matrix.", "Finally this loss is combined with L LAED and a hyperparameter λ to have Attribute Forcing LAED.", "L attrLAED = L LAED + λL Attr (10) Relationship with Conditional VAEs It is not hard to see L LAED is closely related to the objective of CVAEs for dialog generation (Serban et al., 2016; , which is: L CVAE = E q [log p(x|z, c)]−KL(q(z|x, c) p(z|c)) (11) Despite their similarities, we highlight the key differences that prohibit CVAE from achieving interpretable dialog generation.", "First L CVAE encourages I(x, z|c) (Agakov, 2005), which learns z that capture context-dependent semantics.", "More intuitively, z in CVAE is trained to generate x via p(x|z, c) so the meaning of learned z can only be interpreted along with its context c. Therefore this violates our goal of learning context-independent semantics.", "Our methods learn q R (z|x) that only depends on x and trains q R separately to ensure the semantics of z are interpretable standalone.", "Experiments and Results The proposed methods are evaluated on four datasets.", "The first corpus is Penn Treebank (PTB) (Marcus et al., 1993) used to evaluate sentence VAEs (Bowman et al., 2015) .", "We used the version pre-processed by Mikolov (Mikolov et al., 2010) .", "The second dataset is the Stanford Multi-Domain Dialog (SMD) dataset that contains 3,031 human-Woz, task-oriented dialogs collected from 3 different domains (navigation, weather and scheduling) (Eric and Manning, 2017) .", "The other two datasets are chat-oriented data: Daily Dialog (DD) and Switchboard (SW) (Godfrey and Holliman, 1997), which are used to test whether our methods can generalize beyond task-oriented dialogs but also to to open-domain chatting.", "DD contains 13,118 multi-turn human-human dialogs annotated with dialog acts and emotions.", "(Li et al., 2017) .", "SW has 2,400 human-human telephone conversations that are annotated with topics and dialog acts.", "SW is a more challenging dataset because it is transcribed from speech which contains complex spoken language phenomenon, e.g.", "hesitation, self-repair etc.", "Comparing Discrete Sentence Representation Models The first experiment used PTB and DD to evaluate the performance of the proposed methods in learning discrete sentence representations.", "We implemented DI-VAE and DI-VST using GRU-RNN (Chung et al., 2014) and trained them using Adam (Kingma and Ba, 2014) .", "Besides the proposed methods, the following baselines are compared.", "Unregularized models: removing the KL(q|p) term from DI-VAE and DI-VST leads to a simple discrete autoencoder (DAE) and discrete skip thought (DST) with stochastic discrete hidden units.", "ELBO models: the basic discrete sentence VAE (DVAE) or variational skip thought (DVST) that optimizes ELBO with regularization term KL(q(z|x) p(z)).", "We found that standard training failed to learn informative latent actions for either DVAE or DVST because of the posterior collapse.", "Therefore, KL-annealing (Bowman et al., 2015) and bag-of-word loss are used to force these two models 
learn meaningful representations.", "We also include the results for VAE with continuous latent variables reported on the same PTB .", "Additionally, we report the perplexity from a standard GRU-RNN language model (Zaremba et al., 2014) .", "The evaluation metrics include reconstruction perplexity (PPL), KL(q(z) p(z)) and the mutual information between input data and latent vari-ables I (x, z) .", "Intuitively a good model should achieve low perplexity and KL distance, and simultaneously achieve high I(x, z).", "The discrete latent space for all models are M =20 and K=10.", "Mini-batch size is 30.", "Table 1 shows that all models achieve better perplexity than an RNNLM, which shows they manage to learn meaningful q(z|x).", "First, for autoencoding models, DI-VAE is able to achieve the best results in all metrics compared other methods.", "We found DAEs quickly learn to reconstruct the input but they are prone to overfitting during training, which leads to lower performance on the test data compared to DI-VAE.", "Also, since there is no regularization term in the latent space, q(z) is very different from the p(z) which prohibits us from generating sentences from the latent space.", "In fact, DI-VAE enjoys the same linear interpolation properties reported in (Bowman et al., 2015) (See Appendix A.2).", "As for DVAEs, it achieves zero I(x, z) in standard training and only manages to learn some information when training with KL-annealing and bag-of-word loss.", "On the other hand, our methods achieve robust performance without the need for additional processing.", "Similarly, the proposed DI-VST is able to achieve the lowest PPL and similar KL compared to the strongly regularized DVST.", "Interestingly, although DST is able to achieve the highest I(x, z), but PPL is not further improved.", "These results confirm the effectiveness of the proposed BPR in terms of regularizing q(z) while learning meaningful posterior q(z|x).", "In order to understand BPR's sensitivity to batch size N , a follow-up experiment varied the batch size from 2 to 60 (If N =1, DI-VAE is equivalent to DVAE).", "Figure 2 show that as N increases, perplexity, I(x, z) monotonically improves, while KL(q p) only increases from 0 to 0.159.", "After N > 30, the performance plateaus.", "Therefore, using mini-batch is an efficient trade-off between q(z) estimation and computation speed.", "The last experiment in this section investigates the relation between representation learning and the dimension of the latent space.", "We set a fixed budget by restricting the maximum number of modes to be about 1000, i.e.", "K M ≈ 1000.", "We then vary the latent space size and report the same evaluation metrics.", "Table 2 shows that models with multiple small latent variables perform significantly better than those with large and few latent variables.", "K, M K M PPL KL(q p) I(x, z) 1000, 1 1000 75.61 0.032 0.335 10, 3 1000 71.42 0.071 0.607 4, 5 1024 68.43 0.088 0.809 Table 2 : DI-VAE on PTB with different latent dimensions under the same budget.", "Interpreting Latent Actions The next question is to interpret the meaning of the learned latent action symbols.", "To achieve this, the latent action of an utterance x n is obtained from a greedy mapping: a n = argmax k q R (z = k|x n ).", "We set M =3 and K=5, so that there are at most 125 different latent actions, and each x n can now be represented by a 1 -a 2 -a 3 , e.g.", "\"How are you?\"", "→ 1-4-2.", "Assuming that we have access to manually clustered data according to certain classes 
(e.g.", "dialog acts), it is unfair to use classic cluster measures (Vinh et al., 2010) to evaluate the clusters from latent actions.", "This is because the uniform prior p(z) evenly distribute the data to all possible latent actions, so that it is expected that frequent classes will be assigned to several latent actions.", "Thus we utilize the homogeneity metric (Rosenberg and Hirschberg, 2007 ) that measures if each latent action contains only members of a single class.", "We tested this on the SW and DD, which contain human annotated features and we report the latent actions' homogeneity w.r.t these features in Table 3 .", "On DD, results show DI-VST SW DD Act Topic Act Emotion DI-VAE 0.48 0.08 0.18 0.09 DI-VST 0.33 0.13 0.34 0.12 works better than DI-VAE in terms of creating actions that are more coherent for emotion and dialog acts.", "The results are interesting on SW since DI-VST performs worse on dialog acts than DI-VAE.", "One reason is that the dialog acts in SW are more fine-grained (42 acts) than the ones in DD (5 acts) so that distinguishing utterances based on words in x is more important than the information in the neighbouring utterances.", "We then apply the proposed methods to SMD which has no manual annotation and contains taskoriented dialogs.", "Two experts are shown 5 randomly selected utterances from each latent action and are asked to give an action name that can describe as many of the utterances as possible.", "Then an Amazon Mechanical Turk study is conducted to evaluate whether other utterances from the same latent action match these titles.", "5 workers see the action name and a different group of 5 utterances from that latent action.", "They are asked to select all utterances that belong to the given actions, which tests the homogeneity of the utterances falling in the same cluster.", "Negative samples are included to prevent random selection.", "Table 4 shows that both methods work well and DI-VST achieved better homogeneity than DI-VAE.", "Since DI-VAE is trained to reconstruct its input and DI-VST is trained to model the context, they group utterances in different ways.", "For example, DI-VST would group \"Can I get a restaurant\", \"I am looking for a restaurant\" into one action where Dialog Response Generation with Latent Actions Finally we implement an LAED as follows.", "The encoder is a hierarchical recurrent encoder (Serban et al., 2016) with bi-directional GRU-RNNs as the utterance encoder and a second GRU-RNN as the discourse encoder.", "The discourse encoder output its last hidden state h e |x| .", "The decoder is another GRU-RNN and its initial state of the decoder is obtained by h d 0 = h e |x| + M m=1 e m (z m ), where z comes from the recognition network of the proposed methods.", "The policy network π is a 2-layer multi-layer perceptron (MLP) that models p π (z|h e |x| ).", "We use up to the previous 10 utterances as the dialog context and denote the LAED using DI-VAE latent actions as AE-ED and the one uses DI-VST as ST-ED.", "First we need to confirm whether an LAED can generate responses that are consistent with the semantics of a given z.", "To answer this, we use a pre-trained recognition network R to check if a generated response carries the attributes in the given action.", "We generate dialog responses on a test dataset viax = F(z ∼ π(c), c) with greedy RNN decoding.", "The generated responses are passed into the R and we measure attribute accuracy by countingx as correct if z = argmax k q R (k|x).", "Table 6 : Results for attribute 
accuracy with and without attribute loss.", "responses are highly consistent with the given latent actions.", "Also, latent actions from DI-VAE achieve higher attribute accuracy than the ones from DI-VST, because z from auto-encoding is explicitly trained for x reconstruction.", "Adding L attr is effective in forcing the decoder to take z into account during its generation, which helps the most in more challenging open-domain chatting data, e.g.", "SW and DD.", "The accuracy of ST-ED on SW is worse than the other two datasets.", "The reason is that SW contains many short utterances that can be either a continuation of the same speaker or a new turn from the other speaker, whereas the responses in the other two domains are always followed by a different speaker.", "The more complex context pattern in SW may require special treatment.", "We leave it for future work.", "The second experiment checks if the policy network π is able to predict the right latent action given just the dialog context.", "We report both accuracy, i.e.", "argmax k q R (k|x) = argmax k p π (k |c) and perplexity of p π (z|c).", "The perplexity measure is more useful for open domain dialogs because decision-making in complex dialogs is often one-to-many given a similar context .", "Table 7 shows the prediction scores on SMD AE-ED 3.045 (51.5% sys 52.4% usr 50.5%) ST-ED 1.695 (75.5% sys 82.1% usr 69.2%) DD SW AE-ED 4.47 (35.8%) 4.46 (31.68%) ST-ED 3.89 (47.5%) 3.68 (33.2%) Table 7 : Performance of policy network.", "L attr is included in training.", "the three dialog datasets.", "These scores provide useful insights to understand the complexity of a dialog dataset.", "For example, accuracy on opendomain chatting is harder than the task-oriented SMD data.", "Also, it is intuitive that predicting system actions is easier than predicting user actions on SMD.", "Also, in general the prediction scores for ST-ED are higher the ones for AE-ED.", "The reason is related to our previous discussion about the granularity of the latent actions.", "Since latent actions from DI-VST mainly model the the type of utterances used in certain types of context, it is easier for the policy network to predict latent actions from DI-VST.", "Therefore, choosing the type of latent actions is a design choice and depends on the type of interpretability that is needed.", "We finish with an example generated from the two variants of LAED on SMD as shown in Table 8 .", "Given a dialog context, our systems are able to output a probability distribution over different latent actions that have interpretable meaning along with their natural language realizations.", "c usr: Where does my friend live?", "Model Action Generated Responses AE-ED give loc info -Tom is at 753 University Ave, and a road block.", "p(z|c)=0.34 -Comfort Inn is at 7 miles away.", "give user info -Your home address is 5671 barringer street.", "p(z|c)=0.22 -Your home is at 10 ames street.", "ST-ED give loc info -Jill's house is 8 miles away at 347 Alta Mesa Ave. p(z|c)=0.93 -Jill lives at 347 Alta Mesa Ave. 
Table 8 : Interpretable dialog generation on SMD with top probable latent actions.", "AE-ED predicts more fine-grained but more error-prone actions.", "Conclusion and Future Work This paper presents a novel unsupervised framework that enables the discovery of discrete latent actions and interpretable dialog response generation.", "Our main contributions reside in the two sentence representation models DI-VAE and DI-VST, and their integration with the encoder decoder models.", "Experiments show the proposed methods outperform strong baselines in learning discrete latent variables and showcase the effectiveness of interpretable dialog response generation.", "Our findings also suggest promising future research directions, including learning better context-based latent actions and using reinforce-ment learning to adapt policy networks.", "We believe that this work is an important step forward towards creating generative dialog models that can not only generalize to large unlabelled datasets in complex domains but also be explainable to human users." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Proposed Methods", "Learning Sentence Representations from Auto-Encoding", "Anti-Information Limitation of ELBO", "VAE with Information Maximization and Batch Prior Regularization", "Learning Sentence Representations from the Context", "Integration with Encoder Decoders", "Relationship with Conditional VAEs", "Experiments and Results", "Comparing Discrete Sentence Representation Models", "Interpreting Latent Actions", "Dialog Response Generation with Latent Actions", "Conclusion and Future Work" ] }
"GEM-SciDuet-train-3#paper-964#slide-5"
"Learning from Context Predicting DI VST"
"Skip-Thought (ST) is well-known distributional sentence representation [Hill et al 2016] The meaning of sentences in dialogs is highly contextual, e.g. dialog acts. We extend DI-VAE to Discrete Information Variational Skip Thought (DI-VST)."
"Skip-Thought (ST) is well-known distributional sentence representation [Hill et al 2016] The meaning of sentences in dialogs is highly contextual, e.g. dialog acts. We extend DI-VAE to Discrete Information Variational Skip Thought (DI-VST)."
[]
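The slide row above describes DI-VST, which keeps the same recognition network but trains two generators to predict the previous and the next utterance (Eq. 7 in the paper). Below is a sketch of the corresponding loss in the same hypothetical style as the DI-VAE sketch above; prev_decoder and next_decoder are assumed GRU generators with an nll method.

```python
def di_vst_loss(recognition, prev_decoder, next_decoder, x, x_prev, x_next, log_prior):
    """Negated Eq. 7: -E_q[log p_G(x_prev|z) + log p_G(x_next|z)] + sum_m KL(q'(z_m) || p(z_m))."""
    logits, z, h_g0 = recognition(x)                                       # same q_R(z|x) as DI-VAE
    nll = prev_decoder.nll(x_prev, h_g0) + next_decoder.nll(x_next, h_g0)  # hypothetical decoders
    bpr = sum(batch_prior_regularization(logits[:, m], log_prior)
              for m in range(recognition.M))
    return nll + bpr
```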
"GEM-SciDuet-train-3#paper-964#slide-6"
"964"
"Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation"
"The encoder-decoder dialog model is one of the most prominent methods used to build dialog systems in complex domains. Yet it is limited because it cannot output interpretable actions as in traditional systems, which hinders humans from understanding its generation process. We present an unsupervised discrete sentence representation learning method that can integrate with any existing encoderdecoder dialog models for interpretable response generation. Building upon variational autoencoders (VAEs), we present two novel models, DI-VAE and DI-VST that improve VAEs and can discover interpretable semantics via either auto encoding or context predicting. Our methods have been validated on real-world dialog datasets to discover semantic representations and enhance encoder-decoder models with interpretable generation. 1"
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230 ], "paper_content_text": [ "Introduction Classic dialog systems rely on developing a meaning representation to represent the utterances from both the machine and human users (Larsson and Traum, 2000; Bohus et al., 2007) .", "The dialog manager of a conventional dialog system outputs the system's next action in a semantic frame that usually contains hand-crafted dialog acts and slot values (Williams and Young, 2007) .", "Then a natural language generation module is used to generate the system's output in natural language based on the given semantic frame.", "This approach suffers from generalization to more complex domains because it soon become intractable to man-ually design a frame representation that covers all of the fine-grained system actions.", "The recently developed neural dialog system is one of the most prominent frameworks for developing dialog agents in complex domains.", "The basic model is based on encoder-decoder networks and can learn to generate system responses without the need for hand-crafted meaning representations and other annotations.", "Although generative dialog models have advanced rapidly (Serban et al., 2016; Li et al., 2016; , they cannot provide interpretable system actions as in the conventional dialog systems.", "This inability limits the effectiveness of generative dialog models in several ways.", "First, having interpretable system actions enables human to understand the behavior of a dialog system and better interpret the system intentions.", "Also, modeling the high-level decision-making policy in dialogs enables useful generalization and dataefficient domain adaptation (Gašić et al., 2010) .", "Therefore, the motivation of this paper is to develop an unsupervised neural recognition model that can discover interpretable meaning representations of utterances (denoted as latent actions) as a set of discrete latent variables from a large unlabelled corpus as shown in Figure 1 .", "The discovered meaning representations will then be integrated with encoder decoder networks to achieve interpretable dialog generation while preserving all the merit of neural dialog systems.", "We focus on learning discrete latent representations instead of dense continuous ones because discrete variables are easier to interpret (van den Oord et al., 2017) and can naturally correspond to categories in natural languages, e.g.", "topics, dialog acts and etc.", "Despite the difficulty of learning discrete latent variables in neural 
networks, the recently proposed Gumbel-Softmax offers a reliable way to back-propagate through discrete variables (Maddison et al., 2016; Jang et al., 2016) .", "However, we found a simple combination of sentence variational autoencoders (VAEs) (Bowman et al., 2015) and Gumbel-Softmax fails to learn meaningful discrete representations.", "We then highlight the anti-information limitation of the evidence lowerbound objective (ELBO) in VAEs and improve it by proposing Discrete Information VAE (DI-VAE) that maximizes the mutual information between data and latent actions.", "We further enrich the learning signals beyond auto encoding by extending Skip Thought (Kiros et al., 2015) to Discrete Information Variational Skip Thought (DI-VST) that learns sentence-level distributional semantics.", "Finally, an integration mechanism is presented that combines the learned latent actions with encoder decoder models.", "The proposed systems are tested on several realworld dialog datasets.", "Experiments show that the proposed methods significantly outperform the standard VAEs and can discover meaningful latent actions from these datasets.", "Also, experiments confirm the effectiveness of the proposed integration mechanism and show that the learned latent actions can control the sentence-level attributes of the generated responses and provide humaninterpretable meaning representations.", "Related Work Our work is closely related to research in latent variable dialog models.", "The majority of models are based on Conditional Variational Autoencoders (CVAEs) (Serban et al., 2016; Cao and Clark, 2017) with continuous latent variables to better model the response distribution and encourage diverse responses.", "further introduced dialog acts to guide the learning of the CVAEs.", "Discrete latent variables have also been used for task-oriented dialog systems (Wen et al., 2017) , where the latent space is used to represent intention.", "The second line of related work is enriching the dialog context encoder with more fine-grained information than the dialog history.", "Li et al., (2016) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "The proposed method also relates to sentence representation learning using neural networks.", "Most work learns continuous distributed representations of sentences from various learning signals (Hill et al., 2016) , e.g.", "the Skip Thought learns representations by predicting the previous and next sentences (Kiros et al., 2015) .", "Another area of work focused on learning regularized continuous sentence representation, which enables sentence generation by sampling the latent space (Bowman et al., 2015; Kim et al., 2017) .", "There is less work on discrete sentence representations due to the difficulty of passing gradients through discrete outputs.", "The recently developed Gumbel Softmax (Jang et al., 2016; Maddison et al., 2016) and vector quantization (van den Oord et al., 2017) enable us to train discrete variables.", "Notably, discrete variable models have been proposed to discover document topics (Miao et al., 2016) and semi-supervised sequence transaction (Zhou and Neubig, 2017) Our work differs from these as follows: (1) we focus on learning interpretable variables; in prior research the semantics of 
latent variables are mostly ignored in the dialog generation setting.", "(2) we improve the learning objective for discrete VAEs and overcome the well-known posterior collapsing issue (Bowman et al., 2015; Chen et al., 2016) .", "(3) we focus on unsupervised learning of salient features in dialog responses instead of hand-crafted features.", "Proposed Methods Our formulation contains three random variables: the dialog context c, the response x and the latent action z.", "The context often contains the discourse history in the format of a list of utterances.", "The response is an utterance that contains a list of word tokens.", "The latent action is a set of discrete variables that define high-level attributes of x.", "Before introducing the proposed framework, we first identify two key properties that are essential in or-der for z to be interpretable: 1. z should capture salient sentence-level features about the response x.", "2.", "The meaning of latent symbols z should be independent of the context c. The first property is self-evident.", "The second can be explained: assume z contains a single discrete variable with K classes.", "Since the context c can be any dialog history, if the meaning of each class changes given a different context, then it is difficult to extract an intuitive interpretation by only looking at all responses with class k ∈ [1, K].", "Therefore, the second property looks for latent actions that have context-independent semantics so that each assignment of z conveys the same meaning in all dialog contexts.", "With the above definition of interpretable latent actions, we first introduce a recognition network R : q R (z|x) and a generation network G. The role of R is to map an sentence to the latent variable z and the generator G defines the learning signals that will be used to train z's representation.", "Notably, our recognition network R does not depend on the context c as has been the case in prior work (Serban et al., 2016) .", "The motivation of this design is to encourage z to capture context-independent semantics, which are further elaborated in Section 3.4.", "With the z learned by R and G, we then introduce an encoder decoder network F : p F (x|z, c) and and a policy network π : p π (z|c).", "At test time, given a context c, the policy network and encoder decoder will work together to generate the next response viã x = p F (x|z ∼ p π (z|c), c).", "In short, R, G, F and π are the four components that comprise our proposed framework.", "The next section will first focus on developing R and G for learning interpretable z and then will move on to integrating R with F and π in Section 3.3.", "Learning Sentence Representations from Auto-Encoding Our baseline model is a sentence VAE with discrete latent space.", "We use an RNN as the recognition network to encode the response x.", "Its last hidden state h R |x| is used to represent x.", "We define z to be a set of K-way categorical variables z = {z 1 ...z m ...z M }, where M is the number of variables.", "For each z m , its posterior distribution is defined as q R (z m |x) = Softmax(W q h R |x| + b q ).", "During training, we use the Gumbel-Softmax trick to sample from this distribution and obtain lowvariance gradients.", "To map the latent samples to the initial state of the decoder RNN, we define {e 1 ...e m ...e M } where e m ∈ R K×D and D is the generator cell size.", "Thus the initial state of the generator is: h G 0 = M m=1 e m (z m ).", "Finally, the generator RNN is used to reconstruct the response given h 
G 0 .", "VAEs is trained to maxmimize the evidence lowerbound objective (ELBO) (Kingma and Welling, 2013) .", "For simplicity, later discussion drops the subscript m in z m and assumes a single latent z.", "Since each z m is independent, we can easily extend the results below to multiple variables.", "Anti-Information Limitation of ELBO It is well-known that sentence VAEs are hard to train because of the posterior collapse issue.", "Many empirical solutions have been proposed: weakening the decoder, adding auxiliary loss etc.", "(Bowman et al., 2015; Chen et al., 2016; .", "We argue that the posterior collapse issue lies in ELBO and we offer a novel decomposition to understand its behavior.", "First, instead of writing ELBO for a single data point, we write it as an expectation over a dataset: L VAE = E x [E q R (z|x) [log p G (x|z)] − KL(q R (z|x) p(z))] (1) We can expand the KL term as Eq.", "2 (derivations in Appendix A.1) and rewrite ELBO as: E x [KL(q R (z|x) p(z))] = (2) I(Z, X)+KL(q(z) p(z)) L VAE = E q(z|x)p(x) [log p(x|z)] − I(Z, X) − KL(q(z) p(z)) (3) where q(z) = E x [q R (z|x)] and I(Z, X) is the mutual information between Z and X.", "This expansion shows that the KL term in ELBO is trying to reduce the mutual information between latent variables and the input data, which explains why VAEs often ignore the latent variable, especially when equipped with powerful decoders.", "VAE with Information Maximization and Batch Prior Regularization A natural solution to correct the anti-information issue in Eq.", "3 is to maximize both the data likeli-hood lowerbound and the mutual information between z and the input data: L VAE + I(Z, X) = E q R (z|x)p(x) [log p G (x|z)] − KL(q(z) p(z)) (4) Therefore, jointly optimizing ELBO and mutual information simply cancels out the informationdiscouraging term.", "Also, we can still sample from the prior distribution for generation because of KL(q(z) p(z)).", "Eq.", "4 is similar to the objectives used in adversarial autoencoders (Makhzani et al., 2015; Kim et al., 2017) .", "Our derivation provides a theoretical justification to their superior performance.", "Notably, Eq.", "4 arrives at the same loss function proposed in infoVAE (Zhao S et al., 2017) .", "However, our derivation is different, offering a new way to understand ELBO behavior.", "The remaining challenge is how to minimize KL(q(z) p(z)), since q(z) is an expectation over q(z|x).", "When z is continuous, prior work has used adversarial training (Makhzani et al., 2015; Kim et al., 2017) or Maximum Mean Discrepancy (MMD) (Zhao S et al., 2017) to regularize q(z).", "It turns out that minimizing KL(q(z) p(z)) for discrete z is much simpler than its continuous counterparts.", "Let x n be a sample from a batch of N data points.", "Then we have: q(z) ≈ 1 N N n=1 q(z|x n ) = q (z) (5) where q (z) is a mixture of softmax from the posteriors q(z|x n ) of each x n .", "We can approximate KL(q(z) p(z)) by: KL(q (z) p(z)) = K k=1 q (z = k) log q (z = k) p(z = k) (6) We refer to Eq.", "6 as Batch Prior Regularization (BPR).", "When N approaches infinity, q (z) approaches the true marginal distribution of q(z).", "In practice, we only need to use the data from each mini-batch assuming that the mini batches are randomized.", "Last, BPR is fundamentally different from multiplying a coefficient < 1 to anneal the KL term in VAE (Bowman et al., 2015) .", "This is because BPR is a non-linear operation log sum exp.", "For later discussion, we denote our discrete infoVAE with BPR as DI-VAE.", "Learning 
Sentence Representations from the Context DI-VAE infers sentence representations by reconstruction of the input sentence.", "Past research in distributional semantics has suggested the meaning of language can be inferred from the adjacent context (Harris, 1954; Hill et al., 2016) .", "The distributional hypothesis is especially applicable to dialog since the utterance meaning is highly contextual.", "For example, the dialog act is a wellknown utterance feature and depends on dialog state (Austin, 1975; Stolcke et al., 2000) .", "Thus, we introduce a second type of latent action based on sentence-level distributional semantics.", "Skip thought (ST) is a powerful sentence representation that captures contextual information (Kiros et al., 2015) .", "ST uses an RNN to encode a sentence, and then uses the resulting sentence representation to predict the previous and next sentences.", "Inspired by ST's robust performance across multiple tasks (Hill et al., 2016) , we adapt our DI-VAE to Discrete Information Variational Skip Thought (DI-VST) to learn discrete latent actions that model distributional semantics of sentences.", "We use the same recognition network from DI-VAE to output z's posterior distribution q R (z|x).", "Given the samples from q R (z|x), two RNN generators are used to predict the previous sentence x p and the next sentences x n .", "Finally, the learning objective is to maximize: L DI-VST = E q R (z|x)p(x)) [log(p n G (x n |z)p p G (x p |z))] − KL(q(z) p(z)) (7) Integration with Encoder Decoders We now describe how to integrate a given q R (z|x) with an encoder decoder and a policy network.", "Let the dialog context c be a sequence of utterances.", "Then a dialog context encoder network can encode the dialog context into a distributed representation h e = F e (c).", "The decoder F d can generate the responsesx = F d (h e , z) using samples from q R (z|x).", "Meanwhile, we train π to predict the aggregated posterior E p(x|c) [q R (z|x)] from c via maximum likelihood training.", "This model is referred as Latent Action Encoder Decoder (LAED) with the following objective.", "L LAED (θ F , θ π ) = E q R (z|x)p(x,c) [logp π (z|c) + log p F (x|z, c)] (8) Also simply augmenting the inputs of the decoders with latent action does not guarantee that the generated response exhibits the attributes of the give action.", "Thus we use the controllable text generation framework (Hu et al., 2017) by introducing L Attr , which reuses the same recognition network q R (z|x) as a fixed discriminator to penalize the decoder if its generated responses do not reflect the attributes in z. 
L Attr (θ F ) = E q R (z|x)p(c,x) [log q R (z|F(c, z))] (9) Since it is not possible to propagate gradients through the discrete outputs at F d at each word step, we use a deterministic continuous relaxation (Hu et al., 2017) by replacing output of F d with the probability of each word.", "Let o t be the normalized probability at step t ∈ [1, |x|], the inputs to q R at time t are then the sum of word embeddings weighted by o t , i.e.", "h R t = RNN(h R t−1 , Eo t ) and E is the word embedding matrix.", "Finally this loss is combined with L LAED and a hyperparameter λ to have Attribute Forcing LAED.", "L attrLAED = L LAED + λL Attr (10) Relationship with Conditional VAEs It is not hard to see L LAED is closely related to the objective of CVAEs for dialog generation (Serban et al., 2016; , which is: L CVAE = E q [log p(x|z, c)]−KL(q(z|x, c) p(z|c)) (11) Despite their similarities, we highlight the key differences that prohibit CVAE from achieving interpretable dialog generation.", "First L CVAE encourages I(x, z|c) (Agakov, 2005), which learns z that capture context-dependent semantics.", "More intuitively, z in CVAE is trained to generate x via p(x|z, c) so the meaning of learned z can only be interpreted along with its context c. Therefore this violates our goal of learning context-independent semantics.", "Our methods learn q R (z|x) that only depends on x and trains q R separately to ensure the semantics of z are interpretable standalone.", "Experiments and Results The proposed methods are evaluated on four datasets.", "The first corpus is Penn Treebank (PTB) (Marcus et al., 1993) used to evaluate sentence VAEs (Bowman et al., 2015) .", "We used the version pre-processed by Mikolov (Mikolov et al., 2010) .", "The second dataset is the Stanford Multi-Domain Dialog (SMD) dataset that contains 3,031 human-Woz, task-oriented dialogs collected from 3 different domains (navigation, weather and scheduling) (Eric and Manning, 2017) .", "The other two datasets are chat-oriented data: Daily Dialog (DD) and Switchboard (SW) (Godfrey and Holliman, 1997), which are used to test whether our methods can generalize beyond task-oriented dialogs but also to to open-domain chatting.", "DD contains 13,118 multi-turn human-human dialogs annotated with dialog acts and emotions.", "(Li et al., 2017) .", "SW has 2,400 human-human telephone conversations that are annotated with topics and dialog acts.", "SW is a more challenging dataset because it is transcribed from speech which contains complex spoken language phenomenon, e.g.", "hesitation, self-repair etc.", "Comparing Discrete Sentence Representation Models The first experiment used PTB and DD to evaluate the performance of the proposed methods in learning discrete sentence representations.", "We implemented DI-VAE and DI-VST using GRU-RNN (Chung et al., 2014) and trained them using Adam (Kingma and Ba, 2014) .", "Besides the proposed methods, the following baselines are compared.", "Unregularized models: removing the KL(q|p) term from DI-VAE and DI-VST leads to a simple discrete autoencoder (DAE) and discrete skip thought (DST) with stochastic discrete hidden units.", "ELBO models: the basic discrete sentence VAE (DVAE) or variational skip thought (DVST) that optimizes ELBO with regularization term KL(q(z|x) p(z)).", "We found that standard training failed to learn informative latent actions for either DVAE or DVST because of the posterior collapse.", "Therefore, KL-annealing (Bowman et al., 2015) and bag-of-word loss are used to force these two models 
learn meaningful representations.", "We also include the results for VAE with continuous latent variables reported on the same PTB .", "Additionally, we report the perplexity from a standard GRU-RNN language model (Zaremba et al., 2014) .", "The evaluation metrics include reconstruction perplexity (PPL), KL(q(z) p(z)) and the mutual information between input data and latent vari-ables I (x, z) .", "Intuitively a good model should achieve low perplexity and KL distance, and simultaneously achieve high I(x, z).", "The discrete latent space for all models are M =20 and K=10.", "Mini-batch size is 30.", "Table 1 shows that all models achieve better perplexity than an RNNLM, which shows they manage to learn meaningful q(z|x).", "First, for autoencoding models, DI-VAE is able to achieve the best results in all metrics compared other methods.", "We found DAEs quickly learn to reconstruct the input but they are prone to overfitting during training, which leads to lower performance on the test data compared to DI-VAE.", "Also, since there is no regularization term in the latent space, q(z) is very different from the p(z) which prohibits us from generating sentences from the latent space.", "In fact, DI-VAE enjoys the same linear interpolation properties reported in (Bowman et al., 2015) (See Appendix A.2).", "As for DVAEs, it achieves zero I(x, z) in standard training and only manages to learn some information when training with KL-annealing and bag-of-word loss.", "On the other hand, our methods achieve robust performance without the need for additional processing.", "Similarly, the proposed DI-VST is able to achieve the lowest PPL and similar KL compared to the strongly regularized DVST.", "Interestingly, although DST is able to achieve the highest I(x, z), but PPL is not further improved.", "These results confirm the effectiveness of the proposed BPR in terms of regularizing q(z) while learning meaningful posterior q(z|x).", "In order to understand BPR's sensitivity to batch size N , a follow-up experiment varied the batch size from 2 to 60 (If N =1, DI-VAE is equivalent to DVAE).", "Figure 2 show that as N increases, perplexity, I(x, z) monotonically improves, while KL(q p) only increases from 0 to 0.159.", "After N > 30, the performance plateaus.", "Therefore, using mini-batch is an efficient trade-off between q(z) estimation and computation speed.", "The last experiment in this section investigates the relation between representation learning and the dimension of the latent space.", "We set a fixed budget by restricting the maximum number of modes to be about 1000, i.e.", "K M ≈ 1000.", "We then vary the latent space size and report the same evaluation metrics.", "Table 2 shows that models with multiple small latent variables perform significantly better than those with large and few latent variables.", "K, M K M PPL KL(q p) I(x, z) 1000, 1 1000 75.61 0.032 0.335 10, 3 1000 71.42 0.071 0.607 4, 5 1024 68.43 0.088 0.809 Table 2 : DI-VAE on PTB with different latent dimensions under the same budget.", "Interpreting Latent Actions The next question is to interpret the meaning of the learned latent action symbols.", "To achieve this, the latent action of an utterance x n is obtained from a greedy mapping: a n = argmax k q R (z = k|x n ).", "We set M =3 and K=5, so that there are at most 125 different latent actions, and each x n can now be represented by a 1 -a 2 -a 3 , e.g.", "\"How are you?\"", "→ 1-4-2.", "Assuming that we have access to manually clustered data according to certain classes 
(e.g.", "dialog acts), it is unfair to use classic cluster measures (Vinh et al., 2010) to evaluate the clusters from latent actions.", "This is because the uniform prior p(z) evenly distribute the data to all possible latent actions, so that it is expected that frequent classes will be assigned to several latent actions.", "Thus we utilize the homogeneity metric (Rosenberg and Hirschberg, 2007 ) that measures if each latent action contains only members of a single class.", "We tested this on the SW and DD, which contain human annotated features and we report the latent actions' homogeneity w.r.t these features in Table 3 .", "On DD, results show DI-VST SW DD Act Topic Act Emotion DI-VAE 0.48 0.08 0.18 0.09 DI-VST 0.33 0.13 0.34 0.12 works better than DI-VAE in terms of creating actions that are more coherent for emotion and dialog acts.", "The results are interesting on SW since DI-VST performs worse on dialog acts than DI-VAE.", "One reason is that the dialog acts in SW are more fine-grained (42 acts) than the ones in DD (5 acts) so that distinguishing utterances based on words in x is more important than the information in the neighbouring utterances.", "We then apply the proposed methods to SMD which has no manual annotation and contains taskoriented dialogs.", "Two experts are shown 5 randomly selected utterances from each latent action and are asked to give an action name that can describe as many of the utterances as possible.", "Then an Amazon Mechanical Turk study is conducted to evaluate whether other utterances from the same latent action match these titles.", "5 workers see the action name and a different group of 5 utterances from that latent action.", "They are asked to select all utterances that belong to the given actions, which tests the homogeneity of the utterances falling in the same cluster.", "Negative samples are included to prevent random selection.", "Table 4 shows that both methods work well and DI-VST achieved better homogeneity than DI-VAE.", "Since DI-VAE is trained to reconstruct its input and DI-VST is trained to model the context, they group utterances in different ways.", "For example, DI-VST would group \"Can I get a restaurant\", \"I am looking for a restaurant\" into one action where Dialog Response Generation with Latent Actions Finally we implement an LAED as follows.", "The encoder is a hierarchical recurrent encoder (Serban et al., 2016) with bi-directional GRU-RNNs as the utterance encoder and a second GRU-RNN as the discourse encoder.", "The discourse encoder output its last hidden state h e |x| .", "The decoder is another GRU-RNN and its initial state of the decoder is obtained by h d 0 = h e |x| + M m=1 e m (z m ), where z comes from the recognition network of the proposed methods.", "The policy network π is a 2-layer multi-layer perceptron (MLP) that models p π (z|h e |x| ).", "We use up to the previous 10 utterances as the dialog context and denote the LAED using DI-VAE latent actions as AE-ED and the one uses DI-VST as ST-ED.", "First we need to confirm whether an LAED can generate responses that are consistent with the semantics of a given z.", "To answer this, we use a pre-trained recognition network R to check if a generated response carries the attributes in the given action.", "We generate dialog responses on a test dataset viax = F(z ∼ π(c), c) with greedy RNN decoding.", "The generated responses are passed into the R and we measure attribute accuracy by countingx as correct if z = argmax k q R (k|x).", "Table 6 : Results for attribute 
accuracy with and without attribute loss.", "responses are highly consistent with the given latent actions.", "Also, latent actions from DI-VAE achieve higher attribute accuracy than the ones from DI-VST, because z from auto-encoding is explicitly trained for x reconstruction.", "Adding L attr is effective in forcing the decoder to take z into account during its generation, which helps the most in more challenging open-domain chatting data, e.g.", "SW and DD.", "The accuracy of ST-ED on SW is worse than the other two datasets.", "The reason is that SW contains many short utterances that can be either a continuation of the same speaker or a new turn from the other speaker, whereas the responses in the other two domains are always followed by a different speaker.", "The more complex context pattern in SW may require special treatment.", "We leave it for future work.", "The second experiment checks if the policy network π is able to predict the right latent action given just the dialog context.", "We report both accuracy, i.e.", "argmax k q R (k|x) = argmax k p π (k |c) and perplexity of p π (z|c).", "The perplexity measure is more useful for open domain dialogs because decision-making in complex dialogs is often one-to-many given a similar context .", "Table 7 shows the prediction scores on SMD AE-ED 3.045 (51.5% sys 52.4% usr 50.5%) ST-ED 1.695 (75.5% sys 82.1% usr 69.2%) DD SW AE-ED 4.47 (35.8%) 4.46 (31.68%) ST-ED 3.89 (47.5%) 3.68 (33.2%) Table 7 : Performance of policy network.", "L attr is included in training.", "the three dialog datasets.", "These scores provide useful insights to understand the complexity of a dialog dataset.", "For example, accuracy on opendomain chatting is harder than the task-oriented SMD data.", "Also, it is intuitive that predicting system actions is easier than predicting user actions on SMD.", "Also, in general the prediction scores for ST-ED are higher the ones for AE-ED.", "The reason is related to our previous discussion about the granularity of the latent actions.", "Since latent actions from DI-VST mainly model the the type of utterances used in certain types of context, it is easier for the policy network to predict latent actions from DI-VST.", "Therefore, choosing the type of latent actions is a design choice and depends on the type of interpretability that is needed.", "We finish with an example generated from the two variants of LAED on SMD as shown in Table 8 .", "Given a dialog context, our systems are able to output a probability distribution over different latent actions that have interpretable meaning along with their natural language realizations.", "c usr: Where does my friend live?", "Model Action Generated Responses AE-ED give loc info -Tom is at 753 University Ave, and a road block.", "p(z|c)=0.34 -Comfort Inn is at 7 miles away.", "give user info -Your home address is 5671 barringer street.", "p(z|c)=0.22 -Your home is at 10 ames street.", "ST-ED give loc info -Jill's house is 8 miles away at 347 Alta Mesa Ave. p(z|c)=0.93 -Jill lives at 347 Alta Mesa Ave. 
Table 8 : Interpretable dialog generation on SMD with top probable latent actions.", "AE-ED predicts more fine-grained but more error-prone actions.", "Conclusion and Future Work This paper presents a novel unsupervised framework that enables the discovery of discrete latent actions and interpretable dialog response generation.", "Our main contributions reside in the two sentence representation models DI-VAE and DI-VST, and their integration with the encoder decoder models.", "Experiments show the proposed methods outperform strong baselines in learning discrete latent variables and showcase the effectiveness of interpretable dialog response generation.", "Our findings also suggest promising future research directions, including learning better context-based latent actions and using reinforce-ment learning to adapt policy networks.", "We believe that this work is an important step forward towards creating generative dialog models that can not only generalize to large unlabelled datasets in complex domains but also be explainable to human users." ] }
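Section 3.1 in the paper content above describes the DI-VAE recognition step: an RNN encodes the response, a linear layer yields M independent K-way posteriors q_R(z_m|x) = Softmax(W_q h + b_q), Gumbel-Softmax provides low-variance discrete samples, and the sampled codes are embedded and summed into the generator's initial state h_0^G = sum_m e_m(z_m). The PyTorch sketch below illustrates that step; module names and dimensions are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn.functional as F

class DiscreteRecognition(torch.nn.Module):
    """Sketch of q_R(z_m | x): RNN-encode x, then M independent K-way softmaxes."""
    def __init__(self, vocab_size, emb_dim=200, hid_dim=512, M=20, K=10):
        super().__init__()
        self.M, self.K = M, K
        self.embed = torch.nn.Embedding(vocab_size, emb_dim)
        self.rnn = torch.nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.to_logits = torch.nn.Linear(hid_dim, M * K)                # W_q h + b_q
        self.code_emb = torch.nn.Parameter(torch.randn(M, K, hid_dim))  # e_m in R^{K x D}

    def forward(self, x, tau=1.0):
        _, h = self.rnn(self.embed(x))                           # h: (1, batch, hid_dim)
        logits = self.to_logits(h[-1]).view(-1, self.M, self.K)  # (batch, M, K)
        z = F.gumbel_softmax(logits, tau=tau, hard=True)         # straight-through one-hot samples
        h0 = torch.einsum("bmk,mkd->bd", z, self.code_emb)       # h_0^G = sum_m e_m(z_m)
        return logits, z, h0                                     # h0 initializes the generator RNN
```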
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Proposed Methods", "Learning Sentence Representations from Auto-Encoding", "Anti-Information Limitation of ELBO", "VAE with Information Maximization and Batch Prior Regularization", "Learning Sentence Representations from the Context", "Integration with Encoder Decoders", "Relationship with Conditional VAEs", "Experiments and Results", "Comparing Discrete Sentence Representation Models", "Interpreting Latent Actions", "Dialog Response Generation with Latent Actions", "Conclusion and Future Work" ] }
"GEM-SciDuet-train-3#paper-964#slide-6"
"Integration with Encoder Decoders"
"Policy Network z P(z|c) Recognition Network z Generator Optional: penalize decoder if generated x not exhibiting z [Hu et al 2017]"
"Policy Network z P(z|c) Recognition Network z Generator Optional: penalize decoder if generated x not exhibiting z [Hu et al 2017]"
[]
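The "Integration with Encoder Decoders" slide in this row compresses test-time behavior into x_tilde = p_F(x | z ~ p_pi(z|c), c). A small sketch of that inference loop follows; `policy`, `decode`, and `action_name` are hypothetical helpers used only for illustration.

```python
import numpy as np

def respond(context, policy, decode, action_name, sample=False, rng=np.random.default_rng(0)):
    """Pick a latent action from p_pi(z | c), then let the decoder realize it in language."""
    p_z = np.asarray(policy(context), dtype=float)   # distribution over K latent actions
    z = int(rng.choice(len(p_z), p=p_z)) if sample else int(np.argmax(p_z))
    # e.g. ("give loc info", 0.93, "Jill lives at 347 Alta Mesa Ave"), as in Table 8
    return action_name(z), float(p_z[z]), decode(z, context)
```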
"GEM-SciDuet-train-3#paper-964#slide-7"
"964"
"Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation"
"The encoder-decoder dialog model is one of the most prominent methods used to build dialog systems in complex domains. Yet it is limited because it cannot output interpretable actions as in traditional systems, which hinders humans from understanding its generation process. We present an unsupervised discrete sentence representation learning method that can integrate with any existing encoderdecoder dialog models for interpretable response generation. Building upon variational autoencoders (VAEs), we present two novel models, DI-VAE and DI-VST that improve VAEs and can discover interpretable semantics via either auto encoding or context predicting. Our methods have been validated on real-world dialog datasets to discover semantic representations and enhance encoder-decoder models with interpretable generation. 1"
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230 ], "paper_content_text": [ "Introduction Classic dialog systems rely on developing a meaning representation to represent the utterances from both the machine and human users (Larsson and Traum, 2000; Bohus et al., 2007) .", "The dialog manager of a conventional dialog system outputs the system's next action in a semantic frame that usually contains hand-crafted dialog acts and slot values (Williams and Young, 2007) .", "Then a natural language generation module is used to generate the system's output in natural language based on the given semantic frame.", "This approach suffers from generalization to more complex domains because it soon become intractable to man-ually design a frame representation that covers all of the fine-grained system actions.", "The recently developed neural dialog system is one of the most prominent frameworks for developing dialog agents in complex domains.", "The basic model is based on encoder-decoder networks and can learn to generate system responses without the need for hand-crafted meaning representations and other annotations.", "Although generative dialog models have advanced rapidly (Serban et al., 2016; Li et al., 2016; , they cannot provide interpretable system actions as in the conventional dialog systems.", "This inability limits the effectiveness of generative dialog models in several ways.", "First, having interpretable system actions enables human to understand the behavior of a dialog system and better interpret the system intentions.", "Also, modeling the high-level decision-making policy in dialogs enables useful generalization and dataefficient domain adaptation (Gašić et al., 2010) .", "Therefore, the motivation of this paper is to develop an unsupervised neural recognition model that can discover interpretable meaning representations of utterances (denoted as latent actions) as a set of discrete latent variables from a large unlabelled corpus as shown in Figure 1 .", "The discovered meaning representations will then be integrated with encoder decoder networks to achieve interpretable dialog generation while preserving all the merit of neural dialog systems.", "We focus on learning discrete latent representations instead of dense continuous ones because discrete variables are easier to interpret (van den Oord et al., 2017) and can naturally correspond to categories in natural languages, e.g.", "topics, dialog acts and etc.", "Despite the difficulty of learning discrete latent variables in neural 
networks, the recently proposed Gumbel-Softmax offers a reliable way to back-propagate through discrete variables (Maddison et al., 2016; Jang et al., 2016) .", "However, we found a simple combination of sentence variational autoencoders (VAEs) (Bowman et al., 2015) and Gumbel-Softmax fails to learn meaningful discrete representations.", "We then highlight the anti-information limitation of the evidence lowerbound objective (ELBO) in VAEs and improve it by proposing Discrete Information VAE (DI-VAE) that maximizes the mutual information between data and latent actions.", "We further enrich the learning signals beyond auto encoding by extending Skip Thought (Kiros et al., 2015) to Discrete Information Variational Skip Thought (DI-VST) that learns sentence-level distributional semantics.", "Finally, an integration mechanism is presented that combines the learned latent actions with encoder decoder models.", "The proposed systems are tested on several realworld dialog datasets.", "Experiments show that the proposed methods significantly outperform the standard VAEs and can discover meaningful latent actions from these datasets.", "Also, experiments confirm the effectiveness of the proposed integration mechanism and show that the learned latent actions can control the sentence-level attributes of the generated responses and provide humaninterpretable meaning representations.", "Related Work Our work is closely related to research in latent variable dialog models.", "The majority of models are based on Conditional Variational Autoencoders (CVAEs) (Serban et al., 2016; Cao and Clark, 2017) with continuous latent variables to better model the response distribution and encourage diverse responses.", "further introduced dialog acts to guide the learning of the CVAEs.", "Discrete latent variables have also been used for task-oriented dialog systems (Wen et al., 2017) , where the latent space is used to represent intention.", "The second line of related work is enriching the dialog context encoder with more fine-grained information than the dialog history.", "Li et al., (2016) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "The proposed method also relates to sentence representation learning using neural networks.", "Most work learns continuous distributed representations of sentences from various learning signals (Hill et al., 2016) , e.g.", "the Skip Thought learns representations by predicting the previous and next sentences (Kiros et al., 2015) .", "Another area of work focused on learning regularized continuous sentence representation, which enables sentence generation by sampling the latent space (Bowman et al., 2015; Kim et al., 2017) .", "There is less work on discrete sentence representations due to the difficulty of passing gradients through discrete outputs.", "The recently developed Gumbel Softmax (Jang et al., 2016; Maddison et al., 2016) and vector quantization (van den Oord et al., 2017) enable us to train discrete variables.", "Notably, discrete variable models have been proposed to discover document topics (Miao et al., 2016) and semi-supervised sequence transaction (Zhou and Neubig, 2017) Our work differs from these as follows: (1) we focus on learning interpretable variables; in prior research the semantics of 
latent variables are mostly ignored in the dialog generation setting.", "(2) we improve the learning objective for discrete VAEs and overcome the well-known posterior collapsing issue (Bowman et al., 2015; Chen et al., 2016) .", "(3) we focus on unsupervised learning of salient features in dialog responses instead of hand-crafted features.", "Proposed Methods Our formulation contains three random variables: the dialog context c, the response x and the latent action z.", "The context often contains the discourse history in the format of a list of utterances.", "The response is an utterance that contains a list of word tokens.", "The latent action is a set of discrete variables that define high-level attributes of x.", "Before introducing the proposed framework, we first identify two key properties that are essential in or-der for z to be interpretable: 1. z should capture salient sentence-level features about the response x.", "2.", "The meaning of latent symbols z should be independent of the context c. The first property is self-evident.", "The second can be explained: assume z contains a single discrete variable with K classes.", "Since the context c can be any dialog history, if the meaning of each class changes given a different context, then it is difficult to extract an intuitive interpretation by only looking at all responses with class k ∈ [1, K].", "Therefore, the second property looks for latent actions that have context-independent semantics so that each assignment of z conveys the same meaning in all dialog contexts.", "With the above definition of interpretable latent actions, we first introduce a recognition network R : q R (z|x) and a generation network G. The role of R is to map an sentence to the latent variable z and the generator G defines the learning signals that will be used to train z's representation.", "Notably, our recognition network R does not depend on the context c as has been the case in prior work (Serban et al., 2016) .", "The motivation of this design is to encourage z to capture context-independent semantics, which are further elaborated in Section 3.4.", "With the z learned by R and G, we then introduce an encoder decoder network F : p F (x|z, c) and and a policy network π : p π (z|c).", "At test time, given a context c, the policy network and encoder decoder will work together to generate the next response viã x = p F (x|z ∼ p π (z|c), c).", "In short, R, G, F and π are the four components that comprise our proposed framework.", "The next section will first focus on developing R and G for learning interpretable z and then will move on to integrating R with F and π in Section 3.3.", "Learning Sentence Representations from Auto-Encoding Our baseline model is a sentence VAE with discrete latent space.", "We use an RNN as the recognition network to encode the response x.", "Its last hidden state h R |x| is used to represent x.", "We define z to be a set of K-way categorical variables z = {z 1 ...z m ...z M }, where M is the number of variables.", "For each z m , its posterior distribution is defined as q R (z m |x) = Softmax(W q h R |x| + b q ).", "During training, we use the Gumbel-Softmax trick to sample from this distribution and obtain lowvariance gradients.", "To map the latent samples to the initial state of the decoder RNN, we define {e 1 ...e m ...e M } where e m ∈ R K×D and D is the generator cell size.", "Thus the initial state of the generator is: h G 0 = M m=1 e m (z m ).", "Finally, the generator RNN is used to reconstruct the response given h 
G 0 .", "VAEs is trained to maxmimize the evidence lowerbound objective (ELBO) (Kingma and Welling, 2013) .", "For simplicity, later discussion drops the subscript m in z m and assumes a single latent z.", "Since each z m is independent, we can easily extend the results below to multiple variables.", "Anti-Information Limitation of ELBO It is well-known that sentence VAEs are hard to train because of the posterior collapse issue.", "Many empirical solutions have been proposed: weakening the decoder, adding auxiliary loss etc.", "(Bowman et al., 2015; Chen et al., 2016; .", "We argue that the posterior collapse issue lies in ELBO and we offer a novel decomposition to understand its behavior.", "First, instead of writing ELBO for a single data point, we write it as an expectation over a dataset: L VAE = E x [E q R (z|x) [log p G (x|z)] − KL(q R (z|x) p(z))] (1) We can expand the KL term as Eq.", "2 (derivations in Appendix A.1) and rewrite ELBO as: E x [KL(q R (z|x) p(z))] = (2) I(Z, X)+KL(q(z) p(z)) L VAE = E q(z|x)p(x) [log p(x|z)] − I(Z, X) − KL(q(z) p(z)) (3) where q(z) = E x [q R (z|x)] and I(Z, X) is the mutual information between Z and X.", "This expansion shows that the KL term in ELBO is trying to reduce the mutual information between latent variables and the input data, which explains why VAEs often ignore the latent variable, especially when equipped with powerful decoders.", "VAE with Information Maximization and Batch Prior Regularization A natural solution to correct the anti-information issue in Eq.", "3 is to maximize both the data likeli-hood lowerbound and the mutual information between z and the input data: L VAE + I(Z, X) = E q R (z|x)p(x) [log p G (x|z)] − KL(q(z) p(z)) (4) Therefore, jointly optimizing ELBO and mutual information simply cancels out the informationdiscouraging term.", "Also, we can still sample from the prior distribution for generation because of KL(q(z) p(z)).", "Eq.", "4 is similar to the objectives used in adversarial autoencoders (Makhzani et al., 2015; Kim et al., 2017) .", "Our derivation provides a theoretical justification to their superior performance.", "Notably, Eq.", "4 arrives at the same loss function proposed in infoVAE (Zhao S et al., 2017) .", "However, our derivation is different, offering a new way to understand ELBO behavior.", "The remaining challenge is how to minimize KL(q(z) p(z)), since q(z) is an expectation over q(z|x).", "When z is continuous, prior work has used adversarial training (Makhzani et al., 2015; Kim et al., 2017) or Maximum Mean Discrepancy (MMD) (Zhao S et al., 2017) to regularize q(z).", "It turns out that minimizing KL(q(z) p(z)) for discrete z is much simpler than its continuous counterparts.", "Let x n be a sample from a batch of N data points.", "Then we have: q(z) ≈ 1 N N n=1 q(z|x n ) = q (z) (5) where q (z) is a mixture of softmax from the posteriors q(z|x n ) of each x n .", "We can approximate KL(q(z) p(z)) by: KL(q (z) p(z)) = K k=1 q (z = k) log q (z = k) p(z = k) (6) We refer to Eq.", "6 as Batch Prior Regularization (BPR).", "When N approaches infinity, q (z) approaches the true marginal distribution of q(z).", "In practice, we only need to use the data from each mini-batch assuming that the mini batches are randomized.", "Last, BPR is fundamentally different from multiplying a coefficient < 1 to anneal the KL term in VAE (Bowman et al., 2015) .", "This is because BPR is a non-linear operation log sum exp.", "For later discussion, we denote our discrete infoVAE with BPR as DI-VAE.", "Learning 
Sentence Representations from the Context DI-VAE infers sentence representations by reconstruction of the input sentence.", "Past research in distributional semantics has suggested the meaning of language can be inferred from the adjacent context (Harris, 1954; Hill et al., 2016) .", "The distributional hypothesis is especially applicable to dialog since the utterance meaning is highly contextual.", "For example, the dialog act is a wellknown utterance feature and depends on dialog state (Austin, 1975; Stolcke et al., 2000) .", "Thus, we introduce a second type of latent action based on sentence-level distributional semantics.", "Skip thought (ST) is a powerful sentence representation that captures contextual information (Kiros et al., 2015) .", "ST uses an RNN to encode a sentence, and then uses the resulting sentence representation to predict the previous and next sentences.", "Inspired by ST's robust performance across multiple tasks (Hill et al., 2016) , we adapt our DI-VAE to Discrete Information Variational Skip Thought (DI-VST) to learn discrete latent actions that model distributional semantics of sentences.", "We use the same recognition network from DI-VAE to output z's posterior distribution q R (z|x).", "Given the samples from q R (z|x), two RNN generators are used to predict the previous sentence x p and the next sentences x n .", "Finally, the learning objective is to maximize: L DI-VST = E q R (z|x)p(x)) [log(p n G (x n |z)p p G (x p |z))] − KL(q(z) p(z)) (7) Integration with Encoder Decoders We now describe how to integrate a given q R (z|x) with an encoder decoder and a policy network.", "Let the dialog context c be a sequence of utterances.", "Then a dialog context encoder network can encode the dialog context into a distributed representation h e = F e (c).", "The decoder F d can generate the responsesx = F d (h e , z) using samples from q R (z|x).", "Meanwhile, we train π to predict the aggregated posterior E p(x|c) [q R (z|x)] from c via maximum likelihood training.", "This model is referred as Latent Action Encoder Decoder (LAED) with the following objective.", "L LAED (θ F , θ π ) = E q R (z|x)p(x,c) [logp π (z|c) + log p F (x|z, c)] (8) Also simply augmenting the inputs of the decoders with latent action does not guarantee that the generated response exhibits the attributes of the give action.", "Thus we use the controllable text generation framework (Hu et al., 2017) by introducing L Attr , which reuses the same recognition network q R (z|x) as a fixed discriminator to penalize the decoder if its generated responses do not reflect the attributes in z. 
L Attr (θ F ) = E q R (z|x)p(c,x) [log q R (z|F(c, z))] (9) Since it is not possible to propagate gradients through the discrete outputs at F d at each word step, we use a deterministic continuous relaxation (Hu et al., 2017) by replacing output of F d with the probability of each word.", "Let o t be the normalized probability at step t ∈ [1, |x|], the inputs to q R at time t are then the sum of word embeddings weighted by o t , i.e.", "h R t = RNN(h R t−1 , Eo t ) and E is the word embedding matrix.", "Finally this loss is combined with L LAED and a hyperparameter λ to have Attribute Forcing LAED.", "L attrLAED = L LAED + λL Attr (10) Relationship with Conditional VAEs It is not hard to see L LAED is closely related to the objective of CVAEs for dialog generation (Serban et al., 2016; , which is: L CVAE = E q [log p(x|z, c)]−KL(q(z|x, c) p(z|c)) (11) Despite their similarities, we highlight the key differences that prohibit CVAE from achieving interpretable dialog generation.", "First L CVAE encourages I(x, z|c) (Agakov, 2005), which learns z that capture context-dependent semantics.", "More intuitively, z in CVAE is trained to generate x via p(x|z, c) so the meaning of learned z can only be interpreted along with its context c. Therefore this violates our goal of learning context-independent semantics.", "Our methods learn q R (z|x) that only depends on x and trains q R separately to ensure the semantics of z are interpretable standalone.", "Experiments and Results The proposed methods are evaluated on four datasets.", "The first corpus is Penn Treebank (PTB) (Marcus et al., 1993) used to evaluate sentence VAEs (Bowman et al., 2015) .", "We used the version pre-processed by Mikolov (Mikolov et al., 2010) .", "The second dataset is the Stanford Multi-Domain Dialog (SMD) dataset that contains 3,031 human-Woz, task-oriented dialogs collected from 3 different domains (navigation, weather and scheduling) (Eric and Manning, 2017) .", "The other two datasets are chat-oriented data: Daily Dialog (DD) and Switchboard (SW) (Godfrey and Holliman, 1997), which are used to test whether our methods can generalize beyond task-oriented dialogs but also to to open-domain chatting.", "DD contains 13,118 multi-turn human-human dialogs annotated with dialog acts and emotions.", "(Li et al., 2017) .", "SW has 2,400 human-human telephone conversations that are annotated with topics and dialog acts.", "SW is a more challenging dataset because it is transcribed from speech which contains complex spoken language phenomenon, e.g.", "hesitation, self-repair etc.", "Comparing Discrete Sentence Representation Models The first experiment used PTB and DD to evaluate the performance of the proposed methods in learning discrete sentence representations.", "We implemented DI-VAE and DI-VST using GRU-RNN (Chung et al., 2014) and trained them using Adam (Kingma and Ba, 2014) .", "Besides the proposed methods, the following baselines are compared.", "Unregularized models: removing the KL(q|p) term from DI-VAE and DI-VST leads to a simple discrete autoencoder (DAE) and discrete skip thought (DST) with stochastic discrete hidden units.", "ELBO models: the basic discrete sentence VAE (DVAE) or variational skip thought (DVST) that optimizes ELBO with regularization term KL(q(z|x) p(z)).", "We found that standard training failed to learn informative latent actions for either DVAE or DVST because of the posterior collapse.", "Therefore, KL-annealing (Bowman et al., 2015) and bag-of-word loss are used to force these two models 
learn meaningful representations.", "We also include the results for VAE with continuous latent variables reported on the same PTB .", "Additionally, we report the perplexity from a standard GRU-RNN language model (Zaremba et al., 2014) .", "The evaluation metrics include reconstruction perplexity (PPL), KL(q(z) p(z)) and the mutual information between input data and latent vari-ables I (x, z) .", "Intuitively a good model should achieve low perplexity and KL distance, and simultaneously achieve high I(x, z).", "The discrete latent space for all models are M =20 and K=10.", "Mini-batch size is 30.", "Table 1 shows that all models achieve better perplexity than an RNNLM, which shows they manage to learn meaningful q(z|x).", "First, for autoencoding models, DI-VAE is able to achieve the best results in all metrics compared other methods.", "We found DAEs quickly learn to reconstruct the input but they are prone to overfitting during training, which leads to lower performance on the test data compared to DI-VAE.", "Also, since there is no regularization term in the latent space, q(z) is very different from the p(z) which prohibits us from generating sentences from the latent space.", "In fact, DI-VAE enjoys the same linear interpolation properties reported in (Bowman et al., 2015) (See Appendix A.2).", "As for DVAEs, it achieves zero I(x, z) in standard training and only manages to learn some information when training with KL-annealing and bag-of-word loss.", "On the other hand, our methods achieve robust performance without the need for additional processing.", "Similarly, the proposed DI-VST is able to achieve the lowest PPL and similar KL compared to the strongly regularized DVST.", "Interestingly, although DST is able to achieve the highest I(x, z), but PPL is not further improved.", "These results confirm the effectiveness of the proposed BPR in terms of regularizing q(z) while learning meaningful posterior q(z|x).", "In order to understand BPR's sensitivity to batch size N , a follow-up experiment varied the batch size from 2 to 60 (If N =1, DI-VAE is equivalent to DVAE).", "Figure 2 show that as N increases, perplexity, I(x, z) monotonically improves, while KL(q p) only increases from 0 to 0.159.", "After N > 30, the performance plateaus.", "Therefore, using mini-batch is an efficient trade-off between q(z) estimation and computation speed.", "The last experiment in this section investigates the relation between representation learning and the dimension of the latent space.", "We set a fixed budget by restricting the maximum number of modes to be about 1000, i.e.", "K M ≈ 1000.", "We then vary the latent space size and report the same evaluation metrics.", "Table 2 shows that models with multiple small latent variables perform significantly better than those with large and few latent variables.", "K, M K M PPL KL(q p) I(x, z) 1000, 1 1000 75.61 0.032 0.335 10, 3 1000 71.42 0.071 0.607 4, 5 1024 68.43 0.088 0.809 Table 2 : DI-VAE on PTB with different latent dimensions under the same budget.", "Interpreting Latent Actions The next question is to interpret the meaning of the learned latent action symbols.", "To achieve this, the latent action of an utterance x n is obtained from a greedy mapping: a n = argmax k q R (z = k|x n ).", "We set M =3 and K=5, so that there are at most 125 different latent actions, and each x n can now be represented by a 1 -a 2 -a 3 , e.g.", "\"How are you?\"", "→ 1-4-2.", "Assuming that we have access to manually clustered data according to certain classes 
(e.g.", "dialog acts), it is unfair to use classic cluster measures (Vinh et al., 2010) to evaluate the clusters from latent actions.", "This is because the uniform prior p(z) evenly distribute the data to all possible latent actions, so that it is expected that frequent classes will be assigned to several latent actions.", "Thus we utilize the homogeneity metric (Rosenberg and Hirschberg, 2007 ) that measures if each latent action contains only members of a single class.", "We tested this on the SW and DD, which contain human annotated features and we report the latent actions' homogeneity w.r.t these features in Table 3 .", "On DD, results show DI-VST SW DD Act Topic Act Emotion DI-VAE 0.48 0.08 0.18 0.09 DI-VST 0.33 0.13 0.34 0.12 works better than DI-VAE in terms of creating actions that are more coherent for emotion and dialog acts.", "The results are interesting on SW since DI-VST performs worse on dialog acts than DI-VAE.", "One reason is that the dialog acts in SW are more fine-grained (42 acts) than the ones in DD (5 acts) so that distinguishing utterances based on words in x is more important than the information in the neighbouring utterances.", "We then apply the proposed methods to SMD which has no manual annotation and contains taskoriented dialogs.", "Two experts are shown 5 randomly selected utterances from each latent action and are asked to give an action name that can describe as many of the utterances as possible.", "Then an Amazon Mechanical Turk study is conducted to evaluate whether other utterances from the same latent action match these titles.", "5 workers see the action name and a different group of 5 utterances from that latent action.", "They are asked to select all utterances that belong to the given actions, which tests the homogeneity of the utterances falling in the same cluster.", "Negative samples are included to prevent random selection.", "Table 4 shows that both methods work well and DI-VST achieved better homogeneity than DI-VAE.", "Since DI-VAE is trained to reconstruct its input and DI-VST is trained to model the context, they group utterances in different ways.", "For example, DI-VST would group \"Can I get a restaurant\", \"I am looking for a restaurant\" into one action where Dialog Response Generation with Latent Actions Finally we implement an LAED as follows.", "The encoder is a hierarchical recurrent encoder (Serban et al., 2016) with bi-directional GRU-RNNs as the utterance encoder and a second GRU-RNN as the discourse encoder.", "The discourse encoder output its last hidden state h e |x| .", "The decoder is another GRU-RNN and its initial state of the decoder is obtained by h d 0 = h e |x| + M m=1 e m (z m ), where z comes from the recognition network of the proposed methods.", "The policy network π is a 2-layer multi-layer perceptron (MLP) that models p π (z|h e |x| ).", "We use up to the previous 10 utterances as the dialog context and denote the LAED using DI-VAE latent actions as AE-ED and the one uses DI-VST as ST-ED.", "First we need to confirm whether an LAED can generate responses that are consistent with the semantics of a given z.", "To answer this, we use a pre-trained recognition network R to check if a generated response carries the attributes in the given action.", "We generate dialog responses on a test dataset viax = F(z ∼ π(c), c) with greedy RNN decoding.", "The generated responses are passed into the R and we measure attribute accuracy by countingx as correct if z = argmax k q R (k|x).", "Table 6 : Results for attribute 
accuracy with and without attribute loss.", "responses are highly consistent with the given latent actions.", "Also, latent actions from DI-VAE achieve higher attribute accuracy than the ones from DI-VST, because z from auto-encoding is explicitly trained for x reconstruction.", "Adding L attr is effective in forcing the decoder to take z into account during its generation, which helps the most in more challenging open-domain chatting data, e.g.", "SW and DD.", "The accuracy of ST-ED on SW is worse than the other two datasets.", "The reason is that SW contains many short utterances that can be either a continuation of the same speaker or a new turn from the other speaker, whereas the responses in the other two domains are always followed by a different speaker.", "The more complex context pattern in SW may require special treatment.", "We leave it for future work.", "The second experiment checks if the policy network π is able to predict the right latent action given just the dialog context.", "We report both accuracy, i.e.", "argmax k q R (k|x) = argmax k p π (k |c) and perplexity of p π (z|c).", "The perplexity measure is more useful for open domain dialogs because decision-making in complex dialogs is often one-to-many given a similar context .", "Table 7 shows the prediction scores on SMD AE-ED 3.045 (51.5% sys 52.4% usr 50.5%) ST-ED 1.695 (75.5% sys 82.1% usr 69.2%) DD SW AE-ED 4.47 (35.8%) 4.46 (31.68%) ST-ED 3.89 (47.5%) 3.68 (33.2%) Table 7 : Performance of policy network.", "L attr is included in training.", "the three dialog datasets.", "These scores provide useful insights to understand the complexity of a dialog dataset.", "For example, accuracy on opendomain chatting is harder than the task-oriented SMD data.", "Also, it is intuitive that predicting system actions is easier than predicting user actions on SMD.", "Also, in general the prediction scores for ST-ED are higher the ones for AE-ED.", "The reason is related to our previous discussion about the granularity of the latent actions.", "Since latent actions from DI-VST mainly model the the type of utterances used in certain types of context, it is easier for the policy network to predict latent actions from DI-VST.", "Therefore, choosing the type of latent actions is a design choice and depends on the type of interpretability that is needed.", "We finish with an example generated from the two variants of LAED on SMD as shown in Table 8 .", "Given a dialog context, our systems are able to output a probability distribution over different latent actions that have interpretable meaning along with their natural language realizations.", "c usr: Where does my friend live?", "Model Action Generated Responses AE-ED give loc info -Tom is at 753 University Ave, and a road block.", "p(z|c)=0.34 -Comfort Inn is at 7 miles away.", "give user info -Your home address is 5671 barringer street.", "p(z|c)=0.22 -Your home is at 10 ames street.", "ST-ED give loc info -Jill's house is 8 miles away at 347 Alta Mesa Ave. p(z|c)=0.93 -Jill lives at 347 Alta Mesa Ave. 
Table 8: Interpretable dialog generation on SMD with top probable latent actions.", "AE-ED predicts more fine-grained but more error-prone actions.", "Conclusion and Future Work This paper presents a novel unsupervised framework that enables the discovery of discrete latent actions and interpretable dialog response generation.", "Our main contributions reside in the two sentence representation models DI-VAE and DI-VST, and their integration with the encoder-decoder models.", "Experiments show the proposed methods outperform strong baselines in learning discrete latent variables and showcase the effectiveness of interpretable dialog response generation.", "Our findings also suggest promising future research directions, including learning better context-based latent actions and using reinforcement learning to adapt policy networks.", "We believe that this work is an important step forward towards creating generative dialog models that can not only generalize to large unlabelled datasets in complex domains but also be explainable to human users." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Proposed Methods", "Learning Sentence Representations from Auto-Encoding", "Anti-Information Limitation of ELBO", "VAE with Information Maximization and Batch Prior Regularization", "Learning Sentence Representations from the Context", "Integration with Encoder Decoders", "Relationship with Conditional VAEs", "Experiments and Results", "Comparing Discrete Sentence Representation Models", "Interpreting Latent Actions", "Dialog Response Generation with Latent Actions", "Conclusion and Future Work" ] }
"GEM-SciDuet-train-3#paper-964#slide-7"
"Evaluation Datasets"
"a. Past evaluation dataset for text VAE [Bowman et al 2015] Stanford Multi-domain Dialog Dataset (SMD) [Eric and Manning 2017] a. 3,031 Human-Woz dialog dataset from 3 domains: weather, navigation & scheduling. Switchboard (SW) [Jurafsky et al 1997] a. 2,400 human-human telephone non-task-oriented dialogues about a given topic. a. 13,188 human-human non-task-oriented dialogs from chat room."
"a. Past evaluation dataset for text VAE [Bowman et al 2015] Stanford Multi-domain Dialog Dataset (SMD) [Eric and Manning 2017] a. 3,031 Human-Woz dialog dataset from 3 domains: weather, navigation & scheduling. Switchboard (SW) [Jurafsky et al 1997] a. 2,400 human-human telephone non-task-oriented dialogues about a given topic. a. 13,188 human-human non-task-oriented dialogs from chat room."
[]
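The greedy mapping used in the interpretation experiments quoted above, a_n = argmax_k q_R(z = k | x_n), reduces to one argmax per latent variable. A minimal sketch, assuming PyTorch; the tensor in the example is random and only illustrates shapes.

import torch

def greedy_latent_actions(q_logits):
    # q_logits: [batch, M, K] posterior scores from the recognition network q_R.
    # Returns one code per utterance, e.g. "1-4-2" for M=3 five-way variables.
    ids = q_logits.argmax(dim=-1)                 # [batch, M]
    return ["-".join(str(int(k)) for k in row) for row in ids]

# With M=3 and K=5 there are at most 5**3 = 125 distinct latent actions.
print(greedy_latent_actions(torch.randn(2, 3, 5)))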
"GEM-SciDuet-train-3#paper-964#slide-8"
"964"
"Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation"
"The encoder-decoder dialog model is one of the most prominent methods used to build dialog systems in complex domains. Yet it is limited because it cannot output interpretable actions as in traditional systems, which hinders humans from understanding its generation process. We present an unsupervised discrete sentence representation learning method that can integrate with any existing encoderdecoder dialog models for interpretable response generation. Building upon variational autoencoders (VAEs), we present two novel models, DI-VAE and DI-VST that improve VAEs and can discover interpretable semantics via either auto encoding or context predicting. Our methods have been validated on real-world dialog datasets to discover semantic representations and enhance encoder-decoder models with interpretable generation. 1"
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230 ], "paper_content_text": [ "Introduction Classic dialog systems rely on developing a meaning representation to represent the utterances from both the machine and human users (Larsson and Traum, 2000; Bohus et al., 2007) .", "The dialog manager of a conventional dialog system outputs the system's next action in a semantic frame that usually contains hand-crafted dialog acts and slot values (Williams and Young, 2007) .", "Then a natural language generation module is used to generate the system's output in natural language based on the given semantic frame.", "This approach suffers from generalization to more complex domains because it soon become intractable to man-ually design a frame representation that covers all of the fine-grained system actions.", "The recently developed neural dialog system is one of the most prominent frameworks for developing dialog agents in complex domains.", "The basic model is based on encoder-decoder networks and can learn to generate system responses without the need for hand-crafted meaning representations and other annotations.", "Although generative dialog models have advanced rapidly (Serban et al., 2016; Li et al., 2016; , they cannot provide interpretable system actions as in the conventional dialog systems.", "This inability limits the effectiveness of generative dialog models in several ways.", "First, having interpretable system actions enables human to understand the behavior of a dialog system and better interpret the system intentions.", "Also, modeling the high-level decision-making policy in dialogs enables useful generalization and dataefficient domain adaptation (Gašić et al., 2010) .", "Therefore, the motivation of this paper is to develop an unsupervised neural recognition model that can discover interpretable meaning representations of utterances (denoted as latent actions) as a set of discrete latent variables from a large unlabelled corpus as shown in Figure 1 .", "The discovered meaning representations will then be integrated with encoder decoder networks to achieve interpretable dialog generation while preserving all the merit of neural dialog systems.", "We focus on learning discrete latent representations instead of dense continuous ones because discrete variables are easier to interpret (van den Oord et al., 2017) and can naturally correspond to categories in natural languages, e.g.", "topics, dialog acts and etc.", "Despite the difficulty of learning discrete latent variables in neural 
networks, the recently proposed Gumbel-Softmax offers a reliable way to back-propagate through discrete variables (Maddison et al., 2016; Jang et al., 2016) .", "However, we found a simple combination of sentence variational autoencoders (VAEs) (Bowman et al., 2015) and Gumbel-Softmax fails to learn meaningful discrete representations.", "We then highlight the anti-information limitation of the evidence lowerbound objective (ELBO) in VAEs and improve it by proposing Discrete Information VAE (DI-VAE) that maximizes the mutual information between data and latent actions.", "We further enrich the learning signals beyond auto encoding by extending Skip Thought (Kiros et al., 2015) to Discrete Information Variational Skip Thought (DI-VST) that learns sentence-level distributional semantics.", "Finally, an integration mechanism is presented that combines the learned latent actions with encoder decoder models.", "The proposed systems are tested on several realworld dialog datasets.", "Experiments show that the proposed methods significantly outperform the standard VAEs and can discover meaningful latent actions from these datasets.", "Also, experiments confirm the effectiveness of the proposed integration mechanism and show that the learned latent actions can control the sentence-level attributes of the generated responses and provide humaninterpretable meaning representations.", "Related Work Our work is closely related to research in latent variable dialog models.", "The majority of models are based on Conditional Variational Autoencoders (CVAEs) (Serban et al., 2016; Cao and Clark, 2017) with continuous latent variables to better model the response distribution and encourage diverse responses.", "further introduced dialog acts to guide the learning of the CVAEs.", "Discrete latent variables have also been used for task-oriented dialog systems (Wen et al., 2017) , where the latent space is used to represent intention.", "The second line of related work is enriching the dialog context encoder with more fine-grained information than the dialog history.", "Li et al., (2016) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "The proposed method also relates to sentence representation learning using neural networks.", "Most work learns continuous distributed representations of sentences from various learning signals (Hill et al., 2016) , e.g.", "the Skip Thought learns representations by predicting the previous and next sentences (Kiros et al., 2015) .", "Another area of work focused on learning regularized continuous sentence representation, which enables sentence generation by sampling the latent space (Bowman et al., 2015; Kim et al., 2017) .", "There is less work on discrete sentence representations due to the difficulty of passing gradients through discrete outputs.", "The recently developed Gumbel Softmax (Jang et al., 2016; Maddison et al., 2016) and vector quantization (van den Oord et al., 2017) enable us to train discrete variables.", "Notably, discrete variable models have been proposed to discover document topics (Miao et al., 2016) and semi-supervised sequence transaction (Zhou and Neubig, 2017) Our work differs from these as follows: (1) we focus on learning interpretable variables; in prior research the semantics of 
latent variables are mostly ignored in the dialog generation setting.", "(2) we improve the learning objective for discrete VAEs and overcome the well-known posterior collapsing issue (Bowman et al., 2015; Chen et al., 2016) .", "(3) we focus on unsupervised learning of salient features in dialog responses instead of hand-crafted features.", "Proposed Methods Our formulation contains three random variables: the dialog context c, the response x and the latent action z.", "The context often contains the discourse history in the format of a list of utterances.", "The response is an utterance that contains a list of word tokens.", "The latent action is a set of discrete variables that define high-level attributes of x.", "Before introducing the proposed framework, we first identify two key properties that are essential in or-der for z to be interpretable: 1. z should capture salient sentence-level features about the response x.", "2.", "The meaning of latent symbols z should be independent of the context c. The first property is self-evident.", "The second can be explained: assume z contains a single discrete variable with K classes.", "Since the context c can be any dialog history, if the meaning of each class changes given a different context, then it is difficult to extract an intuitive interpretation by only looking at all responses with class k ∈ [1, K].", "Therefore, the second property looks for latent actions that have context-independent semantics so that each assignment of z conveys the same meaning in all dialog contexts.", "With the above definition of interpretable latent actions, we first introduce a recognition network R : q R (z|x) and a generation network G. The role of R is to map an sentence to the latent variable z and the generator G defines the learning signals that will be used to train z's representation.", "Notably, our recognition network R does not depend on the context c as has been the case in prior work (Serban et al., 2016) .", "The motivation of this design is to encourage z to capture context-independent semantics, which are further elaborated in Section 3.4.", "With the z learned by R and G, we then introduce an encoder decoder network F : p F (x|z, c) and and a policy network π : p π (z|c).", "At test time, given a context c, the policy network and encoder decoder will work together to generate the next response viã x = p F (x|z ∼ p π (z|c), c).", "In short, R, G, F and π are the four components that comprise our proposed framework.", "The next section will first focus on developing R and G for learning interpretable z and then will move on to integrating R with F and π in Section 3.3.", "Learning Sentence Representations from Auto-Encoding Our baseline model is a sentence VAE with discrete latent space.", "We use an RNN as the recognition network to encode the response x.", "Its last hidden state h R |x| is used to represent x.", "We define z to be a set of K-way categorical variables z = {z 1 ...z m ...z M }, where M is the number of variables.", "For each z m , its posterior distribution is defined as q R (z m |x) = Softmax(W q h R |x| + b q ).", "During training, we use the Gumbel-Softmax trick to sample from this distribution and obtain lowvariance gradients.", "To map the latent samples to the initial state of the decoder RNN, we define {e 1 ...e m ...e M } where e m ∈ R K×D and D is the generator cell size.", "Thus the initial state of the generator is: h G 0 = M m=1 e m (z m ).", "Finally, the generator RNN is used to reconstruct the response given h 
G 0 .", "VAEs is trained to maxmimize the evidence lowerbound objective (ELBO) (Kingma and Welling, 2013) .", "For simplicity, later discussion drops the subscript m in z m and assumes a single latent z.", "Since each z m is independent, we can easily extend the results below to multiple variables.", "Anti-Information Limitation of ELBO It is well-known that sentence VAEs are hard to train because of the posterior collapse issue.", "Many empirical solutions have been proposed: weakening the decoder, adding auxiliary loss etc.", "(Bowman et al., 2015; Chen et al., 2016; .", "We argue that the posterior collapse issue lies in ELBO and we offer a novel decomposition to understand its behavior.", "First, instead of writing ELBO for a single data point, we write it as an expectation over a dataset: L VAE = E x [E q R (z|x) [log p G (x|z)] − KL(q R (z|x) p(z))] (1) We can expand the KL term as Eq.", "2 (derivations in Appendix A.1) and rewrite ELBO as: E x [KL(q R (z|x) p(z))] = (2) I(Z, X)+KL(q(z) p(z)) L VAE = E q(z|x)p(x) [log p(x|z)] − I(Z, X) − KL(q(z) p(z)) (3) where q(z) = E x [q R (z|x)] and I(Z, X) is the mutual information between Z and X.", "This expansion shows that the KL term in ELBO is trying to reduce the mutual information between latent variables and the input data, which explains why VAEs often ignore the latent variable, especially when equipped with powerful decoders.", "VAE with Information Maximization and Batch Prior Regularization A natural solution to correct the anti-information issue in Eq.", "3 is to maximize both the data likeli-hood lowerbound and the mutual information between z and the input data: L VAE + I(Z, X) = E q R (z|x)p(x) [log p G (x|z)] − KL(q(z) p(z)) (4) Therefore, jointly optimizing ELBO and mutual information simply cancels out the informationdiscouraging term.", "Also, we can still sample from the prior distribution for generation because of KL(q(z) p(z)).", "Eq.", "4 is similar to the objectives used in adversarial autoencoders (Makhzani et al., 2015; Kim et al., 2017) .", "Our derivation provides a theoretical justification to their superior performance.", "Notably, Eq.", "4 arrives at the same loss function proposed in infoVAE (Zhao S et al., 2017) .", "However, our derivation is different, offering a new way to understand ELBO behavior.", "The remaining challenge is how to minimize KL(q(z) p(z)), since q(z) is an expectation over q(z|x).", "When z is continuous, prior work has used adversarial training (Makhzani et al., 2015; Kim et al., 2017) or Maximum Mean Discrepancy (MMD) (Zhao S et al., 2017) to regularize q(z).", "It turns out that minimizing KL(q(z) p(z)) for discrete z is much simpler than its continuous counterparts.", "Let x n be a sample from a batch of N data points.", "Then we have: q(z) ≈ 1 N N n=1 q(z|x n ) = q (z) (5) where q (z) is a mixture of softmax from the posteriors q(z|x n ) of each x n .", "We can approximate KL(q(z) p(z)) by: KL(q (z) p(z)) = K k=1 q (z = k) log q (z = k) p(z = k) (6) We refer to Eq.", "6 as Batch Prior Regularization (BPR).", "When N approaches infinity, q (z) approaches the true marginal distribution of q(z).", "In practice, we only need to use the data from each mini-batch assuming that the mini batches are randomized.", "Last, BPR is fundamentally different from multiplying a coefficient < 1 to anneal the KL term in VAE (Bowman et al., 2015) .", "This is because BPR is a non-linear operation log sum exp.", "For later discussion, we denote our discrete infoVAE with BPR as DI-VAE.", "Learning 
Sentence Representations from the Context DI-VAE infers sentence representations by reconstruction of the input sentence.", "Past research in distributional semantics has suggested the meaning of language can be inferred from the adjacent context (Harris, 1954; Hill et al., 2016) .", "The distributional hypothesis is especially applicable to dialog since the utterance meaning is highly contextual.", "For example, the dialog act is a wellknown utterance feature and depends on dialog state (Austin, 1975; Stolcke et al., 2000) .", "Thus, we introduce a second type of latent action based on sentence-level distributional semantics.", "Skip thought (ST) is a powerful sentence representation that captures contextual information (Kiros et al., 2015) .", "ST uses an RNN to encode a sentence, and then uses the resulting sentence representation to predict the previous and next sentences.", "Inspired by ST's robust performance across multiple tasks (Hill et al., 2016) , we adapt our DI-VAE to Discrete Information Variational Skip Thought (DI-VST) to learn discrete latent actions that model distributional semantics of sentences.", "We use the same recognition network from DI-VAE to output z's posterior distribution q R (z|x).", "Given the samples from q R (z|x), two RNN generators are used to predict the previous sentence x p and the next sentences x n .", "Finally, the learning objective is to maximize: L DI-VST = E q R (z|x)p(x)) [log(p n G (x n |z)p p G (x p |z))] − KL(q(z) p(z)) (7) Integration with Encoder Decoders We now describe how to integrate a given q R (z|x) with an encoder decoder and a policy network.", "Let the dialog context c be a sequence of utterances.", "Then a dialog context encoder network can encode the dialog context into a distributed representation h e = F e (c).", "The decoder F d can generate the responsesx = F d (h e , z) using samples from q R (z|x).", "Meanwhile, we train π to predict the aggregated posterior E p(x|c) [q R (z|x)] from c via maximum likelihood training.", "This model is referred as Latent Action Encoder Decoder (LAED) with the following objective.", "L LAED (θ F , θ π ) = E q R (z|x)p(x,c) [logp π (z|c) + log p F (x|z, c)] (8) Also simply augmenting the inputs of the decoders with latent action does not guarantee that the generated response exhibits the attributes of the give action.", "Thus we use the controllable text generation framework (Hu et al., 2017) by introducing L Attr , which reuses the same recognition network q R (z|x) as a fixed discriminator to penalize the decoder if its generated responses do not reflect the attributes in z. 
L Attr (θ F ) = E q R (z|x)p(c,x) [log q R (z|F(c, z))] (9) Since it is not possible to propagate gradients through the discrete outputs at F d at each word step, we use a deterministic continuous relaxation (Hu et al., 2017) by replacing output of F d with the probability of each word.", "Let o t be the normalized probability at step t ∈ [1, |x|], the inputs to q R at time t are then the sum of word embeddings weighted by o t , i.e.", "h R t = RNN(h R t−1 , Eo t ) and E is the word embedding matrix.", "Finally this loss is combined with L LAED and a hyperparameter λ to have Attribute Forcing LAED.", "L attrLAED = L LAED + λL Attr (10) Relationship with Conditional VAEs It is not hard to see L LAED is closely related to the objective of CVAEs for dialog generation (Serban et al., 2016; , which is: L CVAE = E q [log p(x|z, c)]−KL(q(z|x, c) p(z|c)) (11) Despite their similarities, we highlight the key differences that prohibit CVAE from achieving interpretable dialog generation.", "First L CVAE encourages I(x, z|c) (Agakov, 2005), which learns z that capture context-dependent semantics.", "More intuitively, z in CVAE is trained to generate x via p(x|z, c) so the meaning of learned z can only be interpreted along with its context c. Therefore this violates our goal of learning context-independent semantics.", "Our methods learn q R (z|x) that only depends on x and trains q R separately to ensure the semantics of z are interpretable standalone.", "Experiments and Results The proposed methods are evaluated on four datasets.", "The first corpus is Penn Treebank (PTB) (Marcus et al., 1993) used to evaluate sentence VAEs (Bowman et al., 2015) .", "We used the version pre-processed by Mikolov (Mikolov et al., 2010) .", "The second dataset is the Stanford Multi-Domain Dialog (SMD) dataset that contains 3,031 human-Woz, task-oriented dialogs collected from 3 different domains (navigation, weather and scheduling) (Eric and Manning, 2017) .", "The other two datasets are chat-oriented data: Daily Dialog (DD) and Switchboard (SW) (Godfrey and Holliman, 1997), which are used to test whether our methods can generalize beyond task-oriented dialogs but also to to open-domain chatting.", "DD contains 13,118 multi-turn human-human dialogs annotated with dialog acts and emotions.", "(Li et al., 2017) .", "SW has 2,400 human-human telephone conversations that are annotated with topics and dialog acts.", "SW is a more challenging dataset because it is transcribed from speech which contains complex spoken language phenomenon, e.g.", "hesitation, self-repair etc.", "Comparing Discrete Sentence Representation Models The first experiment used PTB and DD to evaluate the performance of the proposed methods in learning discrete sentence representations.", "We implemented DI-VAE and DI-VST using GRU-RNN (Chung et al., 2014) and trained them using Adam (Kingma and Ba, 2014) .", "Besides the proposed methods, the following baselines are compared.", "Unregularized models: removing the KL(q|p) term from DI-VAE and DI-VST leads to a simple discrete autoencoder (DAE) and discrete skip thought (DST) with stochastic discrete hidden units.", "ELBO models: the basic discrete sentence VAE (DVAE) or variational skip thought (DVST) that optimizes ELBO with regularization term KL(q(z|x) p(z)).", "We found that standard training failed to learn informative latent actions for either DVAE or DVST because of the posterior collapse.", "Therefore, KL-annealing (Bowman et al., 2015) and bag-of-word loss are used to force these two models 
learn meaningful representations.", "We also include the results for VAE with continuous latent variables reported on the same PTB .", "Additionally, we report the perplexity from a standard GRU-RNN language model (Zaremba et al., 2014) .", "The evaluation metrics include reconstruction perplexity (PPL), KL(q(z) p(z)) and the mutual information between input data and latent vari-ables I (x, z) .", "Intuitively a good model should achieve low perplexity and KL distance, and simultaneously achieve high I(x, z).", "The discrete latent space for all models are M =20 and K=10.", "Mini-batch size is 30.", "Table 1 shows that all models achieve better perplexity than an RNNLM, which shows they manage to learn meaningful q(z|x).", "First, for autoencoding models, DI-VAE is able to achieve the best results in all metrics compared other methods.", "We found DAEs quickly learn to reconstruct the input but they are prone to overfitting during training, which leads to lower performance on the test data compared to DI-VAE.", "Also, since there is no regularization term in the latent space, q(z) is very different from the p(z) which prohibits us from generating sentences from the latent space.", "In fact, DI-VAE enjoys the same linear interpolation properties reported in (Bowman et al., 2015) (See Appendix A.2).", "As for DVAEs, it achieves zero I(x, z) in standard training and only manages to learn some information when training with KL-annealing and bag-of-word loss.", "On the other hand, our methods achieve robust performance without the need for additional processing.", "Similarly, the proposed DI-VST is able to achieve the lowest PPL and similar KL compared to the strongly regularized DVST.", "Interestingly, although DST is able to achieve the highest I(x, z), but PPL is not further improved.", "These results confirm the effectiveness of the proposed BPR in terms of regularizing q(z) while learning meaningful posterior q(z|x).", "In order to understand BPR's sensitivity to batch size N , a follow-up experiment varied the batch size from 2 to 60 (If N =1, DI-VAE is equivalent to DVAE).", "Figure 2 show that as N increases, perplexity, I(x, z) monotonically improves, while KL(q p) only increases from 0 to 0.159.", "After N > 30, the performance plateaus.", "Therefore, using mini-batch is an efficient trade-off between q(z) estimation and computation speed.", "The last experiment in this section investigates the relation between representation learning and the dimension of the latent space.", "We set a fixed budget by restricting the maximum number of modes to be about 1000, i.e.", "K M ≈ 1000.", "We then vary the latent space size and report the same evaluation metrics.", "Table 2 shows that models with multiple small latent variables perform significantly better than those with large and few latent variables.", "K, M K M PPL KL(q p) I(x, z) 1000, 1 1000 75.61 0.032 0.335 10, 3 1000 71.42 0.071 0.607 4, 5 1024 68.43 0.088 0.809 Table 2 : DI-VAE on PTB with different latent dimensions under the same budget.", "Interpreting Latent Actions The next question is to interpret the meaning of the learned latent action symbols.", "To achieve this, the latent action of an utterance x n is obtained from a greedy mapping: a n = argmax k q R (z = k|x n ).", "We set M =3 and K=5, so that there are at most 125 different latent actions, and each x n can now be represented by a 1 -a 2 -a 3 , e.g.", "\"How are you?\"", "→ 1-4-2.", "Assuming that we have access to manually clustered data according to certain classes 
(e.g.", "dialog acts), it is unfair to use classic cluster measures (Vinh et al., 2010) to evaluate the clusters from latent actions.", "This is because the uniform prior p(z) evenly distribute the data to all possible latent actions, so that it is expected that frequent classes will be assigned to several latent actions.", "Thus we utilize the homogeneity metric (Rosenberg and Hirschberg, 2007 ) that measures if each latent action contains only members of a single class.", "We tested this on the SW and DD, which contain human annotated features and we report the latent actions' homogeneity w.r.t these features in Table 3 .", "On DD, results show DI-VST SW DD Act Topic Act Emotion DI-VAE 0.48 0.08 0.18 0.09 DI-VST 0.33 0.13 0.34 0.12 works better than DI-VAE in terms of creating actions that are more coherent for emotion and dialog acts.", "The results are interesting on SW since DI-VST performs worse on dialog acts than DI-VAE.", "One reason is that the dialog acts in SW are more fine-grained (42 acts) than the ones in DD (5 acts) so that distinguishing utterances based on words in x is more important than the information in the neighbouring utterances.", "We then apply the proposed methods to SMD which has no manual annotation and contains taskoriented dialogs.", "Two experts are shown 5 randomly selected utterances from each latent action and are asked to give an action name that can describe as many of the utterances as possible.", "Then an Amazon Mechanical Turk study is conducted to evaluate whether other utterances from the same latent action match these titles.", "5 workers see the action name and a different group of 5 utterances from that latent action.", "They are asked to select all utterances that belong to the given actions, which tests the homogeneity of the utterances falling in the same cluster.", "Negative samples are included to prevent random selection.", "Table 4 shows that both methods work well and DI-VST achieved better homogeneity than DI-VAE.", "Since DI-VAE is trained to reconstruct its input and DI-VST is trained to model the context, they group utterances in different ways.", "For example, DI-VST would group \"Can I get a restaurant\", \"I am looking for a restaurant\" into one action where Dialog Response Generation with Latent Actions Finally we implement an LAED as follows.", "The encoder is a hierarchical recurrent encoder (Serban et al., 2016) with bi-directional GRU-RNNs as the utterance encoder and a second GRU-RNN as the discourse encoder.", "The discourse encoder output its last hidden state h e |x| .", "The decoder is another GRU-RNN and its initial state of the decoder is obtained by h d 0 = h e |x| + M m=1 e m (z m ), where z comes from the recognition network of the proposed methods.", "The policy network π is a 2-layer multi-layer perceptron (MLP) that models p π (z|h e |x| ).", "We use up to the previous 10 utterances as the dialog context and denote the LAED using DI-VAE latent actions as AE-ED and the one uses DI-VST as ST-ED.", "First we need to confirm whether an LAED can generate responses that are consistent with the semantics of a given z.", "To answer this, we use a pre-trained recognition network R to check if a generated response carries the attributes in the given action.", "We generate dialog responses on a test dataset viax = F(z ∼ π(c), c) with greedy RNN decoding.", "The generated responses are passed into the R and we measure attribute accuracy by countingx as correct if z = argmax k q R (k|x).", "Table 6 : Results for attribute 
accuracy with and without attribute loss.", "responses are highly consistent with the given latent actions.", "Also, latent actions from DI-VAE achieve higher attribute accuracy than the ones from DI-VST, because z from auto-encoding is explicitly trained for x reconstruction.", "Adding L attr is effective in forcing the decoder to take z into account during its generation, which helps the most in more challenging open-domain chatting data, e.g.", "SW and DD.", "The accuracy of ST-ED on SW is worse than the other two datasets.", "The reason is that SW contains many short utterances that can be either a continuation of the same speaker or a new turn from the other speaker, whereas the responses in the other two domains are always followed by a different speaker.", "The more complex context pattern in SW may require special treatment.", "We leave it for future work.", "The second experiment checks if the policy network π is able to predict the right latent action given just the dialog context.", "We report both accuracy, i.e.", "argmax k q R (k|x) = argmax k p π (k |c) and perplexity of p π (z|c).", "The perplexity measure is more useful for open domain dialogs because decision-making in complex dialogs is often one-to-many given a similar context .", "Table 7 shows the prediction scores on SMD AE-ED 3.045 (51.5% sys 52.4% usr 50.5%) ST-ED 1.695 (75.5% sys 82.1% usr 69.2%) DD SW AE-ED 4.47 (35.8%) 4.46 (31.68%) ST-ED 3.89 (47.5%) 3.68 (33.2%) Table 7 : Performance of policy network.", "L attr is included in training.", "the three dialog datasets.", "These scores provide useful insights to understand the complexity of a dialog dataset.", "For example, accuracy on opendomain chatting is harder than the task-oriented SMD data.", "Also, it is intuitive that predicting system actions is easier than predicting user actions on SMD.", "Also, in general the prediction scores for ST-ED are higher the ones for AE-ED.", "The reason is related to our previous discussion about the granularity of the latent actions.", "Since latent actions from DI-VST mainly model the the type of utterances used in certain types of context, it is easier for the policy network to predict latent actions from DI-VST.", "Therefore, choosing the type of latent actions is a design choice and depends on the type of interpretability that is needed.", "We finish with an example generated from the two variants of LAED on SMD as shown in Table 8 .", "Given a dialog context, our systems are able to output a probability distribution over different latent actions that have interpretable meaning along with their natural language realizations.", "c usr: Where does my friend live?", "Model Action Generated Responses AE-ED give loc info -Tom is at 753 University Ave, and a road block.", "p(z|c)=0.34 -Comfort Inn is at 7 miles away.", "give user info -Your home address is 5671 barringer street.", "p(z|c)=0.22 -Your home is at 10 ames street.", "ST-ED give loc info -Jill's house is 8 miles away at 347 Alta Mesa Ave. p(z|c)=0.93 -Jill lives at 347 Alta Mesa Ave. 
Table 8: Interpretable dialog generation on SMD with top probable latent actions.", "AE-ED predicts more fine-grained but more error-prone actions.", "Conclusion and Future Work This paper presents a novel unsupervised framework that enables the discovery of discrete latent actions and interpretable dialog response generation.", "Our main contributions reside in the two sentence representation models DI-VAE and DI-VST, and their integration with the encoder-decoder models.", "Experiments show the proposed methods outperform strong baselines in learning discrete latent variables and showcase the effectiveness of interpretable dialog response generation.", "Our findings also suggest promising future research directions, including learning better context-based latent actions and using reinforcement learning to adapt policy networks.", "We believe that this work is an important step forward towards creating generative dialog models that can not only generalize to large unlabelled datasets in complex domains but also be explainable to human users." ] }
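Several equations embedded in the paper text above arrive garbled by PDF extraction. Restated in standard notation, following the definitions given there (with q(z) = E_x[q_R(z|x)] and q'(z) the mini-batch average), the ELBO decomposition, the DI-VAE objective, and Batch Prior Regularization read, as best as they can be reconstructed:

\mathcal{L}_{\mathrm{VAE}} = \mathbb{E}_{q_R(z|x)p(x)}\big[\log p_G(x|z)\big] \;-\; I(Z,X) \;-\; \mathrm{KL}\big(q(z)\,\|\,p(z)\big)

\mathcal{L}_{\mathrm{DI\text{-}VAE}} = \mathcal{L}_{\mathrm{VAE}} + I(Z,X) = \mathbb{E}_{q_R(z|x)p(x)}\big[\log p_G(x|z)\big] \;-\; \mathrm{KL}\big(q(z)\,\|\,p(z)\big)

\mathrm{KL}\big(q'(z)\,\|\,p(z)\big) = \sum_{k=1}^{K} q'(z{=}k)\,\log\frac{q'(z{=}k)}{p(z{=}k)}, \qquad q'(z) = \frac{1}{N}\sum_{n=1}^{N} q_R(z\,|\,x_n)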
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.1.1", "3.1.2", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Proposed Methods", "Learning Sentence Representations from Auto-Encoding", "Anti-Information Limitation of ELBO", "VAE with Information Maximization and Batch Prior Regularization", "Learning Sentence Representations from the Context", "Integration with Encoder Decoders", "Relationship with Conditional VAEs", "Experiments and Results", "Comparing Discrete Sentence Representation Models", "Interpreting Latent Actions", "Dialog Response Generation with Latent Actions", "Conclusion and Future Work" ] }
"GEM-SciDuet-train-3#paper-964#slide-8"
"The Effectiveness of Batch Prior Regularization BPR"
"DAE: Autoencoder + Gumbel Softmax DVAE: Discrete VAE with ELBO loss DI-VAE: Discrete VAE + BPR DST: Skip thought + Gumbel Softmax DI-VST: Variational Skip Thought + BPR Table 1: Results for various discrete sentence representations."
"DAE: Autoencoder + Gumbel Softmax DVAE: Discrete VAE with ELBO loss DI-VAE: Discrete VAE + BPR DST: Skip thought + Gumbel Softmax DI-VST: Variational Skip Thought + BPR Table 1: Results for various discrete sentence representations."
[]
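The homogeneity analysis reported in the row above (Table 3) can be reproduced with a standard clustering metric once each utterance is mapped to its greedy latent action. A sketch using scikit-learn's homogeneity_score; the label lists here are placeholders, not data from the paper.

from sklearn.metrics import homogeneity_score

# true_labels: human annotations per utterance (e.g. dialog acts or emotions)
# latent_actions: greedy codes such as "1-4-2" produced by the recognition network q_R
true_labels = ["inform", "question", "inform", "goodbye"]
latent_actions = ["1-4-2", "0-3-1", "1-4-2", "2-0-0"]

# Homogeneity is 1.0 when every latent action contains members of only one class.
print(homogeneity_score(true_labels, latent_actions))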
"GEM-SciDuet-train-3#paper-964#slide-9"
"964"
"Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation"
"The encoder-decoder dialog model is one of the most prominent methods used to build dialog systems in complex domains. Yet it is limited because it cannot output interpretable actions as in traditional systems, which hinders humans from understanding its generation process. We present an unsupervised discrete sentence representation learning method that can integrate with any existing encoderdecoder dialog models for interpretable response generation. Building upon variational autoencoders (VAEs), we present two novel models, DI-VAE and DI-VST that improve VAEs and can discover interpretable semantics via either auto encoding or context predicting. Our methods have been validated on real-world dialog datasets to discover semantic representations and enhance encoder-decoder models with interpretable generation. 1"
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230 ], "paper_content_text": [ "Introduction Classic dialog systems rely on developing a meaning representation to represent the utterances from both the machine and human users (Larsson and Traum, 2000; Bohus et al., 2007) .", "The dialog manager of a conventional dialog system outputs the system's next action in a semantic frame that usually contains hand-crafted dialog acts and slot values (Williams and Young, 2007) .", "Then a natural language generation module is used to generate the system's output in natural language based on the given semantic frame.", "This approach suffers from generalization to more complex domains because it soon become intractable to man-ually design a frame representation that covers all of the fine-grained system actions.", "The recently developed neural dialog system is one of the most prominent frameworks for developing dialog agents in complex domains.", "The basic model is based on encoder-decoder networks and can learn to generate system responses without the need for hand-crafted meaning representations and other annotations.", "Although generative dialog models have advanced rapidly (Serban et al., 2016; Li et al., 2016; , they cannot provide interpretable system actions as in the conventional dialog systems.", "This inability limits the effectiveness of generative dialog models in several ways.", "First, having interpretable system actions enables human to understand the behavior of a dialog system and better interpret the system intentions.", "Also, modeling the high-level decision-making policy in dialogs enables useful generalization and dataefficient domain adaptation (Gašić et al., 2010) .", "Therefore, the motivation of this paper is to develop an unsupervised neural recognition model that can discover interpretable meaning representations of utterances (denoted as latent actions) as a set of discrete latent variables from a large unlabelled corpus as shown in Figure 1 .", "The discovered meaning representations will then be integrated with encoder decoder networks to achieve interpretable dialog generation while preserving all the merit of neural dialog systems.", "We focus on learning discrete latent representations instead of dense continuous ones because discrete variables are easier to interpret (van den Oord et al., 2017) and can naturally correspond to categories in natural languages, e.g.", "topics, dialog acts and etc.", "Despite the difficulty of learning discrete latent variables in neural 
networks, the recently proposed Gumbel-Softmax offers a reliable way to back-propagate through discrete variables (Maddison et al., 2016; Jang et al., 2016) .", "However, we found a simple combination of sentence variational autoencoders (VAEs) (Bowman et al., 2015) and Gumbel-Softmax fails to learn meaningful discrete representations.", "We then highlight the anti-information limitation of the evidence lowerbound objective (ELBO) in VAEs and improve it by proposing Discrete Information VAE (DI-VAE) that maximizes the mutual information between data and latent actions.", "We further enrich the learning signals beyond auto encoding by extending Skip Thought (Kiros et al., 2015) to Discrete Information Variational Skip Thought (DI-VST) that learns sentence-level distributional semantics.", "Finally, an integration mechanism is presented that combines the learned latent actions with encoder decoder models.", "The proposed systems are tested on several realworld dialog datasets.", "Experiments show that the proposed methods significantly outperform the standard VAEs and can discover meaningful latent actions from these datasets.", "Also, experiments confirm the effectiveness of the proposed integration mechanism and show that the learned latent actions can control the sentence-level attributes of the generated responses and provide humaninterpretable meaning representations.", "Related Work Our work is closely related to research in latent variable dialog models.", "The majority of models are based on Conditional Variational Autoencoders (CVAEs) (Serban et al., 2016; Cao and Clark, 2017) with continuous latent variables to better model the response distribution and encourage diverse responses.", "further introduced dialog acts to guide the learning of the CVAEs.", "Discrete latent variables have also been used for task-oriented dialog systems (Wen et al., 2017) , where the latent space is used to represent intention.", "The second line of related work is enriching the dialog context encoder with more fine-grained information than the dialog history.", "Li et al., (2016) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "The proposed method also relates to sentence representation learning using neural networks.", "Most work learns continuous distributed representations of sentences from various learning signals (Hill et al., 2016) , e.g.", "the Skip Thought learns representations by predicting the previous and next sentences (Kiros et al., 2015) .", "Another area of work focused on learning regularized continuous sentence representation, which enables sentence generation by sampling the latent space (Bowman et al., 2015; Kim et al., 2017) .", "There is less work on discrete sentence representations due to the difficulty of passing gradients through discrete outputs.", "The recently developed Gumbel Softmax (Jang et al., 2016; Maddison et al., 2016) and vector quantization (van den Oord et al., 2017) enable us to train discrete variables.", "Notably, discrete variable models have been proposed to discover document topics (Miao et al., 2016) and semi-supervised sequence transaction (Zhou and Neubig, 2017) Our work differs from these as follows: (1) we focus on learning interpretable variables; in prior research the semantics of 
latent variables are mostly ignored in the dialog generation setting.", "(2) we improve the learning objective for discrete VAEs and overcome the well-known posterior collapsing issue (Bowman et al., 2015; Chen et al., 2016) .", "(3) we focus on unsupervised learning of salient features in dialog responses instead of hand-crafted features.", "Proposed Methods Our formulation contains three random variables: the dialog context c, the response x and the latent action z.", "The context often contains the discourse history in the format of a list of utterances.", "The response is an utterance that contains a list of word tokens.", "The latent action is a set of discrete variables that define high-level attributes of x.", "Before introducing the proposed framework, we first identify two key properties that are essential in or-der for z to be interpretable: 1. z should capture salient sentence-level features about the response x.", "2.", "The meaning of latent symbols z should be independent of the context c. The first property is self-evident.", "The second can be explained: assume z contains a single discrete variable with K classes.", "Since the context c can be any dialog history, if the meaning of each class changes given a different context, then it is difficult to extract an intuitive interpretation by only looking at all responses with class k ∈ [1, K].", "Therefore, the second property looks for latent actions that have context-independent semantics so that each assignment of z conveys the same meaning in all dialog contexts.", "With the above definition of interpretable latent actions, we first introduce a recognition network R : q R (z|x) and a generation network G. The role of R is to map an sentence to the latent variable z and the generator G defines the learning signals that will be used to train z's representation.", "Notably, our recognition network R does not depend on the context c as has been the case in prior work (Serban et al., 2016) .", "The motivation of this design is to encourage z to capture context-independent semantics, which are further elaborated in Section 3.4.", "With the z learned by R and G, we then introduce an encoder decoder network F : p F (x|z, c) and and a policy network π : p π (z|c).", "At test time, given a context c, the policy network and encoder decoder will work together to generate the next response viã x = p F (x|z ∼ p π (z|c), c).", "In short, R, G, F and π are the four components that comprise our proposed framework.", "The next section will first focus on developing R and G for learning interpretable z and then will move on to integrating R with F and π in Section 3.3.", "Learning Sentence Representations from Auto-Encoding Our baseline model is a sentence VAE with discrete latent space.", "We use an RNN as the recognition network to encode the response x.", "Its last hidden state h R |x| is used to represent x.", "We define z to be a set of K-way categorical variables z = {z 1 ...z m ...z M }, where M is the number of variables.", "For each z m , its posterior distribution is defined as q R (z m |x) = Softmax(W q h R |x| + b q ).", "During training, we use the Gumbel-Softmax trick to sample from this distribution and obtain lowvariance gradients.", "To map the latent samples to the initial state of the decoder RNN, we define {e 1 ...e m ...e M } where e m ∈ R K×D and D is the generator cell size.", "Thus the initial state of the generator is: h G 0 = M m=1 e m (z m ).", "Finally, the generator RNN is used to reconstruct the response given h 
G 0 .", "VAEs is trained to maxmimize the evidence lowerbound objective (ELBO) (Kingma and Welling, 2013) .", "For simplicity, later discussion drops the subscript m in z m and assumes a single latent z.", "Since each z m is independent, we can easily extend the results below to multiple variables.", "Anti-Information Limitation of ELBO It is well-known that sentence VAEs are hard to train because of the posterior collapse issue.", "Many empirical solutions have been proposed: weakening the decoder, adding auxiliary loss etc.", "(Bowman et al., 2015; Chen et al., 2016; .", "We argue that the posterior collapse issue lies in ELBO and we offer a novel decomposition to understand its behavior.", "First, instead of writing ELBO for a single data point, we write it as an expectation over a dataset: L VAE = E x [E q R (z|x) [log p G (x|z)] − KL(q R (z|x) p(z))] (1) We can expand the KL term as Eq.", "2 (derivations in Appendix A.1) and rewrite ELBO as: E x [KL(q R (z|x) p(z))] = (2) I(Z, X)+KL(q(z) p(z)) L VAE = E q(z|x)p(x) [log p(x|z)] − I(Z, X) − KL(q(z) p(z)) (3) where q(z) = E x [q R (z|x)] and I(Z, X) is the mutual information between Z and X.", "This expansion shows that the KL term in ELBO is trying to reduce the mutual information between latent variables and the input data, which explains why VAEs often ignore the latent variable, especially when equipped with powerful decoders.", "VAE with Information Maximization and Batch Prior Regularization A natural solution to correct the anti-information issue in Eq.", "3 is to maximize both the data likeli-hood lowerbound and the mutual information between z and the input data: L VAE + I(Z, X) = E q R (z|x)p(x) [log p G (x|z)] − KL(q(z) p(z)) (4) Therefore, jointly optimizing ELBO and mutual information simply cancels out the informationdiscouraging term.", "Also, we can still sample from the prior distribution for generation because of KL(q(z) p(z)).", "Eq.", "4 is similar to the objectives used in adversarial autoencoders (Makhzani et al., 2015; Kim et al., 2017) .", "Our derivation provides a theoretical justification to their superior performance.", "Notably, Eq.", "4 arrives at the same loss function proposed in infoVAE (Zhao S et al., 2017) .", "However, our derivation is different, offering a new way to understand ELBO behavior.", "The remaining challenge is how to minimize KL(q(z) p(z)), since q(z) is an expectation over q(z|x).", "When z is continuous, prior work has used adversarial training (Makhzani et al., 2015; Kim et al., 2017) or Maximum Mean Discrepancy (MMD) (Zhao S et al., 2017) to regularize q(z).", "It turns out that minimizing KL(q(z) p(z)) for discrete z is much simpler than its continuous counterparts.", "Let x n be a sample from a batch of N data points.", "Then we have: q(z) ≈ 1 N N n=1 q(z|x n ) = q (z) (5) where q (z) is a mixture of softmax from the posteriors q(z|x n ) of each x n .", "We can approximate KL(q(z) p(z)) by: KL(q (z) p(z)) = K k=1 q (z = k) log q (z = k) p(z = k) (6) We refer to Eq.", "6 as Batch Prior Regularization (BPR).", "When N approaches infinity, q (z) approaches the true marginal distribution of q(z).", "In practice, we only need to use the data from each mini-batch assuming that the mini batches are randomized.", "Last, BPR is fundamentally different from multiplying a coefficient < 1 to anneal the KL term in VAE (Bowman et al., 2015) .", "This is because BPR is a non-linear operation log sum exp.", "For later discussion, we denote our discrete infoVAE with BPR as DI-VAE.", "Learning 
Learning Sentence Representations from the Context: DI-VAE infers sentence representations by reconstructing the input sentence. Past research in distributional semantics has suggested that the meaning of language can be inferred from the adjacent context (Harris, 1954; Hill et al., 2016). The distributional hypothesis is especially applicable to dialog, since utterance meaning is highly contextual; for example, the dialog act is a well-known utterance feature that depends on the dialog state (Austin, 1975; Stolcke et al., 2000). Thus, we introduce a second type of latent action based on sentence-level distributional semantics. Skip thought (ST) is a powerful sentence representation that captures contextual information (Kiros et al., 2015): ST uses an RNN to encode a sentence and then uses the resulting sentence representation to predict the previous and next sentences. Inspired by ST's robust performance across multiple tasks (Hill et al., 2016), we adapt our DI-VAE into Discrete Information Variational Skip Thought (DI-VST) to learn discrete latent actions that model the distributional semantics of sentences. We use the same recognition network as in DI-VAE to output z's posterior distribution q_R(z|x). Given samples from q_R(z|x), two RNN generators are used to predict the previous sentence x_p and the next sentence x_n. The learning objective is to maximize:

L_DI-VST = E_{q_R(z|x)p(x)}[ log( p^n_G(x_n|z) p^p_G(x_p|z) ) ] - KL(q(z) || p(z))   (7)

Integration with Encoder Decoders: We now describe how to integrate a given q_R(z|x) with an encoder-decoder and a policy network. Let the dialog context c be a sequence of utterances. A dialog context encoder network encodes the dialog context into a distributed representation h^e = F_e(c), and the decoder F_d generates the response x̃ = F_d(h^e, z) using samples from q_R(z|x). Meanwhile, we train π to predict the aggregated posterior E_{p(x|c)}[q_R(z|x)] from c via maximum likelihood training. This model is referred to as the Latent Action Encoder Decoder (LAED), with the following objective:

L_LAED(θ_F, θ_π) = E_{q_R(z|x)p(x,c)}[ log p_π(z|c) + log p_F(x|z, c) ]   (8)
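One LAED update under Eq. 8 could be sketched roughly as below; this is only an illustration under our own naming (recognition, ctx_encoder, decoder_nll, and policy are assumed interfaces, not the authors' code), with q_R held fixed as described above.

```python
import torch
import torch.nn.functional as F

def laed_step(recognition, ctx_encoder, decoder_nll, policy, context, response):
    """One LAED update: maximize log p_pi(z|c) + log p_F(x|z, c) (Eq. 8)."""
    with torch.no_grad():                        # q_R(z|x) is trained separately and kept fixed
        log_q, z, _ = recognition(response)      # (B, M, K) log posteriors and sampled one-hot z

    h_e = ctx_encoder(context)                   # h_e = F_e(c)

    # Policy network pi: match the recognition posterior via maximum likelihood
    # (cross-entropy between q_R(z|x) and p_pi(z|c) for each latent variable).
    log_pi = F.log_softmax(policy(h_e).view_as(log_q), dim=-1)   # policy outputs M*K logits
    policy_loss = -(log_q.exp() * log_pi).sum(dim=(1, 2)).mean()

    # Decoder F_d: reconstruct the response conditioned on (z, c).
    recon_loss = decoder_nll(response, z, h_e)   # -log p_F(x|z, c)

    return recon_loss + policy_loss
```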
However, simply augmenting the decoder inputs with the latent action does not guarantee that the generated response exhibits the attributes of the given action. Thus we adopt the controllable text generation framework (Hu et al., 2017) and introduce L_Attr, which reuses the same recognition network q_R(z|x) as a fixed discriminator to penalize the decoder if its generated responses do not reflect the attributes in z:

L_Attr(θ_F) = E_{q_R(z|x)p(c,x)}[ log q_R(z | F(c, z)) ]   (9)

Since it is not possible to propagate gradients through the discrete outputs of F_d at each word step, we use a deterministic continuous relaxation (Hu et al., 2017) by replacing the output of F_d with the probability of each word. Let o_t be the normalized output probability at step t ∈ [1, |x|]; the input to q_R at time t is then the sum of word embeddings weighted by o_t, i.e. h^R_t = RNN(h^R_{t-1}, E o_t), where E is the word embedding matrix. Finally, this loss is combined with L_LAED and a hyperparameter λ to obtain the Attribute Forcing LAED:

L_attrLAED = L_LAED + λ L_Attr   (10)
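To make the word-step relaxation in L_Attr concrete, a rough sketch might look like the following. All names are illustrative; it assumes a batch-first GRU inside the recognition network and that the decoder exposes its per-step softmax outputs o_t, and the recognition parameters are kept frozen elsewhere.

```python
import torch
import torch.nn.functional as F

def attribute_forcing_loss(recognizer_rnn, recognizer_head, word_emb, soft_outputs, z_target):
    """L_Attr with the deterministic continuous relaxation (Eq. 9).

    soft_outputs: (B, T, V) normalized word probabilities o_t from the decoder F_d.
    z_target:     (B, M) indices of the latent action the decoder was conditioned on.
    """
    # Soft word embeddings: E * o_t, i.e. embeddings weighted by the output probabilities.
    soft_emb = soft_outputs @ word_emb.weight            # (B, T, D_emb)
    h, _ = recognizer_rnn(soft_emb)                      # run q_R's encoder on the soft sequence
    logits = recognizer_head(h[:, -1])                   # (B, M*K)
    M = z_target.size(1)
    K = logits.size(-1) // M
    log_q = F.log_softmax(logits.view(-1, M, K), dim=-1)
    # Penalize the decoder when q_R does not recover the conditioning attributes z.
    # (q_R acts as a fixed discriminator; its parameters should not be updated here.)
    return F.nll_loss(log_q.reshape(-1, K), z_target.reshape(-1))

# Combined objective (Eq. 10): L_attrLAED = L_LAED + lambda * L_Attr
```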
Relationship with Conditional VAEs: It is not hard to see that L_LAED is closely related to the objective of CVAEs for dialog generation (Serban et al., 2016), which is:

L_CVAE = E_q[log p(x|z, c)] - KL(q(z|x, c) || p(z|c))   (11)

Despite their similarities, we highlight the key differences that prohibit CVAE from achieving interpretable dialog generation. First, L_CVAE encourages I(x, z|c) (Agakov, 2005), which learns z that captures context-dependent semantics. More intuitively, z in a CVAE is trained to generate x via p(x|z, c), so the meaning of the learned z can only be interpreted along with its context c; this violates our goal of learning context-independent semantics. Our methods learn q_R(z|x) that depends only on x and train q_R separately, to ensure that the semantics of z are interpretable standalone.

Experiments and Results: The proposed methods are evaluated on four datasets. The first corpus is the Penn Treebank (PTB) (Marcus et al., 1993), used to evaluate sentence VAEs (Bowman et al., 2015); we used the version pre-processed by Mikolov (Mikolov et al., 2010). The second dataset is the Stanford Multi-Domain Dialog (SMD) dataset, which contains 3,031 human-Woz, task-oriented dialogs collected from 3 different domains (navigation, weather and scheduling) (Eric and Manning, 2017). The other two datasets are chat-oriented: Daily Dialog (DD) and Switchboard (SW) (Godfrey and Holliman, 1997), which are used to test whether our methods generalize beyond task-oriented dialogs to open-domain chatting. DD contains 13,118 multi-turn human-human dialogs annotated with dialog acts and emotions (Li et al., 2017). SW has 2,400 human-human telephone conversations annotated with topics and dialog acts; SW is a more challenging dataset because it is transcribed from speech, which contains complex spoken-language phenomena, e.g. hesitation and self-repair.

Comparing Discrete Sentence Representation Models: The first experiment used PTB and DD to evaluate the performance of the proposed methods in learning discrete sentence representations. We implemented DI-VAE and DI-VST using GRU-RNNs (Chung et al., 2014) and trained them using Adam (Kingma and Ba, 2014). Besides the proposed methods, the following baselines are compared. Unregularized models: removing the KL(q||p) term from DI-VAE and DI-VST leads to a simple discrete autoencoder (DAE) and a discrete skip thought model (DST) with stochastic discrete hidden units. ELBO models: the basic discrete sentence VAE (DVAE) and variational skip thought (DVST), which optimize the ELBO with the regularization term KL(q(z|x) || p(z)). We found that standard training failed to learn informative latent actions for either DVAE or DVST because of posterior collapse; therefore, KL-annealing (Bowman et al., 2015) and a bag-of-word loss are used to force these two models to learn meaningful representations. We also include the results for a VAE with continuous latent variables reported on the same PTB data. Additionally, we report the perplexity from a standard GRU-RNN language model (Zaremba et al., 2014).

The evaluation metrics include reconstruction perplexity (PPL), KL(q(z) || p(z)), and the mutual information between the input data and the latent variables, I(x, z). Intuitively, a good model should achieve low perplexity and KL distance while simultaneously achieving high I(x, z). The discrete latent space for all models is M = 20 and K = 10; the mini-batch size is 30.

Table 1 shows that all models achieve better perplexity than an RNNLM, which shows that they manage to learn a meaningful q(z|x). First, among the auto-encoding models, DI-VAE achieves the best results on all metrics compared to the other methods. We found that DAEs quickly learn to reconstruct the input, but they are prone to overfitting during training, which leads to lower performance on the test data compared to DI-VAE. Also, since there is no regularization term in the latent space, q(z) is very different from p(z), which prohibits us from generating sentences from the latent space. In fact, DI-VAE enjoys the same linear interpolation properties reported in (Bowman et al., 2015) (see Appendix A.2). As for DVAE, it achieves zero I(x, z) in standard training and only manages to learn some information when trained with KL-annealing and the bag-of-word loss; our methods, on the other hand, achieve robust performance without the need for additional processing. Similarly, the proposed DI-VST achieves the lowest PPL and a similar KL compared to the strongly regularized DVST. Interestingly, although DST achieves the highest I(x, z), its PPL is not further improved. These results confirm the effectiveness of the proposed BPR in terms of regularizing q(z) while learning a meaningful posterior q(z|x).

To understand BPR's sensitivity to the batch size N, a follow-up experiment varied the batch size from 2 to 60 (if N = 1, DI-VAE is equivalent to DVAE). Figure 2 shows that as N increases, perplexity and I(x, z) monotonically improve, while KL(q || p) only increases from 0 to 0.159; after N > 30, the performance plateaus. Therefore, using the mini-batch is an efficient trade-off between q(z) estimation and computation speed.

The last experiment in this section investigates the relation between representation learning and the dimension of the latent space. We set a fixed budget by restricting the maximum number of modes to be about 1000, i.e. K^M ≈ 1000, then vary the latent space size and report the same evaluation metrics. Table 2 shows that models with multiple small latent variables perform significantly better than those with few large latent variables.

K, M      K^M    PPL     KL(q || p)   I(x, z)
1000, 1   1000   75.61   0.032        0.335
10, 3     1000   71.42   0.071        0.607
4, 5      1024   68.43   0.088        0.809
Table 2: DI-VAE on PTB with different latent dimensions under the same budget.

Interpreting Latent Actions: The next question is how to interpret the meaning of the learned latent action symbols. To achieve this, the latent action of an utterance x_n is obtained from a greedy mapping: a_n = argmax_k q_R(z = k|x_n). We set M = 3 and K = 5, so that there are at most 125 different latent actions, and each x_n can now be represented by a_1-a_2-a_3, e.g. "How are you?" → 1-4-2.
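The greedy mapping from an utterance to its latent action code can be sketched as follows, reusing the recognition module from the earlier sketch; the index base of the printed labels is arbitrary and chosen here only for illustration.

```python
import torch

@torch.no_grad()
def action_code(recognition, utterance_ids):
    """Greedy latent action: a_m = argmax_k q_R(z_m = k | x), rendered as 'a1-a2-a3'."""
    log_q, _, _ = recognition(utterance_ids.unsqueeze(0))   # (1, M, K)
    codes = log_q.argmax(dim=-1).squeeze(0)                 # (M,) one index per latent variable
    return "-".join(str(int(k) + 1) for k in codes)         # e.g. "1-4-2" (1-based labels assumed)
```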
Assuming that we have access to data manually clustered according to certain classes (e.g. dialog acts), it is unfair to use classic cluster measures (Vinh et al., 2010) to evaluate the clusters formed by the latent actions. This is because the uniform prior p(z) evenly distributes the data over all possible latent actions, so frequent classes are expected to be assigned to several latent actions. Thus we use the homogeneity metric (Rosenberg and Hirschberg, 2007), which measures whether each latent action contains only members of a single class. We tested this on SW and DD, which contain human-annotated features, and report the latent actions' homogeneity with respect to these features in Table 3:

          SW             DD
          Act    Topic   Act    Emotion
DI-VAE    0.48   0.08    0.18   0.09
DI-VST    0.33   0.13    0.34   0.12
Table 3: Homogeneity of latent actions with respect to the human-annotated features.

On DD, the results show that DI-VST works better than DI-VAE in terms of creating actions that are more coherent for emotion and dialog acts. The results on SW are interesting, since DI-VST performs worse on dialog acts than DI-VAE. One reason is that the dialog acts in SW are more fine-grained (42 acts) than those in DD (5 acts), so that distinguishing utterances based on the words in x is more important than the information in the neighbouring utterances.

We then apply the proposed methods to SMD, which has no manual annotation and contains task-oriented dialogs. Two experts are shown 5 randomly selected utterances from each latent action and are asked to give an action name that describes as many of the utterances as possible. An Amazon Mechanical Turk study is then conducted to evaluate whether other utterances from the same latent action match these titles: 5 workers see the action name and a different group of 5 utterances from that latent action, and are asked to select all utterances that belong to the given action, which tests the homogeneity of the utterances falling in the same cluster. Negative samples are included to prevent random selection. Table 4 shows that both methods work well and that DI-VST achieves better homogeneity than DI-VAE. Since DI-VAE is trained to reconstruct its input and DI-VST is trained to model the context, they group utterances in different ways; for example, DI-VST would group "Can I get a restaurant" and "I am looking for a restaurant" into one action.

Dialog Response Generation with Latent Actions: Finally, we implement an LAED as follows. The encoder is a hierarchical recurrent encoder (Serban et al., 2016) with bi-directional GRU-RNNs as the utterance encoder and a second GRU-RNN as the discourse encoder. The discourse encoder outputs its last hidden state h^e_|x|. The decoder is another GRU-RNN, and its initial state is obtained by h^d_0 = h^e_|x| + Σ_{m=1}^{M} e_m(z_m), where z comes from the recognition network of the proposed methods. The policy network π is a 2-layer multi-layer perceptron (MLP) that models p_π(z | h^e_|x|). We use up to the previous 10 utterances as the dialog context, and denote the LAED using DI-VAE latent actions as AE-ED and the one using DI-VST as ST-ED.

First, we need to confirm whether an LAED can generate responses that are consistent with the semantics of a given z. To answer this, we use a pre-trained recognition network R to check whether a generated response carries the attributes of the given action. We generate dialog responses on a test dataset via x̃ = F(z ~ π(c), c) with greedy RNN decoding. The generated responses are passed into R, and we measure attribute accuracy by counting x̃ as correct if z = argmax_k q_R(k|x̃).
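The attribute-accuracy check just described might be computed as follows; this is a sketch with assumed interfaces (policy_sample draws z ~ p_π(z|c), generate performs greedy decoding, and recognition is the pre-trained q_R returning long tensors of shape (M,)).

```python
import torch

@torch.no_grad()
def attribute_accuracy(recognition, policy_sample, generate, contexts):
    """Fraction of generated responses x~ for which argmax_k q_R(k|x~) recovers the sampled z."""
    correct, total = 0, 0
    for c in contexts:
        z = policy_sample(c)                            # (M,) sampled latent action z ~ p_pi(z|c)
        x_gen = generate(c, z)                          # greedy RNN decoding of x~ = F(z, c)
        log_q, _, _ = recognition(x_gen.unsqueeze(0))   # q_R(z|x~): (1, M, K)
        z_hat = log_q.argmax(dim=-1).squeeze(0)         # (M,)
        correct += int(torch.equal(z_hat, z))
        total += 1
    return correct / max(total, 1)
```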
Table 6: Results for attribute accuracy with and without attribute loss. Responses are highly consistent with the given latent actions. Also, latent actions from DI-VAE achieve higher attribute accuracy than those from DI-VST, because z from auto-encoding is explicitly trained for x reconstruction. Adding L_attr is effective in forcing the decoder to take z into account during generation, which helps the most on the more challenging open-domain chatting data, e.g. SW and DD. The accuracy of ST-ED on SW is worse than on the other two datasets. The reason is that SW contains many short utterances that can be either a continuation by the same speaker or a new turn from the other speaker, whereas the responses in the other two domains are always followed by a different speaker. The more complex context pattern in SW may require special treatment; we leave this for future work.

The second experiment checks whether the policy network π is able to predict the right latent action given just the dialog context. We report both accuracy, i.e. argmax_k q_R(k|x) = argmax_{k'} p_π(k'|c), and the perplexity of p_π(z|c). The perplexity measure is more useful for open-domain dialogs, because decision-making in complex dialogs is often one-to-many given a similar context. Table 7 shows the prediction scores on the three dialog datasets:

         SMD                                  DD             SW
AE-ED    3.045 (51.5% sys 52.4% usr 50.5%)    4.47 (35.8%)   4.46 (31.68%)
ST-ED    1.695 (75.5% sys 82.1% usr 69.2%)    3.89 (47.5%)   3.68 (33.2%)
Table 7: Performance of the policy network. L_attr is included in training.

These scores provide useful insight into the complexity of a dialog dataset. For example, accuracy on open-domain chatting is harder than on the task-oriented SMD data, and it is intuitive that predicting system actions is easier than predicting user actions on SMD. Also, in general the prediction scores for ST-ED are higher than the ones for AE-ED. The reason is related to our previous discussion about the granularity of the latent actions: since latent actions from DI-VST mainly model the type of utterances used in certain types of context, it is easier for the policy network to predict latent actions from DI-VST. Therefore, choosing the type of latent actions is a design choice that depends on the type of interpretability that is needed. We finish with an example generated from the two variants of LAED on SMD, as shown in Table 8. Given a dialog context, our systems are able to output a probability distribution over different latent actions that have interpretable meaning, along with their natural language realizations.

c: usr: Where does my friend live?
Model    Action                        Generated Responses
AE-ED    give loc info, p(z|c)=0.34    - Tom is at 753 University Ave, and a road block. - Comfort Inn is at 7 miles away.
         give user info, p(z|c)=0.22   - Your home address is 5671 barringer street. - Your home is at 10 ames street.
ST-ED    give l