{ "name": "SciDuet-ACL-Train", "data": [ { "slides": { "0": { "title": "Syntax in Statistical Machine Translation", "text": [ "Translation Model vs Language Model", "Syntactic LM Decoder Integration Results Questions?" ], "page_nums": [ 1 ], "images": [] }, "1": { "title": "Syntax in the Language Model", "text": [ "Translation Model vs Language Model", "Syntactic LM Decoder Integration Results Questions?", "An incremental syntactic language model uses an incremental statistical parser to define a probability model over the dependency or phrase structure of target language strings.", "Phrase-based decoder produces translation in the target language incrementally from left-to-right", "Phrase-based syntactic LM parser should parse target language hypotheses incrementally from left-to-right", "Galley & Manning (2009) obtained 1-best dependency parse using a greedy dependency parser", "We use a standard HHMM parser (Schuler et al., 2010)", "Engineering simple model, equivalent to PPDA", "Algorithmic elegant fit into phrase-based decoder", "Cognitive nice psycholinguistic properties" ], "page_nums": [ 3, 4, 5, 6, 7, 8, 9 ], "images": [] }, "2": { "title": "Incremental Parsing", "text": [ "DT NN VP PP", "The president VB NP IN NP", "meets DT NN on Friday NP/NN NN VP/NP DT board", "Motivation Decoder Integration Results Questions?", "the president VB NP VP/NN", "Transform right-expanding sequences of constituents into left-expanding sequences of incomplete constituents", "NP VP S/NP NP", "the board DT president VB the", "Incomplete constituents can be processed incrementally using a", "Hierarchical Hidden Markov Model parser. (Murphy & Paskin, 2001; Schuler et al." ], "page_nums": [ 10, 11, 12, 13, 14 ], "images": [ "figure/image/954-Figure2-1.png" ] }, "3": { "title": "Incremental Parsing using HHMM Schuler et al 2010", "text": [ "Hierarchical Hidden Markov Model", "Circles denote hidden random variables", "Edges denote conditional dependencies", "NP/NN NN VP/NP DT board", "Isomorphic Tree Path DT president VB the", "Shaded circles denote observed values", "Motivation Decoder Integration Results Questions?", "Analogous to Maximally Incremental", "e1 =The e2 =president e3 =meets e4 =the e5 =board e =on e7 =Friday", "Push-Down Automata NP VP/NN NN" ], "page_nums": [ 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33 ], "images": [] }, "4": { "title": "Phrase Based Translation", "text": [ "Der Prasident trifft am Freitag den Vorstand", "The president meets the board on Friday", "s president president Friday", "s that that president Obama met", "AAAAAA EAAAAA EEAAAA EEIAAA", "s s the the president president meets", "Stack Stack Stack Stack", "Motivation Syntactic LM Results Questions?" ], "page_nums": [ 34, 35 ], "images": [] }, "5": { "title": "Phrase Based Translation with Syntactic LM", "text": [ "represents parses of the partial translation at node h in stack t", "s president president Friday", "s that that president Obama met", "AAAAAA EAAAAA EEAAAA EEIAAA", "s s the the president president meets", "Stack Stack Stack Stack", "Motivation Syntactic LM Results Questions?" 
], "page_nums": [ 36, 37 ], "images": [ "figure/image/954-Figure1-1.png" ] }, "6": { "title": "Integrate Parser into Phrase based Decoder", "text": [ "EAAAAA EEAAAA EEIAAA EEIIAA", "s the the president president meets meets the", "Motivation Syntactic LM Results Questions?", "president meets the board" ], "page_nums": [ 38, 39 ], "images": [ "figure/image/954-Figure6-1.png" ] }, "7": { "title": "Direct Maximum Entropy Model of Translation", "text": [ "e argmax exp jhj(e,f)", "h Distortion model n-gram LM", "Set of j feature weights", "Syntactic LM P( th)", "AAAAAA EAAAAA EEAAAA EEIAAA", "s s the the president president meets", "Stack Stack Stack Stack", "Motivation Syntactic LM Results Questions?" ], "page_nums": [ 40 ], "images": [ "figure/image/954-Figure1-1.png" ] }, "8": { "title": "Does an Incremental Syntactic LM Help Translation", "text": [ "but will it make my BLEU score go up?", "Motivation Syntactic LM Decoder Integration Questions?", "Moses with LM(s) BLEU", "Using n-gram LM only", "Using n-gram LM + Syntactic LM", "NIST OpenMT 2008 Urdu-English data set", "Moses with standard phrase-based translation model", "Tuning and testing restricted to sentences 20 words long", "Results reported on devtest set", "n-gram LM is WSJ 5-gram LM" ], "page_nums": [ 41, 45, 46, 47 ], "images": [] }, "9": { "title": "Perplexity Results", "text": [ "Language models trained on WSJ Treebank corpus", "Motivation Syntactic LM Decoder Integration Questions?", "WSJ 5-gram + WSJ SynLM", "...and n-gram model for larger English Gigaword corpus.", "Gigaword 5-gram + WSJ SynLM" ], "page_nums": [ 42, 43, 44 ], "images": [] }, "10": { "title": "Summary", "text": [ "Straightforward general framework for incorporating any", "Incremental Syntactic LM into Phrase-based Translation", "We used an Incremental HHMM Parser as Syntactic LM", "Syntactic LM shows substantial decrease in perplexity on out-of-domain data over n-gram LM when trained on same data", "Syntactic LM interpolated with n-gram LM shows even greater decrease in perplexity on both in-domain and out-of-domain data, even when n-gram LM is trained on substantially larger corpus", "+1 BLEU on Urdu-English task with Syntactic LM", "All code is open source and integrated into Moses", "Motivation Syntactic LM Decoder Integration Results" ], "page_nums": [ 48 ], "images": [] }, "11": { "title": "This looks a lot like CCG", "text": [ "Our parser performs some CCG-style operations:", "Type raising in conjunction with forward function composition", "Motivation Syntactic LM Decoder Integration Results" ], "page_nums": [ 50 ], "images": [] }, "12": { "title": "Why not just use CCG", "text": [ "No probablistic version of incremental CCG", "Our parser is constrained", "(we dont have backward composition)", "We do use those components of CCG (forward function application and forward function composition) which are useful for probabilistic incremental parsing", "Motivation Syntactic LM Decoder Integration Results" ], "page_nums": [ 51 ], "images": [] }, "13": { "title": "Speed Results", "text": [ "Mean per-sentence decoding time", "Parser beam sizes are indicated for the syntactic LM", "Parser runs in linear time, but were parsing all paths through the Moses lattice as they are generated by the decoder", "More informed pruning, but slower decoding", "Motivation Syntactic LM Decoder Integration Results" ], "page_nums": [ 52 ], "images": [] }, "14": { "title": "Phrase Based Translation w ntactic", "text": [ "e string of n target language words e1. . 
.en", "et the first t words in e, where tn", "t set of all incremental parses of et", "def t subset of parses t that remain after parser pruning", "e argmax P( e) t1 t", "Motivation Syntactic LM Decoder Integration Results" ], "page_nums": [ 53 ], "images": [] } }, "paper_title": "Incremental Syntactic Language Models for Phrase-based Translation", "paper_id": "954", "paper": { "title": "Incremental Syntactic Language Models for Phrase-based Translation", "abstract": "This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machine translation through incremental syntactic parsing. Bottom-up and topdown parsers typically require a completed string as input. This requirement makes it difficult to incorporate them into phrase-based translation, which generates partial hypothesized translations from left-to-right. Incremental syntactic language models score sentences in a similar left-to-right fashion, and are therefore a good mechanism for incorporating syntax into phrase-based translation. We give a formal definition of one such lineartime syntactic language model, detail its relation to phrase-based decoding, and integrate the model with the Moses phrase-based translation system. We present empirical results on a constrained Urdu-English translation task that demonstrate a significant BLEU score improvement and a large decrease in perplexity.", "text": [ { "id": 0, "string": "Introduction Early work in statistical machine translation viewed translation as a noisy channel process comprised of a translation model, which functioned to posit adequate translations of source language words, and a target language model, which guided the fluency of generated target language strings (Brown et al., This research was supported by NSF CAREER/PECASE award 0447685, NSF grant IIS-0713448, and the European Commission through the EuroMatrixPlus project." }, { "id": 1, "string": "Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the sponsors or the United States Air Force." }, { "id": 2, "string": "Cleared for public release (Case Number 88ABW-2010-6489) on 10 Dec 2010." }, { "id": 3, "string": "1990)." }, { "id": 4, "string": "Drawing on earlier successes in speech recognition, research in statistical machine translation has effectively used n-gram word sequence models as language models." }, { "id": 5, "string": "Modern phrase-based translation using large scale n-gram language models generally performs well in terms of lexical choice, but still often produces ungrammatical output." }, { "id": 6, "string": "Syntactic parsing may help produce more grammatical output by better modeling structural relationships and long-distance dependencies." }, { "id": 7, "string": "Bottom-up and top-down parsers typically require a completed string as input; this requirement makes it difficult to incorporate these parsers into phrase-based translation, which generates hypothesized translations incrementally, from left-to-right." }, { "id": 8, "string": "1 As a workaround, parsers can rerank the translated output of translation systems (Och et al., 2004) ." }, { "id": 9, "string": "On the other hand, incremental parsers (Roark, 2001; Henderson, 2004; Schuler et al., 2010; Huang and Sagae, 2010) process input in a straightforward left-to-right manner." 
}, { "id": 10, "string": "We observe that incremental parsers, used as structured language models, provide an appropriate algorithmic match to incremental phrase-based decoding." }, { "id": 11, "string": "We directly integrate incremental syntactic parsing into phrase-based translation." }, { "id": 12, "string": "This approach re-exerts the role of the language model as a mechanism for encouraging syntactically fluent translations." }, { "id": 13, "string": "The contributions of this work are as follows: • A novel method for integrating syntactic LMs into phrase-based translation ( §3) • A formal definition of an incremental parser for statistical MT that can run in linear-time ( §4) • Integration with Moses ( §5) along with empirical results for perplexity and significant translation score improvement on a constrained Urdu-English task ( §6) Related Work Neither phrase-based (Koehn et al., 2003) nor hierarchical phrase-based translation (Chiang, 2005) take explicit advantage of the syntactic structure of either source or target language." }, { "id": 14, "string": "The translation models in these techniques define phrases as contiguous word sequences (with gaps allowed in the case of hierarchical phrases) which may or may not correspond to any linguistic constituent." }, { "id": 15, "string": "Early work in statistical phrase-based translation considered whether restricting translation models to use only syntactically well-formed constituents might improve translation quality (Koehn et al., 2003) but found such restrictions failed to improve translation quality." }, { "id": 16, "string": "Significant research has examined the extent to which syntax can be usefully incorporated into statistical tree-based translation models: string-to-tree (Yamada and Knight, 2001; Gildea, 2003; Imamura et al., 2004; Galley et al., 2004; Graehl and Knight, 2004; Melamed, 2004; Galley et al., 2006; Huang et al., 2006; Shen et al., 2008) , tree-to-string (Liu et al., 2006; Liu et al., 2007; Huang and Mi, 2010) , tree-to-tree (Abeillé et al., 1990; Shieber and Schabes, 1990; Poutsma, 1998; Eisner, 2003; Shieber, 2004; Cowan et al., 2006; Nesson et al., 2006; Zhang et al., 2007; DeNeefe et al., 2007; DeNeefe and Knight, 2009; Liu et al., 2009; Chiang, 2010) , and treelet (Ding and Palmer, 2005; Quirk et al., 2005) techniques use syntactic information to inform the translation model." }, { "id": 17, "string": "Recent work has shown that parsing-based machine translation using syntax-augmented (Zollmann and Venugopal, 2006) hierarchical translation grammars with rich nonterminal sets can demonstrate substantial gains over hierarchical grammars for certain language pairs (Baker et al., 2009) ." }, { "id": 18, "string": "In contrast to the above tree-based translation models, our approach maintains a standard (non-syntactic) phrase-based translation model." }, { "id": 19, "string": "Instead, we incorporate syntax into the language model." }, { "id": 20, "string": "Traditional approaches to language models in speech recognition and statistical machine translation focus on the use of n-grams, which provide a simple finite-state model approximation of the target language." }, { "id": 21, "string": "Chelba and Jelinek (1998) proposed that syntactic structure could be used as an alternative technique in language modeling." }, { "id": 22, "string": "This insight has been explored in the context of speech recognition (Chelba and Jelinek, 2000; Collins et al., 2005) ." }, { "id": 23, "string": "Hassan et al." 
}, { "id": 24, "string": "(2007) and use supertag n-gram LMs." }, { "id": 25, "string": "Syntactic language models have also been explored with tree-based translation models." }, { "id": 26, "string": "Charniak et al." }, { "id": 27, "string": "(2003) use syntactic language models to rescore the output of a tree-based translation system." }, { "id": 28, "string": "Post and Gildea (2008) investigate the integration of parsers as syntactic language models during binary bracketing transduction translation (Wu, 1997) ; under these conditions, both syntactic phrase-structure and dependency parsing language models were found to improve oracle-best translations, but did not improve actual translation results." }, { "id": 29, "string": "Post and Gildea (2009) use tree substitution grammar parsing for language modeling, but do not use this language model in a translation system." }, { "id": 30, "string": "Our work, in contrast to the above approaches, explores the use of incremental syntactic language models in conjunction with phrase-based translation models." }, { "id": 31, "string": "Our syntactic language model fits into the family of linear-time dynamic programming parsers described in (Huang and Sagae, 2010) ." }, { "id": 32, "string": "Like (Galley and Manning, 2009 ) our work implements an incremental syntactic language model; our approach differs by calculating syntactic LM scores over all available phrase-structure parses at each hypothesis instead of the 1-best dependency parse." }, { "id": 33, "string": "The syntax-driven reordering model of Ge (2010) uses syntax-driven features to influence word order within standard phrase-based translation." }, { "id": 34, "string": "The syntactic cohesion features of Cherry (2008) encourages the use of syntactically well-formed translation phrases." }, { "id": 35, "string": "These approaches are fully orthogonal to our proposed incremental syntactic language model, and could be applied in concert with our work." }, { "id": 36, "string": "Parser as Syntactic Language Model in Phrase-Based Translation Parsing is the task of selecting the representationτ (typically a tree) that best models the structure of s τ 0 s thẽ τ 1 1 s that τ 1 2 s president τ 1 3 ." }, { "id": 37, "string": "." }, { "id": 38, "string": "." }, { "id": 39, "string": "the president τ 2 1 that president τ 2 2 president Fridaỹ τ 2 3 ." }, { "id": 40, "string": "." }, { "id": 41, "string": "." }, { "id": 42, "string": "president meets τ 3 1 Obama met τ 3 2 ." }, { "id": 43, "string": "." }, { "id": 44, "string": "." }, { "id": 45, "string": "Figure 1 : Partial decoding lattice for standard phrase-based decoding stack algorithm translating the German sentence Der Präsident trifft am Freitag den Vorstand." }, { "id": 46, "string": "Each node h in decoding stack t represents the application of a translation option, and includes the source sentence coverage vector, target language ngram state, and syntactic language model stateτ t h ." }, { "id": 47, "string": "Hypothesis combination is also shown, indicating where lattice paths with identical n-gram histories converge." }, { "id": 48, "string": "We use the English translation The president meets the board on Friday as a running example throughout all Figures." }, { "id": 49, "string": "sentence e, out of all such possible representations τ ." }, { "id": 50, "string": "This set of representations may be all phrase structure trees or all dependency trees allowed by the parsing model." 
}, { "id": 51, "string": "Typically, treeτ is taken to be: τ = argmax τ P(τ | e) (1) We define a syntactic language model P(e) based on the total probability mass over all possible trees for string e. This is shown in Equation 2 and decomposed in Equation 3." }, { "id": 52, "string": "P(e) = τ ∈τ P(τ, e) (2) P(e) = τ ∈τ P(e | τ )P(τ ) (3) Incremental syntactic language model An incremental parser processes each token of input sequentially from the beginning of a sentence to the end, rather than processing input in a top-down (Earley, 1968) or bottom-up (Cocke and Schwartz, 1970; Kasami, 1965; Younger, 1967) fashion." }, { "id": 53, "string": "After processing the tth token in string e, an incremental parser has some internal representation of possible hypothesized (incomplete) trees, τ t ." }, { "id": 54, "string": "The syntactic language model probability of a partial sentence e 1 ...e t is defined: P(e 1 ...e t ) = τ ∈τt P(e 1 ...e t | τ )P(τ ) (4) In practice, a parser may constrain the set of trees under consideration toτ t , that subset of analyses or partial analyses that remains after any pruning is performed." }, { "id": 55, "string": "An incremental syntactic language model can then be defined by a probability mass function (Equation 5) and a transition function δ (Equation 6 )." }, { "id": 56, "string": "The role of δ is explained in §3.3 below." }, { "id": 57, "string": "Any parser which implements these two functions can serve as a syntactic language model." }, { "id": 58, "string": "P(e 1 ...e t ) ≈ P(τ t ) = τ ∈τ t P(e 1 ...e t | τ )P(τ ) (5) δ(e t ,τ t−1 ) →τ t (6) 3.2 Decoding in phrase-based translation Given a source language input sentence f , a trained source-to-target translation model, and a target language model, the task of translation is to find the maximally probable translationê using a linear combination of j feature functions h weighted according to tuned parameters λ (Och and Ney, 2002) ." }, { "id": 59, "string": "e = argmax e exp( j λ j h j (e, f )) (7) Phrase-based translation constructs a set of translation options -hypothesized translations for contiguous portions of the source sentence -from a trained phrase table, then incrementally constructs a lattice of partial target translations (Koehn, 2010) ." }, { "id": 60, "string": "To prune the search space, lattice nodes are organized into beam stacks (Jelinek, 1969) according to the number of source words translated." }, { "id": 61, "string": "An n-gram language model history is also maintained at each node in the translation lattice." }, { "id": 62, "string": "The search space is further trimmed with hypothesis recombination, which collapses lattice nodes that share a common coverage vector and n-gram state." }, { "id": 63, "string": "Incorporating a Syntactic Language Model Phrase-based translation produces target language words in an incremental left-to-right fashion, generating words at the beginning of a translation first and words at the end of a translation last." }, { "id": 64, "string": "Similarly, incremental parsers process sentences in an incremental fashion, analyzing words at the beginning of a sentence first and words at the end of a sentence last." }, { "id": 65, "string": "As such, an incremental parser with transition function δ can be incorporated into the phrase-based decoding process in a straightforward manner." }, { "id": 66, "string": "Each node in the translation lattice is augmented with a syntactic language model stateτ t ." 
}, { "id": 67, "string": "The hypothesis at the root of the translation lattice is initialized withτ 0 , representing the internal state of the incremental parser before any input words are processed." }, { "id": 68, "string": "The phrase-based translation decoding process adds nodes to the lattice; each new node contains one or more target language words." }, { "id": 69, "string": "Each node contains a backpointer to its parent node, in whichτ t−1 is stored." }, { "id": 70, "string": "Given a new target language word e t andτ t−1 , the incremental parser's transition function δ calculatesτ t ." }, { "id": 71, "string": "Figure 1 a sample phrase-based decoding lattice where each translation lattice node is augmented with syntactic language model stateτ t ." }, { "id": 72, "string": "In phrase-based translation, many translation lattice nodes represent multi-word target language phrases." }, { "id": 73, "string": "For such translation lattice nodes, δ will be called once for each newly hypothesized target language word in the node." }, { "id": 74, "string": "Only the final syntactic language model state in such sequences need be stored in the translation lattice node." }, { "id": 75, "string": "Incremental Bounded-Memory Parsing with a Time Series Model Having defined the framework by which any incremental parser may be incorporated into phrasebased translation, we now formally define a specific incremental parser for use in our experiments." }, { "id": 76, "string": "The parser must process target language words incrementally as the phrase-based decoder adds hypotheses to the translation lattice." }, { "id": 77, "string": "To facilitate this incremental processing, ordinary phrase-structure trees can be transformed into right-corner recur- r 1 t−1 r 2 t−1 r 3 t−1 s 1 t−1 s 2 t−1 s 3 t−1 r 1 t r 2 t r 3 t s 1 t s 2 t s 3 t e t−1 e t ." }, { "id": 78, "string": "." }, { "id": 79, "string": "." }, { "id": 80, "string": "." }, { "id": 81, "string": "." }, { "id": 82, "string": "." }, { "id": 83, "string": "." }, { "id": 84, "string": "." }, { "id": 85, "string": "." }, { "id": 86, "string": "." }, { "id": 87, "string": "." }, { "id": 88, "string": "." }, { "id": 89, "string": "Figure 4 : Graphical representation of the dependency structure in a standard Hierarchic Hidden Markov Model with D = 3 hidden levels that can be used to parse syntax." }, { "id": 90, "string": "Circles denote random variables, and edges denote conditional dependencies." }, { "id": 91, "string": "Shaded circles denote variables with observed values." }, { "id": 92, "string": "sive phrase structure trees using the tree transforms in Schuler et al." }, { "id": 93, "string": "(2010) ." }, { "id": 94, "string": "Constituent nonterminals in right-corner transformed trees take the form of incomplete constituents c η /c ηι consisting of an 'active' constituent c η lacking an 'awaited' constituent c ηι yet to come, similar to non-constituent categories in a Combinatory Categorial Grammar (Ades and Steedman, 1982; Steedman, 2000) ." }, { "id": 95, "string": "As an example, the parser might consider VP/NN as a possible category for input \"meets the\"." }, { "id": 96, "string": "A sample phrase structure tree is shown before and after the right-corner transform in Figures 2 and 3." }, { "id": 97, "string": "Our parser operates over a right-corner transformed probabilistic context-free grammar (PCFG)." }, { "id": 98, "string": "Parsing runs in linear time on the length of the input." 
}, { "id": 99, "string": "This model of incremental parsing is implemented as a Hierarchical Hidden Markov Model (HHMM) (Murphy and Paskin, 2001) , and is equivalent to a probabilistic pushdown automaton with a bounded pushdown store." }, { "id": 100, "string": "The parser runs in O(n) time, where n is the number of words in the input." }, { "id": 101, "string": "This model is shown graphically in Figure 4 and formally defined in §4.1 below." }, { "id": 102, "string": "The incremental parser assigns a probability (Eq." }, { "id": 103, "string": "5) for a partial target language hypothesis, using a bounded store of incomplete constituents c η /c ηι ." }, { "id": 104, "string": "The phrase-based decoder uses this probability value as the syntactic language model feature score." }, { "id": 105, "string": "Formal Parsing Model: Scoring Partial Translation Hypotheses This model is essentially an extension of an HHMM, which obtains a most likely sequence of hidden store states,ŝ 1..D 1..T , of some length T and some maximum depth D, given a sequence of observed tokens (e.g." }, { "id": 106, "string": "generated target language words), e 1..T , using HHMM state transition model θ A and observation symbol model θ B (Rabiner, 1990) : s 1..D 1..T def = argmax s 1..D 1..T T t=1 P θ A (s 1..D t | s 1..D t−1 )·P θ B (e t | s 1..D t ) (8) The HHMM parser is equivalent to a probabilistic pushdown automaton with a bounded pushdown store." }, { "id": 107, "string": "The model generates each successive store (using store model θ S ) only after considering whether each nested sequence of incomplete constituents has completed and reduced (using reduction model θ R ): P θ A (s 1..D t | s 1..D t−1 ) def = r 1 t ..r D t D d=1 P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) · P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) (9) Store elements are defined to contain only the active (c η ) and awaited (c ηι ) constituent categories necessary to compute an incomplete constituent probability: s d t def = c η , c ηι (10) Reduction states are defined to contain only the complete constituent category c r d t necessary to compute an inside likelihood probability, as well as a flag f r d t indicating whether a reduction has taken place (to end a sequence of incomplete constituents): r d t def = c r d t , f r d t (11) The model probabilities for these store elements and reduction states can then be defined (from Murphy and Paskin 2001) to expand a new incomplete constituent after a reduction has taken place (f r d t = 1; using depth-specific store state expansion model θ S-E,d ), transition along a sequence of store elements if no reduction has taken place (f r d t = 0; using depthspecific store state transition model θ S-T,d ): 2 P θ S (s d t | r d+1 t r d t s d t−1 s d−1 t ) def =    if f r d+1 t = 1, f r d t = 1 : P θ S-E,d (s d t | s d−1 t ) if f r d+1 t = 1, f r d t = 0 : P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) if f r d+1 t = 0, f r d t = 0 : s d t = s d t−1 (12) and possibly reduce a store element (terminate a sequence) if the store state below it has reduced (f r d+1 t = 1; using depth-specific reduction model θ R,d ): P θ R (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if f r d+1 t = 0 : r d t = r ⊥ if f r d+1 t = 1 : P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) (13) where r ⊥ is a null state resulting from the failure of an incomplete constituent to complete, and constants are defined for the edge conditions of s 0 t and r D+1 t ." }, { "id": 108, "string": "Figure 5 illustrates this model in action." 
}, { "id": 109, "string": "These pushdown automaton operations are then refined for right-corner parsing (Schuler, 2009) , distinguishing active transitions (model θ S-T-A,d , in which an incomplete constituent is completed, but not reduced, and then immediately expanded to a 2 An indicator function · is used to denote deterministic probabilities: φ = 1 if φ is true, 0 otherwise." }, { "id": 110, "string": "new incomplete constituent in the same store element) from awaited transitions (model θ S-T-W,d , which involve no completion): P θ S-T,d (s d t | r d+1 t r d t s d t−1 s d−1 t ) def = if r d t = r ⊥ : P θ S-T-A,d (s d t | s d−1 t r d t ) if r d t = r ⊥ : P θ S-T-W,d (s d t | s d t−1 r d+1 t ) (14) P θ R,d (r d t | r d+1 t s d t−1 s d−1 t−1 ) def = if c r d+1 t = x t : r d t = r ⊥ if c r d+1 t = x t : P θ R-R,d (r d t | s d t−1 s d−1 t−1 ) (15) These HHMM right-corner parsing operations are then defined in terms of branch-and depth-specific PCFG probabilities θ G-R,d and θ G-L,d : 3 3 Model probabilities are also defined in terms of leftprogeny probability distribution E θ G-RL * ,d which is itself defined in terms of PCFG probabilities: are calculated by transition function δ (Eq." }, { "id": 111, "string": "6, as defined by §4.1), but are not stored." }, { "id": 112, "string": "Observed random variables (e 3 ..e 5 ) are shown for clarity, but are not explicitly stored in any syntactic language model state." }, { "id": 113, "string": "E θ G-RL * ,d (cη 0 → cη0 ...) def = c η1 P θ G-R,d (cη → cη0 cη1) (16) E θ G-RL * ,d (cη k → c η0 k 0 ...) def = c η0 k E θ G-RL * ,d (cη k−1 → c η0 k ...) · c η0 k 1 P θ G-L,d (c η0 k → c η0 k 0 c η0 k 1 ) (17) E θ G-RL * ,d (cη * → cηι ...) def = ∞ k=0 E θ G-RL * ,d (cη k → cηι ...) (18) E θ G-RL * ,d (cη + → cηι ...) def = E θ G-RL * ,d (cη * → cηι ...) − E θ G-RL * ,d (cη 0 → cηι ...) (19) coder's hypothesis stacks." }, { "id": 114, "string": "Figure 1 illustrates an excerpt from a standard phrase-based translation lattice." }, { "id": 115, "string": "Within each decoder stack t, each hypothesis h is augmented with a syntactic language model stateτ t h ." }, { "id": 116, "string": "Each syntactic language model state is a random variable store, containing a slice of random variables from the HHMM." }, { "id": 117, "string": "Specifically,τ t h contains those random variables s 1..D t that maintain distributions over syntactic elements." }, { "id": 118, "string": "By maintaining these syntactic random variable stores, each hypothesis has access to the current language model probability for the partial translation ending at that hypothesis, as calculated by an incremental syntactic language model defined by the HHMM." }, { "id": 119, "string": "Specifically, the random variable store at hypothesis h provides P(τ t h ) = P(e h 1..t , s 1..D 1..t ), where e h 1..t is the sequence of words in a partial hypothesis ending at h which contains t target words, and where there are D syntactic random variables in each random variable store (Eq." }, { "id": 120, "string": "5)." }, { "id": 121, "string": "During stack decoding, the phrase-based decoder progressively constructs new hypotheses by extending existing hypotheses." }, { "id": 122, "string": "New hypotheses are placed in appropriate hypothesis stacks." }, { "id": 123, "string": "In the simplest case, a new hypothesis extends an existing hypothesis by exactly one target word." 
}, { "id": 124, "string": "As the new hypothesis is constructed by extending an existing stack element, the store and reduction state random variables are processed, along with the newly hypothesized word." }, { "id": 125, "string": "This results in a new store of syntactic random variables (Eq." }, { "id": 126, "string": "6) that are associated with the new stack element." }, { "id": 127, "string": "When a new hypothesis extends an existing hypothesis by more than one word, this process is first carried out for the first new word in the hypothesis." }, { "id": 128, "string": "It is then repeated for the remaining words in the hypothesis extension." }, { "id": 129, "string": "Once the final word in the hypothesis has been processed, the resulting random variable store is associated with that hypothesis." }, { "id": 130, "string": "The random variable stores created for the non-final words in the extending hypothesis are discarded, and need not be explicitly retained." }, { "id": 131, "string": "Figure 6 illustrates this process, showing how a syntactic language model stateτ 5 1 in a phrase-based decoding lattice is obtained from a previous syntactic language model stateτ 3 1 (from Figure 1) by parsing the target language words from a phrasebased translation option." }, { "id": 132, "string": "Our syntactic language model is integrated into the current version of Moses ." }, { "id": 133, "string": "Results As an initial measure to compare language models, average per-word perplexity, ppl, reports how surprised a model is by test data." }, { "id": 134, "string": "Equation 25 calculates ppl using log base b for a test set of T tokens." }, { "id": 135, "string": "ppl = b −log b P(e 1 ...e T ) T (25) We trained the syntactic language model from §4 (HHMM) and an interpolated n-gram language model with modified Kneser-Ney smoothing (Chen and Goodman, 1998); models were trained on sections 2-21 of the Wall Street Journal (WSJ) treebank (Marcus et al., 1993 HHMM and n-gram LMs (Figure 7) ." }, { "id": 136, "string": "To show the effects of training an LM on more data, we also report perplexity results on the 5-gram LM trained for the GALE Arabic-English task using the English Gigaword corpus." }, { "id": 137, "string": "In all cases, including the HHMM significantly reduces perplexity." }, { "id": 138, "string": "We trained a phrase-based translation model on the full NIST Open MT08 Urdu-English translation model using the full training data." }, { "id": 139, "string": "We trained the HHMM and n-gram LMs on the WSJ data in order to make them as similar as possible." }, { "id": 140, "string": "During tuning, Moses was first configured to use just the n-gram LM, then configured to use both the n-gram LM and the syntactic HHMM LM." }, { "id": 141, "string": "MERT consistently assigned positive weight to the syntactic LM feature, typically slightly less than the n-gram LM weight." }, { "id": 142, "string": "In our integration with Moses, incorporating a syntactic language model dramatically slows the decoding process." }, { "id": 143, "string": "Figure 8 illustrates a slowdown around three orders of magnitude." }, { "id": 144, "string": "Although speed remains roughly linear to the size of the source sentence (ruling out exponential behavior), it is with an extremely large constant time factor." 
}, { "id": 145, "string": "Due to this slowdown, we tuned the parameters using a constrained dev set (only sentences with 1-20 words), and tested using a constrained devtest set (only sentences with 1-20 words)." }, { "id": 146, "string": "Figure 9 shows a statistically significant improvement to the BLEU score when using the HHMM and the n-gram LMs together on this reduced test set." }, { "id": 147, "string": "Discussion This paper argues that incremental syntactic languages models are a straightforward and appro-Moses LM(s) BLEU n-gram only 18.78 HHMM + n-gram 19.78 Figure 9 : Results for Ur-En devtest (only sentences with 1-20 words) with HHMM beam size of 2000 and Moses settings of distortion limit 10, stack size 200, and ttable limit 20. priate algorithmic fit for incorporating syntax into phrase-based statistical machine translation, since both process sentences in an incremental left-toright fashion." }, { "id": 148, "string": "This means incremental syntactic LM scores can be calculated during the decoding process, rather than waiting until a complete sentence is posited, which is typically necessary in top-down or bottom-up parsing." }, { "id": 149, "string": "We provided a rigorous formal definition of incremental syntactic languages models, and detailed what steps are necessary to incorporate such LMs into phrase-based decoding." }, { "id": 150, "string": "We integrated an incremental syntactic language model into Moses." }, { "id": 151, "string": "The translation quality significantly improved on a constrained task, and the perplexity improvements suggest that interpolating between n-gram and syntactic LMs may hold promise on larger data sets." }, { "id": 152, "string": "The use of very large n-gram language models is typically a key ingredient in the best-performing machine translation systems (Brants et al., 2007) ." }, { "id": 153, "string": "Our n-gram model trained only on WSJ is admittedly small." }, { "id": 154, "string": "Our future work seeks to incorporate largescale n-gram language models in conjunction with incremental syntactic language models." }, { "id": 155, "string": "The added decoding time cost of our syntactic language model is very high." }, { "id": 156, "string": "By increasing the beam size and distortion limit of the baseline system, future work may examine whether a baseline system with comparable runtimes can achieve comparable translation quality." }, { "id": 157, "string": "A more efficient implementation of the HHMM parser would speed decoding and make more extensive and conclusive translation experiments possible." }, { "id": 158, "string": "Various additional improvements could include caching the HHMM LM calculations, and exploiting properties of the right-corner transform that limit the number of decisions between successive time steps." 
} ], "headers": [ { "section": "Introduction", "n": "1", "start": 0, "end": 12 }, { "section": "Related Work", "n": "2", "start": 13, "end": 35 }, { "section": "Parser as Syntactic Language Model in", "n": "3", "start": 36, "end": 51 }, { "section": "Incremental syntactic language model", "n": "3.1", "start": 52, "end": 62 }, { "section": "Incorporating a Syntactic Language Model", "n": "3.3", "start": 63, "end": 74 }, { "section": "Incremental Bounded-Memory Parsing with a Time Series Model", "n": "4", "start": 75, "end": 104 }, { "section": "Formal Parsing Model: Scoring Partial Translation Hypotheses", "n": "4.1", "start": 105, "end": 132 }, { "section": "Results", "n": "6", "start": 133, "end": 146 }, { "section": "Discussion", "n": "7", "start": 147, "end": 158 } ], "figures": [ { "filename": "../figure/image/954-Figure5-1.png", "caption": "Figure 5: Graphical representation of the Hierarchic Hidden Markov Model after parsing input sentence The president meets the board on Friday. The shaded path through the parse lattice illustrates the recognized right-corner tree structure of Figure 3.", "page": 5, "bbox": { "x1": 72.0, "x2": 539.52, "y1": 57.599999999999994, "y2": 209.28 } }, { "filename": "../figure/image/954-Figure6-1.png", "caption": "Figure 6: A hypothesis in the phrase-based decoding lattice from Figure 1 is expanded using translation option the board of source phrase den Vorstand. Syntactic language model state τ̃31 contains random variables s1..33 ; likewise τ̃51 contains s 1..3", "page": 6, "bbox": { "x1": 170.88, "x2": 441.12, "y1": 72.0, "y2": 300.0 } }, { "filename": "../figure/image/954-Figure1-1.png", "caption": "Figure 1: Partial decoding lattice for standard phrase-based decoding stack algorithm translating the German sentence Der Präsident trifft am Freitag den Vorstand. Each node h in decoding stack t represents the application of a translation option, and includes the source sentence coverage vector, target language ngram state, and syntactic language model state τ̃th . Hypothesis combination is also shown, indicating where lattice paths with identical n-gram histories converge. We use the English translation The president meets the board on Friday as a running example throughout all Figures.", "page": 2, "bbox": { "x1": 147.84, "x2": 465.12, "y1": 72.0, "y2": 261.12 } }, { "filename": "../figure/image/954-Figure7-1.png", "caption": "Figure 7: Average per-word perplexity values. HHMM was run with beam size of 2000. Bold indicates best single-model results for LMs trained on WSJ sections 2-21. Best overall in italics.", "page": 7, "bbox": { "x1": 312.96, "x2": 545.28, "y1": 58.559999999999995, "y2": 277.44 } }, { "filename": "../figure/image/954-Figure3-1.png", "caption": "Figure 3: Sample binarized phrase structure tree after application of right-corner transform.", "page": 3, "bbox": { "x1": 316.32, "x2": 535.68, "y1": 208.32, "y2": 334.08 } }, { "filename": "../figure/image/954-Figure2-1.png", "caption": "Figure 2: Sample binarized phrase structure tree.", "page": 3, "bbox": { "x1": 316.32, "x2": 516.48, "y1": 61.919999999999995, "y2": 154.07999999999998 } }, { "filename": "../figure/image/954-Figure8-1.png", "caption": "Figure 8: Mean per-sentence decoding time (in seconds) for dev set using Moses with and without syntactic language model. 
HHMM parser beam sizes are indicated for the syntactic LM.", "page": 8, "bbox": { "x1": 80.64, "x2": 290.4, "y1": 57.599999999999994, "y2": 144.0 } }, { "filename": "../figure/image/954-Figure9-1.png", "caption": "Figure 9: Results for Ur-En devtest (only sentences with 1-20 words) with HHMM beam size of 2000 and Moses settings of distortion limit 10, stack size 200, and ttable limit 20.", "page": 8, "bbox": { "x1": 360.0, "x2": 493.44, "y1": 57.599999999999994, "y2": 102.24 } }, { "filename": "../figure/image/954-Figure4-1.png", "caption": "Figure 4: Graphical representation of the dependency structure in a standard Hierarchic Hidden Markov Model with D = 3 hidden levels that can be used to parse syntax. Circles denote random variables, and edges denote conditional dependencies. Shaded circles denote variables with observed values.", "page": 4, "bbox": { "x1": 81.11999999999999, "x2": 295.2, "y1": 57.599999999999994, "y2": 188.16 } } ] }, "gem_id": "GEM-SciDuet-train-1" }, { "slides": { "0": { "title": "Introduction", "text": [ "I How far can we go with a language agnostic model?", "I We experiment with [Enright and Kondrak, 2007]s parallel document identification", "I We adapt the method to the BUCC-2015 Shared task based on two assumptions:", "Source documents should be paired 1-to-1 with target documents", "We have access to comparable documents in several languages" ], "page_nums": [ 1 ], "images": [] }, "1": { "title": "Method", "text": [ "I Fast parallel document identification [Enright and Kondrak, 2007]", "I Documents = bags of hapax words", "I Words = blank separated strings that are 4+ characters long", "I Given a document in language A, the document in language B that shares the largest", "number of words is considered as parallel", "I Works very well for parallel documents", "I 80% precision on Wikipedia [Patry and Langlais, 2011]", "I We use this approach as baseline for detecting comparable documents" ], "page_nums": [ 3 ], "images": [] }, "2": { "title": "Improvements using 1 to 1 alignments", "text": [ "I In baseline, document pairs are scored independently", "I Multiple source documents are paired to a same target document", "I 60% of English pages are paired with multiple pages in French or German", "I We remove multiply assigned source documents using pigeonhole reasoning", "I From 60% to 11% of multiply assigned source documents" ], "page_nums": [ 4 ], "images": [] }, "3": { "title": "Improvements using cross lingual information", "text": [ "I Simple document weighting function score ties", "I We break the remaining score ties using a third language", "I From 11% to less than 4% of multiply assigned source documents" ], "page_nums": [ 5 ], "images": [] }, "4": { "title": "Experimental settings", "text": [ "I We focus on the French-English and German-English pairs", "I The following measures are considered relevant", "I Mean Average Precision (MAP)" ], "page_nums": [ 7 ], "images": [] }, "5": { "title": "Results FR EN", "text": [ "Strategy MAP Succ. P@5 MAP Succ. P@5" ], "page_nums": [ 8 ], "images": [] }, "6": { "title": "Results DE EN", "text": [ "Strategy MAP Succ. P@5 MAP Succ. 
P@5" ], "page_nums": [ 9 ], "images": [] }, "7": { "title": "Summary", "text": [ "I Unsupervised, hapax words-based method", "I Promising results, about 60% of success using pigeonhole reasoning", "I Using a third language slightly improves the performance", "I Finding the optimal alignment across the all languages", "I Relaxing the hapax-words constraint" ], "page_nums": [ 11 ], "images": [] } }, "paper_title": "LINA: Identifying Comparable Documents from Wikipedia", "paper_id": "957", "paper": { "title": "LINA: Identifying Comparable Documents from Wikipedia", "abstract": "This paper describes the LINA system for the BUCC 2015 shared track. Following (Enright and Kondrak, 2007), our system identify comparable documents by collecting counts of hapax words. We extend this method by filtering out document pairs sharing target documents using pigeonhole reasoning and cross-lingual information.", "text": [ { "id": 0, "string": "Introduction Parallel corpora, that is, collections of documents that are mutual translations, are used in many natural language processing applications, particularly for statistical machine translation." }, { "id": 1, "string": "Building such resources is however exceedingly expensive, requiring highly skilled annotators or professional translators (Preiss, 2012) ." }, { "id": 2, "string": "Comparable corpora, that are sets of texts in two or more languages without being translations of each other, are often considered as a solution for the lack of parallel corpora, and many techniques have been proposed to extract parallel sentences (Munteanu et al., 2004; Abdul-Rauf and Schwenk, 2009; Smith et al., 2010) , or mine word translations (Fung, 1995; Rapp, 1999; Chiao and Zweigenbaum, 2002; Morin et al., 2007; Vulić and Moens, 2012) ." }, { "id": 3, "string": "Identifying comparable resources in a large amount of multilingual data remains a very challenging task." }, { "id": 4, "string": "The purpose of the Building and Using Comparable Corpora (BUCC) 2015 shared task 1 is to provide the first evaluation of existing approaches for identifying comparable resources." }, { "id": 5, "string": "More precisely, given a large collection of Wikipedia pages in several languages, the task is to identify the most similar pages across languages." }, { "id": 6, "string": "1 https://comparable.limsi.fr/bucc2015/ In this paper, we describe the system that we developed for the BUCC 2015 shared track and show that a language agnostic approach can achieve promising results." }, { "id": 7, "string": "Proposed Method The method we propose is based on (Enright and Kondrak, 2007) 's approach to parallel document identification." }, { "id": 8, "string": "Documents are treated as bags of words, in which only blank separated strings that are at least four characters long and that appear only once in the document (hapax words) are indexed." }, { "id": 9, "string": "Given a document in language A, the document in language B that share the largest number of these words is considered as parallel." }, { "id": 10, "string": "Although very simple, this approach was shown to perform very well in detecting parallel documents in Wikipedia (Patry and Langlais, 2011) ." }, { "id": 11, "string": "The reason for this is that most hapax words are in practice proper nouns or numerical entities, which are often cognates." }, { "id": 12, "string": "An example of hapax words extracted from a document is given in Table 1 ." 
}, { "id": 13, "string": "We purposely keep urls and special characters, as these are useful clues for identifying translated Wikipedia pages." }, { "id": 14, "string": "website major gaston links flutist marcel debost states sources college crunelle conservatoire principal rampal united currently recorded chastain competitions music http://www.oberlin.edu/faculty/mdebost/ under international flutists jean-pierre profile moyse french repertoire amazon lives external *http://www.amazon.com/micheldebost/dp/b000s9zsk0 known teaches conservatory school professor studied kathleen orchestre replaced michel Here, we experiment with this approach for detecting near-parallel (comparable) documents." }, { "id": 15, "string": "Following (Patry and Langlais, 2011) , we first search for the potential source-target document pairs." }, { "id": 16, "string": "To do so, we select for each document in the source language, the N = 20 documents in the target language that share the largest number of hapax words (hereafter baseline)." }, { "id": 17, "string": "Scoring each pair of documents independently of other candidate pairs leads to several source documents being paired to a same target document." }, { "id": 18, "string": "As indicated in Table 2 , the percentage of English articles that are paired with multiple source documents is high (57.3% for French and 60.4% for German)." }, { "id": 19, "string": "To address this problem, we remove potential multiple source documents by keeping the document pairs with the highest number of shared words (hereafter pigeonhole)." }, { "id": 20, "string": "This strategy greatly reduces the number of multiply assigned source documents from roughly 60% to 10%." }, { "id": 21, "string": "This in turn removes needlessly paired documents and greatly improves the effectiveness of the method." }, { "id": 22, "string": "In an attempt to break the remaining score ties between document pairs, we further extend our model to exploit cross-lingual information." }, { "id": 23, "string": "When multiple source documents are paired to a given English document with the same score, we use the paired documents in a third language to order them (hereafter cross-lingual)." }, { "id": 24, "string": "Here we make two assumptions that are valid for the BUCC 2015 shared Task: (1) we have access to comparable documents in a third language, and (2) source documents should be paired 1-to-1 with target documents." }, { "id": 25, "string": "Strategy An example of two French documents (doc fr 1 and doc fr 2) being paired to the same English document (doc en ) is given in Figure 1 ." }, { "id": 26, "string": "We use the German document (doc de ) paired with doc en and select the French document that shares the largest number of hapax words, which for this example is doc fr 2." }, { "id": 27, "string": "This strategy further reduces the number of multiply assigned source documents from 10% to less than 4%." }, { "id": 28, "string": "Experiments Experimental settings The BUCC 2015 shared task consists in returning for each Wikipedia page in a source language, up to five ranked suggestions to its linked page in English." }, { "id": 29, "string": "Inter-language links, that is, links from a page in one language to an equivalent page in another language, are used to evaluate the effectiveness of the systems." }, { "id": 30, "string": "Here, we only focus on the French-English and German-English pairs." 
}, { "id": 31, "string": "Following the task guidelines, we use the following evaluation measures investigate the effectiveness of our method: • Mean Average Precision (MAP)." }, { "id": 32, "string": "Average of precisions computed at the point of each correctly paired document in the ranked list of paired documents." }, { "id": 33, "string": "• Success (Succ.)." }, { "id": 34, "string": "Precision computed on the first returned paired document." }, { "id": 35, "string": "• Precision at 5 (P@5)." }, { "id": 36, "string": "Precision computed on the 5 topmost paired documents." }, { "id": 37, "string": "Results Results are presented in Table 3 ." }, { "id": 38, "string": "Overall, we observe that the two strategies that filter out multiply assigned source documents improve the performance of the method." }, { "id": 39, "string": "The largest part of the improvement comes from using pigeonhole reasoning." }, { "id": 40, "string": "The use of cross-lingual information to Table 3 : Performance in terms of MAP, success (Succ.)" }, { "id": 41, "string": "and precision at 5 (P@5) of our model." }, { "id": 42, "string": "break ties between the remaining multiply assigned source documents only gives a small improvement." }, { "id": 43, "string": "We assume that the limited number of potential source-target document pairs we use in our experiments (N = 20) is a reason for this." }, { "id": 44, "string": "Interestingly, results are consistent across languages and datasets (test and train)." }, { "id": 45, "string": "Our best configuration, that is, with pigeonhole and crosslingual, achieves nearly 60% of success for the first returned pair." }, { "id": 46, "string": "Here we show that a simple and straightforward approach that requires no language-specific resources still yields some interesting results." }, { "id": 47, "string": "Discussion In this paper we described the LINA system for the BUCC 2015 shared track." }, { "id": 48, "string": "We proposed to extend (Enright and Kondrak, 2007) 's approach to parallel document identification by filtering out document pairs sharing target documents using pigeonhole reasoning and cross-lingual information." }, { "id": 49, "string": "Experimental results show that our system identifies comparable documents with a precision of about 60%." }, { "id": 50, "string": "Scoring document pairs using the number of shared hapax words was first intended to be a baseline for comparison purposes." }, { "id": 51, "string": "We tried a finer grained scoring approach relying on bilingual dictionaries and information retrieval weighting schemes." }, { "id": 52, "string": "For reasonable computation time, we were unable to include low-frequency words in our system." }, { "id": 53, "string": "Partial results were very low and we are still in the process of investigating the reasons for this." } ], "headers": [ { "section": "Introduction", "n": "1", "start": 0, "end": 6 }, { "section": "Proposed Method", "n": "2", "start": 7, "end": 27 }, { "section": "Experimental settings", "n": "3.1", "start": 28, "end": 36 }, { "section": "Results", "n": "3.2", "start": 37, "end": 46 }, { "section": "Discussion", "n": "4", "start": 47, "end": 53 } ], "figures": [ { "filename": "../figure/image/957-Table3-1.png", "caption": "Table 3: Performance in terms of MAP, success (Succ.) 
and precision at 5 (P@5) of our model.", "page": 2, "bbox": { "x1": 78.72, "x2": 518.4, "y1": 63.36, "y2": 162.23999999999998 } }, { "filename": "../figure/image/957-Table2-1.png", "caption": "Table 2: Percentage of English articles that are paired with multiple French or German articles on the training data.", "page": 1, "bbox": { "x1": 88.8, "x2": 273.12, "y1": 388.32, "y2": 455.03999999999996 } }, { "filename": "../figure/image/957-Figure1-1.png", "caption": "Figure 1: Example of the use of cross-lingual information to order multiple documents that received the same scores. The number of shared words are labelled on the edges.", "page": 1, "bbox": { "x1": 344.64, "x2": 488.15999999999997, "y1": 112.8, "y2": 240.0 } }, { "filename": "../figure/image/957-Table1-1.png", "caption": "Table 1: Example of indexed document as bag of hapax words (en-bacde.txt).", "page": 0, "bbox": { "x1": 307.68, "x2": 525.12, "y1": 572.64, "y2": 729.12 } } ] }, "gem_id": "GEM-SciDuet-train-2" }, { "slides": { "0": { "title": "Sentence Representation in Conversations", "text": [ "Traditional System: hand-crafted semantic frame", "Not scalable to complex domains", "Neural dialog models: continuous hidden vectors", "Directly output system responses in words", "Hard to interpret & control", "[Ritter et al 2011, Vinyals et al" ], "page_nums": [ 1 ], "images": [] }, "1": { "title": "Why discrete sentence representation", "text": [ "1. Inrepteablity & controbility & multimodal distribution", "2. Semi-supervised Learning [Kingma et al 2014 NIPS, Zhou et al 2017 ACL]", "3. Reinforcement Learning [Wen et al 2017]", "X = What time do you want to travel?", "Model Z1Z2Z3 Encoder Decoder" ], "page_nums": [ 2, 3 ], "images": [] }, "2": { "title": "Baseline Discrete Variational Autoencoder VAE", "text": [ "M discrete K-way latent variables z with RNN recognition & generation network.", "Reparametrization using Gumbel-Softmax [Jang et al., 2016; Maddison et al., 2016]", "M discrete K-way latent variables z with GRU encoder & decoder.", "FAIL to learn meaningful z because of posterior collapse (z is constant regardless of x)", "MANY prior solution on continuous VAE, e.g. (not exhaustive), yet still open-ended question", "KL-annealing, decoder word dropout [Bowman et a2015] Bag-of-word loss [Zhao et al 2017] Dilated CNN decoder" ], "page_nums": [ 4, 5 ], "images": [ "figure/image/964-Figure1-1.png" ] }, "3": { "title": "Anti Info Nature in Evidence Lower Bound ELBO", "text": [ "Write ELBO as an expectation over the whole dataset", "Expand the KL term, and plug back in:", "Minimize I(Z, X) to 0", "Posterior collapse with powerful decoder." ], "page_nums": [ 6, 7 ], "images": [] }, "4": { "title": "Discrete Information VAE DI VAE", "text": [ "A natural solution is to maximize both data log likelihood & mutual information.", "Match prior result for continuous VAE. [Mazhazni et al 2015, Kim et al 2017]", "Propose Batch Prior Regularization (BPR) to minimize KL [q(z)||p(z)] for discrete latent", "Fundamentally different from KL-annealing, since" ], "page_nums": [ 8, 9 ], "images": [] }, "5": { "title": "Learning from Context Predicting DI VST", "text": [ "Skip-Thought (ST) is well-known distributional sentence representation [Hill et al 2016]", "The meaning of sentences in dialogs is highly contextual, e.g. dialog acts.", "We extend DI-VAE to Discrete Information Variational Skip Thought (DI-VST)." 
], "page_nums": [ 10 ], "images": [ "figure/image/964-Figure1-1.png" ] }, "6": { "title": "Integration with Encoder Decoders", "text": [ "Policy Network z P(z|c)", "Recognition Network z Generator", "Optional: penalize decoder if generated x not exhibiting z [Hu et al 2017]" ], "page_nums": [ 11, 12 ], "images": [] }, "7": { "title": "Evaluation Datasets", "text": [ "a. Past evaluation dataset for text VAE [Bowman et al 2015]", "Stanford Multi-domain Dialog Dataset (SMD) [Eric and Manning 2017]", "a. 3,031 Human-Woz dialog dataset from 3 domains: weather, navigation & scheduling.", "Switchboard (SW) [Jurafsky et al 1997]", "a. 2,400 human-human telephone non-task-oriented dialogues about a given topic.", "a. 13,188 human-human non-task-oriented dialogs from chat room." ], "page_nums": [ 13 ], "images": [] }, "8": { "title": "The Effectiveness of Batch Prior Regularization BPR", "text": [ "DAE: Autoencoder + Gumbel Softmax", "DVAE: Discrete VAE with ELBO loss", "DI-VAE: Discrete VAE + BPR", "DST: Skip thought + Gumbel Softmax", "DI-VST: Variational Skip Thought + BPR Table 1: Results for various discrete sentence representations." ], "page_nums": [ 14, 15, 16 ], "images": [ "figure/image/964-Table1-1.png" ] }, "9": { "title": "How large should the batch size be", "text": [ "When batch size N = 0", "A large batch size leads to", "more meaningful latent action z", "I(x,z) is not the final goal" ], "page_nums": [ 17 ], "images": [ "figure/image/964-Figure2-1.png" ] }, "11": { "title": "Differences between DI VAE DI VST", "text": [ "DI-VAE cluster utterances based on the", "More error-prone since harder to predict", "Utterance used in the similar context", "Easier to get agreement." ], "page_nums": [ 19 ], "images": [] }, "12": { "title": "Interpreting Latent Actions", "text": [ "M=3, K=5. The trained R will map any utterance into a1 -a2 -a3 . E.g. How are you?", "Automatic Evaluation on SW & DD", "Compare latent actions with", "The higher the more correlated", "Human Evaluation on SMD", "Expert look at 5 examples and give a", "name to the latent actions", "5 workers look at the expert name and", "Select the ones that match the expert" ], "page_nums": [ 20, 21 ], "images": [ "figure/image/964-Table3-1.png", "figure/image/964-Table4-1.png" ] }, "13": { "title": "Predict Latent Action by the Policy Network", "text": [ "Provide useful measure about the", "complexity of the domain.", "Usr > Sys & Chat > Task", "Predict latent actions from DI-VAE is harder", "than the ones from DI-VST", "Two types of latent actions has their own", "pros & cons. Which one is better is" ], "page_nums": [ 22 ], "images": [ "figure/image/964-Table7-1.png" ] }, "14": { "title": "Interpretable Response Generation", "text": [ "Examples of interpretable dialog", "First time, a neural dialog system" ], "page_nums": [ 23 ], "images": [ "figure/image/964-Table8-1.png" ] }, "15": { "title": "Conclusions and Future Work", "text": [ "An analysis of ELBO that explains the posterior collapse issue for sentence VAE.", "DI-VAE and DI-VST for learning rich sentence latent representation and integration", "Learn better context-based latent actions", "Encode human knowledge into the learning process.", "Learn structured latent action space for complex domains.", "Evaluate dialog generation performance in human-study." 
], "page_nums": [ 24 ], "images": [] }, "16": { "title": "Semantic Consistency of the Generation", "text": [ "Use the recognition network as a classifier to", "predict the latent action z based on the", "Report accuracy by comparing z and z.", "DI-VAE has higher consistency than DI-VST", "L helps more in complex domain attr", "L helps DI-VST more than DI-VAE attr", "DI-VST is not directly helping generating x", "ST-ED doesnt work well on SW due to complex", "Spoken language and turn taking" ], "page_nums": [ 26 ], "images": [ "figure/image/964-Table6-1.png" ] }, "17": { "title": "What defines Interpretable Latent Actions", "text": [ "Definition: Latent action is a set of discrete variable that define the high-level attributes of", "an utterance (sentence) X. Latent action is denoted as Z.", "Z should capture salient sentence-level features about the response X.", "The meaning of latent symbols Z should be independent of the context C.", "If meaning of Z depends on C, then often impossible to interpret Z", "Since the possible space of C is huge!", "Conclusion: context-independent semantic ensures each assignment of z has the same", "meaning in all context." ], "page_nums": [ 27 ], "images": [] } }, "paper_title": "Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation", "paper_id": "964", "paper": { "title": "Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation", "abstract": "The encoder-decoder dialog model is one of the most prominent methods used to build dialog systems in complex domains. Yet it is limited because it cannot output interpretable actions as in traditional systems, which hinders humans from understanding its generation process. We present an unsupervised discrete sentence representation learning method that can integrate with any existing encoderdecoder dialog models for interpretable response generation. Building upon variational autoencoders (VAEs), we present two novel models, DI-VAE and DI-VST that improve VAEs and can discover interpretable semantics via either auto encoding or context predicting. Our methods have been validated on real-world dialog datasets to discover semantic representations and enhance encoder-decoder models with interpretable generation. 1", "text": [ { "id": 0, "string": "Introduction Classic dialog systems rely on developing a meaning representation to represent the utterances from both the machine and human users (Larsson and Traum, 2000; Bohus et al., 2007) ." }, { "id": 1, "string": "The dialog manager of a conventional dialog system outputs the system's next action in a semantic frame that usually contains hand-crafted dialog acts and slot values (Williams and Young, 2007) ." }, { "id": 2, "string": "Then a natural language generation module is used to generate the system's output in natural language based on the given semantic frame." }, { "id": 3, "string": "This approach suffers from generalization to more complex domains because it soon become intractable to man-ually design a frame representation that covers all of the fine-grained system actions." }, { "id": 4, "string": "The recently developed neural dialog system is one of the most prominent frameworks for developing dialog agents in complex domains." }, { "id": 5, "string": "The basic model is based on encoder-decoder networks and can learn to generate system responses without the need for hand-crafted meaning representations and other annotations." 
}, { "id": 6, "string": "Although generative dialog models have advanced rapidly (Serban et al., 2016; Li et al., 2016; , they cannot provide interpretable system actions as in the conventional dialog systems." }, { "id": 7, "string": "This inability limits the effectiveness of generative dialog models in several ways." }, { "id": 8, "string": "First, having interpretable system actions enables human to understand the behavior of a dialog system and better interpret the system intentions." }, { "id": 9, "string": "Also, modeling the high-level decision-making policy in dialogs enables useful generalization and dataefficient domain adaptation (Gašić et al., 2010) ." }, { "id": 10, "string": "Therefore, the motivation of this paper is to develop an unsupervised neural recognition model that can discover interpretable meaning representations of utterances (denoted as latent actions) as a set of discrete latent variables from a large unlabelled corpus as shown in Figure 1 ." }, { "id": 11, "string": "The discovered meaning representations will then be integrated with encoder decoder networks to achieve interpretable dialog generation while preserving all the merit of neural dialog systems." }, { "id": 12, "string": "We focus on learning discrete latent representations instead of dense continuous ones because discrete variables are easier to interpret (van den Oord et al., 2017) and can naturally correspond to categories in natural languages, e.g." }, { "id": 13, "string": "topics, dialog acts and etc." }, { "id": 14, "string": "Despite the difficulty of learning discrete latent variables in neural networks, the recently proposed Gumbel-Softmax offers a reliable way to back-propagate through discrete variables (Maddison et al., 2016; Jang et al., 2016) ." }, { "id": 15, "string": "However, we found a simple combination of sentence variational autoencoders (VAEs) (Bowman et al., 2015) and Gumbel-Softmax fails to learn meaningful discrete representations." }, { "id": 16, "string": "We then highlight the anti-information limitation of the evidence lowerbound objective (ELBO) in VAEs and improve it by proposing Discrete Information VAE (DI-VAE) that maximizes the mutual information between data and latent actions." }, { "id": 17, "string": "We further enrich the learning signals beyond auto encoding by extending Skip Thought (Kiros et al., 2015) to Discrete Information Variational Skip Thought (DI-VST) that learns sentence-level distributional semantics." }, { "id": 18, "string": "Finally, an integration mechanism is presented that combines the learned latent actions with encoder decoder models." }, { "id": 19, "string": "The proposed systems are tested on several realworld dialog datasets." }, { "id": 20, "string": "Experiments show that the proposed methods significantly outperform the standard VAEs and can discover meaningful latent actions from these datasets." }, { "id": 21, "string": "Also, experiments confirm the effectiveness of the proposed integration mechanism and show that the learned latent actions can control the sentence-level attributes of the generated responses and provide humaninterpretable meaning representations." }, { "id": 22, "string": "Related Work Our work is closely related to research in latent variable dialog models." 
}, { "id": 23, "string": "The majority of models are based on Conditional Variational Autoencoders (CVAEs) (Serban et al., 2016; Cao and Clark, 2017) with continuous latent variables to better model the response distribution and encourage diverse responses." }, { "id": 24, "string": "further introduced dialog acts to guide the learning of the CVAEs." }, { "id": 25, "string": "Discrete latent variables have also been used for task-oriented dialog systems (Wen et al., 2017) , where the latent space is used to represent intention." }, { "id": 26, "string": "The second line of related work is enriching the dialog context encoder with more fine-grained information than the dialog history." }, { "id": 27, "string": "Li et al., (2016) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings." }, { "id": 28, "string": "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses." }, { "id": 29, "string": "The proposed method also relates to sentence representation learning using neural networks." }, { "id": 30, "string": "Most work learns continuous distributed representations of sentences from various learning signals (Hill et al., 2016) , e.g." }, { "id": 31, "string": "the Skip Thought learns representations by predicting the previous and next sentences (Kiros et al., 2015) ." }, { "id": 32, "string": "Another area of work focused on learning regularized continuous sentence representation, which enables sentence generation by sampling the latent space (Bowman et al., 2015; Kim et al., 2017) ." }, { "id": 33, "string": "There is less work on discrete sentence representations due to the difficulty of passing gradients through discrete outputs." }, { "id": 34, "string": "The recently developed Gumbel Softmax (Jang et al., 2016; Maddison et al., 2016) and vector quantization (van den Oord et al., 2017) enable us to train discrete variables." }, { "id": 35, "string": "Notably, discrete variable models have been proposed to discover document topics (Miao et al., 2016) and semi-supervised sequence transaction (Zhou and Neubig, 2017) Our work differs from these as follows: (1) we focus on learning interpretable variables; in prior research the semantics of latent variables are mostly ignored in the dialog generation setting." }, { "id": 36, "string": "(2) we improve the learning objective for discrete VAEs and overcome the well-known posterior collapsing issue (Bowman et al., 2015; Chen et al., 2016) ." }, { "id": 37, "string": "(3) we focus on unsupervised learning of salient features in dialog responses instead of hand-crafted features." }, { "id": 38, "string": "Proposed Methods Our formulation contains three random variables: the dialog context c, the response x and the latent action z." }, { "id": 39, "string": "The context often contains the discourse history in the format of a list of utterances." }, { "id": 40, "string": "The response is an utterance that contains a list of word tokens." }, { "id": 41, "string": "The latent action is a set of discrete variables that define high-level attributes of x." }, { "id": 42, "string": "Before introducing the proposed framework, we first identify two key properties that are essential in or-der for z to be interpretable: 1. z should capture salient sentence-level features about the response x." }, { "id": 43, "string": "2." 
}, { "id": 44, "string": "The meaning of latent symbols z should be independent of the context c. The first property is self-evident." }, { "id": 45, "string": "The second can be explained: assume z contains a single discrete variable with K classes." }, { "id": 46, "string": "Since the context c can be any dialog history, if the meaning of each class changes given a different context, then it is difficult to extract an intuitive interpretation by only looking at all responses with class k ∈ [1, K]." }, { "id": 47, "string": "Therefore, the second property looks for latent actions that have context-independent semantics so that each assignment of z conveys the same meaning in all dialog contexts." }, { "id": 48, "string": "With the above definition of interpretable latent actions, we first introduce a recognition network R : q R (z|x) and a generation network G. The role of R is to map an sentence to the latent variable z and the generator G defines the learning signals that will be used to train z's representation." }, { "id": 49, "string": "Notably, our recognition network R does not depend on the context c as has been the case in prior work (Serban et al., 2016) ." }, { "id": 50, "string": "The motivation of this design is to encourage z to capture context-independent semantics, which are further elaborated in Section 3.4." }, { "id": 51, "string": "With the z learned by R and G, we then introduce an encoder decoder network F : p F (x|z, c) and and a policy network π : p π (z|c)." }, { "id": 52, "string": "At test time, given a context c, the policy network and encoder decoder will work together to generate the next response viã x = p F (x|z ∼ p π (z|c), c)." }, { "id": 53, "string": "In short, R, G, F and π are the four components that comprise our proposed framework." }, { "id": 54, "string": "The next section will first focus on developing R and G for learning interpretable z and then will move on to integrating R with F and π in Section 3.3." }, { "id": 55, "string": "Learning Sentence Representations from Auto-Encoding Our baseline model is a sentence VAE with discrete latent space." }, { "id": 56, "string": "We use an RNN as the recognition network to encode the response x." }, { "id": 57, "string": "Its last hidden state h R |x| is used to represent x." }, { "id": 58, "string": "We define z to be a set of K-way categorical variables z = {z 1 ...z m ...z M }, where M is the number of variables." }, { "id": 59, "string": "For each z m , its posterior distribution is defined as q R (z m |x) = Softmax(W q h R |x| + b q )." }, { "id": 60, "string": "During training, we use the Gumbel-Softmax trick to sample from this distribution and obtain lowvariance gradients." }, { "id": 61, "string": "To map the latent samples to the initial state of the decoder RNN, we define {e 1 ...e m ...e M } where e m ∈ R K×D and D is the generator cell size." }, { "id": 62, "string": "Thus the initial state of the generator is: h G 0 = M m=1 e m (z m )." }, { "id": 63, "string": "Finally, the generator RNN is used to reconstruct the response given h G 0 ." }, { "id": 64, "string": "VAEs is trained to maxmimize the evidence lowerbound objective (ELBO) (Kingma and Welling, 2013) ." }, { "id": 65, "string": "For simplicity, later discussion drops the subscript m in z m and assumes a single latent z." }, { "id": 66, "string": "Since each z m is independent, we can easily extend the results below to multiple variables." 
}, { "id": 67, "string": "Anti-Information Limitation of ELBO It is well-known that sentence VAEs are hard to train because of the posterior collapse issue." }, { "id": 68, "string": "Many empirical solutions have been proposed: weakening the decoder, adding auxiliary loss etc." }, { "id": 69, "string": "(Bowman et al., 2015; Chen et al., 2016; ." }, { "id": 70, "string": "We argue that the posterior collapse issue lies in ELBO and we offer a novel decomposition to understand its behavior." }, { "id": 71, "string": "First, instead of writing ELBO for a single data point, we write it as an expectation over a dataset: L VAE = E x [E q R (z|x) [log p G (x|z)] − KL(q R (z|x) p(z))] (1) We can expand the KL term as Eq." }, { "id": 72, "string": "2 (derivations in Appendix A.1) and rewrite ELBO as: E x [KL(q R (z|x) p(z))] = (2) I(Z, X)+KL(q(z) p(z)) L VAE = E q(z|x)p(x) [log p(x|z)] − I(Z, X) − KL(q(z) p(z)) (3) where q(z) = E x [q R (z|x)] and I(Z, X) is the mutual information between Z and X." }, { "id": 73, "string": "This expansion shows that the KL term in ELBO is trying to reduce the mutual information between latent variables and the input data, which explains why VAEs often ignore the latent variable, especially when equipped with powerful decoders." }, { "id": 74, "string": "VAE with Information Maximization and Batch Prior Regularization A natural solution to correct the anti-information issue in Eq." }, { "id": 75, "string": "3 is to maximize both the data likeli-hood lowerbound and the mutual information between z and the input data: L VAE + I(Z, X) = E q R (z|x)p(x) [log p G (x|z)] − KL(q(z) p(z)) (4) Therefore, jointly optimizing ELBO and mutual information simply cancels out the informationdiscouraging term." }, { "id": 76, "string": "Also, we can still sample from the prior distribution for generation because of KL(q(z) p(z))." }, { "id": 77, "string": "Eq." }, { "id": 78, "string": "4 is similar to the objectives used in adversarial autoencoders (Makhzani et al., 2015; Kim et al., 2017) ." }, { "id": 79, "string": "Our derivation provides a theoretical justification to their superior performance." }, { "id": 80, "string": "Notably, Eq." }, { "id": 81, "string": "4 arrives at the same loss function proposed in infoVAE (Zhao S et al., 2017) ." }, { "id": 82, "string": "However, our derivation is different, offering a new way to understand ELBO behavior." }, { "id": 83, "string": "The remaining challenge is how to minimize KL(q(z) p(z)), since q(z) is an expectation over q(z|x)." }, { "id": 84, "string": "When z is continuous, prior work has used adversarial training (Makhzani et al., 2015; Kim et al., 2017) or Maximum Mean Discrepancy (MMD) (Zhao S et al., 2017) to regularize q(z)." }, { "id": 85, "string": "It turns out that minimizing KL(q(z) p(z)) for discrete z is much simpler than its continuous counterparts." }, { "id": 86, "string": "Let x n be a sample from a batch of N data points." }, { "id": 87, "string": "Then we have: q(z) ≈ 1 N N n=1 q(z|x n ) = q (z) (5) where q (z) is a mixture of softmax from the posteriors q(z|x n ) of each x n ." }, { "id": 88, "string": "We can approximate KL(q(z) p(z)) by: KL(q (z) p(z)) = K k=1 q (z = k) log q (z = k) p(z = k) (6) We refer to Eq." }, { "id": 89, "string": "6 as Batch Prior Regularization (BPR)." }, { "id": 90, "string": "When N approaches infinity, q (z) approaches the true marginal distribution of q(z)." 
}, { "id": 91, "string": "In practice, we only need to use the data from each mini-batch assuming that the mini batches are randomized." }, { "id": 92, "string": "Last, BPR is fundamentally different from multiplying a coefficient < 1 to anneal the KL term in VAE (Bowman et al., 2015) ." }, { "id": 93, "string": "This is because BPR is a non-linear operation log sum exp." }, { "id": 94, "string": "For later discussion, we denote our discrete infoVAE with BPR as DI-VAE." }, { "id": 95, "string": "Learning Sentence Representations from the Context DI-VAE infers sentence representations by reconstruction of the input sentence." }, { "id": 96, "string": "Past research in distributional semantics has suggested the meaning of language can be inferred from the adjacent context (Harris, 1954; Hill et al., 2016) ." }, { "id": 97, "string": "The distributional hypothesis is especially applicable to dialog since the utterance meaning is highly contextual." }, { "id": 98, "string": "For example, the dialog act is a wellknown utterance feature and depends on dialog state (Austin, 1975; Stolcke et al., 2000) ." }, { "id": 99, "string": "Thus, we introduce a second type of latent action based on sentence-level distributional semantics." }, { "id": 100, "string": "Skip thought (ST) is a powerful sentence representation that captures contextual information (Kiros et al., 2015) ." }, { "id": 101, "string": "ST uses an RNN to encode a sentence, and then uses the resulting sentence representation to predict the previous and next sentences." }, { "id": 102, "string": "Inspired by ST's robust performance across multiple tasks (Hill et al., 2016) , we adapt our DI-VAE to Discrete Information Variational Skip Thought (DI-VST) to learn discrete latent actions that model distributional semantics of sentences." }, { "id": 103, "string": "We use the same recognition network from DI-VAE to output z's posterior distribution q R (z|x)." }, { "id": 104, "string": "Given the samples from q R (z|x), two RNN generators are used to predict the previous sentence x p and the next sentences x n ." }, { "id": 105, "string": "Finally, the learning objective is to maximize: L DI-VST = E q R (z|x)p(x)) [log(p n G (x n |z)p p G (x p |z))] − KL(q(z) p(z)) (7) Integration with Encoder Decoders We now describe how to integrate a given q R (z|x) with an encoder decoder and a policy network." }, { "id": 106, "string": "Let the dialog context c be a sequence of utterances." }, { "id": 107, "string": "Then a dialog context encoder network can encode the dialog context into a distributed representation h e = F e (c)." }, { "id": 108, "string": "The decoder F d can generate the responsesx = F d (h e , z) using samples from q R (z|x)." }, { "id": 109, "string": "Meanwhile, we train π to predict the aggregated posterior E p(x|c) [q R (z|x)] from c via maximum likelihood training." }, { "id": 110, "string": "This model is referred as Latent Action Encoder Decoder (LAED) with the following objective." }, { "id": 111, "string": "L LAED (θ F , θ π ) = E q R (z|x)p(x,c) [logp π (z|c) + log p F (x|z, c)] (8) Also simply augmenting the inputs of the decoders with latent action does not guarantee that the generated response exhibits the attributes of the give action." }, { "id": 112, "string": "Thus we use the controllable text generation framework (Hu et al., 2017) by introducing L Attr , which reuses the same recognition network q R (z|x) as a fixed discriminator to penalize the decoder if its generated responses do not reflect the attributes in z. 
L Attr (θ F ) = E q R (z|x)p(c,x) [log q R (z|F(c, z))] (9) Since it is not possible to propagate gradients through the discrete outputs at F d at each word step, we use a deterministic continuous relaxation (Hu et al., 2017) by replacing output of F d with the probability of each word." }, { "id": 113, "string": "Let o t be the normalized probability at step t ∈ [1, |x|], the inputs to q R at time t are then the sum of word embeddings weighted by o t , i.e." }, { "id": 114, "string": "h R t = RNN(h R t−1 , Eo t ) and E is the word embedding matrix." }, { "id": 115, "string": "Finally this loss is combined with L LAED and a hyperparameter λ to have Attribute Forcing LAED." }, { "id": 116, "string": "L attrLAED = L LAED + λL Attr (10) Relationship with Conditional VAEs It is not hard to see L LAED is closely related to the objective of CVAEs for dialog generation (Serban et al., 2016; , which is: L CVAE = E q [log p(x|z, c)]−KL(q(z|x, c) p(z|c)) (11) Despite their similarities, we highlight the key differences that prohibit CVAE from achieving interpretable dialog generation." }, { "id": 117, "string": "First L CVAE encourages I(x, z|c) (Agakov, 2005), which learns z that capture context-dependent semantics." }, { "id": 118, "string": "More intuitively, z in CVAE is trained to generate x via p(x|z, c) so the meaning of learned z can only be interpreted along with its context c. Therefore this violates our goal of learning context-independent semantics." }, { "id": 119, "string": "Our methods learn q R (z|x) that only depends on x and trains q R separately to ensure the semantics of z are interpretable standalone." }, { "id": 120, "string": "Experiments and Results The proposed methods are evaluated on four datasets." }, { "id": 121, "string": "The first corpus is Penn Treebank (PTB) (Marcus et al., 1993) used to evaluate sentence VAEs (Bowman et al., 2015) ." }, { "id": 122, "string": "We used the version pre-processed by Mikolov (Mikolov et al., 2010) ." }, { "id": 123, "string": "The second dataset is the Stanford Multi-Domain Dialog (SMD) dataset that contains 3,031 human-Woz, task-oriented dialogs collected from 3 different domains (navigation, weather and scheduling) (Eric and Manning, 2017) ." }, { "id": 124, "string": "The other two datasets are chat-oriented data: Daily Dialog (DD) and Switchboard (SW) (Godfrey and Holliman, 1997), which are used to test whether our methods can generalize beyond task-oriented dialogs but also to to open-domain chatting." }, { "id": 125, "string": "DD contains 13,118 multi-turn human-human dialogs annotated with dialog acts and emotions." }, { "id": 126, "string": "(Li et al., 2017) ." }, { "id": 127, "string": "SW has 2,400 human-human telephone conversations that are annotated with topics and dialog acts." }, { "id": 128, "string": "SW is a more challenging dataset because it is transcribed from speech which contains complex spoken language phenomenon, e.g." }, { "id": 129, "string": "hesitation, self-repair etc." }, { "id": 130, "string": "Comparing Discrete Sentence Representation Models The first experiment used PTB and DD to evaluate the performance of the proposed methods in learning discrete sentence representations." }, { "id": 131, "string": "We implemented DI-VAE and DI-VST using GRU-RNN (Chung et al., 2014) and trained them using Adam (Kingma and Ba, 2014) ." }, { "id": 132, "string": "Besides the proposed methods, the following baselines are compared." 
}, { "id": 133, "string": "Unregularized models: removing the KL(q|p) term from DI-VAE and DI-VST leads to a simple discrete autoencoder (DAE) and discrete skip thought (DST) with stochastic discrete hidden units." }, { "id": 134, "string": "ELBO models: the basic discrete sentence VAE (DVAE) or variational skip thought (DVST) that optimizes ELBO with regularization term KL(q(z|x) p(z))." }, { "id": 135, "string": "We found that standard training failed to learn informative latent actions for either DVAE or DVST because of the posterior collapse." }, { "id": 136, "string": "Therefore, KL-annealing (Bowman et al., 2015) and bag-of-word loss are used to force these two models learn meaningful representations." }, { "id": 137, "string": "We also include the results for VAE with continuous latent variables reported on the same PTB ." }, { "id": 138, "string": "Additionally, we report the perplexity from a standard GRU-RNN language model (Zaremba et al., 2014) ." }, { "id": 139, "string": "The evaluation metrics include reconstruction perplexity (PPL), KL(q(z) p(z)) and the mutual information between input data and latent vari-ables I (x, z) ." }, { "id": 140, "string": "Intuitively a good model should achieve low perplexity and KL distance, and simultaneously achieve high I(x, z)." }, { "id": 141, "string": "The discrete latent space for all models are M =20 and K=10." }, { "id": 142, "string": "Mini-batch size is 30." }, { "id": 143, "string": "Table 1 shows that all models achieve better perplexity than an RNNLM, which shows they manage to learn meaningful q(z|x)." }, { "id": 144, "string": "First, for autoencoding models, DI-VAE is able to achieve the best results in all metrics compared other methods." }, { "id": 145, "string": "We found DAEs quickly learn to reconstruct the input but they are prone to overfitting during training, which leads to lower performance on the test data compared to DI-VAE." }, { "id": 146, "string": "Also, since there is no regularization term in the latent space, q(z) is very different from the p(z) which prohibits us from generating sentences from the latent space." }, { "id": 147, "string": "In fact, DI-VAE enjoys the same linear interpolation properties reported in (Bowman et al., 2015) (See Appendix A.2)." }, { "id": 148, "string": "As for DVAEs, it achieves zero I(x, z) in standard training and only manages to learn some information when training with KL-annealing and bag-of-word loss." }, { "id": 149, "string": "On the other hand, our methods achieve robust performance without the need for additional processing." }, { "id": 150, "string": "Similarly, the proposed DI-VST is able to achieve the lowest PPL and similar KL compared to the strongly regularized DVST." }, { "id": 151, "string": "Interestingly, although DST is able to achieve the highest I(x, z), but PPL is not further improved." }, { "id": 152, "string": "These results confirm the effectiveness of the proposed BPR in terms of regularizing q(z) while learning meaningful posterior q(z|x)." }, { "id": 153, "string": "In order to understand BPR's sensitivity to batch size N , a follow-up experiment varied the batch size from 2 to 60 (If N =1, DI-VAE is equivalent to DVAE)." }, { "id": 154, "string": "Figure 2 show that as N increases, perplexity, I(x, z) monotonically improves, while KL(q p) only increases from 0 to 0.159." }, { "id": 155, "string": "After N > 30, the performance plateaus." 
}, { "id": 156, "string": "Therefore, using mini-batch is an efficient trade-off between q(z) estimation and computation speed." }, { "id": 157, "string": "The last experiment in this section investigates the relation between representation learning and the dimension of the latent space." }, { "id": 158, "string": "We set a fixed budget by restricting the maximum number of modes to be about 1000, i.e." }, { "id": 159, "string": "K M ≈ 1000." }, { "id": 160, "string": "We then vary the latent space size and report the same evaluation metrics." }, { "id": 161, "string": "Table 2 shows that models with multiple small latent variables perform significantly better than those with large and few latent variables." }, { "id": 162, "string": "K, M K M PPL KL(q p) I(x, z) 1000, 1 1000 75.61 0.032 0.335 10, 3 1000 71.42 0.071 0.607 4, 5 1024 68.43 0.088 0.809 Table 2 : DI-VAE on PTB with different latent dimensions under the same budget." }, { "id": 163, "string": "Interpreting Latent Actions The next question is to interpret the meaning of the learned latent action symbols." }, { "id": 164, "string": "To achieve this, the latent action of an utterance x n is obtained from a greedy mapping: a n = argmax k q R (z = k|x n )." }, { "id": 165, "string": "We set M =3 and K=5, so that there are at most 125 different latent actions, and each x n can now be represented by a 1 -a 2 -a 3 , e.g." }, { "id": 166, "string": "\"How are you?\"" }, { "id": 167, "string": "→ 1-4-2." }, { "id": 168, "string": "Assuming that we have access to manually clustered data according to certain classes (e.g." }, { "id": 169, "string": "dialog acts), it is unfair to use classic cluster measures (Vinh et al., 2010) to evaluate the clusters from latent actions." }, { "id": 170, "string": "This is because the uniform prior p(z) evenly distribute the data to all possible latent actions, so that it is expected that frequent classes will be assigned to several latent actions." }, { "id": 171, "string": "Thus we utilize the homogeneity metric (Rosenberg and Hirschberg, 2007 ) that measures if each latent action contains only members of a single class." }, { "id": 172, "string": "We tested this on the SW and DD, which contain human annotated features and we report the latent actions' homogeneity w.r.t these features in Table 3 ." }, { "id": 173, "string": "On DD, results show DI-VST SW DD Act Topic Act Emotion DI-VAE 0.48 0.08 0.18 0.09 DI-VST 0.33 0.13 0.34 0.12 works better than DI-VAE in terms of creating actions that are more coherent for emotion and dialog acts." }, { "id": 174, "string": "The results are interesting on SW since DI-VST performs worse on dialog acts than DI-VAE." }, { "id": 175, "string": "One reason is that the dialog acts in SW are more fine-grained (42 acts) than the ones in DD (5 acts) so that distinguishing utterances based on words in x is more important than the information in the neighbouring utterances." }, { "id": 176, "string": "We then apply the proposed methods to SMD which has no manual annotation and contains taskoriented dialogs." }, { "id": 177, "string": "Two experts are shown 5 randomly selected utterances from each latent action and are asked to give an action name that can describe as many of the utterances as possible." }, { "id": 178, "string": "Then an Amazon Mechanical Turk study is conducted to evaluate whether other utterances from the same latent action match these titles." 
}, { "id": 179, "string": "5 workers see the action name and a different group of 5 utterances from that latent action." }, { "id": 180, "string": "They are asked to select all utterances that belong to the given actions, which tests the homogeneity of the utterances falling in the same cluster." }, { "id": 181, "string": "Negative samples are included to prevent random selection." }, { "id": 182, "string": "Table 4 shows that both methods work well and DI-VST achieved better homogeneity than DI-VAE." }, { "id": 183, "string": "Since DI-VAE is trained to reconstruct its input and DI-VST is trained to model the context, they group utterances in different ways." }, { "id": 184, "string": "For example, DI-VST would group \"Can I get a restaurant\", \"I am looking for a restaurant\" into one action where Dialog Response Generation with Latent Actions Finally we implement an LAED as follows." }, { "id": 185, "string": "The encoder is a hierarchical recurrent encoder (Serban et al., 2016) with bi-directional GRU-RNNs as the utterance encoder and a second GRU-RNN as the discourse encoder." }, { "id": 186, "string": "The discourse encoder output its last hidden state h e |x| ." }, { "id": 187, "string": "The decoder is another GRU-RNN and its initial state of the decoder is obtained by h d 0 = h e |x| + M m=1 e m (z m ), where z comes from the recognition network of the proposed methods." }, { "id": 188, "string": "The policy network π is a 2-layer multi-layer perceptron (MLP) that models p π (z|h e |x| )." }, { "id": 189, "string": "We use up to the previous 10 utterances as the dialog context and denote the LAED using DI-VAE latent actions as AE-ED and the one uses DI-VST as ST-ED." }, { "id": 190, "string": "First we need to confirm whether an LAED can generate responses that are consistent with the semantics of a given z." }, { "id": 191, "string": "To answer this, we use a pre-trained recognition network R to check if a generated response carries the attributes in the given action." }, { "id": 192, "string": "We generate dialog responses on a test dataset viax = F(z ∼ π(c), c) with greedy RNN decoding." }, { "id": 193, "string": "The generated responses are passed into the R and we measure attribute accuracy by countingx as correct if z = argmax k q R (k|x)." }, { "id": 194, "string": "Table 6 : Results for attribute accuracy with and without attribute loss." }, { "id": 195, "string": "responses are highly consistent with the given latent actions." }, { "id": 196, "string": "Also, latent actions from DI-VAE achieve higher attribute accuracy than the ones from DI-VST, because z from auto-encoding is explicitly trained for x reconstruction." }, { "id": 197, "string": "Adding L attr is effective in forcing the decoder to take z into account during its generation, which helps the most in more challenging open-domain chatting data, e.g." }, { "id": 198, "string": "SW and DD." }, { "id": 199, "string": "The accuracy of ST-ED on SW is worse than the other two datasets." }, { "id": 200, "string": "The reason is that SW contains many short utterances that can be either a continuation of the same speaker or a new turn from the other speaker, whereas the responses in the other two domains are always followed by a different speaker." }, { "id": 201, "string": "The more complex context pattern in SW may require special treatment." }, { "id": 202, "string": "We leave it for future work." 
}, { "id": 203, "string": "The second experiment checks if the policy network π is able to predict the right latent action given just the dialog context." }, { "id": 204, "string": "We report both accuracy, i.e." }, { "id": 205, "string": "argmax k q R (k|x) = argmax k p π (k |c) and perplexity of p π (z|c)." }, { "id": 206, "string": "The perplexity measure is more useful for open domain dialogs because decision-making in complex dialogs is often one-to-many given a similar context ." }, { "id": 207, "string": "Table 7 shows the prediction scores on SMD AE-ED 3.045 (51.5% sys 52.4% usr 50.5%) ST-ED 1.695 (75.5% sys 82.1% usr 69.2%) DD SW AE-ED 4.47 (35.8%) 4.46 (31.68%) ST-ED 3.89 (47.5%) 3.68 (33.2%) Table 7 : Performance of policy network." }, { "id": 208, "string": "L attr is included in training." }, { "id": 209, "string": "the three dialog datasets." }, { "id": 210, "string": "These scores provide useful insights to understand the complexity of a dialog dataset." }, { "id": 211, "string": "For example, accuracy on opendomain chatting is harder than the task-oriented SMD data." }, { "id": 212, "string": "Also, it is intuitive that predicting system actions is easier than predicting user actions on SMD." }, { "id": 213, "string": "Also, in general the prediction scores for ST-ED are higher the ones for AE-ED." }, { "id": 214, "string": "The reason is related to our previous discussion about the granularity of the latent actions." }, { "id": 215, "string": "Since latent actions from DI-VST mainly model the the type of utterances used in certain types of context, it is easier for the policy network to predict latent actions from DI-VST." }, { "id": 216, "string": "Therefore, choosing the type of latent actions is a design choice and depends on the type of interpretability that is needed." }, { "id": 217, "string": "We finish with an example generated from the two variants of LAED on SMD as shown in Table 8 ." }, { "id": 218, "string": "Given a dialog context, our systems are able to output a probability distribution over different latent actions that have interpretable meaning along with their natural language realizations." }, { "id": 219, "string": "c usr: Where does my friend live?" }, { "id": 220, "string": "Model Action Generated Responses AE-ED give loc info -Tom is at 753 University Ave, and a road block." }, { "id": 221, "string": "p(z|c)=0.34 -Comfort Inn is at 7 miles away." }, { "id": 222, "string": "give user info -Your home address is 5671 barringer street." }, { "id": 223, "string": "p(z|c)=0.22 -Your home is at 10 ames street." }, { "id": 224, "string": "ST-ED give loc info -Jill's house is 8 miles away at 347 Alta Mesa Ave. p(z|c)=0.93 -Jill lives at 347 Alta Mesa Ave. Table 8 : Interpretable dialog generation on SMD with top probable latent actions." }, { "id": 225, "string": "AE-ED predicts more fine-grained but more error-prone actions." }, { "id": 226, "string": "Conclusion and Future Work This paper presents a novel unsupervised framework that enables the discovery of discrete latent actions and interpretable dialog response generation." }, { "id": 227, "string": "Our main contributions reside in the two sentence representation models DI-VAE and DI-VST, and their integration with the encoder decoder models." }, { "id": 228, "string": "Experiments show the proposed methods outperform strong baselines in learning discrete latent variables and showcase the effectiveness of interpretable dialog response generation." 
}, { "id": 229, "string": "Our findings also suggest promising future research directions, including learning better context-based latent actions and using reinforce-ment learning to adapt policy networks." }, { "id": 230, "string": "We believe that this work is an important step forward towards creating generative dialog models that can not only generalize to large unlabelled datasets in complex domains but also be explainable to human users." } ], "headers": [ { "section": "Introduction", "n": "1", "start": 0, "end": 21 }, { "section": "Related Work", "n": "2", "start": 22, "end": 37 }, { "section": "Proposed Methods", "n": "3", "start": 38, "end": 54 }, { "section": "Learning Sentence Representations from Auto-Encoding", "n": "3.1", "start": 55, "end": 66 }, { "section": "Anti-Information Limitation of ELBO", "n": "3.1.1", "start": 67, "end": 73 }, { "section": "VAE with Information Maximization and Batch Prior Regularization", "n": "3.1.2", "start": 74, "end": 94 }, { "section": "Learning Sentence Representations from the Context", "n": "3.2", "start": 95, "end": 104 }, { "section": "Integration with Encoder Decoders", "n": "3.3", "start": 105, "end": 115 }, { "section": "Relationship with Conditional VAEs", "n": "3.4", "start": 116, "end": 119 }, { "section": "Experiments and Results", "n": "4", "start": 120, "end": 129 }, { "section": "Comparing Discrete Sentence Representation Models", "n": "4.1", "start": 130, "end": 162 }, { "section": "Interpreting Latent Actions", "n": "4.2", "start": 163, "end": 183 }, { "section": "Dialog Response Generation with Latent Actions", "n": "4.3", "start": 184, "end": 225 }, { "section": "Conclusion and Future Work", "n": "5", "start": 226, "end": 230 } ], "figures": [ { "filename": "../figure/image/964-Table1-1.png", "caption": "Table 1: Results for various discrete sentence representations. The KL for VAE is KL(q(z|x)‖p(z)) instead of KL(q(z)‖p(z)) (Zhao et al., 2017)", "page": 5, "bbox": { "x1": 72.0, "x2": 291.36, "y1": 141.6, "y2": 321.12 } }, { "filename": "../figure/image/964-Table2-1.png", "caption": "Table 2: DI-VAE on PTB with different latent dimensions under the same budget.", "page": 5, "bbox": { "x1": 310.56, "x2": 522.24, "y1": 514.56, "y2": 571.1999999999999 } }, { "filename": "../figure/image/964-Figure2-1.png", "caption": "Figure 2: Perplexity and I(x, z) on PTB by varying batch size N . BPR works better for larger N .", "page": 5, "bbox": { "x1": 313.92, "x2": 519.36, "y1": 126.72, "y2": 253.92 } }, { "filename": "../figure/image/964-Table6-1.png", "caption": "Table 6: Results for attribute accuracy with and without attribute loss.", "page": 7, "bbox": { "x1": 73.92, "x2": 289.44, "y1": 156.48, "y2": 213.12 } }, { "filename": "../figure/image/964-Table7-1.png", "caption": "Table 7: Performance of policy network. Lattr is included in training.", "page": 7, "bbox": { "x1": 73.92, "x2": 289.44, "y1": 620.64, "y2": 705.12 } }, { "filename": "../figure/image/964-Table8-1.png", "caption": "Table 8: Interpretable dialog generation on SMD with top probable latent actions. 
AE-ED predicts more fine-grained but more error-prone actions.", "page": 7, "bbox": { "x1": 306.71999999999997, "x2": 530.4, "y1": 357.59999999999997, "y2": 500.15999999999997 } }, { "filename": "../figure/image/964-Table4-1.png", "caption": "Table 4: Human evaluation results on judging the homogeneity of latent actions in SMD.", "page": 6, "bbox": { "x1": 306.71999999999997, "x2": 531.36, "y1": 62.879999999999995, "y2": 105.11999999999999 } }, { "filename": "../figure/image/964-Table3-1.png", "caption": "Table 3: Homogeneity results (bounded [0, 1]).", "page": 6, "bbox": { "x1": 82.56, "x2": 280.32, "y1": 251.51999999999998, "y2": 308.15999999999997 } }, { "filename": "../figure/image/964-Figure1-1.png", "caption": "Figure 1: Our proposed models learn a set of discrete variables to represent sentences by either autoencoding or context prediction.", "page": 0, "bbox": { "x1": 312.0, "x2": 521.28, "y1": 354.71999999999997, "y2": 412.32 } } ] }, "gem_id": "GEM-SciDuet-train-3" }, { "slides": { "0": { "title": "Lemmatization", "text": [ "INST ar celu ar celiem", "Latvian: cels (English: road)" ], "page_nums": [ 1 ], "images": [] }, "1": { "title": "Previous work", "text": [ "sentence context helps to lemmatize", "ambiguous and unseen words", "Bergmanis and Goldwater, 2018" ], "page_nums": [ 2 ], "images": [] }, "2": { "title": "Ambiguous words", "text": [ "A cels (road): NOUN, sing., ACC", "B celis (knee): NOUN, plur., DAT" ], "page_nums": [ 3 ], "images": [] }, "3": { "title": "Learning from sentences", "text": [ "Lemma annotated sentences are scarce for low resource languages annotating sentences is slow", "N types > N (contiguous) tokens" ], "page_nums": [ 4, 5, 6 ], "images": [] }, "4": { "title": "N types N tokens", "text": [ "Training on 1k UDT tokens/types" ], "page_nums": [ 7 ], "images": [] }, "5": { "title": "Types in context", "text": [ "algorithms get smarter computers faster", "Bergmanis and Goldwater, 2018" ], "page_nums": [ 8 ], "images": [] }, "6": { "title": "Proposal Data Augmentation", "text": [ "...to get types in context" ], "page_nums": [ 9 ], "images": [] }, "7": { "title": "Method Data Augmentation", "text": [ "Inflection cels cela N;LOC;SG", "Dzives pedeja cela pavadot musu cels", "Context cels cela N;LOC;SG", "Lemma cels cela N;LOC;SG" ], "page_nums": [ 10, 11, 12 ], "images": [] }, "8": { "title": "Inflection Tables", "text": [ "INST ar celu ar celiem", "Latvian: cels (English: road)", "ACC celu celiem celus", "celt (build) celot (travel) celis (knee)" ], "page_nums": [ 13, 14, 15, 16 ], "images": [] }, "9": { "title": "Key question", "text": [ "If ambiguous words enforce the use of context:", "Is context still useful in the absence of ambiguous forms?" 
], "page_nums": [ 17 ], "images": [] }, "10": { "title": "Experiments", "text": [ "Train: 1k types from universal dependency corpus", "UniMorph in Wikipedia contexts", "Estonian, Finnish, Latvian, Polish,", "Romanian, Russian, Swedish, Turkish", "Metric: type level macro average accuracy", "Test: on standard splits of universal dependency corpus" ], "page_nums": [ 18, 19 ], "images": [] }, "12": { "title": "Does model learn from context", "text": [ "context vs no context" ], "page_nums": [ 21 ], "images": [] }, "13": { "title": "Afix ambiguity wuger", "text": [ "Lemma depends on context:", "A if wuger is adjective then lemma could be wug", "B if wuger is noun then lemma could be wuger" ], "page_nums": [ 22 ], "images": [] }, "14": { "title": "Takeaways conclusions", "text": [ "Despite biased data and divergent lemmatization standards", "Type based data augmentation helps", "Even without the ambiguous types that enforce the use of context", "Model use context to disambiguate affixes of unseen words" ], "page_nums": [ 23, 24 ], "images": [] } }, "paper_title": "Data Augmentation for Context-Sensitive Neural Lemmatization Using Inflection Tables and Raw Text", "paper_id": "965", "paper": { "title": "Data Augmentation for Context-Sensitive Neural Lemmatization Using Inflection Tables and Raw Text", "abstract": "Lemmatization aims to reduce the sparse data problem by relating the inflected forms of a word to its dictionary form. Using context can help, both for unseen and ambiguous words. Yet most context-sensitive approaches require full lemma-annotated sentences for training, which may be scarce or unavailable in lowresource languages. In addition (as shown here), in a low-resource setting, a lemmatizer can learn more from n labeled examples of distinct words (types) than from n (contiguous) labeled tokens, since the latter contain far fewer distinct types. To combine the efficiency of type-based learning with the benefits of context, we propose a way to train a context-sensitive lemmatizer with little or no labeled corpus data, using inflection tables from the UniMorph project and raw text examples from Wikipedia that provide sentence contexts for the unambiguous UniMorph examples. Despite these being unambiguous examples, the model successfully generalizes from them, leading to improved results (both overall, and especially on unseen words) in comparison to a baseline that does not use context. Method Lematus (Bergmanis and Goldwater, 2018) is a neural sequence-to-sequence model with attention 2 Garrette et al. (2013) found the same for POS tagging. 3 Code and data: https://bitbucket.org/ tomsbergmanis/data_augumentation_um_wiki 4", "text": [ { "id": 0, "string": "Introduction Many lemmatizers work on isolated wordforms (Wicentowski, 2002; Dreyer et al., 2008; Rastogi et al., 2016; Makarov and Clematide, 2018b,a) ." }, { "id": 1, "string": "Lemmatizing in context can improve accuracy on ambiguous and unseen words (Bergmanis and Goldwater, 2018) , but most systems for contextsensitive lemmatization must train on complete sentences labeled with POS and/or morphological tags as well as lemmas, and have only been tested with 20k-300k training tokens (Chrupała et al., 2008; Müller et al., 2015; Chakrabarty et al., 2017) ." }, { "id": 2, "string": "1 1 The smallest of these corpora contains 20k tokens of Bengali annotated only with lemmas, which Chakrabarty et al." }, { "id": 3, "string": "(2017) reported took around two person months to create." 
}, { "id": 4, "string": "Intuitively, though, sentence-annotated data is inefficient for training a lemmatizer, especially in low-resource settings." }, { "id": 5, "string": "Training on (say) 1000 word types will provide far more information about a language's morphology than training on 1000 contiguous tokens, where fewer types are represented." }, { "id": 6, "string": "As noted above, sentence data can help with ambiguous and unseen words, but we show here that when data is scarce, this effect is small relative to the benefit of seeing more word types." }, { "id": 7, "string": "2 Motivated by this result, we propose a training data augmentation method that combines the efficiency of type-based learning and the expressive power of a context-sensitive model." }, { "id": 8, "string": "3 We use Lematus (Bergmanis and Goldwater, 2018), a state-of-theart lemmatizer that learns from lemma-annotated words in their N -character contexts." }, { "id": 9, "string": "No predictions about surrounding words are used, so fully annotated training sentences are not needed." }, { "id": 10, "string": "We exploit this fact by combining two sources of training data: 1k lemma-annotated types (with contexts) from the Universal Dependency Treebank (UDT) v2.2 4 (Nivre et al., 2017) , plus examples obtained by finding unambiguous word-lemma pairs in inflection tables from the Universal Morphology (UM) project 5 and collecting sentence contexts for them from Wikipedia." }, { "id": 11, "string": "Although these examples are noisy and biased, we show that they improve lemmatization accuracy in experiments on 10 languages, and that the use of context helps, both overall and especially on unseen words." }, { "id": 12, "string": "inspired by the re-inflection model of Kann and Schütze (2016) , which won the 2016 SIGMOR-PHON shared task (Cotterell et al., 2016) ." }, { "id": 13, "string": "It is built using the Nematus machine translation toolkit, 6 which uses the architecture of Sennrich et al." }, { "id": 14, "string": "(2017) : a 2-layer bidirectional GRU encoder and a 2-layer decoder with a conditional GRU (Sennrich et al., 2017) in the first layer and a GRU in the second layer." }, { "id": 15, "string": "Lematus takes as input a character sequence representing the wordform in its N -character context, and outputs the characters of the lemma." }, { "id": 16, "string": "Special input symbols are used to represent the left and right boundary of the target wordform (, ) and other word boundaries ()." }, { "id": 17, "string": "For example, if N = 15, the system trained on Latvian would be expected to produce the characters of the lemma ceļš (meaning road) given input such as: s a k a p aš v a l dī b u c e ļ u u n i e l u r eǵ i s t r When N = 0 (Lematus 0-ch), no context is used, making Lematus 0-ch comparable to other systems that do not model context (Dreyer et al., 2008; Rastogi et al., 2016; Makarov and Clematide, 2018b,a) ." }, { "id": 18, "string": "In our experiments we use both Lematus 0-ch and Lematus 20-ch (20 characters of context), which was the best-performing system reported by Bergmanis and Goldwater (2018)." }, { "id": 19, "string": "Data Augmentation Our data augmentation method uses UM inflection tables and creates additional training examples by finding Wikipedia sentences that use the inflected wordforms in context, pairing them with their lemma as shown in the inflection table." 
}, { "id": 20, "string": "However, we cannot use all the words in the tables because some of them are ambiguous: for example, Figure 1 shows that the form ceļi could be lemmatized either as ceļš or celis." }, { "id": 21, "string": "SG PL SG PL NOM ceļš ceļi celis ceļi GEN ceļa ceļu ceļa ceļu DAT ceļam ceļiem celim ceļiem ACC ceļu ceļus celi ceļus INS ceļu ceļiem celi ceļiem LOC ceļā ceļos celī ceļos VOC ceļ ceļi celi ceļi There are several other issues with this method that could potentially limit its usefulness." }, { "id": 22, "string": "First, the UM tables only include verbs, nouns and adjectives, whereas we test the system on UDT data, which includes all parts of speech." }, { "id": 23, "string": "Second, by excluding ambiguous forms, we may be restricting the added examples to a non-representative subset of the potential inflections, or the system may simply ignore the context because it isn't needed for these examples." }, { "id": 24, "string": "Finally, there are some annotation differences between UM and UDT." }, { "id": 25, "string": "7 Despite all of these issues, however, we show below that the added examples and their contexts do actually help." }, { "id": 26, "string": "Experimental Setup Baselines and Training Parameters We use four baselines: (1) Lemming 8 (Müller et al., 2015) is a context-sensitive system that uses log-linear models to jointly tag and lemmatize the data, and is trained on sentences annotated with both lemmas and POS tags." }, { "id": 27, "string": "(2) The hard monotonic attention model (HMAM) 9 (Makarov and Clematide, 2018b) is a neural sequence-tosequence model with a hard attention mechanism that advances through the sequence monotonically." }, { "id": 28, "string": "It is trained on word-lemma pairs (without context) 7 Recent efforts to unify the two resources have mostly focused on validating dataset schema (McCarthy et al., 2018) , leaving conflicts in word lemmas unresolved." }, { "id": 29, "string": "We estimated (by counting types that are unambiguous in each dataset but have different lemmas across them) that annotation inconsistencies affect up to 1% of types in the languages we used." }, { "id": 30, "string": "8 http://cistern.cis.lmu.de/lemming 9 https://github.com/ZurichNLP/ coling2018-neural-transition-basedmorphology with character-level alignments learned in a preprocessing step using an alignment model, and it has proved to be competitive in low resource scenarios." }, { "id": 31, "string": "(3) Our naive Baseline outputs the most frequent lemma (or one lemma at random from the options that are equally frequent) for words observed in training." }, { "id": 32, "string": "For unseen words it outputs the wordform itself." }, { "id": 33, "string": "(4) We also try a baseline data augmentation approach (AE Aug Baseline) inspired by Bergmanis et al." }, { "id": 34, "string": "(2017) and Kann and Schütze (2017) , who showed that adding training examples where the network simply learns to auto-encode corpus words can improve morphological inflection results in low-resource settings." }, { "id": 35, "string": "The AE Aug Baseline is a variant of Lematus 0-ch which augments the UDT lemmatization examples by auto-encoding the inflected forms of the UM examples (i.e., it just treats them as corpus words)." }, { "id": 36, "string": "Comparing AE Aug Baseline to Lematus 0-ch augmented with UM lemma-inflection examples tells us whether using the UM lemma information helps more than simply auto-encoding more inflected examples." 
}, { "id": 37, "string": "To train the models we use the default settings for Lemming and the suggested lemmatization parameters for HMAM." }, { "id": 38, "string": "We mainly follow the hyperparameters used by Bergmanis and Goldwater (2018) for Lematus; details are in Appendix B." }, { "id": 39, "string": "Languages and Training Data We conduct preliminary experiments on five development languages: Estonian, Finnish, Latvian, Polish, and Russian." }, { "id": 40, "string": "In our final experiments we also add Bulgarian, Czech, Romanian, Swedish and Turkish." }, { "id": 41, "string": "We vary the amount and type of training data (types vs. tokens, UDT only, UM only, or UDT plus up to 10k UM examples), as described in Section 4." }, { "id": 42, "string": "To obtain N UM-based training examples, we select the first N unambiguous UM types (with their sentence contexts) from shuffled Wikipedia sentences." }, { "id": 43, "string": "For experiments with j > 1 examples per type, we first find all UM types with at least j sentence contexts in Wikipedia and then choose the N distinct types and their j contexts uniformly at random." }, { "id": 44, "string": "Evaluation To evaluate models' ability to lemmatize wordforms in their sentence context we follow Bergmanis and Goldwater (2018) and use the full UDT development and test sets." }, { "id": 45, "string": "Unlike Bergmanis and Goldwater (2018) who reported token level lemmatization exact match accuracy, we report type-level micro averaged lemmatization ex- act match accuracy." }, { "id": 46, "string": "This measure better reflects improvements on unseen words, which tend to be rare but are more important (since a most-frequentlemma baseline does very well on seen words, as shown by Bergmanis and Goldwater (2018) )." }, { "id": 47, "string": "We separately report performance on unseen and ambiguous tokens." }, { "id": 48, "string": "For a fair comparison across scenarios with different training sets, we count as unseen only words that are not ambiguous and are absent from all training sets/scenarios introduced in Section 4." }, { "id": 49, "string": "Due to the small training sets, between 70-90% of dev set types are classed as unseen in each language." }, { "id": 50, "string": "We define a type as ambiguous if the empirical entropy over its lemmas is greater than 0.1 in the full original UDT training splits." }, { "id": 51, "string": "10 According to this measure, only 1.2-5.3% of dev set types are classed as ambiguous in each language." }, { "id": 52, "string": "Significance Testing All systems are trained and tested on ten languages." }, { "id": 53, "string": "To test for statistically significant differences between the results of two systems we use a Monte Carlo method: for each set of results (i.e." }, { "id": 54, "string": "a set of 10 numerical values) we generate 10000 random samples, where each sample swaps the results of the two systems for each language with a probability of 0.5." }, { "id": 55, "string": "We then obtain a p-value as the proportion of samples for which the difference on average was at least as large as the difference observed in our experiments." }, { "id": 56, "string": "1k tokens vs. first 1k distinct types of the UDT training sets." }, { "id": 57, "string": "Table 2 shows that if only 1k examples are available, using types is clearly better for all systems." 
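The Monte Carlo significance test described above is a paired randomization test over the ten per-language scores: each sample swaps the two systems' results per language with probability 0.5, and the p-value is the proportion of samples whose average difference is at least as large as the observed one. A minimal sketch, with illustrative names:

```python
# Paired randomization (Monte Carlo) test as described above; one-sided,
# assuming scores_a is the system with the higher observed average.
import random

def randomization_test(scores_a, scores_b, n_samples=10000, seed=0):
    rng = random.Random(seed)
    n = len(scores_a)
    observed = sum(a - b for a, b in zip(scores_a, scores_b)) / n
    count = 0
    for _ in range(n_samples):
        total = 0.0
        for a, b in zip(scores_a, scores_b):
            if rng.random() < 0.5:   # swap the two systems' results for this language
                a, b = b, a
            total += a - b
        if total / n >= observed:    # difference at least as large as observed
            count += 1
    return count / n_samples         # p-value
```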
}, { "id": 58, "string": "Although Lematus does relatively poorly on the token data, it benefits the most from switching to types, putting it on par with HMAM and suggesting is it likely to benefit more from additional type data." }, { "id": 59, "string": "Lemming requires token-based data, but does worse than HMAM (a context-free method) in the token-based setting, and we also see no benefit from context in comparing Lematus 20-ch vs Lematus 0-ch." }, { "id": 60, "string": "So overall, in this very low-resource scenario with no data augmentation, context does not appear to help." }, { "id": 61, "string": "Using UM + Wikipedia Only We now try training only on UM + Wikipedia examples, rather than examples from UDT." }, { "id": 62, "string": "We use 1k, 2k or 5k unambiguous types from UM with a single example context from Wikipedia for each." }, { "id": 63, "string": "With 5k types we also try adding more example contexts (2, 3, or 5 examples for each type)." }, { "id": 64, "string": "Figure 1 presents the results (for unseen words only)." }, { "id": 65, "string": "As with the UDT experiments, there is little difference between Lematus 20-ch and Lematus 0ch in the smallest data setting." }, { "id": 66, "string": "However, when the number of training types increases to 5k, the benefits of context begin to show, with Lematus 20-ch yielding a 1.6% statistically significant (p < 0.001) improvement over Lematus 0-ch." }, { "id": 67, "string": "The results for increasing the number of examples per type are numerically higher than the one-example case, but the differences are not statistically significant." }, { "id": 68, "string": "It is worth noting that the accuracy even with 5k UM types is considerably lower than the accuracy of the model trained on only 1k UDT types (see Table 2 )." }, { "id": 69, "string": "We believe this discrepancy is due to the issues of biased/incomplete data noted above." }, { "id": 70, "string": "types with contexts from Wikipedia." }, { "id": 71, "string": "Table 3 summarizes the results, showing that despite the lower quality of the UM + Wikipedia examples, using them improves results of all systems, and more so with more examples." }, { "id": 72, "string": "Improvements are especially strong for unseen types, which constitute more than 70% of types in the dev set." }, { "id": 73, "string": "Furthermore, the benefit of the additional UM examples is above and beyond the effect of auto-encoding (AE Aug Baseline) for all systems in all data scenarios." }, { "id": 74, "string": "Considering the two context-free models, HMAM does better on the un-augmented 1k UDT data, but (as predicted by our results above) it benefits less from data augmentation than does Lematus 0-ch, so with added data they are statistically equivalent (p = 0.07 on the test set with 10k UM)." }, { "id": 75, "string": "More importantly, Lematus 20-ch begins to outperform the context-free models with as few as 1k UM + Wikipedia examples, and the difference increases with more examples, eventually reaching over 4% better on the test set than the next best model (Lematus 0-ch) when 10k UM + Wikipedia examples are used (p < 0.001) This indicates that the system can learn useful contextual cues even from unambiguous training examples." }, { "id": 76, "string": "Finally, Figure 2 gives a breakdown of Lematus 20-ch dev set accuracy for individual languages, showing that data augmentation helps consistently, although results suggest diminishing returns." 
}, { "id": 77, "string": "Data Augmentation in Medium Resource Setting To examine the extent to which augmented data can help in the medium resource setting of 10k continuous tokens of UDT used in previous work, we follow Bergmanis and Goldwater (2018) and train Lematus 20-ch models for all ten languages using the first 10k tokens of UDT and compare them with models trained on 10k tokens of UDT augmented with 10k UM types." }, { "id": 78, "string": "To provide a better comparison of our results, we report both the type and the token level development set accuracy." }, { "id": 79, "string": "First of all, Table 4 shows that training on 10k continuous tokens of UDT yields a token level accuracy that is about 8% higher than when using the 1k types of UDT augmented with 10k UM types-the best-performing data augmentation systems (see Table 3 )." }, { "id": 80, "string": "Again, we believe this performance gap is due to the issues with the biased/incomplete data noted above." }, { "id": 81, "string": "For example, we analyzed errors that were unique to the model trained on the Latvian augmented data and found that 41% of the errors were due to wrongly lemmatized words other than nouns, verbs, and adjectives-the three POSs with available inflection tables in UM." }, { "id": 82, "string": "For instance, improperly lemmatized pronouns amounted to 14% of the errors on the Latvian dev set." }, { "id": 83, "string": "Table 4 also shows that UM examples with Wikipedia contexts benefit lemmatization not only in the low but also the medium resource setting, yielding statistically significant type and token level accuracy gains over models trained on 10k UDT continuous tokens alone (for both Unseen and All p < 0.001)." }, { "id": 84, "string": "Conclusion We proposed a training data augmentation method that combines the efficiency of type-based learning and the expressive power of a context-sensitive lemmatization model." }, { "id": 85, "string": "The proposed method uses Wikipedia sentences to provide contextualized examples for unambiguous inflection-lemma pairs from UniMorph tables." }, { "id": 86, "string": "These examples are noisy and biased, but nevertheless they improve lemmatization accuracy on all ten languages both in low (1k) and medium (10k) resource settings." }, { "id": 87, "string": "In particular, we showed that context is helpful, both overall and especially on unseen words-the first work we know of to demonstrate improvements from context in a very low-resource setting." }, { "id": 88, "string": "A Lematus Training Lematus is implemented using the Nematus machine translation toolkit 11 ." }, { "id": 89, "string": "We use default training parameters of Lematus as specified by Bergmanis and Goldwater (2018) except for early stopping with patience (Prechelt, 1998) which we increase to 20." }, { "id": 90, "string": "Similar to Bergmanis and Goldwater (2018) we use the first epochs as a burn-in period, after which we validate the current model by its lemmatization exact match accuracy on the first 3k instances of development set and save this model if it performs better than the previous best model." }, { "id": 91, "string": "We choose a burn-in period of 20 and validation interval of 5 epochs for models that we train on datasets up to 2k instances and a burn-in period of 10 and validation interval of 2 epochs for others." 
}, { "id": 92, "string": "As we work with considerably smaller datasets than Bergmanis and Goldwater (2018) we reduce the effective model size and increase the rate of convergence by tying the input embeddings of the encoder, the decoder and the softmax output embeddings (Press and Wolf, 2017)." }, { "id": 93, "string": "B Data Preparation Wikipedia database dumps contain XML structured articles that are formatted using the wikitext markup language." }, { "id": 94, "string": "To obtain wordforms in their sentence context we 1) use WikiExtractor 12 to extract plain text from Wikipedia database dumps, followed by scripts from Moses statistical machine translation system 13 (Koehn et al., 2007) to 2) split text into sentences (split-sentences.perl), and 3) extract separate tokens (tokenizer.perl)." }, { "id": 95, "string": "Finally, we shuffle the extracted sentences to encourage homogeneous type distribution across the entire text." }, { "id": 96, "string": "Table 3 ." }, { "id": 97, "string": "C Result Breakdown by Language" } ], "headers": [ { "section": "Introduction", "n": "1", "start": 0, "end": 18 }, { "section": "Data Augmentation", "n": "2.1", "start": 19, "end": 25 }, { "section": "Experimental Setup", "n": "3", "start": 26, "end": 83 }, { "section": "Conclusion", "n": "5", "start": 84, "end": 97 } ], "figures": [ { "filename": "../figure/image/965-Table1-1.png", "caption": "Table 1: Example UM inflection tables for Latvian nouns ceļš (road) and celis (knee). The crossed out forms are examples of evidently ambiguous forms that are not used for data augmentation because of being shared by the two lemmas. The underlined forms appear unambiguous in this toy example but actually conflict with inflections of the verb celt (to lift).", "page": 1, "bbox": { "x1": 306.71999999999997, "x2": 532.3199999999999, "y1": 65.75999999999999, "y2": 190.07999999999998 } }, { "filename": "../figure/image/965-Table9-1.png", "caption": "Table 9: Individual type and token level lemmatization accuracy for all 10 languages on development set for Lematus 20-ch models trained on 10k UDT tokens and 10k UDT tokens plus 10k UM types with contexts from Wikipedia. The numerically highest scores for each language are bold. For the summary of results see Table 4.", "page": 9, "bbox": { "x1": 102.72, "x2": 494.4, "y1": 250.56, "y2": 527.52 } }, { "filename": "../figure/image/965-Table2-1.png", "caption": "Table 2: Average type level lemmatization exact match accuracy on five development languages in type and token based training data scenarios. Colour-scale is computed over the whole Ambig. column and over all but Baseline rows for the other columns.", "page": 2, "bbox": { "x1": 319.68, "x2": 513.12, "y1": 62.4, "y2": 188.16 } }, { "filename": "../figure/image/965-Table6-1.png", "caption": "Table 6: Individual type level lemmatization accuracy for all 10 languages on development set, trained on 1k UDT types plus 1k UM types with contexts from Wikipedia. The numerically highest scores for each language are bold. For the summary of results see Table 3.", "page": 7, "bbox": { "x1": 306.71999999999997, "x2": 508.32, "y1": 113.75999999999999, "y2": 662.4 } }, { "filename": "../figure/image/965-Table5-1.png", "caption": "Table 5: Individual type level lemmatization accuracy for all 10 languages on development set, trained on 1k UDT types (no augmentation) with contexts from Wikipedia. The numerically highest scores for each language are bold. 
For the summary of results see Table 3.", "page": 7, "bbox": { "x1": 85.92, "x2": 289.44, "y1": 97.44, "y2": 662.4 } }, { "filename": "../figure/image/965-Figure1-1.png", "caption": "Figure 1: Average type level lemmatization exact match accuracy on unseen words of five development languages. X-axis: thousands of types in training data.", "page": 3, "bbox": { "x1": 72.0, "x2": 291.36, "y1": 61.44, "y2": 192.0 } }, { "filename": "../figure/image/965-Table3-1.png", "caption": "Table 3: Average lemmatization accuracy for all 10 languages, trained on 1k UDT types (No aug.), or 1k UDT plus 1k, 5k, or 10k UM types with contexts from Wikipedia. The numerically highest scores in each data setting are bold; ∗, †, and ‡ indicate statistically significant improvements over HMAM (Makarov and Clematide, 2018b), Lematus 0-ch and 20-ch, respectively (all p < 0.05; see text for details). Colour-scale is computed over the whole Ambig. column and over all but Baseline rows for the other columns.", "page": 3, "bbox": { "x1": 306.71999999999997, "x2": 528.0, "y1": 62.4, "y2": 393.12 } }, { "filename": "../figure/image/965-Table7-1.png", "caption": "Table 7: Individual type level lemmatization accuracy for all 10 languages on development set, trained on 1k UDT types plus 5k UM types with contexts from Wikipedia. The numerically highest scores for each language are bold. For the summary of results see Table 3.", "page": 8, "bbox": { "x1": 85.92, "x2": 296.15999999999997, "y1": 88.32, "y2": 652.8 } }, { "filename": "../figure/image/965-Table8-1.png", "caption": "Table 8: Individual type level lemmatization accuracy for all 10 languages on development set, trained on 1k UDT types plus 10k UM types with contexts from Wikipedia. The numerically highest scores for each language are bold. For the summary of results see Table 3.", "page": 8, "bbox": { "x1": 306.71999999999997, "x2": 510.24, "y1": 88.32, "y2": 652.8 } }, { "filename": "../figure/image/965-Table4-1.png", "caption": "Table 4: Lematus 20-ch average lemmatization type and token accuracy for all 10 languages, trained on 1k UDT types, 1k UDT augmented with 10k UM types, 10k UDT continuous tokens, or 10k UDT continuous tokens augmented with 10k UM types. Unless specified otherwise data consists of distinct types.", "page": 4, "bbox": { "x1": 310.56, "x2": 519.36, "y1": 62.4, "y2": 160.32 } }, { "filename": "../figure/image/965-Figure2-1.png", "caption": "Figure 2: Lematus 20-ch lemmatization accuracy for each language on all types in the dev sets.", "page": 4, "bbox": { "x1": 72.0, "x2": 291.36, "y1": 61.44, "y2": 175.2 } } ] }, "gem_id": "GEM-SciDuet-train-4" }, { "slides": { "0": { "title": "What is Automated Essay Scoring AES", "text": [ "Computer produces summative assessment for evaluation", "Aim: reduce human workload", "AES has been put into practical use by ETS from 1999" ], "page_nums": [ 2 ], "images": [] }, "1": { "title": "Prompt specific and Independent AES", "text": [ "Most existing AES approaches are prompt-specific", "Require human labels for each prompt to train", "Can achieve satisfying human-machine agreement", "Prompt-independent AES remains a challenge", "Only non-target human labels are available" ], "page_nums": [ 3 ], "images": [] }, "2": { "title": "Challenges in Prompt independent AES", "text": [ "Source Prompts Target Prompt", "Learn essays Predict target", "Previous approaches learn on source prompts", "Domain adaption [Phandi et al. EMNLP 2015] Cross-domain learning [Dong & Zhang, EMNLP", "Achieved Avg. 
QWK = 0.6395 at best with up to 100 labeled target essays", "Off-topic: essays written for source prompts are mostly irrelevant" ], "page_nums": [ 4, 5, 6, 7 ], "images": [] }, "3": { "title": "TDNN A Two stage Deep Neural Network for Prompt", "text": [ "Based on the idea of transductive transfer learning", "Learn on target essays", "Utilize the content of target essays to rate" ], "page_nums": [ 9 ], "images": [] }, "4": { "title": "The Two stage Architecture", "text": [ "Prompt-independent stage: train a shallow model to create pseudo labels on the target prompt", "Prompt-dependent stage: learn an end-to-end model to predict essay ratings for the target prompts" ], "page_nums": [ 10, 11 ], "images": [ "figure/image/966-Figure1-1.png" ] }, "5": { "title": "Prompt independent stage", "text": [ "Train a robust prompt-independent AES model", "Learning algorithm: RankSVM for AES", "Select confident essays written for the target prompt", "Predicted ratings in as negative examples", "Predicted ratings in as positive examples", "Converted to 0/1 labels", "Common sense: 8 is good, <5 is bad" ], "page_nums": [ 12, 13, 14, 15, 16, 17 ], "images": [] }, "6": { "title": "Prompt dependent stage", "text": [ "Train a hybrid deep model for a prompt-", "An end-to-end neural network with three parts" ], "page_nums": [ 18 ], "images": [] }, "7": { "title": "Architecture of the hybrid deep model", "text": [ "Multi-layer structure: Words (phrases) - Sentences Essay" ], "page_nums": [ 19, 20, 21, 22, 23, 24 ], "images": [ "figure/image/966-Figure2-1.png" ] }, "8": { "title": "Model Training", "text": [ "Training loss: MSE on 0/1 pseudo labels", "Validation metric: Kappa on 30% non-target essays", "Select the model that can best rate" ], "page_nums": [ 25 ], "images": [] }, "9": { "title": "Dataset and Metrics", "text": [ "We use the standard ASAP corpus", "8 prompts with >10K essays in total", "Prompt-independent AES: 7 prompts are used for training, 1 for testing", "Report on common human-machine agreement metrics", "Pearsons correlation coefficient (PCC)", "Spearmans correlation coefficient (SCC)", "Quadratic weighted Kappa (QWK)" ], "page_nums": [ 27 ], "images": [] }, "10": { "title": "Baselines", "text": [ "RankSVM based on prompt-independent handcrafted", "Also used in the prompt-independent stage in TDNN", "Two LSTM layer + linear layer", "CNN + LSTM + linear layer" ], "page_nums": [ 28 ], "images": [] }, "11": { "title": "RankSVM is the most robust baseline", "text": [ "High variance of DNN models performance on all 8 prompts", "Possibly caused by learning on non-target prompts RankSVM appears to be the most stable baseline Justifies the use of RankSVM in the first stage of TDNN" ], "page_nums": [ 29 ], "images": [] }, "12": { "title": "Comparison to the best baseline", "text": [ "TDNN outperforms the best baseline on 7 out of 8 prompts Performance improvements gained by learning on the target prompt" ], "page_nums": [ 30 ], "images": [] }, "13": { "title": "Average performance on 8 prompts", "text": [ "Method QWK PCC SCC" ], "page_nums": [ 31, 32, 33 ], "images": [] }, "14": { "title": "Sanity Check Relative Precision", "text": [ "How the quality of pseudo examples affects the performance of", "The sanctity of the selected essays, namely, the number of positive", "(negative) essays that are better (worse) than all negative (positive)", "Such relative precision is at least 80% and mostly beyond 90% on different prompts", "TDNN can at least learn", "from correct 0/1 labels" ], "page_nums": [ 34 ], 
"images": [] }, "15": { "title": "Conclusions", "text": [ "It is beneficial to learn an AES model on the target prompt", "Syntactic features are useful addition to the widely used Word2Vec embeddings", "Sanity check: small overlap between pos/neg examples", "Prompt-independent AES remains an open problem", "TDNN can achieve 0.68 at best" ], "page_nums": [ 35 ], "images": [] } }, "paper_title": "TDNN: A Two-stage Deep Neural Network for Prompt-independent Automated Essay Scoring", "paper_id": "966", "paper": { "title": "TDNN: A Two-stage Deep Neural Network for Prompt-independent Automated Essay Scoring", "abstract": "Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. To close this gap, a two-stage deep neural network (TDNN) is proposed. In particular, in the first stage, using the rated essays for nontarget prompts as the training data, a shallow model is learned to select essays with an extreme quality for the target prompt, serving as pseudo training data; in the second stage, an end-to-end hybrid deep model is proposed to learn a prompt-dependent rating model consuming the pseudo training data from the first step. Evaluation of the proposed TDNN on the standard ASAP dataset demonstrates a promising improvement for the prompt-independent AES task.", "text": [ { "id": 0, "string": "Introduction Automated essay scoring (AES) utilizes natural language processing and machine learning techniques to automatically rate essays written for a target prompt (Dikli, 2006) ." }, { "id": 1, "string": "Currently, the AES systems have been widely used in large-scale English writing tests, e.g." }, { "id": 2, "string": "Graduate Record Examination (GRE), to reduce the human efforts in the writing assessments (Attali and Burstein, 2006) ." }, { "id": 3, "string": "Existing AES approaches are promptdependent, where, given a target prompt, rated essays for this particular prompt are required for training (Dikli, 2006; Williamson, 2009; Foltz et al., 1999) ." }, { "id": 4, "string": "While the established models are effective (Chen and He, 2013; Taghipour and Ng, 2016; Alikaniotis et al., 2016; Cummins et al., 2016; , we argue that the models for prompt-independent AES are also desirable to allow for better feasibility and flexibility of AES systems especially when the rated essays for a target prompt are difficult to obtain or even unaccessible." }, { "id": 5, "string": "For example, in a writing test within a small class, students are asked to write essays for a target prompt without any rated examples, where the prompt-dependent methods are unlikely to provide effective AES due to the lack of training data." }, { "id": 6, "string": "Prompt-independent AES, however, has drawn little attention in the literature, where there only exists unrated essays written for the target prompt, as well as the rated essays for several non-target prompts." }, { "id": 7, "string": "We argue that it is not straightforward, if possible, to apply the established promptdependent AES methods for the mentioned prompt-independent scenario." 
}, { "id": 8, "string": "On one hand, essays for different prompts may differ a lot in the uses of vocabulary, the structure, and the grammatic characteristics; on the other hand, however, established prompt-dependent AES models are designed to learn from these prompt-specific features, including the on/off-topic degree, the tfidf weights of topical terms (Attali and Burstein, 2006; Dikli, 2006) , and the n-gram features extracted from word semantic embeddings (Dong and Zhang, 2016; Alikaniotis et al., 2016) ." }, { "id": 9, "string": "Consequently, the prompt-dependent models can hardly learn generalized rules from rated essays for nontarget prompts, and are not suitable for the promptindependent AES." }, { "id": 10, "string": "Being aware of this difficulty, to this end, a twostage deep neural network, coined as TDNN, is proposed to tackle the prompt-independent AES problem." }, { "id": 11, "string": "In particular, to mitigate the lack of the prompt-dependent labeled data, at the first stage, a shallow model is trained on a number of rated essays for several non-target prompts; given a target prompt and a set of essays to rate, the trained model is employed to generate pseudo training data by selecting essays with the extreme quality." }, { "id": 12, "string": "At the second stage, a novel end-to-end hybrid deep neural network learns prompt-dependent features from these selected training data, by considering semantic, part-of-speech, and syntactic features." }, { "id": 13, "string": "The contributions in this paper are threefold: 1) a two-stage learning framework is proposed to bridge the gap between the target and non-target prompts, by only consuming rated essays for nontarget prompts as training data; 2) a novel deep model is proposed to learn from pseudo labels by considering semantic, part-of-speech, and syntactic features; and most importantly, 3) to the best of our knowledge, the proposed TDNN is actually the first approach dedicated to addressing the prompt-independent AES." }, { "id": 14, "string": "Evaluation on the standard ASAP dataset demonstrates the effectiveness of the proposed method." }, { "id": 15, "string": "The rest of this paper is organized as follows." }, { "id": 16, "string": "In Section 2, we describe our novel TDNN model, including the two-stage framework and the proposed deep model." }, { "id": 17, "string": "Following that, we describe the setup of our empirical study in Section 3, thereafter present the results and provide analyzes in Section 4." }, { "id": 18, "string": "Section 5 recaps existing literature and put our work in context, before drawing final conclusions in Section 6." }, { "id": 19, "string": "Two-stage Deep Neural Network for AES In this section, the proposed two-stage deep neural network (TDNN) for prompt-independent AES is described." }, { "id": 20, "string": "To accurately rate an essay, on one hand, we need to consider its pertinence to the given prompt; on the other hand, the organization, the analyzes, as well as the uses of the vocabulary are all crucial for the assessment." }, { "id": 21, "string": "Henceforth, both prompt-dependent and -independent factors should be considered, but the latter ones actually do not require prompt-dependent training data." 
}, { "id": 22, "string": "Accordingly, in the proposed framework, a supervised ranking model is first trained to learn from prompt-independent data, hoping to roughly assess essays without considering the prompt; subsequently, given the test dataset, namely, a set of essays for a target prompt, a subset of essays are selected as positive and negative training data based on the prediction of the trained model from the first stage; ultimately, a novel deep model is proposed to learn both prompt-dependent and -independent factors on this selected subset." }, { "id": 23, "string": "As indicated in Figure 1 , the proposed framework includes two stages." }, { "id": 24, "string": "Prompt-independent stage." }, { "id": 25, "string": "Only the promptindependent factors are considered to train a shallow model, aiming to recognize the essays with the extreme quality in the test dataset, where the rated essays for non-target prompts are used for training." }, { "id": 26, "string": "Intuitively, one could recognize essays with the highest and the lowest scores correctly by solely examining their quality of writing, e.g., the number of typos, without even understanding them, and the prompt-independent features such as the number of grammatic and spelling errors should be sufficient to fulfill this screening procedure." }, { "id": 27, "string": "Accordingly, a supervised model trained solely on prompt-independent features is employed to identify the essays with the highest and lowest scores in a given set of essays for the target prompt, which are used as the positive and negative training data in the follow-up prompt-dependent learning phase." }, { "id": 28, "string": "Overview Prompt-dependent stage." }, { "id": 29, "string": "Intuitively, most essays are with a quality in between the extremes, requiring a good understanding of their meaning to make an accurate assessment, e.g., whether the examples from the essay are convincing or whether the analyzes are insightful, making the consideration of prompt-dependent features crucial." }, { "id": 30, "string": "To achieve that, a model is trained to learn from the comparison between essays with the highest and lowest scores for the target prompt according to the predictions from the first step." }, { "id": 31, "string": "Akin to the settings in transductive transfer learning (Pan and Yang, 2010), given essays for a particular prompt, quite a few confident essays at two extremes are selected and are used to train another model for a fine-grained content-based prompt-dependent assessment." }, { "id": 32, "string": "To enable this, a powerful deep model is proposed to consider the content of the essays from different perspectives using semantic, part-of-speech (POS) and syntactic network." }, { "id": 33, "string": "After being trained with the selected essays, the deep model is expected to memorize the properties of a good essay in response to the target prompt, thereafter accurately assessing all essays for it." }, { "id": 34, "string": "In Section 2.2, building blocks for the selection of the training data and the proposed deep model are described in details." }, { "id": 35, "string": "Building Blocks Select confident essays as training data." }, { "id": 36, "string": "The identification of the extremes is relatively simple, where a RankSVM (Joachims, 2002) is trained on essays for different non-target prompts, avoiding the risks of over-fitting some particular prompts." 
}, { "id": 37, "string": "A set of established prompt-independent features are employed, which are listed in Table 2 ." }, { "id": 38, "string": "Given a prompt and a set of essays for evaluation, to begin with, the trained RankSVM is used to assign prediction scores to individual prompt-essay pairs, which are uniformly transformed into a 10point scale." }, { "id": 39, "string": "Thereafter, the essays with predicted scores in [0, 4] and [8, 10] are selected as negative and positive examples respectively, serving as the bad and good templates for training in the next stage." }, { "id": 40, "string": "Intuitively, an essay with a score beyond eight out of a 10-point scale is considered good, while the one receiving less than or equal to four, is considered to be with a poor quality." }, { "id": 41, "string": "A hybrid deep model for fine-grained assessment." }, { "id": 42, "string": "To enable a prompt-dependent assessment, a model is desired to comprehensively capture the ways in which a prompt is described or discussed in an essay." }, { "id": 43, "string": "In this paper, semantic meaning, part-of-speech (POS), and the syntactic taggings of the token sequence from an essay are considered, grasping the quality of an essay for a target prompt." }, { "id": 44, "string": "The model architecture is summarized in Figure 2 ." }, { "id": 45, "string": "Intuitively, the model learns the semantic meaning of an essay by encoding it in terms of a sequence of word embeddings, denoted as − → e sem , hoping to understand what the essay is about; in addition, the part-of-speech information is encoded as a sequence of POS tag-gings, coined as − → e pos ; ultimately, the structural connections between different components in an essay (e.g., terms or phrases) are further captured via syntactic network, leading to − → e synt , where the model learns the organization of the essay." }, { "id": 46, "string": "Akin to (Li et al., 2015) and (Zhou and Xu, 2015) , bi-LSTM is employed as a basic component to encode a sequence." }, { "id": 47, "string": "Three features are separately captured using the stacked bi-LSTM layers as building blocks to encode different embeddings, whose outputs are subsequently concatenated and fed into several dense layers, generating the ultimate rating." }, { "id": 48, "string": "In the following, the architecture of the model is described in details." }, { "id": 49, "string": "-Semantic embedding." }, { "id": 50, "string": "Akin to the existing works (Alikaniotis et al., 2016; Taghipour and Ng, 2016) , semantic word embeddings, namely, the pre-trained 50-dimension GloVe (Pennington et al., 2014) , are employed." }, { "id": 51, "string": "On top of the word embeddings, two bi-LSTM layers are stacked, namely, the essay layer is constructed on top of the sentence layer, ending up with the semantic representation of the whole essay, which is denoted as − → e sem in Figure 2 ." }, { "id": 52, "string": "-Part-Of-Speech (POS) embeddings for individual terms are first generated by the Stanford Tagger (Toutanova et al., 2003) , where 36 different POS tags present." }, { "id": 53, "string": "Accordingly, individual words are embedded with 36-dimensional one-hot representation, and is transformed to a 50-dimensional vector through a lookup layer." }, { "id": 54, "string": "After that, two bi-LSTM layers are stacked, leading to − → e pos ." }, { "id": 55, "string": "Take Figure 3 for example, given a sentence \"Attention please, here is an example." 
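The first-stage selection described above rescales the RankSVM prediction scores onto a 10-point scale and keeps only essays at the two extremes ([0, 4] as negatives, [8, 10] as positives) as pseudo training data. The sketch below assumes a simple min-max rescaling, which the text does not specify beyond "uniformly transformed"; names are illustrative.

```python
# Sketch (not the authors' code) of the first-stage pseudo-labelling:
# rescale RankSVM scores to [0, 10] and keep the two extremes.
import numpy as np

def select_pseudo_examples(essays, svm_scores, neg_range=(0, 4), pos_range=(8, 10)):
    scores = np.asarray(svm_scores, dtype=float)
    span = scores.max() - scores.min()
    span = span if span > 0 else 1.0                      # guard against identical scores
    scaled = 10.0 * (scores - scores.min()) / span        # assumed uniform rescaling
    positives, negatives = [], []
    for essay, s in zip(essays, scaled):
        if pos_range[0] <= s <= pos_range[1]:
            positives.append((essay, 1.0))                # pseudo label 1 = good template
        elif neg_range[0] <= s <= neg_range[1]:
            negatives.append((essay, 0.0))                # pseudo label 0 = bad template
    return positives, negatives
```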
}, { "id": 56, "string": "\", it is first converted into a POS sequence using the tagger, namely, VB, VBP, RB, VBZ, DT, NN; thereafter it is further mapped to vector space through one-hot embedding and a lookup layer." }, { "id": 57, "string": "-Syntactic embedding aims at encoding an essay in terms of the syntactic relationships among different syntactic components, by encoding an essay recursively." }, { "id": 58, "string": "The Stanford Parser (Socher et al., 2013) is employed to label the syntactic structure of words and phrases in sentences, accounting for 59 different types in total." }, { "id": 59, "string": "Similar to (Tai et al., 2015), we opt for three stacked bi-LSTMs, aiming at encoding individual phrases, sentences, and ultimately the whole essay in sequence." }, { "id": 60, "string": "In particular, according to the hierarchical structure from a parsing tree, the phrase-level bi-LSTM first encodes different phrases by consuming syntactic embeddings (cf. Figure 2) from a lookup table of individual syntactic units in the tree; thereafter, the encoded dense layers in individual sentences are further consumed by a sentence-level bi-LSTM, ending up with sentence-level syntactic representations, which are ultimately combined by the essay-level bi-LSTM, resulting in $\\vec{e}_{synt}$." }, { "id": 61, "string": "For example, the parsed tree for a sentence \"Attention please, here is an example.\"" }, { "id": 62, "string": "is displayed in Figure 3." }, { "id": 63, "string": "To start with, the sentence is parsed into ((NP VP)(NP VP NP)), and the dense embeddings are fetched from a lookup table for all tokens, namely, NP and VP; thereafter, the phrase-level bi-LSTM encodes (NP VP) and (NP VP NP) separately, which are further consumed by the sentence-level bi-LSTM." }, { "id": 64, "string": "Afterward, the essay-level bi-LSTM further combines the representations of different sentences into $\\vec{e}_{synt}$." }, { "id": 65, "string": "(ROOT (S (S (NP (VB Attention)) (VP (VBP please))) (, ,) (NP (RB here)) (VP (VBZ is) (NP (DT an) (NN example))) (." }, { "id": 66, "string": ".)))" }, { "id": 67, "string": "Figure 3: An example of the context-free phrase structure grammar tree." }, { "id": 68, "string": "-Combination." }, { "id": 69, "string": "A feed-forward network linearly transforms the concatenated representations of an essay from the mentioned three perspectives into a scalar, which is further normalized into [0, 1] with a sigmoid function." }, { "id": 70, "string": "Objective and Training Objective." }, { "id": 71, "string": "Mean square error (MSE) is optimized, which is widely used as a loss function in regression tasks." }, { "id": 72, "string": "Given N pairs of a target prompt $p_i$ and an essay $e_i$, MSE measures the average value of the squared error between the normalized gold standard rating $r^{*}(p_i, e_i)$ and the predicted rating $r(p_i, e_i)$ assigned by the AES model, as summarized in Equation 1." }, { "id": 73, "string": "$\\frac{1}{N}\\sum_{i=1}^{N}\\left(r(p_i, e_i) - r^{*}(p_i, e_i)\\right)^{2}$ (1) Optimization." }, { "id": 74, "string": "Adam (Kingma and Ba, 2014) is employed to minimize the loss over the training data." }, { "id": 75, "string": "The initial learning rate η is set to 0.01 and the gradient is clipped between [−10, 10] during training." }, { "id": 76, "string": "In addition, dropout (Srivastava et al., 2014) is introduced for regularization with a dropout rate of 0.5, and 64 samples are used in each batch with batch normalization (Ioffe and Szegedy, 2015)."
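The combination and training recipe above (concatenate the three essay encodings, a feed-forward layer with a sigmoid output, MSE loss, Adam with learning rate 0.01, gradient clipping to [−10, 10], dropout 0.5) can be sketched compactly. The PyTorch code below abstracts the three bi-LSTM encoders away as pre-computed essay vectors; the hidden sizes and class names are assumptions, not the released model.

```python
# Minimal PyTorch sketch of the combination layer and training objective
# described above. Only the scoring head is shown; the semantic, POS and
# syntactic bi-LSTM encoders are assumed to produce fixed-size essay vectors.
import torch
import torch.nn as nn

class ScoringHead(nn.Module):
    def __init__(self, sem_dim, pos_dim, synt_dim, hidden=100):
        super().__init__()
        self.ff = nn.Sequential(
            nn.Linear(sem_dim + pos_dim + synt_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),          # normalize the predicted rating into [0, 1]
        )

    def forward(self, e_sem, e_pos, e_synt):
        return self.ff(torch.cat([e_sem, e_pos, e_synt], dim=-1)).squeeze(-1)

model = ScoringHead(sem_dim=100, pos_dim=100, synt_dim=100)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()             # Equation (1)

def train_step(batch):
    e_sem, e_pos, e_synt, gold = batch   # gold ratings already normalized to [0, 1]
    optimizer.zero_grad()
    loss = loss_fn(model(e_sem, e_pos, e_synt), gold)
    loss.backward()
    torch.nn.utils.clip_grad_value_(model.parameters(), 10.0)  # clip gradients to [-10, 10]
    optimizer.step()
    return loss.item()
```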
}, { "id": 77, "string": "30% of the training data are reserved for validation." }, { "id": 78, "string": "In addition, early stopping (Yao et al., 2007) is employed according to the validation loss, namely, the training is terminated if no decrease of the loss is observed for ten consecutive epochs." }, { "id": 79, "string": "Table 1 (statistics for the ASAP dataset): prompt 1: 1783 essays, average length 350, score range 2-12; prompt 2: 1800 essays, 350, 1-6; prompt 3: 1726, 150, 0-3; prompt 4: 1772, 150, 0-3; prompt 5: 1805, 150, 0-4; prompt 6: 1800, 150, 0-4; prompt 7: 1569, 250, 0-30; prompt 8: 723, 650, 0-60." }, { "id": 80, "string": "Once training is finished, the model with the best quadratic weighted kappa on the validation set is selected, akin to prior work." }, { "id": 81, "string": "3 Experimental Setup Dataset." }, { "id": 82, "string": "The Automated Student Assessment Prize (ASAP) dataset has been widely used for AES (Alikaniotis et al., 2016; Chen and He, 2013), and is also employed as the prime evaluation instrument herein." }, { "id": 83, "string": "In total, ASAP consists of eight sets of essays, each of which is associated with one prompt, and is originally written by students between Grade 7 and Grade 10." }, { "id": 84, "string": "As summarized in Table 1, essays from different sets differ in their rating criteria, length, as well as the rating distribution." }, { "id": 85, "string": "Cross-validation." }, { "id": 86, "string": "To fully employ the rated data, a prompt-wise eight-fold cross validation on the ASAP is used for evaluation." }, { "id": 87, "string": "In each fold, essays corresponding to a prompt are reserved for testing, and the remaining essays are used as training data." }, { "id": 88, "string": "Evaluation metric." }, { "id": 89, "string": "The model outputs are first uniformly re-scaled into [0, 10], mirroring the range of ratings in practice." }, { "id": 90, "string": "Thereafter, akin to (Yannakoudakis et al., 2011; Chen and He, 2013; Alikaniotis et al., 2016), we report our results primarily based on the quadratic weighted Kappa (QWK), examining the agreement between the predicted ratings and the ground truth." }, { "id": 91, "string": "Pearson correlation coefficient (PCC) and Spearman rank-order correlation coefficient (SCC) are also reported." }, { "id": 92, "string": "The correlations obtained from individual folds, as well as the average over all eight folds, are reported as the ultimate results." }, { "id": 93, "string": "Competing models." }, { "id": 94, "string": "Since the prompt-independent AES is of interest in this work, the existing AES models are adapted for prompt-independent rating prediction, serving as baselines." }, { "id": 95, "string": "This is due to the fact that the prompt-dependent and -independent models differ a lot in terms of problem settings and model designs, especially in their requirements for the training data, where the latter ones release the prompt-dependent requirements and thereby are accessible to more data." }, { "id": 96, "string": "Details of this dataset can be found at https://www.kaggle.com/c/asap-aes." }, { "id": 97, "string": "Table 2 lists the handcrafted features used in learning the prompt-independent RankSVM."
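Since quadratic weighted kappa (QWK) is the primary agreement metric above, a self-contained reference implementation may help; it follows the standard definition (quadratic disagreement weights, observed vs. chance-expected rating matrices) and assumes the ratings have been rounded to integers on a shared scale.

```python
# Quadratic weighted kappa between two integer rating sequences.
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, min_rating=None, max_rating=None):
    a = np.asarray(rater_a, dtype=int)
    b = np.asarray(rater_b, dtype=int)
    lo = min(a.min(), b.min()) if min_rating is None else min_rating
    hi = max(a.max(), b.max()) if max_rating is None else max_rating
    n = hi - lo + 1

    observed = np.zeros((n, n))
    for x, y in zip(a, b):
        observed[x - lo, y - lo] += 1

    hist_a = observed.sum(axis=1)
    hist_b = observed.sum(axis=0)
    expected = np.outer(hist_a, hist_b) / len(a)     # chance-agreement matrix

    idx = np.arange(n)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n - 1) ** 2  # quadratic penalty

    return 1.0 - (weights * observed).sum() / (weights * expected).sum()
```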
}, { "id": 98, "string": "Table 2 (handcrafted features for the prompt-independent RankSVM): (1) mean and variance of word length in characters; (2) mean and variance of sentence length in words; (3) essay length in characters and words; (4) number of prepositions and commas; (5) number of unique words in an essay; (6) mean number of clauses per sentence; (7) mean length of clauses; (8) maximum number of clauses of a sentence in an essay; (9) number of spelling errors; (10) average depth of the parser tree of each sentence in an essay; (11) average depth of each leaf node in the parser tree of each sentence." }, { "id": 99, "string": "-RankSVM, using handcrafted features for AES (Yannakoudakis et al., 2011; Chen et al., 2014), is trained on a set of pre-defined prompt-independent features as listed in Table 2, where the features are standardized beforehand to remove the mean and variance." }, { "id": 100, "string": "The RankSVM is also used for the prompt-independent stage in our proposed TDNN model." }, { "id": 101, "string": "In particular, the linear kernel RankSVM is employed, where C is set to 5 according to our pilot experiments." }, { "id": 102, "string": "-2L-LSTM." }, { "id": 103, "string": "Two-layer bi-LSTM with GloVe for AES (Alikaniotis et al., 2016) is employed as another baseline." }, { "id": 104, "string": "Regularized word embeddings are dropped to avoid over-fitting the prompt-specific features." }, { "id": 105, "string": "-CNN-LSTM." }, { "id": 106, "string": "This model (Taghipour and Ng, 2016) employs a convolutional (CNN) layer over one-hot representations of words, followed by an LSTM layer to encode word sequences in a given essay." }, { "id": 107, "string": "A linear layer with sigmoid activation function is then employed to predict the essay rating." }, { "id": 108, "string": "-CNN-LSTM-ATT." }, { "id": 109, "string": "This model employs a CNN layer to encode word sequences into sentences, followed by an LSTM layer to generate the essay representation." }, { "id": 110, "string": "An attention mechanism is added to model the influence of individual sentences on the final essay representation." }, { "id": 111, "string": "For the proposed TDNN model, as introduced in Section 2.2, different variants of TDNN are examined by using one or multiple components out of the semantic, POS and the syntactic networks." }, { "id": 112, "string": "The combinations being considered are listed in the following." }, { "id": 113, "string": "In particular, the dimensions of POS tags and syntactic network are fixed to 50, whereas the sizes of the hidden units in LSTM, as well as the output units of the linear layers are tuned by grid search." }, { "id": 114, "string": "-TDNN(Sem) only includes the semantic building block, which is similar to the two-layer LSTM neural network from (Alikaniotis et al., 2016) but without regularizing the word embeddings; -TDNN(Sem+POS) employs the semantic and the POS building blocks; -TDNN(Sem+Synt) uses the semantic and the syntactic network building blocks; -TDNN(POS+Synt) includes the POS and the syntactic network building blocks; -TDNN(ALL) employs all three building blocks." }, { "id": 115, "string": "The use of POS or syntactic network alone is not presented for brevity given the facts that they perform no better than TDNN(POS+Synt) in our pilot experiments."
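A few of the Table 2 features above can be computed with nothing but string processing; the sketch below covers the length, comma and vocabulary statistics and deliberately omits the preposition, clause, spelling and parse-depth features, which would require a POS tagger, parser or spell-checker. Names and the naive sentence splitter are illustrative, not the authors' implementation.

```python
# Sketch of a subset of the prompt-independent handcrafted features in Table 2.
import re
import numpy as np

def basic_features(essay: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    words = essay.split()
    word_lens = np.array([len(w) for w in words]) if words else np.zeros(1)
    sent_lens = np.array([len(s.split()) for s in sentences]) if sentences else np.zeros(1)
    return {
        "mean_word_len": word_lens.mean(),
        "var_word_len": word_lens.var(),
        "mean_sent_len": sent_lens.mean(),
        "var_sent_len": sent_lens.var(),
        "essay_len_chars": len(essay),
        "essay_len_words": len(words),
        "num_commas": essay.count(","),
        "num_unique_words": len({w.lower() for w in words}),
    }
```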
}, { "id": 116, "string": "Source code of the TDNN model is publicly available to enable further comparison 3 ." }, { "id": 117, "string": "Results and Analyzes In this section, the evaluation results for different competing methods are compared and analyzed in terms of their agreements with the manual ratings using three correlation metrics, namely, QWK, PCC and SCC, where the best results for each prompt is highlighted in bold in Table 3 ." }, { "id": 118, "string": "It can be seen that, for seven out of all eight prompts, the proposed TDNN variants outperform the baselines by a margin in terms of QWK, and the TDNN variant with semantic and syntactic features, namely, TDNN(Sem+Synt), consistently performs the best among different competing methods." }, { "id": 119, "string": "More precisely, as indicated in the bottom right corner in Table 3 , on average, TDNN(Sem+Synt) outperforms the baselines by at least 25.52% under QWK, by 10.28% under PCC, and by 15.66% under SCC, demonstrating that the proposed model not only correlates better with the manual ratings in terms of QWK, but also linearly (PCC) and monotonically (SCC) correlates better with the manual ratings." }, { "id": 120, "string": "As for the four baselines, note that, the relatively underperformed deep models suffer from larger variances of performance under different prompts, e.g., for prompts two and eight, 2L-LSTM's QWK is lower than 0.3." }, { "id": 121, "string": "This actually confirms our choice of RankSVM for the first stage in TDNN, since a more complicated model (like 2L-LSTM) may end up with learning prompt-dependent signals, making it unsuitable for the prompt-independent rating prediction." }, { "id": 122, "string": "As a comparison, RankSVM performs more stable among different prompts." }, { "id": 123, "string": "As for the different TDNN variants, it turns out that the joint uses of syntactic network with semantic or POS features can lead to better performances." }, { "id": 124, "string": "This indicates that, when learning the prompt-dependent signals, apart from the widelyused semantic features, POS features and the sentence structure taggings (syntactic network) are also essential in learning the structure and the arrangement of an essay in response to a particular prompt, thereby being able to improve the results." }, { "id": 125, "string": "It is also worth mentioning, however, when using all three features, the TDNN actually performs worse than when only using (any) two features." }, { "id": 126, "string": "One possible explanation is that the uses of all three features result in a more complicated model, which over-fits the training data." }, { "id": 127, "string": "In addition, recall that the prompt-independent RankSVM model from the first stage enables the proposed TDNN in learning prompt-dependent information without manual ratings for the target prompt." }, { "id": 128, "string": "Therefore, one would like to understand how good the trained RankSVM is in feeding training data for the model in the second stage." }, { "id": 129, "string": "In particular, the precision, recall and F-score (P/R/F) of the essays selected by RanknSVM, namely, the negative ones rated between [0, 4], and the positive ones rated between [8, 10] , are displayed in Figure 4 ." }, { "id": 130, "string": "It can be seen that the P/R/F scores of both positive and negative classes differ a lot among different prompts." 
}, { "id": 131, "string": "Moreover, it turns out that the P/R/F scores do not necessarily correlate with the performance of the TDNN model." }, { "id": 132, "string": "Take TDNN(Sem+Synt), the best TDNN variant, as an example: as indicated in Table 4, the performance and the P/R/F scores of the pseudo examples are only weakly correlated in most cases." }, { "id": 133, "string": "To gain a better understanding of how the quality of pseudo examples affects the performance of TDNN, the sanctity of the selected essays is examined." }, { "id": 134, "string": "In Figure 5, the relative precision of the selected positive and negative training data by RankSVM is displayed for all eight prompts in terms of their concordance with the manual ratings, by computing the number of positive (negative) essays that are better (worse) than all negative (positive) essays." }, { "id": 135, "string": "Table 3 (fragment, QWK/PCC/SCC rows for an earlier prompt block): TDNN(POS+Synt) .7561 .7591 .7440 .7332 .7983 .7866 .6593 .6759 .7354; TDNN(All) .7527 .7609 .7251 .7302 .7974 .7794 .6557 .6874 .7350." }, { "id": 136, "string": "Table 3 (QWK/PCC/SCC on Prompt 7, Prompt 8 and Average): RankSVM .5858 .6436 .6429, .4075 .5889 .6087, .5462 .6072 .5976; 2L-LSTM .6690 .7637 .7607, .2486 .5137 .4979, .4687 .6548 .6214; CNN-LSTM .6609 .6849 .6865, .3812 .4666 .3872, .5362 .6569 .6139; CNN-LSTM-ATT .6002 .6314 .6223, .4468 .5358 .4536, .5057 .6535 .6368." }, { "id": 137, "string": "TDNN(Sem) .5482 .6957 .6902, .5003 .6083 .6545, .5875 .6779 .6795; TDNN(Sem+POS) .6239 .7111 .7243, .5519 .6219 .6614, .6582 .7103 .7130; TDNN(Sem+Synt) .6587 .7201 .7380, .5741 .6324 .6713, .6856 .7244 .7365." }, { "id": 138, "string": "TDNN(POS+Synt) .6464 .7172 .7349, .5631 .6281 .6698, .6784 .7189 .7322; TDNN(All) .6396 .7114 .7300, .5622 .6267 .6631, .6682 .7176 .7258." }, { "id": 139, "string": "It can be seen that such relative precision is at least 80% and mostly beyond 90% on different prompts, indicating that the overlap of the selected positive and negative essays is fairly small, guaranteeing that the deep model in the second stage at least learns from correct labels, which are crucial for the success of our TDNN model." }, { "id": 140, "string": "Beyond that, we further investigate the class balance of the selected training data from the first stage, which could also influence the ultimate results." }, { "id": 141, "string": "The numbers of selected positive and negative essays are reported in Table 5, where for prompts three and eight the training data suffers from a serious imbalance problem, which may explain their lower performance (namely, the two lowest QWKs among different prompts)." }, { "id": 142, "string": "On one hand, this is actually determined by the real distribution of ratings for a particular prompt, e.g., how many essays have an extreme quality for a given prompt in the target data." }, { "id": 143, "string": "On the other hand, a fine-grained tuning of the RankSVM (e.g., tuning C+ and C− for positive and negative examples separately) may partially resolve the problem, which is left for future work." }, { "id": 144, "string": "Related Work Classical regression and classification algorithms are widely used for learning the rating model based on a variety of text features including lexical, syntactic, discourse and semantic features (Larkey, 1998; Rudner, 2002; Attali and Burstein, 2006; Mcnamara et al., 2015; Phandi et al., 2015)."
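The "relative precision" sanity check described above counts how many selected positive essays outscore every selected negative essay under the gold ratings (and vice versa). A minimal sketch, with illustrative names:

```python
# Relative precision of the pseudo-labelled essays, as described above.
def relative_precision(pos_gold_scores, neg_gold_scores):
    max_neg = max(neg_gold_scores)
    min_pos = min(pos_gold_scores)
    pos_ok = sum(1 for s in pos_gold_scores if s > max_neg) / len(pos_gold_scores)
    neg_ok = sum(1 for s in neg_gold_scores if s < min_pos) / len(neg_gold_scores)
    return pos_ok, neg_ok
```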
}, { "id": 145, "string": "There are also approaches that see AES as a preference ranking problem by applying learning to ranking algorithms to learn the rating model." }, { "id": 146, "string": "Results show improvement of learning to rank approaches over classical regression and classification algorithms (Chen et al., 2014; Yannakoudakis et al., 2011) ." }, { "id": 147, "string": "In addition, Chen & He propose to incorporate the evaluation metric into the loss function of listwise learning to rank for AES (Chen and He, 2013) ." }, { "id": 148, "string": "Recently, there have been efforts in developing AES approaches based on deep neural networks (DNN), for which feature engineering is not required." }, { "id": 149, "string": "Taghipour & Ng explore a variety of neural network model architectures based on recurrent neural networks which can effectively encode the information required for essay scoring and learn the complex connections in the data through the non-linear neural layers (Taghipour and Ng, 2016) ." }, { "id": 150, "string": "Alikaniotis et al." }, { "id": 151, "string": "introduce a neural network model to learn the extent to which specific words contribute to the text's score, which is embedded in the word representations." }, { "id": 152, "string": "Then a two-layer bi-directional Long-Short Term Memory networks (bi-LSTM) is used to learn the meaning of texts, and finally the essay score is predicted through a mutli-layer feed-forward network (Alikaniotis et al., 2016) ." }, { "id": 153, "string": "Dong & Zhang employ a hierarchical convolutional neural network (CN-N) model, with a lower layer representing sentence structure and an upper layer representing essay structure based on sentence representations, to learn features automatically (Dong and Zhang, 2016) ." }, { "id": 154, "string": "This model is later improved by employing attention layers." }, { "id": 155, "string": "Specifically, the model learns text representation with LSTMs which can model the coherence and co-reference among sequences of words and sentences, and uses attention pooling to capture more relevant words and sentences that contribute to the final quality of essays ." }, { "id": 156, "string": "Song et al." }, { "id": 157, "string": "propose a deep model for identifying discourse modes in an essay (Song et al., 2017) ." }, { "id": 158, "string": "While the literature has shown satisfactory performance of prompt-dependent AES, how to achieve effective essay scoring in a promptindependent setting remains to be explored." }, { "id": 159, "string": "Chen & He studied the usefulness of promptindependent text features and achieved a humanmachine rating agreement slightly lower than the use of all text features (Chen and He, 2013) for prompt-dependent essay scoring prediction." }, { "id": 160, "string": "A constrained multi-task pairwise preference learning approach was proposed in (Cummins et al., 2016) to combine essays from multiple prompts for training." }, { "id": 161, "string": "However, as shown by (Dong and Zhang, 2016; Zesch et al., 2015; Phandi et al., 2015) , straightforward applications of existing AES methods for prompt-independent AES lead to a poor performance." }, { "id": 162, "string": "Conclusions & Future Work This study aims at addressing the promptindependent automated essay scoring (AES), where no rated essay for the target prompt is available." 
}, { "id": 163, "string": "As demonstrated in the experiments, two kinds of established prompt-dependent AES models, namely, RankSVM for AES (Yannakoudakis et al., 2011; Chen et al., 2014) and the deep models for AES (Alikaniotis et al., 2016; Taghipour and Ng, 2016; , fail to provide satisfactory performances, justifying our arguments in Section 1 that the application of estab-lished prompt-dependent AES models on promptindependent AES is not straightforward." }, { "id": 164, "string": "Therefore, a two-stage TDNN learning framework was proposed to utilize the prompt-independent features to generate pseudo training data for the target prompt, on which a hybrid deep neural network model is proposed to learn a rating model consuming semantic, part-of-speech, and syntactic signals." }, { "id": 165, "string": "Through the experiments on the ASAP dataset, the proposed TDNN model outperforms the baselines, and leads to promising improvement in the human-machine agreement." }, { "id": 166, "string": "Given that our approach in this paper is similar to the methods for transductive transfer learning (Pan and Yang, 2010), we argue that the proposed TDNN could be further improved by migrating the non-target training data to the target prompt (Busto and Gall, 2017) ." }, { "id": 167, "string": "Further study of the uses of transfer learning algorithms on promptindependent AES needs to be undertaken." } ], "headers": [ { "section": "Introduction", "n": "1", "start": 0, "end": 18 }, { "section": "Two-stage Deep Neural Network for AES", "n": "2", "start": 19, "end": 27 }, { "section": "Overview", "n": "2.1", "start": 28, "end": 34 }, { "section": "Building Blocks", "n": "2.2", "start": 35, "end": 69 }, { "section": "Objective and Training", "n": "2.3", "start": 70, "end": 116 }, { "section": "Results and Analyzes", "n": "4", "start": 117, "end": 143 }, { "section": "Related Work", "n": "5", "start": 144, "end": 161 }, { "section": "Conclusions & Future Work", "n": "6", "start": 162, "end": 167 } ], "figures": [ { "filename": "../figure/image/966-Figure1-1.png", "caption": "Figure 1: The architecture of the TDNN framework for prompt-independent AES.", "page": 1, "bbox": { "x1": 313.92, "x2": 519.36, "y1": 176.64, "y2": 294.24 } }, { "filename": "../figure/image/966-Table4-1.png", "caption": "Table 4: Linear correlations between the performance of TDNN(Sem+Synt) and the precision, recall, and F-score of the selected pseudo examples.", "page": 6, "bbox": { "x1": 72.0, "x2": 286.08, "y1": 565.4399999999999, "y2": 643.1999999999999 } }, { "filename": "../figure/image/966-Table3-1.png", "caption": "Table 3: Correlations between AES and manual ratings for different competing methods are reported for individual prompts. The average results among different prompts are summarized in the bottom right. 
The best results are highlighted in bold for individual prompts.", "page": 6, "bbox": { "x1": 72.0, "x2": 503.03999999999996, "y1": 62.879999999999995, "y2": 498.24 } }, { "filename": "../figure/image/966-Table5-1.png", "caption": "Table 5: The numbers of the selected positive and negative essays for each prompt.", "page": 6, "bbox": { "x1": 73.92, "x2": 288.0, "y1": 692.64, "y2": 732.0 } }, { "filename": "../figure/image/966-Figure4-1.png", "caption": "Figure 4: The precision, recall and F-score of the pseudo negative or positive examples, which are rated within [0, 4] or [8, 10] by RankSVM.", "page": 7, "bbox": { "x1": 56.64, "x2": 519.36, "y1": 66.72, "y2": 262.56 } }, { "filename": "../figure/image/966-Figure5-1.png", "caption": "Figure 5: The sanctity of the selected positive and negative essays by RankSVM. The x-axis indicates different prompts and the y-axis is the relative precision.", "page": 7, "bbox": { "x1": 45.6, "x2": 310.08, "y1": 316.8, "y2": 516.0 } }, { "filename": "../figure/image/966-Figure3-1.png", "caption": "Figure 3: An example of the context-free phrase structure grammar tree.", "page": 3, "bbox": { "x1": 72.0, "x2": 292.32, "y1": 580.8, "y2": 724.3199999999999 } }, { "filename": "../figure/image/966-Figure2-1.png", "caption": "Figure 2: The model architecture of the proposed hybrid deep learning model.", "page": 3, "bbox": { "x1": 72.0, "x2": 526.0799999999999, "y1": 61.44, "y2": 276.0 } }, { "filename": "../figure/image/966-Table1-1.png", "caption": "Table 1: Statistics for the ASAP dataset.", "page": 4, "bbox": { "x1": 84.96, "x2": 277.44, "y1": 62.4, "y2": 165.12 } }, { "filename": "../figure/image/966-Table2-1.png", "caption": "Table 2: Handcrafted features used in learning the prompt-independent RankSVM.", "page": 4, "bbox": { "x1": 306.71999999999997, "x2": 515.04, "y1": 62.4, "y2": 225.12 } } ] }, "gem_id": "GEM-SciDuet-train-5" }, { "slides": { "0": { "title": "The task", "text": [ "Why AstraZeneca plc Dixons Carphone PLC Are Red-Hot Growth", "Training data: 1142 samples, 960 headlines/sentences.", "Testing data: 491 samples, 461 headlines/sentences." ], "page_nums": [ 2 ], "images": [] }, "1": { "title": "Models", "text": [ "1. Support Vector Regression (SVR) [1]", "2. Bi-directional Long Short-Term Memory BLSTM [2][3]" ], "page_nums": [ 4 ], "images": [] }, "2": { "title": "Pre Processing and Additional data used", "text": [ "Used 189, 206 financial articles (e.g. Financial Times) that were", "manually downloaded from Factiva1 to create a Word2Vec model [5]2.", "These were created using Gensim3." ], "page_nums": [ 5 ], "images": [] }, "3": { "title": "Support Vector Regression SVR 1", "text": [ "Features and settings that we changed", "1. Tokenisation - Whitespace or Unitok4", "2. N-grams - uni-grams, bi-grams and both.", "3. SVR settings - penalty parameter C and epsilon parameter." ], "page_nums": [ 6 ], "images": [] }, "4": { "title": "Word Replacements", "text": [ "AstraZeneca PLC had an improved performance where as Dixons", "companyname had an posword performance where as companyname" ], "page_nums": [ 7 ], "images": [] }, "5": { "title": "Two BLSTM models", "text": [ "Drop out between layers", "25 times trained over", "Early stopping used to" ], "page_nums": [ 8 ], "images": [] }, "7": { "title": "SVR best features", "text": [ "Using uni-grams and bi-grams to be the best. 2.4% improvement", "Using a tokeniser always better. 
Affects bi-gram results the most.", "1% improvement using Unitok5 over whitespace.", "SVR parameter settings important 8% difference between using", "Incorporating the target aspect increased performance. 0.3%", "Using all word replacements. N=10 for POS and NEG words and", "N=0 for company. 0.8% improvement using company and 0.2% for" ], "page_nums": [ 11 ], "images": [] }, "8": { "title": "Results across the different metrics", "text": [ "Metric 1 was the final metric used." ], "page_nums": [ 13 ], "images": [] }, "9": { "title": "Future Work", "text": [ "1. Incorporate aspects into the BLSTMs shown to be useful by Wang", "2. Improve BLSTMs by using an attention model Wang et al. [7].", "3. Add known financial sentiment lexicon into the LSTM model [6]." ], "page_nums": [ 16 ], "images": [] }, "10": { "title": "Summary", "text": [ "1. BLSTM outperform SVRs with minimal feature engineering.", "2. The future is to incorporate more financial information into the" ], "page_nums": [ 17 ], "images": [] } }, "paper_title": "Lancaster A at SemEval-2017 Task 5: Evaluation metrics matter: predicting sentiment from financial news headlines", "paper_id": "970", "paper": { "title": "Lancaster A at SemEval-2017 Task 5: Evaluation metrics matter: predicting sentiment from financial news headlines", "abstract": "This paper describes our participation in Task 5 track 2 of SemEval 2017 to predict the sentiment of financial news headlines for a specific company on a continuous scale between -1 and 1. We tackled the problem using a number of approaches, utilising a Support Vector Regression (SVR) and a Bidirectional Long Short-Term Memory (BLSTM). We found an improvement of 4-6% using the LSTM model over the SVR and came fourth in the track. We report a number of different evaluations using a finance specific word embedding model and reflect on the effects of using different evaluation metrics.", "text": [ { "id": 0, "string": "Introduction The objective of Task 5 Track 2 of SemEval (2017) was to predict the sentiment of news headlines with respect to companies mentioned within the headlines." }, { "id": 1, "string": "This task can be seen as a financespecific aspect-based sentiment task (Nasukawa and Yi, 2003) ." }, { "id": 2, "string": "The main motivations of this task is to find specific features and learning algorithms that will perform better for this domain as aspect based sentiment analysis tasks have been conducted before at SemEval (Pontiki et al., 2014) ." }, { "id": 3, "string": "Domain specific terminology is expected to play a key part in this task, as reporters, investors and analysts in the financial domain will use a specific set of terminology when discussing financial performance." }, { "id": 4, "string": "Potentially, this may also vary across different financial domains and industry sectors." }, { "id": 5, "string": "Therefore, we took an exploratory approach and investigated how various features and learning algorithms perform differently, specifically SVR and BLSTMs." }, { "id": 6, "string": "We found that BLSTMs outperform an SVR without having any knowledge of the company that the sentiment is with respect to." }, { "id": 7, "string": "For replicability purposes, with this paper we are releasing our source code 1 and the finance specific BLSTM word embedding model 2 ." }, { "id": 8, "string": "Related Work There is a growing amount of research being carried out related to sentiment analysis within the financial domain." 
}, { "id": 9, "string": "This work ranges from domainspecific lexicons (Loughran and McDonald, 2011) and lexicon creation (Moore et al., 2016) to stock market prediction models (Peng and Jiang, 2016; Kazemian et al., 2016) ." }, { "id": 10, "string": "Peng and Jiang (2016) used a multi layer neural network to predict the stock market and found that incorporating textual features from financial news can improve the accuracy of prediction." }, { "id": 11, "string": "Kazemian et al." }, { "id": 12, "string": "(2016) showed the importance of tuning sentiment analysis to the task of stock market prediction." }, { "id": 13, "string": "However, much of the previous work was based on numerical financial stock market data rather than on aspect level financial textual data." }, { "id": 14, "string": "In aspect based sentiment analysis, there have been many different techniques used to predict the polarity of an aspect as shown in SemEval-2016 task 5 (Pontiki et al., 2014 )." }, { "id": 15, "string": "The winning system (Brun et al., 2016 ) used many different linguistic features and an ensemble model, and the runner up (Kumar et al., 2016) used uni-grams, bi-grams and sentiment lexicons as features for a Support Vector Machine (SVM)." }, { "id": 16, "string": "Deep learning methods have also been applied to aspect polarity prediction." }, { "id": 17, "string": "Ruder et al." }, { "id": 18, "string": "(2016) created a hierarchical BLSTM with a sentence level BLSTM inputting into a review level BLSTM thus allowing them to take into account inter-and intra-sentence context." }, { "id": 19, "string": "They used only word embeddings making their system less dependent on extensive feature engineering or manual feature creation." }, { "id": 20, "string": "This system outperformed all others on certain languages on the SemEval-2016 task 5 dataset (Pontiki et al., 2014) and on other languages performed close to the best systems." }, { "id": 21, "string": "Wang et al." }, { "id": 22, "string": "(2016) also created an LSTM based model using word embeddings but instead of a hierarchical model it was a one layered LSTM with attention which puts more emphasis on learning the sentiment of words specific to a given aspect." }, { "id": 23, "string": "Data The training data published by the organisers for this track was a set of headline sentences from financial news articles where each sentence was tagged with the company name (which we treat as the aspect) and the polarity of the sentence with respect to the company." }, { "id": 24, "string": "There is the possibility that the same sentence occurs more than once if there is more than one company mentioned." }, { "id": 25, "string": "The polarity was a real value between -1 (negative sentiment) and 1 (positive sentiment)." }, { "id": 26, "string": "We additionally trained a word2vec (Mikolov et al., 2013) word embedding model 3 on a set of 189,206 financial articles containing 161,877,425 tokens, that were manually downloaded from Factiva 4 ." }, { "id": 27, "string": "The articles stem from a range of sources including the Financial Times and relate to companies from the United States only." }, { "id": 28, "string": "We trained the model on domain specific data as it has been shown many times that the financial domain can contain very different language." }, { "id": 29, "string": "System description Even though we have outlined this task as an aspect based sentiment task, this is instantiated in only one of the features in the SVR." 
}, { "id": 30, "string": "The following two subsections describe the two approaches, first SVR and then BLSTM." }, { "id": 31, "string": "Key implementation details are exposed here in the paper, but we have released the source code and word embedding models to aid replicability and further experimentation." }, { "id": 32, "string": "SVR The system was created using ScitKit learn (Pedregosa et al., 2011) linear Support Vector Regression model (Drucker et al., 1997) ." }, { "id": 33, "string": "We exper-imented with the following different features and parameter settings: Tokenisation For comparison purposes, we tested whether or not a simple whitespace tokeniser can perform just as well as a full tokeniser, and in this case we used Unitok 5 ." }, { "id": 34, "string": "N-grams We compared word-level uni-grams and bi-grams separately and in combination." }, { "id": 35, "string": "SVR parameters We tested different penalty parameters C and different epsilon parameters of the SVR." }, { "id": 36, "string": "Word Replacements We tested replacements to see if generalising words by inserting special tokens would help to reduce the sparsity problem." }, { "id": 37, "string": "We placed the word replacements into three separate groups: 1." }, { "id": 38, "string": "Company -When a company was mentioned in the input headline from the list of companies in the training data marked up as aspects, it was replaced by a company special token." }, { "id": 39, "string": "2." }, { "id": 40, "string": "Positive -When a positive word was mentioned in the input headline from a list of positive words (which was created using the N most similar words based on cosine distance) to 'excellent' using the pre-trained word2vec model." }, { "id": 41, "string": "3." }, { "id": 42, "string": "Negative -The same as the positive group however the word used was 'poor' instead of 'excellent'." }, { "id": 43, "string": "In the positive and negative groups, we chose the words 'excellent' and 'poor' following Turney (2002) to group the terms together under nondomain specific sentiment words." }, { "id": 44, "string": "Target aspect In order to incorporated the company as an aspect, we employed a boolean vector to represent the sentiment of the sentence." }, { "id": 45, "string": "This was done in order to see if the system could better differentiate the sentiment when the sentence was the same but the company was different." }, { "id": 46, "string": "BLSTM We created two different Bidirectional (Graves and Schmidhuber, 2005 ) Long Short-Term Memory (Hochreiter and Schmidhuber, 1997) using the Python Keras library (Chollet, 2015) with tensor flow backend (Abadi et al., 2016) ." }, { "id": 47, "string": "We choose an LSTM model as it solves the vanishing gradients problem of Recurrent Neural Networks." }, { "id": 48, "string": "We used a bidirectional model as it allows us to capture information that came before and after instead of just before, thereby allowing us to capture more relevant context within the model." }, { "id": 49, "string": "Practically, a BLSTM is two LSTMs one going forward through the tokens the other in reverse order and in our models concatenating the resulting output vectors together at each time step." }, { "id": 50, "string": "The BLSTM models take as input a headline sentence of size L tokens 6 where L is the length of the longest sentence in the training texts." }, { "id": 51, "string": "Each word is converted into a 300 dimension vector using the word2vec model trained over the financial text 7 ." 
}, { "id": 52, "string": "Any text that is not recognised by the word2vec model is represented as a vector of zeros; this is also used to pad out the sentence if it is shorter than L. Both BLSTM models have the following similar properties: 1." }, { "id": 53, "string": "Gradient clipping value of 5 -This was to help with the exploding gradients problem." }, { "id": 54, "string": "2." }, { "id": 55, "string": "Minimised the Mean Square Error (MSE) loss using RMSprop with a mini batch size of 32." }, { "id": 56, "string": "The output activation function is linear." }, { "id": 57, "string": "The main difference between the two models is the use of drop out and when they stop training over the data (epoch)." }, { "id": 58, "string": "Both models architectures can be seen in figure 1." }, { "id": 59, "string": "Standard LSTM (SLSTM) The BLSTMs do contain drop out in both the input and between the connections of 0.2 each." }, { "id": 60, "string": "Finally the epoch is fixed at 25." }, { "id": 61, "string": "Early LSTM (ELSTM) As can be seen from figure 1, the drop out of 0.5 only happens between the layers and not the 6 Tokenised by Unitok 7 See the following link for detailed implementation details https://github.com/apmoore1/semeval# finance-word2vec-model connections as in the SLSTM." }, { "id": 62, "string": "Also the epoch is not fixed, it uses early stopping with a patience of 10." }, { "id": 63, "string": "We expect that this model can generalise better than the standard one due to the higher drop out and that the epoch is based on early stopping which relies on a validation set to know when to stop training." }, { "id": 64, "string": "Results We first present our findings on the best performing parameters and features for the SVRs." }, { "id": 65, "string": "These were determined by cross validation (CV) scores on the provided training data set using cosine similarity as the evaluation metric." }, { "id": 66, "string": "8 We found that using uni-grams and bi-grams performs best and using only bi-grams to be the worst." }, { "id": 67, "string": "Using the Unitok tokeniser always performed better than simple whitespace tokenisation." }, { "id": 68, "string": "The binary presence of tokens over frequency did not alter performance." }, { "id": 69, "string": "The C parameter was tested for three values; 0.01, 0.1 and 1." }, { "id": 70, "string": "We found very little difference between 0.1 and 1, but 0.01 produced much poorer results." }, { "id": 71, "string": "The eplison parameter was tested for 0.001, 0.01 and 0.1 the performance did not differ much but the lower the higher the performance but the more likely to overfit." }, { "id": 72, "string": "Using word replacements was effective for all three types (company, positive and negative) but using a value N=10 performed best for both positive and negative words." }, { "id": 73, "string": "Using target aspects also improved results." }, { "id": 74, "string": "Therefore, the best SVR model comprised of: Unitok tokenisation, uni-and bi-grams, word representation, C=0.1, eplison=0.01, company, positive, and negative word replacements and target aspects." }, { "id": 75, "string": "The main evaluation over the test data is based on the best performing SVR and the two BLSTM models once trained on all of the training data." }, { "id": 76, "string": "The result table 1 shows three columns based on the three evaluation metrics that the organisers have used." 
}, { "id": 77, "string": "Metric 1 is the original metric, weighted cosine similarity (the metric used to evaluate the final version of the results, where we were ranked 5th; metric provided on the task website 9 )." }, { "id": 78, "string": "This was then changed after the evaluation deadline to equation 1 10 (which we term metric 2; this is what the first version of the results were actually based on, where we were ranked 4th), which then changed by the organisers to their equation as presented in Cortis et al." }, { "id": 79, "string": "(2017) (which we term metric 3 and what the second version of the results were based on, where we were ranked 5th)." }, { "id": 80, "string": "Model Metric 1 As you can see from the results table 1, the difference between the metrics is quite substantial." }, { "id": 81, "string": "This is due to the system's optimisation being based on metric 1 rather than 2." }, { "id": 82, "string": "Metric 2 is a classification metric for sentences with one aspect as it penalises values that are of opposite sign (giving -1 score) and rewards values with the same sign (giving +1 score)." }, { "id": 83, "string": "Our systems are not optimised for this because it would predict scores of -0.01 and true value of 0.01 as very close (within vector of other results) with low error whereas metric 2 would give this the highest error rating of -1 as they are not the same sign." }, { "id": 84, "string": "Metric 3 is more similar to metric 1 as shown by the results, however the crucial difference is that again if you get opposite signs it will penalise more." }, { "id": 85, "string": "We analysed the top 50 errors based on Mean Absolute Error (MAE) in the test dataset specifically to examine the number of sentences containing more than one aspect." }, { "id": 86, "string": "Our investigation shows that no one system is better at predicting the sentiment of sentences that have more than one aspect (i.e." }, { "id": 87, "string": "company) within them." }, { "id": 88, "string": "Within those top 50 errors we found that the BLSTM systems do not know which parts of the sentence are associated to the company the sentiment is with respect to." }, { "id": 89, "string": "Also they do not know the strength/existence of certain sentiment words." }, { "id": 90, "string": "Conclusion and Future Work In this short paper, we have described our implemented solutions to SemEval Task 5 track 2, utilising both SVR and BLSTM approaches." }, { "id": 91, "string": "Our results show an improvement of around 5% when using LSTM models relative to SVR." }, { "id": 92, "string": "We have shown that this task can be partially represented as an aspect based sentiment task on a domain specific problem." }, { "id": 93, "string": "In general, our approaches acted as sentence level classifiers as they take no target company into consideration." }, { "id": 94, "string": "As our results show, the choice of evaluation metric makes a great deal of difference to system training and testing." }, { "id": 95, "string": "Future work will be to implement aspect specific information into an LSTM model as it has been shown to be useful in other work (Wang et al., 2016) ." 
} ], "headers": [ { "section": "Introduction", "n": "1", "start": 0, "end": 7 }, { "section": "Related Work", "n": "2", "start": 8, "end": 22 }, { "section": "Data", "n": "3", "start": 23, "end": 28 }, { "section": "System description", "n": "4", "start": 29, "end": 31 }, { "section": "SVR", "n": "4.1", "start": 32, "end": 32 }, { "section": "Tokenisation", "n": "4.1.1", "start": 33, "end": 33 }, { "section": "N-grams", "n": "4.1.2", "start": 34, "end": 34 }, { "section": "SVR parameters", "n": "4.1.3", "start": 35, "end": 35 }, { "section": "Word Replacements", "n": "4.1.4", "start": 36, "end": 42 }, { "section": "Target aspect", "n": "4.1.5", "start": 43, "end": 45 }, { "section": "BLSTM", "n": "4.2", "start": 46, "end": 55 }, { "section": "The output activation function is linear.", "n": "3.", "start": 56, "end": 57 }, { "section": "Standard LSTM (SLSTM)", "n": "4.2.1", "start": 58, "end": 60 }, { "section": "Early LSTM (ELSTM)", "n": "4.2.2", "start": 61, "end": 63 }, { "section": "Results", "n": "5", "start": 64, "end": 89 }, { "section": "Conclusion and Future Work", "n": "6", "start": 90, "end": 95 } ], "figures": [ { "filename": "../figure/image/970-Figure1-1.png", "caption": "Figure 1: Left hand side is the ELSTM model architecture and the right hand side shows the SLSTM. The numbers in the parenthesis represent the size of the output dimension where L is the length of the longest sentence.", "page": 2, "bbox": { "x1": 324.96, "x2": 508.32, "y1": 61.44, "y2": 361.91999999999996 } }, { "filename": "../figure/image/970-Table1-1.png", "caption": "Table 1: Results", "page": 3, "bbox": { "x1": 82.56, "x2": 279.36, "y1": 549.6, "y2": 606.24 } } ] }, "gem_id": "GEM-SciDuet-train-6" }, { "slides": { "0": { "title": "Exploring intellectual structures", "text": [ "Collaboration, Author co-citation analysis,", "Journal Impact Factor, SJR", "Document citation analysis, Co-word analysis,", "Citation sentence: Containing brief content of cited work and opinion", "that the author of citing work on the cited work", "Topic Model: Adopting Author Conference Topic (ACT) model (Tang, Jin", "Oncology: The recent surge in number of publications in this field. Stem", "cells, one of the subfields of oncology, has been at the forefront of medicine", "Tang, J., Jin, R., & Zhang, J. (2008, December). A topic modeling approach and its integration into the random walk framework for academic search. In Data Mining, 2008. ICDM'08. Eighth IEEE International Conference on (pp. 1055-1060). IEEE." ], "page_nums": [ 1, 2 ], "images": [] }, "1": { "title": "Citation Sentence", "text": [ "Embedding useful contents signifying the influence of cited authors on", "Being considered as an invisible intellectual place for idea exchanging", "Playing a role of supporting and expressing their own arguments by", "Exploring the implicit topics resided in citation sentences" ], "page_nums": [ 3 ], "images": [] }, "2": { "title": "Original ACT Model Tang Jin and Zhang 2008", "text": [ "Purpose of Academic search" ], "page_nums": [ 4 ], "images": [] }, "3": { "title": "Modified AJT Model", "text": [ "1) Citation Data Extraction", "2n d journal Topic 2", "Which topic is most salient? Who is the active authors sharing other authors ideas? Which journal leads such endeavor?" ], "page_nums": [ 5 ], "images": [] }, "4": { "title": "Method", "text": [ "The 77-SNP PRS was associated with a larg er effect", "than previously reported for a 10-SNP-PRS ( 20 )." 
], "page_nums": [ 6 ], "images": [] }, "5": { "title": "Data collection", "text": [ "PubMed Central: 6,360 full-text articles", "15 journals of Oncology: by Thomson Reuters JCR & journals impact factor", "Cancer Cell, Journal of the National Cancer Institute, Leukemia, Oncogene,", "Annals of Oncology, Neuro-Oncology, Stem Cells, Oncotarget, OncoInnunology,", "Molecular Oncology, Breast Cancer Research Journal of Thoracic Oncology,", "Pigment Cell & Melanoma Resaerch, Clinical Epigenetics, Molecular Cancer" ], "page_nums": [ 7 ], "images": [] }, "6": { "title": "Research Flow", "text": [ "1) Citation Data Extraction" ], "page_nums": [ 8 ], "images": [] }, "7": { "title": "Results 8 Topics", "text": [ "Labeled by 3 Experts", "Author Group 1 Author Group 2 Author Group 3 Author Group 4", "Journal Group 1 Journal Group 2 Journal Group 3 Journal Group 4", "Research Annals of Oncology", "Pigment Cell & Melanoma Research", "Journal of Thoracic Oncology" ], "page_nums": [ 9 ], "images": [] }, "8": { "title": "Results contd", "text": [ "Author Group 5 Author Group 6 Author Group Author Group 8", "Journal Group 5 Journal Group 6 Journal Group 7 Journal Group 8", "Annals of Oncology Cancer Cell", "Annals of Oncology Breast Cancer Research" ], "page_nums": [ 10 ], "images": [] }, "9": { "title": "Conclusion", "text": [ "AJT model: to detect leading authors and journals in sub-disciplines", "represented by discovered topics in a certain field", "Citation sentences: Discovering latent meaning associated citation sentences", "and the major players leading the field" ], "page_nums": [ 12 ], "images": [] }, "10": { "title": "Future works", "text": [ "Comparing the proposed approach with the general topic modeling", "Investigating whether there is a different impact of using citation", "sentences and general meta-data (abstract and title)", "Considering the window size of citation sentences enriching citation" ], "page_nums": [ 13 ], "images": [] } }, "paper_title": "Exploring the leading authors and journals in major topics by citation sentences and topic modeling", "paper_id": "971", "paper": { "title": "Exploring the leading authors and journals in major topics by citation sentences and topic modeling", "abstract": "Citation plays an important role in understanding the knowledge sharing among scholars. Citation sentences embed useful contents that signify the influence of cited authors on shared ideas, and express own opinion of citing authors to others' articles. The purpose of the study is to provide a new lens to analyze the topical relationship embedded in the citation sentences in an integrated manner. To this end, we extract citation sentences from full-text articles in the field of Oncology. In addition, we adopt Author-Journal-Topic (AJT) model to take both authors and journals into consideration of topic analysis. For the study, we collect the 6,360 full-text articles from PubMed Central and select the top 15 journals on Oncology. By applying AJT model, we identify what the major topics are shared among researchers in Oncology and which authors and journal lead the idea exchange in sub-disciplines of Oncology.", "text": [ { "id": 0, "string": "Introduction As the size of data on the web continues to increase in an exponential manner, finding valuable meaning between data becomes of paramount importance in many research areas." 
}, { "id": 1, "string": "In the information science field, citations are challenging, pivotal materials to discover the relationship between academic documents because citations present the description of authors' ideas and the hidden relationship between authors and documents." }, { "id": 2, "string": "The earliest works focused mainly on classifying the citation behaviors and discovering the citation reasons with limited data such as the location of citation sentences and the number of references [1, 2] ." }, { "id": 3, "string": "Since the mid-1990s, with the development of computer technology, citation content analysis was elaborated by applying data analysis techniques like text-mining or natural language processing." }, { "id": 4, "string": "Zhang et al." }, { "id": 5, "string": "[3] present citation analysis based on sematic and syntactic approaches." }, { "id": 6, "string": "Semantic-based citation analysis is performed by qualitative analysis to discover the citation motivation and citation classification." }, { "id": 7, "string": "On the other hand, syntactic-based citation analysis can be conducted by citation location and citation frequency, which reveals the hidden relation of authors by using meta-data of documents such as journal, venue of publication, affiliation of authors, etc." }, { "id": 8, "string": "Following their study, Ding et al." }, { "id": 9, "string": "[4] propose a theoretical methodology through content citation analysis." }, { "id": 10, "string": "However, these analyses are somewhat limited to the explicit context that primarily represents their own ideas and arguments." }, { "id": 11, "string": "The main goal of the paper is to discover the implicit topical relationships buried in citation sentences by utilizing the citation information from the author's perspective of sharing other authors' point of view." }, { "id": 12, "string": "Implicitness of the topical relationship is realized by using citation sentences as the input for the topic modeling technique." }, { "id": 13, "string": "In this study, a citation sentence indicates the sentence including citation expression consisting of year and author of the cited work." }, { "id": 14, "string": "In general, the citation sentence contains brief content of cited work and opinion that the author of citing work on the cited work." }, { "id": 15, "string": "We claim that citation sentences reveal interesting characteristics of scholarly communication such as influence, idea exchange, justification for citer's arguments, etc." }, { "id": 16, "string": "We assume that using citation sentences for topic analysis reveals aforementioned characteristics." }, { "id": 17, "string": "To explore such intellectual space created by citation sentences, we take both authors and journals into consideration of topic analysis." }, { "id": 18, "string": "To this end, we applied Author-Conference-Topic (ACT) model proposed by Tang et al." }, { "id": 19, "string": "[5] for our topic analysis in relation with both authors and journals, which is called Author-Journal-Topic (AJT) topic model." }, { "id": 20, "string": "ACT model is a probabilistic topic model for simultaneously extracting topics of papers, authors, and conferences." }, { "id": 21, "string": "There are a few studies to analyze content of citation sentences." }, { "id": 22, "string": "Most of previous studies focus on how the topic of document influences citation and vice versa [6, 7, 8] using Topic Modeling." 
}, { "id": 23, "string": "Kataria, Mitra, and Bhatia [8] adapt citation to Author-Topic model [9] with the assumption that the context surrounding the citation anchor could be used to get topical information about the cited authors." }, { "id": 24, "string": "These studies including Tang et al." }, { "id": 25, "string": "[10] 's ACT model are the examples of combining topic modelling methods and citation content analysis." }, { "id": 26, "string": "However, most previous studies used metadata of documents." }, { "id": 27, "string": "In this work, we focus on identifying the landscape of the oncology field from a perspective of citation." }, { "id": 28, "string": "By using citation sentences, our results can indicate which authors are actively cited and which journals lead a certain topic." }, { "id": 29, "string": "The rest of the paper is organized as follows: Section 2 describes the proposed approach." }, { "id": 30, "string": "Section 3 analyzes the topic modeling results." }, { "id": 31, "string": "Section 4 concludes the paper with the future work." }, { "id": 32, "string": "Methodology Main idea The basic assumption of the proposed approach is that citation sentences embed useful contents signifying the influence of cited authors on shared ideas of citing authors." }, { "id": 33, "string": "Citation sentences are also considered as an invisible intellectual place for idea exchanging since citations are effective means of supporting and expressing their own arguments by using other works." }, { "id": 34, "string": "In the similar vein, Di Marco and Mercer [11] claim that citation sentences play a major role in creating the relationship among relevant authors within the similar research fields." }, { "id": 35, "string": "With these assumptions, we are to explore the implicitness of topic relationships resided in citation sentences from the integrated perspective by incorporating the citing authors and journal titles into interpreting the topical relationships." }, { "id": 36, "string": "As shown in Figure 1 , we utilized various features including citing authors, citing sentences and journal titles for topic analysis." }, { "id": 37, "string": "Authors in Figure 1 mean the citing authors who write a paper and who cite other's work." }, { "id": 38, "string": "Citation sentences are the sentences written by the authors when they cite other's work in the paper, and journal titles are the journal names publishing the citing authors' paper." }, { "id": 39, "string": "By employing AJT model with these three parameters , we can discover which topics are the most salient ones referred to frequently by researchers and who are the leading authors sharing other authors' ideas in the research field and which journal leads such endeavor." }, { "id": 40, "string": "Fig." }, { "id": 41, "string": "1." }, { "id": 42, "string": "Three parameters for AJT model Data collection For this study, we compile the dataset on the field of Oncology from PubMed Central that provides the full-text in the biomedical field." }, { "id": 43, "string": "We select top 15 journals of Oncology by Thomson Reuter's JCR and journal's impact factor, and from these 15 journals, we are able to collect 6,360 full-text articles." }, { "id": 44, "string": "Figure 2 describes the workflow of our study." }, { "id": 45, "string": "As mentioned earlier, with the fulltext articles collected from PubMed Central, we extract the citation sentences." 
}, { "id": 46, "string": "Most citation sentences are kept in the following format: (author, year), (reference number) [reference number]." }, { "id": 47, "string": "An example of such format is \"(Author name, 2000)\"." }, { "id": 48, "string": "We use the regular expression technique to parse and extract the citation sentences, when the tag , appears on the sentences after parsing XML records with the Java-based SAX parser." }, { "id": 49, "string": "Method Fig." }, { "id": 50, "string": "2." }, { "id": 51, "string": "Workflow We also parse other metadata for AJT model such as the name of authors and journal titles." }, { "id": 52, "string": "The author tags, and inside the , denote the list of authors who wrote the paper." }, { "id": 53, "string": "For journal, we extract the titles when the journal tags, and , are included in the tag of and ." }, { "id": 54, "string": "We also preprocess extracted sentences by removing both functional and general words and applying the Porter's stemming algorithm to improve the input for AJT Model." }, { "id": 55, "string": "AJT Model For our study, we apply ACT [10] model with several metadata such as citation sentences, journal titles and citing authors to develop AJT model." }, { "id": 56, "string": "Our AJT model utilizes journal titles and citation sentences instead of conference and abstract on documents." }, { "id": 57, "string": "The change of model is needed to analyze most influential topics in Oncology and to find leading authors who frequently mention the active topics and to detect the journals involved in such topics." }, { "id": 58, "string": "Figure 1 , Table 1) Like ACT model, AJT model assumes that each citing author is related to distribution over topics and each word in citation sentences is derived from a topic." }, { "id": 59, "string": "In the AJT model, the journal titles are related to each word." }, { "id": 60, "string": "To determine a word (ω_Si) in citation sentences (S), citing authors (x_Si) are consider for a word." }, { "id": 61, "string": "Each citing author is associated with a distributed topic." }, { "id": 62, "string": "A topic is generated from the citing author-topic distribution." }, { "id": 63, "string": "The words and journal titles are generated from a specific topic." }, { "id": 64, "string": "AJT model presents (1) 3 Results and Analyses For AJT model, we set the number of topics to 15 and finally select 8 topics as major topics." }, { "id": 65, "string": "Since we discovered that there are similar topics on our results, we calculated the similarity between 15 topics to select the most representative topics." }, { "id": 66, "string": "The topical similarities are measured by each word on topics and we calculated the similarities of two topics where each topic are represented in an array of a term vector." }, { "id": 67, "string": "Through this process, we chose 8 topics which have high topical similarities (over 0.5)." }, { "id": 68, "string": "Each topic presents top 5 words from topic-word distribution, and 5 most related authors and journal titles are displayed along with each topic." }, { "id": 69, "string": "By performing several times on the pilot studies, we decided to choose top 5 words which are quite appropriate to describe each topics." }, { "id": 70, "string": "The results of AJT-based topic modeling is shown in Table 1 ." }, { "id": 71, "string": "We label topic 1 \"breast cancer\" whose top words include breast, expression women, and growth." 
}, { "id": 72, "string": "Since the dataset is compiled with citation sentences, it implies that the topic \"breast cancer\" is a popular topic where researchers share and exchange ideas and facts related to breast cancer." }, { "id": 73, "string": "In relation to the topic \"breast cancer\", the active authors of breast cancer are Johnston Stephen RD, Colditz Graham A, and Sternlicht Mark D, and they share ideas with others on breast cancer from our results." }, { "id": 74, "string": "In terms of journals that provide a common place for idea sharing and communication, the journal \"Breast Cancer Research\" is the top journal of topic 1, and its impact factor is 5.49." }, { "id": 75, "string": "Authors such as Kurzrock Razelle, and Axelrod Haley in group 4 are the leading researchers sharing ideas on the topic \"targeted therapy.\"" }, { "id": 76, "string": "The topic 4 is associated with the targeted therapy represented by words like mutations, treatments, therapy and disease." }, { "id": 77, "string": "The two most influential journals in topic 4 are \"Oncotarget\" and \"Journal of Thoracic Oncology\" whose impact factors are 6.36 and 5.28 respectively, which indicates that these two journals are the major journals encouraging authors to share ideas and collaborate with each other on cancer targeted therapy subject area." }, { "id": 78, "string": "Authors like Zitgel Laurence, Galluzzi Lorenzo, and Kroemer Guido in the author group 7 are the ones that actively share ideas about the topic \"Cancer Immunology.\"" }, { "id": 79, "string": "Top concepts that are related to this topic are cell, immune, clinical and antitumor." }, { "id": 80, "string": "The top journal of the topic \"Cancer Immunology\" is Oncoinmmunology whose impact factor is 6.266." }, { "id": 81, "string": "Romagnani Paola and Salem Husein K in topic 8 \"Stem Cell\" are the authors that communicate and share ideas actively with each other in the given field, and the journal \"Stem Cells\" (impact factor: 6.523) is the leading journal." }, { "id": 82, "string": "We visualize topic keywords obtained from results of AJT-based topic model." }, { "id": 83, "string": "We construct the co-occurrence network and analyze which topic words play an important role in this domain." }, { "id": 84, "string": "Each node in the network represents a topic word, and an edge represents a co-occurrence frequency between keywords." }, { "id": 85, "string": "The size of nodes represents degree centrality and the color means network clusters obtained by using modularity algorithm." }, { "id": 86, "string": "This network consists of 100 nodes and 1,436 edges." }, { "id": 87, "string": "As shown in Figure 4 , each topic belongs to a specific community, but shares some important topic keywords." }, { "id": 88, "string": "Especially, the topic words positioned at the center is represented core-keywords in Oncology." }, { "id": 89, "string": "Figure 4 indicates that these words are the essential concepts of the Oncology domain." }, { "id": 90, "string": "Along with the results of AJT-based topic models, we can infer the major journals and authors develop their own research area based on these core-concepts." }, { "id": 91, "string": "Fig." }, { "id": 92, "string": "4." }, { "id": 93, "string": "Network of topic keywords The above results imply that the proposed approach identifies which topics are frequently shared, who facilitates to exchange ideas, and which journals provide a placeholder for it." 
}, { "id": 94, "string": "Identification of the triple relationship among authors, journals, and topics sheds new insight on understanding the well-discussed topics driven by the leading journals and authors that play a mediator role in the development of Oncology." }, { "id": 95, "string": "Conclusion One of the major research problems in bibliometrics is how to map out the intellectual structure of a research field." }, { "id": 96, "string": "The proposed approach tackles such research problem by utilizing citation sentences and AJT model." }, { "id": 97, "string": "By using citation sentences as the input for AJT model to find latent meaning, AJT model suggests a new way to detect leading authors and journals in sub-disciplines represented by discovered topics in a certain field." }, { "id": 98, "string": "Achieving this is not feasible by traditional frequency-based citation analysis." }, { "id": 99, "string": "One of the interesting observations is that the top-ranked journals in the discovered topics derived from AJT model are not ranked top in terms of JCR." }, { "id": 100, "string": "For example, the \"Oncotarget\" journal is the top-ranked journal in three topics in our analysis, but the ranking of the journal is 20 according to JCR." }, { "id": 101, "string": "Since we only report on preliminary results of our approach, we undertake in-depth analysis to investigate why this difference exists." }, { "id": 102, "string": "We also conduct various statistical tests on the results." }, { "id": 103, "string": "Based on the reported results in this paper, though, we claim that AJT can be used for discovering latent meaning associated citation sentences and the major players leading the field." }, { "id": 104, "string": "As a follow-up study, we will conduct a comparative study that compares the proposed approach with the general topic modeling technique such as LDA." }, { "id": 105, "string": "We also plan to investigate whether there is a different impact of using citation sentences and general meta-data such as abstract and title for topic analysis on facilitating idea sharing and scholarly communication." }, { "id": 106, "string": "In addition, we would like to consider the window size of citation sentences enriching citation context and to discover the authors' relationships among the neighboring citation sentences." } ], "headers": [ { "section": "Introduction", "n": "1", "start": 0, "end": 31 }, { "section": "Main idea", "n": "2.1", "start": 32, "end": 41 }, { "section": "Data collection", "n": "2.2", "start": 42, "end": 54 }, { "section": "AJT Model", "n": "2.4", "start": 55, "end": 94 }, { "section": "Conclusion", "n": "4", "start": 95, "end": 106 } ], "figures": [ { "filename": "../figure/image/971-Figure1-1.png", "caption": "Fig. 1. Three parameters for AJT model", "page": 2, "bbox": { "x1": 124.8, "x2": 471.35999999999996, "y1": 278.88, "y2": 376.32 } }, { "filename": "../figure/image/971-Figure2-1.png", "caption": "Fig. 2. Workflow", "page": 2, "bbox": { "x1": 124.32, "x2": 468.47999999999996, "y1": 499.68, "y2": 610.0799999999999 } }, { "filename": "../figure/image/971-Table1-1.png", "caption": "Table 1. The Results of AJT-based Topic Modeling in Oncology", "page": 4, "bbox": { "x1": 122.88, "x2": 471.35999999999996, "y1": 610.56, "y2": 685.92 } }, { "filename": "../figure/image/971-Figure3-1.png", "caption": "Fig. 3. 
Graphical representation and Notions of AJT model, which applies ACT model (Applied Tang, J., Jin, R., & Zhang, J., 2008, p.1056, Figure 1, Table 1)", "page": 3, "bbox": { "x1": 123.83999999999999, "x2": 471.35999999999996, "y1": 388.8, "y2": 510.24 } }, { "filename": "../figure/image/971-Figure4-1.png", "caption": "Fig. 4. Network of topic keywords", "page": 6, "bbox": { "x1": 126.72, "x2": 468.47999999999996, "y1": 236.64, "y2": 512.16 } } ] }, "gem_id": "GEM-SciDuet-train-7" }, { "slides": { "0": { "title": "Motivation", "text": [ "Extracting cognates for related languages in Romance and", "Reducing the number of unknown words on SMT training data", "Learning regular differences in words roots/endings shared across related languages" ], "page_nums": [ 2 ], "images": [] }, "1": { "title": "Method", "text": [ "Produce n-best lists of cognates using a family of distance measures from comparable corpora", "Prune the n-best lists by ranking Machine Learning (ML) algorithm trained over parallel corpora", "Motivation n-best list allows surface variation on possible cognate translations" ], "page_nums": [ 4 ], "images": [] }, "2": { "title": "Similarity metrics", "text": [ "Compare words between frequency lists over comparable corpora", "L matching between the languages using Levenshtein distance:", "L-R Levenshtein distance computed separately for the roots and for the endings: aceito (pt) vs acepto (es) rejeito (pt) vs rechazo (es)", "L-C Levenshtein distance over words with similar number of starting characters (i.e. prefix): introducao (pt) vs introduccion (es) introduziu (pt) vs introdujo (es)" ], "page_nums": [ 5 ], "images": [] }, "3": { "title": "Search space constraints", "text": [ "Motivation Exhaustive method compares all the combinations of source and target words", "Order the target side frequency list into bins of similar frequency", "Compare each source word with target bins of similar frequency around a window", "L-C metric only compares words that share a given n prefix" ], "page_nums": [ 6 ], "images": [] }, "4": { "title": "Ranking", "text": [ "Motivation Prune n-best lists by ranking ML algorithm", "Training data come from aligned parallel corpora where the rank is given by the alignment probability from GIZA++", "Simulate cognate training data by pruning pairs of words below a Levenshtein threshold" ], "page_nums": [ 7 ], "images": [] }, "5": { "title": "Features", "text": [ "Number of times of each edit operation, the model assigns a different weight to each operation", "Cosine between the distributional vectors of the source and target words vectors from word2vec mapped to same space via a learned transformation matrix", "SVM ranking default configuration (RBF kernel)", "Easy-adapt features given different domains (Wikipedia, subtitles)" ], "page_nums": [ 8 ], "images": [] }, "6": { "title": "Data description", "text": [ "n-best lists from Wikipedia dumps (frequency lists)", "ML training Wiki-titles, parallel data from inter language links from the tittles of the Wikipedia articles 500K aligned links (i.e. 
sentences)", "Opensubs, 90K training instances", "Zoo proprietary corpus of subtitles produced by professional translators, 20K training instances", "Ranking test Heldout data from training", "Manual cognate test Wikipedia most frequent words", "SMT test Zoo data" ], "page_nums": [ 10 ], "images": [] }, "7": { "title": "Language pairs", "text": [ "Romance Source: Portuguese, French, Italian Target: Spanish", "Slavonic Source: Ukrainian, Bulgarian Target: Russian" ], "page_nums": [ 11 ], "images": [] }, "8": { "title": "Results on heldout data", "text": [ "Error score on heldout data", "E Edit distance features", "EC Edit distance plus distributed vectors features", "Zoo error% Opensubs error% Wiki-titles error%", "Romance pt-es it-es fr-es", "Model E Model EC Model E Model EC Model E Model EC" ], "page_nums": [ 12 ], "images": [] }, "9": { "title": "Manual evaluation", "text": [ "Conclusions Results Machine Translation", "Results on sample of 100 words", "n-best lists L, L-R, L-C ranking model E", "List L List L-R List L-C" ], "page_nums": [ 13 ], "images": [] }, "10": { "title": "Addition of lists SMT", "text": [ "1-best lists with L-C and E ranking pt-es: 80K training sentences, 100K cognate pairs", "significant uk-ru: 140K training sentences, 100K cognate pairs" ], "page_nums": [ 14 ], "images": [] }, "12": { "title": "Conclusions", "text": [ "MT dictionaries extracted from comparable resources for related languages", "Positive results on the n-bes lists with L-C", "Frequency window heuristic shows poor results", "ML models are able to rank similar words on the top of the list", "Preliminary results on an SMT system show modest improvements compare to the baseline", "The OOV rate shows improvements around reduction on word types" ], "page_nums": [ 17 ], "images": [] }, "13": { "title": "Future work", "text": [ "Morphology features for the n-best list (Unsupervised)", "Instead of prefix heuristic (L-C) and stemmer (L-R)", "Contribution for all the produced cognate lists on SMT", "Using char-based transliteration model trained on Zoo plus n-best lists", "Motivation alignment learns useful transformations: e.g. introducao (pt) vs introduccion (es)" ], "page_nums": [ 18 ], "images": [] } }, "paper_title": "Obtaining SMT dictionaries for related languages", "paper_id": "972", "paper": { "title": "Obtaining SMT dictionaries for related languages", "abstract": "This study explores methods for developing Machine Translation dictionaries on the basis of word frequency lists coming from comparable corpora. We investigate (1) various methods to measure the similarity of cognates between related languages, (2) detection and removal of noisy cognate translations using SVM ranking. We show preliminary results on several Romance and Slavonic languages.", "text": [ { "id": 0, "string": "Introduction Cognates are words having similarities in their spelling and meaning in two languages, either because the two languages are typologically related, e.g., maladie vs malattia ('disease'), or because they were both borrowed from the same source (informatique vs informatica)." }, { "id": 1, "string": "The advantage of their use in Statistical Machine Translation (SMT) is that the procedure can be based on comparable corpora, i.e., similar corpora which are not translations of each other (Sharoff et al., 2013) ." }, { "id": 2, "string": "Given that there are more sources of comparable corpora in comparison to parallel ones, the lexicon obtained from them is likely to be richer and more variable." 
}, { "id": 3, "string": "Detection of cognates is a well-known task, which has been explored for a range of languages using different methods." }, { "id": 4, "string": "The two main approaches applied to detection of the cognates are the generative and discriminative paradigms." }, { "id": 5, "string": "The first one is based on detection of the edit distance between potential candidate pairs." }, { "id": 6, "string": "The distance can be a simple Levenshtein distance, or a distance measure with the scores learned from an existing parallel set (Tiedemann, 1999; Mann and Yarowsky, 2001) ." }, { "id": 7, "string": "The discriminative paradigm uses standard approaches to machine learning, which are based on (1) extracting features, e.g., character n-grams, and (2) learning to predict the transformations of the source word needed to (Jiampojamarn et al., 2010; Frunza and Inkpen, 2009) ." }, { "id": 8, "string": "Given that SMT is usually based on a full-form lexicon, one of the possible issues in generation of cognates concerns the similarity of words in their root form vs the similarity in endings." }, { "id": 9, "string": "For example, the Ukrainian wordform áëèaeíüîãî 'near gen ' is cognate to Russian áëèaeíåãî, the root is identical, while the ending is considerably different (üîãî vs åãî)." }, { "id": 10, "string": "Regular differences in the endings, which are shared across a large number of words, can be learned separately from the regular differences in the roots." }, { "id": 11, "string": "One also needs to take into account the false friends among cognates." }, { "id": 12, "string": "For example, diseñar means 'to design' in Spanish vs desenhar in Portuguese means 'to draw'." }, { "id": 13, "string": "There are also often cases of partial cognates, when the words share the meaning in some contexts, but not in others, e.g., aeåíà in Russian means 'wife', while its Bulgarian cognate aeåíà has two meanings: 'wife' and 'woman'." }, { "id": 14, "string": "Yet another complexity concerns a frequency mismatch." }, { "id": 15, "string": "Two cognates might differ in their frequency." }, { "id": 16, "string": "For example, dibujo in Spanish ('a drawing', rank 1779 in the Wikipedia frequency list) corresponds to a relatively rare cognate word debuxo in Portuguese (rank 104,514 in Wikipedia), while another Portuguese word desenho is more commonly used in this sense (rank 884 in the Portuguese Wikipedia)." }, { "id": 17, "string": "For MT tasks we need translations that are equally appropriate in the source and target language, therefore cognates useful for a high-quality dictionary for SMT need to have roughly the same frequency in comparable corpora and they need to be used in similar contexts." }, { "id": 18, "string": "This study investigates the settings for extracting cognates for related languages in Romance and Slavonic language families for the task of reducing the number of unknown words for SMT." }, { "id": 19, "string": "This in-cludes the effects of having constraints for the cognates to be similar in their roots and in the endings, to occur in distributionally similar contexts and to have similar frequency." 
}, { "id": 20, "string": "Methodology The methodology for producing the list of cognates is based on the following steps: 1) Produce several lists of cognates using a family of distance measures, discussed in Section 2.1 from comparable corpora, 2) Prune the candidate lists by ranking items, this is done using a Machine Learning (ML) algorithm trained over parallel corpora for detecting the outliers, discussed in Section 2.2; The initial frequency lists for alignment are based Wikipedia dumps for the following languages: Romance (French, Italian, Spanish, Portuguese) and Slavonic (Bulgarian, Russian, Ukrainian), where the target languages are Spanish and Russian 1 ." }, { "id": 21, "string": "Cognate detection We extract possible lists of cognates from comparable corpora by using a family of similarity measures: L direct matching between the languages using Levenshtein distance (Levenshtein, 1966) ; L(w s , w t ) = 1 − ed(w s , w t ) L-R Levenshtein distance with weights computed separately for the roots and for the endings; LR(r s , r t , e s , e t ) = α×ed(rs,rt)+β×ed(es,et) α+β L-C Levenshtein distance over word with similar number of starting characters (i.e." }, { "id": 22, "string": "prefix); LC(c s , c t ) = 1 − ed(c s , c t ), same prefix 0, otherwise where ed(., .)" }, { "id": 23, "string": "is the normalised Levenshtein distance in characters between the source word w s and the target word w t ." }, { "id": 24, "string": "The r s and r t are the stems produced by the Snowball stemmer 2 ." }, { "id": 25, "string": "Since the Snowball stemmer does not support Ukrainian and Bulgarian, we used the Russian model for making the stem/ending split." }, { "id": 26, "string": "e s , e t are the characters at the end of a word form given a stem and c s , c t are the first n characters of a word." }, { "id": 27, "string": "In this work, we set the weights α = 0.6 and β = 0.4 giving more importance to the roots." }, { "id": 28, "string": "We set a higher weight to roots on the L-R, which is language dependent, and compare to the L-C metric, which is language independent." }, { "id": 29, "string": "We transform the Levenshtein distances into similarity metrics by subtracting the normalised distance score from one." }, { "id": 30, "string": "The produced lists contain for each source word the possible n-best target words accordingly to the maximum scores with one of the previous measures." }, { "id": 31, "string": "The n-best list allows possible cognate translations to a given source word that share a part of the surface form." }, { "id": 32, "string": "Different from (Mann and Yarowsky, 2001) , we produce n-best cognate lists scored by edit distance instead of 1-best." }, { "id": 33, "string": "An important problem when comparing comparable corpora is the way of representing the search space, where an exhaustive method compares all the combinations of source and target words (Mann and Yarowsky, 2001) ." }, { "id": 34, "string": "We constraint the search space by comparing each source word against the target words that belong to a frequency window around the frequency of the source word." }, { "id": 35, "string": "This constraint only applies for the L and L-R metrics." }, { "id": 36, "string": "We use Wikipedia dumps for the source and target side processed in the form frequency lists." }, { "id": 37, "string": "We order the target side list into bins of similar frequency and for the source side we filter words that appear only once." 
}, { "id": 38, "string": "We use the window approach given that the frequency between the corpora under study can not be directly comparable." }, { "id": 39, "string": "During testing we use a wide window of ±200 bins to minimise the loss of good candidate translations." }, { "id": 40, "string": "The second search space constraint heuristic is the L-C metric." }, { "id": 41, "string": "This metric only compares source words with the target words upto a given n prefix." }, { "id": 42, "string": "For c s , c t in L-C , we use the first four characters to compare groups of words as suggested in (Kondrak et al., 2003) ." }, { "id": 43, "string": "Cognate Ranking Given that the n-best lists contain noise, we aim to prune them by an ML ranking model." }, { "id": 44, "string": "However, there is a lack of resources to train a classification model for cognates (i.e." }, { "id": 45, "string": "cognate vs. false friend), as mentioned in (Fišer and Ljubešić, 2013) ." }, { "id": 46, "string": "Available data that can be used to judge the cognate lists are the alignment pairs extracted from parallel data." }, { "id": 47, "string": "We decide to use a ranking model to avoid data imbalance present in classification and to use the probability scores of the alignment pairs as ranks, as opposed to the classification model used by (Irvine and Callison-Burch, 2013) ." }, { "id": 48, "string": "Moreover, we also use a popular domain adaptation technique (Daumé et al., 2010) given that we have access to different domains of parallel training data that might be compatible with our comparable corpora." }, { "id": 49, "string": "The training data are the alignments between pairs of words where we rank them accordingly to their correspondent alignment probability from the output of GIZA++ (Och and Ney, 2003) ." }, { "id": 50, "string": "We then use a heuristic to prune training data in order to simulate cognate words." }, { "id": 51, "string": "Pairs of words scored below the Levenshtein similarity threshold of 0.5 are not considered as cognates given that they are likely to have a different surface form." }, { "id": 52, "string": "We represent the training and test data with features extracted from different edit distance scores and distributional measures." }, { "id": 53, "string": "The edit distances features are as follows: 1) Similarity measure L and 2) Number of times of each edit operation." }, { "id": 54, "string": "Thus, the model assigns a different importance to each operation." }, { "id": 55, "string": "The distributional feature is based on the cosine between the distributional vectors of a window of n words around the word currently under comparison." }, { "id": 56, "string": "We train distributional similarity models with word2vec (Mikolov et al., 2013a) for the source and target side separately." }, { "id": 57, "string": "We extract the continuous vector for each word in the window, concatenate it and then compute the cosine between the concatenated vectors of the source and the target." }, { "id": 58, "string": "We suspect that the vectors will have similar behaviour between the source and the target given that they are trained under parallel Wikipedia articles." }, { "id": 59, "string": "We develop two ML models: 1) Edit distance scores and 2) Edit distance scores and distributional similarity score." }, { "id": 60, "string": "We use SVMlight (Joachims, 1998) Results and Discussion In this section we describe the data used to produce the n-best lists and train the cognate ranking models." 
}, { "id": 61, "string": "We evaluate the ranking models with heldout data from each training domain." }, { "id": 62, "string": "We also provide manual evaluation over the ranked n-best lists for error analysis." }, { "id": 63, "string": "Data The n-best lists to detect cognates were extracted from the respective Wikipedias by using the method described in Section 2.1." }, { "id": 64, "string": "The training data for the ranking model consists of different types of parallel corpora." }, { "id": 65, "string": "The parallel corpora are as follows: 1) Wiki-titles we use the inter language links to create a parallel corpus from the tittles of the Wikipedia articles, with about 500K aligned links (i.e." }, { "id": 66, "string": "'sentences') per language pair (about 200k for bg-ru), giving us about 200K training instances per language pair 3 , 2) Opensubs is an open source corpus of subtitles built by the fan community, with 1M sentences, 6M tokens, 100K words, giving about 90K training instances (Tiedemann, 2012) and 3) Zoo is a proprietary corpus of subtitles produced by professional translators, with 100K sentences, 700K tokens, 40K words and giving about 20K training instances per language pair." }, { "id": 67, "string": "Our objective is to create MT dictionaries from the produced n-best lists and we use parallel data as a source of training to prune them." }, { "id": 68, "string": "We are interested in the corpora of subtitles because the chosen domain of our SMT experiments is subtitling, while the proposed ranking method can be used in other application domains as well." }, { "id": 69, "string": "We consider Zoo and Opensubs as two different domains given that they were built by different types of translators and they differ in size and quality." }, { "id": 70, "string": "The heldout data consists of 2K instances for each corpus." }, { "id": 71, "string": "We use Wikipedia documents and Opensusbs subtitles to train word2vec for the distributional similarity feature." }, { "id": 72, "string": "We use the continuous bag-ofwords algorithm for word2vec and set the parameters for training to 200 dimensions and a window of 8 words." }, { "id": 73, "string": "The Wikipedia documents with an average number of 70K documents for each language, and Opensubs subtitles with 1M sentences for each language." }, { "id": 74, "string": "In practice we only use the Wikipedia data given that for Opensubs the model is able to find relatively few vectors, for example a vector is found only for 20% of the words in the pt-es pair." }, { "id": 75, "string": "Evaluation of the Ranking Model We define two ranking models as: model E for edit distance features and model EC for both edit Table 1 shows the results of the ranking procedure." }, { "id": 76, "string": "For the Romance family language pairs the model EC with context features consistently reduces the error compared to the solely use of edit distance metrics." }, { "id": 77, "string": "The only exception is the it-es EC model with poor results for the domain of Wiki-titles." }, { "id": 78, "string": "The models for the Slavonic family behave similarly to the Romance family, where the use of context features reduces the ranking error." }, { "id": 79, "string": "The exception is the bg-ru model on the Opensubs domain." }, { "id": 80, "string": "A possible reason for the poor results on the ites and bg-ru models is that the model often assigns a high similarity score to unrelated words." 
}, { "id": 81, "string": "For example, in it-es, mortes 'deads' is treated as close to categoria 'category'." }, { "id": 82, "string": "A possible solution is to map the vectors form the source side into the space of the target side via a learned transformation matrix (Mikolov et al., 2013b) ." }, { "id": 83, "string": "Preliminary Results on Comparable Corpora After we extracted the n-best lists for the Romance family comparable corpora, we applied one of the ranking models on these lists and we manually evaluated over a sample of 50 words 4 ." }, { "id": 84, "string": "We set n to 10 for the n-best lists." }, { "id": 85, "string": "We use a frequency window of 200 for the n-best list search heuristic and the domain of the comparable corpora to Wiki-titles 4 The sample consists of words with a frequency between 2K and 5. for the domain adaptation technique." }, { "id": 86, "string": "The purpose of manual evaluation is to decide whether the ML setup is sensible on the objective task." }, { "id": 87, "string": "Each list is evaluated by accuracy at 1 and accuracy at 10." }, { "id": 88, "string": "We also show success and failure examples of the ranking and the n-best lists." }, { "id": 89, "string": "Table 2 shows the preliminary results of the ML model E on a sample of Wikipedia dumps." }, { "id": 90, "string": "The L and L-R lists consistently show poor results." }, { "id": 91, "string": "A possible reason is the amount of errors given the first step to extract the n-best lists." }, { "id": 92, "string": "For example, in pt-es, for the word vivem 'live' the 10-best list only contain one word with a similar meaning viva 'living' but it can be also translated as 'cheers'." }, { "id": 93, "string": "In the pt-es list for the word representação 'description' the correct translation representación is not among the 10-best in the L list." }, { "id": 94, "string": "However, it is present in the 10-best for the L-C list and the ML model EC ranks it in the first place." }, { "id": 95, "string": "The edit distance model E still makes mistakes like with the list L-C, the word vivem 'live' translates into viven 'living' and the correct translation is vivir." }, { "id": 96, "string": "However, given a certain context/sense the previous translation can be correct." }, { "id": 97, "string": "The ranking scores given by the SVM varies from each list version." }, { "id": 98, "string": "For the L-C lists the scores are more uniform in increasing order and with a small variance." }, { "id": 99, "string": "The L and L-R lists show the opposite behaviour." }, { "id": 100, "string": "We add the produced Wikipedia n-best lists with the L metric into a SMT training dataset for the ptes pair." }, { "id": 101, "string": "We use the Moses SMT toolkit (Koehn et al., 2007) to test the augmented datasets." }, { "id": 102, "string": "We compare the augmented model with a baseline both trained by using the Zoo corpus of subtitles." }, { "id": 103, "string": "We use a 1-best list consisting of 100K pairs." }, { "id": 104, "string": "Te dataset used for pt-es baseline is: 80K training sentences, 1K sentences for tuning and 2K sen- Lang Pairs acc@1 acc@10 acc@1 acc@10 acc@1 acc@10 pt-es 20 60 22 59 32 70 it-es 16 53 18 45 44 66 fr-es 10 48 12 51 29 59 A possible reason for low improvement in terms of the BLEU scores is because MT evaluation metrics, such as BLEU, compare the MT output with a human reference." 
}, { "id": 105, "string": "The human reference translations in our corpus have been done from English (e.g., En→Es), while the test translations come from a related language (En→Pt→Es), often resulting in different paraphrases of the same English source." }, { "id": 106, "string": "While our OOV rate improved, the evaluation scores did not reflected this, because our MT output was still far from the reference even in cases it was otherwise acceptable." }, { "id": 107, "string": "List L List L-R List L-C Conclusions and future Work We have presented work in progress for developing MT dictionaries extracted from comparable resources for related languages." }, { "id": 108, "string": "The extraction heuristic show positive results on the n-best lists that group words with the same starting char-5 https://github.com/clab/fast_align 6 https://kheafield.com/code/kenlm/ 7 The p-value for the uk-ru pair is 0.06 we do not consider this result as statistically significant." }, { "id": 109, "string": "acters, because the used comparable corpora consist of related languages that share a similar orthography." }, { "id": 110, "string": "However, the lists based on the frequency window heuristic show poor results to include the correct translations during the extraction step." }, { "id": 111, "string": "Our ML models based on similarity metrics over parallel corpora show rankings similar to heldout data." }, { "id": 112, "string": "However, we created our training data using simple heuristics that simulate cognate words (i.e." }, { "id": 113, "string": "pairs of words with a small surface difference)." }, { "id": 114, "string": "The ML models are able to rank similar words on the top of the list and they give a reliable score to discriminate wrong translations." }, { "id": 115, "string": "Preliminary results on the addition of the n-best lists into an SMT system show modest improvements compare to the baseline." }, { "id": 116, "string": "However, the OOV rate shows improvements around 10% reduction on word types, because of the wide variety of lexical choices introduced by the MT dictionaries." }, { "id": 117, "string": "Future work involves the addition of unsupervised morphology features for the n-best list extraction, i.e." }, { "id": 118, "string": "first step, given that the use of starting characters shows to be an effective heuristic to prune the search space and language independent." }, { "id": 119, "string": "Finally, we will measure the contribution for all the produced cognate lists, where we can try different strategies to add the dictionaries into an SMT system (Irvine and Callison-Burch, 2014) ." 
} ], "headers": [ { "section": "Introduction", "n": "1", "start": 0, "end": 17 }, { "section": "Methodology", "n": "2", "start": 18, "end": 20 }, { "section": "Cognate detection", "n": "2.1", "start": 21, "end": 42 }, { "section": "Cognate Ranking", "n": "2.2", "start": 43, "end": 59 }, { "section": "Results and Discussion", "n": "3", "start": 60, "end": 62 }, { "section": "Data", "n": "3.1", "start": 63, "end": 74 }, { "section": "Evaluation of the Ranking Model", "n": "3.2", "start": 75, "end": 82 }, { "section": "Preliminary Results on Comparable Corpora", "n": "3.3", "start": 83, "end": 106 }, { "section": "Conclusions and future Work", "n": "4", "start": 107, "end": 119 } ], "figures": [ { "filename": "../figure/image/972-Table2-1.png", "caption": "Table 2: Accuracy at 1 and at 10 results of the ML model E over a sample of 50 words on Wikipedia dumps comparable corpora for the Romance family.", "page": 4, "bbox": { "x1": 132.96, "x2": 464.15999999999997, "y1": 61.44, "y2": 132.96 } }, { "filename": "../figure/image/972-Table1-1.png", "caption": "Table 1: Zero/one-error percentage results on heldout test parallel data for each training domain.", "page": 3, "bbox": { "x1": 106.56, "x2": 490.08, "y1": 61.44, "y2": 189.12 } } ] }, "gem_id": "GEM-SciDuet-train-8" }, { "slides": { "0": { "title": "Latent Dirichlet Allocation", "text": [ "David Blei. Probabilistic topic models. Comm. ACM. 2012" ], "page_nums": [ 2 ], "images": [] }, "2": { "title": "Variations and extensions", "text": [ "Author topic model (Rosen-Zvi et al 2004)", "Supervised LDA (SLDA; McAuliffe and Blei, 2008)", "Dirichlet multinomial regression (Mimno and McCallum, 2008)", "Sparse additive generative models (SAGE; Eisenstein et al,", "Structural topic model (Roberts et al, 2014)" ], "page_nums": [ 4 ], "images": [] }, "3": { "title": "Desired features of model", "text": [ "Easy modification by end-users.", "Covariates: features which influences text (as in SAGE).", "Labels: features to be predicted along with text (as in SLDA).", "Possibility of sparse topics.", "Incorporate additional prior knowledge.", "Use variational autoencoder (VAE) style of inference (Kingma" ], "page_nums": [ 5, 6, 7, 8, 9 ], "images": [] }, "4": { "title": "Desired outcome", "text": [ "Coherent groupings of words (something like topics), with offsets for observed metadata", "Encoder to map from documents to latent representations", "Classifier to predict labels from from latent representation" ], "page_nums": [ 10, 11, 12 ], "images": [] }, "5": { "title": "Model", "text": [ "p( w) i generator network: p(w i) = fg( )", "ELBO Eq[log p(words ri DKL[q(ri words)p(ri", "encoder network: q( i w) = fe( )" ], "page_nums": [ 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25 ], "images": [] }, "6": { "title": "Scholar", "text": [ "p(word i ci softmax(d Ti B(topic) cTi B(cov))", "Optionally include interactions between topics and covariates", "p(yi i ci fy (i ci", "log i f(words, ci yi", "Optional incorporation of word vectors to embed input" ], "page_nums": [ 26, 27, 28, 29 ], "images": [] }, "7": { "title": "Optimization", "text": [ "Tricks from Srivastava and Sutton, 2017:", "Adam optimizer with high-learning rate to bypass mode collapse", "Batch-norm layers to avoid divergence", "Annealing away from batch-norm output to keep results interpretable" ], "page_nums": [ 30 ], "images": [] }, "8": { "title": "Output of Scholar", "text": [ "B(topic),B(cov): Coherent groupings of positive and negative", "deviations from background ( topics)", "f, f: Encoder 
network: mapping from words to topics:", "i softmax(fe(words, ci yi", "fy : Classifier mapping from i to labels: y fy (i ci" ], "page_nums": [ 31, 32, 33 ], "images": [] }, "9": { "title": "Evaluation", "text": [ "1. Performance as a topic model, without metadata (perplexity, coherence)", "2. Performance as a classifier, compared to SLDA", "3. Exploratory data analysis" ], "page_nums": [ 34 ], "images": [] }, "10": { "title": "Quantitative results basic model", "text": [ "LDA SAGE NVDM Scholar Scholar Scholar +wv +sparsity" ], "page_nums": [ 35, 36, 37, 38, 39, 40 ], "images": [] }, "11": { "title": "Classification results", "text": [ "LR SLDA Scholar Scholar (labels) (covariates)" ], "page_nums": [ 41 ], "images": [] }, "12": { "title": "Exploratory Data Analysis", "text": [ "Data: Media Frames Corpus (Card et al, 2015)", "Collection of thousands of news articles annotated in terms of tone and framing", "Relevant metadata: year of publication, newspaper, etc." ], "page_nums": [ 42 ], "images": [] }, "13": { "title": "Tone as a label", "text": [ "english language city spanish community boat desert died men miles coast haitian visas visa applications students citizenship asylum judge appeals deportation court labor jobs workers percent study wages bush border president bill republicans state gov benefits arizona law bill bills arrested charged charges agents operation" ], "page_nums": [ 43 ], "images": [ "figure/image/975-Figure2-1.png" ] }, "14": { "title": "Tone as a covariate with interactions", "text": [ "Base topics Anti-immigration Pro-immigration ice customs agency population born percent judge case court guilty patrol border miles licenses drivers card island story chinese guest worker workers benefits bill welfare criminal customs jobs million illegals guilty charges man patrol border foreign sept visas smuggling federal bill border house republican california detainees detention english newcomers asylum court judge died authorities desert green citizenship card island school ellis workers tech skilled law welfare students" ], "page_nums": [ 44 ], "images": [] }, "15": { "title": "Conclusions", "text": [ "Variational autoencoders (VAEs) provide a powerful framework for latent variable modeling", "We use the VAE framework to create a customizable model for documents with metadata", "We obtain comparable performance with enhanced flexibility and scalability", "Code is available: www.github.com/dallascard/scholar" ], "page_nums": [ 45 ], "images": [] } }, "paper_title": "Neural Models for Documents with Metadata", "paper_id": "975", "paper": { "title": "Neural Models for Documents with Metadata", "abstract": "Most real-world document collections involve various types of metadata, such as author, source, and date, and yet the most commonly-used approaches to modeling text corpora ignore this information. While specialized models have been developed for particular applications, few are widely used in practice, as customization typically requires derivation of a custom inference algorithm. In this paper, we build on recent advances in variational inference methods and propose a general neural framework, based on topic models, to enable flexible incorporation of metadata and allow for rapid exploration of alternative models. Our approach achieves strong performance, with a manageable tradeoff between perplexity, coherence, and sparsity. 
Finally, we demonstrate the potential of our framework through an exploration of a corpus of articles about US immigration.", "text": [ { "id": 0, "string": "Introduction Topic models comprise a family of methods for uncovering latent structure in text corpora, and are widely used tools in the digital humanities, political science, and other related fields (Boyd-Graber et al., 2017) ." }, { "id": 1, "string": "Latent Dirichlet allocation (LDA; Blei et al., 2003) is often used when there is no prior knowledge about a corpus." }, { "id": 2, "string": "In the real world, however, most documents have non-textual attributes such as author (Rosen-Zvi et al., 2004) , timestamp , rating (McAuliffe and Blei, 2008) , or ideology (Eisenstein et al., 2011; Nguyen et al., 2015b) , which we refer to as metadata." }, { "id": 3, "string": "Many customizations of LDA have been developed to incorporate document metadata." }, { "id": 4, "string": "Two models of note are supervised LDA (SLDA; McAuliffe and Blei, 2008) , which jointly models words and labels (e.g., ratings) as being generated from a latent representation, and sparse additive generative models (SAGE; Eisenstein et al., 2011) , which assumes that observed covariates (e.g., author ideology) have a sparse effect on the relative probabilities of words given topics." }, { "id": 5, "string": "The structural topic model (STM; Roberts et al., 2014) , which adds correlations between topics to SAGE, is also widely used, but like SAGE it is limited in the types of metadata it can efficiently make use of, and how that metadata is used." }, { "id": 6, "string": "Note that in this work we will distinguish labels (metadata that are generated jointly with words from latent topic representations) from covariates (observed metadata that influence the distribution of labels and words)." }, { "id": 7, "string": "The ability to create variations of LDA such as those listed above has been limited by the expertise needed to develop custom inference algorithms for each model." }, { "id": 8, "string": "As a result, it is rare to see such variations being widely used in practice." }, { "id": 9, "string": "In this work, we take advantage of recent advances in variational methods (Kingma and Welling, 2014; Rezende et al., 2014; Miao et al., 2016; Srivastava and Sutton, 2017) to facilitate approximate Bayesian inference without requiring model-specific derivations, and propose a general neural framework for topic models with metadata, SCHOLAR." }, { "id": 10, "string": "1 SCHOLAR combines the abilities of SAGE and SLDA, and allows for easy exploration of the following options for customization: 1." }, { "id": 11, "string": "Covariates: as in SAGE and STM, we incorporate explicit deviations for observed covariates, as well as effects for interactions with topics." }, { "id": 12, "string": "2." }, { "id": 13, "string": "Supervision: as in SLDA, we can use metadata as labels to help infer topics that are relevant in predicting those labels." }, { "id": 14, "string": "3." }, { "id": 15, "string": "Rich encoder network: we use the encoding network of a variational autoencoder (VAE) to incorporate additional prior knowledge in the form of word embeddings, and/or to provide interpretable embeddings of covariates." }, { "id": 16, "string": "4." }, { "id": 17, "string": "Sparsity: as in SAGE, a sparsity-inducing prior can be used to encourage more interpretable topics, represented as sparse deviations from a background log-frequency." 
}, { "id": 18, "string": "We begin with the necessary background and motivation ( §2), and then describe our basic framework and its extensions ( §3), followed by a series of experiments ( §4)." }, { "id": 19, "string": "In an unsupervised setting, we can customize the model to trade off between perplexity, coherence, and sparsity, with improved coherence through the introduction of word vectors." }, { "id": 20, "string": "Alternatively, by incorporating metadata we can either learn topics that are more predictive of labels than SLDA, or learn explicit deviations for particular parts of the metadata." }, { "id": 21, "string": "Finally, by combining all parts of our model we can meaningfully incorporate metadata in multiple ways, which we demonstrate through an exploration of a corpus of news articles about US immigration." }, { "id": 22, "string": "In presenting this particular model, we emphasize not only its ability to adapt to the characteristics of the data, but the extent to which the VAE approach to inference provides a powerful framework for latent variable modeling that suggests the possibility of many further extensions." }, { "id": 23, "string": "Our implementation is available at https://github." }, { "id": 24, "string": "com/dallascard/scholar." }, { "id": 25, "string": "Background and Motivation LDA can be understood as a non-negative Bayesian matrix factorization model: the observed document-word frequency matrix, X ∈ Z D×V (D is the number of documents, V is the vocabulary size) is factored into two low-rank matrices, Θ D×K and B K×V , where each row of Θ, θ i ∈ ∆ K is a latent variable representing a distribution over topics in document i, and each row of B, β k ∈ ∆ V , represents a single topic, i.e., a distribution over words in the vocabulary." }, { "id": 26, "string": "2 While it is possible to factor the count data into unconstrained 2 Z denotes nonnegative integers, and ∆ K denotes the set of K-length nonnegative vectors that sum to one." }, { "id": 27, "string": "For a proper probabilistic interpretation, the matrix to be factored is actually the matrix of latent mean parameters of the assumed data generating process, Xij ∼ Poisson(Λij)." }, { "id": 28, "string": "See Cemgil (2009) or Paisley et al." }, { "id": 29, "string": "(2014) for details." }, { "id": 30, "string": "matrices, the particular priors assumed by LDA are important for interpretability (Wallach et al., 2009) ." }, { "id": 31, "string": "For example, the neural variational document model (NVDM; Miao et al., 2016) allows θ i ∈ R K and achieves normalization by taking the softmax of θ i B." }, { "id": 32, "string": "However, the experiments in Srivastava and Sutton (2017) found the performance of the NVDM to be slightly worse than LDA in terms of perplexity, and dramatically worse in terms of topic coherence." }, { "id": 33, "string": "The topics discovered by LDA tend to be parsimonious and coherent groupings of words which are readily identifiable to humans as being related to each other (Chang et al., 2009) , and the resulting mode of the matrix Θ provides a representation of each document which can be treated as a measurement for downstream tasks, such as classification or answering social scientific questions (Wallach, 2016) ." }, { "id": 34, "string": "LDA does not require -and cannot make use of -additional prior knowledge." 
}, { "id": 35, "string": "As such, the topics that are discovered may bear little connection to metadata of a corpus that is of interest to a researcher, such as sentiment, ideology, or time." }, { "id": 36, "string": "In this paper, we take inspiration from two models which have sought to alleviate this problem." }, { "id": 37, "string": "The first, supervised LDA (SLDA; McAuliffe and Blei, 2008) , assumes that documents have labels y which are generated conditional on the corresponding latent representation, i.e., y i ∼ p(y | θ i )." }, { "id": 38, "string": "3 By incorporating labels into the model, it is forced to learn topics which allow documents to be represented in a way that is useful for the classification task." }, { "id": 39, "string": "Such models can be used inductively as text classifiers (Balasubramanyan et al., 2012) ." }, { "id": 40, "string": "SAGE (Eisenstein et al., 2011) , by contrast, is an exponential-family model, where the key innovation was to replace topics with sparse deviations from the background log-frequency of words (d), i.e., p(word | softmax(d + θ i B))." }, { "id": 41, "string": "SAGE can also incorporate deviations for observed covariates, as well as interactions between topics and covariates, by including additional terms inside the softmax." }, { "id": 42, "string": "In principle, this allows for inferring, for example, the effect on an author's ideology on their choice of words, as well as ideological variations on each underlying topic." }, { "id": 43, "string": "Unlike the NVDM, SAGE still constrains θ i to lie on the simplex, as in LDA." }, { "id": 44, "string": "SLDA and SAGE provide two different ways that users might wish to incorporate prior knowl-edge as a way of guiding the discovery of topics in a corpus: SLDA incorporates labels through a distribution conditional on topics; SAGE includes explicit sparse deviations for each unique value of a covariate, in addition to topics." }, { "id": 45, "string": "4 Because of the Dirichlet-multinomial conjugacy in the original model, efficient inference algorithms exist for LDA." }, { "id": 46, "string": "Each variation of LDA, however, has required the derivation of a custom inference algorithm, which is a time-consuming and errorprone process." }, { "id": 47, "string": "In SLDA, for example, each type of distribution we might assume for p(y | θ) would require a modification of the inference algorithm." }, { "id": 48, "string": "SAGE breaks conjugacy, and as such, the authors adopted L-BFGS for optimizing the variational bound." }, { "id": 49, "string": "Moreover, in order to maintain computational efficiency, it assumed that covariates were limited to a single categorical label." }, { "id": 50, "string": "More recently, the variational autoencoder (VAE) was introduced as a way to perform approximate posterior inference on models with otherwise intractable posteriors (Kingma and Welling, 2014; Rezende et al., 2014) ." }, { "id": 51, "string": "This approach has previously been applied to models of text by Miao et al." }, { "id": 52, "string": "(2016) and Srivastava and Sutton (2017) ." }, { "id": 53, "string": "We build on their work and show how this framework can be adapted to seamlessly incorporate the ideas of both SAGE and SLDA, while allowing for greater flexibility in the use of metadata." }, { "id": 54, "string": "Moreover, by exploiting automatic differentiation, we allow for modification of the model without requiring any change to the inference procedure." 
}, { "id": 55, "string": "The result is not only a highly adaptable family of models with scalable inference and efficient prediction; it also points the way to incorporation of many ideas found in the literature, such as a gradual evolution of topics , and hierarchical models (Blei et al., 2010; Nguyen et al., 2013 Nguyen et al., , 2015b )." }, { "id": 56, "string": "SCHOLAR: A Neural Topic Model with Covariates, Supervision, and Sparsity We begin by presenting the generative story for our model, and explain how it generalizes both SLDA and SAGE ( §3.1)." }, { "id": 57, "string": "We then provide a general explanation of inference using VAEs and how it applies to our model ( §3.2), as well as how to infer docu-4 A third way of incorporating metadata is the approach used by various \"upstream\" models, such as Dirichletmultinomial regression (Mimno and McCallum, 2008) , which uses observed metadata to inform the document prior." }, { "id": 58, "string": "We hypothesize that this approach could be productively combined with our framework, but we leave this as future work." }, { "id": 59, "string": "ment representations and predict labels at test time ( §3.3)." }, { "id": 60, "string": "Finally, we discuss how we can incorporate additional prior knowledge ( §3.4)." }, { "id": 61, "string": "Generative Story Consider a corpus of D documents, where document i is a list of N i words, w i , with V words in the vocabulary." }, { "id": 62, "string": "For each document, we may have observed covariates c i (e.g., year of publication), and/or one or more labels, y i (e.g., sentiment)." }, { "id": 63, "string": "Our model builds on the generative story of LDA, but optionally incorporates labels and covariates, and replaces the matrix product of Θ and B with a more flexible generative network, f g , followed by a softmax transform." }, { "id": 64, "string": "Instead of using a Dirichlet prior as in LDA, we employ a logistic normal prior on θ as in Srivastava and Sutton (2017) to facilitate inference ( §3.2): we draw a latent variable, r, 5 from a multivariate normal, and transform it to lie on the simplex using a softmax transform." }, { "id": 65, "string": "6 The generative story is shown in Figure 1a and described in equations below: For each document i of length N i : # Draw a latent representation on the simplex from a logistic normal prior: r i ∼ N (r | µ 0 (α), diag(σ 2 0 (α))) θ i = softmax(r i ) # Generate words, incorporating covariates: η i = f g (θ i , c i ) For each word j in document i: w ij ∼ p(w | softmax(η i )) # Similarly generate labels: y i ∼ p(y | f y (θ i , c i )), where p(w | softmax(η i )) is a multinomial distribution and p(y | f y (θ i , c i )) is a distribution appropriate to the data (e.g., multinomial for categorical labels)." }, { "id": 66, "string": "f g is a model-specific combination of latent variables and covariates, f y is a multi-layer neural network, and µ 0 (α) and σ 2 0 (α) are the mean and diagonal covariance terms of a multivariate normal prior." }, { "id": 67, "string": "To approximate a symmetric Dirichlet prior with hyperparameter α, these are given by the Laplace approximation (Hennig et al., 2012) to be µ 0,k (α) = 0 and σ 2 0,k = (K − 1)/(αK)." }, { "id": 68, "string": "If we were to ignore covariates, place a Dirichlet prior on B, and let η = θ i B, this model is equivalent to SLDA with a logistic normal prior." 
}, { "id": 69, "string": "Similarly, we can recover a model that is like SAGE, but lacks sparsity, if we ignore labels, and let (1) where d is the V -dimensional background term (representing the log of the overall word frequency), θ i ⊗ c i is a vector of interactions between topics and covariates, and B cov and B int are additional weight (deviation) matrices." }, { "id": 70, "string": "The background is included to account for common words with approximately the same frequency across documents, meaning that the B * weights now represent both positive and negative deviations from this background." }, { "id": 71, "string": "This is the form of f g which we will use in our experiments." }, { "id": 72, "string": "η i = d + θ i B + c i B cov + (θ i ⊗ c i ) B int , To recover the full SAGE model, we can place a sparsity-inducing prior on each B * ." }, { "id": 73, "string": "As in Eisenstein et al." }, { "id": 74, "string": "(2011) , we make use of the compound normal-exponential prior for each element of the weight matrices, B * m,n , with hyperparameter γ, 7 τ m,n ∼ Exponential(γ), (2) B * m,n ∼ N (0, τ m,n )." }, { "id": 75, "string": "(3) We can choose to ignore various parts of this model, if, for example, we don't have any labels or observed covariates, or we don't wish to use interactions or sparsity." }, { "id": 76, "string": "8 Other generator networks could also be considered, with additional layers to represent more complex interactions, although this might involve some loss of interpretability." }, { "id": 77, "string": "In the absence of metadata, and without sparsity, our model is equivalent to the ProdLDA model of Srivastava and Sutton (2017) with an explicit background term, and ProdLDA is, in turn, a 7 To avoid having to tune γ, we employ an improper Jeffery's prior, p(τm,n) ∝ 1/τm,n, as in SAGE." }, { "id": 78, "string": "Although this causes difficulties in posterior inference for the variance terms, τ , in practice, we resort to a variational EM approach, with MAP-estimation for the weights, B, and thus alternate between computing expectations of the τ parameters, and updating all other parameters using some variant of stochastic gradient descent." }, { "id": 79, "string": "For this, we only require the expectation of each τmn for each E-step, which is given by 1/B 2 m,n ." }, { "id": 80, "string": "We refer the reader to Eisenstein et al." }, { "id": 81, "string": "(2011) for additional details." }, { "id": 82, "string": "8 We could also ignore latent topics, in which case we would get a naïve Bayes-like model of text with deviations for each covariate p(wij | ci) ∝ exp(d + c i B cov )." }, { "id": 83, "string": "Figure 1a presents the generative story of our model." }, { "id": 84, "string": "Figure 1b illustrates the inference network using the reparametrization trick to perform variational inference on our model." }, { "id": 85, "string": "Shaded nodes are observed; double circles indicate deterministic transformations of parent nodes." }, { "id": 86, "string": "special case of SAGE, without background logfrequencies, sparsity, covariates, or labels." }, { "id": 87, "string": "In the next section we generalize the inference method used for ProdLDA; in our experiments we validate its performance and explore the effects of regularization and word-vector initialization ( §3.4)." }, { "id": 88, "string": "The NVDM (Miao et al., 2016) uses the same approach to inference, but does not not restrict document representations to the simplex." 
}, { "id": 89, "string": "Learning and Inference As in past work, each document i is assumed to have a latent representation r i , which can be interpreted as its relative membership in each topic (after exponentiating and normalizing)." }, { "id": 90, "string": "In order to infer an approximate posterior distribution over r i , we adopt the sampling-based VAE framework developed in previous work (Kingma and Welling, 2014; Rezende et al., 2014) ." }, { "id": 91, "string": "As in conventional variational inference, we assume a variational approximation to the posterior, q Φ (r i | w i , c i , y i ), and seek to minimize the KL divergence between it and the true posterior, p(r i | w i , c i , y i ), where Φ is the set of variational parameters to be defined below." }, { "id": 92, "string": "After some manipulations (given in supplementary materials), we obtain the evidence lower bound (ELBO) for a sin-gle document, L(w i ) = E q Φ (r i |w i ,c i ,y i )   N i j=1 log p(w ij | r i , c i )   + E q Φ (r i |w i ,c i ,y i ) [log p(y i | r i , c i )] − D KL [q Φ (r i | w i , c i , y i ) || p(r i | α)] ." }, { "id": 93, "string": "(4) As in the original VAE, we will encode the parameters of our variational distributions using a shared multi-layer neural network." }, { "id": 94, "string": "Because we have assumed a diagonal normal prior on r, this will take the form of a network which outputs a mean vector, µ i = f µ (w i , c i , y i ) and diagonal of a covariance matrix, σ 2 i = f σ (w i , c i , y i ), such that q Φ (r i | w i , c i , y i ) = N (µ i , σ 2 i ) ." }, { "id": 95, "string": "Incorporating labels and covariates to the inference network used by Miao et al." }, { "id": 96, "string": "(2016) and Srivastava and Sutton (2017) , we use: π i = f e ([W x x i ; W c c i ; W y y i ]), (5) µ i = W µ π i + b µ , (6) log σ 2 i = W σ π i + b σ , (7) where x i is a V -dimensional vector representing the counts of words in w i , and f e is a multilayer perceptron." }, { "id": 97, "string": "The full set of encoder parameters, Φ, thus includes the parameters of f e and all weight matrices and bias vectors in Equations 5-7 (see Figure 1b )." }, { "id": 98, "string": "This approach means that the expectations in Equation 4 are intractable, but we can approximate them using sampling." }, { "id": 99, "string": "In order to maintain differentiability with respect to Φ, even after sampling, we make use of the reparameterization trick (Kingma and Welling, 2014), 9 which allows us to reparameterize samples from q Φ (r | w i , c i , y i ) in terms of samples from an independent source of noise, i.e., (s) ∼ N (0, I), r (s) i = g Φ (w i , c i , y i , (s) ) = µ i + σ i · (s) ." }, { "id": 100, "string": "We thus replace the bound in Equation 4 with a Monte Carlo approximation using a single sam- 9 The Dirichlet distribution cannot be directly reparameterized in this way, which is why we use the logistic normal prior on θ to approximate the Dirichlet prior used in LDA." }, { "id": 101, "string": "ple 10 of (and thereby of r): L(w i ) ≈ N i j=1 log p(w ij | r (s) i , c i ) + log p(y i | r (s) i , c i ) − D KL [q Φ (r i | w i , c i , y i ) || p(r i | α)] ." }, { "id": 102, "string": "(8) We can now optimize this sampling-based approximation of the variational bound with respect to Φ, B * , and all parameters of f g and f y using stochastic gradient descent." 
}, { "id": 103, "string": "Moreover, because of this stochastic approach to inference, we are not restricted to covariates with a small number of unique values, which was a limitation of SAGE." }, { "id": 104, "string": "Finally, the KL divergence term in Equation 8 can be computed in closed form (see supplementary materials)." }, { "id": 105, "string": "Prediction on Held-out Data In addition to inferring latent topics, our model can both infer latent representations for new documents and predict their labels, the latter of which was the motivation for SLDA." }, { "id": 106, "string": "In traditional variational inference, inference at test time requires fixing global parameters (topics), and optimizing the per-document variational parameters for the test set." }, { "id": 107, "string": "With the VAE framework, by contrast, the encoder network (Equations 5-7) can be used to directly estimate the posterior distribution for each test document, using only a forward pass (no iterative optimization or sampling)." }, { "id": 108, "string": "If not using labels, we can use this approach directly, passing the word counts of new documents through the encoder to get a posterior q Φ (r i | w i , c i )." }, { "id": 109, "string": "When we also include labels to be predicted, we can first train a fully-observed model, as above, then fix the decoder, and retrain the encoder without labels." }, { "id": 110, "string": "In practice, however, if we train the encoder network using one-hot encodings of document labels, it is sufficient to provide a vector of all zeros for the labels of test documents; this is what we adopt for our experiments ( §4.2), and we still obtain good predictive performance." }, { "id": 111, "string": "The label network, f y , is a flexible component which can be used to predict a wide range of outcomes, from categorical labels (such as star ratings; McAuliffe and Blei, 2008) to real-valued outputs (such as number of citations or box-office returns; Yogatama et al., 2011) ." }, { "id": 112, "string": "For categorical labels, predictions are given bŷ y i = argmax y ∈ Y p(y | r i , c i )." }, { "id": 113, "string": "(9) Alternatively, when dealing with a small set of categorical labels, it is also possible to treat them as observed categorical covariates during training." }, { "id": 114, "string": "At test time, we can then consider all possible one-hot vectors, e, in place of c i , and predict the label that maximizes the probability of the words, i.e., y i = argmax y ∈ Y N i j=1 log p(w ij | r i , e y )." }, { "id": 115, "string": "(10) This approach works well in practice (as we show in §4.2), but does not scale to large numbers of labels, or other types of prediction problems, such as multi-class classification or regression." }, { "id": 116, "string": "The choice to include metadata as covariates, labels, or both, depends on the data." }, { "id": 117, "string": "The key point is that we can incorporate metadata in two very different ways, depending on what we want from the model." }, { "id": 118, "string": "Labels guide the model to infer topics that are relevant to those labels, whereas covariates induce explicit deviations, leaving the latent variables to account for the rest of the content." }, { "id": 119, "string": "Additional Prior Information A final advantage of the VAE framework is that the encoder network provides a way to incorporate additional prior information in the form of word vectors." 
}, { "id": 120, "string": "Although we can learn all parameters starting from a random initialization, it is also possible to initialize and fix the initial embeddings of words in the model, W x , in Equation 5." }, { "id": 121, "string": "This leverages word similarities derived from large amounts of unlabeled data, and may promote greater coherence in inferred topics." }, { "id": 122, "string": "The same could also be done for some covariates; for example, we could embed the source of a news article based on its place on the ideological spectrum." }, { "id": 123, "string": "Conversely, if we choose to learn these parameters, the learned values (W y and W c ) may provide meaningful embeddings of these metadata (see section §4.3)." }, { "id": 124, "string": "Other variants on topic models have also proposed incorporating word vectors, both as a parallel part of the generative process (Nguyen et al., 2015a) , and as an alternative parameterization of topic distributions (Das et al., 2015) , but inference is not scalable in either of these models." }, { "id": 125, "string": "Because of the generality of the VAE framework, we could also modify the generative story so that word embeddings are emitted (rather than tokens); we leave this for future work." }, { "id": 126, "string": "Experiments and Results To evaluate and demonstrate the potential of this model, we present a series of experiments below." }, { "id": 127, "string": "We first test SCHOLAR without observed metadata, and explore the effects of using regularization and/or word vector initialization, compared to LDA, SAGE, and NVDM ( §4.1)." }, { "id": 128, "string": "We then evaluate our model in terms of predictive performance, in comparison to SLDA and an l 2 -regularized logistic regression baseline ( §4.2)." }, { "id": 129, "string": "Finally, we demonstrate the ability to incorporate covariates and/or labels in an exploratory data analysis ( §4.3)." }, { "id": 130, "string": "The scores we report are generalization to heldout data, measured in terms of perplexity; coherence, measured in terms of non-negative point-wise mutual information (NPMI; Chang et al., 2009; Newman et al., 2010) , and classification accuracy on test data." }, { "id": 131, "string": "For coherence we evaluate NPMI using the top 10 words of each topic, both internally (using test data), and externally, using a decade of articles from the English Gigaword dataset (Graff and Cieri, 2003) ." }, { "id": 132, "string": "Since our model employs variational methods, the reported perplexity is an upper bound based on the ELBO." }, { "id": 133, "string": "As datasets we use the familiar 20 newsgroups, the IMDB corpus of 50,000 movie reviews (Maas et al., 2011) , and the UIUC Yahoo answers dataset with 150,000 documents in 15 categories (Chang et al., 2008) ." }, { "id": 134, "string": "For further exploration, we also make use of a corpus of approximately 4,000 timestamped news articles about US immigration, each annotated with pro-or anti-immigration tone (Card et al., 2015) ." }, { "id": 135, "string": "We use the original author-provided implementations of SAGE 11 and SLDA, 12 while for LDA we use Mallet." }, { "id": 136, "string": "13 ." }, { "id": 137, "string": "Our implementation of SCHOLAR is in TensorFlow, but we have also provided a preliminary PyTorch implementation of the core of our model." }, { "id": 138, "string": "14 For additional details about datasets and implementation, please refer to the supplementary material." 
}, { "id": 139, "string": "It is challenging to fairly evaluate the relative computational efficiency of our approach compared to past work (due to the stochastic nature of our ap-11 github.com/jacobeisenstein/SAGE 12 github.com/blei-lab/class-slda 13 mallet.cs.umass.edu 14 github.com/dallascard/scholar proach to inference, choices about hyperparameters such as tolerance, and because of differences in implementation)." }, { "id": 140, "string": "Nevertheless, in practice, the performance of our approach is highly appealing." }, { "id": 141, "string": "For all experiments in this paper, our implementation was much faster than SLDA or SAGE (implemented in C and Matlab, respectively), and competitive with Mallet." }, { "id": 142, "string": "Unsupervised Evaluation Although the emphasis of this work is on incorporating observed labels and/or covariates, we briefly report on experiments in the unsupervised setting." }, { "id": 143, "string": "Recall that, without metadata, SCHOLAR equates to ProdLDA, but with an explicit background term." }, { "id": 144, "string": "15 We therefore use the same experimental setup as Srivastava and Sutton (2017) (learning rate, momentum, batch size, and number of epochs) and find the same general patterns as they reported (see Table 1 and supplementary material): our model returns more coherent topics than LDA, but at the cost of worse perplexity." }, { "id": 145, "string": "SAGE, by contrast, attains very high levels of sparsity, but at the cost of worse perplexity and coherence than LDA." }, { "id": 146, "string": "As expected, the NVDM produces relatively low perplexity, but very poor coherence, due to its lack of constraints on θ." }, { "id": 147, "string": "Further experimentation revealed that the VAE framework involves a tradeoff among the scores; running for more epochs tends to result in better perplexity on held-out data, but at the cost of worse coherence." }, { "id": 148, "string": "Adding regularization to encourage sparse topics has a similar effect as in SAGE, leading to worse perplexity and coherence, but it does create sparse topics." }, { "id": 149, "string": "Interestingly, initializing the encoder with pretrained word2vec embeddings, and not updating them returned a model with the best internal coherence of any model we considered for IMDB and Yahoo answers, and the second-best for 20 newsgroups." }, { "id": 150, "string": "The background term in our model does not have much effect on perplexity, but plays an important role in producing coherent topics; as in SAGE, the background can account for common words, so they are mostly absent among the most heavily weighted words in the topics." }, { "id": 151, "string": "For instance, words like film and movie in the IMDB corpus are relatively unimportant in the topics learned by our Table 1 : Performance of our various models in an unsupervised setting (i.e., without labels or covariates) on the IMDB dataset using a 5,000-word vocabulary and 50 topics." }, { "id": 152, "string": "The supplementary materials contain additional results for 20 newsgroups and Yahoo answers." }, { "id": 153, "string": "model, but would be much more heavily weighted without the background term, as they are in topics learned by LDA." }, { "id": 154, "string": "Text Classification We next consider the utility of our model in the context of categorical labels, and consider them alternately as observed covariates and as labels generated conditional on the latent representation." 
}, { "id": 155, "string": "We use the same setup as above, but tune number of training epochs for our model using a random 20% of training data as a development set, and similarly tune regularization for logistic regression." }, { "id": 156, "string": "Table 2 summarizes the accuracy of various models on three datasets, revealing that our model offers competitive performance, both as a joint model of words and labels (Eq." }, { "id": 157, "string": "9), and a model which conditions on covariates (Eq." }, { "id": 158, "string": "10)." }, { "id": 159, "string": "Although SCHOLAR is comparable to the logistic regression baseline, our purpose here is not to attain state-of-the-art performance on text classification." }, { "id": 160, "string": "Rather, the high accuracies we obtain demonstrate that we are learning low-dimensional representations of documents that are relevant to the label of interest, outperforming SLDA, and have the same attractive properties as topic models." }, { "id": 161, "string": "Further, any neural network that is successful for text classification could be incorporated into f y and trained end-to-end along with topic discovery." }, { "id": 162, "string": "Exploratory Study We demonstrate how our model might be used to explore an annotated corpus of articles about immigration, and adapt to different assumptions about the data." }, { "id": 163, "string": "We only use a small number of topics in this part (K = 8) for compact presentation." }, { "id": 164, "string": "Tone as a label." }, { "id": 165, "string": "We first consider using the annotations as a label, and train a joint model to infer topics relevant to the tone of the article (pro-or anti-immigration)." }, { "id": 166, "string": "Figure 2 shows a set of topics learned in this way, along with the predicted probability of an article being pro-immigration conditioned on the given topic." }, { "id": 167, "string": "All topics are coherent, and the predicted probabilities have strong face validity, e.g., \"arrested charged charges agents operation\" is least associated with pro-immigration." }, { "id": 168, "string": "Tone as a covariate." }, { "id": 169, "string": "Next we consider using tone as a covariate, and build a model using both tone and tone-topic interactions." }, { "id": 170, "string": "Table 3 shows a set of topics learned from the immigration data, along with the most highly-weighted words in the corresponding tone-topic interaction terms." }, { "id": 171, "string": "As can be seen, these interaction terms tend to capture different frames (e.g., \"criminal\" vs. \"detainees\", and \"illegals\" vs. \"newcomers\", etc)." }, { "id": 172, "string": "Combined model with temporal metadata." }, { "id": 173, "string": "Finally, we incorporate both the tone annotations and the year of publication of each article, treating the former as a label and the latter as a covariate." }, { "id": 174, "string": "In this model, we also include an embedding matrix, W c , to project the one-hot year vectors down to a two-dimensional continuous space, with a learned deviation for each dimension." }, { "id": 175, "string": "We omit the topics in the interest of space, but Figure 3 shows the learned embedding for each year, along with the top terms of the corresponding deviations." }, { "id": 176, "string": "As can be seen, the model learns that adjacent years tend to produce similar deviations, even though we have not explicitly encoded this information." 
}, { "id": 177, "string": "The leftright dimension roughly tracks a temporal trend with positive deviations shifting from the years of Clinton and INS on the left, to Obama and ICE on the right." }, { "id": 178, "string": "16 Meanwhile, the events of 9/11 dominate the vertical direction, with the words sept, hijackers, and attacks increasing in probability as we move up in the space." }, { "id": 179, "string": "If we wanted to look at each year individually, we could drop the embedding of years, and learn a sparse set of topic-year interactions, similar to tone in Table 3 ." }, { "id": 180, "string": "Additional Related Work The literature on topic models is vast; in addition to papers cited throughout, other efforts to incorporate metadata into topic models include Dirichletmultinomial regression (DMR; Mimno and McCallum, 2008) , Labeled LDA (Ramage et al., 2009) , and MedLDA (Zhu et al., 2009) ." }, { "id": 181, "string": "A recent paper also extended DMR by using deep neural networks to embed metadata into a richer document prior (Benton and Dredze, 2018) ." }, { "id": 182, "string": "A separate line of work has pursued parameterizing unsupervised models of documents using neural networks (Hinton and Salakhutdinov, Base topics (each row is a topic) Anti-immigration interactions Pro-immigration interactions ice customs agency enforcement homeland criminal customs arrested detainees detention center agency population born percent americans english jobs million illegals taxpayers english newcomers hispanic city judge case court guilty appeals attorney guilty charges man charged asylum court judge case appeals patrol border miles coast desert boat guard patrol border agents boat died authorities desert border bodies licenses drivers card visa cards applicants foreign sept visas system green citizenship card citizen apply island story chinese ellis international smuggling federal charges island school ellis english story guest worker workers bush labor bill bill border house senate workers tech skilled farm labor benefits bill welfare republican state senate republican california gov state law welfare students tuition Table 3 : Top words for topics (left) and the corresponding anti-immigration (middle) and pro-immigration (right) variations when treating tone as a covariate, with interactions." }, { "id": 183, "string": "2009; Larochelle and Lauly, 2012) , including non-Bayesian approaches (Cao et al., 2015) ." }, { "id": 184, "string": "More recently, Lau et al." }, { "id": 185, "string": "(2017) proposed a neural language model that incorporated topics, and He et al." }, { "id": 186, "string": "(2017) developed a scalable alternative to the correlated topic model by simultaneously learning topic embeddings." }, { "id": 187, "string": "Others have attempted to extend the reparameterization trick to the Dirichlet and Gamma distributions, either through transformations or a generalization of reparameterization (Ruiz et al., 2016) ." }, { "id": 188, "string": "Black-box and VAE-style inference have been implemented in at least two general purpose tools designed to allow rapid exploration and evaluation of models (Kucukelbir et al., 2015; ." }, { "id": 189, "string": "Conclusion We have presented a neural framework for generalized topic models to enable flexible incorporation of metadata with a variety of options." 
}, { "id": 190, "string": "We take advantage of stochastic variational inference to develop a general algorithm for our framework such that variations do not require any model-specific algorithm derivations." }, { "id": 191, "string": "Our model demonstrates the tradeoff between perplexity, coherence, and sparsity, and outperforms SLDA in predicting document labels." }, { "id": 192, "string": "Furthermore, the flexibility of our model enables intriguing exploration of a text corpus on US immigration." }, { "id": 193, "string": "We believe that our model and code will facilitate rapid exploration of document collections with metadata." } ], "headers": [ { "section": "Introduction", "n": "1", "start": 0, "end": 24 }, { "section": "Background and Motivation", "n": "2", "start": 25, "end": 55 }, { "section": "SCHOLAR: A Neural Topic Model with Covariates, Supervision, and Sparsity", "n": "3", "start": 56, "end": 60 }, { "section": "Generative Story", "n": "3.1", "start": 61, "end": 88 }, { "section": "Learning and Inference", "n": "3.2", "start": 89, "end": 104 }, { "section": "Prediction on Held-out Data", "n": "3.3", "start": 105, "end": 118 }, { "section": "Additional Prior Information", "n": "3.4", "start": 119, "end": 125 }, { "section": "Experiments and Results", "n": "4", "start": 126, "end": 141 }, { "section": "Unsupervised Evaluation", "n": "4.1", "start": 142, "end": 153 }, { "section": "Text Classification", "n": "4.2", "start": 154, "end": 161 }, { "section": "Exploratory Study", "n": "4.3", "start": 162, "end": 179 }, { "section": "Additional Related Work", "n": "5", "start": 180, "end": 188 }, { "section": "Conclusion", "n": "6", "start": 189, "end": 193 } ], "figures": [ { "filename": "../figure/image/975-Table3-1.png", "caption": "Table 3: Top words for topics (left) and the corresponding anti-immigration (middle) and pro-immigration (right) variations when treating tone as a covariate, with interactions.", "page": 8, "bbox": { "x1": 81.6, "x2": 513.12, "y1": 63.36, "y2": 151.68 } }, { "filename": "../figure/image/975-Table2-1.png", "caption": "Table 2: Accuracy of various models on three datasets with categorical labels.", "page": 7, "bbox": { "x1": 79.67999999999999, "x2": 283.2, "y1": 63.36, "y2": 136.79999999999998 } }, { "filename": "../figure/image/975-Figure3-1.png", "caption": "Figure 3: Learned embeddings of year-ofpublication (treated as a covariate) from combined model of news articles about immigration.", "page": 7, "bbox": { "x1": 311.52, "x2": 527.52, "y1": 275.52, "y2": 420.47999999999996 } }, { "filename": "../figure/image/975-Figure2-1.png", "caption": "Figure 2: Topics inferred by a joint model of words and tone, and the corresponding probability of proimmigration tone for each topic. A topic is represented by the top words sorted by word probability throughout the paper.", "page": 7, "bbox": { "x1": 312.96, "x2": 527.52, "y1": 67.67999999999999, "y2": 175.68 } }, { "filename": "../figure/image/975-Figure1-1.png", "caption": "Figure 1: Figure 1a presents the generative story of our model. Figure 1b illustrates the inference network using the reparametrization trick to perform variational inference on our model. 
Shaded nodes are observed; double circles indicate deterministic transformations of parent nodes.", "page": 3, "bbox": { "x1": 306.71999999999997, "x2": 522.24, "y1": 62.879999999999995, "y2": 287.52 } }, { "filename": "../figure/image/975-Table1-1.png", "caption": "Table 1: Performance of our various models in an unsupervised setting (i.e., without labels or covariates) on the IMDB dataset using a 5,000-word vocabulary and 50 topics. The supplementary materials contain additional results for 20 newsgroups and Yahoo answers.", "page": 6, "bbox": { "x1": 306.71999999999997, "x2": 531.36, "y1": 63.36, "y2": 155.04 } } ] }, "gem_id": "GEM-SciDuet-train-9" }, { "slides": { "0": { "title": "Language generation Equivalence in the target space", "text": [ "Ground truth sequences lie in a union of low-dimensional subspaces where sequences convey the same message.", "I France won the world cup for the second time.", "I France captured its second world cup title.", "Some words in the vocabulary share the same meaning.", "I Capture, conquer, win, gain, achieve, accomplish, . . .", "ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoothing" ], "page_nums": [ 1 ], "images": [] }, "1": { "title": "Contributions", "text": [ "Take into consideration the nature of the target language space with:", "A token-level smoothing for a robust multi-class classification.", "A sequence-level smoothing to explore relevant alternative sequences.", "ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoothing" ], "page_nums": [ 2 ], "images": [] }, "2": { "title": "Maximum likelihood estimation MLE", "text": [ "For a pair (x y), we model the conditional distribution:", "Given the ground truth target sequence y?:", "ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoothing", "Zero-one loss, all the outputs y y? are treated equally.", "Discrepancy at the sentence level between the training (1-gram) and evaluation metric (4-gram)." ], "page_nums": [ 3, 4 ], "images": [] }, "3": { "title": "Loss smoothing", "text": [ "Prerequisite: A word embedding w (e.g. Glove) in the target space and a distance d", "with a temperature st. r" ], "page_nums": [ 5, 6, 9 ], "images": [] }, "4": { "title": "Token level smoothing", "text": [ "ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoothing" ], "page_nums": [ 7 ], "images": [] }, "5": { "title": "Loss smoothing Token level", "text": [ "Uniform label smoothing over all words in the vocabulary:", "We can leverage word co-occurrence statistics to build a non-uniform and meaningful distribution.", "ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoothing", "We can estimate the exact KL divergence for every target token." ], "page_nums": [ 8, 10, 11 ], "images": [] }, "6": { "title": "Sequence level smoothing", "text": [ "ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoothing" ], "page_nums": [ 12 ], "images": [] }, "7": { "title": "Loss smoothing Sequence level", "text": [ "Prerequisite: A distance d in the sequences space Vn, n N.", "Hamming Edit 1BLEU 1CIDEr", "ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoothing", "Can we evaluate the partition function Z for a given reward?", "We can approximate Z for Hamming distance." ], "page_nums": [ 13, 14 ], "images": [] }, "8": { "title": "Loss smoothing Sequence level Hamming distance", "text": [ "consider only sequences of the same length as y? 
(d(y y if |y |y", "We partition the set of sequences y?:", "their distance to the ground truth", "d d Sd Sd", "The reward in each subset is a constant.", "The cardinality of each subset is known.", "d Z |Sd exp", "ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoothing", "We can easily draw from r with Hamming distance:", "Pick d positions in the sequence to be changed among {1, . . . ,T}.", "Sample substitutions from V of the vocabulary." ], "page_nums": [ 15, 16, 17 ], "images": [] }, "9": { "title": "Loss smoothing Sequence level Other distances", "text": [ "We cannot easily sample from more complicated rewards such as BLEU or CIDEr.", "Choose q the reward distribution relative to Hamming distance.", "ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoothing" ], "page_nums": [ 18 ], "images": [] }, "10": { "title": "Loss smoothing Sequence level Support reduction", "text": [ "Can we reduce the support of r?", "Reduce the support from V |y?| to V |y", "sub where Vsub V.", "Vsub Vbatch tokens occuring in the SGD mini-batch.", "Vsub Vrefs tokens occuring in the available references.", "ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoothing" ], "page_nums": [ 19 ], "images": [] }, "11": { "title": "Loss smoothing Sequence level Lazy training", "text": [ "Default training Lazy training", "l y l is: l y l is:", "not forwarded in the RNN.", "log p(yl |yl x)", "ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoothing", "|y ||cell |, where cell are the cell parameters." ], "page_nums": [ 20, 21 ], "images": [] }, "12": { "title": "Image captioning on MS COCO Setup", "text": [ "5 captions for every image.", "ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoothing" ], "page_nums": [ 23 ], "images": [] }, "13": { "title": "Image captioning on MS COCO Results", "text": [ "Loss Reward Vsub BLEU-1 BLEU-4 CIDEr", "ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoothing" ], "page_nums": [ 24, 25, 26, 27 ], "images": [] }, "14": { "title": "Machine translation Setup", "text": [ "Bi-LSTM encoder-decoder with attention (Bahdanau et al. 2015)", "IWSLT14 DEEN WMT14 ENFR", "Dev 7k Dev 6k", "Test 7k Test 3k", "ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoothing" ], "page_nums": [ 28 ], "images": [] }, "15": { "title": "Machine translation Results", "text": [ "Loss Reward Vsub WMT14 EnFr IWSLT14 DeEn", "ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoothing" ], "page_nums": [ 29, 30, 31 ], "images": [] }, "16": { "title": "Conclusion", "text": [ "ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoothing" ], "page_nums": [ 32 ], "images": [] }, "17": { "title": "Takeaways", "text": [ "Improving over MLE with:", "Sequence-level smoothing: an extension of RAML (Norouzi et al. 2016)", "I Reduced support of the reward distribution.", "ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoothing", "Token-level smoothing: smoothing across semantically similar tokens instead of", "the usual uniform noise.", "Both schemes can be combined for better results." 
], "page_nums": [ 33, 34 ], "images": [] }, "18": { "title": "Future work", "text": [ "Validate on other seq2seq models besides LSTM encoder-decoders.", "Validate on models with BPE instead of words.", "I Experiment with other distributions for sampling other than the Hamming distance.", "I Sparsify the reward distribution for scalability.", "ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoothing" ], "page_nums": [ 35 ], "images": [] }, "19": { "title": "Appendices", "text": [ "ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoothing" ], "page_nums": [ 37 ], "images": [] }, "20": { "title": "Training time", "text": [ "Average wall time to process a single batch (10 images 50 captions) when training the RNN language model with fixed CNN (without attention) on a Titan X GPU.", "Loss MLE Tok Seq Seq lazy Seq Seq lazy Seq Seq lazy Tok-Seq Tok-Seq Tok-Seq", "Reward Glove sim Hamming", "Vsub V V Vbatch Vbatch Vrefs Vrefs V Vbatch Vrefs", "ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoothing" ], "page_nums": [ 39 ], "images": [] }, "21": { "title": "Generated captions", "text": [ "ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoothing" ], "page_nums": [ 40, 41 ], "images": [] }, "22": { "title": "Generated translations EnFr", "text": [ "I think its conceivable that these data are used for mutual benefit.", "Jestime quil est concevable que ces donnees soient utilisees dans leur interet mutuel.", "Je pense quil est possible que ces donnees soient utilisees a des fins reciproques.", "Je pense quil est possible que ces donnees soient utilisees pour le benefice mutuel.", "The public will be able to enjoy the technical prowess of young skaters , some of whom , like Hyeres young star , Lorenzo Palumbo , have already taken part in top-notch competitions.", "Le public pourra admirer les prouesses techniques de jeunes qui , pour certains , frequentent deja les competitions au plus haut niveau , a linstar du jeune prodige hyerois Lorenzo Palumbo.", "Le public sera en mesure de profiter des connaissances techniques des jeunes garcons , dont certains , a linstar de la jeune star americaine , Lorenzo , ont deja participe a des competitions de competition.", "Le public sera en mesure de profiter de la finesse technique des jeunes musiciens , dont certains , comme la jeune star de lentreprise , Lorenzo , ont deja pris part a des competitions de gymnastique.", "ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoothing" ], "page_nums": [ 42 ], "images": [] }, "23": { "title": "MS COCO server results", "text": [ "BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L CIDEr SPICE", "Ours: Tok-Seq CIDEr Ours: Tok-Seq CIDEr +", "Table: MS-COCO s server evaluation . (+) for ensemble submissions, for submissions with CIDEr optimization and () for models using additional data.", "ACL 2018, Melbourne M. Elbayad || Token-level and Sequence-level Loss Smoothing" ], "page_nums": [ 43 ], "images": [] } }, "paper_title": "Token-level and sequence-level loss smoothing for RNN language models", "paper_id": "977", "paper": { "title": "Token-level and sequence-level loss smoothing for RNN language models", "abstract": "Despite the effectiveness of recurrent neural network language models, their maximum likelihood estimation suffers from two limitations. It treats all sentences that do not match the ground truth as equally poor, ignoring the structure of the output space. 
Second, it suffers from \"exposure bias\": during training tokens are predicted given ground-truth sequences, while at test time prediction is conditioned on generated output sequences. To overcome these limitations we build upon the recent reward augmented maximum likelihood approach i.e. sequence-level smoothing that encourages the model to predict sentences close to the ground truth according to a given performance metric. We extend this approach to token-level loss smoothing, and propose improvements to the sequence-level smoothing approach. Our experiments on two different tasks, image captioning and machine translation, show that token-level and sequence-level loss smoothing are complementary, and significantly improve results.", "text": [ { "id": 0, "string": "Introduction Recurrent neural networks (RNNs) have recently proven to be very effective sequence modeling tools, and are now state of the art for tasks such as machine translation , image captioning (Kiros et al., 2014; Anderson et al., 2017) and automatic speech recognition (Chorowski et al., 2015; Chiu et al., 2017) ." }, { "id": 1, "string": "The basic principle of RNNs is to iteratively compute a vectorial sequence representation, by applying at each time-step the same trainable func-tion to compute the new network state from the previous state and the last symbol in the sequence." }, { "id": 2, "string": "These models are typically trained by maximizing the likelihood of the target sentence given an encoded source (text, image, speech) ." }, { "id": 3, "string": "Maximum likelihood estimation (MLE), however, has two main limitations." }, { "id": 4, "string": "First, the training signal only differentiates the ground-truth target output from all other outputs." }, { "id": 5, "string": "It treats all other output sequences as equally incorrect, regardless of their semantic proximity from the ground-truth target." }, { "id": 6, "string": "While such a \"zero-one\" loss is probably acceptable for coarse grained classification of images, e.g." }, { "id": 7, "string": "across a limited number of basic object categories (Everingham et al., 2010) it becomes problematic as the output space becomes larger and some of its elements become semantically similar to each other." }, { "id": 8, "string": "This is in particular the case for tasks that involve natural language generation (captioning, translation, speech recognition) where the number of possible outputs is practically unbounded." }, { "id": 9, "string": "For natural language generation tasks, evaluation measures typically do take into account structural similarity, e.g." }, { "id": 10, "string": "based on n-grams, but such structural information is not reflected in the MLE criterion." }, { "id": 11, "string": "The second limitation of MLE is that training is based on predicting the next token given the input and preceding ground-truth output tokens, while at test time the model predicts conditioned on the input and the so-far generated output sequence." }, { "id": 12, "string": "Given the exponentially large output space of natural language sentences, it is not obvious that the learned RNNs generalize well beyond the relatively sparse distribution of ground-truth sequences used during MLE optimization." }, { "id": 13, "string": "This phenomenon is known as \"exposure bias\" (Ranzato et al., 2016; ." }, { "id": 14, "string": "MLE minimizes the KL divergence between a target Dirac distribution on the ground-truth sentence(s) and the model's distribution." 
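To spell out the claim in the last sentence above, the following worked detail may help; the notation here is ours and is not taken verbatim from the paper. With a Dirac target on the ground-truth sentence, minimizing the KL divergence is exactly maximum likelihood estimation, and the loss-smoothing approach introduced next replaces that Dirac with an exponentiated-reward target.

```latex
% With a Dirac target \delta_{y^*} placed on the ground-truth sentence y^*:
\mathrm{KL}\!\left(\delta_{y^*} \,\middle\|\, p_\theta(\cdot \mid x)\right)
  = \sum_{y} \delta_{y^*}(y) \,\log \frac{\delta_{y^*}(y)}{p_\theta(y \mid x)}
  = -\log p_\theta(y^* \mid x),
% i.e. the usual MLE objective. Loss smoothing instead uses a softer target that
% spreads mass over sentences close to y^* under a task reward r (here taken as a
% negated distance d), with temperature \tau:
q(y \mid y^*) \propto \exp\!\big(r(y, y^*)/\tau\big),
\qquad r(y, y^*) = -\,d(y, y^*),
\qquad
\mathcal{L}(\theta) = \mathrm{KL}\!\left(q(\cdot \mid y^*) \,\middle\|\, p_\theta(\cdot \mid x)\right).
```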
}, { "id": 15, "string": "In this pa-per, we build upon the \"loss smoothing\" approach by Norouzi et al." }, { "id": 16, "string": "(2016) , which smooths the Dirac target distribution over similar sentences, increasing the support of the training data in the output space." }, { "id": 17, "string": "We make the following main contributions: • We propose a token-level loss smoothing approach, using word-embeddings, to achieve smoothing among semantically similar terms, and we introduce a special procedure to promote rare tokens." }, { "id": 18, "string": "• For sequence-level smoothing, we propose to use restricted token replacement vocabularies, and a \"lazy evaluation\" method that significantly speeds up training." }, { "id": 19, "string": "• We experimentally validate our approach on the MSCOCO image captioning task and the WMT'14 English to French machine translation task, showing that on both tasks combining token-level and sequence-level loss smoothing improves results significantly over maximum likelihood baselines." }, { "id": 20, "string": "In the remainder of the paper, we review the existing methods to improve RNN training in Section 2." }, { "id": 21, "string": "Then, we present our token-level and sequence-level approaches in Section 3." }, { "id": 22, "string": "Experimental evaluation results based on image captioning and machine translation tasks are laid out in Section 4." }, { "id": 23, "string": "Related work Previous work aiming to improve the generalization performance of RNNs can be roughly divided into three categories: those based on regularization, data augmentation, and alternatives to maximum likelihood estimation." }, { "id": 24, "string": "Regularization techniques are used to increase the smoothness of the function learned by the network, e.g." }, { "id": 25, "string": "by imposing an 2 penalty on the network weights, also known as \"weight decay\"." }, { "id": 26, "string": "More recent approaches mask network activations during training, as in dropout (Srivastava et al., 2014) and its variants adapted to recurrent models (Pham et al., 2014; Krueger et al., 2017) ." }, { "id": 27, "string": "Instead of masking, batch-normalization (Ioffe and Szegedy, 2015) rescales the network activations to avoid saturating the network's non-linearities." }, { "id": 28, "string": "Instead of regularizing the network parameters or activations, it is also possible to directly regularize based on the entropy of the output distribution (Pereyra et al., 2017) ." }, { "id": 29, "string": "Data augmentation techniques improve the ro-bustness of the learned models by applying transformations that might be encountered at test time to the training data." }, { "id": 30, "string": "In computer vision, this is common practice, and implemented by, e.g., scaling, cropping, and rotating training images (Le-Cun et al., 1998; Krizhevsky et al., 2012; Paulin et al., 2014) ." }, { "id": 31, "string": "In natural language processing, examples of data augmentation include input noising by randomly dropping some input tokens (Iyyer et al., 2015; Bowman et al., 2015; Kumar et al., 2016) , and randomly replacing words with substitutes sampled from the model ." }, { "id": 32, "string": "Xie et al." }, { "id": 33, "string": "(2017) introduced data augmentation schemes for RNN language models that leverage n-gram statistics in order to mimic Kneser-Ney smoothing of n-grams models." }, { "id": 34, "string": "In the context of machine translation, Fadaee et al." 
}, { "id": 35, "string": "(2017) modify sentences by replacing words with rare ones when this is plausible according to a pretrained language model, and substitutes its equivalent in the target sentence using automatic word alignments." }, { "id": 36, "string": "This approach, however, relies on the availability of additional monolingual data for language model training." }, { "id": 37, "string": "The de facto standard way to train RNN language models is maximum likelihood estimation (MLE) ." }, { "id": 38, "string": "The sequential factorization of the sequence likelihood generates an additive structure in the loss, with one term corresponding to the prediction of each output token given the input and the preceding ground-truth output tokens." }, { "id": 39, "string": "In order to directly optimize for sequence-level structured loss functions, such as measures based on n-grams like BLEU or CIDER, Ranzato et al." }, { "id": 40, "string": "(2016) use reinforcement learning techniques that optimize the expectation of a sequence-level reward." }, { "id": 41, "string": "In order to avoid early convergence to poor local optima, they pre-train the model using MLE." }, { "id": 42, "string": "Leblond et al." }, { "id": 43, "string": "(2018) build on the learning to search approach to structured prediction (Daumé III et al., 2009; Chang et al., 2015) and adapts it to RNN training." }, { "id": 44, "string": "The model generates candidate sequences at each time-step using all possible tokens, and scores these at sequence-level to derive a training signal for each time step." }, { "id": 45, "string": "This leads to an approach that is structurally close to MLE, but computationally expensive." }, { "id": 46, "string": "Norouzi et al." }, { "id": 47, "string": "(2016) introduce a reward augmented maximum likelihood (RAML) approach, that incorpo-rates a notion of sequence-level reward without facing the difficulties of reinforcement learning." }, { "id": 48, "string": "They define a target distribution over output sentences using a soft-max over the reward over all possible outputs." }, { "id": 49, "string": "Then, they minimize the KL divergence between the target distribution and the model's output distribution." }, { "id": 50, "string": "Training with a general reward distribution is similar to MLE training, except that we use multiple sentences sampled from the target distribution instead of only the ground-truth sentences." }, { "id": 51, "string": "In our work, we build upon the work of Norouzi et al." }, { "id": 52, "string": "(2016) by proposing improvements to sequence-level smoothing, and extending it to token-level smoothing." }, { "id": 53, "string": "Our token-level smoothing approach is related to the label smoothing approach of Szegedy et al." }, { "id": 54, "string": "(2016) for image classification." }, { "id": 55, "string": "Instead of maximizing the probability of the correct class, they train the model to predict the correct class with a large probability and all other classes with a small uniform probability." }, { "id": 56, "string": "This regularizes the model by preventing overconfident predictions." }, { "id": 57, "string": "In natural language generation with large vocabularies, preventing such \"narrow\" over-confident distributions is imperative, since for many tokens there are nearly interchangeable alternatives." 
}, { "id": 58, "string": "Loss smoothing for RNN training We briefly recall standard recurrent neural network training, before presenting sequence-level and token-level loss smoothing below." }, { "id": 59, "string": "Maximum likelihood RNN training We are interested in modeling the conditional probability of a sequence y = (y 1 , ." }, { "id": 60, "string": "." }, { "id": 61, "string": "." }, { "id": 62, "string": ", y T ) given a conditioning observation x, p θ (y|x) = T t=1 p θ (y t |x, y