gem_id | paper_id | paper_title | paper_abstract | paper_content | paper_headers | slide_id | slide_title | slide_content_text | target | references
---|---|---|---|---|---|---|---|---|---|---|
GEM-SciDuet-train-16#paper-994#slide-2 | 994 | Simple and Effective Text Simplification Using Semantic and Neural Methods | Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used in this situation. Previous application of Machine Translation for simplification suffers from a considerable disadvantage in that they are overconservative, often failing to modify the source in any way. Splitting based on semantic parsing, as proposed here, alleviates this issue. Extensive automatic and human evaluation shows that the proposed method compares favorably to the stateof-the-art in combined lexical and structural simplification. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266
],
"paper_content_text": [
"Introduction Text Simplification (TS) is generally defined as the conversion of a sentence into one or more simpler sentences.",
"It has been shown useful both as a preprocessing step for tasks such as Machine Translation (MT; Mishra et al., 2014; Štajner and Popović, 2016) and relation extraction (Niklaus et al., 2016) , as well as for developing reading aids, e.g.",
"for people with dyslexia (Rello et al., 2013) or non-native speakers (Siddharthan, 2002) .",
"TS includes both structural and lexical operations.",
"The main structural simplification operation is sentence splitting, namely rewriting a single sentence into multiple sentences while preserving its meaning.",
"While recent improvement in TS has been achieved by the use of neural MT (NMT) approaches (Nisioi et al., 2017; Zhang and Lapata, 2017) , where TS is consid-ered a case of monolingual translation, the sentence splitting operation has not been addressed by these systems, potentially due to the rareness of this operation in the training corpora (Narayan and Gardent, 2014; Xu et al., 2015) .",
"We show that the explicit integration of sentence splitting in the simplification system could also reduce conservatism, which is a grave limitation of NMT-based TS systems (Alva-Manchego et al., 2017) .",
"Indeed, experimenting with a stateof-the-art neural system (Nisioi et al., 2017) , we find that 66% of the input sentences remain unchanged, while none of the corresponding references is identical to the source.",
"Human and automatic evaluation of the references (against other references), confirm that the references are indeed simpler than the source, indicating that the observed conservatism is excessive.",
"Our methods for performing sentence splitting as pre-processing allows the TS system to perform other structural (e.g.",
"deletions) and lexical (e.g.",
"word substitutions) operations, thus increasing both structural and lexical simplicity.",
"For combining linguistically informed sentence splitting with data-driven TS, two main methods have been proposed.",
"The first involves handcrafted syntactic rules, whose compilation and validation are laborious (Shardlow, 2014) .",
"For example, Siddharthan and Angrosh (2014) used 111 rules for relative clauses, appositions, subordination and coordination.",
"Moreover, syntactic splitting rules, which form a substantial part of the rules, are usually language specific, requiring the development of new rules when ported to other languages (Aluísio and Gasperin, 2010; Seretan, 2012; Hung et al., 2012; Barlacchi and Tonelli, 2013 , for Portuguese, French, Vietnamese, and Italian respectively).",
"The second method uses linguistic information for detecting potential splitting points, while splitting probabilities are learned us-ing a parallel corpus.",
"For example, in the system of Narayan and Gardent (2014) (henceforth, HYBRID) , the state-of-the-art for joint structural and lexical TS, potential splitting points are determined by event boundaries.",
"In this work, which is the first to combine structural semantics and neural methods for TS, we propose an intermediate way for performing sentence splitting, presenting Direct Semantic Splitting (DSS), a simple and efficient algorithm based on a semantic parser which supports the direct decomposition of the sentence into its main semantic constituents.",
"After splitting, NMT-based simplification is performed, using the NTS system.",
"We show that the resulting system outperforms HY-BRID in both automatic and human evaluation.",
"We use the UCCA scheme for semantic representation (Abend and Rappoport, 2013) , where the semantic units are anchored in the text, which simplifies the splitting operation.",
"We further leverage the explicit distinction in UCCA between types of Scenes (events), applying a specific rule for each of the cases.",
"Nevertheless, the DSS approach can be adapted to other semantic schemes, like AMR (Banarescu et al., 2013) .",
"We collect human judgments for multiple variants of our system, its sub-components, HYBRID and similar systems that use phrase-based MT.",
"This results in a sizable human evaluation benchmark, which includes 28 systems, totaling at 1960 complex-simple sentence pairs, each annotated by three annotators using four criteria.",
"1 This benchmark will support the future analysis of TS systems, and evaluation practices.",
"Previous work is discussed in §2, the semantic and NMT components we use in §3 and §4 respectively.",
"The experimental setup is detailed in §5.",
"Our main results are presented in §6, while §7 presents a more detailed analysis of the system's sub-components and related settings.",
"Related Work MT-based sentence simplification.",
"Phrasebased Machine Translation (PBMT; Koehn et al., 2003) was first used for TS by Specia (2010) , who showed good performance on lexical simplification and simple rewriting, but under-prediction of other operations.",
"Štajner et al.",
"(2015) took a similar approach, finding that it is beneficial to use training data where the source side is highly similar to the target.",
"Other PBMT for TS systems include the work of Coster and Kauchak (2011b) , which uses Moses (Koehn et al., 2007) , the work of Coster and Kauchak (2011a) , where the model is extended to include deletion, and PBMT-R (Wubben et al., 2012) , where Levenshtein distance to the source is used for re-ranking to overcome conservatism.",
"The NTS NMT-based system (Nisioi et al., 2017) (henceforth, N17) reported superior performance over PBMT in terms of BLEU and human evaluation scores, and serves as a component in our system (see Section 4).",
"took a similar approach, adding lexical constraints to an NMT model.",
"Zhang and Lapata (2017) combined NMT with reinforcement learning, using SARI (Xu et al., 2016) , BLEU, and cosine similarity to the source as the reward.",
"None of these models explicitly addresses sentence splitting.",
"Alva-Manchego et al.",
"(2017) proposed to reduce conservatism, observed in PBMT and NMT systems, by first identifying simplification operations in a parallel corpus and then using sequencelabeling to perform the simplification.",
"However, they did not address common structural operations, such as sentence splitting, and claimed that their method is not applicable to them.",
"Xu et al.",
"(2016) used Syntax-based Machine Translation (SBMT) for sentence simplification, using a large scale paraphrase dataset (Ganitketitch et al., 2013) for training.",
"While it does not target structural simplification, we include it in our evaluation for completeness.",
"Structural sentence simplification.",
"Syntactic hand-crafted sentence splitting rules were proposed by Chandrasekar et al.",
"(1996) , Siddharthan (2002) , Siddhathan (2011) in the context of rulebased TS.",
"The rules separate relative clauses and coordinated clauses and un-embed appositives.",
"In our method, the use of semantic distinctions instead of syntactic ones reduces the number of rules.",
"For example, relative clauses and appositives can correspond to the same semantic category.",
"In syntax-based splitting, a generation module is sometimes added after the split (Siddharthan, 2004) , addressing issues such as reordering and determiner selection.",
"In our model, no explicit regeneration is applied to the split sentences, which are fed directly to an NMT system.",
"Glavaš andŠtajner (2013) used a rule-based system conditioned on event extraction and syntax for defining two simplification models.",
"The eventwise simplification one, which separates events to separate output sentences, is similar to our semantic component.",
"Differences are in that we use a single semantic representation for defining the rules (rather than a combination of semantic and syntactic criteria), and avoid the need for complex rules for retaining grammaticality by using a subsequent neural component.",
"Combined structural and lexical TS.",
"Earlier TS models used syntactic information for splitting.",
"Zhu et al.",
"(2010) used syntactic information on the source side, based on the SBMT model of Yamada and Knight (2001) .",
"Syntactic structures were used on both sides in the model of Woodsend and Lapata (2011) , based on a quasi-synchronous grammar (Smith and Eisner, 2006) , which resulted in 438 learned splitting rules.",
"The model of Siddharthan and Angrosh (2014) is similar to ours in that it combines linguistic rules for structural simplification and statistical methods for lexical simplification.",
"However, we use 2 semantic splitting rules instead of their 26 syntactic rules for relative clauses and appositions, and 85 syntactic rules for subordination and coordination.",
"Narayan and Gardent (2014) argued that syntactic structures do not always capture the semantic arguments of a frame, which may result in wrong splitting boundaries.",
"Consequently, they proposed a supervised system (HYBRID) that uses semantic structures (Discourse Semantic Representations, (Kamp, 1981) ) for sentence splitting and deletion.",
"Splitting candidates are pairs of event variables associated with at least one core thematic role (e.g., agent or patient).",
"Semantic annotation is used on the source side in both training and test.",
"Lexical simplification is performed using the Moses system.",
"HYBRID is the most similar system to ours architecturally, in that it uses a combination of a semantic structural component and an MT component.",
"Narayan and Gardent (2016) proposed instead an unsupervised pipeline, where sentences are split based on a probabilistic model trained on the semantic structures of Simple Wikipedia as well as a language model trained on the same corpus.",
"Lexical simplification is there performed using the unsupervised model of Biran et al.",
"(2011) .",
"As their BLEU and adequacy scores are lower than HYBRID's, we use the latter for comparison.",
"Stajner and Glavaš (2017) combined rule-based simplification conditioned on event extraction, to-gether with an unsupervised lexical simplifier.",
"They tackle a different setting, and aim to simplify texts (rather than sentences), by allowing the deletion of entire input sentences.",
"Split and Rephrase.",
"recently proposed the Split and Rephrase task, focusing on sentence splitting.",
"For this purpose they presented a specialized parallel corpus, derived from the WebNLG dataset .",
"The latter is obtained from the DBPedia knowledge base (Mendes et al., 2012) using content selection and crowdsourcing, and is annotated with semantic triplets of subject-relation-object, obtained semi-automatically.",
"They experimented with five systems, including one similar to HY-BRID, as well as sequence-to-sequence methods for generating sentences from the source text and its semantic forms.",
"The present paper tackles both structural and lexical simplification, and examines the effect of sentence splitting on the subsequent application of a neural system, in terms of its tendency to perform other simplification operations.",
"For this purpose, we adopt a semantic corpus-independent approach for sentence splitting that can be easily integrated in any simplification system.",
"Another difference is that the semantic forms in Split and Rephrase are derived semi-automatically (during corpus compilation), while we automatically extract the semantic form, using a UCCA parser.",
"Direct Semantic Splitting Semantic Representation UCCA (Universal Cognitive Conceptual Annotation; Abend and Rappoport, 2013) is a semantic annotation scheme rooted in typological and cognitive linguistic theory (Dixon, 2010b (Dixon, ,a, 2012 Langacker, 2008) .",
"It aims to represent the main semantic phenomena in the text, abstracting away from syntactic forms.",
"UCCA has been shown to be preserved remarkably well across translations (Sulem et al., 2015) and has also been successfully used for the evaluation of machine translation (Birch et al., 2016) and, recently, for the evaluation of TS (Sulem et al., 2018) and grammatical error correction (Choshen and Abend, 2018) .",
"Formally, UCCA structures are directed acyclic graphs whose nodes (or units) correspond either to the leaves of the graph or to several elements viewed as a single entity according to some semantic or cognitive consideration.",
"A Scene is UCCA's notion of an event or a frame, and is a unit that corresponds to a movement, an action or a state which persists in time.",
"Every Scene contains one main relation, which can be either a Process or a State.",
"Scenes contain one or more Participants, interpreted in a broad sense to include locations and destinations.",
"For example, the sentence \"He went to school\" has a single Scene whose Process is \"went\".",
"The two Participants are \"He\" and \"to school\".",
"Scenes can have several roles in the text.",
"First, they can provide additional information about an established entity (Elaborator Scenes), commonly participles or relative clauses.",
"For example, \"(child) who went to school\" is an Elaborator Scene in \"The child who went to school is John\" (\"child\" serves both as an argument in the Elaborator Scene and as the Center).",
"A Scene may also be a Participant in another Scene.",
"For example, \"John went to school\" in the sentence: \"He said John went to school\".",
"In other cases, Scenes are annotated as Parallel Scenes (H), which are flat structures and may include a Linker (L), as in: \"When L [he arrives] H , [he will call them] H \".",
"With respect to units which are not Scenes, the category Center denotes the semantic head.",
"For example, \"dogs\" is the Center of the expression \"big brown dogs\", and \"box\" is the center of \"in the box\".",
"There could be more than one Center in a unit, for example in the case of coordination, where all conjuncts are Centers.",
"We define the minimal center of a UCCA unit u to be the UCCA graph's leaf reached by starting from u and iteratively selecting the child tagged as Center.",
"For generating UCCA's structures we use TUPA, a transition-based parser (Hershcovich et al., 2017) (specifically, the TUPA BiLST M model).",
"TUPA uses an expressive set of transitions, able to support all structural properties required by the UCCA scheme.",
"Its transition classifier is based on an MLP that receives a BiLSTM encoding of elements in the parser state (buffer, stack and intermediate graph), given word embeddings and other features.",
"The Semantic Rules For performing DSS, we define two simple splitting rules, conditioned on UCCA's categories.",
"We currently only consider Parallel Scenes and Elaborator Scenes, not separating Participant Scenes, in order to avoid splitting in cases of nominalizations or indirect speech.",
"For example, the sentence \"His arrival surprised everyone\", which has, in addition to the Scene evoked by \"surprised\", a Participant Scene evoked by \"arrival\", is not split here.",
"Rule #1.",
"Parallel Scenes of a given sentence are extracted, separated in different sentences, and concatenated according to the order of appearance.",
"More formally, given a decomposition of a sentence S into parallel Scenes Sc 1 , Sc 2 , · · · Sc n (indexed by the order of the first token), we obtain the following rule, where \"|\" is the sentence delimiter: S −→ Sc1|Sc2| · · · |Scn As UCCA allows argument sharing between Scenes, the rule may duplicate the same sub-span of S across sentences.",
"For example, the rule will convert \"He came back home and played piano\" into \"He came back home\"|\"He played piano.\"",
"Rule #2.",
"Given a sentence S, the second rule extracts Elaborator Scenes and corresponding minimal centers.",
"Elaborator Scenes are then concatenated to the original sentence, where the Elaborator Scenes, except for the minimal center they elaborate, are removed.",
"Pronouns such as \"who\", \"which\" and \"that\" are also removed.",
"Formally, if {(Sc 1 , C 1 ) · · · (Sc n , C n )} are the Elaborator Scenes of S and their corresponding minimal centers, the rewrite is: S −→ S − n i=1 (Sci − Ci)|Sc1| · · · |Scn where S −A is S without the unit A.",
"For example, this rule converts the sentence \"He observed the planet which has 14 known satellites\" to \"He observed the planet| Planet has 14 known satellites.\".",
"Article regeneration is not covered by the rule, as its output is directly fed into the NMT component.",
"After the extraction of Parallel Scenes and Elaborator Scenes, the resulting simplified Parallel Scenes are placed before the Elaborator Scenes.",
"See Figure 1 .",
"Neural Component The split sentences are run through the NTS stateof-the-art neural TS system (Nisioi et al., 2017) , built using the OpenNMT neural machine translation framework (Klein et al., 2017) .",
"The architecture includes two LSTM layers, with hidden states of 500 units in each, as well as global attention combined with input feeding (Luong et al., 2015) .",
"Training is done with a 0.3 dropout probability (Srivastava et al., 2014) .",
"This model uses alignment probabilities between the predictions and the original sentences, rather than characterbased models, to retrieve the original words.",
"We here consider the w2v initialization for NTS (N17), where word2vec embeddings of size 300 are trained on Google News (Mikolov et al., 2013a) and local embeddings of size 200 are trained on the training simplification corpus (Řehůřek and Sojka, 2010; Mikolov et al., 2013b) .",
"Local embeddings for the encoder are trained on the source side of the training corpus, while those for the decoder are trained on the simplified side.",
"For sampling multiple outputs from the system, beam search is performed during decoding by generating the first 5 hypotheses at each step ordered by the log-likelihood of the target sentence given the input sentence.",
"We here explore both the highest (h1) and fourth-ranked (h4) hypotheses, which we show to increase the SARI score and to be much less conservative.",
"2 We thus experiment with two variants of the neural component, denoted by NTS-h1 and NTS-h4.",
"The pipeline application of the rules and the neural system results in two corresponding models: SENTS-h1 and SENTS-h4.",
"Experimental Setup Corpus All systems are tested on the test corpus of Xu et al.",
"(2016) , 3 comprising 359 sentences from the PWKP corpus (Zhu et al., 2010) Neural component.",
"We use the NTS-w2v model 6 provided by N17, obtained by training on the corpus of Hwang et al.",
"(2015) and tuning on the corpus of Xu et al.",
"(2016) .",
"The training set is based on manual and automatic alignments between standard English Wikipedia and Simple English Wikipedia, including both good matches and partial matches whose similarity score is above the 0.45 scale threshold (Hwang et al., 2015) .",
"The total size of the training set is about 280K aligned sentences, of which 150K sentences are full matches and 130K are partial matches.",
"7 Comparison systems.",
"We compare our findings to HYBRID, which is the state of the art for joint structural and lexical simplification, imple-mented by Zhang and Lapata (2017) .",
"8 We use the released output of HYBRID, trained on a corpus extracted from Wikipedia, which includes the aligned sentence pairs from Kauchak (2013) , the aligned revision sentence pairs in Woodsend and Lapata (2011) , and the PWKP corpus, totaling about 296K sentence pairs.",
"The tuning set is the same as for the above systems.",
"In order to isolate the effect of NMT, we also implement SEMoses, where the neural-based component is replaced by the phrase-based MT system Moses, 9 which is also used in HYBRID.",
"The training, tuning and test sets are the same as in the case of SENTS.",
"MGIZA 10 is used for word alignment.",
"The KenLM language model is trained using the target side of the training corpus.",
"Additional baselines.",
"We report human and automatic evaluation scores for Identity (where the output is identical to the input), for Simple Wikipedia where the output is the corresponding aligned sentence in the PWKP corpus, and for the SBMT-SARI system, tuned against SARI (Xu et al., 2016) , which maximized the SARI score on this test set in previous works (Nisioi et al., 2017; Zhang and Lapata, 2017) .",
"Automatic evaluation.",
"The automatic metrics used for the evaluation are: (1) BLEU (Papineni et al., 2002) (2) SARI (System output Against References and against the Input sentence; Xu et al., 2016) , which compares the n-grams of the system output with those of the input and the human references, separately evaluating the quality of words that are added, deleted and kept by the systems.",
"(3) F add : the addition component of the SARI score (F-score); (4) F keep : the keeping component of the SARI score (F-score); (5) P del : the deletion component of the SARI score (precision).",
"11 Each metric is computed against the 8 available references.",
"We also assess system conservatism, reporting the percentage of sentences copied from the input (%Same), the averaged Levenshtein distance from the source (LD SC , which considers additions, deletions, and substitutions), and the number of source sentences that are split (#Split).",
"12 Human evaluation.",
"Human evaluation is carried out by 3 in-house native English annotators, who rated the different input-output pairs for the different systems according to 4 parameters: Grammaticality (G), Meaning preservation (M), Simplicity (S) and Structural Simplicity (StS).",
"Each input-output pair is rated by all 3 annotators.",
"Elicitation questions are given in Table 1 .",
"As the selection process of the input-output pairs in the test corpus of Xu et al.",
"(2016) , as well as their crowdsourced references, are explicitly biased towards lexical simplification, the use of human evaluation permits us to evaluate the structural aspects of the system outputs, even where structural operations are not attested in the references.",
"Indeed, we show that system outputs may receive considerably higher structural simplicity scores than the source, in spite of the sample selection bias.",
"Following previous work (e.g., Narayan and Gardent, 2014; Xu et al., 2016; Nisioi et al., 2017) , Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"Note that in the first question, the input sentence is not taken into account.",
"The grammaticality of the input is assessed by evaluating the Identity transformation (see Table 2 ), providing a baseline for the grammaticality scores of the other systems.",
"Following N17, a -2 to +2 scale is used for measuring simplicity, where a 0 score indicates that the input and the output are equally complex.",
"This scale, compared to the standard 1 to 5 scale, permits a better differentiation between cases where simplicity is hurt (the output is more complex than the original) and between cases where the output is as simple as the original, for example in the case of the identity transformation.",
"Structural simplicity is also evaluated with a -2 to +2 scale.",
"The question for eliciting StS is accompanied with a negative example, showing a case of lexical simplification, where a complex word is replaced by a simple one (the other questions appear without examples).",
"A positive example is not included so as not to bias the annotators by revealing the nature of the operations we focus on (splitting and deletion).",
"We follow N17 in applying human evaluation on the first 70 sentences of the test corpus.",
"13 The resulting corpus, totaling 1960 sentence pairs, each annotated by 3 annotators, also include the additional experiments described in Section 7 as well as the outputs of the NTS and SENTS systems used with the default initialization.",
"The inter-annotator agreement, using Cohen's quadratic weighted κ (Cohen, 1968) , is computed as the average agreement of the 3 annotator pairs.",
"The obtained rates are 0.56, 0.75, 0.47 and 0.48 for G, M, S and StS respectively.",
"System scores are computed by averaging over the 3 annotators and the 70 sentences.",
"G Is the output fluent and grammatical?",
"M Does the output preserve the meaning of the input?",
"S Is the output simpler than the input?",
"StS Is the output simpler than the input, ignoring the complexity of the words?",
"Table 2 : Human evaluation of the different NMT-based systems.",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"The highest score in each column appears in bold.",
"Structural simplification systems are those that explicitly model structural operations.",
"Results Human evaluation.",
"Results are presented in Table 2 .",
"First, we can see that the two SENTS systems outperform HYBRID in terms of G, M, and S. SENTS-h1 is the best scoring system, under all human measures.",
"In comparison to NTS, SENTS scores markedly higher on the simplicity judgments.",
"Meaning preservation and grammaticality are lower for SENTS, which is likely due to the more conservative nature of NTS.",
"Interestingly, the application of the splitting rules by themselves does not yield a considerably simpler sentence.",
"This likely stems from the rules not necessarily yielding grammatical sentences (NTS often serves as a grammatical error corrector over it), and from the incorporation of deletions, which are also structural operations, and are performed by the neural system.",
"An example of high structural simplicity scores for SENTS resulting from deletions is presented in Table 5 , together with the outputs of the other systems and the corresponding human evaluation scores.",
"NTS here performs lexical simplification, replacing the word \"incursions\" by \"raids\" or \"attacks\"'.",
"On the other hand, the high StS scores obtained by DSS and SEMoses are due to sentence splittings.",
"Automatic evaluation.",
"Results are presented in Table 3 .",
"Identity obtains much higher BLEU scores than any other system, suggesting that BLEU may not be informative in this setting.",
"SARI seems more informative, and assigns the lowest score to Identity and the second highest to the reference.",
"Both SENTS systems outperform HYBRID in terms of SARI and all its 3 sub-components.",
"The h4 setting (hypothesis #4 in the beam) is generally best, both with and without the splitting rules.",
"Comparing SENTS to using NTS alone (without splitting), we see that SENTS obtains higher SARI scores when hypothesis #1 is used and that NTS obtains higher scores when hypothesis #4 is used.",
"This may result from NTS being more conservative than SENTS (and HYBRID), which is rewarded by SARI (conservatism is indicated by the %Same column).",
"Indeed for h1, %Same is reduced from around 66% for NTS, to around 7% for SENTS.",
"Conservatism further decreases when h4 is used (for both NTS and SENTS).",
"Examining SARI's components, we find that SENTS outperforms NTS on F add , and is comparable (or even superior for h1 setting) to NTS on P del .",
"The superior SARI score of NTS over SENTS is thus entirely a result of a superior F keep , which is easier for a conservative system to maximize.",
"Comparing HYBRID with SEMoses, both of which use Moses, we find that SEMoses obtains higher BLEU and SARI scores, as well as G and M human scores, and splits many more sentences.",
"HYBRID scores higher on the human simplicity measures.",
"We note, however, that applying NTS alone is inferior to HYBRID in terms of simplicity, and that both components are required to obtain high simplicity scores (with SENTS).",
"We also compare the sentence splitting component used in our systems (namely DSS) to that used in HYBRID, abstracting away from deletionbased and lexical simplification.",
"We therefore apply DSS to the test set (554 sentences) of the Table 4 : Automatic and human evaluation for the different combinations of Moses and DSS.",
"The automatic metrics as well as the lexical and structural properties reported (%Same: proportion of sentences copied from the input; LDSC: Averaged Levenshtein distance from the source; #Split: number of split sentences) concern the 359 sentences of the test corpus.",
"Human evaluation, with the G, M, S, and StS parameters, is applied to the first 70 sentences of the corpus.",
"The highest score in each column appears in bold.",
"WEB-SPLIT corpus (See Section 2), which focuses on sentence splitting.",
"We compare our results to those reported for a variant of HYBRID used without the deletion module, and trained on WEB-SPLIT .",
"DSS gets a higher BLEU score (46.45 vs. 39.97) and performs more splittings (number of output sentences per input sentence of 1.73 vs. 1.26).",
"Additional Experiments Replacing the parser by manual annotation.",
"In order to isolate the influence of the parser on the results, we implement a semi-automatic version of the semantic component, which uses manual UCCA annotation instead of the parser, focusing of the first 70 sentences of the test corpus.",
"We employ a single expert UCCA annotator and use the UCCAApp annotation tool .",
"Results are presented in Table 6 , for both SENTS and SEMoses.",
"In the case of SEMoses, meaning preservation is improved when manual UCCA annotation is used.",
"On the other hand, simplicity degrades, possibly due to the larger number of Scenes marked by the human annotator (TUPA tends to under-predict Scenes).",
"This effect doesn't show with SENTS, where trends are similar to the automatic parses case, and high simplicity scores are obtained.",
"This demonstrates that UCCA parsing technology is sufficiently mature to be used to carry out structural simplification.",
"We also directly evaluate the performance of the parser by computing F1, Recall and Precision DAG scores (Hershcovich et al., 2017) , against the manual UCCA annotation.",
"14 We obtain for primary edges (i.e.",
"edges that form a tree structure) scores of 68.9 %, 70.5%, and 67.4% for F1, Recall and Precision respectively.",
"For remotes edges (i.e.",
"additional edges, forming a DAG), the scores are 45.3%, 40.5%, and 51.5%.",
"These results are comparable with the out-of-domain results reported by Hershcovich et al.",
"(2017) .",
"Experiments on Moses.",
"We test other variants of SEMoses, where phrase-based MT is used instead of NMT.",
"Specifically, we incorporate semantic information in a different manner by implementing two additional models: (1) SETrain1-Moses, where a new training corpus is obtained by applying the splitting rules to the target side of the G M S StS Identity In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the incursions of other Viking groups.",
"5.00 5.00 0.00 0.00 Simple Wikipedia In return, Rollo swore fealty to Charles, converted to Christianity, and swore to defend the northern region of France against raids by other Viking groups.",
"4.67 5.00 1.00 0.00 SBMT-SARI In return, Rollo swore fealty to Charles, converted to Christianity, and set out to defend the north of France from the raids of other viking groups.",
"4.67 4.67 0.67 0.00 NTS-h1 In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the raids of other Viking groups.",
"5.00 5.00 1.00 0.00 NTS-h4 In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the attacks of other Viking groups.",
"4.67 5.00 1.00 0.00 DSS Rollo swore fealty to Charles.",
"Rollo converted to Christianity.",
"Rollo undertook to defend the northern region of France against the incursions of other viking groups.",
"4.00 4.33 1.33 1.33 HYBRID In return Rollo swore, and undertook to defend the region of France., Charles, converted 2.33 2.00 0.33 0.33 SEMoses Rollo swore put his seal to Charles.",
"Rollo converted to Christianity.",
"Rollo undertook to defend the northern region of France against the incursions of other viking groups.",
"3.33 4.00 1.33 1.33 SENTS-h1 Rollo swore fealty to Charles.",
"5.00 2.00 2.00 2.00 SENTS-h4 Rollo swore fealty to Charles and converted to Christianity.",
"5.00 2.67 1.33 1.33 Table 5 : System outputs for one of the test sentences with the corresponding human evaluation scores (averaged over the 3 annotators).",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"Table 6 : Human evaluation using manual UCCA annotation.",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"X m refers to the semi-automatic version of the system X. training corpus; (2) SETrain2-Moses, where the rules are applied to the source side.",
"The resulting parallel corpus is concatenated to the original training corpus.",
"We also examine whether training a language model (LM) on split sentences has a positive effect, and train the LM on the split target side.",
"For each system X, the version with the LM trained on split sentences is denoted by X LM .",
"We repeat the same human and automatic evaluation protocol as in §6, presenting results in Table 4 .",
"Simplicity scores are much higher in the case of SENTS (that uses NMT), than with Moses.",
"The two best systems according to SARI are SEMoses and SEMoses LM which use DSS.",
"In fact, they resemble the performance of DSS applied alone (Tables 2 and 3) , which confirms the high degree of conservatism observed by Moses in simplification (Alva-Manchego et al., 2017) .",
"Indeed, all Moses-based systems that don't apply DSS as preprocessing are conservative, obtaining high scores for BLEU, grammaticality and meaning preservation, but low scores for simplicity.",
"Training the LM on split sentences shows little improvement.",
"Conclusion We presented the first simplification system combining semantic structures and neural machine translation, showing that it outperforms existing lexical and structural systems.",
"The proposed approach addresses the over-conservatism of MTbased systems for TS, which often fail to modify the source in any way.",
"The semantic component performs sentence splitting without relying on a specialized corpus, but only an off-theshelf semantic parser.",
"The consideration of sentence splitting as a decomposition of a sentence into its Scenes is further supported by recent work on structural TS evaluation (Sulem et al., 2018) , which proposes the SAMSA metric.",
"The two works, which apply this assumption to different ends (TS system construction, and TS evaluation), confirm its validity.",
"Future work will leverage UCCA's cross-linguistic applicability to support multi-lingual TS and TS pre-processing for MT."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"4",
"5",
"6",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Semantic Representation",
"The Semantic Rules",
"Neural Component",
"Experimental Setup",
"Results",
"Additional Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-16#paper-994#slide-2 | Current Approaches and Challenges | Sentence simplification as monolingual machine translation | Sentence simplification as monolingual machine translation | [] |
"Automatic evaluation.",
"The automatic metrics used for the evaluation are: (1) BLEU (Papineni et al., 2002) (2) SARI (System output Against References and against the Input sentence; Xu et al., 2016) , which compares the n-grams of the system output with those of the input and the human references, separately evaluating the quality of words that are added, deleted and kept by the systems.",
"(3) F add : the addition component of the SARI score (F-score); (4) F keep : the keeping component of the SARI score (F-score); (5) P del : the deletion component of the SARI score (precision).",
"11 Each metric is computed against the 8 available references.",
"We also assess system conservatism, reporting the percentage of sentences copied from the input (%Same), the averaged Levenshtein distance from the source (LD SC , which considers additions, deletions, and substitutions), and the number of source sentences that are split (#Split).",
"12 Human evaluation.",
"Human evaluation is carried out by 3 in-house native English annotators, who rated the different input-output pairs for the different systems according to 4 parameters: Grammaticality (G), Meaning preservation (M), Simplicity (S) and Structural Simplicity (StS).",
"Each input-output pair is rated by all 3 annotators.",
"Elicitation questions are given in Table 1 .",
"As the selection process of the input-output pairs in the test corpus of Xu et al.",
"(2016) , as well as their crowdsourced references, are explicitly biased towards lexical simplification, the use of human evaluation permits us to evaluate the structural aspects of the system outputs, even where structural operations are not attested in the references.",
"Indeed, we show that system outputs may receive considerably higher structural simplicity scores than the source, in spite of the sample selection bias.",
"Following previous work (e.g., Narayan and Gardent, 2014; Xu et al., 2016; Nisioi et al., 2017) , Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"Note that in the first question, the input sentence is not taken into account.",
"The grammaticality of the input is assessed by evaluating the Identity transformation (see Table 2 ), providing a baseline for the grammaticality scores of the other systems.",
"Following N17, a -2 to +2 scale is used for measuring simplicity, where a 0 score indicates that the input and the output are equally complex.",
"This scale, compared to the standard 1 to 5 scale, permits a better differentiation between cases where simplicity is hurt (the output is more complex than the original) and between cases where the output is as simple as the original, for example in the case of the identity transformation.",
"Structural simplicity is also evaluated with a -2 to +2 scale.",
"The question for eliciting StS is accompanied with a negative example, showing a case of lexical simplification, where a complex word is replaced by a simple one (the other questions appear without examples).",
"A positive example is not included so as not to bias the annotators by revealing the nature of the operations we focus on (splitting and deletion).",
"We follow N17 in applying human evaluation on the first 70 sentences of the test corpus.",
"13 The resulting corpus, totaling 1960 sentence pairs, each annotated by 3 annotators, also include the additional experiments described in Section 7 as well as the outputs of the NTS and SENTS systems used with the default initialization.",
"The inter-annotator agreement, using Cohen's quadratic weighted κ (Cohen, 1968) , is computed as the average agreement of the 3 annotator pairs.",
"The obtained rates are 0.56, 0.75, 0.47 and 0.48 for G, M, S and StS respectively.",
"System scores are computed by averaging over the 3 annotators and the 70 sentences.",
"G Is the output fluent and grammatical?",
"M Does the output preserve the meaning of the input?",
"S Is the output simpler than the input?",
"StS Is the output simpler than the input, ignoring the complexity of the words?",
"Table 2 : Human evaluation of the different NMT-based systems.",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"The highest score in each column appears in bold.",
"Structural simplification systems are those that explicitly model structural operations.",
"Results Human evaluation.",
"Results are presented in Table 2 .",
"First, we can see that the two SENTS systems outperform HYBRID in terms of G, M, and S. SENTS-h1 is the best scoring system, under all human measures.",
"In comparison to NTS, SENTS scores markedly higher on the simplicity judgments.",
"Meaning preservation and grammaticality are lower for SENTS, which is likely due to the more conservative nature of NTS.",
"Interestingly, the application of the splitting rules by themselves does not yield a considerably simpler sentence.",
"This likely stems from the rules not necessarily yielding grammatical sentences (NTS often serves as a grammatical error corrector over it), and from the incorporation of deletions, which are also structural operations, and are performed by the neural system.",
"An example of high structural simplicity scores for SENTS resulting from deletions is presented in Table 5 , together with the outputs of the other systems and the corresponding human evaluation scores.",
"NTS here performs lexical simplification, replacing the word \"incursions\" by \"raids\" or \"attacks\"'.",
"On the other hand, the high StS scores obtained by DSS and SEMoses are due to sentence splittings.",
"Automatic evaluation.",
"Results are presented in Table 3 .",
"Identity obtains much higher BLEU scores than any other system, suggesting that BLEU may not be informative in this setting.",
"SARI seems more informative, and assigns the lowest score to Identity and the second highest to the reference.",
"Both SENTS systems outperform HYBRID in terms of SARI and all its 3 sub-components.",
"The h4 setting (hypothesis #4 in the beam) is generally best, both with and without the splitting rules.",
"Comparing SENTS to using NTS alone (without splitting), we see that SENTS obtains higher SARI scores when hypothesis #1 is used and that NTS obtains higher scores when hypothesis #4 is used.",
"This may result from NTS being more conservative than SENTS (and HYBRID), which is rewarded by SARI (conservatism is indicated by the %Same column).",
"Indeed for h1, %Same is reduced from around 66% for NTS, to around 7% for SENTS.",
"Conservatism further decreases when h4 is used (for both NTS and SENTS).",
"Examining SARI's components, we find that SENTS outperforms NTS on F add , and is comparable (or even superior for h1 setting) to NTS on P del .",
"The superior SARI score of NTS over SENTS is thus entirely a result of a superior F keep , which is easier for a conservative system to maximize.",
"Comparing HYBRID with SEMoses, both of which use Moses, we find that SEMoses obtains higher BLEU and SARI scores, as well as G and M human scores, and splits many more sentences.",
"HYBRID scores higher on the human simplicity measures.",
"We note, however, that applying NTS alone is inferior to HYBRID in terms of simplicity, and that both components are required to obtain high simplicity scores (with SENTS).",
"We also compare the sentence splitting component used in our systems (namely DSS) to that used in HYBRID, abstracting away from deletionbased and lexical simplification.",
"We therefore apply DSS to the test set (554 sentences) of the Table 4 : Automatic and human evaluation for the different combinations of Moses and DSS.",
"The automatic metrics as well as the lexical and structural properties reported (%Same: proportion of sentences copied from the input; LDSC: Averaged Levenshtein distance from the source; #Split: number of split sentences) concern the 359 sentences of the test corpus.",
"Human evaluation, with the G, M, S, and StS parameters, is applied to the first 70 sentences of the corpus.",
"The highest score in each column appears in bold.",
"WEB-SPLIT corpus (See Section 2), which focuses on sentence splitting.",
"We compare our results to those reported for a variant of HYBRID used without the deletion module, and trained on WEB-SPLIT .",
"DSS gets a higher BLEU score (46.45 vs. 39.97) and performs more splittings (number of output sentences per input sentence of 1.73 vs. 1.26).",
"Additional Experiments Replacing the parser by manual annotation.",
"In order to isolate the influence of the parser on the results, we implement a semi-automatic version of the semantic component, which uses manual UCCA annotation instead of the parser, focusing of the first 70 sentences of the test corpus.",
"We employ a single expert UCCA annotator and use the UCCAApp annotation tool .",
"Results are presented in Table 6 , for both SENTS and SEMoses.",
"In the case of SEMoses, meaning preservation is improved when manual UCCA annotation is used.",
"On the other hand, simplicity degrades, possibly due to the larger number of Scenes marked by the human annotator (TUPA tends to under-predict Scenes).",
"This effect doesn't show with SENTS, where trends are similar to the automatic parses case, and high simplicity scores are obtained.",
"This demonstrates that UCCA parsing technology is sufficiently mature to be used to carry out structural simplification.",
"We also directly evaluate the performance of the parser by computing F1, Recall and Precision DAG scores (Hershcovich et al., 2017) , against the manual UCCA annotation.",
"14 We obtain for primary edges (i.e.",
"edges that form a tree structure) scores of 68.9 %, 70.5%, and 67.4% for F1, Recall and Precision respectively.",
"For remotes edges (i.e.",
"additional edges, forming a DAG), the scores are 45.3%, 40.5%, and 51.5%.",
"These results are comparable with the out-of-domain results reported by Hershcovich et al.",
"(2017) .",
"Experiments on Moses.",
"We test other variants of SEMoses, where phrase-based MT is used instead of NMT.",
"Specifically, we incorporate semantic information in a different manner by implementing two additional models: (1) SETrain1-Moses, where a new training corpus is obtained by applying the splitting rules to the target side of the G M S StS Identity In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the incursions of other Viking groups.",
"5.00 5.00 0.00 0.00 Simple Wikipedia In return, Rollo swore fealty to Charles, converted to Christianity, and swore to defend the northern region of France against raids by other Viking groups.",
"4.67 5.00 1.00 0.00 SBMT-SARI In return, Rollo swore fealty to Charles, converted to Christianity, and set out to defend the north of France from the raids of other viking groups.",
"4.67 4.67 0.67 0.00 NTS-h1 In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the raids of other Viking groups.",
"5.00 5.00 1.00 0.00 NTS-h4 In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the attacks of other Viking groups.",
"4.67 5.00 1.00 0.00 DSS Rollo swore fealty to Charles.",
"Rollo converted to Christianity.",
"Rollo undertook to defend the northern region of France against the incursions of other viking groups.",
"4.00 4.33 1.33 1.33 HYBRID In return Rollo swore, and undertook to defend the region of France., Charles, converted 2.33 2.00 0.33 0.33 SEMoses Rollo swore put his seal to Charles.",
"Rollo converted to Christianity.",
"Rollo undertook to defend the northern region of France against the incursions of other viking groups.",
"3.33 4.00 1.33 1.33 SENTS-h1 Rollo swore fealty to Charles.",
"5.00 2.00 2.00 2.00 SENTS-h4 Rollo swore fealty to Charles and converted to Christianity.",
"5.00 2.67 1.33 1.33 Table 5 : System outputs for one of the test sentences with the corresponding human evaluation scores (averaged over the 3 annotators).",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"Table 6 : Human evaluation using manual UCCA annotation.",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"X m refers to the semi-automatic version of the system X. training corpus; (2) SETrain2-Moses, where the rules are applied to the source side.",
"The resulting parallel corpus is concatenated to the original training corpus.",
"We also examine whether training a language model (LM) on split sentences has a positive effect, and train the LM on the split target side.",
"For each system X, the version with the LM trained on split sentences is denoted by X LM .",
"We repeat the same human and automatic evaluation protocol as in §6, presenting results in Table 4 .",
"Simplicity scores are much higher in the case of SENTS (that uses NMT), than with Moses.",
"The two best systems according to SARI are SEMoses and SEMoses LM which use DSS.",
"In fact, they resemble the performance of DSS applied alone (Tables 2 and 3) , which confirms the high degree of conservatism observed by Moses in simplification (Alva-Manchego et al., 2017) .",
"Indeed, all Moses-based systems that don't apply DSS as preprocessing are conservative, obtaining high scores for BLEU, grammaticality and meaning preservation, but low scores for simplicity.",
"Training the LM on split sentences shows little improvement.",
"Conclusion We presented the first simplification system combining semantic structures and neural machine translation, showing that it outperforms existing lexical and structural systems.",
"The proposed approach addresses the over-conservatism of MTbased systems for TS, which often fail to modify the source in any way.",
"The semantic component performs sentence splitting without relying on a specialized corpus, but only an off-theshelf semantic parser.",
"The consideration of sentence splitting as a decomposition of a sentence into its Scenes is further supported by recent work on structural TS evaluation (Sulem et al., 2018) , which proposes the SAMSA metric.",
"The two works, which apply this assumption to different ends (TS system construction, and TS evaluation), confirm its validity.",
"Future work will leverage UCCA's cross-linguistic applicability to support multi-lingual TS and TS pre-processing for MT."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"4",
"5",
"6",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Semantic Representation",
"The Semantic Rules",
"Neural Component",
"Experimental Setup",
"Results",
"Additional Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-16#paper-994#slide-3 | Conservatism in MT Based Simplification | In both SMT and NMT Text Simplification, a large proportion of the input sentences are not modified. (Alva-Manchego et al., 2017; on the Newsela corpus).
It is confirmed in the present work (experiments on Wikipedia):
- 66% of the input sentences remain unchanged.
- None of the references are identical to the source.
- According to automatic and human evaluation, the references are indeed simpler. | In both SMT and NMT Text Simplification, a large proportion of the input sentences are not modified. (Alva-Manchego et al., 2017; on the Newsela corpus).
It is confirmed in the present work (experiments on Wikipedia):
- 66% of the input sentences remain unchanged.
- None of the references are identical to the source.
- According to automatic and human evaluation, the references are indeed simpler. | [] |
GEM-SciDuet-train-16#paper-994#slide-4 | 994 | Simple and Effective Text Simplification Using Semantic and Neural Methods | Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used in this situation. Previous application of Machine Translation for simplification suffers from a considerable disadvantage in that they are overconservative, often failing to modify the source in any way. Splitting based on semantic parsing, as proposed here, alleviates this issue. Extensive automatic and human evaluation shows that the proposed method compares favorably to the stateof-the-art in combined lexical and structural simplification. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266
],
"paper_content_text": [
"Introduction Text Simplification (TS) is generally defined as the conversion of a sentence into one or more simpler sentences.",
"It has been shown useful both as a preprocessing step for tasks such as Machine Translation (MT; Mishra et al., 2014; Štajner and Popović, 2016) and relation extraction (Niklaus et al., 2016) , as well as for developing reading aids, e.g.",
"for people with dyslexia (Rello et al., 2013) or non-native speakers (Siddharthan, 2002) .",
"TS includes both structural and lexical operations.",
"The main structural simplification operation is sentence splitting, namely rewriting a single sentence into multiple sentences while preserving its meaning.",
"While recent improvement in TS has been achieved by the use of neural MT (NMT) approaches (Nisioi et al., 2017; Zhang and Lapata, 2017) , where TS is consid-ered a case of monolingual translation, the sentence splitting operation has not been addressed by these systems, potentially due to the rareness of this operation in the training corpora (Narayan and Gardent, 2014; Xu et al., 2015) .",
"We show that the explicit integration of sentence splitting in the simplification system could also reduce conservatism, which is a grave limitation of NMT-based TS systems (Alva-Manchego et al., 2017) .",
"Indeed, experimenting with a stateof-the-art neural system (Nisioi et al., 2017) , we find that 66% of the input sentences remain unchanged, while none of the corresponding references is identical to the source.",
"Human and automatic evaluation of the references (against other references), confirm that the references are indeed simpler than the source, indicating that the observed conservatism is excessive.",
"Our methods for performing sentence splitting as pre-processing allows the TS system to perform other structural (e.g.",
"deletions) and lexical (e.g.",
"word substitutions) operations, thus increasing both structural and lexical simplicity.",
"For combining linguistically informed sentence splitting with data-driven TS, two main methods have been proposed.",
"The first involves handcrafted syntactic rules, whose compilation and validation are laborious (Shardlow, 2014) .",
"For example, Siddharthan and Angrosh (2014) used 111 rules for relative clauses, appositions, subordination and coordination.",
"Moreover, syntactic splitting rules, which form a substantial part of the rules, are usually language specific, requiring the development of new rules when ported to other languages (Aluísio and Gasperin, 2010; Seretan, 2012; Hung et al., 2012; Barlacchi and Tonelli, 2013 , for Portuguese, French, Vietnamese, and Italian respectively).",
"The second method uses linguistic information for detecting potential splitting points, while splitting probabilities are learned us-ing a parallel corpus.",
"For example, in the system of Narayan and Gardent (2014) (henceforth, HYBRID) , the state-of-the-art for joint structural and lexical TS, potential splitting points are determined by event boundaries.",
"In this work, which is the first to combine structural semantics and neural methods for TS, we propose an intermediate way for performing sentence splitting, presenting Direct Semantic Splitting (DSS), a simple and efficient algorithm based on a semantic parser which supports the direct decomposition of the sentence into its main semantic constituents.",
"After splitting, NMT-based simplification is performed, using the NTS system.",
"We show that the resulting system outperforms HY-BRID in both automatic and human evaluation.",
"We use the UCCA scheme for semantic representation (Abend and Rappoport, 2013) , where the semantic units are anchored in the text, which simplifies the splitting operation.",
"We further leverage the explicit distinction in UCCA between types of Scenes (events), applying a specific rule for each of the cases.",
"Nevertheless, the DSS approach can be adapted to other semantic schemes, like AMR (Banarescu et al., 2013) .",
"We collect human judgments for multiple variants of our system, its sub-components, HYBRID and similar systems that use phrase-based MT.",
"This results in a sizable human evaluation benchmark, which includes 28 systems, totaling at 1960 complex-simple sentence pairs, each annotated by three annotators using four criteria.",
"1 This benchmark will support the future analysis of TS systems, and evaluation practices.",
"Previous work is discussed in §2, the semantic and NMT components we use in §3 and §4 respectively.",
"The experimental setup is detailed in §5.",
"Our main results are presented in §6, while §7 presents a more detailed analysis of the system's sub-components and related settings.",
"Related Work MT-based sentence simplification.",
"Phrasebased Machine Translation (PBMT; Koehn et al., 2003) was first used for TS by Specia (2010) , who showed good performance on lexical simplification and simple rewriting, but under-prediction of other operations.",
"Štajner et al.",
"(2015) took a similar approach, finding that it is beneficial to use training data where the source side is highly similar to the target.",
"Other PBMT for TS systems include the work of Coster and Kauchak (2011b) , which uses Moses (Koehn et al., 2007) , the work of Coster and Kauchak (2011a) , where the model is extended to include deletion, and PBMT-R (Wubben et al., 2012) , where Levenshtein distance to the source is used for re-ranking to overcome conservatism.",
"The NTS NMT-based system (Nisioi et al., 2017) (henceforth, N17) reported superior performance over PBMT in terms of BLEU and human evaluation scores, and serves as a component in our system (see Section 4).",
"took a similar approach, adding lexical constraints to an NMT model.",
"Zhang and Lapata (2017) combined NMT with reinforcement learning, using SARI (Xu et al., 2016) , BLEU, and cosine similarity to the source as the reward.",
"None of these models explicitly addresses sentence splitting.",
"Alva-Manchego et al.",
"(2017) proposed to reduce conservatism, observed in PBMT and NMT systems, by first identifying simplification operations in a parallel corpus and then using sequencelabeling to perform the simplification.",
"However, they did not address common structural operations, such as sentence splitting, and claimed that their method is not applicable to them.",
"Xu et al.",
"(2016) used Syntax-based Machine Translation (SBMT) for sentence simplification, using a large scale paraphrase dataset (Ganitketitch et al., 2013) for training.",
"While it does not target structural simplification, we include it in our evaluation for completeness.",
"Structural sentence simplification.",
"Syntactic hand-crafted sentence splitting rules were proposed by Chandrasekar et al.",
"(1996) , Siddharthan (2002) , Siddhathan (2011) in the context of rulebased TS.",
"The rules separate relative clauses and coordinated clauses and un-embed appositives.",
"In our method, the use of semantic distinctions instead of syntactic ones reduces the number of rules.",
"For example, relative clauses and appositives can correspond to the same semantic category.",
"In syntax-based splitting, a generation module is sometimes added after the split (Siddharthan, 2004) , addressing issues such as reordering and determiner selection.",
"In our model, no explicit regeneration is applied to the split sentences, which are fed directly to an NMT system.",
"Glavaš andŠtajner (2013) used a rule-based system conditioned on event extraction and syntax for defining two simplification models.",
"The eventwise simplification one, which separates events to separate output sentences, is similar to our semantic component.",
"Differences are in that we use a single semantic representation for defining the rules (rather than a combination of semantic and syntactic criteria), and avoid the need for complex rules for retaining grammaticality by using a subsequent neural component.",
"Combined structural and lexical TS.",
"Earlier TS models used syntactic information for splitting.",
"Zhu et al.",
"(2010) used syntactic information on the source side, based on the SBMT model of Yamada and Knight (2001) .",
"Syntactic structures were used on both sides in the model of Woodsend and Lapata (2011) , based on a quasi-synchronous grammar (Smith and Eisner, 2006) , which resulted in 438 learned splitting rules.",
"The model of Siddharthan and Angrosh (2014) is similar to ours in that it combines linguistic rules for structural simplification and statistical methods for lexical simplification.",
"However, we use 2 semantic splitting rules instead of their 26 syntactic rules for relative clauses and appositions, and 85 syntactic rules for subordination and coordination.",
"Narayan and Gardent (2014) argued that syntactic structures do not always capture the semantic arguments of a frame, which may result in wrong splitting boundaries.",
"Consequently, they proposed a supervised system (HYBRID) that uses semantic structures (Discourse Semantic Representations, (Kamp, 1981) ) for sentence splitting and deletion.",
"Splitting candidates are pairs of event variables associated with at least one core thematic role (e.g., agent or patient).",
"Semantic annotation is used on the source side in both training and test.",
"Lexical simplification is performed using the Moses system.",
"HYBRID is the most similar system to ours architecturally, in that it uses a combination of a semantic structural component and an MT component.",
"Narayan and Gardent (2016) proposed instead an unsupervised pipeline, where sentences are split based on a probabilistic model trained on the semantic structures of Simple Wikipedia as well as a language model trained on the same corpus.",
"Lexical simplification is there performed using the unsupervised model of Biran et al.",
"(2011) .",
"As their BLEU and adequacy scores are lower than HYBRID's, we use the latter for comparison.",
"Stajner and Glavaš (2017) combined rule-based simplification conditioned on event extraction, to-gether with an unsupervised lexical simplifier.",
"They tackle a different setting, and aim to simplify texts (rather than sentences), by allowing the deletion of entire input sentences.",
"Split and Rephrase.",
"recently proposed the Split and Rephrase task, focusing on sentence splitting.",
"For this purpose they presented a specialized parallel corpus, derived from the WebNLG dataset .",
"The latter is obtained from the DBPedia knowledge base (Mendes et al., 2012) using content selection and crowdsourcing, and is annotated with semantic triplets of subject-relation-object, obtained semi-automatically.",
"They experimented with five systems, including one similar to HY-BRID, as well as sequence-to-sequence methods for generating sentences from the source text and its semantic forms.",
"The present paper tackles both structural and lexical simplification, and examines the effect of sentence splitting on the subsequent application of a neural system, in terms of its tendency to perform other simplification operations.",
"For this purpose, we adopt a semantic corpus-independent approach for sentence splitting that can be easily integrated in any simplification system.",
"Another difference is that the semantic forms in Split and Rephrase are derived semi-automatically (during corpus compilation), while we automatically extract the semantic form, using a UCCA parser.",
"Direct Semantic Splitting Semantic Representation UCCA (Universal Cognitive Conceptual Annotation; Abend and Rappoport, 2013) is a semantic annotation scheme rooted in typological and cognitive linguistic theory (Dixon, 2010b (Dixon, ,a, 2012 Langacker, 2008) .",
"It aims to represent the main semantic phenomena in the text, abstracting away from syntactic forms.",
"UCCA has been shown to be preserved remarkably well across translations (Sulem et al., 2015) and has also been successfully used for the evaluation of machine translation (Birch et al., 2016) and, recently, for the evaluation of TS (Sulem et al., 2018) and grammatical error correction (Choshen and Abend, 2018) .",
"Formally, UCCA structures are directed acyclic graphs whose nodes (or units) correspond either to the leaves of the graph or to several elements viewed as a single entity according to some semantic or cognitive consideration.",
"A Scene is UCCA's notion of an event or a frame, and is a unit that corresponds to a movement, an action or a state which persists in time.",
"Every Scene contains one main relation, which can be either a Process or a State.",
"Scenes contain one or more Participants, interpreted in a broad sense to include locations and destinations.",
"For example, the sentence \"He went to school\" has a single Scene whose Process is \"went\".",
"The two Participants are \"He\" and \"to school\".",
"Scenes can have several roles in the text.",
"First, they can provide additional information about an established entity (Elaborator Scenes), commonly participles or relative clauses.",
"For example, \"(child) who went to school\" is an Elaborator Scene in \"The child who went to school is John\" (\"child\" serves both as an argument in the Elaborator Scene and as the Center).",
"A Scene may also be a Participant in another Scene.",
"For example, \"John went to school\" in the sentence: \"He said John went to school\".",
"In other cases, Scenes are annotated as Parallel Scenes (H), which are flat structures and may include a Linker (L), as in: \"When L [he arrives] H , [he will call them] H \".",
"With respect to units which are not Scenes, the category Center denotes the semantic head.",
"For example, \"dogs\" is the Center of the expression \"big brown dogs\", and \"box\" is the center of \"in the box\".",
"There could be more than one Center in a unit, for example in the case of coordination, where all conjuncts are Centers.",
"We define the minimal center of a UCCA unit u to be the UCCA graph's leaf reached by starting from u and iteratively selecting the child tagged as Center.",
"For generating UCCA's structures we use TUPA, a transition-based parser (Hershcovich et al., 2017) (specifically, the TUPA BiLST M model).",
"TUPA uses an expressive set of transitions, able to support all structural properties required by the UCCA scheme.",
"Its transition classifier is based on an MLP that receives a BiLSTM encoding of elements in the parser state (buffer, stack and intermediate graph), given word embeddings and other features.",
"The Semantic Rules For performing DSS, we define two simple splitting rules, conditioned on UCCA's categories.",
"We currently only consider Parallel Scenes and Elaborator Scenes, not separating Participant Scenes, in order to avoid splitting in cases of nominalizations or indirect speech.",
"For example, the sentence \"His arrival surprised everyone\", which has, in addition to the Scene evoked by \"surprised\", a Participant Scene evoked by \"arrival\", is not split here.",
"Rule #1.",
"Parallel Scenes of a given sentence are extracted, separated in different sentences, and concatenated according to the order of appearance.",
"More formally, given a decomposition of a sentence S into parallel Scenes Sc 1 , Sc 2 , · · · Sc n (indexed by the order of the first token), we obtain the following rule, where \"|\" is the sentence delimiter: S −→ Sc1|Sc2| · · · |Scn As UCCA allows argument sharing between Scenes, the rule may duplicate the same sub-span of S across sentences.",
"For example, the rule will convert \"He came back home and played piano\" into \"He came back home\"|\"He played piano.\"",
"Rule #2.",
"Given a sentence S, the second rule extracts Elaborator Scenes and corresponding minimal centers.",
"Elaborator Scenes are then concatenated to the original sentence, where the Elaborator Scenes, except for the minimal center they elaborate, are removed.",
"Pronouns such as \"who\", \"which\" and \"that\" are also removed.",
"Formally, if {(Sc 1 , C 1 ) · · · (Sc n , C n )} are the Elaborator Scenes of S and their corresponding minimal centers, the rewrite is: S −→ S − n i=1 (Sci − Ci)|Sc1| · · · |Scn where S −A is S without the unit A.",
"For example, this rule converts the sentence \"He observed the planet which has 14 known satellites\" to \"He observed the planet| Planet has 14 known satellites.\".",
"Article regeneration is not covered by the rule, as its output is directly fed into the NMT component.",
"After the extraction of Parallel Scenes and Elaborator Scenes, the resulting simplified Parallel Scenes are placed before the Elaborator Scenes.",
"See Figure 1 .",
"Neural Component The split sentences are run through the NTS stateof-the-art neural TS system (Nisioi et al., 2017) , built using the OpenNMT neural machine translation framework (Klein et al., 2017) .",
"The architecture includes two LSTM layers, with hidden states of 500 units in each, as well as global attention combined with input feeding (Luong et al., 2015) .",
"Training is done with a 0.3 dropout probability (Srivastava et al., 2014) .",
"This model uses alignment probabilities between the predictions and the original sentences, rather than characterbased models, to retrieve the original words.",
"We here consider the w2v initialization for NTS (N17), where word2vec embeddings of size 300 are trained on Google News (Mikolov et al., 2013a) and local embeddings of size 200 are trained on the training simplification corpus (Řehůřek and Sojka, 2010; Mikolov et al., 2013b) .",
"Local embeddings for the encoder are trained on the source side of the training corpus, while those for the decoder are trained on the simplified side.",
"For sampling multiple outputs from the system, beam search is performed during decoding by generating the first 5 hypotheses at each step ordered by the log-likelihood of the target sentence given the input sentence.",
"We here explore both the highest (h1) and fourth-ranked (h4) hypotheses, which we show to increase the SARI score and to be much less conservative.",
"2 We thus experiment with two variants of the neural component, denoted by NTS-h1 and NTS-h4.",
"The pipeline application of the rules and the neural system results in two corresponding models: SENTS-h1 and SENTS-h4.",
"Experimental Setup Corpus All systems are tested on the test corpus of Xu et al.",
"(2016) , 3 comprising 359 sentences from the PWKP corpus (Zhu et al., 2010) Neural component.",
"We use the NTS-w2v model 6 provided by N17, obtained by training on the corpus of Hwang et al.",
"(2015) and tuning on the corpus of Xu et al.",
"(2016) .",
"The training set is based on manual and automatic alignments between standard English Wikipedia and Simple English Wikipedia, including both good matches and partial matches whose similarity score is above the 0.45 scale threshold (Hwang et al., 2015) .",
"The total size of the training set is about 280K aligned sentences, of which 150K sentences are full matches and 130K are partial matches.",
"7 Comparison systems.",
"We compare our findings to HYBRID, which is the state of the art for joint structural and lexical simplification, imple-mented by Zhang and Lapata (2017) .",
"8 We use the released output of HYBRID, trained on a corpus extracted from Wikipedia, which includes the aligned sentence pairs from Kauchak (2013) , the aligned revision sentence pairs in Woodsend and Lapata (2011) , and the PWKP corpus, totaling about 296K sentence pairs.",
"The tuning set is the same as for the above systems.",
"In order to isolate the effect of NMT, we also implement SEMoses, where the neural-based component is replaced by the phrase-based MT system Moses, 9 which is also used in HYBRID.",
"The training, tuning and test sets are the same as in the case of SENTS.",
"MGIZA 10 is used for word alignment.",
"The KenLM language model is trained using the target side of the training corpus.",
"Additional baselines.",
"We report human and automatic evaluation scores for Identity (where the output is identical to the input), for Simple Wikipedia where the output is the corresponding aligned sentence in the PWKP corpus, and for the SBMT-SARI system, tuned against SARI (Xu et al., 2016) , which maximized the SARI score on this test set in previous works (Nisioi et al., 2017; Zhang and Lapata, 2017) .",
"Automatic evaluation.",
"The automatic metrics used for the evaluation are: (1) BLEU (Papineni et al., 2002) (2) SARI (System output Against References and against the Input sentence; Xu et al., 2016) , which compares the n-grams of the system output with those of the input and the human references, separately evaluating the quality of words that are added, deleted and kept by the systems.",
"(3) F add : the addition component of the SARI score (F-score); (4) F keep : the keeping component of the SARI score (F-score); (5) P del : the deletion component of the SARI score (precision).",
"11 Each metric is computed against the 8 available references.",
"We also assess system conservatism, reporting the percentage of sentences copied from the input (%Same), the averaged Levenshtein distance from the source (LD SC , which considers additions, deletions, and substitutions), and the number of source sentences that are split (#Split).",
"12 Human evaluation.",
"Human evaluation is carried out by 3 in-house native English annotators, who rated the different input-output pairs for the different systems according to 4 parameters: Grammaticality (G), Meaning preservation (M), Simplicity (S) and Structural Simplicity (StS).",
"Each input-output pair is rated by all 3 annotators.",
"Elicitation questions are given in Table 1 .",
"As the selection process of the input-output pairs in the test corpus of Xu et al.",
"(2016) , as well as their crowdsourced references, are explicitly biased towards lexical simplification, the use of human evaluation permits us to evaluate the structural aspects of the system outputs, even where structural operations are not attested in the references.",
"Indeed, we show that system outputs may receive considerably higher structural simplicity scores than the source, in spite of the sample selection bias.",
"Following previous work (e.g., Narayan and Gardent, 2014; Xu et al., 2016; Nisioi et al., 2017) , Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"Note that in the first question, the input sentence is not taken into account.",
"The grammaticality of the input is assessed by evaluating the Identity transformation (see Table 2 ), providing a baseline for the grammaticality scores of the other systems.",
"Following N17, a -2 to +2 scale is used for measuring simplicity, where a 0 score indicates that the input and the output are equally complex.",
"This scale, compared to the standard 1 to 5 scale, permits a better differentiation between cases where simplicity is hurt (the output is more complex than the original) and between cases where the output is as simple as the original, for example in the case of the identity transformation.",
"Structural simplicity is also evaluated with a -2 to +2 scale.",
"The question for eliciting StS is accompanied with a negative example, showing a case of lexical simplification, where a complex word is replaced by a simple one (the other questions appear without examples).",
"A positive example is not included so as not to bias the annotators by revealing the nature of the operations we focus on (splitting and deletion).",
"We follow N17 in applying human evaluation on the first 70 sentences of the test corpus.",
"13 The resulting corpus, totaling 1960 sentence pairs, each annotated by 3 annotators, also include the additional experiments described in Section 7 as well as the outputs of the NTS and SENTS systems used with the default initialization.",
"The inter-annotator agreement, using Cohen's quadratic weighted κ (Cohen, 1968) , is computed as the average agreement of the 3 annotator pairs.",
"The obtained rates are 0.56, 0.75, 0.47 and 0.48 for G, M, S and StS respectively.",
"System scores are computed by averaging over the 3 annotators and the 70 sentences.",
"G Is the output fluent and grammatical?",
"M Does the output preserve the meaning of the input?",
"S Is the output simpler than the input?",
"StS Is the output simpler than the input, ignoring the complexity of the words?",
"Table 2 : Human evaluation of the different NMT-based systems.",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"The highest score in each column appears in bold.",
"Structural simplification systems are those that explicitly model structural operations.",
"Results Human evaluation.",
"Results are presented in Table 2 .",
"First, we can see that the two SENTS systems outperform HYBRID in terms of G, M, and S. SENTS-h1 is the best scoring system, under all human measures.",
"In comparison to NTS, SENTS scores markedly higher on the simplicity judgments.",
"Meaning preservation and grammaticality are lower for SENTS, which is likely due to the more conservative nature of NTS.",
"Interestingly, the application of the splitting rules by themselves does not yield a considerably simpler sentence.",
"This likely stems from the rules not necessarily yielding grammatical sentences (NTS often serves as a grammatical error corrector over it), and from the incorporation of deletions, which are also structural operations, and are performed by the neural system.",
"An example of high structural simplicity scores for SENTS resulting from deletions is presented in Table 5 , together with the outputs of the other systems and the corresponding human evaluation scores.",
"NTS here performs lexical simplification, replacing the word \"incursions\" by \"raids\" or \"attacks\"'.",
"On the other hand, the high StS scores obtained by DSS and SEMoses are due to sentence splittings.",
"Automatic evaluation.",
"Results are presented in Table 3 .",
"Identity obtains much higher BLEU scores than any other system, suggesting that BLEU may not be informative in this setting.",
"SARI seems more informative, and assigns the lowest score to Identity and the second highest to the reference.",
"Both SENTS systems outperform HYBRID in terms of SARI and all its 3 sub-components.",
"The h4 setting (hypothesis #4 in the beam) is generally best, both with and without the splitting rules.",
"Comparing SENTS to using NTS alone (without splitting), we see that SENTS obtains higher SARI scores when hypothesis #1 is used and that NTS obtains higher scores when hypothesis #4 is used.",
"This may result from NTS being more conservative than SENTS (and HYBRID), which is rewarded by SARI (conservatism is indicated by the %Same column).",
"Indeed for h1, %Same is reduced from around 66% for NTS, to around 7% for SENTS.",
"Conservatism further decreases when h4 is used (for both NTS and SENTS).",
"Examining SARI's components, we find that SENTS outperforms NTS on F add , and is comparable (or even superior for h1 setting) to NTS on P del .",
"The superior SARI score of NTS over SENTS is thus entirely a result of a superior F keep , which is easier for a conservative system to maximize.",
"Comparing HYBRID with SEMoses, both of which use Moses, we find that SEMoses obtains higher BLEU and SARI scores, as well as G and M human scores, and splits many more sentences.",
"HYBRID scores higher on the human simplicity measures.",
"We note, however, that applying NTS alone is inferior to HYBRID in terms of simplicity, and that both components are required to obtain high simplicity scores (with SENTS).",
"We also compare the sentence splitting component used in our systems (namely DSS) to that used in HYBRID, abstracting away from deletionbased and lexical simplification.",
"We therefore apply DSS to the test set (554 sentences) of the Table 4 : Automatic and human evaluation for the different combinations of Moses and DSS.",
"The automatic metrics as well as the lexical and structural properties reported (%Same: proportion of sentences copied from the input; LDSC: Averaged Levenshtein distance from the source; #Split: number of split sentences) concern the 359 sentences of the test corpus.",
"Human evaluation, with the G, M, S, and StS parameters, is applied to the first 70 sentences of the corpus.",
"The highest score in each column appears in bold.",
"WEB-SPLIT corpus (See Section 2), which focuses on sentence splitting.",
"We compare our results to those reported for a variant of HYBRID used without the deletion module, and trained on WEB-SPLIT .",
"DSS gets a higher BLEU score (46.45 vs. 39.97) and performs more splittings (number of output sentences per input sentence of 1.73 vs. 1.26).",
"Additional Experiments Replacing the parser by manual annotation.",
"In order to isolate the influence of the parser on the results, we implement a semi-automatic version of the semantic component, which uses manual UCCA annotation instead of the parser, focusing of the first 70 sentences of the test corpus.",
"We employ a single expert UCCA annotator and use the UCCAApp annotation tool .",
"Results are presented in Table 6 , for both SENTS and SEMoses.",
"In the case of SEMoses, meaning preservation is improved when manual UCCA annotation is used.",
"On the other hand, simplicity degrades, possibly due to the larger number of Scenes marked by the human annotator (TUPA tends to under-predict Scenes).",
"This effect doesn't show with SENTS, where trends are similar to the automatic parses case, and high simplicity scores are obtained.",
"This demonstrates that UCCA parsing technology is sufficiently mature to be used to carry out structural simplification.",
"We also directly evaluate the performance of the parser by computing F1, Recall and Precision DAG scores (Hershcovich et al., 2017) , against the manual UCCA annotation.",
"14 We obtain for primary edges (i.e.",
"edges that form a tree structure) scores of 68.9 %, 70.5%, and 67.4% for F1, Recall and Precision respectively.",
"For remotes edges (i.e.",
"additional edges, forming a DAG), the scores are 45.3%, 40.5%, and 51.5%.",
"These results are comparable with the out-of-domain results reported by Hershcovich et al.",
"(2017) .",
"Experiments on Moses.",
"We test other variants of SEMoses, where phrase-based MT is used instead of NMT.",
"Specifically, we incorporate semantic information in a different manner by implementing two additional models: (1) SETrain1-Moses, where a new training corpus is obtained by applying the splitting rules to the target side of the G M S StS Identity In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the incursions of other Viking groups.",
"5.00 5.00 0.00 0.00 Simple Wikipedia In return, Rollo swore fealty to Charles, converted to Christianity, and swore to defend the northern region of France against raids by other Viking groups.",
"4.67 5.00 1.00 0.00 SBMT-SARI In return, Rollo swore fealty to Charles, converted to Christianity, and set out to defend the north of France from the raids of other viking groups.",
"4.67 4.67 0.67 0.00 NTS-h1 In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the raids of other Viking groups.",
"5.00 5.00 1.00 0.00 NTS-h4 In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the attacks of other Viking groups.",
"4.67 5.00 1.00 0.00 DSS Rollo swore fealty to Charles.",
"Rollo converted to Christianity.",
"Rollo undertook to defend the northern region of France against the incursions of other viking groups.",
"4.00 4.33 1.33 1.33 HYBRID In return Rollo swore, and undertook to defend the region of France., Charles, converted 2.33 2.00 0.33 0.33 SEMoses Rollo swore put his seal to Charles.",
"Rollo converted to Christianity.",
"Rollo undertook to defend the northern region of France against the incursions of other viking groups.",
"3.33 4.00 1.33 1.33 SENTS-h1 Rollo swore fealty to Charles.",
"5.00 2.00 2.00 2.00 SENTS-h4 Rollo swore fealty to Charles and converted to Christianity.",
"5.00 2.67 1.33 1.33 Table 5 : System outputs for one of the test sentences with the corresponding human evaluation scores (averaged over the 3 annotators).",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"Table 6 : Human evaluation using manual UCCA annotation.",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"X m refers to the semi-automatic version of the system X. training corpus; (2) SETrain2-Moses, where the rules are applied to the source side.",
"The resulting parallel corpus is concatenated to the original training corpus.",
"We also examine whether training a language model (LM) on split sentences has a positive effect, and train the LM on the split target side.",
"For each system X, the version with the LM trained on split sentences is denoted by X LM .",
"We repeat the same human and automatic evaluation protocol as in §6, presenting results in Table 4 .",
"Simplicity scores are much higher in the case of SENTS (that uses NMT), than with Moses.",
"The two best systems according to SARI are SEMoses and SEMoses LM which use DSS.",
"In fact, they resemble the performance of DSS applied alone (Tables 2 and 3) , which confirms the high degree of conservatism observed by Moses in simplification (Alva-Manchego et al., 2017) .",
"Indeed, all Moses-based systems that don't apply DSS as preprocessing are conservative, obtaining high scores for BLEU, grammaticality and meaning preservation, but low scores for simplicity.",
"Training the LM on split sentences shows little improvement.",
"Conclusion We presented the first simplification system combining semantic structures and neural machine translation, showing that it outperforms existing lexical and structural systems.",
"The proposed approach addresses the over-conservatism of MTbased systems for TS, which often fail to modify the source in any way.",
"The semantic component performs sentence splitting without relying on a specialized corpus, but only an off-theshelf semantic parser.",
"The consideration of sentence splitting as a decomposition of a sentence into its Scenes is further supported by recent work on structural TS evaluation (Sulem et al., 2018) , which proposes the SAMSA metric.",
"The two works, which apply this assumption to different ends (TS system construction, and TS evaluation), confirm its validity.",
"Future work will leverage UCCA's cross-linguistic applicability to support multi-lingual TS and TS pre-processing for MT."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"4",
"5",
"6",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Semantic Representation",
"The Semantic Rules",
"Neural Component",
"Experimental Setup",
"Results",
"Additional Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-16#paper-994#slide-4 | Sentence Splitting in Text Simplification | Splitting in NMT-Based Simplification
Sentence splitting is not addressed.
Rareness of splittings in the simplification training corpora.
Recently, corpus focusing on sentence splitting for the Split-and-Rephrase task
(Narayan et al., 2017) where the other operations are not addressed.
Directly modeling sentence splitting
1. Hand-crafted syntactic rules:
- Compilation and validation can be laborious (Shardlow, 2014)
Many rules are often involved (e.g., 111 rules in Siddharthan and Angrosh,
2014, for relative clauses, appositions, subordination and coordination).
- Usually language specific.
(Figure: a rule schema relating a Noun phrase, a Relative clause, and a Relative Pronoun; one of the two rules for relative clauses in Siddharthan, 2004.)
2. Using semantics for determining potential splitting points
Narayan and Gardent (2014) - HYBRID
- Discourse Semantic Representation (DRS) structures for splitting and deletion.
- Depends on the proportion of splittings in the training corpus.
We here use an intermediate way:
Simple algorithm to directly decompose the sentence into its semantic constituents. | Splitting in NMT-Based Simplification
Sentence splitting is not addressed.
Rareness of splittings in the simplification training corpora.
Recently, corpus focusing on sentence splitting for the Split-and-Rephrase task
(Narayan et al., 2017) where the other operations are not addressed.
Directly modeling sentence splitting
1. Hand-crafted syntactic rules:
- Compilation and validation can be laborious (Shardlow, 2014)
Many rules are often involved (e.g., 111 rules in Siddharthan and Angrosh,
2014, for relative clauses, appositions, subordination and coordination).
- Usually language specific.
(Figure: a rule schema relating a Noun phrase, a Relative clause, and a Relative Pronoun; one of the two rules for relative clauses in Siddharthan, 2004.)
2. Using semantics for determining potential splitting points
Narayan and Gardent (2014) - HYBRID
- Discourse Semantic Representation (DRS) structures for splitting and deletion.
- Depends on the proportion of splittings in the training corpus.
We here use an intermediate way:
Simple algorithm to directly decompose the sentence into its semantic constituents. | [] |
GEM-SciDuet-train-16#paper-994#slide-5 | 994 | Simple and Effective Text Simplification Using Semantic and Neural Methods | Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used in this situation. Previous application of Machine Translation for simplification suffers from a considerable disadvantage in that they are overconservative, often failing to modify the source in any way. Splitting based on semantic parsing, as proposed here, alleviates this issue. Extensive automatic and human evaluation shows that the proposed method compares favorably to the stateof-the-art in combined lexical and structural simplification. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266
],
"paper_content_text": [
"Introduction Text Simplification (TS) is generally defined as the conversion of a sentence into one or more simpler sentences.",
"It has been shown useful both as a preprocessing step for tasks such as Machine Translation (MT; Mishra et al., 2014; Štajner and Popović, 2016) and relation extraction (Niklaus et al., 2016) , as well as for developing reading aids, e.g.",
"for people with dyslexia (Rello et al., 2013) or non-native speakers (Siddharthan, 2002) .",
"TS includes both structural and lexical operations.",
"The main structural simplification operation is sentence splitting, namely rewriting a single sentence into multiple sentences while preserving its meaning.",
"While recent improvement in TS has been achieved by the use of neural MT (NMT) approaches (Nisioi et al., 2017; Zhang and Lapata, 2017) , where TS is consid-ered a case of monolingual translation, the sentence splitting operation has not been addressed by these systems, potentially due to the rareness of this operation in the training corpora (Narayan and Gardent, 2014; Xu et al., 2015) .",
"We show that the explicit integration of sentence splitting in the simplification system could also reduce conservatism, which is a grave limitation of NMT-based TS systems (Alva-Manchego et al., 2017) .",
"Indeed, experimenting with a stateof-the-art neural system (Nisioi et al., 2017) , we find that 66% of the input sentences remain unchanged, while none of the corresponding references is identical to the source.",
"Human and automatic evaluation of the references (against other references), confirm that the references are indeed simpler than the source, indicating that the observed conservatism is excessive.",
"Our methods for performing sentence splitting as pre-processing allows the TS system to perform other structural (e.g.",
"deletions) and lexical (e.g.",
"word substitutions) operations, thus increasing both structural and lexical simplicity.",
"For combining linguistically informed sentence splitting with data-driven TS, two main methods have been proposed.",
"The first involves handcrafted syntactic rules, whose compilation and validation are laborious (Shardlow, 2014) .",
"For example, Siddharthan and Angrosh (2014) used 111 rules for relative clauses, appositions, subordination and coordination.",
"Moreover, syntactic splitting rules, which form a substantial part of the rules, are usually language specific, requiring the development of new rules when ported to other languages (Aluísio and Gasperin, 2010; Seretan, 2012; Hung et al., 2012; Barlacchi and Tonelli, 2013 , for Portuguese, French, Vietnamese, and Italian respectively).",
"The second method uses linguistic information for detecting potential splitting points, while splitting probabilities are learned us-ing a parallel corpus.",
"For example, in the system of Narayan and Gardent (2014) (henceforth, HYBRID) , the state-of-the-art for joint structural and lexical TS, potential splitting points are determined by event boundaries.",
"In this work, which is the first to combine structural semantics and neural methods for TS, we propose an intermediate way for performing sentence splitting, presenting Direct Semantic Splitting (DSS), a simple and efficient algorithm based on a semantic parser which supports the direct decomposition of the sentence into its main semantic constituents.",
"After splitting, NMT-based simplification is performed, using the NTS system.",
"We show that the resulting system outperforms HY-BRID in both automatic and human evaluation.",
"We use the UCCA scheme for semantic representation (Abend and Rappoport, 2013) , where the semantic units are anchored in the text, which simplifies the splitting operation.",
"We further leverage the explicit distinction in UCCA between types of Scenes (events), applying a specific rule for each of the cases.",
"Nevertheless, the DSS approach can be adapted to other semantic schemes, like AMR (Banarescu et al., 2013) .",
"We collect human judgments for multiple variants of our system, its sub-components, HYBRID and similar systems that use phrase-based MT.",
"This results in a sizable human evaluation benchmark, which includes 28 systems, totaling at 1960 complex-simple sentence pairs, each annotated by three annotators using four criteria.",
"1 This benchmark will support the future analysis of TS systems, and evaluation practices.",
"Previous work is discussed in §2, the semantic and NMT components we use in §3 and §4 respectively.",
"The experimental setup is detailed in §5.",
"Our main results are presented in §6, while §7 presents a more detailed analysis of the system's sub-components and related settings.",
"Related Work MT-based sentence simplification.",
"Phrasebased Machine Translation (PBMT; Koehn et al., 2003) was first used for TS by Specia (2010) , who showed good performance on lexical simplification and simple rewriting, but under-prediction of other operations.",
"Štajner et al.",
"(2015) took a similar approach, finding that it is beneficial to use training data where the source side is highly similar to the target.",
"Other PBMT for TS systems include the work of Coster and Kauchak (2011b) , which uses Moses (Koehn et al., 2007) , the work of Coster and Kauchak (2011a) , where the model is extended to include deletion, and PBMT-R (Wubben et al., 2012) , where Levenshtein distance to the source is used for re-ranking to overcome conservatism.",
"The NTS NMT-based system (Nisioi et al., 2017) (henceforth, N17) reported superior performance over PBMT in terms of BLEU and human evaluation scores, and serves as a component in our system (see Section 4).",
"took a similar approach, adding lexical constraints to an NMT model.",
"Zhang and Lapata (2017) combined NMT with reinforcement learning, using SARI (Xu et al., 2016) , BLEU, and cosine similarity to the source as the reward.",
"None of these models explicitly addresses sentence splitting.",
"Alva-Manchego et al.",
"(2017) proposed to reduce conservatism, observed in PBMT and NMT systems, by first identifying simplification operations in a parallel corpus and then using sequencelabeling to perform the simplification.",
"However, they did not address common structural operations, such as sentence splitting, and claimed that their method is not applicable to them.",
"Xu et al.",
"(2016) used Syntax-based Machine Translation (SBMT) for sentence simplification, using a large scale paraphrase dataset (Ganitketitch et al., 2013) for training.",
"While it does not target structural simplification, we include it in our evaluation for completeness.",
"Structural sentence simplification.",
"Syntactic hand-crafted sentence splitting rules were proposed by Chandrasekar et al.",
"(1996) , Siddharthan (2002) , Siddhathan (2011) in the context of rulebased TS.",
"The rules separate relative clauses and coordinated clauses and un-embed appositives.",
"In our method, the use of semantic distinctions instead of syntactic ones reduces the number of rules.",
"For example, relative clauses and appositives can correspond to the same semantic category.",
"In syntax-based splitting, a generation module is sometimes added after the split (Siddharthan, 2004) , addressing issues such as reordering and determiner selection.",
"In our model, no explicit regeneration is applied to the split sentences, which are fed directly to an NMT system.",
"Glavaš andŠtajner (2013) used a rule-based system conditioned on event extraction and syntax for defining two simplification models.",
"The eventwise simplification one, which separates events to separate output sentences, is similar to our semantic component.",
"Differences are in that we use a single semantic representation for defining the rules (rather than a combination of semantic and syntactic criteria), and avoid the need for complex rules for retaining grammaticality by using a subsequent neural component.",
"Combined structural and lexical TS.",
"Earlier TS models used syntactic information for splitting.",
"Zhu et al.",
"(2010) used syntactic information on the source side, based on the SBMT model of Yamada and Knight (2001) .",
"Syntactic structures were used on both sides in the model of Woodsend and Lapata (2011) , based on a quasi-synchronous grammar (Smith and Eisner, 2006) , which resulted in 438 learned splitting rules.",
"The model of Siddharthan and Angrosh (2014) is similar to ours in that it combines linguistic rules for structural simplification and statistical methods for lexical simplification.",
"However, we use 2 semantic splitting rules instead of their 26 syntactic rules for relative clauses and appositions, and 85 syntactic rules for subordination and coordination.",
"Narayan and Gardent (2014) argued that syntactic structures do not always capture the semantic arguments of a frame, which may result in wrong splitting boundaries.",
"Consequently, they proposed a supervised system (HYBRID) that uses semantic structures (Discourse Semantic Representations, (Kamp, 1981) ) for sentence splitting and deletion.",
"Splitting candidates are pairs of event variables associated with at least one core thematic role (e.g., agent or patient).",
"Semantic annotation is used on the source side in both training and test.",
"Lexical simplification is performed using the Moses system.",
"HYBRID is the most similar system to ours architecturally, in that it uses a combination of a semantic structural component and an MT component.",
"Narayan and Gardent (2016) proposed instead an unsupervised pipeline, where sentences are split based on a probabilistic model trained on the semantic structures of Simple Wikipedia as well as a language model trained on the same corpus.",
"Lexical simplification is there performed using the unsupervised model of Biran et al.",
"(2011) .",
"As their BLEU and adequacy scores are lower than HYBRID's, we use the latter for comparison.",
"Stajner and Glavaš (2017) combined rule-based simplification conditioned on event extraction, to-gether with an unsupervised lexical simplifier.",
"They tackle a different setting, and aim to simplify texts (rather than sentences), by allowing the deletion of entire input sentences.",
"Split and Rephrase.",
"recently proposed the Split and Rephrase task, focusing on sentence splitting.",
"For this purpose they presented a specialized parallel corpus, derived from the WebNLG dataset .",
"The latter is obtained from the DBPedia knowledge base (Mendes et al., 2012) using content selection and crowdsourcing, and is annotated with semantic triplets of subject-relation-object, obtained semi-automatically.",
"They experimented with five systems, including one similar to HY-BRID, as well as sequence-to-sequence methods for generating sentences from the source text and its semantic forms.",
"The present paper tackles both structural and lexical simplification, and examines the effect of sentence splitting on the subsequent application of a neural system, in terms of its tendency to perform other simplification operations.",
"For this purpose, we adopt a semantic corpus-independent approach for sentence splitting that can be easily integrated in any simplification system.",
"Another difference is that the semantic forms in Split and Rephrase are derived semi-automatically (during corpus compilation), while we automatically extract the semantic form, using a UCCA parser.",
"Direct Semantic Splitting Semantic Representation UCCA (Universal Cognitive Conceptual Annotation; Abend and Rappoport, 2013) is a semantic annotation scheme rooted in typological and cognitive linguistic theory (Dixon, 2010b (Dixon, ,a, 2012 Langacker, 2008) .",
"It aims to represent the main semantic phenomena in the text, abstracting away from syntactic forms.",
"UCCA has been shown to be preserved remarkably well across translations (Sulem et al., 2015) and has also been successfully used for the evaluation of machine translation (Birch et al., 2016) and, recently, for the evaluation of TS (Sulem et al., 2018) and grammatical error correction (Choshen and Abend, 2018) .",
"Formally, UCCA structures are directed acyclic graphs whose nodes (or units) correspond either to the leaves of the graph or to several elements viewed as a single entity according to some semantic or cognitive consideration.",
"A Scene is UCCA's notion of an event or a frame, and is a unit that corresponds to a movement, an action or a state which persists in time.",
"Every Scene contains one main relation, which can be either a Process or a State.",
"Scenes contain one or more Participants, interpreted in a broad sense to include locations and destinations.",
"For example, the sentence \"He went to school\" has a single Scene whose Process is \"went\".",
"The two Participants are \"He\" and \"to school\".",
"Scenes can have several roles in the text.",
"First, they can provide additional information about an established entity (Elaborator Scenes), commonly participles or relative clauses.",
"For example, \"(child) who went to school\" is an Elaborator Scene in \"The child who went to school is John\" (\"child\" serves both as an argument in the Elaborator Scene and as the Center).",
"A Scene may also be a Participant in another Scene.",
"For example, \"John went to school\" in the sentence: \"He said John went to school\".",
"In other cases, Scenes are annotated as Parallel Scenes (H), which are flat structures and may include a Linker (L), as in: \"When L [he arrives] H , [he will call them] H \".",
"With respect to units which are not Scenes, the category Center denotes the semantic head.",
"For example, \"dogs\" is the Center of the expression \"big brown dogs\", and \"box\" is the center of \"in the box\".",
"There could be more than one Center in a unit, for example in the case of coordination, where all conjuncts are Centers.",
"We define the minimal center of a UCCA unit u to be the UCCA graph's leaf reached by starting from u and iteratively selecting the child tagged as Center.",
"For generating UCCA's structures we use TUPA, a transition-based parser (Hershcovich et al., 2017) (specifically, the TUPA BiLST M model).",
"TUPA uses an expressive set of transitions, able to support all structural properties required by the UCCA scheme.",
"Its transition classifier is based on an MLP that receives a BiLSTM encoding of elements in the parser state (buffer, stack and intermediate graph), given word embeddings and other features.",
"The Semantic Rules For performing DSS, we define two simple splitting rules, conditioned on UCCA's categories.",
"We currently only consider Parallel Scenes and Elaborator Scenes, not separating Participant Scenes, in order to avoid splitting in cases of nominalizations or indirect speech.",
"For example, the sentence \"His arrival surprised everyone\", which has, in addition to the Scene evoked by \"surprised\", a Participant Scene evoked by \"arrival\", is not split here.",
"Rule #1.",
"Parallel Scenes of a given sentence are extracted, separated in different sentences, and concatenated according to the order of appearance.",
"More formally, given a decomposition of a sentence S into parallel Scenes Sc 1 , Sc 2 , · · · Sc n (indexed by the order of the first token), we obtain the following rule, where \"|\" is the sentence delimiter: S −→ Sc1|Sc2| · · · |Scn As UCCA allows argument sharing between Scenes, the rule may duplicate the same sub-span of S across sentences.",
"For example, the rule will convert \"He came back home and played piano\" into \"He came back home\"|\"He played piano.\"",
"Rule #2.",
"Given a sentence S, the second rule extracts Elaborator Scenes and corresponding minimal centers.",
"Elaborator Scenes are then concatenated to the original sentence, where the Elaborator Scenes, except for the minimal center they elaborate, are removed.",
"Pronouns such as \"who\", \"which\" and \"that\" are also removed.",
"Formally, if {(Sc 1 , C 1 ) · · · (Sc n , C n )} are the Elaborator Scenes of S and their corresponding minimal centers, the rewrite is: S −→ S − n i=1 (Sci − Ci)|Sc1| · · · |Scn where S −A is S without the unit A.",
"For example, this rule converts the sentence \"He observed the planet which has 14 known satellites\" to \"He observed the planet| Planet has 14 known satellites.\".",
"Article regeneration is not covered by the rule, as its output is directly fed into the NMT component.",
"After the extraction of Parallel Scenes and Elaborator Scenes, the resulting simplified Parallel Scenes are placed before the Elaborator Scenes.",
"See Figure 1 .",
"Neural Component The split sentences are run through the NTS stateof-the-art neural TS system (Nisioi et al., 2017) , built using the OpenNMT neural machine translation framework (Klein et al., 2017) .",
"The architecture includes two LSTM layers, with hidden states of 500 units in each, as well as global attention combined with input feeding (Luong et al., 2015) .",
"Training is done with a 0.3 dropout probability (Srivastava et al., 2014) .",
"This model uses alignment probabilities between the predictions and the original sentences, rather than characterbased models, to retrieve the original words.",
"We here consider the w2v initialization for NTS (N17), where word2vec embeddings of size 300 are trained on Google News (Mikolov et al., 2013a) and local embeddings of size 200 are trained on the training simplification corpus (Řehůřek and Sojka, 2010; Mikolov et al., 2013b) .",
"Local embeddings for the encoder are trained on the source side of the training corpus, while those for the decoder are trained on the simplified side.",
"For sampling multiple outputs from the system, beam search is performed during decoding by generating the first 5 hypotheses at each step ordered by the log-likelihood of the target sentence given the input sentence.",
"We here explore both the highest (h1) and fourth-ranked (h4) hypotheses, which we show to increase the SARI score and to be much less conservative.",
"2 We thus experiment with two variants of the neural component, denoted by NTS-h1 and NTS-h4.",
"The pipeline application of the rules and the neural system results in two corresponding models: SENTS-h1 and SENTS-h4.",
"Experimental Setup Corpus All systems are tested on the test corpus of Xu et al.",
"(2016) , 3 comprising 359 sentences from the PWKP corpus (Zhu et al., 2010) Neural component.",
"We use the NTS-w2v model 6 provided by N17, obtained by training on the corpus of Hwang et al.",
"(2015) and tuning on the corpus of Xu et al.",
"(2016) .",
"The training set is based on manual and automatic alignments between standard English Wikipedia and Simple English Wikipedia, including both good matches and partial matches whose similarity score is above the 0.45 scale threshold (Hwang et al., 2015) .",
"The total size of the training set is about 280K aligned sentences, of which 150K sentences are full matches and 130K are partial matches.",
"7 Comparison systems.",
"We compare our findings to HYBRID, which is the state of the art for joint structural and lexical simplification, imple-mented by Zhang and Lapata (2017) .",
"8 We use the released output of HYBRID, trained on a corpus extracted from Wikipedia, which includes the aligned sentence pairs from Kauchak (2013) , the aligned revision sentence pairs in Woodsend and Lapata (2011) , and the PWKP corpus, totaling about 296K sentence pairs.",
"The tuning set is the same as for the above systems.",
"In order to isolate the effect of NMT, we also implement SEMoses, where the neural-based component is replaced by the phrase-based MT system Moses, 9 which is also used in HYBRID.",
"The training, tuning and test sets are the same as in the case of SENTS.",
"MGIZA 10 is used for word alignment.",
"The KenLM language model is trained using the target side of the training corpus.",
"Additional baselines.",
"We report human and automatic evaluation scores for Identity (where the output is identical to the input), for Simple Wikipedia where the output is the corresponding aligned sentence in the PWKP corpus, and for the SBMT-SARI system, tuned against SARI (Xu et al., 2016) , which maximized the SARI score on this test set in previous works (Nisioi et al., 2017; Zhang and Lapata, 2017) .",
"Automatic evaluation.",
"The automatic metrics used for the evaluation are: (1) BLEU (Papineni et al., 2002) (2) SARI (System output Against References and against the Input sentence; Xu et al., 2016) , which compares the n-grams of the system output with those of the input and the human references, separately evaluating the quality of words that are added, deleted and kept by the systems.",
"(3) F add : the addition component of the SARI score (F-score); (4) F keep : the keeping component of the SARI score (F-score); (5) P del : the deletion component of the SARI score (precision).",
"11 Each metric is computed against the 8 available references.",
"We also assess system conservatism, reporting the percentage of sentences copied from the input (%Same), the averaged Levenshtein distance from the source (LD SC , which considers additions, deletions, and substitutions), and the number of source sentences that are split (#Split).",
"12 Human evaluation.",
"Human evaluation is carried out by 3 in-house native English annotators, who rated the different input-output pairs for the different systems according to 4 parameters: Grammaticality (G), Meaning preservation (M), Simplicity (S) and Structural Simplicity (StS).",
"Each input-output pair is rated by all 3 annotators.",
"Elicitation questions are given in Table 1 .",
"As the selection process of the input-output pairs in the test corpus of Xu et al.",
"(2016) , as well as their crowdsourced references, are explicitly biased towards lexical simplification, the use of human evaluation permits us to evaluate the structural aspects of the system outputs, even where structural operations are not attested in the references.",
"Indeed, we show that system outputs may receive considerably higher structural simplicity scores than the source, in spite of the sample selection bias.",
"Following previous work (e.g., Narayan and Gardent, 2014; Xu et al., 2016; Nisioi et al., 2017) , Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"Note that in the first question, the input sentence is not taken into account.",
"The grammaticality of the input is assessed by evaluating the Identity transformation (see Table 2 ), providing a baseline for the grammaticality scores of the other systems.",
"Following N17, a -2 to +2 scale is used for measuring simplicity, where a 0 score indicates that the input and the output are equally complex.",
"This scale, compared to the standard 1 to 5 scale, permits a better differentiation between cases where simplicity is hurt (the output is more complex than the original) and between cases where the output is as simple as the original, for example in the case of the identity transformation.",
"Structural simplicity is also evaluated with a -2 to +2 scale.",
"The question for eliciting StS is accompanied with a negative example, showing a case of lexical simplification, where a complex word is replaced by a simple one (the other questions appear without examples).",
"A positive example is not included so as not to bias the annotators by revealing the nature of the operations we focus on (splitting and deletion).",
"We follow N17 in applying human evaluation on the first 70 sentences of the test corpus.",
"13 The resulting corpus, totaling 1960 sentence pairs, each annotated by 3 annotators, also include the additional experiments described in Section 7 as well as the outputs of the NTS and SENTS systems used with the default initialization.",
"The inter-annotator agreement, using Cohen's quadratic weighted κ (Cohen, 1968) , is computed as the average agreement of the 3 annotator pairs.",
"The obtained rates are 0.56, 0.75, 0.47 and 0.48 for G, M, S and StS respectively.",
"System scores are computed by averaging over the 3 annotators and the 70 sentences.",
"G Is the output fluent and grammatical?",
"M Does the output preserve the meaning of the input?",
"S Is the output simpler than the input?",
"StS Is the output simpler than the input, ignoring the complexity of the words?",
"Table 2 : Human evaluation of the different NMT-based systems.",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"The highest score in each column appears in bold.",
"Structural simplification systems are those that explicitly model structural operations.",
"Results Human evaluation.",
"Results are presented in Table 2 .",
"First, we can see that the two SENTS systems outperform HYBRID in terms of G, M, and S. SENTS-h1 is the best scoring system, under all human measures.",
"In comparison to NTS, SENTS scores markedly higher on the simplicity judgments.",
"Meaning preservation and grammaticality are lower for SENTS, which is likely due to the more conservative nature of NTS.",
"Interestingly, the application of the splitting rules by themselves does not yield a considerably simpler sentence.",
"This likely stems from the rules not necessarily yielding grammatical sentences (NTS often serves as a grammatical error corrector over it), and from the incorporation of deletions, which are also structural operations, and are performed by the neural system.",
"An example of high structural simplicity scores for SENTS resulting from deletions is presented in Table 5 , together with the outputs of the other systems and the corresponding human evaluation scores.",
"NTS here performs lexical simplification, replacing the word \"incursions\" by \"raids\" or \"attacks\"'.",
"On the other hand, the high StS scores obtained by DSS and SEMoses are due to sentence splittings.",
"Automatic evaluation.",
"Results are presented in Table 3 .",
"Identity obtains much higher BLEU scores than any other system, suggesting that BLEU may not be informative in this setting.",
"SARI seems more informative, and assigns the lowest score to Identity and the second highest to the reference.",
"Both SENTS systems outperform HYBRID in terms of SARI and all its 3 sub-components.",
"The h4 setting (hypothesis #4 in the beam) is generally best, both with and without the splitting rules.",
"Comparing SENTS to using NTS alone (without splitting), we see that SENTS obtains higher SARI scores when hypothesis #1 is used and that NTS obtains higher scores when hypothesis #4 is used.",
"This may result from NTS being more conservative than SENTS (and HYBRID), which is rewarded by SARI (conservatism is indicated by the %Same column).",
"Indeed for h1, %Same is reduced from around 66% for NTS, to around 7% for SENTS.",
"Conservatism further decreases when h4 is used (for both NTS and SENTS).",
"Examining SARI's components, we find that SENTS outperforms NTS on F add , and is comparable (or even superior for h1 setting) to NTS on P del .",
"The superior SARI score of NTS over SENTS is thus entirely a result of a superior F keep , which is easier for a conservative system to maximize.",
"Comparing HYBRID with SEMoses, both of which use Moses, we find that SEMoses obtains higher BLEU and SARI scores, as well as G and M human scores, and splits many more sentences.",
"HYBRID scores higher on the human simplicity measures.",
"We note, however, that applying NTS alone is inferior to HYBRID in terms of simplicity, and that both components are required to obtain high simplicity scores (with SENTS).",
"We also compare the sentence splitting component used in our systems (namely DSS) to that used in HYBRID, abstracting away from deletionbased and lexical simplification.",
"We therefore apply DSS to the test set (554 sentences) of the Table 4 : Automatic and human evaluation for the different combinations of Moses and DSS.",
"The automatic metrics as well as the lexical and structural properties reported (%Same: proportion of sentences copied from the input; LDSC: Averaged Levenshtein distance from the source; #Split: number of split sentences) concern the 359 sentences of the test corpus.",
"Human evaluation, with the G, M, S, and StS parameters, is applied to the first 70 sentences of the corpus.",
"The highest score in each column appears in bold.",
"WEB-SPLIT corpus (See Section 2), which focuses on sentence splitting.",
"We compare our results to those reported for a variant of HYBRID used without the deletion module, and trained on WEB-SPLIT .",
"DSS gets a higher BLEU score (46.45 vs. 39.97) and performs more splittings (number of output sentences per input sentence of 1.73 vs. 1.26).",
"Additional Experiments Replacing the parser by manual annotation.",
"In order to isolate the influence of the parser on the results, we implement a semi-automatic version of the semantic component, which uses manual UCCA annotation instead of the parser, focusing of the first 70 sentences of the test corpus.",
"We employ a single expert UCCA annotator and use the UCCAApp annotation tool .",
"Results are presented in Table 6 , for both SENTS and SEMoses.",
"In the case of SEMoses, meaning preservation is improved when manual UCCA annotation is used.",
"On the other hand, simplicity degrades, possibly due to the larger number of Scenes marked by the human annotator (TUPA tends to under-predict Scenes).",
"This effect doesn't show with SENTS, where trends are similar to the automatic parses case, and high simplicity scores are obtained.",
"This demonstrates that UCCA parsing technology is sufficiently mature to be used to carry out structural simplification.",
"We also directly evaluate the performance of the parser by computing F1, Recall and Precision DAG scores (Hershcovich et al., 2017) , against the manual UCCA annotation.",
"14 We obtain for primary edges (i.e.",
"edges that form a tree structure) scores of 68.9 %, 70.5%, and 67.4% for F1, Recall and Precision respectively.",
"For remotes edges (i.e.",
"additional edges, forming a DAG), the scores are 45.3%, 40.5%, and 51.5%.",
"These results are comparable with the out-of-domain results reported by Hershcovich et al.",
"(2017) .",
"Experiments on Moses.",
"We test other variants of SEMoses, where phrase-based MT is used instead of NMT.",
"Specifically, we incorporate semantic information in a different manner by implementing two additional models: (1) SETrain1-Moses, where a new training corpus is obtained by applying the splitting rules to the target side of the G M S StS Identity In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the incursions of other Viking groups.",
"5.00 5.00 0.00 0.00 Simple Wikipedia In return, Rollo swore fealty to Charles, converted to Christianity, and swore to defend the northern region of France against raids by other Viking groups.",
"4.67 5.00 1.00 0.00 SBMT-SARI In return, Rollo swore fealty to Charles, converted to Christianity, and set out to defend the north of France from the raids of other viking groups.",
"4.67 4.67 0.67 0.00 NTS-h1 In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the raids of other Viking groups.",
"5.00 5.00 1.00 0.00 NTS-h4 In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the attacks of other Viking groups.",
"4.67 5.00 1.00 0.00 DSS Rollo swore fealty to Charles.",
"Rollo converted to Christianity.",
"Rollo undertook to defend the northern region of France against the incursions of other viking groups.",
"4.00 4.33 1.33 1.33 HYBRID In return Rollo swore, and undertook to defend the region of France., Charles, converted 2.33 2.00 0.33 0.33 SEMoses Rollo swore put his seal to Charles.",
"Rollo converted to Christianity.",
"Rollo undertook to defend the northern region of France against the incursions of other viking groups.",
"3.33 4.00 1.33 1.33 SENTS-h1 Rollo swore fealty to Charles.",
"5.00 2.00 2.00 2.00 SENTS-h4 Rollo swore fealty to Charles and converted to Christianity.",
"5.00 2.67 1.33 1.33 Table 5 : System outputs for one of the test sentences with the corresponding human evaluation scores (averaged over the 3 annotators).",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"Table 6 : Human evaluation using manual UCCA annotation.",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"X m refers to the semi-automatic version of the system X. training corpus; (2) SETrain2-Moses, where the rules are applied to the source side.",
"The resulting parallel corpus is concatenated to the original training corpus.",
"We also examine whether training a language model (LM) on split sentences has a positive effect, and train the LM on the split target side.",
"For each system X, the version with the LM trained on split sentences is denoted by X LM .",
"We repeat the same human and automatic evaluation protocol as in §6, presenting results in Table 4 .",
"Simplicity scores are much higher in the case of SENTS (that uses NMT), than with Moses.",
"The two best systems according to SARI are SEMoses and SEMoses LM which use DSS.",
"In fact, they resemble the performance of DSS applied alone (Tables 2 and 3) , which confirms the high degree of conservatism observed by Moses in simplification (Alva-Manchego et al., 2017) .",
"Indeed, all Moses-based systems that don't apply DSS as preprocessing are conservative, obtaining high scores for BLEU, grammaticality and meaning preservation, but low scores for simplicity.",
"Training the LM on split sentences shows little improvement.",
"Conclusion We presented the first simplification system combining semantic structures and neural machine translation, showing that it outperforms existing lexical and structural systems.",
"The proposed approach addresses the over-conservatism of MTbased systems for TS, which often fail to modify the source in any way.",
"The semantic component performs sentence splitting without relying on a specialized corpus, but only an off-theshelf semantic parser.",
"The consideration of sentence splitting as a decomposition of a sentence into its Scenes is further supported by recent work on structural TS evaluation (Sulem et al., 2018) , which proposes the SAMSA metric.",
"The two works, which apply this assumption to different ends (TS system construction, and TS evaluation), confirm its validity.",
"Future work will leverage UCCA's cross-linguistic applicability to support multi-lingual TS and TS pre-processing for MT."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"4",
"5",
"6",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Semantic Representation",
"The Semantic Rules",
"Neural Component",
"Experimental Setup",
"Results",
"Additional Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-16#paper-994#slide-5 | Direct Semantic Splitting DSS | A simple algorithm that directly decomposes the sentence into its semantic components, using 2 splitting rules.
The splitting is directed by semantic parsing.
The semantic annotation directly captures shared arguments.
It can be used as a preprocessing step for other simplification operations.
(Pipeline: Input sentence → Sentence Splitting → Split sentence → Deletions, Word substitutions → Output)
The splitting is directed by semantic parsing.
The semantic annotation directly captures shared arguments.
It can be used as a preprocessing step for other simplification operations.
(Pipeline: Input sentence → Sentence Splitting → Split sentence → Deletions, Word substitutions → Output)
GEM-SciDuet-train-16#paper-994#slide-6 | 994 | Simple and Effective Text Simplification Using Semantic and Neural Methods | Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used in this situation. Previous application of Machine Translation for simplification suffers from a considerable disadvantage in that they are overconservative, often failing to modify the source in any way. Splitting based on semantic parsing, as proposed here, alleviates this issue. Extensive automatic and human evaluation shows that the proposed method compares favorably to the stateof-the-art in combined lexical and structural simplification. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266
],
"paper_content_text": [
"Introduction Text Simplification (TS) is generally defined as the conversion of a sentence into one or more simpler sentences.",
"It has been shown useful both as a preprocessing step for tasks such as Machine Translation (MT; Mishra et al., 2014; Štajner and Popović, 2016) and relation extraction (Niklaus et al., 2016) , as well as for developing reading aids, e.g.",
"for people with dyslexia (Rello et al., 2013) or non-native speakers (Siddharthan, 2002) .",
"TS includes both structural and lexical operations.",
"The main structural simplification operation is sentence splitting, namely rewriting a single sentence into multiple sentences while preserving its meaning.",
"While recent improvement in TS has been achieved by the use of neural MT (NMT) approaches (Nisioi et al., 2017; Zhang and Lapata, 2017) , where TS is consid-ered a case of monolingual translation, the sentence splitting operation has not been addressed by these systems, potentially due to the rareness of this operation in the training corpora (Narayan and Gardent, 2014; Xu et al., 2015) .",
"We show that the explicit integration of sentence splitting in the simplification system could also reduce conservatism, which is a grave limitation of NMT-based TS systems (Alva-Manchego et al., 2017) .",
"Indeed, experimenting with a stateof-the-art neural system (Nisioi et al., 2017) , we find that 66% of the input sentences remain unchanged, while none of the corresponding references is identical to the source.",
"Human and automatic evaluation of the references (against other references), confirm that the references are indeed simpler than the source, indicating that the observed conservatism is excessive.",
"Our methods for performing sentence splitting as pre-processing allows the TS system to perform other structural (e.g.",
"deletions) and lexical (e.g.",
"word substitutions) operations, thus increasing both structural and lexical simplicity.",
"For combining linguistically informed sentence splitting with data-driven TS, two main methods have been proposed.",
"The first involves handcrafted syntactic rules, whose compilation and validation are laborious (Shardlow, 2014) .",
"For example, Siddharthan and Angrosh (2014) used 111 rules for relative clauses, appositions, subordination and coordination.",
"Moreover, syntactic splitting rules, which form a substantial part of the rules, are usually language specific, requiring the development of new rules when ported to other languages (Aluísio and Gasperin, 2010; Seretan, 2012; Hung et al., 2012; Barlacchi and Tonelli, 2013 , for Portuguese, French, Vietnamese, and Italian respectively).",
"The second method uses linguistic information for detecting potential splitting points, while splitting probabilities are learned us-ing a parallel corpus.",
"For example, in the system of Narayan and Gardent (2014) (henceforth, HYBRID) , the state-of-the-art for joint structural and lexical TS, potential splitting points are determined by event boundaries.",
"In this work, which is the first to combine structural semantics and neural methods for TS, we propose an intermediate way for performing sentence splitting, presenting Direct Semantic Splitting (DSS), a simple and efficient algorithm based on a semantic parser which supports the direct decomposition of the sentence into its main semantic constituents.",
"After splitting, NMT-based simplification is performed, using the NTS system.",
"We show that the resulting system outperforms HY-BRID in both automatic and human evaluation.",
"We use the UCCA scheme for semantic representation (Abend and Rappoport, 2013) , where the semantic units are anchored in the text, which simplifies the splitting operation.",
"We further leverage the explicit distinction in UCCA between types of Scenes (events), applying a specific rule for each of the cases.",
"Nevertheless, the DSS approach can be adapted to other semantic schemes, like AMR (Banarescu et al., 2013) .",
"We collect human judgments for multiple variants of our system, its sub-components, HYBRID and similar systems that use phrase-based MT.",
"This results in a sizable human evaluation benchmark, which includes 28 systems, totaling at 1960 complex-simple sentence pairs, each annotated by three annotators using four criteria.",
"1 This benchmark will support the future analysis of TS systems, and evaluation practices.",
"Previous work is discussed in §2, the semantic and NMT components we use in §3 and §4 respectively.",
"The experimental setup is detailed in §5.",
"Our main results are presented in §6, while §7 presents a more detailed analysis of the system's sub-components and related settings.",
"Related Work MT-based sentence simplification.",
"Phrasebased Machine Translation (PBMT; Koehn et al., 2003) was first used for TS by Specia (2010) , who showed good performance on lexical simplification and simple rewriting, but under-prediction of other operations.",
"Štajner et al.",
"(2015) took a similar approach, finding that it is beneficial to use training data where the source side is highly similar to the target.",
"Other PBMT for TS systems include the work of Coster and Kauchak (2011b) , which uses Moses (Koehn et al., 2007) , the work of Coster and Kauchak (2011a) , where the model is extended to include deletion, and PBMT-R (Wubben et al., 2012) , where Levenshtein distance to the source is used for re-ranking to overcome conservatism.",
"The NTS NMT-based system (Nisioi et al., 2017) (henceforth, N17) reported superior performance over PBMT in terms of BLEU and human evaluation scores, and serves as a component in our system (see Section 4).",
"took a similar approach, adding lexical constraints to an NMT model.",
"Zhang and Lapata (2017) combined NMT with reinforcement learning, using SARI (Xu et al., 2016) , BLEU, and cosine similarity to the source as the reward.",
"None of these models explicitly addresses sentence splitting.",
"Alva-Manchego et al.",
"(2017) proposed to reduce conservatism, observed in PBMT and NMT systems, by first identifying simplification operations in a parallel corpus and then using sequencelabeling to perform the simplification.",
"However, they did not address common structural operations, such as sentence splitting, and claimed that their method is not applicable to them.",
"Xu et al.",
"(2016) used Syntax-based Machine Translation (SBMT) for sentence simplification, using a large scale paraphrase dataset (Ganitketitch et al., 2013) for training.",
"While it does not target structural simplification, we include it in our evaluation for completeness.",
"Structural sentence simplification.",
"Syntactic hand-crafted sentence splitting rules were proposed by Chandrasekar et al.",
"(1996) , Siddharthan (2002) , Siddhathan (2011) in the context of rulebased TS.",
"The rules separate relative clauses and coordinated clauses and un-embed appositives.",
"In our method, the use of semantic distinctions instead of syntactic ones reduces the number of rules.",
"For example, relative clauses and appositives can correspond to the same semantic category.",
"In syntax-based splitting, a generation module is sometimes added after the split (Siddharthan, 2004) , addressing issues such as reordering and determiner selection.",
"In our model, no explicit regeneration is applied to the split sentences, which are fed directly to an NMT system.",
"Glavaš andŠtajner (2013) used a rule-based system conditioned on event extraction and syntax for defining two simplification models.",
"The eventwise simplification one, which separates events to separate output sentences, is similar to our semantic component.",
"Differences are in that we use a single semantic representation for defining the rules (rather than a combination of semantic and syntactic criteria), and avoid the need for complex rules for retaining grammaticality by using a subsequent neural component.",
"Combined structural and lexical TS.",
"Earlier TS models used syntactic information for splitting.",
"Zhu et al.",
"(2010) used syntactic information on the source side, based on the SBMT model of Yamada and Knight (2001) .",
"Syntactic structures were used on both sides in the model of Woodsend and Lapata (2011) , based on a quasi-synchronous grammar (Smith and Eisner, 2006) , which resulted in 438 learned splitting rules.",
"The model of Siddharthan and Angrosh (2014) is similar to ours in that it combines linguistic rules for structural simplification and statistical methods for lexical simplification.",
"However, we use 2 semantic splitting rules instead of their 26 syntactic rules for relative clauses and appositions, and 85 syntactic rules for subordination and coordination.",
"Narayan and Gardent (2014) argued that syntactic structures do not always capture the semantic arguments of a frame, which may result in wrong splitting boundaries.",
"Consequently, they proposed a supervised system (HYBRID) that uses semantic structures (Discourse Semantic Representations, (Kamp, 1981) ) for sentence splitting and deletion.",
"Splitting candidates are pairs of event variables associated with at least one core thematic role (e.g., agent or patient).",
"Semantic annotation is used on the source side in both training and test.",
"Lexical simplification is performed using the Moses system.",
"HYBRID is the most similar system to ours architecturally, in that it uses a combination of a semantic structural component and an MT component.",
"Narayan and Gardent (2016) proposed instead an unsupervised pipeline, where sentences are split based on a probabilistic model trained on the semantic structures of Simple Wikipedia as well as a language model trained on the same corpus.",
"Lexical simplification is there performed using the unsupervised model of Biran et al.",
"(2011) .",
"As their BLEU and adequacy scores are lower than HYBRID's, we use the latter for comparison.",
"Stajner and Glavaš (2017) combined rule-based simplification conditioned on event extraction, to-gether with an unsupervised lexical simplifier.",
"They tackle a different setting, and aim to simplify texts (rather than sentences), by allowing the deletion of entire input sentences.",
"Split and Rephrase.",
"recently proposed the Split and Rephrase task, focusing on sentence splitting.",
"For this purpose they presented a specialized parallel corpus, derived from the WebNLG dataset .",
"The latter is obtained from the DBPedia knowledge base (Mendes et al., 2012) using content selection and crowdsourcing, and is annotated with semantic triplets of subject-relation-object, obtained semi-automatically.",
"They experimented with five systems, including one similar to HY-BRID, as well as sequence-to-sequence methods for generating sentences from the source text and its semantic forms.",
"The present paper tackles both structural and lexical simplification, and examines the effect of sentence splitting on the subsequent application of a neural system, in terms of its tendency to perform other simplification operations.",
"For this purpose, we adopt a semantic corpus-independent approach for sentence splitting that can be easily integrated in any simplification system.",
"Another difference is that the semantic forms in Split and Rephrase are derived semi-automatically (during corpus compilation), while we automatically extract the semantic form, using a UCCA parser.",
"Direct Semantic Splitting Semantic Representation UCCA (Universal Cognitive Conceptual Annotation; Abend and Rappoport, 2013) is a semantic annotation scheme rooted in typological and cognitive linguistic theory (Dixon, 2010b (Dixon, ,a, 2012 Langacker, 2008) .",
"It aims to represent the main semantic phenomena in the text, abstracting away from syntactic forms.",
"UCCA has been shown to be preserved remarkably well across translations (Sulem et al., 2015) and has also been successfully used for the evaluation of machine translation (Birch et al., 2016) and, recently, for the evaluation of TS (Sulem et al., 2018) and grammatical error correction (Choshen and Abend, 2018) .",
"Formally, UCCA structures are directed acyclic graphs whose nodes (or units) correspond either to the leaves of the graph or to several elements viewed as a single entity according to some semantic or cognitive consideration.",
"A Scene is UCCA's notion of an event or a frame, and is a unit that corresponds to a movement, an action or a state which persists in time.",
"Every Scene contains one main relation, which can be either a Process or a State.",
"Scenes contain one or more Participants, interpreted in a broad sense to include locations and destinations.",
"For example, the sentence \"He went to school\" has a single Scene whose Process is \"went\".",
"The two Participants are \"He\" and \"to school\".",
"Scenes can have several roles in the text.",
"First, they can provide additional information about an established entity (Elaborator Scenes), commonly participles or relative clauses.",
"For example, \"(child) who went to school\" is an Elaborator Scene in \"The child who went to school is John\" (\"child\" serves both as an argument in the Elaborator Scene and as the Center).",
"A Scene may also be a Participant in another Scene.",
"For example, \"John went to school\" in the sentence: \"He said John went to school\".",
"In other cases, Scenes are annotated as Parallel Scenes (H), which are flat structures and may include a Linker (L), as in: \"When L [he arrives] H , [he will call them] H \".",
"With respect to units which are not Scenes, the category Center denotes the semantic head.",
"For example, \"dogs\" is the Center of the expression \"big brown dogs\", and \"box\" is the center of \"in the box\".",
"There could be more than one Center in a unit, for example in the case of coordination, where all conjuncts are Centers.",
"We define the minimal center of a UCCA unit u to be the UCCA graph's leaf reached by starting from u and iteratively selecting the child tagged as Center.",
"For generating UCCA's structures we use TUPA, a transition-based parser (Hershcovich et al., 2017) (specifically, the TUPA BiLST M model).",
"TUPA uses an expressive set of transitions, able to support all structural properties required by the UCCA scheme.",
"Its transition classifier is based on an MLP that receives a BiLSTM encoding of elements in the parser state (buffer, stack and intermediate graph), given word embeddings and other features.",
"The Semantic Rules For performing DSS, we define two simple splitting rules, conditioned on UCCA's categories.",
"We currently only consider Parallel Scenes and Elaborator Scenes, not separating Participant Scenes, in order to avoid splitting in cases of nominalizations or indirect speech.",
"For example, the sentence \"His arrival surprised everyone\", which has, in addition to the Scene evoked by \"surprised\", a Participant Scene evoked by \"arrival\", is not split here.",
"Rule #1.",
"Parallel Scenes of a given sentence are extracted, separated in different sentences, and concatenated according to the order of appearance.",
"More formally, given a decomposition of a sentence S into parallel Scenes Sc 1 , Sc 2 , · · · Sc n (indexed by the order of the first token), we obtain the following rule, where \"|\" is the sentence delimiter: S −→ Sc1|Sc2| · · · |Scn As UCCA allows argument sharing between Scenes, the rule may duplicate the same sub-span of S across sentences.",
"For example, the rule will convert \"He came back home and played piano\" into \"He came back home\"|\"He played piano.\"",
"Rule #2.",
"Given a sentence S, the second rule extracts Elaborator Scenes and corresponding minimal centers.",
"Elaborator Scenes are then concatenated to the original sentence, where the Elaborator Scenes, except for the minimal center they elaborate, are removed.",
"Pronouns such as \"who\", \"which\" and \"that\" are also removed.",
"Formally, if {(Sc 1 , C 1 ) · · · (Sc n , C n )} are the Elaborator Scenes of S and their corresponding minimal centers, the rewrite is: S −→ S − n i=1 (Sci − Ci)|Sc1| · · · |Scn where S −A is S without the unit A.",
"For example, this rule converts the sentence \"He observed the planet which has 14 known satellites\" to \"He observed the planet| Planet has 14 known satellites.\".",
"Article regeneration is not covered by the rule, as its output is directly fed into the NMT component.",
"After the extraction of Parallel Scenes and Elaborator Scenes, the resulting simplified Parallel Scenes are placed before the Elaborator Scenes.",
"See Figure 1 .",
"Neural Component The split sentences are run through the NTS stateof-the-art neural TS system (Nisioi et al., 2017) , built using the OpenNMT neural machine translation framework (Klein et al., 2017) .",
"The architecture includes two LSTM layers, with hidden states of 500 units in each, as well as global attention combined with input feeding (Luong et al., 2015) .",
"Training is done with a 0.3 dropout probability (Srivastava et al., 2014) .",
"This model uses alignment probabilities between the predictions and the original sentences, rather than characterbased models, to retrieve the original words.",
"We here consider the w2v initialization for NTS (N17), where word2vec embeddings of size 300 are trained on Google News (Mikolov et al., 2013a) and local embeddings of size 200 are trained on the training simplification corpus (Řehůřek and Sojka, 2010; Mikolov et al., 2013b) .",
"Local embeddings for the encoder are trained on the source side of the training corpus, while those for the decoder are trained on the simplified side.",
"For sampling multiple outputs from the system, beam search is performed during decoding by generating the first 5 hypotheses at each step ordered by the log-likelihood of the target sentence given the input sentence.",
"We here explore both the highest (h1) and fourth-ranked (h4) hypotheses, which we show to increase the SARI score and to be much less conservative.",
"2 We thus experiment with two variants of the neural component, denoted by NTS-h1 and NTS-h4.",
"The pipeline application of the rules and the neural system results in two corresponding models: SENTS-h1 and SENTS-h4.",
"Experimental Setup Corpus All systems are tested on the test corpus of Xu et al.",
"(2016) , 3 comprising 359 sentences from the PWKP corpus (Zhu et al., 2010) Neural component.",
"We use the NTS-w2v model 6 provided by N17, obtained by training on the corpus of Hwang et al.",
"(2015) and tuning on the corpus of Xu et al.",
"(2016) .",
"The training set is based on manual and automatic alignments between standard English Wikipedia and Simple English Wikipedia, including both good matches and partial matches whose similarity score is above the 0.45 scale threshold (Hwang et al., 2015) .",
"The total size of the training set is about 280K aligned sentences, of which 150K sentences are full matches and 130K are partial matches.",
"7 Comparison systems.",
"We compare our findings to HYBRID, which is the state of the art for joint structural and lexical simplification, imple-mented by Zhang and Lapata (2017) .",
"8 We use the released output of HYBRID, trained on a corpus extracted from Wikipedia, which includes the aligned sentence pairs from Kauchak (2013) , the aligned revision sentence pairs in Woodsend and Lapata (2011) , and the PWKP corpus, totaling about 296K sentence pairs.",
"The tuning set is the same as for the above systems.",
"In order to isolate the effect of NMT, we also implement SEMoses, where the neural-based component is replaced by the phrase-based MT system Moses, 9 which is also used in HYBRID.",
"The training, tuning and test sets are the same as in the case of SENTS.",
"MGIZA 10 is used for word alignment.",
"The KenLM language model is trained using the target side of the training corpus.",
"Additional baselines.",
"We report human and automatic evaluation scores for Identity (where the output is identical to the input), for Simple Wikipedia where the output is the corresponding aligned sentence in the PWKP corpus, and for the SBMT-SARI system, tuned against SARI (Xu et al., 2016) , which maximized the SARI score on this test set in previous works (Nisioi et al., 2017; Zhang and Lapata, 2017) .",
"Automatic evaluation.",
"The automatic metrics used for the evaluation are: (1) BLEU (Papineni et al., 2002) (2) SARI (System output Against References and against the Input sentence; Xu et al., 2016) , which compares the n-grams of the system output with those of the input and the human references, separately evaluating the quality of words that are added, deleted and kept by the systems.",
"(3) F add : the addition component of the SARI score (F-score); (4) F keep : the keeping component of the SARI score (F-score); (5) P del : the deletion component of the SARI score (precision).",
"11 Each metric is computed against the 8 available references.",
"We also assess system conservatism, reporting the percentage of sentences copied from the input (%Same), the averaged Levenshtein distance from the source (LD SC , which considers additions, deletions, and substitutions), and the number of source sentences that are split (#Split).",
"12 Human evaluation.",
"Human evaluation is carried out by 3 in-house native English annotators, who rated the different input-output pairs for the different systems according to 4 parameters: Grammaticality (G), Meaning preservation (M), Simplicity (S) and Structural Simplicity (StS).",
"Each input-output pair is rated by all 3 annotators.",
"Elicitation questions are given in Table 1 .",
"As the selection process of the input-output pairs in the test corpus of Xu et al.",
"(2016) , as well as their crowdsourced references, are explicitly biased towards lexical simplification, the use of human evaluation permits us to evaluate the structural aspects of the system outputs, even where structural operations are not attested in the references.",
"Indeed, we show that system outputs may receive considerably higher structural simplicity scores than the source, in spite of the sample selection bias.",
"Following previous work (e.g., Narayan and Gardent, 2014; Xu et al., 2016; Nisioi et al., 2017) , Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"Note that in the first question, the input sentence is not taken into account.",
"The grammaticality of the input is assessed by evaluating the Identity transformation (see Table 2 ), providing a baseline for the grammaticality scores of the other systems.",
"Following N17, a -2 to +2 scale is used for measuring simplicity, where a 0 score indicates that the input and the output are equally complex.",
"This scale, compared to the standard 1 to 5 scale, permits a better differentiation between cases where simplicity is hurt (the output is more complex than the original) and between cases where the output is as simple as the original, for example in the case of the identity transformation.",
"Structural simplicity is also evaluated with a -2 to +2 scale.",
"The question for eliciting StS is accompanied with a negative example, showing a case of lexical simplification, where a complex word is replaced by a simple one (the other questions appear without examples).",
"A positive example is not included so as not to bias the annotators by revealing the nature of the operations we focus on (splitting and deletion).",
"We follow N17 in applying human evaluation on the first 70 sentences of the test corpus.",
"13 The resulting corpus, totaling 1960 sentence pairs, each annotated by 3 annotators, also include the additional experiments described in Section 7 as well as the outputs of the NTS and SENTS systems used with the default initialization.",
"The inter-annotator agreement, using Cohen's quadratic weighted κ (Cohen, 1968) , is computed as the average agreement of the 3 annotator pairs.",
"The obtained rates are 0.56, 0.75, 0.47 and 0.48 for G, M, S and StS respectively.",
"System scores are computed by averaging over the 3 annotators and the 70 sentences.",
"G Is the output fluent and grammatical?",
"M Does the output preserve the meaning of the input?",
"S Is the output simpler than the input?",
"StS Is the output simpler than the input, ignoring the complexity of the words?",
"Table 2 : Human evaluation of the different NMT-based systems.",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"The highest score in each column appears in bold.",
"Structural simplification systems are those that explicitly model structural operations.",
"Results Human evaluation.",
"Results are presented in Table 2 .",
"First, we can see that the two SENTS systems outperform HYBRID in terms of G, M, and S. SENTS-h1 is the best scoring system, under all human measures.",
"In comparison to NTS, SENTS scores markedly higher on the simplicity judgments.",
"Meaning preservation and grammaticality are lower for SENTS, which is likely due to the more conservative nature of NTS.",
"Interestingly, the application of the splitting rules by themselves does not yield a considerably simpler sentence.",
"This likely stems from the rules not necessarily yielding grammatical sentences (NTS often serves as a grammatical error corrector over it), and from the incorporation of deletions, which are also structural operations, and are performed by the neural system.",
"An example of high structural simplicity scores for SENTS resulting from deletions is presented in Table 5 , together with the outputs of the other systems and the corresponding human evaluation scores.",
"NTS here performs lexical simplification, replacing the word \"incursions\" by \"raids\" or \"attacks\"'.",
"On the other hand, the high StS scores obtained by DSS and SEMoses are due to sentence splittings.",
"Automatic evaluation.",
"Results are presented in Table 3 .",
"Identity obtains much higher BLEU scores than any other system, suggesting that BLEU may not be informative in this setting.",
"SARI seems more informative, and assigns the lowest score to Identity and the second highest to the reference.",
"Both SENTS systems outperform HYBRID in terms of SARI and all its 3 sub-components.",
"The h4 setting (hypothesis #4 in the beam) is generally best, both with and without the splitting rules.",
"Comparing SENTS to using NTS alone (without splitting), we see that SENTS obtains higher SARI scores when hypothesis #1 is used and that NTS obtains higher scores when hypothesis #4 is used.",
"This may result from NTS being more conservative than SENTS (and HYBRID), which is rewarded by SARI (conservatism is indicated by the %Same column).",
"Indeed for h1, %Same is reduced from around 66% for NTS, to around 7% for SENTS.",
"Conservatism further decreases when h4 is used (for both NTS and SENTS).",
"Examining SARI's components, we find that SENTS outperforms NTS on F add , and is comparable (or even superior for h1 setting) to NTS on P del .",
"The superior SARI score of NTS over SENTS is thus entirely a result of a superior F keep , which is easier for a conservative system to maximize.",
"Comparing HYBRID with SEMoses, both of which use Moses, we find that SEMoses obtains higher BLEU and SARI scores, as well as G and M human scores, and splits many more sentences.",
"HYBRID scores higher on the human simplicity measures.",
"We note, however, that applying NTS alone is inferior to HYBRID in terms of simplicity, and that both components are required to obtain high simplicity scores (with SENTS).",
"We also compare the sentence splitting component used in our systems (namely DSS) to that used in HYBRID, abstracting away from deletionbased and lexical simplification.",
"We therefore apply DSS to the test set (554 sentences) of the Table 4 : Automatic and human evaluation for the different combinations of Moses and DSS.",
"The automatic metrics as well as the lexical and structural properties reported (%Same: proportion of sentences copied from the input; LDSC: Averaged Levenshtein distance from the source; #Split: number of split sentences) concern the 359 sentences of the test corpus.",
"Human evaluation, with the G, M, S, and StS parameters, is applied to the first 70 sentences of the corpus.",
"The highest score in each column appears in bold.",
"WEB-SPLIT corpus (See Section 2), which focuses on sentence splitting.",
"We compare our results to those reported for a variant of HYBRID used without the deletion module, and trained on WEB-SPLIT .",
"DSS gets a higher BLEU score (46.45 vs. 39.97) and performs more splittings (number of output sentences per input sentence of 1.73 vs. 1.26).",
"Additional Experiments Replacing the parser by manual annotation.",
"In order to isolate the influence of the parser on the results, we implement a semi-automatic version of the semantic component, which uses manual UCCA annotation instead of the parser, focusing of the first 70 sentences of the test corpus.",
"We employ a single expert UCCA annotator and use the UCCAApp annotation tool .",
"Results are presented in Table 6 , for both SENTS and SEMoses.",
"In the case of SEMoses, meaning preservation is improved when manual UCCA annotation is used.",
"On the other hand, simplicity degrades, possibly due to the larger number of Scenes marked by the human annotator (TUPA tends to under-predict Scenes).",
"This effect doesn't show with SENTS, where trends are similar to the automatic parses case, and high simplicity scores are obtained.",
"This demonstrates that UCCA parsing technology is sufficiently mature to be used to carry out structural simplification.",
"We also directly evaluate the performance of the parser by computing F1, Recall and Precision DAG scores (Hershcovich et al., 2017) , against the manual UCCA annotation.",
"14 We obtain for primary edges (i.e.",
"edges that form a tree structure) scores of 68.9 %, 70.5%, and 67.4% for F1, Recall and Precision respectively.",
"For remotes edges (i.e.",
"additional edges, forming a DAG), the scores are 45.3%, 40.5%, and 51.5%.",
"These results are comparable with the out-of-domain results reported by Hershcovich et al.",
"(2017) .",
"Experiments on Moses.",
"We test other variants of SEMoses, where phrase-based MT is used instead of NMT.",
"Specifically, we incorporate semantic information in a different manner by implementing two additional models: (1) SETrain1-Moses, where a new training corpus is obtained by applying the splitting rules to the target side of the G M S StS Identity In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the incursions of other Viking groups.",
"5.00 5.00 0.00 0.00 Simple Wikipedia In return, Rollo swore fealty to Charles, converted to Christianity, and swore to defend the northern region of France against raids by other Viking groups.",
"4.67 5.00 1.00 0.00 SBMT-SARI In return, Rollo swore fealty to Charles, converted to Christianity, and set out to defend the north of France from the raids of other viking groups.",
"4.67 4.67 0.67 0.00 NTS-h1 In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the raids of other Viking groups.",
"5.00 5.00 1.00 0.00 NTS-h4 In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the attacks of other Viking groups.",
"4.67 5.00 1.00 0.00 DSS Rollo swore fealty to Charles.",
"Rollo converted to Christianity.",
"Rollo undertook to defend the northern region of France against the incursions of other viking groups.",
"4.00 4.33 1.33 1.33 HYBRID In return Rollo swore, and undertook to defend the region of France., Charles, converted 2.33 2.00 0.33 0.33 SEMoses Rollo swore put his seal to Charles.",
"Rollo converted to Christianity.",
"Rollo undertook to defend the northern region of France against the incursions of other viking groups.",
"3.33 4.00 1.33 1.33 SENTS-h1 Rollo swore fealty to Charles.",
"5.00 2.00 2.00 2.00 SENTS-h4 Rollo swore fealty to Charles and converted to Christianity.",
"5.00 2.67 1.33 1.33 Table 5 : System outputs for one of the test sentences with the corresponding human evaluation scores (averaged over the 3 annotators).",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"Table 6 : Human evaluation using manual UCCA annotation.",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"X m refers to the semi-automatic version of the system X. training corpus; (2) SETrain2-Moses, where the rules are applied to the source side.",
"The resulting parallel corpus is concatenated to the original training corpus.",
"We also examine whether training a language model (LM) on split sentences has a positive effect, and train the LM on the split target side.",
"For each system X, the version with the LM trained on split sentences is denoted by X LM .",
"We repeat the same human and automatic evaluation protocol as in §6, presenting results in Table 4 .",
"Simplicity scores are much higher in the case of SENTS (that uses NMT), than with Moses.",
"The two best systems according to SARI are SEMoses and SEMoses LM which use DSS.",
"In fact, they resemble the performance of DSS applied alone (Tables 2 and 3) , which confirms the high degree of conservatism observed by Moses in simplification (Alva-Manchego et al., 2017) .",
"Indeed, all Moses-based systems that don't apply DSS as preprocessing are conservative, obtaining high scores for BLEU, grammaticality and meaning preservation, but low scores for simplicity.",
"Training the LM on split sentences shows little improvement.",
"Conclusion We presented the first simplification system combining semantic structures and neural machine translation, showing that it outperforms existing lexical and structural systems.",
"The proposed approach addresses the over-conservatism of MTbased systems for TS, which often fail to modify the source in any way.",
"The semantic component performs sentence splitting without relying on a specialized corpus, but only an off-theshelf semantic parser.",
"The consideration of sentence splitting as a decomposition of a sentence into its Scenes is further supported by recent work on structural TS evaluation (Sulem et al., 2018) , which proposes the SAMSA metric.",
"The two works, which apply this assumption to different ends (TS system construction, and TS evaluation), confirm its validity.",
"Future work will leverage UCCA's cross-linguistic applicability to support multi-lingual TS and TS pre-processing for MT."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"4",
"5",
"6",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Semantic Representation",
"The Semantic Rules",
"Neural Component",
"Experimental Setup",
"Results",
"Additional Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-16#paper-994#slide-6 | The Semantic Structures | Semantic Annotation: UCCA (Abend and Rappoport, 2013)
- Based on typological and cognitive theories
[Slide figure: UCCA graphs for "He came back home and played piano" (two Parallel Scenes linked by the Linker "and"), "His arrival surprised ..." (a Scene whose Participant is itself a Scene, "His arrival"), and "He observed the planet which has ... satellites" (with the Elaborator Scene "which has ... satellites"). Category legend: Parallel Scene (H), Linker (L), Participant (A), Process (P), State (S), Center (C), Elaborator (E), Relator (R).]
- Stable across translations (Sulem, Abend and Rappoport, 2015)
- Used for the evaluation of MT, GEC and Text Simplification
- Explicitly annotates semantic distinctions, abstracting away from syntax
- Unlike AMR, semantic units are directly anchored in the text.
Scenes evoked by a Main Relation (Process or State).
- A Scene may contain one or several Participants.
- A Scene can provide additional information on an established entity:
it is then an Elaborator Scene.
- A Scene may also be a Participant in another Scene:
It is then a Participant Scene.
- In the other cases, Scenes are annotated as Parallel Scenes.
A Linker may be included. | Semantic Annotation: UCCA (Abend and Rappoport, 2013)
- Based on typological and cognitive theories
[Slide figure: UCCA graphs for "He came back home and played piano" (two Parallel Scenes linked by the Linker "and"), "His arrival surprised ..." (a Scene whose Participant is itself a Scene, "His arrival"), and "He observed the planet which has ... satellites" (with the Elaborator Scene "which has ... satellites"). Category legend: Parallel Scene (H), Linker (L), Participant (A), Process (P), State (S), Center (C), Elaborator (E), Relator (R).]
- Stable across translations (Sulem, Abend and Rappoport, 2015)
- Used for the evaluation of MT, GEC and Text Simplification
- Explicitly annotates semantic distinctions, abstracting away from syntax
- Unlike AMR, semantic units are directly anchored in the text.
Scenes evoked by a Main Relation (Process or State).
- A Scene may contain one or several Participants.
- A Scene can provide additional information on an established entity:
it is then an Elaborator Scene.
- A Scene may also be a Participant in another Scene:
It is then a Participant Scene.
- In the other cases, Scenes are annotated as Parallel Scenes.
A Linker may be included. | [] |
GEM-SciDuet-train-16#paper-994#slide-7 | 994 | Simple and Effective Text Simplification Using Semantic and Neural Methods | Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used in this situation. Previous application of Machine Translation for simplification suffers from a considerable disadvantage in that they are overconservative, often failing to modify the source in any way. Splitting based on semantic parsing, as proposed here, alleviates this issue. Extensive automatic and human evaluation shows that the proposed method compares favorably to the stateof-the-art in combined lexical and structural simplification. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266
],
"paper_content_text": [
"Introduction Text Simplification (TS) is generally defined as the conversion of a sentence into one or more simpler sentences.",
"It has been shown useful both as a preprocessing step for tasks such as Machine Translation (MT; Mishra et al., 2014; Štajner and Popović, 2016) and relation extraction (Niklaus et al., 2016) , as well as for developing reading aids, e.g.",
"for people with dyslexia (Rello et al., 2013) or non-native speakers (Siddharthan, 2002) .",
"TS includes both structural and lexical operations.",
"The main structural simplification operation is sentence splitting, namely rewriting a single sentence into multiple sentences while preserving its meaning.",
"While recent improvement in TS has been achieved by the use of neural MT (NMT) approaches (Nisioi et al., 2017; Zhang and Lapata, 2017) , where TS is consid-ered a case of monolingual translation, the sentence splitting operation has not been addressed by these systems, potentially due to the rareness of this operation in the training corpora (Narayan and Gardent, 2014; Xu et al., 2015) .",
"We show that the explicit integration of sentence splitting in the simplification system could also reduce conservatism, which is a grave limitation of NMT-based TS systems (Alva-Manchego et al., 2017) .",
"Indeed, experimenting with a stateof-the-art neural system (Nisioi et al., 2017) , we find that 66% of the input sentences remain unchanged, while none of the corresponding references is identical to the source.",
"Human and automatic evaluation of the references (against other references), confirm that the references are indeed simpler than the source, indicating that the observed conservatism is excessive.",
"Our methods for performing sentence splitting as pre-processing allows the TS system to perform other structural (e.g.",
"deletions) and lexical (e.g.",
"word substitutions) operations, thus increasing both structural and lexical simplicity.",
"For combining linguistically informed sentence splitting with data-driven TS, two main methods have been proposed.",
"The first involves handcrafted syntactic rules, whose compilation and validation are laborious (Shardlow, 2014) .",
"For example, Siddharthan and Angrosh (2014) used 111 rules for relative clauses, appositions, subordination and coordination.",
"Moreover, syntactic splitting rules, which form a substantial part of the rules, are usually language specific, requiring the development of new rules when ported to other languages (Aluísio and Gasperin, 2010; Seretan, 2012; Hung et al., 2012; Barlacchi and Tonelli, 2013 , for Portuguese, French, Vietnamese, and Italian respectively).",
"The second method uses linguistic information for detecting potential splitting points, while splitting probabilities are learned us-ing a parallel corpus.",
"For example, in the system of Narayan and Gardent (2014) (henceforth, HYBRID) , the state-of-the-art for joint structural and lexical TS, potential splitting points are determined by event boundaries.",
"In this work, which is the first to combine structural semantics and neural methods for TS, we propose an intermediate way for performing sentence splitting, presenting Direct Semantic Splitting (DSS), a simple and efficient algorithm based on a semantic parser which supports the direct decomposition of the sentence into its main semantic constituents.",
"After splitting, NMT-based simplification is performed, using the NTS system.",
"We show that the resulting system outperforms HY-BRID in both automatic and human evaluation.",
"We use the UCCA scheme for semantic representation (Abend and Rappoport, 2013) , where the semantic units are anchored in the text, which simplifies the splitting operation.",
"We further leverage the explicit distinction in UCCA between types of Scenes (events), applying a specific rule for each of the cases.",
"Nevertheless, the DSS approach can be adapted to other semantic schemes, like AMR (Banarescu et al., 2013) .",
"We collect human judgments for multiple variants of our system, its sub-components, HYBRID and similar systems that use phrase-based MT.",
"This results in a sizable human evaluation benchmark, which includes 28 systems, totaling at 1960 complex-simple sentence pairs, each annotated by three annotators using four criteria.",
"1 This benchmark will support the future analysis of TS systems, and evaluation practices.",
"Previous work is discussed in §2, the semantic and NMT components we use in §3 and §4 respectively.",
"The experimental setup is detailed in §5.",
"Our main results are presented in §6, while §7 presents a more detailed analysis of the system's sub-components and related settings.",
"Related Work MT-based sentence simplification.",
"Phrasebased Machine Translation (PBMT; Koehn et al., 2003) was first used for TS by Specia (2010) , who showed good performance on lexical simplification and simple rewriting, but under-prediction of other operations.",
"Štajner et al.",
"(2015) took a similar approach, finding that it is beneficial to use training data where the source side is highly similar to the target.",
"Other PBMT for TS systems include the work of Coster and Kauchak (2011b) , which uses Moses (Koehn et al., 2007) , the work of Coster and Kauchak (2011a) , where the model is extended to include deletion, and PBMT-R (Wubben et al., 2012) , where Levenshtein distance to the source is used for re-ranking to overcome conservatism.",
"The NTS NMT-based system (Nisioi et al., 2017) (henceforth, N17) reported superior performance over PBMT in terms of BLEU and human evaluation scores, and serves as a component in our system (see Section 4).",
"took a similar approach, adding lexical constraints to an NMT model.",
"Zhang and Lapata (2017) combined NMT with reinforcement learning, using SARI (Xu et al., 2016) , BLEU, and cosine similarity to the source as the reward.",
"None of these models explicitly addresses sentence splitting.",
"Alva-Manchego et al.",
"(2017) proposed to reduce conservatism, observed in PBMT and NMT systems, by first identifying simplification operations in a parallel corpus and then using sequencelabeling to perform the simplification.",
"However, they did not address common structural operations, such as sentence splitting, and claimed that their method is not applicable to them.",
"Xu et al.",
"(2016) used Syntax-based Machine Translation (SBMT) for sentence simplification, using a large scale paraphrase dataset (Ganitketitch et al., 2013) for training.",
"While it does not target structural simplification, we include it in our evaluation for completeness.",
"Structural sentence simplification.",
"Syntactic hand-crafted sentence splitting rules were proposed by Chandrasekar et al.",
"(1996) , Siddharthan (2002) , Siddhathan (2011) in the context of rulebased TS.",
"The rules separate relative clauses and coordinated clauses and un-embed appositives.",
"In our method, the use of semantic distinctions instead of syntactic ones reduces the number of rules.",
"For example, relative clauses and appositives can correspond to the same semantic category.",
"In syntax-based splitting, a generation module is sometimes added after the split (Siddharthan, 2004) , addressing issues such as reordering and determiner selection.",
"In our model, no explicit regeneration is applied to the split sentences, which are fed directly to an NMT system.",
"Glavaš andŠtajner (2013) used a rule-based system conditioned on event extraction and syntax for defining two simplification models.",
"The eventwise simplification one, which separates events to separate output sentences, is similar to our semantic component.",
"Differences are in that we use a single semantic representation for defining the rules (rather than a combination of semantic and syntactic criteria), and avoid the need for complex rules for retaining grammaticality by using a subsequent neural component.",
"Combined structural and lexical TS.",
"Earlier TS models used syntactic information for splitting.",
"Zhu et al.",
"(2010) used syntactic information on the source side, based on the SBMT model of Yamada and Knight (2001) .",
"Syntactic structures were used on both sides in the model of Woodsend and Lapata (2011) , based on a quasi-synchronous grammar (Smith and Eisner, 2006) , which resulted in 438 learned splitting rules.",
"The model of Siddharthan and Angrosh (2014) is similar to ours in that it combines linguistic rules for structural simplification and statistical methods for lexical simplification.",
"However, we use 2 semantic splitting rules instead of their 26 syntactic rules for relative clauses and appositions, and 85 syntactic rules for subordination and coordination.",
"Narayan and Gardent (2014) argued that syntactic structures do not always capture the semantic arguments of a frame, which may result in wrong splitting boundaries.",
"Consequently, they proposed a supervised system (HYBRID) that uses semantic structures (Discourse Semantic Representations, (Kamp, 1981) ) for sentence splitting and deletion.",
"Splitting candidates are pairs of event variables associated with at least one core thematic role (e.g., agent or patient).",
"Semantic annotation is used on the source side in both training and test.",
"Lexical simplification is performed using the Moses system.",
"HYBRID is the most similar system to ours architecturally, in that it uses a combination of a semantic structural component and an MT component.",
"Narayan and Gardent (2016) proposed instead an unsupervised pipeline, where sentences are split based on a probabilistic model trained on the semantic structures of Simple Wikipedia as well as a language model trained on the same corpus.",
"Lexical simplification is there performed using the unsupervised model of Biran et al.",
"(2011) .",
"As their BLEU and adequacy scores are lower than HYBRID's, we use the latter for comparison.",
"Stajner and Glavaš (2017) combined rule-based simplification conditioned on event extraction, to-gether with an unsupervised lexical simplifier.",
"They tackle a different setting, and aim to simplify texts (rather than sentences), by allowing the deletion of entire input sentences.",
"Split and Rephrase.",
"recently proposed the Split and Rephrase task, focusing on sentence splitting.",
"For this purpose they presented a specialized parallel corpus, derived from the WebNLG dataset .",
"The latter is obtained from the DBPedia knowledge base (Mendes et al., 2012) using content selection and crowdsourcing, and is annotated with semantic triplets of subject-relation-object, obtained semi-automatically.",
"They experimented with five systems, including one similar to HY-BRID, as well as sequence-to-sequence methods for generating sentences from the source text and its semantic forms.",
"The present paper tackles both structural and lexical simplification, and examines the effect of sentence splitting on the subsequent application of a neural system, in terms of its tendency to perform other simplification operations.",
"For this purpose, we adopt a semantic corpus-independent approach for sentence splitting that can be easily integrated in any simplification system.",
"Another difference is that the semantic forms in Split and Rephrase are derived semi-automatically (during corpus compilation), while we automatically extract the semantic form, using a UCCA parser.",
"Direct Semantic Splitting Semantic Representation UCCA (Universal Cognitive Conceptual Annotation; Abend and Rappoport, 2013) is a semantic annotation scheme rooted in typological and cognitive linguistic theory (Dixon, 2010b (Dixon, ,a, 2012 Langacker, 2008) .",
"It aims to represent the main semantic phenomena in the text, abstracting away from syntactic forms.",
"UCCA has been shown to be preserved remarkably well across translations (Sulem et al., 2015) and has also been successfully used for the evaluation of machine translation (Birch et al., 2016) and, recently, for the evaluation of TS (Sulem et al., 2018) and grammatical error correction (Choshen and Abend, 2018) .",
"Formally, UCCA structures are directed acyclic graphs whose nodes (or units) correspond either to the leaves of the graph or to several elements viewed as a single entity according to some semantic or cognitive consideration.",
"A Scene is UCCA's notion of an event or a frame, and is a unit that corresponds to a movement, an action or a state which persists in time.",
"Every Scene contains one main relation, which can be either a Process or a State.",
"Scenes contain one or more Participants, interpreted in a broad sense to include locations and destinations.",
"For example, the sentence \"He went to school\" has a single Scene whose Process is \"went\".",
"The two Participants are \"He\" and \"to school\".",
"Scenes can have several roles in the text.",
"First, they can provide additional information about an established entity (Elaborator Scenes), commonly participles or relative clauses.",
"For example, \"(child) who went to school\" is an Elaborator Scene in \"The child who went to school is John\" (\"child\" serves both as an argument in the Elaborator Scene and as the Center).",
"A Scene may also be a Participant in another Scene.",
"For example, \"John went to school\" in the sentence: \"He said John went to school\".",
"In other cases, Scenes are annotated as Parallel Scenes (H), which are flat structures and may include a Linker (L), as in: \"When L [he arrives] H , [he will call them] H \".",
"With respect to units which are not Scenes, the category Center denotes the semantic head.",
"For example, \"dogs\" is the Center of the expression \"big brown dogs\", and \"box\" is the center of \"in the box\".",
"There could be more than one Center in a unit, for example in the case of coordination, where all conjuncts are Centers.",
"We define the minimal center of a UCCA unit u to be the UCCA graph's leaf reached by starting from u and iteratively selecting the child tagged as Center.",
"For generating UCCA's structures we use TUPA, a transition-based parser (Hershcovich et al., 2017) (specifically, the TUPA BiLST M model).",
"TUPA uses an expressive set of transitions, able to support all structural properties required by the UCCA scheme.",
"Its transition classifier is based on an MLP that receives a BiLSTM encoding of elements in the parser state (buffer, stack and intermediate graph), given word embeddings and other features.",
"The Semantic Rules For performing DSS, we define two simple splitting rules, conditioned on UCCA's categories.",
"We currently only consider Parallel Scenes and Elaborator Scenes, not separating Participant Scenes, in order to avoid splitting in cases of nominalizations or indirect speech.",
"For example, the sentence \"His arrival surprised everyone\", which has, in addition to the Scene evoked by \"surprised\", a Participant Scene evoked by \"arrival\", is not split here.",
"Rule #1.",
"Parallel Scenes of a given sentence are extracted, separated in different sentences, and concatenated according to the order of appearance.",
"More formally, given a decomposition of a sentence S into parallel Scenes Sc 1 , Sc 2 , · · · Sc n (indexed by the order of the first token), we obtain the following rule, where \"|\" is the sentence delimiter: S −→ Sc1|Sc2| · · · |Scn As UCCA allows argument sharing between Scenes, the rule may duplicate the same sub-span of S across sentences.",
"For example, the rule will convert \"He came back home and played piano\" into \"He came back home\"|\"He played piano.\"",
"Rule #2.",
"Given a sentence S, the second rule extracts Elaborator Scenes and corresponding minimal centers.",
"Elaborator Scenes are then concatenated to the original sentence, where the Elaborator Scenes, except for the minimal center they elaborate, are removed.",
"Pronouns such as \"who\", \"which\" and \"that\" are also removed.",
"Formally, if {(Sc 1 , C 1 ) · · · (Sc n , C n )} are the Elaborator Scenes of S and their corresponding minimal centers, the rewrite is: S −→ S − n i=1 (Sci − Ci)|Sc1| · · · |Scn where S −A is S without the unit A.",
"For example, this rule converts the sentence \"He observed the planet which has 14 known satellites\" to \"He observed the planet| Planet has 14 known satellites.\".",
"Article regeneration is not covered by the rule, as its output is directly fed into the NMT component.",
"After the extraction of Parallel Scenes and Elaborator Scenes, the resulting simplified Parallel Scenes are placed before the Elaborator Scenes.",
"See Figure 1 .",
"Neural Component The split sentences are run through the NTS stateof-the-art neural TS system (Nisioi et al., 2017) , built using the OpenNMT neural machine translation framework (Klein et al., 2017) .",
"The architecture includes two LSTM layers, with hidden states of 500 units in each, as well as global attention combined with input feeding (Luong et al., 2015) .",
"Training is done with a 0.3 dropout probability (Srivastava et al., 2014) .",
"This model uses alignment probabilities between the predictions and the original sentences, rather than characterbased models, to retrieve the original words.",
"We here consider the w2v initialization for NTS (N17), where word2vec embeddings of size 300 are trained on Google News (Mikolov et al., 2013a) and local embeddings of size 200 are trained on the training simplification corpus (Řehůřek and Sojka, 2010; Mikolov et al., 2013b) .",
"Local embeddings for the encoder are trained on the source side of the training corpus, while those for the decoder are trained on the simplified side.",
"For sampling multiple outputs from the system, beam search is performed during decoding by generating the first 5 hypotheses at each step ordered by the log-likelihood of the target sentence given the input sentence.",
"We here explore both the highest (h1) and fourth-ranked (h4) hypotheses, which we show to increase the SARI score and to be much less conservative.",
"2 We thus experiment with two variants of the neural component, denoted by NTS-h1 and NTS-h4.",
"The pipeline application of the rules and the neural system results in two corresponding models: SENTS-h1 and SENTS-h4.",
"Experimental Setup Corpus All systems are tested on the test corpus of Xu et al.",
"(2016) , 3 comprising 359 sentences from the PWKP corpus (Zhu et al., 2010) Neural component.",
"We use the NTS-w2v model 6 provided by N17, obtained by training on the corpus of Hwang et al.",
"(2015) and tuning on the corpus of Xu et al.",
"(2016) .",
"The training set is based on manual and automatic alignments between standard English Wikipedia and Simple English Wikipedia, including both good matches and partial matches whose similarity score is above the 0.45 scale threshold (Hwang et al., 2015) .",
"The total size of the training set is about 280K aligned sentences, of which 150K sentences are full matches and 130K are partial matches.",
"7 Comparison systems.",
"We compare our findings to HYBRID, which is the state of the art for joint structural and lexical simplification, imple-mented by Zhang and Lapata (2017) .",
"8 We use the released output of HYBRID, trained on a corpus extracted from Wikipedia, which includes the aligned sentence pairs from Kauchak (2013) , the aligned revision sentence pairs in Woodsend and Lapata (2011) , and the PWKP corpus, totaling about 296K sentence pairs.",
"The tuning set is the same as for the above systems.",
"In order to isolate the effect of NMT, we also implement SEMoses, where the neural-based component is replaced by the phrase-based MT system Moses, 9 which is also used in HYBRID.",
"The training, tuning and test sets are the same as in the case of SENTS.",
"MGIZA 10 is used for word alignment.",
"The KenLM language model is trained using the target side of the training corpus.",
"Additional baselines.",
"We report human and automatic evaluation scores for Identity (where the output is identical to the input), for Simple Wikipedia where the output is the corresponding aligned sentence in the PWKP corpus, and for the SBMT-SARI system, tuned against SARI (Xu et al., 2016) , which maximized the SARI score on this test set in previous works (Nisioi et al., 2017; Zhang and Lapata, 2017) .",
"Automatic evaluation.",
"The automatic metrics used for the evaluation are: (1) BLEU (Papineni et al., 2002) (2) SARI (System output Against References and against the Input sentence; Xu et al., 2016) , which compares the n-grams of the system output with those of the input and the human references, separately evaluating the quality of words that are added, deleted and kept by the systems.",
"(3) F add : the addition component of the SARI score (F-score); (4) F keep : the keeping component of the SARI score (F-score); (5) P del : the deletion component of the SARI score (precision).",
"11 Each metric is computed against the 8 available references.",
"We also assess system conservatism, reporting the percentage of sentences copied from the input (%Same), the averaged Levenshtein distance from the source (LD SC , which considers additions, deletions, and substitutions), and the number of source sentences that are split (#Split).",
"12 Human evaluation.",
"Human evaluation is carried out by 3 in-house native English annotators, who rated the different input-output pairs for the different systems according to 4 parameters: Grammaticality (G), Meaning preservation (M), Simplicity (S) and Structural Simplicity (StS).",
"Each input-output pair is rated by all 3 annotators.",
"Elicitation questions are given in Table 1 .",
"As the selection process of the input-output pairs in the test corpus of Xu et al.",
"(2016) , as well as their crowdsourced references, are explicitly biased towards lexical simplification, the use of human evaluation permits us to evaluate the structural aspects of the system outputs, even where structural operations are not attested in the references.",
"Indeed, we show that system outputs may receive considerably higher structural simplicity scores than the source, in spite of the sample selection bias.",
"Following previous work (e.g., Narayan and Gardent, 2014; Xu et al., 2016; Nisioi et al., 2017) , Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"Note that in the first question, the input sentence is not taken into account.",
"The grammaticality of the input is assessed by evaluating the Identity transformation (see Table 2 ), providing a baseline for the grammaticality scores of the other systems.",
"Following N17, a -2 to +2 scale is used for measuring simplicity, where a 0 score indicates that the input and the output are equally complex.",
"This scale, compared to the standard 1 to 5 scale, permits a better differentiation between cases where simplicity is hurt (the output is more complex than the original) and between cases where the output is as simple as the original, for example in the case of the identity transformation.",
"Structural simplicity is also evaluated with a -2 to +2 scale.",
"The question for eliciting StS is accompanied with a negative example, showing a case of lexical simplification, where a complex word is replaced by a simple one (the other questions appear without examples).",
"A positive example is not included so as not to bias the annotators by revealing the nature of the operations we focus on (splitting and deletion).",
"We follow N17 in applying human evaluation on the first 70 sentences of the test corpus.",
"13 The resulting corpus, totaling 1960 sentence pairs, each annotated by 3 annotators, also include the additional experiments described in Section 7 as well as the outputs of the NTS and SENTS systems used with the default initialization.",
"The inter-annotator agreement, using Cohen's quadratic weighted κ (Cohen, 1968) , is computed as the average agreement of the 3 annotator pairs.",
"The obtained rates are 0.56, 0.75, 0.47 and 0.48 for G, M, S and StS respectively.",
"System scores are computed by averaging over the 3 annotators and the 70 sentences.",
"G Is the output fluent and grammatical?",
"M Does the output preserve the meaning of the input?",
"S Is the output simpler than the input?",
"StS Is the output simpler than the input, ignoring the complexity of the words?",
"Table 2 : Human evaluation of the different NMT-based systems.",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"The highest score in each column appears in bold.",
"Structural simplification systems are those that explicitly model structural operations.",
"Results Human evaluation.",
"Results are presented in Table 2 .",
"First, we can see that the two SENTS systems outperform HYBRID in terms of G, M, and S. SENTS-h1 is the best scoring system, under all human measures.",
"In comparison to NTS, SENTS scores markedly higher on the simplicity judgments.",
"Meaning preservation and grammaticality are lower for SENTS, which is likely due to the more conservative nature of NTS.",
"Interestingly, the application of the splitting rules by themselves does not yield a considerably simpler sentence.",
"This likely stems from the rules not necessarily yielding grammatical sentences (NTS often serves as a grammatical error corrector over it), and from the incorporation of deletions, which are also structural operations, and are performed by the neural system.",
"An example of high structural simplicity scores for SENTS resulting from deletions is presented in Table 5 , together with the outputs of the other systems and the corresponding human evaluation scores.",
"NTS here performs lexical simplification, replacing the word \"incursions\" by \"raids\" or \"attacks\"'.",
"On the other hand, the high StS scores obtained by DSS and SEMoses are due to sentence splittings.",
"Automatic evaluation.",
"Results are presented in Table 3 .",
"Identity obtains much higher BLEU scores than any other system, suggesting that BLEU may not be informative in this setting.",
"SARI seems more informative, and assigns the lowest score to Identity and the second highest to the reference.",
"Both SENTS systems outperform HYBRID in terms of SARI and all its 3 sub-components.",
"The h4 setting (hypothesis #4 in the beam) is generally best, both with and without the splitting rules.",
"Comparing SENTS to using NTS alone (without splitting), we see that SENTS obtains higher SARI scores when hypothesis #1 is used and that NTS obtains higher scores when hypothesis #4 is used.",
"This may result from NTS being more conservative than SENTS (and HYBRID), which is rewarded by SARI (conservatism is indicated by the %Same column).",
"Indeed for h1, %Same is reduced from around 66% for NTS, to around 7% for SENTS.",
"Conservatism further decreases when h4 is used (for both NTS and SENTS).",
"Examining SARI's components, we find that SENTS outperforms NTS on F add , and is comparable (or even superior for h1 setting) to NTS on P del .",
"The superior SARI score of NTS over SENTS is thus entirely a result of a superior F keep , which is easier for a conservative system to maximize.",
"Comparing HYBRID with SEMoses, both of which use Moses, we find that SEMoses obtains higher BLEU and SARI scores, as well as G and M human scores, and splits many more sentences.",
"HYBRID scores higher on the human simplicity measures.",
"We note, however, that applying NTS alone is inferior to HYBRID in terms of simplicity, and that both components are required to obtain high simplicity scores (with SENTS).",
"We also compare the sentence splitting component used in our systems (namely DSS) to that used in HYBRID, abstracting away from deletionbased and lexical simplification.",
"We therefore apply DSS to the test set (554 sentences) of the Table 4 : Automatic and human evaluation for the different combinations of Moses and DSS.",
"The automatic metrics as well as the lexical and structural properties reported (%Same: proportion of sentences copied from the input; LDSC: Averaged Levenshtein distance from the source; #Split: number of split sentences) concern the 359 sentences of the test corpus.",
"Human evaluation, with the G, M, S, and StS parameters, is applied to the first 70 sentences of the corpus.",
"The highest score in each column appears in bold.",
"WEB-SPLIT corpus (See Section 2), which focuses on sentence splitting.",
"We compare our results to those reported for a variant of HYBRID used without the deletion module, and trained on WEB-SPLIT .",
"DSS gets a higher BLEU score (46.45 vs. 39.97) and performs more splittings (number of output sentences per input sentence of 1.73 vs. 1.26).",
"Additional Experiments Replacing the parser by manual annotation.",
"In order to isolate the influence of the parser on the results, we implement a semi-automatic version of the semantic component, which uses manual UCCA annotation instead of the parser, focusing of the first 70 sentences of the test corpus.",
"We employ a single expert UCCA annotator and use the UCCAApp annotation tool .",
"Results are presented in Table 6 , for both SENTS and SEMoses.",
"In the case of SEMoses, meaning preservation is improved when manual UCCA annotation is used.",
"On the other hand, simplicity degrades, possibly due to the larger number of Scenes marked by the human annotator (TUPA tends to under-predict Scenes).",
"This effect doesn't show with SENTS, where trends are similar to the automatic parses case, and high simplicity scores are obtained.",
"This demonstrates that UCCA parsing technology is sufficiently mature to be used to carry out structural simplification.",
"We also directly evaluate the performance of the parser by computing F1, Recall and Precision DAG scores (Hershcovich et al., 2017) , against the manual UCCA annotation.",
"14 We obtain for primary edges (i.e.",
"edges that form a tree structure) scores of 68.9 %, 70.5%, and 67.4% for F1, Recall and Precision respectively.",
"For remotes edges (i.e.",
"additional edges, forming a DAG), the scores are 45.3%, 40.5%, and 51.5%.",
"These results are comparable with the out-of-domain results reported by Hershcovich et al.",
"(2017) .",
"Experiments on Moses.",
"We test other variants of SEMoses, where phrase-based MT is used instead of NMT.",
"Specifically, we incorporate semantic information in a different manner by implementing two additional models: (1) SETrain1-Moses, where a new training corpus is obtained by applying the splitting rules to the target side of the G M S StS Identity In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the incursions of other Viking groups.",
"5.00 5.00 0.00 0.00 Simple Wikipedia In return, Rollo swore fealty to Charles, converted to Christianity, and swore to defend the northern region of France against raids by other Viking groups.",
"4.67 5.00 1.00 0.00 SBMT-SARI In return, Rollo swore fealty to Charles, converted to Christianity, and set out to defend the north of France from the raids of other viking groups.",
"4.67 4.67 0.67 0.00 NTS-h1 In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the raids of other Viking groups.",
"5.00 5.00 1.00 0.00 NTS-h4 In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the attacks of other Viking groups.",
"4.67 5.00 1.00 0.00 DSS Rollo swore fealty to Charles.",
"Rollo converted to Christianity.",
"Rollo undertook to defend the northern region of France against the incursions of other viking groups.",
"4.00 4.33 1.33 1.33 HYBRID In return Rollo swore, and undertook to defend the region of France., Charles, converted 2.33 2.00 0.33 0.33 SEMoses Rollo swore put his seal to Charles.",
"Rollo converted to Christianity.",
"Rollo undertook to defend the northern region of France against the incursions of other viking groups.",
"3.33 4.00 1.33 1.33 SENTS-h1 Rollo swore fealty to Charles.",
"5.00 2.00 2.00 2.00 SENTS-h4 Rollo swore fealty to Charles and converted to Christianity.",
"5.00 2.67 1.33 1.33 Table 5 : System outputs for one of the test sentences with the corresponding human evaluation scores (averaged over the 3 annotators).",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"Table 6 : Human evaluation using manual UCCA annotation.",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"X m refers to the semi-automatic version of the system X. training corpus; (2) SETrain2-Moses, where the rules are applied to the source side.",
"The resulting parallel corpus is concatenated to the original training corpus.",
"We also examine whether training a language model (LM) on split sentences has a positive effect, and train the LM on the split target side.",
"For each system X, the version with the LM trained on split sentences is denoted by X LM .",
"We repeat the same human and automatic evaluation protocol as in §6, presenting results in Table 4 .",
"Simplicity scores are much higher in the case of SENTS (that uses NMT), than with Moses.",
"The two best systems according to SARI are SEMoses and SEMoses LM which use DSS.",
"In fact, they resemble the performance of DSS applied alone (Tables 2 and 3) , which confirms the high degree of conservatism observed by Moses in simplification (Alva-Manchego et al., 2017) .",
"Indeed, all Moses-based systems that don't apply DSS as preprocessing are conservative, obtaining high scores for BLEU, grammaticality and meaning preservation, but low scores for simplicity.",
"Training the LM on split sentences shows little improvement.",
"Conclusion We presented the first simplification system combining semantic structures and neural machine translation, showing that it outperforms existing lexical and structural systems.",
"The proposed approach addresses the over-conservatism of MTbased systems for TS, which often fail to modify the source in any way.",
"The semantic component performs sentence splitting without relying on a specialized corpus, but only an off-theshelf semantic parser.",
"The consideration of sentence splitting as a decomposition of a sentence into its Scenes is further supported by recent work on structural TS evaluation (Sulem et al., 2018) , which proposes the SAMSA metric.",
"The two works, which apply this assumption to different ends (TS system construction, and TS evaluation), confirm its validity.",
"Future work will leverage UCCA's cross-linguistic applicability to support multi-lingual TS and TS pre-processing for MT."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"4",
"5",
"6",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Semantic Representation",
"The Semantic Rules",
"Neural Component",
"Experimental Setup",
"Results",
"Additional Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-16#paper-994#slide-7 | The Semantic Rules | Placing each Scene in a different sentence.
Fits with event-wise simplification (Glavaš and Štajner, 2013)
Here we only use semantic criteria.
It was also investigated in the context of Text Simplification evaluation:
SAMSA measure (Sulem, Abend and Rappoport, NAACL 2018)
Example (Rule 1, Parallel Scenes): "He came back home and played piano." is split into "He came back home. He played piano."
[UCCA diagram: the input sentence and its two Parallel Scenes, each with Participant (A) and Process (P) units]
Example (Rule 2, Elaborator Scenes): "He observed the planet which has 14 satellites." is split into "He observed the planet. Planet has 14 satellites."
[UCCA diagram: the input sentence, its Elaborator Scene, and the input sentence without the Elaborator Scene, preserving the Minimal Center]
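A toy sketch of the two rules over a pre-parsed Scene structure; the dict-based Scene/Center representation and the "which"-stripping are invented for illustration (the real system operates on TUPA's UCCA graphs):

```python
# Toy rendering of the two DSS rules. Scenes are given as dicts with the
# text of the Scene and, for Elaborator Scenes, the minimal Center they
# elaborate. This data layout is illustrative only.

def rule1_parallel(parallel_scenes):
    # Rule 1: each Parallel Scene becomes its own sentence, in order.
    return [scene["text"] for scene in parallel_scenes]

def rule2_elaborator(sentence, elaborator_scenes):
    # Rule 2: remove each Elaborator Scene from the sentence, keeping its
    # minimal Center, then emit the Elaborator Scenes as new sentences.
    remainder = sentence
    extracted = []
    for scene in elaborator_scenes:
        remainder = remainder.replace(scene["text"], scene["center"])
        extracted.append(scene["text"].replace("which ", "").capitalize())
    return [remainder] + extracted

# rule2_elaborator("He observed the planet which has 14 satellites.",
#                  [{"text": "planet which has 14 satellites",
#                    "center": "planet"}])
# -> ["He observed the planet.", "Planet has 14 satellites"]
```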
Grammatical errors resulting from the split are not addressed by the rules, e.g., no article regeneration.
The output is directly fed into the NMT component.
Participant Scenes are not separated here to avoid direct splitting in these cases:
His arrival surprised Mary.
He said John went to school.
More transformations would be required for splitting in these cases. | Placing each Scene in a different sentence.
Fits with event-wise simplification (Glavas and Stajner, 2013)
Here we only use semantic criteria.
It was also investigated in the context of Text Simplification evaluation:
SAMSA measure (Sulem, Abend and Rappoport, NAACL 2018)
A A and He came back home and played piano.
He came back home played piano
A A P A A P He came back home. He played piano.
A A P A Input sentence A P Input Scenes He played piano He came back home
P He observed the planet which has 14 satellites.
the R S planet
E C which has
P A He A He observed the planet. Planet has 14 satellites.
observed S E C planet E C has the planet satellites
A A the R S SSSciC iSc1Scn planet
Input sentence A Elaborator Scenes A
Input sentence without the Elaborator Scenes, preserving the Minimal Center has the planet satellites
Grammatical errors resulting from the split are not addressed by the rules. e.g., no article regeneration.
The output is directly fed into the NMT component.
Participant Scenes are not separated here to avoid direct splitting in these cases:
His arrival surprised Mary.
He said John went to school.
More transformations would be required for splitting in these cases. | [] |
GEM-SciDuet-train-16#paper-994#slide-8 | 994 | Simple and Effective Text Simplification Using Semantic and Neural Methods | Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used in this situation. Previous application of Machine Translation for simplification suffers from a considerable disadvantage in that they are overconservative, often failing to modify the source in any way. Splitting based on semantic parsing, as proposed here, alleviates this issue. Extensive automatic and human evaluation shows that the proposed method compares favorably to the stateof-the-art in combined lexical and structural simplification. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266
],
"paper_content_text": [
"Introduction Text Simplification (TS) is generally defined as the conversion of a sentence into one or more simpler sentences.",
"It has been shown useful both as a preprocessing step for tasks such as Machine Translation (MT; Mishra et al., 2014; Štajner and Popović, 2016) and relation extraction (Niklaus et al., 2016) , as well as for developing reading aids, e.g.",
"for people with dyslexia (Rello et al., 2013) or non-native speakers (Siddharthan, 2002) .",
"TS includes both structural and lexical operations.",
"The main structural simplification operation is sentence splitting, namely rewriting a single sentence into multiple sentences while preserving its meaning.",
"While recent improvement in TS has been achieved by the use of neural MT (NMT) approaches (Nisioi et al., 2017; Zhang and Lapata, 2017) , where TS is consid-ered a case of monolingual translation, the sentence splitting operation has not been addressed by these systems, potentially due to the rareness of this operation in the training corpora (Narayan and Gardent, 2014; Xu et al., 2015) .",
"We show that the explicit integration of sentence splitting in the simplification system could also reduce conservatism, which is a grave limitation of NMT-based TS systems (Alva-Manchego et al., 2017) .",
"Indeed, experimenting with a stateof-the-art neural system (Nisioi et al., 2017) , we find that 66% of the input sentences remain unchanged, while none of the corresponding references is identical to the source.",
"Human and automatic evaluation of the references (against other references), confirm that the references are indeed simpler than the source, indicating that the observed conservatism is excessive.",
"Our methods for performing sentence splitting as pre-processing allows the TS system to perform other structural (e.g.",
"deletions) and lexical (e.g.",
"word substitutions) operations, thus increasing both structural and lexical simplicity.",
"For combining linguistically informed sentence splitting with data-driven TS, two main methods have been proposed.",
"The first involves handcrafted syntactic rules, whose compilation and validation are laborious (Shardlow, 2014) .",
"For example, Siddharthan and Angrosh (2014) used 111 rules for relative clauses, appositions, subordination and coordination.",
"Moreover, syntactic splitting rules, which form a substantial part of the rules, are usually language specific, requiring the development of new rules when ported to other languages (Aluísio and Gasperin, 2010; Seretan, 2012; Hung et al., 2012; Barlacchi and Tonelli, 2013 , for Portuguese, French, Vietnamese, and Italian respectively).",
"The second method uses linguistic information for detecting potential splitting points, while splitting probabilities are learned us-ing a parallel corpus.",
"For example, in the system of Narayan and Gardent (2014) (henceforth, HYBRID) , the state-of-the-art for joint structural and lexical TS, potential splitting points are determined by event boundaries.",
"In this work, which is the first to combine structural semantics and neural methods for TS, we propose an intermediate way for performing sentence splitting, presenting Direct Semantic Splitting (DSS), a simple and efficient algorithm based on a semantic parser which supports the direct decomposition of the sentence into its main semantic constituents.",
"After splitting, NMT-based simplification is performed, using the NTS system.",
"We show that the resulting system outperforms HY-BRID in both automatic and human evaluation.",
"We use the UCCA scheme for semantic representation (Abend and Rappoport, 2013) , where the semantic units are anchored in the text, which simplifies the splitting operation.",
"We further leverage the explicit distinction in UCCA between types of Scenes (events), applying a specific rule for each of the cases.",
"Nevertheless, the DSS approach can be adapted to other semantic schemes, like AMR (Banarescu et al., 2013) .",
"We collect human judgments for multiple variants of our system, its sub-components, HYBRID and similar systems that use phrase-based MT.",
"This results in a sizable human evaluation benchmark, which includes 28 systems, totaling at 1960 complex-simple sentence pairs, each annotated by three annotators using four criteria.",
"1 This benchmark will support the future analysis of TS systems, and evaluation practices.",
"Previous work is discussed in §2, the semantic and NMT components we use in §3 and §4 respectively.",
"The experimental setup is detailed in §5.",
"Our main results are presented in §6, while §7 presents a more detailed analysis of the system's sub-components and related settings.",
"Related Work MT-based sentence simplification.",
"Phrasebased Machine Translation (PBMT; Koehn et al., 2003) was first used for TS by Specia (2010) , who showed good performance on lexical simplification and simple rewriting, but under-prediction of other operations.",
"Štajner et al.",
"(2015) took a similar approach, finding that it is beneficial to use training data where the source side is highly similar to the target.",
"Other PBMT for TS systems include the work of Coster and Kauchak (2011b) , which uses Moses (Koehn et al., 2007) , the work of Coster and Kauchak (2011a) , where the model is extended to include deletion, and PBMT-R (Wubben et al., 2012) , where Levenshtein distance to the source is used for re-ranking to overcome conservatism.",
"The NTS NMT-based system (Nisioi et al., 2017) (henceforth, N17) reported superior performance over PBMT in terms of BLEU and human evaluation scores, and serves as a component in our system (see Section 4).",
"took a similar approach, adding lexical constraints to an NMT model.",
"Zhang and Lapata (2017) combined NMT with reinforcement learning, using SARI (Xu et al., 2016) , BLEU, and cosine similarity to the source as the reward.",
"None of these models explicitly addresses sentence splitting.",
"Alva-Manchego et al.",
"(2017) proposed to reduce conservatism, observed in PBMT and NMT systems, by first identifying simplification operations in a parallel corpus and then using sequencelabeling to perform the simplification.",
"However, they did not address common structural operations, such as sentence splitting, and claimed that their method is not applicable to them.",
"Xu et al.",
"(2016) used Syntax-based Machine Translation (SBMT) for sentence simplification, using a large scale paraphrase dataset (Ganitketitch et al., 2013) for training.",
"While it does not target structural simplification, we include it in our evaluation for completeness.",
"Structural sentence simplification.",
"Syntactic hand-crafted sentence splitting rules were proposed by Chandrasekar et al.",
"(1996) , Siddharthan (2002) , Siddhathan (2011) in the context of rulebased TS.",
"The rules separate relative clauses and coordinated clauses and un-embed appositives.",
"In our method, the use of semantic distinctions instead of syntactic ones reduces the number of rules.",
"For example, relative clauses and appositives can correspond to the same semantic category.",
"In syntax-based splitting, a generation module is sometimes added after the split (Siddharthan, 2004) , addressing issues such as reordering and determiner selection.",
"In our model, no explicit regeneration is applied to the split sentences, which are fed directly to an NMT system.",
"Glavaš andŠtajner (2013) used a rule-based system conditioned on event extraction and syntax for defining two simplification models.",
"The eventwise simplification one, which separates events to separate output sentences, is similar to our semantic component.",
"Differences are in that we use a single semantic representation for defining the rules (rather than a combination of semantic and syntactic criteria), and avoid the need for complex rules for retaining grammaticality by using a subsequent neural component.",
"Combined structural and lexical TS.",
"Earlier TS models used syntactic information for splitting.",
"Zhu et al.",
"(2010) used syntactic information on the source side, based on the SBMT model of Yamada and Knight (2001) .",
"Syntactic structures were used on both sides in the model of Woodsend and Lapata (2011) , based on a quasi-synchronous grammar (Smith and Eisner, 2006) , which resulted in 438 learned splitting rules.",
"The model of Siddharthan and Angrosh (2014) is similar to ours in that it combines linguistic rules for structural simplification and statistical methods for lexical simplification.",
"However, we use 2 semantic splitting rules instead of their 26 syntactic rules for relative clauses and appositions, and 85 syntactic rules for subordination and coordination.",
"Narayan and Gardent (2014) argued that syntactic structures do not always capture the semantic arguments of a frame, which may result in wrong splitting boundaries.",
"Consequently, they proposed a supervised system (HYBRID) that uses semantic structures (Discourse Semantic Representations, (Kamp, 1981) ) for sentence splitting and deletion.",
"Splitting candidates are pairs of event variables associated with at least one core thematic role (e.g., agent or patient).",
"Semantic annotation is used on the source side in both training and test.",
"Lexical simplification is performed using the Moses system.",
"HYBRID is the most similar system to ours architecturally, in that it uses a combination of a semantic structural component and an MT component.",
"Narayan and Gardent (2016) proposed instead an unsupervised pipeline, where sentences are split based on a probabilistic model trained on the semantic structures of Simple Wikipedia as well as a language model trained on the same corpus.",
"Lexical simplification is there performed using the unsupervised model of Biran et al.",
"(2011) .",
"As their BLEU and adequacy scores are lower than HYBRID's, we use the latter for comparison.",
"Stajner and Glavaš (2017) combined rule-based simplification conditioned on event extraction, to-gether with an unsupervised lexical simplifier.",
"They tackle a different setting, and aim to simplify texts (rather than sentences), by allowing the deletion of entire input sentences.",
"Split and Rephrase.",
"recently proposed the Split and Rephrase task, focusing on sentence splitting.",
"For this purpose they presented a specialized parallel corpus, derived from the WebNLG dataset .",
"The latter is obtained from the DBPedia knowledge base (Mendes et al., 2012) using content selection and crowdsourcing, and is annotated with semantic triplets of subject-relation-object, obtained semi-automatically.",
"They experimented with five systems, including one similar to HY-BRID, as well as sequence-to-sequence methods for generating sentences from the source text and its semantic forms.",
"The present paper tackles both structural and lexical simplification, and examines the effect of sentence splitting on the subsequent application of a neural system, in terms of its tendency to perform other simplification operations.",
"For this purpose, we adopt a semantic corpus-independent approach for sentence splitting that can be easily integrated in any simplification system.",
"Another difference is that the semantic forms in Split and Rephrase are derived semi-automatically (during corpus compilation), while we automatically extract the semantic form, using a UCCA parser.",
"Direct Semantic Splitting Semantic Representation UCCA (Universal Cognitive Conceptual Annotation; Abend and Rappoport, 2013) is a semantic annotation scheme rooted in typological and cognitive linguistic theory (Dixon, 2010b (Dixon, ,a, 2012 Langacker, 2008) .",
"It aims to represent the main semantic phenomena in the text, abstracting away from syntactic forms.",
"UCCA has been shown to be preserved remarkably well across translations (Sulem et al., 2015) and has also been successfully used for the evaluation of machine translation (Birch et al., 2016) and, recently, for the evaluation of TS (Sulem et al., 2018) and grammatical error correction (Choshen and Abend, 2018) .",
"Formally, UCCA structures are directed acyclic graphs whose nodes (or units) correspond either to the leaves of the graph or to several elements viewed as a single entity according to some semantic or cognitive consideration.",
"A Scene is UCCA's notion of an event or a frame, and is a unit that corresponds to a movement, an action or a state which persists in time.",
"Every Scene contains one main relation, which can be either a Process or a State.",
"Scenes contain one or more Participants, interpreted in a broad sense to include locations and destinations.",
"For example, the sentence \"He went to school\" has a single Scene whose Process is \"went\".",
"The two Participants are \"He\" and \"to school\".",
"Scenes can have several roles in the text.",
"First, they can provide additional information about an established entity (Elaborator Scenes), commonly participles or relative clauses.",
"For example, \"(child) who went to school\" is an Elaborator Scene in \"The child who went to school is John\" (\"child\" serves both as an argument in the Elaborator Scene and as the Center).",
"A Scene may also be a Participant in another Scene.",
"For example, \"John went to school\" in the sentence: \"He said John went to school\".",
"In other cases, Scenes are annotated as Parallel Scenes (H), which are flat structures and may include a Linker (L), as in: \"When L [he arrives] H , [he will call them] H \".",
"With respect to units which are not Scenes, the category Center denotes the semantic head.",
"For example, \"dogs\" is the Center of the expression \"big brown dogs\", and \"box\" is the center of \"in the box\".",
"There could be more than one Center in a unit, for example in the case of coordination, where all conjuncts are Centers.",
"We define the minimal center of a UCCA unit u to be the UCCA graph's leaf reached by starting from u and iteratively selecting the child tagged as Center.",
"For generating UCCA's structures we use TUPA, a transition-based parser (Hershcovich et al., 2017) (specifically, the TUPA BiLST M model).",
"TUPA uses an expressive set of transitions, able to support all structural properties required by the UCCA scheme.",
"Its transition classifier is based on an MLP that receives a BiLSTM encoding of elements in the parser state (buffer, stack and intermediate graph), given word embeddings and other features.",
"The Semantic Rules For performing DSS, we define two simple splitting rules, conditioned on UCCA's categories.",
"We currently only consider Parallel Scenes and Elaborator Scenes, not separating Participant Scenes, in order to avoid splitting in cases of nominalizations or indirect speech.",
"For example, the sentence \"His arrival surprised everyone\", which has, in addition to the Scene evoked by \"surprised\", a Participant Scene evoked by \"arrival\", is not split here.",
"Rule #1.",
"Parallel Scenes of a given sentence are extracted, separated in different sentences, and concatenated according to the order of appearance.",
"More formally, given a decomposition of a sentence S into parallel Scenes Sc 1 , Sc 2 , · · · Sc n (indexed by the order of the first token), we obtain the following rule, where \"|\" is the sentence delimiter: S −→ Sc1|Sc2| · · · |Scn As UCCA allows argument sharing between Scenes, the rule may duplicate the same sub-span of S across sentences.",
"For example, the rule will convert \"He came back home and played piano\" into \"He came back home\"|\"He played piano.\"",
"Rule #2.",
"Given a sentence S, the second rule extracts Elaborator Scenes and corresponding minimal centers.",
"Elaborator Scenes are then concatenated to the original sentence, where the Elaborator Scenes, except for the minimal center they elaborate, are removed.",
"Pronouns such as \"who\", \"which\" and \"that\" are also removed.",
"Formally, if {(Sc 1 , C 1 ) · · · (Sc n , C n )} are the Elaborator Scenes of S and their corresponding minimal centers, the rewrite is: S −→ S − n i=1 (Sci − Ci)|Sc1| · · · |Scn where S −A is S without the unit A.",
"For example, this rule converts the sentence \"He observed the planet which has 14 known satellites\" to \"He observed the planet| Planet has 14 known satellites.\".",
"Article regeneration is not covered by the rule, as its output is directly fed into the NMT component.",
"After the extraction of Parallel Scenes and Elaborator Scenes, the resulting simplified Parallel Scenes are placed before the Elaborator Scenes.",
"See Figure 1 .",
"Neural Component The split sentences are run through the NTS stateof-the-art neural TS system (Nisioi et al., 2017) , built using the OpenNMT neural machine translation framework (Klein et al., 2017) .",
"The architecture includes two LSTM layers, with hidden states of 500 units in each, as well as global attention combined with input feeding (Luong et al., 2015) .",
"Training is done with a 0.3 dropout probability (Srivastava et al., 2014) .",
"This model uses alignment probabilities between the predictions and the original sentences, rather than characterbased models, to retrieve the original words.",
"We here consider the w2v initialization for NTS (N17), where word2vec embeddings of size 300 are trained on Google News (Mikolov et al., 2013a) and local embeddings of size 200 are trained on the training simplification corpus (Řehůřek and Sojka, 2010; Mikolov et al., 2013b) .",
"Local embeddings for the encoder are trained on the source side of the training corpus, while those for the decoder are trained on the simplified side.",
"For sampling multiple outputs from the system, beam search is performed during decoding by generating the first 5 hypotheses at each step ordered by the log-likelihood of the target sentence given the input sentence.",
"We here explore both the highest (h1) and fourth-ranked (h4) hypotheses, which we show to increase the SARI score and to be much less conservative.",
"2 We thus experiment with two variants of the neural component, denoted by NTS-h1 and NTS-h4.",
"The pipeline application of the rules and the neural system results in two corresponding models: SENTS-h1 and SENTS-h4.",
"Experimental Setup Corpus All systems are tested on the test corpus of Xu et al.",
"(2016) , 3 comprising 359 sentences from the PWKP corpus (Zhu et al., 2010) Neural component.",
"We use the NTS-w2v model 6 provided by N17, obtained by training on the corpus of Hwang et al.",
"(2015) and tuning on the corpus of Xu et al.",
"(2016) .",
"The training set is based on manual and automatic alignments between standard English Wikipedia and Simple English Wikipedia, including both good matches and partial matches whose similarity score is above the 0.45 scale threshold (Hwang et al., 2015) .",
"The total size of the training set is about 280K aligned sentences, of which 150K sentences are full matches and 130K are partial matches.",
"7 Comparison systems.",
"We compare our findings to HYBRID, which is the state of the art for joint structural and lexical simplification, imple-mented by Zhang and Lapata (2017) .",
"8 We use the released output of HYBRID, trained on a corpus extracted from Wikipedia, which includes the aligned sentence pairs from Kauchak (2013) , the aligned revision sentence pairs in Woodsend and Lapata (2011) , and the PWKP corpus, totaling about 296K sentence pairs.",
"The tuning set is the same as for the above systems.",
"In order to isolate the effect of NMT, we also implement SEMoses, where the neural-based component is replaced by the phrase-based MT system Moses, 9 which is also used in HYBRID.",
"The training, tuning and test sets are the same as in the case of SENTS.",
"MGIZA 10 is used for word alignment.",
"The KenLM language model is trained using the target side of the training corpus.",
"Additional baselines.",
"We report human and automatic evaluation scores for Identity (where the output is identical to the input), for Simple Wikipedia where the output is the corresponding aligned sentence in the PWKP corpus, and for the SBMT-SARI system, tuned against SARI (Xu et al., 2016) , which maximized the SARI score on this test set in previous works (Nisioi et al., 2017; Zhang and Lapata, 2017) .",
"Automatic evaluation.",
"The automatic metrics used for the evaluation are: (1) BLEU (Papineni et al., 2002) (2) SARI (System output Against References and against the Input sentence; Xu et al., 2016) , which compares the n-grams of the system output with those of the input and the human references, separately evaluating the quality of words that are added, deleted and kept by the systems.",
"(3) F add : the addition component of the SARI score (F-score); (4) F keep : the keeping component of the SARI score (F-score); (5) P del : the deletion component of the SARI score (precision).",
"11 Each metric is computed against the 8 available references.",
"We also assess system conservatism, reporting the percentage of sentences copied from the input (%Same), the averaged Levenshtein distance from the source (LD SC , which considers additions, deletions, and substitutions), and the number of source sentences that are split (#Split).",
"12 Human evaluation.",
"Human evaluation is carried out by 3 in-house native English annotators, who rated the different input-output pairs for the different systems according to 4 parameters: Grammaticality (G), Meaning preservation (M), Simplicity (S) and Structural Simplicity (StS).",
"Each input-output pair is rated by all 3 annotators.",
"Elicitation questions are given in Table 1 .",
"As the selection process of the input-output pairs in the test corpus of Xu et al.",
"(2016) , as well as their crowdsourced references, are explicitly biased towards lexical simplification, the use of human evaluation permits us to evaluate the structural aspects of the system outputs, even where structural operations are not attested in the references.",
"Indeed, we show that system outputs may receive considerably higher structural simplicity scores than the source, in spite of the sample selection bias.",
"Following previous work (e.g., Narayan and Gardent, 2014; Xu et al., 2016; Nisioi et al., 2017) , Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"Note that in the first question, the input sentence is not taken into account.",
"The grammaticality of the input is assessed by evaluating the Identity transformation (see Table 2 ), providing a baseline for the grammaticality scores of the other systems.",
"Following N17, a -2 to +2 scale is used for measuring simplicity, where a 0 score indicates that the input and the output are equally complex.",
"This scale, compared to the standard 1 to 5 scale, permits a better differentiation between cases where simplicity is hurt (the output is more complex than the original) and between cases where the output is as simple as the original, for example in the case of the identity transformation.",
"Structural simplicity is also evaluated with a -2 to +2 scale.",
"The question for eliciting StS is accompanied with a negative example, showing a case of lexical simplification, where a complex word is replaced by a simple one (the other questions appear without examples).",
"A positive example is not included so as not to bias the annotators by revealing the nature of the operations we focus on (splitting and deletion).",
"We follow N17 in applying human evaluation on the first 70 sentences of the test corpus.",
"13 The resulting corpus, totaling 1960 sentence pairs, each annotated by 3 annotators, also include the additional experiments described in Section 7 as well as the outputs of the NTS and SENTS systems used with the default initialization.",
"The inter-annotator agreement, using Cohen's quadratic weighted κ (Cohen, 1968) , is computed as the average agreement of the 3 annotator pairs.",
"The obtained rates are 0.56, 0.75, 0.47 and 0.48 for G, M, S and StS respectively.",
"System scores are computed by averaging over the 3 annotators and the 70 sentences.",
"G Is the output fluent and grammatical?",
"M Does the output preserve the meaning of the input?",
"S Is the output simpler than the input?",
"StS Is the output simpler than the input, ignoring the complexity of the words?",
"Table 2 : Human evaluation of the different NMT-based systems.",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"The highest score in each column appears in bold.",
"Structural simplification systems are those that explicitly model structural operations.",
"Results Human evaluation.",
"Results are presented in Table 2 .",
"First, we can see that the two SENTS systems outperform HYBRID in terms of G, M, and S. SENTS-h1 is the best scoring system, under all human measures.",
"In comparison to NTS, SENTS scores markedly higher on the simplicity judgments.",
"Meaning preservation and grammaticality are lower for SENTS, which is likely due to the more conservative nature of NTS.",
"Interestingly, the application of the splitting rules by themselves does not yield a considerably simpler sentence.",
"This likely stems from the rules not necessarily yielding grammatical sentences (NTS often serves as a grammatical error corrector over it), and from the incorporation of deletions, which are also structural operations, and are performed by the neural system.",
"An example of high structural simplicity scores for SENTS resulting from deletions is presented in Table 5 , together with the outputs of the other systems and the corresponding human evaluation scores.",
"NTS here performs lexical simplification, replacing the word \"incursions\" by \"raids\" or \"attacks\"'.",
"On the other hand, the high StS scores obtained by DSS and SEMoses are due to sentence splittings.",
"Automatic evaluation.",
"Results are presented in Table 3 .",
"Identity obtains much higher BLEU scores than any other system, suggesting that BLEU may not be informative in this setting.",
"SARI seems more informative, and assigns the lowest score to Identity and the second highest to the reference.",
"Both SENTS systems outperform HYBRID in terms of SARI and all its 3 sub-components.",
"The h4 setting (hypothesis #4 in the beam) is generally best, both with and without the splitting rules.",
"Comparing SENTS to using NTS alone (without splitting), we see that SENTS obtains higher SARI scores when hypothesis #1 is used and that NTS obtains higher scores when hypothesis #4 is used.",
"This may result from NTS being more conservative than SENTS (and HYBRID), which is rewarded by SARI (conservatism is indicated by the %Same column).",
"Indeed for h1, %Same is reduced from around 66% for NTS, to around 7% for SENTS.",
"Conservatism further decreases when h4 is used (for both NTS and SENTS).",
"Examining SARI's components, we find that SENTS outperforms NTS on F add , and is comparable (or even superior for h1 setting) to NTS on P del .",
"The superior SARI score of NTS over SENTS is thus entirely a result of a superior F keep , which is easier for a conservative system to maximize.",
"Comparing HYBRID with SEMoses, both of which use Moses, we find that SEMoses obtains higher BLEU and SARI scores, as well as G and M human scores, and splits many more sentences.",
"HYBRID scores higher on the human simplicity measures.",
"We note, however, that applying NTS alone is inferior to HYBRID in terms of simplicity, and that both components are required to obtain high simplicity scores (with SENTS).",
"We also compare the sentence splitting component used in our systems (namely DSS) to that used in HYBRID, abstracting away from deletionbased and lexical simplification.",
"We therefore apply DSS to the test set (554 sentences) of the Table 4 : Automatic and human evaluation for the different combinations of Moses and DSS.",
"The automatic metrics as well as the lexical and structural properties reported (%Same: proportion of sentences copied from the input; LDSC: Averaged Levenshtein distance from the source; #Split: number of split sentences) concern the 359 sentences of the test corpus.",
"Human evaluation, with the G, M, S, and StS parameters, is applied to the first 70 sentences of the corpus.",
"The highest score in each column appears in bold.",
"WEB-SPLIT corpus (See Section 2), which focuses on sentence splitting.",
"We compare our results to those reported for a variant of HYBRID used without the deletion module, and trained on WEB-SPLIT .",
"DSS gets a higher BLEU score (46.45 vs. 39.97) and performs more splittings (number of output sentences per input sentence of 1.73 vs. 1.26).",
"Additional Experiments Replacing the parser by manual annotation.",
"In order to isolate the influence of the parser on the results, we implement a semi-automatic version of the semantic component, which uses manual UCCA annotation instead of the parser, focusing of the first 70 sentences of the test corpus.",
"We employ a single expert UCCA annotator and use the UCCAApp annotation tool .",
"Results are presented in Table 6 , for both SENTS and SEMoses.",
"In the case of SEMoses, meaning preservation is improved when manual UCCA annotation is used.",
"On the other hand, simplicity degrades, possibly due to the larger number of Scenes marked by the human annotator (TUPA tends to under-predict Scenes).",
"This effect doesn't show with SENTS, where trends are similar to the automatic parses case, and high simplicity scores are obtained.",
"This demonstrates that UCCA parsing technology is sufficiently mature to be used to carry out structural simplification.",
"We also directly evaluate the performance of the parser by computing F1, Recall and Precision DAG scores (Hershcovich et al., 2017) , against the manual UCCA annotation.",
"14 We obtain for primary edges (i.e.",
"edges that form a tree structure) scores of 68.9 %, 70.5%, and 67.4% for F1, Recall and Precision respectively.",
"For remotes edges (i.e.",
"additional edges, forming a DAG), the scores are 45.3%, 40.5%, and 51.5%.",
"These results are comparable with the out-of-domain results reported by Hershcovich et al.",
"(2017) .",
"Experiments on Moses.",
"We test other variants of SEMoses, where phrase-based MT is used instead of NMT.",
"Specifically, we incorporate semantic information in a different manner by implementing two additional models: (1) SETrain1-Moses, where a new training corpus is obtained by applying the splitting rules to the target side of the G M S StS Identity In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the incursions of other Viking groups.",
"5.00 5.00 0.00 0.00 Simple Wikipedia In return, Rollo swore fealty to Charles, converted to Christianity, and swore to defend the northern region of France against raids by other Viking groups.",
"4.67 5.00 1.00 0.00 SBMT-SARI In return, Rollo swore fealty to Charles, converted to Christianity, and set out to defend the north of France from the raids of other viking groups.",
"4.67 4.67 0.67 0.00 NTS-h1 In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the raids of other Viking groups.",
"5.00 5.00 1.00 0.00 NTS-h4 In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the attacks of other Viking groups.",
"4.67 5.00 1.00 0.00 DSS Rollo swore fealty to Charles.",
"Rollo converted to Christianity.",
"Rollo undertook to defend the northern region of France against the incursions of other viking groups.",
"4.00 4.33 1.33 1.33 HYBRID In return Rollo swore, and undertook to defend the region of France., Charles, converted 2.33 2.00 0.33 0.33 SEMoses Rollo swore put his seal to Charles.",
"Rollo converted to Christianity.",
"Rollo undertook to defend the northern region of France against the incursions of other viking groups.",
"3.33 4.00 1.33 1.33 SENTS-h1 Rollo swore fealty to Charles.",
"5.00 2.00 2.00 2.00 SENTS-h4 Rollo swore fealty to Charles and converted to Christianity.",
"5.00 2.67 1.33 1.33 Table 5 : System outputs for one of the test sentences with the corresponding human evaluation scores (averaged over the 3 annotators).",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"Table 6 : Human evaluation using manual UCCA annotation.",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"X m refers to the semi-automatic version of the system X. training corpus; (2) SETrain2-Moses, where the rules are applied to the source side.",
"The resulting parallel corpus is concatenated to the original training corpus.",
"We also examine whether training a language model (LM) on split sentences has a positive effect, and train the LM on the split target side.",
"For each system X, the version with the LM trained on split sentences is denoted by X LM .",
"We repeat the same human and automatic evaluation protocol as in §6, presenting results in Table 4 .",
"Simplicity scores are much higher in the case of SENTS (that uses NMT), than with Moses.",
"The two best systems according to SARI are SEMoses and SEMoses LM which use DSS.",
"In fact, they resemble the performance of DSS applied alone (Tables 2 and 3) , which confirms the high degree of conservatism observed by Moses in simplification (Alva-Manchego et al., 2017) .",
"Indeed, all Moses-based systems that don't apply DSS as preprocessing are conservative, obtaining high scores for BLEU, grammaticality and meaning preservation, but low scores for simplicity.",
"Training the LM on split sentences shows little improvement.",
"Conclusion We presented the first simplification system combining semantic structures and neural machine translation, showing that it outperforms existing lexical and structural systems.",
"The proposed approach addresses the over-conservatism of MTbased systems for TS, which often fail to modify the source in any way.",
"The semantic component performs sentence splitting without relying on a specialized corpus, but only an off-theshelf semantic parser.",
"The consideration of sentence splitting as a decomposition of a sentence into its Scenes is further supported by recent work on structural TS evaluation (Sulem et al., 2018) , which proposes the SAMSA metric.",
"The two works, which apply this assumption to different ends (TS system construction, and TS evaluation), confirm its validity.",
"Future work will leverage UCCA's cross-linguistic applicability to support multi-lingual TS and TS pre-processing for MT."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"4",
"5",
"6",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Semantic Representation",
"The Semantic Rules",
"Neural Component",
"Experimental Setup",
"Results",
"Additional Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-16#paper-994#slide-8 | Combining DSS with Neural Text Simplification | After DSS, the output is fed to an MT-based simplification system.
We use a state-of-the-art NMT-based TS system, NTS (Nisioi et al., 2017).
The combined system is called SENTS.
NTS was built using the OpenNMT (Klein et al., 2017) framework.
We use the provided NTS-w2v model, where word2vec embeddings are used for initialization.
Beam search is used during decoding. We explore both the highest-ranked (h1) and a lower-ranked hypothesis (h4), which is less conservative.
NTS model trained on the corpus of Hwang et al., 2015 (~280K sentence pairs).
It was tuned on the corpus of Xu et al., 2016 (2000 sentences with 8 references). | After DSS, the output is fed to an MT-based simplification system.
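Selecting h1 vs. h4 amounts to picking one hypothesis out of each source's n-best list; the sketch below assumes a flat "n_best lines per source" output file, which is a common layout for beam-search decoders (the exact file format is an assumption):

```python
# Pick hypothesis h-k from an n-best list produced with beam search
# (e.g., a decoder run with n_best=5). Hypotheses are assumed to be
# ordered by log-likelihood within each source's block.

def pick_hypothesis(nbest_lines, n_best=5, k=1):
    # k=1 -> h1 (highest log-likelihood), k=4 -> h4 (less conservative).
    assert 1 <= k <= n_best
    return [nbest_lines[i + k - 1] for i in range(0, len(nbest_lines), n_best)]

# with open("nbest.txt") as f:
#     h4 = pick_hypothesis([l.rstrip("\n") for l in f], n_best=5, k=4)
```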
We use a state-of-the-art NMT-Based TS system, NTS (Nisioi et al., 2017).
The combined system is called SENTS.
NTS was built using the OpenNMT (Klein et al., 2017) framework.
We use the NTS-w2v provided model where word2vec embeddings are used for the initialization.
Beam search is used during decoding. We explore both the highest (h1) and a lower ranked hypothesis (h4), which is less conservative.
NTS model trained on the corpus of Hwang et al., 2015 (~280K sentence pairs).
It was tuned on the corpus of Xu et al., 2016 (2000 sentences with 8 references). | [] |
GEM-SciDuet-train-16#paper-994#slide-9 | 994 | Simple and Effective Text Simplification Using Semantic and Neural Methods | Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used in this situation. Previous application of Machine Translation for simplification suffers from a considerable disadvantage in that they are overconservative, often failing to modify the source in any way. Splitting based on semantic parsing, as proposed here, alleviates this issue. Extensive automatic and human evaluation shows that the proposed method compares favorably to the stateof-the-art in combined lexical and structural simplification. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266
],
"paper_content_text": [
"Introduction Text Simplification (TS) is generally defined as the conversion of a sentence into one or more simpler sentences.",
"It has been shown useful both as a preprocessing step for tasks such as Machine Translation (MT; Mishra et al., 2014; Štajner and Popović, 2016) and relation extraction (Niklaus et al., 2016) , as well as for developing reading aids, e.g.",
"for people with dyslexia (Rello et al., 2013) or non-native speakers (Siddharthan, 2002) .",
"TS includes both structural and lexical operations.",
"The main structural simplification operation is sentence splitting, namely rewriting a single sentence into multiple sentences while preserving its meaning.",
"While recent improvement in TS has been achieved by the use of neural MT (NMT) approaches (Nisioi et al., 2017; Zhang and Lapata, 2017) , where TS is consid-ered a case of monolingual translation, the sentence splitting operation has not been addressed by these systems, potentially due to the rareness of this operation in the training corpora (Narayan and Gardent, 2014; Xu et al., 2015) .",
"We show that the explicit integration of sentence splitting in the simplification system could also reduce conservatism, which is a grave limitation of NMT-based TS systems (Alva-Manchego et al., 2017) .",
"Indeed, experimenting with a stateof-the-art neural system (Nisioi et al., 2017) , we find that 66% of the input sentences remain unchanged, while none of the corresponding references is identical to the source.",
"Human and automatic evaluation of the references (against other references), confirm that the references are indeed simpler than the source, indicating that the observed conservatism is excessive.",
"Our methods for performing sentence splitting as pre-processing allows the TS system to perform other structural (e.g.",
"deletions) and lexical (e.g.",
"word substitutions) operations, thus increasing both structural and lexical simplicity.",
"For combining linguistically informed sentence splitting with data-driven TS, two main methods have been proposed.",
"The first involves handcrafted syntactic rules, whose compilation and validation are laborious (Shardlow, 2014) .",
"For example, Siddharthan and Angrosh (2014) used 111 rules for relative clauses, appositions, subordination and coordination.",
"Moreover, syntactic splitting rules, which form a substantial part of the rules, are usually language specific, requiring the development of new rules when ported to other languages (Aluísio and Gasperin, 2010; Seretan, 2012; Hung et al., 2012; Barlacchi and Tonelli, 2013 , for Portuguese, French, Vietnamese, and Italian respectively).",
"The second method uses linguistic information for detecting potential splitting points, while splitting probabilities are learned us-ing a parallel corpus.",
"For example, in the system of Narayan and Gardent (2014) (henceforth, HYBRID) , the state-of-the-art for joint structural and lexical TS, potential splitting points are determined by event boundaries.",
"In this work, which is the first to combine structural semantics and neural methods for TS, we propose an intermediate way for performing sentence splitting, presenting Direct Semantic Splitting (DSS), a simple and efficient algorithm based on a semantic parser which supports the direct decomposition of the sentence into its main semantic constituents.",
"After splitting, NMT-based simplification is performed, using the NTS system.",
"We show that the resulting system outperforms HY-BRID in both automatic and human evaluation.",
"We use the UCCA scheme for semantic representation (Abend and Rappoport, 2013) , where the semantic units are anchored in the text, which simplifies the splitting operation.",
"We further leverage the explicit distinction in UCCA between types of Scenes (events), applying a specific rule for each of the cases.",
"Nevertheless, the DSS approach can be adapted to other semantic schemes, like AMR (Banarescu et al., 2013) .",
"We collect human judgments for multiple variants of our system, its sub-components, HYBRID and similar systems that use phrase-based MT.",
"This results in a sizable human evaluation benchmark, which includes 28 systems, totaling at 1960 complex-simple sentence pairs, each annotated by three annotators using four criteria.",
"1 This benchmark will support the future analysis of TS systems, and evaluation practices.",
"Previous work is discussed in §2, the semantic and NMT components we use in §3 and §4 respectively.",
"The experimental setup is detailed in §5.",
"Our main results are presented in §6, while §7 presents a more detailed analysis of the system's sub-components and related settings.",
"Related Work MT-based sentence simplification.",
"Phrasebased Machine Translation (PBMT; Koehn et al., 2003) was first used for TS by Specia (2010) , who showed good performance on lexical simplification and simple rewriting, but under-prediction of other operations.",
"Štajner et al.",
"(2015) took a similar approach, finding that it is beneficial to use training data where the source side is highly similar to the target.",
"Other PBMT for TS systems include the work of Coster and Kauchak (2011b) , which uses Moses (Koehn et al., 2007) , the work of Coster and Kauchak (2011a) , where the model is extended to include deletion, and PBMT-R (Wubben et al., 2012) , where Levenshtein distance to the source is used for re-ranking to overcome conservatism.",
"The NTS NMT-based system (Nisioi et al., 2017) (henceforth, N17) reported superior performance over PBMT in terms of BLEU and human evaluation scores, and serves as a component in our system (see Section 4).",
"took a similar approach, adding lexical constraints to an NMT model.",
"Zhang and Lapata (2017) combined NMT with reinforcement learning, using SARI (Xu et al., 2016) , BLEU, and cosine similarity to the source as the reward.",
"None of these models explicitly addresses sentence splitting.",
"Alva-Manchego et al.",
"(2017) proposed to reduce conservatism, observed in PBMT and NMT systems, by first identifying simplification operations in a parallel corpus and then using sequencelabeling to perform the simplification.",
"However, they did not address common structural operations, such as sentence splitting, and claimed that their method is not applicable to them.",
"Xu et al.",
"(2016) used Syntax-based Machine Translation (SBMT) for sentence simplification, using a large scale paraphrase dataset (Ganitketitch et al., 2013) for training.",
"While it does not target structural simplification, we include it in our evaluation for completeness.",
"Structural sentence simplification.",
"Syntactic hand-crafted sentence splitting rules were proposed by Chandrasekar et al.",
"(1996) , Siddharthan (2002) , Siddhathan (2011) in the context of rulebased TS.",
"The rules separate relative clauses and coordinated clauses and un-embed appositives.",
"In our method, the use of semantic distinctions instead of syntactic ones reduces the number of rules.",
"For example, relative clauses and appositives can correspond to the same semantic category.",
"In syntax-based splitting, a generation module is sometimes added after the split (Siddharthan, 2004) , addressing issues such as reordering and determiner selection.",
"In our model, no explicit regeneration is applied to the split sentences, which are fed directly to an NMT system.",
"Glavaš andŠtajner (2013) used a rule-based system conditioned on event extraction and syntax for defining two simplification models.",
"The eventwise simplification one, which separates events to separate output sentences, is similar to our semantic component.",
"Differences are in that we use a single semantic representation for defining the rules (rather than a combination of semantic and syntactic criteria), and avoid the need for complex rules for retaining grammaticality by using a subsequent neural component.",
"Combined structural and lexical TS.",
"Earlier TS models used syntactic information for splitting.",
"Zhu et al.",
"(2010) used syntactic information on the source side, based on the SBMT model of Yamada and Knight (2001) .",
"Syntactic structures were used on both sides in the model of Woodsend and Lapata (2011) , based on a quasi-synchronous grammar (Smith and Eisner, 2006) , which resulted in 438 learned splitting rules.",
"The model of Siddharthan and Angrosh (2014) is similar to ours in that it combines linguistic rules for structural simplification and statistical methods for lexical simplification.",
"However, we use 2 semantic splitting rules instead of their 26 syntactic rules for relative clauses and appositions, and 85 syntactic rules for subordination and coordination.",
"Narayan and Gardent (2014) argued that syntactic structures do not always capture the semantic arguments of a frame, which may result in wrong splitting boundaries.",
"Consequently, they proposed a supervised system (HYBRID) that uses semantic structures (Discourse Semantic Representations, (Kamp, 1981) ) for sentence splitting and deletion.",
"Splitting candidates are pairs of event variables associated with at least one core thematic role (e.g., agent or patient).",
"Semantic annotation is used on the source side in both training and test.",
"Lexical simplification is performed using the Moses system.",
"HYBRID is the most similar system to ours architecturally, in that it uses a combination of a semantic structural component and an MT component.",
"Narayan and Gardent (2016) proposed instead an unsupervised pipeline, where sentences are split based on a probabilistic model trained on the semantic structures of Simple Wikipedia as well as a language model trained on the same corpus.",
"Lexical simplification is there performed using the unsupervised model of Biran et al.",
"(2011) .",
"As their BLEU and adequacy scores are lower than HYBRID's, we use the latter for comparison.",
"Stajner and Glavaš (2017) combined rule-based simplification conditioned on event extraction, to-gether with an unsupervised lexical simplifier.",
"They tackle a different setting, and aim to simplify texts (rather than sentences), by allowing the deletion of entire input sentences.",
"Split and Rephrase.",
"recently proposed the Split and Rephrase task, focusing on sentence splitting.",
"For this purpose they presented a specialized parallel corpus, derived from the WebNLG dataset .",
"The latter is obtained from the DBPedia knowledge base (Mendes et al., 2012) using content selection and crowdsourcing, and is annotated with semantic triplets of subject-relation-object, obtained semi-automatically.",
"They experimented with five systems, including one similar to HY-BRID, as well as sequence-to-sequence methods for generating sentences from the source text and its semantic forms.",
"The present paper tackles both structural and lexical simplification, and examines the effect of sentence splitting on the subsequent application of a neural system, in terms of its tendency to perform other simplification operations.",
"For this purpose, we adopt a semantic corpus-independent approach for sentence splitting that can be easily integrated in any simplification system.",
"Another difference is that the semantic forms in Split and Rephrase are derived semi-automatically (during corpus compilation), while we automatically extract the semantic form, using a UCCA parser.",
"Direct Semantic Splitting Semantic Representation UCCA (Universal Cognitive Conceptual Annotation; Abend and Rappoport, 2013) is a semantic annotation scheme rooted in typological and cognitive linguistic theory (Dixon, 2010b (Dixon, ,a, 2012 Langacker, 2008) .",
"It aims to represent the main semantic phenomena in the text, abstracting away from syntactic forms.",
"UCCA has been shown to be preserved remarkably well across translations (Sulem et al., 2015) and has also been successfully used for the evaluation of machine translation (Birch et al., 2016) and, recently, for the evaluation of TS (Sulem et al., 2018) and grammatical error correction (Choshen and Abend, 2018) .",
"Formally, UCCA structures are directed acyclic graphs whose nodes (or units) correspond either to the leaves of the graph or to several elements viewed as a single entity according to some semantic or cognitive consideration.",
"A Scene is UCCA's notion of an event or a frame, and is a unit that corresponds to a movement, an action or a state which persists in time.",
"Every Scene contains one main relation, which can be either a Process or a State.",
"Scenes contain one or more Participants, interpreted in a broad sense to include locations and destinations.",
"For example, the sentence \"He went to school\" has a single Scene whose Process is \"went\".",
"The two Participants are \"He\" and \"to school\".",
"Scenes can have several roles in the text.",
"First, they can provide additional information about an established entity (Elaborator Scenes), commonly participles or relative clauses.",
"For example, \"(child) who went to school\" is an Elaborator Scene in \"The child who went to school is John\" (\"child\" serves both as an argument in the Elaborator Scene and as the Center).",
"A Scene may also be a Participant in another Scene.",
"For example, \"John went to school\" in the sentence: \"He said John went to school\".",
"In other cases, Scenes are annotated as Parallel Scenes (H), which are flat structures and may include a Linker (L), as in: \"When L [he arrives] H , [he will call them] H \".",
"With respect to units which are not Scenes, the category Center denotes the semantic head.",
"For example, \"dogs\" is the Center of the expression \"big brown dogs\", and \"box\" is the center of \"in the box\".",
"There could be more than one Center in a unit, for example in the case of coordination, where all conjuncts are Centers.",
"We define the minimal center of a UCCA unit u to be the UCCA graph's leaf reached by starting from u and iteratively selecting the child tagged as Center.",
"For generating UCCA's structures we use TUPA, a transition-based parser (Hershcovich et al., 2017) (specifically, the TUPA BiLST M model).",
"TUPA uses an expressive set of transitions, able to support all structural properties required by the UCCA scheme.",
"Its transition classifier is based on an MLP that receives a BiLSTM encoding of elements in the parser state (buffer, stack and intermediate graph), given word embeddings and other features.",
"The Semantic Rules For performing DSS, we define two simple splitting rules, conditioned on UCCA's categories.",
"We currently only consider Parallel Scenes and Elaborator Scenes, not separating Participant Scenes, in order to avoid splitting in cases of nominalizations or indirect speech.",
"For example, the sentence \"His arrival surprised everyone\", which has, in addition to the Scene evoked by \"surprised\", a Participant Scene evoked by \"arrival\", is not split here.",
"Rule #1.",
"Parallel Scenes of a given sentence are extracted, separated in different sentences, and concatenated according to the order of appearance.",
"More formally, given a decomposition of a sentence S into parallel Scenes Sc 1 , Sc 2 , · · · Sc n (indexed by the order of the first token), we obtain the following rule, where \"|\" is the sentence delimiter: S −→ Sc1|Sc2| · · · |Scn As UCCA allows argument sharing between Scenes, the rule may duplicate the same sub-span of S across sentences.",
"For example, the rule will convert \"He came back home and played piano\" into \"He came back home\"|\"He played piano.\"",
"Rule #2.",
"Given a sentence S, the second rule extracts Elaborator Scenes and corresponding minimal centers.",
"Elaborator Scenes are then concatenated to the original sentence, where the Elaborator Scenes, except for the minimal center they elaborate, are removed.",
"Pronouns such as \"who\", \"which\" and \"that\" are also removed.",
"Formally, if {(Sc 1 , C 1 ) · · · (Sc n , C n )} are the Elaborator Scenes of S and their corresponding minimal centers, the rewrite is: S −→ S − n i=1 (Sci − Ci)|Sc1| · · · |Scn where S −A is S without the unit A.",
"For example, this rule converts the sentence \"He observed the planet which has 14 known satellites\" to \"He observed the planet| Planet has 14 known satellites.\".",
"Article regeneration is not covered by the rule, as its output is directly fed into the NMT component.",
"After the extraction of Parallel Scenes and Elaborator Scenes, the resulting simplified Parallel Scenes are placed before the Elaborator Scenes.",
"See Figure 1 .",
"Neural Component The split sentences are run through the NTS stateof-the-art neural TS system (Nisioi et al., 2017) , built using the OpenNMT neural machine translation framework (Klein et al., 2017) .",
"The architecture includes two LSTM layers, with hidden states of 500 units in each, as well as global attention combined with input feeding (Luong et al., 2015) .",
"Training is done with a 0.3 dropout probability (Srivastava et al., 2014) .",
"This model uses alignment probabilities between the predictions and the original sentences, rather than characterbased models, to retrieve the original words.",
"We here consider the w2v initialization for NTS (N17), where word2vec embeddings of size 300 are trained on Google News (Mikolov et al., 2013a) and local embeddings of size 200 are trained on the training simplification corpus (Řehůřek and Sojka, 2010; Mikolov et al., 2013b) .",
"Local embeddings for the encoder are trained on the source side of the training corpus, while those for the decoder are trained on the simplified side.",
"For sampling multiple outputs from the system, beam search is performed during decoding by generating the first 5 hypotheses at each step ordered by the log-likelihood of the target sentence given the input sentence.",
"We here explore both the highest (h1) and fourth-ranked (h4) hypotheses, which we show to increase the SARI score and to be much less conservative.",
"We thus experiment with two variants of the neural component, denoted by NTS-h1 and NTS-h4.",
"The pipeline application of the rules and the neural system results in two corresponding models: SENTS-h1 and SENTS-h4.",
"Experimental Setup Corpus All systems are tested on the test corpus of Xu et al.",
"(2016) , 3 comprising 359 sentences from the PWKP corpus (Zhu et al., 2010) Neural component.",
"We use the NTS-w2v model 6 provided by N17, obtained by training on the corpus of Hwang et al.",
"(2015) and tuning on the corpus of Xu et al.",
"(2016) .",
"The training set is based on manual and automatic alignments between standard English Wikipedia and Simple English Wikipedia, including both good matches and partial matches whose similarity score is above the 0.45 scale threshold (Hwang et al., 2015) .",
"The total size of the training set is about 280K aligned sentences, of which 150K sentences are full matches and 130K are partial matches.",
"Comparison systems.",
"We compare our findings to HYBRID, which is the state of the art for joint structural and lexical simplification, imple-mented by Zhang and Lapata (2017) .",
"We use the released output of HYBRID, trained on a corpus extracted from Wikipedia, which includes the aligned sentence pairs from Kauchak (2013) , the aligned revision sentence pairs in Woodsend and Lapata (2011) , and the PWKP corpus, totaling about 296K sentence pairs.",
"The tuning set is the same as for the above systems.",
"In order to isolate the effect of NMT, we also implement SEMoses, where the neural-based component is replaced by the phrase-based MT system Moses, 9 which is also used in HYBRID.",
"The training, tuning and test sets are the same as in the case of SENTS.",
"MGIZA 10 is used for word alignment.",
"The KenLM language model is trained using the target side of the training corpus.",
"Additional baselines.",
"We report human and automatic evaluation scores for Identity (where the output is identical to the input), for Simple Wikipedia where the output is the corresponding aligned sentence in the PWKP corpus, and for the SBMT-SARI system, tuned against SARI (Xu et al., 2016) , which maximized the SARI score on this test set in previous works (Nisioi et al., 2017; Zhang and Lapata, 2017) .",
"Automatic evaluation.",
"The automatic metrics used for the evaluation are: (1) BLEU (Papineni et al., 2002) (2) SARI (System output Against References and against the Input sentence; Xu et al., 2016) , which compares the n-grams of the system output with those of the input and the human references, separately evaluating the quality of words that are added, deleted and kept by the systems.",
"(3) F add : the addition component of the SARI score (F-score); (4) F keep : the keeping component of the SARI score (F-score); (5) P del : the deletion component of the SARI score (precision).",
"Each metric is computed against the 8 available references.",
"We also assess system conservatism, reporting the percentage of sentences copied from the input (%Same), the averaged Levenshtein distance from the source (LD SC , which considers additions, deletions, and substitutions), and the number of source sentences that are split (#Split).",
"Human evaluation.",
"Human evaluation is carried out by 3 in-house native English annotators, who rated the different input-output pairs for the different systems according to 4 parameters: Grammaticality (G), Meaning preservation (M), Simplicity (S) and Structural Simplicity (StS).",
"Each input-output pair is rated by all 3 annotators.",
"Elicitation questions are given in Table 1 .",
"As the selection process of the input-output pairs in the test corpus of Xu et al.",
"(2016) , as well as their crowdsourced references, are explicitly biased towards lexical simplification, the use of human evaluation permits us to evaluate the structural aspects of the system outputs, even where structural operations are not attested in the references.",
"Indeed, we show that system outputs may receive considerably higher structural simplicity scores than the source, in spite of the sample selection bias.",
"Following previous work (e.g., Narayan and Gardent, 2014; Xu et al., 2016; Nisioi et al., 2017) , Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"Note that in the first question, the input sentence is not taken into account.",
"The grammaticality of the input is assessed by evaluating the Identity transformation (see Table 2 ), providing a baseline for the grammaticality scores of the other systems.",
"Following N17, a -2 to +2 scale is used for measuring simplicity, where a 0 score indicates that the input and the output are equally complex.",
"This scale, compared to the standard 1 to 5 scale, permits a better differentiation between cases where simplicity is hurt (the output is more complex than the original) and between cases where the output is as simple as the original, for example in the case of the identity transformation.",
"Structural simplicity is also evaluated with a -2 to +2 scale.",
"The question for eliciting StS is accompanied with a negative example, showing a case of lexical simplification, where a complex word is replaced by a simple one (the other questions appear without examples).",
"A positive example is not included so as not to bias the annotators by revealing the nature of the operations we focus on (splitting and deletion).",
"We follow N17 in applying human evaluation on the first 70 sentences of the test corpus.",
"The resulting corpus, totaling 1960 sentence pairs, each annotated by 3 annotators, also includes the additional experiments described in Section 7 as well as the outputs of the NTS and SENTS systems used with the default initialization.",
"The inter-annotator agreement, using Cohen's quadratic weighted κ (Cohen, 1968) , is computed as the average agreement of the 3 annotator pairs.",
"The obtained rates are 0.56, 0.75, 0.47 and 0.48 for G, M, S and StS respectively.",
"System scores are computed by averaging over the 3 annotators and the 70 sentences.",
"G Is the output fluent and grammatical?",
"M Does the output preserve the meaning of the input?",
"S Is the output simpler than the input?",
"StS Is the output simpler than the input, ignoring the complexity of the words?",
"Table 2 : Human evaluation of the different NMT-based systems.",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"The highest score in each column appears in bold.",
"Structural simplification systems are those that explicitly model structural operations.",
"Results Human evaluation.",
"Results are presented in Table 2 .",
"First, we can see that the two SENTS systems outperform HYBRID in terms of G, M, and S. SENTS-h1 is the best scoring system, under all human measures.",
"In comparison to NTS, SENTS scores markedly higher on the simplicity judgments.",
"Meaning preservation and grammaticality are lower for SENTS, which is likely due to the more conservative nature of NTS.",
"Interestingly, the application of the splitting rules by themselves does not yield a considerably simpler sentence.",
"This likely stems from the rules not necessarily yielding grammatical sentences (NTS often serves as a grammatical error corrector over it), and from the incorporation of deletions, which are also structural operations, and are performed by the neural system.",
"An example of high structural simplicity scores for SENTS resulting from deletions is presented in Table 5 , together with the outputs of the other systems and the corresponding human evaluation scores.",
"NTS here performs lexical simplification, replacing the word \"incursions\" by \"raids\" or \"attacks\"'.",
"On the other hand, the high StS scores obtained by DSS and SEMoses are due to sentence splittings.",
"Automatic evaluation.",
"Results are presented in Table 3 .",
"Identity obtains much higher BLEU scores than any other system, suggesting that BLEU may not be informative in this setting.",
"SARI seems more informative, and assigns the lowest score to Identity and the second highest to the reference.",
"Both SENTS systems outperform HYBRID in terms of SARI and all its 3 sub-components.",
"The h4 setting (hypothesis #4 in the beam) is generally best, both with and without the splitting rules.",
"Comparing SENTS to using NTS alone (without splitting), we see that SENTS obtains higher SARI scores when hypothesis #1 is used and that NTS obtains higher scores when hypothesis #4 is used.",
"This may result from NTS being more conservative than SENTS (and HYBRID), which is rewarded by SARI (conservatism is indicated by the %Same column).",
"Indeed for h1, %Same is reduced from around 66% for NTS, to around 7% for SENTS.",
"Conservatism further decreases when h4 is used (for both NTS and SENTS).",
"Examining SARI's components, we find that SENTS outperforms NTS on F add , and is comparable (or even superior for h1 setting) to NTS on P del .",
"The superior SARI score of NTS over SENTS is thus entirely a result of a superior F keep , which is easier for a conservative system to maximize.",
"Comparing HYBRID with SEMoses, both of which use Moses, we find that SEMoses obtains higher BLEU and SARI scores, as well as G and M human scores, and splits many more sentences.",
"HYBRID scores higher on the human simplicity measures.",
"We note, however, that applying NTS alone is inferior to HYBRID in terms of simplicity, and that both components are required to obtain high simplicity scores (with SENTS).",
"We also compare the sentence splitting component used in our systems (namely DSS) to that used in HYBRID, abstracting away from deletionbased and lexical simplification.",
"We therefore apply DSS to the test set (554 sentences) of the Table 4 : Automatic and human evaluation for the different combinations of Moses and DSS.",
"The automatic metrics as well as the lexical and structural properties reported (%Same: proportion of sentences copied from the input; LDSC: Averaged Levenshtein distance from the source; #Split: number of split sentences) concern the 359 sentences of the test corpus.",
"Human evaluation, with the G, M, S, and StS parameters, is applied to the first 70 sentences of the corpus.",
"The highest score in each column appears in bold.",
"WEB-SPLIT corpus (See Section 2), which focuses on sentence splitting.",
"We compare our results to those reported for a variant of HYBRID used without the deletion module, and trained on WEB-SPLIT .",
"DSS gets a higher BLEU score (46.45 vs. 39.97) and performs more splittings (number of output sentences per input sentence of 1.73 vs. 1.26).",
"Additional Experiments Replacing the parser by manual annotation.",
"In order to isolate the influence of the parser on the results, we implement a semi-automatic version of the semantic component, which uses manual UCCA annotation instead of the parser, focusing of the first 70 sentences of the test corpus.",
"We employ a single expert UCCA annotator and use the UCCAApp annotation tool .",
"Results are presented in Table 6 , for both SENTS and SEMoses.",
"In the case of SEMoses, meaning preservation is improved when manual UCCA annotation is used.",
"On the other hand, simplicity degrades, possibly due to the larger number of Scenes marked by the human annotator (TUPA tends to under-predict Scenes).",
"This effect doesn't show with SENTS, where trends are similar to the automatic parses case, and high simplicity scores are obtained.",
"This demonstrates that UCCA parsing technology is sufficiently mature to be used to carry out structural simplification.",
"We also directly evaluate the performance of the parser by computing F1, Recall and Precision DAG scores (Hershcovich et al., 2017) , against the manual UCCA annotation.",
"We obtain for primary edges (i.e.",
"edges that form a tree structure) scores of 68.9 %, 70.5%, and 67.4% for F1, Recall and Precision respectively.",
"For remotes edges (i.e.",
"additional edges, forming a DAG), the scores are 45.3%, 40.5%, and 51.5%.",
"These results are comparable with the out-of-domain results reported by Hershcovich et al.",
"(2017) .",
"Experiments on Moses.",
"We test other variants of SEMoses, where phrase-based MT is used instead of NMT.",
"Specifically, we incorporate semantic information in a different manner by implementing two additional models: (1) SETrain1-Moses, where a new training corpus is obtained by applying the splitting rules to the target side of the G M S StS Identity In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the incursions of other Viking groups.",
"5.00 5.00 0.00 0.00 Simple Wikipedia In return, Rollo swore fealty to Charles, converted to Christianity, and swore to defend the northern region of France against raids by other Viking groups.",
"4.67 5.00 1.00 0.00 SBMT-SARI In return, Rollo swore fealty to Charles, converted to Christianity, and set out to defend the north of France from the raids of other viking groups.",
"4.67 4.67 0.67 0.00 NTS-h1 In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the raids of other Viking groups.",
"5.00 5.00 1.00 0.00 NTS-h4 In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the attacks of other Viking groups.",
"4.67 5.00 1.00 0.00 DSS Rollo swore fealty to Charles.",
"Rollo converted to Christianity.",
"Rollo undertook to defend the northern region of France against the incursions of other viking groups.",
"4.00 4.33 1.33 1.33 HYBRID In return Rollo swore, and undertook to defend the region of France., Charles, converted 2.33 2.00 0.33 0.33 SEMoses Rollo swore put his seal to Charles.",
"Rollo converted to Christianity.",
"Rollo undertook to defend the northern region of France against the incursions of other viking groups.",
"3.33 4.00 1.33 1.33 SENTS-h1 Rollo swore fealty to Charles.",
"5.00 2.00 2.00 2.00 SENTS-h4 Rollo swore fealty to Charles and converted to Christianity.",
"5.00 2.67 1.33 1.33 Table 5 : System outputs for one of the test sentences with the corresponding human evaluation scores (averaged over the 3 annotators).",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"Table 6 : Human evaluation using manual UCCA annotation.",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"X m refers to the semi-automatic version of the system X. training corpus; (2) SETrain2-Moses, where the rules are applied to the source side.",
"The resulting parallel corpus is concatenated to the original training corpus.",
"We also examine whether training a language model (LM) on split sentences has a positive effect, and train the LM on the split target side.",
"For each system X, the version with the LM trained on split sentences is denoted by X LM .",
"We repeat the same human and automatic evaluation protocol as in §6, presenting results in Table 4 .",
"Simplicity scores are much higher in the case of SENTS (that uses NMT), than with Moses.",
"The two best systems according to SARI are SEMoses and SEMoses LM which use DSS.",
"In fact, they resemble the performance of DSS applied alone (Tables 2 and 3) , which confirms the high degree of conservatism observed by Moses in simplification (Alva-Manchego et al., 2017) .",
"Indeed, all Moses-based systems that don't apply DSS as preprocessing are conservative, obtaining high scores for BLEU, grammaticality and meaning preservation, but low scores for simplicity.",
"Training the LM on split sentences shows little improvement.",
"Conclusion We presented the first simplification system combining semantic structures and neural machine translation, showing that it outperforms existing lexical and structural systems.",
"The proposed approach addresses the over-conservatism of MTbased systems for TS, which often fail to modify the source in any way.",
"The semantic component performs sentence splitting without relying on a specialized corpus, but only an off-theshelf semantic parser.",
"The consideration of sentence splitting as a decomposition of a sentence into its Scenes is further supported by recent work on structural TS evaluation (Sulem et al., 2018) , which proposes the SAMSA metric.",
"The two works, which apply this assumption to different ends (TS system construction, and TS evaluation), confirm its validity.",
"Future work will leverage UCCA's cross-linguistic applicability to support multi-lingual TS and TS pre-processing for MT."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"4",
"5",
"6",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Semantic Representation",
"The Semantic Rules",
"Neural Component",
"Experimental Setup",
"Results",
"Additional Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-16#paper-994#slide-9 | Experiments | Test set of Xu et al., 2016: sentences, each with 8 references
e.g., percentage of sentences copied from the input (%Same)
First 70 sentences of the corpus
3 annotators native English speakers
4 questions for each input-output pair
Is the output fluent and grammatical?
Does the output preserve the meaning of the input?
Is the output simpler than the input, ignoring the complexity of the words?
4 parameters: Grammaticality (G) Meaning Preservation (M) Simplicity (S) Structural Simplicity (StS)
e.g., percentage of sentences copied from the input (%Same)
First 70 sentences of the corpus
3 annotators native English speakers
4 questions for each input-output pair
Is the output fluent and grammatical?
Does the output preserve the meaning of the input?
Qd Is the output simpler than the input, ignoring the complexity of the words?
4 parameters: Grammaticality (G) Meaning Preservation (P) Simplicity (S) Structural Simplicity (StS) | [] |
GEM-SciDuet-train-16#paper-994#slide-10 | 994 | Simple and Effective Text Simplification Using Semantic and Neural Methods | Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used in this situation. Previous application of Machine Translation for simplification suffers from a considerable disadvantage in that they are overconservative, often failing to modify the source in any way. Splitting based on semantic parsing, as proposed here, alleviates this issue. Extensive automatic and human evaluation shows that the proposed method compares favorably to the stateof-the-art in combined lexical and structural simplification. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266
],
"paper_content_text": [
"Introduction Text Simplification (TS) is generally defined as the conversion of a sentence into one or more simpler sentences.",
"It has been shown useful both as a preprocessing step for tasks such as Machine Translation (MT; Mishra et al., 2014; Štajner and Popović, 2016) and relation extraction (Niklaus et al., 2016) , as well as for developing reading aids, e.g.",
"for people with dyslexia (Rello et al., 2013) or non-native speakers (Siddharthan, 2002) .",
"TS includes both structural and lexical operations.",
"The main structural simplification operation is sentence splitting, namely rewriting a single sentence into multiple sentences while preserving its meaning.",
"While recent improvement in TS has been achieved by the use of neural MT (NMT) approaches (Nisioi et al., 2017; Zhang and Lapata, 2017) , where TS is consid-ered a case of monolingual translation, the sentence splitting operation has not been addressed by these systems, potentially due to the rareness of this operation in the training corpora (Narayan and Gardent, 2014; Xu et al., 2015) .",
"We show that the explicit integration of sentence splitting in the simplification system could also reduce conservatism, which is a grave limitation of NMT-based TS systems (Alva-Manchego et al., 2017) .",
"Indeed, experimenting with a stateof-the-art neural system (Nisioi et al., 2017) , we find that 66% of the input sentences remain unchanged, while none of the corresponding references is identical to the source.",
"Human and automatic evaluation of the references (against other references), confirm that the references are indeed simpler than the source, indicating that the observed conservatism is excessive.",
"Our methods for performing sentence splitting as pre-processing allows the TS system to perform other structural (e.g.",
"deletions) and lexical (e.g.",
"word substitutions) operations, thus increasing both structural and lexical simplicity.",
"For combining linguistically informed sentence splitting with data-driven TS, two main methods have been proposed.",
"The first involves handcrafted syntactic rules, whose compilation and validation are laborious (Shardlow, 2014) .",
"For example, Siddharthan and Angrosh (2014) used 111 rules for relative clauses, appositions, subordination and coordination.",
"Moreover, syntactic splitting rules, which form a substantial part of the rules, are usually language specific, requiring the development of new rules when ported to other languages (Aluísio and Gasperin, 2010; Seretan, 2012; Hung et al., 2012; Barlacchi and Tonelli, 2013 , for Portuguese, French, Vietnamese, and Italian respectively).",
"The second method uses linguistic information for detecting potential splitting points, while splitting probabilities are learned us-ing a parallel corpus.",
"For example, in the system of Narayan and Gardent (2014) (henceforth, HYBRID) , the state-of-the-art for joint structural and lexical TS, potential splitting points are determined by event boundaries.",
"In this work, which is the first to combine structural semantics and neural methods for TS, we propose an intermediate way for performing sentence splitting, presenting Direct Semantic Splitting (DSS), a simple and efficient algorithm based on a semantic parser which supports the direct decomposition of the sentence into its main semantic constituents.",
"After splitting, NMT-based simplification is performed, using the NTS system.",
"We show that the resulting system outperforms HY-BRID in both automatic and human evaluation.",
"We use the UCCA scheme for semantic representation (Abend and Rappoport, 2013) , where the semantic units are anchored in the text, which simplifies the splitting operation.",
"We further leverage the explicit distinction in UCCA between types of Scenes (events), applying a specific rule for each of the cases.",
"Nevertheless, the DSS approach can be adapted to other semantic schemes, like AMR (Banarescu et al., 2013) .",
"We collect human judgments for multiple variants of our system, its sub-components, HYBRID and similar systems that use phrase-based MT.",
"This results in a sizable human evaluation benchmark, which includes 28 systems, totaling at 1960 complex-simple sentence pairs, each annotated by three annotators using four criteria.",
"1 This benchmark will support the future analysis of TS systems, and evaluation practices.",
"Previous work is discussed in §2, the semantic and NMT components we use in §3 and §4 respectively.",
"The experimental setup is detailed in §5.",
"Our main results are presented in §6, while §7 presents a more detailed analysis of the system's sub-components and related settings.",
"Related Work MT-based sentence simplification.",
"Phrasebased Machine Translation (PBMT; Koehn et al., 2003) was first used for TS by Specia (2010) , who showed good performance on lexical simplification and simple rewriting, but under-prediction of other operations.",
"Štajner et al.",
"(2015) took a similar approach, finding that it is beneficial to use training data where the source side is highly similar to the target.",
"Other PBMT for TS systems include the work of Coster and Kauchak (2011b) , which uses Moses (Koehn et al., 2007) , the work of Coster and Kauchak (2011a) , where the model is extended to include deletion, and PBMT-R (Wubben et al., 2012) , where Levenshtein distance to the source is used for re-ranking to overcome conservatism.",
"The NTS NMT-based system (Nisioi et al., 2017) (henceforth, N17) reported superior performance over PBMT in terms of BLEU and human evaluation scores, and serves as a component in our system (see Section 4).",
"took a similar approach, adding lexical constraints to an NMT model.",
"Zhang and Lapata (2017) combined NMT with reinforcement learning, using SARI (Xu et al., 2016) , BLEU, and cosine similarity to the source as the reward.",
"None of these models explicitly addresses sentence splitting.",
"Alva-Manchego et al.",
"(2017) proposed to reduce conservatism, observed in PBMT and NMT systems, by first identifying simplification operations in a parallel corpus and then using sequencelabeling to perform the simplification.",
"However, they did not address common structural operations, such as sentence splitting, and claimed that their method is not applicable to them.",
"Xu et al.",
"(2016) used Syntax-based Machine Translation (SBMT) for sentence simplification, using a large scale paraphrase dataset (Ganitketitch et al., 2013) for training.",
"While it does not target structural simplification, we include it in our evaluation for completeness.",
"Structural sentence simplification.",
"Syntactic hand-crafted sentence splitting rules were proposed by Chandrasekar et al.",
"(1996) , Siddharthan (2002) , Siddhathan (2011) in the context of rulebased TS.",
"The rules separate relative clauses and coordinated clauses and un-embed appositives.",
"In our method, the use of semantic distinctions instead of syntactic ones reduces the number of rules.",
"For example, relative clauses and appositives can correspond to the same semantic category.",
"In syntax-based splitting, a generation module is sometimes added after the split (Siddharthan, 2004) , addressing issues such as reordering and determiner selection.",
"In our model, no explicit regeneration is applied to the split sentences, which are fed directly to an NMT system.",
"Glavaš andŠtajner (2013) used a rule-based system conditioned on event extraction and syntax for defining two simplification models.",
"The eventwise simplification one, which separates events to separate output sentences, is similar to our semantic component.",
"Differences are in that we use a single semantic representation for defining the rules (rather than a combination of semantic and syntactic criteria), and avoid the need for complex rules for retaining grammaticality by using a subsequent neural component.",
"Combined structural and lexical TS.",
"Earlier TS models used syntactic information for splitting.",
"Zhu et al.",
"(2010) used syntactic information on the source side, based on the SBMT model of Yamada and Knight (2001) .",
"Syntactic structures were used on both sides in the model of Woodsend and Lapata (2011) , based on a quasi-synchronous grammar (Smith and Eisner, 2006) , which resulted in 438 learned splitting rules.",
"The model of Siddharthan and Angrosh (2014) is similar to ours in that it combines linguistic rules for structural simplification and statistical methods for lexical simplification.",
"However, we use 2 semantic splitting rules instead of their 26 syntactic rules for relative clauses and appositions, and 85 syntactic rules for subordination and coordination.",
"Narayan and Gardent (2014) argued that syntactic structures do not always capture the semantic arguments of a frame, which may result in wrong splitting boundaries.",
"Consequently, they proposed a supervised system (HYBRID) that uses semantic structures (Discourse Semantic Representations, (Kamp, 1981) ) for sentence splitting and deletion.",
"Splitting candidates are pairs of event variables associated with at least one core thematic role (e.g., agent or patient).",
"Semantic annotation is used on the source side in both training and test.",
"Lexical simplification is performed using the Moses system.",
"HYBRID is the most similar system to ours architecturally, in that it uses a combination of a semantic structural component and an MT component.",
"Narayan and Gardent (2016) proposed instead an unsupervised pipeline, where sentences are split based on a probabilistic model trained on the semantic structures of Simple Wikipedia as well as a language model trained on the same corpus.",
"Lexical simplification is there performed using the unsupervised model of Biran et al.",
"(2011) .",
"As their BLEU and adequacy scores are lower than HYBRID's, we use the latter for comparison.",
"Stajner and Glavaš (2017) combined rule-based simplification conditioned on event extraction, to-gether with an unsupervised lexical simplifier.",
"They tackle a different setting, and aim to simplify texts (rather than sentences), by allowing the deletion of entire input sentences.",
"Split and Rephrase.",
"recently proposed the Split and Rephrase task, focusing on sentence splitting.",
"For this purpose they presented a specialized parallel corpus, derived from the WebNLG dataset .",
"The latter is obtained from the DBPedia knowledge base (Mendes et al., 2012) using content selection and crowdsourcing, and is annotated with semantic triplets of subject-relation-object, obtained semi-automatically.",
"They experimented with five systems, including one similar to HY-BRID, as well as sequence-to-sequence methods for generating sentences from the source text and its semantic forms.",
"The present paper tackles both structural and lexical simplification, and examines the effect of sentence splitting on the subsequent application of a neural system, in terms of its tendency to perform other simplification operations.",
"For this purpose, we adopt a semantic corpus-independent approach for sentence splitting that can be easily integrated in any simplification system.",
"Another difference is that the semantic forms in Split and Rephrase are derived semi-automatically (during corpus compilation), while we automatically extract the semantic form, using a UCCA parser.",
"Direct Semantic Splitting Semantic Representation UCCA (Universal Cognitive Conceptual Annotation; Abend and Rappoport, 2013) is a semantic annotation scheme rooted in typological and cognitive linguistic theory (Dixon, 2010b (Dixon, ,a, 2012 Langacker, 2008) .",
"It aims to represent the main semantic phenomena in the text, abstracting away from syntactic forms.",
"UCCA has been shown to be preserved remarkably well across translations (Sulem et al., 2015) and has also been successfully used for the evaluation of machine translation (Birch et al., 2016) and, recently, for the evaluation of TS (Sulem et al., 2018) and grammatical error correction (Choshen and Abend, 2018) .",
"Formally, UCCA structures are directed acyclic graphs whose nodes (or units) correspond either to the leaves of the graph or to several elements viewed as a single entity according to some semantic or cognitive consideration.",
"A Scene is UCCA's notion of an event or a frame, and is a unit that corresponds to a movement, an action or a state which persists in time.",
"Every Scene contains one main relation, which can be either a Process or a State.",
"Scenes contain one or more Participants, interpreted in a broad sense to include locations and destinations.",
"For example, the sentence \"He went to school\" has a single Scene whose Process is \"went\".",
"The two Participants are \"He\" and \"to school\".",
"Scenes can have several roles in the text.",
"First, they can provide additional information about an established entity (Elaborator Scenes), commonly participles or relative clauses.",
"For example, \"(child) who went to school\" is an Elaborator Scene in \"The child who went to school is John\" (\"child\" serves both as an argument in the Elaborator Scene and as the Center).",
"A Scene may also be a Participant in another Scene.",
"For example, \"John went to school\" in the sentence: \"He said John went to school\".",
"In other cases, Scenes are annotated as Parallel Scenes (H), which are flat structures and may include a Linker (L), as in: \"When L [he arrives] H , [he will call them] H \".",
"With respect to units which are not Scenes, the category Center denotes the semantic head.",
"For example, \"dogs\" is the Center of the expression \"big brown dogs\", and \"box\" is the center of \"in the box\".",
"There could be more than one Center in a unit, for example in the case of coordination, where all conjuncts are Centers.",
"We define the minimal center of a UCCA unit u to be the UCCA graph's leaf reached by starting from u and iteratively selecting the child tagged as Center.",
"For generating UCCA's structures we use TUPA, a transition-based parser (Hershcovich et al., 2017) (specifically, the TUPA BiLST M model).",
"TUPA uses an expressive set of transitions, able to support all structural properties required by the UCCA scheme.",
"Its transition classifier is based on an MLP that receives a BiLSTM encoding of elements in the parser state (buffer, stack and intermediate graph), given word embeddings and other features.",
"The Semantic Rules For performing DSS, we define two simple splitting rules, conditioned on UCCA's categories.",
"We currently only consider Parallel Scenes and Elaborator Scenes, not separating Participant Scenes, in order to avoid splitting in cases of nominalizations or indirect speech.",
"For example, the sentence \"His arrival surprised everyone\", which has, in addition to the Scene evoked by \"surprised\", a Participant Scene evoked by \"arrival\", is not split here.",
"Rule #1.",
"Parallel Scenes of a given sentence are extracted, separated in different sentences, and concatenated according to the order of appearance.",
"More formally, given a decomposition of a sentence S into parallel Scenes Sc 1 , Sc 2 , · · · Sc n (indexed by the order of the first token), we obtain the following rule, where \"|\" is the sentence delimiter: S −→ Sc1|Sc2| · · · |Scn As UCCA allows argument sharing between Scenes, the rule may duplicate the same sub-span of S across sentences.",
"For example, the rule will convert \"He came back home and played piano\" into \"He came back home\"|\"He played piano.\"",
"Rule #2.",
"Given a sentence S, the second rule extracts Elaborator Scenes and corresponding minimal centers.",
"Elaborator Scenes are then concatenated to the original sentence, where the Elaborator Scenes, except for the minimal center they elaborate, are removed.",
"Pronouns such as \"who\", \"which\" and \"that\" are also removed.",
"Formally, if {(Sc 1 , C 1 ) · · · (Sc n , C n )} are the Elaborator Scenes of S and their corresponding minimal centers, the rewrite is: S −→ S − n i=1 (Sci − Ci)|Sc1| · · · |Scn where S −A is S without the unit A.",
"For example, this rule converts the sentence \"He observed the planet which has 14 known satellites\" to \"He observed the planet| Planet has 14 known satellites.\".",
"Article regeneration is not covered by the rule, as its output is directly fed into the NMT component.",
"After the extraction of Parallel Scenes and Elaborator Scenes, the resulting simplified Parallel Scenes are placed before the Elaborator Scenes.",
"See Figure 1 .",
"Neural Component The split sentences are run through the NTS stateof-the-art neural TS system (Nisioi et al., 2017) , built using the OpenNMT neural machine translation framework (Klein et al., 2017) .",
"The architecture includes two LSTM layers, with hidden states of 500 units in each, as well as global attention combined with input feeding (Luong et al., 2015) .",
"Training is done with a 0.3 dropout probability (Srivastava et al., 2014) .",
"This model uses alignment probabilities between the predictions and the original sentences, rather than characterbased models, to retrieve the original words.",
"We here consider the w2v initialization for NTS (N17), where word2vec embeddings of size 300 are trained on Google News (Mikolov et al., 2013a) and local embeddings of size 200 are trained on the training simplification corpus (Řehůřek and Sojka, 2010; Mikolov et al., 2013b) .",
"Local embeddings for the encoder are trained on the source side of the training corpus, while those for the decoder are trained on the simplified side.",
"For sampling multiple outputs from the system, beam search is performed during decoding by generating the first 5 hypotheses at each step ordered by the log-likelihood of the target sentence given the input sentence.",
"We here explore both the highest (h1) and fourth-ranked (h4) hypotheses, which we show to increase the SARI score and to be much less conservative.",
"2 We thus experiment with two variants of the neural component, denoted by NTS-h1 and NTS-h4.",
"The pipeline application of the rules and the neural system results in two corresponding models: SENTS-h1 and SENTS-h4.",
"Experimental Setup Corpus All systems are tested on the test corpus of Xu et al.",
"(2016) , 3 comprising 359 sentences from the PWKP corpus (Zhu et al., 2010) Neural component.",
"We use the NTS-w2v model 6 provided by N17, obtained by training on the corpus of Hwang et al.",
"(2015) and tuning on the corpus of Xu et al.",
"(2016) .",
"The training set is based on manual and automatic alignments between standard English Wikipedia and Simple English Wikipedia, including both good matches and partial matches whose similarity score is above the 0.45 scale threshold (Hwang et al., 2015) .",
"The total size of the training set is about 280K aligned sentences, of which 150K sentences are full matches and 130K are partial matches.",
"7 Comparison systems.",
"We compare our findings to HYBRID, which is the state of the art for joint structural and lexical simplification, imple-mented by Zhang and Lapata (2017) .",
"8 We use the released output of HYBRID, trained on a corpus extracted from Wikipedia, which includes the aligned sentence pairs from Kauchak (2013) , the aligned revision sentence pairs in Woodsend and Lapata (2011) , and the PWKP corpus, totaling about 296K sentence pairs.",
"The tuning set is the same as for the above systems.",
"In order to isolate the effect of NMT, we also implement SEMoses, where the neural-based component is replaced by the phrase-based MT system Moses, 9 which is also used in HYBRID.",
"The training, tuning and test sets are the same as in the case of SENTS.",
"MGIZA 10 is used for word alignment.",
"The KenLM language model is trained using the target side of the training corpus.",
"Additional baselines.",
"We report human and automatic evaluation scores for Identity (where the output is identical to the input), for Simple Wikipedia where the output is the corresponding aligned sentence in the PWKP corpus, and for the SBMT-SARI system, tuned against SARI (Xu et al., 2016) , which maximized the SARI score on this test set in previous works (Nisioi et al., 2017; Zhang and Lapata, 2017) .",
"Automatic evaluation.",
"The automatic metrics used for the evaluation are: (1) BLEU (Papineni et al., 2002) (2) SARI (System output Against References and against the Input sentence; Xu et al., 2016) , which compares the n-grams of the system output with those of the input and the human references, separately evaluating the quality of words that are added, deleted and kept by the systems.",
"(3) F add : the addition component of the SARI score (F-score); (4) F keep : the keeping component of the SARI score (F-score); (5) P del : the deletion component of the SARI score (precision).",
"11 Each metric is computed against the 8 available references.",
"We also assess system conservatism, reporting the percentage of sentences copied from the input (%Same), the averaged Levenshtein distance from the source (LD SC , which considers additions, deletions, and substitutions), and the number of source sentences that are split (#Split).",
"12 Human evaluation.",
"Human evaluation is carried out by 3 in-house native English annotators, who rated the different input-output pairs for the different systems according to 4 parameters: Grammaticality (G), Meaning preservation (M), Simplicity (S) and Structural Simplicity (StS).",
"Each input-output pair is rated by all 3 annotators.",
"Elicitation questions are given in Table 1 .",
"As the selection process of the input-output pairs in the test corpus of Xu et al.",
"(2016) , as well as their crowdsourced references, are explicitly biased towards lexical simplification, the use of human evaluation permits us to evaluate the structural aspects of the system outputs, even where structural operations are not attested in the references.",
"Indeed, we show that system outputs may receive considerably higher structural simplicity scores than the source, in spite of the sample selection bias.",
"Following previous work (e.g., Narayan and Gardent, 2014; Xu et al., 2016; Nisioi et al., 2017) , Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"Note that in the first question, the input sentence is not taken into account.",
"The grammaticality of the input is assessed by evaluating the Identity transformation (see Table 2 ), providing a baseline for the grammaticality scores of the other systems.",
"Following N17, a -2 to +2 scale is used for measuring simplicity, where a 0 score indicates that the input and the output are equally complex.",
"This scale, compared to the standard 1 to 5 scale, permits a better differentiation between cases where simplicity is hurt (the output is more complex than the original) and between cases where the output is as simple as the original, for example in the case of the identity transformation.",
"Structural simplicity is also evaluated with a -2 to +2 scale.",
"The question for eliciting StS is accompanied with a negative example, showing a case of lexical simplification, where a complex word is replaced by a simple one (the other questions appear without examples).",
"A positive example is not included so as not to bias the annotators by revealing the nature of the operations we focus on (splitting and deletion).",
"We follow N17 in applying human evaluation on the first 70 sentences of the test corpus.",
"13 The resulting corpus, totaling 1960 sentence pairs, each annotated by 3 annotators, also include the additional experiments described in Section 7 as well as the outputs of the NTS and SENTS systems used with the default initialization.",
"The inter-annotator agreement, using Cohen's quadratic weighted κ (Cohen, 1968) , is computed as the average agreement of the 3 annotator pairs.",
"The obtained rates are 0.56, 0.75, 0.47 and 0.48 for G, M, S and StS respectively.",
"System scores are computed by averaging over the 3 annotators and the 70 sentences.",
"G Is the output fluent and grammatical?",
"M Does the output preserve the meaning of the input?",
"S Is the output simpler than the input?",
"StS Is the output simpler than the input, ignoring the complexity of the words?",
"Table 2 : Human evaluation of the different NMT-based systems.",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"The highest score in each column appears in bold.",
"Structural simplification systems are those that explicitly model structural operations.",
"Results Human evaluation.",
"Results are presented in Table 2 .",
"First, we can see that the two SENTS systems outperform HYBRID in terms of G, M, and S. SENTS-h1 is the best scoring system, under all human measures.",
"In comparison to NTS, SENTS scores markedly higher on the simplicity judgments.",
"Meaning preservation and grammaticality are lower for SENTS, which is likely due to the more conservative nature of NTS.",
"Interestingly, the application of the splitting rules by themselves does not yield a considerably simpler sentence.",
"This likely stems from the rules not necessarily yielding grammatical sentences (NTS often serves as a grammatical error corrector over it), and from the incorporation of deletions, which are also structural operations, and are performed by the neural system.",
"An example of high structural simplicity scores for SENTS resulting from deletions is presented in Table 5 , together with the outputs of the other systems and the corresponding human evaluation scores.",
"NTS here performs lexical simplification, replacing the word \"incursions\" by \"raids\" or \"attacks\"'.",
"On the other hand, the high StS scores obtained by DSS and SEMoses are due to sentence splittings.",
"Automatic evaluation.",
"Results are presented in Table 3 .",
"Identity obtains much higher BLEU scores than any other system, suggesting that BLEU may not be informative in this setting.",
"SARI seems more informative, and assigns the lowest score to Identity and the second highest to the reference.",
"Both SENTS systems outperform HYBRID in terms of SARI and all its 3 sub-components.",
"The h4 setting (hypothesis #4 in the beam) is generally best, both with and without the splitting rules.",
"Comparing SENTS to using NTS alone (without splitting), we see that SENTS obtains higher SARI scores when hypothesis #1 is used and that NTS obtains higher scores when hypothesis #4 is used.",
"This may result from NTS being more conservative than SENTS (and HYBRID), which is rewarded by SARI (conservatism is indicated by the %Same column).",
"Indeed for h1, %Same is reduced from around 66% for NTS, to around 7% for SENTS.",
"Conservatism further decreases when h4 is used (for both NTS and SENTS).",
"Examining SARI's components, we find that SENTS outperforms NTS on F add , and is comparable (or even superior for h1 setting) to NTS on P del .",
"The superior SARI score of NTS over SENTS is thus entirely a result of a superior F keep , which is easier for a conservative system to maximize.",
"Comparing HYBRID with SEMoses, both of which use Moses, we find that SEMoses obtains higher BLEU and SARI scores, as well as G and M human scores, and splits many more sentences.",
"HYBRID scores higher on the human simplicity measures.",
"We note, however, that applying NTS alone is inferior to HYBRID in terms of simplicity, and that both components are required to obtain high simplicity scores (with SENTS).",
"We also compare the sentence splitting component used in our systems (namely DSS) to that used in HYBRID, abstracting away from deletionbased and lexical simplification.",
"We therefore apply DSS to the test set (554 sentences) of the Table 4 : Automatic and human evaluation for the different combinations of Moses and DSS.",
"The automatic metrics as well as the lexical and structural properties reported (%Same: proportion of sentences copied from the input; LDSC: Averaged Levenshtein distance from the source; #Split: number of split sentences) concern the 359 sentences of the test corpus.",
"Human evaluation, with the G, M, S, and StS parameters, is applied to the first 70 sentences of the corpus.",
"The highest score in each column appears in bold.",
"WEB-SPLIT corpus (See Section 2), which focuses on sentence splitting.",
"We compare our results to those reported for a variant of HYBRID used without the deletion module, and trained on WEB-SPLIT .",
"DSS gets a higher BLEU score (46.45 vs. 39.97) and performs more splittings (number of output sentences per input sentence of 1.73 vs. 1.26).",
"Additional Experiments Replacing the parser by manual annotation.",
"In order to isolate the influence of the parser on the results, we implement a semi-automatic version of the semantic component, which uses manual UCCA annotation instead of the parser, focusing of the first 70 sentences of the test corpus.",
"We employ a single expert UCCA annotator and use the UCCAApp annotation tool .",
"Results are presented in Table 6 , for both SENTS and SEMoses.",
"In the case of SEMoses, meaning preservation is improved when manual UCCA annotation is used.",
"On the other hand, simplicity degrades, possibly due to the larger number of Scenes marked by the human annotator (TUPA tends to under-predict Scenes).",
"This effect doesn't show with SENTS, where trends are similar to the automatic parses case, and high simplicity scores are obtained.",
"This demonstrates that UCCA parsing technology is sufficiently mature to be used to carry out structural simplification.",
"We also directly evaluate the performance of the parser by computing F1, Recall and Precision DAG scores (Hershcovich et al., 2017) , against the manual UCCA annotation.",
"14 We obtain for primary edges (i.e.",
"edges that form a tree structure) scores of 68.9 %, 70.5%, and 67.4% for F1, Recall and Precision respectively.",
"For remotes edges (i.e.",
"additional edges, forming a DAG), the scores are 45.3%, 40.5%, and 51.5%.",
"These results are comparable with the out-of-domain results reported by Hershcovich et al.",
"(2017) .",
"Experiments on Moses.",
"We test other variants of SEMoses, where phrase-based MT is used instead of NMT.",
"Specifically, we incorporate semantic information in a different manner by implementing two additional models: (1) SETrain1-Moses, where a new training corpus is obtained by applying the splitting rules to the target side of the G M S StS Identity In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the incursions of other Viking groups.",
"5.00 5.00 0.00 0.00 Simple Wikipedia In return, Rollo swore fealty to Charles, converted to Christianity, and swore to defend the northern region of France against raids by other Viking groups.",
"4.67 5.00 1.00 0.00 SBMT-SARI In return, Rollo swore fealty to Charles, converted to Christianity, and set out to defend the north of France from the raids of other viking groups.",
"4.67 4.67 0.67 0.00 NTS-h1 In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the raids of other Viking groups.",
"5.00 5.00 1.00 0.00 NTS-h4 In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the attacks of other Viking groups.",
"4.67 5.00 1.00 0.00 DSS Rollo swore fealty to Charles.",
"Rollo converted to Christianity.",
"Rollo undertook to defend the northern region of France against the incursions of other viking groups.",
"4.00 4.33 1.33 1.33 HYBRID In return Rollo swore, and undertook to defend the region of France., Charles, converted 2.33 2.00 0.33 0.33 SEMoses Rollo swore put his seal to Charles.",
"Rollo converted to Christianity.",
"Rollo undertook to defend the northern region of France against the incursions of other viking groups.",
"3.33 4.00 1.33 1.33 SENTS-h1 Rollo swore fealty to Charles.",
"5.00 2.00 2.00 2.00 SENTS-h4 Rollo swore fealty to Charles and converted to Christianity.",
"5.00 2.67 1.33 1.33 Table 5 : System outputs for one of the test sentences with the corresponding human evaluation scores (averaged over the 3 annotators).",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"Table 6 : Human evaluation using manual UCCA annotation.",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"X m refers to the semi-automatic version of the system X. training corpus; (2) SETrain2-Moses, where the rules are applied to the source side.",
"The resulting parallel corpus is concatenated to the original training corpus.",
"We also examine whether training a language model (LM) on split sentences has a positive effect, and train the LM on the split target side.",
"For each system X, the version with the LM trained on split sentences is denoted by X LM .",
"We repeat the same human and automatic evaluation protocol as in §6, presenting results in Table 4 .",
"Simplicity scores are much higher in the case of SENTS (that uses NMT), than with Moses.",
"The two best systems according to SARI are SEMoses and SEMoses LM which use DSS.",
"In fact, they resemble the performance of DSS applied alone (Tables 2 and 3) , which confirms the high degree of conservatism observed by Moses in simplification (Alva-Manchego et al., 2017) .",
"Indeed, all Moses-based systems that don't apply DSS as preprocessing are conservative, obtaining high scores for BLEU, grammaticality and meaning preservation, but low scores for simplicity.",
"Training the LM on split sentences shows little improvement.",
"Conclusion We presented the first simplification system combining semantic structures and neural machine translation, showing that it outperforms existing lexical and structural systems.",
"The proposed approach addresses the over-conservatism of MTbased systems for TS, which often fail to modify the source in any way.",
"The semantic component performs sentence splitting without relying on a specialized corpus, but only an off-theshelf semantic parser.",
"The consideration of sentence splitting as a decomposition of a sentence into its Scenes is further supported by recent work on structural TS evaluation (Sulem et al., 2018) , which proposes the SAMSA metric.",
"The two works, which apply this assumption to different ends (TS system construction, and TS evaluation), confirm its validity.",
"Future work will leverage UCCA's cross-linguistic applicability to support multi-lingual TS and TS pre-processing for MT."
]
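A note on the agreement figures reported in the list above: the following is a hedged Python sketch (not from the paper's codebase; the function names and the NumPy-based layout are our own assumptions) of Cohen's quadratic weighted kappa averaged over annotator pairs, as described at the top of this record.

```python
# Sketch: Cohen's quadratic weighted kappa, averaged over annotator pairs.
# Illustrative only; ratings are assumed to be integers on a shared scale.
from itertools import combinations
import numpy as np

def quadratic_weighted_kappa(a, b, min_r, max_r):
    """Kappa with quadratic disagreement weights between two raters."""
    n = max_r - min_r + 1
    observed = np.zeros((n, n))
    for x, y in zip(a, b):                      # joint rating histogram
        observed[x - min_r, y - min_r] += 1
    observed /= observed.sum()
    # Expected matrix under independence of the two raters' marginals.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    idx = np.arange(n)
    weights = (idx[:, None] - idx[None, :]) ** 2  # quadratic weights
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

def average_pairwise_kappa(ratings_by_annotator, min_r, max_r):
    """Average kappa over all annotator pairs (3 pairs for 3 annotators)."""
    pairs = list(combinations(ratings_by_annotator, 2))
    return sum(quadratic_weighted_kappa(a, b, min_r, max_r)
               for a, b in pairs) / len(pairs)
```

For the 1-to-5 G and M scales this would be called with min_r=1, max_r=5; for the -2-to-+2 S and StS scales, with min_r=-2, max_r=2.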
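The parser evaluation in the list above reports DAG F1, Recall and Precision for primary and remote edges. Below is a minimal set-based sketch, assuming each edge can be represented as a hashable (parent span, child span, label) triple; this representation is an illustrative simplification of the actual scoring of Hershcovich et al. (2017).

```python
# Sketch: precision/recall/F1 of predicted edges against gold edges.
def edge_prf(predicted, gold):
    predicted, gold = set(predicted), set(gold)
    matched = len(predicted & gold)
    precision = matched / len(predicted) if predicted else 0.0
    recall = matched / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Primary and remote edges would be scored by calling this separately on the two edge subsets.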
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"4",
"5",
"6",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Semantic Representation",
"The Semantic Rules",
"Neural Component",
"Experimental Setup",
"Results",
"Additional Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-16#paper-994#slide-10 | Results | BLEU SARI G M S StS
Automatic evaluation: BLEU, SARI
Human evaluation (first 70 sentences):
G Grammaticality: 1 to 5 scale S Simplicity: -2 to +2 scale
M Meaning Preservation: 1 to 5 scale StS Structural Simplicity: -2 to +2 scale
Identity gets the highest BLEU score and the lowest SARI score.
The two SENTS systems outperform HYBRID in terms of BLEU, SARI, G, M and S.
SENTS-h1 has the best StS score.
%Same SARI G M S StS
Automatic evaluation: %Same, SARI
Compared to NTS, SENTS reduces conservatism and increases simplicity.
Compared to DSS, SENTS improves grammaticality and increases structural simplicity, since deletions are performed by the NTS component.
Replacing NTS by Statistical MT
Combination of DSS and Moses: SEMoses
The behavior of SEMoses is similar to that of DSS, confirming the over-conservatism of Moses (Alva-Manchego et al., 2017) for simplification.
All the splitting points from the DSS phase are preserved.
Replacing the parser by manual annotation
In the case of SEMoses, meaning preservation is improved. Simplicity degrades, possibly due to a larger number of annotated Scenes.
In the case of SENTS-h1, high simplicity scores are obtained. | BLEU SARI G M S StS
Automatic evaluation: BLEU, SARI
Human evaluation (first 70 sentences):
G Grammaticality: 1 to 5 scale S Simplicity: -2 to +2 scale
M Meaning Preservation: 1 to 5 scale StS Structural Simplicity: -2 to +2 scale
Identity gets the highest BLEU score and the lowest SARI score.
The two SENTS systems outperform HYBRID in terms of BLEU, SARI, G, M and S.
SENTS-h1 has the best StS score.
%Same SARI G M S StS
Automatic evaluation: %Same, SARI
Compared to NTS, SENTS reduces conservatism and increases simplicity.
Compared to DSS, SENTS improves grammaticality and increases structural simplicity, since deletions are performed by the NTS component.
Replacing NTS by Statistical MT
Combination of DSS and Moses: SEMoses
The behavior of SEMoses is similar to that of DSS, confirming the over-conservatism of Moses (Alva-Manchego et al., 2017) for simplification.
All the splitting points from the DSS phase are preserved.
Replacing the parser by manual annotation
In the case of SEMoses, meaning preservation is improved. Simplicity degrades, possibly due to a larger number of annotated Scenes.
In the case of SENTS-h1, high simplicity scores are obtained. | [] |
GEM-SciDuet-train-16#paper-994#slide-12 | 994 | Simple and Effective Text Simplification Using Semantic and Neural Methods | Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used in this situation. Previous application of Machine Translation for simplification suffers from a considerable disadvantage in that they are overconservative, often failing to modify the source in any way. Splitting based on semantic parsing, as proposed here, alleviates this issue. Extensive automatic and human evaluation shows that the proposed method compares favorably to the stateof-the-art in combined lexical and structural simplification. | {
"paper_content_id": [
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39,
40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59,
60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79,
80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99,
100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119,
120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139,
140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159,
160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179,
180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199,
200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219,
220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239,
240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259,
260, 261, 262, 263, 264, 265, 266
],
"paper_content_text": [
"Introduction Text Simplification (TS) is generally defined as the conversion of a sentence into one or more simpler sentences.",
"It has been shown useful both as a preprocessing step for tasks such as Machine Translation (MT; Mishra et al., 2014; Štajner and Popović, 2016) and relation extraction (Niklaus et al., 2016) , as well as for developing reading aids, e.g.",
"for people with dyslexia (Rello et al., 2013) or non-native speakers (Siddharthan, 2002) .",
"TS includes both structural and lexical operations.",
"The main structural simplification operation is sentence splitting, namely rewriting a single sentence into multiple sentences while preserving its meaning.",
"While recent improvement in TS has been achieved by the use of neural MT (NMT) approaches (Nisioi et al., 2017; Zhang and Lapata, 2017) , where TS is consid-ered a case of monolingual translation, the sentence splitting operation has not been addressed by these systems, potentially due to the rareness of this operation in the training corpora (Narayan and Gardent, 2014; Xu et al., 2015) .",
"We show that the explicit integration of sentence splitting in the simplification system could also reduce conservatism, which is a grave limitation of NMT-based TS systems (Alva-Manchego et al., 2017) .",
"Indeed, experimenting with a stateof-the-art neural system (Nisioi et al., 2017) , we find that 66% of the input sentences remain unchanged, while none of the corresponding references is identical to the source.",
"Human and automatic evaluation of the references (against other references), confirm that the references are indeed simpler than the source, indicating that the observed conservatism is excessive.",
"Our methods for performing sentence splitting as pre-processing allows the TS system to perform other structural (e.g.",
"deletions) and lexical (e.g.",
"word substitutions) operations, thus increasing both structural and lexical simplicity.",
"For combining linguistically informed sentence splitting with data-driven TS, two main methods have been proposed.",
"The first involves handcrafted syntactic rules, whose compilation and validation are laborious (Shardlow, 2014) .",
"For example, Siddharthan and Angrosh (2014) used 111 rules for relative clauses, appositions, subordination and coordination.",
"Moreover, syntactic splitting rules, which form a substantial part of the rules, are usually language specific, requiring the development of new rules when ported to other languages (Aluísio and Gasperin, 2010; Seretan, 2012; Hung et al., 2012; Barlacchi and Tonelli, 2013 , for Portuguese, French, Vietnamese, and Italian respectively).",
"The second method uses linguistic information for detecting potential splitting points, while splitting probabilities are learned us-ing a parallel corpus.",
"For example, in the system of Narayan and Gardent (2014) (henceforth, HYBRID) , the state-of-the-art for joint structural and lexical TS, potential splitting points are determined by event boundaries.",
"In this work, which is the first to combine structural semantics and neural methods for TS, we propose an intermediate way for performing sentence splitting, presenting Direct Semantic Splitting (DSS), a simple and efficient algorithm based on a semantic parser which supports the direct decomposition of the sentence into its main semantic constituents.",
"After splitting, NMT-based simplification is performed, using the NTS system.",
"We show that the resulting system outperforms HY-BRID in both automatic and human evaluation.",
"We use the UCCA scheme for semantic representation (Abend and Rappoport, 2013) , where the semantic units are anchored in the text, which simplifies the splitting operation.",
"We further leverage the explicit distinction in UCCA between types of Scenes (events), applying a specific rule for each of the cases.",
"Nevertheless, the DSS approach can be adapted to other semantic schemes, like AMR (Banarescu et al., 2013) .",
"We collect human judgments for multiple variants of our system, its sub-components, HYBRID and similar systems that use phrase-based MT.",
"This results in a sizable human evaluation benchmark, which includes 28 systems, totaling at 1960 complex-simple sentence pairs, each annotated by three annotators using four criteria.",
"1 This benchmark will support the future analysis of TS systems, and evaluation practices.",
"Previous work is discussed in §2, the semantic and NMT components we use in §3 and §4 respectively.",
"The experimental setup is detailed in §5.",
"Our main results are presented in §6, while §7 presents a more detailed analysis of the system's sub-components and related settings.",
"Related Work MT-based sentence simplification.",
"Phrasebased Machine Translation (PBMT; Koehn et al., 2003) was first used for TS by Specia (2010) , who showed good performance on lexical simplification and simple rewriting, but under-prediction of other operations.",
"Štajner et al.",
"(2015) took a similar approach, finding that it is beneficial to use training data where the source side is highly similar to the target.",
"Other PBMT for TS systems include the work of Coster and Kauchak (2011b) , which uses Moses (Koehn et al., 2007) , the work of Coster and Kauchak (2011a) , where the model is extended to include deletion, and PBMT-R (Wubben et al., 2012) , where Levenshtein distance to the source is used for re-ranking to overcome conservatism.",
"The NTS NMT-based system (Nisioi et al., 2017) (henceforth, N17) reported superior performance over PBMT in terms of BLEU and human evaluation scores, and serves as a component in our system (see Section 4).",
"took a similar approach, adding lexical constraints to an NMT model.",
"Zhang and Lapata (2017) combined NMT with reinforcement learning, using SARI (Xu et al., 2016) , BLEU, and cosine similarity to the source as the reward.",
"None of these models explicitly addresses sentence splitting.",
"Alva-Manchego et al.",
"(2017) proposed to reduce conservatism, observed in PBMT and NMT systems, by first identifying simplification operations in a parallel corpus and then using sequencelabeling to perform the simplification.",
"However, they did not address common structural operations, such as sentence splitting, and claimed that their method is not applicable to them.",
"Xu et al.",
"(2016) used Syntax-based Machine Translation (SBMT) for sentence simplification, using a large scale paraphrase dataset (Ganitketitch et al., 2013) for training.",
"While it does not target structural simplification, we include it in our evaluation for completeness.",
"Structural sentence simplification.",
"Syntactic hand-crafted sentence splitting rules were proposed by Chandrasekar et al.",
"(1996) , Siddharthan (2002) , Siddhathan (2011) in the context of rulebased TS.",
"The rules separate relative clauses and coordinated clauses and un-embed appositives.",
"In our method, the use of semantic distinctions instead of syntactic ones reduces the number of rules.",
"For example, relative clauses and appositives can correspond to the same semantic category.",
"In syntax-based splitting, a generation module is sometimes added after the split (Siddharthan, 2004) , addressing issues such as reordering and determiner selection.",
"In our model, no explicit regeneration is applied to the split sentences, which are fed directly to an NMT system.",
"Glavaš andŠtajner (2013) used a rule-based system conditioned on event extraction and syntax for defining two simplification models.",
"The eventwise simplification one, which separates events to separate output sentences, is similar to our semantic component.",
"Differences are in that we use a single semantic representation for defining the rules (rather than a combination of semantic and syntactic criteria), and avoid the need for complex rules for retaining grammaticality by using a subsequent neural component.",
"Combined structural and lexical TS.",
"Earlier TS models used syntactic information for splitting.",
"Zhu et al.",
"(2010) used syntactic information on the source side, based on the SBMT model of Yamada and Knight (2001) .",
"Syntactic structures were used on both sides in the model of Woodsend and Lapata (2011) , based on a quasi-synchronous grammar (Smith and Eisner, 2006) , which resulted in 438 learned splitting rules.",
"The model of Siddharthan and Angrosh (2014) is similar to ours in that it combines linguistic rules for structural simplification and statistical methods for lexical simplification.",
"However, we use 2 semantic splitting rules instead of their 26 syntactic rules for relative clauses and appositions, and 85 syntactic rules for subordination and coordination.",
"Narayan and Gardent (2014) argued that syntactic structures do not always capture the semantic arguments of a frame, which may result in wrong splitting boundaries.",
"Consequently, they proposed a supervised system (HYBRID) that uses semantic structures (Discourse Semantic Representations, (Kamp, 1981) ) for sentence splitting and deletion.",
"Splitting candidates are pairs of event variables associated with at least one core thematic role (e.g., agent or patient).",
"Semantic annotation is used on the source side in both training and test.",
"Lexical simplification is performed using the Moses system.",
"HYBRID is the most similar system to ours architecturally, in that it uses a combination of a semantic structural component and an MT component.",
"Narayan and Gardent (2016) proposed instead an unsupervised pipeline, where sentences are split based on a probabilistic model trained on the semantic structures of Simple Wikipedia as well as a language model trained on the same corpus.",
"Lexical simplification is there performed using the unsupervised model of Biran et al.",
"(2011) .",
"As their BLEU and adequacy scores are lower than HYBRID's, we use the latter for comparison.",
"Stajner and Glavaš (2017) combined rule-based simplification conditioned on event extraction, to-gether with an unsupervised lexical simplifier.",
"They tackle a different setting, and aim to simplify texts (rather than sentences), by allowing the deletion of entire input sentences.",
"Split and Rephrase.",
"recently proposed the Split and Rephrase task, focusing on sentence splitting.",
"For this purpose they presented a specialized parallel corpus, derived from the WebNLG dataset .",
"The latter is obtained from the DBPedia knowledge base (Mendes et al., 2012) using content selection and crowdsourcing, and is annotated with semantic triplets of subject-relation-object, obtained semi-automatically.",
"They experimented with five systems, including one similar to HY-BRID, as well as sequence-to-sequence methods for generating sentences from the source text and its semantic forms.",
"The present paper tackles both structural and lexical simplification, and examines the effect of sentence splitting on the subsequent application of a neural system, in terms of its tendency to perform other simplification operations.",
"For this purpose, we adopt a semantic corpus-independent approach for sentence splitting that can be easily integrated in any simplification system.",
"Another difference is that the semantic forms in Split and Rephrase are derived semi-automatically (during corpus compilation), while we automatically extract the semantic form, using a UCCA parser.",
"Direct Semantic Splitting Semantic Representation UCCA (Universal Cognitive Conceptual Annotation; Abend and Rappoport, 2013) is a semantic annotation scheme rooted in typological and cognitive linguistic theory (Dixon, 2010b (Dixon, ,a, 2012 Langacker, 2008) .",
"It aims to represent the main semantic phenomena in the text, abstracting away from syntactic forms.",
"UCCA has been shown to be preserved remarkably well across translations (Sulem et al., 2015) and has also been successfully used for the evaluation of machine translation (Birch et al., 2016) and, recently, for the evaluation of TS (Sulem et al., 2018) and grammatical error correction (Choshen and Abend, 2018) .",
"Formally, UCCA structures are directed acyclic graphs whose nodes (or units) correspond either to the leaves of the graph or to several elements viewed as a single entity according to some semantic or cognitive consideration.",
"A Scene is UCCA's notion of an event or a frame, and is a unit that corresponds to a movement, an action or a state which persists in time.",
"Every Scene contains one main relation, which can be either a Process or a State.",
"Scenes contain one or more Participants, interpreted in a broad sense to include locations and destinations.",
"For example, the sentence \"He went to school\" has a single Scene whose Process is \"went\".",
"The two Participants are \"He\" and \"to school\".",
"Scenes can have several roles in the text.",
"First, they can provide additional information about an established entity (Elaborator Scenes), commonly participles or relative clauses.",
"For example, \"(child) who went to school\" is an Elaborator Scene in \"The child who went to school is John\" (\"child\" serves both as an argument in the Elaborator Scene and as the Center).",
"A Scene may also be a Participant in another Scene.",
"For example, \"John went to school\" in the sentence: \"He said John went to school\".",
"In other cases, Scenes are annotated as Parallel Scenes (H), which are flat structures and may include a Linker (L), as in: \"When L [he arrives] H , [he will call them] H \".",
"With respect to units which are not Scenes, the category Center denotes the semantic head.",
"For example, \"dogs\" is the Center of the expression \"big brown dogs\", and \"box\" is the center of \"in the box\".",
"There could be more than one Center in a unit, for example in the case of coordination, where all conjuncts are Centers.",
"We define the minimal center of a UCCA unit u to be the UCCA graph's leaf reached by starting from u and iteratively selecting the child tagged as Center.",
"For generating UCCA's structures we use TUPA, a transition-based parser (Hershcovich et al., 2017) (specifically, the TUPA BiLST M model).",
"TUPA uses an expressive set of transitions, able to support all structural properties required by the UCCA scheme.",
"Its transition classifier is based on an MLP that receives a BiLSTM encoding of elements in the parser state (buffer, stack and intermediate graph), given word embeddings and other features.",
"The Semantic Rules For performing DSS, we define two simple splitting rules, conditioned on UCCA's categories.",
"We currently only consider Parallel Scenes and Elaborator Scenes, not separating Participant Scenes, in order to avoid splitting in cases of nominalizations or indirect speech.",
"For example, the sentence \"His arrival surprised everyone\", which has, in addition to the Scene evoked by \"surprised\", a Participant Scene evoked by \"arrival\", is not split here.",
"Rule #1.",
"Parallel Scenes of a given sentence are extracted, separated in different sentences, and concatenated according to the order of appearance.",
"More formally, given a decomposition of a sentence S into parallel Scenes Sc 1 , Sc 2 , · · · Sc n (indexed by the order of the first token), we obtain the following rule, where \"|\" is the sentence delimiter: S −→ Sc1|Sc2| · · · |Scn As UCCA allows argument sharing between Scenes, the rule may duplicate the same sub-span of S across sentences.",
"For example, the rule will convert \"He came back home and played piano\" into \"He came back home\"|\"He played piano.\"",
"Rule #2.",
"Given a sentence S, the second rule extracts Elaborator Scenes and corresponding minimal centers.",
"Elaborator Scenes are then concatenated to the original sentence, where the Elaborator Scenes, except for the minimal center they elaborate, are removed.",
"Pronouns such as \"who\", \"which\" and \"that\" are also removed.",
"Formally, if {(Sc 1 , C 1 ) · · · (Sc n , C n )} are the Elaborator Scenes of S and their corresponding minimal centers, the rewrite is: S −→ S − n i=1 (Sci − Ci)|Sc1| · · · |Scn where S −A is S without the unit A.",
"For example, this rule converts the sentence \"He observed the planet which has 14 known satellites\" to \"He observed the planet| Planet has 14 known satellites.\".",
"Article regeneration is not covered by the rule, as its output is directly fed into the NMT component.",
"After the extraction of Parallel Scenes and Elaborator Scenes, the resulting simplified Parallel Scenes are placed before the Elaborator Scenes.",
"See Figure 1 .",
"Neural Component The split sentences are run through the NTS stateof-the-art neural TS system (Nisioi et al., 2017) , built using the OpenNMT neural machine translation framework (Klein et al., 2017) .",
"The architecture includes two LSTM layers, with hidden states of 500 units in each, as well as global attention combined with input feeding (Luong et al., 2015) .",
"Training is done with a 0.3 dropout probability (Srivastava et al., 2014) .",
"This model uses alignment probabilities between the predictions and the original sentences, rather than characterbased models, to retrieve the original words.",
"We here consider the w2v initialization for NTS (N17), where word2vec embeddings of size 300 are trained on Google News (Mikolov et al., 2013a) and local embeddings of size 200 are trained on the training simplification corpus (Řehůřek and Sojka, 2010; Mikolov et al., 2013b) .",
"Local embeddings for the encoder are trained on the source side of the training corpus, while those for the decoder are trained on the simplified side.",
"For sampling multiple outputs from the system, beam search is performed during decoding by generating the first 5 hypotheses at each step ordered by the log-likelihood of the target sentence given the input sentence.",
"We here explore both the highest (h1) and fourth-ranked (h4) hypotheses, which we show to increase the SARI score and to be much less conservative.",
"2 We thus experiment with two variants of the neural component, denoted by NTS-h1 and NTS-h4.",
"The pipeline application of the rules and the neural system results in two corresponding models: SENTS-h1 and SENTS-h4.",
"Experimental Setup Corpus All systems are tested on the test corpus of Xu et al.",
"(2016) , 3 comprising 359 sentences from the PWKP corpus (Zhu et al., 2010) Neural component.",
"We use the NTS-w2v model 6 provided by N17, obtained by training on the corpus of Hwang et al.",
"(2015) and tuning on the corpus of Xu et al.",
"(2016) .",
"The training set is based on manual and automatic alignments between standard English Wikipedia and Simple English Wikipedia, including both good matches and partial matches whose similarity score is above the 0.45 scale threshold (Hwang et al., 2015) .",
"The total size of the training set is about 280K aligned sentences, of which 150K sentences are full matches and 130K are partial matches.",
"7 Comparison systems.",
"We compare our findings to HYBRID, which is the state of the art for joint structural and lexical simplification, imple-mented by Zhang and Lapata (2017) .",
"8 We use the released output of HYBRID, trained on a corpus extracted from Wikipedia, which includes the aligned sentence pairs from Kauchak (2013) , the aligned revision sentence pairs in Woodsend and Lapata (2011) , and the PWKP corpus, totaling about 296K sentence pairs.",
"The tuning set is the same as for the above systems.",
"In order to isolate the effect of NMT, we also implement SEMoses, where the neural-based component is replaced by the phrase-based MT system Moses, 9 which is also used in HYBRID.",
"The training, tuning and test sets are the same as in the case of SENTS.",
"MGIZA 10 is used for word alignment.",
"The KenLM language model is trained using the target side of the training corpus.",
"Additional baselines.",
"We report human and automatic evaluation scores for Identity (where the output is identical to the input), for Simple Wikipedia where the output is the corresponding aligned sentence in the PWKP corpus, and for the SBMT-SARI system, tuned against SARI (Xu et al., 2016) , which maximized the SARI score on this test set in previous works (Nisioi et al., 2017; Zhang and Lapata, 2017) .",
"Automatic evaluation.",
"The automatic metrics used for the evaluation are: (1) BLEU (Papineni et al., 2002) (2) SARI (System output Against References and against the Input sentence; Xu et al., 2016) , which compares the n-grams of the system output with those of the input and the human references, separately evaluating the quality of words that are added, deleted and kept by the systems.",
"(3) F add : the addition component of the SARI score (F-score); (4) F keep : the keeping component of the SARI score (F-score); (5) P del : the deletion component of the SARI score (precision).",
"11 Each metric is computed against the 8 available references.",
"We also assess system conservatism, reporting the percentage of sentences copied from the input (%Same), the averaged Levenshtein distance from the source (LD SC , which considers additions, deletions, and substitutions), and the number of source sentences that are split (#Split).",
"12 Human evaluation.",
"Human evaluation is carried out by 3 in-house native English annotators, who rated the different input-output pairs for the different systems according to 4 parameters: Grammaticality (G), Meaning preservation (M), Simplicity (S) and Structural Simplicity (StS).",
"Each input-output pair is rated by all 3 annotators.",
"Elicitation questions are given in Table 1 .",
"As the selection process of the input-output pairs in the test corpus of Xu et al.",
"(2016) , as well as their crowdsourced references, are explicitly biased towards lexical simplification, the use of human evaluation permits us to evaluate the structural aspects of the system outputs, even where structural operations are not attested in the references.",
"Indeed, we show that system outputs may receive considerably higher structural simplicity scores than the source, in spite of the sample selection bias.",
"Following previous work (e.g., Narayan and Gardent, 2014; Xu et al., 2016; Nisioi et al., 2017) , Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"Note that in the first question, the input sentence is not taken into account.",
"The grammaticality of the input is assessed by evaluating the Identity transformation (see Table 2 ), providing a baseline for the grammaticality scores of the other systems.",
"Following N17, a -2 to +2 scale is used for measuring simplicity, where a 0 score indicates that the input and the output are equally complex.",
"This scale, compared to the standard 1 to 5 scale, permits a better differentiation between cases where simplicity is hurt (the output is more complex than the original) and between cases where the output is as simple as the original, for example in the case of the identity transformation.",
"Structural simplicity is also evaluated with a -2 to +2 scale.",
"The question for eliciting StS is accompanied with a negative example, showing a case of lexical simplification, where a complex word is replaced by a simple one (the other questions appear without examples).",
"A positive example is not included so as not to bias the annotators by revealing the nature of the operations we focus on (splitting and deletion).",
"We follow N17 in applying human evaluation on the first 70 sentences of the test corpus.",
"13 The resulting corpus, totaling 1960 sentence pairs, each annotated by 3 annotators, also include the additional experiments described in Section 7 as well as the outputs of the NTS and SENTS systems used with the default initialization.",
"The inter-annotator agreement, using Cohen's quadratic weighted κ (Cohen, 1968) , is computed as the average agreement of the 3 annotator pairs.",
"The obtained rates are 0.56, 0.75, 0.47 and 0.48 for G, M, S and StS respectively.",
"System scores are computed by averaging over the 3 annotators and the 70 sentences.",
"G Is the output fluent and grammatical?",
"M Does the output preserve the meaning of the input?",
"S Is the output simpler than the input?",
"StS Is the output simpler than the input, ignoring the complexity of the words?",
"Table 2 : Human evaluation of the different NMT-based systems.",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"The highest score in each column appears in bold.",
"Structural simplification systems are those that explicitly model structural operations.",
"Results Human evaluation.",
"Results are presented in Table 2 .",
"First, we can see that the two SENTS systems outperform HYBRID in terms of G, M, and S. SENTS-h1 is the best scoring system, under all human measures.",
"In comparison to NTS, SENTS scores markedly higher on the simplicity judgments.",
"Meaning preservation and grammaticality are lower for SENTS, which is likely due to the more conservative nature of NTS.",
"Interestingly, the application of the splitting rules by themselves does not yield a considerably simpler sentence.",
"This likely stems from the rules not necessarily yielding grammatical sentences (NTS often serves as a grammatical error corrector over it), and from the incorporation of deletions, which are also structural operations, and are performed by the neural system.",
"An example of high structural simplicity scores for SENTS resulting from deletions is presented in Table 5 , together with the outputs of the other systems and the corresponding human evaluation scores.",
"NTS here performs lexical simplification, replacing the word \"incursions\" by \"raids\" or \"attacks\"'.",
"On the other hand, the high StS scores obtained by DSS and SEMoses are due to sentence splittings.",
"Automatic evaluation.",
"Results are presented in Table 3 .",
"Identity obtains much higher BLEU scores than any other system, suggesting that BLEU may not be informative in this setting.",
"SARI seems more informative, and assigns the lowest score to Identity and the second highest to the reference.",
"Both SENTS systems outperform HYBRID in terms of SARI and all its 3 sub-components.",
"The h4 setting (hypothesis #4 in the beam) is generally best, both with and without the splitting rules.",
"Comparing SENTS to using NTS alone (without splitting), we see that SENTS obtains higher SARI scores when hypothesis #1 is used and that NTS obtains higher scores when hypothesis #4 is used.",
"This may result from NTS being more conservative than SENTS (and HYBRID), which is rewarded by SARI (conservatism is indicated by the %Same column).",
"Indeed for h1, %Same is reduced from around 66% for NTS, to around 7% for SENTS.",
"Conservatism further decreases when h4 is used (for both NTS and SENTS).",
"Examining SARI's components, we find that SENTS outperforms NTS on F add , and is comparable (or even superior for h1 setting) to NTS on P del .",
"The superior SARI score of NTS over SENTS is thus entirely a result of a superior F keep , which is easier for a conservative system to maximize.",
"Comparing HYBRID with SEMoses, both of which use Moses, we find that SEMoses obtains higher BLEU and SARI scores, as well as G and M human scores, and splits many more sentences.",
"HYBRID scores higher on the human simplicity measures.",
"We note, however, that applying NTS alone is inferior to HYBRID in terms of simplicity, and that both components are required to obtain high simplicity scores (with SENTS).",
"We also compare the sentence splitting component used in our systems (namely DSS) to that used in HYBRID, abstracting away from deletionbased and lexical simplification.",
"We therefore apply DSS to the test set (554 sentences) of the Table 4 : Automatic and human evaluation for the different combinations of Moses and DSS.",
"The automatic metrics as well as the lexical and structural properties reported (%Same: proportion of sentences copied from the input; LDSC: Averaged Levenshtein distance from the source; #Split: number of split sentences) concern the 359 sentences of the test corpus.",
"Human evaluation, with the G, M, S, and StS parameters, is applied to the first 70 sentences of the corpus.",
"The highest score in each column appears in bold.",
"WEB-SPLIT corpus (See Section 2), which focuses on sentence splitting.",
"We compare our results to those reported for a variant of HYBRID used without the deletion module, and trained on WEB-SPLIT .",
"DSS gets a higher BLEU score (46.45 vs. 39.97) and performs more splittings (number of output sentences per input sentence of 1.73 vs. 1.26).",
"Additional Experiments Replacing the parser by manual annotation.",
"In order to isolate the influence of the parser on the results, we implement a semi-automatic version of the semantic component, which uses manual UCCA annotation instead of the parser, focusing of the first 70 sentences of the test corpus.",
"We employ a single expert UCCA annotator and use the UCCAApp annotation tool .",
"Results are presented in Table 6 , for both SENTS and SEMoses.",
"In the case of SEMoses, meaning preservation is improved when manual UCCA annotation is used.",
"On the other hand, simplicity degrades, possibly due to the larger number of Scenes marked by the human annotator (TUPA tends to under-predict Scenes).",
"This effect doesn't show with SENTS, where trends are similar to the automatic parses case, and high simplicity scores are obtained.",
"This demonstrates that UCCA parsing technology is sufficiently mature to be used to carry out structural simplification.",
"We also directly evaluate the performance of the parser by computing F1, Recall and Precision DAG scores (Hershcovich et al., 2017) , against the manual UCCA annotation.",
"14 We obtain for primary edges (i.e.",
"edges that form a tree structure) scores of 68.9 %, 70.5%, and 67.4% for F1, Recall and Precision respectively.",
"For remotes edges (i.e.",
"additional edges, forming a DAG), the scores are 45.3%, 40.5%, and 51.5%.",
"These results are comparable with the out-of-domain results reported by Hershcovich et al.",
"(2017) .",
"Experiments on Moses.",
"We test other variants of SEMoses, where phrase-based MT is used instead of NMT.",
"Specifically, we incorporate semantic information in a different manner by implementing two additional models: (1) SETrain1-Moses, where a new training corpus is obtained by applying the splitting rules to the target side of the G M S StS Identity In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the incursions of other Viking groups.",
"5.00 5.00 0.00 0.00 Simple Wikipedia In return, Rollo swore fealty to Charles, converted to Christianity, and swore to defend the northern region of France against raids by other Viking groups.",
"4.67 5.00 1.00 0.00 SBMT-SARI In return, Rollo swore fealty to Charles, converted to Christianity, and set out to defend the north of France from the raids of other viking groups.",
"4.67 4.67 0.67 0.00 NTS-h1 In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the raids of other Viking groups.",
"5.00 5.00 1.00 0.00 NTS-h4 In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the attacks of other Viking groups.",
"4.67 5.00 1.00 0.00 DSS Rollo swore fealty to Charles.",
"Rollo converted to Christianity.",
"Rollo undertook to defend the northern region of France against the incursions of other viking groups.",
"4.00 4.33 1.33 1.33 HYBRID In return Rollo swore, and undertook to defend the region of France., Charles, converted 2.33 2.00 0.33 0.33 SEMoses Rollo swore put his seal to Charles.",
"Rollo converted to Christianity.",
"Rollo undertook to defend the northern region of France against the incursions of other viking groups.",
"3.33 4.00 1.33 1.33 SENTS-h1 Rollo swore fealty to Charles.",
"5.00 2.00 2.00 2.00 SENTS-h4 Rollo swore fealty to Charles and converted to Christianity.",
"5.00 2.67 1.33 1.33 Table 5 : System outputs for one of the test sentences with the corresponding human evaluation scores (averaged over the 3 annotators).",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"Table 6 : Human evaluation using manual UCCA annotation.",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"X m refers to the semi-automatic version of the system X. training corpus; (2) SETrain2-Moses, where the rules are applied to the source side.",
"The resulting parallel corpus is concatenated to the original training corpus.",
"We also examine whether training a language model (LM) on split sentences has a positive effect, and train the LM on the split target side.",
"For each system X, the version with the LM trained on split sentences is denoted by X LM .",
"We repeat the same human and automatic evaluation protocol as in §6, presenting results in Table 4 .",
"Simplicity scores are much higher in the case of SENTS (that uses NMT), than with Moses.",
"The two best systems according to SARI are SEMoses and SEMoses LM which use DSS.",
"In fact, they resemble the performance of DSS applied alone (Tables 2 and 3) , which confirms the high degree of conservatism observed by Moses in simplification (Alva-Manchego et al., 2017) .",
"Indeed, all Moses-based systems that don't apply DSS as preprocessing are conservative, obtaining high scores for BLEU, grammaticality and meaning preservation, but low scores for simplicity.",
"Training the LM on split sentences shows little improvement.",
"Conclusion We presented the first simplification system combining semantic structures and neural machine translation, showing that it outperforms existing lexical and structural systems.",
"The proposed approach addresses the over-conservatism of MTbased systems for TS, which often fail to modify the source in any way.",
"The semantic component performs sentence splitting without relying on a specialized corpus, but only an off-theshelf semantic parser.",
"The consideration of sentence splitting as a decomposition of a sentence into its Scenes is further supported by recent work on structural TS evaluation (Sulem et al., 2018) , which proposes the SAMSA metric.",
"The two works, which apply this assumption to different ends (TS system construction, and TS evaluation), confirm its validity.",
"Future work will leverage UCCA's cross-linguistic applicability to support multi-lingual TS and TS pre-processing for MT."
]
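Rules #1 and #2 in the list above decompose a sentence into its Parallel and Elaborator Scenes. The following toy Python rendering shows the shape of both rewrites; the Scene dataclass, its fields, and the string-based span removal are illustrative assumptions of our own, since the actual system operates on TUPA's UCCA graphs.

```python
# Toy rendering of the two DSS rewrite rules (not the actual system).
from dataclasses import dataclass

PRONOUNS = {"who", "which", "that"}  # relative pronouns removed by Rule #2

@dataclass
class Scene:
    text: str                 # the Scene's token span as a string
    kind: str                 # "parallel" or "elaborator"
    minimal_center: str = ""  # for Elaborator Scenes only

def rule1_parallel_scenes(scenes):
    """Rule #1: S -> Sc_1 | Sc_2 | ... | Sc_n, in order of appearance.
    Shared arguments may be duplicated across the output sentences."""
    return [s.text for s in scenes if s.kind == "parallel"]

def rule2_elaborator_scenes(sentence, scenes):
    """Rule #2: remove each Elaborator Scene (except its minimal center)
    from the sentence, then append the Scene as its own sentence,
    dropping relative pronouns."""
    remainder = sentence
    extracted = []
    for sc in (s for s in scenes if s.kind == "elaborator"):
        to_remove = sc.text.replace(sc.minimal_center, "", 1).strip()
        remainder = " ".join(remainder.replace(to_remove, "", 1).split())
        words = [w for w in sc.text.split() if w.lower() not in PRONOUNS]
        extracted.append(" ".join(words))
    return [remainder] + extracted
```

For "He observed the planet which has 14 known satellites" with an Elaborator Scene "planet which has 14 known satellites" whose minimal center is "planet", this yields ["He observed the planet", "planet has 14 known satellites"], matching the example in the text up to capitalization.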
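SARI and its F_add/F_keep/P_del components, referenced throughout the results above, can be illustrated at the unigram level. The sketch below is a single-reference, unigram-only simplification of the metric of Xu et al. (2016), not the official implementation, which uses n-grams up to length 4 and fractional counts over multiple references.

```python
# Unigram-only sketch of SARI's add/keep/delete decomposition.
def sari_components(source, output, reference):
    s, o, r = set(source.split()), set(output.split()), set(reference.split())

    def f1(p, q):
        return 2 * p * q / (p + q) if p + q else 0.0

    add_good = (o - s) & (r - s)            # correctly added words
    p_add = len(add_good) / len(o - s) if o - s else 0.0
    r_add = len(add_good) / len(r - s) if r - s else 0.0

    keep_good = o & s & r                   # correctly kept source words
    p_keep = len(keep_good) / len(o & s) if o & s else 0.0
    r_keep = len(keep_good) / len(r & s) if r & s else 0.0

    del_good = (s - o) & (s - r)            # correctly deleted source words
    p_del = len(del_good) / len(s - o) if s - o else 0.0

    # F_add, F_keep (F-scores) and P_del (precision only, as in SARI).
    return f1(p_add, r_add), f1(p_keep, r_keep), p_del
```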
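The conservatism statistics reported above (%Same, LD_SC, #Split) are straightforward to compute. The hedged sketch below uses a standard dynamic-programming Levenshtein distance; counting splits via sentence-final periods is a crude proxy of our own, since the paper counts output sentences per input.

```python
# Sketch of the conservatism statistics: %Same, LD_SC, #Split.
def levenshtein(a, b):
    """Character-level edit distance (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[-1] + 1,          # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def conservatism_stats(sources, outputs):
    same = sum(s == o for s, o in zip(sources, outputs))
    ld = sum(levenshtein(s, o) for s, o in zip(sources, outputs))
    split = sum(o.count(".") > s.count(".") for s, o in zip(sources, outputs))
    return {"%Same": 100 * same / len(sources),
            "LD_SC": ld / len(sources),
            "#Split": split}
```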
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"4",
"5",
"6",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Semantic Representation",
"The Semantic Rules",
"Neural Component",
"Experimental Setup",
"Results",
"Additional Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-16#paper-994#slide-12 | Conclusion 1 | We presented here the first simplification system combining semantic structures and neural machine translation.
Our system compares favorably to the state-of-the-art in combined structural and lexical simplification.
This approach addresses the conservatism of MT-based systems.
Sentence splitting is performed without relying on a specialized corpus. | We presented here the first simplification system combining semantic structures and neural machine translation.
Our system compares favorably to the state-of-the-art in combined structural and lexical simplification.
This approach addresses the conservatism of MT-based systems.
Sentence splitting is performed without relying on a specialized corpus. | [] |
GEM-SciDuet-train-16#paper-994#slide-13 | 994 | Simple and Effective Text Simplification Using Semantic and Neural Methods | Sentence splitting is a major simplification operator. Here we present a simple and efficient splitting algorithm based on an automatic semantic parser. After splitting, the text is amenable for further fine-tuned simplification operations. In particular, we show that neural Machine Translation can be effectively used in this situation. Previous application of Machine Translation for simplification suffers from a considerable disadvantage in that they are overconservative, often failing to modify the source in any way. Splitting based on semantic parsing, as proposed here, alleviates this issue. Extensive automatic and human evaluation shows that the proposed method compares favorably to the stateof-the-art in combined lexical and structural simplification. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266
],
"paper_content_text": [
"Introduction Text Simplification (TS) is generally defined as the conversion of a sentence into one or more simpler sentences.",
"It has been shown useful both as a preprocessing step for tasks such as Machine Translation (MT; Mishra et al., 2014; Štajner and Popović, 2016) and relation extraction (Niklaus et al., 2016) , as well as for developing reading aids, e.g.",
"for people with dyslexia (Rello et al., 2013) or non-native speakers (Siddharthan, 2002) .",
"TS includes both structural and lexical operations.",
"The main structural simplification operation is sentence splitting, namely rewriting a single sentence into multiple sentences while preserving its meaning.",
"While recent improvement in TS has been achieved by the use of neural MT (NMT) approaches (Nisioi et al., 2017; Zhang and Lapata, 2017) , where TS is consid-ered a case of monolingual translation, the sentence splitting operation has not been addressed by these systems, potentially due to the rareness of this operation in the training corpora (Narayan and Gardent, 2014; Xu et al., 2015) .",
"We show that the explicit integration of sentence splitting in the simplification system could also reduce conservatism, which is a grave limitation of NMT-based TS systems (Alva-Manchego et al., 2017) .",
"Indeed, experimenting with a stateof-the-art neural system (Nisioi et al., 2017) , we find that 66% of the input sentences remain unchanged, while none of the corresponding references is identical to the source.",
"Human and automatic evaluation of the references (against other references), confirm that the references are indeed simpler than the source, indicating that the observed conservatism is excessive.",
"Our methods for performing sentence splitting as pre-processing allows the TS system to perform other structural (e.g.",
"deletions) and lexical (e.g.",
"word substitutions) operations, thus increasing both structural and lexical simplicity.",
"For combining linguistically informed sentence splitting with data-driven TS, two main methods have been proposed.",
"The first involves handcrafted syntactic rules, whose compilation and validation are laborious (Shardlow, 2014) .",
"For example, Siddharthan and Angrosh (2014) used 111 rules for relative clauses, appositions, subordination and coordination.",
"Moreover, syntactic splitting rules, which form a substantial part of the rules, are usually language specific, requiring the development of new rules when ported to other languages (Aluísio and Gasperin, 2010; Seretan, 2012; Hung et al., 2012; Barlacchi and Tonelli, 2013 , for Portuguese, French, Vietnamese, and Italian respectively).",
"The second method uses linguistic information for detecting potential splitting points, while splitting probabilities are learned us-ing a parallel corpus.",
"For example, in the system of Narayan and Gardent (2014) (henceforth, HYBRID) , the state-of-the-art for joint structural and lexical TS, potential splitting points are determined by event boundaries.",
"In this work, which is the first to combine structural semantics and neural methods for TS, we propose an intermediate way for performing sentence splitting, presenting Direct Semantic Splitting (DSS), a simple and efficient algorithm based on a semantic parser which supports the direct decomposition of the sentence into its main semantic constituents.",
"After splitting, NMT-based simplification is performed, using the NTS system.",
"We show that the resulting system outperforms HY-BRID in both automatic and human evaluation.",
"We use the UCCA scheme for semantic representation (Abend and Rappoport, 2013) , where the semantic units are anchored in the text, which simplifies the splitting operation.",
"We further leverage the explicit distinction in UCCA between types of Scenes (events), applying a specific rule for each of the cases.",
"Nevertheless, the DSS approach can be adapted to other semantic schemes, like AMR (Banarescu et al., 2013) .",
"We collect human judgments for multiple variants of our system, its sub-components, HYBRID and similar systems that use phrase-based MT.",
"This results in a sizable human evaluation benchmark, which includes 28 systems, totaling at 1960 complex-simple sentence pairs, each annotated by three annotators using four criteria.",
"1 This benchmark will support the future analysis of TS systems, and evaluation practices.",
"Previous work is discussed in §2, the semantic and NMT components we use in §3 and §4 respectively.",
"The experimental setup is detailed in §5.",
"Our main results are presented in §6, while §7 presents a more detailed analysis of the system's sub-components and related settings.",
"Related Work MT-based sentence simplification.",
"Phrasebased Machine Translation (PBMT; Koehn et al., 2003) was first used for TS by Specia (2010) , who showed good performance on lexical simplification and simple rewriting, but under-prediction of other operations.",
"Štajner et al.",
"(2015) took a similar approach, finding that it is beneficial to use training data where the source side is highly similar to the target.",
"Other PBMT for TS systems include the work of Coster and Kauchak (2011b) , which uses Moses (Koehn et al., 2007) , the work of Coster and Kauchak (2011a) , where the model is extended to include deletion, and PBMT-R (Wubben et al., 2012) , where Levenshtein distance to the source is used for re-ranking to overcome conservatism.",
"The NTS NMT-based system (Nisioi et al., 2017) (henceforth, N17) reported superior performance over PBMT in terms of BLEU and human evaluation scores, and serves as a component in our system (see Section 4).",
"took a similar approach, adding lexical constraints to an NMT model.",
"Zhang and Lapata (2017) combined NMT with reinforcement learning, using SARI (Xu et al., 2016) , BLEU, and cosine similarity to the source as the reward.",
"None of these models explicitly addresses sentence splitting.",
"Alva-Manchego et al.",
"(2017) proposed to reduce conservatism, observed in PBMT and NMT systems, by first identifying simplification operations in a parallel corpus and then using sequencelabeling to perform the simplification.",
"However, they did not address common structural operations, such as sentence splitting, and claimed that their method is not applicable to them.",
"Xu et al.",
"(2016) used Syntax-based Machine Translation (SBMT) for sentence simplification, using a large scale paraphrase dataset (Ganitketitch et al., 2013) for training.",
"While it does not target structural simplification, we include it in our evaluation for completeness.",
"Structural sentence simplification.",
"Syntactic hand-crafted sentence splitting rules were proposed by Chandrasekar et al.",
"(1996) , Siddharthan (2002) , Siddhathan (2011) in the context of rulebased TS.",
"The rules separate relative clauses and coordinated clauses and un-embed appositives.",
"In our method, the use of semantic distinctions instead of syntactic ones reduces the number of rules.",
"For example, relative clauses and appositives can correspond to the same semantic category.",
"In syntax-based splitting, a generation module is sometimes added after the split (Siddharthan, 2004) , addressing issues such as reordering and determiner selection.",
"In our model, no explicit regeneration is applied to the split sentences, which are fed directly to an NMT system.",
"Glavaš andŠtajner (2013) used a rule-based system conditioned on event extraction and syntax for defining two simplification models.",
"The eventwise simplification one, which separates events to separate output sentences, is similar to our semantic component.",
"Differences are in that we use a single semantic representation for defining the rules (rather than a combination of semantic and syntactic criteria), and avoid the need for complex rules for retaining grammaticality by using a subsequent neural component.",
"Combined structural and lexical TS.",
"Earlier TS models used syntactic information for splitting.",
"Zhu et al.",
"(2010) used syntactic information on the source side, based on the SBMT model of Yamada and Knight (2001) .",
"Syntactic structures were used on both sides in the model of Woodsend and Lapata (2011) , based on a quasi-synchronous grammar (Smith and Eisner, 2006) , which resulted in 438 learned splitting rules.",
"The model of Siddharthan and Angrosh (2014) is similar to ours in that it combines linguistic rules for structural simplification and statistical methods for lexical simplification.",
"However, we use 2 semantic splitting rules instead of their 26 syntactic rules for relative clauses and appositions, and 85 syntactic rules for subordination and coordination.",
"Narayan and Gardent (2014) argued that syntactic structures do not always capture the semantic arguments of a frame, which may result in wrong splitting boundaries.",
"Consequently, they proposed a supervised system (HYBRID) that uses semantic structures (Discourse Semantic Representations, (Kamp, 1981) ) for sentence splitting and deletion.",
"Splitting candidates are pairs of event variables associated with at least one core thematic role (e.g., agent or patient).",
"Semantic annotation is used on the source side in both training and test.",
"Lexical simplification is performed using the Moses system.",
"HYBRID is the most similar system to ours architecturally, in that it uses a combination of a semantic structural component and an MT component.",
"Narayan and Gardent (2016) proposed instead an unsupervised pipeline, where sentences are split based on a probabilistic model trained on the semantic structures of Simple Wikipedia as well as a language model trained on the same corpus.",
"Lexical simplification is there performed using the unsupervised model of Biran et al.",
"(2011) .",
"As their BLEU and adequacy scores are lower than HYBRID's, we use the latter for comparison.",
"Stajner and Glavaš (2017) combined rule-based simplification conditioned on event extraction, to-gether with an unsupervised lexical simplifier.",
"They tackle a different setting, and aim to simplify texts (rather than sentences), by allowing the deletion of entire input sentences.",
"Split and Rephrase.",
"recently proposed the Split and Rephrase task, focusing on sentence splitting.",
"For this purpose they presented a specialized parallel corpus, derived from the WebNLG dataset .",
"The latter is obtained from the DBPedia knowledge base (Mendes et al., 2012) using content selection and crowdsourcing, and is annotated with semantic triplets of subject-relation-object, obtained semi-automatically.",
"They experimented with five systems, including one similar to HY-BRID, as well as sequence-to-sequence methods for generating sentences from the source text and its semantic forms.",
"The present paper tackles both structural and lexical simplification, and examines the effect of sentence splitting on the subsequent application of a neural system, in terms of its tendency to perform other simplification operations.",
"For this purpose, we adopt a semantic corpus-independent approach for sentence splitting that can be easily integrated in any simplification system.",
"Another difference is that the semantic forms in Split and Rephrase are derived semi-automatically (during corpus compilation), while we automatically extract the semantic form, using a UCCA parser.",
"Direct Semantic Splitting Semantic Representation UCCA (Universal Cognitive Conceptual Annotation; Abend and Rappoport, 2013) is a semantic annotation scheme rooted in typological and cognitive linguistic theory (Dixon, 2010b (Dixon, ,a, 2012 Langacker, 2008) .",
"It aims to represent the main semantic phenomena in the text, abstracting away from syntactic forms.",
"UCCA has been shown to be preserved remarkably well across translations (Sulem et al., 2015) and has also been successfully used for the evaluation of machine translation (Birch et al., 2016) and, recently, for the evaluation of TS (Sulem et al., 2018) and grammatical error correction (Choshen and Abend, 2018) .",
"Formally, UCCA structures are directed acyclic graphs whose nodes (or units) correspond either to the leaves of the graph or to several elements viewed as a single entity according to some semantic or cognitive consideration.",
"A Scene is UCCA's notion of an event or a frame, and is a unit that corresponds to a movement, an action or a state which persists in time.",
"Every Scene contains one main relation, which can be either a Process or a State.",
"Scenes contain one or more Participants, interpreted in a broad sense to include locations and destinations.",
"For example, the sentence \"He went to school\" has a single Scene whose Process is \"went\".",
"The two Participants are \"He\" and \"to school\".",
"Scenes can have several roles in the text.",
"First, they can provide additional information about an established entity (Elaborator Scenes), commonly participles or relative clauses.",
"For example, \"(child) who went to school\" is an Elaborator Scene in \"The child who went to school is John\" (\"child\" serves both as an argument in the Elaborator Scene and as the Center).",
"A Scene may also be a Participant in another Scene.",
"For example, \"John went to school\" in the sentence: \"He said John went to school\".",
"In other cases, Scenes are annotated as Parallel Scenes (H), which are flat structures and may include a Linker (L), as in: \"When L [he arrives] H , [he will call them] H \".",
"With respect to units which are not Scenes, the category Center denotes the semantic head.",
"For example, \"dogs\" is the Center of the expression \"big brown dogs\", and \"box\" is the center of \"in the box\".",
"There could be more than one Center in a unit, for example in the case of coordination, where all conjuncts are Centers.",
"We define the minimal center of a UCCA unit u to be the UCCA graph's leaf reached by starting from u and iteratively selecting the child tagged as Center.",
"For generating UCCA's structures we use TUPA, a transition-based parser (Hershcovich et al., 2017) (specifically, the TUPA BiLST M model).",
"TUPA uses an expressive set of transitions, able to support all structural properties required by the UCCA scheme.",
"Its transition classifier is based on an MLP that receives a BiLSTM encoding of elements in the parser state (buffer, stack and intermediate graph), given word embeddings and other features.",
"The Semantic Rules For performing DSS, we define two simple splitting rules, conditioned on UCCA's categories.",
"We currently only consider Parallel Scenes and Elaborator Scenes, not separating Participant Scenes, in order to avoid splitting in cases of nominalizations or indirect speech.",
"For example, the sentence \"His arrival surprised everyone\", which has, in addition to the Scene evoked by \"surprised\", a Participant Scene evoked by \"arrival\", is not split here.",
"Rule #1.",
"Parallel Scenes of a given sentence are extracted, separated in different sentences, and concatenated according to the order of appearance.",
"More formally, given a decomposition of a sentence S into parallel Scenes Sc 1 , Sc 2 , · · · Sc n (indexed by the order of the first token), we obtain the following rule, where \"|\" is the sentence delimiter: S −→ Sc1|Sc2| · · · |Scn As UCCA allows argument sharing between Scenes, the rule may duplicate the same sub-span of S across sentences.",
"For example, the rule will convert \"He came back home and played piano\" into \"He came back home\"|\"He played piano.\"",
"Rule #2.",
"Given a sentence S, the second rule extracts Elaborator Scenes and corresponding minimal centers.",
"Elaborator Scenes are then concatenated to the original sentence, where the Elaborator Scenes, except for the minimal center they elaborate, are removed.",
"Pronouns such as \"who\", \"which\" and \"that\" are also removed.",
"Formally, if {(Sc 1 , C 1 ) · · · (Sc n , C n )} are the Elaborator Scenes of S and their corresponding minimal centers, the rewrite is: S −→ S − n i=1 (Sci − Ci)|Sc1| · · · |Scn where S −A is S without the unit A.",
"For example, this rule converts the sentence \"He observed the planet which has 14 known satellites\" to \"He observed the planet| Planet has 14 known satellites.\".",
"Article regeneration is not covered by the rule, as its output is directly fed into the NMT component.",
"After the extraction of Parallel Scenes and Elaborator Scenes, the resulting simplified Parallel Scenes are placed before the Elaborator Scenes.",
"See Figure 1 .",
"Neural Component The split sentences are run through the NTS stateof-the-art neural TS system (Nisioi et al., 2017) , built using the OpenNMT neural machine translation framework (Klein et al., 2017) .",
"The architecture includes two LSTM layers, with hidden states of 500 units in each, as well as global attention combined with input feeding (Luong et al., 2015) .",
"Training is done with a 0.3 dropout probability (Srivastava et al., 2014) .",
"This model uses alignment probabilities between the predictions and the original sentences, rather than characterbased models, to retrieve the original words.",
"We here consider the w2v initialization for NTS (N17), where word2vec embeddings of size 300 are trained on Google News (Mikolov et al., 2013a) and local embeddings of size 200 are trained on the training simplification corpus (Řehůřek and Sojka, 2010; Mikolov et al., 2013b) .",
"Local embeddings for the encoder are trained on the source side of the training corpus, while those for the decoder are trained on the simplified side.",
"For sampling multiple outputs from the system, beam search is performed during decoding by generating the first 5 hypotheses at each step ordered by the log-likelihood of the target sentence given the input sentence.",
"We here explore both the highest (h1) and fourth-ranked (h4) hypotheses, which we show to increase the SARI score and to be much less conservative.",
"2 We thus experiment with two variants of the neural component, denoted by NTS-h1 and NTS-h4.",
"The pipeline application of the rules and the neural system results in two corresponding models: SENTS-h1 and SENTS-h4.",
"Experimental Setup Corpus All systems are tested on the test corpus of Xu et al.",
"(2016) , 3 comprising 359 sentences from the PWKP corpus (Zhu et al., 2010) Neural component.",
"We use the NTS-w2v model 6 provided by N17, obtained by training on the corpus of Hwang et al.",
"(2015) and tuning on the corpus of Xu et al.",
"(2016) .",
"The training set is based on manual and automatic alignments between standard English Wikipedia and Simple English Wikipedia, including both good matches and partial matches whose similarity score is above the 0.45 scale threshold (Hwang et al., 2015) .",
"The total size of the training set is about 280K aligned sentences, of which 150K sentences are full matches and 130K are partial matches.",
"7 Comparison systems.",
"We compare our findings to HYBRID, which is the state of the art for joint structural and lexical simplification, imple-mented by Zhang and Lapata (2017) .",
"8 We use the released output of HYBRID, trained on a corpus extracted from Wikipedia, which includes the aligned sentence pairs from Kauchak (2013) , the aligned revision sentence pairs in Woodsend and Lapata (2011) , and the PWKP corpus, totaling about 296K sentence pairs.",
"The tuning set is the same as for the above systems.",
"In order to isolate the effect of NMT, we also implement SEMoses, where the neural-based component is replaced by the phrase-based MT system Moses, 9 which is also used in HYBRID.",
"The training, tuning and test sets are the same as in the case of SENTS.",
"MGIZA 10 is used for word alignment.",
"The KenLM language model is trained using the target side of the training corpus.",
"Additional baselines.",
"We report human and automatic evaluation scores for Identity (where the output is identical to the input), for Simple Wikipedia where the output is the corresponding aligned sentence in the PWKP corpus, and for the SBMT-SARI system, tuned against SARI (Xu et al., 2016) , which maximized the SARI score on this test set in previous works (Nisioi et al., 2017; Zhang and Lapata, 2017) .",
"Automatic evaluation.",
"The automatic metrics used for the evaluation are: (1) BLEU (Papineni et al., 2002) (2) SARI (System output Against References and against the Input sentence; Xu et al., 2016) , which compares the n-grams of the system output with those of the input and the human references, separately evaluating the quality of words that are added, deleted and kept by the systems.",
"(3) F add : the addition component of the SARI score (F-score); (4) F keep : the keeping component of the SARI score (F-score); (5) P del : the deletion component of the SARI score (precision).",
"11 Each metric is computed against the 8 available references.",
"We also assess system conservatism, reporting the percentage of sentences copied from the input (%Same), the averaged Levenshtein distance from the source (LD SC , which considers additions, deletions, and substitutions), and the number of source sentences that are split (#Split).",
"12 Human evaluation.",
"Human evaluation is carried out by 3 in-house native English annotators, who rated the different input-output pairs for the different systems according to 4 parameters: Grammaticality (G), Meaning preservation (M), Simplicity (S) and Structural Simplicity (StS).",
"Each input-output pair is rated by all 3 annotators.",
"Elicitation questions are given in Table 1 .",
"As the selection process of the input-output pairs in the test corpus of Xu et al.",
"(2016) , as well as their crowdsourced references, are explicitly biased towards lexical simplification, the use of human evaluation permits us to evaluate the structural aspects of the system outputs, even where structural operations are not attested in the references.",
"Indeed, we show that system outputs may receive considerably higher structural simplicity scores than the source, in spite of the sample selection bias.",
"Following previous work (e.g., Narayan and Gardent, 2014; Xu et al., 2016; Nisioi et al., 2017) , Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"Note that in the first question, the input sentence is not taken into account.",
"The grammaticality of the input is assessed by evaluating the Identity transformation (see Table 2 ), providing a baseline for the grammaticality scores of the other systems.",
"Following N17, a -2 to +2 scale is used for measuring simplicity, where a 0 score indicates that the input and the output are equally complex.",
"This scale, compared to the standard 1 to 5 scale, permits a better differentiation between cases where simplicity is hurt (the output is more complex than the original) and between cases where the output is as simple as the original, for example in the case of the identity transformation.",
"Structural simplicity is also evaluated with a -2 to +2 scale.",
"The question for eliciting StS is accompanied with a negative example, showing a case of lexical simplification, where a complex word is replaced by a simple one (the other questions appear without examples).",
"A positive example is not included so as not to bias the annotators by revealing the nature of the operations we focus on (splitting and deletion).",
"We follow N17 in applying human evaluation on the first 70 sentences of the test corpus.",
"13 The resulting corpus, totaling 1960 sentence pairs, each annotated by 3 annotators, also include the additional experiments described in Section 7 as well as the outputs of the NTS and SENTS systems used with the default initialization.",
"The inter-annotator agreement, using Cohen's quadratic weighted κ (Cohen, 1968) , is computed as the average agreement of the 3 annotator pairs.",
"The obtained rates are 0.56, 0.75, 0.47 and 0.48 for G, M, S and StS respectively.",
"System scores are computed by averaging over the 3 annotators and the 70 sentences.",
"G Is the output fluent and grammatical?",
"M Does the output preserve the meaning of the input?",
"S Is the output simpler than the input?",
"StS Is the output simpler than the input, ignoring the complexity of the words?",
"Table 2 : Human evaluation of the different NMT-based systems.",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"The highest score in each column appears in bold.",
"Structural simplification systems are those that explicitly model structural operations.",
"Results Human evaluation.",
"Results are presented in Table 2 .",
"First, we can see that the two SENTS systems outperform HYBRID in terms of G, M, and S. SENTS-h1 is the best scoring system, under all human measures.",
"In comparison to NTS, SENTS scores markedly higher on the simplicity judgments.",
"Meaning preservation and grammaticality are lower for SENTS, which is likely due to the more conservative nature of NTS.",
"Interestingly, the application of the splitting rules by themselves does not yield a considerably simpler sentence.",
"This likely stems from the rules not necessarily yielding grammatical sentences (NTS often serves as a grammatical error corrector over it), and from the incorporation of deletions, which are also structural operations, and are performed by the neural system.",
"An example of high structural simplicity scores for SENTS resulting from deletions is presented in Table 5 , together with the outputs of the other systems and the corresponding human evaluation scores.",
"NTS here performs lexical simplification, replacing the word \"incursions\" by \"raids\" or \"attacks\"'.",
"On the other hand, the high StS scores obtained by DSS and SEMoses are due to sentence splittings.",
"Automatic evaluation.",
"Results are presented in Table 3 .",
"Identity obtains much higher BLEU scores than any other system, suggesting that BLEU may not be informative in this setting.",
"SARI seems more informative, and assigns the lowest score to Identity and the second highest to the reference.",
"Both SENTS systems outperform HYBRID in terms of SARI and all its 3 sub-components.",
"The h4 setting (hypothesis #4 in the beam) is generally best, both with and without the splitting rules.",
"Comparing SENTS to using NTS alone (without splitting), we see that SENTS obtains higher SARI scores when hypothesis #1 is used and that NTS obtains higher scores when hypothesis #4 is used.",
"This may result from NTS being more conservative than SENTS (and HYBRID), which is rewarded by SARI (conservatism is indicated by the %Same column).",
"Indeed for h1, %Same is reduced from around 66% for NTS, to around 7% for SENTS.",
"Conservatism further decreases when h4 is used (for both NTS and SENTS).",
"Examining SARI's components, we find that SENTS outperforms NTS on F add , and is comparable (or even superior for h1 setting) to NTS on P del .",
"The superior SARI score of NTS over SENTS is thus entirely a result of a superior F keep , which is easier for a conservative system to maximize.",
"Comparing HYBRID with SEMoses, both of which use Moses, we find that SEMoses obtains higher BLEU and SARI scores, as well as G and M human scores, and splits many more sentences.",
"HYBRID scores higher on the human simplicity measures.",
"We note, however, that applying NTS alone is inferior to HYBRID in terms of simplicity, and that both components are required to obtain high simplicity scores (with SENTS).",
"We also compare the sentence splitting component used in our systems (namely DSS) to that used in HYBRID, abstracting away from deletionbased and lexical simplification.",
"We therefore apply DSS to the test set (554 sentences) of the Table 4 : Automatic and human evaluation for the different combinations of Moses and DSS.",
"The automatic metrics as well as the lexical and structural properties reported (%Same: proportion of sentences copied from the input; LDSC: Averaged Levenshtein distance from the source; #Split: number of split sentences) concern the 359 sentences of the test corpus.",
"Human evaluation, with the G, M, S, and StS parameters, is applied to the first 70 sentences of the corpus.",
"The highest score in each column appears in bold.",
"WEB-SPLIT corpus (See Section 2), which focuses on sentence splitting.",
"We compare our results to those reported for a variant of HYBRID used without the deletion module, and trained on WEB-SPLIT .",
"DSS gets a higher BLEU score (46.45 vs. 39.97) and performs more splittings (number of output sentences per input sentence of 1.73 vs. 1.26).",
"Additional Experiments Replacing the parser by manual annotation.",
"In order to isolate the influence of the parser on the results, we implement a semi-automatic version of the semantic component, which uses manual UCCA annotation instead of the parser, focusing of the first 70 sentences of the test corpus.",
"We employ a single expert UCCA annotator and use the UCCAApp annotation tool .",
"Results are presented in Table 6 , for both SENTS and SEMoses.",
"In the case of SEMoses, meaning preservation is improved when manual UCCA annotation is used.",
"On the other hand, simplicity degrades, possibly due to the larger number of Scenes marked by the human annotator (TUPA tends to under-predict Scenes).",
"This effect doesn't show with SENTS, where trends are similar to the automatic parses case, and high simplicity scores are obtained.",
"This demonstrates that UCCA parsing technology is sufficiently mature to be used to carry out structural simplification.",
"We also directly evaluate the performance of the parser by computing F1, Recall and Precision DAG scores (Hershcovich et al., 2017) , against the manual UCCA annotation.",
"14 We obtain for primary edges (i.e.",
"edges that form a tree structure) scores of 68.9 %, 70.5%, and 67.4% for F1, Recall and Precision respectively.",
"For remotes edges (i.e.",
"additional edges, forming a DAG), the scores are 45.3%, 40.5%, and 51.5%.",
"These results are comparable with the out-of-domain results reported by Hershcovich et al.",
"(2017) .",
"Experiments on Moses.",
"We test other variants of SEMoses, where phrase-based MT is used instead of NMT.",
"Specifically, we incorporate semantic information in a different manner by implementing two additional models: (1) SETrain1-Moses, where a new training corpus is obtained by applying the splitting rules to the target side of the G M S StS Identity In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the incursions of other Viking groups.",
"5.00 5.00 0.00 0.00 Simple Wikipedia In return, Rollo swore fealty to Charles, converted to Christianity, and swore to defend the northern region of France against raids by other Viking groups.",
"4.67 5.00 1.00 0.00 SBMT-SARI In return, Rollo swore fealty to Charles, converted to Christianity, and set out to defend the north of France from the raids of other viking groups.",
"4.67 4.67 0.67 0.00 NTS-h1 In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the raids of other Viking groups.",
"5.00 5.00 1.00 0.00 NTS-h4 In return, Rollo swore fealty to Charles, converted to Christianity, and undertook to defend the northern region of France against the attacks of other Viking groups.",
"4.67 5.00 1.00 0.00 DSS Rollo swore fealty to Charles.",
"Rollo converted to Christianity.",
"Rollo undertook to defend the northern region of France against the incursions of other viking groups.",
"4.00 4.33 1.33 1.33 HYBRID In return Rollo swore, and undertook to defend the region of France., Charles, converted 2.33 2.00 0.33 0.33 SEMoses Rollo swore put his seal to Charles.",
"Rollo converted to Christianity.",
"Rollo undertook to defend the northern region of France against the incursions of other viking groups.",
"3.33 4.00 1.33 1.33 SENTS-h1 Rollo swore fealty to Charles.",
"5.00 2.00 2.00 2.00 SENTS-h4 Rollo swore fealty to Charles and converted to Christianity.",
"5.00 2.67 1.33 1.33 Table 5 : System outputs for one of the test sentences with the corresponding human evaluation scores (averaged over the 3 annotators).",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"Table 6 : Human evaluation using manual UCCA annotation.",
"Grammaticality (G) and Meaning preservation (M) are measured using a 1 to 5 scale.",
"A -2 to +2 scale is used for measuring simplicity (S) and structural simplicity (StS) of the output relative to the input sentence.",
"X m refers to the semi-automatic version of the system X. training corpus; (2) SETrain2-Moses, where the rules are applied to the source side.",
"The resulting parallel corpus is concatenated to the original training corpus.",
"We also examine whether training a language model (LM) on split sentences has a positive effect, and train the LM on the split target side.",
"For each system X, the version with the LM trained on split sentences is denoted by X LM .",
"We repeat the same human and automatic evaluation protocol as in §6, presenting results in Table 4 .",
"Simplicity scores are much higher in the case of SENTS (that uses NMT), than with Moses.",
"The two best systems according to SARI are SEMoses and SEMoses LM which use DSS.",
"In fact, they resemble the performance of DSS applied alone (Tables 2 and 3) , which confirms the high degree of conservatism observed by Moses in simplification (Alva-Manchego et al., 2017) .",
"Indeed, all Moses-based systems that don't apply DSS as preprocessing are conservative, obtaining high scores for BLEU, grammaticality and meaning preservation, but low scores for simplicity.",
"Training the LM on split sentences shows little improvement.",
"Conclusion We presented the first simplification system combining semantic structures and neural machine translation, showing that it outperforms existing lexical and structural systems.",
"The proposed approach addresses the over-conservatism of MTbased systems for TS, which often fail to modify the source in any way.",
"The semantic component performs sentence splitting without relying on a specialized corpus, but only an off-theshelf semantic parser.",
"The consideration of sentence splitting as a decomposition of a sentence into its Scenes is further supported by recent work on structural TS evaluation (Sulem et al., 2018) , which proposes the SAMSA metric.",
"The two works, which apply this assumption to different ends (TS system construction, and TS evaluation), confirm its validity.",
"Future work will leverage UCCA's cross-linguistic applicability to support multi-lingual TS and TS pre-processing for MT."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"4",
"5",
"6",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Semantic Representation",
"The Semantic Rules",
"Neural Component",
"Experimental Setup",
"Results",
"Additional Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-16#paper-994#slide-13 | Conclusion 2 | Sentence splitting is treated as the decomposition of the sentence into its Scenes (as in SAMSA evaluation measure;
Sulem, Abend and Rappoport, NAACL 2018)
Future work will leverage UCCAs cross-linguistic applicability to support multi-lingual text simplification and simplification pre-processing for MT. | Sentence splitting is treated as the decomposition of the sentence into its Scenes (as in SAMSA evaluation measure;
Sulem, Abend and Rappoport, NAACL 2018)
Future work will leverage UCCAs cross-linguistic applicability to support multi-lingual text simplification and simplification pre-processing for MT. | [] |
GEM-SciDuet-train-17#paper-1001#slide-1 | 1001 | Consistent Improvement in Translation Quality of Chinese-Japanese Technical Texts by Adding Additional Quasi-parallel Training Data | Bilingual parallel corpora are an extremely important resource as they are typically used in data-driven machine translation. There already exist many freely available corpora for European languages, but almost none between Chinese and Japanese. The constitution of large bilingual corpora is a problem for less documented language pairs. We construct a quasi-parallel corpus automatically by using analogical associations based on certain number of parallel corpus and a small number of monolingual data. Furthermore, in SMT experiments performed on Chinese-Japanese, by adding this kind of data into the baseline training corpus, on the same test set, the evaluation scores of the translation results we obtained were significantly or slightly improved over the baseline systems. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126
],
"paper_content_text": [
"Introduction Bilingual corpora are an essential resource for current SMT.",
"So as to enlarge such corpora, technology research has been done in extracting parallel sentences from existing non-parallel corpora.",
"The approaches and difficulties depend on the parallelness of the given bilingual parallel corpus.",
"Fung and Cheung (2004) give a detailed description of the types of non-parallel corpora.",
"They proposed a completely unsupervised method for mining parallel sentences from quasi-comparable bilingual texts which include both in-topic and off-topic documents.",
"Chu et al.",
"(2013) proposed a novel method of classifier training and testing that simulates the real parallel sentence extraction process.",
"They used linguistic knowledge of Chinese character features.",
"Their approach improved in several aspects and worked well for extracting parallel sentences from quasi-comparable corpora.",
"Their experimental results on parallel sentence extraction from quasi-comparable corpora indicated that their proposed system performs significantly better than previous studies.",
"There also exist some works on extracting parallel parallel sentences from comparable corpora, such as Wikipedia.",
"Smith et al.",
"(2010) include features which make use of the additional annotation given by Wikipedia, and features using an automatically induced lexicon model.",
"In this paper, we propose to construct a bilingual corpus of quasi-parallel sentences automatically.",
"This is different from parallel or comparable or quasi-comparable corpora.",
"A quasi-parallel corpus contains aligned sentence pairs that are translations to each other to a certain extent.",
"The method relies on a certain number of existing parallel sentences and a small number of unaligned, unrelated, monolingual sentences.",
"To construct the quasi-parallel corpus, analogical associations captured by analogical clusters are used.",
"The motivation is that the construction of large bilingual corpora is a problem for less-resourced language pairs, but it is to be noticed that the monolingual data are easier to access in large amounts.",
"The languages that we tackle in this paper are: Chinese and Japanese.",
"Our approach leverages Chinese and Japanese monolingual data collected from the Web by clustering and grouping these sentences using analogical associations.",
"Our clusters can be considered as rewriting models for new sentence generation.",
"We generate new sentences using these rewriting models starting from seed sentences from the monolingual part of the existing parallel corpus we used, and filter out dubious newly over-generated sentences.",
"Finally, we extract newly generated sentences and assess the strength of translation relations between them based on the similarity, across languages, between the clusters they were generated from.",
"2 Chinese and Japanese Linguistic Resources Chinese and Japanese Parallel Sentences The Chinese and Japanese linguistic resources we use in this paper are the ASPEC-JC 1 corpus.",
"It is a parallel corpus consisting of Japanese scientific papers from the reference database and electronic journal site J-STAGE of the Japan Science and Technology Agency (JST) that have been translated to Chinese after receiving permission from the necessary academic associations.",
"The parts selected were abstracts and paragraph units from the body text, as these contain the highest overall vocabulary coverage.",
"This corpus is designed for Machine Translation and is split as below (some statistics are given in Table 1 ): • Training Data: 672,315 sentences; • Development Data: 2,090 sentences; • Development-Test Data: 2,148 sentences; • Test Data: 2,107 sentences.",
"For new sentence generation from the training data, we extracted 103,629 Chinese-Japanese parallel sentences with less than 30 characters in length.",
"We propose to make use of this part of data as seed sentences for new sentence generation in both languages, then deduce and construct a Chinese-Japanese quasi-parallel corpus that we will use as additional data to inflate the baseline training corpus.",
"Chinese and Japanese Monolingual Sentences To generate new quasi-parallel data, we also use unrelated unaligned monolingual data.",
"We collected monolingual Chinese and Japanese short sentences with less than 30 characters in size from the Web using an in-house Web-crawler, mainly from the following websites: \"Yahoo China\", \"Yahoo China News\", \"douban\" for Chinese and \"Yahoo!",
"JAPAN\", \"Mainichi Japan\" for Japanese.",
"Table 2 gives the statistics of the cleaned 70,000 monolingual data that we used in the experiments.",
"3 Constructing Analogical Clusters According to Proportional Analogies Proportional Analogies Proportional analogies establish a structural relationship between four objects, A, B, C and D: 'A is to B as C is to D'.",
"An efficient algorithm for the resolution of analogical equations between strings of characters has been proposed in (Lepage, 1998) .",
"The algorithm relies on counting numbers of occurrences of characters and computing edit distances (with only insertion and deletion as edit operations) between strings of characters (d (A, B) = d (C, D) and d (A,C) = d (B, D)).",
"The algorithm uses fast bit string operations and distance computation (Allison and Dix, 1986 ).",
"Sentential Analogies We gather pairs of sentences that constitute proportional analogies, independently in Chinese and Japanese.",
"For instance, the two following pairs of Japanese sentences are said to form an analogy, because the edit distance between the sentence pair on the left of '::' is the same as between the sentence pair on the right side: d (A, B) = d (C, D) = 13 and d (A,C) = d (B, D) = 5, and the relation on the number of occurrences of characters, which must be valid for each character, may be illustrated as follows for the character 茶: 1 (in A) -1 (in B) = 0 (in C) -0 (in D).",
"We call any such two pairs of sentences a sentential analogy.",
"紅茶が 飲みた い。 : あ な た は 紅 茶 が 好 き で す か。 :: ビ ー ル が 飲 み たい。 : あなたは ビールが 好きです か。 I'd like a cup of black tea.",
": Do you like black tea?",
":: I'd like a beer.",
": Do you like beer?",
"Analogical Cluster When several sentential analogies involve the same pairs of sentences, they form a series of analogous sentences, and they can be written on a sequence of lines where each line contains one sentence pair and where any two pairs of sentences from the sequence of lines forms a sentential analogy.",
"We call such a sequence of lines an analogical cluster.",
"The size of a cluster is the number of its sentential pairs.",
"The following example in Japanese shows three possible sentential analogies and the size of the cluster is 3.",
"English translation is given below.",
": Do you like beer?",
"I'd like some juice.",
": Do you like juice?",
"As we will see in Section 4, analogical clusters can be considered as rewriting models.",
"New sentences can be generated using them.",
"Experiments on clusters production In each language, independently, we also construct analogical clusters from the unrelated monolingual data.",
"The number of unique sentences used is 70,000 for both languages.",
"Table 3 summarizes some statistics on the clusters produced.",
"Chinese Japanese # of different sentences 70,000 70,000 # of clusters 23,182 21,975 Table 3 : Statistics on the Chinese and Japanese clusters constructed from our unrelated monolingual data independently in each language.",
"Determining corresponding clusters by computing similarity The steps for determining corresponding clusters are, • First, for each sentence pair in a cluster, we extract the change between the left and the right sides by finding the longest common subsequence (LCS) (Wagner and Fischer, 1974).",
"• Then, we consider the changes between the left (S le f t ) and the right (S right ) sides in one cluster as two sets.",
"We perform word segmentation 2 on these changes in sets to obtain minimal sets of changes made up with words or characters.",
"• Finally, we compute the similarity between the left sets (S le f t ) and the right sets (S right ) of Chinese and Japanese clusters.",
"To this end, we make use of the EDR dictionary 3 and word-to-word alignments (based on ASPEC-JC data using Anymalign 4 ), We keep 72,610 word-to-word correspondences obtained with Anymalign in 1 hour after filtering on both translation probabilities with a threshold of 0.3, the quality of these word-to-word correspondences is about 96%.",
"We also use a traditional-simplified Chinese variant table 5 and Kanji-Hanzi Conversion Table 6 to translate all Japanese words into Chinese, or convert Japanese characters into simplified Chinese characters.",
"We calculate the similarity between two Chinese and Japanese word sets according to a classical Dice formula: Sim = 2 × |S zh ∩ S ja | |S zh | + |S ja | (1) Here, S zh and S ja denote the minimal sets of changes across the clusters (both on the left or right) in both languages (after translation and conversion).",
"To compute the similarity between two Chinese and Japanese clusters we take the arithmetic mean on both sides, as given in formula (2) : Sim C zh −C ja = 1 2 (Sim le f t + Sim right ) (2) We set different thresholds for Sim C zh −C ja and check the correspondence between these extracted clusters by sampling.",
"Where the Sim C zh −C ja threshold is set to 0.300, the acceptability of the correspondence between the extracted clusters reaches 78%.",
"About 15,710 corresponding clusters were extracted (Sim C zh −C ja ≥ 0.300) by the above steps.",
"Generating New Sentences Using Analogical Associations Generation of New Sentences Analogy is not only a structural relationship.",
"It is also a process (Itkonen, 2005) by which, \"given two related forms and only one form, the fourth missing form is coined\" (de Saussure, 1916) .",
"If the objects A, B, C are given, we may obtain an other unknown object D according to the analogical equation A : B :: C : D. This principle can be illustrated as follows with sentences: 紅 茶 が 飲 みたい。 : あ な た は 紅 茶 が 好 き で すか。 :: ビ ー ル が 飲 み たい。 : x ⇒ x = あ な た は ビ ー ル が 好 きですか。 In this example, the solution of the analogical equation is D = \"あなたはビールが好きです か。\" (Do you like beer?).",
"If we regard each sentence pair in a cluster as a pair A : B (left to right or right to left), and any short sentence not belonging to the cluster as C (a seed sentence), the analogical equation A : B :: C : D of unknown D can be forged.",
"Such analogical equations allow us to produce new candidate sentences.",
"Each sentence pair in a cluster is a potential template for the generation of new candidate sentences.",
"Experiments on New Sentences Generation and Filtering by N-sequences For the generation of new sentences, we make use of the clusters we obtained from the experiments in Section 3.2 as rewriting models.",
"The seed sentences as input data for new sentences generation are the unique Chinese and Japanese short sentences from the 103,629 ASPEC-JC parallel sentences (less than 30 characters).",
"In this experiment, we generated new sentences with each pair of sentences in clusters for Chinese and Japanese respectively.",
"Table 4 gives the statistics for new sentence generation.",
"To filter out invalid and grammatically incorrect sentences and keep only well-formed sentences with high fluency of expression and adequacy of meaning, we eliminate any sentence that contains an N-sequence of a given length unseen in the reference corpus.",
"This technique to assess the quality of outputs of NLP systems has been used in previous works (Lin and Hovy, 2003; Doddington, 2002; Lepage and Denoual, 2005) .",
"In our experiment, we introduced begin/end markers to make sure that the beginning and the end of a sentence are also correct.",
"The best quality was obtained for the values N=6 for Chinese and N=7 for Japanese with the size of reference corpus (about 1,700,000 monolingual data for both Chinese and Japanese).",
"Quality assessment was performed by extracting a sample of 1,000 sentences randomly and checking manually by native speakers.",
"The grammatical quality was at least 96%.",
"This means that 96% of the Chinese and Japanese sentences may be considered as grammatically correct.",
"For new valid sentences, we remember their corresponding seed sentences and the cluster they were generated from.",
"Deducing and Acquiring Quasi-parallel Sentences We deduce translation relations based on the initial parallel corpus and corresponding clusters between Chinese and Japanese.",
"If the seeds of two new generated sentences in Chinese and Japanese are aligned in the initial parallel corpus, and if the clusters which they were generated from are corresponding, we suppose that these two Chinese and Japanese newly generated sentences are translations of one another to a certain extent.",
"Table 5 gives the statistics on the quasi-parallel deducing obtained.",
"Among the 35,817 unique Chinese-Japanese quasi-parallel sentences obtained, about 74% were found to be exact translations by manual check on a sampling of 1,000 pairs of sentences.",
"This justifies our use of the term \"quasi-parallel\" for this kind of data.",
"SMT Experiments Experimental Protocol To assess the contribution of the generated quasiparallel corpus, we propose to compare two SMT systems.",
"The first one is constructed using the initial given ASPEC-JC parallel corpus.",
"This is the baseline.",
"The second one adds the additional quasi-parallel corpus obtained using analogical associations and analogical clusters.",
"Baseline: The statistics of the data used in the experiments are given in Table 6 (left).",
"The training corpus consists of 672,315 sentences of initial Chinese-Japanese parallel corpus.",
"The tuning set is 2,090 sentences from the ASPEC-JC.dev corpus, and 2,107 sentences also from the ASPEC-JC.test corpus were used for testing.",
"We perform all experiments using the standard GIZA++/MOSES pipeline (Och and Ney, 2003) .",
"Adding Additional Quasi-parallel Corpus: The statistics of the data used in this second setting are given in Table 6 (right).",
"The training corpus is made of 708,132 (672,315 + 35,817) sentences, i.e., the combination of the initial Chinese-Japanese parallel corpus used in the baseline and the quasi-parallel corpus.",
"Experimental Results: Table 7 and Table 8 give the evaluation results.",
"We use the standard metrics BLEU (Papineni et al., 2002) , NIST (Doddington et al., 2000) , WER (Nießen et al., 2000) , TER (Snover et al., 2006) and RIBES (Isozaki et al., 2010) .",
"As Table 7 shows, significant improvement over the baseline is obtained by adding the quasi-parallel generated data based on the Moses version 1.0, and Table 8 shows a slightly improvement over the baseline is obtained by adding the quasi-parallel generated data based on the Moses version 2.1.1.",
"Influence of Segmentation on Translation Results We also use Kytea 7 to segment Chinese and Japanese.",
"Table 9 and Table 10 show the evaluation results by using Kytea as the segmentation tools based on standard GIZA++/MOSES (different version in 1.0 and 2.1.1) pipeline.",
"As the evaluation scores (BLEU and RIBES) shown in Table 7, Table 8 , Table 9 and Table 10 : • We obtained more increase based on Moses version 1.0 than Moses version 2.1.1 by using urheen/mecab or kytea for Chinese and Japanese as the segmentation tools; • But, based on Moses version 2.1.1 we obtained higher BLEU and RIBES than Moses version 1.0 by using two different segmentation tools; • Based on the same Moses version, most of the BLEU and RIBES scores are higher by using urheen and mecab as the segmentation tools for Chinese and Japanese than using kytea (except ja-zh by using kytea based on Moses version 2.1.1).",
"Issues for Context-aware Machine Translation Context-aware plays an important role in disambiguation and machine translation.",
"Usually, the MT systems look at surface form only, conversational speech tends to be more concise and more context-dependent (Example1), and some ambiguities often arises due to polysemy (Example2 from our experiment results by using urheen and mecab as the segmentation tools) and homonymy.",
"Example1: 下次我要尝尝白的。 Reference en: I'll try Chinese wine next time.",
"Reference ja: 今度は中 中 中国 国 国の の のワ ワ ワイ イ イン ン ンを試し てみます。 MT output en: Next time I'll try the white.",
"MT output ja: 次回は私は白 白 白を試してみま す。 Example2: 结果发现,其中昼夜均符合 环境标准的地点是,平成1 5 年 度 为 6 3 处 处 处 ( 3 6 . 2%),平成16年度为5 9处 处 处(37.8%)。 Reference ja: その結果 ,全地点のうち昼 夜ともに環境基準地を達成 したのは ,平成15年度の 63地 地 地点 点 点(36 .2% ), 平 成 1 6 年 度 で 5 9 地 地 地 点 点 点 (37.8%)であった。 MT output (google) : それは、昼と夜が環境基準 に 沿 っ た も の で あ る 場 所 が63(36.2% ) が 平 成15年 であることが判明した、平 成59(37.8%)が16歳。 MT output (our baseline) : その結果,その昼夜ともに 環境基準の地点は,平成1 5年度は63箇 箇 箇所 所 所(36. 2%)では,平成16年度 は59箇 箇 箇所 所 所(37.8%) であった。 MT output (our base- line+add) : その結果,その昼夜ともに 環境基準の地点は,平成1 5年度は63箇 箇 箇所 所 所(36. 2%)では,平成16年度 は59箇 箇 箇所 所 所(37.8%) であった。 As the Example2 shows, we obtained the better and more correct translation results based on our translation systems.",
"Correct meaning of a word or a sentence depends context information.",
"The large training data in the same domain is also an extremely important factor in translation systems.",
"They allow us to obtain the well-formed translation result with high fluency of expression and adequacy of meaning.",
"Conclusion We presented a technique to automatically generate a quasi-parallel corpus to inflate the training corpus used to build an SMT system.",
"The experimental data we use are ASPEC-JC corpus and the monolingual data were collected from the Web.",
"We produced analogical clusters as rewriting models to generate new sentences, and filter newly over-generated sentences by the N-sequences filtering method.",
"The grammatical quality of the valid new sentences is at least 96%.",
"We then assess translation relations between newly generated short sentences across both languages, relying on the similarity between the clusters across languages.",
"We automatically obtained 35,817 Chinese-Japanese sentence pairs, 74% of which were found to be exact translations.",
"We call such sentence pairs a quasi-parallel corpus.",
"In SMT experiments performed on Chinese-Japanese, using the standard GIZA++/MOSES pipeline, by adding our quasi-parallel data, we were able to inflate the training data in a rewarding way.",
"On the same test set, based on different MOSES versions and segmentation tools, all of translation scores significantly or slightly improved over the baseline systems.",
"It should be stressed that the data that allowed us to get such improvement are not so large in quantity and not so good in quality, but we were able to control both quantity and quality so as to consistently improve translation quality.",
"Table 10 : Evaluation results for Chinese-Japanese translation across two SMT systems (baseline and baseline + additional quasi-parallel data), Moses version: 2.1.1, segmentation tools: Kytea."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"3.1",
"3.1.1",
"3.1.2",
"3.2",
"3.3",
"4.1",
"4.2",
"4.3",
"5.1",
"5.2",
"5.3",
"6"
],
"paper_header_content": [
"Introduction",
"Chinese and Japanese Parallel Sentences",
"Chinese and Japanese Monolingual Sentences",
"Proportional Analogies",
"Sentential Analogies",
"Analogical Cluster",
"Experiments on clusters production",
"Determining corresponding clusters by computing similarity",
"Generation of New Sentences",
"Experiments on New Sentences Generation and Filtering by N-sequences",
"Deducing and Acquiring Quasi-parallel Sentences",
"Experimental Protocol",
"Influence of Segmentation on Translation Results",
"Issues for Context-aware Machine Translation",
"Conclusion"
]
} | GEM-SciDuet-train-17#paper-1001#slide-1 | SMT Experiments | Experimental results of SMT
BLEU NIST WER TER RIBES baseline zh-ja + additional training data
Table: Evaluation results for ChineseJapanese translation across two
SMT systems (baseline and baseline + additional quasi-parallel data),
Moses version: 1.0, segmentation tools: urheen and mecab. | Experimental results of SMT
BLEU NIST WER TER RIBES baseline zh-ja + additional training data
Table: Evaluation results for ChineseJapanese translation across two
SMT systems (baseline and baseline + additional quasi-parallel data),
Moses version: 1.0, segmentation tools: urheen and mecab. | [] |
GEM-SciDuet-train-18#paper-1009#slide-0 | 1009 | Marrying Up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding | The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data. In this paper, we ask the question: "Can we combine a neural network (NN) with regular expressions (RE) to improve supervised learning for NLP?". In answer, we develop novel methods to exploit the rich expressiveness of REs at different levels within a NN, showing that the combination significantly enhances the learning effectiveness when a small number of training examples are available. We evaluate our approach by applying it to spoken language understanding for intent detection and slot filling. Experimental results show that our approach is highly effective in exploiting the available training data, giving a clear boost to the RE-unaware NN. (Figure 1 residue: example sentence "flights from Boston to Miami"; intent RE /^flights? from/ yields the intent label flight; slot RE /from (__CITY) to (__CITY)/ yields the slot labels B-fromloc.city and B-toloc.city.) | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294
],
"paper_content_text": [
"Introduction Regular expressions (REs) are widely used in various natural language processing (NLP) tasks like pattern matching, sentence classification, sequence labeling, etc.",
"(Chang and Manning, 2014) .",
"As a technique based on human-crafted rules, it is concise, interpretable, tunable, and does not rely on much training data to generate.",
"As such, it is commonly used in industry, especially when the available training examples are limited -a problem known as few-shot learning (GC et al., 2015) .",
"While powerful, REs have a poor generalization ability because all synonyms and variations in a RE must be explicitly specified.",
"As a result, REs are often ensembled with data-driven methods, such as neural network (NN) based techniques, where a set of carefully-written REs are used to handle certain cases with high precision, leaving the rest for data-driven methods.",
"We believe the use of REs can go beyond simple pattern matching.",
"In addition to being a separate classifier to be ensembled, a RE also encodes a developer's knowledge for the problem domain.",
"The knowledge could be, for example, the informative words (clue words) within a RE's surface form.",
"We argue that such information can be utilized by data-driven methods to achieve better prediction results, especially in few-shot learning.",
"This work investigates the use of REs to improve NNs -a learning framework that is widely used in many NLP tasks (Goldberg, 2017) .",
"The combination of REs and a NN allows us to exploit the conciseness and effectiveness of REs and the strong generalization ability of NNs.",
"This also provides us an opportunity to learn from various kinds of REs, since NNs are known to be good at tolerating noises (Xie et al., 2016) .",
"This paper presents novel approaches to combine REs with a NN at different levels.",
"At the input layer, we propose to use the evaluation outcome of REs as the input features of a NN (Sec.3.2).",
"At the network module level, we show how to exploit the knowledge encoded in REs to guide the attention mechanism of a NN (Sec.",
"3.3).",
"At the output layer, we combine the evaluation outcome of a RE with the NN output in a learnable manner (Sec.",
"3.4) .",
"We evaluate our approach by applying it to two spoken language understanding (SLU) tasks, namely intent detection and slot filling, which respectively correspond to two fundamental NLP tasks: sentence classification and sequence labeling.",
"To demonstrate the usefulness of REs in realworld scenarios where the available number of annotated data can vary, we explore both the fewshot learning setting and the one with full training data.",
"Experimental results show that our approach is highly effective in utilizing the available Figure 1 : A sentence from the ATIS dataset.",
"REs can be used to detect the intent and label slots.",
"annotated data, yielding significantly better learning performance over the RE-unaware method.",
"Our contributions are as follows.",
"(1) We present the first work to systematically investigate methods for combining REs with NNs.",
"(2) The proposed methods are shown to clearly improve the NN performance in both the few-shot learning and the full annotation settings.",
"(3) We provide a set of guidance on how to combine REs with NNs and RE annotation.",
"Background Typesetting In this paper, we use italic for emphasis like intent detection, the Courier typeface for abbreviations like RE, bold italic for the first appearance of a concept like clue words, Courier surrounded by / for regular expressions like /list( the)?",
"AIRLINE/, and underlined italic for words of sentences in our dataset like Boston.",
"Problem Definition Our work targets two SLU tasks: intent detection and slot filling.",
"The former is a sentence classification task where we learn a function to map an input sentence of n words, x = [x 1 , ..., x n ], to a corresponding intent label, c. The latter is a sequence labeling task for which we learn a function to take in an input query sentence of n words, x = [x 1 , ..., x n ], to produce a corresponding labeling sequence, y = [y 1 , ..., y n ], where y i is the slot label of the corresponding word, x i .",
"Take the sentence in Fig.",
"1 as an example.",
"A successful intent detector would suggest the intent of the sentence as flight, i.e., querying about flight-related information.",
"A slot filler, on the other hand, should identify the slots fromloc.city and toloc.city by labeling Boston and Miami, respectively, using the begin-inside-outside (BIO) scheme.",
"The Use of Regular Expressions In this work, a RE defines a mapping from a text pattern to several REtags which are the same as or related to the target labels (i.e., intent and slot labels).",
"A search function takes in a RE, applies it to all sentences, and returns any texts that match the pattern.",
"We then assign the REtag (s) (that are associated with the matching RE) to either the matched sentence (for intent detection) or some matched phrases (for slot filling).",
"Specifically, our REtags for intent detection are the same as the intent labels.",
"For example, in Fig.",
"1 , we get a REtag of flight that is the same as the intent label flight.",
"For slot filling, we use two different sets of REs.",
"Given the group functionality of RE, we can assign REtags to our interested RE groups (i.e., the expressions defined inside parentheses).",
"The translation from REtags to slot labels depends on how the corresponding REs are used.",
"(1) When REs are used at the network module level (Sec.",
"3.3), the corresponding REtags are the same as the target slot labels.",
"For instance, the slot RE in Fig.",
"1 will assign fromloc.city to the first RE group and toloc.city to the second one.",
"Here, CITY is a list of city names, which can be replaced with a RE string like /Boston|Miami|LA|.../.",
"(2) If REs are used in the input (Sec.",
"3.2) and the output layers (Sec.",
"3.4) of a NN, the corresponding REtag would be different from the target slot labels.",
"In this context, the two RE groups in Fig.",
"1 would be simply tagged as city to capture the commonality of three related target slot labels: fromloc.city, toloc.city, stoploc.city.",
"Note that we could use the target slot labels as REtags for all the settings.",
"The purpose of abstracting REtags to a simplified version of the target slot labels here is to show that REs can still be useful when their evaluation outcome does not exactly match our learning objective.",
"Further, as shown in Sec.",
"4.2, using simplified REtags can also make the development of REs easier in our tasks.",
"Intuitively, complicated REs can lead to better performance but require more efforts to generate.",
"Generally, there are two aspects affecting RE complexity most: the number of RE groups 1 and or clauses (i.e., expressions separated by the disjunction operator |) in a RE group.",
"Having a larger number of RE groups often leads to better 1 When discussing complexity, we consider each semantically independent consecutive word sequence as a RE group (excluding clauses, such as \\w+, that can match any word).",
"For instance, the RE: /how long( \\w+){1,2}?",
"(it take|flight)/ has two RE groups: (how long) and (it take|flight).",
"precision but lower coverage on pattern matching, while a larger number of or clauses usually gives a higher coverage but slightly lower precision.",
"Our Approach As depicted in Fig.",
"2 , we propose to combine NNs and REs from three different angles.",
"Base Models We use the Bi-directional LSTM (BLSTM) as our base NN model because it is effective in both intent detection and slot filling (Liu and Lane, 2016) .",
"Intent Detection.",
"As shown in Fig.",
"2 , the BLSTM takes as input the word embeddings [x 1 , ..., x n ] of a n-word sentence, and produces a vector h i for each word i.",
"A self-attention layer then takes in the vectors produced by the BLSTM to compute the sentence embedding s: s = i α i h i , α i = exp(h i Wc) i exp(h i Wc) (1) where α i is the attention for word i, c is a randomly initialized trainable vector used to select informative words for classification, and W is a weight matrix.",
"Finally, s is fed to a softmax classifier for intent classification.",
"Slot Filling.",
"The model for slot filling is straightforward -the slot label prediction is generated by a softmax classier which takes in the BLSTM's output h i and produces the slot label of word i.",
"Note that attention aggregation in Fig.",
"2 is only employed by the network module level method presented in Sec.",
"3.3.",
"Using REs at the Input Level At the input level, we use the evaluation outcomes of REs as features which are fed to NN models.",
"Intent Detection.",
"Our REtag for intent detection is the same as our target intent label.",
"Because real-world REs are unlikely to be perfect, one sentence may be matched by more than one RE.",
"This may result in several REtags that are conflict with each other.",
"For instance, the sentence list the Delta airlines flights to Miami can match a RE: /list( the)?",
"AIRLINE/ that outputs tag airline, and another RE: /list( \\w+){0,3} flights?/ that outputs tag flight.",
"To resolve the conflicting situations illustrated above, we average the randomly initialized trainable tag embeddings to form an aggregated embedding as the NN input.",
"There are two ways to use the aggregated embedding.",
"We can append the aggregated embedding to either the embedding of every input word, or the input of the softmax classifier (see 1 in Fig.",
"2(a) ).",
"To determine which strategy works best, we perform a pilot study.",
"We found that the first method causes the tag embedding to be copied many times; consequently, the NN tends to heavily rely on the REtags, and the resulting performance is similar to the one given by using REs alone in few-shot settings.",
"Thus, we adopt the second approach.",
"Slot Filling.",
"Since the evaluation outcomes of slot REs are word-level tags, we can simply embed and average the REtags into a vector f i for each word, and append it to the corresponding word embedding w i (as shown in 1 in Fig.",
"2(b) ).",
"Note that we also extend the slot REtags into the BIO format, e.g., the REtags of phrase New York are B-city and I-city if its original tag is city.",
"Using REs at the Network Module Level At the network module level, we explore ways to utilize the clue words in the surface form of a RE (bold blue arrows and words in 2 of Fig.",
"2 ) to guide the attention module in NNs.",
"Intent Detection.",
"Taking the sentence in Fig.",
"1 for example, the RE: /ˆflights?",
"from/ that leads to intent flight means that flights from are the key words to decide the intent flight.",
"Therefore, the attention module in NNs should leverage these two words to get the correct prediction.",
"To this end, we extend the base intent model by making two changes to incorporate the guidance from REs.",
"First, since each intent has its own clue words, using a single sentence embedding for all intent labels would make the attention less focused.",
"Therefore, we let each intent label k use different attention a k , which is then used to generate the sentence embedding s k for that intent: s k = i α ki h i , α ki = exp(h i W a c k ) i exp(h i W a c k ) (2) where c k is a trainable vector for intent k which is used to compute attention a k , h i is the BLSTM output for word i, and W a is a weight matrix.",
"The probability p k that the input sentence expresses intent k is computed by: where w k , logit k , b k are weight vector, logit, and bias for intent k, respectively.",
"p k = exp(logit k ) k exp(logit k ) , logit k = w k s k + b k (3) x 1 x 2 h 1 h 2 x 3 h Second, apart from indicating a sentence for intent k (positive REs), a RE can also indicate that a sentence does not express intent k (negative REs).",
"We thus use a new set of attention (negative attentions, in contrast to positive attentions), to compute another set of logits for each intent with Eqs.",
"2 and 3.",
"We denote the logits computed by positive attentions as logit pk , and those by negative attentions as logit nk , the final logit for intent k can then be calculated as: logit k = logit pk − logit nk (4) To use REs to guide attention, we add an attention loss to the final loss: loss att = k i t ki log(α ki ) (5) where t ki is set to 0 when none of the matched REs (that leads to intent k) marks word i as a clue word -otherwise t ki is set to 1/l k , where l k is the number of clue words for intent k (if no matched RE leads to intent k, then t k * = 0).",
"We use Eq.",
"5 to compute the positive attention loss, loss att p , for positive REs and negative attention loss, loss att n , for negative ones.",
"The final loss is computed as: loss = loss c + β p loss att p + β n loss att n (6) where loss c is the original classification loss, β p and β n are weights for the two attention losses.",
"Slot Filling.",
"The two-side attention (positive and negative attention) mechanism introduced for intent prediction is unsuitable for slot filling.",
"Because for slot filling, we need to compute attention for each word, which demands more compu-tational and memory resources than doing that for intent detection 2 .",
"Because of the aforementioned reason, we use a simplified version of the two-side attention, where all the slot labels share the same set of positive and negative attention.",
"Specifically, to predict the slot label of word i, we use the following equations, which are similar to Eq.",
"1, to generate a sentence embedding s pi with regard to word i from positive attention: s pi = j α pij h j , α pij = exp(h j W sp h i ) j exp(h j W sp h i ) (7) where h i and h j are the BLSTM outputs for word i and j respectively, W sp is a weight matrix, and α pij is the positive attention value for word j with respect to word i.",
"Further, by replacing W sp with W sn , we use Eq.",
"7 again to compute negative attention and generate the corresponding sentence embedding s ni .",
"Finally, the prediction p i for word i can be calculated as: p i = softmax((W p [s pi ; h i ] + b p ) −(W n [s ni ; h i ] + b n )) (8) where W p , W n , b p , b n are weight matrices and bias vectors for positive and negative attention, respectively.",
"Here we append the BLSTM output h i to s pi and s ni because the word i itself also plays a crucial part in identifying its slot label.",
"Using REs at the Output Level At the output level, REs are used to amend the output of NNs.",
"At this level, we take the same approach used for intent detection and slot filling (see 3 in Fig.",
"2 ).",
"As mentioned in Sec.",
"2.3, the slot REs used in the output level only produce a simplified version of target slot labels, for which we can further annotate their corresponding target slot labels.",
"For instance, a RE that outputs city can lead to three slot labels: fromloc.city, toloc.city, stoploc.city.",
"Let z k be a 0-1 indicator of whether there is at least one matched RE that leads to target label k (intent or slot label), the final logits of label k for a sentence (or a specific word for slot filling) is: logit k = logit k + w k z k (9) where logit k is the logit produced by the original NN, and w k is a trainable weight indicating the overall confidence for REs that lead to target label k. Here we do not assign a trainable weight for each RE because it is often that only a few sentences match a RE.",
"We modify the logit instead of the final probability because a logit is an unconstrained real value, which matches the property of w k z k better than probability.",
"Actually, when performing model ensemble, ensembling with logits is often empirically better than with the final probability 3 .",
"This is also the reason why we choose to operate on logits in Sec.",
"3.3.",
"Evaluation Methodology Our experiments aim to answer three questions: Q1: Does the use of REs enhance the learning quality when the number of annotated instances is small?",
"Q2: Does the use of REs still help when using the full training data?",
"Q3: How can we choose from different combination methods?",
"Datasets We use the ATIS dataset (Hemphill et al., 1990) to evaluate our approach.",
"This dataset is widely used in SLU research.",
"It includes queries of flights, meal, etc.",
"We follow the setup of Liu and Lane (2016) by using 4,978 queries for training and 893 for testing, with 18 intent labels and 127 slot labels.",
"We also split words like Miami's into Miami 's during the tokenization phase to reduce the number of words that do not have a pre-trained word embedding.",
"This strategy is useful for fewshot learning.",
"To answer Q1 , we also exploit the full few-shot learning setting.",
"Specifically, for intent detection, we randomly select 5, 10, 20 training instances for each intent to form the few-shot training set; and for slot filling, we also explore 5, 10, 20 shots settings.",
"However, since a sentence typically contains multiple slots, the number of mentions of frequent slot labels may inevitably exceeds the target shot count.",
"To better approximate the target shot count, we select sentences for each slot label in ascending order of label frequencies.",
"That is k 1 -shot dataset will contain k 2 -shot dataset if k 1 > k 2 .",
"All settings use the original test set.",
"Since most existing few-shot learning methods require either many few-shot classes or some classes with enough data for training, we also explore the partial few-shot learning setting for intent detection to provide a fair comparison for existing few-shot learning methods.",
"Specifically, we let the 3 most frequent intents have 300 training instances, and the rest remains untouched.",
"This is also a common scenario in real world, where we often have several frequent classes and many classes with limited data.",
"As for slot filling, however, since the number of mentions of frequent slot labels already exceeds the target shot count, the original slot filling few-shot dataset can be directly used to train existing few-shot learning methods.",
"Therefore, we do not distinguish full and partial few-shot learning for slot filling.",
"Preparing REs We use the syntax of REs in Perl in this work.",
"Our REs are written by a paid annotator who is familiar with the domain.",
"It took the annotator in total less than 10 hours to develop all the REs, while a domain expert can accomplish the task faster.",
"We use the 20-shot training data to develop the REs, but word lists like cities are obtained from the full training set.",
"The development of REs is considered completed when the REs can cover most of the cases in the 20-shot training data with resonable precision.",
"After that, the REs are fixed throughout the experiments.",
"The majority of the time for writing the REs is proportional to the number of RE groups.",
"It took about 1.5 hours to write the 54 intent REs with on average 2.2 groups per RE.",
"It is straightforward to write the slot REs for the input and output level methods, for which it took around 1 hour to write the 60 REs with 1.7 groups on average.",
"By con-trast, writing slot REs to guide attention requires more efforts as the annotator needs to carefully select clue words and annotate the full slot label.",
"As a result, it took about 5.5 hours to generate 115 REs with on average 3.3 groups.",
"The performance of the REs can be found in the last line of Table 1.",
"In practice, a positive RE for intent (or slot) k can often be treated as negative REs for other intents (or slots).",
"As such, we use the positive REs for intent (or slot) k as the negative REs for other intents (or slots) in our experiments.",
"Experimental Setup Hyper-parameters.",
"Our hyper-parameters for the BLSTM are similar to the ones used by Liu and Lane (2016) .",
"Specifically, we use batch size 16, dropout probability 0.5, and BLSTM cell size 100.",
"The attention loss weight is 16 (both positive and negative) for full few-shot learning settings and 1 for other settings.",
"We use the 100d GloVe word vectors (Pennington et al., 2014) pre-trained on Wikipedia and Gigaword (Parker et al., 2011) , and the Adam optimizer (Kingma and Ba, 2014) with learning rate 0.001.",
"Evaluation Metrics.",
"We report accuracy and macro-F1 for intent detection, and micro/macro-F1 for slot filling.",
"Micro/macro-F1 are the harmonic mean of micro/macro precision and recall.",
"Macro-precision/recall are calculated by averaging precision/recall of each label, and microprecision/recall are averaged over each prediction.",
"Competitors and Naming Conventions.",
"Here, a bold Courier typeface like BLSTM denotes the notations of the models that we will compare in Sec.",
"5.",
"Specifically, we compare our methods with the baseline BLSTM model (Sec.",
"3.1).",
"Since our attention loss method (Sec.",
"3.3) uses two-side attention, we include the raw two-side attention model without attention loss (+two) for comparison as well.",
"Besides, we also evaluate the RE output (REO), which uses the REtags as prediction directly, to show the quality of the REs that we will use in the experiments.",
"4 As for our methods for combinging REs with NN, +feat refers to using REtag as input features (Sec.",
"3.2), +posi and +neg refer to using positive and negative attention loss respectively, +both refers to using both postive and negative attention losses (Sec.",
"3.3), and +logit means using REtag to modify NN output (Sec.",
"3.4).",
"Moverover, since the REs can also be formatted as first-order-logic (FOL) rules, we also compare our methods with the teacher-student framework proposed by Hu et al.",
"(2016a) , which is a general framework for distilling knowledge from FOL rules into NN (+hu16).",
"Besides, since we consider few-short learning, we also include the memory module proposed by Kaiser et al.",
"(2017) , which performs well in various few-shot datasets (+mem) 5 .",
"Finally, the state-of-art model on the ATIS dataset is also included (L&L16), which jointly models the intent detection and slot filling in a single network (Liu and Lane, 2016) .",
"Experimental Results Full Few-Shot Learning To answer Q1 , we first explore the full few-shot learning scenario.",
"Intent Detection.",
"As shown in Table 1 , except for 5-shot, all approaches improve the baseline BLSTM.",
"Our network-module-level methods give the best performance because our attention module directly receives signals from the clue words in REs that contain more meaningful information than the REtag itself used by other methods.",
"We also observe that since negative REs are derived from positive REs with some noises, posi performs better than neg when the amount of available data is limited.",
"However, neg is slightly better in 20-shot, possibly because negative REs significantly outnumbers the positive ones.",
"Besides, two alone works better than the BLSTM when there are sufficient data, confirming the advantage of our two-side attention architecture.",
"As for other proposed methods, the output level method (logit) works generally better than the input level method (feat), except for the 5-shot case.",
"We believe this is due to the fewer number of RE related parameters and the shorter distance that the gradient needs to travel from the loss to these parameters -both make logit easier to train.",
"However, since logit directly modifies the output, the final prediction is more sensitive to the insufficiently trained weights in logit, leading to the inferior results in the 5-shot setting.",
"Model Type Model Name 90 / 74.47 68.69 / 84.66 72.43 / 85.78 59.59 / 83.47 73.62 / 89.28 78.94 / 92.21 +two+neg 49.01 / 68.31 64.67 / 79.17 72.32 / 86.34 59.51 / 83.23 72.92 / 89.11 78.83 / 92.07 +two+both 54.86 / 75.36 71.23 / 85.44 75.58 / 88.80 59.47 / 83.35 73.55 / 89.54 To compare with existing methods of combining NN and rules, we also implement the teacherstudent network (Hu et al., 2016a) .",
"This method lets the NN learn from the posterior label distribution produced by FOL rules in a teacher-student framework, but requires considerable amounts of data.",
"Therefore, although both hu16 and logit operate at the output level, logit still performs better than hu16 in these few-shot settings, since logit is easier to train.",
"It can also be seen that starting from 10-shot, two+both significantly outperforms pure REO.",
"This suggests that by using our attention loss to connect the distributional representation of the NN and the clue words of REs, we can generalize RE patterns within a NN architecture by using a small amount of annotated data.",
"Slot Filling.",
"Different from intent detection, as shown in Table 1 , our attention loss does not work for slot filling.",
"The reason is that the slot label of a target word (the word for which we are trying to predict a slot label) is decided mainly by the semantic meaning of the word itself, together with 0-3 phrases in the context to provide supplementary information.",
"However, our attention mechanism can only help in recognizing clue words in the context, which is less important than the word itself and have already been captured by the BLSTM, to some extent.",
"Therefore, the attention loss and the attention related parameters are more of a burden than a benefit.",
"As is shown in Fig.",
"1 , the model recognizes Boston as fromloc.city mainly because Boston itself is a city, and its context word from may have already been captured by the BLSTM and our attention mechanism does not help much.",
"By examining the attention values of +two trained on the full dataset, we find that instead of mark-ing informative context words, the attention tends to concentrate on the target word itself.",
"This observation further reinforces our hypothesis on the attention loss.",
"On the other hand, since the REtags provide extra information, such as type, about words in the sentence, logit and feat generally work better.",
"However, different from intent detection, feat only outperforms logit by a margin.",
"This is because feat can use the REtags of all words to generate better context representations through the NN, while logit can only utilize the REtag of the target word before the final output layer.",
"As a result, feat actually gathers more information from REs and can make better use of them than logit.",
"Again, hu16 is still outperformed by logit, possibly due to the insufficient data support in this few-shot scenario.",
"We also see that even the BLSTM outperforms REO in 5-shot, indicating while it is hard to write high-quality RE patterns, using REs to boost NNs is still feasible.",
"Summary.",
"The amount of extra information that a NN can utilize from the combined REs significantly affects the resulting performance.",
"Thus, the attention loss methods work best for intent detection and feat works best for slot filling.",
"We also see that the improvements from REs decreases as having more training data.",
"This is not surprising because the implicit knowledge embedded in the REs are likely to have already been captured by a sufficient large annotated dataset and in this scenario using the REs will bring in fewer benefits.",
"Partial Few-Shot Learning To better understand the relationship between our approach and existing few-shot learning methods, we also implement the memory network method Table 3 : Results on Full Dataset.",
"The left side of '/' applies for intent, and the right side for slot.",
"(Kaiser et al., 2017) which achieves good results in various few-shot datasets.",
"We adapt their opensource code, and add their memory module (mem) to our BLSTM model.",
"Since the memory module requires to be trained on either many few-shot classes or several classes with extra data, we expand our full few-shot dataset for intent detection, so that the top 3 intent labels have 300 sentences (partial few-shot).",
"As shown in Table 2 , mem works better than BLSTM, and our attention loss can be further combined with the memory module (mem+posi), with even better performance.",
"hu16 also works here, but worse than two+both.",
"Note that, the memory module requires the input sentence to have only one embedding, thus we only use one set of positive attention for combination.",
"As for slot filling, since we already have extra data for frequent tags in the original few-shot data (see Sec.",
"4.1), we use them directly to run the memory module.",
"As shown in the bottom of Table 1 , mem also improves the base BLSTM, and gains further boost when it is combined with feat 6 .",
"Full Dataset To answer Q2, we also evaluate our methods on the full dataset.",
"As seen in Table 3 , for intent detection, while two+both still works, feat and logit no longer give improvements.",
"This shows 6 For compactness, we only combine the best method in each task with mem, but others can also be combined.",
"that since both REtag and annotated data provide intent labels for the input sentence, the value of the extra noisy tag from RE become limited as we have more annotated data.",
"However, as there is no guidance on attention in the annotations, the clue words from REs are still useful.",
"Further, since feat concatenates REtags at the input level, the powerful NN makes it more likely to overfit than logit, therefore feat performs even worse when compared to the BLSTM.",
"As for slot filling, introducing feat and logit can still bring further improvements.",
"This shows that the word type information contained in the REtags is still hard to be fully learned even when we have more annotated data.",
"Moreover, different from few-shot settings, two+both has a better macro-F1 score than the BLSTM for this task, suggesting that better attention is still useful when the base model is properly trained.",
"Again, hu16 outperforms the BLSTM in both tasks, showing that although the REtags are noisy, their teacher-student network can still distill useful information.",
"However, hu16 is a general framework to combine FOL rules, which is more indirect in transferring knowledge from rules to NN than our methods.",
"Therefore, it is still inferior to attention loss in intent detection and feat in slot filling, which are designed to combine REs.",
"Further, mem generally works in this setting, and can receive further improvement by combining our fusion methods.",
"We can also see that two+both works clearly better than the stateof-art method (L&L16) in intent detection, which jointly models the two tasks.",
"And mem+feat is comparative to L&L16 in slot filling.",
"Impact of the RE Complexity We now discuss how the RE complexity affects the performance of the combination.",
"We choose to control the RE complexity by modifying the number of groups.",
"Specifically, we reduce the number of groups for existing REs to decrease RE complexity.",
"To mimic the process of writing simple REs from scratch, we try our best to keep the key RE groups.",
"For intent detection, all the REs are reduced to at most 2 groups.",
"As for slot filling, we also reduce the REs to at most 2 groups, and for some simples case, we further reduce them into word-list patterns, e.g., ( CITY).",
"As shown in Table 4 , the simple REs already deliver clear improvements to the base NN models, which shows the effectiveness of our methods, and indicates that simple REs are quite costefficient since these simple REs only contain 1-2 RE groups and thus very easy to produce.",
"We can also see that using complex REs generally leads to better results compared to using simple REs.",
"This indicates that when considering using REs to improve a NN model, we can start with simple REs, and gradually increase the RE complexity to improve the performance over time 7 .",
"Related Work Our work builds upon the following techniques, while qualitatively differing from each NN with Rules.",
"On the initialization side, uses important n-grams to initialize the convolution filters.",
"On the input side, Wang et al.",
"(2017a) uses knowledge base rules to find relevant concepts for short texts to augment input.",
"On the output side, Hu et al.",
"(2016a; 2016b) and Guo et al.",
"(2017) use FOL rules to rectify the output probability of NN, and then let NN learn from the rectified distribution in a teacher-student framework.",
"Xiao et al.",
"(2017) , on the other hand, modifies the decoding score of NN by multiplying a weight derived from rules.",
"On the loss function side, people modify the loss function to model the relationship between premise and conclusion (Demeester et al., 2016) , and fit both human-annotated and rule-annotated labels (Alashkar et al., 2017) .",
"Since fusing in initialization or in loss function often require special properties of the task, these approaches are not applicable to our problem.",
"Our work thus offers new ways to exploit RE rules at different levels of a NN.",
"NNs and REs.",
"As for NNs and REs, previous work has tried to use RE to speed up the decoding phase of a NN (Strauß et al., 2016) and generating REs from natural language specifications of the 7 We do not include results of both for slot filling since its REs are different from feat and logit, and we have already shown that the attention loss method does not work for slot filling.",
"RE (Locascio et al., 2016) .",
"By contrast, our work aims to use REs to improve the prediction ability of a NN.",
"Few-Shot Learning.",
"Prior work either considers few-shot learning in a metric learning framework (Koch et al., 2015; Vinyals et al., 2016) , or stores instances in a memory (Santoro et al., 2016; Kaiser et al., 2017) to match similar instances in the future.",
"Wang et al.",
"(2017b) further uses the semantic meaning of the class name itself to provide extra information for few-shot learning.",
"Unlike these previous studies, we seek to use the humangenerated REs to provide additional information.",
"Natural Language Understanding.",
"Recurrent neural networks are proven to be effective in both intent detection (Ravuri and Stoicke, 2015) and slot filling (Mesnil et al., 2015) .",
"Researchers also find ways to jointly model the two tasks (Liu and Lane, 2016; Zhang and Wang, 2016) .",
"However, no work so far has combined REs and NNs to improve intent detection and slot filling.",
"Conclusions In this paper, we investigate different ways to combine NNs and REs for solving typical SLU tasks.",
"Our experiments demonstrate that the combination clearly improves the NN performance in both the few-shot learning and the full dataset settings.",
"We show that by exploiting the implicit knowledge encoded within REs, one can significantly improve the learning performance.",
"Specifically, we observe that using REs to guide the attention module works best for intent detection, and using REtags as features is an effective approach for slot filling.",
"We provide interesting insights on how REs of various forms can be employed to improve NNs, showing that while simple REs are very cost-effective, complex REs generally yield better results."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"5.1",
"5.2",
"5.3",
"5.4",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Typesetting",
"Problem Definition",
"The Use of Regular Expressions",
"Our Approach",
"Base Models",
"Using REs at the Input Level",
"Using REs at the Network Module Level",
"Using REs at the Output Level",
"Evaluation Methodology",
"Datasets",
"Preparing REs",
"Experimental Setup",
"Full Few-Shot Learning",
"Partial Few-Shot Learning",
"Full Dataset",
"Impact of the RE Complexity",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-18#paper-1009#slide-0 | Data is Limited | Most of the popular models in NLP are data-driven
We often need to operate in a specific scenario Limited data
Take spoken language understanding as an example
Need to be implemented for many domains Limited data
Intent Detection flights from Boston to Tokyo intent: flight
Slot Filling flights from Boston to Tokyo fromloc.city: Boston toloc.city: Tokyo
E.g., intelligent customer service robot
What can we do with limited data? | Most of the popular models in NLP are data-driven
We often need to operate in a specific scenario Limited data
Take spoken language understanding as an example
Need to be implemented for many domains Limited data
Intent Detection flights from Boston to Tokyo intent: flight
Slot Filling flights from Boston to Tokyo fromloc.city: Boston toloc.city: Tokyo
E.g., intelligent customer service robot
What can we do with limited data? | [] |
GEM-SciDuet-train-18#paper-1009#slide-1 | 1009 | Marrying Up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding | The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data. In this paper, we ask the question: "Can we combine a neural network (NN) with regular expressions (RE) to improve supervised learning for NLP?". In answer, we develop novel methods to exploit the rich expressiveness of REs at different levels within a NN, showing that the combination significantly enhances the learning effectiveness when a small number of training examples are available. We evaluate our approach by applying it to spoken language understanding for intent detection and slot filling. Experimental results show that our approach is highly effective in exploiting the available training data, giving a clear boost to the RE-unaware NN. (Figure 1 residue from the abstract: sentence "flights from Boston to Miami" with intent RE /^flights? from/ giving REtag flight, and slot RE /from (__CITY) to (__CITY)/ giving REtags fromloc.city and toloc.city.)
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294
],
"paper_content_text": [
"Introduction Regular expressions (REs) are widely used in various natural language processing (NLP) tasks like pattern matching, sentence classification, sequence labeling, etc.",
"(Chang and Manning, 2014) .",
"As a technique based on human-crafted rules, it is concise, interpretable, tunable, and does not rely on much training data to generate.",
"As such, it is commonly used in industry, especially when the available training examples are limited -a problem known as few-shot learning (GC et al., 2015) .",
"While powerful, REs have a poor generalization ability because all synonyms and variations in a RE must be explicitly specified.",
"As a result, REs are often ensembled with data-driven methods, such as neural network (NN) based techniques, where a set of carefully-written REs are used to handle certain cases with high precision, leaving the rest for data-driven methods.",
"We believe the use of REs can go beyond simple pattern matching.",
"In addition to being a separate classifier to be ensembled, a RE also encodes a developer's knowledge for the problem domain.",
"The knowledge could be, for example, the informative words (clue words) within a RE's surface form.",
"We argue that such information can be utilized by data-driven methods to achieve better prediction results, especially in few-shot learning.",
"This work investigates the use of REs to improve NNs -a learning framework that is widely used in many NLP tasks (Goldberg, 2017) .",
"The combination of REs and a NN allows us to exploit the conciseness and effectiveness of REs and the strong generalization ability of NNs.",
"This also provides us an opportunity to learn from various kinds of REs, since NNs are known to be good at tolerating noises (Xie et al., 2016) .",
"This paper presents novel approaches to combine REs with a NN at different levels.",
"At the input layer, we propose to use the evaluation outcome of REs as the input features of a NN (Sec.3.2).",
"At the network module level, we show how to exploit the knowledge encoded in REs to guide the attention mechanism of a NN (Sec.",
"3.3).",
"At the output layer, we combine the evaluation outcome of a RE with the NN output in a learnable manner (Sec.",
"3.4) .",
"We evaluate our approach by applying it to two spoken language understanding (SLU) tasks, namely intent detection and slot filling, which respectively correspond to two fundamental NLP tasks: sentence classification and sequence labeling.",
"To demonstrate the usefulness of REs in realworld scenarios where the available number of annotated data can vary, we explore both the fewshot learning setting and the one with full training data.",
"Experimental results show that our approach is highly effective in utilizing the available Figure 1 : A sentence from the ATIS dataset.",
"REs can be used to detect the intent and label slots.",
"annotated data, yielding significantly better learning performance over the RE-unaware method.",
"Our contributions are as follows.",
"(1) We present the first work to systematically investigate methods for combining REs with NNs.",
"(2) The proposed methods are shown to clearly improve the NN performance in both the few-shot learning and the full annotation settings.",
"(3) We provide a set of guidance on how to combine REs with NNs and RE annotation.",
"Background Typesetting In this paper, we use italic for emphasis like intent detection, the Courier typeface for abbreviations like RE, bold italic for the first appearance of a concept like clue words, Courier surrounded by / for regular expressions like /list( the)?",
"AIRLINE/, and underlined italic for words of sentences in our dataset like Boston.",
"Problem Definition Our work targets two SLU tasks: intent detection and slot filling.",
"The former is a sentence classification task where we learn a function to map an input sentence of n words, x = [x 1 , ..., x n ], to a corresponding intent label, c. The latter is a sequence labeling task for which we learn a function to take in an input query sentence of n words, x = [x 1 , ..., x n ], to produce a corresponding labeling sequence, y = [y 1 , ..., y n ], where y i is the slot label of the corresponding word, x i .",
"Take the sentence in Fig.",
"1 as an example.",
"A successful intent detector would suggest the intent of the sentence as flight, i.e., querying about flight-related information.",
"A slot filler, on the other hand, should identify the slots fromloc.city and toloc.city by labeling Boston and Miami, respectively, using the begin-inside-outside (BIO) scheme.",
"The Use of Regular Expressions In this work, a RE defines a mapping from a text pattern to several REtags which are the same as or related to the target labels (i.e., intent and slot labels).",
"A search function takes in a RE, applies it to all sentences, and returns any texts that match the pattern.",
"We then assign the REtag (s) (that are associated with the matching RE) to either the matched sentence (for intent detection) or some matched phrases (for slot filling).",
"Specifically, our REtags for intent detection are the same as the intent labels.",
"For example, in Fig.",
"1 , we get a REtag of flight that is the same as the intent label flight.",
"For slot filling, we use two different sets of REs.",
"Given the group functionality of RE, we can assign REtags to our interested RE groups (i.e., the expressions defined inside parentheses).",
"The translation from REtags to slot labels depends on how the corresponding REs are used.",
"(1) When REs are used at the network module level (Sec.",
"3.3), the corresponding REtags are the same as the target slot labels.",
"For instance, the slot RE in Fig.",
"1 will assign fromloc.city to the first RE group and toloc.city to the second one.",
"Here, CITY is a list of city names, which can be replaced with a RE string like /Boston|Miami|LA|.../.",
"(2) If REs are used in the input (Sec.",
"3.2) and the output layers (Sec.",
"3.4) of a NN, the corresponding REtag would be different from the target slot labels.",
"In this context, the two RE groups in Fig.",
"1 would be simply tagged as city to capture the commonality of three related target slot labels: fromloc.city, toloc.city, stoploc.city.",
"Note that we could use the target slot labels as REtags for all the settings.",
"The purpose of abstracting REtags to a simplified version of the target slot labels here is to show that REs can still be useful when their evaluation outcome does not exactly match our learning objective.",
"Further, as shown in Sec.",
"4.2, using simplified REtags can also make the development of REs easier in our tasks.",
"Intuitively, complicated REs can lead to better performance but require more efforts to generate.",
"Generally, there are two aspects affecting RE complexity most: the number of RE groups 1 and or clauses (i.e., expressions separated by the disjunction operator |) in a RE group.",
"Having a larger number of RE groups often leads to better 1 When discussing complexity, we consider each semantically independent consecutive word sequence as a RE group (excluding clauses, such as \\w+, that can match any word).",
"For instance, the RE: /how long( \\w+){1,2}?",
"(it take|flight)/ has two RE groups: (how long) and (it take|flight).",
"precision but lower coverage on pattern matching, while a larger number of or clauses usually gives a higher coverage but slightly lower precision.",
"Our Approach As depicted in Fig.",
"2 , we propose to combine NNs and REs from three different angles.",
"Base Models We use the Bi-directional LSTM (BLSTM) as our base NN model because it is effective in both intent detection and slot filling (Liu and Lane, 2016) .",
"Intent Detection.",
"As shown in Fig.",
"2 , the BLSTM takes as input the word embeddings [x 1 , ..., x n ] of a n-word sentence, and produces a vector h i for each word i.",
"A self-attention layer then takes in the vectors produced by the BLSTM to compute the sentence embedding s: s = i α i h i , α i = exp(h i Wc) i exp(h i Wc) (1) where α i is the attention for word i, c is a randomly initialized trainable vector used to select informative words for classification, and W is a weight matrix.",
"Finally, s is fed to a softmax classifier for intent classification.",
"Slot Filling.",
"The model for slot filling is straightforward -the slot label prediction is generated by a softmax classier which takes in the BLSTM's output h i and produces the slot label of word i.",
"Note that attention aggregation in Fig.",
"2 is only employed by the network module level method presented in Sec.",
"3.3.",
"Using REs at the Input Level At the input level, we use the evaluation outcomes of REs as features which are fed to NN models.",
"Intent Detection.",
"Our REtag for intent detection is the same as our target intent label.",
"Because real-world REs are unlikely to be perfect, one sentence may be matched by more than one RE.",
"This may result in several REtags that are conflict with each other.",
"For instance, the sentence list the Delta airlines flights to Miami can match a RE: /list( the)?",
"AIRLINE/ that outputs tag airline, and another RE: /list( \\w+){0,3} flights?/ that outputs tag flight.",
"To resolve the conflicting situations illustrated above, we average the randomly initialized trainable tag embeddings to form an aggregated embedding as the NN input.",
"There are two ways to use the aggregated embedding.",
"We can append the aggregated embedding to either the embedding of every input word, or the input of the softmax classifier (see 1 in Fig.",
"2(a) ).",
"To determine which strategy works best, we perform a pilot study.",
"We found that the first method causes the tag embedding to be copied many times; consequently, the NN tends to heavily rely on the REtags, and the resulting performance is similar to the one given by using REs alone in few-shot settings.",
"Thus, we adopt the second approach.",
"Slot Filling.",
"Since the evaluation outcomes of slot REs are word-level tags, we can simply embed and average the REtags into a vector f i for each word, and append it to the corresponding word embedding w i (as shown in 1 in Fig.",
"2(b) ).",
"Note that we also extend the slot REtags into the BIO format, e.g., the REtags of phrase New York are B-city and I-city if its original tag is city.",
"Using REs at the Network Module Level At the network module level, we explore ways to utilize the clue words in the surface form of a RE (bold blue arrows and words in 2 of Fig.",
"2 ) to guide the attention module in NNs.",
"Intent Detection.",
"Taking the sentence in Fig.",
"1 for example, the RE: /ˆflights?",
"from/ that leads to intent flight means that flights from are the key words to decide the intent flight.",
"Therefore, the attention module in NNs should leverage these two words to get the correct prediction.",
"To this end, we extend the base intent model by making two changes to incorporate the guidance from REs.",
"First, since each intent has its own clue words, using a single sentence embedding for all intent labels would make the attention less focused.",
"Therefore, we let each intent label k use different attention a k , which is then used to generate the sentence embedding s k for that intent: s k = i α ki h i , α ki = exp(h i W a c k ) i exp(h i W a c k ) (2) where c k is a trainable vector for intent k which is used to compute attention a k , h i is the BLSTM output for word i, and W a is a weight matrix.",
"The probability p k that the input sentence expresses intent k is computed by: where w k , logit k , b k are weight vector, logit, and bias for intent k, respectively.",
"p k = exp(logit k ) k exp(logit k ) , logit k = w k s k + b k (3) x 1 x 2 h 1 h 2 x 3 h Second, apart from indicating a sentence for intent k (positive REs), a RE can also indicate that a sentence does not express intent k (negative REs).",
"We thus use a new set of attention (negative attentions, in contrast to positive attentions), to compute another set of logits for each intent with Eqs.",
"2 and 3.",
"We denote the logits computed by positive attentions as logit pk , and those by negative attentions as logit nk , the final logit for intent k can then be calculated as: logit k = logit pk − logit nk (4) To use REs to guide attention, we add an attention loss to the final loss: loss att = k i t ki log(α ki ) (5) where t ki is set to 0 when none of the matched REs (that leads to intent k) marks word i as a clue word -otherwise t ki is set to 1/l k , where l k is the number of clue words for intent k (if no matched RE leads to intent k, then t k * = 0).",
"We use Eq.",
"5 to compute the positive attention loss, loss att p , for positive REs and negative attention loss, loss att n , for negative ones.",
"The final loss is computed as: loss = loss c + β p loss att p + β n loss att n (6) where loss c is the original classification loss, β p and β n are weights for the two attention losses.",
"Slot Filling.",
"The two-side attention (positive and negative attention) mechanism introduced for intent prediction is unsuitable for slot filling.",
"Because for slot filling, we need to compute attention for each word, which demands more compu-tational and memory resources than doing that for intent detection 2 .",
"Because of the aforementioned reason, we use a simplified version of the two-side attention, where all the slot labels share the same set of positive and negative attention.",
"Specifically, to predict the slot label of word i, we use the following equations, which are similar to Eq.",
"1, to generate a sentence embedding s pi with regard to word i from positive attention: s pi = j α pij h j , α pij = exp(h j W sp h i ) j exp(h j W sp h i ) (7) where h i and h j are the BLSTM outputs for word i and j respectively, W sp is a weight matrix, and α pij is the positive attention value for word j with respect to word i.",
"Further, by replacing W sp with W sn , we use Eq.",
"7 again to compute negative attention and generate the corresponding sentence embedding s ni .",
"Finally, the prediction p i for word i can be calculated as: p i = softmax((W p [s pi ; h i ] + b p ) −(W n [s ni ; h i ] + b n )) (8) where W p , W n , b p , b n are weight matrices and bias vectors for positive and negative attention, respectively.",
"Here we append the BLSTM output h i to s pi and s ni because the word i itself also plays a crucial part in identifying its slot label.",
"Using REs at the Output Level At the output level, REs are used to amend the output of NNs.",
"At this level, we take the same approach used for intent detection and slot filling (see 3 in Fig.",
"2 ).",
"As mentioned in Sec.",
"2.3, the slot REs used in the output level only produce a simplified version of target slot labels, for which we can further annotate their corresponding target slot labels.",
"For instance, a RE that outputs city can lead to three slot labels: fromloc.city, toloc.city, stoploc.city.",
"Let z k be a 0-1 indicator of whether there is at least one matched RE that leads to target label k (intent or slot label), the final logits of label k for a sentence (or a specific word for slot filling) is: logit k = logit k + w k z k (9) where logit k is the logit produced by the original NN, and w k is a trainable weight indicating the overall confidence for REs that lead to target label k. Here we do not assign a trainable weight for each RE because it is often that only a few sentences match a RE.",
"We modify the logit instead of the final probability because a logit is an unconstrained real value, which matches the property of w k z k better than probability.",
"Actually, when performing model ensemble, ensembling with logits is often empirically better than with the final probability 3 .",
"This is also the reason why we choose to operate on logits in Sec.",
"3.3.",
"Evaluation Methodology Our experiments aim to answer three questions: Q1: Does the use of REs enhance the learning quality when the number of annotated instances is small?",
"Q2: Does the use of REs still help when using the full training data?",
"Q3: How can we choose from different combination methods?",
"Datasets We use the ATIS dataset (Hemphill et al., 1990) to evaluate our approach.",
"This dataset is widely used in SLU research.",
"It includes queries of flights, meal, etc.",
"We follow the setup of Liu and Lane (2016) by using 4,978 queries for training and 893 for testing, with 18 intent labels and 127 slot labels.",
"We also split words like Miami's into Miami 's during the tokenization phase to reduce the number of words that do not have a pre-trained word embedding.",
"This strategy is useful for fewshot learning.",
"To answer Q1 , we also exploit the full few-shot learning setting.",
"Specifically, for intent detection, we randomly select 5, 10, 20 training instances for each intent to form the few-shot training set; and for slot filling, we also explore 5, 10, 20 shots settings.",
"However, since a sentence typically contains multiple slots, the number of mentions of frequent slot labels may inevitably exceeds the target shot count.",
"To better approximate the target shot count, we select sentences for each slot label in ascending order of label frequencies.",
"That is k 1 -shot dataset will contain k 2 -shot dataset if k 1 > k 2 .",
"All settings use the original test set.",
"Since most existing few-shot learning methods require either many few-shot classes or some classes with enough data for training, we also explore the partial few-shot learning setting for intent detection to provide a fair comparison for existing few-shot learning methods.",
"Specifically, we let the 3 most frequent intents have 300 training instances, and the rest remains untouched.",
"This is also a common scenario in real world, where we often have several frequent classes and many classes with limited data.",
"As for slot filling, however, since the number of mentions of frequent slot labels already exceeds the target shot count, the original slot filling few-shot dataset can be directly used to train existing few-shot learning methods.",
"Therefore, we do not distinguish full and partial few-shot learning for slot filling.",
"Preparing REs We use the syntax of REs in Perl in this work.",
"Our REs are written by a paid annotator who is familiar with the domain.",
"It took the annotator in total less than 10 hours to develop all the REs, while a domain expert can accomplish the task faster.",
"We use the 20-shot training data to develop the REs, but word lists like cities are obtained from the full training set.",
"The development of REs is considered completed when the REs can cover most of the cases in the 20-shot training data with resonable precision.",
"After that, the REs are fixed throughout the experiments.",
"The majority of the time for writing the REs is proportional to the number of RE groups.",
"It took about 1.5 hours to write the 54 intent REs with on average 2.2 groups per RE.",
"It is straightforward to write the slot REs for the input and output level methods, for which it took around 1 hour to write the 60 REs with 1.7 groups on average.",
"By con-trast, writing slot REs to guide attention requires more efforts as the annotator needs to carefully select clue words and annotate the full slot label.",
"As a result, it took about 5.5 hours to generate 115 REs with on average 3.3 groups.",
"The performance of the REs can be found in the last line of Table 1.",
"In practice, a positive RE for intent (or slot) k can often be treated as negative REs for other intents (or slots).",
"As such, we use the positive REs for intent (or slot) k as the negative REs for other intents (or slots) in our experiments.",
"Experimental Setup Hyper-parameters.",
"Our hyper-parameters for the BLSTM are similar to the ones used by Liu and Lane (2016) .",
"Specifically, we use batch size 16, dropout probability 0.5, and BLSTM cell size 100.",
"The attention loss weight is 16 (both positive and negative) for full few-shot learning settings and 1 for other settings.",
"We use the 100d GloVe word vectors (Pennington et al., 2014) pre-trained on Wikipedia and Gigaword (Parker et al., 2011) , and the Adam optimizer (Kingma and Ba, 2014) with learning rate 0.001.",
"Evaluation Metrics.",
"We report accuracy and macro-F1 for intent detection, and micro/macro-F1 for slot filling.",
"Micro/macro-F1 are the harmonic mean of micro/macro precision and recall.",
"Macro-precision/recall are calculated by averaging precision/recall of each label, and microprecision/recall are averaged over each prediction.",
"Competitors and Naming Conventions.",
"Here, a bold Courier typeface like BLSTM denotes the notations of the models that we will compare in Sec.",
"5.",
"Specifically, we compare our methods with the baseline BLSTM model (Sec.",
"3.1).",
"Since our attention loss method (Sec.",
"3.3) uses two-side attention, we include the raw two-side attention model without attention loss (+two) for comparison as well.",
"Besides, we also evaluate the RE output (REO), which uses the REtags as prediction directly, to show the quality of the REs that we will use in the experiments.",
"4 As for our methods for combinging REs with NN, +feat refers to using REtag as input features (Sec.",
"3.2), +posi and +neg refer to using positive and negative attention loss respectively, +both refers to using both postive and negative attention losses (Sec.",
"3.3), and +logit means using REtag to modify NN output (Sec.",
"3.4).",
"Moverover, since the REs can also be formatted as first-order-logic (FOL) rules, we also compare our methods with the teacher-student framework proposed by Hu et al.",
"(2016a) , which is a general framework for distilling knowledge from FOL rules into NN (+hu16).",
"Besides, since we consider few-short learning, we also include the memory module proposed by Kaiser et al.",
"(2017) , which performs well in various few-shot datasets (+mem) 5 .",
"Finally, the state-of-art model on the ATIS dataset is also included (L&L16), which jointly models the intent detection and slot filling in a single network (Liu and Lane, 2016) .",
"Experimental Results Full Few-Shot Learning To answer Q1 , we first explore the full few-shot learning scenario.",
"Intent Detection.",
"As shown in Table 1 , except for 5-shot, all approaches improve the baseline BLSTM.",
"Our network-module-level methods give the best performance because our attention module directly receives signals from the clue words in REs that contain more meaningful information than the REtag itself used by other methods.",
"We also observe that since negative REs are derived from positive REs with some noises, posi performs better than neg when the amount of available data is limited.",
"However, neg is slightly better in 20-shot, possibly because negative REs significantly outnumbers the positive ones.",
"Besides, two alone works better than the BLSTM when there are sufficient data, confirming the advantage of our two-side attention architecture.",
"As for other proposed methods, the output level method (logit) works generally better than the input level method (feat), except for the 5-shot case.",
"We believe this is due to the fewer number of RE related parameters and the shorter distance that the gradient needs to travel from the loss to these parameters -both make logit easier to train.",
"However, since logit directly modifies the output, the final prediction is more sensitive to the insufficiently trained weights in logit, leading to the inferior results in the 5-shot setting.",
"Model Type Model Name 90 / 74.47 68.69 / 84.66 72.43 / 85.78 59.59 / 83.47 73.62 / 89.28 78.94 / 92.21 +two+neg 49.01 / 68.31 64.67 / 79.17 72.32 / 86.34 59.51 / 83.23 72.92 / 89.11 78.83 / 92.07 +two+both 54.86 / 75.36 71.23 / 85.44 75.58 / 88.80 59.47 / 83.35 73.55 / 89.54 To compare with existing methods of combining NN and rules, we also implement the teacherstudent network (Hu et al., 2016a) .",
"This method lets the NN learn from the posterior label distribution produced by FOL rules in a teacher-student framework, but requires considerable amounts of data.",
"Therefore, although both hu16 and logit operate at the output level, logit still performs better than hu16 in these few-shot settings, since logit is easier to train.",
"It can also be seen that starting from 10-shot, two+both significantly outperforms pure REO.",
"This suggests that by using our attention loss to connect the distributional representation of the NN and the clue words of REs, we can generalize RE patterns within a NN architecture by using a small amount of annotated data.",
"Slot Filling.",
"Different from intent detection, as shown in Table 1 , our attention loss does not work for slot filling.",
"The reason is that the slot label of a target word (the word for which we are trying to predict a slot label) is decided mainly by the semantic meaning of the word itself, together with 0-3 phrases in the context to provide supplementary information.",
"However, our attention mechanism can only help in recognizing clue words in the context, which is less important than the word itself and have already been captured by the BLSTM, to some extent.",
"Therefore, the attention loss and the attention related parameters are more of a burden than a benefit.",
"As is shown in Fig.",
"1 , the model recognizes Boston as fromloc.city mainly because Boston itself is a city, and its context word from may have already been captured by the BLSTM and our attention mechanism does not help much.",
"By examining the attention values of +two trained on the full dataset, we find that instead of mark-ing informative context words, the attention tends to concentrate on the target word itself.",
"This observation further reinforces our hypothesis on the attention loss.",
"On the other hand, since the REtags provide extra information, such as type, about words in the sentence, logit and feat generally work better.",
"However, different from intent detection, feat only outperforms logit by a margin.",
"This is because feat can use the REtags of all words to generate better context representations through the NN, while logit can only utilize the REtag of the target word before the final output layer.",
"As a result, feat actually gathers more information from REs and can make better use of them than logit.",
"Again, hu16 is still outperformed by logit, possibly due to the insufficient data support in this few-shot scenario.",
"We also see that even the BLSTM outperforms REO in 5-shot, indicating while it is hard to write high-quality RE patterns, using REs to boost NNs is still feasible.",
"Summary.",
"The amount of extra information that a NN can utilize from the combined REs significantly affects the resulting performance.",
"Thus, the attention loss methods work best for intent detection and feat works best for slot filling.",
"We also see that the improvements from REs decreases as having more training data.",
"This is not surprising because the implicit knowledge embedded in the REs are likely to have already been captured by a sufficient large annotated dataset and in this scenario using the REs will bring in fewer benefits.",
"Partial Few-Shot Learning To better understand the relationship between our approach and existing few-shot learning methods, we also implement the memory network method Table 3 : Results on Full Dataset.",
"The left side of '/' applies for intent, and the right side for slot.",
"(Kaiser et al., 2017) which achieves good results in various few-shot datasets.",
"We adapt their opensource code, and add their memory module (mem) to our BLSTM model.",
"Since the memory module requires to be trained on either many few-shot classes or several classes with extra data, we expand our full few-shot dataset for intent detection, so that the top 3 intent labels have 300 sentences (partial few-shot).",
"As shown in Table 2 , mem works better than BLSTM, and our attention loss can be further combined with the memory module (mem+posi), with even better performance.",
"hu16 also works here, but worse than two+both.",
"Note that, the memory module requires the input sentence to have only one embedding, thus we only use one set of positive attention for combination.",
"As for slot filling, since we already have extra data for frequent tags in the original few-shot data (see Sec.",
"4.1), we use them directly to run the memory module.",
"As shown in the bottom of Table 1 , mem also improves the base BLSTM, and gains further boost when it is combined with feat 6 .",
"Full Dataset To answer Q2, we also evaluate our methods on the full dataset.",
"As seen in Table 3 , for intent detection, while two+both still works, feat and logit no longer give improvements.",
"This shows 6 For compactness, we only combine the best method in each task with mem, but others can also be combined.",
"that since both REtag and annotated data provide intent labels for the input sentence, the value of the extra noisy tag from RE become limited as we have more annotated data.",
"However, as there is no guidance on attention in the annotations, the clue words from REs are still useful.",
"Further, since feat concatenates REtags at the input level, the powerful NN makes it more likely to overfit than logit, therefore feat performs even worse when compared to the BLSTM.",
"As for slot filling, introducing feat and logit can still bring further improvements.",
"This shows that the word type information contained in the REtags is still hard to be fully learned even when we have more annotated data.",
"Moreover, different from few-shot settings, two+both has a better macro-F1 score than the BLSTM for this task, suggesting that better attention is still useful when the base model is properly trained.",
"Again, hu16 outperforms the BLSTM in both tasks, showing that although the REtags are noisy, their teacher-student network can still distill useful information.",
"However, hu16 is a general framework to combine FOL rules, which is more indirect in transferring knowledge from rules to NN than our methods.",
"Therefore, it is still inferior to attention loss in intent detection and feat in slot filling, which are designed to combine REs.",
"Further, mem generally works in this setting, and can receive further improvement by combining our fusion methods.",
"We can also see that two+both works clearly better than the stateof-art method (L&L16) in intent detection, which jointly models the two tasks.",
"And mem+feat is comparative to L&L16 in slot filling.",
"Impact of the RE Complexity We now discuss how the RE complexity affects the performance of the combination.",
"We choose to control the RE complexity by modifying the number of groups.",
"Specifically, we reduce the number of groups for existing REs to decrease RE complexity.",
"To mimic the process of writing simple REs from scratch, we try our best to keep the key RE groups.",
"For intent detection, all the REs are reduced to at most 2 groups.",
"As for slot filling, we also reduce the REs to at most 2 groups, and for some simples case, we further reduce them into word-list patterns, e.g., ( CITY).",
"As shown in Table 4 , the simple REs already deliver clear improvements to the base NN models, which shows the effectiveness of our methods, and indicates that simple REs are quite costefficient since these simple REs only contain 1-2 RE groups and thus very easy to produce.",
"We can also see that using complex REs generally leads to better results compared to using simple REs.",
"This indicates that when considering using REs to improve a NN model, we can start with simple REs, and gradually increase the RE complexity to improve the performance over time 7 .",
"Related Work Our work builds upon the following techniques, while qualitatively differing from each NN with Rules.",
"On the initialization side, uses important n-grams to initialize the convolution filters.",
"On the input side, Wang et al.",
"(2017a) uses knowledge base rules to find relevant concepts for short texts to augment input.",
"On the output side, Hu et al.",
"(2016a; 2016b) and Guo et al.",
"(2017) use FOL rules to rectify the output probability of NN, and then let NN learn from the rectified distribution in a teacher-student framework.",
"Xiao et al.",
"(2017) , on the other hand, modifies the decoding score of NN by multiplying a weight derived from rules.",
"On the loss function side, people modify the loss function to model the relationship between premise and conclusion (Demeester et al., 2016) , and fit both human-annotated and rule-annotated labels (Alashkar et al., 2017) .",
"Since fusing in initialization or in loss function often require special properties of the task, these approaches are not applicable to our problem.",
"Our work thus offers new ways to exploit RE rules at different levels of a NN.",
"NNs and REs.",
"As for NNs and REs, previous work has tried to use RE to speed up the decoding phase of a NN (Strauß et al., 2016) and generating REs from natural language specifications of the 7 We do not include results of both for slot filling since its REs are different from feat and logit, and we have already shown that the attention loss method does not work for slot filling.",
"RE (Locascio et al., 2016) .",
"By contrast, our work aims to use REs to improve the prediction ability of a NN.",
"Few-Shot Learning.",
"Prior work either considers few-shot learning in a metric learning framework (Koch et al., 2015; Vinyals et al., 2016) , or stores instances in a memory (Santoro et al., 2016; Kaiser et al., 2017) to match similar instances in the future.",
"Wang et al.",
"(2017b) further uses the semantic meaning of the class name itself to provide extra information for few-shot learning.",
"Unlike these previous studies, we seek to use the humangenerated REs to provide additional information.",
"Natural Language Understanding.",
"Recurrent neural networks are proven to be effective in both intent detection (Ravuri and Stoicke, 2015) and slot filling (Mesnil et al., 2015) .",
"Researchers also find ways to jointly model the two tasks (Liu and Lane, 2016; Zhang and Wang, 2016) .",
"However, no work so far has combined REs and NNs to improve intent detection and slot filling.",
"Conclusions In this paper, we investigate different ways to combine NNs and REs for solving typical SLU tasks.",
"Our experiments demonstrate that the combination clearly improves the NN performance in both the few-shot learning and the full dataset settings.",
"We show that by exploiting the implicit knowledge encoded within REs, one can significantly improve the learning performance.",
"Specifically, we observe that using REs to guide the attention module works best for intent detection, and using REtags as features is an effective approach for slot filling.",
"We provide interesting insights on how REs of various forms can be employed to improve NNs, showing that while simple REs are very cost-effective, complex REs generally yield better results."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"5.1",
"5.2",
"5.3",
"5.4",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Typesetting",
"Problem Definition",
"The Use of Regular Expressions",
"Our Approach",
"Base Models",
"Using REs at the Input Level",
"Using REs at the Network Module Level",
"Using REs at the Output Level",
"Evaluation Methodology",
"Datasets",
"Preparing REs",
"Experimental Setup",
"Full Few-Shot Learning",
"Partial Few-Shot Learning",
"Full Dataset",
"Impact of the RE Complexity",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-18#paper-1009#slide-1 | Regular Expression Rules | When data is limited Use rule-based system
Regular expression is the most commonly used rule in NLP
Many regular expression rules in company
Intent Detection flights from Boston to Tokyo intent: flight
Slot Filling flights from Boston to Tokyo fromloc.city:Boston toloc.city:Tokyo
However, regular expressions are hard to generalize
Neural networks are potentially good at generalization
Can we combine the advantages of two worlds?
Regular Expressions Pro: controllable, do not need data
/^flights? from/ Con: need to specify every variation
Neural Network Pro: semantic matching
Con: need a lot of data | When data is limited Use rule-based system
Regular expression is the most commonly used rule in NLP
Many regular expression rules in company
Intent Detection flights from Boston to Tokyo intent: flight
Slot Filling flights from Boston to Tokyo fromloc.city:Boston toloc.city:Tokyo
However, regular expressions are hard to generalize
Neural networks are potentially good at generalization
Can we combine the advantages of two worlds?
Regular Expressions Pro: controllable, do not need data
/^flights? from/ Con: need to specify every variation
Neural Network Pro: semantic matching
Con: need a lot of data | [] |
GEM-SciDuet-train-18#paper-1009#slide-2 | 1009 | Marrying Up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding | The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data. In this paper, we ask the question: "Can we combine a neural network (NN) with regular expressions (RE) to improve supervised learning for NLP?". In answer, we develop novel methods to exploit the rich expressiveness of REs at different levels within a NN, showing that the combination significantly enhances the learning effectiveness when a small number of training examples are available. We evaluate our approach by applying it to spoken language understanding for intent detection and slot filling. Experimental results show that our approach is highly effective in exploiting the available training data, giving a clear boost to the RE-unaware NN. flights from Boston to Miami Intent RE: Intent Label: flight /from (__CITY) to (__CITY)/ O O B-fromloc.city O B-toloc.city Sentence: Slot Labels: Slot RE: /^flights? from/ REtag: flight city / toloc.city REtag: city / fromloc.city | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294
],
"paper_content_text": [
"Introduction Regular expressions (REs) are widely used in various natural language processing (NLP) tasks like pattern matching, sentence classification, sequence labeling, etc.",
"(Chang and Manning, 2014) .",
"As a technique based on human-crafted rules, it is concise, interpretable, tunable, and does not rely on much training data to generate.",
"As such, it is commonly used in industry, especially when the available training examples are limited -a problem known as few-shot learning (GC et al., 2015) .",
"While powerful, REs have a poor generalization ability because all synonyms and variations in a RE must be explicitly specified.",
"As a result, REs are often ensembled with data-driven methods, such as neural network (NN) based techniques, where a set of carefully-written REs are used to handle certain cases with high precision, leaving the rest for data-driven methods.",
"We believe the use of REs can go beyond simple pattern matching.",
"In addition to being a separate classifier to be ensembled, a RE also encodes a developer's knowledge for the problem domain.",
"The knowledge could be, for example, the informative words (clue words) within a RE's surface form.",
"We argue that such information can be utilized by data-driven methods to achieve better prediction results, especially in few-shot learning.",
"This work investigates the use of REs to improve NNs -a learning framework that is widely used in many NLP tasks (Goldberg, 2017) .",
"The combination of REs and a NN allows us to exploit the conciseness and effectiveness of REs and the strong generalization ability of NNs.",
"This also provides us an opportunity to learn from various kinds of REs, since NNs are known to be good at tolerating noises (Xie et al., 2016) .",
"This paper presents novel approaches to combine REs with a NN at different levels.",
"At the input layer, we propose to use the evaluation outcome of REs as the input features of a NN (Sec.3.2).",
"At the network module level, we show how to exploit the knowledge encoded in REs to guide the attention mechanism of a NN (Sec.",
"3.3).",
"At the output layer, we combine the evaluation outcome of a RE with the NN output in a learnable manner (Sec.",
"3.4) .",
"We evaluate our approach by applying it to two spoken language understanding (SLU) tasks, namely intent detection and slot filling, which respectively correspond to two fundamental NLP tasks: sentence classification and sequence labeling.",
"To demonstrate the usefulness of REs in realworld scenarios where the available number of annotated data can vary, we explore both the fewshot learning setting and the one with full training data.",
"Experimental results show that our approach is highly effective in utilizing the available Figure 1 : A sentence from the ATIS dataset.",
"REs can be used to detect the intent and label slots.",
"annotated data, yielding significantly better learning performance over the RE-unaware method.",
"Our contributions are as follows.",
"(1) We present the first work to systematically investigate methods for combining REs with NNs.",
"(2) The proposed methods are shown to clearly improve the NN performance in both the few-shot learning and the full annotation settings.",
"(3) We provide a set of guidance on how to combine REs with NNs and RE annotation.",
"Background Typesetting In this paper, we use italic for emphasis like intent detection, the Courier typeface for abbreviations like RE, bold italic for the first appearance of a concept like clue words, Courier surrounded by / for regular expressions like /list( the)?",
"AIRLINE/, and underlined italic for words of sentences in our dataset like Boston.",
"Problem Definition Our work targets two SLU tasks: intent detection and slot filling.",
"The former is a sentence classification task where we learn a function to map an input sentence of n words, x = [x 1 , ..., x n ], to a corresponding intent label, c. The latter is a sequence labeling task for which we learn a function to take in an input query sentence of n words, x = [x 1 , ..., x n ], to produce a corresponding labeling sequence, y = [y 1 , ..., y n ], where y i is the slot label of the corresponding word, x i .",
"Take the sentence in Fig.",
"1 as an example.",
"A successful intent detector would suggest the intent of the sentence as flight, i.e., querying about flight-related information.",
"A slot filler, on the other hand, should identify the slots fromloc.city and toloc.city by labeling Boston and Miami, respectively, using the begin-inside-outside (BIO) scheme.",
"The Use of Regular Expressions In this work, a RE defines a mapping from a text pattern to several REtags which are the same as or related to the target labels (i.e., intent and slot labels).",
"A search function takes in a RE, applies it to all sentences, and returns any texts that match the pattern.",
"We then assign the REtag (s) (that are associated with the matching RE) to either the matched sentence (for intent detection) or some matched phrases (for slot filling).",
"Specifically, our REtags for intent detection are the same as the intent labels.",
"For example, in Fig.",
"1 , we get a REtag of flight that is the same as the intent label flight.",
"For slot filling, we use two different sets of REs.",
"Given the group functionality of RE, we can assign REtags to our interested RE groups (i.e., the expressions defined inside parentheses).",
"The translation from REtags to slot labels depends on how the corresponding REs are used.",
"(1) When REs are used at the network module level (Sec.",
"3.3), the corresponding REtags are the same as the target slot labels.",
"For instance, the slot RE in Fig.",
"1 will assign fromloc.city to the first RE group and toloc.city to the second one.",
"Here, CITY is a list of city names, which can be replaced with a RE string like /Boston|Miami|LA|.../.",
"(2) If REs are used in the input (Sec.",
"3.2) and the output layers (Sec.",
"3.4) of a NN, the corresponding REtag would be different from the target slot labels.",
"In this context, the two RE groups in Fig.",
"1 would be simply tagged as city to capture the commonality of three related target slot labels: fromloc.city, toloc.city, stoploc.city.",
"Note that we could use the target slot labels as REtags for all the settings.",
"The purpose of abstracting REtags to a simplified version of the target slot labels here is to show that REs can still be useful when their evaluation outcome does not exactly match our learning objective.",
"Further, as shown in Sec.",
"4.2, using simplified REtags can also make the development of REs easier in our tasks.",
"Intuitively, complicated REs can lead to better performance but require more efforts to generate.",
"Generally, there are two aspects affecting RE complexity most: the number of RE groups 1 and or clauses (i.e., expressions separated by the disjunction operator |) in a RE group.",
"Having a larger number of RE groups often leads to better 1 When discussing complexity, we consider each semantically independent consecutive word sequence as a RE group (excluding clauses, such as \\w+, that can match any word).",
"For instance, the RE: /how long( \\w+){1,2}?",
"(it take|flight)/ has two RE groups: (how long) and (it take|flight).",
"precision but lower coverage on pattern matching, while a larger number of or clauses usually gives a higher coverage but slightly lower precision.",
"Our Approach As depicted in Fig.",
"2 , we propose to combine NNs and REs from three different angles.",
"Base Models We use the Bi-directional LSTM (BLSTM) as our base NN model because it is effective in both intent detection and slot filling (Liu and Lane, 2016) .",
"Intent Detection.",
"As shown in Fig.",
"2 , the BLSTM takes as input the word embeddings [x 1 , ..., x n ] of a n-word sentence, and produces a vector h i for each word i.",
"A self-attention layer then takes in the vectors produced by the BLSTM to compute the sentence embedding s: s = i α i h i , α i = exp(h i Wc) i exp(h i Wc) (1) where α i is the attention for word i, c is a randomly initialized trainable vector used to select informative words for classification, and W is a weight matrix.",
"Finally, s is fed to a softmax classifier for intent classification.",
"Slot Filling.",
"The model for slot filling is straightforward -the slot label prediction is generated by a softmax classier which takes in the BLSTM's output h i and produces the slot label of word i.",
"Note that attention aggregation in Fig.",
"2 is only employed by the network module level method presented in Sec.",
"3.3.",
"Using REs at the Input Level At the input level, we use the evaluation outcomes of REs as features which are fed to NN models.",
"Intent Detection.",
"Our REtag for intent detection is the same as our target intent label.",
"Because real-world REs are unlikely to be perfect, one sentence may be matched by more than one RE.",
"This may result in several REtags that are conflict with each other.",
"For instance, the sentence list the Delta airlines flights to Miami can match a RE: /list( the)?",
"AIRLINE/ that outputs tag airline, and another RE: /list( \\w+){0,3} flights?/ that outputs tag flight.",
"To resolve the conflicting situations illustrated above, we average the randomly initialized trainable tag embeddings to form an aggregated embedding as the NN input.",
"There are two ways to use the aggregated embedding.",
"We can append the aggregated embedding to either the embedding of every input word, or the input of the softmax classifier (see 1 in Fig.",
"2(a) ).",
"To determine which strategy works best, we perform a pilot study.",
"We found that the first method causes the tag embedding to be copied many times; consequently, the NN tends to heavily rely on the REtags, and the resulting performance is similar to the one given by using REs alone in few-shot settings.",
"Thus, we adopt the second approach.",
"Slot Filling.",
"Since the evaluation outcomes of slot REs are word-level tags, we can simply embed and average the REtags into a vector f i for each word, and append it to the corresponding word embedding w i (as shown in 1 in Fig.",
"2(b) ).",
"Note that we also extend the slot REtags into the BIO format, e.g., the REtags of phrase New York are B-city and I-city if its original tag is city.",
"Using REs at the Network Module Level At the network module level, we explore ways to utilize the clue words in the surface form of a RE (bold blue arrows and words in 2 of Fig.",
"2 ) to guide the attention module in NNs.",
"Intent Detection.",
"Taking the sentence in Fig.",
"1 for example, the RE: /ˆflights?",
"from/ that leads to intent flight means that flights from are the key words to decide the intent flight.",
"Therefore, the attention module in NNs should leverage these two words to get the correct prediction.",
"To this end, we extend the base intent model by making two changes to incorporate the guidance from REs.",
"First, since each intent has its own clue words, using a single sentence embedding for all intent labels would make the attention less focused.",
"Therefore, we let each intent label k use different attention a k , which is then used to generate the sentence embedding s k for that intent: s k = i α ki h i , α ki = exp(h i W a c k ) i exp(h i W a c k ) (2) where c k is a trainable vector for intent k which is used to compute attention a k , h i is the BLSTM output for word i, and W a is a weight matrix.",
"The probability p k that the input sentence expresses intent k is computed by: where w k , logit k , b k are weight vector, logit, and bias for intent k, respectively.",
"p k = exp(logit k ) k exp(logit k ) , logit k = w k s k + b k (3) x 1 x 2 h 1 h 2 x 3 h Second, apart from indicating a sentence for intent k (positive REs), a RE can also indicate that a sentence does not express intent k (negative REs).",
"We thus use a new set of attention (negative attentions, in contrast to positive attentions), to compute another set of logits for each intent with Eqs.",
"2 and 3.",
"We denote the logits computed by positive attentions as logit pk , and those by negative attentions as logit nk , the final logit for intent k can then be calculated as: logit k = logit pk − logit nk (4) To use REs to guide attention, we add an attention loss to the final loss: loss att = k i t ki log(α ki ) (5) where t ki is set to 0 when none of the matched REs (that leads to intent k) marks word i as a clue word -otherwise t ki is set to 1/l k , where l k is the number of clue words for intent k (if no matched RE leads to intent k, then t k * = 0).",
"We use Eq.",
"5 to compute the positive attention loss, loss att p , for positive REs and negative attention loss, loss att n , for negative ones.",
"The final loss is computed as: loss = loss c + β p loss att p + β n loss att n (6) where loss c is the original classification loss, β p and β n are weights for the two attention losses.",
"Slot Filling.",
"The two-side attention (positive and negative attention) mechanism introduced for intent prediction is unsuitable for slot filling.",
"Because for slot filling, we need to compute attention for each word, which demands more compu-tational and memory resources than doing that for intent detection 2 .",
"Because of the aforementioned reason, we use a simplified version of the two-side attention, where all the slot labels share the same set of positive and negative attention.",
"Specifically, to predict the slot label of word i, we use the following equations, which are similar to Eq.",
"1, to generate a sentence embedding s pi with regard to word i from positive attention: s pi = j α pij h j , α pij = exp(h j W sp h i ) j exp(h j W sp h i ) (7) where h i and h j are the BLSTM outputs for word i and j respectively, W sp is a weight matrix, and α pij is the positive attention value for word j with respect to word i.",
"Further, by replacing W sp with W sn , we use Eq.",
"7 again to compute negative attention and generate the corresponding sentence embedding s ni .",
"Finally, the prediction p i for word i can be calculated as: p i = softmax((W p [s pi ; h i ] + b p ) −(W n [s ni ; h i ] + b n )) (8) where W p , W n , b p , b n are weight matrices and bias vectors for positive and negative attention, respectively.",
"Here we append the BLSTM output h i to s pi and s ni because the word i itself also plays a crucial part in identifying its slot label.",
"Using REs at the Output Level At the output level, REs are used to amend the output of NNs.",
"At this level, we take the same approach used for intent detection and slot filling (see 3 in Fig.",
"2 ).",
"As mentioned in Sec.",
"2.3, the slot REs used in the output level only produce a simplified version of target slot labels, for which we can further annotate their corresponding target slot labels.",
"For instance, a RE that outputs city can lead to three slot labels: fromloc.city, toloc.city, stoploc.city.",
"Let z k be a 0-1 indicator of whether there is at least one matched RE that leads to target label k (intent or slot label), the final logits of label k for a sentence (or a specific word for slot filling) is: logit k = logit k + w k z k (9) where logit k is the logit produced by the original NN, and w k is a trainable weight indicating the overall confidence for REs that lead to target label k. Here we do not assign a trainable weight for each RE because it is often that only a few sentences match a RE.",
"We modify the logit instead of the final probability because a logit is an unconstrained real value, which matches the property of w k z k better than probability.",
"Actually, when performing model ensemble, ensembling with logits is often empirically better than with the final probability 3 .",
"This is also the reason why we choose to operate on logits in Sec.",
"3.3.",
"Evaluation Methodology Our experiments aim to answer three questions: Q1: Does the use of REs enhance the learning quality when the number of annotated instances is small?",
"Q2: Does the use of REs still help when using the full training data?",
"Q3: How can we choose from different combination methods?",
"Datasets We use the ATIS dataset (Hemphill et al., 1990) to evaluate our approach.",
"This dataset is widely used in SLU research.",
"It includes queries of flights, meal, etc.",
"We follow the setup of Liu and Lane (2016) by using 4,978 queries for training and 893 for testing, with 18 intent labels and 127 slot labels.",
"We also split words like Miami's into Miami 's during the tokenization phase to reduce the number of words that do not have a pre-trained word embedding.",
"This strategy is useful for fewshot learning.",
"To answer Q1 , we also exploit the full few-shot learning setting.",
"Specifically, for intent detection, we randomly select 5, 10, 20 training instances for each intent to form the few-shot training set; and for slot filling, we also explore 5, 10, 20 shots settings.",
"However, since a sentence typically contains multiple slots, the number of mentions of frequent slot labels may inevitably exceeds the target shot count.",
"To better approximate the target shot count, we select sentences for each slot label in ascending order of label frequencies.",
"That is k 1 -shot dataset will contain k 2 -shot dataset if k 1 > k 2 .",
"All settings use the original test set.",
"Since most existing few-shot learning methods require either many few-shot classes or some classes with enough data for training, we also explore the partial few-shot learning setting for intent detection to provide a fair comparison for existing few-shot learning methods.",
"Specifically, we let the 3 most frequent intents have 300 training instances, and the rest remains untouched.",
"This is also a common scenario in real world, where we often have several frequent classes and many classes with limited data.",
"As for slot filling, however, since the number of mentions of frequent slot labels already exceeds the target shot count, the original slot filling few-shot dataset can be directly used to train existing few-shot learning methods.",
"Therefore, we do not distinguish full and partial few-shot learning for slot filling.",
"Preparing REs We use the syntax of REs in Perl in this work.",
"Our REs are written by a paid annotator who is familiar with the domain.",
"It took the annotator in total less than 10 hours to develop all the REs, while a domain expert can accomplish the task faster.",
"We use the 20-shot training data to develop the REs, but word lists like cities are obtained from the full training set.",
"The development of REs is considered completed when the REs can cover most of the cases in the 20-shot training data with resonable precision.",
"After that, the REs are fixed throughout the experiments.",
"The majority of the time for writing the REs is proportional to the number of RE groups.",
"It took about 1.5 hours to write the 54 intent REs with on average 2.2 groups per RE.",
"It is straightforward to write the slot REs for the input and output level methods, for which it took around 1 hour to write the 60 REs with 1.7 groups on average.",
"By con-trast, writing slot REs to guide attention requires more efforts as the annotator needs to carefully select clue words and annotate the full slot label.",
"As a result, it took about 5.5 hours to generate 115 REs with on average 3.3 groups.",
"The performance of the REs can be found in the last line of Table 1.",
"In practice, a positive RE for intent (or slot) k can often be treated as negative REs for other intents (or slots).",
"As such, we use the positive REs for intent (or slot) k as the negative REs for other intents (or slots) in our experiments.",
"Experimental Setup Hyper-parameters.",
"Our hyper-parameters for the BLSTM are similar to the ones used by Liu and Lane (2016) .",
"Specifically, we use batch size 16, dropout probability 0.5, and BLSTM cell size 100.",
"The attention loss weight is 16 (both positive and negative) for full few-shot learning settings and 1 for other settings.",
"We use the 100d GloVe word vectors (Pennington et al., 2014) pre-trained on Wikipedia and Gigaword (Parker et al., 2011) , and the Adam optimizer (Kingma and Ba, 2014) with learning rate 0.001.",
"Evaluation Metrics.",
"We report accuracy and macro-F1 for intent detection, and micro/macro-F1 for slot filling.",
"Micro/macro-F1 are the harmonic mean of micro/macro precision and recall.",
"Macro-precision/recall are calculated by averaging precision/recall of each label, and microprecision/recall are averaged over each prediction.",
"Competitors and Naming Conventions.",
"Here, a bold Courier typeface like BLSTM denotes the notations of the models that we will compare in Sec.",
"5.",
"Specifically, we compare our methods with the baseline BLSTM model (Sec.",
"3.1).",
"Since our attention loss method (Sec.",
"3.3) uses two-side attention, we include the raw two-side attention model without attention loss (+two) for comparison as well.",
"Besides, we also evaluate the RE output (REO), which uses the REtags as prediction directly, to show the quality of the REs that we will use in the experiments.",
"4 As for our methods for combinging REs with NN, +feat refers to using REtag as input features (Sec.",
"3.2), +posi and +neg refer to using positive and negative attention loss respectively, +both refers to using both postive and negative attention losses (Sec.",
"3.3), and +logit means using REtag to modify NN output (Sec.",
"3.4).",
"Moverover, since the REs can also be formatted as first-order-logic (FOL) rules, we also compare our methods with the teacher-student framework proposed by Hu et al.",
"(2016a) , which is a general framework for distilling knowledge from FOL rules into NN (+hu16).",
"Besides, since we consider few-short learning, we also include the memory module proposed by Kaiser et al.",
"(2017) , which performs well in various few-shot datasets (+mem) 5 .",
"Finally, the state-of-art model on the ATIS dataset is also included (L&L16), which jointly models the intent detection and slot filling in a single network (Liu and Lane, 2016) .",
"Experimental Results Full Few-Shot Learning To answer Q1 , we first explore the full few-shot learning scenario.",
"Intent Detection.",
"As shown in Table 1 , except for 5-shot, all approaches improve the baseline BLSTM.",
"Our network-module-level methods give the best performance because our attention module directly receives signals from the clue words in REs that contain more meaningful information than the REtag itself used by other methods.",
"We also observe that since negative REs are derived from positive REs with some noises, posi performs better than neg when the amount of available data is limited.",
"However, neg is slightly better in 20-shot, possibly because negative REs significantly outnumbers the positive ones.",
"Besides, two alone works better than the BLSTM when there are sufficient data, confirming the advantage of our two-side attention architecture.",
"As for other proposed methods, the output level method (logit) works generally better than the input level method (feat), except for the 5-shot case.",
"We believe this is due to the fewer number of RE related parameters and the shorter distance that the gradient needs to travel from the loss to these parameters -both make logit easier to train.",
"However, since logit directly modifies the output, the final prediction is more sensitive to the insufficiently trained weights in logit, leading to the inferior results in the 5-shot setting.",
"Model Type Model Name 90 / 74.47 68.69 / 84.66 72.43 / 85.78 59.59 / 83.47 73.62 / 89.28 78.94 / 92.21 +two+neg 49.01 / 68.31 64.67 / 79.17 72.32 / 86.34 59.51 / 83.23 72.92 / 89.11 78.83 / 92.07 +two+both 54.86 / 75.36 71.23 / 85.44 75.58 / 88.80 59.47 / 83.35 73.55 / 89.54 To compare with existing methods of combining NN and rules, we also implement the teacherstudent network (Hu et al., 2016a) .",
"This method lets the NN learn from the posterior label distribution produced by FOL rules in a teacher-student framework, but requires considerable amounts of data.",
"Therefore, although both hu16 and logit operate at the output level, logit still performs better than hu16 in these few-shot settings, since logit is easier to train.",
"It can also be seen that starting from 10-shot, two+both significantly outperforms pure REO.",
"This suggests that by using our attention loss to connect the distributional representation of the NN and the clue words of REs, we can generalize RE patterns within a NN architecture by using a small amount of annotated data.",
"Slot Filling.",
"Different from intent detection, as shown in Table 1 , our attention loss does not work for slot filling.",
"The reason is that the slot label of a target word (the word for which we are trying to predict a slot label) is decided mainly by the semantic meaning of the word itself, together with 0-3 phrases in the context to provide supplementary information.",
"However, our attention mechanism can only help in recognizing clue words in the context, which is less important than the word itself and have already been captured by the BLSTM, to some extent.",
"Therefore, the attention loss and the attention related parameters are more of a burden than a benefit.",
"As is shown in Fig.",
"1 , the model recognizes Boston as fromloc.city mainly because Boston itself is a city, and its context word from may have already been captured by the BLSTM and our attention mechanism does not help much.",
"By examining the attention values of +two trained on the full dataset, we find that instead of mark-ing informative context words, the attention tends to concentrate on the target word itself.",
"This observation further reinforces our hypothesis on the attention loss.",
"On the other hand, since the REtags provide extra information, such as type, about words in the sentence, logit and feat generally work better.",
"However, different from intent detection, feat only outperforms logit by a margin.",
"This is because feat can use the REtags of all words to generate better context representations through the NN, while logit can only utilize the REtag of the target word before the final output layer.",
"As a result, feat actually gathers more information from REs and can make better use of them than logit.",
"Again, hu16 is still outperformed by logit, possibly due to the insufficient data support in this few-shot scenario.",
"We also see that even the BLSTM outperforms REO in 5-shot, indicating while it is hard to write high-quality RE patterns, using REs to boost NNs is still feasible.",
"Summary.",
"The amount of extra information that a NN can utilize from the combined REs significantly affects the resulting performance.",
"Thus, the attention loss methods work best for intent detection and feat works best for slot filling.",
"We also see that the improvements from REs decreases as having more training data.",
"This is not surprising because the implicit knowledge embedded in the REs are likely to have already been captured by a sufficient large annotated dataset and in this scenario using the REs will bring in fewer benefits.",
"Partial Few-Shot Learning To better understand the relationship between our approach and existing few-shot learning methods, we also implement the memory network method Table 3 : Results on Full Dataset.",
"The left side of '/' applies for intent, and the right side for slot.",
"(Kaiser et al., 2017) which achieves good results in various few-shot datasets.",
"We adapt their opensource code, and add their memory module (mem) to our BLSTM model.",
"Since the memory module requires to be trained on either many few-shot classes or several classes with extra data, we expand our full few-shot dataset for intent detection, so that the top 3 intent labels have 300 sentences (partial few-shot).",
"As shown in Table 2 , mem works better than BLSTM, and our attention loss can be further combined with the memory module (mem+posi), with even better performance.",
"hu16 also works here, but worse than two+both.",
"Note that, the memory module requires the input sentence to have only one embedding, thus we only use one set of positive attention for combination.",
"As for slot filling, since we already have extra data for frequent tags in the original few-shot data (see Sec.",
"4.1), we use them directly to run the memory module.",
"As shown in the bottom of Table 1 , mem also improves the base BLSTM, and gains further boost when it is combined with feat 6 .",
"Full Dataset To answer Q2, we also evaluate our methods on the full dataset.",
"As seen in Table 3 , for intent detection, while two+both still works, feat and logit no longer give improvements.",
"This shows 6 For compactness, we only combine the best method in each task with mem, but others can also be combined.",
"that since both REtag and annotated data provide intent labels for the input sentence, the value of the extra noisy tag from RE become limited as we have more annotated data.",
"However, as there is no guidance on attention in the annotations, the clue words from REs are still useful.",
"Further, since feat concatenates REtags at the input level, the powerful NN makes it more likely to overfit than logit, therefore feat performs even worse when compared to the BLSTM.",
"As for slot filling, introducing feat and logit can still bring further improvements.",
"This shows that the word type information contained in the REtags is still hard to be fully learned even when we have more annotated data.",
"Moreover, different from few-shot settings, two+both has a better macro-F1 score than the BLSTM for this task, suggesting that better attention is still useful when the base model is properly trained.",
"Again, hu16 outperforms the BLSTM in both tasks, showing that although the REtags are noisy, their teacher-student network can still distill useful information.",
"However, hu16 is a general framework to combine FOL rules, which is more indirect in transferring knowledge from rules to NN than our methods.",
"Therefore, it is still inferior to attention loss in intent detection and feat in slot filling, which are designed to combine REs.",
"Further, mem generally works in this setting, and can receive further improvement by combining our fusion methods.",
"We can also see that two+both works clearly better than the stateof-art method (L&L16) in intent detection, which jointly models the two tasks.",
"And mem+feat is comparative to L&L16 in slot filling.",
"Impact of the RE Complexity We now discuss how the RE complexity affects the performance of the combination.",
"We choose to control the RE complexity by modifying the number of groups.",
"Specifically, we reduce the number of groups for existing REs to decrease RE complexity.",
"To mimic the process of writing simple REs from scratch, we try our best to keep the key RE groups.",
"For intent detection, all the REs are reduced to at most 2 groups.",
"As for slot filling, we also reduce the REs to at most 2 groups, and for some simples case, we further reduce them into word-list patterns, e.g., ( CITY).",
"As shown in Table 4 , the simple REs already deliver clear improvements to the base NN models, which shows the effectiveness of our methods, and indicates that simple REs are quite costefficient since these simple REs only contain 1-2 RE groups and thus very easy to produce.",
"We can also see that using complex REs generally leads to better results compared to using simple REs.",
"This indicates that when considering using REs to improve a NN model, we can start with simple REs, and gradually increase the RE complexity to improve the performance over time 7 .",
"Related Work Our work builds upon the following techniques, while qualitatively differing from each NN with Rules.",
"On the initialization side, uses important n-grams to initialize the convolution filters.",
"On the input side, Wang et al.",
"(2017a) uses knowledge base rules to find relevant concepts for short texts to augment input.",
"On the output side, Hu et al.",
"(2016a; 2016b) and Guo et al.",
"(2017) use FOL rules to rectify the output probability of NN, and then let NN learn from the rectified distribution in a teacher-student framework.",
"Xiao et al.",
"(2017) , on the other hand, modifies the decoding score of NN by multiplying a weight derived from rules.",
"On the loss function side, people modify the loss function to model the relationship between premise and conclusion (Demeester et al., 2016) , and fit both human-annotated and rule-annotated labels (Alashkar et al., 2017) .",
"Since fusing in initialization or in loss function often require special properties of the task, these approaches are not applicable to our problem.",
"Our work thus offers new ways to exploit RE rules at different levels of a NN.",
"NNs and REs.",
"As for NNs and REs, previous work has tried to use RE to speed up the decoding phase of a NN (Strauß et al., 2016) and generating REs from natural language specifications of the 7 We do not include results of both for slot filling since its REs are different from feat and logit, and we have already shown that the attention loss method does not work for slot filling.",
"RE (Locascio et al., 2016) .",
"By contrast, our work aims to use REs to improve the prediction ability of a NN.",
"Few-Shot Learning.",
"Prior work either considers few-shot learning in a metric learning framework (Koch et al., 2015; Vinyals et al., 2016) , or stores instances in a memory (Santoro et al., 2016; Kaiser et al., 2017) to match similar instances in the future.",
"Wang et al.",
"(2017b) further uses the semantic meaning of the class name itself to provide extra information for few-shot learning.",
"Unlike these previous studies, we seek to use the humangenerated REs to provide additional information.",
"Natural Language Understanding.",
"Recurrent neural networks are proven to be effective in both intent detection (Ravuri and Stoicke, 2015) and slot filling (Mesnil et al., 2015) .",
"Researchers also find ways to jointly model the two tasks (Liu and Lane, 2016; Zhang and Wang, 2016) .",
"However, no work so far has combined REs and NNs to improve intent detection and slot filling.",
"Conclusions In this paper, we investigate different ways to combine NNs and REs for solving typical SLU tasks.",
"Our experiments demonstrate that the combination clearly improves the NN performance in both the few-shot learning and the full dataset settings.",
"We show that by exploiting the implicit knowledge encoded within REs, one can significantly improve the learning performance.",
"Specifically, we observe that using REs to guide the attention module works best for intent detection, and using REtags as features is an effective approach for slot filling.",
"We provide interesting insights on how REs of various forms can be employed to improve NNs, showing that while simple REs are very cost-effective, complex REs generally yield better results."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"5.1",
"5.2",
"5.3",
"5.4",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Typesetting",
"Problem Definition",
"The Use of Regular Expressions",
"Our Approach",
"Base Models",
"Using REs at the Input Level",
"Using REs at the Network Module Level",
"Using REs at the Output Level",
"Evaluation Methodology",
"Datasets",
"Preparing REs",
"Experimental Setup",
"Full Few-Shot Learning",
"Partial Few-Shot Learning",
"Full Dataset",
"Impact of the RE Complexity",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-18#paper-1009#slide-2 | Which Part of Regular Expression to Use | Regular expression (RE) output is useful
/^flights? from/ flights from Boston to Tokyo intent: flight
flights from Boston to Tokyo fromloc.city:Boston toloc.city:Tokyo
RE contains clue words
NN should attend to these clue words for prediction | Regular expression (RE) output is useful
/^flights? from/ flights from Boston to Tokyo intent: flight
flights from Boston to Tokyo fromloc.city:Boston toloc.city:Tokyo
RE contains clue words
NN should attend to these clue words for prediction | [] |
GEM-SciDuet-train-18#paper-1009#slide-3 | 1009 | Marrying Up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding | The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data. In this paper, we ask the question: "Can we combine a neural network (NN) with regular expressions (RE) to improve supervised learning for NLP?". In answer, we develop novel methods to exploit the rich expressiveness of REs at different levels within a NN, showing that the combination significantly enhances the learning effectiveness when a small number of training examples are available. We evaluate our approach by applying it to spoken language understanding for intent detection and slot filling. Experimental results show that our approach is highly effective in exploiting the available training data, giving a clear boost to the RE-unaware NN. flights from Boston to Miami Intent RE: Intent Label: flight /from (__CITY) to (__CITY)/ O O B-fromloc.city O B-toloc.city Sentence: Slot Labels: Slot RE: /^flights? from/ REtag: flight city / toloc.city REtag: city / fromloc.city | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294
],
"paper_content_text": [
"Introduction Regular expressions (REs) are widely used in various natural language processing (NLP) tasks like pattern matching, sentence classification, sequence labeling, etc.",
"(Chang and Manning, 2014) .",
"As a technique based on human-crafted rules, it is concise, interpretable, tunable, and does not rely on much training data to generate.",
"As such, it is commonly used in industry, especially when the available training examples are limited -a problem known as few-shot learning (GC et al., 2015) .",
"While powerful, REs have a poor generalization ability because all synonyms and variations in a RE must be explicitly specified.",
"As a result, REs are often ensembled with data-driven methods, such as neural network (NN) based techniques, where a set of carefully-written REs are used to handle certain cases with high precision, leaving the rest for data-driven methods.",
"We believe the use of REs can go beyond simple pattern matching.",
"In addition to being a separate classifier to be ensembled, a RE also encodes a developer's knowledge for the problem domain.",
"The knowledge could be, for example, the informative words (clue words) within a RE's surface form.",
"We argue that such information can be utilized by data-driven methods to achieve better prediction results, especially in few-shot learning.",
"This work investigates the use of REs to improve NNs -a learning framework that is widely used in many NLP tasks (Goldberg, 2017) .",
"The combination of REs and a NN allows us to exploit the conciseness and effectiveness of REs and the strong generalization ability of NNs.",
"This also provides us an opportunity to learn from various kinds of REs, since NNs are known to be good at tolerating noises (Xie et al., 2016) .",
"This paper presents novel approaches to combine REs with a NN at different levels.",
"At the input layer, we propose to use the evaluation outcome of REs as the input features of a NN (Sec.3.2).",
"At the network module level, we show how to exploit the knowledge encoded in REs to guide the attention mechanism of a NN (Sec.",
"3.3).",
"At the output layer, we combine the evaluation outcome of a RE with the NN output in a learnable manner (Sec.",
"3.4) .",
"We evaluate our approach by applying it to two spoken language understanding (SLU) tasks, namely intent detection and slot filling, which respectively correspond to two fundamental NLP tasks: sentence classification and sequence labeling.",
"To demonstrate the usefulness of REs in realworld scenarios where the available number of annotated data can vary, we explore both the fewshot learning setting and the one with full training data.",
"Experimental results show that our approach is highly effective in utilizing the available Figure 1 : A sentence from the ATIS dataset.",
"REs can be used to detect the intent and label slots.",
"annotated data, yielding significantly better learning performance over the RE-unaware method.",
"Our contributions are as follows.",
"(1) We present the first work to systematically investigate methods for combining REs with NNs.",
"(2) The proposed methods are shown to clearly improve the NN performance in both the few-shot learning and the full annotation settings.",
"(3) We provide a set of guidance on how to combine REs with NNs and RE annotation.",
"Background Typesetting In this paper, we use italic for emphasis like intent detection, the Courier typeface for abbreviations like RE, bold italic for the first appearance of a concept like clue words, Courier surrounded by / for regular expressions like /list( the)?",
"AIRLINE/, and underlined italic for words of sentences in our dataset like Boston.",
"Problem Definition Our work targets two SLU tasks: intent detection and slot filling.",
"The former is a sentence classification task where we learn a function to map an input sentence of n words, x = [x 1 , ..., x n ], to a corresponding intent label, c. The latter is a sequence labeling task for which we learn a function to take in an input query sentence of n words, x = [x 1 , ..., x n ], to produce a corresponding labeling sequence, y = [y 1 , ..., y n ], where y i is the slot label of the corresponding word, x i .",
"Take the sentence in Fig.",
"1 as an example.",
"A successful intent detector would suggest the intent of the sentence as flight, i.e., querying about flight-related information.",
"A slot filler, on the other hand, should identify the slots fromloc.city and toloc.city by labeling Boston and Miami, respectively, using the begin-inside-outside (BIO) scheme.",
"The Use of Regular Expressions In this work, a RE defines a mapping from a text pattern to several REtags which are the same as or related to the target labels (i.e., intent and slot labels).",
"A search function takes in a RE, applies it to all sentences, and returns any texts that match the pattern.",
"We then assign the REtag (s) (that are associated with the matching RE) to either the matched sentence (for intent detection) or some matched phrases (for slot filling).",
"Specifically, our REtags for intent detection are the same as the intent labels.",
"For example, in Fig.",
"1 , we get a REtag of flight that is the same as the intent label flight.",
"For slot filling, we use two different sets of REs.",
"Given the group functionality of RE, we can assign REtags to our interested RE groups (i.e., the expressions defined inside parentheses).",
"The translation from REtags to slot labels depends on how the corresponding REs are used.",
"(1) When REs are used at the network module level (Sec.",
"3.3), the corresponding REtags are the same as the target slot labels.",
"For instance, the slot RE in Fig.",
"1 will assign fromloc.city to the first RE group and toloc.city to the second one.",
"Here, CITY is a list of city names, which can be replaced with a RE string like /Boston|Miami|LA|.../.",
"(2) If REs are used in the input (Sec.",
"3.2) and the output layers (Sec.",
"3.4) of a NN, the corresponding REtag would be different from the target slot labels.",
"In this context, the two RE groups in Fig.",
"1 would be simply tagged as city to capture the commonality of three related target slot labels: fromloc.city, toloc.city, stoploc.city.",
"Note that we could use the target slot labels as REtags for all the settings.",
"The purpose of abstracting REtags to a simplified version of the target slot labels here is to show that REs can still be useful when their evaluation outcome does not exactly match our learning objective.",
"Further, as shown in Sec.",
"4.2, using simplified REtags can also make the development of REs easier in our tasks.",
"Intuitively, complicated REs can lead to better performance but require more efforts to generate.",
"Generally, there are two aspects affecting RE complexity most: the number of RE groups 1 and or clauses (i.e., expressions separated by the disjunction operator |) in a RE group.",
"Having a larger number of RE groups often leads to better 1 When discussing complexity, we consider each semantically independent consecutive word sequence as a RE group (excluding clauses, such as \\w+, that can match any word).",
"For instance, the RE: /how long( \\w+){1,2}?",
"(it take|flight)/ has two RE groups: (how long) and (it take|flight).",
"precision but lower coverage on pattern matching, while a larger number of or clauses usually gives a higher coverage but slightly lower precision.",
"Our Approach As depicted in Fig.",
"2 , we propose to combine NNs and REs from three different angles.",
"Base Models We use the Bi-directional LSTM (BLSTM) as our base NN model because it is effective in both intent detection and slot filling (Liu and Lane, 2016) .",
"Intent Detection.",
"As shown in Fig.",
"2 , the BLSTM takes as input the word embeddings [x 1 , ..., x n ] of a n-word sentence, and produces a vector h i for each word i.",
"A self-attention layer then takes in the vectors produced by the BLSTM to compute the sentence embedding s: s = i α i h i , α i = exp(h i Wc) i exp(h i Wc) (1) where α i is the attention for word i, c is a randomly initialized trainable vector used to select informative words for classification, and W is a weight matrix.",
"Finally, s is fed to a softmax classifier for intent classification.",
"Slot Filling.",
"The model for slot filling is straightforward -the slot label prediction is generated by a softmax classier which takes in the BLSTM's output h i and produces the slot label of word i.",
"Note that attention aggregation in Fig.",
"2 is only employed by the network module level method presented in Sec.",
"3.3.",
"Using REs at the Input Level At the input level, we use the evaluation outcomes of REs as features which are fed to NN models.",
"Intent Detection.",
"Our REtag for intent detection is the same as our target intent label.",
"Because real-world REs are unlikely to be perfect, one sentence may be matched by more than one RE.",
"This may result in several REtags that are conflict with each other.",
"For instance, the sentence list the Delta airlines flights to Miami can match a RE: /list( the)?",
"AIRLINE/ that outputs tag airline, and another RE: /list( \\w+){0,3} flights?/ that outputs tag flight.",
"To resolve the conflicting situations illustrated above, we average the randomly initialized trainable tag embeddings to form an aggregated embedding as the NN input.",
"There are two ways to use the aggregated embedding.",
"We can append the aggregated embedding to either the embedding of every input word, or the input of the softmax classifier (see 1 in Fig.",
"2(a) ).",
"To determine which strategy works best, we perform a pilot study.",
"We found that the first method causes the tag embedding to be copied many times; consequently, the NN tends to heavily rely on the REtags, and the resulting performance is similar to the one given by using REs alone in few-shot settings.",
"Thus, we adopt the second approach.",
"Slot Filling.",
"Since the evaluation outcomes of slot REs are word-level tags, we can simply embed and average the REtags into a vector f i for each word, and append it to the corresponding word embedding w i (as shown in 1 in Fig.",
"2(b) ).",
"Note that we also extend the slot REtags into the BIO format, e.g., the REtags of phrase New York are B-city and I-city if its original tag is city.",
"Using REs at the Network Module Level At the network module level, we explore ways to utilize the clue words in the surface form of a RE (bold blue arrows and words in 2 of Fig.",
"2 ) to guide the attention module in NNs.",
"Intent Detection.",
"Taking the sentence in Fig.",
"1 for example, the RE: /ˆflights?",
"from/ that leads to intent flight means that flights from are the key words to decide the intent flight.",
"Therefore, the attention module in NNs should leverage these two words to get the correct prediction.",
"To this end, we extend the base intent model by making two changes to incorporate the guidance from REs.",
"First, since each intent has its own clue words, using a single sentence embedding for all intent labels would make the attention less focused.",
"Therefore, we let each intent label k use different attention a k , which is then used to generate the sentence embedding s k for that intent: s k = i α ki h i , α ki = exp(h i W a c k ) i exp(h i W a c k ) (2) where c k is a trainable vector for intent k which is used to compute attention a k , h i is the BLSTM output for word i, and W a is a weight matrix.",
"The probability p k that the input sentence expresses intent k is computed by: where w k , logit k , b k are weight vector, logit, and bias for intent k, respectively.",
"p k = exp(logit k ) k exp(logit k ) , logit k = w k s k + b k (3) x 1 x 2 h 1 h 2 x 3 h Second, apart from indicating a sentence for intent k (positive REs), a RE can also indicate that a sentence does not express intent k (negative REs).",
"We thus use a new set of attention (negative attentions, in contrast to positive attentions), to compute another set of logits for each intent with Eqs.",
"2 and 3.",
"We denote the logits computed by positive attentions as logit pk , and those by negative attentions as logit nk , the final logit for intent k can then be calculated as: logit k = logit pk − logit nk (4) To use REs to guide attention, we add an attention loss to the final loss: loss att = k i t ki log(α ki ) (5) where t ki is set to 0 when none of the matched REs (that leads to intent k) marks word i as a clue word -otherwise t ki is set to 1/l k , where l k is the number of clue words for intent k (if no matched RE leads to intent k, then t k * = 0).",
"We use Eq.",
"5 to compute the positive attention loss, loss att p , for positive REs and negative attention loss, loss att n , for negative ones.",
"The final loss is computed as: loss = loss c + β p loss att p + β n loss att n (6) where loss c is the original classification loss, β p and β n are weights for the two attention losses.",
"Slot Filling.",
"The two-side attention (positive and negative attention) mechanism introduced for intent prediction is unsuitable for slot filling.",
"Because for slot filling, we need to compute attention for each word, which demands more compu-tational and memory resources than doing that for intent detection 2 .",
"Because of the aforementioned reason, we use a simplified version of the two-side attention, where all the slot labels share the same set of positive and negative attention.",
"Specifically, to predict the slot label of word i, we use the following equations, which are similar to Eq.",
"1, to generate a sentence embedding s pi with regard to word i from positive attention: s pi = j α pij h j , α pij = exp(h j W sp h i ) j exp(h j W sp h i ) (7) where h i and h j are the BLSTM outputs for word i and j respectively, W sp is a weight matrix, and α pij is the positive attention value for word j with respect to word i.",
"Further, by replacing W sp with W sn , we use Eq.",
"7 again to compute negative attention and generate the corresponding sentence embedding s ni .",
"Finally, the prediction p i for word i can be calculated as: p i = softmax((W p [s pi ; h i ] + b p ) −(W n [s ni ; h i ] + b n )) (8) where W p , W n , b p , b n are weight matrices and bias vectors for positive and negative attention, respectively.",
"Here we append the BLSTM output h i to s pi and s ni because the word i itself also plays a crucial part in identifying its slot label.",
"Using REs at the Output Level At the output level, REs are used to amend the output of NNs.",
"At this level, we take the same approach used for intent detection and slot filling (see 3 in Fig.",
"2 ).",
"As mentioned in Sec.",
"2.3, the slot REs used in the output level only produce a simplified version of target slot labels, for which we can further annotate their corresponding target slot labels.",
"For instance, a RE that outputs city can lead to three slot labels: fromloc.city, toloc.city, stoploc.city.",
"Let z k be a 0-1 indicator of whether there is at least one matched RE that leads to target label k (intent or slot label), the final logits of label k for a sentence (or a specific word for slot filling) is: logit k = logit k + w k z k (9) where logit k is the logit produced by the original NN, and w k is a trainable weight indicating the overall confidence for REs that lead to target label k. Here we do not assign a trainable weight for each RE because it is often that only a few sentences match a RE.",
"We modify the logit instead of the final probability because a logit is an unconstrained real value, which matches the property of w k z k better than probability.",
"Actually, when performing model ensemble, ensembling with logits is often empirically better than with the final probability 3 .",
"This is also the reason why we choose to operate on logits in Sec.",
"3.3.",
"Evaluation Methodology Our experiments aim to answer three questions: Q1: Does the use of REs enhance the learning quality when the number of annotated instances is small?",
"Q2: Does the use of REs still help when using the full training data?",
"Q3: How can we choose from different combination methods?",
"Datasets We use the ATIS dataset (Hemphill et al., 1990) to evaluate our approach.",
"This dataset is widely used in SLU research.",
"It includes queries of flights, meal, etc.",
"We follow the setup of Liu and Lane (2016) by using 4,978 queries for training and 893 for testing, with 18 intent labels and 127 slot labels.",
"We also split words like Miami's into Miami 's during the tokenization phase to reduce the number of words that do not have a pre-trained word embedding.",
"This strategy is useful for fewshot learning.",
"To answer Q1 , we also exploit the full few-shot learning setting.",
"Specifically, for intent detection, we randomly select 5, 10, 20 training instances for each intent to form the few-shot training set; and for slot filling, we also explore 5, 10, 20 shots settings.",
"However, since a sentence typically contains multiple slots, the number of mentions of frequent slot labels may inevitably exceeds the target shot count.",
"To better approximate the target shot count, we select sentences for each slot label in ascending order of label frequencies.",
"That is k 1 -shot dataset will contain k 2 -shot dataset if k 1 > k 2 .",
"All settings use the original test set.",
"Since most existing few-shot learning methods require either many few-shot classes or some classes with enough data for training, we also explore the partial few-shot learning setting for intent detection to provide a fair comparison for existing few-shot learning methods.",
"Specifically, we let the 3 most frequent intents have 300 training instances, and the rest remains untouched.",
"This is also a common scenario in real world, where we often have several frequent classes and many classes with limited data.",
"As for slot filling, however, since the number of mentions of frequent slot labels already exceeds the target shot count, the original slot filling few-shot dataset can be directly used to train existing few-shot learning methods.",
"Therefore, we do not distinguish full and partial few-shot learning for slot filling.",
"Preparing REs We use the syntax of REs in Perl in this work.",
"Our REs are written by a paid annotator who is familiar with the domain.",
"It took the annotator in total less than 10 hours to develop all the REs, while a domain expert can accomplish the task faster.",
"We use the 20-shot training data to develop the REs, but word lists like cities are obtained from the full training set.",
"The development of REs is considered completed when the REs can cover most of the cases in the 20-shot training data with resonable precision.",
"After that, the REs are fixed throughout the experiments.",
"The majority of the time for writing the REs is proportional to the number of RE groups.",
"It took about 1.5 hours to write the 54 intent REs with on average 2.2 groups per RE.",
"It is straightforward to write the slot REs for the input and output level methods, for which it took around 1 hour to write the 60 REs with 1.7 groups on average.",
"By con-trast, writing slot REs to guide attention requires more efforts as the annotator needs to carefully select clue words and annotate the full slot label.",
"As a result, it took about 5.5 hours to generate 115 REs with on average 3.3 groups.",
"The performance of the REs can be found in the last line of Table 1.",
"In practice, a positive RE for intent (or slot) k can often be treated as negative REs for other intents (or slots).",
"As such, we use the positive REs for intent (or slot) k as the negative REs for other intents (or slots) in our experiments.",
"Experimental Setup Hyper-parameters.",
"Our hyper-parameters for the BLSTM are similar to the ones used by Liu and Lane (2016) .",
"Specifically, we use batch size 16, dropout probability 0.5, and BLSTM cell size 100.",
"The attention loss weight is 16 (both positive and negative) for full few-shot learning settings and 1 for other settings.",
"We use the 100d GloVe word vectors (Pennington et al., 2014) pre-trained on Wikipedia and Gigaword (Parker et al., 2011) , and the Adam optimizer (Kingma and Ba, 2014) with learning rate 0.001.",
"Evaluation Metrics.",
"We report accuracy and macro-F1 for intent detection, and micro/macro-F1 for slot filling.",
"Micro/macro-F1 are the harmonic mean of micro/macro precision and recall.",
"Macro-precision/recall are calculated by averaging precision/recall of each label, and microprecision/recall are averaged over each prediction.",
"Competitors and Naming Conventions.",
"Here, a bold Courier typeface like BLSTM denotes the notations of the models that we will compare in Sec.",
"5.",
"Specifically, we compare our methods with the baseline BLSTM model (Sec.",
"3.1).",
"Since our attention loss method (Sec.",
"3.3) uses two-side attention, we include the raw two-side attention model without attention loss (+two) for comparison as well.",
"Besides, we also evaluate the RE output (REO), which uses the REtags as prediction directly, to show the quality of the REs that we will use in the experiments.",
"4 As for our methods for combinging REs with NN, +feat refers to using REtag as input features (Sec.",
"3.2), +posi and +neg refer to using positive and negative attention loss respectively, +both refers to using both postive and negative attention losses (Sec.",
"3.3), and +logit means using REtag to modify NN output (Sec.",
"3.4).",
"Moverover, since the REs can also be formatted as first-order-logic (FOL) rules, we also compare our methods with the teacher-student framework proposed by Hu et al.",
"(2016a) , which is a general framework for distilling knowledge from FOL rules into NN (+hu16).",
"Besides, since we consider few-short learning, we also include the memory module proposed by Kaiser et al.",
"(2017) , which performs well in various few-shot datasets (+mem) 5 .",
"Finally, the state-of-art model on the ATIS dataset is also included (L&L16), which jointly models the intent detection and slot filling in a single network (Liu and Lane, 2016) .",
"Experimental Results Full Few-Shot Learning To answer Q1 , we first explore the full few-shot learning scenario.",
"Intent Detection.",
"As shown in Table 1 , except for 5-shot, all approaches improve the baseline BLSTM.",
"Our network-module-level methods give the best performance because our attention module directly receives signals from the clue words in REs that contain more meaningful information than the REtag itself used by other methods.",
"We also observe that since negative REs are derived from positive REs with some noises, posi performs better than neg when the amount of available data is limited.",
"However, neg is slightly better in 20-shot, possibly because negative REs significantly outnumbers the positive ones.",
"Besides, two alone works better than the BLSTM when there are sufficient data, confirming the advantage of our two-side attention architecture.",
"As for other proposed methods, the output level method (logit) works generally better than the input level method (feat), except for the 5-shot case.",
"We believe this is due to the fewer number of RE related parameters and the shorter distance that the gradient needs to travel from the loss to these parameters -both make logit easier to train.",
"However, since logit directly modifies the output, the final prediction is more sensitive to the insufficiently trained weights in logit, leading to the inferior results in the 5-shot setting.",
"Model Type Model Name 90 / 74.47 68.69 / 84.66 72.43 / 85.78 59.59 / 83.47 73.62 / 89.28 78.94 / 92.21 +two+neg 49.01 / 68.31 64.67 / 79.17 72.32 / 86.34 59.51 / 83.23 72.92 / 89.11 78.83 / 92.07 +two+both 54.86 / 75.36 71.23 / 85.44 75.58 / 88.80 59.47 / 83.35 73.55 / 89.54 To compare with existing methods of combining NN and rules, we also implement the teacherstudent network (Hu et al., 2016a) .",
"This method lets the NN learn from the posterior label distribution produced by FOL rules in a teacher-student framework, but requires considerable amounts of data.",
"Therefore, although both hu16 and logit operate at the output level, logit still performs better than hu16 in these few-shot settings, since logit is easier to train.",
"It can also be seen that starting from 10-shot, two+both significantly outperforms pure REO.",
"This suggests that by using our attention loss to connect the distributional representation of the NN and the clue words of REs, we can generalize RE patterns within a NN architecture by using a small amount of annotated data.",
"Slot Filling.",
"Different from intent detection, as shown in Table 1 , our attention loss does not work for slot filling.",
"The reason is that the slot label of a target word (the word for which we are trying to predict a slot label) is decided mainly by the semantic meaning of the word itself, together with 0-3 phrases in the context to provide supplementary information.",
"However, our attention mechanism can only help in recognizing clue words in the context, which is less important than the word itself and have already been captured by the BLSTM, to some extent.",
"Therefore, the attention loss and the attention related parameters are more of a burden than a benefit.",
"As is shown in Fig.",
"1 , the model recognizes Boston as fromloc.city mainly because Boston itself is a city, and its context word from may have already been captured by the BLSTM and our attention mechanism does not help much.",
"By examining the attention values of +two trained on the full dataset, we find that instead of mark-ing informative context words, the attention tends to concentrate on the target word itself.",
"This observation further reinforces our hypothesis on the attention loss.",
"On the other hand, since the REtags provide extra information, such as type, about words in the sentence, logit and feat generally work better.",
"However, different from intent detection, feat only outperforms logit by a margin.",
"This is because feat can use the REtags of all words to generate better context representations through the NN, while logit can only utilize the REtag of the target word before the final output layer.",
"As a result, feat actually gathers more information from REs and can make better use of them than logit.",
"Again, hu16 is still outperformed by logit, possibly due to the insufficient data support in this few-shot scenario.",
"We also see that even the BLSTM outperforms REO in 5-shot, indicating while it is hard to write high-quality RE patterns, using REs to boost NNs is still feasible.",
"Summary.",
"The amount of extra information that a NN can utilize from the combined REs significantly affects the resulting performance.",
"Thus, the attention loss methods work best for intent detection and feat works best for slot filling.",
"We also see that the improvements from REs decreases as having more training data.",
"This is not surprising because the implicit knowledge embedded in the REs are likely to have already been captured by a sufficient large annotated dataset and in this scenario using the REs will bring in fewer benefits.",
"Partial Few-Shot Learning To better understand the relationship between our approach and existing few-shot learning methods, we also implement the memory network method Table 3 : Results on Full Dataset.",
"The left side of '/' applies for intent, and the right side for slot.",
"(Kaiser et al., 2017) which achieves good results in various few-shot datasets.",
"We adapt their opensource code, and add their memory module (mem) to our BLSTM model.",
"Since the memory module requires to be trained on either many few-shot classes or several classes with extra data, we expand our full few-shot dataset for intent detection, so that the top 3 intent labels have 300 sentences (partial few-shot).",
"As shown in Table 2 , mem works better than BLSTM, and our attention loss can be further combined with the memory module (mem+posi), with even better performance.",
"hu16 also works here, but worse than two+both.",
"Note that, the memory module requires the input sentence to have only one embedding, thus we only use one set of positive attention for combination.",
"As for slot filling, since we already have extra data for frequent tags in the original few-shot data (see Sec.",
"4.1), we use them directly to run the memory module.",
"As shown in the bottom of Table 1 , mem also improves the base BLSTM, and gains further boost when it is combined with feat 6 .",
"Full Dataset To answer Q2, we also evaluate our methods on the full dataset.",
"As seen in Table 3 , for intent detection, while two+both still works, feat and logit no longer give improvements.",
"This shows 6 For compactness, we only combine the best method in each task with mem, but others can also be combined.",
"that since both REtag and annotated data provide intent labels for the input sentence, the value of the extra noisy tag from RE become limited as we have more annotated data.",
"However, as there is no guidance on attention in the annotations, the clue words from REs are still useful.",
"Further, since feat concatenates REtags at the input level, the powerful NN makes it more likely to overfit than logit, therefore feat performs even worse when compared to the BLSTM.",
"As for slot filling, introducing feat and logit can still bring further improvements.",
"This shows that the word type information contained in the REtags is still hard to be fully learned even when we have more annotated data.",
"Moreover, different from few-shot settings, two+both has a better macro-F1 score than the BLSTM for this task, suggesting that better attention is still useful when the base model is properly trained.",
"Again, hu16 outperforms the BLSTM in both tasks, showing that although the REtags are noisy, their teacher-student network can still distill useful information.",
"However, hu16 is a general framework to combine FOL rules, which is more indirect in transferring knowledge from rules to NN than our methods.",
"Therefore, it is still inferior to attention loss in intent detection and feat in slot filling, which are designed to combine REs.",
"Further, mem generally works in this setting, and can receive further improvement by combining our fusion methods.",
"We can also see that two+both works clearly better than the stateof-art method (L&L16) in intent detection, which jointly models the two tasks.",
"And mem+feat is comparative to L&L16 in slot filling.",
"Impact of the RE Complexity We now discuss how the RE complexity affects the performance of the combination.",
"We choose to control the RE complexity by modifying the number of groups.",
"Specifically, we reduce the number of groups for existing REs to decrease RE complexity.",
"To mimic the process of writing simple REs from scratch, we try our best to keep the key RE groups.",
"For intent detection, all the REs are reduced to at most 2 groups.",
"As for slot filling, we also reduce the REs to at most 2 groups, and for some simples case, we further reduce them into word-list patterns, e.g., ( CITY).",
"As shown in Table 4 , the simple REs already deliver clear improvements to the base NN models, which shows the effectiveness of our methods, and indicates that simple REs are quite costefficient since these simple REs only contain 1-2 RE groups and thus very easy to produce.",
"We can also see that using complex REs generally leads to better results compared to using simple REs.",
"This indicates that when considering using REs to improve a NN model, we can start with simple REs, and gradually increase the RE complexity to improve the performance over time 7 .",
"Related Work Our work builds upon the following techniques, while qualitatively differing from each NN with Rules.",
"On the initialization side, uses important n-grams to initialize the convolution filters.",
"On the input side, Wang et al.",
"(2017a) uses knowledge base rules to find relevant concepts for short texts to augment input.",
"On the output side, Hu et al.",
"(2016a; 2016b) and Guo et al.",
"(2017) use FOL rules to rectify the output probability of NN, and then let NN learn from the rectified distribution in a teacher-student framework.",
"Xiao et al.",
"(2017) , on the other hand, modifies the decoding score of NN by multiplying a weight derived from rules.",
"On the loss function side, people modify the loss function to model the relationship between premise and conclusion (Demeester et al., 2016) , and fit both human-annotated and rule-annotated labels (Alashkar et al., 2017) .",
"Since fusing in initialization or in loss function often require special properties of the task, these approaches are not applicable to our problem.",
"Our work thus offers new ways to exploit RE rules at different levels of a NN.",
"NNs and REs.",
"As for NNs and REs, previous work has tried to use RE to speed up the decoding phase of a NN (Strauß et al., 2016) and generating REs from natural language specifications of the 7 We do not include results of both for slot filling since its REs are different from feat and logit, and we have already shown that the attention loss method does not work for slot filling.",
"RE (Locascio et al., 2016) .",
"By contrast, our work aims to use REs to improve the prediction ability of a NN.",
"Few-Shot Learning.",
"Prior work either considers few-shot learning in a metric learning framework (Koch et al., 2015; Vinyals et al., 2016) , or stores instances in a memory (Santoro et al., 2016; Kaiser et al., 2017) to match similar instances in the future.",
"Wang et al.",
"(2017b) further uses the semantic meaning of the class name itself to provide extra information for few-shot learning.",
"Unlike these previous studies, we seek to use the humangenerated REs to provide additional information.",
"Natural Language Understanding.",
"Recurrent neural networks are proven to be effective in both intent detection (Ravuri and Stoicke, 2015) and slot filling (Mesnil et al., 2015) .",
"Researchers also find ways to jointly model the two tasks (Liu and Lane, 2016; Zhang and Wang, 2016) .",
"However, no work so far has combined REs and NNs to improve intent detection and slot filling.",
"Conclusions In this paper, we investigate different ways to combine NNs and REs for solving typical SLU tasks.",
"Our experiments demonstrate that the combination clearly improves the NN performance in both the few-shot learning and the full dataset settings.",
"We show that by exploiting the implicit knowledge encoded within REs, one can significantly improve the learning performance.",
"Specifically, we observe that using REs to guide the attention module works best for intent detection, and using REtags as features is an effective approach for slot filling.",
"We provide interesting insights on how REs of various forms can be employed to improve NNs, showing that while simple REs are very cost-effective, complex REs generally yield better results."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"5.1",
"5.2",
"5.3",
"5.4",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Typesetting",
"Problem Definition",
"The Use of Regular Expressions",
"Our Approach",
"Base Models",
"Using REs at the Input Level",
"Using REs at the Network Module Level",
"Using REs at the Output Level",
"Evaluation Methodology",
"Datasets",
"Preparing REs",
"Experimental Setup",
"Full Few-Shot Learning",
"Partial Few-Shot Learning",
"Full Dataset",
"Impact of the RE Complexity",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-18#paper-1009#slide-3 | Method 1 RE Output As Features | Embed the REtag, append to input
REtag: flight Softmax Classifier
RE feat s Attention
Intent Detection h1 h2 h3 h4 h5
BLSTM RE Instance x1 x2 x3 x4 x5
flights from Boston to Miami /^flights? from/
flights from Boston to Miami REtag: O O B-loc.city O B-loc.city | Embed the REtag, append to input
REtag: flight Softmax Classifier
RE feat s Attention
Intent Detection h1 h2 h3 h4 h5
BLSTM RE Instance x1 x2 x3 x4 x5
flights from Boston to Miami /^flights? from/
flights from Boston to Miami REtag: O O B-loc.city O B-loc.city | [] |
GEM-SciDuet-train-18#paper-1009#slide-4 | 1009 | Marrying Up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding | The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data. In this paper, we ask the question: "Can we combine a neural network (NN) with regular expressions (RE) to improve supervised learning for NLP?". In answer, we develop novel methods to exploit the rich expressiveness of REs at different levels within a NN, showing that the combination significantly enhances the learning effectiveness when a small number of training examples are available. We evaluate our approach by applying it to spoken language understanding for intent detection and slot filling. Experimental results show that our approach is highly effective in exploiting the available training data, giving a clear boost to the RE-unaware NN. [Figure 1: sentence "flights from Boston to Miami" with slot labels O O B-fromloc.city O B-toloc.city; intent RE /^flights? from/ gives intent label flight; slot RE /from (__CITY) to (__CITY)/ gives REtags city / fromloc.city and city / toloc.city.] | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294
],
"paper_content_text": [
"Introduction Regular expressions (REs) are widely used in various natural language processing (NLP) tasks like pattern matching, sentence classification, sequence labeling, etc.",
"(Chang and Manning, 2014) .",
"As a technique based on human-crafted rules, it is concise, interpretable, tunable, and does not rely on much training data to generate.",
"As such, it is commonly used in industry, especially when the available training examples are limited -a problem known as few-shot learning (GC et al., 2015) .",
"While powerful, REs have a poor generalization ability because all synonyms and variations in a RE must be explicitly specified.",
"As a result, REs are often ensembled with data-driven methods, such as neural network (NN) based techniques, where a set of carefully-written REs are used to handle certain cases with high precision, leaving the rest for data-driven methods.",
"We believe the use of REs can go beyond simple pattern matching.",
"In addition to being a separate classifier to be ensembled, a RE also encodes a developer's knowledge for the problem domain.",
"The knowledge could be, for example, the informative words (clue words) within a RE's surface form.",
"We argue that such information can be utilized by data-driven methods to achieve better prediction results, especially in few-shot learning.",
"This work investigates the use of REs to improve NNs -a learning framework that is widely used in many NLP tasks (Goldberg, 2017) .",
"The combination of REs and a NN allows us to exploit the conciseness and effectiveness of REs and the strong generalization ability of NNs.",
"This also provides us an opportunity to learn from various kinds of REs, since NNs are known to be good at tolerating noises (Xie et al., 2016) .",
"This paper presents novel approaches to combine REs with a NN at different levels.",
"At the input layer, we propose to use the evaluation outcome of REs as the input features of a NN (Sec.3.2).",
"At the network module level, we show how to exploit the knowledge encoded in REs to guide the attention mechanism of a NN (Sec.",
"3.3).",
"At the output layer, we combine the evaluation outcome of a RE with the NN output in a learnable manner (Sec.",
"3.4) .",
"We evaluate our approach by applying it to two spoken language understanding (SLU) tasks, namely intent detection and slot filling, which respectively correspond to two fundamental NLP tasks: sentence classification and sequence labeling.",
"To demonstrate the usefulness of REs in realworld scenarios where the available number of annotated data can vary, we explore both the fewshot learning setting and the one with full training data.",
"Experimental results show that our approach is highly effective in utilizing the available Figure 1 : A sentence from the ATIS dataset.",
"REs can be used to detect the intent and label slots.",
"annotated data, yielding significantly better learning performance over the RE-unaware method.",
"Our contributions are as follows.",
"(1) We present the first work to systematically investigate methods for combining REs with NNs.",
"(2) The proposed methods are shown to clearly improve the NN performance in both the few-shot learning and the full annotation settings.",
"(3) We provide a set of guidance on how to combine REs with NNs and RE annotation.",
"Background Typesetting In this paper, we use italic for emphasis like intent detection, the Courier typeface for abbreviations like RE, bold italic for the first appearance of a concept like clue words, Courier surrounded by / for regular expressions like /list( the)?",
"AIRLINE/, and underlined italic for words of sentences in our dataset like Boston.",
"Problem Definition Our work targets two SLU tasks: intent detection and slot filling.",
"The former is a sentence classification task where we learn a function to map an input sentence of n words, x = [x 1 , ..., x n ], to a corresponding intent label, c. The latter is a sequence labeling task for which we learn a function to take in an input query sentence of n words, x = [x 1 , ..., x n ], to produce a corresponding labeling sequence, y = [y 1 , ..., y n ], where y i is the slot label of the corresponding word, x i .",
"Take the sentence in Fig.",
"1 as an example.",
"A successful intent detector would suggest the intent of the sentence as flight, i.e., querying about flight-related information.",
"A slot filler, on the other hand, should identify the slots fromloc.city and toloc.city by labeling Boston and Miami, respectively, using the begin-inside-outside (BIO) scheme.",
"The Use of Regular Expressions In this work, a RE defines a mapping from a text pattern to several REtags which are the same as or related to the target labels (i.e., intent and slot labels).",
"A search function takes in a RE, applies it to all sentences, and returns any texts that match the pattern.",
"We then assign the REtag (s) (that are associated with the matching RE) to either the matched sentence (for intent detection) or some matched phrases (for slot filling).",
"Specifically, our REtags for intent detection are the same as the intent labels.",
"For example, in Fig.",
"1 , we get a REtag of flight that is the same as the intent label flight.",
"For slot filling, we use two different sets of REs.",
"Given the group functionality of RE, we can assign REtags to our interested RE groups (i.e., the expressions defined inside parentheses).",
"The translation from REtags to slot labels depends on how the corresponding REs are used.",
"(1) When REs are used at the network module level (Sec.",
"3.3), the corresponding REtags are the same as the target slot labels.",
"For instance, the slot RE in Fig.",
"1 will assign fromloc.city to the first RE group and toloc.city to the second one.",
"Here, CITY is a list of city names, which can be replaced with a RE string like /Boston|Miami|LA|.../.",
"(2) If REs are used in the input (Sec.",
"3.2) and the output layers (Sec.",
"3.4) of a NN, the corresponding REtag would be different from the target slot labels.",
"In this context, the two RE groups in Fig.",
"1 would be simply tagged as city to capture the commonality of three related target slot labels: fromloc.city, toloc.city, stoploc.city.",
"Note that we could use the target slot labels as REtags for all the settings.",
"The purpose of abstracting REtags to a simplified version of the target slot labels here is to show that REs can still be useful when their evaluation outcome does not exactly match our learning objective.",
"Further, as shown in Sec.",
"4.2, using simplified REtags can also make the development of REs easier in our tasks.",
"Intuitively, complicated REs can lead to better performance but require more efforts to generate.",
"Generally, there are two aspects affecting RE complexity most: the number of RE groups 1 and or clauses (i.e., expressions separated by the disjunction operator |) in a RE group.",
"Having a larger number of RE groups often leads to better 1 When discussing complexity, we consider each semantically independent consecutive word sequence as a RE group (excluding clauses, such as \\w+, that can match any word).",
"For instance, the RE: /how long( \\w+){1,2}?",
"(it take|flight)/ has two RE groups: (how long) and (it take|flight).",
"precision but lower coverage on pattern matching, while a larger number of or clauses usually gives a higher coverage but slightly lower precision.",
"Our Approach As depicted in Fig.",
"2 , we propose to combine NNs and REs from three different angles.",
"Base Models We use the Bi-directional LSTM (BLSTM) as our base NN model because it is effective in both intent detection and slot filling (Liu and Lane, 2016) .",
"Intent Detection.",
"As shown in Fig.",
"2 , the BLSTM takes as input the word embeddings [x 1 , ..., x n ] of a n-word sentence, and produces a vector h i for each word i.",
"A self-attention layer then takes in the vectors produced by the BLSTM to compute the sentence embedding s: s = i α i h i , α i = exp(h i Wc) i exp(h i Wc) (1) where α i is the attention for word i, c is a randomly initialized trainable vector used to select informative words for classification, and W is a weight matrix.",
"Finally, s is fed to a softmax classifier for intent classification.",
"Slot Filling.",
"The model for slot filling is straightforward -the slot label prediction is generated by a softmax classier which takes in the BLSTM's output h i and produces the slot label of word i.",
"Note that attention aggregation in Fig.",
"2 is only employed by the network module level method presented in Sec.",
"3.3.",
"Using REs at the Input Level At the input level, we use the evaluation outcomes of REs as features which are fed to NN models.",
"Intent Detection.",
"Our REtag for intent detection is the same as our target intent label.",
"Because real-world REs are unlikely to be perfect, one sentence may be matched by more than one RE.",
"This may result in several REtags that are conflict with each other.",
"For instance, the sentence list the Delta airlines flights to Miami can match a RE: /list( the)?",
"AIRLINE/ that outputs tag airline, and another RE: /list( \\w+){0,3} flights?/ that outputs tag flight.",
"To resolve the conflicting situations illustrated above, we average the randomly initialized trainable tag embeddings to form an aggregated embedding as the NN input.",
"There are two ways to use the aggregated embedding.",
"We can append the aggregated embedding to either the embedding of every input word, or the input of the softmax classifier (see 1 in Fig.",
"2(a) ).",
"To determine which strategy works best, we perform a pilot study.",
"We found that the first method causes the tag embedding to be copied many times; consequently, the NN tends to heavily rely on the REtags, and the resulting performance is similar to the one given by using REs alone in few-shot settings.",
"Thus, we adopt the second approach.",
"Slot Filling.",
"Since the evaluation outcomes of slot REs are word-level tags, we can simply embed and average the REtags into a vector f i for each word, and append it to the corresponding word embedding w i (as shown in 1 in Fig.",
"2(b) ).",
"Note that we also extend the slot REtags into the BIO format, e.g., the REtags of phrase New York are B-city and I-city if its original tag is city.",
"Using REs at the Network Module Level At the network module level, we explore ways to utilize the clue words in the surface form of a RE (bold blue arrows and words in 2 of Fig.",
"2 ) to guide the attention module in NNs.",
"Intent Detection.",
"Taking the sentence in Fig.",
"1 for example, the RE: /ˆflights?",
"from/ that leads to intent flight means that flights from are the key words to decide the intent flight.",
"Therefore, the attention module in NNs should leverage these two words to get the correct prediction.",
"To this end, we extend the base intent model by making two changes to incorporate the guidance from REs.",
"First, since each intent has its own clue words, using a single sentence embedding for all intent labels would make the attention less focused.",
"Therefore, we let each intent label k use different attention a k , which is then used to generate the sentence embedding s k for that intent: s k = i α ki h i , α ki = exp(h i W a c k ) i exp(h i W a c k ) (2) where c k is a trainable vector for intent k which is used to compute attention a k , h i is the BLSTM output for word i, and W a is a weight matrix.",
"The probability p k that the input sentence expresses intent k is computed by: where w k , logit k , b k are weight vector, logit, and bias for intent k, respectively.",
"p k = exp(logit k ) k exp(logit k ) , logit k = w k s k + b k (3) x 1 x 2 h 1 h 2 x 3 h Second, apart from indicating a sentence for intent k (positive REs), a RE can also indicate that a sentence does not express intent k (negative REs).",
"We thus use a new set of attention (negative attentions, in contrast to positive attentions), to compute another set of logits for each intent with Eqs.",
"2 and 3.",
"We denote the logits computed by positive attentions as logit pk , and those by negative attentions as logit nk , the final logit for intent k can then be calculated as: logit k = logit pk − logit nk (4) To use REs to guide attention, we add an attention loss to the final loss: loss att = k i t ki log(α ki ) (5) where t ki is set to 0 when none of the matched REs (that leads to intent k) marks word i as a clue word -otherwise t ki is set to 1/l k , where l k is the number of clue words for intent k (if no matched RE leads to intent k, then t k * = 0).",
"We use Eq.",
"5 to compute the positive attention loss, loss att p , for positive REs and negative attention loss, loss att n , for negative ones.",
"The final loss is computed as: loss = loss c + β p loss att p + β n loss att n (6) where loss c is the original classification loss, β p and β n are weights for the two attention losses.",
"Slot Filling.",
"The two-side attention (positive and negative attention) mechanism introduced for intent prediction is unsuitable for slot filling.",
"Because for slot filling, we need to compute attention for each word, which demands more compu-tational and memory resources than doing that for intent detection 2 .",
"Because of the aforementioned reason, we use a simplified version of the two-side attention, where all the slot labels share the same set of positive and negative attention.",
"Specifically, to predict the slot label of word i, we use the following equations, which are similar to Eq.",
"1, to generate a sentence embedding s pi with regard to word i from positive attention: s pi = j α pij h j , α pij = exp(h j W sp h i ) j exp(h j W sp h i ) (7) where h i and h j are the BLSTM outputs for word i and j respectively, W sp is a weight matrix, and α pij is the positive attention value for word j with respect to word i.",
"Further, by replacing W sp with W sn , we use Eq.",
"7 again to compute negative attention and generate the corresponding sentence embedding s ni .",
"Finally, the prediction p i for word i can be calculated as: p i = softmax((W p [s pi ; h i ] + b p ) −(W n [s ni ; h i ] + b n )) (8) where W p , W n , b p , b n are weight matrices and bias vectors for positive and negative attention, respectively.",
"Here we append the BLSTM output h i to s pi and s ni because the word i itself also plays a crucial part in identifying its slot label.",
"Using REs at the Output Level At the output level, REs are used to amend the output of NNs.",
"At this level, we take the same approach used for intent detection and slot filling (see 3 in Fig.",
"2 ).",
"As mentioned in Sec.",
"2.3, the slot REs used in the output level only produce a simplified version of target slot labels, for which we can further annotate their corresponding target slot labels.",
"For instance, a RE that outputs city can lead to three slot labels: fromloc.city, toloc.city, stoploc.city.",
"Let z k be a 0-1 indicator of whether there is at least one matched RE that leads to target label k (intent or slot label), the final logits of label k for a sentence (or a specific word for slot filling) is: logit k = logit k + w k z k (9) where logit k is the logit produced by the original NN, and w k is a trainable weight indicating the overall confidence for REs that lead to target label k. Here we do not assign a trainable weight for each RE because it is often that only a few sentences match a RE.",
"We modify the logit instead of the final probability because a logit is an unconstrained real value, which matches the property of w k z k better than probability.",
"Actually, when performing model ensemble, ensembling with logits is often empirically better than with the final probability 3 .",
"This is also the reason why we choose to operate on logits in Sec.",
"3.3.",
"Evaluation Methodology Our experiments aim to answer three questions: Q1: Does the use of REs enhance the learning quality when the number of annotated instances is small?",
"Q2: Does the use of REs still help when using the full training data?",
"Q3: How can we choose from different combination methods?",
"Datasets We use the ATIS dataset (Hemphill et al., 1990) to evaluate our approach.",
"This dataset is widely used in SLU research.",
"It includes queries of flights, meal, etc.",
"We follow the setup of Liu and Lane (2016) by using 4,978 queries for training and 893 for testing, with 18 intent labels and 127 slot labels.",
"We also split words like Miami's into Miami 's during the tokenization phase to reduce the number of words that do not have a pre-trained word embedding.",
"This strategy is useful for fewshot learning.",
"To answer Q1 , we also exploit the full few-shot learning setting.",
"Specifically, for intent detection, we randomly select 5, 10, 20 training instances for each intent to form the few-shot training set; and for slot filling, we also explore 5, 10, 20 shots settings.",
"However, since a sentence typically contains multiple slots, the number of mentions of frequent slot labels may inevitably exceeds the target shot count.",
"To better approximate the target shot count, we select sentences for each slot label in ascending order of label frequencies.",
"That is k 1 -shot dataset will contain k 2 -shot dataset if k 1 > k 2 .",
"All settings use the original test set.",
"Since most existing few-shot learning methods require either many few-shot classes or some classes with enough data for training, we also explore the partial few-shot learning setting for intent detection to provide a fair comparison for existing few-shot learning methods.",
"Specifically, we let the 3 most frequent intents have 300 training instances, and the rest remains untouched.",
"This is also a common scenario in real world, where we often have several frequent classes and many classes with limited data.",
"As for slot filling, however, since the number of mentions of frequent slot labels already exceeds the target shot count, the original slot filling few-shot dataset can be directly used to train existing few-shot learning methods.",
"Therefore, we do not distinguish full and partial few-shot learning for slot filling.",
"Preparing REs We use the syntax of REs in Perl in this work.",
"Our REs are written by a paid annotator who is familiar with the domain.",
"It took the annotator in total less than 10 hours to develop all the REs, while a domain expert can accomplish the task faster.",
"We use the 20-shot training data to develop the REs, but word lists like cities are obtained from the full training set.",
"The development of REs is considered completed when the REs can cover most of the cases in the 20-shot training data with resonable precision.",
"After that, the REs are fixed throughout the experiments.",
"The majority of the time for writing the REs is proportional to the number of RE groups.",
"It took about 1.5 hours to write the 54 intent REs with on average 2.2 groups per RE.",
"It is straightforward to write the slot REs for the input and output level methods, for which it took around 1 hour to write the 60 REs with 1.7 groups on average.",
"By con-trast, writing slot REs to guide attention requires more efforts as the annotator needs to carefully select clue words and annotate the full slot label.",
"As a result, it took about 5.5 hours to generate 115 REs with on average 3.3 groups.",
"The performance of the REs can be found in the last line of Table 1.",
"In practice, a positive RE for intent (or slot) k can often be treated as negative REs for other intents (or slots).",
"As such, we use the positive REs for intent (or slot) k as the negative REs for other intents (or slots) in our experiments.",
"Experimental Setup Hyper-parameters.",
"Our hyper-parameters for the BLSTM are similar to the ones used by Liu and Lane (2016) .",
"Specifically, we use batch size 16, dropout probability 0.5, and BLSTM cell size 100.",
"The attention loss weight is 16 (both positive and negative) for full few-shot learning settings and 1 for other settings.",
"We use the 100d GloVe word vectors (Pennington et al., 2014) pre-trained on Wikipedia and Gigaword (Parker et al., 2011) , and the Adam optimizer (Kingma and Ba, 2014) with learning rate 0.001.",
"Evaluation Metrics.",
"We report accuracy and macro-F1 for intent detection, and micro/macro-F1 for slot filling.",
"Micro/macro-F1 are the harmonic mean of micro/macro precision and recall.",
"Macro-precision/recall are calculated by averaging precision/recall of each label, and microprecision/recall are averaged over each prediction.",
"Competitors and Naming Conventions.",
"Here, a bold Courier typeface like BLSTM denotes the notations of the models that we will compare in Sec.",
"5.",
"Specifically, we compare our methods with the baseline BLSTM model (Sec.",
"3.1).",
"Since our attention loss method (Sec.",
"3.3) uses two-side attention, we include the raw two-side attention model without attention loss (+two) for comparison as well.",
"Besides, we also evaluate the RE output (REO), which uses the REtags as prediction directly, to show the quality of the REs that we will use in the experiments.",
"4 As for our methods for combinging REs with NN, +feat refers to using REtag as input features (Sec.",
"3.2), +posi and +neg refer to using positive and negative attention loss respectively, +both refers to using both postive and negative attention losses (Sec.",
"3.3), and +logit means using REtag to modify NN output (Sec.",
"3.4).",
"Moverover, since the REs can also be formatted as first-order-logic (FOL) rules, we also compare our methods with the teacher-student framework proposed by Hu et al.",
"(2016a) , which is a general framework for distilling knowledge from FOL rules into NN (+hu16).",
"Besides, since we consider few-short learning, we also include the memory module proposed by Kaiser et al.",
"(2017) , which performs well in various few-shot datasets (+mem) 5 .",
"Finally, the state-of-art model on the ATIS dataset is also included (L&L16), which jointly models the intent detection and slot filling in a single network (Liu and Lane, 2016) .",
"Experimental Results Full Few-Shot Learning To answer Q1 , we first explore the full few-shot learning scenario.",
"Intent Detection.",
"As shown in Table 1 , except for 5-shot, all approaches improve the baseline BLSTM.",
"Our network-module-level methods give the best performance because our attention module directly receives signals from the clue words in REs that contain more meaningful information than the REtag itself used by other methods.",
"We also observe that since negative REs are derived from positive REs with some noises, posi performs better than neg when the amount of available data is limited.",
"However, neg is slightly better in 20-shot, possibly because negative REs significantly outnumbers the positive ones.",
"Besides, two alone works better than the BLSTM when there are sufficient data, confirming the advantage of our two-side attention architecture.",
"As for other proposed methods, the output level method (logit) works generally better than the input level method (feat), except for the 5-shot case.",
"We believe this is due to the fewer number of RE related parameters and the shorter distance that the gradient needs to travel from the loss to these parameters -both make logit easier to train.",
"However, since logit directly modifies the output, the final prediction is more sensitive to the insufficiently trained weights in logit, leading to the inferior results in the 5-shot setting.",
"Model Type Model Name 90 / 74.47 68.69 / 84.66 72.43 / 85.78 59.59 / 83.47 73.62 / 89.28 78.94 / 92.21 +two+neg 49.01 / 68.31 64.67 / 79.17 72.32 / 86.34 59.51 / 83.23 72.92 / 89.11 78.83 / 92.07 +two+both 54.86 / 75.36 71.23 / 85.44 75.58 / 88.80 59.47 / 83.35 73.55 / 89.54 To compare with existing methods of combining NN and rules, we also implement the teacherstudent network (Hu et al., 2016a) .",
"This method lets the NN learn from the posterior label distribution produced by FOL rules in a teacher-student framework, but requires considerable amounts of data.",
"Therefore, although both hu16 and logit operate at the output level, logit still performs better than hu16 in these few-shot settings, since logit is easier to train.",
"It can also be seen that starting from 10-shot, two+both significantly outperforms pure REO.",
"This suggests that by using our attention loss to connect the distributional representation of the NN and the clue words of REs, we can generalize RE patterns within a NN architecture by using a small amount of annotated data.",
"Slot Filling.",
"Different from intent detection, as shown in Table 1 , our attention loss does not work for slot filling.",
"The reason is that the slot label of a target word (the word for which we are trying to predict a slot label) is decided mainly by the semantic meaning of the word itself, together with 0-3 phrases in the context to provide supplementary information.",
"However, our attention mechanism can only help in recognizing clue words in the context, which is less important than the word itself and have already been captured by the BLSTM, to some extent.",
"Therefore, the attention loss and the attention related parameters are more of a burden than a benefit.",
"As is shown in Fig.",
"1 , the model recognizes Boston as fromloc.city mainly because Boston itself is a city, and its context word from may have already been captured by the BLSTM and our attention mechanism does not help much.",
"By examining the attention values of +two trained on the full dataset, we find that instead of mark-ing informative context words, the attention tends to concentrate on the target word itself.",
"This observation further reinforces our hypothesis on the attention loss.",
"On the other hand, since the REtags provide extra information, such as type, about words in the sentence, logit and feat generally work better.",
"However, different from intent detection, feat only outperforms logit by a margin.",
"This is because feat can use the REtags of all words to generate better context representations through the NN, while logit can only utilize the REtag of the target word before the final output layer.",
"As a result, feat actually gathers more information from REs and can make better use of them than logit.",
"Again, hu16 is still outperformed by logit, possibly due to the insufficient data support in this few-shot scenario.",
"We also see that even the BLSTM outperforms REO in 5-shot, indicating while it is hard to write high-quality RE patterns, using REs to boost NNs is still feasible.",
"Summary.",
"The amount of extra information that a NN can utilize from the combined REs significantly affects the resulting performance.",
"Thus, the attention loss methods work best for intent detection and feat works best for slot filling.",
"We also see that the improvements from REs decreases as having more training data.",
"This is not surprising because the implicit knowledge embedded in the REs are likely to have already been captured by a sufficient large annotated dataset and in this scenario using the REs will bring in fewer benefits.",
"Partial Few-Shot Learning To better understand the relationship between our approach and existing few-shot learning methods, we also implement the memory network method Table 3 : Results on Full Dataset.",
"The left side of '/' applies for intent, and the right side for slot.",
"(Kaiser et al., 2017) which achieves good results in various few-shot datasets.",
"We adapt their opensource code, and add their memory module (mem) to our BLSTM model.",
"Since the memory module requires to be trained on either many few-shot classes or several classes with extra data, we expand our full few-shot dataset for intent detection, so that the top 3 intent labels have 300 sentences (partial few-shot).",
"As shown in Table 2 , mem works better than BLSTM, and our attention loss can be further combined with the memory module (mem+posi), with even better performance.",
"hu16 also works here, but worse than two+both.",
"Note that, the memory module requires the input sentence to have only one embedding, thus we only use one set of positive attention for combination.",
"As for slot filling, since we already have extra data for frequent tags in the original few-shot data (see Sec.",
"4.1), we use them directly to run the memory module.",
"As shown in the bottom of Table 1 , mem also improves the base BLSTM, and gains further boost when it is combined with feat 6 .",
"Full Dataset To answer Q2, we also evaluate our methods on the full dataset.",
"As seen in Table 3 , for intent detection, while two+both still works, feat and logit no longer give improvements.",
"This shows 6 For compactness, we only combine the best method in each task with mem, but others can also be combined.",
"that since both REtag and annotated data provide intent labels for the input sentence, the value of the extra noisy tag from RE become limited as we have more annotated data.",
"However, as there is no guidance on attention in the annotations, the clue words from REs are still useful.",
"Further, since feat concatenates REtags at the input level, the powerful NN makes it more likely to overfit than logit, therefore feat performs even worse when compared to the BLSTM.",
"As for slot filling, introducing feat and logit can still bring further improvements.",
"This shows that the word type information contained in the REtags is still hard to be fully learned even when we have more annotated data.",
"Moreover, different from few-shot settings, two+both has a better macro-F1 score than the BLSTM for this task, suggesting that better attention is still useful when the base model is properly trained.",
"Again, hu16 outperforms the BLSTM in both tasks, showing that although the REtags are noisy, their teacher-student network can still distill useful information.",
"However, hu16 is a general framework to combine FOL rules, which is more indirect in transferring knowledge from rules to NN than our methods.",
"Therefore, it is still inferior to attention loss in intent detection and feat in slot filling, which are designed to combine REs.",
"Further, mem generally works in this setting, and can receive further improvement by combining our fusion methods.",
"We can also see that two+both works clearly better than the stateof-art method (L&L16) in intent detection, which jointly models the two tasks.",
"And mem+feat is comparative to L&L16 in slot filling.",
"Impact of the RE Complexity We now discuss how the RE complexity affects the performance of the combination.",
"We choose to control the RE complexity by modifying the number of groups.",
"Specifically, we reduce the number of groups for existing REs to decrease RE complexity.",
"To mimic the process of writing simple REs from scratch, we try our best to keep the key RE groups.",
"For intent detection, all the REs are reduced to at most 2 groups.",
"As for slot filling, we also reduce the REs to at most 2 groups, and for some simples case, we further reduce them into word-list patterns, e.g., ( CITY).",
"As shown in Table 4 , the simple REs already deliver clear improvements to the base NN models, which shows the effectiveness of our methods, and indicates that simple REs are quite costefficient since these simple REs only contain 1-2 RE groups and thus very easy to produce.",
"We can also see that using complex REs generally leads to better results compared to using simple REs.",
"This indicates that when considering using REs to improve a NN model, we can start with simple REs, and gradually increase the RE complexity to improve the performance over time 7 .",
"Related Work Our work builds upon the following techniques, while qualitatively differing from each NN with Rules.",
"On the initialization side, uses important n-grams to initialize the convolution filters.",
"On the input side, Wang et al.",
"(2017a) uses knowledge base rules to find relevant concepts for short texts to augment input.",
"On the output side, Hu et al.",
"(2016a; 2016b) and Guo et al.",
"(2017) use FOL rules to rectify the output probability of NN, and then let NN learn from the rectified distribution in a teacher-student framework.",
"Xiao et al.",
"(2017) , on the other hand, modifies the decoding score of NN by multiplying a weight derived from rules.",
"On the loss function side, people modify the loss function to model the relationship between premise and conclusion (Demeester et al., 2016) , and fit both human-annotated and rule-annotated labels (Alashkar et al., 2017) .",
"Since fusing in initialization or in loss function often require special properties of the task, these approaches are not applicable to our problem.",
"Our work thus offers new ways to exploit RE rules at different levels of a NN.",
"NNs and REs.",
"As for NNs and REs, previous work has tried to use RE to speed up the decoding phase of a NN (Strauß et al., 2016) and generating REs from natural language specifications of the 7 We do not include results of both for slot filling since its REs are different from feat and logit, and we have already shown that the attention loss method does not work for slot filling.",
"RE (Locascio et al., 2016) .",
"By contrast, our work aims to use REs to improve the prediction ability of a NN.",
"Few-Shot Learning.",
"Prior work either considers few-shot learning in a metric learning framework (Koch et al., 2015; Vinyals et al., 2016) , or stores instances in a memory (Santoro et al., 2016; Kaiser et al., 2017) to match similar instances in the future.",
"Wang et al.",
"(2017b) further uses the semantic meaning of the class name itself to provide extra information for few-shot learning.",
"Unlike these previous studies, we seek to use the humangenerated REs to provide additional information.",
"Natural Language Understanding.",
"Recurrent neural networks are proven to be effective in both intent detection (Ravuri and Stoicke, 2015) and slot filling (Mesnil et al., 2015) .",
"Researchers also find ways to jointly model the two tasks (Liu and Lane, 2016; Zhang and Wang, 2016) .",
"However, no work so far has combined REs and NNs to improve intent detection and slot filling.",
"Conclusions In this paper, we investigate different ways to combine NNs and REs for solving typical SLU tasks.",
"Our experiments demonstrate that the combination clearly improves the NN performance in both the few-shot learning and the full dataset settings.",
"We show that by exploiting the implicit knowledge encoded within REs, one can significantly improve the learning performance.",
"Specifically, we observe that using REs to guide the attention module works best for intent detection, and using REtags as features is an effective approach for slot filling.",
"We provide interesting insights on how REs of various forms can be employed to improve NNs, showing that while simple REs are very cost-effective, complex REs generally yield better results."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"5.1",
"5.2",
"5.3",
"5.4",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Typesetting",
"Problem Definition",
"The Use of Regular Expressions",
"Our Approach",
"Base Models",
"Using REs at the Input Level",
"Using REs at the Network Module Level",
"Using REs at the Output Level",
"Evaluation Methodology",
"Datasets",
"Preparing REs",
"Experimental Setup",
"Full Few-Shot Learning",
"Partial Few-Shot Learning",
"Full Dataset",
"Impact of the RE Complexity",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-18#paper-1009#slide-4 | Method 2 RE Output Fusion in Output | logit_k is the NN output score for class k (before softmax)
z_k, whether a matched regular expression predicts class k
Intent: flight; logit'_k = logit_k + w_k z_k; Softmax Classifier
Intent Detection h1 h2 h3 h4 h5
BLSTM RE Instance x1 x2 x3 x4 x5
flights from Boston to Miami /^flights? from/
Slot Filling h1 h2 h3 h4 h5
flights from Boston to Miami /from __CITY to __CITY/ | logit_k is the NN output score for class k (before softmax)
z_k, whether a matched regular expression predicts class k
Intent: flight; logit'_k = logit_k + w_k z_k; Softmax Classifier
Intent Detection h1 h2 h3 h4 h5
BLSTM RE Instance x1 x2 x3 x4 x5
flights from Boston to Miami /^flights? from/
Slot Filling h1 h2 h3 h4 h5
flights from Boston to Miami /from __CITY to __CITY/ | [] |
GEM-SciDuet-train-18#paper-1009#slide-5 | 1009 | Marrying Up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding | The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data. In this paper, we ask the question: "Can we combine a neural network (NN) with regular expressions (RE) to improve supervised learning for NLP?". In answer, we develop novel methods to exploit the rich expressiveness of REs at different levels within a NN, showing that the combination significantly enhances the learning effectiveness when a small number of training examples are available. We evaluate our approach by applying it to spoken language understanding for intent detection and slot filling. Experimental results show that our approach is highly effective in exploiting the available training data, giving a clear boost to the RE-unaware NN. [Figure 1: Sentence: flights from Boston to Miami; Slot Labels: O O B-fromloc.city O B-toloc.city; Intent RE: /^flights? from/ (REtag: flight), Intent Label: flight; Slot RE: /from (__CITY) to (__CITY)/ (REtags: city / fromloc.city and city / toloc.city).] | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294
],
"paper_content_text": [
"Introduction Regular expressions (REs) are widely used in various natural language processing (NLP) tasks like pattern matching, sentence classification, sequence labeling, etc.",
"(Chang and Manning, 2014) .",
"As a technique based on human-crafted rules, it is concise, interpretable, tunable, and does not rely on much training data to generate.",
"As such, it is commonly used in industry, especially when the available training examples are limited -a problem known as few-shot learning (GC et al., 2015) .",
"While powerful, REs have a poor generalization ability because all synonyms and variations in a RE must be explicitly specified.",
"As a result, REs are often ensembled with data-driven methods, such as neural network (NN) based techniques, where a set of carefully-written REs are used to handle certain cases with high precision, leaving the rest for data-driven methods.",
"We believe the use of REs can go beyond simple pattern matching.",
"In addition to being a separate classifier to be ensembled, a RE also encodes a developer's knowledge for the problem domain.",
"The knowledge could be, for example, the informative words (clue words) within a RE's surface form.",
"We argue that such information can be utilized by data-driven methods to achieve better prediction results, especially in few-shot learning.",
"This work investigates the use of REs to improve NNs -a learning framework that is widely used in many NLP tasks (Goldberg, 2017) .",
"The combination of REs and a NN allows us to exploit the conciseness and effectiveness of REs and the strong generalization ability of NNs.",
"This also provides us an opportunity to learn from various kinds of REs, since NNs are known to be good at tolerating noises (Xie et al., 2016) .",
"This paper presents novel approaches to combine REs with a NN at different levels.",
"At the input layer, we propose to use the evaluation outcome of REs as the input features of a NN (Sec.3.2).",
"At the network module level, we show how to exploit the knowledge encoded in REs to guide the attention mechanism of a NN (Sec.",
"3.3).",
"At the output layer, we combine the evaluation outcome of a RE with the NN output in a learnable manner (Sec.",
"3.4) .",
"We evaluate our approach by applying it to two spoken language understanding (SLU) tasks, namely intent detection and slot filling, which respectively correspond to two fundamental NLP tasks: sentence classification and sequence labeling.",
"To demonstrate the usefulness of REs in realworld scenarios where the available number of annotated data can vary, we explore both the fewshot learning setting and the one with full training data.",
"Experimental results show that our approach is highly effective in utilizing the available Figure 1 : A sentence from the ATIS dataset.",
"REs can be used to detect the intent and label slots.",
"annotated data, yielding significantly better learning performance over the RE-unaware method.",
"Our contributions are as follows.",
"(1) We present the first work to systematically investigate methods for combining REs with NNs.",
"(2) The proposed methods are shown to clearly improve the NN performance in both the few-shot learning and the full annotation settings.",
"(3) We provide a set of guidance on how to combine REs with NNs and RE annotation.",
"Background Typesetting In this paper, we use italic for emphasis like intent detection, the Courier typeface for abbreviations like RE, bold italic for the first appearance of a concept like clue words, Courier surrounded by / for regular expressions like /list( the)?",
"AIRLINE/, and underlined italic for words of sentences in our dataset like Boston.",
"Problem Definition Our work targets two SLU tasks: intent detection and slot filling.",
"The former is a sentence classification task where we learn a function to map an input sentence of n words, x = [x 1 , ..., x n ], to a corresponding intent label, c. The latter is a sequence labeling task for which we learn a function to take in an input query sentence of n words, x = [x 1 , ..., x n ], to produce a corresponding labeling sequence, y = [y 1 , ..., y n ], where y i is the slot label of the corresponding word, x i .",
"Take the sentence in Fig.",
"1 as an example.",
"A successful intent detector would suggest the intent of the sentence as flight, i.e., querying about flight-related information.",
"A slot filler, on the other hand, should identify the slots fromloc.city and toloc.city by labeling Boston and Miami, respectively, using the begin-inside-outside (BIO) scheme.",
"The Use of Regular Expressions In this work, a RE defines a mapping from a text pattern to several REtags which are the same as or related to the target labels (i.e., intent and slot labels).",
"A search function takes in a RE, applies it to all sentences, and returns any texts that match the pattern.",
"We then assign the REtag (s) (that are associated with the matching RE) to either the matched sentence (for intent detection) or some matched phrases (for slot filling).",
"Specifically, our REtags for intent detection are the same as the intent labels.",
"For example, in Fig.",
"1 , we get a REtag of flight that is the same as the intent label flight.",
"For slot filling, we use two different sets of REs.",
"Given the group functionality of RE, we can assign REtags to our interested RE groups (i.e., the expressions defined inside parentheses).",
"The translation from REtags to slot labels depends on how the corresponding REs are used.",
"(1) When REs are used at the network module level (Sec.",
"3.3), the corresponding REtags are the same as the target slot labels.",
"For instance, the slot RE in Fig.",
"1 will assign fromloc.city to the first RE group and toloc.city to the second one.",
"Here, CITY is a list of city names, which can be replaced with a RE string like /Boston|Miami|LA|.../.",
"(2) If REs are used in the input (Sec.",
"3.2) and the output layers (Sec.",
"3.4) of a NN, the corresponding REtag would be different from the target slot labels.",
"In this context, the two RE groups in Fig.",
"1 would be simply tagged as city to capture the commonality of three related target slot labels: fromloc.city, toloc.city, stoploc.city.",
"Note that we could use the target slot labels as REtags for all the settings.",
"The purpose of abstracting REtags to a simplified version of the target slot labels here is to show that REs can still be useful when their evaluation outcome does not exactly match our learning objective.",
"Further, as shown in Sec.",
"4.2, using simplified REtags can also make the development of REs easier in our tasks.",
"Intuitively, complicated REs can lead to better performance but require more efforts to generate.",
"Generally, there are two aspects affecting RE complexity most: the number of RE groups 1 and or clauses (i.e., expressions separated by the disjunction operator |) in a RE group.",
"Having a larger number of RE groups often leads to better 1 When discussing complexity, we consider each semantically independent consecutive word sequence as a RE group (excluding clauses, such as \\w+, that can match any word).",
"For instance, the RE: /how long( \\w+){1,2}?",
"(it take|flight)/ has two RE groups: (how long) and (it take|flight).",
"precision but lower coverage on pattern matching, while a larger number of or clauses usually gives a higher coverage but slightly lower precision.",
"Our Approach As depicted in Fig.",
"2 , we propose to combine NNs and REs from three different angles.",
"Base Models We use the Bi-directional LSTM (BLSTM) as our base NN model because it is effective in both intent detection and slot filling (Liu and Lane, 2016) .",
"Intent Detection.",
"As shown in Fig.",
"2 , the BLSTM takes as input the word embeddings [x 1 , ..., x n ] of a n-word sentence, and produces a vector h i for each word i.",
"A self-attention layer then takes in the vectors produced by the BLSTM to compute the sentence embedding s: s = i α i h i , α i = exp(h i Wc) i exp(h i Wc) (1) where α i is the attention for word i, c is a randomly initialized trainable vector used to select informative words for classification, and W is a weight matrix.",
"Finally, s is fed to a softmax classifier for intent classification.",
"Slot Filling.",
"The model for slot filling is straightforward -the slot label prediction is generated by a softmax classier which takes in the BLSTM's output h i and produces the slot label of word i.",
"Note that attention aggregation in Fig.",
"2 is only employed by the network module level method presented in Sec.",
"3.3.",
"Using REs at the Input Level At the input level, we use the evaluation outcomes of REs as features which are fed to NN models.",
"Intent Detection.",
"Our REtag for intent detection is the same as our target intent label.",
"Because real-world REs are unlikely to be perfect, one sentence may be matched by more than one RE.",
"This may result in several REtags that are conflict with each other.",
"For instance, the sentence list the Delta airlines flights to Miami can match a RE: /list( the)?",
"AIRLINE/ that outputs tag airline, and another RE: /list( \\w+){0,3} flights?/ that outputs tag flight.",
"To resolve the conflicting situations illustrated above, we average the randomly initialized trainable tag embeddings to form an aggregated embedding as the NN input.",
"There are two ways to use the aggregated embedding.",
"We can append the aggregated embedding to either the embedding of every input word, or the input of the softmax classifier (see 1 in Fig.",
"2(a) ).",
"To determine which strategy works best, we perform a pilot study.",
"We found that the first method causes the tag embedding to be copied many times; consequently, the NN tends to heavily rely on the REtags, and the resulting performance is similar to the one given by using REs alone in few-shot settings.",
"Thus, we adopt the second approach.",
"Slot Filling.",
"Since the evaluation outcomes of slot REs are word-level tags, we can simply embed and average the REtags into a vector f i for each word, and append it to the corresponding word embedding w i (as shown in 1 in Fig.",
"2(b) ).",
"Note that we also extend the slot REtags into the BIO format, e.g., the REtags of phrase New York are B-city and I-city if its original tag is city.",
"Using REs at the Network Module Level At the network module level, we explore ways to utilize the clue words in the surface form of a RE (bold blue arrows and words in 2 of Fig.",
"2 ) to guide the attention module in NNs.",
"Intent Detection.",
"Taking the sentence in Fig.",
"1 for example, the RE: /ˆflights?",
"from/ that leads to intent flight means that flights from are the key words to decide the intent flight.",
"Therefore, the attention module in NNs should leverage these two words to get the correct prediction.",
"To this end, we extend the base intent model by making two changes to incorporate the guidance from REs.",
"First, since each intent has its own clue words, using a single sentence embedding for all intent labels would make the attention less focused.",
"Therefore, we let each intent label k use different attention a k , which is then used to generate the sentence embedding s k for that intent: s k = i α ki h i , α ki = exp(h i W a c k ) i exp(h i W a c k ) (2) where c k is a trainable vector for intent k which is used to compute attention a k , h i is the BLSTM output for word i, and W a is a weight matrix.",
"The probability p k that the input sentence expresses intent k is computed by: where w k , logit k , b k are weight vector, logit, and bias for intent k, respectively.",
"p k = exp(logit k ) k exp(logit k ) , logit k = w k s k + b k (3) x 1 x 2 h 1 h 2 x 3 h Second, apart from indicating a sentence for intent k (positive REs), a RE can also indicate that a sentence does not express intent k (negative REs).",
"We thus use a new set of attention (negative attentions, in contrast to positive attentions), to compute another set of logits for each intent with Eqs.",
"2 and 3.",
"We denote the logits computed by positive attentions as logit pk , and those by negative attentions as logit nk , the final logit for intent k can then be calculated as: logit k = logit pk − logit nk (4) To use REs to guide attention, we add an attention loss to the final loss: loss att = k i t ki log(α ki ) (5) where t ki is set to 0 when none of the matched REs (that leads to intent k) marks word i as a clue word -otherwise t ki is set to 1/l k , where l k is the number of clue words for intent k (if no matched RE leads to intent k, then t k * = 0).",
"We use Eq.",
"5 to compute the positive attention loss, loss att p , for positive REs and negative attention loss, loss att n , for negative ones.",
"The final loss is computed as: loss = loss c + β p loss att p + β n loss att n (6) where loss c is the original classification loss, β p and β n are weights for the two attention losses.",
"Slot Filling.",
"The two-side attention (positive and negative attention) mechanism introduced for intent prediction is unsuitable for slot filling.",
"Because for slot filling, we need to compute attention for each word, which demands more compu-tational and memory resources than doing that for intent detection 2 .",
"Because of the aforementioned reason, we use a simplified version of the two-side attention, where all the slot labels share the same set of positive and negative attention.",
"Specifically, to predict the slot label of word i, we use the following equations, which are similar to Eq.",
"1, to generate a sentence embedding s pi with regard to word i from positive attention: s pi = j α pij h j , α pij = exp(h j W sp h i ) j exp(h j W sp h i ) (7) where h i and h j are the BLSTM outputs for word i and j respectively, W sp is a weight matrix, and α pij is the positive attention value for word j with respect to word i.",
"Further, by replacing W sp with W sn , we use Eq.",
"7 again to compute negative attention and generate the corresponding sentence embedding s ni .",
"Finally, the prediction p i for word i can be calculated as: p i = softmax((W p [s pi ; h i ] + b p ) −(W n [s ni ; h i ] + b n )) (8) where W p , W n , b p , b n are weight matrices and bias vectors for positive and negative attention, respectively.",
"Here we append the BLSTM output h i to s pi and s ni because the word i itself also plays a crucial part in identifying its slot label.",
"Using REs at the Output Level At the output level, REs are used to amend the output of NNs.",
"At this level, we take the same approach used for intent detection and slot filling (see 3 in Fig.",
"2 ).",
"As mentioned in Sec.",
"2.3, the slot REs used in the output level only produce a simplified version of target slot labels, for which we can further annotate their corresponding target slot labels.",
"For instance, a RE that outputs city can lead to three slot labels: fromloc.city, toloc.city, stoploc.city.",
"Let z k be a 0-1 indicator of whether there is at least one matched RE that leads to target label k (intent or slot label), the final logits of label k for a sentence (or a specific word for slot filling) is: logit k = logit k + w k z k (9) where logit k is the logit produced by the original NN, and w k is a trainable weight indicating the overall confidence for REs that lead to target label k. Here we do not assign a trainable weight for each RE because it is often that only a few sentences match a RE.",
"We modify the logit instead of the final probability because a logit is an unconstrained real value, which matches the property of w k z k better than probability.",
"Actually, when performing model ensemble, ensembling with logits is often empirically better than with the final probability 3 .",
"This is also the reason why we choose to operate on logits in Sec.",
"3.3.",
"Evaluation Methodology Our experiments aim to answer three questions: Q1: Does the use of REs enhance the learning quality when the number of annotated instances is small?",
"Q2: Does the use of REs still help when using the full training data?",
"Q3: How can we choose from different combination methods?",
"Datasets We use the ATIS dataset (Hemphill et al., 1990) to evaluate our approach.",
"This dataset is widely used in SLU research.",
"It includes queries of flights, meal, etc.",
"We follow the setup of Liu and Lane (2016) by using 4,978 queries for training and 893 for testing, with 18 intent labels and 127 slot labels.",
"We also split words like Miami's into Miami 's during the tokenization phase to reduce the number of words that do not have a pre-trained word embedding.",
"This strategy is useful for fewshot learning.",
"To answer Q1 , we also exploit the full few-shot learning setting.",
"Specifically, for intent detection, we randomly select 5, 10, 20 training instances for each intent to form the few-shot training set; and for slot filling, we also explore 5, 10, 20 shots settings.",
"However, since a sentence typically contains multiple slots, the number of mentions of frequent slot labels may inevitably exceeds the target shot count.",
"To better approximate the target shot count, we select sentences for each slot label in ascending order of label frequencies.",
"That is k 1 -shot dataset will contain k 2 -shot dataset if k 1 > k 2 .",
"All settings use the original test set.",
"Since most existing few-shot learning methods require either many few-shot classes or some classes with enough data for training, we also explore the partial few-shot learning setting for intent detection to provide a fair comparison for existing few-shot learning methods.",
"Specifically, we let the 3 most frequent intents have 300 training instances, and the rest remains untouched.",
"This is also a common scenario in real world, where we often have several frequent classes and many classes with limited data.",
"As for slot filling, however, since the number of mentions of frequent slot labels already exceeds the target shot count, the original slot filling few-shot dataset can be directly used to train existing few-shot learning methods.",
"Therefore, we do not distinguish full and partial few-shot learning for slot filling.",
"Preparing REs We use the syntax of REs in Perl in this work.",
"Our REs are written by a paid annotator who is familiar with the domain.",
"It took the annotator in total less than 10 hours to develop all the REs, while a domain expert can accomplish the task faster.",
"We use the 20-shot training data to develop the REs, but word lists like cities are obtained from the full training set.",
"The development of REs is considered completed when the REs can cover most of the cases in the 20-shot training data with resonable precision.",
"After that, the REs are fixed throughout the experiments.",
"The majority of the time for writing the REs is proportional to the number of RE groups.",
"It took about 1.5 hours to write the 54 intent REs with on average 2.2 groups per RE.",
"It is straightforward to write the slot REs for the input and output level methods, for which it took around 1 hour to write the 60 REs with 1.7 groups on average.",
"By con-trast, writing slot REs to guide attention requires more efforts as the annotator needs to carefully select clue words and annotate the full slot label.",
"As a result, it took about 5.5 hours to generate 115 REs with on average 3.3 groups.",
"The performance of the REs can be found in the last line of Table 1.",
"In practice, a positive RE for intent (or slot) k can often be treated as negative REs for other intents (or slots).",
"As such, we use the positive REs for intent (or slot) k as the negative REs for other intents (or slots) in our experiments.",
"Experimental Setup Hyper-parameters.",
"Our hyper-parameters for the BLSTM are similar to the ones used by Liu and Lane (2016) .",
"Specifically, we use batch size 16, dropout probability 0.5, and BLSTM cell size 100.",
"The attention loss weight is 16 (both positive and negative) for full few-shot learning settings and 1 for other settings.",
"We use the 100d GloVe word vectors (Pennington et al., 2014) pre-trained on Wikipedia and Gigaword (Parker et al., 2011) , and the Adam optimizer (Kingma and Ba, 2014) with learning rate 0.001.",
"Evaluation Metrics.",
"We report accuracy and macro-F1 for intent detection, and micro/macro-F1 for slot filling.",
"Micro/macro-F1 are the harmonic mean of micro/macro precision and recall.",
"Macro-precision/recall are calculated by averaging precision/recall of each label, and microprecision/recall are averaged over each prediction.",
"Competitors and Naming Conventions.",
"Here, a bold Courier typeface like BLSTM denotes the notations of the models that we will compare in Sec.",
"5.",
"Specifically, we compare our methods with the baseline BLSTM model (Sec.",
"3.1).",
"Since our attention loss method (Sec.",
"3.3) uses two-side attention, we include the raw two-side attention model without attention loss (+two) for comparison as well.",
"Besides, we also evaluate the RE output (REO), which uses the REtags as prediction directly, to show the quality of the REs that we will use in the experiments.",
"4 As for our methods for combinging REs with NN, +feat refers to using REtag as input features (Sec.",
"3.2), +posi and +neg refer to using positive and negative attention loss respectively, +both refers to using both postive and negative attention losses (Sec.",
"3.3), and +logit means using REtag to modify NN output (Sec.",
"3.4).",
"Moverover, since the REs can also be formatted as first-order-logic (FOL) rules, we also compare our methods with the teacher-student framework proposed by Hu et al.",
"(2016a) , which is a general framework for distilling knowledge from FOL rules into NN (+hu16).",
"Besides, since we consider few-short learning, we also include the memory module proposed by Kaiser et al.",
"(2017) , which performs well in various few-shot datasets (+mem) 5 .",
"Finally, the state-of-art model on the ATIS dataset is also included (L&L16), which jointly models the intent detection and slot filling in a single network (Liu and Lane, 2016) .",
"Experimental Results Full Few-Shot Learning To answer Q1 , we first explore the full few-shot learning scenario.",
"Intent Detection.",
"As shown in Table 1 , except for 5-shot, all approaches improve the baseline BLSTM.",
"Our network-module-level methods give the best performance because our attention module directly receives signals from the clue words in REs that contain more meaningful information than the REtag itself used by other methods.",
"We also observe that since negative REs are derived from positive REs with some noises, posi performs better than neg when the amount of available data is limited.",
"However, neg is slightly better in 20-shot, possibly because negative REs significantly outnumbers the positive ones.",
"Besides, two alone works better than the BLSTM when there are sufficient data, confirming the advantage of our two-side attention architecture.",
"As for other proposed methods, the output level method (logit) works generally better than the input level method (feat), except for the 5-shot case.",
"We believe this is due to the fewer number of RE related parameters and the shorter distance that the gradient needs to travel from the loss to these parameters -both make logit easier to train.",
"However, since logit directly modifies the output, the final prediction is more sensitive to the insufficiently trained weights in logit, leading to the inferior results in the 5-shot setting.",
"Model Type Model Name 90 / 74.47 68.69 / 84.66 72.43 / 85.78 59.59 / 83.47 73.62 / 89.28 78.94 / 92.21 +two+neg 49.01 / 68.31 64.67 / 79.17 72.32 / 86.34 59.51 / 83.23 72.92 / 89.11 78.83 / 92.07 +two+both 54.86 / 75.36 71.23 / 85.44 75.58 / 88.80 59.47 / 83.35 73.55 / 89.54 To compare with existing methods of combining NN and rules, we also implement the teacherstudent network (Hu et al., 2016a) .",
"This method lets the NN learn from the posterior label distribution produced by FOL rules in a teacher-student framework, but requires considerable amounts of data.",
"Therefore, although both hu16 and logit operate at the output level, logit still performs better than hu16 in these few-shot settings, since logit is easier to train.",
"It can also be seen that starting from 10-shot, two+both significantly outperforms pure REO.",
"This suggests that by using our attention loss to connect the distributional representation of the NN and the clue words of REs, we can generalize RE patterns within a NN architecture by using a small amount of annotated data.",
"Slot Filling.",
"Different from intent detection, as shown in Table 1 , our attention loss does not work for slot filling.",
"The reason is that the slot label of a target word (the word for which we are trying to predict a slot label) is decided mainly by the semantic meaning of the word itself, together with 0-3 phrases in the context to provide supplementary information.",
"However, our attention mechanism can only help in recognizing clue words in the context, which is less important than the word itself and have already been captured by the BLSTM, to some extent.",
"Therefore, the attention loss and the attention related parameters are more of a burden than a benefit.",
"As is shown in Fig.",
"1 , the model recognizes Boston as fromloc.city mainly because Boston itself is a city, and its context word from may have already been captured by the BLSTM and our attention mechanism does not help much.",
"By examining the attention values of +two trained on the full dataset, we find that instead of mark-ing informative context words, the attention tends to concentrate on the target word itself.",
"This observation further reinforces our hypothesis on the attention loss.",
"On the other hand, since the REtags provide extra information, such as type, about words in the sentence, logit and feat generally work better.",
"However, different from intent detection, feat only outperforms logit by a margin.",
"This is because feat can use the REtags of all words to generate better context representations through the NN, while logit can only utilize the REtag of the target word before the final output layer.",
"As a result, feat actually gathers more information from REs and can make better use of them than logit.",
"Again, hu16 is still outperformed by logit, possibly due to the insufficient data support in this few-shot scenario.",
"We also see that even the BLSTM outperforms REO in 5-shot, indicating while it is hard to write high-quality RE patterns, using REs to boost NNs is still feasible.",
"Summary.",
"The amount of extra information that a NN can utilize from the combined REs significantly affects the resulting performance.",
"Thus, the attention loss methods work best for intent detection and feat works best for slot filling.",
"We also see that the improvements from REs decreases as having more training data.",
"This is not surprising because the implicit knowledge embedded in the REs are likely to have already been captured by a sufficient large annotated dataset and in this scenario using the REs will bring in fewer benefits.",
"Partial Few-Shot Learning To better understand the relationship between our approach and existing few-shot learning methods, we also implement the memory network method Table 3 : Results on Full Dataset.",
"The left side of '/' applies for intent, and the right side for slot.",
"(Kaiser et al., 2017) which achieves good results in various few-shot datasets.",
"We adapt their opensource code, and add their memory module (mem) to our BLSTM model.",
"Since the memory module requires to be trained on either many few-shot classes or several classes with extra data, we expand our full few-shot dataset for intent detection, so that the top 3 intent labels have 300 sentences (partial few-shot).",
"As shown in Table 2 , mem works better than BLSTM, and our attention loss can be further combined with the memory module (mem+posi), with even better performance.",
"hu16 also works here, but worse than two+both.",
"Note that, the memory module requires the input sentence to have only one embedding, thus we only use one set of positive attention for combination.",
"As for slot filling, since we already have extra data for frequent tags in the original few-shot data (see Sec.",
"4.1), we use them directly to run the memory module.",
"As shown in the bottom of Table 1 , mem also improves the base BLSTM, and gains further boost when it is combined with feat 6 .",
"Full Dataset To answer Q2, we also evaluate our methods on the full dataset.",
"As seen in Table 3 , for intent detection, while two+both still works, feat and logit no longer give improvements.",
"This shows 6 For compactness, we only combine the best method in each task with mem, but others can also be combined.",
"that since both REtag and annotated data provide intent labels for the input sentence, the value of the extra noisy tag from RE become limited as we have more annotated data.",
"However, as there is no guidance on attention in the annotations, the clue words from REs are still useful.",
"Further, since feat concatenates REtags at the input level, the powerful NN makes it more likely to overfit than logit, therefore feat performs even worse when compared to the BLSTM.",
"As for slot filling, introducing feat and logit can still bring further improvements.",
"This shows that the word type information contained in the REtags is still hard to be fully learned even when we have more annotated data.",
"Moreover, different from few-shot settings, two+both has a better macro-F1 score than the BLSTM for this task, suggesting that better attention is still useful when the base model is properly trained.",
"Again, hu16 outperforms the BLSTM in both tasks, showing that although the REtags are noisy, their teacher-student network can still distill useful information.",
"However, hu16 is a general framework to combine FOL rules, which is more indirect in transferring knowledge from rules to NN than our methods.",
"Therefore, it is still inferior to attention loss in intent detection and feat in slot filling, which are designed to combine REs.",
"Further, mem generally works in this setting, and can receive further improvement by combining our fusion methods.",
"We can also see that two+both works clearly better than the stateof-art method (L&L16) in intent detection, which jointly models the two tasks.",
"And mem+feat is comparative to L&L16 in slot filling.",
"Impact of the RE Complexity We now discuss how the RE complexity affects the performance of the combination.",
"We choose to control the RE complexity by modifying the number of groups.",
"Specifically, we reduce the number of groups for existing REs to decrease RE complexity.",
"To mimic the process of writing simple REs from scratch, we try our best to keep the key RE groups.",
"For intent detection, all the REs are reduced to at most 2 groups.",
"As for slot filling, we also reduce the REs to at most 2 groups, and for some simples case, we further reduce them into word-list patterns, e.g., ( CITY).",
"As shown in Table 4 , the simple REs already deliver clear improvements to the base NN models, which shows the effectiveness of our methods, and indicates that simple REs are quite costefficient since these simple REs only contain 1-2 RE groups and thus very easy to produce.",
"We can also see that using complex REs generally leads to better results compared to using simple REs.",
"This indicates that when considering using REs to improve a NN model, we can start with simple REs, and gradually increase the RE complexity to improve the performance over time 7 .",
"Related Work Our work builds upon the following techniques, while qualitatively differing from each NN with Rules.",
"On the initialization side, uses important n-grams to initialize the convolution filters.",
"On the input side, Wang et al.",
"(2017a) uses knowledge base rules to find relevant concepts for short texts to augment input.",
"On the output side, Hu et al.",
"(2016a; 2016b) and Guo et al.",
"(2017) use FOL rules to rectify the output probability of NN, and then let NN learn from the rectified distribution in a teacher-student framework.",
"Xiao et al.",
"(2017) , on the other hand, modifies the decoding score of NN by multiplying a weight derived from rules.",
"On the loss function side, people modify the loss function to model the relationship between premise and conclusion (Demeester et al., 2016) , and fit both human-annotated and rule-annotated labels (Alashkar et al., 2017) .",
"Since fusing in initialization or in loss function often require special properties of the task, these approaches are not applicable to our problem.",
"Our work thus offers new ways to exploit RE rules at different levels of a NN.",
"NNs and REs.",
"As for NNs and REs, previous work has tried to use RE to speed up the decoding phase of a NN (Strauß et al., 2016) and generating REs from natural language specifications of the 7 We do not include results of both for slot filling since its REs are different from feat and logit, and we have already shown that the attention loss method does not work for slot filling.",
"RE (Locascio et al., 2016) .",
"By contrast, our work aims to use REs to improve the prediction ability of a NN.",
"Few-Shot Learning.",
"Prior work either considers few-shot learning in a metric learning framework (Koch et al., 2015; Vinyals et al., 2016) , or stores instances in a memory (Santoro et al., 2016; Kaiser et al., 2017) to match similar instances in the future.",
"Wang et al.",
"(2017b) further uses the semantic meaning of the class name itself to provide extra information for few-shot learning.",
"Unlike these previous studies, we seek to use the humangenerated REs to provide additional information.",
"Natural Language Understanding.",
"Recurrent neural networks are proven to be effective in both intent detection (Ravuri and Stoicke, 2015) and slot filling (Mesnil et al., 2015) .",
"Researchers also find ways to jointly model the two tasks (Liu and Lane, 2016; Zhang and Wang, 2016) .",
"However, no work so far has combined REs and NNs to improve intent detection and slot filling.",
"Conclusions In this paper, we investigate different ways to combine NNs and REs for solving typical SLU tasks.",
"Our experiments demonstrate that the combination clearly improves the NN performance in both the few-shot learning and the full dataset settings.",
"We show that by exploiting the implicit knowledge encoded within REs, one can significantly improve the learning performance.",
"Specifically, we observe that using REs to guide the attention module works best for intent detection, and using REtags as features is an effective approach for slot filling.",
"We provide interesting insights on how REs of various forms can be employed to improve NNs, showing that while simple REs are very cost-effective, complex REs generally yield better results."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"5.1",
"5.2",
"5.3",
"5.4",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Typesetting",
"Problem Definition",
"The Use of Regular Expressions",
"Our Approach",
"Base Models",
"Using REs at the Input Level",
"Using REs at the Network Module Level",
"Using REs at the Output Level",
"Evaluation Methodology",
"Datasets",
"Preparing REs",
"Experimental Setup",
"Full Few-Shot Learning",
"Partial Few-Shot Learning",
"Full Dataset",
"Impact of the RE Complexity",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-18#paper-1009#slide-5 | Method 3 Clue Words Guide Attention | Attention should match clue words
BLSTM RE Instance x1 x2 x3 x4 x5
flights from Boston to Miami Gold Att:
Positive Regular Expressions (REs) Negative REs
REs can indicate the input belong to class k, or does not belong to class k
Correction of wrong predictions
How long does it take to fly from LA to NYC? intent: abbreviation
Corresponding to positive / negative REs
Positive REs and Negative REs interconvertible
A positive RE for one class can be negative RE for other classes
flights from Boston to Tokyo intent: abbreviation | Attention should match clue words
BLSTM RE Instance x1 x2 x3 x4 x5
flights from Boston to Miami Gold Att:
Positive Regular Expressions (REs) Negative REs
REs can indicate the input belong to class k, or does not belong to class k
Correction of wrong predictions
How long does it take to fly from LA to NYC? intent: abbreviation
Corresponding to positive / negative REs
Positive REs and Negative REs interconvertible
A positive RE for one class can be negative RE for other classes
flights from Boston to Tokyo intent: abbreviation | [] |
GEM-SciDuet-train-18#paper-1009#slide-6 | 1009 | Marrying Up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding | The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data. In this paper, we ask the question: "Can we combine a neural network (NN) with regular expressions (RE) to improve supervised learning for NLP?". In answer, we develop novel methods to exploit the rich expressiveness of REs at different levels within a NN, showing that the combination significantly enhances the learning effectiveness when a small number of training examples are available. We evaluate our approach by applying it to spoken language understanding for intent detection and slot filling. Experimental results show that our approach is highly effective in exploiting the available training data, giving a clear boost to the RE-unaware NN. [Figure 1 residue: sentence "flights from Boston to Miami"; intent RE /^flights? from/ with REtag flight (intent label: flight); slot RE /from (__CITY) to (__CITY)/ with REtags city/fromloc.city and city/toloc.city (slot labels: O O B-fromloc.city O B-toloc.city).] | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294
],
"paper_content_text": [
"Introduction Regular expressions (REs) are widely used in various natural language processing (NLP) tasks like pattern matching, sentence classification, sequence labeling, etc.",
"(Chang and Manning, 2014) .",
"As a technique based on human-crafted rules, it is concise, interpretable, tunable, and does not rely on much training data to generate.",
"As such, it is commonly used in industry, especially when the available training examples are limited -a problem known as few-shot learning (GC et al., 2015) .",
"While powerful, REs have a poor generalization ability because all synonyms and variations in a RE must be explicitly specified.",
"As a result, REs are often ensembled with data-driven methods, such as neural network (NN) based techniques, where a set of carefully-written REs are used to handle certain cases with high precision, leaving the rest for data-driven methods.",
"We believe the use of REs can go beyond simple pattern matching.",
"In addition to being a separate classifier to be ensembled, a RE also encodes a developer's knowledge for the problem domain.",
"The knowledge could be, for example, the informative words (clue words) within a RE's surface form.",
"We argue that such information can be utilized by data-driven methods to achieve better prediction results, especially in few-shot learning.",
"This work investigates the use of REs to improve NNs -a learning framework that is widely used in many NLP tasks (Goldberg, 2017) .",
"The combination of REs and a NN allows us to exploit the conciseness and effectiveness of REs and the strong generalization ability of NNs.",
"This also provides us an opportunity to learn from various kinds of REs, since NNs are known to be good at tolerating noises (Xie et al., 2016) .",
"This paper presents novel approaches to combine REs with a NN at different levels.",
"At the input layer, we propose to use the evaluation outcome of REs as the input features of a NN (Sec.3.2).",
"At the network module level, we show how to exploit the knowledge encoded in REs to guide the attention mechanism of a NN (Sec.",
"3.3).",
"At the output layer, we combine the evaluation outcome of a RE with the NN output in a learnable manner (Sec.",
"3.4) .",
"We evaluate our approach by applying it to two spoken language understanding (SLU) tasks, namely intent detection and slot filling, which respectively correspond to two fundamental NLP tasks: sentence classification and sequence labeling.",
"To demonstrate the usefulness of REs in realworld scenarios where the available number of annotated data can vary, we explore both the fewshot learning setting and the one with full training data.",
"Experimental results show that our approach is highly effective in utilizing the available Figure 1 : A sentence from the ATIS dataset.",
"REs can be used to detect the intent and label slots.",
"annotated data, yielding significantly better learning performance over the RE-unaware method.",
"Our contributions are as follows.",
"(1) We present the first work to systematically investigate methods for combining REs with NNs.",
"(2) The proposed methods are shown to clearly improve the NN performance in both the few-shot learning and the full annotation settings.",
"(3) We provide a set of guidance on how to combine REs with NNs and RE annotation.",
"Background Typesetting In this paper, we use italic for emphasis like intent detection, the Courier typeface for abbreviations like RE, bold italic for the first appearance of a concept like clue words, Courier surrounded by / for regular expressions like /list( the)?",
"AIRLINE/, and underlined italic for words of sentences in our dataset like Boston.",
"Problem Definition Our work targets two SLU tasks: intent detection and slot filling.",
"The former is a sentence classification task where we learn a function to map an input sentence of n words, x = [x 1 , ..., x n ], to a corresponding intent label, c. The latter is a sequence labeling task for which we learn a function to take in an input query sentence of n words, x = [x 1 , ..., x n ], to produce a corresponding labeling sequence, y = [y 1 , ..., y n ], where y i is the slot label of the corresponding word, x i .",
"Take the sentence in Fig.",
"1 as an example.",
"A successful intent detector would suggest the intent of the sentence as flight, i.e., querying about flight-related information.",
"A slot filler, on the other hand, should identify the slots fromloc.city and toloc.city by labeling Boston and Miami, respectively, using the begin-inside-outside (BIO) scheme.",
"The Use of Regular Expressions In this work, a RE defines a mapping from a text pattern to several REtags which are the same as or related to the target labels (i.e., intent and slot labels).",
"A search function takes in a RE, applies it to all sentences, and returns any texts that match the pattern.",
"We then assign the REtag (s) (that are associated with the matching RE) to either the matched sentence (for intent detection) or some matched phrases (for slot filling).",
"Specifically, our REtags for intent detection are the same as the intent labels.",
"For example, in Fig.",
"1 , we get a REtag of flight that is the same as the intent label flight.",
"For slot filling, we use two different sets of REs.",
"Given the group functionality of RE, we can assign REtags to our interested RE groups (i.e., the expressions defined inside parentheses).",
"The translation from REtags to slot labels depends on how the corresponding REs are used.",
"(1) When REs are used at the network module level (Sec.",
"3.3), the corresponding REtags are the same as the target slot labels.",
"For instance, the slot RE in Fig.",
"1 will assign fromloc.city to the first RE group and toloc.city to the second one.",
"Here, CITY is a list of city names, which can be replaced with a RE string like /Boston|Miami|LA|.../.",
"(2) If REs are used in the input (Sec.",
"3.2) and the output layers (Sec.",
"3.4) of a NN, the corresponding REtag would be different from the target slot labels.",
"In this context, the two RE groups in Fig.",
"1 would be simply tagged as city to capture the commonality of three related target slot labels: fromloc.city, toloc.city, stoploc.city.",
"Note that we could use the target slot labels as REtags for all the settings.",
"The purpose of abstracting REtags to a simplified version of the target slot labels here is to show that REs can still be useful when their evaluation outcome does not exactly match our learning objective.",
"Further, as shown in Sec.",
"4.2, using simplified REtags can also make the development of REs easier in our tasks.",
"Intuitively, complicated REs can lead to better performance but require more efforts to generate.",
"Generally, there are two aspects affecting RE complexity most: the number of RE groups 1 and or clauses (i.e., expressions separated by the disjunction operator |) in a RE group.",
"Having a larger number of RE groups often leads to better 1 When discussing complexity, we consider each semantically independent consecutive word sequence as a RE group (excluding clauses, such as \\w+, that can match any word).",
"For instance, the RE: /how long( \\w+){1,2}?",
"(it take|flight)/ has two RE groups: (how long) and (it take|flight).",
"precision but lower coverage on pattern matching, while a larger number of or clauses usually gives a higher coverage but slightly lower precision.",
"Our Approach As depicted in Fig.",
"2 , we propose to combine NNs and REs from three different angles.",
"Base Models We use the Bi-directional LSTM (BLSTM) as our base NN model because it is effective in both intent detection and slot filling (Liu and Lane, 2016) .",
"Intent Detection.",
"As shown in Fig.",
"2 , the BLSTM takes as input the word embeddings [x 1 , ..., x n ] of a n-word sentence, and produces a vector h i for each word i.",
"A self-attention layer then takes in the vectors produced by the BLSTM to compute the sentence embedding s: s = i α i h i , α i = exp(h i Wc) i exp(h i Wc) (1) where α i is the attention for word i, c is a randomly initialized trainable vector used to select informative words for classification, and W is a weight matrix.",
"Finally, s is fed to a softmax classifier for intent classification.",
"Slot Filling.",
"The model for slot filling is straightforward -the slot label prediction is generated by a softmax classier which takes in the BLSTM's output h i and produces the slot label of word i.",
"Note that attention aggregation in Fig.",
"2 is only employed by the network module level method presented in Sec.",
"3.3.",
"Using REs at the Input Level At the input level, we use the evaluation outcomes of REs as features which are fed to NN models.",
"Intent Detection.",
"Our REtag for intent detection is the same as our target intent label.",
"Because real-world REs are unlikely to be perfect, one sentence may be matched by more than one RE.",
"This may result in several REtags that are conflict with each other.",
"For instance, the sentence list the Delta airlines flights to Miami can match a RE: /list( the)?",
"AIRLINE/ that outputs tag airline, and another RE: /list( \\w+){0,3} flights?/ that outputs tag flight.",
"To resolve the conflicting situations illustrated above, we average the randomly initialized trainable tag embeddings to form an aggregated embedding as the NN input.",
"There are two ways to use the aggregated embedding.",
"We can append the aggregated embedding to either the embedding of every input word, or the input of the softmax classifier (see 1 in Fig.",
"2(a) ).",
"To determine which strategy works best, we perform a pilot study.",
"We found that the first method causes the tag embedding to be copied many times; consequently, the NN tends to heavily rely on the REtags, and the resulting performance is similar to the one given by using REs alone in few-shot settings.",
"Thus, we adopt the second approach.",
"Slot Filling.",
"Since the evaluation outcomes of slot REs are word-level tags, we can simply embed and average the REtags into a vector f i for each word, and append it to the corresponding word embedding w i (as shown in 1 in Fig.",
"2(b) ).",
"Note that we also extend the slot REtags into the BIO format, e.g., the REtags of phrase New York are B-city and I-city if its original tag is city.",
"Using REs at the Network Module Level At the network module level, we explore ways to utilize the clue words in the surface form of a RE (bold blue arrows and words in 2 of Fig.",
"2 ) to guide the attention module in NNs.",
"Intent Detection.",
"Taking the sentence in Fig.",
"1 for example, the RE: /ˆflights?",
"from/ that leads to intent flight means that flights from are the key words to decide the intent flight.",
"Therefore, the attention module in NNs should leverage these two words to get the correct prediction.",
"To this end, we extend the base intent model by making two changes to incorporate the guidance from REs.",
"First, since each intent has its own clue words, using a single sentence embedding for all intent labels would make the attention less focused.",
"Therefore, we let each intent label k use different attention a k , which is then used to generate the sentence embedding s k for that intent: s k = i α ki h i , α ki = exp(h i W a c k ) i exp(h i W a c k ) (2) where c k is a trainable vector for intent k which is used to compute attention a k , h i is the BLSTM output for word i, and W a is a weight matrix.",
"The probability p k that the input sentence expresses intent k is computed by: where w k , logit k , b k are weight vector, logit, and bias for intent k, respectively.",
"p k = exp(logit k ) k exp(logit k ) , logit k = w k s k + b k (3) x 1 x 2 h 1 h 2 x 3 h Second, apart from indicating a sentence for intent k (positive REs), a RE can also indicate that a sentence does not express intent k (negative REs).",
"We thus use a new set of attention (negative attentions, in contrast to positive attentions), to compute another set of logits for each intent with Eqs.",
"2 and 3.",
"We denote the logits computed by positive attentions as logit pk , and those by negative attentions as logit nk , the final logit for intent k can then be calculated as: logit k = logit pk − logit nk (4) To use REs to guide attention, we add an attention loss to the final loss: loss att = k i t ki log(α ki ) (5) where t ki is set to 0 when none of the matched REs (that leads to intent k) marks word i as a clue word -otherwise t ki is set to 1/l k , where l k is the number of clue words for intent k (if no matched RE leads to intent k, then t k * = 0).",
"We use Eq.",
"5 to compute the positive attention loss, loss att p , for positive REs and negative attention loss, loss att n , for negative ones.",
"The final loss is computed as: loss = loss c + β p loss att p + β n loss att n (6) where loss c is the original classification loss, β p and β n are weights for the two attention losses.",
"Slot Filling.",
"The two-side attention (positive and negative attention) mechanism introduced for intent prediction is unsuitable for slot filling.",
"Because for slot filling, we need to compute attention for each word, which demands more compu-tational and memory resources than doing that for intent detection 2 .",
"Because of the aforementioned reason, we use a simplified version of the two-side attention, where all the slot labels share the same set of positive and negative attention.",
"Specifically, to predict the slot label of word i, we use the following equations, which are similar to Eq.",
"1, to generate a sentence embedding s pi with regard to word i from positive attention: s pi = j α pij h j , α pij = exp(h j W sp h i ) j exp(h j W sp h i ) (7) where h i and h j are the BLSTM outputs for word i and j respectively, W sp is a weight matrix, and α pij is the positive attention value for word j with respect to word i.",
"Further, by replacing W sp with W sn , we use Eq.",
"7 again to compute negative attention and generate the corresponding sentence embedding s ni .",
"Finally, the prediction p i for word i can be calculated as: p i = softmax((W p [s pi ; h i ] + b p ) −(W n [s ni ; h i ] + b n )) (8) where W p , W n , b p , b n are weight matrices and bias vectors for positive and negative attention, respectively.",
"Here we append the BLSTM output h i to s pi and s ni because the word i itself also plays a crucial part in identifying its slot label.",
"Using REs at the Output Level At the output level, REs are used to amend the output of NNs.",
"At this level, we take the same approach used for intent detection and slot filling (see 3 in Fig.",
"2 ).",
"As mentioned in Sec.",
"2.3, the slot REs used in the output level only produce a simplified version of target slot labels, for which we can further annotate their corresponding target slot labels.",
"For instance, a RE that outputs city can lead to three slot labels: fromloc.city, toloc.city, stoploc.city.",
"Let z k be a 0-1 indicator of whether there is at least one matched RE that leads to target label k (intent or slot label), the final logits of label k for a sentence (or a specific word for slot filling) is: logit k = logit k + w k z k (9) where logit k is the logit produced by the original NN, and w k is a trainable weight indicating the overall confidence for REs that lead to target label k. Here we do not assign a trainable weight for each RE because it is often that only a few sentences match a RE.",
"We modify the logit instead of the final probability because a logit is an unconstrained real value, which matches the property of w k z k better than probability.",
"Actually, when performing model ensemble, ensembling with logits is often empirically better than with the final probability 3 .",
"This is also the reason why we choose to operate on logits in Sec.",
"3.3.",
"Evaluation Methodology Our experiments aim to answer three questions: Q1: Does the use of REs enhance the learning quality when the number of annotated instances is small?",
"Q2: Does the use of REs still help when using the full training data?",
"Q3: How can we choose from different combination methods?",
"Datasets We use the ATIS dataset (Hemphill et al., 1990) to evaluate our approach.",
"This dataset is widely used in SLU research.",
"It includes queries of flights, meal, etc.",
"We follow the setup of Liu and Lane (2016) by using 4,978 queries for training and 893 for testing, with 18 intent labels and 127 slot labels.",
"We also split words like Miami's into Miami 's during the tokenization phase to reduce the number of words that do not have a pre-trained word embedding.",
"This strategy is useful for fewshot learning.",
"To answer Q1 , we also exploit the full few-shot learning setting.",
"Specifically, for intent detection, we randomly select 5, 10, 20 training instances for each intent to form the few-shot training set; and for slot filling, we also explore 5, 10, 20 shots settings.",
"However, since a sentence typically contains multiple slots, the number of mentions of frequent slot labels may inevitably exceeds the target shot count.",
"To better approximate the target shot count, we select sentences for each slot label in ascending order of label frequencies.",
"That is k 1 -shot dataset will contain k 2 -shot dataset if k 1 > k 2 .",
"All settings use the original test set.",
"Since most existing few-shot learning methods require either many few-shot classes or some classes with enough data for training, we also explore the partial few-shot learning setting for intent detection to provide a fair comparison for existing few-shot learning methods.",
"Specifically, we let the 3 most frequent intents have 300 training instances, and the rest remains untouched.",
"This is also a common scenario in real world, where we often have several frequent classes and many classes with limited data.",
"As for slot filling, however, since the number of mentions of frequent slot labels already exceeds the target shot count, the original slot filling few-shot dataset can be directly used to train existing few-shot learning methods.",
"Therefore, we do not distinguish full and partial few-shot learning for slot filling.",
"Preparing REs We use the syntax of REs in Perl in this work.",
"Our REs are written by a paid annotator who is familiar with the domain.",
"It took the annotator in total less than 10 hours to develop all the REs, while a domain expert can accomplish the task faster.",
"We use the 20-shot training data to develop the REs, but word lists like cities are obtained from the full training set.",
"The development of REs is considered completed when the REs can cover most of the cases in the 20-shot training data with resonable precision.",
"After that, the REs are fixed throughout the experiments.",
"The majority of the time for writing the REs is proportional to the number of RE groups.",
"It took about 1.5 hours to write the 54 intent REs with on average 2.2 groups per RE.",
"It is straightforward to write the slot REs for the input and output level methods, for which it took around 1 hour to write the 60 REs with 1.7 groups on average.",
"By con-trast, writing slot REs to guide attention requires more efforts as the annotator needs to carefully select clue words and annotate the full slot label.",
"As a result, it took about 5.5 hours to generate 115 REs with on average 3.3 groups.",
"The performance of the REs can be found in the last line of Table 1.",
"In practice, a positive RE for intent (or slot) k can often be treated as negative REs for other intents (or slots).",
"As such, we use the positive REs for intent (or slot) k as the negative REs for other intents (or slots) in our experiments.",
"Experimental Setup Hyper-parameters.",
"Our hyper-parameters for the BLSTM are similar to the ones used by Liu and Lane (2016) .",
"Specifically, we use batch size 16, dropout probability 0.5, and BLSTM cell size 100.",
"The attention loss weight is 16 (both positive and negative) for full few-shot learning settings and 1 for other settings.",
"We use the 100d GloVe word vectors (Pennington et al., 2014) pre-trained on Wikipedia and Gigaword (Parker et al., 2011) , and the Adam optimizer (Kingma and Ba, 2014) with learning rate 0.001.",
"Evaluation Metrics.",
"We report accuracy and macro-F1 for intent detection, and micro/macro-F1 for slot filling.",
"Micro/macro-F1 are the harmonic mean of micro/macro precision and recall.",
"Macro-precision/recall are calculated by averaging precision/recall of each label, and microprecision/recall are averaged over each prediction.",
"Competitors and Naming Conventions.",
"Here, a bold Courier typeface like BLSTM denotes the notations of the models that we will compare in Sec.",
"5.",
"Specifically, we compare our methods with the baseline BLSTM model (Sec.",
"3.1).",
"Since our attention loss method (Sec.",
"3.3) uses two-side attention, we include the raw two-side attention model without attention loss (+two) for comparison as well.",
"Besides, we also evaluate the RE output (REO), which uses the REtags as prediction directly, to show the quality of the REs that we will use in the experiments.",
"4 As for our methods for combinging REs with NN, +feat refers to using REtag as input features (Sec.",
"3.2), +posi and +neg refer to using positive and negative attention loss respectively, +both refers to using both postive and negative attention losses (Sec.",
"3.3), and +logit means using REtag to modify NN output (Sec.",
"3.4).",
"Moverover, since the REs can also be formatted as first-order-logic (FOL) rules, we also compare our methods with the teacher-student framework proposed by Hu et al.",
"(2016a) , which is a general framework for distilling knowledge from FOL rules into NN (+hu16).",
"Besides, since we consider few-short learning, we also include the memory module proposed by Kaiser et al.",
"(2017) , which performs well in various few-shot datasets (+mem) 5 .",
"Finally, the state-of-art model on the ATIS dataset is also included (L&L16), which jointly models the intent detection and slot filling in a single network (Liu and Lane, 2016) .",
"Experimental Results Full Few-Shot Learning To answer Q1 , we first explore the full few-shot learning scenario.",
"Intent Detection.",
"As shown in Table 1 , except for 5-shot, all approaches improve the baseline BLSTM.",
"Our network-module-level methods give the best performance because our attention module directly receives signals from the clue words in REs that contain more meaningful information than the REtag itself used by other methods.",
"We also observe that since negative REs are derived from positive REs with some noises, posi performs better than neg when the amount of available data is limited.",
"However, neg is slightly better in 20-shot, possibly because negative REs significantly outnumbers the positive ones.",
"Besides, two alone works better than the BLSTM when there are sufficient data, confirming the advantage of our two-side attention architecture.",
"As for other proposed methods, the output level method (logit) works generally better than the input level method (feat), except for the 5-shot case.",
"We believe this is due to the fewer number of RE related parameters and the shorter distance that the gradient needs to travel from the loss to these parameters -both make logit easier to train.",
"However, since logit directly modifies the output, the final prediction is more sensitive to the insufficiently trained weights in logit, leading to the inferior results in the 5-shot setting.",
"Model Type Model Name 90 / 74.47 68.69 / 84.66 72.43 / 85.78 59.59 / 83.47 73.62 / 89.28 78.94 / 92.21 +two+neg 49.01 / 68.31 64.67 / 79.17 72.32 / 86.34 59.51 / 83.23 72.92 / 89.11 78.83 / 92.07 +two+both 54.86 / 75.36 71.23 / 85.44 75.58 / 88.80 59.47 / 83.35 73.55 / 89.54 To compare with existing methods of combining NN and rules, we also implement the teacherstudent network (Hu et al., 2016a) .",
"This method lets the NN learn from the posterior label distribution produced by FOL rules in a teacher-student framework, but requires considerable amounts of data.",
"Therefore, although both hu16 and logit operate at the output level, logit still performs better than hu16 in these few-shot settings, since logit is easier to train.",
"It can also be seen that starting from 10-shot, two+both significantly outperforms pure REO.",
"This suggests that by using our attention loss to connect the distributional representation of the NN and the clue words of REs, we can generalize RE patterns within a NN architecture by using a small amount of annotated data.",
"Slot Filling.",
"Different from intent detection, as shown in Table 1 , our attention loss does not work for slot filling.",
"The reason is that the slot label of a target word (the word for which we are trying to predict a slot label) is decided mainly by the semantic meaning of the word itself, together with 0-3 phrases in the context to provide supplementary information.",
"However, our attention mechanism can only help in recognizing clue words in the context, which is less important than the word itself and have already been captured by the BLSTM, to some extent.",
"Therefore, the attention loss and the attention related parameters are more of a burden than a benefit.",
"As is shown in Fig.",
"1 , the model recognizes Boston as fromloc.city mainly because Boston itself is a city, and its context word from may have already been captured by the BLSTM and our attention mechanism does not help much.",
"By examining the attention values of +two trained on the full dataset, we find that instead of mark-ing informative context words, the attention tends to concentrate on the target word itself.",
"This observation further reinforces our hypothesis on the attention loss.",
"On the other hand, since the REtags provide extra information, such as type, about words in the sentence, logit and feat generally work better.",
"However, different from intent detection, feat only outperforms logit by a margin.",
"This is because feat can use the REtags of all words to generate better context representations through the NN, while logit can only utilize the REtag of the target word before the final output layer.",
"As a result, feat actually gathers more information from REs and can make better use of them than logit.",
"Again, hu16 is still outperformed by logit, possibly due to the insufficient data support in this few-shot scenario.",
"We also see that even the BLSTM outperforms REO in 5-shot, indicating while it is hard to write high-quality RE patterns, using REs to boost NNs is still feasible.",
"Summary.",
"The amount of extra information that a NN can utilize from the combined REs significantly affects the resulting performance.",
"Thus, the attention loss methods work best for intent detection and feat works best for slot filling.",
"We also see that the improvements from REs decreases as having more training data.",
"This is not surprising because the implicit knowledge embedded in the REs are likely to have already been captured by a sufficient large annotated dataset and in this scenario using the REs will bring in fewer benefits.",
"Partial Few-Shot Learning To better understand the relationship between our approach and existing few-shot learning methods, we also implement the memory network method Table 3 : Results on Full Dataset.",
"The left side of '/' applies for intent, and the right side for slot.",
"(Kaiser et al., 2017) which achieves good results in various few-shot datasets.",
"We adapt their opensource code, and add their memory module (mem) to our BLSTM model.",
"Since the memory module requires to be trained on either many few-shot classes or several classes with extra data, we expand our full few-shot dataset for intent detection, so that the top 3 intent labels have 300 sentences (partial few-shot).",
"As shown in Table 2 , mem works better than BLSTM, and our attention loss can be further combined with the memory module (mem+posi), with even better performance.",
"hu16 also works here, but worse than two+both.",
"Note that, the memory module requires the input sentence to have only one embedding, thus we only use one set of positive attention for combination.",
"As for slot filling, since we already have extra data for frequent tags in the original few-shot data (see Sec.",
"4.1), we use them directly to run the memory module.",
"As shown in the bottom of Table 1 , mem also improves the base BLSTM, and gains further boost when it is combined with feat 6 .",
"Full Dataset To answer Q2, we also evaluate our methods on the full dataset.",
"As seen in Table 3 , for intent detection, while two+both still works, feat and logit no longer give improvements.",
"This shows 6 For compactness, we only combine the best method in each task with mem, but others can also be combined.",
"that since both REtag and annotated data provide intent labels for the input sentence, the value of the extra noisy tag from RE become limited as we have more annotated data.",
"However, as there is no guidance on attention in the annotations, the clue words from REs are still useful.",
"Further, since feat concatenates REtags at the input level, the powerful NN makes it more likely to overfit than logit, therefore feat performs even worse when compared to the BLSTM.",
"As for slot filling, introducing feat and logit can still bring further improvements.",
"This shows that the word type information contained in the REtags is still hard to be fully learned even when we have more annotated data.",
"Moreover, different from few-shot settings, two+both has a better macro-F1 score than the BLSTM for this task, suggesting that better attention is still useful when the base model is properly trained.",
"Again, hu16 outperforms the BLSTM in both tasks, showing that although the REtags are noisy, their teacher-student network can still distill useful information.",
"However, hu16 is a general framework to combine FOL rules, which is more indirect in transferring knowledge from rules to NN than our methods.",
"Therefore, it is still inferior to attention loss in intent detection and feat in slot filling, which are designed to combine REs.",
"Further, mem generally works in this setting, and can receive further improvement by combining our fusion methods.",
"We can also see that two+both works clearly better than the stateof-art method (L&L16) in intent detection, which jointly models the two tasks.",
"And mem+feat is comparative to L&L16 in slot filling.",
"Impact of the RE Complexity We now discuss how the RE complexity affects the performance of the combination.",
"We choose to control the RE complexity by modifying the number of groups.",
"Specifically, we reduce the number of groups for existing REs to decrease RE complexity.",
"To mimic the process of writing simple REs from scratch, we try our best to keep the key RE groups.",
"For intent detection, all the REs are reduced to at most 2 groups.",
"As for slot filling, we also reduce the REs to at most 2 groups, and for some simples case, we further reduce them into word-list patterns, e.g., ( CITY).",
"As shown in Table 4 , the simple REs already deliver clear improvements to the base NN models, which shows the effectiveness of our methods, and indicates that simple REs are quite costefficient since these simple REs only contain 1-2 RE groups and thus very easy to produce.",
"We can also see that using complex REs generally leads to better results compared to using simple REs.",
"This indicates that when considering using REs to improve a NN model, we can start with simple REs, and gradually increase the RE complexity to improve the performance over time 7 .",
"Related Work Our work builds upon the following techniques, while qualitatively differing from each NN with Rules.",
"On the initialization side, uses important n-grams to initialize the convolution filters.",
"On the input side, Wang et al.",
"(2017a) uses knowledge base rules to find relevant concepts for short texts to augment input.",
"On the output side, Hu et al.",
"(2016a; 2016b) and Guo et al.",
"(2017) use FOL rules to rectify the output probability of NN, and then let NN learn from the rectified distribution in a teacher-student framework.",
"Xiao et al.",
"(2017) , on the other hand, modifies the decoding score of NN by multiplying a weight derived from rules.",
"On the loss function side, people modify the loss function to model the relationship between premise and conclusion (Demeester et al., 2016) , and fit both human-annotated and rule-annotated labels (Alashkar et al., 2017) .",
"Since fusing in initialization or in loss function often require special properties of the task, these approaches are not applicable to our problem.",
"Our work thus offers new ways to exploit RE rules at different levels of a NN.",
"NNs and REs.",
"As for NNs and REs, previous work has tried to use RE to speed up the decoding phase of a NN (Strauß et al., 2016) and generating REs from natural language specifications of the 7 We do not include results of both for slot filling since its REs are different from feat and logit, and we have already shown that the attention loss method does not work for slot filling.",
"RE (Locascio et al., 2016) .",
"By contrast, our work aims to use REs to improve the prediction ability of a NN.",
"Few-Shot Learning.",
"Prior work either considers few-shot learning in a metric learning framework (Koch et al., 2015; Vinyals et al., 2016) , or stores instances in a memory (Santoro et al., 2016; Kaiser et al., 2017) to match similar instances in the future.",
"Wang et al.",
"(2017b) further uses the semantic meaning of the class name itself to provide extra information for few-shot learning.",
"Unlike these previous studies, we seek to use the humangenerated REs to provide additional information.",
"Natural Language Understanding.",
"Recurrent neural networks are proven to be effective in both intent detection (Ravuri and Stoicke, 2015) and slot filling (Mesnil et al., 2015) .",
"Researchers also find ways to jointly model the two tasks (Liu and Lane, 2016; Zhang and Wang, 2016) .",
"However, no work so far has combined REs and NNs to improve intent detection and slot filling.",
"Conclusions In this paper, we investigate different ways to combine NNs and REs for solving typical SLU tasks.",
"Our experiments demonstrate that the combination clearly improves the NN performance in both the few-shot learning and the full dataset settings.",
"We show that by exploiting the implicit knowledge encoded within REs, one can significantly improve the learning performance.",
"Specifically, we observe that using REs to guide the attention module works best for intent detection, and using REtags as features is an effective approach for slot filling.",
"We provide interesting insights on how REs of various forms can be employed to improve NNs, showing that while simple REs are very cost-effective, complex REs generally yield better results."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"5.1",
"5.2",
"5.3",
"5.4",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Typesetting",
"Problem Definition",
"The Use of Regular Expressions",
"Our Approach",
"Base Models",
"Using REs at the Input Level",
"Using REs at the Network Module Level",
"Using REs at the Output Level",
"Evaluation Methodology",
"Datasets",
"Preparing REs",
"Experimental Setup",
"Full Few-Shot Learning",
"Partial Few-Shot Learning",
"Full Dataset",
"Impact of the RE Complexity",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-18#paper-1009#slide-6 | Experiment Setup | Written by a paid annotator
We want to answer the following questions:
Can regular expressions (REs) improve the neural network (NN) when
data is limited (only use a small fraction of the training data)?
Can REs still improve NN when using the full dataset?
How does RE complexity influence the results? | Written by a paid annotator
We want to answer the following questions:
Can regular expressions (REs) improve the neural network (NN) when
data is limited (only use a small fraction of the training data)?
Can REs still improve NN when using the full dataset?
How does RE complexity influence the results? | [] |
GEM-SciDuet-train-18#paper-1009#slide-7 | 1009 | Marrying Up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding | The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data. In this paper, we ask the question: "Can we combine a neural network (NN) with regular expressions (RE) to improve supervised learning for NLP?". In answer, we develop novel methods to exploit the rich expressiveness of REs at different levels within a NN, showing that the combination significantly enhances the learning effectiveness when a small number of training examples are available. We evaluate our approach by applying it to spoken language understanding for intent detection and slot filling. Experimental results show that our approach is highly effective in exploiting the available training data, giving a clear boost to the RE-unaware NN. [Figure 1 residue: sentence "flights from Boston to Miami"; intent RE /^flights? from/ with REtag flight (intent label: flight); slot RE /from (__CITY) to (__CITY)/ with REtags city/fromloc.city and city/toloc.city (slot labels: O O B-fromloc.city O B-toloc.city).] | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294
],
"paper_content_text": [
"Introduction Regular expressions (REs) are widely used in various natural language processing (NLP) tasks like pattern matching, sentence classification, sequence labeling, etc.",
"(Chang and Manning, 2014) .",
"As a technique based on human-crafted rules, it is concise, interpretable, tunable, and does not rely on much training data to generate.",
"As such, it is commonly used in industry, especially when the available training examples are limited -a problem known as few-shot learning (GC et al., 2015) .",
"While powerful, REs have a poor generalization ability because all synonyms and variations in a RE must be explicitly specified.",
"As a result, REs are often ensembled with data-driven methods, such as neural network (NN) based techniques, where a set of carefully-written REs are used to handle certain cases with high precision, leaving the rest for data-driven methods.",
"We believe the use of REs can go beyond simple pattern matching.",
"In addition to being a separate classifier to be ensembled, a RE also encodes a developer's knowledge for the problem domain.",
"The knowledge could be, for example, the informative words (clue words) within a RE's surface form.",
"We argue that such information can be utilized by data-driven methods to achieve better prediction results, especially in few-shot learning.",
"This work investigates the use of REs to improve NNs -a learning framework that is widely used in many NLP tasks (Goldberg, 2017) .",
"The combination of REs and a NN allows us to exploit the conciseness and effectiveness of REs and the strong generalization ability of NNs.",
"This also provides us an opportunity to learn from various kinds of REs, since NNs are known to be good at tolerating noises (Xie et al., 2016) .",
"This paper presents novel approaches to combine REs with a NN at different levels.",
"At the input layer, we propose to use the evaluation outcome of REs as the input features of a NN (Sec.3.2).",
"At the network module level, we show how to exploit the knowledge encoded in REs to guide the attention mechanism of a NN (Sec.",
"3.3).",
"At the output layer, we combine the evaluation outcome of a RE with the NN output in a learnable manner (Sec.",
"3.4) .",
"We evaluate our approach by applying it to two spoken language understanding (SLU) tasks, namely intent detection and slot filling, which respectively correspond to two fundamental NLP tasks: sentence classification and sequence labeling.",
"To demonstrate the usefulness of REs in realworld scenarios where the available number of annotated data can vary, we explore both the fewshot learning setting and the one with full training data.",
"Experimental results show that our approach is highly effective in utilizing the available Figure 1 : A sentence from the ATIS dataset.",
"REs can be used to detect the intent and label slots.",
"annotated data, yielding significantly better learning performance over the RE-unaware method.",
"Our contributions are as follows.",
"(1) We present the first work to systematically investigate methods for combining REs with NNs.",
"(2) The proposed methods are shown to clearly improve the NN performance in both the few-shot learning and the full annotation settings.",
"(3) We provide a set of guidance on how to combine REs with NNs and RE annotation.",
"Background Typesetting In this paper, we use italic for emphasis like intent detection, the Courier typeface for abbreviations like RE, bold italic for the first appearance of a concept like clue words, Courier surrounded by / for regular expressions like /list( the)?",
"AIRLINE/, and underlined italic for words of sentences in our dataset like Boston.",
"Problem Definition Our work targets two SLU tasks: intent detection and slot filling.",
"The former is a sentence classification task where we learn a function to map an input sentence of n words, x = [x 1 , ..., x n ], to a corresponding intent label, c. The latter is a sequence labeling task for which we learn a function to take in an input query sentence of n words, x = [x 1 , ..., x n ], to produce a corresponding labeling sequence, y = [y 1 , ..., y n ], where y i is the slot label of the corresponding word, x i .",
"Take the sentence in Fig.",
"1 as an example.",
"A successful intent detector would suggest the intent of the sentence as flight, i.e., querying about flight-related information.",
"A slot filler, on the other hand, should identify the slots fromloc.city and toloc.city by labeling Boston and Miami, respectively, using the begin-inside-outside (BIO) scheme.",
"The Use of Regular Expressions In this work, a RE defines a mapping from a text pattern to several REtags which are the same as or related to the target labels (i.e., intent and slot labels).",
"A search function takes in a RE, applies it to all sentences, and returns any texts that match the pattern.",
"We then assign the REtag (s) (that are associated with the matching RE) to either the matched sentence (for intent detection) or some matched phrases (for slot filling).",
"Specifically, our REtags for intent detection are the same as the intent labels.",
"For example, in Fig.",
"1 , we get a REtag of flight that is the same as the intent label flight.",
"For slot filling, we use two different sets of REs.",
"Given the group functionality of RE, we can assign REtags to our interested RE groups (i.e., the expressions defined inside parentheses).",
"The translation from REtags to slot labels depends on how the corresponding REs are used.",
"(1) When REs are used at the network module level (Sec.",
"3.3), the corresponding REtags are the same as the target slot labels.",
"For instance, the slot RE in Fig.",
"1 will assign fromloc.city to the first RE group and toloc.city to the second one.",
"Here, CITY is a list of city names, which can be replaced with a RE string like /Boston|Miami|LA|.../.",
"(2) If REs are used in the input (Sec.",
"3.2) and the output layers (Sec.",
"3.4) of a NN, the corresponding REtag would be different from the target slot labels.",
"In this context, the two RE groups in Fig.",
"1 would be simply tagged as city to capture the commonality of three related target slot labels: fromloc.city, toloc.city, stoploc.city.",
"Note that we could use the target slot labels as REtags for all the settings.",
"The purpose of abstracting REtags to a simplified version of the target slot labels here is to show that REs can still be useful when their evaluation outcome does not exactly match our learning objective.",
"Further, as shown in Sec.",
"4.2, using simplified REtags can also make the development of REs easier in our tasks.",
"Intuitively, complicated REs can lead to better performance but require more efforts to generate.",
"Generally, there are two aspects affecting RE complexity most: the number of RE groups 1 and or clauses (i.e., expressions separated by the disjunction operator |) in a RE group.",
"Having a larger number of RE groups often leads to better 1 When discussing complexity, we consider each semantically independent consecutive word sequence as a RE group (excluding clauses, such as \\w+, that can match any word).",
"For instance, the RE: /how long( \\w+){1,2}?",
"(it take|flight)/ has two RE groups: (how long) and (it take|flight).",
"precision but lower coverage on pattern matching, while a larger number of or clauses usually gives a higher coverage but slightly lower precision.",
"Our Approach As depicted in Fig.",
"2 , we propose to combine NNs and REs from three different angles.",
"Base Models We use the Bi-directional LSTM (BLSTM) as our base NN model because it is effective in both intent detection and slot filling (Liu and Lane, 2016) .",
"Intent Detection.",
"As shown in Fig.",
"2 , the BLSTM takes as input the word embeddings [x 1 , ..., x n ] of a n-word sentence, and produces a vector h i for each word i.",
"A self-attention layer then takes in the vectors produced by the BLSTM to compute the sentence embedding s: s = i α i h i , α i = exp(h i Wc) i exp(h i Wc) (1) where α i is the attention for word i, c is a randomly initialized trainable vector used to select informative words for classification, and W is a weight matrix.",
"Finally, s is fed to a softmax classifier for intent classification.",
"Slot Filling.",
"The model for slot filling is straightforward -the slot label prediction is generated by a softmax classier which takes in the BLSTM's output h i and produces the slot label of word i.",
"Note that attention aggregation in Fig.",
"2 is only employed by the network module level method presented in Sec.",
"3.3.",
"Using REs at the Input Level At the input level, we use the evaluation outcomes of REs as features which are fed to NN models.",
"Intent Detection.",
"Our REtag for intent detection is the same as our target intent label.",
"Because real-world REs are unlikely to be perfect, one sentence may be matched by more than one RE.",
"This may result in several REtags that are conflict with each other.",
"For instance, the sentence list the Delta airlines flights to Miami can match a RE: /list( the)?",
"AIRLINE/ that outputs tag airline, and another RE: /list( \\w+){0,3} flights?/ that outputs tag flight.",
"To resolve the conflicting situations illustrated above, we average the randomly initialized trainable tag embeddings to form an aggregated embedding as the NN input.",
"There are two ways to use the aggregated embedding.",
"We can append the aggregated embedding to either the embedding of every input word, or the input of the softmax classifier (see 1 in Fig.",
"2(a) ).",
"To determine which strategy works best, we perform a pilot study.",
"We found that the first method causes the tag embedding to be copied many times; consequently, the NN tends to heavily rely on the REtags, and the resulting performance is similar to the one given by using REs alone in few-shot settings.",
"Thus, we adopt the second approach.",
"Slot Filling.",
"Since the evaluation outcomes of slot REs are word-level tags, we can simply embed and average the REtags into a vector f i for each word, and append it to the corresponding word embedding w i (as shown in 1 in Fig.",
"2(b) ).",
"Note that we also extend the slot REtags into the BIO format, e.g., the REtags of phrase New York are B-city and I-city if its original tag is city.",
"Using REs at the Network Module Level At the network module level, we explore ways to utilize the clue words in the surface form of a RE (bold blue arrows and words in 2 of Fig.",
"2 ) to guide the attention module in NNs.",
"Intent Detection.",
"Taking the sentence in Fig.",
"1 for example, the RE: /ˆflights?",
"from/ that leads to intent flight means that flights from are the key words to decide the intent flight.",
"Therefore, the attention module in NNs should leverage these two words to get the correct prediction.",
"To this end, we extend the base intent model by making two changes to incorporate the guidance from REs.",
"First, since each intent has its own clue words, using a single sentence embedding for all intent labels would make the attention less focused.",
"Therefore, we let each intent label k use different attention a k , which is then used to generate the sentence embedding s k for that intent: s k = i α ki h i , α ki = exp(h i W a c k ) i exp(h i W a c k ) (2) where c k is a trainable vector for intent k which is used to compute attention a k , h i is the BLSTM output for word i, and W a is a weight matrix.",
"The probability p k that the input sentence expresses intent k is computed by: where w k , logit k , b k are weight vector, logit, and bias for intent k, respectively.",
"p k = exp(logit k ) k exp(logit k ) , logit k = w k s k + b k (3) x 1 x 2 h 1 h 2 x 3 h Second, apart from indicating a sentence for intent k (positive REs), a RE can also indicate that a sentence does not express intent k (negative REs).",
"We thus use a new set of attention (negative attentions, in contrast to positive attentions), to compute another set of logits for each intent with Eqs.",
"2 and 3.",
"We denote the logits computed by positive attentions as logit pk , and those by negative attentions as logit nk , the final logit for intent k can then be calculated as: logit k = logit pk − logit nk (4) To use REs to guide attention, we add an attention loss to the final loss: loss att = k i t ki log(α ki ) (5) where t ki is set to 0 when none of the matched REs (that leads to intent k) marks word i as a clue word -otherwise t ki is set to 1/l k , where l k is the number of clue words for intent k (if no matched RE leads to intent k, then t k * = 0).",
"We use Eq.",
"5 to compute the positive attention loss, loss att p , for positive REs and negative attention loss, loss att n , for negative ones.",
"The final loss is computed as: loss = loss c + β p loss att p + β n loss att n (6) where loss c is the original classification loss, β p and β n are weights for the two attention losses.",
"Slot Filling.",
"The two-side attention (positive and negative attention) mechanism introduced for intent prediction is unsuitable for slot filling.",
"Because for slot filling, we need to compute attention for each word, which demands more compu-tational and memory resources than doing that for intent detection 2 .",
"Because of the aforementioned reason, we use a simplified version of the two-side attention, where all the slot labels share the same set of positive and negative attention.",
"Specifically, to predict the slot label of word i, we use the following equations, which are similar to Eq.",
"1, to generate a sentence embedding s pi with regard to word i from positive attention: s pi = j α pij h j , α pij = exp(h j W sp h i ) j exp(h j W sp h i ) (7) where h i and h j are the BLSTM outputs for word i and j respectively, W sp is a weight matrix, and α pij is the positive attention value for word j with respect to word i.",
"Further, by replacing W sp with W sn , we use Eq.",
"7 again to compute negative attention and generate the corresponding sentence embedding s ni .",
"Finally, the prediction p i for word i can be calculated as: p i = softmax((W p [s pi ; h i ] + b p ) −(W n [s ni ; h i ] + b n )) (8) where W p , W n , b p , b n are weight matrices and bias vectors for positive and negative attention, respectively.",
"Here we append the BLSTM output h i to s pi and s ni because the word i itself also plays a crucial part in identifying its slot label.",
"Using REs at the Output Level At the output level, REs are used to amend the output of NNs.",
"At this level, we take the same approach used for intent detection and slot filling (see 3 in Fig.",
"2 ).",
"As mentioned in Sec.",
"2.3, the slot REs used in the output level only produce a simplified version of target slot labels, for which we can further annotate their corresponding target slot labels.",
"For instance, a RE that outputs city can lead to three slot labels: fromloc.city, toloc.city, stoploc.city.",
"Let z k be a 0-1 indicator of whether there is at least one matched RE that leads to target label k (intent or slot label), the final logits of label k for a sentence (or a specific word for slot filling) is: logit k = logit k + w k z k (9) where logit k is the logit produced by the original NN, and w k is a trainable weight indicating the overall confidence for REs that lead to target label k. Here we do not assign a trainable weight for each RE because it is often that only a few sentences match a RE.",
"We modify the logit instead of the final probability because a logit is an unconstrained real value, which matches the property of w k z k better than probability.",
"Actually, when performing model ensemble, ensembling with logits is often empirically better than with the final probability 3 .",
"This is also the reason why we choose to operate on logits in Sec.",
"3.3.",
"Evaluation Methodology Our experiments aim to answer three questions: Q1: Does the use of REs enhance the learning quality when the number of annotated instances is small?",
"Q2: Does the use of REs still help when using the full training data?",
"Q3: How can we choose from different combination methods?",
"Datasets We use the ATIS dataset (Hemphill et al., 1990) to evaluate our approach.",
"This dataset is widely used in SLU research.",
"It includes queries of flights, meal, etc.",
"We follow the setup of Liu and Lane (2016) by using 4,978 queries for training and 893 for testing, with 18 intent labels and 127 slot labels.",
"We also split words like Miami's into Miami 's during the tokenization phase to reduce the number of words that do not have a pre-trained word embedding.",
"This strategy is useful for fewshot learning.",
"To answer Q1 , we also exploit the full few-shot learning setting.",
"Specifically, for intent detection, we randomly select 5, 10, 20 training instances for each intent to form the few-shot training set; and for slot filling, we also explore 5, 10, 20 shots settings.",
"However, since a sentence typically contains multiple slots, the number of mentions of frequent slot labels may inevitably exceeds the target shot count.",
"To better approximate the target shot count, we select sentences for each slot label in ascending order of label frequencies.",
"That is k 1 -shot dataset will contain k 2 -shot dataset if k 1 > k 2 .",
"All settings use the original test set.",
"Since most existing few-shot learning methods require either many few-shot classes or some classes with enough data for training, we also explore the partial few-shot learning setting for intent detection to provide a fair comparison for existing few-shot learning methods.",
"Specifically, we let the 3 most frequent intents have 300 training instances, and the rest remains untouched.",
"This is also a common scenario in real world, where we often have several frequent classes and many classes with limited data.",
"As for slot filling, however, since the number of mentions of frequent slot labels already exceeds the target shot count, the original slot filling few-shot dataset can be directly used to train existing few-shot learning methods.",
"Therefore, we do not distinguish full and partial few-shot learning for slot filling.",
"Preparing REs We use the syntax of REs in Perl in this work.",
"Our REs are written by a paid annotator who is familiar with the domain.",
"It took the annotator in total less than 10 hours to develop all the REs, while a domain expert can accomplish the task faster.",
"We use the 20-shot training data to develop the REs, but word lists like cities are obtained from the full training set.",
"The development of REs is considered completed when the REs can cover most of the cases in the 20-shot training data with resonable precision.",
"After that, the REs are fixed throughout the experiments.",
"The majority of the time for writing the REs is proportional to the number of RE groups.",
"It took about 1.5 hours to write the 54 intent REs with on average 2.2 groups per RE.",
"It is straightforward to write the slot REs for the input and output level methods, for which it took around 1 hour to write the 60 REs with 1.7 groups on average.",
"By con-trast, writing slot REs to guide attention requires more efforts as the annotator needs to carefully select clue words and annotate the full slot label.",
"As a result, it took about 5.5 hours to generate 115 REs with on average 3.3 groups.",
"The performance of the REs can be found in the last line of Table 1.",
"In practice, a positive RE for intent (or slot) k can often be treated as negative REs for other intents (or slots).",
"As such, we use the positive REs for intent (or slot) k as the negative REs for other intents (or slots) in our experiments.",
"Experimental Setup Hyper-parameters.",
"Our hyper-parameters for the BLSTM are similar to the ones used by Liu and Lane (2016) .",
"Specifically, we use batch size 16, dropout probability 0.5, and BLSTM cell size 100.",
"The attention loss weight is 16 (both positive and negative) for full few-shot learning settings and 1 for other settings.",
"We use the 100d GloVe word vectors (Pennington et al., 2014) pre-trained on Wikipedia and Gigaword (Parker et al., 2011) , and the Adam optimizer (Kingma and Ba, 2014) with learning rate 0.001.",
"Evaluation Metrics.",
"We report accuracy and macro-F1 for intent detection, and micro/macro-F1 for slot filling.",
"Micro/macro-F1 are the harmonic mean of micro/macro precision and recall.",
"Macro-precision/recall are calculated by averaging precision/recall of each label, and microprecision/recall are averaged over each prediction.",
"Competitors and Naming Conventions.",
"Here, a bold Courier typeface like BLSTM denotes the notations of the models that we will compare in Sec.",
"5.",
"Specifically, we compare our methods with the baseline BLSTM model (Sec.",
"3.1).",
"Since our attention loss method (Sec.",
"3.3) uses two-side attention, we include the raw two-side attention model without attention loss (+two) for comparison as well.",
"Besides, we also evaluate the RE output (REO), which uses the REtags as prediction directly, to show the quality of the REs that we will use in the experiments.",
"4 As for our methods for combinging REs with NN, +feat refers to using REtag as input features (Sec.",
"3.2), +posi and +neg refer to using positive and negative attention loss respectively, +both refers to using both postive and negative attention losses (Sec.",
"3.3), and +logit means using REtag to modify NN output (Sec.",
"3.4).",
"Moverover, since the REs can also be formatted as first-order-logic (FOL) rules, we also compare our methods with the teacher-student framework proposed by Hu et al.",
"(2016a) , which is a general framework for distilling knowledge from FOL rules into NN (+hu16).",
"Besides, since we consider few-short learning, we also include the memory module proposed by Kaiser et al.",
"(2017) , which performs well in various few-shot datasets (+mem) 5 .",
"Finally, the state-of-art model on the ATIS dataset is also included (L&L16), which jointly models the intent detection and slot filling in a single network (Liu and Lane, 2016) .",
"Experimental Results Full Few-Shot Learning To answer Q1 , we first explore the full few-shot learning scenario.",
"Intent Detection.",
"As shown in Table 1 , except for 5-shot, all approaches improve the baseline BLSTM.",
"Our network-module-level methods give the best performance because our attention module directly receives signals from the clue words in REs that contain more meaningful information than the REtag itself used by other methods.",
"We also observe that since negative REs are derived from positive REs with some noises, posi performs better than neg when the amount of available data is limited.",
"However, neg is slightly better in 20-shot, possibly because negative REs significantly outnumbers the positive ones.",
"Besides, two alone works better than the BLSTM when there are sufficient data, confirming the advantage of our two-side attention architecture.",
"As for other proposed methods, the output level method (logit) works generally better than the input level method (feat), except for the 5-shot case.",
"We believe this is due to the fewer number of RE related parameters and the shorter distance that the gradient needs to travel from the loss to these parameters -both make logit easier to train.",
"However, since logit directly modifies the output, the final prediction is more sensitive to the insufficiently trained weights in logit, leading to the inferior results in the 5-shot setting.",
"Model Type Model Name 90 / 74.47 68.69 / 84.66 72.43 / 85.78 59.59 / 83.47 73.62 / 89.28 78.94 / 92.21 +two+neg 49.01 / 68.31 64.67 / 79.17 72.32 / 86.34 59.51 / 83.23 72.92 / 89.11 78.83 / 92.07 +two+both 54.86 / 75.36 71.23 / 85.44 75.58 / 88.80 59.47 / 83.35 73.55 / 89.54 To compare with existing methods of combining NN and rules, we also implement the teacherstudent network (Hu et al., 2016a) .",
"This method lets the NN learn from the posterior label distribution produced by FOL rules in a teacher-student framework, but requires considerable amounts of data.",
"Therefore, although both hu16 and logit operate at the output level, logit still performs better than hu16 in these few-shot settings, since logit is easier to train.",
"It can also be seen that starting from 10-shot, two+both significantly outperforms pure REO.",
"This suggests that by using our attention loss to connect the distributional representation of the NN and the clue words of REs, we can generalize RE patterns within a NN architecture by using a small amount of annotated data.",
"Slot Filling.",
"Different from intent detection, as shown in Table 1 , our attention loss does not work for slot filling.",
"The reason is that the slot label of a target word (the word for which we are trying to predict a slot label) is decided mainly by the semantic meaning of the word itself, together with 0-3 phrases in the context to provide supplementary information.",
"However, our attention mechanism can only help in recognizing clue words in the context, which is less important than the word itself and have already been captured by the BLSTM, to some extent.",
"Therefore, the attention loss and the attention related parameters are more of a burden than a benefit.",
"As is shown in Fig.",
"1 , the model recognizes Boston as fromloc.city mainly because Boston itself is a city, and its context word from may have already been captured by the BLSTM and our attention mechanism does not help much.",
"By examining the attention values of +two trained on the full dataset, we find that instead of mark-ing informative context words, the attention tends to concentrate on the target word itself.",
"This observation further reinforces our hypothesis on the attention loss.",
"On the other hand, since the REtags provide extra information, such as type, about words in the sentence, logit and feat generally work better.",
"However, different from intent detection, feat only outperforms logit by a margin.",
"This is because feat can use the REtags of all words to generate better context representations through the NN, while logit can only utilize the REtag of the target word before the final output layer.",
"As a result, feat actually gathers more information from REs and can make better use of them than logit.",
"Again, hu16 is still outperformed by logit, possibly due to the insufficient data support in this few-shot scenario.",
"We also see that even the BLSTM outperforms REO in 5-shot, indicating while it is hard to write high-quality RE patterns, using REs to boost NNs is still feasible.",
"Summary.",
"The amount of extra information that a NN can utilize from the combined REs significantly affects the resulting performance.",
"Thus, the attention loss methods work best for intent detection and feat works best for slot filling.",
"We also see that the improvements from REs decreases as having more training data.",
"This is not surprising because the implicit knowledge embedded in the REs are likely to have already been captured by a sufficient large annotated dataset and in this scenario using the REs will bring in fewer benefits.",
"Partial Few-Shot Learning To better understand the relationship between our approach and existing few-shot learning methods, we also implement the memory network method Table 3 : Results on Full Dataset.",
"The left side of '/' applies for intent, and the right side for slot.",
"(Kaiser et al., 2017) which achieves good results in various few-shot datasets.",
"We adapt their opensource code, and add their memory module (mem) to our BLSTM model.",
"Since the memory module requires to be trained on either many few-shot classes or several classes with extra data, we expand our full few-shot dataset for intent detection, so that the top 3 intent labels have 300 sentences (partial few-shot).",
"As shown in Table 2 , mem works better than BLSTM, and our attention loss can be further combined with the memory module (mem+posi), with even better performance.",
"hu16 also works here, but worse than two+both.",
"Note that, the memory module requires the input sentence to have only one embedding, thus we only use one set of positive attention for combination.",
"As for slot filling, since we already have extra data for frequent tags in the original few-shot data (see Sec.",
"4.1), we use them directly to run the memory module.",
"As shown in the bottom of Table 1 , mem also improves the base BLSTM, and gains further boost when it is combined with feat 6 .",
"Full Dataset To answer Q2, we also evaluate our methods on the full dataset.",
"As seen in Table 3 , for intent detection, while two+both still works, feat and logit no longer give improvements.",
"This shows 6 For compactness, we only combine the best method in each task with mem, but others can also be combined.",
"that since both REtag and annotated data provide intent labels for the input sentence, the value of the extra noisy tag from RE become limited as we have more annotated data.",
"However, as there is no guidance on attention in the annotations, the clue words from REs are still useful.",
"Further, since feat concatenates REtags at the input level, the powerful NN makes it more likely to overfit than logit, therefore feat performs even worse when compared to the BLSTM.",
"As for slot filling, introducing feat and logit can still bring further improvements.",
"This shows that the word type information contained in the REtags is still hard to be fully learned even when we have more annotated data.",
"Moreover, different from few-shot settings, two+both has a better macro-F1 score than the BLSTM for this task, suggesting that better attention is still useful when the base model is properly trained.",
"Again, hu16 outperforms the BLSTM in both tasks, showing that although the REtags are noisy, their teacher-student network can still distill useful information.",
"However, hu16 is a general framework to combine FOL rules, which is more indirect in transferring knowledge from rules to NN than our methods.",
"Therefore, it is still inferior to attention loss in intent detection and feat in slot filling, which are designed to combine REs.",
"Further, mem generally works in this setting, and can receive further improvement by combining our fusion methods.",
"We can also see that two+both works clearly better than the stateof-art method (L&L16) in intent detection, which jointly models the two tasks.",
"And mem+feat is comparative to L&L16 in slot filling.",
"Impact of the RE Complexity We now discuss how the RE complexity affects the performance of the combination.",
"We choose to control the RE complexity by modifying the number of groups.",
"Specifically, we reduce the number of groups for existing REs to decrease RE complexity.",
"To mimic the process of writing simple REs from scratch, we try our best to keep the key RE groups.",
"For intent detection, all the REs are reduced to at most 2 groups.",
"As for slot filling, we also reduce the REs to at most 2 groups, and for some simples case, we further reduce them into word-list patterns, e.g., ( CITY).",
"As shown in Table 4 , the simple REs already deliver clear improvements to the base NN models, which shows the effectiveness of our methods, and indicates that simple REs are quite costefficient since these simple REs only contain 1-2 RE groups and thus very easy to produce.",
"We can also see that using complex REs generally leads to better results compared to using simple REs.",
"This indicates that when considering using REs to improve a NN model, we can start with simple REs, and gradually increase the RE complexity to improve the performance over time 7 .",
"Related Work Our work builds upon the following techniques, while qualitatively differing from each NN with Rules.",
"On the initialization side, uses important n-grams to initialize the convolution filters.",
"On the input side, Wang et al.",
"(2017a) uses knowledge base rules to find relevant concepts for short texts to augment input.",
"On the output side, Hu et al.",
"(2016a; 2016b) and Guo et al.",
"(2017) use FOL rules to rectify the output probability of NN, and then let NN learn from the rectified distribution in a teacher-student framework.",
"Xiao et al.",
"(2017) , on the other hand, modifies the decoding score of NN by multiplying a weight derived from rules.",
"On the loss function side, people modify the loss function to model the relationship between premise and conclusion (Demeester et al., 2016) , and fit both human-annotated and rule-annotated labels (Alashkar et al., 2017) .",
"Since fusing in initialization or in loss function often require special properties of the task, these approaches are not applicable to our problem.",
"Our work thus offers new ways to exploit RE rules at different levels of a NN.",
"NNs and REs.",
"As for NNs and REs, previous work has tried to use RE to speed up the decoding phase of a NN (Strauß et al., 2016) and generating REs from natural language specifications of the 7 We do not include results of both for slot filling since its REs are different from feat and logit, and we have already shown that the attention loss method does not work for slot filling.",
"RE (Locascio et al., 2016) .",
"By contrast, our work aims to use REs to improve the prediction ability of a NN.",
"Few-Shot Learning.",
"Prior work either considers few-shot learning in a metric learning framework (Koch et al., 2015; Vinyals et al., 2016) , or stores instances in a memory (Santoro et al., 2016; Kaiser et al., 2017) to match similar instances in the future.",
"Wang et al.",
"(2017b) further uses the semantic meaning of the class name itself to provide extra information for few-shot learning.",
"Unlike these previous studies, we seek to use the humangenerated REs to provide additional information.",
"Natural Language Understanding.",
"Recurrent neural networks are proven to be effective in both intent detection (Ravuri and Stoicke, 2015) and slot filling (Mesnil et al., 2015) .",
"Researchers also find ways to jointly model the two tasks (Liu and Lane, 2016; Zhang and Wang, 2016) .",
"However, no work so far has combined REs and NNs to improve intent detection and slot filling.",
"Conclusions In this paper, we investigate different ways to combine NNs and REs for solving typical SLU tasks.",
"Our experiments demonstrate that the combination clearly improves the NN performance in both the few-shot learning and the full dataset settings.",
"We show that by exploiting the implicit knowledge encoded within REs, one can significantly improve the learning performance.",
"Specifically, we observe that using REs to guide the attention module works best for intent detection, and using REtags as features is an effective approach for slot filling.",
"We provide interesting insights on how REs of various forms can be employed to improve NNs, showing that while simple REs are very cost-effective, complex REs generally yield better results."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"5.1",
"5.2",
"5.3",
"5.4",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Typesetting",
"Problem Definition",
"The Use of Regular Expressions",
"Our Approach",
"Base Models",
"Using REs at the Input Level",
"Using REs at the Network Module Level",
"Using REs at the Output Level",
"Evaluation Methodology",
"Datasets",
"Preparing REs",
"Experimental Setup",
"Full Few-Shot Learning",
"Partial Few-Shot Learning",
"Full Dataset",
"Impact of the RE Complexity",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-18#paper-1009#slide-7 | Few Shot Learning Experiment | Using clue words to guide attention performs best for intent detection
Using RE output as feature performs best for slot filling | Using clue words to guide attention performs best for intent detection
Using RE output as feature performs best for slot filling | [] |
GEM-SciDuet-train-18#paper-1009#slide-8 | 1009 | Marrying Up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding | The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data. In this paper, we ask the question: "Can we combine a neural network (NN) with regular expressions (RE) to improve supervised learning for NLP?". In answer, we develop novel methods to exploit the rich expressiveness of REs at different levels within a NN, showing that the combination significantly enhances the learning effectiveness when a small number of training examples are available. We evaluate our approach by applying it to spoken language understanding for intent detection and slot filling. Experimental results show that our approach is highly effective in exploiting the available training data, giving a clear boost to the RE-unaware NN. [Figure 1 residue omitted: example sentence "flights from Boston to Miami" with intent RE /^flights? from/ (REtag: flight) and slot RE /from (__CITY) to (__CITY)/ (REtags: fromloc.city, toloc.city).] | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294
],
"paper_content_text": [
"Introduction Regular expressions (REs) are widely used in various natural language processing (NLP) tasks like pattern matching, sentence classification, sequence labeling, etc.",
"(Chang and Manning, 2014) .",
"As a technique based on human-crafted rules, it is concise, interpretable, tunable, and does not rely on much training data to generate.",
"As such, it is commonly used in industry, especially when the available training examples are limited -a problem known as few-shot learning (GC et al., 2015) .",
"While powerful, REs have a poor generalization ability because all synonyms and variations in a RE must be explicitly specified.",
"As a result, REs are often ensembled with data-driven methods, such as neural network (NN) based techniques, where a set of carefully-written REs are used to handle certain cases with high precision, leaving the rest for data-driven methods.",
"We believe the use of REs can go beyond simple pattern matching.",
"In addition to being a separate classifier to be ensembled, a RE also encodes a developer's knowledge for the problem domain.",
"The knowledge could be, for example, the informative words (clue words) within a RE's surface form.",
"We argue that such information can be utilized by data-driven methods to achieve better prediction results, especially in few-shot learning.",
"This work investigates the use of REs to improve NNs -a learning framework that is widely used in many NLP tasks (Goldberg, 2017) .",
"The combination of REs and a NN allows us to exploit the conciseness and effectiveness of REs and the strong generalization ability of NNs.",
"This also provides us an opportunity to learn from various kinds of REs, since NNs are known to be good at tolerating noises (Xie et al., 2016) .",
"This paper presents novel approaches to combine REs with a NN at different levels.",
"At the input layer, we propose to use the evaluation outcome of REs as the input features of a NN (Sec.3.2).",
"At the network module level, we show how to exploit the knowledge encoded in REs to guide the attention mechanism of a NN (Sec.",
"3.3).",
"At the output layer, we combine the evaluation outcome of a RE with the NN output in a learnable manner (Sec.",
"3.4) .",
"We evaluate our approach by applying it to two spoken language understanding (SLU) tasks, namely intent detection and slot filling, which respectively correspond to two fundamental NLP tasks: sentence classification and sequence labeling.",
"To demonstrate the usefulness of REs in realworld scenarios where the available number of annotated data can vary, we explore both the fewshot learning setting and the one with full training data.",
"Experimental results show that our approach is highly effective in utilizing the available Figure 1 : A sentence from the ATIS dataset.",
"REs can be used to detect the intent and label slots.",
"annotated data, yielding significantly better learning performance over the RE-unaware method.",
"Our contributions are as follows.",
"(1) We present the first work to systematically investigate methods for combining REs with NNs.",
"(2) The proposed methods are shown to clearly improve the NN performance in both the few-shot learning and the full annotation settings.",
"(3) We provide a set of guidance on how to combine REs with NNs and RE annotation.",
"Background Typesetting In this paper, we use italic for emphasis like intent detection, the Courier typeface for abbreviations like RE, bold italic for the first appearance of a concept like clue words, Courier surrounded by / for regular expressions like /list( the)?",
"AIRLINE/, and underlined italic for words of sentences in our dataset like Boston.",
"Problem Definition Our work targets two SLU tasks: intent detection and slot filling.",
"The former is a sentence classification task where we learn a function to map an input sentence of n words, x = [x 1 , ..., x n ], to a corresponding intent label, c. The latter is a sequence labeling task for which we learn a function to take in an input query sentence of n words, x = [x 1 , ..., x n ], to produce a corresponding labeling sequence, y = [y 1 , ..., y n ], where y i is the slot label of the corresponding word, x i .",
"Take the sentence in Fig.",
"1 as an example.",
"A successful intent detector would suggest the intent of the sentence as flight, i.e., querying about flight-related information.",
"A slot filler, on the other hand, should identify the slots fromloc.city and toloc.city by labeling Boston and Miami, respectively, using the begin-inside-outside (BIO) scheme.",
"The Use of Regular Expressions In this work, a RE defines a mapping from a text pattern to several REtags which are the same as or related to the target labels (i.e., intent and slot labels).",
"A search function takes in a RE, applies it to all sentences, and returns any texts that match the pattern.",
"We then assign the REtag (s) (that are associated with the matching RE) to either the matched sentence (for intent detection) or some matched phrases (for slot filling).",
"Specifically, our REtags for intent detection are the same as the intent labels.",
"For example, in Fig.",
"1 , we get a REtag of flight that is the same as the intent label flight.",
"For slot filling, we use two different sets of REs.",
"Given the group functionality of RE, we can assign REtags to our interested RE groups (i.e., the expressions defined inside parentheses).",
"The translation from REtags to slot labels depends on how the corresponding REs are used.",
"(1) When REs are used at the network module level (Sec.",
"3.3), the corresponding REtags are the same as the target slot labels.",
"For instance, the slot RE in Fig.",
"1 will assign fromloc.city to the first RE group and toloc.city to the second one.",
"Here, CITY is a list of city names, which can be replaced with a RE string like /Boston|Miami|LA|.../.",
"(2) If REs are used in the input (Sec.",
"3.2) and the output layers (Sec.",
"3.4) of a NN, the corresponding REtag would be different from the target slot labels.",
"In this context, the two RE groups in Fig.",
"1 would be simply tagged as city to capture the commonality of three related target slot labels: fromloc.city, toloc.city, stoploc.city.",
"Note that we could use the target slot labels as REtags for all the settings.",
"The purpose of abstracting REtags to a simplified version of the target slot labels here is to show that REs can still be useful when their evaluation outcome does not exactly match our learning objective.",
"Further, as shown in Sec.",
"4.2, using simplified REtags can also make the development of REs easier in our tasks.",
"Intuitively, complicated REs can lead to better performance but require more efforts to generate.",
"Generally, there are two aspects affecting RE complexity most: the number of RE groups 1 and or clauses (i.e., expressions separated by the disjunction operator |) in a RE group.",
"Having a larger number of RE groups often leads to better 1 When discussing complexity, we consider each semantically independent consecutive word sequence as a RE group (excluding clauses, such as \\w+, that can match any word).",
"For instance, the RE: /how long( \\w+){1,2}?",
"(it take|flight)/ has two RE groups: (how long) and (it take|flight).",
"precision but lower coverage on pattern matching, while a larger number of or clauses usually gives a higher coverage but slightly lower precision.",
"Our Approach As depicted in Fig.",
"2 , we propose to combine NNs and REs from three different angles.",
"Base Models We use the Bi-directional LSTM (BLSTM) as our base NN model because it is effective in both intent detection and slot filling (Liu and Lane, 2016) .",
"Intent Detection.",
"As shown in Fig.",
"2 , the BLSTM takes as input the word embeddings [x 1 , ..., x n ] of a n-word sentence, and produces a vector h i for each word i.",
"A self-attention layer then takes in the vectors produced by the BLSTM to compute the sentence embedding s: s = i α i h i , α i = exp(h i Wc) i exp(h i Wc) (1) where α i is the attention for word i, c is a randomly initialized trainable vector used to select informative words for classification, and W is a weight matrix.",
"Finally, s is fed to a softmax classifier for intent classification.",
"Slot Filling.",
"The model for slot filling is straightforward -the slot label prediction is generated by a softmax classier which takes in the BLSTM's output h i and produces the slot label of word i.",
"Note that attention aggregation in Fig.",
"2 is only employed by the network module level method presented in Sec.",
"3.3.",
"Using REs at the Input Level At the input level, we use the evaluation outcomes of REs as features which are fed to NN models.",
"Intent Detection.",
"Our REtag for intent detection is the same as our target intent label.",
"Because real-world REs are unlikely to be perfect, one sentence may be matched by more than one RE.",
"This may result in several REtags that are conflict with each other.",
"For instance, the sentence list the Delta airlines flights to Miami can match a RE: /list( the)?",
"AIRLINE/ that outputs tag airline, and another RE: /list( \\w+){0,3} flights?/ that outputs tag flight.",
"To resolve the conflicting situations illustrated above, we average the randomly initialized trainable tag embeddings to form an aggregated embedding as the NN input.",
"There are two ways to use the aggregated embedding.",
"We can append the aggregated embedding to either the embedding of every input word, or the input of the softmax classifier (see 1 in Fig.",
"2(a) ).",
"To determine which strategy works best, we perform a pilot study.",
"We found that the first method causes the tag embedding to be copied many times; consequently, the NN tends to heavily rely on the REtags, and the resulting performance is similar to the one given by using REs alone in few-shot settings.",
"Thus, we adopt the second approach.",
"Slot Filling.",
"Since the evaluation outcomes of slot REs are word-level tags, we can simply embed and average the REtags into a vector f i for each word, and append it to the corresponding word embedding w i (as shown in 1 in Fig.",
"2(b) ).",
"Note that we also extend the slot REtags into the BIO format, e.g., the REtags of phrase New York are B-city and I-city if its original tag is city.",
"Using REs at the Network Module Level At the network module level, we explore ways to utilize the clue words in the surface form of a RE (bold blue arrows and words in 2 of Fig.",
"2 ) to guide the attention module in NNs.",
"Intent Detection.",
"Taking the sentence in Fig.",
"1 for example, the RE: /ˆflights?",
"from/ that leads to intent flight means that flights from are the key words to decide the intent flight.",
"Therefore, the attention module in NNs should leverage these two words to get the correct prediction.",
"To this end, we extend the base intent model by making two changes to incorporate the guidance from REs.",
"First, since each intent has its own clue words, using a single sentence embedding for all intent labels would make the attention less focused.",
"Therefore, we let each intent label k use different attention a k , which is then used to generate the sentence embedding s k for that intent: s k = i α ki h i , α ki = exp(h i W a c k ) i exp(h i W a c k ) (2) where c k is a trainable vector for intent k which is used to compute attention a k , h i is the BLSTM output for word i, and W a is a weight matrix.",
"The probability p k that the input sentence expresses intent k is computed by: where w k , logit k , b k are weight vector, logit, and bias for intent k, respectively.",
"p k = exp(logit k ) k exp(logit k ) , logit k = w k s k + b k (3) x 1 x 2 h 1 h 2 x 3 h Second, apart from indicating a sentence for intent k (positive REs), a RE can also indicate that a sentence does not express intent k (negative REs).",
"We thus use a new set of attention (negative attentions, in contrast to positive attentions), to compute another set of logits for each intent with Eqs.",
"2 and 3.",
"We denote the logits computed by positive attentions as logit pk , and those by negative attentions as logit nk , the final logit for intent k can then be calculated as: logit k = logit pk − logit nk (4) To use REs to guide attention, we add an attention loss to the final loss: loss att = k i t ki log(α ki ) (5) where t ki is set to 0 when none of the matched REs (that leads to intent k) marks word i as a clue word -otherwise t ki is set to 1/l k , where l k is the number of clue words for intent k (if no matched RE leads to intent k, then t k * = 0).",
"We use Eq.",
"5 to compute the positive attention loss, loss att p , for positive REs and negative attention loss, loss att n , for negative ones.",
"The final loss is computed as: loss = loss c + β p loss att p + β n loss att n (6) where loss c is the original classification loss, β p and β n are weights for the two attention losses.",
"Slot Filling.",
"The two-side attention (positive and negative attention) mechanism introduced for intent prediction is unsuitable for slot filling.",
"Because for slot filling, we need to compute attention for each word, which demands more compu-tational and memory resources than doing that for intent detection 2 .",
"Because of the aforementioned reason, we use a simplified version of the two-side attention, where all the slot labels share the same set of positive and negative attention.",
"Specifically, to predict the slot label of word i, we use the following equations, which are similar to Eq.",
"1, to generate a sentence embedding s pi with regard to word i from positive attention: s pi = j α pij h j , α pij = exp(h j W sp h i ) j exp(h j W sp h i ) (7) where h i and h j are the BLSTM outputs for word i and j respectively, W sp is a weight matrix, and α pij is the positive attention value for word j with respect to word i.",
"Further, by replacing W sp with W sn , we use Eq.",
"7 again to compute negative attention and generate the corresponding sentence embedding s ni .",
"Finally, the prediction p i for word i can be calculated as: p i = softmax((W p [s pi ; h i ] + b p ) −(W n [s ni ; h i ] + b n )) (8) where W p , W n , b p , b n are weight matrices and bias vectors for positive and negative attention, respectively.",
"Here we append the BLSTM output h i to s pi and s ni because the word i itself also plays a crucial part in identifying its slot label.",
"Using REs at the Output Level At the output level, REs are used to amend the output of NNs.",
"At this level, we take the same approach used for intent detection and slot filling (see 3 in Fig.",
"2 ).",
"As mentioned in Sec.",
"2.3, the slot REs used in the output level only produce a simplified version of target slot labels, for which we can further annotate their corresponding target slot labels.",
"For instance, a RE that outputs city can lead to three slot labels: fromloc.city, toloc.city, stoploc.city.",
"Let z k be a 0-1 indicator of whether there is at least one matched RE that leads to target label k (intent or slot label), the final logits of label k for a sentence (or a specific word for slot filling) is: logit k = logit k + w k z k (9) where logit k is the logit produced by the original NN, and w k is a trainable weight indicating the overall confidence for REs that lead to target label k. Here we do not assign a trainable weight for each RE because it is often that only a few sentences match a RE.",
"We modify the logit instead of the final probability because a logit is an unconstrained real value, which matches the property of w k z k better than probability.",
"Actually, when performing model ensemble, ensembling with logits is often empirically better than with the final probability 3 .",
"This is also the reason why we choose to operate on logits in Sec.",
"3.3.",
"Evaluation Methodology Our experiments aim to answer three questions: Q1: Does the use of REs enhance the learning quality when the number of annotated instances is small?",
"Q2: Does the use of REs still help when using the full training data?",
"Q3: How can we choose from different combination methods?",
"Datasets We use the ATIS dataset (Hemphill et al., 1990) to evaluate our approach.",
"This dataset is widely used in SLU research.",
"It includes queries of flights, meal, etc.",
"We follow the setup of Liu and Lane (2016) by using 4,978 queries for training and 893 for testing, with 18 intent labels and 127 slot labels.",
"We also split words like Miami's into Miami 's during the tokenization phase to reduce the number of words that do not have a pre-trained word embedding.",
"This strategy is useful for fewshot learning.",
"To answer Q1 , we also exploit the full few-shot learning setting.",
"Specifically, for intent detection, we randomly select 5, 10, 20 training instances for each intent to form the few-shot training set; and for slot filling, we also explore 5, 10, 20 shots settings.",
"However, since a sentence typically contains multiple slots, the number of mentions of frequent slot labels may inevitably exceeds the target shot count.",
"To better approximate the target shot count, we select sentences for each slot label in ascending order of label frequencies.",
"That is k 1 -shot dataset will contain k 2 -shot dataset if k 1 > k 2 .",
"All settings use the original test set.",
"Since most existing few-shot learning methods require either many few-shot classes or some classes with enough data for training, we also explore the partial few-shot learning setting for intent detection to provide a fair comparison for existing few-shot learning methods.",
"Specifically, we let the 3 most frequent intents have 300 training instances, and the rest remains untouched.",
"This is also a common scenario in real world, where we often have several frequent classes and many classes with limited data.",
"As for slot filling, however, since the number of mentions of frequent slot labels already exceeds the target shot count, the original slot filling few-shot dataset can be directly used to train existing few-shot learning methods.",
"Therefore, we do not distinguish full and partial few-shot learning for slot filling.",
"Preparing REs We use the syntax of REs in Perl in this work.",
"Our REs are written by a paid annotator who is familiar with the domain.",
"It took the annotator in total less than 10 hours to develop all the REs, while a domain expert can accomplish the task faster.",
"We use the 20-shot training data to develop the REs, but word lists like cities are obtained from the full training set.",
"The development of REs is considered completed when the REs can cover most of the cases in the 20-shot training data with resonable precision.",
"After that, the REs are fixed throughout the experiments.",
"The majority of the time for writing the REs is proportional to the number of RE groups.",
"It took about 1.5 hours to write the 54 intent REs with on average 2.2 groups per RE.",
"It is straightforward to write the slot REs for the input and output level methods, for which it took around 1 hour to write the 60 REs with 1.7 groups on average.",
"By con-trast, writing slot REs to guide attention requires more efforts as the annotator needs to carefully select clue words and annotate the full slot label.",
"As a result, it took about 5.5 hours to generate 115 REs with on average 3.3 groups.",
"The performance of the REs can be found in the last line of Table 1.",
"In practice, a positive RE for intent (or slot) k can often be treated as negative REs for other intents (or slots).",
"As such, we use the positive REs for intent (or slot) k as the negative REs for other intents (or slots) in our experiments.",
"Experimental Setup Hyper-parameters.",
"Our hyper-parameters for the BLSTM are similar to the ones used by Liu and Lane (2016) .",
"Specifically, we use batch size 16, dropout probability 0.5, and BLSTM cell size 100.",
"The attention loss weight is 16 (both positive and negative) for full few-shot learning settings and 1 for other settings.",
"We use the 100d GloVe word vectors (Pennington et al., 2014) pre-trained on Wikipedia and Gigaword (Parker et al., 2011) , and the Adam optimizer (Kingma and Ba, 2014) with learning rate 0.001.",
"Evaluation Metrics.",
"We report accuracy and macro-F1 for intent detection, and micro/macro-F1 for slot filling.",
"Micro/macro-F1 are the harmonic mean of micro/macro precision and recall.",
"Macro-precision/recall are calculated by averaging precision/recall of each label, and microprecision/recall are averaged over each prediction.",
"Competitors and Naming Conventions.",
"Here, a bold Courier typeface like BLSTM denotes the notations of the models that we will compare in Sec.",
"5.",
"Specifically, we compare our methods with the baseline BLSTM model (Sec.",
"3.1).",
"Since our attention loss method (Sec.",
"3.3) uses two-side attention, we include the raw two-side attention model without attention loss (+two) for comparison as well.",
"Besides, we also evaluate the RE output (REO), which uses the REtags as prediction directly, to show the quality of the REs that we will use in the experiments.",
"4 As for our methods for combinging REs with NN, +feat refers to using REtag as input features (Sec.",
"3.2), +posi and +neg refer to using positive and negative attention loss respectively, +both refers to using both postive and negative attention losses (Sec.",
"3.3), and +logit means using REtag to modify NN output (Sec.",
"3.4).",
"Moverover, since the REs can also be formatted as first-order-logic (FOL) rules, we also compare our methods with the teacher-student framework proposed by Hu et al.",
"(2016a) , which is a general framework for distilling knowledge from FOL rules into NN (+hu16).",
"Besides, since we consider few-short learning, we also include the memory module proposed by Kaiser et al.",
"(2017) , which performs well in various few-shot datasets (+mem) 5 .",
"Finally, the state-of-art model on the ATIS dataset is also included (L&L16), which jointly models the intent detection and slot filling in a single network (Liu and Lane, 2016) .",
"Experimental Results Full Few-Shot Learning To answer Q1 , we first explore the full few-shot learning scenario.",
"Intent Detection.",
"As shown in Table 1 , except for 5-shot, all approaches improve the baseline BLSTM.",
"Our network-module-level methods give the best performance because our attention module directly receives signals from the clue words in REs that contain more meaningful information than the REtag itself used by other methods.",
"We also observe that since negative REs are derived from positive REs with some noises, posi performs better than neg when the amount of available data is limited.",
"However, neg is slightly better in 20-shot, possibly because negative REs significantly outnumbers the positive ones.",
"Besides, two alone works better than the BLSTM when there are sufficient data, confirming the advantage of our two-side attention architecture.",
"As for other proposed methods, the output level method (logit) works generally better than the input level method (feat), except for the 5-shot case.",
"We believe this is due to the fewer number of RE related parameters and the shorter distance that the gradient needs to travel from the loss to these parameters -both make logit easier to train.",
"However, since logit directly modifies the output, the final prediction is more sensitive to the insufficiently trained weights in logit, leading to the inferior results in the 5-shot setting.",
"Model Type Model Name 90 / 74.47 68.69 / 84.66 72.43 / 85.78 59.59 / 83.47 73.62 / 89.28 78.94 / 92.21 +two+neg 49.01 / 68.31 64.67 / 79.17 72.32 / 86.34 59.51 / 83.23 72.92 / 89.11 78.83 / 92.07 +two+both 54.86 / 75.36 71.23 / 85.44 75.58 / 88.80 59.47 / 83.35 73.55 / 89.54 To compare with existing methods of combining NN and rules, we also implement the teacherstudent network (Hu et al., 2016a) .",
"This method lets the NN learn from the posterior label distribution produced by FOL rules in a teacher-student framework, but requires considerable amounts of data.",
"Therefore, although both hu16 and logit operate at the output level, logit still performs better than hu16 in these few-shot settings, since logit is easier to train.",
"It can also be seen that starting from 10-shot, two+both significantly outperforms pure REO.",
"This suggests that by using our attention loss to connect the distributional representation of the NN and the clue words of REs, we can generalize RE patterns within a NN architecture by using a small amount of annotated data.",
"Slot Filling.",
"Different from intent detection, as shown in Table 1 , our attention loss does not work for slot filling.",
"The reason is that the slot label of a target word (the word for which we are trying to predict a slot label) is decided mainly by the semantic meaning of the word itself, together with 0-3 phrases in the context to provide supplementary information.",
"However, our attention mechanism can only help in recognizing clue words in the context, which is less important than the word itself and have already been captured by the BLSTM, to some extent.",
"Therefore, the attention loss and the attention related parameters are more of a burden than a benefit.",
"As is shown in Fig.",
"1 , the model recognizes Boston as fromloc.city mainly because Boston itself is a city, and its context word from may have already been captured by the BLSTM and our attention mechanism does not help much.",
"By examining the attention values of +two trained on the full dataset, we find that instead of mark-ing informative context words, the attention tends to concentrate on the target word itself.",
"This observation further reinforces our hypothesis on the attention loss.",
"On the other hand, since the REtags provide extra information, such as type, about words in the sentence, logit and feat generally work better.",
"However, different from intent detection, feat only outperforms logit by a margin.",
"This is because feat can use the REtags of all words to generate better context representations through the NN, while logit can only utilize the REtag of the target word before the final output layer.",
"As a result, feat actually gathers more information from REs and can make better use of them than logit.",
"Again, hu16 is still outperformed by logit, possibly due to the insufficient data support in this few-shot scenario.",
"We also see that even the BLSTM outperforms REO in 5-shot, indicating while it is hard to write high-quality RE patterns, using REs to boost NNs is still feasible.",
"Summary.",
"The amount of extra information that a NN can utilize from the combined REs significantly affects the resulting performance.",
"Thus, the attention loss methods work best for intent detection and feat works best for slot filling.",
"We also see that the improvements from REs decreases as having more training data.",
"This is not surprising because the implicit knowledge embedded in the REs are likely to have already been captured by a sufficient large annotated dataset and in this scenario using the REs will bring in fewer benefits.",
"Partial Few-Shot Learning To better understand the relationship between our approach and existing few-shot learning methods, we also implement the memory network method Table 3 : Results on Full Dataset.",
"The left side of '/' applies for intent, and the right side for slot.",
"(Kaiser et al., 2017) which achieves good results in various few-shot datasets.",
"We adapt their opensource code, and add their memory module (mem) to our BLSTM model.",
"Since the memory module requires to be trained on either many few-shot classes or several classes with extra data, we expand our full few-shot dataset for intent detection, so that the top 3 intent labels have 300 sentences (partial few-shot).",
"As shown in Table 2 , mem works better than BLSTM, and our attention loss can be further combined with the memory module (mem+posi), with even better performance.",
"hu16 also works here, but worse than two+both.",
"Note that, the memory module requires the input sentence to have only one embedding, thus we only use one set of positive attention for combination.",
"As for slot filling, since we already have extra data for frequent tags in the original few-shot data (see Sec.",
"4.1), we use them directly to run the memory module.",
"As shown in the bottom of Table 1 , mem also improves the base BLSTM, and gains further boost when it is combined with feat 6 .",
"Full Dataset To answer Q2, we also evaluate our methods on the full dataset.",
"As seen in Table 3 , for intent detection, while two+both still works, feat and logit no longer give improvements.",
"This shows 6 For compactness, we only combine the best method in each task with mem, but others can also be combined.",
"that since both REtag and annotated data provide intent labels for the input sentence, the value of the extra noisy tag from RE become limited as we have more annotated data.",
"However, as there is no guidance on attention in the annotations, the clue words from REs are still useful.",
"Further, since feat concatenates REtags at the input level, the powerful NN makes it more likely to overfit than logit, therefore feat performs even worse when compared to the BLSTM.",
"As for slot filling, introducing feat and logit can still bring further improvements.",
"This shows that the word type information contained in the REtags is still hard to be fully learned even when we have more annotated data.",
"Moreover, different from few-shot settings, two+both has a better macro-F1 score than the BLSTM for this task, suggesting that better attention is still useful when the base model is properly trained.",
"Again, hu16 outperforms the BLSTM in both tasks, showing that although the REtags are noisy, their teacher-student network can still distill useful information.",
"However, hu16 is a general framework to combine FOL rules, which is more indirect in transferring knowledge from rules to NN than our methods.",
"Therefore, it is still inferior to attention loss in intent detection and feat in slot filling, which are designed to combine REs.",
"Further, mem generally works in this setting, and can receive further improvement by combining our fusion methods.",
"We can also see that two+both works clearly better than the stateof-art method (L&L16) in intent detection, which jointly models the two tasks.",
"And mem+feat is comparative to L&L16 in slot filling.",
"Impact of the RE Complexity We now discuss how the RE complexity affects the performance of the combination.",
"We choose to control the RE complexity by modifying the number of groups.",
"Specifically, we reduce the number of groups for existing REs to decrease RE complexity.",
"To mimic the process of writing simple REs from scratch, we try our best to keep the key RE groups.",
"For intent detection, all the REs are reduced to at most 2 groups.",
"As for slot filling, we also reduce the REs to at most 2 groups, and for some simples case, we further reduce them into word-list patterns, e.g., ( CITY).",
"As shown in Table 4 , the simple REs already deliver clear improvements to the base NN models, which shows the effectiveness of our methods, and indicates that simple REs are quite costefficient since these simple REs only contain 1-2 RE groups and thus very easy to produce.",
"We can also see that using complex REs generally leads to better results compared to using simple REs.",
"This indicates that when considering using REs to improve a NN model, we can start with simple REs, and gradually increase the RE complexity to improve the performance over time 7 .",
"Related Work Our work builds upon the following techniques, while qualitatively differing from each NN with Rules.",
"On the initialization side, uses important n-grams to initialize the convolution filters.",
"On the input side, Wang et al.",
"(2017a) uses knowledge base rules to find relevant concepts for short texts to augment input.",
"On the output side, Hu et al.",
"(2016a; 2016b) and Guo et al.",
"(2017) use FOL rules to rectify the output probability of NN, and then let NN learn from the rectified distribution in a teacher-student framework.",
"Xiao et al.",
"(2017) , on the other hand, modifies the decoding score of NN by multiplying a weight derived from rules.",
"On the loss function side, people modify the loss function to model the relationship between premise and conclusion (Demeester et al., 2016) , and fit both human-annotated and rule-annotated labels (Alashkar et al., 2017) .",
"Since fusing in initialization or in loss function often require special properties of the task, these approaches are not applicable to our problem.",
"Our work thus offers new ways to exploit RE rules at different levels of a NN.",
"NNs and REs.",
"As for NNs and REs, previous work has tried to use RE to speed up the decoding phase of a NN (Strauß et al., 2016) and generating REs from natural language specifications of the 7 We do not include results of both for slot filling since its REs are different from feat and logit, and we have already shown that the attention loss method does not work for slot filling.",
"RE (Locascio et al., 2016) .",
"By contrast, our work aims to use REs to improve the prediction ability of a NN.",
"Few-Shot Learning.",
"Prior work either considers few-shot learning in a metric learning framework (Koch et al., 2015; Vinyals et al., 2016) , or stores instances in a memory (Santoro et al., 2016; Kaiser et al., 2017) to match similar instances in the future.",
"Wang et al.",
"(2017b) further uses the semantic meaning of the class name itself to provide extra information for few-shot learning.",
"Unlike these previous studies, we seek to use the humangenerated REs to provide additional information.",
"Natural Language Understanding.",
"Recurrent neural networks are proven to be effective in both intent detection (Ravuri and Stoicke, 2015) and slot filling (Mesnil et al., 2015) .",
"Researchers also find ways to jointly model the two tasks (Liu and Lane, 2016; Zhang and Wang, 2016) .",
"However, no work so far has combined REs and NNs to improve intent detection and slot filling.",
"Conclusions In this paper, we investigate different ways to combine NNs and REs for solving typical SLU tasks.",
"Our experiments demonstrate that the combination clearly improves the NN performance in both the few-shot learning and the full dataset settings.",
"We show that by exploiting the implicit knowledge encoded within REs, one can significantly improve the learning performance.",
"Specifically, we observe that using REs to guide the attention module works best for intent detection, and using REtags as features is an effective approach for slot filling.",
"We provide interesting insights on how REs of various forms can be employed to improve NNs, showing that while simple REs are very cost-effective, complex REs generally yield better results."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"5.1",
"5.2",
"5.3",
"5.4",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Typesetting",
"Problem Definition",
"The Use of Regular Expressions",
"Our Approach",
"Base Models",
"Using REs at the Input Level",
"Using REs at the Network Module Level",
"Using REs at the Output Level",
"Evaluation Methodology",
"Datasets",
"Preparing REs",
"Experimental Setup",
"Full Few-Shot Learning",
"Partial Few-Shot Learning",
"Full Dataset",
"Impact of the RE Complexity",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-18#paper-1009#slide-8 | Full Dataset Experiment | Use all the training data | Use all the training data | [] |
GEM-SciDuet-train-18#paper-1009#slide-9 | 1009 | Marrying Up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding | The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data. In this paper, we ask the question: "Can we combine a neural network (NN) with regular expressions (RE) to improve supervised learning for NLP?". In answer, we develop novel methods to exploit the rich expressiveness of REs at different levels within a NN, showing that the combination significantly enhances the learning effectiveness when a small number of training examples are available. We evaluate our approach by applying it to spoken language understanding for intent detection and slot filling. Experimental results show that our approach is highly effective in exploiting the available training data, giving a clear boost to the RE-unaware NN. flights from Boston to Miami Intent RE: Intent Label: flight /from (__CITY) to (__CITY)/ O O B-fromloc.city O B-toloc.city Sentence: Slot Labels: Slot RE: /^flights? from/ REtag: flight city / toloc.city REtag: city / fromloc.city | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294
],
"paper_content_text": [
"Introduction Regular expressions (REs) are widely used in various natural language processing (NLP) tasks like pattern matching, sentence classification, sequence labeling, etc.",
"(Chang and Manning, 2014) .",
"As a technique based on human-crafted rules, it is concise, interpretable, tunable, and does not rely on much training data to generate.",
"As such, it is commonly used in industry, especially when the available training examples are limited -a problem known as few-shot learning (GC et al., 2015) .",
"While powerful, REs have a poor generalization ability because all synonyms and variations in a RE must be explicitly specified.",
"As a result, REs are often ensembled with data-driven methods, such as neural network (NN) based techniques, where a set of carefully-written REs are used to handle certain cases with high precision, leaving the rest for data-driven methods.",
"We believe the use of REs can go beyond simple pattern matching.",
"In addition to being a separate classifier to be ensembled, a RE also encodes a developer's knowledge for the problem domain.",
"The knowledge could be, for example, the informative words (clue words) within a RE's surface form.",
"We argue that such information can be utilized by data-driven methods to achieve better prediction results, especially in few-shot learning.",
"This work investigates the use of REs to improve NNs -a learning framework that is widely used in many NLP tasks (Goldberg, 2017) .",
"The combination of REs and a NN allows us to exploit the conciseness and effectiveness of REs and the strong generalization ability of NNs.",
"This also provides us an opportunity to learn from various kinds of REs, since NNs are known to be good at tolerating noises (Xie et al., 2016) .",
"This paper presents novel approaches to combine REs with a NN at different levels.",
"At the input layer, we propose to use the evaluation outcome of REs as the input features of a NN (Sec.3.2).",
"At the network module level, we show how to exploit the knowledge encoded in REs to guide the attention mechanism of a NN (Sec.",
"3.3).",
"At the output layer, we combine the evaluation outcome of a RE with the NN output in a learnable manner (Sec.",
"3.4) .",
"We evaluate our approach by applying it to two spoken language understanding (SLU) tasks, namely intent detection and slot filling, which respectively correspond to two fundamental NLP tasks: sentence classification and sequence labeling.",
"To demonstrate the usefulness of REs in realworld scenarios where the available number of annotated data can vary, we explore both the fewshot learning setting and the one with full training data.",
"Experimental results show that our approach is highly effective in utilizing the available Figure 1 : A sentence from the ATIS dataset.",
"REs can be used to detect the intent and label slots.",
"annotated data, yielding significantly better learning performance over the RE-unaware method.",
"Our contributions are as follows.",
"(1) We present the first work to systematically investigate methods for combining REs with NNs.",
"(2) The proposed methods are shown to clearly improve the NN performance in both the few-shot learning and the full annotation settings.",
"(3) We provide a set of guidance on how to combine REs with NNs and RE annotation.",
"Background Typesetting In this paper, we use italic for emphasis like intent detection, the Courier typeface for abbreviations like RE, bold italic for the first appearance of a concept like clue words, Courier surrounded by / for regular expressions like /list( the)?",
"AIRLINE/, and underlined italic for words of sentences in our dataset like Boston.",
"Problem Definition Our work targets two SLU tasks: intent detection and slot filling.",
"The former is a sentence classification task where we learn a function to map an input sentence of n words, x = [x 1 , ..., x n ], to a corresponding intent label, c. The latter is a sequence labeling task for which we learn a function to take in an input query sentence of n words, x = [x 1 , ..., x n ], to produce a corresponding labeling sequence, y = [y 1 , ..., y n ], where y i is the slot label of the corresponding word, x i .",
"Take the sentence in Fig.",
"1 as an example.",
"A successful intent detector would suggest the intent of the sentence as flight, i.e., querying about flight-related information.",
"A slot filler, on the other hand, should identify the slots fromloc.city and toloc.city by labeling Boston and Miami, respectively, using the begin-inside-outside (BIO) scheme.",
"The Use of Regular Expressions In this work, a RE defines a mapping from a text pattern to several REtags which are the same as or related to the target labels (i.e., intent and slot labels).",
"A search function takes in a RE, applies it to all sentences, and returns any texts that match the pattern.",
"We then assign the REtag (s) (that are associated with the matching RE) to either the matched sentence (for intent detection) or some matched phrases (for slot filling).",
"Specifically, our REtags for intent detection are the same as the intent labels.",
"For example, in Fig.",
"1 , we get a REtag of flight that is the same as the intent label flight.",
"For slot filling, we use two different sets of REs.",
"Given the group functionality of RE, we can assign REtags to our interested RE groups (i.e., the expressions defined inside parentheses).",
"The translation from REtags to slot labels depends on how the corresponding REs are used.",
"(1) When REs are used at the network module level (Sec.",
"3.3), the corresponding REtags are the same as the target slot labels.",
"For instance, the slot RE in Fig.",
"1 will assign fromloc.city to the first RE group and toloc.city to the second one.",
"Here, CITY is a list of city names, which can be replaced with a RE string like /Boston|Miami|LA|.../.",
"(2) If REs are used in the input (Sec.",
"3.2) and the output layers (Sec.",
"3.4) of a NN, the corresponding REtag would be different from the target slot labels.",
"In this context, the two RE groups in Fig.",
"1 would be simply tagged as city to capture the commonality of three related target slot labels: fromloc.city, toloc.city, stoploc.city.",
"Note that we could use the target slot labels as REtags for all the settings.",
"The purpose of abstracting REtags to a simplified version of the target slot labels here is to show that REs can still be useful when their evaluation outcome does not exactly match our learning objective.",
"Further, as shown in Sec.",
"4.2, using simplified REtags can also make the development of REs easier in our tasks.",
"Intuitively, complicated REs can lead to better performance but require more efforts to generate.",
"Generally, there are two aspects affecting RE complexity most: the number of RE groups 1 and or clauses (i.e., expressions separated by the disjunction operator |) in a RE group.",
"Having a larger number of RE groups often leads to better 1 When discussing complexity, we consider each semantically independent consecutive word sequence as a RE group (excluding clauses, such as \\w+, that can match any word).",
"For instance, the RE: /how long( \\w+){1,2}?",
"(it take|flight)/ has two RE groups: (how long) and (it take|flight).",
"precision but lower coverage on pattern matching, while a larger number of or clauses usually gives a higher coverage but slightly lower precision.",
"Our Approach As depicted in Fig.",
"2 , we propose to combine NNs and REs from three different angles.",
"Base Models We use the Bi-directional LSTM (BLSTM) as our base NN model because it is effective in both intent detection and slot filling (Liu and Lane, 2016) .",
"Intent Detection.",
"As shown in Fig.",
"2 , the BLSTM takes as input the word embeddings [x 1 , ..., x n ] of a n-word sentence, and produces a vector h i for each word i.",
"A self-attention layer then takes in the vectors produced by the BLSTM to compute the sentence embedding s: s = i α i h i , α i = exp(h i Wc) i exp(h i Wc) (1) where α i is the attention for word i, c is a randomly initialized trainable vector used to select informative words for classification, and W is a weight matrix.",
"Finally, s is fed to a softmax classifier for intent classification.",
"Slot Filling.",
"The model for slot filling is straightforward -the slot label prediction is generated by a softmax classier which takes in the BLSTM's output h i and produces the slot label of word i.",
"Note that attention aggregation in Fig.",
"2 is only employed by the network module level method presented in Sec.",
"3.3.",
"Using REs at the Input Level At the input level, we use the evaluation outcomes of REs as features which are fed to NN models.",
"Intent Detection.",
"Our REtag for intent detection is the same as our target intent label.",
"Because real-world REs are unlikely to be perfect, one sentence may be matched by more than one RE.",
"This may result in several REtags that are conflict with each other.",
"For instance, the sentence list the Delta airlines flights to Miami can match a RE: /list( the)?",
"AIRLINE/ that outputs tag airline, and another RE: /list( \\w+){0,3} flights?/ that outputs tag flight.",
"To resolve the conflicting situations illustrated above, we average the randomly initialized trainable tag embeddings to form an aggregated embedding as the NN input.",
"There are two ways to use the aggregated embedding.",
"We can append the aggregated embedding to either the embedding of every input word, or the input of the softmax classifier (see 1 in Fig.",
"2(a) ).",
"To determine which strategy works best, we perform a pilot study.",
"We found that the first method causes the tag embedding to be copied many times; consequently, the NN tends to heavily rely on the REtags, and the resulting performance is similar to the one given by using REs alone in few-shot settings.",
"Thus, we adopt the second approach.",
"Slot Filling.",
"Since the evaluation outcomes of slot REs are word-level tags, we can simply embed and average the REtags into a vector f i for each word, and append it to the corresponding word embedding w i (as shown in 1 in Fig.",
"2(b) ).",
"Note that we also extend the slot REtags into the BIO format, e.g., the REtags of phrase New York are B-city and I-city if its original tag is city.",
"Using REs at the Network Module Level At the network module level, we explore ways to utilize the clue words in the surface form of a RE (bold blue arrows and words in 2 of Fig.",
"2 ) to guide the attention module in NNs.",
"Intent Detection.",
"Taking the sentence in Fig.",
"1 for example, the RE: /ˆflights?",
"from/ that leads to intent flight means that flights from are the key words to decide the intent flight.",
"Therefore, the attention module in NNs should leverage these two words to get the correct prediction.",
"To this end, we extend the base intent model by making two changes to incorporate the guidance from REs.",
"First, since each intent has its own clue words, using a single sentence embedding for all intent labels would make the attention less focused.",
"Therefore, we let each intent label k use different attention a k , which is then used to generate the sentence embedding s k for that intent: s k = i α ki h i , α ki = exp(h i W a c k ) i exp(h i W a c k ) (2) where c k is a trainable vector for intent k which is used to compute attention a k , h i is the BLSTM output for word i, and W a is a weight matrix.",
"The probability p k that the input sentence expresses intent k is computed by: where w k , logit k , b k are weight vector, logit, and bias for intent k, respectively.",
"p k = exp(logit k ) k exp(logit k ) , logit k = w k s k + b k (3) x 1 x 2 h 1 h 2 x 3 h Second, apart from indicating a sentence for intent k (positive REs), a RE can also indicate that a sentence does not express intent k (negative REs).",
"We thus use a new set of attention (negative attentions, in contrast to positive attentions), to compute another set of logits for each intent with Eqs.",
"2 and 3.",
"We denote the logits computed by positive attentions as logit pk , and those by negative attentions as logit nk , the final logit for intent k can then be calculated as: logit k = logit pk − logit nk (4) To use REs to guide attention, we add an attention loss to the final loss: loss att = k i t ki log(α ki ) (5) where t ki is set to 0 when none of the matched REs (that leads to intent k) marks word i as a clue word -otherwise t ki is set to 1/l k , where l k is the number of clue words for intent k (if no matched RE leads to intent k, then t k * = 0).",
"We use Eq.",
"5 to compute the positive attention loss, loss att p , for positive REs and negative attention loss, loss att n , for negative ones.",
"The final loss is computed as: loss = loss c + β p loss att p + β n loss att n (6) where loss c is the original classification loss, β p and β n are weights for the two attention losses.",
"Slot Filling.",
"The two-side attention (positive and negative attention) mechanism introduced for intent prediction is unsuitable for slot filling.",
"Because for slot filling, we need to compute attention for each word, which demands more compu-tational and memory resources than doing that for intent detection 2 .",
"Because of the aforementioned reason, we use a simplified version of the two-side attention, where all the slot labels share the same set of positive and negative attention.",
"Specifically, to predict the slot label of word i, we use the following equations, which are similar to Eq.",
"1, to generate a sentence embedding s pi with regard to word i from positive attention: s pi = j α pij h j , α pij = exp(h j W sp h i ) j exp(h j W sp h i ) (7) where h i and h j are the BLSTM outputs for word i and j respectively, W sp is a weight matrix, and α pij is the positive attention value for word j with respect to word i.",
"Further, by replacing W sp with W sn , we use Eq.",
"7 again to compute negative attention and generate the corresponding sentence embedding s ni .",
"Finally, the prediction p i for word i can be calculated as: p i = softmax((W p [s pi ; h i ] + b p ) −(W n [s ni ; h i ] + b n )) (8) where W p , W n , b p , b n are weight matrices and bias vectors for positive and negative attention, respectively.",
"Here we append the BLSTM output h i to s pi and s ni because the word i itself also plays a crucial part in identifying its slot label.",
"Using REs at the Output Level At the output level, REs are used to amend the output of NNs.",
"At this level, we take the same approach used for intent detection and slot filling (see 3 in Fig.",
"2 ).",
"As mentioned in Sec.",
"2.3, the slot REs used in the output level only produce a simplified version of target slot labels, for which we can further annotate their corresponding target slot labels.",
"For instance, a RE that outputs city can lead to three slot labels: fromloc.city, toloc.city, stoploc.city.",
"Let z k be a 0-1 indicator of whether there is at least one matched RE that leads to target label k (intent or slot label), the final logits of label k for a sentence (or a specific word for slot filling) is: logit k = logit k + w k z k (9) where logit k is the logit produced by the original NN, and w k is a trainable weight indicating the overall confidence for REs that lead to target label k. Here we do not assign a trainable weight for each RE because it is often that only a few sentences match a RE.",
"We modify the logit instead of the final probability because a logit is an unconstrained real value, which matches the property of w k z k better than probability.",
"Actually, when performing model ensemble, ensembling with logits is often empirically better than with the final probability 3 .",
"This is also the reason why we choose to operate on logits in Sec.",
"3.3.",
"Evaluation Methodology Our experiments aim to answer three questions: Q1: Does the use of REs enhance the learning quality when the number of annotated instances is small?",
"Q2: Does the use of REs still help when using the full training data?",
"Q3: How can we choose from different combination methods?",
"Datasets We use the ATIS dataset (Hemphill et al., 1990) to evaluate our approach.",
"This dataset is widely used in SLU research.",
"It includes queries of flights, meal, etc.",
"We follow the setup of Liu and Lane (2016) by using 4,978 queries for training and 893 for testing, with 18 intent labels and 127 slot labels.",
"We also split words like Miami's into Miami 's during the tokenization phase to reduce the number of words that do not have a pre-trained word embedding.",
"This strategy is useful for fewshot learning.",
"To answer Q1 , we also exploit the full few-shot learning setting.",
"Specifically, for intent detection, we randomly select 5, 10, 20 training instances for each intent to form the few-shot training set; and for slot filling, we also explore 5, 10, 20 shots settings.",
"However, since a sentence typically contains multiple slots, the number of mentions of frequent slot labels may inevitably exceeds the target shot count.",
"To better approximate the target shot count, we select sentences for each slot label in ascending order of label frequencies.",
"That is k 1 -shot dataset will contain k 2 -shot dataset if k 1 > k 2 .",
"All settings use the original test set.",
"Since most existing few-shot learning methods require either many few-shot classes or some classes with enough data for training, we also explore the partial few-shot learning setting for intent detection to provide a fair comparison for existing few-shot learning methods.",
"Specifically, we let the 3 most frequent intents have 300 training instances, and the rest remains untouched.",
"This is also a common scenario in real world, where we often have several frequent classes and many classes with limited data.",
"As for slot filling, however, since the number of mentions of frequent slot labels already exceeds the target shot count, the original slot filling few-shot dataset can be directly used to train existing few-shot learning methods.",
"Therefore, we do not distinguish full and partial few-shot learning for slot filling.",
"Preparing REs We use the syntax of REs in Perl in this work.",
"Our REs are written by a paid annotator who is familiar with the domain.",
"It took the annotator in total less than 10 hours to develop all the REs, while a domain expert can accomplish the task faster.",
"We use the 20-shot training data to develop the REs, but word lists like cities are obtained from the full training set.",
"The development of REs is considered completed when the REs can cover most of the cases in the 20-shot training data with resonable precision.",
"After that, the REs are fixed throughout the experiments.",
"The majority of the time for writing the REs is proportional to the number of RE groups.",
"It took about 1.5 hours to write the 54 intent REs with on average 2.2 groups per RE.",
"It is straightforward to write the slot REs for the input and output level methods, for which it took around 1 hour to write the 60 REs with 1.7 groups on average.",
"By con-trast, writing slot REs to guide attention requires more efforts as the annotator needs to carefully select clue words and annotate the full slot label.",
"As a result, it took about 5.5 hours to generate 115 REs with on average 3.3 groups.",
"The performance of the REs can be found in the last line of Table 1.",
"In practice, a positive RE for intent (or slot) k can often be treated as negative REs for other intents (or slots).",
"As such, we use the positive REs for intent (or slot) k as the negative REs for other intents (or slots) in our experiments.",
"Experimental Setup Hyper-parameters.",
"Our hyper-parameters for the BLSTM are similar to the ones used by Liu and Lane (2016) .",
"Specifically, we use batch size 16, dropout probability 0.5, and BLSTM cell size 100.",
"The attention loss weight is 16 (both positive and negative) for full few-shot learning settings and 1 for other settings.",
"We use the 100d GloVe word vectors (Pennington et al., 2014) pre-trained on Wikipedia and Gigaword (Parker et al., 2011) , and the Adam optimizer (Kingma and Ba, 2014) with learning rate 0.001.",
"Evaluation Metrics.",
"We report accuracy and macro-F1 for intent detection, and micro/macro-F1 for slot filling.",
"Micro/macro-F1 are the harmonic mean of micro/macro precision and recall.",
"Macro-precision/recall are calculated by averaging precision/recall of each label, and microprecision/recall are averaged over each prediction.",
"Competitors and Naming Conventions.",
"Here, a bold Courier typeface like BLSTM denotes the notations of the models that we will compare in Sec.",
"5.",
"Specifically, we compare our methods with the baseline BLSTM model (Sec.",
"3.1).",
"Since our attention loss method (Sec.",
"3.3) uses two-side attention, we include the raw two-side attention model without attention loss (+two) for comparison as well.",
"Besides, we also evaluate the RE output (REO), which uses the REtags as prediction directly, to show the quality of the REs that we will use in the experiments.",
"4 As for our methods for combinging REs with NN, +feat refers to using REtag as input features (Sec.",
"3.2), +posi and +neg refer to using positive and negative attention loss respectively, +both refers to using both postive and negative attention losses (Sec.",
"3.3), and +logit means using REtag to modify NN output (Sec.",
"3.4).",
"Moverover, since the REs can also be formatted as first-order-logic (FOL) rules, we also compare our methods with the teacher-student framework proposed by Hu et al.",
"(2016a) , which is a general framework for distilling knowledge from FOL rules into NN (+hu16).",
"Besides, since we consider few-short learning, we also include the memory module proposed by Kaiser et al.",
"(2017) , which performs well in various few-shot datasets (+mem) 5 .",
"Finally, the state-of-art model on the ATIS dataset is also included (L&L16), which jointly models the intent detection and slot filling in a single network (Liu and Lane, 2016) .",
"Experimental Results Full Few-Shot Learning To answer Q1 , we first explore the full few-shot learning scenario.",
"Intent Detection.",
"As shown in Table 1 , except for 5-shot, all approaches improve the baseline BLSTM.",
"Our network-module-level methods give the best performance because our attention module directly receives signals from the clue words in REs that contain more meaningful information than the REtag itself used by other methods.",
"We also observe that since negative REs are derived from positive REs with some noises, posi performs better than neg when the amount of available data is limited.",
"However, neg is slightly better in 20-shot, possibly because negative REs significantly outnumbers the positive ones.",
"Besides, two alone works better than the BLSTM when there are sufficient data, confirming the advantage of our two-side attention architecture.",
"As for other proposed methods, the output level method (logit) works generally better than the input level method (feat), except for the 5-shot case.",
"We believe this is due to the fewer number of RE related parameters and the shorter distance that the gradient needs to travel from the loss to these parameters -both make logit easier to train.",
"However, since logit directly modifies the output, the final prediction is more sensitive to the insufficiently trained weights in logit, leading to the inferior results in the 5-shot setting.",
"Model Type Model Name 90 / 74.47 68.69 / 84.66 72.43 / 85.78 59.59 / 83.47 73.62 / 89.28 78.94 / 92.21 +two+neg 49.01 / 68.31 64.67 / 79.17 72.32 / 86.34 59.51 / 83.23 72.92 / 89.11 78.83 / 92.07 +two+both 54.86 / 75.36 71.23 / 85.44 75.58 / 88.80 59.47 / 83.35 73.55 / 89.54 To compare with existing methods of combining NN and rules, we also implement the teacherstudent network (Hu et al., 2016a) .",
"This method lets the NN learn from the posterior label distribution produced by FOL rules in a teacher-student framework, but requires considerable amounts of data.",
"Therefore, although both hu16 and logit operate at the output level, logit still performs better than hu16 in these few-shot settings, since logit is easier to train.",
"It can also be seen that starting from 10-shot, two+both significantly outperforms pure REO.",
"This suggests that by using our attention loss to connect the distributional representation of the NN and the clue words of REs, we can generalize RE patterns within a NN architecture by using a small amount of annotated data.",
"Slot Filling.",
"Different from intent detection, as shown in Table 1 , our attention loss does not work for slot filling.",
"The reason is that the slot label of a target word (the word for which we are trying to predict a slot label) is decided mainly by the semantic meaning of the word itself, together with 0-3 phrases in the context to provide supplementary information.",
"However, our attention mechanism can only help in recognizing clue words in the context, which is less important than the word itself and have already been captured by the BLSTM, to some extent.",
"Therefore, the attention loss and the attention related parameters are more of a burden than a benefit.",
"As is shown in Fig.",
"1 , the model recognizes Boston as fromloc.city mainly because Boston itself is a city, and its context word from may have already been captured by the BLSTM and our attention mechanism does not help much.",
"By examining the attention values of +two trained on the full dataset, we find that instead of mark-ing informative context words, the attention tends to concentrate on the target word itself.",
"This observation further reinforces our hypothesis on the attention loss.",
"On the other hand, since the REtags provide extra information, such as type, about words in the sentence, logit and feat generally work better.",
"However, different from intent detection, feat only outperforms logit by a margin.",
"This is because feat can use the REtags of all words to generate better context representations through the NN, while logit can only utilize the REtag of the target word before the final output layer.",
"As a result, feat actually gathers more information from REs and can make better use of them than logit.",
"Again, hu16 is still outperformed by logit, possibly due to the insufficient data support in this few-shot scenario.",
"We also see that even the BLSTM outperforms REO in 5-shot, indicating while it is hard to write high-quality RE patterns, using REs to boost NNs is still feasible.",
"Summary.",
"The amount of extra information that a NN can utilize from the combined REs significantly affects the resulting performance.",
"Thus, the attention loss methods work best for intent detection and feat works best for slot filling.",
"We also see that the improvements from REs decreases as having more training data.",
"This is not surprising because the implicit knowledge embedded in the REs are likely to have already been captured by a sufficient large annotated dataset and in this scenario using the REs will bring in fewer benefits.",
"Partial Few-Shot Learning To better understand the relationship between our approach and existing few-shot learning methods, we also implement the memory network method Table 3 : Results on Full Dataset.",
"The left side of '/' applies for intent, and the right side for slot.",
"(Kaiser et al., 2017) which achieves good results in various few-shot datasets.",
"We adapt their opensource code, and add their memory module (mem) to our BLSTM model.",
"Since the memory module requires to be trained on either many few-shot classes or several classes with extra data, we expand our full few-shot dataset for intent detection, so that the top 3 intent labels have 300 sentences (partial few-shot).",
"As shown in Table 2 , mem works better than BLSTM, and our attention loss can be further combined with the memory module (mem+posi), with even better performance.",
"hu16 also works here, but worse than two+both.",
"Note that, the memory module requires the input sentence to have only one embedding, thus we only use one set of positive attention for combination.",
"As for slot filling, since we already have extra data for frequent tags in the original few-shot data (see Sec.",
"4.1), we use them directly to run the memory module.",
"As shown in the bottom of Table 1 , mem also improves the base BLSTM, and gains further boost when it is combined with feat 6 .",
"Full Dataset To answer Q2, we also evaluate our methods on the full dataset.",
"As seen in Table 3 , for intent detection, while two+both still works, feat and logit no longer give improvements.",
"This shows 6 For compactness, we only combine the best method in each task with mem, but others can also be combined.",
"that since both REtag and annotated data provide intent labels for the input sentence, the value of the extra noisy tag from RE become limited as we have more annotated data.",
"However, as there is no guidance on attention in the annotations, the clue words from REs are still useful.",
"Further, since feat concatenates REtags at the input level, the powerful NN makes it more likely to overfit than logit, therefore feat performs even worse when compared to the BLSTM.",
"As for slot filling, introducing feat and logit can still bring further improvements.",
"This shows that the word type information contained in the REtags is still hard to be fully learned even when we have more annotated data.",
"Moreover, different from few-shot settings, two+both has a better macro-F1 score than the BLSTM for this task, suggesting that better attention is still useful when the base model is properly trained.",
"Again, hu16 outperforms the BLSTM in both tasks, showing that although the REtags are noisy, their teacher-student network can still distill useful information.",
"However, hu16 is a general framework to combine FOL rules, which is more indirect in transferring knowledge from rules to NN than our methods.",
"Therefore, it is still inferior to attention loss in intent detection and feat in slot filling, which are designed to combine REs.",
"Further, mem generally works in this setting, and can receive further improvement by combining our fusion methods.",
"We can also see that two+both works clearly better than the stateof-art method (L&L16) in intent detection, which jointly models the two tasks.",
"And mem+feat is comparative to L&L16 in slot filling.",
"Impact of the RE Complexity We now discuss how the RE complexity affects the performance of the combination.",
"We choose to control the RE complexity by modifying the number of groups.",
"Specifically, we reduce the number of groups for existing REs to decrease RE complexity.",
"To mimic the process of writing simple REs from scratch, we try our best to keep the key RE groups.",
"For intent detection, all the REs are reduced to at most 2 groups.",
"As for slot filling, we also reduce the REs to at most 2 groups, and for some simples case, we further reduce them into word-list patterns, e.g., ( CITY).",
"As shown in Table 4 , the simple REs already deliver clear improvements to the base NN models, which shows the effectiveness of our methods, and indicates that simple REs are quite costefficient since these simple REs only contain 1-2 RE groups and thus very easy to produce.",
"We can also see that using complex REs generally leads to better results compared to using simple REs.",
"This indicates that when considering using REs to improve a NN model, we can start with simple REs, and gradually increase the RE complexity to improve the performance over time 7 .",
"Related Work Our work builds upon the following techniques, while qualitatively differing from each NN with Rules.",
"On the initialization side, uses important n-grams to initialize the convolution filters.",
"On the input side, Wang et al.",
"(2017a) uses knowledge base rules to find relevant concepts for short texts to augment input.",
"On the output side, Hu et al.",
"(2016a; 2016b) and Guo et al.",
"(2017) use FOL rules to rectify the output probability of NN, and then let NN learn from the rectified distribution in a teacher-student framework.",
"Xiao et al.",
"(2017) , on the other hand, modifies the decoding score of NN by multiplying a weight derived from rules.",
"On the loss function side, people modify the loss function to model the relationship between premise and conclusion (Demeester et al., 2016) , and fit both human-annotated and rule-annotated labels (Alashkar et al., 2017) .",
"Since fusing in initialization or in loss function often require special properties of the task, these approaches are not applicable to our problem.",
"Our work thus offers new ways to exploit RE rules at different levels of a NN.",
"NNs and REs.",
"As for NNs and REs, previous work has tried to use RE to speed up the decoding phase of a NN (Strauß et al., 2016) and generating REs from natural language specifications of the 7 We do not include results of both for slot filling since its REs are different from feat and logit, and we have already shown that the attention loss method does not work for slot filling.",
"RE (Locascio et al., 2016) .",
"By contrast, our work aims to use REs to improve the prediction ability of a NN.",
"Few-Shot Learning.",
"Prior work either considers few-shot learning in a metric learning framework (Koch et al., 2015; Vinyals et al., 2016) , or stores instances in a memory (Santoro et al., 2016; Kaiser et al., 2017) to match similar instances in the future.",
"Wang et al.",
"(2017b) further uses the semantic meaning of the class name itself to provide extra information for few-shot learning.",
"Unlike these previous studies, we seek to use the humangenerated REs to provide additional information.",
"Natural Language Understanding.",
"Recurrent neural networks are proven to be effective in both intent detection (Ravuri and Stoicke, 2015) and slot filling (Mesnil et al., 2015) .",
"Researchers also find ways to jointly model the two tasks (Liu and Lane, 2016; Zhang and Wang, 2016) .",
"However, no work so far has combined REs and NNs to improve intent detection and slot filling.",
"Conclusions In this paper, we investigate different ways to combine NNs and REs for solving typical SLU tasks.",
"Our experiments demonstrate that the combination clearly improves the NN performance in both the few-shot learning and the full dataset settings.",
"We show that by exploiting the implicit knowledge encoded within REs, one can significantly improve the learning performance.",
"Specifically, we observe that using REs to guide the attention module works best for intent detection, and using REtags as features is an effective approach for slot filling.",
"We provide interesting insights on how REs of various forms can be employed to improve NNs, showing that while simple REs are very cost-effective, complex REs generally yield better results."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"5.1",
"5.2",
"5.3",
"5.4",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Typesetting",
"Problem Definition",
"The Use of Regular Expressions",
"Our Approach",
"Base Models",
"Using REs at the Input Level",
"Using REs at the Network Module Level",
"Using REs at the Output Level",
"Evaluation Methodology",
"Datasets",
"Preparing REs",
"Experimental Setup",
"Full Few-Shot Learning",
"Partial Few-Shot Learning",
"Full Dataset",
"Impact of the RE Complexity",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-18#paper-1009#slide-9 | Complex RE vs Simple RE | Complex RE: many semantically independant groups
Complex RE: /(_AIRCRAFT_CODE) that fly/
Complex Simple Complex Simple
Complex REs yield better results
Simple REs also clearly improves the baseline | Complex RE: many semantically independant groups
Complex RE: /(_AIRCRAFT_CODE) that fly/
Complex Simple Complex Simple
Complex REs yield better results
Simple REs also clearly improves the baseline | [] |
GEM-SciDuet-train-18#paper-1009#slide-10 | 1009 | Marrying Up Regular Expressions with Neural Networks: A Case Study for Spoken Language Understanding | The success of many natural language processing (NLP) tasks is bound by the number and quality of annotated data, but there is often a shortage of such training data. In this paper, we ask the question: "Can we combine a neural network (NN) with regular expressions (RE) to improve supervised learning for NLP?". In answer, we develop novel methods to exploit the rich expressiveness of REs at different levels within a NN, showing that the combination significantly enhances the learning effectiveness when a small number of training examples are available. We evaluate our approach by applying it to spoken language understanding for intent detection and slot filling. Experimental results show that our approach is highly effective in exploiting the available training data, giving a clear boost to the RE-unaware NN. flights from Boston to Miami Intent RE: Intent Label: flight /from (__CITY) to (__CITY)/ O O B-fromloc.city O B-toloc.city Sentence: Slot Labels: Slot RE: /^flights? from/ REtag: flight city / toloc.city REtag: city / fromloc.city | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279,
280,
281,
282,
283,
284,
285,
286,
287,
288,
289,
290,
291,
292,
293,
294
],
"paper_content_text": [
"Introduction Regular expressions (REs) are widely used in various natural language processing (NLP) tasks like pattern matching, sentence classification, sequence labeling, etc.",
"(Chang and Manning, 2014) .",
"As a technique based on human-crafted rules, it is concise, interpretable, tunable, and does not rely on much training data to generate.",
"As such, it is commonly used in industry, especially when the available training examples are limited -a problem known as few-shot learning (GC et al., 2015) .",
"While powerful, REs have a poor generalization ability because all synonyms and variations in a RE must be explicitly specified.",
"As a result, REs are often ensembled with data-driven methods, such as neural network (NN) based techniques, where a set of carefully-written REs are used to handle certain cases with high precision, leaving the rest for data-driven methods.",
"We believe the use of REs can go beyond simple pattern matching.",
"In addition to being a separate classifier to be ensembled, a RE also encodes a developer's knowledge for the problem domain.",
"The knowledge could be, for example, the informative words (clue words) within a RE's surface form.",
"We argue that such information can be utilized by data-driven methods to achieve better prediction results, especially in few-shot learning.",
"This work investigates the use of REs to improve NNs -a learning framework that is widely used in many NLP tasks (Goldberg, 2017) .",
"The combination of REs and a NN allows us to exploit the conciseness and effectiveness of REs and the strong generalization ability of NNs.",
"This also provides us an opportunity to learn from various kinds of REs, since NNs are known to be good at tolerating noises (Xie et al., 2016) .",
"This paper presents novel approaches to combine REs with a NN at different levels.",
"At the input layer, we propose to use the evaluation outcome of REs as the input features of a NN (Sec.3.2).",
"At the network module level, we show how to exploit the knowledge encoded in REs to guide the attention mechanism of a NN (Sec.",
"3.3).",
"At the output layer, we combine the evaluation outcome of a RE with the NN output in a learnable manner (Sec.",
"3.4) .",
"We evaluate our approach by applying it to two spoken language understanding (SLU) tasks, namely intent detection and slot filling, which respectively correspond to two fundamental NLP tasks: sentence classification and sequence labeling.",
"To demonstrate the usefulness of REs in realworld scenarios where the available number of annotated data can vary, we explore both the fewshot learning setting and the one with full training data.",
"Experimental results show that our approach is highly effective in utilizing the available Figure 1 : A sentence from the ATIS dataset.",
"REs can be used to detect the intent and label slots.",
"annotated data, yielding significantly better learning performance over the RE-unaware method.",
"Our contributions are as follows.",
"(1) We present the first work to systematically investigate methods for combining REs with NNs.",
"(2) The proposed methods are shown to clearly improve the NN performance in both the few-shot learning and the full annotation settings.",
"(3) We provide a set of guidance on how to combine REs with NNs and RE annotation.",
"Background Typesetting In this paper, we use italic for emphasis like intent detection, the Courier typeface for abbreviations like RE, bold italic for the first appearance of a concept like clue words, Courier surrounded by / for regular expressions like /list( the)?",
"AIRLINE/, and underlined italic for words of sentences in our dataset like Boston.",
"Problem Definition Our work targets two SLU tasks: intent detection and slot filling.",
"The former is a sentence classification task where we learn a function to map an input sentence of n words, x = [x 1 , ..., x n ], to a corresponding intent label, c. The latter is a sequence labeling task for which we learn a function to take in an input query sentence of n words, x = [x 1 , ..., x n ], to produce a corresponding labeling sequence, y = [y 1 , ..., y n ], where y i is the slot label of the corresponding word, x i .",
"Take the sentence in Fig.",
"1 as an example.",
"A successful intent detector would suggest the intent of the sentence as flight, i.e., querying about flight-related information.",
"A slot filler, on the other hand, should identify the slots fromloc.city and toloc.city by labeling Boston and Miami, respectively, using the begin-inside-outside (BIO) scheme.",
"The Use of Regular Expressions In this work, a RE defines a mapping from a text pattern to several REtags which are the same as or related to the target labels (i.e., intent and slot labels).",
"A search function takes in a RE, applies it to all sentences, and returns any texts that match the pattern.",
"We then assign the REtag (s) (that are associated with the matching RE) to either the matched sentence (for intent detection) or some matched phrases (for slot filling).",
"Specifically, our REtags for intent detection are the same as the intent labels.",
"For example, in Fig.",
"1 , we get a REtag of flight that is the same as the intent label flight.",
"For slot filling, we use two different sets of REs.",
"Given the group functionality of RE, we can assign REtags to our interested RE groups (i.e., the expressions defined inside parentheses).",
"The translation from REtags to slot labels depends on how the corresponding REs are used.",
"(1) When REs are used at the network module level (Sec.",
"3.3), the corresponding REtags are the same as the target slot labels.",
"For instance, the slot RE in Fig.",
"1 will assign fromloc.city to the first RE group and toloc.city to the second one.",
"Here, CITY is a list of city names, which can be replaced with a RE string like /Boston|Miami|LA|.../.",
"(2) If REs are used in the input (Sec.",
"3.2) and the output layers (Sec.",
"3.4) of a NN, the corresponding REtag would be different from the target slot labels.",
"In this context, the two RE groups in Fig.",
"1 would be simply tagged as city to capture the commonality of three related target slot labels: fromloc.city, toloc.city, stoploc.city.",
"Note that we could use the target slot labels as REtags for all the settings.",
"The purpose of abstracting REtags to a simplified version of the target slot labels here is to show that REs can still be useful when their evaluation outcome does not exactly match our learning objective.",
"Further, as shown in Sec.",
"4.2, using simplified REtags can also make the development of REs easier in our tasks.",
"Intuitively, complicated REs can lead to better performance but require more efforts to generate.",
"Generally, there are two aspects affecting RE complexity most: the number of RE groups 1 and or clauses (i.e., expressions separated by the disjunction operator |) in a RE group.",
"Having a larger number of RE groups often leads to better 1 When discussing complexity, we consider each semantically independent consecutive word sequence as a RE group (excluding clauses, such as \\w+, that can match any word).",
"For instance, the RE: /how long( \\w+){1,2}?",
"(it take|flight)/ has two RE groups: (how long) and (it take|flight).",
"precision but lower coverage on pattern matching, while a larger number of or clauses usually gives a higher coverage but slightly lower precision.",
"Our Approach As depicted in Fig.",
"2 , we propose to combine NNs and REs from three different angles.",
"Base Models We use the Bi-directional LSTM (BLSTM) as our base NN model because it is effective in both intent detection and slot filling (Liu and Lane, 2016) .",
"Intent Detection.",
"As shown in Fig.",
"2 , the BLSTM takes as input the word embeddings [x 1 , ..., x n ] of a n-word sentence, and produces a vector h i for each word i.",
"A self-attention layer then takes in the vectors produced by the BLSTM to compute the sentence embedding s: s = i α i h i , α i = exp(h i Wc) i exp(h i Wc) (1) where α i is the attention for word i, c is a randomly initialized trainable vector used to select informative words for classification, and W is a weight matrix.",
"Finally, s is fed to a softmax classifier for intent classification.",
"Slot Filling.",
"The model for slot filling is straightforward -the slot label prediction is generated by a softmax classier which takes in the BLSTM's output h i and produces the slot label of word i.",
"Note that attention aggregation in Fig.",
"2 is only employed by the network module level method presented in Sec.",
"3.3.",
"Using REs at the Input Level At the input level, we use the evaluation outcomes of REs as features which are fed to NN models.",
"Intent Detection.",
"Our REtag for intent detection is the same as our target intent label.",
"Because real-world REs are unlikely to be perfect, one sentence may be matched by more than one RE.",
"This may result in several REtags that are conflict with each other.",
"For instance, the sentence list the Delta airlines flights to Miami can match a RE: /list( the)?",
"AIRLINE/ that outputs tag airline, and another RE: /list( \\w+){0,3} flights?/ that outputs tag flight.",
"To resolve the conflicting situations illustrated above, we average the randomly initialized trainable tag embeddings to form an aggregated embedding as the NN input.",
"There are two ways to use the aggregated embedding.",
"We can append the aggregated embedding to either the embedding of every input word, or the input of the softmax classifier (see 1 in Fig.",
"2(a) ).",
"To determine which strategy works best, we perform a pilot study.",
"We found that the first method causes the tag embedding to be copied many times; consequently, the NN tends to heavily rely on the REtags, and the resulting performance is similar to the one given by using REs alone in few-shot settings.",
"Thus, we adopt the second approach.",
"Slot Filling.",
"Since the evaluation outcomes of slot REs are word-level tags, we can simply embed and average the REtags into a vector f i for each word, and append it to the corresponding word embedding w i (as shown in 1 in Fig.",
"2(b) ).",
"Note that we also extend the slot REtags into the BIO format, e.g., the REtags of phrase New York are B-city and I-city if its original tag is city.",
"Using REs at the Network Module Level At the network module level, we explore ways to utilize the clue words in the surface form of a RE (bold blue arrows and words in 2 of Fig.",
"2 ) to guide the attention module in NNs.",
"Intent Detection.",
"Taking the sentence in Fig.",
"1 for example, the RE: /ˆflights?",
"from/ that leads to intent flight means that flights from are the key words to decide the intent flight.",
"Therefore, the attention module in NNs should leverage these two words to get the correct prediction.",
"To this end, we extend the base intent model by making two changes to incorporate the guidance from REs.",
"First, since each intent has its own clue words, using a single sentence embedding for all intent labels would make the attention less focused.",
"Therefore, we let each intent label k use different attention a k , which is then used to generate the sentence embedding s k for that intent: s k = i α ki h i , α ki = exp(h i W a c k ) i exp(h i W a c k ) (2) where c k is a trainable vector for intent k which is used to compute attention a k , h i is the BLSTM output for word i, and W a is a weight matrix.",
"The probability p k that the input sentence expresses intent k is computed by: where w k , logit k , b k are weight vector, logit, and bias for intent k, respectively.",
"p k = exp(logit k ) k exp(logit k ) , logit k = w k s k + b k (3) x 1 x 2 h 1 h 2 x 3 h Second, apart from indicating a sentence for intent k (positive REs), a RE can also indicate that a sentence does not express intent k (negative REs).",
"We thus use a new set of attention (negative attentions, in contrast to positive attentions), to compute another set of logits for each intent with Eqs.",
"2 and 3.",
"We denote the logits computed by positive attentions as logit pk , and those by negative attentions as logit nk , the final logit for intent k can then be calculated as: logit k = logit pk − logit nk (4) To use REs to guide attention, we add an attention loss to the final loss: loss att = k i t ki log(α ki ) (5) where t ki is set to 0 when none of the matched REs (that leads to intent k) marks word i as a clue word -otherwise t ki is set to 1/l k , where l k is the number of clue words for intent k (if no matched RE leads to intent k, then t k * = 0).",
"We use Eq.",
"5 to compute the positive attention loss, loss att p , for positive REs and negative attention loss, loss att n , for negative ones.",
"The final loss is computed as: loss = loss c + β p loss att p + β n loss att n (6) where loss c is the original classification loss, β p and β n are weights for the two attention losses.",
"Slot Filling.",
"The two-side attention (positive and negative attention) mechanism introduced for intent prediction is unsuitable for slot filling.",
"Because for slot filling, we need to compute attention for each word, which demands more compu-tational and memory resources than doing that for intent detection 2 .",
"Because of the aforementioned reason, we use a simplified version of the two-side attention, where all the slot labels share the same set of positive and negative attention.",
"Specifically, to predict the slot label of word i, we use the following equations, which are similar to Eq.",
"1, to generate a sentence embedding s pi with regard to word i from positive attention: s pi = j α pij h j , α pij = exp(h j W sp h i ) j exp(h j W sp h i ) (7) where h i and h j are the BLSTM outputs for word i and j respectively, W sp is a weight matrix, and α pij is the positive attention value for word j with respect to word i.",
"Further, by replacing W sp with W sn , we use Eq.",
"7 again to compute negative attention and generate the corresponding sentence embedding s ni .",
"Finally, the prediction p i for word i can be calculated as: p i = softmax((W p [s pi ; h i ] + b p ) −(W n [s ni ; h i ] + b n )) (8) where W p , W n , b p , b n are weight matrices and bias vectors for positive and negative attention, respectively.",
"Here we append the BLSTM output h i to s pi and s ni because the word i itself also plays a crucial part in identifying its slot label.",
"Using REs at the Output Level At the output level, REs are used to amend the output of NNs.",
"At this level, we take the same approach used for intent detection and slot filling (see 3 in Fig.",
"2 ).",
"As mentioned in Sec.",
"2.3, the slot REs used in the output level only produce a simplified version of target slot labels, for which we can further annotate their corresponding target slot labels.",
"For instance, a RE that outputs city can lead to three slot labels: fromloc.city, toloc.city, stoploc.city.",
"Let z k be a 0-1 indicator of whether there is at least one matched RE that leads to target label k (intent or slot label), the final logits of label k for a sentence (or a specific word for slot filling) is: logit k = logit k + w k z k (9) where logit k is the logit produced by the original NN, and w k is a trainable weight indicating the overall confidence for REs that lead to target label k. Here we do not assign a trainable weight for each RE because it is often that only a few sentences match a RE.",
"We modify the logit instead of the final probability because a logit is an unconstrained real value, which matches the property of w k z k better than probability.",
"Actually, when performing model ensemble, ensembling with logits is often empirically better than with the final probability 3 .",
"This is also the reason why we choose to operate on logits in Sec.",
"3.3.",
"Evaluation Methodology Our experiments aim to answer three questions: Q1: Does the use of REs enhance the learning quality when the number of annotated instances is small?",
"Q2: Does the use of REs still help when using the full training data?",
"Q3: How can we choose from different combination methods?",
"Datasets We use the ATIS dataset (Hemphill et al., 1990) to evaluate our approach.",
"This dataset is widely used in SLU research.",
"It includes queries of flights, meal, etc.",
"We follow the setup of Liu and Lane (2016) by using 4,978 queries for training and 893 for testing, with 18 intent labels and 127 slot labels.",
"We also split words like Miami's into Miami 's during the tokenization phase to reduce the number of words that do not have a pre-trained word embedding.",
"This strategy is useful for fewshot learning.",
"To answer Q1 , we also exploit the full few-shot learning setting.",
"Specifically, for intent detection, we randomly select 5, 10, 20 training instances for each intent to form the few-shot training set; and for slot filling, we also explore 5, 10, 20 shots settings.",
"However, since a sentence typically contains multiple slots, the number of mentions of frequent slot labels may inevitably exceeds the target shot count.",
"To better approximate the target shot count, we select sentences for each slot label in ascending order of label frequencies.",
"That is k 1 -shot dataset will contain k 2 -shot dataset if k 1 > k 2 .",
"All settings use the original test set.",
"Since most existing few-shot learning methods require either many few-shot classes or some classes with enough data for training, we also explore the partial few-shot learning setting for intent detection to provide a fair comparison for existing few-shot learning methods.",
"Specifically, we let the 3 most frequent intents have 300 training instances, and the rest remains untouched.",
"This is also a common scenario in real world, where we often have several frequent classes and many classes with limited data.",
"As for slot filling, however, since the number of mentions of frequent slot labels already exceeds the target shot count, the original slot filling few-shot dataset can be directly used to train existing few-shot learning methods.",
"Therefore, we do not distinguish full and partial few-shot learning for slot filling.",
"Preparing REs We use the syntax of REs in Perl in this work.",
"Our REs are written by a paid annotator who is familiar with the domain.",
"It took the annotator in total less than 10 hours to develop all the REs, while a domain expert can accomplish the task faster.",
"We use the 20-shot training data to develop the REs, but word lists like cities are obtained from the full training set.",
"The development of REs is considered completed when the REs can cover most of the cases in the 20-shot training data with resonable precision.",
"After that, the REs are fixed throughout the experiments.",
"The majority of the time for writing the REs is proportional to the number of RE groups.",
"It took about 1.5 hours to write the 54 intent REs with on average 2.2 groups per RE.",
"It is straightforward to write the slot REs for the input and output level methods, for which it took around 1 hour to write the 60 REs with 1.7 groups on average.",
"By con-trast, writing slot REs to guide attention requires more efforts as the annotator needs to carefully select clue words and annotate the full slot label.",
"As a result, it took about 5.5 hours to generate 115 REs with on average 3.3 groups.",
"The performance of the REs can be found in the last line of Table 1.",
"In practice, a positive RE for intent (or slot) k can often be treated as negative REs for other intents (or slots).",
"As such, we use the positive REs for intent (or slot) k as the negative REs for other intents (or slots) in our experiments.",
"Experimental Setup Hyper-parameters.",
"Our hyper-parameters for the BLSTM are similar to the ones used by Liu and Lane (2016) .",
"Specifically, we use batch size 16, dropout probability 0.5, and BLSTM cell size 100.",
"The attention loss weight is 16 (both positive and negative) for full few-shot learning settings and 1 for other settings.",
"We use the 100d GloVe word vectors (Pennington et al., 2014) pre-trained on Wikipedia and Gigaword (Parker et al., 2011) , and the Adam optimizer (Kingma and Ba, 2014) with learning rate 0.001.",
"Evaluation Metrics.",
"We report accuracy and macro-F1 for intent detection, and micro/macro-F1 for slot filling.",
"Micro/macro-F1 are the harmonic mean of micro/macro precision and recall.",
"Macro-precision/recall are calculated by averaging precision/recall of each label, and microprecision/recall are averaged over each prediction.",
"Competitors and Naming Conventions.",
"Here, a bold Courier typeface like BLSTM denotes the notations of the models that we will compare in Sec.",
"5.",
"Specifically, we compare our methods with the baseline BLSTM model (Sec.",
"3.1).",
"Since our attention loss method (Sec.",
"3.3) uses two-side attention, we include the raw two-side attention model without attention loss (+two) for comparison as well.",
"Besides, we also evaluate the RE output (REO), which uses the REtags as prediction directly, to show the quality of the REs that we will use in the experiments.",
"4 As for our methods for combinging REs with NN, +feat refers to using REtag as input features (Sec.",
"3.2), +posi and +neg refer to using positive and negative attention loss respectively, +both refers to using both postive and negative attention losses (Sec.",
"3.3), and +logit means using REtag to modify NN output (Sec.",
"3.4).",
"Moverover, since the REs can also be formatted as first-order-logic (FOL) rules, we also compare our methods with the teacher-student framework proposed by Hu et al.",
"(2016a) , which is a general framework for distilling knowledge from FOL rules into NN (+hu16).",
"Besides, since we consider few-short learning, we also include the memory module proposed by Kaiser et al.",
"(2017) , which performs well in various few-shot datasets (+mem) 5 .",
"Finally, the state-of-art model on the ATIS dataset is also included (L&L16), which jointly models the intent detection and slot filling in a single network (Liu and Lane, 2016) .",
"Experimental Results Full Few-Shot Learning To answer Q1 , we first explore the full few-shot learning scenario.",
"Intent Detection.",
"As shown in Table 1 , except for 5-shot, all approaches improve the baseline BLSTM.",
"Our network-module-level methods give the best performance because our attention module directly receives signals from the clue words in REs that contain more meaningful information than the REtag itself used by other methods.",
"We also observe that since negative REs are derived from positive REs with some noises, posi performs better than neg when the amount of available data is limited.",
"However, neg is slightly better in 20-shot, possibly because negative REs significantly outnumbers the positive ones.",
"Besides, two alone works better than the BLSTM when there are sufficient data, confirming the advantage of our two-side attention architecture.",
"As for other proposed methods, the output level method (logit) works generally better than the input level method (feat), except for the 5-shot case.",
"We believe this is due to the fewer number of RE related parameters and the shorter distance that the gradient needs to travel from the loss to these parameters -both make logit easier to train.",
"However, since logit directly modifies the output, the final prediction is more sensitive to the insufficiently trained weights in logit, leading to the inferior results in the 5-shot setting.",
"Model Type Model Name 90 / 74.47 68.69 / 84.66 72.43 / 85.78 59.59 / 83.47 73.62 / 89.28 78.94 / 92.21 +two+neg 49.01 / 68.31 64.67 / 79.17 72.32 / 86.34 59.51 / 83.23 72.92 / 89.11 78.83 / 92.07 +two+both 54.86 / 75.36 71.23 / 85.44 75.58 / 88.80 59.47 / 83.35 73.55 / 89.54 To compare with existing methods of combining NN and rules, we also implement the teacherstudent network (Hu et al., 2016a) .",
"This method lets the NN learn from the posterior label distribution produced by FOL rules in a teacher-student framework, but requires considerable amounts of data.",
"Therefore, although both hu16 and logit operate at the output level, logit still performs better than hu16 in these few-shot settings, since logit is easier to train.",
"It can also be seen that starting from 10-shot, two+both significantly outperforms pure REO.",
"This suggests that by using our attention loss to connect the distributional representation of the NN and the clue words of REs, we can generalize RE patterns within a NN architecture by using a small amount of annotated data.",
"Slot Filling.",
"Different from intent detection, as shown in Table 1 , our attention loss does not work for slot filling.",
"The reason is that the slot label of a target word (the word for which we are trying to predict a slot label) is decided mainly by the semantic meaning of the word itself, together with 0-3 phrases in the context to provide supplementary information.",
"However, our attention mechanism can only help in recognizing clue words in the context, which is less important than the word itself and have already been captured by the BLSTM, to some extent.",
"Therefore, the attention loss and the attention related parameters are more of a burden than a benefit.",
"As is shown in Fig.",
"1 , the model recognizes Boston as fromloc.city mainly because Boston itself is a city, and its context word from may have already been captured by the BLSTM and our attention mechanism does not help much.",
"By examining the attention values of +two trained on the full dataset, we find that instead of mark-ing informative context words, the attention tends to concentrate on the target word itself.",
"This observation further reinforces our hypothesis on the attention loss.",
"On the other hand, since the REtags provide extra information, such as type, about words in the sentence, logit and feat generally work better.",
"However, different from intent detection, feat only outperforms logit by a margin.",
"This is because feat can use the REtags of all words to generate better context representations through the NN, while logit can only utilize the REtag of the target word before the final output layer.",
"As a result, feat actually gathers more information from REs and can make better use of them than logit.",
"Again, hu16 is still outperformed by logit, possibly due to the insufficient data support in this few-shot scenario.",
"We also see that even the BLSTM outperforms REO in 5-shot, indicating while it is hard to write high-quality RE patterns, using REs to boost NNs is still feasible.",
"Summary.",
"The amount of extra information that a NN can utilize from the combined REs significantly affects the resulting performance.",
"Thus, the attention loss methods work best for intent detection and feat works best for slot filling.",
"We also see that the improvements from REs decreases as having more training data.",
"This is not surprising because the implicit knowledge embedded in the REs are likely to have already been captured by a sufficient large annotated dataset and in this scenario using the REs will bring in fewer benefits.",
"Partial Few-Shot Learning To better understand the relationship between our approach and existing few-shot learning methods, we also implement the memory network method Table 3 : Results on Full Dataset.",
"The left side of '/' applies for intent, and the right side for slot.",
"(Kaiser et al., 2017) which achieves good results in various few-shot datasets.",
"We adapt their opensource code, and add their memory module (mem) to our BLSTM model.",
"Since the memory module requires to be trained on either many few-shot classes or several classes with extra data, we expand our full few-shot dataset for intent detection, so that the top 3 intent labels have 300 sentences (partial few-shot).",
"As shown in Table 2 , mem works better than BLSTM, and our attention loss can be further combined with the memory module (mem+posi), with even better performance.",
"hu16 also works here, but worse than two+both.",
"Note that, the memory module requires the input sentence to have only one embedding, thus we only use one set of positive attention for combination.",
"As for slot filling, since we already have extra data for frequent tags in the original few-shot data (see Sec.",
"4.1), we use them directly to run the memory module.",
"As shown in the bottom of Table 1 , mem also improves the base BLSTM, and gains further boost when it is combined with feat 6 .",
"Full Dataset To answer Q2, we also evaluate our methods on the full dataset.",
"As seen in Table 3 , for intent detection, while two+both still works, feat and logit no longer give improvements.",
"This shows 6 For compactness, we only combine the best method in each task with mem, but others can also be combined.",
"that since both REtag and annotated data provide intent labels for the input sentence, the value of the extra noisy tag from RE become limited as we have more annotated data.",
"However, as there is no guidance on attention in the annotations, the clue words from REs are still useful.",
"Further, since feat concatenates REtags at the input level, the powerful NN makes it more likely to overfit than logit, therefore feat performs even worse when compared to the BLSTM.",
"As for slot filling, introducing feat and logit can still bring further improvements.",
"This shows that the word type information contained in the REtags is still hard to be fully learned even when we have more annotated data.",
"Moreover, different from few-shot settings, two+both has a better macro-F1 score than the BLSTM for this task, suggesting that better attention is still useful when the base model is properly trained.",
"Again, hu16 outperforms the BLSTM in both tasks, showing that although the REtags are noisy, their teacher-student network can still distill useful information.",
"However, hu16 is a general framework to combine FOL rules, which is more indirect in transferring knowledge from rules to NN than our methods.",
"Therefore, it is still inferior to attention loss in intent detection and feat in slot filling, which are designed to combine REs.",
"Further, mem generally works in this setting, and can receive further improvement by combining our fusion methods.",
"We can also see that two+both works clearly better than the stateof-art method (L&L16) in intent detection, which jointly models the two tasks.",
"And mem+feat is comparative to L&L16 in slot filling.",
"Impact of the RE Complexity We now discuss how the RE complexity affects the performance of the combination.",
"We choose to control the RE complexity by modifying the number of groups.",
"Specifically, we reduce the number of groups for existing REs to decrease RE complexity.",
"To mimic the process of writing simple REs from scratch, we try our best to keep the key RE groups.",
"For intent detection, all the REs are reduced to at most 2 groups.",
"As for slot filling, we also reduce the REs to at most 2 groups, and for some simples case, we further reduce them into word-list patterns, e.g., ( CITY).",
"As shown in Table 4 , the simple REs already deliver clear improvements to the base NN models, which shows the effectiveness of our methods, and indicates that simple REs are quite costefficient since these simple REs only contain 1-2 RE groups and thus very easy to produce.",
"We can also see that using complex REs generally leads to better results compared to using simple REs.",
"This indicates that when considering using REs to improve a NN model, we can start with simple REs, and gradually increase the RE complexity to improve the performance over time 7 .",
"Related Work Our work builds upon the following techniques, while qualitatively differing from each NN with Rules.",
"On the initialization side, uses important n-grams to initialize the convolution filters.",
"On the input side, Wang et al.",
"(2017a) uses knowledge base rules to find relevant concepts for short texts to augment input.",
"On the output side, Hu et al.",
"(2016a; 2016b) and Guo et al.",
"(2017) use FOL rules to rectify the output probability of NN, and then let NN learn from the rectified distribution in a teacher-student framework.",
"Xiao et al.",
"(2017) , on the other hand, modifies the decoding score of NN by multiplying a weight derived from rules.",
"On the loss function side, people modify the loss function to model the relationship between premise and conclusion (Demeester et al., 2016) , and fit both human-annotated and rule-annotated labels (Alashkar et al., 2017) .",
"Since fusing in initialization or in loss function often require special properties of the task, these approaches are not applicable to our problem.",
"Our work thus offers new ways to exploit RE rules at different levels of a NN.",
"NNs and REs.",
"As for NNs and REs, previous work has tried to use RE to speed up the decoding phase of a NN (Strauß et al., 2016) and generating REs from natural language specifications of the 7 We do not include results of both for slot filling since its REs are different from feat and logit, and we have already shown that the attention loss method does not work for slot filling.",
"RE (Locascio et al., 2016) .",
"By contrast, our work aims to use REs to improve the prediction ability of a NN.",
"Few-Shot Learning.",
"Prior work either considers few-shot learning in a metric learning framework (Koch et al., 2015; Vinyals et al., 2016) , or stores instances in a memory (Santoro et al., 2016; Kaiser et al., 2017) to match similar instances in the future.",
"Wang et al.",
"(2017b) further uses the semantic meaning of the class name itself to provide extra information for few-shot learning.",
"Unlike these previous studies, we seek to use the humangenerated REs to provide additional information.",
"Natural Language Understanding.",
"Recurrent neural networks are proven to be effective in both intent detection (Ravuri and Stoicke, 2015) and slot filling (Mesnil et al., 2015) .",
"Researchers also find ways to jointly model the two tasks (Liu and Lane, 2016; Zhang and Wang, 2016) .",
"However, no work so far has combined REs and NNs to improve intent detection and slot filling.",
"Conclusions In this paper, we investigate different ways to combine NNs and REs for solving typical SLU tasks.",
"Our experiments demonstrate that the combination clearly improves the NN performance in both the few-shot learning and the full dataset settings.",
"We show that by exploiting the implicit knowledge encoded within REs, one can significantly improve the learning performance.",
"Specifically, we observe that using REs to guide the attention module works best for intent detection, and using REtags as features is an effective approach for slot filling.",
"We provide interesting insights on how REs of various forms can be employed to improve NNs, showing that while simple REs are very cost-effective, complex REs generally yield better results."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.3",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"4.1",
"4.2",
"4.3",
"5.1",
"5.2",
"5.3",
"5.4",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Typesetting",
"Problem Definition",
"The Use of Regular Expressions",
"Our Approach",
"Base Models",
"Using REs at the Input Level",
"Using REs at the Network Module Level",
"Using REs at the Output Level",
"Evaluation Methodology",
"Datasets",
"Preparing REs",
"Experimental Setup",
"Full Few-Shot Learning",
"Partial Few-Shot Learning",
"Full Dataset",
"Impact of the RE Complexity",
"Related Work",
"Conclusions"
]
} | GEM-SciDuet-train-18#paper-1009#slide-10 | Conclusion | Using REs can help to train of NN when data is limited
Guiding attention is best for intent detection (sentence classification)
RE output as feature is best for slot filling (sequence labeling)
We can start with simple REs, and increase complexity gradually | Using REs can help to train of NN when data is limited
Guiding attention is best for intent detection (sentence classification)
RE output as feature is best for slot filling (sequence labeling)
We can start with simple REs, and increase complexity gradually | [] |
GEM-SciDuet-train-19#paper-1013#slide-0 | 1013 | Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus | A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, which integrates features from two closely related fields: Terminology Extraction and Query Performance Prediction (QPP). Our method further expands modern candidate terms with ancient related terms, before assessing their corpus relevancy with QPP measures. We evaluate the empirical benefit of our method for a thesaurus for a diachronic Jewish corpus. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102
],
"paper_content_text": [
"Introduction In recent years, there has been growing interest in diachronic lexical resources, which comprise terms from different language periods.",
"(Borin and Forsberg, 2011; Riedl et al., 2014) .",
"These resources are mainly used for studying language change and supporting searches in historical domains, bridging the lexical gap between modern and ancient language.",
"In particular, we are interested in this paper in a certain type of diachronic thesaurus.",
"It contains entries for modern terms, denoted as target terms.",
"Each entry includes a list of ancient related terms.",
"Beyond being a historical linguistic resource, such thesaurus is useful for supporting searches in a diachronic corpus, composed of both modern and ancient documents.",
"For example, in our historical Jewish corpus, the modern Hebrew term for terminal patient 1 has only few verbatim occurrences, in modern documents, but this topic has been widely discussed in ancient periods.",
"A domain searcher needs the diachronic thesaurus to enrich the search with ancient synonyms or related terms, such as dying and living for the moment.",
"Prior work on diachronic thesauri addressed the problem of collecting relevant related terms for given thesaurus entries.",
"In this paper we focus on the complementary preceding task of collecting a relevant list of modern target terms for a diachronic thesaurus in a certain domain.",
"As a starting point, we assume that a list of meaningful terms in the modern language is given, such as titles of Wikipedia articles.",
"Then, our task is to automatically decide which of these candidate terms are likely to be relevant for the corpus domain and should be included in the thesaurus.",
"In other words, we need to decide which of the candidate modern terms corresponds to a concept that has been discussed significantly in the diachronic domain corpus.",
"Our task is closely related to term scoring in the known Terminology Extraction (TE) task in NLP.",
"The goal of corpus-based TE is to automatically extract prominent terms from a given corpus and score them for domain relevancy.",
"In our setting, since all the target terms are modern, we avoid extracting them from the diachronic corpus of modern and ancient language.",
"Instead, we use a given candidate list and apply only the term scoring phase.",
"As a starting point, we adopt a rich set of state-of-the-art TE scoring measures and integrate them as features in a common supervised classification approach (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) .",
"Given our Information Retrieval (IR) motivation, we notice a closely related task to TE, namely Query Performance Prediction (QPP).",
"QPP methods are designed to estimate the retrieval quality of search queries, by assessing their relevance to the text collection.",
"Therefore, QPP scoring measures seem to be potentially suitable also for our terminology scoring task, by considering the candidate term as a search query.",
"Some of the QPP measures are indeed similar in nature to the TE methods, analyzing the distribution of the query terms within the collection.",
"However, some of the QPP methods have different IR-biased characteristics and may provide a marginal contribution.",
"Therefore, we adopted them as additional features for our classifier and indeed observed a performance increase.",
"Most of the QPP methods prioritize query terms with high frequency in the corpus.",
"However, in a diachronic corpus, such criterion may sometimes be problematic.",
"A modern target term might appear only in few modern documents, while being referred to, via ancient terminology, also in ancient documents.",
"Therefore, we would like our prediction measure to be aware of these ancient documents as well.",
"Following a particular QPP measure (Zhou and Croft, 2007) , we address this problem through Query Expansion (QE).",
"Accordingly, our method first expands the query containing the modern candidate term, then calculates the QPP scores of the expanded query and then utilizes them as scoring features.",
"Combining the baseline features with our expansion-based QPP features yields additional improvement in the classification results.",
"Term Scoring Measures This section reviews common measures developed for Terminology Extraction (Section 2.1) and for Query Performance Prediction (Section 2.2).",
"Table 1 lists those measures that were considered as features in our system, as described in Section 3.",
"Terminology Extraction Terminology Extraction (TE) methods aim to identify terms that are frequently used in a specific domain.",
"Typically, linguistic processors (e.g.",
"POS tagger, phrase chunker) are used to filter out stop words and restrict candidate terms to nouns or noun phrases.",
"Then, statistical measures are used to rank the candidate terms.",
"There are two main terminological properties that the statistical measures identify: unithood and termhood.",
"Measures that express unithood indicate the collocation strength of units that comprise a single term.",
"Measures that express termhood indicate the statistical prominence of the term in the target do-main corpus.",
"For our task, we focus on the second property, since the candidates are taken from a key-list of terms whose coherence in the language is already known.",
"Measures expressing termhood are based either on frequency in the target corpus (1, 2, 3, 4, 9, 11, 12, 13 ) 2 , or on comparison with frequency in a reference background corpus (8, 14, 16) .",
"Recently, approaches which combine both unithood and termhood were investigated as well (7, 8, 15, 16) .",
"Query Performance Prediction Query Performance Prediction (QPP) aims to estimate the quality of answers that a search system would return in response to a particular query.",
"Statistical QPP methods are categorized into two types: pre-retrieval methods, analyzing the distribution of the query term within the document collection; and post-retrieval methods, additionally analyzing the search results.",
"Some of the preretrieval methods are similar to TE methods based on the same term frequency statistics.",
"Pre-retrieval methods measure various properties of the query: specificity (17, 18, 24, 25) , similarity to the corpus (19) , coherence of the documents containing the query terms (26), variance of the query terms' weights over the documents containing it (20); and relatedness, as good performance is expected when the query terms co-occur frequently in the collection (21).",
"Post-retrieval methods are usually more complex, where the top search results are retrieved and analyzed.",
"They are categorized into three main paradigms: clarity-based methods (28), robustness-based methods (22) and score distribution based methods (23, 29).",
"We pay special attention to two post-retrieval QPP methods; Query Feedback (22) and Clarity (23).",
"The Clarity method measures the coherence of the query's search results with respect to the corpus.",
"It is defined as the KL divergence between a language model induced from the result list and that induced from the corpus.",
"The Query Feedback method measures the robustness of the query's results to query perturbations.",
"It models retrieval as a communication channel.",
"The input is the query, the channel is the search system, and the set of results is the noisy output of the channel.",
"A new query is generated from the list of search (Liu et al., 2005) 13 Term Variance Quality (Liu et al., 2005 ) 6 TF-Disjoint Corpora Frequency (Lopes et al., 2012) 14 Weirdness (Ahmad et al., 1999) 7 C-value (Frantzi and Ananiadou, 1999) 15 NC-value (Frantzi and Ananiadou, 1999) 8 Glossex (Kozakov et al., 2004) 16 TermExtractor (Sclano and Velardi, 2007 ) Query Performance Prediction measures 17 Average IDF 24 Average ICTF (Inverse collection term frequency) (Plachouras et al., 2004) 18 Query Scope 25 Simplified Clarity Score (He and Ounis, 2004) 19 Similarity Collection Query (Zhao et al., 2008) 26 Query Coherence (He et al., 2008 ) 20 Average Variance (Zhao et al., 2008) 27 Average Entropy (Cristina, 2013) 21 Term Relatedness (Hauff et al., 2008) 28 Clarity (Cronen-Townsend et al., 2002) 22 Query Feedback (Zhou and Croft, 2007) 29 Normalized Query Commitment (Shtok et al., 2009 ) 23 Weighted Information Gain (Zhou and Croft, 2007) Table 1: Prior art measures considered in our work results, taking the terms with maximal contribution to the Clarity score, and then a second list of results is retrieved for that second query.",
"The overlap between the two lists is the robustness score.",
"Our suggested method was inspired by the Query Feedback measure, as detailed in the next section.",
"Integrated Term Scoring We adopt the supervised framework for TE (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) , considering each candidate target term as a learning instance.",
"For each candidate, we calculate a set of features over which learning and classification are performed.",
"The classification predicts which candidates are suitable as target terms for the diachronic thesaurus.",
"Our baseline system (TE) includes state-of-the-art TE measures as features, listed in the upper part of Table 1 .",
"Next, we introduce two system variants that integrate QPP measures as additional features.",
"The first system, TE-QPP T erm , applies the QPP measures to the candidate term as the query.",
"All QPP measures, listed in the lower part of Table 1 , are utilized except for the Query Feedback measure (22) (see below).",
"To verify which QPP features are actually beneficial for terminology scoring, we measure the marginal contribution of each feature via ablation tests in 10-fold cross validation over the training data (see Section 4.1).",
"Features which did not yield marginal contribution were not included 3 .",
"3 Removed features from TE-QPPT erm: 17, 19, 22, 23, The two systems, described so far, rely on corpus occurrences of the original candidate term, prioritizing relatively frequent terms.",
"In a diachronic corpus, however, a candidate term might be rare in its original modern form, yet frequently referred to by archaic forms.",
"Therefore, we adopt a query expansion strategy based on Pseudo Relevance Feedback, which expands a query based on analyzing the top retrieved documents.",
"In our setting, this approach takes advantage of a typical property of modern documents in a diachronic corpus, namely their temporally-mixed language.",
"Often, modern documents in a diachronic domain include ancient terms that were either preserved in modern language or appear as citations.",
"Therefore, an expanded query of a modern term, which retrieves only modern documents, is likely to pick some of these ancient terms as well.",
"Thus, the expanded query would likely retrieve both modern and ancient documents and would allow QPP measures to evaluate the query relevance across periods.",
"Therefore, our second integrated system, TE-QPP QE , utilizes the Pseudo Relevance Feedback Query Expansion approach to expand our modern candidate with topically-related terms.",
"First, similarly to the Query Feedback measure (measure (22) in the lower part of Table 1), we expand the candidate by adding terms with maximal contribution (top 5, in our experiments) to the Clarity score (Section 2.2).",
"Then, we calculate all QPP measures for the expanded query.",
"Since the expan- 24, 25. sions that we extract from the top retrieved documents typically include ancient terms as well, the new scores may better express the relevancy of the candidate's topic across the diachronic corpus.",
"We also performed feature selection, as done for the first system 4 .",
"Evaluation Evaluation Setting We applied our method to the diachronic corpus is the Responsa project Hebrew corpus 5 .",
"The Responsa corpus includes rabbinic case-law rulings which represent the historical-sociological milieu of real-life situations, collected over more than a thousand years, from the 11 th century until today.",
"The corpus consists of 81,993 documents, and was used for previous NLP and IR research (Choueka et al., 1971; Choueka et al., 1987; HaCohen-Kerner et al., 2008; Liebeskind et al., 2012; Zohar et al., 2013; .",
"The candidate target terms for our classification task were taken from the publicly available keylist of Hebrew Wikipedia entries 6 .",
"Since many of these tens of thousands entries, such as person names and place names, were not suitable as target terms, we first filtered them by Hebrew Named Entity Recognition 7 and manually.",
"Then, a list of approximately 5000 candidate target terms was manually annotated by two domain experts.",
"The experts decided which of the candidates corresponds to a concept that has been discussed significantly in our diachronic domain corpus.",
"Only candidates that the annotators agreed on their annotation were retained, and then balanced for equal number of positive and negative examples.",
"Consequently, the balanced training and test sets contain 500 and 200 candidates, respectively.",
"For classification, Weka's 8 Support Vector Machine supervised classifier with polynomial kernel was used.",
"We train the classifier with our training set and measure the accuracy on the test set.",
"Results Table 2 compares the classification performance of our baseline (TE) and integrated systems, (TE-QPP T erm ) and (TE-QPP QE ), proposed in Section 3.",
"Feature Set Accuracy (%) TE 61.5 TE-QPP T erm 65 TE-QPP QE 66.5 (McNemar, 1947) , on our diachronic corpus it seems to help.",
"Yet, when the QPP score is measured over the expanded candidate, and ancient documents are utilized, the performance increase is more notable (5 points) and the improvement over the baseline is statistically significant according to the McNemar's test with p<0.05.",
"We analyzed the false negative classifications of the baseline that were classified correctly by the QE-based configuration.",
"We found that their expanded forms contain ancient terms that help the system making the right decision.",
"For example, the Hebrew target term for slippers was expanded by the ancient expression corresponding to made of leather.",
"This is a useful expansion since in the ancient documents slippers are discussed in the context of fasts, as in two of the Jewish fasts wearing leather shoes is forbidden and people wear cloth-made slippers.",
"Conclusions and Future Work We introduced a method that combines features from two closely related tasks, terminology extraction and query performance prediction, to solve the task of target terms selection for a diachronic thesaurus.",
"In our diachronic setting, we showed that enriching TE measures with QPP measures, particularly when calculated on expanded candidates, significantly improves performance.",
"Our results suggest that it may be worth investigating this integrated approach also for other terminology extraction and QPP settings.",
"We plan to further explore the suggested method by utilizing additional query expansion algorithms.",
"In particular, to avoid expanding queries for which expansion degrade retrieval performance, we plan to investigate the selective query expansion approach (Cronen-Townsend et al., 2004) ."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Term Scoring Measures",
"Terminology Extraction",
"Query Performance Prediction",
"Integrated Term Scoring",
"Evaluation Setting",
"Results",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-19#paper-1013#slide-0 | Research Context | Domain Specific Diachronic Corpus
Example: searching vegetarian in biblical scholarship archive
Were All Men Vegetarians
God instructed Adam saying,
I have given you
every herb that yields
Of every tree of the garden
thou mayest freely eat:
and thou shalt eat the
herb of the field;
(King James Bible, Genesis)
(by Eric Lyons, M.Min.) | Domain Specific Diachronic Corpus
Example: searching vegetarian in biblical scholarship archive
Were All Men Vegetarians
God instructed Adam saying,
I have given you
every herb that yields
Of every tree of the garden
thou mayest freely eat:
and thou shalt eat the
herb of the field;
(King James Bible, Genesis)
(by Eric Lyons, M.Min.) | [] |
GEM-SciDuet-train-19#paper-1013#slide-1 | 1013 | Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus | A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, which integrates features from two closely related fields: Terminology Extraction and Query Performance Prediction (QPP). Our method further expands modern candidate terms with ancient related terms, before assessing their corpus relevancy with QPP measures. We evaluate the empirical benefit of our method for a thesaurus for a diachronic Jewish corpus. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102
],
"paper_content_text": [
"Introduction In recent years, there has been growing interest in diachronic lexical resources, which comprise terms from different language periods.",
"(Borin and Forsberg, 2011; Riedl et al., 2014) .",
"These resources are mainly used for studying language change and supporting searches in historical domains, bridging the lexical gap between modern and ancient language.",
"In particular, we are interested in this paper in a certain type of diachronic thesaurus.",
"It contains entries for modern terms, denoted as target terms.",
"Each entry includes a list of ancient related terms.",
"Beyond being a historical linguistic resource, such thesaurus is useful for supporting searches in a diachronic corpus, composed of both modern and ancient documents.",
"For example, in our historical Jewish corpus, the modern Hebrew term for terminal patient 1 has only few verbatim occurrences, in modern documents, but this topic has been widely discussed in ancient periods.",
"A domain searcher needs the diachronic thesaurus to enrich the search with ancient synonyms or related terms, such as dying and living for the moment.",
"Prior work on diachronic thesauri addressed the problem of collecting relevant related terms for given thesaurus entries.",
"In this paper we focus on the complementary preceding task of collecting a relevant list of modern target terms for a diachronic thesaurus in a certain domain.",
"As a starting point, we assume that a list of meaningful terms in the modern language is given, such as titles of Wikipedia articles.",
"Then, our task is to automatically decide which of these candidate terms are likely to be relevant for the corpus domain and should be included in the thesaurus.",
"In other words, we need to decide which of the candidate modern terms corresponds to a concept that has been discussed significantly in the diachronic domain corpus.",
"Our task is closely related to term scoring in the known Terminology Extraction (TE) task in NLP.",
"The goal of corpus-based TE is to automatically extract prominent terms from a given corpus and score them for domain relevancy.",
"In our setting, since all the target terms are modern, we avoid extracting them from the diachronic corpus of modern and ancient language.",
"Instead, we use a given candidate list and apply only the term scoring phase.",
"As a starting point, we adopt a rich set of state-of-the-art TE scoring measures and integrate them as features in a common supervised classification approach (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) .",
"Given our Information Retrieval (IR) motivation, we notice a closely related task to TE, namely Query Performance Prediction (QPP).",
"QPP methods are designed to estimate the retrieval quality of search queries, by assessing their relevance to the text collection.",
"Therefore, QPP scoring measures seem to be potentially suitable also for our terminology scoring task, by considering the candidate term as a search query.",
"Some of the QPP measures are indeed similar in nature to the TE methods, analyzing the distribution of the query terms within the collection.",
"However, some of the QPP methods have different IR-biased characteristics and may provide a marginal contribution.",
"Therefore, we adopted them as additional features for our classifier and indeed observed a performance increase.",
"Most of the QPP methods prioritize query terms with high frequency in the corpus.",
"However, in a diachronic corpus, such criterion may sometimes be problematic.",
"A modern target term might appear only in few modern documents, while being referred to, via ancient terminology, also in ancient documents.",
"Therefore, we would like our prediction measure to be aware of these ancient documents as well.",
"Following a particular QPP measure (Zhou and Croft, 2007) , we address this problem through Query Expansion (QE).",
"Accordingly, our method first expands the query containing the modern candidate term, then calculates the QPP scores of the expanded query and then utilizes them as scoring features.",
"Combining the baseline features with our expansion-based QPP features yields additional improvement in the classification results.",
"Term Scoring Measures This section reviews common measures developed for Terminology Extraction (Section 2.1) and for Query Performance Prediction (Section 2.2).",
"Table 1 lists those measures that were considered as features in our system, as described in Section 3.",
"Terminology Extraction Terminology Extraction (TE) methods aim to identify terms that are frequently used in a specific domain.",
"Typically, linguistic processors (e.g.",
"POS tagger, phrase chunker) are used to filter out stop words and restrict candidate terms to nouns or noun phrases.",
"Then, statistical measures are used to rank the candidate terms.",
"There are two main terminological properties that the statistical measures identify: unithood and termhood.",
"Measures that express unithood indicate the collocation strength of units that comprise a single term.",
"Measures that express termhood indicate the statistical prominence of the term in the target do-main corpus.",
"For our task, we focus on the second property, since the candidates are taken from a key-list of terms whose coherence in the language is already known.",
"Measures expressing termhood are based either on frequency in the target corpus (1, 2, 3, 4, 9, 11, 12, 13 ) 2 , or on comparison with frequency in a reference background corpus (8, 14, 16) .",
"Recently, approaches which combine both unithood and termhood were investigated as well (7, 8, 15, 16) .",
"Query Performance Prediction Query Performance Prediction (QPP) aims to estimate the quality of answers that a search system would return in response to a particular query.",
"Statistical QPP methods are categorized into two types: pre-retrieval methods, analyzing the distribution of the query term within the document collection; and post-retrieval methods, additionally analyzing the search results.",
"Some of the preretrieval methods are similar to TE methods based on the same term frequency statistics.",
"Pre-retrieval methods measure various properties of the query: specificity (17, 18, 24, 25) , similarity to the corpus (19) , coherence of the documents containing the query terms (26), variance of the query terms' weights over the documents containing it (20); and relatedness, as good performance is expected when the query terms co-occur frequently in the collection (21).",
"Post-retrieval methods are usually more complex, where the top search results are retrieved and analyzed.",
"They are categorized into three main paradigms: clarity-based methods (28), robustness-based methods (22) and score distribution based methods (23, 29).",
"We pay special attention to two post-retrieval QPP methods; Query Feedback (22) and Clarity (23).",
"The Clarity method measures the coherence of the query's search results with respect to the corpus.",
"It is defined as the KL divergence between a language model induced from the result list and that induced from the corpus.",
"The Query Feedback method measures the robustness of the query's results to query perturbations.",
"It models retrieval as a communication channel.",
"The input is the query, the channel is the search system, and the set of results is the noisy output of the channel.",
"A new query is generated from the list of search (Liu et al., 2005) 13 Term Variance Quality (Liu et al., 2005 ) 6 TF-Disjoint Corpora Frequency (Lopes et al., 2012) 14 Weirdness (Ahmad et al., 1999) 7 C-value (Frantzi and Ananiadou, 1999) 15 NC-value (Frantzi and Ananiadou, 1999) 8 Glossex (Kozakov et al., 2004) 16 TermExtractor (Sclano and Velardi, 2007 ) Query Performance Prediction measures 17 Average IDF 24 Average ICTF (Inverse collection term frequency) (Plachouras et al., 2004) 18 Query Scope 25 Simplified Clarity Score (He and Ounis, 2004) 19 Similarity Collection Query (Zhao et al., 2008) 26 Query Coherence (He et al., 2008 ) 20 Average Variance (Zhao et al., 2008) 27 Average Entropy (Cristina, 2013) 21 Term Relatedness (Hauff et al., 2008) 28 Clarity (Cronen-Townsend et al., 2002) 22 Query Feedback (Zhou and Croft, 2007) 29 Normalized Query Commitment (Shtok et al., 2009 ) 23 Weighted Information Gain (Zhou and Croft, 2007) Table 1: Prior art measures considered in our work results, taking the terms with maximal contribution to the Clarity score, and then a second list of results is retrieved for that second query.",
"The overlap between the two lists is the robustness score.",
"Our suggested method was inspired by the Query Feedback measure, as detailed in the next section.",
"Integrated Term Scoring We adopt the supervised framework for TE (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) , considering each candidate target term as a learning instance.",
"For each candidate, we calculate a set of features over which learning and classification are performed.",
"The classification predicts which candidates are suitable as target terms for the diachronic thesaurus.",
"Our baseline system (TE) includes state-of-the-art TE measures as features, listed in the upper part of Table 1 .",
"Next, we introduce two system variants that integrate QPP measures as additional features.",
"The first system, TE-QPP T erm , applies the QPP measures to the candidate term as the query.",
"All QPP measures, listed in the lower part of Table 1 , are utilized except for the Query Feedback measure (22) (see below).",
"To verify which QPP features are actually beneficial for terminology scoring, we measure the marginal contribution of each feature via ablation tests in 10-fold cross validation over the training data (see Section 4.1).",
"Features which did not yield marginal contribution were not included 3 .",
"3 Removed features from TE-QPPT erm: 17, 19, 22, 23, The two systems, described so far, rely on corpus occurrences of the original candidate term, prioritizing relatively frequent terms.",
"In a diachronic corpus, however, a candidate term might be rare in its original modern form, yet frequently referred to by archaic forms.",
"Therefore, we adopt a query expansion strategy based on Pseudo Relevance Feedback, which expands a query based on analyzing the top retrieved documents.",
"In our setting, this approach takes advantage of a typical property of modern documents in a diachronic corpus, namely their temporally-mixed language.",
"Often, modern documents in a diachronic domain include ancient terms that were either preserved in modern language or appear as citations.",
"Therefore, an expanded query of a modern term, which retrieves only modern documents, is likely to pick some of these ancient terms as well.",
"Thus, the expanded query would likely retrieve both modern and ancient documents and would allow QPP measures to evaluate the query relevance across periods.",
"Therefore, our second integrated system, TE-QPP QE , utilizes the Pseudo Relevance Feedback Query Expansion approach to expand our modern candidate with topically-related terms.",
"First, similarly to the Query Feedback measure (measure (22) in the lower part of Table 1), we expand the candidate by adding terms with maximal contribution (top 5, in our experiments) to the Clarity score (Section 2.2).",
"Then, we calculate all QPP measures for the expanded query.",
"Since the expan- 24, 25. sions that we extract from the top retrieved documents typically include ancient terms as well, the new scores may better express the relevancy of the candidate's topic across the diachronic corpus.",
"We also performed feature selection, as done for the first system 4 .",
"Evaluation Evaluation Setting We applied our method to the diachronic corpus is the Responsa project Hebrew corpus 5 .",
"The Responsa corpus includes rabbinic case-law rulings which represent the historical-sociological milieu of real-life situations, collected over more than a thousand years, from the 11 th century until today.",
"The corpus consists of 81,993 documents, and was used for previous NLP and IR research (Choueka et al., 1971; Choueka et al., 1987; HaCohen-Kerner et al., 2008; Liebeskind et al., 2012; Zohar et al., 2013; .",
"The candidate target terms for our classification task were taken from the publicly available keylist of Hebrew Wikipedia entries 6 .",
"Since many of these tens of thousands entries, such as person names and place names, were not suitable as target terms, we first filtered them by Hebrew Named Entity Recognition 7 and manually.",
"Then, a list of approximately 5000 candidate target terms was manually annotated by two domain experts.",
"The experts decided which of the candidates corresponds to a concept that has been discussed significantly in our diachronic domain corpus.",
"Only candidates that the annotators agreed on their annotation were retained, and then balanced for equal number of positive and negative examples.",
"Consequently, the balanced training and test sets contain 500 and 200 candidates, respectively.",
"For classification, Weka's 8 Support Vector Machine supervised classifier with polynomial kernel was used.",
"We train the classifier with our training set and measure the accuracy on the test set.",
"Results Table 2 compares the classification performance of our baseline (TE) and integrated systems, (TE-QPP T erm ) and (TE-QPP QE ), proposed in Section 3.",
"Feature Set Accuracy (%) TE 61.5 TE-QPP T erm 65 TE-QPP QE 66.5 (McNemar, 1947) , on our diachronic corpus it seems to help.",
"Yet, when the QPP score is measured over the expanded candidate, and ancient documents are utilized, the performance increase is more notable (5 points) and the improvement over the baseline is statistically significant according to the McNemar's test with p<0.05.",
"We analyzed the false negative classifications of the baseline that were classified correctly by the QE-based configuration.",
"We found that their expanded forms contain ancient terms that help the system making the right decision.",
"For example, the Hebrew target term for slippers was expanded by the ancient expression corresponding to made of leather.",
"This is a useful expansion since in the ancient documents slippers are discussed in the context of fasts, as in two of the Jewish fasts wearing leather shoes is forbidden and people wear cloth-made slippers.",
"Conclusions and Future Work We introduced a method that combines features from two closely related tasks, terminology extraction and query performance prediction, to solve the task of target terms selection for a diachronic thesaurus.",
"In our diachronic setting, we showed that enriching TE measures with QPP measures, particularly when calculated on expanded candidates, significantly improves performance.",
"Our results suggest that it may be worth investigating this integrated approach also for other terminology extraction and QPP settings.",
"We plan to further explore the suggested method by utilizing additional query expansion algorithms.",
"In particular, to avoid expanding queries for which expansion degrade retrieval performance, we plan to investigate the selective query expansion approach (Cronen-Townsend et al., 2004) ."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Term Scoring Measures",
"Terminology Extraction",
"Query Performance Prediction",
"Integrated Term Scoring",
"Evaluation Setting",
"Results",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-19#paper-1013#slide-1 | Diachronic Thesaurus | A useful tool for supporting searches in diachronic corpus
Target term vegetarian modern
Related terms tree of the garden herb of the field ancient
Users are mostly aware of modern language
Collecting relevant related terms
For given thesaurus entries
Collecting a relevant list of modern target terms | A useful tool for supporting searches in diachronic corpus
Target term vegetarian modern
Related terms tree of the garden herb of the field ancient
Users are mostly aware of modern language
Collecting relevant related terms
For given thesaurus entries
Collecting a relevant list of modern target terms | [] |
GEM-SciDuet-train-19#paper-1013#slide-2 | 1013 | Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus | A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, which integrates features from two closely related fields: Terminology Extraction and Query Performance Prediction (QPP). Our method further expands modern candidate terms with ancient related terms, before assessing their corpus relevancy with QPP measures. We evaluate the empirical benefit of our method for a thesaurus for a diachronic Jewish corpus. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102
],
"paper_content_text": [
"Introduction In recent years, there has been growing interest in diachronic lexical resources, which comprise terms from different language periods.",
"(Borin and Forsberg, 2011; Riedl et al., 2014) .",
"These resources are mainly used for studying language change and supporting searches in historical domains, bridging the lexical gap between modern and ancient language.",
"In particular, we are interested in this paper in a certain type of diachronic thesaurus.",
"It contains entries for modern terms, denoted as target terms.",
"Each entry includes a list of ancient related terms.",
"Beyond being a historical linguistic resource, such thesaurus is useful for supporting searches in a diachronic corpus, composed of both modern and ancient documents.",
"For example, in our historical Jewish corpus, the modern Hebrew term for terminal patient 1 has only few verbatim occurrences, in modern documents, but this topic has been widely discussed in ancient periods.",
"A domain searcher needs the diachronic thesaurus to enrich the search with ancient synonyms or related terms, such as dying and living for the moment.",
"Prior work on diachronic thesauri addressed the problem of collecting relevant related terms for given thesaurus entries.",
"In this paper we focus on the complementary preceding task of collecting a relevant list of modern target terms for a diachronic thesaurus in a certain domain.",
"As a starting point, we assume that a list of meaningful terms in the modern language is given, such as titles of Wikipedia articles.",
"Then, our task is to automatically decide which of these candidate terms are likely to be relevant for the corpus domain and should be included in the thesaurus.",
"In other words, we need to decide which of the candidate modern terms corresponds to a concept that has been discussed significantly in the diachronic domain corpus.",
"Our task is closely related to term scoring in the known Terminology Extraction (TE) task in NLP.",
"The goal of corpus-based TE is to automatically extract prominent terms from a given corpus and score them for domain relevancy.",
"In our setting, since all the target terms are modern, we avoid extracting them from the diachronic corpus of modern and ancient language.",
"Instead, we use a given candidate list and apply only the term scoring phase.",
"As a starting point, we adopt a rich set of state-of-the-art TE scoring measures and integrate them as features in a common supervised classification approach (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) .",
"Given our Information Retrieval (IR) motivation, we notice a closely related task to TE, namely Query Performance Prediction (QPP).",
"QPP methods are designed to estimate the retrieval quality of search queries, by assessing their relevance to the text collection.",
"Therefore, QPP scoring measures seem to be potentially suitable also for our terminology scoring task, by considering the candidate term as a search query.",
"Some of the QPP measures are indeed similar in nature to the TE methods, analyzing the distribution of the query terms within the collection.",
"However, some of the QPP methods have different IR-biased characteristics and may provide a marginal contribution.",
"Therefore, we adopted them as additional features for our classifier and indeed observed a performance increase.",
"Most of the QPP methods prioritize query terms with high frequency in the corpus.",
"However, in a diachronic corpus, such criterion may sometimes be problematic.",
"A modern target term might appear only in few modern documents, while being referred to, via ancient terminology, also in ancient documents.",
"Therefore, we would like our prediction measure to be aware of these ancient documents as well.",
"Following a particular QPP measure (Zhou and Croft, 2007) , we address this problem through Query Expansion (QE).",
"Accordingly, our method first expands the query containing the modern candidate term, then calculates the QPP scores of the expanded query and then utilizes them as scoring features.",
"Combining the baseline features with our expansion-based QPP features yields additional improvement in the classification results.",
"Term Scoring Measures This section reviews common measures developed for Terminology Extraction (Section 2.1) and for Query Performance Prediction (Section 2.2).",
"Table 1 lists those measures that were considered as features in our system, as described in Section 3.",
"Terminology Extraction Terminology Extraction (TE) methods aim to identify terms that are frequently used in a specific domain.",
"Typically, linguistic processors (e.g.",
"POS tagger, phrase chunker) are used to filter out stop words and restrict candidate terms to nouns or noun phrases.",
"Then, statistical measures are used to rank the candidate terms.",
"There are two main terminological properties that the statistical measures identify: unithood and termhood.",
"Measures that express unithood indicate the collocation strength of units that comprise a single term.",
"Measures that express termhood indicate the statistical prominence of the term in the target do-main corpus.",
"For our task, we focus on the second property, since the candidates are taken from a key-list of terms whose coherence in the language is already known.",
"Measures expressing termhood are based either on frequency in the target corpus (1, 2, 3, 4, 9, 11, 12, 13 ) 2 , or on comparison with frequency in a reference background corpus (8, 14, 16) .",
"Recently, approaches which combine both unithood and termhood were investigated as well (7, 8, 15, 16) .",
"Query Performance Prediction Query Performance Prediction (QPP) aims to estimate the quality of answers that a search system would return in response to a particular query.",
"Statistical QPP methods are categorized into two types: pre-retrieval methods, analyzing the distribution of the query term within the document collection; and post-retrieval methods, additionally analyzing the search results.",
"Some of the preretrieval methods are similar to TE methods based on the same term frequency statistics.",
"Pre-retrieval methods measure various properties of the query: specificity (17, 18, 24, 25) , similarity to the corpus (19) , coherence of the documents containing the query terms (26), variance of the query terms' weights over the documents containing it (20); and relatedness, as good performance is expected when the query terms co-occur frequently in the collection (21).",
"Post-retrieval methods are usually more complex, where the top search results are retrieved and analyzed.",
"They are categorized into three main paradigms: clarity-based methods (28), robustness-based methods (22) and score distribution based methods (23, 29).",
"We pay special attention to two post-retrieval QPP methods; Query Feedback (22) and Clarity (23).",
"The Clarity method measures the coherence of the query's search results with respect to the corpus.",
"It is defined as the KL divergence between a language model induced from the result list and that induced from the corpus.",
"The Query Feedback method measures the robustness of the query's results to query perturbations.",
"It models retrieval as a communication channel.",
"The input is the query, the channel is the search system, and the set of results is the noisy output of the channel.",
"A new query is generated from the list of search (Liu et al., 2005) 13 Term Variance Quality (Liu et al., 2005 ) 6 TF-Disjoint Corpora Frequency (Lopes et al., 2012) 14 Weirdness (Ahmad et al., 1999) 7 C-value (Frantzi and Ananiadou, 1999) 15 NC-value (Frantzi and Ananiadou, 1999) 8 Glossex (Kozakov et al., 2004) 16 TermExtractor (Sclano and Velardi, 2007 ) Query Performance Prediction measures 17 Average IDF 24 Average ICTF (Inverse collection term frequency) (Plachouras et al., 2004) 18 Query Scope 25 Simplified Clarity Score (He and Ounis, 2004) 19 Similarity Collection Query (Zhao et al., 2008) 26 Query Coherence (He et al., 2008 ) 20 Average Variance (Zhao et al., 2008) 27 Average Entropy (Cristina, 2013) 21 Term Relatedness (Hauff et al., 2008) 28 Clarity (Cronen-Townsend et al., 2002) 22 Query Feedback (Zhou and Croft, 2007) 29 Normalized Query Commitment (Shtok et al., 2009 ) 23 Weighted Information Gain (Zhou and Croft, 2007) Table 1: Prior art measures considered in our work results, taking the terms with maximal contribution to the Clarity score, and then a second list of results is retrieved for that second query.",
"The overlap between the two lists is the robustness score.",
"Our suggested method was inspired by the Query Feedback measure, as detailed in the next section.",
"Integrated Term Scoring We adopt the supervised framework for TE (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) , considering each candidate target term as a learning instance.",
"For each candidate, we calculate a set of features over which learning and classification are performed.",
"The classification predicts which candidates are suitable as target terms for the diachronic thesaurus.",
"Our baseline system (TE) includes state-of-the-art TE measures as features, listed in the upper part of Table 1 .",
"Next, we introduce two system variants that integrate QPP measures as additional features.",
"The first system, TE-QPP T erm , applies the QPP measures to the candidate term as the query.",
"All QPP measures, listed in the lower part of Table 1 , are utilized except for the Query Feedback measure (22) (see below).",
"To verify which QPP features are actually beneficial for terminology scoring, we measure the marginal contribution of each feature via ablation tests in 10-fold cross validation over the training data (see Section 4.1).",
"Features which did not yield marginal contribution were not included 3 .",
"3 Removed features from TE-QPPT erm: 17, 19, 22, 23, The two systems, described so far, rely on corpus occurrences of the original candidate term, prioritizing relatively frequent terms.",
"In a diachronic corpus, however, a candidate term might be rare in its original modern form, yet frequently referred to by archaic forms.",
"Therefore, we adopt a query expansion strategy based on Pseudo Relevance Feedback, which expands a query based on analyzing the top retrieved documents.",
"In our setting, this approach takes advantage of a typical property of modern documents in a diachronic corpus, namely their temporally-mixed language.",
"Often, modern documents in a diachronic domain include ancient terms that were either preserved in modern language or appear as citations.",
"Therefore, an expanded query of a modern term, which retrieves only modern documents, is likely to pick some of these ancient terms as well.",
"Thus, the expanded query would likely retrieve both modern and ancient documents and would allow QPP measures to evaluate the query relevance across periods.",
"Therefore, our second integrated system, TE-QPP QE , utilizes the Pseudo Relevance Feedback Query Expansion approach to expand our modern candidate with topically-related terms.",
"First, similarly to the Query Feedback measure (measure (22) in the lower part of Table 1), we expand the candidate by adding terms with maximal contribution (top 5, in our experiments) to the Clarity score (Section 2.2).",
"Then, we calculate all QPP measures for the expanded query.",
"Since the expan- 24, 25. sions that we extract from the top retrieved documents typically include ancient terms as well, the new scores may better express the relevancy of the candidate's topic across the diachronic corpus.",
"We also performed feature selection, as done for the first system 4 .",
"Evaluation Evaluation Setting We applied our method to the diachronic corpus is the Responsa project Hebrew corpus 5 .",
"The Responsa corpus includes rabbinic case-law rulings which represent the historical-sociological milieu of real-life situations, collected over more than a thousand years, from the 11 th century until today.",
"The corpus consists of 81,993 documents, and was used for previous NLP and IR research (Choueka et al., 1971; Choueka et al., 1987; HaCohen-Kerner et al., 2008; Liebeskind et al., 2012; Zohar et al., 2013; .",
"The candidate target terms for our classification task were taken from the publicly available keylist of Hebrew Wikipedia entries 6 .",
"Since many of these tens of thousands entries, such as person names and place names, were not suitable as target terms, we first filtered them by Hebrew Named Entity Recognition 7 and manually.",
"Then, a list of approximately 5000 candidate target terms was manually annotated by two domain experts.",
"The experts decided which of the candidates corresponds to a concept that has been discussed significantly in our diachronic domain corpus.",
"Only candidates that the annotators agreed on their annotation were retained, and then balanced for equal number of positive and negative examples.",
"Consequently, the balanced training and test sets contain 500 and 200 candidates, respectively.",
"For classification, Weka's 8 Support Vector Machine supervised classifier with polynomial kernel was used.",
"We train the classifier with our training set and measure the accuracy on the test set.",
"Results Table 2 compares the classification performance of our baseline (TE) and integrated systems, (TE-QPP T erm ) and (TE-QPP QE ), proposed in Section 3.",
"Feature Set Accuracy (%) TE 61.5 TE-QPP T erm 65 TE-QPP QE 66.5 (McNemar, 1947) , on our diachronic corpus it seems to help.",
"Yet, when the QPP score is measured over the expanded candidate, and ancient documents are utilized, the performance increase is more notable (5 points) and the improvement over the baseline is statistically significant according to the McNemar's test with p<0.05.",
"We analyzed the false negative classifications of the baseline that were classified correctly by the QE-based configuration.",
"We found that their expanded forms contain ancient terms that help the system making the right decision.",
"For example, the Hebrew target term for slippers was expanded by the ancient expression corresponding to made of leather.",
"This is a useful expansion since in the ancient documents slippers are discussed in the context of fasts, as in two of the Jewish fasts wearing leather shoes is forbidden and people wear cloth-made slippers.",
"Conclusions and Future Work We introduced a method that combines features from two closely related tasks, terminology extraction and query performance prediction, to solve the task of target terms selection for a diachronic thesaurus.",
"In our diachronic setting, we showed that enriching TE measures with QPP measures, particularly when calculated on expanded candidates, significantly improves performance.",
"Our results suggest that it may be worth investigating this integrated approach also for other terminology extraction and QPP settings.",
"We plan to further explore the suggested method by utilizing additional query expansion algorithms.",
"In particular, to avoid expanding queries for which expansion degrade retrieval performance, we plan to investigate the selective query expansion approach (Cronen-Townsend et al., 2004) ."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Term Scoring Measures",
"Terminology Extraction",
"Query Performance Prediction",
"Integrated Term Scoring",
"Evaluation Setting",
"Results",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-19#paper-1013#slide-2 | Diachronic Thesaurus Our Task | Utilize a given candidate list of modern terms as input
Predict which candidates are relevant for the domain corpus | Utilize a given candidate list of modern terms as input
Predict which candidates are relevant for the domain corpus | [] |
GEM-SciDuet-train-19#paper-1013#slide-3 | 1013 | Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus | A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, which integrates features from two closely related fields: Terminology Extraction and Query Performance Prediction (QPP). Our method further expands modern candidate terms with ancient related terms, before assessing their corpus relevancy with QPP measures. We evaluate the empirical benefit of our method for a thesaurus for a diachronic Jewish corpus. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102
],
"paper_content_text": [
"Introduction In recent years, there has been growing interest in diachronic lexical resources, which comprise terms from different language periods.",
"(Borin and Forsberg, 2011; Riedl et al., 2014) .",
"These resources are mainly used for studying language change and supporting searches in historical domains, bridging the lexical gap between modern and ancient language.",
"In particular, we are interested in this paper in a certain type of diachronic thesaurus.",
"It contains entries for modern terms, denoted as target terms.",
"Each entry includes a list of ancient related terms.",
"Beyond being a historical linguistic resource, such thesaurus is useful for supporting searches in a diachronic corpus, composed of both modern and ancient documents.",
"For example, in our historical Jewish corpus, the modern Hebrew term for terminal patient 1 has only few verbatim occurrences, in modern documents, but this topic has been widely discussed in ancient periods.",
"A domain searcher needs the diachronic thesaurus to enrich the search with ancient synonyms or related terms, such as dying and living for the moment.",
"Prior work on diachronic thesauri addressed the problem of collecting relevant related terms for given thesaurus entries.",
"In this paper we focus on the complementary preceding task of collecting a relevant list of modern target terms for a diachronic thesaurus in a certain domain.",
"As a starting point, we assume that a list of meaningful terms in the modern language is given, such as titles of Wikipedia articles.",
"Then, our task is to automatically decide which of these candidate terms are likely to be relevant for the corpus domain and should be included in the thesaurus.",
"In other words, we need to decide which of the candidate modern terms corresponds to a concept that has been discussed significantly in the diachronic domain corpus.",
"Our task is closely related to term scoring in the known Terminology Extraction (TE) task in NLP.",
"The goal of corpus-based TE is to automatically extract prominent terms from a given corpus and score them for domain relevancy.",
"In our setting, since all the target terms are modern, we avoid extracting them from the diachronic corpus of modern and ancient language.",
"Instead, we use a given candidate list and apply only the term scoring phase.",
"As a starting point, we adopt a rich set of state-of-the-art TE scoring measures and integrate them as features in a common supervised classification approach (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) .",
"Given our Information Retrieval (IR) motivation, we notice a closely related task to TE, namely Query Performance Prediction (QPP).",
"QPP methods are designed to estimate the retrieval quality of search queries, by assessing their relevance to the text collection.",
"Therefore, QPP scoring measures seem to be potentially suitable also for our terminology scoring task, by considering the candidate term as a search query.",
"Some of the QPP measures are indeed similar in nature to the TE methods, analyzing the distribution of the query terms within the collection.",
"However, some of the QPP methods have different IR-biased characteristics and may provide a marginal contribution.",
"Therefore, we adopted them as additional features for our classifier and indeed observed a performance increase.",
"Most of the QPP methods prioritize query terms with high frequency in the corpus.",
"However, in a diachronic corpus, such criterion may sometimes be problematic.",
"A modern target term might appear only in few modern documents, while being referred to, via ancient terminology, also in ancient documents.",
"Therefore, we would like our prediction measure to be aware of these ancient documents as well.",
"Following a particular QPP measure (Zhou and Croft, 2007) , we address this problem through Query Expansion (QE).",
"Accordingly, our method first expands the query containing the modern candidate term, then calculates the QPP scores of the expanded query and then utilizes them as scoring features.",
"Combining the baseline features with our expansion-based QPP features yields additional improvement in the classification results.",
"Term Scoring Measures This section reviews common measures developed for Terminology Extraction (Section 2.1) and for Query Performance Prediction (Section 2.2).",
"Table 1 lists those measures that were considered as features in our system, as described in Section 3.",
"Terminology Extraction Terminology Extraction (TE) methods aim to identify terms that are frequently used in a specific domain.",
"Typically, linguistic processors (e.g.",
"POS tagger, phrase chunker) are used to filter out stop words and restrict candidate terms to nouns or noun phrases.",
"Then, statistical measures are used to rank the candidate terms.",
"There are two main terminological properties that the statistical measures identify: unithood and termhood.",
"Measures that express unithood indicate the collocation strength of units that comprise a single term.",
"Measures that express termhood indicate the statistical prominence of the term in the target do-main corpus.",
"For our task, we focus on the second property, since the candidates are taken from a key-list of terms whose coherence in the language is already known.",
"Measures expressing termhood are based either on frequency in the target corpus (1, 2, 3, 4, 9, 11, 12, 13 ) 2 , or on comparison with frequency in a reference background corpus (8, 14, 16) .",
"Recently, approaches which combine both unithood and termhood were investigated as well (7, 8, 15, 16) .",
"Query Performance Prediction Query Performance Prediction (QPP) aims to estimate the quality of answers that a search system would return in response to a particular query.",
"Statistical QPP methods are categorized into two types: pre-retrieval methods, analyzing the distribution of the query term within the document collection; and post-retrieval methods, additionally analyzing the search results.",
"Some of the preretrieval methods are similar to TE methods based on the same term frequency statistics.",
"Pre-retrieval methods measure various properties of the query: specificity (17, 18, 24, 25) , similarity to the corpus (19) , coherence of the documents containing the query terms (26), variance of the query terms' weights over the documents containing it (20); and relatedness, as good performance is expected when the query terms co-occur frequently in the collection (21).",
"Post-retrieval methods are usually more complex, where the top search results are retrieved and analyzed.",
"They are categorized into three main paradigms: clarity-based methods (28), robustness-based methods (22) and score distribution based methods (23, 29).",
"We pay special attention to two post-retrieval QPP methods; Query Feedback (22) and Clarity (23).",
"The Clarity method measures the coherence of the query's search results with respect to the corpus.",
"It is defined as the KL divergence between a language model induced from the result list and that induced from the corpus.",
"The Query Feedback method measures the robustness of the query's results to query perturbations.",
"It models retrieval as a communication channel.",
"The input is the query, the channel is the search system, and the set of results is the noisy output of the channel.",
"A new query is generated from the list of search (Liu et al., 2005) 13 Term Variance Quality (Liu et al., 2005 ) 6 TF-Disjoint Corpora Frequency (Lopes et al., 2012) 14 Weirdness (Ahmad et al., 1999) 7 C-value (Frantzi and Ananiadou, 1999) 15 NC-value (Frantzi and Ananiadou, 1999) 8 Glossex (Kozakov et al., 2004) 16 TermExtractor (Sclano and Velardi, 2007 ) Query Performance Prediction measures 17 Average IDF 24 Average ICTF (Inverse collection term frequency) (Plachouras et al., 2004) 18 Query Scope 25 Simplified Clarity Score (He and Ounis, 2004) 19 Similarity Collection Query (Zhao et al., 2008) 26 Query Coherence (He et al., 2008 ) 20 Average Variance (Zhao et al., 2008) 27 Average Entropy (Cristina, 2013) 21 Term Relatedness (Hauff et al., 2008) 28 Clarity (Cronen-Townsend et al., 2002) 22 Query Feedback (Zhou and Croft, 2007) 29 Normalized Query Commitment (Shtok et al., 2009 ) 23 Weighted Information Gain (Zhou and Croft, 2007) Table 1: Prior art measures considered in our work results, taking the terms with maximal contribution to the Clarity score, and then a second list of results is retrieved for that second query.",
"The overlap between the two lists is the robustness score.",
"Our suggested method was inspired by the Query Feedback measure, as detailed in the next section.",
"Integrated Term Scoring We adopt the supervised framework for TE (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) , considering each candidate target term as a learning instance.",
"For each candidate, we calculate a set of features over which learning and classification are performed.",
"The classification predicts which candidates are suitable as target terms for the diachronic thesaurus.",
"Our baseline system (TE) includes state-of-the-art TE measures as features, listed in the upper part of Table 1 .",
"Next, we introduce two system variants that integrate QPP measures as additional features.",
"The first system, TE-QPP T erm , applies the QPP measures to the candidate term as the query.",
"All QPP measures, listed in the lower part of Table 1 , are utilized except for the Query Feedback measure (22) (see below).",
"To verify which QPP features are actually beneficial for terminology scoring, we measure the marginal contribution of each feature via ablation tests in 10-fold cross validation over the training data (see Section 4.1).",
"Features which did not yield marginal contribution were not included 3 .",
"3 Removed features from TE-QPPT erm: 17, 19, 22, 23, The two systems, described so far, rely on corpus occurrences of the original candidate term, prioritizing relatively frequent terms.",
"In a diachronic corpus, however, a candidate term might be rare in its original modern form, yet frequently referred to by archaic forms.",
"Therefore, we adopt a query expansion strategy based on Pseudo Relevance Feedback, which expands a query based on analyzing the top retrieved documents.",
"In our setting, this approach takes advantage of a typical property of modern documents in a diachronic corpus, namely their temporally-mixed language.",
"Often, modern documents in a diachronic domain include ancient terms that were either preserved in modern language or appear as citations.",
"Therefore, an expanded query of a modern term, which retrieves only modern documents, is likely to pick some of these ancient terms as well.",
"Thus, the expanded query would likely retrieve both modern and ancient documents and would allow QPP measures to evaluate the query relevance across periods.",
"Therefore, our second integrated system, TE-QPP QE , utilizes the Pseudo Relevance Feedback Query Expansion approach to expand our modern candidate with topically-related terms.",
"First, similarly to the Query Feedback measure (measure (22) in the lower part of Table 1), we expand the candidate by adding terms with maximal contribution (top 5, in our experiments) to the Clarity score (Section 2.2).",
"Then, we calculate all QPP measures for the expanded query.",
"Since the expan- 24, 25. sions that we extract from the top retrieved documents typically include ancient terms as well, the new scores may better express the relevancy of the candidate's topic across the diachronic corpus.",
"We also performed feature selection, as done for the first system 4 .",
"Evaluation Evaluation Setting We applied our method to the diachronic corpus is the Responsa project Hebrew corpus 5 .",
"The Responsa corpus includes rabbinic case-law rulings which represent the historical-sociological milieu of real-life situations, collected over more than a thousand years, from the 11 th century until today.",
"The corpus consists of 81,993 documents, and was used for previous NLP and IR research (Choueka et al., 1971; Choueka et al., 1987; HaCohen-Kerner et al., 2008; Liebeskind et al., 2012; Zohar et al., 2013; .",
"The candidate target terms for our classification task were taken from the publicly available keylist of Hebrew Wikipedia entries 6 .",
"Since many of these tens of thousands entries, such as person names and place names, were not suitable as target terms, we first filtered them by Hebrew Named Entity Recognition 7 and manually.",
"Then, a list of approximately 5000 candidate target terms was manually annotated by two domain experts.",
"The experts decided which of the candidates corresponds to a concept that has been discussed significantly in our diachronic domain corpus.",
"Only candidates that the annotators agreed on their annotation were retained, and then balanced for equal number of positive and negative examples.",
"Consequently, the balanced training and test sets contain 500 and 200 candidates, respectively.",
"For classification, Weka's 8 Support Vector Machine supervised classifier with polynomial kernel was used.",
"We train the classifier with our training set and measure the accuracy on the test set.",
"Results Table 2 compares the classification performance of our baseline (TE) and integrated systems, (TE-QPP T erm ) and (TE-QPP QE ), proposed in Section 3.",
"Feature Set Accuracy (%) TE 61.5 TE-QPP T erm 65 TE-QPP QE 66.5 (McNemar, 1947) , on our diachronic corpus it seems to help.",
"Yet, when the QPP score is measured over the expanded candidate, and ancient documents are utilized, the performance increase is more notable (5 points) and the improvement over the baseline is statistically significant according to the McNemar's test with p<0.05.",
"We analyzed the false negative classifications of the baseline that were classified correctly by the QE-based configuration.",
"We found that their expanded forms contain ancient terms that help the system making the right decision.",
"For example, the Hebrew target term for slippers was expanded by the ancient expression corresponding to made of leather.",
"This is a useful expansion since in the ancient documents slippers are discussed in the context of fasts, as in two of the Jewish fasts wearing leather shoes is forbidden and people wear cloth-made slippers.",
"Conclusions and Future Work We introduced a method that combines features from two closely related tasks, terminology extraction and query performance prediction, to solve the task of target terms selection for a diachronic thesaurus.",
"In our diachronic setting, we showed that enriching TE measures with QPP measures, particularly when calculated on expanded candidates, significantly improves performance.",
"Our results suggest that it may be worth investigating this integrated approach also for other terminology extraction and QPP settings.",
"We plan to further explore the suggested method by utilizing additional query expansion algorithms.",
"In particular, to avoid expanding queries for which expansion degrade retrieval performance, we plan to investigate the selective query expansion approach (Cronen-Townsend et al., 2004) ."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Term Scoring Measures",
"Terminology Extraction",
"Query Performance Prediction",
"Integrated Term Scoring",
"Evaluation Setting",
"Results",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-19#paper-1013#slide-3 | Background Terminology Extraction TE | 1. Automatically extract prominent terms from a given corpus
Score candidate terms for domain relevancy
Statistical measures for identifying prominent terms
Frequencies in the target corpus (e.g. tf, tf-idf)
Comparison with frequencies in a reference background corpus | 1. Automatically extract prominent terms from a given corpus
Score candidate terms for domain relevancy
Statistical measures for identifying prominent terms
Frequencies in the target corpus (e.g. tf, tf-idf)
Comparison with frequencies in a reference background corpus | [] |
GEM-SciDuet-train-19#paper-1013#slide-4 | 1013 | Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus | A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, which integrates features from two closely related fields: Terminology Extraction and Query Performance Prediction (QPP). Our method further expands modern candidate terms with ancient related terms, before assessing their corpus relevancy with QPP measures. We evaluate the empirical benefit of our method for a thesaurus for a diachronic Jewish corpus. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102
],
"paper_content_text": [
"Introduction In recent years, there has been growing interest in diachronic lexical resources, which comprise terms from different language periods.",
"(Borin and Forsberg, 2011; Riedl et al., 2014) .",
"These resources are mainly used for studying language change and supporting searches in historical domains, bridging the lexical gap between modern and ancient language.",
"In particular, we are interested in this paper in a certain type of diachronic thesaurus.",
"It contains entries for modern terms, denoted as target terms.",
"Each entry includes a list of ancient related terms.",
"Beyond being a historical linguistic resource, such thesaurus is useful for supporting searches in a diachronic corpus, composed of both modern and ancient documents.",
"For example, in our historical Jewish corpus, the modern Hebrew term for terminal patient 1 has only few verbatim occurrences, in modern documents, but this topic has been widely discussed in ancient periods.",
"A domain searcher needs the diachronic thesaurus to enrich the search with ancient synonyms or related terms, such as dying and living for the moment.",
"Prior work on diachronic thesauri addressed the problem of collecting relevant related terms for given thesaurus entries.",
"In this paper we focus on the complementary preceding task of collecting a relevant list of modern target terms for a diachronic thesaurus in a certain domain.",
"As a starting point, we assume that a list of meaningful terms in the modern language is given, such as titles of Wikipedia articles.",
"Then, our task is to automatically decide which of these candidate terms are likely to be relevant for the corpus domain and should be included in the thesaurus.",
"In other words, we need to decide which of the candidate modern terms corresponds to a concept that has been discussed significantly in the diachronic domain corpus.",
"Our task is closely related to term scoring in the known Terminology Extraction (TE) task in NLP.",
"The goal of corpus-based TE is to automatically extract prominent terms from a given corpus and score them for domain relevancy.",
"In our setting, since all the target terms are modern, we avoid extracting them from the diachronic corpus of modern and ancient language.",
"Instead, we use a given candidate list and apply only the term scoring phase.",
"As a starting point, we adopt a rich set of state-of-the-art TE scoring measures and integrate them as features in a common supervised classification approach (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) .",
"Given our Information Retrieval (IR) motivation, we notice a closely related task to TE, namely Query Performance Prediction (QPP).",
"QPP methods are designed to estimate the retrieval quality of search queries, by assessing their relevance to the text collection.",
"Therefore, QPP scoring measures seem to be potentially suitable also for our terminology scoring task, by considering the candidate term as a search query.",
"Some of the QPP measures are indeed similar in nature to the TE methods, analyzing the distribution of the query terms within the collection.",
"However, some of the QPP methods have different IR-biased characteristics and may provide a marginal contribution.",
"Therefore, we adopted them as additional features for our classifier and indeed observed a performance increase.",
"Most of the QPP methods prioritize query terms with high frequency in the corpus.",
"However, in a diachronic corpus, such criterion may sometimes be problematic.",
"A modern target term might appear only in few modern documents, while being referred to, via ancient terminology, also in ancient documents.",
"Therefore, we would like our prediction measure to be aware of these ancient documents as well.",
"Following a particular QPP measure (Zhou and Croft, 2007) , we address this problem through Query Expansion (QE).",
"Accordingly, our method first expands the query containing the modern candidate term, then calculates the QPP scores of the expanded query and then utilizes them as scoring features.",
"Combining the baseline features with our expansion-based QPP features yields additional improvement in the classification results.",
"Term Scoring Measures This section reviews common measures developed for Terminology Extraction (Section 2.1) and for Query Performance Prediction (Section 2.2).",
"Table 1 lists those measures that were considered as features in our system, as described in Section 3.",
"Terminology Extraction Terminology Extraction (TE) methods aim to identify terms that are frequently used in a specific domain.",
"Typically, linguistic processors (e.g.",
"POS tagger, phrase chunker) are used to filter out stop words and restrict candidate terms to nouns or noun phrases.",
"Then, statistical measures are used to rank the candidate terms.",
"There are two main terminological properties that the statistical measures identify: unithood and termhood.",
"Measures that express unithood indicate the collocation strength of units that comprise a single term.",
"Measures that express termhood indicate the statistical prominence of the term in the target do-main corpus.",
"For our task, we focus on the second property, since the candidates are taken from a key-list of terms whose coherence in the language is already known.",
"Measures expressing termhood are based either on frequency in the target corpus (1, 2, 3, 4, 9, 11, 12, 13 ) 2 , or on comparison with frequency in a reference background corpus (8, 14, 16) .",
"Recently, approaches which combine both unithood and termhood were investigated as well (7, 8, 15, 16) .",
"Query Performance Prediction Query Performance Prediction (QPP) aims to estimate the quality of answers that a search system would return in response to a particular query.",
"Statistical QPP methods are categorized into two types: pre-retrieval methods, analyzing the distribution of the query term within the document collection; and post-retrieval methods, additionally analyzing the search results.",
"Some of the preretrieval methods are similar to TE methods based on the same term frequency statistics.",
"Pre-retrieval methods measure various properties of the query: specificity (17, 18, 24, 25) , similarity to the corpus (19) , coherence of the documents containing the query terms (26), variance of the query terms' weights over the documents containing it (20); and relatedness, as good performance is expected when the query terms co-occur frequently in the collection (21).",
"Post-retrieval methods are usually more complex, where the top search results are retrieved and analyzed.",
"They are categorized into three main paradigms: clarity-based methods (28), robustness-based methods (22) and score distribution based methods (23, 29).",
"We pay special attention to two post-retrieval QPP methods; Query Feedback (22) and Clarity (23).",
"The Clarity method measures the coherence of the query's search results with respect to the corpus.",
"It is defined as the KL divergence between a language model induced from the result list and that induced from the corpus.",
"The Query Feedback method measures the robustness of the query's results to query perturbations.",
"It models retrieval as a communication channel.",
"The input is the query, the channel is the search system, and the set of results is the noisy output of the channel.",
"A new query is generated from the list of search (Liu et al., 2005) 13 Term Variance Quality (Liu et al., 2005 ) 6 TF-Disjoint Corpora Frequency (Lopes et al., 2012) 14 Weirdness (Ahmad et al., 1999) 7 C-value (Frantzi and Ananiadou, 1999) 15 NC-value (Frantzi and Ananiadou, 1999) 8 Glossex (Kozakov et al., 2004) 16 TermExtractor (Sclano and Velardi, 2007 ) Query Performance Prediction measures 17 Average IDF 24 Average ICTF (Inverse collection term frequency) (Plachouras et al., 2004) 18 Query Scope 25 Simplified Clarity Score (He and Ounis, 2004) 19 Similarity Collection Query (Zhao et al., 2008) 26 Query Coherence (He et al., 2008 ) 20 Average Variance (Zhao et al., 2008) 27 Average Entropy (Cristina, 2013) 21 Term Relatedness (Hauff et al., 2008) 28 Clarity (Cronen-Townsend et al., 2002) 22 Query Feedback (Zhou and Croft, 2007) 29 Normalized Query Commitment (Shtok et al., 2009 ) 23 Weighted Information Gain (Zhou and Croft, 2007) Table 1: Prior art measures considered in our work results, taking the terms with maximal contribution to the Clarity score, and then a second list of results is retrieved for that second query.",
"The overlap between the two lists is the robustness score.",
"Our suggested method was inspired by the Query Feedback measure, as detailed in the next section.",
"Integrated Term Scoring We adopt the supervised framework for TE (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) , considering each candidate target term as a learning instance.",
"For each candidate, we calculate a set of features over which learning and classification are performed.",
"The classification predicts which candidates are suitable as target terms for the diachronic thesaurus.",
"Our baseline system (TE) includes state-of-the-art TE measures as features, listed in the upper part of Table 1 .",
"Next, we introduce two system variants that integrate QPP measures as additional features.",
"The first system, TE-QPP T erm , applies the QPP measures to the candidate term as the query.",
"All QPP measures, listed in the lower part of Table 1 , are utilized except for the Query Feedback measure (22) (see below).",
"To verify which QPP features are actually beneficial for terminology scoring, we measure the marginal contribution of each feature via ablation tests in 10-fold cross validation over the training data (see Section 4.1).",
"Features which did not yield marginal contribution were not included 3 .",
"3 Removed features from TE-QPPT erm: 17, 19, 22, 23, The two systems, described so far, rely on corpus occurrences of the original candidate term, prioritizing relatively frequent terms.",
"In a diachronic corpus, however, a candidate term might be rare in its original modern form, yet frequently referred to by archaic forms.",
"Therefore, we adopt a query expansion strategy based on Pseudo Relevance Feedback, which expands a query based on analyzing the top retrieved documents.",
"In our setting, this approach takes advantage of a typical property of modern documents in a diachronic corpus, namely their temporally-mixed language.",
"Often, modern documents in a diachronic domain include ancient terms that were either preserved in modern language or appear as citations.",
"Therefore, an expanded query of a modern term, which retrieves only modern documents, is likely to pick some of these ancient terms as well.",
"Thus, the expanded query would likely retrieve both modern and ancient documents and would allow QPP measures to evaluate the query relevance across periods.",
"Therefore, our second integrated system, TE-QPP QE , utilizes the Pseudo Relevance Feedback Query Expansion approach to expand our modern candidate with topically-related terms.",
"First, similarly to the Query Feedback measure (measure (22) in the lower part of Table 1), we expand the candidate by adding terms with maximal contribution (top 5, in our experiments) to the Clarity score (Section 2.2).",
"Then, we calculate all QPP measures for the expanded query.",
"Since the expan- 24, 25. sions that we extract from the top retrieved documents typically include ancient terms as well, the new scores may better express the relevancy of the candidate's topic across the diachronic corpus.",
"We also performed feature selection, as done for the first system 4 .",
"Evaluation Evaluation Setting We applied our method to the diachronic corpus is the Responsa project Hebrew corpus 5 .",
"The Responsa corpus includes rabbinic case-law rulings which represent the historical-sociological milieu of real-life situations, collected over more than a thousand years, from the 11 th century until today.",
"The corpus consists of 81,993 documents, and was used for previous NLP and IR research (Choueka et al., 1971; Choueka et al., 1987; HaCohen-Kerner et al., 2008; Liebeskind et al., 2012; Zohar et al., 2013; .",
"The candidate target terms for our classification task were taken from the publicly available keylist of Hebrew Wikipedia entries 6 .",
"Since many of these tens of thousands entries, such as person names and place names, were not suitable as target terms, we first filtered them by Hebrew Named Entity Recognition 7 and manually.",
"Then, a list of approximately 5000 candidate target terms was manually annotated by two domain experts.",
"The experts decided which of the candidates corresponds to a concept that has been discussed significantly in our diachronic domain corpus.",
"Only candidates that the annotators agreed on their annotation were retained, and then balanced for equal number of positive and negative examples.",
"Consequently, the balanced training and test sets contain 500 and 200 candidates, respectively.",
"For classification, Weka's 8 Support Vector Machine supervised classifier with polynomial kernel was used.",
"We train the classifier with our training set and measure the accuracy on the test set.",
"Results Table 2 compares the classification performance of our baseline (TE) and integrated systems, (TE-QPP T erm ) and (TE-QPP QE ), proposed in Section 3.",
"Feature Set Accuracy (%) TE 61.5 TE-QPP T erm 65 TE-QPP QE 66.5 (McNemar, 1947) , on our diachronic corpus it seems to help.",
"Yet, when the QPP score is measured over the expanded candidate, and ancient documents are utilized, the performance increase is more notable (5 points) and the improvement over the baseline is statistically significant according to the McNemar's test with p<0.05.",
"We analyzed the false negative classifications of the baseline that were classified correctly by the QE-based configuration.",
"We found that their expanded forms contain ancient terms that help the system making the right decision.",
"For example, the Hebrew target term for slippers was expanded by the ancient expression corresponding to made of leather.",
"This is a useful expansion since in the ancient documents slippers are discussed in the context of fasts, as in two of the Jewish fasts wearing leather shoes is forbidden and people wear cloth-made slippers.",
"Conclusions and Future Work We introduced a method that combines features from two closely related tasks, terminology extraction and query performance prediction, to solve the task of target terms selection for a diachronic thesaurus.",
"In our diachronic setting, we showed that enriching TE measures with QPP measures, particularly when calculated on expanded candidates, significantly improves performance.",
"Our results suggest that it may be worth investigating this integrated approach also for other terminology extraction and QPP settings.",
"We plan to further explore the suggested method by utilizing additional query expansion algorithms.",
"In particular, to avoid expanding queries for which expansion degrade retrieval performance, we plan to investigate the selective query expansion approach (Cronen-Townsend et al., 2004) ."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Term Scoring Measures",
"Terminology Extraction",
"Query Performance Prediction",
"Integrated Term Scoring",
"Evaluation Setting",
"Results",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-19#paper-1013#slide-4 | Supervised framework for TE | Candidate target terms are learning instances
Calculate a set of features for each candidate
Classification predicts which candidates are suitable
Features : state-of-the-art TE scoring measures | Candidate target terms are learning instances
Calculate a set of features for each candidate
Classification predicts which candidates are suitable
Features : state-of-the-art TE scoring measures | [] |
GEM-SciDuet-train-19#paper-1013#slide-5 | 1013 | Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus | A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, which integrates features from two closely related fields: Terminology Extraction and Query Performance Prediction (QPP). Our method further expands modern candidate terms with ancient related terms, before assessing their corpus relevancy with QPP measures. We evaluate the empirical benefit of our method for a thesaurus for a diachronic Jewish corpus. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102
],
"paper_content_text": [
"Introduction In recent years, there has been growing interest in diachronic lexical resources, which comprise terms from different language periods.",
"(Borin and Forsberg, 2011; Riedl et al., 2014) .",
"These resources are mainly used for studying language change and supporting searches in historical domains, bridging the lexical gap between modern and ancient language.",
"In particular, we are interested in this paper in a certain type of diachronic thesaurus.",
"It contains entries for modern terms, denoted as target terms.",
"Each entry includes a list of ancient related terms.",
"Beyond being a historical linguistic resource, such thesaurus is useful for supporting searches in a diachronic corpus, composed of both modern and ancient documents.",
"For example, in our historical Jewish corpus, the modern Hebrew term for terminal patient 1 has only few verbatim occurrences, in modern documents, but this topic has been widely discussed in ancient periods.",
"A domain searcher needs the diachronic thesaurus to enrich the search with ancient synonyms or related terms, such as dying and living for the moment.",
"Prior work on diachronic thesauri addressed the problem of collecting relevant related terms for given thesaurus entries.",
"In this paper we focus on the complementary preceding task of collecting a relevant list of modern target terms for a diachronic thesaurus in a certain domain.",
"As a starting point, we assume that a list of meaningful terms in the modern language is given, such as titles of Wikipedia articles.",
"Then, our task is to automatically decide which of these candidate terms are likely to be relevant for the corpus domain and should be included in the thesaurus.",
"In other words, we need to decide which of the candidate modern terms corresponds to a concept that has been discussed significantly in the diachronic domain corpus.",
"Our task is closely related to term scoring in the known Terminology Extraction (TE) task in NLP.",
"The goal of corpus-based TE is to automatically extract prominent terms from a given corpus and score them for domain relevancy.",
"In our setting, since all the target terms are modern, we avoid extracting them from the diachronic corpus of modern and ancient language.",
"Instead, we use a given candidate list and apply only the term scoring phase.",
"As a starting point, we adopt a rich set of state-of-the-art TE scoring measures and integrate them as features in a common supervised classification approach (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) .",
"Given our Information Retrieval (IR) motivation, we notice a closely related task to TE, namely Query Performance Prediction (QPP).",
"QPP methods are designed to estimate the retrieval quality of search queries, by assessing their relevance to the text collection.",
"Therefore, QPP scoring measures seem to be potentially suitable also for our terminology scoring task, by considering the candidate term as a search query.",
"Some of the QPP measures are indeed similar in nature to the TE methods, analyzing the distribution of the query terms within the collection.",
"However, some of the QPP methods have different IR-biased characteristics and may provide a marginal contribution.",
"Therefore, we adopted them as additional features for our classifier and indeed observed a performance increase.",
"Most of the QPP methods prioritize query terms with high frequency in the corpus.",
"However, in a diachronic corpus, such criterion may sometimes be problematic.",
"A modern target term might appear only in few modern documents, while being referred to, via ancient terminology, also in ancient documents.",
"Therefore, we would like our prediction measure to be aware of these ancient documents as well.",
"Following a particular QPP measure (Zhou and Croft, 2007) , we address this problem through Query Expansion (QE).",
"Accordingly, our method first expands the query containing the modern candidate term, then calculates the QPP scores of the expanded query and then utilizes them as scoring features.",
"Combining the baseline features with our expansion-based QPP features yields additional improvement in the classification results.",
"Term Scoring Measures This section reviews common measures developed for Terminology Extraction (Section 2.1) and for Query Performance Prediction (Section 2.2).",
"Table 1 lists those measures that were considered as features in our system, as described in Section 3.",
"Terminology Extraction Terminology Extraction (TE) methods aim to identify terms that are frequently used in a specific domain.",
"Typically, linguistic processors (e.g.",
"POS tagger, phrase chunker) are used to filter out stop words and restrict candidate terms to nouns or noun phrases.",
"Then, statistical measures are used to rank the candidate terms.",
"There are two main terminological properties that the statistical measures identify: unithood and termhood.",
"Measures that express unithood indicate the collocation strength of units that comprise a single term.",
"Measures that express termhood indicate the statistical prominence of the term in the target do-main corpus.",
"For our task, we focus on the second property, since the candidates are taken from a key-list of terms whose coherence in the language is already known.",
"Measures expressing termhood are based either on frequency in the target corpus (1, 2, 3, 4, 9, 11, 12, 13 ) 2 , or on comparison with frequency in a reference background corpus (8, 14, 16) .",
"Recently, approaches which combine both unithood and termhood were investigated as well (7, 8, 15, 16) .",
"Query Performance Prediction Query Performance Prediction (QPP) aims to estimate the quality of answers that a search system would return in response to a particular query.",
"Statistical QPP methods are categorized into two types: pre-retrieval methods, analyzing the distribution of the query term within the document collection; and post-retrieval methods, additionally analyzing the search results.",
"Some of the preretrieval methods are similar to TE methods based on the same term frequency statistics.",
"Pre-retrieval methods measure various properties of the query: specificity (17, 18, 24, 25) , similarity to the corpus (19) , coherence of the documents containing the query terms (26), variance of the query terms' weights over the documents containing it (20); and relatedness, as good performance is expected when the query terms co-occur frequently in the collection (21).",
"Post-retrieval methods are usually more complex, where the top search results are retrieved and analyzed.",
"They are categorized into three main paradigms: clarity-based methods (28), robustness-based methods (22) and score distribution based methods (23, 29).",
"We pay special attention to two post-retrieval QPP methods; Query Feedback (22) and Clarity (23).",
"The Clarity method measures the coherence of the query's search results with respect to the corpus.",
"It is defined as the KL divergence between a language model induced from the result list and that induced from the corpus.",
"The Query Feedback method measures the robustness of the query's results to query perturbations.",
"It models retrieval as a communication channel.",
"The input is the query, the channel is the search system, and the set of results is the noisy output of the channel.",
"A new query is generated from the list of search (Liu et al., 2005) 13 Term Variance Quality (Liu et al., 2005 ) 6 TF-Disjoint Corpora Frequency (Lopes et al., 2012) 14 Weirdness (Ahmad et al., 1999) 7 C-value (Frantzi and Ananiadou, 1999) 15 NC-value (Frantzi and Ananiadou, 1999) 8 Glossex (Kozakov et al., 2004) 16 TermExtractor (Sclano and Velardi, 2007 ) Query Performance Prediction measures 17 Average IDF 24 Average ICTF (Inverse collection term frequency) (Plachouras et al., 2004) 18 Query Scope 25 Simplified Clarity Score (He and Ounis, 2004) 19 Similarity Collection Query (Zhao et al., 2008) 26 Query Coherence (He et al., 2008 ) 20 Average Variance (Zhao et al., 2008) 27 Average Entropy (Cristina, 2013) 21 Term Relatedness (Hauff et al., 2008) 28 Clarity (Cronen-Townsend et al., 2002) 22 Query Feedback (Zhou and Croft, 2007) 29 Normalized Query Commitment (Shtok et al., 2009 ) 23 Weighted Information Gain (Zhou and Croft, 2007) Table 1: Prior art measures considered in our work results, taking the terms with maximal contribution to the Clarity score, and then a second list of results is retrieved for that second query.",
"The overlap between the two lists is the robustness score.",
"Our suggested method was inspired by the Query Feedback measure, as detailed in the next section.",
"Integrated Term Scoring We adopt the supervised framework for TE (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) , considering each candidate target term as a learning instance.",
"For each candidate, we calculate a set of features over which learning and classification are performed.",
"The classification predicts which candidates are suitable as target terms for the diachronic thesaurus.",
"Our baseline system (TE) includes state-of-the-art TE measures as features, listed in the upper part of Table 1 .",
"Next, we introduce two system variants that integrate QPP measures as additional features.",
"The first system, TE-QPP T erm , applies the QPP measures to the candidate term as the query.",
"All QPP measures, listed in the lower part of Table 1 , are utilized except for the Query Feedback measure (22) (see below).",
"To verify which QPP features are actually beneficial for terminology scoring, we measure the marginal contribution of each feature via ablation tests in 10-fold cross validation over the training data (see Section 4.1).",
"Features which did not yield marginal contribution were not included 3 .",
"3 Removed features from TE-QPPT erm: 17, 19, 22, 23, The two systems, described so far, rely on corpus occurrences of the original candidate term, prioritizing relatively frequent terms.",
"In a diachronic corpus, however, a candidate term might be rare in its original modern form, yet frequently referred to by archaic forms.",
"Therefore, we adopt a query expansion strategy based on Pseudo Relevance Feedback, which expands a query based on analyzing the top retrieved documents.",
"In our setting, this approach takes advantage of a typical property of modern documents in a diachronic corpus, namely their temporally-mixed language.",
"Often, modern documents in a diachronic domain include ancient terms that were either preserved in modern language or appear as citations.",
"Therefore, an expanded query of a modern term, which retrieves only modern documents, is likely to pick some of these ancient terms as well.",
"Thus, the expanded query would likely retrieve both modern and ancient documents and would allow QPP measures to evaluate the query relevance across periods.",
"Therefore, our second integrated system, TE-QPP QE , utilizes the Pseudo Relevance Feedback Query Expansion approach to expand our modern candidate with topically-related terms.",
"First, similarly to the Query Feedback measure (measure (22) in the lower part of Table 1), we expand the candidate by adding terms with maximal contribution (top 5, in our experiments) to the Clarity score (Section 2.2).",
"Then, we calculate all QPP measures for the expanded query.",
"Since the expan- 24, 25. sions that we extract from the top retrieved documents typically include ancient terms as well, the new scores may better express the relevancy of the candidate's topic across the diachronic corpus.",
"We also performed feature selection, as done for the first system 4 .",
"Evaluation Evaluation Setting We applied our method to the diachronic corpus is the Responsa project Hebrew corpus 5 .",
"The Responsa corpus includes rabbinic case-law rulings which represent the historical-sociological milieu of real-life situations, collected over more than a thousand years, from the 11 th century until today.",
"The corpus consists of 81,993 documents, and was used for previous NLP and IR research (Choueka et al., 1971; Choueka et al., 1987; HaCohen-Kerner et al., 2008; Liebeskind et al., 2012; Zohar et al., 2013; .",
"The candidate target terms for our classification task were taken from the publicly available keylist of Hebrew Wikipedia entries 6 .",
"Since many of these tens of thousands entries, such as person names and place names, were not suitable as target terms, we first filtered them by Hebrew Named Entity Recognition 7 and manually.",
"Then, a list of approximately 5000 candidate target terms was manually annotated by two domain experts.",
"The experts decided which of the candidates corresponds to a concept that has been discussed significantly in our diachronic domain corpus.",
"Only candidates that the annotators agreed on their annotation were retained, and then balanced for equal number of positive and negative examples.",
"Consequently, the balanced training and test sets contain 500 and 200 candidates, respectively.",
"For classification, Weka's 8 Support Vector Machine supervised classifier with polynomial kernel was used.",
"We train the classifier with our training set and measure the accuracy on the test set.",
"Results Table 2 compares the classification performance of our baseline (TE) and integrated systems, (TE-QPP T erm ) and (TE-QPP QE ), proposed in Section 3.",
"Feature Set Accuracy (%) TE 61.5 TE-QPP T erm 65 TE-QPP QE 66.5 (McNemar, 1947) , on our diachronic corpus it seems to help.",
"Yet, when the QPP score is measured over the expanded candidate, and ancient documents are utilized, the performance increase is more notable (5 points) and the improvement over the baseline is statistically significant according to the McNemar's test with p<0.05.",
"We analyzed the false negative classifications of the baseline that were classified correctly by the QE-based configuration.",
"We found that their expanded forms contain ancient terms that help the system making the right decision.",
"For example, the Hebrew target term for slippers was expanded by the ancient expression corresponding to made of leather.",
"This is a useful expansion since in the ancient documents slippers are discussed in the context of fasts, as in two of the Jewish fasts wearing leather shoes is forbidden and people wear cloth-made slippers.",
"Conclusions and Future Work We introduced a method that combines features from two closely related tasks, terminology extraction and query performance prediction, to solve the task of target terms selection for a diachronic thesaurus.",
"In our diachronic setting, we showed that enriching TE measures with QPP measures, particularly when calculated on expanded candidates, significantly improves performance.",
"Our results suggest that it may be worth investigating this integrated approach also for other terminology extraction and QPP settings.",
"We plan to further explore the suggested method by utilizing additional query expansion algorithms.",
"In particular, to avoid expanding queries for which expansion degrade retrieval performance, we plan to investigate the selective query expansion approach (Cronen-Townsend et al., 2004) ."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Term Scoring Measures",
"Terminology Extraction",
"Query Performance Prediction",
"Integrated Term Scoring",
"Evaluation Setting",
"Results",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-19#paper-1013#slide-5 | Contributions | Integrating Query Performance Prediction in term scoring
2. Penetrating to ancient texts, via query expansion | Integrating Query Performance Prediction in term scoring
2. Penetrating to ancient texts, via query expansion | [] |
GEM-SciDuet-train-19#paper-1013#slide-6 | 1013 | Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus | A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, which integrates features from two closely related fields: Terminology Extraction and Query Performance Prediction (QPP). Our method further expands modern candidate terms with ancient related terms, before assessing their corpus relevancy with QPP measures. We evaluate the empirical benefit of our method for a thesaurus for a diachronic Jewish corpus. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102
],
"paper_content_text": [
"Introduction In recent years, there has been growing interest in diachronic lexical resources, which comprise terms from different language periods.",
"(Borin and Forsberg, 2011; Riedl et al., 2014) .",
"These resources are mainly used for studying language change and supporting searches in historical domains, bridging the lexical gap between modern and ancient language.",
"In particular, we are interested in this paper in a certain type of diachronic thesaurus.",
"It contains entries for modern terms, denoted as target terms.",
"Each entry includes a list of ancient related terms.",
"Beyond being a historical linguistic resource, such thesaurus is useful for supporting searches in a diachronic corpus, composed of both modern and ancient documents.",
"For example, in our historical Jewish corpus, the modern Hebrew term for terminal patient 1 has only few verbatim occurrences, in modern documents, but this topic has been widely discussed in ancient periods.",
"A domain searcher needs the diachronic thesaurus to enrich the search with ancient synonyms or related terms, such as dying and living for the moment.",
"Prior work on diachronic thesauri addressed the problem of collecting relevant related terms for given thesaurus entries.",
"In this paper we focus on the complementary preceding task of collecting a relevant list of modern target terms for a diachronic thesaurus in a certain domain.",
"As a starting point, we assume that a list of meaningful terms in the modern language is given, such as titles of Wikipedia articles.",
"Then, our task is to automatically decide which of these candidate terms are likely to be relevant for the corpus domain and should be included in the thesaurus.",
"In other words, we need to decide which of the candidate modern terms corresponds to a concept that has been discussed significantly in the diachronic domain corpus.",
"Our task is closely related to term scoring in the known Terminology Extraction (TE) task in NLP.",
"The goal of corpus-based TE is to automatically extract prominent terms from a given corpus and score them for domain relevancy.",
"In our setting, since all the target terms are modern, we avoid extracting them from the diachronic corpus of modern and ancient language.",
"Instead, we use a given candidate list and apply only the term scoring phase.",
"As a starting point, we adopt a rich set of state-of-the-art TE scoring measures and integrate them as features in a common supervised classification approach (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) .",
"Given our Information Retrieval (IR) motivation, we notice a closely related task to TE, namely Query Performance Prediction (QPP).",
"QPP methods are designed to estimate the retrieval quality of search queries, by assessing their relevance to the text collection.",
"Therefore, QPP scoring measures seem to be potentially suitable also for our terminology scoring task, by considering the candidate term as a search query.",
"Some of the QPP measures are indeed similar in nature to the TE methods, analyzing the distribution of the query terms within the collection.",
"However, some of the QPP methods have different IR-biased characteristics and may provide a marginal contribution.",
"Therefore, we adopted them as additional features for our classifier and indeed observed a performance increase.",
"Most of the QPP methods prioritize query terms with high frequency in the corpus.",
"However, in a diachronic corpus, such criterion may sometimes be problematic.",
"A modern target term might appear only in few modern documents, while being referred to, via ancient terminology, also in ancient documents.",
"Therefore, we would like our prediction measure to be aware of these ancient documents as well.",
"Following a particular QPP measure (Zhou and Croft, 2007) , we address this problem through Query Expansion (QE).",
"Accordingly, our method first expands the query containing the modern candidate term, then calculates the QPP scores of the expanded query and then utilizes them as scoring features.",
"Combining the baseline features with our expansion-based QPP features yields additional improvement in the classification results.",
"Term Scoring Measures This section reviews common measures developed for Terminology Extraction (Section 2.1) and for Query Performance Prediction (Section 2.2).",
"Table 1 lists those measures that were considered as features in our system, as described in Section 3.",
"Terminology Extraction Terminology Extraction (TE) methods aim to identify terms that are frequently used in a specific domain.",
"Typically, linguistic processors (e.g.",
"POS tagger, phrase chunker) are used to filter out stop words and restrict candidate terms to nouns or noun phrases.",
"Then, statistical measures are used to rank the candidate terms.",
"There are two main terminological properties that the statistical measures identify: unithood and termhood.",
"Measures that express unithood indicate the collocation strength of units that comprise a single term.",
"Measures that express termhood indicate the statistical prominence of the term in the target do-main corpus.",
"For our task, we focus on the second property, since the candidates are taken from a key-list of terms whose coherence in the language is already known.",
"Measures expressing termhood are based either on frequency in the target corpus (1, 2, 3, 4, 9, 11, 12, 13 ) 2 , or on comparison with frequency in a reference background corpus (8, 14, 16) .",
"Recently, approaches which combine both unithood and termhood were investigated as well (7, 8, 15, 16) .",
"Query Performance Prediction Query Performance Prediction (QPP) aims to estimate the quality of answers that a search system would return in response to a particular query.",
"Statistical QPP methods are categorized into two types: pre-retrieval methods, analyzing the distribution of the query term within the document collection; and post-retrieval methods, additionally analyzing the search results.",
"Some of the preretrieval methods are similar to TE methods based on the same term frequency statistics.",
"Pre-retrieval methods measure various properties of the query: specificity (17, 18, 24, 25) , similarity to the corpus (19) , coherence of the documents containing the query terms (26), variance of the query terms' weights over the documents containing it (20); and relatedness, as good performance is expected when the query terms co-occur frequently in the collection (21).",
"Post-retrieval methods are usually more complex, where the top search results are retrieved and analyzed.",
"They are categorized into three main paradigms: clarity-based methods (28), robustness-based methods (22) and score distribution based methods (23, 29).",
"We pay special attention to two post-retrieval QPP methods; Query Feedback (22) and Clarity (23).",
"The Clarity method measures the coherence of the query's search results with respect to the corpus.",
"It is defined as the KL divergence between a language model induced from the result list and that induced from the corpus.",
"The Query Feedback method measures the robustness of the query's results to query perturbations.",
"It models retrieval as a communication channel.",
"The input is the query, the channel is the search system, and the set of results is the noisy output of the channel.",
"A new query is generated from the list of search (Liu et al., 2005) 13 Term Variance Quality (Liu et al., 2005 ) 6 TF-Disjoint Corpora Frequency (Lopes et al., 2012) 14 Weirdness (Ahmad et al., 1999) 7 C-value (Frantzi and Ananiadou, 1999) 15 NC-value (Frantzi and Ananiadou, 1999) 8 Glossex (Kozakov et al., 2004) 16 TermExtractor (Sclano and Velardi, 2007 ) Query Performance Prediction measures 17 Average IDF 24 Average ICTF (Inverse collection term frequency) (Plachouras et al., 2004) 18 Query Scope 25 Simplified Clarity Score (He and Ounis, 2004) 19 Similarity Collection Query (Zhao et al., 2008) 26 Query Coherence (He et al., 2008 ) 20 Average Variance (Zhao et al., 2008) 27 Average Entropy (Cristina, 2013) 21 Term Relatedness (Hauff et al., 2008) 28 Clarity (Cronen-Townsend et al., 2002) 22 Query Feedback (Zhou and Croft, 2007) 29 Normalized Query Commitment (Shtok et al., 2009 ) 23 Weighted Information Gain (Zhou and Croft, 2007) Table 1: Prior art measures considered in our work results, taking the terms with maximal contribution to the Clarity score, and then a second list of results is retrieved for that second query.",
"The overlap between the two lists is the robustness score.",
"Our suggested method was inspired by the Query Feedback measure, as detailed in the next section.",
"Integrated Term Scoring We adopt the supervised framework for TE (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) , considering each candidate target term as a learning instance.",
"For each candidate, we calculate a set of features over which learning and classification are performed.",
"The classification predicts which candidates are suitable as target terms for the diachronic thesaurus.",
"Our baseline system (TE) includes state-of-the-art TE measures as features, listed in the upper part of Table 1 .",
"Next, we introduce two system variants that integrate QPP measures as additional features.",
"The first system, TE-QPP T erm , applies the QPP measures to the candidate term as the query.",
"All QPP measures, listed in the lower part of Table 1 , are utilized except for the Query Feedback measure (22) (see below).",
"To verify which QPP features are actually beneficial for terminology scoring, we measure the marginal contribution of each feature via ablation tests in 10-fold cross validation over the training data (see Section 4.1).",
"Features which did not yield marginal contribution were not included 3 .",
"3 Removed features from TE-QPPT erm: 17, 19, 22, 23, The two systems, described so far, rely on corpus occurrences of the original candidate term, prioritizing relatively frequent terms.",
"In a diachronic corpus, however, a candidate term might be rare in its original modern form, yet frequently referred to by archaic forms.",
"Therefore, we adopt a query expansion strategy based on Pseudo Relevance Feedback, which expands a query based on analyzing the top retrieved documents.",
"In our setting, this approach takes advantage of a typical property of modern documents in a diachronic corpus, namely their temporally-mixed language.",
"Often, modern documents in a diachronic domain include ancient terms that were either preserved in modern language or appear as citations.",
"Therefore, an expanded query of a modern term, which retrieves only modern documents, is likely to pick some of these ancient terms as well.",
"Thus, the expanded query would likely retrieve both modern and ancient documents and would allow QPP measures to evaluate the query relevance across periods.",
"Therefore, our second integrated system, TE-QPP QE , utilizes the Pseudo Relevance Feedback Query Expansion approach to expand our modern candidate with topically-related terms.",
"First, similarly to the Query Feedback measure (measure (22) in the lower part of Table 1), we expand the candidate by adding terms with maximal contribution (top 5, in our experiments) to the Clarity score (Section 2.2).",
"Then, we calculate all QPP measures for the expanded query.",
"Since the expan- 24, 25. sions that we extract from the top retrieved documents typically include ancient terms as well, the new scores may better express the relevancy of the candidate's topic across the diachronic corpus.",
"We also performed feature selection, as done for the first system 4 .",
"Evaluation Evaluation Setting We applied our method to the diachronic corpus is the Responsa project Hebrew corpus 5 .",
"The Responsa corpus includes rabbinic case-law rulings which represent the historical-sociological milieu of real-life situations, collected over more than a thousand years, from the 11 th century until today.",
"The corpus consists of 81,993 documents, and was used for previous NLP and IR research (Choueka et al., 1971; Choueka et al., 1987; HaCohen-Kerner et al., 2008; Liebeskind et al., 2012; Zohar et al., 2013; .",
"The candidate target terms for our classification task were taken from the publicly available keylist of Hebrew Wikipedia entries 6 .",
"Since many of these tens of thousands entries, such as person names and place names, were not suitable as target terms, we first filtered them by Hebrew Named Entity Recognition 7 and manually.",
"Then, a list of approximately 5000 candidate target terms was manually annotated by two domain experts.",
"The experts decided which of the candidates corresponds to a concept that has been discussed significantly in our diachronic domain corpus.",
"Only candidates that the annotators agreed on their annotation were retained, and then balanced for equal number of positive and negative examples.",
"Consequently, the balanced training and test sets contain 500 and 200 candidates, respectively.",
"For classification, Weka's 8 Support Vector Machine supervised classifier with polynomial kernel was used.",
"We train the classifier with our training set and measure the accuracy on the test set.",
"Results Table 2 compares the classification performance of our baseline (TE) and integrated systems, (TE-QPP T erm ) and (TE-QPP QE ), proposed in Section 3.",
"Feature Set Accuracy (%) TE 61.5 TE-QPP T erm 65 TE-QPP QE 66.5 (McNemar, 1947) , on our diachronic corpus it seems to help.",
"Yet, when the QPP score is measured over the expanded candidate, and ancient documents are utilized, the performance increase is more notable (5 points) and the improvement over the baseline is statistically significant according to the McNemar's test with p<0.05.",
"We analyzed the false negative classifications of the baseline that were classified correctly by the QE-based configuration.",
"We found that their expanded forms contain ancient terms that help the system making the right decision.",
"For example, the Hebrew target term for slippers was expanded by the ancient expression corresponding to made of leather.",
"This is a useful expansion since in the ancient documents slippers are discussed in the context of fasts, as in two of the Jewish fasts wearing leather shoes is forbidden and people wear cloth-made slippers.",
"Conclusions and Future Work We introduced a method that combines features from two closely related tasks, terminology extraction and query performance prediction, to solve the task of target terms selection for a diachronic thesaurus.",
"In our diachronic setting, we showed that enriching TE measures with QPP measures, particularly when calculated on expanded candidates, significantly improves performance.",
"Our results suggest that it may be worth investigating this integrated approach also for other terminology extraction and QPP settings.",
"We plan to further explore the suggested method by utilizing additional query expansion algorithms.",
"In particular, to avoid expanding queries for which expansion degrade retrieval performance, we plan to investigate the selective query expansion approach (Cronen-Townsend et al., 2004) ."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Term Scoring Measures",
"Terminology Extraction",
"Query Performance Prediction",
"Integrated Term Scoring",
"Evaluation Setting",
"Results",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-19#paper-1013#slide-6 | Contribution 1 | Integrating Query Performance Prediction
Penetrating to ancient texts | Integrating Query Performance Prediction
Penetrating to ancient texts | [] |
GEM-SciDuet-train-19#paper-1013#slide-7 | 1013 | Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus | A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, which integrates features from two closely related fields: Terminology Extraction and Query Performance Prediction (QPP). Our method further expands modern candidate terms with ancient related terms, before assessing their corpus relevancy with QPP measures. We evaluate the empirical benefit of our method for a thesaurus for a diachronic Jewish corpus. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102
],
"paper_content_text": [
"Introduction In recent years, there has been growing interest in diachronic lexical resources, which comprise terms from different language periods.",
"(Borin and Forsberg, 2011; Riedl et al., 2014) .",
"These resources are mainly used for studying language change and supporting searches in historical domains, bridging the lexical gap between modern and ancient language.",
"In particular, we are interested in this paper in a certain type of diachronic thesaurus.",
"It contains entries for modern terms, denoted as target terms.",
"Each entry includes a list of ancient related terms.",
"Beyond being a historical linguistic resource, such thesaurus is useful for supporting searches in a diachronic corpus, composed of both modern and ancient documents.",
"For example, in our historical Jewish corpus, the modern Hebrew term for terminal patient 1 has only few verbatim occurrences, in modern documents, but this topic has been widely discussed in ancient periods.",
"A domain searcher needs the diachronic thesaurus to enrich the search with ancient synonyms or related terms, such as dying and living for the moment.",
"Prior work on diachronic thesauri addressed the problem of collecting relevant related terms for given thesaurus entries.",
"In this paper we focus on the complementary preceding task of collecting a relevant list of modern target terms for a diachronic thesaurus in a certain domain.",
"As a starting point, we assume that a list of meaningful terms in the modern language is given, such as titles of Wikipedia articles.",
"Then, our task is to automatically decide which of these candidate terms are likely to be relevant for the corpus domain and should be included in the thesaurus.",
"In other words, we need to decide which of the candidate modern terms corresponds to a concept that has been discussed significantly in the diachronic domain corpus.",
"Our task is closely related to term scoring in the known Terminology Extraction (TE) task in NLP.",
"The goal of corpus-based TE is to automatically extract prominent terms from a given corpus and score them for domain relevancy.",
"In our setting, since all the target terms are modern, we avoid extracting them from the diachronic corpus of modern and ancient language.",
"Instead, we use a given candidate list and apply only the term scoring phase.",
"As a starting point, we adopt a rich set of state-of-the-art TE scoring measures and integrate them as features in a common supervised classification approach (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) .",
"Given our Information Retrieval (IR) motivation, we notice a closely related task to TE, namely Query Performance Prediction (QPP).",
"QPP methods are designed to estimate the retrieval quality of search queries, by assessing their relevance to the text collection.",
"Therefore, QPP scoring measures seem to be potentially suitable also for our terminology scoring task, by considering the candidate term as a search query.",
"Some of the QPP measures are indeed similar in nature to the TE methods, analyzing the distribution of the query terms within the collection.",
"However, some of the QPP methods have different IR-biased characteristics and may provide a marginal contribution.",
"Therefore, we adopted them as additional features for our classifier and indeed observed a performance increase.",
"Most of the QPP methods prioritize query terms with high frequency in the corpus.",
"However, in a diachronic corpus, such criterion may sometimes be problematic.",
"A modern target term might appear only in few modern documents, while being referred to, via ancient terminology, also in ancient documents.",
"Therefore, we would like our prediction measure to be aware of these ancient documents as well.",
"Following a particular QPP measure (Zhou and Croft, 2007) , we address this problem through Query Expansion (QE).",
"Accordingly, our method first expands the query containing the modern candidate term, then calculates the QPP scores of the expanded query and then utilizes them as scoring features.",
"Combining the baseline features with our expansion-based QPP features yields additional improvement in the classification results.",
"Term Scoring Measures This section reviews common measures developed for Terminology Extraction (Section 2.1) and for Query Performance Prediction (Section 2.2).",
"Table 1 lists those measures that were considered as features in our system, as described in Section 3.",
"Terminology Extraction Terminology Extraction (TE) methods aim to identify terms that are frequently used in a specific domain.",
"Typically, linguistic processors (e.g.",
"POS tagger, phrase chunker) are used to filter out stop words and restrict candidate terms to nouns or noun phrases.",
"Then, statistical measures are used to rank the candidate terms.",
"There are two main terminological properties that the statistical measures identify: unithood and termhood.",
"Measures that express unithood indicate the collocation strength of units that comprise a single term.",
"Measures that express termhood indicate the statistical prominence of the term in the target do-main corpus.",
"For our task, we focus on the second property, since the candidates are taken from a key-list of terms whose coherence in the language is already known.",
"Measures expressing termhood are based either on frequency in the target corpus (1, 2, 3, 4, 9, 11, 12, 13 ) 2 , or on comparison with frequency in a reference background corpus (8, 14, 16) .",
"Recently, approaches which combine both unithood and termhood were investigated as well (7, 8, 15, 16) .",
"Query Performance Prediction Query Performance Prediction (QPP) aims to estimate the quality of answers that a search system would return in response to a particular query.",
"Statistical QPP methods are categorized into two types: pre-retrieval methods, analyzing the distribution of the query term within the document collection; and post-retrieval methods, additionally analyzing the search results.",
"Some of the preretrieval methods are similar to TE methods based on the same term frequency statistics.",
"Pre-retrieval methods measure various properties of the query: specificity (17, 18, 24, 25) , similarity to the corpus (19) , coherence of the documents containing the query terms (26), variance of the query terms' weights over the documents containing it (20); and relatedness, as good performance is expected when the query terms co-occur frequently in the collection (21).",
"Post-retrieval methods are usually more complex, where the top search results are retrieved and analyzed.",
"They are categorized into three main paradigms: clarity-based methods (28), robustness-based methods (22) and score distribution based methods (23, 29).",
"We pay special attention to two post-retrieval QPP methods; Query Feedback (22) and Clarity (23).",
"The Clarity method measures the coherence of the query's search results with respect to the corpus.",
"It is defined as the KL divergence between a language model induced from the result list and that induced from the corpus.",
"The Query Feedback method measures the robustness of the query's results to query perturbations.",
"It models retrieval as a communication channel.",
"The input is the query, the channel is the search system, and the set of results is the noisy output of the channel.",
"A new query is generated from the list of search (Liu et al., 2005) 13 Term Variance Quality (Liu et al., 2005 ) 6 TF-Disjoint Corpora Frequency (Lopes et al., 2012) 14 Weirdness (Ahmad et al., 1999) 7 C-value (Frantzi and Ananiadou, 1999) 15 NC-value (Frantzi and Ananiadou, 1999) 8 Glossex (Kozakov et al., 2004) 16 TermExtractor (Sclano and Velardi, 2007 ) Query Performance Prediction measures 17 Average IDF 24 Average ICTF (Inverse collection term frequency) (Plachouras et al., 2004) 18 Query Scope 25 Simplified Clarity Score (He and Ounis, 2004) 19 Similarity Collection Query (Zhao et al., 2008) 26 Query Coherence (He et al., 2008 ) 20 Average Variance (Zhao et al., 2008) 27 Average Entropy (Cristina, 2013) 21 Term Relatedness (Hauff et al., 2008) 28 Clarity (Cronen-Townsend et al., 2002) 22 Query Feedback (Zhou and Croft, 2007) 29 Normalized Query Commitment (Shtok et al., 2009 ) 23 Weighted Information Gain (Zhou and Croft, 2007) Table 1: Prior art measures considered in our work results, taking the terms with maximal contribution to the Clarity score, and then a second list of results is retrieved for that second query.",
"The overlap between the two lists is the robustness score.",
"Our suggested method was inspired by the Query Feedback measure, as detailed in the next section.",
"Integrated Term Scoring We adopt the supervised framework for TE (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) , considering each candidate target term as a learning instance.",
"For each candidate, we calculate a set of features over which learning and classification are performed.",
"The classification predicts which candidates are suitable as target terms for the diachronic thesaurus.",
"Our baseline system (TE) includes state-of-the-art TE measures as features, listed in the upper part of Table 1 .",
"Next, we introduce two system variants that integrate QPP measures as additional features.",
"The first system, TE-QPP T erm , applies the QPP measures to the candidate term as the query.",
"All QPP measures, listed in the lower part of Table 1 , are utilized except for the Query Feedback measure (22) (see below).",
"To verify which QPP features are actually beneficial for terminology scoring, we measure the marginal contribution of each feature via ablation tests in 10-fold cross validation over the training data (see Section 4.1).",
"Features which did not yield marginal contribution were not included 3 .",
"3 Removed features from TE-QPPT erm: 17, 19, 22, 23, The two systems, described so far, rely on corpus occurrences of the original candidate term, prioritizing relatively frequent terms.",
"In a diachronic corpus, however, a candidate term might be rare in its original modern form, yet frequently referred to by archaic forms.",
"Therefore, we adopt a query expansion strategy based on Pseudo Relevance Feedback, which expands a query based on analyzing the top retrieved documents.",
"In our setting, this approach takes advantage of a typical property of modern documents in a diachronic corpus, namely their temporally-mixed language.",
"Often, modern documents in a diachronic domain include ancient terms that were either preserved in modern language or appear as citations.",
"Therefore, an expanded query of a modern term, which retrieves only modern documents, is likely to pick some of these ancient terms as well.",
"Thus, the expanded query would likely retrieve both modern and ancient documents and would allow QPP measures to evaluate the query relevance across periods.",
"Therefore, our second integrated system, TE-QPP QE , utilizes the Pseudo Relevance Feedback Query Expansion approach to expand our modern candidate with topically-related terms.",
"First, similarly to the Query Feedback measure (measure (22) in the lower part of Table 1), we expand the candidate by adding terms with maximal contribution (top 5, in our experiments) to the Clarity score (Section 2.2).",
"Then, we calculate all QPP measures for the expanded query.",
"Since the expan- 24, 25. sions that we extract from the top retrieved documents typically include ancient terms as well, the new scores may better express the relevancy of the candidate's topic across the diachronic corpus.",
"We also performed feature selection, as done for the first system 4 .",
"Evaluation Evaluation Setting We applied our method to the diachronic corpus is the Responsa project Hebrew corpus 5 .",
"The Responsa corpus includes rabbinic case-law rulings which represent the historical-sociological milieu of real-life situations, collected over more than a thousand years, from the 11 th century until today.",
"The corpus consists of 81,993 documents, and was used for previous NLP and IR research (Choueka et al., 1971; Choueka et al., 1987; HaCohen-Kerner et al., 2008; Liebeskind et al., 2012; Zohar et al., 2013; .",
"The candidate target terms for our classification task were taken from the publicly available keylist of Hebrew Wikipedia entries 6 .",
"Since many of these tens of thousands entries, such as person names and place names, were not suitable as target terms, we first filtered them by Hebrew Named Entity Recognition 7 and manually.",
"Then, a list of approximately 5000 candidate target terms was manually annotated by two domain experts.",
"The experts decided which of the candidates corresponds to a concept that has been discussed significantly in our diachronic domain corpus.",
"Only candidates that the annotators agreed on their annotation were retained, and then balanced for equal number of positive and negative examples.",
"Consequently, the balanced training and test sets contain 500 and 200 candidates, respectively.",
"For classification, Weka's 8 Support Vector Machine supervised classifier with polynomial kernel was used.",
"We train the classifier with our training set and measure the accuracy on the test set.",
"Results Table 2 compares the classification performance of our baseline (TE) and integrated systems, (TE-QPP T erm ) and (TE-QPP QE ), proposed in Section 3.",
"Feature Set Accuracy (%) TE 61.5 TE-QPP T erm 65 TE-QPP QE 66.5 (McNemar, 1947) , on our diachronic corpus it seems to help.",
"Yet, when the QPP score is measured over the expanded candidate, and ancient documents are utilized, the performance increase is more notable (5 points) and the improvement over the baseline is statistically significant according to the McNemar's test with p<0.05.",
"We analyzed the false negative classifications of the baseline that were classified correctly by the QE-based configuration.",
"We found that their expanded forms contain ancient terms that help the system making the right decision.",
"For example, the Hebrew target term for slippers was expanded by the ancient expression corresponding to made of leather.",
"This is a useful expansion since in the ancient documents slippers are discussed in the context of fasts, as in two of the Jewish fasts wearing leather shoes is forbidden and people wear cloth-made slippers.",
"Conclusions and Future Work We introduced a method that combines features from two closely related tasks, terminology extraction and query performance prediction, to solve the task of target terms selection for a diachronic thesaurus.",
"In our diachronic setting, we showed that enriching TE measures with QPP measures, particularly when calculated on expanded candidates, significantly improves performance.",
"Our results suggest that it may be worth investigating this integrated approach also for other terminology extraction and QPP settings.",
"We plan to further explore the suggested method by utilizing additional query expansion algorithms.",
"In particular, to avoid expanding queries for which expansion degrade retrieval performance, we plan to investigate the selective query expansion approach (Cronen-Townsend et al., 2004) ."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Term Scoring Measures",
"Terminology Extraction",
"Query Performance Prediction",
"Integrated Term Scoring",
"Evaluation Setting",
"Results",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-19#paper-1013#slide-7 | Query Performance Prediction QPP | Estimate the retrieval quality of search queries
Assess quality of query results on the text collection.
Our terminology scoring task
QPP scoring measures are potentially useful: they may capture
additional aspects of term relevancy for the collection
term is relevant for a domain <-> term is a good query
Two types of statistical QPP methods
Analyze query terms distribution within the corpus
Additionally analyze the top search results
Integrate QPP measures as additional features
First integrated system (TE-QPPTerm)
Applies the QPP measures to the candidate term as the query
Utilizes these scores as additional classification features | Estimate the retrieval quality of search queries
Assess quality of query results on the text collection.
Our terminology scoring task
QPP scoring measures are potentially useful: they may capture
additional aspects of term relevancy for the collection
term is relevant for a domain <-> term is a good query
Two types of statistical QPP methods
Analyze query terms distribution within the corpus
Additionally analyze the top search results
Integrate QPP measures as additional features
First integrated system (TE-QPPTerm)
Applies the QPP measures to the candidate term as the query
Utilizes these scores as additional classification features | [] |
GEM-SciDuet-train-19#paper-1013#slide-8 | 1013 | Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus | A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, which integrates features from two closely related fields: Terminology Extraction and Query Performance Prediction (QPP). Our method further expands modern candidate terms with ancient related terms, before assessing their corpus relevancy with QPP measures. We evaluate the empirical benefit of our method for a thesaurus for a diachronic Jewish corpus. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102
],
"paper_content_text": [
"Introduction In recent years, there has been growing interest in diachronic lexical resources, which comprise terms from different language periods.",
"(Borin and Forsberg, 2011; Riedl et al., 2014) .",
"These resources are mainly used for studying language change and supporting searches in historical domains, bridging the lexical gap between modern and ancient language.",
"In particular, we are interested in this paper in a certain type of diachronic thesaurus.",
"It contains entries for modern terms, denoted as target terms.",
"Each entry includes a list of ancient related terms.",
"Beyond being a historical linguistic resource, such thesaurus is useful for supporting searches in a diachronic corpus, composed of both modern and ancient documents.",
"For example, in our historical Jewish corpus, the modern Hebrew term for terminal patient 1 has only few verbatim occurrences, in modern documents, but this topic has been widely discussed in ancient periods.",
"A domain searcher needs the diachronic thesaurus to enrich the search with ancient synonyms or related terms, such as dying and living for the moment.",
"Prior work on diachronic thesauri addressed the problem of collecting relevant related terms for given thesaurus entries.",
"In this paper we focus on the complementary preceding task of collecting a relevant list of modern target terms for a diachronic thesaurus in a certain domain.",
"As a starting point, we assume that a list of meaningful terms in the modern language is given, such as titles of Wikipedia articles.",
"Then, our task is to automatically decide which of these candidate terms are likely to be relevant for the corpus domain and should be included in the thesaurus.",
"In other words, we need to decide which of the candidate modern terms corresponds to a concept that has been discussed significantly in the diachronic domain corpus.",
"Our task is closely related to term scoring in the known Terminology Extraction (TE) task in NLP.",
"The goal of corpus-based TE is to automatically extract prominent terms from a given corpus and score them for domain relevancy.",
"In our setting, since all the target terms are modern, we avoid extracting them from the diachronic corpus of modern and ancient language.",
"Instead, we use a given candidate list and apply only the term scoring phase.",
"As a starting point, we adopt a rich set of state-of-the-art TE scoring measures and integrate them as features in a common supervised classification approach (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) .",
"Given our Information Retrieval (IR) motivation, we notice a closely related task to TE, namely Query Performance Prediction (QPP).",
"QPP methods are designed to estimate the retrieval quality of search queries, by assessing their relevance to the text collection.",
"Therefore, QPP scoring measures seem to be potentially suitable also for our terminology scoring task, by considering the candidate term as a search query.",
"Some of the QPP measures are indeed similar in nature to the TE methods, analyzing the distribution of the query terms within the collection.",
"However, some of the QPP methods have different IR-biased characteristics and may provide a marginal contribution.",
"Therefore, we adopted them as additional features for our classifier and indeed observed a performance increase.",
"Most of the QPP methods prioritize query terms with high frequency in the corpus.",
"However, in a diachronic corpus, such criterion may sometimes be problematic.",
"A modern target term might appear only in few modern documents, while being referred to, via ancient terminology, also in ancient documents.",
"Therefore, we would like our prediction measure to be aware of these ancient documents as well.",
"Following a particular QPP measure (Zhou and Croft, 2007) , we address this problem through Query Expansion (QE).",
"Accordingly, our method first expands the query containing the modern candidate term, then calculates the QPP scores of the expanded query and then utilizes them as scoring features.",
"Combining the baseline features with our expansion-based QPP features yields additional improvement in the classification results.",
"Term Scoring Measures This section reviews common measures developed for Terminology Extraction (Section 2.1) and for Query Performance Prediction (Section 2.2).",
"Table 1 lists those measures that were considered as features in our system, as described in Section 3.",
"Terminology Extraction Terminology Extraction (TE) methods aim to identify terms that are frequently used in a specific domain.",
"Typically, linguistic processors (e.g.",
"POS tagger, phrase chunker) are used to filter out stop words and restrict candidate terms to nouns or noun phrases.",
"Then, statistical measures are used to rank the candidate terms.",
"There are two main terminological properties that the statistical measures identify: unithood and termhood.",
"Measures that express unithood indicate the collocation strength of units that comprise a single term.",
"Measures that express termhood indicate the statistical prominence of the term in the target do-main corpus.",
"For our task, we focus on the second property, since the candidates are taken from a key-list of terms whose coherence in the language is already known.",
"Measures expressing termhood are based either on frequency in the target corpus (1, 2, 3, 4, 9, 11, 12, 13 ) 2 , or on comparison with frequency in a reference background corpus (8, 14, 16) .",
"Recently, approaches which combine both unithood and termhood were investigated as well (7, 8, 15, 16) .",
"Query Performance Prediction Query Performance Prediction (QPP) aims to estimate the quality of answers that a search system would return in response to a particular query.",
"Statistical QPP methods are categorized into two types: pre-retrieval methods, analyzing the distribution of the query term within the document collection; and post-retrieval methods, additionally analyzing the search results.",
"Some of the preretrieval methods are similar to TE methods based on the same term frequency statistics.",
"Pre-retrieval methods measure various properties of the query: specificity (17, 18, 24, 25) , similarity to the corpus (19) , coherence of the documents containing the query terms (26), variance of the query terms' weights over the documents containing it (20); and relatedness, as good performance is expected when the query terms co-occur frequently in the collection (21).",
"Post-retrieval methods are usually more complex, where the top search results are retrieved and analyzed.",
"They are categorized into three main paradigms: clarity-based methods (28), robustness-based methods (22) and score distribution based methods (23, 29).",
"We pay special attention to two post-retrieval QPP methods; Query Feedback (22) and Clarity (23).",
"The Clarity method measures the coherence of the query's search results with respect to the corpus.",
"It is defined as the KL divergence between a language model induced from the result list and that induced from the corpus.",
"The Query Feedback method measures the robustness of the query's results to query perturbations.",
"It models retrieval as a communication channel.",
"The input is the query, the channel is the search system, and the set of results is the noisy output of the channel.",
"A new query is generated from the list of search (Liu et al., 2005) 13 Term Variance Quality (Liu et al., 2005 ) 6 TF-Disjoint Corpora Frequency (Lopes et al., 2012) 14 Weirdness (Ahmad et al., 1999) 7 C-value (Frantzi and Ananiadou, 1999) 15 NC-value (Frantzi and Ananiadou, 1999) 8 Glossex (Kozakov et al., 2004) 16 TermExtractor (Sclano and Velardi, 2007 ) Query Performance Prediction measures 17 Average IDF 24 Average ICTF (Inverse collection term frequency) (Plachouras et al., 2004) 18 Query Scope 25 Simplified Clarity Score (He and Ounis, 2004) 19 Similarity Collection Query (Zhao et al., 2008) 26 Query Coherence (He et al., 2008 ) 20 Average Variance (Zhao et al., 2008) 27 Average Entropy (Cristina, 2013) 21 Term Relatedness (Hauff et al., 2008) 28 Clarity (Cronen-Townsend et al., 2002) 22 Query Feedback (Zhou and Croft, 2007) 29 Normalized Query Commitment (Shtok et al., 2009 ) 23 Weighted Information Gain (Zhou and Croft, 2007) Table 1: Prior art measures considered in our work results, taking the terms with maximal contribution to the Clarity score, and then a second list of results is retrieved for that second query.",
"The overlap between the two lists is the robustness score.",
"Our suggested method was inspired by the Query Feedback measure, as detailed in the next section.",
"Integrated Term Scoring We adopt the supervised framework for TE (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) , considering each candidate target term as a learning instance.",
"For each candidate, we calculate a set of features over which learning and classification are performed.",
"The classification predicts which candidates are suitable as target terms for the diachronic thesaurus.",
"Our baseline system (TE) includes state-of-the-art TE measures as features, listed in the upper part of Table 1 .",
"Next, we introduce two system variants that integrate QPP measures as additional features.",
"The first system, TE-QPP T erm , applies the QPP measures to the candidate term as the query.",
"All QPP measures, listed in the lower part of Table 1 , are utilized except for the Query Feedback measure (22) (see below).",
"To verify which QPP features are actually beneficial for terminology scoring, we measure the marginal contribution of each feature via ablation tests in 10-fold cross validation over the training data (see Section 4.1).",
"Features which did not yield marginal contribution were not included 3 .",
"3 Removed features from TE-QPPT erm: 17, 19, 22, 23, The two systems, described so far, rely on corpus occurrences of the original candidate term, prioritizing relatively frequent terms.",
"In a diachronic corpus, however, a candidate term might be rare in its original modern form, yet frequently referred to by archaic forms.",
"Therefore, we adopt a query expansion strategy based on Pseudo Relevance Feedback, which expands a query based on analyzing the top retrieved documents.",
"In our setting, this approach takes advantage of a typical property of modern documents in a diachronic corpus, namely their temporally-mixed language.",
"Often, modern documents in a diachronic domain include ancient terms that were either preserved in modern language or appear as citations.",
"Therefore, an expanded query of a modern term, which retrieves only modern documents, is likely to pick some of these ancient terms as well.",
"Thus, the expanded query would likely retrieve both modern and ancient documents and would allow QPP measures to evaluate the query relevance across periods.",
"Therefore, our second integrated system, TE-QPP QE , utilizes the Pseudo Relevance Feedback Query Expansion approach to expand our modern candidate with topically-related terms.",
"First, similarly to the Query Feedback measure (measure (22) in the lower part of Table 1), we expand the candidate by adding terms with maximal contribution (top 5, in our experiments) to the Clarity score (Section 2.2).",
"Then, we calculate all QPP measures for the expanded query.",
"Since the expan- 24, 25. sions that we extract from the top retrieved documents typically include ancient terms as well, the new scores may better express the relevancy of the candidate's topic across the diachronic corpus.",
"We also performed feature selection, as done for the first system 4 .",
"Evaluation Evaluation Setting We applied our method to the diachronic corpus is the Responsa project Hebrew corpus 5 .",
"The Responsa corpus includes rabbinic case-law rulings which represent the historical-sociological milieu of real-life situations, collected over more than a thousand years, from the 11 th century until today.",
"The corpus consists of 81,993 documents, and was used for previous NLP and IR research (Choueka et al., 1971; Choueka et al., 1987; HaCohen-Kerner et al., 2008; Liebeskind et al., 2012; Zohar et al., 2013; .",
"The candidate target terms for our classification task were taken from the publicly available keylist of Hebrew Wikipedia entries 6 .",
"Since many of these tens of thousands entries, such as person names and place names, were not suitable as target terms, we first filtered them by Hebrew Named Entity Recognition 7 and manually.",
"Then, a list of approximately 5000 candidate target terms was manually annotated by two domain experts.",
"The experts decided which of the candidates corresponds to a concept that has been discussed significantly in our diachronic domain corpus.",
"Only candidates that the annotators agreed on their annotation were retained, and then balanced for equal number of positive and negative examples.",
"Consequently, the balanced training and test sets contain 500 and 200 candidates, respectively.",
"For classification, Weka's 8 Support Vector Machine supervised classifier with polynomial kernel was used.",
"We train the classifier with our training set and measure the accuracy on the test set.",
"Results Table 2 compares the classification performance of our baseline (TE) and integrated systems, (TE-QPP T erm ) and (TE-QPP QE ), proposed in Section 3.",
"Feature Set Accuracy (%) TE 61.5 TE-QPP T erm 65 TE-QPP QE 66.5 (McNemar, 1947) , on our diachronic corpus it seems to help.",
"Yet, when the QPP score is measured over the expanded candidate, and ancient documents are utilized, the performance increase is more notable (5 points) and the improvement over the baseline is statistically significant according to the McNemar's test with p<0.05.",
"We analyzed the false negative classifications of the baseline that were classified correctly by the QE-based configuration.",
"We found that their expanded forms contain ancient terms that help the system making the right decision.",
"For example, the Hebrew target term for slippers was expanded by the ancient expression corresponding to made of leather.",
"This is a useful expansion since in the ancient documents slippers are discussed in the context of fasts, as in two of the Jewish fasts wearing leather shoes is forbidden and people wear cloth-made slippers.",
"Conclusions and Future Work We introduced a method that combines features from two closely related tasks, terminology extraction and query performance prediction, to solve the task of target terms selection for a diachronic thesaurus.",
"In our diachronic setting, we showed that enriching TE measures with QPP measures, particularly when calculated on expanded candidates, significantly improves performance.",
"Our results suggest that it may be worth investigating this integrated approach also for other terminology extraction and QPP settings.",
"We plan to further explore the suggested method by utilizing additional query expansion algorithms.",
"In particular, to avoid expanding queries for which expansion degrade retrieval performance, we plan to investigate the selective query expansion approach (Cronen-Townsend et al., 2004) ."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Term Scoring Measures",
"Terminology Extraction",
"Query Performance Prediction",
"Integrated Term Scoring",
"Evaluation Setting",
"Results",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-19#paper-1013#slide-8 | Penetrating to ancient periods | In a diachronic corpus
A candidate term might be rare in its original modern form,
yet frequently referred to by archaic forms
query term: vegetarian Of every tree of the garden
thou mayest freely eat: every herb that yields
Were All Men Vegetarians
God instructed Adam saying, I have given you every herb that yields (Genesis 1:29) and thou shalt eat the
and thou shalt eat the (King James Bible, Genesis) herb of the field;
(by Eric Lyons, M.Min.)
Baseline (TE) and First integrated system (TE-QPPTerm)
Rely on corpus occurrences of the original candidate term
Prioritize relatively frequent terms
A post-retrieval QPP method
Query Feedback measure (Zhou and Croft, 2007)
Second integrated system (TE-QPPQE)
Utilizes Pseudo Relevance Feedback Query Expansion
QPP query QPP score | In a diachronic corpus
A candidate term might be rare in its original modern form,
yet frequently referred to by archaic forms
query term: vegetarian Of every tree of the garden
thou mayest freely eat: every herb that yields
Were All Men Vegetarians
God instructed Adam saying, I have given you every herb that yields (Genesis 1:29) and thou shalt eat the
and thou shalt eat the (King James Bible, Genesis) herb of the field;
(by Eric Lyons, M.Min.)
Baseline (TE) and First integrated system (TE-QPPTerm)
Rely on corpus occurrences of the original candidate term
Prioritize relatively frequent terms
A post-retrieval QPP method
Query Feedback measure (Zhou and Croft, 2007)
Second integrated system (TE-QPPQE)
Utilizes Pseudo Relevance Feedback Query Expansion
QPP query QPP score | [] |
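The Clarity measure cited throughout this record (Cronen-Townsend et al., 2002) is the KL divergence between a language model induced from a query's result list and one induced from the whole corpus. A minimal sketch, assuming simple maximum-likelihood unigram models with linear smoothing; function names and the toy corpus are illustrative, not from the paper's code:

```python
from collections import Counter
import math

def unigram_lm(texts):
    """Maximum-likelihood unigram model over a list of tokenized texts."""
    counts = Counter(tok for text in texts for tok in text)
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}, total

def clarity(result_docs, corpus_docs, lam=0.9):
    """KL(P(w | results) || P(w | corpus)) in bits; the result-list model
    is linearly smoothed with the corpus model so the divergence is finite."""
    res_lm, _ = unigram_lm(result_docs)
    corp_lm, corp_total = unigram_lm(corpus_docs)
    score = 0.0
    for w, p_res in res_lm.items():
        p_corp = corp_lm.get(w, 1.0 / (corp_total + 1))  # floor for unseen words
        p_mix = lam * p_res + (1.0 - lam) * p_corp       # smoothed result model
        score += p_mix * math.log2(p_mix / p_corp)
    return score

# Toy usage on a tiny tokenized corpus; in the paper's setting, result_docs
# would be the top search results retrieved for a candidate-term query.
corpus = [t.split() for t in [
    "laws of fasting and prayer",
    "wearing leather shoes on a fast is forbidden",
    "trade and commerce disputes between partners",
    "marriage contracts and bills of divorce",
]]
results = [t.split() for t in ["laws of fasting and prayer",
                               "wearing leather shoes on a fast is forbidden"]]
print(round(clarity(results, corpus), 3))
```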
GEM-SciDuet-train-19#paper-1013#slide-9 | 1013 | Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus | A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, which integrates features from two closely related fields: Terminology Extraction and Query Performance Prediction (QPP). Our method further expands modern candidate terms with ancient related terms, before assessing their corpus relevancy with QPP measures. We evaluate the empirical benefit of our method for a thesaurus for a diachronic Jewish corpus. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102
],
"paper_content_text": [
"Introduction In recent years, there has been growing interest in diachronic lexical resources, which comprise terms from different language periods.",
"(Borin and Forsberg, 2011; Riedl et al., 2014) .",
"These resources are mainly used for studying language change and supporting searches in historical domains, bridging the lexical gap between modern and ancient language.",
"In particular, we are interested in this paper in a certain type of diachronic thesaurus.",
"It contains entries for modern terms, denoted as target terms.",
"Each entry includes a list of ancient related terms.",
"Beyond being a historical linguistic resource, such thesaurus is useful for supporting searches in a diachronic corpus, composed of both modern and ancient documents.",
"For example, in our historical Jewish corpus, the modern Hebrew term for terminal patient 1 has only few verbatim occurrences, in modern documents, but this topic has been widely discussed in ancient periods.",
"A domain searcher needs the diachronic thesaurus to enrich the search with ancient synonyms or related terms, such as dying and living for the moment.",
"Prior work on diachronic thesauri addressed the problem of collecting relevant related terms for given thesaurus entries.",
"In this paper we focus on the complementary preceding task of collecting a relevant list of modern target terms for a diachronic thesaurus in a certain domain.",
"As a starting point, we assume that a list of meaningful terms in the modern language is given, such as titles of Wikipedia articles.",
"Then, our task is to automatically decide which of these candidate terms are likely to be relevant for the corpus domain and should be included in the thesaurus.",
"In other words, we need to decide which of the candidate modern terms corresponds to a concept that has been discussed significantly in the diachronic domain corpus.",
"Our task is closely related to term scoring in the known Terminology Extraction (TE) task in NLP.",
"The goal of corpus-based TE is to automatically extract prominent terms from a given corpus and score them for domain relevancy.",
"In our setting, since all the target terms are modern, we avoid extracting them from the diachronic corpus of modern and ancient language.",
"Instead, we use a given candidate list and apply only the term scoring phase.",
"As a starting point, we adopt a rich set of state-of-the-art TE scoring measures and integrate them as features in a common supervised classification approach (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) .",
"Given our Information Retrieval (IR) motivation, we notice a closely related task to TE, namely Query Performance Prediction (QPP).",
"QPP methods are designed to estimate the retrieval quality of search queries, by assessing their relevance to the text collection.",
"Therefore, QPP scoring measures seem to be potentially suitable also for our terminology scoring task, by considering the candidate term as a search query.",
"Some of the QPP measures are indeed similar in nature to the TE methods, analyzing the distribution of the query terms within the collection.",
"However, some of the QPP methods have different IR-biased characteristics and may provide a marginal contribution.",
"Therefore, we adopted them as additional features for our classifier and indeed observed a performance increase.",
"Most of the QPP methods prioritize query terms with high frequency in the corpus.",
"However, in a diachronic corpus, such criterion may sometimes be problematic.",
"A modern target term might appear only in few modern documents, while being referred to, via ancient terminology, also in ancient documents.",
"Therefore, we would like our prediction measure to be aware of these ancient documents as well.",
"Following a particular QPP measure (Zhou and Croft, 2007) , we address this problem through Query Expansion (QE).",
"Accordingly, our method first expands the query containing the modern candidate term, then calculates the QPP scores of the expanded query and then utilizes them as scoring features.",
"Combining the baseline features with our expansion-based QPP features yields additional improvement in the classification results.",
"Term Scoring Measures This section reviews common measures developed for Terminology Extraction (Section 2.1) and for Query Performance Prediction (Section 2.2).",
"Table 1 lists those measures that were considered as features in our system, as described in Section 3.",
"Terminology Extraction Terminology Extraction (TE) methods aim to identify terms that are frequently used in a specific domain.",
"Typically, linguistic processors (e.g.",
"POS tagger, phrase chunker) are used to filter out stop words and restrict candidate terms to nouns or noun phrases.",
"Then, statistical measures are used to rank the candidate terms.",
"There are two main terminological properties that the statistical measures identify: unithood and termhood.",
"Measures that express unithood indicate the collocation strength of units that comprise a single term.",
"Measures that express termhood indicate the statistical prominence of the term in the target do-main corpus.",
"For our task, we focus on the second property, since the candidates are taken from a key-list of terms whose coherence in the language is already known.",
"Measures expressing termhood are based either on frequency in the target corpus (1, 2, 3, 4, 9, 11, 12, 13 ) 2 , or on comparison with frequency in a reference background corpus (8, 14, 16) .",
"Recently, approaches which combine both unithood and termhood were investigated as well (7, 8, 15, 16) .",
"Query Performance Prediction Query Performance Prediction (QPP) aims to estimate the quality of answers that a search system would return in response to a particular query.",
"Statistical QPP methods are categorized into two types: pre-retrieval methods, analyzing the distribution of the query term within the document collection; and post-retrieval methods, additionally analyzing the search results.",
"Some of the preretrieval methods are similar to TE methods based on the same term frequency statistics.",
"Pre-retrieval methods measure various properties of the query: specificity (17, 18, 24, 25) , similarity to the corpus (19) , coherence of the documents containing the query terms (26), variance of the query terms' weights over the documents containing it (20); and relatedness, as good performance is expected when the query terms co-occur frequently in the collection (21).",
"Post-retrieval methods are usually more complex, where the top search results are retrieved and analyzed.",
"They are categorized into three main paradigms: clarity-based methods (28), robustness-based methods (22) and score distribution based methods (23, 29).",
"We pay special attention to two post-retrieval QPP methods; Query Feedback (22) and Clarity (23).",
"The Clarity method measures the coherence of the query's search results with respect to the corpus.",
"It is defined as the KL divergence between a language model induced from the result list and that induced from the corpus.",
"The Query Feedback method measures the robustness of the query's results to query perturbations.",
"It models retrieval as a communication channel.",
"The input is the query, the channel is the search system, and the set of results is the noisy output of the channel.",
"A new query is generated from the list of search (Liu et al., 2005) 13 Term Variance Quality (Liu et al., 2005 ) 6 TF-Disjoint Corpora Frequency (Lopes et al., 2012) 14 Weirdness (Ahmad et al., 1999) 7 C-value (Frantzi and Ananiadou, 1999) 15 NC-value (Frantzi and Ananiadou, 1999) 8 Glossex (Kozakov et al., 2004) 16 TermExtractor (Sclano and Velardi, 2007 ) Query Performance Prediction measures 17 Average IDF 24 Average ICTF (Inverse collection term frequency) (Plachouras et al., 2004) 18 Query Scope 25 Simplified Clarity Score (He and Ounis, 2004) 19 Similarity Collection Query (Zhao et al., 2008) 26 Query Coherence (He et al., 2008 ) 20 Average Variance (Zhao et al., 2008) 27 Average Entropy (Cristina, 2013) 21 Term Relatedness (Hauff et al., 2008) 28 Clarity (Cronen-Townsend et al., 2002) 22 Query Feedback (Zhou and Croft, 2007) 29 Normalized Query Commitment (Shtok et al., 2009 ) 23 Weighted Information Gain (Zhou and Croft, 2007) Table 1: Prior art measures considered in our work results, taking the terms with maximal contribution to the Clarity score, and then a second list of results is retrieved for that second query.",
"The overlap between the two lists is the robustness score.",
"Our suggested method was inspired by the Query Feedback measure, as detailed in the next section.",
"Integrated Term Scoring We adopt the supervised framework for TE (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) , considering each candidate target term as a learning instance.",
"For each candidate, we calculate a set of features over which learning and classification are performed.",
"The classification predicts which candidates are suitable as target terms for the diachronic thesaurus.",
"Our baseline system (TE) includes state-of-the-art TE measures as features, listed in the upper part of Table 1 .",
"Next, we introduce two system variants that integrate QPP measures as additional features.",
"The first system, TE-QPP T erm , applies the QPP measures to the candidate term as the query.",
"All QPP measures, listed in the lower part of Table 1 , are utilized except for the Query Feedback measure (22) (see below).",
"To verify which QPP features are actually beneficial for terminology scoring, we measure the marginal contribution of each feature via ablation tests in 10-fold cross validation over the training data (see Section 4.1).",
"Features which did not yield marginal contribution were not included 3 .",
"3 Removed features from TE-QPPT erm: 17, 19, 22, 23, The two systems, described so far, rely on corpus occurrences of the original candidate term, prioritizing relatively frequent terms.",
"In a diachronic corpus, however, a candidate term might be rare in its original modern form, yet frequently referred to by archaic forms.",
"Therefore, we adopt a query expansion strategy based on Pseudo Relevance Feedback, which expands a query based on analyzing the top retrieved documents.",
"In our setting, this approach takes advantage of a typical property of modern documents in a diachronic corpus, namely their temporally-mixed language.",
"Often, modern documents in a diachronic domain include ancient terms that were either preserved in modern language or appear as citations.",
"Therefore, an expanded query of a modern term, which retrieves only modern documents, is likely to pick some of these ancient terms as well.",
"Thus, the expanded query would likely retrieve both modern and ancient documents and would allow QPP measures to evaluate the query relevance across periods.",
"Therefore, our second integrated system, TE-QPP QE , utilizes the Pseudo Relevance Feedback Query Expansion approach to expand our modern candidate with topically-related terms.",
"First, similarly to the Query Feedback measure (measure (22) in the lower part of Table 1), we expand the candidate by adding terms with maximal contribution (top 5, in our experiments) to the Clarity score (Section 2.2).",
"Then, we calculate all QPP measures for the expanded query.",
"Since the expan- 24, 25. sions that we extract from the top retrieved documents typically include ancient terms as well, the new scores may better express the relevancy of the candidate's topic across the diachronic corpus.",
"We also performed feature selection, as done for the first system 4 .",
"Evaluation Evaluation Setting We applied our method to the diachronic corpus is the Responsa project Hebrew corpus 5 .",
"The Responsa corpus includes rabbinic case-law rulings which represent the historical-sociological milieu of real-life situations, collected over more than a thousand years, from the 11 th century until today.",
"The corpus consists of 81,993 documents, and was used for previous NLP and IR research (Choueka et al., 1971; Choueka et al., 1987; HaCohen-Kerner et al., 2008; Liebeskind et al., 2012; Zohar et al., 2013; .",
"The candidate target terms for our classification task were taken from the publicly available keylist of Hebrew Wikipedia entries 6 .",
"Since many of these tens of thousands entries, such as person names and place names, were not suitable as target terms, we first filtered them by Hebrew Named Entity Recognition 7 and manually.",
"Then, a list of approximately 5000 candidate target terms was manually annotated by two domain experts.",
"The experts decided which of the candidates corresponds to a concept that has been discussed significantly in our diachronic domain corpus.",
"Only candidates that the annotators agreed on their annotation were retained, and then balanced for equal number of positive and negative examples.",
"Consequently, the balanced training and test sets contain 500 and 200 candidates, respectively.",
"For classification, Weka's 8 Support Vector Machine supervised classifier with polynomial kernel was used.",
"We train the classifier with our training set and measure the accuracy on the test set.",
"Results Table 2 compares the classification performance of our baseline (TE) and integrated systems, (TE-QPP T erm ) and (TE-QPP QE ), proposed in Section 3.",
"Feature Set Accuracy (%) TE 61.5 TE-QPP T erm 65 TE-QPP QE 66.5 (McNemar, 1947) , on our diachronic corpus it seems to help.",
"Yet, when the QPP score is measured over the expanded candidate, and ancient documents are utilized, the performance increase is more notable (5 points) and the improvement over the baseline is statistically significant according to the McNemar's test with p<0.05.",
"We analyzed the false negative classifications of the baseline that were classified correctly by the QE-based configuration.",
"We found that their expanded forms contain ancient terms that help the system making the right decision.",
"For example, the Hebrew target term for slippers was expanded by the ancient expression corresponding to made of leather.",
"This is a useful expansion since in the ancient documents slippers are discussed in the context of fasts, as in two of the Jewish fasts wearing leather shoes is forbidden and people wear cloth-made slippers.",
"Conclusions and Future Work We introduced a method that combines features from two closely related tasks, terminology extraction and query performance prediction, to solve the task of target terms selection for a diachronic thesaurus.",
"In our diachronic setting, we showed that enriching TE measures with QPP measures, particularly when calculated on expanded candidates, significantly improves performance.",
"Our results suggest that it may be worth investigating this integrated approach also for other terminology extraction and QPP settings.",
"We plan to further explore the suggested method by utilizing additional query expansion algorithms.",
"In particular, to avoid expanding queries for which expansion degrade retrieval performance, we plan to investigate the selective query expansion approach (Cronen-Townsend et al., 2004) ."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Term Scoring Measures",
"Terminology Extraction",
"Query Performance Prediction",
"Integrated Term Scoring",
"Evaluation Setting",
"Results",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-19#paper-1013#slide-9 | Evaluation Setting | Diachronic corpus: the Responsa Project
Questions posed to rabbis along their detailed rabbinic answers
Written over a period of about a thousand years
Used for previous IR and NLP research
Balanced for positive and negative examples
Support Vector Machine with polynomial kernel | Diachronic corpus: the Responsa Project
Questions posed to rabbis along their detailed rabbinic answers
Written over a period of about a thousand years
Used for previous IR and NLP research
Balanced for positive and negative examples
Support Vector Machine with polynomial kernel | [] |
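The evaluation setting in this record trains Weka's SVM with a polynomial kernel on per-candidate feature vectors (a balanced set of 500 training and 200 test candidates). A rough scikit-learn analog of that setup; the random matrices below are placeholders for real TE/QPP feature scores, and the kernel degree is an assumption, not a parameter reported in the paper:

```python
# Stand-in for the paper's Weka SMO classifier: an SVM with polynomial
# kernel over one feature column per retained TE/QPP measure.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_features = 20                          # hypothetical number of kept measures
X_train = rng.normal(size=(500, n_features))   # placeholder feature scores
y_train = rng.integers(0, 2, size=500)         # 1 = relevant target term
X_test = rng.normal(size=(200, n_features))
y_test = rng.integers(0, 2, size=200)

clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=2, C=1.0))
clf.fit(X_train, y_train)
print("test accuracy:", (clf.predict(X_test) == y_test).mean())
```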
GEM-SciDuet-train-19#paper-1013#slide-10 | 1013 | Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus | A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, which integrates features from two closely related fields: Terminology Extraction and Query Performance Prediction (QPP). Our method further expands modern candidate terms with ancient related terms, before assessing their corpus relevancy with QPP measures. We evaluate the empirical benefit of our method for a thesaurus for a diachronic Jewish corpus. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102
],
"paper_content_text": [
"Introduction In recent years, there has been growing interest in diachronic lexical resources, which comprise terms from different language periods.",
"(Borin and Forsberg, 2011; Riedl et al., 2014) .",
"These resources are mainly used for studying language change and supporting searches in historical domains, bridging the lexical gap between modern and ancient language.",
"In particular, we are interested in this paper in a certain type of diachronic thesaurus.",
"It contains entries for modern terms, denoted as target terms.",
"Each entry includes a list of ancient related terms.",
"Beyond being a historical linguistic resource, such thesaurus is useful for supporting searches in a diachronic corpus, composed of both modern and ancient documents.",
"For example, in our historical Jewish corpus, the modern Hebrew term for terminal patient 1 has only few verbatim occurrences, in modern documents, but this topic has been widely discussed in ancient periods.",
"A domain searcher needs the diachronic thesaurus to enrich the search with ancient synonyms or related terms, such as dying and living for the moment.",
"Prior work on diachronic thesauri addressed the problem of collecting relevant related terms for given thesaurus entries.",
"In this paper we focus on the complementary preceding task of collecting a relevant list of modern target terms for a diachronic thesaurus in a certain domain.",
"As a starting point, we assume that a list of meaningful terms in the modern language is given, such as titles of Wikipedia articles.",
"Then, our task is to automatically decide which of these candidate terms are likely to be relevant for the corpus domain and should be included in the thesaurus.",
"In other words, we need to decide which of the candidate modern terms corresponds to a concept that has been discussed significantly in the diachronic domain corpus.",
"Our task is closely related to term scoring in the known Terminology Extraction (TE) task in NLP.",
"The goal of corpus-based TE is to automatically extract prominent terms from a given corpus and score them for domain relevancy.",
"In our setting, since all the target terms are modern, we avoid extracting them from the diachronic corpus of modern and ancient language.",
"Instead, we use a given candidate list and apply only the term scoring phase.",
"As a starting point, we adopt a rich set of state-of-the-art TE scoring measures and integrate them as features in a common supervised classification approach (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) .",
"Given our Information Retrieval (IR) motivation, we notice a closely related task to TE, namely Query Performance Prediction (QPP).",
"QPP methods are designed to estimate the retrieval quality of search queries, by assessing their relevance to the text collection.",
"Therefore, QPP scoring measures seem to be potentially suitable also for our terminology scoring task, by considering the candidate term as a search query.",
"Some of the QPP measures are indeed similar in nature to the TE methods, analyzing the distribution of the query terms within the collection.",
"However, some of the QPP methods have different IR-biased characteristics and may provide a marginal contribution.",
"Therefore, we adopted them as additional features for our classifier and indeed observed a performance increase.",
"Most of the QPP methods prioritize query terms with high frequency in the corpus.",
"However, in a diachronic corpus, such criterion may sometimes be problematic.",
"A modern target term might appear only in few modern documents, while being referred to, via ancient terminology, also in ancient documents.",
"Therefore, we would like our prediction measure to be aware of these ancient documents as well.",
"Following a particular QPP measure (Zhou and Croft, 2007) , we address this problem through Query Expansion (QE).",
"Accordingly, our method first expands the query containing the modern candidate term, then calculates the QPP scores of the expanded query and then utilizes them as scoring features.",
"Combining the baseline features with our expansion-based QPP features yields additional improvement in the classification results.",
"Term Scoring Measures This section reviews common measures developed for Terminology Extraction (Section 2.1) and for Query Performance Prediction (Section 2.2).",
"Table 1 lists those measures that were considered as features in our system, as described in Section 3.",
"Terminology Extraction Terminology Extraction (TE) methods aim to identify terms that are frequently used in a specific domain.",
"Typically, linguistic processors (e.g.",
"POS tagger, phrase chunker) are used to filter out stop words and restrict candidate terms to nouns or noun phrases.",
"Then, statistical measures are used to rank the candidate terms.",
"There are two main terminological properties that the statistical measures identify: unithood and termhood.",
"Measures that express unithood indicate the collocation strength of units that comprise a single term.",
"Measures that express termhood indicate the statistical prominence of the term in the target do-main corpus.",
"For our task, we focus on the second property, since the candidates are taken from a key-list of terms whose coherence in the language is already known.",
"Measures expressing termhood are based either on frequency in the target corpus (1, 2, 3, 4, 9, 11, 12, 13 ) 2 , or on comparison with frequency in a reference background corpus (8, 14, 16) .",
"Recently, approaches which combine both unithood and termhood were investigated as well (7, 8, 15, 16) .",
"Query Performance Prediction Query Performance Prediction (QPP) aims to estimate the quality of answers that a search system would return in response to a particular query.",
"Statistical QPP methods are categorized into two types: pre-retrieval methods, analyzing the distribution of the query term within the document collection; and post-retrieval methods, additionally analyzing the search results.",
"Some of the preretrieval methods are similar to TE methods based on the same term frequency statistics.",
"Pre-retrieval methods measure various properties of the query: specificity (17, 18, 24, 25) , similarity to the corpus (19) , coherence of the documents containing the query terms (26), variance of the query terms' weights over the documents containing it (20); and relatedness, as good performance is expected when the query terms co-occur frequently in the collection (21).",
"Post-retrieval methods are usually more complex, where the top search results are retrieved and analyzed.",
"They are categorized into three main paradigms: clarity-based methods (28), robustness-based methods (22) and score distribution based methods (23, 29).",
"We pay special attention to two post-retrieval QPP methods; Query Feedback (22) and Clarity (23).",
"The Clarity method measures the coherence of the query's search results with respect to the corpus.",
"It is defined as the KL divergence between a language model induced from the result list and that induced from the corpus.",
"The Query Feedback method measures the robustness of the query's results to query perturbations.",
"It models retrieval as a communication channel.",
"The input is the query, the channel is the search system, and the set of results is the noisy output of the channel.",
"A new query is generated from the list of search (Liu et al., 2005) 13 Term Variance Quality (Liu et al., 2005 ) 6 TF-Disjoint Corpora Frequency (Lopes et al., 2012) 14 Weirdness (Ahmad et al., 1999) 7 C-value (Frantzi and Ananiadou, 1999) 15 NC-value (Frantzi and Ananiadou, 1999) 8 Glossex (Kozakov et al., 2004) 16 TermExtractor (Sclano and Velardi, 2007 ) Query Performance Prediction measures 17 Average IDF 24 Average ICTF (Inverse collection term frequency) (Plachouras et al., 2004) 18 Query Scope 25 Simplified Clarity Score (He and Ounis, 2004) 19 Similarity Collection Query (Zhao et al., 2008) 26 Query Coherence (He et al., 2008 ) 20 Average Variance (Zhao et al., 2008) 27 Average Entropy (Cristina, 2013) 21 Term Relatedness (Hauff et al., 2008) 28 Clarity (Cronen-Townsend et al., 2002) 22 Query Feedback (Zhou and Croft, 2007) 29 Normalized Query Commitment (Shtok et al., 2009 ) 23 Weighted Information Gain (Zhou and Croft, 2007) Table 1: Prior art measures considered in our work results, taking the terms with maximal contribution to the Clarity score, and then a second list of results is retrieved for that second query.",
"The overlap between the two lists is the robustness score.",
"Our suggested method was inspired by the Query Feedback measure, as detailed in the next section.",
"Integrated Term Scoring We adopt the supervised framework for TE (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) , considering each candidate target term as a learning instance.",
"For each candidate, we calculate a set of features over which learning and classification are performed.",
"The classification predicts which candidates are suitable as target terms for the diachronic thesaurus.",
"Our baseline system (TE) includes state-of-the-art TE measures as features, listed in the upper part of Table 1 .",
"Next, we introduce two system variants that integrate QPP measures as additional features.",
"The first system, TE-QPP T erm , applies the QPP measures to the candidate term as the query.",
"All QPP measures, listed in the lower part of Table 1 , are utilized except for the Query Feedback measure (22) (see below).",
"To verify which QPP features are actually beneficial for terminology scoring, we measure the marginal contribution of each feature via ablation tests in 10-fold cross validation over the training data (see Section 4.1).",
"Features which did not yield marginal contribution were not included 3 .",
"3 Removed features from TE-QPPT erm: 17, 19, 22, 23, The two systems, described so far, rely on corpus occurrences of the original candidate term, prioritizing relatively frequent terms.",
"In a diachronic corpus, however, a candidate term might be rare in its original modern form, yet frequently referred to by archaic forms.",
"Therefore, we adopt a query expansion strategy based on Pseudo Relevance Feedback, which expands a query based on analyzing the top retrieved documents.",
"In our setting, this approach takes advantage of a typical property of modern documents in a diachronic corpus, namely their temporally-mixed language.",
"Often, modern documents in a diachronic domain include ancient terms that were either preserved in modern language or appear as citations.",
"Therefore, an expanded query of a modern term, which retrieves only modern documents, is likely to pick some of these ancient terms as well.",
"Thus, the expanded query would likely retrieve both modern and ancient documents and would allow QPP measures to evaluate the query relevance across periods.",
"Therefore, our second integrated system, TE-QPP QE , utilizes the Pseudo Relevance Feedback Query Expansion approach to expand our modern candidate with topically-related terms.",
"First, similarly to the Query Feedback measure (measure (22) in the lower part of Table 1), we expand the candidate by adding terms with maximal contribution (top 5, in our experiments) to the Clarity score (Section 2.2).",
"Then, we calculate all QPP measures for the expanded query.",
"Since the expan- 24, 25. sions that we extract from the top retrieved documents typically include ancient terms as well, the new scores may better express the relevancy of the candidate's topic across the diachronic corpus.",
"We also performed feature selection, as done for the first system 4 .",
"Evaluation Evaluation Setting We applied our method to the diachronic corpus is the Responsa project Hebrew corpus 5 .",
"The Responsa corpus includes rabbinic case-law rulings which represent the historical-sociological milieu of real-life situations, collected over more than a thousand years, from the 11 th century until today.",
"The corpus consists of 81,993 documents, and was used for previous NLP and IR research (Choueka et al., 1971; Choueka et al., 1987; HaCohen-Kerner et al., 2008; Liebeskind et al., 2012; Zohar et al., 2013; .",
"The candidate target terms for our classification task were taken from the publicly available keylist of Hebrew Wikipedia entries 6 .",
"Since many of these tens of thousands entries, such as person names and place names, were not suitable as target terms, we first filtered them by Hebrew Named Entity Recognition 7 and manually.",
"Then, a list of approximately 5000 candidate target terms was manually annotated by two domain experts.",
"The experts decided which of the candidates corresponds to a concept that has been discussed significantly in our diachronic domain corpus.",
"Only candidates that the annotators agreed on their annotation were retained, and then balanced for equal number of positive and negative examples.",
"Consequently, the balanced training and test sets contain 500 and 200 candidates, respectively.",
"For classification, Weka's 8 Support Vector Machine supervised classifier with polynomial kernel was used.",
"We train the classifier with our training set and measure the accuracy on the test set.",
"Results Table 2 compares the classification performance of our baseline (TE) and integrated systems, (TE-QPP T erm ) and (TE-QPP QE ), proposed in Section 3.",
"Feature Set Accuracy (%) TE 61.5 TE-QPP T erm 65 TE-QPP QE 66.5 (McNemar, 1947) , on our diachronic corpus it seems to help.",
"Yet, when the QPP score is measured over the expanded candidate, and ancient documents are utilized, the performance increase is more notable (5 points) and the improvement over the baseline is statistically significant according to the McNemar's test with p<0.05.",
"We analyzed the false negative classifications of the baseline that were classified correctly by the QE-based configuration.",
"We found that their expanded forms contain ancient terms that help the system making the right decision.",
"For example, the Hebrew target term for slippers was expanded by the ancient expression corresponding to made of leather.",
"This is a useful expansion since in the ancient documents slippers are discussed in the context of fasts, as in two of the Jewish fasts wearing leather shoes is forbidden and people wear cloth-made slippers.",
"Conclusions and Future Work We introduced a method that combines features from two closely related tasks, terminology extraction and query performance prediction, to solve the task of target terms selection for a diachronic thesaurus.",
"In our diachronic setting, we showed that enriching TE measures with QPP measures, particularly when calculated on expanded candidates, significantly improves performance.",
"Our results suggest that it may be worth investigating this integrated approach also for other terminology extraction and QPP settings.",
"We plan to further explore the suggested method by utilizing additional query expansion algorithms.",
"In particular, to avoid expanding queries for which expansion degrade retrieval performance, we plan to investigate the selective query expansion approach (Cronen-Townsend et al., 2004) ."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Term Scoring Measures",
"Terminology Extraction",
"Query Performance Prediction",
"Integrated Term Scoring",
"Evaluation Setting",
"Results",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-19#paper-1013#slide-10 | Results | Additional QPP features increase the classification accuracy
Utilizing ancient documents, via query expansion, improves
Improvement over baseline statistically significant | Additional QPP features increase the classification accuracy
Utilizing ancient documents, via query expansion, improves
Improvement over baseline statistically significant | [] |
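Significance in the Results record above is assessed with McNemar's test (McNemar, 1947) on the paired decisions of two classifiers over the same test candidates. A minimal sketch of the exact binomial form of the test; the per-candidate outcome lists below are hypothetical, not the paper's data:

```python
# Exact two-sided McNemar test on the discordant pairs of two systems
# evaluated on the same examples.
from math import comb

def mcnemar_exact(correct_a, correct_b):
    """Two-sided exact McNemar p-value from two boolean correctness lists."""
    b = sum(1 for a, bb in zip(correct_a, correct_b) if a and not bb)
    c = sum(1 for a, bb in zip(correct_a, correct_b) if bb and not a)
    n, k = b + c, min(b, c)
    if n == 0:
        return 1.0
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n  # P(X <= k), X~Bin(n, 0.5)
    return min(1.0, 2 * tail)

# Hypothetical per-candidate correctness over a 200-item test set: the
# baseline and the QE-based system disagree on 16 candidates (3 vs 13).
baseline = [True] * 123 + [False] * 77
expanded = [True] * 120 + [False] * 3 + [True] * 13 + [False] * 64
print(round(mcnemar_exact(baseline, expanded), 4))  # ~0.021, below 0.05
```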
GEM-SciDuet-train-19#paper-1013#slide-11 | 1013 | Integrating Query Performance Prediction in Term Scoring for Diachronic Thesaurus | A diachronic thesaurus is a lexical resource that aims to map between modern terms and their semantically related terms in earlier periods. In this paper, we investigate the task of collecting a list of relevant modern target terms for a domain-specific diachronic thesaurus. We propose a supervised learning scheme, which integrates features from two closely related fields: Terminology Extraction and Query Performance Prediction (QPP). Our method further expands modern candidate terms with ancient related terms, before assessing their corpus relevancy with QPP measures. We evaluate the empirical benefit of our method for a thesaurus for a diachronic Jewish corpus. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102
],
"paper_content_text": [
"Introduction In recent years, there has been growing interest in diachronic lexical resources, which comprise terms from different language periods.",
"(Borin and Forsberg, 2011; Riedl et al., 2014) .",
"These resources are mainly used for studying language change and supporting searches in historical domains, bridging the lexical gap between modern and ancient language.",
"In particular, we are interested in this paper in a certain type of diachronic thesaurus.",
"It contains entries for modern terms, denoted as target terms.",
"Each entry includes a list of ancient related terms.",
"Beyond being a historical linguistic resource, such thesaurus is useful for supporting searches in a diachronic corpus, composed of both modern and ancient documents.",
"For example, in our historical Jewish corpus, the modern Hebrew term for terminal patient 1 has only few verbatim occurrences, in modern documents, but this topic has been widely discussed in ancient periods.",
"A domain searcher needs the diachronic thesaurus to enrich the search with ancient synonyms or related terms, such as dying and living for the moment.",
"Prior work on diachronic thesauri addressed the problem of collecting relevant related terms for given thesaurus entries.",
"In this paper we focus on the complementary preceding task of collecting a relevant list of modern target terms for a diachronic thesaurus in a certain domain.",
"As a starting point, we assume that a list of meaningful terms in the modern language is given, such as titles of Wikipedia articles.",
"Then, our task is to automatically decide which of these candidate terms are likely to be relevant for the corpus domain and should be included in the thesaurus.",
"In other words, we need to decide which of the candidate modern terms corresponds to a concept that has been discussed significantly in the diachronic domain corpus.",
"Our task is closely related to term scoring in the known Terminology Extraction (TE) task in NLP.",
"The goal of corpus-based TE is to automatically extract prominent terms from a given corpus and score them for domain relevancy.",
"In our setting, since all the target terms are modern, we avoid extracting them from the diachronic corpus of modern and ancient language.",
"Instead, we use a given candidate list and apply only the term scoring phase.",
"As a starting point, we adopt a rich set of state-of-the-art TE scoring measures and integrate them as features in a common supervised classification approach (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) .",
"Given our Information Retrieval (IR) motivation, we notice a closely related task to TE, namely Query Performance Prediction (QPP).",
"QPP methods are designed to estimate the retrieval quality of search queries, by assessing their relevance to the text collection.",
"Therefore, QPP scoring measures seem to be potentially suitable also for our terminology scoring task, by considering the candidate term as a search query.",
"Some of the QPP measures are indeed similar in nature to the TE methods, analyzing the distribution of the query terms within the collection.",
"However, some of the QPP methods have different IR-biased characteristics and may provide a marginal contribution.",
"Therefore, we adopted them as additional features for our classifier and indeed observed a performance increase.",
"Most of the QPP methods prioritize query terms with high frequency in the corpus.",
"However, in a diachronic corpus, such criterion may sometimes be problematic.",
"A modern target term might appear only in few modern documents, while being referred to, via ancient terminology, also in ancient documents.",
"Therefore, we would like our prediction measure to be aware of these ancient documents as well.",
"Following a particular QPP measure (Zhou and Croft, 2007) , we address this problem through Query Expansion (QE).",
"Accordingly, our method first expands the query containing the modern candidate term, then calculates the QPP scores of the expanded query and then utilizes them as scoring features.",
"Combining the baseline features with our expansion-based QPP features yields additional improvement in the classification results.",
"Term Scoring Measures This section reviews common measures developed for Terminology Extraction (Section 2.1) and for Query Performance Prediction (Section 2.2).",
"Table 1 lists those measures that were considered as features in our system, as described in Section 3.",
"Terminology Extraction Terminology Extraction (TE) methods aim to identify terms that are frequently used in a specific domain.",
"Typically, linguistic processors (e.g.",
"POS tagger, phrase chunker) are used to filter out stop words and restrict candidate terms to nouns or noun phrases.",
"Then, statistical measures are used to rank the candidate terms.",
"There are two main terminological properties that the statistical measures identify: unithood and termhood.",
"Measures that express unithood indicate the collocation strength of units that comprise a single term.",
"Measures that express termhood indicate the statistical prominence of the term in the target do-main corpus.",
"For our task, we focus on the second property, since the candidates are taken from a key-list of terms whose coherence in the language is already known.",
"Measures expressing termhood are based either on frequency in the target corpus (1, 2, 3, 4, 9, 11, 12, 13 ) 2 , or on comparison with frequency in a reference background corpus (8, 14, 16) .",
"Recently, approaches which combine both unithood and termhood were investigated as well (7, 8, 15, 16) .",
"Query Performance Prediction Query Performance Prediction (QPP) aims to estimate the quality of answers that a search system would return in response to a particular query.",
"Statistical QPP methods are categorized into two types: pre-retrieval methods, analyzing the distribution of the query term within the document collection; and post-retrieval methods, additionally analyzing the search results.",
"Some of the preretrieval methods are similar to TE methods based on the same term frequency statistics.",
"Pre-retrieval methods measure various properties of the query: specificity (17, 18, 24, 25) , similarity to the corpus (19) , coherence of the documents containing the query terms (26), variance of the query terms' weights over the documents containing it (20); and relatedness, as good performance is expected when the query terms co-occur frequently in the collection (21).",
"Post-retrieval methods are usually more complex, where the top search results are retrieved and analyzed.",
"They are categorized into three main paradigms: clarity-based methods (28), robustness-based methods (22) and score distribution based methods (23, 29).",
"We pay special attention to two post-retrieval QPP methods; Query Feedback (22) and Clarity (23).",
"The Clarity method measures the coherence of the query's search results with respect to the corpus.",
"It is defined as the KL divergence between a language model induced from the result list and that induced from the corpus.",
"The Query Feedback method measures the robustness of the query's results to query perturbations.",
"It models retrieval as a communication channel.",
"The input is the query, the channel is the search system, and the set of results is the noisy output of the channel.",
"A new query is generated from the list of search (Liu et al., 2005) 13 Term Variance Quality (Liu et al., 2005 ) 6 TF-Disjoint Corpora Frequency (Lopes et al., 2012) 14 Weirdness (Ahmad et al., 1999) 7 C-value (Frantzi and Ananiadou, 1999) 15 NC-value (Frantzi and Ananiadou, 1999) 8 Glossex (Kozakov et al., 2004) 16 TermExtractor (Sclano and Velardi, 2007 ) Query Performance Prediction measures 17 Average IDF 24 Average ICTF (Inverse collection term frequency) (Plachouras et al., 2004) 18 Query Scope 25 Simplified Clarity Score (He and Ounis, 2004) 19 Similarity Collection Query (Zhao et al., 2008) 26 Query Coherence (He et al., 2008 ) 20 Average Variance (Zhao et al., 2008) 27 Average Entropy (Cristina, 2013) 21 Term Relatedness (Hauff et al., 2008) 28 Clarity (Cronen-Townsend et al., 2002) 22 Query Feedback (Zhou and Croft, 2007) 29 Normalized Query Commitment (Shtok et al., 2009 ) 23 Weighted Information Gain (Zhou and Croft, 2007) Table 1: Prior art measures considered in our work results, taking the terms with maximal contribution to the Clarity score, and then a second list of results is retrieved for that second query.",
"The overlap between the two lists is the robustness score.",
"Our suggested method was inspired by the Query Feedback measure, as detailed in the next section.",
"Integrated Term Scoring We adopt the supervised framework for TE (Foo and Merkel, 2010; Zhang et al., 2010; Loukachevitch, 2012) , considering each candidate target term as a learning instance.",
"For each candidate, we calculate a set of features over which learning and classification are performed.",
"The classification predicts which candidates are suitable as target terms for the diachronic thesaurus.",
"Our baseline system (TE) includes state-of-the-art TE measures as features, listed in the upper part of Table 1 .",
"Next, we introduce two system variants that integrate QPP measures as additional features.",
"The first system, TE-QPP T erm , applies the QPP measures to the candidate term as the query.",
"All QPP measures, listed in the lower part of Table 1 , are utilized except for the Query Feedback measure (22) (see below).",
"To verify which QPP features are actually beneficial for terminology scoring, we measure the marginal contribution of each feature via ablation tests in 10-fold cross validation over the training data (see Section 4.1).",
"Features which did not yield marginal contribution were not included 3 .",
"3 Removed features from TE-QPPT erm: 17, 19, 22, 23, The two systems, described so far, rely on corpus occurrences of the original candidate term, prioritizing relatively frequent terms.",
"In a diachronic corpus, however, a candidate term might be rare in its original modern form, yet frequently referred to by archaic forms.",
"Therefore, we adopt a query expansion strategy based on Pseudo Relevance Feedback, which expands a query based on analyzing the top retrieved documents.",
"In our setting, this approach takes advantage of a typical property of modern documents in a diachronic corpus, namely their temporally-mixed language.",
"Often, modern documents in a diachronic domain include ancient terms that were either preserved in modern language or appear as citations.",
"Therefore, an expanded query of a modern term, which retrieves only modern documents, is likely to pick some of these ancient terms as well.",
"Thus, the expanded query would likely retrieve both modern and ancient documents and would allow QPP measures to evaluate the query relevance across periods.",
"Therefore, our second integrated system, TE-QPP QE , utilizes the Pseudo Relevance Feedback Query Expansion approach to expand our modern candidate with topically-related terms.",
"First, similarly to the Query Feedback measure (measure (22) in the lower part of Table 1), we expand the candidate by adding terms with maximal contribution (top 5, in our experiments) to the Clarity score (Section 2.2).",
"Then, we calculate all QPP measures for the expanded query.",
"Since the expan- 24, 25. sions that we extract from the top retrieved documents typically include ancient terms as well, the new scores may better express the relevancy of the candidate's topic across the diachronic corpus.",
"We also performed feature selection, as done for the first system 4 .",
"Evaluation Evaluation Setting We applied our method to the diachronic corpus is the Responsa project Hebrew corpus 5 .",
"The Responsa corpus includes rabbinic case-law rulings which represent the historical-sociological milieu of real-life situations, collected over more than a thousand years, from the 11 th century until today.",
"The corpus consists of 81,993 documents, and was used for previous NLP and IR research (Choueka et al., 1971; Choueka et al., 1987; HaCohen-Kerner et al., 2008; Liebeskind et al., 2012; Zohar et al., 2013; .",
"The candidate target terms for our classification task were taken from the publicly available keylist of Hebrew Wikipedia entries 6 .",
"Since many of these tens of thousands entries, such as person names and place names, were not suitable as target terms, we first filtered them by Hebrew Named Entity Recognition 7 and manually.",
"Then, a list of approximately 5000 candidate target terms was manually annotated by two domain experts.",
"The experts decided which of the candidates corresponds to a concept that has been discussed significantly in our diachronic domain corpus.",
"Only candidates that the annotators agreed on their annotation were retained, and then balanced for equal number of positive and negative examples.",
"Consequently, the balanced training and test sets contain 500 and 200 candidates, respectively.",
"For classification, Weka's 8 Support Vector Machine supervised classifier with polynomial kernel was used.",
"We train the classifier with our training set and measure the accuracy on the test set.",
"Results Table 2 compares the classification performance of our baseline (TE) and integrated systems, (TE-QPP T erm ) and (TE-QPP QE ), proposed in Section 3.",
"Feature Set Accuracy (%) TE 61.5 TE-QPP T erm 65 TE-QPP QE 66.5 (McNemar, 1947) , on our diachronic corpus it seems to help.",
"Yet, when the QPP score is measured over the expanded candidate, and ancient documents are utilized, the performance increase is more notable (5 points) and the improvement over the baseline is statistically significant according to the McNemar's test with p<0.05.",
"We analyzed the false negative classifications of the baseline that were classified correctly by the QE-based configuration.",
"We found that their expanded forms contain ancient terms that help the system making the right decision.",
"For example, the Hebrew target term for slippers was expanded by the ancient expression corresponding to made of leather.",
"This is a useful expansion since in the ancient documents slippers are discussed in the context of fasts, as in two of the Jewish fasts wearing leather shoes is forbidden and people wear cloth-made slippers.",
"Conclusions and Future Work We introduced a method that combines features from two closely related tasks, terminology extraction and query performance prediction, to solve the task of target terms selection for a diachronic thesaurus.",
"In our diachronic setting, we showed that enriching TE measures with QPP measures, particularly when calculated on expanded candidates, significantly improves performance.",
"Our results suggest that it may be worth investigating this integrated approach also for other terminology extraction and QPP settings.",
"We plan to further explore the suggested method by utilizing additional query expansion algorithms.",
"In particular, to avoid expanding queries for which expansion degrade retrieval performance, we plan to investigate the selective query expansion approach (Cronen-Townsend et al., 2004) ."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5"
],
"paper_header_content": [
"Introduction",
"Term Scoring Measures",
"Terminology Extraction",
"Query Performance Prediction",
"Integrated Term Scoring",
"Evaluation Setting",
"Results",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-19#paper-1013#slide-11 | Summary | Task: target term selection for a diachronic thesaurus
1. Integrating Query Performance Prediction in Term Scoring
2. Penetrating to ancient texts via query expansion
Utilize additional query expansion algorithms
Investigate the selective query expansion approach | Task: target term selection for a diachronic thesaurus
1. Integrating Query Performance Prediction in Term Scoring
2. Penetrating to ancient texts via query expansion
Utilize additional query expansion algorithms
Investigate the selective query expansion approach | [] |
GEM-SciDuet-train-20#paper-1018#slide-0 | 1018 | Working Memory Networks: Augmenting Memory Networks with a Relational Reasoning Module | During the last years, there has been a lot of interest in achieving some kind of complex reasoning using deep neural networks. To do that, models like Memory Networks (MemNNs) have combined external memory storages and attention mechanisms. These architectures, however, lack of more complex reasoning mechanisms that could allow, for instance, relational reasoning. Relation Networks (RNs), on the other hand, have shown outstanding results in relational reasoning tasks. Unfortunately, their computational cost grows quadratically with the number of memories, something prohibitive for larger problems. To solve these issues, we introduce the Working Memory Network, a MemNN architecture with a novel working memory storage and reasoning module. Our model retains the relational reasoning abilities of the RN while reducing its computational complexity from quadratic to linear. We tested our model on the text QA dataset bAbI and the visual QA dataset NLVR. In the jointly trained bAbI-10k, we set a new state-of-the-art, achieving a mean error of less than 0.5%. Moreover, a simple ensemble of two of our models solves all 20 tasks in the joint version of the benchmark. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260
],
"paper_content_text": [
"Introduction A central ability needed to solve daily tasks is complex reasoning.",
"It involves the capacity to comprehend and represent the environment, retain information from past experiences, and solve problems based on the stored information.",
"Our ability to solve those problems is supported by multiple specialized components, including shortterm memory storage, long-term semantic and procedural memory, and an executive controller that, among others, controls the attention over memories (Baddeley, 1992) .",
"Many promising advances for achieving complex reasoning with neural networks have been obtained during the last years.",
"Unlike symbolic approaches to complex reasoning, deep neural networks can learn representations from perceptual information.",
"Because of that, they do not suffer from the symbol grounding problem (Harnad, 1999) , and can generalize better than classical symbolic approaches.",
"Most of these neural network models make use of an explicit memory storage and an attention mechanism.",
"For instance, Memory Networks (MemNN), Dynamic Memory Networks (DMN) or Neural Turing Machines (NTM) (Weston et al., 2014; Kumar et al., 2016; Graves et al., 2014) build explicit memories from the perceptual inputs and access these memories using learned attention mechanisms.",
"After that some memories have been attended, using a multi-step procedure, the attended memories are combined and passed through a simple output layer that produces a final answer.",
"While this allows some multi-step inferential process, these networks lack a more complex reasoning mechanism, needed for more elaborated tasks such as inferring relations among entities (relational reasoning).",
"On the contrary, Relation Networks (RNs), proposed in Santoro et al.",
"(2017) , have shown outstanding performance in relational reasoning tasks.",
"Nonetheless, a major drawback of RNs is that they consider each of the input objects in pairs, having to process a quadratic number of relations.",
"That limits the usability of the model on large problems and makes forward and backward computations quite expensive.",
"To solve these problems we propose a novel Memory Network Figure 1 : The W-MemNN model applied to textual question answering.",
"Each input fact is processed using a GRU, and the output representation is stored in the short-term memory storage.",
"Then, the attentional controller computes an output vector that summarizes relevant parts of the memories.",
"This process is repeated H hops (a dotted line delimits each hop), and each output is stored in the working memory buffer.",
"Finally, the output of each hop is passed to the reasoning module that produces the final output.",
"architecture called the Working Memory Network (W-MemNN).",
"Our model augments the original MemNN with a relational reasoning module and a new working memory buffer.",
"The attention mechanism of the Memory Network allows the filtering of irrelevant inputs, reducing a lot of the computational complexity while keeping the relational reasoning capabilities of the RN.",
"Three main components compose the W-MemNN: An input module that converts the perceptual inputs into an internal vector representation and save these representations into a short-term storage, an attentional controller that attend to these internal representations and update a working memory buffer, and a reasoning module that operates on the set of objects stored in the working memory buffer in order to produce a final answer.",
"This component-based architecture is inspired by the well-known model from cognitive sciences called the multi-component working memory model, proposed in Baddeley and Hitch (1974) .",
"We studied the proposed model on the text-based QA benchmark bAbI which consists of 20 different toy tasks that measure different reasoning skills.",
"While models such as Ent-Net (Henaff et al., 2016) have focused on the pertask training version of the benchmark (where a different model is trained for each task), we decided to focus on the jointly trained version of the task, where the model is trained on all tasks simultaneously.",
"In the jointly trained bAbI-10k benchmark we achieved state-of-the-art performance, improving the previous state-of-the-art on more than 2%.",
"Moreover, a simple ensemble of two of our models can solve all 20 tasks simultaneously.",
"Also, we tested our model on the visual QA dataset NLVR.",
"In that dataset, we obtained performance at the level of the Module Neural Networks (Andreas et al., 2016) .",
"Our model, however, achieves these results using the raw input statements, without the extra text processing used in the Module Networks.",
"Finally, qualitative and quantitative analysis shows that the inclusion of the Relational Reasoning module is crucial to improving the performance of the MemNN on tasks that involve relational reasoning.",
"We can achieve this performance by also reducing the computation times of the RN considerably.",
"Consequently, we hope that this contribution may allow applying RNs to larger problems.",
"Model Our model is based on the Memory Network architecture.",
"Unlike MemNN we have included a reasoning module that helps the network to solve more complex tasks.",
"The proposed model consists of three main modules: An input module, an at-tentional controller, and a reasoning module.",
"The model processes the input information in multiple passes or hops.",
"At each pass the output of the previous hop can condition the current pass, allowing some incremental refinement.",
"Input module: The input module converts the perceptual information into an internal feature representation.",
"The input information can be processed in chunks, and each chunk is saved into a short-term storage.",
"The definition of what is a chunk of information depends on each task.",
"For instance, for textual question answering, we define each chunk as a sentence.",
"Other options might be n-grams or full documents.",
"This short-term storage can only be accessed during the hop.",
"Attentional Controller: The attentional controller decides in which parts of the short-term storage the model should focus.",
"The attended memories are kept during all the hops in a working memory buffer.",
"The attentional controller is conditioned by the task at hand, for instance, in question answering the question can condition the attention.",
"Also, it may be conditioned by the output of previous hops, allowing the model to change its focus to new portions of the memory over time.",
"Many models compute the attention for each memory using a compatibility function between the memory and the question.",
"Then, the output is calculated as the weighted sum of the memory values, using the attention as weight.",
"A simple way to compute the attention for each memory is to use dot-product attention.",
"This kind of mechanism is used in the original Memory Network and computes the attention value as the dot product between each memory and the question.",
"Although this kind of attention is simple, it may not be enough for more complex tasks.",
"Also, since there are no learned weights in the attention mechanism, the attention relies entirely on the learned embeddings.",
"That is something that we want to avoid in order to separate the learning of the input and attention module.",
"One way to allow learning in the dot-product attention is to project the memories and query vectors linearly.",
"That is done by multiplying each vector by a learned projection matrix (or equivalently a feed-forward neural network).",
"In this way, we can set apart the attention and input embeddings learning, and also allow more complex patterns of attention.",
"Reasoning Module: The memories stored in the working memory buffer are passed to the rea-soning module.",
"The choice of reasoning mechanism is left open and may depend on the task at hand.",
"In this work, we use a Relation Network as the reasoning module.",
"The RN takes the attended memories in pairs to infer relations among the memories.",
"That can be useful, for example, in tasks that include comparisons.",
"A detailed description of the full model is shown in Figure 1 .",
"W-MemN2N for Textual Question Answering We proceed to describe an implementation of the model for textual question answering.",
"In textual question answering the input consists of a set of sentences or facts, a question, and an answer.",
"The goal is to answer the question correctly based on the given facts.",
"Let (s, q, a) represents an input sample, consisting of a set of sentences s = {x i } L i=1 , a query q and an answer a.",
"Each sentence contains M words, {w i } M i=1 , where each word is represented as a onehot vector of length |V |, being |V | the vocabulary size.",
"The question contains Q words, represented as in the input sentences.",
"Input Module Each word in each sentence is encoded into a vector representation v i using an embedding matrix W ∈ R |V |×d , where d is the embedding size.",
"Then, the sentence is converted into a memory vector m i using the final output of a gated recurrent neural network (GRU) (Chung et al., 2014) : m i = GRU([v 1 , v 2 , ..., v M ]) Each memory {m i } L i=1 , where m i ∈ R d , is stored into the short-term memory storage.",
"The question is encoded into a vector u in a similar way, using the output of a gated recurrent network.",
"Attentional Controller Our attention module is based on the Multi-Head attention mechanism proposed in Vaswani et al.",
"(2017) .",
"First, the memories are projected using a projection matrix W m ∈ R d×d , as m i = W m m i .",
"Then, the similarity between the projected memory and the question is computed using the Scaled Dot-Product attention: α i = Softmax u T m i √ d (1) = exp((u T m i )/ √ d) j exp((u T m j )/ √ d) .",
"(2) Next, the memories are combined using the attention weights α i , obtaining an output vector h = j α j m j .",
"In the Multi-Head mechanism, the memories are projected S times using different projection matrices {W s m } S s=1 .",
"For each group of projected memories, an output vector {h i } S i=1 is obtained using the Scaled Dot-Product attention (eq.",
"2).",
"Finally, all vector outputs are concatenated and projected again using a different matrix: o k = [h 1 ; h 2 ; ...; h S ]W o , where ; is the concatenation operator and W o ∈ R Sd×d .",
"The o k vector is the final response vector for the hop k. This vector is stored in the working memory buffer.",
"The attention procedure can be repeated many times (or hops).",
"At each hop, the attention can be conditioned on the previous hop by replacing the question vector u by the output of the previous hop.",
"To do that we pass the output through a simple neural network f t .",
"Then, we use the output of the network as the new conditioner: o n k = f t (o k ).",
"(3) This network allows some learning in the transition patterns between hops.",
"We found Multi-Head attention to be very useful in the joint bAbI task.",
"This can be a product of the intrinsic multi-task nature of the bAbI dataset.",
"A possibility is that each attention head is being adapted for different groups of related tasks.",
"However, we did not investigate this further.",
"Also, note that while in this section we use the same set of memories at each hop, this is not necessary.",
"For larger sequences each hop can operate in different parts of the input sequence, allowing the processing of the input in various steps.",
"Reasoning Module The outputs stored in the working memory buffer are passed to the reasoning module.",
"The reasoning module used in this work is a Relation Network (RN).",
"In the RN the output vectors are concatenated in pairs together with the question vector.",
"Each pair is passed through a neural network g θ and all the outputs of the network are added to produce a single vector.",
"Then, the sum is passed to a final neural network f φ : r = f φ i,j g θ ([o i ; o j ; u]) , (4) The output of the Relation Network is then passed through a final weight matrix and a softmax to produce the predicted answer: a = Softmax(V r), (5) where V ∈ R |A|×d φ , |A| is the number of possible answers and d φ is the dimension of the output of f φ .",
"The full network is trained end-to-end using standard cross-entropy betweenâ and the true label a.",
"3 Related Work Memory Augmented Neural Networks During the last years, there has been plenty of work on achieving complex reasoning with deep neural networks.",
"An important part of these developments has used some kind of explicit memory and attention mechanisms.",
"One of the earliest recent work is that of Memory Networks (Weston et al., 2014) .",
"Memory Networks work by building an addressable memory from the inputs and then accessing those memories in a series of reading operations.",
"Another, similar, line of work is the one of Neural Turing Machines.",
"They were proposed in Graves et al.",
"(2014) and are the basis for recent neural architectures including the Differentiable Neural Computer (DNC) and the Sparse Access Memory (SAM) Rae et al., 2016) .",
"The NTM model also uses a content addressable memory, as in the Memory Network, but adds a write operation that allows updating the memory over time.",
"The management of the memory, however, is different from the one of the MemNN.",
"While the MemNN model pre-load the memories using all the inputs, the NTM writes and read the memory one input at a time.",
"An additional model that makes use of explicit external memory is the Dynamic Memory Network (DMN) (Kumar et al., 2016; Xiong et al., 2016) .",
"The model shares some similarities with the Memory Network model.",
"However, unlike the MemNN model, it operates in the input sequentially (as in the NTM model).",
"The model defines an Episodic Memory module that makes use of a Gated Recurrent Neural Network (GRU) to store and update an internal state that represents the episodic storage.",
"Memory Networks Since our model is based on the MemNN architecture, we proceed to describe it in more detail.",
"The Memory Network model was introduced in Weston et al.",
"(2014) .",
"In that work, the authors proposed a model composed of four components: The input feature map that converts the input into an internal vector representation, the generalization module that updates the memories given the input, the output feature map that produces a new output using the stored memories, and the response module that produces the final answer.",
"The model, as initially proposed, needed some strong supervision that explicitly tells the model which memories to attend.",
"In order to solve that limitation, the End-To-End Memory Network (MemN2N) was proposed in Sukhbaatar et al.",
"(2015) .",
"The model replaced the hard-attention mechanism used in the original MemNN by a softattention mechanism that allowed to train it endto-end without strong supervision.",
"In our model, we use a component-based approach, as in the original MemNN architecture.",
"However, there are some differences: First, our model makes use of two external storages: a short-term storage, and a working memory buffer.",
"The first is equivalent to the one updated by the input and generalization module of the MemNN.",
"The working memory buffer, on the other hand, does not have a counterpart in the original model.",
"Second, our model replaces the response module by a reasoning module.",
"Unlike the original MemNN, our reasoning module is intended to make more complex work than the response module, that was only designed to produce a final answer.",
"Relation Networks The ability to infer and learn relations between entities is fundamental to solve many complex reasoning problems.",
"Recently, a number of neural network models have been proposed for this task.",
"These include Interaction Networks, Graph Neural Networks, and Relation Networks (Battaglia et al., 2016; Scarselli et al., 2009; Santoro et al., 2017) .",
"In specific, Relation Networks (RNs) have shown excellent results in solving textual and visual question answering tasks requiring relational reasoning.",
"The model is relatively simple: First, all the inputs are grouped in pairs and each pair is passed through a neural network.",
"Then, the outputs of the first network are added, and another neural network processes the final vector.",
"The role of the first network is to infer relations among each pair of objects.",
"In Palm et al.",
"(2017) the authors propose a recurrent extension to the RN.",
"By allowing multiple steps of relational reasoning, the model can learn to solve more complex tasks.",
"The main issue with the RN architecture is that its scale very poorly for larger problems.",
"That is because it operates on O(n 2 ) pairs, where n is the number of input objects (for instance, sentences in the case of textual question answering).",
"This becomes quickly prohibitive for tasks involving many input objects.",
"Cognitive Science The concept of working memory has been extensively developed in cognitive psychology.",
"It consists of a limited capacity system that allows temporary storage and manipulation of information and is crucial to any reasoning task.",
"One of the most influential models of working memory is the multi-component model of working memory proposed by Baddeley and Hitch (1974) .",
"This model is composed both of a supervisory attentional controller (the central executive) and two short-term storage systems: The phonological loop, capable of holding speech-based information, and the visuospatial sketchpad, concerned with visual storage.",
"The central executive plays various functions, including the capacity to focus attention, to divide attention and to control access to long-term memory.",
"Later modifications to the model (Baddeley, 2000) include an episodic buffer that is capable of integrating and holding information from different sources.",
"Connections of the working memory model to memory augmented neural networks have been already studied in Graves et al.",
"(2014) .",
"We follow this effort and subdivide our model into components that resemble (in a basic way) the multi-component model of working memory.",
"Note, however, that we use the term working memory buffer instead of episodic buffer.",
"That is because the episodic buffer has an integration function that our model does not cover.",
"However, that can be an interesting source of inspiration for next versions of the model that integrate both visual and textual information for question answering.",
"Experiments Textual Question Answering To evaluate our model on textual question answering we used the Facebook bAbI-10k dataset .",
"The bAbI dataset is a textual LSTM Sukhbaatar et al.",
"(2015) .",
"Results for SDNC are took from Rae et al.",
"(2016) .",
"WMN † is an ensemble of two Working Memory Networks.",
"MN-S MN SDNC WMN WMN QA benchmark composed of 20 different tasks.",
"Each task is designed to test a different reasoning skill, such as deduction, induction, and coreference resolution.",
"Some of the tasks need relational reasoning, for instance, to compare the size of different entities.",
"Each sample is composed of a question, an answer, and a set of facts.",
"There are two versions of the dataset, referring to different dataset sizes: bAbI-1k and bAbI-10k.",
"In this work, we focus on the bAbI-10k version of the dataset which consists of 10, 000 training samples per task.",
"A task is considered solved if a model achieves greater than 95% accuracy.",
"Note that training can be done per-task or joint (by training the model on all tasks at the same time).",
"Some models (Liu and Perez, 2017) have focused in the per-task training performance, including the Ent-Net model (Henaff et al., 2016) that solves all the tasks in the per-task training version.",
"We choose to focus on the joint training version since we think is more indicative of the generalization properties of the model.",
"A detailed analysis of the dataset can be found in Lee et al.",
"(2015) .",
"Model Details To encode the input facts we used a word embedding that projected each word in a sentence into a real vector of size d. We defined d = 30 and used a GRU with 30 units to process each sentence.",
"We used the 30 sentences in the support set that were immediately prior to the question.",
"The question was processed using the same configuration but with a different GRU.",
"We used 8 heads in the Multi-Head attention mechanism.",
"For the transition networks f t , which operates in the output of each hop, we used a two-layer MLP consisting of 15 and 30 hidden units (so the output preserves the memory dimension).",
"We used H = 4 hops (or equivalently, a working memory buffer of size 4).",
"In the reasoning module, we used a 3layer MLP consisting of 128 units in each layer and with ReLU non-linearities for g θ .",
"We omitted the f φ network since we did not observe improvements when using it.",
"The final layer was a linear layer that produced logits for a softmax over the answer vocabulary.",
"Training Details We trained our model end-to-end with a crossentropy loss function and using the Adam optimizer (Kingma and Ba, 2014).",
"We used a learning rate of ν = 1e −3 .",
"We trained the model during 400 epochs.",
"For training, we used a batch size of 32.",
"As in Sukhbaatar et al.",
"(2015) we did not average the loss over a batch.",
"Also, we clipped gradients with norm larger than 40 (Pascanu et al., 2013) .",
"For all the dense layers we used 2 regularization with value 1e −3 .",
"All weights were initialized using Glorot normal initialization (Glorot and Bengio, 2010) .",
"10% of the training set was heldout to form a validation set that we used to select the architecture and for hyperparameter tunning.",
"In some cases, we found useful to restart training after the 400 epochs with a smaller learning rate of 1e −5 and anneals every 5 epochs by ν/2 until 20 epochs were reached.",
"bAbI-10k Results On the jointly trained bAbI-10k dataset our best model (out of 10 runs) achieves an accuracy of 99.58%.",
"That is a 2.38% improvement over the previous state-of-the-art that was obtained by the Sparse Differential Neural Computer (SDNC) (Rae et al., 2016) .",
"The best model of the 10 runs solves almost all tasks of the bAbI-10k dataset (by a 0.3% margin).",
"However, a simple ensemble of the best two models solves all 20 tasks and achieves an almost perfect accuracy of 99.7%.",
"We list the results for each task in Table 1 .",
"Other authors have reported high variance in the results, for instance, the authors of the SDNC report a mean accuracy and standard deviation over 15 runs of 93.6 ± 2.5 (with 15.9 ± 1.6 passed tasks).",
"In contrast, our model achieves a mean accuracy of 98.3 ± 1.2 (with 18.6 ± 0.4 passed tasks), which is better and more stable than the average results obtained by the SDNC.",
"The Relation Network solves 18/20 tasks.",
"We achieve even better performance, and with considerably fewer computations, as is explained in Section 4.3.",
"We think that by including the attention mechanism, the relation reasoning module can focus on learning the relation among relevant objects, instead of learning spurious relations among irrelevant objects.",
"For that, the Multi-Head attention mechanism was very helpful.",
"The Effect of the Relational Reasoning Module When compared to the original Memory Network, our model substantially improves the accuracy of tasks 17 (positional reasoning) and 19 (path finding).",
"Both tasks require the analysis of multiple relations (Lee et al., 2015) .",
"For instance, the task 19 needs that the model reasons about the relation of different positions of the entities, and in that way find a path to arrive from one to another.",
"The accuracy improves in 75.1% for task 19 and in 41.5% for task 17 when compared with the MemN2N model.",
"Since both tasks require reasoning about relations, we hypothesize that the relational reasoning module of the W-MemNN was of great help to improve the performance on both tasks.",
"The Relation Network, on the other hand, fails in the tasks 2 (2 supporting facts) and 3 (3 supporting facts).",
"Both tasks require handling a significant number of facts, especially in task 3.",
"In those cases, the attention mechanism is crucial to filter out irrelevant facts.",
"Visual Question Answering To further study our model we evaluated its performance on a visual question answering dataset.",
"For that, we used the recently proposed NLVR dataset (Suhr et al., 2017) .",
"Each sample in the NLVR dataset is composed of an image with three sub-images and a statement.",
"The task consists in judging if the statement is true or false for that image.",
"Evaluating the statement requires reasoning about the sets of objects in the image, comparing objects properties, and reasoning about spatial relations.",
"The dataset is interesting for us for two reasons.",
"First, the statements evaluation requires complex relational reasoning about the objects in the image.",
"Second, unlike the bAbI dataset, the statements are written in natural language.",
"Because of that, each statement displays a range of syntactic and semantic phenomena that are not present in the bAbI dataset.",
"Model details Our model can be easily adapted to deal with visual information.",
"Following the idea from Santoro et al.",
"(2017) , instead of processing each input using a recurrent neural network, we use a Convolutional Neural Network (CNN).",
"The CNN takes as input each sub-image and convolved them through convolutional layers.",
"The output of the CNN consists of k feature maps (where k is the number of kernels in the final convolutional layer) of size d × d. Then, each memory is built from the vector composed by the concatenation of the cells in the same position of each feature map.",
"Consequently, d × d memories of size k are stored in the shortterm storage.",
"The statement is processed using a GRU neural network as in the textual reasoning task.",
"Then, we can proceed using the same architecture for the reasoning and attention module that the one used in the textual QA model.",
"However, for the visual QA task, we used an additive attention mechanism.",
"The additive attention computes the attention weight using a feed-forward neural network applied to the concatenation of the memory vector and statement vector.",
"Results Our model achieves a validation / test accuracy of 65.6%/65.8%.",
"Notably, we achieved a performance comparable to the results of the Module Neural Networks (Andreas et al., 2016 ) that make use of standard NLP tools to process the statements into structured representations.",
"Unlike the Module Neural Networks, we achieved our results using only raw input statements, allowing the model to learn how to process the textual input by itself.",
"Note that given the more complex nature of the language used in the NLVR dataset we needed to use a larger embedding size and GRU hidden layer than in the bAbI dataset (100 and 128 respectively).",
"That, however, is a nice feature of separating the input from the reasoning and attention component: One way to process more complex language statements is increasing the capacity of the input module.",
"From O(n 2 ) to O(n) One of the major limitations of RNs is that they need to process each one of the memories in pairs.",
"To do that, the RN must perform O(n 2 ) forward and backward passes (where n is the number of memories).",
"That becomes quickly prohibitive for a larger number of memories.",
"In contrast, the dependence of the W-MemNN run times on the number of memories is linear.",
"Note, however, that computation times in the W-MemNN depend quadratically on the size of the working memory buffer.",
"Nonetheless, this number is expected to be much smaller than the number of memories.",
"To compare both models we measured the wall-clock time for a forward and backward pass for a single batch of size 32.",
"We performed these experiments on a GPU NVIDIA K80.",
"Figure 2 shows the results.",
"Memory Visualizations One nice feature from Memory Networks is that they allow some interpretability of the reasoning procedure by looking at the attention weights.",
"At each hop, the attention weights show which parts of the memory the model found relevant to produce the output.",
"RNs, on the contrary, lack of this feature.",
"Table 2 shows the attention values for visual and textual question answering.",
"Relation Network W-MemNN Figure 2 : Wall-clock times for a forward and backward pass for a single batch.",
"The batch size used is 32.",
"While for 5 memories the times are comparable, for 30 memories the W-MemNN takes around 50s while the RN takes 930s, a speedup of almost 20×.",
"Conclusion We have proposed a novel Working Memory Network architecture that introduces improved reasoning abilities to the original MemNN model.",
"We demonstrated that by augmenting the MemNN architecture with a Relation Network, the computational complexity of the RN can be reduced, without loss of performance.",
"That opens the opportunity for using RNs in larger problems, something that may be very useful, given the many tasks requiring a significant amount of memories.",
"Although we have used RN as the reasoning module in this work, other options can be tested.",
"It might be interesting to analyze how other reasoning modules can improve different weaknesses of the model.",
"We presented results on the jointly trained bAbI-10k dataset, where we achieve a new state-of-theart, with an average error of less than 0.5%.",
"Also, we showed that our model can be easily adapted for visual question answering.",
"Our architecture combines perceptual input processing, short-term memory storage, an attention mechanism, and a reasoning module.",
"While other models have focused on different parts of these components, we think that is important to find ways to combine these different mechanisms if we want to build models capable of complex reasoning.",
"Evidence from cognitive sciences seems to show that all these abilities are needed in order to achieve human-level complex reasoning."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"3.1",
"3.2",
"3.3",
"3.4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Model",
"W-MemN2N for Textual Question Answering",
"Memory Augmented Neural Networks",
"Memory Networks",
"Relation Networks",
"Cognitive Science",
"Textual Question Answering",
"Visual Question Answering",
"From O(n 2 ) to O(n)",
"Memory Visualizations",
"Conclusion"
]
} | GEM-SciDuet-train-20#paper-1018#slide-0 | Reasoning for Question Answering | Reasoning is crucial for building systems that can dialogue with humans in natural language.
Reasoning: The process of forming conclusions, judgments, or inferences from facts or premises.
Inferential Reasoning: Premise 1, Premise 2 -> Conclusion
John is in the kitchen, John has the ball -> The ball is in the kitchen
Relational Reasoning: Reason about relations between entities and their properties (Santoro et al.)
Causal Reasoning, Logical Reasoning, | Reasoning is crucial for building systems that can dialogue with humans in natural language.
Reasoning: The process of forming conclusions, judgments, or inferences from facts or premises.
Inferential Reasoning: Premise 1, Premise 2 -> Conclusion
John is in the kitchen, John has the ball -> The ball is in the kitchen
Relational Reasoning: Reason about relations between entities and their properties (Santoro et al.)
Causal Reasoning, Logical Reasoning, | [] |
GEM-SciDuet-train-20#paper-1018#slide-1 | 1018 | Working Memory Networks: Augmenting Memory Networks with a Relational Reasoning Module | During the last years, there has been a lot of interest in achieving some kind of complex reasoning using deep neural networks. To do that, models like Memory Networks (MemNNs) have combined external memory storages and attention mechanisms. These architectures, however, lack of more complex reasoning mechanisms that could allow, for instance, relational reasoning. Relation Networks (RNs), on the other hand, have shown outstanding results in relational reasoning tasks. Unfortunately, their computational cost grows quadratically with the number of memories, something prohibitive for larger problems. To solve these issues, we introduce the Working Memory Network, a MemNN architecture with a novel working memory storage and reasoning module. Our model retains the relational reasoning abilities of the RN while reducing its computational complexity from quadratic to linear. We tested our model on the text QA dataset bAbI and the visual QA dataset NLVR. In the jointly trained bAbI-10k, we set a new state-of-the-art, achieving a mean error of less than 0.5%. Moreover, a simple ensemble of two of our models solves all 20 tasks in the joint version of the benchmark. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260
],
"paper_content_text": [
"Introduction A central ability needed to solve daily tasks is complex reasoning.",
"It involves the capacity to comprehend and represent the environment, retain information from past experiences, and solve problems based on the stored information.",
"Our ability to solve those problems is supported by multiple specialized components, including shortterm memory storage, long-term semantic and procedural memory, and an executive controller that, among others, controls the attention over memories (Baddeley, 1992) .",
"Many promising advances for achieving complex reasoning with neural networks have been obtained during the last years.",
"Unlike symbolic approaches to complex reasoning, deep neural networks can learn representations from perceptual information.",
"Because of that, they do not suffer from the symbol grounding problem (Harnad, 1999) , and can generalize better than classical symbolic approaches.",
"Most of these neural network models make use of an explicit memory storage and an attention mechanism.",
"For instance, Memory Networks (MemNN), Dynamic Memory Networks (DMN) or Neural Turing Machines (NTM) (Weston et al., 2014; Kumar et al., 2016; Graves et al., 2014) build explicit memories from the perceptual inputs and access these memories using learned attention mechanisms.",
"After that some memories have been attended, using a multi-step procedure, the attended memories are combined and passed through a simple output layer that produces a final answer.",
"While this allows some multi-step inferential process, these networks lack a more complex reasoning mechanism, needed for more elaborated tasks such as inferring relations among entities (relational reasoning).",
"On the contrary, Relation Networks (RNs), proposed in Santoro et al.",
"(2017) , have shown outstanding performance in relational reasoning tasks.",
"Nonetheless, a major drawback of RNs is that they consider each of the input objects in pairs, having to process a quadratic number of relations.",
"That limits the usability of the model on large problems and makes forward and backward computations quite expensive.",
"To solve these problems we propose a novel Memory Network Figure 1 : The W-MemNN model applied to textual question answering.",
"Each input fact is processed using a GRU, and the output representation is stored in the short-term memory storage.",
"Then, the attentional controller computes an output vector that summarizes relevant parts of the memories.",
"This process is repeated H hops (a dotted line delimits each hop), and each output is stored in the working memory buffer.",
"Finally, the output of each hop is passed to the reasoning module that produces the final output.",
"architecture called the Working Memory Network (W-MemNN).",
"Our model augments the original MemNN with a relational reasoning module and a new working memory buffer.",
"The attention mechanism of the Memory Network allows the filtering of irrelevant inputs, reducing a lot of the computational complexity while keeping the relational reasoning capabilities of the RN.",
"Three main components compose the W-MemNN: An input module that converts the perceptual inputs into an internal vector representation and save these representations into a short-term storage, an attentional controller that attend to these internal representations and update a working memory buffer, and a reasoning module that operates on the set of objects stored in the working memory buffer in order to produce a final answer.",
"This component-based architecture is inspired by the well-known model from cognitive sciences called the multi-component working memory model, proposed in Baddeley and Hitch (1974) .",
"We studied the proposed model on the text-based QA benchmark bAbI which consists of 20 different toy tasks that measure different reasoning skills.",
"While models such as Ent-Net (Henaff et al., 2016) have focused on the pertask training version of the benchmark (where a different model is trained for each task), we decided to focus on the jointly trained version of the task, where the model is trained on all tasks simultaneously.",
"In the jointly trained bAbI-10k benchmark we achieved state-of-the-art performance, improving the previous state-of-the-art on more than 2%.",
"Moreover, a simple ensemble of two of our models can solve all 20 tasks simultaneously.",
"Also, we tested our model on the visual QA dataset NLVR.",
"In that dataset, we obtained performance at the level of the Module Neural Networks (Andreas et al., 2016) .",
"Our model, however, achieves these results using the raw input statements, without the extra text processing used in the Module Networks.",
"Finally, qualitative and quantitative analysis shows that the inclusion of the Relational Reasoning module is crucial to improving the performance of the MemNN on tasks that involve relational reasoning.",
"We can achieve this performance by also reducing the computation times of the RN considerably.",
"Consequently, we hope that this contribution may allow applying RNs to larger problems.",
"Model Our model is based on the Memory Network architecture.",
"Unlike MemNN we have included a reasoning module that helps the network to solve more complex tasks.",
"The proposed model consists of three main modules: An input module, an at-tentional controller, and a reasoning module.",
"The model processes the input information in multiple passes or hops.",
"At each pass the output of the previous hop can condition the current pass, allowing some incremental refinement.",
"Input module: The input module converts the perceptual information into an internal feature representation.",
"The input information can be processed in chunks, and each chunk is saved into a short-term storage.",
"The definition of what is a chunk of information depends on each task.",
"For instance, for textual question answering, we define each chunk as a sentence.",
"Other options might be n-grams or full documents.",
"This short-term storage can only be accessed during the hop.",
"Attentional Controller: The attentional controller decides in which parts of the short-term storage the model should focus.",
"The attended memories are kept during all the hops in a working memory buffer.",
"The attentional controller is conditioned by the task at hand, for instance, in question answering the question can condition the attention.",
"Also, it may be conditioned by the output of previous hops, allowing the model to change its focus to new portions of the memory over time.",
"Many models compute the attention for each memory using a compatibility function between the memory and the question.",
"Then, the output is calculated as the weighted sum of the memory values, using the attention as weight.",
"A simple way to compute the attention for each memory is to use dot-product attention.",
"This kind of mechanism is used in the original Memory Network and computes the attention value as the dot product between each memory and the question.",
"Although this kind of attention is simple, it may not be enough for more complex tasks.",
"Also, since there are no learned weights in the attention mechanism, the attention relies entirely on the learned embeddings.",
"That is something that we want to avoid in order to separate the learning of the input and attention module.",
"One way to allow learning in the dot-product attention is to project the memories and query vectors linearly.",
"That is done by multiplying each vector by a learned projection matrix (or equivalently a feed-forward neural network).",
"In this way, we can set apart the attention and input embeddings learning, and also allow more complex patterns of attention.",
"Reasoning Module: The memories stored in the working memory buffer are passed to the rea-soning module.",
"The choice of reasoning mechanism is left open and may depend on the task at hand.",
"In this work, we use a Relation Network as the reasoning module.",
"The RN takes the attended memories in pairs to infer relations among the memories.",
"That can be useful, for example, in tasks that include comparisons.",
"A detailed description of the full model is shown in Figure 1 .",
"W-MemN2N for Textual Question Answering We proceed to describe an implementation of the model for textual question answering.",
"In textual question answering the input consists of a set of sentences or facts, a question, and an answer.",
"The goal is to answer the question correctly based on the given facts.",
"Let (s, q, a) represents an input sample, consisting of a set of sentences s = {x i } L i=1 , a query q and an answer a.",
"Each sentence contains M words, {w i } M i=1 , where each word is represented as a onehot vector of length |V |, being |V | the vocabulary size.",
"The question contains Q words, represented as in the input sentences.",
"Input Module Each word in each sentence is encoded into a vector representation v i using an embedding matrix W ∈ R |V |×d , where d is the embedding size.",
"Then, the sentence is converted into a memory vector m i using the final output of a gated recurrent neural network (GRU) (Chung et al., 2014) : m i = GRU([v 1 , v 2 , ..., v M ]) Each memory {m i } L i=1 , where m i ∈ R d , is stored into the short-term memory storage.",
"The question is encoded into a vector u in a similar way, using the output of a gated recurrent network.",
"Attentional Controller Our attention module is based on the Multi-Head attention mechanism proposed in Vaswani et al.",
"(2017) .",
"First, the memories are projected using a projection matrix W m ∈ R d×d , as m i = W m m i .",
"Then, the similarity between the projected memory and the question is computed using the Scaled Dot-Product attention: α i = Softmax u T m i √ d (1) = exp((u T m i )/ √ d) j exp((u T m j )/ √ d) .",
"(2) Next, the memories are combined using the attention weights α i , obtaining an output vector h = j α j m j .",
"In the Multi-Head mechanism, the memories are projected S times using different projection matrices {W s m } S s=1 .",
"For each group of projected memories, an output vector {h i } S i=1 is obtained using the Scaled Dot-Product attention (eq.",
"2).",
"Finally, all vector outputs are concatenated and projected again using a different matrix: o k = [h 1 ; h 2 ; ...; h S ]W o , where ; is the concatenation operator and W o ∈ R Sd×d .",
"The o k vector is the final response vector for the hop k. This vector is stored in the working memory buffer.",
"The attention procedure can be repeated many times (or hops).",
"At each hop, the attention can be conditioned on the previous hop by replacing the question vector u by the output of the previous hop.",
"To do that we pass the output through a simple neural network f t .",
"Then, we use the output of the network as the new conditioner: o n k = f t (o k ).",
"(3) This network allows some learning in the transition patterns between hops.",
"We found Multi-Head attention to be very useful in the joint bAbI task.",
"This can be a product of the intrinsic multi-task nature of the bAbI dataset.",
"A possibility is that each attention head is being adapted for different groups of related tasks.",
"However, we did not investigate this further.",
"Also, note that while in this section we use the same set of memories at each hop, this is not necessary.",
"For larger sequences each hop can operate in different parts of the input sequence, allowing the processing of the input in various steps.",
"Reasoning Module The outputs stored in the working memory buffer are passed to the reasoning module.",
"The reasoning module used in this work is a Relation Network (RN).",
"In the RN the output vectors are concatenated in pairs together with the question vector.",
"Each pair is passed through a neural network g θ and all the outputs of the network are added to produce a single vector.",
"Then, the sum is passed to a final neural network f φ : r = f φ i,j g θ ([o i ; o j ; u]) , (4) The output of the Relation Network is then passed through a final weight matrix and a softmax to produce the predicted answer: a = Softmax(V r), (5) where V ∈ R |A|×d φ , |A| is the number of possible answers and d φ is the dimension of the output of f φ .",
"The full network is trained end-to-end using standard cross-entropy betweenâ and the true label a.",
"3 Related Work Memory Augmented Neural Networks During the last years, there has been plenty of work on achieving complex reasoning with deep neural networks.",
"An important part of these developments has used some kind of explicit memory and attention mechanisms.",
"One of the earliest recent work is that of Memory Networks (Weston et al., 2014) .",
"Memory Networks work by building an addressable memory from the inputs and then accessing those memories in a series of reading operations.",
"Another, similar, line of work is the one of Neural Turing Machines.",
"They were proposed in Graves et al.",
"(2014) and are the basis for recent neural architectures including the Differentiable Neural Computer (DNC) and the Sparse Access Memory (SAM) Rae et al., 2016) .",
"The NTM model also uses a content addressable memory, as in the Memory Network, but adds a write operation that allows updating the memory over time.",
"The management of the memory, however, is different from the one of the MemNN.",
"While the MemNN model pre-load the memories using all the inputs, the NTM writes and read the memory one input at a time.",
"An additional model that makes use of explicit external memory is the Dynamic Memory Network (DMN) (Kumar et al., 2016; Xiong et al., 2016) .",
"The model shares some similarities with the Memory Network model.",
"However, unlike the MemNN model, it operates in the input sequentially (as in the NTM model).",
"The model defines an Episodic Memory module that makes use of a Gated Recurrent Neural Network (GRU) to store and update an internal state that represents the episodic storage.",
"Memory Networks Since our model is based on the MemNN architecture, we proceed to describe it in more detail.",
"The Memory Network model was introduced in Weston et al.",
"(2014) .",
"In that work, the authors proposed a model composed of four components: The input feature map that converts the input into an internal vector representation, the generalization module that updates the memories given the input, the output feature map that produces a new output using the stored memories, and the response module that produces the final answer.",
"The model, as initially proposed, needed some strong supervision that explicitly tells the model which memories to attend.",
"In order to solve that limitation, the End-To-End Memory Network (MemN2N) was proposed in Sukhbaatar et al.",
"(2015) .",
"The model replaced the hard-attention mechanism used in the original MemNN by a softattention mechanism that allowed to train it endto-end without strong supervision.",
"In our model, we use a component-based approach, as in the original MemNN architecture.",
"However, there are some differences: First, our model makes use of two external storages: a short-term storage, and a working memory buffer.",
"The first is equivalent to the one updated by the input and generalization module of the MemNN.",
"The working memory buffer, on the other hand, does not have a counterpart in the original model.",
"Second, our model replaces the response module by a reasoning module.",
"Unlike the original MemNN, our reasoning module is intended to make more complex work than the response module, that was only designed to produce a final answer.",
"Relation Networks The ability to infer and learn relations between entities is fundamental to solve many complex reasoning problems.",
"Recently, a number of neural network models have been proposed for this task.",
"These include Interaction Networks, Graph Neural Networks, and Relation Networks (Battaglia et al., 2016; Scarselli et al., 2009; Santoro et al., 2017) .",
"In specific, Relation Networks (RNs) have shown excellent results in solving textual and visual question answering tasks requiring relational reasoning.",
"The model is relatively simple: First, all the inputs are grouped in pairs and each pair is passed through a neural network.",
"Then, the outputs of the first network are added, and another neural network processes the final vector.",
"The role of the first network is to infer relations among each pair of objects.",
"In Palm et al.",
"(2017) the authors propose a recurrent extension to the RN.",
"By allowing multiple steps of relational reasoning, the model can learn to solve more complex tasks.",
"The main issue with the RN architecture is that its scale very poorly for larger problems.",
"That is because it operates on O(n 2 ) pairs, where n is the number of input objects (for instance, sentences in the case of textual question answering).",
"This becomes quickly prohibitive for tasks involving many input objects.",
"Cognitive Science The concept of working memory has been extensively developed in cognitive psychology.",
"It consists of a limited capacity system that allows temporary storage and manipulation of information and is crucial to any reasoning task.",
"One of the most influential models of working memory is the multi-component model of working memory proposed by Baddeley and Hitch (1974) .",
"This model is composed both of a supervisory attentional controller (the central executive) and two short-term storage systems: The phonological loop, capable of holding speech-based information, and the visuospatial sketchpad, concerned with visual storage.",
"The central executive plays various functions, including the capacity to focus attention, to divide attention and to control access to long-term memory.",
"Later modifications to the model (Baddeley, 2000) include an episodic buffer that is capable of integrating and holding information from different sources.",
"Connections of the working memory model to memory augmented neural networks have been already studied in Graves et al.",
"(2014) .",
"We follow this effort and subdivide our model into components that resemble (in a basic way) the multi-component model of working memory.",
"Note, however, that we use the term working memory buffer instead of episodic buffer.",
"That is because the episodic buffer has an integration function that our model does not cover.",
"However, that can be an interesting source of inspiration for next versions of the model that integrate both visual and textual information for question answering.",
"Experiments Textual Question Answering To evaluate our model on textual question answering we used the Facebook bAbI-10k dataset .",
"The bAbI dataset is a textual LSTM Sukhbaatar et al.",
"(2015) .",
"Results for SDNC are took from Rae et al.",
"(2016) .",
"WMN † is an ensemble of two Working Memory Networks.",
"MN-S MN SDNC WMN WMN QA benchmark composed of 20 different tasks.",
"Each task is designed to test a different reasoning skill, such as deduction, induction, and coreference resolution.",
"Some of the tasks need relational reasoning, for instance, to compare the size of different entities.",
"Each sample is composed of a question, an answer, and a set of facts.",
"There are two versions of the dataset, referring to different dataset sizes: bAbI-1k and bAbI-10k.",
"In this work, we focus on the bAbI-10k version of the dataset which consists of 10, 000 training samples per task.",
"A task is considered solved if a model achieves greater than 95% accuracy.",
"Note that training can be done per-task or joint (by training the model on all tasks at the same time).",
"Some models (Liu and Perez, 2017) have focused in the per-task training performance, including the Ent-Net model (Henaff et al., 2016) that solves all the tasks in the per-task training version.",
"We choose to focus on the joint training version since we think is more indicative of the generalization properties of the model.",
"A detailed analysis of the dataset can be found in Lee et al.",
"(2015) .",
"Model Details To encode the input facts we used a word embedding that projected each word in a sentence into a real vector of size d. We defined d = 30 and used a GRU with 30 units to process each sentence.",
"We used the 30 sentences in the support set that were immediately prior to the question.",
"The question was processed using the same configuration but with a different GRU.",
"We used 8 heads in the Multi-Head attention mechanism.",
"For the transition networks f t , which operates in the output of each hop, we used a two-layer MLP consisting of 15 and 30 hidden units (so the output preserves the memory dimension).",
"We used H = 4 hops (or equivalently, a working memory buffer of size 4).",
"In the reasoning module, we used a 3layer MLP consisting of 128 units in each layer and with ReLU non-linearities for g θ .",
"We omitted the f φ network since we did not observe improvements when using it.",
"The final layer was a linear layer that produced logits for a softmax over the answer vocabulary.",
"Training Details We trained our model end-to-end with a crossentropy loss function and using the Adam optimizer (Kingma and Ba, 2014).",
"We used a learning rate of ν = 1e −3 .",
"We trained the model during 400 epochs.",
"For training, we used a batch size of 32.",
"As in Sukhbaatar et al.",
"(2015) we did not average the loss over a batch.",
"Also, we clipped gradients with norm larger than 40 (Pascanu et al., 2013) .",
"For all the dense layers we used 2 regularization with value 1e −3 .",
"All weights were initialized using Glorot normal initialization (Glorot and Bengio, 2010) .",
"10% of the training set was heldout to form a validation set that we used to select the architecture and for hyperparameter tunning.",
"In some cases, we found useful to restart training after the 400 epochs with a smaller learning rate of 1e −5 and anneals every 5 epochs by ν/2 until 20 epochs were reached.",
"bAbI-10k Results On the jointly trained bAbI-10k dataset our best model (out of 10 runs) achieves an accuracy of 99.58%.",
"That is a 2.38% improvement over the previous state-of-the-art that was obtained by the Sparse Differential Neural Computer (SDNC) (Rae et al., 2016) .",
"The best model of the 10 runs solves almost all tasks of the bAbI-10k dataset (by a 0.3% margin).",
"However, a simple ensemble of the best two models solves all 20 tasks and achieves an almost perfect accuracy of 99.7%.",
"We list the results for each task in Table 1 .",
"Other authors have reported high variance in the results, for instance, the authors of the SDNC report a mean accuracy and standard deviation over 15 runs of 93.6 ± 2.5 (with 15.9 ± 1.6 passed tasks).",
"In contrast, our model achieves a mean accuracy of 98.3 ± 1.2 (with 18.6 ± 0.4 passed tasks), which is better and more stable than the average results obtained by the SDNC.",
"The Relation Network solves 18/20 tasks.",
"We achieve even better performance, and with considerably fewer computations, as is explained in Section 4.3.",
"We think that by including the attention mechanism, the relation reasoning module can focus on learning the relation among relevant objects, instead of learning spurious relations among irrelevant objects.",
"For that, the Multi-Head attention mechanism was very helpful.",
"The Effect of the Relational Reasoning Module When compared to the original Memory Network, our model substantially improves the accuracy of tasks 17 (positional reasoning) and 19 (path finding).",
"Both tasks require the analysis of multiple relations (Lee et al., 2015) .",
"For instance, the task 19 needs that the model reasons about the relation of different positions of the entities, and in that way find a path to arrive from one to another.",
"The accuracy improves in 75.1% for task 19 and in 41.5% for task 17 when compared with the MemN2N model.",
"Since both tasks require reasoning about relations, we hypothesize that the relational reasoning module of the W-MemNN was of great help to improve the performance on both tasks.",
"The Relation Network, on the other hand, fails in the tasks 2 (2 supporting facts) and 3 (3 supporting facts).",
"Both tasks require handling a significant number of facts, especially in task 3.",
"In those cases, the attention mechanism is crucial to filter out irrelevant facts.",
"Visual Question Answering To further study our model we evaluated its performance on a visual question answering dataset.",
"For that, we used the recently proposed NLVR dataset (Suhr et al., 2017) .",
"Each sample in the NLVR dataset is composed of an image with three sub-images and a statement.",
"The task consists in judging if the statement is true or false for that image.",
"Evaluating the statement requires reasoning about the sets of objects in the image, comparing objects properties, and reasoning about spatial relations.",
"The dataset is interesting for us for two reasons.",
"First, the statements evaluation requires complex relational reasoning about the objects in the image.",
"Second, unlike the bAbI dataset, the statements are written in natural language.",
"Because of that, each statement displays a range of syntactic and semantic phenomena that are not present in the bAbI dataset.",
"Model details Our model can be easily adapted to deal with visual information.",
"Following the idea from Santoro et al.",
"(2017) , instead of processing each input using a recurrent neural network, we use a Convolutional Neural Network (CNN).",
"The CNN takes as input each sub-image and convolved them through convolutional layers.",
"The output of the CNN consists of k feature maps (where k is the number of kernels in the final convolutional layer) of size d × d. Then, each memory is built from the vector composed by the concatenation of the cells in the same position of each feature map.",
"Consequently, d × d memories of size k are stored in the shortterm storage.",
"The statement is processed using a GRU neural network as in the textual reasoning task.",
"Then, we can proceed using the same architecture for the reasoning and attention module that the one used in the textual QA model.",
"However, for the visual QA task, we used an additive attention mechanism.",
"The additive attention computes the attention weight using a feed-forward neural network applied to the concatenation of the memory vector and statement vector.",
"Results Our model achieves a validation / test accuracy of 65.6%/65.8%.",
"Notably, we achieved a performance comparable to the results of the Module Neural Networks (Andreas et al., 2016 ) that make use of standard NLP tools to process the statements into structured representations.",
"Unlike the Module Neural Networks, we achieved our results using only raw input statements, allowing the model to learn how to process the textual input by itself.",
"Note that given the more complex nature of the language used in the NLVR dataset we needed to use a larger embedding size and GRU hidden layer than in the bAbI dataset (100 and 128 respectively).",
"That, however, is a nice feature of separating the input from the reasoning and attention component: One way to process more complex language statements is increasing the capacity of the input module.",
"From O(n 2 ) to O(n) One of the major limitations of RNs is that they need to process each one of the memories in pairs.",
"To do that, the RN must perform O(n 2 ) forward and backward passes (where n is the number of memories).",
"That becomes quickly prohibitive for a larger number of memories.",
"In contrast, the dependence of the W-MemNN run times on the number of memories is linear.",
"Note, however, that computation times in the W-MemNN depend quadratically on the size of the working memory buffer.",
"Nonetheless, this number is expected to be much smaller than the number of memories.",
"To compare both models we measured the wall-clock time for a forward and backward pass for a single batch of size 32.",
"We performed these experiments on a GPU NVIDIA K80.",
"Figure 2 shows the results.",
"Memory Visualizations One nice feature from Memory Networks is that they allow some interpretability of the reasoning procedure by looking at the attention weights.",
"At each hop, the attention weights show which parts of the memory the model found relevant to produce the output.",
"RNs, on the contrary, lack of this feature.",
"Table 2 shows the attention values for visual and textual question answering.",
"Relation Network W-MemNN Figure 2 : Wall-clock times for a forward and backward pass for a single batch.",
"The batch size used is 32.",
"While for 5 memories the times are comparable, for 30 memories the W-MemNN takes around 50s while the RN takes 930s, a speedup of almost 20×.",
"Conclusion We have proposed a novel Working Memory Network architecture that introduces improved reasoning abilities to the original MemNN model.",
"We demonstrated that by augmenting the MemNN architecture with a Relation Network, the computational complexity of the RN can be reduced, without loss of performance.",
"That opens the opportunity for using RNs in larger problems, something that may be very useful, given the many tasks requiring a significant amount of memories.",
"Although we have used RN as the reasoning module in this work, other options can be tested.",
"It might be interesting to analyze how other reasoning modules can improve different weaknesses of the model.",
"We presented results on the jointly trained bAbI-10k dataset, where we achieve a new state-of-theart, with an average error of less than 0.5%.",
"Also, we showed that our model can be easily adapted for visual question answering.",
"Our architecture combines perceptual input processing, short-term memory storage, an attention mechanism, and a reasoning module.",
"While other models have focused on different parts of these components, we think that is important to find ways to combine these different mechanisms if we want to build models capable of complex reasoning.",
"Evidence from cognitive sciences seems to show that all these abilities are needed in order to achieve human-level complex reasoning."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"3.1",
"3.2",
"3.3",
"3.4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Model",
"W-MemN2N for Textual Question Answering",
"Memory Augmented Neural Networks",
"Memory Networks",
"Relation Networks",
"Cognitive Science",
"Textual Question Answering",
"Visual Question Answering",
"From O(n 2 ) to O(n)",
"Memory Visualizations",
"Conclusion"
]
} | GEM-SciDuet-train-20#paper-1018#slide-1 | bAbI Dataset | One of the earliest datasets to measure the reasoning abilities of ML systems.
Easy to evaluate different reasoning capabilities.
Noiseless tasks: separates reasoning analysis from natural language understanding.
A thorough analysis can be found in (Lee et al., 2016).
Category 2: Two Supporting Facts.
Mary went to the kitchen. Sandra journeyed to the office. Mary got the football there. Mary travelled to the garden.
Where is the football? garden. Is(Football, Garden)
Category 4: Path Finding.
The bedroom is south of the hallway. The bathroom is east of the office. The kitchen is west of the garden. The garden is south of the office. The office is south of the bedroom.
How do you go from the garden to the bedroom? n,n
GEM-SciDuet-train-20#paper-1018#slide-2 | 1018 | Working Memory Networks: Augmenting Memory Networks with a Relational Reasoning Module | During the last years, there has been a lot of interest in achieving some kind of complex reasoning using deep neural networks. To do that, models like Memory Networks (MemNNs) have combined external memory storages and attention mechanisms. These architectures, however, lack of more complex reasoning mechanisms that could allow, for instance, relational reasoning. Relation Networks (RNs), on the other hand, have shown outstanding results in relational reasoning tasks. Unfortunately, their computational cost grows quadratically with the number of memories, something prohibitive for larger problems. To solve these issues, we introduce the Working Memory Network, a MemNN architecture with a novel working memory storage and reasoning module. Our model retains the relational reasoning abilities of the RN while reducing its computational complexity from quadratic to linear. We tested our model on the text QA dataset bAbI and the visual QA dataset NLVR. In the jointly trained bAbI-10k, we set a new state-of-the-art, achieving a mean error of less than 0.5%. Moreover, a simple ensemble of two of our models solves all 20 tasks in the joint version of the benchmark. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260
],
"paper_content_text": [
"Introduction A central ability needed to solve daily tasks is complex reasoning.",
"It involves the capacity to comprehend and represent the environment, retain information from past experiences, and solve problems based on the stored information.",
"Our ability to solve those problems is supported by multiple specialized components, including shortterm memory storage, long-term semantic and procedural memory, and an executive controller that, among others, controls the attention over memories (Baddeley, 1992) .",
"Many promising advances for achieving complex reasoning with neural networks have been obtained during the last years.",
"Unlike symbolic approaches to complex reasoning, deep neural networks can learn representations from perceptual information.",
"Because of that, they do not suffer from the symbol grounding problem (Harnad, 1999) , and can generalize better than classical symbolic approaches.",
"Most of these neural network models make use of an explicit memory storage and an attention mechanism.",
"For instance, Memory Networks (MemNN), Dynamic Memory Networks (DMN) or Neural Turing Machines (NTM) (Weston et al., 2014; Kumar et al., 2016; Graves et al., 2014) build explicit memories from the perceptual inputs and access these memories using learned attention mechanisms.",
"After that some memories have been attended, using a multi-step procedure, the attended memories are combined and passed through a simple output layer that produces a final answer.",
"While this allows some multi-step inferential process, these networks lack a more complex reasoning mechanism, needed for more elaborated tasks such as inferring relations among entities (relational reasoning).",
"On the contrary, Relation Networks (RNs), proposed in Santoro et al.",
"(2017) , have shown outstanding performance in relational reasoning tasks.",
"Nonetheless, a major drawback of RNs is that they consider each of the input objects in pairs, having to process a quadratic number of relations.",
"That limits the usability of the model on large problems and makes forward and backward computations quite expensive.",
"To solve these problems we propose a novel Memory Network Figure 1 : The W-MemNN model applied to textual question answering.",
"Each input fact is processed using a GRU, and the output representation is stored in the short-term memory storage.",
"Then, the attentional controller computes an output vector that summarizes relevant parts of the memories.",
"This process is repeated H hops (a dotted line delimits each hop), and each output is stored in the working memory buffer.",
"Finally, the output of each hop is passed to the reasoning module that produces the final output.",
"architecture called the Working Memory Network (W-MemNN).",
"Our model augments the original MemNN with a relational reasoning module and a new working memory buffer.",
"The attention mechanism of the Memory Network allows the filtering of irrelevant inputs, reducing a lot of the computational complexity while keeping the relational reasoning capabilities of the RN.",
"Three main components compose the W-MemNN: An input module that converts the perceptual inputs into an internal vector representation and save these representations into a short-term storage, an attentional controller that attend to these internal representations and update a working memory buffer, and a reasoning module that operates on the set of objects stored in the working memory buffer in order to produce a final answer.",
"This component-based architecture is inspired by the well-known model from cognitive sciences called the multi-component working memory model, proposed in Baddeley and Hitch (1974) .",
"We studied the proposed model on the text-based QA benchmark bAbI which consists of 20 different toy tasks that measure different reasoning skills.",
"While models such as Ent-Net (Henaff et al., 2016) have focused on the pertask training version of the benchmark (where a different model is trained for each task), we decided to focus on the jointly trained version of the task, where the model is trained on all tasks simultaneously.",
"In the jointly trained bAbI-10k benchmark we achieved state-of-the-art performance, improving the previous state-of-the-art on more than 2%.",
"Moreover, a simple ensemble of two of our models can solve all 20 tasks simultaneously.",
"Also, we tested our model on the visual QA dataset NLVR.",
"In that dataset, we obtained performance at the level of the Module Neural Networks (Andreas et al., 2016) .",
"Our model, however, achieves these results using the raw input statements, without the extra text processing used in the Module Networks.",
"Finally, qualitative and quantitative analysis shows that the inclusion of the Relational Reasoning module is crucial to improving the performance of the MemNN on tasks that involve relational reasoning.",
"We can achieve this performance by also reducing the computation times of the RN considerably.",
"Consequently, we hope that this contribution may allow applying RNs to larger problems.",
"Model Our model is based on the Memory Network architecture.",
"Unlike MemNN we have included a reasoning module that helps the network to solve more complex tasks.",
"The proposed model consists of three main modules: An input module, an at-tentional controller, and a reasoning module.",
"The model processes the input information in multiple passes or hops.",
"At each pass the output of the previous hop can condition the current pass, allowing some incremental refinement.",
"Input module: The input module converts the perceptual information into an internal feature representation.",
"The input information can be processed in chunks, and each chunk is saved into a short-term storage.",
"The definition of what is a chunk of information depends on each task.",
"For instance, for textual question answering, we define each chunk as a sentence.",
"Other options might be n-grams or full documents.",
"This short-term storage can only be accessed during the hop.",
"Attentional Controller: The attentional controller decides in which parts of the short-term storage the model should focus.",
"The attended memories are kept during all the hops in a working memory buffer.",
"The attentional controller is conditioned by the task at hand, for instance, in question answering the question can condition the attention.",
"Also, it may be conditioned by the output of previous hops, allowing the model to change its focus to new portions of the memory over time.",
"Many models compute the attention for each memory using a compatibility function between the memory and the question.",
"Then, the output is calculated as the weighted sum of the memory values, using the attention as weight.",
"A simple way to compute the attention for each memory is to use dot-product attention.",
"This kind of mechanism is used in the original Memory Network and computes the attention value as the dot product between each memory and the question.",
"Although this kind of attention is simple, it may not be enough for more complex tasks.",
"Also, since there are no learned weights in the attention mechanism, the attention relies entirely on the learned embeddings.",
"That is something that we want to avoid in order to separate the learning of the input and attention module.",
"One way to allow learning in the dot-product attention is to project the memories and query vectors linearly.",
"That is done by multiplying each vector by a learned projection matrix (or equivalently a feed-forward neural network).",
"In this way, we can set apart the attention and input embeddings learning, and also allow more complex patterns of attention.",
"Reasoning Module: The memories stored in the working memory buffer are passed to the rea-soning module.",
"The choice of reasoning mechanism is left open and may depend on the task at hand.",
"In this work, we use a Relation Network as the reasoning module.",
"The RN takes the attended memories in pairs to infer relations among the memories.",
"That can be useful, for example, in tasks that include comparisons.",
"A detailed description of the full model is shown in Figure 1 .",
"W-MemN2N for Textual Question Answering We proceed to describe an implementation of the model for textual question answering.",
"In textual question answering the input consists of a set of sentences or facts, a question, and an answer.",
"The goal is to answer the question correctly based on the given facts.",
"Let (s, q, a) represents an input sample, consisting of a set of sentences s = {x i } L i=1 , a query q and an answer a.",
"Each sentence contains M words, {w i } M i=1 , where each word is represented as a onehot vector of length |V |, being |V | the vocabulary size.",
"The question contains Q words, represented as in the input sentences.",
"Input Module Each word in each sentence is encoded into a vector representation v i using an embedding matrix W ∈ R |V |×d , where d is the embedding size.",
"Then, the sentence is converted into a memory vector m i using the final output of a gated recurrent neural network (GRU) (Chung et al., 2014) : m i = GRU([v 1 , v 2 , ..., v M ]) Each memory {m i } L i=1 , where m i ∈ R d , is stored into the short-term memory storage.",
"The question is encoded into a vector u in a similar way, using the output of a gated recurrent network.",
"Attentional Controller Our attention module is based on the Multi-Head attention mechanism proposed in Vaswani et al.",
"(2017) .",
"First, the memories are projected using a projection matrix W m ∈ R d×d , as m i = W m m i .",
"Then, the similarity between the projected memory and the question is computed using the Scaled Dot-Product attention: α i = Softmax u T m i √ d (1) = exp((u T m i )/ √ d) j exp((u T m j )/ √ d) .",
"(2) Next, the memories are combined using the attention weights α i , obtaining an output vector h = j α j m j .",
"In the Multi-Head mechanism, the memories are projected S times using different projection matrices {W s m } S s=1 .",
"For each group of projected memories, an output vector {h i } S i=1 is obtained using the Scaled Dot-Product attention (eq.",
"2).",
"Finally, all vector outputs are concatenated and projected again using a different matrix: o k = [h 1 ; h 2 ; ...; h S ]W o , where ; is the concatenation operator and W o ∈ R Sd×d .",
"The o k vector is the final response vector for the hop k. This vector is stored in the working memory buffer.",
"The attention procedure can be repeated many times (or hops).",
"At each hop, the attention can be conditioned on the previous hop by replacing the question vector u by the output of the previous hop.",
"To do that we pass the output through a simple neural network f t .",
"Then, we use the output of the network as the new conditioner: o n k = f t (o k ).",
"(3) This network allows some learning in the transition patterns between hops.",
"We found Multi-Head attention to be very useful in the joint bAbI task.",
"This can be a product of the intrinsic multi-task nature of the bAbI dataset.",
"A possibility is that each attention head is being adapted for different groups of related tasks.",
"However, we did not investigate this further.",
"Also, note that while in this section we use the same set of memories at each hop, this is not necessary.",
"For larger sequences each hop can operate in different parts of the input sequence, allowing the processing of the input in various steps.",
"Reasoning Module The outputs stored in the working memory buffer are passed to the reasoning module.",
"The reasoning module used in this work is a Relation Network (RN).",
"In the RN the output vectors are concatenated in pairs together with the question vector.",
"Each pair is passed through a neural network g θ and all the outputs of the network are added to produce a single vector.",
"Then, the sum is passed to a final neural network f φ : r = f φ i,j g θ ([o i ; o j ; u]) , (4) The output of the Relation Network is then passed through a final weight matrix and a softmax to produce the predicted answer: a = Softmax(V r), (5) where V ∈ R |A|×d φ , |A| is the number of possible answers and d φ is the dimension of the output of f φ .",
"The full network is trained end-to-end using standard cross-entropy betweenâ and the true label a.",
"3 Related Work Memory Augmented Neural Networks During the last years, there has been plenty of work on achieving complex reasoning with deep neural networks.",
"An important part of these developments has used some kind of explicit memory and attention mechanisms.",
"One of the earliest recent work is that of Memory Networks (Weston et al., 2014) .",
"Memory Networks work by building an addressable memory from the inputs and then accessing those memories in a series of reading operations.",
"Another, similar, line of work is the one of Neural Turing Machines.",
"They were proposed in Graves et al.",
"(2014) and are the basis for recent neural architectures including the Differentiable Neural Computer (DNC) and the Sparse Access Memory (SAM) Rae et al., 2016) .",
"The NTM model also uses a content addressable memory, as in the Memory Network, but adds a write operation that allows updating the memory over time.",
"The management of the memory, however, is different from the one of the MemNN.",
"While the MemNN model pre-load the memories using all the inputs, the NTM writes and read the memory one input at a time.",
"An additional model that makes use of explicit external memory is the Dynamic Memory Network (DMN) (Kumar et al., 2016; Xiong et al., 2016) .",
"The model shares some similarities with the Memory Network model.",
"However, unlike the MemNN model, it operates in the input sequentially (as in the NTM model).",
"The model defines an Episodic Memory module that makes use of a Gated Recurrent Neural Network (GRU) to store and update an internal state that represents the episodic storage.",
"Memory Networks Since our model is based on the MemNN architecture, we proceed to describe it in more detail.",
"The Memory Network model was introduced in Weston et al.",
"(2014) .",
"In that work, the authors proposed a model composed of four components: The input feature map that converts the input into an internal vector representation, the generalization module that updates the memories given the input, the output feature map that produces a new output using the stored memories, and the response module that produces the final answer.",
"The model, as initially proposed, needed some strong supervision that explicitly tells the model which memories to attend.",
"In order to solve that limitation, the End-To-End Memory Network (MemN2N) was proposed in Sukhbaatar et al.",
"(2015) .",
"The model replaced the hard-attention mechanism used in the original MemNN by a softattention mechanism that allowed to train it endto-end without strong supervision.",
"In our model, we use a component-based approach, as in the original MemNN architecture.",
"However, there are some differences: First, our model makes use of two external storages: a short-term storage, and a working memory buffer.",
"The first is equivalent to the one updated by the input and generalization module of the MemNN.",
"The working memory buffer, on the other hand, does not have a counterpart in the original model.",
"Second, our model replaces the response module by a reasoning module.",
"Unlike the original MemNN, our reasoning module is intended to make more complex work than the response module, that was only designed to produce a final answer.",
"Relation Networks The ability to infer and learn relations between entities is fundamental to solve many complex reasoning problems.",
"Recently, a number of neural network models have been proposed for this task.",
"These include Interaction Networks, Graph Neural Networks, and Relation Networks (Battaglia et al., 2016; Scarselli et al., 2009; Santoro et al., 2017) .",
"In specific, Relation Networks (RNs) have shown excellent results in solving textual and visual question answering tasks requiring relational reasoning.",
"The model is relatively simple: First, all the inputs are grouped in pairs and each pair is passed through a neural network.",
"Then, the outputs of the first network are added, and another neural network processes the final vector.",
"The role of the first network is to infer relations among each pair of objects.",
"In Palm et al.",
"(2017) the authors propose a recurrent extension to the RN.",
"By allowing multiple steps of relational reasoning, the model can learn to solve more complex tasks.",
"The main issue with the RN architecture is that its scale very poorly for larger problems.",
"That is because it operates on O(n 2 ) pairs, where n is the number of input objects (for instance, sentences in the case of textual question answering).",
"This becomes quickly prohibitive for tasks involving many input objects.",
"Cognitive Science The concept of working memory has been extensively developed in cognitive psychology.",
"It consists of a limited capacity system that allows temporary storage and manipulation of information and is crucial to any reasoning task.",
"One of the most influential models of working memory is the multi-component model of working memory proposed by Baddeley and Hitch (1974) .",
"This model is composed both of a supervisory attentional controller (the central executive) and two short-term storage systems: The phonological loop, capable of holding speech-based information, and the visuospatial sketchpad, concerned with visual storage.",
"The central executive plays various functions, including the capacity to focus attention, to divide attention and to control access to long-term memory.",
"Later modifications to the model (Baddeley, 2000) include an episodic buffer that is capable of integrating and holding information from different sources.",
"Connections of the working memory model to memory augmented neural networks have been already studied in Graves et al.",
"(2014) .",
"We follow this effort and subdivide our model into components that resemble (in a basic way) the multi-component model of working memory.",
"Note, however, that we use the term working memory buffer instead of episodic buffer.",
"That is because the episodic buffer has an integration function that our model does not cover.",
"However, that can be an interesting source of inspiration for next versions of the model that integrate both visual and textual information for question answering.",
"Experiments Textual Question Answering To evaluate our model on textual question answering we used the Facebook bAbI-10k dataset .",
"The bAbI dataset is a textual LSTM Sukhbaatar et al.",
"(2015) .",
"Results for SDNC are took from Rae et al.",
"(2016) .",
"WMN † is an ensemble of two Working Memory Networks.",
"MN-S MN SDNC WMN WMN QA benchmark composed of 20 different tasks.",
"Each task is designed to test a different reasoning skill, such as deduction, induction, and coreference resolution.",
"Some of the tasks need relational reasoning, for instance, to compare the size of different entities.",
"Each sample is composed of a question, an answer, and a set of facts.",
"There are two versions of the dataset, referring to different dataset sizes: bAbI-1k and bAbI-10k.",
"In this work, we focus on the bAbI-10k version of the dataset which consists of 10, 000 training samples per task.",
"A task is considered solved if a model achieves greater than 95% accuracy.",
"Note that training can be done per-task or joint (by training the model on all tasks at the same time).",
"Some models (Liu and Perez, 2017) have focused in the per-task training performance, including the Ent-Net model (Henaff et al., 2016) that solves all the tasks in the per-task training version.",
"We choose to focus on the joint training version since we think is more indicative of the generalization properties of the model.",
"A detailed analysis of the dataset can be found in Lee et al.",
"(2015) .",
"Model Details To encode the input facts we used a word embedding that projected each word in a sentence into a real vector of size d. We defined d = 30 and used a GRU with 30 units to process each sentence.",
"We used the 30 sentences in the support set that were immediately prior to the question.",
"The question was processed using the same configuration but with a different GRU.",
"We used 8 heads in the Multi-Head attention mechanism.",
"For the transition networks f t , which operates in the output of each hop, we used a two-layer MLP consisting of 15 and 30 hidden units (so the output preserves the memory dimension).",
"We used H = 4 hops (or equivalently, a working memory buffer of size 4).",
"In the reasoning module, we used a 3layer MLP consisting of 128 units in each layer and with ReLU non-linearities for g θ .",
"We omitted the f φ network since we did not observe improvements when using it.",
"The final layer was a linear layer that produced logits for a softmax over the answer vocabulary.",
"Training Details We trained our model end-to-end with a crossentropy loss function and using the Adam optimizer (Kingma and Ba, 2014).",
"We used a learning rate of ν = 1e −3 .",
"We trained the model during 400 epochs.",
"For training, we used a batch size of 32.",
"As in Sukhbaatar et al.",
"(2015) we did not average the loss over a batch.",
"Also, we clipped gradients with norm larger than 40 (Pascanu et al., 2013) .",
"For all the dense layers we used 2 regularization with value 1e −3 .",
"All weights were initialized using Glorot normal initialization (Glorot and Bengio, 2010) .",
"10% of the training set was heldout to form a validation set that we used to select the architecture and for hyperparameter tunning.",
"In some cases, we found useful to restart training after the 400 epochs with a smaller learning rate of 1e −5 and anneals every 5 epochs by ν/2 until 20 epochs were reached.",
"bAbI-10k Results On the jointly trained bAbI-10k dataset our best model (out of 10 runs) achieves an accuracy of 99.58%.",
"That is a 2.38% improvement over the previous state-of-the-art that was obtained by the Sparse Differential Neural Computer (SDNC) (Rae et al., 2016) .",
"The best model of the 10 runs solves almost all tasks of the bAbI-10k dataset (by a 0.3% margin).",
"However, a simple ensemble of the best two models solves all 20 tasks and achieves an almost perfect accuracy of 99.7%.",
"We list the results for each task in Table 1 .",
"Other authors have reported high variance in the results, for instance, the authors of the SDNC report a mean accuracy and standard deviation over 15 runs of 93.6 ± 2.5 (with 15.9 ± 1.6 passed tasks).",
"In contrast, our model achieves a mean accuracy of 98.3 ± 1.2 (with 18.6 ± 0.4 passed tasks), which is better and more stable than the average results obtained by the SDNC.",
"The Relation Network solves 18/20 tasks.",
"We achieve even better performance, and with considerably fewer computations, as is explained in Section 4.3.",
"We think that by including the attention mechanism, the relation reasoning module can focus on learning the relation among relevant objects, instead of learning spurious relations among irrelevant objects.",
"For that, the Multi-Head attention mechanism was very helpful.",
"The Effect of the Relational Reasoning Module When compared to the original Memory Network, our model substantially improves the accuracy of tasks 17 (positional reasoning) and 19 (path finding).",
"Both tasks require the analysis of multiple relations (Lee et al., 2015) .",
"For instance, the task 19 needs that the model reasons about the relation of different positions of the entities, and in that way find a path to arrive from one to another.",
"The accuracy improves in 75.1% for task 19 and in 41.5% for task 17 when compared with the MemN2N model.",
"Since both tasks require reasoning about relations, we hypothesize that the relational reasoning module of the W-MemNN was of great help to improve the performance on both tasks.",
"The Relation Network, on the other hand, fails in the tasks 2 (2 supporting facts) and 3 (3 supporting facts).",
"Both tasks require handling a significant number of facts, especially in task 3.",
"In those cases, the attention mechanism is crucial to filter out irrelevant facts.",
"Visual Question Answering To further study our model we evaluated its performance on a visual question answering dataset.",
"For that, we used the recently proposed NLVR dataset (Suhr et al., 2017) .",
"Each sample in the NLVR dataset is composed of an image with three sub-images and a statement.",
"The task consists in judging if the statement is true or false for that image.",
"Evaluating the statement requires reasoning about the sets of objects in the image, comparing objects properties, and reasoning about spatial relations.",
"The dataset is interesting for us for two reasons.",
"First, the statements evaluation requires complex relational reasoning about the objects in the image.",
"Second, unlike the bAbI dataset, the statements are written in natural language.",
"Because of that, each statement displays a range of syntactic and semantic phenomena that are not present in the bAbI dataset.",
"Model details Our model can be easily adapted to deal with visual information.",
"Following the idea from Santoro et al.",
"(2017) , instead of processing each input using a recurrent neural network, we use a Convolutional Neural Network (CNN).",
"The CNN takes as input each sub-image and convolved them through convolutional layers.",
"The output of the CNN consists of k feature maps (where k is the number of kernels in the final convolutional layer) of size d × d. Then, each memory is built from the vector composed by the concatenation of the cells in the same position of each feature map.",
"Consequently, d × d memories of size k are stored in the shortterm storage.",
"The statement is processed using a GRU neural network as in the textual reasoning task.",
"Then, we can proceed using the same architecture for the reasoning and attention module that the one used in the textual QA model.",
"However, for the visual QA task, we used an additive attention mechanism.",
"The additive attention computes the attention weight using a feed-forward neural network applied to the concatenation of the memory vector and statement vector.",
"Results Our model achieves a validation / test accuracy of 65.6%/65.8%.",
"Notably, we achieved a performance comparable to the results of the Module Neural Networks (Andreas et al., 2016 ) that make use of standard NLP tools to process the statements into structured representations.",
"Unlike the Module Neural Networks, we achieved our results using only raw input statements, allowing the model to learn how to process the textual input by itself.",
"Note that given the more complex nature of the language used in the NLVR dataset we needed to use a larger embedding size and GRU hidden layer than in the bAbI dataset (100 and 128 respectively).",
"That, however, is a nice feature of separating the input from the reasoning and attention component: One way to process more complex language statements is increasing the capacity of the input module.",
"From O(n 2 ) to O(n) One of the major limitations of RNs is that they need to process each one of the memories in pairs.",
"To do that, the RN must perform O(n 2 ) forward and backward passes (where n is the number of memories).",
"That becomes quickly prohibitive for a larger number of memories.",
"In contrast, the dependence of the W-MemNN run times on the number of memories is linear.",
"Note, however, that computation times in the W-MemNN depend quadratically on the size of the working memory buffer.",
"Nonetheless, this number is expected to be much smaller than the number of memories.",
"To compare both models we measured the wall-clock time for a forward and backward pass for a single batch of size 32.",
"We performed these experiments on a GPU NVIDIA K80.",
"Figure 2 shows the results.",
"Memory Visualizations One nice feature from Memory Networks is that they allow some interpretability of the reasoning procedure by looking at the attention weights.",
"At each hop, the attention weights show which parts of the memory the model found relevant to produce the output.",
"RNs, on the contrary, lack of this feature.",
"Table 2 shows the attention values for visual and textual question answering.",
"Relation Network W-MemNN Figure 2 : Wall-clock times for a forward and backward pass for a single batch.",
"The batch size used is 32.",
"While for 5 memories the times are comparable, for 30 memories the W-MemNN takes around 50s while the RN takes 930s, a speedup of almost 20×.",
"Conclusion We have proposed a novel Working Memory Network architecture that introduces improved reasoning abilities to the original MemNN model.",
"We demonstrated that by augmenting the MemNN architecture with a Relation Network, the computational complexity of the RN can be reduced, without loss of performance.",
"That opens the opportunity for using RNs in larger problems, something that may be very useful, given the many tasks requiring a significant amount of memories.",
"Although we have used RN as the reasoning module in this work, other options can be tested.",
"It might be interesting to analyze how other reasoning modules can improve different weaknesses of the model.",
"We presented results on the jointly trained bAbI-10k dataset, where we achieve a new state-of-theart, with an average error of less than 0.5%.",
"Also, we showed that our model can be easily adapted for visual question answering.",
"Our architecture combines perceptual input processing, short-term memory storage, an attention mechanism, and a reasoning module.",
"While other models have focused on different parts of these components, we think that is important to find ways to combine these different mechanisms if we want to build models capable of complex reasoning.",
"Evidence from cognitive sciences seems to show that all these abilities are needed in order to achieve human-level complex reasoning."
]
} | GEM-SciDuet-train-20#paper-1018#slide-2 | Memory Augmented Neural Networks | Process a set of inputs and store them in memory. Then, at each hop, an important part of the memory is retrieved and used to retrieve more memories. Finally, the last retrieved memory is used to compute the answer.
01: Daniel went to the bathroom. 02: Sandra journeyed to the office. 03: Mary got the football there. 04: Mary travelled to the garden.
q: Where is the football? u
Hop 1: α_i = Softmax(u^T m_i), o_1 = Σ_i α_i m_i
Hop 2: o_2 = Σ_i α_i m_i, used to compute the answer.
The attention mechanism is simple
The attention mechanism relies on embeddings.
It may be nice to separate embedding learning from attention learning (modularization, reusability).
The answer computation is too simple; it only uses one retrieved memory. Hard to see how it can produce more complex reasoning based on memories.
01: Daniel went to the bathroom. 02: Sandra journeyed to the office. 03: Mary got the football there. 04: Mary travelled to the garden
Daniel went to the bathroom. Sandra journeyed to the office. Mary got the football there. Mary travelled to the garden
q:Where is the football. u Hop
i Softmax(uTmi) o1 imi
to compute the answer. o2 imi
The attention mechanism is simple
The attention mechanism relies on embeddings.
It may be nice to separate embedding learning from attention learning (modularization, reusability).
The answer computation is too simple, it only uses one retrieved memory. Hard to see how can produce more complex reasoning based on memories. | [] |
GEM-SciDuet-train-20#paper-1018#slide-3 | 1018 | Working Memory Networks: Augmenting Memory Networks with a Relational Reasoning Module | During the last years, there has been a lot of interest in achieving some kind of complex reasoning using deep neural networks. To do that, models like Memory Networks (MemNNs) have combined external memory storages and attention mechanisms. These architectures, however, lack of more complex reasoning mechanisms that could allow, for instance, relational reasoning. Relation Networks (RNs), on the other hand, have shown outstanding results in relational reasoning tasks. Unfortunately, their computational cost grows quadratically with the number of memories, something prohibitive for larger problems. To solve these issues, we introduce the Working Memory Network, a MemNN architecture with a novel working memory storage and reasoning module. Our model retains the relational reasoning abilities of the RN while reducing its computational complexity from quadratic to linear. We tested our model on the text QA dataset bAbI and the visual QA dataset NLVR. In the jointly trained bAbI-10k, we set a new state-of-the-art, achieving a mean error of less than 0.5%. Moreover, a simple ensemble of two of our models solves all 20 tasks in the joint version of the benchmark. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260
],
"paper_content_text": [
"Introduction A central ability needed to solve daily tasks is complex reasoning.",
"It involves the capacity to comprehend and represent the environment, retain information from past experiences, and solve problems based on the stored information.",
"Our ability to solve those problems is supported by multiple specialized components, including shortterm memory storage, long-term semantic and procedural memory, and an executive controller that, among others, controls the attention over memories (Baddeley, 1992) .",
"Many promising advances for achieving complex reasoning with neural networks have been obtained during the last years.",
"Unlike symbolic approaches to complex reasoning, deep neural networks can learn representations from perceptual information.",
"Because of that, they do not suffer from the symbol grounding problem (Harnad, 1999) , and can generalize better than classical symbolic approaches.",
"Most of these neural network models make use of an explicit memory storage and an attention mechanism.",
"For instance, Memory Networks (MemNN), Dynamic Memory Networks (DMN) or Neural Turing Machines (NTM) (Weston et al., 2014; Kumar et al., 2016; Graves et al., 2014) build explicit memories from the perceptual inputs and access these memories using learned attention mechanisms.",
"After that some memories have been attended, using a multi-step procedure, the attended memories are combined and passed through a simple output layer that produces a final answer.",
"While this allows some multi-step inferential process, these networks lack a more complex reasoning mechanism, needed for more elaborated tasks such as inferring relations among entities (relational reasoning).",
"On the contrary, Relation Networks (RNs), proposed in Santoro et al.",
"(2017) , have shown outstanding performance in relational reasoning tasks.",
"Nonetheless, a major drawback of RNs is that they consider each of the input objects in pairs, having to process a quadratic number of relations.",
"That limits the usability of the model on large problems and makes forward and backward computations quite expensive.",
"To solve these problems we propose a novel Memory Network Figure 1 : The W-MemNN model applied to textual question answering.",
"Each input fact is processed using a GRU, and the output representation is stored in the short-term memory storage.",
"Then, the attentional controller computes an output vector that summarizes relevant parts of the memories.",
"This process is repeated H hops (a dotted line delimits each hop), and each output is stored in the working memory buffer.",
"Finally, the output of each hop is passed to the reasoning module that produces the final output.",
"architecture called the Working Memory Network (W-MemNN).",
"Our model augments the original MemNN with a relational reasoning module and a new working memory buffer.",
"The attention mechanism of the Memory Network allows the filtering of irrelevant inputs, reducing a lot of the computational complexity while keeping the relational reasoning capabilities of the RN.",
"Three main components compose the W-MemNN: An input module that converts the perceptual inputs into an internal vector representation and save these representations into a short-term storage, an attentional controller that attend to these internal representations and update a working memory buffer, and a reasoning module that operates on the set of objects stored in the working memory buffer in order to produce a final answer.",
"This component-based architecture is inspired by the well-known model from cognitive sciences called the multi-component working memory model, proposed in Baddeley and Hitch (1974) .",
"We studied the proposed model on the text-based QA benchmark bAbI which consists of 20 different toy tasks that measure different reasoning skills.",
"While models such as Ent-Net (Henaff et al., 2016) have focused on the pertask training version of the benchmark (where a different model is trained for each task), we decided to focus on the jointly trained version of the task, where the model is trained on all tasks simultaneously.",
"In the jointly trained bAbI-10k benchmark we achieved state-of-the-art performance, improving the previous state-of-the-art on more than 2%.",
"Moreover, a simple ensemble of two of our models can solve all 20 tasks simultaneously.",
"Also, we tested our model on the visual QA dataset NLVR.",
"In that dataset, we obtained performance at the level of the Module Neural Networks (Andreas et al., 2016) .",
"Our model, however, achieves these results using the raw input statements, without the extra text processing used in the Module Networks.",
"Finally, qualitative and quantitative analysis shows that the inclusion of the Relational Reasoning module is crucial to improving the performance of the MemNN on tasks that involve relational reasoning.",
"We can achieve this performance by also reducing the computation times of the RN considerably.",
"Consequently, we hope that this contribution may allow applying RNs to larger problems.",
"Model Our model is based on the Memory Network architecture.",
"Unlike MemNN we have included a reasoning module that helps the network to solve more complex tasks.",
"The proposed model consists of three main modules: An input module, an at-tentional controller, and a reasoning module.",
"The model processes the input information in multiple passes or hops.",
"At each pass the output of the previous hop can condition the current pass, allowing some incremental refinement.",
"Input module: The input module converts the perceptual information into an internal feature representation.",
"The input information can be processed in chunks, and each chunk is saved into a short-term storage.",
"The definition of what is a chunk of information depends on each task.",
"For instance, for textual question answering, we define each chunk as a sentence.",
"Other options might be n-grams or full documents.",
"This short-term storage can only be accessed during the hop.",
"Attentional Controller: The attentional controller decides in which parts of the short-term storage the model should focus.",
"The attended memories are kept during all the hops in a working memory buffer.",
"The attentional controller is conditioned by the task at hand, for instance, in question answering the question can condition the attention.",
"Also, it may be conditioned by the output of previous hops, allowing the model to change its focus to new portions of the memory over time.",
"Many models compute the attention for each memory using a compatibility function between the memory and the question.",
"Then, the output is calculated as the weighted sum of the memory values, using the attention as weight.",
"A simple way to compute the attention for each memory is to use dot-product attention.",
"This kind of mechanism is used in the original Memory Network and computes the attention value as the dot product between each memory and the question.",
"Although this kind of attention is simple, it may not be enough for more complex tasks.",
"Also, since there are no learned weights in the attention mechanism, the attention relies entirely on the learned embeddings.",
"That is something that we want to avoid in order to separate the learning of the input and attention module.",
"One way to allow learning in the dot-product attention is to project the memories and query vectors linearly.",
"That is done by multiplying each vector by a learned projection matrix (or equivalently a feed-forward neural network).",
"In this way, we can set apart the attention and input embeddings learning, and also allow more complex patterns of attention.",
"Reasoning Module: The memories stored in the working memory buffer are passed to the rea-soning module.",
"The choice of reasoning mechanism is left open and may depend on the task at hand.",
"In this work, we use a Relation Network as the reasoning module.",
"The RN takes the attended memories in pairs to infer relations among the memories.",
"That can be useful, for example, in tasks that include comparisons.",
"A detailed description of the full model is shown in Figure 1 .",
"W-MemN2N for Textual Question Answering We proceed to describe an implementation of the model for textual question answering.",
"In textual question answering the input consists of a set of sentences or facts, a question, and an answer.",
"The goal is to answer the question correctly based on the given facts.",
"Let (s, q, a) represents an input sample, consisting of a set of sentences s = {x i } L i=1 , a query q and an answer a.",
"Each sentence contains M words, {w i } M i=1 , where each word is represented as a onehot vector of length |V |, being |V | the vocabulary size.",
"The question contains Q words, represented as in the input sentences.",
"Input Module Each word in each sentence is encoded into a vector representation v i using an embedding matrix W ∈ R |V |×d , where d is the embedding size.",
"Then, the sentence is converted into a memory vector m i using the final output of a gated recurrent neural network (GRU) (Chung et al., 2014) : m i = GRU([v 1 , v 2 , ..., v M ]) Each memory {m i } L i=1 , where m i ∈ R d , is stored into the short-term memory storage.",
"The question is encoded into a vector u in a similar way, using the output of a gated recurrent network.",
"Attentional Controller Our attention module is based on the Multi-Head attention mechanism proposed in Vaswani et al.",
"(2017) .",
"First, the memories are projected using a projection matrix W m ∈ R d×d , as m i = W m m i .",
"Then, the similarity between the projected memory and the question is computed using the Scaled Dot-Product attention: α i = Softmax u T m i √ d (1) = exp((u T m i )/ √ d) j exp((u T m j )/ √ d) .",
"(2) Next, the memories are combined using the attention weights α i , obtaining an output vector h = j α j m j .",
"In the Multi-Head mechanism, the memories are projected S times using different projection matrices {W s m } S s=1 .",
"For each group of projected memories, an output vector {h i } S i=1 is obtained using the Scaled Dot-Product attention (eq.",
"2).",
"Finally, all vector outputs are concatenated and projected again using a different matrix: o k = [h 1 ; h 2 ; ...; h S ]W o , where ; is the concatenation operator and W o ∈ R Sd×d .",
"The o k vector is the final response vector for the hop k. This vector is stored in the working memory buffer.",
"The attention procedure can be repeated many times (or hops).",
"At each hop, the attention can be conditioned on the previous hop by replacing the question vector u by the output of the previous hop.",
"To do that we pass the output through a simple neural network f t .",
"Then, we use the output of the network as the new conditioner: o n k = f t (o k ).",
"(3) This network allows some learning in the transition patterns between hops.",
"We found Multi-Head attention to be very useful in the joint bAbI task.",
"This can be a product of the intrinsic multi-task nature of the bAbI dataset.",
"A possibility is that each attention head is being adapted for different groups of related tasks.",
"However, we did not investigate this further.",
"Also, note that while in this section we use the same set of memories at each hop, this is not necessary.",
"For larger sequences each hop can operate in different parts of the input sequence, allowing the processing of the input in various steps.",
"Reasoning Module The outputs stored in the working memory buffer are passed to the reasoning module.",
"The reasoning module used in this work is a Relation Network (RN).",
"In the RN the output vectors are concatenated in pairs together with the question vector.",
"Each pair is passed through a neural network g θ and all the outputs of the network are added to produce a single vector.",
"Then, the sum is passed to a final neural network f φ : r = f φ i,j g θ ([o i ; o j ; u]) , (4) The output of the Relation Network is then passed through a final weight matrix and a softmax to produce the predicted answer: a = Softmax(V r), (5) where V ∈ R |A|×d φ , |A| is the number of possible answers and d φ is the dimension of the output of f φ .",
"The full network is trained end-to-end using standard cross-entropy betweenâ and the true label a.",
"3 Related Work Memory Augmented Neural Networks During the last years, there has been plenty of work on achieving complex reasoning with deep neural networks.",
"An important part of these developments has used some kind of explicit memory and attention mechanisms.",
"One of the earliest recent work is that of Memory Networks (Weston et al., 2014) .",
"Memory Networks work by building an addressable memory from the inputs and then accessing those memories in a series of reading operations.",
"Another, similar, line of work is the one of Neural Turing Machines.",
"They were proposed in Graves et al.",
"(2014) and are the basis for recent neural architectures including the Differentiable Neural Computer (DNC) and the Sparse Access Memory (SAM) Rae et al., 2016) .",
"The NTM model also uses a content addressable memory, as in the Memory Network, but adds a write operation that allows updating the memory over time.",
"The management of the memory, however, is different from the one of the MemNN.",
"While the MemNN model pre-load the memories using all the inputs, the NTM writes and read the memory one input at a time.",
"An additional model that makes use of explicit external memory is the Dynamic Memory Network (DMN) (Kumar et al., 2016; Xiong et al., 2016) .",
"The model shares some similarities with the Memory Network model.",
"However, unlike the MemNN model, it operates in the input sequentially (as in the NTM model).",
"The model defines an Episodic Memory module that makes use of a Gated Recurrent Neural Network (GRU) to store and update an internal state that represents the episodic storage.",
"Memory Networks Since our model is based on the MemNN architecture, we proceed to describe it in more detail.",
"The Memory Network model was introduced in Weston et al.",
"(2014) .",
"In that work, the authors proposed a model composed of four components: The input feature map that converts the input into an internal vector representation, the generalization module that updates the memories given the input, the output feature map that produces a new output using the stored memories, and the response module that produces the final answer.",
"The model, as initially proposed, needed some strong supervision that explicitly tells the model which memories to attend.",
"In order to solve that limitation, the End-To-End Memory Network (MemN2N) was proposed in Sukhbaatar et al.",
"(2015) .",
"The model replaced the hard-attention mechanism used in the original MemNN by a softattention mechanism that allowed to train it endto-end without strong supervision.",
"In our model, we use a component-based approach, as in the original MemNN architecture.",
"However, there are some differences: First, our model makes use of two external storages: a short-term storage, and a working memory buffer.",
"The first is equivalent to the one updated by the input and generalization module of the MemNN.",
"The working memory buffer, on the other hand, does not have a counterpart in the original model.",
"Second, our model replaces the response module by a reasoning module.",
"Unlike the original MemNN, our reasoning module is intended to make more complex work than the response module, that was only designed to produce a final answer.",
"Relation Networks The ability to infer and learn relations between entities is fundamental to solve many complex reasoning problems.",
"Recently, a number of neural network models have been proposed for this task.",
"These include Interaction Networks, Graph Neural Networks, and Relation Networks (Battaglia et al., 2016; Scarselli et al., 2009; Santoro et al., 2017) .",
"In specific, Relation Networks (RNs) have shown excellent results in solving textual and visual question answering tasks requiring relational reasoning.",
"The model is relatively simple: First, all the inputs are grouped in pairs and each pair is passed through a neural network.",
"Then, the outputs of the first network are added, and another neural network processes the final vector.",
"The role of the first network is to infer relations among each pair of objects.",
"In Palm et al.",
"(2017) the authors propose a recurrent extension to the RN.",
"By allowing multiple steps of relational reasoning, the model can learn to solve more complex tasks.",
"The main issue with the RN architecture is that its scale very poorly for larger problems.",
"That is because it operates on O(n 2 ) pairs, where n is the number of input objects (for instance, sentences in the case of textual question answering).",
"This becomes quickly prohibitive for tasks involving many input objects.",
"Cognitive Science The concept of working memory has been extensively developed in cognitive psychology.",
"It consists of a limited capacity system that allows temporary storage and manipulation of information and is crucial to any reasoning task.",
"One of the most influential models of working memory is the multi-component model of working memory proposed by Baddeley and Hitch (1974) .",
"This model is composed both of a supervisory attentional controller (the central executive) and two short-term storage systems: The phonological loop, capable of holding speech-based information, and the visuospatial sketchpad, concerned with visual storage.",
"The central executive plays various functions, including the capacity to focus attention, to divide attention and to control access to long-term memory.",
"Later modifications to the model (Baddeley, 2000) include an episodic buffer that is capable of integrating and holding information from different sources.",
"Connections of the working memory model to memory augmented neural networks have been already studied in Graves et al.",
"(2014) .",
"We follow this effort and subdivide our model into components that resemble (in a basic way) the multi-component model of working memory.",
"Note, however, that we use the term working memory buffer instead of episodic buffer.",
"That is because the episodic buffer has an integration function that our model does not cover.",
"However, that can be an interesting source of inspiration for next versions of the model that integrate both visual and textual information for question answering.",
"Experiments Textual Question Answering To evaluate our model on textual question answering we used the Facebook bAbI-10k dataset .",
"The bAbI dataset is a textual LSTM Sukhbaatar et al.",
"(2015) .",
"Results for SDNC are took from Rae et al.",
"(2016) .",
"WMN † is an ensemble of two Working Memory Networks.",
"MN-S MN SDNC WMN WMN QA benchmark composed of 20 different tasks.",
"Each task is designed to test a different reasoning skill, such as deduction, induction, and coreference resolution.",
"Some of the tasks need relational reasoning, for instance, to compare the size of different entities.",
"Each sample is composed of a question, an answer, and a set of facts.",
"There are two versions of the dataset, referring to different dataset sizes: bAbI-1k and bAbI-10k.",
"In this work, we focus on the bAbI-10k version of the dataset which consists of 10, 000 training samples per task.",
"A task is considered solved if a model achieves greater than 95% accuracy.",
"Note that training can be done per-task or joint (by training the model on all tasks at the same time).",
"Some models (Liu and Perez, 2017) have focused in the per-task training performance, including the Ent-Net model (Henaff et al., 2016) that solves all the tasks in the per-task training version.",
"We choose to focus on the joint training version since we think is more indicative of the generalization properties of the model.",
"A detailed analysis of the dataset can be found in Lee et al.",
"(2015) .",
"Model Details To encode the input facts we used a word embedding that projected each word in a sentence into a real vector of size d. We defined d = 30 and used a GRU with 30 units to process each sentence.",
"We used the 30 sentences in the support set that were immediately prior to the question.",
"The question was processed using the same configuration but with a different GRU.",
"We used 8 heads in the Multi-Head attention mechanism.",
"For the transition networks f t , which operates in the output of each hop, we used a two-layer MLP consisting of 15 and 30 hidden units (so the output preserves the memory dimension).",
"We used H = 4 hops (or equivalently, a working memory buffer of size 4).",
"In the reasoning module, we used a 3layer MLP consisting of 128 units in each layer and with ReLU non-linearities for g θ .",
"We omitted the f φ network since we did not observe improvements when using it.",
"The final layer was a linear layer that produced logits for a softmax over the answer vocabulary.",
"Training Details We trained our model end-to-end with a crossentropy loss function and using the Adam optimizer (Kingma and Ba, 2014).",
"We used a learning rate of ν = 1e −3 .",
"We trained the model during 400 epochs.",
"For training, we used a batch size of 32.",
"As in Sukhbaatar et al.",
"(2015) we did not average the loss over a batch.",
"Also, we clipped gradients with norm larger than 40 (Pascanu et al., 2013) .",
"For all the dense layers we used 2 regularization with value 1e −3 .",
"All weights were initialized using Glorot normal initialization (Glorot and Bengio, 2010) .",
"10% of the training set was heldout to form a validation set that we used to select the architecture and for hyperparameter tunning.",
"In some cases, we found useful to restart training after the 400 epochs with a smaller learning rate of 1e −5 and anneals every 5 epochs by ν/2 until 20 epochs were reached.",
"bAbI-10k Results On the jointly trained bAbI-10k dataset our best model (out of 10 runs) achieves an accuracy of 99.58%.",
"That is a 2.38% improvement over the previous state-of-the-art that was obtained by the Sparse Differential Neural Computer (SDNC) (Rae et al., 2016) .",
"The best model of the 10 runs solves almost all tasks of the bAbI-10k dataset (by a 0.3% margin).",
"However, a simple ensemble of the best two models solves all 20 tasks and achieves an almost perfect accuracy of 99.7%.",
"We list the results for each task in Table 1 .",
"Other authors have reported high variance in the results, for instance, the authors of the SDNC report a mean accuracy and standard deviation over 15 runs of 93.6 ± 2.5 (with 15.9 ± 1.6 passed tasks).",
"In contrast, our model achieves a mean accuracy of 98.3 ± 1.2 (with 18.6 ± 0.4 passed tasks), which is better and more stable than the average results obtained by the SDNC.",
"The Relation Network solves 18/20 tasks.",
"We achieve even better performance, and with considerably fewer computations, as is explained in Section 4.3.",
"We think that by including the attention mechanism, the relation reasoning module can focus on learning the relation among relevant objects, instead of learning spurious relations among irrelevant objects.",
"For that, the Multi-Head attention mechanism was very helpful.",
"The Effect of the Relational Reasoning Module When compared to the original Memory Network, our model substantially improves the accuracy of tasks 17 (positional reasoning) and 19 (path finding).",
"Both tasks require the analysis of multiple relations (Lee et al., 2015) .",
"For instance, the task 19 needs that the model reasons about the relation of different positions of the entities, and in that way find a path to arrive from one to another.",
"The accuracy improves in 75.1% for task 19 and in 41.5% for task 17 when compared with the MemN2N model.",
"Since both tasks require reasoning about relations, we hypothesize that the relational reasoning module of the W-MemNN was of great help to improve the performance on both tasks.",
"The Relation Network, on the other hand, fails in the tasks 2 (2 supporting facts) and 3 (3 supporting facts).",
"Both tasks require handling a significant number of facts, especially in task 3.",
"In those cases, the attention mechanism is crucial to filter out irrelevant facts.",
"Visual Question Answering To further study our model we evaluated its performance on a visual question answering dataset.",
"For that, we used the recently proposed NLVR dataset (Suhr et al., 2017) .",
"Each sample in the NLVR dataset is composed of an image with three sub-images and a statement.",
"The task consists in judging if the statement is true or false for that image.",
"Evaluating the statement requires reasoning about the sets of objects in the image, comparing objects properties, and reasoning about spatial relations.",
"The dataset is interesting for us for two reasons.",
"First, the statements evaluation requires complex relational reasoning about the objects in the image.",
"Second, unlike the bAbI dataset, the statements are written in natural language.",
"Because of that, each statement displays a range of syntactic and semantic phenomena that are not present in the bAbI dataset.",
"Model details Our model can be easily adapted to deal with visual information.",
"Following the idea from Santoro et al.",
"(2017) , instead of processing each input using a recurrent neural network, we use a Convolutional Neural Network (CNN).",
"The CNN takes as input each sub-image and convolved them through convolutional layers.",
"The output of the CNN consists of k feature maps (where k is the number of kernels in the final convolutional layer) of size d × d. Then, each memory is built from the vector composed by the concatenation of the cells in the same position of each feature map.",
"Consequently, d × d memories of size k are stored in the shortterm storage.",
"The statement is processed using a GRU neural network as in the textual reasoning task.",
"Then, we can proceed using the same architecture for the reasoning and attention module that the one used in the textual QA model.",
"However, for the visual QA task, we used an additive attention mechanism.",
"The additive attention computes the attention weight using a feed-forward neural network applied to the concatenation of the memory vector and statement vector.",
"Results Our model achieves a validation / test accuracy of 65.6%/65.8%.",
"Notably, we achieved a performance comparable to the results of the Module Neural Networks (Andreas et al., 2016 ) that make use of standard NLP tools to process the statements into structured representations.",
"Unlike the Module Neural Networks, we achieved our results using only raw input statements, allowing the model to learn how to process the textual input by itself.",
"Note that given the more complex nature of the language used in the NLVR dataset we needed to use a larger embedding size and GRU hidden layer than in the bAbI dataset (100 and 128 respectively).",
"That, however, is a nice feature of separating the input from the reasoning and attention component: One way to process more complex language statements is increasing the capacity of the input module.",
"From O(n 2 ) to O(n) One of the major limitations of RNs is that they need to process each one of the memories in pairs.",
"To do that, the RN must perform O(n 2 ) forward and backward passes (where n is the number of memories).",
"That becomes quickly prohibitive for a larger number of memories.",
"In contrast, the dependence of the W-MemNN run times on the number of memories is linear.",
"Note, however, that computation times in the W-MemNN depend quadratically on the size of the working memory buffer.",
"Nonetheless, this number is expected to be much smaller than the number of memories.",
"To compare both models we measured the wall-clock time for a forward and backward pass for a single batch of size 32.",
"We performed these experiments on a GPU NVIDIA K80.",
"Figure 2 shows the results.",
"Memory Visualizations One nice feature from Memory Networks is that they allow some interpretability of the reasoning procedure by looking at the attention weights.",
"At each hop, the attention weights show which parts of the memory the model found relevant to produce the output.",
"RNs, on the contrary, lack of this feature.",
"Table 2 shows the attention values for visual and textual question answering.",
"Relation Network W-MemNN Figure 2 : Wall-clock times for a forward and backward pass for a single batch.",
"The batch size used is 32.",
"While for 5 memories the times are comparable, for 30 memories the W-MemNN takes around 50s while the RN takes 930s, a speedup of almost 20×.",
"Conclusion We have proposed a novel Working Memory Network architecture that introduces improved reasoning abilities to the original MemNN model.",
"We demonstrated that by augmenting the MemNN architecture with a Relation Network, the computational complexity of the RN can be reduced, without loss of performance.",
"That opens the opportunity for using RNs in larger problems, something that may be very useful, given the many tasks requiring a significant amount of memories.",
"Although we have used RN as the reasoning module in this work, other options can be tested.",
"It might be interesting to analyze how other reasoning modules can improve different weaknesses of the model.",
"We presented results on the jointly trained bAbI-10k dataset, where we achieve a new state-of-theart, with an average error of less than 0.5%.",
"Also, we showed that our model can be easily adapted for visual question answering.",
"Our architecture combines perceptual input processing, short-term memory storage, an attention mechanism, and a reasoning module.",
"While other models have focused on different parts of these components, we think that is important to find ways to combine these different mechanisms if we want to build models capable of complex reasoning.",
"Evidence from cognitive sciences seems to show that all these abilities are needed in order to achieve human-level complex reasoning."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"3.1",
"3.2",
"3.3",
"3.4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Model",
"W-MemN2N for Textual Question Answering",
"Memory Augmented Neural Networks",
"Memory Networks",
"Relation Networks",
"Cognitive Science",
"Textual Question Answering",
"Visual Question Answering",
"From O(n 2 ) to O(n)",
"Memory Visualizations",
"Conclusion"
]
} | GEM-SciDuet-train-20#paper-1018#slide-3 | Relational Neural Networks | Relation Networks (Santoro et al. 2017)
Neural Network with an inductive bias to learn pairwise relations of the input objects and their properties. A type of Graph Neural Network. L(y, ŷ) = −Σ_i y_i ln(ŷ_i)
01: Daniel went to the bathroom.
02: Sandra journeyed to the office.
03: Mary got the football there.
04: Mary travelled to the garden
q: Where is the football? (question embedding u); o_{i,j} = g([m_i; m_j; u])
[diagram: all memory pairs, concatenated with the question, passed through g]
q: Where is the football? (question embedding u); a = f(Σ_{i,j} o_{i,j})
The model needs to process N^2 pairs, where N is the number of memories.
500 memories would require 250k backward and forward computations!
Cannot filter out unuseful objects, which can produce spurious relations. | Relation Networks (Santoro et al. 2017)
Neural Network with an inductive bias to learn pairwise relations of the input objects and their properties. A type of Graph Neural Network. L(y, ŷ) = −Σ_i y_i ln(ŷ_i)
01: Daniel went to the bathroom.
02: Sandra journeyed to the office.
03: Mary got the football there.
04: Mary travelled to the garden
q: Where is the football? (question embedding u); o_{i,j} = g([m_i; m_j; u])
[diagram: all memory pairs, concatenated with the question, passed through g]
q: Where is the football? (question embedding u); a = f(Σ_{i,j} o_{i,j})
The model needs to process N^2 pairs, where N is the number of memories.
500 memories would require 250k backward and forward computations!
Cannot filter out unuseful objects, which can produce spurious relations. | [] |
GEM-SciDuet-train-20#paper-1018#slide-4 | 1018 | Working Memory Networks: Augmenting Memory Networks with a Relational Reasoning Module | During the last years, there has been a lot of interest in achieving some kind of complex reasoning using deep neural networks. To do that, models like Memory Networks (MemNNs) have combined external memory storages and attention mechanisms. These architectures, however, lack more complex reasoning mechanisms that could allow, for instance, relational reasoning. Relation Networks (RNs), on the other hand, have shown outstanding results in relational reasoning tasks. Unfortunately, their computational cost grows quadratically with the number of memories, something prohibitive for larger problems. To solve these issues, we introduce the Working Memory Network, a MemNN architecture with a novel working memory storage and reasoning module. Our model retains the relational reasoning abilities of the RN while reducing its computational complexity from quadratic to linear. We tested our model on the text QA dataset bAbI and the visual QA dataset NLVR. In the jointly trained bAbI-10k, we set a new state-of-the-art, achieving a mean error of less than 0.5%. Moreover, a simple ensemble of two of our models solves all 20 tasks in the joint version of the benchmark. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260
],
"paper_content_text": [
"Introduction A central ability needed to solve daily tasks is complex reasoning.",
"It involves the capacity to comprehend and represent the environment, retain information from past experiences, and solve problems based on the stored information.",
"Our ability to solve those problems is supported by multiple specialized components, including shortterm memory storage, long-term semantic and procedural memory, and an executive controller that, among others, controls the attention over memories (Baddeley, 1992) .",
"Many promising advances for achieving complex reasoning with neural networks have been obtained during the last years.",
"Unlike symbolic approaches to complex reasoning, deep neural networks can learn representations from perceptual information.",
"Because of that, they do not suffer from the symbol grounding problem (Harnad, 1999) , and can generalize better than classical symbolic approaches.",
"Most of these neural network models make use of an explicit memory storage and an attention mechanism.",
"For instance, Memory Networks (MemNN), Dynamic Memory Networks (DMN) or Neural Turing Machines (NTM) (Weston et al., 2014; Kumar et al., 2016; Graves et al., 2014) build explicit memories from the perceptual inputs and access these memories using learned attention mechanisms.",
"After that some memories have been attended, using a multi-step procedure, the attended memories are combined and passed through a simple output layer that produces a final answer.",
"While this allows some multi-step inferential process, these networks lack a more complex reasoning mechanism, needed for more elaborated tasks such as inferring relations among entities (relational reasoning).",
"On the contrary, Relation Networks (RNs), proposed in Santoro et al.",
"(2017) , have shown outstanding performance in relational reasoning tasks.",
"Nonetheless, a major drawback of RNs is that they consider each of the input objects in pairs, having to process a quadratic number of relations.",
"That limits the usability of the model on large problems and makes forward and backward computations quite expensive.",
"To solve these problems we propose a novel Memory Network Figure 1 : The W-MemNN model applied to textual question answering.",
"Each input fact is processed using a GRU, and the output representation is stored in the short-term memory storage.",
"Then, the attentional controller computes an output vector that summarizes relevant parts of the memories.",
"This process is repeated H hops (a dotted line delimits each hop), and each output is stored in the working memory buffer.",
"Finally, the output of each hop is passed to the reasoning module that produces the final output.",
"architecture called the Working Memory Network (W-MemNN).",
"Our model augments the original MemNN with a relational reasoning module and a new working memory buffer.",
"The attention mechanism of the Memory Network allows the filtering of irrelevant inputs, reducing a lot of the computational complexity while keeping the relational reasoning capabilities of the RN.",
"Three main components compose the W-MemNN: An input module that converts the perceptual inputs into an internal vector representation and save these representations into a short-term storage, an attentional controller that attend to these internal representations and update a working memory buffer, and a reasoning module that operates on the set of objects stored in the working memory buffer in order to produce a final answer.",
"This component-based architecture is inspired by the well-known model from cognitive sciences called the multi-component working memory model, proposed in Baddeley and Hitch (1974) .",
"We studied the proposed model on the text-based QA benchmark bAbI which consists of 20 different toy tasks that measure different reasoning skills.",
"While models such as Ent-Net (Henaff et al., 2016) have focused on the pertask training version of the benchmark (where a different model is trained for each task), we decided to focus on the jointly trained version of the task, where the model is trained on all tasks simultaneously.",
"In the jointly trained bAbI-10k benchmark we achieved state-of-the-art performance, improving the previous state-of-the-art on more than 2%.",
"Moreover, a simple ensemble of two of our models can solve all 20 tasks simultaneously.",
"Also, we tested our model on the visual QA dataset NLVR.",
"In that dataset, we obtained performance at the level of the Module Neural Networks (Andreas et al., 2016) .",
"Our model, however, achieves these results using the raw input statements, without the extra text processing used in the Module Networks.",
"Finally, qualitative and quantitative analysis shows that the inclusion of the Relational Reasoning module is crucial to improving the performance of the MemNN on tasks that involve relational reasoning.",
"We can achieve this performance by also reducing the computation times of the RN considerably.",
"Consequently, we hope that this contribution may allow applying RNs to larger problems.",
"Model Our model is based on the Memory Network architecture.",
"Unlike MemNN we have included a reasoning module that helps the network to solve more complex tasks.",
"The proposed model consists of three main modules: An input module, an at-tentional controller, and a reasoning module.",
"The model processes the input information in multiple passes or hops.",
"At each pass the output of the previous hop can condition the current pass, allowing some incremental refinement.",
"Input module: The input module converts the perceptual information into an internal feature representation.",
"The input information can be processed in chunks, and each chunk is saved into a short-term storage.",
"The definition of what is a chunk of information depends on each task.",
"For instance, for textual question answering, we define each chunk as a sentence.",
"Other options might be n-grams or full documents.",
"This short-term storage can only be accessed during the hop.",
"Attentional Controller: The attentional controller decides in which parts of the short-term storage the model should focus.",
"The attended memories are kept during all the hops in a working memory buffer.",
"The attentional controller is conditioned by the task at hand, for instance, in question answering the question can condition the attention.",
"Also, it may be conditioned by the output of previous hops, allowing the model to change its focus to new portions of the memory over time.",
"Many models compute the attention for each memory using a compatibility function between the memory and the question.",
"Then, the output is calculated as the weighted sum of the memory values, using the attention as weight.",
"A simple way to compute the attention for each memory is to use dot-product attention.",
"This kind of mechanism is used in the original Memory Network and computes the attention value as the dot product between each memory and the question.",
"Although this kind of attention is simple, it may not be enough for more complex tasks.",
"Also, since there are no learned weights in the attention mechanism, the attention relies entirely on the learned embeddings.",
"That is something that we want to avoid in order to separate the learning of the input and attention module.",
"One way to allow learning in the dot-product attention is to project the memories and query vectors linearly.",
"That is done by multiplying each vector by a learned projection matrix (or equivalently a feed-forward neural network).",
"In this way, we can set apart the attention and input embeddings learning, and also allow more complex patterns of attention.",
"Reasoning Module: The memories stored in the working memory buffer are passed to the rea-soning module.",
"The choice of reasoning mechanism is left open and may depend on the task at hand.",
"In this work, we use a Relation Network as the reasoning module.",
"The RN takes the attended memories in pairs to infer relations among the memories.",
"That can be useful, for example, in tasks that include comparisons.",
"A detailed description of the full model is shown in Figure 1 .",
"W-MemN2N for Textual Question Answering We proceed to describe an implementation of the model for textual question answering.",
"In textual question answering the input consists of a set of sentences or facts, a question, and an answer.",
"The goal is to answer the question correctly based on the given facts.",
"Let (s, q, a) represents an input sample, consisting of a set of sentences s = {x i } L i=1 , a query q and an answer a.",
"Each sentence contains M words, {w i } M i=1 , where each word is represented as a onehot vector of length |V |, being |V | the vocabulary size.",
"The question contains Q words, represented as in the input sentences.",
"Input Module Each word in each sentence is encoded into a vector representation v i using an embedding matrix W ∈ R |V |×d , where d is the embedding size.",
"Then, the sentence is converted into a memory vector m i using the final output of a gated recurrent neural network (GRU) (Chung et al., 2014) : m i = GRU([v 1 , v 2 , ..., v M ]) Each memory {m i } L i=1 , where m i ∈ R d , is stored into the short-term memory storage.",
"The question is encoded into a vector u in a similar way, using the output of a gated recurrent network.",
"Attentional Controller Our attention module is based on the Multi-Head attention mechanism proposed in Vaswani et al.",
"(2017) .",
"First, the memories are projected using a projection matrix W m ∈ R d×d , as m i = W m m i .",
"Then, the similarity between the projected memory and the question is computed using the Scaled Dot-Product attention: α i = Softmax u T m i √ d (1) = exp((u T m i )/ √ d) j exp((u T m j )/ √ d) .",
"(2) Next, the memories are combined using the attention weights α i , obtaining an output vector h = j α j m j .",
"In the Multi-Head mechanism, the memories are projected S times using different projection matrices {W s m } S s=1 .",
"For each group of projected memories, an output vector {h i } S i=1 is obtained using the Scaled Dot-Product attention (eq.",
"2).",
"Finally, all vector outputs are concatenated and projected again using a different matrix: o k = [h 1 ; h 2 ; ...; h S ]W o , where ; is the concatenation operator and W o ∈ R Sd×d .",
"The o k vector is the final response vector for the hop k. This vector is stored in the working memory buffer.",
"The attention procedure can be repeated many times (or hops).",
"At each hop, the attention can be conditioned on the previous hop by replacing the question vector u by the output of the previous hop.",
"To do that we pass the output through a simple neural network f t .",
"Then, we use the output of the network as the new conditioner: o n k = f t (o k ).",
"(3) This network allows some learning in the transition patterns between hops.",
"We found Multi-Head attention to be very useful in the joint bAbI task.",
"This can be a product of the intrinsic multi-task nature of the bAbI dataset.",
"A possibility is that each attention head is being adapted for different groups of related tasks.",
"However, we did not investigate this further.",
"Also, note that while in this section we use the same set of memories at each hop, this is not necessary.",
"For larger sequences each hop can operate in different parts of the input sequence, allowing the processing of the input in various steps.",
"Reasoning Module The outputs stored in the working memory buffer are passed to the reasoning module.",
"The reasoning module used in this work is a Relation Network (RN).",
"In the RN the output vectors are concatenated in pairs together with the question vector.",
"Each pair is passed through a neural network g θ and all the outputs of the network are added to produce a single vector.",
"Then, the sum is passed to a final neural network f φ : r = f φ i,j g θ ([o i ; o j ; u]) , (4) The output of the Relation Network is then passed through a final weight matrix and a softmax to produce the predicted answer: a = Softmax(V r), (5) where V ∈ R |A|×d φ , |A| is the number of possible answers and d φ is the dimension of the output of f φ .",
"The full network is trained end-to-end using standard cross-entropy betweenâ and the true label a.",
"3 Related Work Memory Augmented Neural Networks During the last years, there has been plenty of work on achieving complex reasoning with deep neural networks.",
"An important part of these developments has used some kind of explicit memory and attention mechanisms.",
"One of the earliest recent work is that of Memory Networks (Weston et al., 2014) .",
"Memory Networks work by building an addressable memory from the inputs and then accessing those memories in a series of reading operations.",
"Another, similar, line of work is the one of Neural Turing Machines.",
"They were proposed in Graves et al.",
"(2014) and are the basis for recent neural architectures including the Differentiable Neural Computer (DNC) and the Sparse Access Memory (SAM) Rae et al., 2016) .",
"The NTM model also uses a content addressable memory, as in the Memory Network, but adds a write operation that allows updating the memory over time.",
"The management of the memory, however, is different from the one of the MemNN.",
"While the MemNN model pre-load the memories using all the inputs, the NTM writes and read the memory one input at a time.",
"An additional model that makes use of explicit external memory is the Dynamic Memory Network (DMN) (Kumar et al., 2016; Xiong et al., 2016) .",
"The model shares some similarities with the Memory Network model.",
"However, unlike the MemNN model, it operates in the input sequentially (as in the NTM model).",
"The model defines an Episodic Memory module that makes use of a Gated Recurrent Neural Network (GRU) to store and update an internal state that represents the episodic storage.",
"Memory Networks Since our model is based on the MemNN architecture, we proceed to describe it in more detail.",
"The Memory Network model was introduced in Weston et al.",
"(2014) .",
"In that work, the authors proposed a model composed of four components: The input feature map that converts the input into an internal vector representation, the generalization module that updates the memories given the input, the output feature map that produces a new output using the stored memories, and the response module that produces the final answer.",
"The model, as initially proposed, needed some strong supervision that explicitly tells the model which memories to attend.",
"In order to solve that limitation, the End-To-End Memory Network (MemN2N) was proposed in Sukhbaatar et al.",
"(2015) .",
"The model replaced the hard-attention mechanism used in the original MemNN by a softattention mechanism that allowed to train it endto-end without strong supervision.",
"In our model, we use a component-based approach, as in the original MemNN architecture.",
"However, there are some differences: First, our model makes use of two external storages: a short-term storage, and a working memory buffer.",
"The first is equivalent to the one updated by the input and generalization module of the MemNN.",
"The working memory buffer, on the other hand, does not have a counterpart in the original model.",
"Second, our model replaces the response module by a reasoning module.",
"Unlike the original MemNN, our reasoning module is intended to make more complex work than the response module, that was only designed to produce a final answer.",
"Relation Networks The ability to infer and learn relations between entities is fundamental to solve many complex reasoning problems.",
"Recently, a number of neural network models have been proposed for this task.",
"These include Interaction Networks, Graph Neural Networks, and Relation Networks (Battaglia et al., 2016; Scarselli et al., 2009; Santoro et al., 2017) .",
"In specific, Relation Networks (RNs) have shown excellent results in solving textual and visual question answering tasks requiring relational reasoning.",
"The model is relatively simple: First, all the inputs are grouped in pairs and each pair is passed through a neural network.",
"Then, the outputs of the first network are added, and another neural network processes the final vector.",
"The role of the first network is to infer relations among each pair of objects.",
"In Palm et al.",
"(2017) the authors propose a recurrent extension to the RN.",
"By allowing multiple steps of relational reasoning, the model can learn to solve more complex tasks.",
"The main issue with the RN architecture is that its scale very poorly for larger problems.",
"That is because it operates on O(n 2 ) pairs, where n is the number of input objects (for instance, sentences in the case of textual question answering).",
"This becomes quickly prohibitive for tasks involving many input objects.",
"Cognitive Science The concept of working memory has been extensively developed in cognitive psychology.",
"It consists of a limited capacity system that allows temporary storage and manipulation of information and is crucial to any reasoning task.",
"One of the most influential models of working memory is the multi-component model of working memory proposed by Baddeley and Hitch (1974) .",
"This model is composed both of a supervisory attentional controller (the central executive) and two short-term storage systems: The phonological loop, capable of holding speech-based information, and the visuospatial sketchpad, concerned with visual storage.",
"The central executive plays various functions, including the capacity to focus attention, to divide attention and to control access to long-term memory.",
"Later modifications to the model (Baddeley, 2000) include an episodic buffer that is capable of integrating and holding information from different sources.",
"Connections of the working memory model to memory augmented neural networks have been already studied in Graves et al.",
"(2014) .",
"We follow this effort and subdivide our model into components that resemble (in a basic way) the multi-component model of working memory.",
"Note, however, that we use the term working memory buffer instead of episodic buffer.",
"That is because the episodic buffer has an integration function that our model does not cover.",
"However, that can be an interesting source of inspiration for next versions of the model that integrate both visual and textual information for question answering.",
"Experiments Textual Question Answering To evaluate our model on textual question answering we used the Facebook bAbI-10k dataset .",
"The bAbI dataset is a textual LSTM Sukhbaatar et al.",
"(2015) .",
"Results for SDNC are took from Rae et al.",
"(2016) .",
"WMN † is an ensemble of two Working Memory Networks.",
"MN-S MN SDNC WMN WMN QA benchmark composed of 20 different tasks.",
"Each task is designed to test a different reasoning skill, such as deduction, induction, and coreference resolution.",
"Some of the tasks need relational reasoning, for instance, to compare the size of different entities.",
"Each sample is composed of a question, an answer, and a set of facts.",
"There are two versions of the dataset, referring to different dataset sizes: bAbI-1k and bAbI-10k.",
"In this work, we focus on the bAbI-10k version of the dataset which consists of 10, 000 training samples per task.",
"A task is considered solved if a model achieves greater than 95% accuracy.",
"Note that training can be done per-task or joint (by training the model on all tasks at the same time).",
"Some models (Liu and Perez, 2017) have focused in the per-task training performance, including the Ent-Net model (Henaff et al., 2016) that solves all the tasks in the per-task training version.",
"We choose to focus on the joint training version since we think is more indicative of the generalization properties of the model.",
"A detailed analysis of the dataset can be found in Lee et al.",
"(2015) .",
"Model Details To encode the input facts we used a word embedding that projected each word in a sentence into a real vector of size d. We defined d = 30 and used a GRU with 30 units to process each sentence.",
"We used the 30 sentences in the support set that were immediately prior to the question.",
"The question was processed using the same configuration but with a different GRU.",
"We used 8 heads in the Multi-Head attention mechanism.",
"For the transition networks f t , which operates in the output of each hop, we used a two-layer MLP consisting of 15 and 30 hidden units (so the output preserves the memory dimension).",
"We used H = 4 hops (or equivalently, a working memory buffer of size 4).",
"In the reasoning module, we used a 3layer MLP consisting of 128 units in each layer and with ReLU non-linearities for g θ .",
"We omitted the f φ network since we did not observe improvements when using it.",
"The final layer was a linear layer that produced logits for a softmax over the answer vocabulary.",
"Training Details We trained our model end-to-end with a crossentropy loss function and using the Adam optimizer (Kingma and Ba, 2014).",
"We used a learning rate of ν = 1e −3 .",
"We trained the model during 400 epochs.",
"For training, we used a batch size of 32.",
"As in Sukhbaatar et al.",
"(2015) we did not average the loss over a batch.",
"Also, we clipped gradients with norm larger than 40 (Pascanu et al., 2013) .",
"For all the dense layers we used 2 regularization with value 1e −3 .",
"All weights were initialized using Glorot normal initialization (Glorot and Bengio, 2010) .",
"10% of the training set was heldout to form a validation set that we used to select the architecture and for hyperparameter tunning.",
"In some cases, we found useful to restart training after the 400 epochs with a smaller learning rate of 1e −5 and anneals every 5 epochs by ν/2 until 20 epochs were reached.",
"bAbI-10k Results On the jointly trained bAbI-10k dataset our best model (out of 10 runs) achieves an accuracy of 99.58%.",
"That is a 2.38% improvement over the previous state-of-the-art that was obtained by the Sparse Differential Neural Computer (SDNC) (Rae et al., 2016) .",
"The best model of the 10 runs solves almost all tasks of the bAbI-10k dataset (by a 0.3% margin).",
"However, a simple ensemble of the best two models solves all 20 tasks and achieves an almost perfect accuracy of 99.7%.",
"We list the results for each task in Table 1 .",
"Other authors have reported high variance in the results, for instance, the authors of the SDNC report a mean accuracy and standard deviation over 15 runs of 93.6 ± 2.5 (with 15.9 ± 1.6 passed tasks).",
"In contrast, our model achieves a mean accuracy of 98.3 ± 1.2 (with 18.6 ± 0.4 passed tasks), which is better and more stable than the average results obtained by the SDNC.",
"The Relation Network solves 18/20 tasks.",
"We achieve even better performance, and with considerably fewer computations, as is explained in Section 4.3.",
"We think that by including the attention mechanism, the relation reasoning module can focus on learning the relation among relevant objects, instead of learning spurious relations among irrelevant objects.",
"For that, the Multi-Head attention mechanism was very helpful.",
"The Effect of the Relational Reasoning Module When compared to the original Memory Network, our model substantially improves the accuracy of tasks 17 (positional reasoning) and 19 (path finding).",
"Both tasks require the analysis of multiple relations (Lee et al., 2015) .",
"For instance, the task 19 needs that the model reasons about the relation of different positions of the entities, and in that way find a path to arrive from one to another.",
"The accuracy improves in 75.1% for task 19 and in 41.5% for task 17 when compared with the MemN2N model.",
"Since both tasks require reasoning about relations, we hypothesize that the relational reasoning module of the W-MemNN was of great help to improve the performance on both tasks.",
"The Relation Network, on the other hand, fails in the tasks 2 (2 supporting facts) and 3 (3 supporting facts).",
"Both tasks require handling a significant number of facts, especially in task 3.",
"In those cases, the attention mechanism is crucial to filter out irrelevant facts.",
"Visual Question Answering To further study our model we evaluated its performance on a visual question answering dataset.",
"For that, we used the recently proposed NLVR dataset (Suhr et al., 2017) .",
"Each sample in the NLVR dataset is composed of an image with three sub-images and a statement.",
"The task consists in judging if the statement is true or false for that image.",
"Evaluating the statement requires reasoning about the sets of objects in the image, comparing objects properties, and reasoning about spatial relations.",
"The dataset is interesting for us for two reasons.",
"First, the statements evaluation requires complex relational reasoning about the objects in the image.",
"Second, unlike the bAbI dataset, the statements are written in natural language.",
"Because of that, each statement displays a range of syntactic and semantic phenomena that are not present in the bAbI dataset.",
"Model details Our model can be easily adapted to deal with visual information.",
"Following the idea from Santoro et al.",
"(2017) , instead of processing each input using a recurrent neural network, we use a Convolutional Neural Network (CNN).",
"The CNN takes as input each sub-image and convolved them through convolutional layers.",
"The output of the CNN consists of k feature maps (where k is the number of kernels in the final convolutional layer) of size d × d. Then, each memory is built from the vector composed by the concatenation of the cells in the same position of each feature map.",
"Consequently, d × d memories of size k are stored in the shortterm storage.",
"The statement is processed using a GRU neural network as in the textual reasoning task.",
"Then, we can proceed using the same architecture for the reasoning and attention module that the one used in the textual QA model.",
"However, for the visual QA task, we used an additive attention mechanism.",
"The additive attention computes the attention weight using a feed-forward neural network applied to the concatenation of the memory vector and statement vector.",
"Results Our model achieves a validation / test accuracy of 65.6%/65.8%.",
"Notably, we achieved a performance comparable to the results of the Module Neural Networks (Andreas et al., 2016 ) that make use of standard NLP tools to process the statements into structured representations.",
"Unlike the Module Neural Networks, we achieved our results using only raw input statements, allowing the model to learn how to process the textual input by itself.",
"Note that given the more complex nature of the language used in the NLVR dataset we needed to use a larger embedding size and GRU hidden layer than in the bAbI dataset (100 and 128 respectively).",
"That, however, is a nice feature of separating the input from the reasoning and attention component: One way to process more complex language statements is increasing the capacity of the input module.",
"From O(n 2 ) to O(n) One of the major limitations of RNs is that they need to process each one of the memories in pairs.",
"To do that, the RN must perform O(n 2 ) forward and backward passes (where n is the number of memories).",
"That becomes quickly prohibitive for a larger number of memories.",
"In contrast, the dependence of the W-MemNN run times on the number of memories is linear.",
"Note, however, that computation times in the W-MemNN depend quadratically on the size of the working memory buffer.",
"Nonetheless, this number is expected to be much smaller than the number of memories.",
"To compare both models we measured the wall-clock time for a forward and backward pass for a single batch of size 32.",
"We performed these experiments on a GPU NVIDIA K80.",
"Figure 2 shows the results.",
"Memory Visualizations One nice feature from Memory Networks is that they allow some interpretability of the reasoning procedure by looking at the attention weights.",
"At each hop, the attention weights show which parts of the memory the model found relevant to produce the output.",
"RNs, on the contrary, lack of this feature.",
"Table 2 shows the attention values for visual and textual question answering.",
"Relation Network W-MemNN Figure 2 : Wall-clock times for a forward and backward pass for a single batch.",
"The batch size used is 32.",
"While for 5 memories the times are comparable, for 30 memories the W-MemNN takes around 50s while the RN takes 930s, a speedup of almost 20×.",
"Conclusion We have proposed a novel Working Memory Network architecture that introduces improved reasoning abilities to the original MemNN model.",
"We demonstrated that by augmenting the MemNN architecture with a Relation Network, the computational complexity of the RN can be reduced, without loss of performance.",
"That opens the opportunity for using RNs in larger problems, something that may be very useful, given the many tasks requiring a significant amount of memories.",
"Although we have used RN as the reasoning module in this work, other options can be tested.",
"It might be interesting to analyze how other reasoning modules can improve different weaknesses of the model.",
"We presented results on the jointly trained bAbI-10k dataset, where we achieve a new state-of-theart, with an average error of less than 0.5%.",
"Also, we showed that our model can be easily adapted for visual question answering.",
"Our architecture combines perceptual input processing, short-term memory storage, an attention mechanism, and a reasoning module.",
"While other models have focused on different parts of these components, we think that is important to find ways to combine these different mechanisms if we want to build models capable of complex reasoning.",
"Evidence from cognitive sciences seems to show that all these abilities are needed in order to achieve human-level complex reasoning."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"3.1",
"3.2",
"3.3",
"3.4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Model",
"W-MemN2N for Textual Question Answering",
"Memory Augmented Neural Networks",
"Memory Networks",
"Relation Networks",
"Cognitive Science",
"Textual Question Answering",
"Visual Question Answering",
"From O(n 2 ) to O(n)",
"Memory Visualizations",
"Conclusion"
]
} | GEM-SciDuet-train-20#paper-1018#slide-4 | Working Memory Networks | A Memory Network model with a new working memory buffer and relational reasoning module. Produces state-of-the-art results in reasoning tasks. Inspired by the Multi-component model of working memory.
Short-term Memory Module Attention Module Reasoning Module
01: Daniel went to the bathroom. 02: Sandra journeyed to the office. 03: Mary got the football there. 04: Mary travelled to the garden
02: Sandra journeyed to the office. 03: Mary got the football there. 04: Mary travelled to the garden
u, Hop 1, q: Where is the football?
α_i^l = Softmax((u^T m_i^l) / √d), h^l = Σ_j α_j^l m_j^l
Multi-head attention (Vaswani et al. 2017)
A Memory Network model with a new working memory buffer and relational reasoning module. Produces state-of-the-art results in reasoning tasks. Inspired by the Multi-component model of working memory. α_i^l = Softmax((f_t(o_1)^T m_i^l) / √d), h^l = Σ_j α_j^l m_j^l
Short-term Memory Module, o_1 = [h^1; h^2; ...] W_o, Attention Module | A Memory Network model with a new working memory buffer and relational reasoning module. Produces state-of-the-art results in reasoning tasks. Inspired by the Multi-component model of working memory.
Short-term Memory Module Attention Module Reasoning Module
01: Daniel went to the bathroom. 02: Sandra journeyed to the office. 03: Mary got the football there. 04: Mary travelled to the garden
02: Sandra journeyed to the office. 03: Mary got the football there. 04: Mary travelled to the garden
u, Hop 1, q: Where is the football?
α_i^l = Softmax((u^T m_i^l) / √d), h^l = Σ_j α_j^l m_j^l
Multi-head attention (Vaswani et al. 2017)
A Memory Network model with a new working memory buffer and relational reasoning module. Produces state-of-the-art results in reasoning tasks. Inspired by the Multi-component model of working memory. α_i^l = Softmax((f_t(o_1)^T m_i^l) / √d), h^l = Σ_j α_j^l m_j^l
Short-term Memory Module, o_1 = [h^1; h^2; ...] W_o, Attention Module | [] |
GEM-SciDuet-train-20#paper-1018#slide-5 | 1018 | Working Memory Networks: Augmenting Memory Networks with a Relational Reasoning Module | During the last years, there has been a lot of interest in achieving some kind of complex reasoning using deep neural networks. To do that, models like Memory Networks (MemNNs) have combined external memory storages and attention mechanisms. These architectures, however, lack of more complex reasoning mechanisms that could allow, for instance, relational reasoning. Relation Networks (RNs), on the other hand, have shown outstanding results in relational reasoning tasks. Unfortunately, their computational cost grows quadratically with the number of memories, something prohibitive for larger problems. To solve these issues, we introduce the Working Memory Network, a MemNN architecture with a novel working memory storage and reasoning module. Our model retains the relational reasoning abilities of the RN while reducing its computational complexity from quadratic to linear. We tested our model on the text QA dataset bAbI and the visual QA dataset NLVR. In the jointly trained bAbI-10k, we set a new state-of-the-art, achieving a mean error of less than 0.5%. Moreover, a simple ensemble of two of our models solves all 20 tasks in the joint version of the benchmark. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260
],
"paper_content_text": [
"Introduction A central ability needed to solve daily tasks is complex reasoning.",
"It involves the capacity to comprehend and represent the environment, retain information from past experiences, and solve problems based on the stored information.",
"Our ability to solve those problems is supported by multiple specialized components, including shortterm memory storage, long-term semantic and procedural memory, and an executive controller that, among others, controls the attention over memories (Baddeley, 1992) .",
"Many promising advances for achieving complex reasoning with neural networks have been obtained during the last years.",
"Unlike symbolic approaches to complex reasoning, deep neural networks can learn representations from perceptual information.",
"Because of that, they do not suffer from the symbol grounding problem (Harnad, 1999) , and can generalize better than classical symbolic approaches.",
"Most of these neural network models make use of an explicit memory storage and an attention mechanism.",
"For instance, Memory Networks (MemNN), Dynamic Memory Networks (DMN) or Neural Turing Machines (NTM) (Weston et al., 2014; Kumar et al., 2016; Graves et al., 2014) build explicit memories from the perceptual inputs and access these memories using learned attention mechanisms.",
"After that some memories have been attended, using a multi-step procedure, the attended memories are combined and passed through a simple output layer that produces a final answer.",
"While this allows some multi-step inferential process, these networks lack a more complex reasoning mechanism, needed for more elaborated tasks such as inferring relations among entities (relational reasoning).",
"On the contrary, Relation Networks (RNs), proposed in Santoro et al.",
"(2017) , have shown outstanding performance in relational reasoning tasks.",
"Nonetheless, a major drawback of RNs is that they consider each of the input objects in pairs, having to process a quadratic number of relations.",
"That limits the usability of the model on large problems and makes forward and backward computations quite expensive.",
"To solve these problems we propose a novel Memory Network Figure 1 : The W-MemNN model applied to textual question answering.",
"Each input fact is processed using a GRU, and the output representation is stored in the short-term memory storage.",
"Then, the attentional controller computes an output vector that summarizes relevant parts of the memories.",
"This process is repeated H hops (a dotted line delimits each hop), and each output is stored in the working memory buffer.",
"Finally, the output of each hop is passed to the reasoning module that produces the final output.",
"architecture called the Working Memory Network (W-MemNN).",
"Our model augments the original MemNN with a relational reasoning module and a new working memory buffer.",
"The attention mechanism of the Memory Network allows the filtering of irrelevant inputs, reducing a lot of the computational complexity while keeping the relational reasoning capabilities of the RN.",
"Three main components compose the W-MemNN: An input module that converts the perceptual inputs into an internal vector representation and save these representations into a short-term storage, an attentional controller that attend to these internal representations and update a working memory buffer, and a reasoning module that operates on the set of objects stored in the working memory buffer in order to produce a final answer.",
"This component-based architecture is inspired by the well-known model from cognitive sciences called the multi-component working memory model, proposed in Baddeley and Hitch (1974) .",
"We studied the proposed model on the text-based QA benchmark bAbI which consists of 20 different toy tasks that measure different reasoning skills.",
"While models such as Ent-Net (Henaff et al., 2016) have focused on the pertask training version of the benchmark (where a different model is trained for each task), we decided to focus on the jointly trained version of the task, where the model is trained on all tasks simultaneously.",
"In the jointly trained bAbI-10k benchmark we achieved state-of-the-art performance, improving the previous state-of-the-art on more than 2%.",
"Moreover, a simple ensemble of two of our models can solve all 20 tasks simultaneously.",
"Also, we tested our model on the visual QA dataset NLVR.",
"In that dataset, we obtained performance at the level of the Module Neural Networks (Andreas et al., 2016) .",
"Our model, however, achieves these results using the raw input statements, without the extra text processing used in the Module Networks.",
"Finally, qualitative and quantitative analysis shows that the inclusion of the Relational Reasoning module is crucial to improving the performance of the MemNN on tasks that involve relational reasoning.",
"We can achieve this performance by also reducing the computation times of the RN considerably.",
"Consequently, we hope that this contribution may allow applying RNs to larger problems.",
"Model Our model is based on the Memory Network architecture.",
"Unlike MemNN we have included a reasoning module that helps the network to solve more complex tasks.",
"The proposed model consists of three main modules: An input module, an at-tentional controller, and a reasoning module.",
"The model processes the input information in multiple passes or hops.",
"At each pass the output of the previous hop can condition the current pass, allowing some incremental refinement.",
"Input module: The input module converts the perceptual information into an internal feature representation.",
"The input information can be processed in chunks, and each chunk is saved into a short-term storage.",
"The definition of what is a chunk of information depends on each task.",
"For instance, for textual question answering, we define each chunk as a sentence.",
"Other options might be n-grams or full documents.",
"This short-term storage can only be accessed during the hop.",
"Attentional Controller: The attentional controller decides in which parts of the short-term storage the model should focus.",
"The attended memories are kept during all the hops in a working memory buffer.",
"The attentional controller is conditioned by the task at hand, for instance, in question answering the question can condition the attention.",
"Also, it may be conditioned by the output of previous hops, allowing the model to change its focus to new portions of the memory over time.",
"Many models compute the attention for each memory using a compatibility function between the memory and the question.",
"Then, the output is calculated as the weighted sum of the memory values, using the attention as weight.",
"A simple way to compute the attention for each memory is to use dot-product attention.",
"This kind of mechanism is used in the original Memory Network and computes the attention value as the dot product between each memory and the question.",
"Although this kind of attention is simple, it may not be enough for more complex tasks.",
"Also, since there are no learned weights in the attention mechanism, the attention relies entirely on the learned embeddings.",
"That is something that we want to avoid in order to separate the learning of the input and attention module.",
"One way to allow learning in the dot-product attention is to project the memories and query vectors linearly.",
"That is done by multiplying each vector by a learned projection matrix (or equivalently a feed-forward neural network).",
"In this way, we can set apart the attention and input embeddings learning, and also allow more complex patterns of attention.",
"Reasoning Module: The memories stored in the working memory buffer are passed to the rea-soning module.",
"The choice of reasoning mechanism is left open and may depend on the task at hand.",
"In this work, we use a Relation Network as the reasoning module.",
"The RN takes the attended memories in pairs to infer relations among the memories.",
"That can be useful, for example, in tasks that include comparisons.",
"A detailed description of the full model is shown in Figure 1 .",
"W-MemN2N for Textual Question Answering We proceed to describe an implementation of the model for textual question answering.",
"In textual question answering the input consists of a set of sentences or facts, a question, and an answer.",
"The goal is to answer the question correctly based on the given facts.",
"Let (s, q, a) represents an input sample, consisting of a set of sentences s = {x i } L i=1 , a query q and an answer a.",
"Each sentence contains M words, {w i } M i=1 , where each word is represented as a onehot vector of length |V |, being |V | the vocabulary size.",
"The question contains Q words, represented as in the input sentences.",
"Input Module Each word in each sentence is encoded into a vector representation v i using an embedding matrix W ∈ R |V |×d , where d is the embedding size.",
"Then, the sentence is converted into a memory vector m i using the final output of a gated recurrent neural network (GRU) (Chung et al., 2014) : m i = GRU([v 1 , v 2 , ..., v M ]) Each memory {m i } L i=1 , where m i ∈ R d , is stored into the short-term memory storage.",
"The question is encoded into a vector u in a similar way, using the output of a gated recurrent network.",
"Attentional Controller Our attention module is based on the Multi-Head attention mechanism proposed in Vaswani et al.",
"(2017) .",
"First, the memories are projected using a projection matrix W m ∈ R d×d , as m i = W m m i .",
"Then, the similarity between the projected memory and the question is computed using the Scaled Dot-Product attention: α i = Softmax u T m i √ d (1) = exp((u T m i )/ √ d) j exp((u T m j )/ √ d) .",
"(2) Next, the memories are combined using the attention weights α i , obtaining an output vector h = j α j m j .",
"In the Multi-Head mechanism, the memories are projected S times using different projection matrices {W s m } S s=1 .",
"For each group of projected memories, an output vector {h i } S i=1 is obtained using the Scaled Dot-Product attention (eq.",
"2).",
"Finally, all vector outputs are concatenated and projected again using a different matrix: o k = [h 1 ; h 2 ; ...; h S ]W o , where ; is the concatenation operator and W o ∈ R Sd×d .",
"The o k vector is the final response vector for the hop k. This vector is stored in the working memory buffer.",
"The attention procedure can be repeated many times (or hops).",
"At each hop, the attention can be conditioned on the previous hop by replacing the question vector u by the output of the previous hop.",
"To do that we pass the output through a simple neural network f t .",
"Then, we use the output of the network as the new conditioner: o n k = f t (o k ).",
"(3) This network allows some learning in the transition patterns between hops.",
"We found Multi-Head attention to be very useful in the joint bAbI task.",
"This can be a product of the intrinsic multi-task nature of the bAbI dataset.",
"A possibility is that each attention head is being adapted for different groups of related tasks.",
"However, we did not investigate this further.",
"Also, note that while in this section we use the same set of memories at each hop, this is not necessary.",
"For larger sequences each hop can operate in different parts of the input sequence, allowing the processing of the input in various steps.",
"Reasoning Module The outputs stored in the working memory buffer are passed to the reasoning module.",
"The reasoning module used in this work is a Relation Network (RN).",
"In the RN the output vectors are concatenated in pairs together with the question vector.",
"Each pair is passed through a neural network g θ and all the outputs of the network are added to produce a single vector.",
"Then, the sum is passed to a final neural network f φ : r = f φ i,j g θ ([o i ; o j ; u]) , (4) The output of the Relation Network is then passed through a final weight matrix and a softmax to produce the predicted answer: a = Softmax(V r), (5) where V ∈ R |A|×d φ , |A| is the number of possible answers and d φ is the dimension of the output of f φ .",
"The full network is trained end-to-end using standard cross-entropy betweenâ and the true label a.",
"3 Related Work Memory Augmented Neural Networks During the last years, there has been plenty of work on achieving complex reasoning with deep neural networks.",
"An important part of these developments has used some kind of explicit memory and attention mechanisms.",
"One of the earliest recent work is that of Memory Networks (Weston et al., 2014) .",
"Memory Networks work by building an addressable memory from the inputs and then accessing those memories in a series of reading operations.",
"Another, similar, line of work is the one of Neural Turing Machines.",
"They were proposed in Graves et al.",
"(2014) and are the basis for recent neural architectures including the Differentiable Neural Computer (DNC) and the Sparse Access Memory (SAM) Rae et al., 2016) .",
"The NTM model also uses a content addressable memory, as in the Memory Network, but adds a write operation that allows updating the memory over time.",
"The management of the memory, however, is different from the one of the MemNN.",
"While the MemNN model pre-load the memories using all the inputs, the NTM writes and read the memory one input at a time.",
"An additional model that makes use of explicit external memory is the Dynamic Memory Network (DMN) (Kumar et al., 2016; Xiong et al., 2016) .",
"The model shares some similarities with the Memory Network model.",
"However, unlike the MemNN model, it operates in the input sequentially (as in the NTM model).",
"The model defines an Episodic Memory module that makes use of a Gated Recurrent Neural Network (GRU) to store and update an internal state that represents the episodic storage.",
"Memory Networks Since our model is based on the MemNN architecture, we proceed to describe it in more detail.",
"The Memory Network model was introduced in Weston et al.",
"(2014) .",
"In that work, the authors proposed a model composed of four components: The input feature map that converts the input into an internal vector representation, the generalization module that updates the memories given the input, the output feature map that produces a new output using the stored memories, and the response module that produces the final answer.",
"The model, as initially proposed, needed some strong supervision that explicitly tells the model which memories to attend.",
"In order to solve that limitation, the End-To-End Memory Network (MemN2N) was proposed in Sukhbaatar et al.",
"(2015) .",
"The model replaced the hard-attention mechanism used in the original MemNN by a softattention mechanism that allowed to train it endto-end without strong supervision.",
"In our model, we use a component-based approach, as in the original MemNN architecture.",
"However, there are some differences: First, our model makes use of two external storages: a short-term storage, and a working memory buffer.",
"The first is equivalent to the one updated by the input and generalization module of the MemNN.",
"The working memory buffer, on the other hand, does not have a counterpart in the original model.",
"Second, our model replaces the response module by a reasoning module.",
"Unlike the original MemNN, our reasoning module is intended to make more complex work than the response module, that was only designed to produce a final answer.",
"Relation Networks The ability to infer and learn relations between entities is fundamental to solve many complex reasoning problems.",
"Recently, a number of neural network models have been proposed for this task.",
"These include Interaction Networks, Graph Neural Networks, and Relation Networks (Battaglia et al., 2016; Scarselli et al., 2009; Santoro et al., 2017) .",
"In specific, Relation Networks (RNs) have shown excellent results in solving textual and visual question answering tasks requiring relational reasoning.",
"The model is relatively simple: First, all the inputs are grouped in pairs and each pair is passed through a neural network.",
"Then, the outputs of the first network are added, and another neural network processes the final vector.",
"The role of the first network is to infer relations among each pair of objects.",
"In Palm et al.",
"(2017) the authors propose a recurrent extension to the RN.",
"By allowing multiple steps of relational reasoning, the model can learn to solve more complex tasks.",
"The main issue with the RN architecture is that its scale very poorly for larger problems.",
"That is because it operates on O(n 2 ) pairs, where n is the number of input objects (for instance, sentences in the case of textual question answering).",
"This becomes quickly prohibitive for tasks involving many input objects.",
"Cognitive Science The concept of working memory has been extensively developed in cognitive psychology.",
"It consists of a limited capacity system that allows temporary storage and manipulation of information and is crucial to any reasoning task.",
"One of the most influential models of working memory is the multi-component model of working memory proposed by Baddeley and Hitch (1974) .",
"This model is composed both of a supervisory attentional controller (the central executive) and two short-term storage systems: The phonological loop, capable of holding speech-based information, and the visuospatial sketchpad, concerned with visual storage.",
"The central executive plays various functions, including the capacity to focus attention, to divide attention and to control access to long-term memory.",
"Later modifications to the model (Baddeley, 2000) include an episodic buffer that is capable of integrating and holding information from different sources.",
"Connections of the working memory model to memory augmented neural networks have been already studied in Graves et al.",
"(2014) .",
"We follow this effort and subdivide our model into components that resemble (in a basic way) the multi-component model of working memory.",
"Note, however, that we use the term working memory buffer instead of episodic buffer.",
"That is because the episodic buffer has an integration function that our model does not cover.",
"However, that can be an interesting source of inspiration for next versions of the model that integrate both visual and textual information for question answering.",
"Experiments Textual Question Answering To evaluate our model on textual question answering we used the Facebook bAbI-10k dataset .",
"The bAbI dataset is a textual LSTM Sukhbaatar et al.",
"(2015) .",
"Results for SDNC are took from Rae et al.",
"(2016) .",
"WMN † is an ensemble of two Working Memory Networks.",
"MN-S MN SDNC WMN WMN QA benchmark composed of 20 different tasks.",
"Each task is designed to test a different reasoning skill, such as deduction, induction, and coreference resolution.",
"Some of the tasks need relational reasoning, for instance, to compare the size of different entities.",
"Each sample is composed of a question, an answer, and a set of facts.",
"There are two versions of the dataset, referring to different dataset sizes: bAbI-1k and bAbI-10k.",
"In this work, we focus on the bAbI-10k version of the dataset which consists of 10, 000 training samples per task.",
"A task is considered solved if a model achieves greater than 95% accuracy.",
"Note that training can be done per-task or joint (by training the model on all tasks at the same time).",
"Some models (Liu and Perez, 2017) have focused in the per-task training performance, including the Ent-Net model (Henaff et al., 2016) that solves all the tasks in the per-task training version.",
"We choose to focus on the joint training version since we think is more indicative of the generalization properties of the model.",
"A detailed analysis of the dataset can be found in Lee et al.",
"(2015) .",
"Model Details To encode the input facts we used a word embedding that projected each word in a sentence into a real vector of size d. We defined d = 30 and used a GRU with 30 units to process each sentence.",
"We used the 30 sentences in the support set that were immediately prior to the question.",
"The question was processed using the same configuration but with a different GRU.",
"We used 8 heads in the Multi-Head attention mechanism.",
"For the transition networks f t , which operates in the output of each hop, we used a two-layer MLP consisting of 15 and 30 hidden units (so the output preserves the memory dimension).",
"We used H = 4 hops (or equivalently, a working memory buffer of size 4).",
"In the reasoning module, we used a 3layer MLP consisting of 128 units in each layer and with ReLU non-linearities for g θ .",
"We omitted the f φ network since we did not observe improvements when using it.",
"The final layer was a linear layer that produced logits for a softmax over the answer vocabulary.",
"Training Details We trained our model end-to-end with a crossentropy loss function and using the Adam optimizer (Kingma and Ba, 2014).",
"We used a learning rate of ν = 1e −3 .",
"We trained the model during 400 epochs.",
"For training, we used a batch size of 32.",
"As in Sukhbaatar et al.",
"(2015) we did not average the loss over a batch.",
"Also, we clipped gradients with norm larger than 40 (Pascanu et al., 2013) .",
"For all the dense layers we used 2 regularization with value 1e −3 .",
"All weights were initialized using Glorot normal initialization (Glorot and Bengio, 2010) .",
"10% of the training set was heldout to form a validation set that we used to select the architecture and for hyperparameter tunning.",
"In some cases, we found useful to restart training after the 400 epochs with a smaller learning rate of 1e −5 and anneals every 5 epochs by ν/2 until 20 epochs were reached.",
"bAbI-10k Results On the jointly trained bAbI-10k dataset our best model (out of 10 runs) achieves an accuracy of 99.58%.",
"That is a 2.38% improvement over the previous state-of-the-art that was obtained by the Sparse Differential Neural Computer (SDNC) (Rae et al., 2016) .",
"The best model of the 10 runs solves almost all tasks of the bAbI-10k dataset (by a 0.3% margin).",
"However, a simple ensemble of the best two models solves all 20 tasks and achieves an almost perfect accuracy of 99.7%.",
"We list the results for each task in Table 1 .",
"Other authors have reported high variance in the results, for instance, the authors of the SDNC report a mean accuracy and standard deviation over 15 runs of 93.6 ± 2.5 (with 15.9 ± 1.6 passed tasks).",
"In contrast, our model achieves a mean accuracy of 98.3 ± 1.2 (with 18.6 ± 0.4 passed tasks), which is better and more stable than the average results obtained by the SDNC.",
"The Relation Network solves 18/20 tasks.",
"We achieve even better performance, and with considerably fewer computations, as is explained in Section 4.3.",
"We think that by including the attention mechanism, the relation reasoning module can focus on learning the relation among relevant objects, instead of learning spurious relations among irrelevant objects.",
"For that, the Multi-Head attention mechanism was very helpful.",
"The Effect of the Relational Reasoning Module When compared to the original Memory Network, our model substantially improves the accuracy of tasks 17 (positional reasoning) and 19 (path finding).",
"Both tasks require the analysis of multiple relations (Lee et al., 2015) .",
"For instance, the task 19 needs that the model reasons about the relation of different positions of the entities, and in that way find a path to arrive from one to another.",
"The accuracy improves in 75.1% for task 19 and in 41.5% for task 17 when compared with the MemN2N model.",
"Since both tasks require reasoning about relations, we hypothesize that the relational reasoning module of the W-MemNN was of great help to improve the performance on both tasks.",
"The Relation Network, on the other hand, fails in the tasks 2 (2 supporting facts) and 3 (3 supporting facts).",
"Both tasks require handling a significant number of facts, especially in task 3.",
"In those cases, the attention mechanism is crucial to filter out irrelevant facts.",
"Visual Question Answering To further study our model we evaluated its performance on a visual question answering dataset.",
"For that, we used the recently proposed NLVR dataset (Suhr et al., 2017) .",
"Each sample in the NLVR dataset is composed of an image with three sub-images and a statement.",
"The task consists in judging if the statement is true or false for that image.",
"Evaluating the statement requires reasoning about the sets of objects in the image, comparing objects properties, and reasoning about spatial relations.",
"The dataset is interesting for us for two reasons.",
"First, the statements evaluation requires complex relational reasoning about the objects in the image.",
"Second, unlike the bAbI dataset, the statements are written in natural language.",
"Because of that, each statement displays a range of syntactic and semantic phenomena that are not present in the bAbI dataset.",
"Model details Our model can be easily adapted to deal with visual information.",
"Following the idea from Santoro et al.",
"(2017) , instead of processing each input using a recurrent neural network, we use a Convolutional Neural Network (CNN).",
"The CNN takes as input each sub-image and convolved them through convolutional layers.",
"The output of the CNN consists of k feature maps (where k is the number of kernels in the final convolutional layer) of size d × d. Then, each memory is built from the vector composed by the concatenation of the cells in the same position of each feature map.",
"Consequently, d × d memories of size k are stored in the shortterm storage.",
"The statement is processed using a GRU neural network as in the textual reasoning task.",
"Then, we can proceed using the same architecture for the reasoning and attention module that the one used in the textual QA model.",
"However, for the visual QA task, we used an additive attention mechanism.",
"The additive attention computes the attention weight using a feed-forward neural network applied to the concatenation of the memory vector and statement vector.",
"Results Our model achieves a validation / test accuracy of 65.6%/65.8%.",
"Notably, we achieved a performance comparable to the results of the Module Neural Networks (Andreas et al., 2016 ) that make use of standard NLP tools to process the statements into structured representations.",
"Unlike the Module Neural Networks, we achieved our results using only raw input statements, allowing the model to learn how to process the textual input by itself.",
"Note that given the more complex nature of the language used in the NLVR dataset we needed to use a larger embedding size and GRU hidden layer than in the bAbI dataset (100 and 128 respectively).",
"That, however, is a nice feature of separating the input from the reasoning and attention component: One way to process more complex language statements is increasing the capacity of the input module.",
"From O(n 2 ) to O(n) One of the major limitations of RNs is that they need to process each one of the memories in pairs.",
"To do that, the RN must perform O(n 2 ) forward and backward passes (where n is the number of memories).",
"That becomes quickly prohibitive for a larger number of memories.",
"In contrast, the dependence of the W-MemNN run times on the number of memories is linear.",
"Note, however, that computation times in the W-MemNN depend quadratically on the size of the working memory buffer.",
"Nonetheless, this number is expected to be much smaller than the number of memories.",
"To compare both models we measured the wall-clock time for a forward and backward pass for a single batch of size 32.",
"We performed these experiments on a GPU NVIDIA K80.",
"Figure 2 shows the results.",
"Memory Visualizations One nice feature from Memory Networks is that they allow some interpretability of the reasoning procedure by looking at the attention weights.",
"At each hop, the attention weights show which parts of the memory the model found relevant to produce the output.",
"RNs, on the contrary, lack of this feature.",
"Table 2 shows the attention values for visual and textual question answering.",
"Relation Network W-MemNN Figure 2 : Wall-clock times for a forward and backward pass for a single batch.",
"The batch size used is 32.",
"While for 5 memories the times are comparable, for 30 memories the W-MemNN takes around 50s while the RN takes 930s, a speedup of almost 20×.",
"Conclusion We have proposed a novel Working Memory Network architecture that introduces improved reasoning abilities to the original MemNN model.",
"We demonstrated that by augmenting the MemNN architecture with a Relation Network, the computational complexity of the RN can be reduced, without loss of performance.",
"That opens the opportunity for using RNs in larger problems, something that may be very useful, given the many tasks requiring a significant amount of memories.",
"Although we have used RN as the reasoning module in this work, other options can be tested.",
"It might be interesting to analyze how other reasoning modules can improve different weaknesses of the model.",
"We presented results on the jointly trained bAbI-10k dataset, where we achieve a new state-of-theart, with an average error of less than 0.5%.",
"Also, we showed that our model can be easily adapted for visual question answering.",
"Our architecture combines perceptual input processing, short-term memory storage, an attention mechanism, and a reasoning module.",
"While other models have focused on different parts of these components, we think that is important to find ways to combine these different mechanisms if we want to build models capable of complex reasoning.",
"Evidence from cognitive sciences seems to show that all these abilities are needed in order to achieve human-level complex reasoning."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"3.1",
"3.2",
"3.3",
"3.4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Model",
"W-MemN2N for Textual Question Answering",
"Memory Augmented Neural Networks",
"Memory Networks",
"Relation Networks",
"Cognitive Science",
"Textual Question Answering",
"Visual Question Answering",
"From O(n 2 ) to O(n)",
"Memory Visualizations",
"Conclusion"
]
} | GEM-SciDuet-train-20#paper-1018#slide-5 | Results Jointly trained bAbI 10k | Note that EntNet (Henaff et al.) solves all tasks in the per-task version: A single model for each task.
Models compared: LSTM (Sukhbaatar et al.), MemNN (Sukhbaatar et al.), MemNN-S (Sukhbaatar et al.), RN (Santoro et al.), SDNC (Rae et al.), WMemNN (Pavez et al.), WMemNN* (Pavez et al.) | Note that EntNet (Henaff et al.) solves all tasks in the per-task version: A single model for each task.
Models compared: LSTM (Sukhbaatar et al.), MemNN (Sukhbaatar et al.), MemNN-S (Sukhbaatar et al.), RN (Santoro et al.), SDNC (Rae et al.), WMemNN (Pavez et al.), WMemNN* (Pavez et al.) | [] |
GEM-SciDuet-train-20#paper-1018#slide-6 | 1018 | Working Memory Networks: Augmenting Memory Networks with a Relational Reasoning Module | During the last years, there has been a lot of interest in achieving some kind of complex reasoning using deep neural networks. To do that, models like Memory Networks (MemNNs) have combined external memory storages and attention mechanisms. These architectures, however, lack of more complex reasoning mechanisms that could allow, for instance, relational reasoning. Relation Networks (RNs), on the other hand, have shown outstanding results in relational reasoning tasks. Unfortunately, their computational cost grows quadratically with the number of memories, something prohibitive for larger problems. To solve these issues, we introduce the Working Memory Network, a MemNN architecture with a novel working memory storage and reasoning module. Our model retains the relational reasoning abilities of the RN while reducing its computational complexity from quadratic to linear. We tested our model on the text QA dataset bAbI and the visual QA dataset NLVR. In the jointly trained bAbI-10k, we set a new state-of-the-art, achieving a mean error of less than 0.5%. Moreover, a simple ensemble of two of our models solves all 20 tasks in the joint version of the benchmark. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260
],
"paper_content_text": [
"Introduction A central ability needed to solve daily tasks is complex reasoning.",
"It involves the capacity to comprehend and represent the environment, retain information from past experiences, and solve problems based on the stored information.",
"Our ability to solve those problems is supported by multiple specialized components, including shortterm memory storage, long-term semantic and procedural memory, and an executive controller that, among others, controls the attention over memories (Baddeley, 1992) .",
"Many promising advances for achieving complex reasoning with neural networks have been obtained during the last years.",
"Unlike symbolic approaches to complex reasoning, deep neural networks can learn representations from perceptual information.",
"Because of that, they do not suffer from the symbol grounding problem (Harnad, 1999) , and can generalize better than classical symbolic approaches.",
"Most of these neural network models make use of an explicit memory storage and an attention mechanism.",
"For instance, Memory Networks (MemNN), Dynamic Memory Networks (DMN) or Neural Turing Machines (NTM) (Weston et al., 2014; Kumar et al., 2016; Graves et al., 2014) build explicit memories from the perceptual inputs and access these memories using learned attention mechanisms.",
"After that some memories have been attended, using a multi-step procedure, the attended memories are combined and passed through a simple output layer that produces a final answer.",
"While this allows some multi-step inferential process, these networks lack a more complex reasoning mechanism, needed for more elaborated tasks such as inferring relations among entities (relational reasoning).",
"On the contrary, Relation Networks (RNs), proposed in Santoro et al.",
"(2017) , have shown outstanding performance in relational reasoning tasks.",
"Nonetheless, a major drawback of RNs is that they consider each of the input objects in pairs, having to process a quadratic number of relations.",
"That limits the usability of the model on large problems and makes forward and backward computations quite expensive.",
"To solve these problems we propose a novel Memory Network Figure 1 : The W-MemNN model applied to textual question answering.",
"Each input fact is processed using a GRU, and the output representation is stored in the short-term memory storage.",
"Then, the attentional controller computes an output vector that summarizes relevant parts of the memories.",
"This process is repeated H hops (a dotted line delimits each hop), and each output is stored in the working memory buffer.",
"Finally, the output of each hop is passed to the reasoning module that produces the final output.",
"architecture called the Working Memory Network (W-MemNN).",
"Our model augments the original MemNN with a relational reasoning module and a new working memory buffer.",
"The attention mechanism of the Memory Network allows the filtering of irrelevant inputs, reducing a lot of the computational complexity while keeping the relational reasoning capabilities of the RN.",
"Three main components compose the W-MemNN: An input module that converts the perceptual inputs into an internal vector representation and save these representations into a short-term storage, an attentional controller that attend to these internal representations and update a working memory buffer, and a reasoning module that operates on the set of objects stored in the working memory buffer in order to produce a final answer.",
"This component-based architecture is inspired by the well-known model from cognitive sciences called the multi-component working memory model, proposed in Baddeley and Hitch (1974) .",
"We studied the proposed model on the text-based QA benchmark bAbI which consists of 20 different toy tasks that measure different reasoning skills.",
"While models such as Ent-Net (Henaff et al., 2016) have focused on the pertask training version of the benchmark (where a different model is trained for each task), we decided to focus on the jointly trained version of the task, where the model is trained on all tasks simultaneously.",
"In the jointly trained bAbI-10k benchmark we achieved state-of-the-art performance, improving the previous state-of-the-art on more than 2%.",
"Moreover, a simple ensemble of two of our models can solve all 20 tasks simultaneously.",
"Also, we tested our model on the visual QA dataset NLVR.",
"In that dataset, we obtained performance at the level of the Module Neural Networks (Andreas et al., 2016) .",
"Our model, however, achieves these results using the raw input statements, without the extra text processing used in the Module Networks.",
"Finally, qualitative and quantitative analysis shows that the inclusion of the Relational Reasoning module is crucial to improving the performance of the MemNN on tasks that involve relational reasoning.",
"We can achieve this performance by also reducing the computation times of the RN considerably.",
"Consequently, we hope that this contribution may allow applying RNs to larger problems.",
"Model Our model is based on the Memory Network architecture.",
"Unlike MemNN we have included a reasoning module that helps the network to solve more complex tasks.",
"The proposed model consists of three main modules: An input module, an at-tentional controller, and a reasoning module.",
"The model processes the input information in multiple passes or hops.",
"At each pass the output of the previous hop can condition the current pass, allowing some incremental refinement.",
"Input module: The input module converts the perceptual information into an internal feature representation.",
"The input information can be processed in chunks, and each chunk is saved into a short-term storage.",
"The definition of what is a chunk of information depends on each task.",
"For instance, for textual question answering, we define each chunk as a sentence.",
"Other options might be n-grams or full documents.",
"This short-term storage can only be accessed during the hop.",
"Attentional Controller: The attentional controller decides in which parts of the short-term storage the model should focus.",
"The attended memories are kept during all the hops in a working memory buffer.",
"The attentional controller is conditioned by the task at hand, for instance, in question answering the question can condition the attention.",
"Also, it may be conditioned by the output of previous hops, allowing the model to change its focus to new portions of the memory over time.",
"Many models compute the attention for each memory using a compatibility function between the memory and the question.",
"Then, the output is calculated as the weighted sum of the memory values, using the attention as weight.",
"A simple way to compute the attention for each memory is to use dot-product attention.",
"This kind of mechanism is used in the original Memory Network and computes the attention value as the dot product between each memory and the question.",
"Although this kind of attention is simple, it may not be enough for more complex tasks.",
"Also, since there are no learned weights in the attention mechanism, the attention relies entirely on the learned embeddings.",
"That is something that we want to avoid in order to separate the learning of the input and attention module.",
"One way to allow learning in the dot-product attention is to project the memories and query vectors linearly.",
"That is done by multiplying each vector by a learned projection matrix (or equivalently a feed-forward neural network).",
"In this way, we can set apart the attention and input embeddings learning, and also allow more complex patterns of attention.",
"Reasoning Module: The memories stored in the working memory buffer are passed to the rea-soning module.",
"The choice of reasoning mechanism is left open and may depend on the task at hand.",
"In this work, we use a Relation Network as the reasoning module.",
"The RN takes the attended memories in pairs to infer relations among the memories.",
"That can be useful, for example, in tasks that include comparisons.",
"A detailed description of the full model is shown in Figure 1 .",
"W-MemN2N for Textual Question Answering We proceed to describe an implementation of the model for textual question answering.",
"In textual question answering the input consists of a set of sentences or facts, a question, and an answer.",
"The goal is to answer the question correctly based on the given facts.",
"Let (s, q, a) represents an input sample, consisting of a set of sentences s = {x i } L i=1 , a query q and an answer a.",
"Each sentence contains M words, {w i } M i=1 , where each word is represented as a onehot vector of length |V |, being |V | the vocabulary size.",
"The question contains Q words, represented as in the input sentences.",
"Input Module Each word in each sentence is encoded into a vector representation v i using an embedding matrix W ∈ R |V |×d , where d is the embedding size.",
"Then, the sentence is converted into a memory vector m i using the final output of a gated recurrent neural network (GRU) (Chung et al., 2014) : m i = GRU([v 1 , v 2 , ..., v M ]) Each memory {m i } L i=1 , where m i ∈ R d , is stored into the short-term memory storage.",
"The question is encoded into a vector u in a similar way, using the output of a gated recurrent network.",
"Attentional Controller Our attention module is based on the Multi-Head attention mechanism proposed in Vaswani et al.",
"(2017) .",
"First, the memories are projected using a projection matrix W m ∈ R d×d , as m i = W m m i .",
"Then, the similarity between the projected memory and the question is computed using the Scaled Dot-Product attention: α i = Softmax u T m i √ d (1) = exp((u T m i )/ √ d) j exp((u T m j )/ √ d) .",
"(2) Next, the memories are combined using the attention weights α i , obtaining an output vector h = j α j m j .",
"In the Multi-Head mechanism, the memories are projected S times using different projection matrices {W s m } S s=1 .",
"For each group of projected memories, an output vector {h i } S i=1 is obtained using the Scaled Dot-Product attention (eq.",
"2).",
"Finally, all vector outputs are concatenated and projected again using a different matrix: o k = [h 1 ; h 2 ; ...; h S ]W o , where ; is the concatenation operator and W o ∈ R Sd×d .",
"The o k vector is the final response vector for the hop k. This vector is stored in the working memory buffer.",
"The attention procedure can be repeated many times (or hops).",
"At each hop, the attention can be conditioned on the previous hop by replacing the question vector u by the output of the previous hop.",
"To do that we pass the output through a simple neural network f t .",
"Then, we use the output of the network as the new conditioner: o n k = f t (o k ).",
"(3) This network allows some learning in the transition patterns between hops.",
"We found Multi-Head attention to be very useful in the joint bAbI task.",
"This can be a product of the intrinsic multi-task nature of the bAbI dataset.",
"A possibility is that each attention head is being adapted for different groups of related tasks.",
"However, we did not investigate this further.",
"Also, note that while in this section we use the same set of memories at each hop, this is not necessary.",
"For larger sequences each hop can operate in different parts of the input sequence, allowing the processing of the input in various steps.",
"Reasoning Module The outputs stored in the working memory buffer are passed to the reasoning module.",
"The reasoning module used in this work is a Relation Network (RN).",
"In the RN the output vectors are concatenated in pairs together with the question vector.",
"Each pair is passed through a neural network g θ and all the outputs of the network are added to produce a single vector.",
"Then, the sum is passed to a final neural network f φ : r = f φ i,j g θ ([o i ; o j ; u]) , (4) The output of the Relation Network is then passed through a final weight matrix and a softmax to produce the predicted answer: a = Softmax(V r), (5) where V ∈ R |A|×d φ , |A| is the number of possible answers and d φ is the dimension of the output of f φ .",
"The full network is trained end-to-end using standard cross-entropy betweenâ and the true label a.",
"3 Related Work Memory Augmented Neural Networks During the last years, there has been plenty of work on achieving complex reasoning with deep neural networks.",
"An important part of these developments has used some kind of explicit memory and attention mechanisms.",
"One of the earliest recent work is that of Memory Networks (Weston et al., 2014) .",
"Memory Networks work by building an addressable memory from the inputs and then accessing those memories in a series of reading operations.",
"Another, similar, line of work is the one of Neural Turing Machines.",
"They were proposed in Graves et al.",
"(2014) and are the basis for recent neural architectures including the Differentiable Neural Computer (DNC) and the Sparse Access Memory (SAM) Rae et al., 2016) .",
"The NTM model also uses a content addressable memory, as in the Memory Network, but adds a write operation that allows updating the memory over time.",
"The management of the memory, however, is different from the one of the MemNN.",
"While the MemNN model pre-load the memories using all the inputs, the NTM writes and read the memory one input at a time.",
"An additional model that makes use of explicit external memory is the Dynamic Memory Network (DMN) (Kumar et al., 2016; Xiong et al., 2016) .",
"The model shares some similarities with the Memory Network model.",
"However, unlike the MemNN model, it operates in the input sequentially (as in the NTM model).",
"The model defines an Episodic Memory module that makes use of a Gated Recurrent Neural Network (GRU) to store and update an internal state that represents the episodic storage.",
"Memory Networks Since our model is based on the MemNN architecture, we proceed to describe it in more detail.",
"The Memory Network model was introduced in Weston et al.",
"(2014) .",
"In that work, the authors proposed a model composed of four components: The input feature map that converts the input into an internal vector representation, the generalization module that updates the memories given the input, the output feature map that produces a new output using the stored memories, and the response module that produces the final answer.",
"The model, as initially proposed, needed some strong supervision that explicitly tells the model which memories to attend.",
"In order to solve that limitation, the End-To-End Memory Network (MemN2N) was proposed in Sukhbaatar et al.",
"(2015) .",
"The model replaced the hard-attention mechanism used in the original MemNN by a softattention mechanism that allowed to train it endto-end without strong supervision.",
"In our model, we use a component-based approach, as in the original MemNN architecture.",
"However, there are some differences: First, our model makes use of two external storages: a short-term storage, and a working memory buffer.",
"The first is equivalent to the one updated by the input and generalization module of the MemNN.",
"The working memory buffer, on the other hand, does not have a counterpart in the original model.",
"Second, our model replaces the response module by a reasoning module.",
"Unlike the original MemNN, our reasoning module is intended to make more complex work than the response module, that was only designed to produce a final answer.",
"Relation Networks The ability to infer and learn relations between entities is fundamental to solve many complex reasoning problems.",
"Recently, a number of neural network models have been proposed for this task.",
"These include Interaction Networks, Graph Neural Networks, and Relation Networks (Battaglia et al., 2016; Scarselli et al., 2009; Santoro et al., 2017) .",
"In specific, Relation Networks (RNs) have shown excellent results in solving textual and visual question answering tasks requiring relational reasoning.",
"The model is relatively simple: First, all the inputs are grouped in pairs and each pair is passed through a neural network.",
"Then, the outputs of the first network are added, and another neural network processes the final vector.",
"The role of the first network is to infer relations among each pair of objects.",
"In Palm et al.",
"(2017) the authors propose a recurrent extension to the RN.",
"By allowing multiple steps of relational reasoning, the model can learn to solve more complex tasks.",
"The main issue with the RN architecture is that its scale very poorly for larger problems.",
"That is because it operates on O(n 2 ) pairs, where n is the number of input objects (for instance, sentences in the case of textual question answering).",
"This becomes quickly prohibitive for tasks involving many input objects.",
"Cognitive Science The concept of working memory has been extensively developed in cognitive psychology.",
"It consists of a limited capacity system that allows temporary storage and manipulation of information and is crucial to any reasoning task.",
"One of the most influential models of working memory is the multi-component model of working memory proposed by Baddeley and Hitch (1974) .",
"This model is composed both of a supervisory attentional controller (the central executive) and two short-term storage systems: The phonological loop, capable of holding speech-based information, and the visuospatial sketchpad, concerned with visual storage.",
"The central executive plays various functions, including the capacity to focus attention, to divide attention and to control access to long-term memory.",
"Later modifications to the model (Baddeley, 2000) include an episodic buffer that is capable of integrating and holding information from different sources.",
"Connections of the working memory model to memory augmented neural networks have been already studied in Graves et al.",
"(2014) .",
"We follow this effort and subdivide our model into components that resemble (in a basic way) the multi-component model of working memory.",
"Note, however, that we use the term working memory buffer instead of episodic buffer.",
"That is because the episodic buffer has an integration function that our model does not cover.",
"However, that can be an interesting source of inspiration for next versions of the model that integrate both visual and textual information for question answering.",
"Experiments Textual Question Answering To evaluate our model on textual question answering we used the Facebook bAbI-10k dataset .",
"The bAbI dataset is a textual LSTM Sukhbaatar et al.",
"(2015) .",
"Results for SDNC are took from Rae et al.",
"(2016) .",
"WMN † is an ensemble of two Working Memory Networks.",
"MN-S MN SDNC WMN WMN QA benchmark composed of 20 different tasks.",
"Each task is designed to test a different reasoning skill, such as deduction, induction, and coreference resolution.",
"Some of the tasks need relational reasoning, for instance, to compare the size of different entities.",
"Each sample is composed of a question, an answer, and a set of facts.",
"There are two versions of the dataset, referring to different dataset sizes: bAbI-1k and bAbI-10k.",
"In this work, we focus on the bAbI-10k version of the dataset which consists of 10, 000 training samples per task.",
"A task is considered solved if a model achieves greater than 95% accuracy.",
"Note that training can be done per-task or joint (by training the model on all tasks at the same time).",
"Some models (Liu and Perez, 2017) have focused in the per-task training performance, including the Ent-Net model (Henaff et al., 2016) that solves all the tasks in the per-task training version.",
"We choose to focus on the joint training version since we think is more indicative of the generalization properties of the model.",
"A detailed analysis of the dataset can be found in Lee et al.",
"(2015) .",
"Model Details To encode the input facts we used a word embedding that projected each word in a sentence into a real vector of size d. We defined d = 30 and used a GRU with 30 units to process each sentence.",
"We used the 30 sentences in the support set that were immediately prior to the question.",
"The question was processed using the same configuration but with a different GRU.",
"We used 8 heads in the Multi-Head attention mechanism.",
"For the transition networks f t , which operates in the output of each hop, we used a two-layer MLP consisting of 15 and 30 hidden units (so the output preserves the memory dimension).",
"We used H = 4 hops (or equivalently, a working memory buffer of size 4).",
"In the reasoning module, we used a 3layer MLP consisting of 128 units in each layer and with ReLU non-linearities for g θ .",
"We omitted the f φ network since we did not observe improvements when using it.",
"The final layer was a linear layer that produced logits for a softmax over the answer vocabulary.",
"Training Details We trained our model end-to-end with a crossentropy loss function and using the Adam optimizer (Kingma and Ba, 2014).",
"We used a learning rate of ν = 1e −3 .",
"We trained the model during 400 epochs.",
"For training, we used a batch size of 32.",
"As in Sukhbaatar et al.",
"(2015) we did not average the loss over a batch.",
"Also, we clipped gradients with norm larger than 40 (Pascanu et al., 2013) .",
"For all the dense layers we used 2 regularization with value 1e −3 .",
"All weights were initialized using Glorot normal initialization (Glorot and Bengio, 2010) .",
"10% of the training set was heldout to form a validation set that we used to select the architecture and for hyperparameter tunning.",
"In some cases, we found useful to restart training after the 400 epochs with a smaller learning rate of 1e −5 and anneals every 5 epochs by ν/2 until 20 epochs were reached.",
"bAbI-10k Results On the jointly trained bAbI-10k dataset our best model (out of 10 runs) achieves an accuracy of 99.58%.",
"That is a 2.38% improvement over the previous state-of-the-art that was obtained by the Sparse Differential Neural Computer (SDNC) (Rae et al., 2016) .",
"The best model of the 10 runs solves almost all tasks of the bAbI-10k dataset (by a 0.3% margin).",
"However, a simple ensemble of the best two models solves all 20 tasks and achieves an almost perfect accuracy of 99.7%.",
"We list the results for each task in Table 1 .",
"Other authors have reported high variance in the results, for instance, the authors of the SDNC report a mean accuracy and standard deviation over 15 runs of 93.6 ± 2.5 (with 15.9 ± 1.6 passed tasks).",
"In contrast, our model achieves a mean accuracy of 98.3 ± 1.2 (with 18.6 ± 0.4 passed tasks), which is better and more stable than the average results obtained by the SDNC.",
"The Relation Network solves 18/20 tasks.",
"We achieve even better performance, and with considerably fewer computations, as is explained in Section 4.3.",
"We think that by including the attention mechanism, the relation reasoning module can focus on learning the relation among relevant objects, instead of learning spurious relations among irrelevant objects.",
"For that, the Multi-Head attention mechanism was very helpful.",
"The Effect of the Relational Reasoning Module When compared to the original Memory Network, our model substantially improves the accuracy of tasks 17 (positional reasoning) and 19 (path finding).",
"Both tasks require the analysis of multiple relations (Lee et al., 2015) .",
"For instance, the task 19 needs that the model reasons about the relation of different positions of the entities, and in that way find a path to arrive from one to another.",
"The accuracy improves in 75.1% for task 19 and in 41.5% for task 17 when compared with the MemN2N model.",
"Since both tasks require reasoning about relations, we hypothesize that the relational reasoning module of the W-MemNN was of great help to improve the performance on both tasks.",
"The Relation Network, on the other hand, fails in the tasks 2 (2 supporting facts) and 3 (3 supporting facts).",
"Both tasks require handling a significant number of facts, especially in task 3.",
"In those cases, the attention mechanism is crucial to filter out irrelevant facts.",
"Visual Question Answering To further study our model we evaluated its performance on a visual question answering dataset.",
"For that, we used the recently proposed NLVR dataset (Suhr et al., 2017) .",
"Each sample in the NLVR dataset is composed of an image with three sub-images and a statement.",
"The task consists in judging if the statement is true or false for that image.",
"Evaluating the statement requires reasoning about the sets of objects in the image, comparing objects properties, and reasoning about spatial relations.",
"The dataset is interesting for us for two reasons.",
"First, the statements evaluation requires complex relational reasoning about the objects in the image.",
"Second, unlike the bAbI dataset, the statements are written in natural language.",
"Because of that, each statement displays a range of syntactic and semantic phenomena that are not present in the bAbI dataset.",
"Model details Our model can be easily adapted to deal with visual information.",
"Following the idea from Santoro et al.",
"(2017) , instead of processing each input using a recurrent neural network, we use a Convolutional Neural Network (CNN).",
"The CNN takes as input each sub-image and convolved them through convolutional layers.",
"The output of the CNN consists of k feature maps (where k is the number of kernels in the final convolutional layer) of size d × d. Then, each memory is built from the vector composed by the concatenation of the cells in the same position of each feature map.",
"Consequently, d × d memories of size k are stored in the shortterm storage.",
"The statement is processed using a GRU neural network as in the textual reasoning task.",
"Then, we can proceed using the same architecture for the reasoning and attention module that the one used in the textual QA model.",
"However, for the visual QA task, we used an additive attention mechanism.",
"The additive attention computes the attention weight using a feed-forward neural network applied to the concatenation of the memory vector and statement vector.",
"Results Our model achieves a validation / test accuracy of 65.6%/65.8%.",
"Notably, we achieved a performance comparable to the results of the Module Neural Networks (Andreas et al., 2016 ) that make use of standard NLP tools to process the statements into structured representations.",
"Unlike the Module Neural Networks, we achieved our results using only raw input statements, allowing the model to learn how to process the textual input by itself.",
"Note that given the more complex nature of the language used in the NLVR dataset we needed to use a larger embedding size and GRU hidden layer than in the bAbI dataset (100 and 128 respectively).",
"That, however, is a nice feature of separating the input from the reasoning and attention component: One way to process more complex language statements is increasing the capacity of the input module.",
"From O(n 2 ) to O(n) One of the major limitations of RNs is that they need to process each one of the memories in pairs.",
"To do that, the RN must perform O(n 2 ) forward and backward passes (where n is the number of memories).",
"That becomes quickly prohibitive for a larger number of memories.",
"In contrast, the dependence of the W-MemNN run times on the number of memories is linear.",
"Note, however, that computation times in the W-MemNN depend quadratically on the size of the working memory buffer.",
"Nonetheless, this number is expected to be much smaller than the number of memories.",
"To compare both models we measured the wall-clock time for a forward and backward pass for a single batch of size 32.",
"We performed these experiments on a GPU NVIDIA K80.",
"Figure 2 shows the results.",
"Memory Visualizations One nice feature from Memory Networks is that they allow some interpretability of the reasoning procedure by looking at the attention weights.",
"At each hop, the attention weights show which parts of the memory the model found relevant to produce the output.",
"RNs, on the contrary, lack of this feature.",
"Table 2 shows the attention values for visual and textual question answering.",
"Relation Network W-MemNN Figure 2 : Wall-clock times for a forward and backward pass for a single batch.",
"The batch size used is 32.",
"While for 5 memories the times are comparable, for 30 memories the W-MemNN takes around 50s while the RN takes 930s, a speedup of almost 20×.",
"Conclusion We have proposed a novel Working Memory Network architecture that introduces improved reasoning abilities to the original MemNN model.",
"We demonstrated that by augmenting the MemNN architecture with a Relation Network, the computational complexity of the RN can be reduced, without loss of performance.",
"That opens the opportunity for using RNs in larger problems, something that may be very useful, given the many tasks requiring a significant amount of memories.",
"Although we have used RN as the reasoning module in this work, other options can be tested.",
"It might be interesting to analyze how other reasoning modules can improve different weaknesses of the model.",
"We presented results on the jointly trained bAbI-10k dataset, where we achieve a new state-of-theart, with an average error of less than 0.5%.",
"Also, we showed that our model can be easily adapted for visual question answering.",
"Our architecture combines perceptual input processing, short-term memory storage, an attention mechanism, and a reasoning module.",
"While other models have focused on different parts of these components, we think that is important to find ways to combine these different mechanisms if we want to build models capable of complex reasoning.",
"Evidence from cognitive sciences seems to show that all these abilities are needed in order to achieve human-level complex reasoning."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"3.1",
"3.2",
"3.3",
"3.4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Model",
"W-MemN2N for Textual Question Answering",
"Memory Augmented Neural Networks",
"Memory Networks",
"Relation Networks",
"Cognitive Science",
"Textual Question Answering",
"Visual Question Answering",
"From O(n 2 ) to O(n)",
"Memory Visualizations",
"Conclusion"
]
} | GEM-SciDuet-train-20#paper-1018#slide-6 | Ablations | complex attention patterns multiple relations
2 supporting facts 3 supporting facts counting basic induction size reasoning positional reasoning path finding | complex attention patterns multiple relations
2 supporting facts 3 supporting facts counting basic induction size reasoning positional reasoning path finding | [] |
GEM-SciDuet-train-20#paper-1018#slide-7 | 1018 | Working Memory Networks: Augmenting Memory Networks with a Relational Reasoning Module | During the last years, there has been a lot of interest in achieving some kind of complex reasoning using deep neural networks. To do that, models like Memory Networks (MemNNs) have combined external memory storages and attention mechanisms. These architectures, however, lack more complex reasoning mechanisms that could allow, for instance, relational reasoning. Relation Networks (RNs), on the other hand, have shown outstanding results in relational reasoning tasks. Unfortunately, their computational cost grows quadratically with the number of memories, something prohibitive for larger problems. To solve these issues, we introduce the Working Memory Network, a MemNN architecture with a novel working memory storage and reasoning module. Our model retains the relational reasoning abilities of the RN while reducing its computational complexity from quadratic to linear. We tested our model on the text QA dataset bAbI and the visual QA dataset NLVR. In the jointly trained bAbI-10k, we set a new state-of-the-art, achieving a mean error of less than 0.5%. Moreover, a simple ensemble of two of our models solves all 20 tasks in the joint version of the benchmark. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260
],
"paper_content_text": [
"Introduction A central ability needed to solve daily tasks is complex reasoning.",
"It involves the capacity to comprehend and represent the environment, retain information from past experiences, and solve problems based on the stored information.",
"Our ability to solve those problems is supported by multiple specialized components, including shortterm memory storage, long-term semantic and procedural memory, and an executive controller that, among others, controls the attention over memories (Baddeley, 1992) .",
"Many promising advances for achieving complex reasoning with neural networks have been obtained during the last years.",
"Unlike symbolic approaches to complex reasoning, deep neural networks can learn representations from perceptual information.",
"Because of that, they do not suffer from the symbol grounding problem (Harnad, 1999) , and can generalize better than classical symbolic approaches.",
"Most of these neural network models make use of an explicit memory storage and an attention mechanism.",
"For instance, Memory Networks (MemNN), Dynamic Memory Networks (DMN) or Neural Turing Machines (NTM) (Weston et al., 2014; Kumar et al., 2016; Graves et al., 2014) build explicit memories from the perceptual inputs and access these memories using learned attention mechanisms.",
"After that some memories have been attended, using a multi-step procedure, the attended memories are combined and passed through a simple output layer that produces a final answer.",
"While this allows some multi-step inferential process, these networks lack a more complex reasoning mechanism, needed for more elaborated tasks such as inferring relations among entities (relational reasoning).",
"On the contrary, Relation Networks (RNs), proposed in Santoro et al.",
"(2017) , have shown outstanding performance in relational reasoning tasks.",
"Nonetheless, a major drawback of RNs is that they consider each of the input objects in pairs, having to process a quadratic number of relations.",
"That limits the usability of the model on large problems and makes forward and backward computations quite expensive.",
"To solve these problems we propose a novel Memory Network Figure 1 : The W-MemNN model applied to textual question answering.",
"Each input fact is processed using a GRU, and the output representation is stored in the short-term memory storage.",
"Then, the attentional controller computes an output vector that summarizes relevant parts of the memories.",
"This process is repeated H hops (a dotted line delimits each hop), and each output is stored in the working memory buffer.",
"Finally, the output of each hop is passed to the reasoning module that produces the final output.",
"architecture called the Working Memory Network (W-MemNN).",
"Our model augments the original MemNN with a relational reasoning module and a new working memory buffer.",
"The attention mechanism of the Memory Network allows the filtering of irrelevant inputs, reducing a lot of the computational complexity while keeping the relational reasoning capabilities of the RN.",
"Three main components compose the W-MemNN: An input module that converts the perceptual inputs into an internal vector representation and save these representations into a short-term storage, an attentional controller that attend to these internal representations and update a working memory buffer, and a reasoning module that operates on the set of objects stored in the working memory buffer in order to produce a final answer.",
"This component-based architecture is inspired by the well-known model from cognitive sciences called the multi-component working memory model, proposed in Baddeley and Hitch (1974) .",
"We studied the proposed model on the text-based QA benchmark bAbI which consists of 20 different toy tasks that measure different reasoning skills.",
"While models such as Ent-Net (Henaff et al., 2016) have focused on the pertask training version of the benchmark (where a different model is trained for each task), we decided to focus on the jointly trained version of the task, where the model is trained on all tasks simultaneously.",
"In the jointly trained bAbI-10k benchmark we achieved state-of-the-art performance, improving the previous state-of-the-art on more than 2%.",
"Moreover, a simple ensemble of two of our models can solve all 20 tasks simultaneously.",
"Also, we tested our model on the visual QA dataset NLVR.",
"In that dataset, we obtained performance at the level of the Module Neural Networks (Andreas et al., 2016) .",
"Our model, however, achieves these results using the raw input statements, without the extra text processing used in the Module Networks.",
"Finally, qualitative and quantitative analysis shows that the inclusion of the Relational Reasoning module is crucial to improving the performance of the MemNN on tasks that involve relational reasoning.",
"We can achieve this performance by also reducing the computation times of the RN considerably.",
"Consequently, we hope that this contribution may allow applying RNs to larger problems.",
"Model Our model is based on the Memory Network architecture.",
"Unlike MemNN we have included a reasoning module that helps the network to solve more complex tasks.",
"The proposed model consists of three main modules: An input module, an at-tentional controller, and a reasoning module.",
"The model processes the input information in multiple passes or hops.",
"At each pass the output of the previous hop can condition the current pass, allowing some incremental refinement.",
"Input module: The input module converts the perceptual information into an internal feature representation.",
"The input information can be processed in chunks, and each chunk is saved into a short-term storage.",
"The definition of what is a chunk of information depends on each task.",
"For instance, for textual question answering, we define each chunk as a sentence.",
"Other options might be n-grams or full documents.",
"This short-term storage can only be accessed during the hop.",
"Attentional Controller: The attentional controller decides in which parts of the short-term storage the model should focus.",
"The attended memories are kept during all the hops in a working memory buffer.",
"The attentional controller is conditioned by the task at hand, for instance, in question answering the question can condition the attention.",
"Also, it may be conditioned by the output of previous hops, allowing the model to change its focus to new portions of the memory over time.",
"Many models compute the attention for each memory using a compatibility function between the memory and the question.",
"Then, the output is calculated as the weighted sum of the memory values, using the attention as weight.",
"A simple way to compute the attention for each memory is to use dot-product attention.",
"This kind of mechanism is used in the original Memory Network and computes the attention value as the dot product between each memory and the question.",
"Although this kind of attention is simple, it may not be enough for more complex tasks.",
"Also, since there are no learned weights in the attention mechanism, the attention relies entirely on the learned embeddings.",
"That is something that we want to avoid in order to separate the learning of the input and attention module.",
"One way to allow learning in the dot-product attention is to project the memories and query vectors linearly.",
"That is done by multiplying each vector by a learned projection matrix (or equivalently a feed-forward neural network).",
"In this way, we can set apart the attention and input embeddings learning, and also allow more complex patterns of attention.",
"Reasoning Module: The memories stored in the working memory buffer are passed to the rea-soning module.",
"The choice of reasoning mechanism is left open and may depend on the task at hand.",
"In this work, we use a Relation Network as the reasoning module.",
"The RN takes the attended memories in pairs to infer relations among the memories.",
"That can be useful, for example, in tasks that include comparisons.",
"A detailed description of the full model is shown in Figure 1 .",
"W-MemN2N for Textual Question Answering We proceed to describe an implementation of the model for textual question answering.",
"In textual question answering the input consists of a set of sentences or facts, a question, and an answer.",
"The goal is to answer the question correctly based on the given facts.",
"Let (s, q, a) represents an input sample, consisting of a set of sentences s = {x i } L i=1 , a query q and an answer a.",
"Each sentence contains M words, {w i } M i=1 , where each word is represented as a onehot vector of length |V |, being |V | the vocabulary size.",
"The question contains Q words, represented as in the input sentences.",
"Input Module Each word in each sentence is encoded into a vector representation v i using an embedding matrix W ∈ R |V |×d , where d is the embedding size.",
"Then, the sentence is converted into a memory vector m i using the final output of a gated recurrent neural network (GRU) (Chung et al., 2014) : m i = GRU([v 1 , v 2 , ..., v M ]) Each memory {m i } L i=1 , where m i ∈ R d , is stored into the short-term memory storage.",
"The question is encoded into a vector u in a similar way, using the output of a gated recurrent network.",
"Attentional Controller Our attention module is based on the Multi-Head attention mechanism proposed in Vaswani et al.",
"(2017) .",
"First, the memories are projected using a projection matrix W m ∈ R d×d , as m i = W m m i .",
"Then, the similarity between the projected memory and the question is computed using the Scaled Dot-Product attention: α i = Softmax u T m i √ d (1) = exp((u T m i )/ √ d) j exp((u T m j )/ √ d) .",
"(2) Next, the memories are combined using the attention weights α i , obtaining an output vector h = j α j m j .",
"In the Multi-Head mechanism, the memories are projected S times using different projection matrices {W s m } S s=1 .",
"For each group of projected memories, an output vector {h i } S i=1 is obtained using the Scaled Dot-Product attention (eq.",
"2).",
"Finally, all vector outputs are concatenated and projected again using a different matrix: o k = [h 1 ; h 2 ; ...; h S ]W o , where ; is the concatenation operator and W o ∈ R Sd×d .",
"The o k vector is the final response vector for the hop k. This vector is stored in the working memory buffer.",
"The attention procedure can be repeated many times (or hops).",
"At each hop, the attention can be conditioned on the previous hop by replacing the question vector u by the output of the previous hop.",
"To do that we pass the output through a simple neural network f t .",
"Then, we use the output of the network as the new conditioner: o n k = f t (o k ).",
"(3) This network allows some learning in the transition patterns between hops.",
"We found Multi-Head attention to be very useful in the joint bAbI task.",
"This can be a product of the intrinsic multi-task nature of the bAbI dataset.",
"A possibility is that each attention head is being adapted for different groups of related tasks.",
"However, we did not investigate this further.",
"Also, note that while in this section we use the same set of memories at each hop, this is not necessary.",
"For larger sequences each hop can operate in different parts of the input sequence, allowing the processing of the input in various steps.",
"Reasoning Module The outputs stored in the working memory buffer are passed to the reasoning module.",
"The reasoning module used in this work is a Relation Network (RN).",
"In the RN the output vectors are concatenated in pairs together with the question vector.",
"Each pair is passed through a neural network g θ and all the outputs of the network are added to produce a single vector.",
"Then, the sum is passed to a final neural network f φ : r = f φ i,j g θ ([o i ; o j ; u]) , (4) The output of the Relation Network is then passed through a final weight matrix and a softmax to produce the predicted answer: a = Softmax(V r), (5) where V ∈ R |A|×d φ , |A| is the number of possible answers and d φ is the dimension of the output of f φ .",
"The full network is trained end-to-end using standard cross-entropy betweenâ and the true label a.",
"3 Related Work Memory Augmented Neural Networks During the last years, there has been plenty of work on achieving complex reasoning with deep neural networks.",
"An important part of these developments has used some kind of explicit memory and attention mechanisms.",
"One of the earliest recent work is that of Memory Networks (Weston et al., 2014) .",
"Memory Networks work by building an addressable memory from the inputs and then accessing those memories in a series of reading operations.",
"Another, similar, line of work is the one of Neural Turing Machines.",
"They were proposed in Graves et al.",
"(2014) and are the basis for recent neural architectures including the Differentiable Neural Computer (DNC) and the Sparse Access Memory (SAM) Rae et al., 2016) .",
"The NTM model also uses a content addressable memory, as in the Memory Network, but adds a write operation that allows updating the memory over time.",
"The management of the memory, however, is different from the one of the MemNN.",
"While the MemNN model pre-load the memories using all the inputs, the NTM writes and read the memory one input at a time.",
"An additional model that makes use of explicit external memory is the Dynamic Memory Network (DMN) (Kumar et al., 2016; Xiong et al., 2016) .",
"The model shares some similarities with the Memory Network model.",
"However, unlike the MemNN model, it operates in the input sequentially (as in the NTM model).",
"The model defines an Episodic Memory module that makes use of a Gated Recurrent Neural Network (GRU) to store and update an internal state that represents the episodic storage.",
"Memory Networks Since our model is based on the MemNN architecture, we proceed to describe it in more detail.",
"The Memory Network model was introduced in Weston et al.",
"(2014) .",
"In that work, the authors proposed a model composed of four components: The input feature map that converts the input into an internal vector representation, the generalization module that updates the memories given the input, the output feature map that produces a new output using the stored memories, and the response module that produces the final answer.",
"The model, as initially proposed, needed some strong supervision that explicitly tells the model which memories to attend.",
"In order to solve that limitation, the End-To-End Memory Network (MemN2N) was proposed in Sukhbaatar et al.",
"(2015) .",
"The model replaced the hard-attention mechanism used in the original MemNN by a softattention mechanism that allowed to train it endto-end without strong supervision.",
"In our model, we use a component-based approach, as in the original MemNN architecture.",
"However, there are some differences: First, our model makes use of two external storages: a short-term storage, and a working memory buffer.",
"The first is equivalent to the one updated by the input and generalization module of the MemNN.",
"The working memory buffer, on the other hand, does not have a counterpart in the original model.",
"Second, our model replaces the response module by a reasoning module.",
"Unlike the original MemNN, our reasoning module is intended to make more complex work than the response module, that was only designed to produce a final answer.",
"Relation Networks The ability to infer and learn relations between entities is fundamental to solve many complex reasoning problems.",
"Recently, a number of neural network models have been proposed for this task.",
"These include Interaction Networks, Graph Neural Networks, and Relation Networks (Battaglia et al., 2016; Scarselli et al., 2009; Santoro et al., 2017) .",
"In specific, Relation Networks (RNs) have shown excellent results in solving textual and visual question answering tasks requiring relational reasoning.",
"The model is relatively simple: First, all the inputs are grouped in pairs and each pair is passed through a neural network.",
"Then, the outputs of the first network are added, and another neural network processes the final vector.",
"The role of the first network is to infer relations among each pair of objects.",
"In Palm et al.",
"(2017) the authors propose a recurrent extension to the RN.",
"By allowing multiple steps of relational reasoning, the model can learn to solve more complex tasks.",
"The main issue with the RN architecture is that its scale very poorly for larger problems.",
"That is because it operates on O(n 2 ) pairs, where n is the number of input objects (for instance, sentences in the case of textual question answering).",
"This becomes quickly prohibitive for tasks involving many input objects.",
"Cognitive Science The concept of working memory has been extensively developed in cognitive psychology.",
"It consists of a limited capacity system that allows temporary storage and manipulation of information and is crucial to any reasoning task.",
"One of the most influential models of working memory is the multi-component model of working memory proposed by Baddeley and Hitch (1974) .",
"This model is composed both of a supervisory attentional controller (the central executive) and two short-term storage systems: The phonological loop, capable of holding speech-based information, and the visuospatial sketchpad, concerned with visual storage.",
"The central executive plays various functions, including the capacity to focus attention, to divide attention and to control access to long-term memory.",
"Later modifications to the model (Baddeley, 2000) include an episodic buffer that is capable of integrating and holding information from different sources.",
"Connections of the working memory model to memory augmented neural networks have been already studied in Graves et al.",
"(2014) .",
"We follow this effort and subdivide our model into components that resemble (in a basic way) the multi-component model of working memory.",
"Note, however, that we use the term working memory buffer instead of episodic buffer.",
"That is because the episodic buffer has an integration function that our model does not cover.",
"However, that can be an interesting source of inspiration for next versions of the model that integrate both visual and textual information for question answering.",
"Experiments Textual Question Answering To evaluate our model on textual question answering we used the Facebook bAbI-10k dataset .",
"The bAbI dataset is a textual LSTM Sukhbaatar et al.",
"(2015) .",
"Results for SDNC are took from Rae et al.",
"(2016) .",
"WMN † is an ensemble of two Working Memory Networks.",
"MN-S MN SDNC WMN WMN QA benchmark composed of 20 different tasks.",
"Each task is designed to test a different reasoning skill, such as deduction, induction, and coreference resolution.",
"Some of the tasks need relational reasoning, for instance, to compare the size of different entities.",
"Each sample is composed of a question, an answer, and a set of facts.",
"There are two versions of the dataset, referring to different dataset sizes: bAbI-1k and bAbI-10k.",
"In this work, we focus on the bAbI-10k version of the dataset which consists of 10, 000 training samples per task.",
"A task is considered solved if a model achieves greater than 95% accuracy.",
"Note that training can be done per-task or joint (by training the model on all tasks at the same time).",
"Some models (Liu and Perez, 2017) have focused in the per-task training performance, including the Ent-Net model (Henaff et al., 2016) that solves all the tasks in the per-task training version.",
"We choose to focus on the joint training version since we think is more indicative of the generalization properties of the model.",
"A detailed analysis of the dataset can be found in Lee et al.",
"(2015) .",
"Model Details To encode the input facts we used a word embedding that projected each word in a sentence into a real vector of size d. We defined d = 30 and used a GRU with 30 units to process each sentence.",
"We used the 30 sentences in the support set that were immediately prior to the question.",
"The question was processed using the same configuration but with a different GRU.",
"We used 8 heads in the Multi-Head attention mechanism.",
"For the transition networks f t , which operates in the output of each hop, we used a two-layer MLP consisting of 15 and 30 hidden units (so the output preserves the memory dimension).",
"We used H = 4 hops (or equivalently, a working memory buffer of size 4).",
"In the reasoning module, we used a 3layer MLP consisting of 128 units in each layer and with ReLU non-linearities for g θ .",
"We omitted the f φ network since we did not observe improvements when using it.",
"The final layer was a linear layer that produced logits for a softmax over the answer vocabulary.",
"Training Details We trained our model end-to-end with a crossentropy loss function and using the Adam optimizer (Kingma and Ba, 2014).",
"We used a learning rate of ν = 1e −3 .",
"We trained the model during 400 epochs.",
"For training, we used a batch size of 32.",
"As in Sukhbaatar et al.",
"(2015) we did not average the loss over a batch.",
"Also, we clipped gradients with norm larger than 40 (Pascanu et al., 2013) .",
"For all the dense layers we used 2 regularization with value 1e −3 .",
"All weights were initialized using Glorot normal initialization (Glorot and Bengio, 2010) .",
"10% of the training set was heldout to form a validation set that we used to select the architecture and for hyperparameter tunning.",
"In some cases, we found useful to restart training after the 400 epochs with a smaller learning rate of 1e −5 and anneals every 5 epochs by ν/2 until 20 epochs were reached.",
"bAbI-10k Results On the jointly trained bAbI-10k dataset our best model (out of 10 runs) achieves an accuracy of 99.58%.",
"That is a 2.38% improvement over the previous state-of-the-art that was obtained by the Sparse Differential Neural Computer (SDNC) (Rae et al., 2016) .",
"The best model of the 10 runs solves almost all tasks of the bAbI-10k dataset (by a 0.3% margin).",
"However, a simple ensemble of the best two models solves all 20 tasks and achieves an almost perfect accuracy of 99.7%.",
"We list the results for each task in Table 1 .",
"Other authors have reported high variance in the results, for instance, the authors of the SDNC report a mean accuracy and standard deviation over 15 runs of 93.6 ± 2.5 (with 15.9 ± 1.6 passed tasks).",
"In contrast, our model achieves a mean accuracy of 98.3 ± 1.2 (with 18.6 ± 0.4 passed tasks), which is better and more stable than the average results obtained by the SDNC.",
"The Relation Network solves 18/20 tasks.",
"We achieve even better performance, and with considerably fewer computations, as is explained in Section 4.3.",
"We think that by including the attention mechanism, the relation reasoning module can focus on learning the relation among relevant objects, instead of learning spurious relations among irrelevant objects.",
"For that, the Multi-Head attention mechanism was very helpful.",
"The Effect of the Relational Reasoning Module When compared to the original Memory Network, our model substantially improves the accuracy of tasks 17 (positional reasoning) and 19 (path finding).",
"Both tasks require the analysis of multiple relations (Lee et al., 2015) .",
"For instance, the task 19 needs that the model reasons about the relation of different positions of the entities, and in that way find a path to arrive from one to another.",
"The accuracy improves in 75.1% for task 19 and in 41.5% for task 17 when compared with the MemN2N model.",
"Since both tasks require reasoning about relations, we hypothesize that the relational reasoning module of the W-MemNN was of great help to improve the performance on both tasks.",
"The Relation Network, on the other hand, fails in the tasks 2 (2 supporting facts) and 3 (3 supporting facts).",
"Both tasks require handling a significant number of facts, especially in task 3.",
"In those cases, the attention mechanism is crucial to filter out irrelevant facts.",
"Visual Question Answering To further study our model we evaluated its performance on a visual question answering dataset.",
"For that, we used the recently proposed NLVR dataset (Suhr et al., 2017) .",
"Each sample in the NLVR dataset is composed of an image with three sub-images and a statement.",
"The task consists in judging if the statement is true or false for that image.",
"Evaluating the statement requires reasoning about the sets of objects in the image, comparing objects properties, and reasoning about spatial relations.",
"The dataset is interesting for us for two reasons.",
"First, the statements evaluation requires complex relational reasoning about the objects in the image.",
"Second, unlike the bAbI dataset, the statements are written in natural language.",
"Because of that, each statement displays a range of syntactic and semantic phenomena that are not present in the bAbI dataset.",
"Model details Our model can be easily adapted to deal with visual information.",
"Following the idea from Santoro et al.",
"(2017) , instead of processing each input using a recurrent neural network, we use a Convolutional Neural Network (CNN).",
"The CNN takes as input each sub-image and convolved them through convolutional layers.",
"The output of the CNN consists of k feature maps (where k is the number of kernels in the final convolutional layer) of size d × d. Then, each memory is built from the vector composed by the concatenation of the cells in the same position of each feature map.",
"Consequently, d × d memories of size k are stored in the shortterm storage.",
"The statement is processed using a GRU neural network as in the textual reasoning task.",
"Then, we can proceed using the same architecture for the reasoning and attention module that the one used in the textual QA model.",
"However, for the visual QA task, we used an additive attention mechanism.",
"The additive attention computes the attention weight using a feed-forward neural network applied to the concatenation of the memory vector and statement vector.",
"Results Our model achieves a validation / test accuracy of 65.6%/65.8%.",
"Notably, we achieved a performance comparable to the results of the Module Neural Networks (Andreas et al., 2016 ) that make use of standard NLP tools to process the statements into structured representations.",
"Unlike the Module Neural Networks, we achieved our results using only raw input statements, allowing the model to learn how to process the textual input by itself.",
"Note that given the more complex nature of the language used in the NLVR dataset we needed to use a larger embedding size and GRU hidden layer than in the bAbI dataset (100 and 128 respectively).",
"That, however, is a nice feature of separating the input from the reasoning and attention component: One way to process more complex language statements is increasing the capacity of the input module.",
"From O(n 2 ) to O(n) One of the major limitations of RNs is that they need to process each one of the memories in pairs.",
"To do that, the RN must perform O(n 2 ) forward and backward passes (where n is the number of memories).",
"That becomes quickly prohibitive for a larger number of memories.",
"In contrast, the dependence of the W-MemNN run times on the number of memories is linear.",
"Note, however, that computation times in the W-MemNN depend quadratically on the size of the working memory buffer.",
"Nonetheless, this number is expected to be much smaller than the number of memories.",
"To compare both models we measured the wall-clock time for a forward and backward pass for a single batch of size 32.",
"We performed these experiments on a GPU NVIDIA K80.",
"Figure 2 shows the results.",
"Memory Visualizations One nice feature from Memory Networks is that they allow some interpretability of the reasoning procedure by looking at the attention weights.",
"At each hop, the attention weights show which parts of the memory the model found relevant to produce the output.",
"RNs, on the contrary, lack of this feature.",
"Table 2 shows the attention values for visual and textual question answering.",
"Relation Network W-MemNN Figure 2 : Wall-clock times for a forward and backward pass for a single batch.",
"The batch size used is 32.",
"While for 5 memories the times are comparable, for 30 memories the W-MemNN takes around 50s while the RN takes 930s, a speedup of almost 20×.",
"Conclusion We have proposed a novel Working Memory Network architecture that introduces improved reasoning abilities to the original MemNN model.",
"We demonstrated that by augmenting the MemNN architecture with a Relation Network, the computational complexity of the RN can be reduced, without loss of performance.",
"That opens the opportunity for using RNs in larger problems, something that may be very useful, given the many tasks requiring a significant amount of memories.",
"Although we have used RN as the reasoning module in this work, other options can be tested.",
"It might be interesting to analyze how other reasoning modules can improve different weaknesses of the model.",
"We presented results on the jointly trained bAbI-10k dataset, where we achieve a new state-of-theart, with an average error of less than 0.5%.",
"Also, we showed that our model can be easily adapted for visual question answering.",
"Our architecture combines perceptual input processing, short-term memory storage, an attention mechanism, and a reasoning module.",
"While other models have focused on different parts of these components, we think that is important to find ways to combine these different mechanisms if we want to build models capable of complex reasoning.",
"Evidence from cognitive sciences seems to show that all these abilities are needed in order to achieve human-level complex reasoning."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"3.1",
"3.2",
"3.3",
"3.4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Model",
"W-MemN2N for Textual Question Answering",
"Memory Augmented Neural Networks",
"Memory Networks",
"Relation Networks",
"Cognitive Science",
"Textual Question Answering",
"Visual Question Answering",
"From O(n 2 ) to O(n)",
"Memory Visualizations",
"Conclusion"
]
} | GEM-SciDuet-train-20#paper-1018#slide-7 | Time comparison | For 30 memories there is a speedup of almost | For 30 memories there is a speedup of almost | [] |
GEM-SciDuet-train-20#paper-1018#slide-8 | 1018 | Working Memory Networks: Augmenting Memory Networks with a Relational Reasoning Module | During the last years, there has been a lot of interest in achieving some kind of complex reasoning using deep neural networks. To do that, models like Memory Networks (MemNNs) have combined external memory storages and attention mechanisms. These architectures, however, lack more complex reasoning mechanisms that could allow, for instance, relational reasoning. Relation Networks (RNs), on the other hand, have shown outstanding results in relational reasoning tasks. Unfortunately, their computational cost grows quadratically with the number of memories, something prohibitive for larger problems. To solve these issues, we introduce the Working Memory Network, a MemNN architecture with a novel working memory storage and reasoning module. Our model retains the relational reasoning abilities of the RN while reducing its computational complexity from quadratic to linear. We tested our model on the text QA dataset bAbI and the visual QA dataset NLVR. In the jointly trained bAbI-10k, we set a new state-of-the-art, achieving a mean error of less than 0.5%. Moreover, a simple ensemble of two of our models solves all 20 tasks in the joint version of the benchmark. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260
],
"paper_content_text": [
"Introduction A central ability needed to solve daily tasks is complex reasoning.",
"It involves the capacity to comprehend and represent the environment, retain information from past experiences, and solve problems based on the stored information.",
"Our ability to solve those problems is supported by multiple specialized components, including shortterm memory storage, long-term semantic and procedural memory, and an executive controller that, among others, controls the attention over memories (Baddeley, 1992) .",
"Many promising advances for achieving complex reasoning with neural networks have been obtained during the last years.",
"Unlike symbolic approaches to complex reasoning, deep neural networks can learn representations from perceptual information.",
"Because of that, they do not suffer from the symbol grounding problem (Harnad, 1999) , and can generalize better than classical symbolic approaches.",
"Most of these neural network models make use of an explicit memory storage and an attention mechanism.",
"For instance, Memory Networks (MemNN), Dynamic Memory Networks (DMN) or Neural Turing Machines (NTM) (Weston et al., 2014; Kumar et al., 2016; Graves et al., 2014) build explicit memories from the perceptual inputs and access these memories using learned attention mechanisms.",
"After that some memories have been attended, using a multi-step procedure, the attended memories are combined and passed through a simple output layer that produces a final answer.",
"While this allows some multi-step inferential process, these networks lack a more complex reasoning mechanism, needed for more elaborated tasks such as inferring relations among entities (relational reasoning).",
"On the contrary, Relation Networks (RNs), proposed in Santoro et al.",
"(2017) , have shown outstanding performance in relational reasoning tasks.",
"Nonetheless, a major drawback of RNs is that they consider each of the input objects in pairs, having to process a quadratic number of relations.",
"That limits the usability of the model on large problems and makes forward and backward computations quite expensive.",
"To solve these problems we propose a novel Memory Network Figure 1 : The W-MemNN model applied to textual question answering.",
"Each input fact is processed using a GRU, and the output representation is stored in the short-term memory storage.",
"Then, the attentional controller computes an output vector that summarizes relevant parts of the memories.",
"This process is repeated H hops (a dotted line delimits each hop), and each output is stored in the working memory buffer.",
"Finally, the output of each hop is passed to the reasoning module that produces the final output.",
"architecture called the Working Memory Network (W-MemNN).",
"Our model augments the original MemNN with a relational reasoning module and a new working memory buffer.",
"The attention mechanism of the Memory Network allows the filtering of irrelevant inputs, reducing a lot of the computational complexity while keeping the relational reasoning capabilities of the RN.",
"Three main components compose the W-MemNN: An input module that converts the perceptual inputs into an internal vector representation and save these representations into a short-term storage, an attentional controller that attend to these internal representations and update a working memory buffer, and a reasoning module that operates on the set of objects stored in the working memory buffer in order to produce a final answer.",
"This component-based architecture is inspired by the well-known model from cognitive sciences called the multi-component working memory model, proposed in Baddeley and Hitch (1974) .",
"We studied the proposed model on the text-based QA benchmark bAbI which consists of 20 different toy tasks that measure different reasoning skills.",
"While models such as Ent-Net (Henaff et al., 2016) have focused on the pertask training version of the benchmark (where a different model is trained for each task), we decided to focus on the jointly trained version of the task, where the model is trained on all tasks simultaneously.",
"In the jointly trained bAbI-10k benchmark we achieved state-of-the-art performance, improving the previous state-of-the-art on more than 2%.",
"Moreover, a simple ensemble of two of our models can solve all 20 tasks simultaneously.",
"Also, we tested our model on the visual QA dataset NLVR.",
"In that dataset, we obtained performance at the level of the Module Neural Networks (Andreas et al., 2016) .",
"Our model, however, achieves these results using the raw input statements, without the extra text processing used in the Module Networks.",
"Finally, qualitative and quantitative analysis shows that the inclusion of the Relational Reasoning module is crucial to improving the performance of the MemNN on tasks that involve relational reasoning.",
"We can achieve this performance by also reducing the computation times of the RN considerably.",
"Consequently, we hope that this contribution may allow applying RNs to larger problems.",
"Model Our model is based on the Memory Network architecture.",
"Unlike MemNN we have included a reasoning module that helps the network to solve more complex tasks.",
"The proposed model consists of three main modules: An input module, an at-tentional controller, and a reasoning module.",
"The model processes the input information in multiple passes or hops.",
"At each pass the output of the previous hop can condition the current pass, allowing some incremental refinement.",
"Input module: The input module converts the perceptual information into an internal feature representation.",
"The input information can be processed in chunks, and each chunk is saved into a short-term storage.",
"The definition of what is a chunk of information depends on each task.",
"For instance, for textual question answering, we define each chunk as a sentence.",
"Other options might be n-grams or full documents.",
"This short-term storage can only be accessed during the hop.",
"Attentional Controller: The attentional controller decides in which parts of the short-term storage the model should focus.",
"The attended memories are kept during all the hops in a working memory buffer.",
"The attentional controller is conditioned by the task at hand, for instance, in question answering the question can condition the attention.",
"Also, it may be conditioned by the output of previous hops, allowing the model to change its focus to new portions of the memory over time.",
"Many models compute the attention for each memory using a compatibility function between the memory and the question.",
"Then, the output is calculated as the weighted sum of the memory values, using the attention as weight.",
"A simple way to compute the attention for each memory is to use dot-product attention.",
"This kind of mechanism is used in the original Memory Network and computes the attention value as the dot product between each memory and the question.",
"Although this kind of attention is simple, it may not be enough for more complex tasks.",
"Also, since there are no learned weights in the attention mechanism, the attention relies entirely on the learned embeddings.",
"That is something that we want to avoid in order to separate the learning of the input and attention module.",
"One way to allow learning in the dot-product attention is to project the memories and query vectors linearly.",
"That is done by multiplying each vector by a learned projection matrix (or equivalently a feed-forward neural network).",
"In this way, we can set apart the attention and input embeddings learning, and also allow more complex patterns of attention.",
"Reasoning Module: The memories stored in the working memory buffer are passed to the rea-soning module.",
"The choice of reasoning mechanism is left open and may depend on the task at hand.",
"In this work, we use a Relation Network as the reasoning module.",
"The RN takes the attended memories in pairs to infer relations among the memories.",
"That can be useful, for example, in tasks that include comparisons.",
"A detailed description of the full model is shown in Figure 1 .",
"W-MemN2N for Textual Question Answering We proceed to describe an implementation of the model for textual question answering.",
"In textual question answering the input consists of a set of sentences or facts, a question, and an answer.",
"The goal is to answer the question correctly based on the given facts.",
"Let (s, q, a) represents an input sample, consisting of a set of sentences s = {x i } L i=1 , a query q and an answer a.",
"Each sentence contains M words, {w i } M i=1 , where each word is represented as a onehot vector of length |V |, being |V | the vocabulary size.",
"The question contains Q words, represented as in the input sentences.",
"Input Module Each word in each sentence is encoded into a vector representation v i using an embedding matrix W ∈ R |V |×d , where d is the embedding size.",
"Then, the sentence is converted into a memory vector m i using the final output of a gated recurrent neural network (GRU) (Chung et al., 2014) : m i = GRU([v 1 , v 2 , ..., v M ]) Each memory {m i } L i=1 , where m i ∈ R d , is stored into the short-term memory storage.",
"The question is encoded into a vector u in a similar way, using the output of a gated recurrent network.",
"Attentional Controller Our attention module is based on the Multi-Head attention mechanism proposed in Vaswani et al.",
"(2017) .",
"First, the memories are projected using a projection matrix W m ∈ R d×d , as m i = W m m i .",
"Then, the similarity between the projected memory and the question is computed using the Scaled Dot-Product attention: α i = Softmax u T m i √ d (1) = exp((u T m i )/ √ d) j exp((u T m j )/ √ d) .",
"(2) Next, the memories are combined using the attention weights α i , obtaining an output vector h = j α j m j .",
"In the Multi-Head mechanism, the memories are projected S times using different projection matrices {W s m } S s=1 .",
"For each group of projected memories, an output vector {h i } S i=1 is obtained using the Scaled Dot-Product attention (eq.",
"2).",
"Finally, all vector outputs are concatenated and projected again using a different matrix: o k = [h 1 ; h 2 ; ...; h S ]W o , where ; is the concatenation operator and W o ∈ R Sd×d .",
"The o k vector is the final response vector for the hop k. This vector is stored in the working memory buffer.",
"The attention procedure can be repeated many times (or hops).",
"At each hop, the attention can be conditioned on the previous hop by replacing the question vector u by the output of the previous hop.",
"To do that we pass the output through a simple neural network f t .",
"Then, we use the output of the network as the new conditioner: o n k = f t (o k ).",
"(3) This network allows some learning in the transition patterns between hops.",
"We found Multi-Head attention to be very useful in the joint bAbI task.",
"This can be a product of the intrinsic multi-task nature of the bAbI dataset.",
"A possibility is that each attention head is being adapted for different groups of related tasks.",
"However, we did not investigate this further.",
"Also, note that while in this section we use the same set of memories at each hop, this is not necessary.",
"For larger sequences each hop can operate in different parts of the input sequence, allowing the processing of the input in various steps.",
"Reasoning Module The outputs stored in the working memory buffer are passed to the reasoning module.",
"The reasoning module used in this work is a Relation Network (RN).",
"In the RN the output vectors are concatenated in pairs together with the question vector.",
"Each pair is passed through a neural network g θ and all the outputs of the network are added to produce a single vector.",
"Then, the sum is passed to a final neural network f φ : r = f φ i,j g θ ([o i ; o j ; u]) , (4) The output of the Relation Network is then passed through a final weight matrix and a softmax to produce the predicted answer: a = Softmax(V r), (5) where V ∈ R |A|×d φ , |A| is the number of possible answers and d φ is the dimension of the output of f φ .",
"The full network is trained end-to-end using standard cross-entropy betweenâ and the true label a.",
"3 Related Work Memory Augmented Neural Networks During the last years, there has been plenty of work on achieving complex reasoning with deep neural networks.",
"An important part of these developments has used some kind of explicit memory and attention mechanisms.",
"One of the earliest recent work is that of Memory Networks (Weston et al., 2014) .",
"Memory Networks work by building an addressable memory from the inputs and then accessing those memories in a series of reading operations.",
"Another, similar, line of work is the one of Neural Turing Machines.",
"They were proposed in Graves et al.",
"(2014) and are the basis for recent neural architectures including the Differentiable Neural Computer (DNC) and the Sparse Access Memory (SAM) Rae et al., 2016) .",
"The NTM model also uses a content addressable memory, as in the Memory Network, but adds a write operation that allows updating the memory over time.",
"The management of the memory, however, is different from the one of the MemNN.",
"While the MemNN model pre-load the memories using all the inputs, the NTM writes and read the memory one input at a time.",
"An additional model that makes use of explicit external memory is the Dynamic Memory Network (DMN) (Kumar et al., 2016; Xiong et al., 2016) .",
"The model shares some similarities with the Memory Network model.",
"However, unlike the MemNN model, it operates in the input sequentially (as in the NTM model).",
"The model defines an Episodic Memory module that makes use of a Gated Recurrent Neural Network (GRU) to store and update an internal state that represents the episodic storage.",
"Memory Networks Since our model is based on the MemNN architecture, we proceed to describe it in more detail.",
"The Memory Network model was introduced in Weston et al.",
"(2014) .",
"In that work, the authors proposed a model composed of four components: The input feature map that converts the input into an internal vector representation, the generalization module that updates the memories given the input, the output feature map that produces a new output using the stored memories, and the response module that produces the final answer.",
"The model, as initially proposed, needed some strong supervision that explicitly tells the model which memories to attend.",
"In order to solve that limitation, the End-To-End Memory Network (MemN2N) was proposed in Sukhbaatar et al.",
"(2015) .",
"The model replaced the hard-attention mechanism used in the original MemNN by a softattention mechanism that allowed to train it endto-end without strong supervision.",
"In our model, we use a component-based approach, as in the original MemNN architecture.",
"However, there are some differences: First, our model makes use of two external storages: a short-term storage, and a working memory buffer.",
"The first is equivalent to the one updated by the input and generalization module of the MemNN.",
"The working memory buffer, on the other hand, does not have a counterpart in the original model.",
"Second, our model replaces the response module by a reasoning module.",
"Unlike the original MemNN, our reasoning module is intended to make more complex work than the response module, that was only designed to produce a final answer.",
"Relation Networks The ability to infer and learn relations between entities is fundamental to solve many complex reasoning problems.",
"Recently, a number of neural network models have been proposed for this task.",
"These include Interaction Networks, Graph Neural Networks, and Relation Networks (Battaglia et al., 2016; Scarselli et al., 2009; Santoro et al., 2017) .",
"In specific, Relation Networks (RNs) have shown excellent results in solving textual and visual question answering tasks requiring relational reasoning.",
"The model is relatively simple: First, all the inputs are grouped in pairs and each pair is passed through a neural network.",
"Then, the outputs of the first network are added, and another neural network processes the final vector.",
"The role of the first network is to infer relations among each pair of objects.",
"In Palm et al.",
"(2017) the authors propose a recurrent extension to the RN.",
"By allowing multiple steps of relational reasoning, the model can learn to solve more complex tasks.",
"The main issue with the RN architecture is that its scale very poorly for larger problems.",
"That is because it operates on O(n 2 ) pairs, where n is the number of input objects (for instance, sentences in the case of textual question answering).",
"This becomes quickly prohibitive for tasks involving many input objects.",
"Cognitive Science The concept of working memory has been extensively developed in cognitive psychology.",
"It consists of a limited capacity system that allows temporary storage and manipulation of information and is crucial to any reasoning task.",
"One of the most influential models of working memory is the multi-component model of working memory proposed by Baddeley and Hitch (1974) .",
"This model is composed both of a supervisory attentional controller (the central executive) and two short-term storage systems: The phonological loop, capable of holding speech-based information, and the visuospatial sketchpad, concerned with visual storage.",
"The central executive plays various functions, including the capacity to focus attention, to divide attention and to control access to long-term memory.",
"Later modifications to the model (Baddeley, 2000) include an episodic buffer that is capable of integrating and holding information from different sources.",
"Connections of the working memory model to memory augmented neural networks have been already studied in Graves et al.",
"(2014) .",
"We follow this effort and subdivide our model into components that resemble (in a basic way) the multi-component model of working memory.",
"Note, however, that we use the term working memory buffer instead of episodic buffer.",
"That is because the episodic buffer has an integration function that our model does not cover.",
"However, that can be an interesting source of inspiration for next versions of the model that integrate both visual and textual information for question answering.",
"Experiments Textual Question Answering To evaluate our model on textual question answering we used the Facebook bAbI-10k dataset .",
"The bAbI dataset is a textual LSTM Sukhbaatar et al.",
"(2015) .",
"Results for SDNC are took from Rae et al.",
"(2016) .",
"WMN † is an ensemble of two Working Memory Networks.",
"MN-S MN SDNC WMN WMN QA benchmark composed of 20 different tasks.",
"Each task is designed to test a different reasoning skill, such as deduction, induction, and coreference resolution.",
"Some of the tasks need relational reasoning, for instance, to compare the size of different entities.",
"Each sample is composed of a question, an answer, and a set of facts.",
"There are two versions of the dataset, referring to different dataset sizes: bAbI-1k and bAbI-10k.",
"In this work, we focus on the bAbI-10k version of the dataset which consists of 10, 000 training samples per task.",
"A task is considered solved if a model achieves greater than 95% accuracy.",
"Note that training can be done per-task or joint (by training the model on all tasks at the same time).",
"Some models (Liu and Perez, 2017) have focused in the per-task training performance, including the Ent-Net model (Henaff et al., 2016) that solves all the tasks in the per-task training version.",
"We choose to focus on the joint training version since we think is more indicative of the generalization properties of the model.",
"A detailed analysis of the dataset can be found in Lee et al.",
"(2015) .",
"Model Details To encode the input facts we used a word embedding that projected each word in a sentence into a real vector of size d. We defined d = 30 and used a GRU with 30 units to process each sentence.",
"We used the 30 sentences in the support set that were immediately prior to the question.",
"The question was processed using the same configuration but with a different GRU.",
"We used 8 heads in the Multi-Head attention mechanism.",
"For the transition networks f t , which operates in the output of each hop, we used a two-layer MLP consisting of 15 and 30 hidden units (so the output preserves the memory dimension).",
"We used H = 4 hops (or equivalently, a working memory buffer of size 4).",
"In the reasoning module, we used a 3layer MLP consisting of 128 units in each layer and with ReLU non-linearities for g θ .",
"We omitted the f φ network since we did not observe improvements when using it.",
"The final layer was a linear layer that produced logits for a softmax over the answer vocabulary.",
"Training Details We trained our model end-to-end with a crossentropy loss function and using the Adam optimizer (Kingma and Ba, 2014).",
"We used a learning rate of ν = 1e −3 .",
"We trained the model during 400 epochs.",
"For training, we used a batch size of 32.",
"As in Sukhbaatar et al.",
"(2015) we did not average the loss over a batch.",
"Also, we clipped gradients with norm larger than 40 (Pascanu et al., 2013) .",
"For all the dense layers we used 2 regularization with value 1e −3 .",
"All weights were initialized using Glorot normal initialization (Glorot and Bengio, 2010) .",
"10% of the training set was heldout to form a validation set that we used to select the architecture and for hyperparameter tunning.",
"In some cases, we found useful to restart training after the 400 epochs with a smaller learning rate of 1e −5 and anneals every 5 epochs by ν/2 until 20 epochs were reached.",
"bAbI-10k Results On the jointly trained bAbI-10k dataset our best model (out of 10 runs) achieves an accuracy of 99.58%.",
"That is a 2.38% improvement over the previous state-of-the-art that was obtained by the Sparse Differential Neural Computer (SDNC) (Rae et al., 2016) .",
"The best model of the 10 runs solves almost all tasks of the bAbI-10k dataset (by a 0.3% margin).",
"However, a simple ensemble of the best two models solves all 20 tasks and achieves an almost perfect accuracy of 99.7%.",
"We list the results for each task in Table 1 .",
"Other authors have reported high variance in the results, for instance, the authors of the SDNC report a mean accuracy and standard deviation over 15 runs of 93.6 ± 2.5 (with 15.9 ± 1.6 passed tasks).",
"In contrast, our model achieves a mean accuracy of 98.3 ± 1.2 (with 18.6 ± 0.4 passed tasks), which is better and more stable than the average results obtained by the SDNC.",
"The Relation Network solves 18/20 tasks.",
"We achieve even better performance, and with considerably fewer computations, as is explained in Section 4.3.",
"We think that by including the attention mechanism, the relation reasoning module can focus on learning the relation among relevant objects, instead of learning spurious relations among irrelevant objects.",
"For that, the Multi-Head attention mechanism was very helpful.",
"The Effect of the Relational Reasoning Module When compared to the original Memory Network, our model substantially improves the accuracy of tasks 17 (positional reasoning) and 19 (path finding).",
"Both tasks require the analysis of multiple relations (Lee et al., 2015) .",
"For instance, the task 19 needs that the model reasons about the relation of different positions of the entities, and in that way find a path to arrive from one to another.",
"The accuracy improves in 75.1% for task 19 and in 41.5% for task 17 when compared with the MemN2N model.",
"Since both tasks require reasoning about relations, we hypothesize that the relational reasoning module of the W-MemNN was of great help to improve the performance on both tasks.",
"The Relation Network, on the other hand, fails in the tasks 2 (2 supporting facts) and 3 (3 supporting facts).",
"Both tasks require handling a significant number of facts, especially in task 3.",
"In those cases, the attention mechanism is crucial to filter out irrelevant facts.",
"Visual Question Answering To further study our model we evaluated its performance on a visual question answering dataset.",
"For that, we used the recently proposed NLVR dataset (Suhr et al., 2017) .",
"Each sample in the NLVR dataset is composed of an image with three sub-images and a statement.",
"The task consists in judging if the statement is true or false for that image.",
"Evaluating the statement requires reasoning about the sets of objects in the image, comparing objects properties, and reasoning about spatial relations.",
"The dataset is interesting for us for two reasons.",
"First, the statements evaluation requires complex relational reasoning about the objects in the image.",
"Second, unlike the bAbI dataset, the statements are written in natural language.",
"Because of that, each statement displays a range of syntactic and semantic phenomena that are not present in the bAbI dataset.",
"Model details Our model can be easily adapted to deal with visual information.",
"Following the idea from Santoro et al.",
"(2017) , instead of processing each input using a recurrent neural network, we use a Convolutional Neural Network (CNN).",
"The CNN takes as input each sub-image and convolved them through convolutional layers.",
"The output of the CNN consists of k feature maps (where k is the number of kernels in the final convolutional layer) of size d × d. Then, each memory is built from the vector composed by the concatenation of the cells in the same position of each feature map.",
"Consequently, d × d memories of size k are stored in the shortterm storage.",
"The statement is processed using a GRU neural network as in the textual reasoning task.",
"Then, we can proceed using the same architecture for the reasoning and attention module that the one used in the textual QA model.",
"However, for the visual QA task, we used an additive attention mechanism.",
"The additive attention computes the attention weight using a feed-forward neural network applied to the concatenation of the memory vector and statement vector.",
"Results Our model achieves a validation / test accuracy of 65.6%/65.8%.",
"Notably, we achieved a performance comparable to the results of the Module Neural Networks (Andreas et al., 2016 ) that make use of standard NLP tools to process the statements into structured representations.",
"Unlike the Module Neural Networks, we achieved our results using only raw input statements, allowing the model to learn how to process the textual input by itself.",
"Note that given the more complex nature of the language used in the NLVR dataset we needed to use a larger embedding size and GRU hidden layer than in the bAbI dataset (100 and 128 respectively).",
"That, however, is a nice feature of separating the input from the reasoning and attention component: One way to process more complex language statements is increasing the capacity of the input module.",
"From O(n 2 ) to O(n) One of the major limitations of RNs is that they need to process each one of the memories in pairs.",
"To do that, the RN must perform O(n 2 ) forward and backward passes (where n is the number of memories).",
"That becomes quickly prohibitive for a larger number of memories.",
"In contrast, the dependence of the W-MemNN run times on the number of memories is linear.",
"Note, however, that computation times in the W-MemNN depend quadratically on the size of the working memory buffer.",
"Nonetheless, this number is expected to be much smaller than the number of memories.",
"To compare both models we measured the wall-clock time for a forward and backward pass for a single batch of size 32.",
"We performed these experiments on a GPU NVIDIA K80.",
"Figure 2 shows the results.",
"Memory Visualizations One nice feature from Memory Networks is that they allow some interpretability of the reasoning procedure by looking at the attention weights.",
"At each hop, the attention weights show which parts of the memory the model found relevant to produce the output.",
"RNs, on the contrary, lack of this feature.",
"Table 2 shows the attention values for visual and textual question answering.",
"Relation Network W-MemNN Figure 2 : Wall-clock times for a forward and backward pass for a single batch.",
"The batch size used is 32.",
"While for 5 memories the times are comparable, for 30 memories the W-MemNN takes around 50s while the RN takes 930s, a speedup of almost 20×.",
"Conclusion We have proposed a novel Working Memory Network architecture that introduces improved reasoning abilities to the original MemNN model.",
"We demonstrated that by augmenting the MemNN architecture with a Relation Network, the computational complexity of the RN can be reduced, without loss of performance.",
"That opens the opportunity for using RNs in larger problems, something that may be very useful, given the many tasks requiring a significant amount of memories.",
"Although we have used RN as the reasoning module in this work, other options can be tested.",
"It might be interesting to analyze how other reasoning modules can improve different weaknesses of the model.",
"We presented results on the jointly trained bAbI-10k dataset, where we achieve a new state-of-theart, with an average error of less than 0.5%.",
"Also, we showed that our model can be easily adapted for visual question answering.",
"Our architecture combines perceptual input processing, short-term memory storage, an attention mechanism, and a reasoning module.",
"While other models have focused on different parts of these components, we think that is important to find ways to combine these different mechanisms if we want to build models capable of complex reasoning.",
"Evidence from cognitive sciences seems to show that all these abilities are needed in order to achieve human-level complex reasoning."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"3.1",
"3.2",
"3.3",
"3.4",
"4.1",
"4.2",
"4.3",
"4.4",
"5"
],
"paper_header_content": [
"Introduction",
"Model",
"W-MemN2N for Textual Question Answering",
"Memory Augmented Neural Networks",
"Memory Networks",
"Relation Networks",
"Cognitive Science",
"Textual Question Answering",
"Visual Question Answering",
"From O(n 2 ) to O(n)",
"Memory Visualizations",
"Conclusion"
]
} | GEM-SciDuet-train-20#paper-1018#slide-8 | Conclusions | We presented the Working Memory Neural Network, a Memory Network model augmented with a new working memory buffer and relational reasoning module.
It retains the relational reasoning capabilities of the relation network while reducing its computation time considerably.
We hope that this contribution may help apply the relation network to larger problems.
It is a very general framework. We argue that it should include:
Embedding + Short-term storage
Embedding + Short-term Attentional controller + storage Working memory buffer
Embedding + Short-term Attentional controller + Reasoning module storage Working memory buffer
Multi-head attention Relational Reasoning
There is exactly one black triangle not touching any edge GRU Module
16. Why, what are YOUR shoes done with? 17. said the Gryphon. 18. I mean, what makes then so shiny? 19. Alice looked down at then, and considered a little before she gave her answer. 20. They are done with blacking, I believe .
biGRU Scaled Dot-product Attention Sum
biGRU Attention Reasoning Module q. Boots and shoes under the sea. the went in a deep voice, are done with a whiting.
Elmo ResNet LSTM BiAtt GA AoA Attention Sum | We presented the Working Memory Neural Network, a Memory Network model augmented with a new working memory buffer and relational reasoning module.
It retains the relational reasoning capabilities of the relation network while reducing its computation time considerably.
We hope that this contribution may help apply the relation network to larger problems.
It is a very general framework. We argue that it should include:
Embedding + Short-term storage
Embedding + Short-term Attentional controller + storage Working memory buffer
Embedding + Short-term Attentional controller + Reasoning module storage Working memory buffer
Multi-head attention Relational Reasoning
There is exactly one black triangle not touching any edge GRU Module
16. Why, what are YOUR shoes done with? 17. said the Gryphon. 18. I mean, what makes them so shiny? 19. Alice looked down at them, and considered a little before she gave her answer. 20. They are done with blacking, I believe.
biGRU Scaled Dot-product Attention Sum
biGRU Attention Reasoning Module q. Boots and shoes under the sea. the went in a deep voice, are done with a whiting.
Elmo ResNet LSTM BiAtt GA AoA Attention Sum | [] |
GEM-SciDuet-train-21#paper-1019#slide-0 | 1019 | Iterative Search for Weakly Supervised Semantic Parsing | Training semantic parsers from question-answer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates between searching for consistent logical forms and maximizing the marginal likelihood of the retrieved ones. This training scheme lets us iteratively train models that provide guidance to subsequent ones to search for logical forms of increasing complexity, thus dealing with the problem of spuriousness. We evaluate these techniques on two hard datasets: WIKITABLEQUESTIONS (WTQ) and Cornell Natural Language Visual Reasoning (NLVR), and show that our training algorithm outperforms the previous best systems, on WTQ in a comparable setting, and on NLVR with significantly less supervision. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202
],
"paper_content_text": [
"Introduction Semantic parsing is the task of translating natural language utterances into machine-executable meaning representations, often called programs or logical forms.",
"These logical forms can be executed against some representation of the context in which the utterance occurs, to produce a denotation.",
"This setup allows for complex reasoning over contextual knowledge, and it has been successfully used in several natural language understanding problems such as question answering (Berant et al., 2013) , program synthesis (Yin and Neubig, 2017) and building natural language interfaces (Suhr et al., 2018) .",
"Recent work has focused on training semantic parses via weak supervision from denotations alone (Liang et al., 2011; Berant et al., 2013) .",
"This is because obtaining logical form annotations is generally expensive (although recent work has addressed this issue to some extent (Yih et al., 2016) ), and not assuming full supervision lets us be agnostic about the logical form language.",
"The second reason is more important in open-domain semantic parsing tasks where it may not be possible to arrive at a complete set of operators required by the task.",
"However, training semantic parsers with weak supervision requires not only searching over an exponentially large space of logical forms (Berant et al., 2013; Artzi and Zettlemoyer, 2013; Pasupat and Liang, 2015; Guu et al., 2017, inter alia) but also dealing with spurious logical forms that evaluate to the correct denotation while not being semantically equivalent to the utterance.",
"For example, if the denotations are binary, 50% of all syntactically valid logical forms evaluate to the correct answer, regardless of their semantics.",
"This problem renders the training signal extremely noisy, making it hard for the model to learn anything without some additional guidance during search.",
"We introduce two innovations to improve learning from denotations.",
"Firstly, we propose an iterative search procedure for gradually increasing the complexity of candidate logical forms for each training instance, leading to better training data and better parsing accuracy.",
"This procedure is implemented via training our model with two interleaving objectives, one that involves searching for logical forms of limited complexity during training (online search), and another that maximizes the marginal likelihood of retrieved logical forms.",
"Second, we include a notion of coverage over the question in the search step to guide the training algorithm towards logical forms that not only evaluate to the correct denotation, but also have some connection to the words in the utterance.",
"We demonstrate the effectiveness of these two techniques on two difficult reasoning tasks: WIK-ITABLEQUESTIONS(WTQ) (Pasupat and Liang, 2015) , an open domain task with significant lexical variation, and Cornell Natural Language Visual Reasoning (NLVR) (Suhr et al., 2017 ), a closed domain task with binary denotations, and thus far less supervision.",
"We show that: 1) interleaving online search and MML over retrieved logical forms ( §4) is a more effective training algorithm than each of those objectives alone; 2) coverage guidance during search ( §3) is helpful for dealing with weak supervision, more so in the case of NLVR where the supervision is weaker; 3) a combination of the two techniques yields 44.3% test accuracy on WTQ, outperforming the previous best single model in a comparable setting, and 82.9% test accuracy on NLVR, outperforming the best prior model, which also relies on greater supervision.",
"Background Weakly supervised semantic parsing We formally define semantic parsing in a weakly supervised setup as follows.",
"Given a dataset where the i th instance is the triple {x i , w i , d i }, representing a sentence x i , the world w i associated with the sentence, and the corresponding denotation d i , our goal is to find y i , the translation of x i in an appropriate logical form language (see §5.3), such that y i w i = d i ; i.e., the execution of y i in world w i produces the correct denotation d i .",
"A semantic parser defines a distribution over logical forms given an input utterance: p(Y |x i ; θ).",
"Training algorithms In this section we describe prior techniques for training semantic parsers with weak supervision: maximizing marginal likelihood, and rewardbased methods.",
"Maximum marginal likelihood Most work on training semantic parsers from denotations maximizes the likelihood of the denotation given the utterance.",
"The semantic parsing model itself defines a distribution over logical forms, however, not denotations, so this maximization must be recast as a marginalization over logical forms that evaluate to the correct denotation: max θ x i ,d i ∈D y i ∈Y | y i w i =d i p(y i |x i ; θ) (1) This objective function is called maximum marginal likelihood (MML).",
"The inner summation is in general intractable to perform during training, so it is only approximated.",
"Most prior work (Berant et al., 2013; Goldman et al., 2018 , inter alia) approximate the intractable marginalization by summing over logical forms obtained via beam search during training.",
"This typically results in frequent search failures early during training when model parameters are close to random, and in general may only yield spurious logical forms in the absence of any guidance.",
"Since modern semantic parsers typically operate without a lexicon, new techniques are essential to provide guidance to the search procedure (Goldman et al., 2018) .",
"One way of providing this guidance during search is to perform some kind of heuristic search up front to find a set of logical forms that evaluate to the correct denotation, and use those logical forms to approximate the inner summation (Liang et al., 2011; Krishnamurthy et al., 2017) .",
"The particulars of the heuristic search can have a large impact on performance; a smaller candidate set has lower noise, while a larger set makes it more likely that the correct logical form is in it, and one needs to strike the right balance.",
"In this paper, we refer to the MML that does search during training as dynamic MML, and the one that does an offline search as static MML.",
"The main benefit of dynamic MML is that it adapts its training signal over time.",
"As the model learns, it can increasingly focus its probability mass on a small set of very likely logical forms.",
"The main benefit of static MML is that there is no need to search during training, so there is a consistent training signal even at the start of training, and it is typically more computationally efficient than dynamic MML.",
"Reward-based methods When training weakly supervised semantic parsers, it is often desirable to inject some prior knowledge into the training procedure by defining arbitrary reward or cost functions.",
"There exists prior work that use such methods, both in a reinforcement learning setting (Liang et al., , 2018 , and otherwise (Iyyer et al., 2017; Guu et al., 2017) .",
"In our work, we define a customized cost function that includes a coverage term, and use a Minimum Bayes Risk (MBR) (Goodman, 1996; Goel and Byrne, 2000; Smith and Eisner, 2006) training scheme, which we describe in §3.",
"Coverage-guided search Weakly-supervised training of semantic parsers relies heavily on lexical cues to guide the initial stages of learning to good logical forms.",
"Traditionally, these lexical cues were provided in the parser's lexicon.",
"Neural semantic parsers remove the lexicon, however, and so need another mechanism for obtaining these lexical cues.",
"In this section we introduce the use of coverage to inject lexicon-like information into neural semantic parsers.",
"Coverage is a measure of relevance of the candidate logical form y i to the input x i , in terms of how well the productions in y i map to parts of x i .",
"We use a small manually specified lexicon as a mapping from source language to the target language productions, and define coverage of y i as the number of productions triggered by the input utterance, according to the lexicon, that are included in y i .",
"We use this measure of coverage to augment our loss function, and train using an MBR based algorithm as follows.",
"We use beam search to train a model to minimize the expected value of a cost function C: min θ N i=1 Ep (y i |x i ;θ) C(x i , y i , w i , d i ) (2) wherep is a re-normalization 1 of the probabilities assigned to all logical forms on the beam.",
"We define the cost function C as: C(x i , y i , w i , d i ) = λS(y i , x i )+(1−λ)T (y i , w i , d i ) (3) where the function S measures the number of items that y i is missing from the actions (or grammar production rules) triggered by the input utterance x i given the lexicon; and the function T measures the consistency of the evaluation of y i in w i , meaning that it is 0 if y i w i = d i , or a value e otherwise.",
"We set e as the maximum possible value of the coverage cost for the corresponding instance, to make the two costs comparable in magnitude.",
"λ is a hyperparameter that gives the relative weight of the coverage cost.",
"Iterative search In this section we describe the iterative technique for refining the set of candidate logical forms associated with each training instance.",
"As discussed in §2.2, most prior work on weakly-supervised training of semantic parsers uses dynamic MML.",
"This is particularly problematic in domains like NLVR, where the supervision signal is binary-it is very hard for dynamic MML to bootstrap its way to finding good logical forms.",
"To solve this problem, we interleave static MML, which has a consistent supervision signal from the start of training, with the coverageaugmented MBR algorithm described in §3.",
"In order to use static MML, we need an initial set of candidate logical forms.",
"We obtain this candidate set using a bounded-length exhaustive search, filtered using heuristics based on the same lexical mapping used for coverage in §3.",
"A bounded-length search will not find logical forms for the entire training data, so we can only use a subset of the data for initial training.",
"We train a model to convergence using static MML on these logical forms, then use that model to initialize coverage-augmented MBR training.",
"This gives the model a good starting place for the dynamic learning algorithm, and the search at training time can look for logical forms that are longer than could be found with the bounded-length exhaustive search.",
"We train MBR to convergence, then use beam search on the MBR model to find a new set of candidate logical forms for static MML on the training data.",
"This set of logical forms can have a greater length than those in the initial set, because this search uses model scores to not exhaustively explore all possible paths, and thus will likely cover more of the training data.",
"In this way, we can iteratively improve the candidate logical forms used for static training, which in turn improves the starting place for the online search algorithm.",
"Algorithm 1 concretely describes this process.",
"Decode in the algorithm refers to running a beam search decoder that returns a set of consistent logical forms (i.e.",
"T = 0) for each of the input utterances.",
"We start off with a seed dataset D 0 for which consistent logical forms are available.",
"Datasets We will now describe the two datasets we use in this work to evaluate our methods -Cornell NLVR and WIKITABLEQUESTIONS.",
"Input : Dataset D = {X, W, D}; and seed set D 0 = {X 0 , Y 0 } such that X 0 ⊂ X and C(x 0 i , y 0 i , W i , D i ) = 0 Output: Model parameters θ MBR Initialize dataset D MML = D 0 ; while Acc(D dev ) is increasing do θ MML = MML(D MML ); Initialize θ MBR = θ MML ; Update θ MBR = MBR(D; θ MBR ); Update D MML = Decode(D; θ MBR ); end Algorithm 1: Iterative coverage-guided search Cornell NLVR Cornell NLVR is a language-grounding dataset containing natural language sentences provided along with synthetically generated visual contexts, and a label for each sentence-image pair indicating whether the sentence is true or false in the given context.",
"Figure 1 shows two example sentenceimage pairs from the dataset (with the same sentence).",
"The dataset also comes with structured representations of images, indicating the color, shape, size, and x-and y-coordinates of each of the objects in the image.",
"While we show images in Figure 1 for ease of exposition, we use the structured representations in this work.",
"Following the notation introduced in §2.1, x i in this example is There is a box with only one item that is blue.",
"The structured representations associated with the two images shown are two of the worlds (w 1 i and w 2 i ), in which x i could be evaluated.",
"The corresponding labels are the denotations d 1 i and d 2 i that a translation y i of the sentence x i is expected to produce, when executed in the two worlds respectively.",
"That the same sentence occurs with multiple worlds is an important property of this dataset, and we make use of it by defining the function T to be 0 only if ∀ w j i ,d j i y i w j i = d j i .",
"WIKITABLEQUESTIONS WIKITABLEQUESTIONS is a question-answering dataset where the task requires answering complex questions in the context of Wikipedia tables.",
"An example can be seen in Figure 2 .",
"Unlike NLVR, the answers are not binary.",
"They can instead be cells in the table or the result of numerical or settheoretic operations performed on them.",
"Logical form languages For NLVR, we define a typed variable-free functional query language, inspired by the GeoQuery language (Zelle and Mooney, 1996) .",
"Our language contains six basic types: box (referring to one of the three gray areas in Figure 1) , object (referring to the circles, triangles and squares in Figure 1) , shape, color, number and boolean.",
"The constants in our language are color and shape names, the set of all boxes in an image, and the set of all objects in an image.",
"The functions in our language include those for filtering objects and boxes, and making assertions, a higher order function for handling negations, and a function for querying objects in boxes.",
"This type specification of constants and functions gives us a grammar with 115 productions, of which 101 are terminal productions (see Appendix A.1 for the complete set of rules in our grammar).",
"Figure 1 shows an example of a complete logical form in our language.",
"For WTQ, we use the functional query language used by (Liang et al., 2018) as the logical form language.",
"Figure 2 shows an example logical form.",
"Lexicons for coverage The lexicon we use for the coverage measure described in §3 contains under 40 rules for each logical form language.",
"They mainly map words and phrases to constants and unary functions in the target language.",
"The complete lexicons are shown in the Appendix.",
"Figures 1 and 2 also show the actions triggered by the corresponding lexicons for the utterances shown.",
"We find that small but precise lexicons are sufficient to guide the search process away from spurious logical forms.",
"Moreover, as shown empirically in §6.4, the model for NLVR does not learn much without this simple but crucial guidance.",
"Experiments We evaluate both our contributions on NLVR and WIKITABLEQUESTIONS.",
"Model In this work, we use a grammar-constrained encoder-decoder neural semantic parser for our experiments.",
"Of the many variants of this basic architecture (see §7), all of which are essentially seq2seq models with constrained outputs and/or re-parameterizations, we choose to use the parser of Krishnamurthy et al.",
"(2017) , as it is particularly well-suited to the WIKITABLEQUESTIONS dataset, which we evaluate on.",
"The encoder in the model is a bi-directional recurrent neural network with Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) cells, and the decoder is a grammarconstrained decoder also with LSTM cells.",
"Instead of directly outputting tokens in the logical form, the decoder outputs production rules from a CFG-like grammar.",
"These production rules sequentially build up an abstract syntax tree, which determines the logical form.",
"The model also has an entity linking component for producing table entities in the logical forms; this com-ponent is only applicable to WIKITABLEQUES-TIONS, and we remove it when running experiments on NLVR.",
"The particulars of the model are not the focus of this work, so we refer the reader to the original paper for more details.",
"In addition, we slightly modify the constrained decoding architecture from (Krishnamurthy et al., 2017) to bias the predicted actions towards those that would decrease the value of S(y i , x i ).",
"This is done using a coverage vector, v S i for each training instance that keeps track of the production rules triggered by x i , and gets updated whenever one of those desired productions is produced by the decoder.",
"That is, v S i is a vector of 1s and 0s, with 1s indicating the triggered productions that are yet to be produced by the decoder.",
"This is similar to the idea of checklists used by Kiddon et al.",
"(2016) .",
"The decoder in the original architecture scores output actions at each time step by computing a dot product of the predicted action representation with the embeddings of each of the actions.",
"We add a weighted sum of all the actions that are yet to produced: s a i = e a .",
"(p i + γ * v S i .E) (4) where s a i is the score of action a at time step i, e a is the embedding of that action, p i is the predicted action representation, E is the set of embeddings of all the actions, and γ is a learned parameter for regularizing the bias towards yet-to-be produced triggered actions.",
"Experimental setup NLVR We use the standard train-dev-test split for NLVR, containing 12409, 988 and 989 sentence-image pairs respectively.",
"NLVR contains most of the sentences occurring in multiple worlds (with an average of 3.9 worlds per sentence).",
"We set the word embedding and action embedding sizes to 50, and the hidden layer size of both the encoder and the decoder to 30.",
"We initialized all the parameters, including the word and action embeddings using Glorot uniform initialization (Glorot and Bengio, 2010) .",
"We found that using pretrained word representations did not help.",
"We added a dropout (Srivastava et al., 2014) of 0.2 on the outputs of the encoder and the decoder and before predicting the next action, set the beam size to 10 both during training and at test time, and trained the model using ADAM (Kingma and Ba, 2014) with a learning rate of 0.001.",
"All the hyperparameters are tuned on the validation set.",
"WIKITABLEQUESTIONS This dataset comes with five different cross-validation folds of training data, each containing a different 80/20 split for training and development.",
"We first show results aggregated from all five folds in §6.3, and then show results from controlled experiments on fold 1.",
"We replicated the model presented in Krishnamurthy et al.",
"(2017) , and only changed the training algorithm and the language used.",
"We used a beam size of 20 for MBR during training and decoding, and 10 for MML during decoding, and trained the model using Stochastic Gradient Descent (Kiefer et al., 1952) with a learning rate of 0.1, all of which are tuned on the validation sets.",
"Specifics of iterative search For our iterative search algorithm, we obtain an initial set of candidate logical forms in both domains by exhaustively searching to a depth of 10 2 .",
"During search we retrieve the logical forms that lead to the correct denotations in all the corresponding worlds, and sort them based on their coverage cost using the coverage lexicon described in §5.4, and choose the top-k 3 .",
"At each iteration of the search step in our iterative training algorithm, we increase the maximum depth of our search with a step-size of 2, finding more complex logical forms and covering a larger proportion of the training data.",
"While exhaustive search is prohibitively expensive beyond a fixed number of steps, our training process that uses beam search based approximation can go deeper.",
"Implementation We implemented our model and training algorithms within the AllenNLP (Gardner et al., 2018) toolkit.",
"The code and models are publicly available at https://github.com/allenai/ iterative-search-semparse.",
"Main results WIKITABLEQUESTIONS Table 1 compares the performance of a single model trained using Iterative Search, with that of previously published single models.",
"We excluded ensemble models since there are differences in the way ensembles are built for this task in previous work, either in terms of size or how the individual models were chosen.",
"We show both best and aver- Approach Dev Test Pasupat and Liang (2015) 37.0 37.1 Neelakantan et al.",
"(2017) 34.",
"(Liang et al., 2018) , all trained on the official split 1 of WIKITABLEQUESTIONS and tested on the official test set.",
"age (over 5 folds) single model performance from Liang et al.",
"(2018) (Memory Augmented Policy Optimization).",
"The best model was chosen based on performance on the development set.",
"Our single model performances are computed in the same way.",
"Note that Liang et al.",
"(2018) also use a lexicon similar to ours to prune the seed set of logical forms used to initialize their memory buffer.",
"In Table 2 , we compare the performance of our iterative search algorithm with three baselines: 1) Static MML, as described in §2.2.1 trained on the candidate set of logical forms obtained through the heuristic search technique described in §6.2; 2) Iterative MML, also an iterative technique but unlike iterative search, we skip MBR and iteratively train static MML models while increasing the number of decoding steps; and 3) MAPO (Liang et al., 2018) , the current best published system on WTQ.",
"All four algorithms are trained and evaluated on the first fold, use the same language, and the bottom three use the same model and the same set of logical forms used to train static MML.",
"Table 3 , we show a comparison of the performance of our iterative coverage-guided search algorithm with the previously published approaches for NLVR.",
"The first two rows correspond to models that are not semantic parsers.",
"This shows that semantic parsing is a promising direction for this task.",
"The closest work to ours is the weakly supervised parser built by (Goldman et al., 2018) .",
"They build a lexicon similar to ours for mapping surface forms in input sentences to abstract clusters.",
"But in addition to defining a lexicon, they also manually annotate complete sentences in this abstract space, and use those annotations to perform data augmentation for training a supervised parser, which is then used to initialize a weakly supervised parser.",
"They also explicitly use the abstractions to augment the beam during decoding using caching, and a separately-trained discriminative re-ranker to re-order the logical forms on the beam.",
"As a discriminative re-ranker is orthogonal to our contributions, we show their results with and without it, with \"Abs.",
"Sup.\"",
"being more comparable to our work.",
"Our model, which uses no data augmentation, no caching during decoding, and no discriminative re-ranker, outperforms their variant without reranking on the public test set, and outperforms their best model on the hidden test set, achieving a new state-of-theart result on this dataset.",
"NLVR In Effect of coverage-guided search To evaluate the contribution of coverage-guided search, we compare the the performance of the NLVR parser in two different settings: with and without coverage guidance in the cost function.",
"We also compare the performance of the parser in the two settings, when initialized with parameters from an MML model trained to maximize the likelihood of the set of logical forms obtained from exhaustive search.",
"Table 4 shows the results of this comparison.",
"We measure accuracy and consistency of all four models on the publicly available test set, using the official evaluation script.",
"Consistency here refers to the percentage of logical forms that produce the correct denotation in all the corresponding worlds, and is hence a stricter metric than accuracy.",
"The cost weight (λ in Equation 3) was tuned based on validation set performance for the runs with coverage, and we found that λ = 0.4 worked best.",
"It can be seen that both with and without ini-tialization, coverage guidance helps by a big margin, with the gap being even more prominent in the case where there is no initialization.",
"When there is neither coverage guidance nor a good initialization, the model does not learn much from unguided search and get a test accuracy not much higher than the majority baseline of 56.2%.",
"We found that coverage guidance was not as useful for WTQ.",
"The average value of the best performing λ was around 0.2, and higher values neither helped nor hurt performance.",
"Effect of iterative search To evaluate the effect of iterative search, we present the accuracy numbers from the search (S) and maximization (M) steps from different iterations in Tables 5 and 6 , showing results on NLVR and WTQ, respectively.",
"Additionally, we also show number of decoding steps used at each iterations, and the percentage of sentences in the training data for which we were able to obtain consistent logical forms from the S step, the set that was used in the M step of the same iteration.",
"It can be seen in both tables that a better MML model gives a better initialization for MBR, and a better MBR model results in a larger set of utterances for which we can retrieve consistent logical forms, thus improving the subsequent MML model.",
"The improvement for NLVR is more pronounced (a gain of 21% absolute) than for WTQ (a gain of 3% absolute), likely because the initial exhaustive search provides a much higher percentage of spurious logical forms for NLVR, and thus the starting place is relatively worse.",
"Complexity of Logical Forms We analyzed the logical forms produced by our iterative search algorithm at different iterations to see how they differ.",
"As expected, for NLVR, allowing greater depths lets the parser explore more complex logical forms.",
"Table 7 shows examples from the validation set that indicate this trend.",
"Related Work Most of the early methods used for training semantic parsers required the training data to come with annotated logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005) .",
"The primary limitation of such methods is that manually producing these logical forms is expensive, making it hard to scale these methods across domains.",
"Dev.",
"Test-P Test-H Approach Acc.",
"Cons.",
"Acc.",
"Cons.",
"Acc.",
"Cons.",
"MaxEnt (Suhr et al., 2017) 68.0 -67.7 -67.8 -BiATT-Pointer (Tan and Bansal, 2018) 74.6 -73.9 -71.8 -Abs.",
"Sup.",
"(Goldman et al., 2018) 84.3 66.3 81.7 60.1 --Abs.",
"Sup.",
"+ ReRank (Goldman et al., 2018) More recent research has focused on training semantic parsers with weak supervision (Liang et al., 2011; Berant et al., 2013) , or trying to automatically infer logical forms from denotations (Pasupat and .",
"However, matching the performance of a fully supervised semantic parser with only weak supervision remains a significant challenge (Yih et al., 2016) .",
"The main contributions of this work deal with training semantic parsers with weak supervision, and we gave a detailed discussion of related training methods in §2.2.",
"We evaluate our contributions on the NLVR and WIKITABLEQUESTIONS datasets.",
"Other work that evaluates on on these datasets include Goldman et al.",
"(2018) , Tan and Bansal (2018) , Neelakantan et al.",
"(2017) , Krishnamurthy et al.",
"(2017) , Haug et al.",
"(2018) , and (Liang et al., 2018) .",
"These prior works generally present modeling contributions that are orthogonal (and in some cases complementary) to the contributions of this paper.",
"There has also been a lot of recent work on neural semantic parsing, most of which is also orthogonal to (and could probably benefit from) our contributions (Dong and Lapata, 2016; Jia and Liang, 2016; Yin and Neubig, 2017; Krishnamurthy et al., 2017; Rabinovich et al., 2017) .",
"Recent attempts at dealing with the problem of spuriousness include Misra et al.",
"(2018) and Guu et al.",
"(2017) .",
"Coverage has recently been used in machine translation (Tu et al., 2016) and summarization (See et al., 2017) .",
"There have also been many methods that use coverage-like mechanisms to give lexical cues to semantic parsers.",
"Goldman et al.",
"(2018) 's abstract examples is the most recent and related work, but the idea is also related to lexicons in pre-neural semantic parsers (Kwiatkowski et al., 2011) .",
"There is a tower with four blocks (box exists (member count equals all boxes 4)) 1 Atleast one black triangle is not touching the edge (object exists (black (triangle ((negate filter touch wall) all objects)))) 2 There is a yellow block as the top of a tower with exactly three blocks.",
"(object exists (yellow (top (object in box (member count equals all boxes 3))))) 3 The tower with three blocks has a yellow block over a black block (object count greater equals (yellow (above (black (object in box (member count equals all boxes 3))))) 1) Table 7 : Complexity of logical forms produced at different iterations, from iteration 0 to iteration 3; each logical form could not be produced at the previous iterations Conclusion We have presented a new technique for training semantic parsers with weak supervision.",
"Our key insights are that lexical cues are crucial for guiding search during the early stages of training, and that the particulars of the approximate marginalization in maximum marginal likelihood have a large impact on performance.",
"To address the first issue, we used a simple coverage mechanism for including lexicon-like information in neural semantic parsers that do not have lexicons.",
"For the second issue, we developed an iterative procedure that alternates between statically-computed and dynamically-computed training signals.",
"Together these two contributions greatly improve semantic parsing performance, leading to new state-ofthe-art results on NLVR and WIKITABLEQUES-TIONS.",
"As these contributions are to the learning algorithm, they are broadly applicable to many models trained with weak supervision.",
"One potential future work direction is investigating whether they extend to other structured prediction problems beyond semantic parsing."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.2.1",
"2.2.2",
"3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"6.5",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Weakly supervised semantic parsing",
"Training algorithms",
"Maximum marginal likelihood",
"Reward-based methods",
"Coverage-guided search",
"Iterative search",
"Datasets",
"Cornell NLVR",
"WIKITABLEQUESTIONS",
"Logical form languages",
"Lexicons for coverage",
"Experiments",
"Model",
"Experimental setup",
"Main results",
"Effect of coverage-guided search",
"Effect of iterative search",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-21#paper-1019#slide-0 | This talk in one slide | Training semantic parsing with denotation-only supervision is challenging because of spuriousness: incorrect logical forms can yield correct denotations.
Iterative training: Online search with initialization MML over offline search output
Coverage during online search
State-of-the-art single model performances:
WikiTableQuestions with comparable supervision
NLVR semantic parsing with significantly less supervision | Training semantic parsing with denotation-only supervision is challenging because of spuriousness: incorrect logical forms can yield correct denotations.
Iterative training: Online search with initialization MML over offline search output
Coverage during online search
State-of-the-art single model performances:
WikiTableQuestions with comparable supervision
NLVR semantic parsing with significantly less supervision | [] |
GEM-SciDuet-train-21#paper-1019#slide-1 | 1019 | Iterative Search for Weakly Supervised Semantic Parsing | Training semantic parsers from question-answer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates between searching for consistent logical forms and maximizing the marginal likelihood of the retrieved ones. This training scheme lets us iteratively train models that provide guidance to subsequent ones to search for logical forms of increasing complexity, thus dealing with the problem of spuriousness. We evaluate these techniques on two hard datasets: WIKITABLEQUESTIONS (WTQ) and Cornell Natural Language Visual Reasoning (NLVR), and show that our training algorithm outperforms the previous best systems, on WTQ in a comparable setting, and on NLVR with significantly less supervision. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202
],
"paper_content_text": [
"Introduction Semantic parsing is the task of translating natural language utterances into machine-executable meaning representations, often called programs or logical forms.",
"These logical forms can be executed against some representation of the context in which the utterance occurs, to produce a denotation.",
"This setup allows for complex reasoning over contextual knowledge, and it has been successfully used in several natural language understanding problems such as question answering (Berant et al., 2013) , program synthesis (Yin and Neubig, 2017) and building natural language interfaces (Suhr et al., 2018) .",
"Recent work has focused on training semantic parses via weak supervision from denotations alone (Liang et al., 2011; Berant et al., 2013) .",
"This is because obtaining logical form annotations is generally expensive (although recent work has addressed this issue to some extent (Yih et al., 2016) ), and not assuming full supervision lets us be agnostic about the logical form language.",
"The second reason is more important in open-domain semantic parsing tasks where it may not be possible to arrive at a complete set of operators required by the task.",
"However, training semantic parsers with weak supervision requires not only searching over an exponentially large space of logical forms (Berant et al., 2013; Artzi and Zettlemoyer, 2013; Pasupat and Liang, 2015; Guu et al., 2017, inter alia) but also dealing with spurious logical forms that evaluate to the correct denotation while not being semantically equivalent to the utterance.",
"For example, if the denotations are binary, 50% of all syntactically valid logical forms evaluate to the correct answer, regardless of their semantics.",
"This problem renders the training signal extremely noisy, making it hard for the model to learn anything without some additional guidance during search.",
"We introduce two innovations to improve learning from denotations.",
"Firstly, we propose an iterative search procedure for gradually increasing the complexity of candidate logical forms for each training instance, leading to better training data and better parsing accuracy.",
"This procedure is implemented via training our model with two interleaving objectives, one that involves searching for logical forms of limited complexity during training (online search), and another that maximizes the marginal likelihood of retrieved logical forms.",
"Second, we include a notion of coverage over the question in the search step to guide the training algorithm towards logical forms that not only evaluate to the correct denotation, but also have some connection to the words in the utterance.",
"We demonstrate the effectiveness of these two techniques on two difficult reasoning tasks: WIK-ITABLEQUESTIONS(WTQ) (Pasupat and Liang, 2015) , an open domain task with significant lexical variation, and Cornell Natural Language Visual Reasoning (NLVR) (Suhr et al., 2017 ), a closed domain task with binary denotations, and thus far less supervision.",
"We show that: 1) interleaving online search and MML over retrieved logical forms ( §4) is a more effective training algorithm than each of those objectives alone; 2) coverage guidance during search ( §3) is helpful for dealing with weak supervision, more so in the case of NLVR where the supervision is weaker; 3) a combination of the two techniques yields 44.3% test accuracy on WTQ, outperforming the previous best single model in a comparable setting, and 82.9% test accuracy on NLVR, outperforming the best prior model, which also relies on greater supervision.",
"Background Weakly supervised semantic parsing We formally define semantic parsing in a weakly supervised setup as follows.",
"Given a dataset where the i th instance is the triple {x i , w i , d i }, representing a sentence x i , the world w i associated with the sentence, and the corresponding denotation d i , our goal is to find y i , the translation of x i in an appropriate logical form language (see §5.3), such that y i w i = d i ; i.e., the execution of y i in world w i produces the correct denotation d i .",
"A semantic parser defines a distribution over logical forms given an input utterance: p(Y |x i ; θ).",
"Training algorithms In this section we describe prior techniques for training semantic parsers with weak supervision: maximizing marginal likelihood, and rewardbased methods.",
"Maximum marginal likelihood Most work on training semantic parsers from denotations maximizes the likelihood of the denotation given the utterance.",
"The semantic parsing model itself defines a distribution over logical forms, however, not denotations, so this maximization must be recast as a marginalization over logical forms that evaluate to the correct denotation: max θ x i ,d i ∈D y i ∈Y | y i w i =d i p(y i |x i ; θ) (1) This objective function is called maximum marginal likelihood (MML).",
"The inner summation is in general intractable to perform during training, so it is only approximated.",
"Most prior work (Berant et al., 2013; Goldman et al., 2018 , inter alia) approximate the intractable marginalization by summing over logical forms obtained via beam search during training.",
"This typically results in frequent search failures early during training when model parameters are close to random, and in general may only yield spurious logical forms in the absence of any guidance.",
"Since modern semantic parsers typically operate without a lexicon, new techniques are essential to provide guidance to the search procedure (Goldman et al., 2018) .",
"One way of providing this guidance during search is to perform some kind of heuristic search up front to find a set of logical forms that evaluate to the correct denotation, and use those logical forms to approximate the inner summation (Liang et al., 2011; Krishnamurthy et al., 2017) .",
"The particulars of the heuristic search can have a large impact on performance; a smaller candidate set has lower noise, while a larger set makes it more likely that the correct logical form is in it, and one needs to strike the right balance.",
"In this paper, we refer to the MML that does search during training as dynamic MML, and the one that does an offline search as static MML.",
"The main benefit of dynamic MML is that it adapts its training signal over time.",
"As the model learns, it can increasingly focus its probability mass on a small set of very likely logical forms.",
"The main benefit of static MML is that there is no need to search during training, so there is a consistent training signal even at the start of training, and it is typically more computationally efficient than dynamic MML.",
"Reward-based methods When training weakly supervised semantic parsers, it is often desirable to inject some prior knowledge into the training procedure by defining arbitrary reward or cost functions.",
"There exists prior work that use such methods, both in a reinforcement learning setting (Liang et al., , 2018 , and otherwise (Iyyer et al., 2017; Guu et al., 2017) .",
"In our work, we define a customized cost function that includes a coverage term, and use a Minimum Bayes Risk (MBR) (Goodman, 1996; Goel and Byrne, 2000; Smith and Eisner, 2006) training scheme, which we describe in §3.",
"Coverage-guided search Weakly-supervised training of semantic parsers relies heavily on lexical cues to guide the initial stages of learning to good logical forms.",
"Traditionally, these lexical cues were provided in the parser's lexicon.",
"Neural semantic parsers remove the lexicon, however, and so need another mechanism for obtaining these lexical cues.",
"In this section we introduce the use of coverage to inject lexicon-like information into neural semantic parsers.",
"Coverage is a measure of relevance of the candidate logical form y i to the input x i , in terms of how well the productions in y i map to parts of x i .",
"We use a small manually specified lexicon as a mapping from source language to the target language productions, and define coverage of y i as the number of productions triggered by the input utterance, according to the lexicon, that are included in y i .",
"We use this measure of coverage to augment our loss function, and train using an MBR based algorithm as follows.",
"We use beam search to train a model to minimize the expected value of a cost function C: min θ N i=1 Ep (y i |x i ;θ) C(x i , y i , w i , d i ) (2) wherep is a re-normalization 1 of the probabilities assigned to all logical forms on the beam.",
"We define the cost function C as: C(x i , y i , w i , d i ) = λS(y i , x i )+(1−λ)T (y i , w i , d i ) (3) where the function S measures the number of items that y i is missing from the actions (or grammar production rules) triggered by the input utterance x i given the lexicon; and the function T measures the consistency of the evaluation of y i in w i , meaning that it is 0 if y i w i = d i , or a value e otherwise.",
"We set e as the maximum possible value of the coverage cost for the corresponding instance, to make the two costs comparable in magnitude.",
"λ is a hyperparameter that gives the relative weight of the coverage cost.",
"Iterative search In this section we describe the iterative technique for refining the set of candidate logical forms associated with each training instance.",
"As discussed in §2.2, most prior work on weakly-supervised training of semantic parsers uses dynamic MML.",
"This is particularly problematic in domains like NLVR, where the supervision signal is binary-it is very hard for dynamic MML to bootstrap its way to finding good logical forms.",
"To solve this problem, we interleave static MML, which has a consistent supervision signal from the start of training, with the coverageaugmented MBR algorithm described in §3.",
"In order to use static MML, we need an initial set of candidate logical forms.",
"We obtain this candidate set using a bounded-length exhaustive search, filtered using heuristics based on the same lexical mapping used for coverage in §3.",
"A bounded-length search will not find logical forms for the entire training data, so we can only use a subset of the data for initial training.",
"We train a model to convergence using static MML on these logical forms, then use that model to initialize coverage-augmented MBR training.",
"This gives the model a good starting place for the dynamic learning algorithm, and the search at training time can look for logical forms that are longer than could be found with the bounded-length exhaustive search.",
"We train MBR to convergence, then use beam search on the MBR model to find a new set of candidate logical forms for static MML on the training data.",
"This set of logical forms can have a greater length than those in the initial set, because this search uses model scores to not exhaustively explore all possible paths, and thus will likely cover more of the training data.",
"In this way, we can iteratively improve the candidate logical forms used for static training, which in turn improves the starting place for the online search algorithm.",
"Algorithm 1 concretely describes this process.",
"Decode in the algorithm refers to running a beam search decoder that returns a set of consistent logical forms (i.e.",
"T = 0) for each of the input utterances.",
"We start off with a seed dataset D 0 for which consistent logical forms are available.",
"Datasets We will now describe the two datasets we use in this work to evaluate our methods -Cornell NLVR and WIKITABLEQUESTIONS.",
"Input : Dataset D = {X, W, D}; and seed set D 0 = {X 0 , Y 0 } such that X 0 ⊂ X and C(x 0 i , y 0 i , W i , D i ) = 0 Output: Model parameters θ MBR Initialize dataset D MML = D 0 ; while Acc(D dev ) is increasing do θ MML = MML(D MML ); Initialize θ MBR = θ MML ; Update θ MBR = MBR(D; θ MBR ); Update D MML = Decode(D; θ MBR ); end Algorithm 1: Iterative coverage-guided search Cornell NLVR Cornell NLVR is a language-grounding dataset containing natural language sentences provided along with synthetically generated visual contexts, and a label for each sentence-image pair indicating whether the sentence is true or false in the given context.",
"Figure 1 shows two example sentenceimage pairs from the dataset (with the same sentence).",
"The dataset also comes with structured representations of images, indicating the color, shape, size, and x-and y-coordinates of each of the objects in the image.",
"While we show images in Figure 1 for ease of exposition, we use the structured representations in this work.",
"Following the notation introduced in §2.1, x i in this example is There is a box with only one item that is blue.",
"The structured representations associated with the two images shown are two of the worlds (w 1 i and w 2 i ), in which x i could be evaluated.",
"The corresponding labels are the denotations d 1 i and d 2 i that a translation y i of the sentence x i is expected to produce, when executed in the two worlds respectively.",
"That the same sentence occurs with multiple worlds is an important property of this dataset, and we make use of it by defining the function T to be 0 only if ∀ w j i ,d j i y i w j i = d j i .",
"WIKITABLEQUESTIONS WIKITABLEQUESTIONS is a question-answering dataset where the task requires answering complex questions in the context of Wikipedia tables.",
"An example can be seen in Figure 2 .",
"Unlike NLVR, the answers are not binary.",
"They can instead be cells in the table or the result of numerical or settheoretic operations performed on them.",
"Logical form languages For NLVR, we define a typed variable-free functional query language, inspired by the GeoQuery language (Zelle and Mooney, 1996) .",
"Our language contains six basic types: box (referring to one of the three gray areas in Figure 1) , object (referring to the circles, triangles and squares in Figure 1) , shape, color, number and boolean.",
"The constants in our language are color and shape names, the set of all boxes in an image, and the set of all objects in an image.",
"The functions in our language include those for filtering objects and boxes, and making assertions, a higher order function for handling negations, and a function for querying objects in boxes.",
"This type specification of constants and functions gives us a grammar with 115 productions, of which 101 are terminal productions (see Appendix A.1 for the complete set of rules in our grammar).",
"Figure 1 shows an example of a complete logical form in our language.",
"For WTQ, we use the functional query language used by (Liang et al., 2018) as the logical form language.",
"Figure 2 shows an example logical form.",
"Lexicons for coverage The lexicon we use for the coverage measure described in §3 contains under 40 rules for each logical form language.",
"They mainly map words and phrases to constants and unary functions in the target language.",
"The complete lexicons are shown in the Appendix.",
"Figures 1 and 2 also show the actions triggered by the corresponding lexicons for the utterances shown.",
"We find that small but precise lexicons are sufficient to guide the search process away from spurious logical forms.",
"Moreover, as shown empirically in §6.4, the model for NLVR does not learn much without this simple but crucial guidance.",
"Experiments We evaluate both our contributions on NLVR and WIKITABLEQUESTIONS.",
"Model In this work, we use a grammar-constrained encoder-decoder neural semantic parser for our experiments.",
"Of the many variants of this basic architecture (see §7), all of which are essentially seq2seq models with constrained outputs and/or re-parameterizations, we choose to use the parser of Krishnamurthy et al.",
"(2017) , as it is particularly well-suited to the WIKITABLEQUESTIONS dataset, which we evaluate on.",
"The encoder in the model is a bi-directional recurrent neural network with Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) cells, and the decoder is a grammarconstrained decoder also with LSTM cells.",
"Instead of directly outputting tokens in the logical form, the decoder outputs production rules from a CFG-like grammar.",
"These production rules sequentially build up an abstract syntax tree, which determines the logical form.",
"The model also has an entity linking component for producing table entities in the logical forms; this com-ponent is only applicable to WIKITABLEQUES-TIONS, and we remove it when running experiments on NLVR.",
"The particulars of the model are not the focus of this work, so we refer the reader to the original paper for more details.",
"In addition, we slightly modify the constrained decoding architecture from (Krishnamurthy et al., 2017) to bias the predicted actions towards those that would decrease the value of S(y i , x i ).",
"This is done using a coverage vector, v S i for each training instance that keeps track of the production rules triggered by x i , and gets updated whenever one of those desired productions is produced by the decoder.",
"That is, v S i is a vector of 1s and 0s, with 1s indicating the triggered productions that are yet to be produced by the decoder.",
"This is similar to the idea of checklists used by Kiddon et al.",
"(2016) .",
"The decoder in the original architecture scores output actions at each time step by computing a dot product of the predicted action representation with the embeddings of each of the actions.",
"We add a weighted sum of all the actions that are yet to produced: s a i = e a .",
"(p i + γ * v S i .E) (4) where s a i is the score of action a at time step i, e a is the embedding of that action, p i is the predicted action representation, E is the set of embeddings of all the actions, and γ is a learned parameter for regularizing the bias towards yet-to-be produced triggered actions.",
"Experimental setup NLVR We use the standard train-dev-test split for NLVR, containing 12409, 988 and 989 sentence-image pairs respectively.",
"NLVR contains most of the sentences occurring in multiple worlds (with an average of 3.9 worlds per sentence).",
"We set the word embedding and action embedding sizes to 50, and the hidden layer size of both the encoder and the decoder to 30.",
"We initialized all the parameters, including the word and action embeddings using Glorot uniform initialization (Glorot and Bengio, 2010) .",
"We found that using pretrained word representations did not help.",
"We added a dropout (Srivastava et al., 2014) of 0.2 on the outputs of the encoder and the decoder and before predicting the next action, set the beam size to 10 both during training and at test time, and trained the model using ADAM (Kingma and Ba, 2014) with a learning rate of 0.001.",
"All the hyperparameters are tuned on the validation set.",
"WIKITABLEQUESTIONS This dataset comes with five different cross-validation folds of training data, each containing a different 80/20 split for training and development.",
"We first show results aggregated from all five folds in §6.3, and then show results from controlled experiments on fold 1.",
"We replicated the model presented in Krishnamurthy et al.",
"(2017) , and only changed the training algorithm and the language used.",
"We used a beam size of 20 for MBR during training and decoding, and 10 for MML during decoding, and trained the model using Stochastic Gradient Descent (Kiefer et al., 1952) with a learning rate of 0.1, all of which are tuned on the validation sets.",
"Specifics of iterative search For our iterative search algorithm, we obtain an initial set of candidate logical forms in both domains by exhaustively searching to a depth of 10 2 .",
"During search we retrieve the logical forms that lead to the correct denotations in all the corresponding worlds, and sort them based on their coverage cost using the coverage lexicon described in §5.4, and choose the top-k 3 .",
"At each iteration of the search step in our iterative training algorithm, we increase the maximum depth of our search with a step-size of 2, finding more complex logical forms and covering a larger proportion of the training data.",
"While exhaustive search is prohibitively expensive beyond a fixed number of steps, our training process that uses beam search based approximation can go deeper.",
"Implementation We implemented our model and training algorithms within the AllenNLP (Gardner et al., 2018) toolkit.",
"The code and models are publicly available at https://github.com/allenai/ iterative-search-semparse.",
"Main results WIKITABLEQUESTIONS Table 1 compares the performance of a single model trained using Iterative Search, with that of previously published single models.",
"We excluded ensemble models since there are differences in the way ensembles are built for this task in previous work, either in terms of size or how the individual models were chosen.",
"We show both best and aver- Approach Dev Test Pasupat and Liang (2015) 37.0 37.1 Neelakantan et al.",
"(2017) 34.",
"(Liang et al., 2018) , all trained on the official split 1 of WIKITABLEQUESTIONS and tested on the official test set.",
"age (over 5 folds) single model performance from Liang et al.",
"(2018) (Memory Augmented Policy Optimization).",
"The best model was chosen based on performance on the development set.",
"Our single model performances are computed in the same way.",
"Note that Liang et al.",
"(2018) also use a lexicon similar to ours to prune the seed set of logical forms used to initialize their memory buffer.",
"In Table 2 , we compare the performance of our iterative search algorithm with three baselines: 1) Static MML, as described in §2.2.1 trained on the candidate set of logical forms obtained through the heuristic search technique described in §6.2; 2) Iterative MML, also an iterative technique but unlike iterative search, we skip MBR and iteratively train static MML models while increasing the number of decoding steps; and 3) MAPO (Liang et al., 2018) , the current best published system on WTQ.",
"All four algorithms are trained and evaluated on the first fold, use the same language, and the bottom three use the same model and the same set of logical forms used to train static MML.",
"Table 3 , we show a comparison of the performance of our iterative coverage-guided search algorithm with the previously published approaches for NLVR.",
"The first two rows correspond to models that are not semantic parsers.",
"This shows that semantic parsing is a promising direction for this task.",
"The closest work to ours is the weakly supervised parser built by (Goldman et al., 2018) .",
"They build a lexicon similar to ours for mapping surface forms in input sentences to abstract clusters.",
"But in addition to defining a lexicon, they also manually annotate complete sentences in this abstract space, and use those annotations to perform data augmentation for training a supervised parser, which is then used to initialize a weakly supervised parser.",
"They also explicitly use the abstractions to augment the beam during decoding using caching, and a separately-trained discriminative re-ranker to re-order the logical forms on the beam.",
"As a discriminative re-ranker is orthogonal to our contributions, we show their results with and without it, with \"Abs.",
"Sup.\"",
"being more comparable to our work.",
"Our model, which uses no data augmentation, no caching during decoding, and no discriminative re-ranker, outperforms their variant without reranking on the public test set, and outperforms their best model on the hidden test set, achieving a new state-of-theart result on this dataset.",
"NLVR In Effect of coverage-guided search To evaluate the contribution of coverage-guided search, we compare the the performance of the NLVR parser in two different settings: with and without coverage guidance in the cost function.",
"We also compare the performance of the parser in the two settings, when initialized with parameters from an MML model trained to maximize the likelihood of the set of logical forms obtained from exhaustive search.",
"Table 4 shows the results of this comparison.",
"We measure accuracy and consistency of all four models on the publicly available test set, using the official evaluation script.",
"Consistency here refers to the percentage of logical forms that produce the correct denotation in all the corresponding worlds, and is hence a stricter metric than accuracy.",
"The cost weight (λ in Equation 3) was tuned based on validation set performance for the runs with coverage, and we found that λ = 0.4 worked best.",
"It can be seen that both with and without ini-tialization, coverage guidance helps by a big margin, with the gap being even more prominent in the case where there is no initialization.",
"When there is neither coverage guidance nor a good initialization, the model does not learn much from unguided search and get a test accuracy not much higher than the majority baseline of 56.2%.",
"We found that coverage guidance was not as useful for WTQ.",
"The average value of the best performing λ was around 0.2, and higher values neither helped nor hurt performance.",
"Effect of iterative search To evaluate the effect of iterative search, we present the accuracy numbers from the search (S) and maximization (M) steps from different iterations in Tables 5 and 6 , showing results on NLVR and WTQ, respectively.",
"Additionally, we also show number of decoding steps used at each iterations, and the percentage of sentences in the training data for which we were able to obtain consistent logical forms from the S step, the set that was used in the M step of the same iteration.",
"It can be seen in both tables that a better MML model gives a better initialization for MBR, and a better MBR model results in a larger set of utterances for which we can retrieve consistent logical forms, thus improving the subsequent MML model.",
"The improvement for NLVR is more pronounced (a gain of 21% absolute) than for WTQ (a gain of 3% absolute), likely because the initial exhaustive search provides a much higher percentage of spurious logical forms for NLVR, and thus the starting place is relatively worse.",
"Complexity of Logical Forms We analyzed the logical forms produced by our iterative search algorithm at different iterations to see how they differ.",
"As expected, for NLVR, allowing greater depths lets the parser explore more complex logical forms.",
"Table 7 shows examples from the validation set that indicate this trend.",
"Related Work Most of the early methods used for training semantic parsers required the training data to come with annotated logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005) .",
"The primary limitation of such methods is that manually producing these logical forms is expensive, making it hard to scale these methods across domains.",
"Dev.",
"Test-P Test-H Approach Acc.",
"Cons.",
"Acc.",
"Cons.",
"Acc.",
"Cons.",
"MaxEnt (Suhr et al., 2017) 68.0 -67.7 -67.8 -BiATT-Pointer (Tan and Bansal, 2018) 74.6 -73.9 -71.8 -Abs.",
"Sup.",
"(Goldman et al., 2018) 84.3 66.3 81.7 60.1 --Abs.",
"Sup.",
"+ ReRank (Goldman et al., 2018) More recent research has focused on training semantic parsers with weak supervision (Liang et al., 2011; Berant et al., 2013) , or trying to automatically infer logical forms from denotations (Pasupat and .",
"However, matching the performance of a fully supervised semantic parser with only weak supervision remains a significant challenge (Yih et al., 2016) .",
"The main contributions of this work deal with training semantic parsers with weak supervision, and we gave a detailed discussion of related training methods in §2.2.",
"We evaluate our contributions on the NLVR and WIKITABLEQUESTIONS datasets.",
"Other work that evaluates on on these datasets include Goldman et al.",
"(2018) , Tan and Bansal (2018) , Neelakantan et al.",
"(2017) , Krishnamurthy et al.",
"(2017) , Haug et al.",
"(2018) , and (Liang et al., 2018) .",
"These prior works generally present modeling contributions that are orthogonal (and in some cases complementary) to the contributions of this paper.",
"There has also been a lot of recent work on neural semantic parsing, most of which is also orthogonal to (and could probably benefit from) our contributions (Dong and Lapata, 2016; Jia and Liang, 2016; Yin and Neubig, 2017; Krishnamurthy et al., 2017; Rabinovich et al., 2017) .",
"Recent attempts at dealing with the problem of spuriousness include Misra et al.",
"(2018) and Guu et al.",
"(2017) .",
"Coverage has recently been used in machine translation (Tu et al., 2016) and summarization (See et al., 2017) .",
"There have also been many methods that use coverage-like mechanisms to give lexical cues to semantic parsers.",
"Goldman et al.",
"(2018) 's abstract examples is the most recent and related work, but the idea is also related to lexicons in pre-neural semantic parsers (Kwiatkowski et al., 2011) .",
"There is a tower with four blocks (box exists (member count equals all boxes 4)) 1 Atleast one black triangle is not touching the edge (object exists (black (triangle ((negate filter touch wall) all objects)))) 2 There is a yellow block as the top of a tower with exactly three blocks.",
"(object exists (yellow (top (object in box (member count equals all boxes 3))))) 3 The tower with three blocks has a yellow block over a black block (object count greater equals (yellow (above (black (object in box (member count equals all boxes 3))))) 1) Table 7 : Complexity of logical forms produced at different iterations, from iteration 0 to iteration 3; each logical form could not be produced at the previous iterations Conclusion We have presented a new technique for training semantic parsers with weak supervision.",
"Our key insights are that lexical cues are crucial for guiding search during the early stages of training, and that the particulars of the approximate marginalization in maximum marginal likelihood have a large impact on performance.",
"To address the first issue, we used a simple coverage mechanism for including lexicon-like information in neural semantic parsers that do not have lexicons.",
"For the second issue, we developed an iterative procedure that alternates between statically-computed and dynamically-computed training signals.",
"Together these two contributions greatly improve semantic parsing performance, leading to new state-ofthe-art results on NLVR and WIKITABLEQUES-TIONS.",
"As these contributions are to the learning algorithm, they are broadly applicable to many models trained with weak supervision.",
"One potential future work direction is investigating whether they extend to other structured prediction problems beyond semantic parsing."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.2.1",
"2.2.2",
"3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"6.5",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Weakly supervised semantic parsing",
"Training algorithms",
"Maximum marginal likelihood",
"Reward-based methods",
"Coverage-guided search",
"Iterative search",
"Datasets",
"Cornell NLVR",
"WIKITABLEQUESTIONS",
"Logical form languages",
"Lexicons for coverage",
"Experiments",
"Model",
"Experimental setup",
"Main results",
"Effect of coverage-guided search",
"Effect of iterative search",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-21#paper-1019#slide-1 | Semantic Parsing for Question Answering | Question: Which athlete was from South Korea
Get rows where Nation is South Korea
Filter rows where value in Olympics
Get value from Athlete column
Kim Yu-na South Korea (KOR)
south_korea) athlete) Patrick Chan Canada (CAN)
WikiTableQuestions, Pasupat and Liang (2015) | Question: Which athlete was from South Korea
Get rows where Nation is South Korea
Filter rows where value in Olympics
Get value from Athlete column
Kim Yu-na South Korea (KOR)
south_korea) athlete) Patrick Chan Canada (CAN)
WikiTableQuestions, Pasupat and Liang (2015) | [] |
GEM-SciDuet-train-21#paper-1019#slide-2 | 1019 | Iterative Search for Weakly Supervised Semantic Parsing | Training semantic parsers from question-answer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates between searching for consistent logical forms and maximizing the marginal likelihood of the retrieved ones. This training scheme lets us iteratively train models that provide guidance to subsequent ones to search for logical forms of increasing complexity, thus dealing with the problem of spuriousness. We evaluate these techniques on two hard datasets: WIKITABLEQUESTIONS (WTQ) and Cornell Natural Language Visual Reasoning (NLVR), and show that our training algorithm outperforms the previous best systems, on WTQ in a comparable setting, and on NLVR with significantly less supervision. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202
],
"paper_content_text": [
"Introduction Semantic parsing is the task of translating natural language utterances into machine-executable meaning representations, often called programs or logical forms.",
"These logical forms can be executed against some representation of the context in which the utterance occurs, to produce a denotation.",
"This setup allows for complex reasoning over contextual knowledge, and it has been successfully used in several natural language understanding problems such as question answering (Berant et al., 2013) , program synthesis (Yin and Neubig, 2017) and building natural language interfaces (Suhr et al., 2018) .",
"Recent work has focused on training semantic parses via weak supervision from denotations alone (Liang et al., 2011; Berant et al., 2013) .",
"This is because obtaining logical form annotations is generally expensive (although recent work has addressed this issue to some extent (Yih et al., 2016) ), and not assuming full supervision lets us be agnostic about the logical form language.",
"The second reason is more important in open-domain semantic parsing tasks where it may not be possible to arrive at a complete set of operators required by the task.",
"However, training semantic parsers with weak supervision requires not only searching over an exponentially large space of logical forms (Berant et al., 2013; Artzi and Zettlemoyer, 2013; Pasupat and Liang, 2015; Guu et al., 2017, inter alia) but also dealing with spurious logical forms that evaluate to the correct denotation while not being semantically equivalent to the utterance.",
"For example, if the denotations are binary, 50% of all syntactically valid logical forms evaluate to the correct answer, regardless of their semantics.",
"This problem renders the training signal extremely noisy, making it hard for the model to learn anything without some additional guidance during search.",
"We introduce two innovations to improve learning from denotations.",
"Firstly, we propose an iterative search procedure for gradually increasing the complexity of candidate logical forms for each training instance, leading to better training data and better parsing accuracy.",
"This procedure is implemented via training our model with two interleaving objectives, one that involves searching for logical forms of limited complexity during training (online search), and another that maximizes the marginal likelihood of retrieved logical forms.",
"Second, we include a notion of coverage over the question in the search step to guide the training algorithm towards logical forms that not only evaluate to the correct denotation, but also have some connection to the words in the utterance.",
"We demonstrate the effectiveness of these two techniques on two difficult reasoning tasks: WIK-ITABLEQUESTIONS(WTQ) (Pasupat and Liang, 2015) , an open domain task with significant lexical variation, and Cornell Natural Language Visual Reasoning (NLVR) (Suhr et al., 2017 ), a closed domain task with binary denotations, and thus far less supervision.",
"We show that: 1) interleaving online search and MML over retrieved logical forms ( §4) is a more effective training algorithm than each of those objectives alone; 2) coverage guidance during search ( §3) is helpful for dealing with weak supervision, more so in the case of NLVR where the supervision is weaker; 3) a combination of the two techniques yields 44.3% test accuracy on WTQ, outperforming the previous best single model in a comparable setting, and 82.9% test accuracy on NLVR, outperforming the best prior model, which also relies on greater supervision.",
"Background Weakly supervised semantic parsing We formally define semantic parsing in a weakly supervised setup as follows.",
"Given a dataset where the i th instance is the triple {x i , w i , d i }, representing a sentence x i , the world w i associated with the sentence, and the corresponding denotation d i , our goal is to find y i , the translation of x i in an appropriate logical form language (see §5.3), such that y i w i = d i ; i.e., the execution of y i in world w i produces the correct denotation d i .",
"A semantic parser defines a distribution over logical forms given an input utterance: p(Y |x i ; θ).",
"Training algorithms In this section we describe prior techniques for training semantic parsers with weak supervision: maximizing marginal likelihood, and rewardbased methods.",
"Maximum marginal likelihood Most work on training semantic parsers from denotations maximizes the likelihood of the denotation given the utterance.",
"The semantic parsing model itself defines a distribution over logical forms, however, not denotations, so this maximization must be recast as a marginalization over logical forms that evaluate to the correct denotation: max θ x i ,d i ∈D y i ∈Y | y i w i =d i p(y i |x i ; θ) (1) This objective function is called maximum marginal likelihood (MML).",
"The inner summation is in general intractable to perform during training, so it is only approximated.",
"Most prior work (Berant et al., 2013; Goldman et al., 2018 , inter alia) approximate the intractable marginalization by summing over logical forms obtained via beam search during training.",
"This typically results in frequent search failures early during training when model parameters are close to random, and in general may only yield spurious logical forms in the absence of any guidance.",
"Since modern semantic parsers typically operate without a lexicon, new techniques are essential to provide guidance to the search procedure (Goldman et al., 2018) .",
"One way of providing this guidance during search is to perform some kind of heuristic search up front to find a set of logical forms that evaluate to the correct denotation, and use those logical forms to approximate the inner summation (Liang et al., 2011; Krishnamurthy et al., 2017) .",
"The particulars of the heuristic search can have a large impact on performance; a smaller candidate set has lower noise, while a larger set makes it more likely that the correct logical form is in it, and one needs to strike the right balance.",
"In this paper, we refer to the MML that does search during training as dynamic MML, and the one that does an offline search as static MML.",
"The main benefit of dynamic MML is that it adapts its training signal over time.",
"As the model learns, it can increasingly focus its probability mass on a small set of very likely logical forms.",
"The main benefit of static MML is that there is no need to search during training, so there is a consistent training signal even at the start of training, and it is typically more computationally efficient than dynamic MML.",
"Reward-based methods When training weakly supervised semantic parsers, it is often desirable to inject some prior knowledge into the training procedure by defining arbitrary reward or cost functions.",
"There exists prior work that use such methods, both in a reinforcement learning setting (Liang et al., , 2018 , and otherwise (Iyyer et al., 2017; Guu et al., 2017) .",
"In our work, we define a customized cost function that includes a coverage term, and use a Minimum Bayes Risk (MBR) (Goodman, 1996; Goel and Byrne, 2000; Smith and Eisner, 2006) training scheme, which we describe in §3.",
"Coverage-guided search Weakly-supervised training of semantic parsers relies heavily on lexical cues to guide the initial stages of learning to good logical forms.",
"Traditionally, these lexical cues were provided in the parser's lexicon.",
"Neural semantic parsers remove the lexicon, however, and so need another mechanism for obtaining these lexical cues.",
"In this section we introduce the use of coverage to inject lexicon-like information into neural semantic parsers.",
"Coverage is a measure of relevance of the candidate logical form y i to the input x i , in terms of how well the productions in y i map to parts of x i .",
"We use a small manually specified lexicon as a mapping from source language to the target language productions, and define coverage of y i as the number of productions triggered by the input utterance, according to the lexicon, that are included in y i .",
"We use this measure of coverage to augment our loss function, and train using an MBR based algorithm as follows.",
"We use beam search to train a model to minimize the expected value of a cost function C: min θ N i=1 Ep (y i |x i ;θ) C(x i , y i , w i , d i ) (2) wherep is a re-normalization 1 of the probabilities assigned to all logical forms on the beam.",
"We define the cost function C as: C(x i , y i , w i , d i ) = λS(y i , x i )+(1−λ)T (y i , w i , d i ) (3) where the function S measures the number of items that y i is missing from the actions (or grammar production rules) triggered by the input utterance x i given the lexicon; and the function T measures the consistency of the evaluation of y i in w i , meaning that it is 0 if y i w i = d i , or a value e otherwise.",
"We set e as the maximum possible value of the coverage cost for the corresponding instance, to make the two costs comparable in magnitude.",
"λ is a hyperparameter that gives the relative weight of the coverage cost.",
"Iterative search In this section we describe the iterative technique for refining the set of candidate logical forms associated with each training instance.",
"As discussed in §2.2, most prior work on weakly-supervised training of semantic parsers uses dynamic MML.",
"This is particularly problematic in domains like NLVR, where the supervision signal is binary-it is very hard for dynamic MML to bootstrap its way to finding good logical forms.",
"To solve this problem, we interleave static MML, which has a consistent supervision signal from the start of training, with the coverageaugmented MBR algorithm described in §3.",
"In order to use static MML, we need an initial set of candidate logical forms.",
"We obtain this candidate set using a bounded-length exhaustive search, filtered using heuristics based on the same lexical mapping used for coverage in §3.",
"A bounded-length search will not find logical forms for the entire training data, so we can only use a subset of the data for initial training.",
"We train a model to convergence using static MML on these logical forms, then use that model to initialize coverage-augmented MBR training.",
"This gives the model a good starting place for the dynamic learning algorithm, and the search at training time can look for logical forms that are longer than could be found with the bounded-length exhaustive search.",
"We train MBR to convergence, then use beam search on the MBR model to find a new set of candidate logical forms for static MML on the training data.",
"This set of logical forms can have a greater length than those in the initial set, because this search uses model scores to not exhaustively explore all possible paths, and thus will likely cover more of the training data.",
"In this way, we can iteratively improve the candidate logical forms used for static training, which in turn improves the starting place for the online search algorithm.",
"Algorithm 1 concretely describes this process.",
"Decode in the algorithm refers to running a beam search decoder that returns a set of consistent logical forms (i.e.",
"T = 0) for each of the input utterances.",
"We start off with a seed dataset D 0 for which consistent logical forms are available.",
"Datasets We will now describe the two datasets we use in this work to evaluate our methods -Cornell NLVR and WIKITABLEQUESTIONS.",
"Input : Dataset D = {X, W, D}; and seed set D 0 = {X 0 , Y 0 } such that X 0 ⊂ X and C(x 0 i , y 0 i , W i , D i ) = 0 Output: Model parameters θ MBR Initialize dataset D MML = D 0 ; while Acc(D dev ) is increasing do θ MML = MML(D MML ); Initialize θ MBR = θ MML ; Update θ MBR = MBR(D; θ MBR ); Update D MML = Decode(D; θ MBR ); end Algorithm 1: Iterative coverage-guided search Cornell NLVR Cornell NLVR is a language-grounding dataset containing natural language sentences provided along with synthetically generated visual contexts, and a label for each sentence-image pair indicating whether the sentence is true or false in the given context.",
"Figure 1 shows two example sentenceimage pairs from the dataset (with the same sentence).",
"The dataset also comes with structured representations of images, indicating the color, shape, size, and x-and y-coordinates of each of the objects in the image.",
"While we show images in Figure 1 for ease of exposition, we use the structured representations in this work.",
"Following the notation introduced in §2.1, x i in this example is There is a box with only one item that is blue.",
"The structured representations associated with the two images shown are two of the worlds (w 1 i and w 2 i ), in which x i could be evaluated.",
"The corresponding labels are the denotations d 1 i and d 2 i that a translation y i of the sentence x i is expected to produce, when executed in the two worlds respectively.",
"That the same sentence occurs with multiple worlds is an important property of this dataset, and we make use of it by defining the function T to be 0 only if ∀ w j i ,d j i y i w j i = d j i .",
"WIKITABLEQUESTIONS WIKITABLEQUESTIONS is a question-answering dataset where the task requires answering complex questions in the context of Wikipedia tables.",
"An example can be seen in Figure 2 .",
"Unlike NLVR, the answers are not binary.",
"They can instead be cells in the table or the result of numerical or settheoretic operations performed on them.",
"Logical form languages For NLVR, we define a typed variable-free functional query language, inspired by the GeoQuery language (Zelle and Mooney, 1996) .",
"Our language contains six basic types: box (referring to one of the three gray areas in Figure 1) , object (referring to the circles, triangles and squares in Figure 1) , shape, color, number and boolean.",
"The constants in our language are color and shape names, the set of all boxes in an image, and the set of all objects in an image.",
"The functions in our language include those for filtering objects and boxes, and making assertions, a higher order function for handling negations, and a function for querying objects in boxes.",
"This type specification of constants and functions gives us a grammar with 115 productions, of which 101 are terminal productions (see Appendix A.1 for the complete set of rules in our grammar).",
"Figure 1 shows an example of a complete logical form in our language.",
"For WTQ, we use the functional query language used by (Liang et al., 2018) as the logical form language.",
"Figure 2 shows an example logical form.",
"Lexicons for coverage The lexicon we use for the coverage measure described in §3 contains under 40 rules for each logical form language.",
"They mainly map words and phrases to constants and unary functions in the target language.",
"The complete lexicons are shown in the Appendix.",
"Figures 1 and 2 also show the actions triggered by the corresponding lexicons for the utterances shown.",
"We find that small but precise lexicons are sufficient to guide the search process away from spurious logical forms.",
"Moreover, as shown empirically in §6.4, the model for NLVR does not learn much without this simple but crucial guidance.",
"Experiments We evaluate both our contributions on NLVR and WIKITABLEQUESTIONS.",
"Model In this work, we use a grammar-constrained encoder-decoder neural semantic parser for our experiments.",
"Of the many variants of this basic architecture (see §7), all of which are essentially seq2seq models with constrained outputs and/or re-parameterizations, we choose to use the parser of Krishnamurthy et al.",
"(2017) , as it is particularly well-suited to the WIKITABLEQUESTIONS dataset, which we evaluate on.",
"The encoder in the model is a bi-directional recurrent neural network with Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) cells, and the decoder is a grammarconstrained decoder also with LSTM cells.",
"Instead of directly outputting tokens in the logical form, the decoder outputs production rules from a CFG-like grammar.",
"These production rules sequentially build up an abstract syntax tree, which determines the logical form.",
"The model also has an entity linking component for producing table entities in the logical forms; this com-ponent is only applicable to WIKITABLEQUES-TIONS, and we remove it when running experiments on NLVR.",
"The particulars of the model are not the focus of this work, so we refer the reader to the original paper for more details.",
"In addition, we slightly modify the constrained decoding architecture from (Krishnamurthy et al., 2017) to bias the predicted actions towards those that would decrease the value of S(y i , x i ).",
"This is done using a coverage vector, v S i for each training instance that keeps track of the production rules triggered by x i , and gets updated whenever one of those desired productions is produced by the decoder.",
"That is, v S i is a vector of 1s and 0s, with 1s indicating the triggered productions that are yet to be produced by the decoder.",
"This is similar to the idea of checklists used by Kiddon et al.",
"(2016) .",
"The decoder in the original architecture scores output actions at each time step by computing a dot product of the predicted action representation with the embeddings of each of the actions.",
"We add a weighted sum of all the actions that are yet to produced: s a i = e a .",
"(p i + γ * v S i .E) (4) where s a i is the score of action a at time step i, e a is the embedding of that action, p i is the predicted action representation, E is the set of embeddings of all the actions, and γ is a learned parameter for regularizing the bias towards yet-to-be produced triggered actions.",
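As a rough PyTorch sketch of Equation 4 (shapes and names are our assumptions, not the paper's code):

```python
import torch

def biased_action_scores(p_i: torch.Tensor,   # (dim,) predicted action repr.
                         E: torch.Tensor,     # (num_actions, dim) embeddings
                         v_S: torch.Tensor,   # (num_actions,) 0/1 coverage vector
                         gamma: torch.Tensor  # scalar learned parameter
                         ) -> torch.Tensor:
    """s_i^a = e_a . (p_i + gamma * v_i^S . E), computed for every action a."""
    summed = v_S @ E                    # sum of embeddings of unproduced triggered actions
    return E @ (p_i + gamma * summed)   # (num_actions,) action scores
```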
"Experimental setup NLVR We use the standard train-dev-test split for NLVR, containing 12409, 988 and 989 sentence-image pairs respectively.",
"NLVR contains most of the sentences occurring in multiple worlds (with an average of 3.9 worlds per sentence).",
"We set the word embedding and action embedding sizes to 50, and the hidden layer size of both the encoder and the decoder to 30.",
"We initialized all the parameters, including the word and action embeddings using Glorot uniform initialization (Glorot and Bengio, 2010) .",
"We found that using pretrained word representations did not help.",
"We added a dropout (Srivastava et al., 2014) of 0.2 on the outputs of the encoder and the decoder and before predicting the next action, set the beam size to 10 both during training and at test time, and trained the model using ADAM (Kingma and Ba, 2014) with a learning rate of 0.001.",
"All the hyperparameters are tuned on the validation set.",
"WIKITABLEQUESTIONS This dataset comes with five different cross-validation folds of training data, each containing a different 80/20 split for training and development.",
"We first show results aggregated from all five folds in §6.3, and then show results from controlled experiments on fold 1.",
"We replicated the model presented in Krishnamurthy et al.",
"(2017) , and only changed the training algorithm and the language used.",
"We used a beam size of 20 for MBR during training and decoding, and 10 for MML during decoding, and trained the model using Stochastic Gradient Descent (Kiefer et al., 1952) with a learning rate of 0.1, all of which are tuned on the validation sets.",
"Specifics of iterative search For our iterative search algorithm, we obtain an initial set of candidate logical forms in both domains by exhaustively searching to a depth of 10 2 .",
"During search we retrieve the logical forms that lead to the correct denotations in all the corresponding worlds, and sort them based on their coverage cost using the coverage lexicon described in §5.4, and choose the top-k 3 .",
"At each iteration of the search step in our iterative training algorithm, we increase the maximum depth of our search with a step-size of 2, finding more complex logical forms and covering a larger proportion of the training data.",
"While exhaustive search is prohibitively expensive beyond a fixed number of steps, our training process that uses beam search based approximation can go deeper.",
"Implementation We implemented our model and training algorithms within the AllenNLP (Gardner et al., 2018) toolkit.",
"The code and models are publicly available at https://github.com/allenai/ iterative-search-semparse.",
"Main results WIKITABLEQUESTIONS Table 1 compares the performance of a single model trained using Iterative Search, with that of previously published single models.",
"We excluded ensemble models since there are differences in the way ensembles are built for this task in previous work, either in terms of size or how the individual models were chosen.",
"We show both best and aver- Approach Dev Test Pasupat and Liang (2015) 37.0 37.1 Neelakantan et al.",
"(2017) 34.",
"(Liang et al., 2018) , all trained on the official split 1 of WIKITABLEQUESTIONS and tested on the official test set.",
"age (over 5 folds) single model performance from Liang et al.",
"(2018) (Memory Augmented Policy Optimization).",
"The best model was chosen based on performance on the development set.",
"Our single model performances are computed in the same way.",
"Note that Liang et al.",
"(2018) also use a lexicon similar to ours to prune the seed set of logical forms used to initialize their memory buffer.",
"In Table 2 , we compare the performance of our iterative search algorithm with three baselines: 1) Static MML, as described in §2.2.1 trained on the candidate set of logical forms obtained through the heuristic search technique described in §6.2; 2) Iterative MML, also an iterative technique but unlike iterative search, we skip MBR and iteratively train static MML models while increasing the number of decoding steps; and 3) MAPO (Liang et al., 2018) , the current best published system on WTQ.",
"All four algorithms are trained and evaluated on the first fold, use the same language, and the bottom three use the same model and the same set of logical forms used to train static MML.",
"Table 3 , we show a comparison of the performance of our iterative coverage-guided search algorithm with the previously published approaches for NLVR.",
"The first two rows correspond to models that are not semantic parsers.",
"This shows that semantic parsing is a promising direction for this task.",
"The closest work to ours is the weakly supervised parser built by (Goldman et al., 2018) .",
"They build a lexicon similar to ours for mapping surface forms in input sentences to abstract clusters.",
"But in addition to defining a lexicon, they also manually annotate complete sentences in this abstract space, and use those annotations to perform data augmentation for training a supervised parser, which is then used to initialize a weakly supervised parser.",
"They also explicitly use the abstractions to augment the beam during decoding using caching, and a separately-trained discriminative re-ranker to re-order the logical forms on the beam.",
"As a discriminative re-ranker is orthogonal to our contributions, we show their results with and without it, with \"Abs.",
"Sup.\"",
"being more comparable to our work.",
"Our model, which uses no data augmentation, no caching during decoding, and no discriminative re-ranker, outperforms their variant without reranking on the public test set, and outperforms their best model on the hidden test set, achieving a new state-of-theart result on this dataset.",
"NLVR In Effect of coverage-guided search To evaluate the contribution of coverage-guided search, we compare the the performance of the NLVR parser in two different settings: with and without coverage guidance in the cost function.",
"We also compare the performance of the parser in the two settings, when initialized with parameters from an MML model trained to maximize the likelihood of the set of logical forms obtained from exhaustive search.",
"Table 4 shows the results of this comparison.",
"We measure accuracy and consistency of all four models on the publicly available test set, using the official evaluation script.",
"Consistency here refers to the percentage of logical forms that produce the correct denotation in all the corresponding worlds, and is hence a stricter metric than accuracy.",
"The cost weight (λ in Equation 3) was tuned based on validation set performance for the runs with coverage, and we found that λ = 0.4 worked best.",
"It can be seen that both with and without ini-tialization, coverage guidance helps by a big margin, with the gap being even more prominent in the case where there is no initialization.",
"When there is neither coverage guidance nor a good initialization, the model does not learn much from unguided search and get a test accuracy not much higher than the majority baseline of 56.2%.",
"We found that coverage guidance was not as useful for WTQ.",
"The average value of the best performing λ was around 0.2, and higher values neither helped nor hurt performance.",
"Effect of iterative search To evaluate the effect of iterative search, we present the accuracy numbers from the search (S) and maximization (M) steps from different iterations in Tables 5 and 6 , showing results on NLVR and WTQ, respectively.",
"Additionally, we also show number of decoding steps used at each iterations, and the percentage of sentences in the training data for which we were able to obtain consistent logical forms from the S step, the set that was used in the M step of the same iteration.",
"It can be seen in both tables that a better MML model gives a better initialization for MBR, and a better MBR model results in a larger set of utterances for which we can retrieve consistent logical forms, thus improving the subsequent MML model.",
"The improvement for NLVR is more pronounced (a gain of 21% absolute) than for WTQ (a gain of 3% absolute), likely because the initial exhaustive search provides a much higher percentage of spurious logical forms for NLVR, and thus the starting place is relatively worse.",
"Complexity of Logical Forms We analyzed the logical forms produced by our iterative search algorithm at different iterations to see how they differ.",
"As expected, for NLVR, allowing greater depths lets the parser explore more complex logical forms.",
"Table 7 shows examples from the validation set that indicate this trend.",
"Related Work Most of the early methods used for training semantic parsers required the training data to come with annotated logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005) .",
"The primary limitation of such methods is that manually producing these logical forms is expensive, making it hard to scale these methods across domains.",
"Dev.",
"Test-P Test-H Approach Acc.",
"Cons.",
"Acc.",
"Cons.",
"Acc.",
"Cons.",
"MaxEnt (Suhr et al., 2017) 68.0 -67.7 -67.8 -BiATT-Pointer (Tan and Bansal, 2018) 74.6 -73.9 -71.8 -Abs.",
"Sup.",
"(Goldman et al., 2018) 84.3 66.3 81.7 60.1 --Abs.",
"Sup.",
"+ ReRank (Goldman et al., 2018) More recent research has focused on training semantic parsers with weak supervision (Liang et al., 2011; Berant et al., 2013) , or trying to automatically infer logical forms from denotations (Pasupat and .",
"However, matching the performance of a fully supervised semantic parser with only weak supervision remains a significant challenge (Yih et al., 2016) .",
"The main contributions of this work deal with training semantic parsers with weak supervision, and we gave a detailed discussion of related training methods in §2.2.",
"We evaluate our contributions on the NLVR and WIKITABLEQUESTIONS datasets.",
"Other work that evaluates on on these datasets include Goldman et al.",
"(2018) , Tan and Bansal (2018) , Neelakantan et al.",
"(2017) , Krishnamurthy et al.",
"(2017) , Haug et al.",
"(2018) , and (Liang et al., 2018) .",
"These prior works generally present modeling contributions that are orthogonal (and in some cases complementary) to the contributions of this paper.",
"There has also been a lot of recent work on neural semantic parsing, most of which is also orthogonal to (and could probably benefit from) our contributions (Dong and Lapata, 2016; Jia and Liang, 2016; Yin and Neubig, 2017; Krishnamurthy et al., 2017; Rabinovich et al., 2017) .",
"Recent attempts at dealing with the problem of spuriousness include Misra et al.",
"(2018) and Guu et al.",
"(2017) .",
"Coverage has recently been used in machine translation (Tu et al., 2016) and summarization (See et al., 2017) .",
"There have also been many methods that use coverage-like mechanisms to give lexical cues to semantic parsers.",
"Goldman et al.",
"(2018) 's abstract examples is the most recent and related work, but the idea is also related to lexicons in pre-neural semantic parsers (Kwiatkowski et al., 2011) .",
"There is a tower with four blocks (box exists (member count equals all boxes 4)) 1 Atleast one black triangle is not touching the edge (object exists (black (triangle ((negate filter touch wall) all objects)))) 2 There is a yellow block as the top of a tower with exactly three blocks.",
"(object exists (yellow (top (object in box (member count equals all boxes 3))))) 3 The tower with three blocks has a yellow block over a black block (object count greater equals (yellow (above (black (object in box (member count equals all boxes 3))))) 1) Table 7 : Complexity of logical forms produced at different iterations, from iteration 0 to iteration 3; each logical form could not be produced at the previous iterations Conclusion We have presented a new technique for training semantic parsers with weak supervision.",
"Our key insights are that lexical cues are crucial for guiding search during the early stages of training, and that the particulars of the approximate marginalization in maximum marginal likelihood have a large impact on performance.",
"To address the first issue, we used a simple coverage mechanism for including lexicon-like information in neural semantic parsers that do not have lexicons.",
"For the second issue, we developed an iterative procedure that alternates between statically-computed and dynamically-computed training signals.",
"Together these two contributions greatly improve semantic parsing performance, leading to new state-ofthe-art results on NLVR and WIKITABLEQUES-TIONS.",
"As these contributions are to the learning algorithm, they are broadly applicable to many models trained with weak supervision.",
"One potential future work direction is investigating whether they extend to other structured prediction problems beyond semantic parsing."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.2.1",
"2.2.2",
"3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"6.5",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Weakly supervised semantic parsing",
"Training algorithms",
"Maximum marginal likelihood",
"Reward-based methods",
"Coverage-guided search",
"Iterative search",
"Datasets",
"Cornell NLVR",
"WIKITABLEQUESTIONS",
"Logical form languages",
"Lexicons for coverage",
"Experiments",
"Model",
"Experimental setup",
"Main results",
"Effect of coverage-guided search",
"Effect of iterative search",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-21#paper-1019#slide-2 | Weakly Supervised Semantic Parsing | xi: Which athlete was from South Korea after 2010?
wi: Kim Yu-na South Korea
Tenley Albright United States
Test: Given xi, wi find yi such that ⟦yi⟧wi = di | xi: Which athlete was from South Korea after 2010?
wi: Kim Yu-na South Korea
Tenley Albright United States
Test: Given xi, wi find yi such that ⟦yi⟧wi = di | [] |
GEM-SciDuet-train-21#paper-1019#slide-3 | 1019 | Iterative Search for Weakly Supervised Semantic Parsing | Training semantic parsers from question-answer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates between searching for consistent logical forms and maximizing the marginal likelihood of the retrieved ones. This training scheme lets us iteratively train models that provide guidance to subsequent ones to search for logical forms of increasing complexity, thus dealing with the problem of spuriousness. We evaluate these techniques on two hard datasets: WIKITABLEQUESTIONS (WTQ) and Cornell Natural Language Visual Reasoning (NLVR), and show that our training algorithm outperforms the previous best systems, on WTQ in a comparable setting, and on NLVR with significantly less supervision. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202
],
"paper_content_text": [
"Introduction Semantic parsing is the task of translating natural language utterances into machine-executable meaning representations, often called programs or logical forms.",
"These logical forms can be executed against some representation of the context in which the utterance occurs, to produce a denotation.",
"This setup allows for complex reasoning over contextual knowledge, and it has been successfully used in several natural language understanding problems such as question answering (Berant et al., 2013) , program synthesis (Yin and Neubig, 2017) and building natural language interfaces (Suhr et al., 2018) .",
"Recent work has focused on training semantic parses via weak supervision from denotations alone (Liang et al., 2011; Berant et al., 2013) .",
"This is because obtaining logical form annotations is generally expensive (although recent work has addressed this issue to some extent (Yih et al., 2016) ), and not assuming full supervision lets us be agnostic about the logical form language.",
"The second reason is more important in open-domain semantic parsing tasks where it may not be possible to arrive at a complete set of operators required by the task.",
"However, training semantic parsers with weak supervision requires not only searching over an exponentially large space of logical forms (Berant et al., 2013; Artzi and Zettlemoyer, 2013; Pasupat and Liang, 2015; Guu et al., 2017, inter alia) but also dealing with spurious logical forms that evaluate to the correct denotation while not being semantically equivalent to the utterance.",
"For example, if the denotations are binary, 50% of all syntactically valid logical forms evaluate to the correct answer, regardless of their semantics.",
"This problem renders the training signal extremely noisy, making it hard for the model to learn anything without some additional guidance during search.",
"We introduce two innovations to improve learning from denotations.",
"Firstly, we propose an iterative search procedure for gradually increasing the complexity of candidate logical forms for each training instance, leading to better training data and better parsing accuracy.",
"This procedure is implemented via training our model with two interleaving objectives, one that involves searching for logical forms of limited complexity during training (online search), and another that maximizes the marginal likelihood of retrieved logical forms.",
"Second, we include a notion of coverage over the question in the search step to guide the training algorithm towards logical forms that not only evaluate to the correct denotation, but also have some connection to the words in the utterance.",
"We demonstrate the effectiveness of these two techniques on two difficult reasoning tasks: WIK-ITABLEQUESTIONS(WTQ) (Pasupat and Liang, 2015) , an open domain task with significant lexical variation, and Cornell Natural Language Visual Reasoning (NLVR) (Suhr et al., 2017 ), a closed domain task with binary denotations, and thus far less supervision.",
"We show that: 1) interleaving online search and MML over retrieved logical forms ( §4) is a more effective training algorithm than each of those objectives alone; 2) coverage guidance during search ( §3) is helpful for dealing with weak supervision, more so in the case of NLVR where the supervision is weaker; 3) a combination of the two techniques yields 44.3% test accuracy on WTQ, outperforming the previous best single model in a comparable setting, and 82.9% test accuracy on NLVR, outperforming the best prior model, which also relies on greater supervision.",
"Background Weakly supervised semantic parsing We formally define semantic parsing in a weakly supervised setup as follows.",
"Given a dataset where the i th instance is the triple {x i , w i , d i }, representing a sentence x i , the world w i associated with the sentence, and the corresponding denotation d i , our goal is to find y i , the translation of x i in an appropriate logical form language (see §5.3), such that y i w i = d i ; i.e., the execution of y i in world w i produces the correct denotation d i .",
"A semantic parser defines a distribution over logical forms given an input utterance: p(Y |x i ; θ).",
"Training algorithms In this section we describe prior techniques for training semantic parsers with weak supervision: maximizing marginal likelihood, and rewardbased methods.",
"Maximum marginal likelihood Most work on training semantic parsers from denotations maximizes the likelihood of the denotation given the utterance.",
"The semantic parsing model itself defines a distribution over logical forms, however, not denotations, so this maximization must be recast as a marginalization over logical forms that evaluate to the correct denotation: max θ x i ,d i ∈D y i ∈Y | y i w i =d i p(y i |x i ; θ) (1) This objective function is called maximum marginal likelihood (MML).",
"The inner summation is in general intractable to perform during training, so it is only approximated.",
"Most prior work (Berant et al., 2013; Goldman et al., 2018 , inter alia) approximate the intractable marginalization by summing over logical forms obtained via beam search during training.",
"This typically results in frequent search failures early during training when model parameters are close to random, and in general may only yield spurious logical forms in the absence of any guidance.",
"Since modern semantic parsers typically operate without a lexicon, new techniques are essential to provide guidance to the search procedure (Goldman et al., 2018) .",
"One way of providing this guidance during search is to perform some kind of heuristic search up front to find a set of logical forms that evaluate to the correct denotation, and use those logical forms to approximate the inner summation (Liang et al., 2011; Krishnamurthy et al., 2017) .",
"The particulars of the heuristic search can have a large impact on performance; a smaller candidate set has lower noise, while a larger set makes it more likely that the correct logical form is in it, and one needs to strike the right balance.",
"In this paper, we refer to the MML that does search during training as dynamic MML, and the one that does an offline search as static MML.",
"The main benefit of dynamic MML is that it adapts its training signal over time.",
"As the model learns, it can increasingly focus its probability mass on a small set of very likely logical forms.",
"The main benefit of static MML is that there is no need to search during training, so there is a consistent training signal even at the start of training, and it is typically more computationally efficient than dynamic MML.",
"Reward-based methods When training weakly supervised semantic parsers, it is often desirable to inject some prior knowledge into the training procedure by defining arbitrary reward or cost functions.",
"There exists prior work that use such methods, both in a reinforcement learning setting (Liang et al., , 2018 , and otherwise (Iyyer et al., 2017; Guu et al., 2017) .",
"In our work, we define a customized cost function that includes a coverage term, and use a Minimum Bayes Risk (MBR) (Goodman, 1996; Goel and Byrne, 2000; Smith and Eisner, 2006) training scheme, which we describe in §3.",
"Coverage-guided search Weakly-supervised training of semantic parsers relies heavily on lexical cues to guide the initial stages of learning to good logical forms.",
"Traditionally, these lexical cues were provided in the parser's lexicon.",
"Neural semantic parsers remove the lexicon, however, and so need another mechanism for obtaining these lexical cues.",
"In this section we introduce the use of coverage to inject lexicon-like information into neural semantic parsers.",
"Coverage is a measure of relevance of the candidate logical form y i to the input x i , in terms of how well the productions in y i map to parts of x i .",
"We use a small manually specified lexicon as a mapping from source language to the target language productions, and define coverage of y i as the number of productions triggered by the input utterance, according to the lexicon, that are included in y i .",
"We use this measure of coverage to augment our loss function, and train using an MBR based algorithm as follows.",
"We use beam search to train a model to minimize the expected value of a cost function C: min θ N i=1 Ep (y i |x i ;θ) C(x i , y i , w i , d i ) (2) wherep is a re-normalization 1 of the probabilities assigned to all logical forms on the beam.",
"We define the cost function C as: C(x i , y i , w i , d i ) = λS(y i , x i )+(1−λ)T (y i , w i , d i ) (3) where the function S measures the number of items that y i is missing from the actions (or grammar production rules) triggered by the input utterance x i given the lexicon; and the function T measures the consistency of the evaluation of y i in w i , meaning that it is 0 if y i w i = d i , or a value e otherwise.",
"We set e as the maximum possible value of the coverage cost for the corresponding instance, to make the two costs comparable in magnitude.",
"λ is a hyperparameter that gives the relative weight of the coverage cost.",
"Iterative search In this section we describe the iterative technique for refining the set of candidate logical forms associated with each training instance.",
"As discussed in §2.2, most prior work on weakly-supervised training of semantic parsers uses dynamic MML.",
"This is particularly problematic in domains like NLVR, where the supervision signal is binary-it is very hard for dynamic MML to bootstrap its way to finding good logical forms.",
"To solve this problem, we interleave static MML, which has a consistent supervision signal from the start of training, with the coverageaugmented MBR algorithm described in §3.",
"In order to use static MML, we need an initial set of candidate logical forms.",
"We obtain this candidate set using a bounded-length exhaustive search, filtered using heuristics based on the same lexical mapping used for coverage in §3.",
"A bounded-length search will not find logical forms for the entire training data, so we can only use a subset of the data for initial training.",
"We train a model to convergence using static MML on these logical forms, then use that model to initialize coverage-augmented MBR training.",
"This gives the model a good starting place for the dynamic learning algorithm, and the search at training time can look for logical forms that are longer than could be found with the bounded-length exhaustive search.",
"We train MBR to convergence, then use beam search on the MBR model to find a new set of candidate logical forms for static MML on the training data.",
"This set of logical forms can have a greater length than those in the initial set, because this search uses model scores to not exhaustively explore all possible paths, and thus will likely cover more of the training data.",
"In this way, we can iteratively improve the candidate logical forms used for static training, which in turn improves the starting place for the online search algorithm.",
"Algorithm 1 concretely describes this process.",
"Decode in the algorithm refers to running a beam search decoder that returns a set of consistent logical forms (i.e.",
"T = 0) for each of the input utterances.",
"We start off with a seed dataset D 0 for which consistent logical forms are available.",
"Datasets We will now describe the two datasets we use in this work to evaluate our methods -Cornell NLVR and WIKITABLEQUESTIONS.",
"Input : Dataset D = {X, W, D}; and seed set D 0 = {X 0 , Y 0 } such that X 0 ⊂ X and C(x 0 i , y 0 i , W i , D i ) = 0 Output: Model parameters θ MBR Initialize dataset D MML = D 0 ; while Acc(D dev ) is increasing do θ MML = MML(D MML ); Initialize θ MBR = θ MML ; Update θ MBR = MBR(D; θ MBR ); Update D MML = Decode(D; θ MBR ); end Algorithm 1: Iterative coverage-guided search Cornell NLVR Cornell NLVR is a language-grounding dataset containing natural language sentences provided along with synthetically generated visual contexts, and a label for each sentence-image pair indicating whether the sentence is true or false in the given context.",
"Figure 1 shows two example sentenceimage pairs from the dataset (with the same sentence).",
"The dataset also comes with structured representations of images, indicating the color, shape, size, and x-and y-coordinates of each of the objects in the image.",
"While we show images in Figure 1 for ease of exposition, we use the structured representations in this work.",
"Following the notation introduced in §2.1, x i in this example is There is a box with only one item that is blue.",
"The structured representations associated with the two images shown are two of the worlds (w 1 i and w 2 i ), in which x i could be evaluated.",
"The corresponding labels are the denotations d 1 i and d 2 i that a translation y i of the sentence x i is expected to produce, when executed in the two worlds respectively.",
"That the same sentence occurs with multiple worlds is an important property of this dataset, and we make use of it by defining the function T to be 0 only if ∀ w j i ,d j i y i w j i = d j i .",
"WIKITABLEQUESTIONS WIKITABLEQUESTIONS is a question-answering dataset where the task requires answering complex questions in the context of Wikipedia tables.",
"An example can be seen in Figure 2 .",
"Unlike NLVR, the answers are not binary.",
"They can instead be cells in the table or the result of numerical or settheoretic operations performed on them.",
"Logical form languages For NLVR, we define a typed variable-free functional query language, inspired by the GeoQuery language (Zelle and Mooney, 1996) .",
"Our language contains six basic types: box (referring to one of the three gray areas in Figure 1) , object (referring to the circles, triangles and squares in Figure 1) , shape, color, number and boolean.",
"The constants in our language are color and shape names, the set of all boxes in an image, and the set of all objects in an image.",
"The functions in our language include those for filtering objects and boxes, and making assertions, a higher order function for handling negations, and a function for querying objects in boxes.",
"This type specification of constants and functions gives us a grammar with 115 productions, of which 101 are terminal productions (see Appendix A.1 for the complete set of rules in our grammar).",
"Figure 1 shows an example of a complete logical form in our language.",
"For WTQ, we use the functional query language used by (Liang et al., 2018) as the logical form language.",
"Figure 2 shows an example logical form.",
"Lexicons for coverage The lexicon we use for the coverage measure described in §3 contains under 40 rules for each logical form language.",
"They mainly map words and phrases to constants and unary functions in the target language.",
"The complete lexicons are shown in the Appendix.",
"Figures 1 and 2 also show the actions triggered by the corresponding lexicons for the utterances shown.",
"We find that small but precise lexicons are sufficient to guide the search process away from spurious logical forms.",
"Moreover, as shown empirically in §6.4, the model for NLVR does not learn much without this simple but crucial guidance.",
"Experiments We evaluate both our contributions on NLVR and WIKITABLEQUESTIONS.",
"Model In this work, we use a grammar-constrained encoder-decoder neural semantic parser for our experiments.",
"Of the many variants of this basic architecture (see §7), all of which are essentially seq2seq models with constrained outputs and/or re-parameterizations, we choose to use the parser of Krishnamurthy et al.",
"(2017) , as it is particularly well-suited to the WIKITABLEQUESTIONS dataset, which we evaluate on.",
"The encoder in the model is a bi-directional recurrent neural network with Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) cells, and the decoder is a grammarconstrained decoder also with LSTM cells.",
"Instead of directly outputting tokens in the logical form, the decoder outputs production rules from a CFG-like grammar.",
"These production rules sequentially build up an abstract syntax tree, which determines the logical form.",
"The model also has an entity linking component for producing table entities in the logical forms; this com-ponent is only applicable to WIKITABLEQUES-TIONS, and we remove it when running experiments on NLVR.",
"The particulars of the model are not the focus of this work, so we refer the reader to the original paper for more details.",
"In addition, we slightly modify the constrained decoding architecture from (Krishnamurthy et al., 2017) to bias the predicted actions towards those that would decrease the value of S(y i , x i ).",
"This is done using a coverage vector, v S i for each training instance that keeps track of the production rules triggered by x i , and gets updated whenever one of those desired productions is produced by the decoder.",
"That is, v S i is a vector of 1s and 0s, with 1s indicating the triggered productions that are yet to be produced by the decoder.",
"This is similar to the idea of checklists used by Kiddon et al.",
"(2016) .",
"The decoder in the original architecture scores output actions at each time step by computing a dot product of the predicted action representation with the embeddings of each of the actions.",
"We add a weighted sum of all the actions that are yet to produced: s a i = e a .",
"(p i + γ * v S i .E) (4) where s a i is the score of action a at time step i, e a is the embedding of that action, p i is the predicted action representation, E is the set of embeddings of all the actions, and γ is a learned parameter for regularizing the bias towards yet-to-be produced triggered actions.",
"Experimental setup NLVR We use the standard train-dev-test split for NLVR, containing 12409, 988 and 989 sentence-image pairs respectively.",
"NLVR contains most of the sentences occurring in multiple worlds (with an average of 3.9 worlds per sentence).",
"We set the word embedding and action embedding sizes to 50, and the hidden layer size of both the encoder and the decoder to 30.",
"We initialized all the parameters, including the word and action embeddings using Glorot uniform initialization (Glorot and Bengio, 2010) .",
"We found that using pretrained word representations did not help.",
"We added a dropout (Srivastava et al., 2014) of 0.2 on the outputs of the encoder and the decoder and before predicting the next action, set the beam size to 10 both during training and at test time, and trained the model using ADAM (Kingma and Ba, 2014) with a learning rate of 0.001.",
"All the hyperparameters are tuned on the validation set.",
"WIKITABLEQUESTIONS This dataset comes with five different cross-validation folds of training data, each containing a different 80/20 split for training and development.",
"We first show results aggregated from all five folds in §6.3, and then show results from controlled experiments on fold 1.",
"We replicated the model presented in Krishnamurthy et al.",
"(2017) , and only changed the training algorithm and the language used.",
"We used a beam size of 20 for MBR during training and decoding, and 10 for MML during decoding, and trained the model using Stochastic Gradient Descent (Kiefer et al., 1952) with a learning rate of 0.1, all of which are tuned on the validation sets.",
"Specifics of iterative search For our iterative search algorithm, we obtain an initial set of candidate logical forms in both domains by exhaustively searching to a depth of 10 2 .",
"During search we retrieve the logical forms that lead to the correct denotations in all the corresponding worlds, and sort them based on their coverage cost using the coverage lexicon described in §5.4, and choose the top-k 3 .",
"At each iteration of the search step in our iterative training algorithm, we increase the maximum depth of our search with a step-size of 2, finding more complex logical forms and covering a larger proportion of the training data.",
"While exhaustive search is prohibitively expensive beyond a fixed number of steps, our training process that uses beam search based approximation can go deeper.",
"Implementation We implemented our model and training algorithms within the AllenNLP (Gardner et al., 2018) toolkit.",
"The code and models are publicly available at https://github.com/allenai/ iterative-search-semparse.",
"Main results WIKITABLEQUESTIONS Table 1 compares the performance of a single model trained using Iterative Search, with that of previously published single models.",
"We excluded ensemble models since there are differences in the way ensembles are built for this task in previous work, either in terms of size or how the individual models were chosen.",
"We show both best and aver- Approach Dev Test Pasupat and Liang (2015) 37.0 37.1 Neelakantan et al.",
"(2017) 34.",
"(Liang et al., 2018) , all trained on the official split 1 of WIKITABLEQUESTIONS and tested on the official test set.",
"age (over 5 folds) single model performance from Liang et al.",
"(2018) (Memory Augmented Policy Optimization).",
"The best model was chosen based on performance on the development set.",
"Our single model performances are computed in the same way.",
"Note that Liang et al.",
"(2018) also use a lexicon similar to ours to prune the seed set of logical forms used to initialize their memory buffer.",
"In Table 2 , we compare the performance of our iterative search algorithm with three baselines: 1) Static MML, as described in §2.2.1 trained on the candidate set of logical forms obtained through the heuristic search technique described in §6.2; 2) Iterative MML, also an iterative technique but unlike iterative search, we skip MBR and iteratively train static MML models while increasing the number of decoding steps; and 3) MAPO (Liang et al., 2018) , the current best published system on WTQ.",
"All four algorithms are trained and evaluated on the first fold, use the same language, and the bottom three use the same model and the same set of logical forms used to train static MML.",
"Table 3 , we show a comparison of the performance of our iterative coverage-guided search algorithm with the previously published approaches for NLVR.",
"The first two rows correspond to models that are not semantic parsers.",
"This shows that semantic parsing is a promising direction for this task.",
"The closest work to ours is the weakly supervised parser built by (Goldman et al., 2018) .",
"They build a lexicon similar to ours for mapping surface forms in input sentences to abstract clusters.",
"But in addition to defining a lexicon, they also manually annotate complete sentences in this abstract space, and use those annotations to perform data augmentation for training a supervised parser, which is then used to initialize a weakly supervised parser.",
"They also explicitly use the abstractions to augment the beam during decoding using caching, and a separately-trained discriminative re-ranker to re-order the logical forms on the beam.",
"As a discriminative re-ranker is orthogonal to our contributions, we show their results with and without it, with \"Abs.",
"Sup.\"",
"being more comparable to our work.",
"Our model, which uses no data augmentation, no caching during decoding, and no discriminative re-ranker, outperforms their variant without reranking on the public test set, and outperforms their best model on the hidden test set, achieving a new state-of-theart result on this dataset.",
"NLVR In Effect of coverage-guided search To evaluate the contribution of coverage-guided search, we compare the the performance of the NLVR parser in two different settings: with and without coverage guidance in the cost function.",
"We also compare the performance of the parser in the two settings, when initialized with parameters from an MML model trained to maximize the likelihood of the set of logical forms obtained from exhaustive search.",
"Table 4 shows the results of this comparison.",
"We measure accuracy and consistency of all four models on the publicly available test set, using the official evaluation script.",
"Consistency here refers to the percentage of logical forms that produce the correct denotation in all the corresponding worlds, and is hence a stricter metric than accuracy.",
"The cost weight (λ in Equation 3) was tuned based on validation set performance for the runs with coverage, and we found that λ = 0.4 worked best.",
"It can be seen that both with and without ini-tialization, coverage guidance helps by a big margin, with the gap being even more prominent in the case where there is no initialization.",
"When there is neither coverage guidance nor a good initialization, the model does not learn much from unguided search and get a test accuracy not much higher than the majority baseline of 56.2%.",
"We found that coverage guidance was not as useful for WTQ.",
"The average value of the best performing λ was around 0.2, and higher values neither helped nor hurt performance.",
"Effect of iterative search To evaluate the effect of iterative search, we present the accuracy numbers from the search (S) and maximization (M) steps from different iterations in Tables 5 and 6 , showing results on NLVR and WTQ, respectively.",
"Additionally, we also show number of decoding steps used at each iterations, and the percentage of sentences in the training data for which we were able to obtain consistent logical forms from the S step, the set that was used in the M step of the same iteration.",
"It can be seen in both tables that a better MML model gives a better initialization for MBR, and a better MBR model results in a larger set of utterances for which we can retrieve consistent logical forms, thus improving the subsequent MML model.",
"The improvement for NLVR is more pronounced (a gain of 21% absolute) than for WTQ (a gain of 3% absolute), likely because the initial exhaustive search provides a much higher percentage of spurious logical forms for NLVR, and thus the starting place is relatively worse.",
"Complexity of Logical Forms We analyzed the logical forms produced by our iterative search algorithm at different iterations to see how they differ.",
"As expected, for NLVR, allowing greater depths lets the parser explore more complex logical forms.",
"Table 7 shows examples from the validation set that indicate this trend.",
"Related Work Most of the early methods used for training semantic parsers required the training data to come with annotated logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005) .",
"The primary limitation of such methods is that manually producing these logical forms is expensive, making it hard to scale these methods across domains.",
"Dev.",
"Test-P Test-H Approach Acc.",
"Cons.",
"Acc.",
"Cons.",
"Acc.",
"Cons.",
"MaxEnt (Suhr et al., 2017) 68.0 -67.7 -67.8 -BiATT-Pointer (Tan and Bansal, 2018) 74.6 -73.9 -71.8 -Abs.",
"Sup.",
"(Goldman et al., 2018) 84.3 66.3 81.7 60.1 --Abs.",
"Sup.",
"+ ReRank (Goldman et al., 2018) More recent research has focused on training semantic parsers with weak supervision (Liang et al., 2011; Berant et al., 2013) , or trying to automatically infer logical forms from denotations (Pasupat and .",
"However, matching the performance of a fully supervised semantic parser with only weak supervision remains a significant challenge (Yih et al., 2016) .",
"The main contributions of this work deal with training semantic parsers with weak supervision, and we gave a detailed discussion of related training methods in §2.2.",
"We evaluate our contributions on the NLVR and WIKITABLEQUESTIONS datasets.",
"Other work that evaluates on on these datasets include Goldman et al.",
"(2018) , Tan and Bansal (2018) , Neelakantan et al.",
"(2017) , Krishnamurthy et al.",
"(2017) , Haug et al.",
"(2018) , and (Liang et al., 2018) .",
"These prior works generally present modeling contributions that are orthogonal (and in some cases complementary) to the contributions of this paper.",
"There has also been a lot of recent work on neural semantic parsing, most of which is also orthogonal to (and could probably benefit from) our contributions (Dong and Lapata, 2016; Jia and Liang, 2016; Yin and Neubig, 2017; Krishnamurthy et al., 2017; Rabinovich et al., 2017) .",
"Recent attempts at dealing with the problem of spuriousness include Misra et al.",
"(2018) and Guu et al.",
"(2017) .",
"Coverage has recently been used in machine translation (Tu et al., 2016) and summarization (See et al., 2017) .",
"There have also been many methods that use coverage-like mechanisms to give lexical cues to semantic parsers.",
"Goldman et al.",
"(2018) 's abstract examples is the most recent and related work, but the idea is also related to lexicons in pre-neural semantic parsers (Kwiatkowski et al., 2011) .",
"There is a tower with four blocks (box exists (member count equals all boxes 4)) 1 Atleast one black triangle is not touching the edge (object exists (black (triangle ((negate filter touch wall) all objects)))) 2 There is a yellow block as the top of a tower with exactly three blocks.",
"(object exists (yellow (top (object in box (member count equals all boxes 3))))) 3 The tower with three blocks has a yellow block over a black block (object count greater equals (yellow (above (black (object in box (member count equals all boxes 3))))) 1) Table 7 : Complexity of logical forms produced at different iterations, from iteration 0 to iteration 3; each logical form could not be produced at the previous iterations Conclusion We have presented a new technique for training semantic parsers with weak supervision.",
"Our key insights are that lexical cues are crucial for guiding search during the early stages of training, and that the particulars of the approximate marginalization in maximum marginal likelihood have a large impact on performance.",
"To address the first issue, we used a simple coverage mechanism for including lexicon-like information in neural semantic parsers that do not have lexicons.",
"For the second issue, we developed an iterative procedure that alternates between statically-computed and dynamically-computed training signals.",
"Together these two contributions greatly improve semantic parsing performance, leading to new state-ofthe-art results on NLVR and WIKITABLEQUES-TIONS.",
"As these contributions are to the learning algorithm, they are broadly applicable to many models trained with weak supervision.",
"One potential future work direction is investigating whether they extend to other structured prediction problems beyond semantic parsing."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.2.1",
"2.2.2",
"3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"6.5",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Weakly supervised semantic parsing",
"Training algorithms",
"Maximum marginal likelihood",
"Reward-based methods",
"Coverage-guided search",
"Iterative search",
"Datasets",
"Cornell NLVR",
"WIKITABLEQUESTIONS",
"Logical form languages",
"Lexicons for coverage",
"Experiments",
"Model",
"Experimental setup",
"Main results",
"Effect of coverage-guided search",
"Effect of iterative search",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-21#paper-1019#slide-3 | Challenge Spurious logical forms | Which athletes are from South Korea after
2010?
Logical forms that lead to answer:
Table rows: Plushenko, Russia (RUS); Kim Yu-na, South Korea (KOR); Patrick Chan, Canada (CAN)
(reverse athlete)(and(nation south_korea)(year ((reverse date) Athlete from South Korea after 2010
(reverse athlete)(and(nation south_korea)(medals 2))) Athlete from South Korea with 2 medals
(reverse athlete)(row.index (min (reverse row.index) (medals 2))))) First athlete in the table with 2 medals
(reverse athlete) (row.index 4)) Athlete in row 4
There is exactly one square touching the bottom of a box True
Due to binary denotations, 50% of logical forms give correct answer!
count_equals (square (touch_bottom all_objects)) 1) Count of squares touching bottom of boxes is 1
count_equals (yellow (square all_objects)) 1) Count of yellow squares is 1
object_exists (yellow (triangle all_objects)))) There exists a yellow triangle
object_exists all_objects) There exists an object
Cornell Natural Language Visual Reasoning, Suhr et al., 2017 | Which athletes are from South Korea after
2010?
Logical forms that lead to answer:
Table rows: Plushenko, Russia (RUS); Kim Yu-na, South Korea (KOR); Patrick Chan, Canada (CAN)
(reverse athlete)(and(nation south_korea)(year ((reverse date) Athlete from South Korea after 2010
(reverse athlete)(and(nation south_korea)(medals 2))) Athlete from South Korea with 2 medals
(reverse athlete)(row.index (min (reverse row.index) (medals 2))))) First athlete in the table with 2 medals
(reverse athlete) (row.index 4)) Athlete in row 4
There is exactly one square touching the bottom of a box True
Due to binary denotations, 50% of logical forms give correct answer!
count_equals (square (touch_bottom all_objects)) 1) Count of squares touching bottom of boxes is 1
count_equals (yellow (square all_objects)) 1) Count of yellow squares is 1
object_exists (yellow (triangle all_objects)))) There exists a yellow triangle
object_exists all_objects) There exists an object
Cornell Natural Language Visual Reasoning, Suhr et al., 2017 | [] |
GEM-SciDuet-train-21#paper-1019#slide-4 | 1019 | Iterative Search for Weakly Supervised Semantic Parsing | Training semantic parsers from question-answer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates between searching for consistent logical forms and maximizing the marginal likelihood of the retrieved ones. This training scheme lets us iteratively train models that provide guidance to subsequent ones to search for logical forms of increasing complexity, thus dealing with the problem of spuriousness. We evaluate these techniques on two hard datasets: WIKITABLEQUESTIONS (WTQ) and Cornell Natural Language Visual Reasoning (NLVR), and show that our training algorithm outperforms the previous best systems, on WTQ in a comparable setting, and on NLVR with significantly less supervision. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202
],
"paper_content_text": [
"Introduction Semantic parsing is the task of translating natural language utterances into machine-executable meaning representations, often called programs or logical forms.",
"These logical forms can be executed against some representation of the context in which the utterance occurs, to produce a denotation.",
"This setup allows for complex reasoning over contextual knowledge, and it has been successfully used in several natural language understanding problems such as question answering (Berant et al., 2013) , program synthesis (Yin and Neubig, 2017) and building natural language interfaces (Suhr et al., 2018) .",
"Recent work has focused on training semantic parses via weak supervision from denotations alone (Liang et al., 2011; Berant et al., 2013) .",
"This is because obtaining logical form annotations is generally expensive (although recent work has addressed this issue to some extent (Yih et al., 2016) ), and not assuming full supervision lets us be agnostic about the logical form language.",
"The second reason is more important in open-domain semantic parsing tasks where it may not be possible to arrive at a complete set of operators required by the task.",
"However, training semantic parsers with weak supervision requires not only searching over an exponentially large space of logical forms (Berant et al., 2013; Artzi and Zettlemoyer, 2013; Pasupat and Liang, 2015; Guu et al., 2017, inter alia) but also dealing with spurious logical forms that evaluate to the correct denotation while not being semantically equivalent to the utterance.",
"For example, if the denotations are binary, 50% of all syntactically valid logical forms evaluate to the correct answer, regardless of their semantics.",
"This problem renders the training signal extremely noisy, making it hard for the model to learn anything without some additional guidance during search.",
"We introduce two innovations to improve learning from denotations.",
"Firstly, we propose an iterative search procedure for gradually increasing the complexity of candidate logical forms for each training instance, leading to better training data and better parsing accuracy.",
"This procedure is implemented via training our model with two interleaving objectives, one that involves searching for logical forms of limited complexity during training (online search), and another that maximizes the marginal likelihood of retrieved logical forms.",
"Second, we include a notion of coverage over the question in the search step to guide the training algorithm towards logical forms that not only evaluate to the correct denotation, but also have some connection to the words in the utterance.",
"We demonstrate the effectiveness of these two techniques on two difficult reasoning tasks: WIK-ITABLEQUESTIONS(WTQ) (Pasupat and Liang, 2015) , an open domain task with significant lexical variation, and Cornell Natural Language Visual Reasoning (NLVR) (Suhr et al., 2017 ), a closed domain task with binary denotations, and thus far less supervision.",
"We show that: 1) interleaving online search and MML over retrieved logical forms ( §4) is a more effective training algorithm than each of those objectives alone; 2) coverage guidance during search ( §3) is helpful for dealing with weak supervision, more so in the case of NLVR where the supervision is weaker; 3) a combination of the two techniques yields 44.3% test accuracy on WTQ, outperforming the previous best single model in a comparable setting, and 82.9% test accuracy on NLVR, outperforming the best prior model, which also relies on greater supervision.",
"Background Weakly supervised semantic parsing We formally define semantic parsing in a weakly supervised setup as follows.",
"Given a dataset where the i th instance is the triple {x i , w i , d i }, representing a sentence x i , the world w i associated with the sentence, and the corresponding denotation d i , our goal is to find y i , the translation of x i in an appropriate logical form language (see §5.3), such that y i w i = d i ; i.e., the execution of y i in world w i produces the correct denotation d i .",
"A semantic parser defines a distribution over logical forms given an input utterance: p(Y |x i ; θ).",
"Training algorithms In this section we describe prior techniques for training semantic parsers with weak supervision: maximizing marginal likelihood, and rewardbased methods.",
"Maximum marginal likelihood Most work on training semantic parsers from denotations maximizes the likelihood of the denotation given the utterance.",
"The semantic parsing model itself defines a distribution over logical forms, however, not denotations, so this maximization must be recast as a marginalization over logical forms that evaluate to the correct denotation: max θ x i ,d i ∈D y i ∈Y | y i w i =d i p(y i |x i ; θ) (1) This objective function is called maximum marginal likelihood (MML).",
"The inner summation is in general intractable to perform during training, so it is only approximated.",
"Most prior work (Berant et al., 2013; Goldman et al., 2018 , inter alia) approximate the intractable marginalization by summing over logical forms obtained via beam search during training.",
"This typically results in frequent search failures early during training when model parameters are close to random, and in general may only yield spurious logical forms in the absence of any guidance.",
"Since modern semantic parsers typically operate without a lexicon, new techniques are essential to provide guidance to the search procedure (Goldman et al., 2018) .",
"One way of providing this guidance during search is to perform some kind of heuristic search up front to find a set of logical forms that evaluate to the correct denotation, and use those logical forms to approximate the inner summation (Liang et al., 2011; Krishnamurthy et al., 2017) .",
"The particulars of the heuristic search can have a large impact on performance; a smaller candidate set has lower noise, while a larger set makes it more likely that the correct logical form is in it, and one needs to strike the right balance.",
"In this paper, we refer to the MML that does search during training as dynamic MML, and the one that does an offline search as static MML.",
"The main benefit of dynamic MML is that it adapts its training signal over time.",
"As the model learns, it can increasingly focus its probability mass on a small set of very likely logical forms.",
"The main benefit of static MML is that there is no need to search during training, so there is a consistent training signal even at the start of training, and it is typically more computationally efficient than dynamic MML.",
"Reward-based methods When training weakly supervised semantic parsers, it is often desirable to inject some prior knowledge into the training procedure by defining arbitrary reward or cost functions.",
"There exists prior work that use such methods, both in a reinforcement learning setting (Liang et al., , 2018 , and otherwise (Iyyer et al., 2017; Guu et al., 2017) .",
"In our work, we define a customized cost function that includes a coverage term, and use a Minimum Bayes Risk (MBR) (Goodman, 1996; Goel and Byrne, 2000; Smith and Eisner, 2006) training scheme, which we describe in §3.",
"Coverage-guided search Weakly-supervised training of semantic parsers relies heavily on lexical cues to guide the initial stages of learning to good logical forms.",
"Traditionally, these lexical cues were provided in the parser's lexicon.",
"Neural semantic parsers remove the lexicon, however, and so need another mechanism for obtaining these lexical cues.",
"In this section we introduce the use of coverage to inject lexicon-like information into neural semantic parsers.",
"Coverage is a measure of relevance of the candidate logical form y i to the input x i , in terms of how well the productions in y i map to parts of x i .",
"We use a small manually specified lexicon as a mapping from source language to the target language productions, and define coverage of y i as the number of productions triggered by the input utterance, according to the lexicon, that are included in y i .",
"We use this measure of coverage to augment our loss function, and train using an MBR based algorithm as follows.",
"We use beam search to train a model to minimize the expected value of a cost function C: min θ N i=1 Ep (y i |x i ;θ) C(x i , y i , w i , d i ) (2) wherep is a re-normalization 1 of the probabilities assigned to all logical forms on the beam.",
"We define the cost function C as: C(x i , y i , w i , d i ) = λS(y i , x i )+(1−λ)T (y i , w i , d i ) (3) where the function S measures the number of items that y i is missing from the actions (or grammar production rules) triggered by the input utterance x i given the lexicon; and the function T measures the consistency of the evaluation of y i in w i , meaning that it is 0 if y i w i = d i , or a value e otherwise.",
"We set e as the maximum possible value of the coverage cost for the corresponding instance, to make the two costs comparable in magnitude.",
"λ is a hyperparameter that gives the relative weight of the coverage cost.",
"Iterative search In this section we describe the iterative technique for refining the set of candidate logical forms associated with each training instance.",
"As discussed in §2.2, most prior work on weakly-supervised training of semantic parsers uses dynamic MML.",
"This is particularly problematic in domains like NLVR, where the supervision signal is binary-it is very hard for dynamic MML to bootstrap its way to finding good logical forms.",
"To solve this problem, we interleave static MML, which has a consistent supervision signal from the start of training, with the coverageaugmented MBR algorithm described in §3.",
"In order to use static MML, we need an initial set of candidate logical forms.",
"We obtain this candidate set using a bounded-length exhaustive search, filtered using heuristics based on the same lexical mapping used for coverage in §3.",
"A bounded-length search will not find logical forms for the entire training data, so we can only use a subset of the data for initial training.",
"We train a model to convergence using static MML on these logical forms, then use that model to initialize coverage-augmented MBR training.",
"This gives the model a good starting place for the dynamic learning algorithm, and the search at training time can look for logical forms that are longer than could be found with the bounded-length exhaustive search.",
"We train MBR to convergence, then use beam search on the MBR model to find a new set of candidate logical forms for static MML on the training data.",
"This set of logical forms can have a greater length than those in the initial set, because this search uses model scores to not exhaustively explore all possible paths, and thus will likely cover more of the training data.",
"In this way, we can iteratively improve the candidate logical forms used for static training, which in turn improves the starting place for the online search algorithm.",
"Algorithm 1 concretely describes this process.",
"Decode in the algorithm refers to running a beam search decoder that returns a set of consistent logical forms (i.e.",
"T = 0) for each of the input utterances.",
"We start off with a seed dataset D 0 for which consistent logical forms are available.",
"Datasets We will now describe the two datasets we use in this work to evaluate our methods -Cornell NLVR and WIKITABLEQUESTIONS.",
"Input : Dataset D = {X, W, D}; and seed set D 0 = {X 0 , Y 0 } such that X 0 ⊂ X and C(x 0 i , y 0 i , W i , D i ) = 0 Output: Model parameters θ MBR Initialize dataset D MML = D 0 ; while Acc(D dev ) is increasing do θ MML = MML(D MML ); Initialize θ MBR = θ MML ; Update θ MBR = MBR(D; θ MBR ); Update D MML = Decode(D; θ MBR ); end Algorithm 1: Iterative coverage-guided search Cornell NLVR Cornell NLVR is a language-grounding dataset containing natural language sentences provided along with synthetically generated visual contexts, and a label for each sentence-image pair indicating whether the sentence is true or false in the given context.",
"Figure 1 shows two example sentenceimage pairs from the dataset (with the same sentence).",
"The dataset also comes with structured representations of images, indicating the color, shape, size, and x-and y-coordinates of each of the objects in the image.",
"While we show images in Figure 1 for ease of exposition, we use the structured representations in this work.",
"Following the notation introduced in §2.1, x i in this example is There is a box with only one item that is blue.",
"The structured representations associated with the two images shown are two of the worlds (w 1 i and w 2 i ), in which x i could be evaluated.",
"The corresponding labels are the denotations d 1 i and d 2 i that a translation y i of the sentence x i is expected to produce, when executed in the two worlds respectively.",
"That the same sentence occurs with multiple worlds is an important property of this dataset, and we make use of it by defining the function T to be 0 only if ∀ w j i ,d j i y i w j i = d j i .",
"WIKITABLEQUESTIONS WIKITABLEQUESTIONS is a question-answering dataset where the task requires answering complex questions in the context of Wikipedia tables.",
"An example can be seen in Figure 2 .",
"Unlike NLVR, the answers are not binary.",
"They can instead be cells in the table or the result of numerical or settheoretic operations performed on them.",
"Logical form languages For NLVR, we define a typed variable-free functional query language, inspired by the GeoQuery language (Zelle and Mooney, 1996) .",
"Our language contains six basic types: box (referring to one of the three gray areas in Figure 1) , object (referring to the circles, triangles and squares in Figure 1) , shape, color, number and boolean.",
"The constants in our language are color and shape names, the set of all boxes in an image, and the set of all objects in an image.",
"The functions in our language include those for filtering objects and boxes, and making assertions, a higher order function for handling negations, and a function for querying objects in boxes.",
"This type specification of constants and functions gives us a grammar with 115 productions, of which 101 are terminal productions (see Appendix A.1 for the complete set of rules in our grammar).",
"Figure 1 shows an example of a complete logical form in our language.",
"For WTQ, we use the functional query language used by (Liang et al., 2018) as the logical form language.",
"Figure 2 shows an example logical form.",
"Lexicons for coverage The lexicon we use for the coverage measure described in §3 contains under 40 rules for each logical form language.",
"They mainly map words and phrases to constants and unary functions in the target language.",
"The complete lexicons are shown in the Appendix.",
"Figures 1 and 2 also show the actions triggered by the corresponding lexicons for the utterances shown.",
"We find that small but precise lexicons are sufficient to guide the search process away from spurious logical forms.",
"Moreover, as shown empirically in §6.4, the model for NLVR does not learn much without this simple but crucial guidance.",
"Experiments We evaluate both our contributions on NLVR and WIKITABLEQUESTIONS.",
"Model In this work, we use a grammar-constrained encoder-decoder neural semantic parser for our experiments.",
"Of the many variants of this basic architecture (see §7), all of which are essentially seq2seq models with constrained outputs and/or re-parameterizations, we choose to use the parser of Krishnamurthy et al.",
"(2017) , as it is particularly well-suited to the WIKITABLEQUESTIONS dataset, which we evaluate on.",
"The encoder in the model is a bi-directional recurrent neural network with Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) cells, and the decoder is a grammarconstrained decoder also with LSTM cells.",
"Instead of directly outputting tokens in the logical form, the decoder outputs production rules from a CFG-like grammar.",
"These production rules sequentially build up an abstract syntax tree, which determines the logical form.",
"The model also has an entity linking component for producing table entities in the logical forms; this com-ponent is only applicable to WIKITABLEQUES-TIONS, and we remove it when running experiments on NLVR.",
"The particulars of the model are not the focus of this work, so we refer the reader to the original paper for more details.",
"In addition, we slightly modify the constrained decoding architecture from (Krishnamurthy et al., 2017) to bias the predicted actions towards those that would decrease the value of S(y i , x i ).",
"This is done using a coverage vector, v S i for each training instance that keeps track of the production rules triggered by x i , and gets updated whenever one of those desired productions is produced by the decoder.",
"That is, v S i is a vector of 1s and 0s, with 1s indicating the triggered productions that are yet to be produced by the decoder.",
"This is similar to the idea of checklists used by Kiddon et al.",
"(2016) .",
"The decoder in the original architecture scores output actions at each time step by computing a dot product of the predicted action representation with the embeddings of each of the actions.",
"We add a weighted sum of all the actions that are yet to produced: s a i = e a .",
"(p i + γ * v S i .E) (4) where s a i is the score of action a at time step i, e a is the embedding of that action, p i is the predicted action representation, E is the set of embeddings of all the actions, and γ is a learned parameter for regularizing the bias towards yet-to-be produced triggered actions.",
"Experimental setup NLVR We use the standard train-dev-test split for NLVR, containing 12409, 988 and 989 sentence-image pairs respectively.",
"NLVR contains most of the sentences occurring in multiple worlds (with an average of 3.9 worlds per sentence).",
"We set the word embedding and action embedding sizes to 50, and the hidden layer size of both the encoder and the decoder to 30.",
"We initialized all the parameters, including the word and action embeddings using Glorot uniform initialization (Glorot and Bengio, 2010) .",
"We found that using pretrained word representations did not help.",
"We added a dropout (Srivastava et al., 2014) of 0.2 on the outputs of the encoder and the decoder and before predicting the next action, set the beam size to 10 both during training and at test time, and trained the model using ADAM (Kingma and Ba, 2014) with a learning rate of 0.001.",
"All the hyperparameters are tuned on the validation set.",
"WIKITABLEQUESTIONS This dataset comes with five different cross-validation folds of training data, each containing a different 80/20 split for training and development.",
"We first show results aggregated from all five folds in §6.3, and then show results from controlled experiments on fold 1.",
"We replicated the model presented in Krishnamurthy et al.",
"(2017) , and only changed the training algorithm and the language used.",
"We used a beam size of 20 for MBR during training and decoding, and 10 for MML during decoding, and trained the model using Stochastic Gradient Descent (Kiefer et al., 1952) with a learning rate of 0.1, all of which are tuned on the validation sets.",
"Specifics of iterative search For our iterative search algorithm, we obtain an initial set of candidate logical forms in both domains by exhaustively searching to a depth of 10 2 .",
"During search we retrieve the logical forms that lead to the correct denotations in all the corresponding worlds, and sort them based on their coverage cost using the coverage lexicon described in §5.4, and choose the top-k 3 .",
"At each iteration of the search step in our iterative training algorithm, we increase the maximum depth of our search with a step-size of 2, finding more complex logical forms and covering a larger proportion of the training data.",
"While exhaustive search is prohibitively expensive beyond a fixed number of steps, our training process that uses beam search based approximation can go deeper.",
"Implementation We implemented our model and training algorithms within the AllenNLP (Gardner et al., 2018) toolkit.",
"The code and models are publicly available at https://github.com/allenai/ iterative-search-semparse.",
"Main results WIKITABLEQUESTIONS Table 1 compares the performance of a single model trained using Iterative Search, with that of previously published single models.",
"We excluded ensemble models since there are differences in the way ensembles are built for this task in previous work, either in terms of size or how the individual models were chosen.",
"We show both best and aver- Approach Dev Test Pasupat and Liang (2015) 37.0 37.1 Neelakantan et al.",
"(2017) 34.",
"(Liang et al., 2018) , all trained on the official split 1 of WIKITABLEQUESTIONS and tested on the official test set.",
"age (over 5 folds) single model performance from Liang et al.",
"(2018) (Memory Augmented Policy Optimization).",
"The best model was chosen based on performance on the development set.",
"Our single model performances are computed in the same way.",
"Note that Liang et al.",
"(2018) also use a lexicon similar to ours to prune the seed set of logical forms used to initialize their memory buffer.",
"In Table 2 , we compare the performance of our iterative search algorithm with three baselines: 1) Static MML, as described in §2.2.1 trained on the candidate set of logical forms obtained through the heuristic search technique described in §6.2; 2) Iterative MML, also an iterative technique but unlike iterative search, we skip MBR and iteratively train static MML models while increasing the number of decoding steps; and 3) MAPO (Liang et al., 2018) , the current best published system on WTQ.",
"All four algorithms are trained and evaluated on the first fold, use the same language, and the bottom three use the same model and the same set of logical forms used to train static MML.",
"Table 3 , we show a comparison of the performance of our iterative coverage-guided search algorithm with the previously published approaches for NLVR.",
"The first two rows correspond to models that are not semantic parsers.",
"This shows that semantic parsing is a promising direction for this task.",
"The closest work to ours is the weakly supervised parser built by (Goldman et al., 2018) .",
"They build a lexicon similar to ours for mapping surface forms in input sentences to abstract clusters.",
"But in addition to defining a lexicon, they also manually annotate complete sentences in this abstract space, and use those annotations to perform data augmentation for training a supervised parser, which is then used to initialize a weakly supervised parser.",
"They also explicitly use the abstractions to augment the beam during decoding using caching, and a separately-trained discriminative re-ranker to re-order the logical forms on the beam.",
"As a discriminative re-ranker is orthogonal to our contributions, we show their results with and without it, with \"Abs.",
"Sup.\"",
"being more comparable to our work.",
"Our model, which uses no data augmentation, no caching during decoding, and no discriminative re-ranker, outperforms their variant without reranking on the public test set, and outperforms their best model on the hidden test set, achieving a new state-of-theart result on this dataset.",
"NLVR In Effect of coverage-guided search To evaluate the contribution of coverage-guided search, we compare the the performance of the NLVR parser in two different settings: with and without coverage guidance in the cost function.",
"We also compare the performance of the parser in the two settings, when initialized with parameters from an MML model trained to maximize the likelihood of the set of logical forms obtained from exhaustive search.",
"Table 4 shows the results of this comparison.",
"We measure accuracy and consistency of all four models on the publicly available test set, using the official evaluation script.",
"Consistency here refers to the percentage of logical forms that produce the correct denotation in all the corresponding worlds, and is hence a stricter metric than accuracy.",
"The cost weight (λ in Equation 3) was tuned based on validation set performance for the runs with coverage, and we found that λ = 0.4 worked best.",
"It can be seen that both with and without ini-tialization, coverage guidance helps by a big margin, with the gap being even more prominent in the case where there is no initialization.",
"When there is neither coverage guidance nor a good initialization, the model does not learn much from unguided search and get a test accuracy not much higher than the majority baseline of 56.2%.",
"We found that coverage guidance was not as useful for WTQ.",
"The average value of the best performing λ was around 0.2, and higher values neither helped nor hurt performance.",
"Effect of iterative search To evaluate the effect of iterative search, we present the accuracy numbers from the search (S) and maximization (M) steps from different iterations in Tables 5 and 6 , showing results on NLVR and WTQ, respectively.",
"Additionally, we also show number of decoding steps used at each iterations, and the percentage of sentences in the training data for which we were able to obtain consistent logical forms from the S step, the set that was used in the M step of the same iteration.",
"It can be seen in both tables that a better MML model gives a better initialization for MBR, and a better MBR model results in a larger set of utterances for which we can retrieve consistent logical forms, thus improving the subsequent MML model.",
"The improvement for NLVR is more pronounced (a gain of 21% absolute) than for WTQ (a gain of 3% absolute), likely because the initial exhaustive search provides a much higher percentage of spurious logical forms for NLVR, and thus the starting place is relatively worse.",
"Complexity of Logical Forms We analyzed the logical forms produced by our iterative search algorithm at different iterations to see how they differ.",
"As expected, for NLVR, allowing greater depths lets the parser explore more complex logical forms.",
"Table 7 shows examples from the validation set that indicate this trend.",
"Related Work Most of the early methods used for training semantic parsers required the training data to come with annotated logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005) .",
"The primary limitation of such methods is that manually producing these logical forms is expensive, making it hard to scale these methods across domains.",
"Dev.",
"Test-P Test-H Approach Acc.",
"Cons.",
"Acc.",
"Cons.",
"Acc.",
"Cons.",
"MaxEnt (Suhr et al., 2017) 68.0 -67.7 -67.8 -BiATT-Pointer (Tan and Bansal, 2018) 74.6 -73.9 -71.8 -Abs.",
"Sup.",
"(Goldman et al., 2018) 84.3 66.3 81.7 60.1 --Abs.",
"Sup.",
"+ ReRank (Goldman et al., 2018) More recent research has focused on training semantic parsers with weak supervision (Liang et al., 2011; Berant et al., 2013) , or trying to automatically infer logical forms from denotations (Pasupat and .",
"However, matching the performance of a fully supervised semantic parser with only weak supervision remains a significant challenge (Yih et al., 2016) .",
"The main contributions of this work deal with training semantic parsers with weak supervision, and we gave a detailed discussion of related training methods in §2.2.",
"We evaluate our contributions on the NLVR and WIKITABLEQUESTIONS datasets.",
"Other work that evaluates on on these datasets include Goldman et al.",
"(2018) , Tan and Bansal (2018) , Neelakantan et al.",
"(2017) , Krishnamurthy et al.",
"(2017) , Haug et al.",
"(2018) , and (Liang et al., 2018) .",
"These prior works generally present modeling contributions that are orthogonal (and in some cases complementary) to the contributions of this paper.",
"There has also been a lot of recent work on neural semantic parsing, most of which is also orthogonal to (and could probably benefit from) our contributions (Dong and Lapata, 2016; Jia and Liang, 2016; Yin and Neubig, 2017; Krishnamurthy et al., 2017; Rabinovich et al., 2017) .",
"Recent attempts at dealing with the problem of spuriousness include Misra et al.",
"(2018) and Guu et al.",
"(2017) .",
"Coverage has recently been used in machine translation (Tu et al., 2016) and summarization (See et al., 2017) .",
"There have also been many methods that use coverage-like mechanisms to give lexical cues to semantic parsers.",
"Goldman et al.",
"(2018) 's abstract examples is the most recent and related work, but the idea is also related to lexicons in pre-neural semantic parsers (Kwiatkowski et al., 2011) .",
"There is a tower with four blocks (box exists (member count equals all boxes 4)) 1 Atleast one black triangle is not touching the edge (object exists (black (triangle ((negate filter touch wall) all objects)))) 2 There is a yellow block as the top of a tower with exactly three blocks.",
"(object exists (yellow (top (object in box (member count equals all boxes 3))))) 3 The tower with three blocks has a yellow block over a black block (object count greater equals (yellow (above (black (object in box (member count equals all boxes 3))))) 1) Table 7 : Complexity of logical forms produced at different iterations, from iteration 0 to iteration 3; each logical form could not be produced at the previous iterations Conclusion We have presented a new technique for training semantic parsers with weak supervision.",
"Our key insights are that lexical cues are crucial for guiding search during the early stages of training, and that the particulars of the approximate marginalization in maximum marginal likelihood have a large impact on performance.",
"To address the first issue, we used a simple coverage mechanism for including lexicon-like information in neural semantic parsers that do not have lexicons.",
"For the second issue, we developed an iterative procedure that alternates between statically-computed and dynamically-computed training signals.",
"Together these two contributions greatly improve semantic parsing performance, leading to new state-ofthe-art results on NLVR and WIKITABLEQUES-TIONS.",
"As these contributions are to the learning algorithm, they are broadly applicable to many models trained with weak supervision.",
"One potential future work direction is investigating whether they extend to other structured prediction problems beyond semantic parsing."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.2.1",
"2.2.2",
"3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"6.5",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Weakly supervised semantic parsing",
"Training algorithms",
"Maximum marginal likelihood",
"Reward-based methods",
"Coverage-guided search",
"Iterative search",
"Datasets",
"Cornell NLVR",
"WIKITABLEQUESTIONS",
"Logical form languages",
"Lexicons for coverage",
"Experiments",
"Model",
"Experimental setup",
"Main results",
"Effect of coverage-guided search",
"Effect of iterative search",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-21#paper-1019#slide-4 | Training Objectives | Maximum Marginal Likelihood (Krishnamurthy et al. (2017), and others): Maximize the marginal likelihood of an approximate set of logical forms, but we need a good set of approximate logical forms.
Reward/Cost-based approaches (and others): Minimum Bayes Risk training, minimize the expected value of a cost, but random initialization can cause the search to get stuck in the exponential search space.
Proposal: Alternate between the two objectives while gradually increasing the search space! | Maximum Marginal Likelihood (Krishnamurthy et al. (2017), and others): Maximize the marginal likelihood of an approximate set of logical forms, but we need a good set of approximate logical forms.
Reward/Cost-based approaches (and others): Minimum Bayes Risk training, minimize the expected value of a cost, but random initialization can cause the search to get stuck in the exponential search space.
Proposal: Alternate between the two objectives while gradually increasing the search space! | []
GEM-SciDuet-train-21#paper-1019#slide-5 | 1019 | Iterative Search for Weakly Supervised Semantic Parsing | Training semantic parsers from questionanswer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates between searching for consistent logical forms and maximizing the marginal likelihood of the retrieved ones. This training scheme lets us iteratively train models that provide guidance to subsequent ones to search for logical forms of increasing complexity, thus dealing with the problem of spuriousness. We evaluate these techniques on two hard datasets: WIKITABLEQUESTIONS (WTQ) and Cornell Natural Language Visual Reasoning (NLVR), and show that our training algorithm outperforms the previous best systems, on WTQ in a comparable setting, and on NLVR with significantly less supervision. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202
],
"paper_content_text": [
"Introduction Semantic parsing is the task of translating natural language utterances into machine-executable meaning representations, often called programs or logical forms.",
"These logical forms can be executed against some representation of the context in which the utterance occurs, to produce a denotation.",
"This setup allows for complex reasoning over contextual knowledge, and it has been successfully used in several natural language understanding problems such as question answering (Berant et al., 2013) , program synthesis (Yin and Neubig, 2017) and building natural language interfaces (Suhr et al., 2018) .",
"Recent work has focused on training semantic parses via weak supervision from denotations alone (Liang et al., 2011; Berant et al., 2013) .",
"This is because obtaining logical form annotations is generally expensive (although recent work has addressed this issue to some extent (Yih et al., 2016) ), and not assuming full supervision lets us be agnostic about the logical form language.",
"The second reason is more important in open-domain semantic parsing tasks where it may not be possible to arrive at a complete set of operators required by the task.",
"However, training semantic parsers with weak supervision requires not only searching over an exponentially large space of logical forms (Berant et al., 2013; Artzi and Zettlemoyer, 2013; Pasupat and Liang, 2015; Guu et al., 2017, inter alia) but also dealing with spurious logical forms that evaluate to the correct denotation while not being semantically equivalent to the utterance.",
"For example, if the denotations are binary, 50% of all syntactically valid logical forms evaluate to the correct answer, regardless of their semantics.",
"This problem renders the training signal extremely noisy, making it hard for the model to learn anything without some additional guidance during search.",
"We introduce two innovations to improve learning from denotations.",
"Firstly, we propose an iterative search procedure for gradually increasing the complexity of candidate logical forms for each training instance, leading to better training data and better parsing accuracy.",
"This procedure is implemented via training our model with two interleaving objectives, one that involves searching for logical forms of limited complexity during training (online search), and another that maximizes the marginal likelihood of retrieved logical forms.",
"Second, we include a notion of coverage over the question in the search step to guide the training algorithm towards logical forms that not only evaluate to the correct denotation, but also have some connection to the words in the utterance.",
"We demonstrate the effectiveness of these two techniques on two difficult reasoning tasks: WIK-ITABLEQUESTIONS(WTQ) (Pasupat and Liang, 2015) , an open domain task with significant lexical variation, and Cornell Natural Language Visual Reasoning (NLVR) (Suhr et al., 2017 ), a closed domain task with binary denotations, and thus far less supervision.",
"We show that: 1) interleaving online search and MML over retrieved logical forms ( §4) is a more effective training algorithm than each of those objectives alone; 2) coverage guidance during search ( §3) is helpful for dealing with weak supervision, more so in the case of NLVR where the supervision is weaker; 3) a combination of the two techniques yields 44.3% test accuracy on WTQ, outperforming the previous best single model in a comparable setting, and 82.9% test accuracy on NLVR, outperforming the best prior model, which also relies on greater supervision.",
"Background Weakly supervised semantic parsing We formally define semantic parsing in a weakly supervised setup as follows.",
"Given a dataset where the i th instance is the triple {x i , w i , d i }, representing a sentence x i , the world w i associated with the sentence, and the corresponding denotation d i , our goal is to find y i , the translation of x i in an appropriate logical form language (see §5.3), such that y i w i = d i ; i.e., the execution of y i in world w i produces the correct denotation d i .",
"A semantic parser defines a distribution over logical forms given an input utterance: p(Y |x i ; θ).",
"Training algorithms In this section we describe prior techniques for training semantic parsers with weak supervision: maximizing marginal likelihood, and rewardbased methods.",
"Maximum marginal likelihood Most work on training semantic parsers from denotations maximizes the likelihood of the denotation given the utterance.",
"The semantic parsing model itself defines a distribution over logical forms, however, not denotations, so this maximization must be recast as a marginalization over logical forms that evaluate to the correct denotation: max θ x i ,d i ∈D y i ∈Y | y i w i =d i p(y i |x i ; θ) (1) This objective function is called maximum marginal likelihood (MML).",
"The inner summation is in general intractable to perform during training, so it is only approximated.",
"Most prior work (Berant et al., 2013; Goldman et al., 2018 , inter alia) approximate the intractable marginalization by summing over logical forms obtained via beam search during training.",
"This typically results in frequent search failures early during training when model parameters are close to random, and in general may only yield spurious logical forms in the absence of any guidance.",
"Since modern semantic parsers typically operate without a lexicon, new techniques are essential to provide guidance to the search procedure (Goldman et al., 2018) .",
"One way of providing this guidance during search is to perform some kind of heuristic search up front to find a set of logical forms that evaluate to the correct denotation, and use those logical forms to approximate the inner summation (Liang et al., 2011; Krishnamurthy et al., 2017) .",
"The particulars of the heuristic search can have a large impact on performance; a smaller candidate set has lower noise, while a larger set makes it more likely that the correct logical form is in it, and one needs to strike the right balance.",
"In this paper, we refer to the MML that does search during training as dynamic MML, and the one that does an offline search as static MML.",
"The main benefit of dynamic MML is that it adapts its training signal over time.",
"As the model learns, it can increasingly focus its probability mass on a small set of very likely logical forms.",
"The main benefit of static MML is that there is no need to search during training, so there is a consistent training signal even at the start of training, and it is typically more computationally efficient than dynamic MML.",
"Reward-based methods When training weakly supervised semantic parsers, it is often desirable to inject some prior knowledge into the training procedure by defining arbitrary reward or cost functions.",
"There exists prior work that use such methods, both in a reinforcement learning setting (Liang et al., , 2018 , and otherwise (Iyyer et al., 2017; Guu et al., 2017) .",
"In our work, we define a customized cost function that includes a coverage term, and use a Minimum Bayes Risk (MBR) (Goodman, 1996; Goel and Byrne, 2000; Smith and Eisner, 2006) training scheme, which we describe in §3.",
"Coverage-guided search Weakly-supervised training of semantic parsers relies heavily on lexical cues to guide the initial stages of learning to good logical forms.",
"Traditionally, these lexical cues were provided in the parser's lexicon.",
"Neural semantic parsers remove the lexicon, however, and so need another mechanism for obtaining these lexical cues.",
"In this section we introduce the use of coverage to inject lexicon-like information into neural semantic parsers.",
"Coverage is a measure of relevance of the candidate logical form y i to the input x i , in terms of how well the productions in y i map to parts of x i .",
"We use a small manually specified lexicon as a mapping from source language to the target language productions, and define coverage of y i as the number of productions triggered by the input utterance, according to the lexicon, that are included in y i .",
"We use this measure of coverage to augment our loss function, and train using an MBR based algorithm as follows.",
"We use beam search to train a model to minimize the expected value of a cost function C: min θ N i=1 Ep (y i |x i ;θ) C(x i , y i , w i , d i ) (2) wherep is a re-normalization 1 of the probabilities assigned to all logical forms on the beam.",
"We define the cost function C as: C(x i , y i , w i , d i ) = λS(y i , x i )+(1−λ)T (y i , w i , d i ) (3) where the function S measures the number of items that y i is missing from the actions (or grammar production rules) triggered by the input utterance x i given the lexicon; and the function T measures the consistency of the evaluation of y i in w i , meaning that it is 0 if y i w i = d i , or a value e otherwise.",
"We set e as the maximum possible value of the coverage cost for the corresponding instance, to make the two costs comparable in magnitude.",
"λ is a hyperparameter that gives the relative weight of the coverage cost.",
"Iterative search In this section we describe the iterative technique for refining the set of candidate logical forms associated with each training instance.",
"As discussed in §2.2, most prior work on weakly-supervised training of semantic parsers uses dynamic MML.",
"This is particularly problematic in domains like NLVR, where the supervision signal is binary-it is very hard for dynamic MML to bootstrap its way to finding good logical forms.",
"To solve this problem, we interleave static MML, which has a consistent supervision signal from the start of training, with the coverageaugmented MBR algorithm described in §3.",
"In order to use static MML, we need an initial set of candidate logical forms.",
"We obtain this candidate set using a bounded-length exhaustive search, filtered using heuristics based on the same lexical mapping used for coverage in §3.",
"A bounded-length search will not find logical forms for the entire training data, so we can only use a subset of the data for initial training.",
"We train a model to convergence using static MML on these logical forms, then use that model to initialize coverage-augmented MBR training.",
"This gives the model a good starting place for the dynamic learning algorithm, and the search at training time can look for logical forms that are longer than could be found with the bounded-length exhaustive search.",
"We train MBR to convergence, then use beam search on the MBR model to find a new set of candidate logical forms for static MML on the training data.",
"This set of logical forms can have a greater length than those in the initial set, because this search uses model scores to not exhaustively explore all possible paths, and thus will likely cover more of the training data.",
"In this way, we can iteratively improve the candidate logical forms used for static training, which in turn improves the starting place for the online search algorithm.",
"Algorithm 1 concretely describes this process.",
"Decode in the algorithm refers to running a beam search decoder that returns a set of consistent logical forms (i.e.",
"T = 0) for each of the input utterances.",
"We start off with a seed dataset D 0 for which consistent logical forms are available.",
"Datasets We will now describe the two datasets we use in this work to evaluate our methods -Cornell NLVR and WIKITABLEQUESTIONS.",
"Input : Dataset D = {X, W, D}; and seed set D 0 = {X 0 , Y 0 } such that X 0 ⊂ X and C(x 0 i , y 0 i , W i , D i ) = 0 Output: Model parameters θ MBR Initialize dataset D MML = D 0 ; while Acc(D dev ) is increasing do θ MML = MML(D MML ); Initialize θ MBR = θ MML ; Update θ MBR = MBR(D; θ MBR ); Update D MML = Decode(D; θ MBR ); end Algorithm 1: Iterative coverage-guided search Cornell NLVR Cornell NLVR is a language-grounding dataset containing natural language sentences provided along with synthetically generated visual contexts, and a label for each sentence-image pair indicating whether the sentence is true or false in the given context.",
"Figure 1 shows two example sentenceimage pairs from the dataset (with the same sentence).",
"The dataset also comes with structured representations of images, indicating the color, shape, size, and x-and y-coordinates of each of the objects in the image.",
"While we show images in Figure 1 for ease of exposition, we use the structured representations in this work.",
"Following the notation introduced in §2.1, x i in this example is There is a box with only one item that is blue.",
"The structured representations associated with the two images shown are two of the worlds (w 1 i and w 2 i ), in which x i could be evaluated.",
"The corresponding labels are the denotations d 1 i and d 2 i that a translation y i of the sentence x i is expected to produce, when executed in the two worlds respectively.",
"That the same sentence occurs with multiple worlds is an important property of this dataset, and we make use of it by defining the function T to be 0 only if ∀ w j i ,d j i y i w j i = d j i .",
"WIKITABLEQUESTIONS WIKITABLEQUESTIONS is a question-answering dataset where the task requires answering complex questions in the context of Wikipedia tables.",
"An example can be seen in Figure 2 .",
"Unlike NLVR, the answers are not binary.",
"They can instead be cells in the table or the result of numerical or settheoretic operations performed on them.",
"Logical form languages For NLVR, we define a typed variable-free functional query language, inspired by the GeoQuery language (Zelle and Mooney, 1996) .",
"Our language contains six basic types: box (referring to one of the three gray areas in Figure 1) , object (referring to the circles, triangles and squares in Figure 1) , shape, color, number and boolean.",
"The constants in our language are color and shape names, the set of all boxes in an image, and the set of all objects in an image.",
"The functions in our language include those for filtering objects and boxes, and making assertions, a higher order function for handling negations, and a function for querying objects in boxes.",
"This type specification of constants and functions gives us a grammar with 115 productions, of which 101 are terminal productions (see Appendix A.1 for the complete set of rules in our grammar).",
"Figure 1 shows an example of a complete logical form in our language.",
"For WTQ, we use the functional query language used by (Liang et al., 2018) as the logical form language.",
"Figure 2 shows an example logical form.",
"Lexicons for coverage The lexicon we use for the coverage measure described in §3 contains under 40 rules for each logical form language.",
"They mainly map words and phrases to constants and unary functions in the target language.",
"The complete lexicons are shown in the Appendix.",
"Figures 1 and 2 also show the actions triggered by the corresponding lexicons for the utterances shown.",
"We find that small but precise lexicons are sufficient to guide the search process away from spurious logical forms.",
"Moreover, as shown empirically in §6.4, the model for NLVR does not learn much without this simple but crucial guidance.",
"Experiments We evaluate both our contributions on NLVR and WIKITABLEQUESTIONS.",
"Model In this work, we use a grammar-constrained encoder-decoder neural semantic parser for our experiments.",
"Of the many variants of this basic architecture (see §7), all of which are essentially seq2seq models with constrained outputs and/or re-parameterizations, we choose to use the parser of Krishnamurthy et al.",
"(2017) , as it is particularly well-suited to the WIKITABLEQUESTIONS dataset, which we evaluate on.",
"The encoder in the model is a bi-directional recurrent neural network with Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) cells, and the decoder is a grammarconstrained decoder also with LSTM cells.",
"Instead of directly outputting tokens in the logical form, the decoder outputs production rules from a CFG-like grammar.",
"These production rules sequentially build up an abstract syntax tree, which determines the logical form.",
"The model also has an entity linking component for producing table entities in the logical forms; this com-ponent is only applicable to WIKITABLEQUES-TIONS, and we remove it when running experiments on NLVR.",
"The particulars of the model are not the focus of this work, so we refer the reader to the original paper for more details.",
"In addition, we slightly modify the constrained decoding architecture from (Krishnamurthy et al., 2017) to bias the predicted actions towards those that would decrease the value of S(y i , x i ).",
"This is done using a coverage vector, v S i for each training instance that keeps track of the production rules triggered by x i , and gets updated whenever one of those desired productions is produced by the decoder.",
"That is, v S i is a vector of 1s and 0s, with 1s indicating the triggered productions that are yet to be produced by the decoder.",
"This is similar to the idea of checklists used by Kiddon et al.",
"(2016) .",
"The decoder in the original architecture scores output actions at each time step by computing a dot product of the predicted action representation with the embeddings of each of the actions.",
"We add a weighted sum of all the actions that are yet to produced: s a i = e a .",
"(p i + γ * v S i .E) (4) where s a i is the score of action a at time step i, e a is the embedding of that action, p i is the predicted action representation, E is the set of embeddings of all the actions, and γ is a learned parameter for regularizing the bias towards yet-to-be produced triggered actions.",
"Experimental setup NLVR We use the standard train-dev-test split for NLVR, containing 12409, 988 and 989 sentence-image pairs respectively.",
"NLVR contains most of the sentences occurring in multiple worlds (with an average of 3.9 worlds per sentence).",
"We set the word embedding and action embedding sizes to 50, and the hidden layer size of both the encoder and the decoder to 30.",
"We initialized all the parameters, including the word and action embeddings using Glorot uniform initialization (Glorot and Bengio, 2010) .",
"We found that using pretrained word representations did not help.",
"We added a dropout (Srivastava et al., 2014) of 0.2 on the outputs of the encoder and the decoder and before predicting the next action, set the beam size to 10 both during training and at test time, and trained the model using ADAM (Kingma and Ba, 2014) with a learning rate of 0.001.",
"All the hyperparameters are tuned on the validation set.",
"WIKITABLEQUESTIONS This dataset comes with five different cross-validation folds of training data, each containing a different 80/20 split for training and development.",
"We first show results aggregated from all five folds in §6.3, and then show results from controlled experiments on fold 1.",
"We replicated the model presented in Krishnamurthy et al.",
"(2017) , and only changed the training algorithm and the language used.",
"We used a beam size of 20 for MBR during training and decoding, and 10 for MML during decoding, and trained the model using Stochastic Gradient Descent (Kiefer et al., 1952) with a learning rate of 0.1, all of which are tuned on the validation sets.",
"Specifics of iterative search For our iterative search algorithm, we obtain an initial set of candidate logical forms in both domains by exhaustively searching to a depth of 10 2 .",
"During search we retrieve the logical forms that lead to the correct denotations in all the corresponding worlds, and sort them based on their coverage cost using the coverage lexicon described in §5.4, and choose the top-k 3 .",
"At each iteration of the search step in our iterative training algorithm, we increase the maximum depth of our search with a step-size of 2, finding more complex logical forms and covering a larger proportion of the training data.",
"While exhaustive search is prohibitively expensive beyond a fixed number of steps, our training process that uses beam search based approximation can go deeper.",
"Implementation We implemented our model and training algorithms within the AllenNLP (Gardner et al., 2018) toolkit.",
"The code and models are publicly available at https://github.com/allenai/ iterative-search-semparse.",
"Main results WIKITABLEQUESTIONS Table 1 compares the performance of a single model trained using Iterative Search, with that of previously published single models.",
"We excluded ensemble models since there are differences in the way ensembles are built for this task in previous work, either in terms of size or how the individual models were chosen.",
"We show both best and aver- Approach Dev Test Pasupat and Liang (2015) 37.0 37.1 Neelakantan et al.",
"(2017) 34.",
"(Liang et al., 2018) , all trained on the official split 1 of WIKITABLEQUESTIONS and tested on the official test set.",
"age (over 5 folds) single model performance from Liang et al.",
"(2018) (Memory Augmented Policy Optimization).",
"The best model was chosen based on performance on the development set.",
"Our single model performances are computed in the same way.",
"Note that Liang et al.",
"(2018) also use a lexicon similar to ours to prune the seed set of logical forms used to initialize their memory buffer.",
"In Table 2 , we compare the performance of our iterative search algorithm with three baselines: 1) Static MML, as described in §2.2.1 trained on the candidate set of logical forms obtained through the heuristic search technique described in §6.2; 2) Iterative MML, also an iterative technique but unlike iterative search, we skip MBR and iteratively train static MML models while increasing the number of decoding steps; and 3) MAPO (Liang et al., 2018) , the current best published system on WTQ.",
"All four algorithms are trained and evaluated on the first fold, use the same language, and the bottom three use the same model and the same set of logical forms used to train static MML.",
"Table 3 , we show a comparison of the performance of our iterative coverage-guided search algorithm with the previously published approaches for NLVR.",
"The first two rows correspond to models that are not semantic parsers.",
"This shows that semantic parsing is a promising direction for this task.",
"The closest work to ours is the weakly supervised parser built by (Goldman et al., 2018) .",
"They build a lexicon similar to ours for mapping surface forms in input sentences to abstract clusters.",
"But in addition to defining a lexicon, they also manually annotate complete sentences in this abstract space, and use those annotations to perform data augmentation for training a supervised parser, which is then used to initialize a weakly supervised parser.",
"They also explicitly use the abstractions to augment the beam during decoding using caching, and a separately-trained discriminative re-ranker to re-order the logical forms on the beam.",
"As a discriminative re-ranker is orthogonal to our contributions, we show their results with and without it, with \"Abs.",
"Sup.\"",
"being more comparable to our work.",
"Our model, which uses no data augmentation, no caching during decoding, and no discriminative re-ranker, outperforms their variant without reranking on the public test set, and outperforms their best model on the hidden test set, achieving a new state-of-theart result on this dataset.",
"NLVR In Effect of coverage-guided search To evaluate the contribution of coverage-guided search, we compare the the performance of the NLVR parser in two different settings: with and without coverage guidance in the cost function.",
"We also compare the performance of the parser in the two settings, when initialized with parameters from an MML model trained to maximize the likelihood of the set of logical forms obtained from exhaustive search.",
"Table 4 shows the results of this comparison.",
"We measure accuracy and consistency of all four models on the publicly available test set, using the official evaluation script.",
"Consistency here refers to the percentage of logical forms that produce the correct denotation in all the corresponding worlds, and is hence a stricter metric than accuracy.",
"The cost weight (λ in Equation 3) was tuned based on validation set performance for the runs with coverage, and we found that λ = 0.4 worked best.",
"It can be seen that both with and without ini-tialization, coverage guidance helps by a big margin, with the gap being even more prominent in the case where there is no initialization.",
"When there is neither coverage guidance nor a good initialization, the model does not learn much from unguided search and get a test accuracy not much higher than the majority baseline of 56.2%.",
"We found that coverage guidance was not as useful for WTQ.",
"The average value of the best performing λ was around 0.2, and higher values neither helped nor hurt performance.",
"Effect of iterative search To evaluate the effect of iterative search, we present the accuracy numbers from the search (S) and maximization (M) steps from different iterations in Tables 5 and 6 , showing results on NLVR and WTQ, respectively.",
"Additionally, we also show number of decoding steps used at each iterations, and the percentage of sentences in the training data for which we were able to obtain consistent logical forms from the S step, the set that was used in the M step of the same iteration.",
"It can be seen in both tables that a better MML model gives a better initialization for MBR, and a better MBR model results in a larger set of utterances for which we can retrieve consistent logical forms, thus improving the subsequent MML model.",
"The improvement for NLVR is more pronounced (a gain of 21% absolute) than for WTQ (a gain of 3% absolute), likely because the initial exhaustive search provides a much higher percentage of spurious logical forms for NLVR, and thus the starting place is relatively worse.",
"Complexity of Logical Forms We analyzed the logical forms produced by our iterative search algorithm at different iterations to see how they differ.",
"As expected, for NLVR, allowing greater depths lets the parser explore more complex logical forms.",
"Table 7 shows examples from the validation set that indicate this trend.",
"Related Work Most of the early methods used for training semantic parsers required the training data to come with annotated logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005) .",
"The primary limitation of such methods is that manually producing these logical forms is expensive, making it hard to scale these methods across domains.",
"Dev.",
"Test-P Test-H Approach Acc.",
"Cons.",
"Acc.",
"Cons.",
"Acc.",
"Cons.",
"MaxEnt (Suhr et al., 2017) 68.0 -67.7 -67.8 -BiATT-Pointer (Tan and Bansal, 2018) 74.6 -73.9 -71.8 -Abs.",
"Sup.",
"(Goldman et al., 2018) 84.3 66.3 81.7 60.1 --Abs.",
"Sup.",
"+ ReRank (Goldman et al., 2018) More recent research has focused on training semantic parsers with weak supervision (Liang et al., 2011; Berant et al., 2013) , or trying to automatically infer logical forms from denotations (Pasupat and .",
"However, matching the performance of a fully supervised semantic parser with only weak supervision remains a significant challenge (Yih et al., 2016) .",
"The main contributions of this work deal with training semantic parsers with weak supervision, and we gave a detailed discussion of related training methods in §2.2.",
"We evaluate our contributions on the NLVR and WIKITABLEQUESTIONS datasets.",
"Other work that evaluates on on these datasets include Goldman et al.",
"(2018) , Tan and Bansal (2018) , Neelakantan et al.",
"(2017) , Krishnamurthy et al.",
"(2017) , Haug et al.",
"(2018) , and (Liang et al., 2018) .",
"These prior works generally present modeling contributions that are orthogonal (and in some cases complementary) to the contributions of this paper.",
"There has also been a lot of recent work on neural semantic parsing, most of which is also orthogonal to (and could probably benefit from) our contributions (Dong and Lapata, 2016; Jia and Liang, 2016; Yin and Neubig, 2017; Krishnamurthy et al., 2017; Rabinovich et al., 2017) .",
"Recent attempts at dealing with the problem of spuriousness include Misra et al.",
"(2018) and Guu et al.",
"(2017) .",
"Coverage has recently been used in machine translation (Tu et al., 2016) and summarization (See et al., 2017) .",
"There have also been many methods that use coverage-like mechanisms to give lexical cues to semantic parsers.",
"Goldman et al.",
"(2018) 's abstract examples is the most recent and related work, but the idea is also related to lexicons in pre-neural semantic parsers (Kwiatkowski et al., 2011) .",
"There is a tower with four blocks (box exists (member count equals all boxes 4)) 1 Atleast one black triangle is not touching the edge (object exists (black (triangle ((negate filter touch wall) all objects)))) 2 There is a yellow block as the top of a tower with exactly three blocks.",
"(object exists (yellow (top (object in box (member count equals all boxes 3))))) 3 The tower with three blocks has a yellow block over a black block (object count greater equals (yellow (above (black (object in box (member count equals all boxes 3))))) 1) Table 7 : Complexity of logical forms produced at different iterations, from iteration 0 to iteration 3; each logical form could not be produced at the previous iterations Conclusion We have presented a new technique for training semantic parsers with weak supervision.",
"Our key insights are that lexical cues are crucial for guiding search during the early stages of training, and that the particulars of the approximate marginalization in maximum marginal likelihood have a large impact on performance.",
"To address the first issue, we used a simple coverage mechanism for including lexicon-like information in neural semantic parsers that do not have lexicons.",
"For the second issue, we developed an iterative procedure that alternates between statically-computed and dynamically-computed training signals.",
"Together these two contributions greatly improve semantic parsing performance, leading to new state-ofthe-art results on NLVR and WIKITABLEQUES-TIONS.",
"As these contributions are to the learning algorithm, they are broadly applicable to many models trained with weak supervision.",
"One potential future work direction is investigating whether they extend to other structured prediction problems beyond semantic parsing."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.2.1",
"2.2.2",
"3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"6.5",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Weakly supervised semantic parsing",
"Training algorithms",
"Maximum marginal likelihood",
"Reward-based methods",
"Coverage-guided search",
"Iterative search",
"Datasets",
"Cornell NLVR",
"WIKITABLEQUESTIONS",
"Logical form languages",
"Lexicons for coverage",
"Experiments",
"Model",
"Experimental setup",
"Main results",
"Effect of coverage-guided search",
"Effect of iterative search",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-21#paper-1019#slide-5 | Spuriousness solution 1 Iterative search | Limited depth exhaustive search
Step 0: Get seed set of logical forms till depth k
Max logical form depth = k + s
Step 1: Train model using MML on seed set
Step 2: Train using MBR on all data till a greater depth k + s
Minimum Bayes Risk training till depth k + s
Step 3: Replace offline search with trained MBR and update seed set
Maximum Marginal Likelihood Iterate till dev. accuracy stops increasing | Limited depth exhaustive search
Step 0: Get seed set of logical forms till depth k
Max logical form depth = k + s
Step 1: Train model using MML on seed set
Step 2: Train using MBR on all data till a greater depth k + s
Minimum Bayes Risk training till depth k + s
Step 3: Replace offline search with trained MBR and update seed set
Maximum Marginal Likelihood Iterate till dev. accuracy stops increasing | [] |
GEM-SciDuet-train-21#paper-1019#slide-6 | 1019 | Iterative Search for Weakly Supervised Semantic Parsing | Training semantic parsers from question-answer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates between searching for consistent logical forms and maximizing the marginal likelihood of the retrieved ones. This training scheme lets us iteratively train models that provide guidance to subsequent ones to search for logical forms of increasing complexity, thus dealing with the problem of spuriousness. We evaluate these techniques on two hard datasets: WIKITABLEQUESTIONS (WTQ) and Cornell Natural Language Visual Reasoning (NLVR), and show that our training algorithm outperforms the previous best systems, on WTQ in a comparable setting, and on NLVR with significantly less supervision. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202
],
"paper_content_text": [
"Introduction Semantic parsing is the task of translating natural language utterances into machine-executable meaning representations, often called programs or logical forms.",
"These logical forms can be executed against some representation of the context in which the utterance occurs, to produce a denotation.",
"This setup allows for complex reasoning over contextual knowledge, and it has been successfully used in several natural language understanding problems such as question answering (Berant et al., 2013) , program synthesis (Yin and Neubig, 2017) and building natural language interfaces (Suhr et al., 2018) .",
"Recent work has focused on training semantic parses via weak supervision from denotations alone (Liang et al., 2011; Berant et al., 2013) .",
"This is because obtaining logical form annotations is generally expensive (although recent work has addressed this issue to some extent (Yih et al., 2016) ), and not assuming full supervision lets us be agnostic about the logical form language.",
"The second reason is more important in open-domain semantic parsing tasks where it may not be possible to arrive at a complete set of operators required by the task.",
"However, training semantic parsers with weak supervision requires not only searching over an exponentially large space of logical forms (Berant et al., 2013; Artzi and Zettlemoyer, 2013; Pasupat and Liang, 2015; Guu et al., 2017, inter alia) but also dealing with spurious logical forms that evaluate to the correct denotation while not being semantically equivalent to the utterance.",
"For example, if the denotations are binary, 50% of all syntactically valid logical forms evaluate to the correct answer, regardless of their semantics.",
"This problem renders the training signal extremely noisy, making it hard for the model to learn anything without some additional guidance during search.",
"We introduce two innovations to improve learning from denotations.",
"Firstly, we propose an iterative search procedure for gradually increasing the complexity of candidate logical forms for each training instance, leading to better training data and better parsing accuracy.",
"This procedure is implemented via training our model with two interleaving objectives, one that involves searching for logical forms of limited complexity during training (online search), and another that maximizes the marginal likelihood of retrieved logical forms.",
"Second, we include a notion of coverage over the question in the search step to guide the training algorithm towards logical forms that not only evaluate to the correct denotation, but also have some connection to the words in the utterance.",
"We demonstrate the effectiveness of these two techniques on two difficult reasoning tasks: WIK-ITABLEQUESTIONS(WTQ) (Pasupat and Liang, 2015) , an open domain task with significant lexical variation, and Cornell Natural Language Visual Reasoning (NLVR) (Suhr et al., 2017 ), a closed domain task with binary denotations, and thus far less supervision.",
"We show that: 1) interleaving online search and MML over retrieved logical forms ( §4) is a more effective training algorithm than each of those objectives alone; 2) coverage guidance during search ( §3) is helpful for dealing with weak supervision, more so in the case of NLVR where the supervision is weaker; 3) a combination of the two techniques yields 44.3% test accuracy on WTQ, outperforming the previous best single model in a comparable setting, and 82.9% test accuracy on NLVR, outperforming the best prior model, which also relies on greater supervision.",
"Background Weakly supervised semantic parsing We formally define semantic parsing in a weakly supervised setup as follows.",
"Given a dataset where the i th instance is the triple {x i , w i , d i }, representing a sentence x i , the world w i associated with the sentence, and the corresponding denotation d i , our goal is to find y i , the translation of x i in an appropriate logical form language (see §5.3), such that y i w i = d i ; i.e., the execution of y i in world w i produces the correct denotation d i .",
"A semantic parser defines a distribution over logical forms given an input utterance: p(Y |x i ; θ).",
"Training algorithms In this section we describe prior techniques for training semantic parsers with weak supervision: maximizing marginal likelihood, and rewardbased methods.",
"Maximum marginal likelihood Most work on training semantic parsers from denotations maximizes the likelihood of the denotation given the utterance.",
"The semantic parsing model itself defines a distribution over logical forms, however, not denotations, so this maximization must be recast as a marginalization over logical forms that evaluate to the correct denotation: max θ x i ,d i ∈D y i ∈Y | y i w i =d i p(y i |x i ; θ) (1) This objective function is called maximum marginal likelihood (MML).",
"The inner summation is in general intractable to perform during training, so it is only approximated.",
"Most prior work (Berant et al., 2013; Goldman et al., 2018 , inter alia) approximate the intractable marginalization by summing over logical forms obtained via beam search during training.",
"This typically results in frequent search failures early during training when model parameters are close to random, and in general may only yield spurious logical forms in the absence of any guidance.",
"Since modern semantic parsers typically operate without a lexicon, new techniques are essential to provide guidance to the search procedure (Goldman et al., 2018) .",
"One way of providing this guidance during search is to perform some kind of heuristic search up front to find a set of logical forms that evaluate to the correct denotation, and use those logical forms to approximate the inner summation (Liang et al., 2011; Krishnamurthy et al., 2017) .",
"The particulars of the heuristic search can have a large impact on performance; a smaller candidate set has lower noise, while a larger set makes it more likely that the correct logical form is in it, and one needs to strike the right balance.",
"In this paper, we refer to the MML that does search during training as dynamic MML, and the one that does an offline search as static MML.",
"The main benefit of dynamic MML is that it adapts its training signal over time.",
"As the model learns, it can increasingly focus its probability mass on a small set of very likely logical forms.",
"The main benefit of static MML is that there is no need to search during training, so there is a consistent training signal even at the start of training, and it is typically more computationally efficient than dynamic MML.",
"Reward-based methods When training weakly supervised semantic parsers, it is often desirable to inject some prior knowledge into the training procedure by defining arbitrary reward or cost functions.",
"There exists prior work that use such methods, both in a reinforcement learning setting (Liang et al., , 2018 , and otherwise (Iyyer et al., 2017; Guu et al., 2017) .",
"In our work, we define a customized cost function that includes a coverage term, and use a Minimum Bayes Risk (MBR) (Goodman, 1996; Goel and Byrne, 2000; Smith and Eisner, 2006) training scheme, which we describe in §3.",
"Coverage-guided search Weakly-supervised training of semantic parsers relies heavily on lexical cues to guide the initial stages of learning to good logical forms.",
"Traditionally, these lexical cues were provided in the parser's lexicon.",
"Neural semantic parsers remove the lexicon, however, and so need another mechanism for obtaining these lexical cues.",
"In this section we introduce the use of coverage to inject lexicon-like information into neural semantic parsers.",
"Coverage is a measure of relevance of the candidate logical form y i to the input x i , in terms of how well the productions in y i map to parts of x i .",
"We use a small manually specified lexicon as a mapping from source language to the target language productions, and define coverage of y i as the number of productions triggered by the input utterance, according to the lexicon, that are included in y i .",
"We use this measure of coverage to augment our loss function, and train using an MBR based algorithm as follows.",
"We use beam search to train a model to minimize the expected value of a cost function C: min θ N i=1 Ep (y i |x i ;θ) C(x i , y i , w i , d i ) (2) wherep is a re-normalization 1 of the probabilities assigned to all logical forms on the beam.",
"We define the cost function C as: C(x i , y i , w i , d i ) = λS(y i , x i )+(1−λ)T (y i , w i , d i ) (3) where the function S measures the number of items that y i is missing from the actions (or grammar production rules) triggered by the input utterance x i given the lexicon; and the function T measures the consistency of the evaluation of y i in w i , meaning that it is 0 if y i w i = d i , or a value e otherwise.",
"We set e as the maximum possible value of the coverage cost for the corresponding instance, to make the two costs comparable in magnitude.",
"λ is a hyperparameter that gives the relative weight of the coverage cost.",
"Iterative search In this section we describe the iterative technique for refining the set of candidate logical forms associated with each training instance.",
"As discussed in §2.2, most prior work on weakly-supervised training of semantic parsers uses dynamic MML.",
"This is particularly problematic in domains like NLVR, where the supervision signal is binary-it is very hard for dynamic MML to bootstrap its way to finding good logical forms.",
"To solve this problem, we interleave static MML, which has a consistent supervision signal from the start of training, with the coverageaugmented MBR algorithm described in §3.",
"In order to use static MML, we need an initial set of candidate logical forms.",
"We obtain this candidate set using a bounded-length exhaustive search, filtered using heuristics based on the same lexical mapping used for coverage in §3.",
"A bounded-length search will not find logical forms for the entire training data, so we can only use a subset of the data for initial training.",
"We train a model to convergence using static MML on these logical forms, then use that model to initialize coverage-augmented MBR training.",
"This gives the model a good starting place for the dynamic learning algorithm, and the search at training time can look for logical forms that are longer than could be found with the bounded-length exhaustive search.",
"We train MBR to convergence, then use beam search on the MBR model to find a new set of candidate logical forms for static MML on the training data.",
"This set of logical forms can have a greater length than those in the initial set, because this search uses model scores to not exhaustively explore all possible paths, and thus will likely cover more of the training data.",
"In this way, we can iteratively improve the candidate logical forms used for static training, which in turn improves the starting place for the online search algorithm.",
"Algorithm 1 concretely describes this process.",
"Decode in the algorithm refers to running a beam search decoder that returns a set of consistent logical forms (i.e.",
"T = 0) for each of the input utterances.",
"We start off with a seed dataset D 0 for which consistent logical forms are available.",
"Datasets We will now describe the two datasets we use in this work to evaluate our methods -Cornell NLVR and WIKITABLEQUESTIONS.",
"Input : Dataset D = {X, W, D}; and seed set D 0 = {X 0 , Y 0 } such that X 0 ⊂ X and C(x 0 i , y 0 i , W i , D i ) = 0 Output: Model parameters θ MBR Initialize dataset D MML = D 0 ; while Acc(D dev ) is increasing do θ MML = MML(D MML ); Initialize θ MBR = θ MML ; Update θ MBR = MBR(D; θ MBR ); Update D MML = Decode(D; θ MBR ); end Algorithm 1: Iterative coverage-guided search Cornell NLVR Cornell NLVR is a language-grounding dataset containing natural language sentences provided along with synthetically generated visual contexts, and a label for each sentence-image pair indicating whether the sentence is true or false in the given context.",
"Figure 1 shows two example sentenceimage pairs from the dataset (with the same sentence).",
"The dataset also comes with structured representations of images, indicating the color, shape, size, and x-and y-coordinates of each of the objects in the image.",
"While we show images in Figure 1 for ease of exposition, we use the structured representations in this work.",
"Following the notation introduced in §2.1, x i in this example is There is a box with only one item that is blue.",
"The structured representations associated with the two images shown are two of the worlds (w 1 i and w 2 i ), in which x i could be evaluated.",
"The corresponding labels are the denotations d 1 i and d 2 i that a translation y i of the sentence x i is expected to produce, when executed in the two worlds respectively.",
"That the same sentence occurs with multiple worlds is an important property of this dataset, and we make use of it by defining the function T to be 0 only if ∀ w j i ,d j i y i w j i = d j i .",
"WIKITABLEQUESTIONS WIKITABLEQUESTIONS is a question-answering dataset where the task requires answering complex questions in the context of Wikipedia tables.",
"An example can be seen in Figure 2 .",
"Unlike NLVR, the answers are not binary.",
"They can instead be cells in the table or the result of numerical or settheoretic operations performed on them.",
"Logical form languages For NLVR, we define a typed variable-free functional query language, inspired by the GeoQuery language (Zelle and Mooney, 1996) .",
"Our language contains six basic types: box (referring to one of the three gray areas in Figure 1) , object (referring to the circles, triangles and squares in Figure 1) , shape, color, number and boolean.",
"The constants in our language are color and shape names, the set of all boxes in an image, and the set of all objects in an image.",
"The functions in our language include those for filtering objects and boxes, and making assertions, a higher order function for handling negations, and a function for querying objects in boxes.",
"This type specification of constants and functions gives us a grammar with 115 productions, of which 101 are terminal productions (see Appendix A.1 for the complete set of rules in our grammar).",
"Figure 1 shows an example of a complete logical form in our language.",
"For WTQ, we use the functional query language used by (Liang et al., 2018) as the logical form language.",
"Figure 2 shows an example logical form.",
"Lexicons for coverage The lexicon we use for the coverage measure described in §3 contains under 40 rules for each logical form language.",
"They mainly map words and phrases to constants and unary functions in the target language.",
"The complete lexicons are shown in the Appendix.",
"Figures 1 and 2 also show the actions triggered by the corresponding lexicons for the utterances shown.",
"We find that small but precise lexicons are sufficient to guide the search process away from spurious logical forms.",
"Moreover, as shown empirically in §6.4, the model for NLVR does not learn much without this simple but crucial guidance.",
"Experiments We evaluate both our contributions on NLVR and WIKITABLEQUESTIONS.",
"Model In this work, we use a grammar-constrained encoder-decoder neural semantic parser for our experiments.",
"Of the many variants of this basic architecture (see §7), all of which are essentially seq2seq models with constrained outputs and/or re-parameterizations, we choose to use the parser of Krishnamurthy et al.",
"(2017) , as it is particularly well-suited to the WIKITABLEQUESTIONS dataset, which we evaluate on.",
"The encoder in the model is a bi-directional recurrent neural network with Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) cells, and the decoder is a grammarconstrained decoder also with LSTM cells.",
"Instead of directly outputting tokens in the logical form, the decoder outputs production rules from a CFG-like grammar.",
"These production rules sequentially build up an abstract syntax tree, which determines the logical form.",
"The model also has an entity linking component for producing table entities in the logical forms; this com-ponent is only applicable to WIKITABLEQUES-TIONS, and we remove it when running experiments on NLVR.",
"The particulars of the model are not the focus of this work, so we refer the reader to the original paper for more details.",
"In addition, we slightly modify the constrained decoding architecture from (Krishnamurthy et al., 2017) to bias the predicted actions towards those that would decrease the value of S(y i , x i ).",
"This is done using a coverage vector, v S i for each training instance that keeps track of the production rules triggered by x i , and gets updated whenever one of those desired productions is produced by the decoder.",
"That is, v S i is a vector of 1s and 0s, with 1s indicating the triggered productions that are yet to be produced by the decoder.",
"This is similar to the idea of checklists used by Kiddon et al.",
"(2016) .",
"The decoder in the original architecture scores output actions at each time step by computing a dot product of the predicted action representation with the embeddings of each of the actions.",
"We add a weighted sum of all the actions that are yet to produced: s a i = e a .",
"(p i + γ * v S i .E) (4) where s a i is the score of action a at time step i, e a is the embedding of that action, p i is the predicted action representation, E is the set of embeddings of all the actions, and γ is a learned parameter for regularizing the bias towards yet-to-be produced triggered actions.",
"Experimental setup NLVR We use the standard train-dev-test split for NLVR, containing 12409, 988 and 989 sentence-image pairs respectively.",
"NLVR contains most of the sentences occurring in multiple worlds (with an average of 3.9 worlds per sentence).",
"We set the word embedding and action embedding sizes to 50, and the hidden layer size of both the encoder and the decoder to 30.",
"We initialized all the parameters, including the word and action embeddings using Glorot uniform initialization (Glorot and Bengio, 2010) .",
"We found that using pretrained word representations did not help.",
"We added a dropout (Srivastava et al., 2014) of 0.2 on the outputs of the encoder and the decoder and before predicting the next action, set the beam size to 10 both during training and at test time, and trained the model using ADAM (Kingma and Ba, 2014) with a learning rate of 0.001.",
"All the hyperparameters are tuned on the validation set.",
"WIKITABLEQUESTIONS This dataset comes with five different cross-validation folds of training data, each containing a different 80/20 split for training and development.",
"We first show results aggregated from all five folds in §6.3, and then show results from controlled experiments on fold 1.",
"We replicated the model presented in Krishnamurthy et al.",
"(2017) , and only changed the training algorithm and the language used.",
"We used a beam size of 20 for MBR during training and decoding, and 10 for MML during decoding, and trained the model using Stochastic Gradient Descent (Kiefer et al., 1952) with a learning rate of 0.1, all of which are tuned on the validation sets.",
"Specifics of iterative search For our iterative search algorithm, we obtain an initial set of candidate logical forms in both domains by exhaustively searching to a depth of 10 2 .",
"During search we retrieve the logical forms that lead to the correct denotations in all the corresponding worlds, and sort them based on their coverage cost using the coverage lexicon described in §5.4, and choose the top-k 3 .",
"At each iteration of the search step in our iterative training algorithm, we increase the maximum depth of our search with a step-size of 2, finding more complex logical forms and covering a larger proportion of the training data.",
"While exhaustive search is prohibitively expensive beyond a fixed number of steps, our training process that uses beam search based approximation can go deeper.",
"Implementation We implemented our model and training algorithms within the AllenNLP (Gardner et al., 2018) toolkit.",
"The code and models are publicly available at https://github.com/allenai/ iterative-search-semparse.",
"Main results WIKITABLEQUESTIONS Table 1 compares the performance of a single model trained using Iterative Search, with that of previously published single models.",
"We excluded ensemble models since there are differences in the way ensembles are built for this task in previous work, either in terms of size or how the individual models were chosen.",
"We show both best and aver- Approach Dev Test Pasupat and Liang (2015) 37.0 37.1 Neelakantan et al.",
"(2017) 34.",
"(Liang et al., 2018) , all trained on the official split 1 of WIKITABLEQUESTIONS and tested on the official test set.",
"age (over 5 folds) single model performance from Liang et al.",
"(2018) (Memory Augmented Policy Optimization).",
"The best model was chosen based on performance on the development set.",
"Our single model performances are computed in the same way.",
"Note that Liang et al.",
"(2018) also use a lexicon similar to ours to prune the seed set of logical forms used to initialize their memory buffer.",
"In Table 2 , we compare the performance of our iterative search algorithm with three baselines: 1) Static MML, as described in §2.2.1 trained on the candidate set of logical forms obtained through the heuristic search technique described in §6.2; 2) Iterative MML, also an iterative technique but unlike iterative search, we skip MBR and iteratively train static MML models while increasing the number of decoding steps; and 3) MAPO (Liang et al., 2018) , the current best published system on WTQ.",
"All four algorithms are trained and evaluated on the first fold, use the same language, and the bottom three use the same model and the same set of logical forms used to train static MML.",
"Table 3 , we show a comparison of the performance of our iterative coverage-guided search algorithm with the previously published approaches for NLVR.",
"The first two rows correspond to models that are not semantic parsers.",
"This shows that semantic parsing is a promising direction for this task.",
"The closest work to ours is the weakly supervised parser built by (Goldman et al., 2018) .",
"They build a lexicon similar to ours for mapping surface forms in input sentences to abstract clusters.",
"But in addition to defining a lexicon, they also manually annotate complete sentences in this abstract space, and use those annotations to perform data augmentation for training a supervised parser, which is then used to initialize a weakly supervised parser.",
"They also explicitly use the abstractions to augment the beam during decoding using caching, and a separately-trained discriminative re-ranker to re-order the logical forms on the beam.",
"As a discriminative re-ranker is orthogonal to our contributions, we show their results with and without it, with \"Abs.",
"Sup.\"",
"being more comparable to our work.",
"Our model, which uses no data augmentation, no caching during decoding, and no discriminative re-ranker, outperforms their variant without reranking on the public test set, and outperforms their best model on the hidden test set, achieving a new state-of-theart result on this dataset.",
"NLVR In Effect of coverage-guided search To evaluate the contribution of coverage-guided search, we compare the the performance of the NLVR parser in two different settings: with and without coverage guidance in the cost function.",
"We also compare the performance of the parser in the two settings, when initialized with parameters from an MML model trained to maximize the likelihood of the set of logical forms obtained from exhaustive search.",
"Table 4 shows the results of this comparison.",
"We measure accuracy and consistency of all four models on the publicly available test set, using the official evaluation script.",
"Consistency here refers to the percentage of logical forms that produce the correct denotation in all the corresponding worlds, and is hence a stricter metric than accuracy.",
"The cost weight (λ in Equation 3) was tuned based on validation set performance for the runs with coverage, and we found that λ = 0.4 worked best.",
"It can be seen that both with and without ini-tialization, coverage guidance helps by a big margin, with the gap being even more prominent in the case where there is no initialization.",
"When there is neither coverage guidance nor a good initialization, the model does not learn much from unguided search and get a test accuracy not much higher than the majority baseline of 56.2%.",
"We found that coverage guidance was not as useful for WTQ.",
"The average value of the best performing λ was around 0.2, and higher values neither helped nor hurt performance.",
"Effect of iterative search To evaluate the effect of iterative search, we present the accuracy numbers from the search (S) and maximization (M) steps from different iterations in Tables 5 and 6 , showing results on NLVR and WTQ, respectively.",
"Additionally, we also show number of decoding steps used at each iterations, and the percentage of sentences in the training data for which we were able to obtain consistent logical forms from the S step, the set that was used in the M step of the same iteration.",
"It can be seen in both tables that a better MML model gives a better initialization for MBR, and a better MBR model results in a larger set of utterances for which we can retrieve consistent logical forms, thus improving the subsequent MML model.",
"The improvement for NLVR is more pronounced (a gain of 21% absolute) than for WTQ (a gain of 3% absolute), likely because the initial exhaustive search provides a much higher percentage of spurious logical forms for NLVR, and thus the starting place is relatively worse.",
"Complexity of Logical Forms We analyzed the logical forms produced by our iterative search algorithm at different iterations to see how they differ.",
"As expected, for NLVR, allowing greater depths lets the parser explore more complex logical forms.",
"Table 7 shows examples from the validation set that indicate this trend.",
"Related Work Most of the early methods used for training semantic parsers required the training data to come with annotated logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005) .",
"The primary limitation of such methods is that manually producing these logical forms is expensive, making it hard to scale these methods across domains.",
"Dev.",
"Test-P Test-H Approach Acc.",
"Cons.",
"Acc.",
"Cons.",
"Acc.",
"Cons.",
"MaxEnt (Suhr et al., 2017) 68.0 -67.7 -67.8 -BiATT-Pointer (Tan and Bansal, 2018) 74.6 -73.9 -71.8 -Abs.",
"Sup.",
"(Goldman et al., 2018) 84.3 66.3 81.7 60.1 --Abs.",
"Sup.",
"+ ReRank (Goldman et al., 2018) More recent research has focused on training semantic parsers with weak supervision (Liang et al., 2011; Berant et al., 2013) , or trying to automatically infer logical forms from denotations (Pasupat and .",
"However, matching the performance of a fully supervised semantic parser with only weak supervision remains a significant challenge (Yih et al., 2016) .",
"The main contributions of this work deal with training semantic parsers with weak supervision, and we gave a detailed discussion of related training methods in §2.2.",
"We evaluate our contributions on the NLVR and WIKITABLEQUESTIONS datasets.",
"Other work that evaluates on on these datasets include Goldman et al.",
"(2018) , Tan and Bansal (2018) , Neelakantan et al.",
"(2017) , Krishnamurthy et al.",
"(2017) , Haug et al.",
"(2018) , and (Liang et al., 2018) .",
"These prior works generally present modeling contributions that are orthogonal (and in some cases complementary) to the contributions of this paper.",
"There has also been a lot of recent work on neural semantic parsing, most of which is also orthogonal to (and could probably benefit from) our contributions (Dong and Lapata, 2016; Jia and Liang, 2016; Yin and Neubig, 2017; Krishnamurthy et al., 2017; Rabinovich et al., 2017) .",
"Recent attempts at dealing with the problem of spuriousness include Misra et al.",
"(2018) and Guu et al.",
"(2017) .",
"Coverage has recently been used in machine translation (Tu et al., 2016) and summarization (See et al., 2017) .",
"There have also been many methods that use coverage-like mechanisms to give lexical cues to semantic parsers.",
"Goldman et al.",
"(2018) 's abstract examples is the most recent and related work, but the idea is also related to lexicons in pre-neural semantic parsers (Kwiatkowski et al., 2011) .",
"There is a tower with four blocks (box exists (member count equals all boxes 4)) 1 Atleast one black triangle is not touching the edge (object exists (black (triangle ((negate filter touch wall) all objects)))) 2 There is a yellow block as the top of a tower with exactly three blocks.",
"(object exists (yellow (top (object in box (member count equals all boxes 3))))) 3 The tower with three blocks has a yellow block over a black block (object count greater equals (yellow (above (black (object in box (member count equals all boxes 3))))) 1) Table 7 : Complexity of logical forms produced at different iterations, from iteration 0 to iteration 3; each logical form could not be produced at the previous iterations Conclusion We have presented a new technique for training semantic parsers with weak supervision.",
"Our key insights are that lexical cues are crucial for guiding search during the early stages of training, and that the particulars of the approximate marginalization in maximum marginal likelihood have a large impact on performance.",
"To address the first issue, we used a simple coverage mechanism for including lexicon-like information in neural semantic parsers that do not have lexicons.",
"For the second issue, we developed an iterative procedure that alternates between statically-computed and dynamically-computed training signals.",
"Together these two contributions greatly improve semantic parsing performance, leading to new state-ofthe-art results on NLVR and WIKITABLEQUES-TIONS.",
"As these contributions are to the learning algorithm, they are broadly applicable to many models trained with weak supervision.",
"One potential future work direction is investigating whether they extend to other structured prediction problems beyond semantic parsing."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.2.1",
"2.2.2",
"3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"6.5",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Weakly supervised semantic parsing",
"Training algorithms",
"Maximum marginal likelihood",
"Reward-based methods",
"Coverage-guided search",
"Iterative search",
"Datasets",
"Cornell NLVR",
"WIKITABLEQUESTIONS",
"Logical form languages",
"Lexicons for coverage",
"Experiments",
"Model",
"Experimental setup",
"Main results",
"Effect of coverage-guided search",
"Effect of iterative search",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-21#paper-1019#slide-6 | Spuriousness Solution 2 Coverage guidance | There is exactly one square touching the bottom of a box.
(count_equals (square (touch_bottom all_objects)) 1)
Insight: There is a significant amount of trivial overlap
Solution: Use overlap as a measure to guide search | There is exactly one square touching the bottom of a box.
(count_equals (square (touch_bottom all_objects)) 1)
Insight: There is a significant amount of trivial overlap
Solution: Use overlap as a measure to guide search | []
GEM-SciDuet-train-21#paper-1019#slide-7 | 1019 | Iterative Search for Weakly Supervised Semantic Parsing | Training semantic parsers from question-answer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates between searching for consistent logical forms and maximizing the marginal likelihood of the retrieved ones. This training scheme lets us iteratively train models that provide guidance to subsequent ones to search for logical forms of increasing complexity, thus dealing with the problem of spuriousness. We evaluate these techniques on two hard datasets: WIKITABLEQUESTIONS (WTQ) and Cornell Natural Language Visual Reasoning (NLVR), and show that our training algorithm outperforms the previous best systems, on WTQ in a comparable setting, and on NLVR with significantly less supervision. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202
],
"paper_content_text": [
"Introduction Semantic parsing is the task of translating natural language utterances into machine-executable meaning representations, often called programs or logical forms.",
"These logical forms can be executed against some representation of the context in which the utterance occurs, to produce a denotation.",
"This setup allows for complex reasoning over contextual knowledge, and it has been successfully used in several natural language understanding problems such as question answering (Berant et al., 2013) , program synthesis (Yin and Neubig, 2017) and building natural language interfaces (Suhr et al., 2018) .",
"Recent work has focused on training semantic parses via weak supervision from denotations alone (Liang et al., 2011; Berant et al., 2013) .",
"This is because obtaining logical form annotations is generally expensive (although recent work has addressed this issue to some extent (Yih et al., 2016) ), and not assuming full supervision lets us be agnostic about the logical form language.",
"The second reason is more important in open-domain semantic parsing tasks where it may not be possible to arrive at a complete set of operators required by the task.",
"However, training semantic parsers with weak supervision requires not only searching over an exponentially large space of logical forms (Berant et al., 2013; Artzi and Zettlemoyer, 2013; Pasupat and Liang, 2015; Guu et al., 2017, inter alia) but also dealing with spurious logical forms that evaluate to the correct denotation while not being semantically equivalent to the utterance.",
"For example, if the denotations are binary, 50% of all syntactically valid logical forms evaluate to the correct answer, regardless of their semantics.",
"This problem renders the training signal extremely noisy, making it hard for the model to learn anything without some additional guidance during search.",
"We introduce two innovations to improve learning from denotations.",
"Firstly, we propose an iterative search procedure for gradually increasing the complexity of candidate logical forms for each training instance, leading to better training data and better parsing accuracy.",
"This procedure is implemented via training our model with two interleaving objectives, one that involves searching for logical forms of limited complexity during training (online search), and another that maximizes the marginal likelihood of retrieved logical forms.",
"Second, we include a notion of coverage over the question in the search step to guide the training algorithm towards logical forms that not only evaluate to the correct denotation, but also have some connection to the words in the utterance.",
"We demonstrate the effectiveness of these two techniques on two difficult reasoning tasks: WIK-ITABLEQUESTIONS(WTQ) (Pasupat and Liang, 2015) , an open domain task with significant lexical variation, and Cornell Natural Language Visual Reasoning (NLVR) (Suhr et al., 2017 ), a closed domain task with binary denotations, and thus far less supervision.",
"We show that: 1) interleaving online search and MML over retrieved logical forms ( §4) is a more effective training algorithm than each of those objectives alone; 2) coverage guidance during search ( §3) is helpful for dealing with weak supervision, more so in the case of NLVR where the supervision is weaker; 3) a combination of the two techniques yields 44.3% test accuracy on WTQ, outperforming the previous best single model in a comparable setting, and 82.9% test accuracy on NLVR, outperforming the best prior model, which also relies on greater supervision.",
"Background Weakly supervised semantic parsing We formally define semantic parsing in a weakly supervised setup as follows.",
"Given a dataset where the i th instance is the triple {x i , w i , d i }, representing a sentence x i , the world w i associated with the sentence, and the corresponding denotation d i , our goal is to find y i , the translation of x i in an appropriate logical form language (see §5.3), such that y i w i = d i ; i.e., the execution of y i in world w i produces the correct denotation d i .",
"A semantic parser defines a distribution over logical forms given an input utterance: p(Y |x i ; θ).",
"Training algorithms In this section we describe prior techniques for training semantic parsers with weak supervision: maximizing marginal likelihood, and rewardbased methods.",
"Maximum marginal likelihood Most work on training semantic parsers from denotations maximizes the likelihood of the denotation given the utterance.",
"The semantic parsing model itself defines a distribution over logical forms, however, not denotations, so this maximization must be recast as a marginalization over logical forms that evaluate to the correct denotation: max θ x i ,d i ∈D y i ∈Y | y i w i =d i p(y i |x i ; θ) (1) This objective function is called maximum marginal likelihood (MML).",
"The inner summation is in general intractable to perform during training, so it is only approximated.",
"Most prior work (Berant et al., 2013; Goldman et al., 2018 , inter alia) approximate the intractable marginalization by summing over logical forms obtained via beam search during training.",
"This typically results in frequent search failures early during training when model parameters are close to random, and in general may only yield spurious logical forms in the absence of any guidance.",
"Since modern semantic parsers typically operate without a lexicon, new techniques are essential to provide guidance to the search procedure (Goldman et al., 2018) .",
"One way of providing this guidance during search is to perform some kind of heuristic search up front to find a set of logical forms that evaluate to the correct denotation, and use those logical forms to approximate the inner summation (Liang et al., 2011; Krishnamurthy et al., 2017) .",
"The particulars of the heuristic search can have a large impact on performance; a smaller candidate set has lower noise, while a larger set makes it more likely that the correct logical form is in it, and one needs to strike the right balance.",
"In this paper, we refer to the MML that does search during training as dynamic MML, and the one that does an offline search as static MML.",
"The main benefit of dynamic MML is that it adapts its training signal over time.",
"As the model learns, it can increasingly focus its probability mass on a small set of very likely logical forms.",
"The main benefit of static MML is that there is no need to search during training, so there is a consistent training signal even at the start of training, and it is typically more computationally efficient than dynamic MML.",
"Reward-based methods When training weakly supervised semantic parsers, it is often desirable to inject some prior knowledge into the training procedure by defining arbitrary reward or cost functions.",
"There exists prior work that use such methods, both in a reinforcement learning setting (Liang et al., , 2018 , and otherwise (Iyyer et al., 2017; Guu et al., 2017) .",
"In our work, we define a customized cost function that includes a coverage term, and use a Minimum Bayes Risk (MBR) (Goodman, 1996; Goel and Byrne, 2000; Smith and Eisner, 2006) training scheme, which we describe in §3.",
"Coverage-guided search Weakly-supervised training of semantic parsers relies heavily on lexical cues to guide the initial stages of learning to good logical forms.",
"Traditionally, these lexical cues were provided in the parser's lexicon.",
"Neural semantic parsers remove the lexicon, however, and so need another mechanism for obtaining these lexical cues.",
"In this section we introduce the use of coverage to inject lexicon-like information into neural semantic parsers.",
"Coverage is a measure of relevance of the candidate logical form y i to the input x i , in terms of how well the productions in y i map to parts of x i .",
"We use a small manually specified lexicon as a mapping from source language to the target language productions, and define coverage of y i as the number of productions triggered by the input utterance, according to the lexicon, that are included in y i .",
"We use this measure of coverage to augment our loss function, and train using an MBR based algorithm as follows.",
"We use beam search to train a model to minimize the expected value of a cost function C: min θ N i=1 Ep (y i |x i ;θ) C(x i , y i , w i , d i ) (2) wherep is a re-normalization 1 of the probabilities assigned to all logical forms on the beam.",
"We define the cost function C as: C(x i , y i , w i , d i ) = λS(y i , x i )+(1−λ)T (y i , w i , d i ) (3) where the function S measures the number of items that y i is missing from the actions (or grammar production rules) triggered by the input utterance x i given the lexicon; and the function T measures the consistency of the evaluation of y i in w i , meaning that it is 0 if y i w i = d i , or a value e otherwise.",
"We set e as the maximum possible value of the coverage cost for the corresponding instance, to make the two costs comparable in magnitude.",
"λ is a hyperparameter that gives the relative weight of the coverage cost.",
"Iterative search In this section we describe the iterative technique for refining the set of candidate logical forms associated with each training instance.",
"As discussed in §2.2, most prior work on weakly-supervised training of semantic parsers uses dynamic MML.",
"This is particularly problematic in domains like NLVR, where the supervision signal is binary-it is very hard for dynamic MML to bootstrap its way to finding good logical forms.",
"To solve this problem, we interleave static MML, which has a consistent supervision signal from the start of training, with the coverageaugmented MBR algorithm described in §3.",
"In order to use static MML, we need an initial set of candidate logical forms.",
"We obtain this candidate set using a bounded-length exhaustive search, filtered using heuristics based on the same lexical mapping used for coverage in §3.",
"A bounded-length search will not find logical forms for the entire training data, so we can only use a subset of the data for initial training.",
"We train a model to convergence using static MML on these logical forms, then use that model to initialize coverage-augmented MBR training.",
"This gives the model a good starting place for the dynamic learning algorithm, and the search at training time can look for logical forms that are longer than could be found with the bounded-length exhaustive search.",
"We train MBR to convergence, then use beam search on the MBR model to find a new set of candidate logical forms for static MML on the training data.",
"This set of logical forms can have a greater length than those in the initial set, because this search uses model scores to not exhaustively explore all possible paths, and thus will likely cover more of the training data.",
"In this way, we can iteratively improve the candidate logical forms used for static training, which in turn improves the starting place for the online search algorithm.",
"Algorithm 1 concretely describes this process.",
"Decode in the algorithm refers to running a beam search decoder that returns a set of consistent logical forms (i.e.",
"T = 0) for each of the input utterances.",
"We start off with a seed dataset D 0 for which consistent logical forms are available.",
"Datasets We will now describe the two datasets we use in this work to evaluate our methods -Cornell NLVR and WIKITABLEQUESTIONS.",
"Input : Dataset D = {X, W, D}; and seed set D 0 = {X 0 , Y 0 } such that X 0 ⊂ X and C(x 0 i , y 0 i , W i , D i ) = 0 Output: Model parameters θ MBR Initialize dataset D MML = D 0 ; while Acc(D dev ) is increasing do θ MML = MML(D MML ); Initialize θ MBR = θ MML ; Update θ MBR = MBR(D; θ MBR ); Update D MML = Decode(D; θ MBR ); end Algorithm 1: Iterative coverage-guided search Cornell NLVR Cornell NLVR is a language-grounding dataset containing natural language sentences provided along with synthetically generated visual contexts, and a label for each sentence-image pair indicating whether the sentence is true or false in the given context.",
"Figure 1 shows two example sentenceimage pairs from the dataset (with the same sentence).",
"The dataset also comes with structured representations of images, indicating the color, shape, size, and x-and y-coordinates of each of the objects in the image.",
"While we show images in Figure 1 for ease of exposition, we use the structured representations in this work.",
"Following the notation introduced in §2.1, x i in this example is There is a box with only one item that is blue.",
"The structured representations associated with the two images shown are two of the worlds (w 1 i and w 2 i ), in which x i could be evaluated.",
"The corresponding labels are the denotations d 1 i and d 2 i that a translation y i of the sentence x i is expected to produce, when executed in the two worlds respectively.",
"That the same sentence occurs with multiple worlds is an important property of this dataset, and we make use of it by defining the function T to be 0 only if ∀ w j i ,d j i y i w j i = d j i .",
"WIKITABLEQUESTIONS WIKITABLEQUESTIONS is a question-answering dataset where the task requires answering complex questions in the context of Wikipedia tables.",
"An example can be seen in Figure 2 .",
"Unlike NLVR, the answers are not binary.",
"They can instead be cells in the table or the result of numerical or settheoretic operations performed on them.",
"Logical form languages For NLVR, we define a typed variable-free functional query language, inspired by the GeoQuery language (Zelle and Mooney, 1996) .",
"Our language contains six basic types: box (referring to one of the three gray areas in Figure 1) , object (referring to the circles, triangles and squares in Figure 1) , shape, color, number and boolean.",
"The constants in our language are color and shape names, the set of all boxes in an image, and the set of all objects in an image.",
"The functions in our language include those for filtering objects and boxes, and making assertions, a higher order function for handling negations, and a function for querying objects in boxes.",
"This type specification of constants and functions gives us a grammar with 115 productions, of which 101 are terminal productions (see Appendix A.1 for the complete set of rules in our grammar).",
"Figure 1 shows an example of a complete logical form in our language.",
"For WTQ, we use the functional query language used by (Liang et al., 2018) as the logical form language.",
"Figure 2 shows an example logical form.",
"Lexicons for coverage The lexicon we use for the coverage measure described in §3 contains under 40 rules for each logical form language.",
"They mainly map words and phrases to constants and unary functions in the target language.",
"The complete lexicons are shown in the Appendix.",
"Figures 1 and 2 also show the actions triggered by the corresponding lexicons for the utterances shown.",
"We find that small but precise lexicons are sufficient to guide the search process away from spurious logical forms.",
"Moreover, as shown empirically in §6.4, the model for NLVR does not learn much without this simple but crucial guidance.",
"Experiments We evaluate both our contributions on NLVR and WIKITABLEQUESTIONS.",
"Model In this work, we use a grammar-constrained encoder-decoder neural semantic parser for our experiments.",
"Of the many variants of this basic architecture (see §7), all of which are essentially seq2seq models with constrained outputs and/or re-parameterizations, we choose to use the parser of Krishnamurthy et al.",
"(2017) , as it is particularly well-suited to the WIKITABLEQUESTIONS dataset, which we evaluate on.",
"The encoder in the model is a bi-directional recurrent neural network with Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) cells, and the decoder is a grammarconstrained decoder also with LSTM cells.",
"Instead of directly outputting tokens in the logical form, the decoder outputs production rules from a CFG-like grammar.",
"These production rules sequentially build up an abstract syntax tree, which determines the logical form.",
"The model also has an entity linking component for producing table entities in the logical forms; this com-ponent is only applicable to WIKITABLEQUES-TIONS, and we remove it when running experiments on NLVR.",
"The particulars of the model are not the focus of this work, so we refer the reader to the original paper for more details.",
"In addition, we slightly modify the constrained decoding architecture from (Krishnamurthy et al., 2017) to bias the predicted actions towards those that would decrease the value of S(y i , x i ).",
"This is done using a coverage vector, v S i for each training instance that keeps track of the production rules triggered by x i , and gets updated whenever one of those desired productions is produced by the decoder.",
"That is, v S i is a vector of 1s and 0s, with 1s indicating the triggered productions that are yet to be produced by the decoder.",
"This is similar to the idea of checklists used by Kiddon et al.",
"(2016) .",
"The decoder in the original architecture scores output actions at each time step by computing a dot product of the predicted action representation with the embeddings of each of the actions.",
"We add a weighted sum of all the actions that are yet to produced: s a i = e a .",
"(p i + γ * v S i .E) (4) where s a i is the score of action a at time step i, e a is the embedding of that action, p i is the predicted action representation, E is the set of embeddings of all the actions, and γ is a learned parameter for regularizing the bias towards yet-to-be produced triggered actions.",
"Experimental setup NLVR We use the standard train-dev-test split for NLVR, containing 12409, 988 and 989 sentence-image pairs respectively.",
"NLVR contains most of the sentences occurring in multiple worlds (with an average of 3.9 worlds per sentence).",
"We set the word embedding and action embedding sizes to 50, and the hidden layer size of both the encoder and the decoder to 30.",
"We initialized all the parameters, including the word and action embeddings using Glorot uniform initialization (Glorot and Bengio, 2010) .",
"We found that using pretrained word representations did not help.",
"We added a dropout (Srivastava et al., 2014) of 0.2 on the outputs of the encoder and the decoder and before predicting the next action, set the beam size to 10 both during training and at test time, and trained the model using ADAM (Kingma and Ba, 2014) with a learning rate of 0.001.",
"All the hyperparameters are tuned on the validation set.",
"WIKITABLEQUESTIONS This dataset comes with five different cross-validation folds of training data, each containing a different 80/20 split for training and development.",
"We first show results aggregated from all five folds in §6.3, and then show results from controlled experiments on fold 1.",
"We replicated the model presented in Krishnamurthy et al.",
"(2017) , and only changed the training algorithm and the language used.",
"We used a beam size of 20 for MBR during training and decoding, and 10 for MML during decoding, and trained the model using Stochastic Gradient Descent (Kiefer et al., 1952) with a learning rate of 0.1, all of which are tuned on the validation sets.",
"Specifics of iterative search For our iterative search algorithm, we obtain an initial set of candidate logical forms in both domains by exhaustively searching to a depth of 10 2 .",
"During search we retrieve the logical forms that lead to the correct denotations in all the corresponding worlds, and sort them based on their coverage cost using the coverage lexicon described in §5.4, and choose the top-k 3 .",
"At each iteration of the search step in our iterative training algorithm, we increase the maximum depth of our search with a step-size of 2, finding more complex logical forms and covering a larger proportion of the training data.",
"While exhaustive search is prohibitively expensive beyond a fixed number of steps, our training process that uses beam search based approximation can go deeper.",
"Implementation We implemented our model and training algorithms within the AllenNLP (Gardner et al., 2018) toolkit.",
"The code and models are publicly available at https://github.com/allenai/ iterative-search-semparse.",
"Main results WIKITABLEQUESTIONS Table 1 compares the performance of a single model trained using Iterative Search, with that of previously published single models.",
"We excluded ensemble models since there are differences in the way ensembles are built for this task in previous work, either in terms of size or how the individual models were chosen.",
"We show both best and aver- Approach Dev Test Pasupat and Liang (2015) 37.0 37.1 Neelakantan et al.",
"(2017) 34.",
"(Liang et al., 2018) , all trained on the official split 1 of WIKITABLEQUESTIONS and tested on the official test set.",
"age (over 5 folds) single model performance from Liang et al.",
"(2018) (Memory Augmented Policy Optimization).",
"The best model was chosen based on performance on the development set.",
"Our single model performances are computed in the same way.",
"Note that Liang et al.",
"(2018) also use a lexicon similar to ours to prune the seed set of logical forms used to initialize their memory buffer.",
"In Table 2 , we compare the performance of our iterative search algorithm with three baselines: 1) Static MML, as described in §2.2.1 trained on the candidate set of logical forms obtained through the heuristic search technique described in §6.2; 2) Iterative MML, also an iterative technique but unlike iterative search, we skip MBR and iteratively train static MML models while increasing the number of decoding steps; and 3) MAPO (Liang et al., 2018) , the current best published system on WTQ.",
"All four algorithms are trained and evaluated on the first fold, use the same language, and the bottom three use the same model and the same set of logical forms used to train static MML.",
"Table 3 , we show a comparison of the performance of our iterative coverage-guided search algorithm with the previously published approaches for NLVR.",
"The first two rows correspond to models that are not semantic parsers.",
"This shows that semantic parsing is a promising direction for this task.",
"The closest work to ours is the weakly supervised parser built by (Goldman et al., 2018) .",
"They build a lexicon similar to ours for mapping surface forms in input sentences to abstract clusters.",
"But in addition to defining a lexicon, they also manually annotate complete sentences in this abstract space, and use those annotations to perform data augmentation for training a supervised parser, which is then used to initialize a weakly supervised parser.",
"They also explicitly use the abstractions to augment the beam during decoding using caching, and a separately-trained discriminative re-ranker to re-order the logical forms on the beam.",
"As a discriminative re-ranker is orthogonal to our contributions, we show their results with and without it, with \"Abs.",
"Sup.\"",
"being more comparable to our work.",
"Our model, which uses no data augmentation, no caching during decoding, and no discriminative re-ranker, outperforms their variant without reranking on the public test set, and outperforms their best model on the hidden test set, achieving a new state-of-theart result on this dataset.",
"NLVR In Effect of coverage-guided search To evaluate the contribution of coverage-guided search, we compare the the performance of the NLVR parser in two different settings: with and without coverage guidance in the cost function.",
"We also compare the performance of the parser in the two settings, when initialized with parameters from an MML model trained to maximize the likelihood of the set of logical forms obtained from exhaustive search.",
"Table 4 shows the results of this comparison.",
"We measure accuracy and consistency of all four models on the publicly available test set, using the official evaluation script.",
"Consistency here refers to the percentage of logical forms that produce the correct denotation in all the corresponding worlds, and is hence a stricter metric than accuracy.",
"The cost weight (λ in Equation 3) was tuned based on validation set performance for the runs with coverage, and we found that λ = 0.4 worked best.",
"It can be seen that both with and without ini-tialization, coverage guidance helps by a big margin, with the gap being even more prominent in the case where there is no initialization.",
"When there is neither coverage guidance nor a good initialization, the model does not learn much from unguided search and get a test accuracy not much higher than the majority baseline of 56.2%.",
"We found that coverage guidance was not as useful for WTQ.",
"The average value of the best performing λ was around 0.2, and higher values neither helped nor hurt performance.",
"Effect of iterative search To evaluate the effect of iterative search, we present the accuracy numbers from the search (S) and maximization (M) steps from different iterations in Tables 5 and 6 , showing results on NLVR and WTQ, respectively.",
"Additionally, we also show number of decoding steps used at each iterations, and the percentage of sentences in the training data for which we were able to obtain consistent logical forms from the S step, the set that was used in the M step of the same iteration.",
"It can be seen in both tables that a better MML model gives a better initialization for MBR, and a better MBR model results in a larger set of utterances for which we can retrieve consistent logical forms, thus improving the subsequent MML model.",
"The improvement for NLVR is more pronounced (a gain of 21% absolute) than for WTQ (a gain of 3% absolute), likely because the initial exhaustive search provides a much higher percentage of spurious logical forms for NLVR, and thus the starting place is relatively worse.",
"Complexity of Logical Forms We analyzed the logical forms produced by our iterative search algorithm at different iterations to see how they differ.",
"As expected, for NLVR, allowing greater depths lets the parser explore more complex logical forms.",
"Table 7 shows examples from the validation set that indicate this trend.",
"Related Work Most of the early methods used for training semantic parsers required the training data to come with annotated logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005) .",
"The primary limitation of such methods is that manually producing these logical forms is expensive, making it hard to scale these methods across domains.",
"Dev.",
"Test-P Test-H Approach Acc.",
"Cons.",
"Acc.",
"Cons.",
"Acc.",
"Cons.",
"MaxEnt (Suhr et al., 2017) 68.0 -67.7 -67.8 -BiATT-Pointer (Tan and Bansal, 2018) 74.6 -73.9 -71.8 -Abs.",
"Sup.",
"(Goldman et al., 2018) 84.3 66.3 81.7 60.1 --Abs.",
"Sup.",
"+ ReRank (Goldman et al., 2018) More recent research has focused on training semantic parsers with weak supervision (Liang et al., 2011; Berant et al., 2013) , or trying to automatically infer logical forms from denotations (Pasupat and .",
"However, matching the performance of a fully supervised semantic parser with only weak supervision remains a significant challenge (Yih et al., 2016) .",
"The main contributions of this work deal with training semantic parsers with weak supervision, and we gave a detailed discussion of related training methods in §2.2.",
"We evaluate our contributions on the NLVR and WIKITABLEQUESTIONS datasets.",
"Other work that evaluates on on these datasets include Goldman et al.",
"(2018) , Tan and Bansal (2018) , Neelakantan et al.",
"(2017) , Krishnamurthy et al.",
"(2017) , Haug et al.",
"(2018) , and (Liang et al., 2018) .",
"These prior works generally present modeling contributions that are orthogonal (and in some cases complementary) to the contributions of this paper.",
"There has also been a lot of recent work on neural semantic parsing, most of which is also orthogonal to (and could probably benefit from) our contributions (Dong and Lapata, 2016; Jia and Liang, 2016; Yin and Neubig, 2017; Krishnamurthy et al., 2017; Rabinovich et al., 2017) .",
"Recent attempts at dealing with the problem of spuriousness include Misra et al.",
"(2018) and Guu et al.",
"(2017) .",
"Coverage has recently been used in machine translation (Tu et al., 2016) and summarization (See et al., 2017) .",
"There have also been many methods that use coverage-like mechanisms to give lexical cues to semantic parsers.",
"Goldman et al.",
"(2018) 's abstract examples is the most recent and related work, but the idea is also related to lexicons in pre-neural semantic parsers (Kwiatkowski et al., 2011) .",
"There is a tower with four blocks (box exists (member count equals all boxes 4)) 1 Atleast one black triangle is not touching the edge (object exists (black (triangle ((negate filter touch wall) all objects)))) 2 There is a yellow block as the top of a tower with exactly three blocks.",
"(object exists (yellow (top (object in box (member count equals all boxes 3))))) 3 The tower with three blocks has a yellow block over a black block (object count greater equals (yellow (above (black (object in box (member count equals all boxes 3))))) 1) Table 7 : Complexity of logical forms produced at different iterations, from iteration 0 to iteration 3; each logical form could not be produced at the previous iterations Conclusion We have presented a new technique for training semantic parsers with weak supervision.",
"Our key insights are that lexical cues are crucial for guiding search during the early stages of training, and that the particulars of the approximate marginalization in maximum marginal likelihood have a large impact on performance.",
"To address the first issue, we used a simple coverage mechanism for including lexicon-like information in neural semantic parsers that do not have lexicons.",
"For the second issue, we developed an iterative procedure that alternates between statically-computed and dynamically-computed training signals.",
"Together these two contributions greatly improve semantic parsing performance, leading to new state-ofthe-art results on NLVR and WIKITABLEQUES-TIONS.",
"As these contributions are to the learning algorithm, they are broadly applicable to many models trained with weak supervision.",
"One potential future work direction is investigating whether they extend to other structured prediction problems beyond semantic parsing."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.2.1",
"2.2.2",
"3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"6.5",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Weakly supervised semantic parsing",
"Training algorithms",
"Maximum marginal likelihood",
"Reward-based methods",
"Coverage-guided search",
"Iterative search",
"Datasets",
"Cornell NLVR",
"WIKITABLEQUESTIONS",
"Logical form languages",
"Lexicons for coverage",
"Experiments",
"Model",
"Experimental setup",
"Main results",
"Effect of coverage-guided search",
"Effect of iterative search",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-21#paper-1019#slide-7 | Training with Coverage Guidance | Augment the reward-based objective: | Augment the reward-based objective: | [] |
GEM-SciDuet-train-21#paper-1019#slide-10 | 1019 | Iterative Search for Weakly Supervised Semantic Parsing | Training semantic parsers from question-answer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates between searching for consistent logical forms and maximizing the marginal likelihood of the retrieved ones. This training scheme lets us iteratively train models that provide guidance to subsequent ones to search for logical forms of increasing complexity, thus dealing with the problem of spuriousness. We evaluate these techniques on two hard datasets: WIKITABLEQUESTIONS (WTQ) and Cornell Natural Language Visual Reasoning (NLVR), and show that our training algorithm outperforms the previous best systems, on WTQ in a comparable setting, and on NLVR with significantly less supervision. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202
],
"paper_content_text": [
"Introduction Semantic parsing is the task of translating natural language utterances into machine-executable meaning representations, often called programs or logical forms.",
"These logical forms can be executed against some representation of the context in which the utterance occurs, to produce a denotation.",
"This setup allows for complex reasoning over contextual knowledge, and it has been successfully used in several natural language understanding problems such as question answering (Berant et al., 2013) , program synthesis (Yin and Neubig, 2017) and building natural language interfaces (Suhr et al., 2018) .",
"Recent work has focused on training semantic parses via weak supervision from denotations alone (Liang et al., 2011; Berant et al., 2013) .",
"This is because obtaining logical form annotations is generally expensive (although recent work has addressed this issue to some extent (Yih et al., 2016) ), and not assuming full supervision lets us be agnostic about the logical form language.",
"The second reason is more important in open-domain semantic parsing tasks where it may not be possible to arrive at a complete set of operators required by the task.",
"However, training semantic parsers with weak supervision requires not only searching over an exponentially large space of logical forms (Berant et al., 2013; Artzi and Zettlemoyer, 2013; Pasupat and Liang, 2015; Guu et al., 2017, inter alia) but also dealing with spurious logical forms that evaluate to the correct denotation while not being semantically equivalent to the utterance.",
"For example, if the denotations are binary, 50% of all syntactically valid logical forms evaluate to the correct answer, regardless of their semantics.",
"This problem renders the training signal extremely noisy, making it hard for the model to learn anything without some additional guidance during search.",
"We introduce two innovations to improve learning from denotations.",
"Firstly, we propose an iterative search procedure for gradually increasing the complexity of candidate logical forms for each training instance, leading to better training data and better parsing accuracy.",
"This procedure is implemented via training our model with two interleaving objectives, one that involves searching for logical forms of limited complexity during training (online search), and another that maximizes the marginal likelihood of retrieved logical forms.",
"Second, we include a notion of coverage over the question in the search step to guide the training algorithm towards logical forms that not only evaluate to the correct denotation, but also have some connection to the words in the utterance.",
"We demonstrate the effectiveness of these two techniques on two difficult reasoning tasks: WIK-ITABLEQUESTIONS(WTQ) (Pasupat and Liang, 2015) , an open domain task with significant lexical variation, and Cornell Natural Language Visual Reasoning (NLVR) (Suhr et al., 2017 ), a closed domain task with binary denotations, and thus far less supervision.",
"We show that: 1) interleaving online search and MML over retrieved logical forms ( §4) is a more effective training algorithm than each of those objectives alone; 2) coverage guidance during search ( §3) is helpful for dealing with weak supervision, more so in the case of NLVR where the supervision is weaker; 3) a combination of the two techniques yields 44.3% test accuracy on WTQ, outperforming the previous best single model in a comparable setting, and 82.9% test accuracy on NLVR, outperforming the best prior model, which also relies on greater supervision.",
"Background Weakly supervised semantic parsing We formally define semantic parsing in a weakly supervised setup as follows.",
"Given a dataset where the i th instance is the triple {x i , w i , d i }, representing a sentence x i , the world w i associated with the sentence, and the corresponding denotation d i , our goal is to find y i , the translation of x i in an appropriate logical form language (see §5.3), such that y i w i = d i ; i.e., the execution of y i in world w i produces the correct denotation d i .",
"A semantic parser defines a distribution over logical forms given an input utterance: p(Y |x i ; θ).",
"Training algorithms In this section we describe prior techniques for training semantic parsers with weak supervision: maximizing marginal likelihood, and rewardbased methods.",
"Maximum marginal likelihood Most work on training semantic parsers from denotations maximizes the likelihood of the denotation given the utterance.",
"The semantic parsing model itself defines a distribution over logical forms, however, not denotations, so this maximization must be recast as a marginalization over logical forms that evaluate to the correct denotation: max θ x i ,d i ∈D y i ∈Y | y i w i =d i p(y i |x i ; θ) (1) This objective function is called maximum marginal likelihood (MML).",
"The inner summation is in general intractable to perform during training, so it is only approximated.",
"Most prior work (Berant et al., 2013; Goldman et al., 2018 , inter alia) approximate the intractable marginalization by summing over logical forms obtained via beam search during training.",
"This typically results in frequent search failures early during training when model parameters are close to random, and in general may only yield spurious logical forms in the absence of any guidance.",
"Since modern semantic parsers typically operate without a lexicon, new techniques are essential to provide guidance to the search procedure (Goldman et al., 2018) .",
"One way of providing this guidance during search is to perform some kind of heuristic search up front to find a set of logical forms that evaluate to the correct denotation, and use those logical forms to approximate the inner summation (Liang et al., 2011; Krishnamurthy et al., 2017) .",
"The particulars of the heuristic search can have a large impact on performance; a smaller candidate set has lower noise, while a larger set makes it more likely that the correct logical form is in it, and one needs to strike the right balance.",
"In this paper, we refer to the MML that does search during training as dynamic MML, and the one that does an offline search as static MML.",
"The main benefit of dynamic MML is that it adapts its training signal over time.",
"As the model learns, it can increasingly focus its probability mass on a small set of very likely logical forms.",
"The main benefit of static MML is that there is no need to search during training, so there is a consistent training signal even at the start of training, and it is typically more computationally efficient than dynamic MML.",
"Reward-based methods When training weakly supervised semantic parsers, it is often desirable to inject some prior knowledge into the training procedure by defining arbitrary reward or cost functions.",
"There exists prior work that use such methods, both in a reinforcement learning setting (Liang et al., , 2018 , and otherwise (Iyyer et al., 2017; Guu et al., 2017) .",
"In our work, we define a customized cost function that includes a coverage term, and use a Minimum Bayes Risk (MBR) (Goodman, 1996; Goel and Byrne, 2000; Smith and Eisner, 2006) training scheme, which we describe in §3.",
"Coverage-guided search Weakly-supervised training of semantic parsers relies heavily on lexical cues to guide the initial stages of learning to good logical forms.",
"Traditionally, these lexical cues were provided in the parser's lexicon.",
"Neural semantic parsers remove the lexicon, however, and so need another mechanism for obtaining these lexical cues.",
"In this section we introduce the use of coverage to inject lexicon-like information into neural semantic parsers.",
"Coverage is a measure of relevance of the candidate logical form y i to the input x i , in terms of how well the productions in y i map to parts of x i .",
"We use a small manually specified lexicon as a mapping from source language to the target language productions, and define coverage of y i as the number of productions triggered by the input utterance, according to the lexicon, that are included in y i .",
"We use this measure of coverage to augment our loss function, and train using an MBR based algorithm as follows.",
"We use beam search to train a model to minimize the expected value of a cost function C: min θ N i=1 Ep (y i |x i ;θ) C(x i , y i , w i , d i ) (2) wherep is a re-normalization 1 of the probabilities assigned to all logical forms on the beam.",
"We define the cost function C as: C(x i , y i , w i , d i ) = λS(y i , x i )+(1−λ)T (y i , w i , d i ) (3) where the function S measures the number of items that y i is missing from the actions (or grammar production rules) triggered by the input utterance x i given the lexicon; and the function T measures the consistency of the evaluation of y i in w i , meaning that it is 0 if y i w i = d i , or a value e otherwise.",
"We set e as the maximum possible value of the coverage cost for the corresponding instance, to make the two costs comparable in magnitude.",
"λ is a hyperparameter that gives the relative weight of the coverage cost.",
"Iterative search In this section we describe the iterative technique for refining the set of candidate logical forms associated with each training instance.",
"As discussed in §2.2, most prior work on weakly-supervised training of semantic parsers uses dynamic MML.",
"This is particularly problematic in domains like NLVR, where the supervision signal is binary-it is very hard for dynamic MML to bootstrap its way to finding good logical forms.",
"To solve this problem, we interleave static MML, which has a consistent supervision signal from the start of training, with the coverageaugmented MBR algorithm described in §3.",
"In order to use static MML, we need an initial set of candidate logical forms.",
"We obtain this candidate set using a bounded-length exhaustive search, filtered using heuristics based on the same lexical mapping used for coverage in §3.",
"A bounded-length search will not find logical forms for the entire training data, so we can only use a subset of the data for initial training.",
"We train a model to convergence using static MML on these logical forms, then use that model to initialize coverage-augmented MBR training.",
"This gives the model a good starting place for the dynamic learning algorithm, and the search at training time can look for logical forms that are longer than could be found with the bounded-length exhaustive search.",
"We train MBR to convergence, then use beam search on the MBR model to find a new set of candidate logical forms for static MML on the training data.",
"This set of logical forms can have a greater length than those in the initial set, because this search uses model scores to not exhaustively explore all possible paths, and thus will likely cover more of the training data.",
"In this way, we can iteratively improve the candidate logical forms used for static training, which in turn improves the starting place for the online search algorithm.",
"Algorithm 1 concretely describes this process.",
"Decode in the algorithm refers to running a beam search decoder that returns a set of consistent logical forms (i.e.",
"T = 0) for each of the input utterances.",
"We start off with a seed dataset D 0 for which consistent logical forms are available.",
"Datasets We will now describe the two datasets we use in this work to evaluate our methods -Cornell NLVR and WIKITABLEQUESTIONS.",
"Input : Dataset D = {X, W, D}; and seed set D 0 = {X 0 , Y 0 } such that X 0 ⊂ X and C(x 0 i , y 0 i , W i , D i ) = 0 Output: Model parameters θ MBR Initialize dataset D MML = D 0 ; while Acc(D dev ) is increasing do θ MML = MML(D MML ); Initialize θ MBR = θ MML ; Update θ MBR = MBR(D; θ MBR ); Update D MML = Decode(D; θ MBR ); end Algorithm 1: Iterative coverage-guided search Cornell NLVR Cornell NLVR is a language-grounding dataset containing natural language sentences provided along with synthetically generated visual contexts, and a label for each sentence-image pair indicating whether the sentence is true or false in the given context.",
"Figure 1 shows two example sentenceimage pairs from the dataset (with the same sentence).",
"The dataset also comes with structured representations of images, indicating the color, shape, size, and x-and y-coordinates of each of the objects in the image.",
"While we show images in Figure 1 for ease of exposition, we use the structured representations in this work.",
"Following the notation introduced in §2.1, x i in this example is There is a box with only one item that is blue.",
"The structured representations associated with the two images shown are two of the worlds (w 1 i and w 2 i ), in which x i could be evaluated.",
"The corresponding labels are the denotations d 1 i and d 2 i that a translation y i of the sentence x i is expected to produce, when executed in the two worlds respectively.",
"That the same sentence occurs with multiple worlds is an important property of this dataset, and we make use of it by defining the function T to be 0 only if ∀ w j i ,d j i y i w j i = d j i .",
"WIKITABLEQUESTIONS WIKITABLEQUESTIONS is a question-answering dataset where the task requires answering complex questions in the context of Wikipedia tables.",
"An example can be seen in Figure 2 .",
"Unlike NLVR, the answers are not binary.",
"They can instead be cells in the table or the result of numerical or settheoretic operations performed on them.",
"Logical form languages For NLVR, we define a typed variable-free functional query language, inspired by the GeoQuery language (Zelle and Mooney, 1996) .",
"Our language contains six basic types: box (referring to one of the three gray areas in Figure 1) , object (referring to the circles, triangles and squares in Figure 1) , shape, color, number and boolean.",
"The constants in our language are color and shape names, the set of all boxes in an image, and the set of all objects in an image.",
"The functions in our language include those for filtering objects and boxes, and making assertions, a higher order function for handling negations, and a function for querying objects in boxes.",
"This type specification of constants and functions gives us a grammar with 115 productions, of which 101 are terminal productions (see Appendix A.1 for the complete set of rules in our grammar).",
"Figure 1 shows an example of a complete logical form in our language.",
"For WTQ, we use the functional query language used by (Liang et al., 2018) as the logical form language.",
"Figure 2 shows an example logical form.",
"Lexicons for coverage The lexicon we use for the coverage measure described in §3 contains under 40 rules for each logical form language.",
"They mainly map words and phrases to constants and unary functions in the target language.",
"The complete lexicons are shown in the Appendix.",
"Figures 1 and 2 also show the actions triggered by the corresponding lexicons for the utterances shown.",
"We find that small but precise lexicons are sufficient to guide the search process away from spurious logical forms.",
"Moreover, as shown empirically in §6.4, the model for NLVR does not learn much without this simple but crucial guidance.",
"Experiments We evaluate both our contributions on NLVR and WIKITABLEQUESTIONS.",
"Model In this work, we use a grammar-constrained encoder-decoder neural semantic parser for our experiments.",
"Of the many variants of this basic architecture (see §7), all of which are essentially seq2seq models with constrained outputs and/or re-parameterizations, we choose to use the parser of Krishnamurthy et al.",
"(2017) , as it is particularly well-suited to the WIKITABLEQUESTIONS dataset, which we evaluate on.",
"The encoder in the model is a bi-directional recurrent neural network with Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) cells, and the decoder is a grammarconstrained decoder also with LSTM cells.",
"Instead of directly outputting tokens in the logical form, the decoder outputs production rules from a CFG-like grammar.",
"These production rules sequentially build up an abstract syntax tree, which determines the logical form.",
"The model also has an entity linking component for producing table entities in the logical forms; this com-ponent is only applicable to WIKITABLEQUES-TIONS, and we remove it when running experiments on NLVR.",
"The particulars of the model are not the focus of this work, so we refer the reader to the original paper for more details.",
"In addition, we slightly modify the constrained decoding architecture from (Krishnamurthy et al., 2017) to bias the predicted actions towards those that would decrease the value of S(y i , x i ).",
"This is done using a coverage vector, v S i for each training instance that keeps track of the production rules triggered by x i , and gets updated whenever one of those desired productions is produced by the decoder.",
"That is, v S i is a vector of 1s and 0s, with 1s indicating the triggered productions that are yet to be produced by the decoder.",
"This is similar to the idea of checklists used by Kiddon et al.",
"(2016) .",
"The decoder in the original architecture scores output actions at each time step by computing a dot product of the predicted action representation with the embeddings of each of the actions.",
"We add a weighted sum of all the actions that are yet to produced: s a i = e a .",
"(p i + γ * v S i .E) (4) where s a i is the score of action a at time step i, e a is the embedding of that action, p i is the predicted action representation, E is the set of embeddings of all the actions, and γ is a learned parameter for regularizing the bias towards yet-to-be produced triggered actions.",
"Experimental setup NLVR We use the standard train-dev-test split for NLVR, containing 12409, 988 and 989 sentence-image pairs respectively.",
"NLVR contains most of the sentences occurring in multiple worlds (with an average of 3.9 worlds per sentence).",
"We set the word embedding and action embedding sizes to 50, and the hidden layer size of both the encoder and the decoder to 30.",
"We initialized all the parameters, including the word and action embeddings using Glorot uniform initialization (Glorot and Bengio, 2010) .",
"We found that using pretrained word representations did not help.",
"We added a dropout (Srivastava et al., 2014) of 0.2 on the outputs of the encoder and the decoder and before predicting the next action, set the beam size to 10 both during training and at test time, and trained the model using ADAM (Kingma and Ba, 2014) with a learning rate of 0.001.",
"All the hyperparameters are tuned on the validation set.",
"WIKITABLEQUESTIONS This dataset comes with five different cross-validation folds of training data, each containing a different 80/20 split for training and development.",
"We first show results aggregated from all five folds in §6.3, and then show results from controlled experiments on fold 1.",
"We replicated the model presented in Krishnamurthy et al.",
"(2017) , and only changed the training algorithm and the language used.",
"We used a beam size of 20 for MBR during training and decoding, and 10 for MML during decoding, and trained the model using Stochastic Gradient Descent (Kiefer et al., 1952) with a learning rate of 0.1, all of which are tuned on the validation sets.",
"Specifics of iterative search For our iterative search algorithm, we obtain an initial set of candidate logical forms in both domains by exhaustively searching to a depth of 10 2 .",
"During search we retrieve the logical forms that lead to the correct denotations in all the corresponding worlds, and sort them based on their coverage cost using the coverage lexicon described in §5.4, and choose the top-k 3 .",
"At each iteration of the search step in our iterative training algorithm, we increase the maximum depth of our search with a step-size of 2, finding more complex logical forms and covering a larger proportion of the training data.",
"While exhaustive search is prohibitively expensive beyond a fixed number of steps, our training process that uses beam search based approximation can go deeper.",
"Implementation We implemented our model and training algorithms within the AllenNLP (Gardner et al., 2018) toolkit.",
"The code and models are publicly available at https://github.com/allenai/ iterative-search-semparse.",
"Main results WIKITABLEQUESTIONS Table 1 compares the performance of a single model trained using Iterative Search, with that of previously published single models.",
"We excluded ensemble models since there are differences in the way ensembles are built for this task in previous work, either in terms of size or how the individual models were chosen.",
"We show both best and aver- Approach Dev Test Pasupat and Liang (2015) 37.0 37.1 Neelakantan et al.",
"(2017) 34.",
"(Liang et al., 2018) , all trained on the official split 1 of WIKITABLEQUESTIONS and tested on the official test set.",
"age (over 5 folds) single model performance from Liang et al.",
"(2018) (Memory Augmented Policy Optimization).",
"The best model was chosen based on performance on the development set.",
"Our single model performances are computed in the same way.",
"Note that Liang et al.",
"(2018) also use a lexicon similar to ours to prune the seed set of logical forms used to initialize their memory buffer.",
"In Table 2 , we compare the performance of our iterative search algorithm with three baselines: 1) Static MML, as described in §2.2.1 trained on the candidate set of logical forms obtained through the heuristic search technique described in §6.2; 2) Iterative MML, also an iterative technique but unlike iterative search, we skip MBR and iteratively train static MML models while increasing the number of decoding steps; and 3) MAPO (Liang et al., 2018) , the current best published system on WTQ.",
"All four algorithms are trained and evaluated on the first fold, use the same language, and the bottom three use the same model and the same set of logical forms used to train static MML.",
"Table 3 , we show a comparison of the performance of our iterative coverage-guided search algorithm with the previously published approaches for NLVR.",
"The first two rows correspond to models that are not semantic parsers.",
"This shows that semantic parsing is a promising direction for this task.",
"The closest work to ours is the weakly supervised parser built by (Goldman et al., 2018) .",
"They build a lexicon similar to ours for mapping surface forms in input sentences to abstract clusters.",
"But in addition to defining a lexicon, they also manually annotate complete sentences in this abstract space, and use those annotations to perform data augmentation for training a supervised parser, which is then used to initialize a weakly supervised parser.",
"They also explicitly use the abstractions to augment the beam during decoding using caching, and a separately-trained discriminative re-ranker to re-order the logical forms on the beam.",
"As a discriminative re-ranker is orthogonal to our contributions, we show their results with and without it, with \"Abs.",
"Sup.\"",
"being more comparable to our work.",
"Our model, which uses no data augmentation, no caching during decoding, and no discriminative re-ranker, outperforms their variant without reranking on the public test set, and outperforms their best model on the hidden test set, achieving a new state-of-theart result on this dataset.",
"NLVR In Effect of coverage-guided search To evaluate the contribution of coverage-guided search, we compare the the performance of the NLVR parser in two different settings: with and without coverage guidance in the cost function.",
"We also compare the performance of the parser in the two settings, when initialized with parameters from an MML model trained to maximize the likelihood of the set of logical forms obtained from exhaustive search.",
"Table 4 shows the results of this comparison.",
"We measure accuracy and consistency of all four models on the publicly available test set, using the official evaluation script.",
"Consistency here refers to the percentage of logical forms that produce the correct denotation in all the corresponding worlds, and is hence a stricter metric than accuracy.",
"The cost weight (λ in Equation 3) was tuned based on validation set performance for the runs with coverage, and we found that λ = 0.4 worked best.",
"It can be seen that both with and without ini-tialization, coverage guidance helps by a big margin, with the gap being even more prominent in the case where there is no initialization.",
"When there is neither coverage guidance nor a good initialization, the model does not learn much from unguided search and get a test accuracy not much higher than the majority baseline of 56.2%.",
"We found that coverage guidance was not as useful for WTQ.",
"The average value of the best performing λ was around 0.2, and higher values neither helped nor hurt performance.",
"Effect of iterative search To evaluate the effect of iterative search, we present the accuracy numbers from the search (S) and maximization (M) steps from different iterations in Tables 5 and 6 , showing results on NLVR and WTQ, respectively.",
"Additionally, we also show number of decoding steps used at each iterations, and the percentage of sentences in the training data for which we were able to obtain consistent logical forms from the S step, the set that was used in the M step of the same iteration.",
"It can be seen in both tables that a better MML model gives a better initialization for MBR, and a better MBR model results in a larger set of utterances for which we can retrieve consistent logical forms, thus improving the subsequent MML model.",
"The improvement for NLVR is more pronounced (a gain of 21% absolute) than for WTQ (a gain of 3% absolute), likely because the initial exhaustive search provides a much higher percentage of spurious logical forms for NLVR, and thus the starting place is relatively worse.",
"Complexity of Logical Forms We analyzed the logical forms produced by our iterative search algorithm at different iterations to see how they differ.",
"As expected, for NLVR, allowing greater depths lets the parser explore more complex logical forms.",
"Table 7 shows examples from the validation set that indicate this trend.",
"Related Work Most of the early methods used for training semantic parsers required the training data to come with annotated logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005) .",
"The primary limitation of such methods is that manually producing these logical forms is expensive, making it hard to scale these methods across domains.",
"Dev.",
"Test-P Test-H Approach Acc.",
"Cons.",
"Acc.",
"Cons.",
"Acc.",
"Cons.",
"MaxEnt (Suhr et al., 2017) 68.0 -67.7 -67.8 -BiATT-Pointer (Tan and Bansal, 2018) 74.6 -73.9 -71.8 -Abs.",
"Sup.",
"(Goldman et al., 2018) 84.3 66.3 81.7 60.1 --Abs.",
"Sup.",
"+ ReRank (Goldman et al., 2018) More recent research has focused on training semantic parsers with weak supervision (Liang et al., 2011; Berant et al., 2013) , or trying to automatically infer logical forms from denotations (Pasupat and .",
"However, matching the performance of a fully supervised semantic parser with only weak supervision remains a significant challenge (Yih et al., 2016) .",
"The main contributions of this work deal with training semantic parsers with weak supervision, and we gave a detailed discussion of related training methods in §2.2.",
"We evaluate our contributions on the NLVR and WIKITABLEQUESTIONS datasets.",
"Other work that evaluates on on these datasets include Goldman et al.",
"(2018) , Tan and Bansal (2018) , Neelakantan et al.",
"(2017) , Krishnamurthy et al.",
"(2017) , Haug et al.",
"(2018) , and (Liang et al., 2018) .",
"These prior works generally present modeling contributions that are orthogonal (and in some cases complementary) to the contributions of this paper.",
"There has also been a lot of recent work on neural semantic parsing, most of which is also orthogonal to (and could probably benefit from) our contributions (Dong and Lapata, 2016; Jia and Liang, 2016; Yin and Neubig, 2017; Krishnamurthy et al., 2017; Rabinovich et al., 2017) .",
"Recent attempts at dealing with the problem of spuriousness include Misra et al.",
"(2018) and Guu et al.",
"(2017) .",
"Coverage has recently been used in machine translation (Tu et al., 2016) and summarization (See et al., 2017) .",
"There have also been many methods that use coverage-like mechanisms to give lexical cues to semantic parsers.",
"Goldman et al.",
"(2018) 's abstract examples is the most recent and related work, but the idea is also related to lexicons in pre-neural semantic parsers (Kwiatkowski et al., 2011) .",
"There is a tower with four blocks (box exists (member count equals all boxes 4)) 1 Atleast one black triangle is not touching the edge (object exists (black (triangle ((negate filter touch wall) all objects)))) 2 There is a yellow block as the top of a tower with exactly three blocks.",
"(object exists (yellow (top (object in box (member count equals all boxes 3))))) 3 The tower with three blocks has a yellow block over a black block (object count greater equals (yellow (above (black (object in box (member count equals all boxes 3))))) 1) Table 7 : Complexity of logical forms produced at different iterations, from iteration 0 to iteration 3; each logical form could not be produced at the previous iterations Conclusion We have presented a new technique for training semantic parsers with weak supervision.",
"Our key insights are that lexical cues are crucial for guiding search during the early stages of training, and that the particulars of the approximate marginalization in maximum marginal likelihood have a large impact on performance.",
"To address the first issue, we used a simple coverage mechanism for including lexicon-like information in neural semantic parsers that do not have lexicons.",
"For the second issue, we developed an iterative procedure that alternates between statically-computed and dynamically-computed training signals.",
"Together these two contributions greatly improve semantic parsing performance, leading to new state-ofthe-art results on NLVR and WIKITABLEQUES-TIONS.",
"As these contributions are to the learning algorithm, they are broadly applicable to many models trained with weak supervision.",
"One potential future work direction is investigating whether they extend to other structured prediction problems beyond semantic parsing."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.2.1",
"2.2.2",
"3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"6.5",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Weakly supervised semantic parsing",
"Training algorithms",
"Maximum marginal likelihood",
"Reward-based methods",
"Coverage-guided search",
"Iterative search",
"Datasets",
"Cornell NLVR",
"WIKITABLEQUESTIONS",
"Logical form languages",
"Lexicons for coverage",
"Experiments",
"Model",
"Experimental setup",
"Main results",
"Effect of coverage-guided search",
"Effect of iterative search",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-21#paper-1019#slide-10 | Results of using coverage guided training on NLVR | Model does not learn without coverage! Coverage helps even with strong initialization
when trained from scratch when model initialized from an MML model trained on a seed set of offline searched paths
* using structured representations | Model does not learn without coverage! Coverage helps even with strong initialization
when trained from scratch when model initialized from an MML model trained on a seed set of offline searched paths
* using structured representations | [] |
GEM-SciDuet-train-21#paper-1019#slide-11 | 1019 | Iterative Search for Weakly Supervised Semantic Parsing | Training semantic parsers from question-answer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates between searching for consistent logical forms and maximizing the marginal likelihood of the retrieved ones. This training scheme lets us iteratively train models that provide guidance to subsequent ones to search for logical forms of increasing complexity, thus dealing with the problem of spuriousness. We evaluate these techniques on two hard datasets: WIKITABLEQUESTIONS (WTQ) and Cornell Natural Language Visual Reasoning (NLVR), and show that our training algorithm outperforms the previous best systems, on WTQ in a comparable setting, and on NLVR with significantly less supervision. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202
],
"paper_content_text": [
"Introduction Semantic parsing is the task of translating natural language utterances into machine-executable meaning representations, often called programs or logical forms.",
"These logical forms can be executed against some representation of the context in which the utterance occurs, to produce a denotation.",
"This setup allows for complex reasoning over contextual knowledge, and it has been successfully used in several natural language understanding problems such as question answering (Berant et al., 2013) , program synthesis (Yin and Neubig, 2017) and building natural language interfaces (Suhr et al., 2018) .",
"Recent work has focused on training semantic parses via weak supervision from denotations alone (Liang et al., 2011; Berant et al., 2013) .",
"This is because obtaining logical form annotations is generally expensive (although recent work has addressed this issue to some extent (Yih et al., 2016) ), and not assuming full supervision lets us be agnostic about the logical form language.",
"The second reason is more important in open-domain semantic parsing tasks where it may not be possible to arrive at a complete set of operators required by the task.",
"However, training semantic parsers with weak supervision requires not only searching over an exponentially large space of logical forms (Berant et al., 2013; Artzi and Zettlemoyer, 2013; Pasupat and Liang, 2015; Guu et al., 2017, inter alia) but also dealing with spurious logical forms that evaluate to the correct denotation while not being semantically equivalent to the utterance.",
"For example, if the denotations are binary, 50% of all syntactically valid logical forms evaluate to the correct answer, regardless of their semantics.",
"This problem renders the training signal extremely noisy, making it hard for the model to learn anything without some additional guidance during search.",
"We introduce two innovations to improve learning from denotations.",
"Firstly, we propose an iterative search procedure for gradually increasing the complexity of candidate logical forms for each training instance, leading to better training data and better parsing accuracy.",
"This procedure is implemented via training our model with two interleaving objectives, one that involves searching for logical forms of limited complexity during training (online search), and another that maximizes the marginal likelihood of retrieved logical forms.",
"Second, we include a notion of coverage over the question in the search step to guide the training algorithm towards logical forms that not only evaluate to the correct denotation, but also have some connection to the words in the utterance.",
"We demonstrate the effectiveness of these two techniques on two difficult reasoning tasks: WIK-ITABLEQUESTIONS(WTQ) (Pasupat and Liang, 2015) , an open domain task with significant lexical variation, and Cornell Natural Language Visual Reasoning (NLVR) (Suhr et al., 2017 ), a closed domain task with binary denotations, and thus far less supervision.",
"We show that: 1) interleaving online search and MML over retrieved logical forms ( §4) is a more effective training algorithm than each of those objectives alone; 2) coverage guidance during search ( §3) is helpful for dealing with weak supervision, more so in the case of NLVR where the supervision is weaker; 3) a combination of the two techniques yields 44.3% test accuracy on WTQ, outperforming the previous best single model in a comparable setting, and 82.9% test accuracy on NLVR, outperforming the best prior model, which also relies on greater supervision.",
"Background Weakly supervised semantic parsing We formally define semantic parsing in a weakly supervised setup as follows.",
"Given a dataset where the i th instance is the triple {x i , w i , d i }, representing a sentence x i , the world w i associated with the sentence, and the corresponding denotation d i , our goal is to find y i , the translation of x i in an appropriate logical form language (see §5.3), such that y i w i = d i ; i.e., the execution of y i in world w i produces the correct denotation d i .",
"A semantic parser defines a distribution over logical forms given an input utterance: p(Y |x i ; θ).",
"Training algorithms In this section we describe prior techniques for training semantic parsers with weak supervision: maximizing marginal likelihood, and rewardbased methods.",
"Maximum marginal likelihood Most work on training semantic parsers from denotations maximizes the likelihood of the denotation given the utterance.",
"The semantic parsing model itself defines a distribution over logical forms, however, not denotations, so this maximization must be recast as a marginalization over logical forms that evaluate to the correct denotation: max θ x i ,d i ∈D y i ∈Y | y i w i =d i p(y i |x i ; θ) (1) This objective function is called maximum marginal likelihood (MML).",
"The inner summation is in general intractable to perform during training, so it is only approximated.",
"Most prior work (Berant et al., 2013; Goldman et al., 2018 , inter alia) approximate the intractable marginalization by summing over logical forms obtained via beam search during training.",
"This typically results in frequent search failures early during training when model parameters are close to random, and in general may only yield spurious logical forms in the absence of any guidance.",
"Since modern semantic parsers typically operate without a lexicon, new techniques are essential to provide guidance to the search procedure (Goldman et al., 2018) .",
"One way of providing this guidance during search is to perform some kind of heuristic search up front to find a set of logical forms that evaluate to the correct denotation, and use those logical forms to approximate the inner summation (Liang et al., 2011; Krishnamurthy et al., 2017) .",
"The particulars of the heuristic search can have a large impact on performance; a smaller candidate set has lower noise, while a larger set makes it more likely that the correct logical form is in it, and one needs to strike the right balance.",
"In this paper, we refer to the MML that does search during training as dynamic MML, and the one that does an offline search as static MML.",
"The main benefit of dynamic MML is that it adapts its training signal over time.",
"As the model learns, it can increasingly focus its probability mass on a small set of very likely logical forms.",
"The main benefit of static MML is that there is no need to search during training, so there is a consistent training signal even at the start of training, and it is typically more computationally efficient than dynamic MML.",
"Reward-based methods When training weakly supervised semantic parsers, it is often desirable to inject some prior knowledge into the training procedure by defining arbitrary reward or cost functions.",
"There exists prior work that use such methods, both in a reinforcement learning setting (Liang et al., , 2018 , and otherwise (Iyyer et al., 2017; Guu et al., 2017) .",
"In our work, we define a customized cost function that includes a coverage term, and use a Minimum Bayes Risk (MBR) (Goodman, 1996; Goel and Byrne, 2000; Smith and Eisner, 2006) training scheme, which we describe in §3.",
"Coverage-guided search Weakly-supervised training of semantic parsers relies heavily on lexical cues to guide the initial stages of learning to good logical forms.",
"Traditionally, these lexical cues were provided in the parser's lexicon.",
"Neural semantic parsers remove the lexicon, however, and so need another mechanism for obtaining these lexical cues.",
"In this section we introduce the use of coverage to inject lexicon-like information into neural semantic parsers.",
"Coverage is a measure of relevance of the candidate logical form y i to the input x i , in terms of how well the productions in y i map to parts of x i .",
"We use a small manually specified lexicon as a mapping from source language to the target language productions, and define coverage of y i as the number of productions triggered by the input utterance, according to the lexicon, that are included in y i .",
"We use this measure of coverage to augment our loss function, and train using an MBR based algorithm as follows.",
"We use beam search to train a model to minimize the expected value of a cost function C: min θ N i=1 Ep (y i |x i ;θ) C(x i , y i , w i , d i ) (2) wherep is a re-normalization 1 of the probabilities assigned to all logical forms on the beam.",
"We define the cost function C as: C(x i , y i , w i , d i ) = λS(y i , x i )+(1−λ)T (y i , w i , d i ) (3) where the function S measures the number of items that y i is missing from the actions (or grammar production rules) triggered by the input utterance x i given the lexicon; and the function T measures the consistency of the evaluation of y i in w i , meaning that it is 0 if y i w i = d i , or a value e otherwise.",
"We set e as the maximum possible value of the coverage cost for the corresponding instance, to make the two costs comparable in magnitude.",
"λ is a hyperparameter that gives the relative weight of the coverage cost.",
"Iterative search In this section we describe the iterative technique for refining the set of candidate logical forms associated with each training instance.",
"As discussed in §2.2, most prior work on weakly-supervised training of semantic parsers uses dynamic MML.",
"This is particularly problematic in domains like NLVR, where the supervision signal is binary-it is very hard for dynamic MML to bootstrap its way to finding good logical forms.",
"To solve this problem, we interleave static MML, which has a consistent supervision signal from the start of training, with the coverageaugmented MBR algorithm described in §3.",
"In order to use static MML, we need an initial set of candidate logical forms.",
"We obtain this candidate set using a bounded-length exhaustive search, filtered using heuristics based on the same lexical mapping used for coverage in §3.",
"A bounded-length search will not find logical forms for the entire training data, so we can only use a subset of the data for initial training.",
"We train a model to convergence using static MML on these logical forms, then use that model to initialize coverage-augmented MBR training.",
"This gives the model a good starting place for the dynamic learning algorithm, and the search at training time can look for logical forms that are longer than could be found with the bounded-length exhaustive search.",
"We train MBR to convergence, then use beam search on the MBR model to find a new set of candidate logical forms for static MML on the training data.",
"This set of logical forms can have a greater length than those in the initial set, because this search uses model scores to not exhaustively explore all possible paths, and thus will likely cover more of the training data.",
"In this way, we can iteratively improve the candidate logical forms used for static training, which in turn improves the starting place for the online search algorithm.",
"Algorithm 1 concretely describes this process.",
"Decode in the algorithm refers to running a beam search decoder that returns a set of consistent logical forms (i.e.",
"T = 0) for each of the input utterances.",
"We start off with a seed dataset D 0 for which consistent logical forms are available.",
"Datasets We will now describe the two datasets we use in this work to evaluate our methods -Cornell NLVR and WIKITABLEQUESTIONS.",
"Input : Dataset D = {X, W, D}; and seed set D 0 = {X 0 , Y 0 } such that X 0 ⊂ X and C(x 0 i , y 0 i , W i , D i ) = 0 Output: Model parameters θ MBR Initialize dataset D MML = D 0 ; while Acc(D dev ) is increasing do θ MML = MML(D MML ); Initialize θ MBR = θ MML ; Update θ MBR = MBR(D; θ MBR ); Update D MML = Decode(D; θ MBR ); end Algorithm 1: Iterative coverage-guided search Cornell NLVR Cornell NLVR is a language-grounding dataset containing natural language sentences provided along with synthetically generated visual contexts, and a label for each sentence-image pair indicating whether the sentence is true or false in the given context.",
"Figure 1 shows two example sentenceimage pairs from the dataset (with the same sentence).",
"The dataset also comes with structured representations of images, indicating the color, shape, size, and x-and y-coordinates of each of the objects in the image.",
"While we show images in Figure 1 for ease of exposition, we use the structured representations in this work.",
"Following the notation introduced in §2.1, x i in this example is There is a box with only one item that is blue.",
"The structured representations associated with the two images shown are two of the worlds (w 1 i and w 2 i ), in which x i could be evaluated.",
"The corresponding labels are the denotations d 1 i and d 2 i that a translation y i of the sentence x i is expected to produce, when executed in the two worlds respectively.",
"That the same sentence occurs with multiple worlds is an important property of this dataset, and we make use of it by defining the function T to be 0 only if ∀ w j i ,d j i y i w j i = d j i .",
"WIKITABLEQUESTIONS WIKITABLEQUESTIONS is a question-answering dataset where the task requires answering complex questions in the context of Wikipedia tables.",
"An example can be seen in Figure 2 .",
"Unlike NLVR, the answers are not binary.",
"They can instead be cells in the table or the result of numerical or settheoretic operations performed on them.",
"Logical form languages For NLVR, we define a typed variable-free functional query language, inspired by the GeoQuery language (Zelle and Mooney, 1996) .",
"Our language contains six basic types: box (referring to one of the three gray areas in Figure 1) , object (referring to the circles, triangles and squares in Figure 1) , shape, color, number and boolean.",
"The constants in our language are color and shape names, the set of all boxes in an image, and the set of all objects in an image.",
"The functions in our language include those for filtering objects and boxes, and making assertions, a higher order function for handling negations, and a function for querying objects in boxes.",
"This type specification of constants and functions gives us a grammar with 115 productions, of which 101 are terminal productions (see Appendix A.1 for the complete set of rules in our grammar).",
"Figure 1 shows an example of a complete logical form in our language.",
"For WTQ, we use the functional query language used by (Liang et al., 2018) as the logical form language.",
"Figure 2 shows an example logical form.",
"Lexicons for coverage The lexicon we use for the coverage measure described in §3 contains under 40 rules for each logical form language.",
"They mainly map words and phrases to constants and unary functions in the target language.",
"The complete lexicons are shown in the Appendix.",
"Figures 1 and 2 also show the actions triggered by the corresponding lexicons for the utterances shown.",
"We find that small but precise lexicons are sufficient to guide the search process away from spurious logical forms.",
"Moreover, as shown empirically in §6.4, the model for NLVR does not learn much without this simple but crucial guidance.",
"Experiments We evaluate both our contributions on NLVR and WIKITABLEQUESTIONS.",
"Model In this work, we use a grammar-constrained encoder-decoder neural semantic parser for our experiments.",
"Of the many variants of this basic architecture (see §7), all of which are essentially seq2seq models with constrained outputs and/or re-parameterizations, we choose to use the parser of Krishnamurthy et al.",
"(2017) , as it is particularly well-suited to the WIKITABLEQUESTIONS dataset, which we evaluate on.",
"The encoder in the model is a bi-directional recurrent neural network with Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) cells, and the decoder is a grammarconstrained decoder also with LSTM cells.",
"Instead of directly outputting tokens in the logical form, the decoder outputs production rules from a CFG-like grammar.",
"These production rules sequentially build up an abstract syntax tree, which determines the logical form.",
"The model also has an entity linking component for producing table entities in the logical forms; this com-ponent is only applicable to WIKITABLEQUES-TIONS, and we remove it when running experiments on NLVR.",
"The particulars of the model are not the focus of this work, so we refer the reader to the original paper for more details.",
"In addition, we slightly modify the constrained decoding architecture from (Krishnamurthy et al., 2017) to bias the predicted actions towards those that would decrease the value of S(y i , x i ).",
"This is done using a coverage vector, v S i for each training instance that keeps track of the production rules triggered by x i , and gets updated whenever one of those desired productions is produced by the decoder.",
"That is, v S i is a vector of 1s and 0s, with 1s indicating the triggered productions that are yet to be produced by the decoder.",
"This is similar to the idea of checklists used by Kiddon et al.",
"(2016) .",
"The decoder in the original architecture scores output actions at each time step by computing a dot product of the predicted action representation with the embeddings of each of the actions.",
"We add a weighted sum of all the actions that are yet to produced: s a i = e a .",
"(p i + γ * v S i .E) (4) where s a i is the score of action a at time step i, e a is the embedding of that action, p i is the predicted action representation, E is the set of embeddings of all the actions, and γ is a learned parameter for regularizing the bias towards yet-to-be produced triggered actions.",
"Experimental setup NLVR We use the standard train-dev-test split for NLVR, containing 12409, 988 and 989 sentence-image pairs respectively.",
"NLVR contains most of the sentences occurring in multiple worlds (with an average of 3.9 worlds per sentence).",
"We set the word embedding and action embedding sizes to 50, and the hidden layer size of both the encoder and the decoder to 30.",
"We initialized all the parameters, including the word and action embeddings using Glorot uniform initialization (Glorot and Bengio, 2010) .",
"We found that using pretrained word representations did not help.",
"We added a dropout (Srivastava et al., 2014) of 0.2 on the outputs of the encoder and the decoder and before predicting the next action, set the beam size to 10 both during training and at test time, and trained the model using ADAM (Kingma and Ba, 2014) with a learning rate of 0.001.",
"All the hyperparameters are tuned on the validation set.",
"WIKITABLEQUESTIONS This dataset comes with five different cross-validation folds of training data, each containing a different 80/20 split for training and development.",
"We first show results aggregated from all five folds in §6.3, and then show results from controlled experiments on fold 1.",
"We replicated the model presented in Krishnamurthy et al.",
"(2017) , and only changed the training algorithm and the language used.",
"We used a beam size of 20 for MBR during training and decoding, and 10 for MML during decoding, and trained the model using Stochastic Gradient Descent (Kiefer et al., 1952) with a learning rate of 0.1, all of which are tuned on the validation sets.",
"Specifics of iterative search For our iterative search algorithm, we obtain an initial set of candidate logical forms in both domains by exhaustively searching to a depth of 10 2 .",
"During search we retrieve the logical forms that lead to the correct denotations in all the corresponding worlds, and sort them based on their coverage cost using the coverage lexicon described in §5.4, and choose the top-k 3 .",
"At each iteration of the search step in our iterative training algorithm, we increase the maximum depth of our search with a step-size of 2, finding more complex logical forms and covering a larger proportion of the training data.",
"While exhaustive search is prohibitively expensive beyond a fixed number of steps, our training process that uses beam search based approximation can go deeper.",
"Implementation We implemented our model and training algorithms within the AllenNLP (Gardner et al., 2018) toolkit.",
"The code and models are publicly available at https://github.com/allenai/ iterative-search-semparse.",
"Main results WIKITABLEQUESTIONS Table 1 compares the performance of a single model trained using Iterative Search, with that of previously published single models.",
"We excluded ensemble models since there are differences in the way ensembles are built for this task in previous work, either in terms of size or how the individual models were chosen.",
"We show both best and aver- Approach Dev Test Pasupat and Liang (2015) 37.0 37.1 Neelakantan et al.",
"(2017) 34.",
"(Liang et al., 2018) , all trained on the official split 1 of WIKITABLEQUESTIONS and tested on the official test set.",
"age (over 5 folds) single model performance from Liang et al.",
"(2018) (Memory Augmented Policy Optimization).",
"The best model was chosen based on performance on the development set.",
"Our single model performances are computed in the same way.",
"Note that Liang et al.",
"(2018) also use a lexicon similar to ours to prune the seed set of logical forms used to initialize their memory buffer.",
"In Table 2 , we compare the performance of our iterative search algorithm with three baselines: 1) Static MML, as described in §2.2.1 trained on the candidate set of logical forms obtained through the heuristic search technique described in §6.2; 2) Iterative MML, also an iterative technique but unlike iterative search, we skip MBR and iteratively train static MML models while increasing the number of decoding steps; and 3) MAPO (Liang et al., 2018) , the current best published system on WTQ.",
"All four algorithms are trained and evaluated on the first fold, use the same language, and the bottom three use the same model and the same set of logical forms used to train static MML.",
"Table 3 , we show a comparison of the performance of our iterative coverage-guided search algorithm with the previously published approaches for NLVR.",
"The first two rows correspond to models that are not semantic parsers.",
"This shows that semantic parsing is a promising direction for this task.",
"The closest work to ours is the weakly supervised parser built by (Goldman et al., 2018) .",
"They build a lexicon similar to ours for mapping surface forms in input sentences to abstract clusters.",
"But in addition to defining a lexicon, they also manually annotate complete sentences in this abstract space, and use those annotations to perform data augmentation for training a supervised parser, which is then used to initialize a weakly supervised parser.",
"They also explicitly use the abstractions to augment the beam during decoding using caching, and a separately-trained discriminative re-ranker to re-order the logical forms on the beam.",
"As a discriminative re-ranker is orthogonal to our contributions, we show their results with and without it, with \"Abs.",
"Sup.\"",
"being more comparable to our work.",
"Our model, which uses no data augmentation, no caching during decoding, and no discriminative re-ranker, outperforms their variant without reranking on the public test set, and outperforms their best model on the hidden test set, achieving a new state-of-theart result on this dataset.",
"NLVR In Effect of coverage-guided search To evaluate the contribution of coverage-guided search, we compare the the performance of the NLVR parser in two different settings: with and without coverage guidance in the cost function.",
"We also compare the performance of the parser in the two settings, when initialized with parameters from an MML model trained to maximize the likelihood of the set of logical forms obtained from exhaustive search.",
"Table 4 shows the results of this comparison.",
"We measure accuracy and consistency of all four models on the publicly available test set, using the official evaluation script.",
"Consistency here refers to the percentage of logical forms that produce the correct denotation in all the corresponding worlds, and is hence a stricter metric than accuracy.",
"The cost weight (λ in Equation 3) was tuned based on validation set performance for the runs with coverage, and we found that λ = 0.4 worked best.",
"It can be seen that both with and without ini-tialization, coverage guidance helps by a big margin, with the gap being even more prominent in the case where there is no initialization.",
"When there is neither coverage guidance nor a good initialization, the model does not learn much from unguided search and get a test accuracy not much higher than the majority baseline of 56.2%.",
"We found that coverage guidance was not as useful for WTQ.",
"The average value of the best performing λ was around 0.2, and higher values neither helped nor hurt performance.",
"Effect of iterative search To evaluate the effect of iterative search, we present the accuracy numbers from the search (S) and maximization (M) steps from different iterations in Tables 5 and 6 , showing results on NLVR and WTQ, respectively.",
"Additionally, we also show number of decoding steps used at each iterations, and the percentage of sentences in the training data for which we were able to obtain consistent logical forms from the S step, the set that was used in the M step of the same iteration.",
"It can be seen in both tables that a better MML model gives a better initialization for MBR, and a better MBR model results in a larger set of utterances for which we can retrieve consistent logical forms, thus improving the subsequent MML model.",
"The improvement for NLVR is more pronounced (a gain of 21% absolute) than for WTQ (a gain of 3% absolute), likely because the initial exhaustive search provides a much higher percentage of spurious logical forms for NLVR, and thus the starting place is relatively worse.",
"Complexity of Logical Forms We analyzed the logical forms produced by our iterative search algorithm at different iterations to see how they differ.",
"As expected, for NLVR, allowing greater depths lets the parser explore more complex logical forms.",
"Table 7 shows examples from the validation set that indicate this trend.",
"Related Work Most of the early methods used for training semantic parsers required the training data to come with annotated logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005) .",
"The primary limitation of such methods is that manually producing these logical forms is expensive, making it hard to scale these methods across domains.",
"Dev.",
"Test-P Test-H Approach Acc.",
"Cons.",
"Acc.",
"Cons.",
"Acc.",
"Cons.",
"MaxEnt (Suhr et al., 2017) 68.0 -67.7 -67.8 -BiATT-Pointer (Tan and Bansal, 2018) 74.6 -73.9 -71.8 -Abs.",
"Sup.",
"(Goldman et al., 2018) 84.3 66.3 81.7 60.1 --Abs.",
"Sup.",
"+ ReRank (Goldman et al., 2018) More recent research has focused on training semantic parsers with weak supervision (Liang et al., 2011; Berant et al., 2013) , or trying to automatically infer logical forms from denotations (Pasupat and .",
"However, matching the performance of a fully supervised semantic parser with only weak supervision remains a significant challenge (Yih et al., 2016) .",
"The main contributions of this work deal with training semantic parsers with weak supervision, and we gave a detailed discussion of related training methods in §2.2.",
"We evaluate our contributions on the NLVR and WIKITABLEQUESTIONS datasets.",
"Other work that evaluates on on these datasets include Goldman et al.",
"(2018) , Tan and Bansal (2018) , Neelakantan et al.",
"(2017) , Krishnamurthy et al.",
"(2017) , Haug et al.",
"(2018) , and (Liang et al., 2018) .",
"These prior works generally present modeling contributions that are orthogonal (and in some cases complementary) to the contributions of this paper.",
"There has also been a lot of recent work on neural semantic parsing, most of which is also orthogonal to (and could probably benefit from) our contributions (Dong and Lapata, 2016; Jia and Liang, 2016; Yin and Neubig, 2017; Krishnamurthy et al., 2017; Rabinovich et al., 2017) .",
"Recent attempts at dealing with the problem of spuriousness include Misra et al.",
"(2018) and Guu et al.",
"(2017) .",
"Coverage has recently been used in machine translation (Tu et al., 2016) and summarization (See et al., 2017) .",
"There have also been many methods that use coverage-like mechanisms to give lexical cues to semantic parsers.",
"Goldman et al.",
"(2018) 's abstract examples is the most recent and related work, but the idea is also related to lexicons in pre-neural semantic parsers (Kwiatkowski et al., 2011) .",
"There is a tower with four blocks (box exists (member count equals all boxes 4)) 1 Atleast one black triangle is not touching the edge (object exists (black (triangle ((negate filter touch wall) all objects)))) 2 There is a yellow block as the top of a tower with exactly three blocks.",
"(object exists (yellow (top (object in box (member count equals all boxes 3))))) 3 The tower with three blocks has a yellow block over a black block (object count greater equals (yellow (above (black (object in box (member count equals all boxes 3))))) 1) Table 7 : Complexity of logical forms produced at different iterations, from iteration 0 to iteration 3; each logical form could not be produced at the previous iterations Conclusion We have presented a new technique for training semantic parsers with weak supervision.",
"Our key insights are that lexical cues are crucial for guiding search during the early stages of training, and that the particulars of the approximate marginalization in maximum marginal likelihood have a large impact on performance.",
"To address the first issue, we used a simple coverage mechanism for including lexicon-like information in neural semantic parsers that do not have lexicons.",
"For the second issue, we developed an iterative procedure that alternates between statically-computed and dynamically-computed training signals.",
"Together these two contributions greatly improve semantic parsing performance, leading to new state-ofthe-art results on NLVR and WIKITABLEQUES-TIONS.",
"As these contributions are to the learning algorithm, they are broadly applicable to many models trained with weak supervision.",
"One potential future work direction is investigating whether they extend to other structured prediction problems beyond semantic parsing."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.2.1",
"2.2.2",
"3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"6.5",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Weakly supervised semantic parsing",
"Training algorithms",
"Maximum marginal likelihood",
"Reward-based methods",
"Coverage-guided search",
"Iterative search",
"Datasets",
"Cornell NLVR",
"WIKITABLEQUESTIONS",
"Logical form languages",
"Lexicons for coverage",
"Experiments",
"Model",
"Experimental setup",
"Main results",
"Effect of coverage-guided search",
"Effect of iterative search",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-21#paper-1019#slide-11 | Comparison with previous approaches on NLVR | MaxEnt, BiATT-Pointer are not semantic parsers
Abs. supervision + Rerank uses manually labeled abstractions of utterance-logical form pairs to get training data for a supervised system, and reranking
Our work outperforms Goldman et al., 2018 with fewer resources
* using structured representations | MaxEnt, BiATT-Pointer are not semantic parsers
Abs. supervision + Rerank uses manually labeled abstractions of utterance-logical form pairs to get training data for a supervised system, and reranking
Our work outperforms Goldman et al., 2018 with fewer resources
* using structured representations | [] |
GEM-SciDuet-train-21#paper-1019#slide-12 | 1019 | Iterative Search for Weakly Supervised Semantic Parsing | Training semantic parsers from questionanswer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates between searching for consistent logical forms and maximizing the marginal likelihood of the retrieved ones. This training scheme lets us iteratively train models that provide guidance to subsequent ones to search for logical forms of increasing complexity, thus dealing with the problem of spuriousness. We evaluate these techniques on two hard datasets: WIKITABLEQUESTIONS (WTQ) and Cornell Natural Language Visual Reasoning (NLVR), and show that our training algorithm outperforms the previous best systems, on WTQ in a comparable setting, and on NLVR with significantly less supervision. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202
],
"paper_content_text": [
"Introduction Semantic parsing is the task of translating natural language utterances into machine-executable meaning representations, often called programs or logical forms.",
"These logical forms can be executed against some representation of the context in which the utterance occurs, to produce a denotation.",
"This setup allows for complex reasoning over contextual knowledge, and it has been successfully used in several natural language understanding problems such as question answering (Berant et al., 2013) , program synthesis (Yin and Neubig, 2017) and building natural language interfaces (Suhr et al., 2018) .",
"Recent work has focused on training semantic parses via weak supervision from denotations alone (Liang et al., 2011; Berant et al., 2013) .",
"This is because obtaining logical form annotations is generally expensive (although recent work has addressed this issue to some extent (Yih et al., 2016) ), and not assuming full supervision lets us be agnostic about the logical form language.",
"The second reason is more important in open-domain semantic parsing tasks where it may not be possible to arrive at a complete set of operators required by the task.",
"However, training semantic parsers with weak supervision requires not only searching over an exponentially large space of logical forms (Berant et al., 2013; Artzi and Zettlemoyer, 2013; Pasupat and Liang, 2015; Guu et al., 2017, inter alia) but also dealing with spurious logical forms that evaluate to the correct denotation while not being semantically equivalent to the utterance.",
"For example, if the denotations are binary, 50% of all syntactically valid logical forms evaluate to the correct answer, regardless of their semantics.",
"This problem renders the training signal extremely noisy, making it hard for the model to learn anything without some additional guidance during search.",
"We introduce two innovations to improve learning from denotations.",
"Firstly, we propose an iterative search procedure for gradually increasing the complexity of candidate logical forms for each training instance, leading to better training data and better parsing accuracy.",
"This procedure is implemented via training our model with two interleaving objectives, one that involves searching for logical forms of limited complexity during training (online search), and another that maximizes the marginal likelihood of retrieved logical forms.",
"Second, we include a notion of coverage over the question in the search step to guide the training algorithm towards logical forms that not only evaluate to the correct denotation, but also have some connection to the words in the utterance.",
"We demonstrate the effectiveness of these two techniques on two difficult reasoning tasks: WIK-ITABLEQUESTIONS(WTQ) (Pasupat and Liang, 2015) , an open domain task with significant lexical variation, and Cornell Natural Language Visual Reasoning (NLVR) (Suhr et al., 2017 ), a closed domain task with binary denotations, and thus far less supervision.",
"We show that: 1) interleaving online search and MML over retrieved logical forms ( §4) is a more effective training algorithm than each of those objectives alone; 2) coverage guidance during search ( §3) is helpful for dealing with weak supervision, more so in the case of NLVR where the supervision is weaker; 3) a combination of the two techniques yields 44.3% test accuracy on WTQ, outperforming the previous best single model in a comparable setting, and 82.9% test accuracy on NLVR, outperforming the best prior model, which also relies on greater supervision.",
"Background Weakly supervised semantic parsing We formally define semantic parsing in a weakly supervised setup as follows.",
"Given a dataset where the i th instance is the triple {x i , w i , d i }, representing a sentence x i , the world w i associated with the sentence, and the corresponding denotation d i , our goal is to find y i , the translation of x i in an appropriate logical form language (see §5.3), such that y i w i = d i ; i.e., the execution of y i in world w i produces the correct denotation d i .",
"A semantic parser defines a distribution over logical forms given an input utterance: p(Y |x i ; θ).",
"Training algorithms In this section we describe prior techniques for training semantic parsers with weak supervision: maximizing marginal likelihood, and rewardbased methods.",
"Maximum marginal likelihood Most work on training semantic parsers from denotations maximizes the likelihood of the denotation given the utterance.",
"The semantic parsing model itself defines a distribution over logical forms, however, not denotations, so this maximization must be recast as a marginalization over logical forms that evaluate to the correct denotation: max θ x i ,d i ∈D y i ∈Y | y i w i =d i p(y i |x i ; θ) (1) This objective function is called maximum marginal likelihood (MML).",
"The inner summation is in general intractable to perform during training, so it is only approximated.",
"Most prior work (Berant et al., 2013; Goldman et al., 2018 , inter alia) approximate the intractable marginalization by summing over logical forms obtained via beam search during training.",
"This typically results in frequent search failures early during training when model parameters are close to random, and in general may only yield spurious logical forms in the absence of any guidance.",
"Since modern semantic parsers typically operate without a lexicon, new techniques are essential to provide guidance to the search procedure (Goldman et al., 2018) .",
"One way of providing this guidance during search is to perform some kind of heuristic search up front to find a set of logical forms that evaluate to the correct denotation, and use those logical forms to approximate the inner summation (Liang et al., 2011; Krishnamurthy et al., 2017) .",
"The particulars of the heuristic search can have a large impact on performance; a smaller candidate set has lower noise, while a larger set makes it more likely that the correct logical form is in it, and one needs to strike the right balance.",
"In this paper, we refer to the MML that does search during training as dynamic MML, and the one that does an offline search as static MML.",
"The main benefit of dynamic MML is that it adapts its training signal over time.",
"As the model learns, it can increasingly focus its probability mass on a small set of very likely logical forms.",
"The main benefit of static MML is that there is no need to search during training, so there is a consistent training signal even at the start of training, and it is typically more computationally efficient than dynamic MML.",
"Reward-based methods When training weakly supervised semantic parsers, it is often desirable to inject some prior knowledge into the training procedure by defining arbitrary reward or cost functions.",
"There exists prior work that use such methods, both in a reinforcement learning setting (Liang et al., , 2018 , and otherwise (Iyyer et al., 2017; Guu et al., 2017) .",
"In our work, we define a customized cost function that includes a coverage term, and use a Minimum Bayes Risk (MBR) (Goodman, 1996; Goel and Byrne, 2000; Smith and Eisner, 2006) training scheme, which we describe in §3.",
"Coverage-guided search Weakly-supervised training of semantic parsers relies heavily on lexical cues to guide the initial stages of learning to good logical forms.",
"Traditionally, these lexical cues were provided in the parser's lexicon.",
"Neural semantic parsers remove the lexicon, however, and so need another mechanism for obtaining these lexical cues.",
"In this section we introduce the use of coverage to inject lexicon-like information into neural semantic parsers.",
"Coverage is a measure of relevance of the candidate logical form y i to the input x i , in terms of how well the productions in y i map to parts of x i .",
"We use a small manually specified lexicon as a mapping from source language to the target language productions, and define coverage of y i as the number of productions triggered by the input utterance, according to the lexicon, that are included in y i .",
"We use this measure of coverage to augment our loss function, and train using an MBR based algorithm as follows.",
"We use beam search to train a model to minimize the expected value of a cost function C: min θ N i=1 Ep (y i |x i ;θ) C(x i , y i , w i , d i ) (2) wherep is a re-normalization 1 of the probabilities assigned to all logical forms on the beam.",
"We define the cost function C as: C(x i , y i , w i , d i ) = λS(y i , x i )+(1−λ)T (y i , w i , d i ) (3) where the function S measures the number of items that y i is missing from the actions (or grammar production rules) triggered by the input utterance x i given the lexicon; and the function T measures the consistency of the evaluation of y i in w i , meaning that it is 0 if y i w i = d i , or a value e otherwise.",
"We set e as the maximum possible value of the coverage cost for the corresponding instance, to make the two costs comparable in magnitude.",
"λ is a hyperparameter that gives the relative weight of the coverage cost.",
"Iterative search In this section we describe the iterative technique for refining the set of candidate logical forms associated with each training instance.",
"As discussed in §2.2, most prior work on weakly-supervised training of semantic parsers uses dynamic MML.",
"This is particularly problematic in domains like NLVR, where the supervision signal is binary-it is very hard for dynamic MML to bootstrap its way to finding good logical forms.",
"To solve this problem, we interleave static MML, which has a consistent supervision signal from the start of training, with the coverageaugmented MBR algorithm described in §3.",
"In order to use static MML, we need an initial set of candidate logical forms.",
"We obtain this candidate set using a bounded-length exhaustive search, filtered using heuristics based on the same lexical mapping used for coverage in §3.",
"A bounded-length search will not find logical forms for the entire training data, so we can only use a subset of the data for initial training.",
"We train a model to convergence using static MML on these logical forms, then use that model to initialize coverage-augmented MBR training.",
"This gives the model a good starting place for the dynamic learning algorithm, and the search at training time can look for logical forms that are longer than could be found with the bounded-length exhaustive search.",
"We train MBR to convergence, then use beam search on the MBR model to find a new set of candidate logical forms for static MML on the training data.",
"This set of logical forms can have a greater length than those in the initial set, because this search uses model scores to not exhaustively explore all possible paths, and thus will likely cover more of the training data.",
"In this way, we can iteratively improve the candidate logical forms used for static training, which in turn improves the starting place for the online search algorithm.",
"Algorithm 1 concretely describes this process.",
"Decode in the algorithm refers to running a beam search decoder that returns a set of consistent logical forms (i.e.",
"T = 0) for each of the input utterances.",
"We start off with a seed dataset D 0 for which consistent logical forms are available.",
"Datasets We will now describe the two datasets we use in this work to evaluate our methods -Cornell NLVR and WIKITABLEQUESTIONS.",
"Input : Dataset D = {X, W, D}; and seed set D 0 = {X 0 , Y 0 } such that X 0 ⊂ X and C(x 0 i , y 0 i , W i , D i ) = 0 Output: Model parameters θ MBR Initialize dataset D MML = D 0 ; while Acc(D dev ) is increasing do θ MML = MML(D MML ); Initialize θ MBR = θ MML ; Update θ MBR = MBR(D; θ MBR ); Update D MML = Decode(D; θ MBR ); end Algorithm 1: Iterative coverage-guided search Cornell NLVR Cornell NLVR is a language-grounding dataset containing natural language sentences provided along with synthetically generated visual contexts, and a label for each sentence-image pair indicating whether the sentence is true or false in the given context.",
"Figure 1 shows two example sentenceimage pairs from the dataset (with the same sentence).",
"The dataset also comes with structured representations of images, indicating the color, shape, size, and x-and y-coordinates of each of the objects in the image.",
"While we show images in Figure 1 for ease of exposition, we use the structured representations in this work.",
"Following the notation introduced in §2.1, x i in this example is There is a box with only one item that is blue.",
"The structured representations associated with the two images shown are two of the worlds (w 1 i and w 2 i ), in which x i could be evaluated.",
"The corresponding labels are the denotations d 1 i and d 2 i that a translation y i of the sentence x i is expected to produce, when executed in the two worlds respectively.",
"That the same sentence occurs with multiple worlds is an important property of this dataset, and we make use of it by defining the function T to be 0 only if ∀ w j i ,d j i y i w j i = d j i .",
"WIKITABLEQUESTIONS WIKITABLEQUESTIONS is a question-answering dataset where the task requires answering complex questions in the context of Wikipedia tables.",
"An example can be seen in Figure 2 .",
"Unlike NLVR, the answers are not binary.",
"They can instead be cells in the table or the result of numerical or settheoretic operations performed on them.",
"Logical form languages For NLVR, we define a typed variable-free functional query language, inspired by the GeoQuery language (Zelle and Mooney, 1996) .",
"Our language contains six basic types: box (referring to one of the three gray areas in Figure 1) , object (referring to the circles, triangles and squares in Figure 1) , shape, color, number and boolean.",
"The constants in our language are color and shape names, the set of all boxes in an image, and the set of all objects in an image.",
"The functions in our language include those for filtering objects and boxes, and making assertions, a higher order function for handling negations, and a function for querying objects in boxes.",
"This type specification of constants and functions gives us a grammar with 115 productions, of which 101 are terminal productions (see Appendix A.1 for the complete set of rules in our grammar).",
"Figure 1 shows an example of a complete logical form in our language.",
"For WTQ, we use the functional query language used by (Liang et al., 2018) as the logical form language.",
"Figure 2 shows an example logical form.",
"Lexicons for coverage The lexicon we use for the coverage measure described in §3 contains under 40 rules for each logical form language.",
"They mainly map words and phrases to constants and unary functions in the target language.",
"The complete lexicons are shown in the Appendix.",
"Figures 1 and 2 also show the actions triggered by the corresponding lexicons for the utterances shown.",
"We find that small but precise lexicons are sufficient to guide the search process away from spurious logical forms.",
"Moreover, as shown empirically in §6.4, the model for NLVR does not learn much without this simple but crucial guidance.",
"Experiments We evaluate both our contributions on NLVR and WIKITABLEQUESTIONS.",
"Model In this work, we use a grammar-constrained encoder-decoder neural semantic parser for our experiments.",
"Of the many variants of this basic architecture (see §7), all of which are essentially seq2seq models with constrained outputs and/or re-parameterizations, we choose to use the parser of Krishnamurthy et al.",
"(2017) , as it is particularly well-suited to the WIKITABLEQUESTIONS dataset, which we evaluate on.",
"The encoder in the model is a bi-directional recurrent neural network with Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) cells, and the decoder is a grammarconstrained decoder also with LSTM cells.",
"Instead of directly outputting tokens in the logical form, the decoder outputs production rules from a CFG-like grammar.",
"These production rules sequentially build up an abstract syntax tree, which determines the logical form.",
"The model also has an entity linking component for producing table entities in the logical forms; this com-ponent is only applicable to WIKITABLEQUES-TIONS, and we remove it when running experiments on NLVR.",
"The particulars of the model are not the focus of this work, so we refer the reader to the original paper for more details.",
"In addition, we slightly modify the constrained decoding architecture from (Krishnamurthy et al., 2017) to bias the predicted actions towards those that would decrease the value of S(y i , x i ).",
"This is done using a coverage vector, v S i for each training instance that keeps track of the production rules triggered by x i , and gets updated whenever one of those desired productions is produced by the decoder.",
"That is, v S i is a vector of 1s and 0s, with 1s indicating the triggered productions that are yet to be produced by the decoder.",
"This is similar to the idea of checklists used by Kiddon et al.",
"(2016) .",
"The decoder in the original architecture scores output actions at each time step by computing a dot product of the predicted action representation with the embeddings of each of the actions.",
"We add a weighted sum of all the actions that are yet to produced: s a i = e a .",
"(p i + γ * v S i .E) (4) where s a i is the score of action a at time step i, e a is the embedding of that action, p i is the predicted action representation, E is the set of embeddings of all the actions, and γ is a learned parameter for regularizing the bias towards yet-to-be produced triggered actions.",
"Experimental setup NLVR We use the standard train-dev-test split for NLVR, containing 12409, 988 and 989 sentence-image pairs respectively.",
"NLVR contains most of the sentences occurring in multiple worlds (with an average of 3.9 worlds per sentence).",
"We set the word embedding and action embedding sizes to 50, and the hidden layer size of both the encoder and the decoder to 30.",
"We initialized all the parameters, including the word and action embeddings using Glorot uniform initialization (Glorot and Bengio, 2010) .",
"We found that using pretrained word representations did not help.",
"We added a dropout (Srivastava et al., 2014) of 0.2 on the outputs of the encoder and the decoder and before predicting the next action, set the beam size to 10 both during training and at test time, and trained the model using ADAM (Kingma and Ba, 2014) with a learning rate of 0.001.",
"All the hyperparameters are tuned on the validation set.",
"WIKITABLEQUESTIONS This dataset comes with five different cross-validation folds of training data, each containing a different 80/20 split for training and development.",
"We first show results aggregated from all five folds in §6.3, and then show results from controlled experiments on fold 1.",
"We replicated the model presented in Krishnamurthy et al.",
"(2017) , and only changed the training algorithm and the language used.",
"We used a beam size of 20 for MBR during training and decoding, and 10 for MML during decoding, and trained the model using Stochastic Gradient Descent (Kiefer et al., 1952) with a learning rate of 0.1, all of which are tuned on the validation sets.",
"Specifics of iterative search For our iterative search algorithm, we obtain an initial set of candidate logical forms in both domains by exhaustively searching to a depth of 10 2 .",
"During search we retrieve the logical forms that lead to the correct denotations in all the corresponding worlds, and sort them based on their coverage cost using the coverage lexicon described in §5.4, and choose the top-k 3 .",
"At each iteration of the search step in our iterative training algorithm, we increase the maximum depth of our search with a step-size of 2, finding more complex logical forms and covering a larger proportion of the training data.",
"While exhaustive search is prohibitively expensive beyond a fixed number of steps, our training process that uses beam search based approximation can go deeper.",
"Implementation We implemented our model and training algorithms within the AllenNLP (Gardner et al., 2018) toolkit.",
"The code and models are publicly available at https://github.com/allenai/ iterative-search-semparse.",
"Main results WIKITABLEQUESTIONS Table 1 compares the performance of a single model trained using Iterative Search, with that of previously published single models.",
"We excluded ensemble models since there are differences in the way ensembles are built for this task in previous work, either in terms of size or how the individual models were chosen.",
"We show both best and aver- Approach Dev Test Pasupat and Liang (2015) 37.0 37.1 Neelakantan et al.",
"(2017) 34.",
"(Liang et al., 2018) , all trained on the official split 1 of WIKITABLEQUESTIONS and tested on the official test set.",
"age (over 5 folds) single model performance from Liang et al.",
"(2018) (Memory Augmented Policy Optimization).",
"The best model was chosen based on performance on the development set.",
"Our single model performances are computed in the same way.",
"Note that Liang et al.",
"(2018) also use a lexicon similar to ours to prune the seed set of logical forms used to initialize their memory buffer.",
"In Table 2 , we compare the performance of our iterative search algorithm with three baselines: 1) Static MML, as described in §2.2.1 trained on the candidate set of logical forms obtained through the heuristic search technique described in §6.2; 2) Iterative MML, also an iterative technique but unlike iterative search, we skip MBR and iteratively train static MML models while increasing the number of decoding steps; and 3) MAPO (Liang et al., 2018) , the current best published system on WTQ.",
"All four algorithms are trained and evaluated on the first fold, use the same language, and the bottom three use the same model and the same set of logical forms used to train static MML.",
"Table 3 , we show a comparison of the performance of our iterative coverage-guided search algorithm with the previously published approaches for NLVR.",
"The first two rows correspond to models that are not semantic parsers.",
"This shows that semantic parsing is a promising direction for this task.",
"The closest work to ours is the weakly supervised parser built by (Goldman et al., 2018) .",
"They build a lexicon similar to ours for mapping surface forms in input sentences to abstract clusters.",
"But in addition to defining a lexicon, they also manually annotate complete sentences in this abstract space, and use those annotations to perform data augmentation for training a supervised parser, which is then used to initialize a weakly supervised parser.",
"They also explicitly use the abstractions to augment the beam during decoding using caching, and a separately-trained discriminative re-ranker to re-order the logical forms on the beam.",
"As a discriminative re-ranker is orthogonal to our contributions, we show their results with and without it, with \"Abs.",
"Sup.\"",
"being more comparable to our work.",
"Our model, which uses no data augmentation, no caching during decoding, and no discriminative re-ranker, outperforms their variant without reranking on the public test set, and outperforms their best model on the hidden test set, achieving a new state-of-theart result on this dataset.",
"NLVR In Effect of coverage-guided search To evaluate the contribution of coverage-guided search, we compare the the performance of the NLVR parser in two different settings: with and without coverage guidance in the cost function.",
"We also compare the performance of the parser in the two settings, when initialized with parameters from an MML model trained to maximize the likelihood of the set of logical forms obtained from exhaustive search.",
"Table 4 shows the results of this comparison.",
"We measure accuracy and consistency of all four models on the publicly available test set, using the official evaluation script.",
"Consistency here refers to the percentage of logical forms that produce the correct denotation in all the corresponding worlds, and is hence a stricter metric than accuracy.",
"The cost weight (λ in Equation 3) was tuned based on validation set performance for the runs with coverage, and we found that λ = 0.4 worked best.",
"It can be seen that both with and without ini-tialization, coverage guidance helps by a big margin, with the gap being even more prominent in the case where there is no initialization.",
"When there is neither coverage guidance nor a good initialization, the model does not learn much from unguided search and get a test accuracy not much higher than the majority baseline of 56.2%.",
"We found that coverage guidance was not as useful for WTQ.",
"The average value of the best performing λ was around 0.2, and higher values neither helped nor hurt performance.",
"Effect of iterative search To evaluate the effect of iterative search, we present the accuracy numbers from the search (S) and maximization (M) steps from different iterations in Tables 5 and 6 , showing results on NLVR and WTQ, respectively.",
"Additionally, we also show number of decoding steps used at each iterations, and the percentage of sentences in the training data for which we were able to obtain consistent logical forms from the S step, the set that was used in the M step of the same iteration.",
"It can be seen in both tables that a better MML model gives a better initialization for MBR, and a better MBR model results in a larger set of utterances for which we can retrieve consistent logical forms, thus improving the subsequent MML model.",
"The improvement for NLVR is more pronounced (a gain of 21% absolute) than for WTQ (a gain of 3% absolute), likely because the initial exhaustive search provides a much higher percentage of spurious logical forms for NLVR, and thus the starting place is relatively worse.",
"Complexity of Logical Forms We analyzed the logical forms produced by our iterative search algorithm at different iterations to see how they differ.",
"As expected, for NLVR, allowing greater depths lets the parser explore more complex logical forms.",
"Table 7 shows examples from the validation set that indicate this trend.",
"Related Work Most of the early methods used for training semantic parsers required the training data to come with annotated logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005) .",
"The primary limitation of such methods is that manually producing these logical forms is expensive, making it hard to scale these methods across domains.",
"Dev.",
"Test-P Test-H Approach Acc.",
"Cons.",
"Acc.",
"Cons.",
"Acc.",
"Cons.",
"MaxEnt (Suhr et al., 2017) 68.0 -67.7 -67.8 -BiATT-Pointer (Tan and Bansal, 2018) 74.6 -73.9 -71.8 -Abs.",
"Sup.",
"(Goldman et al., 2018) 84.3 66.3 81.7 60.1 --Abs.",
"Sup.",
"+ ReRank (Goldman et al., 2018) More recent research has focused on training semantic parsers with weak supervision (Liang et al., 2011; Berant et al., 2013) , or trying to automatically infer logical forms from denotations (Pasupat and .",
"However, matching the performance of a fully supervised semantic parser with only weak supervision remains a significant challenge (Yih et al., 2016) .",
"The main contributions of this work deal with training semantic parsers with weak supervision, and we gave a detailed discussion of related training methods in §2.2.",
"We evaluate our contributions on the NLVR and WIKITABLEQUESTIONS datasets.",
"Other work that evaluates on on these datasets include Goldman et al.",
"(2018) , Tan and Bansal (2018) , Neelakantan et al.",
"(2017) , Krishnamurthy et al.",
"(2017) , Haug et al.",
"(2018) , and (Liang et al., 2018) .",
"These prior works generally present modeling contributions that are orthogonal (and in some cases complementary) to the contributions of this paper.",
"There has also been a lot of recent work on neural semantic parsing, most of which is also orthogonal to (and could probably benefit from) our contributions (Dong and Lapata, 2016; Jia and Liang, 2016; Yin and Neubig, 2017; Krishnamurthy et al., 2017; Rabinovich et al., 2017) .",
"Recent attempts at dealing with the problem of spuriousness include Misra et al.",
"(2018) and Guu et al.",
"(2017) .",
"Coverage has recently been used in machine translation (Tu et al., 2016) and summarization (See et al., 2017) .",
"There have also been many methods that use coverage-like mechanisms to give lexical cues to semantic parsers.",
"Goldman et al.",
"(2018) 's abstract examples is the most recent and related work, but the idea is also related to lexicons in pre-neural semantic parsers (Kwiatkowski et al., 2011) .",
"There is a tower with four blocks (box exists (member count equals all boxes 4)) 1 Atleast one black triangle is not touching the edge (object exists (black (triangle ((negate filter touch wall) all objects)))) 2 There is a yellow block as the top of a tower with exactly three blocks.",
"(object exists (yellow (top (object in box (member count equals all boxes 3))))) 3 The tower with three blocks has a yellow block over a black block (object count greater equals (yellow (above (black (object in box (member count equals all boxes 3))))) 1) Table 7 : Complexity of logical forms produced at different iterations, from iteration 0 to iteration 3; each logical form could not be produced at the previous iterations Conclusion We have presented a new technique for training semantic parsers with weak supervision.",
"Our key insights are that lexical cues are crucial for guiding search during the early stages of training, and that the particulars of the approximate marginalization in maximum marginal likelihood have a large impact on performance.",
"To address the first issue, we used a simple coverage mechanism for including lexicon-like information in neural semantic parsers that do not have lexicons.",
"For the second issue, we developed an iterative procedure that alternates between statically-computed and dynamically-computed training signals.",
"Together these two contributions greatly improve semantic parsing performance, leading to new state-ofthe-art results on NLVR and WIKITABLEQUES-TIONS.",
"As these contributions are to the learning algorithm, they are broadly applicable to many models trained with weak supervision.",
"One potential future work direction is investigating whether they extend to other structured prediction problems beyond semantic parsing."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.2.1",
"2.2.2",
"3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"6.5",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Weakly supervised semantic parsing",
"Training algorithms",
"Maximum marginal likelihood",
"Reward-based methods",
"Coverage-guided search",
"Iterative search",
"Datasets",
"Cornell NLVR",
"WIKITABLEQUESTIONS",
"Logical form languages",
"Lexicons for coverage",
"Experiments",
"Model",
"Experimental setup",
"Main results",
"Effect of coverage-guided search",
"Effect of iterative search",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-21#paper-1019#slide-12 | Comparison with previous approaches on WikiTableQuestions | Non-neural models Reinforcement Learning Non-RL Neural Models models | Non-neural models Reinforcement Learning Non-RL Neural Models models | [] |
GEM-SciDuet-train-21#paper-1019#slide-13 | 1019 | Iterative Search for Weakly Supervised Semantic Parsing | Training semantic parsers from questionanswer pairs typically involves searching over an exponentially large space of logical forms, and an unguided search can easily be misled by spurious logical forms that coincidentally evaluate to the correct answer. We propose a novel iterative training algorithm that alternates between searching for consistent logical forms and maximizing the marginal likelihood of the retrieved ones. This training scheme lets us iteratively train models that provide guidance to subsequent ones to search for logical forms of increasing complexity, thus dealing with the problem of spuriousness. We evaluate these techniques on two hard datasets: WIKITABLEQUESTIONS (WTQ) and Cornell Natural Language Visual Reasoning (NLVR), and show that our training algorithm outperforms the previous best systems, on WTQ in a comparable setting, and on NLVR with significantly less supervision. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202
],
"paper_content_text": [
"Introduction Semantic parsing is the task of translating natural language utterances into machine-executable meaning representations, often called programs or logical forms.",
"These logical forms can be executed against some representation of the context in which the utterance occurs, to produce a denotation.",
"This setup allows for complex reasoning over contextual knowledge, and it has been successfully used in several natural language understanding problems such as question answering (Berant et al., 2013) , program synthesis (Yin and Neubig, 2017) and building natural language interfaces (Suhr et al., 2018) .",
"Recent work has focused on training semantic parses via weak supervision from denotations alone (Liang et al., 2011; Berant et al., 2013) .",
"This is because obtaining logical form annotations is generally expensive (although recent work has addressed this issue to some extent (Yih et al., 2016) ), and not assuming full supervision lets us be agnostic about the logical form language.",
"The second reason is more important in open-domain semantic parsing tasks where it may not be possible to arrive at a complete set of operators required by the task.",
"However, training semantic parsers with weak supervision requires not only searching over an exponentially large space of logical forms (Berant et al., 2013; Artzi and Zettlemoyer, 2013; Pasupat and Liang, 2015; Guu et al., 2017, inter alia) but also dealing with spurious logical forms that evaluate to the correct denotation while not being semantically equivalent to the utterance.",
"For example, if the denotations are binary, 50% of all syntactically valid logical forms evaluate to the correct answer, regardless of their semantics.",
"This problem renders the training signal extremely noisy, making it hard for the model to learn anything without some additional guidance during search.",
"We introduce two innovations to improve learning from denotations.",
"Firstly, we propose an iterative search procedure for gradually increasing the complexity of candidate logical forms for each training instance, leading to better training data and better parsing accuracy.",
"This procedure is implemented via training our model with two interleaving objectives, one that involves searching for logical forms of limited complexity during training (online search), and another that maximizes the marginal likelihood of retrieved logical forms.",
"Second, we include a notion of coverage over the question in the search step to guide the training algorithm towards logical forms that not only evaluate to the correct denotation, but also have some connection to the words in the utterance.",
"We demonstrate the effectiveness of these two techniques on two difficult reasoning tasks: WIK-ITABLEQUESTIONS(WTQ) (Pasupat and Liang, 2015) , an open domain task with significant lexical variation, and Cornell Natural Language Visual Reasoning (NLVR) (Suhr et al., 2017 ), a closed domain task with binary denotations, and thus far less supervision.",
"We show that: 1) interleaving online search and MML over retrieved logical forms ( §4) is a more effective training algorithm than each of those objectives alone; 2) coverage guidance during search ( §3) is helpful for dealing with weak supervision, more so in the case of NLVR where the supervision is weaker; 3) a combination of the two techniques yields 44.3% test accuracy on WTQ, outperforming the previous best single model in a comparable setting, and 82.9% test accuracy on NLVR, outperforming the best prior model, which also relies on greater supervision.",
"Background Weakly supervised semantic parsing We formally define semantic parsing in a weakly supervised setup as follows.",
"Given a dataset where the i th instance is the triple {x i , w i , d i }, representing a sentence x i , the world w i associated with the sentence, and the corresponding denotation d i , our goal is to find y i , the translation of x i in an appropriate logical form language (see §5.3), such that y i w i = d i ; i.e., the execution of y i in world w i produces the correct denotation d i .",
"A semantic parser defines a distribution over logical forms given an input utterance: p(Y |x i ; θ).",
"Training algorithms In this section we describe prior techniques for training semantic parsers with weak supervision: maximizing marginal likelihood, and rewardbased methods.",
"Maximum marginal likelihood Most work on training semantic parsers from denotations maximizes the likelihood of the denotation given the utterance.",
"The semantic parsing model itself defines a distribution over logical forms, however, not denotations, so this maximization must be recast as a marginalization over logical forms that evaluate to the correct denotation: max θ x i ,d i ∈D y i ∈Y | y i w i =d i p(y i |x i ; θ) (1) This objective function is called maximum marginal likelihood (MML).",
"The inner summation is in general intractable to perform during training, so it is only approximated.",
"Most prior work (Berant et al., 2013; Goldman et al., 2018 , inter alia) approximate the intractable marginalization by summing over logical forms obtained via beam search during training.",
"This typically results in frequent search failures early during training when model parameters are close to random, and in general may only yield spurious logical forms in the absence of any guidance.",
"Since modern semantic parsers typically operate without a lexicon, new techniques are essential to provide guidance to the search procedure (Goldman et al., 2018) .",
"One way of providing this guidance during search is to perform some kind of heuristic search up front to find a set of logical forms that evaluate to the correct denotation, and use those logical forms to approximate the inner summation (Liang et al., 2011; Krishnamurthy et al., 2017) .",
"The particulars of the heuristic search can have a large impact on performance; a smaller candidate set has lower noise, while a larger set makes it more likely that the correct logical form is in it, and one needs to strike the right balance.",
"In this paper, we refer to the MML that does search during training as dynamic MML, and the one that does an offline search as static MML.",
"The main benefit of dynamic MML is that it adapts its training signal over time.",
"As the model learns, it can increasingly focus its probability mass on a small set of very likely logical forms.",
"The main benefit of static MML is that there is no need to search during training, so there is a consistent training signal even at the start of training, and it is typically more computationally efficient than dynamic MML.",
"Reward-based methods When training weakly supervised semantic parsers, it is often desirable to inject some prior knowledge into the training procedure by defining arbitrary reward or cost functions.",
"There exists prior work that use such methods, both in a reinforcement learning setting (Liang et al., , 2018 , and otherwise (Iyyer et al., 2017; Guu et al., 2017) .",
"In our work, we define a customized cost function that includes a coverage term, and use a Minimum Bayes Risk (MBR) (Goodman, 1996; Goel and Byrne, 2000; Smith and Eisner, 2006) training scheme, which we describe in §3.",
"Coverage-guided search Weakly-supervised training of semantic parsers relies heavily on lexical cues to guide the initial stages of learning to good logical forms.",
"Traditionally, these lexical cues were provided in the parser's lexicon.",
"Neural semantic parsers remove the lexicon, however, and so need another mechanism for obtaining these lexical cues.",
"In this section we introduce the use of coverage to inject lexicon-like information into neural semantic parsers.",
"Coverage is a measure of relevance of the candidate logical form y i to the input x i , in terms of how well the productions in y i map to parts of x i .",
"We use a small manually specified lexicon as a mapping from source language to the target language productions, and define coverage of y i as the number of productions triggered by the input utterance, according to the lexicon, that are included in y i .",
"We use this measure of coverage to augment our loss function, and train using an MBR based algorithm as follows.",
"We use beam search to train a model to minimize the expected value of a cost function C: min θ N i=1 Ep (y i |x i ;θ) C(x i , y i , w i , d i ) (2) wherep is a re-normalization 1 of the probabilities assigned to all logical forms on the beam.",
"We define the cost function C as: C(x i , y i , w i , d i ) = λS(y i , x i )+(1−λ)T (y i , w i , d i ) (3) where the function S measures the number of items that y i is missing from the actions (or grammar production rules) triggered by the input utterance x i given the lexicon; and the function T measures the consistency of the evaluation of y i in w i , meaning that it is 0 if y i w i = d i , or a value e otherwise.",
"We set e as the maximum possible value of the coverage cost for the corresponding instance, to make the two costs comparable in magnitude.",
"λ is a hyperparameter that gives the relative weight of the coverage cost.",
"Iterative search In this section we describe the iterative technique for refining the set of candidate logical forms associated with each training instance.",
"As discussed in §2.2, most prior work on weakly-supervised training of semantic parsers uses dynamic MML.",
"This is particularly problematic in domains like NLVR, where the supervision signal is binary-it is very hard for dynamic MML to bootstrap its way to finding good logical forms.",
"To solve this problem, we interleave static MML, which has a consistent supervision signal from the start of training, with the coverageaugmented MBR algorithm described in §3.",
"In order to use static MML, we need an initial set of candidate logical forms.",
"We obtain this candidate set using a bounded-length exhaustive search, filtered using heuristics based on the same lexical mapping used for coverage in §3.",
"A bounded-length search will not find logical forms for the entire training data, so we can only use a subset of the data for initial training.",
"We train a model to convergence using static MML on these logical forms, then use that model to initialize coverage-augmented MBR training.",
"This gives the model a good starting place for the dynamic learning algorithm, and the search at training time can look for logical forms that are longer than could be found with the bounded-length exhaustive search.",
"We train MBR to convergence, then use beam search on the MBR model to find a new set of candidate logical forms for static MML on the training data.",
"This set of logical forms can have a greater length than those in the initial set, because this search uses model scores to not exhaustively explore all possible paths, and thus will likely cover more of the training data.",
"In this way, we can iteratively improve the candidate logical forms used for static training, which in turn improves the starting place for the online search algorithm.",
"Algorithm 1 concretely describes this process.",
"Decode in the algorithm refers to running a beam search decoder that returns a set of consistent logical forms (i.e.",
"T = 0) for each of the input utterances.",
"We start off with a seed dataset D 0 for which consistent logical forms are available.",
"Datasets We will now describe the two datasets we use in this work to evaluate our methods -Cornell NLVR and WIKITABLEQUESTIONS.",
"Input : Dataset D = {X, W, D}; and seed set D 0 = {X 0 , Y 0 } such that X 0 ⊂ X and C(x 0 i , y 0 i , W i , D i ) = 0 Output: Model parameters θ MBR Initialize dataset D MML = D 0 ; while Acc(D dev ) is increasing do θ MML = MML(D MML ); Initialize θ MBR = θ MML ; Update θ MBR = MBR(D; θ MBR ); Update D MML = Decode(D; θ MBR ); end Algorithm 1: Iterative coverage-guided search Cornell NLVR Cornell NLVR is a language-grounding dataset containing natural language sentences provided along with synthetically generated visual contexts, and a label for each sentence-image pair indicating whether the sentence is true or false in the given context.",
"Figure 1 shows two example sentenceimage pairs from the dataset (with the same sentence).",
"The dataset also comes with structured representations of images, indicating the color, shape, size, and x-and y-coordinates of each of the objects in the image.",
"While we show images in Figure 1 for ease of exposition, we use the structured representations in this work.",
"Following the notation introduced in §2.1, x i in this example is There is a box with only one item that is blue.",
"The structured representations associated with the two images shown are two of the worlds (w 1 i and w 2 i ), in which x i could be evaluated.",
"The corresponding labels are the denotations d 1 i and d 2 i that a translation y i of the sentence x i is expected to produce, when executed in the two worlds respectively.",
"That the same sentence occurs with multiple worlds is an important property of this dataset, and we make use of it by defining the function T to be 0 only if ∀ w j i ,d j i y i w j i = d j i .",
"WIKITABLEQUESTIONS WIKITABLEQUESTIONS is a question-answering dataset where the task requires answering complex questions in the context of Wikipedia tables.",
"An example can be seen in Figure 2 .",
"Unlike NLVR, the answers are not binary.",
"They can instead be cells in the table or the result of numerical or settheoretic operations performed on them.",
"Logical form languages For NLVR, we define a typed variable-free functional query language, inspired by the GeoQuery language (Zelle and Mooney, 1996) .",
"Our language contains six basic types: box (referring to one of the three gray areas in Figure 1) , object (referring to the circles, triangles and squares in Figure 1) , shape, color, number and boolean.",
"The constants in our language are color and shape names, the set of all boxes in an image, and the set of all objects in an image.",
"The functions in our language include those for filtering objects and boxes, and making assertions, a higher order function for handling negations, and a function for querying objects in boxes.",
"This type specification of constants and functions gives us a grammar with 115 productions, of which 101 are terminal productions (see Appendix A.1 for the complete set of rules in our grammar).",
"Figure 1 shows an example of a complete logical form in our language.",
"For WTQ, we use the functional query language used by (Liang et al., 2018) as the logical form language.",
"Figure 2 shows an example logical form.",
"Lexicons for coverage The lexicon we use for the coverage measure described in §3 contains under 40 rules for each logical form language.",
"They mainly map words and phrases to constants and unary functions in the target language.",
"The complete lexicons are shown in the Appendix.",
"Figures 1 and 2 also show the actions triggered by the corresponding lexicons for the utterances shown.",
"We find that small but precise lexicons are sufficient to guide the search process away from spurious logical forms.",
"Moreover, as shown empirically in §6.4, the model for NLVR does not learn much without this simple but crucial guidance.",
"Experiments We evaluate both our contributions on NLVR and WIKITABLEQUESTIONS.",
"Model In this work, we use a grammar-constrained encoder-decoder neural semantic parser for our experiments.",
"Of the many variants of this basic architecture (see §7), all of which are essentially seq2seq models with constrained outputs and/or re-parameterizations, we choose to use the parser of Krishnamurthy et al.",
"(2017) , as it is particularly well-suited to the WIKITABLEQUESTIONS dataset, which we evaluate on.",
"The encoder in the model is a bi-directional recurrent neural network with Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) cells, and the decoder is a grammarconstrained decoder also with LSTM cells.",
"Instead of directly outputting tokens in the logical form, the decoder outputs production rules from a CFG-like grammar.",
"These production rules sequentially build up an abstract syntax tree, which determines the logical form.",
"The model also has an entity linking component for producing table entities in the logical forms; this com-ponent is only applicable to WIKITABLEQUES-TIONS, and we remove it when running experiments on NLVR.",
"The particulars of the model are not the focus of this work, so we refer the reader to the original paper for more details.",
"In addition, we slightly modify the constrained decoding architecture from (Krishnamurthy et al., 2017) to bias the predicted actions towards those that would decrease the value of S(y i , x i ).",
"This is done using a coverage vector, v S i for each training instance that keeps track of the production rules triggered by x i , and gets updated whenever one of those desired productions is produced by the decoder.",
"That is, v S i is a vector of 1s and 0s, with 1s indicating the triggered productions that are yet to be produced by the decoder.",
"This is similar to the idea of checklists used by Kiddon et al.",
"(2016) .",
"The decoder in the original architecture scores output actions at each time step by computing a dot product of the predicted action representation with the embeddings of each of the actions.",
"We add a weighted sum of all the actions that are yet to produced: s a i = e a .",
"(p i + γ * v S i .E) (4) where s a i is the score of action a at time step i, e a is the embedding of that action, p i is the predicted action representation, E is the set of embeddings of all the actions, and γ is a learned parameter for regularizing the bias towards yet-to-be produced triggered actions.",
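A direct translation of Equation 4 into tensor operations could look like this sketch (shapes and names are assumed):
```python
import torch

def biased_action_scores(p_i, action_embeddings, coverage_vec, gamma):
    """Sketch of Equation 4: s_i^a = e_a . (p_i + gamma * v_i^S E).

    p_i:               (d,)  predicted action representation
    action_embeddings: (num_actions, d)  embedding matrix E
    coverage_vec:      (num_actions,)  1s for triggered, yet-to-be-produced actions
    """
    pending = coverage_vec @ action_embeddings   # weighted sum of pending actions
    return action_embeddings @ (p_i + gamma * pending)  # one score per action
```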
"Experimental setup NLVR We use the standard train-dev-test split for NLVR, containing 12409, 988 and 989 sentence-image pairs respectively.",
"NLVR contains most of the sentences occurring in multiple worlds (with an average of 3.9 worlds per sentence).",
"We set the word embedding and action embedding sizes to 50, and the hidden layer size of both the encoder and the decoder to 30.",
"We initialized all the parameters, including the word and action embeddings using Glorot uniform initialization (Glorot and Bengio, 2010) .",
"We found that using pretrained word representations did not help.",
"We added a dropout (Srivastava et al., 2014) of 0.2 on the outputs of the encoder and the decoder and before predicting the next action, set the beam size to 10 both during training and at test time, and trained the model using ADAM (Kingma and Ba, 2014) with a learning rate of 0.001.",
"All the hyperparameters are tuned on the validation set.",
"WIKITABLEQUESTIONS This dataset comes with five different cross-validation folds of training data, each containing a different 80/20 split for training and development.",
"We first show results aggregated from all five folds in §6.3, and then show results from controlled experiments on fold 1.",
"We replicated the model presented in Krishnamurthy et al.",
"(2017) , and only changed the training algorithm and the language used.",
"We used a beam size of 20 for MBR during training and decoding, and 10 for MML during decoding, and trained the model using Stochastic Gradient Descent (Kiefer et al., 1952) with a learning rate of 0.1, all of which are tuned on the validation sets.",
"Specifics of iterative search For our iterative search algorithm, we obtain an initial set of candidate logical forms in both domains by exhaustively searching to a depth of 10 2 .",
"During search we retrieve the logical forms that lead to the correct denotations in all the corresponding worlds, and sort them based on their coverage cost using the coverage lexicon described in §5.4, and choose the top-k 3 .",
"At each iteration of the search step in our iterative training algorithm, we increase the maximum depth of our search with a step-size of 2, finding more complex logical forms and covering a larger proportion of the training data.",
"While exhaustive search is prohibitively expensive beyond a fixed number of steps, our training process that uses beam search based approximation can go deeper.",
"Implementation We implemented our model and training algorithms within the AllenNLP (Gardner et al., 2018) toolkit.",
"The code and models are publicly available at https://github.com/allenai/ iterative-search-semparse.",
"Main results WIKITABLEQUESTIONS Table 1 compares the performance of a single model trained using Iterative Search, with that of previously published single models.",
"We excluded ensemble models since there are differences in the way ensembles are built for this task in previous work, either in terms of size or how the individual models were chosen.",
"We show both best and aver- Approach Dev Test Pasupat and Liang (2015) 37.0 37.1 Neelakantan et al.",
"(2017) 34.",
"(Liang et al., 2018) , all trained on the official split 1 of WIKITABLEQUESTIONS and tested on the official test set.",
"age (over 5 folds) single model performance from Liang et al.",
"(2018) (Memory Augmented Policy Optimization).",
"The best model was chosen based on performance on the development set.",
"Our single model performances are computed in the same way.",
"Note that Liang et al.",
"(2018) also use a lexicon similar to ours to prune the seed set of logical forms used to initialize their memory buffer.",
"In Table 2 , we compare the performance of our iterative search algorithm with three baselines: 1) Static MML, as described in §2.2.1 trained on the candidate set of logical forms obtained through the heuristic search technique described in §6.2; 2) Iterative MML, also an iterative technique but unlike iterative search, we skip MBR and iteratively train static MML models while increasing the number of decoding steps; and 3) MAPO (Liang et al., 2018) , the current best published system on WTQ.",
"All four algorithms are trained and evaluated on the first fold, use the same language, and the bottom three use the same model and the same set of logical forms used to train static MML.",
"Table 3 , we show a comparison of the performance of our iterative coverage-guided search algorithm with the previously published approaches for NLVR.",
"The first two rows correspond to models that are not semantic parsers.",
"This shows that semantic parsing is a promising direction for this task.",
"The closest work to ours is the weakly supervised parser built by (Goldman et al., 2018) .",
"They build a lexicon similar to ours for mapping surface forms in input sentences to abstract clusters.",
"But in addition to defining a lexicon, they also manually annotate complete sentences in this abstract space, and use those annotations to perform data augmentation for training a supervised parser, which is then used to initialize a weakly supervised parser.",
"They also explicitly use the abstractions to augment the beam during decoding using caching, and a separately-trained discriminative re-ranker to re-order the logical forms on the beam.",
"As a discriminative re-ranker is orthogonal to our contributions, we show their results with and without it, with \"Abs.",
"Sup.\"",
"being more comparable to our work.",
"Our model, which uses no data augmentation, no caching during decoding, and no discriminative re-ranker, outperforms their variant without reranking on the public test set, and outperforms their best model on the hidden test set, achieving a new state-of-theart result on this dataset.",
"NLVR In Effect of coverage-guided search To evaluate the contribution of coverage-guided search, we compare the the performance of the NLVR parser in two different settings: with and without coverage guidance in the cost function.",
"We also compare the performance of the parser in the two settings, when initialized with parameters from an MML model trained to maximize the likelihood of the set of logical forms obtained from exhaustive search.",
"Table 4 shows the results of this comparison.",
"We measure accuracy and consistency of all four models on the publicly available test set, using the official evaluation script.",
"Consistency here refers to the percentage of logical forms that produce the correct denotation in all the corresponding worlds, and is hence a stricter metric than accuracy.",
"The cost weight (λ in Equation 3) was tuned based on validation set performance for the runs with coverage, and we found that λ = 0.4 worked best.",
"It can be seen that both with and without ini-tialization, coverage guidance helps by a big margin, with the gap being even more prominent in the case where there is no initialization.",
"When there is neither coverage guidance nor a good initialization, the model does not learn much from unguided search and get a test accuracy not much higher than the majority baseline of 56.2%.",
"We found that coverage guidance was not as useful for WTQ.",
"The average value of the best performing λ was around 0.2, and higher values neither helped nor hurt performance.",
"Effect of iterative search To evaluate the effect of iterative search, we present the accuracy numbers from the search (S) and maximization (M) steps from different iterations in Tables 5 and 6 , showing results on NLVR and WTQ, respectively.",
"Additionally, we also show number of decoding steps used at each iterations, and the percentage of sentences in the training data for which we were able to obtain consistent logical forms from the S step, the set that was used in the M step of the same iteration.",
"It can be seen in both tables that a better MML model gives a better initialization for MBR, and a better MBR model results in a larger set of utterances for which we can retrieve consistent logical forms, thus improving the subsequent MML model.",
"The improvement for NLVR is more pronounced (a gain of 21% absolute) than for WTQ (a gain of 3% absolute), likely because the initial exhaustive search provides a much higher percentage of spurious logical forms for NLVR, and thus the starting place is relatively worse.",
"Complexity of Logical Forms We analyzed the logical forms produced by our iterative search algorithm at different iterations to see how they differ.",
"As expected, for NLVR, allowing greater depths lets the parser explore more complex logical forms.",
"Table 7 shows examples from the validation set that indicate this trend.",
"Related Work Most of the early methods used for training semantic parsers required the training data to come with annotated logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005) .",
"The primary limitation of such methods is that manually producing these logical forms is expensive, making it hard to scale these methods across domains.",
"Dev.",
"Test-P Test-H Approach Acc.",
"Cons.",
"Acc.",
"Cons.",
"Acc.",
"Cons.",
"MaxEnt (Suhr et al., 2017) 68.0 -67.7 -67.8 -BiATT-Pointer (Tan and Bansal, 2018) 74.6 -73.9 -71.8 -Abs.",
"Sup.",
"(Goldman et al., 2018) 84.3 66.3 81.7 60.1 --Abs.",
"Sup.",
"+ ReRank (Goldman et al., 2018) More recent research has focused on training semantic parsers with weak supervision (Liang et al., 2011; Berant et al., 2013) , or trying to automatically infer logical forms from denotations (Pasupat and .",
"However, matching the performance of a fully supervised semantic parser with only weak supervision remains a significant challenge (Yih et al., 2016) .",
"The main contributions of this work deal with training semantic parsers with weak supervision, and we gave a detailed discussion of related training methods in §2.2.",
"We evaluate our contributions on the NLVR and WIKITABLEQUESTIONS datasets.",
"Other work that evaluates on on these datasets include Goldman et al.",
"(2018) , Tan and Bansal (2018) , Neelakantan et al.",
"(2017) , Krishnamurthy et al.",
"(2017) , Haug et al.",
"(2018) , and (Liang et al., 2018) .",
"These prior works generally present modeling contributions that are orthogonal (and in some cases complementary) to the contributions of this paper.",
"There has also been a lot of recent work on neural semantic parsing, most of which is also orthogonal to (and could probably benefit from) our contributions (Dong and Lapata, 2016; Jia and Liang, 2016; Yin and Neubig, 2017; Krishnamurthy et al., 2017; Rabinovich et al., 2017) .",
"Recent attempts at dealing with the problem of spuriousness include Misra et al.",
"(2018) and Guu et al.",
"(2017) .",
"Coverage has recently been used in machine translation (Tu et al., 2016) and summarization (See et al., 2017) .",
"There have also been many methods that use coverage-like mechanisms to give lexical cues to semantic parsers.",
"Goldman et al.",
"(2018) 's abstract examples is the most recent and related work, but the idea is also related to lexicons in pre-neural semantic parsers (Kwiatkowski et al., 2011) .",
"There is a tower with four blocks (box exists (member count equals all boxes 4)) 1 Atleast one black triangle is not touching the edge (object exists (black (triangle ((negate filter touch wall) all objects)))) 2 There is a yellow block as the top of a tower with exactly three blocks.",
"(object exists (yellow (top (object in box (member count equals all boxes 3))))) 3 The tower with three blocks has a yellow block over a black block (object count greater equals (yellow (above (black (object in box (member count equals all boxes 3))))) 1) Table 7 : Complexity of logical forms produced at different iterations, from iteration 0 to iteration 3; each logical form could not be produced at the previous iterations Conclusion We have presented a new technique for training semantic parsers with weak supervision.",
"Our key insights are that lexical cues are crucial for guiding search during the early stages of training, and that the particulars of the approximate marginalization in maximum marginal likelihood have a large impact on performance.",
"To address the first issue, we used a simple coverage mechanism for including lexicon-like information in neural semantic parsers that do not have lexicons.",
"For the second issue, we developed an iterative procedure that alternates between statically-computed and dynamically-computed training signals.",
"Together these two contributions greatly improve semantic parsing performance, leading to new state-ofthe-art results on NLVR and WIKITABLEQUES-TIONS.",
"As these contributions are to the learning algorithm, they are broadly applicable to many models trained with weak supervision.",
"One potential future work direction is investigating whether they extend to other structured prediction problems beyond semantic parsing."
]
} | {
"paper_header_number": [
"1",
"2.1",
"2.2",
"2.2.1",
"2.2.2",
"3",
"4",
"5",
"5.1",
"5.2",
"5.3",
"5.4",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"6.5",
"7",
"8"
],
"paper_header_content": [
"Introduction",
"Weakly supervised semantic parsing",
"Training algorithms",
"Maximum marginal likelihood",
"Reward-based methods",
"Coverage-guided search",
"Iterative search",
"Datasets",
"Cornell NLVR",
"WIKITABLEQUESTIONS",
"Logical form languages",
"Lexicons for coverage",
"Experiments",
"Model",
"Experimental setup",
"Main results",
"Effect of coverage-guided search",
"Effect of iterative search",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-21#paper-1019#slide-13 | Summary | Spuriousness is a challenge in training semantic parsers with weak supervision
Iterative training: Online search with initialization MML over offline search output
Coverage during online search
SOTA single model performances: | Spuriousness is a challenge in training semantic parsers with weak supervision
Iterative training: Online search with initialization MML over offline search output
Coverage during online search
SOTA single model performances: | [] |
GEM-SciDuet-train-22#paper-1021#slide-0 | 1021 | No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling | Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challenges to behavioral cloning algorithms. Furthermore, due to the limitations of automatic metrics on evaluating story quality, reinforcement learning methods with hand-crafted rewards also face difficulties in gaining an overall performance boost. Therefore, we propose an Adversarial REward Learning (AREL) framework to learn an implicit reward function from human demonstrations, and then optimize policy search with the learned reward function. Though automatic evaluation indicates slight performance boost over state-of-the-art (SOTA) methods in cloning expert behaviors, human evaluation shows that our approach achieves significant improvement in generating more human-like stories than SOTA systems. Code will be made available here 1 . * Equal contribution 1 https://github.com/littlekobe/AREL | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237
],
"paper_content_text": [
"Introduction Recently, increasing attention has been focused on visual captioning (Chen et al., 2015; Wang et al., 2018c) , which aims at describing the content of an image or a video.",
"Though it has achieved impressive results, its capability of performing human-like understanding is still restrictive.",
"To further investigate machine's capa-Story #1: The brother and sister were ready for the first day of school.",
"They were excited to go to their first day and meet new friends.",
"They told their mom how happy they were.",
"They said they were going to make a lot of new friends .",
"Then they got up and got ready to get in the car .",
"Story #2: The brother did not want to talk to his sister.",
"The siblings made up.",
"They started to talk and smile.",
"Their parents showed up.",
"They were happy to see them.",
"shown here: each image is captioned with one sentence, and we also demonstrate two diversified stories that match the same image sequence.",
"bilities in understanding more complicated visual scenarios and composing more structured expressions, visual storytelling (Huang et al., 2016) has been proposed.",
"Visual captioning is aimed at depicting the concrete content of the images, and its expression style is rather simple.",
"In contrast, visual storytelling goes one step further: it summarizes the idea of a photo stream and tells a story about it.",
"Figure 1 shows an example of visual captioning and visual storytelling.",
"We have observed that stories contain rich emotions (excited, happy, not want) and imagination (siblings, parents, school, car) .",
"It, therefore, requires the capability to associate with concepts that do not explicitly appear in the images.",
"Moreover, stories are more subjective, so there barely exists standard templates for storytelling.",
"As shown in Figure 1 , the same photo stream can be paired with diverse stories, different from each other.",
"This heavily increases the evaluation difficulty.",
"So far, prior work for visual storytelling (Huang et al., 2016; Yu et al., 2017b) is mainly inspired by the success of visual captioning.",
"Nevertheless, because these methods are trained by maximizing the likelihood of the observed data pairs, they are restricted to generate simple and plain description with limited expressive patterns.",
"In order to cope with the challenges and produce more human-like descriptions, Rennie et al.",
"(2016) have proposed a reinforcement learning framework.",
"However, in the scenario of visual storytelling, the common reinforced captioning methods are facing great challenges since the hand-crafted rewards based on string matches are either too biased or too sparse to drive the policy search.",
"For instance, we used the METEOR (Banerjee and Lavie, 2005) score as the reward to reinforce our policy and found that though the METEOR score is significantly improved, the other scores are severely harmed.",
"Here we showcase an adversarial example with an average METEOR score as high as 40.2: We had a great time to have a lot of the.",
"They were to be a of the.",
"They were to be in the.",
"The and it were to be the.",
"The, and it were to be the.",
"Apparently, the machine is gaming the metrics.",
"Conversely, when using some other metrics (e.g.",
"BLEU, CIDEr) to evaluate the stories, we observe an opposite behavior: many relevant and coherent stories are receiving a very low score (nearly zero).",
"In order to resolve the strong bias brought by the hand-coded evaluation metrics in RL training and produce more human-like stories, we propose an Adversarial REward Learning (AREL) framework for visual storytelling.",
"We draw our inspiration from recent progress in inverse reinforcement learning (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017) and propose the AREL algorithm to learn a more intelligent reward function.",
"Specifically, we first incorporate a Boltzmann distribution to associate reward learning with distribution approximation, then design the adversarial process with two models -a policy model and a reward model.",
"The policy model performs the primitive actions and produces the story sequence, while the reward model is responsible for learning the implicit reward function from human demonstrations.",
"The learned reward function would be employed to optimize the policy in return.",
"For evaluation, we conduct both automatic metrics and human evaluation but observe a poor correlation between them.",
"Particularly, our method gains slight performance boost over the baseline systems on automatic metrics; human evaluation, however, indicates significant performance boost.",
"Thus we further discuss the limitations of the metrics and validate the superiority of our AREL method in performing more intelligent understanding of the visual scenes and generating more human-like stories.",
"Our main contributions are four-fold: • We propose an adversarial reward learning framework and apply it to boost visual story generation.",
"• We evaluate our approach on the Visual Storytelling (VIST) dataset and achieve the state-of-the-art results on automatic metrics.",
"• We empirically demonstrate that automatic metrics are not perfect for either training or evaluation.",
"• We design and perform a comprehensive human evaluation via Amazon Mechanical Turk, which demonstrates the superiority of the generated stories of our method on relevance, expressiveness, and concreteness.",
"Related Work Visual Storytelling Visual storytelling is the task of generating a narrative story from a photo stream, which requires a deeper understanding of the event flow in the stream.",
"Park and Kim (2015) has done some pioneering research on storytelling.",
"Chen et al.",
"(2017) proposed a multimodal approach for storyline generation to produce a stream of entities instead of human-like descriptions.",
"Recently, a more sophisticated dataset for visual storytelling (VIST) has been released to explore a more human-like understanding of grounded stories (Huang et al., 2016) .",
"Yu et al.",
"(2017b) proposes a multi-task learning algorithm for both album summarization and paragraph generation, achieving the best results on the VIST dataset.",
"But these methods are still based on behavioral cloning and lack the ability to generate more structured stories.",
"Reinforcement Learning in Sequence Generation Recently, reinforcement learning (RL) has gained its popularity in many sequence generation tasks such as machine translation (Bahdanau et al., 2016) , visual captioning (Ren et al., 2017; Wang et al., 2018b) , summarization (Paulus et al., 2017; Chen et al., 2018) , etc.",
"The common wisdom of using RL is to view generating a word as an action and aim at maximizing the expected return by optimizing its policy.",
"As pointed in (Ranzato et al., 2015) , traditional maximum likelihood algorithm is prone to exposure bias and label bias, while the RL agent exposes the generative model to its own distribution and thus can perform better.",
"But these works usually utilize hand-crafted metric scores as the reward to optimize the model, which fails to learn more implicit semantics due to the limitations of automatic metrics.",
"Rethinking Automatic Metrics Automatic metrics, including BLEU (Papineni et al., 2002) , CIDEr , METEOR (Banerjee and Lavie, 2005), and ROUGE (Lin, 2004) , have been widely applied to the sequence generation tasks.",
"Using automatic metrics can ensure rapid prototyping and testing new models with fewer expensive human evaluation.",
"However, they have been criticized to be biased and correlate poorly with human judgments, especially in many generative tasks like response generation (Lowe et al., 2017; Liu et al., 2016) , dialogue system (Bruni and Fernández, 2017) and machine translation (Callison-Burch et al., 2006) .",
"The naive overlap-counting methods are not able to reflect many semantic properties in natural language, such as coherence, expressiveness, etc.",
"Generative Adversarial Network Generative adversarial network (GAN) (Goodfellow et al., 2014 ) is a very popular approach for estimating intractable probabilities, which sidestep the difficulty by alternately training two models to play a min-max two-player game: min D max G E x∼p data [log D(x)] + E z∼pz [log D(G(z))] , where G is the generator and D is the discriminator, and z is the latent variable.",
"Recently, GAN has quickly been adopted to tackle discrete problems (Yu et al., 2017a; Wang et al., 2018a) .",
"The basic idea is to use Monte Carlo policy gradient estimation (Williams, 1992) to update the parameters of the generator.",
"Inverse Reinforcement Learning Reinforcement learning is known to be hindered by the need for an extensive feature and reward engineering, especially under the unknown dynamics.",
"Therefore, inverse reinforcement learning (IRL) has been proposed to infer expert's reward function.",
"Previous IRL approaches include maximum margin approaches (Abbeel and Ng, 2004; Ratliff et al., 2006) and probabilistic approaches (Ziebart, 2010; Ziebart et al., 2008) .",
"Recently, adversarial inverse reinforcement learning methods provide an efficient and scalable promise for automatic reward acquisition (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017; Henderson et al., 2017) .",
"These approaches utilize the connection between IRL and energy-based model and associate every data with a scalar energy value by using Boltzmann distribution p θ (x) ∝ exp(−E θ (x)).",
"Inspired by these methods, we propose a practical AREL approach for visual storytelling to uncover a robust reward function from human demonstrations and thus help produce human-like stories.",
"3 Our Approach Problem Statement Here we consider the task of visual storytelling, whose objective is to output a word sequence W = (w 1 , w 1 , · · · , w T ), w t ∈ V given an input image stream of 5 ordered images I = (I 1 , I 2 , · · · , I 5 ), where V is the vocabulary of all output token.",
"We formulate the generation as a markov decision process and design a reinforcement learning framework to tackle it.",
"As described in Figure 2 , our AREL framework is mainly composed of two modules: a policy model π β (W ) and a reward model R θ (W ).",
"The policy model takes an image sequence I as the input and performs sequential actions (choosing words w from the vocabulary V) to form a narrative story W .",
"The reward model CNN My brother recently graduated college.",
"It was a formal cap and gown event.",
"My mom and dad attended.",
"Later, my aunt and grandma showed up.",
"When the event was over he even got congratulated by the mascot.",
"Figure 3 : Overview of the policy model.",
"The visual encoder is a bidirectional GRU, which encodes the high-level visual features extracted from the input images.",
"Its outputs are then fed into the RNN decoders to generate sentences in parallel.",
"Encoder Decoder Finally, we concatenate all the generated sentences as a full story.",
"Note that the five decoders share the same weights.",
"is optimized by the adversarial objective (see Section 3.3) and aims at deriving a human-like reward from both human-annotated stories and sampled predictions.",
"Model Policy Model As is shown in Figure 3 , the policy model is a CNN-RNN architecture.",
"We fist feed the photo stream I = (I 1 , · · · , I 5 ) into a pretrained CNN and extract their high-level image features.",
"We then employ a visual encoder to further encode the image features as context vectors h i = [ ← − h i ; − → h i ].",
"The visual encoder is a bidirectional gated recurrent units (GRU).",
"In the decoding stage, we feed each context vector h i into a GRU-RNN decoder to generate a substory W i .",
"Formally, the generation process can be written as: s i t = GRU(s i t−1 , [w i t−1 , h i ]) , (1) π β (w i t |w i 1:t−1 ) = sof tmax(W s s i t + b s ) , (2) where s i t denotes the t-th hidden state of i-th decoder.",
"We concatenate the previous token w i t−1 and the context vector h i as the input.",
"W s and b s are the projection matrix and bias, which output a probability distribution over the whole vocabulary V. Eventually, the final story W is the concatenation of the sub-stories W i .",
"β denotes all the parameters of the encoder, the decoder, and the output layer.",
"Figure 4 : Overview of the reward model.",
"Our reward model is a CNN-based architecture, which utilizes convolution kernels with size 2, 3 and 4 to extract bigram, trigram and 4-gram representations from the input sequence embeddings.",
"Once the sentence representation is learned, it will be concatenated with the visual representation of the input image, and then be fed into the final FC layer to obtain the reward.",
"Reward Model The reward model R θ (W ) is a CNN-based architecture (see Figure 4 ).",
"Instead of giving an overall score for the whole story, we apply the reward model to different story parts (substories) W i and compute partial rewards, where i = 1, · · · , 5.",
"We observe that the partial rewards are more fine-grained and can provide better guidance for the policy model.",
"We first query the word embeddings of the substory (one sentence in most cases).",
"Next, multiple convolutional layers with different kernel sizes are used to extract the n-grams features, which are then projected into the sentence-level representation space by pooling layers (the design here is inspired by Kim (2014) ).",
"In addition to the textual features, evaluating the quality of a story should also consider the image features for relevance.",
"Therefore, we then combine the sentence representation with the visual feature of the input image through concatenation and feed them into the final fully connected decision layer.",
"In the end, the reward model outputs an estimated reward value R θ (W ).",
"The process can be written in formula: R θ (W ) = W r (f conv (W ) + W i I CN N ) + b r , (3) where W r , b r denotes the weights in the output layer, and f conv denotes the operations in CNN.",
"I CN N is the high-level visual feature extracted from the image, and W i projects it into the sentence representation space.",
"θ includes all the pa-rameters above.",
"Learning Reward Boltzmann Distribution In order to associate story distribution with reward function, we apply EBM to define a Reward Boltzmann distribution: p θ (W ) = exp(R θ (W )) Z θ , (4) Where W is the word sequence of the story and p θ (W ) is the approximate data distribution, and Z θ = W exp(R θ (W )) denotes the partition function.",
"According to the energy-based model (Le-Cun et al., 2006) , the optimal reward function R * (W ) is achieved when the Reward-Boltzmann distribution equals to the \"real\" data distribution p θ (W ) = p * (W ).",
"Adversarial Reward Learning We first introduce an empirical distribution p e (W ) = 1(W ∈D) |D| to represent the empirical distribution of the training data, where D denotes the dataset with |D| stories and 1 denotes an indicator function.",
"We use this empirical distribution as the \"good\" examples, which provides the evidence for the reward function to learn from.",
"In order to approximate the Reward Boltzmann distribution towards the \"real\" data distribution p * (W ), we design a min-max two-player game, where the Reward Boltzmann distribution p θ aims at maximizing the its similarity with empirical distribution p e while minimizing that with the \"faked\" data generated from policy model π β .",
"On the contrary, the policy distribution π β tries to maximize its similarity with the Boltzmann distribution p θ .",
"Formally, the adversarial objective function is defined as max β min θ KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) .",
"(5) We further decompose it into two parts.",
"First, because the objective J β of the story generation policy is to minimize its similarity with the Boltzmann distribution p θ , the optimal policy that minimizes KL-divergence is thus π(W ) ∼ exp(R θ (W )), meaning if R θ is optimal, the optimal π β = π * .",
"In formula, J β = − KL(π β (W )||p θ (W )) = E W ∼π β (W ) [R θ (W )] + H(π β (W )) , (6) Algorithm where H denotes the entropy of the policy model.",
"On the other hand, the objective J θ of the reward function is to distinguish between humanannotated stories and machine-generated stories.",
"Hence it is trying to minimize the KL-divergence with the empirical distribution p e and maximize the KL-divergence with the approximated policy distribution π β : J θ =KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) = W [pe(W )R θ (W ) − π β (W )R θ (W )] − H(pe) + H(π β ) , (7) Since H(π β ) and H(p e ) are irrelevant to θ, we denote them as constant C. Therefore, the objective J θ can be further derived as J θ = E W ∼pe(W ) [R θ (W )] − E W ∼π β (W ) [R θ (W )] + C .",
"(8) Here we propose to use stochastic gradient descent to optimize these two models alternately.",
"Formally, the gradients can be written as ∂J θ ∂θ = E W ∼pe(W ) ∂R θ (W ) ∂θ − E W ∼π β (W ) ∂R θ (W ) ∂θ , ∂J β ∂β = E W ∼π β (W ) (R θ (W ) + log π θ (W ) − b) ∂ log π β (W ) ∂β , (9) where b is the estimated baseline to reduce the variance.",
"Training & Testing As described in Algorithm 1, we introduce an alternating algorithm to train these two models using stochastic gradient descent.",
"During testing, the policy model is used with beam search to produce the story.",
"Experiments and Analysis Experimental Setup VIST Dataset The VIST dataset (Huang et al., 2016) is the first dataset for sequential vision-tolanguage tasks including visual storytelling, which consists of 10,117 Flickr albums with 210,819 unique photos.",
"In this paper, we mainly evaluate our AREL method on this dataset.",
"After filtering the broken images 2 , there are 40,098 training, 4,988 validation, and 5,050 testing samples.",
"Each sample contains one story that describes 5 selected images from a photo album (mostly one sentence per image).",
"And the same album is paired with 5 different stories as references.",
"In our experiments, we used the same split settings as in (Huang et al., 2016; Yu et al., 2017b) for a fair comparison.",
"Evaluation Metrics In order to comprehensively evaluate our method on storytelling dataset, we adopted both the automatic metrics and human evaluation as our criterion.",
"Four diverse automatic metrics were used in our experiments: BLEU, METEOR, ROUGE-L, and CIDEr.",
"We utilized the open source evaluation code 3 used in (Yu et al., 2017b) .",
"For human evaluation, we employed the Amazon Mechanical Turk to perform two kinds of user studies (see Section 4.3 for more details).",
"Training Details We employ pretrained ResNet-152 model to extract image features from the photo stream.",
"We built a vocabulary of size 9,837 to include words appearing more than three times in the training set.",
"More training details can be found at Appendix B.",
"Automatic Evaluation In this section, we compare our AREL method with the state-of-the-art methods as well as standard reinforcement learning algorithms on automatic evaluation metrics.",
"Then we further discuss the limitations of the hand-crafted metrics on evaluating human-like stories.",
"Comparison with SOTA on Automatic Metrics In Table 1 , we compare our method with Huang et al.",
"(2016) and Yu et al.",
"(2017b) , which report achieving best-known results on the VIST dataset.",
"We first implement a strong baseline model (XEss), which share the same architecture with our policy model but is trained with cross-entropy loss and scheduled sampling.",
"Besides, we adopt the traditional generative adversarial training for comparison (GAN).",
"As shown in Table 1 , our XEss model already outperforms the best-known re- Table 1 : Automatic evaluation on the VIST dataset.",
"We report BLEU (B), METEOR (M), ROUGH-L (R), and CIDEr (C) scores of the SOTA systems and the models we implemented, including XE-ss, GAN and AREL.",
"AREL-s-N denotes AREL models with sigmoid as output activation and alternate frequency as N, while ARELt-N denoting AREL models with tahn as the output activation (N = 50 or 100).",
"sults on the VIST dataset, and the GAN model can bring a performance boost.",
"We then use the XEss model to initialize our policy model and further train it with AREL.",
"Evidently, our AREL model performs the best and achieves the new state-ofthe-art results across all metrics.",
"But, compared with the XE-ss model, the performance gain is minor, especially on METEOR and ROUGE-L scores.",
"However, in Sec.",
"4.3, the extensive human evaluation has indicated that our AREL framework brings a significant improvement on generating human-like stories over the XE-ss model.",
"The inconsistency of automatic evaluation and human evaluation lead to a suspect that these hand-crafted metrics lack the ability to fully evaluate stories' quality due to the complicated characteristics of the stories.",
"Therefore, we conduct experiments to analyze and discuss the defects of the automatic metrics in section 4.2.",
"Limitations of Automatic Metrics As we claimed in the introduction, string-match-based automatic metrics are not perfect and fail to evaluate some semantic characteristics of the stories, like the expressiveness and coherence of the stories.",
"In order to confirm our conjecture, we utilize automatic metrics as rewards to reinforce the visual storytelling model by adopting policy gradient with baseline to train the policy model.",
"The quantitative results are demonstrated in Table 1 .",
"Apparently, METEOR-RL and ROUGE-RL are severely ill-posed: they obtain the highest scores on their own metrics but damage the other met- Table 2 : Comparison with different RL models with different metric scores as the rewards.",
"We report the average scores of the AREL models as AREL (avg).",
"Although METEOR-RL and ROUGE-RL models achieve very high scores on their own metrics, the underlined scores are severely damaged.",
"Actually, they are gaming their own metrics with nonsense sentences.",
"rics severely.",
"We observe that these models are actually overfitting to a given metric while losing the overall coherence and semantical correctness.",
"Same as METEOR score, there is also an adversarial example for ROUGE-L 4 , which is nonsense but achieves an average ROUGE-L score of 33.8.",
"Besides, as can be seen in Table 1 , after reinforced training, BLEU-RL and CIDEr-RL do not bring a consistent improvement over the XE-ss model.",
"We plot the histogram distributions of both BLEU-3 and CIDEr scores on the test set in Figure 5 .",
"An interesting fact is that there are a large number of samples with nearly zero score on both metrics.",
"However, we observed those \"zero-score\" samples are not pointless results; instead, lots of them make sense and deserve a better score than zero.",
"Here is a \"zero-score\" example on BLEU-3: I had a great time at the restaurant today.",
"The food was delicious.",
"I had a lot of food.",
"The food was delicious.",
"T had a great time.",
"The corresponding reference is The table of food was a pleasure to see!",
"Our food is both nutritious and beautiful!",
"Our chicken was especially tasty!",
"We love greens as they taste great and are healthy!",
"The fruit was a colorful display that tantalized our palette.. theme \"food and eating\", which showcases the defeats of using BLEU and CIDEr scores as a reward for RL training.",
"Moreover, we compare the human evaluation scores with these two metric scores in Figure 5 .",
"Noticeably, both BLEU-3 and CIDEr have a poor correlation with the human evaluation scores.",
"Their distributions are more biased and thus cannot fully reflect the quality of the generated stories.",
"In terms of BLEU, it is extremely hard for machines to produce the exact 3-gram or 4-gram matching, so the scores are too low to provide useful guidance.",
"CIDEr measures the similarity of a sentence to the majority of the references.",
"However, the references to the same image sequence are photostream different from each other, so the score is very low and not suitable for this task.",
"In contrast, our AREL framework can lean a more robust reward function from human-annotated stories, which is able to provide better guidance to the policy and thus improves its performances over different metrics.",
"Although the prediction is not as good as the reference, it is actually coherent and relevant to the Comparison with GAN We here compare our method with traditional GAN (Goodfellow et al., 2014) , the update rule for generator can be generally classified into two categories.",
"We demonstrate their corresponding objectives and ours as follows: GAN 1 : J β = E W ∼p β [− log R θ (W )] , GAN 2 : J β = E W ∼p β [log(1 − R θ (W ))] , ours : J β = E W ∼p β [−R θ (W )] .",
"As discussed in Arjovsky et al.",
"(2017) , GAN 1 is prone to the unstable gradient issue and GAN 2 is prone to the vanishing gradient issue.",
"Analytically, our method does not suffer from these two common issues and thus is able converge to optimum solutions more easily.",
"From Table 1 , we can observe slight gains of using AREL over GAN Figure 5 : Metric score distributions.",
"We plot the histogram distributions of BLEU-3 and CIDEr scores on the test set, as well as the human evaluation score distribution on the test samples.",
"For a fair comparison, we use the Turing test results to calculate the human evaluation scores (see Section 4.3).",
"Basically, 0.2 score is given if the generated story wins the Turing test, 0.1 for tie, and 0 if losing.",
"Each sample has 5 scores from 5 judges, and we use the sum as the human evaluation score, so it is in the range [0, 1].",
"with automatic metrics, therefore we further deploy human evaluation for a better comparison.",
"Human Evaluation Automatic metrics cannot fully evaluate the capability of our AREL method.",
"Therefore, we perform two different kinds of human evaluation studies on Amazon Mechanical Turk: Turing test and pairwise human evaluation.",
"For both tasks, we use 150 stories (750 images) sampled from the test set, each assigned to 5 workers to eliminate human variance.",
"We batch six items as one assignment and insert an additional assignment as a sanity check.",
"Besides, the order of the options within each item is shuffled to make a fair comparison.",
"Turing Test We first conduct five independent Turing tests for XE-ss, BLEU-RL, CIDEr-RL, GAN, and AREL models, during which the worker is given one human-annotated sample and one machine-generated sample, and needs to decide which is human-annotated.",
"As shown in Table 3, our AREL model significantly outperforms all the other baseline models in the Turing test: it has much more chances to fool AMT worker (the ratio is AREL:XE-ss:BLEU-RL:CIDEr-RL:GAN = 45.8%:28.3%:32.1%:19.7%:39.5%), which confirms the superiority of our AREL framework in generating human-like stories.",
"Unlike automatic metric evaluation, the Turing test has indicated a much larger margin between AREL and other competing algorithms.",
"Thus, we empirically confirm that metrics are not perfect in evaluating many implicit semantic properties of natural language.",
"Besides, the Turing test of our AREL model reveals that nearly half of the workers are fooled by our machine generation, indicating a preliminary success toward generating human-like stories.",
"Pairwise Comparison In order to have a clear comparison with competing algorithms with respect to different semantic features of the stories, we further perform four pairwise comparison tests: AREL vs XE-ss/BLEU-RL/CIDEr-RL/GAN.",
"For each photo stream, the worker is presented with two generated stories and asked to make decisions from the three aspects: relevance 5 , expressiveness 6 and concreteness 7 .",
"This head-tohead compete is designed to help us understand in what aspect our model outperforms the competing algorithms, which is displayed in Table 4 .",
"Consistently on all the three comparisons, a large majority of the AREL stories trumps the competing systems with respect to their relevance, XE-ss We took a trip to the mountains.",
"There were many different kinds of different kinds.",
"We had a great time.",
"He was a great time.",
"It was a beautiful day.",
"AREL The family decided to take a trip to the countryside.",
"There were so many different kinds of things to see.",
"The family decided to go on a hike.",
"I had a great time.",
"At the end of the day, we were able to take a picture of the beautiful scenery.",
"Humancreated Story We went on a hike yesterday.",
"There were a lot of strange plants there.",
"I had a great time.",
"We drank a lot of water while we were hiking.",
"The view was spectacular.",
"expressiveness, and concreteness.",
"Therefore, it empirically confirms that our generated stories are more relevant to the image sequences, more coherent and concrete than the other algorithms, which however is not explicitly reflected by the automatic metric evaluation.",
"Figure 6 gives a qualitative comparison example between AREL and XE-ss models.",
"Looking at the individual sentences, it is obvious that our results are more grammatically and semantically correct.",
"Then connecting the sentences together, we observe that the AREL story is more coherent and describes the photo stream more accurately.",
"Thus, our AREL model significantly surpasses the XEss model on all the three aspects of the qualitative example.",
"Besides, it won the Turing test (3 out 5 AMT workers think the AREL story is created by a human).",
"In the appendix, we also show a negative case that fails the Turing test.",
"Qualitative Analysis Conclusion In this paper, we not only introduce a novel adversarial reward learning algorithm to generate more human-like stories given image sequences, but also empirically analyze the limitations of the automatic metrics for story evaluation.",
"We believe there are still lots of improvement space in the narrative paragraph generation tasks, like how to better simulate human imagination to create more vivid and diversified stories."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"Model",
"Learning",
"Experimental Setup",
"Automatic Evaluation",
"Human Evaluation",
"Conclusion"
]
} | GEM-SciDuet-train-22#paper-1021#slide-0 | Image Captioning | Two young kids with backpacks sitting on the porch. | Two young kids with backpacks sitting on the porch. | [] |
GEM-SciDuet-train-22#paper-1021#slide-1 | 1021 | No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling | Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challenges to behavioral cloning algorithms. Furthermore, due to the limitations of automatic metrics on evaluating story quality, reinforcement learning methods with hand-crafted rewards also face difficulties in gaining an overall performance boost. Therefore, we propose an Adversarial REward Learning (AREL) framework to learn an implicit reward function from human demonstrations, and then optimize policy search with the learned reward function. Though automatic evaluation indicates slight performance boost over state-of-the-art (SOTA) methods in cloning expert behaviors, human evaluation shows that our approach achieves significant improvement in generating more human-like stories than SOTA systems. Code will be made available here 1 . * Equal contribution 1 https://github.com/littlekobe/AREL | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237
],
"paper_content_text": [
"Introduction Recently, increasing attention has been focused on visual captioning (Chen et al., 2015; Wang et al., 2018c) , which aims at describing the content of an image or a video.",
"Though it has achieved impressive results, its capability of performing human-like understanding is still restrictive.",
"To further investigate machine's capa-Story #1: The brother and sister were ready for the first day of school.",
"They were excited to go to their first day and meet new friends.",
"They told their mom how happy they were.",
"They said they were going to make a lot of new friends .",
"Then they got up and got ready to get in the car .",
"Story #2: The brother did not want to talk to his sister.",
"The siblings made up.",
"They started to talk and smile.",
"Their parents showed up.",
"They were happy to see them.",
"shown here: each image is captioned with one sentence, and we also demonstrate two diversified stories that match the same image sequence.",
"bilities in understanding more complicated visual scenarios and composing more structured expressions, visual storytelling (Huang et al., 2016) has been proposed.",
"Visual captioning is aimed at depicting the concrete content of the images, and its expression style is rather simple.",
"In contrast, visual storytelling goes one step further: it summarizes the idea of a photo stream and tells a story about it.",
"Figure 1 shows an example of visual captioning and visual storytelling.",
"We have observed that stories contain rich emotions (excited, happy, not want) and imagination (siblings, parents, school, car) .",
"It, therefore, requires the capability to associate with concepts that do not explicitly appear in the images.",
"Moreover, stories are more subjective, so there barely exists standard templates for storytelling.",
"As shown in Figure 1 , the same photo stream can be paired with diverse stories, different from each other.",
"This heavily increases the evaluation difficulty.",
"So far, prior work for visual storytelling (Huang et al., 2016; Yu et al., 2017b) is mainly inspired by the success of visual captioning.",
"Nevertheless, because these methods are trained by maximizing the likelihood of the observed data pairs, they are restricted to generate simple and plain description with limited expressive patterns.",
"In order to cope with the challenges and produce more human-like descriptions, Rennie et al.",
"(2016) have proposed a reinforcement learning framework.",
"However, in the scenario of visual storytelling, the common reinforced captioning methods are facing great challenges since the hand-crafted rewards based on string matches are either too biased or too sparse to drive the policy search.",
"For instance, we used the METEOR (Banerjee and Lavie, 2005) score as the reward to reinforce our policy and found that though the METEOR score is significantly improved, the other scores are severely harmed.",
"Here we showcase an adversarial example with an average METEOR score as high as 40.2: We had a great time to have a lot of the.",
"They were to be a of the.",
"They were to be in the.",
"The and it were to be the.",
"The, and it were to be the.",
"Apparently, the machine is gaming the metrics.",
"Conversely, when using some other metrics (e.g.",
"BLEU, CIDEr) to evaluate the stories, we observe an opposite behavior: many relevant and coherent stories are receiving a very low score (nearly zero).",
"In order to resolve the strong bias brought by the hand-coded evaluation metrics in RL training and produce more human-like stories, we propose an Adversarial REward Learning (AREL) framework for visual storytelling.",
"We draw our inspiration from recent progress in inverse reinforcement learning (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017) and propose the AREL algorithm to learn a more intelligent reward function.",
"Specifically, we first incorporate a Boltzmann distribution to associate reward learning with distribution approximation, then design the adversarial process with two models -a policy model and a reward model.",
"The policy model performs the primitive actions and produces the story sequence, while the reward model is responsible for learning the implicit reward function from human demonstrations.",
"The learned reward function would be employed to optimize the policy in return.",
"For evaluation, we conduct both automatic metrics and human evaluation but observe a poor correlation between them.",
"Particularly, our method gains slight performance boost over the baseline systems on automatic metrics; human evaluation, however, indicates significant performance boost.",
"Thus we further discuss the limitations of the metrics and validate the superiority of our AREL method in performing more intelligent understanding of the visual scenes and generating more human-like stories.",
"Our main contributions are four-fold: • We propose an adversarial reward learning framework and apply it to boost visual story generation.",
"• We evaluate our approach on the Visual Storytelling (VIST) dataset and achieve the state-of-the-art results on automatic metrics.",
"• We empirically demonstrate that automatic metrics are not perfect for either training or evaluation.",
"• We design and perform a comprehensive human evaluation via Amazon Mechanical Turk, which demonstrates the superiority of the generated stories of our method on relevance, expressiveness, and concreteness.",
"Related Work Visual Storytelling Visual storytelling is the task of generating a narrative story from a photo stream, which requires a deeper understanding of the event flow in the stream.",
"Park and Kim (2015) has done some pioneering research on storytelling.",
"Chen et al.",
"(2017) proposed a multimodal approach for storyline generation to produce a stream of entities instead of human-like descriptions.",
"Recently, a more sophisticated dataset for visual storytelling (VIST) has been released to explore a more human-like understanding of grounded stories (Huang et al., 2016) .",
"Yu et al.",
"(2017b) proposes a multi-task learning algorithm for both album summarization and paragraph generation, achieving the best results on the VIST dataset.",
"But these methods are still based on behavioral cloning and lack the ability to generate more structured stories.",
"Reinforcement Learning in Sequence Generation Recently, reinforcement learning (RL) has gained its popularity in many sequence generation tasks such as machine translation (Bahdanau et al., 2016) , visual captioning (Ren et al., 2017; Wang et al., 2018b) , summarization (Paulus et al., 2017; Chen et al., 2018) , etc.",
"The common wisdom of using RL is to view generating a word as an action and aim at maximizing the expected return by optimizing its policy.",
"As pointed in (Ranzato et al., 2015) , traditional maximum likelihood algorithm is prone to exposure bias and label bias, while the RL agent exposes the generative model to its own distribution and thus can perform better.",
"But these works usually utilize hand-crafted metric scores as the reward to optimize the model, which fails to learn more implicit semantics due to the limitations of automatic metrics.",
"Rethinking Automatic Metrics Automatic metrics, including BLEU (Papineni et al., 2002) , CIDEr , METEOR (Banerjee and Lavie, 2005), and ROUGE (Lin, 2004) , have been widely applied to the sequence generation tasks.",
"Using automatic metrics can ensure rapid prototyping and testing new models with fewer expensive human evaluation.",
"However, they have been criticized to be biased and correlate poorly with human judgments, especially in many generative tasks like response generation (Lowe et al., 2017; Liu et al., 2016) , dialogue system (Bruni and Fernández, 2017) and machine translation (Callison-Burch et al., 2006) .",
"The naive overlap-counting methods are not able to reflect many semantic properties in natural language, such as coherence, expressiveness, etc.",
"Generative Adversarial Network Generative adversarial network (GAN) (Goodfellow et al., 2014 ) is a very popular approach for estimating intractable probabilities, which sidestep the difficulty by alternately training two models to play a min-max two-player game: min D max G E x∼p data [log D(x)] + E z∼pz [log D(G(z))] , where G is the generator and D is the discriminator, and z is the latent variable.",
"Recently, GAN has quickly been adopted to tackle discrete problems (Yu et al., 2017a; Wang et al., 2018a) .",
"The basic idea is to use Monte Carlo policy gradient estimation (Williams, 1992) to update the parameters of the generator.",
"Inverse Reinforcement Learning Reinforcement learning is known to be hindered by the need for an extensive feature and reward engineering, especially under the unknown dynamics.",
"Therefore, inverse reinforcement learning (IRL) has been proposed to infer expert's reward function.",
"Previous IRL approaches include maximum margin approaches (Abbeel and Ng, 2004; Ratliff et al., 2006) and probabilistic approaches (Ziebart, 2010; Ziebart et al., 2008) .",
"Recently, adversarial inverse reinforcement learning methods provide an efficient and scalable promise for automatic reward acquisition (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017; Henderson et al., 2017) .",
"These approaches utilize the connection between IRL and energy-based model and associate every data with a scalar energy value by using Boltzmann distribution p θ (x) ∝ exp(−E θ (x)).",
"Inspired by these methods, we propose a practical AREL approach for visual storytelling to uncover a robust reward function from human demonstrations and thus help produce human-like stories.",
"3 Our Approach Problem Statement Here we consider the task of visual storytelling, whose objective is to output a word sequence W = (w 1 , w 1 , · · · , w T ), w t ∈ V given an input image stream of 5 ordered images I = (I 1 , I 2 , · · · , I 5 ), where V is the vocabulary of all output token.",
"We formulate the generation as a markov decision process and design a reinforcement learning framework to tackle it.",
"As described in Figure 2 , our AREL framework is mainly composed of two modules: a policy model π β (W ) and a reward model R θ (W ).",
"The policy model takes an image sequence I as the input and performs sequential actions (choosing words w from the vocabulary V) to form a narrative story W .",
"The reward model CNN My brother recently graduated college.",
"It was a formal cap and gown event.",
"My mom and dad attended.",
"Later, my aunt and grandma showed up.",
"When the event was over he even got congratulated by the mascot.",
"Figure 3 : Overview of the policy model.",
"The visual encoder is a bidirectional GRU, which encodes the high-level visual features extracted from the input images.",
"Its outputs are then fed into the RNN decoders to generate sentences in parallel.",
"Encoder Decoder Finally, we concatenate all the generated sentences as a full story.",
"Note that the five decoders share the same weights.",
"is optimized by the adversarial objective (see Section 3.3) and aims at deriving a human-like reward from both human-annotated stories and sampled predictions.",
"Model Policy Model As is shown in Figure 3 , the policy model is a CNN-RNN architecture.",
"We fist feed the photo stream I = (I 1 , · · · , I 5 ) into a pretrained CNN and extract their high-level image features.",
"We then employ a visual encoder to further encode the image features as context vectors h i = [ ← − h i ; − → h i ].",
"The visual encoder is a bidirectional gated recurrent units (GRU).",
"In the decoding stage, we feed each context vector h i into a GRU-RNN decoder to generate a substory W i .",
"Formally, the generation process can be written as: s i t = GRU(s i t−1 , [w i t−1 , h i ]) , (1) π β (w i t |w i 1:t−1 ) = sof tmax(W s s i t + b s ) , (2) where s i t denotes the t-th hidden state of i-th decoder.",
"We concatenate the previous token w i t−1 and the context vector h i as the input.",
"W s and b s are the projection matrix and bias, which output a probability distribution over the whole vocabulary V. Eventually, the final story W is the concatenation of the sub-stories W i .",
"β denotes all the parameters of the encoder, the decoder, and the output layer.",
"Figure 4 : Overview of the reward model.",
"Our reward model is a CNN-based architecture, which utilizes convolution kernels with size 2, 3 and 4 to extract bigram, trigram and 4-gram representations from the input sequence embeddings.",
"Once the sentence representation is learned, it will be concatenated with the visual representation of the input image, and then be fed into the final FC layer to obtain the reward.",
"Reward Model The reward model R θ (W ) is a CNN-based architecture (see Figure 4 ).",
"Instead of giving an overall score for the whole story, we apply the reward model to different story parts (substories) W i and compute partial rewards, where i = 1, · · · , 5.",
"We observe that the partial rewards are more fine-grained and can provide better guidance for the policy model.",
"We first query the word embeddings of the substory (one sentence in most cases).",
"Next, multiple convolutional layers with different kernel sizes are used to extract the n-grams features, which are then projected into the sentence-level representation space by pooling layers (the design here is inspired by Kim (2014) ).",
"In addition to the textual features, evaluating the quality of a story should also consider the image features for relevance.",
"Therefore, we then combine the sentence representation with the visual feature of the input image through concatenation and feed them into the final fully connected decision layer.",
"In the end, the reward model outputs an estimated reward value R θ (W ).",
"The process can be written in formula: R θ (W ) = W r (f conv (W ) + W i I CN N ) + b r , (3) where W r , b r denotes the weights in the output layer, and f conv denotes the operations in CNN.",
"I CN N is the high-level visual feature extracted from the image, and W i projects it into the sentence representation space.",
"θ includes all the pa-rameters above.",
"Learning Reward Boltzmann Distribution In order to associate story distribution with reward function, we apply EBM to define a Reward Boltzmann distribution: p θ (W ) = exp(R θ (W )) Z θ , (4) Where W is the word sequence of the story and p θ (W ) is the approximate data distribution, and Z θ = W exp(R θ (W )) denotes the partition function.",
"According to the energy-based model (Le-Cun et al., 2006) , the optimal reward function R * (W ) is achieved when the Reward-Boltzmann distribution equals to the \"real\" data distribution p θ (W ) = p * (W ).",
"Adversarial Reward Learning We first introduce an empirical distribution p e (W ) = 1(W ∈D) |D| to represent the empirical distribution of the training data, where D denotes the dataset with |D| stories and 1 denotes an indicator function.",
"We use this empirical distribution as the \"good\" examples, which provides the evidence for the reward function to learn from.",
"In order to approximate the Reward Boltzmann distribution towards the \"real\" data distribution p * (W ), we design a min-max two-player game, where the Reward Boltzmann distribution p θ aims at maximizing the its similarity with empirical distribution p e while minimizing that with the \"faked\" data generated from policy model π β .",
"On the contrary, the policy distribution π β tries to maximize its similarity with the Boltzmann distribution p θ .",
"Formally, the adversarial objective function is defined as max β min θ KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) .",
"(5) We further decompose it into two parts.",
"First, because the objective J β of the story generation policy is to minimize its similarity with the Boltzmann distribution p θ , the optimal policy that minimizes KL-divergence is thus π(W ) ∼ exp(R θ (W )), meaning if R θ is optimal, the optimal π β = π * .",
"In formula, J β = − KL(π β (W )||p θ (W )) = E W ∼π β (W ) [R θ (W )] + H(π β (W )) , (6) Algorithm where H denotes the entropy of the policy model.",
"On the other hand, the objective J θ of the reward function is to distinguish between humanannotated stories and machine-generated stories.",
"Hence it is trying to minimize the KL-divergence with the empirical distribution p e and maximize the KL-divergence with the approximated policy distribution π β : J θ =KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) = W [pe(W )R θ (W ) − π β (W )R θ (W )] − H(pe) + H(π β ) , (7) Since H(π β ) and H(p e ) are irrelevant to θ, we denote them as constant C. Therefore, the objective J θ can be further derived as J θ = E W ∼pe(W ) [R θ (W )] − E W ∼π β (W ) [R θ (W )] + C .",
"(8) Here we propose to use stochastic gradient descent to optimize these two models alternately.",
"Formally, the gradients can be written as ∂J θ ∂θ = E W ∼pe(W ) ∂R θ (W ) ∂θ − E W ∼π β (W ) ∂R θ (W ) ∂θ , ∂J β ∂β = E W ∼π β (W ) (R θ (W ) + log π θ (W ) − b) ∂ log π β (W ) ∂β , (9) where b is the estimated baseline to reduce the variance.",
"Training & Testing As described in Algorithm 1, we introduce an alternating algorithm to train these two models using stochastic gradient descent.",
"During testing, the policy model is used with beam search to produce the story.",
"Experiments and Analysis Experimental Setup VIST Dataset The VIST dataset (Huang et al., 2016) is the first dataset for sequential vision-tolanguage tasks including visual storytelling, which consists of 10,117 Flickr albums with 210,819 unique photos.",
"In this paper, we mainly evaluate our AREL method on this dataset.",
"After filtering the broken images 2 , there are 40,098 training, 4,988 validation, and 5,050 testing samples.",
"Each sample contains one story that describes 5 selected images from a photo album (mostly one sentence per image).",
"And the same album is paired with 5 different stories as references.",
"In our experiments, we used the same split settings as in (Huang et al., 2016; Yu et al., 2017b) for a fair comparison.",
"Evaluation Metrics In order to comprehensively evaluate our method on storytelling dataset, we adopted both the automatic metrics and human evaluation as our criterion.",
"Four diverse automatic metrics were used in our experiments: BLEU, METEOR, ROUGE-L, and CIDEr.",
"We utilized the open source evaluation code 3 used in (Yu et al., 2017b) .",
"For human evaluation, we employed the Amazon Mechanical Turk to perform two kinds of user studies (see Section 4.3 for more details).",
"Training Details We employ pretrained ResNet-152 model to extract image features from the photo stream.",
"We built a vocabulary of size 9,837 to include words appearing more than three times in the training set.",
"More training details can be found at Appendix B.",
"Automatic Evaluation In this section, we compare our AREL method with the state-of-the-art methods as well as standard reinforcement learning algorithms on automatic evaluation metrics.",
"Then we further discuss the limitations of the hand-crafted metrics on evaluating human-like stories.",
"Comparison with SOTA on Automatic Metrics In Table 1 , we compare our method with Huang et al.",
"(2016) and Yu et al.",
"(2017b) , which report achieving best-known results on the VIST dataset.",
"We first implement a strong baseline model (XEss), which share the same architecture with our policy model but is trained with cross-entropy loss and scheduled sampling.",
"Besides, we adopt the traditional generative adversarial training for comparison (GAN).",
"As shown in Table 1 , our XEss model already outperforms the best-known re- Table 1 : Automatic evaluation on the VIST dataset.",
"We report BLEU (B), METEOR (M), ROUGH-L (R), and CIDEr (C) scores of the SOTA systems and the models we implemented, including XE-ss, GAN and AREL.",
"AREL-s-N denotes AREL models with sigmoid as output activation and alternate frequency as N, while ARELt-N denoting AREL models with tahn as the output activation (N = 50 or 100).",
"sults on the VIST dataset, and the GAN model can bring a performance boost.",
"We then use the XEss model to initialize our policy model and further train it with AREL.",
"Evidently, our AREL model performs the best and achieves the new state-ofthe-art results across all metrics.",
"But, compared with the XE-ss model, the performance gain is minor, especially on METEOR and ROUGE-L scores.",
"However, in Sec.",
"4.3, the extensive human evaluation has indicated that our AREL framework brings a significant improvement on generating human-like stories over the XE-ss model.",
"The inconsistency of automatic evaluation and human evaluation lead to a suspect that these hand-crafted metrics lack the ability to fully evaluate stories' quality due to the complicated characteristics of the stories.",
"Therefore, we conduct experiments to analyze and discuss the defects of the automatic metrics in section 4.2.",
"Limitations of Automatic Metrics As we claimed in the introduction, string-match-based automatic metrics are not perfect and fail to evaluate some semantic characteristics of the stories, like the expressiveness and coherence of the stories.",
"In order to confirm our conjecture, we utilize automatic metrics as rewards to reinforce the visual storytelling model by adopting policy gradient with baseline to train the policy model.",
"The quantitative results are demonstrated in Table 1 .",
"Apparently, METEOR-RL and ROUGE-RL are severely ill-posed: they obtain the highest scores on their own metrics but damage the other met- Table 2 : Comparison with different RL models with different metric scores as the rewards.",
"We report the average scores of the AREL models as AREL (avg).",
"Although METEOR-RL and ROUGE-RL models achieve very high scores on their own metrics, the underlined scores are severely damaged.",
"Actually, they are gaming their own metrics with nonsense sentences.",
"rics severely.",
"We observe that these models are actually overfitting to a given metric while losing the overall coherence and semantical correctness.",
"Same as METEOR score, there is also an adversarial example for ROUGE-L 4 , which is nonsense but achieves an average ROUGE-L score of 33.8.",
"Besides, as can be seen in Table 1 , after reinforced training, BLEU-RL and CIDEr-RL do not bring a consistent improvement over the XE-ss model.",
"We plot the histogram distributions of both BLEU-3 and CIDEr scores on the test set in Figure 5 .",
"An interesting fact is that there are a large number of samples with nearly zero score on both metrics.",
"However, we observed those \"zero-score\" samples are not pointless results; instead, lots of them make sense and deserve a better score than zero.",
"Here is a \"zero-score\" example on BLEU-3: I had a great time at the restaurant today.",
"The food was delicious.",
"I had a lot of food.",
"The food was delicious.",
"T had a great time.",
"The corresponding reference is The table of food was a pleasure to see!",
"Our food is both nutritious and beautiful!",
"Our chicken was especially tasty!",
"We love greens as they taste great and are healthy!",
"The fruit was a colorful display that tantalized our palette.. theme \"food and eating\", which showcases the defeats of using BLEU and CIDEr scores as a reward for RL training.",
"Moreover, we compare the human evaluation scores with these two metric scores in Figure 5 .",
"Noticeably, both BLEU-3 and CIDEr have a poor correlation with the human evaluation scores.",
"Their distributions are more biased and thus cannot fully reflect the quality of the generated stories.",
"In terms of BLEU, it is extremely hard for machines to produce the exact 3-gram or 4-gram matching, so the scores are too low to provide useful guidance.",
"CIDEr measures the similarity of a sentence to the majority of the references.",
"However, the references to the same image sequence are photostream different from each other, so the score is very low and not suitable for this task.",
"In contrast, our AREL framework can lean a more robust reward function from human-annotated stories, which is able to provide better guidance to the policy and thus improves its performances over different metrics.",
"Although the prediction is not as good as the reference, it is actually coherent and relevant to the Comparison with GAN We here compare our method with traditional GAN (Goodfellow et al., 2014) , the update rule for generator can be generally classified into two categories.",
"We demonstrate their corresponding objectives and ours as follows: GAN 1 : J β = E W ∼p β [− log R θ (W )] , GAN 2 : J β = E W ∼p β [log(1 − R θ (W ))] , ours : J β = E W ∼p β [−R θ (W )] .",
"As discussed in Arjovsky et al.",
"(2017) , GAN 1 is prone to the unstable gradient issue and GAN 2 is prone to the vanishing gradient issue.",
"Analytically, our method does not suffer from these two common issues and thus is able converge to optimum solutions more easily.",
"From Table 1 , we can observe slight gains of using AREL over GAN Figure 5 : Metric score distributions.",
"We plot the histogram distributions of BLEU-3 and CIDEr scores on the test set, as well as the human evaluation score distribution on the test samples.",
"For a fair comparison, we use the Turing test results to calculate the human evaluation scores (see Section 4.3).",
"Basically, 0.2 score is given if the generated story wins the Turing test, 0.1 for tie, and 0 if losing.",
"Each sample has 5 scores from 5 judges, and we use the sum as the human evaluation score, so it is in the range [0, 1].",
"with automatic metrics, therefore we further deploy human evaluation for a better comparison.",
"Human Evaluation Automatic metrics cannot fully evaluate the capability of our AREL method.",
"Therefore, we perform two different kinds of human evaluation studies on Amazon Mechanical Turk: Turing test and pairwise human evaluation.",
"For both tasks, we use 150 stories (750 images) sampled from the test set, each assigned to 5 workers to eliminate human variance.",
"We batch six items as one assignment and insert an additional assignment as a sanity check.",
"Besides, the order of the options within each item is shuffled to make a fair comparison.",
"Turing Test We first conduct five independent Turing tests for XE-ss, BLEU-RL, CIDEr-RL, GAN, and AREL models, during which the worker is given one human-annotated sample and one machine-generated sample, and needs to decide which is human-annotated.",
"As shown in Table 3, our AREL model significantly outperforms all the other baseline models in the Turing test: it has much more chances to fool AMT worker (the ratio is AREL:XE-ss:BLEU-RL:CIDEr-RL:GAN = 45.8%:28.3%:32.1%:19.7%:39.5%), which confirms the superiority of our AREL framework in generating human-like stories.",
"Unlike automatic metric evaluation, the Turing test has indicated a much larger margin between AREL and other competing algorithms.",
"Thus, we empirically confirm that metrics are not perfect in evaluating many implicit semantic properties of natural language.",
"Besides, the Turing test of our AREL model reveals that nearly half of the workers are fooled by our machine generation, indicating a preliminary success toward generating human-like stories.",
"Pairwise Comparison In order to have a clear comparison with competing algorithms with respect to different semantic features of the stories, we further perform four pairwise comparison tests: AREL vs XE-ss/BLEU-RL/CIDEr-RL/GAN.",
"For each photo stream, the worker is presented with two generated stories and asked to make decisions from the three aspects: relevance 5 , expressiveness 6 and concreteness 7 .",
"This head-tohead compete is designed to help us understand in what aspect our model outperforms the competing algorithms, which is displayed in Table 4 .",
"Consistently on all the three comparisons, a large majority of the AREL stories trumps the competing systems with respect to their relevance, XE-ss We took a trip to the mountains.",
"There were many different kinds of different kinds.",
"We had a great time.",
"He was a great time.",
"It was a beautiful day.",
"AREL The family decided to take a trip to the countryside.",
"There were so many different kinds of things to see.",
"The family decided to go on a hike.",
"I had a great time.",
"At the end of the day, we were able to take a picture of the beautiful scenery.",
"Humancreated Story We went on a hike yesterday.",
"There were a lot of strange plants there.",
"I had a great time.",
"We drank a lot of water while we were hiking.",
"The view was spectacular.",
"expressiveness, and concreteness.",
"Therefore, it empirically confirms that our generated stories are more relevant to the image sequences, more coherent and concrete than the other algorithms, which however is not explicitly reflected by the automatic metric evaluation.",
"Figure 6 gives a qualitative comparison example between AREL and XE-ss models.",
"Looking at the individual sentences, it is obvious that our results are more grammatically and semantically correct.",
"Then connecting the sentences together, we observe that the AREL story is more coherent and describes the photo stream more accurately.",
"Thus, our AREL model significantly surpasses the XEss model on all the three aspects of the qualitative example.",
"Besides, it won the Turing test (3 out 5 AMT workers think the AREL story is created by a human).",
"In the appendix, we also show a negative case that fails the Turing test.",
"Qualitative Analysis Conclusion In this paper, we not only introduce a novel adversarial reward learning algorithm to generate more human-like stories given image sequences, but also empirically analyze the limitations of the automatic metrics for story evaluation.",
"We believe there are still lots of improvement space in the narrative paragraph generation tasks, like how to better simulate human imagination to create more vivid and diversified stories."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"Model",
"Learning",
"Experimental Setup",
"Automatic Evaluation",
"Human Evaluation",
"Conclusion"
]
} | GEM-SciDuet-train-22#paper-1021#slide-1 | Visual Storytelling | The brother did not want to talk to his sister. The siblings made up.
They started to talk and smile. Their parents showed up. They were happy to see them.
The brother and sister were ready for the first day of school. They were excited to go to their first day and meet new friends. They told their mom how happy they were. They said they were going to make a lot of new friends. Then they got up and got ready to get in the car. | The brother did not want to talk to his sister. The siblings made up.
They started to talk and smile. Their parents showed up. They were happy to see them.
The brother and sister were ready for the first day of school. They were excited to go to their first day and meet new friends. They told their mom how happy they were. They said they were going to make a lot of new friends. Then they got up and got ready to get in the car. | [] |
GEM-SciDuet-train-22#paper-1021#slide-2 | 1021 | No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling | Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challenges to behavioral cloning algorithms. Furthermore, due to the limitations of automatic metrics on evaluating story quality, reinforcement learning methods with hand-crafted rewards also face difficulties in gaining an overall performance boost. Therefore, we propose an Adversarial REward Learning (AREL) framework to learn an implicit reward function from human demonstrations, and then optimize policy search with the learned reward function. Though automatic evaluation indicates slight performance boost over state-of-the-art (SOTA) methods in cloning expert behaviors, human evaluation shows that our approach achieves significant improvement in generating more human-like stories than SOTA systems. Code will be made available here 1 . * Equal contribution 1 https://github.com/littlekobe/AREL | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237
],
"paper_content_text": [
"Introduction Recently, increasing attention has been focused on visual captioning (Chen et al., 2015; Wang et al., 2018c) , which aims at describing the content of an image or a video.",
"Though it has achieved impressive results, its capability of performing human-like understanding is still restrictive.",
"To further investigate machine's capa-Story #1: The brother and sister were ready for the first day of school.",
"They were excited to go to their first day and meet new friends.",
"They told their mom how happy they were.",
"They said they were going to make a lot of new friends .",
"Then they got up and got ready to get in the car .",
"Story #2: The brother did not want to talk to his sister.",
"The siblings made up.",
"They started to talk and smile.",
"Their parents showed up.",
"They were happy to see them.",
"shown here: each image is captioned with one sentence, and we also demonstrate two diversified stories that match the same image sequence.",
"bilities in understanding more complicated visual scenarios and composing more structured expressions, visual storytelling (Huang et al., 2016) has been proposed.",
"Visual captioning is aimed at depicting the concrete content of the images, and its expression style is rather simple.",
"In contrast, visual storytelling goes one step further: it summarizes the idea of a photo stream and tells a story about it.",
"Figure 1 shows an example of visual captioning and visual storytelling.",
"We have observed that stories contain rich emotions (excited, happy, not want) and imagination (siblings, parents, school, car) .",
"It, therefore, requires the capability to associate with concepts that do not explicitly appear in the images.",
"Moreover, stories are more subjective, so there barely exists standard templates for storytelling.",
"As shown in Figure 1 , the same photo stream can be paired with diverse stories, different from each other.",
"This heavily increases the evaluation difficulty.",
"So far, prior work for visual storytelling (Huang et al., 2016; Yu et al., 2017b) is mainly inspired by the success of visual captioning.",
"Nevertheless, because these methods are trained by maximizing the likelihood of the observed data pairs, they are restricted to generate simple and plain description with limited expressive patterns.",
"In order to cope with the challenges and produce more human-like descriptions, Rennie et al.",
"(2016) have proposed a reinforcement learning framework.",
"However, in the scenario of visual storytelling, the common reinforced captioning methods are facing great challenges since the hand-crafted rewards based on string matches are either too biased or too sparse to drive the policy search.",
"For instance, we used the METEOR (Banerjee and Lavie, 2005) score as the reward to reinforce our policy and found that though the METEOR score is significantly improved, the other scores are severely harmed.",
"Here we showcase an adversarial example with an average METEOR score as high as 40.2: We had a great time to have a lot of the.",
"They were to be a of the.",
"They were to be in the.",
"The and it were to be the.",
"The, and it were to be the.",
"Apparently, the machine is gaming the metrics.",
"Conversely, when using some other metrics (e.g.",
"BLEU, CIDEr) to evaluate the stories, we observe an opposite behavior: many relevant and coherent stories are receiving a very low score (nearly zero).",
"In order to resolve the strong bias brought by the hand-coded evaluation metrics in RL training and produce more human-like stories, we propose an Adversarial REward Learning (AREL) framework for visual storytelling.",
"We draw our inspiration from recent progress in inverse reinforcement learning (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017) and propose the AREL algorithm to learn a more intelligent reward function.",
"Specifically, we first incorporate a Boltzmann distribution to associate reward learning with distribution approximation, then design the adversarial process with two models -a policy model and a reward model.",
"The policy model performs the primitive actions and produces the story sequence, while the reward model is responsible for learning the implicit reward function from human demonstrations.",
"The learned reward function would be employed to optimize the policy in return.",
"For evaluation, we conduct both automatic metrics and human evaluation but observe a poor correlation between them.",
"Particularly, our method gains slight performance boost over the baseline systems on automatic metrics; human evaluation, however, indicates significant performance boost.",
"Thus we further discuss the limitations of the metrics and validate the superiority of our AREL method in performing more intelligent understanding of the visual scenes and generating more human-like stories.",
"Our main contributions are four-fold: • We propose an adversarial reward learning framework and apply it to boost visual story generation.",
"• We evaluate our approach on the Visual Storytelling (VIST) dataset and achieve the state-of-the-art results on automatic metrics.",
"• We empirically demonstrate that automatic metrics are not perfect for either training or evaluation.",
"• We design and perform a comprehensive human evaluation via Amazon Mechanical Turk, which demonstrates the superiority of the generated stories of our method on relevance, expressiveness, and concreteness.",
"Related Work Visual Storytelling Visual storytelling is the task of generating a narrative story from a photo stream, which requires a deeper understanding of the event flow in the stream.",
"Park and Kim (2015) has done some pioneering research on storytelling.",
"Chen et al.",
"(2017) proposed a multimodal approach for storyline generation to produce a stream of entities instead of human-like descriptions.",
"Recently, a more sophisticated dataset for visual storytelling (VIST) has been released to explore a more human-like understanding of grounded stories (Huang et al., 2016) .",
"Yu et al.",
"(2017b) proposes a multi-task learning algorithm for both album summarization and paragraph generation, achieving the best results on the VIST dataset.",
"But these methods are still based on behavioral cloning and lack the ability to generate more structured stories.",
"Reinforcement Learning in Sequence Generation Recently, reinforcement learning (RL) has gained its popularity in many sequence generation tasks such as machine translation (Bahdanau et al., 2016) , visual captioning (Ren et al., 2017; Wang et al., 2018b) , summarization (Paulus et al., 2017; Chen et al., 2018) , etc.",
"The common wisdom of using RL is to view generating a word as an action and aim at maximizing the expected return by optimizing its policy.",
"As pointed in (Ranzato et al., 2015) , traditional maximum likelihood algorithm is prone to exposure bias and label bias, while the RL agent exposes the generative model to its own distribution and thus can perform better.",
"But these works usually utilize hand-crafted metric scores as the reward to optimize the model, which fails to learn more implicit semantics due to the limitations of automatic metrics.",
"Rethinking Automatic Metrics Automatic metrics, including BLEU (Papineni et al., 2002) , CIDEr , METEOR (Banerjee and Lavie, 2005), and ROUGE (Lin, 2004) , have been widely applied to the sequence generation tasks.",
"Using automatic metrics can ensure rapid prototyping and testing new models with fewer expensive human evaluation.",
"However, they have been criticized to be biased and correlate poorly with human judgments, especially in many generative tasks like response generation (Lowe et al., 2017; Liu et al., 2016) , dialogue system (Bruni and Fernández, 2017) and machine translation (Callison-Burch et al., 2006) .",
"The naive overlap-counting methods are not able to reflect many semantic properties in natural language, such as coherence, expressiveness, etc.",
"Generative Adversarial Network Generative adversarial network (GAN) (Goodfellow et al., 2014 ) is a very popular approach for estimating intractable probabilities, which sidestep the difficulty by alternately training two models to play a min-max two-player game: min D max G E x∼p data [log D(x)] + E z∼pz [log D(G(z))] , where G is the generator and D is the discriminator, and z is the latent variable.",
"Recently, GAN has quickly been adopted to tackle discrete problems (Yu et al., 2017a; Wang et al., 2018a) .",
"The basic idea is to use Monte Carlo policy gradient estimation (Williams, 1992) to update the parameters of the generator.",
"Inverse Reinforcement Learning Reinforcement learning is known to be hindered by the need for an extensive feature and reward engineering, especially under the unknown dynamics.",
"Therefore, inverse reinforcement learning (IRL) has been proposed to infer expert's reward function.",
"Previous IRL approaches include maximum margin approaches (Abbeel and Ng, 2004; Ratliff et al., 2006) and probabilistic approaches (Ziebart, 2010; Ziebart et al., 2008) .",
"Recently, adversarial inverse reinforcement learning methods provide an efficient and scalable promise for automatic reward acquisition (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017; Henderson et al., 2017) .",
"These approaches utilize the connection between IRL and energy-based model and associate every data with a scalar energy value by using Boltzmann distribution p θ (x) ∝ exp(−E θ (x)).",
"Inspired by these methods, we propose a practical AREL approach for visual storytelling to uncover a robust reward function from human demonstrations and thus help produce human-like stories.",
"3 Our Approach Problem Statement Here we consider the task of visual storytelling, whose objective is to output a word sequence W = (w 1 , w 1 , · · · , w T ), w t ∈ V given an input image stream of 5 ordered images I = (I 1 , I 2 , · · · , I 5 ), where V is the vocabulary of all output token.",
"We formulate the generation as a markov decision process and design a reinforcement learning framework to tackle it.",
"As described in Figure 2 , our AREL framework is mainly composed of two modules: a policy model π β (W ) and a reward model R θ (W ).",
"The policy model takes an image sequence I as the input and performs sequential actions (choosing words w from the vocabulary V) to form a narrative story W .",
"The reward model CNN My brother recently graduated college.",
"It was a formal cap and gown event.",
"My mom and dad attended.",
"Later, my aunt and grandma showed up.",
"When the event was over he even got congratulated by the mascot.",
"Figure 3 : Overview of the policy model.",
"The visual encoder is a bidirectional GRU, which encodes the high-level visual features extracted from the input images.",
"Its outputs are then fed into the RNN decoders to generate sentences in parallel.",
"Encoder Decoder Finally, we concatenate all the generated sentences as a full story.",
"Note that the five decoders share the same weights.",
"is optimized by the adversarial objective (see Section 3.3) and aims at deriving a human-like reward from both human-annotated stories and sampled predictions.",
"Model Policy Model As is shown in Figure 3 , the policy model is a CNN-RNN architecture.",
"We fist feed the photo stream I = (I 1 , · · · , I 5 ) into a pretrained CNN and extract their high-level image features.",
"We then employ a visual encoder to further encode the image features as context vectors h i = [ ← − h i ; − → h i ].",
"The visual encoder is a bidirectional gated recurrent units (GRU).",
"In the decoding stage, we feed each context vector h i into a GRU-RNN decoder to generate a substory W i .",
"Formally, the generation process can be written as: s i t = GRU(s i t−1 , [w i t−1 , h i ]) , (1) π β (w i t |w i 1:t−1 ) = sof tmax(W s s i t + b s ) , (2) where s i t denotes the t-th hidden state of i-th decoder.",
"We concatenate the previous token w i t−1 and the context vector h i as the input.",
"W s and b s are the projection matrix and bias, which output a probability distribution over the whole vocabulary V. Eventually, the final story W is the concatenation of the sub-stories W i .",
"β denotes all the parameters of the encoder, the decoder, and the output layer.",
"Figure 4 : Overview of the reward model.",
"Our reward model is a CNN-based architecture, which utilizes convolution kernels with size 2, 3 and 4 to extract bigram, trigram and 4-gram representations from the input sequence embeddings.",
"Once the sentence representation is learned, it will be concatenated with the visual representation of the input image, and then be fed into the final FC layer to obtain the reward.",
"Reward Model The reward model R θ (W ) is a CNN-based architecture (see Figure 4 ).",
"Instead of giving an overall score for the whole story, we apply the reward model to different story parts (substories) W i and compute partial rewards, where i = 1, · · · , 5.",
"We observe that the partial rewards are more fine-grained and can provide better guidance for the policy model.",
"We first query the word embeddings of the substory (one sentence in most cases).",
"Next, multiple convolutional layers with different kernel sizes are used to extract the n-grams features, which are then projected into the sentence-level representation space by pooling layers (the design here is inspired by Kim (2014) ).",
"In addition to the textual features, evaluating the quality of a story should also consider the image features for relevance.",
"Therefore, we then combine the sentence representation with the visual feature of the input image through concatenation and feed them into the final fully connected decision layer.",
"In the end, the reward model outputs an estimated reward value R θ (W ).",
"The process can be written in formula: R θ (W ) = W r (f conv (W ) + W i I CN N ) + b r , (3) where W r , b r denotes the weights in the output layer, and f conv denotes the operations in CNN.",
"I CN N is the high-level visual feature extracted from the image, and W i projects it into the sentence representation space.",
"θ includes all the pa-rameters above.",
"Learning Reward Boltzmann Distribution In order to associate story distribution with reward function, we apply EBM to define a Reward Boltzmann distribution: p θ (W ) = exp(R θ (W )) Z θ , (4) Where W is the word sequence of the story and p θ (W ) is the approximate data distribution, and Z θ = W exp(R θ (W )) denotes the partition function.",
"According to the energy-based model (Le-Cun et al., 2006) , the optimal reward function R * (W ) is achieved when the Reward-Boltzmann distribution equals to the \"real\" data distribution p θ (W ) = p * (W ).",
"Adversarial Reward Learning We first introduce an empirical distribution p e (W ) = 1(W ∈D) |D| to represent the empirical distribution of the training data, where D denotes the dataset with |D| stories and 1 denotes an indicator function.",
"We use this empirical distribution as the \"good\" examples, which provides the evidence for the reward function to learn from.",
"In order to approximate the Reward Boltzmann distribution towards the \"real\" data distribution p * (W ), we design a min-max two-player game, where the Reward Boltzmann distribution p θ aims at maximizing the its similarity with empirical distribution p e while minimizing that with the \"faked\" data generated from policy model π β .",
"On the contrary, the policy distribution π β tries to maximize its similarity with the Boltzmann distribution p θ .",
"Formally, the adversarial objective function is defined as max β min θ KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) .",
"(5) We further decompose it into two parts.",
"First, because the objective J β of the story generation policy is to minimize its similarity with the Boltzmann distribution p θ , the optimal policy that minimizes KL-divergence is thus π(W ) ∼ exp(R θ (W )), meaning if R θ is optimal, the optimal π β = π * .",
"In formula, J β = − KL(π β (W )||p θ (W )) = E W ∼π β (W ) [R θ (W )] + H(π β (W )) , (6) Algorithm where H denotes the entropy of the policy model.",
"On the other hand, the objective J θ of the reward function is to distinguish between humanannotated stories and machine-generated stories.",
"Hence it is trying to minimize the KL-divergence with the empirical distribution p e and maximize the KL-divergence with the approximated policy distribution π β : J θ =KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) = W [pe(W )R θ (W ) − π β (W )R θ (W )] − H(pe) + H(π β ) , (7) Since H(π β ) and H(p e ) are irrelevant to θ, we denote them as constant C. Therefore, the objective J θ can be further derived as J θ = E W ∼pe(W ) [R θ (W )] − E W ∼π β (W ) [R θ (W )] + C .",
"(8) Here we propose to use stochastic gradient descent to optimize these two models alternately.",
"Formally, the gradients can be written as ∂J θ ∂θ = E W ∼pe(W ) ∂R θ (W ) ∂θ − E W ∼π β (W ) ∂R θ (W ) ∂θ , ∂J β ∂β = E W ∼π β (W ) (R θ (W ) + log π θ (W ) − b) ∂ log π β (W ) ∂β , (9) where b is the estimated baseline to reduce the variance.",
"Training & Testing As described in Algorithm 1, we introduce an alternating algorithm to train these two models using stochastic gradient descent.",
"During testing, the policy model is used with beam search to produce the story.",
"Experiments and Analysis Experimental Setup VIST Dataset The VIST dataset (Huang et al., 2016) is the first dataset for sequential vision-tolanguage tasks including visual storytelling, which consists of 10,117 Flickr albums with 210,819 unique photos.",
"In this paper, we mainly evaluate our AREL method on this dataset.",
"After filtering the broken images 2 , there are 40,098 training, 4,988 validation, and 5,050 testing samples.",
"Each sample contains one story that describes 5 selected images from a photo album (mostly one sentence per image).",
"And the same album is paired with 5 different stories as references.",
"In our experiments, we used the same split settings as in (Huang et al., 2016; Yu et al., 2017b) for a fair comparison.",
"Evaluation Metrics In order to comprehensively evaluate our method on storytelling dataset, we adopted both the automatic metrics and human evaluation as our criterion.",
"Four diverse automatic metrics were used in our experiments: BLEU, METEOR, ROUGE-L, and CIDEr.",
"We utilized the open source evaluation code 3 used in (Yu et al., 2017b) .",
"For human evaluation, we employed the Amazon Mechanical Turk to perform two kinds of user studies (see Section 4.3 for more details).",
"Training Details We employ pretrained ResNet-152 model to extract image features from the photo stream.",
"We built a vocabulary of size 9,837 to include words appearing more than three times in the training set.",
"More training details can be found at Appendix B.",
"Automatic Evaluation In this section, we compare our AREL method with the state-of-the-art methods as well as standard reinforcement learning algorithms on automatic evaluation metrics.",
"Then we further discuss the limitations of the hand-crafted metrics on evaluating human-like stories.",
"Comparison with SOTA on Automatic Metrics In Table 1 , we compare our method with Huang et al.",
"(2016) and Yu et al.",
"(2017b) , which report achieving best-known results on the VIST dataset.",
"We first implement a strong baseline model (XEss), which share the same architecture with our policy model but is trained with cross-entropy loss and scheduled sampling.",
"Besides, we adopt the traditional generative adversarial training for comparison (GAN).",
"As shown in Table 1 , our XEss model already outperforms the best-known re- Table 1 : Automatic evaluation on the VIST dataset.",
"We report BLEU (B), METEOR (M), ROUGH-L (R), and CIDEr (C) scores of the SOTA systems and the models we implemented, including XE-ss, GAN and AREL.",
"AREL-s-N denotes AREL models with sigmoid as output activation and alternate frequency as N, while ARELt-N denoting AREL models with tahn as the output activation (N = 50 or 100).",
"sults on the VIST dataset, and the GAN model can bring a performance boost.",
"We then use the XEss model to initialize our policy model and further train it with AREL.",
"Evidently, our AREL model performs the best and achieves the new state-ofthe-art results across all metrics.",
"But, compared with the XE-ss model, the performance gain is minor, especially on METEOR and ROUGE-L scores.",
"However, in Sec.",
"4.3, the extensive human evaluation has indicated that our AREL framework brings a significant improvement on generating human-like stories over the XE-ss model.",
"The inconsistency of automatic evaluation and human evaluation lead to a suspect that these hand-crafted metrics lack the ability to fully evaluate stories' quality due to the complicated characteristics of the stories.",
"Therefore, we conduct experiments to analyze and discuss the defects of the automatic metrics in section 4.2.",
"Limitations of Automatic Metrics As we claimed in the introduction, string-match-based automatic metrics are not perfect and fail to evaluate some semantic characteristics of the stories, like the expressiveness and coherence of the stories.",
"In order to confirm our conjecture, we utilize automatic metrics as rewards to reinforce the visual storytelling model by adopting policy gradient with baseline to train the policy model.",
"The quantitative results are demonstrated in Table 1 .",
"Apparently, METEOR-RL and ROUGE-RL are severely ill-posed: they obtain the highest scores on their own metrics but damage the other met- Table 2 : Comparison with different RL models with different metric scores as the rewards.",
"We report the average scores of the AREL models as AREL (avg).",
"Although METEOR-RL and ROUGE-RL models achieve very high scores on their own metrics, the underlined scores are severely damaged.",
"Actually, they are gaming their own metrics with nonsense sentences.",
"rics severely.",
"We observe that these models are actually overfitting to a given metric while losing the overall coherence and semantical correctness.",
"Same as METEOR score, there is also an adversarial example for ROUGE-L 4 , which is nonsense but achieves an average ROUGE-L score of 33.8.",
"Besides, as can be seen in Table 1 , after reinforced training, BLEU-RL and CIDEr-RL do not bring a consistent improvement over the XE-ss model.",
"We plot the histogram distributions of both BLEU-3 and CIDEr scores on the test set in Figure 5 .",
"An interesting fact is that there are a large number of samples with nearly zero score on both metrics.",
"However, we observed those \"zero-score\" samples are not pointless results; instead, lots of them make sense and deserve a better score than zero.",
"Here is a \"zero-score\" example on BLEU-3: I had a great time at the restaurant today.",
"The food was delicious.",
"I had a lot of food.",
"The food was delicious.",
"T had a great time.",
"The corresponding reference is The table of food was a pleasure to see!",
"Our food is both nutritious and beautiful!",
"Our chicken was especially tasty!",
"We love greens as they taste great and are healthy!",
"The fruit was a colorful display that tantalized our palette.. theme \"food and eating\", which showcases the defeats of using BLEU and CIDEr scores as a reward for RL training.",
"Moreover, we compare the human evaluation scores with these two metric scores in Figure 5 .",
"Noticeably, both BLEU-3 and CIDEr have a poor correlation with the human evaluation scores.",
"Their distributions are more biased and thus cannot fully reflect the quality of the generated stories.",
"In terms of BLEU, it is extremely hard for machines to produce the exact 3-gram or 4-gram matching, so the scores are too low to provide useful guidance.",
"CIDEr measures the similarity of a sentence to the majority of the references.",
"However, the references to the same image sequence are photostream different from each other, so the score is very low and not suitable for this task.",
"In contrast, our AREL framework can lean a more robust reward function from human-annotated stories, which is able to provide better guidance to the policy and thus improves its performances over different metrics.",
"Although the prediction is not as good as the reference, it is actually coherent and relevant to the Comparison with GAN We here compare our method with traditional GAN (Goodfellow et al., 2014) , the update rule for generator can be generally classified into two categories.",
"We demonstrate their corresponding objectives and ours as follows: GAN 1 : J β = E W ∼p β [− log R θ (W )] , GAN 2 : J β = E W ∼p β [log(1 − R θ (W ))] , ours : J β = E W ∼p β [−R θ (W )] .",
"As discussed in Arjovsky et al.",
"(2017) , GAN 1 is prone to the unstable gradient issue and GAN 2 is prone to the vanishing gradient issue.",
"Analytically, our method does not suffer from these two common issues and thus is able converge to optimum solutions more easily.",
"From Table 1 , we can observe slight gains of using AREL over GAN Figure 5 : Metric score distributions.",
"We plot the histogram distributions of BLEU-3 and CIDEr scores on the test set, as well as the human evaluation score distribution on the test samples.",
"For a fair comparison, we use the Turing test results to calculate the human evaluation scores (see Section 4.3).",
"Basically, 0.2 score is given if the generated story wins the Turing test, 0.1 for tie, and 0 if losing.",
"Each sample has 5 scores from 5 judges, and we use the sum as the human evaluation score, so it is in the range [0, 1].",
"with automatic metrics, therefore we further deploy human evaluation for a better comparison.",
"Human Evaluation Automatic metrics cannot fully evaluate the capability of our AREL method.",
"Therefore, we perform two different kinds of human evaluation studies on Amazon Mechanical Turk: Turing test and pairwise human evaluation.",
"For both tasks, we use 150 stories (750 images) sampled from the test set, each assigned to 5 workers to eliminate human variance.",
"We batch six items as one assignment and insert an additional assignment as a sanity check.",
"Besides, the order of the options within each item is shuffled to make a fair comparison.",
"Turing Test We first conduct five independent Turing tests for XE-ss, BLEU-RL, CIDEr-RL, GAN, and AREL models, during which the worker is given one human-annotated sample and one machine-generated sample, and needs to decide which is human-annotated.",
"As shown in Table 3, our AREL model significantly outperforms all the other baseline models in the Turing test: it has much more chances to fool AMT worker (the ratio is AREL:XE-ss:BLEU-RL:CIDEr-RL:GAN = 45.8%:28.3%:32.1%:19.7%:39.5%), which confirms the superiority of our AREL framework in generating human-like stories.",
"Unlike automatic metric evaluation, the Turing test has indicated a much larger margin between AREL and other competing algorithms.",
"Thus, we empirically confirm that metrics are not perfect in evaluating many implicit semantic properties of natural language.",
"Besides, the Turing test of our AREL model reveals that nearly half of the workers are fooled by our machine generation, indicating a preliminary success toward generating human-like stories.",
"Pairwise Comparison In order to have a clear comparison with competing algorithms with respect to different semantic features of the stories, we further perform four pairwise comparison tests: AREL vs XE-ss/BLEU-RL/CIDEr-RL/GAN.",
"For each photo stream, the worker is presented with two generated stories and asked to make decisions from the three aspects: relevance 5 , expressiveness 6 and concreteness 7 .",
"This head-tohead compete is designed to help us understand in what aspect our model outperforms the competing algorithms, which is displayed in Table 4 .",
"Consistently on all the three comparisons, a large majority of the AREL stories trumps the competing systems with respect to their relevance, XE-ss We took a trip to the mountains.",
"There were many different kinds of different kinds.",
"We had a great time.",
"He was a great time.",
"It was a beautiful day.",
"AREL The family decided to take a trip to the countryside.",
"There were so many different kinds of things to see.",
"The family decided to go on a hike.",
"I had a great time.",
"At the end of the day, we were able to take a picture of the beautiful scenery.",
"Humancreated Story We went on a hike yesterday.",
"There were a lot of strange plants there.",
"I had a great time.",
"We drank a lot of water while we were hiking.",
"The view was spectacular.",
"expressiveness, and concreteness.",
"Therefore, it empirically confirms that our generated stories are more relevant to the image sequences, more coherent and concrete than the other algorithms, which however is not explicitly reflected by the automatic metric evaluation.",
"Figure 6 gives a qualitative comparison example between AREL and XE-ss models.",
"Looking at the individual sentences, it is obvious that our results are more grammatically and semantically correct.",
"Then connecting the sentences together, we observe that the AREL story is more coherent and describes the photo stream more accurately.",
"Thus, our AREL model significantly surpasses the XEss model on all the three aspects of the qualitative example.",
"Besides, it won the Turing test (3 out 5 AMT workers think the AREL story is created by a human).",
"In the appendix, we also show a negative case that fails the Turing test.",
"Qualitative Analysis Conclusion In this paper, we not only introduce a novel adversarial reward learning algorithm to generate more human-like stories given image sequences, but also empirically analyze the limitations of the automatic metrics for story evaluation.",
"We believe there are still lots of improvement space in the narrative paragraph generation tasks, like how to better simulate human imagination to create more vivid and diversified stories."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"Model",
"Learning",
"Experimental Setup",
"Automatic Evaluation",
"Human Evaluation",
"Conclusion"
]
} | GEM-SciDuet-train-22#paper-1021#slide-2 | Reinforcement Learning | o Directly optimize the existing metrics
BLEU, METEOR, ROUGE, CIDEr
Rennie 2017, Self-critical Sequence Training for Image Captioning | o Directly optimize the existing metrics
BLEU, METEOR, ROUGE, CIDEr
Rennie 2017, Self-critical Sequence Training for Image Captioning | [] |
GEM-SciDuet-train-22#paper-1021#slide-3 | 1021 | No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling | Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challenges to behavioral cloning algorithms. Furthermore, due to the limitations of automatic metrics on evaluating story quality, reinforcement learning methods with hand-crafted rewards also face difficulties in gaining an overall performance boost. Therefore, we propose an Adversarial REward Learning (AREL) framework to learn an implicit reward function from human demonstrations, and then optimize policy search with the learned reward function. Though automatic evaluation indicates slight performance boost over state-of-the-art (SOTA) methods in cloning expert behaviors, human evaluation shows that our approach achieves significant improvement in generating more human-like stories than SOTA systems. Code will be made available at https://github.com/littlekobe/AREL. (* Equal contribution.) | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237
],
"paper_content_text": [
"Introduction Recently, increasing attention has been focused on visual captioning (Chen et al., 2015; Wang et al., 2018c) , which aims at describing the content of an image or a video.",
"Though it has achieved impressive results, its capability of performing human-like understanding is still restrictive.",
"To further investigate machine's capa-Story #1: The brother and sister were ready for the first day of school.",
"They were excited to go to their first day and meet new friends.",
"They told their mom how happy they were.",
"They said they were going to make a lot of new friends .",
"Then they got up and got ready to get in the car .",
"Story #2: The brother did not want to talk to his sister.",
"The siblings made up.",
"They started to talk and smile.",
"Their parents showed up.",
"They were happy to see them.",
"shown here: each image is captioned with one sentence, and we also demonstrate two diversified stories that match the same image sequence.",
"bilities in understanding more complicated visual scenarios and composing more structured expressions, visual storytelling (Huang et al., 2016) has been proposed.",
"Visual captioning is aimed at depicting the concrete content of the images, and its expression style is rather simple.",
"In contrast, visual storytelling goes one step further: it summarizes the idea of a photo stream and tells a story about it.",
"Figure 1 shows an example of visual captioning and visual storytelling.",
"We have observed that stories contain rich emotions (excited, happy, not want) and imagination (siblings, parents, school, car) .",
"It, therefore, requires the capability to associate with concepts that do not explicitly appear in the images.",
"Moreover, stories are more subjective, so there barely exists standard templates for storytelling.",
"As shown in Figure 1 , the same photo stream can be paired with diverse stories, different from each other.",
"This heavily increases the evaluation difficulty.",
"So far, prior work for visual storytelling (Huang et al., 2016; Yu et al., 2017b) is mainly inspired by the success of visual captioning.",
"Nevertheless, because these methods are trained by maximizing the likelihood of the observed data pairs, they are restricted to generate simple and plain description with limited expressive patterns.",
"In order to cope with the challenges and produce more human-like descriptions, Rennie et al.",
"(2016) have proposed a reinforcement learning framework.",
"However, in the scenario of visual storytelling, the common reinforced captioning methods are facing great challenges since the hand-crafted rewards based on string matches are either too biased or too sparse to drive the policy search.",
"For instance, we used the METEOR (Banerjee and Lavie, 2005) score as the reward to reinforce our policy and found that though the METEOR score is significantly improved, the other scores are severely harmed.",
"Here we showcase an adversarial example with an average METEOR score as high as 40.2: We had a great time to have a lot of the.",
"They were to be a of the.",
"They were to be in the.",
"The and it were to be the.",
"The, and it were to be the.",
"Apparently, the machine is gaming the metrics.",
"Conversely, when using some other metrics (e.g.",
"BLEU, CIDEr) to evaluate the stories, we observe an opposite behavior: many relevant and coherent stories are receiving a very low score (nearly zero).",
"In order to resolve the strong bias brought by the hand-coded evaluation metrics in RL training and produce more human-like stories, we propose an Adversarial REward Learning (AREL) framework for visual storytelling.",
"We draw our inspiration from recent progress in inverse reinforcement learning (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017) and propose the AREL algorithm to learn a more intelligent reward function.",
"Specifically, we first incorporate a Boltzmann distribution to associate reward learning with distribution approximation, then design the adversarial process with two models -a policy model and a reward model.",
"The policy model performs the primitive actions and produces the story sequence, while the reward model is responsible for learning the implicit reward function from human demonstrations.",
"The learned reward function would be employed to optimize the policy in return.",
"For evaluation, we conduct both automatic metrics and human evaluation but observe a poor correlation between them.",
"Particularly, our method gains slight performance boost over the baseline systems on automatic metrics; human evaluation, however, indicates significant performance boost.",
"Thus we further discuss the limitations of the metrics and validate the superiority of our AREL method in performing more intelligent understanding of the visual scenes and generating more human-like stories.",
"Our main contributions are four-fold: • We propose an adversarial reward learning framework and apply it to boost visual story generation.",
"• We evaluate our approach on the Visual Storytelling (VIST) dataset and achieve the state-of-the-art results on automatic metrics.",
"• We empirically demonstrate that automatic metrics are not perfect for either training or evaluation.",
"• We design and perform a comprehensive human evaluation via Amazon Mechanical Turk, which demonstrates the superiority of the generated stories of our method on relevance, expressiveness, and concreteness.",
"Related Work Visual Storytelling Visual storytelling is the task of generating a narrative story from a photo stream, which requires a deeper understanding of the event flow in the stream.",
"Park and Kim (2015) has done some pioneering research on storytelling.",
"Chen et al.",
"(2017) proposed a multimodal approach for storyline generation to produce a stream of entities instead of human-like descriptions.",
"Recently, a more sophisticated dataset for visual storytelling (VIST) has been released to explore a more human-like understanding of grounded stories (Huang et al., 2016) .",
"Yu et al.",
"(2017b) proposes a multi-task learning algorithm for both album summarization and paragraph generation, achieving the best results on the VIST dataset.",
"But these methods are still based on behavioral cloning and lack the ability to generate more structured stories.",
"Reinforcement Learning in Sequence Generation Recently, reinforcement learning (RL) has gained its popularity in many sequence generation tasks such as machine translation (Bahdanau et al., 2016) , visual captioning (Ren et al., 2017; Wang et al., 2018b) , summarization (Paulus et al., 2017; Chen et al., 2018) , etc.",
"The common wisdom of using RL is to view generating a word as an action and aim at maximizing the expected return by optimizing its policy.",
"As pointed in (Ranzato et al., 2015) , traditional maximum likelihood algorithm is prone to exposure bias and label bias, while the RL agent exposes the generative model to its own distribution and thus can perform better.",
"But these works usually utilize hand-crafted metric scores as the reward to optimize the model, which fails to learn more implicit semantics due to the limitations of automatic metrics.",
"Rethinking Automatic Metrics Automatic metrics, including BLEU (Papineni et al., 2002) , CIDEr , METEOR (Banerjee and Lavie, 2005), and ROUGE (Lin, 2004) , have been widely applied to the sequence generation tasks.",
"Using automatic metrics can ensure rapid prototyping and testing new models with fewer expensive human evaluation.",
"However, they have been criticized to be biased and correlate poorly with human judgments, especially in many generative tasks like response generation (Lowe et al., 2017; Liu et al., 2016) , dialogue system (Bruni and Fernández, 2017) and machine translation (Callison-Burch et al., 2006) .",
"The naive overlap-counting methods are not able to reflect many semantic properties in natural language, such as coherence, expressiveness, etc.",
"Generative Adversarial Network Generative adversarial network (GAN) (Goodfellow et al., 2014 ) is a very popular approach for estimating intractable probabilities, which sidestep the difficulty by alternately training two models to play a min-max two-player game: min D max G E x∼p data [log D(x)] + E z∼pz [log D(G(z))] , where G is the generator and D is the discriminator, and z is the latent variable.",
"Recently, GAN has quickly been adopted to tackle discrete problems (Yu et al., 2017a; Wang et al., 2018a) .",
"The basic idea is to use Monte Carlo policy gradient estimation (Williams, 1992) to update the parameters of the generator.",
"Inverse Reinforcement Learning Reinforcement learning is known to be hindered by the need for an extensive feature and reward engineering, especially under the unknown dynamics.",
"Therefore, inverse reinforcement learning (IRL) has been proposed to infer expert's reward function.",
"Previous IRL approaches include maximum margin approaches (Abbeel and Ng, 2004; Ratliff et al., 2006) and probabilistic approaches (Ziebart, 2010; Ziebart et al., 2008) .",
"Recently, adversarial inverse reinforcement learning methods provide an efficient and scalable promise for automatic reward acquisition (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017; Henderson et al., 2017) .",
"These approaches utilize the connection between IRL and energy-based model and associate every data with a scalar energy value by using Boltzmann distribution p θ (x) ∝ exp(−E θ (x)).",
"Inspired by these methods, we propose a practical AREL approach for visual storytelling to uncover a robust reward function from human demonstrations and thus help produce human-like stories.",
"3 Our Approach Problem Statement Here we consider the task of visual storytelling, whose objective is to output a word sequence W = (w 1 , w 1 , · · · , w T ), w t ∈ V given an input image stream of 5 ordered images I = (I 1 , I 2 , · · · , I 5 ), where V is the vocabulary of all output token.",
"We formulate the generation as a markov decision process and design a reinforcement learning framework to tackle it.",
"As described in Figure 2 , our AREL framework is mainly composed of two modules: a policy model π β (W ) and a reward model R θ (W ).",
"The policy model takes an image sequence I as the input and performs sequential actions (choosing words w from the vocabulary V) to form a narrative story W .",
"The reward model CNN My brother recently graduated college.",
"It was a formal cap and gown event.",
"My mom and dad attended.",
"Later, my aunt and grandma showed up.",
"When the event was over he even got congratulated by the mascot.",
"Figure 3 : Overview of the policy model.",
"The visual encoder is a bidirectional GRU, which encodes the high-level visual features extracted from the input images.",
"Its outputs are then fed into the RNN decoders to generate sentences in parallel.",
"Encoder Decoder Finally, we concatenate all the generated sentences as a full story.",
"Note that the five decoders share the same weights.",
"is optimized by the adversarial objective (see Section 3.3) and aims at deriving a human-like reward from both human-annotated stories and sampled predictions.",
"Model Policy Model As is shown in Figure 3 , the policy model is a CNN-RNN architecture.",
"We fist feed the photo stream I = (I 1 , · · · , I 5 ) into a pretrained CNN and extract their high-level image features.",
"We then employ a visual encoder to further encode the image features as context vectors h i = [ ← − h i ; − → h i ].",
"The visual encoder is a bidirectional gated recurrent units (GRU).",
"In the decoding stage, we feed each context vector h i into a GRU-RNN decoder to generate a substory W i .",
"Formally, the generation process can be written as: s i t = GRU(s i t−1 , [w i t−1 , h i ]) , (1) π β (w i t |w i 1:t−1 ) = sof tmax(W s s i t + b s ) , (2) where s i t denotes the t-th hidden state of i-th decoder.",
"We concatenate the previous token w i t−1 and the context vector h i as the input.",
"W s and b s are the projection matrix and bias, which output a probability distribution over the whole vocabulary V. Eventually, the final story W is the concatenation of the sub-stories W i .",
"β denotes all the parameters of the encoder, the decoder, and the output layer.",
"Figure 4 : Overview of the reward model.",
"Our reward model is a CNN-based architecture, which utilizes convolution kernels with size 2, 3 and 4 to extract bigram, trigram and 4-gram representations from the input sequence embeddings.",
"Once the sentence representation is learned, it will be concatenated with the visual representation of the input image, and then be fed into the final FC layer to obtain the reward.",
"Reward Model The reward model R θ (W ) is a CNN-based architecture (see Figure 4 ).",
"Instead of giving an overall score for the whole story, we apply the reward model to different story parts (substories) W i and compute partial rewards, where i = 1, · · · , 5.",
"We observe that the partial rewards are more fine-grained and can provide better guidance for the policy model.",
"We first query the word embeddings of the substory (one sentence in most cases).",
"Next, multiple convolutional layers with different kernel sizes are used to extract the n-grams features, which are then projected into the sentence-level representation space by pooling layers (the design here is inspired by Kim (2014) ).",
"In addition to the textual features, evaluating the quality of a story should also consider the image features for relevance.",
"Therefore, we then combine the sentence representation with the visual feature of the input image through concatenation and feed them into the final fully connected decision layer.",
"In the end, the reward model outputs an estimated reward value R θ (W ).",
"The process can be written in formula: R θ (W ) = W r (f conv (W ) + W i I CN N ) + b r , (3) where W r , b r denotes the weights in the output layer, and f conv denotes the operations in CNN.",
"I CN N is the high-level visual feature extracted from the image, and W i projects it into the sentence representation space.",
"θ includes all the pa-rameters above.",
"Learning Reward Boltzmann Distribution In order to associate story distribution with reward function, we apply EBM to define a Reward Boltzmann distribution: p θ (W ) = exp(R θ (W )) Z θ , (4) Where W is the word sequence of the story and p θ (W ) is the approximate data distribution, and Z θ = W exp(R θ (W )) denotes the partition function.",
"According to the energy-based model (Le-Cun et al., 2006) , the optimal reward function R * (W ) is achieved when the Reward-Boltzmann distribution equals to the \"real\" data distribution p θ (W ) = p * (W ).",
"Adversarial Reward Learning We first introduce an empirical distribution p e (W ) = 1(W ∈D) |D| to represent the empirical distribution of the training data, where D denotes the dataset with |D| stories and 1 denotes an indicator function.",
"We use this empirical distribution as the \"good\" examples, which provides the evidence for the reward function to learn from.",
"In order to approximate the Reward Boltzmann distribution towards the \"real\" data distribution p * (W ), we design a min-max two-player game, where the Reward Boltzmann distribution p θ aims at maximizing the its similarity with empirical distribution p e while minimizing that with the \"faked\" data generated from policy model π β .",
"On the contrary, the policy distribution π β tries to maximize its similarity with the Boltzmann distribution p θ .",
"Formally, the adversarial objective function is defined as max β min θ KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) .",
"(5) We further decompose it into two parts.",
"First, because the objective J β of the story generation policy is to minimize its similarity with the Boltzmann distribution p θ , the optimal policy that minimizes KL-divergence is thus π(W ) ∼ exp(R θ (W )), meaning if R θ is optimal, the optimal π β = π * .",
"In formula, J β = − KL(π β (W )||p θ (W )) = E W ∼π β (W ) [R θ (W )] + H(π β (W )) , (6) Algorithm where H denotes the entropy of the policy model.",
"On the other hand, the objective J θ of the reward function is to distinguish between humanannotated stories and machine-generated stories.",
"Hence it is trying to minimize the KL-divergence with the empirical distribution p e and maximize the KL-divergence with the approximated policy distribution π β : J θ =KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) = W [pe(W )R θ (W ) − π β (W )R θ (W )] − H(pe) + H(π β ) , (7) Since H(π β ) and H(p e ) are irrelevant to θ, we denote them as constant C. Therefore, the objective J θ can be further derived as J θ = E W ∼pe(W ) [R θ (W )] − E W ∼π β (W ) [R θ (W )] + C .",
"(8) Here we propose to use stochastic gradient descent to optimize these two models alternately.",
"Formally, the gradients can be written as ∂J θ ∂θ = E W ∼pe(W ) ∂R θ (W ) ∂θ − E W ∼π β (W ) ∂R θ (W ) ∂θ , ∂J β ∂β = E W ∼π β (W ) (R θ (W ) + log π θ (W ) − b) ∂ log π β (W ) ∂β , (9) where b is the estimated baseline to reduce the variance.",
"Training & Testing As described in Algorithm 1, we introduce an alternating algorithm to train these two models using stochastic gradient descent.",
"During testing, the policy model is used with beam search to produce the story.",
"Experiments and Analysis Experimental Setup VIST Dataset The VIST dataset (Huang et al., 2016) is the first dataset for sequential vision-tolanguage tasks including visual storytelling, which consists of 10,117 Flickr albums with 210,819 unique photos.",
"In this paper, we mainly evaluate our AREL method on this dataset.",
"After filtering the broken images 2 , there are 40,098 training, 4,988 validation, and 5,050 testing samples.",
"Each sample contains one story that describes 5 selected images from a photo album (mostly one sentence per image).",
"And the same album is paired with 5 different stories as references.",
"In our experiments, we used the same split settings as in (Huang et al., 2016; Yu et al., 2017b) for a fair comparison.",
"Evaluation Metrics In order to comprehensively evaluate our method on storytelling dataset, we adopted both the automatic metrics and human evaluation as our criterion.",
"Four diverse automatic metrics were used in our experiments: BLEU, METEOR, ROUGE-L, and CIDEr.",
"We utilized the open source evaluation code 3 used in (Yu et al., 2017b) .",
"For human evaluation, we employed the Amazon Mechanical Turk to perform two kinds of user studies (see Section 4.3 for more details).",
"Training Details We employ pretrained ResNet-152 model to extract image features from the photo stream.",
"We built a vocabulary of size 9,837 to include words appearing more than three times in the training set.",
"More training details can be found at Appendix B.",
"Automatic Evaluation In this section, we compare our AREL method with the state-of-the-art methods as well as standard reinforcement learning algorithms on automatic evaluation metrics.",
"Then we further discuss the limitations of the hand-crafted metrics on evaluating human-like stories.",
"Comparison with SOTA on Automatic Metrics In Table 1 , we compare our method with Huang et al.",
"(2016) and Yu et al.",
"(2017b) , which report achieving best-known results on the VIST dataset.",
"We first implement a strong baseline model (XEss), which share the same architecture with our policy model but is trained with cross-entropy loss and scheduled sampling.",
"Besides, we adopt the traditional generative adversarial training for comparison (GAN).",
"As shown in Table 1 , our XEss model already outperforms the best-known re- Table 1 : Automatic evaluation on the VIST dataset.",
"We report BLEU (B), METEOR (M), ROUGH-L (R), and CIDEr (C) scores of the SOTA systems and the models we implemented, including XE-ss, GAN and AREL.",
"AREL-s-N denotes AREL models with sigmoid as output activation and alternate frequency as N, while ARELt-N denoting AREL models with tahn as the output activation (N = 50 or 100).",
"sults on the VIST dataset, and the GAN model can bring a performance boost.",
"We then use the XEss model to initialize our policy model and further train it with AREL.",
"Evidently, our AREL model performs the best and achieves the new state-ofthe-art results across all metrics.",
"But, compared with the XE-ss model, the performance gain is minor, especially on METEOR and ROUGE-L scores.",
"However, in Sec.",
"4.3, the extensive human evaluation has indicated that our AREL framework brings a significant improvement on generating human-like stories over the XE-ss model.",
"The inconsistency of automatic evaluation and human evaluation lead to a suspect that these hand-crafted metrics lack the ability to fully evaluate stories' quality due to the complicated characteristics of the stories.",
"Therefore, we conduct experiments to analyze and discuss the defects of the automatic metrics in section 4.2.",
"Limitations of Automatic Metrics As we claimed in the introduction, string-match-based automatic metrics are not perfect and fail to evaluate some semantic characteristics of the stories, like the expressiveness and coherence of the stories.",
"In order to confirm our conjecture, we utilize automatic metrics as rewards to reinforce the visual storytelling model by adopting policy gradient with baseline to train the policy model.",
"The quantitative results are demonstrated in Table 1 .",
"Apparently, METEOR-RL and ROUGE-RL are severely ill-posed: they obtain the highest scores on their own metrics but damage the other met- Table 2 : Comparison with different RL models with different metric scores as the rewards.",
"We report the average scores of the AREL models as AREL (avg).",
"Although METEOR-RL and ROUGE-RL models achieve very high scores on their own metrics, the underlined scores are severely damaged.",
"Actually, they are gaming their own metrics with nonsense sentences.",
"rics severely.",
"We observe that these models are actually overfitting to a given metric while losing the overall coherence and semantical correctness.",
"Same as METEOR score, there is also an adversarial example for ROUGE-L 4 , which is nonsense but achieves an average ROUGE-L score of 33.8.",
"Besides, as can be seen in Table 1 , after reinforced training, BLEU-RL and CIDEr-RL do not bring a consistent improvement over the XE-ss model.",
"We plot the histogram distributions of both BLEU-3 and CIDEr scores on the test set in Figure 5 .",
"An interesting fact is that there are a large number of samples with nearly zero score on both metrics.",
"However, we observed those \"zero-score\" samples are not pointless results; instead, lots of them make sense and deserve a better score than zero.",
"Here is a \"zero-score\" example on BLEU-3: I had a great time at the restaurant today.",
"The food was delicious.",
"I had a lot of food.",
"The food was delicious.",
"T had a great time.",
"The corresponding reference is The table of food was a pleasure to see!",
"Our food is both nutritious and beautiful!",
"Our chicken was especially tasty!",
"We love greens as they taste great and are healthy!",
"The fruit was a colorful display that tantalized our palette.. theme \"food and eating\", which showcases the defeats of using BLEU and CIDEr scores as a reward for RL training.",
"Moreover, we compare the human evaluation scores with these two metric scores in Figure 5 .",
"Noticeably, both BLEU-3 and CIDEr have a poor correlation with the human evaluation scores.",
"Their distributions are more biased and thus cannot fully reflect the quality of the generated stories.",
"In terms of BLEU, it is extremely hard for machines to produce the exact 3-gram or 4-gram matching, so the scores are too low to provide useful guidance.",
"CIDEr measures the similarity of a sentence to the majority of the references.",
"However, the references to the same image sequence are photostream different from each other, so the score is very low and not suitable for this task.",
"In contrast, our AREL framework can lean a more robust reward function from human-annotated stories, which is able to provide better guidance to the policy and thus improves its performances over different metrics.",
"Although the prediction is not as good as the reference, it is actually coherent and relevant to the Comparison with GAN We here compare our method with traditional GAN (Goodfellow et al., 2014) , the update rule for generator can be generally classified into two categories.",
"We demonstrate their corresponding objectives and ours as follows: GAN 1 : J β = E W ∼p β [− log R θ (W )] , GAN 2 : J β = E W ∼p β [log(1 − R θ (W ))] , ours : J β = E W ∼p β [−R θ (W )] .",
"As discussed in Arjovsky et al.",
"(2017) , GAN 1 is prone to the unstable gradient issue and GAN 2 is prone to the vanishing gradient issue.",
"Analytically, our method does not suffer from these two common issues and thus is able converge to optimum solutions more easily.",
"From Table 1 , we can observe slight gains of using AREL over GAN Figure 5 : Metric score distributions.",
"We plot the histogram distributions of BLEU-3 and CIDEr scores on the test set, as well as the human evaluation score distribution on the test samples.",
"For a fair comparison, we use the Turing test results to calculate the human evaluation scores (see Section 4.3).",
"Basically, 0.2 score is given if the generated story wins the Turing test, 0.1 for tie, and 0 if losing.",
"Each sample has 5 scores from 5 judges, and we use the sum as the human evaluation score, so it is in the range [0, 1].",
"with automatic metrics, therefore we further deploy human evaluation for a better comparison.",
"Human Evaluation Automatic metrics cannot fully evaluate the capability of our AREL method.",
"Therefore, we perform two different kinds of human evaluation studies on Amazon Mechanical Turk: Turing test and pairwise human evaluation.",
"For both tasks, we use 150 stories (750 images) sampled from the test set, each assigned to 5 workers to eliminate human variance.",
"We batch six items as one assignment and insert an additional assignment as a sanity check.",
"Besides, the order of the options within each item is shuffled to make a fair comparison.",
"Turing Test We first conduct five independent Turing tests for XE-ss, BLEU-RL, CIDEr-RL, GAN, and AREL models, during which the worker is given one human-annotated sample and one machine-generated sample, and needs to decide which is human-annotated.",
"As shown in Table 3, our AREL model significantly outperforms all the other baseline models in the Turing test: it has much more chances to fool AMT worker (the ratio is AREL:XE-ss:BLEU-RL:CIDEr-RL:GAN = 45.8%:28.3%:32.1%:19.7%:39.5%), which confirms the superiority of our AREL framework in generating human-like stories.",
"Unlike automatic metric evaluation, the Turing test has indicated a much larger margin between AREL and other competing algorithms.",
"Thus, we empirically confirm that metrics are not perfect in evaluating many implicit semantic properties of natural language.",
"Besides, the Turing test of our AREL model reveals that nearly half of the workers are fooled by our machine generation, indicating a preliminary success toward generating human-like stories.",
"Pairwise Comparison In order to have a clear comparison with competing algorithms with respect to different semantic features of the stories, we further perform four pairwise comparison tests: AREL vs XE-ss/BLEU-RL/CIDEr-RL/GAN.",
"For each photo stream, the worker is presented with two generated stories and asked to make decisions from the three aspects: relevance 5 , expressiveness 6 and concreteness 7 .",
"This head-tohead compete is designed to help us understand in what aspect our model outperforms the competing algorithms, which is displayed in Table 4 .",
"Consistently on all the three comparisons, a large majority of the AREL stories trumps the competing systems with respect to their relevance, XE-ss We took a trip to the mountains.",
"There were many different kinds of different kinds.",
"We had a great time.",
"He was a great time.",
"It was a beautiful day.",
"AREL The family decided to take a trip to the countryside.",
"There were so many different kinds of things to see.",
"The family decided to go on a hike.",
"I had a great time.",
"At the end of the day, we were able to take a picture of the beautiful scenery.",
"Humancreated Story We went on a hike yesterday.",
"There were a lot of strange plants there.",
"I had a great time.",
"We drank a lot of water while we were hiking.",
"The view was spectacular.",
"expressiveness, and concreteness.",
"Therefore, it empirically confirms that our generated stories are more relevant to the image sequences, more coherent and concrete than the other algorithms, which however is not explicitly reflected by the automatic metric evaluation.",
"Figure 6 gives a qualitative comparison example between AREL and XE-ss models.",
"Looking at the individual sentences, it is obvious that our results are more grammatically and semantically correct.",
"Then connecting the sentences together, we observe that the AREL story is more coherent and describes the photo stream more accurately.",
"Thus, our AREL model significantly surpasses the XEss model on all the three aspects of the qualitative example.",
"Besides, it won the Turing test (3 out 5 AMT workers think the AREL story is created by a human).",
"In the appendix, we also show a negative case that fails the Turing test.",
"Qualitative Analysis Conclusion In this paper, we not only introduce a novel adversarial reward learning algorithm to generate more human-like stories given image sequences, but also empirically analyze the limitations of the automatic metrics for story evaluation.",
"We believe there are still lots of improvement space in the narrative paragraph generation tasks, like how to better simulate human imagination to create more vivid and diversified stories."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"Model",
"Learning",
"Experimental Setup",
"Automatic Evaluation",
"Human Evaluation",
"Conclusion"
]
} | GEM-SciDuet-train-22#paper-1021#slide-3 | Inverse Reinforcement Learning | Reward Function, Inverse Reinforcement Learning (IRL), Optimal Policy | Reward Function, Inverse Reinforcement Learning (IRL), Optimal Policy | [] |
GEM-SciDuet-train-22#paper-1021#slide-4 | 1021 | No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling | Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challenges to behavioral cloning algorithms. Furthermore, due to the limitations of automatic metrics on evaluating story quality, reinforcement learning methods with hand-crafted rewards also face difficulties in gaining an overall performance boost. Therefore, we propose an Adversarial REward Learning (AREL) framework to learn an implicit reward function from human demonstrations, and then optimize policy search with the learned reward function. Though automatic evaluation indicates a slight performance boost over state-of-the-art (SOTA) methods in cloning expert behaviors, human evaluation shows that our approach achieves significant improvement in generating more human-like stories than SOTA systems. Code will be made available at https://github.com/littlekobe/AREL (* equal contribution). | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237
],
"paper_content_text": [
"Introduction Recently, increasing attention has been focused on visual captioning (Chen et al., 2015; Wang et al., 2018c) , which aims at describing the content of an image or a video.",
"Though it has achieved impressive results, its capability of performing human-like understanding is still restrictive.",
"To further investigate machine's capa-Story #1: The brother and sister were ready for the first day of school.",
"They were excited to go to their first day and meet new friends.",
"They told their mom how happy they were.",
"They said they were going to make a lot of new friends .",
"Then they got up and got ready to get in the car .",
"Story #2: The brother did not want to talk to his sister.",
"The siblings made up.",
"They started to talk and smile.",
"Their parents showed up.",
"They were happy to see them.",
"shown here: each image is captioned with one sentence, and we also demonstrate two diversified stories that match the same image sequence.",
"bilities in understanding more complicated visual scenarios and composing more structured expressions, visual storytelling (Huang et al., 2016) has been proposed.",
"Visual captioning is aimed at depicting the concrete content of the images, and its expression style is rather simple.",
"In contrast, visual storytelling goes one step further: it summarizes the idea of a photo stream and tells a story about it.",
"Figure 1 shows an example of visual captioning and visual storytelling.",
"We have observed that stories contain rich emotions (excited, happy, not want) and imagination (siblings, parents, school, car) .",
"It, therefore, requires the capability to associate with concepts that do not explicitly appear in the images.",
"Moreover, stories are more subjective, so there barely exists standard templates for storytelling.",
"As shown in Figure 1 , the same photo stream can be paired with diverse stories, different from each other.",
"This heavily increases the evaluation difficulty.",
"So far, prior work for visual storytelling (Huang et al., 2016; Yu et al., 2017b) is mainly inspired by the success of visual captioning.",
"Nevertheless, because these methods are trained by maximizing the likelihood of the observed data pairs, they are restricted to generate simple and plain description with limited expressive patterns.",
"In order to cope with the challenges and produce more human-like descriptions, Rennie et al.",
"(2016) have proposed a reinforcement learning framework.",
"However, in the scenario of visual storytelling, the common reinforced captioning methods are facing great challenges since the hand-crafted rewards based on string matches are either too biased or too sparse to drive the policy search.",
"For instance, we used the METEOR (Banerjee and Lavie, 2005) score as the reward to reinforce our policy and found that though the METEOR score is significantly improved, the other scores are severely harmed.",
"Here we showcase an adversarial example with an average METEOR score as high as 40.2: We had a great time to have a lot of the.",
"They were to be a of the.",
"They were to be in the.",
"The and it were to be the.",
"The, and it were to be the.",
"Apparently, the machine is gaming the metrics.",
"Conversely, when using some other metrics (e.g.",
"BLEU, CIDEr) to evaluate the stories, we observe an opposite behavior: many relevant and coherent stories are receiving a very low score (nearly zero).",
"In order to resolve the strong bias brought by the hand-coded evaluation metrics in RL training and produce more human-like stories, we propose an Adversarial REward Learning (AREL) framework for visual storytelling.",
"We draw our inspiration from recent progress in inverse reinforcement learning (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017) and propose the AREL algorithm to learn a more intelligent reward function.",
"Specifically, we first incorporate a Boltzmann distribution to associate reward learning with distribution approximation, then design the adversarial process with two models -a policy model and a reward model.",
"The policy model performs the primitive actions and produces the story sequence, while the reward model is responsible for learning the implicit reward function from human demonstrations.",
"The learned reward function would be employed to optimize the policy in return.",
"For evaluation, we conduct both automatic metrics and human evaluation but observe a poor correlation between them.",
"Particularly, our method gains slight performance boost over the baseline systems on automatic metrics; human evaluation, however, indicates significant performance boost.",
"Thus we further discuss the limitations of the metrics and validate the superiority of our AREL method in performing more intelligent understanding of the visual scenes and generating more human-like stories.",
"Our main contributions are four-fold: • We propose an adversarial reward learning framework and apply it to boost visual story generation.",
"• We evaluate our approach on the Visual Storytelling (VIST) dataset and achieve the state-of-the-art results on automatic metrics.",
"• We empirically demonstrate that automatic metrics are not perfect for either training or evaluation.",
"• We design and perform a comprehensive human evaluation via Amazon Mechanical Turk, which demonstrates the superiority of the generated stories of our method on relevance, expressiveness, and concreteness.",
"Related Work Visual Storytelling Visual storytelling is the task of generating a narrative story from a photo stream, which requires a deeper understanding of the event flow in the stream.",
"Park and Kim (2015) has done some pioneering research on storytelling.",
"Chen et al.",
"(2017) proposed a multimodal approach for storyline generation to produce a stream of entities instead of human-like descriptions.",
"Recently, a more sophisticated dataset for visual storytelling (VIST) has been released to explore a more human-like understanding of grounded stories (Huang et al., 2016) .",
"Yu et al.",
"(2017b) proposes a multi-task learning algorithm for both album summarization and paragraph generation, achieving the best results on the VIST dataset.",
"But these methods are still based on behavioral cloning and lack the ability to generate more structured stories.",
"Reinforcement Learning in Sequence Generation Recently, reinforcement learning (RL) has gained its popularity in many sequence generation tasks such as machine translation (Bahdanau et al., 2016) , visual captioning (Ren et al., 2017; Wang et al., 2018b) , summarization (Paulus et al., 2017; Chen et al., 2018) , etc.",
"The common wisdom of using RL is to view generating a word as an action and aim at maximizing the expected return by optimizing its policy.",
"As pointed in (Ranzato et al., 2015) , traditional maximum likelihood algorithm is prone to exposure bias and label bias, while the RL agent exposes the generative model to its own distribution and thus can perform better.",
"But these works usually utilize hand-crafted metric scores as the reward to optimize the model, which fails to learn more implicit semantics due to the limitations of automatic metrics.",
"Rethinking Automatic Metrics Automatic metrics, including BLEU (Papineni et al., 2002) , CIDEr , METEOR (Banerjee and Lavie, 2005), and ROUGE (Lin, 2004) , have been widely applied to the sequence generation tasks.",
"Using automatic metrics can ensure rapid prototyping and testing new models with fewer expensive human evaluation.",
"However, they have been criticized to be biased and correlate poorly with human judgments, especially in many generative tasks like response generation (Lowe et al., 2017; Liu et al., 2016) , dialogue system (Bruni and Fernández, 2017) and machine translation (Callison-Burch et al., 2006) .",
"The naive overlap-counting methods are not able to reflect many semantic properties in natural language, such as coherence, expressiveness, etc.",
"Generative Adversarial Network Generative adversarial network (GAN) (Goodfellow et al., 2014 ) is a very popular approach for estimating intractable probabilities, which sidestep the difficulty by alternately training two models to play a min-max two-player game: min D max G E x∼p data [log D(x)] + E z∼pz [log D(G(z))] , where G is the generator and D is the discriminator, and z is the latent variable.",
"Recently, GAN has quickly been adopted to tackle discrete problems (Yu et al., 2017a; Wang et al., 2018a) .",
"The basic idea is to use Monte Carlo policy gradient estimation (Williams, 1992) to update the parameters of the generator.",
"Inverse Reinforcement Learning Reinforcement learning is known to be hindered by the need for an extensive feature and reward engineering, especially under the unknown dynamics.",
"Therefore, inverse reinforcement learning (IRL) has been proposed to infer expert's reward function.",
"Previous IRL approaches include maximum margin approaches (Abbeel and Ng, 2004; Ratliff et al., 2006) and probabilistic approaches (Ziebart, 2010; Ziebart et al., 2008) .",
"Recently, adversarial inverse reinforcement learning methods provide an efficient and scalable promise for automatic reward acquisition (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017; Henderson et al., 2017) .",
"These approaches utilize the connection between IRL and energy-based model and associate every data with a scalar energy value by using Boltzmann distribution p θ (x) ∝ exp(−E θ (x)).",
"Inspired by these methods, we propose a practical AREL approach for visual storytelling to uncover a robust reward function from human demonstrations and thus help produce human-like stories.",
"3 Our Approach Problem Statement Here we consider the task of visual storytelling, whose objective is to output a word sequence W = (w 1 , w 1 , · · · , w T ), w t ∈ V given an input image stream of 5 ordered images I = (I 1 , I 2 , · · · , I 5 ), where V is the vocabulary of all output token.",
"We formulate the generation as a markov decision process and design a reinforcement learning framework to tackle it.",
"As described in Figure 2 , our AREL framework is mainly composed of two modules: a policy model π β (W ) and a reward model R θ (W ).",
"The policy model takes an image sequence I as the input and performs sequential actions (choosing words w from the vocabulary V) to form a narrative story W .",
"The reward model CNN My brother recently graduated college.",
"It was a formal cap and gown event.",
"My mom and dad attended.",
"Later, my aunt and grandma showed up.",
"When the event was over he even got congratulated by the mascot.",
"Figure 3 : Overview of the policy model.",
"The visual encoder is a bidirectional GRU, which encodes the high-level visual features extracted from the input images.",
"Its outputs are then fed into the RNN decoders to generate sentences in parallel.",
"Encoder Decoder Finally, we concatenate all the generated sentences as a full story.",
"Note that the five decoders share the same weights.",
"is optimized by the adversarial objective (see Section 3.3) and aims at deriving a human-like reward from both human-annotated stories and sampled predictions.",
"Model Policy Model As is shown in Figure 3 , the policy model is a CNN-RNN architecture.",
"We fist feed the photo stream I = (I 1 , · · · , I 5 ) into a pretrained CNN and extract their high-level image features.",
"We then employ a visual encoder to further encode the image features as context vectors h i = [ ← − h i ; − → h i ].",
"The visual encoder is a bidirectional gated recurrent units (GRU).",
"In the decoding stage, we feed each context vector h i into a GRU-RNN decoder to generate a substory W i .",
"Formally, the generation process can be written as: s i t = GRU(s i t−1 , [w i t−1 , h i ]) , (1) π β (w i t |w i 1:t−1 ) = sof tmax(W s s i t + b s ) , (2) where s i t denotes the t-th hidden state of i-th decoder.",
"We concatenate the previous token w i t−1 and the context vector h i as the input.",
"W s and b s are the projection matrix and bias, which output a probability distribution over the whole vocabulary V. Eventually, the final story W is the concatenation of the sub-stories W i .",
"β denotes all the parameters of the encoder, the decoder, and the output layer.",
"Figure 4 : Overview of the reward model.",
"Our reward model is a CNN-based architecture, which utilizes convolution kernels with size 2, 3 and 4 to extract bigram, trigram and 4-gram representations from the input sequence embeddings.",
"Once the sentence representation is learned, it will be concatenated with the visual representation of the input image, and then be fed into the final FC layer to obtain the reward.",
"Reward Model The reward model R θ (W ) is a CNN-based architecture (see Figure 4 ).",
"Instead of giving an overall score for the whole story, we apply the reward model to different story parts (substories) W i and compute partial rewards, where i = 1, · · · , 5.",
"We observe that the partial rewards are more fine-grained and can provide better guidance for the policy model.",
"We first query the word embeddings of the substory (one sentence in most cases).",
"Next, multiple convolutional layers with different kernel sizes are used to extract the n-grams features, which are then projected into the sentence-level representation space by pooling layers (the design here is inspired by Kim (2014) ).",
"In addition to the textual features, evaluating the quality of a story should also consider the image features for relevance.",
"Therefore, we then combine the sentence representation with the visual feature of the input image through concatenation and feed them into the final fully connected decision layer.",
"In the end, the reward model outputs an estimated reward value R θ (W ).",
"The process can be written in formula: R θ (W ) = W r (f conv (W ) + W i I CN N ) + b r , (3) where W r , b r denotes the weights in the output layer, and f conv denotes the operations in CNN.",
"I CN N is the high-level visual feature extracted from the image, and W i projects it into the sentence representation space.",
"θ includes all the pa-rameters above.",
"Learning Reward Boltzmann Distribution In order to associate story distribution with reward function, we apply EBM to define a Reward Boltzmann distribution: p θ (W ) = exp(R θ (W )) Z θ , (4) Where W is the word sequence of the story and p θ (W ) is the approximate data distribution, and Z θ = W exp(R θ (W )) denotes the partition function.",
"According to the energy-based model (Le-Cun et al., 2006) , the optimal reward function R * (W ) is achieved when the Reward-Boltzmann distribution equals to the \"real\" data distribution p θ (W ) = p * (W ).",
"Adversarial Reward Learning We first introduce an empirical distribution p e (W ) = 1(W ∈D) |D| to represent the empirical distribution of the training data, where D denotes the dataset with |D| stories and 1 denotes an indicator function.",
"We use this empirical distribution as the \"good\" examples, which provides the evidence for the reward function to learn from.",
"In order to approximate the Reward Boltzmann distribution towards the \"real\" data distribution p * (W ), we design a min-max two-player game, where the Reward Boltzmann distribution p θ aims at maximizing the its similarity with empirical distribution p e while minimizing that with the \"faked\" data generated from policy model π β .",
"On the contrary, the policy distribution π β tries to maximize its similarity with the Boltzmann distribution p θ .",
"Formally, the adversarial objective function is defined as max β min θ KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) .",
"(5) We further decompose it into two parts.",
"First, because the objective J β of the story generation policy is to minimize its similarity with the Boltzmann distribution p θ , the optimal policy that minimizes KL-divergence is thus π(W ) ∼ exp(R θ (W )), meaning if R θ is optimal, the optimal π β = π * .",
"In formula, J β = − KL(π β (W )||p θ (W )) = E W ∼π β (W ) [R θ (W )] + H(π β (W )) , (6) Algorithm where H denotes the entropy of the policy model.",
"On the other hand, the objective J θ of the reward function is to distinguish between humanannotated stories and machine-generated stories.",
"Hence it is trying to minimize the KL-divergence with the empirical distribution p e and maximize the KL-divergence with the approximated policy distribution π β : J θ =KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) = W [pe(W )R θ (W ) − π β (W )R θ (W )] − H(pe) + H(π β ) , (7) Since H(π β ) and H(p e ) are irrelevant to θ, we denote them as constant C. Therefore, the objective J θ can be further derived as J θ = E W ∼pe(W ) [R θ (W )] − E W ∼π β (W ) [R θ (W )] + C .",
"(8) Here we propose to use stochastic gradient descent to optimize these two models alternately.",
"Formally, the gradients can be written as ∂J θ ∂θ = E W ∼pe(W ) ∂R θ (W ) ∂θ − E W ∼π β (W ) ∂R θ (W ) ∂θ , ∂J β ∂β = E W ∼π β (W ) (R θ (W ) + log π θ (W ) − b) ∂ log π β (W ) ∂β , (9) where b is the estimated baseline to reduce the variance.",
"Training & Testing As described in Algorithm 1, we introduce an alternating algorithm to train these two models using stochastic gradient descent.",
"During testing, the policy model is used with beam search to produce the story.",
"Experiments and Analysis Experimental Setup VIST Dataset The VIST dataset (Huang et al., 2016) is the first dataset for sequential vision-tolanguage tasks including visual storytelling, which consists of 10,117 Flickr albums with 210,819 unique photos.",
"In this paper, we mainly evaluate our AREL method on this dataset.",
"After filtering the broken images 2 , there are 40,098 training, 4,988 validation, and 5,050 testing samples.",
"Each sample contains one story that describes 5 selected images from a photo album (mostly one sentence per image).",
"And the same album is paired with 5 different stories as references.",
"In our experiments, we used the same split settings as in (Huang et al., 2016; Yu et al., 2017b) for a fair comparison.",
"Evaluation Metrics In order to comprehensively evaluate our method on storytelling dataset, we adopted both the automatic metrics and human evaluation as our criterion.",
"Four diverse automatic metrics were used in our experiments: BLEU, METEOR, ROUGE-L, and CIDEr.",
"We utilized the open source evaluation code 3 used in (Yu et al., 2017b) .",
"For human evaluation, we employed the Amazon Mechanical Turk to perform two kinds of user studies (see Section 4.3 for more details).",
"Training Details We employ pretrained ResNet-152 model to extract image features from the photo stream.",
"We built a vocabulary of size 9,837 to include words appearing more than three times in the training set.",
"More training details can be found at Appendix B.",
"Automatic Evaluation In this section, we compare our AREL method with the state-of-the-art methods as well as standard reinforcement learning algorithms on automatic evaluation metrics.",
"Then we further discuss the limitations of the hand-crafted metrics on evaluating human-like stories.",
"Comparison with SOTA on Automatic Metrics In Table 1 , we compare our method with Huang et al.",
"(2016) and Yu et al.",
"(2017b) , which report achieving best-known results on the VIST dataset.",
"We first implement a strong baseline model (XEss), which share the same architecture with our policy model but is trained with cross-entropy loss and scheduled sampling.",
"Besides, we adopt the traditional generative adversarial training for comparison (GAN).",
"As shown in Table 1 , our XEss model already outperforms the best-known re- Table 1 : Automatic evaluation on the VIST dataset.",
"We report BLEU (B), METEOR (M), ROUGH-L (R), and CIDEr (C) scores of the SOTA systems and the models we implemented, including XE-ss, GAN and AREL.",
"AREL-s-N denotes AREL models with sigmoid as output activation and alternate frequency as N, while ARELt-N denoting AREL models with tahn as the output activation (N = 50 or 100).",
"sults on the VIST dataset, and the GAN model can bring a performance boost.",
"We then use the XEss model to initialize our policy model and further train it with AREL.",
"Evidently, our AREL model performs the best and achieves the new state-ofthe-art results across all metrics.",
"But, compared with the XE-ss model, the performance gain is minor, especially on METEOR and ROUGE-L scores.",
"However, in Sec.",
"4.3, the extensive human evaluation has indicated that our AREL framework brings a significant improvement on generating human-like stories over the XE-ss model.",
"The inconsistency of automatic evaluation and human evaluation lead to a suspect that these hand-crafted metrics lack the ability to fully evaluate stories' quality due to the complicated characteristics of the stories.",
"Therefore, we conduct experiments to analyze and discuss the defects of the automatic metrics in section 4.2.",
"Limitations of Automatic Metrics As we claimed in the introduction, string-match-based automatic metrics are not perfect and fail to evaluate some semantic characteristics of the stories, like the expressiveness and coherence of the stories.",
"In order to confirm our conjecture, we utilize automatic metrics as rewards to reinforce the visual storytelling model by adopting policy gradient with baseline to train the policy model.",
"The quantitative results are demonstrated in Table 1 .",
"Apparently, METEOR-RL and ROUGE-RL are severely ill-posed: they obtain the highest scores on their own metrics but damage the other met- Table 2 : Comparison with different RL models with different metric scores as the rewards.",
"We report the average scores of the AREL models as AREL (avg).",
"Although METEOR-RL and ROUGE-RL models achieve very high scores on their own metrics, the underlined scores are severely damaged.",
"Actually, they are gaming their own metrics with nonsense sentences.",
"rics severely.",
"We observe that these models are actually overfitting to a given metric while losing the overall coherence and semantical correctness.",
"Same as METEOR score, there is also an adversarial example for ROUGE-L 4 , which is nonsense but achieves an average ROUGE-L score of 33.8.",
"Besides, as can be seen in Table 1 , after reinforced training, BLEU-RL and CIDEr-RL do not bring a consistent improvement over the XE-ss model.",
"We plot the histogram distributions of both BLEU-3 and CIDEr scores on the test set in Figure 5 .",
"An interesting fact is that there are a large number of samples with nearly zero score on both metrics.",
"However, we observed those \"zero-score\" samples are not pointless results; instead, lots of them make sense and deserve a better score than zero.",
"Here is a \"zero-score\" example on BLEU-3: I had a great time at the restaurant today.",
"The food was delicious.",
"I had a lot of food.",
"The food was delicious.",
"T had a great time.",
"The corresponding reference is The table of food was a pleasure to see!",
"Our food is both nutritious and beautiful!",
"Our chicken was especially tasty!",
"We love greens as they taste great and are healthy!",
"The fruit was a colorful display that tantalized our palette.. theme \"food and eating\", which showcases the defeats of using BLEU and CIDEr scores as a reward for RL training.",
"Moreover, we compare the human evaluation scores with these two metric scores in Figure 5 .",
"Noticeably, both BLEU-3 and CIDEr have a poor correlation with the human evaluation scores.",
"Their distributions are more biased and thus cannot fully reflect the quality of the generated stories.",
"In terms of BLEU, it is extremely hard for machines to produce the exact 3-gram or 4-gram matching, so the scores are too low to provide useful guidance.",
"CIDEr measures the similarity of a sentence to the majority of the references.",
"However, the references to the same image sequence are photostream different from each other, so the score is very low and not suitable for this task.",
"In contrast, our AREL framework can lean a more robust reward function from human-annotated stories, which is able to provide better guidance to the policy and thus improves its performances over different metrics.",
"Although the prediction is not as good as the reference, it is actually coherent and relevant to the Comparison with GAN We here compare our method with traditional GAN (Goodfellow et al., 2014) , the update rule for generator can be generally classified into two categories.",
"We demonstrate their corresponding objectives and ours as follows: GAN 1 : J β = E W ∼p β [− log R θ (W )] , GAN 2 : J β = E W ∼p β [log(1 − R θ (W ))] , ours : J β = E W ∼p β [−R θ (W )] .",
"As discussed in Arjovsky et al.",
"(2017) , GAN 1 is prone to the unstable gradient issue and GAN 2 is prone to the vanishing gradient issue.",
"Analytically, our method does not suffer from these two common issues and thus is able converge to optimum solutions more easily.",
"From Table 1 , we can observe slight gains of using AREL over GAN Figure 5 : Metric score distributions.",
"We plot the histogram distributions of BLEU-3 and CIDEr scores on the test set, as well as the human evaluation score distribution on the test samples.",
"For a fair comparison, we use the Turing test results to calculate the human evaluation scores (see Section 4.3).",
"Basically, 0.2 score is given if the generated story wins the Turing test, 0.1 for tie, and 0 if losing.",
"Each sample has 5 scores from 5 judges, and we use the sum as the human evaluation score, so it is in the range [0, 1].",
"with automatic metrics, therefore we further deploy human evaluation for a better comparison.",
"Human Evaluation Automatic metrics cannot fully evaluate the capability of our AREL method.",
"Therefore, we perform two different kinds of human evaluation studies on Amazon Mechanical Turk: Turing test and pairwise human evaluation.",
"For both tasks, we use 150 stories (750 images) sampled from the test set, each assigned to 5 workers to eliminate human variance.",
"We batch six items as one assignment and insert an additional assignment as a sanity check.",
"Besides, the order of the options within each item is shuffled to make a fair comparison.",
"Turing Test We first conduct five independent Turing tests for XE-ss, BLEU-RL, CIDEr-RL, GAN, and AREL models, during which the worker is given one human-annotated sample and one machine-generated sample, and needs to decide which is human-annotated.",
"As shown in Table 3, our AREL model significantly outperforms all the other baseline models in the Turing test: it has much more chances to fool AMT worker (the ratio is AREL:XE-ss:BLEU-RL:CIDEr-RL:GAN = 45.8%:28.3%:32.1%:19.7%:39.5%), which confirms the superiority of our AREL framework in generating human-like stories.",
"Unlike automatic metric evaluation, the Turing test has indicated a much larger margin between AREL and other competing algorithms.",
"Thus, we empirically confirm that metrics are not perfect in evaluating many implicit semantic properties of natural language.",
"Besides, the Turing test of our AREL model reveals that nearly half of the workers are fooled by our machine generation, indicating a preliminary success toward generating human-like stories.",
"Pairwise Comparison In order to have a clear comparison with competing algorithms with respect to different semantic features of the stories, we further perform four pairwise comparison tests: AREL vs XE-ss/BLEU-RL/CIDEr-RL/GAN.",
"For each photo stream, the worker is presented with two generated stories and asked to make decisions from the three aspects: relevance 5 , expressiveness 6 and concreteness 7 .",
"This head-tohead compete is designed to help us understand in what aspect our model outperforms the competing algorithms, which is displayed in Table 4 .",
"Consistently on all the three comparisons, a large majority of the AREL stories trumps the competing systems with respect to their relevance, XE-ss We took a trip to the mountains.",
"There were many different kinds of different kinds.",
"We had a great time.",
"He was a great time.",
"It was a beautiful day.",
"AREL The family decided to take a trip to the countryside.",
"There were so many different kinds of things to see.",
"The family decided to go on a hike.",
"I had a great time.",
"At the end of the day, we were able to take a picture of the beautiful scenery.",
"Humancreated Story We went on a hike yesterday.",
"There were a lot of strange plants there.",
"I had a great time.",
"We drank a lot of water while we were hiking.",
"The view was spectacular.",
"expressiveness, and concreteness.",
"Therefore, it empirically confirms that our generated stories are more relevant to the image sequences, more coherent and concrete than the other algorithms, which however is not explicitly reflected by the automatic metric evaluation.",
"Figure 6 gives a qualitative comparison example between AREL and XE-ss models.",
"Looking at the individual sentences, it is obvious that our results are more grammatically and semantically correct.",
"Then connecting the sentences together, we observe that the AREL story is more coherent and describes the photo stream more accurately.",
"Thus, our AREL model significantly surpasses the XEss model on all the three aspects of the qualitative example.",
"Besides, it won the Turing test (3 out 5 AMT workers think the AREL story is created by a human).",
"In the appendix, we also show a negative case that fails the Turing test.",
"Qualitative Analysis Conclusion In this paper, we not only introduce a novel adversarial reward learning algorithm to generate more human-like stories given image sequences, but also empirically analyze the limitations of the automatic metrics for story evaluation.",
"We believe there are still lots of improvement space in the narrative paragraph generation tasks, like how to better simulate human imagination to create more vivid and diversified stories."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"Model",
"Learning",
"Experimental Setup",
"Automatic Evaluation",
"Human Evaluation",
"Conclusion"
]
} | GEM-SciDuet-train-22#paper-1021#slide-4 | Adversarial REward Learning (AREL) | Reward Model, Story, Policy Model | Reward Model, Story, Policy Model | [] |
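The slide's three boxes (reward model, story, policy model) summarize the alternating loop of the paper's Algorithm 1; the sketch below is our reconstruction under assumed `policy` and `reward_model` interfaces, not the authors' released implementation.

```python
import torch

def arel_round(policy, reward_model, human_stories, images,
               policy_opt, reward_opt):
    """One alternating AREL round: a reward step, then a policy step.

    Assumed interfaces: policy.sample(images) returns (token_ids,
    summed log-probabilities); reward_model(stories, images) returns
    one scalar reward per story.
    """
    # Reward step: push R_theta up on human stories, down on samples.
    sampled, _ = policy.sample(images)
    reward_loss = -(reward_model(human_stories, images).mean()
                    - reward_model(sampled, images).mean())
    reward_opt.zero_grad()
    reward_loss.backward()
    reward_opt.step()

    # Policy step: REINFORCE toward higher learned reward.
    sampled, log_prob = policy.sample(images)
    with torch.no_grad():
        r = reward_model(sampled, images)
    baseline = r.mean()  # simple variance-reduction baseline
    policy_loss = -((r - baseline) * log_prob).mean()
    policy_opt.zero_grad()
    policy_loss.backward()
    policy_opt.step()
```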
GEM-SciDuet-train-22#paper-1021#slide-5 | 1021 | No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling | Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challenges to behavioral cloning algorithms. Furthermore, due to the limitations of automatic metrics on evaluating story quality, reinforcement learning methods with hand-crafted rewards also face difficulties in gaining an overall performance boost. Therefore, we propose an Adversarial REward Learning (AREL) framework to learn an implicit reward function from human demonstrations, and then optimize policy search with the learned reward function. Though automatic evaluation indicates a slight performance boost over state-of-the-art (SOTA) methods in cloning expert behaviors, human evaluation shows that our approach achieves significant improvement in generating more human-like stories than SOTA systems. Code will be made available at https://github.com/littlekobe/AREL (* equal contribution). | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237
],
"paper_content_text": [
"Introduction Recently, increasing attention has been focused on visual captioning (Chen et al., 2015; Wang et al., 2018c) , which aims at describing the content of an image or a video.",
"Though it has achieved impressive results, its capability of performing human-like understanding is still restrictive.",
"To further investigate machine's capa-Story #1: The brother and sister were ready for the first day of school.",
"They were excited to go to their first day and meet new friends.",
"They told their mom how happy they were.",
"They said they were going to make a lot of new friends .",
"Then they got up and got ready to get in the car .",
"Story #2: The brother did not want to talk to his sister.",
"The siblings made up.",
"They started to talk and smile.",
"Their parents showed up.",
"They were happy to see them.",
"shown here: each image is captioned with one sentence, and we also demonstrate two diversified stories that match the same image sequence.",
"bilities in understanding more complicated visual scenarios and composing more structured expressions, visual storytelling (Huang et al., 2016) has been proposed.",
"Visual captioning is aimed at depicting the concrete content of the images, and its expression style is rather simple.",
"In contrast, visual storytelling goes one step further: it summarizes the idea of a photo stream and tells a story about it.",
"Figure 1 shows an example of visual captioning and visual storytelling.",
"We have observed that stories contain rich emotions (excited, happy, not want) and imagination (siblings, parents, school, car) .",
"It, therefore, requires the capability to associate with concepts that do not explicitly appear in the images.",
"Moreover, stories are more subjective, so there barely exists standard templates for storytelling.",
"As shown in Figure 1 , the same photo stream can be paired with diverse stories, different from each other.",
"This heavily increases the evaluation difficulty.",
"So far, prior work for visual storytelling (Huang et al., 2016; Yu et al., 2017b) is mainly inspired by the success of visual captioning.",
"Nevertheless, because these methods are trained by maximizing the likelihood of the observed data pairs, they are restricted to generate simple and plain description with limited expressive patterns.",
"In order to cope with the challenges and produce more human-like descriptions, Rennie et al.",
"(2016) have proposed a reinforcement learning framework.",
"However, in the scenario of visual storytelling, the common reinforced captioning methods are facing great challenges since the hand-crafted rewards based on string matches are either too biased or too sparse to drive the policy search.",
"For instance, we used the METEOR (Banerjee and Lavie, 2005) score as the reward to reinforce our policy and found that though the METEOR score is significantly improved, the other scores are severely harmed.",
"Here we showcase an adversarial example with an average METEOR score as high as 40.2: We had a great time to have a lot of the.",
"They were to be a of the.",
"They were to be in the.",
"The and it were to be the.",
"The, and it were to be the.",
"Apparently, the machine is gaming the metrics.",
"Conversely, when using some other metrics (e.g.",
"BLEU, CIDEr) to evaluate the stories, we observe an opposite behavior: many relevant and coherent stories are receiving a very low score (nearly zero).",
"In order to resolve the strong bias brought by the hand-coded evaluation metrics in RL training and produce more human-like stories, we propose an Adversarial REward Learning (AREL) framework for visual storytelling.",
"We draw our inspiration from recent progress in inverse reinforcement learning (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017) and propose the AREL algorithm to learn a more intelligent reward function.",
"Specifically, we first incorporate a Boltzmann distribution to associate reward learning with distribution approximation, then design the adversarial process with two models -a policy model and a reward model.",
"The policy model performs the primitive actions and produces the story sequence, while the reward model is responsible for learning the implicit reward function from human demonstrations.",
"The learned reward function would be employed to optimize the policy in return.",
"For evaluation, we conduct both automatic metrics and human evaluation but observe a poor correlation between them.",
"Particularly, our method gains slight performance boost over the baseline systems on automatic metrics; human evaluation, however, indicates significant performance boost.",
"Thus we further discuss the limitations of the metrics and validate the superiority of our AREL method in performing more intelligent understanding of the visual scenes and generating more human-like stories.",
"Our main contributions are four-fold: • We propose an adversarial reward learning framework and apply it to boost visual story generation.",
"• We evaluate our approach on the Visual Storytelling (VIST) dataset and achieve the state-of-the-art results on automatic metrics.",
"• We empirically demonstrate that automatic metrics are not perfect for either training or evaluation.",
"• We design and perform a comprehensive human evaluation via Amazon Mechanical Turk, which demonstrates the superiority of the generated stories of our method on relevance, expressiveness, and concreteness.",
"Related Work Visual Storytelling Visual storytelling is the task of generating a narrative story from a photo stream, which requires a deeper understanding of the event flow in the stream.",
"Park and Kim (2015) has done some pioneering research on storytelling.",
"Chen et al.",
"(2017) proposed a multimodal approach for storyline generation to produce a stream of entities instead of human-like descriptions.",
"Recently, a more sophisticated dataset for visual storytelling (VIST) has been released to explore a more human-like understanding of grounded stories (Huang et al., 2016) .",
"Yu et al.",
"(2017b) proposes a multi-task learning algorithm for both album summarization and paragraph generation, achieving the best results on the VIST dataset.",
"But these methods are still based on behavioral cloning and lack the ability to generate more structured stories.",
"Reinforcement Learning in Sequence Generation Recently, reinforcement learning (RL) has gained its popularity in many sequence generation tasks such as machine translation (Bahdanau et al., 2016) , visual captioning (Ren et al., 2017; Wang et al., 2018b) , summarization (Paulus et al., 2017; Chen et al., 2018) , etc.",
"The common wisdom of using RL is to view generating a word as an action and aim at maximizing the expected return by optimizing its policy.",
"As pointed in (Ranzato et al., 2015) , traditional maximum likelihood algorithm is prone to exposure bias and label bias, while the RL agent exposes the generative model to its own distribution and thus can perform better.",
"But these works usually utilize hand-crafted metric scores as the reward to optimize the model, which fails to learn more implicit semantics due to the limitations of automatic metrics.",
"Rethinking Automatic Metrics Automatic metrics, including BLEU (Papineni et al., 2002) , CIDEr , METEOR (Banerjee and Lavie, 2005), and ROUGE (Lin, 2004) , have been widely applied to the sequence generation tasks.",
"Using automatic metrics can ensure rapid prototyping and testing new models with fewer expensive human evaluation.",
"However, they have been criticized to be biased and correlate poorly with human judgments, especially in many generative tasks like response generation (Lowe et al., 2017; Liu et al., 2016) , dialogue system (Bruni and Fernández, 2017) and machine translation (Callison-Burch et al., 2006) .",
"The naive overlap-counting methods are not able to reflect many semantic properties in natural language, such as coherence, expressiveness, etc.",
"Generative Adversarial Network Generative adversarial network (GAN) (Goodfellow et al., 2014 ) is a very popular approach for estimating intractable probabilities, which sidestep the difficulty by alternately training two models to play a min-max two-player game: min D max G E x∼p data [log D(x)] + E z∼pz [log D(G(z))] , where G is the generator and D is the discriminator, and z is the latent variable.",
"Recently, GAN has quickly been adopted to tackle discrete problems (Yu et al., 2017a; Wang et al., 2018a) .",
"The basic idea is to use Monte Carlo policy gradient estimation (Williams, 1992) to update the parameters of the generator.",
"Inverse Reinforcement Learning Reinforcement learning is known to be hindered by the need for an extensive feature and reward engineering, especially under the unknown dynamics.",
"Therefore, inverse reinforcement learning (IRL) has been proposed to infer expert's reward function.",
"Previous IRL approaches include maximum margin approaches (Abbeel and Ng, 2004; Ratliff et al., 2006) and probabilistic approaches (Ziebart, 2010; Ziebart et al., 2008) .",
"Recently, adversarial inverse reinforcement learning methods provide an efficient and scalable promise for automatic reward acquisition (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017; Henderson et al., 2017) .",
"These approaches utilize the connection between IRL and energy-based model and associate every data with a scalar energy value by using Boltzmann distribution p θ (x) ∝ exp(−E θ (x)).",
"Inspired by these methods, we propose a practical AREL approach for visual storytelling to uncover a robust reward function from human demonstrations and thus help produce human-like stories.",
"3 Our Approach Problem Statement Here we consider the task of visual storytelling, whose objective is to output a word sequence W = (w 1 , w 1 , · · · , w T ), w t ∈ V given an input image stream of 5 ordered images I = (I 1 , I 2 , · · · , I 5 ), where V is the vocabulary of all output token.",
"We formulate the generation as a markov decision process and design a reinforcement learning framework to tackle it.",
"As described in Figure 2 , our AREL framework is mainly composed of two modules: a policy model π β (W ) and a reward model R θ (W ).",
"The policy model takes an image sequence I as the input and performs sequential actions (choosing words w from the vocabulary V) to form a narrative story W .",
"The reward model CNN My brother recently graduated college.",
"It was a formal cap and gown event.",
"My mom and dad attended.",
"Later, my aunt and grandma showed up.",
"When the event was over he even got congratulated by the mascot.",
"Figure 3 : Overview of the policy model.",
"The visual encoder is a bidirectional GRU, which encodes the high-level visual features extracted from the input images.",
"Its outputs are then fed into the RNN decoders to generate sentences in parallel.",
"Encoder Decoder Finally, we concatenate all the generated sentences as a full story.",
"Note that the five decoders share the same weights.",
"is optimized by the adversarial objective (see Section 3.3) and aims at deriving a human-like reward from both human-annotated stories and sampled predictions.",
"Model Policy Model As is shown in Figure 3 , the policy model is a CNN-RNN architecture.",
"We fist feed the photo stream I = (I 1 , · · · , I 5 ) into a pretrained CNN and extract their high-level image features.",
"We then employ a visual encoder to further encode the image features as context vectors h i = [ ← − h i ; − → h i ].",
"The visual encoder is a bidirectional gated recurrent units (GRU).",
"In the decoding stage, we feed each context vector h i into a GRU-RNN decoder to generate a substory W i .",
"Formally, the generation process can be written as: s i t = GRU(s i t−1 , [w i t−1 , h i ]) , (1) π β (w i t |w i 1:t−1 ) = sof tmax(W s s i t + b s ) , (2) where s i t denotes the t-th hidden state of i-th decoder.",
"We concatenate the previous token w i t−1 and the context vector h i as the input.",
"W s and b s are the projection matrix and bias, which output a probability distribution over the whole vocabulary V. Eventually, the final story W is the concatenation of the sub-stories W i .",
"β denotes all the parameters of the encoder, the decoder, and the output layer.",
"Figure 4 : Overview of the reward model.",
"Our reward model is a CNN-based architecture, which utilizes convolution kernels with size 2, 3 and 4 to extract bigram, trigram and 4-gram representations from the input sequence embeddings.",
"Once the sentence representation is learned, it will be concatenated with the visual representation of the input image, and then be fed into the final FC layer to obtain the reward.",
"Reward Model The reward model R θ (W ) is a CNN-based architecture (see Figure 4 ).",
"Instead of giving an overall score for the whole story, we apply the reward model to different story parts (substories) W i and compute partial rewards, where i = 1, · · · , 5.",
"We observe that the partial rewards are more fine-grained and can provide better guidance for the policy model.",
"We first query the word embeddings of the substory (one sentence in most cases).",
"Next, multiple convolutional layers with different kernel sizes are used to extract the n-grams features, which are then projected into the sentence-level representation space by pooling layers (the design here is inspired by Kim (2014) ).",
"In addition to the textual features, evaluating the quality of a story should also consider the image features for relevance.",
"Therefore, we then combine the sentence representation with the visual feature of the input image through concatenation and feed them into the final fully connected decision layer.",
"In the end, the reward model outputs an estimated reward value R θ (W ).",
"The process can be written in formula: R θ (W ) = W r (f conv (W ) + W i I CN N ) + b r , (3) where W r , b r denotes the weights in the output layer, and f conv denotes the operations in CNN.",
"I CN N is the high-level visual feature extracted from the image, and W i projects it into the sentence representation space.",
"θ includes all the pa-rameters above.",
"Learning Reward Boltzmann Distribution In order to associate story distribution with reward function, we apply EBM to define a Reward Boltzmann distribution: p θ (W ) = exp(R θ (W )) Z θ , (4) Where W is the word sequence of the story and p θ (W ) is the approximate data distribution, and Z θ = W exp(R θ (W )) denotes the partition function.",
"According to the energy-based model (Le-Cun et al., 2006) , the optimal reward function R * (W ) is achieved when the Reward-Boltzmann distribution equals to the \"real\" data distribution p θ (W ) = p * (W ).",
"Adversarial Reward Learning We first introduce an empirical distribution p e (W ) = 1(W ∈D) |D| to represent the empirical distribution of the training data, where D denotes the dataset with |D| stories and 1 denotes an indicator function.",
"We use this empirical distribution as the \"good\" examples, which provides the evidence for the reward function to learn from.",
"In order to approximate the Reward Boltzmann distribution towards the \"real\" data distribution p * (W ), we design a min-max two-player game, where the Reward Boltzmann distribution p θ aims at maximizing the its similarity with empirical distribution p e while minimizing that with the \"faked\" data generated from policy model π β .",
"On the contrary, the policy distribution π β tries to maximize its similarity with the Boltzmann distribution p θ .",
"Formally, the adversarial objective function is defined as max β min θ KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) .",
"(5) We further decompose it into two parts.",
"First, because the objective J β of the story generation policy is to minimize its similarity with the Boltzmann distribution p θ , the optimal policy that minimizes KL-divergence is thus π(W ) ∼ exp(R θ (W )), meaning if R θ is optimal, the optimal π β = π * .",
"In formula, J β = − KL(π β (W )||p θ (W )) = E W ∼π β (W ) [R θ (W )] + H(π β (W )) , (6) Algorithm where H denotes the entropy of the policy model.",
"On the other hand, the objective J θ of the reward function is to distinguish between humanannotated stories and machine-generated stories.",
"Hence it is trying to minimize the KL-divergence with the empirical distribution p e and maximize the KL-divergence with the approximated policy distribution π β : J θ =KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) = W [pe(W )R θ (W ) − π β (W )R θ (W )] − H(pe) + H(π β ) , (7) Since H(π β ) and H(p e ) are irrelevant to θ, we denote them as constant C. Therefore, the objective J θ can be further derived as J θ = E W ∼pe(W ) [R θ (W )] − E W ∼π β (W ) [R θ (W )] + C .",
"(8) Here we propose to use stochastic gradient descent to optimize these two models alternately.",
"Formally, the gradients can be written as ∂J θ ∂θ = E W ∼pe(W ) ∂R θ (W ) ∂θ − E W ∼π β (W ) ∂R θ (W ) ∂θ , ∂J β ∂β = E W ∼π β (W ) (R θ (W ) + log π θ (W ) − b) ∂ log π β (W ) ∂β , (9) where b is the estimated baseline to reduce the variance.",
"Training & Testing As described in Algorithm 1, we introduce an alternating algorithm to train these two models using stochastic gradient descent.",
"During testing, the policy model is used with beam search to produce the story.",
"Experiments and Analysis Experimental Setup VIST Dataset The VIST dataset (Huang et al., 2016) is the first dataset for sequential vision-tolanguage tasks including visual storytelling, which consists of 10,117 Flickr albums with 210,819 unique photos.",
"In this paper, we mainly evaluate our AREL method on this dataset.",
"After filtering the broken images 2 , there are 40,098 training, 4,988 validation, and 5,050 testing samples.",
"Each sample contains one story that describes 5 selected images from a photo album (mostly one sentence per image).",
"And the same album is paired with 5 different stories as references.",
"In our experiments, we used the same split settings as in (Huang et al., 2016; Yu et al., 2017b) for a fair comparison.",
"Evaluation Metrics In order to comprehensively evaluate our method on storytelling dataset, we adopted both the automatic metrics and human evaluation as our criterion.",
"Four diverse automatic metrics were used in our experiments: BLEU, METEOR, ROUGE-L, and CIDEr.",
"We utilized the open source evaluation code 3 used in (Yu et al., 2017b) .",
"For human evaluation, we employed the Amazon Mechanical Turk to perform two kinds of user studies (see Section 4.3 for more details).",
"Training Details We employ pretrained ResNet-152 model to extract image features from the photo stream.",
"We built a vocabulary of size 9,837 to include words appearing more than three times in the training set.",
"More training details can be found at Appendix B.",
"Automatic Evaluation In this section, we compare our AREL method with the state-of-the-art methods as well as standard reinforcement learning algorithms on automatic evaluation metrics.",
"Then we further discuss the limitations of the hand-crafted metrics on evaluating human-like stories.",
"Comparison with SOTA on Automatic Metrics In Table 1 , we compare our method with Huang et al.",
"(2016) and Yu et al.",
"(2017b) , which report achieving best-known results on the VIST dataset.",
"We first implement a strong baseline model (XEss), which share the same architecture with our policy model but is trained with cross-entropy loss and scheduled sampling.",
"Besides, we adopt the traditional generative adversarial training for comparison (GAN).",
"As shown in Table 1 , our XEss model already outperforms the best-known re- Table 1 : Automatic evaluation on the VIST dataset.",
"We report BLEU (B), METEOR (M), ROUGH-L (R), and CIDEr (C) scores of the SOTA systems and the models we implemented, including XE-ss, GAN and AREL.",
"AREL-s-N denotes AREL models with sigmoid as output activation and alternate frequency as N, while ARELt-N denoting AREL models with tahn as the output activation (N = 50 or 100).",
"sults on the VIST dataset, and the GAN model can bring a performance boost.",
"We then use the XEss model to initialize our policy model and further train it with AREL.",
"Evidently, our AREL model performs the best and achieves the new state-ofthe-art results across all metrics.",
"But, compared with the XE-ss model, the performance gain is minor, especially on METEOR and ROUGE-L scores.",
"However, in Sec.",
"4.3, the extensive human evaluation has indicated that our AREL framework brings a significant improvement on generating human-like stories over the XE-ss model.",
"The inconsistency of automatic evaluation and human evaluation lead to a suspect that these hand-crafted metrics lack the ability to fully evaluate stories' quality due to the complicated characteristics of the stories.",
"Therefore, we conduct experiments to analyze and discuss the defects of the automatic metrics in section 4.2.",
"Limitations of Automatic Metrics As we claimed in the introduction, string-match-based automatic metrics are not perfect and fail to evaluate some semantic characteristics of the stories, like the expressiveness and coherence of the stories.",
"In order to confirm our conjecture, we utilize automatic metrics as rewards to reinforce the visual storytelling model by adopting policy gradient with baseline to train the policy model.",
"The quantitative results are demonstrated in Table 1 .",
"Apparently, METEOR-RL and ROUGE-RL are severely ill-posed: they obtain the highest scores on their own metrics but damage the other met- Table 2 : Comparison with different RL models with different metric scores as the rewards.",
"We report the average scores of the AREL models as AREL (avg).",
"Although METEOR-RL and ROUGE-RL models achieve very high scores on their own metrics, the underlined scores are severely damaged.",
"Actually, they are gaming their own metrics with nonsense sentences.",
"rics severely.",
"We observe that these models are actually overfitting to a given metric while losing the overall coherence and semantical correctness.",
"Same as METEOR score, there is also an adversarial example for ROUGE-L 4 , which is nonsense but achieves an average ROUGE-L score of 33.8.",
"Besides, as can be seen in Table 1 , after reinforced training, BLEU-RL and CIDEr-RL do not bring a consistent improvement over the XE-ss model.",
"We plot the histogram distributions of both BLEU-3 and CIDEr scores on the test set in Figure 5 .",
"An interesting fact is that there are a large number of samples with nearly zero score on both metrics.",
"However, we observed those \"zero-score\" samples are not pointless results; instead, lots of them make sense and deserve a better score than zero.",
"Here is a \"zero-score\" example on BLEU-3: I had a great time at the restaurant today.",
"The food was delicious.",
"I had a lot of food.",
"The food was delicious.",
"T had a great time.",
"The corresponding reference is The table of food was a pleasure to see!",
"Our food is both nutritious and beautiful!",
"Our chicken was especially tasty!",
"We love greens as they taste great and are healthy!",
"The fruit was a colorful display that tantalized our palette.. theme \"food and eating\", which showcases the defeats of using BLEU and CIDEr scores as a reward for RL training.",
"Moreover, we compare the human evaluation scores with these two metric scores in Figure 5 .",
"Noticeably, both BLEU-3 and CIDEr have a poor correlation with the human evaluation scores.",
"Their distributions are more biased and thus cannot fully reflect the quality of the generated stories.",
"In terms of BLEU, it is extremely hard for machines to produce the exact 3-gram or 4-gram matching, so the scores are too low to provide useful guidance.",
"CIDEr measures the similarity of a sentence to the majority of the references.",
"However, the references to the same image sequence are photostream different from each other, so the score is very low and not suitable for this task.",
"In contrast, our AREL framework can lean a more robust reward function from human-annotated stories, which is able to provide better guidance to the policy and thus improves its performances over different metrics.",
"Although the prediction is not as good as the reference, it is actually coherent and relevant to the Comparison with GAN We here compare our method with traditional GAN (Goodfellow et al., 2014) , the update rule for generator can be generally classified into two categories.",
"We demonstrate their corresponding objectives and ours as follows: GAN 1 : J β = E W ∼p β [− log R θ (W )] , GAN 2 : J β = E W ∼p β [log(1 − R θ (W ))] , ours : J β = E W ∼p β [−R θ (W )] .",
"As discussed in Arjovsky et al.",
"(2017) , GAN 1 is prone to the unstable gradient issue and GAN 2 is prone to the vanishing gradient issue.",
"Analytically, our method does not suffer from these two common issues and thus is able converge to optimum solutions more easily.",
"From Table 1 , we can observe slight gains of using AREL over GAN Figure 5 : Metric score distributions.",
"We plot the histogram distributions of BLEU-3 and CIDEr scores on the test set, as well as the human evaluation score distribution on the test samples.",
"For a fair comparison, we use the Turing test results to calculate the human evaluation scores (see Section 4.3).",
"Basically, 0.2 score is given if the generated story wins the Turing test, 0.1 for tie, and 0 if losing.",
"Each sample has 5 scores from 5 judges, and we use the sum as the human evaluation score, so it is in the range [0, 1].",
"with automatic metrics, therefore we further deploy human evaluation for a better comparison.",
"Human Evaluation Automatic metrics cannot fully evaluate the capability of our AREL method.",
"Therefore, we perform two different kinds of human evaluation studies on Amazon Mechanical Turk: Turing test and pairwise human evaluation.",
"For both tasks, we use 150 stories (750 images) sampled from the test set, each assigned to 5 workers to eliminate human variance.",
"We batch six items as one assignment and insert an additional assignment as a sanity check.",
"Besides, the order of the options within each item is shuffled to make a fair comparison.",
"Turing Test We first conduct five independent Turing tests for XE-ss, BLEU-RL, CIDEr-RL, GAN, and AREL models, during which the worker is given one human-annotated sample and one machine-generated sample, and needs to decide which is human-annotated.",
"As shown in Table 3, our AREL model significantly outperforms all the other baseline models in the Turing test: it has much more chances to fool AMT worker (the ratio is AREL:XE-ss:BLEU-RL:CIDEr-RL:GAN = 45.8%:28.3%:32.1%:19.7%:39.5%), which confirms the superiority of our AREL framework in generating human-like stories.",
"Unlike automatic metric evaluation, the Turing test has indicated a much larger margin between AREL and other competing algorithms.",
"Thus, we empirically confirm that metrics are not perfect in evaluating many implicit semantic properties of natural language.",
"Besides, the Turing test of our AREL model reveals that nearly half of the workers are fooled by our machine generation, indicating a preliminary success toward generating human-like stories.",
"Pairwise Comparison In order to have a clear comparison with competing algorithms with respect to different semantic features of the stories, we further perform four pairwise comparison tests: AREL vs XE-ss/BLEU-RL/CIDEr-RL/GAN.",
"For each photo stream, the worker is presented with two generated stories and asked to make decisions from the three aspects: relevance 5 , expressiveness 6 and concreteness 7 .",
"This head-tohead compete is designed to help us understand in what aspect our model outperforms the competing algorithms, which is displayed in Table 4 .",
"Consistently on all the three comparisons, a large majority of the AREL stories trumps the competing systems with respect to their relevance, XE-ss We took a trip to the mountains.",
"There were many different kinds of different kinds.",
"We had a great time.",
"He was a great time.",
"It was a beautiful day.",
"AREL The family decided to take a trip to the countryside.",
"There were so many different kinds of things to see.",
"The family decided to go on a hike.",
"I had a great time.",
"At the end of the day, we were able to take a picture of the beautiful scenery.",
"Humancreated Story We went on a hike yesterday.",
"There were a lot of strange plants there.",
"I had a great time.",
"We drank a lot of water while we were hiking.",
"The view was spectacular.",
"expressiveness, and concreteness.",
"Therefore, it empirically confirms that our generated stories are more relevant to the image sequences, more coherent and concrete than the other algorithms, which however is not explicitly reflected by the automatic metric evaluation.",
"Figure 6 gives a qualitative comparison example between AREL and XE-ss models.",
"Looking at the individual sentences, it is obvious that our results are more grammatically and semantically correct.",
"Then connecting the sentences together, we observe that the AREL story is more coherent and describes the photo stream more accurately.",
"Thus, our AREL model significantly surpasses the XEss model on all the three aspects of the qualitative example.",
"Besides, it won the Turing test (3 out 5 AMT workers think the AREL story is created by a human).",
"In the appendix, we also show a negative case that fails the Turing test.",
"Qualitative Analysis Conclusion In this paper, we not only introduce a novel adversarial reward learning algorithm to generate more human-like stories given image sequences, but also empirically analyze the limitations of the automatic metrics for story evaluation.",
"We believe there are still lots of improvement space in the narrative paragraph generation tasks, like how to better simulate human imagination to create more vivid and diversified stories."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"Model",
"Learning",
"Experimental Setup",
"Automatic Evaluation",
"Human Evaluation",
"Conclusion"
]
} | GEM-SciDuet-train-22#paper-1021#slide-5 | Policy Model | My brother recently graduated college.
CNN It was a formal cap and gown event.
My mom and dad attended.
Later, my aunt and grandma showed up.
When the event was over he even got congratulated by the mascot. | My brother recently graduated college.
CNN It was a formal cap and gown event.
My mom and dad attended.
Later, my aunt and grandma showed up.
When the event was over he even got congratulated by the mascot. | [] |
GEM-SciDuet-train-22#paper-1021#slide-6 | 1021 | No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling | Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challenges to behavioral cloning algorithms. Furthermore, due to the limitations of automatic metrics on evaluating story quality, reinforcement learning methods with hand-crafted rewards also face difficulties in gaining an overall performance boost. Therefore, we propose an Adversarial REward Learning (AREL) framework to learn an implicit reward function from human demonstrations, and then optimize policy search with the learned reward function. Though automatic evaluation indicates a slight performance boost over state-of-the-art (SOTA) methods in cloning expert behaviors, human evaluation shows that our approach achieves significant improvement in generating more human-like stories than SOTA systems. Code will be made available at https://github.com/littlekobe/AREL. (* Equal contribution) | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237
],
"paper_content_text": [
"Introduction Recently, increasing attention has been focused on visual captioning (Chen et al., 2015; Wang et al., 2018c) , which aims at describing the content of an image or a video.",
"Though it has achieved impressive results, its capability of performing human-like understanding is still restrictive.",
"To further investigate machine's capa-Story #1: The brother and sister were ready for the first day of school.",
"They were excited to go to their first day and meet new friends.",
"They told their mom how happy they were.",
"They said they were going to make a lot of new friends .",
"Then they got up and got ready to get in the car .",
"Story #2: The brother did not want to talk to his sister.",
"The siblings made up.",
"They started to talk and smile.",
"Their parents showed up.",
"They were happy to see them.",
"shown here: each image is captioned with one sentence, and we also demonstrate two diversified stories that match the same image sequence.",
"bilities in understanding more complicated visual scenarios and composing more structured expressions, visual storytelling (Huang et al., 2016) has been proposed.",
"Visual captioning is aimed at depicting the concrete content of the images, and its expression style is rather simple.",
"In contrast, visual storytelling goes one step further: it summarizes the idea of a photo stream and tells a story about it.",
"Figure 1 shows an example of visual captioning and visual storytelling.",
"We have observed that stories contain rich emotions (excited, happy, not want) and imagination (siblings, parents, school, car) .",
"It, therefore, requires the capability to associate with concepts that do not explicitly appear in the images.",
"Moreover, stories are more subjective, so there barely exists standard templates for storytelling.",
"As shown in Figure 1 , the same photo stream can be paired with diverse stories, different from each other.",
"This heavily increases the evaluation difficulty.",
"So far, prior work for visual storytelling (Huang et al., 2016; Yu et al., 2017b) is mainly inspired by the success of visual captioning.",
"Nevertheless, because these methods are trained by maximizing the likelihood of the observed data pairs, they are restricted to generate simple and plain description with limited expressive patterns.",
"In order to cope with the challenges and produce more human-like descriptions, Rennie et al.",
"(2016) have proposed a reinforcement learning framework.",
"However, in the scenario of visual storytelling, the common reinforced captioning methods are facing great challenges since the hand-crafted rewards based on string matches are either too biased or too sparse to drive the policy search.",
"For instance, we used the METEOR (Banerjee and Lavie, 2005) score as the reward to reinforce our policy and found that though the METEOR score is significantly improved, the other scores are severely harmed.",
"Here we showcase an adversarial example with an average METEOR score as high as 40.2: We had a great time to have a lot of the.",
"They were to be a of the.",
"They were to be in the.",
"The and it were to be the.",
"The, and it were to be the.",
"Apparently, the machine is gaming the metrics.",
"Conversely, when using some other metrics (e.g.",
"BLEU, CIDEr) to evaluate the stories, we observe an opposite behavior: many relevant and coherent stories are receiving a very low score (nearly zero).",
"In order to resolve the strong bias brought by the hand-coded evaluation metrics in RL training and produce more human-like stories, we propose an Adversarial REward Learning (AREL) framework for visual storytelling.",
"We draw our inspiration from recent progress in inverse reinforcement learning (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017) and propose the AREL algorithm to learn a more intelligent reward function.",
"Specifically, we first incorporate a Boltzmann distribution to associate reward learning with distribution approximation, then design the adversarial process with two models -a policy model and a reward model.",
"The policy model performs the primitive actions and produces the story sequence, while the reward model is responsible for learning the implicit reward function from human demonstrations.",
"The learned reward function would be employed to optimize the policy in return.",
"For evaluation, we conduct both automatic metrics and human evaluation but observe a poor correlation between them.",
"Particularly, our method gains slight performance boost over the baseline systems on automatic metrics; human evaluation, however, indicates significant performance boost.",
"Thus we further discuss the limitations of the metrics and validate the superiority of our AREL method in performing more intelligent understanding of the visual scenes and generating more human-like stories.",
"Our main contributions are four-fold: • We propose an adversarial reward learning framework and apply it to boost visual story generation.",
"• We evaluate our approach on the Visual Storytelling (VIST) dataset and achieve the state-of-the-art results on automatic metrics.",
"• We empirically demonstrate that automatic metrics are not perfect for either training or evaluation.",
"• We design and perform a comprehensive human evaluation via Amazon Mechanical Turk, which demonstrates the superiority of the generated stories of our method on relevance, expressiveness, and concreteness.",
"Related Work Visual Storytelling Visual storytelling is the task of generating a narrative story from a photo stream, which requires a deeper understanding of the event flow in the stream.",
"Park and Kim (2015) has done some pioneering research on storytelling.",
"Chen et al.",
"(2017) proposed a multimodal approach for storyline generation to produce a stream of entities instead of human-like descriptions.",
"Recently, a more sophisticated dataset for visual storytelling (VIST) has been released to explore a more human-like understanding of grounded stories (Huang et al., 2016) .",
"Yu et al.",
"(2017b) proposes a multi-task learning algorithm for both album summarization and paragraph generation, achieving the best results on the VIST dataset.",
"But these methods are still based on behavioral cloning and lack the ability to generate more structured stories.",
"Reinforcement Learning in Sequence Generation Recently, reinforcement learning (RL) has gained its popularity in many sequence generation tasks such as machine translation (Bahdanau et al., 2016) , visual captioning (Ren et al., 2017; Wang et al., 2018b) , summarization (Paulus et al., 2017; Chen et al., 2018) , etc.",
"The common wisdom of using RL is to view generating a word as an action and aim at maximizing the expected return by optimizing its policy.",
"As pointed in (Ranzato et al., 2015) , traditional maximum likelihood algorithm is prone to exposure bias and label bias, while the RL agent exposes the generative model to its own distribution and thus can perform better.",
"But these works usually utilize hand-crafted metric scores as the reward to optimize the model, which fails to learn more implicit semantics due to the limitations of automatic metrics.",
"Rethinking Automatic Metrics Automatic metrics, including BLEU (Papineni et al., 2002) , CIDEr , METEOR (Banerjee and Lavie, 2005), and ROUGE (Lin, 2004) , have been widely applied to the sequence generation tasks.",
"Using automatic metrics can ensure rapid prototyping and testing new models with fewer expensive human evaluation.",
"However, they have been criticized to be biased and correlate poorly with human judgments, especially in many generative tasks like response generation (Lowe et al., 2017; Liu et al., 2016) , dialogue system (Bruni and Fernández, 2017) and machine translation (Callison-Burch et al., 2006) .",
"The naive overlap-counting methods are not able to reflect many semantic properties in natural language, such as coherence, expressiveness, etc.",
"Generative Adversarial Network Generative adversarial network (GAN) (Goodfellow et al., 2014 ) is a very popular approach for estimating intractable probabilities, which sidestep the difficulty by alternately training two models to play a min-max two-player game: min D max G E x∼p data [log D(x)] + E z∼pz [log D(G(z))] , where G is the generator and D is the discriminator, and z is the latent variable.",
"Recently, GAN has quickly been adopted to tackle discrete problems (Yu et al., 2017a; Wang et al., 2018a) .",
"The basic idea is to use Monte Carlo policy gradient estimation (Williams, 1992) to update the parameters of the generator.",
"Inverse Reinforcement Learning Reinforcement learning is known to be hindered by the need for an extensive feature and reward engineering, especially under the unknown dynamics.",
"Therefore, inverse reinforcement learning (IRL) has been proposed to infer expert's reward function.",
"Previous IRL approaches include maximum margin approaches (Abbeel and Ng, 2004; Ratliff et al., 2006) and probabilistic approaches (Ziebart, 2010; Ziebart et al., 2008) .",
"Recently, adversarial inverse reinforcement learning methods provide an efficient and scalable promise for automatic reward acquisition (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017; Henderson et al., 2017) .",
"These approaches utilize the connection between IRL and energy-based model and associate every data with a scalar energy value by using Boltzmann distribution p θ (x) ∝ exp(−E θ (x)).",
"Inspired by these methods, we propose a practical AREL approach for visual storytelling to uncover a robust reward function from human demonstrations and thus help produce human-like stories.",
"3 Our Approach Problem Statement Here we consider the task of visual storytelling, whose objective is to output a word sequence W = (w 1 , w 1 , · · · , w T ), w t ∈ V given an input image stream of 5 ordered images I = (I 1 , I 2 , · · · , I 5 ), where V is the vocabulary of all output token.",
"We formulate the generation as a markov decision process and design a reinforcement learning framework to tackle it.",
"As described in Figure 2 , our AREL framework is mainly composed of two modules: a policy model π β (W ) and a reward model R θ (W ).",
"The policy model takes an image sequence I as the input and performs sequential actions (choosing words w from the vocabulary V) to form a narrative story W .",
"The reward model CNN My brother recently graduated college.",
"It was a formal cap and gown event.",
"My mom and dad attended.",
"Later, my aunt and grandma showed up.",
"When the event was over he even got congratulated by the mascot.",
"Figure 3 : Overview of the policy model.",
"The visual encoder is a bidirectional GRU, which encodes the high-level visual features extracted from the input images.",
"Its outputs are then fed into the RNN decoders to generate sentences in parallel.",
"Encoder Decoder Finally, we concatenate all the generated sentences as a full story.",
"Note that the five decoders share the same weights.",
"is optimized by the adversarial objective (see Section 3.3) and aims at deriving a human-like reward from both human-annotated stories and sampled predictions.",
"Model Policy Model As is shown in Figure 3 , the policy model is a CNN-RNN architecture.",
"We fist feed the photo stream I = (I 1 , · · · , I 5 ) into a pretrained CNN and extract their high-level image features.",
"We then employ a visual encoder to further encode the image features as context vectors h i = [ ← − h i ; − → h i ].",
"The visual encoder is a bidirectional gated recurrent units (GRU).",
"In the decoding stage, we feed each context vector h i into a GRU-RNN decoder to generate a substory W i .",
"Formally, the generation process can be written as: s i t = GRU(s i t−1 , [w i t−1 , h i ]) , (1) π β (w i t |w i 1:t−1 ) = sof tmax(W s s i t + b s ) , (2) where s i t denotes the t-th hidden state of i-th decoder.",
"We concatenate the previous token w i t−1 and the context vector h i as the input.",
"W s and b s are the projection matrix and bias, which output a probability distribution over the whole vocabulary V. Eventually, the final story W is the concatenation of the sub-stories W i .",
"β denotes all the parameters of the encoder, the decoder, and the output layer.",
"Figure 4 : Overview of the reward model.",
"Our reward model is a CNN-based architecture, which utilizes convolution kernels with size 2, 3 and 4 to extract bigram, trigram and 4-gram representations from the input sequence embeddings.",
"Once the sentence representation is learned, it will be concatenated with the visual representation of the input image, and then be fed into the final FC layer to obtain the reward.",
"Reward Model The reward model R θ (W ) is a CNN-based architecture (see Figure 4 ).",
"Instead of giving an overall score for the whole story, we apply the reward model to different story parts (substories) W i and compute partial rewards, where i = 1, · · · , 5.",
"We observe that the partial rewards are more fine-grained and can provide better guidance for the policy model.",
"We first query the word embeddings of the substory (one sentence in most cases).",
"Next, multiple convolutional layers with different kernel sizes are used to extract the n-grams features, which are then projected into the sentence-level representation space by pooling layers (the design here is inspired by Kim (2014) ).",
"In addition to the textual features, evaluating the quality of a story should also consider the image features for relevance.",
"Therefore, we then combine the sentence representation with the visual feature of the input image through concatenation and feed them into the final fully connected decision layer.",
"In the end, the reward model outputs an estimated reward value R θ (W ).",
"The process can be written in formula: R θ (W ) = W r (f conv (W ) + W i I CN N ) + b r , (3) where W r , b r denotes the weights in the output layer, and f conv denotes the operations in CNN.",
"I CN N is the high-level visual feature extracted from the image, and W i projects it into the sentence representation space.",
"θ includes all the pa-rameters above.",
"Learning Reward Boltzmann Distribution In order to associate story distribution with reward function, we apply EBM to define a Reward Boltzmann distribution: p θ (W ) = exp(R θ (W )) Z θ , (4) Where W is the word sequence of the story and p θ (W ) is the approximate data distribution, and Z θ = W exp(R θ (W )) denotes the partition function.",
"According to the energy-based model (Le-Cun et al., 2006) , the optimal reward function R * (W ) is achieved when the Reward-Boltzmann distribution equals to the \"real\" data distribution p θ (W ) = p * (W ).",
"Adversarial Reward Learning We first introduce an empirical distribution p e (W ) = 1(W ∈D) |D| to represent the empirical distribution of the training data, where D denotes the dataset with |D| stories and 1 denotes an indicator function.",
"We use this empirical distribution as the \"good\" examples, which provides the evidence for the reward function to learn from.",
"In order to approximate the Reward Boltzmann distribution towards the \"real\" data distribution p * (W ), we design a min-max two-player game, where the Reward Boltzmann distribution p θ aims at maximizing the its similarity with empirical distribution p e while minimizing that with the \"faked\" data generated from policy model π β .",
"On the contrary, the policy distribution π β tries to maximize its similarity with the Boltzmann distribution p θ .",
"Formally, the adversarial objective function is defined as max β min θ KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) .",
"(5) We further decompose it into two parts.",
"First, because the objective J β of the story generation policy is to minimize its similarity with the Boltzmann distribution p θ , the optimal policy that minimizes KL-divergence is thus π(W ) ∼ exp(R θ (W )), meaning if R θ is optimal, the optimal π β = π * .",
"In formula, J β = − KL(π β (W )||p θ (W )) = E W ∼π β (W ) [R θ (W )] + H(π β (W )) , (6) Algorithm where H denotes the entropy of the policy model.",
"On the other hand, the objective J θ of the reward function is to distinguish between humanannotated stories and machine-generated stories.",
"Hence it is trying to minimize the KL-divergence with the empirical distribution p e and maximize the KL-divergence with the approximated policy distribution π β : J θ =KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) = W [pe(W )R θ (W ) − π β (W )R θ (W )] − H(pe) + H(π β ) , (7) Since H(π β ) and H(p e ) are irrelevant to θ, we denote them as constant C. Therefore, the objective J θ can be further derived as J θ = E W ∼pe(W ) [R θ (W )] − E W ∼π β (W ) [R θ (W )] + C .",
"(8) Here we propose to use stochastic gradient descent to optimize these two models alternately.",
"Formally, the gradients can be written as ∂J θ ∂θ = E W ∼pe(W ) ∂R θ (W ) ∂θ − E W ∼π β (W ) ∂R θ (W ) ∂θ , ∂J β ∂β = E W ∼π β (W ) (R θ (W ) + log π θ (W ) − b) ∂ log π β (W ) ∂β , (9) where b is the estimated baseline to reduce the variance.",
"Training & Testing As described in Algorithm 1, we introduce an alternating algorithm to train these two models using stochastic gradient descent.",
"During testing, the policy model is used with beam search to produce the story.",
"Experiments and Analysis Experimental Setup VIST Dataset The VIST dataset (Huang et al., 2016) is the first dataset for sequential vision-tolanguage tasks including visual storytelling, which consists of 10,117 Flickr albums with 210,819 unique photos.",
"In this paper, we mainly evaluate our AREL method on this dataset.",
"After filtering the broken images 2 , there are 40,098 training, 4,988 validation, and 5,050 testing samples.",
"Each sample contains one story that describes 5 selected images from a photo album (mostly one sentence per image).",
"And the same album is paired with 5 different stories as references.",
"In our experiments, we used the same split settings as in (Huang et al., 2016; Yu et al., 2017b) for a fair comparison.",
"Evaluation Metrics In order to comprehensively evaluate our method on storytelling dataset, we adopted both the automatic metrics and human evaluation as our criterion.",
"Four diverse automatic metrics were used in our experiments: BLEU, METEOR, ROUGE-L, and CIDEr.",
"We utilized the open source evaluation code 3 used in (Yu et al., 2017b) .",
"For human evaluation, we employed the Amazon Mechanical Turk to perform two kinds of user studies (see Section 4.3 for more details).",
"Training Details We employ pretrained ResNet-152 model to extract image features from the photo stream.",
"We built a vocabulary of size 9,837 to include words appearing more than three times in the training set.",
"More training details can be found at Appendix B.",
"Automatic Evaluation In this section, we compare our AREL method with the state-of-the-art methods as well as standard reinforcement learning algorithms on automatic evaluation metrics.",
"Then we further discuss the limitations of the hand-crafted metrics on evaluating human-like stories.",
"Comparison with SOTA on Automatic Metrics In Table 1 , we compare our method with Huang et al.",
"(2016) and Yu et al.",
"(2017b) , which report achieving best-known results on the VIST dataset.",
"We first implement a strong baseline model (XEss), which share the same architecture with our policy model but is trained with cross-entropy loss and scheduled sampling.",
"Besides, we adopt the traditional generative adversarial training for comparison (GAN).",
"As shown in Table 1 , our XEss model already outperforms the best-known re- Table 1 : Automatic evaluation on the VIST dataset.",
"We report BLEU (B), METEOR (M), ROUGH-L (R), and CIDEr (C) scores of the SOTA systems and the models we implemented, including XE-ss, GAN and AREL.",
"AREL-s-N denotes AREL models with sigmoid as output activation and alternate frequency as N, while ARELt-N denoting AREL models with tahn as the output activation (N = 50 or 100).",
"sults on the VIST dataset, and the GAN model can bring a performance boost.",
"We then use the XEss model to initialize our policy model and further train it with AREL.",
"Evidently, our AREL model performs the best and achieves the new state-ofthe-art results across all metrics.",
"But, compared with the XE-ss model, the performance gain is minor, especially on METEOR and ROUGE-L scores.",
"However, in Sec.",
"4.3, the extensive human evaluation has indicated that our AREL framework brings a significant improvement on generating human-like stories over the XE-ss model.",
"The inconsistency of automatic evaluation and human evaluation lead to a suspect that these hand-crafted metrics lack the ability to fully evaluate stories' quality due to the complicated characteristics of the stories.",
"Therefore, we conduct experiments to analyze and discuss the defects of the automatic metrics in section 4.2.",
"Limitations of Automatic Metrics As we claimed in the introduction, string-match-based automatic metrics are not perfect and fail to evaluate some semantic characteristics of the stories, like the expressiveness and coherence of the stories.",
"In order to confirm our conjecture, we utilize automatic metrics as rewards to reinforce the visual storytelling model by adopting policy gradient with baseline to train the policy model.",
"The quantitative results are demonstrated in Table 1 .",
"Apparently, METEOR-RL and ROUGE-RL are severely ill-posed: they obtain the highest scores on their own metrics but damage the other met- Table 2 : Comparison with different RL models with different metric scores as the rewards.",
"We report the average scores of the AREL models as AREL (avg).",
"Although METEOR-RL and ROUGE-RL models achieve very high scores on their own metrics, the underlined scores are severely damaged.",
"Actually, they are gaming their own metrics with nonsense sentences.",
"rics severely.",
"We observe that these models are actually overfitting to a given metric while losing the overall coherence and semantical correctness.",
"Same as METEOR score, there is also an adversarial example for ROUGE-L 4 , which is nonsense but achieves an average ROUGE-L score of 33.8.",
"Besides, as can be seen in Table 1 , after reinforced training, BLEU-RL and CIDEr-RL do not bring a consistent improvement over the XE-ss model.",
"We plot the histogram distributions of both BLEU-3 and CIDEr scores on the test set in Figure 5 .",
"An interesting fact is that there are a large number of samples with nearly zero score on both metrics.",
"However, we observed those \"zero-score\" samples are not pointless results; instead, lots of them make sense and deserve a better score than zero.",
"Here is a \"zero-score\" example on BLEU-3: I had a great time at the restaurant today.",
"The food was delicious.",
"I had a lot of food.",
"The food was delicious.",
"T had a great time.",
"The corresponding reference is The table of food was a pleasure to see!",
"Our food is both nutritious and beautiful!",
"Our chicken was especially tasty!",
"We love greens as they taste great and are healthy!",
"The fruit was a colorful display that tantalized our palette.. theme \"food and eating\", which showcases the defeats of using BLEU and CIDEr scores as a reward for RL training.",
"Moreover, we compare the human evaluation scores with these two metric scores in Figure 5 .",
"Noticeably, both BLEU-3 and CIDEr have a poor correlation with the human evaluation scores.",
"Their distributions are more biased and thus cannot fully reflect the quality of the generated stories.",
"In terms of BLEU, it is extremely hard for machines to produce the exact 3-gram or 4-gram matching, so the scores are too low to provide useful guidance.",
"CIDEr measures the similarity of a sentence to the majority of the references.",
"However, the references to the same image sequence are photostream different from each other, so the score is very low and not suitable for this task.",
"In contrast, our AREL framework can lean a more robust reward function from human-annotated stories, which is able to provide better guidance to the policy and thus improves its performances over different metrics.",
"Although the prediction is not as good as the reference, it is actually coherent and relevant to the Comparison with GAN We here compare our method with traditional GAN (Goodfellow et al., 2014) , the update rule for generator can be generally classified into two categories.",
"We demonstrate their corresponding objectives and ours as follows: GAN 1 : J β = E W ∼p β [− log R θ (W )] , GAN 2 : J β = E W ∼p β [log(1 − R θ (W ))] , ours : J β = E W ∼p β [−R θ (W )] .",
"As discussed in Arjovsky et al.",
"(2017) , GAN 1 is prone to the unstable gradient issue and GAN 2 is prone to the vanishing gradient issue.",
"Analytically, our method does not suffer from these two common issues and thus is able converge to optimum solutions more easily.",
"From Table 1 , we can observe slight gains of using AREL over GAN Figure 5 : Metric score distributions.",
"We plot the histogram distributions of BLEU-3 and CIDEr scores on the test set, as well as the human evaluation score distribution on the test samples.",
"For a fair comparison, we use the Turing test results to calculate the human evaluation scores (see Section 4.3).",
"Basically, 0.2 score is given if the generated story wins the Turing test, 0.1 for tie, and 0 if losing.",
"Each sample has 5 scores from 5 judges, and we use the sum as the human evaluation score, so it is in the range [0, 1].",
"with automatic metrics, therefore we further deploy human evaluation for a better comparison.",
"Human Evaluation Automatic metrics cannot fully evaluate the capability of our AREL method.",
"Therefore, we perform two different kinds of human evaluation studies on Amazon Mechanical Turk: Turing test and pairwise human evaluation.",
"For both tasks, we use 150 stories (750 images) sampled from the test set, each assigned to 5 workers to eliminate human variance.",
"We batch six items as one assignment and insert an additional assignment as a sanity check.",
"Besides, the order of the options within each item is shuffled to make a fair comparison.",
"Turing Test We first conduct five independent Turing tests for XE-ss, BLEU-RL, CIDEr-RL, GAN, and AREL models, during which the worker is given one human-annotated sample and one machine-generated sample, and needs to decide which is human-annotated.",
"As shown in Table 3, our AREL model significantly outperforms all the other baseline models in the Turing test: it has much more chances to fool AMT worker (the ratio is AREL:XE-ss:BLEU-RL:CIDEr-RL:GAN = 45.8%:28.3%:32.1%:19.7%:39.5%), which confirms the superiority of our AREL framework in generating human-like stories.",
"Unlike automatic metric evaluation, the Turing test has indicated a much larger margin between AREL and other competing algorithms.",
"Thus, we empirically confirm that metrics are not perfect in evaluating many implicit semantic properties of natural language.",
"Besides, the Turing test of our AREL model reveals that nearly half of the workers are fooled by our machine generation, indicating a preliminary success toward generating human-like stories.",
"Pairwise Comparison In order to have a clear comparison with competing algorithms with respect to different semantic features of the stories, we further perform four pairwise comparison tests: AREL vs XE-ss/BLEU-RL/CIDEr-RL/GAN.",
"For each photo stream, the worker is presented with two generated stories and asked to make decisions from the three aspects: relevance 5 , expressiveness 6 and concreteness 7 .",
"This head-tohead compete is designed to help us understand in what aspect our model outperforms the competing algorithms, which is displayed in Table 4 .",
"Consistently on all the three comparisons, a large majority of the AREL stories trumps the competing systems with respect to their relevance, XE-ss We took a trip to the mountains.",
"There were many different kinds of different kinds.",
"We had a great time.",
"He was a great time.",
"It was a beautiful day.",
"AREL The family decided to take a trip to the countryside.",
"There were so many different kinds of things to see.",
"The family decided to go on a hike.",
"I had a great time.",
"At the end of the day, we were able to take a picture of the beautiful scenery.",
"Humancreated Story We went on a hike yesterday.",
"There were a lot of strange plants there.",
"I had a great time.",
"We drank a lot of water while we were hiking.",
"The view was spectacular.",
"expressiveness, and concreteness.",
"Therefore, it empirically confirms that our generated stories are more relevant to the image sequences, more coherent and concrete than the other algorithms, which however is not explicitly reflected by the automatic metric evaluation.",
"Figure 6 gives a qualitative comparison example between AREL and XE-ss models.",
"Looking at the individual sentences, it is obvious that our results are more grammatically and semantically correct.",
"Then connecting the sentences together, we observe that the AREL story is more coherent and describes the photo stream more accurately.",
"Thus, our AREL model significantly surpasses the XEss model on all the three aspects of the qualitative example.",
"Besides, it won the Turing test (3 out 5 AMT workers think the AREL story is created by a human).",
"In the appendix, we also show a negative case that fails the Turing test.",
"Qualitative Analysis Conclusion In this paper, we not only introduce a novel adversarial reward learning algorithm to generate more human-like stories given image sequences, but also empirically analyze the limitations of the automatic metrics for story evaluation.",
"We believe there are still lots of improvement space in the narrative paragraph generation tasks, like how to better simulate human imagination to create more vivid and diversified stories."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"Model",
"Learning",
"Experimental Setup",
"Automatic Evaluation",
"Human Evaluation",
"Conclusion"
]
} | GEM-SciDuet-train-22#paper-1021#slide-6 | Reward Model | my mom and dad attended
Story Convolution Pooling FC layer
Kim 2014, Convolutional Neural Networks for Sentence Classification | my mom and dad attended
Story Convolution Pooling FC layer
Kim 2014, Convolutional Neural Networks for Sentence Classification | [] |
GEM-SciDuet-train-22#paper-1021#slide-7 | 1021 | No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling | Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challenges to behavioral cloning algorithms. Furthermore, due to the limitations of automatic metrics on evaluating story quality, reinforcement learning methods with hand-crafted rewards also face difficulties in gaining an overall performance boost. Therefore, we propose an Adversarial REward Learning (AREL) framework to learn an implicit reward function from human demonstrations, and then optimize policy search with the learned reward function. Though automatic evaluation indicates a slight performance boost over state-of-the-art (SOTA) methods in cloning expert behaviors, human evaluation shows that our approach achieves significant improvement in generating more human-like stories than SOTA systems. Code will be made available at https://github.com/littlekobe/AREL. (* Equal contribution) | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237
],
"paper_content_text": [
"Introduction Recently, increasing attention has been focused on visual captioning (Chen et al., 2015; Wang et al., 2018c) , which aims at describing the content of an image or a video.",
"Though it has achieved impressive results, its capability of performing human-like understanding is still restrictive.",
"To further investigate machine's capa-Story #1: The brother and sister were ready for the first day of school.",
"They were excited to go to their first day and meet new friends.",
"They told their mom how happy they were.",
"They said they were going to make a lot of new friends .",
"Then they got up and got ready to get in the car .",
"Story #2: The brother did not want to talk to his sister.",
"The siblings made up.",
"They started to talk and smile.",
"Their parents showed up.",
"They were happy to see them.",
"shown here: each image is captioned with one sentence, and we also demonstrate two diversified stories that match the same image sequence.",
"bilities in understanding more complicated visual scenarios and composing more structured expressions, visual storytelling (Huang et al., 2016) has been proposed.",
"Visual captioning is aimed at depicting the concrete content of the images, and its expression style is rather simple.",
"In contrast, visual storytelling goes one step further: it summarizes the idea of a photo stream and tells a story about it.",
"Figure 1 shows an example of visual captioning and visual storytelling.",
"We have observed that stories contain rich emotions (excited, happy, not want) and imagination (siblings, parents, school, car) .",
"It, therefore, requires the capability to associate with concepts that do not explicitly appear in the images.",
"Moreover, stories are more subjective, so there barely exists standard templates for storytelling.",
"As shown in Figure 1 , the same photo stream can be paired with diverse stories, different from each other.",
"This heavily increases the evaluation difficulty.",
"So far, prior work for visual storytelling (Huang et al., 2016; Yu et al., 2017b) is mainly inspired by the success of visual captioning.",
"Nevertheless, because these methods are trained by maximizing the likelihood of the observed data pairs, they are restricted to generate simple and plain description with limited expressive patterns.",
"In order to cope with the challenges and produce more human-like descriptions, Rennie et al.",
"(2016) have proposed a reinforcement learning framework.",
"However, in the scenario of visual storytelling, the common reinforced captioning methods are facing great challenges since the hand-crafted rewards based on string matches are either too biased or too sparse to drive the policy search.",
"For instance, we used the METEOR (Banerjee and Lavie, 2005) score as the reward to reinforce our policy and found that though the METEOR score is significantly improved, the other scores are severely harmed.",
"Here we showcase an adversarial example with an average METEOR score as high as 40.2: We had a great time to have a lot of the.",
"They were to be a of the.",
"They were to be in the.",
"The and it were to be the.",
"The, and it were to be the.",
"Apparently, the machine is gaming the metrics.",
"Conversely, when using some other metrics (e.g.",
"BLEU, CIDEr) to evaluate the stories, we observe an opposite behavior: many relevant and coherent stories are receiving a very low score (nearly zero).",
"In order to resolve the strong bias brought by the hand-coded evaluation metrics in RL training and produce more human-like stories, we propose an Adversarial REward Learning (AREL) framework for visual storytelling.",
"We draw our inspiration from recent progress in inverse reinforcement learning (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017) and propose the AREL algorithm to learn a more intelligent reward function.",
"Specifically, we first incorporate a Boltzmann distribution to associate reward learning with distribution approximation, then design the adversarial process with two models -a policy model and a reward model.",
"The policy model performs the primitive actions and produces the story sequence, while the reward model is responsible for learning the implicit reward function from human demonstrations.",
"The learned reward function would be employed to optimize the policy in return.",
"For evaluation, we conduct both automatic metrics and human evaluation but observe a poor correlation between them.",
"Particularly, our method gains slight performance boost over the baseline systems on automatic metrics; human evaluation, however, indicates significant performance boost.",
"Thus we further discuss the limitations of the metrics and validate the superiority of our AREL method in performing more intelligent understanding of the visual scenes and generating more human-like stories.",
"Our main contributions are four-fold: • We propose an adversarial reward learning framework and apply it to boost visual story generation.",
"• We evaluate our approach on the Visual Storytelling (VIST) dataset and achieve the state-of-the-art results on automatic metrics.",
"• We empirically demonstrate that automatic metrics are not perfect for either training or evaluation.",
"• We design and perform a comprehensive human evaluation via Amazon Mechanical Turk, which demonstrates the superiority of the generated stories of our method on relevance, expressiveness, and concreteness.",
"Related Work Visual Storytelling Visual storytelling is the task of generating a narrative story from a photo stream, which requires a deeper understanding of the event flow in the stream.",
"Park and Kim (2015) has done some pioneering research on storytelling.",
"Chen et al.",
"(2017) proposed a multimodal approach for storyline generation to produce a stream of entities instead of human-like descriptions.",
"Recently, a more sophisticated dataset for visual storytelling (VIST) has been released to explore a more human-like understanding of grounded stories (Huang et al., 2016) .",
"Yu et al.",
"(2017b) proposes a multi-task learning algorithm for both album summarization and paragraph generation, achieving the best results on the VIST dataset.",
"But these methods are still based on behavioral cloning and lack the ability to generate more structured stories.",
"Reinforcement Learning in Sequence Generation Recently, reinforcement learning (RL) has gained its popularity in many sequence generation tasks such as machine translation (Bahdanau et al., 2016) , visual captioning (Ren et al., 2017; Wang et al., 2018b) , summarization (Paulus et al., 2017; Chen et al., 2018) , etc.",
"The common wisdom of using RL is to view generating a word as an action and aim at maximizing the expected return by optimizing its policy.",
"As pointed in (Ranzato et al., 2015) , traditional maximum likelihood algorithm is prone to exposure bias and label bias, while the RL agent exposes the generative model to its own distribution and thus can perform better.",
"But these works usually utilize hand-crafted metric scores as the reward to optimize the model, which fails to learn more implicit semantics due to the limitations of automatic metrics.",
"Rethinking Automatic Metrics Automatic metrics, including BLEU (Papineni et al., 2002) , CIDEr , METEOR (Banerjee and Lavie, 2005), and ROUGE (Lin, 2004) , have been widely applied to the sequence generation tasks.",
"Using automatic metrics can ensure rapid prototyping and testing new models with fewer expensive human evaluation.",
"However, they have been criticized to be biased and correlate poorly with human judgments, especially in many generative tasks like response generation (Lowe et al., 2017; Liu et al., 2016) , dialogue system (Bruni and Fernández, 2017) and machine translation (Callison-Burch et al., 2006) .",
"The naive overlap-counting methods are not able to reflect many semantic properties in natural language, such as coherence, expressiveness, etc.",
"Generative Adversarial Network Generative adversarial network (GAN) (Goodfellow et al., 2014 ) is a very popular approach for estimating intractable probabilities, which sidestep the difficulty by alternately training two models to play a min-max two-player game: min D max G E x∼p data [log D(x)] + E z∼pz [log D(G(z))] , where G is the generator and D is the discriminator, and z is the latent variable.",
"Recently, GAN has quickly been adopted to tackle discrete problems (Yu et al., 2017a; Wang et al., 2018a) .",
"The basic idea is to use Monte Carlo policy gradient estimation (Williams, 1992) to update the parameters of the generator.",
"Inverse Reinforcement Learning Reinforcement learning is known to be hindered by the need for an extensive feature and reward engineering, especially under the unknown dynamics.",
"Therefore, inverse reinforcement learning (IRL) has been proposed to infer expert's reward function.",
"Previous IRL approaches include maximum margin approaches (Abbeel and Ng, 2004; Ratliff et al., 2006) and probabilistic approaches (Ziebart, 2010; Ziebart et al., 2008) .",
"Recently, adversarial inverse reinforcement learning methods provide an efficient and scalable promise for automatic reward acquisition (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017; Henderson et al., 2017) .",
"These approaches utilize the connection between IRL and energy-based model and associate every data with a scalar energy value by using Boltzmann distribution p θ (x) ∝ exp(−E θ (x)).",
"Inspired by these methods, we propose a practical AREL approach for visual storytelling to uncover a robust reward function from human demonstrations and thus help produce human-like stories.",
"3 Our Approach Problem Statement Here we consider the task of visual storytelling, whose objective is to output a word sequence W = (w 1 , w 1 , · · · , w T ), w t ∈ V given an input image stream of 5 ordered images I = (I 1 , I 2 , · · · , I 5 ), where V is the vocabulary of all output token.",
"We formulate the generation as a markov decision process and design a reinforcement learning framework to tackle it.",
"As described in Figure 2 , our AREL framework is mainly composed of two modules: a policy model π β (W ) and a reward model R θ (W ).",
"The policy model takes an image sequence I as the input and performs sequential actions (choosing words w from the vocabulary V) to form a narrative story W .",
"The reward model CNN My brother recently graduated college.",
"It was a formal cap and gown event.",
"My mom and dad attended.",
"Later, my aunt and grandma showed up.",
"When the event was over he even got congratulated by the mascot.",
"Figure 3 : Overview of the policy model.",
"The visual encoder is a bidirectional GRU, which encodes the high-level visual features extracted from the input images.",
"Its outputs are then fed into the RNN decoders to generate sentences in parallel.",
"Encoder Decoder Finally, we concatenate all the generated sentences as a full story.",
"Note that the five decoders share the same weights.",
"is optimized by the adversarial objective (see Section 3.3) and aims at deriving a human-like reward from both human-annotated stories and sampled predictions.",
"Model Policy Model As is shown in Figure 3 , the policy model is a CNN-RNN architecture.",
"We fist feed the photo stream I = (I 1 , · · · , I 5 ) into a pretrained CNN and extract their high-level image features.",
"We then employ a visual encoder to further encode the image features as context vectors h i = [ ← − h i ; − → h i ].",
"The visual encoder is a bidirectional gated recurrent units (GRU).",
"In the decoding stage, we feed each context vector h i into a GRU-RNN decoder to generate a substory W i .",
"Formally, the generation process can be written as: s i t = GRU(s i t−1 , [w i t−1 , h i ]) , (1) π β (w i t |w i 1:t−1 ) = sof tmax(W s s i t + b s ) , (2) where s i t denotes the t-th hidden state of i-th decoder.",
"We concatenate the previous token w i t−1 and the context vector h i as the input.",
"W s and b s are the projection matrix and bias, which output a probability distribution over the whole vocabulary V. Eventually, the final story W is the concatenation of the sub-stories W i .",
"β denotes all the parameters of the encoder, the decoder, and the output layer.",
"Figure 4 : Overview of the reward model.",
"Our reward model is a CNN-based architecture, which utilizes convolution kernels with size 2, 3 and 4 to extract bigram, trigram and 4-gram representations from the input sequence embeddings.",
"Once the sentence representation is learned, it will be concatenated with the visual representation of the input image, and then be fed into the final FC layer to obtain the reward.",
"Reward Model The reward model R θ (W ) is a CNN-based architecture (see Figure 4 ).",
"Instead of giving an overall score for the whole story, we apply the reward model to different story parts (substories) W i and compute partial rewards, where i = 1, · · · , 5.",
"We observe that the partial rewards are more fine-grained and can provide better guidance for the policy model.",
"We first query the word embeddings of the substory (one sentence in most cases).",
"Next, multiple convolutional layers with different kernel sizes are used to extract the n-grams features, which are then projected into the sentence-level representation space by pooling layers (the design here is inspired by Kim (2014) ).",
"In addition to the textual features, evaluating the quality of a story should also consider the image features for relevance.",
"Therefore, we then combine the sentence representation with the visual feature of the input image through concatenation and feed them into the final fully connected decision layer.",
"In the end, the reward model outputs an estimated reward value R θ (W ).",
"The process can be written in formula: R θ (W ) = W r (f conv (W ) + W i I CN N ) + b r , (3) where W r , b r denotes the weights in the output layer, and f conv denotes the operations in CNN.",
"I CN N is the high-level visual feature extracted from the image, and W i projects it into the sentence representation space.",
"θ includes all the pa-rameters above.",
"Learning Reward Boltzmann Distribution In order to associate story distribution with reward function, we apply EBM to define a Reward Boltzmann distribution: p θ (W ) = exp(R θ (W )) Z θ , (4) Where W is the word sequence of the story and p θ (W ) is the approximate data distribution, and Z θ = W exp(R θ (W )) denotes the partition function.",
"According to the energy-based model (Le-Cun et al., 2006) , the optimal reward function R * (W ) is achieved when the Reward-Boltzmann distribution equals to the \"real\" data distribution p θ (W ) = p * (W ).",
"Adversarial Reward Learning We first introduce an empirical distribution p e (W ) = 1(W ∈D) |D| to represent the empirical distribution of the training data, where D denotes the dataset with |D| stories and 1 denotes an indicator function.",
"We use this empirical distribution as the \"good\" examples, which provides the evidence for the reward function to learn from.",
"In order to approximate the Reward Boltzmann distribution towards the \"real\" data distribution p * (W ), we design a min-max two-player game, where the Reward Boltzmann distribution p θ aims at maximizing the its similarity with empirical distribution p e while minimizing that with the \"faked\" data generated from policy model π β .",
"On the contrary, the policy distribution π β tries to maximize its similarity with the Boltzmann distribution p θ .",
"Formally, the adversarial objective function is defined as max β min θ KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) .",
"(5) We further decompose it into two parts.",
"First, because the objective J β of the story generation policy is to minimize its similarity with the Boltzmann distribution p θ , the optimal policy that minimizes KL-divergence is thus π(W ) ∼ exp(R θ (W )), meaning if R θ is optimal, the optimal π β = π * .",
"In formula, J β = − KL(π β (W )||p θ (W )) = E W ∼π β (W ) [R θ (W )] + H(π β (W )) , (6) Algorithm where H denotes the entropy of the policy model.",
"On the other hand, the objective J θ of the reward function is to distinguish between humanannotated stories and machine-generated stories.",
"Hence it is trying to minimize the KL-divergence with the empirical distribution p e and maximize the KL-divergence with the approximated policy distribution π β : J θ =KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) = W [pe(W )R θ (W ) − π β (W )R θ (W )] − H(pe) + H(π β ) , (7) Since H(π β ) and H(p e ) are irrelevant to θ, we denote them as constant C. Therefore, the objective J θ can be further derived as J θ = E W ∼pe(W ) [R θ (W )] − E W ∼π β (W ) [R θ (W )] + C .",
"(8) Here we propose to use stochastic gradient descent to optimize these two models alternately.",
"Formally, the gradients can be written as ∂J θ ∂θ = E W ∼pe(W ) ∂R θ (W ) ∂θ − E W ∼π β (W ) ∂R θ (W ) ∂θ , ∂J β ∂β = E W ∼π β (W ) (R θ (W ) + log π θ (W ) − b) ∂ log π β (W ) ∂β , (9) where b is the estimated baseline to reduce the variance.",
"Training & Testing As described in Algorithm 1, we introduce an alternating algorithm to train these two models using stochastic gradient descent.",
"During testing, the policy model is used with beam search to produce the story.",
"Experiments and Analysis Experimental Setup VIST Dataset The VIST dataset (Huang et al., 2016) is the first dataset for sequential vision-tolanguage tasks including visual storytelling, which consists of 10,117 Flickr albums with 210,819 unique photos.",
"In this paper, we mainly evaluate our AREL method on this dataset.",
"After filtering the broken images 2 , there are 40,098 training, 4,988 validation, and 5,050 testing samples.",
"Each sample contains one story that describes 5 selected images from a photo album (mostly one sentence per image).",
"And the same album is paired with 5 different stories as references.",
"In our experiments, we used the same split settings as in (Huang et al., 2016; Yu et al., 2017b) for a fair comparison.",
"Evaluation Metrics In order to comprehensively evaluate our method on storytelling dataset, we adopted both the automatic metrics and human evaluation as our criterion.",
"Four diverse automatic metrics were used in our experiments: BLEU, METEOR, ROUGE-L, and CIDEr.",
"We utilized the open source evaluation code 3 used in (Yu et al., 2017b) .",
"For human evaluation, we employed the Amazon Mechanical Turk to perform two kinds of user studies (see Section 4.3 for more details).",
"Training Details We employ pretrained ResNet-152 model to extract image features from the photo stream.",
"We built a vocabulary of size 9,837 to include words appearing more than three times in the training set.",
"More training details can be found at Appendix B.",
"Automatic Evaluation In this section, we compare our AREL method with the state-of-the-art methods as well as standard reinforcement learning algorithms on automatic evaluation metrics.",
"Then we further discuss the limitations of the hand-crafted metrics on evaluating human-like stories.",
"Comparison with SOTA on Automatic Metrics In Table 1 , we compare our method with Huang et al.",
"(2016) and Yu et al.",
"(2017b) , which report achieving best-known results on the VIST dataset.",
"We first implement a strong baseline model (XEss), which share the same architecture with our policy model but is trained with cross-entropy loss and scheduled sampling.",
"Besides, we adopt the traditional generative adversarial training for comparison (GAN).",
"As shown in Table 1 , our XEss model already outperforms the best-known re- Table 1 : Automatic evaluation on the VIST dataset.",
"We report BLEU (B), METEOR (M), ROUGH-L (R), and CIDEr (C) scores of the SOTA systems and the models we implemented, including XE-ss, GAN and AREL.",
"AREL-s-N denotes AREL models with sigmoid as output activation and alternate frequency as N, while ARELt-N denoting AREL models with tahn as the output activation (N = 50 or 100).",
"sults on the VIST dataset, and the GAN model can bring a performance boost.",
"We then use the XEss model to initialize our policy model and further train it with AREL.",
"Evidently, our AREL model performs the best and achieves the new state-ofthe-art results across all metrics.",
"But, compared with the XE-ss model, the performance gain is minor, especially on METEOR and ROUGE-L scores.",
"However, in Sec.",
"4.3, the extensive human evaluation has indicated that our AREL framework brings a significant improvement on generating human-like stories over the XE-ss model.",
"The inconsistency of automatic evaluation and human evaluation lead to a suspect that these hand-crafted metrics lack the ability to fully evaluate stories' quality due to the complicated characteristics of the stories.",
"Therefore, we conduct experiments to analyze and discuss the defects of the automatic metrics in section 4.2.",
"Limitations of Automatic Metrics As we claimed in the introduction, string-match-based automatic metrics are not perfect and fail to evaluate some semantic characteristics of the stories, like the expressiveness and coherence of the stories.",
"In order to confirm our conjecture, we utilize automatic metrics as rewards to reinforce the visual storytelling model by adopting policy gradient with baseline to train the policy model.",
"The quantitative results are demonstrated in Table 1 .",
"Apparently, METEOR-RL and ROUGE-RL are severely ill-posed: they obtain the highest scores on their own metrics but damage the other met- Table 2 : Comparison with different RL models with different metric scores as the rewards.",
"We report the average scores of the AREL models as AREL (avg).",
"Although METEOR-RL and ROUGE-RL models achieve very high scores on their own metrics, the underlined scores are severely damaged.",
"Actually, they are gaming their own metrics with nonsense sentences.",
"rics severely.",
"We observe that these models are actually overfitting to a given metric while losing the overall coherence and semantical correctness.",
"Same as METEOR score, there is also an adversarial example for ROUGE-L 4 , which is nonsense but achieves an average ROUGE-L score of 33.8.",
"Besides, as can be seen in Table 1 , after reinforced training, BLEU-RL and CIDEr-RL do not bring a consistent improvement over the XE-ss model.",
"We plot the histogram distributions of both BLEU-3 and CIDEr scores on the test set in Figure 5 .",
"An interesting fact is that there are a large number of samples with nearly zero score on both metrics.",
"However, we observed those \"zero-score\" samples are not pointless results; instead, lots of them make sense and deserve a better score than zero.",
"Here is a \"zero-score\" example on BLEU-3: I had a great time at the restaurant today.",
"The food was delicious.",
"I had a lot of food.",
"The food was delicious.",
"T had a great time.",
"The corresponding reference is The table of food was a pleasure to see!",
"Our food is both nutritious and beautiful!",
"Our chicken was especially tasty!",
"We love greens as they taste great and are healthy!",
"The fruit was a colorful display that tantalized our palette.. theme \"food and eating\", which showcases the defeats of using BLEU and CIDEr scores as a reward for RL training.",
"Moreover, we compare the human evaluation scores with these two metric scores in Figure 5 .",
"Noticeably, both BLEU-3 and CIDEr have a poor correlation with the human evaluation scores.",
"Their distributions are more biased and thus cannot fully reflect the quality of the generated stories.",
"In terms of BLEU, it is extremely hard for machines to produce the exact 3-gram or 4-gram matching, so the scores are too low to provide useful guidance.",
"CIDEr measures the similarity of a sentence to the majority of the references.",
"However, the references to the same image sequence are photostream different from each other, so the score is very low and not suitable for this task.",
"In contrast, our AREL framework can lean a more robust reward function from human-annotated stories, which is able to provide better guidance to the policy and thus improves its performances over different metrics.",
"Although the prediction is not as good as the reference, it is actually coherent and relevant to the Comparison with GAN We here compare our method with traditional GAN (Goodfellow et al., 2014) , the update rule for generator can be generally classified into two categories.",
"We demonstrate their corresponding objectives and ours as follows: GAN 1 : J β = E W ∼p β [− log R θ (W )] , GAN 2 : J β = E W ∼p β [log(1 − R θ (W ))] , ours : J β = E W ∼p β [−R θ (W )] .",
"As discussed in Arjovsky et al.",
"(2017) , GAN 1 is prone to the unstable gradient issue and GAN 2 is prone to the vanishing gradient issue.",
"Analytically, our method does not suffer from these two common issues and thus is able converge to optimum solutions more easily.",
"From Table 1 , we can observe slight gains of using AREL over GAN Figure 5 : Metric score distributions.",
"We plot the histogram distributions of BLEU-3 and CIDEr scores on the test set, as well as the human evaluation score distribution on the test samples.",
"For a fair comparison, we use the Turing test results to calculate the human evaluation scores (see Section 4.3).",
"Basically, 0.2 score is given if the generated story wins the Turing test, 0.1 for tie, and 0 if losing.",
"Each sample has 5 scores from 5 judges, and we use the sum as the human evaluation score, so it is in the range [0, 1].",
"with automatic metrics, therefore we further deploy human evaluation for a better comparison.",
"Human Evaluation Automatic metrics cannot fully evaluate the capability of our AREL method.",
"Therefore, we perform two different kinds of human evaluation studies on Amazon Mechanical Turk: Turing test and pairwise human evaluation.",
"For both tasks, we use 150 stories (750 images) sampled from the test set, each assigned to 5 workers to eliminate human variance.",
"We batch six items as one assignment and insert an additional assignment as a sanity check.",
"Besides, the order of the options within each item is shuffled to make a fair comparison.",
"Turing Test We first conduct five independent Turing tests for XE-ss, BLEU-RL, CIDEr-RL, GAN, and AREL models, during which the worker is given one human-annotated sample and one machine-generated sample, and needs to decide which is human-annotated.",
"As shown in Table 3, our AREL model significantly outperforms all the other baseline models in the Turing test: it has much more chances to fool AMT worker (the ratio is AREL:XE-ss:BLEU-RL:CIDEr-RL:GAN = 45.8%:28.3%:32.1%:19.7%:39.5%), which confirms the superiority of our AREL framework in generating human-like stories.",
"Unlike automatic metric evaluation, the Turing test has indicated a much larger margin between AREL and other competing algorithms.",
"Thus, we empirically confirm that metrics are not perfect in evaluating many implicit semantic properties of natural language.",
"Besides, the Turing test of our AREL model reveals that nearly half of the workers are fooled by our machine generation, indicating a preliminary success toward generating human-like stories.",
"Pairwise Comparison In order to have a clear comparison with competing algorithms with respect to different semantic features of the stories, we further perform four pairwise comparison tests: AREL vs XE-ss/BLEU-RL/CIDEr-RL/GAN.",
"For each photo stream, the worker is presented with two generated stories and asked to make decisions from the three aspects: relevance 5 , expressiveness 6 and concreteness 7 .",
"This head-tohead compete is designed to help us understand in what aspect our model outperforms the competing algorithms, which is displayed in Table 4 .",
"Consistently on all the three comparisons, a large majority of the AREL stories trumps the competing systems with respect to their relevance, XE-ss We took a trip to the mountains.",
"There were many different kinds of different kinds.",
"We had a great time.",
"He was a great time.",
"It was a beautiful day.",
"AREL The family decided to take a trip to the countryside.",
"There were so many different kinds of things to see.",
"The family decided to go on a hike.",
"I had a great time.",
"At the end of the day, we were able to take a picture of the beautiful scenery.",
"Humancreated Story We went on a hike yesterday.",
"There were a lot of strange plants there.",
"I had a great time.",
"We drank a lot of water while we were hiking.",
"The view was spectacular.",
"expressiveness, and concreteness.",
"Therefore, it empirically confirms that our generated stories are more relevant to the image sequences, more coherent and concrete than the other algorithms, which however is not explicitly reflected by the automatic metric evaluation.",
"Figure 6 gives a qualitative comparison example between AREL and XE-ss models.",
"Looking at the individual sentences, it is obvious that our results are more grammatically and semantically correct.",
"Then connecting the sentences together, we observe that the AREL story is more coherent and describes the photo stream more accurately.",
"Thus, our AREL model significantly surpasses the XEss model on all the three aspects of the qualitative example.",
"Besides, it won the Turing test (3 out 5 AMT workers think the AREL story is created by a human).",
"In the appendix, we also show a negative case that fails the Turing test.",
"Qualitative Analysis Conclusion In this paper, we not only introduce a novel adversarial reward learning algorithm to generate more human-like stories given image sequences, but also empirically analyze the limitations of the automatic metrics for story evaluation.",
"We believe there are still lots of improvement space in the narrative paragraph generation tasks, like how to better simulate human imagination to create more vivid and diversified stories."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"Model",
"Learning",
"Experimental Setup",
"Automatic Evaluation",
"Human Evaluation",
"Conclusion"
]
} | GEM-SciDuet-train-22#paper-1021#slide-7 | Associating Reward with Story | Energy-based models associate an energy value with a sample modeling the data as a Boltzmann distribution
p_θ(W) = exp(R_θ(W)) / Z_θ, where p_θ(W) is the approximate data distribution and Z_θ is the partition function
Optimal reward function R*(W) is achieved when p_θ(W) = p*(W)
LeCun et al. 2006, A tutorial on energy-based learning | Energy-based models associate an energy value with a sample modeling the data as a Boltzmann distribution
p_θ(W) = exp(R_θ(W)) / Z_θ, where p_θ(W) is the approximate data distribution and Z_θ is the partition function
Optimal reward function R*(W) is achieved when p_θ(W) = p*(W)
LeCun et al. 2006, A tutorial on energy-based learning | [] |
GEM-SciDuet-train-22#paper-1021#slide-8 | 1021 | No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling | Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challenges to behavioral cloning algorithms. Furthermore, due to the limitations of automatic metrics on evaluating story quality, reinforcement learning methods with hand-crafted rewards also face difficulties in gaining an overall performance boost. Therefore, we propose an Adversarial REward Learning (AREL) framework to learn an implicit reward function from human demonstrations, and then optimize policy search with the learned reward function. Though automatic evaluation indicates slight performance boost over state-of-the-art (SOTA) methods in cloning expert behaviors, human evaluation shows that our approach achieves significant improvement in generating more human-like stories than SOTA systems. Code will be made available here 1 . * Equal contribution 1 https://github.com/littlekobe/AREL | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237
],
"paper_content_text": [
"Introduction Recently, increasing attention has been focused on visual captioning (Chen et al., 2015; Wang et al., 2018c) , which aims at describing the content of an image or a video.",
"Though it has achieved impressive results, its capability of performing human-like understanding is still restrictive.",
"To further investigate machine's capa-Story #1: The brother and sister were ready for the first day of school.",
"They were excited to go to their first day and meet new friends.",
"They told their mom how happy they were.",
"They said they were going to make a lot of new friends .",
"Then they got up and got ready to get in the car .",
"Story #2: The brother did not want to talk to his sister.",
"The siblings made up.",
"They started to talk and smile.",
"Their parents showed up.",
"They were happy to see them.",
"shown here: each image is captioned with one sentence, and we also demonstrate two diversified stories that match the same image sequence.",
"bilities in understanding more complicated visual scenarios and composing more structured expressions, visual storytelling (Huang et al., 2016) has been proposed.",
"Visual captioning is aimed at depicting the concrete content of the images, and its expression style is rather simple.",
"In contrast, visual storytelling goes one step further: it summarizes the idea of a photo stream and tells a story about it.",
"Figure 1 shows an example of visual captioning and visual storytelling.",
"We have observed that stories contain rich emotions (excited, happy, not want) and imagination (siblings, parents, school, car) .",
"It, therefore, requires the capability to associate with concepts that do not explicitly appear in the images.",
"Moreover, stories are more subjective, so there barely exists standard templates for storytelling.",
"As shown in Figure 1 , the same photo stream can be paired with diverse stories, different from each other.",
"This heavily increases the evaluation difficulty.",
"So far, prior work for visual storytelling (Huang et al., 2016; Yu et al., 2017b) is mainly inspired by the success of visual captioning.",
"Nevertheless, because these methods are trained by maximizing the likelihood of the observed data pairs, they are restricted to generate simple and plain description with limited expressive patterns.",
"In order to cope with the challenges and produce more human-like descriptions, Rennie et al.",
"(2016) have proposed a reinforcement learning framework.",
"However, in the scenario of visual storytelling, the common reinforced captioning methods are facing great challenges since the hand-crafted rewards based on string matches are either too biased or too sparse to drive the policy search.",
"For instance, we used the METEOR (Banerjee and Lavie, 2005) score as the reward to reinforce our policy and found that though the METEOR score is significantly improved, the other scores are severely harmed.",
"Here we showcase an adversarial example with an average METEOR score as high as 40.2: We had a great time to have a lot of the.",
"They were to be a of the.",
"They were to be in the.",
"The and it were to be the.",
"The, and it were to be the.",
"Apparently, the machine is gaming the metrics.",
"Conversely, when using some other metrics (e.g.",
"BLEU, CIDEr) to evaluate the stories, we observe an opposite behavior: many relevant and coherent stories are receiving a very low score (nearly zero).",
"In order to resolve the strong bias brought by the hand-coded evaluation metrics in RL training and produce more human-like stories, we propose an Adversarial REward Learning (AREL) framework for visual storytelling.",
"We draw our inspiration from recent progress in inverse reinforcement learning (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017) and propose the AREL algorithm to learn a more intelligent reward function.",
"Specifically, we first incorporate a Boltzmann distribution to associate reward learning with distribution approximation, then design the adversarial process with two models -a policy model and a reward model.",
"The policy model performs the primitive actions and produces the story sequence, while the reward model is responsible for learning the implicit reward function from human demonstrations.",
"The learned reward function would be employed to optimize the policy in return.",
"For evaluation, we conduct both automatic metrics and human evaluation but observe a poor correlation between them.",
"Particularly, our method gains slight performance boost over the baseline systems on automatic metrics; human evaluation, however, indicates significant performance boost.",
"Thus we further discuss the limitations of the metrics and validate the superiority of our AREL method in performing more intelligent understanding of the visual scenes and generating more human-like stories.",
"Our main contributions are four-fold: • We propose an adversarial reward learning framework and apply it to boost visual story generation.",
"• We evaluate our approach on the Visual Storytelling (VIST) dataset and achieve the state-of-the-art results on automatic metrics.",
"• We empirically demonstrate that automatic metrics are not perfect for either training or evaluation.",
"• We design and perform a comprehensive human evaluation via Amazon Mechanical Turk, which demonstrates the superiority of the generated stories of our method on relevance, expressiveness, and concreteness.",
"Related Work Visual Storytelling Visual storytelling is the task of generating a narrative story from a photo stream, which requires a deeper understanding of the event flow in the stream.",
"Park and Kim (2015) has done some pioneering research on storytelling.",
"Chen et al.",
"(2017) proposed a multimodal approach for storyline generation to produce a stream of entities instead of human-like descriptions.",
"Recently, a more sophisticated dataset for visual storytelling (VIST) has been released to explore a more human-like understanding of grounded stories (Huang et al., 2016) .",
"Yu et al.",
"(2017b) proposes a multi-task learning algorithm for both album summarization and paragraph generation, achieving the best results on the VIST dataset.",
"But these methods are still based on behavioral cloning and lack the ability to generate more structured stories.",
"Reinforcement Learning in Sequence Generation Recently, reinforcement learning (RL) has gained its popularity in many sequence generation tasks such as machine translation (Bahdanau et al., 2016) , visual captioning (Ren et al., 2017; Wang et al., 2018b) , summarization (Paulus et al., 2017; Chen et al., 2018) , etc.",
"The common wisdom of using RL is to view generating a word as an action and aim at maximizing the expected return by optimizing its policy.",
"As pointed in (Ranzato et al., 2015) , traditional maximum likelihood algorithm is prone to exposure bias and label bias, while the RL agent exposes the generative model to its own distribution and thus can perform better.",
"But these works usually utilize hand-crafted metric scores as the reward to optimize the model, which fails to learn more implicit semantics due to the limitations of automatic metrics.",
"Rethinking Automatic Metrics Automatic metrics, including BLEU (Papineni et al., 2002) , CIDEr , METEOR (Banerjee and Lavie, 2005), and ROUGE (Lin, 2004) , have been widely applied to the sequence generation tasks.",
"Using automatic metrics can ensure rapid prototyping and testing new models with fewer expensive human evaluation.",
"However, they have been criticized to be biased and correlate poorly with human judgments, especially in many generative tasks like response generation (Lowe et al., 2017; Liu et al., 2016) , dialogue system (Bruni and Fernández, 2017) and machine translation (Callison-Burch et al., 2006) .",
"The naive overlap-counting methods are not able to reflect many semantic properties in natural language, such as coherence, expressiveness, etc.",
"Generative Adversarial Network Generative adversarial network (GAN) (Goodfellow et al., 2014 ) is a very popular approach for estimating intractable probabilities, which sidestep the difficulty by alternately training two models to play a min-max two-player game: min D max G E x∼p data [log D(x)] + E z∼pz [log D(G(z))] , where G is the generator and D is the discriminator, and z is the latent variable.",
"Recently, GAN has quickly been adopted to tackle discrete problems (Yu et al., 2017a; Wang et al., 2018a) .",
"The basic idea is to use Monte Carlo policy gradient estimation (Williams, 1992) to update the parameters of the generator.",
"Inverse Reinforcement Learning Reinforcement learning is known to be hindered by the need for an extensive feature and reward engineering, especially under the unknown dynamics.",
"Therefore, inverse reinforcement learning (IRL) has been proposed to infer expert's reward function.",
"Previous IRL approaches include maximum margin approaches (Abbeel and Ng, 2004; Ratliff et al., 2006) and probabilistic approaches (Ziebart, 2010; Ziebart et al., 2008) .",
"Recently, adversarial inverse reinforcement learning methods provide an efficient and scalable promise for automatic reward acquisition (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017; Henderson et al., 2017) .",
"These approaches utilize the connection between IRL and energy-based model and associate every data with a scalar energy value by using Boltzmann distribution p θ (x) ∝ exp(−E θ (x)).",
"Inspired by these methods, we propose a practical AREL approach for visual storytelling to uncover a robust reward function from human demonstrations and thus help produce human-like stories.",
"3 Our Approach Problem Statement Here we consider the task of visual storytelling, whose objective is to output a word sequence W = (w 1 , w 1 , · · · , w T ), w t ∈ V given an input image stream of 5 ordered images I = (I 1 , I 2 , · · · , I 5 ), where V is the vocabulary of all output token.",
"We formulate the generation as a markov decision process and design a reinforcement learning framework to tackle it.",
"As described in Figure 2 , our AREL framework is mainly composed of two modules: a policy model π β (W ) and a reward model R θ (W ).",
"The policy model takes an image sequence I as the input and performs sequential actions (choosing words w from the vocabulary V) to form a narrative story W .",
"The reward model CNN My brother recently graduated college.",
"It was a formal cap and gown event.",
"My mom and dad attended.",
"Later, my aunt and grandma showed up.",
"When the event was over he even got congratulated by the mascot.",
"Figure 3 : Overview of the policy model.",
"The visual encoder is a bidirectional GRU, which encodes the high-level visual features extracted from the input images.",
"Its outputs are then fed into the RNN decoders to generate sentences in parallel.",
"Encoder Decoder Finally, we concatenate all the generated sentences as a full story.",
"Note that the five decoders share the same weights.",
"is optimized by the adversarial objective (see Section 3.3) and aims at deriving a human-like reward from both human-annotated stories and sampled predictions.",
"Model Policy Model As is shown in Figure 3 , the policy model is a CNN-RNN architecture.",
"We fist feed the photo stream I = (I 1 , · · · , I 5 ) into a pretrained CNN and extract their high-level image features.",
"We then employ a visual encoder to further encode the image features as context vectors h i = [ ← − h i ; − → h i ].",
"The visual encoder is a bidirectional gated recurrent units (GRU).",
"In the decoding stage, we feed each context vector h i into a GRU-RNN decoder to generate a substory W i .",
"Formally, the generation process can be written as: s i t = GRU(s i t−1 , [w i t−1 , h i ]) , (1) π β (w i t |w i 1:t−1 ) = sof tmax(W s s i t + b s ) , (2) where s i t denotes the t-th hidden state of i-th decoder.",
"We concatenate the previous token w i t−1 and the context vector h i as the input.",
"W s and b s are the projection matrix and bias, which output a probability distribution over the whole vocabulary V. Eventually, the final story W is the concatenation of the sub-stories W i .",
"β denotes all the parameters of the encoder, the decoder, and the output layer.",
"Figure 4 : Overview of the reward model.",
"Our reward model is a CNN-based architecture, which utilizes convolution kernels with size 2, 3 and 4 to extract bigram, trigram and 4-gram representations from the input sequence embeddings.",
"Once the sentence representation is learned, it will be concatenated with the visual representation of the input image, and then be fed into the final FC layer to obtain the reward.",
"Reward Model The reward model R θ (W ) is a CNN-based architecture (see Figure 4 ).",
"Instead of giving an overall score for the whole story, we apply the reward model to different story parts (substories) W i and compute partial rewards, where i = 1, · · · , 5.",
"We observe that the partial rewards are more fine-grained and can provide better guidance for the policy model.",
"We first query the word embeddings of the substory (one sentence in most cases).",
"Next, multiple convolutional layers with different kernel sizes are used to extract the n-grams features, which are then projected into the sentence-level representation space by pooling layers (the design here is inspired by Kim (2014) ).",
"In addition to the textual features, evaluating the quality of a story should also consider the image features for relevance.",
"Therefore, we then combine the sentence representation with the visual feature of the input image through concatenation and feed them into the final fully connected decision layer.",
"In the end, the reward model outputs an estimated reward value R θ (W ).",
"The process can be written in formula: R θ (W ) = W r (f conv (W ) + W i I CN N ) + b r , (3) where W r , b r denotes the weights in the output layer, and f conv denotes the operations in CNN.",
"I CN N is the high-level visual feature extracted from the image, and W i projects it into the sentence representation space.",
"θ includes all the pa-rameters above.",
"Learning Reward Boltzmann Distribution In order to associate story distribution with reward function, we apply EBM to define a Reward Boltzmann distribution: p θ (W ) = exp(R θ (W )) Z θ , (4) Where W is the word sequence of the story and p θ (W ) is the approximate data distribution, and Z θ = W exp(R θ (W )) denotes the partition function.",
"According to the energy-based model (Le-Cun et al., 2006) , the optimal reward function R * (W ) is achieved when the Reward-Boltzmann distribution equals to the \"real\" data distribution p θ (W ) = p * (W ).",
"Adversarial Reward Learning We first introduce an empirical distribution p e (W ) = 1(W ∈D) |D| to represent the empirical distribution of the training data, where D denotes the dataset with |D| stories and 1 denotes an indicator function.",
"We use this empirical distribution as the \"good\" examples, which provides the evidence for the reward function to learn from.",
"In order to approximate the Reward Boltzmann distribution towards the \"real\" data distribution p * (W ), we design a min-max two-player game, where the Reward Boltzmann distribution p θ aims at maximizing the its similarity with empirical distribution p e while minimizing that with the \"faked\" data generated from policy model π β .",
"On the contrary, the policy distribution π β tries to maximize its similarity with the Boltzmann distribution p θ .",
"Formally, the adversarial objective function is defined as max β min θ KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) .",
"(5) We further decompose it into two parts.",
"First, because the objective J β of the story generation policy is to minimize its similarity with the Boltzmann distribution p θ , the optimal policy that minimizes KL-divergence is thus π(W ) ∼ exp(R θ (W )), meaning if R θ is optimal, the optimal π β = π * .",
"In formula, J β = − KL(π β (W )||p θ (W )) = E W ∼π β (W ) [R θ (W )] + H(π β (W )) , (6) Algorithm where H denotes the entropy of the policy model.",
"On the other hand, the objective J θ of the reward function is to distinguish between humanannotated stories and machine-generated stories.",
"Hence it is trying to minimize the KL-divergence with the empirical distribution p e and maximize the KL-divergence with the approximated policy distribution π β : J θ =KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) = W [pe(W )R θ (W ) − π β (W )R θ (W )] − H(pe) + H(π β ) , (7) Since H(π β ) and H(p e ) are irrelevant to θ, we denote them as constant C. Therefore, the objective J θ can be further derived as J θ = E W ∼pe(W ) [R θ (W )] − E W ∼π β (W ) [R θ (W )] + C .",
"(8) Here we propose to use stochastic gradient descent to optimize these two models alternately.",
"Formally, the gradients can be written as ∂J θ ∂θ = E W ∼pe(W ) ∂R θ (W ) ∂θ − E W ∼π β (W ) ∂R θ (W ) ∂θ , ∂J β ∂β = E W ∼π β (W ) (R θ (W ) + log π θ (W ) − b) ∂ log π β (W ) ∂β , (9) where b is the estimated baseline to reduce the variance.",
"Training & Testing As described in Algorithm 1, we introduce an alternating algorithm to train these two models using stochastic gradient descent.",
"During testing, the policy model is used with beam search to produce the story.",
"Experiments and Analysis Experimental Setup VIST Dataset The VIST dataset (Huang et al., 2016) is the first dataset for sequential vision-tolanguage tasks including visual storytelling, which consists of 10,117 Flickr albums with 210,819 unique photos.",
"In this paper, we mainly evaluate our AREL method on this dataset.",
"After filtering the broken images 2 , there are 40,098 training, 4,988 validation, and 5,050 testing samples.",
"Each sample contains one story that describes 5 selected images from a photo album (mostly one sentence per image).",
"And the same album is paired with 5 different stories as references.",
"In our experiments, we used the same split settings as in (Huang et al., 2016; Yu et al., 2017b) for a fair comparison.",
"Evaluation Metrics In order to comprehensively evaluate our method on storytelling dataset, we adopted both the automatic metrics and human evaluation as our criterion.",
"Four diverse automatic metrics were used in our experiments: BLEU, METEOR, ROUGE-L, and CIDEr.",
"We utilized the open source evaluation code 3 used in (Yu et al., 2017b) .",
"For human evaluation, we employed the Amazon Mechanical Turk to perform two kinds of user studies (see Section 4.3 for more details).",
"Training Details We employ pretrained ResNet-152 model to extract image features from the photo stream.",
"We built a vocabulary of size 9,837 to include words appearing more than three times in the training set.",
"More training details can be found at Appendix B.",
"Automatic Evaluation In this section, we compare our AREL method with the state-of-the-art methods as well as standard reinforcement learning algorithms on automatic evaluation metrics.",
"Then we further discuss the limitations of the hand-crafted metrics on evaluating human-like stories.",
"Comparison with SOTA on Automatic Metrics In Table 1 , we compare our method with Huang et al.",
"(2016) and Yu et al.",
"(2017b) , which report achieving best-known results on the VIST dataset.",
"We first implement a strong baseline model (XEss), which share the same architecture with our policy model but is trained with cross-entropy loss and scheduled sampling.",
"Besides, we adopt the traditional generative adversarial training for comparison (GAN).",
"As shown in Table 1 , our XEss model already outperforms the best-known re- Table 1 : Automatic evaluation on the VIST dataset.",
"We report BLEU (B), METEOR (M), ROUGH-L (R), and CIDEr (C) scores of the SOTA systems and the models we implemented, including XE-ss, GAN and AREL.",
"AREL-s-N denotes AREL models with sigmoid as output activation and alternate frequency as N, while ARELt-N denoting AREL models with tahn as the output activation (N = 50 or 100).",
"sults on the VIST dataset, and the GAN model can bring a performance boost.",
"We then use the XEss model to initialize our policy model and further train it with AREL.",
"Evidently, our AREL model performs the best and achieves the new state-ofthe-art results across all metrics.",
"But, compared with the XE-ss model, the performance gain is minor, especially on METEOR and ROUGE-L scores.",
"However, in Sec.",
"4.3, the extensive human evaluation has indicated that our AREL framework brings a significant improvement on generating human-like stories over the XE-ss model.",
"The inconsistency of automatic evaluation and human evaluation lead to a suspect that these hand-crafted metrics lack the ability to fully evaluate stories' quality due to the complicated characteristics of the stories.",
"Therefore, we conduct experiments to analyze and discuss the defects of the automatic metrics in section 4.2.",
"Limitations of Automatic Metrics As we claimed in the introduction, string-match-based automatic metrics are not perfect and fail to evaluate some semantic characteristics of the stories, like the expressiveness and coherence of the stories.",
"In order to confirm our conjecture, we utilize automatic metrics as rewards to reinforce the visual storytelling model by adopting policy gradient with baseline to train the policy model.",
"The quantitative results are demonstrated in Table 1 .",
"Apparently, METEOR-RL and ROUGE-RL are severely ill-posed: they obtain the highest scores on their own metrics but damage the other met- Table 2 : Comparison with different RL models with different metric scores as the rewards.",
"We report the average scores of the AREL models as AREL (avg).",
"Although METEOR-RL and ROUGE-RL models achieve very high scores on their own metrics, the underlined scores are severely damaged.",
"Actually, they are gaming their own metrics with nonsense sentences.",
"rics severely.",
"We observe that these models are actually overfitting to a given metric while losing the overall coherence and semantical correctness.",
"Same as METEOR score, there is also an adversarial example for ROUGE-L 4 , which is nonsense but achieves an average ROUGE-L score of 33.8.",
"Besides, as can be seen in Table 1 , after reinforced training, BLEU-RL and CIDEr-RL do not bring a consistent improvement over the XE-ss model.",
"We plot the histogram distributions of both BLEU-3 and CIDEr scores on the test set in Figure 5 .",
"An interesting fact is that there are a large number of samples with nearly zero score on both metrics.",
"However, we observed those \"zero-score\" samples are not pointless results; instead, lots of them make sense and deserve a better score than zero.",
"Here is a \"zero-score\" example on BLEU-3: I had a great time at the restaurant today.",
"The food was delicious.",
"I had a lot of food.",
"The food was delicious.",
"T had a great time.",
"The corresponding reference is The table of food was a pleasure to see!",
"Our food is both nutritious and beautiful!",
"Our chicken was especially tasty!",
"We love greens as they taste great and are healthy!",
"The fruit was a colorful display that tantalized our palette.. theme \"food and eating\", which showcases the defeats of using BLEU and CIDEr scores as a reward for RL training.",
"Moreover, we compare the human evaluation scores with these two metric scores in Figure 5 .",
"Noticeably, both BLEU-3 and CIDEr have a poor correlation with the human evaluation scores.",
"Their distributions are more biased and thus cannot fully reflect the quality of the generated stories.",
"In terms of BLEU, it is extremely hard for machines to produce the exact 3-gram or 4-gram matching, so the scores are too low to provide useful guidance.",
"CIDEr measures the similarity of a sentence to the majority of the references.",
"However, the references to the same image sequence are photostream different from each other, so the score is very low and not suitable for this task.",
"In contrast, our AREL framework can lean a more robust reward function from human-annotated stories, which is able to provide better guidance to the policy and thus improves its performances over different metrics.",
"Although the prediction is not as good as the reference, it is actually coherent and relevant to the Comparison with GAN We here compare our method with traditional GAN (Goodfellow et al., 2014) , the update rule for generator can be generally classified into two categories.",
"We demonstrate their corresponding objectives and ours as follows: GAN 1 : J β = E W ∼p β [− log R θ (W )] , GAN 2 : J β = E W ∼p β [log(1 − R θ (W ))] , ours : J β = E W ∼p β [−R θ (W )] .",
"As discussed in Arjovsky et al.",
"(2017) , GAN 1 is prone to the unstable gradient issue and GAN 2 is prone to the vanishing gradient issue.",
"Analytically, our method does not suffer from these two common issues and thus is able converge to optimum solutions more easily.",
"From Table 1 , we can observe slight gains of using AREL over GAN Figure 5 : Metric score distributions.",
"We plot the histogram distributions of BLEU-3 and CIDEr scores on the test set, as well as the human evaluation score distribution on the test samples.",
"For a fair comparison, we use the Turing test results to calculate the human evaluation scores (see Section 4.3).",
"Basically, 0.2 score is given if the generated story wins the Turing test, 0.1 for tie, and 0 if losing.",
"Each sample has 5 scores from 5 judges, and we use the sum as the human evaluation score, so it is in the range [0, 1].",
"with automatic metrics, therefore we further deploy human evaluation for a better comparison.",
"Human Evaluation Automatic metrics cannot fully evaluate the capability of our AREL method.",
"Therefore, we perform two different kinds of human evaluation studies on Amazon Mechanical Turk: Turing test and pairwise human evaluation.",
"For both tasks, we use 150 stories (750 images) sampled from the test set, each assigned to 5 workers to eliminate human variance.",
"We batch six items as one assignment and insert an additional assignment as a sanity check.",
"Besides, the order of the options within each item is shuffled to make a fair comparison.",
"Turing Test We first conduct five independent Turing tests for XE-ss, BLEU-RL, CIDEr-RL, GAN, and AREL models, during which the worker is given one human-annotated sample and one machine-generated sample, and needs to decide which is human-annotated.",
"As shown in Table 3, our AREL model significantly outperforms all the other baseline models in the Turing test: it has much more chances to fool AMT worker (the ratio is AREL:XE-ss:BLEU-RL:CIDEr-RL:GAN = 45.8%:28.3%:32.1%:19.7%:39.5%), which confirms the superiority of our AREL framework in generating human-like stories.",
"Unlike automatic metric evaluation, the Turing test has indicated a much larger margin between AREL and other competing algorithms.",
"Thus, we empirically confirm that metrics are not perfect in evaluating many implicit semantic properties of natural language.",
"Besides, the Turing test of our AREL model reveals that nearly half of the workers are fooled by our machine generation, indicating a preliminary success toward generating human-like stories.",
"Pairwise Comparison In order to have a clear comparison with competing algorithms with respect to different semantic features of the stories, we further perform four pairwise comparison tests: AREL vs XE-ss/BLEU-RL/CIDEr-RL/GAN.",
"For each photo stream, the worker is presented with two generated stories and asked to make decisions from the three aspects: relevance 5 , expressiveness 6 and concreteness 7 .",
"This head-tohead compete is designed to help us understand in what aspect our model outperforms the competing algorithms, which is displayed in Table 4 .",
"Consistently on all the three comparisons, a large majority of the AREL stories trumps the competing systems with respect to their relevance, XE-ss We took a trip to the mountains.",
"There were many different kinds of different kinds.",
"We had a great time.",
"He was a great time.",
"It was a beautiful day.",
"AREL The family decided to take a trip to the countryside.",
"There were so many different kinds of things to see.",
"The family decided to go on a hike.",
"I had a great time.",
"At the end of the day, we were able to take a picture of the beautiful scenery.",
"Humancreated Story We went on a hike yesterday.",
"There were a lot of strange plants there.",
"I had a great time.",
"We drank a lot of water while we were hiking.",
"The view was spectacular.",
"expressiveness, and concreteness.",
"Therefore, it empirically confirms that our generated stories are more relevant to the image sequences, more coherent and concrete than the other algorithms, which however is not explicitly reflected by the automatic metric evaluation.",
"Figure 6 gives a qualitative comparison example between AREL and XE-ss models.",
"Looking at the individual sentences, it is obvious that our results are more grammatically and semantically correct.",
"Then connecting the sentences together, we observe that the AREL story is more coherent and describes the photo stream more accurately.",
"Thus, our AREL model significantly surpasses the XEss model on all the three aspects of the qualitative example.",
"Besides, it won the Turing test (3 out 5 AMT workers think the AREL story is created by a human).",
"In the appendix, we also show a negative case that fails the Turing test.",
"Qualitative Analysis Conclusion In this paper, we not only introduce a novel adversarial reward learning algorithm to generate more human-like stories given image sequences, but also empirically analyze the limitations of the automatic metrics for story evaluation.",
"We believe there are still lots of improvement space in the narrative paragraph generation tasks, like how to better simulate human imagination to create more vivid and diversified stories."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"Model",
"Learning",
"Experimental Setup",
"Automatic Evaluation",
"Human Evaluation",
"Conclusion"
]
} | GEM-SciDuet-train-22#paper-1021#slide-8 | AREL Objective | Therefore, we define an adversarial objective with KL-divergence
max_β min_θ KL(p_e(W) || p_θ(W)) − KL(π_β(W) || p_θ(W)), where p_e is the empirical distribution and π_β is the policy distribution
The objective of Reward Model
The objective of Policy Model | Therefore, we define an adversarial objective with KL-divergence
max_β min_θ KL(p_e(W) || p_θ(W)) − KL(π_β(W) || p_θ(W)), where p_e is the empirical distribution and π_β is the policy distribution
The objective of Reward Model
The objective of Policy Model | [] |
GEM-SciDuet-train-22#paper-1021#slide-10 | 1021 | No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling | Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challenges to behavioral cloning algorithms. Furthermore, due to the limitations of automatic metrics on evaluating story quality, reinforcement learning methods with hand-crafted rewards also face difficulties in gaining an overall performance boost. Therefore, we propose an Adversarial REward Learning (AREL) framework to learn an implicit reward function from human demonstrations, and then optimize policy search with the learned reward function. Though automatic evaluation indicates slight performance boost over state-of-the-art (SOTA) methods in cloning expert behaviors, human evaluation shows that our approach achieves significant improvement in generating more human-like stories than SOTA systems. Code will be made available here 1 . * Equal contribution 1 https://github.com/littlekobe/AREL | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237
],
"paper_content_text": [
"Introduction Recently, increasing attention has been focused on visual captioning (Chen et al., 2015; Wang et al., 2018c) , which aims at describing the content of an image or a video.",
"Though it has achieved impressive results, its capability of performing human-like understanding is still restrictive.",
"To further investigate machine's capa-Story #1: The brother and sister were ready for the first day of school.",
"They were excited to go to their first day and meet new friends.",
"They told their mom how happy they were.",
"They said they were going to make a lot of new friends .",
"Then they got up and got ready to get in the car .",
"Story #2: The brother did not want to talk to his sister.",
"The siblings made up.",
"They started to talk and smile.",
"Their parents showed up.",
"They were happy to see them.",
"shown here: each image is captioned with one sentence, and we also demonstrate two diversified stories that match the same image sequence.",
"bilities in understanding more complicated visual scenarios and composing more structured expressions, visual storytelling (Huang et al., 2016) has been proposed.",
"Visual captioning is aimed at depicting the concrete content of the images, and its expression style is rather simple.",
"In contrast, visual storytelling goes one step further: it summarizes the idea of a photo stream and tells a story about it.",
"Figure 1 shows an example of visual captioning and visual storytelling.",
"We have observed that stories contain rich emotions (excited, happy, not want) and imagination (siblings, parents, school, car) .",
"It, therefore, requires the capability to associate with concepts that do not explicitly appear in the images.",
"Moreover, stories are more subjective, so there barely exists standard templates for storytelling.",
"As shown in Figure 1 , the same photo stream can be paired with diverse stories, different from each other.",
"This heavily increases the evaluation difficulty.",
"So far, prior work for visual storytelling (Huang et al., 2016; Yu et al., 2017b) is mainly inspired by the success of visual captioning.",
"Nevertheless, because these methods are trained by maximizing the likelihood of the observed data pairs, they are restricted to generate simple and plain description with limited expressive patterns.",
"In order to cope with the challenges and produce more human-like descriptions, Rennie et al.",
"(2016) have proposed a reinforcement learning framework.",
"However, in the scenario of visual storytelling, the common reinforced captioning methods are facing great challenges since the hand-crafted rewards based on string matches are either too biased or too sparse to drive the policy search.",
"For instance, we used the METEOR (Banerjee and Lavie, 2005) score as the reward to reinforce our policy and found that though the METEOR score is significantly improved, the other scores are severely harmed.",
"Here we showcase an adversarial example with an average METEOR score as high as 40.2: We had a great time to have a lot of the.",
"They were to be a of the.",
"They were to be in the.",
"The and it were to be the.",
"The, and it were to be the.",
"Apparently, the machine is gaming the metrics.",
"Conversely, when using some other metrics (e.g.",
"BLEU, CIDEr) to evaluate the stories, we observe an opposite behavior: many relevant and coherent stories are receiving a very low score (nearly zero).",
"In order to resolve the strong bias brought by the hand-coded evaluation metrics in RL training and produce more human-like stories, we propose an Adversarial REward Learning (AREL) framework for visual storytelling.",
"We draw our inspiration from recent progress in inverse reinforcement learning (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017) and propose the AREL algorithm to learn a more intelligent reward function.",
"Specifically, we first incorporate a Boltzmann distribution to associate reward learning with distribution approximation, then design the adversarial process with two models -a policy model and a reward model.",
"The policy model performs the primitive actions and produces the story sequence, while the reward model is responsible for learning the implicit reward function from human demonstrations.",
"The learned reward function would be employed to optimize the policy in return.",
"For evaluation, we conduct both automatic metrics and human evaluation but observe a poor correlation between them.",
"Particularly, our method gains slight performance boost over the baseline systems on automatic metrics; human evaluation, however, indicates significant performance boost.",
"Thus we further discuss the limitations of the metrics and validate the superiority of our AREL method in performing more intelligent understanding of the visual scenes and generating more human-like stories.",
"Our main contributions are four-fold: • We propose an adversarial reward learning framework and apply it to boost visual story generation.",
"• We evaluate our approach on the Visual Storytelling (VIST) dataset and achieve the state-of-the-art results on automatic metrics.",
"• We empirically demonstrate that automatic metrics are not perfect for either training or evaluation.",
"• We design and perform a comprehensive human evaluation via Amazon Mechanical Turk, which demonstrates the superiority of the generated stories of our method on relevance, expressiveness, and concreteness.",
"Related Work Visual Storytelling Visual storytelling is the task of generating a narrative story from a photo stream, which requires a deeper understanding of the event flow in the stream.",
"Park and Kim (2015) has done some pioneering research on storytelling.",
"Chen et al.",
"(2017) proposed a multimodal approach for storyline generation to produce a stream of entities instead of human-like descriptions.",
"Recently, a more sophisticated dataset for visual storytelling (VIST) has been released to explore a more human-like understanding of grounded stories (Huang et al., 2016) .",
"Yu et al.",
"(2017b) proposes a multi-task learning algorithm for both album summarization and paragraph generation, achieving the best results on the VIST dataset.",
"But these methods are still based on behavioral cloning and lack the ability to generate more structured stories.",
"Reinforcement Learning in Sequence Generation Recently, reinforcement learning (RL) has gained its popularity in many sequence generation tasks such as machine translation (Bahdanau et al., 2016) , visual captioning (Ren et al., 2017; Wang et al., 2018b) , summarization (Paulus et al., 2017; Chen et al., 2018) , etc.",
"The common wisdom of using RL is to view generating a word as an action and aim at maximizing the expected return by optimizing its policy.",
"As pointed in (Ranzato et al., 2015) , traditional maximum likelihood algorithm is prone to exposure bias and label bias, while the RL agent exposes the generative model to its own distribution and thus can perform better.",
"But these works usually utilize hand-crafted metric scores as the reward to optimize the model, which fails to learn more implicit semantics due to the limitations of automatic metrics.",
"Rethinking Automatic Metrics Automatic metrics, including BLEU (Papineni et al., 2002) , CIDEr , METEOR (Banerjee and Lavie, 2005), and ROUGE (Lin, 2004) , have been widely applied to the sequence generation tasks.",
"Using automatic metrics can ensure rapid prototyping and testing new models with fewer expensive human evaluation.",
"However, they have been criticized to be biased and correlate poorly with human judgments, especially in many generative tasks like response generation (Lowe et al., 2017; Liu et al., 2016) , dialogue system (Bruni and Fernández, 2017) and machine translation (Callison-Burch et al., 2006) .",
"The naive overlap-counting methods are not able to reflect many semantic properties in natural language, such as coherence, expressiveness, etc.",
"Generative Adversarial Network Generative adversarial network (GAN) (Goodfellow et al., 2014 ) is a very popular approach for estimating intractable probabilities, which sidestep the difficulty by alternately training two models to play a min-max two-player game: min D max G E x∼p data [log D(x)] + E z∼pz [log D(G(z))] , where G is the generator and D is the discriminator, and z is the latent variable.",
"Recently, GAN has quickly been adopted to tackle discrete problems (Yu et al., 2017a; Wang et al., 2018a) .",
"The basic idea is to use Monte Carlo policy gradient estimation (Williams, 1992) to update the parameters of the generator.",
"Inverse Reinforcement Learning Reinforcement learning is known to be hindered by the need for an extensive feature and reward engineering, especially under the unknown dynamics.",
"Therefore, inverse reinforcement learning (IRL) has been proposed to infer expert's reward function.",
"Previous IRL approaches include maximum margin approaches (Abbeel and Ng, 2004; Ratliff et al., 2006) and probabilistic approaches (Ziebart, 2010; Ziebart et al., 2008) .",
"Recently, adversarial inverse reinforcement learning methods provide an efficient and scalable promise for automatic reward acquisition (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017; Henderson et al., 2017) .",
"These approaches utilize the connection between IRL and energy-based model and associate every data with a scalar energy value by using Boltzmann distribution p θ (x) ∝ exp(−E θ (x)).",
"Inspired by these methods, we propose a practical AREL approach for visual storytelling to uncover a robust reward function from human demonstrations and thus help produce human-like stories.",
"3 Our Approach Problem Statement Here we consider the task of visual storytelling, whose objective is to output a word sequence W = (w 1 , w 1 , · · · , w T ), w t ∈ V given an input image stream of 5 ordered images I = (I 1 , I 2 , · · · , I 5 ), where V is the vocabulary of all output token.",
"We formulate the generation as a markov decision process and design a reinforcement learning framework to tackle it.",
"As described in Figure 2 , our AREL framework is mainly composed of two modules: a policy model π β (W ) and a reward model R θ (W ).",
"The policy model takes an image sequence I as the input and performs sequential actions (choosing words w from the vocabulary V) to form a narrative story W .",
"The reward model CNN My brother recently graduated college.",
"It was a formal cap and gown event.",
"My mom and dad attended.",
"Later, my aunt and grandma showed up.",
"When the event was over he even got congratulated by the mascot.",
"Figure 3 : Overview of the policy model.",
"The visual encoder is a bidirectional GRU, which encodes the high-level visual features extracted from the input images.",
"Its outputs are then fed into the RNN decoders to generate sentences in parallel.",
"Encoder Decoder Finally, we concatenate all the generated sentences as a full story.",
"Note that the five decoders share the same weights.",
"is optimized by the adversarial objective (see Section 3.3) and aims at deriving a human-like reward from both human-annotated stories and sampled predictions.",
"Model Policy Model As is shown in Figure 3 , the policy model is a CNN-RNN architecture.",
"We fist feed the photo stream I = (I 1 , · · · , I 5 ) into a pretrained CNN and extract their high-level image features.",
"We then employ a visual encoder to further encode the image features as context vectors h i = [ ← − h i ; − → h i ].",
"The visual encoder is a bidirectional gated recurrent units (GRU).",
"In the decoding stage, we feed each context vector h i into a GRU-RNN decoder to generate a substory W i .",
"Formally, the generation process can be written as: s i t = GRU(s i t−1 , [w i t−1 , h i ]) , (1) π β (w i t |w i 1:t−1 ) = sof tmax(W s s i t + b s ) , (2) where s i t denotes the t-th hidden state of i-th decoder.",
"We concatenate the previous token w i t−1 and the context vector h i as the input.",
"W s and b s are the projection matrix and bias, which output a probability distribution over the whole vocabulary V. Eventually, the final story W is the concatenation of the sub-stories W i .",
"β denotes all the parameters of the encoder, the decoder, and the output layer.",
"Figure 4 : Overview of the reward model.",
"Our reward model is a CNN-based architecture, which utilizes convolution kernels with size 2, 3 and 4 to extract bigram, trigram and 4-gram representations from the input sequence embeddings.",
"Once the sentence representation is learned, it will be concatenated with the visual representation of the input image, and then be fed into the final FC layer to obtain the reward.",
"Reward Model The reward model R θ (W ) is a CNN-based architecture (see Figure 4 ).",
"Instead of giving an overall score for the whole story, we apply the reward model to different story parts (substories) W i and compute partial rewards, where i = 1, · · · , 5.",
"We observe that the partial rewards are more fine-grained and can provide better guidance for the policy model.",
"We first query the word embeddings of the substory (one sentence in most cases).",
"Next, multiple convolutional layers with different kernel sizes are used to extract the n-grams features, which are then projected into the sentence-level representation space by pooling layers (the design here is inspired by Kim (2014) ).",
"In addition to the textual features, evaluating the quality of a story should also consider the image features for relevance.",
"Therefore, we then combine the sentence representation with the visual feature of the input image through concatenation and feed them into the final fully connected decision layer.",
"In the end, the reward model outputs an estimated reward value R θ (W ).",
"The process can be written in formula: R θ (W ) = W r (f conv (W ) + W i I CN N ) + b r , (3) where W r , b r denotes the weights in the output layer, and f conv denotes the operations in CNN.",
"I CN N is the high-level visual feature extracted from the image, and W i projects it into the sentence representation space.",
"θ includes all the pa-rameters above.",
"Learning Reward Boltzmann Distribution In order to associate story distribution with reward function, we apply EBM to define a Reward Boltzmann distribution: p θ (W ) = exp(R θ (W )) Z θ , (4) Where W is the word sequence of the story and p θ (W ) is the approximate data distribution, and Z θ = W exp(R θ (W )) denotes the partition function.",
"According to the energy-based model (Le-Cun et al., 2006) , the optimal reward function R * (W ) is achieved when the Reward-Boltzmann distribution equals to the \"real\" data distribution p θ (W ) = p * (W ).",
"Adversarial Reward Learning We first introduce an empirical distribution p e (W ) = 1(W ∈D) |D| to represent the empirical distribution of the training data, where D denotes the dataset with |D| stories and 1 denotes an indicator function.",
"We use this empirical distribution as the \"good\" examples, which provides the evidence for the reward function to learn from.",
"In order to approximate the Reward Boltzmann distribution towards the \"real\" data distribution p * (W ), we design a min-max two-player game, where the Reward Boltzmann distribution p θ aims at maximizing the its similarity with empirical distribution p e while minimizing that with the \"faked\" data generated from policy model π β .",
"On the contrary, the policy distribution π β tries to maximize its similarity with the Boltzmann distribution p θ .",
"Formally, the adversarial objective function is defined as max β min θ KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) .",
"(5) We further decompose it into two parts.",
"First, because the objective J β of the story generation policy is to minimize its similarity with the Boltzmann distribution p θ , the optimal policy that minimizes KL-divergence is thus π(W ) ∼ exp(R θ (W )), meaning if R θ is optimal, the optimal π β = π * .",
"In formula, J β = − KL(π β (W )||p θ (W )) = E W ∼π β (W ) [R θ (W )] + H(π β (W )) , (6) Algorithm where H denotes the entropy of the policy model.",
"On the other hand, the objective J θ of the reward function is to distinguish between humanannotated stories and machine-generated stories.",
"Hence it is trying to minimize the KL-divergence with the empirical distribution p e and maximize the KL-divergence with the approximated policy distribution π β : J θ =KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) = W [pe(W )R θ (W ) − π β (W )R θ (W )] − H(pe) + H(π β ) , (7) Since H(π β ) and H(p e ) are irrelevant to θ, we denote them as constant C. Therefore, the objective J θ can be further derived as J θ = E W ∼pe(W ) [R θ (W )] − E W ∼π β (W ) [R θ (W )] + C .",
"(8) Here we propose to use stochastic gradient descent to optimize these two models alternately.",
"Formally, the gradients can be written as ∂J θ ∂θ = E W ∼pe(W ) ∂R θ (W ) ∂θ − E W ∼π β (W ) ∂R θ (W ) ∂θ , ∂J β ∂β = E W ∼π β (W ) (R θ (W ) + log π θ (W ) − b) ∂ log π β (W ) ∂β , (9) where b is the estimated baseline to reduce the variance.",
"Training & Testing As described in Algorithm 1, we introduce an alternating algorithm to train these two models using stochastic gradient descent.",
"During testing, the policy model is used with beam search to produce the story.",
"Experiments and Analysis Experimental Setup VIST Dataset The VIST dataset (Huang et al., 2016) is the first dataset for sequential vision-tolanguage tasks including visual storytelling, which consists of 10,117 Flickr albums with 210,819 unique photos.",
"In this paper, we mainly evaluate our AREL method on this dataset.",
"After filtering the broken images 2 , there are 40,098 training, 4,988 validation, and 5,050 testing samples.",
"Each sample contains one story that describes 5 selected images from a photo album (mostly one sentence per image).",
"And the same album is paired with 5 different stories as references.",
"In our experiments, we used the same split settings as in (Huang et al., 2016; Yu et al., 2017b) for a fair comparison.",
"Evaluation Metrics In order to comprehensively evaluate our method on storytelling dataset, we adopted both the automatic metrics and human evaluation as our criterion.",
"Four diverse automatic metrics were used in our experiments: BLEU, METEOR, ROUGE-L, and CIDEr.",
"We utilized the open source evaluation code 3 used in (Yu et al., 2017b) .",
"For human evaluation, we employed the Amazon Mechanical Turk to perform two kinds of user studies (see Section 4.3 for more details).",
"Training Details We employ pretrained ResNet-152 model to extract image features from the photo stream.",
"We built a vocabulary of size 9,837 to include words appearing more than three times in the training set.",
"More training details can be found at Appendix B.",
"Automatic Evaluation In this section, we compare our AREL method with the state-of-the-art methods as well as standard reinforcement learning algorithms on automatic evaluation metrics.",
"Then we further discuss the limitations of the hand-crafted metrics on evaluating human-like stories.",
"Comparison with SOTA on Automatic Metrics In Table 1 , we compare our method with Huang et al.",
"(2016) and Yu et al.",
"(2017b) , which report achieving best-known results on the VIST dataset.",
"We first implement a strong baseline model (XEss), which share the same architecture with our policy model but is trained with cross-entropy loss and scheduled sampling.",
"Besides, we adopt the traditional generative adversarial training for comparison (GAN).",
"As shown in Table 1 , our XEss model already outperforms the best-known re- Table 1 : Automatic evaluation on the VIST dataset.",
"We report BLEU (B), METEOR (M), ROUGH-L (R), and CIDEr (C) scores of the SOTA systems and the models we implemented, including XE-ss, GAN and AREL.",
"AREL-s-N denotes AREL models with sigmoid as output activation and alternate frequency as N, while ARELt-N denoting AREL models with tahn as the output activation (N = 50 or 100).",
"sults on the VIST dataset, and the GAN model can bring a performance boost.",
"We then use the XEss model to initialize our policy model and further train it with AREL.",
"Evidently, our AREL model performs the best and achieves the new state-ofthe-art results across all metrics.",
"But, compared with the XE-ss model, the performance gain is minor, especially on METEOR and ROUGE-L scores.",
"However, in Sec.",
"4.3, the extensive human evaluation has indicated that our AREL framework brings a significant improvement on generating human-like stories over the XE-ss model.",
"The inconsistency of automatic evaluation and human evaluation lead to a suspect that these hand-crafted metrics lack the ability to fully evaluate stories' quality due to the complicated characteristics of the stories.",
"Therefore, we conduct experiments to analyze and discuss the defects of the automatic metrics in section 4.2.",
"Limitations of Automatic Metrics As we claimed in the introduction, string-match-based automatic metrics are not perfect and fail to evaluate some semantic characteristics of the stories, like the expressiveness and coherence of the stories.",
"In order to confirm our conjecture, we utilize automatic metrics as rewards to reinforce the visual storytelling model by adopting policy gradient with baseline to train the policy model.",
"The quantitative results are demonstrated in Table 1 .",
"Apparently, METEOR-RL and ROUGE-RL are severely ill-posed: they obtain the highest scores on their own metrics but damage the other met- Table 2 : Comparison with different RL models with different metric scores as the rewards.",
"We report the average scores of the AREL models as AREL (avg).",
"Although METEOR-RL and ROUGE-RL models achieve very high scores on their own metrics, the underlined scores are severely damaged.",
"Actually, they are gaming their own metrics with nonsense sentences.",
"rics severely.",
"We observe that these models are actually overfitting to a given metric while losing the overall coherence and semantical correctness.",
"Same as METEOR score, there is also an adversarial example for ROUGE-L 4 , which is nonsense but achieves an average ROUGE-L score of 33.8.",
"Besides, as can be seen in Table 1 , after reinforced training, BLEU-RL and CIDEr-RL do not bring a consistent improvement over the XE-ss model.",
"We plot the histogram distributions of both BLEU-3 and CIDEr scores on the test set in Figure 5 .",
"An interesting fact is that there are a large number of samples with nearly zero score on both metrics.",
"However, we observed those \"zero-score\" samples are not pointless results; instead, lots of them make sense and deserve a better score than zero.",
"Here is a \"zero-score\" example on BLEU-3: I had a great time at the restaurant today.",
"The food was delicious.",
"I had a lot of food.",
"The food was delicious.",
"T had a great time.",
"The corresponding reference is The table of food was a pleasure to see!",
"Our food is both nutritious and beautiful!",
"Our chicken was especially tasty!",
"We love greens as they taste great and are healthy!",
"The fruit was a colorful display that tantalized our palette.. theme \"food and eating\", which showcases the defeats of using BLEU and CIDEr scores as a reward for RL training.",
"Moreover, we compare the human evaluation scores with these two metric scores in Figure 5 .",
"Noticeably, both BLEU-3 and CIDEr have a poor correlation with the human evaluation scores.",
"Their distributions are more biased and thus cannot fully reflect the quality of the generated stories.",
"In terms of BLEU, it is extremely hard for machines to produce the exact 3-gram or 4-gram matching, so the scores are too low to provide useful guidance.",
"CIDEr measures the similarity of a sentence to the majority of the references.",
"However, the references to the same image sequence are photostream different from each other, so the score is very low and not suitable for this task.",
"In contrast, our AREL framework can lean a more robust reward function from human-annotated stories, which is able to provide better guidance to the policy and thus improves its performances over different metrics.",
"Although the prediction is not as good as the reference, it is actually coherent and relevant to the Comparison with GAN We here compare our method with traditional GAN (Goodfellow et al., 2014) , the update rule for generator can be generally classified into two categories.",
"We demonstrate their corresponding objectives and ours as follows: GAN 1 : J β = E W ∼p β [− log R θ (W )] , GAN 2 : J β = E W ∼p β [log(1 − R θ (W ))] , ours : J β = E W ∼p β [−R θ (W )] .",
"As discussed in Arjovsky et al.",
"(2017) , GAN 1 is prone to the unstable gradient issue and GAN 2 is prone to the vanishing gradient issue.",
"Analytically, our method does not suffer from these two common issues and thus is able converge to optimum solutions more easily.",
"From Table 1 , we can observe slight gains of using AREL over GAN Figure 5 : Metric score distributions.",
"We plot the histogram distributions of BLEU-3 and CIDEr scores on the test set, as well as the human evaluation score distribution on the test samples.",
"For a fair comparison, we use the Turing test results to calculate the human evaluation scores (see Section 4.3).",
"Basically, 0.2 score is given if the generated story wins the Turing test, 0.1 for tie, and 0 if losing.",
"Each sample has 5 scores from 5 judges, and we use the sum as the human evaluation score, so it is in the range [0, 1].",
"with automatic metrics, therefore we further deploy human evaluation for a better comparison.",
"Human Evaluation Automatic metrics cannot fully evaluate the capability of our AREL method.",
"Therefore, we perform two different kinds of human evaluation studies on Amazon Mechanical Turk: Turing test and pairwise human evaluation.",
"For both tasks, we use 150 stories (750 images) sampled from the test set, each assigned to 5 workers to eliminate human variance.",
"We batch six items as one assignment and insert an additional assignment as a sanity check.",
"Besides, the order of the options within each item is shuffled to make a fair comparison.",
"Turing Test We first conduct five independent Turing tests for XE-ss, BLEU-RL, CIDEr-RL, GAN, and AREL models, during which the worker is given one human-annotated sample and one machine-generated sample, and needs to decide which is human-annotated.",
"As shown in Table 3, our AREL model significantly outperforms all the other baseline models in the Turing test: it has much more chances to fool AMT worker (the ratio is AREL:XE-ss:BLEU-RL:CIDEr-RL:GAN = 45.8%:28.3%:32.1%:19.7%:39.5%), which confirms the superiority of our AREL framework in generating human-like stories.",
"Unlike automatic metric evaluation, the Turing test has indicated a much larger margin between AREL and other competing algorithms.",
"Thus, we empirically confirm that metrics are not perfect in evaluating many implicit semantic properties of natural language.",
"Besides, the Turing test of our AREL model reveals that nearly half of the workers are fooled by our machine generation, indicating a preliminary success toward generating human-like stories.",
"Pairwise Comparison In order to have a clear comparison with competing algorithms with respect to different semantic features of the stories, we further perform four pairwise comparison tests: AREL vs XE-ss/BLEU-RL/CIDEr-RL/GAN.",
"For each photo stream, the worker is presented with two generated stories and asked to make decisions from the three aspects: relevance 5 , expressiveness 6 and concreteness 7 .",
"This head-tohead compete is designed to help us understand in what aspect our model outperforms the competing algorithms, which is displayed in Table 4 .",
"Consistently on all the three comparisons, a large majority of the AREL stories trumps the competing systems with respect to their relevance, XE-ss We took a trip to the mountains.",
"There were many different kinds of different kinds.",
"We had a great time.",
"He was a great time.",
"It was a beautiful day.",
"AREL The family decided to take a trip to the countryside.",
"There were so many different kinds of things to see.",
"The family decided to go on a hike.",
"I had a great time.",
"At the end of the day, we were able to take a picture of the beautiful scenery.",
"Humancreated Story We went on a hike yesterday.",
"There were a lot of strange plants there.",
"I had a great time.",
"We drank a lot of water while we were hiking.",
"The view was spectacular.",
"expressiveness, and concreteness.",
"Therefore, it empirically confirms that our generated stories are more relevant to the image sequences, more coherent and concrete than the other algorithms, which however is not explicitly reflected by the automatic metric evaluation.",
"Figure 6 gives a qualitative comparison example between AREL and XE-ss models.",
"Looking at the individual sentences, it is obvious that our results are more grammatically and semantically correct.",
"Then connecting the sentences together, we observe that the AREL story is more coherent and describes the photo stream more accurately.",
"Thus, our AREL model significantly surpasses the XEss model on all the three aspects of the qualitative example.",
"Besides, it won the Turing test (3 out 5 AMT workers think the AREL story is created by a human).",
"In the appendix, we also show a negative case that fails the Turing test.",
"Qualitative Analysis Conclusion In this paper, we not only introduce a novel adversarial reward learning algorithm to generate more human-like stories given image sequences, but also empirically analyze the limitations of the automatic metrics for story evaluation.",
"We believe there are still lots of improvement space in the narrative paragraph generation tasks, like how to better simulate human imagination to create more vivid and diversified stories."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"Model",
"Learning",
"Experimental Setup",
"Automatic Evaluation",
"Human Evaluation",
"Conclusion"
]
} | GEM-SciDuet-train-22#paper-1021#slide-10 | Automatic Evaluation | Method BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE CIDEr
Seq2seq (Huang et al.)
HierAttRNN (Yu et al.)
AREL (ours)
Huang et al. 2016, Visual Storytelling; Yu et al. 2017, Hierarchically-Attentive RNN for Album Summarization and Storytelling | Method BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE CIDEr
Seq2seq (Huang et al.)
HierAttRNN (Yu et al.)
AREL (ours)
Huang et al. 2016, Visual Storytelling; Yu et al. 2017, Hierarchically-Attentive RNN for Album Summarization and Storytelling | []
GEM-SciDuet-train-22#paper-1021#slide-11 | 1021 | No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling | Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challenges to behavioral cloning algorithms. Furthermore, due to the limitations of automatic metrics on evaluating story quality, reinforcement learning methods with hand-crafted rewards also face difficulties in gaining an overall performance boost. Therefore, we propose an Adversarial REward Learning (AREL) framework to learn an implicit reward function from human demonstrations, and then optimize policy search with the learned reward function. Though automatic evaluation indicates slight performance boost over state-of-the-art (SOTA) methods in cloning expert behaviors, human evaluation shows that our approach achieves significant improvement in generating more human-like stories than SOTA systems. Code will be made available here 1 . * Equal contribution 1 https://github.com/littlekobe/AREL | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237
],
"paper_content_text": [
"Introduction Recently, increasing attention has been focused on visual captioning (Chen et al., 2015; Wang et al., 2018c) , which aims at describing the content of an image or a video.",
"Though it has achieved impressive results, its capability of performing human-like understanding is still restrictive.",
"To further investigate machine's capa-Story #1: The brother and sister were ready for the first day of school.",
"They were excited to go to their first day and meet new friends.",
"They told their mom how happy they were.",
"They said they were going to make a lot of new friends .",
"Then they got up and got ready to get in the car .",
"Story #2: The brother did not want to talk to his sister.",
"The siblings made up.",
"They started to talk and smile.",
"Their parents showed up.",
"They were happy to see them.",
"shown here: each image is captioned with one sentence, and we also demonstrate two diversified stories that match the same image sequence.",
"bilities in understanding more complicated visual scenarios and composing more structured expressions, visual storytelling (Huang et al., 2016) has been proposed.",
"Visual captioning is aimed at depicting the concrete content of the images, and its expression style is rather simple.",
"In contrast, visual storytelling goes one step further: it summarizes the idea of a photo stream and tells a story about it.",
"Figure 1 shows an example of visual captioning and visual storytelling.",
"We have observed that stories contain rich emotions (excited, happy, not want) and imagination (siblings, parents, school, car) .",
"It, therefore, requires the capability to associate with concepts that do not explicitly appear in the images.",
"Moreover, stories are more subjective, so there barely exists standard templates for storytelling.",
"As shown in Figure 1 , the same photo stream can be paired with diverse stories, different from each other.",
"This heavily increases the evaluation difficulty.",
"So far, prior work for visual storytelling (Huang et al., 2016; Yu et al., 2017b) is mainly inspired by the success of visual captioning.",
"Nevertheless, because these methods are trained by maximizing the likelihood of the observed data pairs, they are restricted to generate simple and plain description with limited expressive patterns.",
"In order to cope with the challenges and produce more human-like descriptions, Rennie et al.",
"(2016) have proposed a reinforcement learning framework.",
"However, in the scenario of visual storytelling, the common reinforced captioning methods are facing great challenges since the hand-crafted rewards based on string matches are either too biased or too sparse to drive the policy search.",
"For instance, we used the METEOR (Banerjee and Lavie, 2005) score as the reward to reinforce our policy and found that though the METEOR score is significantly improved, the other scores are severely harmed.",
"Here we showcase an adversarial example with an average METEOR score as high as 40.2: We had a great time to have a lot of the.",
"They were to be a of the.",
"They were to be in the.",
"The and it were to be the.",
"The, and it were to be the.",
"Apparently, the machine is gaming the metrics.",
"Conversely, when using some other metrics (e.g.",
"BLEU, CIDEr) to evaluate the stories, we observe an opposite behavior: many relevant and coherent stories are receiving a very low score (nearly zero).",
"In order to resolve the strong bias brought by the hand-coded evaluation metrics in RL training and produce more human-like stories, we propose an Adversarial REward Learning (AREL) framework for visual storytelling.",
"We draw our inspiration from recent progress in inverse reinforcement learning (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017) and propose the AREL algorithm to learn a more intelligent reward function.",
"Specifically, we first incorporate a Boltzmann distribution to associate reward learning with distribution approximation, then design the adversarial process with two models -a policy model and a reward model.",
"The policy model performs the primitive actions and produces the story sequence, while the reward model is responsible for learning the implicit reward function from human demonstrations.",
"The learned reward function would be employed to optimize the policy in return.",
"For evaluation, we conduct both automatic metrics and human evaluation but observe a poor correlation between them.",
"Particularly, our method gains slight performance boost over the baseline systems on automatic metrics; human evaluation, however, indicates significant performance boost.",
"Thus we further discuss the limitations of the metrics and validate the superiority of our AREL method in performing more intelligent understanding of the visual scenes and generating more human-like stories.",
"Our main contributions are four-fold: • We propose an adversarial reward learning framework and apply it to boost visual story generation.",
"• We evaluate our approach on the Visual Storytelling (VIST) dataset and achieve the state-of-the-art results on automatic metrics.",
"• We empirically demonstrate that automatic metrics are not perfect for either training or evaluation.",
"• We design and perform a comprehensive human evaluation via Amazon Mechanical Turk, which demonstrates the superiority of the generated stories of our method on relevance, expressiveness, and concreteness.",
"Related Work Visual Storytelling Visual storytelling is the task of generating a narrative story from a photo stream, which requires a deeper understanding of the event flow in the stream.",
"Park and Kim (2015) has done some pioneering research on storytelling.",
"Chen et al.",
"(2017) proposed a multimodal approach for storyline generation to produce a stream of entities instead of human-like descriptions.",
"Recently, a more sophisticated dataset for visual storytelling (VIST) has been released to explore a more human-like understanding of grounded stories (Huang et al., 2016) .",
"Yu et al.",
"(2017b) proposes a multi-task learning algorithm for both album summarization and paragraph generation, achieving the best results on the VIST dataset.",
"But these methods are still based on behavioral cloning and lack the ability to generate more structured stories.",
"Reinforcement Learning in Sequence Generation Recently, reinforcement learning (RL) has gained its popularity in many sequence generation tasks such as machine translation (Bahdanau et al., 2016) , visual captioning (Ren et al., 2017; Wang et al., 2018b) , summarization (Paulus et al., 2017; Chen et al., 2018) , etc.",
"The common wisdom of using RL is to view generating a word as an action and aim at maximizing the expected return by optimizing its policy.",
"As pointed in (Ranzato et al., 2015) , traditional maximum likelihood algorithm is prone to exposure bias and label bias, while the RL agent exposes the generative model to its own distribution and thus can perform better.",
"But these works usually utilize hand-crafted metric scores as the reward to optimize the model, which fails to learn more implicit semantics due to the limitations of automatic metrics.",
"Rethinking Automatic Metrics Automatic metrics, including BLEU (Papineni et al., 2002) , CIDEr , METEOR (Banerjee and Lavie, 2005), and ROUGE (Lin, 2004) , have been widely applied to the sequence generation tasks.",
"Using automatic metrics can ensure rapid prototyping and testing new models with fewer expensive human evaluation.",
"However, they have been criticized to be biased and correlate poorly with human judgments, especially in many generative tasks like response generation (Lowe et al., 2017; Liu et al., 2016) , dialogue system (Bruni and Fernández, 2017) and machine translation (Callison-Burch et al., 2006) .",
"The naive overlap-counting methods are not able to reflect many semantic properties in natural language, such as coherence, expressiveness, etc.",
"Generative Adversarial Network Generative adversarial network (GAN) (Goodfellow et al., 2014 ) is a very popular approach for estimating intractable probabilities, which sidestep the difficulty by alternately training two models to play a min-max two-player game: min D max G E x∼p data [log D(x)] + E z∼pz [log D(G(z))] , where G is the generator and D is the discriminator, and z is the latent variable.",
"Recently, GAN has quickly been adopted to tackle discrete problems (Yu et al., 2017a; Wang et al., 2018a) .",
"The basic idea is to use Monte Carlo policy gradient estimation (Williams, 1992) to update the parameters of the generator.",
"Inverse Reinforcement Learning Reinforcement learning is known to be hindered by the need for an extensive feature and reward engineering, especially under the unknown dynamics.",
"Therefore, inverse reinforcement learning (IRL) has been proposed to infer expert's reward function.",
"Previous IRL approaches include maximum margin approaches (Abbeel and Ng, 2004; Ratliff et al., 2006) and probabilistic approaches (Ziebart, 2010; Ziebart et al., 2008) .",
"Recently, adversarial inverse reinforcement learning methods provide an efficient and scalable promise for automatic reward acquisition (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017; Henderson et al., 2017) .",
"These approaches utilize the connection between IRL and energy-based model and associate every data with a scalar energy value by using Boltzmann distribution p θ (x) ∝ exp(−E θ (x)).",
"Inspired by these methods, we propose a practical AREL approach for visual storytelling to uncover a robust reward function from human demonstrations and thus help produce human-like stories.",
"3 Our Approach Problem Statement Here we consider the task of visual storytelling, whose objective is to output a word sequence W = (w 1 , w 1 , · · · , w T ), w t ∈ V given an input image stream of 5 ordered images I = (I 1 , I 2 , · · · , I 5 ), where V is the vocabulary of all output token.",
"We formulate the generation as a markov decision process and design a reinforcement learning framework to tackle it.",
"As described in Figure 2 , our AREL framework is mainly composed of two modules: a policy model π β (W ) and a reward model R θ (W ).",
"The policy model takes an image sequence I as the input and performs sequential actions (choosing words w from the vocabulary V) to form a narrative story W .",
"The reward model CNN My brother recently graduated college.",
"It was a formal cap and gown event.",
"My mom and dad attended.",
"Later, my aunt and grandma showed up.",
"When the event was over he even got congratulated by the mascot.",
"Figure 3 : Overview of the policy model.",
"The visual encoder is a bidirectional GRU, which encodes the high-level visual features extracted from the input images.",
"Its outputs are then fed into the RNN decoders to generate sentences in parallel.",
"Encoder Decoder Finally, we concatenate all the generated sentences as a full story.",
"Note that the five decoders share the same weights.",
"is optimized by the adversarial objective (see Section 3.3) and aims at deriving a human-like reward from both human-annotated stories and sampled predictions.",
"Model Policy Model As is shown in Figure 3 , the policy model is a CNN-RNN architecture.",
"We fist feed the photo stream I = (I 1 , · · · , I 5 ) into a pretrained CNN and extract their high-level image features.",
"We then employ a visual encoder to further encode the image features as context vectors h i = [ ← − h i ; − → h i ].",
"The visual encoder is a bidirectional gated recurrent units (GRU).",
"In the decoding stage, we feed each context vector h i into a GRU-RNN decoder to generate a substory W i .",
"Formally, the generation process can be written as: s i t = GRU(s i t−1 , [w i t−1 , h i ]) , (1) π β (w i t |w i 1:t−1 ) = sof tmax(W s s i t + b s ) , (2) where s i t denotes the t-th hidden state of i-th decoder.",
"We concatenate the previous token w i t−1 and the context vector h i as the input.",
"W s and b s are the projection matrix and bias, which output a probability distribution over the whole vocabulary V. Eventually, the final story W is the concatenation of the sub-stories W i .",
"β denotes all the parameters of the encoder, the decoder, and the output layer.",
"Figure 4 : Overview of the reward model.",
"Our reward model is a CNN-based architecture, which utilizes convolution kernels with size 2, 3 and 4 to extract bigram, trigram and 4-gram representations from the input sequence embeddings.",
"Once the sentence representation is learned, it will be concatenated with the visual representation of the input image, and then be fed into the final FC layer to obtain the reward.",
"Reward Model The reward model R θ (W ) is a CNN-based architecture (see Figure 4 ).",
"Instead of giving an overall score for the whole story, we apply the reward model to different story parts (substories) W i and compute partial rewards, where i = 1, · · · , 5.",
"We observe that the partial rewards are more fine-grained and can provide better guidance for the policy model.",
"We first query the word embeddings of the substory (one sentence in most cases).",
"Next, multiple convolutional layers with different kernel sizes are used to extract the n-grams features, which are then projected into the sentence-level representation space by pooling layers (the design here is inspired by Kim (2014) ).",
"In addition to the textual features, evaluating the quality of a story should also consider the image features for relevance.",
"Therefore, we then combine the sentence representation with the visual feature of the input image through concatenation and feed them into the final fully connected decision layer.",
"In the end, the reward model outputs an estimated reward value R θ (W ).",
"The process can be written in formula: R θ (W ) = W r (f conv (W ) + W i I CN N ) + b r , (3) where W r , b r denotes the weights in the output layer, and f conv denotes the operations in CNN.",
"I CN N is the high-level visual feature extracted from the image, and W i projects it into the sentence representation space.",
"θ includes all the pa-rameters above.",
"Learning Reward Boltzmann Distribution In order to associate story distribution with reward function, we apply EBM to define a Reward Boltzmann distribution: p θ (W ) = exp(R θ (W )) Z θ , (4) Where W is the word sequence of the story and p θ (W ) is the approximate data distribution, and Z θ = W exp(R θ (W )) denotes the partition function.",
"According to the energy-based model (Le-Cun et al., 2006) , the optimal reward function R * (W ) is achieved when the Reward-Boltzmann distribution equals to the \"real\" data distribution p θ (W ) = p * (W ).",
"Adversarial Reward Learning We first introduce an empirical distribution p e (W ) = 1(W ∈D) |D| to represent the empirical distribution of the training data, where D denotes the dataset with |D| stories and 1 denotes an indicator function.",
"We use this empirical distribution as the \"good\" examples, which provides the evidence for the reward function to learn from.",
"In order to approximate the Reward Boltzmann distribution towards the \"real\" data distribution p * (W ), we design a min-max two-player game, where the Reward Boltzmann distribution p θ aims at maximizing the its similarity with empirical distribution p e while minimizing that with the \"faked\" data generated from policy model π β .",
"On the contrary, the policy distribution π β tries to maximize its similarity with the Boltzmann distribution p θ .",
"Formally, the adversarial objective function is defined as max β min θ KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) .",
"(5) We further decompose it into two parts.",
"First, because the objective J β of the story generation policy is to minimize its similarity with the Boltzmann distribution p θ , the optimal policy that minimizes KL-divergence is thus π(W ) ∼ exp(R θ (W )), meaning if R θ is optimal, the optimal π β = π * .",
"In formula, J β = − KL(π β (W )||p θ (W )) = E W ∼π β (W ) [R θ (W )] + H(π β (W )) , (6) Algorithm where H denotes the entropy of the policy model.",
"On the other hand, the objective J θ of the reward function is to distinguish between humanannotated stories and machine-generated stories.",
"Hence it is trying to minimize the KL-divergence with the empirical distribution p e and maximize the KL-divergence with the approximated policy distribution π β : J θ =KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) = W [pe(W )R θ (W ) − π β (W )R θ (W )] − H(pe) + H(π β ) , (7) Since H(π β ) and H(p e ) are irrelevant to θ, we denote them as constant C. Therefore, the objective J θ can be further derived as J θ = E W ∼pe(W ) [R θ (W )] − E W ∼π β (W ) [R θ (W )] + C .",
"(8) Here we propose to use stochastic gradient descent to optimize these two models alternately.",
"Formally, the gradients can be written as ∂J θ ∂θ = E W ∼pe(W ) ∂R θ (W ) ∂θ − E W ∼π β (W ) ∂R θ (W ) ∂θ , ∂J β ∂β = E W ∼π β (W ) (R θ (W ) + log π θ (W ) − b) ∂ log π β (W ) ∂β , (9) where b is the estimated baseline to reduce the variance.",
"Training & Testing As described in Algorithm 1, we introduce an alternating algorithm to train these two models using stochastic gradient descent.",
"During testing, the policy model is used with beam search to produce the story.",
"Experiments and Analysis Experimental Setup VIST Dataset The VIST dataset (Huang et al., 2016) is the first dataset for sequential vision-tolanguage tasks including visual storytelling, which consists of 10,117 Flickr albums with 210,819 unique photos.",
"In this paper, we mainly evaluate our AREL method on this dataset.",
"After filtering the broken images 2 , there are 40,098 training, 4,988 validation, and 5,050 testing samples.",
"Each sample contains one story that describes 5 selected images from a photo album (mostly one sentence per image).",
"And the same album is paired with 5 different stories as references.",
"In our experiments, we used the same split settings as in (Huang et al., 2016; Yu et al., 2017b) for a fair comparison.",
"Evaluation Metrics In order to comprehensively evaluate our method on storytelling dataset, we adopted both the automatic metrics and human evaluation as our criterion.",
"Four diverse automatic metrics were used in our experiments: BLEU, METEOR, ROUGE-L, and CIDEr.",
"We utilized the open source evaluation code 3 used in (Yu et al., 2017b) .",
"For human evaluation, we employed the Amazon Mechanical Turk to perform two kinds of user studies (see Section 4.3 for more details).",
"Training Details We employ pretrained ResNet-152 model to extract image features from the photo stream.",
"We built a vocabulary of size 9,837 to include words appearing more than three times in the training set.",
"More training details can be found at Appendix B.",
"Automatic Evaluation In this section, we compare our AREL method with the state-of-the-art methods as well as standard reinforcement learning algorithms on automatic evaluation metrics.",
"Then we further discuss the limitations of the hand-crafted metrics on evaluating human-like stories.",
"Comparison with SOTA on Automatic Metrics In Table 1 , we compare our method with Huang et al.",
"(2016) and Yu et al.",
"(2017b) , which report achieving best-known results on the VIST dataset.",
"We first implement a strong baseline model (XEss), which share the same architecture with our policy model but is trained with cross-entropy loss and scheduled sampling.",
"Besides, we adopt the traditional generative adversarial training for comparison (GAN).",
"As shown in Table 1 , our XEss model already outperforms the best-known re- Table 1 : Automatic evaluation on the VIST dataset.",
"We report BLEU (B), METEOR (M), ROUGH-L (R), and CIDEr (C) scores of the SOTA systems and the models we implemented, including XE-ss, GAN and AREL.",
"AREL-s-N denotes AREL models with sigmoid as output activation and alternate frequency as N, while ARELt-N denoting AREL models with tahn as the output activation (N = 50 or 100).",
"sults on the VIST dataset, and the GAN model can bring a performance boost.",
"We then use the XEss model to initialize our policy model and further train it with AREL.",
"Evidently, our AREL model performs the best and achieves the new state-ofthe-art results across all metrics.",
"But, compared with the XE-ss model, the performance gain is minor, especially on METEOR and ROUGE-L scores.",
"However, in Sec.",
"4.3, the extensive human evaluation has indicated that our AREL framework brings a significant improvement on generating human-like stories over the XE-ss model.",
"The inconsistency of automatic evaluation and human evaluation lead to a suspect that these hand-crafted metrics lack the ability to fully evaluate stories' quality due to the complicated characteristics of the stories.",
"Therefore, we conduct experiments to analyze and discuss the defects of the automatic metrics in section 4.2.",
"Limitations of Automatic Metrics As we claimed in the introduction, string-match-based automatic metrics are not perfect and fail to evaluate some semantic characteristics of the stories, like the expressiveness and coherence of the stories.",
"In order to confirm our conjecture, we utilize automatic metrics as rewards to reinforce the visual storytelling model by adopting policy gradient with baseline to train the policy model.",
"The quantitative results are demonstrated in Table 1 .",
"Apparently, METEOR-RL and ROUGE-RL are severely ill-posed: they obtain the highest scores on their own metrics but damage the other met- Table 2 : Comparison with different RL models with different metric scores as the rewards.",
"We report the average scores of the AREL models as AREL (avg).",
"Although METEOR-RL and ROUGE-RL models achieve very high scores on their own metrics, the underlined scores are severely damaged.",
"Actually, they are gaming their own metrics with nonsense sentences.",
"rics severely.",
"We observe that these models are actually overfitting to a given metric while losing the overall coherence and semantical correctness.",
"Same as METEOR score, there is also an adversarial example for ROUGE-L 4 , which is nonsense but achieves an average ROUGE-L score of 33.8.",
"Besides, as can be seen in Table 1 , after reinforced training, BLEU-RL and CIDEr-RL do not bring a consistent improvement over the XE-ss model.",
"We plot the histogram distributions of both BLEU-3 and CIDEr scores on the test set in Figure 5 .",
"An interesting fact is that there are a large number of samples with nearly zero score on both metrics.",
"However, we observed those \"zero-score\" samples are not pointless results; instead, lots of them make sense and deserve a better score than zero.",
"Here is a \"zero-score\" example on BLEU-3: I had a great time at the restaurant today.",
"The food was delicious.",
"I had a lot of food.",
"The food was delicious.",
"T had a great time.",
"The corresponding reference is The table of food was a pleasure to see!",
"Our food is both nutritious and beautiful!",
"Our chicken was especially tasty!",
"We love greens as they taste great and are healthy!",
"The fruit was a colorful display that tantalized our palette.. theme \"food and eating\", which showcases the defeats of using BLEU and CIDEr scores as a reward for RL training.",
"Moreover, we compare the human evaluation scores with these two metric scores in Figure 5 .",
"Noticeably, both BLEU-3 and CIDEr have a poor correlation with the human evaluation scores.",
"Their distributions are more biased and thus cannot fully reflect the quality of the generated stories.",
"In terms of BLEU, it is extremely hard for machines to produce the exact 3-gram or 4-gram matching, so the scores are too low to provide useful guidance.",
"CIDEr measures the similarity of a sentence to the majority of the references.",
"However, the references to the same image sequence are photostream different from each other, so the score is very low and not suitable for this task.",
"In contrast, our AREL framework can lean a more robust reward function from human-annotated stories, which is able to provide better guidance to the policy and thus improves its performances over different metrics.",
"Although the prediction is not as good as the reference, it is actually coherent and relevant to the Comparison with GAN We here compare our method with traditional GAN (Goodfellow et al., 2014) , the update rule for generator can be generally classified into two categories.",
"We demonstrate their corresponding objectives and ours as follows: GAN 1 : J β = E W ∼p β [− log R θ (W )] , GAN 2 : J β = E W ∼p β [log(1 − R θ (W ))] , ours : J β = E W ∼p β [−R θ (W )] .",
"As discussed in Arjovsky et al.",
"(2017) , GAN 1 is prone to the unstable gradient issue and GAN 2 is prone to the vanishing gradient issue.",
"Analytically, our method does not suffer from these two common issues and thus is able converge to optimum solutions more easily.",
"From Table 1 , we can observe slight gains of using AREL over GAN Figure 5 : Metric score distributions.",
"We plot the histogram distributions of BLEU-3 and CIDEr scores on the test set, as well as the human evaluation score distribution on the test samples.",
"For a fair comparison, we use the Turing test results to calculate the human evaluation scores (see Section 4.3).",
"Basically, 0.2 score is given if the generated story wins the Turing test, 0.1 for tie, and 0 if losing.",
"Each sample has 5 scores from 5 judges, and we use the sum as the human evaluation score, so it is in the range [0, 1].",
"with automatic metrics, therefore we further deploy human evaluation for a better comparison.",
"Human Evaluation Automatic metrics cannot fully evaluate the capability of our AREL method.",
"Therefore, we perform two different kinds of human evaluation studies on Amazon Mechanical Turk: Turing test and pairwise human evaluation.",
"For both tasks, we use 150 stories (750 images) sampled from the test set, each assigned to 5 workers to eliminate human variance.",
"We batch six items as one assignment and insert an additional assignment as a sanity check.",
"Besides, the order of the options within each item is shuffled to make a fair comparison.",
"Turing Test We first conduct five independent Turing tests for XE-ss, BLEU-RL, CIDEr-RL, GAN, and AREL models, during which the worker is given one human-annotated sample and one machine-generated sample, and needs to decide which is human-annotated.",
"As shown in Table 3, our AREL model significantly outperforms all the other baseline models in the Turing test: it has much more chances to fool AMT worker (the ratio is AREL:XE-ss:BLEU-RL:CIDEr-RL:GAN = 45.8%:28.3%:32.1%:19.7%:39.5%), which confirms the superiority of our AREL framework in generating human-like stories.",
"Unlike automatic metric evaluation, the Turing test has indicated a much larger margin between AREL and other competing algorithms.",
"Thus, we empirically confirm that metrics are not perfect in evaluating many implicit semantic properties of natural language.",
"Besides, the Turing test of our AREL model reveals that nearly half of the workers are fooled by our machine generation, indicating a preliminary success toward generating human-like stories.",
"Pairwise Comparison In order to have a clear comparison with competing algorithms with respect to different semantic features of the stories, we further perform four pairwise comparison tests: AREL vs XE-ss/BLEU-RL/CIDEr-RL/GAN.",
"For each photo stream, the worker is presented with two generated stories and asked to make decisions from the three aspects: relevance 5 , expressiveness 6 and concreteness 7 .",
"This head-tohead compete is designed to help us understand in what aspect our model outperforms the competing algorithms, which is displayed in Table 4 .",
"Consistently on all the three comparisons, a large majority of the AREL stories trumps the competing systems with respect to their relevance, XE-ss We took a trip to the mountains.",
"There were many different kinds of different kinds.",
"We had a great time.",
"He was a great time.",
"It was a beautiful day.",
"AREL The family decided to take a trip to the countryside.",
"There were so many different kinds of things to see.",
"The family decided to go on a hike.",
"I had a great time.",
"At the end of the day, we were able to take a picture of the beautiful scenery.",
"Humancreated Story We went on a hike yesterday.",
"There were a lot of strange plants there.",
"I had a great time.",
"We drank a lot of water while we were hiking.",
"The view was spectacular.",
"expressiveness, and concreteness.",
"Therefore, it empirically confirms that our generated stories are more relevant to the image sequences, more coherent and concrete than the other algorithms, which however is not explicitly reflected by the automatic metric evaluation.",
"Figure 6 gives a qualitative comparison example between AREL and XE-ss models.",
"Looking at the individual sentences, it is obvious that our results are more grammatically and semantically correct.",
"Then connecting the sentences together, we observe that the AREL story is more coherent and describes the photo stream more accurately.",
"Thus, our AREL model significantly surpasses the XEss model on all the three aspects of the qualitative example.",
"Besides, it won the Turing test (3 out 5 AMT workers think the AREL story is created by a human).",
"In the appendix, we also show a negative case that fails the Turing test.",
"Qualitative Analysis Conclusion In this paper, we not only introduce a novel adversarial reward learning algorithm to generate more human-like stories given image sequences, but also empirically analyze the limitations of the automatic metrics for story evaluation.",
"We believe there are still lots of improvement space in the narrative paragraph generation tasks, like how to better simulate human imagination to create more vivid and diversified stories."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"Model",
"Learning",
"Experimental Setup",
"Automatic Evaluation",
"Human Evaluation",
"Conclusion"
]
} | GEM-SciDuet-train-22#paper-1021#slide-11 | Human Evaluation | XE BLEU-RL CIDEr-RL GAN AREL
Relevance: the story accurately describes what is happening in the photo stream and covers the main objects.
Expressiveness: coherence, grammatically and semantically correct, no repetition, expressive language style.
Concreteness: the story should narrate concretely what is in the images rather than giving very general descriptions. | XE BLEU-RL CIDEr-RL GAN AREL
Relevance: the story accurately describes what is happening in the photo stream and covers the main objects.
Expressiveness: coherence, grammatically and semantically correct, no repetition, expressive language style.
Concreteness: the story should narrate concretely what is in the images rather than giving very general descriptions. | [] |
GEM-SciDuet-train-22#paper-1021#slide-12 | 1021 | No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling | Though impressive results have been achieved in visual captioning, the task of generating abstract stories from photo streams is still a little-tapped problem. Different from captions, stories have more expressive language styles and contain many imaginary concepts that do not appear in the images. Thus it poses challenges to behavioral cloning algorithms. Furthermore, due to the limitations of automatic metrics on evaluating story quality, reinforcement learning methods with hand-crafted rewards also face difficulties in gaining an overall performance boost. Therefore, we propose an Adversarial REward Learning (AREL) framework to learn an implicit reward function from human demonstrations, and then optimize policy search with the learned reward function. Though automatic evaluation indicates slight performance boost over state-of-the-art (SOTA) methods in cloning expert behaviors, human evaluation shows that our approach achieves significant improvement in generating more human-like stories than SOTA systems. Code will be made available here 1 . * Equal contribution 1 https://github.com/littlekobe/AREL | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237
],
"paper_content_text": [
"Introduction Recently, increasing attention has been focused on visual captioning (Chen et al., 2015; Wang et al., 2018c) , which aims at describing the content of an image or a video.",
"Though it has achieved impressive results, its capability of performing human-like understanding is still restrictive.",
"To further investigate machine's capa-Story #1: The brother and sister were ready for the first day of school.",
"They were excited to go to their first day and meet new friends.",
"They told their mom how happy they were.",
"They said they were going to make a lot of new friends .",
"Then they got up and got ready to get in the car .",
"Story #2: The brother did not want to talk to his sister.",
"The siblings made up.",
"They started to talk and smile.",
"Their parents showed up.",
"They were happy to see them.",
"shown here: each image is captioned with one sentence, and we also demonstrate two diversified stories that match the same image sequence.",
"bilities in understanding more complicated visual scenarios and composing more structured expressions, visual storytelling (Huang et al., 2016) has been proposed.",
"Visual captioning is aimed at depicting the concrete content of the images, and its expression style is rather simple.",
"In contrast, visual storytelling goes one step further: it summarizes the idea of a photo stream and tells a story about it.",
"Figure 1 shows an example of visual captioning and visual storytelling.",
"We have observed that stories contain rich emotions (excited, happy, not want) and imagination (siblings, parents, school, car) .",
"It, therefore, requires the capability to associate with concepts that do not explicitly appear in the images.",
"Moreover, stories are more subjective, so there barely exists standard templates for storytelling.",
"As shown in Figure 1 , the same photo stream can be paired with diverse stories, different from each other.",
"This heavily increases the evaluation difficulty.",
"So far, prior work for visual storytelling (Huang et al., 2016; Yu et al., 2017b) is mainly inspired by the success of visual captioning.",
"Nevertheless, because these methods are trained by maximizing the likelihood of the observed data pairs, they are restricted to generate simple and plain description with limited expressive patterns.",
"In order to cope with the challenges and produce more human-like descriptions, Rennie et al.",
"(2016) have proposed a reinforcement learning framework.",
"However, in the scenario of visual storytelling, the common reinforced captioning methods are facing great challenges since the hand-crafted rewards based on string matches are either too biased or too sparse to drive the policy search.",
"For instance, we used the METEOR (Banerjee and Lavie, 2005) score as the reward to reinforce our policy and found that though the METEOR score is significantly improved, the other scores are severely harmed.",
"Here we showcase an adversarial example with an average METEOR score as high as 40.2: We had a great time to have a lot of the.",
"They were to be a of the.",
"They were to be in the.",
"The and it were to be the.",
"The, and it were to be the.",
"Apparently, the machine is gaming the metrics.",
"Conversely, when using some other metrics (e.g.",
"BLEU, CIDEr) to evaluate the stories, we observe an opposite behavior: many relevant and coherent stories are receiving a very low score (nearly zero).",
"In order to resolve the strong bias brought by the hand-coded evaluation metrics in RL training and produce more human-like stories, we propose an Adversarial REward Learning (AREL) framework for visual storytelling.",
"We draw our inspiration from recent progress in inverse reinforcement learning (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017) and propose the AREL algorithm to learn a more intelligent reward function.",
"Specifically, we first incorporate a Boltzmann distribution to associate reward learning with distribution approximation, then design the adversarial process with two models -a policy model and a reward model.",
"The policy model performs the primitive actions and produces the story sequence, while the reward model is responsible for learning the implicit reward function from human demonstrations.",
"The learned reward function would be employed to optimize the policy in return.",
"For evaluation, we conduct both automatic metrics and human evaluation but observe a poor correlation between them.",
"Particularly, our method gains slight performance boost over the baseline systems on automatic metrics; human evaluation, however, indicates significant performance boost.",
"Thus we further discuss the limitations of the metrics and validate the superiority of our AREL method in performing more intelligent understanding of the visual scenes and generating more human-like stories.",
"Our main contributions are four-fold: • We propose an adversarial reward learning framework and apply it to boost visual story generation.",
"• We evaluate our approach on the Visual Storytelling (VIST) dataset and achieve the state-of-the-art results on automatic metrics.",
"• We empirically demonstrate that automatic metrics are not perfect for either training or evaluation.",
"• We design and perform a comprehensive human evaluation via Amazon Mechanical Turk, which demonstrates the superiority of the generated stories of our method on relevance, expressiveness, and concreteness.",
"Related Work Visual Storytelling Visual storytelling is the task of generating a narrative story from a photo stream, which requires a deeper understanding of the event flow in the stream.",
"Park and Kim (2015) has done some pioneering research on storytelling.",
"Chen et al.",
"(2017) proposed a multimodal approach for storyline generation to produce a stream of entities instead of human-like descriptions.",
"Recently, a more sophisticated dataset for visual storytelling (VIST) has been released to explore a more human-like understanding of grounded stories (Huang et al., 2016) .",
"Yu et al.",
"(2017b) proposes a multi-task learning algorithm for both album summarization and paragraph generation, achieving the best results on the VIST dataset.",
"But these methods are still based on behavioral cloning and lack the ability to generate more structured stories.",
"Reinforcement Learning in Sequence Generation Recently, reinforcement learning (RL) has gained its popularity in many sequence generation tasks such as machine translation (Bahdanau et al., 2016) , visual captioning (Ren et al., 2017; Wang et al., 2018b) , summarization (Paulus et al., 2017; Chen et al., 2018) , etc.",
"The common wisdom of using RL is to view generating a word as an action and aim at maximizing the expected return by optimizing its policy.",
"As pointed in (Ranzato et al., 2015) , traditional maximum likelihood algorithm is prone to exposure bias and label bias, while the RL agent exposes the generative model to its own distribution and thus can perform better.",
"But these works usually utilize hand-crafted metric scores as the reward to optimize the model, which fails to learn more implicit semantics due to the limitations of automatic metrics.",
"Rethinking Automatic Metrics Automatic metrics, including BLEU (Papineni et al., 2002) , CIDEr , METEOR (Banerjee and Lavie, 2005), and ROUGE (Lin, 2004) , have been widely applied to the sequence generation tasks.",
"Using automatic metrics can ensure rapid prototyping and testing new models with fewer expensive human evaluation.",
"However, they have been criticized to be biased and correlate poorly with human judgments, especially in many generative tasks like response generation (Lowe et al., 2017; Liu et al., 2016) , dialogue system (Bruni and Fernández, 2017) and machine translation (Callison-Burch et al., 2006) .",
"The naive overlap-counting methods are not able to reflect many semantic properties in natural language, such as coherence, expressiveness, etc.",
"Generative Adversarial Network Generative adversarial network (GAN) (Goodfellow et al., 2014 ) is a very popular approach for estimating intractable probabilities, which sidestep the difficulty by alternately training two models to play a min-max two-player game: min D max G E x∼p data [log D(x)] + E z∼pz [log D(G(z))] , where G is the generator and D is the discriminator, and z is the latent variable.",
"Recently, GAN has quickly been adopted to tackle discrete problems (Yu et al., 2017a; Wang et al., 2018a) .",
"The basic idea is to use Monte Carlo policy gradient estimation (Williams, 1992) to update the parameters of the generator.",
"Inverse Reinforcement Learning Reinforcement learning is known to be hindered by the need for an extensive feature and reward engineering, especially under the unknown dynamics.",
"Therefore, inverse reinforcement learning (IRL) has been proposed to infer expert's reward function.",
"Previous IRL approaches include maximum margin approaches (Abbeel and Ng, 2004; Ratliff et al., 2006) and probabilistic approaches (Ziebart, 2010; Ziebart et al., 2008) .",
"Recently, adversarial inverse reinforcement learning methods provide an efficient and scalable promise for automatic reward acquisition (Ho and Ermon, 2016; Finn et al., 2016; Fu et al., 2017; Henderson et al., 2017) .",
"These approaches utilize the connection between IRL and energy-based model and associate every data with a scalar energy value by using Boltzmann distribution p θ (x) ∝ exp(−E θ (x)).",
"Inspired by these methods, we propose a practical AREL approach for visual storytelling to uncover a robust reward function from human demonstrations and thus help produce human-like stories.",
"3 Our Approach Problem Statement Here we consider the task of visual storytelling, whose objective is to output a word sequence W = (w 1 , w 1 , · · · , w T ), w t ∈ V given an input image stream of 5 ordered images I = (I 1 , I 2 , · · · , I 5 ), where V is the vocabulary of all output token.",
"We formulate the generation as a markov decision process and design a reinforcement learning framework to tackle it.",
"As described in Figure 2 , our AREL framework is mainly composed of two modules: a policy model π β (W ) and a reward model R θ (W ).",
"The policy model takes an image sequence I as the input and performs sequential actions (choosing words w from the vocabulary V) to form a narrative story W .",
"The reward model CNN My brother recently graduated college.",
"It was a formal cap and gown event.",
"My mom and dad attended.",
"Later, my aunt and grandma showed up.",
"When the event was over he even got congratulated by the mascot.",
"Figure 3 : Overview of the policy model.",
"The visual encoder is a bidirectional GRU, which encodes the high-level visual features extracted from the input images.",
"Its outputs are then fed into the RNN decoders to generate sentences in parallel.",
"Encoder Decoder Finally, we concatenate all the generated sentences as a full story.",
"Note that the five decoders share the same weights.",
"is optimized by the adversarial objective (see Section 3.3) and aims at deriving a human-like reward from both human-annotated stories and sampled predictions.",
"Model Policy Model As is shown in Figure 3 , the policy model is a CNN-RNN architecture.",
"We fist feed the photo stream I = (I 1 , · · · , I 5 ) into a pretrained CNN and extract their high-level image features.",
"We then employ a visual encoder to further encode the image features as context vectors h i = [ ← − h i ; − → h i ].",
"The visual encoder is a bidirectional gated recurrent units (GRU).",
"In the decoding stage, we feed each context vector h i into a GRU-RNN decoder to generate a substory W i .",
"Formally, the generation process can be written as: s i t = GRU(s i t−1 , [w i t−1 , h i ]) , (1) π β (w i t |w i 1:t−1 ) = sof tmax(W s s i t + b s ) , (2) where s i t denotes the t-th hidden state of i-th decoder.",
"We concatenate the previous token w i t−1 and the context vector h i as the input.",
"W s and b s are the projection matrix and bias, which output a probability distribution over the whole vocabulary V. Eventually, the final story W is the concatenation of the sub-stories W i .",
"β denotes all the parameters of the encoder, the decoder, and the output layer.",
"Figure 4 : Overview of the reward model.",
"Our reward model is a CNN-based architecture, which utilizes convolution kernels with size 2, 3 and 4 to extract bigram, trigram and 4-gram representations from the input sequence embeddings.",
"Once the sentence representation is learned, it will be concatenated with the visual representation of the input image, and then be fed into the final FC layer to obtain the reward.",
"Reward Model The reward model R θ (W ) is a CNN-based architecture (see Figure 4 ).",
"Instead of giving an overall score for the whole story, we apply the reward model to different story parts (substories) W i and compute partial rewards, where i = 1, · · · , 5.",
"We observe that the partial rewards are more fine-grained and can provide better guidance for the policy model.",
"We first query the word embeddings of the substory (one sentence in most cases).",
"Next, multiple convolutional layers with different kernel sizes are used to extract the n-grams features, which are then projected into the sentence-level representation space by pooling layers (the design here is inspired by Kim (2014) ).",
"In addition to the textual features, evaluating the quality of a story should also consider the image features for relevance.",
"Therefore, we then combine the sentence representation with the visual feature of the input image through concatenation and feed them into the final fully connected decision layer.",
"In the end, the reward model outputs an estimated reward value R θ (W ).",
"The process can be written in formula: R θ (W ) = W r (f conv (W ) + W i I CN N ) + b r , (3) where W r , b r denotes the weights in the output layer, and f conv denotes the operations in CNN.",
"I CN N is the high-level visual feature extracted from the image, and W i projects it into the sentence representation space.",
"θ includes all the pa-rameters above.",
"Learning Reward Boltzmann Distribution In order to associate story distribution with reward function, we apply EBM to define a Reward Boltzmann distribution: p θ (W ) = exp(R θ (W )) Z θ , (4) Where W is the word sequence of the story and p θ (W ) is the approximate data distribution, and Z θ = W exp(R θ (W )) denotes the partition function.",
"According to the energy-based model (Le-Cun et al., 2006) , the optimal reward function R * (W ) is achieved when the Reward-Boltzmann distribution equals to the \"real\" data distribution p θ (W ) = p * (W ).",
"Adversarial Reward Learning We first introduce an empirical distribution p e (W ) = 1(W ∈D) |D| to represent the empirical distribution of the training data, where D denotes the dataset with |D| stories and 1 denotes an indicator function.",
"We use this empirical distribution as the \"good\" examples, which provides the evidence for the reward function to learn from.",
"In order to approximate the Reward Boltzmann distribution towards the \"real\" data distribution p * (W ), we design a min-max two-player game, where the Reward Boltzmann distribution p θ aims at maximizing the its similarity with empirical distribution p e while minimizing that with the \"faked\" data generated from policy model π β .",
"On the contrary, the policy distribution π β tries to maximize its similarity with the Boltzmann distribution p θ .",
"Formally, the adversarial objective function is defined as max β min θ KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) .",
"(5) We further decompose it into two parts.",
"First, because the objective J β of the story generation policy is to minimize its similarity with the Boltzmann distribution p θ , the optimal policy that minimizes KL-divergence is thus π(W ) ∼ exp(R θ (W )), meaning if R θ is optimal, the optimal π β = π * .",
"In formula, J β = − KL(π β (W )||p θ (W )) = E W ∼π β (W ) [R θ (W )] + H(π β (W )) , (6) Algorithm where H denotes the entropy of the policy model.",
"On the other hand, the objective J θ of the reward function is to distinguish between humanannotated stories and machine-generated stories.",
"Hence it is trying to minimize the KL-divergence with the empirical distribution p e and maximize the KL-divergence with the approximated policy distribution π β : J θ =KL(pe(W )||p θ (W )) − KL(π β (W )||p θ (W )) = W [pe(W )R θ (W ) − π β (W )R θ (W )] − H(pe) + H(π β ) , (7) Since H(π β ) and H(p e ) are irrelevant to θ, we denote them as constant C. Therefore, the objective J θ can be further derived as J θ = E W ∼pe(W ) [R θ (W )] − E W ∼π β (W ) [R θ (W )] + C .",
"(8) Here we propose to use stochastic gradient descent to optimize these two models alternately.",
"Formally, the gradients can be written as ∂J θ ∂θ = E W ∼pe(W ) ∂R θ (W ) ∂θ − E W ∼π β (W ) ∂R θ (W ) ∂θ , ∂J β ∂β = E W ∼π β (W ) (R θ (W ) + log π θ (W ) − b) ∂ log π β (W ) ∂β , (9) where b is the estimated baseline to reduce the variance.",
"Training & Testing As described in Algorithm 1, we introduce an alternating algorithm to train these two models using stochastic gradient descent.",
"During testing, the policy model is used with beam search to produce the story.",
"Experiments and Analysis Experimental Setup VIST Dataset The VIST dataset (Huang et al., 2016) is the first dataset for sequential vision-tolanguage tasks including visual storytelling, which consists of 10,117 Flickr albums with 210,819 unique photos.",
"In this paper, we mainly evaluate our AREL method on this dataset.",
"After filtering the broken images 2 , there are 40,098 training, 4,988 validation, and 5,050 testing samples.",
"Each sample contains one story that describes 5 selected images from a photo album (mostly one sentence per image).",
"And the same album is paired with 5 different stories as references.",
"In our experiments, we used the same split settings as in (Huang et al., 2016; Yu et al., 2017b) for a fair comparison.",
"Evaluation Metrics In order to comprehensively evaluate our method on storytelling dataset, we adopted both the automatic metrics and human evaluation as our criterion.",
"Four diverse automatic metrics were used in our experiments: BLEU, METEOR, ROUGE-L, and CIDEr.",
"We utilized the open source evaluation code 3 used in (Yu et al., 2017b) .",
"For human evaluation, we employed the Amazon Mechanical Turk to perform two kinds of user studies (see Section 4.3 for more details).",
"Training Details We employ pretrained ResNet-152 model to extract image features from the photo stream.",
"We built a vocabulary of size 9,837 to include words appearing more than three times in the training set.",
"More training details can be found at Appendix B.",
"Automatic Evaluation In this section, we compare our AREL method with the state-of-the-art methods as well as standard reinforcement learning algorithms on automatic evaluation metrics.",
"Then we further discuss the limitations of the hand-crafted metrics on evaluating human-like stories.",
"Comparison with SOTA on Automatic Metrics In Table 1 , we compare our method with Huang et al.",
"(2016) and Yu et al.",
"(2017b) , which report achieving best-known results on the VIST dataset.",
"We first implement a strong baseline model (XEss), which share the same architecture with our policy model but is trained with cross-entropy loss and scheduled sampling.",
"Besides, we adopt the traditional generative adversarial training for comparison (GAN).",
"As shown in Table 1 , our XEss model already outperforms the best-known re- Table 1 : Automatic evaluation on the VIST dataset.",
"We report BLEU (B), METEOR (M), ROUGH-L (R), and CIDEr (C) scores of the SOTA systems and the models we implemented, including XE-ss, GAN and AREL.",
"AREL-s-N denotes AREL models with sigmoid as output activation and alternate frequency as N, while ARELt-N denoting AREL models with tahn as the output activation (N = 50 or 100).",
"sults on the VIST dataset, and the GAN model can bring a performance boost.",
"We then use the XEss model to initialize our policy model and further train it with AREL.",
"Evidently, our AREL model performs the best and achieves the new state-ofthe-art results across all metrics.",
"But, compared with the XE-ss model, the performance gain is minor, especially on METEOR and ROUGE-L scores.",
"However, in Sec.",
"4.3, the extensive human evaluation has indicated that our AREL framework brings a significant improvement on generating human-like stories over the XE-ss model.",
"The inconsistency of automatic evaluation and human evaluation lead to a suspect that these hand-crafted metrics lack the ability to fully evaluate stories' quality due to the complicated characteristics of the stories.",
"Therefore, we conduct experiments to analyze and discuss the defects of the automatic metrics in section 4.2.",
"Limitations of Automatic Metrics As we claimed in the introduction, string-match-based automatic metrics are not perfect and fail to evaluate some semantic characteristics of the stories, like the expressiveness and coherence of the stories.",
"In order to confirm our conjecture, we utilize automatic metrics as rewards to reinforce the visual storytelling model by adopting policy gradient with baseline to train the policy model.",
"The quantitative results are demonstrated in Table 1 .",
"Apparently, METEOR-RL and ROUGE-RL are severely ill-posed: they obtain the highest scores on their own metrics but damage the other met- Table 2 : Comparison with different RL models with different metric scores as the rewards.",
"We report the average scores of the AREL models as AREL (avg).",
"Although METEOR-RL and ROUGE-RL models achieve very high scores on their own metrics, the underlined scores are severely damaged.",
"Actually, they are gaming their own metrics with nonsense sentences.",
"rics severely.",
"We observe that these models are actually overfitting to a given metric while losing the overall coherence and semantical correctness.",
"Same as METEOR score, there is also an adversarial example for ROUGE-L 4 , which is nonsense but achieves an average ROUGE-L score of 33.8.",
"Besides, as can be seen in Table 1 , after reinforced training, BLEU-RL and CIDEr-RL do not bring a consistent improvement over the XE-ss model.",
"We plot the histogram distributions of both BLEU-3 and CIDEr scores on the test set in Figure 5 .",
"An interesting fact is that there are a large number of samples with nearly zero score on both metrics.",
"However, we observed those \"zero-score\" samples are not pointless results; instead, lots of them make sense and deserve a better score than zero.",
"Here is a \"zero-score\" example on BLEU-3: I had a great time at the restaurant today.",
"The food was delicious.",
"I had a lot of food.",
"The food was delicious.",
"T had a great time.",
"The corresponding reference is The table of food was a pleasure to see!",
"Our food is both nutritious and beautiful!",
"Our chicken was especially tasty!",
"We love greens as they taste great and are healthy!",
"The fruit was a colorful display that tantalized our palette.. theme \"food and eating\", which showcases the defeats of using BLEU and CIDEr scores as a reward for RL training.",
"Moreover, we compare the human evaluation scores with these two metric scores in Figure 5 .",
"Noticeably, both BLEU-3 and CIDEr have a poor correlation with the human evaluation scores.",
"Their distributions are more biased and thus cannot fully reflect the quality of the generated stories.",
"In terms of BLEU, it is extremely hard for machines to produce the exact 3-gram or 4-gram matching, so the scores are too low to provide useful guidance.",
"CIDEr measures the similarity of a sentence to the majority of the references.",
"However, the references to the same image sequence are photostream different from each other, so the score is very low and not suitable for this task.",
"In contrast, our AREL framework can lean a more robust reward function from human-annotated stories, which is able to provide better guidance to the policy and thus improves its performances over different metrics.",
"Although the prediction is not as good as the reference, it is actually coherent and relevant to the Comparison with GAN We here compare our method with traditional GAN (Goodfellow et al., 2014) , the update rule for generator can be generally classified into two categories.",
"We demonstrate their corresponding objectives and ours as follows: GAN 1 : J β = E W ∼p β [− log R θ (W )] , GAN 2 : J β = E W ∼p β [log(1 − R θ (W ))] , ours : J β = E W ∼p β [−R θ (W )] .",
"As discussed in Arjovsky et al.",
"(2017) , GAN 1 is prone to the unstable gradient issue and GAN 2 is prone to the vanishing gradient issue.",
"Analytically, our method does not suffer from these two common issues and thus is able converge to optimum solutions more easily.",
"From Table 1 , we can observe slight gains of using AREL over GAN Figure 5 : Metric score distributions.",
"We plot the histogram distributions of BLEU-3 and CIDEr scores on the test set, as well as the human evaluation score distribution on the test samples.",
"For a fair comparison, we use the Turing test results to calculate the human evaluation scores (see Section 4.3).",
"Basically, 0.2 score is given if the generated story wins the Turing test, 0.1 for tie, and 0 if losing.",
"Each sample has 5 scores from 5 judges, and we use the sum as the human evaluation score, so it is in the range [0, 1].",
"with automatic metrics, therefore we further deploy human evaluation for a better comparison.",
"Human Evaluation Automatic metrics cannot fully evaluate the capability of our AREL method.",
"Therefore, we perform two different kinds of human evaluation studies on Amazon Mechanical Turk: Turing test and pairwise human evaluation.",
"For both tasks, we use 150 stories (750 images) sampled from the test set, each assigned to 5 workers to eliminate human variance.",
"We batch six items as one assignment and insert an additional assignment as a sanity check.",
"Besides, the order of the options within each item is shuffled to make a fair comparison.",
"Turing Test We first conduct five independent Turing tests for XE-ss, BLEU-RL, CIDEr-RL, GAN, and AREL models, during which the worker is given one human-annotated sample and one machine-generated sample, and needs to decide which is human-annotated.",
"As shown in Table 3, our AREL model significantly outperforms all the other baseline models in the Turing test: it has much more chances to fool AMT worker (the ratio is AREL:XE-ss:BLEU-RL:CIDEr-RL:GAN = 45.8%:28.3%:32.1%:19.7%:39.5%), which confirms the superiority of our AREL framework in generating human-like stories.",
"Unlike automatic metric evaluation, the Turing test has indicated a much larger margin between AREL and other competing algorithms.",
"Thus, we empirically confirm that metrics are not perfect in evaluating many implicit semantic properties of natural language.",
"Besides, the Turing test of our AREL model reveals that nearly half of the workers are fooled by our machine generation, indicating a preliminary success toward generating human-like stories.",
"Pairwise Comparison In order to have a clear comparison with competing algorithms with respect to different semantic features of the stories, we further perform four pairwise comparison tests: AREL vs XE-ss/BLEU-RL/CIDEr-RL/GAN.",
"For each photo stream, the worker is presented with two generated stories and asked to make decisions from the three aspects: relevance 5 , expressiveness 6 and concreteness 7 .",
"This head-tohead compete is designed to help us understand in what aspect our model outperforms the competing algorithms, which is displayed in Table 4 .",
"Consistently on all the three comparisons, a large majority of the AREL stories trumps the competing systems with respect to their relevance, XE-ss We took a trip to the mountains.",
"There were many different kinds of different kinds.",
"We had a great time.",
"He was a great time.",
"It was a beautiful day.",
"AREL The family decided to take a trip to the countryside.",
"There were so many different kinds of things to see.",
"The family decided to go on a hike.",
"I had a great time.",
"At the end of the day, we were able to take a picture of the beautiful scenery.",
"Humancreated Story We went on a hike yesterday.",
"There were a lot of strange plants there.",
"I had a great time.",
"We drank a lot of water while we were hiking.",
"The view was spectacular.",
"expressiveness, and concreteness.",
"Therefore, it empirically confirms that our generated stories are more relevant to the image sequences, more coherent and concrete than the other algorithms, which however is not explicitly reflected by the automatic metric evaluation.",
"Figure 6 gives a qualitative comparison example between AREL and XE-ss models.",
"Looking at the individual sentences, it is obvious that our results are more grammatically and semantically correct.",
"Then connecting the sentences together, we observe that the AREL story is more coherent and describes the photo stream more accurately.",
"Thus, our AREL model significantly surpasses the XEss model on all the three aspects of the qualitative example.",
"Besides, it won the Turing test (3 out 5 AMT workers think the AREL story is created by a human).",
"In the appendix, we also show a negative case that fails the Turing test.",
"Qualitative Analysis Conclusion In this paper, we not only introduce a novel adversarial reward learning algorithm to generate more human-like stories given image sequences, but also empirically analyze the limitations of the automatic metrics for story evaluation.",
"We believe there are still lots of improvement space in the narrative paragraph generation tasks, like how to better simulate human imagination to create more vivid and diversified stories."
]
} | {
"paper_header_number": [
"1",
"2",
"3.1",
"3.2",
"3.3",
"4.1",
"4.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"Model",
"Learning",
"Experimental Setup",
"Automatic Evaluation",
"Human Evaluation",
"Conclusion"
]
} | GEM-SciDuet-train-22#paper-1021#slide-12 | Takeaway | o Generating and evaluating stories are both challenging due
to the complicated nature of stories
o No existing metrics are perfect for either training or testing
o AREL is a better learning framework for visual storytelling (can be applied to other generation tasks)
o Our approach is model-agnostic (advanced models bring better performance) | o Generating and evaluating stories are both challenging due to the complicated nature of stories
o No existing metrics are perfect for either training or testing
o AREL is a better learning framework for visual storytelling (can be applied to other generation tasks)
o Our approach is model-agnostic (advanced models bring better performance) | [] |
GEM-SciDuet-train-23#paper-1024#slide-0 | 1024 | Multimodal Machine Translation with Embedding Prediction | Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for translating rare words. In NMT, pretrained word embeddings have been shown to improve NMT of low-resource domains, and a search-based approach is proposed to address the rare word problem. In this study, we effectively combine these two approaches in the context of multimodal NMT and explore how we can take full advantage of pretrained word embeddings to better translate rare words. We report overall performance improvements of 1.24 METEOR and 2.49 BLEU and achieve an improvement of 7.67 F-score for rare word translation. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"paper_content_text": [
"Introduction In multimodal machine translation, a target sentence is translated from a source sentence together with related nonlinguistic information such as visual information.",
"Recently, neural machine translation (NMT) has superseded traditional statistical machine translation owing to the introduction of the attentional encoder-decoder model, in which machine translation is treated as a sequence-tosequence learning problem and is trained to pay attention to the source sentence while decoding (Bahdanau et al., 2015) .",
"Most previous studies on multimodal machine translation are classified into two categories: visual feature adaptation and data augmentation.",
"In visual feature adaptation, multitask learning (Elliott and Kádár, 2017) and feature integration architecture (Caglayan et al., 2017a; Calixto et al., 2017) are proposed to improve neural network models.",
"Data augmentation aims to deal with the fact that the size of available datasets for multimodal translation is quite small.",
"To alleviate this problem, parallel corpora without a visual source (Elliott and Kádár, 2017; Grönroos et al., 2018) and pseudo-parallel corpora obtained using back-translation (Helcl et al., 2018) are used as additional learning resources.",
"Due to the availability of parallel corpora for NMT, Qi et al.",
"(2018) suggested that initializing the encoder with pretrained word embedding improves the translation performance in low-resource language pairs.",
"Recently, Kumar and Tsvetkov (2019) proposed an NMT model that predicts the embedding of output words and searches for the output word instead of calculating the probability using the softmax function.",
"This model performed as well as conventional NMT, and it significantly improved the translation accuracy for rare words.",
"In this study, we introduce an NMT model with embedding prediction for multimodal machine translation that fully uses pretrained embeddings to improve the translation accuracy for rare words.",
"The main contributions of this study are as follows: 1.",
"We propose a novel multimodal machine translation model with embedding prediction and explore various settings to take full advantage of word embeddings.",
"2.",
"We show that pretrained word embeddings improve the model performance, especially when translating rare words.",
"Multimodal Machine Translation with Embedding Prediction We integrate an embedding prediction framework (Kumar and Tsvetkov, 2019) with the multimodal machine translation model and take advantage of pretrained word embeddings.",
"To highlight the effect of pretrained word embeddings and embedding prediction architecture, we adopt IMAGINA-TION (Elliott and Kádár, 2017) as a simple multimodal baseline.",
"IMAGINATION jointly learns machine translation and visual latent space models.",
"It is based on a conventional NMT model for a machine translation task.",
"In latent space learning, a source sentence and the paired image are mapped closely in the latent space.",
"We use the latent space learning model as it is, except for the preprocessing of images.",
"The models for each task share the same textual encoder in a multitask scenario.",
"The loss function for multitask learning is the linear interpolation of loss functions for each task.",
"J = λJ T (θ, ϕ T ) + (1 − λ)J V (θ, ϕ V ) (1) where θ is the parameter of the shared encoder; ϕ T and ϕ V are parameters of the machine translation model and latent space model, respectively; and λ is the interpolation coefficient 1 .",
"Neural Machine Translation with Embedding Prediction The machine translation part in our proposed model is an extension of Bahdanau et al.",
"(2015) .",
"However, instead of using the probability of each word in the decoder, it searches for output words based on their similarity with word embeddings.",
"Once the model predicts a word embedding, its nearest neighbor in the pretrained word embeddings is selected as the system output.",
"e j = tanh(W o s j + b o ) (2) y j = argmin w∈V {d(ê j , e(w))} (3) where s j ,ê j , andŷ j are the hidden state of the decoder, predicted embedding, and system output, respectively, for each timestep j in the decoding process.",
"e(w) is the pretrained word embedding for a target word w. d is a distance function that is used to calculate the word similarity.",
"W o and b o are parameters of the output layer.",
"We adopt margin-based ranking loss (Lazaridou et al., 2015) as the loss function 1 We use λ = 0.01 in the experiment.",
"of the machine translation model.",
"J T (θ, ϕ T ) = M ∑ j max{0, γ + d(ê j , e(w − j )) −d(ê j , e(y j ))} (4) w − j = argmax w∈V {d(ê j , e(w)) − d(ê j , e(y j )) (5) where M is the length of a target sentence and γ is the margin 2 .",
"w − j is a negative sample that is close to the predicted embedding and far from the gold embedding as measuring using d. Pretrained word embeddings are also used to initialize the embedding layers of the encoder and decoder, and the output layer of the decoder.",
"The embedding layer of the encoder is updated during training, and the embedding layer of the decoder is fixed to the initial value.",
"Visual Latent Space Learning The decoder of this model calculates the average vector over the hidden states h i in the encoder and maps it to the final vectorv in the latent space.",
"v = tanh(W v · 1 N N ∑ i h i ) (6) where N is the length of an input sentence and W v ∈ R N * M is learned parameter of the model.",
"We use max margin loss as the loss function; it learns to make corresponding latent vectors of a source sentence and the paired image closer.",
"J V (θ, ϕ V ) = ∑ v ′ ̸ =v max{0, α+d(v, v ′ )−d(v, v)} (7) where v is the latent vector of the paired image; v ′ , the image vector for other examples; and α, the margin that adjusts the sparseness of each vector in the latent space 3 .",
"Experiment Dataset We train, validate, and test our model with the Multi30k (Elliott et al., 2016) dataset published in the WMT17 Shared Task.",
"We choose French as the source language and English as the target one.",
"The vocabulary size of both the source and the target languages is 10,000.",
"Following Kumar and Tsvetkov (2019) , byte pair encoding (Sennrich et al., 2016) is not applied.",
"The source and target sentences are preprocessed with lower-casing, tokenizing and normalizing the punctuation.",
"Visual features are extracted using pretrained ResNet (He et al., 2016) .",
"Specifically, we encode all images in Multi30k with ResNet-50 and pick out the hidden state in the pool5 layer as a 2,048dimension visual feature.",
"We calculate the centroid of visual features in the training dataset as the bias vector and subtract the bias vector from all visual features in the training, validation and test datasets.",
"Model The model is implemented using nmtpytorch toolkit v3.0.0 4 (Caglayan et al., 2017b) .",
"The shared encoder has 256 hidden dimensions, and therefore the bidirectional GRU has 512 dimensions.",
"The decoder in NMT model has 256 hidden dimension.",
"The input word embedding size and output vector size is 300 each.",
"The latent space vector size is 2,048.",
"We used the Adam optimizer with learning rate of 0.0004.",
"The gradient norm is clipped to 1.0.",
"The dropout rate is 0.3.",
"BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) are used as performance metrics.",
"We also evaluated the models using the F-score of each word; this shows how accurately each word is translated into target sentences, as was proposed in Kumar and Tsvetkov (2019) .",
"The F-score is calculated as the harmonic mean of the precision (fraction of produced sentences with a word that is in the references sentences) and the recall (fraction of reference sentences with a word that is in model outputs).",
"We ran the experiment three times with different random seeds and obtained the mean and variance for each model.",
"To clarify the effect of pretrained embeddings on machine translation, we also initialized the encoder and decoder of our models with random values instead of pretrained embeddings, and investigated the effect of fixing decoder embeddings.",
"Word Embedding We use publicly available pretrained Fast-Text (Bojanowski et al., 2017) embeddings (Grave et al., 2018 trained on Wikipedia and Common Crawl using the CBOW algorithm, and the dimension is 300.",
"The embedding for unknown words is calculated as the average embedding over words that are a part of pretrained embeddings but are not included in the vocabularies.",
"Both the target and the source embeddings are preprocessed according to Mu and Viswanath (2018) , in which all embeddings are debiased to make the average embedding into a zero vector and the top five principal components are subtracted for each embedding.",
"Table 1 shows the overall performance of the proposed and baseline models.",
"Compared with randomly initialized models, our model outperforms the text-only baseline by +2.49 BLEU and +1.24 METEOR, and the multimodal baseline by +2.31 BLEU and +1.09 METEOR, respectively.",
"While pretrained embeddings improve NMT/IMAGINATION models as well, the improved models are still beyond our model.",
"Table 2 shows the results of ablation experiments of the initialization and fine-tuning methods.",
"The pretrained embedding models outperform other models by up to +2.77 BLEU and +1.37 METEOR.",
"Results Discussion Rare Words Our model shows a great improvement for low-frequency words.",
"Figure 1 shows a variety of F-score according to the word frequency in the training corpus.",
"Whereas IMAG-INATION improves the translation accuracy uniformly, our model shows substantial improvement for rare words.",
"Word Embeddings Furthermore, we found that decoder embeddings must be fixed to improve multimodal machine translation with embedding prediction.",
"When we allow fine-tuning on the embedding layer, the performance drops below the baseline.",
"It seems that fine-tuning embeddings in NMT with embedding prediction makes the model search for common words more than expected, thus preventing it from predicting rare words.",
"More interestingly, using pretrained FastText embeddings on the decoder rather than the encoder improves performance.",
"This finding is different from Qi et al.",
"(2018) , in which only the encoder benefits from pretrained embeddings.",
"Compared with the model initialized with a random value, initializing the decoder with the embedding results in an increase of +1.80 BLEU; in contrast, initializing the encoder results in an increase of only +0.11 BLEU.",
"This is caused by the multitask learning model that trains the encoder with images and takes it away from what the embedding prediction model wants to learn from the sentences.",
"Visual Feature We also investigated the effect of images and its preprocessing in NMT with embedding prediction ( Table 3 ).",
"The interesting result is that multitask learning with raw images would not help the predictive model.",
"Debiasing images is an essential preprocessing for NMT with embedding prediction to use images effectively in multitask learning scenario.",
"Translation Examples In Table 4 , we show French-English translations generated by different models.",
"In the left example, our proposed model correctly translates \"voûte\" into \"archway\" (occurs five times in the training set), Although the baseline model translates it to its synonym having higher frequency (nine times for \"arch\" and 12 times for \"monument\").",
"At the same time, our outputs tend to be less fluent for long sentences.",
"The right example shows that our model translates some words (\"patterned\" and \"carpet\") more concisely; however, it generates a less fluent sentence than the baseline.",
"Related Works Most studies on multimodal machine translation are divided into two categories: visual feature adaptation and data augmentation.",
"First, in visual feature adaptation, visual features are extracted using image processing techniques and then integrated into a machine translation model.",
"In contrast, most multitask learning models use latent space learning as their auxiliary task.",
"Elliott and Kádár (2017) proposed the IMAGINATION model that learns to construct the corresponding visual feature from the textual hidden states of a source sentence.",
"The visual model shares its encoder with the machine translation model; this helps in improving the textual encoder.",
"Second, in data augmentation, parallel corpora without images are widely used as additional train-Image Source un homme en vélo pédale devant une voûte .",
"quatre hommes , dont trois portent des kippas , sont assis sur un tapisà motifs bleu et vert olive .",
"Reference a man on a bicycle pedals through an archway .",
"four men , three of whom are wearing prayer caps , are sitting on a blue and olive green patterned mat .",
"NMT a man on a bicycle pedal past an arch .",
"four men , three of whom are wearing aprons , are sitting on a blue and green speedo carpet .",
"IMAG+ a man on a bicycle pedals outside a monument .",
"four men , three of them are wearing alaska , are sitting on a blue patterned carpet and green green seating .",
"Ours a man on a bicycle pedals in front of a archway .",
"four men , three are wearing these are wearing these are sitting on a blue and green patterned mat .",
"ing data.",
"Grönroos et al.",
"(2018) trained their multimodal model with parallel corpora and achieved state-of-the-art performance in the WMT 2018.",
"However, the use of monolingual corpora has seldom been studied in multimodal machine translation.",
"Our study proposes using word embeddings that are pretrained on monolingual corpora.",
"Conclusion We have proposed a multimodal machine translation model with embedding prediction and showed that pretrained word embeddings improve the performance in multimodal translation tasks, especially when translating rare words.",
"In the future, we will tailor the training corpora for embedding learning, especially for handling the embedding for unknown words in the context of multimodal machine translation.",
"We will also incorporate visual features into contextualized word embeddings."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3.1",
"3.2",
"3.3",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Multimodal Machine Translation with Embedding Prediction",
"Neural Machine Translation with Embedding Prediction",
"Visual Latent Space Learning",
"Dataset",
"Model",
"Word Embedding",
"Discussion",
"Related Works",
"Conclusion"
]
} | GEM-SciDuet-train-23#paper-1024#slide-0 | Multimodal Machine Translation | Practical application of machine translation
Translate a source sentence along with related nonlinguistic information
two young girls are sitting on the street eating corn .
deux jeunes filles sont assises dans la rue , mangeant du mais .
NAACL SRW 2019, Minneapolis | Practical application of machine translation
Translate a source sentence along with related nonlinguistic information
two young girls are sitting on the street eating corn .
deux jeunes filles sont assises dans la rue , mangeant du mais .
NAACL SRW 2019, Minneapolis | [] |
GEM-SciDuet-train-23#paper-1024#slide-1 | 1024 | Multimodal Machine Translation with Embedding Prediction | GEM-SciDuet-train-23#paper-1024#slide-1 | Issue of MMT | Multi30k [Elliott et al., 2016] has only a small amount of data
Statistic of training data
Hard to train rare word translation
Tend to output synonyms guided by language model
Source deux jeunes filles sont assises dans la rue , mangeant du mais .
Reference two young girls are sitting on the street eating corn .
NMT two young girls are sitting on the street eating food .
NAACL SRW 2019, Minneapolis | Multi30k [Elliott et al., 2016] has only a small amount of data
Statistic of training data
Hard to train rare word translation
Tend to output synonyms guided by language model
Source deux jeunes filles sont assises dans la rue , mangeant du mais .
Reference two young girls are sitting on the street eating corn .
NMT two young girls are sitting on the street eating food .
NAACL SRW 2019, Minneapolis | [] |
GEM-SciDuet-train-23#paper-1024#slide-2 | 1024 | Multimodal Machine Translation with Embedding Prediction | GEM-SciDuet-train-23#paper-1024#slide-2 | Previous Solutions | Parallel corpus without images [Elliott and Kadar, 2017; Gronroos et al., 2018]
Pseudo in-domain data by filtering general domain data
Back-translation of caption/monolingual data
NAACL SRW 2019, Minneapolis | Parallel corpus without images [Elliott and Kadar, 2017; Gronroos et al., 2018]
Pseudo in-domain data by filtering general domain data
Back-translation of caption/monolingual data
NAACL SRW 2019, Minneapolis | [] |
GEM-SciDuet-train-23#paper-1024#slide-3 | 1024 | Multimodal Machine Translation with Embedding Prediction | GEM-SciDuet-train-23#paper-1024#slide-3 | Motivation | Introduce pretrained word embedding to MMT
Improve rare word translation in MMT
Pretrained word embeddings with conventional MMT?
Pretrained Word Embedding in text-only NMT
Initialize embedding layers in encoder/decoder [Qi et al., 2018]
Improve overall performance in low-resource domain
Search-based decoder with continuous output [Kumar and Tsvetkov, 2019]
NAACL SRW 2019, Minneapolis | Introduce pretrained word embedding to MMT
Improve rare word translation in MMT
Pretrained word embeddings with conventional MMT?
Pretrained Word Embedding in text-only NMT
Initialize embedding layers in encoder/decoder [Qi et al., 2018]
Improve overall performance in low-resource domain
Search-based decoder with continuous output [Kumar and Tsvetkov, 2019]
NAACL SRW 2019, Minneapolis | [] |
GEM-SciDuet-train-23#paper-1024#slide-4 | 1024 | Multimodal Machine Translation with Embedding Prediction | Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for translating rare words. In NMT, pretrained word embeddings have been shown to improve NMT of low-resource domains, and a search-based approach is proposed to address the rare word problem. In this study, we effectively combine these two approaches in the context of multimodal NMT and explore how we can take full advantage of pretrained word embeddings to better translate rare words. We report overall performance improvements of 1.24 METEOR and 2.49 BLEU and achieve an improvement of 7.67 F-score for rare word translation. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"paper_content_text": [
"Introduction In multimodal machine translation, a target sentence is translated from a source sentence together with related nonlinguistic information such as visual information.",
"Recently, neural machine translation (NMT) has superseded traditional statistical machine translation owing to the introduction of the attentional encoder-decoder model, in which machine translation is treated as a sequence-tosequence learning problem and is trained to pay attention to the source sentence while decoding (Bahdanau et al., 2015) .",
"Most previous studies on multimodal machine translation are classified into two categories: visual feature adaptation and data augmentation.",
"In visual feature adaptation, multitask learning (Elliott and Kádár, 2017) and feature integration architecture (Caglayan et al., 2017a; Calixto et al., 2017) are proposed to improve neural network models.",
"Data augmentation aims to deal with the fact that the size of available datasets for multimodal translation is quite small.",
"To alleviate this problem, parallel corpora without a visual source (Elliott and Kádár, 2017; Grönroos et al., 2018) and pseudo-parallel corpora obtained using back-translation (Helcl et al., 2018) are used as additional learning resources.",
"Due to the availability of parallel corpora for NMT, Qi et al.",
"(2018) suggested that initializing the encoder with pretrained word embedding improves the translation performance in low-resource language pairs.",
"Recently, Kumar and Tsvetkov (2019) proposed an NMT model that predicts the embedding of output words and searches for the output word instead of calculating the probability using the softmax function.",
"This model performed as well as conventional NMT, and it significantly improved the translation accuracy for rare words.",
"In this study, we introduce an NMT model with embedding prediction for multimodal machine translation that fully uses pretrained embeddings to improve the translation accuracy for rare words.",
"The main contributions of this study are as follows: 1.",
"We propose a novel multimodal machine translation model with embedding prediction and explore various settings to take full advantage of word embeddings.",
"2.",
"We show that pretrained word embeddings improve the model performance, especially when translating rare words.",
"Multimodal Machine Translation with Embedding Prediction We integrate an embedding prediction framework (Kumar and Tsvetkov, 2019) with the multimodal machine translation model and take advantage of pretrained word embeddings.",
"To highlight the effect of pretrained word embeddings and embedding prediction architecture, we adopt IMAGINA-TION (Elliott and Kádár, 2017) as a simple multimodal baseline.",
"IMAGINATION jointly learns machine translation and visual latent space models.",
"It is based on a conventional NMT model for a machine translation task.",
"In latent space learning, a source sentence and the paired image are mapped closely in the latent space.",
"We use the latent space learning model as it is, except for the preprocessing of images.",
"The models for each task share the same textual encoder in a multitask scenario.",
"The loss function for multitask learning is the linear interpolation of loss functions for each task.",
"J = λJ T (θ, ϕ T ) + (1 − λ)J V (θ, ϕ V ) (1) where θ is the parameter of the shared encoder; ϕ T and ϕ V are parameters of the machine translation model and latent space model, respectively; and λ is the interpolation coefficient 1 .",
"Neural Machine Translation with Embedding Prediction The machine translation part in our proposed model is an extension of Bahdanau et al.",
"(2015) .",
"However, instead of using the probability of each word in the decoder, it searches for output words based on their similarity with word embeddings.",
"Once the model predicts a word embedding, its nearest neighbor in the pretrained word embeddings is selected as the system output.",
"e j = tanh(W o s j + b o ) (2) y j = argmin w∈V {d(ê j , e(w))} (3) where s j ,ê j , andŷ j are the hidden state of the decoder, predicted embedding, and system output, respectively, for each timestep j in the decoding process.",
"e(w) is the pretrained word embedding for a target word w. d is a distance function that is used to calculate the word similarity.",
"W o and b o are parameters of the output layer.",
"We adopt margin-based ranking loss (Lazaridou et al., 2015) as the loss function 1 We use λ = 0.01 in the experiment.",
"of the machine translation model.",
"J T (θ, ϕ T ) = M ∑ j max{0, γ + d(ê j , e(w − j )) −d(ê j , e(y j ))} (4) w − j = argmax w∈V {d(ê j , e(w)) − d(ê j , e(y j )) (5) where M is the length of a target sentence and γ is the margin 2 .",
"w − j is a negative sample that is close to the predicted embedding and far from the gold embedding as measuring using d. Pretrained word embeddings are also used to initialize the embedding layers of the encoder and decoder, and the output layer of the decoder.",
"The embedding layer of the encoder is updated during training, and the embedding layer of the decoder is fixed to the initial value.",
"Visual Latent Space Learning The decoder of this model calculates the average vector over the hidden states h i in the encoder and maps it to the final vectorv in the latent space.",
"v = tanh(W v · 1 N N ∑ i h i ) (6) where N is the length of an input sentence and W v ∈ R N * M is learned parameter of the model.",
"We use max margin loss as the loss function; it learns to make corresponding latent vectors of a source sentence and the paired image closer.",
"J V (θ, ϕ V ) = ∑ v ′ ̸ =v max{0, α+d(v, v ′ )−d(v, v)} (7) where v is the latent vector of the paired image; v ′ , the image vector for other examples; and α, the margin that adjusts the sparseness of each vector in the latent space 3 .",
"Experiment Dataset We train, validate, and test our model with the Multi30k (Elliott et al., 2016) dataset published in the WMT17 Shared Task.",
"We choose French as the source language and English as the target one.",
"The vocabulary size of both the source and the target languages is 10,000.",
"Following Kumar and Tsvetkov (2019) , byte pair encoding (Sennrich et al., 2016) is not applied.",
"The source and target sentences are preprocessed with lower-casing, tokenizing and normalizing the punctuation.",
"Visual features are extracted using pretrained ResNet (He et al., 2016) .",
"Specifically, we encode all images in Multi30k with ResNet-50 and pick out the hidden state in the pool5 layer as a 2,048dimension visual feature.",
"We calculate the centroid of visual features in the training dataset as the bias vector and subtract the bias vector from all visual features in the training, validation and test datasets.",
"Model The model is implemented using nmtpytorch toolkit v3.0.0 4 (Caglayan et al., 2017b) .",
"The shared encoder has 256 hidden dimensions, and therefore the bidirectional GRU has 512 dimensions.",
"The decoder in NMT model has 256 hidden dimension.",
"The input word embedding size and output vector size is 300 each.",
"The latent space vector size is 2,048.",
"We used the Adam optimizer with learning rate of 0.0004.",
"The gradient norm is clipped to 1.0.",
"The dropout rate is 0.3.",
"BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) are used as performance metrics.",
"We also evaluated the models using the F-score of each word; this shows how accurately each word is translated into target sentences, as was proposed in Kumar and Tsvetkov (2019) .",
"The F-score is calculated as the harmonic mean of the precision (fraction of produced sentences with a word that is in the references sentences) and the recall (fraction of reference sentences with a word that is in model outputs).",
"We ran the experiment three times with different random seeds and obtained the mean and variance for each model.",
"To clarify the effect of pretrained embeddings on machine translation, we also initialized the encoder and decoder of our models with random values instead of pretrained embeddings, and investigated the effect of fixing decoder embeddings.",
"Word Embedding We use publicly available pretrained Fast-Text (Bojanowski et al., 2017) embeddings (Grave et al., 2018 trained on Wikipedia and Common Crawl using the CBOW algorithm, and the dimension is 300.",
"The embedding for unknown words is calculated as the average embedding over words that are a part of pretrained embeddings but are not included in the vocabularies.",
"Both the target and the source embeddings are preprocessed according to Mu and Viswanath (2018) , in which all embeddings are debiased to make the average embedding into a zero vector and the top five principal components are subtracted for each embedding.",
"Table 1 shows the overall performance of the proposed and baseline models.",
"Compared with randomly initialized models, our model outperforms the text-only baseline by +2.49 BLEU and +1.24 METEOR, and the multimodal baseline by +2.31 BLEU and +1.09 METEOR, respectively.",
"While pretrained embeddings improve NMT/IMAGINATION models as well, the improved models are still beyond our model.",
"Table 2 shows the results of ablation experiments of the initialization and fine-tuning methods.",
"The pretrained embedding models outperform other models by up to +2.77 BLEU and +1.37 METEOR.",
"Results Discussion Rare Words Our model shows a great improvement for low-frequency words.",
"Figure 1 shows a variety of F-score according to the word frequency in the training corpus.",
"Whereas IMAG-INATION improves the translation accuracy uniformly, our model shows substantial improvement for rare words.",
"Word Embeddings Furthermore, we found that decoder embeddings must be fixed to improve multimodal machine translation with embedding prediction.",
"When we allow fine-tuning on the embedding layer, the performance drops below the baseline.",
"It seems that fine-tuning embeddings in NMT with embedding prediction makes the model search for common words more than expected, thus preventing it from predicting rare words.",
"More interestingly, using pretrained FastText embeddings on the decoder rather than the encoder improves performance.",
"This finding is different from Qi et al.",
"(2018) , in which only the encoder benefits from pretrained embeddings.",
"Compared with the model initialized with a random value, initializing the decoder with the embedding results in an increase of +1.80 BLEU; in contrast, initializing the encoder results in an increase of only +0.11 BLEU.",
"This is caused by the multitask learning model that trains the encoder with images and takes it away from what the embedding prediction model wants to learn from the sentences.",
"Visual Feature We also investigated the effect of images and its preprocessing in NMT with embedding prediction ( Table 3 ).",
"The interesting result is that multitask learning with raw images would not help the predictive model.",
"Debiasing images is an essential preprocessing for NMT with embedding prediction to use images effectively in multitask learning scenario.",
"Translation Examples In Table 4 , we show French-English translations generated by different models.",
"In the left example, our proposed model correctly translates \"voûte\" into \"archway\" (occurs five times in the training set), Although the baseline model translates it to its synonym having higher frequency (nine times for \"arch\" and 12 times for \"monument\").",
"At the same time, our outputs tend to be less fluent for long sentences.",
"The right example shows that our model translates some words (\"patterned\" and \"carpet\") more concisely; however, it generates a less fluent sentence than the baseline.",
"Related Works Most studies on multimodal machine translation are divided into two categories: visual feature adaptation and data augmentation.",
"First, in visual feature adaptation, visual features are extracted using image processing techniques and then integrated into a machine translation model.",
"In contrast, most multitask learning models use latent space learning as their auxiliary task.",
"Elliott and Kádár (2017) proposed the IMAGINATION model that learns to construct the corresponding visual feature from the textual hidden states of a source sentence.",
"The visual model shares its encoder with the machine translation model; this helps in improving the textual encoder.",
"Second, in data augmentation, parallel corpora without images are widely used as additional train-Image Source un homme en vélo pédale devant une voûte .",
"quatre hommes , dont trois portent des kippas , sont assis sur un tapisà motifs bleu et vert olive .",
"Reference a man on a bicycle pedals through an archway .",
"four men , three of whom are wearing prayer caps , are sitting on a blue and olive green patterned mat .",
"NMT a man on a bicycle pedal past an arch .",
"four men , three of whom are wearing aprons , are sitting on a blue and green speedo carpet .",
"IMAG+ a man on a bicycle pedals outside a monument .",
"four men , three of them are wearing alaska , are sitting on a blue patterned carpet and green green seating .",
"Ours a man on a bicycle pedals in front of a archway .",
"four men , three are wearing these are wearing these are sitting on a blue and green patterned mat .",
"ing data.",
"Grönroos et al.",
"(2018) trained their multimodal model with parallel corpora and achieved state-of-the-art performance in the WMT 2018.",
"However, the use of monolingual corpora has seldom been studied in multimodal machine translation.",
"Our study proposes using word embeddings that are pretrained on monolingual corpora.",
"Conclusion We have proposed a multimodal machine translation model with embedding prediction and showed that pretrained word embeddings improve the performance in multimodal translation tasks, especially when translating rare words.",
"In the future, we will tailor the training corpora for embedding learning, especially for handling the embedding for unknown words in the context of multimodal machine translation.",
"We will also incorporate visual features into contextualized word embeddings."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3.1",
"3.2",
"3.3",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Multimodal Machine Translation with Embedding Prediction",
"Neural Machine Translation with Embedding Prediction",
"Visual Latent Space Learning",
"Dataset",
"Model",
"Word Embedding",
"Discussion",
"Related Works",
"Conclusion"
]
} | GEM-SciDuet-train-23#paper-1024#slide-4 | Baseline IMAGINATION | While validating, testing Bahdanau et al., 2015
Train both MT task and shared space learning task to improve the shared encoder.
NAACL SRW 2019, Minneapolis | While validating, testing Bahdanau et al., 2015
Train both MT task and shared space learning task to improve the shared encoder.
NAACL SRW 2019, Minneapolis | [] |
GEM-SciDuet-train-23#paper-1024#slide-5 | 1024 | Multimodal Machine Translation with Embedding Prediction | Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for translating rare words. In NMT, pretrained word embeddings have been shown to improve NMT of low-resource domains, and a search-based approach is proposed to address the rare word problem. In this study, we effectively combine these two approaches in the context of multimodal NMT and explore how we can take full advantage of pretrained word embeddings to better translate rare words. We report overall performance improvements of 1.24 METEOR and 2.49 BLEU and achieve an improvement of 7.67 F-score for rare word translation. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"paper_content_text": [
"Introduction In multimodal machine translation, a target sentence is translated from a source sentence together with related nonlinguistic information such as visual information.",
"Recently, neural machine translation (NMT) has superseded traditional statistical machine translation owing to the introduction of the attentional encoder-decoder model, in which machine translation is treated as a sequence-tosequence learning problem and is trained to pay attention to the source sentence while decoding (Bahdanau et al., 2015) .",
"Most previous studies on multimodal machine translation are classified into two categories: visual feature adaptation and data augmentation.",
"In visual feature adaptation, multitask learning (Elliott and Kádár, 2017) and feature integration architecture (Caglayan et al., 2017a; Calixto et al., 2017) are proposed to improve neural network models.",
"Data augmentation aims to deal with the fact that the size of available datasets for multimodal translation is quite small.",
"To alleviate this problem, parallel corpora without a visual source (Elliott and Kádár, 2017; Grönroos et al., 2018) and pseudo-parallel corpora obtained using back-translation (Helcl et al., 2018) are used as additional learning resources.",
"Due to the availability of parallel corpora for NMT, Qi et al.",
"(2018) suggested that initializing the encoder with pretrained word embedding improves the translation performance in low-resource language pairs.",
"Recently, Kumar and Tsvetkov (2019) proposed an NMT model that predicts the embedding of output words and searches for the output word instead of calculating the probability using the softmax function.",
"This model performed as well as conventional NMT, and it significantly improved the translation accuracy for rare words.",
"In this study, we introduce an NMT model with embedding prediction for multimodal machine translation that fully uses pretrained embeddings to improve the translation accuracy for rare words.",
"The main contributions of this study are as follows: 1.",
"We propose a novel multimodal machine translation model with embedding prediction and explore various settings to take full advantage of word embeddings.",
"2.",
"We show that pretrained word embeddings improve the model performance, especially when translating rare words.",
"Multimodal Machine Translation with Embedding Prediction We integrate an embedding prediction framework (Kumar and Tsvetkov, 2019) with the multimodal machine translation model and take advantage of pretrained word embeddings.",
"To highlight the effect of pretrained word embeddings and embedding prediction architecture, we adopt IMAGINA-TION (Elliott and Kádár, 2017) as a simple multimodal baseline.",
"IMAGINATION jointly learns machine translation and visual latent space models.",
"It is based on a conventional NMT model for a machine translation task.",
"In latent space learning, a source sentence and the paired image are mapped closely in the latent space.",
"We use the latent space learning model as it is, except for the preprocessing of images.",
"The models for each task share the same textual encoder in a multitask scenario.",
"The loss function for multitask learning is the linear interpolation of loss functions for each task.",
"J = λJ T (θ, ϕ T ) + (1 − λ)J V (θ, ϕ V ) (1) where θ is the parameter of the shared encoder; ϕ T and ϕ V are parameters of the machine translation model and latent space model, respectively; and λ is the interpolation coefficient 1 .",
"Neural Machine Translation with Embedding Prediction The machine translation part in our proposed model is an extension of Bahdanau et al.",
"(2015) .",
"However, instead of using the probability of each word in the decoder, it searches for output words based on their similarity with word embeddings.",
"Once the model predicts a word embedding, its nearest neighbor in the pretrained word embeddings is selected as the system output.",
"e j = tanh(W o s j + b o ) (2) y j = argmin w∈V {d(ê j , e(w))} (3) where s j ,ê j , andŷ j are the hidden state of the decoder, predicted embedding, and system output, respectively, for each timestep j in the decoding process.",
"e(w) is the pretrained word embedding for a target word w. d is a distance function that is used to calculate the word similarity.",
"W o and b o are parameters of the output layer.",
"We adopt margin-based ranking loss (Lazaridou et al., 2015) as the loss function 1 We use λ = 0.01 in the experiment.",
"of the machine translation model.",
"J T (θ, ϕ T ) = M ∑ j max{0, γ + d(ê j , e(w − j )) −d(ê j , e(y j ))} (4) w − j = argmax w∈V {d(ê j , e(w)) − d(ê j , e(y j )) (5) where M is the length of a target sentence and γ is the margin 2 .",
"w − j is a negative sample that is close to the predicted embedding and far from the gold embedding as measuring using d. Pretrained word embeddings are also used to initialize the embedding layers of the encoder and decoder, and the output layer of the decoder.",
"The embedding layer of the encoder is updated during training, and the embedding layer of the decoder is fixed to the initial value.",
"Visual Latent Space Learning The decoder of this model calculates the average vector over the hidden states h i in the encoder and maps it to the final vectorv in the latent space.",
"v = tanh(W v · 1 N N ∑ i h i ) (6) where N is the length of an input sentence and W v ∈ R N * M is learned parameter of the model.",
"We use max margin loss as the loss function; it learns to make corresponding latent vectors of a source sentence and the paired image closer.",
"J V (θ, ϕ V ) = ∑ v ′ ̸ =v max{0, α+d(v, v ′ )−d(v, v)} (7) where v is the latent vector of the paired image; v ′ , the image vector for other examples; and α, the margin that adjusts the sparseness of each vector in the latent space 3 .",
"Experiment Dataset We train, validate, and test our model with the Multi30k (Elliott et al., 2016) dataset published in the WMT17 Shared Task.",
"We choose French as the source language and English as the target one.",
"The vocabulary size of both the source and the target languages is 10,000.",
"Following Kumar and Tsvetkov (2019) , byte pair encoding (Sennrich et al., 2016) is not applied.",
"The source and target sentences are preprocessed with lower-casing, tokenizing and normalizing the punctuation.",
"Visual features are extracted using pretrained ResNet (He et al., 2016) .",
"Specifically, we encode all images in Multi30k with ResNet-50 and pick out the hidden state in the pool5 layer as a 2,048dimension visual feature.",
"We calculate the centroid of visual features in the training dataset as the bias vector and subtract the bias vector from all visual features in the training, validation and test datasets.",
"Model The model is implemented using nmtpytorch toolkit v3.0.0 4 (Caglayan et al., 2017b) .",
"The shared encoder has 256 hidden dimensions, and therefore the bidirectional GRU has 512 dimensions.",
"The decoder in NMT model has 256 hidden dimension.",
"The input word embedding size and output vector size is 300 each.",
"The latent space vector size is 2,048.",
"We used the Adam optimizer with learning rate of 0.0004.",
"The gradient norm is clipped to 1.0.",
"The dropout rate is 0.3.",
"BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) are used as performance metrics.",
"We also evaluated the models using the F-score of each word; this shows how accurately each word is translated into target sentences, as was proposed in Kumar and Tsvetkov (2019) .",
"The F-score is calculated as the harmonic mean of the precision (fraction of produced sentences with a word that is in the references sentences) and the recall (fraction of reference sentences with a word that is in model outputs).",
"We ran the experiment three times with different random seeds and obtained the mean and variance for each model.",
"To clarify the effect of pretrained embeddings on machine translation, we also initialized the encoder and decoder of our models with random values instead of pretrained embeddings, and investigated the effect of fixing decoder embeddings.",
"Word Embedding We use publicly available pretrained Fast-Text (Bojanowski et al., 2017) embeddings (Grave et al., 2018 trained on Wikipedia and Common Crawl using the CBOW algorithm, and the dimension is 300.",
"The embedding for unknown words is calculated as the average embedding over words that are a part of pretrained embeddings but are not included in the vocabularies.",
"Both the target and the source embeddings are preprocessed according to Mu and Viswanath (2018) , in which all embeddings are debiased to make the average embedding into a zero vector and the top five principal components are subtracted for each embedding.",
"Table 1 shows the overall performance of the proposed and baseline models.",
"Compared with randomly initialized models, our model outperforms the text-only baseline by +2.49 BLEU and +1.24 METEOR, and the multimodal baseline by +2.31 BLEU and +1.09 METEOR, respectively.",
"While pretrained embeddings improve NMT/IMAGINATION models as well, the improved models are still beyond our model.",
"Table 2 shows the results of ablation experiments of the initialization and fine-tuning methods.",
"The pretrained embedding models outperform other models by up to +2.77 BLEU and +1.37 METEOR.",
"Results Discussion Rare Words Our model shows a great improvement for low-frequency words.",
"Figure 1 shows a variety of F-score according to the word frequency in the training corpus.",
"Whereas IMAG-INATION improves the translation accuracy uniformly, our model shows substantial improvement for rare words.",
"Word Embeddings Furthermore, we found that decoder embeddings must be fixed to improve multimodal machine translation with embedding prediction.",
"When we allow fine-tuning on the embedding layer, the performance drops below the baseline.",
"It seems that fine-tuning embeddings in NMT with embedding prediction makes the model search for common words more than expected, thus preventing it from predicting rare words.",
"More interestingly, using pretrained FastText embeddings on the decoder rather than the encoder improves performance.",
"This finding is different from Qi et al.",
"(2018) , in which only the encoder benefits from pretrained embeddings.",
"Compared with the model initialized with a random value, initializing the decoder with the embedding results in an increase of +1.80 BLEU; in contrast, initializing the encoder results in an increase of only +0.11 BLEU.",
"This is caused by the multitask learning model that trains the encoder with images and takes it away from what the embedding prediction model wants to learn from the sentences.",
"Visual Feature We also investigated the effect of images and its preprocessing in NMT with embedding prediction ( Table 3 ).",
"The interesting result is that multitask learning with raw images would not help the predictive model.",
"Debiasing images is an essential preprocessing for NMT with embedding prediction to use images effectively in multitask learning scenario.",
"Translation Examples In Table 4 , we show French-English translations generated by different models.",
"In the left example, our proposed model correctly translates \"voûte\" into \"archway\" (occurs five times in the training set), Although the baseline model translates it to its synonym having higher frequency (nine times for \"arch\" and 12 times for \"monument\").",
"At the same time, our outputs tend to be less fluent for long sentences.",
"The right example shows that our model translates some words (\"patterned\" and \"carpet\") more concisely; however, it generates a less fluent sentence than the baseline.",
"Related Works Most studies on multimodal machine translation are divided into two categories: visual feature adaptation and data augmentation.",
"First, in visual feature adaptation, visual features are extracted using image processing techniques and then integrated into a machine translation model.",
"In contrast, most multitask learning models use latent space learning as their auxiliary task.",
"Elliott and Kádár (2017) proposed the IMAGINATION model that learns to construct the corresponding visual feature from the textual hidden states of a source sentence.",
"The visual model shares its encoder with the machine translation model; this helps in improving the textual encoder.",
"Second, in data augmentation, parallel corpora without images are widely used as additional train-Image Source un homme en vélo pédale devant une voûte .",
"quatre hommes , dont trois portent des kippas , sont assis sur un tapisà motifs bleu et vert olive .",
"Reference a man on a bicycle pedals through an archway .",
"four men , three of whom are wearing prayer caps , are sitting on a blue and olive green patterned mat .",
"NMT a man on a bicycle pedal past an arch .",
"four men , three of whom are wearing aprons , are sitting on a blue and green speedo carpet .",
"IMAG+ a man on a bicycle pedals outside a monument .",
"four men , three of them are wearing alaska , are sitting on a blue patterned carpet and green green seating .",
"Ours a man on a bicycle pedals in front of a archway .",
"four men , three are wearing these are wearing these are sitting on a blue and green patterned mat .",
"ing data.",
"Grönroos et al.",
"(2018) trained their multimodal model with parallel corpora and achieved state-of-the-art performance in the WMT 2018.",
"However, the use of monolingual corpora has seldom been studied in multimodal machine translation.",
"Our study proposes using word embeddings that are pretrained on monolingual corpora.",
"Conclusion We have proposed a multimodal machine translation model with embedding prediction and showed that pretrained word embeddings improve the performance in multimodal translation tasks, especially when translating rare words.",
"In the future, we will tailor the training corpora for embedding learning, especially for handling the embedding for unknown words in the context of multimodal machine translation.",
"We will also incorporate visual features into contextualized word embeddings."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3.1",
"3.2",
"3.3",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Multimodal Machine Translation with Embedding Prediction",
"Neural Machine Translation with Embedding Prediction",
"Visual Latent Space Learning",
"Dataset",
"Model",
"Word Embedding",
"Discussion",
"Related Works",
"Conclusion"
]
} | GEM-SciDuet-train-23#paper-1024#slide-5 | MMT with Embedding Prediction | 1. Use embedding prediction
While validating, testing in decoder
2. Initialize embedding layers in encoder/decoder with pretrained word embeddings
While training 3. Shift visual features to make the mean vector be a zero
NAACL SRW 2019, Minneapolis | 1. Use embedding prediction
While validating, testing in decoder
2. Initialize embedding layers in encoder/decoder with pretrained word embeddings
While training 3. Shift visual features to make the mean vector be a zero
NAACL SRW 2019, Minneapolis | [] |
GEM-SciDuet-train-23#paper-1024#slide-6 | 1024 | Multimodal Machine Translation with Embedding Prediction | Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for translating rare words. In NMT, pretrained word embeddings have been shown to improve NMT of low-resource domains, and a search-based approach is proposed to address the rare word problem. In this study, we effectively combine these two approaches in the context of multimodal NMT and explore how we can take full advantage of pretrained word embeddings to better translate rare words. We report overall performance improvements of 1.24 METEOR and 2.49 BLEU and achieve an improvement of 7.67 F-score for rare word translation. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"paper_content_text": [
"Introduction In multimodal machine translation, a target sentence is translated from a source sentence together with related nonlinguistic information such as visual information.",
"Recently, neural machine translation (NMT) has superseded traditional statistical machine translation owing to the introduction of the attentional encoder-decoder model, in which machine translation is treated as a sequence-tosequence learning problem and is trained to pay attention to the source sentence while decoding (Bahdanau et al., 2015) .",
"Most previous studies on multimodal machine translation are classified into two categories: visual feature adaptation and data augmentation.",
"In visual feature adaptation, multitask learning (Elliott and Kádár, 2017) and feature integration architecture (Caglayan et al., 2017a; Calixto et al., 2017) are proposed to improve neural network models.",
"Data augmentation aims to deal with the fact that the size of available datasets for multimodal translation is quite small.",
"To alleviate this problem, parallel corpora without a visual source (Elliott and Kádár, 2017; Grönroos et al., 2018) and pseudo-parallel corpora obtained using back-translation (Helcl et al., 2018) are used as additional learning resources.",
"Due to the availability of parallel corpora for NMT, Qi et al.",
"(2018) suggested that initializing the encoder with pretrained word embedding improves the translation performance in low-resource language pairs.",
"Recently, Kumar and Tsvetkov (2019) proposed an NMT model that predicts the embedding of output words and searches for the output word instead of calculating the probability using the softmax function.",
"This model performed as well as conventional NMT, and it significantly improved the translation accuracy for rare words.",
"In this study, we introduce an NMT model with embedding prediction for multimodal machine translation that fully uses pretrained embeddings to improve the translation accuracy for rare words.",
"The main contributions of this study are as follows: 1.",
"We propose a novel multimodal machine translation model with embedding prediction and explore various settings to take full advantage of word embeddings.",
"2.",
"We show that pretrained word embeddings improve the model performance, especially when translating rare words.",
"Multimodal Machine Translation with Embedding Prediction We integrate an embedding prediction framework (Kumar and Tsvetkov, 2019) with the multimodal machine translation model and take advantage of pretrained word embeddings.",
"To highlight the effect of pretrained word embeddings and embedding prediction architecture, we adopt IMAGINA-TION (Elliott and Kádár, 2017) as a simple multimodal baseline.",
"IMAGINATION jointly learns machine translation and visual latent space models.",
"It is based on a conventional NMT model for a machine translation task.",
"In latent space learning, a source sentence and the paired image are mapped closely in the latent space.",
"We use the latent space learning model as it is, except for the preprocessing of images.",
"The models for each task share the same textual encoder in a multitask scenario.",
"The loss function for multitask learning is the linear interpolation of loss functions for each task.",
"J = λJ T (θ, ϕ T ) + (1 − λ)J V (θ, ϕ V ) (1) where θ is the parameter of the shared encoder; ϕ T and ϕ V are parameters of the machine translation model and latent space model, respectively; and λ is the interpolation coefficient 1 .",
"Neural Machine Translation with Embedding Prediction The machine translation part in our proposed model is an extension of Bahdanau et al.",
"(2015) .",
"However, instead of using the probability of each word in the decoder, it searches for output words based on their similarity with word embeddings.",
"Once the model predicts a word embedding, its nearest neighbor in the pretrained word embeddings is selected as the system output.",
"e j = tanh(W o s j + b o ) (2) y j = argmin w∈V {d(ê j , e(w))} (3) where s j ,ê j , andŷ j are the hidden state of the decoder, predicted embedding, and system output, respectively, for each timestep j in the decoding process.",
"e(w) is the pretrained word embedding for a target word w. d is a distance function that is used to calculate the word similarity.",
"W o and b o are parameters of the output layer.",
"We adopt margin-based ranking loss (Lazaridou et al., 2015) as the loss function 1 We use λ = 0.01 in the experiment.",
"of the machine translation model.",
"J T (θ, ϕ T ) = M ∑ j max{0, γ + d(ê j , e(w − j )) −d(ê j , e(y j ))} (4) w − j = argmax w∈V {d(ê j , e(w)) − d(ê j , e(y j )) (5) where M is the length of a target sentence and γ is the margin 2 .",
"w − j is a negative sample that is close to the predicted embedding and far from the gold embedding as measuring using d. Pretrained word embeddings are also used to initialize the embedding layers of the encoder and decoder, and the output layer of the decoder.",
"The embedding layer of the encoder is updated during training, and the embedding layer of the decoder is fixed to the initial value.",
"Visual Latent Space Learning The decoder of this model calculates the average vector over the hidden states h i in the encoder and maps it to the final vectorv in the latent space.",
"v = tanh(W v · 1 N N ∑ i h i ) (6) where N is the length of an input sentence and W v ∈ R N * M is learned parameter of the model.",
"We use max margin loss as the loss function; it learns to make corresponding latent vectors of a source sentence and the paired image closer.",
"J V (θ, ϕ V ) = ∑ v ′ ̸ =v max{0, α+d(v, v ′ )−d(v, v)} (7) where v is the latent vector of the paired image; v ′ , the image vector for other examples; and α, the margin that adjusts the sparseness of each vector in the latent space 3 .",
"Experiment Dataset We train, validate, and test our model with the Multi30k (Elliott et al., 2016) dataset published in the WMT17 Shared Task.",
"We choose French as the source language and English as the target one.",
"The vocabulary size of both the source and the target languages is 10,000.",
"Following Kumar and Tsvetkov (2019) , byte pair encoding (Sennrich et al., 2016) is not applied.",
"The source and target sentences are preprocessed with lower-casing, tokenizing and normalizing the punctuation.",
"Visual features are extracted using pretrained ResNet (He et al., 2016) .",
"Specifically, we encode all images in Multi30k with ResNet-50 and pick out the hidden state in the pool5 layer as a 2,048dimension visual feature.",
"We calculate the centroid of visual features in the training dataset as the bias vector and subtract the bias vector from all visual features in the training, validation and test datasets.",
"Model The model is implemented using nmtpytorch toolkit v3.0.0 4 (Caglayan et al., 2017b) .",
"The shared encoder has 256 hidden dimensions, and therefore the bidirectional GRU has 512 dimensions.",
"The decoder in NMT model has 256 hidden dimension.",
"The input word embedding size and output vector size is 300 each.",
"The latent space vector size is 2,048.",
"We used the Adam optimizer with learning rate of 0.0004.",
"The gradient norm is clipped to 1.0.",
"The dropout rate is 0.3.",
"BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) are used as performance metrics.",
"We also evaluated the models using the F-score of each word; this shows how accurately each word is translated into target sentences, as was proposed in Kumar and Tsvetkov (2019) .",
"The F-score is calculated as the harmonic mean of the precision (fraction of produced sentences with a word that is in the references sentences) and the recall (fraction of reference sentences with a word that is in model outputs).",
"We ran the experiment three times with different random seeds and obtained the mean and variance for each model.",
"To clarify the effect of pretrained embeddings on machine translation, we also initialized the encoder and decoder of our models with random values instead of pretrained embeddings, and investigated the effect of fixing decoder embeddings.",
"Word Embedding We use publicly available pretrained Fast-Text (Bojanowski et al., 2017) embeddings (Grave et al., 2018 trained on Wikipedia and Common Crawl using the CBOW algorithm, and the dimension is 300.",
"The embedding for unknown words is calculated as the average embedding over words that are a part of pretrained embeddings but are not included in the vocabularies.",
"Both the target and the source embeddings are preprocessed according to Mu and Viswanath (2018) , in which all embeddings are debiased to make the average embedding into a zero vector and the top five principal components are subtracted for each embedding.",
"Table 1 shows the overall performance of the proposed and baseline models.",
"Compared with randomly initialized models, our model outperforms the text-only baseline by +2.49 BLEU and +1.24 METEOR, and the multimodal baseline by +2.31 BLEU and +1.09 METEOR, respectively.",
"While pretrained embeddings improve NMT/IMAGINATION models as well, the improved models are still beyond our model.",
"Table 2 shows the results of ablation experiments of the initialization and fine-tuning methods.",
"The pretrained embedding models outperform other models by up to +2.77 BLEU and +1.37 METEOR.",
"Results Discussion Rare Words Our model shows a great improvement for low-frequency words.",
"Figure 1 shows a variety of F-score according to the word frequency in the training corpus.",
"Whereas IMAG-INATION improves the translation accuracy uniformly, our model shows substantial improvement for rare words.",
"Word Embeddings Furthermore, we found that decoder embeddings must be fixed to improve multimodal machine translation with embedding prediction.",
"When we allow fine-tuning on the embedding layer, the performance drops below the baseline.",
"It seems that fine-tuning embeddings in NMT with embedding prediction makes the model search for common words more than expected, thus preventing it from predicting rare words.",
"More interestingly, using pretrained FastText embeddings on the decoder rather than the encoder improves performance.",
"This finding is different from Qi et al.",
"(2018) , in which only the encoder benefits from pretrained embeddings.",
"Compared with the model initialized with a random value, initializing the decoder with the embedding results in an increase of +1.80 BLEU; in contrast, initializing the encoder results in an increase of only +0.11 BLEU.",
"This is caused by the multitask learning model that trains the encoder with images and takes it away from what the embedding prediction model wants to learn from the sentences.",
"Visual Feature We also investigated the effect of images and its preprocessing in NMT with embedding prediction ( Table 3 ).",
"The interesting result is that multitask learning with raw images would not help the predictive model.",
"Debiasing images is an essential preprocessing for NMT with embedding prediction to use images effectively in multitask learning scenario.",
"Translation Examples In Table 4 , we show French-English translations generated by different models.",
"In the left example, our proposed model correctly translates \"voûte\" into \"archway\" (occurs five times in the training set), Although the baseline model translates it to its synonym having higher frequency (nine times for \"arch\" and 12 times for \"monument\").",
"At the same time, our outputs tend to be less fluent for long sentences.",
"The right example shows that our model translates some words (\"patterned\" and \"carpet\") more concisely; however, it generates a less fluent sentence than the baseline.",
"Related Works Most studies on multimodal machine translation are divided into two categories: visual feature adaptation and data augmentation.",
"First, in visual feature adaptation, visual features are extracted using image processing techniques and then integrated into a machine translation model.",
"In contrast, most multitask learning models use latent space learning as their auxiliary task.",
"Elliott and Kádár (2017) proposed the IMAGINATION model that learns to construct the corresponding visual feature from the textual hidden states of a source sentence.",
"The visual model shares its encoder with the machine translation model; this helps in improving the textual encoder.",
"Second, in data augmentation, parallel corpora without images are widely used as additional train-Image Source un homme en vélo pédale devant une voûte .",
"quatre hommes , dont trois portent des kippas , sont assis sur un tapisà motifs bleu et vert olive .",
"Reference a man on a bicycle pedals through an archway .",
"four men , three of whom are wearing prayer caps , are sitting on a blue and olive green patterned mat .",
"NMT a man on a bicycle pedal past an arch .",
"four men , three of whom are wearing aprons , are sitting on a blue and green speedo carpet .",
"IMAG+ a man on a bicycle pedals outside a monument .",
"four men , three of them are wearing alaska , are sitting on a blue patterned carpet and green green seating .",
"Ours a man on a bicycle pedals in front of a archway .",
"four men , three are wearing these are wearing these are sitting on a blue and green patterned mat .",
"ing data.",
"Grönroos et al.",
"(2018) trained their multimodal model with parallel corpora and achieved state-of-the-art performance in the WMT 2018.",
"However, the use of monolingual corpora has seldom been studied in multimodal machine translation.",
"Our study proposes using word embeddings that are pretrained on monolingual corpora.",
"Conclusion We have proposed a multimodal machine translation model with embedding prediction and showed that pretrained word embeddings improve the performance in multimodal translation tasks, especially when translating rare words.",
"In the future, we will tailor the training corpora for embedding learning, especially for handling the embedding for unknown words in the context of multimodal machine translation.",
"We will also incorporate visual features into contextualized word embeddings."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3.1",
"3.2",
"3.3",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Multimodal Machine Translation with Embedding Prediction",
"Neural Machine Translation with Embedding Prediction",
"Visual Latent Space Learning",
"Dataset",
"Model",
"Word Embedding",
"Discussion",
"Related Works",
"Conclusion"
]
} | GEM-SciDuet-train-23#paper-1024#slide-6 | Embedding Prediction | i.e. Continuous Output [Kumar and Tsvetkov, 2019]
Predict a word embedding and search for the nearest word
1. Predict a word embedding of next word.
2. Compute cosine similarities with each word in pretrained word embedding.
3. Find and output the most similar word as system output.
Pretrained word embedding will NOT be updated during training.
NAACL SRW 2019, Minneapolis | i.e. Continuous Output [Kumar and Tsvetkov, 2019]
Predict a word embedding and search for the nearest word
1. Predict a word embedding of next word.
2. Compute cosine similarities with each word in pretrained word embedding.
3. Find and output the most similar word as system output.
Pretrained word embedding will NOT be updated during training.
NAACL SRW 2019, Minneapolis | [] |
GEM-SciDuet-train-23#paper-1024#slide-7 | 1024 | Multimodal Machine Translation with Embedding Prediction | Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for translating rare words. In NMT, pretrained word embeddings have been shown to improve NMT of low-resource domains, and a search-based approach is proposed to address the rare word problem. In this study, we effectively combine these two approaches in the context of multimodal NMT and explore how we can take full advantage of pretrained word embeddings to better translate rare words. We report overall performance improvements of 1.24 METEOR and 2.49 BLEU and achieve an improvement of 7.67 F-score for rare word translation. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"paper_content_text": [
"Introduction In multimodal machine translation, a target sentence is translated from a source sentence together with related nonlinguistic information such as visual information.",
"Recently, neural machine translation (NMT) has superseded traditional statistical machine translation owing to the introduction of the attentional encoder-decoder model, in which machine translation is treated as a sequence-tosequence learning problem and is trained to pay attention to the source sentence while decoding (Bahdanau et al., 2015) .",
"Most previous studies on multimodal machine translation are classified into two categories: visual feature adaptation and data augmentation.",
"In visual feature adaptation, multitask learning (Elliott and Kádár, 2017) and feature integration architecture (Caglayan et al., 2017a; Calixto et al., 2017) are proposed to improve neural network models.",
"Data augmentation aims to deal with the fact that the size of available datasets for multimodal translation is quite small.",
"To alleviate this problem, parallel corpora without a visual source (Elliott and Kádár, 2017; Grönroos et al., 2018) and pseudo-parallel corpora obtained using back-translation (Helcl et al., 2018) are used as additional learning resources.",
"Due to the availability of parallel corpora for NMT, Qi et al.",
"(2018) suggested that initializing the encoder with pretrained word embedding improves the translation performance in low-resource language pairs.",
"Recently, Kumar and Tsvetkov (2019) proposed an NMT model that predicts the embedding of output words and searches for the output word instead of calculating the probability using the softmax function.",
"This model performed as well as conventional NMT, and it significantly improved the translation accuracy for rare words.",
"In this study, we introduce an NMT model with embedding prediction for multimodal machine translation that fully uses pretrained embeddings to improve the translation accuracy for rare words.",
"The main contributions of this study are as follows: 1.",
"We propose a novel multimodal machine translation model with embedding prediction and explore various settings to take full advantage of word embeddings.",
"2.",
"We show that pretrained word embeddings improve the model performance, especially when translating rare words.",
"Multimodal Machine Translation with Embedding Prediction We integrate an embedding prediction framework (Kumar and Tsvetkov, 2019) with the multimodal machine translation model and take advantage of pretrained word embeddings.",
"To highlight the effect of pretrained word embeddings and embedding prediction architecture, we adopt IMAGINA-TION (Elliott and Kádár, 2017) as a simple multimodal baseline.",
"IMAGINATION jointly learns machine translation and visual latent space models.",
"It is based on a conventional NMT model for a machine translation task.",
"In latent space learning, a source sentence and the paired image are mapped closely in the latent space.",
"We use the latent space learning model as it is, except for the preprocessing of images.",
"The models for each task share the same textual encoder in a multitask scenario.",
"The loss function for multitask learning is the linear interpolation of loss functions for each task.",
"J = λJ T (θ, ϕ T ) + (1 − λ)J V (θ, ϕ V ) (1) where θ is the parameter of the shared encoder; ϕ T and ϕ V are parameters of the machine translation model and latent space model, respectively; and λ is the interpolation coefficient 1 .",
"Neural Machine Translation with Embedding Prediction The machine translation part in our proposed model is an extension of Bahdanau et al.",
"(2015) .",
"However, instead of using the probability of each word in the decoder, it searches for output words based on their similarity with word embeddings.",
"Once the model predicts a word embedding, its nearest neighbor in the pretrained word embeddings is selected as the system output.",
"e j = tanh(W o s j + b o ) (2) y j = argmin w∈V {d(ê j , e(w))} (3) where s j ,ê j , andŷ j are the hidden state of the decoder, predicted embedding, and system output, respectively, for each timestep j in the decoding process.",
"e(w) is the pretrained word embedding for a target word w. d is a distance function that is used to calculate the word similarity.",
"W o and b o are parameters of the output layer.",
"We adopt margin-based ranking loss (Lazaridou et al., 2015) as the loss function 1 We use λ = 0.01 in the experiment.",
"of the machine translation model.",
"J T (θ, ϕ T ) = M ∑ j max{0, γ + d(ê j , e(w − j )) −d(ê j , e(y j ))} (4) w − j = argmax w∈V {d(ê j , e(w)) − d(ê j , e(y j )) (5) where M is the length of a target sentence and γ is the margin 2 .",
"w − j is a negative sample that is close to the predicted embedding and far from the gold embedding as measuring using d. Pretrained word embeddings are also used to initialize the embedding layers of the encoder and decoder, and the output layer of the decoder.",
"The embedding layer of the encoder is updated during training, and the embedding layer of the decoder is fixed to the initial value.",
"Visual Latent Space Learning The decoder of this model calculates the average vector over the hidden states h i in the encoder and maps it to the final vectorv in the latent space.",
"v = tanh(W v · 1 N N ∑ i h i ) (6) where N is the length of an input sentence and W v ∈ R N * M is learned parameter of the model.",
"We use max margin loss as the loss function; it learns to make corresponding latent vectors of a source sentence and the paired image closer.",
"J V (θ, ϕ V ) = ∑ v ′ ̸ =v max{0, α+d(v, v ′ )−d(v, v)} (7) where v is the latent vector of the paired image; v ′ , the image vector for other examples; and α, the margin that adjusts the sparseness of each vector in the latent space 3 .",
"Experiment Dataset We train, validate, and test our model with the Multi30k (Elliott et al., 2016) dataset published in the WMT17 Shared Task.",
"We choose French as the source language and English as the target one.",
"The vocabulary size of both the source and the target languages is 10,000.",
"Following Kumar and Tsvetkov (2019) , byte pair encoding (Sennrich et al., 2016) is not applied.",
"The source and target sentences are preprocessed with lower-casing, tokenizing and normalizing the punctuation.",
"Visual features are extracted using pretrained ResNet (He et al., 2016) .",
"Specifically, we encode all images in Multi30k with ResNet-50 and pick out the hidden state in the pool5 layer as a 2,048dimension visual feature.",
"We calculate the centroid of visual features in the training dataset as the bias vector and subtract the bias vector from all visual features in the training, validation and test datasets.",
"Model The model is implemented using nmtpytorch toolkit v3.0.0 4 (Caglayan et al., 2017b) .",
"The shared encoder has 256 hidden dimensions, and therefore the bidirectional GRU has 512 dimensions.",
"The decoder in NMT model has 256 hidden dimension.",
"The input word embedding size and output vector size is 300 each.",
"The latent space vector size is 2,048.",
"We used the Adam optimizer with learning rate of 0.0004.",
"The gradient norm is clipped to 1.0.",
"The dropout rate is 0.3.",
"BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) are used as performance metrics.",
"We also evaluated the models using the F-score of each word; this shows how accurately each word is translated into target sentences, as was proposed in Kumar and Tsvetkov (2019) .",
"The F-score is calculated as the harmonic mean of the precision (fraction of produced sentences with a word that is in the references sentences) and the recall (fraction of reference sentences with a word that is in model outputs).",
"We ran the experiment three times with different random seeds and obtained the mean and variance for each model.",
"To clarify the effect of pretrained embeddings on machine translation, we also initialized the encoder and decoder of our models with random values instead of pretrained embeddings, and investigated the effect of fixing decoder embeddings.",
"Word Embedding We use publicly available pretrained Fast-Text (Bojanowski et al., 2017) embeddings (Grave et al., 2018 trained on Wikipedia and Common Crawl using the CBOW algorithm, and the dimension is 300.",
"The embedding for unknown words is calculated as the average embedding over words that are a part of pretrained embeddings but are not included in the vocabularies.",
"Both the target and the source embeddings are preprocessed according to Mu and Viswanath (2018) , in which all embeddings are debiased to make the average embedding into a zero vector and the top five principal components are subtracted for each embedding.",
"Table 1 shows the overall performance of the proposed and baseline models.",
"Compared with randomly initialized models, our model outperforms the text-only baseline by +2.49 BLEU and +1.24 METEOR, and the multimodal baseline by +2.31 BLEU and +1.09 METEOR, respectively.",
"While pretrained embeddings improve NMT/IMAGINATION models as well, the improved models are still beyond our model.",
"Table 2 shows the results of ablation experiments of the initialization and fine-tuning methods.",
"The pretrained embedding models outperform other models by up to +2.77 BLEU and +1.37 METEOR.",
"Results Discussion Rare Words Our model shows a great improvement for low-frequency words.",
"Figure 1 shows a variety of F-score according to the word frequency in the training corpus.",
"Whereas IMAG-INATION improves the translation accuracy uniformly, our model shows substantial improvement for rare words.",
"Word Embeddings Furthermore, we found that decoder embeddings must be fixed to improve multimodal machine translation with embedding prediction.",
"When we allow fine-tuning on the embedding layer, the performance drops below the baseline.",
"It seems that fine-tuning embeddings in NMT with embedding prediction makes the model search for common words more than expected, thus preventing it from predicting rare words.",
"More interestingly, using pretrained FastText embeddings on the decoder rather than the encoder improves performance.",
"This finding is different from Qi et al.",
"(2018) , in which only the encoder benefits from pretrained embeddings.",
"Compared with the model initialized with a random value, initializing the decoder with the embedding results in an increase of +1.80 BLEU; in contrast, initializing the encoder results in an increase of only +0.11 BLEU.",
"This is caused by the multitask learning model that trains the encoder with images and takes it away from what the embedding prediction model wants to learn from the sentences.",
"Visual Feature We also investigated the effect of images and its preprocessing in NMT with embedding prediction ( Table 3 ).",
"The interesting result is that multitask learning with raw images would not help the predictive model.",
"Debiasing images is an essential preprocessing for NMT with embedding prediction to use images effectively in multitask learning scenario.",
"Translation Examples In Table 4 , we show French-English translations generated by different models.",
"In the left example, our proposed model correctly translates \"voûte\" into \"archway\" (occurs five times in the training set), Although the baseline model translates it to its synonym having higher frequency (nine times for \"arch\" and 12 times for \"monument\").",
"At the same time, our outputs tend to be less fluent for long sentences.",
"The right example shows that our model translates some words (\"patterned\" and \"carpet\") more concisely; however, it generates a less fluent sentence than the baseline.",
"Related Works Most studies on multimodal machine translation are divided into two categories: visual feature adaptation and data augmentation.",
"First, in visual feature adaptation, visual features are extracted using image processing techniques and then integrated into a machine translation model.",
"In contrast, most multitask learning models use latent space learning as their auxiliary task.",
"Elliott and Kádár (2017) proposed the IMAGINATION model that learns to construct the corresponding visual feature from the textual hidden states of a source sentence.",
"The visual model shares its encoder with the machine translation model; this helps in improving the textual encoder.",
"Second, in data augmentation, parallel corpora without images are widely used as additional train-Image Source un homme en vélo pédale devant une voûte .",
"quatre hommes , dont trois portent des kippas , sont assis sur un tapisà motifs bleu et vert olive .",
"Reference a man on a bicycle pedals through an archway .",
"four men , three of whom are wearing prayer caps , are sitting on a blue and olive green patterned mat .",
"NMT a man on a bicycle pedal past an arch .",
"four men , three of whom are wearing aprons , are sitting on a blue and green speedo carpet .",
"IMAG+ a man on a bicycle pedals outside a monument .",
"four men , three of them are wearing alaska , are sitting on a blue patterned carpet and green green seating .",
"Ours a man on a bicycle pedals in front of a archway .",
"four men , three are wearing these are wearing these are sitting on a blue and green patterned mat .",
"ing data.",
"Grönroos et al.",
"(2018) trained their multimodal model with parallel corpora and achieved state-of-the-art performance in the WMT 2018.",
"However, the use of monolingual corpora has seldom been studied in multimodal machine translation.",
"Our study proposes using word embeddings that are pretrained on monolingual corpora.",
"Conclusion We have proposed a multimodal machine translation model with embedding prediction and showed that pretrained word embeddings improve the performance in multimodal translation tasks, especially when translating rare words.",
"In the future, we will tailor the training corpora for embedding learning, especially for handling the embedding for unknown words in the context of multimodal machine translation.",
"We will also incorporate visual features into contextualized word embeddings."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3.1",
"3.2",
"3.3",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Multimodal Machine Translation with Embedding Prediction",
"Neural Machine Translation with Embedding Prediction",
"Visual Latent Space Learning",
"Dataset",
"Model",
"Word Embedding",
"Discussion",
"Related Works",
"Conclusion"
]
} | GEM-SciDuet-train-23#paper-1024#slide-7 | Embedding Layer Initialization | Initialize embedding layer with pretrained word embedding
Fine-tune the embedding layer in encoder
DO NOT update the embedding layer in decoder
NAACL SRW 2019, Minneapolis | []
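A PyTorch sketch of the initialization scheme on this slide, assuming src_vectors and tgt_vectors are pretrained embedding tensors; the variable names are ours.

import torch.nn as nn

enc_emb = nn.Embedding.from_pretrained(src_vectors, freeze=False)  # encoder side is fine-tuned
dec_emb = nn.Embedding.from_pretrained(tgt_vectors, freeze=True)   # decoder side stays fixed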
GEM-SciDuet-train-23#paper-1024#slide-8 | 1024 | Multimodal Machine Translation with Embedding Prediction | Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for translating rare words. In NMT, pretrained word embeddings have been shown to improve NMT of low-resource domains, and a search-based approach is proposed to address the rare word problem. In this study, we effectively combine these two approaches in the context of multimodal NMT and explore how we can take full advantage of pretrained word embeddings to better translate rare words. We report overall performance improvements of 1.24 METEOR and 2.49 BLEU and achieve an improvement of 7.67 F-score for rare word translation. | GEM-SciDuet-train-23#paper-1024#slide-8 | Loss Function | Model loss: Interpolation of each loss [Elliott and Kádár, 2017]
MT task: Max-margin with negative sampling [Lazaridou et al., 2015]
Shared space learning task: Max-margin [Elliott and Kádár, 2017]
NAACL SRW 2019, Minneapolis | []
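A sketch of the losses this slide names: the interpolation uses λ = 0.01 as in the paper, while the generic margin term and all names are ours.

def margin_loss(d_pos, d_neg, margin):
    # Generic max-margin term: keep the positive pair closer than the
    # negative pair by at least `margin` (used in both task losses).
    return max(0.0, margin + d_pos - d_neg)

def total_loss(loss_mt, loss_vis, lam=0.01):
    # J = lambda * J_T + (1 - lambda) * J_V  (interpolation of task losses)
    return lam * loss_mt + (1.0 - lam) * loss_vis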
GEM-SciDuet-train-23#paper-1024#slide-9 | 1024 | Multimodal Machine Translation with Embedding Prediction | GEM-SciDuet-train-23#paper-1024#slide-9 | Hubness Problem | Certain words (hubs) appear frequently in the neighbors of other words
Even for words that have entirely no relationship with the hubs
Prevent the embedding prediction model from searching for correct output words
Incorrectly output the hub word
NAACL SRW 2019, Minneapolis | []
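The search step that hubs disrupt is the nearest-neighbor lookup over target embeddings; a sketch, with cosine similarity as an assumed choice of distance and all names ours.

import numpy as np

def nearest_word(pred, emb_matrix, vocab):
    # pred: predicted embedding; emb_matrix: (vocab_size, dim) target
    # embeddings; vocab: list of target words. A hub word would win
    # this argmax for many unrelated predictions.
    sims = (emb_matrix @ pred) / (
        np.linalg.norm(emb_matrix, axis=1) * np.linalg.norm(pred))
    return vocab[int(np.argmax(sims))]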
GEM-SciDuet-train-23#paper-1024#slide-10 | 1024 | Multimodal Machine Translation with Embedding Prediction | GEM-SciDuet-train-23#paper-1024#slide-10 | All but the Top | Address hubness problem in other NLP tasks
Debias a pretrained word embedding based on its global bias
1. Shift all word embeddings to make their mean vector into a zero vector
2. Subtract top 5 PCA components from each shifted word embedding
Applied to pretrained word embeddings for encoder/decoder
NAACL SRW 2019, Minneapolis | []
GEM-SciDuet-train-23#paper-1024#slide-11 | 1024 | Multimodal Machine Translation with Embedding Prediction | Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for translating rare words. In NMT, pretrained word embeddings have been shown to improve NMT of low-resource domains, and a search-based approach is proposed to address the rare word problem. In this study, we effectively combine these two approaches in the context of multimodal NMT and explore how we can take full advantage of pretrained word embeddings to better translate rare words. We report overall performance improvements of 1.24 METEOR and 2.49 BLEU and achieve an improvement of 7.67 F-score for rare word translation. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"paper_content_text": [
"Introduction In multimodal machine translation, a target sentence is translated from a source sentence together with related nonlinguistic information such as visual information.",
"Recently, neural machine translation (NMT) has superseded traditional statistical machine translation owing to the introduction of the attentional encoder-decoder model, in which machine translation is treated as a sequence-tosequence learning problem and is trained to pay attention to the source sentence while decoding (Bahdanau et al., 2015) .",
"Most previous studies on multimodal machine translation are classified into two categories: visual feature adaptation and data augmentation.",
"In visual feature adaptation, multitask learning (Elliott and Kádár, 2017) and feature integration architecture (Caglayan et al., 2017a; Calixto et al., 2017) are proposed to improve neural network models.",
"Data augmentation aims to deal with the fact that the size of available datasets for multimodal translation is quite small.",
"To alleviate this problem, parallel corpora without a visual source (Elliott and Kádár, 2017; Grönroos et al., 2018) and pseudo-parallel corpora obtained using back-translation (Helcl et al., 2018) are used as additional learning resources.",
"Due to the availability of parallel corpora for NMT, Qi et al.",
"(2018) suggested that initializing the encoder with pretrained word embedding improves the translation performance in low-resource language pairs.",
"Recently, Kumar and Tsvetkov (2019) proposed an NMT model that predicts the embedding of output words and searches for the output word instead of calculating the probability using the softmax function.",
"This model performed as well as conventional NMT, and it significantly improved the translation accuracy for rare words.",
"In this study, we introduce an NMT model with embedding prediction for multimodal machine translation that fully uses pretrained embeddings to improve the translation accuracy for rare words.",
"The main contributions of this study are as follows: 1.",
"We propose a novel multimodal machine translation model with embedding prediction and explore various settings to take full advantage of word embeddings.",
"2.",
"We show that pretrained word embeddings improve the model performance, especially when translating rare words.",
"Multimodal Machine Translation with Embedding Prediction We integrate an embedding prediction framework (Kumar and Tsvetkov, 2019) with the multimodal machine translation model and take advantage of pretrained word embeddings.",
"To highlight the effect of pretrained word embeddings and embedding prediction architecture, we adopt IMAGINA-TION (Elliott and Kádár, 2017) as a simple multimodal baseline.",
"IMAGINATION jointly learns machine translation and visual latent space models.",
"It is based on a conventional NMT model for a machine translation task.",
"In latent space learning, a source sentence and the paired image are mapped closely in the latent space.",
"We use the latent space learning model as it is, except for the preprocessing of images.",
"The models for each task share the same textual encoder in a multitask scenario.",
"The loss function for multitask learning is the linear interpolation of loss functions for each task.",
"J = λJ T (θ, ϕ T ) + (1 − λ)J V (θ, ϕ V ) (1) where θ is the parameter of the shared encoder; ϕ T and ϕ V are parameters of the machine translation model and latent space model, respectively; and λ is the interpolation coefficient 1 .",
"Neural Machine Translation with Embedding Prediction The machine translation part in our proposed model is an extension of Bahdanau et al.",
"(2015) .",
"However, instead of using the probability of each word in the decoder, it searches for output words based on their similarity with word embeddings.",
"Once the model predicts a word embedding, its nearest neighbor in the pretrained word embeddings is selected as the system output.",
"e j = tanh(W o s j + b o ) (2) y j = argmin w∈V {d(ê j , e(w))} (3) where s j ,ê j , andŷ j are the hidden state of the decoder, predicted embedding, and system output, respectively, for each timestep j in the decoding process.",
"e(w) is the pretrained word embedding for a target word w. d is a distance function that is used to calculate the word similarity.",
"W o and b o are parameters of the output layer.",
"We adopt margin-based ranking loss (Lazaridou et al., 2015) as the loss function 1 We use λ = 0.01 in the experiment.",
"of the machine translation model.",
"J T (θ, ϕ T ) = M ∑ j max{0, γ + d(ê j , e(w − j )) −d(ê j , e(y j ))} (4) w − j = argmax w∈V {d(ê j , e(w)) − d(ê j , e(y j )) (5) where M is the length of a target sentence and γ is the margin 2 .",
"w − j is a negative sample that is close to the predicted embedding and far from the gold embedding as measuring using d. Pretrained word embeddings are also used to initialize the embedding layers of the encoder and decoder, and the output layer of the decoder.",
"The embedding layer of the encoder is updated during training, and the embedding layer of the decoder is fixed to the initial value.",
"Visual Latent Space Learning The decoder of this model calculates the average vector over the hidden states h i in the encoder and maps it to the final vectorv in the latent space.",
"v = tanh(W v · 1 N N ∑ i h i ) (6) where N is the length of an input sentence and W v ∈ R N * M is learned parameter of the model.",
"We use max margin loss as the loss function; it learns to make corresponding latent vectors of a source sentence and the paired image closer.",
"J V (θ, ϕ V ) = ∑ v ′ ̸ =v max{0, α+d(v, v ′ )−d(v, v)} (7) where v is the latent vector of the paired image; v ′ , the image vector for other examples; and α, the margin that adjusts the sparseness of each vector in the latent space 3 .",
"Experiment Dataset We train, validate, and test our model with the Multi30k (Elliott et al., 2016) dataset published in the WMT17 Shared Task.",
"We choose French as the source language and English as the target one.",
"The vocabulary size of both the source and the target languages is 10,000.",
"Following Kumar and Tsvetkov (2019) , byte pair encoding (Sennrich et al., 2016) is not applied.",
"The source and target sentences are preprocessed with lower-casing, tokenizing and normalizing the punctuation.",
"Visual features are extracted using pretrained ResNet (He et al., 2016) .",
"Specifically, we encode all images in Multi30k with ResNet-50 and pick out the hidden state in the pool5 layer as a 2,048dimension visual feature.",
"We calculate the centroid of visual features in the training dataset as the bias vector and subtract the bias vector from all visual features in the training, validation and test datasets.",
"Model The model is implemented using nmtpytorch toolkit v3.0.0 4 (Caglayan et al., 2017b) .",
"The shared encoder has 256 hidden dimensions, and therefore the bidirectional GRU has 512 dimensions.",
"The decoder in NMT model has 256 hidden dimension.",
"The input word embedding size and output vector size is 300 each.",
"The latent space vector size is 2,048.",
"We used the Adam optimizer with learning rate of 0.0004.",
"The gradient norm is clipped to 1.0.",
"The dropout rate is 0.3.",
"BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) are used as performance metrics.",
"We also evaluated the models using the F-score of each word; this shows how accurately each word is translated into target sentences, as was proposed in Kumar and Tsvetkov (2019) .",
"The F-score is calculated as the harmonic mean of the precision (fraction of produced sentences with a word that is in the references sentences) and the recall (fraction of reference sentences with a word that is in model outputs).",
"We ran the experiment three times with different random seeds and obtained the mean and variance for each model.",
"To clarify the effect of pretrained embeddings on machine translation, we also initialized the encoder and decoder of our models with random values instead of pretrained embeddings, and investigated the effect of fixing decoder embeddings.",
"Word Embedding We use publicly available pretrained Fast-Text (Bojanowski et al., 2017) embeddings (Grave et al., 2018 trained on Wikipedia and Common Crawl using the CBOW algorithm, and the dimension is 300.",
"The embedding for unknown words is calculated as the average embedding over words that are a part of pretrained embeddings but are not included in the vocabularies.",
"Both the target and the source embeddings are preprocessed according to Mu and Viswanath (2018) , in which all embeddings are debiased to make the average embedding into a zero vector and the top five principal components are subtracted for each embedding.",
"Table 1 shows the overall performance of the proposed and baseline models.",
"Compared with randomly initialized models, our model outperforms the text-only baseline by +2.49 BLEU and +1.24 METEOR, and the multimodal baseline by +2.31 BLEU and +1.09 METEOR, respectively.",
"While pretrained embeddings improve NMT/IMAGINATION models as well, the improved models are still beyond our model.",
"Table 2 shows the results of ablation experiments of the initialization and fine-tuning methods.",
"The pretrained embedding models outperform other models by up to +2.77 BLEU and +1.37 METEOR.",
"Results Discussion Rare Words Our model shows a great improvement for low-frequency words.",
"Figure 1 shows a variety of F-score according to the word frequency in the training corpus.",
"Whereas IMAG-INATION improves the translation accuracy uniformly, our model shows substantial improvement for rare words.",
"Word Embeddings Furthermore, we found that decoder embeddings must be fixed to improve multimodal machine translation with embedding prediction.",
"When we allow fine-tuning on the embedding layer, the performance drops below the baseline.",
"It seems that fine-tuning embeddings in NMT with embedding prediction makes the model search for common words more than expected, thus preventing it from predicting rare words.",
"More interestingly, using pretrained FastText embeddings on the decoder rather than the encoder improves performance.",
"This finding is different from Qi et al.",
"(2018) , in which only the encoder benefits from pretrained embeddings.",
"Compared with the model initialized with a random value, initializing the decoder with the embedding results in an increase of +1.80 BLEU; in contrast, initializing the encoder results in an increase of only +0.11 BLEU.",
"This is caused by the multitask learning model that trains the encoder with images and takes it away from what the embedding prediction model wants to learn from the sentences.",
"Visual Feature We also investigated the effect of images and its preprocessing in NMT with embedding prediction ( Table 3 ).",
"The interesting result is that multitask learning with raw images would not help the predictive model.",
"Debiasing images is an essential preprocessing for NMT with embedding prediction to use images effectively in multitask learning scenario.",
"Translation Examples In Table 4 , we show French-English translations generated by different models.",
"In the left example, our proposed model correctly translates \"voûte\" into \"archway\" (occurs five times in the training set), Although the baseline model translates it to its synonym having higher frequency (nine times for \"arch\" and 12 times for \"monument\").",
"At the same time, our outputs tend to be less fluent for long sentences.",
"The right example shows that our model translates some words (\"patterned\" and \"carpet\") more concisely; however, it generates a less fluent sentence than the baseline.",
"Related Works Most studies on multimodal machine translation are divided into two categories: visual feature adaptation and data augmentation.",
"First, in visual feature adaptation, visual features are extracted using image processing techniques and then integrated into a machine translation model.",
"In contrast, most multitask learning models use latent space learning as their auxiliary task.",
"Elliott and Kádár (2017) proposed the IMAGINATION model that learns to construct the corresponding visual feature from the textual hidden states of a source sentence.",
"The visual model shares its encoder with the machine translation model; this helps in improving the textual encoder.",
"Second, in data augmentation, parallel corpora without images are widely used as additional train-Image Source un homme en vélo pédale devant une voûte .",
"quatre hommes , dont trois portent des kippas , sont assis sur un tapisà motifs bleu et vert olive .",
"Reference a man on a bicycle pedals through an archway .",
"four men , three of whom are wearing prayer caps , are sitting on a blue and olive green patterned mat .",
"NMT a man on a bicycle pedal past an arch .",
"four men , three of whom are wearing aprons , are sitting on a blue and green speedo carpet .",
"IMAG+ a man on a bicycle pedals outside a monument .",
"four men , three of them are wearing alaska , are sitting on a blue patterned carpet and green green seating .",
"Ours a man on a bicycle pedals in front of a archway .",
"four men , three are wearing these are wearing these are sitting on a blue and green patterned mat .",
"ing data.",
"Grönroos et al.",
"(2018) trained their multimodal model with parallel corpora and achieved state-of-the-art performance in the WMT 2018.",
"However, the use of monolingual corpora has seldom been studied in multimodal machine translation.",
"Our study proposes using word embeddings that are pretrained on monolingual corpora.",
"Conclusion We have proposed a multimodal machine translation model with embedding prediction and showed that pretrained word embeddings improve the performance in multimodal translation tasks, especially when translating rare words.",
"In the future, we will tailor the training corpora for embedding learning, especially for handling the embedding for unknown words in the context of multimodal machine translation.",
"We will also incorporate visual features into contextualized word embeddings."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3.1",
"3.2",
"3.3",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Multimodal Machine Translation with Embedding Prediction",
"Neural Machine Translation with Embedding Prediction",
"Visual Latent Space Learning",
"Dataset",
"Model",
"Word Embedding",
"Discussion",
"Related Works",
"Conclusion"
]
} | GEM-SciDuet-train-23#paper-1024#slide-11 | Implementation and Dataset | Multi30k (French to English)
Pretrained ResNet50 for visual encoder
Trained on Common Crawl and Wikipedia
Our code is here: https://github.com/toshohirasawa/nmtpytorch-emb-pred
NAACL SRW 2019, Minneapolis | Multi30k (French to English)
Pretrained ResNet50 for visual encoder
Trained on Common Crawl and Wikipedia
Our code is here: https://github.com/toshohirasawa/nmtpytorch-emb-pred
NAACL SRW 2019, Minneapolis | [] |
GEM-SciDuet-train-23#paper-1024#slide-12 | 1024 | Multimodal Machine Translation with Embedding Prediction | Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for translating rare words. In NMT, pretrained word embeddings have been shown to improve NMT of low-resource domains, and a search-based approach is proposed to address the rare word problem. In this study, we effectively combine these two approaches in the context of multimodal NMT and explore how we can take full advantage of pretrained word embeddings to better translate rare words. We report overall performance improvements of 1.24 METEOR and 2.49 BLEU and achieve an improvement of 7.67 F-score for rare word translation. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"paper_content_text": [
"Introduction In multimodal machine translation, a target sentence is translated from a source sentence together with related nonlinguistic information such as visual information.",
"Recently, neural machine translation (NMT) has superseded traditional statistical machine translation owing to the introduction of the attentional encoder-decoder model, in which machine translation is treated as a sequence-tosequence learning problem and is trained to pay attention to the source sentence while decoding (Bahdanau et al., 2015) .",
"Most previous studies on multimodal machine translation are classified into two categories: visual feature adaptation and data augmentation.",
"In visual feature adaptation, multitask learning (Elliott and Kádár, 2017) and feature integration architecture (Caglayan et al., 2017a; Calixto et al., 2017) are proposed to improve neural network models.",
"Data augmentation aims to deal with the fact that the size of available datasets for multimodal translation is quite small.",
"To alleviate this problem, parallel corpora without a visual source (Elliott and Kádár, 2017; Grönroos et al., 2018) and pseudo-parallel corpora obtained using back-translation (Helcl et al., 2018) are used as additional learning resources.",
"Due to the availability of parallel corpora for NMT, Qi et al.",
"(2018) suggested that initializing the encoder with pretrained word embedding improves the translation performance in low-resource language pairs.",
"Recently, Kumar and Tsvetkov (2019) proposed an NMT model that predicts the embedding of output words and searches for the output word instead of calculating the probability using the softmax function.",
"This model performed as well as conventional NMT, and it significantly improved the translation accuracy for rare words.",
"In this study, we introduce an NMT model with embedding prediction for multimodal machine translation that fully uses pretrained embeddings to improve the translation accuracy for rare words.",
"The main contributions of this study are as follows: 1.",
"We propose a novel multimodal machine translation model with embedding prediction and explore various settings to take full advantage of word embeddings.",
"2.",
"We show that pretrained word embeddings improve the model performance, especially when translating rare words.",
"Multimodal Machine Translation with Embedding Prediction We integrate an embedding prediction framework (Kumar and Tsvetkov, 2019) with the multimodal machine translation model and take advantage of pretrained word embeddings.",
"To highlight the effect of pretrained word embeddings and embedding prediction architecture, we adopt IMAGINA-TION (Elliott and Kádár, 2017) as a simple multimodal baseline.",
"IMAGINATION jointly learns machine translation and visual latent space models.",
"It is based on a conventional NMT model for a machine translation task.",
"In latent space learning, a source sentence and the paired image are mapped closely in the latent space.",
"We use the latent space learning model as it is, except for the preprocessing of images.",
"The models for each task share the same textual encoder in a multitask scenario.",
"The loss function for multitask learning is the linear interpolation of loss functions for each task.",
"J = λJ T (θ, ϕ T ) + (1 − λ)J V (θ, ϕ V ) (1) where θ is the parameter of the shared encoder; ϕ T and ϕ V are parameters of the machine translation model and latent space model, respectively; and λ is the interpolation coefficient 1 .",
"Neural Machine Translation with Embedding Prediction The machine translation part in our proposed model is an extension of Bahdanau et al.",
"(2015) .",
"However, instead of using the probability of each word in the decoder, it searches for output words based on their similarity with word embeddings.",
"Once the model predicts a word embedding, its nearest neighbor in the pretrained word embeddings is selected as the system output.",
"e j = tanh(W o s j + b o ) (2) y j = argmin w∈V {d(ê j , e(w))} (3) where s j ,ê j , andŷ j are the hidden state of the decoder, predicted embedding, and system output, respectively, for each timestep j in the decoding process.",
"e(w) is the pretrained word embedding for a target word w. d is a distance function that is used to calculate the word similarity.",
"W o and b o are parameters of the output layer.",
"We adopt margin-based ranking loss (Lazaridou et al., 2015) as the loss function 1 We use λ = 0.01 in the experiment.",
"of the machine translation model.",
"J T (θ, ϕ T ) = M ∑ j max{0, γ + d(ê j , e(w − j )) −d(ê j , e(y j ))} (4) w − j = argmax w∈V {d(ê j , e(w)) − d(ê j , e(y j )) (5) where M is the length of a target sentence and γ is the margin 2 .",
"w − j is a negative sample that is close to the predicted embedding and far from the gold embedding as measuring using d. Pretrained word embeddings are also used to initialize the embedding layers of the encoder and decoder, and the output layer of the decoder.",
"The embedding layer of the encoder is updated during training, and the embedding layer of the decoder is fixed to the initial value.",
"Visual Latent Space Learning The decoder of this model calculates the average vector over the hidden states h i in the encoder and maps it to the final vectorv in the latent space.",
"v = tanh(W v · 1 N N ∑ i h i ) (6) where N is the length of an input sentence and W v ∈ R N * M is learned parameter of the model.",
"We use max margin loss as the loss function; it learns to make corresponding latent vectors of a source sentence and the paired image closer.",
"J V (θ, ϕ V ) = ∑ v ′ ̸ =v max{0, α+d(v, v ′ )−d(v, v)} (7) where v is the latent vector of the paired image; v ′ , the image vector for other examples; and α, the margin that adjusts the sparseness of each vector in the latent space 3 .",
"Experiment Dataset We train, validate, and test our model with the Multi30k (Elliott et al., 2016) dataset published in the WMT17 Shared Task.",
"We choose French as the source language and English as the target one.",
"The vocabulary size of both the source and the target languages is 10,000.",
"Following Kumar and Tsvetkov (2019) , byte pair encoding (Sennrich et al., 2016) is not applied.",
"The source and target sentences are preprocessed with lower-casing, tokenizing and normalizing the punctuation.",
"Visual features are extracted using pretrained ResNet (He et al., 2016) .",
"Specifically, we encode all images in Multi30k with ResNet-50 and pick out the hidden state in the pool5 layer as a 2,048dimension visual feature.",
"We calculate the centroid of visual features in the training dataset as the bias vector and subtract the bias vector from all visual features in the training, validation and test datasets.",
"Model The model is implemented using nmtpytorch toolkit v3.0.0 4 (Caglayan et al., 2017b) .",
"The shared encoder has 256 hidden dimensions, and therefore the bidirectional GRU has 512 dimensions.",
"The decoder in NMT model has 256 hidden dimension.",
"The input word embedding size and output vector size is 300 each.",
"The latent space vector size is 2,048.",
"We used the Adam optimizer with learning rate of 0.0004.",
"The gradient norm is clipped to 1.0.",
"The dropout rate is 0.3.",
"BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) are used as performance metrics.",
"We also evaluated the models using the F-score of each word; this shows how accurately each word is translated into target sentences, as was proposed in Kumar and Tsvetkov (2019) .",
"The F-score is calculated as the harmonic mean of the precision (fraction of produced sentences with a word that is in the references sentences) and the recall (fraction of reference sentences with a word that is in model outputs).",
"We ran the experiment three times with different random seeds and obtained the mean and variance for each model.",
"To clarify the effect of pretrained embeddings on machine translation, we also initialized the encoder and decoder of our models with random values instead of pretrained embeddings, and investigated the effect of fixing decoder embeddings.",
"Word Embedding We use publicly available pretrained Fast-Text (Bojanowski et al., 2017) embeddings (Grave et al., 2018 trained on Wikipedia and Common Crawl using the CBOW algorithm, and the dimension is 300.",
"The embedding for unknown words is calculated as the average embedding over words that are a part of pretrained embeddings but are not included in the vocabularies.",
"Both the target and the source embeddings are preprocessed according to Mu and Viswanath (2018) , in which all embeddings are debiased to make the average embedding into a zero vector and the top five principal components are subtracted for each embedding.",
"Table 1 shows the overall performance of the proposed and baseline models.",
"Compared with randomly initialized models, our model outperforms the text-only baseline by +2.49 BLEU and +1.24 METEOR, and the multimodal baseline by +2.31 BLEU and +1.09 METEOR, respectively.",
"While pretrained embeddings improve NMT/IMAGINATION models as well, the improved models are still beyond our model.",
"Table 2 shows the results of ablation experiments of the initialization and fine-tuning methods.",
"The pretrained embedding models outperform other models by up to +2.77 BLEU and +1.37 METEOR.",
"Results Discussion Rare Words Our model shows a great improvement for low-frequency words.",
"Figure 1 shows a variety of F-score according to the word frequency in the training corpus.",
"Whereas IMAG-INATION improves the translation accuracy uniformly, our model shows substantial improvement for rare words.",
"Word Embeddings Furthermore, we found that decoder embeddings must be fixed to improve multimodal machine translation with embedding prediction.",
"When we allow fine-tuning on the embedding layer, the performance drops below the baseline.",
"It seems that fine-tuning embeddings in NMT with embedding prediction makes the model search for common words more than expected, thus preventing it from predicting rare words.",
"More interestingly, using pretrained FastText embeddings on the decoder rather than the encoder improves performance.",
"This finding is different from Qi et al.",
"(2018) , in which only the encoder benefits from pretrained embeddings.",
"Compared with the model initialized with a random value, initializing the decoder with the embedding results in an increase of +1.80 BLEU; in contrast, initializing the encoder results in an increase of only +0.11 BLEU.",
"This is caused by the multitask learning model that trains the encoder with images and takes it away from what the embedding prediction model wants to learn from the sentences.",
"Visual Feature We also investigated the effect of images and its preprocessing in NMT with embedding prediction ( Table 3 ).",
"The interesting result is that multitask learning with raw images would not help the predictive model.",
"Debiasing images is an essential preprocessing for NMT with embedding prediction to use images effectively in multitask learning scenario.",
"Translation Examples In Table 4 , we show French-English translations generated by different models.",
"In the left example, our proposed model correctly translates \"voûte\" into \"archway\" (occurs five times in the training set), Although the baseline model translates it to its synonym having higher frequency (nine times for \"arch\" and 12 times for \"monument\").",
"At the same time, our outputs tend to be less fluent for long sentences.",
"The right example shows that our model translates some words (\"patterned\" and \"carpet\") more concisely; however, it generates a less fluent sentence than the baseline.",
"Related Works Most studies on multimodal machine translation are divided into two categories: visual feature adaptation and data augmentation.",
"First, in visual feature adaptation, visual features are extracted using image processing techniques and then integrated into a machine translation model.",
"In contrast, most multitask learning models use latent space learning as their auxiliary task.",
"Elliott and Kádár (2017) proposed the IMAGINATION model that learns to construct the corresponding visual feature from the textual hidden states of a source sentence.",
"The visual model shares its encoder with the machine translation model; this helps in improving the textual encoder.",
"Second, in data augmentation, parallel corpora without images are widely used as additional train-Image Source un homme en vélo pédale devant une voûte .",
"quatre hommes , dont trois portent des kippas , sont assis sur un tapisà motifs bleu et vert olive .",
"Reference a man on a bicycle pedals through an archway .",
"four men , three of whom are wearing prayer caps , are sitting on a blue and olive green patterned mat .",
"NMT a man on a bicycle pedal past an arch .",
"four men , three of whom are wearing aprons , are sitting on a blue and green speedo carpet .",
"IMAG+ a man on a bicycle pedals outside a monument .",
"four men , three of them are wearing alaska , are sitting on a blue patterned carpet and green green seating .",
"Ours a man on a bicycle pedals in front of a archway .",
"four men , three are wearing these are wearing these are sitting on a blue and green patterned mat .",
"ing data.",
"Grönroos et al.",
"(2018) trained their multimodal model with parallel corpora and achieved state-of-the-art performance in the WMT 2018.",
"However, the use of monolingual corpora has seldom been studied in multimodal machine translation.",
"Our study proposes using word embeddings that are pretrained on monolingual corpora.",
"Conclusion We have proposed a multimodal machine translation model with embedding prediction and showed that pretrained word embeddings improve the performance in multimodal translation tasks, especially when translating rare words.",
"In the future, we will tailor the training corpora for embedding learning, especially for handling the embedding for unknown words in the context of multimodal machine translation.",
"We will also incorporate visual features into contextualized word embeddings."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3.1",
"3.2",
"3.3",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Multimodal Machine Translation with Embedding Prediction",
"Neural Machine Translation with Embedding Prediction",
"Visual Latent Space Learning",
"Dataset",
"Model",
"Word Embedding",
"Discussion",
"Related Works",
"Conclusion"
]
} | GEM-SciDuet-train-23#paper-1024#slide-12 | Hyper Parameters | dimension of hidden state: 256
RNN type: GRU; dimension of word embedding: 300; dimension of shared space: 2048
NAACL SRW 2019, Minneapolis | dimension of hidden state: 256
RNN type: GRU; dimension of word embedding: 300; dimension of shared space: 2048
NAACL SRW 2019, Minneapolis | [] |
GEM-SciDuet-train-23#paper-1024#slide-13 | 1024 | Multimodal Machine Translation with Embedding Prediction | Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for translating rare words. In NMT, pretrained word embeddings have been shown to improve NMT of low-resource domains, and a search-based approach is proposed to address the rare word problem. In this study, we effectively combine these two approaches in the context of multimodal NMT and explore how we can take full advantage of pretrained word embeddings to better translate rare words. We report overall performance improvements of 1.24 METEOR and 2.49 BLEU and achieve an improvement of 7.67 F-score for rare word translation. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"paper_content_text": [
"Introduction In multimodal machine translation, a target sentence is translated from a source sentence together with related nonlinguistic information such as visual information.",
"Recently, neural machine translation (NMT) has superseded traditional statistical machine translation owing to the introduction of the attentional encoder-decoder model, in which machine translation is treated as a sequence-tosequence learning problem and is trained to pay attention to the source sentence while decoding (Bahdanau et al., 2015) .",
"Most previous studies on multimodal machine translation are classified into two categories: visual feature adaptation and data augmentation.",
"In visual feature adaptation, multitask learning (Elliott and Kádár, 2017) and feature integration architecture (Caglayan et al., 2017a; Calixto et al., 2017) are proposed to improve neural network models.",
"Data augmentation aims to deal with the fact that the size of available datasets for multimodal translation is quite small.",
"To alleviate this problem, parallel corpora without a visual source (Elliott and Kádár, 2017; Grönroos et al., 2018) and pseudo-parallel corpora obtained using back-translation (Helcl et al., 2018) are used as additional learning resources.",
"Due to the availability of parallel corpora for NMT, Qi et al.",
"(2018) suggested that initializing the encoder with pretrained word embedding improves the translation performance in low-resource language pairs.",
"Recently, Kumar and Tsvetkov (2019) proposed an NMT model that predicts the embedding of output words and searches for the output word instead of calculating the probability using the softmax function.",
"This model performed as well as conventional NMT, and it significantly improved the translation accuracy for rare words.",
"In this study, we introduce an NMT model with embedding prediction for multimodal machine translation that fully uses pretrained embeddings to improve the translation accuracy for rare words.",
"The main contributions of this study are as follows: 1.",
"We propose a novel multimodal machine translation model with embedding prediction and explore various settings to take full advantage of word embeddings.",
"2.",
"We show that pretrained word embeddings improve the model performance, especially when translating rare words.",
"Multimodal Machine Translation with Embedding Prediction We integrate an embedding prediction framework (Kumar and Tsvetkov, 2019) with the multimodal machine translation model and take advantage of pretrained word embeddings.",
"To highlight the effect of pretrained word embeddings and embedding prediction architecture, we adopt IMAGINA-TION (Elliott and Kádár, 2017) as a simple multimodal baseline.",
"IMAGINATION jointly learns machine translation and visual latent space models.",
"It is based on a conventional NMT model for a machine translation task.",
"In latent space learning, a source sentence and the paired image are mapped closely in the latent space.",
"We use the latent space learning model as it is, except for the preprocessing of images.",
"The models for each task share the same textual encoder in a multitask scenario.",
"The loss function for multitask learning is the linear interpolation of loss functions for each task.",
"J = λJ T (θ, ϕ T ) + (1 − λ)J V (θ, ϕ V ) (1) where θ is the parameter of the shared encoder; ϕ T and ϕ V are parameters of the machine translation model and latent space model, respectively; and λ is the interpolation coefficient 1 .",
"Neural Machine Translation with Embedding Prediction The machine translation part in our proposed model is an extension of Bahdanau et al.",
"(2015) .",
"However, instead of using the probability of each word in the decoder, it searches for output words based on their similarity with word embeddings.",
"Once the model predicts a word embedding, its nearest neighbor in the pretrained word embeddings is selected as the system output.",
"e j = tanh(W o s j + b o ) (2) y j = argmin w∈V {d(ê j , e(w))} (3) where s j ,ê j , andŷ j are the hidden state of the decoder, predicted embedding, and system output, respectively, for each timestep j in the decoding process.",
"e(w) is the pretrained word embedding for a target word w. d is a distance function that is used to calculate the word similarity.",
"W o and b o are parameters of the output layer.",
"We adopt margin-based ranking loss (Lazaridou et al., 2015) as the loss function 1 We use λ = 0.01 in the experiment.",
"of the machine translation model.",
"J T (θ, ϕ T ) = M ∑ j max{0, γ + d(ê j , e(w − j )) −d(ê j , e(y j ))} (4) w − j = argmax w∈V {d(ê j , e(w)) − d(ê j , e(y j )) (5) where M is the length of a target sentence and γ is the margin 2 .",
"w − j is a negative sample that is close to the predicted embedding and far from the gold embedding as measuring using d. Pretrained word embeddings are also used to initialize the embedding layers of the encoder and decoder, and the output layer of the decoder.",
"The embedding layer of the encoder is updated during training, and the embedding layer of the decoder is fixed to the initial value.",
"Visual Latent Space Learning The decoder of this model calculates the average vector over the hidden states h i in the encoder and maps it to the final vectorv in the latent space.",
"v = tanh(W v · 1 N N ∑ i h i ) (6) where N is the length of an input sentence and W v ∈ R N * M is learned parameter of the model.",
"We use max margin loss as the loss function; it learns to make corresponding latent vectors of a source sentence and the paired image closer.",
"J V (θ, ϕ V ) = ∑ v ′ ̸ =v max{0, α+d(v, v ′ )−d(v, v)} (7) where v is the latent vector of the paired image; v ′ , the image vector for other examples; and α, the margin that adjusts the sparseness of each vector in the latent space 3 .",
"Experiment Dataset We train, validate, and test our model with the Multi30k (Elliott et al., 2016) dataset published in the WMT17 Shared Task.",
"We choose French as the source language and English as the target one.",
"The vocabulary size of both the source and the target languages is 10,000.",
"Following Kumar and Tsvetkov (2019) , byte pair encoding (Sennrich et al., 2016) is not applied.",
"The source and target sentences are preprocessed with lower-casing, tokenizing and normalizing the punctuation.",
"Visual features are extracted using pretrained ResNet (He et al., 2016) .",
"Specifically, we encode all images in Multi30k with ResNet-50 and pick out the hidden state in the pool5 layer as a 2,048dimension visual feature.",
"We calculate the centroid of visual features in the training dataset as the bias vector and subtract the bias vector from all visual features in the training, validation and test datasets.",
"Model The model is implemented using nmtpytorch toolkit v3.0.0 4 (Caglayan et al., 2017b) .",
"The shared encoder has 256 hidden dimensions, and therefore the bidirectional GRU has 512 dimensions.",
"The decoder in NMT model has 256 hidden dimension.",
"The input word embedding size and output vector size is 300 each.",
"The latent space vector size is 2,048.",
"We used the Adam optimizer with learning rate of 0.0004.",
"The gradient norm is clipped to 1.0.",
"The dropout rate is 0.3.",
"BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) are used as performance metrics.",
"We also evaluated the models using the F-score of each word; this shows how accurately each word is translated into target sentences, as was proposed in Kumar and Tsvetkov (2019) .",
"The F-score is calculated as the harmonic mean of the precision (fraction of produced sentences with a word that is in the references sentences) and the recall (fraction of reference sentences with a word that is in model outputs).",
"We ran the experiment three times with different random seeds and obtained the mean and variance for each model.",
"To clarify the effect of pretrained embeddings on machine translation, we also initialized the encoder and decoder of our models with random values instead of pretrained embeddings, and investigated the effect of fixing decoder embeddings.",
"Word Embedding We use publicly available pretrained Fast-Text (Bojanowski et al., 2017) embeddings (Grave et al., 2018 trained on Wikipedia and Common Crawl using the CBOW algorithm, and the dimension is 300.",
"The embedding for unknown words is calculated as the average embedding over words that are a part of pretrained embeddings but are not included in the vocabularies.",
"Both the target and the source embeddings are preprocessed according to Mu and Viswanath (2018) , in which all embeddings are debiased to make the average embedding into a zero vector and the top five principal components are subtracted for each embedding.",
"Table 1 shows the overall performance of the proposed and baseline models.",
"Compared with randomly initialized models, our model outperforms the text-only baseline by +2.49 BLEU and +1.24 METEOR, and the multimodal baseline by +2.31 BLEU and +1.09 METEOR, respectively.",
"While pretrained embeddings improve NMT/IMAGINATION models as well, the improved models are still beyond our model.",
"Table 2 shows the results of ablation experiments of the initialization and fine-tuning methods.",
"The pretrained embedding models outperform other models by up to +2.77 BLEU and +1.37 METEOR.",
"Results Discussion Rare Words Our model shows a great improvement for low-frequency words.",
"Figure 1 shows a variety of F-score according to the word frequency in the training corpus.",
"Whereas IMAG-INATION improves the translation accuracy uniformly, our model shows substantial improvement for rare words.",
"Word Embeddings Furthermore, we found that decoder embeddings must be fixed to improve multimodal machine translation with embedding prediction.",
"When we allow fine-tuning on the embedding layer, the performance drops below the baseline.",
"It seems that fine-tuning embeddings in NMT with embedding prediction makes the model search for common words more than expected, thus preventing it from predicting rare words.",
"More interestingly, using pretrained FastText embeddings on the decoder rather than the encoder improves performance.",
"This finding is different from Qi et al.",
"(2018) , in which only the encoder benefits from pretrained embeddings.",
"Compared with the model initialized with a random value, initializing the decoder with the embedding results in an increase of +1.80 BLEU; in contrast, initializing the encoder results in an increase of only +0.11 BLEU.",
"This is caused by the multitask learning model that trains the encoder with images and takes it away from what the embedding prediction model wants to learn from the sentences.",
"Visual Feature We also investigated the effect of images and its preprocessing in NMT with embedding prediction ( Table 3 ).",
"The interesting result is that multitask learning with raw images would not help the predictive model.",
"Debiasing images is an essential preprocessing for NMT with embedding prediction to use images effectively in multitask learning scenario.",
"Translation Examples In Table 4 , we show French-English translations generated by different models.",
"In the left example, our proposed model correctly translates \"voûte\" into \"archway\" (occurs five times in the training set), Although the baseline model translates it to its synonym having higher frequency (nine times for \"arch\" and 12 times for \"monument\").",
"At the same time, our outputs tend to be less fluent for long sentences.",
"The right example shows that our model translates some words (\"patterned\" and \"carpet\") more concisely; however, it generates a less fluent sentence than the baseline.",
"Related Works Most studies on multimodal machine translation are divided into two categories: visual feature adaptation and data augmentation.",
"First, in visual feature adaptation, visual features are extracted using image processing techniques and then integrated into a machine translation model.",
"In contrast, most multitask learning models use latent space learning as their auxiliary task.",
"Elliott and Kádár (2017) proposed the IMAGINATION model that learns to construct the corresponding visual feature from the textual hidden states of a source sentence.",
"The visual model shares its encoder with the machine translation model; this helps in improving the textual encoder.",
"Second, in data augmentation, parallel corpora without images are widely used as additional train-Image Source un homme en vélo pédale devant une voûte .",
"quatre hommes , dont trois portent des kippas , sont assis sur un tapisà motifs bleu et vert olive .",
"Reference a man on a bicycle pedals through an archway .",
"four men , three of whom are wearing prayer caps , are sitting on a blue and olive green patterned mat .",
"NMT a man on a bicycle pedal past an arch .",
"four men , three of whom are wearing aprons , are sitting on a blue and green speedo carpet .",
"IMAG+ a man on a bicycle pedals outside a monument .",
"four men , three of them are wearing alaska , are sitting on a blue patterned carpet and green green seating .",
"Ours a man on a bicycle pedals in front of a archway .",
"four men , three are wearing these are wearing these are sitting on a blue and green patterned mat .",
"ing data.",
"Grönroos et al.",
"(2018) trained their multimodal model with parallel corpora and achieved state-of-the-art performance in the WMT 2018.",
"However, the use of monolingual corpora has seldom been studied in multimodal machine translation.",
"Our study proposes using word embeddings that are pretrained on monolingual corpora.",
"Conclusion We have proposed a multimodal machine translation model with embedding prediction and showed that pretrained word embeddings improve the performance in multimodal translation tasks, especially when translating rare words.",
"In the future, we will tailor the training corpora for embedding learning, especially for handling the embedding for unknown words in the context of multimodal machine translation.",
"We will also incorporate visual features into contextualized word embeddings."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3.1",
"3.2",
"3.3",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Multimodal Machine Translation with Embedding Prediction",
"Neural Machine Translation with Embedding Prediction",
"Visual Latent Space Learning",
"Dataset",
"Model",
"Word Embedding",
"Discussion",
"Related Works",
"Conclusion"
]
} | GEM-SciDuet-train-23#paper-1024#slide-13 | Word level F1 score | Frequency in training data
NAACL SRW 2019, Minneapolis | Frequency in training data
NAACL SRW 2019, Minneapolis | [] |
GEM-SciDuet-train-23#paper-1024#slide-14 | 1024 | Multimodal Machine Translation with Embedding Prediction | Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for translating rare words. In NMT, pretrained word embeddings have been shown to improve NMT of low-resource domains, and a search-based approach is proposed to address the rare word problem. In this study, we effectively combine these two approaches in the context of multimodal NMT and explore how we can take full advantage of pretrained word embeddings to better translate rare words. We report overall performance improvements of 1.24 METEOR and 2.49 BLEU and achieve an improvement of 7.67 F-score for rare word translation. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"paper_content_text": [
"Introduction In multimodal machine translation, a target sentence is translated from a source sentence together with related nonlinguistic information such as visual information.",
"Recently, neural machine translation (NMT) has superseded traditional statistical machine translation owing to the introduction of the attentional encoder-decoder model, in which machine translation is treated as a sequence-tosequence learning problem and is trained to pay attention to the source sentence while decoding (Bahdanau et al., 2015) .",
"Most previous studies on multimodal machine translation are classified into two categories: visual feature adaptation and data augmentation.",
"In visual feature adaptation, multitask learning (Elliott and Kádár, 2017) and feature integration architecture (Caglayan et al., 2017a; Calixto et al., 2017) are proposed to improve neural network models.",
"Data augmentation aims to deal with the fact that the size of available datasets for multimodal translation is quite small.",
"To alleviate this problem, parallel corpora without a visual source (Elliott and Kádár, 2017; Grönroos et al., 2018) and pseudo-parallel corpora obtained using back-translation (Helcl et al., 2018) are used as additional learning resources.",
"Due to the availability of parallel corpora for NMT, Qi et al.",
"(2018) suggested that initializing the encoder with pretrained word embedding improves the translation performance in low-resource language pairs.",
"Recently, Kumar and Tsvetkov (2019) proposed an NMT model that predicts the embedding of output words and searches for the output word instead of calculating the probability using the softmax function.",
"This model performed as well as conventional NMT, and it significantly improved the translation accuracy for rare words.",
"In this study, we introduce an NMT model with embedding prediction for multimodal machine translation that fully uses pretrained embeddings to improve the translation accuracy for rare words.",
"The main contributions of this study are as follows: 1.",
"We propose a novel multimodal machine translation model with embedding prediction and explore various settings to take full advantage of word embeddings.",
"2.",
"We show that pretrained word embeddings improve the model performance, especially when translating rare words.",
"Multimodal Machine Translation with Embedding Prediction We integrate an embedding prediction framework (Kumar and Tsvetkov, 2019) with the multimodal machine translation model and take advantage of pretrained word embeddings.",
"To highlight the effect of pretrained word embeddings and embedding prediction architecture, we adopt IMAGINA-TION (Elliott and Kádár, 2017) as a simple multimodal baseline.",
"IMAGINATION jointly learns machine translation and visual latent space models.",
"It is based on a conventional NMT model for a machine translation task.",
"In latent space learning, a source sentence and the paired image are mapped closely in the latent space.",
"We use the latent space learning model as it is, except for the preprocessing of images.",
"The models for each task share the same textual encoder in a multitask scenario.",
"The loss function for multitask learning is the linear interpolation of loss functions for each task.",
"J = λJ T (θ, ϕ T ) + (1 − λ)J V (θ, ϕ V ) (1) where θ is the parameter of the shared encoder; ϕ T and ϕ V are parameters of the machine translation model and latent space model, respectively; and λ is the interpolation coefficient 1 .",
"Neural Machine Translation with Embedding Prediction The machine translation part in our proposed model is an extension of Bahdanau et al.",
"(2015) .",
"However, instead of using the probability of each word in the decoder, it searches for output words based on their similarity with word embeddings.",
"Once the model predicts a word embedding, its nearest neighbor in the pretrained word embeddings is selected as the system output.",
"e j = tanh(W o s j + b o ) (2) y j = argmin w∈V {d(ê j , e(w))} (3) where s j ,ê j , andŷ j are the hidden state of the decoder, predicted embedding, and system output, respectively, for each timestep j in the decoding process.",
"e(w) is the pretrained word embedding for a target word w. d is a distance function that is used to calculate the word similarity.",
"W o and b o are parameters of the output layer.",
"We adopt margin-based ranking loss (Lazaridou et al., 2015) as the loss function 1 We use λ = 0.01 in the experiment.",
"of the machine translation model.",
"J T (θ, ϕ T ) = M ∑ j max{0, γ + d(ê j , e(w − j )) −d(ê j , e(y j ))} (4) w − j = argmax w∈V {d(ê j , e(w)) − d(ê j , e(y j )) (5) where M is the length of a target sentence and γ is the margin 2 .",
"w − j is a negative sample that is close to the predicted embedding and far from the gold embedding as measuring using d. Pretrained word embeddings are also used to initialize the embedding layers of the encoder and decoder, and the output layer of the decoder.",
"The embedding layer of the encoder is updated during training, and the embedding layer of the decoder is fixed to the initial value.",
"Visual Latent Space Learning The decoder of this model calculates the average vector over the hidden states h i in the encoder and maps it to the final vectorv in the latent space.",
"v = tanh(W v · 1 N N ∑ i h i ) (6) where N is the length of an input sentence and W v ∈ R N * M is learned parameter of the model.",
"We use max margin loss as the loss function; it learns to make corresponding latent vectors of a source sentence and the paired image closer.",
"J V (θ, ϕ V ) = ∑ v ′ ̸ =v max{0, α+d(v, v ′ )−d(v, v)} (7) where v is the latent vector of the paired image; v ′ , the image vector for other examples; and α, the margin that adjusts the sparseness of each vector in the latent space 3 .",
"Experiment Dataset We train, validate, and test our model with the Multi30k (Elliott et al., 2016) dataset published in the WMT17 Shared Task.",
"We choose French as the source language and English as the target one.",
"The vocabulary size of both the source and the target languages is 10,000.",
"Following Kumar and Tsvetkov (2019) , byte pair encoding (Sennrich et al., 2016) is not applied.",
"The source and target sentences are preprocessed with lower-casing, tokenizing and normalizing the punctuation.",
"Visual features are extracted using pretrained ResNet (He et al., 2016) .",
"Specifically, we encode all images in Multi30k with ResNet-50 and pick out the hidden state in the pool5 layer as a 2,048dimension visual feature.",
"We calculate the centroid of visual features in the training dataset as the bias vector and subtract the bias vector from all visual features in the training, validation and test datasets.",
"Model The model is implemented using nmtpytorch toolkit v3.0.0 4 (Caglayan et al., 2017b) .",
"The shared encoder has 256 hidden dimensions, and therefore the bidirectional GRU has 512 dimensions.",
"The decoder in NMT model has 256 hidden dimension.",
"The input word embedding size and output vector size is 300 each.",
"The latent space vector size is 2,048.",
"We used the Adam optimizer with learning rate of 0.0004.",
"The gradient norm is clipped to 1.0.",
"The dropout rate is 0.3.",
"BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) are used as performance metrics.",
"We also evaluated the models using the F-score of each word; this shows how accurately each word is translated into target sentences, as was proposed in Kumar and Tsvetkov (2019) .",
"The F-score is calculated as the harmonic mean of the precision (fraction of produced sentences with a word that is in the references sentences) and the recall (fraction of reference sentences with a word that is in model outputs).",
"We ran the experiment three times with different random seeds and obtained the mean and variance for each model.",
"To clarify the effect of pretrained embeddings on machine translation, we also initialized the encoder and decoder of our models with random values instead of pretrained embeddings, and investigated the effect of fixing decoder embeddings.",
"Word Embedding We use publicly available pretrained Fast-Text (Bojanowski et al., 2017) embeddings (Grave et al., 2018 trained on Wikipedia and Common Crawl using the CBOW algorithm, and the dimension is 300.",
"The embedding for unknown words is calculated as the average embedding over words that are a part of pretrained embeddings but are not included in the vocabularies.",
"Both the target and the source embeddings are preprocessed according to Mu and Viswanath (2018) , in which all embeddings are debiased to make the average embedding into a zero vector and the top five principal components are subtracted for each embedding.",
"Table 1 shows the overall performance of the proposed and baseline models.",
"Compared with randomly initialized models, our model outperforms the text-only baseline by +2.49 BLEU and +1.24 METEOR, and the multimodal baseline by +2.31 BLEU and +1.09 METEOR, respectively.",
"While pretrained embeddings improve NMT/IMAGINATION models as well, the improved models are still beyond our model.",
"Table 2 shows the results of ablation experiments of the initialization and fine-tuning methods.",
"The pretrained embedding models outperform other models by up to +2.77 BLEU and +1.37 METEOR.",
"Results Discussion Rare Words Our model shows a great improvement for low-frequency words.",
"Figure 1 shows a variety of F-score according to the word frequency in the training corpus.",
"Whereas IMAG-INATION improves the translation accuracy uniformly, our model shows substantial improvement for rare words.",
"Word Embeddings Furthermore, we found that decoder embeddings must be fixed to improve multimodal machine translation with embedding prediction.",
"When we allow fine-tuning on the embedding layer, the performance drops below the baseline.",
"It seems that fine-tuning embeddings in NMT with embedding prediction makes the model search for common words more than expected, thus preventing it from predicting rare words.",
"More interestingly, using pretrained FastText embeddings on the decoder rather than the encoder improves performance.",
"This finding is different from Qi et al.",
"(2018) , in which only the encoder benefits from pretrained embeddings.",
"Compared with the model initialized with a random value, initializing the decoder with the embedding results in an increase of +1.80 BLEU; in contrast, initializing the encoder results in an increase of only +0.11 BLEU.",
"This is caused by the multitask learning model that trains the encoder with images and takes it away from what the embedding prediction model wants to learn from the sentences.",
"Visual Feature We also investigated the effect of images and its preprocessing in NMT with embedding prediction ( Table 3 ).",
"The interesting result is that multitask learning with raw images would not help the predictive model.",
"Debiasing images is an essential preprocessing for NMT with embedding prediction to use images effectively in multitask learning scenario.",
"Translation Examples In Table 4 , we show French-English translations generated by different models.",
"In the left example, our proposed model correctly translates \"voûte\" into \"archway\" (occurs five times in the training set), Although the baseline model translates it to its synonym having higher frequency (nine times for \"arch\" and 12 times for \"monument\").",
"At the same time, our outputs tend to be less fluent for long sentences.",
"The right example shows that our model translates some words (\"patterned\" and \"carpet\") more concisely; however, it generates a less fluent sentence than the baseline.",
"Related Works Most studies on multimodal machine translation are divided into two categories: visual feature adaptation and data augmentation.",
"First, in visual feature adaptation, visual features are extracted using image processing techniques and then integrated into a machine translation model.",
"In contrast, most multitask learning models use latent space learning as their auxiliary task.",
"Elliott and Kádár (2017) proposed the IMAGINATION model that learns to construct the corresponding visual feature from the textual hidden states of a source sentence.",
"The visual model shares its encoder with the machine translation model; this helps in improving the textual encoder.",
"Second, in data augmentation, parallel corpora without images are widely used as additional train-Image Source un homme en vélo pédale devant une voûte .",
"quatre hommes , dont trois portent des kippas , sont assis sur un tapisà motifs bleu et vert olive .",
"Reference a man on a bicycle pedals through an archway .",
"four men , three of whom are wearing prayer caps , are sitting on a blue and olive green patterned mat .",
"NMT a man on a bicycle pedal past an arch .",
"four men , three of whom are wearing aprons , are sitting on a blue and green speedo carpet .",
"IMAG+ a man on a bicycle pedals outside a monument .",
"four men , three of them are wearing alaska , are sitting on a blue patterned carpet and green green seating .",
"Ours a man on a bicycle pedals in front of a archway .",
"four men , three are wearing these are wearing these are sitting on a blue and green patterned mat .",
"ing data.",
"Grönroos et al.",
"(2018) trained their multimodal model with parallel corpora and achieved state-of-the-art performance in the WMT 2018.",
"However, the use of monolingual corpora has seldom been studied in multimodal machine translation.",
"Our study proposes using word embeddings that are pretrained on monolingual corpora.",
"Conclusion We have proposed a multimodal machine translation model with embedding prediction and showed that pretrained word embeddings improve the performance in multimodal translation tasks, especially when translating rare words.",
"In the future, we will tailor the training corpora for embedding learning, especially for handling the embedding for unknown words in the context of multimodal machine translation.",
"We will also incorporate visual features into contextualized word embeddings."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3.1",
"3.2",
"3.3",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Multimodal Machine Translation with Embedding Prediction",
"Neural Machine Translation with Embedding Prediction",
"Visual Latent Space Learning",
"Dataset",
"Model",
"Word Embedding",
"Discussion",
"Related Works",
"Conclusion"
]
} | GEM-SciDuet-train-23#paper-1024#slide-14 | Ablation wrt Embedding Layers | Encoder Decoder Fixed BLEU METEOR
FastText FastText Yes random
Encoder/Decoder: Initialize embedding layer with random values or FastText word embedding.
Fixed (Yes/No): Whether to fix the embedding layer in the decoder or fine-tune it during training.
Fixing the embedding layer in decoder is essential
Keep word embeddings in input/output layers consistent
NAACL SRW 2019, Minneapolis | Encoder Decoder Fixed BLEU METEOR
FastText FastText Yes random
Encoder/Decoder: Initialize embedding layer with random values or FastText word embedding.
Fixed (Yes/No): Whether to fix the embedding layer in the decoder or fine-tune it during training.
Fixing the embedding layer in decoder is essential
Keep word embeddings in input/output layers consistent
NAACL SRW 2019, Minneapolis | [] |
GEM-SciDuet-train-23#paper-1024#slide-15 | 1024 | Multimodal Machine Translation with Embedding Prediction | Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for translating rare words. In NMT, pretrained word embeddings have been shown to improve NMT of low-resource domains, and a search-based approach is proposed to address the rare word problem. In this study, we effectively combine these two approaches in the context of multimodal NMT and explore how we can take full advantage of pretrained word embeddings to better translate rare words. We report overall performance improvements of 1.24 METEOR and 2.49 BLEU and achieve an improvement of 7.67 F-score for rare word translation. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"paper_content_text": [
"Introduction In multimodal machine translation, a target sentence is translated from a source sentence together with related nonlinguistic information such as visual information.",
"Recently, neural machine translation (NMT) has superseded traditional statistical machine translation owing to the introduction of the attentional encoder-decoder model, in which machine translation is treated as a sequence-tosequence learning problem and is trained to pay attention to the source sentence while decoding (Bahdanau et al., 2015) .",
"Most previous studies on multimodal machine translation are classified into two categories: visual feature adaptation and data augmentation.",
"In visual feature adaptation, multitask learning (Elliott and Kádár, 2017) and feature integration architecture (Caglayan et al., 2017a; Calixto et al., 2017) are proposed to improve neural network models.",
"Data augmentation aims to deal with the fact that the size of available datasets for multimodal translation is quite small.",
"To alleviate this problem, parallel corpora without a visual source (Elliott and Kádár, 2017; Grönroos et al., 2018) and pseudo-parallel corpora obtained using back-translation (Helcl et al., 2018) are used as additional learning resources.",
"Due to the availability of parallel corpora for NMT, Qi et al.",
"(2018) suggested that initializing the encoder with pretrained word embedding improves the translation performance in low-resource language pairs.",
"Recently, Kumar and Tsvetkov (2019) proposed an NMT model that predicts the embedding of output words and searches for the output word instead of calculating the probability using the softmax function.",
"This model performed as well as conventional NMT, and it significantly improved the translation accuracy for rare words.",
"In this study, we introduce an NMT model with embedding prediction for multimodal machine translation that fully uses pretrained embeddings to improve the translation accuracy for rare words.",
"The main contributions of this study are as follows: 1.",
"We propose a novel multimodal machine translation model with embedding prediction and explore various settings to take full advantage of word embeddings.",
"2.",
"We show that pretrained word embeddings improve the model performance, especially when translating rare words.",
"Multimodal Machine Translation with Embedding Prediction We integrate an embedding prediction framework (Kumar and Tsvetkov, 2019) with the multimodal machine translation model and take advantage of pretrained word embeddings.",
"To highlight the effect of pretrained word embeddings and embedding prediction architecture, we adopt IMAGINA-TION (Elliott and Kádár, 2017) as a simple multimodal baseline.",
"IMAGINATION jointly learns machine translation and visual latent space models.",
"It is based on a conventional NMT model for a machine translation task.",
"In latent space learning, a source sentence and the paired image are mapped closely in the latent space.",
"We use the latent space learning model as it is, except for the preprocessing of images.",
"The models for each task share the same textual encoder in a multitask scenario.",
"The loss function for multitask learning is the linear interpolation of loss functions for each task.",
"J = λJ T (θ, ϕ T ) + (1 − λ)J V (θ, ϕ V ) (1) where θ is the parameter of the shared encoder; ϕ T and ϕ V are parameters of the machine translation model and latent space model, respectively; and λ is the interpolation coefficient 1 .",
"Neural Machine Translation with Embedding Prediction The machine translation part in our proposed model is an extension of Bahdanau et al.",
"(2015) .",
"However, instead of using the probability of each word in the decoder, it searches for output words based on their similarity with word embeddings.",
"Once the model predicts a word embedding, its nearest neighbor in the pretrained word embeddings is selected as the system output.",
"e j = tanh(W o s j + b o ) (2) y j = argmin w∈V {d(ê j , e(w))} (3) where s j ,ê j , andŷ j are the hidden state of the decoder, predicted embedding, and system output, respectively, for each timestep j in the decoding process.",
"e(w) is the pretrained word embedding for a target word w. d is a distance function that is used to calculate the word similarity.",
"W o and b o are parameters of the output layer.",
"We adopt margin-based ranking loss (Lazaridou et al., 2015) as the loss function 1 We use λ = 0.01 in the experiment.",
"of the machine translation model.",
"J T (θ, ϕ T ) = M ∑ j max{0, γ + d(ê j , e(w − j )) −d(ê j , e(y j ))} (4) w − j = argmax w∈V {d(ê j , e(w)) − d(ê j , e(y j )) (5) where M is the length of a target sentence and γ is the margin 2 .",
"w − j is a negative sample that is close to the predicted embedding and far from the gold embedding as measuring using d. Pretrained word embeddings are also used to initialize the embedding layers of the encoder and decoder, and the output layer of the decoder.",
"The embedding layer of the encoder is updated during training, and the embedding layer of the decoder is fixed to the initial value.",
"Visual Latent Space Learning The decoder of this model calculates the average vector over the hidden states h i in the encoder and maps it to the final vectorv in the latent space.",
"v = tanh(W v · 1 N N ∑ i h i ) (6) where N is the length of an input sentence and W v ∈ R N * M is learned parameter of the model.",
"We use max margin loss as the loss function; it learns to make corresponding latent vectors of a source sentence and the paired image closer.",
"J V (θ, ϕ V ) = ∑ v ′ ̸ =v max{0, α+d(v, v ′ )−d(v, v)} (7) where v is the latent vector of the paired image; v ′ , the image vector for other examples; and α, the margin that adjusts the sparseness of each vector in the latent space 3 .",
"Experiment Dataset We train, validate, and test our model with the Multi30k (Elliott et al., 2016) dataset published in the WMT17 Shared Task.",
"We choose French as the source language and English as the target one.",
"The vocabulary size of both the source and the target languages is 10,000.",
"Following Kumar and Tsvetkov (2019) , byte pair encoding (Sennrich et al., 2016) is not applied.",
"The source and target sentences are preprocessed with lower-casing, tokenizing and normalizing the punctuation.",
"Visual features are extracted using pretrained ResNet (He et al., 2016) .",
"Specifically, we encode all images in Multi30k with ResNet-50 and pick out the hidden state in the pool5 layer as a 2,048dimension visual feature.",
"We calculate the centroid of visual features in the training dataset as the bias vector and subtract the bias vector from all visual features in the training, validation and test datasets.",
"Model The model is implemented using nmtpytorch toolkit v3.0.0 4 (Caglayan et al., 2017b) .",
"The shared encoder has 256 hidden dimensions, and therefore the bidirectional GRU has 512 dimensions.",
"The decoder in NMT model has 256 hidden dimension.",
"The input word embedding size and output vector size is 300 each.",
"The latent space vector size is 2,048.",
"We used the Adam optimizer with learning rate of 0.0004.",
"The gradient norm is clipped to 1.0.",
"The dropout rate is 0.3.",
"BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) are used as performance metrics.",
"We also evaluated the models using the F-score of each word; this shows how accurately each word is translated into target sentences, as was proposed in Kumar and Tsvetkov (2019) .",
"The F-score is calculated as the harmonic mean of the precision (fraction of produced sentences with a word that is in the references sentences) and the recall (fraction of reference sentences with a word that is in model outputs).",
"We ran the experiment three times with different random seeds and obtained the mean and variance for each model.",
"To clarify the effect of pretrained embeddings on machine translation, we also initialized the encoder and decoder of our models with random values instead of pretrained embeddings, and investigated the effect of fixing decoder embeddings.",
"Word Embedding We use publicly available pretrained Fast-Text (Bojanowski et al., 2017) embeddings (Grave et al., 2018 trained on Wikipedia and Common Crawl using the CBOW algorithm, and the dimension is 300.",
"The embedding for unknown words is calculated as the average embedding over words that are a part of pretrained embeddings but are not included in the vocabularies.",
"Both the target and the source embeddings are preprocessed according to Mu and Viswanath (2018) , in which all embeddings are debiased to make the average embedding into a zero vector and the top five principal components are subtracted for each embedding.",
"Table 1 shows the overall performance of the proposed and baseline models.",
"Compared with randomly initialized models, our model outperforms the text-only baseline by +2.49 BLEU and +1.24 METEOR, and the multimodal baseline by +2.31 BLEU and +1.09 METEOR, respectively.",
"While pretrained embeddings improve NMT/IMAGINATION models as well, the improved models are still beyond our model.",
"Table 2 shows the results of ablation experiments of the initialization and fine-tuning methods.",
"The pretrained embedding models outperform other models by up to +2.77 BLEU and +1.37 METEOR.",
"Results Discussion Rare Words Our model shows a great improvement for low-frequency words.",
"Figure 1 shows a variety of F-score according to the word frequency in the training corpus.",
"Whereas IMAG-INATION improves the translation accuracy uniformly, our model shows substantial improvement for rare words.",
"Word Embeddings Furthermore, we found that decoder embeddings must be fixed to improve multimodal machine translation with embedding prediction.",
"When we allow fine-tuning on the embedding layer, the performance drops below the baseline.",
"It seems that fine-tuning embeddings in NMT with embedding prediction makes the model search for common words more than expected, thus preventing it from predicting rare words.",
"More interestingly, using pretrained FastText embeddings on the decoder rather than the encoder improves performance.",
"This finding is different from Qi et al.",
"(2018) , in which only the encoder benefits from pretrained embeddings.",
"Compared with the model initialized with a random value, initializing the decoder with the embedding results in an increase of +1.80 BLEU; in contrast, initializing the encoder results in an increase of only +0.11 BLEU.",
"This is caused by the multitask learning model that trains the encoder with images and takes it away from what the embedding prediction model wants to learn from the sentences.",
"Visual Feature We also investigated the effect of images and its preprocessing in NMT with embedding prediction ( Table 3 ).",
"The interesting result is that multitask learning with raw images would not help the predictive model.",
"Debiasing images is an essential preprocessing for NMT with embedding prediction to use images effectively in multitask learning scenario.",
"Translation Examples In Table 4 , we show French-English translations generated by different models.",
"In the left example, our proposed model correctly translates \"voûte\" into \"archway\" (occurs five times in the training set), Although the baseline model translates it to its synonym having higher frequency (nine times for \"arch\" and 12 times for \"monument\").",
"At the same time, our outputs tend to be less fluent for long sentences.",
"The right example shows that our model translates some words (\"patterned\" and \"carpet\") more concisely; however, it generates a less fluent sentence than the baseline.",
"Related Works Most studies on multimodal machine translation are divided into two categories: visual feature adaptation and data augmentation.",
"First, in visual feature adaptation, visual features are extracted using image processing techniques and then integrated into a machine translation model.",
"In contrast, most multitask learning models use latent space learning as their auxiliary task.",
"Elliott and Kádár (2017) proposed the IMAGINATION model that learns to construct the corresponding visual feature from the textual hidden states of a source sentence.",
"The visual model shares its encoder with the machine translation model; this helps in improving the textual encoder.",
"Second, in data augmentation, parallel corpora without images are widely used as additional train-Image Source un homme en vélo pédale devant une voûte .",
"quatre hommes , dont trois portent des kippas , sont assis sur un tapisà motifs bleu et vert olive .",
"Reference a man on a bicycle pedals through an archway .",
"four men , three of whom are wearing prayer caps , are sitting on a blue and olive green patterned mat .",
"NMT a man on a bicycle pedal past an arch .",
"four men , three of whom are wearing aprons , are sitting on a blue and green speedo carpet .",
"IMAG+ a man on a bicycle pedals outside a monument .",
"four men , three of them are wearing alaska , are sitting on a blue patterned carpet and green green seating .",
"Ours a man on a bicycle pedals in front of a archway .",
"four men , three are wearing these are wearing these are sitting on a blue and green patterned mat .",
"ing data.",
"Grönroos et al.",
"(2018) trained their multimodal model with parallel corpora and achieved state-of-the-art performance in the WMT 2018.",
"However, the use of monolingual corpora has seldom been studied in multimodal machine translation.",
"Our study proposes using word embeddings that are pretrained on monolingual corpora.",
"Conclusion We have proposed a multimodal machine translation model with embedding prediction and showed that pretrained word embeddings improve the performance in multimodal translation tasks, especially when translating rare words.",
"In the future, we will tailor the training corpora for embedding learning, especially for handling the embedding for unknown words in the context of multimodal machine translation.",
"We will also incorporate visual features into contextualized word embeddings."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3.1",
"3.2",
"3.3",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Multimodal Machine Translation with Embedding Prediction",
"Neural Machine Translation with Embedding Prediction",
"Visual Latent Space Learning",
"Dataset",
"Model",
"Word Embedding",
"Discussion",
"Related Works",
"Conclusion"
]
} | GEM-SciDuet-train-23#paper-1024#slide-15 | Overall Performance | Model (+ pretrained): Apply embedding layer initialization and All-but-the-Top debiasing.
Our model performs better than baselines
Even those with embedding layer initialization
NAACL SRW 2019, Minneapolis | Model (+ pretrained): Apply embedding layer initialization and All-but-the-Top debiasing.
Our model performs better than baselines
Even those with embedding layer initialization
NAACL SRW 2019, Minneapolis | [] |
GEM-SciDuet-train-23#paper-1024#slide-16 | 1024 | Multimodal Machine Translation with Embedding Prediction | Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for translating rare words. In NMT, pretrained word embeddings have been shown to improve NMT of low-resource domains, and a search-based approach is proposed to address the rare word problem. In this study, we effectively combine these two approaches in the context of multimodal NMT and explore how we can take full advantage of pretrained word embeddings to better translate rare words. We report overall performance improvements of 1.24 METEOR and 2.49 BLEU and achieve an improvement of 7.67 F-score for rare word translation. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"paper_content_text": [
"Introduction In multimodal machine translation, a target sentence is translated from a source sentence together with related nonlinguistic information such as visual information.",
"Recently, neural machine translation (NMT) has superseded traditional statistical machine translation owing to the introduction of the attentional encoder-decoder model, in which machine translation is treated as a sequence-tosequence learning problem and is trained to pay attention to the source sentence while decoding (Bahdanau et al., 2015) .",
"Most previous studies on multimodal machine translation are classified into two categories: visual feature adaptation and data augmentation.",
"In visual feature adaptation, multitask learning (Elliott and Kádár, 2017) and feature integration architecture (Caglayan et al., 2017a; Calixto et al., 2017) are proposed to improve neural network models.",
"Data augmentation aims to deal with the fact that the size of available datasets for multimodal translation is quite small.",
"To alleviate this problem, parallel corpora without a visual source (Elliott and Kádár, 2017; Grönroos et al., 2018) and pseudo-parallel corpora obtained using back-translation (Helcl et al., 2018) are used as additional learning resources.",
"Due to the availability of parallel corpora for NMT, Qi et al.",
"(2018) suggested that initializing the encoder with pretrained word embedding improves the translation performance in low-resource language pairs.",
"Recently, Kumar and Tsvetkov (2019) proposed an NMT model that predicts the embedding of output words and searches for the output word instead of calculating the probability using the softmax function.",
"This model performed as well as conventional NMT, and it significantly improved the translation accuracy for rare words.",
"In this study, we introduce an NMT model with embedding prediction for multimodal machine translation that fully uses pretrained embeddings to improve the translation accuracy for rare words.",
"The main contributions of this study are as follows: 1.",
"We propose a novel multimodal machine translation model with embedding prediction and explore various settings to take full advantage of word embeddings.",
"2.",
"We show that pretrained word embeddings improve the model performance, especially when translating rare words.",
"Multimodal Machine Translation with Embedding Prediction We integrate an embedding prediction framework (Kumar and Tsvetkov, 2019) with the multimodal machine translation model and take advantage of pretrained word embeddings.",
"To highlight the effect of pretrained word embeddings and embedding prediction architecture, we adopt IMAGINA-TION (Elliott and Kádár, 2017) as a simple multimodal baseline.",
"IMAGINATION jointly learns machine translation and visual latent space models.",
"It is based on a conventional NMT model for a machine translation task.",
"In latent space learning, a source sentence and the paired image are mapped closely in the latent space.",
"We use the latent space learning model as it is, except for the preprocessing of images.",
"The models for each task share the same textual encoder in a multitask scenario.",
"The loss function for multitask learning is the linear interpolation of loss functions for each task.",
"J = λJ T (θ, ϕ T ) + (1 − λ)J V (θ, ϕ V ) (1) where θ is the parameter of the shared encoder; ϕ T and ϕ V are parameters of the machine translation model and latent space model, respectively; and λ is the interpolation coefficient 1 .",
"Neural Machine Translation with Embedding Prediction The machine translation part in our proposed model is an extension of Bahdanau et al.",
"(2015) .",
"However, instead of using the probability of each word in the decoder, it searches for output words based on their similarity with word embeddings.",
"Once the model predicts a word embedding, its nearest neighbor in the pretrained word embeddings is selected as the system output.",
"e j = tanh(W o s j + b o ) (2) y j = argmin w∈V {d(ê j , e(w))} (3) where s j ,ê j , andŷ j are the hidden state of the decoder, predicted embedding, and system output, respectively, for each timestep j in the decoding process.",
"e(w) is the pretrained word embedding for a target word w. d is a distance function that is used to calculate the word similarity.",
"W o and b o are parameters of the output layer.",
"We adopt margin-based ranking loss (Lazaridou et al., 2015) as the loss function 1 We use λ = 0.01 in the experiment.",
"of the machine translation model.",
"J T (θ, ϕ T ) = M ∑ j max{0, γ + d(ê j , e(w − j )) −d(ê j , e(y j ))} (4) w − j = argmax w∈V {d(ê j , e(w)) − d(ê j , e(y j )) (5) where M is the length of a target sentence and γ is the margin 2 .",
"w − j is a negative sample that is close to the predicted embedding and far from the gold embedding as measuring using d. Pretrained word embeddings are also used to initialize the embedding layers of the encoder and decoder, and the output layer of the decoder.",
"The embedding layer of the encoder is updated during training, and the embedding layer of the decoder is fixed to the initial value.",
"Visual Latent Space Learning The decoder of this model calculates the average vector over the hidden states h i in the encoder and maps it to the final vectorv in the latent space.",
"v = tanh(W v · 1 N N ∑ i h i ) (6) where N is the length of an input sentence and W v ∈ R N * M is learned parameter of the model.",
"We use max margin loss as the loss function; it learns to make corresponding latent vectors of a source sentence and the paired image closer.",
"J V (θ, ϕ V ) = ∑ v ′ ̸ =v max{0, α+d(v, v ′ )−d(v, v)} (7) where v is the latent vector of the paired image; v ′ , the image vector for other examples; and α, the margin that adjusts the sparseness of each vector in the latent space 3 .",
"Experiment Dataset We train, validate, and test our model with the Multi30k (Elliott et al., 2016) dataset published in the WMT17 Shared Task.",
"We choose French as the source language and English as the target one.",
"The vocabulary size of both the source and the target languages is 10,000.",
"Following Kumar and Tsvetkov (2019) , byte pair encoding (Sennrich et al., 2016) is not applied.",
"The source and target sentences are preprocessed with lower-casing, tokenizing and normalizing the punctuation.",
"Visual features are extracted using pretrained ResNet (He et al., 2016) .",
"Specifically, we encode all images in Multi30k with ResNet-50 and pick out the hidden state in the pool5 layer as a 2,048dimension visual feature.",
"We calculate the centroid of visual features in the training dataset as the bias vector and subtract the bias vector from all visual features in the training, validation and test datasets.",
"Model The model is implemented using nmtpytorch toolkit v3.0.0 4 (Caglayan et al., 2017b) .",
"The shared encoder has 256 hidden dimensions, and therefore the bidirectional GRU has 512 dimensions.",
"The decoder in NMT model has 256 hidden dimension.",
"The input word embedding size and output vector size is 300 each.",
"The latent space vector size is 2,048.",
"We used the Adam optimizer with learning rate of 0.0004.",
"The gradient norm is clipped to 1.0.",
"The dropout rate is 0.3.",
"BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) are used as performance metrics.",
"We also evaluated the models using the F-score of each word; this shows how accurately each word is translated into target sentences, as was proposed in Kumar and Tsvetkov (2019) .",
"The F-score is calculated as the harmonic mean of the precision (fraction of produced sentences with a word that is in the references sentences) and the recall (fraction of reference sentences with a word that is in model outputs).",
"We ran the experiment three times with different random seeds and obtained the mean and variance for each model.",
"To clarify the effect of pretrained embeddings on machine translation, we also initialized the encoder and decoder of our models with random values instead of pretrained embeddings, and investigated the effect of fixing decoder embeddings.",
"Word Embedding We use publicly available pretrained Fast-Text (Bojanowski et al., 2017) embeddings (Grave et al., 2018 trained on Wikipedia and Common Crawl using the CBOW algorithm, and the dimension is 300.",
"The embedding for unknown words is calculated as the average embedding over words that are a part of pretrained embeddings but are not included in the vocabularies.",
"Both the target and the source embeddings are preprocessed according to Mu and Viswanath (2018) , in which all embeddings are debiased to make the average embedding into a zero vector and the top five principal components are subtracted for each embedding.",
"Table 1 shows the overall performance of the proposed and baseline models.",
"Compared with randomly initialized models, our model outperforms the text-only baseline by +2.49 BLEU and +1.24 METEOR, and the multimodal baseline by +2.31 BLEU and +1.09 METEOR, respectively.",
"While pretrained embeddings improve NMT/IMAGINATION models as well, the improved models are still beyond our model.",
"Table 2 shows the results of ablation experiments of the initialization and fine-tuning methods.",
"The pretrained embedding models outperform other models by up to +2.77 BLEU and +1.37 METEOR.",
"Results Discussion Rare Words Our model shows a great improvement for low-frequency words.",
"Figure 1 shows a variety of F-score according to the word frequency in the training corpus.",
"Whereas IMAG-INATION improves the translation accuracy uniformly, our model shows substantial improvement for rare words.",
"Word Embeddings Furthermore, we found that decoder embeddings must be fixed to improve multimodal machine translation with embedding prediction.",
"When we allow fine-tuning on the embedding layer, the performance drops below the baseline.",
"It seems that fine-tuning embeddings in NMT with embedding prediction makes the model search for common words more than expected, thus preventing it from predicting rare words.",
"More interestingly, using pretrained FastText embeddings on the decoder rather than the encoder improves performance.",
"This finding is different from Qi et al.",
"(2018) , in which only the encoder benefits from pretrained embeddings.",
"Compared with the model initialized with a random value, initializing the decoder with the embedding results in an increase of +1.80 BLEU; in contrast, initializing the encoder results in an increase of only +0.11 BLEU.",
"This is caused by the multitask learning model that trains the encoder with images and takes it away from what the embedding prediction model wants to learn from the sentences.",
"Visual Feature We also investigated the effect of images and its preprocessing in NMT with embedding prediction ( Table 3 ).",
"The interesting result is that multitask learning with raw images would not help the predictive model.",
"Debiasing images is an essential preprocessing for NMT with embedding prediction to use images effectively in multitask learning scenario.",
"Translation Examples In Table 4 , we show French-English translations generated by different models.",
"In the left example, our proposed model correctly translates \"voûte\" into \"archway\" (occurs five times in the training set), Although the baseline model translates it to its synonym having higher frequency (nine times for \"arch\" and 12 times for \"monument\").",
"At the same time, our outputs tend to be less fluent for long sentences.",
"The right example shows that our model translates some words (\"patterned\" and \"carpet\") more concisely; however, it generates a less fluent sentence than the baseline.",
"Related Works Most studies on multimodal machine translation are divided into two categories: visual feature adaptation and data augmentation.",
"First, in visual feature adaptation, visual features are extracted using image processing techniques and then integrated into a machine translation model.",
"In contrast, most multitask learning models use latent space learning as their auxiliary task.",
"Elliott and Kádár (2017) proposed the IMAGINATION model that learns to construct the corresponding visual feature from the textual hidden states of a source sentence.",
"The visual model shares its encoder with the machine translation model; this helps in improving the textual encoder.",
"Second, in data augmentation, parallel corpora without images are widely used as additional train-Image Source un homme en vélo pédale devant une voûte .",
"quatre hommes , dont trois portent des kippas , sont assis sur un tapisà motifs bleu et vert olive .",
"Reference a man on a bicycle pedals through an archway .",
"four men , three of whom are wearing prayer caps , are sitting on a blue and olive green patterned mat .",
"NMT a man on a bicycle pedal past an arch .",
"four men , three of whom are wearing aprons , are sitting on a blue and green speedo carpet .",
"IMAG+ a man on a bicycle pedals outside a monument .",
"four men , three of them are wearing alaska , are sitting on a blue patterned carpet and green green seating .",
"Ours a man on a bicycle pedals in front of a archway .",
"four men , three are wearing these are wearing these are sitting on a blue and green patterned mat .",
"ing data.",
"Grönroos et al.",
"(2018) trained their multimodal model with parallel corpora and achieved state-of-the-art performance in the WMT 2018.",
"However, the use of monolingual corpora has seldom been studied in multimodal machine translation.",
"Our study proposes using word embeddings that are pretrained on monolingual corpora.",
"Conclusion We have proposed a multimodal machine translation model with embedding prediction and showed that pretrained word embeddings improve the performance in multimodal translation tasks, especially when translating rare words.",
"In the future, we will tailor the training corpora for embedding learning, especially for handling the embedding for unknown words in the context of multimodal machine translation.",
"We will also incorporate visual features into contextualized word embeddings."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3.1",
"3.2",
"3.3",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Multimodal Machine Translation with Embedding Prediction",
"Neural Machine Translation with Embedding Prediction",
"Visual Latent Space Learning",
"Dataset",
"Model",
"Word Embedding",
"Discussion",
"Related Works",
"Conclusion"
]
} | GEM-SciDuet-train-23#paper-1024#slide-16 | Ablation wrt Visual Features | Visual Features BLEU METEOR
Visual Features (Centered/Raw/No): Use centered visual features or raw visual features to train model.
"No" shows the result of the text-only NMT with embedding prediction model.
Centering visual features is required to train our model
NAACL SRW 2019, Minneapolis | Visual Features BLEU METEOR
Visual Features (Centered/Raw/No): Use centered visual features or raw visual features to train model.
"No" shows the result of the text-only NMT with embedding prediction model.
Centering visual features is required to train our model
NAACL SRW 2019, Minneapolis | [] |
GEM-SciDuet-train-23#paper-1024#slide-17 | 1024 | Multimodal Machine Translation with Embedding Prediction | Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for translating rare words. In NMT, pretrained word embeddings have been shown to improve NMT of low-resource domains, and a search-based approach is proposed to address the rare word problem. In this study, we effectively combine these two approaches in the context of multimodal NMT and explore how we can take full advantage of pretrained word embeddings to better translate rare words. We report overall performance improvements of 1.24 METEOR and 2.49 BLEU and achieve an improvement of 7.67 F-score for rare word translation. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"paper_content_text": [
"Introduction In multimodal machine translation, a target sentence is translated from a source sentence together with related nonlinguistic information such as visual information.",
"Recently, neural machine translation (NMT) has superseded traditional statistical machine translation owing to the introduction of the attentional encoder-decoder model, in which machine translation is treated as a sequence-tosequence learning problem and is trained to pay attention to the source sentence while decoding (Bahdanau et al., 2015) .",
"Most previous studies on multimodal machine translation are classified into two categories: visual feature adaptation and data augmentation.",
"In visual feature adaptation, multitask learning (Elliott and Kádár, 2017) and feature integration architecture (Caglayan et al., 2017a; Calixto et al., 2017) are proposed to improve neural network models.",
"Data augmentation aims to deal with the fact that the size of available datasets for multimodal translation is quite small.",
"To alleviate this problem, parallel corpora without a visual source (Elliott and Kádár, 2017; Grönroos et al., 2018) and pseudo-parallel corpora obtained using back-translation (Helcl et al., 2018) are used as additional learning resources.",
"Due to the availability of parallel corpora for NMT, Qi et al.",
"(2018) suggested that initializing the encoder with pretrained word embedding improves the translation performance in low-resource language pairs.",
"Recently, Kumar and Tsvetkov (2019) proposed an NMT model that predicts the embedding of output words and searches for the output word instead of calculating the probability using the softmax function.",
"This model performed as well as conventional NMT, and it significantly improved the translation accuracy for rare words.",
"In this study, we introduce an NMT model with embedding prediction for multimodal machine translation that fully uses pretrained embeddings to improve the translation accuracy for rare words.",
"The main contributions of this study are as follows: 1.",
"We propose a novel multimodal machine translation model with embedding prediction and explore various settings to take full advantage of word embeddings.",
"2.",
"We show that pretrained word embeddings improve the model performance, especially when translating rare words.",
"Multimodal Machine Translation with Embedding Prediction We integrate an embedding prediction framework (Kumar and Tsvetkov, 2019) with the multimodal machine translation model and take advantage of pretrained word embeddings.",
"To highlight the effect of pretrained word embeddings and embedding prediction architecture, we adopt IMAGINA-TION (Elliott and Kádár, 2017) as a simple multimodal baseline.",
"IMAGINATION jointly learns machine translation and visual latent space models.",
"It is based on a conventional NMT model for a machine translation task.",
"In latent space learning, a source sentence and the paired image are mapped closely in the latent space.",
"We use the latent space learning model as it is, except for the preprocessing of images.",
"The models for each task share the same textual encoder in a multitask scenario.",
"The loss function for multitask learning is the linear interpolation of loss functions for each task.",
"J = λJ T (θ, ϕ T ) + (1 − λ)J V (θ, ϕ V ) (1) where θ is the parameter of the shared encoder; ϕ T and ϕ V are parameters of the machine translation model and latent space model, respectively; and λ is the interpolation coefficient 1 .",
"Neural Machine Translation with Embedding Prediction The machine translation part in our proposed model is an extension of Bahdanau et al.",
"(2015) .",
"However, instead of using the probability of each word in the decoder, it searches for output words based on their similarity with word embeddings.",
"Once the model predicts a word embedding, its nearest neighbor in the pretrained word embeddings is selected as the system output.",
"e j = tanh(W o s j + b o ) (2) y j = argmin w∈V {d(ê j , e(w))} (3) where s j ,ê j , andŷ j are the hidden state of the decoder, predicted embedding, and system output, respectively, for each timestep j in the decoding process.",
"e(w) is the pretrained word embedding for a target word w. d is a distance function that is used to calculate the word similarity.",
"W o and b o are parameters of the output layer.",
"We adopt margin-based ranking loss (Lazaridou et al., 2015) as the loss function 1 We use λ = 0.01 in the experiment.",
"of the machine translation model.",
"J T (θ, ϕ T ) = M ∑ j max{0, γ + d(ê j , e(w − j )) −d(ê j , e(y j ))} (4) w − j = argmax w∈V {d(ê j , e(w)) − d(ê j , e(y j )) (5) where M is the length of a target sentence and γ is the margin 2 .",
"w − j is a negative sample that is close to the predicted embedding and far from the gold embedding as measuring using d. Pretrained word embeddings are also used to initialize the embedding layers of the encoder and decoder, and the output layer of the decoder.",
"The embedding layer of the encoder is updated during training, and the embedding layer of the decoder is fixed to the initial value.",
"Visual Latent Space Learning The decoder of this model calculates the average vector over the hidden states h i in the encoder and maps it to the final vectorv in the latent space.",
"v = tanh(W v · 1 N N ∑ i h i ) (6) where N is the length of an input sentence and W v ∈ R N * M is learned parameter of the model.",
"We use max margin loss as the loss function; it learns to make corresponding latent vectors of a source sentence and the paired image closer.",
"J V (θ, ϕ V ) = ∑ v ′ ̸ =v max{0, α+d(v, v ′ )−d(v, v)} (7) where v is the latent vector of the paired image; v ′ , the image vector for other examples; and α, the margin that adjusts the sparseness of each vector in the latent space 3 .",
"Experiment Dataset We train, validate, and test our model with the Multi30k (Elliott et al., 2016) dataset published in the WMT17 Shared Task.",
"We choose French as the source language and English as the target one.",
"The vocabulary size of both the source and the target languages is 10,000.",
"Following Kumar and Tsvetkov (2019) , byte pair encoding (Sennrich et al., 2016) is not applied.",
"The source and target sentences are preprocessed with lower-casing, tokenizing and normalizing the punctuation.",
"Visual features are extracted using pretrained ResNet (He et al., 2016) .",
"Specifically, we encode all images in Multi30k with ResNet-50 and pick out the hidden state in the pool5 layer as a 2,048dimension visual feature.",
"We calculate the centroid of visual features in the training dataset as the bias vector and subtract the bias vector from all visual features in the training, validation and test datasets.",
"Model The model is implemented using nmtpytorch toolkit v3.0.0 4 (Caglayan et al., 2017b) .",
"The shared encoder has 256 hidden dimensions, and therefore the bidirectional GRU has 512 dimensions.",
"The decoder in NMT model has 256 hidden dimension.",
"The input word embedding size and output vector size is 300 each.",
"The latent space vector size is 2,048.",
"We used the Adam optimizer with learning rate of 0.0004.",
"The gradient norm is clipped to 1.0.",
"The dropout rate is 0.3.",
"BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) are used as performance metrics.",
"We also evaluated the models using the F-score of each word; this shows how accurately each word is translated into target sentences, as was proposed in Kumar and Tsvetkov (2019) .",
"The F-score is calculated as the harmonic mean of the precision (fraction of produced sentences with a word that is in the references sentences) and the recall (fraction of reference sentences with a word that is in model outputs).",
"We ran the experiment three times with different random seeds and obtained the mean and variance for each model.",
"To clarify the effect of pretrained embeddings on machine translation, we also initialized the encoder and decoder of our models with random values instead of pretrained embeddings, and investigated the effect of fixing decoder embeddings.",
"Word Embedding We use publicly available pretrained Fast-Text (Bojanowski et al., 2017) embeddings (Grave et al., 2018 trained on Wikipedia and Common Crawl using the CBOW algorithm, and the dimension is 300.",
"The embedding for unknown words is calculated as the average embedding over words that are a part of pretrained embeddings but are not included in the vocabularies.",
"Both the target and the source embeddings are preprocessed according to Mu and Viswanath (2018) , in which all embeddings are debiased to make the average embedding into a zero vector and the top five principal components are subtracted for each embedding.",
"Table 1 shows the overall performance of the proposed and baseline models.",
"Compared with randomly initialized models, our model outperforms the text-only baseline by +2.49 BLEU and +1.24 METEOR, and the multimodal baseline by +2.31 BLEU and +1.09 METEOR, respectively.",
"While pretrained embeddings improve NMT/IMAGINATION models as well, the improved models are still beyond our model.",
"Table 2 shows the results of ablation experiments of the initialization and fine-tuning methods.",
"The pretrained embedding models outperform other models by up to +2.77 BLEU and +1.37 METEOR.",
"Results Discussion Rare Words Our model shows a great improvement for low-frequency words.",
"Figure 1 shows a variety of F-score according to the word frequency in the training corpus.",
"Whereas IMAG-INATION improves the translation accuracy uniformly, our model shows substantial improvement for rare words.",
"Word Embeddings Furthermore, we found that decoder embeddings must be fixed to improve multimodal machine translation with embedding prediction.",
"When we allow fine-tuning on the embedding layer, the performance drops below the baseline.",
"It seems that fine-tuning embeddings in NMT with embedding prediction makes the model search for common words more than expected, thus preventing it from predicting rare words.",
"More interestingly, using pretrained FastText embeddings on the decoder rather than the encoder improves performance.",
"This finding is different from Qi et al.",
"(2018) , in which only the encoder benefits from pretrained embeddings.",
"Compared with the model initialized with a random value, initializing the decoder with the embedding results in an increase of +1.80 BLEU; in contrast, initializing the encoder results in an increase of only +0.11 BLEU.",
"This is caused by the multitask learning model that trains the encoder with images and takes it away from what the embedding prediction model wants to learn from the sentences.",
"Visual Feature We also investigated the effect of images and its preprocessing in NMT with embedding prediction ( Table 3 ).",
"The interesting result is that multitask learning with raw images would not help the predictive model.",
"Debiasing images is an essential preprocessing for NMT with embedding prediction to use images effectively in multitask learning scenario.",
"Translation Examples In Table 4 , we show French-English translations generated by different models.",
"In the left example, our proposed model correctly translates \"voûte\" into \"archway\" (occurs five times in the training set), Although the baseline model translates it to its synonym having higher frequency (nine times for \"arch\" and 12 times for \"monument\").",
"At the same time, our outputs tend to be less fluent for long sentences.",
"The right example shows that our model translates some words (\"patterned\" and \"carpet\") more concisely; however, it generates a less fluent sentence than the baseline.",
"Related Works Most studies on multimodal machine translation are divided into two categories: visual feature adaptation and data augmentation.",
"First, in visual feature adaptation, visual features are extracted using image processing techniques and then integrated into a machine translation model.",
"In contrast, most multitask learning models use latent space learning as their auxiliary task.",
"Elliott and Kádár (2017) proposed the IMAGINATION model that learns to construct the corresponding visual feature from the textual hidden states of a source sentence.",
"The visual model shares its encoder with the machine translation model; this helps in improving the textual encoder.",
"Second, in data augmentation, parallel corpora without images are widely used as additional train-Image Source un homme en vélo pédale devant une voûte .",
"quatre hommes , dont trois portent des kippas , sont assis sur un tapisà motifs bleu et vert olive .",
"Reference a man on a bicycle pedals through an archway .",
"four men , three of whom are wearing prayer caps , are sitting on a blue and olive green patterned mat .",
"NMT a man on a bicycle pedal past an arch .",
"four men , three of whom are wearing aprons , are sitting on a blue and green speedo carpet .",
"IMAG+ a man on a bicycle pedals outside a monument .",
"four men , three of them are wearing alaska , are sitting on a blue patterned carpet and green green seating .",
"Ours a man on a bicycle pedals in front of a archway .",
"four men , three are wearing these are wearing these are sitting on a blue and green patterned mat .",
"ing data.",
"Grönroos et al.",
"(2018) trained their multimodal model with parallel corpora and achieved state-of-the-art performance in the WMT 2018.",
"However, the use of monolingual corpora has seldom been studied in multimodal machine translation.",
"Our study proposes using word embeddings that are pretrained on monolingual corpora.",
"Conclusion We have proposed a multimodal machine translation model with embedding prediction and showed that pretrained word embeddings improve the performance in multimodal translation tasks, especially when translating rare words.",
"In the future, we will tailor the training corpora for embedding learning, especially for handling the embedding for unknown words in the context of multimodal machine translation.",
"We will also incorporate visual features into contextualized word embeddings."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3.1",
"3.2",
"3.3",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Multimodal Machine Translation with Embedding Prediction",
"Neural Machine Translation with Embedding Prediction",
"Visual Latent Space Learning",
"Dataset",
"Model",
"Word Embedding",
"Discussion",
"Related Works",
"Conclusion"
]
} | GEM-SciDuet-train-23#paper-1024#slide-17 | Conclusion and Future Works | MMT with embedding prediction improves ...
It is essential for embedding prediction model to ...
Fix the embedding in decoder
Debias the pretrained word embedding
Center the visual feature for multitask learning
Better training corpora for embedding learning in MMT domain
Incorporate visual features into contextualized word embeddings
Thank you! NAACL SRW 2019, Minneapolis | MMT with embedding prediction improves ...
It is essential for embedding prediction model to ...
Fix the embedding in decoder
Debias the pretrained word embedding
Center the visual feature for multitask learning
Better training corpora for embedding learning in MMT domain
Incorporate visual features into contextualized word embeddings
Thank you! NAACL SRW 2019, Minneapolis | []
GEM-SciDuet-train-23#paper-1024#slide-18 | 1024 | Multimodal Machine Translation with Embedding Prediction | Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for translating rare words. In NMT, pretrained word embeddings have been shown to improve NMT of low-resource domains, and a search-based approach is proposed to address the rare word problem. In this study, we effectively combine these two approaches in the context of multimodal NMT and explore how we can take full advantage of pretrained word embeddings to better translate rare words. We report overall performance improvements of 1.24 METEOR and 2.49 BLEU and achieve an improvement of 7.67 F-score for rare word translation. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"paper_content_text": [
"Introduction In multimodal machine translation, a target sentence is translated from a source sentence together with related nonlinguistic information such as visual information.",
"Recently, neural machine translation (NMT) has superseded traditional statistical machine translation owing to the introduction of the attentional encoder-decoder model, in which machine translation is treated as a sequence-tosequence learning problem and is trained to pay attention to the source sentence while decoding (Bahdanau et al., 2015) .",
"Most previous studies on multimodal machine translation are classified into two categories: visual feature adaptation and data augmentation.",
"In visual feature adaptation, multitask learning (Elliott and Kádár, 2017) and feature integration architecture (Caglayan et al., 2017a; Calixto et al., 2017) are proposed to improve neural network models.",
"Data augmentation aims to deal with the fact that the size of available datasets for multimodal translation is quite small.",
"To alleviate this problem, parallel corpora without a visual source (Elliott and Kádár, 2017; Grönroos et al., 2018) and pseudo-parallel corpora obtained using back-translation (Helcl et al., 2018) are used as additional learning resources.",
"Due to the availability of parallel corpora for NMT, Qi et al.",
"(2018) suggested that initializing the encoder with pretrained word embedding improves the translation performance in low-resource language pairs.",
"Recently, Kumar and Tsvetkov (2019) proposed an NMT model that predicts the embedding of output words and searches for the output word instead of calculating the probability using the softmax function.",
"This model performed as well as conventional NMT, and it significantly improved the translation accuracy for rare words.",
"In this study, we introduce an NMT model with embedding prediction for multimodal machine translation that fully uses pretrained embeddings to improve the translation accuracy for rare words.",
"The main contributions of this study are as follows: 1.",
"We propose a novel multimodal machine translation model with embedding prediction and explore various settings to take full advantage of word embeddings.",
"2.",
"We show that pretrained word embeddings improve the model performance, especially when translating rare words.",
"Multimodal Machine Translation with Embedding Prediction We integrate an embedding prediction framework (Kumar and Tsvetkov, 2019) with the multimodal machine translation model and take advantage of pretrained word embeddings.",
"To highlight the effect of pretrained word embeddings and embedding prediction architecture, we adopt IMAGINA-TION (Elliott and Kádár, 2017) as a simple multimodal baseline.",
"IMAGINATION jointly learns machine translation and visual latent space models.",
"It is based on a conventional NMT model for a machine translation task.",
"In latent space learning, a source sentence and the paired image are mapped closely in the latent space.",
"We use the latent space learning model as it is, except for the preprocessing of images.",
"The models for each task share the same textual encoder in a multitask scenario.",
"The loss function for multitask learning is the linear interpolation of loss functions for each task.",
"J = λJ T (θ, ϕ T ) + (1 − λ)J V (θ, ϕ V ) (1) where θ is the parameter of the shared encoder; ϕ T and ϕ V are parameters of the machine translation model and latent space model, respectively; and λ is the interpolation coefficient 1 .",
"Neural Machine Translation with Embedding Prediction The machine translation part in our proposed model is an extension of Bahdanau et al.",
"(2015) .",
"However, instead of using the probability of each word in the decoder, it searches for output words based on their similarity with word embeddings.",
"Once the model predicts a word embedding, its nearest neighbor in the pretrained word embeddings is selected as the system output.",
"e j = tanh(W o s j + b o ) (2) y j = argmin w∈V {d(ê j , e(w))} (3) where s j ,ê j , andŷ j are the hidden state of the decoder, predicted embedding, and system output, respectively, for each timestep j in the decoding process.",
"e(w) is the pretrained word embedding for a target word w. d is a distance function that is used to calculate the word similarity.",
"W o and b o are parameters of the output layer.",
"We adopt margin-based ranking loss (Lazaridou et al., 2015) as the loss function 1 We use λ = 0.01 in the experiment.",
"of the machine translation model.",
"J T (θ, ϕ T ) = M ∑ j max{0, γ + d(ê j , e(w − j )) −d(ê j , e(y j ))} (4) w − j = argmax w∈V {d(ê j , e(w)) − d(ê j , e(y j )) (5) where M is the length of a target sentence and γ is the margin 2 .",
"w − j is a negative sample that is close to the predicted embedding and far from the gold embedding as measuring using d. Pretrained word embeddings are also used to initialize the embedding layers of the encoder and decoder, and the output layer of the decoder.",
"The embedding layer of the encoder is updated during training, and the embedding layer of the decoder is fixed to the initial value.",
"Visual Latent Space Learning The decoder of this model calculates the average vector over the hidden states h i in the encoder and maps it to the final vectorv in the latent space.",
"v = tanh(W v · 1 N N ∑ i h i ) (6) where N is the length of an input sentence and W v ∈ R N * M is learned parameter of the model.",
"We use max margin loss as the loss function; it learns to make corresponding latent vectors of a source sentence and the paired image closer.",
"J V (θ, ϕ V ) = ∑ v ′ ̸ =v max{0, α+d(v, v ′ )−d(v, v)} (7) where v is the latent vector of the paired image; v ′ , the image vector for other examples; and α, the margin that adjusts the sparseness of each vector in the latent space 3 .",
"Experiment Dataset We train, validate, and test our model with the Multi30k (Elliott et al., 2016) dataset published in the WMT17 Shared Task.",
"We choose French as the source language and English as the target one.",
"The vocabulary size of both the source and the target languages is 10,000.",
"Following Kumar and Tsvetkov (2019) , byte pair encoding (Sennrich et al., 2016) is not applied.",
"The source and target sentences are preprocessed with lower-casing, tokenizing and normalizing the punctuation.",
"Visual features are extracted using pretrained ResNet (He et al., 2016) .",
"Specifically, we encode all images in Multi30k with ResNet-50 and pick out the hidden state in the pool5 layer as a 2,048dimension visual feature.",
"We calculate the centroid of visual features in the training dataset as the bias vector and subtract the bias vector from all visual features in the training, validation and test datasets.",
"Model The model is implemented using nmtpytorch toolkit v3.0.0 4 (Caglayan et al., 2017b) .",
"The shared encoder has 256 hidden dimensions, and therefore the bidirectional GRU has 512 dimensions.",
"The decoder in NMT model has 256 hidden dimension.",
"The input word embedding size and output vector size is 300 each.",
"The latent space vector size is 2,048.",
"We used the Adam optimizer with learning rate of 0.0004.",
"The gradient norm is clipped to 1.0.",
"The dropout rate is 0.3.",
"BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) are used as performance metrics.",
"We also evaluated the models using the F-score of each word; this shows how accurately each word is translated into target sentences, as was proposed in Kumar and Tsvetkov (2019) .",
"The F-score is calculated as the harmonic mean of the precision (fraction of produced sentences with a word that is in the references sentences) and the recall (fraction of reference sentences with a word that is in model outputs).",
"We ran the experiment three times with different random seeds and obtained the mean and variance for each model.",
"To clarify the effect of pretrained embeddings on machine translation, we also initialized the encoder and decoder of our models with random values instead of pretrained embeddings, and investigated the effect of fixing decoder embeddings.",
"Word Embedding We use publicly available pretrained Fast-Text (Bojanowski et al., 2017) embeddings (Grave et al., 2018 trained on Wikipedia and Common Crawl using the CBOW algorithm, and the dimension is 300.",
"The embedding for unknown words is calculated as the average embedding over words that are a part of pretrained embeddings but are not included in the vocabularies.",
"Both the target and the source embeddings are preprocessed according to Mu and Viswanath (2018) , in which all embeddings are debiased to make the average embedding into a zero vector and the top five principal components are subtracted for each embedding.",
"Table 1 shows the overall performance of the proposed and baseline models.",
"Compared with randomly initialized models, our model outperforms the text-only baseline by +2.49 BLEU and +1.24 METEOR, and the multimodal baseline by +2.31 BLEU and +1.09 METEOR, respectively.",
"While pretrained embeddings improve NMT/IMAGINATION models as well, the improved models are still beyond our model.",
"Table 2 shows the results of ablation experiments of the initialization and fine-tuning methods.",
"The pretrained embedding models outperform other models by up to +2.77 BLEU and +1.37 METEOR.",
"Results Discussion Rare Words Our model shows a great improvement for low-frequency words.",
"Figure 1 shows a variety of F-score according to the word frequency in the training corpus.",
"Whereas IMAG-INATION improves the translation accuracy uniformly, our model shows substantial improvement for rare words.",
"Word Embeddings Furthermore, we found that decoder embeddings must be fixed to improve multimodal machine translation with embedding prediction.",
"When we allow fine-tuning on the embedding layer, the performance drops below the baseline.",
"It seems that fine-tuning embeddings in NMT with embedding prediction makes the model search for common words more than expected, thus preventing it from predicting rare words.",
"More interestingly, using pretrained FastText embeddings on the decoder rather than the encoder improves performance.",
"This finding is different from Qi et al.",
"(2018) , in which only the encoder benefits from pretrained embeddings.",
"Compared with the model initialized with a random value, initializing the decoder with the embedding results in an increase of +1.80 BLEU; in contrast, initializing the encoder results in an increase of only +0.11 BLEU.",
"This is caused by the multitask learning model that trains the encoder with images and takes it away from what the embedding prediction model wants to learn from the sentences.",
"Visual Feature We also investigated the effect of images and its preprocessing in NMT with embedding prediction ( Table 3 ).",
"The interesting result is that multitask learning with raw images would not help the predictive model.",
"Debiasing images is an essential preprocessing for NMT with embedding prediction to use images effectively in multitask learning scenario.",
"Translation Examples In Table 4 , we show French-English translations generated by different models.",
"In the left example, our proposed model correctly translates \"voûte\" into \"archway\" (occurs five times in the training set), Although the baseline model translates it to its synonym having higher frequency (nine times for \"arch\" and 12 times for \"monument\").",
"At the same time, our outputs tend to be less fluent for long sentences.",
"The right example shows that our model translates some words (\"patterned\" and \"carpet\") more concisely; however, it generates a less fluent sentence than the baseline.",
"Related Works Most studies on multimodal machine translation are divided into two categories: visual feature adaptation and data augmentation.",
"First, in visual feature adaptation, visual features are extracted using image processing techniques and then integrated into a machine translation model.",
"In contrast, most multitask learning models use latent space learning as their auxiliary task.",
"Elliott and Kádár (2017) proposed the IMAGINATION model that learns to construct the corresponding visual feature from the textual hidden states of a source sentence.",
"The visual model shares its encoder with the machine translation model; this helps in improving the textual encoder.",
"Second, in data augmentation, parallel corpora without images are widely used as additional train-Image Source un homme en vélo pédale devant une voûte .",
"quatre hommes , dont trois portent des kippas , sont assis sur un tapisà motifs bleu et vert olive .",
"Reference a man on a bicycle pedals through an archway .",
"four men , three of whom are wearing prayer caps , are sitting on a blue and olive green patterned mat .",
"NMT a man on a bicycle pedal past an arch .",
"four men , three of whom are wearing aprons , are sitting on a blue and green speedo carpet .",
"IMAG+ a man on a bicycle pedals outside a monument .",
"four men , three of them are wearing alaska , are sitting on a blue patterned carpet and green green seating .",
"Ours a man on a bicycle pedals in front of a archway .",
"four men , three are wearing these are wearing these are sitting on a blue and green patterned mat .",
"ing data.",
"Grönroos et al.",
"(2018) trained their multimodal model with parallel corpora and achieved state-of-the-art performance in the WMT 2018.",
"However, the use of monolingual corpora has seldom been studied in multimodal machine translation.",
"Our study proposes using word embeddings that are pretrained on monolingual corpora.",
"Conclusion We have proposed a multimodal machine translation model with embedding prediction and showed that pretrained word embeddings improve the performance in multimodal translation tasks, especially when translating rare words.",
"In the future, we will tailor the training corpora for embedding learning, especially for handling the embedding for unknown words in the context of multimodal machine translation.",
"We will also incorporate visual features into contextualized word embeddings."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3.1",
"3.2",
"3.3",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Multimodal Machine Translation with Embedding Prediction",
"Neural Machine Translation with Embedding Prediction",
"Visual Latent Space Learning",
"Dataset",
"Model",
"Word Embedding",
"Discussion",
"Related Works",
"Conclusion"
]
} | GEM-SciDuet-train-23#paper-1024#slide-18 | Translation Example | un homme en velo pedale devant une voute .
a man on a bicycle pedals through an archway .
a man on a bicycle pedal past an arch .
Source a man on a bicycle pedals outside a monument .
IMAGINATION a man on a bicycle pedals in front of a archway .
NAACL SRW 2019, Minneapolis | un homme en velo pedale devant une voute .
a man on a bicycle pedals through an archway .
a man on a bicycle pedal past an arch .
Source a man on a bicycle pedals outside a monument .
IMAGINATION a man on a bicycle pedals in front of a archway .
NAACL SRW 2019, Minneapolis | [] |
GEM-SciDuet-train-23#paper-1024#slide-19 | 1024 | Multimodal Machine Translation with Embedding Prediction | Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for translating rare words. In NMT, pretrained word embeddings have been shown to improve NMT of low-resource domains, and a search-based approach is proposed to address the rare word problem. In this study, we effectively combine these two approaches in the context of multimodal NMT and explore how we can take full advantage of pretrained word embeddings to better translate rare words. We report overall performance improvements of 1.24 METEOR and 2.49 BLEU and achieve an improvement of 7.67 F-score for rare word translation. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109
],
"paper_content_text": [
"Introduction In multimodal machine translation, a target sentence is translated from a source sentence together with related nonlinguistic information such as visual information.",
"Recently, neural machine translation (NMT) has superseded traditional statistical machine translation owing to the introduction of the attentional encoder-decoder model, in which machine translation is treated as a sequence-tosequence learning problem and is trained to pay attention to the source sentence while decoding (Bahdanau et al., 2015) .",
"Most previous studies on multimodal machine translation are classified into two categories: visual feature adaptation and data augmentation.",
"In visual feature adaptation, multitask learning (Elliott and Kádár, 2017) and feature integration architecture (Caglayan et al., 2017a; Calixto et al., 2017) are proposed to improve neural network models.",
"Data augmentation aims to deal with the fact that the size of available datasets for multimodal translation is quite small.",
"To alleviate this problem, parallel corpora without a visual source (Elliott and Kádár, 2017; Grönroos et al., 2018) and pseudo-parallel corpora obtained using back-translation (Helcl et al., 2018) are used as additional learning resources.",
"Due to the availability of parallel corpora for NMT, Qi et al.",
"(2018) suggested that initializing the encoder with pretrained word embedding improves the translation performance in low-resource language pairs.",
"Recently, Kumar and Tsvetkov (2019) proposed an NMT model that predicts the embedding of output words and searches for the output word instead of calculating the probability using the softmax function.",
"This model performed as well as conventional NMT, and it significantly improved the translation accuracy for rare words.",
"In this study, we introduce an NMT model with embedding prediction for multimodal machine translation that fully uses pretrained embeddings to improve the translation accuracy for rare words.",
"The main contributions of this study are as follows: 1.",
"We propose a novel multimodal machine translation model with embedding prediction and explore various settings to take full advantage of word embeddings.",
"2.",
"We show that pretrained word embeddings improve the model performance, especially when translating rare words.",
"Multimodal Machine Translation with Embedding Prediction We integrate an embedding prediction framework (Kumar and Tsvetkov, 2019) with the multimodal machine translation model and take advantage of pretrained word embeddings.",
"To highlight the effect of pretrained word embeddings and embedding prediction architecture, we adopt IMAGINA-TION (Elliott and Kádár, 2017) as a simple multimodal baseline.",
"IMAGINATION jointly learns machine translation and visual latent space models.",
"It is based on a conventional NMT model for a machine translation task.",
"In latent space learning, a source sentence and the paired image are mapped closely in the latent space.",
"We use the latent space learning model as it is, except for the preprocessing of images.",
"The models for each task share the same textual encoder in a multitask scenario.",
"The loss function for multitask learning is the linear interpolation of loss functions for each task.",
"J = λJ T (θ, ϕ T ) + (1 − λ)J V (θ, ϕ V ) (1) where θ is the parameter of the shared encoder; ϕ T and ϕ V are parameters of the machine translation model and latent space model, respectively; and λ is the interpolation coefficient 1 .",
"Neural Machine Translation with Embedding Prediction The machine translation part in our proposed model is an extension of Bahdanau et al.",
"(2015) .",
"However, instead of using the probability of each word in the decoder, it searches for output words based on their similarity with word embeddings.",
"Once the model predicts a word embedding, its nearest neighbor in the pretrained word embeddings is selected as the system output.",
"e j = tanh(W o s j + b o ) (2) y j = argmin w∈V {d(ê j , e(w))} (3) where s j ,ê j , andŷ j are the hidden state of the decoder, predicted embedding, and system output, respectively, for each timestep j in the decoding process.",
"e(w) is the pretrained word embedding for a target word w. d is a distance function that is used to calculate the word similarity.",
"W o and b o are parameters of the output layer.",
"We adopt margin-based ranking loss (Lazaridou et al., 2015) as the loss function 1 We use λ = 0.01 in the experiment.",
"of the machine translation model.",
"J T (θ, ϕ T ) = M ∑ j max{0, γ + d(ê j , e(w − j )) −d(ê j , e(y j ))} (4) w − j = argmax w∈V {d(ê j , e(w)) − d(ê j , e(y j )) (5) where M is the length of a target sentence and γ is the margin 2 .",
"w − j is a negative sample that is close to the predicted embedding and far from the gold embedding as measuring using d. Pretrained word embeddings are also used to initialize the embedding layers of the encoder and decoder, and the output layer of the decoder.",
"The embedding layer of the encoder is updated during training, and the embedding layer of the decoder is fixed to the initial value.",
"Visual Latent Space Learning The decoder of this model calculates the average vector over the hidden states h i in the encoder and maps it to the final vectorv in the latent space.",
"v = tanh(W v · 1 N N ∑ i h i ) (6) where N is the length of an input sentence and W v ∈ R N * M is learned parameter of the model.",
"We use max margin loss as the loss function; it learns to make corresponding latent vectors of a source sentence and the paired image closer.",
"J V (θ, ϕ V ) = ∑ v ′ ̸ =v max{0, α+d(v, v ′ )−d(v, v)} (7) where v is the latent vector of the paired image; v ′ , the image vector for other examples; and α, the margin that adjusts the sparseness of each vector in the latent space 3 .",
"Experiment Dataset We train, validate, and test our model with the Multi30k (Elliott et al., 2016) dataset published in the WMT17 Shared Task.",
"We choose French as the source language and English as the target one.",
"The vocabulary size of both the source and the target languages is 10,000.",
"Following Kumar and Tsvetkov (2019) , byte pair encoding (Sennrich et al., 2016) is not applied.",
"The source and target sentences are preprocessed with lower-casing, tokenizing and normalizing the punctuation.",
"Visual features are extracted using pretrained ResNet (He et al., 2016) .",
"Specifically, we encode all images in Multi30k with ResNet-50 and pick out the hidden state in the pool5 layer as a 2,048dimension visual feature.",
"We calculate the centroid of visual features in the training dataset as the bias vector and subtract the bias vector from all visual features in the training, validation and test datasets.",
"Model The model is implemented using nmtpytorch toolkit v3.0.0 4 (Caglayan et al., 2017b) .",
"The shared encoder has 256 hidden dimensions, and therefore the bidirectional GRU has 512 dimensions.",
"The decoder in NMT model has 256 hidden dimension.",
"The input word embedding size and output vector size is 300 each.",
"The latent space vector size is 2,048.",
"We used the Adam optimizer with learning rate of 0.0004.",
"The gradient norm is clipped to 1.0.",
"The dropout rate is 0.3.",
"BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) are used as performance metrics.",
"We also evaluated the models using the F-score of each word; this shows how accurately each word is translated into target sentences, as was proposed in Kumar and Tsvetkov (2019) .",
"The F-score is calculated as the harmonic mean of the precision (fraction of produced sentences with a word that is in the references sentences) and the recall (fraction of reference sentences with a word that is in model outputs).",
"We ran the experiment three times with different random seeds and obtained the mean and variance for each model.",
"To clarify the effect of pretrained embeddings on machine translation, we also initialized the encoder and decoder of our models with random values instead of pretrained embeddings, and investigated the effect of fixing decoder embeddings.",
"Word Embedding We use publicly available pretrained Fast-Text (Bojanowski et al., 2017) embeddings (Grave et al., 2018 trained on Wikipedia and Common Crawl using the CBOW algorithm, and the dimension is 300.",
"The embedding for unknown words is calculated as the average embedding over words that are a part of pretrained embeddings but are not included in the vocabularies.",
"Both the target and the source embeddings are preprocessed according to Mu and Viswanath (2018) , in which all embeddings are debiased to make the average embedding into a zero vector and the top five principal components are subtracted for each embedding.",
"Table 1 shows the overall performance of the proposed and baseline models.",
"Compared with randomly initialized models, our model outperforms the text-only baseline by +2.49 BLEU and +1.24 METEOR, and the multimodal baseline by +2.31 BLEU and +1.09 METEOR, respectively.",
"While pretrained embeddings improve NMT/IMAGINATION models as well, the improved models are still beyond our model.",
"Table 2 shows the results of ablation experiments of the initialization and fine-tuning methods.",
"The pretrained embedding models outperform other models by up to +2.77 BLEU and +1.37 METEOR.",
"Results Discussion Rare Words Our model shows a great improvement for low-frequency words.",
"Figure 1 shows a variety of F-score according to the word frequency in the training corpus.",
"Whereas IMAG-INATION improves the translation accuracy uniformly, our model shows substantial improvement for rare words.",
"Word Embeddings Furthermore, we found that decoder embeddings must be fixed to improve multimodal machine translation with embedding prediction.",
"When we allow fine-tuning on the embedding layer, the performance drops below the baseline.",
"It seems that fine-tuning embeddings in NMT with embedding prediction makes the model search for common words more than expected, thus preventing it from predicting rare words.",
"More interestingly, using pretrained FastText embeddings on the decoder rather than the encoder improves performance.",
"This finding is different from Qi et al.",
"(2018) , in which only the encoder benefits from pretrained embeddings.",
"Compared with the model initialized with a random value, initializing the decoder with the embedding results in an increase of +1.80 BLEU; in contrast, initializing the encoder results in an increase of only +0.11 BLEU.",
"This is caused by the multitask learning model that trains the encoder with images and takes it away from what the embedding prediction model wants to learn from the sentences.",
"Visual Feature We also investigated the effect of images and its preprocessing in NMT with embedding prediction ( Table 3 ).",
"The interesting result is that multitask learning with raw images would not help the predictive model.",
"Debiasing images is an essential preprocessing for NMT with embedding prediction to use images effectively in multitask learning scenario.",
"Translation Examples In Table 4 , we show French-English translations generated by different models.",
"In the left example, our proposed model correctly translates \"voûte\" into \"archway\" (occurs five times in the training set), Although the baseline model translates it to its synonym having higher frequency (nine times for \"arch\" and 12 times for \"monument\").",
"At the same time, our outputs tend to be less fluent for long sentences.",
"The right example shows that our model translates some words (\"patterned\" and \"carpet\") more concisely; however, it generates a less fluent sentence than the baseline.",
"Related Works Most studies on multimodal machine translation are divided into two categories: visual feature adaptation and data augmentation.",
"First, in visual feature adaptation, visual features are extracted using image processing techniques and then integrated into a machine translation model.",
"In contrast, most multitask learning models use latent space learning as their auxiliary task.",
"Elliott and Kádár (2017) proposed the IMAGINATION model that learns to construct the corresponding visual feature from the textual hidden states of a source sentence.",
"The visual model shares its encoder with the machine translation model; this helps in improving the textual encoder.",
"Second, in data augmentation, parallel corpora without images are widely used as additional train-Image Source un homme en vélo pédale devant une voûte .",
"quatre hommes , dont trois portent des kippas , sont assis sur un tapisà motifs bleu et vert olive .",
"Reference a man on a bicycle pedals through an archway .",
"four men , three of whom are wearing prayer caps , are sitting on a blue and olive green patterned mat .",
"NMT a man on a bicycle pedal past an arch .",
"four men , three of whom are wearing aprons , are sitting on a blue and green speedo carpet .",
"IMAG+ a man on a bicycle pedals outside a monument .",
"four men , three of them are wearing alaska , are sitting on a blue patterned carpet and green green seating .",
"Ours a man on a bicycle pedals in front of a archway .",
"four men , three are wearing these are wearing these are sitting on a blue and green patterned mat .",
"ing data.",
"Grönroos et al.",
"(2018) trained their multimodal model with parallel corpora and achieved state-of-the-art performance in the WMT 2018.",
"However, the use of monolingual corpora has seldom been studied in multimodal machine translation.",
"Our study proposes using word embeddings that are pretrained on monolingual corpora.",
"Conclusion We have proposed a multimodal machine translation model with embedding prediction and showed that pretrained word embeddings improve the performance in multimodal translation tasks, especially when translating rare words.",
"In the future, we will tailor the training corpora for embedding learning, especially for handling the embedding for unknown words in the context of multimodal machine translation.",
"We will also incorporate visual features into contextualized word embeddings."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3.1",
"3.2",
"3.3",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Multimodal Machine Translation with Embedding Prediction",
"Neural Machine Translation with Embedding Prediction",
"Visual Latent Space Learning",
"Dataset",
"Model",
"Word Embedding",
"Discussion",
"Related Works",
"Conclusion"
]
} | GEM-SciDuet-train-23#paper-1024#slide-19 | Translation Example long | quatre hommes , dont trois portent des kippas , sont assis sur un tapis a motifs bleu et vert olive .
four men , three of whom are wearing prayer caps , are sitting on a blue and olive green patterned mat .
four men , three of whom are wearing aprons , are sitting on a blue and green speedo carpet .
four men , three of them are wearing alaska , are sitting on a blue patterned carpet and green
Reference green seating .
four men , three are wearing these are wearing these are sitting on a blue and green patterned
NAACL SRW 2019, Minneapolis | quatre hommes , dont trois portent des kippas , sont assis sur un tapis a motifs bleu et vert olive .
four men , three of whom are wearing prayer caps , are sitting on a blue and olive green patterned mat .
four men , three of whom are wearing aprons , are sitting on a blue and green speedo carpet .
four men , three of them are wearing alaska , are sitting on a blue patterned carpet and green
Reference green seating .
four men , three are wearing these are wearing these are sitting on a blue and green patterned
NAACL SRW 2019, Minneapolis | [] |
GEM-SciDuet-train-24#paper-1025#slide-0 | 1025 | A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings | Recent work has managed to learn crosslingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training. However, their evaluation has focused on favorable conditions, using comparable corpora or closely-related languages, and we show that they often fail in more realistic scenarios. This work proposes an alternative approach based on a fully unsupervised initialization that explicitly exploits the structural similarity of the embeddings, and a robust self-learning algorithm that iteratively improves this solution. Our method succeeds in all tested scenarios and obtains the best published results in standard datasets, even surpassing previous supervised systems. Our implementation is released as an open source project at https://github. com/artetxem/vecmap. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194
],
"paper_content_text": [
"Introduction Cross-lingual embedding mappings have shown to be an effective way to learn bilingual word embeddings (Mikolov et al., 2013; .",
"The underlying idea is to independently train the embeddings in different languages using monolingual corpora, and then map them to a shared space through a linear transformation.",
"This allows to learn high-quality cross-lingual representations without expensive supervision, opening new research avenues like unsupervised neural machine translation (Artetxe et al., 2018b; .",
"While most embedding mapping methods rely on a small seed dictionary, adversarial training has recently produced exciting results in fully unsu-pervised settings (Zhang et al., 2017a,b; .",
"However, their evaluation has focused on particularly favorable conditions, limited to closely-related languages or comparable Wikipedia corpora.",
"When tested on more realistic scenarios, we find that they often fail to produce meaningful results.",
"For instance, none of the existing methods works in the standard English-Finnish dataset from Artetxe et al.",
"(2017) , obtaining translation accuracies below 2% in all cases (see Section 5).",
"On another strand of work, Artetxe et al.",
"(2017) showed that an iterative self-learning method is able to bootstrap a high quality mapping from very small seed dictionaries (as little as 25 pairs of words).",
"However, their analysis reveals that the self-learning method gets stuck in poor local optima when the initial solution is not good enough, thus failing for smaller training dictionaries.",
"In this paper, we follow this second approach and propose a new unsupervised method to build an initial solution without the need of a seed dictionary, based on the observation that, given the similarity matrix of all words in the vocabulary, each word has a different distribution of similarity values.",
"Two equivalent words in different languages should have a similar distribution, and we can use this fact to induce the initial set of word pairings (see Figure 1 ).",
"We combine this initialization with a more robust self-learning method, which is able to start from the weak initial solution and iteratively improve the mapping.",
"Coupled together, we provide a fully unsupervised crosslingual mapping method that is effective in realistic settings, converges to a good solution in all cases tested, and sets a new state-of-the-art in bilingual lexicon extraction, even surpassing previous supervised methods.",
"Figure 1 : Motivating example for our unsupervised initialization method, showing the similarity distributions of three words (corresponding to the smoothed density estimates from the normalized square root of the similarity matrices as defined in Section 3.2).",
"Equivalent translations (two and due) have more similar distributions than non-related words (two and cane -meaning dog).",
"This observation is used to build an initial solution that is later improved through self-learning.",
"Related work Cross-lingual embedding mapping methods work by independently training word embeddings in two languages, and then mapping them to a shared space using a linear transformation.",
"Most of these methods are supervised, and use a bilingual dictionary of a few thousand entries to learn the mapping.",
"Existing approaches can be classified into regression methods, which map the embeddings in one language using a leastsquares objective (Mikolov et al., 2013; Shigeto et al., 2015; , canonical methods, which map the embeddings in both languages to a shared space using canonical correlation analysis and extensions of it (Faruqui and Dyer, 2014; Lu et al., 2015) , orthogonal methods, which map the embeddings in one or both languages under the constraint of the transformation being orthogonal (Xing et al., 2015; Artetxe et al., 2016; Zhang et al., 2016; Smith et al., 2017) , and margin methods, which map the embeddings in one language to maximize the margin between the correct translations and the rest of the candidates .",
"Artetxe et al.",
"(2018a) showed that many of them could be generalized as part of a multi-step framework of linear transformations.",
"A related research line is to adapt these methods to the semi-supervised scenario, where the training dictionary is much smaller and used as part of a bootstrapping process.",
"While similar ideas where already explored for traditional count-based vector space models (Peirsman and Padó, 2010; Vulić and Moens, 2013) , Artetxe et al.",
"(2017) brought this approach to pre-trained low-dimensional word embeddings, which are more widely used nowadays.",
"More concretely, they proposed a selflearning approach that alternates the mapping and dictionary induction steps iteratively, obtaining results that are comparable to those of supervised methods when starting with only 25 word pairs.",
"A practical approach for reducing the need of bilingual supervision is to design heuristics to build the seed dictionary.",
"The role of the seed lexicon in learning cross-lingual embedding mappings is analyzed in depth by Vulić and Korhonen (2016) , who propose using document-aligned corpora to extract the training dictionary.",
"A more common approach is to rely on shared words and cognates (Peirsman and Padó, 2010; Smith et al., 2017) , while Artetxe et al.",
"(2017) go further and restrict themselves to shared numerals.",
"However, while these approaches are meant to eliminate the need of bilingual data in practice, they also make strong assumptions on the writing systems of languages (e.g.",
"that they all use a common alphabet or Arabic numerals).",
"Closer to our work, a recent line of fully unsupervised approaches drops these assumptions completely, and attempts to learn cross-lingual embedding mappings based on distributional information alone.",
"For that purpose, existing methods rely on adversarial training.",
"This was first proposed by Miceli Barone (2016), who combine an encoder that maps source language embeddings into the target language, a decoder that reconstructs the source language embeddings from the mapped embeddings, and a discriminator that discriminates between the mapped embeddings and the true target language embed-dings.",
"Despite promising, they conclude that their model \"is not competitive with other cross-lingual representation approaches\".",
"Zhang et al.",
"(2017a) use a very similar architecture, but incorporate additional techniques like noise injection to aid training and report competitive results on bilingual lexicon extraction.",
"drop the reconstruction component, regularize the mapping to be orthogonal, and incorporate an iterative refinement process akin to self-learning, reporting very strong results on a large bilingual lexicon extraction dataset.",
"Finally, Zhang et al.",
"(2017b) adopt the earth mover's distance for training, optimized through a Wasserstein generative adversarial network followed by an alternating optimization procedure.",
"However, all this previous work used comparable Wikipedia corpora in most experiments and, as shown in Section 5, face difficulties in more challenging settings.",
"Proposed method Let X and Z be the word embedding matrices in two languages, so that their ith row X i * and Z i * denote the embeddings of the ith word in their respective vocabularies.",
"Our goal is to learn the linear transformation matrices W X and W Z so the mapped embeddings XW X and ZW Z are in the same cross-lingual space.",
"At the same time, we aim to build a dictionary between both languages, encoded as a sparse matrix D where D ij = 1 if the jth word in the target language is a translation of the ith word in the source language.",
"Our proposed method consists of four sequential steps: a pre-processing that normalizes the embeddings ( §3.1), a fully unsupervised initialization scheme that creates an initial solution ( §3.2), a robust self-learning procedure that iteratively improves this solution ( §3.3), and a final refinement step that further improves the resulting mapping through symmetric re-weighting ( §3.4).",
"Embedding normalization Our method starts with a pre-processing that length normalizes the embeddings, then mean centers each dimension, and then length normalizes them again.",
"The first two steps have been shown to be beneficial in previous work (Artetxe et al., 2016) , while the second length normalization guarantees the final embeddings to have a unit length.",
"As a result, the dot product of any two embeddings is equivalent to their cosine similarity and directly related to their Euclidean distance 1 , and can be taken as a measure of their similarity.",
"Fully unsupervised initialization The underlying difficulty of the mapping problem in its unsupervised variant is that the word embedding matrices X and Z are unaligned across both axes: neither the ith vocabulary item X i * and Z i * nor the jth dimension of the embeddings X * j and Z * j are aligned, so there is no direct correspondence between both languages.",
"In order to overcome this challenge and build an initial solution, we propose to first construct two alternative representations X ′ and Z ′ that are aligned across their jth dimension X ′ * j and Z ′ * j , which can later be used to build an initial dictionary that aligns their respective vocabularies.",
"Our approach is based on a simple idea: while the axes of the original embeddings X and Z are different in nature, both axes of their corresponding similarity matrices M X = XX T and M Z = ZZ T correspond to words, which can be exploited to reduce the mismatch to a single axis.",
"More concretely, assuming that the embedding spaces are perfectly isometric, the similarity matrices M X and M Z would be equivalent up to a permutation of their rows and columns, where the permutation in question defines the dictionary across both languages.",
"In practice, the isometry requirement will not hold exactly, but it can be assumed to hold approximately, as the very same problem of mapping two embedding spaces without supervision would otherwise be hopeless.",
"Based on that, one could try every possible permutation of row and column indices to find the best match between M X and M Z , but the resulting combinatorial explosion makes this approach intractable.",
"In order to overcome this problem, we propose to first sort the values in each row of M X and M Z , resulting in matrices sorted(M X ) and sorted(M Z ) 2 .",
"Under the strict isometry condition, equivalent words would get the exact same vector across languages, and thus, given a word and its row in sorted(M X ), one could apply nearest neighbor retrieval over the rows of sorted(M Z ) to find its corresponding translation.",
"On a final note, given the singular value decomposition X = U SV T , the similarity matrix is M X = U S 2 U T .",
"As such, its square root √ M X = U SU T is closer in nature to the original embeddings, and we also find it to work better in practice.",
"We thus compute sorted( √ M X ) and sorted( √ M Z ) and normalize them as described in Section 3.1, yielding the two matrices X ′ and Z ′ that are later used to build the initial solution for self-learning (see Section 3.3).",
"In practice, the isometry assumption is strong enough so the above procedure captures some cross-lingual signal.",
"In our English-Italian experiments, the average cosine similarity across the gold standard translation pairs is 0.009 for a random solution, 0.582 for the optimal supervised solution, and 0.112 for the mapping resulting from this initialization.",
"While the latter is far from being useful on its own (the accuracy of the resulting dictionary is only 0.52%), it is substantially better than chance, and it works well as an initial solution for the self-learning method described next.",
"Robust self-learning Previous work has shown that self-learning can learn high-quality bilingual embedding mappings starting with as little as 25 word pairs (Artetxe et al., 2017) .",
"In this method, training iterates through the following two steps until convergence: 1.",
"Compute the optimal orthogonal mapping maximizing the similarities for the current dictionary D: arg max W X ,W Z i j D ij ((X i * W X ) · (Z j * W Z )) An optimal solution is given by W X = U and W Z = V , where U SV T = X T DZ is the singular value decomposition of X T DZ.",
"2.",
"Compute the optimal dictionary over the similarity matrix of the mapped embeddings XW X W T Z Z T .",
"This typically uses nearest neighbor retrieval from the source language into the target language, so D ij = 1 if j = argmax k (X i * W X ) · (Z k * W Z ) and D ij = 0 otherwise.",
"The underlying optimization objective is independent from the initial dictionary, and the algorithm is guaranteed to converge to a local optimum of it.",
"However, the method does not work if starting from a completely random solution, as it tends to get stuck in poor local optima in that case.",
"For that reason, we use the unsupervised initialization procedure at Section 3.2 to build an initial solution.",
"However, simply plugging in both methods did not work in our preliminary experiments, as the quality of this initial method is not good enough to avoid poor local optima.",
"For that reason, we next propose some key improvements in the dictionary induction step to make self-learning more robust and learn better mappings: • Stochastic dictionary induction.",
"In order to encourage a wider exploration of the search space, we make the dictionary induction stochastic by randomly keeping some elements in the similarity matrix with probability p and setting the remaining ones to 0.",
"As a consequence, the smaller the value of p is, the more the induced dictionary will vary from iteration to iteration, thus enabling to escape poor local optima.",
"So as to find a fine-grained solution once the algorithm gets into a good region, we increase this value during training akin to simulated annealing, starting with p = 0.1 and doubling this value every time the objective function at step 1 above does not improve more than ǫ = 10 −6 for 50 iterations.",
"• Frequency-based vocabulary cutoff.",
"The size of the similarity matrix grows quadratically with respect to that of the vocabularies.",
"This does not only increase the cost of computing it, but it also makes the number of possible solutions grow exponentially 3 , presumably making the optimization problem harder.",
"Given that less frequent words can be expected to be noisier, we propose to restrict the dictionary induction process to the k most frequent words in each language, where we find k = 20, 000 to work well in practice.",
"• CSLS retrieval.",
"showed that nearest neighbor suffers from the hubness problem.",
"This phenomenon is known to occur as an effect of the curse of dimensionality, and causes a few points (known as hubs) to be nearest neighbors of many other points (Radovanović et al., 2010a,b) .",
"Among the existing solutions to penalize the similarity score of hubs, we adopt the Cross-domain Similarity Local Scaling (CSLS) from .",
"Given two mapped embeddings x and y, the idea of CSLS is to compute r T (x) and r S (y), the average cosine similarity of x and y for their k nearest neighbors in the other language, respectively.",
"Having done that, the corrected score CSLS(x, y) = 2 cos(x, y) − r T (x) − r S (y).",
"Following the authors, we set k = 10.",
"• Bidirectional dictionary induction.",
"When the dictionary is induced from the source into the target language, not all target language words will be present in it, and some will occur multiple times.",
"We argue that this might accentuate the problem of local optima, as repeated words might act as strong attractors from which it is difficult to escape.",
"In order to mitigate this issue and encourage diversity, we propose inducing the dictionary in both directions and taking their corresponding concatenation, so D = D X→Z + D Z→X .",
"In order to build the initial dictionary, we compute X ′ and Z ′ as detailed in Section 3.2 and apply the above procedure over them.",
"As the only difference, this first solution does not use the stochastic zeroing in the similarity matrix, as there is no need to encourage diversity (X ′ and Z ′ are only used once), and the threshold for vocabulary cutoff is set to k = 4, 000, so X ′ and Z ′ can fit in memory.",
"Having computed the initial dictionary, X ′ and Z ′ are discarded, and the remaining iterations are performed over the original embeddings X and Z. Symmetric re-weighting As part of their multi-step framework, Artetxe et al.",
"(2018a) showed that re-weighting the target language embeddings according to the crosscorrelation in each component greatly improved the quality of the induced dictionary.",
"Given the singular value decomposition U SV T = X T DZ, this is equivalent to taking W X = U and W Z = V S, where X and Z are previously whitened applying the linear transformations (X T X) − 1 2 and (Z T Z) − 1 2 , and later de-whitened applying U T (X T X) 1 2 U and V T (Z T Z) 1 2 V .",
"However, re-weighting also accentuates the problem of local optima when incorporated into self-learning as, by increasing the relevance of dimensions that best match for the current solution, it discourages to explore other regions of the search space.",
"For that reason, we propose using it as a final step once self-learning has converged to a good solution.",
"Unlike Artetxe et al.",
"(2018a) , we apply re-weighting symmetrically in both languages, taking W X = U S 1 2 and W Z = V S 1 2 .",
"This approach is neutral in the direction of the mapping, and gives good results as shown in our experiments.",
"Experimental settings Following common practice, we evaluate our method on bilingual lexicon extraction, which measures the accuracy of the induced dictionary in comparison to a gold standard.",
"As discussed before, previous evaluation has focused on favorable conditions.",
"In particular, existing unsupervised methods have almost exclusively been tested on Wikipedia corpora, which is comparable rather than monolingual, exposing a strong cross-lingual signal that is not available in strictly unsupervised settings.",
"In addition to that, some datasets comprise unusually small embeddings, with only 50 dimensions and around 5,000-10,000 vocabulary items (Zhang et al., 2017a,b) .",
"As the only exception, report positive results on the English-Italian dataset of in addition to their main experiments, which are carried out in Wikipedia.",
"While this dataset does use strictly monolingual corpora, it still corresponds to a pair of two relatively close indo-european languages.",
"In order to get a wider picture of how our method compares to previous work in different conditions, including more challenging settings, we carry out our experiments in the widely used dataset of and the subsequent extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a , which together comprise English-Italian, English-German, English-Finnish and English-Spanish.",
"More concretely, the dataset consists of 300-dimensional CBOW embeddings trained on WacKy crawling corpora (English, Italian, German), Common Crawl (Finnish) and WMT News Crawl (Spanish).",
"The gold standards were derived from dictionaries built from Europarl word alignments and available at OPUS (Tiedemann, 2012) , split in a test set of 1,500 entries and a training set of 5,000 that we do not use in our experiments.",
"The datasets are freely available.",
"As a non-european agglutinative language, the English-Finnish pair is particularly challeng- Zhang et al.",
"(2017a) .",
"We perform 10 runs for each method and report the best and average accuracies (%), the number of successful runs (those with >5% accuracy) and the average runtime (minutes).",
"EN-IT EN-DE EN-FI EN-ES best avg s t best avg s t best avg s t best avg s t Proposed method 48.53 48.13 10 8.9 48.47 48.19 10 7.3 33.50 32.63 10 12.9 37.60 37.33 10 9.1 Table 2 : Results of unsupervised methods on the dataset of and the extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a .",
"We perform 10 runs for each method and report the best and average accuracies (%), the number of successful runs (those with >5% accuracy) and the average runtime (minutes).",
"ing due to the linguistic distance between them.",
"For completeness, we also test our method in the Spanish-English, Italian-English and Turkish-English datasets of Zhang et al.",
"(2017a) , which consist of 50-dimensional CBOW embeddings trained on Wikipedia, as well as gold standard dictionaries 4 from Open Multilingual WordNet (Spanish-English and Italian-English) and Google Translate (Turkish-English).",
"The lower dimensionality and comparable corpora make an easier scenario, although it also contains a challenging pair of distant languages (Turkish-English).",
"Our method is implemented in Python using NumPy and CuPy.",
"Together with it, we also test the methods of Zhang et al.",
"(2017a) and using the publicly available implementations from the authors 5 .",
"Given that Zhang et al.",
"(2017a) report using a different value of their hyperparameter λ for different language pairs (λ = 10 for English-Turkish and λ = 1 for the rest), we test both values in all our experiments to 4 The test dictionaries were obtained through personal communication with the authors.",
"The rest of the language pairs were left out due to licensing issues.",
"5 Despite our efforts, Zhang et al.",
"(2017b) was left out because: 1) it does not create a one-to-one dictionary, thus difficulting direct comparison, 2) it depends on expensive proprietary software 3) its computational cost is orders of magnitude higher (running the experiments would have taken several months).",
"better understand its effect.",
"In the case of , we test both the default hyperparameters in the source code as well as those reported in the paper, with iterative refinement activated in both cases.",
"Given the instability of these methods, we perform 10 runs for each, and report the best and average accuracies, the number of successful runs (those with >5% accuracy) and the average runtime.",
"All the experiments were run in a single Nvidia Titan Xp.",
"Results and discussion We first present the main results ( §5.1), then the comparison to the state-of-the-art ( §5.2), and finally ablation tests to measure the contribution of each component ( §5.3).",
"Main results We report the results in the dataset of Zhang et al.",
"(2017a) at Table 1 .",
"As it can be seen, the proposed method performs at par with that of both in Spanish-English and Italian-English, but gets substantially better results in the more challenging Turkish-English pair.",
"While we are able to reproduce the results reported by Zhang et al.",
"(2017a) , their method gets the worst results of all by a large margin.",
"Another disadvantage of that model is that different et al.",
"(2018a) .",
"The remaining results were reported in the original papers.",
"For methods that do not require supervision, we report the average accuracy across 10 runs.",
"‡ For meaningful comparison, runs with <5% accuracy are excluded when computing the average, but note that, unlike ours, their method often gives a degenerated solution (see Table 2 ).",
"language pairs require different hyperparameters: λ = 1 works substantially better for Spanish-English and Italian-English, but only λ = 10 works for Turkish-English.",
"The results for the more challenging dataset from and the extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a are given in Table 2 .",
"In this case, our proposed method obtains the best results in all metrics for all the four language pairs tested.",
"The method of Zhang et al.",
"(2017a) does not work at all in this more challenging scenario, which is in line with the negative results reported by the authors themselves for similar conditions (only %2.53 accuracy in their large Gigaword dataset).",
"The method of also fails for English-Finnish (only 1.62% in the best run), although it is able to get positive results in some runs for the rest of language pairs.",
"Between the two configurations tested, the default hyperparameters in the code show a more stable behavior.",
"These results confirm the robustness of the proposed method.",
"While the other systems succeed in some runs and fail in others, our method converges to a good solution in all runs without excep-tion and, in fact, it is the only one getting positive results for English-Finnish.",
"In addition to being more robust, our method also obtains substantially better accuracies, surpassing previous methods by at least 1-3 points in all but the easiest pairs.",
"Moreover, our method is not sensitive to hyperparameters that are difficult to tune without a development set, which is critical in realistic unsupervised conditions.",
"At the same time, our method is significantly faster than the rest.",
"In relation to that, it is interesting that, while previous methods perform a fixed number of iterations and take practically the same time for all the different language pairs, the runtime of our method adapts to the difficulty of the task thanks to the dynamic convergence criterion of our stochastic approach.",
"This way, our method tends to take longer for more challenging language pairs (1.7 vs 0.6 minutes for es-en and tr-en in one dataset, and 12.9 vs 7.3 minutes for en-fi and en-de in the other) and, in fact, our (relative) execution times correlate surprisingly well with the linguistic distance with English (closest/fastest is German, followed by Italian/Spanish, followed by Turkish/Finnish).",
"Table 4 : Ablation test on the dataset of and the extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a .",
"We perform 10 runs for each method and report the best and average accuracies (%), the number of successful runs (those with >5% accuracy) and the average runtime (minutes).",
"Table 3 shows the results of the proposed method in comparison to previous systems, including those with different degrees of supervision.",
"We focus on the widely used English-Italian dataset of and its extensions.",
"Despite being fully unsupervised, our method achieves the best results in all language pairs but one, even surpassing previous supervised approaches.",
"The only exception is English-Finnish, where Artetxe et al.",
"Comparison with the state-of-the-art (2018a) gets marginally better results with a difference of 0.3 points, yet ours is the only unsupervised system that works for this pair.",
"At the same time, it is remarkable that the proposed system gets substantially better results than Artetxe et al.",
"(2017) , the only other system based on selflearning, with the additional advantage of being fully unsupervised.",
"Ablation test In order to better understand the role of different aspects in the proposed system, we perform an ablation test, where we separately analyze the effect of initialization, the different components of our robust self-learning algorithm, and the final symmetric re-weighting.",
"The obtained results are reported in Table 4 .",
"In concordance with previous work, our results show that self-learning does not work with random initialization.",
"However, the proposed unsupervised initialization is able to overcome this issue without the need of any additional information, performing at par with other character-level heuristics that we tested (e.g.",
"shared numerals).",
"As for the different self-learning components, we observe that the stochastic dictionary induction is necessary to overcome the problem of poor lo-cal optima for English-Finnish, although it does not make any difference for the rest of easier language pairs.",
"The frequency-based vocabulary cutoff also has a positive effect, yielding to slightly better accuracies and much faster runtimes.",
"At the same time, CSLS plays a critical role in the system, as hubness severely accentuates the problem of local optima in its absence.",
"The bidirectional dictionary induction is also beneficial, contributing to the robustness of the system as shown by English-Finnish and yielding to better accuracies in all cases.",
"Finally, these results also show that symmetric re-weighting contributes positively, bringing an improvement of around 1-2 points without any cost in the execution time.",
"Conclusions In this paper, we show that previous unsupervised mapping methods (Zhang et al., 2017a; often fail on realistic scenarios involving non-comparable corpora and/or distant languages.",
"In contrast to adversarial methods, we propose to use an initial weak mapping that exploits the structure of the embedding spaces in combination with a robust self-learning approach.",
"The results show that our method succeeds in all cases, providing the best results with respect to all previous work on unsupervised and supervised mappings.",
"The ablation analysis shows that our initial solution is instrumental for making self-learning work without supervision.",
"In order to make selflearning robust, we also added stochasticity to dictionary induction, used CSLS instead of nearest neighbor, and produced bidirectional dictionaries.",
"Results also improved using smaller in-termediate vocabularies and re-weighting the final solution.",
"Our implementation is available as an open source project at https://github.",
"com/artetxem/vecmap.",
"In the future, we would like to extend the method from the bilingual to the multilingual scenario, and go beyond the word level by incorporating embeddings of longer phrases."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"5",
"5.1",
"5.2",
"5.3",
"6"
],
"paper_header_content": [
"Introduction",
"Related work",
"Proposed method",
"Embedding normalization",
"Fully unsupervised initialization",
"Robust self-learning",
"Symmetric re-weighting",
"Experimental settings",
"Results and discussion",
"Main results",
"Comparison with the state-of-the-art",
"Ablation test",
"Conclusions"
]
} | GEM-SciDuet-train-24#paper-1025#slide-0 | Introduction | - Cross-lingual transfer learning
Previous work: - Adversarial learning - Very good results - Tested in favorable conditions - Fail in more challenging datasets
This work: - Self-learning - Even better results - Works in challenging datasets | - Cross-lingual transfer learning
Previous work: - Adversarial learning - Very good results - Tested in favorable conditions - Fail in more challenging datasets
This work: - Self-learning - Even better results - Works in challenging datasets | []
GEM-SciDuet-train-24#paper-1025#slide-1 | 1025 | A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings | Recent work has managed to learn cross-lingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training. However, their evaluation has focused on favorable conditions, using comparable corpora or closely-related languages, and we show that they often fail in more realistic scenarios. This work proposes an alternative approach based on a fully unsupervised initialization that explicitly exploits the structural similarity of the embeddings, and a robust self-learning algorithm that iteratively improves this solution. Our method succeeds in all tested scenarios and obtains the best published results in standard datasets, even surpassing previous supervised systems. Our implementation is released as an open source project at https://github.com/artetxem/vecmap. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194
],
"paper_content_text": [
"Introduction Cross-lingual embedding mappings have shown to be an effective way to learn bilingual word embeddings (Mikolov et al., 2013; .",
"The underlying idea is to independently train the embeddings in different languages using monolingual corpora, and then map them to a shared space through a linear transformation.",
"This allows to learn high-quality cross-lingual representations without expensive supervision, opening new research avenues like unsupervised neural machine translation (Artetxe et al., 2018b; .",
"While most embedding mapping methods rely on a small seed dictionary, adversarial training has recently produced exciting results in fully unsu-pervised settings (Zhang et al., 2017a,b; .",
"However, their evaluation has focused on particularly favorable conditions, limited to closely-related languages or comparable Wikipedia corpora.",
"When tested on more realistic scenarios, we find that they often fail to produce meaningful results.",
"For instance, none of the existing methods works in the standard English-Finnish dataset from Artetxe et al.",
"(2017) , obtaining translation accuracies below 2% in all cases (see Section 5).",
"On another strand of work, Artetxe et al.",
"(2017) showed that an iterative self-learning method is able to bootstrap a high quality mapping from very small seed dictionaries (as little as 25 pairs of words).",
"However, their analysis reveals that the self-learning method gets stuck in poor local optima when the initial solution is not good enough, thus failing for smaller training dictionaries.",
"In this paper, we follow this second approach and propose a new unsupervised method to build an initial solution without the need of a seed dictionary, based on the observation that, given the similarity matrix of all words in the vocabulary, each word has a different distribution of similarity values.",
"Two equivalent words in different languages should have a similar distribution, and we can use this fact to induce the initial set of word pairings (see Figure 1 ).",
"We combine this initialization with a more robust self-learning method, which is able to start from the weak initial solution and iteratively improve the mapping.",
"Coupled together, we provide a fully unsupervised crosslingual mapping method that is effective in realistic settings, converges to a good solution in all cases tested, and sets a new state-of-the-art in bilingual lexicon extraction, even surpassing previous supervised methods.",
"Figure 1 : Motivating example for our unsupervised initialization method, showing the similarity distributions of three words (corresponding to the smoothed density estimates from the normalized square root of the similarity matrices as defined in Section 3.2).",
"Equivalent translations (two and due) have more similar distributions than non-related words (two and cane -meaning dog).",
"This observation is used to build an initial solution that is later improved through self-learning.",
"Related work Cross-lingual embedding mapping methods work by independently training word embeddings in two languages, and then mapping them to a shared space using a linear transformation.",
"Most of these methods are supervised, and use a bilingual dictionary of a few thousand entries to learn the mapping.",
"Existing approaches can be classified into regression methods, which map the embeddings in one language using a leastsquares objective (Mikolov et al., 2013; Shigeto et al., 2015; , canonical methods, which map the embeddings in both languages to a shared space using canonical correlation analysis and extensions of it (Faruqui and Dyer, 2014; Lu et al., 2015) , orthogonal methods, which map the embeddings in one or both languages under the constraint of the transformation being orthogonal (Xing et al., 2015; Artetxe et al., 2016; Zhang et al., 2016; Smith et al., 2017) , and margin methods, which map the embeddings in one language to maximize the margin between the correct translations and the rest of the candidates .",
"Artetxe et al.",
"(2018a) showed that many of them could be generalized as part of a multi-step framework of linear transformations.",
"A related research line is to adapt these methods to the semi-supervised scenario, where the training dictionary is much smaller and used as part of a bootstrapping process.",
"While similar ideas where already explored for traditional count-based vector space models (Peirsman and Padó, 2010; Vulić and Moens, 2013) , Artetxe et al.",
"(2017) brought this approach to pre-trained low-dimensional word embeddings, which are more widely used nowadays.",
"More concretely, they proposed a selflearning approach that alternates the mapping and dictionary induction steps iteratively, obtaining results that are comparable to those of supervised methods when starting with only 25 word pairs.",
"A practical approach for reducing the need of bilingual supervision is to design heuristics to build the seed dictionary.",
"The role of the seed lexicon in learning cross-lingual embedding mappings is analyzed in depth by Vulić and Korhonen (2016) , who propose using document-aligned corpora to extract the training dictionary.",
"A more common approach is to rely on shared words and cognates (Peirsman and Padó, 2010; Smith et al., 2017) , while Artetxe et al.",
"(2017) go further and restrict themselves to shared numerals.",
"However, while these approaches are meant to eliminate the need of bilingual data in practice, they also make strong assumptions on the writing systems of languages (e.g.",
"that they all use a common alphabet or Arabic numerals).",
"Closer to our work, a recent line of fully unsupervised approaches drops these assumptions completely, and attempts to learn cross-lingual embedding mappings based on distributional information alone.",
"For that purpose, existing methods rely on adversarial training.",
"This was first proposed by Miceli Barone (2016), who combine an encoder that maps source language embeddings into the target language, a decoder that reconstructs the source language embeddings from the mapped embeddings, and a discriminator that discriminates between the mapped embeddings and the true target language embed-dings.",
"Despite promising, they conclude that their model \"is not competitive with other cross-lingual representation approaches\".",
"Zhang et al.",
"(2017a) use a very similar architecture, but incorporate additional techniques like noise injection to aid training and report competitive results on bilingual lexicon extraction.",
"drop the reconstruction component, regularize the mapping to be orthogonal, and incorporate an iterative refinement process akin to self-learning, reporting very strong results on a large bilingual lexicon extraction dataset.",
"Finally, Zhang et al.",
"(2017b) adopt the earth mover's distance for training, optimized through a Wasserstein generative adversarial network followed by an alternating optimization procedure.",
"However, all this previous work used comparable Wikipedia corpora in most experiments and, as shown in Section 5, face difficulties in more challenging settings.",
"Proposed method Let X and Z be the word embedding matrices in two languages, so that their ith row X i * and Z i * denote the embeddings of the ith word in their respective vocabularies.",
"Our goal is to learn the linear transformation matrices W X and W Z so the mapped embeddings XW X and ZW Z are in the same cross-lingual space.",
"At the same time, we aim to build a dictionary between both languages, encoded as a sparse matrix D where D ij = 1 if the jth word in the target language is a translation of the ith word in the source language.",
"Our proposed method consists of four sequential steps: a pre-processing that normalizes the embeddings ( §3.1), a fully unsupervised initialization scheme that creates an initial solution ( §3.2), a robust self-learning procedure that iteratively improves this solution ( §3.3), and a final refinement step that further improves the resulting mapping through symmetric re-weighting ( §3.4).",
"Embedding normalization Our method starts with a pre-processing that length normalizes the embeddings, then mean centers each dimension, and then length normalizes them again.",
"The first two steps have been shown to be beneficial in previous work (Artetxe et al., 2016) , while the second length normalization guarantees the final embeddings to have a unit length.",
"As a result, the dot product of any two embeddings is equivalent to their cosine similarity and directly related to their Euclidean distance 1 , and can be taken as a measure of their similarity.",
"Fully unsupervised initialization The underlying difficulty of the mapping problem in its unsupervised variant is that the word embedding matrices X and Z are unaligned across both axes: neither the ith vocabulary item X i * and Z i * nor the jth dimension of the embeddings X * j and Z * j are aligned, so there is no direct correspondence between both languages.",
"In order to overcome this challenge and build an initial solution, we propose to first construct two alternative representations X ′ and Z ′ that are aligned across their jth dimension X ′ * j and Z ′ * j , which can later be used to build an initial dictionary that aligns their respective vocabularies.",
"Our approach is based on a simple idea: while the axes of the original embeddings X and Z are different in nature, both axes of their corresponding similarity matrices M X = XX T and M Z = ZZ T correspond to words, which can be exploited to reduce the mismatch to a single axis.",
"More concretely, assuming that the embedding spaces are perfectly isometric, the similarity matrices M X and M Z would be equivalent up to a permutation of their rows and columns, where the permutation in question defines the dictionary across both languages.",
"In practice, the isometry requirement will not hold exactly, but it can be assumed to hold approximately, as the very same problem of mapping two embedding spaces without supervision would otherwise be hopeless.",
"Based on that, one could try every possible permutation of row and column indices to find the best match between M X and M Z , but the resulting combinatorial explosion makes this approach intractable.",
"In order to overcome this problem, we propose to first sort the values in each row of M X and M Z , resulting in matrices sorted(M X ) and sorted(M Z ) 2 .",
"Under the strict isometry condition, equivalent words would get the exact same vector across languages, and thus, given a word and its row in sorted(M X ), one could apply nearest neighbor retrieval over the rows of sorted(M Z ) to find its corresponding translation.",
"On a final note, given the singular value decomposition X = U SV T , the similarity matrix is M X = U S 2 U T .",
"As such, its square root √ M X = U SU T is closer in nature to the original embeddings, and we also find it to work better in practice.",
"We thus compute sorted( √ M X ) and sorted( √ M Z ) and normalize them as described in Section 3.1, yielding the two matrices X ′ and Z ′ that are later used to build the initial solution for self-learning (see Section 3.3).",
"In practice, the isometry assumption is strong enough so the above procedure captures some cross-lingual signal.",
"In our English-Italian experiments, the average cosine similarity across the gold standard translation pairs is 0.009 for a random solution, 0.582 for the optimal supervised solution, and 0.112 for the mapping resulting from this initialization.",
"While the latter is far from being useful on its own (the accuracy of the resulting dictionary is only 0.52%), it is substantially better than chance, and it works well as an initial solution for the self-learning method described next.",
"Robust self-learning Previous work has shown that self-learning can learn high-quality bilingual embedding mappings starting with as little as 25 word pairs (Artetxe et al., 2017) .",
"In this method, training iterates through the following two steps until convergence: 1.",
"Compute the optimal orthogonal mapping maximizing the similarities for the current dictionary D: arg max W X ,W Z i j D ij ((X i * W X ) · (Z j * W Z )) An optimal solution is given by W X = U and W Z = V , where U SV T = X T DZ is the singular value decomposition of X T DZ.",
"2.",
"Compute the optimal dictionary over the similarity matrix of the mapped embeddings XW X W T Z Z T .",
"This typically uses nearest neighbor retrieval from the source language into the target language, so D ij = 1 if j = argmax k (X i * W X ) · (Z k * W Z ) and D ij = 0 otherwise.",
"The underlying optimization objective is independent from the initial dictionary, and the algorithm is guaranteed to converge to a local optimum of it.",
"However, the method does not work if starting from a completely random solution, as it tends to get stuck in poor local optima in that case.",
"For that reason, we use the unsupervised initialization procedure at Section 3.2 to build an initial solution.",
"However, simply plugging in both methods did not work in our preliminary experiments, as the quality of this initial method is not good enough to avoid poor local optima.",
"For that reason, we next propose some key improvements in the dictionary induction step to make self-learning more robust and learn better mappings: • Stochastic dictionary induction.",
"In order to encourage a wider exploration of the search space, we make the dictionary induction stochastic by randomly keeping some elements in the similarity matrix with probability p and setting the remaining ones to 0.",
"As a consequence, the smaller the value of p is, the more the induced dictionary will vary from iteration to iteration, thus enabling to escape poor local optima.",
"So as to find a fine-grained solution once the algorithm gets into a good region, we increase this value during training akin to simulated annealing, starting with p = 0.1 and doubling this value every time the objective function at step 1 above does not improve more than ǫ = 10 −6 for 50 iterations.",
"• Frequency-based vocabulary cutoff.",
"The size of the similarity matrix grows quadratically with respect to that of the vocabularies.",
"This does not only increase the cost of computing it, but it also makes the number of possible solutions grow exponentially 3 , presumably making the optimization problem harder.",
"Given that less frequent words can be expected to be noisier, we propose to restrict the dictionary induction process to the k most frequent words in each language, where we find k = 20, 000 to work well in practice.",
"• CSLS retrieval.",
"showed that nearest neighbor suffers from the hubness problem.",
"This phenomenon is known to occur as an effect of the curse of dimensionality, and causes a few points (known as hubs) to be nearest neighbors of many other points (Radovanović et al., 2010a,b) .",
"Among the existing solutions to penalize the similarity score of hubs, we adopt the Cross-domain Similarity Local Scaling (CSLS) from .",
"Given two mapped embeddings x and y, the idea of CSLS is to compute r T (x) and r S (y), the average cosine similarity of x and y for their k nearest neighbors in the other language, respectively.",
"Having done that, the corrected score CSLS(x, y) = 2 cos(x, y) − r T (x) − r S (y).",
"Following the authors, we set k = 10.",
"• Bidirectional dictionary induction.",
"When the dictionary is induced from the source into the target language, not all target language words will be present in it, and some will occur multiple times.",
"We argue that this might accentuate the problem of local optima, as repeated words might act as strong attractors from which it is difficult to escape.",
"In order to mitigate this issue and encourage diversity, we propose inducing the dictionary in both directions and taking their corresponding concatenation, so D = D X→Z + D Z→X .",
"In order to build the initial dictionary, we compute X ′ and Z ′ as detailed in Section 3.2 and apply the above procedure over them.",
"As the only difference, this first solution does not use the stochastic zeroing in the similarity matrix, as there is no need to encourage diversity (X ′ and Z ′ are only used once), and the threshold for vocabulary cutoff is set to k = 4, 000, so X ′ and Z ′ can fit in memory.",
"Having computed the initial dictionary, X ′ and Z ′ are discarded, and the remaining iterations are performed over the original embeddings X and Z. Symmetric re-weighting As part of their multi-step framework, Artetxe et al.",
"(2018a) showed that re-weighting the target language embeddings according to the crosscorrelation in each component greatly improved the quality of the induced dictionary.",
"Given the singular value decomposition U SV T = X T DZ, this is equivalent to taking W X = U and W Z = V S, where X and Z are previously whitened applying the linear transformations (X T X) − 1 2 and (Z T Z) − 1 2 , and later de-whitened applying U T (X T X) 1 2 U and V T (Z T Z) 1 2 V .",
"However, re-weighting also accentuates the problem of local optima when incorporated into self-learning as, by increasing the relevance of dimensions that best match for the current solution, it discourages to explore other regions of the search space.",
"For that reason, we propose using it as a final step once self-learning has converged to a good solution.",
"Unlike Artetxe et al.",
"(2018a) , we apply re-weighting symmetrically in both languages, taking W X = U S 1 2 and W Z = V S 1 2 .",
"This approach is neutral in the direction of the mapping, and gives good results as shown in our experiments.",
"Experimental settings Following common practice, we evaluate our method on bilingual lexicon extraction, which measures the accuracy of the induced dictionary in comparison to a gold standard.",
"As discussed before, previous evaluation has focused on favorable conditions.",
"In particular, existing unsupervised methods have almost exclusively been tested on Wikipedia corpora, which is comparable rather than monolingual, exposing a strong cross-lingual signal that is not available in strictly unsupervised settings.",
"In addition to that, some datasets comprise unusually small embeddings, with only 50 dimensions and around 5,000-10,000 vocabulary items (Zhang et al., 2017a,b) .",
"As the only exception, report positive results on the English-Italian dataset of in addition to their main experiments, which are carried out in Wikipedia.",
"While this dataset does use strictly monolingual corpora, it still corresponds to a pair of two relatively close indo-european languages.",
"In order to get a wider picture of how our method compares to previous work in different conditions, including more challenging settings, we carry out our experiments in the widely used dataset of and the subsequent extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a , which together comprise English-Italian, English-German, English-Finnish and English-Spanish.",
"More concretely, the dataset consists of 300-dimensional CBOW embeddings trained on WacKy crawling corpora (English, Italian, German), Common Crawl (Finnish) and WMT News Crawl (Spanish).",
"The gold standards were derived from dictionaries built from Europarl word alignments and available at OPUS (Tiedemann, 2012) , split in a test set of 1,500 entries and a training set of 5,000 that we do not use in our experiments.",
"The datasets are freely available.",
"As a non-european agglutinative language, the English-Finnish pair is particularly challeng- Zhang et al.",
"(2017a) .",
"We perform 10 runs for each method and report the best and average accuracies (%), the number of successful runs (those with >5% accuracy) and the average runtime (minutes).",
"EN-IT EN-DE EN-FI EN-ES best avg s t best avg s t best avg s t best avg s t Proposed method 48.53 48.13 10 8.9 48.47 48.19 10 7.3 33.50 32.63 10 12.9 37.60 37.33 10 9.1 Table 2 : Results of unsupervised methods on the dataset of and the extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a .",
"We perform 10 runs for each method and report the best and average accuracies (%), the number of successful runs (those with >5% accuracy) and the average runtime (minutes).",
"ing due to the linguistic distance between them.",
"For completeness, we also test our method in the Spanish-English, Italian-English and Turkish-English datasets of Zhang et al.",
"(2017a) , which consist of 50-dimensional CBOW embeddings trained on Wikipedia, as well as gold standard dictionaries 4 from Open Multilingual WordNet (Spanish-English and Italian-English) and Google Translate (Turkish-English).",
"The lower dimensionality and comparable corpora make an easier scenario, although it also contains a challenging pair of distant languages (Turkish-English).",
"Our method is implemented in Python using NumPy and CuPy.",
"Together with it, we also test the methods of Zhang et al.",
"(2017a) and using the publicly available implementations from the authors 5 .",
"Given that Zhang et al.",
"(2017a) report using a different value of their hyperparameter λ for different language pairs (λ = 10 for English-Turkish and λ = 1 for the rest), we test both values in all our experiments to 4 The test dictionaries were obtained through personal communication with the authors.",
"The rest of the language pairs were left out due to licensing issues.",
"5 Despite our efforts, Zhang et al.",
"(2017b) was left out because: 1) it does not create a one-to-one dictionary, thus difficulting direct comparison, 2) it depends on expensive proprietary software 3) its computational cost is orders of magnitude higher (running the experiments would have taken several months).",
"better understand its effect.",
"In the case of , we test both the default hyperparameters in the source code as well as those reported in the paper, with iterative refinement activated in both cases.",
"Given the instability of these methods, we perform 10 runs for each, and report the best and average accuracies, the number of successful runs (those with >5% accuracy) and the average runtime.",
"All the experiments were run in a single Nvidia Titan Xp.",
"Results and discussion We first present the main results ( §5.1), then the comparison to the state-of-the-art ( §5.2), and finally ablation tests to measure the contribution of each component ( §5.3).",
"Main results We report the results in the dataset of Zhang et al.",
"(2017a) at Table 1 .",
"As it can be seen, the proposed method performs at par with that of both in Spanish-English and Italian-English, but gets substantially better results in the more challenging Turkish-English pair.",
"While we are able to reproduce the results reported by Zhang et al.",
"(2017a) , their method gets the worst results of all by a large margin.",
"Another disadvantage of that model is that different et al.",
"(2018a) .",
"The remaining results were reported in the original papers.",
"For methods that do not require supervision, we report the average accuracy across 10 runs.",
"‡ For meaningful comparison, runs with <5% accuracy are excluded when computing the average, but note that, unlike ours, their method often gives a degenerated solution (see Table 2 ).",
"language pairs require different hyperparameters: λ = 1 works substantially better for Spanish-English and Italian-English, but only λ = 10 works for Turkish-English.",
"The results for the more challenging dataset from and the extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a are given in Table 2 .",
"In this case, our proposed method obtains the best results in all metrics for all the four language pairs tested.",
"The method of Zhang et al.",
"(2017a) does not work at all in this more challenging scenario, which is in line with the negative results reported by the authors themselves for similar conditions (only %2.53 accuracy in their large Gigaword dataset).",
"The method of also fails for English-Finnish (only 1.62% in the best run), although it is able to get positive results in some runs for the rest of language pairs.",
"Between the two configurations tested, the default hyperparameters in the code show a more stable behavior.",
"These results confirm the robustness of the proposed method.",
"While the other systems succeed in some runs and fail in others, our method converges to a good solution in all runs without excep-tion and, in fact, it is the only one getting positive results for English-Finnish.",
"In addition to being more robust, our method also obtains substantially better accuracies, surpassing previous methods by at least 1-3 points in all but the easiest pairs.",
"Moreover, our method is not sensitive to hyperparameters that are difficult to tune without a development set, which is critical in realistic unsupervised conditions.",
"At the same time, our method is significantly faster than the rest.",
"In relation to that, it is interesting that, while previous methods perform a fixed number of iterations and take practically the same time for all the different language pairs, the runtime of our method adapts to the difficulty of the task thanks to the dynamic convergence criterion of our stochastic approach.",
"This way, our method tends to take longer for more challenging language pairs (1.7 vs 0.6 minutes for es-en and tr-en in one dataset, and 12.9 vs 7.3 minutes for en-fi and en-de in the other) and, in fact, our (relative) execution times correlate surprisingly well with the linguistic distance with English (closest/fastest is German, followed by Italian/Spanish, followed by Turkish/Finnish).",
"Table 4 : Ablation test on the dataset of and the extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a .",
"We perform 10 runs for each method and report the best and average accuracies (%), the number of successful runs (those with >5% accuracy) and the average runtime (minutes).",
"Table 3 shows the results of the proposed method in comparison to previous systems, including those with different degrees of supervision.",
"We focus on the widely used English-Italian dataset of and its extensions.",
"Despite being fully unsupervised, our method achieves the best results in all language pairs but one, even surpassing previous supervised approaches.",
"The only exception is English-Finnish, where Artetxe et al.",
"Comparison with the state-of-the-art (2018a) gets marginally better results with a difference of 0.3 points, yet ours is the only unsupervised system that works for this pair.",
"At the same time, it is remarkable that the proposed system gets substantially better results than Artetxe et al.",
"(2017) , the only other system based on selflearning, with the additional advantage of being fully unsupervised.",
"Ablation test In order to better understand the role of different aspects in the proposed system, we perform an ablation test, where we separately analyze the effect of initialization, the different components of our robust self-learning algorithm, and the final symmetric re-weighting.",
"The obtained results are reported in Table 4 .",
"In concordance with previous work, our results show that self-learning does not work with random initialization.",
"However, the proposed unsupervised initialization is able to overcome this issue without the need of any additional information, performing at par with other character-level heuristics that we tested (e.g.",
"shared numerals).",
"As for the different self-learning components, we observe that the stochastic dictionary induction is necessary to overcome the problem of poor lo-cal optima for English-Finnish, although it does not make any difference for the rest of easier language pairs.",
"The frequency-based vocabulary cutoff also has a positive effect, yielding to slightly better accuracies and much faster runtimes.",
"At the same time, CSLS plays a critical role in the system, as hubness severely accentuates the problem of local optima in its absence.",
"The bidirectional dictionary induction is also beneficial, contributing to the robustness of the system as shown by English-Finnish and yielding to better accuracies in all cases.",
"Finally, these results also show that symmetric re-weighting contributes positively, bringing an improvement of around 1-2 points without any cost in the execution time.",
"Conclusions In this paper, we show that previous unsupervised mapping methods (Zhang et al., 2017a; often fail on realistic scenarios involving non-comparable corpora and/or distant languages.",
"In contrast to adversarial methods, we propose to use an initial weak mapping that exploits the structure of the embedding spaces in combination with a robust self-learning approach.",
"The results show that our method succeeds in all cases, providing the best results with respect to all previous work on unsupervised and supervised mappings.",
"The ablation analysis shows that our initial solution is instrumental for making self-learning work without supervision.",
"In order to make selflearning robust, we also added stochasticity to dictionary induction, used CSLS instead of nearest neighbor, and produced bidirectional dictionaries.",
"Results also improved using smaller in-termediate vocabularies and re-weighting the final solution.",
"Our implementation is available as an open source project at https://github.",
"com/artetxem/vecmap.",
"In the future, we would like to extend the method from the bilingual to the multilingual scenario, and go beyond the word level by incorporating embeddings of longer phrases."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"5",
"5.1",
"5.2",
"5.3",
"6"
],
"paper_header_content": [
"Introduction",
"Related work",
"Proposed method",
"Embedding normalization",
"Fully unsupervised initialization",
"Robust self-learning",
"Symmetric re-weighting",
"Experimental settings",
"Results and discussion",
"Main results",
"Comparison with the state-of-the-art",
"Ablation test",
"Conclusions"
]
} | GEM-SciDuet-train-24#paper-1025#slide-1 | Cross-lingual embedding mappings | Basque and English embedding spaces + training dictionary
Mapping objective: W* = arg min_W Σ_i ||x_i W − z_i||² | Basque and English embedding spaces + training dictionary
Mapping objective: W* = arg min_W Σ_i ||x_i W − z_i||² | []
GEM-SciDuet-train-24#paper-1025#slide-2 | 1025 | A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings | Recent work has managed to learn cross-lingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training. However, their evaluation has focused on favorable conditions, using comparable corpora or closely-related languages, and we show that they often fail in more realistic scenarios. This work proposes an alternative approach based on a fully unsupervised initialization that explicitly exploits the structural similarity of the embeddings, and a robust self-learning algorithm that iteratively improves this solution. Our method succeeds in all tested scenarios and obtains the best published results in standard datasets, even surpassing previous supervised systems. Our implementation is released as an open source project at https://github.com/artetxem/vecmap. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194
],
"paper_content_text": [
"Introduction Cross-lingual embedding mappings have shown to be an effective way to learn bilingual word embeddings (Mikolov et al., 2013; .",
"The underlying idea is to independently train the embeddings in different languages using monolingual corpora, and then map them to a shared space through a linear transformation.",
"This allows to learn high-quality cross-lingual representations without expensive supervision, opening new research avenues like unsupervised neural machine translation (Artetxe et al., 2018b; .",
"While most embedding mapping methods rely on a small seed dictionary, adversarial training has recently produced exciting results in fully unsu-pervised settings (Zhang et al., 2017a,b; .",
"However, their evaluation has focused on particularly favorable conditions, limited to closely-related languages or comparable Wikipedia corpora.",
"When tested on more realistic scenarios, we find that they often fail to produce meaningful results.",
"For instance, none of the existing methods works in the standard English-Finnish dataset from Artetxe et al.",
"(2017) , obtaining translation accuracies below 2% in all cases (see Section 5).",
"On another strand of work, Artetxe et al.",
"(2017) showed that an iterative self-learning method is able to bootstrap a high quality mapping from very small seed dictionaries (as little as 25 pairs of words).",
"However, their analysis reveals that the self-learning method gets stuck in poor local optima when the initial solution is not good enough, thus failing for smaller training dictionaries.",
"In this paper, we follow this second approach and propose a new unsupervised method to build an initial solution without the need of a seed dictionary, based on the observation that, given the similarity matrix of all words in the vocabulary, each word has a different distribution of similarity values.",
"Two equivalent words in different languages should have a similar distribution, and we can use this fact to induce the initial set of word pairings (see Figure 1 ).",
"We combine this initialization with a more robust self-learning method, which is able to start from the weak initial solution and iteratively improve the mapping.",
"Coupled together, we provide a fully unsupervised crosslingual mapping method that is effective in realistic settings, converges to a good solution in all cases tested, and sets a new state-of-the-art in bilingual lexicon extraction, even surpassing previous supervised methods.",
"Figure 1 : Motivating example for our unsupervised initialization method, showing the similarity distributions of three words (corresponding to the smoothed density estimates from the normalized square root of the similarity matrices as defined in Section 3.2).",
"Equivalent translations (two and due) have more similar distributions than non-related words (two and cane -meaning dog).",
"This observation is used to build an initial solution that is later improved through self-learning.",
"Related work Cross-lingual embedding mapping methods work by independently training word embeddings in two languages, and then mapping them to a shared space using a linear transformation.",
"Most of these methods are supervised, and use a bilingual dictionary of a few thousand entries to learn the mapping.",
"Existing approaches can be classified into regression methods, which map the embeddings in one language using a leastsquares objective (Mikolov et al., 2013; Shigeto et al., 2015; , canonical methods, which map the embeddings in both languages to a shared space using canonical correlation analysis and extensions of it (Faruqui and Dyer, 2014; Lu et al., 2015) , orthogonal methods, which map the embeddings in one or both languages under the constraint of the transformation being orthogonal (Xing et al., 2015; Artetxe et al., 2016; Zhang et al., 2016; Smith et al., 2017) , and margin methods, which map the embeddings in one language to maximize the margin between the correct translations and the rest of the candidates .",
"Artetxe et al.",
"(2018a) showed that many of them could be generalized as part of a multi-step framework of linear transformations.",
"A related research line is to adapt these methods to the semi-supervised scenario, where the training dictionary is much smaller and used as part of a bootstrapping process.",
"While similar ideas where already explored for traditional count-based vector space models (Peirsman and Padó, 2010; Vulić and Moens, 2013) , Artetxe et al.",
"(2017) brought this approach to pre-trained low-dimensional word embeddings, which are more widely used nowadays.",
"More concretely, they proposed a selflearning approach that alternates the mapping and dictionary induction steps iteratively, obtaining results that are comparable to those of supervised methods when starting with only 25 word pairs.",
"A practical approach for reducing the need of bilingual supervision is to design heuristics to build the seed dictionary.",
"The role of the seed lexicon in learning cross-lingual embedding mappings is analyzed in depth by Vulić and Korhonen (2016) , who propose using document-aligned corpora to extract the training dictionary.",
"A more common approach is to rely on shared words and cognates (Peirsman and Padó, 2010; Smith et al., 2017) , while Artetxe et al.",
"(2017) go further and restrict themselves to shared numerals.",
"However, while these approaches are meant to eliminate the need of bilingual data in practice, they also make strong assumptions on the writing systems of languages (e.g.",
"that they all use a common alphabet or Arabic numerals).",
"Closer to our work, a recent line of fully unsupervised approaches drops these assumptions completely, and attempts to learn cross-lingual embedding mappings based on distributional information alone.",
"For that purpose, existing methods rely on adversarial training.",
"This was first proposed by Miceli Barone (2016), who combine an encoder that maps source language embeddings into the target language, a decoder that reconstructs the source language embeddings from the mapped embeddings, and a discriminator that discriminates between the mapped embeddings and the true target language embed-dings.",
"Despite promising, they conclude that their model \"is not competitive with other cross-lingual representation approaches\".",
"Zhang et al.",
"(2017a) use a very similar architecture, but incorporate additional techniques like noise injection to aid training and report competitive results on bilingual lexicon extraction.",
"drop the reconstruction component, regularize the mapping to be orthogonal, and incorporate an iterative refinement process akin to self-learning, reporting very strong results on a large bilingual lexicon extraction dataset.",
"Finally, Zhang et al.",
"(2017b) adopt the earth mover's distance for training, optimized through a Wasserstein generative adversarial network followed by an alternating optimization procedure.",
"However, all this previous work used comparable Wikipedia corpora in most experiments and, as shown in Section 5, face difficulties in more challenging settings.",
"Proposed method Let X and Z be the word embedding matrices in two languages, so that their ith row X i * and Z i * denote the embeddings of the ith word in their respective vocabularies.",
"Our goal is to learn the linear transformation matrices W X and W Z so the mapped embeddings XW X and ZW Z are in the same cross-lingual space.",
"At the same time, we aim to build a dictionary between both languages, encoded as a sparse matrix D where D ij = 1 if the jth word in the target language is a translation of the ith word in the source language.",
"Our proposed method consists of four sequential steps: a pre-processing that normalizes the embeddings ( §3.1), a fully unsupervised initialization scheme that creates an initial solution ( §3.2), a robust self-learning procedure that iteratively improves this solution ( §3.3), and a final refinement step that further improves the resulting mapping through symmetric re-weighting ( §3.4).",
"Embedding normalization Our method starts with a pre-processing that length normalizes the embeddings, then mean centers each dimension, and then length normalizes them again.",
"The first two steps have been shown to be beneficial in previous work (Artetxe et al., 2016) , while the second length normalization guarantees the final embeddings to have a unit length.",
"As a result, the dot product of any two embeddings is equivalent to their cosine similarity and directly related to their Euclidean distance 1 , and can be taken as a measure of their similarity.",
"Fully unsupervised initialization The underlying difficulty of the mapping problem in its unsupervised variant is that the word embedding matrices X and Z are unaligned across both axes: neither the ith vocabulary item X i * and Z i * nor the jth dimension of the embeddings X * j and Z * j are aligned, so there is no direct correspondence between both languages.",
"In order to overcome this challenge and build an initial solution, we propose to first construct two alternative representations X ′ and Z ′ that are aligned across their jth dimension X ′ * j and Z ′ * j , which can later be used to build an initial dictionary that aligns their respective vocabularies.",
"Our approach is based on a simple idea: while the axes of the original embeddings X and Z are different in nature, both axes of their corresponding similarity matrices M X = XX T and M Z = ZZ T correspond to words, which can be exploited to reduce the mismatch to a single axis.",
"More concretely, assuming that the embedding spaces are perfectly isometric, the similarity matrices M X and M Z would be equivalent up to a permutation of their rows and columns, where the permutation in question defines the dictionary across both languages.",
"In practice, the isometry requirement will not hold exactly, but it can be assumed to hold approximately, as the very same problem of mapping two embedding spaces without supervision would otherwise be hopeless.",
"Based on that, one could try every possible permutation of row and column indices to find the best match between M X and M Z , but the resulting combinatorial explosion makes this approach intractable.",
"In order to overcome this problem, we propose to first sort the values in each row of M X and M Z , resulting in matrices sorted(M X ) and sorted(M Z ) 2 .",
"Under the strict isometry condition, equivalent words would get the exact same vector across languages, and thus, given a word and its row in sorted(M X ), one could apply nearest neighbor retrieval over the rows of sorted(M Z ) to find its corresponding translation.",
"On a final note, given the singular value decomposition X = U SV T , the similarity matrix is M X = U S 2 U T .",
"As such, its square root √ M X = U SU T is closer in nature to the original embeddings, and we also find it to work better in practice.",
"We thus compute sorted( √ M X ) and sorted( √ M Z ) and normalize them as described in Section 3.1, yielding the two matrices X ′ and Z ′ that are later used to build the initial solution for self-learning (see Section 3.3).",
"In practice, the isometry assumption is strong enough so the above procedure captures some cross-lingual signal.",
"In our English-Italian experiments, the average cosine similarity across the gold standard translation pairs is 0.009 for a random solution, 0.582 for the optimal supervised solution, and 0.112 for the mapping resulting from this initialization.",
"While the latter is far from being useful on its own (the accuracy of the resulting dictionary is only 0.52%), it is substantially better than chance, and it works well as an initial solution for the self-learning method described next.",
"Robust self-learning Previous work has shown that self-learning can learn high-quality bilingual embedding mappings starting with as little as 25 word pairs (Artetxe et al., 2017) .",
"In this method, training iterates through the following two steps until convergence: 1.",
"Compute the optimal orthogonal mapping maximizing the similarities for the current dictionary D: arg max W X ,W Z i j D ij ((X i * W X ) · (Z j * W Z )) An optimal solution is given by W X = U and W Z = V , where U SV T = X T DZ is the singular value decomposition of X T DZ.",
"2.",
"Compute the optimal dictionary over the similarity matrix of the mapped embeddings XW X W T Z Z T .",
"This typically uses nearest neighbor retrieval from the source language into the target language, so D ij = 1 if j = argmax k (X i * W X ) · (Z k * W Z ) and D ij = 0 otherwise.",
"The underlying optimization objective is independent from the initial dictionary, and the algorithm is guaranteed to converge to a local optimum of it.",
"However, the method does not work if starting from a completely random solution, as it tends to get stuck in poor local optima in that case.",
"For that reason, we use the unsupervised initialization procedure at Section 3.2 to build an initial solution.",
"However, simply plugging in both methods did not work in our preliminary experiments, as the quality of this initial method is not good enough to avoid poor local optima.",
"For that reason, we next propose some key improvements in the dictionary induction step to make self-learning more robust and learn better mappings: • Stochastic dictionary induction.",
"In order to encourage a wider exploration of the search space, we make the dictionary induction stochastic by randomly keeping some elements in the similarity matrix with probability p and setting the remaining ones to 0.",
"As a consequence, the smaller the value of p is, the more the induced dictionary will vary from iteration to iteration, thus enabling to escape poor local optima.",
"So as to find a fine-grained solution once the algorithm gets into a good region, we increase this value during training akin to simulated annealing, starting with p = 0.1 and doubling this value every time the objective function at step 1 above does not improve more than ǫ = 10 −6 for 50 iterations.",
"• Frequency-based vocabulary cutoff.",
"The size of the similarity matrix grows quadratically with respect to that of the vocabularies.",
"This does not only increase the cost of computing it, but it also makes the number of possible solutions grow exponentially 3 , presumably making the optimization problem harder.",
"Given that less frequent words can be expected to be noisier, we propose to restrict the dictionary induction process to the k most frequent words in each language, where we find k = 20, 000 to work well in practice.",
"• CSLS retrieval.",
"showed that nearest neighbor suffers from the hubness problem.",
"This phenomenon is known to occur as an effect of the curse of dimensionality, and causes a few points (known as hubs) to be nearest neighbors of many other points (Radovanović et al., 2010a,b) .",
"Among the existing solutions to penalize the similarity score of hubs, we adopt the Cross-domain Similarity Local Scaling (CSLS) from .",
"Given two mapped embeddings x and y, the idea of CSLS is to compute r T (x) and r S (y), the average cosine similarity of x and y for their k nearest neighbors in the other language, respectively.",
"Having done that, the corrected score CSLS(x, y) = 2 cos(x, y) − r T (x) − r S (y).",
"Following the authors, we set k = 10.",
"• Bidirectional dictionary induction.",
"When the dictionary is induced from the source into the target language, not all target language words will be present in it, and some will occur multiple times.",
"We argue that this might accentuate the problem of local optima, as repeated words might act as strong attractors from which it is difficult to escape.",
"In order to mitigate this issue and encourage diversity, we propose inducing the dictionary in both directions and taking their corresponding concatenation, so D = D X→Z + D Z→X .",
"In order to build the initial dictionary, we compute X ′ and Z ′ as detailed in Section 3.2 and apply the above procedure over them.",
"As the only difference, this first solution does not use the stochastic zeroing in the similarity matrix, as there is no need to encourage diversity (X ′ and Z ′ are only used once), and the threshold for vocabulary cutoff is set to k = 4, 000, so X ′ and Z ′ can fit in memory.",
"Having computed the initial dictionary, X ′ and Z ′ are discarded, and the remaining iterations are performed over the original embeddings X and Z. Symmetric re-weighting As part of their multi-step framework, Artetxe et al.",
"(2018a) showed that re-weighting the target language embeddings according to the crosscorrelation in each component greatly improved the quality of the induced dictionary.",
"Given the singular value decomposition U SV T = X T DZ, this is equivalent to taking W X = U and W Z = V S, where X and Z are previously whitened applying the linear transformations (X T X) − 1 2 and (Z T Z) − 1 2 , and later de-whitened applying U T (X T X) 1 2 U and V T (Z T Z) 1 2 V .",
"However, re-weighting also accentuates the problem of local optima when incorporated into self-learning as, by increasing the relevance of dimensions that best match for the current solution, it discourages to explore other regions of the search space.",
"For that reason, we propose using it as a final step once self-learning has converged to a good solution.",
"Unlike Artetxe et al.",
"(2018a) , we apply re-weighting symmetrically in both languages, taking W X = U S 1 2 and W Z = V S 1 2 .",
"This approach is neutral in the direction of the mapping, and gives good results as shown in our experiments.",
"Experimental settings Following common practice, we evaluate our method on bilingual lexicon extraction, which measures the accuracy of the induced dictionary in comparison to a gold standard.",
"As discussed before, previous evaluation has focused on favorable conditions.",
"In particular, existing unsupervised methods have almost exclusively been tested on Wikipedia corpora, which is comparable rather than monolingual, exposing a strong cross-lingual signal that is not available in strictly unsupervised settings.",
"In addition to that, some datasets comprise unusually small embeddings, with only 50 dimensions and around 5,000-10,000 vocabulary items (Zhang et al., 2017a,b) .",
"As the only exception, report positive results on the English-Italian dataset of in addition to their main experiments, which are carried out in Wikipedia.",
"While this dataset does use strictly monolingual corpora, it still corresponds to a pair of two relatively close indo-european languages.",
"In order to get a wider picture of how our method compares to previous work in different conditions, including more challenging settings, we carry out our experiments in the widely used dataset of and the subsequent extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a , which together comprise English-Italian, English-German, English-Finnish and English-Spanish.",
"More concretely, the dataset consists of 300-dimensional CBOW embeddings trained on WacKy crawling corpora (English, Italian, German), Common Crawl (Finnish) and WMT News Crawl (Spanish).",
"The gold standards were derived from dictionaries built from Europarl word alignments and available at OPUS (Tiedemann, 2012) , split in a test set of 1,500 entries and a training set of 5,000 that we do not use in our experiments.",
"The datasets are freely available.",
"As a non-european agglutinative language, the English-Finnish pair is particularly challeng- Zhang et al.",
"(2017a) .",
"We perform 10 runs for each method and report the best and average accuracies (%), the number of successful runs (those with >5% accuracy) and the average runtime (minutes).",
"EN-IT EN-DE EN-FI EN-ES best avg s t best avg s t best avg s t best avg s t Proposed method 48.53 48.13 10 8.9 48.47 48.19 10 7.3 33.50 32.63 10 12.9 37.60 37.33 10 9.1 Table 2 : Results of unsupervised methods on the dataset of and the extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a .",
"We perform 10 runs for each method and report the best and average accuracies (%), the number of successful runs (those with >5% accuracy) and the average runtime (minutes).",
"ing due to the linguistic distance between them.",
"For completeness, we also test our method in the Spanish-English, Italian-English and Turkish-English datasets of Zhang et al.",
"(2017a) , which consist of 50-dimensional CBOW embeddings trained on Wikipedia, as well as gold standard dictionaries 4 from Open Multilingual WordNet (Spanish-English and Italian-English) and Google Translate (Turkish-English).",
"The lower dimensionality and comparable corpora make an easier scenario, although it also contains a challenging pair of distant languages (Turkish-English).",
"Our method is implemented in Python using NumPy and CuPy.",
"Together with it, we also test the methods of Zhang et al.",
"(2017a) and using the publicly available implementations from the authors 5 .",
"Given that Zhang et al.",
"(2017a) report using a different value of their hyperparameter λ for different language pairs (λ = 10 for English-Turkish and λ = 1 for the rest), we test both values in all our experiments to 4 The test dictionaries were obtained through personal communication with the authors.",
"The rest of the language pairs were left out due to licensing issues.",
"5 Despite our efforts, Zhang et al.",
"(2017b) was left out because: 1) it does not create a one-to-one dictionary, thus difficulting direct comparison, 2) it depends on expensive proprietary software 3) its computational cost is orders of magnitude higher (running the experiments would have taken several months).",
"better understand its effect.",
"In the case of , we test both the default hyperparameters in the source code as well as those reported in the paper, with iterative refinement activated in both cases.",
"Given the instability of these methods, we perform 10 runs for each, and report the best and average accuracies, the number of successful runs (those with >5% accuracy) and the average runtime.",
"All the experiments were run in a single Nvidia Titan Xp.",
"Results and discussion We first present the main results ( §5.1), then the comparison to the state-of-the-art ( §5.2), and finally ablation tests to measure the contribution of each component ( §5.3).",
"Main results We report the results in the dataset of Zhang et al.",
"(2017a) at Table 1 .",
"As it can be seen, the proposed method performs at par with that of both in Spanish-English and Italian-English, but gets substantially better results in the more challenging Turkish-English pair.",
"While we are able to reproduce the results reported by Zhang et al.",
"(2017a) , their method gets the worst results of all by a large margin.",
"Another disadvantage of that model is that different et al.",
"(2018a) .",
"The remaining results were reported in the original papers.",
"For methods that do not require supervision, we report the average accuracy across 10 runs.",
"‡ For meaningful comparison, runs with <5% accuracy are excluded when computing the average, but note that, unlike ours, their method often gives a degenerated solution (see Table 2 ).",
"language pairs require different hyperparameters: λ = 1 works substantially better for Spanish-English and Italian-English, but only λ = 10 works for Turkish-English.",
"The results for the more challenging dataset from and the extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a are given in Table 2 .",
"In this case, our proposed method obtains the best results in all metrics for all the four language pairs tested.",
"The method of Zhang et al.",
"(2017a) does not work at all in this more challenging scenario, which is in line with the negative results reported by the authors themselves for similar conditions (only %2.53 accuracy in their large Gigaword dataset).",
"The method of also fails for English-Finnish (only 1.62% in the best run), although it is able to get positive results in some runs for the rest of language pairs.",
"Between the two configurations tested, the default hyperparameters in the code show a more stable behavior.",
"These results confirm the robustness of the proposed method.",
"While the other systems succeed in some runs and fail in others, our method converges to a good solution in all runs without excep-tion and, in fact, it is the only one getting positive results for English-Finnish.",
"In addition to being more robust, our method also obtains substantially better accuracies, surpassing previous methods by at least 1-3 points in all but the easiest pairs.",
"Moreover, our method is not sensitive to hyperparameters that are difficult to tune without a development set, which is critical in realistic unsupervised conditions.",
"At the same time, our method is significantly faster than the rest.",
"In relation to that, it is interesting that, while previous methods perform a fixed number of iterations and take practically the same time for all the different language pairs, the runtime of our method adapts to the difficulty of the task thanks to the dynamic convergence criterion of our stochastic approach.",
"This way, our method tends to take longer for more challenging language pairs (1.7 vs 0.6 minutes for es-en and tr-en in one dataset, and 12.9 vs 7.3 minutes for en-fi and en-de in the other) and, in fact, our (relative) execution times correlate surprisingly well with the linguistic distance with English (closest/fastest is German, followed by Italian/Spanish, followed by Turkish/Finnish).",
"Table 4 : Ablation test on the dataset of and the extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a .",
"We perform 10 runs for each method and report the best and average accuracies (%), the number of successful runs (those with >5% accuracy) and the average runtime (minutes).",
"Table 3 shows the results of the proposed method in comparison to previous systems, including those with different degrees of supervision.",
"We focus on the widely used English-Italian dataset of and its extensions.",
"Despite being fully unsupervised, our method achieves the best results in all language pairs but one, even surpassing previous supervised approaches.",
"The only exception is English-Finnish, where Artetxe et al.",
"Comparison with the state-of-the-art (2018a) gets marginally better results with a difference of 0.3 points, yet ours is the only unsupervised system that works for this pair.",
"At the same time, it is remarkable that the proposed system gets substantially better results than Artetxe et al.",
"(2017) , the only other system based on selflearning, with the additional advantage of being fully unsupervised.",
"Ablation test In order to better understand the role of different aspects in the proposed system, we perform an ablation test, where we separately analyze the effect of initialization, the different components of our robust self-learning algorithm, and the final symmetric re-weighting.",
"The obtained results are reported in Table 4 .",
"In concordance with previous work, our results show that self-learning does not work with random initialization.",
"However, the proposed unsupervised initialization is able to overcome this issue without the need of any additional information, performing at par with other character-level heuristics that we tested (e.g.",
"shared numerals).",
"As for the different self-learning components, we observe that the stochastic dictionary induction is necessary to overcome the problem of poor lo-cal optima for English-Finnish, although it does not make any difference for the rest of easier language pairs.",
"The frequency-based vocabulary cutoff also has a positive effect, yielding to slightly better accuracies and much faster runtimes.",
"At the same time, CSLS plays a critical role in the system, as hubness severely accentuates the problem of local optima in its absence.",
"The bidirectional dictionary induction is also beneficial, contributing to the robustness of the system as shown by English-Finnish and yielding to better accuracies in all cases.",
"Finally, these results also show that symmetric re-weighting contributes positively, bringing an improvement of around 1-2 points without any cost in the execution time.",
"Conclusions In this paper, we show that previous unsupervised mapping methods (Zhang et al., 2017a; often fail on realistic scenarios involving non-comparable corpora and/or distant languages.",
"In contrast to adversarial methods, we propose to use an initial weak mapping that exploits the structure of the embedding spaces in combination with a robust self-learning approach.",
"The results show that our method succeeds in all cases, providing the best results with respect to all previous work on unsupervised and supervised mappings.",
"The ablation analysis shows that our initial solution is instrumental for making self-learning work without supervision.",
"In order to make selflearning robust, we also added stochasticity to dictionary induction, used CSLS instead of nearest neighbor, and produced bidirectional dictionaries.",
"Results also improved using smaller in-termediate vocabularies and re-weighting the final solution.",
"Our implementation is available as an open source project at https://github.",
"com/artetxem/vecmap.",
"In the future, we would like to extend the method from the bilingual to the multilingual scenario, and go beyond the word level by incorporating embeddings of longer phrases."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"5",
"5.1",
"5.2",
"5.3",
"6"
],
"paper_header_content": [
"Introduction",
"Related work",
"Proposed method",
"Embedding normalization",
"Fully unsupervised initialization",
"Robust self-learning",
"Symmetric re-weighting",
"Experimental settings",
"Results and discussion",
"Main results",
"Comparison with the state-of-the-art",
"Ablation test",
"Conclusions"
]
} | GEM-SciDuet-train-24#paper-1025#slide-2 | Artetxe et al ACL 2017 | = arg min min
- 25 word pairs
- num. * none 0 a | Iteration
- Numeral list A | = arg min min
- 25 word pairs
- num. * none 0 a | Iteration
- Numeral list A | [] |
GEM-SciDuet-train-24#paper-1025#slide-3 | 1025 | A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings | Recent work has managed to learn crosslingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training. However, their evaluation has focused on favorable conditions, using comparable corpora or closely-related languages, and we show that they often fail in more realistic scenarios. This work proposes an alternative approach based on a fully unsupervised initialization that explicitly exploits the structural similarity of the embeddings, and a robust self-learning algorithm that iteratively improves this solution. Our method succeeds in all tested scenarios and obtains the best published results in standard datasets, even surpassing previous supervised systems. Our implementation is released as an open source project at https://github. com/artetxem/vecmap. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194
],
"paper_content_text": [
"Introduction Cross-lingual embedding mappings have shown to be an effective way to learn bilingual word embeddings (Mikolov et al., 2013; .",
"The underlying idea is to independently train the embeddings in different languages using monolingual corpora, and then map them to a shared space through a linear transformation.",
"This allows to learn high-quality cross-lingual representations without expensive supervision, opening new research avenues like unsupervised neural machine translation (Artetxe et al., 2018b; .",
"While most embedding mapping methods rely on a small seed dictionary, adversarial training has recently produced exciting results in fully unsu-pervised settings (Zhang et al., 2017a,b; .",
"However, their evaluation has focused on particularly favorable conditions, limited to closely-related languages or comparable Wikipedia corpora.",
"When tested on more realistic scenarios, we find that they often fail to produce meaningful results.",
"For instance, none of the existing methods works in the standard English-Finnish dataset from Artetxe et al.",
"(2017) , obtaining translation accuracies below 2% in all cases (see Section 5).",
"On another strand of work, Artetxe et al.",
"(2017) showed that an iterative self-learning method is able to bootstrap a high quality mapping from very small seed dictionaries (as little as 25 pairs of words).",
"However, their analysis reveals that the self-learning method gets stuck in poor local optima when the initial solution is not good enough, thus failing for smaller training dictionaries.",
"In this paper, we follow this second approach and propose a new unsupervised method to build an initial solution without the need of a seed dictionary, based on the observation that, given the similarity matrix of all words in the vocabulary, each word has a different distribution of similarity values.",
"Two equivalent words in different languages should have a similar distribution, and we can use this fact to induce the initial set of word pairings (see Figure 1 ).",
"We combine this initialization with a more robust self-learning method, which is able to start from the weak initial solution and iteratively improve the mapping.",
"Coupled together, we provide a fully unsupervised crosslingual mapping method that is effective in realistic settings, converges to a good solution in all cases tested, and sets a new state-of-the-art in bilingual lexicon extraction, even surpassing previous supervised methods.",
"Figure 1 : Motivating example for our unsupervised initialization method, showing the similarity distributions of three words (corresponding to the smoothed density estimates from the normalized square root of the similarity matrices as defined in Section 3.2).",
"Equivalent translations (two and due) have more similar distributions than non-related words (two and cane -meaning dog).",
"This observation is used to build an initial solution that is later improved through self-learning.",
"Related work Cross-lingual embedding mapping methods work by independently training word embeddings in two languages, and then mapping them to a shared space using a linear transformation.",
"Most of these methods are supervised, and use a bilingual dictionary of a few thousand entries to learn the mapping.",
"Existing approaches can be classified into regression methods, which map the embeddings in one language using a leastsquares objective (Mikolov et al., 2013; Shigeto et al., 2015; , canonical methods, which map the embeddings in both languages to a shared space using canonical correlation analysis and extensions of it (Faruqui and Dyer, 2014; Lu et al., 2015) , orthogonal methods, which map the embeddings in one or both languages under the constraint of the transformation being orthogonal (Xing et al., 2015; Artetxe et al., 2016; Zhang et al., 2016; Smith et al., 2017) , and margin methods, which map the embeddings in one language to maximize the margin between the correct translations and the rest of the candidates .",
"Artetxe et al.",
"(2018a) showed that many of them could be generalized as part of a multi-step framework of linear transformations.",
"A related research line is to adapt these methods to the semi-supervised scenario, where the training dictionary is much smaller and used as part of a bootstrapping process.",
"While similar ideas where already explored for traditional count-based vector space models (Peirsman and Padó, 2010; Vulić and Moens, 2013) , Artetxe et al.",
"(2017) brought this approach to pre-trained low-dimensional word embeddings, which are more widely used nowadays.",
"More concretely, they proposed a selflearning approach that alternates the mapping and dictionary induction steps iteratively, obtaining results that are comparable to those of supervised methods when starting with only 25 word pairs.",
"A practical approach for reducing the need of bilingual supervision is to design heuristics to build the seed dictionary.",
"The role of the seed lexicon in learning cross-lingual embedding mappings is analyzed in depth by Vulić and Korhonen (2016) , who propose using document-aligned corpora to extract the training dictionary.",
"A more common approach is to rely on shared words and cognates (Peirsman and Padó, 2010; Smith et al., 2017) , while Artetxe et al.",
"(2017) go further and restrict themselves to shared numerals.",
"However, while these approaches are meant to eliminate the need of bilingual data in practice, they also make strong assumptions on the writing systems of languages (e.g.",
"that they all use a common alphabet or Arabic numerals).",
"Closer to our work, a recent line of fully unsupervised approaches drops these assumptions completely, and attempts to learn cross-lingual embedding mappings based on distributional information alone.",
"For that purpose, existing methods rely on adversarial training.",
"This was first proposed by Miceli Barone (2016), who combine an encoder that maps source language embeddings into the target language, a decoder that reconstructs the source language embeddings from the mapped embeddings, and a discriminator that discriminates between the mapped embeddings and the true target language embed-dings.",
"Despite promising, they conclude that their model \"is not competitive with other cross-lingual representation approaches\".",
"Zhang et al.",
"(2017a) use a very similar architecture, but incorporate additional techniques like noise injection to aid training and report competitive results on bilingual lexicon extraction.",
"drop the reconstruction component, regularize the mapping to be orthogonal, and incorporate an iterative refinement process akin to self-learning, reporting very strong results on a large bilingual lexicon extraction dataset.",
"Finally, Zhang et al.",
"(2017b) adopt the earth mover's distance for training, optimized through a Wasserstein generative adversarial network followed by an alternating optimization procedure.",
"However, all this previous work used comparable Wikipedia corpora in most experiments and, as shown in Section 5, face difficulties in more challenging settings.",
"Proposed method Let X and Z be the word embedding matrices in two languages, so that their ith row X i * and Z i * denote the embeddings of the ith word in their respective vocabularies.",
"Our goal is to learn the linear transformation matrices W X and W Z so the mapped embeddings XW X and ZW Z are in the same cross-lingual space.",
"At the same time, we aim to build a dictionary between both languages, encoded as a sparse matrix D where D ij = 1 if the jth word in the target language is a translation of the ith word in the source language.",
"Our proposed method consists of four sequential steps: a pre-processing that normalizes the embeddings ( §3.1), a fully unsupervised initialization scheme that creates an initial solution ( §3.2), a robust self-learning procedure that iteratively improves this solution ( §3.3), and a final refinement step that further improves the resulting mapping through symmetric re-weighting ( §3.4).",
"Embedding normalization Our method starts with a pre-processing that length normalizes the embeddings, then mean centers each dimension, and then length normalizes them again.",
"The first two steps have been shown to be beneficial in previous work (Artetxe et al., 2016) , while the second length normalization guarantees the final embeddings to have a unit length.",
"As a result, the dot product of any two embeddings is equivalent to their cosine similarity and directly related to their Euclidean distance 1 , and can be taken as a measure of their similarity.",
"Fully unsupervised initialization The underlying difficulty of the mapping problem in its unsupervised variant is that the word embedding matrices X and Z are unaligned across both axes: neither the ith vocabulary item X i * and Z i * nor the jth dimension of the embeddings X * j and Z * j are aligned, so there is no direct correspondence between both languages.",
"In order to overcome this challenge and build an initial solution, we propose to first construct two alternative representations X ′ and Z ′ that are aligned across their jth dimension X ′ * j and Z ′ * j , which can later be used to build an initial dictionary that aligns their respective vocabularies.",
"Our approach is based on a simple idea: while the axes of the original embeddings X and Z are different in nature, both axes of their corresponding similarity matrices M X = XX T and M Z = ZZ T correspond to words, which can be exploited to reduce the mismatch to a single axis.",
"More concretely, assuming that the embedding spaces are perfectly isometric, the similarity matrices M X and M Z would be equivalent up to a permutation of their rows and columns, where the permutation in question defines the dictionary across both languages.",
"In practice, the isometry requirement will not hold exactly, but it can be assumed to hold approximately, as the very same problem of mapping two embedding spaces without supervision would otherwise be hopeless.",
"Based on that, one could try every possible permutation of row and column indices to find the best match between M X and M Z , but the resulting combinatorial explosion makes this approach intractable.",
"In order to overcome this problem, we propose to first sort the values in each row of M X and M Z , resulting in matrices sorted(M X ) and sorted(M Z ) 2 .",
"Under the strict isometry condition, equivalent words would get the exact same vector across languages, and thus, given a word and its row in sorted(M X ), one could apply nearest neighbor retrieval over the rows of sorted(M Z ) to find its corresponding translation.",
"On a final note, given the singular value decomposition X = U SV T , the similarity matrix is M X = U S 2 U T .",
"As such, its square root √ M X = U SU T is closer in nature to the original embeddings, and we also find it to work better in practice.",
"We thus compute sorted( √ M X ) and sorted( √ M Z ) and normalize them as described in Section 3.1, yielding the two matrices X ′ and Z ′ that are later used to build the initial solution for self-learning (see Section 3.3).",
"In practice, the isometry assumption is strong enough so the above procedure captures some cross-lingual signal.",
"In our English-Italian experiments, the average cosine similarity across the gold standard translation pairs is 0.009 for a random solution, 0.582 for the optimal supervised solution, and 0.112 for the mapping resulting from this initialization.",
"While the latter is far from being useful on its own (the accuracy of the resulting dictionary is only 0.52%), it is substantially better than chance, and it works well as an initial solution for the self-learning method described next.",
"Robust self-learning Previous work has shown that self-learning can learn high-quality bilingual embedding mappings starting with as little as 25 word pairs (Artetxe et al., 2017) .",
"In this method, training iterates through the following two steps until convergence: 1.",
"Compute the optimal orthogonal mapping maximizing the similarities for the current dictionary D: arg max W X ,W Z i j D ij ((X i * W X ) · (Z j * W Z )) An optimal solution is given by W X = U and W Z = V , where U SV T = X T DZ is the singular value decomposition of X T DZ.",
"2.",
"Compute the optimal dictionary over the similarity matrix of the mapped embeddings XW X W T Z Z T .",
"This typically uses nearest neighbor retrieval from the source language into the target language, so D ij = 1 if j = argmax k (X i * W X ) · (Z k * W Z ) and D ij = 0 otherwise.",
"The underlying optimization objective is independent from the initial dictionary, and the algorithm is guaranteed to converge to a local optimum of it.",
"However, the method does not work if starting from a completely random solution, as it tends to get stuck in poor local optima in that case.",
"For that reason, we use the unsupervised initialization procedure at Section 3.2 to build an initial solution.",
"However, simply plugging in both methods did not work in our preliminary experiments, as the quality of this initial method is not good enough to avoid poor local optima.",
"For that reason, we next propose some key improvements in the dictionary induction step to make self-learning more robust and learn better mappings: • Stochastic dictionary induction.",
"In order to encourage a wider exploration of the search space, we make the dictionary induction stochastic by randomly keeping some elements in the similarity matrix with probability p and setting the remaining ones to 0.",
"As a consequence, the smaller the value of p is, the more the induced dictionary will vary from iteration to iteration, thus enabling to escape poor local optima.",
"So as to find a fine-grained solution once the algorithm gets into a good region, we increase this value during training akin to simulated annealing, starting with p = 0.1 and doubling this value every time the objective function at step 1 above does not improve more than ǫ = 10 −6 for 50 iterations.",
"• Frequency-based vocabulary cutoff.",
"The size of the similarity matrix grows quadratically with respect to that of the vocabularies.",
"This does not only increase the cost of computing it, but it also makes the number of possible solutions grow exponentially 3 , presumably making the optimization problem harder.",
"Given that less frequent words can be expected to be noisier, we propose to restrict the dictionary induction process to the k most frequent words in each language, where we find k = 20, 000 to work well in practice.",
"• CSLS retrieval.",
"showed that nearest neighbor suffers from the hubness problem.",
"This phenomenon is known to occur as an effect of the curse of dimensionality, and causes a few points (known as hubs) to be nearest neighbors of many other points (Radovanović et al., 2010a,b) .",
"Among the existing solutions to penalize the similarity score of hubs, we adopt the Cross-domain Similarity Local Scaling (CSLS) from .",
"Given two mapped embeddings x and y, the idea of CSLS is to compute r T (x) and r S (y), the average cosine similarity of x and y for their k nearest neighbors in the other language, respectively.",
"Having done that, the corrected score CSLS(x, y) = 2 cos(x, y) − r T (x) − r S (y).",
"Following the authors, we set k = 10.",
"• Bidirectional dictionary induction.",
"When the dictionary is induced from the source into the target language, not all target language words will be present in it, and some will occur multiple times.",
"We argue that this might accentuate the problem of local optima, as repeated words might act as strong attractors from which it is difficult to escape.",
"In order to mitigate this issue and encourage diversity, we propose inducing the dictionary in both directions and taking their corresponding concatenation, so D = D X→Z + D Z→X .",
"In order to build the initial dictionary, we compute X ′ and Z ′ as detailed in Section 3.2 and apply the above procedure over them.",
"As the only difference, this first solution does not use the stochastic zeroing in the similarity matrix, as there is no need to encourage diversity (X ′ and Z ′ are only used once), and the threshold for vocabulary cutoff is set to k = 4, 000, so X ′ and Z ′ can fit in memory.",
"Having computed the initial dictionary, X ′ and Z ′ are discarded, and the remaining iterations are performed over the original embeddings X and Z. Symmetric re-weighting As part of their multi-step framework, Artetxe et al.",
"(2018a) showed that re-weighting the target language embeddings according to the crosscorrelation in each component greatly improved the quality of the induced dictionary.",
"Given the singular value decomposition U SV T = X T DZ, this is equivalent to taking W X = U and W Z = V S, where X and Z are previously whitened applying the linear transformations (X T X) − 1 2 and (Z T Z) − 1 2 , and later de-whitened applying U T (X T X) 1 2 U and V T (Z T Z) 1 2 V .",
"However, re-weighting also accentuates the problem of local optima when incorporated into self-learning as, by increasing the relevance of dimensions that best match for the current solution, it discourages to explore other regions of the search space.",
"For that reason, we propose using it as a final step once self-learning has converged to a good solution.",
"Unlike Artetxe et al.",
"(2018a) , we apply re-weighting symmetrically in both languages, taking W X = U S 1 2 and W Z = V S 1 2 .",
"This approach is neutral in the direction of the mapping, and gives good results as shown in our experiments.",
"Experimental settings Following common practice, we evaluate our method on bilingual lexicon extraction, which measures the accuracy of the induced dictionary in comparison to a gold standard.",
"As discussed before, previous evaluation has focused on favorable conditions.",
"In particular, existing unsupervised methods have almost exclusively been tested on Wikipedia corpora, which is comparable rather than monolingual, exposing a strong cross-lingual signal that is not available in strictly unsupervised settings.",
"In addition to that, some datasets comprise unusually small embeddings, with only 50 dimensions and around 5,000-10,000 vocabulary items (Zhang et al., 2017a,b) .",
"As the only exception, report positive results on the English-Italian dataset of in addition to their main experiments, which are carried out in Wikipedia.",
"While this dataset does use strictly monolingual corpora, it still corresponds to a pair of two relatively close indo-european languages.",
"In order to get a wider picture of how our method compares to previous work in different conditions, including more challenging settings, we carry out our experiments in the widely used dataset of and the subsequent extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a , which together comprise English-Italian, English-German, English-Finnish and English-Spanish.",
"More concretely, the dataset consists of 300-dimensional CBOW embeddings trained on WacKy crawling corpora (English, Italian, German), Common Crawl (Finnish) and WMT News Crawl (Spanish).",
"The gold standards were derived from dictionaries built from Europarl word alignments and available at OPUS (Tiedemann, 2012) , split in a test set of 1,500 entries and a training set of 5,000 that we do not use in our experiments.",
"The datasets are freely available.",
"As a non-european agglutinative language, the English-Finnish pair is particularly challeng- Zhang et al.",
"(2017a) .",
"We perform 10 runs for each method and report the best and average accuracies (%), the number of successful runs (those with >5% accuracy) and the average runtime (minutes).",
"EN-IT EN-DE EN-FI EN-ES best avg s t best avg s t best avg s t best avg s t Proposed method 48.53 48.13 10 8.9 48.47 48.19 10 7.3 33.50 32.63 10 12.9 37.60 37.33 10 9.1 Table 2 : Results of unsupervised methods on the dataset of and the extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a .",
"We perform 10 runs for each method and report the best and average accuracies (%), the number of successful runs (those with >5% accuracy) and the average runtime (minutes).",
"ing due to the linguistic distance between them.",
"For completeness, we also test our method in the Spanish-English, Italian-English and Turkish-English datasets of Zhang et al.",
"(2017a) , which consist of 50-dimensional CBOW embeddings trained on Wikipedia, as well as gold standard dictionaries 4 from Open Multilingual WordNet (Spanish-English and Italian-English) and Google Translate (Turkish-English).",
"The lower dimensionality and comparable corpora make an easier scenario, although it also contains a challenging pair of distant languages (Turkish-English).",
"Our method is implemented in Python using NumPy and CuPy.",
"Together with it, we also test the methods of Zhang et al.",
"(2017a) and using the publicly available implementations from the authors 5 .",
"Given that Zhang et al.",
"(2017a) report using a different value of their hyperparameter λ for different language pairs (λ = 10 for English-Turkish and λ = 1 for the rest), we test both values in all our experiments to 4 The test dictionaries were obtained through personal communication with the authors.",
"The rest of the language pairs were left out due to licensing issues.",
"5 Despite our efforts, Zhang et al.",
"(2017b) was left out because: 1) it does not create a one-to-one dictionary, thus difficulting direct comparison, 2) it depends on expensive proprietary software 3) its computational cost is orders of magnitude higher (running the experiments would have taken several months).",
"better understand its effect.",
"In the case of , we test both the default hyperparameters in the source code as well as those reported in the paper, with iterative refinement activated in both cases.",
"Given the instability of these methods, we perform 10 runs for each, and report the best and average accuracies, the number of successful runs (those with >5% accuracy) and the average runtime.",
"All the experiments were run in a single Nvidia Titan Xp.",
"Results and discussion We first present the main results ( §5.1), then the comparison to the state-of-the-art ( §5.2), and finally ablation tests to measure the contribution of each component ( §5.3).",
"Main results We report the results in the dataset of Zhang et al.",
"(2017a) at Table 1 .",
"As it can be seen, the proposed method performs at par with that of both in Spanish-English and Italian-English, but gets substantially better results in the more challenging Turkish-English pair.",
"While we are able to reproduce the results reported by Zhang et al.",
"(2017a) , their method gets the worst results of all by a large margin.",
"Another disadvantage of that model is that different et al.",
"(2018a) .",
"The remaining results were reported in the original papers.",
"For methods that do not require supervision, we report the average accuracy across 10 runs.",
"‡ For meaningful comparison, runs with <5% accuracy are excluded when computing the average, but note that, unlike ours, their method often gives a degenerated solution (see Table 2 ).",
"language pairs require different hyperparameters: λ = 1 works substantially better for Spanish-English and Italian-English, but only λ = 10 works for Turkish-English.",
"The results for the more challenging dataset from and the extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a are given in Table 2 .",
"In this case, our proposed method obtains the best results in all metrics for all the four language pairs tested.",
"The method of Zhang et al.",
"(2017a) does not work at all in this more challenging scenario, which is in line with the negative results reported by the authors themselves for similar conditions (only %2.53 accuracy in their large Gigaword dataset).",
"The method of also fails for English-Finnish (only 1.62% in the best run), although it is able to get positive results in some runs for the rest of language pairs.",
"Between the two configurations tested, the default hyperparameters in the code show a more stable behavior.",
"These results confirm the robustness of the proposed method.",
"While the other systems succeed in some runs and fail in others, our method converges to a good solution in all runs without excep-tion and, in fact, it is the only one getting positive results for English-Finnish.",
"In addition to being more robust, our method also obtains substantially better accuracies, surpassing previous methods by at least 1-3 points in all but the easiest pairs.",
"Moreover, our method is not sensitive to hyperparameters that are difficult to tune without a development set, which is critical in realistic unsupervised conditions.",
"At the same time, our method is significantly faster than the rest.",
"In relation to that, it is interesting that, while previous methods perform a fixed number of iterations and take practically the same time for all the different language pairs, the runtime of our method adapts to the difficulty of the task thanks to the dynamic convergence criterion of our stochastic approach.",
"This way, our method tends to take longer for more challenging language pairs (1.7 vs 0.6 minutes for es-en and tr-en in one dataset, and 12.9 vs 7.3 minutes for en-fi and en-de in the other) and, in fact, our (relative) execution times correlate surprisingly well with the linguistic distance with English (closest/fastest is German, followed by Italian/Spanish, followed by Turkish/Finnish).",
"Table 4 : Ablation test on the dataset of and the extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a .",
"We perform 10 runs for each method and report the best and average accuracies (%), the number of successful runs (those with >5% accuracy) and the average runtime (minutes).",
"Table 3 shows the results of the proposed method in comparison to previous systems, including those with different degrees of supervision.",
"We focus on the widely used English-Italian dataset of and its extensions.",
"Despite being fully unsupervised, our method achieves the best results in all language pairs but one, even surpassing previous supervised approaches.",
"The only exception is English-Finnish, where Artetxe et al.",
"Comparison with the state-of-the-art (2018a) gets marginally better results with a difference of 0.3 points, yet ours is the only unsupervised system that works for this pair.",
"At the same time, it is remarkable that the proposed system gets substantially better results than Artetxe et al.",
"(2017) , the only other system based on selflearning, with the additional advantage of being fully unsupervised.",
"Ablation test In order to better understand the role of different aspects in the proposed system, we perform an ablation test, where we separately analyze the effect of initialization, the different components of our robust self-learning algorithm, and the final symmetric re-weighting.",
"The obtained results are reported in Table 4 .",
"In concordance with previous work, our results show that self-learning does not work with random initialization.",
"However, the proposed unsupervised initialization is able to overcome this issue without the need of any additional information, performing at par with other character-level heuristics that we tested (e.g.",
"shared numerals).",
"As for the different self-learning components, we observe that the stochastic dictionary induction is necessary to overcome the problem of poor lo-cal optima for English-Finnish, although it does not make any difference for the rest of easier language pairs.",
"The frequency-based vocabulary cutoff also has a positive effect, yielding to slightly better accuracies and much faster runtimes.",
"At the same time, CSLS plays a critical role in the system, as hubness severely accentuates the problem of local optima in its absence.",
"The bidirectional dictionary induction is also beneficial, contributing to the robustness of the system as shown by English-Finnish and yielding to better accuracies in all cases.",
"Finally, these results also show that symmetric re-weighting contributes positively, bringing an improvement of around 1-2 points without any cost in the execution time.",
"Conclusions In this paper, we show that previous unsupervised mapping methods (Zhang et al., 2017a; often fail on realistic scenarios involving non-comparable corpora and/or distant languages.",
"In contrast to adversarial methods, we propose to use an initial weak mapping that exploits the structure of the embedding spaces in combination with a robust self-learning approach.",
"The results show that our method succeeds in all cases, providing the best results with respect to all previous work on unsupervised and supervised mappings.",
"The ablation analysis shows that our initial solution is instrumental for making self-learning work without supervision.",
"In order to make selflearning robust, we also added stochasticity to dictionary induction, used CSLS instead of nearest neighbor, and produced bidirectional dictionaries.",
"Results also improved using smaller in-termediate vocabularies and re-weighting the final solution.",
"Our implementation is available as an open source project at https://github.",
"com/artetxem/vecmap.",
"In the future, we would like to extend the method from the bilingual to the multilingual scenario, and go beyond the word level by incorporating embeddings of longer phrases."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"5",
"5.1",
"5.2",
"5.3",
"6"
],
"paper_header_content": [
"Introduction",
"Related work",
"Proposed method",
"Embedding normalization",
"Fully unsupervised initialization",
"Robust self-learning",
"Symmetric re-weighting",
"Experimental settings",
"Results and discussion",
"Main results",
"Comparison with the state-of-the-art",
"Ablation test",
"Conclusions"
]
} | GEM-SciDuet-train-24#paper-1025#slide-3 | Proposed method | 1) Fully unsupervised initialization
for x in vocab:
two due (two) cane (dog)
= sorted = sorted
- Stochastic dictionary induction
- Frequency-based vocabulary cutoff
- Bidirectional dictionary induction
- Final symmetric re-weighting (Artetxe et al., 2018) | 1) Fully unsupervised initialization
for x in vocab:
two due (two) cane (dog)
= sorted = sorted
- Stochastic dictionary induction
- Frequency-based vocabulary cutoff
- Bidirectional dictionary induction
- Final symmetric re-weighting (Artetxe et al., 2018) | [] |
GEM-SciDuet-train-24#paper-1025#slide-4 | 1025 | A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings | Recent work has managed to learn crosslingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training. However, their evaluation has focused on favorable conditions, using comparable corpora or closely-related languages, and we show that they often fail in more realistic scenarios. This work proposes an alternative approach based on a fully unsupervised initialization that explicitly exploits the structural similarity of the embeddings, and a robust self-learning algorithm that iteratively improves this solution. Our method succeeds in all tested scenarios and obtains the best published results in standard datasets, even surpassing previous supervised systems. Our implementation is released as an open source project at https://github. com/artetxem/vecmap. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194
],
"paper_content_text": [
"Introduction Cross-lingual embedding mappings have shown to be an effective way to learn bilingual word embeddings (Mikolov et al., 2013; .",
"The underlying idea is to independently train the embeddings in different languages using monolingual corpora, and then map them to a shared space through a linear transformation.",
"This allows to learn high-quality cross-lingual representations without expensive supervision, opening new research avenues like unsupervised neural machine translation (Artetxe et al., 2018b; .",
"While most embedding mapping methods rely on a small seed dictionary, adversarial training has recently produced exciting results in fully unsu-pervised settings (Zhang et al., 2017a,b; .",
"However, their evaluation has focused on particularly favorable conditions, limited to closely-related languages or comparable Wikipedia corpora.",
"When tested on more realistic scenarios, we find that they often fail to produce meaningful results.",
"For instance, none of the existing methods works in the standard English-Finnish dataset from Artetxe et al.",
"(2017) , obtaining translation accuracies below 2% in all cases (see Section 5).",
"On another strand of work, Artetxe et al.",
"(2017) showed that an iterative self-learning method is able to bootstrap a high quality mapping from very small seed dictionaries (as little as 25 pairs of words).",
"However, their analysis reveals that the self-learning method gets stuck in poor local optima when the initial solution is not good enough, thus failing for smaller training dictionaries.",
"In this paper, we follow this second approach and propose a new unsupervised method to build an initial solution without the need of a seed dictionary, based on the observation that, given the similarity matrix of all words in the vocabulary, each word has a different distribution of similarity values.",
"Two equivalent words in different languages should have a similar distribution, and we can use this fact to induce the initial set of word pairings (see Figure 1 ).",
"We combine this initialization with a more robust self-learning method, which is able to start from the weak initial solution and iteratively improve the mapping.",
"Coupled together, we provide a fully unsupervised crosslingual mapping method that is effective in realistic settings, converges to a good solution in all cases tested, and sets a new state-of-the-art in bilingual lexicon extraction, even surpassing previous supervised methods.",
"Figure 1 : Motivating example for our unsupervised initialization method, showing the similarity distributions of three words (corresponding to the smoothed density estimates from the normalized square root of the similarity matrices as defined in Section 3.2).",
"Equivalent translations (two and due) have more similar distributions than non-related words (two and cane -meaning dog).",
"This observation is used to build an initial solution that is later improved through self-learning.",
"Related work Cross-lingual embedding mapping methods work by independently training word embeddings in two languages, and then mapping them to a shared space using a linear transformation.",
"Most of these methods are supervised, and use a bilingual dictionary of a few thousand entries to learn the mapping.",
"Existing approaches can be classified into regression methods, which map the embeddings in one language using a leastsquares objective (Mikolov et al., 2013; Shigeto et al., 2015; , canonical methods, which map the embeddings in both languages to a shared space using canonical correlation analysis and extensions of it (Faruqui and Dyer, 2014; Lu et al., 2015) , orthogonal methods, which map the embeddings in one or both languages under the constraint of the transformation being orthogonal (Xing et al., 2015; Artetxe et al., 2016; Zhang et al., 2016; Smith et al., 2017) , and margin methods, which map the embeddings in one language to maximize the margin between the correct translations and the rest of the candidates .",
"Artetxe et al.",
"(2018a) showed that many of them could be generalized as part of a multi-step framework of linear transformations.",
"A related research line is to adapt these methods to the semi-supervised scenario, where the training dictionary is much smaller and used as part of a bootstrapping process.",
"While similar ideas where already explored for traditional count-based vector space models (Peirsman and Padó, 2010; Vulić and Moens, 2013) , Artetxe et al.",
"(2017) brought this approach to pre-trained low-dimensional word embeddings, which are more widely used nowadays.",
"More concretely, they proposed a selflearning approach that alternates the mapping and dictionary induction steps iteratively, obtaining results that are comparable to those of supervised methods when starting with only 25 word pairs.",
"A practical approach for reducing the need of bilingual supervision is to design heuristics to build the seed dictionary.",
"The role of the seed lexicon in learning cross-lingual embedding mappings is analyzed in depth by Vulić and Korhonen (2016) , who propose using document-aligned corpora to extract the training dictionary.",
"A more common approach is to rely on shared words and cognates (Peirsman and Padó, 2010; Smith et al., 2017) , while Artetxe et al.",
"(2017) go further and restrict themselves to shared numerals.",
"However, while these approaches are meant to eliminate the need of bilingual data in practice, they also make strong assumptions on the writing systems of languages (e.g.",
"that they all use a common alphabet or Arabic numerals).",
"Closer to our work, a recent line of fully unsupervised approaches drops these assumptions completely, and attempts to learn cross-lingual embedding mappings based on distributional information alone.",
"For that purpose, existing methods rely on adversarial training.",
"This was first proposed by Miceli Barone (2016), who combine an encoder that maps source language embeddings into the target language, a decoder that reconstructs the source language embeddings from the mapped embeddings, and a discriminator that discriminates between the mapped embeddings and the true target language embed-dings.",
"Despite promising, they conclude that their model \"is not competitive with other cross-lingual representation approaches\".",
"Zhang et al.",
"(2017a) use a very similar architecture, but incorporate additional techniques like noise injection to aid training and report competitive results on bilingual lexicon extraction.",
"drop the reconstruction component, regularize the mapping to be orthogonal, and incorporate an iterative refinement process akin to self-learning, reporting very strong results on a large bilingual lexicon extraction dataset.",
"Finally, Zhang et al.",
"(2017b) adopt the earth mover's distance for training, optimized through a Wasserstein generative adversarial network followed by an alternating optimization procedure.",
"However, all this previous work used comparable Wikipedia corpora in most experiments and, as shown in Section 5, face difficulties in more challenging settings.",
"Proposed method Let X and Z be the word embedding matrices in two languages, so that their ith row X i * and Z i * denote the embeddings of the ith word in their respective vocabularies.",
"Our goal is to learn the linear transformation matrices W X and W Z so the mapped embeddings XW X and ZW Z are in the same cross-lingual space.",
"At the same time, we aim to build a dictionary between both languages, encoded as a sparse matrix D where D ij = 1 if the jth word in the target language is a translation of the ith word in the source language.",
"Our proposed method consists of four sequential steps: a pre-processing that normalizes the embeddings ( §3.1), a fully unsupervised initialization scheme that creates an initial solution ( §3.2), a robust self-learning procedure that iteratively improves this solution ( §3.3), and a final refinement step that further improves the resulting mapping through symmetric re-weighting ( §3.4).",
"Embedding normalization Our method starts with a pre-processing that length normalizes the embeddings, then mean centers each dimension, and then length normalizes them again.",
"The first two steps have been shown to be beneficial in previous work (Artetxe et al., 2016) , while the second length normalization guarantees the final embeddings to have a unit length.",
"As a result, the dot product of any two embeddings is equivalent to their cosine similarity and directly related to their Euclidean distance 1 , and can be taken as a measure of their similarity.",
"Fully unsupervised initialization The underlying difficulty of the mapping problem in its unsupervised variant is that the word embedding matrices X and Z are unaligned across both axes: neither the ith vocabulary item X i * and Z i * nor the jth dimension of the embeddings X * j and Z * j are aligned, so there is no direct correspondence between both languages.",
"In order to overcome this challenge and build an initial solution, we propose to first construct two alternative representations X ′ and Z ′ that are aligned across their jth dimension X ′ * j and Z ′ * j , which can later be used to build an initial dictionary that aligns their respective vocabularies.",
"Our approach is based on a simple idea: while the axes of the original embeddings X and Z are different in nature, both axes of their corresponding similarity matrices M X = XX T and M Z = ZZ T correspond to words, which can be exploited to reduce the mismatch to a single axis.",
"More concretely, assuming that the embedding spaces are perfectly isometric, the similarity matrices M X and M Z would be equivalent up to a permutation of their rows and columns, where the permutation in question defines the dictionary across both languages.",
"In practice, the isometry requirement will not hold exactly, but it can be assumed to hold approximately, as the very same problem of mapping two embedding spaces without supervision would otherwise be hopeless.",
"Based on that, one could try every possible permutation of row and column indices to find the best match between M X and M Z , but the resulting combinatorial explosion makes this approach intractable.",
"In order to overcome this problem, we propose to first sort the values in each row of M X and M Z , resulting in matrices sorted(M X ) and sorted(M Z ) 2 .",
"Under the strict isometry condition, equivalent words would get the exact same vector across languages, and thus, given a word and its row in sorted(M X ), one could apply nearest neighbor retrieval over the rows of sorted(M Z ) to find its corresponding translation.",
"On a final note, given the singular value decomposition X = U SV T , the similarity matrix is M X = U S 2 U T .",
"As such, its square root √ M X = U SU T is closer in nature to the original embeddings, and we also find it to work better in practice.",
"We thus compute sorted( √ M X ) and sorted( √ M Z ) and normalize them as described in Section 3.1, yielding the two matrices X ′ and Z ′ that are later used to build the initial solution for self-learning (see Section 3.3).",
"In practice, the isometry assumption is strong enough so the above procedure captures some cross-lingual signal.",
"In our English-Italian experiments, the average cosine similarity across the gold standard translation pairs is 0.009 for a random solution, 0.582 for the optimal supervised solution, and 0.112 for the mapping resulting from this initialization.",
"While the latter is far from being useful on its own (the accuracy of the resulting dictionary is only 0.52%), it is substantially better than chance, and it works well as an initial solution for the self-learning method described next.",
"Robust self-learning Previous work has shown that self-learning can learn high-quality bilingual embedding mappings starting with as little as 25 word pairs (Artetxe et al., 2017) .",
"In this method, training iterates through the following two steps until convergence: 1.",
"Compute the optimal orthogonal mapping maximizing the similarities for the current dictionary D: arg max W X ,W Z i j D ij ((X i * W X ) · (Z j * W Z )) An optimal solution is given by W X = U and W Z = V , where U SV T = X T DZ is the singular value decomposition of X T DZ.",
"2.",
"Compute the optimal dictionary over the similarity matrix of the mapped embeddings XW X W T Z Z T .",
"This typically uses nearest neighbor retrieval from the source language into the target language, so D ij = 1 if j = argmax k (X i * W X ) · (Z k * W Z ) and D ij = 0 otherwise.",
"The underlying optimization objective is independent from the initial dictionary, and the algorithm is guaranteed to converge to a local optimum of it.",
"However, the method does not work if starting from a completely random solution, as it tends to get stuck in poor local optima in that case.",
"For that reason, we use the unsupervised initialization procedure at Section 3.2 to build an initial solution.",
"However, simply plugging in both methods did not work in our preliminary experiments, as the quality of this initial method is not good enough to avoid poor local optima.",
"For that reason, we next propose some key improvements in the dictionary induction step to make self-learning more robust and learn better mappings: • Stochastic dictionary induction.",
"In order to encourage a wider exploration of the search space, we make the dictionary induction stochastic by randomly keeping some elements in the similarity matrix with probability p and setting the remaining ones to 0.",
"As a consequence, the smaller the value of p is, the more the induced dictionary will vary from iteration to iteration, thus enabling to escape poor local optima.",
"So as to find a fine-grained solution once the algorithm gets into a good region, we increase this value during training akin to simulated annealing, starting with p = 0.1 and doubling this value every time the objective function at step 1 above does not improve more than ǫ = 10 −6 for 50 iterations.",
"• Frequency-based vocabulary cutoff.",
"The size of the similarity matrix grows quadratically with respect to that of the vocabularies.",
"This does not only increase the cost of computing it, but it also makes the number of possible solutions grow exponentially 3 , presumably making the optimization problem harder.",
"Given that less frequent words can be expected to be noisier, we propose to restrict the dictionary induction process to the k most frequent words in each language, where we find k = 20, 000 to work well in practice.",
"• CSLS retrieval.",
"showed that nearest neighbor suffers from the hubness problem.",
"This phenomenon is known to occur as an effect of the curse of dimensionality, and causes a few points (known as hubs) to be nearest neighbors of many other points (Radovanović et al., 2010a,b) .",
"Among the existing solutions to penalize the similarity score of hubs, we adopt the Cross-domain Similarity Local Scaling (CSLS) from .",
"Given two mapped embeddings x and y, the idea of CSLS is to compute r T (x) and r S (y), the average cosine similarity of x and y for their k nearest neighbors in the other language, respectively.",
"Having done that, the corrected score CSLS(x, y) = 2 cos(x, y) − r T (x) − r S (y).",
"Following the authors, we set k = 10.",
"• Bidirectional dictionary induction.",
"When the dictionary is induced from the source into the target language, not all target language words will be present in it, and some will occur multiple times.",
"We argue that this might accentuate the problem of local optima, as repeated words might act as strong attractors from which it is difficult to escape.",
"In order to mitigate this issue and encourage diversity, we propose inducing the dictionary in both directions and taking their corresponding concatenation, so D = D X→Z + D Z→X .",
"In order to build the initial dictionary, we compute X ′ and Z ′ as detailed in Section 3.2 and apply the above procedure over them.",
"As the only difference, this first solution does not use the stochastic zeroing in the similarity matrix, as there is no need to encourage diversity (X ′ and Z ′ are only used once), and the threshold for vocabulary cutoff is set to k = 4, 000, so X ′ and Z ′ can fit in memory.",
"Having computed the initial dictionary, X ′ and Z ′ are discarded, and the remaining iterations are performed over the original embeddings X and Z. Symmetric re-weighting As part of their multi-step framework, Artetxe et al.",
"(2018a) showed that re-weighting the target language embeddings according to the crosscorrelation in each component greatly improved the quality of the induced dictionary.",
"Given the singular value decomposition U SV T = X T DZ, this is equivalent to taking W X = U and W Z = V S, where X and Z are previously whitened applying the linear transformations (X T X) − 1 2 and (Z T Z) − 1 2 , and later de-whitened applying U T (X T X) 1 2 U and V T (Z T Z) 1 2 V .",
"However, re-weighting also accentuates the problem of local optima when incorporated into self-learning as, by increasing the relevance of dimensions that best match for the current solution, it discourages to explore other regions of the search space.",
"For that reason, we propose using it as a final step once self-learning has converged to a good solution.",
"Unlike Artetxe et al.",
"(2018a) , we apply re-weighting symmetrically in both languages, taking W X = U S 1 2 and W Z = V S 1 2 .",
"This approach is neutral in the direction of the mapping, and gives good results as shown in our experiments.",
"Experimental settings Following common practice, we evaluate our method on bilingual lexicon extraction, which measures the accuracy of the induced dictionary in comparison to a gold standard.",
"As discussed before, previous evaluation has focused on favorable conditions.",
"In particular, existing unsupervised methods have almost exclusively been tested on Wikipedia corpora, which is comparable rather than monolingual, exposing a strong cross-lingual signal that is not available in strictly unsupervised settings.",
"In addition to that, some datasets comprise unusually small embeddings, with only 50 dimensions and around 5,000-10,000 vocabulary items (Zhang et al., 2017a,b) .",
"As the only exception, report positive results on the English-Italian dataset of in addition to their main experiments, which are carried out in Wikipedia.",
"While this dataset does use strictly monolingual corpora, it still corresponds to a pair of two relatively close indo-european languages.",
"In order to get a wider picture of how our method compares to previous work in different conditions, including more challenging settings, we carry out our experiments in the widely used dataset of and the subsequent extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a , which together comprise English-Italian, English-German, English-Finnish and English-Spanish.",
"More concretely, the dataset consists of 300-dimensional CBOW embeddings trained on WacKy crawling corpora (English, Italian, German), Common Crawl (Finnish) and WMT News Crawl (Spanish).",
"The gold standards were derived from dictionaries built from Europarl word alignments and available at OPUS (Tiedemann, 2012) , split in a test set of 1,500 entries and a training set of 5,000 that we do not use in our experiments.",
"The datasets are freely available.",
"As a non-european agglutinative language, the English-Finnish pair is particularly challeng- Zhang et al.",
"(2017a) .",
"We perform 10 runs for each method and report the best and average accuracies (%), the number of successful runs (those with >5% accuracy) and the average runtime (minutes).",
"EN-IT EN-DE EN-FI EN-ES best avg s t best avg s t best avg s t best avg s t Proposed method 48.53 48.13 10 8.9 48.47 48.19 10 7.3 33.50 32.63 10 12.9 37.60 37.33 10 9.1 Table 2 : Results of unsupervised methods on the dataset of and the extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a .",
"We perform 10 runs for each method and report the best and average accuracies (%), the number of successful runs (those with >5% accuracy) and the average runtime (minutes).",
"ing due to the linguistic distance between them.",
"For completeness, we also test our method in the Spanish-English, Italian-English and Turkish-English datasets of Zhang et al.",
"(2017a) , which consist of 50-dimensional CBOW embeddings trained on Wikipedia, as well as gold standard dictionaries 4 from Open Multilingual WordNet (Spanish-English and Italian-English) and Google Translate (Turkish-English).",
"The lower dimensionality and comparable corpora make an easier scenario, although it also contains a challenging pair of distant languages (Turkish-English).",
"Our method is implemented in Python using NumPy and CuPy.",
"Together with it, we also test the methods of Zhang et al.",
"(2017a) and using the publicly available implementations from the authors 5 .",
"Given that Zhang et al.",
"(2017a) report using a different value of their hyperparameter λ for different language pairs (λ = 10 for English-Turkish and λ = 1 for the rest), we test both values in all our experiments to 4 The test dictionaries were obtained through personal communication with the authors.",
"The rest of the language pairs were left out due to licensing issues.",
"5 Despite our efforts, Zhang et al.",
"(2017b) was left out because: 1) it does not create a one-to-one dictionary, thus difficulting direct comparison, 2) it depends on expensive proprietary software 3) its computational cost is orders of magnitude higher (running the experiments would have taken several months).",
"better understand its effect.",
"In the case of , we test both the default hyperparameters in the source code as well as those reported in the paper, with iterative refinement activated in both cases.",
"Given the instability of these methods, we perform 10 runs for each, and report the best and average accuracies, the number of successful runs (those with >5% accuracy) and the average runtime.",
"All the experiments were run in a single Nvidia Titan Xp.",
"Results and discussion We first present the main results ( §5.1), then the comparison to the state-of-the-art ( §5.2), and finally ablation tests to measure the contribution of each component ( §5.3).",
"Main results We report the results in the dataset of Zhang et al.",
"(2017a) at Table 1 .",
"As it can be seen, the proposed method performs at par with that of both in Spanish-English and Italian-English, but gets substantially better results in the more challenging Turkish-English pair.",
"While we are able to reproduce the results reported by Zhang et al.",
"(2017a) , their method gets the worst results of all by a large margin.",
"Another disadvantage of that model is that different et al.",
"(2018a) .",
"The remaining results were reported in the original papers.",
"For methods that do not require supervision, we report the average accuracy across 10 runs.",
"‡ For meaningful comparison, runs with <5% accuracy are excluded when computing the average, but note that, unlike ours, their method often gives a degenerated solution (see Table 2 ).",
"language pairs require different hyperparameters: λ = 1 works substantially better for Spanish-English and Italian-English, but only λ = 10 works for Turkish-English.",
"The results for the more challenging dataset from and the extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a are given in Table 2 .",
"In this case, our proposed method obtains the best results in all metrics for all the four language pairs tested.",
"The method of Zhang et al.",
"(2017a) does not work at all in this more challenging scenario, which is in line with the negative results reported by the authors themselves for similar conditions (only %2.53 accuracy in their large Gigaword dataset).",
"The method of also fails for English-Finnish (only 1.62% in the best run), although it is able to get positive results in some runs for the rest of language pairs.",
"Between the two configurations tested, the default hyperparameters in the code show a more stable behavior.",
"These results confirm the robustness of the proposed method.",
"While the other systems succeed in some runs and fail in others, our method converges to a good solution in all runs without excep-tion and, in fact, it is the only one getting positive results for English-Finnish.",
"In addition to being more robust, our method also obtains substantially better accuracies, surpassing previous methods by at least 1-3 points in all but the easiest pairs.",
"Moreover, our method is not sensitive to hyperparameters that are difficult to tune without a development set, which is critical in realistic unsupervised conditions.",
"At the same time, our method is significantly faster than the rest.",
"In relation to that, it is interesting that, while previous methods perform a fixed number of iterations and take practically the same time for all the different language pairs, the runtime of our method adapts to the difficulty of the task thanks to the dynamic convergence criterion of our stochastic approach.",
"This way, our method tends to take longer for more challenging language pairs (1.7 vs 0.6 minutes for es-en and tr-en in one dataset, and 12.9 vs 7.3 minutes for en-fi and en-de in the other) and, in fact, our (relative) execution times correlate surprisingly well with the linguistic distance with English (closest/fastest is German, followed by Italian/Spanish, followed by Turkish/Finnish).",
"Table 4 : Ablation test on the dataset of and the extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a .",
"We perform 10 runs for each method and report the best and average accuracies (%), the number of successful runs (those with >5% accuracy) and the average runtime (minutes).",
"Table 3 shows the results of the proposed method in comparison to previous systems, including those with different degrees of supervision.",
"We focus on the widely used English-Italian dataset of and its extensions.",
"Despite being fully unsupervised, our method achieves the best results in all language pairs but one, even surpassing previous supervised approaches.",
"The only exception is English-Finnish, where Artetxe et al.",
"Comparison with the state-of-the-art (2018a) gets marginally better results with a difference of 0.3 points, yet ours is the only unsupervised system that works for this pair.",
"At the same time, it is remarkable that the proposed system gets substantially better results than Artetxe et al.",
"(2017) , the only other system based on selflearning, with the additional advantage of being fully unsupervised.",
"Ablation test In order to better understand the role of different aspects in the proposed system, we perform an ablation test, where we separately analyze the effect of initialization, the different components of our robust self-learning algorithm, and the final symmetric re-weighting.",
"The obtained results are reported in Table 4 .",
"In concordance with previous work, our results show that self-learning does not work with random initialization.",
"However, the proposed unsupervised initialization is able to overcome this issue without the need of any additional information, performing at par with other character-level heuristics that we tested (e.g.",
"shared numerals).",
"As for the different self-learning components, we observe that the stochastic dictionary induction is necessary to overcome the problem of poor lo-cal optima for English-Finnish, although it does not make any difference for the rest of easier language pairs.",
"The frequency-based vocabulary cutoff also has a positive effect, yielding to slightly better accuracies and much faster runtimes.",
"At the same time, CSLS plays a critical role in the system, as hubness severely accentuates the problem of local optima in its absence.",
"The bidirectional dictionary induction is also beneficial, contributing to the robustness of the system as shown by English-Finnish and yielding to better accuracies in all cases.",
"Finally, these results also show that symmetric re-weighting contributes positively, bringing an improvement of around 1-2 points without any cost in the execution time.",
"Conclusions In this paper, we show that previous unsupervised mapping methods (Zhang et al., 2017a; often fail on realistic scenarios involving non-comparable corpora and/or distant languages.",
"In contrast to adversarial methods, we propose to use an initial weak mapping that exploits the structure of the embedding spaces in combination with a robust self-learning approach.",
"The results show that our method succeeds in all cases, providing the best results with respect to all previous work on unsupervised and supervised mappings.",
"The ablation analysis shows that our initial solution is instrumental for making self-learning work without supervision.",
"In order to make selflearning robust, we also added stochasticity to dictionary induction, used CSLS instead of nearest neighbor, and produced bidirectional dictionaries.",
"Results also improved using smaller in-termediate vocabularies and re-weighting the final solution.",
"Our implementation is available as an open source project at https://github.",
"com/artetxem/vecmap.",
"In the future, we would like to extend the method from the bilingual to the multilingual scenario, and go beyond the word level by incorporating embeddings of longer phrases."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"5",
"5.1",
"5.2",
"5.3",
"6"
],
"paper_header_content": [
"Introduction",
"Related work",
"Proposed method",
"Embedding normalization",
"Fully unsupervised initialization",
"Robust self-learning",
"Symmetric re-weighting",
"Experimental settings",
"Results and discussion",
"Main results",
"Comparison with the state-of-the-art",
"Ablation test",
"Conclusions"
]
} | GEM-SciDuet-train-24#paper-1025#slide-4 | Experiments | 10 runs for each method
Successful runs (>5% accuracy)
Method ES-EN IT-EN TR-EN
Number of successful runs
(Hard) dataset by Dinu et al. (2016) + extensions
Supervision Method EN-IT EN-DE EN-FI EN-ES | 10 runs for each method
Successful runs (>5% accuracy)
Method ES-EN IT-EN TR-EN
Number of successful runs
(Hard) dataset by Dinu et al. (2016) + extensions
Supervision Method EN-IT EN-DE EN-FI EN-ES | [] |
GEM-SciDuet-train-24#paper-1025#slide-5 | 1025 | A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings | Recent work has managed to learn cross-lingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training. However, their evaluation has focused on favorable conditions, using comparable corpora or closely-related languages, and we show that they often fail in more realistic scenarios. This work proposes an alternative approach based on a fully unsupervised initialization that explicitly exploits the structural similarity of the embeddings, and a robust self-learning algorithm that iteratively improves this solution. Our method succeeds in all tested scenarios and obtains the best published results in standard datasets, even surpassing previous supervised systems. Our implementation is released as an open source project at https://github.com/artetxem/vecmap. |
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194
],
"paper_content_text": [
"Introduction Cross-lingual embedding mappings have shown to be an effective way to learn bilingual word embeddings (Mikolov et al., 2013; .",
"The underlying idea is to independently train the embeddings in different languages using monolingual corpora, and then map them to a shared space through a linear transformation.",
"This allows to learn high-quality cross-lingual representations without expensive supervision, opening new research avenues like unsupervised neural machine translation (Artetxe et al., 2018b; .",
"While most embedding mapping methods rely on a small seed dictionary, adversarial training has recently produced exciting results in fully unsu-pervised settings (Zhang et al., 2017a,b; .",
"However, their evaluation has focused on particularly favorable conditions, limited to closely-related languages or comparable Wikipedia corpora.",
"When tested on more realistic scenarios, we find that they often fail to produce meaningful results.",
"For instance, none of the existing methods works in the standard English-Finnish dataset from Artetxe et al.",
"(2017) , obtaining translation accuracies below 2% in all cases (see Section 5).",
"On another strand of work, Artetxe et al.",
"(2017) showed that an iterative self-learning method is able to bootstrap a high quality mapping from very small seed dictionaries (as little as 25 pairs of words).",
"However, their analysis reveals that the self-learning method gets stuck in poor local optima when the initial solution is not good enough, thus failing for smaller training dictionaries.",
"In this paper, we follow this second approach and propose a new unsupervised method to build an initial solution without the need of a seed dictionary, based on the observation that, given the similarity matrix of all words in the vocabulary, each word has a different distribution of similarity values.",
"Two equivalent words in different languages should have a similar distribution, and we can use this fact to induce the initial set of word pairings (see Figure 1 ).",
"We combine this initialization with a more robust self-learning method, which is able to start from the weak initial solution and iteratively improve the mapping.",
"Coupled together, we provide a fully unsupervised crosslingual mapping method that is effective in realistic settings, converges to a good solution in all cases tested, and sets a new state-of-the-art in bilingual lexicon extraction, even surpassing previous supervised methods.",
"Figure 1 : Motivating example for our unsupervised initialization method, showing the similarity distributions of three words (corresponding to the smoothed density estimates from the normalized square root of the similarity matrices as defined in Section 3.2).",
"Equivalent translations (two and due) have more similar distributions than non-related words (two and cane -meaning dog).",
"This observation is used to build an initial solution that is later improved through self-learning.",
"Related work Cross-lingual embedding mapping methods work by independently training word embeddings in two languages, and then mapping them to a shared space using a linear transformation.",
"Most of these methods are supervised, and use a bilingual dictionary of a few thousand entries to learn the mapping.",
"Existing approaches can be classified into regression methods, which map the embeddings in one language using a leastsquares objective (Mikolov et al., 2013; Shigeto et al., 2015; , canonical methods, which map the embeddings in both languages to a shared space using canonical correlation analysis and extensions of it (Faruqui and Dyer, 2014; Lu et al., 2015) , orthogonal methods, which map the embeddings in one or both languages under the constraint of the transformation being orthogonal (Xing et al., 2015; Artetxe et al., 2016; Zhang et al., 2016; Smith et al., 2017) , and margin methods, which map the embeddings in one language to maximize the margin between the correct translations and the rest of the candidates .",
"Artetxe et al.",
"(2018a) showed that many of them could be generalized as part of a multi-step framework of linear transformations.",
"A related research line is to adapt these methods to the semi-supervised scenario, where the training dictionary is much smaller and used as part of a bootstrapping process.",
"While similar ideas where already explored for traditional count-based vector space models (Peirsman and Padó, 2010; Vulić and Moens, 2013) , Artetxe et al.",
"(2017) brought this approach to pre-trained low-dimensional word embeddings, which are more widely used nowadays.",
"More concretely, they proposed a selflearning approach that alternates the mapping and dictionary induction steps iteratively, obtaining results that are comparable to those of supervised methods when starting with only 25 word pairs.",
"A practical approach for reducing the need of bilingual supervision is to design heuristics to build the seed dictionary.",
"The role of the seed lexicon in learning cross-lingual embedding mappings is analyzed in depth by Vulić and Korhonen (2016) , who propose using document-aligned corpora to extract the training dictionary.",
"A more common approach is to rely on shared words and cognates (Peirsman and Padó, 2010; Smith et al., 2017) , while Artetxe et al.",
"(2017) go further and restrict themselves to shared numerals.",
"However, while these approaches are meant to eliminate the need of bilingual data in practice, they also make strong assumptions on the writing systems of languages (e.g.",
"that they all use a common alphabet or Arabic numerals).",
"Closer to our work, a recent line of fully unsupervised approaches drops these assumptions completely, and attempts to learn cross-lingual embedding mappings based on distributional information alone.",
"For that purpose, existing methods rely on adversarial training.",
"This was first proposed by Miceli Barone (2016), who combine an encoder that maps source language embeddings into the target language, a decoder that reconstructs the source language embeddings from the mapped embeddings, and a discriminator that discriminates between the mapped embeddings and the true target language embed-dings.",
"Despite promising, they conclude that their model \"is not competitive with other cross-lingual representation approaches\".",
"Zhang et al.",
"(2017a) use a very similar architecture, but incorporate additional techniques like noise injection to aid training and report competitive results on bilingual lexicon extraction.",
"drop the reconstruction component, regularize the mapping to be orthogonal, and incorporate an iterative refinement process akin to self-learning, reporting very strong results on a large bilingual lexicon extraction dataset.",
"Finally, Zhang et al.",
"(2017b) adopt the earth mover's distance for training, optimized through a Wasserstein generative adversarial network followed by an alternating optimization procedure.",
"However, all this previous work used comparable Wikipedia corpora in most experiments and, as shown in Section 5, face difficulties in more challenging settings.",
"Proposed method Let X and Z be the word embedding matrices in two languages, so that their ith row X i * and Z i * denote the embeddings of the ith word in their respective vocabularies.",
"Our goal is to learn the linear transformation matrices W X and W Z so the mapped embeddings XW X and ZW Z are in the same cross-lingual space.",
"At the same time, we aim to build a dictionary between both languages, encoded as a sparse matrix D where D ij = 1 if the jth word in the target language is a translation of the ith word in the source language.",
"Our proposed method consists of four sequential steps: a pre-processing that normalizes the embeddings ( §3.1), a fully unsupervised initialization scheme that creates an initial solution ( §3.2), a robust self-learning procedure that iteratively improves this solution ( §3.3), and a final refinement step that further improves the resulting mapping through symmetric re-weighting ( §3.4).",
"Embedding normalization Our method starts with a pre-processing that length normalizes the embeddings, then mean centers each dimension, and then length normalizes them again.",
"The first two steps have been shown to be beneficial in previous work (Artetxe et al., 2016) , while the second length normalization guarantees the final embeddings to have a unit length.",
"As a result, the dot product of any two embeddings is equivalent to their cosine similarity and directly related to their Euclidean distance 1 , and can be taken as a measure of their similarity.",
"Fully unsupervised initialization The underlying difficulty of the mapping problem in its unsupervised variant is that the word embedding matrices X and Z are unaligned across both axes: neither the ith vocabulary item X i * and Z i * nor the jth dimension of the embeddings X * j and Z * j are aligned, so there is no direct correspondence between both languages.",
"In order to overcome this challenge and build an initial solution, we propose to first construct two alternative representations X ′ and Z ′ that are aligned across their jth dimension X ′ * j and Z ′ * j , which can later be used to build an initial dictionary that aligns their respective vocabularies.",
"Our approach is based on a simple idea: while the axes of the original embeddings X and Z are different in nature, both axes of their corresponding similarity matrices M X = XX T and M Z = ZZ T correspond to words, which can be exploited to reduce the mismatch to a single axis.",
"More concretely, assuming that the embedding spaces are perfectly isometric, the similarity matrices M X and M Z would be equivalent up to a permutation of their rows and columns, where the permutation in question defines the dictionary across both languages.",
"In practice, the isometry requirement will not hold exactly, but it can be assumed to hold approximately, as the very same problem of mapping two embedding spaces without supervision would otherwise be hopeless.",
"Based on that, one could try every possible permutation of row and column indices to find the best match between M X and M Z , but the resulting combinatorial explosion makes this approach intractable.",
"In order to overcome this problem, we propose to first sort the values in each row of M X and M Z , resulting in matrices sorted(M X ) and sorted(M Z ) 2 .",
"Under the strict isometry condition, equivalent words would get the exact same vector across languages, and thus, given a word and its row in sorted(M X ), one could apply nearest neighbor retrieval over the rows of sorted(M Z ) to find its corresponding translation.",
"On a final note, given the singular value decomposition X = U SV T , the similarity matrix is M X = U S 2 U T .",
"As such, its square root √ M X = U SU T is closer in nature to the original embeddings, and we also find it to work better in practice.",
"We thus compute sorted( √ M X ) and sorted( √ M Z ) and normalize them as described in Section 3.1, yielding the two matrices X ′ and Z ′ that are later used to build the initial solution for self-learning (see Section 3.3).",
"In practice, the isometry assumption is strong enough so the above procedure captures some cross-lingual signal.",
"In our English-Italian experiments, the average cosine similarity across the gold standard translation pairs is 0.009 for a random solution, 0.582 for the optimal supervised solution, and 0.112 for the mapping resulting from this initialization.",
"While the latter is far from being useful on its own (the accuracy of the resulting dictionary is only 0.52%), it is substantially better than chance, and it works well as an initial solution for the self-learning method described next.",
"Robust self-learning Previous work has shown that self-learning can learn high-quality bilingual embedding mappings starting with as little as 25 word pairs (Artetxe et al., 2017) .",
"In this method, training iterates through the following two steps until convergence: 1.",
"Compute the optimal orthogonal mapping maximizing the similarities for the current dictionary D: arg max W X ,W Z i j D ij ((X i * W X ) · (Z j * W Z )) An optimal solution is given by W X = U and W Z = V , where U SV T = X T DZ is the singular value decomposition of X T DZ.",
"2.",
"Compute the optimal dictionary over the similarity matrix of the mapped embeddings XW X W T Z Z T .",
"This typically uses nearest neighbor retrieval from the source language into the target language, so D ij = 1 if j = argmax k (X i * W X ) · (Z k * W Z ) and D ij = 0 otherwise.",
"The underlying optimization objective is independent from the initial dictionary, and the algorithm is guaranteed to converge to a local optimum of it.",
"However, the method does not work if starting from a completely random solution, as it tends to get stuck in poor local optima in that case.",
"For that reason, we use the unsupervised initialization procedure at Section 3.2 to build an initial solution.",
"However, simply plugging in both methods did not work in our preliminary experiments, as the quality of this initial method is not good enough to avoid poor local optima.",
"For that reason, we next propose some key improvements in the dictionary induction step to make self-learning more robust and learn better mappings: • Stochastic dictionary induction.",
"In order to encourage a wider exploration of the search space, we make the dictionary induction stochastic by randomly keeping some elements in the similarity matrix with probability p and setting the remaining ones to 0.",
"As a consequence, the smaller the value of p is, the more the induced dictionary will vary from iteration to iteration, thus enabling to escape poor local optima.",
"So as to find a fine-grained solution once the algorithm gets into a good region, we increase this value during training akin to simulated annealing, starting with p = 0.1 and doubling this value every time the objective function at step 1 above does not improve more than ǫ = 10 −6 for 50 iterations.",
"• Frequency-based vocabulary cutoff.",
"The size of the similarity matrix grows quadratically with respect to that of the vocabularies.",
"This does not only increase the cost of computing it, but it also makes the number of possible solutions grow exponentially 3 , presumably making the optimization problem harder.",
"Given that less frequent words can be expected to be noisier, we propose to restrict the dictionary induction process to the k most frequent words in each language, where we find k = 20, 000 to work well in practice.",
"• CSLS retrieval.",
"showed that nearest neighbor suffers from the hubness problem.",
"This phenomenon is known to occur as an effect of the curse of dimensionality, and causes a few points (known as hubs) to be nearest neighbors of many other points (Radovanović et al., 2010a,b) .",
"Among the existing solutions to penalize the similarity score of hubs, we adopt the Cross-domain Similarity Local Scaling (CSLS) from .",
"Given two mapped embeddings x and y, the idea of CSLS is to compute r T (x) and r S (y), the average cosine similarity of x and y for their k nearest neighbors in the other language, respectively.",
"Having done that, the corrected score CSLS(x, y) = 2 cos(x, y) − r T (x) − r S (y).",
"Following the authors, we set k = 10.",
"• Bidirectional dictionary induction.",
"When the dictionary is induced from the source into the target language, not all target language words will be present in it, and some will occur multiple times.",
"We argue that this might accentuate the problem of local optima, as repeated words might act as strong attractors from which it is difficult to escape.",
"In order to mitigate this issue and encourage diversity, we propose inducing the dictionary in both directions and taking their corresponding concatenation, so D = D X→Z + D Z→X .",
"In order to build the initial dictionary, we compute X ′ and Z ′ as detailed in Section 3.2 and apply the above procedure over them.",
"As the only difference, this first solution does not use the stochastic zeroing in the similarity matrix, as there is no need to encourage diversity (X ′ and Z ′ are only used once), and the threshold for vocabulary cutoff is set to k = 4, 000, so X ′ and Z ′ can fit in memory.",
"Having computed the initial dictionary, X ′ and Z ′ are discarded, and the remaining iterations are performed over the original embeddings X and Z. Symmetric re-weighting As part of their multi-step framework, Artetxe et al.",
"(2018a) showed that re-weighting the target language embeddings according to the crosscorrelation in each component greatly improved the quality of the induced dictionary.",
"Given the singular value decomposition U SV T = X T DZ, this is equivalent to taking W X = U and W Z = V S, where X and Z are previously whitened applying the linear transformations (X T X) − 1 2 and (Z T Z) − 1 2 , and later de-whitened applying U T (X T X) 1 2 U and V T (Z T Z) 1 2 V .",
"However, re-weighting also accentuates the problem of local optima when incorporated into self-learning as, by increasing the relevance of dimensions that best match for the current solution, it discourages to explore other regions of the search space.",
"For that reason, we propose using it as a final step once self-learning has converged to a good solution.",
"Unlike Artetxe et al.",
"(2018a) , we apply re-weighting symmetrically in both languages, taking W X = U S 1 2 and W Z = V S 1 2 .",
"This approach is neutral in the direction of the mapping, and gives good results as shown in our experiments.",
"Experimental settings Following common practice, we evaluate our method on bilingual lexicon extraction, which measures the accuracy of the induced dictionary in comparison to a gold standard.",
"As discussed before, previous evaluation has focused on favorable conditions.",
"In particular, existing unsupervised methods have almost exclusively been tested on Wikipedia corpora, which is comparable rather than monolingual, exposing a strong cross-lingual signal that is not available in strictly unsupervised settings.",
"In addition to that, some datasets comprise unusually small embeddings, with only 50 dimensions and around 5,000-10,000 vocabulary items (Zhang et al., 2017a,b) .",
"As the only exception, report positive results on the English-Italian dataset of in addition to their main experiments, which are carried out in Wikipedia.",
"While this dataset does use strictly monolingual corpora, it still corresponds to a pair of two relatively close indo-european languages.",
"In order to get a wider picture of how our method compares to previous work in different conditions, including more challenging settings, we carry out our experiments in the widely used dataset of and the subsequent extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a , which together comprise English-Italian, English-German, English-Finnish and English-Spanish.",
"More concretely, the dataset consists of 300-dimensional CBOW embeddings trained on WacKy crawling corpora (English, Italian, German), Common Crawl (Finnish) and WMT News Crawl (Spanish).",
"The gold standards were derived from dictionaries built from Europarl word alignments and available at OPUS (Tiedemann, 2012) , split in a test set of 1,500 entries and a training set of 5,000 that we do not use in our experiments.",
"The datasets are freely available.",
"As a non-european agglutinative language, the English-Finnish pair is particularly challeng- Zhang et al.",
"(2017a) .",
"We perform 10 runs for each method and report the best and average accuracies (%), the number of successful runs (those with >5% accuracy) and the average runtime (minutes).",
"EN-IT EN-DE EN-FI EN-ES best avg s t best avg s t best avg s t best avg s t Proposed method 48.53 48.13 10 8.9 48.47 48.19 10 7.3 33.50 32.63 10 12.9 37.60 37.33 10 9.1 Table 2 : Results of unsupervised methods on the dataset of and the extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a .",
"We perform 10 runs for each method and report the best and average accuracies (%), the number of successful runs (those with >5% accuracy) and the average runtime (minutes).",
"ing due to the linguistic distance between them.",
"For completeness, we also test our method in the Spanish-English, Italian-English and Turkish-English datasets of Zhang et al.",
"(2017a) , which consist of 50-dimensional CBOW embeddings trained on Wikipedia, as well as gold standard dictionaries 4 from Open Multilingual WordNet (Spanish-English and Italian-English) and Google Translate (Turkish-English).",
"The lower dimensionality and comparable corpora make an easier scenario, although it also contains a challenging pair of distant languages (Turkish-English).",
"Our method is implemented in Python using NumPy and CuPy.",
"Together with it, we also test the methods of Zhang et al.",
"(2017a) and using the publicly available implementations from the authors 5 .",
"Given that Zhang et al.",
"(2017a) report using a different value of their hyperparameter λ for different language pairs (λ = 10 for English-Turkish and λ = 1 for the rest), we test both values in all our experiments to 4 The test dictionaries were obtained through personal communication with the authors.",
"The rest of the language pairs were left out due to licensing issues.",
"5 Despite our efforts, Zhang et al.",
"(2017b) was left out because: 1) it does not create a one-to-one dictionary, thus difficulting direct comparison, 2) it depends on expensive proprietary software 3) its computational cost is orders of magnitude higher (running the experiments would have taken several months).",
"better understand its effect.",
"In the case of , we test both the default hyperparameters in the source code as well as those reported in the paper, with iterative refinement activated in both cases.",
"Given the instability of these methods, we perform 10 runs for each, and report the best and average accuracies, the number of successful runs (those with >5% accuracy) and the average runtime.",
"All the experiments were run in a single Nvidia Titan Xp.",
"Results and discussion We first present the main results ( §5.1), then the comparison to the state-of-the-art ( §5.2), and finally ablation tests to measure the contribution of each component ( §5.3).",
"Main results We report the results in the dataset of Zhang et al.",
"(2017a) at Table 1 .",
"As it can be seen, the proposed method performs at par with that of both in Spanish-English and Italian-English, but gets substantially better results in the more challenging Turkish-English pair.",
"While we are able to reproduce the results reported by Zhang et al.",
"(2017a) , their method gets the worst results of all by a large margin.",
"Another disadvantage of that model is that different et al.",
"(2018a) .",
"The remaining results were reported in the original papers.",
"For methods that do not require supervision, we report the average accuracy across 10 runs.",
"‡ For meaningful comparison, runs with <5% accuracy are excluded when computing the average, but note that, unlike ours, their method often gives a degenerated solution (see Table 2 ).",
"language pairs require different hyperparameters: λ = 1 works substantially better for Spanish-English and Italian-English, but only λ = 10 works for Turkish-English.",
"The results for the more challenging dataset from and the extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a are given in Table 2 .",
"In this case, our proposed method obtains the best results in all metrics for all the four language pairs tested.",
"The method of Zhang et al.",
"(2017a) does not work at all in this more challenging scenario, which is in line with the negative results reported by the authors themselves for similar conditions (only %2.53 accuracy in their large Gigaword dataset).",
"The method of also fails for English-Finnish (only 1.62% in the best run), although it is able to get positive results in some runs for the rest of language pairs.",
"Between the two configurations tested, the default hyperparameters in the code show a more stable behavior.",
"These results confirm the robustness of the proposed method.",
"While the other systems succeed in some runs and fail in others, our method converges to a good solution in all runs without excep-tion and, in fact, it is the only one getting positive results for English-Finnish.",
"In addition to being more robust, our method also obtains substantially better accuracies, surpassing previous methods by at least 1-3 points in all but the easiest pairs.",
"Moreover, our method is not sensitive to hyperparameters that are difficult to tune without a development set, which is critical in realistic unsupervised conditions.",
"At the same time, our method is significantly faster than the rest.",
"In relation to that, it is interesting that, while previous methods perform a fixed number of iterations and take practically the same time for all the different language pairs, the runtime of our method adapts to the difficulty of the task thanks to the dynamic convergence criterion of our stochastic approach.",
"This way, our method tends to take longer for more challenging language pairs (1.7 vs 0.6 minutes for es-en and tr-en in one dataset, and 12.9 vs 7.3 minutes for en-fi and en-de in the other) and, in fact, our (relative) execution times correlate surprisingly well with the linguistic distance with English (closest/fastest is German, followed by Italian/Spanish, followed by Turkish/Finnish).",
"Table 4 : Ablation test on the dataset of and the extensions of Artetxe et al.",
"(2017 Artetxe et al.",
"( , 2018a .",
"We perform 10 runs for each method and report the best and average accuracies (%), the number of successful runs (those with >5% accuracy) and the average runtime (minutes).",
"Table 3 shows the results of the proposed method in comparison to previous systems, including those with different degrees of supervision.",
"We focus on the widely used English-Italian dataset of and its extensions.",
"Despite being fully unsupervised, our method achieves the best results in all language pairs but one, even surpassing previous supervised approaches.",
"The only exception is English-Finnish, where Artetxe et al.",
"Comparison with the state-of-the-art (2018a) gets marginally better results with a difference of 0.3 points, yet ours is the only unsupervised system that works for this pair.",
"At the same time, it is remarkable that the proposed system gets substantially better results than Artetxe et al.",
"(2017) , the only other system based on selflearning, with the additional advantage of being fully unsupervised.",
"Ablation test In order to better understand the role of different aspects in the proposed system, we perform an ablation test, where we separately analyze the effect of initialization, the different components of our robust self-learning algorithm, and the final symmetric re-weighting.",
"The obtained results are reported in Table 4 .",
"In concordance with previous work, our results show that self-learning does not work with random initialization.",
"However, the proposed unsupervised initialization is able to overcome this issue without the need of any additional information, performing at par with other character-level heuristics that we tested (e.g.",
"shared numerals).",
"As for the different self-learning components, we observe that the stochastic dictionary induction is necessary to overcome the problem of poor lo-cal optima for English-Finnish, although it does not make any difference for the rest of easier language pairs.",
"The frequency-based vocabulary cutoff also has a positive effect, yielding to slightly better accuracies and much faster runtimes.",
"At the same time, CSLS plays a critical role in the system, as hubness severely accentuates the problem of local optima in its absence.",
"The bidirectional dictionary induction is also beneficial, contributing to the robustness of the system as shown by English-Finnish and yielding to better accuracies in all cases.",
"Finally, these results also show that symmetric re-weighting contributes positively, bringing an improvement of around 1-2 points without any cost in the execution time.",
"Conclusions In this paper, we show that previous unsupervised mapping methods (Zhang et al., 2017a; often fail on realistic scenarios involving non-comparable corpora and/or distant languages.",
"In contrast to adversarial methods, we propose to use an initial weak mapping that exploits the structure of the embedding spaces in combination with a robust self-learning approach.",
"The results show that our method succeeds in all cases, providing the best results with respect to all previous work on unsupervised and supervised mappings.",
"The ablation analysis shows that our initial solution is instrumental for making self-learning work without supervision.",
"In order to make selflearning robust, we also added stochasticity to dictionary induction, used CSLS instead of nearest neighbor, and produced bidirectional dictionaries.",
"Results also improved using smaller in-termediate vocabularies and re-weighting the final solution.",
"Our implementation is available as an open source project at https://github.",
"com/artetxem/vecmap.",
"In the future, we would like to extend the method from the bilingual to the multilingual scenario, and go beyond the word level by incorporating embeddings of longer phrases."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"3.3",
"3.4",
"4",
"5",
"5.1",
"5.2",
"5.3",
"6"
],
"paper_header_content": [
"Introduction",
"Related work",
"Proposed method",
"Embedding normalization",
"Fully unsupervised initialization",
"Robust self-learning",
"Symmetric re-weighting",
"Experimental settings",
"Results and discussion",
"Main results",
"Comparison with the state-of-the-art",
"Ablation test",
"Conclusions"
]
} | GEM-SciDuet-train-24#paper-1025#slide-5 | Conclusions | Not a solved problem!
More robust and accurate than previous methods
Future work: from bilingual to multilingual | Not a solved problem!
More robust and accurate than previous methods
Future work: from bilingual to multilingual | [] |
GEM-SciDuet-train-25#paper-1026#slide-0 | 1026 | SemEval-2019 Task 1: Cross-lingual Semantic Parsing with UCCA | We present the SemEval 2019 shared task on Universal Conceptual Cognitive Annotation (UCCA) parsing in English, German and French, and discuss the participating systems and results. UCCA is a cross-linguistically applicable framework for semantic representation, which builds on extensive typological work and supports rapid annotation. UCCA poses a challenge for existing parsing techniques, as it exhibits reentrancy (resulting in DAG structures), discontinuous structures and non-terminal nodes corresponding to complex semantic units. The shared task has yielded improvements over the state-of-the-art baseline in all languages and settings. Full results can be found on the task's website https://competitions.codalab.org/competitions/19160. [Footnote 10: http://spacy.io] [Footnote 11: http://fasttext.cc] | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159
],
"paper_content_text": [
"Overview Semantic representation is receiving growing attention in NLP in the past few years, and many proposals for semantic schemes have recently been put forth.",
"Examples include Abstract Meaning Representation (AMR; Banarescu et al., 2013) , Broad-coverage Semantic Dependencies (SDP; Oepen et al., 2016) , Universal Decompositional Semantics (UDS; White et al., 2016) , Parallel Meaning Bank (Abzianidze et al., 2017) , and Universal Conceptual Cognitive Annotation (UCCA; Abend and Rappoport, 2013) .",
"These advances in semantic representation, along with corresponding advances in semantic parsing, can potentially benefit essentially all text understanding tasks, and have already demonstrated applicability to a variety of tasks, including summarization (Liu et al., 2015; Dohare and Karnick, 2017) , paraphrase detection (Issa et al., 2018) , and semantic evaluation (using UCCA; see below).",
"In this shared task, we focus on UCCA parsing in multiple languages.",
"One of our goals is to benefit semantic parsing in languages with less annotated resources by making use of data from more resource-rich languages.",
"We refer to this approach as cross-lingual parsing, while other works (Zhang et al., 2017 (Zhang et al., , 2018 define cross-lingual parsing as the task of parsing text in one language to meaning representation in another language.",
"In addition to its potential applicative value, work on semantic parsing poses interesting algorithmic and modeling challenges, which are often different from those tackled in syntactic parsing, including reentrancy (e.g., for sharing arguments across predicates), and the modeling of the interface with lexical semantics.",
"UCCA is a cross-linguistically applicable semantic representation scheme, building on the established Basic Linguistic Theory typological framework (Dixon, 2010b (Dixon, ,a, 2012 .",
"It has demonstrated applicability to multiple languages, including English, French and German, and pilot annotation projects were conducted on a few languages more.",
"UCCA structures have been shown to be well-preserved in translation (Sulem et al., 2015) , and to support rapid annotation by nonexperts, assisted by an accessible annotation interface .",
"1 UCCA has already shown applicative value for text simplifica- Scene Elements P Process The main relation of a Scene that evolves in time (usually an action or movement).",
"S State The main relation of a Scene that does not evolve in time.",
"A Participant Scene participant (including locations, abstract entities and Scenes serving as arguments).",
"D Adverbial A secondary relation in a Scene.",
"Elements of Non-Scene Units C Center Necessary for the conceptualization of the parent unit.",
"E Elaborator A non-Scene relation applying to a single Center.",
"N Connector A non-Scene relation applying to two or more Centers, highlighting a common feature.",
"R Relator All other types of non-Scene relations: (1) Rs that relate a C to some super-ordinate relation, and (2) Rs that relate two Cs pertaining to different aspects of the parent unit.",
"Inter-Scene Relations H Parallel Scene A Scene linked to other Scenes by regular linkage (e.g., temporal, logical, purposive).",
"L Linker A relation between two or more Hs (e.g., \"when\", \"if\", \"in order to\").",
"G Ground A relation between the speech event and the uttered Scene (e.g., \"surprisingly\").",
"Other F Function Does not introduce a relation or participant.",
"Required by some structural pattern.",
"tion (Sulem et al., 2018b) , as well as for defining semantic evaluation measures for text-to-text generation tasks, including machine translation (Birch et al., 2016) , text simplification (Sulem et al., 2018a) and grammatical error correction (Choshen and Abend, 2018) .",
"The shared task defines a number of tracks, based on the different corpora and the availability of external resources (see §5).",
"It received submissions from eight research groups around the world.",
"In all settings at least one of the submitted systems improved over the state-of-the-art TUPA parser (Hershcovich et al., 2017 (Hershcovich et al., , 2018 , used as a baseline.",
"Task Definition UCCA represents the semantics of linguistic utterances as directed acyclic graphs (DAGs), where terminal (childless) nodes correspond to the text tokens, and non-terminal nodes to semantic units that participate in some super-ordinate relation.",
"Edges are labeled, indicating the role of a child in the relation the parent represents.",
"Nodes and edges belong to one of several layers, each corresponding to a \"module\" of semantic distinctions.",
"UCCA's foundational layer covers the predicate-argument structure evoked by predicates of all grammatical categories (verbal, nominal, adjectival and others), the inter-relations between them, and other major linguistic phenomena such as semantic heads and multi-word expressions.",
"It is the only layer for which annotated corpora exist at the moment, and is thus the target of this shared task.",
"The layer's basic notion is the Scene, describing a state, action, movement or some other relation that evolves in time.",
"Each Scene contains one main relation (marked as either a Process or a State), as well as one or more Participants.",
"For example, the sentence \"After graduation, John moved to Paris\" (Figure 1 ) contains two Scenes, whose main relations are \"graduation\" and \"moved\".",
"\"John\" is a Participant in both Scenes, while \"Paris\" only in the latter.",
"Further categories account for inter-Scene relations and the internal structure of complex arguments and relations (e.g., coordination and multi-word expressions).",
"Table 1 provides a concise description of the categories used by the UCCA foundational layer.",
"UCCA distinguishes primary edges, corresponding to explicit relations, from remote edges (appear dashed in Figure 1 ) that allow for a unit to participate in several super-ordinate relations.",
"Primary edges form a tree in each layer, whereas remote edges enable reentrancy, forming a DAG.",
"UCCA graphs may contain implicit units with no correspondent in the text.",
"Figure 2 shows the annotation for the sentence \"A similar technique is almost impossible to apply to other crops, such as cotton, soybeans and rice.\"",
"2 It includes a single Scene, whose main relation is \"apply\", a secondary relation \"almost impossible\", as well as two complex arguments: \"a similar technique\" and the coordinated argument \"such as cotton, soybeans, and rice.\"",
"In addition, the Scene includes an implicit argument, which represents the agent of the \"apply\" relation.",
"While parsing technology is well-established for syntactic parsing, UCCA has several formal properties that distinguish it from syntactic representations, mostly UCCA's tendency to abstract away from syntactic detail that do not affect argument structure.",
"For instance, consider the following examples where the concept of a Scene has a different rationale from the syntactic concept of a clause.",
"First, non-verbal predicates in UCCA are represented like verbal ones, such as when they appear in copula clauses or noun phrases.",
"Indeed, in Figure 1 , \"graduation\" and \"moved\" are considered separate Scenes, despite appearing in the same clause.",
"Second, in the same example, \"John\" is marked as a (remote) Participant in the graduation Scene, despite not being explicitly mentioned.",
"Third, consider the possessive construction in \"John's trip home\".",
"While in UCCA \"trip\" evokes a Scene in which \"John\" is a Participant, a syntactic scheme would analyze this phrase similarly to \"John's shoes\".",
"The differences in the challenges posed by syntactic parsing and UCCA parsing, and more generally by semantic parsing, motivate the development of targeted parsing technology to tackle it.",
"Data & Resources All UCCA corpora are freely available.",
"3 For English, we use v1.2.3 of the Wikipedia UCCA corpus (Wiki), v1.2.2 of the UCCA Twenty Thousand Leagues Under the Sea English-French parallel corpus (20K), which includes UCCA manual annotation for the first five chapters in French and English, and v1.0.1 of the UCCA German Twenty 3 https://github.com/ UniversalConceptualCognitiveAnnotation Thousand Leagues Under the Sea corpus, which includes the entire book in German.",
"For consistent annotation, we replace any Time and Quantifier labels with Adverbial and Elaborator in these data sets.",
"The resulting training, development 4 and test sets 5 are publicly available, and the splits are given in Table 2 .",
"Statistics on various structural properties are given in Table 3 .",
"The corpora were manually annotated according to v1.2 of the UCCA guidelines, 6 and reviewed by a second annotator.",
"All data was passed through automatic validation and normalization scripts.",
"7 The goal of validation is to rule out cases that are inconsistent with the UCCA annotation guidelines.",
"For example, a Scene, defined by the presence of a Process or a State, should include at least one Participant.",
"Due to the small amount of annotated data available for French, we only provided a minimal training set of 15 sentences, in addition to the development and test set.",
"Systems for French were expected to pursue semi-supervised approaches, such as cross-lingual learning or structure projection, leveraging the parallel nature of the corpus, or to rely on datasets for related formalisms, such as Universal Dependencies (Nivre et al., 2016) .",
"The full unannotated 20K Leagues corpus in English and French was released as well, in order to facilitate pursuing cross-lingual approaches.",
"Datasets were released in an XML format, including tokenized text automatically pre- processed using spaCy (see §5), and gold-standard UCCA annotation for the train and development sets.",
"8 To facilitate the use of existing NLP tools, we also released the data in SDP, AMR, CoNLL-U and plain text formats.",
"TUPA: The Baseline Parser We use the TUPA parser, the only parser for UCCA at the time the task was announced, as a baseline (Hershcovich et al., 2017 (Hershcovich et al., , 2018 .",
"TUPA is a transition-based DAG parser based on a BiLSTM-based classifier.",
"9 TUPA in itself has been found superior to a number of conversionbased parsers that use existing parsers for other formalisms to parse UCCA by constructing a twoway conversion protocol between the formalisms.",
"It can thus be regarded as a strong baseline for sys-8 https://github.com/ UniversalConceptualCognitiveAnnotation/ docs/blob/master/FORMAT.md 9 https://github.com/huji-nlp/tupa tem submissions to the shared task.",
"Evaluation Tracks.",
"Participants in the task were evaluated in four settings: In order to allow both even ground comparison between systems and using hitherto untried resources, we held both an open and a closed track for submissions in the English and German settings.",
"Closed track submissions were only allowed to use the gold-standard UCCA annotation distributed for the task in the target language, and were limited in their use of additional resources.",
"Concretely, the only additional data they were allowed to use is that used by TUPA, which consists of automatic annotations provided by spaCy: 10 POS tags, syntactic dependency relations, and named entity types and spans.",
"In addition, the closed track only allowed the use of word embeddings provided by fastText (Bojanowski et al., 2017 ) 11 for all languages.",
"Systems in the open track, on the other hand, were allowed to use any additional resource, such as UCCA annotation in other languages, dictionaries or datasets for other tasks, provided that they make sure not to use any additional gold standard annotation over the same text used in the UCCA corpora.",
"12 In both tracks, we required that submitted systems are not trained on the development data.",
"We only held an open track for French, due to the paucity of training data.",
"The four settings and two tracks result in a total of 7 competitions.",
"Scoring.",
"The following scores an output graph G 1 = (V 1 , E 1 ) against a gold one, G 2 = (V 2 , E 2 ), over the same sequence of terminals (tokens) W .",
"For a node v in V 1 or V 2 , define yield(v) ⊆ W as is its set of terminal descendants.",
"A pair of edges (v 1 , u 1 ) ∈ E 1 and (v 2 , u 2 ) ∈ E 2 with labels (categories) 1 , 2 is matching if yield(u 1 ) = yield(u 2 ) and 1 = 2 .",
"Labeled precision and recall are defined by dividing the number of matching edges in G 1 and G 2 by |E 1 | and |E 2 |, respectively.",
"F 1 is their harmonic mean: · Precision · Recall Precision + Recall Unlabeled precision, recall and F 1 are the same, but without requiring that 1 = 2 for the edges to match.",
"We evaluate these measures for primary and remote edges separately.",
"For a more finegrained evaluation, we additionally report precision, recall and F 1 on edges of each category.",
"13 Participating Systems We received a total of eight submissions to the different tracks: MaskParse@Deskiñ 12 We are not aware of any such annotation, but include this restriction for completeness.",
"13 The official evaluation script providing both coarse-grained and fine-grained scores can be found in https://github.com/huji-nlp/ucca/blob/ master/scripts/evaluate_standard.py.",
"14 It was later discovered that CUNY-PekingU used some of the evaluation data for training in the open tracks, and they were thus disqualified from these tracks.",
"In terms of parsing approaches, the task was quite varied.",
"HLT@SUDA converted UCCA graphs to constituency trees and trained a constituency parser and a recovery mechanism of remote edges in a multi-task framework.",
"MaskParse@Deskiñ used a bidirectional GRU tagger with a masking mechanism.",
"Tüpa and XLangMo used a transition-based approach.",
"UC Davis used an encoder-decoder architecture.",
"GCN-SEM uses a BiLSTM model to predict Semantic Dependency Parsing tags, when the syntactic dependency tree is given in the input.",
"CUNY-PKU is based on an ensemble that includes different variations of the TUPA parser.",
"DAN-GNT@UIT.VNU-HCM converted syntactic dependency trees to UCCA graphs.",
"Different systems handled remote edges differently.",
"DANGNT@UIT.VNU-HCM and GCN-SEM ignored remote edges.",
"UC Davis used a different BiLSTM for remote edges.",
"HLT@SUDA marked remote edges when converting the graph to a constituency tree and trained a classification model for their recovery.",
"MaskParse@Deskiñ handles remote edges by detecting arguments that are outside of the parent's node span using a detection threshold on the output probabilities.",
"In terms of using the data, all teams but one used the UCCA XML format, two used the CoNLL-U format, which is derived by a lossy conversion process, and only one team found the other data formats helpful.",
"One of the teams (MaskParse@Deskiñ) built a new training data adapted to their model by repeating each sentence N times, N being the number of non-terminal nodes in the UCCA graphs.",
"Three of the teams adapted the baseline TUPA parser, or parts of it to form their parser, namely TüPa, CUNY-PekingU and XLangMo; HLT@SUDA used a constituency parser (Stern et al., 2017) as a component in their model; DANGNT@UIT.VNU-HCM is a rule-based system over the Stanford Parser, and the rest are newly constructed parsers.",
"All teams found it useful to use external resources beyond those provided by the Shared Task.",
"Four submissions used external embeddings, MUSE (Conneau et al., 2017) in the case of MaskParse@Deskiñ and XLangMo, ELMo (Peters et al., 2018) in the case of TüPa, 15 and BERT (Devlin et al., 2019) in the case of HLT@SUDA.",
"Other resources included additional unlabeled data (TüPa), a list of multi-word expressions (MaskParse@Deskiñ), and the Stanford parser in the case of DANGNT@UIT.VNU-HCM.",
"Only CUNY-PKU used the 20K unlabeled parallel data in English and French.",
"A common trend for many of the systems was the use of cross-lingual projection or transfer (MaskParse@Deskiñ, HLT@SUDA, TüPa, GCN-Sem, CUNY-PKU and XLangMo).",
"This was necessary for French, and was found helpful for German as well (CUNY-PKU).",
"Table 4 shows the labeled and unlabeled F1 for primary and remote edges, for each system in each track.",
"Overall F1 (All) is the F1 calculated over both primary and remote edges.",
"Full results are available online.",
"16 Figure 3 shows the fine-grained evaluation by labeled F1 per UCCA category, for each system in each track.",
"While Ground edges were uniformly 16 http://bit.ly/semeval2019task1results difficult to parse due to their sparsity in the training data, Relators were the easiest for all systems, as they are both common and predictable.",
"The Process/State distinction proved challenging, and most main relations were identified as the more common Process category.",
"The winning system in most tracks (HLT@SUDA) performed better on almost all categories.",
"Its largest advantage was on Parallel Scenes and Linkers, showing was especially successful at identifying Scene boundaries relative to the other systems, which requires a good understanding of syntax.",
"Results Discussion The HLT@SUDA system participated in all the tracks, obtaining the first place in the six English and German tracks and the second place in the French open track.",
"The system is based on the conversion of UCCA graphs into constituency trees, marking remote and discontinuous edges for recovery.",
"The classification-based recovery of the remote edges is performed simultaneously with the constituency parsing in a multi-task learning framework.",
"This work, which further connects between semantic and syntactic parsing, proposes a recovery mechanism that can be applied to other grammatical formalisms, enabling the conversion of a given formalism to another one for parsing.",
"The idea of this system is inspired by the pseudo non-projective dependency parsing approach proposed by Nivre and Nilsson (2005) .",
"The MaskParse@Deskiñ system only participated to the French open track, focusing on crosslingual parsing.",
"The system uses a semantic tagger, implemented with a bidirectional GRU and a masking mechanism to recursively extract the inner semantic structures in the graph.",
"Multilingual word embeddings are also used.",
"Using the English and German training data as well as the small French trial data for training, the parser ranked fourth in the French open track with a labeled F1 score of 65.4%, suggesting that this new model could be useful for low-resource languages.",
"The Tüpa system takes a transition-based approach, building on the TUPA transition system and oracle, but modifies its feature representations.",
"Specifically, instead of representing the parser configuration using LSTMs over the partially parsed graph, stack and buffer, they use feedforward networks with ELMo contextualized embeddings.",
"The stack and buffer are represented by the top three items on them.",
"For the partially parsed graph, they extract the rightmost and leftmost parents and children of the respective items, and represent them by the ELMo embedding of their form, the embedding of their dependency heads (for terminals, for non-terminals this is replaced with a learned embedding) and the embeddings of all terminal children.",
"Results are generally on-par with the TUPA baseline, and surpass it from the out-of-domain English setting.",
"This suggests that the TUPA architecture may be simplified, without compromising performance.",
"The UC Davis system participated only in the English closed track, where they achieved the second highest score, on par with TUPA.",
"The proposed parser has an encoder-decoder architecture, where the encoder is a simple BiLSTM encoder for each span of words.",
"The decoder iteratively and greedily traverses the sentence, and attempts to predict span boundaries.",
"The basic algorithm yields an unlabeled contiguous phrase-based tree, but additional modules predict the labels of the spans, discontiguous units (by joining together spans from the contiguous tree under a new node), and remote edges.",
"The work is inspired by Kitaev and Klein (2018) , who used similar methods for constituency parsing.",
"The GCN-SEM system uses a BiLSTM encoder, and predicts bi-lexical semantic dependencies (in the SDP format) using word, token and syntactic dependency parses.",
"The latter is incorporated into the network with a graph convolutional network (GCN).",
"The team participated in the English and German closed tracks, and were not among the highest-ranking teams.",
"However, scores on the UCCA test sets converted to the bi-lexical CoNLL-U format were rather high, implying that the lossy conversion could be much of the reason.",
"The CUNY-PKU system was based on an ensemble.",
"The ensemble included variations of TUPA parser, namely the MLP and BiLSTM models (Hershcovich et al., 2017) and the BiLSTM model with an additional MLP.",
"The system also proposes a way to aggregate the ensemble going through CKY parsing and accounting for remotes and discontinuous spans.",
"The team participated in all tracks, including additional information in the open domain, notably synthetic data based on automatically translating annotated texts.",
"Their system ranks first in the French open track.",
"The DANGNT@UIT.VNU-HCM system partic-ipated only in the English Wiki open and closed tracks.",
"The system is based on graph transformations from dependency trees into UCCA, using heuristics to create non-terminal nodes and map the dependency relations to UCCA categories.",
"The manual rules were developed based on the training and development data.",
"As the system converts trees to trees and does not add reentrancies, it does not produce remote edges.",
"While the results are not among the highest-ranking in the task, the primary labeled F1 score of 71.1% in the English open track shows that a rule-based system on top of a leading dependency parser (the Stanford parser) can obtain reasonable results for this task.",
"Conclusion The task has yielded substantial improvements to UCCA parsing in all settings.",
"Given that the best reported results were achieved with different parsing and learning approaches than the baseline model TUPA (which has been the only available parser for UCCA), the task opens a variety of paths for future improvement.",
"Cross-lingual transfer, which capitalizes on UCCA's tendency to be preserved in translation, was employed by a number of systems and has proven remarkably effective.",
"Indeed, the high scores obtained for French parsing in a low-resource setting suggest that high quality UCCA parsing can be straightforwardly extended to additional languages, with only a minimal amount of manual labor.",
"Moreover, given the conceptual similarity between the different semantic representations , it is likely the parsers developed for the shared task will directly contribute to the development of other semantic parsing technology.",
"Such a contribution is facilitated by the available conversion scripts available between UCCA and other formats."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"2",
"6",
"8",
"9"
],
"paper_header_content": [
"Overview",
"Task Definition",
"Data & Resources",
"TUPA: The Baseline Parser",
"Evaluation",
"·",
"Participating Systems",
"Discussion",
"Conclusion"
]
} | GEM-SciDuet-train-25#paper-1026#slide-0 | Universal Conceptual Cognitive Annotation UCCA | Cross-linguistically applicable semantic representation (Abend and Rappoport, 2013).
Builds on Basic Linguistic Theory (R. M. W. Dixon).
Stable in translation (Sulem et al., 2015).
After graduation John moved to Paris
P D L A A
Intuitive annotation interface and guidelines (Abend et al., 2017).
The Task: UCCA parsing in English, German and French in different domains. | Cross-linguistically applicable semantic representation (Abend and Rappoport, 2013).
Builds on Basic Linguistic Theory (R. M. W. Dixon).
Stable in translation (Sulem et al., 2015).
After graduation John moved to Paris
P D L A A
Intuitive annotation interface and guidelines (Abend et al., 2017).
The Task: UCCA parsing in English, German and French in different domains. | [] |
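The labeled edge-matching measure defined in the evaluation passage of the record above is compact enough to sketch. This is a minimal illustration, not the official evaluate_standard.py script; the edge encoding is an assumption made for brevity (a graph as a set of (yield, label) pairs, with each yield a frozenset of terminal positions).

def labeled_f1(guessed, gold):
    # Edges match when they agree on both the terminal yield and the category label.
    matching = len(guessed & gold)
    precision = matching / len(guessed) if guessed else 0.0
    recall = matching / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)  # harmonic mean

# Toy example: one of two predicted edges matches the gold annotation exactly.
gold = {(frozenset({0, 1}), "A"), (frozenset({2}), "P")}
guessed = {(frozenset({0, 1}), "A"), (frozenset({2}), "S")}
print(labeled_f1(guessed, gold))  # 0.5; dropping the labels (unlabeled F1) gives 1.0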
GEM-SciDuet-train-25#paper-1026#slide-1 | Applications | Machine translation (Birch et al., 2016)
Sentence splitting for text simplification (Sulem et al., 2018b).
Grammatical error correction (Choshen and Abend, 2018)
He gve an apple for john
He gave John an apple | Machine translation (Birch et al., 2016)
Sentence splitting for text simplification (Sulem et al., 2018b).
Grammatical error correction (Choshen and Abend, 2018)
He gve an apple for john
He gave John an apple | [] |
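The yield(v) notion that the scoring above relies on (the set of terminal descendants of a node) can be sketched in the same spirit. The encoding is hypothetical, a dict from node to children with terminals as integer token positions, and whether remote edges are traversed is a design choice of the actual evaluator, not settled here.

def node_yield(graph, node):
    # Terminals are token positions; a node's yield is the union of its children's yields.
    if isinstance(node, int):
        return {node}
    result = set()
    for child in graph.get(node, []):
        result |= node_yield(graph, child)
    return result

# Toy DAG for "After graduation John moved to Paris" (token positions 0-5);
# position 2 ("John") is shared between the two Scenes, as a remote edge allows.
graph = {
    "root": ["scene1", "scene2"],
    "scene1": [0, 1, 2],
    "scene2": [2, 3, 4, 5],
}
print(sorted(node_yield(graph, "scene1")))  # [0, 1, 2]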
GEM-SciDuet-train-25#paper-1026#slide-2 | 1026 | SemEval-2019 Task 1: Cross-lingual Semantic Parsing with UCCA | We present the SemEval 2019 shared task on Universal Conceptual Cognitive Annotation (UCCA) parsing in English, German and French, and discuss the participating systems and results. UCCA is a cross-linguistically applicable framework for semantic representation, which builds on extensive typological work and supports rapid annotation. UCCA poses a challenge for existing parsing techniques, as it exhibits reentrancy (resulting in DAG structures), discontinuous structures and non-terminal nodes corresponding to complex semantic units. The shared task has yielded improvements over the state-of-the-art baseline in all languages and settings. Full results can be found on the task's website, https://competitions.codalab.org/competitions/19160. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159
],
"paper_content_text": [
"Overview Semantic representation is receiving growing attention in NLP in the past few years, and many proposals for semantic schemes have recently been put forth.",
"Examples include Abstract Meaning Representation (AMR; Banarescu et al., 2013) , Broad-coverage Semantic Dependencies (SDP; Oepen et al., 2016) , Universal Decompositional Semantics (UDS; White et al., 2016) , Parallel Meaning Bank (Abzianidze et al., 2017) , and Universal Conceptual Cognitive Annotation (UCCA; Abend and Rappoport, 2013) .",
"These advances in semantic representation, along with corresponding advances in semantic parsing, can potentially benefit essentially all text understanding tasks, and have already demonstrated applicability to a variety of tasks, including summarization (Liu et al., 2015; Dohare and Karnick, 2017) , paraphrase detection (Issa et al., 2018) , and semantic evaluation (using UCCA; see below).",
"In this shared task, we focus on UCCA parsing in multiple languages.",
"One of our goals is to benefit semantic parsing in languages with less annotated resources by making use of data from more resource-rich languages.",
"We refer to this approach as cross-lingual parsing, while other works (Zhang et al., 2017 (Zhang et al., , 2018 define cross-lingual parsing as the task of parsing text in one language to meaning representation in another language.",
"In addition to its potential applicative value, work on semantic parsing poses interesting algorithmic and modeling challenges, which are often different from those tackled in syntactic parsing, including reentrancy (e.g., for sharing arguments across predicates), and the modeling of the interface with lexical semantics.",
"UCCA is a cross-linguistically applicable semantic representation scheme, building on the established Basic Linguistic Theory typological framework (Dixon, 2010b (Dixon, ,a, 2012 .",
"It has demonstrated applicability to multiple languages, including English, French and German, and pilot annotation projects were conducted on a few languages more.",
"UCCA structures have been shown to be well-preserved in translation (Sulem et al., 2015) , and to support rapid annotation by nonexperts, assisted by an accessible annotation interface .",
"1 UCCA has already shown applicative value for text simplifica- Scene Elements P Process The main relation of a Scene that evolves in time (usually an action or movement).",
"S State The main relation of a Scene that does not evolve in time.",
"A Participant Scene participant (including locations, abstract entities and Scenes serving as arguments).",
"D Adverbial A secondary relation in a Scene.",
"Elements of Non-Scene Units C Center Necessary for the conceptualization of the parent unit.",
"E Elaborator A non-Scene relation applying to a single Center.",
"N Connector A non-Scene relation applying to two or more Centers, highlighting a common feature.",
"R Relator All other types of non-Scene relations: (1) Rs that relate a C to some super-ordinate relation, and (2) Rs that relate two Cs pertaining to different aspects of the parent unit.",
"Inter-Scene Relations H Parallel Scene A Scene linked to other Scenes by regular linkage (e.g., temporal, logical, purposive).",
"L Linker A relation between two or more Hs (e.g., \"when\", \"if\", \"in order to\").",
"G Ground A relation between the speech event and the uttered Scene (e.g., \"surprisingly\").",
"Other F Function Does not introduce a relation or participant.",
"Required by some structural pattern.",
"tion (Sulem et al., 2018b) , as well as for defining semantic evaluation measures for text-to-text generation tasks, including machine translation (Birch et al., 2016) , text simplification (Sulem et al., 2018a) and grammatical error correction (Choshen and Abend, 2018) .",
"The shared task defines a number of tracks, based on the different corpora and the availability of external resources (see §5).",
"It received submissions from eight research groups around the world.",
"In all settings at least one of the submitted systems improved over the state-of-the-art TUPA parser (Hershcovich et al., 2017 (Hershcovich et al., , 2018 , used as a baseline.",
"Task Definition UCCA represents the semantics of linguistic utterances as directed acyclic graphs (DAGs), where terminal (childless) nodes correspond to the text tokens, and non-terminal nodes to semantic units that participate in some super-ordinate relation.",
"Edges are labeled, indicating the role of a child in the relation the parent represents.",
"Nodes and edges belong to one of several layers, each corresponding to a \"module\" of semantic distinctions.",
"UCCA's foundational layer covers the predicate-argument structure evoked by predicates of all grammatical categories (verbal, nominal, adjectival and others), the inter-relations between them, and other major linguistic phenomena such as semantic heads and multi-word expressions.",
"It is the only layer for which annotated corpora exist at the moment, and is thus the target of this shared task.",
"The layer's basic notion is the Scene, describing a state, action, movement or some other relation that evolves in time.",
"Each Scene contains one main relation (marked as either a Process or a State), as well as one or more Participants.",
"For example, the sentence \"After graduation, John moved to Paris\" (Figure 1 ) contains two Scenes, whose main relations are \"graduation\" and \"moved\".",
"\"John\" is a Participant in both Scenes, while \"Paris\" only in the latter.",
"Further categories account for inter-Scene relations and the internal structure of complex arguments and relations (e.g., coordination and multi-word expressions).",
"Table 1 provides a concise description of the categories used by the UCCA foundational layer.",
"UCCA distinguishes primary edges, corresponding to explicit relations, from remote edges (appear dashed in Figure 1 ) that allow for a unit to participate in several super-ordinate relations.",
"Primary edges form a tree in each layer, whereas remote edges enable reentrancy, forming a DAG.",
"UCCA graphs may contain implicit units with no correspondent in the text.",
"Figure 2 shows the annotation for the sentence \"A similar technique is almost impossible to apply to other crops, such as cotton, soybeans and rice.\"",
"2 It includes a single Scene, whose main relation is \"apply\", a secondary relation \"almost impossible\", as well as two complex arguments: \"a similar technique\" and the coordinated argument \"such as cotton, soybeans, and rice.\"",
"In addition, the Scene includes an implicit argument, which represents the agent of the \"apply\" relation.",
"While parsing technology is well-established for syntactic parsing, UCCA has several formal properties that distinguish it from syntactic representations, mostly UCCA's tendency to abstract away from syntactic detail that do not affect argument structure.",
"For instance, consider the following examples where the concept of a Scene has a different rationale from the syntactic concept of a clause.",
"First, non-verbal predicates in UCCA are represented like verbal ones, such as when they appear in copula clauses or noun phrases.",
"Indeed, in Figure 1 , \"graduation\" and \"moved\" are considered separate Scenes, despite appearing in the same clause.",
"Second, in the same example, \"John\" is marked as a (remote) Participant in the graduation Scene, despite not being explicitly mentioned.",
"Third, consider the possessive construction in \"John's trip home\".",
"While in UCCA \"trip\" evokes a Scene in which \"John\" is a Participant, a syntactic scheme would analyze this phrase similarly to \"John's shoes\".",
"The differences in the challenges posed by syntactic parsing and UCCA parsing, and more generally by semantic parsing, motivate the development of targeted parsing technology to tackle it.",
"Data & Resources All UCCA corpora are freely available.",
"3 For English, we use v1.2.3 of the Wikipedia UCCA corpus (Wiki), v1.2.2 of the UCCA Twenty Thousand Leagues Under the Sea English-French parallel corpus (20K), which includes UCCA manual annotation for the first five chapters in French and English, and v1.0.1 of the UCCA German Twenty 3 https://github.com/ UniversalConceptualCognitiveAnnotation Thousand Leagues Under the Sea corpus, which includes the entire book in German.",
"For consistent annotation, we replace any Time and Quantifier labels with Adverbial and Elaborator in these data sets.",
"The resulting training, development 4 and test sets 5 are publicly available, and the splits are given in Table 2 .",
"Statistics on various structural properties are given in Table 3 .",
"The corpora were manually annotated according to v1.2 of the UCCA guidelines, 6 and reviewed by a second annotator.",
"All data was passed through automatic validation and normalization scripts.",
"7 The goal of validation is to rule out cases that are inconsistent with the UCCA annotation guidelines.",
"For example, a Scene, defined by the presence of a Process or a State, should include at least one Participant.",
"Due to the small amount of annotated data available for French, we only provided a minimal training set of 15 sentences, in addition to the development and test set.",
"Systems for French were expected to pursue semi-supervised approaches, such as cross-lingual learning or structure projection, leveraging the parallel nature of the corpus, or to rely on datasets for related formalisms, such as Universal Dependencies (Nivre et al., 2016) .",
"The full unannotated 20K Leagues corpus in English and French was released as well, in order to facilitate pursuing cross-lingual approaches.",
"Datasets were released in an XML format, including tokenized text automatically pre- processed using spaCy (see §5), and gold-standard UCCA annotation for the train and development sets.",
"8 To facilitate the use of existing NLP tools, we also released the data in SDP, AMR, CoNLL-U and plain text formats.",
"TUPA: The Baseline Parser We use the TUPA parser, the only parser for UCCA at the time the task was announced, as a baseline (Hershcovich et al., 2017 (Hershcovich et al., , 2018 .",
"TUPA is a transition-based DAG parser based on a BiLSTM-based classifier.",
"9 TUPA in itself has been found superior to a number of conversionbased parsers that use existing parsers for other formalisms to parse UCCA by constructing a twoway conversion protocol between the formalisms.",
"It can thus be regarded as a strong baseline for sys-8 https://github.com/ UniversalConceptualCognitiveAnnotation/ docs/blob/master/FORMAT.md 9 https://github.com/huji-nlp/tupa tem submissions to the shared task.",
"Evaluation Tracks.",
"Participants in the task were evaluated in four settings: In order to allow both even ground comparison between systems and using hitherto untried resources, we held both an open and a closed track for submissions in the English and German settings.",
"Closed track submissions were only allowed to use the gold-standard UCCA annotation distributed for the task in the target language, and were limited in their use of additional resources.",
"Concretely, the only additional data they were allowed to use is that used by TUPA, which consists of automatic annotations provided by spaCy: 10 POS tags, syntactic dependency relations, and named entity types and spans.",
"In addition, the closed track only allowed the use of word embeddings provided by fastText (Bojanowski et al., 2017 ) 11 for all languages.",
"Systems in the open track, on the other hand, were allowed to use any additional resource, such as UCCA annotation in other languages, dictionaries or datasets for other tasks, provided that they make sure not to use any additional gold standard annotation over the same text used in the UCCA corpora.",
"12 In both tracks, we required that submitted systems are not trained on the development data.",
"We only held an open track for French, due to the paucity of training data.",
"The four settings and two tracks result in a total of 7 competitions.",
"Scoring.",
"The following scores an output graph G 1 = (V 1 , E 1 ) against a gold one, G 2 = (V 2 , E 2 ), over the same sequence of terminals (tokens) W .",
"For a node v in V 1 or V 2 , define yield(v) ⊆ W as is its set of terminal descendants.",
"A pair of edges (v 1 , u 1 ) ∈ E 1 and (v 2 , u 2 ) ∈ E 2 with labels (categories) 1 , 2 is matching if yield(u 1 ) = yield(u 2 ) and 1 = 2 .",
"Labeled precision and recall are defined by dividing the number of matching edges in G 1 and G 2 by |E 1 | and |E 2 |, respectively.",
"F 1 is their harmonic mean: · Precision · Recall Precision + Recall Unlabeled precision, recall and F 1 are the same, but without requiring that 1 = 2 for the edges to match.",
"We evaluate these measures for primary and remote edges separately.",
"For a more finegrained evaluation, we additionally report precision, recall and F 1 on edges of each category.",
"13 Participating Systems We received a total of eight submissions to the different tracks: MaskParse@Deskiñ 12 We are not aware of any such annotation, but include this restriction for completeness.",
"13 The official evaluation script providing both coarse-grained and fine-grained scores can be found in https://github.com/huji-nlp/ucca/blob/ master/scripts/evaluate_standard.py.",
"14 It was later discovered that CUNY-PekingU used some of the evaluation data for training in the open tracks, and they were thus disqualified from these tracks.",
"In terms of parsing approaches, the task was quite varied.",
"HLT@SUDA converted UCCA graphs to constituency trees and trained a constituency parser and a recovery mechanism of remote edges in a multi-task framework.",
"MaskParse@Deskiñ used a bidirectional GRU tagger with a masking mechanism.",
"Tüpa and XLangMo used a transition-based approach.",
"UC Davis used an encoder-decoder architecture.",
"GCN-SEM uses a BiLSTM model to predict Semantic Dependency Parsing tags, when the syntactic dependency tree is given in the input.",
"CUNY-PKU is based on an ensemble that includes different variations of the TUPA parser.",
"DAN-GNT@UIT.VNU-HCM converted syntactic dependency trees to UCCA graphs.",
"Different systems handled remote edges differently.",
"DANGNT@UIT.VNU-HCM and GCN-SEM ignored remote edges.",
"UC Davis used a different BiLSTM for remote edges.",
"HLT@SUDA marked remote edges when converting the graph to a constituency tree and trained a classification model for their recovery.",
"MaskParse@Deskiñ handles remote edges by detecting arguments that are outside of the parent's node span using a detection threshold on the output probabilities.",
"In terms of using the data, all teams but one used the UCCA XML format, two used the CoNLL-U format, which is derived by a lossy conversion process, and only one team found the other data formats helpful.",
"One of the teams (MaskParse@Deskiñ) built a new training data adapted to their model by repeating each sentence N times, N being the number of non-terminal nodes in the UCCA graphs.",
"Three of the teams adapted the baseline TUPA parser, or parts of it to form their parser, namely TüPa, CUNY-PekingU and XLangMo; HLT@SUDA used a constituency parser (Stern et al., 2017) as a component in their model; DANGNT@UIT.VNU-HCM is a rule-based system over the Stanford Parser, and the rest are newly constructed parsers.",
"All teams found it useful to use external resources beyond those provided by the Shared Task.",
"Four submissions used external embeddings, MUSE (Conneau et al., 2017) in the case of MaskParse@Deskiñ and XLangMo, ELMo (Peters et al., 2018) in the case of TüPa, 15 and BERT (Devlin et al., 2019) in the case of HLT@SUDA.",
"Other resources included additional unlabeled data (TüPa), a list of multi-word expressions (MaskParse@Deskiñ), and the Stanford parser in the case of DANGNT@UIT.VNU-HCM.",
"Only CUNY-PKU used the 20K unlabeled parallel data in English and French.",
"A common trend for many of the systems was the use of cross-lingual projection or transfer (MaskParse@Deskiñ, HLT@SUDA, TüPa, GCN-Sem, CUNY-PKU and XLangMo).",
"This was necessary for French, and was found helpful for German as well (CUNY-PKU).",
"Table 4 shows the labeled and unlabeled F1 for primary and remote edges, for each system in each track.",
"Overall F1 (All) is the F1 calculated over both primary and remote edges.",
"Full results are available online.",
"16 Figure 3 shows the fine-grained evaluation by labeled F1 per UCCA category, for each system in each track.",
"While Ground edges were uniformly 16 http://bit.ly/semeval2019task1results difficult to parse due to their sparsity in the training data, Relators were the easiest for all systems, as they are both common and predictable.",
"The Process/State distinction proved challenging, and most main relations were identified as the more common Process category.",
"The winning system in most tracks (HLT@SUDA) performed better on almost all categories.",
"Its largest advantage was on Parallel Scenes and Linkers, showing was especially successful at identifying Scene boundaries relative to the other systems, which requires a good understanding of syntax.",
"Results Discussion The HLT@SUDA system participated in all the tracks, obtaining the first place in the six English and German tracks and the second place in the French open track.",
"The system is based on the conversion of UCCA graphs into constituency trees, marking remote and discontinuous edges for recovery.",
"The classification-based recovery of the remote edges is performed simultaneously with the constituency parsing in a multi-task learning framework.",
"This work, which further connects between semantic and syntactic parsing, proposes a recovery mechanism that can be applied to other grammatical formalisms, enabling the conversion of a given formalism to another one for parsing.",
"The idea of this system is inspired by the pseudo non-projective dependency parsing approach proposed by Nivre and Nilsson (2005) .",
"The MaskParse@Deskiñ system only participated to the French open track, focusing on crosslingual parsing.",
"The system uses a semantic tagger, implemented with a bidirectional GRU and a masking mechanism to recursively extract the inner semantic structures in the graph.",
"Multilingual word embeddings are also used.",
"Using the English and German training data as well as the small French trial data for training, the parser ranked fourth in the French open track with a labeled F1 score of 65.4%, suggesting that this new model could be useful for low-resource languages.",
"The Tüpa system takes a transition-based approach, building on the TUPA transition system and oracle, but modifies its feature representations.",
"Specifically, instead of representing the parser configuration using LSTMs over the partially parsed graph, stack and buffer, they use feedforward networks with ELMo contextualized embeddings.",
"The stack and buffer are represented by the top three items on them.",
"For the partially parsed graph, they extract the rightmost and leftmost parents and children of the respective items, and represent them by the ELMo embedding of their form, the embedding of their dependency heads (for terminals, for non-terminals this is replaced with a learned embedding) and the embeddings of all terminal children.",
"Results are generally on-par with the TUPA baseline, and surpass it from the out-of-domain English setting.",
"This suggests that the TUPA architecture may be simplified, without compromising performance.",
"The UC Davis system participated only in the English closed track, where they achieved the second highest score, on par with TUPA.",
"The proposed parser has an encoder-decoder architecture, where the encoder is a simple BiLSTM encoder for each span of words.",
"The decoder iteratively and greedily traverses the sentence, and attempts to predict span boundaries.",
"The basic algorithm yields an unlabeled contiguous phrase-based tree, but additional modules predict the labels of the spans, discontiguous units (by joining together spans from the contiguous tree under a new node), and remote edges.",
"The work is inspired by Kitaev and Klein (2018) , who used similar methods for constituency parsing.",
"The GCN-SEM system uses a BiLSTM encoder, and predicts bi-lexical semantic dependencies (in the SDP format) using word, token and syntactic dependency parses.",
"The latter is incorporated into the network with a graph convolutional network (GCN).",
"The team participated in the English and German closed tracks, and were not among the highest-ranking teams.",
"However, scores on the UCCA test sets converted to the bi-lexical CoNLL-U format were rather high, implying that the lossy conversion could be much of the reason.",
"The CUNY-PKU system was based on an ensemble.",
"The ensemble included variations of TUPA parser, namely the MLP and BiLSTM models (Hershcovich et al., 2017) and the BiLSTM model with an additional MLP.",
"The system also proposes a way to aggregate the ensemble going through CKY parsing and accounting for remotes and discontinuous spans.",
"The team participated in all tracks, including additional information in the open domain, notably synthetic data based on automatically translating annotated texts.",
"Their system ranks first in the French open track.",
"The DANGNT@UIT.VNU-HCM system partic-ipated only in the English Wiki open and closed tracks.",
"The system is based on graph transformations from dependency trees into UCCA, using heuristics to create non-terminal nodes and map the dependency relations to UCCA categories.",
"The manual rules were developed based on the training and development data.",
"As the system converts trees to trees and does not add reentrancies, it does not produce remote edges.",
"While the results are not among the highest-ranking in the task, the primary labeled F1 score of 71.1% in the English open track shows that a rule-based system on top of a leading dependency parser (the Stanford parser) can obtain reasonable results for this task.",
"Conclusion The task has yielded substantial improvements to UCCA parsing in all settings.",
"Given that the best reported results were achieved with different parsing and learning approaches than the baseline model TUPA (which has been the only available parser for UCCA), the task opens a variety of paths for future improvement.",
"Cross-lingual transfer, which capitalizes on UCCA's tendency to be preserved in translation, was employed by a number of systems and has proven remarkably effective.",
"Indeed, the high scores obtained for French parsing in a low-resource setting suggest that high quality UCCA parsing can be straightforwardly extended to additional languages, with only a minimal amount of manual labor.",
"Moreover, given the conceptual similarity between the different semantic representations , it is likely the parsers developed for the shared task will directly contribute to the development of other semantic parsing technology.",
"Such a contribution is facilitated by the available conversion scripts available between UCCA and other formats."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"2",
"6",
"8",
"9"
],
"paper_header_content": [
"Overview",
"Task Definition",
"Data & Resources",
"TUPA: The Baseline Parser",
"Evaluation",
"·",
"Participating Systems",
"Discussion",
"Conclusion"
]
} | GEM-SciDuet-train-25#paper-1026#slide-2 | Graph Structure | Labeled directed acyclic graphs (DAGs). Complex units are non-terminal nodes. Phrases may be discontinuous. [Figure: UCCA graph of "They thought about taking a short break", with edge labels A, P, R, D and a dashed remote edge.] Remote edges enable reentrancy. | Labeled directed acyclic graphs (DAGs). Complex units are non-terminal nodes. Phrases may be discontinuous. [Figure: UCCA graph of "They thought about taking a short break", with edge labels A, P, R, D and a dashed remote edge.] Remote edges enable reentrancy. | []
GEM-SciDuet-train-25#paper-1026#slide-3 | 1026 | SemEval-2019 Task 1: Cross-lingual Semantic Parsing with UCCA | We present the SemEval 2019 shared task on Universal Conceptual Cognitive Annotation (UCCA) parsing in English, German and French, and discuss the participating systems and results. UCCA is a cross-linguistically applicable framework for semantic representation, which builds on extensive typological work and supports rapid annotation. UCCA poses a challenge for existing parsing techniques, as it exhibits reentrancy (resulting in DAG structures), discontinuous structures and non-terminal nodes corresponding to complex semantic units. The shared task has yielded improvements over the state-of-the-art baseline in all languages and settings. Full results can be found on the task's website: https://competitions.codalab.org/competitions/19160. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159
],
"paper_content_text": [
"Overview Semantic representation is receiving growing attention in NLP in the past few years, and many proposals for semantic schemes have recently been put forth.",
"Examples include Abstract Meaning Representation (AMR; Banarescu et al., 2013) , Broad-coverage Semantic Dependencies (SDP; Oepen et al., 2016) , Universal Decompositional Semantics (UDS; White et al., 2016) , Parallel Meaning Bank (Abzianidze et al., 2017) , and Universal Conceptual Cognitive Annotation (UCCA; Abend and Rappoport, 2013) .",
"These advances in semantic representation, along with corresponding advances in semantic parsing, can potentially benefit essentially all text understanding tasks, and have already demonstrated applicability to a variety of tasks, including summarization (Liu et al., 2015; Dohare and Karnick, 2017) , paraphrase detection (Issa et al., 2018) , and semantic evaluation (using UCCA; see below).",
"In this shared task, we focus on UCCA parsing in multiple languages.",
"One of our goals is to benefit semantic parsing in languages with less annotated resources by making use of data from more resource-rich languages.",
"We refer to this approach as cross-lingual parsing, while other works (Zhang et al., 2017 (Zhang et al., , 2018 define cross-lingual parsing as the task of parsing text in one language to meaning representation in another language.",
"In addition to its potential applicative value, work on semantic parsing poses interesting algorithmic and modeling challenges, which are often different from those tackled in syntactic parsing, including reentrancy (e.g., for sharing arguments across predicates), and the modeling of the interface with lexical semantics.",
"UCCA is a cross-linguistically applicable semantic representation scheme, building on the established Basic Linguistic Theory typological framework (Dixon, 2010b (Dixon, ,a, 2012 .",
"It has demonstrated applicability to multiple languages, including English, French and German, and pilot annotation projects were conducted on a few languages more.",
"UCCA structures have been shown to be well-preserved in translation (Sulem et al., 2015) , and to support rapid annotation by nonexperts, assisted by an accessible annotation interface .",
"1 UCCA has already shown applicative value for text simplifica- Scene Elements P Process The main relation of a Scene that evolves in time (usually an action or movement).",
"S State The main relation of a Scene that does not evolve in time.",
"A Participant Scene participant (including locations, abstract entities and Scenes serving as arguments).",
"D Adverbial A secondary relation in a Scene.",
"Elements of Non-Scene Units C Center Necessary for the conceptualization of the parent unit.",
"E Elaborator A non-Scene relation applying to a single Center.",
"N Connector A non-Scene relation applying to two or more Centers, highlighting a common feature.",
"R Relator All other types of non-Scene relations: (1) Rs that relate a C to some super-ordinate relation, and (2) Rs that relate two Cs pertaining to different aspects of the parent unit.",
"Inter-Scene Relations H Parallel Scene A Scene linked to other Scenes by regular linkage (e.g., temporal, logical, purposive).",
"L Linker A relation between two or more Hs (e.g., \"when\", \"if\", \"in order to\").",
"G Ground A relation between the speech event and the uttered Scene (e.g., \"surprisingly\").",
"Other F Function Does not introduce a relation or participant.",
"Required by some structural pattern.",
"tion (Sulem et al., 2018b) , as well as for defining semantic evaluation measures for text-to-text generation tasks, including machine translation (Birch et al., 2016) , text simplification (Sulem et al., 2018a) and grammatical error correction (Choshen and Abend, 2018) .",
"The shared task defines a number of tracks, based on the different corpora and the availability of external resources (see §5).",
"It received submissions from eight research groups around the world.",
"In all settings at least one of the submitted systems improved over the state-of-the-art TUPA parser (Hershcovich et al., 2017 (Hershcovich et al., , 2018 , used as a baseline.",
"Task Definition UCCA represents the semantics of linguistic utterances as directed acyclic graphs (DAGs), where terminal (childless) nodes correspond to the text tokens, and non-terminal nodes to semantic units that participate in some super-ordinate relation.",
"Edges are labeled, indicating the role of a child in the relation the parent represents.",
"Nodes and edges belong to one of several layers, each corresponding to a \"module\" of semantic distinctions.",
"UCCA's foundational layer covers the predicate-argument structure evoked by predicates of all grammatical categories (verbal, nominal, adjectival and others), the inter-relations between them, and other major linguistic phenomena such as semantic heads and multi-word expressions.",
"It is the only layer for which annotated corpora exist at the moment, and is thus the target of this shared task.",
"The layer's basic notion is the Scene, describing a state, action, movement or some other relation that evolves in time.",
"Each Scene contains one main relation (marked as either a Process or a State), as well as one or more Participants.",
"For example, the sentence \"After graduation, John moved to Paris\" (Figure 1 ) contains two Scenes, whose main relations are \"graduation\" and \"moved\".",
"\"John\" is a Participant in both Scenes, while \"Paris\" only in the latter.",
"Further categories account for inter-Scene relations and the internal structure of complex arguments and relations (e.g., coordination and multi-word expressions).",
"Table 1 provides a concise description of the categories used by the UCCA foundational layer.",
"UCCA distinguishes primary edges, corresponding to explicit relations, from remote edges (appear dashed in Figure 1 ) that allow for a unit to participate in several super-ordinate relations.",
"Primary edges form a tree in each layer, whereas remote edges enable reentrancy, forming a DAG.",
"UCCA graphs may contain implicit units with no correspondent in the text.",
"Figure 2 shows the annotation for the sentence \"A similar technique is almost impossible to apply to other crops, such as cotton, soybeans and rice.\"",
"2 It includes a single Scene, whose main relation is \"apply\", a secondary relation \"almost impossible\", as well as two complex arguments: \"a similar technique\" and the coordinated argument \"such as cotton, soybeans, and rice.\"",
"In addition, the Scene includes an implicit argument, which represents the agent of the \"apply\" relation.",
"While parsing technology is well-established for syntactic parsing, UCCA has several formal properties that distinguish it from syntactic representations, mostly UCCA's tendency to abstract away from syntactic detail that do not affect argument structure.",
"For instance, consider the following examples where the concept of a Scene has a different rationale from the syntactic concept of a clause.",
"First, non-verbal predicates in UCCA are represented like verbal ones, such as when they appear in copula clauses or noun phrases.",
"Indeed, in Figure 1 , \"graduation\" and \"moved\" are considered separate Scenes, despite appearing in the same clause.",
"Second, in the same example, \"John\" is marked as a (remote) Participant in the graduation Scene, despite not being explicitly mentioned.",
"Third, consider the possessive construction in \"John's trip home\".",
"While in UCCA \"trip\" evokes a Scene in which \"John\" is a Participant, a syntactic scheme would analyze this phrase similarly to \"John's shoes\".",
"The differences in the challenges posed by syntactic parsing and UCCA parsing, and more generally by semantic parsing, motivate the development of targeted parsing technology to tackle it.",
"Data & Resources All UCCA corpora are freely available.",
"3 For English, we use v1.2.3 of the Wikipedia UCCA corpus (Wiki), v1.2.2 of the UCCA Twenty Thousand Leagues Under the Sea English-French parallel corpus (20K), which includes UCCA manual annotation for the first five chapters in French and English, and v1.0.1 of the UCCA German Twenty 3 https://github.com/ UniversalConceptualCognitiveAnnotation Thousand Leagues Under the Sea corpus, which includes the entire book in German.",
"For consistent annotation, we replace any Time and Quantifier labels with Adverbial and Elaborator in these data sets.",
"The resulting training, development 4 and test sets 5 are publicly available, and the splits are given in Table 2 .",
"Statistics on various structural properties are given in Table 3 .",
"The corpora were manually annotated according to v1.2 of the UCCA guidelines, 6 and reviewed by a second annotator.",
"All data was passed through automatic validation and normalization scripts.",
"7 The goal of validation is to rule out cases that are inconsistent with the UCCA annotation guidelines.",
"For example, a Scene, defined by the presence of a Process or a State, should include at least one Participant.",
"Due to the small amount of annotated data available for French, we only provided a minimal training set of 15 sentences, in addition to the development and test set.",
"Systems for French were expected to pursue semi-supervised approaches, such as cross-lingual learning or structure projection, leveraging the parallel nature of the corpus, or to rely on datasets for related formalisms, such as Universal Dependencies (Nivre et al., 2016) .",
"The full unannotated 20K Leagues corpus in English and French was released as well, in order to facilitate pursuing cross-lingual approaches.",
"Datasets were released in an XML format, including tokenized text automatically pre- processed using spaCy (see §5), and gold-standard UCCA annotation for the train and development sets.",
"8 To facilitate the use of existing NLP tools, we also released the data in SDP, AMR, CoNLL-U and plain text formats.",
"TUPA: The Baseline Parser We use the TUPA parser, the only parser for UCCA at the time the task was announced, as a baseline (Hershcovich et al., 2017 (Hershcovich et al., , 2018 .",
"TUPA is a transition-based DAG parser based on a BiLSTM-based classifier.",
"9 TUPA in itself has been found superior to a number of conversionbased parsers that use existing parsers for other formalisms to parse UCCA by constructing a twoway conversion protocol between the formalisms.",
"It can thus be regarded as a strong baseline for sys-8 https://github.com/ UniversalConceptualCognitiveAnnotation/ docs/blob/master/FORMAT.md 9 https://github.com/huji-nlp/tupa tem submissions to the shared task.",
"Evaluation Tracks.",
"Participants in the task were evaluated in four settings: In order to allow both even ground comparison between systems and using hitherto untried resources, we held both an open and a closed track for submissions in the English and German settings.",
"Closed track submissions were only allowed to use the gold-standard UCCA annotation distributed for the task in the target language, and were limited in their use of additional resources.",
"Concretely, the only additional data they were allowed to use is that used by TUPA, which consists of automatic annotations provided by spaCy: 10 POS tags, syntactic dependency relations, and named entity types and spans.",
"In addition, the closed track only allowed the use of word embeddings provided by fastText (Bojanowski et al., 2017 ) 11 for all languages.",
"Systems in the open track, on the other hand, were allowed to use any additional resource, such as UCCA annotation in other languages, dictionaries or datasets for other tasks, provided that they make sure not to use any additional gold standard annotation over the same text used in the UCCA corpora.",
"12 In both tracks, we required that submitted systems are not trained on the development data.",
"We only held an open track for French, due to the paucity of training data.",
"The four settings and two tracks result in a total of 7 competitions.",
"Scoring.",
"The following scores an output graph G 1 = (V 1 , E 1 ) against a gold one, G 2 = (V 2 , E 2 ), over the same sequence of terminals (tokens) W .",
"For a node v in V 1 or V 2 , define yield(v) ⊆ W as is its set of terminal descendants.",
"A pair of edges (v 1 , u 1 ) ∈ E 1 and (v 2 , u 2 ) ∈ E 2 with labels (categories) 1 , 2 is matching if yield(u 1 ) = yield(u 2 ) and 1 = 2 .",
"Labeled precision and recall are defined by dividing the number of matching edges in G 1 and G 2 by |E 1 | and |E 2 |, respectively.",
"F 1 is their harmonic mean: · Precision · Recall Precision + Recall Unlabeled precision, recall and F 1 are the same, but without requiring that 1 = 2 for the edges to match.",
"We evaluate these measures for primary and remote edges separately.",
"For a more finegrained evaluation, we additionally report precision, recall and F 1 on edges of each category.",
"13 Participating Systems We received a total of eight submissions to the different tracks: MaskParse@Deskiñ 12 We are not aware of any such annotation, but include this restriction for completeness.",
"13 The official evaluation script providing both coarse-grained and fine-grained scores can be found in https://github.com/huji-nlp/ucca/blob/ master/scripts/evaluate_standard.py.",
"14 It was later discovered that CUNY-PekingU used some of the evaluation data for training in the open tracks, and they were thus disqualified from these tracks.",
"In terms of parsing approaches, the task was quite varied.",
"HLT@SUDA converted UCCA graphs to constituency trees and trained a constituency parser and a recovery mechanism of remote edges in a multi-task framework.",
"MaskParse@Deskiñ used a bidirectional GRU tagger with a masking mechanism.",
"Tüpa and XLangMo used a transition-based approach.",
"UC Davis used an encoder-decoder architecture.",
"GCN-SEM uses a BiLSTM model to predict Semantic Dependency Parsing tags, when the syntactic dependency tree is given in the input.",
"CUNY-PKU is based on an ensemble that includes different variations of the TUPA parser.",
"DAN-GNT@UIT.VNU-HCM converted syntactic dependency trees to UCCA graphs.",
"Different systems handled remote edges differently.",
"DANGNT@UIT.VNU-HCM and GCN-SEM ignored remote edges.",
"UC Davis used a different BiLSTM for remote edges.",
"HLT@SUDA marked remote edges when converting the graph to a constituency tree and trained a classification model for their recovery.",
"MaskParse@Deskiñ handles remote edges by detecting arguments that are outside of the parent's node span using a detection threshold on the output probabilities.",
"In terms of using the data, all teams but one used the UCCA XML format, two used the CoNLL-U format, which is derived by a lossy conversion process, and only one team found the other data formats helpful.",
"One of the teams (MaskParse@Deskiñ) built a new training data adapted to their model by repeating each sentence N times, N being the number of non-terminal nodes in the UCCA graphs.",
"Three of the teams adapted the baseline TUPA parser, or parts of it to form their parser, namely TüPa, CUNY-PekingU and XLangMo; HLT@SUDA used a constituency parser (Stern et al., 2017) as a component in their model; DANGNT@UIT.VNU-HCM is a rule-based system over the Stanford Parser, and the rest are newly constructed parsers.",
"All teams found it useful to use external resources beyond those provided by the Shared Task.",
"Four submissions used external embeddings, MUSE (Conneau et al., 2017) in the case of MaskParse@Deskiñ and XLangMo, ELMo (Peters et al., 2018) in the case of TüPa, 15 and BERT (Devlin et al., 2019) in the case of HLT@SUDA.",
"Other resources included additional unlabeled data (TüPa), a list of multi-word expressions (MaskParse@Deskiñ), and the Stanford parser in the case of DANGNT@UIT.VNU-HCM.",
"Only CUNY-PKU used the 20K unlabeled parallel data in English and French.",
"A common trend for many of the systems was the use of cross-lingual projection or transfer (MaskParse@Deskiñ, HLT@SUDA, TüPa, GCN-Sem, CUNY-PKU and XLangMo).",
"This was necessary for French, and was found helpful for German as well (CUNY-PKU).",
"Table 4 shows the labeled and unlabeled F1 for primary and remote edges, for each system in each track.",
"Overall F1 (All) is the F1 calculated over both primary and remote edges.",
"Full results are available online.",
"16 Figure 3 shows the fine-grained evaluation by labeled F1 per UCCA category, for each system in each track.",
"While Ground edges were uniformly 16 http://bit.ly/semeval2019task1results difficult to parse due to their sparsity in the training data, Relators were the easiest for all systems, as they are both common and predictable.",
"The Process/State distinction proved challenging, and most main relations were identified as the more common Process category.",
"The winning system in most tracks (HLT@SUDA) performed better on almost all categories.",
"Its largest advantage was on Parallel Scenes and Linkers, showing was especially successful at identifying Scene boundaries relative to the other systems, which requires a good understanding of syntax.",
"Results Discussion The HLT@SUDA system participated in all the tracks, obtaining the first place in the six English and German tracks and the second place in the French open track.",
"The system is based on the conversion of UCCA graphs into constituency trees, marking remote and discontinuous edges for recovery.",
"The classification-based recovery of the remote edges is performed simultaneously with the constituency parsing in a multi-task learning framework.",
"This work, which further connects between semantic and syntactic parsing, proposes a recovery mechanism that can be applied to other grammatical formalisms, enabling the conversion of a given formalism to another one for parsing.",
"The idea of this system is inspired by the pseudo non-projective dependency parsing approach proposed by Nivre and Nilsson (2005) .",
"The MaskParse@Deskiñ system only participated to the French open track, focusing on crosslingual parsing.",
"The system uses a semantic tagger, implemented with a bidirectional GRU and a masking mechanism to recursively extract the inner semantic structures in the graph.",
"Multilingual word embeddings are also used.",
"Using the English and German training data as well as the small French trial data for training, the parser ranked fourth in the French open track with a labeled F1 score of 65.4%, suggesting that this new model could be useful for low-resource languages.",
"The Tüpa system takes a transition-based approach, building on the TUPA transition system and oracle, but modifies its feature representations.",
"Specifically, instead of representing the parser configuration using LSTMs over the partially parsed graph, stack and buffer, they use feedforward networks with ELMo contextualized embeddings.",
"The stack and buffer are represented by the top three items on them.",
"For the partially parsed graph, they extract the rightmost and leftmost parents and children of the respective items, and represent them by the ELMo embedding of their form, the embedding of their dependency heads (for terminals, for non-terminals this is replaced with a learned embedding) and the embeddings of all terminal children.",
"Results are generally on-par with the TUPA baseline, and surpass it from the out-of-domain English setting.",
"This suggests that the TUPA architecture may be simplified, without compromising performance.",
"The UC Davis system participated only in the English closed track, where they achieved the second highest score, on par with TUPA.",
"The proposed parser has an encoder-decoder architecture, where the encoder is a simple BiLSTM encoder for each span of words.",
"The decoder iteratively and greedily traverses the sentence, and attempts to predict span boundaries.",
"The basic algorithm yields an unlabeled contiguous phrase-based tree, but additional modules predict the labels of the spans, discontiguous units (by joining together spans from the contiguous tree under a new node), and remote edges.",
"The work is inspired by Kitaev and Klein (2018) , who used similar methods for constituency parsing.",
"The GCN-SEM system uses a BiLSTM encoder, and predicts bi-lexical semantic dependencies (in the SDP format) using word, token and syntactic dependency parses.",
"The latter is incorporated into the network with a graph convolutional network (GCN).",
"The team participated in the English and German closed tracks, and were not among the highest-ranking teams.",
"However, scores on the UCCA test sets converted to the bi-lexical CoNLL-U format were rather high, implying that the lossy conversion could be much of the reason.",
"The CUNY-PKU system was based on an ensemble.",
"The ensemble included variations of TUPA parser, namely the MLP and BiLSTM models (Hershcovich et al., 2017) and the BiLSTM model with an additional MLP.",
"The system also proposes a way to aggregate the ensemble going through CKY parsing and accounting for remotes and discontinuous spans.",
"The team participated in all tracks, including additional information in the open domain, notably synthetic data based on automatically translating annotated texts.",
"Their system ranks first in the French open track.",
"The DANGNT@UIT.VNU-HCM system partic-ipated only in the English Wiki open and closed tracks.",
"The system is based on graph transformations from dependency trees into UCCA, using heuristics to create non-terminal nodes and map the dependency relations to UCCA categories.",
"The manual rules were developed based on the training and development data.",
"As the system converts trees to trees and does not add reentrancies, it does not produce remote edges.",
"While the results are not among the highest-ranking in the task, the primary labeled F1 score of 71.1% in the English open track shows that a rule-based system on top of a leading dependency parser (the Stanford parser) can obtain reasonable results for this task.",
"Conclusion The task has yielded substantial improvements to UCCA parsing in all settings.",
"Given that the best reported results were achieved with different parsing and learning approaches than the baseline model TUPA (which has been the only available parser for UCCA), the task opens a variety of paths for future improvement.",
"Cross-lingual transfer, which capitalizes on UCCA's tendency to be preserved in translation, was employed by a number of systems and has proven remarkably effective.",
"Indeed, the high scores obtained for French parsing in a low-resource setting suggest that high quality UCCA parsing can be straightforwardly extended to additional languages, with only a minimal amount of manual labor.",
"Moreover, given the conceptual similarity between the different semantic representations , it is likely the parsers developed for the shared task will directly contribute to the development of other semantic parsing technology.",
"Such a contribution is facilitated by the available conversion scripts available between UCCA and other formats."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"2",
"6",
"8",
"9"
],
"paper_header_content": [
"Overview",
"Task Definition",
"Data & Resources",
"TUPA: The Baseline Parser",
"Evaluation",
"·",
"Participating Systems",
"Discussion",
"Conclusion"
]
} | GEM-SciDuet-train-25#paper-1026#slide-3 | Baseline | TUPA, a transition-based UCCA parser (Hershcovich et al., 2017). [Figure: TUPA architecture, with BiLSTMs over the tokens "They thought about taking a short break" feeding a classifier that predicts transitions such as NodeC.] | TUPA, a transition-based UCCA parser (Hershcovich et al., 2017). [Figure: TUPA architecture, with BiLSTMs over the tokens "They thought about taking a short break" feeding a classifier that predicts transitions such as NodeC.] | []
GEM-SciDuet-train-25#paper-1026#slide-4 | 1026 | SemEval-2019 Task 1: Cross-lingual Semantic Parsing with UCCA | We present the SemEval 2019 shared task on Universal Conceptual Cognitive Annotation (UCCA) parsing in English, German and French, and discuss the participating systems and results. UCCA is a cross-linguistically applicable framework for semantic representation, which builds on extensive typological work and supports rapid annotation. UCCA poses a challenge for existing parsing techniques, as it exhibits reentrancy (resulting in DAG structures), discontinuous structures and non-terminal nodes corresponding to complex semantic units. The shared task has yielded improvements over the state-of-the-art baseline in all languages and settings. Full results can be found on the task's website: https://competitions.codalab.org/competitions/19160. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159
],
"paper_content_text": [
"Overview Semantic representation is receiving growing attention in NLP in the past few years, and many proposals for semantic schemes have recently been put forth.",
"Examples include Abstract Meaning Representation (AMR; Banarescu et al., 2013) , Broad-coverage Semantic Dependencies (SDP; Oepen et al., 2016) , Universal Decompositional Semantics (UDS; White et al., 2016) , Parallel Meaning Bank (Abzianidze et al., 2017) , and Universal Conceptual Cognitive Annotation (UCCA; Abend and Rappoport, 2013) .",
"These advances in semantic representation, along with corresponding advances in semantic parsing, can potentially benefit essentially all text understanding tasks, and have already demonstrated applicability to a variety of tasks, including summarization (Liu et al., 2015; Dohare and Karnick, 2017) , paraphrase detection (Issa et al., 2018) , and semantic evaluation (using UCCA; see below).",
"In this shared task, we focus on UCCA parsing in multiple languages.",
"One of our goals is to benefit semantic parsing in languages with less annotated resources by making use of data from more resource-rich languages.",
"We refer to this approach as cross-lingual parsing, while other works (Zhang et al., 2017 (Zhang et al., , 2018 define cross-lingual parsing as the task of parsing text in one language to meaning representation in another language.",
"In addition to its potential applicative value, work on semantic parsing poses interesting algorithmic and modeling challenges, which are often different from those tackled in syntactic parsing, including reentrancy (e.g., for sharing arguments across predicates), and the modeling of the interface with lexical semantics.",
"UCCA is a cross-linguistically applicable semantic representation scheme, building on the established Basic Linguistic Theory typological framework (Dixon, 2010b (Dixon, ,a, 2012 .",
"It has demonstrated applicability to multiple languages, including English, French and German, and pilot annotation projects were conducted on a few languages more.",
"UCCA structures have been shown to be well-preserved in translation (Sulem et al., 2015) , and to support rapid annotation by nonexperts, assisted by an accessible annotation interface .",
"1 UCCA has already shown applicative value for text simplifica- Scene Elements P Process The main relation of a Scene that evolves in time (usually an action or movement).",
"S State The main relation of a Scene that does not evolve in time.",
"A Participant Scene participant (including locations, abstract entities and Scenes serving as arguments).",
"D Adverbial A secondary relation in a Scene.",
"Elements of Non-Scene Units C Center Necessary for the conceptualization of the parent unit.",
"E Elaborator A non-Scene relation applying to a single Center.",
"N Connector A non-Scene relation applying to two or more Centers, highlighting a common feature.",
"R Relator All other types of non-Scene relations: (1) Rs that relate a C to some super-ordinate relation, and (2) Rs that relate two Cs pertaining to different aspects of the parent unit.",
"Inter-Scene Relations H Parallel Scene A Scene linked to other Scenes by regular linkage (e.g., temporal, logical, purposive).",
"L Linker A relation between two or more Hs (e.g., \"when\", \"if\", \"in order to\").",
"G Ground A relation between the speech event and the uttered Scene (e.g., \"surprisingly\").",
"Other F Function Does not introduce a relation or participant.",
"Required by some structural pattern.",
"tion (Sulem et al., 2018b) , as well as for defining semantic evaluation measures for text-to-text generation tasks, including machine translation (Birch et al., 2016) , text simplification (Sulem et al., 2018a) and grammatical error correction (Choshen and Abend, 2018) .",
"The shared task defines a number of tracks, based on the different corpora and the availability of external resources (see §5).",
"It received submissions from eight research groups around the world.",
"In all settings at least one of the submitted systems improved over the state-of-the-art TUPA parser (Hershcovich et al., 2017 (Hershcovich et al., , 2018 , used as a baseline.",
"Task Definition UCCA represents the semantics of linguistic utterances as directed acyclic graphs (DAGs), where terminal (childless) nodes correspond to the text tokens, and non-terminal nodes to semantic units that participate in some super-ordinate relation.",
"Edges are labeled, indicating the role of a child in the relation the parent represents.",
"Nodes and edges belong to one of several layers, each corresponding to a \"module\" of semantic distinctions.",
"UCCA's foundational layer covers the predicate-argument structure evoked by predicates of all grammatical categories (verbal, nominal, adjectival and others), the inter-relations between them, and other major linguistic phenomena such as semantic heads and multi-word expressions.",
"It is the only layer for which annotated corpora exist at the moment, and is thus the target of this shared task.",
"The layer's basic notion is the Scene, describing a state, action, movement or some other relation that evolves in time.",
"Each Scene contains one main relation (marked as either a Process or a State), as well as one or more Participants.",
"For example, the sentence \"After graduation, John moved to Paris\" (Figure 1 ) contains two Scenes, whose main relations are \"graduation\" and \"moved\".",
"\"John\" is a Participant in both Scenes, while \"Paris\" only in the latter.",
"Further categories account for inter-Scene relations and the internal structure of complex arguments and relations (e.g., coordination and multi-word expressions).",
"Table 1 provides a concise description of the categories used by the UCCA foundational layer.",
"UCCA distinguishes primary edges, corresponding to explicit relations, from remote edges (appear dashed in Figure 1 ) that allow for a unit to participate in several super-ordinate relations.",
"Primary edges form a tree in each layer, whereas remote edges enable reentrancy, forming a DAG.",
"UCCA graphs may contain implicit units with no correspondent in the text.",
"Figure 2 shows the annotation for the sentence \"A similar technique is almost impossible to apply to other crops, such as cotton, soybeans and rice.\"",
"2 It includes a single Scene, whose main relation is \"apply\", a secondary relation \"almost impossible\", as well as two complex arguments: \"a similar technique\" and the coordinated argument \"such as cotton, soybeans, and rice.\"",
"In addition, the Scene includes an implicit argument, which represents the agent of the \"apply\" relation.",
"While parsing technology is well-established for syntactic parsing, UCCA has several formal properties that distinguish it from syntactic representations, mostly UCCA's tendency to abstract away from syntactic detail that do not affect argument structure.",
"For instance, consider the following examples where the concept of a Scene has a different rationale from the syntactic concept of a clause.",
"First, non-verbal predicates in UCCA are represented like verbal ones, such as when they appear in copula clauses or noun phrases.",
"Indeed, in Figure 1 , \"graduation\" and \"moved\" are considered separate Scenes, despite appearing in the same clause.",
"Second, in the same example, \"John\" is marked as a (remote) Participant in the graduation Scene, despite not being explicitly mentioned.",
"Third, consider the possessive construction in \"John's trip home\".",
"While in UCCA \"trip\" evokes a Scene in which \"John\" is a Participant, a syntactic scheme would analyze this phrase similarly to \"John's shoes\".",
"The differences in the challenges posed by syntactic parsing and UCCA parsing, and more generally by semantic parsing, motivate the development of targeted parsing technology to tackle it.",
"Data & Resources All UCCA corpora are freely available.",
"3 For English, we use v1.2.3 of the Wikipedia UCCA corpus (Wiki), v1.2.2 of the UCCA Twenty Thousand Leagues Under the Sea English-French parallel corpus (20K), which includes UCCA manual annotation for the first five chapters in French and English, and v1.0.1 of the UCCA German Twenty 3 https://github.com/ UniversalConceptualCognitiveAnnotation Thousand Leagues Under the Sea corpus, which includes the entire book in German.",
"For consistent annotation, we replace any Time and Quantifier labels with Adverbial and Elaborator in these data sets.",
"The resulting training, development 4 and test sets 5 are publicly available, and the splits are given in Table 2 .",
"Statistics on various structural properties are given in Table 3 .",
"The corpora were manually annotated according to v1.2 of the UCCA guidelines, 6 and reviewed by a second annotator.",
"All data was passed through automatic validation and normalization scripts.",
"7 The goal of validation is to rule out cases that are inconsistent with the UCCA annotation guidelines.",
"For example, a Scene, defined by the presence of a Process or a State, should include at least one Participant.",
"Due to the small amount of annotated data available for French, we only provided a minimal training set of 15 sentences, in addition to the development and test set.",
"Systems for French were expected to pursue semi-supervised approaches, such as cross-lingual learning or structure projection, leveraging the parallel nature of the corpus, or to rely on datasets for related formalisms, such as Universal Dependencies (Nivre et al., 2016) .",
"The full unannotated 20K Leagues corpus in English and French was released as well, in order to facilitate pursuing cross-lingual approaches.",
"Datasets were released in an XML format, including tokenized text automatically pre- processed using spaCy (see §5), and gold-standard UCCA annotation for the train and development sets.",
"8 To facilitate the use of existing NLP tools, we also released the data in SDP, AMR, CoNLL-U and plain text formats.",
"TUPA: The Baseline Parser We use the TUPA parser, the only parser for UCCA at the time the task was announced, as a baseline (Hershcovich et al., 2017 (Hershcovich et al., , 2018 .",
"TUPA is a transition-based DAG parser based on a BiLSTM-based classifier.",
"9 TUPA in itself has been found superior to a number of conversionbased parsers that use existing parsers for other formalisms to parse UCCA by constructing a twoway conversion protocol between the formalisms.",
"It can thus be regarded as a strong baseline for sys-8 https://github.com/ UniversalConceptualCognitiveAnnotation/ docs/blob/master/FORMAT.md 9 https://github.com/huji-nlp/tupa tem submissions to the shared task.",
"Evaluation Tracks.",
"Participants in the task were evaluated in four settings: In order to allow both even ground comparison between systems and using hitherto untried resources, we held both an open and a closed track for submissions in the English and German settings.",
"Closed track submissions were only allowed to use the gold-standard UCCA annotation distributed for the task in the target language, and were limited in their use of additional resources.",
"Concretely, the only additional data they were allowed to use is that used by TUPA, which consists of automatic annotations provided by spaCy: 10 POS tags, syntactic dependency relations, and named entity types and spans.",
"In addition, the closed track only allowed the use of word embeddings provided by fastText (Bojanowski et al., 2017 ) 11 for all languages.",
"Systems in the open track, on the other hand, were allowed to use any additional resource, such as UCCA annotation in other languages, dictionaries or datasets for other tasks, provided that they make sure not to use any additional gold standard annotation over the same text used in the UCCA corpora.",
"12 In both tracks, we required that submitted systems are not trained on the development data.",
"We only held an open track for French, due to the paucity of training data.",
"The four settings and two tracks result in a total of 7 competitions.",
"Scoring.",
"The following scores an output graph G 1 = (V 1 , E 1 ) against a gold one, G 2 = (V 2 , E 2 ), over the same sequence of terminals (tokens) W .",
"For a node v in V 1 or V 2 , define yield(v) ⊆ W as is its set of terminal descendants.",
"A pair of edges (v 1 , u 1 ) ∈ E 1 and (v 2 , u 2 ) ∈ E 2 with labels (categories) 1 , 2 is matching if yield(u 1 ) = yield(u 2 ) and 1 = 2 .",
"Labeled precision and recall are defined by dividing the number of matching edges in G 1 and G 2 by |E 1 | and |E 2 |, respectively.",
"F 1 is their harmonic mean: · Precision · Recall Precision + Recall Unlabeled precision, recall and F 1 are the same, but without requiring that 1 = 2 for the edges to match.",
"We evaluate these measures for primary and remote edges separately.",
"For a more finegrained evaluation, we additionally report precision, recall and F 1 on edges of each category.",
"13 Participating Systems We received a total of eight submissions to the different tracks: MaskParse@Deskiñ 12 We are not aware of any such annotation, but include this restriction for completeness.",
"13 The official evaluation script providing both coarse-grained and fine-grained scores can be found in https://github.com/huji-nlp/ucca/blob/ master/scripts/evaluate_standard.py.",
"14 It was later discovered that CUNY-PekingU used some of the evaluation data for training in the open tracks, and they were thus disqualified from these tracks.",
"In terms of parsing approaches, the task was quite varied.",
"HLT@SUDA converted UCCA graphs to constituency trees and trained a constituency parser and a recovery mechanism of remote edges in a multi-task framework.",
"MaskParse@Deskiñ used a bidirectional GRU tagger with a masking mechanism.",
"Tüpa and XLangMo used a transition-based approach.",
"UC Davis used an encoder-decoder architecture.",
"GCN-SEM uses a BiLSTM model to predict Semantic Dependency Parsing tags, when the syntactic dependency tree is given in the input.",
"CUNY-PKU is based on an ensemble that includes different variations of the TUPA parser.",
"DAN-GNT@UIT.VNU-HCM converted syntactic dependency trees to UCCA graphs.",
"Different systems handled remote edges differently.",
"DANGNT@UIT.VNU-HCM and GCN-SEM ignored remote edges.",
"UC Davis used a different BiLSTM for remote edges.",
"HLT@SUDA marked remote edges when converting the graph to a constituency tree and trained a classification model for their recovery.",
"MaskParse@Deskiñ handles remote edges by detecting arguments that are outside of the parent's node span using a detection threshold on the output probabilities.",
"In terms of using the data, all teams but one used the UCCA XML format, two used the CoNLL-U format, which is derived by a lossy conversion process, and only one team found the other data formats helpful.",
"One of the teams (MaskParse@Deskiñ) built a new training data adapted to their model by repeating each sentence N times, N being the number of non-terminal nodes in the UCCA graphs.",
"Three of the teams adapted the baseline TUPA parser, or parts of it to form their parser, namely TüPa, CUNY-PekingU and XLangMo; HLT@SUDA used a constituency parser (Stern et al., 2017) as a component in their model; DANGNT@UIT.VNU-HCM is a rule-based system over the Stanford Parser, and the rest are newly constructed parsers.",
"All teams found it useful to use external resources beyond those provided by the Shared Task.",
"Four submissions used external embeddings, MUSE (Conneau et al., 2017) in the case of MaskParse@Deskiñ and XLangMo, ELMo (Peters et al., 2018) in the case of TüPa, 15 and BERT (Devlin et al., 2019) in the case of HLT@SUDA.",
"Other resources included additional unlabeled data (TüPa), a list of multi-word expressions (MaskParse@Deskiñ), and the Stanford parser in the case of DANGNT@UIT.VNU-HCM.",
"Only CUNY-PKU used the 20K unlabeled parallel data in English and French.",
"A common trend for many of the systems was the use of cross-lingual projection or transfer (MaskParse@Deskiñ, HLT@SUDA, TüPa, GCN-Sem, CUNY-PKU and XLangMo).",
"This was necessary for French, and was found helpful for German as well (CUNY-PKU).",
"Table 4 shows the labeled and unlabeled F1 for primary and remote edges, for each system in each track.",
"Overall F1 (All) is the F1 calculated over both primary and remote edges.",
"Full results are available online.",
"16 Figure 3 shows the fine-grained evaluation by labeled F1 per UCCA category, for each system in each track.",
"While Ground edges were uniformly 16 http://bit.ly/semeval2019task1results difficult to parse due to their sparsity in the training data, Relators were the easiest for all systems, as they are both common and predictable.",
"The Process/State distinction proved challenging, and most main relations were identified as the more common Process category.",
"The winning system in most tracks (HLT@SUDA) performed better on almost all categories.",
"Its largest advantage was on Parallel Scenes and Linkers, showing was especially successful at identifying Scene boundaries relative to the other systems, which requires a good understanding of syntax.",
"Results Discussion The HLT@SUDA system participated in all the tracks, obtaining the first place in the six English and German tracks and the second place in the French open track.",
"The system is based on the conversion of UCCA graphs into constituency trees, marking remote and discontinuous edges for recovery.",
"The classification-based recovery of the remote edges is performed simultaneously with the constituency parsing in a multi-task learning framework.",
"This work, which further connects between semantic and syntactic parsing, proposes a recovery mechanism that can be applied to other grammatical formalisms, enabling the conversion of a given formalism to another one for parsing.",
"The idea of this system is inspired by the pseudo non-projective dependency parsing approach proposed by Nivre and Nilsson (2005) .",
"The MaskParse@Deskiñ system only participated to the French open track, focusing on crosslingual parsing.",
"The system uses a semantic tagger, implemented with a bidirectional GRU and a masking mechanism to recursively extract the inner semantic structures in the graph.",
"Multilingual word embeddings are also used.",
"Using the English and German training data as well as the small French trial data for training, the parser ranked fourth in the French open track with a labeled F1 score of 65.4%, suggesting that this new model could be useful for low-resource languages.",
"The Tüpa system takes a transition-based approach, building on the TUPA transition system and oracle, but modifies its feature representations.",
"Specifically, instead of representing the parser configuration using LSTMs over the partially parsed graph, stack and buffer, they use feedforward networks with ELMo contextualized embeddings.",
"The stack and buffer are represented by the top three items on them.",
"For the partially parsed graph, they extract the rightmost and leftmost parents and children of the respective items, and represent them by the ELMo embedding of their form, the embedding of their dependency heads (for terminals, for non-terminals this is replaced with a learned embedding) and the embeddings of all terminal children.",
"Results are generally on-par with the TUPA baseline, and surpass it from the out-of-domain English setting.",
"This suggests that the TUPA architecture may be simplified, without compromising performance.",
"The UC Davis system participated only in the English closed track, where they achieved the second highest score, on par with TUPA.",
"The proposed parser has an encoder-decoder architecture, where the encoder is a simple BiLSTM encoder for each span of words.",
"The decoder iteratively and greedily traverses the sentence, and attempts to predict span boundaries.",
"The basic algorithm yields an unlabeled contiguous phrase-based tree, but additional modules predict the labels of the spans, discontiguous units (by joining together spans from the contiguous tree under a new node), and remote edges.",
"The work is inspired by Kitaev and Klein (2018) , who used similar methods for constituency parsing.",
"The GCN-SEM system uses a BiLSTM encoder, and predicts bi-lexical semantic dependencies (in the SDP format) using word, token and syntactic dependency parses.",
"The latter is incorporated into the network with a graph convolutional network (GCN).",
"The team participated in the English and German closed tracks, and were not among the highest-ranking teams.",
"However, scores on the UCCA test sets converted to the bi-lexical CoNLL-U format were rather high, implying that the lossy conversion could be much of the reason.",
"The CUNY-PKU system was based on an ensemble.",
"The ensemble included variations of TUPA parser, namely the MLP and BiLSTM models (Hershcovich et al., 2017) and the BiLSTM model with an additional MLP.",
"The system also proposes a way to aggregate the ensemble going through CKY parsing and accounting for remotes and discontinuous spans.",
"The team participated in all tracks, including additional information in the open domain, notably synthetic data based on automatically translating annotated texts.",
"Their system ranks first in the French open track.",
"The DANGNT@UIT.VNU-HCM system partic-ipated only in the English Wiki open and closed tracks.",
"The system is based on graph transformations from dependency trees into UCCA, using heuristics to create non-terminal nodes and map the dependency relations to UCCA categories.",
"The manual rules were developed based on the training and development data.",
"As the system converts trees to trees and does not add reentrancies, it does not produce remote edges.",
"While the results are not among the highest-ranking in the task, the primary labeled F1 score of 71.1% in the English open track shows that a rule-based system on top of a leading dependency parser (the Stanford parser) can obtain reasonable results for this task.",
"Conclusion The task has yielded substantial improvements to UCCA parsing in all settings.",
"Given that the best reported results were achieved with different parsing and learning approaches than the baseline model TUPA (which has been the only available parser for UCCA), the task opens a variety of paths for future improvement.",
"Cross-lingual transfer, which capitalizes on UCCA's tendency to be preserved in translation, was employed by a number of systems and has proven remarkably effective.",
"Indeed, the high scores obtained for French parsing in a low-resource setting suggest that high quality UCCA parsing can be straightforwardly extended to additional languages, with only a minimal amount of manual labor.",
"Moreover, given the conceptual similarity between the different semantic representations , it is likely the parsers developed for the shared task will directly contribute to the development of other semantic parsing technology.",
"Such a contribution is facilitated by the available conversion scripts available between UCCA and other formats."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"2",
"6",
"8",
"9"
],
"paper_header_content": [
"Overview",
"Task Definition",
"Data & Resources",
"TUPA: The Baseline Parser",
"Evaluation",
"·",
"Participating Systems",
"Discussion",
"Conclusion"
]
} | GEM-SciDuet-train-25#paper-1026#slide-4 | Data | English Wikipedia articles (Wiki).
English-French-German parallel corpus from
Twenty Thousand Leagues Under the Sea (20K). sentences tokens | English Wikipedia articles (Wiki).
English-French-German parallel corpus from
Twenty Thousand Leagues Under the Sea (20K). sentences tokens | [] |
GEM-SciDuet-train-25#paper-1026#slide-5 | 1026 | SemEval-2019 Task 1: Cross-lingual Semantic Parsing with UCCA | We present the SemEval 2019 shared task on Universal Conceptual Cognitive Annotation (UCCA) parsing in English, German and French, and discuss the participating systems and results. UCCA is a cross-linguistically applicable framework for semantic representation, which builds on extensive typological work and supports rapid annotation. UCCA poses a challenge for existing parsing techniques, as it exhibits reentrancy (resulting in DAG structures), discontinuous structures and non-terminal nodes corresponding to complex semantic units. The shared task has yielded improvements over the state-of-the-art baseline in all languages and settings. Full results can be found on the task's website: https://competitions.codalab.org/competitions/19160. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159
],
"paper_content_text": [
"Overview Semantic representation is receiving growing attention in NLP in the past few years, and many proposals for semantic schemes have recently been put forth.",
"Examples include Abstract Meaning Representation (AMR; Banarescu et al., 2013) , Broad-coverage Semantic Dependencies (SDP; Oepen et al., 2016) , Universal Decompositional Semantics (UDS; White et al., 2016) , Parallel Meaning Bank (Abzianidze et al., 2017) , and Universal Conceptual Cognitive Annotation (UCCA; Abend and Rappoport, 2013) .",
"These advances in semantic representation, along with corresponding advances in semantic parsing, can potentially benefit essentially all text understanding tasks, and have already demonstrated applicability to a variety of tasks, including summarization (Liu et al., 2015; Dohare and Karnick, 2017) , paraphrase detection (Issa et al., 2018) , and semantic evaluation (using UCCA; see below).",
"In this shared task, we focus on UCCA parsing in multiple languages.",
"One of our goals is to benefit semantic parsing in languages with less annotated resources by making use of data from more resource-rich languages.",
"We refer to this approach as cross-lingual parsing, while other works (Zhang et al., 2017 (Zhang et al., , 2018 define cross-lingual parsing as the task of parsing text in one language to meaning representation in another language.",
"In addition to its potential applicative value, work on semantic parsing poses interesting algorithmic and modeling challenges, which are often different from those tackled in syntactic parsing, including reentrancy (e.g., for sharing arguments across predicates), and the modeling of the interface with lexical semantics.",
"UCCA is a cross-linguistically applicable semantic representation scheme, building on the established Basic Linguistic Theory typological framework (Dixon, 2010b (Dixon, ,a, 2012 .",
"It has demonstrated applicability to multiple languages, including English, French and German, and pilot annotation projects were conducted on a few languages more.",
"UCCA structures have been shown to be well-preserved in translation (Sulem et al., 2015) , and to support rapid annotation by nonexperts, assisted by an accessible annotation interface .",
"1 UCCA has already shown applicative value for text simplifica- Scene Elements P Process The main relation of a Scene that evolves in time (usually an action or movement).",
"S State The main relation of a Scene that does not evolve in time.",
"A Participant Scene participant (including locations, abstract entities and Scenes serving as arguments).",
"D Adverbial A secondary relation in a Scene.",
"Elements of Non-Scene Units C Center Necessary for the conceptualization of the parent unit.",
"E Elaborator A non-Scene relation applying to a single Center.",
"N Connector A non-Scene relation applying to two or more Centers, highlighting a common feature.",
"R Relator All other types of non-Scene relations: (1) Rs that relate a C to some super-ordinate relation, and (2) Rs that relate two Cs pertaining to different aspects of the parent unit.",
"Inter-Scene Relations H Parallel Scene A Scene linked to other Scenes by regular linkage (e.g., temporal, logical, purposive).",
"L Linker A relation between two or more Hs (e.g., \"when\", \"if\", \"in order to\").",
"G Ground A relation between the speech event and the uttered Scene (e.g., \"surprisingly\").",
"Other F Function Does not introduce a relation or participant.",
"Required by some structural pattern.",
"tion (Sulem et al., 2018b) , as well as for defining semantic evaluation measures for text-to-text generation tasks, including machine translation (Birch et al., 2016) , text simplification (Sulem et al., 2018a) and grammatical error correction (Choshen and Abend, 2018) .",
"The shared task defines a number of tracks, based on the different corpora and the availability of external resources (see §5).",
"It received submissions from eight research groups around the world.",
"In all settings at least one of the submitted systems improved over the state-of-the-art TUPA parser (Hershcovich et al., 2017 (Hershcovich et al., , 2018 , used as a baseline.",
"Task Definition UCCA represents the semantics of linguistic utterances as directed acyclic graphs (DAGs), where terminal (childless) nodes correspond to the text tokens, and non-terminal nodes to semantic units that participate in some super-ordinate relation.",
"Edges are labeled, indicating the role of a child in the relation the parent represents.",
"Nodes and edges belong to one of several layers, each corresponding to a \"module\" of semantic distinctions.",
"UCCA's foundational layer covers the predicate-argument structure evoked by predicates of all grammatical categories (verbal, nominal, adjectival and others), the inter-relations between them, and other major linguistic phenomena such as semantic heads and multi-word expressions.",
"It is the only layer for which annotated corpora exist at the moment, and is thus the target of this shared task.",
"The layer's basic notion is the Scene, describing a state, action, movement or some other relation that evolves in time.",
"Each Scene contains one main relation (marked as either a Process or a State), as well as one or more Participants.",
"For example, the sentence \"After graduation, John moved to Paris\" (Figure 1 ) contains two Scenes, whose main relations are \"graduation\" and \"moved\".",
"\"John\" is a Participant in both Scenes, while \"Paris\" only in the latter.",
"Further categories account for inter-Scene relations and the internal structure of complex arguments and relations (e.g., coordination and multi-word expressions).",
"Table 1 provides a concise description of the categories used by the UCCA foundational layer.",
"UCCA distinguishes primary edges, corresponding to explicit relations, from remote edges (appear dashed in Figure 1 ) that allow for a unit to participate in several super-ordinate relations.",
"Primary edges form a tree in each layer, whereas remote edges enable reentrancy, forming a DAG.",
"UCCA graphs may contain implicit units with no correspondent in the text.",
"Figure 2 shows the annotation for the sentence \"A similar technique is almost impossible to apply to other crops, such as cotton, soybeans and rice.\"",
"2 It includes a single Scene, whose main relation is \"apply\", a secondary relation \"almost impossible\", as well as two complex arguments: \"a similar technique\" and the coordinated argument \"such as cotton, soybeans, and rice.\"",
"In addition, the Scene includes an implicit argument, which represents the agent of the \"apply\" relation.",
"While parsing technology is well-established for syntactic parsing, UCCA has several formal properties that distinguish it from syntactic representations, mostly UCCA's tendency to abstract away from syntactic detail that do not affect argument structure.",
"For instance, consider the following examples where the concept of a Scene has a different rationale from the syntactic concept of a clause.",
"First, non-verbal predicates in UCCA are represented like verbal ones, such as when they appear in copula clauses or noun phrases.",
"Indeed, in Figure 1 , \"graduation\" and \"moved\" are considered separate Scenes, despite appearing in the same clause.",
"Second, in the same example, \"John\" is marked as a (remote) Participant in the graduation Scene, despite not being explicitly mentioned.",
"Third, consider the possessive construction in \"John's trip home\".",
"While in UCCA \"trip\" evokes a Scene in which \"John\" is a Participant, a syntactic scheme would analyze this phrase similarly to \"John's shoes\".",
"The differences in the challenges posed by syntactic parsing and UCCA parsing, and more generally by semantic parsing, motivate the development of targeted parsing technology to tackle it.",
"Data & Resources All UCCA corpora are freely available.",
"3 For English, we use v1.2.3 of the Wikipedia UCCA corpus (Wiki), v1.2.2 of the UCCA Twenty Thousand Leagues Under the Sea English-French parallel corpus (20K), which includes UCCA manual annotation for the first five chapters in French and English, and v1.0.1 of the UCCA German Twenty 3 https://github.com/ UniversalConceptualCognitiveAnnotation Thousand Leagues Under the Sea corpus, which includes the entire book in German.",
"For consistent annotation, we replace any Time and Quantifier labels with Adverbial and Elaborator in these data sets.",
"The resulting training, development 4 and test sets 5 are publicly available, and the splits are given in Table 2 .",
"Statistics on various structural properties are given in Table 3 .",
"The corpora were manually annotated according to v1.2 of the UCCA guidelines, 6 and reviewed by a second annotator.",
"All data was passed through automatic validation and normalization scripts.",
"7 The goal of validation is to rule out cases that are inconsistent with the UCCA annotation guidelines.",
"For example, a Scene, defined by the presence of a Process or a State, should include at least one Participant.",
"Due to the small amount of annotated data available for French, we only provided a minimal training set of 15 sentences, in addition to the development and test set.",
"Systems for French were expected to pursue semi-supervised approaches, such as cross-lingual learning or structure projection, leveraging the parallel nature of the corpus, or to rely on datasets for related formalisms, such as Universal Dependencies (Nivre et al., 2016) .",
"The full unannotated 20K Leagues corpus in English and French was released as well, in order to facilitate pursuing cross-lingual approaches.",
"Datasets were released in an XML format, including tokenized text automatically pre- processed using spaCy (see §5), and gold-standard UCCA annotation for the train and development sets.",
"8 To facilitate the use of existing NLP tools, we also released the data in SDP, AMR, CoNLL-U and plain text formats.",
"TUPA: The Baseline Parser We use the TUPA parser, the only parser for UCCA at the time the task was announced, as a baseline (Hershcovich et al., 2017 (Hershcovich et al., , 2018 .",
"TUPA is a transition-based DAG parser based on a BiLSTM-based classifier.",
"9 TUPA in itself has been found superior to a number of conversionbased parsers that use existing parsers for other formalisms to parse UCCA by constructing a twoway conversion protocol between the formalisms.",
"It can thus be regarded as a strong baseline for sys-8 https://github.com/ UniversalConceptualCognitiveAnnotation/ docs/blob/master/FORMAT.md 9 https://github.com/huji-nlp/tupa tem submissions to the shared task.",
"Evaluation Tracks.",
"Participants in the task were evaluated in four settings: In order to allow both even ground comparison between systems and using hitherto untried resources, we held both an open and a closed track for submissions in the English and German settings.",
"Closed track submissions were only allowed to use the gold-standard UCCA annotation distributed for the task in the target language, and were limited in their use of additional resources.",
"Concretely, the only additional data they were allowed to use is that used by TUPA, which consists of automatic annotations provided by spaCy: 10 POS tags, syntactic dependency relations, and named entity types and spans.",
"In addition, the closed track only allowed the use of word embeddings provided by fastText (Bojanowski et al., 2017 ) 11 for all languages.",
"Systems in the open track, on the other hand, were allowed to use any additional resource, such as UCCA annotation in other languages, dictionaries or datasets for other tasks, provided that they make sure not to use any additional gold standard annotation over the same text used in the UCCA corpora.",
"12 In both tracks, we required that submitted systems are not trained on the development data.",
"We only held an open track for French, due to the paucity of training data.",
"The four settings and two tracks result in a total of 7 competitions.",
"Scoring.",
"The following scores an output graph G 1 = (V 1 , E 1 ) against a gold one, G 2 = (V 2 , E 2 ), over the same sequence of terminals (tokens) W .",
"For a node v in V 1 or V 2 , define yield(v) ⊆ W as is its set of terminal descendants.",
"A pair of edges (v 1 , u 1 ) ∈ E 1 and (v 2 , u 2 ) ∈ E 2 with labels (categories) 1 , 2 is matching if yield(u 1 ) = yield(u 2 ) and 1 = 2 .",
"Labeled precision and recall are defined by dividing the number of matching edges in G 1 and G 2 by |E 1 | and |E 2 |, respectively.",
"F 1 is their harmonic mean: · Precision · Recall Precision + Recall Unlabeled precision, recall and F 1 are the same, but without requiring that 1 = 2 for the edges to match.",
"We evaluate these measures for primary and remote edges separately.",
"For a more finegrained evaluation, we additionally report precision, recall and F 1 on edges of each category.",
"13 Participating Systems We received a total of eight submissions to the different tracks: MaskParse@Deskiñ 12 We are not aware of any such annotation, but include this restriction for completeness.",
"13 The official evaluation script providing both coarse-grained and fine-grained scores can be found in https://github.com/huji-nlp/ucca/blob/ master/scripts/evaluate_standard.py.",
"14 It was later discovered that CUNY-PekingU used some of the evaluation data for training in the open tracks, and they were thus disqualified from these tracks.",
"In terms of parsing approaches, the task was quite varied.",
"HLT@SUDA converted UCCA graphs to constituency trees and trained a constituency parser and a recovery mechanism of remote edges in a multi-task framework.",
"MaskParse@Deskiñ used a bidirectional GRU tagger with a masking mechanism.",
"Tüpa and XLangMo used a transition-based approach.",
"UC Davis used an encoder-decoder architecture.",
"GCN-SEM uses a BiLSTM model to predict Semantic Dependency Parsing tags, when the syntactic dependency tree is given in the input.",
"CUNY-PKU is based on an ensemble that includes different variations of the TUPA parser.",
"DAN-GNT@UIT.VNU-HCM converted syntactic dependency trees to UCCA graphs.",
"Different systems handled remote edges differently.",
"DANGNT@UIT.VNU-HCM and GCN-SEM ignored remote edges.",
"UC Davis used a different BiLSTM for remote edges.",
"HLT@SUDA marked remote edges when converting the graph to a constituency tree and trained a classification model for their recovery.",
"MaskParse@Deskiñ handles remote edges by detecting arguments that are outside of the parent's node span using a detection threshold on the output probabilities.",
"In terms of using the data, all teams but one used the UCCA XML format, two used the CoNLL-U format, which is derived by a lossy conversion process, and only one team found the other data formats helpful.",
"One of the teams (MaskParse@Deskiñ) built a new training data adapted to their model by repeating each sentence N times, N being the number of non-terminal nodes in the UCCA graphs.",
"Three of the teams adapted the baseline TUPA parser, or parts of it to form their parser, namely TüPa, CUNY-PekingU and XLangMo; HLT@SUDA used a constituency parser (Stern et al., 2017) as a component in their model; DANGNT@UIT.VNU-HCM is a rule-based system over the Stanford Parser, and the rest are newly constructed parsers.",
"All teams found it useful to use external resources beyond those provided by the Shared Task.",
"Four submissions used external embeddings, MUSE (Conneau et al., 2017) in the case of MaskParse@Deskiñ and XLangMo, ELMo (Peters et al., 2018) in the case of TüPa, 15 and BERT (Devlin et al., 2019) in the case of HLT@SUDA.",
"Other resources included additional unlabeled data (TüPa), a list of multi-word expressions (MaskParse@Deskiñ), and the Stanford parser in the case of DANGNT@UIT.VNU-HCM.",
"Only CUNY-PKU used the 20K unlabeled parallel data in English and French.",
"A common trend for many of the systems was the use of cross-lingual projection or transfer (MaskParse@Deskiñ, HLT@SUDA, TüPa, GCN-Sem, CUNY-PKU and XLangMo).",
"This was necessary for French, and was found helpful for German as well (CUNY-PKU).",
"Table 4 shows the labeled and unlabeled F1 for primary and remote edges, for each system in each track.",
"Overall F1 (All) is the F1 calculated over both primary and remote edges.",
"Full results are available online.",
"16 Figure 3 shows the fine-grained evaluation by labeled F1 per UCCA category, for each system in each track.",
"While Ground edges were uniformly 16 http://bit.ly/semeval2019task1results difficult to parse due to their sparsity in the training data, Relators were the easiest for all systems, as they are both common and predictable.",
"The Process/State distinction proved challenging, and most main relations were identified as the more common Process category.",
"The winning system in most tracks (HLT@SUDA) performed better on almost all categories.",
"Its largest advantage was on Parallel Scenes and Linkers, showing was especially successful at identifying Scene boundaries relative to the other systems, which requires a good understanding of syntax.",
"Results Discussion The HLT@SUDA system participated in all the tracks, obtaining the first place in the six English and German tracks and the second place in the French open track.",
"The system is based on the conversion of UCCA graphs into constituency trees, marking remote and discontinuous edges for recovery.",
"The classification-based recovery of the remote edges is performed simultaneously with the constituency parsing in a multi-task learning framework.",
"This work, which further connects between semantic and syntactic parsing, proposes a recovery mechanism that can be applied to other grammatical formalisms, enabling the conversion of a given formalism to another one for parsing.",
"The idea of this system is inspired by the pseudo non-projective dependency parsing approach proposed by Nivre and Nilsson (2005) .",
"The MaskParse@Deskiñ system only participated to the French open track, focusing on crosslingual parsing.",
"The system uses a semantic tagger, implemented with a bidirectional GRU and a masking mechanism to recursively extract the inner semantic structures in the graph.",
"Multilingual word embeddings are also used.",
"Using the English and German training data as well as the small French trial data for training, the parser ranked fourth in the French open track with a labeled F1 score of 65.4%, suggesting that this new model could be useful for low-resource languages.",
"The Tüpa system takes a transition-based approach, building on the TUPA transition system and oracle, but modifies its feature representations.",
"Specifically, instead of representing the parser configuration using LSTMs over the partially parsed graph, stack and buffer, they use feedforward networks with ELMo contextualized embeddings.",
"The stack and buffer are represented by the top three items on them.",
"For the partially parsed graph, they extract the rightmost and leftmost parents and children of the respective items, and represent them by the ELMo embedding of their form, the embedding of their dependency heads (for terminals, for non-terminals this is replaced with a learned embedding) and the embeddings of all terminal children.",
"Results are generally on-par with the TUPA baseline, and surpass it from the out-of-domain English setting.",
"This suggests that the TUPA architecture may be simplified, without compromising performance.",
"The UC Davis system participated only in the English closed track, where they achieved the second highest score, on par with TUPA.",
"The proposed parser has an encoder-decoder architecture, where the encoder is a simple BiLSTM encoder for each span of words.",
"The decoder iteratively and greedily traverses the sentence, and attempts to predict span boundaries.",
"The basic algorithm yields an unlabeled contiguous phrase-based tree, but additional modules predict the labels of the spans, discontiguous units (by joining together spans from the contiguous tree under a new node), and remote edges.",
"The work is inspired by Kitaev and Klein (2018) , who used similar methods for constituency parsing.",
"The GCN-SEM system uses a BiLSTM encoder, and predicts bi-lexical semantic dependencies (in the SDP format) using word, token and syntactic dependency parses.",
"The latter is incorporated into the network with a graph convolutional network (GCN).",
"The team participated in the English and German closed tracks, and were not among the highest-ranking teams.",
"However, scores on the UCCA test sets converted to the bi-lexical CoNLL-U format were rather high, implying that the lossy conversion could be much of the reason.",
"The CUNY-PKU system was based on an ensemble.",
"The ensemble included variations of TUPA parser, namely the MLP and BiLSTM models (Hershcovich et al., 2017) and the BiLSTM model with an additional MLP.",
"The system also proposes a way to aggregate the ensemble going through CKY parsing and accounting for remotes and discontinuous spans.",
"The team participated in all tracks, including additional information in the open domain, notably synthetic data based on automatically translating annotated texts.",
"Their system ranks first in the French open track.",
"The DANGNT@UIT.VNU-HCM system partic-ipated only in the English Wiki open and closed tracks.",
"The system is based on graph transformations from dependency trees into UCCA, using heuristics to create non-terminal nodes and map the dependency relations to UCCA categories.",
"The manual rules were developed based on the training and development data.",
"As the system converts trees to trees and does not add reentrancies, it does not produce remote edges.",
"While the results are not among the highest-ranking in the task, the primary labeled F1 score of 71.1% in the English open track shows that a rule-based system on top of a leading dependency parser (the Stanford parser) can obtain reasonable results for this task.",
"Conclusion The task has yielded substantial improvements to UCCA parsing in all settings.",
"Given that the best reported results were achieved with different parsing and learning approaches than the baseline model TUPA (which has been the only available parser for UCCA), the task opens a variety of paths for future improvement.",
"Cross-lingual transfer, which capitalizes on UCCA's tendency to be preserved in translation, was employed by a number of systems and has proven remarkably effective.",
"Indeed, the high scores obtained for French parsing in a low-resource setting suggest that high quality UCCA parsing can be straightforwardly extended to additional languages, with only a minimal amount of manual labor.",
"Moreover, given the conceptual similarity between the different semantic representations , it is likely the parsers developed for the shared task will directly contribute to the development of other semantic parsing technology.",
"Such a contribution is facilitated by the available conversion scripts available between UCCA and other formats."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"5",
"2",
"6",
"8",
"9"
],
"paper_header_content": [
"Overview",
"Task Definition",
"Data & Resources",
"TUPA: The Baseline Parser",
"Evaluation",
"·",
"Participating Systems",
"Discussion",
"Conclusion"
]
} | GEM-SciDuet-train-25#paper-1026#slide-5 | Tracks | French low-resource (only 15 training sentences) | French low-resource (only 15 training sentences) | [] |